backport psi feature from upstream 5.4
bugzilla: https://gitee.com/openeuler/kernel/issues/I47QS2
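As a quick illustration of the interface this series backports, here is a
minimal userspace sketch (not part of the series) that reads the memory
pressure averages exposed under /proc/pressure; it assumes PSI has been
enabled, e.g. via the command-line switch referenced in the commits below.

/* Minimal sketch: dump the "some"/"full" memory stall averages. */
#include <stdio.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/pressure/memory", "r");

	if (!f) {
		perror("/proc/pressure/memory");
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);	/* avg10/avg60/avg300 and total */
	fclose(f);
	return 0;
}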
Baruch Siach (1):
psi: fix reference to kernel commandline enable
Dan Schatzberg (1):
kernel/sched/psi.c: expose pressure metrics on root cgroup
Johannes Weiner (11):
sched: loadavg: consolidate LOAD_INT, LOAD_FRAC, CALC_LOAD
sched: loadavg: make calc_load_n() public
sched: sched.h: make rq locking and clock functions available in
stats.h
sched: introduce this_rq_lock_irq()
psi: pressure stall information for CPU, memory, and IO
psi: cgroup support
psi: make disabling/enabling easier for vendor kernels
psi: fix aggregation idle shut-off
psi: avoid divide-by-zero crash inside virtual machines
fs: kernfs: add poll file operation
sched/psi: Fix sampling error and rare div0 crashes with cgroups and
high uptime
Josef Bacik (1):
blk-iolatency: use a percentile approache for ssd's
Liu Xinpeng (2):
psi:enable psi in config
psi:avoid kabi change
Olof Johansson (1):
kernel/sched/psi.c: simplify cgroup_move_task()
Suren Baghdasaryan (6):
psi: introduce state_mask to represent stalled psi states
psi: make psi_enable static
psi: rename psi fields in preparation for psi trigger addition
psi: split update_stats into parts
psi: track changed states
include/: refactor headers to allow kthread.h inclusion in psi_types.h
Documentation/accounting/psi.txt | 73 +++
Documentation/admin-guide/cgroup-v2.rst | 18 +
Documentation/admin-guide/kernel-parameters.txt | 4 +
arch/arm64/configs/openeuler_defconfig | 2 +
arch/powerpc/platforms/cell/cpufreq_spudemand.c | 2 +-
arch/powerpc/platforms/cell/spufs/sched.c | 9 +-
arch/s390/appldata/appldata_os.c | 4 -
arch/x86/configs/openeuler_defconfig | 2 +
block/blk-iolatency.c | 183 +++++-
drivers/cpuidle/governors/menu.c | 4 -
drivers/spi/spi-rockchip.c | 1 +
fs/kernfs/file.c | 31 +-
fs/proc/loadavg.c | 3 -
include/linux/cgroup-defs.h | 12 +
include/linux/cgroup.h | 17 +
include/linux/kernfs.h | 8 +
include/linux/kthread.h | 4 +
include/linux/psi.h | 55 ++
include/linux/psi_types.h | 95 +++
include/linux/sched.h | 13 +
include/linux/sched/loadavg.h | 24 +-
init/Kconfig | 28 +
kernel/cgroup/cgroup.c | 55 +-
kernel/debug/kdb/kdb_main.c | 7 +-
kernel/fork.c | 4 +
kernel/kthread.c | 3 +
kernel/sched/Makefile | 1 +
kernel/sched/core.c | 16 +-
kernel/sched/loadavg.c | 139 ++--
kernel/sched/psi.c | 823 ++++++++++++++++++++++++
kernel/sched/sched.h | 178 ++---
kernel/sched/stats.h | 86 +++
kernel/workqueue.c | 23 +
kernel/workqueue_internal.h | 6 +-
mm/compaction.c | 5 +
mm/filemap.c | 11 +
mm/page_alloc.c | 9 +
mm/vmscan.c | 9 +
38 files changed, 1726 insertions(+), 241 deletions(-)
create mode 100644 Documentation/accounting/psi.txt
create mode 100644 include/linux/psi.h
create mode 100644 include/linux/psi_types.h
create mode 100644 kernel/sched/psi.c
--
1.8.3.1
From: zhangguijiang <zhangguijiang(a)huawei.com>
ascend inclusion
category: feature
feature: Ascend emmc adaption
bugzilla: https://gitee.com/openeuler/kernel/issues/I4F4LL
CVE: NA
--------------------
To identify the Ascend HiSilicon eMMC chip, we add a customized property
to the dts. This patch adds an interface to read that property. At the
same time we provide a switch, CONFIG_ASCEND_HISI_MMC, which lets you
compile out our modifications entirely.
Signed-off-by: zhangguijiang <zhangguijiang(a)huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
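For reviewers, a minimal sketch of how a platform host driver could consume
the new mmc_is_ascend_customized() helper; the probe function and the quirk
applied inside the branch are hypothetical and only illustrate the intended
use, they are not part of this patch.

#include <linux/mmc/host.h>
#include <linux/platform_device.h>

static int example_hisi_mmc_probe(struct platform_device *pdev)
{
	struct mmc_host *mmc = mmc_alloc_host(0, &pdev->dev);

	if (!mmc)
		return -ENOMEM;

	/* Taken only when the dts node carries the "customized" property
	 * and the kernel is built with CONFIG_ASCEND_HISI_MMC=y. */
	if (mmc_is_ascend_customized(&pdev->dev))
		mmc->caps |= MMC_CAP_NONREMOVABLE;	/* hypothetical quirk */

	return mmc_add_host(mmc);
}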
drivers/mmc/Kconfig | 10 ++++
drivers/mmc/core/host.c | 43 ++++++++++++---
include/linux/mmc/host.h | 115 +++++++++++++++++++++++++++++++++++++++
include/linux/mmc/pm.h | 1 +
4 files changed, 162 insertions(+), 7 deletions(-)
diff --git a/drivers/mmc/Kconfig b/drivers/mmc/Kconfig
index ec21388311db2..8b29ecadd1862 100644
--- a/drivers/mmc/Kconfig
+++ b/drivers/mmc/Kconfig
@@ -12,6 +12,16 @@ menuconfig MMC
If you want MMC/SD/SDIO support, you should say Y here and
also to your specific host controller driver.
+config ASCEND_HISI_MMC
+ bool "Ascend HiSilicon MMC card support"
+ depends on MMC
+ default n
+ help
+ This selects for Hisilicon SoC specific extensions to the
+ Synopsys DesignWare Memory Card Interface driver.
+ You should select this option if you want mmc support on
+ Ascend platform.
+
if MMC
source "drivers/mmc/core/Kconfig"
diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
index dd1c14d8f6863..b29ee31e7865e 100644
--- a/drivers/mmc/core/host.c
+++ b/drivers/mmc/core/host.c
@@ -348,6 +348,11 @@ int mmc_of_parse(struct mmc_host *host)
EXPORT_SYMBOL(mmc_of_parse);
+static inline int mmc_is_ascend_hi_mci_1(struct device *dev)
+{
+ return !strncmp(dev_name(dev), "hi_mci.1", strlen("hi_mci.1"));
+}
+
/**
* mmc_alloc_host - initialise the per-host structure.
* @extra: sizeof private data structure
@@ -374,7 +379,10 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
}
host->index = err;
-
+ if (mmc_is_ascend_customized(dev)) {
+ if (mmc_is_ascend_hi_mci_1(dev))
+ host->index = 1;
+ }
dev_set_name(&host->class_dev, "mmc%d", host->index);
host->parent = dev;
@@ -383,10 +391,11 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
device_initialize(&host->class_dev);
device_enable_async_suspend(&host->class_dev);
- if (mmc_gpio_alloc(host)) {
- put_device(&host->class_dev);
- return NULL;
- }
+ if (!mmc_is_ascend_customized(host->parent))
+ if (mmc_gpio_alloc(host)) {
+ put_device(&host->class_dev);
+ return NULL;
+ }
spin_lock_init(&host->lock);
init_waitqueue_head(&host->wq);
@@ -439,7 +448,9 @@ int mmc_add_host(struct mmc_host *host)
#endif
mmc_start_host(host);
- mmc_register_pm_notifier(host);
+ if (!mmc_is_ascend_customized(host->parent) ||
+ !(host->pm_flags & MMC_PM_IGNORE_PM_NOTIFY))
+ mmc_register_pm_notifier(host);
return 0;
}
@@ -456,7 +467,9 @@ EXPORT_SYMBOL(mmc_add_host);
*/
void mmc_remove_host(struct mmc_host *host)
{
- mmc_unregister_pm_notifier(host);
+ if (!mmc_is_ascend_customized(host->parent) ||
+ !(host->pm_flags & MMC_PM_IGNORE_PM_NOTIFY))
+ mmc_unregister_pm_notifier(host);
mmc_stop_host(host);
#ifdef CONFIG_DEBUG_FS
@@ -483,3 +496,19 @@ void mmc_free_host(struct mmc_host *host)
}
EXPORT_SYMBOL(mmc_free_host);
+
+
+int mmc_is_ascend_customized(struct device *dev)
+{
+#ifdef CONFIG_ASCEND_HISI_MMC
+ static int is_ascend_customized = -1;
+
+ if (is_ascend_customized == -1)
+ is_ascend_customized = ((dev == NULL) ? 0 :
+ of_find_property(dev->of_node, "customized", NULL) != NULL);
+ return is_ascend_customized;
+#else
+ return 0;
+#endif
+}
+EXPORT_SYMBOL(mmc_is_ascend_customized);
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 7e8e5b20e82b0..2cd5a73ab12a2 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -19,6 +19,9 @@
#include <linux/mmc/pm.h>
#include <linux/dma-direction.h>
+#include <linux/jiffies.h>
+#include <linux/version.h>
+
struct mmc_ios {
unsigned int clock; /* clock rate */
unsigned short vdd;
@@ -63,6 +66,7 @@ struct mmc_ios {
#define MMC_TIMING_MMC_DDR52 8
#define MMC_TIMING_MMC_HS200 9
#define MMC_TIMING_MMC_HS400 10
+#define MMC_TIMING_NEW_SD MMC_TIMING_UHS_SDR12
unsigned char signal_voltage; /* signalling voltage (1.8V or 3.3V) */
@@ -78,7 +82,25 @@ struct mmc_ios {
#define MMC_SET_DRIVER_TYPE_D 3
bool enhanced_strobe; /* hs400es selection */
+#ifdef CONFIG_ASCEND_HISI_MMC
+ unsigned int clock_store; /*store the clock before power off*/
+#endif
+};
+
+#ifdef CONFIG_ASCEND_HISI_MMC
+struct mmc_cmdq_host_ops {
+ int (*enable)(struct mmc_host *mmc);
+ int (*disable)(struct mmc_host *mmc, bool soft);
+ int (*restore_irqs)(struct mmc_host *mmc);
+ int (*request)(struct mmc_host *mmc, struct mmc_request *mrq);
+ int (*halt)(struct mmc_host *mmc, bool halt);
+ void (*post_req)(struct mmc_host *mmc, struct mmc_request *mrq,
+ int err);
+ void (*disable_immediately)(struct mmc_host *mmc);
+ int (*clear_and_halt)(struct mmc_host *mmc);
};
+#endif
+
struct mmc_host;
@@ -168,6 +190,12 @@ struct mmc_host_ops {
*/
int (*multi_io_quirk)(struct mmc_card *card,
unsigned int direction, int blk_size);
+#ifdef CONFIG_ASCEND_HISI_MMC
+ /* Slow down clk for ascend chip SD cards */
+ void (*slowdown_clk)(struct mmc_host *host, int timing);
+ int (*enable_enhanced_strobe)(struct mmc_host *host);
+ int (*send_cmd_direct)(struct mmc_host *host, struct mmc_request *mrq);
+#endif
};
struct mmc_cqe_ops {
@@ -255,6 +283,30 @@ struct mmc_context_info {
wait_queue_head_t wait;
};
+#ifdef CONFIG_ASCEND_HISI_MMC
+/**
+ * mmc_cmdq_context_info - describes the contexts of cmdq
+ * @active_reqs requests being processed
+ * @active_dcmd dcmd in progress, don't issue any
+ * more dcmd requests
+ * @rpmb_in_wait do not pull any more reqs till rpmb is handled
+ * @cmdq_state state of cmdq engine
+ * @req_starved completion should invoke the request_fn since
+ * no tags were available
+ * @cmdq_ctx_lock acquire this before accessing this structure
+ */
+struct mmc_cmdq_context_info {
+ unsigned long active_reqs; /* in-flight requests */
+ bool active_dcmd;
+ bool rpmb_in_wait;
+ unsigned long curr_state;
+
+ /* no free tag available */
+ unsigned long req_starved;
+ spinlock_t cmdq_ctx_lock;
+};
+#endif
+
struct regulator;
struct mmc_pwrseq;
@@ -328,6 +380,9 @@ struct mmc_host {
#define MMC_CAP_UHS_SDR50 (1 << 18) /* Host supports UHS SDR50 mode */
#define MMC_CAP_UHS_SDR104 (1 << 19) /* Host supports UHS SDR104 mode */
#define MMC_CAP_UHS_DDR50 (1 << 20) /* Host supports UHS DDR50 mode */
+#ifdef CONFIG_ASCEND_HISI_MMC
+#define MMC_CAP_RUNTIME_RESUME (1 << 20) /* Resume at runtime_resume. */
+#endif
#define MMC_CAP_UHS (MMC_CAP_UHS_SDR12 | MMC_CAP_UHS_SDR25 | \
MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR104 | \
MMC_CAP_UHS_DDR50)
@@ -368,6 +423,34 @@ struct mmc_host {
#define MMC_CAP2_CQE (1 << 23) /* Has eMMC command queue engine */
#define MMC_CAP2_CQE_DCMD (1 << 24) /* CQE can issue a direct command */
#define MMC_CAP2_AVOID_3_3V (1 << 25) /* Host must negotiate down from 3.3V */
+#ifdef CONFIG_ASCEND_HISI_MMC
+#define MMC_CAP2_CACHE_CTRL (1 << 1) /* Allow cache control */
+#define MMC_CAP2_NO_MULTI_READ (1 << 3) /* Multiblock read don't work */
+#define MMC_CAP2_NO_SLEEP_CMD (1 << 4) /* Don't allow sleep command */
+#define MMC_CAP2_BROKEN_VOLTAGE (1 << 7) /* Use the broken voltage */
+#define MMC_CAP2_DETECT_ON_ERR (1 << 8) /* I/O err check card removal */
+#define MMC_CAP2_HC_ERASE_SZ (1 << 9) /* High-capacity erase size */
+#define MMC_CAP2_PACKED_RD (1 << 12) /* Allow packed read */
+#define MMC_CAP2_PACKED_WR (1 << 13) /* Allow packed write */
+#define MMC_CAP2_PACKED_CMD (MMC_CAP2_PACKED_RD | \
+ MMC_CAP2_PACKED_WR)
+#define MMC_CAP2_CMD_QUEUE (1 << 18) /* support eMMC command queue */
+#define MMC_CAP2_ENHANCED_STROBE (1 << 19)
+#define MMC_CAP2_CACHE_FLUSH_BARRIER (1 << 20)
+/* Allow background operations auto enable control */
+#define MMC_CAP2_BKOPS_AUTO_CTRL (1 << 21)
+/* Allow background operations manual enable control */
+#define MMC_CAP2_BKOPS_MANUAL_CTRL (1 << 22)
+
+/* host is connected by via modem through sdio */
+#define MMC_CAP2_SUPPORT_VIA_MODEM (1 << 26)
+/* host is connected by wifi through sdio */
+#define MMC_CAP2_SUPPORT_WIFI (1 << 27)
+/* host is connected to 1102 wifi */
+#define MMC_CAP2_SUPPORT_WIFI_CMD11 (1 << 28)
+/* host do not support low power for wifi*/
+#define MMC_CAP2_WIFI_NO_LOWPWR (1 << 29)
+#endif
int fixed_drv_type; /* fixed driver type for non-removable media */
@@ -461,6 +544,12 @@ struct mmc_host {
bool cqe_on;
unsigned long private[0] ____cacheline_aligned;
+#ifdef CONFIG_ASCEND_HISI_MMC
+ const struct mmc_cmdq_host_ops *cmdq_ops;
+ int sdio_present;
+ unsigned int cmdq_slots;
+ struct mmc_cmdq_context_info cmdq_ctx;
+#endif
};
struct device_node;
@@ -588,4 +677,30 @@ static inline enum dma_data_direction mmc_get_dma_dir(struct mmc_data *data)
int mmc_send_tuning(struct mmc_host *host, u32 opcode, int *cmd_error);
int mmc_abort_tuning(struct mmc_host *host, u32 opcode);
+#ifdef CONFIG_ASCEND_HISI_MMC
+int mmc_cache_ctrl(struct mmc_host *host, u8 enable);
+int mmc_card_awake(struct mmc_host *host);
+int mmc_card_sleep(struct mmc_host *host);
+int mmc_card_can_sleep(struct mmc_host *host);
+#else
+static inline int mmc_cache_ctrl(struct mmc_host *host, u8 enable)
+{
+ return 0;
+}
+static inline int mmc_card_awake(struct mmc_host *host)
+{
+ return 0;
+}
+static inline int mmc_card_sleep(struct mmc_host *host)
+{
+ return 0;
+}
+static inline int mmc_card_can_sleep(struct mmc_host *host)
+{
+ return 0;
+}
+#endif
+
+int mmc_is_ascend_customized(struct device *dev);
+
#endif /* LINUX_MMC_HOST_H */
diff --git a/include/linux/mmc/pm.h b/include/linux/mmc/pm.h
index 4a139204c20c0..6e2d6a135c7e0 100644
--- a/include/linux/mmc/pm.h
+++ b/include/linux/mmc/pm.h
@@ -26,5 +26,6 @@ typedef unsigned int mmc_pm_flag_t;
#define MMC_PM_KEEP_POWER (1 << 0) /* preserve card power during suspend */
#define MMC_PM_WAKE_SDIO_IRQ (1 << 1) /* wake up host system on SDIO IRQ assertion */
+#define MMC_PM_IGNORE_PM_NOTIFY (1 << 2) /* ignore mmc pm notify */
#endif /* LINUX_MMC_PM_H */
--
2.25.1
From: zhangguijiang <zhangguijiang(a)huawei.com>
ascend inclusion
category: feature
feature: Ascend emmc adaption
bugzilla: https://gitee.com/openeuler/kernel/issues/I4F4LL
CVE: NA
--------------------
To identify the Ascend HiSilicon eMMC chip, we add a customized property
to the dts. This patch adds an interface to read that property. At the
same time we provide a switch, CONFIG_ASCEND_HISI_MMC, which lets you
compile out our modifications entirely.
Signed-off-by: zhangguijiang <zhangguijiang(a)huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
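A short sketch of the PM-notifier side of this change: an Ascend-customized
host that sets MMC_PM_IGNORE_PM_NOTIFY in pm_flags is skipped by
mmc_register_pm_notifier()/mmc_unregister_pm_notifier() in the hunks below.
The helper shown here is hypothetical and only marks where a platform driver
would set the flag.

#include <linux/mmc/host.h>
#include <linux/mmc/pm.h>

static void example_hisi_mmc_setup_pm(struct mmc_host *host)
{
	/* The platform driver handles suspend/resume itself, so ask the
	 * MMC core not to register its PM notifier for this host. */
	host->pm_flags |= MMC_PM_IGNORE_PM_NOTIFY;
}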
drivers/mmc/Kconfig | 10 ++++
drivers/mmc/core/host.c | 47 +++++++++++++---
include/linux/mmc/host.h | 115 +++++++++++++++++++++++++++++++++++++++
include/linux/mmc/pm.h | 1 +
4 files changed, 164 insertions(+), 9 deletions(-)
diff --git a/drivers/mmc/Kconfig b/drivers/mmc/Kconfig
index ec21388311db2..8b29ecadd1862 100644
--- a/drivers/mmc/Kconfig
+++ b/drivers/mmc/Kconfig
@@ -12,6 +12,16 @@ menuconfig MMC
If you want MMC/SD/SDIO support, you should say Y here and
also to your specific host controller driver.
+config ASCEND_HISI_MMC
+ bool "Ascend HiSilicon MMC card support"
+ depends on MMC
+ default n
+ help
+ This selects for Hisilicon SoC specific extensions to the
+ Synopsys DesignWare Memory Card Interface driver.
+ You should select this option if you want mmc support on
+ Ascend platform.
+
if MMC
source "drivers/mmc/core/Kconfig"
diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
index f57f5de542064..69cc778706855 100644
--- a/drivers/mmc/core/host.c
+++ b/drivers/mmc/core/host.c
@@ -348,6 +348,11 @@ int mmc_of_parse(struct mmc_host *host)
EXPORT_SYMBOL(mmc_of_parse);
+static inline int mmc_is_ascend_hi_mci_1(struct device *dev)
+{
+ return !strncmp(dev_name(dev), "hi_mci.1", strlen("hi_mci.1"));
+}
+
/**
* mmc_alloc_host - initialise the per-host structure.
* @extra: sizeof private data structure
@@ -374,7 +379,10 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
}
host->index = err;
-
+ if (mmc_is_ascend_customized(dev)) {
+ if (mmc_is_ascend_hi_mci_1(dev))
+ host->index = 1;
+ }
dev_set_name(&host->class_dev, "mmc%d", host->index);
host->parent = dev;
@@ -383,12 +391,13 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
device_initialize(&host->class_dev);
device_enable_async_suspend(&host->class_dev);
- if (mmc_gpio_alloc(host)) {
- put_device(&host->class_dev);
- ida_simple_remove(&mmc_host_ida, host->index);
- kfree(host);
- return NULL;
- }
+ if (!mmc_is_ascend_customized(host->parent))
+ if (mmc_gpio_alloc(host)) {
+ put_device(&host->class_dev);
+ ida_simple_remove(&mmc_host_ida, host->index);
+ kfree(host);
+ return NULL;
+ }
spin_lock_init(&host->lock);
init_waitqueue_head(&host->wq);
@@ -441,7 +450,9 @@ int mmc_add_host(struct mmc_host *host)
#endif
mmc_start_host(host);
- mmc_register_pm_notifier(host);
+ if (!mmc_is_ascend_customized(host->parent) ||
+ !(host->pm_flags & MMC_PM_IGNORE_PM_NOTIFY))
+ mmc_register_pm_notifier(host);
return 0;
}
@@ -458,7 +469,9 @@ EXPORT_SYMBOL(mmc_add_host);
*/
void mmc_remove_host(struct mmc_host *host)
{
- mmc_unregister_pm_notifier(host);
+ if (!mmc_is_ascend_customized(host->parent) ||
+ !(host->pm_flags & MMC_PM_IGNORE_PM_NOTIFY))
+ mmc_unregister_pm_notifier(host);
mmc_stop_host(host);
#ifdef CONFIG_DEBUG_FS
@@ -485,3 +498,19 @@ void mmc_free_host(struct mmc_host *host)
}
EXPORT_SYMBOL(mmc_free_host);
+
+
+int mmc_is_ascend_customized(struct device *dev)
+{
+#ifdef CONFIG_ASCEND_HISI_MMC
+ static int is_ascend_customized = -1;
+
+ if (is_ascend_customized == -1)
+ is_ascend_customized = ((dev == NULL) ? 0 :
+ of_find_property(dev->of_node, "customized", NULL) != NULL);
+ return is_ascend_customized;
+#else
+ return 0;
+#endif
+}
+EXPORT_SYMBOL(mmc_is_ascend_customized);
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 840462ed1ec7e..78b4d0a813b71 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -19,6 +19,9 @@
#include <linux/mmc/pm.h>
#include <linux/dma-direction.h>
+#include <linux/jiffies.h>
+#include <linux/version.h>
+
struct mmc_ios {
unsigned int clock; /* clock rate */
unsigned short vdd;
@@ -63,6 +66,7 @@ struct mmc_ios {
#define MMC_TIMING_MMC_DDR52 8
#define MMC_TIMING_MMC_HS200 9
#define MMC_TIMING_MMC_HS400 10
+#define MMC_TIMING_NEW_SD MMC_TIMING_UHS_SDR12
unsigned char signal_voltage; /* signalling voltage (1.8V or 3.3V) */
@@ -78,7 +82,25 @@ struct mmc_ios {
#define MMC_SET_DRIVER_TYPE_D 3
bool enhanced_strobe; /* hs400es selection */
+#ifdef CONFIG_ASCEND_HISI_MMC
+ unsigned int clock_store; /*store the clock before power off*/
+#endif
+};
+
+#ifdef CONFIG_ASCEND_HISI_MMC
+struct mmc_cmdq_host_ops {
+ int (*enable)(struct mmc_host *mmc);
+ int (*disable)(struct mmc_host *mmc, bool soft);
+ int (*restore_irqs)(struct mmc_host *mmc);
+ int (*request)(struct mmc_host *mmc, struct mmc_request *mrq);
+ int (*halt)(struct mmc_host *mmc, bool halt);
+ void (*post_req)(struct mmc_host *mmc, struct mmc_request *mrq,
+ int err);
+ void (*disable_immediately)(struct mmc_host *mmc);
+ int (*clear_and_halt)(struct mmc_host *mmc);
};
+#endif
+
struct mmc_host;
@@ -168,6 +190,12 @@ struct mmc_host_ops {
*/
int (*multi_io_quirk)(struct mmc_card *card,
unsigned int direction, int blk_size);
+#ifdef CONFIG_ASCEND_HISI_MMC
+ /* Slow down clk for ascend chip SD cards */
+ void (*slowdown_clk)(struct mmc_host *host, int timing);
+ int (*enable_enhanced_strobe)(struct mmc_host *host);
+ int (*send_cmd_direct)(struct mmc_host *host, struct mmc_request *mrq);
+#endif
};
struct mmc_cqe_ops {
@@ -255,6 +283,30 @@ struct mmc_context_info {
wait_queue_head_t wait;
};
+#ifdef CONFIG_ASCEND_HISI_MMC
+/**
+ * mmc_cmdq_context_info - describes the contexts of cmdq
+ * @active_reqs requests being processed
+ * @active_dcmd dcmd in progress, don't issue any
+ * more dcmd requests
+ * @rpmb_in_wait do not pull any more reqs till rpmb is handled
+ * @cmdq_state state of cmdq engine
+ * @req_starved completion should invoke the request_fn since
+ * no tags were available
+ * @cmdq_ctx_lock acquire this before accessing this structure
+ */
+struct mmc_cmdq_context_info {
+ unsigned long active_reqs; /* in-flight requests */
+ bool active_dcmd;
+ bool rpmb_in_wait;
+ unsigned long curr_state;
+
+ /* no free tag available */
+ unsigned long req_starved;
+ spinlock_t cmdq_ctx_lock;
+};
+#endif
+
struct regulator;
struct mmc_pwrseq;
@@ -328,6 +380,9 @@ struct mmc_host {
#define MMC_CAP_UHS_SDR50 (1 << 18) /* Host supports UHS SDR50 mode */
#define MMC_CAP_UHS_SDR104 (1 << 19) /* Host supports UHS SDR104 mode */
#define MMC_CAP_UHS_DDR50 (1 << 20) /* Host supports UHS DDR50 mode */
+#ifdef CONFIG_ASCEND_HISI_MMC
+#define MMC_CAP_RUNTIME_RESUME (1 << 20) /* Resume at runtime_resume. */
+#endif
#define MMC_CAP_UHS (MMC_CAP_UHS_SDR12 | MMC_CAP_UHS_SDR25 | \
MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR104 | \
MMC_CAP_UHS_DDR50)
@@ -367,6 +422,34 @@ struct mmc_host {
#define MMC_CAP2_CQE (1 << 23) /* Has eMMC command queue engine */
#define MMC_CAP2_CQE_DCMD (1 << 24) /* CQE can issue a direct command */
#define MMC_CAP2_AVOID_3_3V (1 << 25) /* Host must negotiate down from 3.3V */
+#ifdef CONFIG_ASCEND_HISI_MMC
+#define MMC_CAP2_CACHE_CTRL (1 << 1) /* Allow cache control */
+#define MMC_CAP2_NO_MULTI_READ (1 << 3) /* Multiblock read don't work */
+#define MMC_CAP2_NO_SLEEP_CMD (1 << 4) /* Don't allow sleep command */
+#define MMC_CAP2_BROKEN_VOLTAGE (1 << 7) /* Use the broken voltage */
+#define MMC_CAP2_DETECT_ON_ERR (1 << 8) /* I/O err check card removal */
+#define MMC_CAP2_HC_ERASE_SZ (1 << 9) /* High-capacity erase size */
+#define MMC_CAP2_PACKED_RD (1 << 12) /* Allow packed read */
+#define MMC_CAP2_PACKED_WR (1 << 13) /* Allow packed write */
+#define MMC_CAP2_PACKED_CMD (MMC_CAP2_PACKED_RD | \
+ MMC_CAP2_PACKED_WR)
+#define MMC_CAP2_CMD_QUEUE (1 << 18) /* support eMMC command queue */
+#define MMC_CAP2_ENHANCED_STROBE (1 << 19)
+#define MMC_CAP2_CACHE_FLUSH_BARRIER (1 << 20)
+/* Allow background operations auto enable control */
+#define MMC_CAP2_BKOPS_AUTO_CTRL (1 << 21)
+/* Allow background operations manual enable control */
+#define MMC_CAP2_BKOPS_MANUAL_CTRL (1 << 22)
+
+/* host is connected by via modem through sdio */
+#define MMC_CAP2_SUPPORT_VIA_MODEM (1 << 26)
+/* host is connected by wifi through sdio */
+#define MMC_CAP2_SUPPORT_WIFI (1 << 27)
+/* host is connected to 1102 wifi */
+#define MMC_CAP2_SUPPORT_WIFI_CMD11 (1 << 28)
+/* host do not support low power for wifi*/
+#define MMC_CAP2_WIFI_NO_LOWPWR (1 << 29)
+#endif
int fixed_drv_type; /* fixed driver type for non-removable media */
@@ -460,6 +543,12 @@ struct mmc_host {
bool cqe_on;
unsigned long private[0] ____cacheline_aligned;
+#ifdef CONFIG_ASCEND_HISI_MMC
+ const struct mmc_cmdq_host_ops *cmdq_ops;
+ int sdio_present;
+ unsigned int cmdq_slots;
+ struct mmc_cmdq_context_info cmdq_ctx;
+#endif
};
struct device_node;
@@ -587,4 +676,30 @@ static inline enum dma_data_direction mmc_get_dma_dir(struct mmc_data *data)
int mmc_send_tuning(struct mmc_host *host, u32 opcode, int *cmd_error);
int mmc_abort_tuning(struct mmc_host *host, u32 opcode);
+#ifdef CONFIG_ASCEND_HISI_MMC
+int mmc_cache_ctrl(struct mmc_host *host, u8 enable);
+int mmc_card_awake(struct mmc_host *host);
+int mmc_card_sleep(struct mmc_host *host);
+int mmc_card_can_sleep(struct mmc_host *host);
+#else
+static inline int mmc_cache_ctrl(struct mmc_host *host, u8 enable)
+{
+ return 0;
+}
+static inline int mmc_card_awake(struct mmc_host *host)
+{
+ return 0;
+}
+static inline int mmc_card_sleep(struct mmc_host *host)
+{
+ return 0;
+}
+static inline int mmc_card_can_sleep(struct mmc_host *host)
+{
+ return 0;
+}
+#endif
+
+int mmc_is_ascend_customized(struct device *dev);
+
#endif /* LINUX_MMC_HOST_H */
diff --git a/include/linux/mmc/pm.h b/include/linux/mmc/pm.h
index 4a139204c20c0..6e2d6a135c7e0 100644
--- a/include/linux/mmc/pm.h
+++ b/include/linux/mmc/pm.h
@@ -26,5 +26,6 @@ typedef unsigned int mmc_pm_flag_t;
#define MMC_PM_KEEP_POWER (1 << 0) /* preserve card power during suspend */
#define MMC_PM_WAKE_SDIO_IRQ (1 << 1) /* wake up host system on SDIO IRQ assertion */
+#define MMC_PM_IGNORE_PM_NOTIFY (1 << 2) /* ignore mmc pm notify */
#endif /* LINUX_MMC_PM_H */
--
2.25.1
First of all, thank you very much for participating in the openEuler community and contributing patches to the openEuler kernel project.
The openEuler-21.03 innovation branch of the openEuler kernel project welcomes ideas and suggestions from enterprises, universities, and
everyone interested in the Linux kernel. We hope to explore together the prospects and potential of low-level software across cloud and
computing, 5G, devices, and other scenarios, and to push low-level software toward new horizons in IoT and intelligent computing. The
openEuler-21.03 innovation branch also aims to provide more teaching material for universities, as a small contribution to basic research
and to the integration of industry and education.
- If you have questions about how to participate in the openEuler kernel project, you can send mail to bobo.shaobowang(a)huawei.com,
or refer to the documentation: https://mp.weixin.qq.com/s/a42a5VfayFeJgWitqbI8Qw
- You can also file an issue on the openEuler kernel project page: https://gitee.com/openeuler/kernel
The following patches have been reviewed by the community maintainers and verified by openEuler community testing; they will be merged into the openEuler-21.03 branch in version 5.10.0-4.25.0.
0ee74f5aa533 (HEAD -> openEuler-21.03, tag: 5.10.0-4.25.0) RDS tcp loopback connection can hang
fc7ec5aebb45 usb: gadget: f_fs: Ensure io_completion_wq is idle during unbind
dae6a368dafc ALSA: seq: Fix race of snd_seq_timer_open()
3a00695cb8e2 RDMA/mlx4: Do not map the core_clock page to user space unless enabled
ed4fd7c42adc Revert "ACPI: sleep: Put the FACS table after using it"
37c85837cf42 ASoC: Intel: bytcr_rt5640: Add quirk for the Glavey TM800A550L tablet
94fccf25dd49 nvme-tcp: remove incorrect Kconfig dep in BLK_DEV_NVME
47f090fbcbb9 regulator: fan53880: Fix missing n_voltages setting
0b9b74807478 net/nfc/rawsock.c: fix a permission check bug
789459f344e7 scsi: core: Only put parent device if host state differs from SHOST_CREATED
08f8e0fb4b59 usb: typec: ucsi: Clear PPM capability data in ucsi_init() error path
0e2bd1220f8a phy: cadence: Sierra: Fix error return code in cdns_sierra_phy_probe()
47f3671cfd67 usb: pd: Set PD_T_SINK_WAIT_CAP to 310ms
559b80a5925d scsi: core: Fix failure handling of scsi_add_host_with_dma()
e897c103ecde ALSA: hda/realtek: headphone and mic don't work on an Acer laptop
d39b22f602f5 isdn: mISDN: netjet: Fix crash in nj_probe:
1013d6a98975 nvmet: fix false keep-alive timeout when a controller is torn down
9ffec7fff577 cgroup: disable controllers at parse time
e2c4bbd88218 RDMA/ipoib: Fix warning caused by destroying non-initial netns
0eb3e33d9814 gpio: wcd934x: Fix shift-out-of-bounds error
af64e02cb927 NFSv4: Fix deadlock between nfs4_evict_inode() and nfs4_opendata_get_inode()
34a0d49e311d usb: dwc3: ep0: fix NULL pointer exception
ca5ed7b6d2ac spi: bcm2835: Fix out-of-bounds access with more than 4 slaves
420a6301307e NFSv4: nfs4_proc_set_acl needs to restore NFS_CAP_UIDGID_NOMAP on error.
762e6acf28f1 regulator: core: resolve supply for boot-on/always-on regulators
d84eb5070d03 net: macb: ensure the device is available before accessing GEMGXL control registers
b9a3b65556e9 sched/fair: Make sure to update tg contrib for blocked load
074babe38e68 KVM: x86: Ensure liveliness of nested VM-Enter fail tracepoint message
447f10de04c8 ALSA: firewire-lib: fix the context to call snd_pcm_stop_xrun()
262986c9f618 dm verity: fix require_signatures module_param permissions
bfa96859a312 usb: chipidea: udc: assign interrupt number to USB gadget structure
177f5f81e9fc regulator: max77620: Use device_set_of_node_from_dev()
818f49a8aa09 USB: serial: omninet: add device id for Zyxel Omni 56K Plus
1790bfdad278 spi: Cleanup on failure of initial setup
642b2258a1f7 drm/msm/a6xx: avoid shadow NULL reference in failure path
be1c43cba161 USB: f_ncm: ncm_bitrate (speed) is unsigned
e41037151205 nvme-fabrics: decode host pathing error for connect
We look forward to further cooperation.
Alexandre GRIVEAUX (1):
USB: serial: omninet: add device id for Zyxel Omni 56K Plus
Axel Lin (1):
regulator: fan53880: Fix missing n_voltages setting
Dai Ngo (1):
NFSv4: nfs4_proc_set_acl needs to restore NFS_CAP_UIDGID_NOMAP on
error.
Dmitry Baryshkov (1):
regulator: core: resolve supply for boot-on/always-on regulators
Dmitry Osipenko (1):
regulator: max77620: Use device_set_of_node_from_dev()
Hannes Reinecke (1):
nvme-fabrics: decode host pathing error for connect
Hans de Goede (1):
ASoC: Intel: bytcr_rt5640: Add quirk for the Glavey TM800A550L tablet
Hui Wang (1):
ALSA: hda/realtek: headphone and mic don't work on an Acer laptop
Jeimon (1):
net/nfc/rawsock.c: fix a permission check bug
John Keeping (1):
dm verity: fix require_signatures module_param permissions
Jonathan Marek (1):
drm/msm/a6xx: avoid shadow NULL reference in failure path
Kamal Heib (1):
RDMA/ipoib: Fix warning caused by destroying non-initial netns
Kyle Tso (1):
usb: pd: Set PD_T_SINK_WAIT_CAP to 310ms
Li Jun (1):
usb: chipidea: udc: assign interrupt number to USB gadget structure
Lukas Wunner (2):
spi: Cleanup on failure of initial setup
spi: bcm2835: Fix out-of-bounds access with more than 4 slaves
Maciej Żenczykowski (1):
USB: f_ncm: ncm_bitrate (speed) is unsigned
Marian-Cristian Rotariu (1):
usb: dwc3: ep0: fix NULL pointer exception
Mayank Rana (1):
usb: typec: ucsi: Clear PPM capability data in ucsi_init() error path
Ming Lei (2):
scsi: core: Fix failure handling of scsi_add_host_with_dma()
scsi: core: Only put parent device if host state differs from
SHOST_CREATED
Rao Shoaib (1):
RDS tcp loopback connection can hang
Sagi Grimberg (2):
nvmet: fix false keep-alive timeout when a controller is torn down
nvme-tcp: remove incorrect Kconfig dep in BLK_DEV_NVME
Sean Christopherson (1):
KVM: x86: Ensure liveliness of nested VM-Enter fail tracepoint message
Shakeel Butt (1):
cgroup: disable controllers at parse time
Shay Drory (1):
RDMA/mlx4: Do not map the core_clock page to user space unless enabled
Srinivas Kandagatla (1):
gpio: wcd934x: Fix shift-out-of-bounds error
Takashi Iwai (1):
ALSA: seq: Fix race of snd_seq_timer_open()
Takashi Sakamoto (1):
ALSA: firewire-lib: fix the context to call snd_pcm_stop_xrun()
Trond Myklebust (1):
NFSv4: Fix deadlock between nfs4_evict_inode() and
nfs4_opendata_get_inode()
Vincent Guittot (1):
sched/fair: Make sure to update tg contrib for blocked load
Wang Wensheng (1):
phy: cadence: Sierra: Fix error return code in cdns_sierra_phy_probe()
Wesley Cheng (1):
usb: gadget: f_fs: Ensure io_completion_wq is idle during unbind
Zhang Rui (1):
Revert "ACPI: sleep: Put the FACS table after using it"
Zheyu Ma (1):
isdn: mISDN: netjet: Fix crash in nj_probe:
Zong Li (1):
net: macb: ensure the device is available before accessing GEMGXL
control registers
arch/x86/kvm/trace.h | 6 ++--
drivers/acpi/sleep.c | 4 +--
drivers/gpio/gpio-wcd934x.c | 2 +-
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +-
drivers/infiniband/hw/mlx4/main.c | 5 +--
drivers/infiniband/ulp/ipoib/ipoib_netlink.c | 1 +
drivers/isdn/hardware/mISDN/netjet.c | 1 -
drivers/md/dm-verity-verify-sig.c | 2 +-
drivers/net/ethernet/cadence/macb_main.c | 3 ++
drivers/net/ethernet/mellanox/mlx4/fw.c | 3 ++
drivers/net/ethernet/mellanox/mlx4/fw.h | 1 +
drivers/net/ethernet/mellanox/mlx4/main.c | 6 ++++
drivers/nvme/host/Kconfig | 3 +-
drivers/nvme/host/fabrics.c | 5 +++
drivers/nvme/target/core.c | 15 ++++++---
drivers/nvme/target/nvmet.h | 2 +-
drivers/phy/cadence/phy-cadence-sierra.c | 1 +
drivers/regulator/core.c | 6 ++++
drivers/regulator/fan53880.c | 3 ++
drivers/regulator/max77620-regulator.c | 7 +++++
drivers/scsi/hosts.c | 16 +++++-----
drivers/spi/spi-bcm2835.c | 10 ++++--
drivers/spi/spi-bitbang.c | 18 ++++++++---
drivers/spi/spi-fsl-spi.c | 4 +++
drivers/spi/spi-omap-uwire.c | 9 +++++-
drivers/spi/spi-omap2-mcspi.c | 33 ++++++++++++--------
drivers/spi/spi-pxa2xx.c | 9 +++++-
drivers/usb/chipidea/udc.c | 1 +
drivers/usb/dwc3/ep0.c | 3 ++
drivers/usb/gadget/function/f_fs.c | 3 ++
drivers/usb/gadget/function/f_ncm.c | 2 +-
drivers/usb/serial/omninet.c | 2 ++
drivers/usb/typec/ucsi/ucsi.c | 1 +
fs/nfs/nfs4_fs.h | 1 +
fs/nfs/nfs4proc.c | 20 +++++++++++-
include/linux/mlx4/device.h | 1 +
include/linux/usb/pd.h | 2 +-
kernel/cgroup/cgroup.c | 13 ++++----
kernel/sched/fair.c | 2 +-
net/nfc/rawsock.c | 2 +-
net/rds/connection.c | 23 ++++++++++----
net/rds/tcp.c | 4 +--
net/rds/tcp.h | 3 +-
net/rds/tcp_listen.c | 6 ++++
sound/core/seq/seq_timer.c | 10 +++++-
sound/firewire/amdtp-stream.c | 2 +-
sound/pci/hda/patch_realtek.c | 12 +++++++
sound/soc/intel/boards/bytcr_rt5640.c | 11 +++++++
48 files changed, 228 insertions(+), 73 deletions(-)
--
2.25.1
kylin inclusion
category: feature
bugfix: https://gitee.com/openeuler-competition/summer-2021/issues/I3EIMT?from=proj…
CVE: NA
--------------------------------------------------
In atomic contexts such as interrupt context it is not possible to sleep.
Memory allocations made in those contexts therefore cannot enter direct
reclaim and may not even wake up the kswapd process. For example, in the
softirq handler that receives network packets, the page cache can occupy
so much memory that too little free memory is left to allocate a buffer
for the incoming packet, and the packet is simply dropped.
This is the problem the page cache limit is meant to solve.
The page cache limit checks whether the page cache exceeds the configured
upper limit (/proc/sys/vm/pagecache_limit_ratio) whenever a page is added
to the page cache (that is, when add_to_page_cache_lru() is called).
Three /proc interfaces are provided:
echo x > /proc/sys/vm/pagecache_limit_ratio (0 < x < 100): enable the page
cache limit; x is the maximum percentage of total system memory the page
cache may occupy.
/proc/sys/vm/pagecache_limit_ignore_dirty: whether to ignore dirty pages
when calculating the memory occupied by the page cache. The default is 1
(ignore), because reclaiming dirty pages is time-consuming.
/proc/sys/vm/pagecache_limit_async: 1 means asynchronous reclaim, 0 means
synchronous reclaim.
Signed-off-by: wen zhiwei <wenzhiwei(a)kylinos.cn>
Signed-off-by: wenzhiwei <wenzhiwei(a)kylinos.cn>
---
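To make the three interfaces above concrete, here is a minimal userspace
sketch (not part of this patch) that enables the limit; the 30% value is
only an example and assumes a kernel built with this series applied.

#include <stdio.h>
#include <stdlib.h>

static int write_sysctl(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fprintf(f, "%s\n", val);
	return fclose(f);
}

int main(void)
{
	/* Cap the page cache at 30% of total system memory. */
	if (write_sysctl("/proc/sys/vm/pagecache_limit_ratio", "30"))
		return EXIT_FAILURE;
	/* Keep ignoring dirty pages when accounting (the default). */
	write_sysctl("/proc/sys/vm/pagecache_limit_ignore_dirty", "1");
	/* Reclaim asynchronously instead of in the allocating context. */
	write_sysctl("/proc/sys/vm/pagecache_limit_async", "1");
	return EXIT_SUCCESS;
}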
include/linux/memcontrol.h | 7 +-
include/linux/mmzone.h | 7 +
include/linux/swap.h | 15 +
include/trace/events/vmscan.h | 28 +-
kernel/sysctl.c | 139 ++++++++
mm/filemap.c | 2 +
mm/page_alloc.c | 52 +++
mm/vmscan.c | 650 ++++++++++++++++++++++++++++++++--
mm/workingset.c | 1 +
9 files changed, 862 insertions(+), 39 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 71a5b589bddb..731a2cd2ea86 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -50,6 +50,7 @@ enum memcg_memory_event {
struct mem_cgroup_reclaim_cookie {
pg_data_t *pgdat;
+ int priority;
unsigned int generation;
};
@@ -492,8 +493,7 @@ mem_cgroup_nodeinfo(struct mem_cgroup *memcg, int nid)
* @node combination. This can be the node lruvec, if the memory
* controller is disabled.
*/
-static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
- struct pglist_data *pgdat)
+static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg, struct pglist_data *pgdat)
{
struct mem_cgroup_per_node *mz;
struct lruvec *lruvec;
@@ -1066,8 +1066,7 @@ static inline void mem_cgroup_migrate(struct page *old, struct page *new)
{
}
-static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
- struct pglist_data *pgdat)
+static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg, struct pglist_data *pgdat)
{
return &pgdat->__lruvec;
}
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 82fceef88448..d3c5258e5d0d 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -445,6 +445,13 @@ struct zone {
* changes.
*/
long lowmem_reserve[MAX_NR_ZONES];
+ /*
+ * This atomic counter is set when there is pagecache limit
+ * reclaim going on on this particular zone. Other potential
+ * reclaiers should back off to prevent from heavy lru_lock
+ * bouncing.
+ */
+ atomic_t pagecache_reclaim;
#ifdef CONFIG_NEED_MULTIPLE_NODES
int node;
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 9b708c0288bc..b9329e575836 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -377,6 +377,21 @@ extern unsigned long mem_cgroup_shrink_node(struct mem_cgroup *mem,
unsigned long *nr_scanned);
extern unsigned long shrink_all_memory(unsigned long nr_pages);
extern int vm_swappiness;
+
+#define ADDITIONAL_RECLAIM_RATIO 2
+extern unsigned long pagecache_over_limit(void);
+extern void shrink_page_cache(gfp_t mask, struct page *page);
+extern unsigned long vm_pagecache_limit_pages;
+extern unsigned long vm_pagecache_limit_reclaim_pages;
+extern int unsigned vm_pagecache_limit_ratio;
+extern int vm_pagecache_limit_reclaim_ratio;
+extern unsigned int vm_pagecache_ignore_dirty;
+extern unsigned long pagecache_over_limit(void);
+extern unsigned int vm_pagecache_limit_async;
+extern int kpagecache_limitd_run(void);
+extern void kpagecache_limitd_stop(void);
+extern unsigned int vm_pagecache_ignore_slab;
+
extern int remove_mapping(struct address_space *mapping, struct page *page);
extern unsigned long reclaim_pages(struct list_head *page_list);
diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
index 2070df64958e..3bfe47a85f6f 100644
--- a/include/trace/events/vmscan.h
+++ b/include/trace/events/vmscan.h
@@ -183,48 +183,48 @@ DEFINE_EVENT(mm_vmscan_direct_reclaim_end_template, mm_vmscan_memcg_softlimit_re
#endif /* CONFIG_MEMCG */
TRACE_EVENT(mm_shrink_slab_start,
- TP_PROTO(struct shrinker *shr, struct shrink_control *sc,
- long nr_objects_to_shrink, unsigned long cache_items,
- unsigned long long delta, unsigned long total_scan,
- int priority),
-
- TP_ARGS(shr, sc, nr_objects_to_shrink, cache_items, delta, total_scan,
- priority),
+ TP_PROTO(struct shrinker *shr, struct shrink_control *sc,
+ long nr_objects_to_shrink,unsigned long pgs_scanned,
+ unsigned long lru_pgs, unsigned long cache_items,
+ unsigned long long delta, unsigned long total_scan),
+ TP_ARGS(shr, sc, nr_objects_to_shrink,pgs_scanned, lru_pgs, cache_items, delta, total_scan),
TP_STRUCT__entry(
__field(struct shrinker *, shr)
__field(void *, shrink)
__field(int, nid)
__field(long, nr_objects_to_shrink)
__field(gfp_t, gfp_flags)
+ __field(unsigned long, pgs_scanned)
+ __field(unsigned long, lru_pgs)
__field(unsigned long, cache_items)
__field(unsigned long long, delta)
__field(unsigned long, total_scan)
- __field(int, priority)
),
TP_fast_assign(
- __entry->shr = shr;
+ __entry->shr = shr;
__entry->shrink = shr->scan_objects;
__entry->nid = sc->nid;
__entry->nr_objects_to_shrink = nr_objects_to_shrink;
__entry->gfp_flags = sc->gfp_mask;
+ __entry->pgs_scanned = pgs_scanned;
+ __entry->lru_pgs = lru_pgs;
__entry->cache_items = cache_items;
__entry->delta = delta;
__entry->total_scan = total_scan;
- __entry->priority = priority;
),
-
- TP_printk("%pS %p: nid: %d objects to shrink %ld gfp_flags %s cache items %ld delta %lld total_scan %ld priority %d",
+TP_printk("%pF %p: nid: %d objects to shrink %ld gfp_flags %s pgs_scanned %ld lru_pgs %ld cache items %ld delta %lld total_scan %ld",
__entry->shrink,
__entry->shr,
__entry->nid,
__entry->nr_objects_to_shrink,
show_gfp_flags(__entry->gfp_flags),
+ __entry->pgs_scanned,
+ __entry->lru_pgs,
__entry->cache_items,
__entry->delta,
- __entry->total_scan,
- __entry->priority)
+ __entry->total_scan)
);
TRACE_EVENT(mm_shrink_slab_end,
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index c7ca58de3b1b..4ef436cdfdad 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -111,6 +111,7 @@
static int sixty = 60;
#endif
+static int zero;
static int __maybe_unused neg_one = -1;
static int __maybe_unused two = 2;
static int __maybe_unused four = 4;
@@ -648,6 +649,68 @@ static int do_proc_dointvec(struct ctl_table *table, int write,
return __do_proc_dointvec(table->data, table, write,
buffer, lenp, ppos, conv, data);
}
+int setup_pagecache_limit(void)
+{
+ /* reclaim $ADDITIONAL_RECLAIM_PAGES more than limit. */
+ vm_pagecache_limit_reclaim_ratio = vm_pagecache_limit_ratio + ADDITIONAL_RECLAIM_RATIO;
+
+ if (vm_pagecache_limit_reclaim_ratio > 100)
+ vm_pagecache_limit_reclaim_ratio = 100;
+ if (vm_pagecache_limit_ratio == 0)
+ vm_pagecache_limit_reclaim_ratio = 0;
+
+ vm_pagecache_limit_pages = vm_pagecache_limit_ratio * totalram_pages() / 100;
+ vm_pagecache_limit_reclaim_pages = vm_pagecache_limit_reclaim_ratio * totalram_pages() / 100;
+ return 0;
+}
+
+static int pc_limit_proc_dointvec(struct ctl_table *table, int write,
+ void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+ int ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+ if (write && !ret)
+ ret = setup_pagecache_limit();
+ return ret;
+}
+static int pc_reclaim_limit_proc_dointvec(struct ctl_table *table, int write,
+ void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+ int pre_reclaim_ratio = vm_pagecache_limit_reclaim_ratio;
+ int ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+
+ if (write && vm_pagecache_limit_ratio == 0)
+ return -EINVAL;
+
+ if (write && !ret) {
+ if (vm_pagecache_limit_reclaim_ratio - vm_pagecache_limit_ratio < ADDITIONAL_RECLAIM_RATIO) {
+ vm_pagecache_limit_reclaim_ratio = pre_reclaim_ratio;
+ return -EINVAL;
+ }
+ vm_pagecache_limit_reclaim_pages = vm_pagecache_limit_reclaim_ratio * totalram_pages() / 100;
+ }
+ return ret;
+}
+static int pc_limit_async_handler(struct ctl_table *table, int write,
+ void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+ int ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+
+ if (write && vm_pagecache_limit_ratio == 0)
+ return -EINVAL;
+
+ if (write && !ret) {
+ if (vm_pagecache_limit_async > 0) {
+ if (kpagecache_limitd_run()) {
+ vm_pagecache_limit_async = 0;
+ return -EINVAL;
+ }
+ }
+ else {
+ kpagecache_limitd_stop();
+ }
+ }
+ return ret;
+}
static int do_proc_douintvec_w(unsigned int *tbl_data,
struct ctl_table *table,
@@ -2711,6 +2774,14 @@ static struct ctl_table kern_table[] = {
},
{ }
};
+static int pc_limit_proc_dointvec(struct ctl_table *table, int write,
+ void __user *buffer, size_t *lenp, loff_t *ppos);
+
+static int pc_reclaim_limit_proc_dointvec(struct ctl_table *table, int write,
+ void __user *buffer, size_t *lenp, loff_t *ppos);
+
+static int pc_limit_async_handler(struct ctl_table *table, int write,
+ void __user *buffer, size_t *lenp, loff_t *ppos);
static struct ctl_table vm_table[] = {
{
@@ -2833,6 +2904,74 @@ static struct ctl_table vm_table[] = {
.extra1 = SYSCTL_ZERO,
.extra2 = &two_hundred,
},
+ {
+ .procname = "pagecache_limit_ratio",
+ .data = &vm_pagecache_limit_ratio,
+ .maxlen = sizeof(vm_pagecache_limit_ratio),
+ .mode = 0644,
+ .proc_handler = &pc_limit_proc_dointvec,
+ .extra1 = &zero,
+ .extra2 = &one_hundred,
+ },
+ {
+ .procname = "pagecache_limit_reclaim_ratio",
+ .data = &vm_pagecache_limit_reclaim_ratio,
+ .maxlen = sizeof(vm_pagecache_limit_reclaim_ratio),
+ .mode = 0644,
+ .proc_handler = &pc_reclaim_limit_proc_dointvec,
+ .extra1 = &zero,
+ .extra2 = &one_hundred,
+ },
+ {
+ .procname = "pagecache_limit_ignore_dirty",
+ .data = &vm_pagecache_ignore_dirty,
+ .maxlen = sizeof(vm_pagecache_ignore_dirty),
+ .mode = 0644,
+ .proc_handler = &proc_dointvec,
+ },
+#ifdef CONFIG_SHRINK_PAGECACHE
+ {
+ .procname = "cache_reclaim_s",
+ .data = &vm_cache_reclaim_s,
+ .maxlen = sizeof(vm_cache_reclaim_s),
+ .mode = 0644,
+ .proc_handler = cache_reclaim_sysctl_handler,
+ .extra1 = &vm_cache_reclaim_s_min,
+ .extra2 = &vm_cache_reclaim_s_max,
+ },
+ {
+ .procname = "cache_reclaim_weight",
+ .data = &vm_cache_reclaim_weight,
+ .maxlen = sizeof(vm_cache_reclaim_weight),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = &vm_cache_reclaim_weight_min,
+ .extra2 = &vm_cache_reclaim_weight_max,
+ },
+ {
+ .procname = "cache_reclaim_enable",
+ .data = &vm_cache_reclaim_enable,
+ .maxlen = sizeof(vm_cache_reclaim_enable),
+ .mode = 0644,
+ .proc_handler = cache_reclaim_enable_handler,
+ .extra1 = &zero,
+ .extra2 = &one,
+ },
+ {
+ .procname = "pagecache_limit_async",
+ .data = &vm_pagecache_limit_async,
+ .maxlen = sizeof(vm_pagecache_limit_async),
+ .mode = 0644,
+ .proc_handler = &pc_limit_async_handler,
+ },
+ {
+ .procname = "pagecache_limit_ignore_slab",
+ .data = &vm_pagecache_ignore_slab,
+ .maxlen = sizeof(vm_pagecache_ignore_slab),
+ .mode = 0644,
+ .proc_handler = &proc_dointvec,
+ },
+#endif
#ifdef CONFIG_HUGETLB_PAGE
{
.procname = "nr_hugepages",
diff --git a/mm/filemap.c b/mm/filemap.c
index ef611eb34aa7..808d4f02b5a5 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -922,6 +922,8 @@ int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
{
void *shadow = NULL;
int ret;
+ if (unlikely(vm_pagecache_limit_pages) && pagecache_over_limit() > 0)
+ shrink_page_cache(gfp_mask, page);
__SetPageLocked(page);
ret = __add_to_page_cache_locked(page, mapping, offset,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 71afec177233..08feba42d3d7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8933,6 +8933,58 @@ void zone_pcp_reset(struct zone *zone)
local_irq_restore(flags);
}
+/* Returns a number that's positive if the pagecache is above
+ * the set limit*/
+unsigned long pagecache_over_limit()
+{
+ unsigned long should_reclaim_pages = 0;
+ unsigned long overlimit_pages = 0;
+ unsigned long delta_pages = 0;
+ unsigned long pgcache_lru_pages = 0;
+ /* We only want to limit unmapped and non-shmem page cache pages;
+ * normally all shmem pages are mapped as well*/
+ unsigned long pgcache_pages = global_node_page_state(NR_FILE_PAGES)
+ - max_t(unsigned long,
+ global_node_page_state(NR_FILE_MAPPED),
+ global_node_page_state(NR_SHMEM));
+ /* We certainly can't free more than what's on the LRU lists
+ * minus the dirty ones*/
+ if (vm_pagecache_ignore_slab)
+ pgcache_lru_pages = global_node_page_state(NR_ACTIVE_FILE)
+ + global_node_page_state(NR_INACTIVE_FILE);
+ else
+ pgcache_lru_pages = global_node_page_state(NR_ACTIVE_FILE)
+ + global_node_page_state(NR_INACTIVE_FILE)
+ + global_node_page_state(NR_SLAB_RECLAIMABLE_B)
+ + global_node_page_state(NR_SLAB_UNRECLAIMABLE_B);
+
+ if (vm_pagecache_ignore_dirty != 0)
+ pgcache_lru_pages -= global_node_page_state(NR_FILE_DIRTY) / vm_pagecache_ignore_dirty;
+ /* Paranoia */
+ if (unlikely(pgcache_lru_pages > LONG_MAX))
+ return 0;
+
+ /* Limit it to 94% of LRU (not all there might be unmapped) */
+ pgcache_lru_pages -= pgcache_lru_pages/16;
+ if (vm_pagecache_ignore_slab)
+ pgcache_pages = min_t(unsigned long, pgcache_pages, pgcache_lru_pages);
+ else
+ pgcache_pages = pgcache_lru_pages;
+
+ /*
+ *delta_pages: we should reclaim at least 2% more pages than overlimit_page, values get from
+ * /proc/vm/pagecache_limit_reclaim_pages
+ *should_reclaim_pages: the real pages we will reclaim, but it should less than pgcache_pages;
+ */
+ if (pgcache_pages > vm_pagecache_limit_pages) {
+ overlimit_pages = pgcache_pages - vm_pagecache_limit_pages;
+ delta_pages = vm_pagecache_limit_reclaim_pages - vm_pagecache_limit_pages;
+ should_reclaim_pages = min_t(unsigned long, delta_pages, vm_pagecache_limit_pages) + overlimit_pages;
+ return should_reclaim_pages;
+ }
+ return 0;
+}
+
#ifdef CONFIG_MEMORY_HOTREMOVE
/*
* All pages in the range must be in a single zone, must not contain holes,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 23f8a5242de7..1fe2c74a1c10 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -175,6 +175,39 @@ struct scan_control {
*/
int vm_swappiness = 60;
+/*
+ * The total number of pages which are beyond the high watermark within all
+ * zones.
+ */
+unsigned long vm_pagecache_limit_pages __read_mostly = 0;
+unsigned long vm_pagecache_limit_reclaim_pages = 0;
+unsigned int vm_pagecache_limit_ratio __read_mostly = 0;
+int vm_pagecache_limit_reclaim_ratio __read_mostly = 0;
+unsigned int vm_pagecache_ignore_dirty __read_mostly = 1;
+
+unsigned long vm_total_pages;
+static struct task_struct *kpclimitd = NULL;
+unsigned int vm_pagecache_ignore_slab __read_mostly = 1;
+unsigned int vm_pagecache_limit_async __read_mostly = 0;
+
+#ifdef CONFIG_SHRINK_PAGECACHE
+unsigned long vm_cache_limit_ratio;
+unsigned long vm_cache_limit_ratio_min;
+unsigned long vm_cache_limit_ratio_max;
+unsigned long vm_cache_limit_mbytes __read_mostly;
+unsigned long vm_cache_limit_mbytes_min;
+unsigned long vm_cache_limit_mbytes_max;
+static bool kpclimitd_context = false;
+int vm_cache_reclaim_s __read_mostly;
+int vm_cache_reclaim_s_min;
+int vm_cache_reclaim_s_max;
+int vm_cache_reclaim_weight __read_mostly;
+int vm_cache_reclaim_weight_min;
+int vm_cache_reclaim_weight_max;
+int vm_cache_reclaim_enable;
+static DEFINE_PER_CPU(struct delayed_work, vmscan_work);
+#endif
+
static void set_task_reclaim_state(struct task_struct *task,
struct reclaim_state *rs)
{
@@ -187,10 +220,12 @@ static void set_task_reclaim_state(struct task_struct *task,
task->reclaim_state = rs;
}
+static bool kpclimitd_context = false;
static LIST_HEAD(shrinker_list);
static DECLARE_RWSEM(shrinker_rwsem);
#ifdef CONFIG_MEMCG
+static DEFINE_IDR(shrinker_idr);
static int shrinker_nr_max;
/* The shrinker_info is expanded in a batch of BITS_PER_LONG */
@@ -346,7 +381,6 @@ void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
}
}
-static DEFINE_IDR(shrinker_idr);
static int prealloc_memcg_shrinker(struct shrinker *shrinker)
{
@@ -646,7 +680,9 @@ EXPORT_SYMBOL(unregister_shrinker);
#define SHRINK_BATCH 128
static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
- struct shrinker *shrinker, int priority)
+ struct shrinker *shrinker,
+ unsigned long nr_scanned,
+ unsigned long nr_eligible)
{
unsigned long freed = 0;
unsigned long long delta;
@@ -670,8 +706,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
nr = xchg_nr_deferred(shrinker, shrinkctl);
if (shrinker->seeks) {
- delta = freeable >> priority;
- delta *= 4;
+ //delta = freeable >> priority;
+ //delta *= 4;
+ delta = (4 * nr_scanned) / shrinker->seeks;
+ delta *= freeable;
do_div(delta, shrinker->seeks);
} else {
/*
@@ -682,12 +720,12 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
delta = freeable / 2;
}
- total_scan = nr >> priority;
+ total_scan = nr;
total_scan += delta;
total_scan = min(total_scan, (2 * freeable));
trace_mm_shrink_slab_start(shrinker, shrinkctl, nr,
- freeable, delta, total_scan, priority);
+ freeable, delta, total_scan, nr_scanned,nr_eligible);
/*
* Normally, we should not scan less than batch_size objects in one
@@ -744,7 +782,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
#ifdef CONFIG_MEMCG
static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
- struct mem_cgroup *memcg, int priority)
+ struct mem_cgroup *memcg, unsigned long nr_scanned, unsigned long nr_eligible)
{
struct shrinker_info *info;
unsigned long ret, freed = 0;
@@ -780,7 +818,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
!(shrinker->flags & SHRINKER_NONSLAB))
continue;
- ret = do_shrink_slab(&sc, shrinker, priority);
+ ret = do_shrink_slab(&sc, shrinker, nr_scanned, nr_eligible);
if (ret == SHRINK_EMPTY) {
clear_bit(i, info->map);
/*
@@ -799,7 +837,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
* set_bit() do_shrink_slab()
*/
smp_mb__after_atomic();
- ret = do_shrink_slab(&sc, shrinker, priority);
+ ret = do_shrink_slab(&sc, shrinker, nr_scanned, nr_eligible);
if (ret == SHRINK_EMPTY)
ret = 0;
else
@@ -846,7 +884,8 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
*/
static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
struct mem_cgroup *memcg,
- int priority)
+ unsigned long nr_scanned,
+ unsigned long nr_eligible)
{
unsigned long ret, freed = 0;
struct shrinker *shrinker;
@@ -859,7 +898,8 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
* oom.
*/
if (!mem_cgroup_disabled() && !mem_cgroup_is_root(memcg))
- return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
+ return 0;
+ // return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
if (!down_read_trylock(&shrinker_rwsem))
goto out;
@@ -871,7 +911,14 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
.memcg = memcg,
};
- ret = do_shrink_slab(&sc, shrinker, priority);
+ if (memcg_kmem_enabled() &&
+ !!memcg != !!(shrinker->flags & SHRINKER_MEMCG_AWARE))
+ continue;
+
+ if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
+ sc.nid = 0;
+
+ ret = do_shrink_slab(&sc, shrinker, nr_scanned, nr_eligible);
if (ret == SHRINK_EMPTY)
ret = 0;
freed += ret;
@@ -905,7 +952,7 @@ void drop_slab_node(int nid)
freed = 0;
memcg = mem_cgroup_iter(NULL, NULL, NULL);
do {
- freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
+ freed += shrink_slab(GFP_KERNEL, nid, memcg, 1000,1000);
} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
} while (freed > 10);
}
@@ -2369,7 +2416,7 @@ unsigned long reclaim_pages(struct list_head *page_list)
EXPORT_SYMBOL_GPL(reclaim_pages);
static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
- struct lruvec *lruvec, struct scan_control *sc)
+ struct lruvec *lruvec, struct mem_cgroup *memcg, struct scan_control *sc)
{
if (is_active_lru(lru)) {
if (sc->may_deactivate & (1 << is_file_lru(lru)))
@@ -2683,7 +2730,7 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
nr[lru] -= nr_to_scan;
nr_reclaimed += shrink_list(lru, nr_to_scan,
- lruvec, sc);
+ lruvec, NULL, sc);
}
}
@@ -2836,7 +2883,7 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
unsigned long reclaimed;
unsigned long scanned;
-
+ unsigned long lru_pages;
/*
* This loop can become CPU-bound when target memcgs
* aren't eligible for reclaim - either because they
@@ -2873,7 +2920,8 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
shrink_lruvec(lruvec, sc);
shrink_slab(sc->gfp_mask, pgdat->node_id, memcg,
- sc->priority);
+ sc->nr_scanned - scanned,
+ lru_pages);
/* Record the group's reclaim efficiency */
vmpressure(sc->gfp_mask, memcg, false,
@@ -3202,6 +3250,7 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
static void snapshot_refaults(struct mem_cgroup *target_memcg, pg_data_t *pgdat)
{
+ struct mem_cgroup *memcg;
struct lruvec *target_lruvec;
unsigned long refaults;
@@ -3273,8 +3322,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
if (cgroup_reclaim(sc)) {
struct lruvec *lruvec;
- lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup,
- zone->zone_pgdat);
+ lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, zone->zone_pgdat);
clear_bit(LRUVEC_CONGESTED, &lruvec->flags);
}
}
@@ -3745,6 +3793,8 @@ static bool kswapd_shrink_node(pg_data_t *pgdat,
return sc->nr_scanned >= sc->nr_to_reclaim;
}
+static void __shrink_page_cache(gfp_t mask);
+
/*
* For kswapd, balance_pgdat() will reclaim pages across a node from zones
* that are eligible for use by the caller until at least one zone is
@@ -4208,6 +4258,27 @@ void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order,
wake_up_interruptible(&pgdat->kswapd_wait);
}
+/*
+ * The reclaimable count would be mostly accurate.
+ * The less reclaimable pages may be
+ * - mlocked pages, which will be moved to unevictable list when encountered
+ * - mapped pages, which may require several travels to be reclaimed
+ * - dirty pages, which is not "instantly" reclaimable
+ */
+
+static unsigned long global_reclaimable_pages(void)
+{
+ int nr;
+
+ nr = global_node_page_state(NR_ACTIVE_FILE) +
+ global_node_page_state(NR_INACTIVE_FILE);
+
+ if (get_nr_swap_pages() > 0)
+ nr += global_node_page_state(NR_ACTIVE_ANON) +
+ global_node_page_state(NR_INACTIVE_ANON);
+ return nr;
+}
+
#ifdef CONFIG_HIBERNATION
/*
* Try to free `nr_to_reclaim' of memory, system-wide, and return the number of
@@ -4246,6 +4317,498 @@ unsigned long shrink_all_memory(unsigned long nr_to_reclaim)
return nr_reclaimed;
}
#endif /* CONFIG_HIBERNATION */
+/*
+ * Returns non-zero if the lock has been acquired, false if somebody
+ * else is holding the lock.
+ */
+static int pagecache_reclaim_lock_zone(struct zone *zone)
+{
+ return atomic_add_unless(&zone->pagecache_reclaim, 1, 1);
+}
+
+static void pagecache_reclaim_unlock_zone(struct zone *zone)
+{
+ BUG_ON(atomic_dec_return(&zone->pagecache_reclaim));
+}
+
+/*
+ * Potential page cache reclaimers who are not able to take
+ * reclaim lock on any zone are sleeping on this waitqueue.
+ * So this is basically a congestion wait queue for them.
+ */
+DECLARE_WAIT_QUEUE_HEAD(pagecache_reclaim_wq);
+DECLARE_WAIT_QUEUE_HEAD(kpagecache_limitd_wq);
+
+/*
+ * Similar to shrink_zone but it has a different consumer - pagecache limit
+ * so we cannot reuse the original function - and we do not want to clobber
+ * that code path so we have to live with this code duplication.
+ *
+ * In short this simply scans through the given lru for all cgroups for the
+ * give zone.
+ *
+ * returns true if we managed to cumulatively reclaim (via nr_reclaimed)
+ * the given nr_to_reclaim pages, false otherwise. The caller knows that
+ * it doesn't have to touch other zones if the target was hit already.
+ *
+ * DO NOT USE OUTSIDE of shrink_all_zones unless you have a really really
+ * really good reason.
+ */
+
+static bool shrink_zone_per_memcg(struct zone *zone, enum lru_list lru,
+ unsigned long nr_to_scan, unsigned long nr_to_reclaim,
+ unsigned long *nr_reclaimed, struct scan_control *sc)
+{
+ struct mem_cgroup *root = sc->target_mem_cgroup;
+ struct mem_cgroup *memcg;
+ struct mem_cgroup_reclaim_cookie reclaim = {
+ .pgdat = zone->zone_pgdat,
+ .priority = sc->priority,
+ };
+
+ memcg = mem_cgroup_iter(root, NULL, &reclaim);
+ do {
+ struct lruvec *lruvec;
+
+ lruvec = mem_cgroup_lruvec(memcg, zone->zone_pgdat);
+ *nr_reclaimed += shrink_list(lru, nr_to_scan, lruvec, memcg, sc);
+ if (*nr_reclaimed >= nr_to_reclaim) {
+ mem_cgroup_iter_break(root, memcg);
+ return true;
+ }
+ memcg = mem_cgroup_iter(root, memcg, &reclaim);
+ } while (memcg);
+
+ return false;
+}
+/*
+ * Tries to reclaim 'nr_pages' pages from LRU lists system-wide, for given
+ * pass.
+ *
+ * For pass > 3 we also try to shrink the LRU lists that contain a few pages
+ *
+ * Returns the number of scanned zones.
+ */
+static int shrink_all_zones(unsigned long nr_pages, int pass,
+ struct scan_control *sc)
+{
+ struct zone *zone;
+ unsigned long nr_reclaimed = 0;
+ unsigned int nr_locked_zones = 0;
+ DEFINE_WAIT(wait);
+
+ prepare_to_wait(&pagecache_reclaim_wq, &wait, TASK_INTERRUPTIBLE);
+
+ for_each_populated_zone(zone) {
+ enum lru_list lru;
+
+ /*
+ * Back off if somebody is already reclaiming this zone
+ * for the pagecache reclaim.
+ */
+ if (!pagecache_reclaim_lock_zone(zone))
+ continue;
+
+
+ /*
+ * This reclaimer might scan a zone so it will never
+ * sleep on pagecache_reclaim_wq
+ */
+ finish_wait(&pagecache_reclaim_wq, &wait);
+ nr_locked_zones++;
+
+ for_each_evictable_lru(lru) {
+ enum zone_stat_item ls = NR_ZONE_LRU_BASE + lru;
+ unsigned long lru_pages = zone_page_state(zone, ls);
+
+ /* For pass = 0, we don't shrink the active list */
+ if (pass == 0 && (lru == LRU_ACTIVE_ANON ||
+ lru == LRU_ACTIVE_FILE))
+ continue;
+
+ /* The original code relied on nr_saved_scan, which is no
+ * longer present, so we are just considering LRU pages.
+ * This means that the zone has to have a quite large
+ * LRU list for the default priority and the minimum nr_pages
+ * size (8*SWAP_CLUSTER_MAX). In the end we will tend
+ * to reclaim more from large zones than from small ones.
+ * This should be OK because shrink_page_cache is called
+ * when we are getting into a low-memory condition, so
+ * LRUs tend to be large.
+ */
+ if (((lru_pages >> sc->priority) + 1) >= nr_pages || pass >= 3) {
+ unsigned long nr_to_scan;
+
+ nr_to_scan = min(nr_pages, lru_pages);
+
+ /*
+ * A bit of a hack but the code has always been
+ * updating sc->nr_reclaimed once per shrink_all_zones
+ * rather than accumulating it for all calls to shrink
+ * lru. This costs us an additional argument to
+ * shrink_zone_per_memcg but well...
+ *
+ * Let's stick with this for bug-to-bug compatibility
+ */
+ while (nr_to_scan > 0) {
+ /* shrink_list takes lru_lock with IRQ off so we
+ * should be careful about really huge nr_to_scan
+ */
+ unsigned long batch = min_t(unsigned long, nr_to_scan, SWAP_CLUSTER_MAX);
+
+ if (shrink_zone_per_memcg(zone, lru,
+ batch, nr_pages, &nr_reclaimed, sc)) {
+ pagecache_reclaim_unlock_zone(zone);
+ goto out_wakeup;
+ }
+ nr_to_scan -= batch;
+ }
+ }
+ }
+ pagecache_reclaim_unlock_zone(zone);
+ }
+ /*
+ * We have to go to sleep because all the zones are already reclaimed.
+ * One of the reclaimers will wake us up, or __shrink_page_cache will
+ * do it if there is nothing to be done.
+ */
+ if (!nr_locked_zones) {
+ if (!kpclimitd_context)
+ schedule();
+ finish_wait(&pagecache_reclaim_wq, &wait);
+ goto out;
+ }
+
+out_wakeup:
+ wake_up_interruptible(&pagecache_reclaim_wq);
+ sc->nr_reclaimed += nr_reclaimed;
+out:
+ return nr_locked_zones;
+}
+
+/*
+ * Function to shrink the page cache
+ *
+ * This function calculates the number of pages (nr_pages) the page
+ * cache is over its limit and shrinks the page cache accordingly.
+ *
+ * The maximum number of pages the page cache is shrunk by in one call of
+ * this function is limited to SWAP_CLUSTER_MAX pages. Therefore it may
+ * require a number of calls to actually reach vm_pagecache_limit_kb.
+ *
+ * This function is similar to shrink_all_memory, except that it may never
+ * swap out mapped pages and only does four passes.
+ */
+static void __shrink_page_cache(gfp_t mask)
+{
+ unsigned long ret = 0;
+ int pass = 0;
+ struct reclaim_state reclaim_state;
+ struct scan_control sc = {
+ .gfp_mask = mask,
+ .may_swap = 0,
+ .may_unmap = 0,
+ .may_writepage = 0,
+ .target_mem_cgroup = NULL,
+ .reclaim_idx = MAX_NR_ZONES,
+ };
+ struct reclaim_state *old_rs = current->reclaim_state;
+ long nr_pages;
+
+ /* We might sleep during direct reclaim, so calling this from
+ * atomic context is certainly a bug.
+ */
+ BUG_ON(!(mask & __GFP_RECLAIM));
+
+retry:
+ /* How many pages are we over the limit? */
+ nr_pages = pagecache_over_limit();
+
+ /*
+ * Return early if there's no work to do.
+ * Wake up reclaimers that couldn't scan any zone due to congestion.
+ * There is apparently nothing to do so they do not have to sleep.
+ * This makes sure that no sleeping reclaimer will stay behind.
+ * Allow breaching the limit if the task is on the way out.
+ */
+ if (nr_pages <= 0 || fatal_signal_pending(current)) {
+ wake_up_interruptible(&pagecache_reclaim_wq);
+ goto out;
+ }
+
+ /* But do a few at least */
+ nr_pages = max_t(unsigned long, nr_pages, 8*SWAP_CLUSTER_MAX);
+
+ current->reclaim_state = &reclaim_state;
+
+ /*
+ * Shrink the LRU in 4 passes:
+ * 0 = Reclaim from inactive_list only (fast)
+ * 1 = Reclaim from active list but don't reclaim mapped or dirtied pages (not that fast)
+ * 2 = Reclaim from active list but don't reclaim mapped pages (2nd pass);
+ * it may reclaim dirtied pages if vm_pagecache_ignore_dirty = 0
+ * 3 = same as pass 2, but it also reclaims from LRU lists with only a few pages (see shrink_all_zones)
+ */
+ for (; pass <= 3; pass++) {
+ for (sc.priority = DEF_PRIORITY; sc.priority >= 0; sc.priority--) {
+ unsigned long nr_to_scan = nr_pages - ret;
+ struct mem_cgroup *memcg = NULL;
+ int nid;
+
+ sc.nr_scanned = 0;
+
+ /*
+ * No zone was reclaimed because of too many reclaimers. Retry to see
+ * whether there is still something to do.
+ */
+ if (!shrink_all_zones(nr_to_scan, pass, &sc))
+ goto retry;
+
+ ret += sc.nr_reclaimed;
+ if (ret >= nr_pages)
+ goto out;
+
+ reclaim_state.reclaimed_slab = 0;
+ for_each_online_node(nid) {
+ do {
+ shrink_slab(mask, nid, memcg, sc.nr_scanned,
+ global_reclaimable_pages());
+ } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
+ }
+ ret += reclaim_state.reclaimed_slab;
+
+ if (ret >= nr_pages)
+ goto out;
+
+ }
+ if (pass == 1) {
+ if (vm_pagecache_ignore_dirty == 1 ||
+ (mask & (__GFP_IO | __GFP_FS)) != (__GFP_IO | __GFP_FS) )
+ break;
+ else
+ sc.may_writepage = 1;
+ }
+ }
+
+out:
+ current->reclaim_state = old_rs;
+}
+
+#ifdef CONFIG_SHRINK_PAGECACHE
+static unsigned long __shrink_page_cache(gfp_t mask)
+{
+ struct scan_control sc = {
+ .gfp_mask = current_gfp_context(mask),
+ .reclaim_idx = gfp_zone(mask),
+ .may_writepage = !laptop_mode,
+ .nr_to_reclaim = SWAP_CLUSTER_MAX *
+ (unsigned long)vm_cache_reclaim_weight,
+ .may_unmap = 1,
+ .may_swap = 1,
+ .order = 0,
+ .priority = DEF_PRIORITY,
+ .target_mem_cgroup = NULL,
+ .nodemask = NULL,
+ };
+
+ struct zonelist *zonelist = node_zonelist(numa_node_id(), mask);
+
+ return do_try_to_free_pages(zonelist, &sc);
+}
+
+
+static void shrink_page_cache_work(struct work_struct *w);
+static void shrink_shepherd(struct work_struct *w);
+static DECLARE_DEFERRABLE_WORK(shepherd, shrink_shepherd);
+
+static void shrink_shepherd(struct work_struct *w)
+{
+ int cpu;
+
+ get_online_cpus();
+
+ for_each_online_cpu(cpu) {
+ struct delayed_work *work = &per_cpu(vmscan_work, cpu);
+
+ if (!delayed_work_pending(work) && vm_cache_reclaim_enable)
+ queue_delayed_work_on(cpu, system_wq, work, 0);
+ }
+
+ put_online_cpus();
+
+ /* if reclaim has been disabled we want all kernel threads to stop, so do not reschedule */
+ if (vm_cache_reclaim_enable) {
+ if (vm_cache_reclaim_s == 0)
+ schedule_delayed_work(&shepherd,
+ round_jiffies_relative(120 * HZ));
+ else
+ schedule_delayed_work(&shepherd,
+ round_jiffies_relative((unsigned long)
+ vm_cache_reclaim_s * HZ));
+ }
+}
+static void shrink_shepherd_timer(void)
+{
+ int cpu;
+
+ for_each_possible_cpu(cpu) {
+ struct delayed_work *work = &per_cpu(vmscan_work, cpu);
+
+ INIT_DEFERRABLE_WORK(work, shrink_page_cache_work);
+ }
+
+ schedule_delayed_work(&shepherd,
+ round_jiffies_relative((unsigned long)vm_cache_reclaim_s * HZ));
+}
+
+unsigned long shrink_page_cache(gfp_t mask)
+{
+ unsigned long nr_pages;
+
+ /* We reclaim the highmem zone too; it is useful on 32-bit arches */
+ nr_pages = __shrink_page_cache(mask | __GFP_HIGHMEM);
+
+ return nr_pages;
+}
+static void shrink_page_cache_work(struct work_struct *w)
+{
+ struct delayed_work *work = to_delayed_work(w);
+ unsigned long nr_pages;
+
+ /*
+ * If vm_cache_reclaim_enable is unset or vm_cache_reclaim_s is zero,
+ * we do not shrink the page cache again.
+ */
+ if (vm_cache_reclaim_s == 0 || !vm_cache_reclaim_enable)
+ return;
+
+ /* Wait longer before the next run if we hardly reclaimed any page cache */
+ nr_pages = shrink_page_cache(GFP_KERNEL);
+ if ((nr_pages < SWAP_CLUSTER_MAX) && vm_cache_reclaim_enable)
+ queue_delayed_work_on(smp_processor_id(), system_wq, work,
+ round_jiffies_relative(120 * HZ));
+}
+
+static void shrink_page_cache_init(void)
+{
+ vm_cache_limit_ratio = 0;
+ vm_cache_limit_ratio_min = 0;
+ vm_cache_limit_ratio_max = 100;
+ vm_cache_limit_mbytes = 0;
+ vm_cache_limit_mbytes_min = 0;
+ vm_cache_limit_mbytes_max = totalram_pages >> (20 - PAGE_SHIFT);
+ vm_cache_reclaim_s = 0;
+ vm_cache_reclaim_s_min = 0;
+ vm_cache_reclaim_s_max = 43200;
+ vm_cache_reclaim_weight = 1;
+ vm_cache_reclaim_weight_min = 1;
+ vm_cache_reclaim_weight_max = 100;
+ vm_cache_reclaim_enable = 1;
+
+ shrink_shepherd_timer();
+}
+
+static int kswapd_cpu_down_prep(unsigned int cpu)
+{
+ cancel_delayed_work_sync(&per_cpu(vmscan_work, cpu));
+
+ return 0;
+}
+int cache_reclaim_enable_handler(struct ctl_table *table, int write,
+ void __user *buffer, size_t *length, loff_t *ppos)
+{
+ int ret;
+
+ ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
+ if (ret)
+ return ret;
+
+ if (write)
+ schedule_delayed_work(&shepherd, round_jiffies_relative((unsigned long)vm_cache_reclaim_s * HZ));
+
+ return 0;
+}
+
+int cache_reclaim_sysctl_handler(struct ctl_table *table, int write,
+ void __user *buffer, size_t *length, loff_t *ppos)
+{
+ int ret;
+
+ ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
+ if (ret)
+ return ret;
+
+ if (write)
+ mod_delayed_work(system_wq, &shepherd,
+ round_jiffies_relative(
+ (unsigned long)vm_cache_reclaim_s * HZ));
+
+ return ret;
+}
+#endif
+
+static int kpagecache_limitd(void *data)
+{
+ DEFINE_WAIT(wait);
+ kpclimitd_context = true;
+
+ /*
+ * make sure all waiting reclaimers are woken up when switching to async mode
+ */
+ if (waitqueue_active(&pagecache_reclaim_wq))
+ wake_up_interruptible(&pagecache_reclaim_wq);
+
+ for ( ; ; ) {
+ __shrink_page_cache(GFP_KERNEL);
+ prepare_to_wait(&kpagecache_limitd_wq, &wait, TASK_INTERRUPTIBLE);
+
+ if (!kthread_should_stop())
+ schedule();
+ else {
+ finish_wait(&kpagecache_limitd_wq, &wait);
+ break;
+ }
+ finish_wait(&kpagecache_limitd_wq, &wait);
+ }
+ kpclimitd_context = false;
+ return 0;
+}
+
+static void wakeup_kpclimitd(gfp_t mask)
+{
+ if (!waitqueue_active(&kpagecache_limitd_wq))
+ return;
+ wake_up_interruptible(&kpagecache_limitd_wq);
+}
+
+void shrink_page_cache(gfp_t mask, struct page *page)
+{
+ if (0 == vm_pagecache_limit_async)
+ __shrink_page_cache(mask);
+ else
+ wakeup_kpclimitd(mask);
+}
+
+/* It's optimal to keep kswapds on the same CPUs as their memory, but
+ not required for correctness. So if the last cpu in a node goes
+ away, we get changed to run anywhere: as the first one comes back,
+ restore their cpu bindings. */
+static int kswapd_cpu_online(unsigned int cpu)
+{
+ int nid;
+
+ for_each_node_state(nid, N_MEMORY) {
+ pg_data_t *pgdat = NODE_DATA(nid);
+ const struct cpumask *mask;
+
+ mask = cpumask_of_node(pgdat->node_id);
+
+ if (cpumask_any_and(cpu_online_mask, mask) < nr_cpu_ids)
+ /* One of our CPUs online: restore mask */
+ set_cpus_allowed_ptr(pgdat->kswapd, mask);
+ }
+ return 0;
+}
/*
* This kswapd start function will be called by init and node-hot-add.
@@ -4286,16 +4849,61 @@ void kswapd_stop(int nid)
static int __init kswapd_init(void)
{
- int nid;
+ /*int nid;
swap_setup();
for_each_node_state(nid, N_MEMORY)
kswapd_run(nid);
- return 0;
+ return 0;*/
+ int nid, ret;
+
+ swap_setup();
+ for_each_node_state(nid, N_MEMORY)
+ kswapd_run(nid);
+#ifdef CONFIG_SHRINK_PAGECACHE
+ ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
+ "mm/vmscan:online", kswapd_cpu_online,
+ kswapd_cpu_down_prep);
+#else
+ ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
+ "mm/vmscan:online", kswapd_cpu_online,
+ NULL);
+#endif
+ WARN_ON(ret < 0);
+#ifdef CONFIG_SHRINK_PAGECACHE
+ shrink_page_cache_init();
+#endif
+ return 0;
+
}
module_init(kswapd_init)
+int kpagecache_limitd_run(void)
+{
+ int ret = 0;
+
+ if (kpclimitd)
+ return 0;
+
+ kpclimitd = kthread_run(kpagecache_limitd, NULL, "kpclimitd");
+ if (IS_ERR(kpclimitd)) {
+ pr_err("Failed to start kpagecache_limitd thread\n");
+ ret = PTR_ERR(kpclimitd);
+ kpclimitd = NULL;
+ }
+ return ret;
+
+}
+
+void kpagecache_limitd_stop(void)
+{
+ if (kpclimitd) {
+ kthread_stop(kpclimitd);
+ kpclimitd = NULL;
+ }
+}
+
#ifdef CONFIG_NUMA
/*
* Node reclaim mode
diff --git a/mm/workingset.c b/mm/workingset.c
index bba4380405b4..9a5ad145b9bd 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -253,6 +253,7 @@ void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages)
void *workingset_eviction(struct page *page, struct mem_cgroup *target_memcg)
{
struct pglist_data *pgdat = page_pgdat(page);
+ struct mem_cgroup *memcg = page_memcg(page);
unsigned long eviction;
struct lruvec *lruvec;
int memcgid;
--
2.30.0
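A note on the per-zone serialization introduced above: pagecache_reclaim_lock_zone() uses
atomic_add_unless(&zone->pagecache_reclaim, 1, 1), which only increments the counter while it
is still 0, so it acts as a try-lock that lets at most one reclaimer work on a zone at a time.
A minimal userspace sketch of the same idiom (C11 atomics instead of the kernel's atomic_t;
names are illustrative only, not part of the patch):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int zone_reclaim_cnt;     /* 0 = free, 1 = held */

/* succeeds only while the counter is still 0, mirroring atomic_add_unless(v, 1, 1) */
static int reclaim_trylock(atomic_int *cnt)
{
        int expected = 0;

        return atomic_compare_exchange_strong(cnt, &expected, 1);
}

static void reclaim_unlock(atomic_int *cnt)
{
        atomic_store(cnt, 0);
}

int main(void)
{
        if (reclaim_trylock(&zone_reclaim_cnt)) {
                puts("zone locked, reclaiming");
                reclaim_unlock(&zone_reclaim_cnt);
        } else {
                puts("somebody else is reclaiming this zone, back off");
        }
        return 0;
}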

[PATCH kernel-4.19] drivers/txgbe: fix buffer not null terminated by strncpy in txgbe_ethtool.c
by shenzijun 26 Oct '21
From: 沈子俊 <shenzijun(a)kylinos.cn>
kylin inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4AG3E?from=project-issue
CVE: NA
---------------------------------------------------
Change the copy size passed to strncpy() so that drvinfo->fw_version is always NUL-terminated.
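For reference, strncpy(dst, src, sizeof(dst)) leaves dst without a terminating NUL whenever src
is at least sizeof(dst) characters long; copying sizeof(dst) - 1 bytes into a zeroed buffer always
keeps a terminator. A small userspace sketch of the difference (buffer contents are made up, not
taken from the driver):

#include <stdio.h>
#include <string.h>

int main(void)
{
        char fw_version[8] = { 0 };
        const char *eeprom_id = "0x12345678-long-id";

        /* unsafe: fills all 8 bytes, no room left for the trailing '\0' */
        strncpy(fw_version, eeprom_id, sizeof(fw_version));
        /* fw_version is NOT NUL-terminated at this point */

        memset(fw_version, 0, sizeof(fw_version));
        /* safe: copies at most 7 bytes, the last byte stays '\0' */
        strncpy(fw_version, eeprom_id, sizeof(fw_version) - 1);
        printf("%s\n", fw_version);    /* prints "0x12345" */

        return 0;
}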
Signed-off-by: 沈子俊 <shenzijun(a)kylinos.cn>
---
drivers/net/ethernet/netswift/txgbe/txgbe_ethtool.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/netswift/txgbe/txgbe_ethtool.c b/drivers/net/ethernet/netswift/txgbe/txgbe_ethtool.c
index 5cb8ef61e04b..9af9f19fb491 100644
--- a/drivers/net/ethernet/netswift/txgbe/txgbe_ethtool.c
+++ b/drivers/net/ethernet/netswift/txgbe/txgbe_ethtool.c
@@ -1040,7 +1040,7 @@ static void txgbe_get_drvinfo(struct net_device *netdev,
strncpy(drvinfo->version, txgbe_driver_version,
sizeof(drvinfo->version) - 1);
strncpy(drvinfo->fw_version, adapter->eeprom_id,
- sizeof(drvinfo->fw_version));
+ sizeof(drvinfo->fw_version) - 1);
strncpy(drvinfo->bus_info, pci_name(adapter->pdev),
sizeof(drvinfo->bus_info) - 1);
if (adapter->num_tx_queues <= TXGBE_NUM_RX_QUEUES) {
--
2.30.0

[PATCH openEuler-1.0-LTS] blk-mq: complete req in softirq context in case of single queue
by Yang Yingliang 26 Oct '21
From: Ming Lei <ming.lei(a)redhat.com>
mainline inclusion
from mainline-4.20-rc1
commit 36e765392e48e0322222347c4d21078c0b94758c
category: bugfix
bugzilla: 175585
CVE: NA
-------------------------------------------------
Lots of controllers may have only one irq vector for completing IO
requests, and the affinity of that only irq vector usually covers all
possible CPUs; however, on most architectures the interrupt ends up
being handled on one specific CPU.
So if all IOs are completed in hardirq context, IO performance
inevitably degrades because of the increased irq latency.
This patch addresses the issue by allowing requests to be completed
in softirq context, like the legacy IO path.
An IOPS improvement of ~13% is observed in the following randread test
on raid0 over virtio-scsi.
mdadm --create --verbose /dev/md0 --level=0 --chunk=1024 --raid-devices=8 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
fio --time_based --name=benchmark --runtime=30 --filename=/dev/md0 --nrfiles=1 --ioengine=libaio --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=32 --rw=randread --blocksize=4k
Cc: Dongli Zhang <dongli.zhang(a)oracle.com>
Cc: Zach Marano <zmarano(a)google.com>
Cc: Christoph Hellwig <hch(a)lst.de>
Cc: Bart Van Assche <bvanassche(a)acm.org>
Cc: Jianchao Wang <jianchao.w.wang(a)oracle.com>
Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Lihong Kou <koulihong(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
block/blk-mq.c | 14 ++++++++++++++
block/blk-softirq.c | 5 ++---
2 files changed, 16 insertions(+), 3 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 7106c94ea58fe..55c81dcafbdc2 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -600,6 +600,20 @@ void blk_mq_force_complete_rq(struct request *rq)
if (rq->internal_tag != -1)
blk_mq_sched_completed_request(rq);
+ /*
+ * Most of single queue controllers, there is only one irq vector
+ * for handling IO completion, and the only irq's affinity is set
+ * as all possible CPUs. On most of ARCHs, this affinity means the
+ * irq is handled on one specific CPU.
+ *
+ * So complete IO reqeust in softirq context in case of single queue
+ * for not degrading IO performance by irqsoff latency.
+ */
+ if (rq->q->nr_hw_queues == 1) {
+ __blk_complete_request(rq);
+ return;
+ }
+
if (!test_bit(QUEUE_FLAG_SAME_COMP, &rq->q->queue_flags)) {
rq->q->softirq_done_fn(rq);
return;
diff --git a/block/blk-softirq.c b/block/blk-softirq.c
index 15c1f5e12eb89..e47a2f751884d 100644
--- a/block/blk-softirq.c
+++ b/block/blk-softirq.c
@@ -97,8 +97,8 @@ static int blk_softirq_cpu_dead(unsigned int cpu)
void __blk_complete_request(struct request *req)
{
- int ccpu, cpu;
struct request_queue *q = req->q;
+ int cpu, ccpu = q->mq_ops ? req->mq_ctx->cpu : req->cpu;
unsigned long flags;
bool shared = false;
@@ -110,8 +110,7 @@ void __blk_complete_request(struct request *req)
/*
* Select completion CPU
*/
- if (req->cpu != -1) {
- ccpu = req->cpu;
+ if (test_bit(QUEUE_FLAG_SAME_COMP, &q->queue_flags) && ccpu != -1) {
if (!test_bit(QUEUE_FLAG_SAME_FORCE, &q->queue_flags))
shared = cpus_share_cache(cpu, ccpu);
} else
--
2.25.1

[PATCH kernel-4.19] blk-mq: complete req in softirq context in case of single queue
by Yang Yingliang 26 Oct '21
From: Ming Lei <ming.lei(a)redhat.com>
mainline inclusion
from mainline-4.20-rc1
commit 36e765392e48e0322222347c4d21078c0b94758c
category: bugfix
bugzilla: 175585
CVE: NA
-------------------------------------------------
Lots of controllers may have only one irq vector for completing IO
requests, and the affinity of that only irq vector usually covers all
possible CPUs; however, on most architectures the interrupt ends up
being handled on one specific CPU.
So if all IOs are completed in hardirq context, IO performance
inevitably degrades because of the increased irq latency.
This patch addresses the issue by allowing requests to be completed
in softirq context, like the legacy IO path.
An IOPS improvement of ~13% is observed in the following randread test
on raid0 over virtio-scsi.
mdadm --create --verbose /dev/md0 --level=0 --chunk=1024 --raid-devices=8 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
fio --time_based --name=benchmark --runtime=30 --filename=/dev/md0 --nrfiles=1 --ioengine=libaio --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=32 --rw=randread --blocksize=4k
Cc: Dongli Zhang <dongli.zhang(a)oracle.com>
Cc: Zach Marano <zmarano(a)google.com>
Cc: Christoph Hellwig <hch(a)lst.de>
Cc: Bart Van Assche <bvanassche(a)acm.org>
Cc: Jianchao Wang <jianchao.w.wang(a)oracle.com>
Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Lihong Kou <koulihong(a)huawei.com>
Reviewed-by: Tao Hou <houtao1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
block/blk-mq.c | 14 ++++++++++++++
block/blk-softirq.c | 5 ++---
2 files changed, 16 insertions(+), 3 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index be1e2ad4631aa..52a04f6ffeea2 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -603,6 +603,20 @@ void blk_mq_force_complete_rq(struct request *rq)
if (rq->internal_tag != -1)
blk_mq_sched_completed_request(rq);
+ /*
+ * Most of single queue controllers, there is only one irq vector
+ * for handling IO completion, and the only irq's affinity is set
+ * as all possible CPUs. On most of ARCHs, this affinity means the
+ * irq is handled on one specific CPU.
+ *
+ * So complete IO reqeust in softirq context in case of single queue
+ * for not degrading IO performance by irqsoff latency.
+ */
+ if (rq->q->nr_hw_queues == 1) {
+ __blk_complete_request(rq);
+ return;
+ }
+
if (!test_bit(QUEUE_FLAG_SAME_COMP, &rq->q->queue_flags)) {
rq->q->softirq_done_fn(rq);
return;
diff --git a/block/blk-softirq.c b/block/blk-softirq.c
index 15c1f5e12eb89..e47a2f751884d 100644
--- a/block/blk-softirq.c
+++ b/block/blk-softirq.c
@@ -97,8 +97,8 @@ static int blk_softirq_cpu_dead(unsigned int cpu)
void __blk_complete_request(struct request *req)
{
- int ccpu, cpu;
struct request_queue *q = req->q;
+ int cpu, ccpu = q->mq_ops ? req->mq_ctx->cpu : req->cpu;
unsigned long flags;
bool shared = false;
@@ -110,8 +110,7 @@ void __blk_complete_request(struct request *req)
/*
* Select completion CPU
*/
- if (req->cpu != -1) {
- ccpu = req->cpu;
+ if (test_bit(QUEUE_FLAG_SAME_COMP, &q->queue_flags) && ccpu != -1) {
if (!test_bit(QUEUE_FLAG_SAME_FORCE, &q->queue_flags))
shared = cpus_share_cache(cpu, ccpu);
} else
--
2.25.1

[PATCH kernel-4.19 1/8] ovl: simplify setting of origin for index lookup
by Yang Yingliang 26 Oct '21
From: Vivek Goyal <vgoyal(a)redhat.com>
mainline inclusion
from mainline-v5.8-rc1
commit 59fb20138a9b5249a4176d5bbc5c670a97343061
category: bugfix
bugzilla: NA
CVE: NA
-------------------------------------------------
overlayfs can keep an index of copied-up files and directories, which seems to
serve two primary purposes. For regular files, it avoids breaking lower
hardlinks over copy up. For directories it seems to be used for various
error checks.
During ovl_lookup(), we look up the index using the lower dentry in many
cases. That lower dentry is called "origin", and the following is a summary
of the current logic.
If there is no upperdentry, always look up the index using the lower dentry.
For regular files this helps avoid breaking hard links over copy up, and for
directories it seems to be just error checks.
If there is an upperdentry, then there are 3 possible cases.
- For directories, lower dentry is found using two ways. One is regular
path based lookup in lower layers and second is using ORIGIN xattr on
upper dentry. First verify that path based lookup lower dentry matches
the one pointed by upper ORIGIN xattr. If yes, use this verified origin
for index lookup.
- For regular files (non-metacopy), there is no path based lookup in lower
layers as lookup stops once we find upper dentry. So there is no origin
verification. If there is ORIGIN xattr present on upper, use that to
lookup index otherwise don't.
- For regular metacopy files, again lower dentry is found using path based
lookup as well as ORIGIN xattr on upper. Path based lookup is continued
in this case to find lower data dentry for metacopy upper. So like
directories we only use verified origin. If ORIGIN xattr is not present
(Either because lower did not support file handles or because this is
hardlink copied up with index=off), then don't use path lookup based
lower dentry as origin. This is same as regular non-metacopy file case.
Suggested-by: Amir Goldstein <amir73il(a)gmail.com>
Signed-off-by: Vivek Goyal <vgoyal(a)redhat.com>
Reviewed-by: Amir Goldstein <amir73il(a)gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi(a)redhat.com>
Signed-off-by: Zheng Liang <zhengliang6(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/overlayfs/namei.c | 29 +++++++++++++++++------------
1 file changed, 17 insertions(+), 12 deletions(-)
diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
index 145bfdde53feb..968ad757c578e 100644
--- a/fs/overlayfs/namei.c
+++ b/fs/overlayfs/namei.c
@@ -1014,25 +1014,30 @@ struct dentry *ovl_lookup(struct inode *dir, struct dentry *dentry,
}
stack = origin_path;
ctr = 1;
+ origin = origin_path->dentry;
origin_path = NULL;
}
/*
- * Lookup index by lower inode and verify it matches upper inode.
- * We only trust dir index if we verified that lower dir matches
- * origin, otherwise dir index entries may be inconsistent and we
- * ignore them.
+ * Always lookup index if there is no-upperdentry.
*
- * For non-dir upper metacopy dentry, we already set "origin" if we
- * verified that lower matched upper origin. If upper origin was
- * not present (because lower layer did not support fh encode/decode),
- * or indexing is not enabled, do not set "origin" and skip looking up
- * index. This case should be handled in same way as a non-dir upper
- * without ORIGIN is handled.
+ * For the case of upperdentry, we have set origin by now if it
+ * needed to be set. There are basically three cases.
+ *
+ * For directories, lookup index by lower inode and verify it matches
+ * upper inode. We only trust dir index if we verified that lower dir
+ * matches origin, otherwise dir index entries may be inconsistent
+ * and we ignore them.
+ *
+ * For regular upper, we already set origin if upper had ORIGIN
+ * xattr. There is no verification though as there is no path
+ * based dentry lookup in lower in this case.
+ *
+ * For metacopy upper, we set a verified origin already if index
+ * is enabled and if upper had an ORIGIN xattr.
*
- * Always lookup index of non-dir non-metacopy and non-upper.
*/
- if (ctr && (!upperdentry || (!d.is_dir && !metacopy)))
+ if (!upperdentry && ctr)
origin = stack[0].dentry;
if (origin && ovl_indexdir(dentry->d_sb) &&
--
2.25.1

26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 17:03, xjx00 wrote:
> From: Maciej Żenczykowski <maze(a)google.com>
>
> stable inclusion
> from stable-v5.10.44
> commit 0f5a20b1fd9da3ac9f7c6edcad522712ca694d5c
> bugzilla:https://bugzilla.openeuler.org/show_bug.cgi?id=358
> CVE: NA
>
> -------------------------------------------------
>
> commit 3370139745853f7826895293e8ac3aec1430508e upstream.
>
> [ 190.544755] configfs-gadget gadget: notify speed -44967296
>
> This is because 4250000000 - 2**32 is -44967296.
>
> Fixes: 9f6ce4240a2b ("usb: gadget: f_ncm.c added")
> Cc: Brooke Basile <brookebasile(a)gmail.com>
> Cc: Bryan O'Donoghue <bryan.odonoghue(a)linaro.org>
> Cc: Felipe Balbi <balbi(a)kernel.org>
> Cc: Lorenzo Colitti <lorenzo(a)google.com>
> Cc: Yauheni Kaliuta <yauheni.kaliuta(a)nokia.com>
> Cc: Linux USB Mailing List <linux-usb(a)vger.kernel.org>
> Acked-By: Lorenzo Colitti <lorenzo(a)google.com>
> Signed-off-by: Maciej Żenczykowski <maze(a)google.com>
> Cc: stable <stable(a)vger.kernel.org>
> Link: https://lore.kernel.org/r/20210608005344.3762668-1-zenczykowski@gmail.com
> Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Signed-off-by: xjx00 <xjxyklwx(a)126.com>
> ---
> drivers/usb/gadget/function/f_ncm.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c
> index 019bea8e09cc..0d23c6c11a13 100644
> --- a/drivers/usb/gadget/function/f_ncm.c
> +++ b/drivers/usb/gadget/function/f_ncm.c
> @@ -583,7 +583,7 @@ static void ncm_do_notify(struct f_ncm *ncm)
> data[0] = cpu_to_le32(ncm_bitrate(cdev->gadget));
> data[1] = data[0];
>
> - DBG(cdev, "notify speed %d\n", ncm_bitrate(cdev->gadget));
> + DBG(cdev, "notify speed %u\n", ncm_bitrate(cdev->gadget));
> ncm->notify_state = NCM_NOTIFY_CONNECT;
> break;
> }
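The negative "notify speed" in the quoted log is plain 32-bit wraparound: 4250000000 does not fit
in a signed 32-bit int, so printing it with %d yields 4250000000 - 2^32 = -44967296, while %u prints
the intended value. A minimal sketch of the arithmetic (illustrative only, not the gadget code):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint32_t bitrate = 4250000000u;              /* 4.25 Gbit/s fits in a u32 */

        printf("notify speed %d\n", (int)bitrate);   /* -44967296, the bogus log line */
        printf("notify speed %u\n", bitrate);        /* 4250000000, what the fix prints */
        return 0;
}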

Re: [PATCH openEuler-21.03] ALSA: firewire-lib: fix the context to call snd_pcm_stop_xrun()
by chengjian (D) 26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 21:17, lihao wrote:
> From: Takashi Sakamoto <o-takashi(a)sakamocchi.jp>
>
> stable inclusion
> from stable-v5.10.44
> commit 98f842951f8aa222e8a8453e6dbce6c056e9984f
> bugzilla:https://bugzilla.openeuler.org/show_bug.cgi?id=430
> CVE: NA
>
> -------------------------------------------------
>
> commit 9981b20a5e3694f4625ab5a1ddc98ce7503f6d12 upstream.
>
> In the workqueue to queue wake-up event, isochronous context is not
> processed, thus it's useless to check context for the workqueue to switch
> status of runtime for PCM substream to XRUN. On the other hand, in
> software IRQ context of 1394 OHCI, it's needed.
>
> This commit fixes the bug introduced when tasklet was replaced with
> workqueue.
>
> Cc: <stable(a)vger.kernel.org>
> Fixes: 2b3d2987d800 ("ALSA: firewire: Replace tasklet with work")
> Signed-off-by: Takashi Sakamoto <o-takashi(a)sakamocchi.jp>
> Link: https://lore.kernel.org/r/20210605091054.68866-1-o-takashi@sakamocchi.jp
> Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
> Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Signed-off-by: lihao <380525608(a)qq.com>
> ---
> sound/firewire/amdtp-stream.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/sound/firewire/amdtp-stream.c b/sound/firewire/amdtp-stream.c
> index e0faa6601966..5805c5de39fb 100644
> --- a/sound/firewire/amdtp-stream.c
> +++ b/sound/firewire/amdtp-stream.c
> @@ -804,7 +804,7 @@ static void generate_pkt_descs(struct amdtp_stream *s, struct pkt_desc *descs,
> static inline void cancel_stream(struct amdtp_stream *s)
> {
> s->packet_index = -1;
> - if (current_work() == &s->period_work)
> + if (in_interrupt())
> amdtp_stream_pcm_abort(s);
> WRITE_ONCE(s->pcm_buffer_pointer, SNDRV_PCM_POS_XRUN);
> }

Re: [PATCH openEuler-21.03] drm/msm/a6xx: avoid shadow NULL reference in failure path
by chengjian (D) 26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 17:02, lzb wrote:
> From: Jonathan Marek <jonathan(a)marek.ca>
>
> stable inclusion
> from stable-v5.10.44
> commit fd681a8c7ac8f649a0718f6cbf2fe75d0587c9a2
> bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=472
> CVE: NA
>
> -------------------------------------------------
>
> commit ce86c239e4d218ae6040bec18e6d19a58edb8b7c upstream.
>
> If a6xx_hw_init() fails before creating the shadow_bo, the a6xx_pm_suspend
> code referencing it will crash. Change the condition to one that avoids
> this problem (note: creation of shadow_bo is behind this same condition)
>
> Fixes: e8b0b994c3a5 ("drm/msm/a6xx: Clear shadow on suspend")
> Signed-off-by: Jonathan Marek <jonathan(a)marek.ca>
> Reviewed-by: Akhil P Oommen <akhilpo(a)codeaurora.org>
> Link: https://lore.kernel.org/r/20210513171431.18632-6-jonathan@marek.ca
> Signed-off-by: Rob Clark <robdclark(a)chromium.org>
> Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Signed-off-by: lzb <zbliancs(a)qq.com>
> ---
> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> index 722c2fe3bfd5..7061ba457c5b 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> @@ -1055,7 +1055,7 @@ static int a6xx_pm_suspend(struct msm_gpu *gpu)
> if (ret)
> return ret;
>
> - if (adreno_gpu->base.hw_apriv || a6xx_gpu->has_whereami)
> + if (a6xx_gpu->shadow_bo)
> for (i = 0; i < gpu->nr_rings; i++)
> a6xx_gpu->shadow[i] = 0;
>

Re: [PATCH openEuler-21.03] sched/fair: Make sure to update tg contrib for blocked load
by chengjian (D) 26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 17:09, wjy wrote:
> From: Vincent Guittot <vincent.guittot(a)linaro.org>
>
> stable inclusion
> from stable-v5.10.44
> commit 32e22db8b25ea165bd9e446c7f92b089c8568eaf
> bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=379
> CVE: NA
>
> -------------------------------------------------
>
> commit 02da26ad5ed6ea8680e5d01f20661439611ed776 upstream.
>
> During the update of fair blocked load (__update_blocked_fair()), we
> update the contribution of the cfs in tg->load_avg if cfs_rq's pelt
> has decayed. Nevertheless, the pelt values of a cfs_rq could have
> been recently updated while propagating the change of a child. In this
> case, cfs_rq's pelt will not decayed because it has already been
> updated and we don't update tg->load_avg.
>
> __update_blocked_fair
> ...
> for_each_leaf_cfs_rq_safe: child cfs_rq
> update cfs_rq_load_avg() for child cfs_rq
> ...
> update_load_avg(cfs_rq_of(se), se, 0)
> ...
> update cfs_rq_load_avg() for parent cfs_rq
> -propagation of child's load makes parent cfs_rq->load_sum
> becoming null
> -UPDATE_TG is not set so it doesn't update parent
> cfs_rq->tg_load_avg_contrib
> ..
> for_each_leaf_cfs_rq_safe: parent cfs_rq
> update cfs_rq_load_avg() for parent cfs_rq
> - nothing to do because parent cfs_rq has already been updated
> recently so cfs_rq->tg_load_avg_contrib is not updated
> ...
> parent cfs_rq is decayed
> list_del_leaf_cfs_rq parent cfs_rq
> - but it still contibutes to tg->load_avg
>
> we must set UPDATE_TG flags when propagting pending load to the parent
>
> Fixes: 039ae8bcf7a5 ("sched/fair: Fix O(nr_cgroups) in the load balancing path")
> Reported-by: Odin Ugedal <odin(a)uged.al>
> Signed-off-by: Vincent Guittot <vincent.guittot(a)linaro.org>
> Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
> Reviewed-by: Odin Ugedal <odin(a)uged.al>
> Link: https://lkml.kernel.org/r/20210527122916.27683-3-vincent.guittot@linaro.org
> Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Signed-off-by: wjy <464310675(a)qq.com>
> ---
> kernel/sched/fair.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1ad0e52487f6..43497d88a330 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7961,7 +7961,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
> /* Propagate pending load changes to the parent, if any: */
> se = cfs_rq->tg->se[cpu];
> if (se && !skip_blocked_update(se))
> - update_load_avg(cfs_rq_of(se), se, 0);
> + update_load_avg(cfs_rq_of(se), se, UPDATE_TG);
>
> /*
> * There can be a lot of idle CPU cgroups. Don't let fully

Re: [PATCH openEuler-21.03] NFSv4: Fix second deadlock in nfs4_evict_inode()
by chengjian (D) 26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 17:13, wyp wrote:
> From: Trond Myklebust <trond.myklebust(a)hammerspace.com>
>
> stable inclusion
> from stable-v5.10.44
> commit d973bd0d6e7f9b4ea976cc619e8d6e0d235b9056
> bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=462
> CVE: NA
>
> -------------------------------------------------
>
> commit c3aba897c6e67fa464ec02b1f17911577d619713 upstream.
>
> If the inode is being evicted but has to return a layout first, then
> that too can cause a deadlock in the corner case where the server
> reboots.
>
> Signed-off-by: Trond Myklebust <trond.myklebust(a)hammerspace.com>
> Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Signed-off-by: Hang <haihangyiyuan(a)163.com>
> Reviewed-by: Jian Cheng <cj.chengjian(a)huawei.com>
> Signed-off-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
> ---
> fs/nfs/nfs4proc.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
> index c92d6ff0fcea..eedcbe6832fb 100644
> --- a/fs/nfs/nfs4proc.c
> +++ b/fs/nfs/nfs4proc.c
> @@ -9619,15 +9619,20 @@ int nfs4_proc_layoutreturn(struct nfs4_layoutreturn *lrp, bool sync)
> &task_setup_data.rpc_client, &msg);
>
> dprintk("--> %s\n", __func__);
> + lrp->inode = nfs_igrab_and_active(lrp->args.inode);
> if (!sync) {
> - lrp->inode = nfs_igrab_and_active(lrp->args.inode);
> if (!lrp->inode) {
> nfs4_layoutreturn_release(lrp);
> return -EAGAIN;
> }
> task_setup_data.flags |= RPC_TASK_ASYNC;
> }
> - nfs4_init_sequence(&lrp->args.seq_args, &lrp->res.seq_res, 1, 0);
> + if (!lrp->inode)
> + nfs4_init_sequence(&lrp->args.seq_args, &lrp->res.seq_res, 1,
> + 1);
> + else
> + nfs4_init_sequence(&lrp->args.seq_args, &lrp->res.seq_res, 1,
> + 0);
> task = rpc_run_task(&task_setup_data);
> if (IS_ERR(task))
> return PTR_ERR(task);

26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 17:15, gpj wrote:
> From: Dan Carpenter <dan.carpenter(a)oracle.com>
>
> stable inclusion
> from stable-v5.10.44
> commit be23c4af3d8a1b986fe9b43b8966797653a76ca4
> bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=341
> CVE: NA
>
> --------------------------------
>
> [ Upstream commit 1dde47a66d4fb181830d6fa000e5ea86907b639e ]
>
> We spotted a bug recently during a review where a driver was
> unregistering a bus that wasn't registered, which would trigger this
> BUG_ON(). Let's handle that situation more gracefully, and just print
> a warning and return.
>
> Reported-by: Russell King (Oracle) <rmk+kernel(a)armlinux.org.uk>
> Signed-off-by: Dan Carpenter <dan.carpenter(a)oracle.com>
> Reviewed-by: Russell King (Oracle) <rmk+kernel(a)armlinux.org.uk>
> Reviewed-by: Andrew Lunn <andrew(a)lunn.ch>
> Signed-off-by: David S. Miller <davem(a)davemloft.net>
> Signed-off-by: Sasha Levin <sashal(a)kernel.org>
> Signed-off-by: wangqing <wangqing(a)uniontech.com>
> Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
> Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
> ---
> drivers/net/phy/mdio_bus.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
> index 757e950fb745..b848439fa837 100644
> --- a/drivers/net/phy/mdio_bus.c
> +++ b/drivers/net/phy/mdio_bus.c
> @@ -608,7 +608,8 @@ void mdiobus_unregister(struct mii_bus *bus)
> struct mdio_device *mdiodev;
> int i;
>
> - BUG_ON(bus->state != MDIOBUS_REGISTERED);
> + if (WARN_ON_ONCE(bus->state != MDIOBUS_REGISTERED))
> + return;
> bus->state = MDIOBUS_UNREGISTERED;
>
> for (i = 0; i < PHY_MAX_ADDR; i++) {
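The quoted change replaces a hard BUG_ON() with a warn-and-return, so a driver that unregisters a
bus it never registered no longer takes the whole machine down. The general shape of that defensive
pattern, as a hedged userspace sketch (names invented for illustration, not the mdio API):

#include <stdio.h>

enum bus_state { BUS_UNREGISTERED, BUS_REGISTERED };

static void unregister_bus(enum bus_state *state)
{
        /* old behaviour would abort here, analogous to BUG_ON() */
        if (*state != BUS_REGISTERED) {
                fprintf(stderr, "warning: bus was never registered\n");
                return;         /* new behaviour: warn and bail out gracefully */
        }
        *state = BUS_UNREGISTERED;
        /* ... the real teardown of attached devices would follow ... */
}

int main(void)
{
        enum bus_state state = BUS_UNREGISTERED;

        unregister_bus(&state); /* prints the warning and keeps running */
        return 0;
}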

26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 17:18, dqh wrote:
> From: Dan Carpenter <dan.carpenter(a)oracle.com>
>
> stable inclusion
> from stable-v5.10.44
> commit be23c4af3d8a1b986fe9b43b8966797653a76ca4
> bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=341
> CVE: NA
>
> --------------------------------
>
> [ Upstream commit 1dde47a66d4fb181830d6fa000e5ea86907b639e ]
>
> We spotted a bug recently during a review where a driver was
> unregistering a bus that wasn't registered, which would trigger this
> BUG_ON(). Let's handle that situation more gracefully, and just print
> a warning and return.
>
> Reported-by: Russell King (Oracle) <rmk+kernel(a)armlinux.org.uk>
> Signed-off-by: Dan Carpenter <dan.carpenter(a)oracle.com>
> Reviewed-by: Russell King (Oracle) <rmk+kernel(a)armlinux.org.uk>
> Reviewed-by: Andrew Lunn <andrew(a)lunn.ch>
> Signed-off-by: David S. Miller <davem(a)davemloft.net>
> Signed-off-by: Sasha Levin <sashal(a)kernel.org>
> Signed-off-by: wangqing <wangqing(a)uniontech.com>
> Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
> Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
> ---
> drivers/net/phy/mdio_bus.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
> index 757e950fb745..b848439fa837 100644
> --- a/drivers/net/phy/mdio_bus.c
> +++ b/drivers/net/phy/mdio_bus.c
> @@ -608,7 +608,8 @@ void mdiobus_unregister(struct mii_bus *bus)
> struct mdio_device *mdiodev;
> int i;
>
> - BUG_ON(bus->state != MDIOBUS_REGISTERED);
> + if (WARN_ON_ONCE(bus->state != MDIOBUS_REGISTERED))
> + return;
> bus->state = MDIOBUS_UNREGISTERED;
>
> for (i = 0; i < PHY_MAX_ADDR; i++) {

26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 17:18, dqh wrote:
> From: Lukas Wunner <lukas(a)wunner.de>
>
> stable inclusion
> from stable-v5.10.44
> commit fa05ba61967ad051f5f2b7c4f39d6c56719c9900
> bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=470
> CVE: NA
>
> -------------------------------------------------
>
> [ Upstream commit 2ec6f20b33eb4f62ab90bdcd620436c883ec3af6 ]
>
> Commit c7299fea6769 ("spi: Fix spi device unregister flow") changed the
> SPI core's behavior if the ->setup() hook returns an error upon adding
> an spi_device: Before, the ->cleanup() hook was invoked to free any
> allocations that were made by ->setup(). With the commit, that's no
> longer the case, so the ->setup() hook is expected to free the
> allocations itself.
>
> I've identified 5 drivers which depend on the old behavior and am fixing
> them up hereinafter: spi-bitbang.c spi-fsl-spi.c spi-omap-uwire.c
> spi-omap2-mcspi.c spi-pxa2xx.c
>
> Importantly, ->setup() is not only invoked on spi_device *addition*:
> It may subsequently be called to *change* SPI parameters. If changing
> these SPI parameters fails, freeing memory allocations would be wrong.
> That should only be done if the spi_device is finally destroyed.
> I am therefore using a bool "initial_setup" in 4 of the affected drivers
> to differentiate between the invocation on *adding* the spi_device and
> any subsequent invocations: spi-bitbang.c spi-fsl-spi.c spi-omap-uwire.c
> spi-omap2-mcspi.c
>
> In spi-pxa2xx.c, it seems the ->setup() hook can only fail on spi_device
> addition, not any subsequent calls. It therefore doesn't need the bool.
>
> It's worth noting that 5 other drivers already perform a cleanup if the
> ->setup() hook fails. Before c7299fea6769, they caused a double-free
> if ->setup() failed on spi_device addition. Since the commit, they're
> fine. These drivers are: spi-mpc512x-psc.c spi-pl022.c spi-s3c64xx.c
> spi-st-ssc4.c spi-tegra114.c
>
> (spi-pxa2xx.c also already performs a cleanup, but only in one of
> several error paths.)
>
> Fixes: c7299fea6769 ("spi: Fix spi device unregister flow")
> Signed-off-by: Lukas Wunner <lukas(a)wunner.de>
> Cc: Saravana Kannan <saravanak(a)google.com>
> Acked-by: Andy Shevchenko <andriy.shevchenko(a)linux.intel.com> # pxa2xx
> Link: https://lore.kernel.org/r/f76a0599469f265b69c371538794101fa37b5536.16221493…
> Signed-off-by: Mark Brown <broonie(a)kernel.org>
> Signed-off-by: Sasha Levin <sashal(a)kernel.org>
> Signed-off-by: dqh <1486653795(a)qq.com>
> ---
> drivers/spi/spi-bitbang.c | 18 ++++++++++++++----
> drivers/spi/spi-fsl-spi.c | 4 ++++
> drivers/spi/spi-omap-uwire.c | 9 ++++++++-
> drivers/spi/spi-omap2-mcspi.c | 33 ++++++++++++++++++++-------------
> drivers/spi/spi-pxa2xx.c | 9 ++++++++-
> 5 files changed, 54 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/spi/spi-bitbang.c b/drivers/spi/spi-bitbang.c
> index 1a7352abd878..3d8948a17095 100644
> --- a/drivers/spi/spi-bitbang.c
> +++ b/drivers/spi/spi-bitbang.c
> @@ -181,6 +181,8 @@ int spi_bitbang_setup(struct spi_device *spi)
> {
> struct spi_bitbang_cs *cs = spi->controller_state;
> struct spi_bitbang *bitbang;
> + bool initial_setup = false;
> + int retval;
>
> bitbang = spi_master_get_devdata(spi->master);
>
> @@ -189,22 +191,30 @@ int spi_bitbang_setup(struct spi_device *spi)
> if (!cs)
> return -ENOMEM;
> spi->controller_state = cs;
> + initial_setup = true;
> }
>
> /* per-word shift register access, in hardware or bitbanging */
> cs->txrx_word = bitbang->txrx_word[spi->mode & (SPI_CPOL|SPI_CPHA)];
> - if (!cs->txrx_word)
> - return -EINVAL;
> + if (!cs->txrx_word) {
> + retval = -EINVAL;
> + goto err_free;
> + }
>
> if (bitbang->setup_transfer) {
> - int retval = bitbang->setup_transfer(spi, NULL);
> + retval = bitbang->setup_transfer(spi, NULL);
> if (retval < 0)
> - return retval;
> + goto err_free;
> }
>
> dev_dbg(&spi->dev, "%s, %u nsec/bit\n", __func__, 2 * cs->nsecs);
>
> return 0;
> +
> +err_free:
> + if (initial_setup)
> + kfree(cs);
> + return retval;
> }
> EXPORT_SYMBOL_GPL(spi_bitbang_setup);
>
> diff --git a/drivers/spi/spi-fsl-spi.c b/drivers/spi/spi-fsl-spi.c
> index d0e5aa18b7ba..bdf94cc7be1a 100644
> --- a/drivers/spi/spi-fsl-spi.c
> +++ b/drivers/spi/spi-fsl-spi.c
> @@ -440,6 +440,7 @@ static int fsl_spi_setup(struct spi_device *spi)
> {
> struct mpc8xxx_spi *mpc8xxx_spi;
> struct fsl_spi_reg __iomem *reg_base;
> + bool initial_setup = false;
> int retval;
> u32 hw_mode;
> struct spi_mpc8xxx_cs *cs = spi_get_ctldata(spi);
> @@ -452,6 +453,7 @@ static int fsl_spi_setup(struct spi_device *spi)
> if (!cs)
> return -ENOMEM;
> spi_set_ctldata(spi, cs);
> + initial_setup = true;
> }
> mpc8xxx_spi = spi_master_get_devdata(spi->master);
>
> @@ -475,6 +477,8 @@ static int fsl_spi_setup(struct spi_device *spi)
> retval = fsl_spi_setup_transfer(spi, NULL);
> if (retval < 0) {
> cs->hw_mode = hw_mode; /* Restore settings */
> + if (initial_setup)
> + kfree(cs);
> return retval;
> }
>
> diff --git a/drivers/spi/spi-omap-uwire.c b/drivers/spi/spi-omap-uwire.c
> index 71402f71ddd8..df28c6664aba 100644
> --- a/drivers/spi/spi-omap-uwire.c
> +++ b/drivers/spi/spi-omap-uwire.c
> @@ -424,15 +424,22 @@ static int uwire_setup_transfer(struct spi_device *spi, struct spi_transfer *t)
> static int uwire_setup(struct spi_device *spi)
> {
> struct uwire_state *ust = spi->controller_state;
> + bool initial_setup = false;
> + int status;
>
> if (ust == NULL) {
> ust = kzalloc(sizeof(*ust), GFP_KERNEL);
> if (ust == NULL)
> return -ENOMEM;
> spi->controller_state = ust;
> + initial_setup = true;
> }
>
> - return uwire_setup_transfer(spi, NULL);
> + status = uwire_setup_transfer(spi, NULL);
> + if (status && initial_setup)
> + kfree(ust);
> +
> + return status;
> }
>
> static void uwire_cleanup(struct spi_device *spi)
> diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c
> index d4c9510af393..3596bbe4b776 100644
> --- a/drivers/spi/spi-omap2-mcspi.c
> +++ b/drivers/spi/spi-omap2-mcspi.c
> @@ -1032,8 +1032,22 @@ static void omap2_mcspi_release_dma(struct spi_master *master)
> }
> }
>
> +static void omap2_mcspi_cleanup(struct spi_device *spi)
> +{
> + struct omap2_mcspi_cs *cs;
> +
> + if (spi->controller_state) {
> + /* Unlink controller state from context save list */
> + cs = spi->controller_state;
> + list_del(&cs->node);
> +
> + kfree(cs);
> + }
> +}
> +
> static int omap2_mcspi_setup(struct spi_device *spi)
> {
> + bool initial_setup = false;
> int ret;
> struct omap2_mcspi *mcspi = spi_master_get_devdata(spi->master);
> struct omap2_mcspi_regs *ctx = &mcspi->ctx;
> @@ -1051,35 +1065,28 @@ static int omap2_mcspi_setup(struct spi_device *spi)
> spi->controller_state = cs;
> /* Link this to context save list */
> list_add_tail(&cs->node, &ctx->cs);
> + initial_setup = true;
> }
>
> ret = pm_runtime_get_sync(mcspi->dev);
> if (ret < 0) {
> pm_runtime_put_noidle(mcspi->dev);
> + if (initial_setup)
> + omap2_mcspi_cleanup(spi);
>
> return ret;
> }
>
> ret = omap2_mcspi_setup_transfer(spi, NULL);
> + if (ret && initial_setup)
> + omap2_mcspi_cleanup(spi);
> +
> pm_runtime_mark_last_busy(mcspi->dev);
> pm_runtime_put_autosuspend(mcspi->dev);
>
> return ret;
> }
>
> -static void omap2_mcspi_cleanup(struct spi_device *spi)
> -{
> - struct omap2_mcspi_cs *cs;
> -
> - if (spi->controller_state) {
> - /* Unlink controller state from context save list */
> - cs = spi->controller_state;
> - list_del(&cs->node);
> -
> - kfree(cs);
> - }
> -}
> -
> static irqreturn_t omap2_mcspi_irq_handler(int irq, void *data)
> {
> struct omap2_mcspi *mcspi = data;
> diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
> index d6b534d38e5d..56a62095ec8c 100644
> --- a/drivers/spi/spi-pxa2xx.c
> +++ b/drivers/spi/spi-pxa2xx.c
> @@ -1254,6 +1254,8 @@ static int setup_cs(struct spi_device *spi, struct chip_data *chip,
> chip->gpio_cs_inverted = spi->mode & SPI_CS_HIGH;
>
> err = gpiod_direction_output(gpiod, !chip->gpio_cs_inverted);
> + if (err)
> + gpiod_put(chip->gpiod_cs);
> }
>
> return err;
> @@ -1267,6 +1269,7 @@ static int setup(struct spi_device *spi)
> struct driver_data *drv_data =
> spi_controller_get_devdata(spi->controller);
> uint tx_thres, tx_hi_thres, rx_thres;
> + int err;
>
> switch (drv_data->ssp_type) {
> case QUARK_X1000_SSP:
> @@ -1413,7 +1416,11 @@ static int setup(struct spi_device *spi)
> if (drv_data->ssp_type == CE4100_SSP)
> return 0;
>
> - return setup_cs(spi, chip, chip_info);
> + err = setup_cs(spi, chip, chip_info);
> + if (err)
> + kfree(chip);
> +
> + return err;
> }
>
> static void cleanup(struct spi_device *spi)
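The key point of the quoted commit is that a ->setup() hook can run both when the device is first
added and later to change parameters, so its error path may only free state that this particular
call allocated. A minimal sketch of the "initial_setup" pattern under that assumption (struct and
function names are hypothetical, not the SPI API):

#include <stdbool.h>
#include <stdlib.h>

struct dev {
        void *ctl_state;
};

/* stand-in for the hardware-dependent part that may fail on any call */
static int configure_transfer(struct dev *d)
{
        (void)d;
        return 0;
}

static int dev_setup(struct dev *d)
{
        bool initial_setup = false;
        int err;

        if (!d->ctl_state) {                    /* first call: allocate controller state */
                d->ctl_state = calloc(1, 64);
                if (!d->ctl_state)
                        return -1;
                initial_setup = true;
        }

        err = configure_transfer(d);
        if (err && initial_setup) {             /* free only what *this* call allocated */
                free(d->ctl_state);
                d->ctl_state = NULL;
        }
        return err;
}

int main(void)
{
        struct dev d = { 0 };

        return dev_setup(&d);
}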

Re: [PATCH openEuler-21.03] usb: chipidea: udc: assign interrupt number to USB gadget structure
by chengjian (D) 26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 17:19, chensiyan96 wrote:
> From: Li Jun <jun.li(a)nxp.com>
>
> stable inclusion
> from stable-v5.10.44
> commit 2e2145ccfbcb0dd38d8423681d22b595ca735846
> bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=376
> CVE: NA
>
> -------------------------------------------------
>
> [ Upstream commit 9e3927f6373da54cb17e17f4bd700907e1123d2f ]
>
> Chipidea also need sync interrupt before unbind the udc while
> gadget remove driver, otherwise setup irq handling may happen
> while unbind, see below dump generated from android function
> switch stress test:
>
> [ 4703.503056] android_work: sent uevent USB_STATE=CONNECTED
> [ 4703.514642] android_work: sent uevent USB_STATE=DISCONNECTED
> [ 4703.651339] android_work: sent uevent USB_STATE=CONNECTED
> [ 4703.661806] init: Control message: Processed ctl.stop for 'adbd' from pid: 561 (system_server)
> [ 4703.673469] init: processing action (init.svc.adbd=stopped) from (/system/etc/init/hw/init.usb.configfs.rc:14)
> [ 4703.676451] Unable to handle kernel read from unreadable memory at virtual address 0000000000000090
> [ 4703.676454] Mem abort info:
> [ 4703.676458] ESR = 0x96000004
> [ 4703.676461] EC = 0x25: DABT (current EL), IL = 32 bits
> [ 4703.676464] SET = 0, FnV = 0
> [ 4703.676466] EA = 0, S1PTW = 0
> [ 4703.676468] Data abort info:
> [ 4703.676471] ISV = 0, ISS = 0x00000004
> [ 4703.676473] CM = 0, WnR = 0
> [ 4703.676478] user pgtable: 4k pages, 48-bit VAs, pgdp=000000004a867000
> [ 4703.676481] [0000000000000090] pgd=0000000000000000, p4d=0000000000000000
> [ 4703.676503] Internal error: Oops: 96000004 [#1] PREEMPT SMP
> [ 4703.758297] Modules linked in: synaptics_dsx_i2c moal(O) mlan(O)
> [ 4703.764327] CPU: 0 PID: 235 Comm: lmkd Tainted: G W O 5.10.9-00001-g3f5fd8487c38-dirty #63
> [ 4703.773720] Hardware name: NXP i.MX8MNano EVK board (DT)
> [ 4703.779033] pstate: 60400085 (nZCv daIf +PAN -UAO -TCO BTYPE=--)
> [ 4703.785046] pc : _raw_write_unlock_bh+0xc0/0x2c8
> [ 4703.789667] lr : android_setup+0x4c/0x168
> [ 4703.793676] sp : ffff80001256bd80
> [ 4703.796989] x29: ffff80001256bd80 x28: 00000000000000a8
> [ 4703.802304] x27: ffff800012470000 x26: ffff80006d923000
> [ 4703.807616] x25: ffff800012471000 x24: ffff00000b091140
> [ 4703.812929] x23: ffff0000077dbd38 x22: ffff0000077da490
> [ 4703.818242] x21: ffff80001256be30 x20: 0000000000000000
> [ 4703.823554] x19: 0000000000000080 x18: ffff800012561048
> [ 4703.828867] x17: 0000000000000000 x16: 0000000000000039
> [ 4703.834180] x15: ffff8000106ad258 x14: ffff80001194c277
> [ 4703.839493] x13: 0000000000003934 x12: 0000000000000000
> [ 4703.844805] x11: 0000000000000000 x10: 0000000000000001
> [ 4703.850117] x9 : 0000000000000000 x8 : 0000000000000090
> [ 4703.855429] x7 : 6f72646e61203a70 x6 : ffff8000124f2450
> [ 4703.860742] x5 : ffffffffffffffff x4 : 0000000000000009
> [ 4703.866054] x3 : ffff8000108a290c x2 : ffff00007fb3a9c8
> [ 4703.871367] x1 : 0000000000000000 x0 : 0000000000000090
> [ 4703.876681] Call trace:
> [ 4703.879129] _raw_write_unlock_bh+0xc0/0x2c8
> [ 4703.883397] android_setup+0x4c/0x168
> [ 4703.887059] udc_irq+0x824/0xa9c
> [ 4703.890287] ci_irq+0x124/0x148
> [ 4703.893429] __handle_irq_event_percpu+0x84/0x268
> [ 4703.898131] handle_irq_event+0x64/0x14c
> [ 4703.902054] handle_fasteoi_irq+0x110/0x210
> [ 4703.906236] __handle_domain_irq+0x8c/0xd4
> [ 4703.910332] gic_handle_irq+0x6c/0x124
> [ 4703.914081] el1_irq+0xdc/0x1c0
> [ 4703.917221] _raw_spin_unlock_irq+0x20/0x54
> [ 4703.921405] finish_task_switch+0x84/0x224
> [ 4703.925502] __schedule+0x4a4/0x734
> [ 4703.928990] schedule+0xa0/0xe8
> [ 4703.932132] do_notify_resume+0x150/0x184
> [ 4703.936140] work_pending+0xc/0x40c
> [ 4703.939633] Code: d5384613 521b0a69 d5184609 f9800111 (885ffd01)
> [ 4703.945732] ---[ end trace ba5c1875ae49d53c ]---
> [ 4703.950350] Kernel panic - not syncing: Oops: Fatal exception in interrupt
> [ 4703.957223] SMP: stopping secondary CPUs
> [ 4703.961151] Kernel Offset: disabled
> [ 4703.964638] CPU features: 0x0240002,2000200c
> [ 4703.968905] Memory Limit: none
> [ 4703.971963] Rebooting in 5 seconds..
>
> Tested-by: faqiang.zhu <faqiang.zhu(a)nxp.com>
> Signed-off-by: Li Jun <jun.li(a)nxp.com>
> Link: https://lore.kernel.org/r/1620989984-7653-1-git-send-email-jun.li@nxp.com
> Signed-off-by: Peter Chen <peter.chen(a)kernel.org>
> Signed-off-by: Sasha Levin <sashal(a)kernel.org>
> Signed-off-by: chensiyan96 <3225973902(a)qq.com>
> ---
> drivers/usb/chipidea/udc.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c
> index 60ea932afe2b..5f35cdd2cf1d 100644
> --- a/drivers/usb/chipidea/udc.c
> +++ b/drivers/usb/chipidea/udc.c
> @@ -2055,6 +2055,7 @@ static int udc_start(struct ci_hdrc *ci)
> ci->gadget.name = ci->platdata->name;
> ci->gadget.otg_caps = otg_caps;
> ci->gadget.sg_supported = 1;
> + ci->gadget.irq = ci->irq;
>
> if (ci->platdata->flags & CI_HDRC_REQUIRES_ALIGNED_DMA)
> ci->gadget.quirk_avoids_skb_reserve = 1;

26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 17:25, ZhuoliHuang wrote:
> From: Dan Carpenter <dan.carpenter(a)oracle.com>
>
> stable inclusion
> from stable-v5.10.44
> commit be23c4af3d8a1b986fe9b43b8966797653a76ca4
> bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=341
> CVE: NA
>
> --------------------------------
>
> [ Upstream commit 1dde47a66d4fb181830d6fa000e5ea86907b639e ]
>
> We spotted a bug recently during a review where a driver was
> unregistering a bus that wasn't registered, which would trigger this
> BUG_ON(). Let's handle that situation more gracefully, and just print
> a warning and return.
>
> Reported-by: Russell King (Oracle) <rmk+kernel(a)armlinux.org.uk>
> Signed-off-by: Dan Carpenter <dan.carpenter(a)oracle.com>
> Reviewed-by: Russell King (Oracle) <rmk+kernel(a)armlinux.org.uk>
> Reviewed-by: Andrew Lunn <andrew(a)lunn.ch>
> Signed-off-by: David S. Miller <davem(a)davemloft.net>
> Signed-off-by: Sasha Levin <sashal(a)kernel.org>
> Signed-off-by: wangqing <wangqing(a)uniontech.com>
> Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
> Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
> ---
> drivers/net/phy/mdio_bus.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
> index 757e950fb745..b848439fa837 100644
> --- a/drivers/net/phy/mdio_bus.c
> +++ b/drivers/net/phy/mdio_bus.c
> @@ -608,7 +608,8 @@ void mdiobus_unregister(struct mii_bus *bus)
> struct mdio_device *mdiodev;
> int i;
>
> - BUG_ON(bus->state != MDIOBUS_REGISTERED);
> + if (WARN_ON_ONCE(bus->state != MDIOBUS_REGISTERED))
> + return;
> bus->state = MDIOBUS_UNREGISTERED;
>
> for (i = 0; i < PHY_MAX_ADDR; i++) {

26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 17:26, lihao wrote:
> From: Dan Carpenter <dan.carpenter(a)oracle.com>
>
> stable inclusion
> from stable-v5.10.44
> commit be23c4af3d8a1b986fe9b43b8966797653a76ca4
> bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=341
> CVE: NA
>
> --------------------------------
>
> [ Upstream commit 1dde47a66d4fb181830d6fa000e5ea86907b639e ]
>
> We spotted a bug recently during a review where a driver was
> unregistering a bus that wasn't registered, which would trigger this
> BUG_ON(). Let's handle that situation more gracefully, and just print
> a warning and return.
>
> Reported-by: Russell King (Oracle) <rmk+kernel(a)armlinux.org.uk>
> Signed-off-by: Dan Carpenter <dan.carpenter(a)oracle.com>
> Reviewed-by: Russell King (Oracle) <rmk+kernel(a)armlinux.org.uk>
> Reviewed-by: Andrew Lunn <andrew(a)lunn.ch>
> Signed-off-by: David S. Miller <davem(a)davemloft.net>
> Signed-off-by: Sasha Levin <sashal(a)kernel.org>
> Signed-off-by: wangqing <wangqing(a)uniontech.com>
> Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
> Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
> ---
> drivers/net/phy/mdio_bus.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
> index 757e950fb745..b848439fa837 100644
> --- a/drivers/net/phy/mdio_bus.c
> +++ b/drivers/net/phy/mdio_bus.c
> @@ -608,7 +608,8 @@ void mdiobus_unregister(struct mii_bus *bus)
> struct mdio_device *mdiodev;
> int i;
>
> - BUG_ON(bus->state != MDIOBUS_REGISTERED);
> + if (WARN_ON_ONCE(bus->state != MDIOBUS_REGISTERED))
> + return;
> bus->state = MDIOBUS_UNREGISTERED;
>
> for (i = 0; i < PHY_MAX_ADDR; i++) {

26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 17:27, zanderzhao wrote:
> From: Dan Carpenter <dan.carpenter(a)oracle.com>
>
> stable inclusion
> from stable-v5.10.44
> commit be23c4af3d8a1b986fe9b43b8966797653a76ca4
> bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=341
> CVE: NA
>
> --------------------------------
>
> [ Upstream commit 1dde47a66d4fb181830d6fa000e5ea86907b639e ]
>
> We spotted a bug recently during a review where a driver was
> unregistering a bus that wasn't registered, which would trigger this
> BUG_ON(). Let's handle that situation more gracefully, and just print
> a warning and return.
>
> Reported-by: Russell King (Oracle) <rmk+kernel(a)armlinux.org.uk>
> Signed-off-by: Dan Carpenter <dan.carpenter(a)oracle.com>
> Reviewed-by: Russell King (Oracle) <rmk+kernel(a)armlinux.org.uk>
> Reviewed-by: Andrew Lunn <andrew(a)lunn.ch>
> Signed-off-by: David S. Miller <davem(a)davemloft.net>
> Signed-off-by: Sasha Levin <sashal(a)kernel.org>
> Signed-off-by: wangqing <wangqing(a)uniontech.com>
> Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
> Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
> ---
> drivers/net/phy/mdio_bus.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
> index 757e950fb745..b848439fa837 100644
> --- a/drivers/net/phy/mdio_bus.c
> +++ b/drivers/net/phy/mdio_bus.c
> @@ -608,7 +608,8 @@ void mdiobus_unregister(struct mii_bus *bus)
> struct mdio_device *mdiodev;
> int i;
>
> - BUG_ON(bus->state != MDIOBUS_REGISTERED);
> + if (WARN_ON_ONCE(bus->state != MDIOBUS_REGISTERED))
> + return;
> bus->state = MDIOBUS_UNREGISTERED;
>
> for (i = 0; i < PHY_MAX_ADDR; i++) {

Re: [PATCH openEuler-21.03] scsi: core: Fix failure handling of scsi_add_host_with_dma()
by chengjian (D) 26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 17:32, dongenyang wrote:
> From: Ming Lei <ming.lei(a)redhat.com>
>
> stable inclusion
> from stable-v5.10.44
> commit 146446a43b3dbaa3a58364ef99fd606b3f324832
> bugzilla:https://bugzilla.openeuler.org/show_bug.cgi?id=361
> CVE: NA
>
> -------------------------------------------------
>
> commit 3719f4ff047e20062b8314c23ec3cab84d74c908 upstream.
>
> When scsi_add_host_with_dma() returns failure, the caller will call
> scsi_host_put(shost) to release everything allocated for this host
> instance. Consequently we can't also free allocated stuff in
> scsi_add_host_with_dma(), otherwise we will end up with a double free.
>
> Strictly speaking, host resource allocations should have been done in
> scsi_host_alloc(). However, the allocations may need information which is
> not yet provided by the driver when that function is called. So leave the
> allocations where they are but rely on host device's release handler to
> free resources.
>
> Link: https://lore.kernel.org/r/20210602133029.2864069-3-ming.lei@redhat.com
> Cc: Bart Van Assche <bvanassche(a)acm.org>
> Cc: John Garry <john.garry(a)huawei.com>
> Cc: Hannes Reinecke <hare(a)suse.de>
> Tested-by: John Garry <john.garry(a)huawei.com>
> Reviewed-by: Bart Van Assche <bvanassche(a)acm.org>
> Reviewed-by: John Garry <john.garry(a)huawei.com>
> Reviewed-by: Hannes Reinecke <hare(a)suse.de>
> Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
> Signed-off-by: Martin K. Petersen <martin.petersen(a)oracle.com>
> Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Signed-off-by: dongenyang <sdlpdey(a)163.com>
> ---
> drivers/scsi/hosts.c | 14 ++++++--------
> 1 file changed, 6 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
> index 2f162603876f..85ec3cce43f1 100644
> --- a/drivers/scsi/hosts.c
> +++ b/drivers/scsi/hosts.c
> @@ -278,23 +278,22 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
>
> if (!shost->work_q) {
> error = -EINVAL;
> - goto out_free_shost_data;
> + goto out_del_dev;
> }
> }
>
> error = scsi_sysfs_add_host(shost);
> if (error)
> - goto out_destroy_host;
> + goto out_del_dev;
>
> scsi_proc_host_add(shost);
> scsi_autopm_put_host(shost);
> return error;
>
> - out_destroy_host:
> - if (shost->work_q)
> - destroy_workqueue(shost->work_q);
> - out_free_shost_data:
> - kfree(shost->shost_data);
> + /*
> + * Any host allocation in this function will be freed in
> + * scsi_host_dev_release().
> + */
> out_del_dev:
> device_del(&shost->shost_dev);
> out_del_gendev:
> @@ -304,7 +303,6 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
> pm_runtime_disable(&shost->shost_gendev);
> pm_runtime_set_suspended(&shost->shost_gendev);
> pm_runtime_put_noidle(&shost->shost_gendev);
> - scsi_mq_destroy_tags(shost);
> fail:
> return error;
> }
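The double free the patch avoids follows a common ownership rule: once an object has a release handler, only that handler frees the resources attached to it, so error paths must not free them as well. A self-contained toy illustration of that rule (all names hypothetical, not the SCSI code):

#include <stdio.h>
#include <stdlib.h>

struct host {
    int refcnt;
    void *shost_data;
};

static void host_release(struct host *h)
{
    free(h->shost_data);    /* the one and only place this is freed */
    free(h);
}

static void host_put(struct host *h)
{
    if (--h->refcnt == 0)
        host_release(h);
}

static int host_add(struct host *h)
{
    h->shost_data = malloc(64);
    /* ... a later step fails ... */
    /* Wrong: freeing h->shost_data here would double free, because the
     * caller still calls host_put() and host_release() frees it again.
     * Right: just return the error and let the release handler clean up. */
    return -1;
}

int main(void)
{
    struct host *h = calloc(1, sizeof(*h));

    h->refcnt = 1;
    if (host_add(h) < 0)
        fprintf(stderr, "host_add failed, dropping reference\n");
    host_put(h);    /* frees shost_data exactly once */
    return 0;
}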

26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 17:32, xjx00 wrote:
> From: Maciej Żenczykowski <maze(a)google.com>
>
> stable inclusion
> from stable-v5.10.44
> commit 0f5a20b1fd9da3ac9f7c6edcad522712ca694d5c
> bugzilla:https://bugzilla.openeuler.org/show_bug.cgi?id=358
> CVE: NA
>
> -------------------------------------------------
>
> commit 3370139745853f7826895293e8ac3aec1430508e upstream.
>
> [ 190.544755] configfs-gadget gadget: notify speed -44967296
>
> This is because 4250000000 - 2**32 is -44967296.
>
> Fixes: 9f6ce4240a2b ("usb: gadget: f_ncm.c added")
> Cc: Brooke Basile <brookebasile(a)gmail.com>
> Cc: Bryan O'Donoghue <bryan.odonoghue(a)linaro.org>
> Cc: Felipe Balbi <balbi(a)kernel.org>
> Cc: Lorenzo Colitti <lorenzo(a)google.com>
> Cc: Yauheni Kaliuta <yauheni.kaliuta(a)nokia.com>
> Cc: Linux USB Mailing List <linux-usb(a)vger.kernel.org>
> Acked-By: Lorenzo Colitti <lorenzo(a)google.com>
> Signed-off-by: Maciej Żenczykowski <maze(a)google.com>
> Cc: stable <stable(a)vger.kernel.org>
> Link: https://lore.kernel.org/r/20210608005344.3762668-1-zenczykowski@gmail.com
> Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Signed-off-by: xjx00 <xjxyklwx(a)126.com>
> ---
> drivers/usb/gadget/function/f_ncm.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c
> index 019bea8e09cc..0d23c6c11a13 100644
> --- a/drivers/usb/gadget/function/f_ncm.c
> +++ b/drivers/usb/gadget/function/f_ncm.c
> @@ -583,7 +583,7 @@ static void ncm_do_notify(struct f_ncm *ncm)
> data[0] = cpu_to_le32(ncm_bitrate(cdev->gadget));
> data[1] = data[0];
>
> - DBG(cdev, "notify speed %d\n", ncm_bitrate(cdev->gadget));
> + DBG(cdev, "notify speed %u\n", ncm_bitrate(cdev->gadget));
> ncm->notify_state = NCM_NOTIFY_CONNECT;
> break;
> }
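To see why the log printed -44967296, here is a tiny standalone program reproducing the signed/unsigned mix-up (illustration only; the cast behaves this way on the usual two's-complement targets):

#include <stdio.h>

int main(void)
{
    /* SuperSpeed bitrate reported by the gadget, in bits per second. */
    unsigned int speed = 4250000000u;

    /* %d reinterprets the value as signed: 4250000000 - 2^32 = -44967296,
     * the bogus number seen in the log above. */
    printf("with %%d: %d\n", (int)speed);

    /* %u keeps the intended unsigned interpretation. */
    printf("with %%u: %u\n", speed);
    return 0;
}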

Re: [PATCH openEuler-21.03] gpio: wcd934x: Fix shift-out-of-bounds error
by chengjian (D) 26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 20:18, zcj wrote:
> From: Srinivas Kandagatla <srinivas.kandagatla(a)linaro.org>
>
> stable inclusion
> from stable-v5.10.44
> commit e0b518a2eb44d8a74c19e50f79a8ed393e96d634
> bugzilla:https://bugzilla.openeuler.org/show_bug.cgi?id=463
> CVE: NA
>
> -------------------------------------------------
>
> commit dbec64b11c65d74f31427e2b9d5746fbf17bf840 upstream.
>
> bit-mask for pins 0 to 4 is BIT(0) to BIT(4) however we ended up with BIT(n - 1)
> which is not right, and this was caught by below usban check
>
> UBSAN: shift-out-of-bounds in drivers/gpio/gpio-wcd934x.c:34:14
>
> Fixes: 59c324683400 ("gpio: wcd934x: Add support to wcd934x gpio controller")
> Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla(a)linaro.org>
> Reviewed-by: Andy Shevchenko <andy.shevchenko(a)gmail.com>
> Reviewed-by: Bjorn Andersson <bjorn.andersson(a)linaro.org>
> Signed-off-by: Bartosz Golaszewski <bgolaszewski(a)baylibre.com>
> Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Signed-off-by: zcj <2459770937(a)qq.com>
> ---
> drivers/gpio/gpio-wcd934x.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpio/gpio-wcd934x.c b/drivers/gpio/gpio-wcd934x.c
> index 1cbce5990855..97e6caedf1f3 100644
> --- a/drivers/gpio/gpio-wcd934x.c
> +++ b/drivers/gpio/gpio-wcd934x.c
> @@ -7,7 +7,7 @@
> #include <linux/slab.h>
> #include <linux/of_device.h>
>
> -#define WCD_PIN_MASK(p) BIT(p - 1)
> +#define WCD_PIN_MASK(p) BIT(p)
> #define WCD_REG_DIR_CTL_OFFSET 0x42
> #define WCD_REG_VAL_CTL_OFFSET 0x43
> #define WCD934X_NPINS 5
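A small standalone illustration of why BIT(p - 1) is wrong for a pin range that starts at 0 (not driver code, just the macro arithmetic):

#include <stdio.h>

#define BIT(n)  (1UL << (n))

int main(void)
{
    unsigned int pin;

    /* Pins are numbered 0..4, so the masks must be BIT(0)..BIT(4). */
    for (pin = 0; pin < 5; pin++)
        printf("pin %u -> mask 0x%02lx\n", pin, BIT(pin));

    /* The old macro, BIT(pin - 1), would evaluate 1UL << -1 for pin 0:
     * a negative shift count is undefined behaviour, which is exactly
     * what UBSAN flagged in gpio-wcd934x.c. */
    return 0;
}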

Re: [PATCH openEuler-21.03] regulator: fan53880: Fix missing n_voltages setting
by chengjian (D) 26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 20:49, lantianbaiyun wrote:
> From: Axel Lin <axel.lin(a)ingics.com>
>
> stable inclusion
> from stable-v5.10.44
> commit 5a5f5cfb5f0996d65eae3cc034513d90f4be6783
> bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=401
> CVE: NA
>
> -------------------------------------------------
>
> commit 34991ee96fd8477479dd15adadceb6b28b30d9b0 upstream.
>
> Fixes: e6dea51e2d41 ("regulator: fan53880: Add initial support")
> Signed-off-by: Axel Lin <axel.lin(a)ingics.com>
> Acked-by: Christoph Fritz <chf.fritz(a)googlemail.com>
> Link: https://lore.kernel.org/r/20210517105325.1227393-1-axel.lin@ingics.com
> Signed-off-by: Mark Brown <broonie(a)kernel.org>
> Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Signed-off-by: lantianbaiyun <lianyi21(a)mails.ucas.ac.cn>
> ---
> drivers/regulator/fan53880.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/drivers/regulator/fan53880.c b/drivers/regulator/fan53880.c
> index e83eb4fb1876..1684faf82ed2 100644
> --- a/drivers/regulator/fan53880.c
> +++ b/drivers/regulator/fan53880.c
> @@ -51,6 +51,7 @@ static const struct regulator_ops fan53880_ops = {
> REGULATOR_LINEAR_RANGE(800000, 0xf, 0x73, 25000), \
> }, \
> .n_linear_ranges = 2, \
> + .n_voltages = 0x74, \
> .vsel_reg = FAN53880_LDO ## _num ## VOUT, \
> .vsel_mask = 0x7f, \
> .enable_reg = FAN53880_ENABLE, \
> @@ -76,6 +77,7 @@ static const struct regulator_desc fan53880_regulators[] = {
> REGULATOR_LINEAR_RANGE(600000, 0x1f, 0xf7, 12500),
> },
> .n_linear_ranges = 2,
> + .n_voltages = 0xf8,
> .vsel_reg = FAN53880_BUCKVOUT,
> .vsel_mask = 0x7f,
> .enable_reg = FAN53880_ENABLE,
> @@ -95,6 +97,7 @@ static const struct regulator_desc fan53880_regulators[] = {
> REGULATOR_LINEAR_RANGE(3000000, 0x4, 0x70, 25000),
> },
> .n_linear_ranges = 2,
> + .n_voltages = 0x71,
> .vsel_reg = FAN53880_BOOSTVOUT,
> .vsel_mask = 0x7f,
> .enable_reg = FAN53880_ENABLE_BOOST,

Re: [PATCH openEuler-21.03] NFSv4: Fix second deadlock in nfs4_evict_inode()
by chengjian (D) 26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 20:55, smilezhangs wrote:
> From: Trond Myklebust <trond.myklebust(a)hammerspace.com>
>
> stable inclusion
> from stable-v5.10.44
> commit d973bd0d6e7f9b4ea976cc619e8d6e0d235b9056
> bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=462
> CVE: NA
>
> -------------------------------------------------
>
> commit c3aba897c6e67fa464ec02b1f17911577d619713 upstream.
>
> If the inode is being evicted but has to return a layout first, then
> that too can cause a deadlock in the corner case where the server
> reboots.
>
> Signed-off-by: Trond Myklebust <trond.myklebust(a)hammerspace.com>
> Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Signed-off-by: Hang <haihangyiyuan(a)163.com>
> Reviewed-by: Jian Cheng <cj.chengjian(a)huawei.com>
> Signed-off-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
> ---
> fs/nfs/nfs4proc.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
> index c92d6ff0fcea..eedcbe6832fb 100644
> --- a/fs/nfs/nfs4proc.c
> +++ b/fs/nfs/nfs4proc.c
> @@ -9619,15 +9619,20 @@ int nfs4_proc_layoutreturn(struct nfs4_layoutreturn *lrp, bool sync)
> &task_setup_data.rpc_client, &msg);
>
> dprintk("--> %s\n", __func__);
> + lrp->inode = nfs_igrab_and_active(lrp->args.inode);
> if (!sync) {
> - lrp->inode = nfs_igrab_and_active(lrp->args.inode);
> if (!lrp->inode) {
> nfs4_layoutreturn_release(lrp);
> return -EAGAIN;
> }
> task_setup_data.flags |= RPC_TASK_ASYNC;
> }
> - nfs4_init_sequence(&lrp->args.seq_args, &lrp->res.seq_res, 1, 0);
> + if (!lrp->inode)
> + nfs4_init_sequence(&lrp->args.seq_args, &lrp->res.seq_res, 1,
> + 1);
> + else
> + nfs4_init_sequence(&lrp->args.seq_args, &lrp->res.seq_res, 1,
> + 0);
> task = rpc_run_task(&task_setup_data);
> if (IS_ERR(task))
> return PTR_ERR(task);

Re: [PATCH openEuler-21.03] net/nfc/rawsock.c: fix a permission check bug
by chengjian (D) 26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 21:10, lf wrote:
> From: Jeimon <jjjinmeng.zhou(a)gmail.com>
>
> stable inclusion
> from stable-v5.10.44
> commit 1e5cab50208c8fb7351b798cb1d569debfeb994a
> bugzilla:https://bugzilla.openeuler.org/show_bug.cgi?id=371
> CVE: NA
>
> -------------------------------------------------
>
> [ Upstream commit 8ab78863e9eff11910e1ac8bcf478060c29b379e ]
>
> The function rawsock_create() calls a privileged function sk_alloc(), which requires a ns-aware check to check net->user_ns, i.e., ns_capable(). However, the original code checks the init_user_ns using capable(). So we replace the capable() with ns_capable().
>
> Signed-off-by: Jeimon <jjjinmeng.zhou(a)gmail.com>
> Signed-off-by: David S. Miller <davem(a)davemloft.net>
> Signed-off-by: Sasha Levin <sashal(a)kernel.org>
> Signed-off-by: lf <15042944259(a)163.com>
> ---
> net/nfc/rawsock.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/net/nfc/rawsock.c b/net/nfc/rawsock.c
> index 9c7eb8455ba8..5f1d438a0a23 100644
> --- a/net/nfc/rawsock.c
> +++ b/net/nfc/rawsock.c
> @@ -329,7 +329,7 @@ static int rawsock_create(struct net *net, struct socket *sock,
> return -ESOCKTNOSUPPORT;
>
> if (sock->type == SOCK_RAW) {
> - if (!capable(CAP_NET_RAW))
> + if (!ns_capable(net->user_ns, CAP_NET_RAW))
> return -EPERM;
> sock->ops = &rawsock_raw_ops;
> } else {

26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 21:09, lf wrote:
> From: Dan Carpenter <dan.carpenter(a)oracle.com>
>
> stable inclusion
> from stable-v5.10.44
> commit be23c4af3d8a1b986fe9b43b8966797653a76ca4
> bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=341
> CVE: NA
>
> --------------------------------
>
> [ Upstream commit 1dde47a66d4fb181830d6fa000e5ea86907b639e ]
>
> We spotted a bug recently during a review where a driver was
> unregistering a bus that wasn't registered, which would trigger this
> BUG_ON(). Let's handle that situation more gracefully, and just print
> a warning and return.
>
> Reported-by: Russell King (Oracle) <rmk+kernel(a)armlinux.org.uk>
> Signed-off-by: Dan Carpenter <dan.carpenter(a)oracle.com>
> Reviewed-by: Russell King (Oracle) <rmk+kernel(a)armlinux.org.uk>
> Reviewed-by: Andrew Lunn <andrew(a)lunn.ch>
> Signed-off-by: David S. Miller <davem(a)davemloft.net>
> Signed-off-by: Sasha Levin <sashal(a)kernel.org>
> Signed-off-by: wangqing <wangqing(a)uniontech.com>
> Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
> Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
> ---
> drivers/net/phy/mdio_bus.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
> index 757e950fb745..b848439fa837 100644
> --- a/drivers/net/phy/mdio_bus.c
> +++ b/drivers/net/phy/mdio_bus.c
> @@ -608,7 +608,8 @@ void mdiobus_unregister(struct mii_bus *bus)
> struct mdio_device *mdiodev;
> int i;
>
> - BUG_ON(bus->state != MDIOBUS_REGISTERED);
> + if (WARN_ON_ONCE(bus->state != MDIOBUS_REGISTERED))
> + return;
> bus->state = MDIOBUS_UNREGISTERED;
>
> for (i = 0; i < PHY_MAX_ADDR; i++) {

Re: [PATCH openEuler-21.03] Revert "ACPI: sleep: Put the FACS table after using it"
by chengjian (D) 26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 21:18, ws wrote:
> From: Zhang Rui <rui.zhang(a)intel.com>
>
> stable inclusion
> from stable-v5.10.44
> commit afd87792db355282c4608356b98bb2dd650a6885
> Bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=438
> CVE: NA
>
> -------------------------------------------------
>
> commit f1ffa9d4cccc8fdf6c03fb1b3429154d22037988 upstream.
>
> Commit 95722237cb2a ("ACPI: sleep: Put the FACS table after using it")
> puts the FACS table during initialization.
>
> But the hardware signature bits in the FACS table need to be accessed,
> after every hibernation, to compare with the original hardware
> signature.
>
> So there is no reason to release the FACS table mapping after
> initialization.
>
> This reverts commit 95722237cb2ae4f7b73471058cdb19e8f4057c93.
>
> An alternative solution is to use acpi_gbl_FACS variable instead, which
> is mapped by the ACPICA core and never released.
>
> Link: https://bugzilla.kernel.org/show_bug.cgi?id=212277
> Reported-by: Stephan Hohe <sth.dev(a)tejp.de>
> Signed-off-by: Zhang Rui <rui.zhang(a)intel.com>
> Cc: 5.8+ <stable(a)vger.kernel.org> # 5.8+
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
> Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Signed-off-by: ws <3045672234(a)qq.com>
> ---
> drivers/acpi/sleep.c | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
> index aff13bf4d947..31c9d0c8ae11 100644
> --- a/drivers/acpi/sleep.c
> +++ b/drivers/acpi/sleep.c
> @@ -1290,10 +1290,8 @@ static void acpi_sleep_hibernate_setup(void)
> return;
>
> acpi_get_table(ACPI_SIG_FACS, 1, (struct acpi_table_header **)&facs);
> - if (facs) {
> + if (facs)
> s4_hardware_signature = facs->hardware_signature;
> - acpi_put_table((struct acpi_table_header *)facs);
> - }
> }
> #else /* !CONFIG_HIBERNATION */
> static inline void acpi_sleep_hibernate_setup(void) {}

26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 21:27, lcr wrote:
> From: Dan Carpenter <dan.carpenter(a)oracle.com>
>
> stable inclusion
> from stable-v5.10.44
> commit be23c4af3d8a1b986fe9b43b8966797653a76ca4
> bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=341
> CVE: NA
>
> --------------------------------
>
> [ Upstream commit 1dde47a66d4fb181830d6fa000e5ea86907b639e ]
>
> We spotted a bug recently during a review where a driver was
> unregistering a bus that wasn't registered, which would trigger this
> BUG_ON(). Let's handle that situation more gracefully, and just print
> a warning and return.
>
> Reported-by: Russell King (Oracle) <rmk+kernel(a)armlinux.org.uk>
> Signed-off-by: Dan Carpenter <dan.carpenter(a)oracle.com>
> Reviewed-by: Russell King (Oracle) <rmk+kernel(a)armlinux.org.uk>
> Reviewed-by: Andrew Lunn <andrew(a)lunn.ch>
> Signed-off-by: David S. Miller <davem(a)davemloft.net>
> Signed-off-by: Sasha Levin <sashal(a)kernel.org>
> Signed-off-by: wangqing <wangqing(a)uniontech.com>
> Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
> Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
> ---
> drivers/net/phy/mdio_bus.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
> index 757e950fb745..b848439fa837 100644
> --- a/drivers/net/phy/mdio_bus.c
> +++ b/drivers/net/phy/mdio_bus.c
> @@ -608,7 +608,8 @@ void mdiobus_unregister(struct mii_bus *bus)
> struct mdio_device *mdiodev;
> int i;
>
> - BUG_ON(bus->state != MDIOBUS_REGISTERED);
> + if (WARN_ON_ONCE(bus->state != MDIOBUS_REGISTERED))
> + return;
> bus->state = MDIOBUS_UNREGISTERED;
>
> for (i = 0; i < PHY_MAX_ADDR; i++) {

Re: [PATCH openEuler-21.03] RDMA/mlx4: Do not map the core_clock page to user space unless enabled
by chengjian (D) 26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 21:27, gaodawei wrote:
> From: Shay Drory <shayd(a)nvidia.com>
>
> stable inclusion
> from stable-v5.10.44
> commit cb1aa1da04882d1860f733e24aeebdbbc85724d7
> bugzilla:https://bugzilla.openeuler.org/show_bug.cgi?id=454
> CVE: NA
>
> -------------------------------------------------
>
> commit 404e5a12691fe797486475fe28cc0b80cb8bef2c upstream.
>
> Currently when mlx4 maps the hca_core_clock page to the user space there
> are read-modifiable registers, one of which is semaphore, on this page as
> well as the clock counter. If user reads the wrong offset, it can modify
> the semaphore and hang the device.
>
> Do not map the hca_core_clock page to the user space unless the device has
> been put in a backwards compatibility mode to support this feature.
>
> After this patch, mlx4 core_clock won't be mapped to user space on the
> majority of existing devices and the uverbs device time feature in
> ibv_query_rt_values_ex() will be disabled.
>
> Fixes: 52033cfb5aab ("IB/mlx4: Add mmap call to map the hardware clock")
> Link: https://lore.kernel.org/r/9632304e0d6790af84b3b706d8c18732bc0d5e27.16227263…
> Signed-off-by: Shay Drory <shayd(a)nvidia.com>
> Signed-off-by: Leon Romanovsky <leonro(a)nvidia.com>
> Signed-off-by: Jason Gunthorpe <jgg(a)nvidia.com>
> Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Signed-off-by: gaodawei <810789471(a)qq.com>
> ---
> drivers/infiniband/hw/mlx4/main.c | 5 +----
> drivers/net/ethernet/mellanox/mlx4/fw.c | 3 +++
> drivers/net/ethernet/mellanox/mlx4/fw.h | 1 +
> drivers/net/ethernet/mellanox/mlx4/main.c | 6 ++++++
> include/linux/mlx4/device.h | 1 +
> 5 files changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
> index cd0fba6b0964..7b11aff8a5ea 100644
> --- a/drivers/infiniband/hw/mlx4/main.c
> +++ b/drivers/infiniband/hw/mlx4/main.c
> @@ -580,12 +580,9 @@ static int mlx4_ib_query_device(struct ib_device *ibdev,
> props->cq_caps.max_cq_moderation_count = MLX4_MAX_CQ_COUNT;
> props->cq_caps.max_cq_moderation_period = MLX4_MAX_CQ_PERIOD;
>
> - if (!mlx4_is_slave(dev->dev))
> - err = mlx4_get_internal_clock_params(dev->dev, &clock_params);
> -
> if (uhw->outlen >= resp.response_length + sizeof(resp.hca_core_clock_offset)) {
> resp.response_length += sizeof(resp.hca_core_clock_offset);
> - if (!err && !mlx4_is_slave(dev->dev)) {
> + if (!mlx4_get_internal_clock_params(dev->dev, &clock_params)) {
> resp.comp_mask |= MLX4_IB_QUERY_DEV_RESP_MASK_CORE_CLOCK_OFFSET;
> resp.hca_core_clock_offset = clock_params.offset % PAGE_SIZE;
> }
> diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.c b/drivers/net/ethernet/mellanox/mlx4/fw.c
> index f6cfec81ccc3..dc4ac1a2b6b6 100644
> --- a/drivers/net/ethernet/mellanox/mlx4/fw.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/fw.c
> @@ -823,6 +823,7 @@ int mlx4_QUERY_DEV_CAP(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
> #define QUERY_DEV_CAP_MAD_DEMUX_OFFSET 0xb0
> #define QUERY_DEV_CAP_DMFS_HIGH_RATE_QPN_BASE_OFFSET 0xa8
> #define QUERY_DEV_CAP_DMFS_HIGH_RATE_QPN_RANGE_OFFSET 0xac
> +#define QUERY_DEV_CAP_MAP_CLOCK_TO_USER 0xc1
> #define QUERY_DEV_CAP_QP_RATE_LIMIT_NUM_OFFSET 0xcc
> #define QUERY_DEV_CAP_QP_RATE_LIMIT_MAX_OFFSET 0xd0
> #define QUERY_DEV_CAP_QP_RATE_LIMIT_MIN_OFFSET 0xd2
> @@ -841,6 +842,8 @@ int mlx4_QUERY_DEV_CAP(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
>
> if (mlx4_is_mfunc(dev))
> disable_unsupported_roce_caps(outbox);
> + MLX4_GET(field, outbox, QUERY_DEV_CAP_MAP_CLOCK_TO_USER);
> + dev_cap->map_clock_to_user = field & 0x80;
> MLX4_GET(field, outbox, QUERY_DEV_CAP_RSVD_QP_OFFSET);
> dev_cap->reserved_qps = 1 << (field & 0xf);
> MLX4_GET(field, outbox, QUERY_DEV_CAP_MAX_QP_OFFSET);
> diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.h b/drivers/net/ethernet/mellanox/mlx4/fw.h
> index 8f020f26ebf5..cf64e54eecb0 100644
> --- a/drivers/net/ethernet/mellanox/mlx4/fw.h
> +++ b/drivers/net/ethernet/mellanox/mlx4/fw.h
> @@ -131,6 +131,7 @@ struct mlx4_dev_cap {
> u32 health_buffer_addrs;
> struct mlx4_port_cap port_cap[MLX4_MAX_PORTS + 1];
> bool wol_port[MLX4_MAX_PORTS + 1];
> + bool map_clock_to_user;
> };
>
> struct mlx4_func_cap {
> diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
> index c326b434734e..00c84656b2e7 100644
> --- a/drivers/net/ethernet/mellanox/mlx4/main.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/main.c
> @@ -498,6 +498,7 @@ static int mlx4_dev_cap(struct mlx4_dev *dev, struct mlx4_dev_cap *dev_cap)
> }
> }
>
> + dev->caps.map_clock_to_user = dev_cap->map_clock_to_user;
> dev->caps.uar_page_size = PAGE_SIZE;
> dev->caps.num_uars = dev_cap->uar_size / PAGE_SIZE;
> dev->caps.local_ca_ack_delay = dev_cap->local_ca_ack_delay;
> @@ -1948,6 +1949,11 @@ int mlx4_get_internal_clock_params(struct mlx4_dev *dev,
> if (mlx4_is_slave(dev))
> return -EOPNOTSUPP;
>
> + if (!dev->caps.map_clock_to_user) {
> + mlx4_dbg(dev, "Map clock to user is not supported.\n");
> + return -EOPNOTSUPP;
> + }
> +
> if (!params)
> return -EINVAL;
>
> diff --git a/include/linux/mlx4/device.h b/include/linux/mlx4/device.h
> index 06e066e04a4b..eb8169c03d89 100644
> --- a/include/linux/mlx4/device.h
> +++ b/include/linux/mlx4/device.h
> @@ -631,6 +631,7 @@ struct mlx4_caps {
> bool wol_port[MLX4_MAX_PORTS + 1];
> struct mlx4_rate_limit_caps rl_caps;
> u32 health_buffer_addrs;
> + bool map_clock_to_user;
> };
>
> struct mlx4_buf_list {

Re: [PATCH openEuler-21.03] NFSv4: Fix second deadlock in nfs4_evict_inode()
by chengjian (D) 26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 21:31, smilezhangs wrote:
> From: Trond Myklebust <trond.myklebust(a)hammerspace.com>
>
> stable inclusion
> from stable-v5.10.44
> commit d973bd0d6e7f9b4ea976cc619e8d6e0d235b9056
> bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=462
> CVE: NA
>
> -------------------------------------------------
>
> commit c3aba897c6e67fa464ec02b1f17911577d619713 upstream.
>
> If the inode is being evicted but has to return a layout first, then
> that too can cause a deadlock in the corner case where the server
> reboots.
>
> Signed-off-by: Trond Myklebust <trond.myklebust(a)hammerspace.com>
> Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Signed-off-by: Hang <haihangyiyuan(a)163.com>
> Reviewed-by: Jian Cheng <cj.chengjian(a)huawei.com>
> Signed-off-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
> ---
> fs/nfs/nfs4proc.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
> index c92d6ff0fcea..eedcbe6832fb 100644
> --- a/fs/nfs/nfs4proc.c
> +++ b/fs/nfs/nfs4proc.c
> @@ -9619,15 +9619,20 @@ int nfs4_proc_layoutreturn(struct nfs4_layoutreturn *lrp, bool sync)
> &task_setup_data.rpc_client, &msg);
>
> dprintk("--> %s\n", __func__);
> + lrp->inode = nfs_igrab_and_active(lrp->args.inode);
> if (!sync) {
> - lrp->inode = nfs_igrab_and_active(lrp->args.inode);
> if (!lrp->inode) {
> nfs4_layoutreturn_release(lrp);
> return -EAGAIN;
> }
> task_setup_data.flags |= RPC_TASK_ASYNC;
> }
> - nfs4_init_sequence(&lrp->args.seq_args, &lrp->res.seq_res, 1, 0);
> + if (!lrp->inode)
> + nfs4_init_sequence(&lrp->args.seq_args, &lrp->res.seq_res, 1,
> + 1);
> + else
> + nfs4_init_sequence(&lrp->args.seq_args, &lrp->res.seq_res, 1,
> + 0);
> task = rpc_run_task(&task_setup_data);
> if (IS_ERR(task))
> return PTR_ERR(task);

Re: [PATCH openEuler-21.03] scsi: core: Only put parent device if host state differs from SHOST_CREATED
by chengjian (D) 26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 21:37, yangshuo wrote:
> From: Ming Lei <ming.lei(a)redhat.com>
>
> stable inclusion
> from stable-v5.10.44
> commit 5b537408f2733d510060e72596befa44c3435cb6
> bugzilla:https://bugzilla.openeuler.org/show_bug.cgi?id=403
> CVE: NA
>
> -------------------------------------------------
>
> commit 1e0d4e6225996f05271de1ebcb1a7c9381af0b27 upstream.
>
> get_device(shost->shost_gendev.parent) is called after host state has
> switched to SHOST_RUNNING. scsi_host_dev_release() shouldn't release the
> parent device if host state is still SHOST_CREATED.
>
> Link: https://lore.kernel.org/r/20210602133029.2864069-5-ming.lei@redhat.com
> Cc: Bart Van Assche <bvanassche(a)acm.org>
> Cc: John Garry <john.garry(a)huawei.com>
> Cc: Hannes Reinecke <hare(a)suse.de>
> Tested-by: John Garry <john.garry(a)huawei.com>
> Reviewed-by: John Garry <john.garry(a)huawei.com>
> Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
> Signed-off-by: Martin K. Petersen <martin.petersen(a)oracle.com>
> Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Signed-off-by: yangshuo <look4polaris(a)163.com>
> ---
> drivers/scsi/hosts.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
> index 2f162603876f..f1b7061aa0d9 100644
> --- a/drivers/scsi/hosts.c
> +++ b/drivers/scsi/hosts.c
> @@ -345,7 +345,7 @@ static void scsi_host_dev_release(struct device *dev)
>
> ida_simple_remove(&host_index_ida, shost->host_no);
>
> - if (parent)
> + if (shost->shost_state != SHOST_CREATED)
> put_device(parent);
> kfree(shost);
> }
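The rule being enforced here is simply "only put a reference you actually took". A toy standalone sketch of that idea (hypothetical names, not the SCSI midlayer code):

#include <stdio.h>

enum host_state { SHOST_CREATED, SHOST_RUNNING };

struct obj { int refcnt; };

static void get_ref(struct obj *o) { o->refcnt++; }
static void put_ref(struct obj *o) { o->refcnt--; }

/* Release handler: only drop the parent reference if it was ever taken,
 * i.e. if the host made it past the CREATED state. */
static void host_release(struct obj *parent, enum host_state state)
{
    if (state != SHOST_CREATED)
        put_ref(parent);
}

int main(void)
{
    struct obj parent = { .refcnt = 1 };

    /* Case 1: host fully added, reference taken, release balances it. */
    get_ref(&parent);
    host_release(&parent, SHOST_RUNNING);

    /* Case 2: host allocated but never added: no reference was taken,
     * so the release handler must not drop one (the bug fixed above). */
    host_release(&parent, SHOST_CREATED);

    printf("parent refcnt: %d\n", parent.refcnt);    /* still 1 */
    return 0;
}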

Re: [PATCH openEuler-21.03] dm verity: fix require_signatures module_param permissions
by chengjian (D) 26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 22:40, huangzhuoli wrote:
> From: John Keeping <john(a)metanate.com>
>
> stable inclusion
> from stable-v5.10.44
> commit 90547d5db50bcb2705709e420e0af51535109113
> bugzilla:https://bugzilla.openeuler.org/show_bug.cgi?id=426
> CVE: NA
>
> -------------------------------------------------
>
> [ Upstream commit 0c1f3193b1cdd21e7182f97dc9bca7d284d18a15 ]
>
> The third parameter of module_param() is permissions for the sysfs node
> but it looks like it is being used as the initial value of the parameter
> here. In fact, false here equates to omitting the file from sysfs and
> does not affect the value of require_signatures.
>
> Making the parameter writable is not simple because going from
> false->true is fine but it should not be possible to remove the
> requirement to verify a signature. But it can be useful to inspect the
> value of this parameter from userspace, so change the permissions to
> make a read-only file in sysfs.
>
> Signed-off-by: John Keeping <john(a)metanate.com>
> Signed-off-by: Mike Snitzer <snitzer(a)redhat.com>
> Signed-off-by: Sasha Levin <sashal(a)kernel.org>
> Signed-off-by: huangzhuoli <bioagr_huangzl(a)163.com>
> ---
> drivers/md/dm-verity-verify-sig.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/md/dm-verity-verify-sig.c b/drivers/md/dm-verity-verify-sig.c
> index 614e43db93aa..919154ae4cae 100644
> --- a/drivers/md/dm-verity-verify-sig.c
> +++ b/drivers/md/dm-verity-verify-sig.c
> @@ -15,7 +15,7 @@
> #define DM_VERITY_VERIFY_ERR(s) DM_VERITY_ROOT_HASH_VERIFICATION " " s
>
> static bool require_signatures;
> -module_param(require_signatures, bool, false);
> +module_param(require_signatures, bool, 0444);
> MODULE_PARM_DESC(require_signatures,
> "Verify the roothash of dm-verity hash tree");
>
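For readers unsure what the third module_param() argument does: it is the sysfs file mode, not an initial value. A minimal, untested module sketch (not the dm-verity code) showing the corrected usage, where 0444 exposes a read-only file under /sys/module/<module>/parameters/ and 0 (what "false" amounted to) creates no file at all:

#include <linux/module.h>
#include <linux/moduleparam.h>

static bool require_signatures;
/* 0444 == r--r--r--: userspace can read the value but never change it. */
module_param(require_signatures, bool, 0444);
MODULE_PARM_DESC(require_signatures, "Example read-only boolean parameter");

static int __init perm_demo_init(void)
{
    pr_info("require_signatures=%d\n", require_signatures);
    return 0;
}

static void __exit perm_demo_exit(void)
{
}

module_init(perm_demo_init);
module_exit(perm_demo_exit);
MODULE_LICENSE("GPL");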

26 Oct '21
Hello, and thank you for taking part in openEuler kernel development.
Your Signed-off-by was not added.
Also, your email user name may be incorrect: it shows as yourname.
On 2021/10/23 23:05, yourname wrote:
> From: Takashi Iwai <tiwai(a)suse.de>
>
> stable inclusion
> from stable-v5.10.44
> commit bd7d88b0874f82f7b29d1a53e574cedaf23166ba
> bugzilla:https://bugzilla.openeuler.org/show_bug.cgi?id=445
> CVE: NA
>
> -------------------------------------------------
>
> commit 83e197a8414c0ba545e7e3916ce05f836f349273 upstream.
>
> The timer instance per queue is exclusive, and snd_seq_timer_open()
> should have managed the concurrent accesses. It looks as if it's
> checking the already existing timer instance at the beginning, but
> it's not right, because there is no protection, hence any later
> concurrent call of snd_seq_timer_open() may override the timer
> instance easily. This may result in UAF, as the leftover timer
> instance can keep running while the queue itself gets closed, as
> spotted by syzkaller recently.
>
> For avoiding the race, add a proper check at the assignment of
> tmr->timeri again, and return -EBUSY if it's been already registered.
>
> Reported-by: syzbot+ddc1260a83ed1cbf6fb5(a)syzkaller.appspotmail.com
> Cc: <stable(a)vger.kernel.org>
> Link: https://lore.kernel.org/r/000000000000dce34f05c42f110c@google.com
> Link: https://lore.kernel.org/r/20210610152059.24633-1-tiwai@suse.de
> Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
> Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Signed-off-by: yourname <625267121(a)qq.com>
> ---
> sound/core/seq/seq_timer.c | 10 +++++++++-
> 1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/sound/core/seq/seq_timer.c b/sound/core/seq/seq_timer.c
> index 1645e4142e30..9863be6fd43e 100644
> --- a/sound/core/seq/seq_timer.c
> +++ b/sound/core/seq/seq_timer.c
> @@ -297,8 +297,16 @@ int snd_seq_timer_open(struct snd_seq_queue *q)
> return err;
> }
> spin_lock_irq(&tmr->lock);
> - tmr->timeri = t;
> + if (tmr->timeri)
> + err = -EBUSY;
> + else
> + tmr->timeri = t;
> spin_unlock_irq(&tmr->lock);
> + if (err < 0) {
> + snd_timer_close(t);
> + snd_timer_instance_free(t);
> + return err;
> + }
> return 0;
> }
>
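The race boils down to a check-then-assign that was not done under the lock. A self-contained userspace sketch of the corrected pattern, with a pthread mutex standing in for tmr->lock (all names hypothetical, not the ALSA code):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static void *slot;    /* plays the role of tmr->timeri */

static int open_slot(void)
{
    void *mine = malloc(32);    /* plays the role of snd_timer_open() */
    int err = 0;

    pthread_mutex_lock(&lock);
    if (slot)            /* re-checked under the lock */
        err = -1;        /* -EBUSY in the kernel patch */
    else
        slot = mine;
    pthread_mutex_unlock(&lock);

    if (err) {
        free(mine);        /* the loser releases what it allocated */
        return err;
    }
    return 0;
}

int main(void)
{
    printf("first open:  %d\n", open_slot());
    printf("second open: %d\n", open_slot());    /* busy */
    return 0;
}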

26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/23 23:05, yourname wrote:
> From: Dan Carpenter <dan.carpenter(a)oracle.com>
>
> stable inclusion
> from stable-v5.10.44
> commit be23c4af3d8a1b986fe9b43b8966797653a76ca4
> bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=341
> CVE: NA
>
> --------------------------------
>
> [ Upstream commit 1dde47a66d4fb181830d6fa000e5ea86907b639e ]
>
> We spotted a bug recently during a review where a driver was
> unregistering a bus that wasn't registered, which would trigger this
> BUG_ON(). Let's handle that situation more gracefully, and just print
> a warning and return.
>
> Reported-by: Russell King (Oracle) <rmk+kernel(a)armlinux.org.uk>
> Signed-off-by: Dan Carpenter <dan.carpenter(a)oracle.com>
> Reviewed-by: Russell King (Oracle) <rmk+kernel(a)armlinux.org.uk>
> Reviewed-by: Andrew Lunn <andrew(a)lunn.ch>
> Signed-off-by: David S. Miller <davem(a)davemloft.net>
> Signed-off-by: Sasha Levin <sashal(a)kernel.org>
> Signed-off-by: wangqing <wangqing(a)uniontech.com>
> Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
> Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
Hello, and thank you for taking part in openEuler kernel development.
Your Signed-off-by was not added.
Also, your email user name may be incorrect: it shows as yourname.
> drivers/net/phy/mdio_bus.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
> index 757e950fb745..b848439fa837 100644
> --- a/drivers/net/phy/mdio_bus.c
> +++ b/drivers/net/phy/mdio_bus.c
> @@ -608,7 +608,8 @@ void mdiobus_unregister(struct mii_bus *bus)
> struct mdio_device *mdiodev;
> int i;
>
> - BUG_ON(bus->state != MDIOBUS_REGISTERED);
> + if (WARN_ON_ONCE(bus->state != MDIOBUS_REGISTERED))
> + return;
> bus->state = MDIOBUS_UNREGISTERED;
>
> for (i = 0; i < PHY_MAX_ADDR; i++) {

[PATCH openEuler-21.03] usb: gadget: f_fs: Ensure io_completion_wq is idle during unbind
by liuhao 26 Oct '21
From: Wesley Cheng <wcheng(a)codeaurora.org>
stable inclusion
from stable-v5.10.44
commit 5cead896962d8b25dee8a8efc85b076572732b86
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=406
CVE: NA
-------------------------------------------------
commit 6fc1db5e6211e30fbb1cee8d7925d79d4ed2ae14 upstream.
During unbind, ffs_func_eps_disable() will be executed, resulting in
completion callbacks for any pending USB requests. When using AIO,
irrespective of the completion status, io_data work is queued to
io_completion_wq to evaluate and handle the completed requests. Since
work runs asynchronously to the unbind() routine, there can be a
scenario where the work runs after the USB gadget has been fully
removed, resulting in accessing of a resource which has been already
freed. (i.e. usb_ep_free_request() accessing the USB ep structure)
Explicitly drain the io_completion_wq, instead of relying on the
destroy_workqueue() (in ffs_data_put()) to make sure no pending
completion work items are running.
Signed-off-by: Wesley Cheng <wcheng(a)codeaurora.org>
Cc: stable <stable(a)vger.kernel.org>
Link: https://lore.kernel.org/r/1621644261-1236-1-git-send-email-wcheng@codeauror…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: liuhao <qq1107732331(a)qq.com>
---
drivers/usb/gadget/function/f_fs.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
index ffe67d836b0c..7df180b110af 100644
--- a/drivers/usb/gadget/function/f_fs.c
+++ b/drivers/usb/gadget/function/f_fs.c
@@ -3566,6 +3566,9 @@ static void ffs_func_unbind(struct usb_configuration *c,
ffs->func = NULL;
}
+ /* Drain any pending AIO completions */
+ drain_workqueue(ffs->io_completion_wq);
+
if (!--opts->refcnt)
functionfs_unbind(ffs);
--
2.23.0
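The ordering the patch establishes is: stop producing new work, wait for anything already queued to finish, and only then free what that work touches. A self-contained userspace model of that ordering, using a plain thread join in place of drain_workqueue() (hypothetical names, illustration only):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* The buffer plays the role of the USB request/endpoint data that the
 * queued completion work dereferences. */
static char *shared_buf;

static void *completion_work(void *arg)
{
    (void)arg;
    usleep(10000);    /* the work runs later, asynchronously */
    printf("completion handled: %s\n", shared_buf);
    return NULL;
}

int main(void)
{
    pthread_t worker;

    shared_buf = strdup("pending AIO request");
    if (pthread_create(&worker, NULL, completion_work, NULL) != 0)
        return 1;

    /* Unbind path: joining the worker here is the userspace analogue of
     * drain_workqueue(ffs->io_completion_wq) -- every queued completion
     * finishes while shared_buf is still valid.  Freeing the buffer before
     * the join would be exactly the use-after-free described above. */
    pthread_join(worker, NULL);
    free(shared_buf);
    return 0;
}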

Re: [PATCH openEuler-21.03] NFSv4: nfs4_proc_set_acl needs to restore NFS_CAP_UIDGID_NOMAP on error.
by chengjian (D) 26 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/24 16:58, wangyongpan wrote:
> From: Dai Ngo <dai.ngo(a)oracle.com>
>
> stable inclusion
> from stable-v5.10.44
> commit 6e13b9bc66f0e34238aa7b9486a0575177fb7955
> bugzilla:https://bugzilla.openeuler.org/show_bug.cgi?id=414
> CVE: NA
>
> -------------------------------------------------
>
> commit f8849e206ef52b584cd9227255f4724f0cc900bb upstream.
>
> Currently if __nfs4_proc_set_acl fails with NFS4ERR_BADOWNER it
> re-enables the idmapper by clearing NFS_CAP_UIDGID_NOMAP before
> retrying again. The NFS_CAP_UIDGID_NOMAP remains cleared even if
> the retry fails. This causes problem for subsequent setattr
> requests for v4 server that does not have idmapping configured.
>
> This patch modifies nfs4_proc_set_acl to detect NFS4ERR_BADOWNER
> and NFS4ERR_BADNAME and skips the retry, since the kernel isn't
> involved in encoding the ACEs, and return -EINVAL.
>
> Steps to reproduce the problem:
>
> # mount -o vers=4.1,sec=sys server:/export/test /tmp/mnt
> # touch /tmp/mnt/file1
> # chown 99 /tmp/mnt/file1
> # nfs4_setfacl -a A::unknown.user@xyz.com:wrtncy /tmp/mnt/file1
> Failed setxattr operation: Invalid argument
> # chown 99 /tmp/mnt/file1
> chown: changing ownership of ‘/tmp/mnt/file1’: Invalid argument
> # umount /tmp/mnt
> # mount -o vers=4.1,sec=sys server:/export/test /tmp/mnt
> # chown 99 /tmp/mnt/file1
> #
>
> v2: detect NFS4ERR_BADOWNER and NFS4ERR_BADNAME and skip retry
> in nfs4_proc_set_acl.
> Signed-off-by: Dai Ngo <dai.ngo(a)oracle.com>
> Signed-off-by: Trond Myklebust <trond.myklebust(a)hammerspace.com>
> Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Signed-off-by: wangyongpan <1071630525(a)qq.com>
> ---
> fs/nfs/nfs4proc.c | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
> index eedcbe6832fb..f387b34bc5e5 100644
> --- a/fs/nfs/nfs4proc.c
> +++ b/fs/nfs/nfs4proc.c
> @@ -5942,6 +5942,14 @@ static int nfs4_proc_set_acl(struct inode *inode, const void *buf, size_t buflen
> do {
> err = __nfs4_proc_set_acl(inode, buf, buflen);
> trace_nfs4_set_acl(inode, err);
> + if (err == -NFS4ERR_BADOWNER || err == -NFS4ERR_BADNAME) {
> + /*
> + * no need to retry since the kernel
> + * isn't involved in encoding the ACEs.
> + */
> + err = -EINVAL;
> + break;
> + }
> err = nfs4_handle_exception(NFS_SERVER(inode), err,
> &exception);
> } while (exception.retry);
From: Feng Tang <feng.tang(a)intel.com>
mainline inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4CMMB?from=project-issue
CVE: NA
------------------------------------
End users frequently want to know what features their processor
supports, independent of what the kernel supports.
/proc/cpuinfo is great. It is omnipresent and since it is provided by
the kernel it is always as up to date as the kernel. But, it could be
ambiguous about processor features which can be disabled by the kernel
at boot-time or compile-time.
There are some user space tools showing more raw features, but they are
not bound with kernel, and go with distros. Many end users are still
using old distros with new kernels (upgraded by themselves), and may
not upgrade the distros only to get a newer tool.
So here arise the need for a new tool, which
* shows raw CPU features read from the CPUID instruction
* will be easier to update compared to existing userspace
tooling (perhaps distributed like perf)
* inherits "modern" kernel development process, in contrast to some
of the existing userspace CPUID tools which are still being
developed
without git and distributed in tarballs from non-https sites.
* Can produce output consistent with /proc/cpuinfo to make comparison
easier.
The CPUID leaf definitions are kept in an .csv file which allows for
updating only that file to add support for new feature leafs.
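(For illustration only, not part of the patch: the kind of lookup a csv row describes can be reproduced with the compiler's CPUID helper on x86 with GCC or Clang. A row such as " 1, 0, ECX, 0, sse3, ..." below means "leaf 1, subleaf 0, bit 0 of ECX".)

#include <stdio.h>
#include <cpuid.h>    /* GCC/Clang helper for the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Query leaf 01H, the same leaf several csv rows describe. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 1 not supported\n");
        return 1;
    }

    printf("stepping: %u\n", eax & 0xf);                       /* EAX bits 3:0 */
    printf("sse3    : %s\n", (ecx & 1) ? "yes" : "no");        /* ECX bit 0 */
    printf("osxsave : %s\n", ((ecx >> 27) & 1) ? "yes" : "no");/* ECX bit 27 */
    return 0;
}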
This is based on prototype code from Borislav Petkov
(http://sr71.net/~dave/intel/stupid-cpuid.c)
[ bp:
- Massage, add #define _GNU_SOURCE to fix implicit declaration of
function ‘strcasestr' warning
- remove superfluous newlines
- fallback to cpuid.csv in the current dir if none found
- fix typos
- move comments over the lines instead of sideways. ]
Originally-from: Borislav Petkov <bp(a)alien8.de>
Suggested-by: Dave Hansen <dave.hansen(a)intel.com>
Suggested-by: Borislav Petkov <bp(a)alien8.de>
Signed-off-by: Feng Tang <feng.tang(a)intel.com>
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Signed-off-by: zhangyue <zhangyue1(a)kylinos.cn>
---
tools/arch/x86/kcpuid/Makefile | 24 ++
tools/arch/x86/kcpuid/cpuid.csv | 400 +++++++++++++++++++
tools/arch/x86/kcpuid/kcpuid.c | 657 ++++++++++++++++++++++++++++++++
3 files changed, 1081 insertions(+)
create mode 100644 tools/arch/x86/kcpuid/Makefile
create mode 100644 tools/arch/x86/kcpuid/cpuid.csv
create mode 100644 tools/arch/x86/kcpuid/kcpuid.c
diff --git a/tools/arch/x86/kcpuid/Makefile b/tools/arch/x86/kcpuid/Makefile
new file mode 100644
index 000000000000..a2f1e855092c
--- /dev/null
+++ b/tools/arch/x86/kcpuid/Makefile
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: GPL-2.0
+# Makefile for x86/kcpuid tool
+
+kcpuid : kcpuid.c
+
+CFLAGS = -Wextra
+
+BINDIR ?= /usr/sbin
+
+HWDATADIR ?= /usr/share/misc/
+
+override CFLAGS += -O2 -Wall -I../../../include
+
+%: %.c
+ $(CC) $(CFLAGS) -o $@ $< $(LDFLAGS)
+
+.PHONY : clean
+clean :
+ @rm -f kcpuid
+
+install : kcpuid
+ install -d $(DESTDIR)$(BINDIR)
+ install -m 755 -p kcpuid $(DESTDIR)$(BINDIR)/kcpuid
+ install -m 444 -p cpuid.csv $(HWDATADIR)/cpuid.csv
diff --git a/tools/arch/x86/kcpuid/cpuid.csv b/tools/arch/x86/kcpuid/cpuid.csv
new file mode 100644
index 000000000000..e422fad9a98e
--- /dev/null
+++ b/tools/arch/x86/kcpuid/cpuid.csv
@@ -0,0 +1,400 @@
+# The basic row format is:
+# LEAF, SUBLEAF, register_name, bits, short_name, long_description
+
+# Leaf 00H
+ 0, 0, EAX, 31:0, max_basic_leafs, Max input value for supported subleafs
+
+# Leaf 01H
+ 1, 0, EAX, 3:0, stepping, Stepping ID
+ 1, 0, EAX, 7:4, model, Model
+ 1, 0, EAX, 11:8, family, Family ID
+ 1, 0, EAX, 13:12, processor, Processor Type
+ 1, 0, EAX, 19:16, model_ext, Extended Model ID
+ 1, 0, EAX, 27:20, family_ext, Extended Family ID
+
+ 1, 0, EBX, 7:0, brand, Brand Index
+ 1, 0, EBX, 15:8, clflush_size, CLFLUSH line size (value * 8) in bytes
+ 1, 0, EBX, 23:16, max_cpu_id, Maxim number of addressable logic cpu in this package
+ 1, 0, EBX, 31:24, apic_id, Initial APIC ID
+
+ 1, 0, ECX, 0, sse3, Streaming SIMD Extensions 3(SSE3)
+ 1, 0, ECX, 1, pclmulqdq, PCLMULQDQ instruction supported
+ 1, 0, ECX, 2, dtes64, DS area uses 64-bit layout
+ 1, 0, ECX, 3, mwait, MONITOR/MWAIT supported
+ 1, 0, ECX, 4, ds_cpl, CPL Qualified Debug Store which allows for branch message storage qualified by CPL
+ 1, 0, ECX, 5, vmx, Virtual Machine Extensions supported
+ 1, 0, ECX, 6, smx, Safer Mode Extension supported
+ 1, 0, ECX, 7, eist, Enhanced Intel SpeedStep Technology
+ 1, 0, ECX, 8, tm2, Thermal Monitor 2
+ 1, 0, ECX, 9, ssse3, Supplemental Streaming SIMD Extensions 3 (SSSE3)
+ 1, 0, ECX, 10, l1_ctx_id, L1 data cache could be set to either adaptive mode or shared mode (check IA32_MISC_ENABLE bit 24 definition)
+ 1, 0, ECX, 11, sdbg, IA32_DEBUG_INTERFACE MSR for silicon debug supported
+ 1, 0, ECX, 12, fma, FMA extensions using YMM state supported
+ 1, 0, ECX, 13, cmpxchg16b, 'CMPXCHG16B - Compare and Exchange Bytes' supported
+ 1, 0, ECX, 14, xtpr_update, xTPR Update Control supported
+ 1, 0, ECX, 15, pdcm, Perfmon and Debug Capability present
+ 1, 0, ECX, 17, pcid, Process-Context Identifiers feature present
+ 1, 0, ECX, 18, dca, Prefetching data from a memory mapped device supported
+ 1, 0, ECX, 19, sse4_1, SSE4.1 feature present
+ 1, 0, ECX, 20, sse4_2, SSE4.2 feature present
+ 1, 0, ECX, 21, x2apic, x2APIC supported
+ 1, 0, ECX, 22, movbe, MOVBE instruction supported
+ 1, 0, ECX, 23, popcnt, POPCNT instruction supported
+ 1, 0, ECX, 24, tsc_deadline_timer, LAPIC supports one-shot operation using a TSC deadline value
+ 1, 0, ECX, 25, aesni, AESNI instruction supported
+ 1, 0, ECX, 26, xsave, XSAVE/XRSTOR processor extended states (XSETBV/XGETBV/XCR0)
+ 1, 0, ECX, 27, osxsave, OS has set CR4.OSXSAVE bit to enable XSETBV/XGETBV/XCR0
+ 1, 0, ECX, 28, avx, AVX instruction supported
+ 1, 0, ECX, 29, f16c, 16-bit floating-point conversion instruction supported
+ 1, 0, ECX, 30, rdrand, RDRAND instruction supported
+
+ 1, 0, EDX, 0, fpu, x87 FPU on chip
+ 1, 0, EDX, 1, vme, Virtual-8086 Mode Enhancement
+ 1, 0, EDX, 2, de, Debugging Extensions
+ 1, 0, EDX, 3, pse, Page Size Extensions
+ 1, 0, EDX, 4, tsc, Time Stamp Counter
+ 1, 0, EDX, 5, msr, RDMSR and WRMSR Support
+ 1, 0, EDX, 6, pae, Physical Address Extensions
+ 1, 0, EDX, 7, mce, Machine Check Exception
+ 1, 0, EDX, 8, cx8, CMPXCHG8B instr
+ 1, 0, EDX, 9, apic, APIC on Chip
+ 1, 0, EDX, 11, sep, SYSENTER and SYSEXIT instrs
+ 1, 0, EDX, 12, mtrr, Memory Type Range Registers
+ 1, 0, EDX, 13, pge, Page Global Bit
+ 1, 0, EDX, 14, mca, Machine Check Architecture
+ 1, 0, EDX, 15, cmov, Conditional Move Instrs
+ 1, 0, EDX, 16, pat, Page Attribute Table
+ 1, 0, EDX, 17, pse36, 36-Bit Page Size Extension
+ 1, 0, EDX, 18, psn, Processor Serial Number
+ 1, 0, EDX, 19, clflush, CLFLUSH instr
+# 1, 0, EDX, 20,
+ 1, 0, EDX, 21, ds, Debug Store
+ 1, 0, EDX, 22, acpi, Thermal Monitor and Software Controlled Clock Facilities
+ 1, 0, EDX, 23, mmx, Intel MMX Technology
+ 1, 0, EDX, 24, fxsr, XSAVE and FXRSTOR Instrs
+ 1, 0, EDX, 25, sse, SSE
+ 1, 0, EDX, 26, sse2, SSE2
+ 1, 0, EDX, 27, ss, Self Snoop
+ 1, 0, EDX, 28, hit, Max APIC IDs
+ 1, 0, EDX, 29, tm, Thermal Monitor
+# 1, 0, EDX, 30,
+ 1, 0, EDX, 31, pbe, Pending Break Enable
+
+# Leaf 02H
+# cache and TLB descriptor info
+
+# Leaf 03H
+# Precessor Serial Number, introduced on Pentium III, not valid for
+# latest models
+
+# Leaf 04H
+# thread/core and cache topology
+ 4, 0, EAX, 4:0, cache_type, Cache type like instr/data or unified
+ 4, 0, EAX, 7:5, cache_level, Cache Level (starts at 1)
+ 4, 0, EAX, 8, cache_self_init, Cache Self Initialization
+ 4, 0, EAX, 9, fully_associate, Fully Associative cache
+# 4, 0, EAX, 13:10, resvd, resvd
+ 4, 0, EAX, 25:14, max_logical_id, Max number of addressable IDs for logical processors sharing the cache
+ 4, 0, EAX, 31:26, max_phy_id, Max number of addressable IDs for processors in phy package
+
+ 4, 0, EBX, 11:0, cache_linesize, Size of a cache line in bytes
+ 4, 0, EBX, 21:12, cache_partition, Physical Line partitions
+ 4, 0, EBX, 31:22, cache_ways, Ways of associativity
+ 4, 0, ECX, 31:0, cache_sets, Number of Sets - 1
+ 4, 0, EDX, 0, c_wbinvd, 1 means WBINVD/INVD is not ganranteed to act upon lower level caches of non-originating threads sharing this cache
+ 4, 0, EDX, 1, c_incl, Whether cache is inclusive of lower cache level
+ 4, 0, EDX, 2, c_comp_index, Complex Cache Indexing
+
+# Leaf 05H
+# MONITOR/MWAIT
+ 5, 0, EAX, 15:0, min_mon_size, Smallest monitor line size in bytes
+ 5, 0, EBX, 15:0, max_mon_size, Largest monitor line size in bytes
+ 5, 0, ECX, 0, mwait_ext, Enum of Monitor-Mwait extensions supported
+ 5, 0, ECX, 1, mwait_irq_break, Largest monitor line size in bytes
+ 5, 0, EDX, 3:0, c0_sub_stats, Number of C0* sub C-states supported using MWAIT
+ 5, 0, EDX, 7:4, c1_sub_stats, Number of C1* sub C-states supported using MWAIT
+ 5, 0, EDX, 11:8, c2_sub_stats, Number of C2* sub C-states supported using MWAIT
+ 5, 0, EDX, 15:12, c3_sub_stats, Number of C3* sub C-states supported using MWAIT
+ 5, 0, EDX, 19:16, c4_sub_stats, Number of C4* sub C-states supported using MWAIT
+ 5, 0, EDX, 23:20, c5_sub_stats, Number of C5* sub C-states supported using MWAIT
+ 5, 0, EDX, 27:24, c6_sub_stats, Number of C6* sub C-states supported using MWAIT
+ 5, 0, EDX, 31:28, c7_sub_stats, Number of C7* sub C-states supported using MWAIT
+
+# Leaf 06H
+# Thermal & Power Management
+
+ 6, 0, EAX, 0, dig_temp, Digital temperature sensor supported
+ 6, 0, EAX, 1, turbo, Intel Turbo Boost
+ 6, 0, EAX, 2, arat, Always running APIC timer
+# 6, 0, EAX, 3, resv, Reserved
+ 6, 0, EAX, 4, pln, Power limit notifications supported
+ 6, 0, EAX, 5, ecmd, Clock modulation duty cycle extension supported
+ 6, 0, EAX, 6, ptm, Package thermal management supported
+ 6, 0, EAX, 7, hwp, HWP base register
+ 6, 0, EAX, 8, hwp_notify, HWP notification
+ 6, 0, EAX, 9, hwp_act_window, HWP activity window
+ 6, 0, EAX, 10, hwp_energy, HWP energy performance preference
+ 6, 0, EAX, 11, hwp_pkg_req, HWP package level request
+# 6, 0, EAX, 12, resv, Reserved
+ 6, 0, EAX, 13, hdc, HDC base registers supported
+ 6, 0, EAX, 14, turbo3, Turbo Boost Max 3.0
+ 6, 0, EAX, 15, hwp_cap, Highest Performance change supported
+ 6, 0, EAX, 16, hwp_peci, HWP PECI override is supported
+ 6, 0, EAX, 17, hwp_flex, Flexible HWP is supported
+ 6, 0, EAX, 18, hwp_fast, Fast access mode for the IA32_HWP_REQUEST MSR is supported
+# 6, 0, EAX, 19, resv, Reserved
+ 6, 0, EAX, 20, hwp_ignr, Ignoring Idle Logical Processor HWP request is supported
+
+ 6, 0, EBX, 3:0, therm_irq_thresh, Number of Interrupt Thresholds in Digital Thermal Sensor
+ 6, 0, ECX, 0, aperfmperf, Presence of IA32_MPERF and IA32_APERF
+ 6, 0, ECX, 3, energ_bias, Performance-energy bias preference supported
+
+# Leaf 07H
+# ECX == 0
+# AVX512 refers to https://en.wikipedia.org/wiki/AVX-512
+# XXX: Do we really need to enumerate each and every AVX512 sub features
+
+ 7, 0, EBX, 0, fsgsbase, RDFSBASE/RDGSBASE/WRFSBASE/WRGSBASE supported
+ 7, 0, EBX, 1, tsc_adjust, TSC_ADJUST MSR supported
+ 7, 0, EBX, 2, sgx, Software Guard Extensions
+ 7, 0, EBX, 3, bmi1, BMI1
+ 7, 0, EBX, 4, hle, Hardware Lock Elision
+ 7, 0, EBX, 5, avx2, AVX2
+# 7, 0, EBX, 6, fdp_excp_only, x87 FPU Data Pointer updated only on x87 exceptions
+ 7, 0, EBX, 7, smep, Supervisor-Mode Execution Prevention
+ 7, 0, EBX, 8, bmi2, BMI2
+ 7, 0, EBX, 9, rep_movsb, Enhanced REP MOVSB/STOSB
+ 7, 0, EBX, 10, invpcid, INVPCID instruction
+ 7, 0, EBX, 11, rtm, Restricted Transactional Memory
+ 7, 0, EBX, 12, rdt_m, Intel RDT Monitoring capability
+ 7, 0, EBX, 13, depc_fpu_cs_ds, Deprecates FPU CS and FPU DS
+ 7, 0, EBX, 14, mpx, Memory Protection Extensions
+ 7, 0, EBX, 15, rdt_a, Intel RDT Allocation capability
+ 7, 0, EBX, 16, avx512f, AVX512 Foundation instr
+ 7, 0, EBX, 17, avx512dq, AVX512 Double and Quadword AVX512 instr
+ 7, 0, EBX, 18, rdseed, RDSEED instr
+ 7, 0, EBX, 19, adx, ADX instr
+ 7, 0, EBX, 20, smap, Supervisor Mode Access Prevention
+ 7, 0, EBX, 21, avx512ifma, AVX512 Integer Fused Multiply Add
+# 7, 0, EBX, 22, resvd, resvd
+ 7, 0, EBX, 23, clflushopt, CLFLUSHOPT instr
+ 7, 0, EBX, 24, clwb, CLWB instr
+ 7, 0, EBX, 25, intel_pt, Intel Processor Trace instr
+ 7, 0, EBX, 26, avx512pf, Prefetch
+ 7, 0, EBX, 27, avx512er, AVX512 Exponent Reciproca instr
+ 7, 0, EBX, 28, avx512cd, AVX512 Conflict Detection instr
+ 7, 0, EBX, 29, sha, Intel Secure Hash Algorithm Extensions instr
+ 7, 0, EBX, 26, avx512bw, AVX512 Byte & Word instr
+ 7, 0, EBX, 28, avx512vl, AVX512 Vector Length Extentions (VL)
+ 7, 0, ECX, 0, prefetchwt1, X
+ 7, 0, ECX, 1, avx512vbmi, AVX512 Vector Byte Manipulation Instructions
+ 7, 0, ECX, 2, umip, User-mode Instruction Prevention
+
+ 7, 0, ECX, 3, pku, Protection Keys for User-mode pages
+ 7, 0, ECX, 4, ospke, CR4 PKE set to enable protection keys
+# 7, 0, ECX, 16:5, resvd, resvd
+ 7, 0, ECX, 21:17, mawau, The value of MAWAU used by the BNDLDX and BNDSTX instructions in 64-bit mode
+ 7, 0, ECX, 22, rdpid, RDPID and IA32_TSC_AUX
+# 7, 0, ECX, 29:23, resvd, resvd
+ 7, 0, ECX, 30, sgx_lc, SGX Launch Configuration
+# 7, 0, ECX, 31, resvd, resvd
+
+# Leaf 08H
+#
+
+
+# Leaf 09H
+# Direct Cache Access (DCA) information
+ 9, 0, ECX, 31:0, dca_cap, The value of IA32_PLATFORM_DCA_CAP
+
+# Leaf 0AH
+# Architectural Performance Monitoring
+#
+# Do we really need to print out the PMU related stuff?
+# Does normal user really care about it?
+#
+ 0xA, 0, EAX, 7:0, pmu_ver, Performance Monitoring Unit version
+ 0xA, 0, EAX, 15:8, pmu_gp_cnt_num, Numer of general-purose PMU counters per logical CPU
+ 0xA, 0, EAX, 23:16, pmu_cnt_bits, Bit wideth of PMU counter
+ 0xA, 0, EAX, 31:24, pmu_ebx_bits, Length of EBX bit vector to enumerate PMU events
+
+ 0xA, 0, EBX, 0, pmu_no_core_cycle_evt, Core cycle event not available
+ 0xA, 0, EBX, 1, pmu_no_instr_ret_evt, Instruction retired event not available
+ 0xA, 0, EBX, 2, pmu_no_ref_cycle_evt, Reference cycles event not available
+ 0xA, 0, EBX, 3, pmu_no_llc_ref_evt, Last-level cache reference event not available
+ 0xA, 0, EBX, 4, pmu_no_llc_mis_evt, Last-level cache misses event not available
+ 0xA, 0, EBX, 5, pmu_no_br_instr_ret_evt, Branch instruction retired event not available
+ 0xA, 0, EBX, 6, pmu_no_br_mispredict_evt, Branch mispredict retired event not available
+
+ 0xA, 0, ECX, 4:0, pmu_fixed_cnt_num, Number of fixed-function PMU counters
+ 0xA, 0, ECX, 12:5, pmu_fixed_cnt_bits, Bit width of fixed-function PMU counters
+
+# Leaf 0BH
+# Extended Topology Enumeration Leaf
+#
+
+ 0xB, 0, EAX, 4:0, id_shift, Number of bits to shift right on x2APIC ID to get a unique topology ID of the next level type
+ 0xB, 0, EBX, 15:0, cpu_nr, Number of logical processors at this level type
+ 0xB, 0, ECX, 15:8, lvl_type, 0-Invalid 1-SMT 2-Core
+ 0xB, 0, EDX, 31:0, x2apic_id, x2APIC ID of the current logical processor
+
+
+# Leaf 0DH
+# Processor Extended State
+
+ 0xD, 0, EAX, 0, x87, X87 state
+ 0xD, 0, EAX, 1, sse, SSE state
+ 0xD, 0, EAX, 2, avx, AVX state
+ 0xD, 0, EAX, 4:3, mpx, MPX state
+ 0xD, 0, EAX, 7:5, avx512, AVX-512 state
+ 0xD, 0, EAX, 9, pkru, PKRU state
+
+ 0xD, 0, EBX, 31:0, max_sz_xcr0, Maximum size (bytes) required by enabled features in XCR0
+ 0xD, 0, ECX, 31:0, max_sz_xsave, Maximum size (bytes) of the XSAVE/XRSTOR save area
+
+ 0xD, 1, EAX, 0, xsaveopt, XSAVEOPT available
+ 0xD, 1, EAX, 1, xsavec, XSAVEC and compacted form supported
+ 0xD, 1, EAX, 2, xgetbv, XGETBV supported
+ 0xD, 1, EAX, 3, xsaves, XSAVES/XRSTORS and IA32_XSS supported
+
+ 0xD, 1, EBX, 31:0, max_sz_xcr0, Maximum size (bytes) required by enabled features in XCR0
+ 0xD, 1, ECX, 8, pt, PT state
+ 0xD, 1, ECX, 11, cet_usr, CET user state
+ 0xD, 1, ECX, 12, cet_supv, CET supervisor state
+ 0xD, 1, ECX, 13, hdc, HDC state
+ 0xD, 1, ECX, 16, hwp, HWP state
+
+# Leaf 0FH
+# Intel RDT Monitoring
+
+ 0xF, 0, EBX, 31:0, rmid_range, Maximum range (zero-based) of RMID within this physical processor of all types
+ 0xF, 0, EDX, 1, l3c_rdt_mon, L3 Cache RDT Monitoring supported
+
+ 0xF, 1, ECX, 31:0, rmid_range, Maximum range (zero-based) of RMID of this resource type
+ 0xF, 1, EDX, 0, l3c_ocp_mon, L3 Cache occupancy Monitoring supported
+ 0xF, 1, EDX, 1, l3c_tbw_mon, L3 Cache Total Bandwidth Monitoring supported
+ 0xF, 1, EDX, 2, l3c_lbw_mon, L3 Cache Local Bandwidth Monitoring supported
+
+# Leaf 10H
+# Intel RDT Allocation
+
+ 0x10, 0, EBX, 1, l3c_rdt_alloc, L3 Cache Allocation supported
+ 0x10, 0, EBX, 2, l2c_rdt_alloc, L2 Cache Allocation supported
+ 0x10, 0, EBX, 3, mem_bw_alloc, Memory Bandwidth Allocation supported
+
+
+# Leaf 12H
+# SGX Capability
+#
+# Some detailed SGX features not added yet
+
+ 0x12, 0, EAX, 0, sgx1, SGX1 leaf functions supported
+ 0x12, 1, EAX, 0, sgx2, SGX2 leaf functions supported
+
+
+# Leaf 14H
+# Intel Processor Tracer
+#
+
+# Leaf 15H
+# Time Stamp Counter and Nominal Core Crystal Clock Information
+
+ 0x15, 0, EAX, 31:0, tsc_denominator, The denominator of the TSC/"core crystal clock" ratio
+ 0x15, 0, EBX, 31:0, tsc_numerator, The numerator of the TSC/"core crystal clock" ratio
+ 0x15, 0, ECX, 31:0, nom_freq, Nominal frequency of the core crystal clock in Hz
+
+# Leaf 16H
+# Processor Frequency Information
+
+ 0x16, 0, EAX, 15:0, cpu_base_freq, Processor Base Frequency in MHz
+ 0x16, 0, EBX, 15:0, cpu_max_freq, Maximum Frequency in MHz
+ 0x16, 0, ECX, 15:0, bus_freq, Bus (Reference) Frequency in MHz
+
+# Leaf 17H
+# System-On-Chip Vendor Attribute
+
+ 0x17, 0, EAX, 31:0, max_socid, Maximum input value of supported sub-leaf
+ 0x17, 0, EBX, 15:0, soc_vid, SOC Vendor ID
+ 0x17, 0, EBX, 16, std_vid, SOC Vendor ID is assigned via an industry standard scheme
+ 0x17, 0, ECX, 31:0, soc_pid, SOC Project ID assigned by vendor
+ 0x17, 0, EDX, 31:0, soc_sid, SOC Stepping ID
+
+# Leaf 18H
+# Deterministic Address Translation Parameters
+
+
+# Leaf 19H
+# Key Locker Leaf
+
+
+# Leaf 1AH
+# Hybrid Information
+
+ 0x1A, 0, EAX, 31:24, core_type, 20H-Intel_Atom 40H-Intel_Core
+
+
+# Leaf 1FH
+# V2 Extended Topology - A preferred superset to leaf 0BH
+
+
+# According to SDM
+# 40000000H - 4FFFFFFFH is invalid range
+
+
+# Leaf 80000001H
+# Extended Processor Signature and Feature Bits
+
+0x80000001, 0, ECX, 0, lahf_lm, LAHF/SAHF available in 64-bit mode
+0x80000001, 0, ECX, 5, lzcnt, LZCNT
+0x80000001, 0, ECX, 8, prefetchw, PREFETCHW
+
+0x80000001, 0, EDX, 11, sysret, SYSCALL/SYSRET supported
+0x80000001, 0, EDX, 20, exec_dis, Execute Disable Bit available
+0x80000001, 0, EDX, 26, 1gb_page, 1GB page supported
+0x80000001, 0, EDX, 27, rdtscp, RDTSCP and IA32_TSC_AUX are available
+#0x80000001, 0, EDX, 29, 64b, 64b Architecture supported
+
+# Leaf 80000002H/80000003H/80000004H
+# Processor Brand String
+
+# Leaf 80000005H
+# Reserved
+
+# Leaf 80000006H
+# Extended L2 Cache Features
+
+0x80000006, 0, ECX, 7:0, clsize, Cache Line size in bytes
+0x80000006, 0, ECX, 15:12, l2c_assoc, L2 Associativity
+0x80000006, 0, ECX, 31:16, csize, Cache size in 1K units
+
+
+# Leaf 80000007H
+
+0x80000007, 0, EDX, 8, nonstop_tsc, Invariant TSC available
+
+
+# Leaf 80000008H
+
+0x80000008, 0, EAX, 7:0, phy_adr_bits, Physical Address Bits
+0x80000008, 0, EAX, 15:8, lnr_adr_bits, Linear Address Bits
+0x80000008, 0, EBX, 9, wbnoinvd, WBNOINVD
+
+# 0x8000001E
+# EAX: Extended APIC ID
+0x8000001E, 0, EAX, 31:0, extended_apic_id, Extended APIC ID
+# EBX: Core Identifiers
+0x8000001E, 0, EBX, 7:0, core_id, Identifies the logical core ID
+0x8000001E, 0, EBX, 15:8, threads_per_core, The number of threads per core is threads_per_core + 1
+# ECX: Node Identifiers
+0x8000001E, 0, ECX, 7:0, node_id, Node ID
+0x8000001E, 0, ECX, 10:8, nodes_per_processor, Nodes per processor { 0: 1 node, else reserved }
+
+# 8000001F: AMD Secure Encryption
+0x8000001F, 0, EAX, 0, sme, Secure Memory Encryption
+0x8000001F, 0, EAX, 1, sev, Secure Encrypted Virtualization
+0x8000001F, 0, EAX, 2, vmpgflush, VM Page Flush MSR
+0x8000001F, 0, EAX, 3, seves, SEV Encrypted State
+0x8000001F, 0, EBX, 5:0, c-bit, Page table bit number used to enable memory encryption
+0x8000001F, 0, EBX, 11:6, mem_encrypt_physaddr_width, Reduction of physical address space in bits with SME enabled
+0x8000001F, 0, ECX, 31:0, num_encrypted_guests, Maximum ASID value that may be used for an SEV-enabled guest
+0x8000001F, 0, EDX, 31:0, minimum_sev_asid, Minimum ASID value that must be used for an SEV-enabled, SEV-ES-disabled guest
\ No newline at end of file
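In the csv above, multi-bit fields use an end:start bit convention (e.g. 21:17 for mawau), while single-bit flags list just the bit number. A minimal sketch of how such a field is pulled out of a register value, mirroring the mask-and-shift done by decode_bits() in kcpuid.c below (the helper name here is only illustrative):

  /* Extract reg[end:start], e.g. extract_field(ecx, 17, 21) for the 5-bit MAWAU value */
  static unsigned int extract_field(unsigned int reg, int start, int end)
  {
          unsigned int mask = (unsigned int)(((unsigned long long)1 << (end - start + 1)) - 1);

          return (reg >> start) & mask;
  }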
diff --git a/tools/arch/x86/kcpuid/kcpuid.c b/tools/arch/x86/kcpuid/kcpuid.c
new file mode 100644
index 000000000000..936a3a2aad04
--- /dev/null
+++ b/tools/arch/x86/kcpuid/kcpuid.c
@@ -0,0 +1,657 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <stdlib.h>
+#include <string.h>
+#include <getopt.h>
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+typedef unsigned int u32;
+typedef unsigned long long u64;
+
+char *def_csv = "/usr/share/misc/cpuid.csv";
+char *user_csv;
+
+
+/* Cover both single-bit flag and multiple-bits fields */
+struct bits_desc {
+ /* start and end bits */
+ int start, end;
+ /* 0 or 1 for 1-bit flag */
+ int value;
+ char simp[32];
+ char detail[256];
+};
+
+/* descriptor info for eax/ebx/ecx/edx */
+struct reg_desc {
+ /* number of valid entries */
+ int nr;
+ struct bits_desc descs[32];
+};
+
+enum {
+ R_EAX = 0,
+ R_EBX,
+ R_ECX,
+ R_EDX,
+ NR_REGS
+};
+
+struct subleaf {
+ u32 index;
+ u32 sub;
+ u32 eax, ebx, ecx, edx;
+ struct reg_desc info[NR_REGS];
+};
+
+/* Represent one leaf (basic or extended) */
+struct cpuid_func {
+ /*
+ * Array of subleafs for this func; if there are no subleafs,
+ * then leafs[0] is the main leaf
+ */
+ struct subleaf *leafs;
+ int nr;
+};
+
+struct cpuid_range {
+ /* array of main leafs */
+ struct cpuid_func *funcs;
+ /* number of valid leafs */
+ int nr;
+ bool is_ext;
+};
+
+/*
+ * basic: basic functions range: [0... ]
+ * ext: extended functions range: [0x80000000... ]
+ */
+struct cpuid_range *leafs_basic, *leafs_ext;
+
+static int num_leafs;
+static bool is_amd;
+static bool show_details;
+static bool show_raw;
+static bool show_flags_only = true;
+static u32 user_index = 0xFFFFFFFF;
+static u32 user_sub = 0xFFFFFFFF;
+static int flines;
+
+static inline void cpuid(u32 *eax, u32 *ebx, u32 *ecx, u32 *edx)
+{
+ /* ecx is often an input as well as an output. */
+ asm volatile("cpuid"
+ : "=a" (*eax),
+ "=b" (*ebx),
+ "=c" (*ecx),
+ "=d" (*edx)
+ : "0" (*eax), "2" (*ecx));
+}
+
+static inline bool has_subleafs(u32 f)
+{
+ if (f == 0x7 || f == 0xd)
+ return true;
+
+ if (is_amd) {
+ if (f == 0x8000001d)
+ return true;
+ return false;
+ }
+
+ switch (f) {
+ case 0x4:
+ case 0xb:
+ case 0xf:
+ case 0x10:
+ case 0x14:
+ case 0x18:
+ case 0x1f:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static void leaf_print_raw(struct subleaf *leaf)
+{
+ if (has_subleafs(leaf->index)) {
+ if (leaf->sub == 0)
+ printf("0x%08x: subleafs:\n", leaf->index);
+
+ printf(" %2d: EAX=0x%08x, EBX=0x%08x, ECX=0x%08x, EDX=0x%08x\n",
+ leaf->sub, leaf->eax, leaf->ebx, leaf->ecx, leaf->edx);
+ } else {
+ printf("0x%08x: EAX=0x%08x, EBX=0x%08x, ECX=0x%08x, EDX=0x%08x\n",
+ leaf->index, leaf->eax, leaf->ebx, leaf->ecx, leaf->edx);
+ }
+}
+
+/* Return true if the input eax/ebx/ecx/edx are all zero */
+static bool cpuid_store(struct cpuid_range *range, u32 f, int subleaf,
+ u32 a, u32 b, u32 c, u32 d)
+{
+ struct cpuid_func *func;
+ struct subleaf *leaf;
+ int s = 0;
+
+ if (a == 0 && b == 0 && c == 0 && d == 0)
+ return true;
+
+ /*
+ * Cut off vendor-prefix from CPUID function as we're using it as an
+ * index into ->funcs.
+ */
+ func = &range->funcs[f & 0xffff];
+
+ if (!func->leafs) {
+ func->leafs = malloc(sizeof(struct subleaf));
+ if (!func->leafs)
+ perror("malloc func leaf");
+
+ func->nr = 1;
+ } else {
+ s = func->nr;
+ func->leafs = realloc(func->leafs, (s + 1) * sizeof(*leaf));
+ if (!func->leafs)
+ perror("realloc f->leafs");
+
+ func->nr++;
+ }
+
+ leaf = &func->leafs[s];
+
+ leaf->index = f;
+ leaf->sub = subleaf;
+ leaf->eax = a;
+ leaf->ebx = b;
+ leaf->ecx = c;
+ leaf->edx = d;
+
+ return false;
+}
+
+static void raw_dump_range(struct cpuid_range *range)
+{
+ u32 f;
+ int i;
+
+ printf("%s Leafs :\n", range->is_ext ? "Extended" : "Basic");
+ printf("================\n");
+
+ for (f = 0; (int)f < range->nr; f++) {
+ struct cpuid_func *func = &range->funcs[f];
+ u32 index = f;
+
+ if (range->is_ext)
+ index += 0x80000000;
+
+ /* Skip leaf without valid items */
+ if (!func->nr)
+ continue;
+
+ /* First item is the main leaf, followed by all subleafs */
+ for (i = 0; i < func->nr; i++)
+ leaf_print_raw(&func->leafs[i]);
+ }
+}
+
+#define MAX_SUBLEAF_NUM 32
+struct cpuid_range *setup_cpuid_range(u32 input_eax)
+{
+ u32 max_func, idx_func;
+ int subleaf;
+ struct cpuid_range *range;
+ u32 eax, ebx, ecx, edx;
+ u32 f = input_eax;
+ int max_subleaf;
+ bool allzero;
+
+ eax = input_eax;
+ ebx = ecx = edx = 0;
+
+ cpuid(&eax, &ebx, &ecx, &edx);
+ max_func = eax;
+ idx_func = (max_func & 0xffff) + 1;
+
+ range = malloc(sizeof(struct cpuid_range));
+ if (!range)
+ perror("malloc range");
+
+ if (input_eax & 0x80000000)
+ range->is_ext = true;
+ else
+ range->is_ext = false;
+
+ range->funcs = malloc(sizeof(struct cpuid_func) * idx_func);
+ if (!range->funcs)
+ perror("malloc range->funcs");
+
+ range->nr = idx_func;
+ memset(range->funcs, 0, sizeof(struct cpuid_func) * idx_func);
+
+ for (; f <= max_func; f++) {
+ eax = f;
+ subleaf = ecx = 0;
+
+ cpuid(&eax, &ebx, &ecx, &edx);
+ allzero = cpuid_store(range, f, subleaf, eax, ebx, ecx, edx);
+ if (allzero)
+ continue;
+ num_leafs++;
+
+ if (!has_subleafs(f))
+ continue;
+
+ max_subleaf = MAX_SUBLEAF_NUM;
+
+ /*
+ * Some can provide the exact number of subleafs,
+ * others have to be tried (0xf)
+ */
+ if (f == 0x7 || f == 0x14 || f == 0x17 || f == 0x18)
+ max_subleaf = (eax & 0xff) + 1;
+
+ if (f == 0xb)
+ max_subleaf = 2;
+
+ for (subleaf = 1; subleaf < max_subleaf; subleaf++) {
+ eax = f;
+ ecx = subleaf;
+
+ cpuid(&eax, &ebx, &ecx, &edx);
+ allzero = cpuid_store(range, f, subleaf,
+ eax, ebx, ecx, edx);
+ if (allzero)
+ continue;
+ num_leafs++;
+ }
+
+ }
+
+ return range;
+}
+
+/*
+ * The basic row format for cpuid.csv is
+ * LEAF,SUBLEAF,register_name,bits,short name,long description
+ *
+ * like:
+ * 0, 0, EAX, 31:0, max_basic_leafs, Max input value for supported subleafs
+ * 1, 0, ECX, 0, sse3, Streaming SIMD Extensions 3(SSE3)
+ */
+static int parse_line(char *line)
+{
+ char *str;
+ int i;
+ struct cpuid_range *range;
+ struct cpuid_func *func;
+ struct subleaf *leaf;
+ u32 index;
+ u32 sub;
+ char buffer[512];
+ char *buf;
+ /*
+ * Tokens:
+ * 1. leaf
+ * 2. subleaf
+ * 3. register
+ * 4. bits
+ * 5. short name
+ * 6. long detail
+ */
+ char *tokens[6];
+ struct reg_desc *reg;
+ struct bits_desc *bdesc;
+ int reg_index;
+ char *start, *end;
+
+ /* Skip comments and NULL line */
+ if (line[0] == '#' || line[0] == '\n')
+ return 0;
+
+ strncpy(buffer, line, 511);
+ buffer[511] = 0;
+ str = buffer;
+ for (i = 0; i < 5; i++) {
+ tokens[i] = strtok(str, ",");
+ if (!tokens[i])
+ goto err_exit;
+ str = NULL;
+ }
+ tokens[5] = strtok(str, "\n");
+ if (!tokens[5])
+ goto err_exit;
+
+ /* index/main-leaf */
+ index = strtoull(tokens[0], NULL, 0);
+
+ if (index & 0x80000000)
+ range = leafs_ext;
+ else
+ range = leafs_basic;
+
+ index &= 0x7FFFFFFF;
+ /* Skip line parsing for non-existing indexes */
+ if ((int)index >= range->nr)
+ return -1;
+
+ func = &range->funcs[index];
+
+ /* Return if the index has no valid item on this platform */
+ if (!func->nr)
+ return 0;
+
+ /* subleaf */
+ sub = strtoul(tokens[1], NULL, 0);
+ if ((int)sub >= func->nr)
+ return -1;
+
+ leaf = &func->leafs[sub];
+ buf = tokens[2];
+
+ if (strcasestr(buf, "EAX"))
+ reg_index = R_EAX;
+ else if (strcasestr(buf, "EBX"))
+ reg_index = R_EBX;
+ else if (strcasestr(buf, "ECX"))
+ reg_index = R_ECX;
+ else if (strcasestr(buf, "EDX"))
+ reg_index = R_EDX;
+ else
+ goto err_exit;
+
+ reg = &leaf->info[reg_index];
+ bdesc = &reg->descs[reg->nr++];
+
+ /* bit flag or bits field */
+ buf = tokens[3];
+
+ end = strtok(buf, ":");
+ bdesc->end = strtoul(end, NULL, 0);
+ bdesc->start = bdesc->end;
+
+ /* start != NULL means it is bit fields */
+ start = strtok(NULL, ":");
+ if (start)
+ bdesc->start = strtoul(start, NULL, 0);
+
+ strcpy(bdesc->simp, tokens[4]);
+ strcpy(bdesc->detail, tokens[5]);
+ return 0;
+
+err_exit:
+ printf("Warning: wrong line format:\n");
+ printf("\tline[%d]: %s\n", flines, line);
+ return -1;
+}
+
+/* Parse csv file, and construct the array of all leafs and subleafs */
+static void parse_text(void)
+{
+ FILE *file;
+ char *filename, *line = NULL;
+ size_t len = 0;
+ int ret;
+
+ if (show_raw)
+ return;
+
+ filename = user_csv ? user_csv : def_csv;
+ file = fopen(filename, "r");
+ if (!file) {
+ /* Fallback to a csv in the same dir */
+ file = fopen("./cpuid.csv", "r");
+ }
+
+ if (!file) {
+ printf("Fail to open '%s'\n", filename);
+ return;
+ }
+
+ while (1) {
+ ret = getline(&line, &len, file);
+ flines++;
+ if (ret > 0)
+ parse_line(line);
+
+ if (feof(file))
+ break;
+ }
+
+ fclose(file);
+}
+
+
+/* Decode every eax/ebx/ecx/edx */
+static void decode_bits(u32 value, struct reg_desc *rdesc)
+{
+ struct bits_desc *bdesc;
+ int start, end, i;
+ u32 mask;
+
+ for (i = 0; i < rdesc->nr; i++) {
+ bdesc = &rdesc->descs[i];
+
+ start = bdesc->start;
+ end = bdesc->end;
+ if (start == end) {
+ /* single bit flag */
+ if (value & (1 << start))
+ printf("\t%-20s %s%s\n",
+ bdesc->simp,
+ show_details ? "-" : "",
+ show_details ? bdesc->detail : ""
+ );
+ } else {
+ /* bit fields */
+ if (show_flags_only)
+ continue;
+
+ mask = ((u64)1 << (end - start + 1)) - 1;
+ printf("\t%-20s\t: 0x%-8x\t%s%s\n",
+ bdesc->simp,
+ (value >> start) & mask,
+ show_details ? "-" : "",
+ show_details ? bdesc->detail : ""
+ );
+ }
+ }
+}
+
+static void show_leaf(struct subleaf *leaf)
+{
+ if (!leaf)
+ return;
+
+ if (show_raw)
+ leaf_print_raw(leaf);
+
+ decode_bits(leaf->eax, &leaf->info[R_EAX]);
+ decode_bits(leaf->ebx, &leaf->info[R_EBX]);
+ decode_bits(leaf->ecx, &leaf->info[R_ECX]);
+ decode_bits(leaf->edx, &leaf->info[R_EDX]);
+}
+
+static void show_func(struct cpuid_func *func)
+{
+ int i;
+
+ if (!func)
+ return;
+
+ for (i = 0; i < func->nr; i++)
+ show_leaf(&func->leafs[i]);
+}
+
+static void show_range(struct cpuid_range *range)
+{
+ int i;
+
+ for (i = 0; i < range->nr; i++)
+ show_func(&range->funcs[i]);
+}
+
+static inline struct cpuid_func *index_to_func(u32 index)
+{
+ struct cpuid_range *range;
+
+ range = (index & 0x80000000) ? leafs_ext : leafs_basic;
+ index &= 0x7FFFFFFF;
+
+ if (((index & 0xFFFF) + 1) > (u32)range->nr) {
+ printf("ERR: invalid input index (0x%x)\n", index);
+ return NULL;
+ }
+ return &range->funcs[index];
+}
+
+static void show_info(void)
+{
+ struct cpuid_func *func;
+
+ if (show_raw) {
+ /* Show all of the raw output of 'cpuid' instr */
+ raw_dump_range(leafs_basic);
+ raw_dump_range(leafs_ext);
+ return;
+ }
+
+ if (user_index != 0xFFFFFFFF) {
+ /* Only show specific leaf/subleaf info */
+ func = index_to_func(user_index);
+ if (!func)
+ return;
+
+ /* Dump the raw data also */
+ show_raw = true;
+
+ if (user_sub != 0xFFFFFFFF) {
+ if (user_sub + 1 <= (u32)func->nr) {
+ show_leaf(&func->leafs[user_sub]);
+ return;
+ }
+
+ printf("ERR: invalid input subleaf (0x%x)\n", user_sub);
+ }
+
+ show_func(func);
+ return;
+ }
+
+ printf("CPU features:\n=============\n\n");
+ show_range(leafs_basic);
+ show_range(leafs_ext);
+}
+
+static void setup_platform_cpuid(void)
+{
+ u32 eax, ebx, ecx, edx;
+
+ /* Check vendor */
+ eax = ebx = ecx = edx = 0;
+ cpuid(&eax, &ebx, &ecx, &edx);
+
+ /* "htuA" */
+ if (ebx == 0x68747541)
+ is_amd = true;
+
+ /* Setup leafs for the basic and extended range */
+ leafs_basic = setup_cpuid_range(0x0);
+ leafs_ext = setup_cpuid_range(0x80000000);
+}
+
+static void usage(void)
+{
+ printf("kcpuid [-abdfhr] [-l leaf] [-s subleaf]\n"
+ "\t-a|--all Show both bit flags and complex bit fields info\n"
+ "\t-b|--bitflags Show boolean flags only\n"
+ "\t-d|--detail Show details of the flag/fields (default)\n"
+ "\t-f|--flags Specify the cpuid csv file\n"
+ "\t-h|--help Show usage info\n"
+ "\t-l|--leaf=index Specify the leaf you want to check\n"
+ "\t-r|--raw Show raw cpuid data\n"
+ "\t-s|--subleaf=sub Specify the subleaf you want to check\n"
+ );
+}
+
+static struct option opts[] = {
+ { "all", no_argument, NULL, 'a' }, /* show both bit flags and fields */
+ { "bitflags", no_argument, NULL, 'b' }, /* only show bit flags, default on */
+ { "detail", no_argument, NULL, 'd' }, /* show detail descriptions */
+ { "file", required_argument, NULL, 'f' }, /* use user's cpuid file */
+ { "help", no_argument, NULL, 'h'}, /* show usage */
+ { "leaf", required_argument, NULL, 'l'}, /* only check a specific leaf */
+ { "raw", no_argument, NULL, 'r'}, /* show raw CPUID leaf data */
+ { "subleaf", required_argument, NULL, 's'}, /* check a specific subleaf */
+ { NULL, 0, NULL, 0 }
+};
+
+static int parse_options(int argc, char *argv[])
+{
+ int c;
+
+ while ((c = getopt_long(argc, argv, "abdf:hl:rs:",
+ opts, NULL)) != -1)
+ switch (c) {
+ case 'a':
+ show_flags_only = false;
+ break;
+ case 'b':
+ show_flags_only = true;
+ break;
+ case 'd':
+ show_details = true;
+ break;
+ case 'f':
+ user_csv = optarg;
+ break;
+ case 'h':
+ usage();
+ exit(1);
+ break;
+ case 'l':
+ /* main leaf */
+ user_index = strtoul(optarg, NULL, 0);
+ break;
+ case 'r':
+ show_raw = true;
+ break;
+ case 's':
+ /* subleaf */
+ user_sub = strtoul(optarg, NULL, 0);
+ break;
+ default:
+ printf("%s: Invalid option '%c'\n", argv[0], optopt);
+ return -1;
+ }
+
+ return 0;
+}
+
+/*
+ * Do 4 things in turn:
+ * 1. Parse user options
+ * 2. Parse and store all the CPUID leaf data supported on this platform
+ * 3. Parse the csv file, while skipping leafs which are not available
+ * on this platform
+ * 4. Print leafs info based on user options
+ */
+int main(int argc, char *argv[])
+{
+ if (parse_options(argc, argv))
+ return -1;
+
+ /* Setup the cpuid leafs of current platform */
+ setup_platform_cpuid();
+
+ /* Read and parse the 'cpuid.csv' */
+ parse_text();
+
+ show_info();
+ return 0;
+}
\ No newline at end of file
--
2.30.0
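Based on the Makefile and usage() text in the patch above, building and trying the tool out could look roughly like the following sketch (install paths come from the BINDIR and HWDATADIR defaults):

  $ make -C tools/arch/x86/kcpuid
  $ sudo make -C tools/arch/x86/kcpuid install    # kcpuid to /usr/sbin, cpuid.csv to /usr/share/misc
  $ kcpuid -d             # decode single-bit flags with long descriptions
  $ kcpuid -l 0x7 -a      # show leaf 7, including multi-bit fields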
From: Feng Tang <feng.tang(a)intel.com>
mainline inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4CMMB?from=project-issue
CVE: NA
------------------------------------
End users frequently want to know what features their processor
supports, independent of what the kernel supports.
/proc/cpuinfo is great. It is omnipresent and since it is provided by
the kernel it is always as up to date as the kernel. But, it could be
ambiguous about processor features which can be disabled by the kernel
at boot-time or compile-time.
There are some user space tools showing more raw features, but they are
not bound with kernel, and go with distros. Many end users are still
using old distros with new kernels (upgraded by themselves), and may
not upgrade the distros only to get a newer tool.
So here arises the need for a new tool, which
* shows raw CPU features read from the CPUID instruction
* will be easier to update compared to existing userspace
tooling (perhaps distributed like perf)
* inherits "modern" kernel development process, in contrast to some
of the existing userspace CPUID tools which are still being
developed
without git and distributed in tarballs from non-https sites.
* Can produce output consistent with /proc/cpuinfo to make comparison
easier.
The CPUID leaf definitions are kept in a .csv file which allows for
updating only that file to add support for new feature leafs.
This is based on prototype code from Borislav Petkov
(http://sr71.net/~dave/intel/stupid-cpuid.c)
[ bp:
- Massage, add #define _GNU_SOURCE to fix implicit declaration of
function 'strcasestr' warning
- remove superfluous newlines
- fallback to cpuid.csv in the current dir if none found
- fix typos
- move comments over the lines instead of sideways. ]
Originally-from: Borislav Petkov <bp(a)alien8.de>
Suggested-by: Dave Hansen <dave.hansen(a)intel.com>
Suggested-by: Borislav Petkov <bp(a)alien8.de>
Signed-off-by: Feng Tang <feng.tang(a)intel.com>
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Signed-off-by: zhangyue <zhangyue1(a)kylinos.cn>
---
tools/arch/x86/kcpuid/Makefile | 24 ++
tools/arch/x86/kcpuid/cpuid.csv | 400 +++++++++++++++++++
tools/arch/x86/kcpuid/kcpuid.c | 657 ++++++++++++++++++++++++++++++++
3 files changed, 1081 insertions(+)
create mode 100644 tools/arch/x86/kcpuid/Makefile
create mode 100644 tools/arch/x86/kcpuid/cpuid.csv
create mode 100644 tools/arch/x86/kcpuid/kcpuid.c
diff --git a/tools/arch/x86/kcpuid/Makefile b/tools/arch/x86/kcpuid/Makefile
new file mode 100644
index 000000000000..a2f1e855092c
--- /dev/null
+++ b/tools/arch/x86/kcpuid/Makefile
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: GPL-2.0
+# Makefile for x86/kcpuid tool
+
+kcpuid : kcpuid.c
+
+CFLAGS = -Wextra
+
+BINDIR ?= /usr/sbin
+
+HWDATADIR ?= /usr/share/misc/
+
+override CFLAGS += -O2 -Wall -I../../../include
+
+%: %.c
+ $(CC) $(CFLAGS) -o $@ $< $(LDFLAGS)
+
+.PHONY : clean
+clean :
+ @rm -f kcpuid
+
+install : kcpuid
+ install -d $(DESTDIR)$(BINDIR)
+ install -m 755 -p kcpuid $(DESTDIR)$(BINDIR)/kcpuid
+ install -m 444 -p cpuid.csv $(HWDATADIR)/cpuid.csv
diff --git a/tools/arch/x86/kcpuid/cpuid.csv b/tools/arch/x86/kcpuid/cpuid.csv
new file mode 100644
index 000000000000..e422fad9a98e
--- /dev/null
+++ b/tools/arch/x86/kcpuid/cpuid.csv
@@ -0,0 +1,400 @@
+# The basic row format is:
+# LEAF, SUBLEAF, register_name, bits, short_name, long_description
+
+# Leaf 00H
+ 0, 0, EAX, 31:0, max_basic_leafs, Max input value for supported subleafs
+
+# Leaf 01H
+ 1, 0, EAX, 3:0, stepping, Stepping ID
+ 1, 0, EAX, 7:4, model, Model
+ 1, 0, EAX, 11:8, family, Family ID
+ 1, 0, EAX, 13:12, processor, Processor Type
+ 1, 0, EAX, 19:16, model_ext, Extended Model ID
+ 1, 0, EAX, 27:20, family_ext, Extended Family ID
+
+ 1, 0, EBX, 7:0, brand, Brand Index
+ 1, 0, EBX, 15:8, clflush_size, CLFLUSH line size (value * 8) in bytes
+ 1, 0, EBX, 23:16, max_cpu_id, Maximum number of addressable logical CPUs in this package
+ 1, 0, EBX, 31:24, apic_id, Initial APIC ID
+
+ 1, 0, ECX, 0, sse3, Streaming SIMD Extensions 3(SSE3)
+ 1, 0, ECX, 1, pclmulqdq, PCLMULQDQ instruction supported
+ 1, 0, ECX, 2, dtes64, DS area uses 64-bit layout
+ 1, 0, ECX, 3, mwait, MONITOR/MWAIT supported
+ 1, 0, ECX, 4, ds_cpl, CPL Qualified Debug Store which allows for branch message storage qualified by CPL
+ 1, 0, ECX, 5, vmx, Virtual Machine Extensions supported
+ 1, 0, ECX, 6, smx, Safer Mode Extension supported
+ 1, 0, ECX, 7, eist, Enhanced Intel SpeedStep Technology
+ 1, 0, ECX, 8, tm2, Thermal Monitor 2
+ 1, 0, ECX, 9, ssse3, Supplemental Streaming SIMD Extensions 3 (SSSE3)
+ 1, 0, ECX, 10, l1_ctx_id, L1 data cache could be set to either adaptive mode or shared mode (check IA32_MISC_ENABLE bit 24 definition)
+ 1, 0, ECX, 11, sdbg, IA32_DEBUG_INTERFACE MSR for silicon debug supported
+ 1, 0, ECX, 12, fma, FMA extensions using YMM state supported
+ 1, 0, ECX, 13, cmpxchg16b, 'CMPXCHG16B - Compare and Exchange Bytes' supported
+ 1, 0, ECX, 14, xtpr_update, xTPR Update Control supported
+ 1, 0, ECX, 15, pdcm, Perfmon and Debug Capability present
+ 1, 0, ECX, 17, pcid, Process-Context Identifiers feature present
+ 1, 0, ECX, 18, dca, Prefetching data from a memory mapped device supported
+ 1, 0, ECX, 19, sse4_1, SSE4.1 feature present
+ 1, 0, ECX, 20, sse4_2, SSE4.2 feature present
+ 1, 0, ECX, 21, x2apic, x2APIC supported
+ 1, 0, ECX, 22, movbe, MOVBE instruction supported
+ 1, 0, ECX, 23, popcnt, POPCNT instruction supported
+ 1, 0, ECX, 24, tsc_deadline_timer, LAPIC supports one-shot operation using a TSC deadline value
+ 1, 0, ECX, 25, aesni, AESNI instruction supported
+ 1, 0, ECX, 26, xsave, XSAVE/XRSTOR processor extended states (XSETBV/XGETBV/XCR0)
+ 1, 0, ECX, 27, osxsave, OS has set CR4.OSXSAVE bit to enable XSETBV/XGETBV/XCR0
+ 1, 0, ECX, 28, avx, AVX instruction supported
+ 1, 0, ECX, 29, f16c, 16-bit floating-point conversion instruction supported
+ 1, 0, ECX, 30, rdrand, RDRAND instruction supported
+
+ 1, 0, EDX, 0, fpu, x87 FPU on chip
+ 1, 0, EDX, 1, vme, Virtual-8086 Mode Enhancement
+ 1, 0, EDX, 2, de, Debugging Extensions
+ 1, 0, EDX, 3, pse, Page Size Extensions
+ 1, 0, EDX, 4, tsc, Time Stamp Counter
+ 1, 0, EDX, 5, msr, RDMSR and WRMSR Support
+ 1, 0, EDX, 6, pae, Physical Address Extensions
+ 1, 0, EDX, 7, mce, Machine Check Exception
+ 1, 0, EDX, 8, cx8, CMPXCHG8B instr
+ 1, 0, EDX, 9, apic, APIC on Chip
+ 1, 0, EDX, 11, sep, SYSENTER and SYSEXIT instrs
+ 1, 0, EDX, 12, mtrr, Memory Type Range Registers
+ 1, 0, EDX, 13, pge, Page Global Bit
+ 1, 0, EDX, 14, mca, Machine Check Architecture
+ 1, 0, EDX, 15, cmov, Conditional Move Instrs
+ 1, 0, EDX, 16, pat, Page Attribute Table
+ 1, 0, EDX, 17, pse36, 36-Bit Page Size Extension
+ 1, 0, EDX, 18, psn, Processor Serial Number
+ 1, 0, EDX, 19, clflush, CLFLUSH instr
+# 1, 0, EDX, 20,
+ 1, 0, EDX, 21, ds, Debug Store
+ 1, 0, EDX, 22, acpi, Thermal Monitor and Software Controlled Clock Facilities
+ 1, 0, EDX, 23, mmx, Intel MMX Technology
+ 1, 0, EDX, 24, fxsr, XSAVE and FXRSTOR Instrs
+ 1, 0, EDX, 25, sse, SSE
+ 1, 0, EDX, 26, sse2, SSE2
+ 1, 0, EDX, 27, ss, Self Snoop
+ 1, 0, EDX, 28, htt, Max APIC IDs reserved field is valid
+ 1, 0, EDX, 29, tm, Thermal Monitor
+# 1, 0, EDX, 30,
+ 1, 0, EDX, 31, pbe, Pending Break Enable
+
+# Leaf 02H
+# cache and TLB descriptor info
+
+# Leaf 03H
+# Processor Serial Number, introduced on Pentium III, not valid for
+# latest models
+
+# Leaf 04H
+# thread/core and cache topology
+ 4, 0, EAX, 4:0, cache_type, Cache type like instr/data or unified
+ 4, 0, EAX, 7:5, cache_level, Cache Level (starts at 1)
+ 4, 0, EAX, 8, cache_self_init, Cache Self Initialization
+ 4, 0, EAX, 9, fully_associate, Fully Associative cache
+# 4, 0, EAX, 13:10, resvd, resvd
+ 4, 0, EAX, 25:14, max_logical_id, Max number of addressable IDs for logical processors sharing the cache
+ 4, 0, EAX, 31:26, max_phy_id, Max number of addressable IDs for processors in phy package
+
+ 4, 0, EBX, 11:0, cache_linesize, Size of a cache line in bytes
+ 4, 0, EBX, 21:12, cache_partition, Physical Line partitions
+ 4, 0, EBX, 31:22, cache_ways, Ways of associativity
+ 4, 0, ECX, 31:0, cache_sets, Number of Sets - 1
+ 4, 0, EDX, 0, c_wbinvd, 1 means WBINVD/INVD is not guaranteed to act upon lower level caches of non-originating threads sharing this cache
+ 4, 0, EDX, 1, c_incl, Whether cache is inclusive of lower cache level
+ 4, 0, EDX, 2, c_comp_index, Complex Cache Indexing
+
+# Leaf 05H
+# MONITOR/MWAIT
+ 5, 0, EAX, 15:0, min_mon_size, Smallest monitor line size in bytes
+ 5, 0, EBX, 15:0, max_mon_size, Largest monitor line size in bytes
+ 5, 0, ECX, 0, mwait_ext, Enum of Monitor-Mwait extensions supported
+ 5, 0, ECX, 1, mwait_irq_break, Interrupts can be treated as break-events for MWAIT, even when interrupts are disabled
+ 5, 0, EDX, 3:0, c0_sub_stats, Number of C0* sub C-states supported using MWAIT
+ 5, 0, EDX, 7:4, c1_sub_stats, Number of C1* sub C-states supported using MWAIT
+ 5, 0, EDX, 11:8, c2_sub_stats, Number of C2* sub C-states supported using MWAIT
+ 5, 0, EDX, 15:12, c3_sub_stats, Number of C3* sub C-states supported using MWAIT
+ 5, 0, EDX, 19:16, c4_sub_stats, Number of C4* sub C-states supported using MWAIT
+ 5, 0, EDX, 23:20, c5_sub_stats, Number of C5* sub C-states supported using MWAIT
+ 5, 0, EDX, 27:24, c6_sub_stats, Number of C6* sub C-states supported using MWAIT
+ 5, 0, EDX, 31:28, c7_sub_stats, Number of C7* sub C-states supported using MWAIT
+
+# Leaf 06H
+# Thermal & Power Management
+
+ 6, 0, EAX, 0, dig_temp, Digital temperature sensor supported
+ 6, 0, EAX, 1, turbo, Intel Turbo Boost
+ 6, 0, EAX, 2, arat, Always running APIC timer
+# 6, 0, EAX, 3, resv, Reserved
+ 6, 0, EAX, 4, pln, Power limit notifications supported
+ 6, 0, EAX, 5, ecmd, Clock modulation duty cycle extension supported
+ 6, 0, EAX, 6, ptm, Package thermal management supported
+ 6, 0, EAX, 7, hwp, HWP base register
+ 6, 0, EAX, 8, hwp_notify, HWP notification
+ 6, 0, EAX, 9, hwp_act_window, HWP activity window
+ 6, 0, EAX, 10, hwp_energy, HWP energy performance preference
+ 6, 0, EAX, 11, hwp_pkg_req, HWP package level request
+# 6, 0, EAX, 12, resv, Reserved
+ 6, 0, EAX, 13, hdc, HDC base registers supported
+ 6, 0, EAX, 14, turbo3, Turbo Boost Max 3.0
+ 6, 0, EAX, 15, hwp_cap, Highest Performance change supported
+ 6, 0, EAX, 16, hwp_peci, HWP PECI override is supported
+ 6, 0, EAX, 17, hwp_flex, Flexible HWP is supported
+ 6, 0, EAX, 18, hwp_fast, Fast access mode for the IA32_HWP_REQUEST MSR is supported
+# 6, 0, EAX, 19, resv, Reserved
+ 6, 0, EAX, 20, hwp_ignr, Ignoring Idle Logical Processor HWP request is supported
+
+ 6, 0, EBX, 3:0, therm_irq_thresh, Number of Interrupt Thresholds in Digital Thermal Sensor
+ 6, 0, ECX, 0, aperfmperf, Presence of IA32_MPERF and IA32_APERF
+ 6, 0, ECX, 3, energ_bias, Performance-energy bias preference supported
+
+# Leaf 07H
+# ECX == 0
+# AVX512 refers to https://en.wikipedia.org/wiki/AVX-512
+# XXX: Do we really need to enumerate each and every AVX512 sub features
+
+ 7, 0, EBX, 0, fsgsbase, RDFSBASE/RDGSBASE/WRFSBASE/WRGSBASE supported
+ 7, 0, EBX, 1, tsc_adjust, TSC_ADJUST MSR supported
+ 7, 0, EBX, 2, sgx, Software Guard Extensions
+ 7, 0, EBX, 3, bmi1, BMI1
+ 7, 0, EBX, 4, hle, Hardware Lock Elision
+ 7, 0, EBX, 5, avx2, AVX2
+# 7, 0, EBX, 6, fdp_excp_only, x87 FPU Data Pointer updated only on x87 exceptions
+ 7, 0, EBX, 7, smep, Supervisor-Mode Execution Prevention
+ 7, 0, EBX, 8, bmi2, BMI2
+ 7, 0, EBX, 9, rep_movsb, Enhanced REP MOVSB/STOSB
+ 7, 0, EBX, 10, invpcid, INVPCID instruction
+ 7, 0, EBX, 11, rtm, Restricted Transactional Memory
+ 7, 0, EBX, 12, rdt_m, Intel RDT Monitoring capability
+ 7, 0, EBX, 13, depc_fpu_cs_ds, Deprecates FPU CS and FPU DS
+ 7, 0, EBX, 14, mpx, Memory Protection Extensions
+ 7, 0, EBX, 15, rdt_a, Intel RDT Allocation capability
+ 7, 0, EBX, 16, avx512f, AVX512 Foundation instr
+ 7, 0, EBX, 17, avx512dq, AVX512 Double and Quadword AVX512 instr
+ 7, 0, EBX, 18, rdseed, RDSEED instr
+ 7, 0, EBX, 19, adx, ADX instr
+ 7, 0, EBX, 20, smap, Supervisor Mode Access Prevention
+ 7, 0, EBX, 21, avx512ifma, AVX512 Integer Fused Multiply Add
+# 7, 0, EBX, 22, resvd, resvd
+ 7, 0, EBX, 23, clflushopt, CLFLUSHOPT instr
+ 7, 0, EBX, 24, clwb, CLWB instr
+ 7, 0, EBX, 25, intel_pt, Intel Processor Trace instr
+ 7, 0, EBX, 26, avx512pf, AVX512 Prefetch instr
+ 7, 0, EBX, 27, avx512er, AVX512 Exponential and Reciprocal instr
+ 7, 0, EBX, 28, avx512cd, AVX512 Conflict Detection instr
+ 7, 0, EBX, 29, sha, Intel Secure Hash Algorithm Extensions instr
+ 7, 0, EBX, 30, avx512bw, AVX512 Byte & Word instr
+ 7, 0, EBX, 31, avx512vl, AVX512 Vector Length Extensions (VL)
+ 7, 0, ECX, 0, prefetchwt1, PREFETCHWT1 instruction
+ 7, 0, ECX, 1, avx512vbmi, AVX512 Vector Byte Manipulation Instructions
+ 7, 0, ECX, 2, umip, User-mode Instruction Prevention
+
+ 7, 0, ECX, 3, pku, Protection Keys for User-mode pages
+ 7, 0, ECX, 4, ospke, CR4 PKE set to enable protection keys
+# 7, 0, ECX, 16:5, resvd, resvd
+ 7, 0, ECX, 21:17, mawau, The value of MAWAU used by the BNDLDX and BNDSTX instructions in 64-bit mode
+ 7, 0, ECX, 22, rdpid, RDPID and IA32_TSC_AUX
+# 7, 0, ECX, 29:23, resvd, resvd
+ 7, 0, ECX, 30, sgx_lc, SGX Launch Configuration
+# 7, 0, ECX, 31, resvd, resvd
+
+# Leaf 08H
+#
+
+
+# Leaf 09H
+# Direct Cache Access (DCA) information
+ 9, 0, ECX, 31:0, dca_cap, The value of IA32_PLATFORM_DCA_CAP
+
+# Leaf 0AH
+# Architectural Performance Monitoring
+#
+# Do we really need to print out the PMU related stuff?
+# Does normal user really care about it?
+#
+ 0xA, 0, EAX, 7:0, pmu_ver, Performance Monitoring Unit version
+ 0xA, 0, EAX, 15:8, pmu_gp_cnt_num, Number of general-purpose PMU counters per logical CPU
+ 0xA, 0, EAX, 23:16, pmu_cnt_bits, Bit width of PMU counter
+ 0xA, 0, EAX, 31:24, pmu_ebx_bits, Length of EBX bit vector to enumerate PMU events
+
+ 0xA, 0, EBX, 0, pmu_no_core_cycle_evt, Core cycle event not available
+ 0xA, 0, EBX, 1, pmu_no_instr_ret_evt, Instruction retired event not available
+ 0xA, 0, EBX, 2, pmu_no_ref_cycle_evt, Reference cycles event not available
+ 0xA, 0, EBX, 3, pmu_no_llc_ref_evt, Last-level cache reference event not available
+ 0xA, 0, EBX, 4, pmu_no_llc_mis_evt, Last-level cache misses event not available
+ 0xA, 0, EBX, 5, pmu_no_br_instr_ret_evt, Branch instruction retired event not available
+ 0xA, 0, EBX, 6, pmu_no_br_mispredict_evt, Branch mispredict retired event not available
+
+ 0xA, 0, EDX, 4:0, pmu_fixed_cnt_num, Number of fixed-function performance counters
+ 0xA, 0, EDX, 12:5, pmu_fixed_cnt_bits, Bit width of fixed-function performance counters
+
+# Leaf 0BH
+# Extended Topology Enumeration Leaf
+#
+
+ 0xB, 0, EAX, 4:0, id_shift, Number of bits to shift right on x2APIC ID to get a unique topology ID of the next level type
+ 0xB, 0, EBX, 15:0, cpu_nr, Number of logical processors at this level type
+ 0xB, 0, ECX, 15:8, lvl_type, 0-Invalid 1-SMT 2-Core
+ 0xB, 0, EDX, 31:0, x2apic_id, x2APIC ID of the current logical processor
+
+
+# Leaf 0DH
+# Processor Extended State
+
+ 0xD, 0, EAX, 0, x87, X87 state
+ 0xD, 0, EAX, 1, sse, SSE state
+ 0xD, 0, EAX, 2, avx, AVX state
+ 0xD, 0, EAX, 4:3, mpx, MPX state
+ 0xD, 0, EAX, 7:5, avx512, AVX-512 state
+ 0xD, 0, EAX, 9, pkru, PKRU state
+
+ 0xD, 0, EBX, 31:0, max_sz_xcr0, Maximum size (bytes) required by enabled features in XCR0
+ 0xD, 0, ECX, 31:0, max_sz_xsave, Maximum size (bytes) of the XSAVE/XRSTOR save area
+
+ 0xD, 1, EAX, 0, xsaveopt, XSAVEOPT available
+ 0xD, 1, EAX, 1, xsavec, XSAVEC and compacted form supported
+ 0xD, 1, EAX, 2, xgetbv, XGETBV supported
+ 0xD, 1, EAX, 3, xsaves, XSAVES/XRSTORS and IA32_XSS supported
+
+ 0xD, 1, EBX, 31:0, max_sz_xcr0, Size (bytes) of the XSAVE area containing all states enabled by XCR0 | IA32_XSS
+ 0xD, 1, ECX, 8, pt, PT state
+ 0xD, 1, ECX, 11, cet_usr, CET user state
+ 0xD, 1, ECX, 12, cet_supv, CET supervisor state
+ 0xD, 1, ECX, 13, hdc, HDC state
+ 0xD, 1, ECX, 16, hwp, HWP state
+
+# Leaf 0FH
+# Intel RDT Monitoring
+
+ 0xF, 0, EBX, 31:0, rmid_range, Maximum range (zero-based) of RMID within this physical processor of all types
+ 0xF, 0, EDX, 1, l3c_rdt_mon, L3 Cache RDT Monitoring supported
+
+ 0xF, 1, ECX, 31:0, rmid_range, Maximum range (zero-based) of RMID of this type
+ 0xF, 1, EDX, 0, l3c_ocp_mon, L3 Cache occupancy Monitoring supported
+ 0xF, 1, EDX, 1, l3c_tbw_mon, L3 Cache Total Bandwidth Monitoring supported
+ 0xF, 1, EDX, 2, l3c_lbw_mon, L3 Cache Local Bandwidth Monitoring supported
+
+# Leaf 10H
+# Intel RDT Allocation
+
+ 0x10, 0, EBX, 1, l3c_rdt_alloc, L3 Cache Allocation supported
+ 0x10, 0, EBX, 2, l2c_rdt_alloc, L2 Cache Allocation supported
+ 0x10, 0, EBX, 3, mem_bw_alloc, Memory Bandwidth Allocation supported
+
+
+# Leaf 12H
+# SGX Capability
+#
+# Some detailed SGX features not added yet
+
+ 0x12, 0, EAX, 0, sgx1, SGX1 leaf functions supported
+ 0x12, 0, EAX, 1, sgx2, SGX2 leaf functions supported
+
+
+# Leaf 14H
+# Intel Processor Tracer
+#
+
+# Leaf 15H
+# Time Stamp Counter and Nominal Core Crystal Clock Information
+
+ 0x15, 0, EAX, 31:0, tsc_denominator, The denominator of the TSC/"core crystal clock" ratio
+ 0x15, 0, EBX, 31:0, tsc_numerator, The numerator of the TSC/"core crystal clock" ratio
+ 0x15, 0, ECX, 31:0, nom_freq, Nominal frequency of the core crystal clock in Hz
+
+# Leaf 16H
+# Processor Frequency Information
+
+ 0x16, 0, EAX, 15:0, cpu_base_freq, Processor Base Frequency in MHz
+ 0x16, 0, EBX, 15:0, cpu_max_freq, Maximum Frequency in MHz
+ 0x16, 0, ECX, 15:0, bus_freq, Bus (Reference) Frequency in MHz
+
+# Leaf 17H
+# System-On-Chip Vendor Attribute
+
+ 0x17, 0, EAX, 31:0, max_socid, Maximum input value of supported sub-leaf
+ 0x17, 0, EBX, 15:0, soc_vid, SOC Vendor ID
+ 0x17, 0, EBX, 16, std_vid, SOC Vendor ID is assigned via an industry standard scheme
+ 0x17, 0, ECX, 31:0, soc_pid, SOC Project ID assigned by vendor
+ 0x17, 0, EDX, 31:0, soc_sid, SOC Stepping ID
+
+# Leaf 18H
+# Deterministic Address Translation Parameters
+
+
+# Leaf 19H
+# Key Locker Leaf
+
+
+# Leaf 1AH
+# Hybrid Information
+
+ 0x1A, 0, EAX, 31:24, core_type, 20H-Intel_Atom 40H-Intel_Core
+
+
+# Leaf 1FH
+# V2 Extended Topology - A preferred superset to leaf 0BH
+
+
+# According to SDM
+# 40000000H - 4FFFFFFFH is invalid range
+
+
+# Leaf 80000001H
+# Extended Processor Signature and Feature Bits
+
+0x80000001, 0, ECX, 0, lahf_lm, LAHF/SAHF available in 64-bit mode
+0x80000001, 0, ECX, 5, lzcnt, LZCNT
+0x80000001, 0, ECX, 8, prefetchw, PREFETCHW
+
+0x80000001, 0, EDX, 11, sysret, SYSCALL/SYSRET supported
+0x80000001, 0, EDX, 20, exec_dis, Execute Disable Bit available
+0x80000001, 0, EDX, 26, 1gb_page, 1GB page supported
+0x80000001, 0, EDX, 27, rdtscp, RDTSCP and IA32_TSC_AUX are available
+#0x80000001, 0, EDX, 29, 64b, 64b Architecture supported
+
+# Leaf 80000002H/80000003H/80000004H
+# Processor Brand String
+
+# Leaf 80000005H
+# Reserved
+
+# Leaf 80000006H
+# Extended L2 Cache Features
+
+0x80000006, 0, ECX, 7:0, clsize, Cache Line size in bytes
+0x80000006, 0, ECX, 15:12, l2c_assoc, L2 Associativity
+0x80000006, 0, ECX, 31:16, csize, Cache size in 1K units
+
+
+# Leaf 80000007H
+
+0x80000007, 0, EDX, 8, nonstop_tsc, Invariant TSC available
+
+
+# Leaf 80000008H
+
+0x80000008, 0, EAX, 7:0, phy_adr_bits, Physical Address Bits
+0x80000008, 0, EAX, 15:8, lnr_adr_bits, Linear Address Bits
+0x80000008, 0, EBX, 9, wbnoinvd, WBNOINVD instruction supported
+
+# 0x8000001E
+# EAX: Extended APIC ID
+0x8000001E, 0, EAX, 31:0, extended_apic_id, Extended APIC ID
+# EBX: Core Identifiers
+0x8000001E, 0, EBX, 7:0, core_id, Identifies the logical core ID
+0x8000001E, 0, EBX, 15:8, threads_per_core, The number of threads per core is threads_per_core + 1
+# ECX: Node Identifiers
+0x8000001E, 0, ECX, 7:0, node_id, Node ID
+0x8000001E, 0, ECX, 10:8, nodes_per_processor, Nodes per processor { 0: 1 node, else reserved }
+
+# 8000001F: AMD Secure Encryption
+0x8000001F, 0, EAX, 0, sme, Secure Memory Encryption
+0x8000001F, 0, EAX, 1, sev, Secure Encrypted Virtualization
+0x8000001F, 0, EAX, 2, vmpgflush, VM Page Flush MSR
+0x8000001F, 0, EAX, 3, seves, SEV Encrypted State
+0x8000001F, 0, EBX, 5:0, c-bit, Page table bit number used to enable memory encryption
+0x8000001F, 0, EBX, 11:6, mem_encrypt_physaddr_width, Reduction of physical address space in bits with SME enabled
+0x8000001F, 0, ECX, 31:0, num_encrypted_guests, Maximum ASID value that may be used for an SEV-enabled guest
+0x8000001F, 0, EDX, 31:0, minimum_sev_asid, Minimum ASID value that must be used for an SEV-enabled, SEV-ES-disabled guest
\ No newline at end of file
diff --git a/tools/arch/x86/kcpuid/kcpuid.c b/tools/arch/x86/kcpuid/kcpuid.c
new file mode 100644
index 000000000000..936a3a2aad04
--- /dev/null
+++ b/tools/arch/x86/kcpuid/kcpuid.c
@@ -0,0 +1,657 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <stdlib.h>
+#include <string.h>
+#include <getopt.h>
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+typedef unsigned int u32;
+typedef unsigned long long u64;
+
+char *def_csv = "/usr/share/misc/cpuid.csv";
+char *user_csv;
+
+
+/* Cover both single-bit flag and multiple-bits fields */
+struct bits_desc {
+ /* start and end bits */
+ int start, end;
+ /* 0 or 1 for 1-bit flag */
+ int value;
+ char simp[32];
+ char detail[256];
+};
+
+/* descriptor info for eax/ebx/ecx/edx */
+struct reg_desc {
+ /* number of valid entries */
+ int nr;
+ struct bits_desc descs[32];
+};
+
+enum {
+ R_EAX = 0,
+ R_EBX,
+ R_ECX,
+ R_EDX,
+ NR_REGS
+};
+
+struct subleaf {
+ u32 index;
+ u32 sub;
+ u32 eax, ebx, ecx, edx;
+ struct reg_desc info[NR_REGS];
+};
+
+/* Represent one leaf (basic or extended) */
+struct cpuid_func {
+ /*
+	 * Array of subleafs for this func; if there are no subleafs,
+	 * then leafs[0] is the main leaf
+ */
+ struct subleaf *leafs;
+ int nr;
+};
+
+struct cpuid_range {
+ /* array of main leafs */
+ struct cpuid_func *funcs;
+ /* number of valid leafs */
+ int nr;
+ bool is_ext;
+};
+
+/*
+ * basic: basic functions range: [0... ]
+ * ext: extended functions range: [0x80000000... ]
+ */
+struct cpuid_range *leafs_basic, *leafs_ext;
+
+static int num_leafs;
+static bool is_amd;
+static bool show_details;
+static bool show_raw;
+static bool show_flags_only = true;
+static u32 user_index = 0xFFFFFFFF;
+static u32 user_sub = 0xFFFFFFFF;
+static int flines;
+
+static inline void cpuid(u32 *eax, u32 *ebx, u32 *ecx, u32 *edx)
+{
+ /* ecx is often an input as well as an output. */
+ asm volatile("cpuid"
+ : "=a" (*eax),
+ "=b" (*ebx),
+ "=c" (*ecx),
+ "=d" (*edx)
+ : "0" (*eax), "2" (*ecx));
+}
+
+static inline bool has_subleafs(u32 f)
+{
+ if (f == 0x7 || f == 0xd)
+ return true;
+
+ if (is_amd) {
+ if (f == 0x8000001d)
+ return true;
+ return false;
+ }
+
+ switch (f) {
+ case 0x4:
+ case 0xb:
+ case 0xf:
+ case 0x10:
+ case 0x14:
+ case 0x18:
+ case 0x1f:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static void leaf_print_raw(struct subleaf *leaf)
+{
+ if (has_subleafs(leaf->index)) {
+ if (leaf->sub == 0)
+ printf("0x%08x: subleafs:\n", leaf->index);
+
+ printf(" %2d: EAX=0x%08x, EBX=0x%08x, ECX=0x%08x, EDX=0x%08x\n",
+ leaf->sub, leaf->eax, leaf->ebx, leaf->ecx, leaf->edx);
+ } else {
+ printf("0x%08x: EAX=0x%08x, EBX=0x%08x, ECX=0x%08x, EDX=0x%08x\n",
+ leaf->index, leaf->eax, leaf->ebx, leaf->ecx, leaf->edx);
+ }
+}
+
+/* Return true if the input eax/ebx/ecx/edx are all zero */
+static bool cpuid_store(struct cpuid_range *range, u32 f, int subleaf,
+ u32 a, u32 b, u32 c, u32 d)
+{
+ struct cpuid_func *func;
+ struct subleaf *leaf;
+ int s = 0;
+
+ if (a == 0 && b == 0 && c == 0 && d == 0)
+ return true;
+
+ /*
+ * Cut off vendor-prefix from CPUID function as we're using it as an
+ * index into ->funcs.
+ */
+ func = &range->funcs[f & 0xffff];
+
+ if (!func->leafs) {
+ func->leafs = malloc(sizeof(struct subleaf));
+ if (!func->leafs)
+ perror("malloc func leaf");
+
+ func->nr = 1;
+ } else {
+ s = func->nr;
+ func->leafs = realloc(func->leafs, (s + 1) * sizeof(*leaf));
+ if (!func->leafs)
+ perror("realloc f->leafs");
+
+ func->nr++;
+ }
+
+ leaf = &func->leafs[s];
+
+ leaf->index = f;
+ leaf->sub = subleaf;
+ leaf->eax = a;
+ leaf->ebx = b;
+ leaf->ecx = c;
+ leaf->edx = d;
+
+ return false;
+}
+
+static void raw_dump_range(struct cpuid_range *range)
+{
+ u32 f;
+ int i;
+
+ printf("%s Leafs :\n", range->is_ext ? "Extended" : "Basic");
+ printf("================\n");
+
+ for (f = 0; (int)f < range->nr; f++) {
+ struct cpuid_func *func = &range->funcs[f];
+ u32 index = f;
+
+ if (range->is_ext)
+ index += 0x80000000;
+
+ /* Skip leaf without valid items */
+ if (!func->nr)
+ continue;
+
+ /* First item is the main leaf, followed by all subleafs */
+ for (i = 0; i < func->nr; i++)
+ leaf_print_raw(&func->leafs[i]);
+ }
+}
+
+#define MAX_SUBLEAF_NUM 32
+struct cpuid_range *setup_cpuid_range(u32 input_eax)
+{
+ u32 max_func, idx_func;
+ int subleaf;
+ struct cpuid_range *range;
+ u32 eax, ebx, ecx, edx;
+ u32 f = input_eax;
+ int max_subleaf;
+ bool allzero;
+
+ eax = input_eax;
+ ebx = ecx = edx = 0;
+
+ cpuid(&eax, &ebx, &ecx, &edx);
+ max_func = eax;
+ idx_func = (max_func & 0xffff) + 1;
+
+ range = malloc(sizeof(struct cpuid_range));
+ if (!range)
+ perror("malloc range");
+
+ if (input_eax & 0x80000000)
+ range->is_ext = true;
+ else
+ range->is_ext = false;
+
+ range->funcs = malloc(sizeof(struct cpuid_func) * idx_func);
+ if (!range->funcs)
+ perror("malloc range->funcs");
+
+ range->nr = idx_func;
+ memset(range->funcs, 0, sizeof(struct cpuid_func) * idx_func);
+
+ for (; f <= max_func; f++) {
+ eax = f;
+ subleaf = ecx = 0;
+
+ cpuid(&eax, &ebx, &ecx, &edx);
+ allzero = cpuid_store(range, f, subleaf, eax, ebx, ecx, edx);
+ if (allzero)
+ continue;
+ num_leafs++;
+
+ if (!has_subleafs(f))
+ continue;
+
+ max_subleaf = MAX_SUBLEAF_NUM;
+
+ /*
+ * Some can provide the exact number of subleafs,
+ * others have to be tried (0xf)
+ */
+ if (f == 0x7 || f == 0x14 || f == 0x17 || f == 0x18)
+ max_subleaf = (eax & 0xff) + 1;
+
+ if (f == 0xb)
+ max_subleaf = 2;
+
+ for (subleaf = 1; subleaf < max_subleaf; subleaf++) {
+ eax = f;
+ ecx = subleaf;
+
+ cpuid(&eax, &ebx, &ecx, &edx);
+ allzero = cpuid_store(range, f, subleaf,
+ eax, ebx, ecx, edx);
+ if (allzero)
+ continue;
+ num_leafs++;
+ }
+
+ }
+
+ return range;
+}
+
+/*
+ * The basic row format for cpuid.csv is
+ * LEAF,SUBLEAF,register_name,bits,short name,long description
+ *
+ * like:
+ * 0, 0, EAX, 31:0, max_basic_leafs, Max input value for supported subleafs
+ * 1, 0, ECX, 0, sse3, Streaming SIMD Extensions 3(SSE3)
+ */
+static int parse_line(char *line)
+{
+ char *str;
+ int i;
+ struct cpuid_range *range;
+ struct cpuid_func *func;
+ struct subleaf *leaf;
+ u32 index;
+ u32 sub;
+ char buffer[512];
+ char *buf;
+ /*
+ * Tokens:
+ * 1. leaf
+ * 2. subleaf
+ * 3. register
+ * 4. bits
+ * 5. short name
+ * 6. long detail
+ */
+ char *tokens[6];
+ struct reg_desc *reg;
+ struct bits_desc *bdesc;
+ int reg_index;
+ char *start, *end;
+
+	/* Skip comments and empty lines */
+ if (line[0] == '#' || line[0] == '\n')
+ return 0;
+
+ strncpy(buffer, line, 511);
+ buffer[511] = 0;
+ str = buffer;
+ for (i = 0; i < 5; i++) {
+ tokens[i] = strtok(str, ",");
+ if (!tokens[i])
+ goto err_exit;
+ str = NULL;
+ }
+ tokens[5] = strtok(str, "\n");
+ if (!tokens[5])
+ goto err_exit;
+
+ /* index/main-leaf */
+ index = strtoull(tokens[0], NULL, 0);
+
+ if (index & 0x80000000)
+ range = leafs_ext;
+ else
+ range = leafs_basic;
+
+ index &= 0x7FFFFFFF;
+ /* Skip line parsing for non-existing indexes */
+ if ((int)index >= range->nr)
+ return -1;
+
+ func = &range->funcs[index];
+
+ /* Return if the index has no valid item on this platform */
+ if (!func->nr)
+ return 0;
+
+ /* subleaf */
+ sub = strtoul(tokens[1], NULL, 0);
+	if ((int)sub >= func->nr)
+ return -1;
+
+ leaf = &func->leafs[sub];
+ buf = tokens[2];
+
+ if (strcasestr(buf, "EAX"))
+ reg_index = R_EAX;
+ else if (strcasestr(buf, "EBX"))
+ reg_index = R_EBX;
+ else if (strcasestr(buf, "ECX"))
+ reg_index = R_ECX;
+ else if (strcasestr(buf, "EDX"))
+ reg_index = R_EDX;
+ else
+ goto err_exit;
+
+ reg = &leaf->info[reg_index];
+	bdesc = &reg->descs[reg->nr++];
+
+ /* bit flag or bits field */
+ buf = tokens[3];
+
+ end = strtok(buf, ":");
+ bdesc->end = strtoul(end, NULL, 0);
+ bdesc->start = bdesc->end;
+
+ /* start != NULL means it is bit fields */
+ start = strtok(NULL, ":");
+ if (start)
+ bdesc->start = strtoul(start, NULL, 0);
+
+ strcpy(bdesc->simp, tokens[4]);
+ strcpy(bdesc->detail, tokens[5]);
+ return 0;
+
+err_exit:
+ printf("Warning: wrong line format:\n");
+ printf("\tline[%d]: %s\n", flines, line);
+ return -1;
+}
+
+/* Parse csv file, and construct the array of all leafs and subleafs */
+static void parse_text(void)
+{
+ FILE *file;
+ char *filename, *line = NULL;
+ size_t len = 0;
+ int ret;
+
+ if (show_raw)
+ return;
+
+ filename = user_csv ? user_csv : def_csv;
+ file = fopen(filename, "r");
+ if (!file) {
+		/* Fall back to 'cpuid.csv' in the current dir */
+ file = fopen("./cpuid.csv", "r");
+ }
+
+ if (!file) {
+ printf("Fail to open '%s'\n", filename);
+ return;
+ }
+
+ while (1) {
+ ret = getline(&line, &len, file);
+ flines++;
+ if (ret > 0)
+ parse_line(line);
+
+ if (feof(file))
+ break;
+ }
+
+ fclose(file);
+}
+
+
+/* Decode every eax/ebx/ecx/edx */
+static void decode_bits(u32 value, struct reg_desc *rdesc)
+{
+ struct bits_desc *bdesc;
+ int start, end, i;
+ u32 mask;
+
+ for (i = 0; i < rdesc->nr; i++) {
+ bdesc = &rdesc->descs[i];
+
+ start = bdesc->start;
+ end = bdesc->end;
+ if (start == end) {
+ /* single bit flag */
+ if (value & (1 << start))
+ printf("\t%-20s %s%s\n",
+ bdesc->simp,
+ show_details ? "-" : "",
+ show_details ? bdesc->detail : ""
+ );
+ } else {
+ /* bit fields */
+ if (show_flags_only)
+ continue;
+
+ mask = ((u64)1 << (end - start + 1)) - 1;
+ printf("\t%-20s\t: 0x%-8x\t%s%s\n",
+ bdesc->simp,
+ (value >> start) & mask,
+ show_details ? "-" : "",
+ show_details ? bdesc->detail : ""
+ );
+ }
+ }
+}
+
+static void show_leaf(struct subleaf *leaf)
+{
+ if (!leaf)
+ return;
+
+ if (show_raw)
+ leaf_print_raw(leaf);
+
+ decode_bits(leaf->eax, &leaf->info[R_EAX]);
+ decode_bits(leaf->ebx, &leaf->info[R_EBX]);
+ decode_bits(leaf->ecx, &leaf->info[R_ECX]);
+ decode_bits(leaf->edx, &leaf->info[R_EDX]);
+}
+
+static void show_func(struct cpuid_func *func)
+{
+ int i;
+
+ if (!func)
+ return;
+
+ for (i = 0; i < func->nr; i++)
+ show_leaf(&func->leafs[i]);
+}
+
+static void show_range(struct cpuid_range *range)
+{
+ int i;
+
+ for (i = 0; i < range->nr; i++)
+ show_func(&range->funcs[i]);
+}
+
+static inline struct cpuid_func *index_to_func(u32 index)
+{
+ struct cpuid_range *range;
+
+ range = (index & 0x80000000) ? leafs_ext : leafs_basic;
+ index &= 0x7FFFFFFF;
+
+ if (((index & 0xFFFF) + 1) > (u32)range->nr) {
+ printf("ERR: invalid input index (0x%x)\n", index);
+ return NULL;
+ }
+ return &range->funcs[index];
+}
+
+static void show_info(void)
+{
+ struct cpuid_func *func;
+
+ if (show_raw) {
+ /* Show all of the raw output of 'cpuid' instr */
+ raw_dump_range(leafs_basic);
+ raw_dump_range(leafs_ext);
+ return;
+ }
+
+ if (user_index != 0xFFFFFFFF) {
+ /* Only show specific leaf/subleaf info */
+ func = index_to_func(user_index);
+ if (!func)
+ return;
+
+ /* Dump the raw data also */
+ show_raw = true;
+
+ if (user_sub != 0xFFFFFFFF) {
+ if (user_sub + 1 <= (u32)func->nr) {
+ show_leaf(&func->leafs[user_sub]);
+ return;
+ }
+
+ printf("ERR: invalid input subleaf (0x%x)\n", user_sub);
+ }
+
+ show_func(func);
+ return;
+ }
+
+ printf("CPU features:\n=============\n\n");
+ show_range(leafs_basic);
+ show_range(leafs_ext);
+}
+
+static void setup_platform_cpuid(void)
+{
+ u32 eax, ebx, ecx, edx;
+
+ /* Check vendor */
+ eax = ebx = ecx = edx = 0;
+ cpuid(&eax, &ebx, &ecx, &edx);
+
+	/* EBX == 0x68747541, i.e. "Auth" of "AuthenticAMD" */
+ if (ebx == 0x68747541)
+ is_amd = true;
+
+ /* Setup leafs for the basic and extended range */
+ leafs_basic = setup_cpuid_range(0x0);
+ leafs_ext = setup_cpuid_range(0x80000000);
+}
+
+static void usage(void)
+{
+ printf("kcpuid [-abdfhr] [-l leaf] [-s subleaf]\n"
+ "\t-a|--all Show both bit flags and complex bit fields info\n"
+ "\t-b|--bitflags Show boolean flags only\n"
+ "\t-d|--detail Show details of the flag/fields (default)\n"
+		"\t-f|--file            Specify the cpuid csv file\n"
+ "\t-h|--help Show usage info\n"
+ "\t-l|--leaf=index Specify the leaf you want to check\n"
+ "\t-r|--raw Show raw cpuid data\n"
+ "\t-s|--subleaf=sub Specify the subleaf you want to check\n"
+ );
+}
+
+static struct option opts[] = {
+ { "all", no_argument, NULL, 'a' }, /* show both bit flags and fields */
+ { "bitflags", no_argument, NULL, 'b' }, /* only show bit flags, default on */
+ { "detail", no_argument, NULL, 'd' }, /* show detail descriptions */
+ { "file", required_argument, NULL, 'f' }, /* use user's cpuid file */
+ { "help", no_argument, NULL, 'h'}, /* show usage */
+ { "leaf", required_argument, NULL, 'l'}, /* only check a specific leaf */
+ { "raw", no_argument, NULL, 'r'}, /* show raw CPUID leaf data */
+ { "subleaf", required_argument, NULL, 's'}, /* check a specific subleaf */
+ { NULL, 0, NULL, 0 }
+};
+
+static int parse_options(int argc, char *argv[])
+{
+ int c;
+
+ while ((c = getopt_long(argc, argv, "abdf:hl:rs:",
+ opts, NULL)) != -1)
+ switch (c) {
+ case 'a':
+ show_flags_only = false;
+ break;
+ case 'b':
+ show_flags_only = true;
+ break;
+ case 'd':
+ show_details = true;
+ break;
+ case 'f':
+ user_csv = optarg;
+ break;
+ case 'h':
+ usage();
+ exit(1);
+ break;
+ case 'l':
+ /* main leaf */
+ user_index = strtoul(optarg, NULL, 0);
+ break;
+ case 'r':
+ show_raw = true;
+ break;
+ case 's':
+ /* subleaf */
+ user_sub = strtoul(optarg, NULL, 0);
+ break;
+ default:
+ printf("%s: Invalid option '%c'\n", argv[0], optopt);
+ return -1;
+ }
+
+ return 0;
+}
+
+/*
+ * Do 4 things in turn:
+ * 1. Parse user options
+ * 2. Parse and store all the CPUID leaf data supported on this platform
+ * 3. Parse the csv file, while skipping leafs which are not available
+ *    on this platform
+ * 4. Print leafs info based on user options
+ */
+int main(int argc, char *argv[])
+{
+ if (parse_options(argc, argv))
+ return -1;
+
+ /* Setup the cpuid leafs of current platform */
+ setup_platform_cpuid();
+
+ /* Read and parse the 'cpuid.csv' */
+ parse_text();
+
+ show_info();
+ return 0;
+}
\ No newline at end of file
--
2.30.0
From: Feng Tang <feng.tang(a)intel.com>
mainline inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4CMMB?from=project-issue
CVE: NA
------------------------------------
End users frequently want to know what features their processor
supports, independent of what the kernel supports.
/proc/cpuinfo is great. It is omnipresent and since it is provided by
the kernel it is always as up to date as the kernel. But, it could be
ambiguous about processor features which can be disabled by the kernel
at boot-time or compile-time.
There are some user space tools showing more raw features, but they are
not bound with kernel, and go with distros. Many end users are still
using old distros with new kernels (upgraded by themselves), and may
not upgrade the distros only to get a newer tool.
So here arise the need for a new tool, which
* shows raw CPU features read from the CPUID instruction
* will be easier to update compared to existing userspace
tooling (perhaps distributed like perf)
* inherits "modern" kernel development process, in contrast to some
of the existing userspace CPUID tools which are still being
developed
without git and distributed in tarballs from non-https sites.
* Can produce output consistent with /proc/cpuinfo to make comparison
easier.
The CPUID leaf definitions are kept in an .csv file which allows for
updating only that file to add support for new feature leafs.
This is based on prototype code from Borislav Petkov
(http://sr71.net/~dave/intel/stupid-cpuid.c)
[ bp:
- Massage, add #define _GNU_SOURCE to fix implicit declaration of
function ‘strcasestr' warning
- remove superfluous newlines
- fallback to cpuid.csv in the current dir if none found
- fix typos
- move comments over the lines instead of sideways. ]
Originally-from: Borislav Petkov <bp(a)alien8.de>
Suggested-by: Dave Hansen <dave.hansen(a)intel.com>
Suggested-by: Borislav Petkov <bp(a)alien8.de>
Signed-off-by: Feng Tang <feng.tang(a)intel.com>
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Signed-off-by: zhangyue <zhangyue1(a)kylinos.cn>
---
tools/arch/x86/kcpuid/Makefile | 24 ++
tools/arch/x86/kcpuid/cpuid.csv | 400 +++++++++++++++++++
tools/arch/x86/kcpuid/kcpuid.c | 657 ++++++++++++++++++++++++++++++++
3 files changed, 1081 insertions(+)
create mode 100644 tools/arch/x86/kcpuid/Makefile
create mode 100644 tools/arch/x86/kcpuid/cpuid.csv
create mode 100644 tools/arch/x86/kcpuid/kcpuid.c
diff --git a/tools/arch/x86/kcpuid/Makefile b/tools/arch/x86/kcpuid/Makefile
new file mode 100644
index 000000000000..a2f1e855092c
--- /dev/null
+++ b/tools/arch/x86/kcpuid/Makefile
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: GPL-2.0
+# Makefile for x86/kcpuid tool
+
+kcpuid : kcpuid.c
+
+CFLAGS = -Wextra
+
+BINDIR ?= /usr/sbin
+
+HWDATADIR ?= /usr/share/misc/
+
+override CFLAGS += -O2 -Wall -I../../../include
+
+%: %.c
+ $(CC) $(CFLAGS) -o $@ $< $(LDFLAGS)
+
+.PHONY : clean
+clean :
+ @rm -f kcpuid
+
+install : kcpuid
+ install -d $(DESTDIR)$(BINDIR)
+ install -m 755 -p kcpuid $(DESTDIR)$(BINDIR)/kcpuid
+ install -m 444 -p cpuid.csv $(HWDATADIR)/cpuid.csv
diff --git a/tools/arch/x86/kcpuid/cpuid.csv b/tools/arch/x86/kcpuid/cpuid.csv
new file mode 100644
index 000000000000..e422fad9a98e
--- /dev/null
+++ b/tools/arch/x86/kcpuid/cpuid.csv
@@ -0,0 +1,400 @@
+# The basic row format is:
+# LEAF, SUBLEAF, register_name, bits, short_name, long_description
+
+# Leaf 00H
+ 0, 0, EAX, 31:0, max_basic_leafs, Max input value for supported subleafs
+
+# Leaf 01H
+ 1, 0, EAX, 3:0, stepping, Stepping ID
+ 1, 0, EAX, 7:4, model, Model
+ 1, 0, EAX, 11:8, family, Family ID
+ 1, 0, EAX, 13:12, processor, Processor Type
+ 1, 0, EAX, 19:16, model_ext, Extended Model ID
+ 1, 0, EAX, 27:20, family_ext, Extended Family ID
+
+ 1, 0, EBX, 7:0, brand, Brand Index
+ 1, 0, EBX, 15:8, clflush_size, CLFLUSH line size (value * 8) in bytes
+ 1, 0, EBX, 23:16, max_cpu_id, Maxim number of addressable logic cpu in this package
+ 1, 0, EBX, 31:24, apic_id, Initial APIC ID
+
+ 1, 0, ECX, 0, sse3, Streaming SIMD Extensions 3(SSE3)
+ 1, 0, ECX, 1, pclmulqdq, PCLMULQDQ instruction supported
+ 1, 0, ECX, 2, dtes64, DS area uses 64-bit layout
+ 1, 0, ECX, 3, mwait, MONITOR/MWAIT supported
+ 1, 0, ECX, 4, ds_cpl, CPL Qualified Debug Store which allows for branch message storage qualified by CPL
+ 1, 0, ECX, 5, vmx, Virtual Machine Extensions supported
+ 1, 0, ECX, 6, smx, Safer Mode Extension supported
+ 1, 0, ECX, 7, eist, Enhanced Intel SpeedStep Technology
+ 1, 0, ECX, 8, tm2, Thermal Monitor 2
+ 1, 0, ECX, 9, ssse3, Supplemental Streaming SIMD Extensions 3 (SSSE3)
+ 1, 0, ECX, 10, l1_ctx_id, L1 data cache could be set to either adaptive mode or shared mode (check IA32_MISC_ENABLE bit 24 definition)
+ 1, 0, ECX, 11, sdbg, IA32_DEBUG_INTERFACE MSR for silicon debug supported
+ 1, 0, ECX, 12, fma, FMA extensions using YMM state supported
+ 1, 0, ECX, 13, cmpxchg16b, 'CMPXCHG16B - Compare and Exchange Bytes' supported
+ 1, 0, ECX, 14, xtpr_update, xTPR Update Control supported
+ 1, 0, ECX, 15, pdcm, Perfmon and Debug Capability present
+ 1, 0, ECX, 17, pcid, Process-Context Identifiers feature present
+ 1, 0, ECX, 18, dca, Prefetching data from a memory mapped device supported
+ 1, 0, ECX, 19, sse4_1, SSE4.1 feature present
+ 1, 0, ECX, 20, sse4_2, SSE4.2 feature present
+ 1, 0, ECX, 21, x2apic, x2APIC supported
+ 1, 0, ECX, 22, movbe, MOVBE instruction supported
+ 1, 0, ECX, 23, popcnt, POPCNT instruction supported
+ 1, 0, ECX, 24, tsc_deadline_timer, LAPIC supports one-shot operation using a TSC deadline value
+ 1, 0, ECX, 25, aesni, AESNI instruction supported
+ 1, 0, ECX, 26, xsave, XSAVE/XRSTOR processor extended states (XSETBV/XGETBV/XCR0)
+ 1, 0, ECX, 27, osxsave, OS has set CR4.OSXSAVE bit to enable XSETBV/XGETBV/XCR0
+ 1, 0, ECX, 28, avx, AVX instruction supported
+ 1, 0, ECX, 29, f16c, 16-bit floating-point conversion instruction supported
+ 1, 0, ECX, 30, rdrand, RDRAND instruction supported
+
+ 1, 0, EDX, 0, fpu, x87 FPU on chip
+ 1, 0, EDX, 1, vme, Virtual-8086 Mode Enhancement
+ 1, 0, EDX, 2, de, Debugging Extensions
+ 1, 0, EDX, 3, pse, Page Size Extensions
+ 1, 0, EDX, 4, tsc, Time Stamp Counter
+ 1, 0, EDX, 5, msr, RDMSR and WRMSR Support
+ 1, 0, EDX, 6, pae, Physical Address Extensions
+ 1, 0, EDX, 7, mce, Machine Check Exception
+ 1, 0, EDX, 8, cx8, CMPXCHG8B instr
+ 1, 0, EDX, 9, apic, APIC on Chip
+ 1, 0, EDX, 11, sep, SYSENTER and SYSEXIT instrs
+ 1, 0, EDX, 12, mtrr, Memory Type Range Registers
+ 1, 0, EDX, 13, pge, Page Global Bit
+ 1, 0, EDX, 14, mca, Machine Check Architecture
+ 1, 0, EDX, 15, cmov, Conditional Move Instrs
+ 1, 0, EDX, 16, pat, Page Attribute Table
+ 1, 0, EDX, 17, pse36, 36-Bit Page Size Extension
+ 1, 0, EDX, 18, psn, Processor Serial Number
+ 1, 0, EDX, 19, clflush, CLFLUSH instr
+# 1, 0, EDX, 20,
+ 1, 0, EDX, 21, ds, Debug Store
+ 1, 0, EDX, 22, acpi, Thermal Monitor and Software Controlled Clock Facilities
+ 1, 0, EDX, 23, mmx, Intel MMX Technology
+ 1, 0, EDX, 24, fxsr, XSAVE and FXRSTOR Instrs
+ 1, 0, EDX, 25, sse, SSE
+ 1, 0, EDX, 26, sse2, SSE2
+ 1, 0, EDX, 27, ss, Self Snoop
+ 1, 0, EDX, 28, hit, Max APIC IDs
+ 1, 0, EDX, 29, tm, Thermal Monitor
+# 1, 0, EDX, 30,
+ 1, 0, EDX, 31, pbe, Pending Break Enable
+
+# Leaf 02H
+# cache and TLB descriptor info
+
+# Leaf 03H
+# Precessor Serial Number, introduced on Pentium III, not valid for
+# latest models
+
+# Leaf 04H
+# thread/core and cache topology
+ 4, 0, EAX, 4:0, cache_type, Cache type like instr/data or unified
+ 4, 0, EAX, 7:5, cache_level, Cache Level (starts at 1)
+ 4, 0, EAX, 8, cache_self_init, Cache Self Initialization
+ 4, 0, EAX, 9, fully_associate, Fully Associative cache
+# 4, 0, EAX, 13:10, resvd, resvd
+ 4, 0, EAX, 25:14, max_logical_id, Max number of addressable IDs for logical processors sharing the cache
+ 4, 0, EAX, 31:26, max_phy_id, Max number of addressable IDs for processors in phy package
+
+ 4, 0, EBX, 11:0, cache_linesize, Size of a cache line in bytes
+ 4, 0, EBX, 21:12, cache_partition, Physical Line partitions
+ 4, 0, EBX, 31:22, cache_ways, Ways of associativity
+ 4, 0, ECX, 31:0, cache_sets, Number of Sets - 1
+ 4, 0, EDX, 0, c_wbinvd, 1 means WBINVD/INVD is not ganranteed to act upon lower level caches of non-originating threads sharing this cache
+ 4, 0, EDX, 1, c_incl, Whether cache is inclusive of lower cache level
+ 4, 0, EDX, 2, c_comp_index, Complex Cache Indexing
+
+# Leaf 05H
+# MONITOR/MWAIT
+ 5, 0, EAX, 15:0, min_mon_size, Smallest monitor line size in bytes
+ 5, 0, EBX, 15:0, max_mon_size, Largest monitor line size in bytes
+ 5, 0, ECX, 0, mwait_ext, Enum of Monitor-Mwait extensions supported
+ 5, 0, ECX, 1, mwait_irq_break, Largest monitor line size in bytes
+ 5, 0, EDX, 3:0, c0_sub_stats, Number of C0* sub C-states supported using MWAIT
+ 5, 0, EDX, 7:4, c1_sub_stats, Number of C1* sub C-states supported using MWAIT
+ 5, 0, EDX, 11:8, c2_sub_stats, Number of C2* sub C-states supported using MWAIT
+ 5, 0, EDX, 15:12, c3_sub_stats, Number of C3* sub C-states supported using MWAIT
+ 5, 0, EDX, 19:16, c4_sub_stats, Number of C4* sub C-states supported using MWAIT
+ 5, 0, EDX, 23:20, c5_sub_stats, Number of C5* sub C-states supported using MWAIT
+ 5, 0, EDX, 27:24, c6_sub_stats, Number of C6* sub C-states supported using MWAIT
+ 5, 0, EDX, 31:28, c7_sub_stats, Number of C7* sub C-states supported using MWAIT
+
+# Leaf 06H
+# Thermal & Power Management
+
+ 6, 0, EAX, 0, dig_temp, Digital temperature sensor supported
+ 6, 0, EAX, 1, turbo, Intel Turbo Boost
+ 6, 0, EAX, 2, arat, Always running APIC timer
+# 6, 0, EAX, 3, resv, Reserved
+ 6, 0, EAX, 4, pln, Power limit notifications supported
+ 6, 0, EAX, 5, ecmd, Clock modulation duty cycle extension supported
+ 6, 0, EAX, 6, ptm, Package thermal management supported
+ 6, 0, EAX, 7, hwp, HWP base register
+ 6, 0, EAX, 8, hwp_notify, HWP notification
+ 6, 0, EAX, 9, hwp_act_window, HWP activity window
+ 6, 0, EAX, 10, hwp_energy, HWP energy performance preference
+ 6, 0, EAX, 11, hwp_pkg_req, HWP package level request
+# 6, 0, EAX, 12, resv, Reserved
+ 6, 0, EAX, 13, hdc, HDC base registers supported
+ 6, 0, EAX, 14, turbo3, Turbo Boost Max 3.0
+ 6, 0, EAX, 15, hwp_cap, Highest Performance change supported
+ 6, 0, EAX, 16, hwp_peci, HWP PECI override is supported
+ 6, 0, EAX, 17, hwp_flex, Flexible HWP is supported
+ 6, 0, EAX, 18, hwp_fast, Fast access mode for the IA32_HWP_REQUEST MSR is supported
+# 6, 0, EAX, 19, resv, Reserved
+ 6, 0, EAX, 20, hwp_ignr, Ignoring Idle Logical Processor HWP request is supported
+
+ 6, 0, EBX, 3:0, therm_irq_thresh, Number of Interrupt Thresholds in Digital Thermal Sensor
+ 6, 0, ECX, 0, aperfmperf, Presence of IA32_MPERF and IA32_APERF
+ 6, 0, ECX, 3, energ_bias, Performance-energy bias preference supported
+
+# Leaf 07H
+# ECX == 0
+# AVX512 refers to https://en.wikipedia.org/wiki/AVX-512
+# XXX: Do we really need to enumerate each and every AVX512 sub features
+
+ 7, 0, EBX, 0, fsgsbase, RDFSBASE/RDGSBASE/WRFSBASE/WRGSBASE supported
+ 7, 0, EBX, 1, tsc_adjust, TSC_ADJUST MSR supported
+ 7, 0, EBX, 2, sgx, Software Guard Extensions
+ 7, 0, EBX, 3, bmi1, BMI1
+ 7, 0, EBX, 4, hle, Hardware Lock Elision
+ 7, 0, EBX, 5, avx2, AVX2
+# 7, 0, EBX, 6, fdp_excp_only, x87 FPU Data Pointer updated only on x87 exceptions
+ 7, 0, EBX, 7, smep, Supervisor-Mode Execution Prevention
+ 7, 0, EBX, 8, bmi2, BMI2
+ 7, 0, EBX, 9, rep_movsb, Enhanced REP MOVSB/STOSB
+ 7, 0, EBX, 10, invpcid, INVPCID instruction
+ 7, 0, EBX, 11, rtm, Restricted Transactional Memory
+ 7, 0, EBX, 12, rdt_m, Intel RDT Monitoring capability
+ 7, 0, EBX, 13, depc_fpu_cs_ds, Deprecates FPU CS and FPU DS
+ 7, 0, EBX, 14, mpx, Memory Protection Extensions
+ 7, 0, EBX, 15, rdt_a, Intel RDT Allocation capability
+ 7, 0, EBX, 16, avx512f, AVX512 Foundation instr
+ 7, 0, EBX, 17, avx512dq, AVX512 Double and Quadword AVX512 instr
+ 7, 0, EBX, 18, rdseed, RDSEED instr
+ 7, 0, EBX, 19, adx, ADX instr
+ 7, 0, EBX, 20, smap, Supervisor Mode Access Prevention
+ 7, 0, EBX, 21, avx512ifma, AVX512 Integer Fused Multiply Add
+# 7, 0, EBX, 22, resvd, resvd
+ 7, 0, EBX, 23, clflushopt, CLFLUSHOPT instr
+ 7, 0, EBX, 24, clwb, CLWB instr
+ 7, 0, EBX, 25, intel_pt, Intel Processor Trace instr
+ 7, 0, EBX, 26, avx512pf, Prefetch
+ 7, 0, EBX, 27, avx512er, AVX512 Exponent Reciproca instr
+ 7, 0, EBX, 28, avx512cd, AVX512 Conflict Detection instr
+ 7, 0, EBX, 29, sha, Intel Secure Hash Algorithm Extensions instr
+ 7, 0, EBX, 26, avx512bw, AVX512 Byte & Word instr
+ 7, 0, EBX, 28, avx512vl, AVX512 Vector Length Extentions (VL)
+ 7, 0, ECX, 0, prefetchwt1, X
+ 7, 0, ECX, 1, avx512vbmi, AVX512 Vector Byte Manipulation Instructions
+ 7, 0, ECX, 2, umip, User-mode Instruction Prevention
+
+ 7, 0, ECX, 3, pku, Protection Keys for User-mode pages
+ 7, 0, ECX, 4, ospke, CR4 PKE set to enable protection keys
+# 7, 0, ECX, 16:5, resvd, resvd
+ 7, 0, ECX, 21:17, mawau, The value of MAWAU used by the BNDLDX and BNDSTX instructions in 64-bit mode
+ 7, 0, ECX, 22, rdpid, RDPID and IA32_TSC_AUX
+# 7, 0, ECX, 29:23, resvd, resvd
+ 7, 0, ECX, 30, sgx_lc, SGX Launch Configuration
+# 7, 0, ECX, 31, resvd, resvd
+
+# Leaf 08H
+#
+
+
+# Leaf 09H
+# Direct Cache Access (DCA) information
+ 9, 0, ECX, 31:0, dca_cap, The value of IA32_PLATFORM_DCA_CAP
+
+# Leaf 0AH
+# Architectural Performance Monitoring
+#
+# Do we really need to print out the PMU related stuff?
+# Does normal user really care about it?
+#
+ 0xA, 0, EAX, 7:0, pmu_ver, Performance Monitoring Unit version
+ 0xA, 0, EAX, 15:8, pmu_gp_cnt_num, Numer of general-purose PMU counters per logical CPU
+ 0xA, 0, EAX, 23:16, pmu_cnt_bits, Bit wideth of PMU counter
+ 0xA, 0, EAX, 31:24, pmu_ebx_bits, Length of EBX bit vector to enumerate PMU events
+
+ 0xA, 0, EBX, 0, pmu_no_core_cycle_evt, Core cycle event not available
+ 0xA, 0, EBX, 1, pmu_no_instr_ret_evt, Instruction retired event not available
+ 0xA, 0, EBX, 2, pmu_no_ref_cycle_evt, Reference cycles event not available
+ 0xA, 0, EBX, 3, pmu_no_llc_ref_evt, Last-level cache reference event not available
+ 0xA, 0, EBX, 4, pmu_no_llc_mis_evt, Last-level cache misses event not available
+ 0xA, 0, EBX, 5, pmu_no_br_instr_ret_evt, Branch instruction retired event not available
+ 0xA, 0, EBX, 6, pmu_no_br_mispredict_evt, Branch mispredict retired event not available
+
+ 0xA, 0, ECX, 4:0, pmu_fixed_cnt_num, Performance Monitoring Unit version
+ 0xA, 0, ECX, 12:5, pmu_fixed_cnt_bits, Numer of PMU counters per logical CPU
+
+# Leaf 0BH
+# Extended Topology Enumeration Leaf
+#
+
+ 0xB, 0, EAX, 4:0, id_shift, Number of bits to shift right on x2APIC ID to get a unique topology ID of the next level type
+ 0xB, 0, EBX, 15:0, cpu_nr, Number of logical processors at this level type
+ 0xB, 0, ECX, 15:8, lvl_type, 0-Invalid 1-SMT 2-Core
+ 0xB, 0, EDX, 31:0, x2apic_id, x2APIC ID the current logical processor
+
+
+# Leaf 0DH
+# Processor Extended State
+
+ 0xD, 0, EAX, 0, x87, X87 state
+ 0xD, 0, EAX, 1, sse, SSE state
+ 0xD, 0, EAX, 2, avx, AVX state
+ 0xD, 0, EAX, 4:3, mpx, MPX state
+ 0xD, 0, EAX, 7:5, avx512, AVX-512 state
+ 0xD, 0, EAX, 9, pkru, PKRU state
+
+ 0xD, 0, EBX, 31:0, max_sz_xcr0, Maximum size (bytes) required by enabled features in XCR0
+ 0xD, 0, ECX, 31:0, max_sz_xsave, Maximum size (bytes) of the XSAVE/XRSTOR save area
+
+ 0xD, 1, EAX, 0, xsaveopt, XSAVEOPT available
+ 0xD, 1, EAX, 1, xsavec, XSAVEC and compacted form supported
+ 0xD, 1, EAX, 2, xgetbv, XGETBV supported
+ 0xD, 1, EAX, 3, xsaves, XSAVES/XRSTORS and IA32_XSS supported
+
+ 0xD, 1, EBX, 31:0, max_sz_xcr0, Maximum size (bytes) required by enabled features in XCR0
+ 0xD, 1, ECX, 8, pt, PT state
+ 0xD, 1, ECX, 11, cet_usr, CET user state
+ 0xD, 1, ECX, 12, cet_supv, CET supervisor state
+ 0xD, 1, ECX, 13, hdc, HDC state
+ 0xD, 1, ECX, 16, hwp, HWP state
+
+# Leaf 0FH
+# Intel RDT Monitoring
+
+ 0xF, 0, EBX, 31:0, rmid_range, Maximum range (zero-based) of RMID within this physical processor of all types
+ 0xF, 0, EDX, 1, l3c_rdt_mon, L3 Cache RDT Monitoring supported
+
+ 0xF, 1, ECX, 31:0, rmid_range, Maximum range (zero-based) of RMID of this types
+ 0xF, 1, EDX, 0, l3c_ocp_mon, L3 Cache occupancy Monitoring supported
+ 0xF, 1, EDX, 1, l3c_tbw_mon, L3 Cache Total Bandwidth Monitoring supported
+ 0xF, 1, EDX, 2, l3c_lbw_mon, L3 Cache Local Bandwidth Monitoring supported
+
+# Leaf 10H
+# Intel RDT Allocation
+
+ 0x10, 0, EBX, 1, l3c_rdt_alloc, L3 Cache Allocation supported
+ 0x10, 0, EBX, 2, l2c_rdt_alloc, L2 Cache Allocation supported
+ 0x10, 0, EBX, 3, mem_bw_alloc, Memory Bandwidth Allocation supported
+
+
+# Leaf 12H
+# SGX Capability
+#
+# Some detailed SGX features not added yet
+
+ 0x12, 0, EAX, 0, sgx1, L3 Cache Allocation supported
+ 0x12, 1, EAX, 0, sgx2, L3 Cache Allocation supported
+
+
+# Leaf 14H
+# Intel Processor Tracer
+#
+
+# Leaf 15H
+# Time Stamp Counter and Nominal Core Crystal Clock Information
+
+ 0x15, 0, EAX, 31:0, tsc_denominator, The denominator of the TSC/”core crystal clock” ratio
+ 0x15, 0, EBX, 31:0, tsc_numerator, The numerator of the TSC/”core crystal clock” ratio
+ 0x15, 0, ECX, 31:0, nom_freq, Nominal frequency of the core crystal clock in Hz
+
+# Leaf 16H
+# Processor Frequency Information
+
+ 0x16, 0, EAX, 15:0, cpu_base_freq, Processor Base Frequency in MHz
+ 0x16, 0, EBX, 15:0, cpu_max_freq, Maximum Frequency in MHz
+ 0x16, 0, ECX, 15:0, bus_freq, Bus (Reference) Frequency in MHz
+
+# Leaf 17H
+# System-On-Chip Vendor Attribute
+
+ 0x17, 0, EAX, 31:0, max_socid, Maximum input value of supported sub-leaf
+ 0x17, 0, EBX, 15:0, soc_vid, SOC Vendor ID
+ 0x17, 0, EBX, 16, std_vid, SOC Vendor ID is assigned via an industry standard scheme
+ 0x17, 0, ECX, 31:0, soc_pid, SOC Project ID assigned by vendor
+ 0x17, 0, EDX, 31:0, soc_sid, SOC Stepping ID
+
+# Leaf 18H
+# Deterministic Address Translation Parameters
+
+
+# Leaf 19H
+# Key Locker Leaf
+
+
+# Leaf 1AH
+# Hybrid Information
+
+ 0x1A, 0, EAX, 31:24, core_type, 20H-Intel_Atom 40H-Intel_Core
+
+
+# Leaf 1FH
+# V2 Extended Topology - A preferred superset to leaf 0BH
+
+
+# According to SDM
+# 40000000H - 4FFFFFFFH is invalid range
+
+
+# Leaf 80000001H
+# Extended Processor Signature and Feature Bits
+
+0x80000001, 0, ECX, 0, lahf_lm, LAHF/SAHF available in 64-bit mode
+0x80000001, 0, ECX, 5, lzcnt, LZCNT
+0x80000001, 0, ECX, 8, prefetchw, PREFETCHW
+
+0x80000001, 0, EDX, 11, sysret, SYSCALL/SYSRET supported
+0x80000001, 0, EDX, 20, exec_dis, Execute Disable Bit available
+0x80000001, 0, EDX, 26, 1gb_page, 1GB page supported
+0x80000001, 0, EDX, 27, rdtscp, RDTSCP and IA32_TSC_AUX are available
+#0x80000001, 0, EDX, 29, 64b, 64b Architecture supported
+
+# Leaf 80000002H/80000003H/80000004H
+# Processor Brand String
+
+# Leaf 80000005H
+# Reserved
+
+# Leaf 80000006H
+# Extended L2 Cache Features
+
+0x80000006, 0, ECX, 7:0, clsize, Cache Line size in bytes
+0x80000006, 0, ECX, 15:12, l2c_assoc, L2 Associativity
+0x80000006, 0, ECX, 31:16, csize, Cache size in 1K units
+
+
+# Leaf 80000007H
+
+0x80000007, 0, EDX, 8, nonstop_tsc, Invariant TSC available
+
+
+# Leaf 80000008H
+
+0x80000008, 0, EAX, 7:0, phy_adr_bits, Physical Address Bits
+0x80000008, 0, EAX, 15:8, lnr_adr_bits, Linear Address Bits
+0x80000007, 0, EBX, 9, wbnoinvd, WBNOINVD
+
+# 0x8000001E
+# EAX: Extended APIC ID
+0x8000001E, 0, EAX, 31:0, extended_apic_id, Extended APIC ID
+# EBX: Core Identifiers
+0x8000001E, 0, EBX, 7:0, core_id, Identifies the logical core ID
+0x8000001E, 0, EBX, 15:8, threads_per_core, The number of threads per core is threads_per_core + 1
+# ECX: Node Identifiers
+0x8000001E, 0, ECX, 7:0, node_id, Node ID
+0x8000001E, 0, ECX, 10:8, nodes_per_processor, Nodes per processor { 0: 1 node, else reserved }
+
+# 8000001F: AMD Secure Encryption
+0x8000001F, 0, EAX, 0, sme, Secure Memory Encryption
+0x8000001F, 0, EAX, 1, sev, Secure Encrypted Virtualization
+0x8000001F, 0, EAX, 2, vmpgflush, VM Page Flush MSR
+0x8000001F, 0, EAX, 3, seves, SEV Encrypted State
+0x8000001F, 0, EBX, 5:0, c-bit, Page table bit number used to enable memory encryption
+0x8000001F, 0, EBX, 11:6, mem_encrypt_physaddr_width, Reduction of physical address space in bits with SME enabled
+0x8000001F, 0, ECX, 31:0, num_encrypted_guests, Maximum ASID value that may be used for an SEV-enabled guest
+0x8000001F, 0, EDX, 31:0, minimum_sev_asid, Minimum ASID value that must be used for an SEV-enabled, SEV-ES-disabled guest
\ No newline at end of file
diff --git a/tools/arch/x86/kcpuid/kcpuid.c b/tools/arch/x86/kcpuid/kcpuid.c
new file mode 100644
index 000000000000..936a3a2aad04
--- /dev/null
+++ b/tools/arch/x86/kcpuid/kcpuid.c
@@ -0,0 +1,657 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <stdlib.h>
+#include <string.h>
+#include <getopt.h>
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+typedef unsigned int u32;
+typedef unsigned long long u64;
+
+char *def_csv = "/usr/share/misc/cpuid.csv";
+char *user_csv;
+
+
+/* Cover both single-bit flag and multiple-bits fields */
+struct bits_desc {
+ /* start and end bits */
+ int start, end;
+ /* 0 or 1 for 1-bit flag */
+ int value;
+ char simp[32];
+ char detail[256];
+};
+
+/* descriptor info for eax/ebx/ecx/edx */
+struct reg_desc {
+ /* number of valid entries */
+ int nr;
+ struct bits_desc descs[32];
+};
+
+enum {
+ R_EAX = 0,
+ R_EBX,
+ R_ECX,
+ R_EDX,
+ NR_REGS
+};
+
+struct subleaf {
+ u32 index;
+ u32 sub;
+ u32 eax, ebx, ecx, edx;
+ struct reg_desc info[NR_REGS];
+};
+
+/* Represent one leaf (basic or extended) */
+struct cpuid_func {
+ /*
+ * Array of subleafs for this func, if there is no subleafs
+ * then the leafs[0] is the main leaf
+ */
+ struct subleaf *leafs;
+ int nr;
+};
+
+struct cpuid_range {
+ /* array of main leafs */
+ struct cpuid_func *funcs;
+ /* number of valid leafs */
+ int nr;
+ bool is_ext;
+};
+
+/*
+ * basic: basic functions range: [0... ]
+ * ext: extended functions range: [0x80000000... ]
+ */
+struct cpuid_range *leafs_basic, *leafs_ext;
+
+static int num_leafs;
+static bool is_amd;
+static bool show_details;
+static bool show_raw;
+static bool show_flags_only = true;
+static u32 user_index = 0xFFFFFFFF;
+static u32 user_sub = 0xFFFFFFFF;
+static int flines;
+
+static inline void cpuid(u32 *eax, u32 *ebx, u32 *ecx, u32 *edx)
+{
+ /* ecx is often an input as well as an output. */
+ asm volatile("cpuid"
+ : "=a" (*eax),
+ "=b" (*ebx),
+ "=c" (*ecx),
+ "=d" (*edx)
+ : "0" (*eax), "2" (*ecx));
+}
+
+static inline bool has_subleafs(u32 f)
+{
+ if (f == 0x7 || f == 0xd)
+ return true;
+
+ if (is_amd) {
+ if (f == 0x8000001d)
+ return true;
+ return false;
+ }
+
+ switch (f) {
+ case 0x4:
+ case 0xb:
+ case 0xf:
+ case 0x10:
+ case 0x14:
+ case 0x18:
+ case 0x1f:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static void leaf_print_raw(struct subleaf *leaf)
+{
+ if (has_subleafs(leaf->index)) {
+ if (leaf->sub == 0)
+ printf("0x%08x: subleafs:\n", leaf->index);
+
+ printf(" %2d: EAX=0x%08x, EBX=0x%08x, ECX=0x%08x, EDX=0x%08x\n",
+ leaf->sub, leaf->eax, leaf->ebx, leaf->ecx, leaf->edx);
+ } else {
+ printf("0x%08x: EAX=0x%08x, EBX=0x%08x, ECX=0x%08x, EDX=0x%08x\n",
+ leaf->index, leaf->eax, leaf->ebx, leaf->ecx, leaf->edx);
+ }
+}
+
+/* Return true is the input eax/ebx/ecx/edx are all zero */
+static bool cpuid_store(struct cpuid_range *range, u32 f, int subleaf,
+ u32 a, u32 b, u32 c, u32 d)
+{
+ struct cpuid_func *func;
+ struct subleaf *leaf;
+ int s = 0;
+
+ if (a == 0 && b == 0 && c == 0 && d == 0)
+ return true;
+
+ /*
+ * Cut off vendor-prefix from CPUID function as we're using it as an
+ * index into ->funcs.
+ */
+ func = &range->funcs[f & 0xffff];
+
+ if (!func->leafs) {
+ func->leafs = malloc(sizeof(struct subleaf));
+ if (!func->leafs)
+ perror("malloc func leaf");
+
+ func->nr = 1;
+ } else {
+ s = func->nr;
+ func->leafs = realloc(func->leafs, (s + 1) * sizeof(*leaf));
+ if (!func->leafs)
+ perror("realloc f->leafs");
+
+ func->nr++;
+ }
+
+ leaf = &func->leafs[s];
+
+ leaf->index = f;
+ leaf->sub = subleaf;
+ leaf->eax = a;
+ leaf->ebx = b;
+ leaf->ecx = c;
+ leaf->edx = d;
+
+ return false;
+}
+
+static void raw_dump_range(struct cpuid_range *range)
+{
+ u32 f;
+ int i;
+
+ printf("%s Leafs :\n", range->is_ext ? "Extended" : "Basic");
+ printf("================\n");
+
+ for (f = 0; (int)f < range->nr; f++) {
+ struct cpuid_func *func = &range->funcs[f];
+ u32 index = f;
+
+ if (range->is_ext)
+ index += 0x80000000;
+
+ /* Skip leaf without valid items */
+ if (!func->nr)
+ continue;
+
+ /* First item is the main leaf, followed by all subleafs */
+ for (i = 0; i < func->nr; i++)
+ leaf_print_raw(&func->leafs[i]);
+ }
+}
+
+#define MAX_SUBLEAF_NUM 32
+struct cpuid_range *setup_cpuid_range(u32 input_eax)
+{
+ u32 max_func, idx_func;
+ int subleaf;
+ struct cpuid_range *range;
+ u32 eax, ebx, ecx, edx;
+ u32 f = input_eax;
+ int max_subleaf;
+ bool allzero;
+
+ eax = input_eax;
+ ebx = ecx = edx = 0;
+
+ cpuid(&eax, &ebx, &ecx, &edx);
+ max_func = eax;
+ idx_func = (max_func & 0xffff) + 1;
+
+ range = malloc(sizeof(struct cpuid_range));
+ if (!range)
+ perror("malloc range");
+
+ if (input_eax & 0x80000000)
+ range->is_ext = true;
+ else
+ range->is_ext = false;
+
+ range->funcs = malloc(sizeof(struct cpuid_func) * idx_func);
+ if (!range->funcs)
+ perror("malloc range->funcs");
+
+ range->nr = idx_func;
+ memset(range->funcs, 0, sizeof(struct cpuid_func) * idx_func);
+
+ for (; f <= max_func; f++) {
+ eax = f;
+ subleaf = ecx = 0;
+
+ cpuid(&eax, &ebx, &ecx, &edx);
+ allzero = cpuid_store(range, f, subleaf, eax, ebx, ecx, edx);
+ if (allzero)
+ continue;
+ num_leafs++;
+
+ if (!has_subleafs(f))
+ continue;
+
+ max_subleaf = MAX_SUBLEAF_NUM;
+
+ /*
+ * Some can provide the exact number of subleafs,
+ * others have to be tried (0xf)
+ */
+ if (f == 0x7 || f == 0x14 || f == 0x17 || f == 0x18)
+ max_subleaf = (eax & 0xff) + 1;
+
+ if (f == 0xb)
+ max_subleaf = 2;
+
+ for (subleaf = 1; subleaf < max_subleaf; subleaf++) {
+ eax = f;
+ ecx = subleaf;
+
+ cpuid(&eax, &ebx, &ecx, &edx);
+ allzero = cpuid_store(range, f, subleaf,
+ eax, ebx, ecx, edx);
+ if (allzero)
+ continue;
+ num_leafs++;
+ }
+
+ }
+
+ return range;
+}
+
+/*
+ * The basic row format for cpuid.csv is
+ * LEAF,SUBLEAF,register_name,bits,short name,long description
+ *
+ * like:
+ * 0, 0, EAX, 31:0, max_basic_leafs, Max input value for supported subleafs
+ * 1, 0, ECX, 0, sse3, Streaming SIMD Extensions 3(SSE3)
+ */
+static int parse_line(char *line)
+{
+ char *str;
+ int i;
+ struct cpuid_range *range;
+ struct cpuid_func *func;
+ struct subleaf *leaf;
+ u32 index;
+ u32 sub;
+ char buffer[512];
+ char *buf;
+ /*
+ * Tokens:
+ * 1. leaf
+ * 2. subleaf
+ * 3. register
+ * 4. bits
+ * 5. short name
+ * 6. long detail
+ */
+ char *tokens[6];
+ struct reg_desc *reg;
+ struct bits_desc *bdesc;
+ int reg_index;
+ char *start, *end;
+
+ /* Skip comments and NULL line */
+ if (line[0] == '#' || line[0] == '\n')
+ return 0;
+
+ strncpy(buffer, line, 511);
+ buffer[511] = 0;
+ str = buffer;
+ for (i = 0; i < 5; i++) {
+ tokens[i] = strtok(str, ",");
+ if (!tokens[i])
+ goto err_exit;
+ str = NULL;
+ }
+ tokens[5] = strtok(str, "\n");
+ if (!tokens[5])
+ goto err_exit;
+
+ /* index/main-leaf */
+ index = strtoull(tokens[0], NULL, 0);
+
+ if (index & 0x80000000)
+ range = leafs_ext;
+ else
+ range = leafs_basic;
+
+ index &= 0x7FFFFFFF;
+ /* Skip line parsing for non-existing indexes */
+ if ((int)index >= range->nr)
+ return -1;
+
+ func = &range->funcs[index];
+
+ /* Return if the index has no valid item on this platform */
+ if (!func->nr)
+ return 0;
+
+ /* subleaf */
+ sub = strtoul(tokens[1], NULL, 0);
+ if ((int)sub > func->nr)
+ return -1;
+
+ leaf = &func->leafs[sub];
+ buf = tokens[2];
+
+ if (strcasestr(buf, "EAX"))
+ reg_index = R_EAX;
+ else if (strcasestr(buf, "EBX"))
+ reg_index = R_EBX;
+ else if (strcasestr(buf, "ECX"))
+ reg_index = R_ECX;
+ else if (strcasestr(buf, "EDX"))
+ reg_index = R_EDX;
+ else
+ goto err_exit;
+
+ reg = &leaf->info[reg_index];
+ bdesc = &reg->descs[reg->nr++];
+
+ /* bit flag or bits field */
+ buf = tokens[3];
+
+ end = strtok(buf, ":");
+ bdesc->end = strtoul(end, NULL, 0);
+ bdesc->start = bdesc->end;
+
+ /* start != NULL means it is bit fields */
+ start = strtok(NULL, ":");
+ if (start)
+ bdesc->start = strtoul(start, NULL, 0);
+
+ strcpy(bdesc->simp, tokens[4]);
+ strcpy(bdesc->detail, tokens[5]);
+ return 0;
+
+err_exit:
+ printf("Warning: wrong line format:\n");
+ printf("\tline[%d]: %s\n", flines, line);
+ return -1;
+}
+
+/* Parse csv file, and construct the array of all leafs and subleafs */
+static void parse_text(void)
+{
+ FILE *file;
+ char *filename, *line = NULL;
+ size_t len = 0;
+ int ret;
+
+ if (show_raw)
+ return;
+
+ filename = user_csv ? user_csv : def_csv;
+ file = fopen(filename, "r");
+ if (!file) {
+ /* Fallback to a csv in the same dir */
+ file = fopen("./cpuid.csv", "r");
+ }
+
+ if (!file) {
+ printf("Fail to open '%s'\n", filename);
+ return;
+ }
+
+ while (1) {
+ ret = getline(&line, &len, file);
+ flines++;
+ if (ret > 0)
+ parse_line(line);
+
+ if (feof(file))
+ break;
+ }
+
+ fclose(file);
+}
+
+
+/* Decode every eax/ebx/ecx/edx */
+static void decode_bits(u32 value, struct reg_desc *rdesc)
+{
+ struct bits_desc *bdesc;
+ int start, end, i;
+ u32 mask;
+
+ for (i = 0; i < rdesc->nr; i++) {
+ bdesc = &rdesc->descs[i];
+
+ start = bdesc->start;
+ end = bdesc->end;
+ if (start == end) {
+ /* single bit flag */
+ if (value & (1 << start))
+ printf("\t%-20s %s%s\n",
+ bdesc->simp,
+ show_details ? "-" : "",
+ show_details ? bdesc->detail : ""
+ );
+ } else {
+ /* bit fields */
+ if (show_flags_only)
+ continue;
+
+ mask = ((u64)1 << (end - start + 1)) - 1;
+ printf("\t%-20s\t: 0x%-8x\t%s%s\n",
+ bdesc->simp,
+ (value >> start) & mask,
+ show_details ? "-" : "",
+ show_details ? bdesc->detail : ""
+ );
+ }
+ }
+}
+
+static void show_leaf(struct subleaf *leaf)
+{
+ if (!leaf)
+ return;
+
+ if (show_raw)
+ leaf_print_raw(leaf);
+
+ decode_bits(leaf->eax, &leaf->info[R_EAX]);
+ decode_bits(leaf->ebx, &leaf->info[R_EBX]);
+ decode_bits(leaf->ecx, &leaf->info[R_ECX]);
+ decode_bits(leaf->edx, &leaf->info[R_EDX]);
+}
+
+static void show_func(struct cpuid_func *func)
+{
+ int i;
+
+ if (!func)
+ return;
+
+ for (i = 0; i < func->nr; i++)
+ show_leaf(&func->leafs[i]);
+}
+
+static void show_range(struct cpuid_range *range)
+{
+ int i;
+
+ for (i = 0; i < range->nr; i++)
+ show_func(&range->funcs[i]);
+}
+
+static inline struct cpuid_func *index_to_func(u32 index)
+{
+ struct cpuid_range *range;
+
+ range = (index & 0x80000000) ? leafs_ext : leafs_basic;
+ index &= 0x7FFFFFFF;
+
+ if (((index & 0xFFFF) + 1) > (u32)range->nr) {
+ printf("ERR: invalid input index (0x%x)\n", index);
+ return NULL;
+ }
+ return &range->funcs[index];
+}
+
+static void show_info(void)
+{
+ struct cpuid_func *func;
+
+ if (show_raw) {
+ /* Show all of the raw output of 'cpuid' instr */
+ raw_dump_range(leafs_basic);
+ raw_dump_range(leafs_ext);
+ return;
+ }
+
+ if (user_index != 0xFFFFFFFF) {
+ /* Only show specific leaf/subleaf info */
+ func = index_to_func(user_index);
+ if (!func)
+ return;
+
+ /* Dump the raw data also */
+ show_raw = true;
+
+ if (user_sub != 0xFFFFFFFF) {
+ if (user_sub + 1 <= (u32)func->nr) {
+ show_leaf(&func->leafs[user_sub]);
+ return;
+ }
+
+ printf("ERR: invalid input subleaf (0x%x)\n", user_sub);
+ }
+
+ show_func(func);
+ return;
+ }
+
+ printf("CPU features:\n=============\n\n");
+ show_range(leafs_basic);
+ show_range(leafs_ext);
+}
+
+static void setup_platform_cpuid(void)
+{
+ u32 eax, ebx, ecx, edx;
+
+ /* Check vendor */
+ eax = ebx = ecx = edx = 0;
+ cpuid(&eax, &ebx, &ecx, &edx);
+
+ /* "htuA" */
+ if (ebx == 0x68747541)
+ is_amd = true;
+
+ /* Setup leafs for the basic and extended range */
+ leafs_basic = setup_cpuid_range(0x0);
+ leafs_ext = setup_cpuid_range(0x80000000);
+}
+
+static void usage(void)
+{
+ printf("kcpuid [-abdfhr] [-l leaf] [-s subleaf]\n"
+ "\t-a|--all Show both bit flags and complex bit fields info\n"
+ "\t-b|--bitflags Show boolean flags only\n"
+ "\t-d|--detail Show details of the flag/fields (default)\n"
+ "\t-f|--file Specify the cpuid csv file\n"
+ "\t-h|--help Show usage info\n"
+ "\t-l|--leaf=index Specify the leaf you want to check\n"
+ "\t-r|--raw Show raw cpuid data\n"
+ "\t-s|--subleaf=sub Specify the subleaf you want to check\n"
+ );
+}
+
+static struct option opts[] = {
+ { "all", no_argument, NULL, 'a' }, /* show both bit flags and fields */
+ { "bitflags", no_argument, NULL, 'b' }, /* only show bit flags, default on */
+ { "detail", no_argument, NULL, 'd' }, /* show detail descriptions */
+ { "file", required_argument, NULL, 'f' }, /* use user's cpuid file */
+ { "help", no_argument, NULL, 'h'}, /* show usage */
+ { "leaf", required_argument, NULL, 'l'}, /* only check a specific leaf */
+ { "raw", no_argument, NULL, 'r'}, /* show raw CPUID leaf data */
+ { "subleaf", required_argument, NULL, 's'}, /* check a specific subleaf */
+ { NULL, 0, NULL, 0 }
+};
+
+static int parse_options(int argc, char *argv[])
+{
+ int c;
+
+ while ((c = getopt_long(argc, argv, "abdf:hl:rs:",
+ opts, NULL)) != -1)
+ switch (c) {
+ case 'a':
+ show_flags_only = false;
+ break;
+ case 'b':
+ show_flags_only = true;
+ break;
+ case 'd':
+ show_details = true;
+ break;
+ case 'f':
+ user_csv = optarg;
+ break;
+ case 'h':
+ usage();
+ exit(1);
+ break;
+ case 'l':
+ /* main leaf */
+ user_index = strtoul(optarg, NULL, 0);
+ break;
+ case 'r':
+ show_raw = true;
+ break;
+ case 's':
+ /* subleaf */
+ user_sub = strtoul(optarg, NULL, 0);
+ break;
+ default:
+ printf("%s: Invalid option '%c'\n", argv[0], optopt);
+ return -1;
+ }
+
+ return 0;
+}
+
+/*
+ * Do 4 things in turn:
+ * 1. Parse user options
+ * 2. Parse and store all the CPUID leaf data supported on this platform
+ * 3. Parse the csv file, while skipping leafs which are not available
+ * on this platform
+ * 4. Print leafs info based on user options
+ */
+int main(int argc, char *argv[])
+{
+ if (parse_options(argc, argv))
+ return -1;
+
+ /* Setup the cpuid leafs of current platform */
+ setup_platform_cpuid();
+
+ /* Read and parse the 'cpuid.csv' */
+ parse_text();
+
+ show_info();
+ return 0;
+}
\ No newline at end of file
--
2.30.0
[PATCH kernel-4.19 1/3] net: hns3: use ae_dev->ops->reset_event to do reset.
by Yang Yingliang 25 Oct '21
From: Yonglong Liu <liuyonglong(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
----------------------------
IT products want to know whether a reset has happened, so we need to
use ae_dev->ops->reset_event to notify them.
Signed-off-by: Yonglong Liu <liuyonglong(a)huawei.com>
Reviewed-by: li yongxin <liyongxin1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 98a1d3d2870a9..8507eb60450fe 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -3933,6 +3933,7 @@ static void hclge_set_def_reset_request(struct hnae3_ae_dev *ae_dev,
static void hclge_reset_timer(struct timer_list *t)
{
struct hclge_dev *hdev = from_timer(hdev, t, reset_timer);
+ struct hnae3_ae_dev *ae_dev = pci_get_drvdata(hdev->pdev);
/* if default_reset_request has no value, it means that this reset
* request has already be handled, so just return here
@@ -3942,7 +3943,9 @@ static void hclge_reset_timer(struct timer_list *t)
dev_info(&hdev->pdev->dev,
"triggering reset in reset timer\n");
- hclge_reset_event(hdev->pdev, NULL);
+
+ if (ae_dev->ops->reset_event)
+ ae_dev->ops->reset_event(hdev->pdev, NULL);
}
static bool hclge_reset_end(struct hnae3_handle *handle, bool done)
--
2.25.1
[PATCH kernel-4.19] media: firewire: firedtv-avc: fix a buffer overflow in avc_ca_pmt()
by Yang Yingliang 25 Oct '21
From: Dan Carpenter <dan.carpenter(a)oracle.com>
mainline inclusion
from mainline-v5.16
commit 35d2969ea3c7d32aee78066b1f3cf61a0d935a4e
category: bugfix
bugzilla: NA
CVE: CVE-2021-42739
-------------------------------------------------
The bounds checking in avc_ca_pmt() is not strict enough. It should
be checking "read_pos + 4" because it's reading 5 bytes. If the
"es_info_length" is non-zero then it reads a 6th byte so there needs to
be an additional check for that.
I also added checks for the "write_pos". I don't think these are
required because "read_pos" and "write_pos" are tied together so
checking one ought to be enough. But they make the code easier to
understand for me. The check on write_pos is:
if (write_pos + 4 >= sizeof(c->operand) - 4) {
The first "+ 4" is because we're writing 5 bytes and the last " - 4"
is to leave space for the CRC.
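For illustration only, here is a minimal user-space sketch of the guarded copy
pattern described above; the buffer names, sizes and the copy_entry() helper are
invented for this example and are not the driver's actual code:
#include <stddef.h>
#include <stdint.h>

/* Copy one 5-byte stream entry from msg[] into out[], refusing to read past
 * `length` or to write into the last 4 bytes of out[] (reserved for the CRC),
 * mirroring the "read_pos + 4" and "write_pos + 4" checks in the fix.
 * Assumes out_size is larger than 8 so the size arithmetic cannot wrap. */
static int copy_entry(const uint8_t *msg, size_t length, size_t *read_pos,
		      uint8_t *out, size_t out_size, size_t *write_pos)
{
	if (*read_pos + 4 >= length)		/* need 5 readable bytes */
		return -1;
	if (*write_pos + 4 >= out_size - 4)	/* need 5 bytes plus CRC room */
		return -1;
	for (int i = 0; i < 5; i++)
		out[(*write_pos)++] = msg[(*read_pos)++];
	return 0;
}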
The other problem is that "length" can be invalid. It comes from
"data_length" in fdtv_ca_pmt().
Cc: stable(a)vger.kernel.org
Reported-by: Luo Likang <luolikang(a)nsfocus.com>
Signed-off-by: Dan Carpenter <dan.carpenter(a)oracle.com>
Signed-off-by: Hans Verkuil <hverkuil-cisco(a)xs4all.nl>
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei(a)kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/media/firewire/firedtv-avc.c | 14 +++++++++++---
drivers/media/firewire/firedtv-ci.c | 2 ++
2 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/drivers/media/firewire/firedtv-avc.c b/drivers/media/firewire/firedtv-avc.c
index 3ef5df1648d77..8c31cf90c5907 100644
--- a/drivers/media/firewire/firedtv-avc.c
+++ b/drivers/media/firewire/firedtv-avc.c
@@ -1169,7 +1169,11 @@ int avc_ca_pmt(struct firedtv *fdtv, char *msg, int length)
read_pos += program_info_length;
write_pos += program_info_length;
}
- while (read_pos < length) {
+ while (read_pos + 4 < length) {
+ if (write_pos + 4 >= sizeof(c->operand) - 4) {
+ ret = -EINVAL;
+ goto out;
+ }
c->operand[write_pos++] = msg[read_pos++];
c->operand[write_pos++] = msg[read_pos++];
c->operand[write_pos++] = msg[read_pos++];
@@ -1181,13 +1185,17 @@ int avc_ca_pmt(struct firedtv *fdtv, char *msg, int length)
c->operand[write_pos++] = es_info_length >> 8;
c->operand[write_pos++] = es_info_length & 0xff;
if (es_info_length > 0) {
+ if (read_pos >= length) {
+ ret = -EINVAL;
+ goto out;
+ }
pmt_cmd_id = msg[read_pos++];
if (pmt_cmd_id != 1 && pmt_cmd_id != 4)
dev_err(fdtv->device, "invalid pmt_cmd_id %d at stream level\n",
pmt_cmd_id);
- if (es_info_length > sizeof(c->operand) - 4 -
- write_pos) {
+ if (es_info_length > sizeof(c->operand) - 4 - write_pos ||
+ es_info_length > length - read_pos) {
ret = -EINVAL;
goto out;
}
diff --git a/drivers/media/firewire/firedtv-ci.c b/drivers/media/firewire/firedtv-ci.c
index 8dc5a7495abee..14f779812d250 100644
--- a/drivers/media/firewire/firedtv-ci.c
+++ b/drivers/media/firewire/firedtv-ci.c
@@ -138,6 +138,8 @@ static int fdtv_ca_pmt(struct firedtv *fdtv, void *arg)
} else {
data_length = msg->msg[3];
}
+ if (data_length > sizeof(msg->msg) - data_pos)
+ return -EINVAL;
return avc_ca_pmt(fdtv, &msg->msg[data_pos], data_length);
}
--
2.25.1
From: wuhuaye <wuhuaye(a)huawei.com>
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4DLVR
CVE: NA
-------------------------------------------------
Hi everyone,
This patch adds some special GPIO register configuration.
It is necessary to configure some GPIO registers
when the gpio-dwapb function is enabled on the Ascend platform.
Signed-off-by: wuhuaye <wuhuaye(a)huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/mm/init.c | 17 ++++++++++
drivers/gpio/gpio-dwapb.c | 66 ++++++++++++++++++++++++++++++++++++++-
2 files changed, 82 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 5449ae2d26bee..55fc6a0206796 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -941,6 +941,12 @@ __setup("keepinitrd", keepinitrd_setup);
void ascend_enable_all_features(void)
{
+#ifdef CONFIG_GPIO_DWAPB
+ extern bool enable_ascend_gpio_dwapb;
+
+ enable_ascend_gpio_dwapb = true;
+#endif
+
if (IS_ENABLED(CONFIG_ASCEND_DVPP_MMAP))
enable_mmap_dvpp = 1;
@@ -974,6 +980,17 @@ static int __init ascend_enable_setup(char *__unused)
}
early_param("ascend_enable_all", ascend_enable_setup);
+
+static int __init ascend_mini_enable_setup(char *s)
+{
+#ifdef CONFIG_GPIO_DWAPB
+ extern bool enable_ascend_mini_gpio_dwapb;
+
+ enable_ascend_mini_gpio_dwapb = true;
+#endif
+ return 1;
+}
+__setup("ascend_mini_enable", ascend_mini_enable_setup);
#endif
diff --git a/drivers/gpio/gpio-dwapb.c b/drivers/gpio/gpio-dwapb.c
index 044888fd96a1f..2cdbb4a11075f 100644
--- a/drivers/gpio/gpio-dwapb.c
+++ b/drivers/gpio/gpio-dwapb.c
@@ -50,6 +50,9 @@
#define GPIO_EXT_PORTB 0x54
#define GPIO_EXT_PORTC 0x58
#define GPIO_EXT_PORTD 0x5c
+#define GPIO_INTCOMB_MASK 0xffc
+#define GPIO_INT_MASK_REG 0x3804
+#define MEM_PERI_SUBCTRL_IOBASE 0x1
#define DWAPB_MAX_PORTS 4
#define GPIO_EXT_PORT_STRIDE 0x04 /* register stride 32 bits */
@@ -64,6 +67,8 @@
#define GPIO_INTSTATUS_V2 0x3c
#define GPIO_PORTA_EOI_V2 0x40
+bool enable_ascend_mini_gpio_dwapb;
+bool enable_ascend_gpio_dwapb;
struct dwapb_gpio;
#ifdef CONFIG_PM_SLEEP
@@ -77,10 +82,11 @@ struct dwapb_context {
u32 int_type;
u32 int_pol;
u32 int_deb;
+ u32 int_comb_mask;
u32 wake_en;
};
#endif
-
+static void __iomem *peri_subctrl_base_addr;
struct dwapb_gpio_port {
struct gpio_chip gc;
bool is_registered;
@@ -232,6 +238,11 @@ static void dwapb_irq_enable(struct irq_data *d)
val = dwapb_read(gpio, GPIO_INTEN);
val |= BIT(d->hwirq);
dwapb_write(gpio, GPIO_INTEN, val);
+ if (enable_ascend_gpio_dwapb) {
+ val = dwapb_read(gpio, GPIO_INTMASK);
+ val &= ~BIT(d->hwirq);
+ dwapb_write(gpio, GPIO_INTMASK, val);
+ }
spin_unlock_irqrestore(&gc->bgpio_lock, flags);
}
@@ -244,6 +255,11 @@ static void dwapb_irq_disable(struct irq_data *d)
u32 val;
spin_lock_irqsave(&gc->bgpio_lock, flags);
+ if (enable_ascend_gpio_dwapb) {
+ val = dwapb_read(gpio, GPIO_INTMASK);
+ val |= BIT(d->hwirq);
+ dwapb_write(gpio, GPIO_INTMASK, val);
+ }
val = dwapb_read(gpio, GPIO_INTEN);
val &= ~BIT(d->hwirq);
dwapb_write(gpio, GPIO_INTEN, val);
@@ -393,6 +409,7 @@ static void dwapb_configure_irqs(struct dwapb_gpio *gpio,
unsigned int hwirq, ngpio = gc->ngpio;
struct irq_chip_type *ct;
int err, i;
+ u32 val;
gpio->domain = irq_domain_create_linear(fwnode, ngpio,
&irq_generic_chip_ops, gpio);
@@ -470,6 +487,12 @@ static void dwapb_configure_irqs(struct dwapb_gpio *gpio,
irq_create_mapping(gpio->domain, hwirq);
port->gc.to_irq = dwapb_gpio_to_irq;
+
+ if (enable_ascend_gpio_dwapb) {
+ val = dwapb_read(gpio, GPIO_INTCOMB_MASK);
+ val |= BIT(0);
+ dwapb_write(gpio, GPIO_INTCOMB_MASK, val);
+ }
}
static void dwapb_irq_teardown(struct dwapb_gpio *gpio)
@@ -478,6 +501,7 @@ static void dwapb_irq_teardown(struct dwapb_gpio *gpio)
struct gpio_chip *gc = &port->gc;
unsigned int ngpio = gc->ngpio;
irq_hw_number_t hwirq;
+ u32 val;
if (!gpio->domain)
return;
@@ -487,6 +511,12 @@ static void dwapb_irq_teardown(struct dwapb_gpio *gpio)
irq_domain_remove(gpio->domain);
gpio->domain = NULL;
+
+ if (enable_ascend_gpio_dwapb) {
+ val = dwapb_read(gpio, GPIO_INTCOMB_MASK);
+ val &= ~BIT(0);
+ dwapb_write(gpio, GPIO_INTCOMB_MASK, val);
+ }
}
static int dwapb_gpio_add_port(struct dwapb_gpio *gpio,
@@ -660,6 +690,22 @@ static int dwapb_gpio_probe(struct platform_device *pdev)
int err;
struct device *dev = &pdev->dev;
struct dwapb_platform_data *pdata = dev_get_platdata(dev);
+ struct device_node *np = dev->of_node;
+ unsigned int value;
+
+ if (enable_ascend_mini_gpio_dwapb && enable_ascend_gpio_dwapb) {
+ peri_subctrl_base_addr = of_iomap(np, MEM_PERI_SUBCTRL_IOBASE);
+ if (!peri_subctrl_base_addr) {
+ dev_err(&pdev->dev, "sysctrl iomap not find!\n");
+ } else {
+ dev_dbg(&pdev->dev, "sysctrl iomap find!\n");
+ value = readl(peri_subctrl_base_addr +
+ GPIO_INT_MASK_REG);
+ value &= ~1UL;
+ writel(value, peri_subctrl_base_addr +
+ GPIO_INT_MASK_REG);
+ }
+ }
if (!pdata) {
pdata = dwapb_gpio_get_pdata(dev);
@@ -742,6 +788,10 @@ static int dwapb_gpio_remove(struct platform_device *pdev)
reset_control_assert(gpio->rst);
clk_disable_unprepare(gpio->clk);
+ if ((peri_subctrl_base_addr != NULL) && enable_ascend_mini_gpio_dwapb &&
+ enable_ascend_gpio_dwapb)
+ iounmap(peri_subctrl_base_addr);
+
return 0;
}
@@ -778,6 +828,9 @@ static int dwapb_gpio_suspend(struct device *dev)
ctx->int_pol = dwapb_read(gpio, GPIO_INT_POLARITY);
ctx->int_type = dwapb_read(gpio, GPIO_INTTYPE_LEVEL);
ctx->int_deb = dwapb_read(gpio, GPIO_PORTA_DEBOUNCE);
+ if (enable_ascend_gpio_dwapb)
+ ctx->int_comb_mask =
+ dwapb_read(gpio, GPIO_INTCOMB_MASK);
/* Mask out interrupts */
dwapb_write(gpio, GPIO_INTMASK,
@@ -798,6 +851,7 @@ static int dwapb_gpio_resume(struct device *dev)
struct gpio_chip *gc = &gpio->ports[0].gc;
unsigned long flags;
int i;
+ unsigned int value;
if (!IS_ERR(gpio->clk))
clk_prepare_enable(gpio->clk);
@@ -826,6 +880,9 @@ static int dwapb_gpio_resume(struct device *dev)
dwapb_write(gpio, GPIO_PORTA_DEBOUNCE, ctx->int_deb);
dwapb_write(gpio, GPIO_INTEN, ctx->int_en);
dwapb_write(gpio, GPIO_INTMASK, ctx->int_mask);
+ if (enable_ascend_gpio_dwapb)
+ dwapb_write(gpio,
+ GPIO_INTCOMB_MASK, ctx->int_comb_mask);
/* Clear out spurious interrupts */
dwapb_write(gpio, GPIO_PORTA_EOI, 0xffffffff);
@@ -833,6 +890,13 @@ static int dwapb_gpio_resume(struct device *dev)
}
spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ if ((peri_subctrl_base_addr != NULL) && enable_ascend_mini_gpio_dwapb &&
+ enable_ascend_gpio_dwapb) {
+ value = readl(peri_subctrl_base_addr + GPIO_INT_MASK_REG);
+ value &= ~1UL;
+ writel(value, peri_subctrl_base_addr + GPIO_INT_MASK_REG);
+ }
+
return 0;
}
#endif
--
2.25.1
From: wuhuaye <wuhuaye(a)huawei.com>
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4DLVR
CVE: NA
-------------------------------------------------
Hi everyone,
This patch adds some special GPIO register configuration.
It is necessary to configure some GPIO registers
when the gpio-dwapb function is enabled on the Ascend platform.
Signed-off-by: wuhuaye <wuhuaye(a)huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/mm/init.c | 17 ++++++++++
drivers/gpio/gpio-dwapb.c | 66 ++++++++++++++++++++++++++++++++++++++-
2 files changed, 82 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 2399a257eaf33..8cdf92626c2c6 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -944,6 +944,12 @@ __setup("keepinitrd", keepinitrd_setup);
void ascend_enable_all_features(void)
{
+#ifdef CONFIG_GPIO_DWAPB
+ extern bool enable_ascend_gpio_dwapb;
+
+ enable_ascend_gpio_dwapb = true;
+#endif
+
if (IS_ENABLED(CONFIG_ASCEND_DVPP_MMAP))
enable_mmap_dvpp = 1;
@@ -981,6 +987,17 @@ static int __init ascend_enable_setup(char *__unused)
}
early_param("ascend_enable_all", ascend_enable_setup);
+
+static int __init ascend_mini_enable_setup(char *s)
+{
+#ifdef CONFIG_GPIO_DWAPB
+ extern bool enable_ascend_mini_gpio_dwapb;
+
+ enable_ascend_mini_gpio_dwapb = true;
+#endif
+ return 1;
+}
+__setup("ascend_mini_enable", ascend_mini_enable_setup);
#endif
diff --git a/drivers/gpio/gpio-dwapb.c b/drivers/gpio/gpio-dwapb.c
index 2a56efced7988..4c332f0ed8524 100644
--- a/drivers/gpio/gpio-dwapb.c
+++ b/drivers/gpio/gpio-dwapb.c
@@ -50,6 +50,9 @@
#define GPIO_EXT_PORTB 0x54
#define GPIO_EXT_PORTC 0x58
#define GPIO_EXT_PORTD 0x5c
+#define GPIO_INTCOMB_MASK 0xffc
+#define GPIO_INT_MASK_REG 0x3804
+#define MEM_PERI_SUBCTRL_IOBASE 0x1
#define DWAPB_DRIVER_NAME "gpio-dwapb"
#define DWAPB_MAX_PORTS 4
@@ -66,6 +69,8 @@
#define GPIO_INTSTATUS_V2 0x3c
#define GPIO_PORTA_EOI_V2 0x40
+bool enable_ascend_mini_gpio_dwapb;
+bool enable_ascend_gpio_dwapb;
struct dwapb_gpio;
#ifdef CONFIG_PM_SLEEP
@@ -79,10 +84,11 @@ struct dwapb_context {
u32 int_type;
u32 int_pol;
u32 int_deb;
+ u32 int_comb_mask;
u32 wake_en;
};
#endif
-
+static void __iomem *peri_subctrl_base_addr;
struct dwapb_gpio_port {
struct gpio_chip gc;
bool is_registered;
@@ -234,6 +240,11 @@ static void dwapb_irq_enable(struct irq_data *d)
val = dwapb_read(gpio, GPIO_INTEN);
val |= BIT(d->hwirq);
dwapb_write(gpio, GPIO_INTEN, val);
+ if (enable_ascend_gpio_dwapb) {
+ val = dwapb_read(gpio, GPIO_INTMASK);
+ val &= ~BIT(d->hwirq);
+ dwapb_write(gpio, GPIO_INTMASK, val);
+ }
spin_unlock_irqrestore(&gc->bgpio_lock, flags);
}
@@ -246,6 +257,11 @@ static void dwapb_irq_disable(struct irq_data *d)
u32 val;
spin_lock_irqsave(&gc->bgpio_lock, flags);
+ if (enable_ascend_gpio_dwapb) {
+ val = dwapb_read(gpio, GPIO_INTMASK);
+ val |= BIT(d->hwirq);
+ dwapb_write(gpio, GPIO_INTMASK, val);
+ }
val = dwapb_read(gpio, GPIO_INTEN);
val &= ~BIT(d->hwirq);
dwapb_write(gpio, GPIO_INTEN, val);
@@ -395,6 +411,7 @@ static void dwapb_configure_irqs(struct dwapb_gpio *gpio,
unsigned int hwirq, ngpio = gc->ngpio;
struct irq_chip_type *ct;
int err, i;
+ u32 val;
gpio->domain = irq_domain_create_linear(fwnode, ngpio,
&irq_generic_chip_ops, gpio);
@@ -472,6 +489,12 @@ static void dwapb_configure_irqs(struct dwapb_gpio *gpio,
irq_create_mapping(gpio->domain, hwirq);
port->gc.to_irq = dwapb_gpio_to_irq;
+
+ if (enable_ascend_gpio_dwapb) {
+ val = dwapb_read(gpio, GPIO_INTCOMB_MASK);
+ val |= BIT(0);
+ dwapb_write(gpio, GPIO_INTCOMB_MASK, val);
+ }
}
static void dwapb_irq_teardown(struct dwapb_gpio *gpio)
@@ -480,6 +503,7 @@ static void dwapb_irq_teardown(struct dwapb_gpio *gpio)
struct gpio_chip *gc = &port->gc;
unsigned int ngpio = gc->ngpio;
irq_hw_number_t hwirq;
+ u32 val;
if (!gpio->domain)
return;
@@ -489,6 +513,12 @@ static void dwapb_irq_teardown(struct dwapb_gpio *gpio)
irq_domain_remove(gpio->domain);
gpio->domain = NULL;
+
+ if (enable_ascend_gpio_dwapb) {
+ val = dwapb_read(gpio, GPIO_INTCOMB_MASK);
+ val &= ~BIT(0);
+ dwapb_write(gpio, GPIO_INTCOMB_MASK, val);
+ }
}
static int dwapb_gpio_add_port(struct dwapb_gpio *gpio,
@@ -669,6 +699,22 @@ static int dwapb_gpio_probe(struct platform_device *pdev)
int err;
struct device *dev = &pdev->dev;
struct dwapb_platform_data *pdata = dev_get_platdata(dev);
+ struct device_node *np = dev->of_node;
+ unsigned int value;
+
+ if (enable_ascend_mini_gpio_dwapb && enable_ascend_gpio_dwapb) {
+ peri_subctrl_base_addr = of_iomap(np, MEM_PERI_SUBCTRL_IOBASE);
+ if (!peri_subctrl_base_addr) {
+ dev_err(&pdev->dev, "sysctrl iomap not find!\n");
+ } else {
+ dev_dbg(&pdev->dev, "sysctrl iomap find!\n");
+ value = readl(peri_subctrl_base_addr +
+ GPIO_INT_MASK_REG);
+ value &= ~1UL;
+ writel(value, peri_subctrl_base_addr +
+ GPIO_INT_MASK_REG);
+ }
+ }
if (!pdata) {
pdata = dwapb_gpio_get_pdata(dev);
@@ -751,6 +797,10 @@ static int dwapb_gpio_remove(struct platform_device *pdev)
reset_control_assert(gpio->rst);
clk_disable_unprepare(gpio->clk);
+ if ((peri_subctrl_base_addr != NULL) && enable_ascend_mini_gpio_dwapb &&
+ enable_ascend_gpio_dwapb)
+ iounmap(peri_subctrl_base_addr);
+
return 0;
}
@@ -787,6 +837,9 @@ static int dwapb_gpio_suspend(struct device *dev)
ctx->int_pol = dwapb_read(gpio, GPIO_INT_POLARITY);
ctx->int_type = dwapb_read(gpio, GPIO_INTTYPE_LEVEL);
ctx->int_deb = dwapb_read(gpio, GPIO_PORTA_DEBOUNCE);
+ if (enable_ascend_gpio_dwapb)
+ ctx->int_comb_mask =
+ dwapb_read(gpio, GPIO_INTCOMB_MASK);
/* Mask out interrupts */
dwapb_write(gpio, GPIO_INTMASK,
@@ -807,6 +860,7 @@ static int dwapb_gpio_resume(struct device *dev)
struct gpio_chip *gc = &gpio->ports[0].gc;
unsigned long flags;
int i;
+ unsigned int value;
if (!IS_ERR(gpio->clk))
clk_prepare_enable(gpio->clk);
@@ -835,6 +889,9 @@ static int dwapb_gpio_resume(struct device *dev)
dwapb_write(gpio, GPIO_PORTA_DEBOUNCE, ctx->int_deb);
dwapb_write(gpio, GPIO_INTEN, ctx->int_en);
dwapb_write(gpio, GPIO_INTMASK, ctx->int_mask);
+ if (enable_ascend_gpio_dwapb)
+ dwapb_write(gpio,
+ GPIO_INTCOMB_MASK, ctx->int_comb_mask);
/* Clear out spurious interrupts */
dwapb_write(gpio, GPIO_PORTA_EOI, 0xffffffff);
@@ -842,6 +899,13 @@ static int dwapb_gpio_resume(struct device *dev)
}
spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ if ((peri_subctrl_base_addr != NULL) && enable_ascend_mini_gpio_dwapb &&
+ enable_ascend_gpio_dwapb) {
+ value = readl(peri_subctrl_base_addr + GPIO_INT_MASK_REG);
+ value &= ~1UL;
+ writel(value, peri_subctrl_base_addr + GPIO_INT_MASK_REG);
+ }
+
return 0;
}
#endif
--
2.25.1
[PATCH openEuler-1.0-LTS 1/9] iommu/arm-smmu-v3: Add support to configure mpam in STE/CD context
by Yang Yingliang 25 Oct '21
From: Xingang Wang <wangxingang5(a)huawei.com>
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I49RB2
CVE: NA
-------------------------------------------------
To support limiting the QoS of a device, the partid and pmg need to be
set in the SMMU STE/CD context. This introduces support for the SMMU
MPAM feature and adds an interface to set the MPAM configuration in the
STE/CD.
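As a usage illustration only, a consumer driver could call the new interface
roughly as sketched below; the helper name and the ssid/partid/pmg/s1mpam
values are placeholders chosen for the example, not part of this series:
#include <linux/ascend_smmu.h>
#include <linux/device.h>

/* Hypothetical helper: tag DMA from `dev` (default substream id 0) with
 * MPAM partition 3 and performance monitoring group 1, with s1mpam set. */
static int example_tag_device_qos(struct device *dev)
{
	int ret = arm_smmu_set_dev_mpam(dev, 0, 3, 1, 1);

	if (ret)
		dev_err(dev, "failed to set SMMU MPAM configuration: %d\n", ret);
	return ret;
}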
Signed-off-by: Xingang Wang <wangxingang5(a)huawei.com>
Signed-off-by: Rui Zhu <zhurui3(a)huawei.com>
Reviewed-by: Yingtai Xie <xieyingtai(a)huawei.com>
Reviewed-by: Zhen Lei <thunder.leizhen(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/iommu/arm-smmu-v3-context.c | 25 +++++
drivers/iommu/arm-smmu-v3.c | 151 ++++++++++++++++++++++++++++
include/linux/ascend_smmu.h | 9 ++
3 files changed, 185 insertions(+)
create mode 100644 include/linux/ascend_smmu.h
diff --git a/drivers/iommu/arm-smmu-v3-context.c b/drivers/iommu/arm-smmu-v3-context.c
index 2351de86d31f1..b30a93e076608 100644
--- a/drivers/iommu/arm-smmu-v3-context.c
+++ b/drivers/iommu/arm-smmu-v3-context.c
@@ -66,6 +66,9 @@
#define CTXDESC_CD_1_TTB0_MASK GENMASK_ULL(51, 4)
+#define CTXDESC_CD_5_PARTID_MASK GENMASK_ULL(47, 32)
+#define CTXDESC_CD_5_PMG_MASK GENMASK_ULL(55, 48)
+
/* Convert between AArch64 (CPU) TCR format and SMMU CD format */
#define ARM_SMMU_TCR2CD(tcr, fld) FIELD_PREP(CTXDESC_CD_0_TCR_##fld, \
FIELD_GET(ARM64_TCR_##fld, tcr))
@@ -563,6 +566,28 @@ static int arm_smmu_set_cd(struct iommu_pasid_table_ops *ops, int pasid,
return arm_smmu_write_ctx_desc(tbl, pasid, cd);
}
+int arm_smmu_set_cd_mpam(struct iommu_pasid_table_ops *ops,
+ int ssid, int partid, int pmg)
+{
+ struct arm_smmu_cd_tables *tbl = pasid_ops_to_tables(ops);
+ u64 val;
+ __le64 *cdptr = arm_smmu_get_cd_ptr(tbl, ssid);
+
+ if (!cdptr)
+ return -ENOMEM;
+
+ val = le64_to_cpu(cdptr[5]);
+ val &= ~CTXDESC_CD_5_PARTID_MASK;
+ val |= FIELD_PREP(CTXDESC_CD_5_PARTID_MASK, partid);
+ val &= ~CTXDESC_CD_5_PMG_MASK;
+ val |= FIELD_PREP(CTXDESC_CD_5_PMG_MASK, pmg);
+ WRITE_ONCE(cdptr[5], cpu_to_le64(val));
+
+ iommu_pasid_flush(&tbl->pasid, ssid, true);
+
+ return 0;
+}
+
static void arm_smmu_clear_cd(struct iommu_pasid_table_ops *ops, int pasid,
struct iommu_pasid_entry *entry)
{
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 6731b47558f4d..c282fc129222f 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -88,6 +88,10 @@
#define IDR1_SSIDSIZE GENMASK(10, 6)
#define IDR1_SIDSIZE GENMASK(5, 0)
+#define ARM_SMMU_IDR3 0xc
+#define IDR3_MPAM (1 << 7)
+#define ARM_SMMU_IDR3_CFG 0x140C
+
#define ARM_SMMU_IDR5 0x14
#define IDR5_STALL_MAX GENMASK(31, 16)
#define IDR5_GRAN64K (1 << 6)
@@ -186,6 +190,10 @@
#define ARM_SMMU_PRIQ_IRQ_CFG1 0xd8
#define ARM_SMMU_PRIQ_IRQ_CFG2 0xdc
+#define ARM_SMMU_MPAMIDR 0x130
+#define MPAMIDR_PMG_MAX GENMASK(23, 16)
+#define MPAMIDR_PARTID_MAX GENMASK(15, 0)
+
/* Common MSI config fields */
#define MSI_CFG0_ADDR_MASK GENMASK_ULL(51, 2)
#define MSI_CFG2_SH GENMASK(5, 4)
@@ -250,6 +258,7 @@
#define STRTAB_STE_1_S1COR GENMASK_ULL(5, 4)
#define STRTAB_STE_1_S1CSH GENMASK_ULL(7, 6)
+#define STRTAB_STE_1_S1MPAM (1UL << 26)
#define STRTAB_STE_1_S1STALLD (1UL << 27)
#define STRTAB_STE_1_EATS GENMASK_ULL(29, 28)
@@ -273,6 +282,11 @@
#define STRTAB_STE_3_S2TTB_MASK GENMASK_ULL(51, 4)
+#define STRTAB_STE_4_PARTID_MASK GENMASK_ULL(31, 16)
+
+#define STRTAB_STE_5_MPAM_NS (1UL << 8)
+#define STRTAB_STE_5_PMG_MASK GENMASK_ULL(7, 0)
+
/* Command queue */
#define CMDQ_ENT_SZ_SHIFT 4
#define CMDQ_ENT_DWORDS ((1 << CMDQ_ENT_SZ_SHIFT) >> 3)
@@ -634,6 +648,7 @@ struct arm_smmu_device {
#define ARM_SMMU_FEAT_SVA (1 << 17)
#define ARM_SMMU_FEAT_HA (1 << 18)
#define ARM_SMMU_FEAT_HD (1 << 19)
+#define ARM_SMMU_FEAT_MPAM (1 << 20)
u32 features;
#define ARM_SMMU_OPT_SKIP_PREFETCH (1 << 0)
@@ -672,6 +687,9 @@ struct arm_smmu_device {
struct mutex streams_mutex;
struct iopf_queue *iopf_queue;
+
+ unsigned int mpam_partid_max;
+ unsigned int mpam_pmg_max;
};
struct arm_smmu_stream {
@@ -3977,6 +3995,25 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
if (smmu->sid_bits <= STRTAB_SPLIT)
smmu->features &= ~ARM_SMMU_FEAT_2_LVL_STRTAB;
+ /* IDR3 */
+ reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3);
+
+ if (!(reg & IDR3_MPAM)) {
+ reg |= FIELD_PREP(IDR3_MPAM, 1);
+ writel(reg, smmu->base + ARM_SMMU_IDR3_CFG);
+ reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3);
+ if (!(reg & IDR3_MPAM))
+ dev_warn(smmu->dev, "enable smmu mpam failed\n");
+ }
+
+ if (reg & IDR3_MPAM) {
+ reg = readl_relaxed(smmu->base + ARM_SMMU_MPAMIDR);
+ smmu->mpam_partid_max = FIELD_GET(MPAMIDR_PARTID_MAX, reg);
+ smmu->mpam_pmg_max = FIELD_GET(MPAMIDR_PMG_MAX, reg);
+ if (smmu->mpam_partid_max || smmu->mpam_pmg_max)
+ smmu->features |= ARM_SMMU_FEAT_MPAM;
+ }
+
/* IDR5 */
reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5);
@@ -4124,6 +4161,120 @@ static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
return SZ_128K;
}
+static int arm_smmu_set_ste_mpam(struct arm_smmu_device *smmu,
+ int sid, int partid, int pmg, int s1mpam)
+{
+ u64 val;
+ __le64 *ste;
+
+ if (!arm_smmu_sid_in_range(smmu, sid))
+ return -ERANGE;
+
+ /* get ste ptr */
+ ste = arm_smmu_get_step_for_sid(smmu, sid);
+
+ /* write s1mpam to ste */
+ val = le64_to_cpu(ste[1]);
+ val &= ~STRTAB_STE_1_S1MPAM;
+ val |= FIELD_PREP(STRTAB_STE_1_S1MPAM, s1mpam);
+ WRITE_ONCE(ste[1], cpu_to_le64(val));
+
+ val = le64_to_cpu(ste[4]);
+ val &= ~STRTAB_STE_4_PARTID_MASK;
+ val |= FIELD_PREP(STRTAB_STE_4_PARTID_MASK, partid);
+ WRITE_ONCE(ste[4], cpu_to_le64(val));
+
+ val = le64_to_cpu(ste[5]);
+ val &= ~STRTAB_STE_5_PMG_MASK;
+ val |= FIELD_PREP(STRTAB_STE_5_PMG_MASK, pmg);
+ WRITE_ONCE(ste[5], cpu_to_le64(val));
+
+ arm_smmu_sync_ste_for_sid(smmu, sid);
+
+ return 0;
+}
+
+int arm_smmu_set_cd_mpam(struct iommu_pasid_table_ops *ops,
+ int ssid, int partid, int pmg);
+
+static int arm_smmu_set_mpam(struct arm_smmu_device *smmu,
+ int sid, int ssid, int partid, int pmg, int s1mpam)
+{
+ struct arm_smmu_master_data *master = arm_smmu_find_master(smmu, sid);
+ struct arm_smmu_s1_cfg *cfg = master ? master->ste.s1_cfg : NULL;
+ struct arm_smmu_domain *domain = master ? master->domain : NULL;
+ int ret;
+
+ struct arm_smmu_cmdq_ent prefetch_cmd = {
+ .opcode = CMDQ_OP_PREFETCH_CFG,
+ .prefetch = {
+ .sid = sid,
+ },
+ };
+
+ if (!(smmu->features & ARM_SMMU_FEAT_MPAM))
+ return -ENODEV;
+
+ if (WARN_ON(!domain))
+ return -EINVAL;
+
+ if (WARN_ON(!cfg))
+ return -EINVAL;
+
+ if (WARN_ON(ssid >= (1 << master->ssid_bits)))
+ return -E2BIG;
+
+ if (partid > smmu->mpam_partid_max || pmg > smmu->mpam_pmg_max) {
+ dev_err(smmu->dev,
+ "mpam rmid out of range: partid[0, %d] pmg[0, %d]\n",
+ smmu->mpam_partid_max, smmu->mpam_pmg_max);
+ return -ERANGE;
+ }
+
+ ret = arm_smmu_set_ste_mpam(smmu, sid, partid, pmg, s1mpam);
+ if (ret < 0) {
+ dev_err(smmu->dev, "set ste mpam configuration error %d\n",
+ ret);
+ return ret;
+ }
+
+ /* do not modify cd table which owned by guest */
+ if (domain->stage == ARM_SMMU_DOMAIN_NESTED) {
+ dev_err(smmu->dev,
+ "mpam: smmu cd is owned by guest, not modified\n");
+ return 0;
+ }
+
+ ret = arm_smmu_set_cd_mpam(cfg->ops, ssid, partid, pmg);
+ if (s1mpam && ret < 0) {
+ dev_err(smmu->dev, "set cd mpam configuration error %d\n",
+ ret);
+ return ret;
+ }
+
+ /* It's likely that we'll want to use the new STE soon */
+ if (!(smmu->options & ARM_SMMU_OPT_SKIP_PREFETCH))
+ arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd);
+
+ dev_info(smmu->dev, "partid %d, pmg %d\n", partid, pmg);
+
+ return 0;
+}
+
+/**
+ * arm_smmu_set_dev_mpam() - Set mpam configuration to SMMU STE/CD
+ */
+int arm_smmu_set_dev_mpam(struct device *dev, int ssid, int partid, int pmg,
+ int s1mpam)
+{
+ struct arm_smmu_master_data *master = dev->iommu_fwspec->iommu_priv;
+ struct arm_smmu_device *smmu = master->domain->smmu;
+ int sid = master->streams->id;
+
+ return arm_smmu_set_mpam(smmu, sid, ssid, partid, pmg, s1mpam);
+}
+EXPORT_SYMBOL(arm_smmu_set_dev_mpam);
+
static int arm_smmu_device_probe(struct platform_device *pdev)
{
int irq, ret;
diff --git a/include/linux/ascend_smmu.h b/include/linux/ascend_smmu.h
new file mode 100644
index 0000000000000..bd0edd25057e1
--- /dev/null
+++ b/include/linux/ascend_smmu.h
@@ -0,0 +1,9 @@
+#ifndef __LINUX_ASCEND_SMMU_H
+#define __LINUX_ASCEND_SMMU_H
+
+#include <linux/device.h>
+
+extern int arm_smmu_set_dev_mpam(struct device *dev, int ssid, int partid,
+ int pmg, int s1mpam);
+
+#endif /* __LINUX_ASCEND_SMMU_H */
--
2.25.1
[PATCH kernel-4.19 1/9] iommu/arm-smmu-v3: Add support to configure mpam in STE/CD context
by Yang Yingliang 25 Oct '21
From: Xingang Wang <wangxingang5(a)huawei.com>
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I49RB2
CVE: NA
-------------------------------------------------
To support limiting the QoS of a device, the partid and pmg need to be
set in the SMMU STE/CD context. This introduces support for the SMMU
MPAM feature and adds an interface to set the MPAM configuration in the
STE/CD.
Signed-off-by: Xingang Wang <wangxingang5(a)huawei.com>
Signed-off-by: Rui Zhu <zhurui3(a)huawei.com>
Reviewed-by: Yingtai Xie <xieyingtai(a)huawei.com>
Reviewed-by: Zhen Lei <thunder.leizhen(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/iommu/arm-smmu-v3-context.c | 25 +++++
drivers/iommu/arm-smmu-v3.c | 151 ++++++++++++++++++++++++++++
include/linux/ascend_smmu.h | 9 ++
3 files changed, 185 insertions(+)
create mode 100644 include/linux/ascend_smmu.h
diff --git a/drivers/iommu/arm-smmu-v3-context.c b/drivers/iommu/arm-smmu-v3-context.c
index 2351de86d31f1..b30a93e076608 100644
--- a/drivers/iommu/arm-smmu-v3-context.c
+++ b/drivers/iommu/arm-smmu-v3-context.c
@@ -66,6 +66,9 @@
#define CTXDESC_CD_1_TTB0_MASK GENMASK_ULL(51, 4)
+#define CTXDESC_CD_5_PARTID_MASK GENMASK_ULL(47, 32)
+#define CTXDESC_CD_5_PMG_MASK GENMASK_ULL(55, 48)
+
/* Convert between AArch64 (CPU) TCR format and SMMU CD format */
#define ARM_SMMU_TCR2CD(tcr, fld) FIELD_PREP(CTXDESC_CD_0_TCR_##fld, \
FIELD_GET(ARM64_TCR_##fld, tcr))
@@ -563,6 +566,28 @@ static int arm_smmu_set_cd(struct iommu_pasid_table_ops *ops, int pasid,
return arm_smmu_write_ctx_desc(tbl, pasid, cd);
}
+int arm_smmu_set_cd_mpam(struct iommu_pasid_table_ops *ops,
+ int ssid, int partid, int pmg)
+{
+ struct arm_smmu_cd_tables *tbl = pasid_ops_to_tables(ops);
+ u64 val;
+ __le64 *cdptr = arm_smmu_get_cd_ptr(tbl, ssid);
+
+ if (!cdptr)
+ return -ENOMEM;
+
+ val = le64_to_cpu(cdptr[5]);
+ val &= ~CTXDESC_CD_5_PARTID_MASK;
+ val |= FIELD_PREP(CTXDESC_CD_5_PARTID_MASK, partid);
+ val &= ~CTXDESC_CD_5_PMG_MASK;
+ val |= FIELD_PREP(CTXDESC_CD_5_PMG_MASK, pmg);
+ WRITE_ONCE(cdptr[5], cpu_to_le64(val));
+
+ iommu_pasid_flush(&tbl->pasid, ssid, true);
+
+ return 0;
+}
+
static void arm_smmu_clear_cd(struct iommu_pasid_table_ops *ops, int pasid,
struct iommu_pasid_entry *entry)
{
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 8b5083c3e0a16..6634d596f7efc 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -88,6 +88,10 @@
#define IDR1_SSIDSIZE GENMASK(10, 6)
#define IDR1_SIDSIZE GENMASK(5, 0)
+#define ARM_SMMU_IDR3 0xc
+#define IDR3_MPAM (1 << 7)
+#define ARM_SMMU_IDR3_CFG 0x140C
+
#define ARM_SMMU_IDR5 0x14
#define IDR5_STALL_MAX GENMASK(31, 16)
#define IDR5_GRAN64K (1 << 6)
@@ -186,6 +190,10 @@
#define ARM_SMMU_PRIQ_IRQ_CFG1 0xd8
#define ARM_SMMU_PRIQ_IRQ_CFG2 0xdc
+#define ARM_SMMU_MPAMIDR 0x130
+#define MPAMIDR_PMG_MAX GENMASK(23, 16)
+#define MPAMIDR_PARTID_MAX GENMASK(15, 0)
+
/* Common MSI config fields */
#define MSI_CFG0_ADDR_MASK GENMASK_ULL(51, 2)
#define MSI_CFG2_SH GENMASK(5, 4)
@@ -250,6 +258,7 @@
#define STRTAB_STE_1_S1COR GENMASK_ULL(5, 4)
#define STRTAB_STE_1_S1CSH GENMASK_ULL(7, 6)
+#define STRTAB_STE_1_S1MPAM (1UL << 26)
#define STRTAB_STE_1_S1STALLD (1UL << 27)
#define STRTAB_STE_1_EATS GENMASK_ULL(29, 28)
@@ -273,6 +282,11 @@
#define STRTAB_STE_3_S2TTB_MASK GENMASK_ULL(51, 4)
+#define STRTAB_STE_4_PARTID_MASK GENMASK_ULL(31, 16)
+
+#define STRTAB_STE_5_MPAM_NS (1UL << 8)
+#define STRTAB_STE_5_PMG_MASK GENMASK_ULL(7, 0)
+
/* Command queue */
#define CMDQ_ENT_SZ_SHIFT 4
#define CMDQ_ENT_DWORDS ((1 << CMDQ_ENT_SZ_SHIFT) >> 3)
@@ -634,6 +648,7 @@ struct arm_smmu_device {
#define ARM_SMMU_FEAT_SVA (1 << 17)
#define ARM_SMMU_FEAT_HA (1 << 18)
#define ARM_SMMU_FEAT_HD (1 << 19)
+#define ARM_SMMU_FEAT_MPAM (1 << 20)
u32 features;
#define ARM_SMMU_OPT_SKIP_PREFETCH (1 << 0)
@@ -672,6 +687,9 @@ struct arm_smmu_device {
struct mutex streams_mutex;
struct iopf_queue *iopf_queue;
+
+ unsigned int mpam_partid_max;
+ unsigned int mpam_pmg_max;
};
struct arm_smmu_stream {
@@ -3980,6 +3998,25 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
if (smmu->sid_bits <= STRTAB_SPLIT)
smmu->features &= ~ARM_SMMU_FEAT_2_LVL_STRTAB;
+ /* IDR3 */
+ reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3);
+
+ if (!(reg & IDR3_MPAM)) {
+ reg |= FIELD_PREP(IDR3_MPAM, 1);
+ writel(reg, smmu->base + ARM_SMMU_IDR3_CFG);
+ reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3);
+ if (!(reg & IDR3_MPAM))
+ dev_warn(smmu->dev, "enable smmu mpam failed\n");
+ }
+
+ if (reg & IDR3_MPAM) {
+ reg = readl_relaxed(smmu->base + ARM_SMMU_MPAMIDR);
+ smmu->mpam_partid_max = FIELD_GET(MPAMIDR_PARTID_MAX, reg);
+ smmu->mpam_pmg_max = FIELD_GET(MPAMIDR_PMG_MAX, reg);
+ if (smmu->mpam_partid_max || smmu->mpam_pmg_max)
+ smmu->features |= ARM_SMMU_FEAT_MPAM;
+ }
+
/* IDR5 */
reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5);
@@ -4127,6 +4164,120 @@ static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
return SZ_128K;
}
+static int arm_smmu_set_ste_mpam(struct arm_smmu_device *smmu,
+ int sid, int partid, int pmg, int s1mpam)
+{
+ u64 val;
+ __le64 *ste;
+
+ if (!arm_smmu_sid_in_range(smmu, sid))
+ return -ERANGE;
+
+ /* get ste ptr */
+ ste = arm_smmu_get_step_for_sid(smmu, sid);
+
+ /* write s1mpam to ste */
+ val = le64_to_cpu(ste[1]);
+ val &= ~STRTAB_STE_1_S1MPAM;
+ val |= FIELD_PREP(STRTAB_STE_1_S1MPAM, s1mpam);
+ WRITE_ONCE(ste[1], cpu_to_le64(val));
+
+ val = le64_to_cpu(ste[4]);
+ val &= ~STRTAB_STE_4_PARTID_MASK;
+ val |= FIELD_PREP(STRTAB_STE_4_PARTID_MASK, partid);
+ WRITE_ONCE(ste[4], cpu_to_le64(val));
+
+ val = le64_to_cpu(ste[5]);
+ val &= ~STRTAB_STE_5_PMG_MASK;
+ val |= FIELD_PREP(STRTAB_STE_5_PMG_MASK, pmg);
+ WRITE_ONCE(ste[5], cpu_to_le64(val));
+
+ arm_smmu_sync_ste_for_sid(smmu, sid);
+
+ return 0;
+}
+
+int arm_smmu_set_cd_mpam(struct iommu_pasid_table_ops *ops,
+ int ssid, int partid, int pmg);
+
+static int arm_smmu_set_mpam(struct arm_smmu_device *smmu,
+ int sid, int ssid, int partid, int pmg, int s1mpam)
+{
+ struct arm_smmu_master_data *master = arm_smmu_find_master(smmu, sid);
+ struct arm_smmu_s1_cfg *cfg = master ? master->ste.s1_cfg : NULL;
+ struct arm_smmu_domain *domain = master ? master->domain : NULL;
+ int ret;
+
+ struct arm_smmu_cmdq_ent prefetch_cmd = {
+ .opcode = CMDQ_OP_PREFETCH_CFG,
+ .prefetch = {
+ .sid = sid,
+ },
+ };
+
+ if (!(smmu->features & ARM_SMMU_FEAT_MPAM))
+ return -ENODEV;
+
+ if (WARN_ON(!domain))
+ return -EINVAL;
+
+ if (WARN_ON(!cfg))
+ return -EINVAL;
+
+ if (WARN_ON(ssid >= (1 << master->ssid_bits)))
+ return -E2BIG;
+
+ if (partid > smmu->mpam_partid_max || pmg > smmu->mpam_pmg_max) {
+ dev_err(smmu->dev,
+ "mpam rmid out of range: partid[0, %d] pmg[0, %d]\n",
+ smmu->mpam_partid_max, smmu->mpam_pmg_max);
+ return -ERANGE;
+ }
+
+ ret = arm_smmu_set_ste_mpam(smmu, sid, partid, pmg, s1mpam);
+ if (ret < 0) {
+ dev_err(smmu->dev, "set ste mpam configuration error %d\n",
+ ret);
+ return ret;
+ }
+
+ /* do not modify cd table which owned by guest */
+ if (domain->stage == ARM_SMMU_DOMAIN_NESTED) {
+ dev_err(smmu->dev,
+ "mpam: smmu cd is owned by guest, not modified\n");
+ return 0;
+ }
+
+ ret = arm_smmu_set_cd_mpam(cfg->ops, ssid, partid, pmg);
+ if (s1mpam && ret < 0) {
+ dev_err(smmu->dev, "set cd mpam configuration error %d\n",
+ ret);
+ return ret;
+ }
+
+ /* It's likely that we'll want to use the new STE soon */
+ if (!(smmu->options & ARM_SMMU_OPT_SKIP_PREFETCH))
+ arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd);
+
+ dev_info(smmu->dev, "partid %d, pmg %d\n", partid, pmg);
+
+ return 0;
+}
+
+/**
+ * arm_smmu_set_dev_mpam() - Set mpam configuration to SMMU STE/CD
+ */
+int arm_smmu_set_dev_mpam(struct device *dev, int ssid, int partid, int pmg,
+ int s1mpam)
+{
+ struct arm_smmu_master_data *master = dev->iommu_fwspec->iommu_priv;
+ struct arm_smmu_device *smmu = master->domain->smmu;
+ int sid = master->streams->id;
+
+ return arm_smmu_set_mpam(smmu, sid, ssid, partid, pmg, s1mpam);
+}
+EXPORT_SYMBOL(arm_smmu_set_dev_mpam);
+
static int arm_smmu_device_probe(struct platform_device *pdev)
{
int irq, ret;
diff --git a/include/linux/ascend_smmu.h b/include/linux/ascend_smmu.h
new file mode 100644
index 0000000000000..bd0edd25057e1
--- /dev/null
+++ b/include/linux/ascend_smmu.h
@@ -0,0 +1,9 @@
+#ifndef __LINUX_ASCEND_SMMU_H
+#define __LINUX_ASCEND_SMMU_H
+
+#include <linux/device.h>
+
+extern int arm_smmu_set_dev_mpam(struct device *dev, int ssid, int partid,
+ int pmg, int s1mpam);
+
+#endif /* __LINUX_ASCEND_SMMU_H */
--
2.25.1
[PATCH openEuler-1.0-LTS 0/6] Fix the problem that the number of tcp timeout retransmissions is lost
by jiazhenyuan 25 Oct '21
From: Jiazhenyuan <jiazhenyuan(a)uniontech.com>
issue: https://gitee.com/openeuler/kernel/issues/I4AFRJ?from=project-issue
jiazhenyuan (6):
tcp: switch tcp and sch_fq to new earliest departure time
net_sched: sch_fq: ensure maxrate fq parameter applies to EDT flows
tcp: address problems caused by EDT misshaps (upstream commit
9efdda4e3abed13f0903b7b6e4d4c2102019440a, Eric Dumazet, 24 Nov 2018)
tcp: always set retrans_stamp on recovery (upstream commit
7ae189759cc48cf8b54beebff566e9fd2d4e7d7c, Yuchung Cheng, 16 Jan 2019)
tcp: create a helper to model exponential backoff (upstream commit
01a523b071618abbc634d1958229fe3bd2dfa5fa, Yuchung Cheng, 16 Jan 2019)
tcp: adjust rto_base in retransmits_timed_out() (upstream commit
3256a2d6ab1f71f9a1bd2d7f6f18eb8108c48d17, Eric Dumazet, 30 Sep 2019)
net/ipv4/tcp_bbr.c | 7 +++--
net/ipv4/tcp_input.c | 17 +++++++-----
net/ipv4/tcp_output.c | 29 ++++++++++++++------
net/ipv4/tcp_timer.c | 64 ++++++++++++++++++++-----------------------
net/sched/sch_fq.c | 46 ++++++++++++++++++-------------
5 files changed, 92 insertions(+), 71 deletions(-)
--
2.27.0
[PATCH openEuler-1.0-LTS] nvme-rdma: destroy cm id before destroy qp to avoid use after free
by Yang Yingliang 25 Oct '21
From: Ruozhu Li <liruozhu(a)huawei.com>
mainline inclusion
from mainline-v5.15-rc2
commit 9817d763dbe15327b9b3ff4404fa6f27f927e744
category: bugfix
bugzilla: NA
CVE: NA
Link: https://gitee.com/openeuler/kernel/issues/I1WGZE
We got a panic when the host received a rej cm event soon after a connect
error cm event.
When the host gets a connect error cm event, it destroys the qp immediately.
But the cm_id is still valid at that point. Another cm event arrives here and
tries to access the qp which was already destroyed. Then we get the kernel
panic below:
[87816.777089] [20473] ib_cm:cm_rep_handler:2343: cm_rep_handler: Stale connection. local_comm_id -154357094, remote_comm_id -1133609861
[87816.777223] [20473] ib_cm:cm_init_qp_rtr_attr:4162: cm_init_qp_rtr_attr: local_id -1150387077, cm_id_priv->id.state: 13
[87816.777225] [20473] rdma_cm:cma_rep_recv:1871: RDMA CM: CONNECT_ERROR: failed to handle reply. status -22
[87816.777395] [20473] ib_cm:ib_send_cm_rej:2781: ib_send_cm_rej: local_id -1150387077, cm_id->state: 13
[87816.777398] [20473] nvme_rdma:nvme_rdma_cm_handler:1718: nvme nvme278: connect error (6): status -22 id 00000000c3809aff
[87816.801155] [20473] nvme_rdma:nvme_rdma_cm_handler:1742: nvme nvme278: CM error event 6
[87816.801160] [20473] rdma_cm:cma_ib_handler:1947: RDMA CM: REJECTED: consumer defined
[87816.801163] nvme nvme278: rdma connection establishment failed (-104)
[87816.801168] BUG: unable to handle kernel NULL pointer dereference at 0000000000000370
[87816.801201] RIP: 0010:_ib_modify_qp+0x6e/0x3a0 [ib_core]
[87816.801215] Call Trace:
[87816.801223] cma_modify_qp_err+0x52/0x80 [rdma_cm]
[87816.801228] ? __dynamic_pr_debug+0x8a/0xb0
[87816.801232] cma_ib_handler+0x25a/0x2f0 [rdma_cm]
[87816.801235] cm_process_work+0x60/0xe0 [ib_cm]
[87816.801238] cm_work_handler+0x13b/0x1b97 [ib_cm]
[87816.801243] ? __switch_to_asm+0x35/0x70
[87816.801244] ? __switch_to_asm+0x41/0x70
[87816.801246] ? __switch_to_asm+0x35/0x70
[87816.801248] ? __switch_to_asm+0x41/0x70
[87816.801252] ? __switch_to+0x8c/0x480
[87816.801254] ? __switch_to_asm+0x41/0x70
[87816.801256] ? __switch_to_asm+0x35/0x70
[87816.801259] process_one_work+0x1a7/0x3b0
[87816.801263] worker_thread+0x30/0x390
[87816.801266] ? create_worker+0x1a0/0x1a0
[87816.801268] kthread+0x112/0x130
[87816.801270] ? kthread_flush_work_fn+0x10/0x10
[87816.801272] ret_from_fork+0x35/0x40
-------------------------------------------------
We should always destroy the cm_id before destroying the qp, to avoid
getting a cma event after the qp was destroyed, which may lead to a use
after free.
In the RDMA connection establishment error flow, don't destroy the qp in
the cm event handler. Just report cm_error to the upper level; the qp will
be destroyed in nvme_rdma_alloc_queue() after the cm_id is destroyed.
Signed-off-by: Ruozhu Li <liruozhu(a)huawei.com>
Reviewed-by: Max Gurtovoy <mgurtovoy(a)nvidia.com>
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Conflicts:
drivers/nvme/host/rdma.c
[lrz: adjust context]
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/nvme/host/rdma.c | 16 +++-------------
1 file changed, 3 insertions(+), 13 deletions(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index b8e0d637ddcfc..049edb9ed1858 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -575,8 +575,8 @@ static void nvme_rdma_free_queue(struct nvme_rdma_queue *queue)
if (!test_and_clear_bit(NVME_RDMA_Q_ALLOCATED, &queue->flags))
return;
- nvme_rdma_destroy_queue_ib(queue);
rdma_destroy_id(queue->cm_id);
+ nvme_rdma_destroy_queue_ib(queue);
mutex_destroy(&queue->queue_lock);
}
@@ -1509,14 +1509,10 @@ static int nvme_rdma_conn_established(struct nvme_rdma_queue *queue)
for (i = 0; i < queue->queue_size; i++) {
ret = nvme_rdma_post_recv(queue, &queue->rsp_ring[i]);
if (ret)
- goto out_destroy_queue_ib;
+ return ret;
}
return 0;
-
-out_destroy_queue_ib:
- nvme_rdma_destroy_queue_ib(queue);
- return ret;
}
static int nvme_rdma_conn_rejected(struct nvme_rdma_queue *queue,
@@ -1608,14 +1604,10 @@ static int nvme_rdma_route_resolved(struct nvme_rdma_queue *queue)
if (ret) {
dev_err(ctrl->ctrl.device,
"rdma_connect failed (%d).\n", ret);
- goto out_destroy_queue_ib;
+ return ret;
}
return 0;
-
-out_destroy_queue_ib:
- nvme_rdma_destroy_queue_ib(queue);
- return ret;
}
static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id,
@@ -1646,8 +1638,6 @@ static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id,
case RDMA_CM_EVENT_ROUTE_ERROR:
case RDMA_CM_EVENT_CONNECT_ERROR:
case RDMA_CM_EVENT_UNREACHABLE:
- nvme_rdma_destroy_queue_ib(queue);
- /* fall through */
case RDMA_CM_EVENT_ADDR_ERROR:
dev_dbg(queue->ctrl->ctrl.device,
"CM error event %d\n", ev->event);
--
2.25.1
[PATCH openEuler-1.0-LTS] arm64: Errata: fix kabi changed by cpu_errata
by Yang Yingliang 25 Oct '21
From: Weilong Chen <chenweilong(a)huawei.com>
ascend inclusion
category: feature
bugzilla: 46922
CVE: NA
-------------------------------------
The patch "cache: Workaround HiSilicon Taishan DC CVAU"
breaks the KABI symbols:
cpu_hwcaps
cpu_hwcap_keys
Fix this by using CONFIG_HISILICON_ERRATUM_1980005 to protect the new capability.
Signed-off-by: Weilong Chen <chenweilong(a)huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/include/asm/cpucaps.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index d6c863d2cf984..69515f9077e34 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -56,8 +56,12 @@
#define ARM64_WORKAROUND_1463225 35
#define ARM64_HAS_CRC32 36
#define ARM64_SSBS 37
+#ifdef CONFIG_HISILICON_ERRATUM_1980005
#define ARM64_WORKAROUND_HISILICON_1980005 38
#define ARM64_NCAPS 39
+#else
+#define ARM64_NCAPS 38
+#endif
#endif /* __ASM_CPUCAPS_H */
--
2.25.1
[PATCH openEuler-1.0-LTS 01/45] af_unix: fix garbage collect vs MSG_PEEK
by Yang Yingliang 25 Oct '21
From: Miklos Szeredi <mszeredi(a)redhat.com>
stable inclusion
from linux-4.19.200
commit 1dabafa9f61118b1377fde424d9a94bf8dbf2813
--------------------------------
commit cbcf01128d0a92e131bd09f1688fe032480b65ca upstream.
unix_gc() assumes that candidate sockets can never gain an external
reference (i.e. be installed into an fd) while the unix_gc_lock is
held. Except for MSG_PEEK this is guaranteed by modifying inflight
count under the unix_gc_lock.
MSG_PEEK does not touch any variable protected by unix_gc_lock (file
count is not), yet it needs to be serialized with garbage collection.
Do this by locking/unlocking unix_gc_lock:
1) increment file count
2) lock/unlock barrier to make sure incremented file count is visible
to garbage collection
3) install file into fd
This is a lock barrier (unlike smp_mb()) that ensures that garbage
collection is run completely before or completely after the barrier.
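As a rough user-space illustration of the lock-as-barrier idiom described
above (the pthread mutex, the atomic counter and the function name below are
stand-ins chosen for the example, not the kernel's unix_gc_lock or struct
file machinery):
#include <pthread.h>
#include <stdatomic.h>

static pthread_mutex_t gc_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int file_count;

static void install_with_barrier(void)
{
	atomic_fetch_add(&file_count, 1);	/* 1) take the extra reference    */

	pthread_mutex_lock(&gc_lock);		/* 2) barrier: a collection pass  */
	pthread_mutex_unlock(&gc_lock);		/*    holding gc_lock runs fully  */
						/*    before or after this point  */

	/* 3) only now publish the object, e.g. install it into an fd table */
}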
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Miklos Szeredi <mszeredi(a)redhat.com>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
net/unix/af_unix.c | 51 ++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 49 insertions(+), 2 deletions(-)
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index 337c4797ab167..98c253afa0db2 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -1517,6 +1517,53 @@ static int unix_getname(struct socket *sock, struct sockaddr *uaddr, int peer)
return err;
}
+static void unix_peek_fds(struct scm_cookie *scm, struct sk_buff *skb)
+{
+ scm->fp = scm_fp_dup(UNIXCB(skb).fp);
+
+ /*
+ * Garbage collection of unix sockets starts by selecting a set of
+ * candidate sockets which have reference only from being in flight
+ * (total_refs == inflight_refs). This condition is checked once during
+ * the candidate collection phase, and candidates are marked as such, so
+ * that non-candidates can later be ignored. While inflight_refs is
+ * protected by unix_gc_lock, total_refs (file count) is not, hence this
+ * is an instantaneous decision.
+ *
+ * Once a candidate, however, the socket must not be reinstalled into a
+ * file descriptor while the garbage collection is in progress.
+ *
+ * If the above conditions are met, then the directed graph of
+ * candidates (*) does not change while unix_gc_lock is held.
+ *
+ * Any operations that changes the file count through file descriptors
+ * (dup, close, sendmsg) does not change the graph since candidates are
+ * not installed in fds.
+ *
+ * Dequeing a candidate via recvmsg would install it into an fd, but
+ * that takes unix_gc_lock to decrement the inflight count, so it's
+ * serialized with garbage collection.
+ *
+ * MSG_PEEK is special in that it does not change the inflight count,
+ * yet does install the socket into an fd. The following lock/unlock
+ * pair is to ensure serialization with garbage collection. It must be
+ * done between incrementing the file count and installing the file into
+ * an fd.
+ *
+ * If garbage collection starts after the barrier provided by the
+ * lock/unlock, then it will see the elevated refcount and not mark this
+ * as a candidate. If a garbage collection is already in progress
+ * before the file count was incremented, then the lock/unlock pair will
+ * ensure that garbage collection is finished before progressing to
+ * installing the fd.
+ *
+ * (*) A -> B where B is on the queue of A or B is on the queue of C
+ * which is on the queue of listening socket A.
+ */
+ spin_lock(&unix_gc_lock);
+ spin_unlock(&unix_gc_lock);
+}
+
static int unix_scm_to_skb(struct scm_cookie *scm, struct sk_buff *skb, bool send_fds)
{
int err = 0;
@@ -2142,7 +2189,7 @@ static int unix_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
sk_peek_offset_fwd(sk, size);
if (UNIXCB(skb).fp)
- scm.fp = scm_fp_dup(UNIXCB(skb).fp);
+ unix_peek_fds(&scm, skb);
}
err = (flags & MSG_TRUNC) ? skb->len - skip : size;
@@ -2383,7 +2430,7 @@ static int unix_stream_read_generic(struct unix_stream_read_state *state,
/* It is questionable, see note in unix_dgram_recvmsg.
*/
if (UNIXCB(skb).fp)
- scm.fp = scm_fp_dup(UNIXCB(skb).fp);
+ unix_peek_fds(&scm, skb);
sk_peek_offset_fwd(sk, chunk);
--
2.25.1
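For readers less familiar with the trick used above: the empty lock/unlock pair on unix_gc_lock acts as a barrier against a concurrently running garbage-collection pass. Below is a minimal userspace sketch of the same idea using a pthread mutex; all names are illustrative stand-ins, not code from the patch.

#include <pthread.h>
#include <stdio.h>

/* Illustrative stand-in for unix_gc_lock. */
static pthread_mutex_t gc_lock = PTHREAD_MUTEX_INITIALIZER;
static int refcount = 1;

/* "Garbage collector": scans state while holding gc_lock. */
static void *gc_thread(void *arg)
{
	pthread_mutex_lock(&gc_lock);
	/* candidate selection happens here; candidates are frozen while locked */
	printf("gc sees refcount=%d\n", refcount);
	pthread_mutex_unlock(&gc_lock);
	return NULL;
}

/* "MSG_PEEK path": bump the refcount, then serialize with any running GC. */
static void *peek_thread(void *arg)
{
	__sync_fetch_and_add(&refcount, 1);   /* like scm_fp_dup() taking a file ref */

	/* Empty critical section: if a GC pass started before the increment,
	 * we block here until it finishes; if it starts afterwards, it already
	 * sees the elevated refcount. Either way the ordering is well defined. */
	pthread_mutex_lock(&gc_lock);
	pthread_mutex_unlock(&gc_lock);

	/* now it is safe to "install the fd" */
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	pthread_create(&a, NULL, gc_thread, NULL);
	pthread_create(&b, NULL, peek_thread, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}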

[PATCH kernel-4.19 1/9] iommu/arm-smmu-v3: Add support to configure mpam in STE/CD context
by Yang Yingliang 22 Oct '21
From: Xingang Wang <wangxingang5(a)huawei.com>
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I49RB2
CVE: NA
-------------------------------------------------
To support limiting the QoS of a device, the partid and pmg need to be
set in the SMMU STE/CD context. This introduces support for the SMMU
MPAM feature and adds an interface to set the MPAM configuration in the STE/CD.
Signed-off-by: Xingang Wang <wangxingang5(a)huawei.com>
Signed-off-by: Rui Zhu <zhurui3(a)huawei.com>
Reviewed-by: Yingtai Xie <xieyingtai(a)huawei.com>
Reviewed-by: Zhen Lei <thunder.leizhen(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/iommu/arm-smmu-v3-context.c | 25 +++++
drivers/iommu/arm-smmu-v3.c | 151 ++++++++++++++++++++++++++++
include/linux/ascend_smmu.h | 9 ++
3 files changed, 185 insertions(+)
create mode 100644 include/linux/ascend_smmu.h
diff --git a/drivers/iommu/arm-smmu-v3-context.c b/drivers/iommu/arm-smmu-v3-context.c
index 2351de86d31f1..b30a93e076608 100644
--- a/drivers/iommu/arm-smmu-v3-context.c
+++ b/drivers/iommu/arm-smmu-v3-context.c
@@ -66,6 +66,9 @@
#define CTXDESC_CD_1_TTB0_MASK GENMASK_ULL(51, 4)
+#define CTXDESC_CD_5_PARTID_MASK GENMASK_ULL(47, 32)
+#define CTXDESC_CD_5_PMG_MASK GENMASK_ULL(55, 48)
+
/* Convert between AArch64 (CPU) TCR format and SMMU CD format */
#define ARM_SMMU_TCR2CD(tcr, fld) FIELD_PREP(CTXDESC_CD_0_TCR_##fld, \
FIELD_GET(ARM64_TCR_##fld, tcr))
@@ -563,6 +566,28 @@ static int arm_smmu_set_cd(struct iommu_pasid_table_ops *ops, int pasid,
return arm_smmu_write_ctx_desc(tbl, pasid, cd);
}
+int arm_smmu_set_cd_mpam(struct iommu_pasid_table_ops *ops,
+ int ssid, int partid, int pmg)
+{
+ struct arm_smmu_cd_tables *tbl = pasid_ops_to_tables(ops);
+ u64 val;
+ __le64 *cdptr = arm_smmu_get_cd_ptr(tbl, ssid);
+
+ if (!cdptr)
+ return -ENOMEM;
+
+ val = le64_to_cpu(cdptr[5]);
+ val &= ~CTXDESC_CD_5_PARTID_MASK;
+ val |= FIELD_PREP(CTXDESC_CD_5_PARTID_MASK, partid);
+ val &= ~CTXDESC_CD_5_PMG_MASK;
+ val |= FIELD_PREP(CTXDESC_CD_5_PMG_MASK, pmg);
+ WRITE_ONCE(cdptr[5], cpu_to_le64(val));
+
+ iommu_pasid_flush(&tbl->pasid, ssid, true);
+
+ return 0;
+}
+
static void arm_smmu_clear_cd(struct iommu_pasid_table_ops *ops, int pasid,
struct iommu_pasid_entry *entry)
{
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 8b5083c3e0a16..6634d596f7efc 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -88,6 +88,10 @@
#define IDR1_SSIDSIZE GENMASK(10, 6)
#define IDR1_SIDSIZE GENMASK(5, 0)
+#define ARM_SMMU_IDR3 0xc
+#define IDR3_MPAM (1 << 7)
+#define ARM_SMMU_IDR3_CFG 0x140C
+
#define ARM_SMMU_IDR5 0x14
#define IDR5_STALL_MAX GENMASK(31, 16)
#define IDR5_GRAN64K (1 << 6)
@@ -186,6 +190,10 @@
#define ARM_SMMU_PRIQ_IRQ_CFG1 0xd8
#define ARM_SMMU_PRIQ_IRQ_CFG2 0xdc
+#define ARM_SMMU_MPAMIDR 0x130
+#define MPAMIDR_PMG_MAX GENMASK(23, 16)
+#define MPAMIDR_PARTID_MAX GENMASK(15, 0)
+
/* Common MSI config fields */
#define MSI_CFG0_ADDR_MASK GENMASK_ULL(51, 2)
#define MSI_CFG2_SH GENMASK(5, 4)
@@ -250,6 +258,7 @@
#define STRTAB_STE_1_S1COR GENMASK_ULL(5, 4)
#define STRTAB_STE_1_S1CSH GENMASK_ULL(7, 6)
+#define STRTAB_STE_1_S1MPAM (1UL << 26)
#define STRTAB_STE_1_S1STALLD (1UL << 27)
#define STRTAB_STE_1_EATS GENMASK_ULL(29, 28)
@@ -273,6 +282,11 @@
#define STRTAB_STE_3_S2TTB_MASK GENMASK_ULL(51, 4)
+#define STRTAB_STE_4_PARTID_MASK GENMASK_ULL(31, 16)
+
+#define STRTAB_STE_5_MPAM_NS (1UL << 8)
+#define STRTAB_STE_5_PMG_MASK GENMASK_ULL(7, 0)
+
/* Command queue */
#define CMDQ_ENT_SZ_SHIFT 4
#define CMDQ_ENT_DWORDS ((1 << CMDQ_ENT_SZ_SHIFT) >> 3)
@@ -634,6 +648,7 @@ struct arm_smmu_device {
#define ARM_SMMU_FEAT_SVA (1 << 17)
#define ARM_SMMU_FEAT_HA (1 << 18)
#define ARM_SMMU_FEAT_HD (1 << 19)
+#define ARM_SMMU_FEAT_MPAM (1 << 20)
u32 features;
#define ARM_SMMU_OPT_SKIP_PREFETCH (1 << 0)
@@ -672,6 +687,9 @@ struct arm_smmu_device {
struct mutex streams_mutex;
struct iopf_queue *iopf_queue;
+
+ unsigned int mpam_partid_max;
+ unsigned int mpam_pmg_max;
};
struct arm_smmu_stream {
@@ -3980,6 +3998,25 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
if (smmu->sid_bits <= STRTAB_SPLIT)
smmu->features &= ~ARM_SMMU_FEAT_2_LVL_STRTAB;
+ /* IDR3 */
+ reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3);
+
+ if (!(reg & IDR3_MPAM)) {
+ reg |= FIELD_PREP(IDR3_MPAM, 1);
+ writel(reg, smmu->base + ARM_SMMU_IDR3_CFG);
+ reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3);
+ if (!(reg & IDR3_MPAM))
+ dev_warn(smmu->dev, "enable smmu mpam failed\n");
+ }
+
+ if (reg & IDR3_MPAM) {
+ reg = readl_relaxed(smmu->base + ARM_SMMU_MPAMIDR);
+ smmu->mpam_partid_max = FIELD_GET(MPAMIDR_PARTID_MAX, reg);
+ smmu->mpam_pmg_max = FIELD_GET(MPAMIDR_PMG_MAX, reg);
+ if (smmu->mpam_partid_max || smmu->mpam_pmg_max)
+ smmu->features |= ARM_SMMU_FEAT_MPAM;
+ }
+
/* IDR5 */
reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5);
@@ -4127,6 +4164,120 @@ static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
return SZ_128K;
}
+static int arm_smmu_set_ste_mpam(struct arm_smmu_device *smmu,
+ int sid, int partid, int pmg, int s1mpam)
+{
+ u64 val;
+ __le64 *ste;
+
+ if (!arm_smmu_sid_in_range(smmu, sid))
+ return -ERANGE;
+
+ /* get ste ptr */
+ ste = arm_smmu_get_step_for_sid(smmu, sid);
+
+ /* write s1mpam to ste */
+ val = le64_to_cpu(ste[1]);
+ val &= ~STRTAB_STE_1_S1MPAM;
+ val |= FIELD_PREP(STRTAB_STE_1_S1MPAM, s1mpam);
+ WRITE_ONCE(ste[1], cpu_to_le64(val));
+
+ val = le64_to_cpu(ste[4]);
+ val &= ~STRTAB_STE_4_PARTID_MASK;
+ val |= FIELD_PREP(STRTAB_STE_4_PARTID_MASK, partid);
+ WRITE_ONCE(ste[4], cpu_to_le64(val));
+
+ val = le64_to_cpu(ste[5]);
+ val &= ~STRTAB_STE_5_PMG_MASK;
+ val |= FIELD_PREP(STRTAB_STE_5_PMG_MASK, pmg);
+ WRITE_ONCE(ste[5], cpu_to_le64(val));
+
+ arm_smmu_sync_ste_for_sid(smmu, sid);
+
+ return 0;
+}
+
+int arm_smmu_set_cd_mpam(struct iommu_pasid_table_ops *ops,
+ int ssid, int partid, int pmg);
+
+static int arm_smmu_set_mpam(struct arm_smmu_device *smmu,
+ int sid, int ssid, int partid, int pmg, int s1mpam)
+{
+ struct arm_smmu_master_data *master = arm_smmu_find_master(smmu, sid);
+ struct arm_smmu_s1_cfg *cfg = master ? master->ste.s1_cfg : NULL;
+ struct arm_smmu_domain *domain = master ? master->domain : NULL;
+ int ret;
+
+ struct arm_smmu_cmdq_ent prefetch_cmd = {
+ .opcode = CMDQ_OP_PREFETCH_CFG,
+ .prefetch = {
+ .sid = sid,
+ },
+ };
+
+ if (!(smmu->features & ARM_SMMU_FEAT_MPAM))
+ return -ENODEV;
+
+ if (WARN_ON(!domain))
+ return -EINVAL;
+
+ if (WARN_ON(!cfg))
+ return -EINVAL;
+
+ if (WARN_ON(ssid >= (1 << master->ssid_bits)))
+ return -E2BIG;
+
+ if (partid > smmu->mpam_partid_max || pmg > smmu->mpam_pmg_max) {
+ dev_err(smmu->dev,
+ "mpam rmid out of range: partid[0, %d] pmg[0, %d]\n",
+ smmu->mpam_partid_max, smmu->mpam_pmg_max);
+ return -ERANGE;
+ }
+
+ ret = arm_smmu_set_ste_mpam(smmu, sid, partid, pmg, s1mpam);
+ if (ret < 0) {
+ dev_err(smmu->dev, "set ste mpam configuration error %d\n",
+ ret);
+ return ret;
+ }
+
+ /* do not modify cd table which owned by guest */
+ if (domain->stage == ARM_SMMU_DOMAIN_NESTED) {
+ dev_err(smmu->dev,
+ "mpam: smmu cd is owned by guest, not modified\n");
+ return 0;
+ }
+
+ ret = arm_smmu_set_cd_mpam(cfg->ops, ssid, partid, pmg);
+ if (s1mpam && ret < 0) {
+ dev_err(smmu->dev, "set cd mpam configuration error %d\n",
+ ret);
+ return ret;
+ }
+
+ /* It's likely that we'll want to use the new STE soon */
+ if (!(smmu->options & ARM_SMMU_OPT_SKIP_PREFETCH))
+ arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd);
+
+ dev_info(smmu->dev, "partid %d, pmg %d\n", partid, pmg);
+
+ return 0;
+}
+
+/**
+ * arm_smmu_set_dev_mpam() - Set mpam configuration to SMMU STE/CD
+ */
+int arm_smmu_set_dev_mpam(struct device *dev, int ssid, int partid, int pmg,
+ int s1mpam)
+{
+ struct arm_smmu_master_data *master = dev->iommu_fwspec->iommu_priv;
+ struct arm_smmu_device *smmu = master->domain->smmu;
+ int sid = master->streams->id;
+
+ return arm_smmu_set_mpam(smmu, sid, ssid, partid, pmg, s1mpam);
+}
+EXPORT_SYMBOL(arm_smmu_set_dev_mpam);
+
static int arm_smmu_device_probe(struct platform_device *pdev)
{
int irq, ret;
diff --git a/include/linux/ascend_smmu.h b/include/linux/ascend_smmu.h
new file mode 100644
index 0000000000000..bd0edd25057e1
--- /dev/null
+++ b/include/linux/ascend_smmu.h
@@ -0,0 +1,9 @@
+#ifndef __LINUX_ASCEND_SMMU_H
+#define __LINUX_ASCEND_SMMU_H
+
+#include <linux/device.h>
+
+extern int arm_smmu_set_dev_mpam(struct device *dev, int ssid, int partid,
+ int pmg, int s1mpam);
+
+#endif /* __LINUX_ASCEND_SMMU_H */
--
2.25.1
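All of the STE/CD updates in the patch above follow the same read-modify-write pattern on a 64-bit descriptor word built from GENMASK_ULL()/FIELD_PREP(). The standalone sketch below re-implements simplified userspace versions of those helpers to show how the partid and pmg fields of CD word 5 are packed and unpacked; it is an illustration only, not the kernel code.

#include <stdint.h>
#include <stdio.h>

/* Simplified userspace versions of the kernel helpers used in the patch. */
#define GENMASK_ULL(h, l) \
	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))
#define FIELD_PREP(mask, val) \
	(((uint64_t)(val) << __builtin_ctzll(mask)) & (mask))
#define FIELD_GET(mask, reg) \
	(((reg) & (mask)) >> __builtin_ctzll(mask))

/* Field layout of CD word 5, as defined in the patch. */
#define CTXDESC_CD_5_PARTID_MASK GENMASK_ULL(47, 32)
#define CTXDESC_CD_5_PMG_MASK    GENMASK_ULL(55, 48)

int main(void)
{
	uint64_t cd5 = 0;

	/* Same read-modify-write sequence as arm_smmu_set_cd_mpam(). */
	cd5 &= ~CTXDESC_CD_5_PARTID_MASK;
	cd5 |= FIELD_PREP(CTXDESC_CD_5_PARTID_MASK, 0x12);
	cd5 &= ~CTXDESC_CD_5_PMG_MASK;
	cd5 |= FIELD_PREP(CTXDESC_CD_5_PMG_MASK, 0x3);

	printf("cd[5] = 0x%016llx partid=%llu pmg=%llu\n",
	       (unsigned long long)cd5,
	       (unsigned long long)FIELD_GET(CTXDESC_CD_5_PARTID_MASK, cd5),
	       (unsigned long long)FIELD_GET(CTXDESC_CD_5_PMG_MASK, cd5));
	return 0;
}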

[PATCH kernel-4.19] efi: Change down_interruptible() in virt_efi_reset_system() to down_trylock()
by Yang Yingliang 22 Oct '21
From: Zhang Jianhua <chris.zjh(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 182892, https://gitee.com/openeuler/kernel/issues/I4F04R
--------
While rebooting the system via sysrq, the following bug occurs.
BUG: sleeping function called from invalid context at kernel/locking/semaphore.c:90
in_atomic(): 0, irqs_disabled(): 128, non_block: 0, pid: 10052, name: rc.shutdown
CPU: 3 PID: 10052 Comm: rc.shutdown Tainted: G W O 5.10.0 #1
Call trace:
dump_backtrace+0x0/0x1c8
show_stack+0x18/0x28
dump_stack+0xd0/0x110
___might_sleep+0x14c/0x160
__might_sleep+0x74/0x88
down_interruptible+0x40/0x118
virt_efi_reset_system+0x3c/0xd0
efi_reboot+0xd4/0x11c
machine_restart+0x60/0x9c
emergency_restart+0x1c/0x2c
sysrq_handle_reboot+0x1c/0x2c
__handle_sysrq+0xd0/0x194
write_sysrq_trigger+0xbc/0xe4
proc_reg_write+0xd4/0xf0
vfs_write+0xa8/0x148
ksys_write+0x6c/0xd8
__arm64_sys_write+0x18/0x28
el0_svc_common.constprop.3+0xe4/0x16c
do_el0_svc+0x1c/0x2c
el0_svc+0x20/0x30
el0_sync_handler+0x80/0x17c
el0_sync+0x158/0x180
The reason for this problem is that IRQs have already been disabled in
machine_restart(), which then calls down_interruptible() in
virt_efi_reset_system(); sleeping in IRQ context is dangerous.
Commit 99409b935c9a ("locking/semaphore: Add might_sleep() to down_*()
family") added might_sleep() to down_interruptible(), which is why the
bug is now reported. down_trylock() solves this problem because it
never sleeps.
Signed-off-by: Zhang Jianhua <chris.zjh(a)huawei.com>
Reviewed-by: Chen Lifu <chenlifu(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Reviewed-by: Liao Chang <liaochang1(a)huawei.com>
Reviewed-by: He Ying <heying24(a)huawei.com>
Signed-off-by: Chen Jun <chenjun102(a)huawei.com>
Reviewed-by: Yihang Xu <xuyihang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/firmware/efi/runtime-wrappers.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/firmware/efi/runtime-wrappers.c b/drivers/firmware/efi/runtime-wrappers.c
index 04288719721fc..ecee810f7a6fa 100644
--- a/drivers/firmware/efi/runtime-wrappers.c
+++ b/drivers/firmware/efi/runtime-wrappers.c
@@ -408,7 +408,7 @@ static void virt_efi_reset_system(int reset_type,
unsigned long data_size,
efi_char16_t *data)
{
- if (down_interruptible(&efi_runtime_lock)) {
+ if (down_trylock(&efi_runtime_lock)) {
pr_warn("failed to invoke the reset_system() runtime service:\n"
"could not get exclusive access to the firmware\n");
return;
--
2.25.1
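The essence of the fix is swapping a sleeping lock acquisition for a non-blocking attempt. A rough userspace analogue using a POSIX semaphore is shown below (purely illustrative; the names are made up): try the lock, and if it is contended, warn and bail out instead of sleeping.

#include <semaphore.h>
#include <stdio.h>

static sem_t fw_lock;   /* stands in for efi_runtime_lock */

/* Analogue of virt_efi_reset_system(): must never sleep waiting for the lock. */
static void reset_system(void)
{
	if (sem_trywait(&fw_lock) != 0) {
		/* lock is held by someone else; give up immediately */
		fprintf(stderr, "could not get exclusive access, skipping\n");
		return;
	}
	printf("invoking reset_system()\n");
	sem_post(&fw_lock);
}

int main(void)
{
	sem_init(&fw_lock, 0, 1);
	reset_system();          /* lock free: proceeds */
	sem_wait(&fw_lock);      /* simulate another user holding the lock */
	reset_system();          /* lock busy: bails out without blocking */
	sem_post(&fw_lock);
	sem_destroy(&fw_lock);
	return 0;
}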

22 Oct '21
From: Lijun Fang <fanglijun3(a)huawei.com>
ascend inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4DZYE
CVE: NA
---------------------------
svm_mmap uses the pgoff field to carry the requested nid, which is used
as the CDM node.
Signed-off-by: Lijun Fang <fanglijun3(a)huawei.com>
Reviewed-by: Weilong Chen <chenweilong(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/char/svm.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/drivers/char/svm.c b/drivers/char/svm.c
index f312b7e02263c..62092fbaa0104 100644
--- a/drivers/char/svm.c
+++ b/drivers/char/svm.c
@@ -1820,9 +1820,17 @@ static int svm_mmap(struct file *file, struct vm_area_struct *vma)
if ((vma->vm_end < vma->vm_start) || (vm_size > MMAP_PHY32_MAX))
return -EINVAL;
- page = alloc_pages(GFP_KERNEL | GFP_DMA32, get_order(vm_size));
+ /* vma->vm_pgoff transfer the nid */
+ if (vma->vm_pgoff == 0)
+ page = alloc_pages(GFP_KERNEL | GFP_DMA32,
+ get_order(vm_size));
+ else
+ page = alloc_pages_node((int)vma->vm_pgoff,
+ GFP_KERNEL | __GFP_THISNODE,
+ get_order(vm_size));
if (!page) {
- dev_err(sdev->dev, "fail to alloc page\n");
+ dev_err(sdev->dev, "fail to alloc page on node 0x%lx\n",
+ vma->vm_pgoff);
return -ENOMEM;
}
--
2.25.1
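With this change the caller selects the allocation node through the mmap offset: vma->vm_pgoff is the file offset in pages, so passing nid * PAGE_SIZE as the offset requests pages from that node. A hedged usage sketch follows; the /dev/svm0 device path is an assumption for illustration, not taken from the patch.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page_size = sysconf(_SC_PAGESIZE);
	int nid = 1;                           /* desired CDM/NUMA node */
	size_t len = 2 * page_size;
	int fd = open("/dev/svm0", O_RDWR);    /* device node name is assumed */

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* vm_pgoff = offset / page_size, so encode the nid in the offset. */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
		       fd, (off_t)nid * page_size);
	if (p == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}

	printf("mapped %zu bytes backed by pages from node %d\n", len, nid);
	munmap(p, len);
	close(fd);
	return 0;
}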

[PATCH openEuler-1.0-LTS] Ascend/hugetlb:support alloc normal and buddy hugepage
by Yang Yingliang 22 Oct '21
From: Zhou Guanghui <zhouguanghui1(a)huawei.com>
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4D63I
CVE: NA
----------------------------------------------------
The current function hugetlb_alloc_hugepage allocates from static
hugepages first. When the static hugepages are used up, it attempts to
allocate hugepages from the buddy system. Two additional modes are now
supported: static hugepages only and buddy hugepages only.
Signed-off-by: Zhou Guanghui <zhouguanghui1(a)huawei.com>
Signed-off-by: Guo Mengqi <guomengqi3(a)huawei.com>
Reviewed-by: Weilong Chen <chenweilong(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/linux/hugetlb.h | 11 +++++++++--
mm/hugetlb.c | 35 +++++++++++++++++++++++++++++++++--
2 files changed, 42 insertions(+), 4 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 830c41a5ca70e..de6cdfa51694c 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -374,8 +374,15 @@ int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
pgoff_t idx);
#ifdef CONFIG_ASCEND_FEATURES
+#define HUGETLB_ALLOC_NONE 0x00
+#define HUGETLB_ALLOC_NORMAL 0x01 /* normal hugepage */
+#define HUGETLB_ALLOC_BUDDY 0x02 /* buddy hugepage */
+#define HUGETLB_ALLOC_MASK (HUGETLB_ALLOC_NONE | \
+ HUGETLB_ALLOC_NORMAL | \
+ HUGETLB_ALLOC_BUDDY)
+
const struct hstate *hugetlb_get_hstate(void);
-struct page *hugetlb_alloc_hugepage(int nid);
+struct page *hugetlb_alloc_hugepage(int nid, int flag);
int hugetlb_insert_hugepage_pte(struct mm_struct *mm, unsigned long addr,
pgprot_t prot, struct page *hpage);
#else
@@ -384,7 +391,7 @@ static inline const struct hstate *hugetlb_get_hstate(void)
return NULL;
}
-static inline struct page *hugetlb_alloc_hugepage(int nid)
+static inline struct page *hugetlb_alloc_hugepage(int nid, int flag)
{
return NULL;
}
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3091c61bb63e3..907b1351b0f5f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5233,17 +5233,48 @@ const struct hstate *hugetlb_get_hstate(void)
}
EXPORT_SYMBOL_GPL(hugetlb_get_hstate);
+static struct page *hugetlb_alloc_hugepage_normal(struct hstate *h,
+ gfp_t gfp_mask, int nid)
+{
+ struct page *page = NULL;
+
+ spin_lock(&hugetlb_lock);
+ if (h->free_huge_pages - h->resv_huge_pages > 0)
+ page = dequeue_huge_page_nodemask(h, gfp_mask, nid, NULL, NULL);
+ spin_unlock(&hugetlb_lock);
+
+ return page;
+}
+
/*
* Allocate hugepage without reserve
*/
-struct page *hugetlb_alloc_hugepage(int nid)
+struct page *hugetlb_alloc_hugepage(int nid, int flag)
{
+ struct hstate *h = &default_hstate;
+ gfp_t gfp_mask = htlb_alloc_mask(h);
+ struct page *page = NULL;
+
VM_WARN_ON(nid < 0 || nid >= MAX_NUMNODES);
+ if (flag & ~HUGETLB_ALLOC_MASK)
+ return NULL;
+
if (nid == NUMA_NO_NODE)
nid = numa_mem_id();
- return alloc_huge_page_node(&default_hstate, nid);
+ gfp_mask |= __GFP_THISNODE;
+
+ if (flag & HUGETLB_ALLOC_NORMAL)
+ page = hugetlb_alloc_hugepage_normal(h, gfp_mask, nid);
+ else if (flag & HUGETLB_ALLOC_BUDDY) {
+ if (enable_charge_mighp)
+ gfp_mask |= __GFP_ACCOUNT;
+ page = alloc_migrate_huge_page(h, gfp_mask, nid, NULL);
+ } else
+ page = alloc_huge_page_node(h, nid);
+
+ return page;
}
EXPORT_SYMBOL_GPL(hugetlb_alloc_hugepage);
--
2.25.1
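For reference, the new flag argument is a small bitmask with mutually exclusive modes, and any bit outside HUGETLB_ALLOC_MASK is rejected. The self-contained mock-up below mirrors that validation and dispatch logic in userspace; it is a sketch of the control flow only, not the kernel allocator.

#include <stdio.h>

#define HUGETLB_ALLOC_NONE   0x00
#define HUGETLB_ALLOC_NORMAL 0x01  /* static (pre-reserved) hugepage only */
#define HUGETLB_ALLOC_BUDDY  0x02  /* buddy-allocated hugepage only */
#define HUGETLB_ALLOC_MASK   (HUGETLB_ALLOC_NONE | \
			      HUGETLB_ALLOC_NORMAL | \
			      HUGETLB_ALLOC_BUDDY)

static const char *alloc_hugepage(int nid, int flag)
{
	if (flag & ~HUGETLB_ALLOC_MASK)
		return NULL;                    /* unknown bits: reject */

	if (flag & HUGETLB_ALLOC_NORMAL)
		return "dequeue from static pool";
	if (flag & HUGETLB_ALLOC_BUDDY)
		return "allocate from buddy system";

	/* default: static pool first, then fall back to buddy */
	return "static pool, fall back to buddy";
}

int main(void)
{
	printf("%s\n", alloc_hugepage(0, HUGETLB_ALLOC_NONE));
	printf("%s\n", alloc_hugepage(0, HUGETLB_ALLOC_NORMAL));
	printf("%s\n", alloc_hugepage(0, HUGETLB_ALLOC_BUDDY));
	printf("%s\n", alloc_hugepage(0, 0x10) ? "ok" : "rejected: bad flag");
	return 0;
}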

[PATCH openEuler-1.0-LTS] Ascend/memcg: Use CONFIG_ASCEND_FEATURES for customized interfaces
by Yang Yingliang 22 Oct '21
From: Zhou Guanghui <zhouguanghui1(a)huawei.com>
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4D63I
CVE: NA
-------------------------------------------------------------
The following functions are used only in the ascend scenario:
hugetlb_get_hstate,
hugetlb_alloc_hugepage,
hugetlb_insert_hugepage_pte,
hugetlb_insert_hugepage_pte_by_pa
Also remove the unused interface hugetlb_insert_hugepage.
Signed-off-by: Zhou Guanghui <zhouguanghui1(a)huawei.com>
Signed-off-by: Guo Mengqi <guomengqi3(a)huawei.com>
Reviewed-by: Weilong Chen <chenweilong(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/linux/hugetlb.h | 33 ++++++++++++++++++++++++++-------
mm/hugetlb.c | 2 +-
2 files changed, 27 insertions(+), 8 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 6831b24defd64..830c41a5ca70e 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -373,11 +373,27 @@ struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
pgoff_t idx);
-#ifdef CONFIG_ARM64
+#ifdef CONFIG_ASCEND_FEATURES
const struct hstate *hugetlb_get_hstate(void);
struct page *hugetlb_alloc_hugepage(int nid);
int hugetlb_insert_hugepage_pte(struct mm_struct *mm, unsigned long addr,
pgprot_t prot, struct page *hpage);
+#else
+static inline const struct hstate *hugetlb_get_hstate(void)
+{
+ return NULL;
+}
+
+static inline struct page *hugetlb_alloc_hugepage(int nid)
+{
+ return NULL;
+}
+
+static inline int hugetlb_insert_hugepage_pte(struct mm_struct *mm,
+ unsigned long addr, pgprot_t prot, struct page *hpage)
+{
+ return -EPERM;
+}
#endif
/* arch callback */
@@ -614,12 +630,6 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
{
}
-static inline int hugetlb_insert_hugepage_pte_by_pa(struct mm_struct *mm,
- unsigned long vir_addr,
- pgprot_t prot, unsigned long phy_addr)
-{
- return 0;
-}
#endif /* CONFIG_HUGETLB_PAGE */
static inline spinlock_t *huge_pte_lock(struct hstate *h,
@@ -632,4 +642,13 @@ static inline spinlock_t *huge_pte_lock(struct hstate *h,
return ptl;
}
+#ifndef CONFIG_ASCEND_FEATURES
+static inline int hugetlb_insert_hugepage_pte_by_pa(struct mm_struct *mm,
+ unsigned long vir_addr,
+ pgprot_t prot, unsigned long phy_addr)
+{
+ return -EPERM;
+}
+#endif
+
#endif /* _LINUX_HUGETLB_H */
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5132fefb60c86..3091c61bb63e3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5226,7 +5226,7 @@ void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason)
}
}
-#ifdef CONFIG_ARM64
+#ifdef CONFIG_ASCEND_FEATURES
const struct hstate *hugetlb_get_hstate(void)
{
return &default_hstate;
--
2.25.1

22 Oct '21
From: Miklos Szeredi <mszeredi(a)redhat.com>
mainline inclusion
from mainline-5.14
commit 76224355db7570cbe6b6f75c8929a1558828dd55
category: bugfix
bugzilla: 181107
CVE: NA
---------------------------
fuse_finish_open() will be called with FUSE_NOWRITE in case of atomic
O_TRUNC. This can deadlock with fuse_wait_on_page_writeback() in
fuse_launder_page() triggered by invalidate_inode_pages2().
Fix by replacing invalidate_inode_pages2() in fuse_finish_open() with a
truncate_pagecache() call. This makes sense regardless of FOPEN_KEEP_CACHE
or fc->writeback cache, so do it unconditionally.
Reported-by: Xie Yongji <xieyongji(a)bytedance.com>
Reported-and-tested-by: syzbot+bea44a5189836d956894(a)syzkaller.appspotmail.com
Fixes: e4648309b85a ("fuse: truncate pending writes on O_TRUNC")
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Miklos Szeredi <mszeredi(a)redhat.com>
Conflict: fs/fuse/file.c
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/fuse/file.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 41e2a7b567d7f..5ecabacabd477 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -178,8 +178,6 @@ void fuse_finish_open(struct inode *inode, struct file *file)
if (ff->open_flags & FOPEN_DIRECT_IO)
file->f_op = &fuse_direct_io_file_operations;
- if (!(ff->open_flags & FOPEN_KEEP_CACHE))
- invalidate_inode_pages2(inode->i_mapping);
if (ff->open_flags & FOPEN_STREAM)
stream_open(inode, file);
else if (ff->open_flags & FOPEN_NONSEEKABLE)
@@ -191,10 +189,13 @@ void fuse_finish_open(struct inode *inode, struct file *file)
fi->attr_version = ++fc->attr_version;
i_size_write(inode, 0);
spin_unlock(&fc->lock);
+ truncate_pagecache(inode, 0);
fuse_invalidate_attr(inode);
if (fc->writeback_cache)
file_update_time(file);
- }
+ } else if (!(ff->open_flags & FOPEN_KEEP_CACHE))
+ invalidate_inode_pages2(inode->i_mapping);
+
if ((file->f_mode & FMODE_WRITE) && fc->writeback_cache)
fuse_link_write_file(file);
}
--
2.25.1

22 Oct '21
From: Zhang Yi <yi.zhang(a)huawei.com>
mainline inclusion
from mainline-5.15-rc4
commit 4df031ff5876d94b48dd9ee486ba5522382a06b2
category: perf
bugzilla: NA
CVE: NA
---------------------------
After commit 3da40c7b0898 ("ext4: only call ext4_truncate when size <=
isize"), i_disksize could always be updated to i_size in ext4_setattr(),
and we can be sure that i_disksize <= i_size since we hold the inode lock,
and if i_disksize < i_size there are delalloc writes pending in the range
up to i_size. If the end of the current write is <= i_size, there's no
need to touch i_disksize since writeback will push i_disksize up to
i_size eventually. So we can switch to checking i_size instead of
i_disksize in ext4_da_write_end() when writing to the end of the file.
We can also remove ext4_mark_inode_dirty() because we defer inode
dirtying to generic_write_end() or ext4_da_write_inline_data_end().
Signed-off-by: Zhang Yi <yi.zhang(a)huawei.com>
Reviewed-by: Jan Kara <jack(a)suse.cz>
Signed-off-by: Theodore Ts'o <tytso(a)mit.edu>
Link: https://lore.kernel.org/r/20210716122024.1105856-2-yi.zhang@huawei.com
Conflicts:
fs/ext4/inode.c
Reviewed-by: Yang Erkun <yangerkun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/ext4/inode.c | 34 ++++++++++++++++++----------------
1 file changed, 18 insertions(+), 16 deletions(-)
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index b1bede3c489e7..9bb38c936e020 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3190,35 +3190,37 @@ static int ext4_da_write_end(struct file *file,
end = start + copied - 1;
/*
- * generic_write_end() will run mark_inode_dirty() if i_size
- * changes. So let's piggyback the i_disksize mark_inode_dirty
- * into that.
+ * Since we are holding inode lock, we are sure i_disksize <=
+ * i_size. We also know that if i_disksize < i_size, there are
+ * delalloc writes pending in the range upto i_size. If the end of
+ * the current write is <= i_size, there's no need to touch
+ * i_disksize since writeback will push i_disksize upto i_size
+ * eventually. If the end of the current write is > i_size and
+ * inside an allocated block (ext4_da_should_update_i_disksize()
+ * check), we need to update i_disksize here as neither
+ * ext4_writepage() nor certain ext4_writepages() paths not
+ * allocating blocks update i_disksize.
+ *
+ * Note that we defer inode dirtying to generic_write_end() /
+ * ext4_da_write_inline_data_end().
*/
new_i_size = pos + copied;
- if (copied && new_i_size > EXT4_I(inode)->i_disksize) {
+ if (copied && new_i_size > inode->i_size) {
if (ext4_has_inline_data(inode) ||
- ext4_da_should_update_i_disksize(page, end)) {
+ ext4_da_should_update_i_disksize(page, end))
ext4_update_i_disksize(inode, new_i_size);
- /* We need to mark inode dirty even if
- * new_i_size is less that inode->i_size
- * bu greater than i_disksize.(hint delalloc)
- */
- ext4_mark_inode_dirty(handle, inode);
- }
}
if (write_mode != CONVERT_INLINE_DATA &&
ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA) &&
ext4_has_inline_data(inode))
- ret2 = ext4_da_write_inline_data_end(inode, pos, len, copied,
+ ret = ext4_da_write_inline_data_end(inode, pos, len, copied,
page);
else
- ret2 = generic_write_end(file, mapping, pos, len, copied,
+ ret = generic_write_end(file, mapping, pos, len, copied,
page, fsdata);
- copied = ret2;
- if (ret2 < 0)
- ret = ret2;
+ copied = ret;
ret2 = ext4_journal_stop(handle);
if (!ret)
ret = ret2;
--
2.25.1
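The heart of the change is the condition under which i_disksize is pushed forward: only when the write ends beyond the in-memory i_size (and the end falls in an allocated block), because writeback will otherwise push i_disksize up to i_size anyway. A tiny standalone sketch of that decision is shown below, with simplified stand-ins for the ext4 helpers.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for ext4_da_should_update_i_disksize(). */
static bool end_in_allocated_block(long end)
{
	return true;    /* assume yes for illustration */
}

static void da_write_end(long pos, long copied, long i_size, long *i_disksize)
{
	long new_i_size = pos + copied;

	/* New rule: compare against i_size, not i_disksize. */
	if (copied && new_i_size > i_size &&
	    end_in_allocated_block(new_i_size - 1))
		*i_disksize = new_i_size;   /* ext4_update_i_disksize() */
	/* No explicit mark_inode_dirty() here: deferred to generic_write_end(). */
}

int main(void)
{
	long i_disksize = 4096, i_size = 8192;

	da_write_end(6000, 1000, i_size, &i_disksize);   /* ends below i_size */
	printf("i_disksize=%ld (unchanged)\n", i_disksize);

	da_write_end(8000, 1000, i_size, &i_disksize);   /* extends past i_size */
	printf("i_disksize=%ld (updated)\n", i_disksize);
	return 0;
}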

22 Oct '21
From: gouhao <gouhao(a)uniontech.com>
Fix hibmc not getting EDID.
issue: https://gitee.com/openeuler/kernel/issues/I469VQ
gouhao (2):
drm/hisilicon: Support i2c driver algorithms for bit-shift adapters
drm/hisilicon: Features to support reading resolutions from EDID
drivers/gpu/drm/hisilicon/hibmc/Makefile | 3 +-
.../gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h | 25 +++++
.../gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c | 98 +++++++++++++++++++
.../gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c | 54 ++++++----
4 files changed, 162 insertions(+), 18 deletions(-)
create mode 100644 drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c
--
2.20.1

[PATCH kernel-4.19 openEuler-1.0-LTS] sched/topology: Fix sched_domain_topology_level alloc in sched_init_numa()
by Cheng Jian 21 Oct '21
From: Dietmar Eggemann <dietmar.eggemann(a)arm.com>
mainline inclusion
from mainline-v5.12-rc1
commit 71e5f6644fb2f3304fcb310145ded234a37e7cc1
category: bugfix
bugzilla: 182847,https://gitee.com/openeuler/kernel/issues/I4EVBL
CVE: NA
----------------------------------------------------------
Commit "sched/topology: Make sched_init_numa() use a set for the
deduplicating sort" allocates 'i + nr_levels (level)' instead of
'i + nr_levels + 1' sched_domain_topology_level.
This led to an Oops (on Arm64 juno with CONFIG_SCHED_DEBUG):
sched_init_domains
build_sched_domains()
__free_domain_allocs()
__sdt_free() {
...
for_each_sd_topology(tl)
...
sd = *per_cpu_ptr(sdd->sd, j); <--
...
}
Signed-off-by: Dietmar Eggemann <dietmar.eggemann(a)arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Tested-by: Vincent Guittot <vincent.guittot(a)linaro.org>
Tested-by: Barry Song <song.bao.hua(a)hisilicon.com>
Link: https://lkml.kernel.org/r/6000e39e-7d28-c360-9cd6-8798fd22a9bf@arm.com
Fixes: 620a6dc40754 ("sched/topology: Make sched_init_numa() use a set for the deduplicating sort")
Signed-off-by: Jialin Zhang <zhangjialin11(a)huawei.com>
---
kernel/sched/topology.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 23f0a69b2ed4..ad5591520c99 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1548,7 +1548,7 @@ void sched_init_numa(void)
/* Compute default topology size */
for (i = 0; sched_domain_topology[i].mask; i++);
- tl = kzalloc((i + nr_levels) *
+ tl = kzalloc((i + nr_levels + 1) *
sizeof(struct sched_domain_topology_level), GFP_KERNEL);
if (!tl)
return;
--
2.25.1
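The underlying rule behind the '+ 1': sched_domain_topology is walked until an entry with a NULL mask, so the freshly allocated copy needs room for that terminating sentinel entry as well. The small self-contained example below shows the same sentinel-terminated allocation pattern with made-up level names.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct level { const char *name; };   /* stand-in for sched_domain_topology_level */

static struct level defaults[] = {
	{ "SMT" }, { "MC" }, { "DIE" },
	{ NULL },                          /* sentinel: iteration stops here */
};

int main(void)
{
	int i, extra = 2;                  /* extra NUMA levels to append */

	/* Count default entries exactly as sched_init_numa() does. */
	for (i = 0; defaults[i].name; i++)
		;

	/* Need i + extra entries *plus one* for the terminating sentinel. */
	struct level *tl = calloc(i + extra + 1, sizeof(*tl));
	if (!tl)
		return 1;

	memcpy(tl, defaults, i * sizeof(*tl));
	tl[i].name = "NUMA-0";
	tl[i + 1].name = "NUMA-1";
	/* tl[i + extra] stays zeroed and terminates the walk below. */

	for (i = 0; tl[i].name; i++)
		printf("level %d: %s\n", i, tl[i].name);

	free(tl);
	return 0;
}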
Backport LTS 5.10.68 patches from upstream.
Adrian Bunk (1):
bnx2x: Fix enabling network interfaces without VFs
Alex Elder (1):
net: ipa: initialize all filter table slots
Alexander Egorenkov (1):
s390/sclp: fix Secure-IPL facility detection
Anand Jain (1):
btrfs: fix upper limit for max_inline for page size 64K
Andrea Claudi (1):
selftest: net: fix typo in altname test
Andy Shevchenko (2):
PCI: Sync __pci_register_driver() stub for CONFIG_PCI=n
mfd: lpc_sch: Partially revert "Add support for Intel Quark X1000"
Anshuman Khandual (1):
KVM: arm64: Restrict IPA size to maximum 48 bits on 4K and 16K page
size
Arnaldo Carvalho de Melo (1):
perf bench inject-buildid: Handle writen() errors
Arnd Bergmann (1):
drm/rockchip: cdn-dp-core: Make cdn_dp_core_resume __maybe_unused
Aya Levin (1):
udp_tunnel: Fix udp_tunnel_nic work-queue type
Baptiste Lepers (1):
events: Reuse value read using READ_ONCE instead of re-reading it
Benjamin Hesmans (1):
netfilter: socket: icmp6: fix use-after-scope
Christophe JAILLET (4):
PCI: tegra: Fix OF node reference leak
mtd: rawnand: cafe: Fix a resource leak in the error handling path of
'cafe_nand_probe()'
gpio: mpc8xxx: Fix a resources leak in the error handling path of
'mpc8xxx_probe()'
gpio: mpc8xxx: Use 'devm_gpiochip_add_data()' to simplify the code and
avoid a leak
Dan Carpenter (1):
PCI: Fix pci_dev_str_match_path() alloc while atomic bug
Daniele Palmas (1):
net: usb: cdc_mbim: avoid altsetting toggling for Telit LN920
David Heidelberg (1):
dt-bindings: arm: Fix Toradex compatible typo
David Hildenbrand (1):
mm/memory_hotplug: use "unsigned long" for PFN in zone_for_pfn_range()
Dinghao Liu (2):
PCI: rcar: Fix runtime PM imbalance in rcar_pcie_ep_probe()
qlcnic: Remove redundant unlock in qlcnic_pinit_from_rom
Edwin Peer (3):
bnxt_en: make bnxt_free_skbs() safe to call after bnxt_free_mem()
bnxt_en: fix stored FW_PSID version masks
bnxt_en: log firmware debug notifications
Eli Cohen (1):
net/{mlx5|nfp|bnxt}: Remove unnecessary RTNL lock assert
Eric Dumazet (3):
net-caif: avoid user-triggerable WARN_ON(1)
net/af_unix: fix a data-race in unix_dgram_poll
fq_codel: reject silly quantum parameters
Ernst Sjöstrand (1):
drm/amd/amdgpu: Increase HWIP_MAX_INSTANCE to 10
Evan Quan (1):
PCI: Add AMD GPU multi-function power dependencies
Florian Fainelli (2):
r6040: Restore MDIO clock frequency after MAC reset
net: dsa: bcm_sf2: Fix array overrun in bcm_sf2_num_active_ports()
George Cherian (1):
PCI: Add ACS quirks for Cavium multi-function devices
Gustavo A. R. Silva (1):
netfilter: Fix fall-through warnings for Clang
Hans de Goede (1):
mfd: axp20x: Update AXP288 volatile ranges
Hoang Le (1):
tipc: increase timeout in tipc_sk_enqueue()
Ilya Leoshkevich (3):
s390/bpf: Fix optimizing out zero-extensions
s390/bpf: Fix 64-bit subtraction of the -0x80000000 constant
s390/bpf: Fix branch shortening during codegen pass
Jan Kiszka (1):
watchdog: Start watchdog in watchdog_set_last_hw_keepalive only if
appropriate
Jeff Moyer (1):
x86/pat: Pass valid address to sanitize_phys()
Juergen Gross (2):
xen: reset legacy rtc flag for PV domU
PM: base: power: don't try to use non-existing RTC for storing data
Keith Busch (1):
nvme-tcp: fix io_work priority inversion
Kishon Vijay Abraham I (3):
PCI: cadence: Use bitfield for *quirk_retrain_flag* instead of bool
PCI: j721e: Add PCIe support for J7200
PCI: j721e: Add PCIe support for AM64
Kortan (1):
gen_compile_commands: fix missing 'sys' package
Li Huafei (1):
perf unwind: Do not overwrite
FEATURE_CHECK_LDFLAGS-libunwind-{x86,aarch64}
Lin, Zhenpeng (1):
dccp: don't duplicate ccid when cloning dccp sock
Linus Walleij (3):
mfd: db8500-prcmu: Adjust map to reality
backlight: ktd253: Stabilize backlight
net: dsa: tag_rtl4_a: Fix egress tags
Lucas Stach (8):
drm/etnaviv: return context from etnaviv_iommu_context_get
drm/etnaviv: put submit prev MMU context when it exists
drm/etnaviv: stop abusing mmu_context as FE running marker
drm/etnaviv: keep MMU context across runtime suspend/resume
drm/etnaviv: exec and MMU state is lost when resetting the GPU
drm/etnaviv: fix MMU context leak on GPU reset
drm/etnaviv: reference MMU context when setting up hardware state
drm/etnaviv: add missing MMU context put when reaping MMU mapping
Maor Gottlieb (1):
net/mlx5: Fix potential sleeping in atomic context
Marc Zyngier (1):
mfd: Don't use irq_create_mapping() to resolve a mapping
Mark Brown (1):
arm64/sve: Use correct size when reinitialising SVE state
Masami Hiramatsu (2):
tracing/probes: Reject events which have the same name of existing one
tracing/boot: Fix a hist trigger dependency for boot time tracing
Matthias Schiffer (1):
mfd: tqmx86: Clear GPIO IRQ resource when no IRQ is set
Matthieu Baerts (1):
selftests: mptcp: clean tmp files in simult_flows
Michael Chan (6):
bnxt_en: Fix asic.rev in devlink dev info command
bnxt_en: Consolidate firmware reset event logging.
bnxt_en: Convert to use netif_level() helpers.
bnxt_en: Improve logging of error recovery settings information.
bnxt_en: Fix possible unintended driver initiated error recovery
bnxt_en: Fix error recovery regression
Michael Petlan (1):
perf machine: Initialize srcline string member in add_location struct
Mike Rapoport (1):
x86/mm: Fix kern_addr_valid() to cope with existing but not present
entries
Miklos Szeredi (1):
fuse: fix use after free in fuse_read_interrupt()
Miquel Raynal (1):
dt-bindings: mtd: gpmc: Fix the ECC bytes vs. OOB bytes equation
Nadeem Athani (1):
PCI: cadence: Add quirk flag to set minimum delay in LTSSM
Detect.Quiet state
Nicholas Piggin (1):
KVM: PPC: Book3S HV: Tolerate treclaim. in fake-suspend mode changing
registers
Oliver Upton (2):
KVM: arm64: Fix read-side race on updates to vcpu reset state
KVM: arm64: Handle PSCI resets before userspace touches vCPU state
Om Prakash Singh (2):
PCI: tegra194: Fix handling BME_CHGED event
PCI: tegra194: Fix MSI-X programming
Paolo Abeni (1):
vhost_net: fix OoB on sendmsg() failure.
Paolo Valente (1):
block, bfq: honor already-setup queue merges
Pavel Skripkin (1):
netfilter: nft_ct: protect nft_ct_pcpu_template_refcnt with mutex
Rafał Miłecki (3):
net: dsa: b53: Fix calculating number of switch ports
net: dsa: b53: Set correct number of ports in the DSA struct
net: dsa: b53: Fix IMP port setup on BCM5301x
Randy Dunlap (3):
ptp: dp83640: don't define PAGE0
ARC: export clear_user_page() for modules
mfd: lpc_sch: Rename GPIOBASE to prevent build error
Rob Herring (2):
PCI: of: Don't fail devm_pci_alloc_host_bridge() on missing 'ranges'
PCI: iproc: Fix BCMA probe resource handling
Robert Foss (1):
drm/bridge: lt9611: Fix handling of 4k panels
Ryoga Saito (1):
Set fc_nlinfo in nh_create_ipv4, nh_create_ipv6
Saeed Mahameed (2):
ethtool: Fix rxnfc copy to user buffer overflow
net/mlx5: FWTrace, cancel work on alloc pd error flow
Shai Malin (1):
qed: Handle management FW error
Smadar Fuks (1):
octeontx2-af: Add additional register check to rvu_poll_reg()
Sukadev Bhattiprolu (1):
ibmvnic: check failover_pending in login response
Tony Luck (1):
x86/mce: Avoid infinite loop for copy from user recovery
Vishal Aslot (1):
PCI: ibmphp: Fix double unmap of io_mem
Vladimir Oltean (1):
net: dsa: destroy the phylink instance on any error in
dsa_slave_phy_setup
Wasim Khan (1):
PCI: Add ACS quirks for NXP LX2xx0 and LX2xx2 platforms
Will Deacon (1):
x86/uaccess: Fix 32-bit __get_user_asm_u64() when
CC_HAS_ASM_GOTO_OUTPUT=y
Willem de Bruijn (1):
ip_gre: validate csum_start only on pull
Xin Long (1):
tipc: fix an use-after-free issue in tipc_recvmsg
Xiyu Yang (1):
net/l2tp: Fix reference count leak in l2tp_udp_recv_core
Yang Li (3):
ethtool: Fix an error code in cxgb2.c
NTB: Fix an error code in ntb_msit_probe()
NTB: perf: Fix an error code in perf_setup_inbuf()
Yoshihiro Shimoda (1):
net: renesas: sh_eth: Fix freeing wrong tx descriptor
Ziyang Xuan (1):
net: hso: add failure handler for add_net_device
zhenggy (1):
tcp: fix tp->undo_retrans accounting in tcp_sacktag_one()
.../devicetree/bindings/arm/tegra.yaml | 2 +-
.../devicetree/bindings/mtd/gpmc-nand.txt | 2 +-
arch/arc/mm/cache.c | 2 +-
arch/arm64/kernel/fpsimd.c | 2 +-
arch/arm64/kvm/arm.c | 8 ++
arch/arm64/kvm/reset.c | 24 ++++--
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 36 ++++++++-
arch/s390/net/bpf_jit_comp.c | 70 ++++++++--------
arch/x86/include/asm/uaccess.h | 4 +-
arch/x86/kernel/cpu/mce/core.c | 43 +++++++---
arch/x86/mm/init_64.c | 6 +-
arch/x86/mm/pat/memtype.c | 7 +-
arch/x86/xen/enlighten_pv.c | 7 ++
block/bfq-iosched.c | 16 +++-
drivers/base/power/trace.c | 10 +++
drivers/gpio/gpio-mpc8xxx.c | 5 +-
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 2 +-
drivers/gpu/drm/bridge/lontium-lt9611.c | 8 +-
drivers/gpu/drm/etnaviv/etnaviv_buffer.c | 3 +-
drivers/gpu/drm/etnaviv/etnaviv_gem.c | 3 +-
drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c | 3 +-
drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 43 +++++-----
drivers/gpu/drm/etnaviv/etnaviv_gpu.h | 1 +
drivers/gpu/drm/etnaviv/etnaviv_iommu.c | 4 +
drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c | 8 ++
drivers/gpu/drm/etnaviv/etnaviv_mmu.c | 1 +
drivers/gpu/drm/etnaviv/etnaviv_mmu.h | 4 +-
drivers/gpu/drm/rockchip/cdn-dp-core.c | 4 +-
drivers/mfd/ab8500-core.c | 2 +-
drivers/mfd/axp20x.c | 3 +-
drivers/mfd/db8500-prcmu.c | 14 ++--
drivers/mfd/lpc_sch.c | 36 ++-------
drivers/mfd/stmpe.c | 4 +-
drivers/mfd/tc3589x.c | 2 +-
drivers/mfd/tqmx86.c | 2 +
drivers/mfd/wm8994-irq.c | 2 +-
drivers/mtd/nand/raw/cafe_nand.c | 4 +-
drivers/net/dsa/b53/b53_common.c | 33 ++++++--
drivers/net/dsa/b53/b53_priv.h | 1 +
drivers/net/dsa/bcm_sf2.c | 2 +-
.../net/ethernet/broadcom/bnx2x/bnx2x_sriov.c | 2 +-
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 80 +++++++++++++------
.../net/ethernet/broadcom/bnxt/bnxt_devlink.c | 6 +-
drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c | 3 -
drivers/net/ethernet/chelsio/cxgb/cxgb2.c | 1 +
drivers/net/ethernet/ibm/ibmvnic.c | 8 ++
.../net/ethernet/marvell/octeontx2/af/rvu.c | 12 ++-
.../mellanox/mlx5/core/diag/fw_tracer.c | 3 +-
.../ethernet/mellanox/mlx5/core/en/rep/tc.c | 3 -
.../net/ethernet/mellanox/mlx5/core/fs_core.c | 5 +-
.../ethernet/netronome/nfp/flower/offload.c | 3 -
drivers/net/ethernet/qlogic/qed/qed_mcp.c | 6 +-
.../net/ethernet/qlogic/qlcnic/qlcnic_init.c | 1 -
drivers/net/ethernet/rdc/r6040.c | 9 ++-
drivers/net/ethernet/renesas/sh_eth.c | 1 +
drivers/net/ipa/ipa_table.c | 3 +-
drivers/net/phy/dp83640_reg.h | 2 +-
drivers/net/usb/cdc_mbim.c | 5 ++
drivers/net/usb/hso.c | 11 ++-
drivers/ntb/test/ntb_msi_test.c | 4 +-
drivers/ntb/test/ntb_perf.c | 1 +
drivers/nvme/host/tcp.c | 20 ++---
drivers/pci/controller/cadence/pci-j721e.c | 61 ++++++++++++--
.../pci/controller/cadence/pcie-cadence-ep.c | 4 +
.../controller/cadence/pcie-cadence-host.c | 3 +
drivers/pci/controller/cadence/pcie-cadence.c | 16 ++++
drivers/pci/controller/cadence/pcie-cadence.h | 17 +++-
drivers/pci/controller/dwc/pcie-tegra194.c | 32 ++++----
drivers/pci/controller/pci-tegra.c | 13 +--
drivers/pci/controller/pcie-iproc-bcma.c | 16 ++--
drivers/pci/controller/pcie-rcar-ep.c | 4 +-
drivers/pci/hotplug/TODO | 3 -
drivers/pci/hotplug/ibmphp_ebda.c | 5 +-
drivers/pci/of.c | 2 +-
drivers/pci/pci.c | 2 +-
drivers/pci/quirks.c | 58 +++++++++++++-
drivers/s390/char/sclp_early.c | 3 +-
drivers/vhost/net.c | 11 ++-
drivers/video/backlight/ktd253-backlight.c | 75 ++++++++++++-----
drivers/watchdog/watchdog_dev.c | 5 +-
fs/btrfs/disk-io.c | 45 ++++++-----
fs/fuse/dev.c | 4 +-
include/linux/memory_hotplug.h | 4 +-
include/linux/pci.h | 5 +-
include/linux/pci_ids.h | 3 +-
include/linux/sched.h | 1 +
include/linux/skbuff.h | 2 +-
include/uapi/linux/pkt_sched.h | 2 +
kernel/events/core.c | 2 +-
kernel/trace/trace_boot.c | 15 ++--
kernel/trace/trace_kprobe.c | 6 +-
kernel/trace/trace_probe.c | 25 ++++++
kernel/trace/trace_probe.h | 1 +
kernel/trace/trace_uprobe.c | 6 +-
mm/memory_hotplug.c | 4 +-
net/caif/chnl_net.c | 19 +----
net/dccp/minisocks.c | 2 +
net/dsa/slave.c | 12 ++-
net/dsa/tag_rtl4_a.c | 7 +-
net/ethtool/ioctl.c | 2 +-
net/ipv4/ip_gre.c | 9 ++-
net/ipv4/nexthop.c | 2 +
net/ipv4/tcp_input.c | 2 +-
net/ipv4/udp_tunnel_nic.c | 2 +-
net/ipv6/netfilter/nf_socket_ipv6.c | 4 +-
net/l2tp/l2tp_core.c | 4 +-
net/netfilter/nf_conntrack_proto_dccp.c | 1 +
net/netfilter/nf_tables_api.c | 1 +
net/netfilter/nft_ct.c | 10 ++-
net/sched/sch_fq_codel.c | 12 ++-
net/tipc/socket.c | 10 ++-
net/unix/af_unix.c | 2 +-
scripts/clang-tools/gen_compile_commands.py | 1 +
tools/perf/Makefile.config | 8 +-
tools/perf/bench/inject-buildid.c | 52 +++++++-----
tools/perf/util/machine.c | 1 +
tools/testing/selftests/net/altnames.sh | 2 +-
.../selftests/net/mptcp/simult_flows.sh | 4 +-
118 files changed, 853 insertions(+), 393 deletions(-)
--
2.20.1
openEuler kernel 21.03 innovation release patches
Alexander Kuznetsov (1):
cgroup1: don't allow '\n' in renaming
Andy Shevchenko (1):
usb: typec: intel_pmc_mux: Put fwnode in error case during ->probe()
Anna Schumaker (1):
NFS: Fix use-after-free in nfs4_init_client()
Desmond Cheong Zhi Xi (1):
drm: Fix use-after-free read in drm_getunique()
Eric Farman (2):
vfio-ccw: Reset FSM state to IDLE inside FSM
vfio-ccw: Serialize FSM IDLE state with I/O completion
Jeremy Szu (1):
ALSA: hda/realtek: fix mute/micmute LEDs for HP EliteBook 840 Aero G8
Jerome Brunet (1):
ASoC: meson: gx-card: fix sound-dai dt schema
Johannes Berg (1):
netlink: disable IRQs for netlink_lock_table()
Li Jun (1):
usb: typec: tcpm: cancel vdm and state machine hrtimer when unregister
tcpm port
Linus Walleij (1):
drm/mcde: Fix off by 10^3 in calculation
Maciej Żenczykowski (1):
usb: f_ncm: only first packet of aggregate needs to start timer
Saravana Kannan (1):
spi: Fix spi device unregister flow
Tiezhu Yang (1):
MIPS: Fix kernel hang under FUNCTION_GRAPH_TRACER and PREEMPT_TRACER
Trond Myklebust (1):
NFSv4: Fix second deadlock in nfs4_evict_inode()
Vincent Guittot (1):
sched/fair: Keep load_avg and load_sum synced
Wesley Cheng (1):
usb: dwc3: gadget: Disable gadget IRQ during pullup disable
Wolfram Sang (1):
mmc: renesas_sdhi: abort tuning when timeout detected
Zhen Lei (1):
tools/bootconfig: Fix error return code in apply_xbc()
Zou Wei (1):
ASoC: sti-sas: add missing MODULE_DEVICE_TABLE
.../bindings/sound/amlogic,gx-sound-card.yaml | 4 ++--
arch/mips/lib/mips-atomic.c | 12 ++++++------
drivers/gpu/drm/drm_ioctl.c | 9 +++++----
drivers/gpu/drm/mcde/mcde_dsi.c | 2 +-
drivers/mmc/host/renesas_sdhi_core.c | 7 ++++++-
drivers/s390/cio/vfio_ccw_drv.c | 12 ++++++++++--
drivers/s390/cio/vfio_ccw_fsm.c | 1 +
drivers/s390/cio/vfio_ccw_ops.c | 2 --
drivers/spi/spi.c | 18 ++++++++++++------
drivers/usb/dwc3/gadget.c | 11 +++++------
drivers/usb/gadget/function/f_ncm.c | 8 ++++----
drivers/usb/typec/mux/intel_pmc_mux.c | 4 +++-
drivers/usb/typec/tcpm/tcpm.c | 3 +++
fs/nfs/nfs4client.c | 2 +-
fs/nfs/nfs4proc.c | 9 +++++++--
kernel/cgroup/cgroup-v1.c | 4 ++++
kernel/sched/fair.c | 11 +++++------
net/netlink/af_netlink.c | 6 ++++--
sound/pci/hda/patch_realtek.c | 1 +
sound/soc/codecs/sti-sas.c | 1 +
tools/bootconfig/main.c | 1 +
21 files changed, 82 insertions(+), 46 deletions(-)
--
2.25.1
Backport LTS 5.10.67 patches from upstream.
Ahmad Fatoum (1):
clk: imx8m: fix clock tree update of TF-A managed clocks
Alexey Kardashevskiy (1):
KVM: PPC: Fix clearing never mapped TCEs in realmode
Alim Akhtar (1):
scsi: ufs: ufs-exynos: Fix static checker warning
Alyssa Rosenzweig (3):
drm/panfrost: Simplify lock_region calculation
drm/panfrost: Use u64 for size in lock_region
drm/panfrost: Clamp lock region to Bifrost minimum
Amir Goldstein (1):
fanotify: limit number of event merge attempts
Andreas Obergschwandtner (1):
ARM: tegra: tamonten: Fix UART pad setting
Andrey Grodzovsky (1):
drm/amdgpu: Fix BUG_ON assert
Andy Shevchenko (1):
ata: sata_dwc_460ex: No need to call phy_exit() befre phy_init()
AngeloGioacchino Del Regno (2):
arm64: dts: qcom: sdm630: Rewrite memory map
arm64: dts: qcom: sdm630: Fix TLMM node and pinctrl configuration
Ani Sinha (1):
x86/hyperv: fix for unwanted manipulation of sched_clock when TSC
marked unstable
Anirudh Rayabharam (1):
usbip: give back URBs for unsent unlink requests during cleanup
Anna Schumaker (1):
sunrpc: Fix return value of get_srcport()
Anson Jacob (1):
drm/amd/amdgpu: Update debugfs link_settings output link_rate field in
hex
Anthony Iliopoulos (1):
dma-debug: fix debugfs initialization order
Arnd Bergmann (2):
ethtool: improve compat ioctl handling
m68knommu: only set CONFIG_ISA_DMA_API for ColdFire sub-arch
Arne Welzel (1):
dm crypt: Avoid percpu_counter spinlock contention in
crypt_page_alloc()
Aurabindo Pillai (1):
drm/amd/display: Update number of DCN3 clock states
Bart Van Assche (1):
scsi: ufs: Fix memory corruption by ufshcd_read_desc_param()
Bob Peterson (2):
gfs2: Fix glock recursion in freeze_go_xmote_bh
gfs2: Don't call dlm after protocol is unmounted
Boris Brezillon (1):
drm/panfrost: Make sure MMU context lifetime is not bound to
panfrost_priv
Brandon Wyman (1):
hwmon: (pmbus/ibm-cffps) Fix write bits for LED control
Brijesh Singh (1):
crypto: ccp - shutdown SEV firmware on kexec
Cezary Rojewski (1):
ASoC: Intel: Skylake: Fix module configuration for KPB and MIXER
Chao Yu (5):
f2fs: fix to do sanity check for sb/cp fields correctly
f2fs: quota: fix potential deadlock
f2fs: fix to account missing .skipped_gc_rwsem
f2fs: fix unexpected ENOENT comes from f2fs_map_blocks()
f2fs: fix to unmap pages from userspace process in punch_hole()
Chengfeng Ye (1):
selftests/bpf: Fix potential unreleased lock
Chin-Yen Lee (2):
rtw88: use read_poll_timeout instead of fixed sleep
rtw88: wow: fix size access error of probe request
Chris Chiu (1):
rtl8xxxu: Fix the handling of TX A-MPDU aggregation
Christoph Hellwig (1):
scsi: bsg: Remove support for SCSI_IOCTL_SEND_COMMAND
Christophe JAILLET (1):
staging: ks7010: Fix the initialization of the 'sleep_status'
structure
Codrin Ciubotariu (1):
clk: at91: clk-generated: Limit the requested rate to our range
Colin Ian King (3):
ceph: fix dereference of null pointer cf
scsi: BusLogic: Use %X for u32 sized integer rather than %lX
parport: remove non-zero check on count
Damien Le Moal (1):
block: bfq: fix bfq_set_next_ioprio_data()
Dan Carpenter (3):
scsi: smartpqi: Fix an error code in pqi_get_raid_map()
scsi: qedi: Fix error codes in qedi_alloc_global_queues()
scsi: qedf: Fix error codes in qedf_alloc_global_queues()
Darrick J. Wong (1):
iomap: pass writeback errors to the mapping
David Heidelberg (4):
ARM: 9105/1: atags_to_fdt: don't warn about stack size
ARM: dts: qcom: apq8064: correct clock names
drm/msm: mdp4: drop vblank get/put from prepare/complete_commit
drm/msi/mdp4: populate priv->kms in mdp4_kms_init
David Howells (1):
fscache: Fix cookie key hashing
David Laight (1):
fs/io_uring Don't use the return value from import_iovec().
Desmond Cheong Zhi Xi (8):
btrfs: reset replace target device to allocation state on close
drm: avoid blocking in drm_clients_info's rcu section
drm: serialize drm_file.master with a new spinlock
drm: protect drm_master pointers in drm_lease.c
Bluetooth: skip invalid hci_sync_conn_complete_evt
drm/vmwgfx: fix potential UAF in vmwgfx_surface.c
Bluetooth: schedule SCO timeouts with delayed_work
Bluetooth: avoid circular locks in sco_sock_connect
Ding Hui (1):
cifs: fix wrong release in sess_alloc_buffer() failed path
Dinghao Liu (1):
media: atomisp: Fix runtime PM imbalance in atomisp_pci_probe
Dinh Nguyen (3):
clk: socfpga: agilex: fix the parents of the psi_ref_clk
clk: socfpga: agilex: fix up s2f_user0_clk representation
clk: socfpga: agilex: add the bypass register for s2f_usr0 clock
Dmitry Osipenko (2):
rtc: tps65910: Correct driver module alias
ARM: tegra: acer-a500: Remove bogus USB VBUS regulators
Dmitry Torokhov (1):
HID: input: do not report stylus battery state as "full"
Dom Cobley (1):
drm/vc4: hdmi: Set HD_CTL_WHOLSMP and HD_CTL_CHALIGN_SET
Eli Cohen (1):
net: Fix offloading indirect devices dependency on qdisc order
creation
Eran Ben Elisha (1):
net/mlx5: Fix variable type to match 64bit
Evan Wang (1):
PCI: aardvark: Fix checking for PIO status
Evgeny Novikov (3):
USB: EHCI: ehci-mv: improve error handling in mv_ehci_enable()
media: platform: stm32: unprepare clocks at handling errors in probe
media: tegra-cec: Handle errors of clk_prepare_enable()
Ezequiel Garcia (1):
media: hantro: vp8: Move noisy WARN_ON to vpu_debug
Fabiano Rosas (1):
KVM: PPC: Book3S HV: Fix copy_tofrom_guest routines
Gautham R. Shenoy (1):
cpuidle: pseries: Fixup CEDE0 latency only for POWER10 onwards
Geert Uytterhoeven (2):
staging: board: Fix uninitialized spinlock when attaching genpd
drm/bridge: nwl-dsi: Avoid potential multiplication overflow on 32-bit
Georgi Djakov (1):
arm64: dts: qcom: sm8250: Fix epss_l3 unit address
Greg Kroah-Hartman (1):
serial: 8250_pci: make setup_port() parameters explicitly unsigned
Gustavo A. R. Silva (2):
ipv4: ip_output.c: Fix out-of-bounds warning in ip_copy_addrs()
flow_dissector: Fix out-of-bounds warnings
Gustaw Lewandowski (1):
ASoC: Intel: Skylake: Fix passing loadable flag for module
Haimin Zhang (1):
fix array-index-out-of-bounds in taprio_change
Halil Pasic (1):
s390/pv: fix the forcing of the swiotlb
Hans Verkuil (1):
media: v4l2-dv-timings.c: fix wrong condition in two for-loops
Hans de Goede (3):
libata: add ATA_HORKAGE_NO_NCQ_TRIM for Samsung 860 and 870 SSDs
platform/x86: dell-smbios-wmi: Add missing kfree in error-exit from
run_smbios_call
ASoC: Intel: bytcr_rt5640: Move "Platform Clock" routes to the maps
for the matching in-/output
Harshvardhan Jha (1):
9p/xen: Fix end of loop tests for list_for_each_entry
Heiko Carstens (1):
s390/jump_label: print real address in a case of a jump label bug
Hyun Kwon (1):
PCI: xilinx-nwl: Enable the clock through CCF
Ilan Peer (1):
iwlwifi: mvm: Fix scan channel flags settings
Iwona Winiarska (2):
soc: aspeed: lpc-ctrl: Fix boundary check for mmap
soc: aspeed: p2a-ctrl: Fix boundary check for mmap
J. Bruce Fields (3):
rpc: fix gss_svc_init cleanup on failure
lockd: lockd server-side shouldn't set fl_ops
nfsd: fix crash on LOCKT on reexported NFSv3
Jack Pham (1):
usb: gadget: composite: Allow bMaxPower=0 if self-powered
Jaegeuk Kim (2):
f2fs: deallocate compressed pages when error happens
f2fs: should put a page beyond EOF when preparing a write
Jaehyoung Choi (1):
pinctrl: samsung: Fix pinctrl bank pin count
Jan Hoffmann (1):
net: dsa: lantiq_gswip: fix maximum frame length
Jason Gunthorpe (1):
vfio: Use config not menuconfig for VFIO_NOIOMMU
Jens Axboe (1):
io-wq: fix wakeup race when adding new work
Jernej Skrabec (1):
arm64: dts: allwinner: h6: tanix-tx6: Fix regulator node names
Jerry (Fangzhi) Zuo (1):
drm/amd/display: Update bounding box states (v2)
Jianjun Wang (1):
PCI: Export pci_pio_to_address() for module use
Jim Broadus (1):
HID: i2c-hid: Fix Elan touchpad regression
Jiri Slaby (2):
xtensa: ISS: don't panic in rs_init
hvsi: don't panic on tty_register_driver failure
Joel Stanley (1):
powerpc/config: Renable MTD_PHYSMAP_OF
Johan Almbladh (3):
bpf/tests: Fix copy-and-paste error in double word test
bpf/tests: Do not PASS tests without actually testing the result
mac80211: Fix monitor MTU limit so that A-MSDUs get through
Johannes Berg (4):
iwlwifi: pcie: free RBs during configure
iwlwifi: mvm: avoid static queue number aliasing
iwlwifi: mvm: fix access to BSS elements
iwlwifi: fw: correctly limit to monitor dump
Jonathan Cameron (1):
iio: dac: ad5624r: Fix incorrect handling of an optional regulator.
Josef Bacik (1):
btrfs: wake up async_delalloc_pages waiters after submit
Joseph Gates (1):
wcn36xx: Ensure finish scan is not requested before start scan
Juergen Gross (1):
xen: fix setting of max_pfn in shared_info
Juhee Kang (1):
samples: bpf: Fix tracex7 error raised on the missing argument
Julian Wiedmann (2):
s390/qdio: fix roll-back after timeout on ESTABLISH ccw
s390/qdio: cancel the ESTABLISH ccw after timeout
Jussi Maki (1):
selftests/bpf: Fix xdp_tx.c prog section name
Kajol Jain (1):
powerpc/perf/hv-gpci: Fix counter value parsing
Kees Cook (2):
staging: rts5208: Fix get_ms_information() heap buffer size
lib/test_stackinit: Fix static initializer test
Kelly Devilliv (2):
usb: host: fotg210: fix the endpoint's transactional opportunities
calculation
usb: host: fotg210: fix the actual_length of an iso packet
Konrad Dybcio (1):
drm/msm/dsi: Fix DSI and DSI PHY regulator config from SDM660
Krzysztof Hałasa (1):
media: TDA1997x: fix tda1997x_query_dv_timings() return value
Krzysztof Kozlowski (1):
power: supply: max17042: handle fails of reading status register
Krzysztof Wilczyński (1):
PCI: Return ~0 data on pciconfig_read() CAP_SYS_ADMIN failure
Kuogee Hsieh (1):
drm/msm/dp: return correct edid checksum after corrupted edid checksum
read
Laurent Dufour (1):
powerpc/numa: Consider the max NUMA node for migratable LPAR
Laurent Pinchart (1):
media: imx258: Rectify mismatch of VTS value
Laurentiu Tudor (1):
bus: fsl-mc: fix mmio base address for child DPRCs
Leon Romanovsky (4):
RDMA/iwcm: Release resources if iw_cm module initialization fails
docs: Fix infiniband uverbs minor number
RDMA/efa: Remove double QP type assignment
RDMA/mlx5: Delete not-available udata check
Li Jun (1):
usb: chipidea: host: fix port index underflow and UBSAN complains
Li Zhijian (2):
selftests/bpf: Enlarge select() timeout for test_maps
mm/hmm: bypass devmap pte when all pfn requested flags are fulfilled
Liu Zixian (1):
mm/hugetlb: initialize hugetlb_usage in mm_init
Loic Poulain (1):
wcn36xx: Fix missing frame timestamp for beacon/probe-resp
Lu Baolu (1):
iommu/vt-d: Update the virtual command related registers
Luben Tuikov (1):
drm/amdgpu: Fix amdgpu_ras_eeprom_init()
Luiz Augusto von Dentz (1):
Bluetooth: Fix handling of LE Enhanced Connection Complete
Luke Hsiao (1):
tcp: enable data-less, empty-cookie SYN with
TFO_SERVER_COOKIE_NOT_REQD
Maciej W. Rozycki (2):
serial: 8250: Define RX trigger levels for OxSemi 950 devices
scsi: BusLogic: Fix missing pr_cont() use
Maciej Żenczykowski (1):
usb: gadget: u_ether: fix a potential null pointer dereference
Manish Narani (2):
mmc: sdhci-of-arasan: Modified SD default speed to 19MHz for ZynqMP
mmc: sdhci-of-arasan: Check return value of non-void funtions
Manivannan Sadhasivam (1):
soc: qcom: aoss: Fix the out of bound usage of cooling_devs
Marc Zyngier (2):
pinctrl: stmfx: Fix hazardous u8[] to unsigned long cast
of: Don't allow __of_attached_node_sysfs() without CONFIG_SYSFS
Marcos Paulo de Souza (1):
btrfs: tree-log: check btrfs_lookup_data_extent return value
Marek Behún (2):
PCI: Restrict ASMedia ASM1062 SATA Max Payload Size Supported
pinctrl: armada-37xx: Correct PWM pins definitions
Marek Marczykowski-Górecki (1):
PCI/MSI: Skip masking MSI-X on Xen PV
Marek Vasut (4):
net: phy: Fix data type in DP83822 dp8382x_disable_wol()
ARM: dts: stm32: Set {bitclock,frame}-master phandles on DHCOM SoM
ARM: dts: stm32: Set {bitclock,frame}-master phandles on ST DKx
ARM: dts: stm32: Update AV96 adv7513 node per dtbs_check
Mark Brown (2):
kselftest/arm64: mte: Fix misleading output when skipping tests
kselftest/arm64: pac: Fix skipping of tests on systems without PAC
Mark Rutland (1):
arm64: head: avoid over-mapping in map_memory
Martynas Pumputis (2):
libbpf: Fix reuse of pinned map on older kernel
libbpf: Fix race when pinning maps in parallel
Masahiro Yamada (1):
kbuild: Fix 'no symbols' warning when CONFIG_TRIM_UNUSD_KSYMS=y
Mathias Nyman (1):
Revert "USB: xhci: fix U1/U2 handling for hardware with
XHCI_INTEL_HOST quirk set"
Mauro Carvalho Chehab (2):
media: uvc: don't do DMA on stack
media: dib8000: rewrite the init prbs logic
Miaoqing Pan (1):
ath9k: fix sleeping in atomic context
Michal Suchanek (1):
powerpc/stacktrace: Include linux/delay.h
Mike Kravetz (1):
hugetlb: fix hugetlb cgroup refcounting during vma split
Mike Marciniszyn (1):
IB/hfi1: Adjust pkey entry in index 0
Mikulas Patocka (1):
parisc: fix crash with signals and alloca
Nadav Amit (1):
userfaultfd: prevent concurrent API initialization
Nadezda Lutovinova (1):
usb: musb: musb_dsps: request_irq() after initializing musb
Nathan Chancellor (3):
cpuidle: pseries: Mark pseries_idle_proble() as __init
net: ethernet: stmmac: Do not use unreachable() in
ipq806x_gmac_probe()
drm/exynos: Always initialize mapping in exynos_drm_register_dma()
Nicholas Piggin (1):
KVM: PPC: Book3S HV Nested: Reflect guest PMU in-use to L0 when guest
SPRs are live
Nicolas Ferre (1):
ARM: dts: at91: use the right property for shutdown controller
Niklas Cassel (2):
blk-zoned: allow zone management send operations without CAP_SYS_ADMIN
blk-zoned: allow BLKREPORTZONE without CAP_SYS_ADMIN
Niklas Schnelle (1):
s390: make PCI mio support a machine flag
Niklas Söderlund (1):
nfp: fix return statement in nfp_net_parse_meta()
Nishad Kamdar (1):
mmc: core: Return correct emmc response in case of ioctl error
Nuno Sá (1):
iio: ltc2983: fix device probe
Oak Zeng (1):
drm/amdgpu: Fix a printing message
Oleksij Rempel (1):
MIPS: Malta: fix alignment of the devicetree buffer
Olga Kornievskaia (1):
SUNRPC query transport's source port
Oliver Logush (1):
drm/amd/display: Fix timer_per_pixel unit error
Pali Rohár (2):
PCI: aardvark: Configure PCIe resources from 'ranges' DT property
PCI: aardvark: Fix masking and unmasking legacy INTx interrupts
Patryk Duda (1):
platform/chrome: cros_ec_proto: Send command again when timeout occurs
Paul Cercueil (1):
pinctrl: ingenic: Fix incorrect pull up/down info
Pavel Begunkov (5):
io_uring: limit fixed table size by RLIMIT_NOFILE
io_uring: place fixed tables under memcg limits
io_uring: add ->splice_fd_in checks
io_uring: fail links of cancelled timeouts
io_uring: remove duplicated io_size from rw
Peter Geis (1):
clk: rockchip: drop GRF dependency for rk3328/rk3036 pll types
Pierre-Louis Bossart (2):
ASoC: Intel: update sof_pcm512x quirks
soundwire: intel: fix potential race condition during power down
Ping-Ke Shih (1):
rtw88: wow: build wow function only if CONFIG_PM is on
Pratik R. Sampat (1):
cpufreq: powernv: Fix init_chip_info initialization in numa=off
Quanyang Wang (2):
drm: xlnx: zynqmp_dpsub: Call pm_runtime_get_sync before setting pixel
clock
drm: xlnx: zynqmp: release reset to DP controller before accessing DP
registers
Raag Jadav (1):
arm64: dts: ls1046a: fix eeprom entries
Rafael J. Wysocki (1):
PCI: Use pci_update_current_state() in pci_enable_device_flags()
Rajendra Nayak (2):
nvmem: qfprom: Fix up qfprom_disable_fuse_blowing() ordering
opp: Don't print an error if required-opps is missing
Rajkumar Subbiah (1):
drm/dp_mst: Fix return code on sideband message failure
Randy Dunlap (2):
openrisc: don't printk() unconditionally
ASoC: atmel: ATMEL drivers don't need HAS_DMA
Rik van Riel (1):
mm,vmscan: fix divide by zero in get_scan_count
Robin Gong (2):
Revert "dmaengine: imx-sdma: refine to load context only once"
dmaengine: imx-sdma: remove duplicated sdma_load_context
Rolf Eike Beer (1):
tools/thermal/tmon: Add cross compiling support
Roy Chan (2):
drm/amd/display: fix missing writeback disablement if plane is removed
drm/amd/display: fix incorrect CM/TF programming sequence in dwb
Sagi Grimberg (2):
nvme-tcp: don't check blk_mq_tag_to_rq when receiving pdu data
nvme: code command_id with a genctr for use-after-free validation
Sanjay R Mehta (1):
thunderbolt: Fix port linking by checking all adapters
Sasha Neftin (1):
igc: Check if num of q_vectors is smaller than max before array access
Saurav Kashyap (2):
scsi: qla2xxx: Changes to support kdump kernel
scsi: qla2xxx: Sync queue idx with queue_pair_map idx
Sean Anderson (1):
crypto: mxs-dcp - Use sg_mapping_iter to copy data
Sean Keely (1):
drm/amdkfd: Account for SH/SE count when setting up cu masks.
Sean Young (1):
media: rc-loopback: return number of emitters rather than error
Sebastian Reichel (1):
ARM: dts: imx53-ppd: Fix ACHC entry
Shuah Khan (2):
selftests: firmware: Fix ignored return val of asprintf() warn
usbip:vhci_hcd USB port can get stuck in the disabled state
Srikar Dronamraju (1):
powerpc/smp: Update cpu_core_map on all PowerPc systems
Stefan Assmann (2):
iavf: do not override the adapter state in the watchdog task
iavf: fix locking of critical sections
Steven Rostedt (VMware) (1):
selftests/ftrace: Fix requirement check of README file
Stuart Hayes (1):
PCI/portdrv: Enable Bandwidth Notification only if port supports it
Subbaraya Sundeep (1):
octeontx2-pf: Fix NIX1_RX interface backpressure
Sugar Zhang (1):
ASoC: rockchip: i2s: Fix regmap_ops hang
Thierry Reding (1):
arm64: tegra: Fix compatible string for Tegra132 CPUs
Thomas Hebb (1):
mmc: rtsx_pci: Fix long reads when clock is prescaled
Thomas Zimmermann (1):
drm/mgag200: Select clock in PLL update functions
Tianjia Zhang (1):
Smack: Fix wrong semantics in smk_access_entry()
Tony Lindgren (1):
serial: 8250_omap: Handle optional overrun-throttle-ms property
Trond Myklebust (5):
NFSv4/pNFS: Fix a layoutget livelock loop
NFSv4/pNFS: Always allow update of a zero valued layout barrier
NFSv4/pnfs: The layout barrier indicate a minimal value for the seqid
SUNRPC: Fix potential memory corruption
SUNRPC/xprtrdma: Fix reconnection locking
Tuo Li (2):
gpu: drm: amd: amdgpu: amdgpu_i2c: fix possible uninitialized-variable
access in amdgpu_i2c_router_select_ddc_port()
drm/display: fix possible null-pointer dereference in
dcn10_set_clock()
Ulrich Hecht (1):
serial: sh-sci: fix break handling for sysrq
Umang Jain (1):
media: imx258: Limit the max analogue gain to 480
Vidya Sagar (1):
arm64: tegra: Fix Tegra194 PCIe EP compatible string
Vinod Koul (6):
arm64: dts: qcom: ipq8074: fix pci node reg property
arm64: dts: qcom: sdm660: use reg value for memory node
arm64: dts: qcom: ipq6018: drop '0x' from unit address
arm64: dts: qcom: sdm630: don't use underscore in node name
arm64: dts: qcom: msm8994: don't use underscore in node name
arm64: dts: qcom: msm8996: don't use underscore in node name
Wang Hai (1):
VMCI: fix NULL pointer dereference when unmapping queue pair
Wei Li (1):
scsi: fdomain: Fix error return code in fdomain_probe()
Wenpeng Liang (1):
RDMA/hns: Fix QP's resp incomplete assignment
Wentao_Liang (1):
net/mlx5: DR, fix a potential use-after-free bug
Will Deacon (1):
arm64: mm: Fix TLBI vs ASID rollover
Xiaotan Luo (1):
ASoC: rockchip: i2s: Fixup config for DAIFMT_DSP_A/B
Xin Long (1):
tipc: keep the skb in rcv queue until the whole data is read
Yajun Deng (1):
netlink: Deal with ESRCH error in nlmsg_notify()
Yang Yingliang (2):
media: atomisp: pci: fix error return code in atomisp_pci_probe()
net: w5100: check return value after calling platform_get_resource()
Yangtao Li (1):
f2fs: reduce the scope of setting fsck tag when de->name_len is zero
Yevgeny Kliteynik (1):
net/mlx5: DR, Enable QP retransmission
Yonghong Song (1):
selftests/bpf: Fix flaky send_signal test
Yongqiang Niu (1):
soc: mediatek: cmdq: add address shift in jump
Yufeng Mo (1):
bonding: 3ad: fix the concurrency between __bond_release_one() and
bond_3ad_state_machine_handler()
Zekun Shen (1):
ath9k: fix OOB read ar9300_eeprom_restore_internal
Zhang Qilong (1):
iwlwifi: mvm: fix a memory leak in iwl_mvm_mac_ctxt_beacon_changed
Zhaoyu Liu (1):
pinctrl: remove empty lines in pinctrl subsystem
Zhen Lei (2):
pinctrl: single: Fix error return code in
pcs_parse_bits_in_pinctrl_entry()
workqueue: Fix possible memory leaks in wq_numa_init()
Zheyu Ma (5):
video: fbdev: kyro: fix a DoS bug by restricting user input
tty: serial: jsm: hold port lock when reporting modem line changes
video: fbdev: asiliantfb: Error out if 'pixclock' equals zero
video: fbdev: kyro: Error out if 'pixclock' equals zero
video: fbdev: riva: Error out if 'pixclock' equals zero
Zhouyi Zhou (1):
rcu: Fix macro name CONFIG_TASKS_RCU_TRACE
chenying (1):
ovl: fix BUG_ON() in may_delete() when called from ovl_cleanup()
sumiyawang (1):
libnvdimm/pmem: Fix crash triggered when I/O in-flight during unbind
zhenwei pi (1):
crypto: public_key: fix overflow during implicit conversion
王贇 (1):
net: fix NULL pointer reference in cipso_v4_doi_free
Documentation/admin-guide/devices.txt | 6 +-
.../pinctrl/marvell,armada-37xx-pinctrl.txt | 8 +-
arch/arm/boot/compressed/Makefile | 2 +
arch/arm/boot/dts/at91-kizbox3_common.dtsi | 2 +-
arch/arm/boot/dts/at91-sam9x60ek.dts | 2 +-
arch/arm/boot/dts/at91-sama5d27_som1_ek.dts | 2 +-
arch/arm/boot/dts/at91-sama5d27_wlsom1_ek.dts | 2 +-
arch/arm/boot/dts/at91-sama5d2_icp.dts | 2 +-
arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts | 2 +-
arch/arm/boot/dts/at91-sama5d2_xplained.dts | 2 +-
arch/arm/boot/dts/imx53-ppd.dts | 23 +-
arch/arm/boot/dts/qcom-apq8064.dtsi | 6 +-
arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi | 8 +-
.../boot/dts/stm32mp15xx-dhcor-avenger96.dtsi | 6 +-
arch/arm/boot/dts/stm32mp15xx-dkx.dtsi | 8 +-
.../boot/dts/tegra20-acer-a500-picasso.dts | 25 +-
arch/arm/boot/dts/tegra20-tamonten.dtsi | 14 +-
.../dts/allwinner/sun50i-h6-tanix-tx6.dts | 4 +-
.../boot/dts/freescale/fsl-ls1046a-frwy.dts | 8 +-
.../boot/dts/freescale/fsl-ls1046a-rdb.dts | 7 +-
arch/arm64/boot/dts/nvidia/tegra132.dtsi | 4 +-
arch/arm64/boot/dts/nvidia/tegra194.dtsi | 6 +-
arch/arm64/boot/dts/qcom/ipq6018.dtsi | 2 +-
arch/arm64/boot/dts/qcom/ipq8074-hk01.dts | 2 +-
arch/arm64/boot/dts/qcom/ipq8074.dtsi | 16 +-
arch/arm64/boot/dts/qcom/msm8994.dtsi | 6 +-
arch/arm64/boot/dts/qcom/msm8996.dtsi | 4 +-
arch/arm64/boot/dts/qcom/sdm630.dtsi | 257 ++++++++++-------
arch/arm64/boot/dts/qcom/sm8250.dtsi | 2 +-
arch/arm64/include/asm/kernel-pgtable.h | 4 +-
arch/arm64/include/asm/mmu.h | 29 +-
arch/arm64/include/asm/tlbflush.h | 11 +-
arch/arm64/kernel/head.S | 11 +-
arch/m68k/Kconfig.bus | 2 +-
arch/mips/mti-malta/malta-dtshim.c | 2 +-
arch/openrisc/kernel/entry.S | 2 +
arch/parisc/kernel/signal.c | 6 +
arch/powerpc/configs/mpc885_ads_defconfig | 1 +
arch/powerpc/include/asm/pmc.h | 7 +
arch/powerpc/kernel/smp.c | 11 +-
arch/powerpc/kernel/stacktrace.c | 1 +
arch/powerpc/kvm/book3s_64_mmu_radix.c | 6 +-
arch/powerpc/kvm/book3s_64_vio_hv.c | 9 +-
arch/powerpc/kvm/book3s_hv.c | 20 ++
arch/powerpc/mm/numa.c | 13 +-
arch/powerpc/perf/hv-gpci.c | 2 +-
arch/s390/include/asm/setup.h | 2 +
arch/s390/kernel/early.c | 4 +
arch/s390/kernel/jump_label.c | 2 +-
arch/s390/mm/init.c | 2 +-
arch/s390/pci/pci.c | 5 +-
arch/x86/kernel/cpu/mshyperv.c | 9 +-
arch/x86/xen/p2m.c | 4 +-
arch/xtensa/platforms/iss/console.c | 17 +-
block/bfq-iosched.c | 2 +-
block/blk-zoned.c | 6 -
block/bsg.c | 5 +-
drivers/ata/libata-core.c | 4 +
drivers/ata/sata_dwc_460ex.c | 12 +-
drivers/bus/fsl-mc/fsl-mc-bus.c | 24 +-
drivers/clk/at91/clk-generated.c | 6 +
drivers/clk/imx/clk-composite-8m.c | 3 +-
drivers/clk/imx/clk-imx8mm.c | 7 +-
drivers/clk/imx/clk-imx8mn.c | 7 +-
drivers/clk/imx/clk-imx8mq.c | 7 +-
drivers/clk/imx/clk.h | 16 +-
drivers/clk/rockchip/clk-pll.c | 2 +-
drivers/clk/socfpga/clk-agilex.c | 19 +-
drivers/cpufreq/powernv-cpufreq.c | 16 +-
drivers/cpuidle/cpuidle-pseries.c | 18 +-
drivers/crypto/ccp/sev-dev.c | 49 ++--
drivers/crypto/ccp/sp-pci.c | 12 +
drivers/crypto/mxs-dcp.c | 36 +--
drivers/dma/imx-sdma.c | 13 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 2 +-
.../gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c | 2 +-
drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c | 2 +-
drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c | 2 +-
drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c | 2 +-
drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c | 2 +-
drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c | 84 ++++--
drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h | 1 +
.../amd/display/amdgpu_dm/amdgpu_dm_debugfs.c | 16 +-
.../amd/display/dc/dcn10/dcn10_hw_sequencer.c | 11 +-
.../drm/amd/display/dc/dcn20/dcn20_hwseq.c | 14 +-
.../drm/amd/display/dc/dcn20/dcn20_resource.c | 2 +-
.../drm/amd/display/dc/dcn30/dcn30_dwb_cm.c | 90 ++++--
.../drm/amd/display/dc/dcn30/dcn30_hwseq.c | 12 +-
.../drm/amd/display/dc/dcn30/dcn30_resource.c | 42 ++-
drivers/gpu/drm/bridge/nwl-dsi.c | 2 +-
drivers/gpu/drm/drm_auth.c | 42 ++-
drivers/gpu/drm/drm_debugfs.c | 3 +-
drivers/gpu/drm/drm_dp_mst_topology.c | 10 +-
drivers/gpu/drm/drm_file.c | 1 +
drivers/gpu/drm/drm_lease.c | 81 ++++--
drivers/gpu/drm/exynos/exynos_drm_dma.c | 2 +
drivers/gpu/drm/mgag200/mgag200_drv.h | 16 ++
drivers/gpu/drm/mgag200/mgag200_mode.c | 20 +-
drivers/gpu/drm/mgag200/mgag200_reg.h | 9 +-
drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 17 +-
drivers/gpu/drm/msm/dp/dp_panel.c | 9 +-
drivers/gpu/drm/msm/dsi/dsi_cfg.c | 1 -
drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c | 2 +-
drivers/gpu/drm/panfrost/panfrost_device.h | 8 +-
drivers/gpu/drm/panfrost/panfrost_drv.c | 50 +---
drivers/gpu/drm/panfrost/panfrost_gem.c | 20 +-
drivers/gpu/drm/panfrost/panfrost_job.c | 4 +-
drivers/gpu/drm/panfrost/panfrost_mmu.c | 191 ++++++++-----
drivers/gpu/drm/panfrost/panfrost_mmu.h | 5 +-
drivers/gpu/drm/panfrost/panfrost_regs.h | 2 +
drivers/gpu/drm/vc4/vc4_hdmi.c | 4 +-
drivers/gpu/drm/vmwgfx/vmwgfx_surface.c | 4 +-
drivers/gpu/drm/xlnx/zynqmp_disp.c | 3 +-
drivers/gpu/drm/xlnx/zynqmp_dp.c | 22 +-
drivers/hid/hid-input.c | 2 -
drivers/hid/i2c-hid/i2c-hid-core.c | 5 +-
drivers/hwmon/pmbus/ibm-cffps.c | 6 +-
drivers/iio/dac/ad5624r_spi.c | 18 +-
drivers/iio/temperature/ltc2983.c | 30 +-
drivers/infiniband/core/iwcm.c | 19 +-
drivers/infiniband/hw/efa/efa_verbs.c | 1 -
drivers/infiniband/hw/hfi1/init.c | 7 +-
drivers/infiniband/hw/hns/hns_roce_qp.c | 3 +-
drivers/infiniband/hw/mlx5/qp.c | 3 -
drivers/iommu/intel/pasid.h | 10 +-
drivers/mailbox/mtk-cmdq-mailbox.c | 3 +-
drivers/md/dm-crypt.c | 7 +-
drivers/media/cec/platform/stm32/stm32-cec.c | 26 +-
drivers/media/cec/platform/tegra/tegra_cec.c | 10 +-
drivers/media/dvb-frontends/dib8000.c | 58 ++--
drivers/media/i2c/imx258.c | 4 +-
drivers/media/i2c/tda1997x.c | 5 +-
drivers/media/rc/rc-loopback.c | 2 +-
drivers/media/usb/uvc/uvc_v4l2.c | 34 ++-
drivers/media/v4l2-core/v4l2-dv-timings.c | 4 +-
drivers/misc/vmw_vmci/vmci_queue_pair.c | 6 +-
drivers/mmc/core/block.c | 3 +-
drivers/mmc/host/rtsx_pci_sdmmc.c | 36 ++-
drivers/mmc/host/sdhci-of-arasan.c | 36 ++-
drivers/net/bonding/bond_main.c | 3 +-
drivers/net/dsa/lantiq_gswip.c | 3 +-
drivers/net/ethernet/intel/iavf/iavf_main.c | 58 +++-
drivers/net/ethernet/intel/igc/igc_main.c | 9 +-
.../marvell/octeontx2/nic/otx2_common.c | 15 +
drivers/net/ethernet/mellanox/mlx5/core/cmd.c | 8 +-
.../mellanox/mlx5/core/steering/dr_rule.c | 2 +-
.../mellanox/mlx5/core/steering/dr_send.c | 1 +
.../ethernet/netronome/nfp/nfp_net_common.c | 2 +-
.../ethernet/stmicro/stmmac/dwmac-ipq806x.c | 18 +-
drivers/net/ethernet/wiznet/w5100.c | 2 +
drivers/net/phy/dp83822.c | 8 +-
.../net/wireless/ath/ath9k/ar9003_eeprom.c | 3 +-
drivers/net/wireless/ath/ath9k/hw.c | 12 +-
drivers/net/wireless/ath/wcn36xx/main.c | 5 +-
drivers/net/wireless/ath/wcn36xx/txrx.c | 4 +
drivers/net/wireless/ath/wcn36xx/wcn36xx.h | 1 +
drivers/net/wireless/intel/iwlwifi/fw/dbg.c | 2 +-
.../net/wireless/intel/iwlwifi/mvm/mac-ctxt.c | 4 +-
.../net/wireless/intel/iwlwifi/mvm/mac80211.c | 8 +-
drivers/net/wireless/intel/iwlwifi/mvm/ops.c | 24 +-
drivers/net/wireless/intel/iwlwifi/mvm/scan.c | 2 +-
drivers/net/wireless/intel/iwlwifi/mvm/sta.c | 30 +-
drivers/net/wireless/intel/iwlwifi/pcie/rx.c | 5 +-
.../net/wireless/intel/iwlwifi/pcie/trans.c | 3 +
.../net/wireless/realtek/rtl8xxxu/rtl8xxxu.h | 2 +
.../wireless/realtek/rtl8xxxu/rtl8xxxu_core.c | 33 ++-
drivers/net/wireless/realtek/rtw88/Makefile | 2 +-
drivers/net/wireless/realtek/rtw88/fw.c | 8 +-
drivers/net/wireless/realtek/rtw88/fw.h | 1 +
drivers/net/wireless/realtek/rtw88/wow.c | 21 +-
drivers/nvdimm/pmem.c | 4 +-
drivers/nvme/host/core.c | 3 +-
drivers/nvme/host/nvme.h | 47 +++-
drivers/nvme/host/pci.c | 2 +-
drivers/nvme/host/rdma.c | 4 +-
drivers/nvme/host/tcp.c | 38 +--
drivers/nvme/target/loop.c | 4 +-
drivers/nvmem/qfprom.c | 6 +-
drivers/of/kobj.c | 2 +-
drivers/opp/of.c | 12 +-
drivers/parport/ieee1284_ops.c | 2 +-
drivers/pci/controller/pci-aardvark.c | 266 +++++++++++++++++-
drivers/pci/controller/pcie-xilinx-nwl.c | 12 +
drivers/pci/msi.c | 3 +
drivers/pci/pci.c | 7 +-
drivers/pci/pcie/portdrv_core.c | 9 +-
drivers/pci/quirks.c | 1 +
drivers/pci/syscall.c | 4 +-
drivers/pinctrl/actions/pinctrl-owl.c | 1 -
drivers/pinctrl/core.c | 1 -
drivers/pinctrl/freescale/pinctrl-imx1-core.c | 1 -
drivers/pinctrl/mvebu/pinctrl-armada-37xx.c | 17 +-
drivers/pinctrl/pinctrl-at91.c | 1 -
drivers/pinctrl/pinctrl-ingenic.c | 6 +-
drivers/pinctrl/pinctrl-single.c | 1 +
drivers/pinctrl/pinctrl-st.c | 1 -
drivers/pinctrl/pinctrl-stmfx.c | 6 +-
drivers/pinctrl/pinctrl-sx150x.c | 1 -
drivers/pinctrl/qcom/pinctrl-sdm845.c | 1 -
drivers/pinctrl/qcom/pinctrl-ssbi-mpp.c | 1 -
drivers/pinctrl/renesas/pfc-r8a77950.c | 1 -
drivers/pinctrl/renesas/pfc-r8a77951.c | 1 -
drivers/pinctrl/renesas/pfc-r8a7796.c | 1 -
drivers/pinctrl/renesas/pfc-r8a77965.c | 1 -
drivers/pinctrl/samsung/pinctrl-samsung.c | 2 +-
drivers/platform/chrome/cros_ec_proto.c | 9 +
drivers/platform/x86/dell-smbios-wmi.c | 1 +
drivers/power/supply/max17042_battery.c | 6 +-
drivers/rtc/rtc-tps65910.c | 2 +-
drivers/s390/cio/qdio_main.c | 82 +++---
drivers/scsi/BusLogic.c | 6 +-
drivers/scsi/pcmcia/fdomain_cs.c | 4 +-
drivers/scsi/qedf/qedf_main.c | 10 +-
drivers/scsi/qedi/qedi_main.c | 14 +-
drivers/scsi/qla2xxx/qla_nvme.c | 5 +-
drivers/scsi/qla2xxx/qla_os.c | 6 +
drivers/scsi/smartpqi/smartpqi_init.c | 1 +
drivers/scsi/ufs/ufs-exynos.c | 4 +-
drivers/scsi/ufs/ufs-exynos.h | 2 +-
drivers/scsi/ufs/ufshcd.c | 8 +-
drivers/soc/aspeed/aspeed-lpc-ctrl.c | 2 +-
drivers/soc/aspeed/aspeed-p2a-ctrl.c | 2 +-
drivers/soc/qcom/qcom_aoss.c | 8 +-
drivers/soundwire/intel.c | 23 +-
drivers/staging/board/board.c | 7 +-
drivers/staging/ks7010/ks7010_sdio.c | 2 +-
.../staging/media/atomisp/pci/atomisp_v4l2.c | 4 +-
.../staging/media/hantro/hantro_g1_vp8_dec.c | 13 +-
.../media/hantro/rk3399_vpu_hw_vp8_dec.c | 13 +-
drivers/staging/rts5208/rtsx_scsi.c | 10 +-
drivers/thunderbolt/switch.c | 2 +-
drivers/tty/hvc/hvsi.c | 19 +-
drivers/tty/serial/8250/8250_omap.c | 25 +-
drivers/tty/serial/8250/8250_pci.c | 2 +-
drivers/tty/serial/8250/8250_port.c | 3 +-
drivers/tty/serial/jsm/jsm_neo.c | 2 +
drivers/tty/serial/jsm/jsm_tty.c | 3 +
drivers/tty/serial/sh-sci.c | 7 +-
drivers/usb/chipidea/host.c | 14 +-
drivers/usb/gadget/composite.c | 8 +-
drivers/usb/gadget/function/u_ether.c | 5 +-
drivers/usb/host/ehci-mv.c | 23 +-
drivers/usb/host/fotg210-hcd.c | 41 ++-
drivers/usb/host/fotg210.h | 5 -
drivers/usb/host/xhci.c | 24 +-
drivers/usb/musb/musb_dsps.c | 13 +-
drivers/usb/usbip/vhci_hcd.c | 32 ++-
drivers/vfio/Kconfig | 2 +-
drivers/video/fbdev/asiliantfb.c | 3 +
drivers/video/fbdev/kyro/fbdev.c | 8 +
drivers/video/fbdev/riva/fbdev.c | 3 +
fs/btrfs/inode.c | 10 +-
fs/btrfs/tree-log.c | 4 +-
fs/btrfs/volumes.c | 3 +
fs/ceph/caps.c | 3 +
fs/cifs/sess.c | 2 +-
fs/f2fs/compress.c | 12 +-
fs/f2fs/data.c | 16 ++
fs/f2fs/dir.c | 14 +-
fs/f2fs/file.c | 4 +-
fs/f2fs/gc.c | 4 +-
fs/f2fs/super.c | 106 ++++---
fs/fscache/cookie.c | 14 +-
fs/fscache/internal.h | 2 +
fs/fscache/main.c | 39 +++
fs/gfs2/glops.c | 17 +-
fs/gfs2/lock_dlm.c | 5 +
fs/io-wq.c | 8 +-
fs/io_uring.c | 70 +++--
fs/iomap/buffered-io.c | 2 +-
fs/lockd/svclock.c | 30 +-
fs/nfs/pnfs.c | 16 +-
fs/nfsd/nfs4state.c | 5 +-
fs/notify/fanotify/fanotify.c | 6 +
fs/overlayfs/dir.c | 6 +-
fs/userfaultfd.c | 91 +++---
include/crypto/public_key.h | 4 +-
include/drm/drm_auth.h | 1 +
include/drm/drm_file.h | 18 +-
include/linux/ethtool.h | 4 -
include/linux/hugetlb.h | 9 +
include/linux/hugetlb_cgroup.h | 12 +
include/linux/intel-iommu.h | 6 +-
include/linux/rcupdate.h | 2 +-
include/linux/sunrpc/xprt.h | 1 +
include/linux/sunrpc/xprtsock.h | 1 +
include/net/flow_offload.h | 1 +
include/uapi/linux/serial_reg.h | 1 +
kernel/dma/debug.c | 7 +-
kernel/fork.c | 1 +
kernel/rcu/tree_plugin.h | 8 +-
kernel/workqueue.c | 12 +-
lib/test_bpf.c | 13 +-
lib/test_stackinit.c | 20 +-
mm/hmm.c | 5 +-
mm/hugetlb.c | 4 +-
mm/vmscan.c | 2 +-
net/9p/trans_xen.c | 4 +-
net/bluetooth/hci_event.c | 108 +++++--
net/bluetooth/sco.c | 74 +++--
net/core/flow_dissector.c | 12 +-
net/core/flow_offload.c | 89 +++++-
net/ethtool/ioctl.c | 136 +++++++--
net/ipv4/ip_output.c | 5 +-
net/ipv4/tcp_fastopen.c | 3 +-
net/mac80211/iface.c | 11 +-
net/netfilter/nf_flow_table_offload.c | 1 +
net/netfilter/nf_tables_offload.c | 1 +
net/netlabel/netlabel_cipso_v4.c | 4 +-
net/netlink/af_netlink.c | 4 +-
net/sched/cls_api.c | 1 +
net/sched/sch_taprio.c | 4 +-
net/socket.c | 125 +-------
net/sunrpc/auth_gss/svcauth_gss.c | 2 +-
net/sunrpc/xprt.c | 8 +-
net/sunrpc/xprtrdma/transport.c | 11 +-
net/sunrpc/xprtsock.c | 7 +
net/tipc/socket.c | 36 ++-
samples/bpf/test_override_return.sh | 1 +
samples/bpf/tracex7_user.c | 5 +
scripts/gen_ksymdeps.sh | 8 +-
security/smack/smack_access.c | 17 +-
sound/soc/atmel/Kconfig | 1 -
sound/soc/intel/boards/bytcr_rt5640.c | 9 +-
sound/soc/intel/boards/sof_pcm512x.c | 13 +-
sound/soc/intel/skylake/skl-messages.c | 11 +-
sound/soc/intel/skylake/skl-pcm.c | 25 +-
sound/soc/rockchip/rockchip_i2s.c | 35 ++-
tools/lib/bpf/libbpf.c | 63 ++++-
.../selftests/arm64/mte/mte_common_util.c | 2 +-
tools/testing/selftests/arm64/pauth/pac.c | 10 +-
.../selftests/bpf/prog_tests/send_signal.c | 16 ++
.../bpf/prog_tests/sockopt_inherit.c | 4 +-
tools/testing/selftests/bpf/progs/xdp_tx.c | 2 +-
tools/testing/selftests/bpf/test_maps.c | 2 +-
tools/testing/selftests/bpf/test_xdp_veth.sh | 2 +-
.../testing/selftests/firmware/fw_namespace.c | 3 +-
.../testing/selftests/ftrace/test.d/functions | 2 +-
tools/thermal/tmon/Makefile | 2 +-
340 files changed, 3158 insertions(+), 1635 deletions(-)
--
2.20.1
Backport bugfix patches for mm/fs from upstream.
Ard Biesheuvel (1):
arm64: mm: account for hotplug memory when randomizing the linear
region
Dan Carpenter (1):
crypto: ccp - fix resource leaks in ccp_run_aes_gcm_cmd()
Florian Fainelli (1):
ARM: Qualify enabling of swiotlb_init()
Guo Xuenan (3):
Revert "compiler: remove CONFIG_OPTIMIZE_INLINING entirely"
disable OPTIMIZE_INLINING by default
make OPTIMIZE_INLINING config editable
Johannes Weiner (1):
mm: vmscan: fix missing psi annotation for node_reclaim()
John Garry (1):
blk-mq-sched: Fix blk_mq_sched_alloc_tags() error handling
Lu Baolu (1):
iommu/vt-d: Fix clearing real DMA device's scalable-mode context
entries
Piotr Krysiuk (1):
bpf, mips: Validate conditional branch offsets
Rafael Aquini (1):
ipc: replace costly bailout check in sysvipc_find_ipc()
Sanjay Kumar (1):
iommu/vt-d: Global devTLB flush when present context entry changed
Vlastimil Babka (1):
mm: slub: fix slub_debug disabling for list of slabs
Xu Kuohai (1):
bpf: Fix integer overflow in prealloc_elems_and_freelist()
yangerkun (1):
ext4: flush s_error_work before journal destroy in ext4_fill_super
arch/arm/mm/init.c | 6 +-
arch/arm64/kvm/sys_regs.h | 5 ++
arch/arm64/mm/init.c | 13 +++--
arch/mips/net/bpf_jit.c | 57 ++++++++++++++-----
arch/x86/configs/i386_defconfig | 1 +
arch/x86/configs/x86_64_defconfig | 1 +
block/blk-mq-sched.c | 19 ++-----
drivers/crypto/ccp/ccp-ops.c | 14 +++--
.../gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c | 5 ++
.../gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.h | 5 +-
drivers/iommu/intel/iommu.c | 34 +++++++----
.../pci/hive_isp_css_include/print_support.h | 4 ++
fs/ext4/super.c | 5 +-
include/linux/compiler_types.h | 8 +++
ipc/util.c | 16 ++----
kernel/bpf/stackmap.c | 3 +-
kernel/configs/tiny.config | 1 +
lib/Kconfig.debug | 13 +++++
mm/slub.c | 13 +++--
mm/vmscan.c | 3 +
20 files changed, 154 insertions(+), 72 deletions(-)
--
2.20.1
Backport kfence feature from mainline.
Alexander Potapenko (10):
mm: add Kernel Electric-Fence infrastructure
x86, kfence: enable KFENCE for x86
mm, kfence: insert KFENCE hooks for SLAB
mm, kfence: insert KFENCE hooks for SLUB
kfence, kasan: make KFENCE compatible with KASAN
tracing: add error_report_end trace point
kfence: use error_report_end tracepoint
kasan: use error_report_end tracepoint
kfence: move the size check to the beginning of __kfence_alloc()
kfence: skip all GFP_ZONEMASK allocations
Christophe Leroy (2):
powerpc/32s: Always map kernel text and rodata with BATs
powerpc: Enable KFENCE for PPC32
Hyeonggon Yoo (1):
mm, slub: change run-time assertion in kmalloc_index() to compile-time
Jisheng Zhang (1):
arm64: mm: don't use CON and BLK mapping if KFENCE is enabled
Kefeng Wang (3):
ARM: mm: Provide set_memory_valid()
ARM: mm: Provide is_write_fault()
ARM: Support KFENCE for ARM
Marco Elver (22):
arm64, kfence: enable KFENCE for ARM64
kfence: use pt_regs to generate stack trace on faults
kfence, Documentation: add KFENCE documentation
kfence: add test suite
MAINTAINERS: add entry for KFENCE
kfence: report sensitive information based on no_hash_pointers
kfence: fix printk format for ptrdiff_t
kfence, slab: fix cache_alloc_debugcheck_after() for bulk allocations
kfence: fix reports if constant function prefixes exist
kfence: make compatible with kmemleak
kfence, x86: fix preemptible warning on KPTI-enabled systems
kfence: zero guard page after out-of-bounds access
kfence: await for allocation using wait_event
kfence: maximize allocation wait timeout duration
kfence: use power-efficient work queue to run delayed work
kfence: use TASK_IDLE when awaiting allocation
kfence: unconditionally use unbound work queue
kfence: fix is_kfence_address() for addresses below KFENCE_POOL_SIZE
kfence, x86: only define helpers if !MODULE
lib/vsprintf: do not show no_hash_pointers message multiple times
kfence: test: fail fast if disabled at boot
kfence: show cpu and timestamp in alloc/free info
Stephen Boyd (1):
slub: force on no_hash_pointers when slub_debug is enabled
Sven Schnelle (1):
kfence: add function to mask address bits
Timur Tabi (3):
lib: use KSTM_MODULE_GLOBALS macro in kselftest drivers
kselftest: add support for skipped tests
lib/vsprintf: no_hash_pointers prints all addresses as unhashed
Vlastimil Babka (1):
printk: clarify the documentation for plain pointer printing
Weizhao Ouyang (1):
kfence: defer kfence_test_init to ensure that kunit debugfs is created
.../admin-guide/kernel-parameters.txt | 15 +
Documentation/core-api/printk-formats.rst | 26 +-
Documentation/dev-tools/index.rst | 1 +
Documentation/dev-tools/kfence.rst | 306 ++++++
MAINTAINERS | 12 +
arch/arm/Kconfig | 1 +
arch/arm/include/asm/kfence.h | 53 ++
arch/arm/include/asm/set_memory.h | 5 +
arch/arm/mm/fault.c | 16 +-
arch/arm/mm/pageattr.c | 41 +-
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/kfence.h | 22 +
arch/arm64/mm/fault.c | 4 +
arch/arm64/mm/mmu.c | 11 +-
arch/powerpc/Kconfig | 13 +-
arch/powerpc/include/asm/kfence.h | 33 +
arch/powerpc/kernel/head_book3s_32.S | 4 +-
arch/powerpc/mm/book3s32/mmu.c | 8 +-
arch/powerpc/mm/fault.c | 7 +-
arch/powerpc/mm/init_32.c | 3 +
arch/powerpc/mm/mmu_decl.h | 5 +
arch/powerpc/mm/nohash/8xx.c | 4 +-
arch/x86/Kconfig | 1 +
arch/x86/include/asm/kfence.h | 73 ++
arch/x86/mm/fault.c | 6 +
include/linux/kernel.h | 2 +
include/linux/kfence.h | 223 +++++
include/linux/slab.h | 17 +-
include/linux/slab_def.h | 3 +
include/linux/slub_def.h | 3 +
include/trace/events/error_report.h | 74 ++
init/main.c | 3 +
kernel/trace/Makefile | 1 +
kernel/trace/error_report-traces.c | 11 +
lib/Kconfig.debug | 1 +
lib/Kconfig.kfence | 83 ++
lib/test_bitmap.c | 3 +-
lib/test_printf.c | 12 +-
lib/vsprintf.c | 46 +-
mm/Makefile | 1 +
mm/kasan/common.c | 18 +
mm/kasan/generic.c | 3 +-
mm/kasan/report.c | 8 +-
mm/kfence/Makefile | 6 +
mm/kfence/core.c | 891 ++++++++++++++++++
mm/kfence/kfence.h | 108 +++
mm/kfence/kfence_test.c | 873 +++++++++++++++++
mm/kfence/report.c | 274 ++++++
mm/kmemleak.c | 3 +-
mm/slab.c | 41 +-
mm/slab_common.c | 12 +-
mm/slub.c | 80 +-
tools/testing/selftests/kselftest_module.h | 18 +-
53 files changed, 3405 insertions(+), 84 deletions(-)
create mode 100644 Documentation/dev-tools/kfence.rst
create mode 100644 arch/arm/include/asm/kfence.h
create mode 100644 arch/arm64/include/asm/kfence.h
create mode 100644 arch/powerpc/include/asm/kfence.h
create mode 100644 arch/x86/include/asm/kfence.h
create mode 100644 include/linux/kfence.h
create mode 100644 include/trace/events/error_report.h
create mode 100644 kernel/trace/error_report-traces.c
create mode 100644 lib/Kconfig.kfence
create mode 100644 mm/kfence/Makefile
create mode 100644 mm/kfence/core.c
create mode 100644 mm/kfence/kfence.h
create mode 100644 mm/kfence/kfence_test.c
create mode 100644 mm/kfence/report.c
--
2.20.1
[PATCH v3 openEuler-1.0-LTS 2/2] drm/hisilicon: Features to support reading resolutions from EDID
by Gou Hao 21 Oct '21
From: Tian Tao <tiantao6(a)hisilicon.com>
mainline inclusion
from mainline-v5.14.0-rc7
commit a0d078d06e516184e2f575f3803935697b5e3ac6
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I469VQ
CVE: NA
Refactor the hibmc_vdac_init() and hibmc_connector_init() methods.
---------------------------------------
Use drm_get_edid to get the resolution; if that fails, set it to
a fixed resolution. Rewrite the destroy callback function to release
resources.
Signed-off-by: Tian Tao <tiantao6(a)hisilicon.com>
Reviewed-by: Thomas Zimmermann <tzimmermann(a)suse.de>
Link: https://patchwork.freedesktop.org/patch/msgid/1600778670-60370-3-git-send-e…
Signed-off-by: gouhao <gouhao(a)uniontech.com>
---
.../gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c | 54 +++++++++++++------
1 file changed, 37 insertions(+), 17 deletions(-)
diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c
index 90319a902..aeef73037 100644
--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c
+++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c
@@ -59,11 +59,23 @@ static int hibmc_valid_mode(int w, int h)
static int hibmc_connector_get_modes(struct drm_connector *connector)
{
int count;
+ void *edid;
+ struct hibmc_connector *hibmc_connector = to_hibmc_connector(connector);
+
+ edid = drm_get_edid(connector, &hibmc_connector->adapter);
+ if (edid) {
+ drm_connector_update_edid_property(connector, edid);
+ count = drm_add_edid_modes(connector, edid);
+ if (count)
+ goto out;
+ }
drm_connector_update_edid_property(connector, NULL);
count = drm_add_modes_noedid(connector, 1920, 1200);
drm_set_preferred_mode(connector, 1024, 768);
+out:
+ kfree(edid);
return count;
}
@@ -93,39 +105,41 @@ static const struct drm_connector_helper_funcs
.best_encoder = hibmc_connector_best_encoder,
};
+static void hibmc_connector_destroy(struct drm_connector *connector)
+{
+ struct hibmc_connector *hibmc_connector = to_hibmc_connector(connector);
+
+ i2c_del_adapter(&hibmc_connector->adapter);
+ drm_connector_cleanup(connector);
+}
+
static const struct drm_connector_funcs hibmc_connector_funcs = {
.fill_modes = drm_helper_probe_single_connector_modes,
- .destroy = drm_connector_cleanup,
+ .destroy = hibmc_connector_destroy,
.reset = drm_atomic_helper_connector_reset,
.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
-static struct drm_connector *
+static int
hibmc_connector_init(struct hibmc_drm_private *priv)
{
struct drm_device *dev = priv->dev;
- struct drm_connector *connector;
+ struct drm_connector *connector = &priv->connector.base;
int ret;
- connector = devm_kzalloc(dev->dev, sizeof(*connector), GFP_KERNEL);
- if (!connector) {
- DRM_ERROR("failed to alloc memory when init connector\n");
- return ERR_PTR(-ENOMEM);
- }
-
ret = drm_connector_init(dev, connector,
&hibmc_connector_funcs,
DRM_MODE_CONNECTOR_VGA);
if (ret) {
DRM_ERROR("failed to init connector: %d\n", ret);
- return ERR_PTR(ret);
+ return ret;
}
drm_connector_helper_add(connector,
&hibmc_connector_helper_funcs);
drm_connector_register(connector);
- return connector;
+ return 0;
}
static void hibmc_encoder_mode_set(struct drm_encoder *encoder,
@@ -155,15 +169,21 @@ static const struct drm_encoder_funcs hibmc_encoder_funcs = {
int hibmc_vdac_init(struct hibmc_drm_private *priv)
{
struct drm_device *dev = priv->dev;
+ struct hibmc_connector *hibmc_connector = &priv->connector;
struct drm_encoder *encoder;
- struct drm_connector *connector;
+ struct drm_connector *connector = &hibmc_connector->base;
int ret;
- connector = hibmc_connector_init(priv);
- if (IS_ERR(connector)) {
- DRM_ERROR("failed to create connector: %ld\n",
- PTR_ERR(connector));
- return PTR_ERR(connector);
+ ret = hibmc_connector_init(priv);
+ if (ret) {
+ DRM_ERROR("failed to init connector: %d\n", ret);
+ return ret;
+ }
+
+ ret = hibmc_ddc_create(dev, hibmc_connector);
+ if (ret) {
+ DRM_ERROR("failed to create ddc: %d\n", ret);
+ return ret;
}
encoder = devm_kzalloc(dev->dev, sizeof(*encoder), GFP_KERNEL);
--
2.20.1
[PATCH v3 openEuler-1.0-LTS 1/2] drm/hisilicon: Support i2c driver algorithms for bit-shift adapters
by Gou Hao 21 Oct '21
From: Tian Tao <tiantao6(a)hisilicon.com>
mainline inclusion
from mainline-v5.14.0-rc7
commit 4eb4d99dfe3018d86f4529112aa7082f43b6996a
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I469VQ
CVE: NA
Remove #include <drm/drm_probe_helper.h> in hibmc_drm_i2c.c.
-----------------------------
Add a driver implementation to support the i2c driver algorithm for
bit-shift adapters, so hibmc will use the interface provided by
drm to read the EDID.
Signed-off-by: Tian Tao <tiantao6(a)hisilicon.com>
Reviewed-by: Thomas Zimmermann <tzimmermann(a)suse.de>
Link: https://patchwork.freedesktop.org/patch/msgid/1600778670-60370-2-git-send-e…
Signed-off-by: gouhao <gouhao(a)uniontech.com>
---
drivers/gpu/drm/hisilicon/hibmc/Makefile | 3 +-
.../gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h | 25 +++++
.../gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c | 98 +++++++++++++++++++
3 files changed, 125 insertions(+), 1 deletion(-)
create mode 100644 drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c
diff --git a/drivers/gpu/drm/hisilicon/hibmc/Makefile b/drivers/gpu/drm/hisilicon/hibmc/Makefile
index 3df726696..71c248f4c 100644
--- a/drivers/gpu/drm/hisilicon/hibmc/Makefile
+++ b/drivers/gpu/drm/hisilicon/hibmc/Makefile
@@ -1,3 +1,4 @@
-hibmc-drm-y := hibmc_drm_drv.o hibmc_drm_de.o hibmc_drm_vdac.o hibmc_drm_fbdev.o hibmc_ttm.o
+hibmc-drm-y := hibmc_drm_drv.o hibmc_drm_de.o hibmc_drm_vdac.o \
+ hibmc_drm_fbdev.o hibmc_ttm.o hibmc_drm_i2c.o
obj-$(CONFIG_DRM_HISI_HIBMC) += hibmc-drm.o
diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h
index 4395dc667..c246151b2 100644
--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h
+++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h
@@ -19,12 +19,18 @@
#ifndef HIBMC_DRM_DRV_H
#define HIBMC_DRM_DRV_H
+#include <linux/gpio/consumer.h>
+#include <linux/i2c-algo-bit.h>
+#include <linux/i2c.h>
+
+#include <drm/drm_edid.h>
#include <drm/drmP.h>
#include <drm/drm_atomic.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_gem.h>
#include <drm/ttm/ttm_bo_driver.h>
+
struct hibmc_framebuffer {
struct drm_framebuffer fb;
struct drm_gem_object *obj;
@@ -36,6 +42,13 @@ struct hibmc_fbdev {
int size;
};
+struct hibmc_connector {
+ struct drm_connector base;
+
+ struct i2c_adapter adapter;
+ struct i2c_algo_bit_data bit_data;
+};
+
struct hibmc_drm_private {
/* hw */
void __iomem *mmio;
@@ -46,6 +59,7 @@ struct hibmc_drm_private {
/* drm */
struct drm_device *dev;
+ struct hibmc_connector connector;
bool mode_config_initialized;
struct drm_atomic_state *suspend_state;
@@ -60,6 +74,16 @@ struct hibmc_drm_private {
bool mm_inited;
};
+static inline struct hibmc_connector *to_hibmc_connector(struct drm_connector *connector)
+{
+ return container_of(connector, struct hibmc_connector, base);
+}
+
+static inline struct hibmc_drm_private *to_hibmc_drm_private(struct drm_device *dev)
+{
+ return dev->dev_private;
+}
+
#define to_hibmc_framebuffer(x) container_of(x, struct hibmc_framebuffer, fb)
struct hibmc_bo {
@@ -110,6 +134,7 @@ int hibmc_dumb_create(struct drm_file *file, struct drm_device *dev,
int hibmc_dumb_mmap_offset(struct drm_file *file, struct drm_device *dev,
u32 handle, u64 *offset);
int hibmc_mmap(struct file *filp, struct vm_area_struct *vma);
+int hibmc_ddc_create(struct drm_device *drm_dev, struct hibmc_connector *connector);
extern const struct drm_mode_config_funcs hibmc_mode_funcs;
diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c
new file mode 100644
index 000000000..ffd7c7bf4
--- /dev/null
+++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c
@@ -0,0 +1,98 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/* Hisilicon Hibmc SoC drm driver
+ *
+ * Based on the bochs drm driver.
+ *
+ * Copyright (c) 2016 Huawei Limited.
+ *
+ * Author:
+ * Tian Tao <tiantao6(a)hisilicon.com>
+ */
+
+#include <linux/delay.h>
+#include <linux/pci.h>
+
+#include <drm/drm_atomic_helper.h>
+
+#include "hibmc_drm_drv.h"
+
+#define GPIO_DATA 0x0802A0
+#define GPIO_DATA_DIRECTION 0x0802A4
+
+#define I2C_SCL_MASK BIT(0)
+#define I2C_SDA_MASK BIT(1)
+
+static void hibmc_set_i2c_signal(void *data, u32 mask, int value)
+{
+ struct hibmc_connector *hibmc_connector = data;
+ struct hibmc_drm_private *priv = to_hibmc_drm_private(hibmc_connector->base.dev);
+ u32 tmp_dir = readl(priv->mmio + GPIO_DATA_DIRECTION);
+
+ if (value) {
+ tmp_dir &= ~mask;
+ writel(tmp_dir, priv->mmio + GPIO_DATA_DIRECTION);
+ } else {
+ u32 tmp_data = readl(priv->mmio + GPIO_DATA);
+
+ tmp_data &= ~mask;
+ writel(tmp_data, priv->mmio + GPIO_DATA);
+
+ tmp_dir |= mask;
+ writel(tmp_dir, priv->mmio + GPIO_DATA_DIRECTION);
+ }
+}
+
+static int hibmc_get_i2c_signal(void *data, u32 mask)
+{
+ struct hibmc_connector *hibmc_connector = data;
+ struct hibmc_drm_private *priv = to_hibmc_drm_private(hibmc_connector->base.dev);
+ u32 tmp_dir = readl(priv->mmio + GPIO_DATA_DIRECTION);
+
+ if ((tmp_dir & mask) != mask) {
+ tmp_dir &= ~mask;
+ writel(tmp_dir, priv->mmio + GPIO_DATA_DIRECTION);
+ }
+
+ return (readl(priv->mmio + GPIO_DATA) & mask) ? 1 : 0;
+}
+
+static void hibmc_ddc_setsda(void *data, int state)
+{
+ hibmc_set_i2c_signal(data, I2C_SDA_MASK, state);
+}
+
+static void hibmc_ddc_setscl(void *data, int state)
+{
+ hibmc_set_i2c_signal(data, I2C_SCL_MASK, state);
+}
+
+static int hibmc_ddc_getsda(void *data)
+{
+ return hibmc_get_i2c_signal(data, I2C_SDA_MASK);
+}
+
+static int hibmc_ddc_getscl(void *data)
+{
+ return hibmc_get_i2c_signal(data, I2C_SCL_MASK);
+}
+
+int hibmc_ddc_create(struct drm_device *drm_dev,
+ struct hibmc_connector *connector)
+{
+ connector->adapter.owner = THIS_MODULE;
+ connector->adapter.class = I2C_CLASS_DDC;
+ snprintf(connector->adapter.name, I2C_NAME_SIZE, "HIS i2c bit bus");
+ connector->adapter.dev.parent = &drm_dev->pdev->dev;
+ i2c_set_adapdata(&connector->adapter, connector);
+ connector->adapter.algo_data = &connector->bit_data;
+
+ connector->bit_data.udelay = 20;
+ connector->bit_data.timeout = usecs_to_jiffies(2000);
+ connector->bit_data.data = connector;
+ connector->bit_data.setsda = hibmc_ddc_setsda;
+ connector->bit_data.setscl = hibmc_ddc_setscl;
+ connector->bit_data.getsda = hibmc_ddc_getsda;
+ connector->bit_data.getscl = hibmc_ddc_getscl;
+
+ return i2c_bit_add_bus(&connector->adapter);
+}
--
2.20.1
21 Oct '21
From: gouhao <gouhao(a)uniontech.com>
Fix hibmc not being able to get the EDID.
issue: https://gitee.com/openeuler/kernel/issues/I469VQ
gouhao (2):
drm/hisilicon: Support i2c driver algorithms for bit-shift adapters
drm/hisilicon: Features to support reading resolutions from EDID
drivers/gpu/drm/hisilicon/hibmc/Makefile | 3 +-
.../gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h | 25 +++++
.../gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c | 98 +++++++++++++++++++
.../gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c | 54 ++++++----
4 files changed, 162 insertions(+), 18 deletions(-)
create mode 100644 drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c
--
2.20.1
[PATCH openEuler-21.03] etmem: correct ept page walk under 4 level page table
by Kemeng Shi 21 Oct '21
euleros inclusion
category: feature
feature: etmem
bugzilla: 48246
-------------------------------------------------
Function ept_pgd_range() does not work for a VM that uses a 4-level page
table: in that case the walk has to start at the p4d level.
Call ept_p4d_range() instead to fix this.
Signed-off-by: Kemeng Shi <shikemeng(a)huawei.com>
---
fs/proc/etmem_scan.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/fs/proc/etmem_scan.c b/fs/proc/etmem_scan.c
index cdc8f341af67..b4655c85f8c5 100644
--- a/fs/proc/etmem_scan.c
+++ b/fs/proc/etmem_scan.c
@@ -557,9 +557,8 @@ static int ept_page_range(struct page_idle_ctrl *pic,
ept_root = __va(mmu->root_hpa);
- // walk start at p4d when host enable 5 level table pages but
- // vm only get 4 level table pages
- if (mmu->shadow_root_level == 4 + (!!pgtable_l5_enabled()))
+ // Walk start at p4d when vm get 4 level table pages
+ if (mmu->shadow_root_level != 4)
err = ept_pgd_range(pic, (pgd_t *)ept_root, addr, end, walk);
else
err = ept_p4d_range(pic, (p4d_t *)ept_root, addr, end, walk);
--
2.30.0
21 Oct '21
hi zhenyuan: Thank you very much for your submission. However, in the
Attachment 1 patch set you submitted, we found that the prerequisite
patch referenced by the Fixes: tag of "tcp: address problems caused by
EDT misshaps", namely Fixes: ab408b6dc744
("tcp: switch tcp and sch_fq to new earliest departure time model"),
has not been merged into this branch. Please double-check. best regards
Attachment 1:
issue: https://gitee.com/openeuler/kernel/issues/I4AFRJ?from=project-issue
jiazhenyuan (4):
tcp: address problems caused by EDT misshaps ^^^^^^^
tcp: always set retrans_stamp on recovery
tcp: create a helper to model exponential backoff
tcp: adjust rto_base in retransmits_timed_out()
net/ipv4/tcp_input.c | 16 +++++++----
net/ipv4/tcp_output.c | 9 +++---
net/ipv4/tcp_timer.c | 65 ++++++++++++++++++++-----------------
3 files changed, 44 insertions(+), 46 deletions(-)
Attachment 2: [v1,openEuler-1.0-LTS,1/4] tcp: address problems caused by EDT misshaps
From: Eric Dumazet <edumazet(a)google.com>
mainline inclusion
from mainline-v5.3.0
commit 9efdda4e3abed13f0903b7b6e4d4c2102019440a
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4AFRJ?from=project-issue
CVE: NA
------------------------------------------------------------
When a qdisc setup including pacing FQ is dismantled and recreated,
some TCP packets are sent earlier than instructed by TCP stack.
TCP can be fooled when ACK comes back, because the following
operation can return a negative value.
tcp_time_stamp(tp) - tp->rx_opt.rcv_tsecr;
Some paths in the TCP stack were not dealing properly with this;
this patch addresses four of them.
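As an illustration of the failure mode, here is a minimal stand-alone
sketch (plain user-space C, not the kernel code; all names are made up
for the example) of how an echoed timestamp that is ahead of the local
timestamp clock turns the u32 subtraction into a huge bogus value
unless it is clamped through a signed check:
/*
 * Toy model only: "now" plays the role of tcp_time_stamp(tp) and
 * "tsecr" the role of tp->rx_opt.rcv_tsecr.
 */
#include <stdint.h>
#include <stdio.h>
static uint32_t elapsed_clamped(uint32_t now, uint32_t tsecr)
{
	uint32_t delta = now - tsecr;	/* wraps around if tsecr > now */
	if ((int32_t)delta < 0)		/* "negative" elapsed time */
		delta = 0;		/* clamp instead of ~4.29e9 */
	return delta;
}
int main(void)
{
	uint32_t now = 1000;	/* current timestamp ticks */
	uint32_t tsecr = 1003;	/* echoed value, 3 ticks "in the future" */
	printf("raw delta:     %u\n", now - tsecr);			/* 4294967293 */
	printf("clamped delta: %u\n", elapsed_clamped(now, tsecr));	/* 0 */
	return 0;
}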
Fixes: ab408b6dc744 ("tcp: switch tcp and sch_fq to new earliest departure time model")
^^^^^^^^^^^^^^^ <= 当前版本未合入该patch
Signed-off-by: Eric Dumazet <edumazet(a)google.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Jiazhenyuan <jiazhenyuan(a)uniontech.com>
---
net/ipv4/tcp_input.c | 16 ++++++++++------
net/ipv4/tcp_timer.c | 10 ++++++----
2 files changed, 16 insertions(+), 10 deletions(-)
[PATCH kernel-4.19] net:fix tcp timeout retransmits are always missing 2 times
by Yang Yingliang 21 Oct '21
From: jiazhenyuan <jiazhenyuan(a)uniontech.com>
mainline inclusion
from mainline-v5.4-rc2
commit 3256a2d6ab1f71f9a1bd2d7f6f18eb8108c48d17
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4AFRJ?from=project-issue
CVE: NA
---------------------------------------------------------------
tcp: adjust rto_base in retransmits_timed_out()
The cited commit exposed an old retransmits_timed_out() bug
which assumed it could call tcp_model_timeout() with
TCP_RTO_MIN as rto_base for all states.
But flows in SYN_SENT or SYN_RECV state use a different
RTO base (1 sec instead of 200 ms, unless BPF chooses
another value)
This caused a reduction of SYN retransmits from 6 to 4 with
the default /proc/sys/net/ipv4/tcp_syn_retries value.
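To see where the 6-to-4 reduction comes from, here is a rough
stand-alone sketch of the backoff arithmetic (illustrative only, a
simplified model rather than the kernel's actual timeout computation):
#include <stdio.h>
#define RTO_MAX_MS	120000U
/* Sum the first "boundary" retransmission intervals, starting at
 * rto_base_ms and doubling up to the RTO cap (simplified model).
 */
static unsigned int deadline_ms(unsigned int boundary, unsigned int rto_base_ms)
{
	unsigned int total = 0, rto = rto_base_ms, i;
	for (i = 0; i < boundary; i++) {
		total += rto;
		rto = (2 * rto < RTO_MAX_MS) ? 2 * rto : RTO_MAX_MS;
	}
	return total;
}
int main(void)
{
	/* Deadline allowed for 6 retransmits, depending on the base used: */
	printf("1 s base (SYN):        %u ms\n", deadline_ms(6, 1000));	/* 63000 */
	printf("200 ms base (RTO_MIN): %u ms\n", deadline_ms(6, 200));	/* 12600 */
	/* Real SYN retransmits have waited 1+2+4+8 = 15 s after the 4th
	 * attempt, which already exceeds the ~12.6 s deadline computed
	 * from the 200 ms base, hence roughly 4 SYN retransmits instead of 6.
	 */
	return 0;
}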
Fixes: a41e8a8 ("tcp: better handle TCP_USER_TIMEOUT in SYN_SENT state")
Signed-off-by: Eric Dumazet <edumazet(a)google.com>
Cc: Yuchung Cheng <ycheng(a)google.com>
Cc: Marek Majkowski <marek(a)cloudflare.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: jiazhenyuan <jiazhenyuan(a)uniontech.com> # openEuler_contributor
Reviewed-by: Wei Yongjun <weiyongjun1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
net/ipv4/tcp_timer.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
index 681882a409686..81a47e87c35d4 100644
--- a/net/ipv4/tcp_timer.c
+++ b/net/ipv4/tcp_timer.c
@@ -190,7 +190,7 @@ static bool retransmits_timed_out(struct sock *sk,
unsigned int boundary,
unsigned int timeout)
{
- const unsigned int rto_base = TCP_RTO_MIN;
+ unsigned int rto_base = TCP_RTO_MIN;
unsigned int linear_backoff_thresh, start_ts;
if (!inet_csk(sk)->icsk_retransmits)
@@ -201,6 +201,9 @@ static bool retransmits_timed_out(struct sock *sk,
return false;
if (likely(timeout == 0)) {
+ if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV))
+ rto_base = tcp_timeout_init(sk);
+
linear_backoff_thresh = ilog2(TCP_RTO_MAX/rto_base);
if (boundary <= linear_backoff_thresh)
--
2.25.1
[PATCH kernel-4.19] blk-wbt: fix IO hang due to negative inflight counter
by Yang Yingliang 21 Oct '21
From: Laibin Qiu <qiulaibin(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 182135, https://gitee.com/openeuler/kernel/issues/I4ENC8
CVE: NA
--------------------------
Block tests reported the following stack: some requests have been waiting
for a wakeup in wbt_wait, and the vmcore showed that the wbt inflight
counter is -1, so those requests can never be woken up.
PID: 75416 TASK: ffff88836c098000 CPU: 2 COMMAND: "fsstress"
[ffff8882e59a7608] __schedule at ffffffffb2d22a25
[ffff8882e59a7720] schedule at ffffffffb2d2358f
[ffff8882e59a7738] io_schedule at ffffffffb2d23bdc
[ffff8882e59a7750] rq_qos_wait at ffffffffb2400fde
[ffff8882e59a7878] wbt_wait at ffffffffb243a051
[ffff8882e59a7910] __rq_qos_throttle at ffffffffb2400a20
[ffff8882e59a7930] blk_mq_make_request at ffffffffb23de038
[ffff8882e59a7a98] generic_make_request at ffffffffb23c393d
[ffff8882e59a7b80] submit_bio at ffffffffb23c3db8
[ffff8882e59a7c48] submit_bio_wait at ffffffffb23b3a5d
[ffff8882e59a7cf0] blkdev_issue_flush at ffffffffb23c8f4c
[ffff8882e59a7d20] ext4_sync_fs at ffffffffc06dd708 [ext4]
[ffff8882e59a7dd0] sync_filesystem at ffffffffb21e8335
[ffff8882e59a7df8] ovl_sync_fs at ffffffffc0fd853a [overlay]
[ffff8882e59a7e10] sync_fs_one_sb at ffffffffb21e8221
[ffff8882e59a7e30] iterate_supers at ffffffffb218401e
[ffff8882e59a7e70] ksys_sync at ffffffffb21e8588
[ffff8882e59a7f20] __x64_sys_sync at ffffffffb21e861f
[ffff8882e59a7f28] do_syscall_64 at ffffffffb1c06bc8
[ffff8882e59a7f50] entry_SYSCALL_64_after_hwframe at ffffffffb2e000ad
RIP: 00007f479ab13347 RSP: 00007ffd4dda9fe8 RFLAGS: 00000202
RAX: ffffffffffffffda RBX: 0000000000000068 RCX: 00007f479ab13347
RDX: 0000000000000000 RSI: 000000003e1b142d RDI: 0000000000000068
RBP: 0000000051eb851f R8: 00007f479abd4034 R9: 00007f479abd40a0
R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000402c20
R13: 0000000000000001 R14: 0000000000000000 R15: 7fffffffffffffff
The ->inflight counter may be negative (-1) if
1) blk-wbt was disabled when the IO was issued,
which will add inflight count.
2) blk-wbt was enabled before this IO was tracked.
3) the ->inflight counter is decreased from
0 to -1 in endio().
This patch fixes the problem by freezing the queue while enabling wbt,
so that no inflight rq is running during the switch.
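One way to read the sequence above, as a toy model rather than the
blk-wbt code itself: the completion path decrements a counter that the
submission path never incremented, so the counter ends up at -1 and
later waiters never see it drop below the limit.
#include <stdio.h>
int main(void)
{
	int inflight = 0;
	int wbt_enabled = 0;	/* 1) wbt is off when the IO is issued     */
	if (wbt_enabled)	/*    ... so submission never counts it    */
		inflight++;
	wbt_enabled = 1;	/* 2) wbt is switched on before completion */
	inflight--;		/* 3) endio decrements anyway: 0 -> -1     */
	printf("inflight = %d\n", inflight);
	/* Waiters comparing against this counter never get woken,
	 * matching the hung fsstress stack above.
	 */
	return 0;
}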
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
block/blk-wbt.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index a9f11d12bea92..b65504f330442 100644
--- a/block/blk-wbt.c
+++ b/block/blk-wbt.c
@@ -23,6 +23,7 @@
#include <linux/slab.h>
#include <linux/backing-dev.h>
#include <linux/swap.h>
+#include <linux/blk-mq.h>
#include "blk-wbt.h"
#include "blk-rq-qos.h"
@@ -822,9 +823,16 @@ int wbt_init(struct request_queue *q)
rq_qos_add(q, &rwb->rqos);
blk_stat_add_callback(q, rwb->cb);
- rwb->min_lat_nsec = wbt_default_latency_nsec(q);
+ /*
+ * Ensure that the queue is idled by freezing the queue
+ * while enabling wbt, there is no inflight rq running.
+ */
+ blk_mq_freeze_queue(q);
+ rwb->min_lat_nsec = wbt_default_latency_nsec(q);
wbt_queue_depth_changed(&rwb->rqos);
+
+ blk_mq_unfreeze_queue(q);
wbt_set_write_cache(q, test_bit(QUEUE_FLAG_WC, &q->queue_flags));
return 0;
--
2.25.1
From: Wang Wensheng <wangwensheng4(a)huawei.com>
ascend inclusion
category: bugfix
bugzilla: NA
CVE: NA
---------------------------
To keep mmap away from the virtual space reserved for the share pool, we
currently change the high_limit to MMAP_SHARE_POOL_START in
arch_get_unmapped_area() and arch_get_unmapped_area_topdown(). In the
mmap-topdown case this makes the start address of mmap always
MMAP_SHARE_POOL_START, which breaks ASLR.
To fix this, this patch sets mm->mmap_base based on
MMAP_SHARE_POOL_START instead of STACK_TOP in the topdown case.
Fixes: 4bdd5c21793e ("ascend: memory: introduce do_mm_populate and hugetlb_insert_hugepage")
Signed-off-by: Wang Wensheng <wangwensheng4(a)huawei.com>
Reviewed-by: Weilong Chen <chenweilong(a)huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/mm/mmap.c | 6 +++++-
include/linux/share_pool.h | 4 ++--
2 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index 157f2caa13516..aca257158611f 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -28,6 +28,7 @@
#include <linux/io.h>
#include <linux/personality.h>
#include <linux/random.h>
+#include <linux/share_pool.h>
#include <asm/cputype.h>
@@ -80,7 +81,10 @@ static unsigned long mmap_base(unsigned long rnd, struct rlimit *rlim_stack)
else if (gap > MAX_GAP)
gap = MAX_GAP;
- return PAGE_ALIGN(STACK_TOP - gap - rnd);
+ if (sp_is_enabled())
+ return ALIGN_DOWN(MMAP_SHARE_POOL_START - rnd, PAGE_SIZE);
+ else
+ return PAGE_ALIGN(STACK_TOP - gap - rnd);
}
/*
diff --git a/include/linux/share_pool.h b/include/linux/share_pool.h
index 9650f257b3ad7..9557a8be46677 100644
--- a/include/linux/share_pool.h
+++ b/include/linux/share_pool.h
@@ -130,8 +130,6 @@ struct sp_proc_stat {
atomic64_t k2u_size;
};
-#ifdef CONFIG_ASCEND_SHARE_POOL
-
#define MAP_SHARE_POOL 0x100000
#define MMAP_TOP_4G_SIZE 0x100000000UL
@@ -148,6 +146,8 @@ struct sp_proc_stat {
#define MMAP_SHARE_POOL_START (MMAP_SHARE_POOL_END - MMAP_SHARE_POOL_SIZE)
#define MMAP_SHARE_POOL_16G_START (MMAP_SHARE_POOL_END - MMAP_SHARE_POOL_DVPP_SIZE)
+#ifdef CONFIG_ASCEND_SHARE_POOL
+
static inline void sp_init_mm(struct mm_struct *mm)
{
mm->sp_group = NULL;
--
2.25.1
20 Oct '21
From: Wang Wensheng <wangwensheng4(a)huawei.com>
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4D63I
CVE: NA
-------------------------------------------------
Not all cdm nodes are HBM and we don't need to operate on the other
nodes, so we should specify the HBM count per partition.
Here we assume that, within one partition, all the HBM nodes come first
among the cdm nodes. Otherwise the management structures of the HBM
nodes cannot be moved, which is no worse than disabling this feature.
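For a concrete feel of the mapping, here is an illustrative sketch that
mirrors the fake_nid arithmetic in the diff below, with the assumed
values nr_ddr = 2 and hbm_per_part = 2 (so nodes 0-1 are DDR, 2-3 are
partition 0's HBM, 4-5 are partition 1's HBM, and anything >= 6 is a
non-HBM cdm node left untouched):
#include <stdio.h>
/* Same arithmetic as the fake_nid computation in the patch, pulled out
 * for illustration: HBM nodes map to the DDR node of their partition,
 * everything else is returned unchanged.
 */
static int to_ddr_node(int nid, int nr_ddr, int hbm_per_part)
{
	if (nid < nr_ddr || nid >= (hbm_per_part + 1) * nr_ddr)
		return nid;
	return (nid - nr_ddr) / hbm_per_part;
}
int main(void)
{
	int nid;
	for (nid = 0; nid < 8; nid++)
		printf("nid %d -> ddr node %d\n", nid, to_ddr_node(nid, 2, 2));
	/* 0->0, 1->1 (DDR), 2->0, 3->0, 4->1, 5->1 (HBM), 6->6, 7->7 (other) */
	return 0;
}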
Signed-off-by: Wang Wensheng <wangwensheng4(a)huawei.com>
Reviewed-by: Weilong Chen <chenweilong(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/mm/numa.c | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
index 82d53927554d8..c65a71de8d5fb 100644
--- a/arch/arm64/mm/numa.c
+++ b/arch/arm64/mm/numa.c
@@ -63,7 +63,9 @@ inline int arch_check_node_cdm(int nid)
* |node2 HBM| | |node4 HBM|
* |---------- | ----------|
* |node3 HBM| | |node5 HBM|
- * ----------- | -----------
+ * |---------- | ----------|
+ * | ... | | | ... |
+ * |---------- | ----------|
*
* Return:
* This function returns a ddr node which is of the same partion with the input
@@ -76,6 +78,12 @@ int __init cdm_node_to_ddr_node(int nid)
nodemask_t ddr_mask;
int nr_ddr, cdm_per_part, fake_nid;
int nr_cdm = nodes_weight(cdmmask);
+ /*
+ * Specify the count of hbm nodes whoes management structrue would be
+ * moved. Here number 2 is a magic and we should make it configable
+ * for extending
+ */
+ int hbm_per_part = 2;
if (!nr_cdm || nodes_empty(numa_nodes_parsed))
return nid;
@@ -87,11 +95,12 @@ int __init cdm_node_to_ddr_node(int nid)
nr_ddr = nodes_weight(ddr_mask);
cdm_per_part = nr_cdm / nr_ddr;
- if (cdm_per_part == 0 || nid < nr_ddr)
+ if (cdm_per_part == 0 || nid < nr_ddr ||
+ nid >= (hbm_per_part + 1) * nr_ddr)
/* our assumption has borken, just return the original nid. */
return nid;
- fake_nid = (nid - nr_ddr) / cdm_per_part;
+ fake_nid = (nid - nr_ddr) / hbm_per_part;
fake_nid = !node_isset(fake_nid, cdmmask) ? fake_nid : nid;
pr_info("nid: %d, fake_nid: %d\n", nid, fake_nid);
--
2.25.1
From: Yu'an Wang <wangyuan46(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
1. Add an input parameter check to the uacce_unregister() API.
2. Make uacce_qfrt_str() an internal interface, because it is used
only in uacce.c.
Signed-off-by: Yu'an Wang <wangyuan46(a)huawei.com>
Reviewed-by: Longfang Liu <liulongfang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/misc/uacce/uacce.c | 6 ++++--
include/linux/uacce.h | 1 -
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/misc/uacce/uacce.c b/drivers/misc/uacce/uacce.c
index db7b3936aec6f..b0cdc244e882a 100644
--- a/drivers/misc/uacce/uacce.c
+++ b/drivers/misc/uacce/uacce.c
@@ -126,11 +126,10 @@ static void uacce_hw_err_destroy(struct uacce *uacce)
}
}
-const char *uacce_qfrt_str(struct uacce_qfile_region *qfr)
+static const char *uacce_qfrt_str(struct uacce_qfile_region *qfr)
{
return qfrt_str[qfr->type];
}
-EXPORT_SYMBOL_GPL(uacce_qfrt_str);
/**
* uacce_wake_up - Wake up the process who is waiting this queue
@@ -1302,6 +1301,9 @@ EXPORT_SYMBOL_GPL(uacce_register);
*/
int uacce_unregister(struct uacce *uacce)
{
+ if (!uacce)
+ return -ENODEV;
+
if (atomic_read(&uacce->ref) > 0) {
printk_ratelimited("Fail to unregister uacce, please close all uacce queues!\n");
return -EAGAIN;
diff --git a/include/linux/uacce.h b/include/linux/uacce.h
index 43737c3f7f525..b43b65abdce39 100644
--- a/include/linux/uacce.h
+++ b/include/linux/uacce.h
@@ -128,7 +128,6 @@ struct uacce {
int uacce_register(struct uacce *uacce);
int uacce_unregister(struct uacce *uacce);
void uacce_wake_up(struct uacce_queue *q);
-const char *uacce_qfrt_str(struct uacce_qfile_region *qfr);
struct uacce *dev_to_uacce(struct device *dev);
int uacce_hw_err_isolate(struct uacce *uacce);
--
2.25.1
[PATCH kernel-4.19 1/6] mm/page_alloc.c: memory hotplug: free pages as higher order
by Yang Yingliang 20 Oct '21
From: Arun KS <arunks(a)codeaurora.org>
mainline inclusion
from mainline-v5.1-rc1
commit a9cd410a3d296846a8125aa43d97a573a354c472
category: feature
bugzilla: 182882
CVE: NA
-----------------------------------------------
When pages are freed at a higher order, the time the buddy allocator
spends coalescing pages can be reduced. With a section size of 256MB,
the hot add latency of a single section improves from 50-60 ms to
less than 1 ms, i.e. by about 60 times. Modify
external providers of the online callback to align with the change.
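As a back-of-the-envelope illustration of the saving (not kernel code;
the 4 KB base page size and the order-10 maximum chunk are assumptions
for the example, matching a default MAX_ORDER of 11):
#include <stdio.h>
int main(void)
{
	unsigned long section_bytes = 256UL << 20;	/* 256 MB section (assumed)  */
	unsigned long page_bytes = 4UL << 10;		/* 4 KB base pages (assumed) */
	unsigned int max_order = 10;			/* assumed MAX_ORDER - 1     */
	unsigned long pages = section_bytes / page_bytes;
	/* Each order-0 free has to be coalesced by the buddy allocator;
	 * freeing in order-10 chunks needs no coalescing at all.
	 */
	printf("order-0 frees to online the section:  %lu\n", pages);			/* 65536 */
	printf("order-%u frees to online the section: %lu\n", max_order, pages >> max_order);	/* 64 */
	return 0;
}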
[arunks(a)codeaurora.org: v11]
Link: http://lkml.kernel.org/r/1547792588-18032-1-git-send-email-arunks@codeauror…
[akpm(a)linux-foundation.org: remove unused local, per Arun]
[akpm(a)linux-foundation.org: avoid return of void-returning __free_pages_core(), per Oscar]
[akpm(a)linux-foundation.org: fix it for mm-convert-totalram_pages-and-totalhigh_pages-variables-to-atomic.patch]
[arunks(a)codeaurora.org: v8]
Link: http://lkml.kernel.org/r/1547032395-24582-1-git-send-email-arunks@codeauror…
[arunks(a)codeaurora.org: v9]
Link: http://lkml.kernel.org/r/1547098543-26452-1-git-send-email-arunks@codeauror…
Link: http://lkml.kernel.org/r/1538727006-5727-1-git-send-email-arunks@codeaurora…
Signed-off-by: Arun KS <arunks(a)codeaurora.org>
Reviewed-by: Andrew Morton <akpm(a)linux-foundation.org>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Reviewed-by: Oscar Salvador <osalvador(a)suse.de>
Reviewed-by: Alexander Duyck <alexander.h.duyck(a)linux.intel.com>
Cc: K. Y. Srinivasan <kys(a)microsoft.com>
Cc: Haiyang Zhang <haiyangz(a)microsoft.com>
Cc: Stephen Hemminger <sthemmin(a)microsoft.com>
Cc: Boris Ostrovsky <boris.ostrovsky(a)oracle.com>
Cc: Juergen Gross <jgross(a)suse.com>
Cc: Dan Williams <dan.j.williams(a)intel.com>
Cc: Vlastimil Babka <vbabka(a)suse.cz>
Cc: Joonsoo Kim <iamjoonsoo.kim(a)lge.com>
Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Cc: Mathieu Malaterre <malat(a)debian.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov(a)linux.intel.com>
Cc: Souptick Joarder <jrdr.linux(a)gmail.com>
Cc: Mel Gorman <mgorman(a)techsingularity.net>
Cc: Aaron Lu <aaron.lu(a)intel.com>
Cc: Srivatsa Vaddagiri <vatsa(a)codeaurora.org>
Cc: Vinayak Menon <vinmenon(a)codeaurora.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Conflicts:
mm/page_alloc.c
mm/memory_hotplug.c
[Peng Liu: adjust context]
Signed-off-by: Peng Liu <liupeng256(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/hv/hv_balloon.c | 7 ++++---
drivers/xen/balloon.c | 15 ++++++++++-----
include/linux/memory_hotplug.h | 2 +-
mm/internal.h | 1 +
mm/memory_hotplug.c | 33 +++++++++++++++++++++------------
mm/page_alloc.c | 8 ++++----
6 files changed, 41 insertions(+), 25 deletions(-)
diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
index e5fc719a34e70..23d9156915271 100644
--- a/drivers/hv/hv_balloon.c
+++ b/drivers/hv/hv_balloon.c
@@ -771,7 +771,7 @@ static void hv_mem_hot_add(unsigned long start, unsigned long size,
}
}
-static void hv_online_page(struct page *pg)
+static void hv_online_page(struct page *pg, unsigned int order)
{
struct hv_hotadd_state *has;
unsigned long flags;
@@ -780,10 +780,11 @@ static void hv_online_page(struct page *pg)
spin_lock_irqsave(&dm_device.ha_lock, flags);
list_for_each_entry(has, &dm_device.ha_region_list, list) {
/* The page belongs to a different HAS. */
- if ((pfn < has->start_pfn) || (pfn >= has->end_pfn))
+ if ((pfn < has->start_pfn) ||
+ (pfn + (1UL << order) > has->end_pfn))
continue;
- hv_page_online_one(has, pg);
+ hv_bring_pgs_online(has, pfn, 1UL << order);
break;
}
spin_unlock_irqrestore(&dm_device.ha_lock, flags);
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index b23edf64c2b21..630fd49a7ce84 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -369,14 +369,19 @@ static enum bp_state reserve_additional_memory(void)
return BP_ECANCELED;
}
-static void xen_online_page(struct page *page)
+static void xen_online_page(struct page *page, unsigned int order)
{
- __online_page_set_limits(page);
+ unsigned long i, size = (1 << order);
+ unsigned long start_pfn = page_to_pfn(page);
+ struct page *p;
+ pr_debug("Online %lu pages starting at pfn 0x%lx\n", size, start_pfn);
mutex_lock(&balloon_mutex);
-
- __balloon_append(page);
-
+ for (i = 0; i < size; i++) {
+ p = pfn_to_page(start_pfn + i);
+ __online_page_set_limits(p);
+ __balloon_append(p);
+ }
mutex_unlock(&balloon_mutex);
}
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 5653178768227..8782f0e993704 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -89,7 +89,7 @@ extern int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn,
unsigned long *valid_start, unsigned long *valid_end);
extern void __offline_isolated_pages(unsigned long, unsigned long);
-typedef void (*online_page_callback_t)(struct page *page);
+typedef void (*online_page_callback_t)(struct page *page, unsigned int order);
extern int set_online_page_callback(online_page_callback_t callback);
extern int restore_online_page_callback(online_page_callback_t callback);
diff --git a/mm/internal.h b/mm/internal.h
index 3bb7ca86e84eb..415a6a326bb4d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -163,6 +163,7 @@ static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
extern int __isolate_free_page(struct page *page, unsigned int order);
extern void __free_pages_bootmem(struct page *page, unsigned long pfn,
unsigned int order);
+extern void __free_pages_core(struct page *page, unsigned int order);
extern void prep_compound_page(struct page *page, unsigned int order);
extern void post_alloc_hook(struct page *page, unsigned int order,
gfp_t gfp_flags);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 967adfc17dd26..64a3ec9365911 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -48,7 +48,7 @@
* and restore_online_page_callback() for generic callback restore.
*/
-static void generic_online_page(struct page *page);
+static void generic_online_page(struct page *page, unsigned int order);
static online_page_callback_t online_page_callback = generic_online_page;
static DEFINE_MUTEX(online_page_callback_lock);
@@ -580,26 +580,35 @@ void __online_page_free(struct page *page)
}
EXPORT_SYMBOL_GPL(__online_page_free);
-static void generic_online_page(struct page *page)
+static void generic_online_page(struct page *page, unsigned int order)
{
- __online_page_set_limits(page);
- __online_page_increment_counters(page);
- __online_page_free(page);
+ __free_pages_core(page, order);
+ adjust_managed_page_count(page, 1UL << order);
+}
+
+static int online_pages_blocks(unsigned long start, unsigned long nr_pages)
+{
+ unsigned long end = start + nr_pages;
+ int order, onlined_pages = 0;
+
+ while (start < end) {
+ order = min(MAX_ORDER - 1,
+ get_order(PFN_PHYS(end) - PFN_PHYS(start)));
+ (*online_page_callback)(pfn_to_page(start), order);
+
+ onlined_pages += (1UL << order);
+ start += (1UL << order);
+ }
+ return onlined_pages;
}
static int online_pages_range(unsigned long start_pfn, unsigned long nr_pages,
void *arg)
{
- unsigned long i;
unsigned long onlined_pages = *(unsigned long *)arg;
- struct page *page;
if (PageReserved(pfn_to_page(start_pfn)))
- for (i = 0; i < nr_pages; i++) {
- page = pfn_to_page(start_pfn + i);
- (*online_page_callback)(page);
- onlined_pages++;
- }
+ onlined_pages += online_pages_blocks(start_pfn, nr_pages);
online_mem_sections(start_pfn, start_pfn + nr_pages);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 12e0899f3b9c8..e0002e5c3d06d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1288,7 +1288,7 @@ static void __free_pages_ok(struct page *page, unsigned int order)
local_irq_restore(flags);
}
-static void __init __free_pages_boot_core(struct page *page, unsigned int order)
+void __free_pages_core(struct page *page, unsigned int order)
{
unsigned int nr_pages = 1 << order;
struct page *p = page;
@@ -1368,7 +1368,7 @@ void __init __free_pages_bootmem(struct page *page, unsigned long pfn,
return;
page_zone(page)->managed_pages += 1UL << order;
- return __free_pages_boot_core(page, order);
+ return __free_pages_core(page, order);
}
/*
@@ -1459,14 +1459,14 @@ static void __init deferred_free_range(unsigned long pfn,
if (nr_pages == pageblock_nr_pages &&
(pfn & (pageblock_nr_pages - 1)) == 0) {
set_pageblock_migratetype(page, MIGRATE_MOVABLE);
- __free_pages_boot_core(page, pageblock_order);
+ __free_pages_core(page, pageblock_order);
return;
}
for (i = 0; i < nr_pages; i++, page++, pfn++) {
if ((pfn & (pageblock_nr_pages - 1)) == 0)
set_pageblock_migratetype(page, MIGRATE_MOVABLE);
- __free_pages_boot_core(page, 0);
+ __free_pages_core(page, 0);
}
}
--
2.25.1
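For intuition, a minimal userspace C sketch of the chunking that the new
online_pages_blocks() helper above performs (hypothetical code, not part of
the patch). It assumes 4 KiB pages and MAX_ORDER == 11, and it simplifies the
order choice to "largest block that fits" (the kernel helper uses get_order()
on the remaining size and relies on section alignment). The point is that a
256 MB section reaches the online callback as 64 order-10 blocks instead of
65536 single pages, which is where the reported 50-60 ms to sub-millisecond
win comes from.
#include <stdio.h>
#define PAGE_SHIFT	12	/* assumed: 4 KiB pages */
#define MAX_ORDER	11	/* assumed: default on x86_64/arm64 */
/* Largest power-of-two block (in pages, capped at MAX_ORDER - 1) that fits. */
static int largest_fitting_order(unsigned long pages)
{
	int order = 0;

	while ((1UL << (order + 1)) <= pages && order + 1 <= MAX_ORDER - 1)
		order++;
	return order;
}
int main(void)
{
	unsigned long nr_pages = (256UL << 20) >> PAGE_SHIFT;	/* one 256 MB section */
	unsigned long pfn = 0, end = nr_pages, callbacks = 0;

	while (pfn < end) {
		int order = largest_fitting_order(end - pfn);

		callbacks++;		/* one online_page_callback() per block */
		pfn += 1UL << order;
	}
	printf("%lu pages onlined with %lu callback invocations (was %lu order-0 calls)\n",
	       nr_pages, callbacks, nr_pages);
	return 0;
}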

[PATCH kernel-4.19] raid1: ensure write behind bio has less than BIO_MAX_VECS sectors
by Yang Yingliang 20 Oct '21
From: Guoqing Jiang <jiangguoqing(a)kylinos.cn>
mainline inclusion
from mainline-v5.15-rc1
commit 6607cd319b6b91bff94e90f798a61c031650b514
category: bugfix
bugzilla: 182883, https://gitee.com/openeuler/kernel/issues/I4ENHY
CVE: NA
-------------------------------------------------
We can't split a write-behind bio with more than BIO_MAX_VECS sectors;
otherwise the call trace below is triggered, because an oversized
write-behind bio could be allocated later.
[ 8.097936] bvec_alloc+0x90/0xc0
[ 8.098934] bio_alloc_bioset+0x1b3/0x260
[ 8.099959] raid1_make_request+0x9ce/0xc50 [raid1]
[ 8.100988] ? __bio_clone_fast+0xa8/0xe0
[ 8.102008] md_handle_request+0x158/0x1d0 [md_mod]
[ 8.103050] md_submit_bio+0xcd/0x110 [md_mod]
[ 8.104084] submit_bio_noacct+0x139/0x530
[ 8.105127] submit_bio+0x78/0x1d0
[ 8.106163] ext4_io_submit+0x48/0x60 [ext4]
[ 8.107242] ext4_writepages+0x652/0x1170 [ext4]
[ 8.108300] ? do_writepages+0x41/0x100
[ 8.109338] ? __ext4_mark_inode_dirty+0x240/0x240 [ext4]
[ 8.110406] do_writepages+0x41/0x100
[ 8.111450] __filemap_fdatawrite_range+0xc5/0x100
[ 8.112513] file_write_and_wait_range+0x61/0xb0
[ 8.113564] ext4_sync_file+0x73/0x370 [ext4]
[ 8.114607] __x64_sys_fsync+0x33/0x60
[ 8.115635] do_syscall_64+0x33/0x40
[ 8.116670] entry_SYSCALL_64_after_hwframe+0x44/0xae
Thanks for the comment from Christoph.
[1]. https://bugs.archlinux.org/task/70992
Cc: stable(a)vger.kernel.org # v5.12+
Reported-by: Jens Stutte <jens(a)chianterastutte.eu>
Tested-by: Jens Stutte <jens(a)chianterastutte.eu>
Reviewed-by: Christoph Hellwig <hch(a)lst.de>
Signed-off-by: Guoqing Jiang <jiangguoqing(a)kylinos.cn>
Signed-off-by: Song Liu <songliubraving(a)fb.com>
Conflict:
drivers/md/raid1.c
[ The mainline patch 6607cd319b6b ("raid1: ensure write behind bio has less
than BIO_MAX_VECS sectors") uses BIO_MAX_VECS directly, but that name was
introduced by a8affc03a9b37 ("block: rename BIO_MAX_PAGES to BIO_MAX_VECS"),
which has not been incorporated here. BIO_MAX_VECS is therefore changed back
to the original BIO_MAX_PAGES. ]
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/md/raid1.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index cfc23bf440f84..0c30a1fdb561c 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1318,6 +1318,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
struct raid1_plug_cb *plug = NULL;
int first_clone;
int max_sectors;
+ bool write_behind = false;
if (mddev_is_clustered(mddev) &&
md_cluster_ops->area_resyncing(mddev, WRITE,
@@ -1370,6 +1371,15 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
max_sectors = r1_bio->sectors;
for (i = 0; i < disks; i++) {
struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);
+
+ /*
+ * The write-behind io is only attempted on drives marked as
+ * write-mostly, which means we could allocate write behind
+ * bio later.
+ */
+ if (rdev && test_bit(WriteMostly, &rdev->flags))
+ write_behind = true;
+
if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
atomic_inc(&rdev->nr_pending);
blocked_rdev = rdev;
@@ -1444,6 +1454,15 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
goto retry_write;
}
+ /*
+ * When using a bitmap, we may call alloc_behind_master_bio below.
+ * alloc_behind_master_bio allocates a copy of the data payload a page
+ * at a time and thus needs a new bio that can fit the whole payload
+ * this bio in page sized chunks.
+ */
+ if (write_behind && bitmap)
+ max_sectors = min_t(int, max_sectors,
+ BIO_MAX_PAGES * (PAGE_SIZE >> 9));
if (max_sectors < bio_sectors(bio)) {
struct bio *split = bio_split(bio, max_sectors,
GFP_NOIO, &conf->bio_split);
--
2.25.1
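A minimal sketch of the clamp added above (hypothetical userspace code, not
part of the patch): since alloc_behind_master_bio() copies the payload one
page per bio_vec, a write-behind bio can describe at most BIO_MAX_PAGES pages
of data. Assuming 4 KiB pages and BIO_MAX_PAGES == 256, that is 2048 sectors
(1 MiB); larger requests have to be split first.
#include <stdio.h>
#define PAGE_SIZE	4096UL	/* assumed: 4 KiB pages */
#define BIO_MAX_PAGES	256UL	/* renamed BIO_MAX_VECS in later kernels */
/* Mirrors: max_sectors = min_t(int, max_sectors, BIO_MAX_PAGES * (PAGE_SIZE >> 9)) */
static unsigned long clamp_write_behind_sectors(unsigned long max_sectors)
{
	unsigned long limit = BIO_MAX_PAGES * (PAGE_SIZE >> 9);

	return max_sectors < limit ? max_sectors : limit;
}
int main(void)
{
	unsigned long req = 8192;	/* a 4 MiB write, in 512-byte sectors */
	unsigned long capped = clamp_write_behind_sectors(req);

	printf("request of %lu sectors capped to %lu sectors (%lu pages)\n",
	       req, capped, capped / (PAGE_SIZE >> 9));
	return 0;
}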

[PATCH openEuler-1.0-LTS] blk-wbt: fix IO hang due to negative inflight counter
by Yang Yingliang 20 Oct '21
From: Laibin Qiu <qiulaibin(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 182135, https://gitee.com/openeuler/kernel/issues/I4ENC8
CVE: NA
--------------------------
A block test reported the following stack: some requests were waiting for a
wakeup in wbt_wait, and the vmcore showed that the wbt inflight counter was
-1, so those requests could never be woken up.
PID: 75416 TASK: ffff88836c098000 CPU: 2 COMMAND: "fsstress"
[ffff8882e59a7608] __schedule at ffffffffb2d22a25
[ffff8882e59a7720] schedule at ffffffffb2d2358f
[ffff8882e59a7738] io_schedule at ffffffffb2d23bdc
[ffff8882e59a7750] rq_qos_wait at ffffffffb2400fde
[ffff8882e59a7878] wbt_wait at ffffffffb243a051
[ffff8882e59a7910] __rq_qos_throttle at ffffffffb2400a20
[ffff8882e59a7930] blk_mq_make_request at ffffffffb23de038
[ffff8882e59a7a98] generic_make_request at ffffffffb23c393d
[ffff8882e59a7b80] submit_bio at ffffffffb23c3db8
[ffff8882e59a7c48] submit_bio_wait at ffffffffb23b3a5d
[ffff8882e59a7cf0] blkdev_issue_flush at ffffffffb23c8f4c
[ffff8882e59a7d20] ext4_sync_fs at ffffffffc06dd708 [ext4]
[ffff8882e59a7dd0] sync_filesystem at ffffffffb21e8335
[ffff8882e59a7df8] ovl_sync_fs at ffffffffc0fd853a [overlay]
[ffff8882e59a7e10] sync_fs_one_sb at ffffffffb21e8221
[ffff8882e59a7e30] iterate_supers at ffffffffb218401e
[ffff8882e59a7e70] ksys_sync at ffffffffb21e8588
[ffff8882e59a7f20] __x64_sys_sync at ffffffffb21e861f
[ffff8882e59a7f28] do_syscall_64 at ffffffffb1c06bc8
[ffff8882e59a7f50] entry_SYSCALL_64_after_hwframe at ffffffffb2e000ad
RIP: 00007f479ab13347 RSP: 00007ffd4dda9fe8 RFLAGS: 00000202
RAX: ffffffffffffffda RBX: 0000000000000068 RCX: 00007f479ab13347
RDX: 0000000000000000 RSI: 000000003e1b142d RDI: 0000000000000068
RBP: 0000000051eb851f R8: 00007f479abd4034 R9: 00007f479abd40a0
R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000402c20
R13: 0000000000000001 R14: 0000000000000000 R15: 7fffffffffffffff
The ->inflight counter may go negative (-1) if:
1) blk-wbt was disabled when the IO was issued, so the inflight
counter was not incremented for it;
2) blk-wbt was enabled before this IO completed;
3) the ->inflight counter is then decreased from 0 to -1 in endio().
Fix the problem by freezing the queue while enabling wbt, so that
no request is in flight while the switch happens.
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
block/blk-wbt.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index 3d9f61633e688..ffd5c17f5a101 100644
--- a/block/blk-wbt.c
+++ b/block/blk-wbt.c
@@ -23,6 +23,9 @@
#include <linux/slab.h>
#include <linux/backing-dev.h>
#include <linux/swap.h>
+#ifndef __GENKSYMS__
+#include <linux/blk-mq.h>
+#endif
#include "blk-wbt.h"
#include "blk-rq-qos.h"
@@ -824,9 +827,16 @@ int wbt_init(struct request_queue *q)
rq_qos_add(q, &rwb->rqos);
blk_stat_add_callback(q, rwb->cb);
- rwb->min_lat_nsec = wbt_default_latency_nsec(q);
+ /*
+ * Ensure that the queue is idled by freezing the queue
+ * while enabling wbt, there is no inflight rq running.
+ */
+ blk_mq_freeze_queue(q);
+ rwb->min_lat_nsec = wbt_default_latency_nsec(q);
wbt_set_queue_depth(q, blk_queue_depth(q));
+
+ blk_mq_unfreeze_queue(q);
wbt_set_write_cache(q, test_bit(QUEUE_FLAG_WC, &q->queue_flags));
return 0;
--
2.25.1
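A minimal sketch of the race described above (hypothetical userspace code,
not the real blk-wbt logic): an IO issued while wbt is disabled is never
counted, so if wbt is enabled before that IO completes, the completion path
drives the counter from 0 to -1 and later waiters hang in wbt_wait(). The
actual fix is the blk_mq_freeze_queue()/blk_mq_unfreeze_queue() pair in the
diff, which drains any such untracked IO before wbt is switched on.
#include <stdio.h>
#include <stdbool.h>
static int inflight;		/* stands in for the wbt inflight counter */
static bool wbt_enabled;
static void issue_io(void)
{
	/* Throttle path: the IO is only counted if wbt is enabled right now. */
	if (wbt_enabled)
		inflight++;
}
static void complete_io(void)
{
	/* Completion path: decrements based on the current state. */
	if (wbt_enabled)
		inflight--;
}
int main(void)
{
	issue_io();		/* wbt still disabled: inflight stays 0     */
	wbt_enabled = true;	/* wbt_init() without freezing the queue    */
	complete_io();		/* endio: 0 -> -1, later waiters never wake */
	printf("inflight = %d\n", inflight);
	return 0;
}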

20 Oct '21
Ramaxel inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4DBD7
CVE: NA
Initial commit of the spfc module for the Ramaxel Super FC adapter
Changes since v2:
- 1. Change comments of default N to M in kconfig.
- 2. Update MACRO to sync with file name.
- 3. Remove new blank line at EOF.
- 4. Remove unnecessary initialization of local variables.
Changes since v1:
- Add UNF_PORT_CFG_SET_PORT_STATE state
Yanling Song (1):
scsi: spfc: initial commit the spfc module
arch/arm64/configs/openeuler_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
drivers/scsi/Kconfig | 1 +
drivers/scsi/Makefile | 1 +
drivers/scsi/spfc/Kconfig | 16 +
drivers/scsi/spfc/Makefile | 47 +
drivers/scsi/spfc/common/unf_common.h | 1755 +++++++
drivers/scsi/spfc/common/unf_disc.c | 1276 +++++
drivers/scsi/spfc/common/unf_disc.h | 51 +
drivers/scsi/spfc/common/unf_event.c | 517 ++
drivers/scsi/spfc/common/unf_event.h | 83 +
drivers/scsi/spfc/common/unf_exchg.c | 2317 +++++++++
drivers/scsi/spfc/common/unf_exchg.h | 436 ++
drivers/scsi/spfc/common/unf_exchg_abort.c | 825 +++
drivers/scsi/spfc/common/unf_exchg_abort.h | 23 +
drivers/scsi/spfc/common/unf_fcstruct.h | 459 ++
drivers/scsi/spfc/common/unf_gs.c | 2521 +++++++++
drivers/scsi/spfc/common/unf_gs.h | 58 +
drivers/scsi/spfc/common/unf_init.c | 353 ++
drivers/scsi/spfc/common/unf_io.c | 1220 +++++
drivers/scsi/spfc/common/unf_io.h | 96 +
drivers/scsi/spfc/common/unf_io_abnormal.c | 986 ++++
drivers/scsi/spfc/common/unf_io_abnormal.h | 19 +
drivers/scsi/spfc/common/unf_log.h | 178 +
drivers/scsi/spfc/common/unf_lport.c | 1008 ++++
drivers/scsi/spfc/common/unf_lport.h | 519 ++
drivers/scsi/spfc/common/unf_ls.c | 4883 ++++++++++++++++++
drivers/scsi/spfc/common/unf_ls.h | 61 +
drivers/scsi/spfc/common/unf_npiv.c | 1005 ++++
drivers/scsi/spfc/common/unf_npiv.h | 47 +
drivers/scsi/spfc/common/unf_npiv_portman.c | 360 ++
drivers/scsi/spfc/common/unf_npiv_portman.h | 17 +
drivers/scsi/spfc/common/unf_portman.c | 2431 +++++++++
drivers/scsi/spfc/common/unf_portman.h | 96 +
drivers/scsi/spfc/common/unf_rport.c | 2286 ++++++++
drivers/scsi/spfc/common/unf_rport.h | 301 ++
drivers/scsi/spfc/common/unf_scsi.c | 1462 ++++++
drivers/scsi/spfc/common/unf_scsi_common.h | 570 ++
drivers/scsi/spfc/common/unf_service.c | 1439 ++++++
drivers/scsi/spfc/common/unf_service.h | 66 +
drivers/scsi/spfc/common/unf_type.h | 216 +
drivers/scsi/spfc/hw/spfc_chipitf.c | 1105 ++++
drivers/scsi/spfc/hw/spfc_chipitf.h | 797 +++
drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c | 1646 ++++++
drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h | 215 +
drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c | 891 ++++
drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h | 65 +
drivers/scsi/spfc/hw/spfc_cqm_main.c | 1256 +++++
drivers/scsi/spfc/hw/spfc_cqm_main.h | 414 ++
drivers/scsi/spfc/hw/spfc_cqm_object.c | 958 ++++
drivers/scsi/spfc/hw/spfc_cqm_object.h | 279 +
drivers/scsi/spfc/hw/spfc_hba.c | 1751 +++++++
drivers/scsi/spfc/hw/spfc_hba.h | 341 ++
drivers/scsi/spfc/hw/spfc_hw_wqe.h | 1645 ++++++
drivers/scsi/spfc/hw/spfc_io.c | 1193 +++++
drivers/scsi/spfc/hw/spfc_io.h | 138 +
drivers/scsi/spfc/hw/spfc_lld.c | 997 ++++
drivers/scsi/spfc/hw/spfc_lld.h | 76 +
drivers/scsi/spfc/hw/spfc_module.h | 297 ++
drivers/scsi/spfc/hw/spfc_parent_context.h | 269 +
drivers/scsi/spfc/hw/spfc_queue.c | 4857 +++++++++++++++++
drivers/scsi/spfc/hw/spfc_queue.h | 711 +++
drivers/scsi/spfc/hw/spfc_service.c | 2168 ++++++++
drivers/scsi/spfc/hw/spfc_service.h | 282 +
drivers/scsi/spfc/hw/spfc_utils.c | 102 +
drivers/scsi/spfc/hw/spfc_utils.h | 202 +
drivers/scsi/spfc/hw/spfc_wqe.c | 646 +++
drivers/scsi/spfc/hw/spfc_wqe.h | 239 +
68 files changed, 53547 insertions(+)
create mode 100644 drivers/scsi/spfc/Kconfig
create mode 100644 drivers/scsi/spfc/Makefile
create mode 100644 drivers/scsi/spfc/common/unf_common.h
create mode 100644 drivers/scsi/spfc/common/unf_disc.c
create mode 100644 drivers/scsi/spfc/common/unf_disc.h
create mode 100644 drivers/scsi/spfc/common/unf_event.c
create mode 100644 drivers/scsi/spfc/common/unf_event.h
create mode 100644 drivers/scsi/spfc/common/unf_exchg.c
create mode 100644 drivers/scsi/spfc/common/unf_exchg.h
create mode 100644 drivers/scsi/spfc/common/unf_exchg_abort.c
create mode 100644 drivers/scsi/spfc/common/unf_exchg_abort.h
create mode 100644 drivers/scsi/spfc/common/unf_fcstruct.h
create mode 100644 drivers/scsi/spfc/common/unf_gs.c
create mode 100644 drivers/scsi/spfc/common/unf_gs.h
create mode 100644 drivers/scsi/spfc/common/unf_init.c
create mode 100644 drivers/scsi/spfc/common/unf_io.c
create mode 100644 drivers/scsi/spfc/common/unf_io.h
create mode 100644 drivers/scsi/spfc/common/unf_io_abnormal.c
create mode 100644 drivers/scsi/spfc/common/unf_io_abnormal.h
create mode 100644 drivers/scsi/spfc/common/unf_log.h
create mode 100644 drivers/scsi/spfc/common/unf_lport.c
create mode 100644 drivers/scsi/spfc/common/unf_lport.h
create mode 100644 drivers/scsi/spfc/common/unf_ls.c
create mode 100644 drivers/scsi/spfc/common/unf_ls.h
create mode 100644 drivers/scsi/spfc/common/unf_npiv.c
create mode 100644 drivers/scsi/spfc/common/unf_npiv.h
create mode 100644 drivers/scsi/spfc/common/unf_npiv_portman.c
create mode 100644 drivers/scsi/spfc/common/unf_npiv_portman.h
create mode 100644 drivers/scsi/spfc/common/unf_portman.c
create mode 100644 drivers/scsi/spfc/common/unf_portman.h
create mode 100644 drivers/scsi/spfc/common/unf_rport.c
create mode 100644 drivers/scsi/spfc/common/unf_rport.h
create mode 100644 drivers/scsi/spfc/common/unf_scsi.c
create mode 100644 drivers/scsi/spfc/common/unf_scsi_common.h
create mode 100644 drivers/scsi/spfc/common/unf_service.c
create mode 100644 drivers/scsi/spfc/common/unf_service.h
create mode 100644 drivers/scsi/spfc/common/unf_type.h
create mode 100644 drivers/scsi/spfc/hw/spfc_chipitf.c
create mode 100644 drivers/scsi/spfc/hw/spfc_chipitf.h
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_main.c
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_main.h
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_object.c
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_object.h
create mode 100644 drivers/scsi/spfc/hw/spfc_hba.c
create mode 100644 drivers/scsi/spfc/hw/spfc_hba.h
create mode 100644 drivers/scsi/spfc/hw/spfc_hw_wqe.h
create mode 100644 drivers/scsi/spfc/hw/spfc_io.c
create mode 100644 drivers/scsi/spfc/hw/spfc_io.h
create mode 100644 drivers/scsi/spfc/hw/spfc_lld.c
create mode 100644 drivers/scsi/spfc/hw/spfc_lld.h
create mode 100644 drivers/scsi/spfc/hw/spfc_module.h
create mode 100644 drivers/scsi/spfc/hw/spfc_parent_context.h
create mode 100644 drivers/scsi/spfc/hw/spfc_queue.c
create mode 100644 drivers/scsi/spfc/hw/spfc_queue.h
create mode 100644 drivers/scsi/spfc/hw/spfc_service.c
create mode 100644 drivers/scsi/spfc/hw/spfc_service.h
create mode 100644 drivers/scsi/spfc/hw/spfc_utils.c
create mode 100644 drivers/scsi/spfc/hw/spfc_utils.h
create mode 100644 drivers/scsi/spfc/hw/spfc_wqe.c
create mode 100644 drivers/scsi/spfc/hw/spfc_wqe.h
--
2.30.0
Backport LTS 5.10.66 patches from upstream.
Greg Kroah-Hartman (3):
Revert "block: nbd: add sanity check for first_minor"
Revert "posix-cpu-timers: Force next expiration recalc after itimer
reset"
Revert "time: Handle negative seconds correctly in timespec64_to_ns()"
Sasha Levin (1):
Revert "Bluetooth: Move shutdown callback before flushing tx and rx
queue"
drivers/block/nbd.c | 10 ----------
include/linux/time64.h | 9 ++-------
kernel/time/posix-cpu-timers.c | 2 ++
net/bluetooth/hci_core.c | 8 --------
4 files changed, 4 insertions(+), 25 deletions(-)
--
2.20.1
1
4
Backport LTS 5.10.65 patches from upstream.
Abhishek Naik (1):
iwlwifi: skip first element in the WTAS ACPI table
Ahmad Fatoum (1):
brcmfmac: pcie: fix oops on failure to resume and reprobe
Alexander Gordeev (1):
s390/kasan: fix large PMD pages address alignment check
Amit Engel (1):
nvmet: pass back cntlid on successful completion
Anand Moon (3):
ARM: dts: meson8b: odroidc1: Fix the pwm regulator supply properties
ARM: dts: meson8b: mxq: Fix the pwm regulator supply properties
ARM: dts: meson8b: ec100: Fix the pwm regulator supply properties
Andrey Ignatov (1):
bpf: Fix possible out of bound write in narrow load handling
Andrii Nakryiko (1):
libbpf: Re-build libbpf.so when libbpf.map changes
Andy Duan (1):
tty: serial: fsl_lpuart: fix the wrong mapbase value
Andy Shevchenko (1):
leds: lt3593: Put fwnode in any case during ->probe()
Austin Kim (1):
IMA: remove -Wmissing-prototypes warning
Aya Levin (1):
net/mlx5: Register to devlink ingress VLAN filter trap
Babu Moger (1):
x86/resctrl: Fix a maybe-uninitialized build warning treated as error
Ben Hutchings (1):
crypto: omap - Fix inconsistent locking of device lists
Benjamin Coddington (1):
lockd: Fix invalid lockowner cast after vfs_test_lock
Biju Das (1):
arm64: dts: renesas: hihope-rzg2-ex: Add EtherAVB internal rx delay
Bjorn Andersson (1):
soc: qcom: rpmhpd: Use corner in power_off
Bob Peterson (1):
gfs2: init system threads before freeze lock
Borislav Petkov (1):
x86/mce: Defer processing of early errors
Brett Creeley (1):
ice: Only lock to update netdev dev_addr
Cezary Rojewski (3):
ASoC: Intel: kbl_da7219_max98927: Fix format selection for max98373
ASoC: Intel: Skylake: Leave data as is when invoking TLV IPCs
ASoC: Intel: Skylake: Fix module resource and format selection
Chen-Yu Tsai (3):
irqchip/gic-v3: Fix priority comparison when non-secure priorities are
used
regulator: vctrl: Use locked regulator_get_voltage in probe path
regulator: vctrl: Avoid lockdep warning in enable/disable ops
Chih-Kang Chang (1):
mac80211: Fix insufficient headroom issue for AMSDU
Christoph Hellwig (1):
bcache: add proper error unwinding in bcache_device_init
Christophe JAILLET (9):
spi: coldfire-qspi: Use clk_disable_unprepare in the remove function
media: cxd2880-spi: Fix an error handling path
drm/msm/dsi: Fix some reference counted resource leaks
firmware: raspberrypi: Fix a leak in 'rpi_firmware_get()'
usb: bdc: Fix an error handling path in 'bdc_probe()' when no suitable
DMA config is available
usb: bdc: Fix a resource leak in the error handling path of
'bdc_probe()'
ASoC: wcd9335: Fix a double irq free in the remove function
ASoC: wcd9335: Fix a memory leak in the error handling path of the
probe function
ASoC: wcd9335: Disable irq on slave ports in the remove function
Chunguang Xu (1):
blk-throtl: optimize IOPS throttle for large IO scenarios
Chunyan Zhang (1):
spi: sprd: Fix the wrong WDG_LOAD_VAL
Claudiu Beznea (1):
ARM: dts: at91: add pinctrl-{names, 0} for all gpios
Colin Ian King (4):
gfs2: Fix memory leak of object lsi on error return path
6lowpan: iphc: Fix an off-by-one check of array index
media: venus: venc: Fix potential null pointer dereference on pointer
fmt
Bluetooth: increase BTNAMSIZ to 21 chars to fix potential buffer
overflow
Curtis Malainey (1):
ASoC: Intel: Fix platform ID matching
Damien Le Moal (1):
libata: fix ata_host_start()
Dan Carpenter (5):
media: rockchip/rga: fix error handling in probe
Bluetooth: sco: prevent information leak in sco_conn_defer_accept()
rsi: fix error code in rsi_load_9116_firmware()
rsi: fix an error code in rsi_probe()
ath6kl: wmi: fix an error code in ath6kl_wmi_sync_point()
Daniel Thompson (1):
backlight: pwm_bl: Improve bootloader/kernel device handover
David Heidelberg (2):
drm/msm/mdp4: refactor HW revision detection into read_mdp_hw_revision
drm/msm/mdp4: move HW revision detection to earlier phase
Desmond Cheong Zhi Xi (2):
fcntl: fix potential deadlock for &fasync_struct.fa_lock
Bluetooth: fix repeated calls to sco_sock_kill
Dietmar Eggemann (1):
sched/deadline: Fix missing clock update in migrate_task_rq_dl()
Dmitry Baryshkov (1):
drm/msm/dpu: make dpu_hw_ctl_clear_all_blendstages clear necessary LMs
Dmitry Osipenko (2):
regulator: tps65910: Silence deferred probe error
power: supply: smb347-charger: Add missing pin control activation
Dongliang Mu (4):
media: dvb-usb: fix uninit-value in dvb_usb_adapter_dvb_init
media: dvb-usb: fix uninit-value in vp702x_read_mac_addr
media: dvb-usb: Fix error handling in dvb_usb_i2c_init
media: em28xx-input: fix refcount bug in em28xx_usb_disconnect
Douglas Anderson (2):
ASoC: rt5682: Properly turn off regulators if wrong device ID
ASoC: rt5682: Remove unused variable in rt5682_i2c_remove()
Dylan Hung (1):
ARM: dts: aspeed-g6: Fix HVI3C function-group in pinctrl dtsi
Emmanuel Grumbach (1):
iwlwifi: follow the new inclusive terminology
Eric Biggers (1):
blk-crypto: fix check for too-large dun_bytes
Eric Dumazet (3):
ipv6: make exception cache less predictible
ipv4: make exception cache less predictible
ipv4: fix endianness issue in inet_rtm_getroute_build_skb()
Evgeny Novikov (1):
usb: ehci-orion: Handle errors of clk_prepare_enable() in probe
Frederic Weisbecker (1):
posix-cpu-timers: Force next expiration recalc after itimer reset
Geert Uytterhoeven (5):
m68k: Fix invalid RMW_INSNS on CPUs that lack CAS
soc: rockchip: ROCKCHIP_GRF should not default to y, unconditionally
arm64: dts: renesas: r8a77995: draak: Remove bogus adv7511w properties
arm64: dts: renesas: rzg2: Convert EtherAVB to explicit delay handling
usb: gadget: udc: renesas_usb3: Fix soc_device_match() abuse
Giovanni Cabiddu (4):
crypto: qat - do not ignore errors from enable_vf2pf_comms()
crypto: qat - handle both source of interrupt in VF ISR
crypto: qat - do not export adf_iov_putmsg()
crypto: qat - use proper type for vf_mask
Haiyue Wang (1):
gve: fix the wrong AdminQ buffer overflow check
Halil Pasic (1):
KVM: s390: index kvm->arch.idle_mask by vcpu_idx
Hans de Goede (2):
power: supply: axp288_fuel_gauge: Report register-address on readb /
writeb errors
leds: trigger: audio: Add an activate callback to ensure the initial
brightness is set
Harald Freudenberger (2):
s390/zcrypt: fix wrong offset index for APKA master key valid state
s390/ap: fix state machine hang after failure to enable irq
Harshvardhan Jha (1):
drm/gma500: Fix end of loop tests for list_for_each_entry
He Fengqing (1):
bpf: Fix potential memleak and UAF in the verifier.
Hongbo Li (1):
lib/mpi: use kcalloc in mpi_resize
Huacai Chen (1):
irqchip/loongson-pch-pic: Improve edge triggered interrupt support
Ilya Leoshkevich (1):
selftests/bpf: Fix test_core_autosize on big-endian machines
J. Bruce Fields (1):
nfsd4: Fix forced-expiry locking
Jaegeuk Kim (1):
f2fs: guarantee to write dirty data when enabling checkpoint back
Jan Kara (1):
udf: Check LVID earlier
Jens Axboe (1):
io_uring: IORING_OP_WRITE needs hash_reg_file set
Jeongtae Park (1):
regmap: fix the offset of register error log
Jiapeng Chong (2):
leds: is31fl32xx: Fix missing error code in is31fl32xx_parse_dt()
net/mlx5: Fix missing return value in
mlx5_devlink_eswitch_inline_mode_set()
Jose Blanquicet (1):
selftests/bpf: Fix bpf-iter-tcp4 test to print correctly the dest IP
Juhee Kang (1):
samples: pktgen: add missing IPv6 option to pktgen scripts
Julia Lawall (1):
drm/of: free the right object
Justin M. Forbes (1):
iwlwifi Add support for ax201 in Samsung Galaxy Book Flex2 Alpha
Kai-Heng Feng (2):
drm/amdgpu/acp: Make PM domain really work
Bluetooth: Move shutdown callback before flushing tx and rx queue
Kevin Mitchell (1):
lkdtm: replace SCSI_DISPATCH_CMD with SCSI_QUEUE_RQ
Kim Phillips (1):
perf/x86/amd/ibs: Extend PERF_PMU_CAP_NO_EXCLUDE to IBS Op
Krzysztof Hałasa (1):
media: TDA1997x: enable EDID support
Krzysztof Kozlowski (1):
arm64: dts: exynos: correct GIC CPU interfaces address range on
Exynos7
Kuniyuki Iwashima (1):
bpf: Fix a typo of reuseport map in bpf.h.
Len Baker (1):
CIFS: Fix a potencially linear read overflow
Leon Romanovsky (3):
ionic: cleanly release devlink instance
devlink: Break parameter notification sequence to be before/after
unload/load driver
devlink: Clear whole devlink_flash_notify struct
Linus Walleij (1):
clk: kirkwood: Fix a clocking boot regression
Lukas Bulwahn (1):
clk: staging: correct reference to config IOMEM to config HAS_IOMEM
Lukas Hannen (1):
time: Handle negative seconds correctly in timespec64_to_ns()
Lukasz Luba (1):
PM: EM: Increase energy calculation precision
Marco Chiappero (2):
crypto: qat - fix reuse of completion variable
crypto: qat - fix naming for init/shutdown VF to PF notifications
Marek Vasut (3):
drm: mxsfb: Enable recovery on underflow
drm: mxsfb: Increase number of outstanding requests on V4 and newer HW
drm: mxsfb: Clear FIFO_CLEAR bit
Martin Blumenstingl (1):
ARM: dts: meson8: Use a higher default GPU clock frequency
Martin KaFai Lau (1):
tcp: seq_file: Avoid skipping sk during tcp_seek_last_pos
Martynas Pumputis (1):
libbpf: Fix removal of inner map in bpf_object__create_map
Matija Glavinic Pecotic (1):
spi: davinci: invoke chipselect callback
Matthew Cover (1):
bpf, samples: Add missing mprog-disable to xdp_redirect_cpu's
optstring
Mauro Carvalho Chehab (1):
media: rockchip/rga: use pm_runtime_resume_and_get()
Maxim Levitsky (1):
KVM: VMX: avoid running vmx_handle_exit_irqoff in case of emulation
Maxim Mikityanskiy (2):
net/mlx5e: Prohibit inner indir TIRs in IPoIB
net/mlx5e: Block LRO if firmware asks for tunneled LRO
Mika Penttilä (1):
sched/numa: Fix is_core_idle()
Miklos Szeredi (2):
fuse: truncate pagecache on atomic_o_trunc
fuse: flush extending writes
Ming Lei (1):
block: return ELEVATOR_DISCARD_MERGE if possible
Nadezda Lutovinova (1):
usb: gadget: mv_u3d: request_irq() after initializing UDC
Nguyen Dinh Phi (1):
tty: Fix data race between tiocsti() and flush_to_ldisc()
Nicolas Saenz Julienne (1):
firmware: raspberrypi: Keep count of all consumers
Niklas Schnelle (1):
s390/pci: fix misleading rc in clp_set_pci_fn()
Pali Rohár (3):
udf: Fix iocharset=utf8 mount option
isofs: joliet: Fix iocharset=utf8 mount option
arm64: dts: marvell: armada-37xx: Extend PCIe MEM space
Parav Pandit (1):
net/mlx5: Fix unpublish devlink parameters
Paul E. McKenney (1):
rcu: Add lockdep_assert_irqs_disabled() to rcu_sched_clock_irq() and
callees
Pavel Begunkov (1):
bio: fix page leak bio_add_hw_page failure
Pavel Skripkin (6):
m68k: emu: Fix invalid free in nfeth_cleanup()
block: nbd: add sanity check for first_minor
media: go7007: fix memory leak in go7007_usb_probe
media: go7007: remove redundant initialization
net: cipso: fix warnings in netlbl_cipsov4_add_std
Bluetooth: add timeout sanity check to hci_inquiry
Peter Oberparleiter (2):
s390/debug: keep debug data on resize
s390/debug: fix debug area life cycle
Peter Robinson (1):
power: supply: cw2015: use dev_err_probe to allow deferred probe
Peter Zijlstra (2):
locking/mutex: Fix HANDOFF condition
locking/lockdep: Mark local_lock_t
Philipp Zabel (1):
media: coda: fix frame_mem_ctrl for YUV420 and YVU420 formats
Phong Hoang (1):
clocksource/drivers/sh_cmt: Fix wrong setting if don't request IRQ for
clock source channel
Qiuxu Zhuo (1):
EDAC/i10nm: Fix NVDIMM detection
Quanyang Wang (1):
spi: spi-zynq-qspi: use wait_for_completion_timeout to make
zynq_qspi_exec_mem_op not interruptible
Quentin Perret (2):
sched/deadline: Fix reset_on_fork reporting of DL tasks
sched: Fix UCLAMP_FLAG_IDLE setting
Rafael J. Wysocki (2):
PCI: PM: Avoid forcing PCI_D0 for wakeup reasons inconsistently
PCI: PM: Enable PME if it can be signaled from D3cold
Ruozhu Li (2):
nvme-tcp: don't update queue count when failing to set io queues
nvme-rdma: don't update queue count when failing to set io queues
Sean Anderson (1):
crypto: mxs-dcp - Check for DMA mapping errors
Sean Christopherson (2):
Revert "KVM: x86: mmu: Add guest physical address check in
translate_gpa()"
KVM: nVMX: Unconditionally clear nested.pi_pending on nested VM-Enter
Sebastian Krzyszkowiak (1):
power: supply: max17042_battery: fix typo in MAx17042_TOFF
Sergey Senozhatsky (1):
rcu/tree: Handle VM stoppage in stall detection
Sergey Shtylyov (15):
i2c: highlander: add IRQ check
usb: dwc3: meson-g12a: add IRQ check
usb: dwc3: qcom: add IRQ check
usb: gadget: udc: at91: add IRQ check
usb: gadget: udc: s3c2410: add IRQ check
usb: phy: fsl-usb: add IRQ check
usb: phy: twl6030: add IRQ checks
usb: host: ohci-tmio: add IRQ check
usb: phy: tahvo: add IRQ check
i2c: synquacer: fix deferred probing
i2c: iop3xx: fix deferred probing
i2c: s3c2410: fix IRQ check
i2c: hix5hd2: fix IRQ check
i2c: mt65xx: fix IRQ check
i2c: xlp9xx: fix main IRQ check
Shuyi Cheng (1):
libbpf: Fix the possible memory leak on error
Smita Koralahalli (1):
EDAC/mce_amd: Do not load edac_mce_amd module on guests
Stefan Assmann (1):
i40e: improve locking of mac_filter_hash
Stefan Berger (2):
certs: Trigger creation of RSA module signing key if it's not an RSA
key
tpm: ibmvtpm: Avoid error message when process gets signal while
waiting
Stefan Wahren (1):
net: qualcomm: fix QCA7000 checksum handling
Stephan Gerhold (1):
soc: qcom: smsm: Fix missed interrupts if state changes while masked
Stephen Boyd (1):
ASoC: rt5682: Implement remove callback
Steven Price (1):
drm/of: free the iterator object on failure
Stian Skjelstad (1):
udf_get_extendedattr() had no boundary checks.
Subbaraya Sundeep (2):
octeontx2-af: Fix loop in free and unmap counter
octeontx2-af: Fix static code analyzer reported issues
Sudarsana Reddy Kalluru (1):
atlantic: Fix driver resume flow.
Sunil Goutham (1):
octeontx2-af: Set proper errorcode for IPv4 checksum errors
Sven Eckelmann (1):
debugfs: Return error during {full/open}_proxy_open() on rmmod
THOBY Simon (1):
IMA: remove the dependency on CRYPTO_MD5
Tedd Ho-Jeong An (1):
Bluetooth: mgmt: Fix wrong opcode in the response for add_adv cmd
Tetsuo Handa (1):
fbmem: don't allow too huge resolutions
Thomas Gleixner (3):
hrtimer: Avoid double reprogramming in __hrtimer_start_range_ns()
hrtimer: Ensure timerfd notification for HIGHRES=n
locking/local_lock: Add missing owner initialization
Tian Tao (1):
i2c: fix platform_get_irq.cocci warnings
Tony Lindgren (6):
crypto: omap-sham - clear dma flags only after
omap_sham_update_dma_stop()
spi: spi-fsl-dspi: Fix issue with uninitialized dma_slave_config
spi: spi-pic32: Fix issue with uninitialized dma_slave_config
mmc: sdhci: Fix issue with uninitialized dma_slave_config
mmc: dw_mmc: Fix issue with uninitialized dma_slave_config
mmc: moxart: Fix issue with uninitialized dma_slave_config
Valentin Schneider (1):
PM: cpu: Make notifier chain use a raw_spinlock_t
Vineeth Vijayan (1):
s390/cio: add dev_busid sysfs entry for each subchannel
Waiman Long (3):
cgroup/cpuset: Fix a partition bug with hotplug
cgroup/cpuset: Miscellaneous code cleanup
cgroup/cpuset: Fix violation of cpuset locking rule
Wei Yongjun (1):
drm/panfrost: Fix missing clk_disable_unprepare() on error in
panfrost_clk_init()
William Breathitt Gray (1):
counter: 104-quad-8: Return error when invalid mode during
ceiling_write
Xiyu Yang (1):
net: sched: Fix qdisc_rate_table refcount leak when get tcf_block
failed
Xu Yu (1):
mm/swap: consider max pages in iomap_swapfile_add_extent
Yanfei Xu (2):
rcu: Fix to include first blocked task in stall warning
rcu: Fix stall-warning deadlock due to non-release of rcu_node ->lock
Yizhuo (1):
media: atomisp: fix the uninitialized use and rename "retvalue"
Zelin Deng (1):
KVM: x86: Update vCPU's hv_clock before back to guest when tsc_offset
is adjusted
Zenghui Yu (1):
bcma: Fix memory leak for internally-handled cores
Zhang Qilong (1):
ASoC: mediatek: mt8183: Fix Unbalanced pm_runtime_enable in
mt8183_afe_pcm_dev_probe
Zhen Lei (3):
genirq/timings: Fix error return code in irq_timings_test_irqs()
firmware: fix theoretical UAF race with firmware cache and resume
driver core: Fix error return code in really_probe()
.../fault-injection/provoke-crashes.rst | 2 +-
arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi | 4 +-
arch/arm/boot/dts/at91-sam9x60ek.dts | 16 +-
arch/arm/boot/dts/at91-sama5d3_xplained.dts | 29 +++
arch/arm/boot/dts/at91-sama5d4_xplained.dts | 19 ++
arch/arm/boot/dts/meson8.dtsi | 5 +
arch/arm/boot/dts/meson8b-ec100.dts | 4 +-
arch/arm/boot/dts/meson8b-mxq.dts | 4 +-
arch/arm/boot/dts/meson8b-odroidc1.dts | 4 +-
arch/arm64/boot/dts/exynos/exynos7.dtsi | 2 +-
.../dts/marvell/armada-3720-turris-mox.dts | 17 ++
arch/arm64/boot/dts/marvell/armada-37xx.dtsi | 11 +-
.../boot/dts/renesas/beacon-renesom-som.dtsi | 3 +-
.../boot/dts/renesas/hihope-rzg2-ex.dtsi | 3 +-
arch/arm64/boot/dts/renesas/r8a774a1.dtsi | 2 +
arch/arm64/boot/dts/renesas/r8a774b1.dtsi | 2 +
arch/arm64/boot/dts/renesas/r8a774c0.dtsi | 1 +
arch/arm64/boot/dts/renesas/r8a774e1.dtsi | 2 +
.../arm64/boot/dts/renesas/r8a77995-draak.dts | 4 -
arch/m68k/Kconfig.cpu | 8 +-
arch/m68k/emu/nfeth.c | 4 +-
arch/s390/include/asm/kvm_host.h | 1 +
arch/s390/kernel/debug.c | 176 +++++++++++-------
arch/s390/kvm/interrupt.c | 12 +-
arch/s390/kvm/kvm-s390.c | 2 +-
arch/s390/kvm/kvm-s390.h | 2 +-
arch/s390/mm/kasan_init.c | 41 ++--
arch/s390/pci/pci.c | 7 +-
arch/s390/pci/pci_clp.c | 33 ++--
arch/x86/events/amd/ibs.c | 1 +
arch/x86/include/asm/mce.h | 1 +
arch/x86/kernel/cpu/mce/core.c | 11 +-
arch/x86/kernel/cpu/resctrl/monitor.c | 6 +
arch/x86/kvm/mmu/mmu.c | 6 -
arch/x86/kvm/vmx/nested.c | 7 +-
arch/x86/kvm/vmx/vmx.c | 3 +
arch/x86/kvm/x86.c | 4 +
block/bfq-iosched.c | 3 +
block/bio.c | 15 +-
block/blk-crypto.c | 2 +-
block/blk-merge.c | 18 +-
block/blk-throttle.c | 32 ++++
block/blk.h | 2 +
block/elevator.c | 3 +
block/mq-deadline.c | 2 +
certs/Makefile | 8 +
drivers/ata/libata-core.c | 2 +-
drivers/base/dd.c | 16 +-
drivers/base/firmware_loader/main.c | 20 +-
drivers/base/regmap/regmap.c | 2 +-
drivers/bcma/main.c | 6 +-
drivers/block/nbd.c | 10 +
drivers/char/tpm/tpm_ibmvtpm.c | 26 +--
drivers/char/tpm/tpm_ibmvtpm.h | 2 +-
drivers/clk/mvebu/kirkwood.c | 1 +
drivers/clocksource/sh_cmt.c | 30 +--
drivers/counter/104-quad-8.c | 5 +-
drivers/crypto/mxs-dcp.c | 45 ++++-
drivers/crypto/omap-aes.c | 8 +-
drivers/crypto/omap-des.c | 8 +-
drivers/crypto/omap-sham.c | 14 +-
.../qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c | 4 +-
.../qat/qat_c62xvf/adf_c62xvf_hw_data.c | 4 +-
.../crypto/qat/qat_common/adf_common_drv.h | 8 +-
drivers/crypto/qat/qat_common/adf_init.c | 5 +-
drivers/crypto/qat/qat_common/adf_isr.c | 7 +-
drivers/crypto/qat/qat_common/adf_pf2vf_msg.c | 3 +-
drivers/crypto/qat/qat_common/adf_vf2pf_msg.c | 12 +-
drivers/crypto/qat/qat_common/adf_vf_isr.c | 7 +-
.../qat_dh895xccvf/adf_dh895xccvf_hw_data.c | 4 +-
drivers/edac/i10nm_base.c | 6 +-
drivers/edac/mce_amd.c | 3 +
drivers/firmware/raspberrypi.c | 46 ++++-
drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c | 54 +++---
drivers/gpu/drm/drm_of.c | 6 +-
drivers/gpu/drm/gma500/oaktrail_lvds.c | 2 +-
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c | 10 +-
drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 68 ++++---
drivers/gpu/drm/msm/dsi/dsi.c | 6 +-
drivers/gpu/drm/mxsfb/mxsfb_drv.c | 3 +
drivers/gpu/drm/mxsfb/mxsfb_drv.h | 1 +
drivers/gpu/drm/mxsfb/mxsfb_kms.c | 40 ++++
drivers/gpu/drm/mxsfb/mxsfb_regs.h | 9 +
drivers/gpu/drm/panfrost/panfrost_device.c | 3 +-
drivers/i2c/busses/i2c-highlander.c | 2 +-
drivers/i2c/busses/i2c-hix5hd2.c | 4 +-
drivers/i2c/busses/i2c-iop3xx.c | 6 +-
drivers/i2c/busses/i2c-mt65xx.c | 2 +-
drivers/i2c/busses/i2c-s3c2410.c | 2 +-
drivers/i2c/busses/i2c-synquacer.c | 2 +-
drivers/i2c/busses/i2c-xlp9xx.c | 2 +-
drivers/irqchip/irq-gic-v3.c | 23 ++-
drivers/irqchip/irq-loongson-pch-pic.c | 19 +-
drivers/leds/leds-is31fl32xx.c | 1 +
drivers/leds/leds-lt3593.c | 5 +-
drivers/leds/trigger/ledtrig-audio.c | 37 +++-
drivers/md/bcache/super.c | 16 +-
drivers/media/i2c/tda1997x.c | 1 +
drivers/media/platform/coda/coda-bit.c | 18 +-
drivers/media/platform/qcom/venus/venc.c | 2 +
drivers/media/platform/rockchip/rga/rga-buf.c | 3 +-
drivers/media/platform/rockchip/rga/rga.c | 29 ++-
drivers/media/spi/cxd2880-spi.c | 7 +-
drivers/media/usb/dvb-usb/dvb-usb-i2c.c | 9 +-
drivers/media/usb/dvb-usb/dvb-usb-init.c | 2 +-
drivers/media/usb/dvb-usb/nova-t-usb2.c | 6 +-
drivers/media/usb/dvb-usb/vp702x.c | 12 +-
drivers/media/usb/em28xx/em28xx-input.c | 1 -
drivers/media/usb/go7007/go7007-driver.c | 26 ---
drivers/media/usb/go7007/go7007-usb.c | 2 +-
drivers/misc/lkdtm/core.c | 2 +-
drivers/mmc/host/dw_mmc.c | 1 +
drivers/mmc/host/moxart-mmc.c | 1 +
drivers/mmc/host/sdhci.c | 1 +
.../ethernet/aquantia/atlantic/aq_pci_func.c | 3 +
drivers/net/ethernet/google/gve/gve_adminq.c | 6 +-
.../ethernet/intel/i40e/i40e_virtchnl_pf.c | 23 ++-
drivers/net/ethernet/intel/ice/ice_main.c | 13 +-
.../ethernet/marvell/octeontx2/af/rvu_npc.c | 16 +-
.../net/ethernet/mellanox/mlx5/core/devlink.c | 52 ++++++
.../net/ethernet/mellanox/mlx5/core/en/fs.h | 6 -
.../net/ethernet/mellanox/mlx5/core/en_fs.c | 10 +-
.../net/ethernet/mellanox/mlx5/core/en_main.c | 15 ++
.../mellanox/mlx5/core/eswitch_offloads.c | 5 +-
.../ethernet/mellanox/mlx5/core/ipoib/ipoib.c | 18 +-
.../ethernet/pensando/ionic/ionic_devlink.c | 14 +-
drivers/net/ethernet/qualcomm/qca_spi.c | 2 +-
drivers/net/ethernet/qualcomm/qca_uart.c | 2 +-
drivers/net/wireless/ath/ath6kl/wmi.c | 4 +-
.../broadcom/brcm80211/brcmfmac/pcie.c | 2 +-
drivers/net/wireless/intel/iwlwifi/fw/acpi.c | 32 ++--
drivers/net/wireless/intel/iwlwifi/fw/acpi.h | 10 +-
.../wireless/intel/iwlwifi/fw/api/commands.h | 2 +-
.../wireless/intel/iwlwifi/fw/api/nvm-reg.h | 8 +-
.../net/wireless/intel/iwlwifi/fw/api/scan.h | 12 +-
drivers/net/wireless/intel/iwlwifi/fw/file.h | 2 +-
.../net/wireless/intel/iwlwifi/iwl-config.h | 2 +-
drivers/net/wireless/intel/iwlwifi/mvm/fw.c | 6 +-
.../net/wireless/intel/iwlwifi/mvm/mac-ctxt.c | 10 +-
.../net/wireless/intel/iwlwifi/mvm/mac80211.c | 13 +-
drivers/net/wireless/intel/iwlwifi/mvm/scan.c | 24 +--
drivers/net/wireless/intel/iwlwifi/pcie/drv.c | 1 +
drivers/net/wireless/rsi/rsi_91x_hal.c | 4 +-
drivers/net/wireless/rsi/rsi_91x_usb.c | 1 +
drivers/nvme/host/rdma.c | 4 +-
drivers/nvme/host/tcp.c | 4 +-
drivers/nvme/target/fabrics-cmd.c | 9 +-
drivers/pci/pci.c | 25 ++-
drivers/power/supply/axp288_fuel_gauge.c | 4 +-
drivers/power/supply/cw2015_battery.c | 4 +-
drivers/power/supply/max17042_battery.c | 2 +-
drivers/power/supply/smb347-charger.c | 10 +
drivers/regulator/tps65910-regulator.c | 10 +-
drivers/regulator/vctrl-regulator.c | 73 +++++---
drivers/s390/cio/css.c | 17 ++
drivers/s390/crypto/ap_bus.c | 25 +--
drivers/s390/crypto/ap_bus.h | 10 +-
drivers/s390/crypto/ap_queue.c | 20 +-
drivers/s390/crypto/zcrypt_ccamisc.c | 8 +-
drivers/soc/qcom/rpmhpd.c | 5 +-
drivers/soc/qcom/smsm.c | 11 +-
drivers/soc/rockchip/Kconfig | 4 +-
drivers/spi/spi-coldfire-qspi.c | 2 +-
drivers/spi/spi-davinci.c | 8 +-
drivers/spi/spi-fsl-dspi.c | 1 +
drivers/spi/spi-pic32.c | 1 +
drivers/spi/spi-sprd-adi.c | 2 +-
drivers/spi/spi-zynq-qspi.c | 8 +-
drivers/staging/clocking-wizard/Kconfig | 2 +-
.../media/atomisp/i2c/atomisp-mt9m114.c | 11 +-
drivers/tty/serial/fsl_lpuart.c | 2 +-
drivers/tty/tty_io.c | 4 +-
drivers/usb/dwc3/dwc3-meson-g12a.c | 2 +
drivers/usb/dwc3/dwc3-qcom.c | 4 +
drivers/usb/gadget/udc/at91_udc.c | 4 +-
drivers/usb/gadget/udc/bdc/bdc_core.c | 30 +--
drivers/usb/gadget/udc/mv_u3d_core.c | 19 +-
drivers/usb/gadget/udc/renesas_usb3.c | 17 +-
drivers/usb/gadget/udc/s3c2410_udc.c | 4 +
drivers/usb/host/ehci-orion.c | 8 +-
drivers/usb/host/ohci-tmio.c | 3 +
drivers/usb/phy/phy-fsl-usb.c | 2 +
drivers/usb/phy/phy-tahvo.c | 4 +-
drivers/usb/phy/phy-twl6030-usb.c | 5 +
drivers/video/backlight/pwm_bl.c | 54 +++---
drivers/video/fbdev/core/fbmem.c | 6 +
fs/cifs/cifs_unicode.c | 9 +-
fs/debugfs/file.c | 8 +-
fs/f2fs/file.c | 5 +-
fs/f2fs/super.c | 11 +-
fs/fcntl.c | 5 +-
fs/fuse/file.c | 9 +-
fs/gfs2/ops_fstype.c | 43 +++++
fs/gfs2/super.c | 61 +-----
fs/io_uring.c | 1 +
fs/iomap/swapfile.c | 6 +
fs/isofs/inode.c | 27 ++-
fs/isofs/isofs.h | 1 -
fs/isofs/joliet.c | 4 +-
fs/lockd/svclock.c | 2 +-
fs/nfsd/nfs4state.c | 4 +-
fs/udf/misc.c | 13 +-
fs/udf/super.c | 75 ++++----
fs/udf/udf_sb.h | 2 -
fs/udf/unicode.c | 4 +-
include/linux/blkdev.h | 16 ++
include/linux/energy_model.h | 16 ++
include/linux/hrtimer.h | 5 -
include/linux/local_lock_internal.h | 39 ++--
include/linux/lockdep.h | 15 +-
include/linux/lockdep_types.h | 18 +-
include/linux/mlx5/mlx5_ifc.h | 3 +-
include/linux/power/max17042_battery.h | 2 +-
include/linux/time64.h | 9 +-
include/soc/bcm2835/raspberrypi-firmware.h | 2 +
include/uapi/linux/bpf.h | 2 +-
kernel/bpf/verifier.c | 31 +--
kernel/cgroup/cpuset.c | 95 ++++++----
kernel/cpu_pm.c | 50 +++--
kernel/irq/timings.c | 2 +
kernel/locking/lockdep.c | 16 +-
kernel/locking/mutex.c | 15 +-
kernel/power/energy_model.c | 4 +-
kernel/rcu/tree.c | 4 +
kernel/rcu/tree_plugin.h | 1 +
kernel/rcu/tree_stall.h | 34 +++-
kernel/sched/core.c | 25 ++-
kernel/sched/deadline.c | 8 +-
kernel/sched/fair.c | 2 +-
kernel/sched/sched.h | 2 +
kernel/time/hrtimer.c | 92 ++++++---
kernel/time/posix-cpu-timers.c | 2 -
kernel/time/tick-internal.h | 3 +
lib/mpi/mpiutil.c | 2 +-
net/6lowpan/debugfs.c | 3 +-
net/bluetooth/cmtp/cmtp.h | 2 +-
net/bluetooth/hci_core.c | 14 ++
net/bluetooth/mgmt.c | 2 +-
net/bluetooth/sco.c | 11 +-
net/core/devlink.c | 36 ++--
net/ipv4/route.c | 48 +++--
net/ipv4/tcp_ipv4.c | 5 +-
net/ipv6/route.c | 5 +-
net/mac80211/tx.c | 4 +-
net/netlabel/netlabel_cipso_v4.c | 8 +-
net/sched/sch_cbq.c | 2 +-
samples/bpf/xdp_redirect_cpu_user.c | 2 +-
samples/pktgen/pktgen_sample04_many_flows.sh | 12 +-
.../pktgen/pktgen_sample05_flow_per_thread.sh | 12 +-
security/integrity/ima/Kconfig | 1 -
security/integrity/ima/ima_mok.c | 2 +-
sound/soc/codecs/rt5682-i2c.c | 20 ++
sound/soc/codecs/wcd9335.c | 23 ++-
sound/soc/intel/boards/kbl_da7219_max98927.c | 55 +-----
.../intel/common/soc-acpi-intel-cml-match.c | 2 +-
.../intel/common/soc-acpi-intel-kbl-match.c | 2 +-
sound/soc/intel/skylake/skl-topology.c | 25 +--
sound/soc/mediatek/mt8183/mt8183-afe-pcm.c | 43 +++--
tools/include/uapi/linux/bpf.h | 2 +-
tools/lib/bpf/Makefile | 10 +-
tools/lib/bpf/libbpf.c | 16 +-
.../selftests/bpf/progs/bpf_iter_tcp4.c | 2 +-
.../selftests/bpf/progs/test_core_autosize.c | 20 +-
263 files changed, 2072 insertions(+), 1182 deletions(-)
--
2.20.1
Backport LTS 5.10.64 patches from upstream.
Alexander Tsoy (1):
ALSA: usb-audio: Add registration quirk for JBL Quantum 800
Chunfeng Yun (4):
usb: gadget: tegra-xudc: fix the wrong mult value for HS isoc or intr
usb: mtu3: restore HS function when set SS/SSP
usb: mtu3: use @mult for HS isoc or intr
usb: mtu3: fix the wrong HS mult value
Esben Haabendal (1):
net: ll_temac: Remove left-over debug message
Hayes Wang (1):
Revert "r8169: avoid link-up interrupt issue on RTL8106e if user
enables ASPM"
Jiri Slaby (1):
tty: drop termiox user definitions
Marek Behún (1):
PCI: Call Max Payload Size-related fixup quirks early
Mathias Nyman (2):
xhci: fix even more unsafe memory usage in xhci tracing
xhci: fix unsafe memory usage in xhci tracing
Ming Lei (3):
blk-mq: fix kernel panic during iterating over flush request
blk-mq: fix is_flush_rq
blk-mq: clearing flush request reference in tags->rqs[]
Pablo Neira Ayuso (2):
netfilter: nf_tables: initialize set before expression setup
netfilter: nftables: clone set element expression template
Paul Gortmaker (1):
x86/reboot: Limit Dell Optiplex 990 quirk to early BIOS versions
Randy Dunlap (2):
net: kcov: don't select SKB_EXTENSIONS when there is no NET
net: linux/skbuff.h: combine SKB_EXTENSIONS + KCOV handling
Suravee Suthikulpanit (1):
x86/events/amd/iommu: Fix invalid Perf result due to IOMMU PMC
power-gating
Tom Rix (1):
USB: serial: mos7720: improve OOM-handling in read_mos_reg()
Vignesh Raghavendra (1):
serial: 8250: 8250_omap: Fix unused variable warning
Yoshihiro Shimoda (1):
usb: host: xhci-rcar: Don't reload firmware after the completion
arch/x86/events/amd/iommu.c | 47 ++++++------
arch/x86/kernel/reboot.c | 3 +-
block/blk-core.c | 1 -
block/blk-flush.c | 13 ++++
block/blk-mq.c | 37 +++++++++-
block/blk.h | 6 +-
drivers/net/ethernet/realtek/r8169_main.c | 1 +
drivers/net/ethernet/xilinx/ll_temac_main.c | 4 +-
drivers/pci/quirks.c | 12 +--
drivers/tty/serial/8250/8250_omap.c | 26 +++----
drivers/usb/gadget/udc/tegra-xudc.c | 4 +-
drivers/usb/host/xhci-debugfs.c | 14 +++-
drivers/usb/host/xhci-rcar.c | 7 ++
drivers/usb/host/xhci-ring.c | 3 +-
drivers/usb/host/xhci-trace.h | 26 ++++---
drivers/usb/host/xhci.h | 73 +++++++++---------
drivers/usb/mtu3/mtu3_core.c | 4 +-
drivers/usb/mtu3/mtu3_gadget.c | 6 +-
drivers/usb/serial/mos7720.c | 4 +-
include/linux/skbuff.h | 4 +-
include/uapi/linux/termios.h | 15 ----
lib/Kconfig.debug | 2 +-
net/netfilter/nf_tables_api.c | 82 +++++++++++++--------
sound/usb/quirks.c | 1 +
24 files changed, 234 insertions(+), 161 deletions(-)
--
2.20.1

19 Oct '21
livepatch: Add klp_{register,unregister}_patch for stop_machine model.
Yang Jihong (2):
livepatch: Add klp_{register,unregister}_patch for stop_machine model
livepatch: Adapt livepatch-sample for stop_machine model
include/linux/livepatch.h | 15 +-
kernel/livepatch/core.c | 257 ++++++++++++++++++++-------
samples/livepatch/livepatch-sample.c | 37 ++++
3 files changed, 239 insertions(+), 70 deletions(-)
--
2.20.1
Backport LTS 5.10.63 patches from upstream.
Al Viro (1):
new helper: inode_wrong_type()
Amir Goldstein (1):
fuse: fix illegal access to inode with reused nodeid
Andy Shevchenko (1):
spi: Switch to signed types for *_native_cs SPI controller fields
Christoph Hellwig (1):
cryptoloop: add a deprecation warning
Eric Biggers (4):
fscrypt: add fscrypt_symlink_getattr() for computing st_size
ext4: report correct st_size for encrypted symlinks
f2fs: report correct st_size for encrypted symlinks
ubifs: report correct st_size for encrypted symlinks
Greg Kroah-Hartman (3):
Revert "ucounts: Increase ucounts reference counter before the
security hook"
Revert "cred: add missing return error code when set_cred_ucounts()
failed"
Revert "Add a reference to ucounts for each cred"
Harini Katakam (1):
net: macb: Add a NULL check on desc_ptp
Johnathon Clark (1):
ALSA: hda/realtek: Quirk for HP Spectre x360 14 amp setup
Kim Phillips (2):
perf/x86/amd/ibs: Work around erratum #1197
perf/x86/amd/power: Assign pmu.module
Krzysztof Hałasa (1):
gpu: ipu-v3: Fix i.MX IPU-v3 offset calculations for (semi)planar U/V
formats
Maciej Falkowski (1):
ARM: OMAP1: ams-delta: remove unused function ams_delta_camera_power
Matthieu Baerts (1):
static_call: Fix unused variable warn w/o MODULE
Pavel Skripkin (1):
media: stkwebcam: fix memory leak in stk_camera_probe
Randy Dunlap (1):
xtensa: fix kconfig unmet dependency warning for HAVE_FUTEX_CMPXCHG
Sai Krishna Potthuri (1):
reset: reset-zynqmp: Fixed the argument data type
Shai Malin (2):
qed: Fix the VF msix vectors flow
qede: Fix memset corruption
Takashi Iwai (1):
ALSA: hda/realtek: Workaround for conflicting SSID on ASUS ROG Strix
G17
Tuo Li (1):
ceph: fix possible null-pointer dereference in ceph_mdsmap_decode()
Xiaoyao Li (1):
perf/x86/intel/pt: Fix mask of num_address_ranges
Zubin Mithra (1):
ALSA: pcm: fix divide error in snd_pcm_lib_ioctl
arch/arm/mach-omap1/board-ams-delta.c | 14 -------
arch/x86/events/amd/ibs.c | 8 ++++
arch/x86/events/amd/power.c | 1 +
arch/x86/events/intel/pt.c | 2 +-
arch/xtensa/Kconfig | 2 +-
drivers/block/Kconfig | 4 +-
drivers/block/cryptoloop.c | 2 +
drivers/gpu/ipu-v3/ipu-cpmem.c | 30 ++++++-------
drivers/media/usb/stkwebcam/stk-webcam.c | 6 ++-
drivers/net/ethernet/cadence/macb_ptp.c | 11 ++++-
drivers/net/ethernet/qlogic/qed/qed_main.c | 7 +++-
drivers/net/ethernet/qlogic/qede/qede_main.c | 2 +-
drivers/reset/reset-zynqmp.c | 3 +-
fs/9p/vfs_inode.c | 4 +-
fs/9p/vfs_inode_dotl.c | 4 +-
fs/ceph/mdsmap.c | 8 ++--
fs/cifs/inode.c | 5 +--
fs/crypto/hooks.c | 44 ++++++++++++++++++++
fs/exec.c | 4 --
fs/ext4/symlink.c | 11 ++++-
fs/f2fs/namei.c | 11 ++++-
fs/fuse/dir.c | 6 +--
fs/fuse/fuse_i.h | 7 ++++
fs/fuse/inode.c | 4 +-
fs/fuse/readdir.c | 7 +++-
fs/nfs/inode.c | 6 +--
fs/nfsd/nfsproc.c | 2 +-
fs/overlayfs/namei.c | 4 +-
fs/ubifs/file.c | 12 +++++-
include/linux/cred.h | 2 -
include/linux/fs.h | 5 +++
include/linux/fscrypt.h | 7 ++++
include/linux/spi/spi.h | 4 +-
include/linux/user_namespace.h | 4 --
kernel/cred.c | 41 ------------------
kernel/fork.c | 6 ---
kernel/static_call.c | 4 +-
kernel/sys.c | 12 ------
kernel/ucount.c | 40 ++----------------
kernel/user_namespace.c | 3 --
sound/core/pcm_lib.c | 2 +-
sound/pci/hda/patch_realtek.c | 11 +++++
42 files changed, 193 insertions(+), 179 deletions(-)
--
2.20.1
Backport LTS 5.10.62 patches from upstream.
Aaron Ma (1):
igc: fix page fault when thunderbolt is unplugged
Adam Ford (1):
clk: renesas: rcar-usb2-clock-sel: Fix kernel NULL pointer dereference
Alexey Gladkov (1):
ucounts: Increase ucounts reference counter before the security hook
Andrey Ignatov (1):
rtnetlink: Return correct error on changing device netns
Ben Skeggs (2):
drm/nouveau/disp: power down unused DP links during init
drm/nouveau/kms/nv50: workaround EFI GOP window channel format
differences
Benjamin Berg (2):
usb: typec: ucsi: acpi: Always decode connector change information
usb: typec: ucsi: Work around PPM losing change information
Bjorn Andersson (1):
usb: typec: ucsi: Clear pending after acking connector change
Christophe JAILLET (1):
xgene-v2: Fix a resource leak in the error handling path of
'xge_probe()'
Colin Ian King (1):
perf/x86/intel/uncore: Fix integer overflow on 23 bit left shift of a
u32
DENG Qingfang (1):
net: dsa: mt7530: fix VLAN traffic leaks again
Daniel Borkmann (1):
bpf: Fix ringbuf helper function compatibility
Davide Caratti (1):
net/sched: ets: fix crash when flipping from 'strict' to 'quantum'
Denis Efremov (1):
Revert "floppy: reintroduce O_NDELAY fix"
Derek Fang (1):
ASoC: rt5682: Adjust headset volume button threshold
Dinghao Liu (1):
RDMA/bnxt_re: Remove unpaired rtnl unlock in bnxt_re_dev_init()
Eric Dumazet (2):
ipv6: use siphash in rt6_exception_hash()
ipv4: use siphash instead of Jenkins in fnhe_hashfun()
Filipe Manana (1):
btrfs: fix race between marking inode needs to be logged and log
syncing
Florian Westphal (1):
netfilter: conntrack: collect all entries in one cycle
Frieder Schrempf (1):
mtd: spinand: Fix incorrect parameters for on-die ECC
Gal Pressman (1):
RDMA/efa: Free IRQ vectors on error flow
Gerd Rausch (1):
net/rds: dma_map_sg is entitled to merge entries
Guenter Roeck (1):
ARC: Fix CONFIG_STACKDEPOT
Guo Ren (2):
riscv: Fixup wrong ftrace remove cflag
riscv: Fixup patch_text panic in ftrace
Helge Deller (1):
Revert "parisc: Add assembly implementations for memset, strlen,
strcpy, strncpy and strcat"
Jacob Keller (1):
ice: do not abort devlink info if board identifier can't be found
Jerome Brunet (1):
usb: gadget: u_audio: fix race condition on endpoint stop
Johan Hovold (1):
Revert "USB: serial: ch341: fix character loss at high transfer rates"
Johannes Berg (1):
iwlwifi: pnvm: accept multiple HW-type TLVs
Kees Cook (1):
lkdtm: Enable DOUBLE_FAULT on all architectures
Kenneth Feng (2):
Revert "drm/amd/pm: fix workload mismatch on vega10"
drm/amd/pm: change the workload type for some cards
Li Jinlin (1):
scsi: core: Fix hang of freezing queue between blocking and running
device
Linus Torvalds (2):
pipe: avoid unnecessary EPOLLET wakeups under normal loads
pipe: do FASYNC notifications for every pipe IO, not just state
changes
Mark Brown (2):
ASoC: component: Remove misplaced prefix handling in pin control
functions
net: mscc: Fix non-GPL export of regmap APIs
Mark Yacoub (1):
drm: Copy drm_wait_vblank to user before returning
Mathieu Desnoyers (1):
tracepoint: Use rcu get state and cond sync for static call updates
Matthew Brost (1):
drm/i915: Fix syncmap memory leak
Maxim Kiselev (1):
net: marvell: fix MVNETA_TX_IN_PRGRS bit number
Michael S. Tsirkin (1):
tools/virtio: fix build
Michał Mirosław (1):
opp: remove WARN when no valid OPPs remain
Michel Dänzer (1):
drm/amdgpu: Cancel delayed work when GFXOFF is disabled
Miklos Szeredi (1):
ovl: fix uninitialized pointer read in ovl_lookup_real_one()
Ming Lei (2):
blk-iocost: fix lockdep warning on blkcg->lock
blk-mq: don't grab rq's refcount in blk_mq_check_expired()
Naresh Kumar PBS (1):
RDMA/bnxt_re: Add missing spin lock initialization
Neeraj Upadhyay (1):
vringh: Use wiov->used to check for read/write desc order
Parav Pandit (2):
virtio: Improve vq->broken access to avoid any compiler optimization
virtio_pci: Support surprise removal of virtio pci device
Paul E. McKenney (5):
srcu: Provide internal interface to start a Tree SRCU grace period
srcu: Provide polling interfaces for Tree SRCU grace periods
srcu: Provide internal interface to start a Tiny SRCU grace period
srcu: Make Tiny SRCU use multi-bit grace-period counter
srcu: Provide polling interfaces for Tiny SRCU grace periods
Pauli Virtanen (1):
Bluetooth: btusb: check conditions before enabling USB ALT 3 for WBS
Peter Collingbourne (1):
net: don't unconditionally copy_from_user a struct ifreq for socket
ioctls
Petko Manolov (1):
net: usb: pegasus: fixes of set_register(s) return value evaluation;
Petr Vorel (1):
arm64: dts: qcom: msm8994-angler: Fix gpio-reserved-ranges 85-88
Qu Wenruo (1):
Revert "btrfs: compression: don't try to compress if we don't have
enough pages"
Rahul Lakkireddy (1):
cxgb4: dont touch blocked freelist bitmap after free
Richard Guy Briggs (1):
audit: move put_tree() to avoid trim_trees refcount underflow and UAF
Rob Herring (1):
dt-bindings: sifive-l2-cache: Fix 'select' matching
Sasha Neftin (2):
e1000e: Fix the max snoop/no-snoop latency for 10M
e1000e: Do not take care about recovery NVM checksum
Shai Malin (2):
qed: qed ll2 race condition fixes
qed: Fix null-pointer dereference in qed_rdma_create_qp()
Shreyansh Chouhan (1):
ip_gre: add validation for csum_start
Stefan Mätje (1):
can: usb: esd_usb2: esd_usb2_rx_event(): fix the interchange of the
CAN RX and TX error counters
Takashi Iwai (1):
usb: renesas-xhci: Prefer firmware loading on unknown ROM state
Thara Gopinath (1):
cpufreq: blocklist Qualcomm sm8150 in cpufreq-dt-platdev
Thinh Nguyen (1):
usb: dwc3: gadget: Fix dwc3_calc_trbs_left()
Toshiki Nishioka (1):
igc: Use num_tx_queues when iterating over tx_ring queue
Tuo Li (1):
IB/hfi1: Fix possible null-pointer dereference in
_extend_sdma_tx_descs()
Ulf Hansson (1):
Revert "mmc: sdhci-iproc: Set SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN on
BCM2711"
Vincent Chen (1):
riscv: Ensure the value of FP registers in the core dump file is up to
date
Vincent Whitchurch (1):
virtio_vdpa: reject invalid vq indices
Wesley Cheng (1):
usb: dwc3: gadget: Stop EP0 transfers during pullup disable
Wong Vee Khee (1):
net: stmmac: fix kernel panic due to NULL pointer dereference of
plat->est
Xiaoliang Yang (1):
net: stmmac: add mutex lock to protect est parameters
Xin Long (1):
tipc: call tipc_wait_for_connect only when dlen is not 0
Xiubo Li (1):
ceph: correctly handle releasing an embedded cap flush
Yonghong Song (2):
bpf: Fix NULL pointer dereference in bpf_get_local_storage() helper
bpf: Fix potentially incorrect results with bpf_get_local_storage()
Zhengjun Zhang (1):
USB: serial: option: add new VID/PID to support Fibocom FG150
.../bindings/riscv/sifive-l2-cache.yaml | 6 +-
arch/arc/kernel/vmlinux.lds.S | 2 +
.../boot/dts/qcom/msm8994-angler-rev-101.dts | 4 +
arch/parisc/include/asm/string.h | 15 --
arch/parisc/kernel/parisc_ksyms.c | 4 -
arch/parisc/lib/Makefile | 4 +-
arch/parisc/lib/memset.c | 72 ++++++++++
arch/parisc/lib/string.S | 136 ------------------
arch/riscv/kernel/Makefile | 5 +-
arch/riscv/kernel/ptrace.c | 4 +
arch/riscv/mm/Makefile | 3 +-
arch/x86/events/intel/uncore_snbep.c | 2 +-
block/blk-iocost.c | 8 +-
block/blk-mq.c | 30 +---
drivers/block/floppy.c | 30 ++--
drivers/bluetooth/btusb.c | 22 +--
drivers/clk/renesas/rcar-usb2-clock-sel.c | 2 +-
drivers/cpufreq/cpufreq-dt-platdev.c | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 11 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c | 36 +++--
.../drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c | 15 +-
drivers/gpu/drm/drm_ioc32.c | 4 +-
drivers/gpu/drm/i915/gt/intel_timeline.c | 8 ++
drivers/gpu/drm/nouveau/dispnv50/disp.c | 27 ++++
drivers/gpu/drm/nouveau/dispnv50/head.c | 13 +-
drivers/gpu/drm/nouveau/dispnv50/head.h | 1 +
drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c | 2 +-
drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.h | 1 +
.../gpu/drm/nouveau/nvkm/engine/disp/outp.c | 9 ++
drivers/infiniband/hw/bnxt_re/ib_verbs.c | 1 +
drivers/infiniband/hw/bnxt_re/main.c | 1 -
drivers/infiniband/hw/efa/efa_main.c | 1 +
drivers/infiniband/hw/hfi1/sdma.c | 9 +-
drivers/misc/lkdtm/core.c | 2 -
drivers/mmc/host/sdhci-iproc.c | 3 +-
drivers/mtd/nand/spi/core.c | 6 +-
drivers/mtd/nand/spi/macronix.c | 6 +-
drivers/mtd/nand/spi/toshiba.c | 6 +-
drivers/net/can/usb/esd_usb2.c | 4 +-
drivers/net/dsa/mt7530.c | 5 +-
drivers/net/ethernet/apm/xgene-v2/main.c | 4 +-
.../net/ethernet/chelsio/cxgb4/cxgb4_main.c | 7 +-
drivers/net/ethernet/intel/e1000e/ich8lan.c | 32 +++--
drivers/net/ethernet/intel/e1000e/ich8lan.h | 3 +
drivers/net/ethernet/intel/ice/ice_devlink.c | 4 +-
drivers/net/ethernet/intel/igc/igc_main.c | 36 +++--
drivers/net/ethernet/intel/igc/igc_ptp.c | 3 +-
drivers/net/ethernet/marvell/mvneta.c | 2 +-
drivers/net/ethernet/mscc/ocelot_io.c | 16 +--
drivers/net/ethernet/qlogic/qed/qed_ll2.c | 20 +++
drivers/net/ethernet/qlogic/qed/qed_rdma.c | 3 +-
.../net/ethernet/stmicro/stmmac/stmmac_tc.c | 22 ++-
drivers/net/usb/pegasus.c | 4 +-
drivers/net/wireless/intel/iwlwifi/fw/pnvm.c | 25 ++--
drivers/opp/of.c | 5 +-
drivers/scsi/scsi_sysfs.c | 9 +-
drivers/usb/dwc3/gadget.c | 23 ++-
drivers/usb/gadget/function/u_audio.c | 5 +-
drivers/usb/host/xhci-pci-renesas.c | 35 +++--
drivers/usb/serial/ch341.c | 1 -
drivers/usb/serial/option.c | 2 +
drivers/usb/typec/ucsi/ucsi.c | 125 +++++++++++++---
drivers/usb/typec/ucsi/ucsi.h | 2 +
drivers/usb/typec/ucsi/ucsi_acpi.c | 5 +-
drivers/vhost/vringh.c | 2 +-
drivers/virtio/virtio_pci_common.c | 7 +
drivers/virtio/virtio_ring.c | 6 +-
drivers/virtio/virtio_vdpa.c | 3 +
fs/btrfs/btrfs_inode.h | 15 ++
fs/btrfs/file.c | 11 +-
fs/btrfs/inode.c | 6 +-
fs/btrfs/transaction.h | 2 +-
fs/ceph/caps.c | 21 +--
fs/ceph/mds_client.c | 7 +-
fs/ceph/snap.c | 3 +
fs/ceph/super.h | 3 +-
fs/overlayfs/export.c | 2 +-
fs/pipe.c | 33 +++--
include/linux/bpf-cgroup.h | 4 +-
include/linux/bpf.h | 4 +-
include/linux/netdevice.h | 4 +
include/linux/pipe_fs_i.h | 2 +
include/linux/rcupdate.h | 2 +
include/linux/srcu.h | 3 +
include/linux/srcutiny.h | 7 +-
include/linux/stmmac.h | 1 +
kernel/audit_tree.c | 2 +-
kernel/bpf/helpers.c | 4 +-
kernel/bpf/verifier.c | 8 +-
kernel/cred.c | 12 +-
kernel/rcu/srcutiny.c | 77 ++++++++--
kernel/rcu/srcutree.c | 127 ++++++++++++----
kernel/tracepoint.c | 81 +++++++++--
net/core/rtnetlink.c | 3 +-
net/ipv4/ip_gre.c | 2 +
net/ipv4/route.c | 12 +-
net/ipv6/route.c | 20 ++-
net/netfilter/nf_conntrack_core.c | 71 +++------
net/rds/ib_frmr.c | 4 +-
net/sched/sch_ets.c | 7 +
net/socket.c | 6 +-
net/tipc/socket.c | 2 +-
sound/soc/codecs/rt5682.c | 1 +
sound/soc/soc-component.c | 63 ++++----
tools/virtio/Makefile | 3 +-
tools/virtio/linux/spinlock.h | 56 ++++++++
tools/virtio/linux/virtio.h | 2 +
107 files changed, 1025 insertions(+), 600 deletions(-)
create mode 100644 arch/parisc/lib/memset.c
delete mode 100644 arch/parisc/lib/string.S
create mode 100644 tools/virtio/linux/spinlock.h
--
2.20.1

[PATCH openEuler-5.10] ARM: spectre-v2: turn off the mitigation via boot cmdline param
by Zheng Zengkai 19 Oct '21
From: "GONG, Ruiqi" <gongruiqi1(a)huawei.com>
hulk inclusion
category: feature
feature: switch of spectre mitigation
bugzilla: 180851 https://gitee.com/openeuler/kernel/issues/I4EF1O
-------------------------------------------------
We enable spectre mitigation by default for ARM32, which may cause a
performance regression. To offer an option to turn this feature off,
implement a cmdline parameter 'nospectre_v2', compatible with mainline,
which sets up a switch that skips invalidating the BTB/icache for
Cortex-A9/A15 during context switching and user aborts.
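For reference, a minimal sketch of how the new parameter could be added to the
kernel command line on a GRUB-based system; the config file path and the
regeneration command are assumptions that vary by distribution, so this is an
illustration only and not part of the patch:
  # Illustrative only: append 'nospectre_v2' to the kernel command line.
  sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 nospectre_v2"/' /etc/default/grub
  grub2-mkconfig -o /boot/grub2/grub.cfg   # or 'update-grub' on Debian-based systems
  reboot
  # After reboot, dmesg should contain "Spectre v2: hardening is disabled".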
Signed-off-by: GONG, Ruiqi <gongruiqi1(a)huawei.com>
Cc: Hanjun Guo <guohanjun(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Chen Jun <chenjun102(a)huawei.com>
---
arch/arm/include/asm/system_misc.h | 1 +
arch/arm/mm/proc-v7-bugs.c | 18 ++++++++++++++++++
arch/arm/mm/proc-v7.S | 15 +++++++++++++++
3 files changed, 34 insertions(+)
diff --git a/arch/arm/include/asm/system_misc.h b/arch/arm/include/asm/system_misc.h
index 66f6a3ae68d2..a7ac0c9f38e5 100644
--- a/arch/arm/include/asm/system_misc.h
+++ b/arch/arm/include/asm/system_misc.h
@@ -37,6 +37,7 @@ static inline void harden_branch_predictor(void)
#define UDBG_BUS (1 << 4)
extern unsigned int user_debug;
+extern int nospectre_v2;
#endif /* !__ASSEMBLY__ */
diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
index 114c05ab4dd9..d7750cddc334 100644
--- a/arch/arm/mm/proc-v7-bugs.c
+++ b/arch/arm/mm/proc-v7-bugs.c
@@ -8,6 +8,19 @@
#include <asm/proc-fns.h>
#include <asm/system_misc.h>
+/*
+ * 32-bit ARM spectre hardening, enabled by default, can be disabled via boot
+ * cmdline param 'nospectre_v2' to avoid performance regression.
+ */
+int nospectre_v2 __read_mostly;
+
+static int __init nospectre_v2_setup(char *str)
+{
+ nospectre_v2 = 1;
+ return 0;
+}
+early_param("nospectre_v2", nospectre_v2_setup);
+
#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);
@@ -41,6 +54,11 @@ static void cpu_v7_spectre_init(void)
const char *spectre_v2_method = NULL;
int cpu = smp_processor_id();
+ if (nospectre_v2) {
+ pr_info_once("Spectre v2: hardening is disabled\n");
+ return;
+ }
+
if (per_cpu(harden_branch_predictor_fn, cpu))
return;
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index 28c9d32fa99a..a59ddfd7a179 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -111,17 +111,32 @@ ENTRY(cpu_v7_hvc_switch_mm)
b cpu_v7_switch_mm
ENDPROC(cpu_v7_hvc_switch_mm)
#endif
+
+.globl nospectre_v2
ENTRY(cpu_v7_iciallu_switch_mm)
+ adr r3, 3f
+ ldr r3, [r3]
+ cmp r3, #1
+ beq 1f
mov r3, #0
mcr p15, 0, r3, c7, c5, 0 @ ICIALLU
+1:
b cpu_v7_switch_mm
ENDPROC(cpu_v7_iciallu_switch_mm)
ENTRY(cpu_v7_bpiall_switch_mm)
+ adr r3, 3f
+ ldr r3, [r3]
+ cmp r3, #1
+ beq 1f
mov r3, #0
mcr p15, 0, r3, c7, c5, 6 @ flush BTAC/BTB
+1:
b cpu_v7_switch_mm
ENDPROC(cpu_v7_bpiall_switch_mm)
+ .align
+3: .long nospectre_v2
+
string cpu_v7_name, "ARMv7 Processor"
.align
--
2.20.1

19 Oct '21
Backport bugfix and enhancement patches for mm/fs/livepatch/sched.
Al Viro (2):
switch file_open_root() to struct path
take LOOKUP_{ROOT,ROOT_GRABBED,JUMPED} out of LOOKUP_... space
Chen Jun (1):
mm: Fix the uninitialized use in overcommit_policy_handler
Guoqing Jiang (1):
md: revert io stats accounting
Kefeng Wang (1):
once: Fix panic when module unload
Leah Rumancik (1):
ext4: wipe ext4_dir_entry2 upon file deletion
Li Hua (2):
sched/idle: Optimize the loop time algorithm to reduce multicore
disturb
sched/idle: Reported an error when an illegal negative value is passed
Vasily Averin (7):
memcg: enable accounting for pids in nested pid namespaces
memcg: enable accounting for mnt_cache entries
memcg: enable accounting for fasync_cache
memcg: enable accounting for new namesapces and struct nsproxy
memcg: enable accounting for signals
memcg: enable accounting for posix_timers_cache slab
memcg: enable accounting for ldt_struct objects
Vignesh Raghavendra (1):
serial: 8250: 8250_omap: Fix possible array out of bounds access
Yang Jihong (1):
perf annotate: Add itrace options support
Yang Yang (1):
kyber: introduce kyber_depth_updated()
Ye Bin (1):
ext4: fix potential uninitialized access to retval in kmmpd
Ye Weihua (9):
livepatch: Add state describe for force
livepatch: checks only if the replaced instruction is on the stack
livepatch/arm64: only check stack top
livepatch/arm: only check stack top
livepatch/ppc32: only check stack top
livepatch/ppc64: only check stack top
livepatch/x86: only check stack top
livepatch: move arch_klp_mem_recycle after the return value judgment
livepatch: Fix compile warnning
Yu Jiahua (1):
sched: Aware multi-core system for optimize loadtracking
Yu Kuai (2):
blk-mq: clear active_queues before clearing BLK_MQ_F_TAG_QUEUE_SHARED
blk-mq: fix divide by zero crash in tg_may_dispatch()
Yutian Yang (1):
memcg: charge fs_context and legacy_fs_context
Zhang Yi (5):
ext4: move inode eio simulation behind io completeion
ext4: make the updating inode data procedure atomic
ext4: factor out ext4_fill_raw_inode()
ext4: move ext4_fill_raw_inode() related functions
ext4: prevent getting empty inode buffer
Zheng Zucheng (1):
sysctl: Refactor IAS framework
Documentation/filesystems/path-lookup.rst | 6 +-
Documentation/filesystems/porting.rst | 9 +
arch/arm/kernel/livepatch.c | 221 +++++++++++--
arch/arm64/kernel/livepatch.c | 209 +++++++++++--
arch/powerpc/kernel/livepatch_32.c | 209 +++++++++++--
arch/powerpc/kernel/livepatch_64.c | 227 ++++++++++----
arch/um/drivers/mconsole_kern.c | 2 +-
arch/x86/kernel/ldt.c | 6 +-
arch/x86/kernel/livepatch.c | 347 +++++++++++++++------
block/blk-mq.c | 6 +-
block/blk-sysfs.c | 7 +
block/blk-throttle.c | 37 ++-
block/kyber-iosched.c | 29 +-
drivers/md/md.c | 45 ---
drivers/md/md.h | 1 -
drivers/tty/serial/8250/8250_omap.c | 1 +
fs/coredump.c | 4 +-
fs/ext4/inode.c | 332 +++++++++++---------
fs/ext4/mmp.c | 2 +-
fs/ext4/namei.c | 24 +-
fs/fcntl.c | 3 +-
fs/fhandle.c | 2 +-
fs/fs_context.c | 4 +-
fs/internal.h | 2 +-
fs/kernel_read_file.c | 2 +-
fs/namei.c | 60 ++--
fs/namespace.c | 7 +-
fs/nfs/nfstrace.h | 4 -
fs/open.c | 4 +-
fs/proc/proc_sysctl.c | 2 +-
include/linux/blkdev.h | 1 +
include/linux/fs.h | 9 +-
include/linux/kernel.h | 4 +-
include/linux/livepatch.h | 4 +
include/linux/namei.h | 3 -
include/linux/once.h | 4 +-
include/linux/sched/sysctl.h | 8 +-
init/Kconfig | 36 ++-
ipc/namespace.c | 2 +-
kernel/cgroup/namespace.c | 2 +-
kernel/livepatch/core.c | 2 +-
kernel/nsproxy.c | 2 +-
kernel/pid_namespace.c | 5 +-
kernel/sched/fair.c | 86 ++---
kernel/sched/idle.c | 48 ++-
kernel/signal.c | 2 +-
kernel/sysctl.c | 84 ++---
kernel/time/namespace.c | 4 +-
kernel/time/posix-timers.c | 4 +-
kernel/user_namespace.c | 2 +-
kernel/usermode_driver.c | 2 +-
lib/once.c | 11 +-
mm/util.c | 4 +-
security/integrity/ima/ima_digest_list.c | 2 +-
tools/perf/Documentation/perf-annotate.txt | 7 +
tools/perf/builtin-annotate.c | 11 +
56 files changed, 1494 insertions(+), 669 deletions(-)
--
2.20.1

19 Oct '21
From: Yang Yingliang <yangyingliang(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 179898 https://gitee.com/openeuler/kernel/issues/I4DDEL
CVE: CVE-2018-12928
---------------------------
A fuzz test triggers a null-ptr-deref in hfs_find_init():
[ 107.092729] hfs: continuing without an alternate MDB
[ 107.097632] general protection fault, probably for non-canonical address 0xdffffc0000000008: 0000 [#1] SMP KASAN PTI
[ 107.104679] KASAN: null-ptr-deref in range [0x0000000000000040-0x0000000000000047]
[ 107.109100] CPU: 0 PID: 379 Comm: hfs_inject Not tainted 5.7.0-rc7-00001-g24627f5f2973 #897
[ 107.114142] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
[ 107.121095] RIP: 0010:hfs_find_init+0x72/0x170
[ 107.123609] Code: c1 ea 03 80 3c 02 00 0f 85 e6 00 00 00 4c 8d 65 40 48 c7 43 18 00 00 00 00 48 b8 00 00 00 00 00 fc ff df 4c 89 e2 48 c1 ea 03 <0f> b6 04 02 84 c0 74 08 3c 03 0f 8e a5 00 00 00 8b 45 40 be c0 0c
[ 107.134660] RSP: 0018:ffff88810291f3f8 EFLAGS: 00010202
[ 107.137897] RAX: dffffc0000000000 RBX: ffff88810291f468 RCX: 1ffff110175cdf05
[ 107.141874] RDX: 0000000000000008 RSI: ffff88810291f468 RDI: ffff88810291f480
[ 107.145844] RBP: 0000000000000000 R08: 0000000000000000 R09: ffffed1020381013
[ 107.149431] R10: ffff88810291f500 R11: ffffed1020381012 R12: 0000000000000040
[ 107.152315] R13: 0000000000000000 R14: ffff888101c0814a R15: ffff88810291f468
[ 107.155464] FS: 00000000009ea880(0000) GS:ffff88810c600000(0000) knlGS:0000000000000000
[ 107.159795] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 107.162987] CR2: 00005605a19dd284 CR3: 0000000103a0c006 CR4: 0000000000020ef0
[ 107.166665] Call Trace:
[ 107.167969] ? find_held_lock+0x33/0x1c0
[ 107.169972] hfs_ext_read_extent+0x16b/0xb00
[ 107.172092] ? create_page_buffers+0x14e/0x1b0
[ 107.174303] ? hfs_free_extents+0x280/0x280
[ 107.176437] ? lock_downgrade+0x730/0x730
[ 107.178272] hfs_get_block+0x496/0x8a0
[ 107.179972] block_read_full_page+0x241/0x8d0
[ 107.181971] ? hfs_extend_file+0xae0/0xae0
[ 107.183814] ? end_buffer_async_read_io+0x10/0x10
[ 107.185954] ? add_to_page_cache_lru+0x13f/0x1f0
[ 107.188006] ? add_to_page_cache_locked+0x10/0x10
[ 107.190175] do_read_cache_page+0xc6a/0x1180
[ 107.192096] ? generic_file_read_iter+0x4c0/0x4c0
[ 107.194234] ? hfs_btree_open+0x408/0x1000
[ 107.196068] ? lock_downgrade+0x730/0x730
[ 107.197926] ? wake_bit_function+0x180/0x180
[ 107.199845] ? lockdep_init_map_waits+0x267/0x7c0
[ 107.201895] hfs_btree_open+0x455/0x1000
[ 107.203479] hfs_mdb_get+0x122c/0x1ae8
[ 107.205065] ? hfs_mdb_put+0x350/0x350
[ 107.206590] ? queue_work_node+0x260/0x260
[ 107.208309] ? rcu_read_lock_sched_held+0xa1/0xd0
[ 107.210227] ? lockdep_init_map_waits+0x267/0x7c0
[ 107.212144] ? lockdep_init_map_waits+0x267/0x7c0
[ 107.213979] hfs_fill_super+0x9ba/0x1280
[ 107.215444] ? bdev_name.isra.9+0xf1/0x2b0
[ 107.217028] ? hfs_remount+0x190/0x190
[ 107.218428] ? pointer+0x5da/0x710
[ 107.219745] ? file_dentry_name+0xf0/0xf0
[ 107.221262] ? mount_bdev+0xd1/0x330
[ 107.222592] ? vsnprintf+0x7bd/0x1250
[ 107.224007] ? pointer+0x710/0x710
[ 107.225332] ? down_write+0xe5/0x160
[ 107.226698] ? hfs_remount+0x190/0x190
[ 107.228120] ? snprintf+0x91/0xc0
[ 107.229388] ? vsprintf+0x10/0x10
[ 107.230628] ? sget+0x3af/0x4a0
[ 107.231848] ? hfs_remount+0x190/0x190
[ 107.233300] mount_bdev+0x26e/0x330
[ 107.234611] ? hfs_statfs+0x540/0x540
[ 107.236015] legacy_get_tree+0x101/0x1f0
[ 107.237431] ? security_capable+0x58/0x90
[ 107.238832] vfs_get_tree+0x89/0x2d0
[ 107.240082] ? ns_capable_common+0x5c/0xd0
[ 107.241521] do_mount+0xd8a/0x1720
[ 107.242727] ? lock_downgrade+0x730/0x730
[ 107.244116] ? copy_mount_string+0x20/0x20
[ 107.245557] ? _copy_from_user+0xbe/0x100
[ 107.246967] ? memdup_user+0x47/0x70
[ 107.248212] __x64_sys_mount+0x162/0x1b0
[ 107.249537] do_syscall_64+0xa5/0x4f0
[ 107.250742] entry_SYSCALL_64_after_hwframe+0x49/0xb3
[ 107.252369] RIP: 0033:0x44e8ea
[ 107.253360] Code: 48 c7 c1 c0 ff ff ff f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 c0 ff ff ff f7 d8 64 89 01 48
[ 107.259240] RSP: 002b:00007ffd910e4c28 EFLAGS: 00000207 ORIG_RAX: 00000000000000a5
[ 107.261668] RAX: ffffffffffffffda RBX: 0000000000400400 RCX: 000000000044e8ea
[ 107.263920] RDX: 000000000049321e RSI: 0000000000493222 RDI: 00007ffd910e4d00
[ 107.266177] RBP: 00007ffd910e5d10 R08: 0000000000000000 R09: 000000000000000a
[ 107.268451] R10: 0000000000000001 R11: 0000000000000207 R12: 0000000000401c40
[ 107.270721] R13: 0000000000000000 R14: 00000000006ba018 R15: 0000000000000000
[ 107.273025] Modules linked in:
[ 107.274029] Dumping ftrace buffer:
[ 107.275121] (ftrace buffer empty)
[ 107.276370] ---[ end trace c5e0b9d684f3570e ]---
We need to check that 'tree' is not NULL in hfs_find_init().
https://lore.kernel.org/linux-fsdevel/20180419024358.GA5215@bombadil.infrad…
https://marc.info/?l=linux-fsdevel&m=152406881024567&w=2
References: CVE-2018-12928
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Signed-off-by: Chen Jun <chenjun102(a)huawei.com>
---
fs/hfs/bfind.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/fs/hfs/bfind.c b/fs/hfs/bfind.c
index ef9498a6e88a..ae44c8fa86ec 100644
--- a/fs/hfs/bfind.c
+++ b/fs/hfs/bfind.c
@@ -16,6 +16,8 @@ int hfs_find_init(struct hfs_btree *tree, struct hfs_find_data *fd)
{
void *ptr;
+ if (!tree)
+ return -EINVAL;
fd->tree = tree;
fd->bnode = NULL;
ptr = kmalloc(tree->max_key_len * 2 + 4, GFP_KERNEL);
--
2.20.1
Backport LTS 5.10.61 patches from upstream.
Adrian Larumbe (1):
dmaengine: xilinx_dma: Fix read-after-free bug when terminating
transfers
Alan Stern (2):
USB: core: Avoid WARNings for 0-length descriptor requests
USB: core: Fix incorrect pipe calculation in do_proc_control()
Andreas Persson (1):
mtd: cfi_cmdset_0002: fix crash when erasing/writing AMD cards
Andy Shevchenko (1):
ptp_pch: Restore dependency on PCI
Arkadiusz Kubalewski (1):
i40e: Fix ATR queue selection
Bing Guo (1):
drm/amd/display: Fix Dynamic bpp issue with 8K30 with Navi 1X
Bjorn Andersson (1):
clk: qcom: gdsc: Ensure regulator init state matches GDSC state
Christophe Kerello (1):
mmc: mmci: stm32: Check when the voltage switch procedure should be
done
Dan Carpenter (1):
media: zr364xx: fix memory leaks in probe()
Dave Gerlach (1):
ARM: dts: am43x-epos-evm: Reduce i2c0 bus speed for tps65218
Dinghao Liu (1):
net: qlcnic: add missed unlock in qlcnic_83xx_flash_read32
Dong Aisheng (1):
clk: imx6q: fix uart earlycon unwork
Dongliang Mu (2):
ipack: tpci200: fix many double free issues in tpci200_pci_probe
ipack: tpci200: fix memory leak in the tpci200_register
Eli Cohen (1):
vdpa/mlx5: Avoid destroying MR on empty iotlb
Evgeny Novikov (1):
media: zr364xx: propagate errors from zr364xx_start_readpipe()
Frank Wunderlich (1):
iommu: Check if group is NULL before remove device
Harshvardhan Jha (2):
net: xfrm: Fix end of loop tests for list_for_each_entry
scsi: megaraid_mm: Fix end of loop tests for list_for_each_entry()
Hayes Wang (1):
r8152: fix writing USB_BP2_EN
Ido Schimmel (1):
Revert "flow_offload: action should not be NULL when it is referenced"
Igor Pylypiv (1):
scsi: pm80xx: Fix TMF task completion race condition
Ilya Leoshkevich (1):
bpf: Clear zext_dst of dead insns
Ivan T. Ivanov (1):
net: usb: lan78xx: don't modify phy_device state concurrently
Jakub Kicinski (4):
bnxt: don't lock the tx queue from napi poll
bnxt: disable napi before canceling DIM
bnxt: make sure xmit_more + errors does not miss doorbells
bnxt: count Tx drops
Jaroslav Kysela (1):
ALSA: hda - fix the 'Capture Switch' value change notifications
Jason Wang (1):
virtio-net: use NETIF_F_GRO_HW instead of NETIF_F_LRO
Jeff Layton (1):
fs: warn about impending deprecation of mandatory locks
Jens Axboe (2):
io_uring: fix xa_alloc_cycle() error return value check
io_uring: only assign io_uring_enter() SQPOLL error in actual error
case
Johannes Weiner (1):
mm: memcontrol: fix occasional OOMs due to proportional memory.low
reclaim
Jouni Malinen (5):
ath: Use safer key clearing with key cache entries
ath9k: Clear key cache explicitly on disabling hardware
ath: Export ath_hw_keysetmac()
ath: Modify ath_key_delete() to not need full key entry
ath9k: Postpone key cache entry deletion for TXQ frames reference it
Kai-Heng Feng (1):
ALSA: hda/realtek: Limit mic boost on HP ProBook 445 G8
Kristin Paget (1):
ALSA: hda/realtek: Enable 4-speaker output for Dell XPS 15 9510 laptop
Lahav Schlesinger (1):
vrf: Reset skb conntrack connection on VRF rcv
Liu Yi L (1):
iommu/vt-d: Fix incomplete cache flush in
intel_pasid_tear_down_entry()
Lu Baolu (1):
iommu/vt-d: Consolidate duplicate cache invaliation code
Marcin Bachry (1):
PCI: Increase D3 delay for AMD Renoir/Cezanne XHCI
Marek Behún (1):
cpufreq: armada-37xx: forbid cpufreq for 1.2 GHz variant
Michael Chan (2):
bnxt_en: Disable aRFS if running on 212 firmware
bnxt_en: Add missing DMA memory barriers
NeilBrown (1):
btrfs: prevent rename2 from exchanging a subvol with a directory from
different parents
Nicolas Saenz Julienne (2):
mmc: sdhci-iproc: Cap min clock frequency on BCM2711
mmc: sdhci-iproc: Set SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN on BCM2711
Niklas Schnelle (1):
s390/pci: fix use after free of zpci_dev
Ole Bjørn Midtbø (1):
Bluetooth: hidp: use correct wait queue when removing ctrl_wait
Parav Pandit (1):
virtio: Protect vqs list access
Pavel Skripkin (2):
media: drivers/media/usb: fix memory leak in zr364xx_probe
net: 6pack: fix slab-out-of-bounds in decode_data
Peter Ujfalusi (1):
dmaengine: of-dma: router_xlate to return -EPROBE_DEFER if controller
is not yet available
Petko Manolov (1):
net: usb: pegasus: Check the return value of get_geristers() and
friends;
Petr Vorel (1):
arm64: dts: qcom: msm8992-bullhead: Remove PSCI
Prabhakar Kushwaha (1):
qede: fix crash in rmmod qede while automatic debug collection
Qingqing Zhuo (1):
drm/amd/display: workaround for hard hang on HPD on native DP
Randy Dunlap (1):
dccp: add do-while-0 stubs for dccp_pr_debug macros
Saravana Kannan (2):
net: mdio-mux: Don't ignore memory allocation errors
net: mdio-mux: Handle -EPROBE_DEFER correctly
Shaik Sajida Bhanu (1):
mmc: sdhci-msm: Update the software timeout value for sdhc
Sreekanth Reddy (1):
scsi: core: Avoid printing an error if target_alloc() returns -ENXIO
Srinivas Kandagatla (4):
arm64: dts: qcom: c630: fix correct powerdown pin for WSA881x
slimbus: messaging: start transaction ids from 1 instead of zero
slimbus: messaging: check for valid transaction id
slimbus: ngd: reset dma setup during runtime pm
Steven Rostedt (VMware) (1):
tracing / histogram: Fix NULL pointer dereference on strcmp() on NULL
event name
Sudeep Holla (1):
ARM: dts: nomadik: Fix up interrupt controller node names
Sylwester Dziedziuch (1):
iavf: Fix ping is lost after untrusted VF had tried to change MAC
Takashi Iwai (2):
ALSA: hda/via: Apply runtime PM workaround for ASUS B23E
ASoC: intel: atom: Fix breakage for PCM buffer address setup
Toke Høiland-Jørgensen (1):
sch_cake: fix srchost/dsthost hashing mode
Tony Lindgren (1):
bus: ti-sysc: Fix error handling for sysc_check_active_timer()
Uwe Kleine-König (1):
spi: spi-mux: Add module info needed for autoloading
Vincent Whitchurch (1):
mmc: dw_mmc: Fix hang on data CRC error
Wang Hai (1):
ixgbe, xsk: clean up the resources in ixgbe_xsk_pool_enable error path
Wanpeng Li (1):
KVM: X86: Fix warning caused by stale emulation context
Wei Huang (1):
KVM: x86: Factor out x86 instruction emulation with decoding
Xie Yongji (2):
vhost-vdpa: Fix integer overflow in vhost_vdpa_process_iotlb_update()
vhost: Fix the calculation in vhost_overflow()
Xuan Zhuo (1):
virtio-net: support XDP when not more queues
Ye Bin (1):
scsi: scsi_dh_rdac: Avoid crash during rdac_bus_attach()
Yifan Zhang (1):
drm/amdgpu: fix the doorbell missing when in CGPG issue for renoir.
Yongqiang Niu (2):
soc / drm: mediatek: Move DDP component defines into mtk-mmsys.h
drm/mediatek: Fix aal size config
Yu Kuai (1):
dmaengine: usb-dmac: Fix PM reference leak in usb_dmac_probe()
jason-jh.lin (1):
drm/mediatek: Add AAL output size configuration
kaixi.fan (1):
ovs: clear skb->tstamp in forwarding path
lijinlin (1):
scsi: core: Fix capacity set to zero after offlinining device
arch/arm/boot/dts/am43x-epos-evm.dts | 2 +-
arch/arm/boot/dts/ste-nomadik-stn8815.dtsi | 4 +-
.../dts/qcom/msm8992-bullhead-rev-101.dts | 4 +
.../boot/dts/qcom/sdm850-lenovo-yoga-c630.dts | 4 +-
arch/s390/pci/pci.c | 6 +
arch/s390/pci/pci_bus.h | 5 +
arch/x86/kvm/x86.c | 62 ++++++----
arch/x86/kvm/x86.h | 2 +
drivers/bus/ti-sysc.c | 4 +-
drivers/clk/imx/clk-imx6q.c | 2 +-
drivers/clk/qcom/gdsc.c | 54 ++++++---
drivers/cpufreq/armada-37xx-cpufreq.c | 6 +-
drivers/dma/of-dma.c | 9 +-
drivers/dma/sh/usb-dmac.c | 2 +-
drivers/dma/xilinx/xilinx_dma.c | 12 ++
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 21 +++-
.../amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 4 +-
.../gpu/drm/amd/display/dc/dcn20/dcn20_optc.c | 2 +-
drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c | 4 +-
drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h | 34 +-----
drivers/iommu/intel/pasid.c | 26 ++--
drivers/iommu/intel/pasid.h | 6 +
drivers/iommu/intel/svm.c | 55 ++-------
drivers/iommu/iommu.c | 3 +
drivers/ipack/carriers/tpci200.c | 60 +++++-----
drivers/media/usb/zr364xx/zr364xx.c | 77 ++++++++----
drivers/mmc/host/dw_mmc.c | 6 +-
drivers/mmc/host/mmci_stm32_sdmmc.c | 7 +-
drivers/mmc/host/sdhci-iproc.c | 21 +++-
drivers/mmc/host/sdhci-msm.c | 18 +++
drivers/mtd/chips/cfi_cmdset_0002.c | 2 +-
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 113 ++++++++++++------
drivers/net/ethernet/broadcom/bnxt/bnxt.h | 1 +
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 3 +-
drivers/net/ethernet/intel/iavf/iavf.h | 1 +
drivers/net/ethernet/intel/iavf/iavf_main.c | 1 +
.../net/ethernet/intel/iavf/iavf_virtchnl.c | 47 +++++++-
drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 5 +-
drivers/net/ethernet/qlogic/qede/qede.h | 1 +
drivers/net/ethernet/qlogic/qede/qede_main.c | 8 ++
.../ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c | 4 +-
drivers/net/hamradio/6pack.c | 6 +
drivers/net/mdio/mdio-mux.c | 36 ++++--
drivers/net/usb/lan78xx.c | 16 ++-
drivers/net/usb/pegasus.c | 108 ++++++++++++-----
drivers/net/usb/r8152.c | 2 +-
drivers/net/virtio_net.c | 76 ++++++++----
drivers/net/vrf.c | 4 +
drivers/net/wireless/ath/ath.h | 3 +-
drivers/net/wireless/ath/ath5k/mac80211-ops.c | 2 +-
drivers/net/wireless/ath/ath9k/htc_drv_main.c | 2 +-
drivers/net/wireless/ath/ath9k/hw.h | 1 +
drivers/net/wireless/ath/ath9k/main.c | 95 ++++++++++++++-
drivers/net/wireless/ath/key.c | 41 ++++---
drivers/pci/quirks.c | 1 +
drivers/ptp/Kconfig | 3 +-
drivers/scsi/device_handler/scsi_dh_rdac.c | 4 +-
drivers/scsi/megaraid/megaraid_mm.c | 21 +++-
drivers/scsi/pm8001/pm8001_sas.c | 32 +++--
drivers/scsi/scsi_scan.c | 3 +-
drivers/scsi/scsi_sysfs.c | 9 +-
drivers/slimbus/messaging.c | 7 +-
drivers/slimbus/qcom-ngd-ctrl.c | 5 +-
drivers/soc/mediatek/mtk-mmsys.c | 4 +-
drivers/spi/spi-mux.c | 8 ++
drivers/usb/core/devio.c | 2 +-
drivers/usb/core/message.c | 6 +
drivers/vdpa/mlx5/core/mr.c | 9 --
drivers/vhost/vdpa.c | 3 +-
drivers/vhost/vhost.c | 10 +-
drivers/virtio/virtio.c | 1 +
drivers/virtio/virtio_ring.c | 8 ++
fs/btrfs/inode.c | 10 +-
fs/io_uring.c | 16 +--
fs/namespace.c | 6 +-
include/linux/memcontrol.h | 29 ++---
include/linux/soc/mediatek/mtk-mmsys.h | 33 +++++
include/linux/virtio.h | 1 +
include/net/flow_offload.h | 12 +-
kernel/bpf/verifier.c | 1 +
kernel/trace/trace_events_hist.c | 2 +
mm/vmscan.c | 27 +++--
net/bluetooth/hidp/core.c | 2 +-
net/dccp/dccp.h | 6 +-
net/openvswitch/vport.c | 1 +
net/sched/sch_cake.c | 2 +-
net/xfrm/xfrm_ipcomp.c | 2 +-
sound/pci/hda/hda_generic.c | 10 +-
sound/pci/hda/patch_realtek.c | 12 +-
sound/pci/hda/patch_via.c | 1 +
sound/soc/intel/atom/sst-mfld-platform-pcm.c | 2 +-
91 files changed, 964 insertions(+), 447 deletions(-)
--
2.20.1
Backport LTS 5.10.60 patches from upstream.
Alex Deucher (1):
drm/amdgpu: don't enable baco on boco platforms in runpm
Andy Shevchenko (1):
pinctrl: tigerlake: Fix GPIO mapping for newer version of software
Anirudh Venkataramanan (1):
ice: Prevent probing virtual functions
Anson Jacob (1):
drm/amd/display: use GFP_ATOMIC in amdgpu_dm_irq_schedule_work
Antti Keränen (1):
iio: adis: set GPIO reset pin direction
Ard Biesheuvel (3):
efi/libstub: arm64: Force Image reallocation if BSS was not reserved
efi/libstub: arm64: Relax 2M alignment again for relocatable kernels
efi/libstub: arm64: Double check image alignment at entry
Aya Levin (1):
net/mlx5: Fix return value from tracer initialization
Babu Moger (1):
x86/resctrl: Fix default monitoring groups reporting
Ben Dai (1):
genirq/timings: Prevent potential array overflow in
__irq_timings_store()
Ben Hutchings (8):
net: phy: micrel: Fix link detection on ksz87xx switch"
net: dsa: microchip: Fix ksz_read64()
net: dsa: microchip: ksz8795: Fix VLAN filtering
net: dsa: microchip: Fix probing KSZ87xx switch with DT node for host
port
net: dsa: microchip: ksz8795: Fix PVID tag insertion
net: dsa: microchip: ksz8795: Reject unsupported VLAN configuration
net: dsa: microchip: ksz8795: Fix VLAN untagged flag change on
deletion
net: dsa: microchip: ksz8795: Use software untagging on CPU port
Benjamin Herrenschmidt (1):
arm64: efi: kaslr: Fix occasional random alloc (and boot) failure
Bixuan Cui (1):
genirq/msi: Ensure deactivation on teardown
Brett Creeley (1):
ice: don't remove netdev->dev_addr from uc sync list
Chris Lesiak (1):
iio: humidity: hdc100x: Add margin to the conversion time
Christian Hewitt (1):
drm/meson: fix colour distortion from HDR set during vendor u-boot
Christophe Leroy (1):
powerpc/smp: Fix OOPS in topology_init()
Colin Ian King (1):
iio: adc: Fix incorrect exit of for-loop
DENG Qingfang (1):
net: dsa: mt7530: add the missing RxUnicast MIB counter
Dan Williams (2):
ACPI: NFIT: Fix support for virtual SPA ranges
libnvdimm/region: Fix label activation vs errors
Dongliang Mu (2):
ieee802154: hwsim: fix GPF in hwsim_set_edge_lqi
ieee802154: hwsim: fix GPF in hwsim_new_edge_nl
Eric Bernstein (1):
drm/amd/display: Remove invalid assert for ODM + MPC case
Eric Dumazet (2):
net: igmp: fix data-race in igmp_ifc_timer_expire()
net: igmp: increase size of mr_ifc_count
Ewan D. Milne (1):
scsi: lpfc: Move initialization of phba->poll_list earlier to avoid
crash
Greg Kroah-Hartman (1):
i2c: dev: zero out array used for i2c reads from userspace
Grygorii Strashko (1):
net: ethernet: ti: cpsw: fix min eth packet size for non-switch
use-cases
Guennadi Liakhovetski (1):
ASoC: SOF: Intel: hda-ipc: fix reply size checking
Guillaume Nault (1):
bareudp: Fix invalid read beyond skb's linear data
Hangbin Liu (1):
net: sched: act_mirred: Reset ct info when mirror/redirect skb
Hans de Goede (3):
platform/x86: pcengines-apuv2: Add missing terminating entries to
gpio-lookup tables
vboxsf: Add vboxsf_[create|release]_sf_handle() helpers
vboxsf: Add support for the atomic_open directory-inode op
Hsin-Yi Wang (1):
pinctrl: mediatek: Fix fallback behavior for bias_set_combo
Hsuan-Chi Kuo (1):
seccomp: Fix setting loaded filter count during TSYNC
Jeff Layton (3):
ceph: add some lockdep assertions around snaprealm handling
ceph: clean up locking annotation for ceph_get_snap_realm and
__lookup_snap_realm
ceph: take snap_empty_lock atomically with snaprealm refcount change
John Hubbard (1):
net: mvvp2: fix short frame size on s390
Karsten Graul (1):
net/smc: fix wait on already cleared link
Longpeng(Mike) (1):
vsock/virtio: avoid potential deadlock when vsock device remove
Luis Henriques (1):
ceph: reduce contention in ceph_check_delayed_caps()
Mark Brown (1):
ASoC: tlv320aic31xx: Fix jack detection after suspend
Matt Roper (1):
drm/i915: Only access SFC_DONE when media domain is not fused off
Maxim Levitsky (2):
KVM: nSVM: avoid picking up unsupported bits from L2 in int_ctl
(CVE-2021-3653)
KVM: nSVM: always intercept VMLOAD/VMSAVE when nested (CVE-2021-3656)
Maximilian Heyne (1):
xen/events: Fix race in set_evtchn_to_irq
Md Fahad Iqbal Polash (1):
iavf: Set RSS LUT and key in reset handle path
Nathan Chancellor (1):
vmlinux.lds.h: Handle clang's module.{c,d}tor sections
Neal Cardwell (1):
tcp_bbr: fix u32 wrap bug in round logic if bbr_init() called after 2B
packets
Nikolay Aleksandrov (1):
net: bridge: fix flags interpretation for extern learn fdb entries
Pali Rohár (1):
ppp: Fix generating ifname when empty IFLA_IFNAME is specified
Randy Dunlap (1):
x86/tools: Fix objdump version check again
Richard Fitzgerald (5):
ASoC: cs42l42: Correct definition of ADC Volume control
ASoC: cs42l42: Don't allow SND_SOC_DAIFMT_LEFT_J
ASoC: cs42l42: Fix inversion of ADC Notch Switch control
ASoC: cs42l42: Remove duplicate control for WNF filter frequency
ASoC: cs42l42: Fix LRCLK frame start edge
Robin Gögge (1):
libbpf: Fix probe for BPF_PROG_TYPE_CGROUP_SOCKOPT
Roi Dayan (1):
psample: Add a fwd declaration for skbuff
Sean Christopherson (2):
KVM: VMX: Use current VMCS to query WAITPKG support for MSR emulation
KVM: nVMX: Use vmx_need_pf_intercept() when deciding if L0 wants a #PF
Shay Drory (1):
net/mlx5: Synchronize correct IRQ when destroying CQ
Shyam Prasad N (1):
cifs: create sd context must be a multiple of 8
Takashi Iwai (4):
ASoC: amd: Fix reference to PCM buffer address
ASoC: xilinx: Fix reference to PCM buffer address
ASoC: uniphier: Fix reference to PCM buffer address
ASoC: intel: atom: Fix reference to PCM buffer address
Takeshi Misawa (1):
net: Fix memory leak in ieee802154_raw_deliver
Tatsuhiko Yasumatsu (1):
bpf: Fix integer overflow involving bucket_size
Thomas Gleixner (11):
genirq: Provide IRQCHIP_AFFINITY_PRE_STARTUP
x86/msi: Force affinity setup before startup
x86/ioapic: Force affinity setup before startup
PCI/MSI: Enable and mask MSI-X early
PCI/MSI: Mask all unused MSI-X entries
PCI/MSI: Enforce that MSI-X table entry is masked for update
PCI/MSI: Enforce MSI[X] entry updates to be visible
PCI/MSI: Do not set invalid bits in MSI mask
PCI/MSI: Correct misleading comments
PCI/MSI: Use msi_mask_irq() in pci_msi_shutdown()
PCI/MSI: Protect msi_desc::masked for multi-MSI
Uwe Kleine-König (1):
iio: adc: ti-ads7950: Ensure CS is deasserted after reading channels
Vineet Gupta (1):
ARC: fp: set FPU_STATUS.FWE to enable FPU_STATUS update on context
switch
Vladimir Oltean (4):
net: dsa: lan9303: fix broken backpressure in .port_fdb_dump
net: dsa: lantiq: fix broken backpressure in .port_fdb_dump
net: dsa: sja1105: fix broken backpressure in .port_fdb_dump
net: bridge: validate the NUD_PERMANENT bit when adding an
extern_learn FDB entry
Willy Tarreau (1):
net: linkwatch: fix failure to restore device state across
suspend/resume
Xie Yongji (1):
nbd: Aovid double completion of a request
Yajun Deng (1):
netfilter: nf_conntrack_bridge: Fix memory leak when error
Yang Yingliang (1):
net: bridge: fix memleak in br_add_if()
arch/arc/kernel/fpu.c | 9 +-
arch/powerpc/kernel/sysfs.c | 2 +-
arch/x86/include/asm/svm.h | 2 +
arch/x86/kernel/apic/io_apic.c | 6 +-
arch/x86/kernel/apic/msi.c | 13 +-
arch/x86/kernel/cpu/resctrl/monitor.c | 27 ++--
arch/x86/kvm/svm/nested.c | 14 +-
arch/x86/kvm/svm/svm.c | 8 +-
arch/x86/kvm/vmx/nested.c | 3 +-
arch/x86/kvm/vmx/vmx.h | 2 +-
arch/x86/tools/chkobjdump.awk | 1 +
drivers/acpi/nfit/core.c | 3 +
drivers/base/core.c | 1 +
drivers/block/nbd.c | 14 +-
drivers/firmware/efi/libstub/arm64-stub.c | 69 ++++++++--
drivers/firmware/efi/libstub/randomalloc.c | 2 +
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 2 +
.../drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c | 2 +-
.../drm/amd/display/dc/dcn30/dcn30_resource.c | 1 -
drivers/gpu/drm/i915/i915_gpu_error.c | 19 ++-
drivers/gpu/drm/meson/meson_registers.h | 5 +
drivers/gpu/drm/meson/meson_viu.c | 7 +-
drivers/i2c/i2c-dev.c | 5 +-
drivers/iio/adc/palmas_gpadc.c | 4 +-
drivers/iio/adc/ti-ads7950.c | 1 -
drivers/iio/humidity/hdc100x.c | 6 +-
drivers/iio/imu/adis.c | 3 +-
drivers/infiniband/hw/mlx5/cq.c | 4 +-
drivers/infiniband/hw/mlx5/devx.c | 3 +-
drivers/net/bareudp.c | 16 ++-
drivers/net/dsa/lan9303-core.c | 34 ++---
drivers/net/dsa/lantiq_gswip.c | 14 +-
drivers/net/dsa/microchip/ksz8795.c | 91 +++++++++++--
drivers/net/dsa/microchip/ksz_common.c | 2 +-
drivers/net/dsa/microchip/ksz_common.h | 9 +-
drivers/net/dsa/mt7530.c | 1 +
drivers/net/dsa/sja1105/sja1105_main.c | 4 +-
drivers/net/ethernet/intel/iavf/iavf_main.c | 13 +-
drivers/net/ethernet/intel/ice/ice_main.c | 28 ++--
drivers/net/ethernet/marvell/mvpp2/mvpp2.h | 2 +-
drivers/net/ethernet/mellanox/mlx5/core/cq.c | 1 +
.../mellanox/mlx5/core/diag/fw_tracer.c | 11 +-
.../net/ethernet/mellanox/mlx5/core/en_main.c | 13 +-
drivers/net/ethernet/mellanox/mlx5/core/eq.c | 20 ++-
.../ethernet/mellanox/mlx5/core/fpga/conn.c | 4 +-
.../net/ethernet/mellanox/mlx5/core/lib/eq.h | 2 +
.../mellanox/mlx5/core/steering/dr_send.c | 4 +-
drivers/net/ethernet/ti/cpsw_new.c | 7 +-
drivers/net/ethernet/ti/cpsw_priv.h | 4 +-
drivers/net/ieee802154/mac802154_hwsim.c | 6 +-
drivers/net/phy/micrel.c | 2 -
drivers/net/ppp/ppp_generic.c | 2 +-
drivers/nvdimm/namespace_devs.c | 17 ++-
drivers/pci/msi.c | 125 +++++++++++-------
drivers/pinctrl/intel/pinctrl-tigerlake.c | 26 ++--
.../pinctrl/mediatek/pinctrl-mtk-common-v2.c | 8 +-
drivers/platform/x86/pcengines-apuv2.c | 2 +
drivers/scsi/lpfc/lpfc_init.c | 3 +-
drivers/vdpa/mlx5/net/mlx5_vnet.c | 3 +-
drivers/xen/events/events_base.c | 20 ++-
fs/ceph/caps.c | 17 ++-
fs/ceph/mds_client.c | 25 ++--
fs/ceph/snap.c | 54 +++++---
fs/ceph/super.h | 2 +-
fs/cifs/smb2pdu.c | 2 +-
fs/vboxsf/dir.c | 48 +++++++
fs/vboxsf/file.c | 71 ++++++----
fs/vboxsf/vfsmod.h | 7 +
include/asm-generic/vmlinux.lds.h | 1 +
include/linux/device.h | 1 +
include/linux/inetdevice.h | 2 +-
include/linux/irq.h | 2 +
include/linux/mlx5/driver.h | 3 +-
include/linux/msi.h | 2 +-
include/net/psample.h | 2 +
include/uapi/linux/neighbour.h | 7 +-
kernel/bpf/hashtab.c | 4 +-
kernel/irq/chip.c | 5 +-
kernel/irq/msi.c | 13 +-
kernel/irq/timings.c | 5 +
kernel/seccomp.c | 2 +-
net/bridge/br_fdb.c | 23 +++-
net/bridge/br_if.c | 2 +
net/bridge/netfilter/nf_conntrack_bridge.c | 6 +
net/core/link_watch.c | 5 +-
net/ieee802154/socket.c | 7 +-
net/ipv4/igmp.c | 21 ++-
net/ipv4/tcp_bbr.c | 2 +-
net/sched/act_mirred.c | 3 +
net/smc/smc_core.h | 2 +
net/smc/smc_llc.c | 10 +-
net/smc/smc_tx.c | 18 ++-
net/smc/smc_wr.c | 10 ++
net/vmw_vsock/virtio_transport.c | 7 +-
sound/soc/amd/acp-pcm-dma.c | 2 +-
sound/soc/amd/raven/acp3x-pcm-dma.c | 2 +-
sound/soc/amd/renoir/acp3x-pdm-dma.c | 2 +-
sound/soc/codecs/cs42l42.c | 39 +++---
sound/soc/codecs/tlv320aic31xx.c | 10 ++
sound/soc/intel/atom/sst-mfld-platform-pcm.c | 3 +-
sound/soc/sof/intel/hda-ipc.c | 4 +-
sound/soc/uniphier/aio-dma.c | 2 +-
sound/soc/xilinx/xlnx_formatter_pcm.c | 4 +-
tools/lib/bpf/libbpf_probes.c | 4 +-
104 files changed, 829 insertions(+), 366 deletions(-)
--
2.20.1
Backport LTS 5.10.59 patches from upstream.
Adam Ford (3):
arm64: dts: renesas: rzg2: Add usb2_clksel to RZ/G2 M/N/H
arm64: dts: renesas: beacon: Fix USB extal reference
arm64: dts: renesas: beacon: Fix USB ref clock references
Allen Pais (1):
firmware: tee_bnxt: Release TEE shm, session, and context during kexec
Daniel Borkmann (1):
bpf: Add lockdown check for probe_write_user helper
Hans de Goede (2):
vboxsf: Honor excl flag to the dir-inode create op
vboxsf: Make vboxsf_dir_create() return the handle for the created
file
Jeremy Szu (1):
ALSA: hda/realtek: fix mute/micmute LEDs for HP ProBook 650 G8
Notebook PC
Longfang Liu (1):
USB:ehci:fix Kunpeng920 ehci hardware problem
Luke D Jones (1):
ALSA: hda: Add quirk for ASUS Flow x13
Mike Rapoport (1):
mm: make zone_to_nid() and zone_set_nid() available for DISCONTIGMEM
Miklos Szeredi (1):
ovl: prevent private clone if bind mount is not allowed
Pali Rohár (1):
ppp: Fix generating ppp unit id when ifname is not specified
Reinette Chatre (1):
Revert "selftests/resctrl: Use resctrl/info for feature detection"
Sean Christopherson (1):
KVM: SVM: Fix off-by-one indexing when nullifying last used SEV VMCB
Sumit Garg (1):
tee: Correct inappropriate usage of TEE_SHM_DMA_BUF flag
Takashi Iwai (1):
ALSA: pcm: Fix mmap breakage without explicit buffer setup
YueHaibing (1):
net: xilinx_emaclite: Do not print real IOMEM pointer
.../dts/renesas/beacon-renesom-baseboard.dtsi | 4 +-
.../boot/dts/renesas/beacon-renesom-som.dtsi | 6 ++-
arch/arm64/boot/dts/renesas/r8a774a1.dtsi | 15 ++++++
arch/arm64/boot/dts/renesas/r8a774b1.dtsi | 15 ++++++
arch/arm64/boot/dts/renesas/r8a774e1.dtsi | 15 ++++++
arch/x86/kvm/svm/sev.c | 2 +-
drivers/firmware/broadcom/tee_bnxt_fw.c | 14 +++--
drivers/net/ethernet/xilinx/xilinx_emaclite.c | 5 +-
drivers/net/ppp/ppp_generic.c | 19 +++++--
drivers/tee/optee/call.c | 2 +-
drivers/tee/optee/core.c | 3 +-
drivers/tee/optee/rpc.c | 5 +-
drivers/tee/optee/shm_pool.c | 8 ++-
drivers/tee/tee_shm.c | 4 +-
drivers/usb/host/ehci-pci.c | 3 ++
fs/namespace.c | 42 ++++++++++-----
fs/vboxsf/dir.c | 28 ++++++----
include/linux/mmzone.h | 4 +-
include/linux/security.h | 1 +
include/linux/tee_drv.h | 1 +
kernel/trace/bpf_trace.c | 5 +-
security/security.c | 1 +
sound/core/pcm_native.c | 5 +-
sound/pci/hda/patch_realtek.c | 2 +
tools/testing/selftests/resctrl/resctrl.h | 6 +--
tools/testing/selftests/resctrl/resctrlfs.c | 52 ++++---------------
26 files changed, 168 insertions(+), 99 deletions(-)
--
2.20.1

[PATCH openEuler-5.10 1/2] sched/rt: Fix double enqueue caused by rt_effective_prio
by Zheng Zengkai 19 Oct '21
From: Peter Zijlstra <peterz(a)infradead.org>
stable inclusion
from stable-5.10.58
commit a3e6bd0c71bb5e4821aff9ab8f221bfc08039d73
bugzilla: 176984 https://gitee.com/openeuler/kernel/issues/I4E2P4
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit f558c2b834ec27e75d37b1c860c139e7b7c3a8e4 upstream.
Double enqueues in rt runqueues (list) have been reported while running
a simple test that spawns a number of threads doing a short sleep/run
pattern while being concurrently setscheduled between rt and fair class.
WARNING: CPU: 3 PID: 2825 at kernel/sched/rt.c:1294 enqueue_task_rt+0x355/0x360
CPU: 3 PID: 2825 Comm: setsched__13
RIP: 0010:enqueue_task_rt+0x355/0x360
Call Trace:
__sched_setscheduler+0x581/0x9d0
_sched_setscheduler+0x63/0xa0
do_sched_setscheduler+0xa0/0x150
__x64_sys_sched_setscheduler+0x1a/0x30
do_syscall_64+0x33/0x40
entry_SYSCALL_64_after_hwframe+0x44/0xae
list_add double add: new=ffff9867cb629b40, prev=ffff9867cb629b40,
next=ffff98679fc67ca0.
kernel BUG at lib/list_debug.c:31!
invalid opcode: 0000 [#1] PREEMPT_RT SMP PTI
CPU: 3 PID: 2825 Comm: setsched__13
RIP: 0010:__list_add_valid+0x41/0x50
Call Trace:
enqueue_task_rt+0x291/0x360
__sched_setscheduler+0x581/0x9d0
_sched_setscheduler+0x63/0xa0
do_sched_setscheduler+0xa0/0x150
__x64_sys_sched_setscheduler+0x1a/0x30
do_syscall_64+0x33/0x40
entry_SYSCALL_64_after_hwframe+0x44/0xae
__sched_setscheduler() uses rt_effective_prio() to handle proper queuing
of priority boosted tasks that are setscheduled while being boosted.
rt_effective_prio() is however called twice per each
__sched_setscheduler() call: first directly by __sched_setscheduler()
before dequeuing the task and then by __setscheduler() to actually do
the priority change. If the priority of the pi_top_task is concurrently
being changed however, it might happen that the two calls return
different results. If, for example, the first call returned the same rt
priority the task was running at and the second one a fair priority, the
task won't be removed by the rt list (on_list still set) and then
enqueued in the fair runqueue. When eventually setscheduled back to rt
it will be seen as enqueued already and the WARNING/BUG be issued.
Fix this by calling rt_effective_prio() only once and then reusing the
return value. While at it refactor code as well for clarity. Concurrent
priority inheritance handling is still safe and will eventually converge
to a new state by following the inheritance chain(s).
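As a side note, the scenario described above can be approximated from user space
with chrt; the following is a rough, hypothetical sketch of such a stress test
(PID handling, iteration count and priorities are placeholders, not the original
reproducer):
  # Rough sketch (not the original reproducer): flip a task that keeps
  # sleeping/running between the rt and fair classes in a tight loop.
  while :; do sleep 0.01; done &
  pid=$!
  for i in $(seq 1 10000); do
      chrt -f -p 50 "$pid"   # SCHED_FIFO, priority 50
      chrt -o -p 0 "$pid"    # back to SCHED_OTHER
  done
  kill "$pid"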
Fixes: 0782e63bc6fe ("sched: Handle priority boosted tasks proper in setscheduler()")
[squashed Peterz changes; added changelog]
Reported-by: Mark Simmons <msimmons(a)redhat.com>
Signed-off-by: Juri Lelli <juri.lelli(a)redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Link: https://lkml.kernel.org/r/20210803104501.38333-1-juri.lelli@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Chen Jun <chenjun102(a)huawei.com>
Acked-by: Weilong Chen <chenweilong(a)huawei.com>
Signed-off-by: Chen Jun <chenjun102(a)huawei.com>
Signed-off-by: Zheng Zucheng <zhengzucheng(a)huawei.com>
---
kernel/sched/core.c | 95 ++++++++++++++++++---------------------------
1 file changed, 37 insertions(+), 58 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index cb5c2f9be849..d5434cc99934 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1590,12 +1590,18 @@ void deactivate_task(struct rq *rq, struct task_struct *p, int flags)
dequeue_task(rq, p, flags);
}
-/*
- * __normal_prio - return the priority that is based on the static prio
- */
-static inline int __normal_prio(struct task_struct *p)
+static inline int __normal_prio(int policy, int rt_prio, int nice)
{
- return p->static_prio;
+ int prio;
+
+ if (dl_policy(policy))
+ prio = MAX_DL_PRIO - 1;
+ else if (rt_policy(policy))
+ prio = MAX_RT_PRIO - 1 - rt_prio;
+ else
+ prio = NICE_TO_PRIO(nice);
+
+ return prio;
}
/*
@@ -1607,15 +1613,7 @@ static inline int __normal_prio(struct task_struct *p)
*/
static inline int normal_prio(struct task_struct *p)
{
- int prio;
-
- if (task_has_dl_policy(p))
- prio = MAX_DL_PRIO-1;
- else if (task_has_rt_policy(p))
- prio = MAX_RT_PRIO-1 - p->rt_priority;
- else
- prio = __normal_prio(p);
- return prio;
+ return __normal_prio(p->policy, p->rt_priority, PRIO_TO_NICE(p->static_prio));
}
/*
@@ -3240,7 +3238,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
} else if (PRIO_TO_NICE(p->static_prio) < 0)
p->static_prio = NICE_TO_PRIO(0);
- p->prio = p->normal_prio = __normal_prio(p);
+ p->prio = p->normal_prio = p->static_prio;
set_load_weight(p, false);
/*
@@ -4986,6 +4984,18 @@ int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flag
}
EXPORT_SYMBOL(default_wake_function);
+static void __setscheduler_prio(struct task_struct *p, int prio)
+{
+ if (dl_prio(prio))
+ p->sched_class = &dl_sched_class;
+ else if (rt_prio(prio))
+ p->sched_class = &rt_sched_class;
+ else
+ p->sched_class = &fair_sched_class;
+
+ p->prio = prio;
+}
+
#ifdef CONFIG_RT_MUTEXES
static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
@@ -5101,22 +5111,19 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
} else {
p->dl.pi_se = &p->dl;
}
- p->sched_class = &dl_sched_class;
} else if (rt_prio(prio)) {
if (dl_prio(oldprio))
p->dl.pi_se = &p->dl;
if (oldprio < prio)
queue_flag |= ENQUEUE_HEAD;
- p->sched_class = &rt_sched_class;
} else {
if (dl_prio(oldprio))
p->dl.pi_se = &p->dl;
if (rt_prio(oldprio))
p->rt.timeout = 0;
- p->sched_class = &fair_sched_class;
}
- p->prio = prio;
+ __setscheduler_prio(p, prio);
if (queued)
enqueue_task(rq, p, queue_flag);
@@ -5349,35 +5356,6 @@ static void __setscheduler_params(struct task_struct *p,
set_load_weight(p, true);
}
-/* Actually do priority change: must hold pi & rq lock. */
-static void __setscheduler(struct rq *rq, struct task_struct *p,
- const struct sched_attr *attr, bool keep_boost)
-{
- /*
- * If params can't change scheduling class changes aren't allowed
- * either.
- */
- if (attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)
- return;
-
- __setscheduler_params(p, attr);
-
- /*
- * Keep a potential priority boosting if called from
- * sched_setscheduler().
- */
- p->prio = normal_prio(p);
- if (keep_boost)
- p->prio = rt_effective_prio(p, p->prio);
-
- if (dl_prio(p->prio))
- p->sched_class = &dl_sched_class;
- else if (rt_prio(p->prio))
- p->sched_class = &rt_sched_class;
- else
- p->sched_class = &fair_sched_class;
-}
-
/*
* Check the target process has a UID that matches the current process's:
*/
@@ -5398,10 +5376,8 @@ static int __sched_setscheduler(struct task_struct *p,
const struct sched_attr *attr,
bool user, bool pi)
{
- int newprio = dl_policy(attr->sched_policy) ? MAX_DL_PRIO - 1 :
- MAX_RT_PRIO - 1 - attr->sched_priority;
- int retval, oldprio, oldpolicy = -1, queued, running;
- int new_effective_prio, policy = attr->sched_policy;
+ int oldpolicy = -1, policy = attr->sched_policy;
+ int retval, oldprio, newprio, queued, running;
const struct sched_class *prev_class;
struct rq_flags rf;
int reset_on_fork;
@@ -5611,6 +5587,7 @@ static int __sched_setscheduler(struct task_struct *p,
p->sched_reset_on_fork = reset_on_fork;
oldprio = p->prio;
+ newprio = __normal_prio(policy, attr->sched_priority, attr->sched_nice);
if (pi) {
/*
* Take priority boosted tasks into account. If the new
@@ -5619,8 +5596,8 @@ static int __sched_setscheduler(struct task_struct *p,
* the runqueue. This will be done when the task deboost
* itself.
*/
- new_effective_prio = rt_effective_prio(p, newprio);
- if (new_effective_prio == oldprio)
+ newprio = rt_effective_prio(p, newprio);
+ if (newprio == oldprio)
queue_flags &= ~DEQUEUE_MOVE;
}
@@ -5633,7 +5610,10 @@ static int __sched_setscheduler(struct task_struct *p,
prev_class = p->sched_class;
- __setscheduler(rq, p, attr, pi);
+ if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
+ __setscheduler_params(p, attr);
+ __setscheduler_prio(p, newprio);
+ }
__setscheduler_uclamp(p, attr);
if (queued) {
@@ -7640,7 +7620,6 @@ static inline int alloc_qos_sched_group(struct task_group *tg,
static void sched_change_qos_group(struct task_struct *tsk, struct task_group *tg)
{
struct sched_attr attr;
- struct rq *rq = task_rq(tsk);
/*
* No need to re-setcheduler when a task is exiting or the task
@@ -7651,8 +7630,8 @@ static void sched_change_qos_group(struct task_struct *tsk, struct task_group *t
(tg->qos_level == -1)) {
attr.sched_priority = 0;
attr.sched_policy = SCHED_IDLE;
- attr.sched_nice = PRIO_TO_NICE(tsk->static_prio);
- __setscheduler(rq, tsk, &attr, 0);
+ __setscheduler_params(tsk, &attr);
+ __setscheduler_prio(tsk, tsk->static_prio);
}
}
--
2.20.1

[PATCH openEuler-1.0-LTS 1/4] ovl: fix out of date comment and unreachable code
by Yang Yingliang 19 Oct '21
From: Amir Goldstein <amir73il(a)gmail.com>
mainline inclusion
from mainline-v5.7-rc1
commit 735c907d7b7df501e951ba07134b9f6f989a94e4
category: bugfix
bugzilla: NA
CVE: NA
-------------------------------------------------
ovl_inode_update() is no longer called from create object code path.
Fixes: 01b39dcc9568 ("ovl: use inode_insert5() to hash a newly...")
Signed-off-by: Amir Goldstein <amir73il(a)gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi(a)redhat.com>
Signed-off-by: Zheng Liang <zhengliang6(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/overlayfs/inode.c | 8 +++++---
fs/overlayfs/util.c | 2 --
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
index d1e9d926150b1..b016343a7209b 100644
--- a/fs/overlayfs/inode.c
+++ b/fs/overlayfs/inode.c
@@ -555,9 +555,11 @@ static void ovl_fill_inode(struct inode *inode, umode_t mode, dev_t rdev,
* bits to encode layer), set the same value used for st_ino to i_ino,
* so inode number exposed via /proc/locks and a like will be
* consistent with d_ino and st_ino values. An i_ino value inconsistent
- * with d_ino also causes nfsd readdirplus to fail. When called from
- * ovl_new_inode(), ino arg is 0, so i_ino will be updated to real
- * upper inode i_ino on ovl_inode_init() or ovl_inode_update().
+ * with d_ino also causes nfsd readdirplus to fail.
+ *
+ * When called from ovl_create_object() => ovl_new_inode(), with
+ * ino = 0, i_ino will be updated to consistent value later on in
+ * ovl_get_inode() => ovl_fill_inode().
*/
if (ovl_same_dev(inode->i_sb)) {
inode->i_ino = ino;
diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
index eb9411461b695..b83955f31ded0 100644
--- a/fs/overlayfs/util.c
+++ b/fs/overlayfs/util.c
@@ -419,8 +419,6 @@ void ovl_inode_update(struct inode *inode, struct dentry *upperdentry)
smp_wmb();
OVL_I(inode)->__upperdentry = upperdentry;
if (inode_unhashed(inode)) {
- if (!inode->i_ino)
- inode->i_ino = upperinode->i_ino;
inode->i_private = upperinode;
__insert_inode_hash(inode, (unsigned long) upperinode);
}
--
2.25.1

[PATCH kernel-4.19 1/4] ovl: fix out of date comment and unreachable code
by Yang Yingliang 19 Oct '21
From: Amir Goldstein <amir73il(a)gmail.com>
mainline inclusion
from mainline-v5.7-rc1
commit 735c907d7b7df501e951ba07134b9f6f989a94e4
category: bugfix
bugzilla: NA
CVE: NA
-------------------------------------------------
ovl_inode_update() is no longer called from create object code path.
Fixes: 01b39dcc9568 ("ovl: use inode_insert5() to hash a newly...")
Signed-off-by: Amir Goldstein <amir73il(a)gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi(a)redhat.com>
Signed-off-by: Zheng Liang <zhengliang6(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/overlayfs/inode.c | 8 +++++---
fs/overlayfs/util.c | 2 --
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
index d1e9d926150b1..b016343a7209b 100644
--- a/fs/overlayfs/inode.c
+++ b/fs/overlayfs/inode.c
@@ -555,9 +555,11 @@ static void ovl_fill_inode(struct inode *inode, umode_t mode, dev_t rdev,
* bits to encode layer), set the same value used for st_ino to i_ino,
* so inode number exposed via /proc/locks and a like will be
* consistent with d_ino and st_ino values. An i_ino value inconsistent
- * with d_ino also causes nfsd readdirplus to fail. When called from
- * ovl_new_inode(), ino arg is 0, so i_ino will be updated to real
- * upper inode i_ino on ovl_inode_init() or ovl_inode_update().
+ * with d_ino also causes nfsd readdirplus to fail.
+ *
+ * When called from ovl_create_object() => ovl_new_inode(), with
+ * ino = 0, i_ino will be updated to consistent value later on in
+ * ovl_get_inode() => ovl_fill_inode().
*/
if (ovl_same_dev(inode->i_sb)) {
inode->i_ino = ino;
diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
index eb9411461b695..b83955f31ded0 100644
--- a/fs/overlayfs/util.c
+++ b/fs/overlayfs/util.c
@@ -419,8 +419,6 @@ void ovl_inode_update(struct inode *inode, struct dentry *upperdentry)
smp_wmb();
OVL_I(inode)->__upperdentry = upperdentry;
if (inode_unhashed(inode)) {
- if (!inode->i_ino)
- inode->i_ino = upperinode->i_ino;
inode->i_private = upperinode;
__insert_inode_hash(inode, (unsigned long) upperinode);
}
--
2.25.1

Re: [PATCH openEuler-1.0-LTS V2 5/5] drm/hisilicon: Features to support reading resolutions from EDID
by QiuLaibin 19 Oct '21
Hi gouhao,
This patch removes the call site of hibmc_connector_init(), so hibmc_connector_init()
is now defined but unused and triggers a build warning.
Also note that part of this patch's code was merged into mainline via
be8c8403f63cf ("drm/hisilicon: Code refactoring for hibmc_drv_vdac"),
so it would be best to backport that commit as a prerequisite patch and merge it
together with this one.
Best regards
>>>>>>>>>>>>>>>>>>>>>>>>>> [WARNING] build_arm64_kabi <<<<<<<<<<<<<<<<<<<<<<<<<<
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig && make -j8 ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c:125:1: warning:
‘hibmc_connector_init’ defined but not used [-Wunused-function]
hibmc_connector_init(struct hibmc_drm_private *priv)
^~~~~~~~~~~~~~~~~~~~
==================================END==================================
On 2021/9/18 11:11, Gou Hao wrote:
> From: Tian Tao <tiantao6(a)hisilicon.com>
>
> mainline inclusion
> from mainline-v5.14.0-rc7
> commit a0d078d06e516184e2f575f3803935697b5e3ac6
> category: bugfix
> bugzilla: https://gitee.com/openeuler/kernel/issues/I469VQ
> CVE: NA
>
> The modification of hibmc_vdac_init in the original patch cannot be
> incorporated. In this patch, the hibmc_vdac_init is backported from the
> mainline code.
> ---------------------------------------
> Use drm_get_edid to get the resolution, if that fails, set it to a fixed
> resolution. Rewrite the destroy callback function to release resources.
>
> Signed-off-by: Tian Tao <tiantao6(a)hisilicon.com>
> Reviewed-by: Thomas Zimmermann <tzimmermann(a)suse.de>
> Link: https://patchwork.freedesktop.org/patch/msgid/1600778670-60370-3-git-send...
> Signed-off-by: gouhao <gouhao(a)uniontech.com>
> ---
>  .../gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c | 55 ++++++++++++++-----
>  1 file changed, 41 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c
> index 90319a902..762e3404e 100644
> --- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c
> +++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c
> @@ -59,11 +59,23 @@ static int hibmc_valid_mode(int w, int h)
>  static int hibmc_connector_get_modes(struct drm_connector *connector)
>  {
>  	int count;
> +	void *edid;
> +	struct hibmc_connector *hibmc_connector = to_hibmc_connector(connector);
> +
> +	edid = drm_get_edid(connector, &hibmc_connector->adapter);
> +	if (edid) {
> +		drm_connector_update_edid_property(connector, edid);
> +		count = drm_add_edid_modes(connector, edid);
> +		if (count)
> +			goto out;
> +	}
>
>  	drm_connector_update_edid_property(connector, NULL);
>  	count = drm_add_modes_noedid(connector, 1920, 1200);
>  	drm_set_preferred_mode(connector, 1024, 768);
> +out:
> +	kfree(edid);
>  	return count;
>  }
>
> @@ -86,6 +98,14 @@ hibmc_connector_best_encoder(struct drm_connector *connector)
>  	return drm_encoder_find(connector->dev, NULL, connector->encoder_ids[0]);
>  }
>
> +static void hibmc_connector_destroy(struct drm_connector *connector)
> +{
> +	struct hibmc_connector *hibmc_connector = to_hibmc_connector(connector);
> +
> +	i2c_del_adapter(&hibmc_connector->adapter);
> +	drm_connector_cleanup(connector);
> +}
> +
>  static const struct drm_connector_helper_funcs
>  	hibmc_connector_helper_funcs = {
>  	.get_modes = hibmc_connector_get_modes,
> @@ -95,7 +115,7 @@ static const struct drm_connector_helper_funcs
>  static const struct drm_connector_funcs hibmc_connector_funcs = {
>  	.fill_modes = drm_helper_probe_single_connector_modes,
> -	.destroy = drm_connector_cleanup,
> +	.destroy = hibmc_connector_destroy,
>  	.reset = drm_atomic_helper_connector_reset,
>  	.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
>  	.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
> @@ -155,21 +175,16 @@ static const struct drm_encoder_funcs hibmc_encoder_funcs = {
>  int hibmc_vdac_init(struct hibmc_drm_private *priv)
>  {
>  	struct drm_device *dev = priv->dev;
> -	struct drm_encoder *encoder;
> -	struct drm_connector *connector;
> +	struct hibmc_connector *hibmc_connector = &priv->connector;
> +	struct drm_encoder *encoder = &priv->encoder;
> +	struct drm_connector *connector = &hibmc_connector->base;
>  	int ret;
>
> -	connector = hibmc_connector_init(priv);
>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Removing this call leaves hibmc_connector_init() defined but unused, which is what
triggers the build warning quoted above.
> -	if (IS_ERR(connector)) {
> -		DRM_ERROR("failed to create connector: %ld\n",
> -			  PTR_ERR(connector));
> -		return PTR_ERR(connector);
> -	}
> -
> -	encoder = devm_kzalloc(dev->dev, sizeof(*encoder), GFP_KERNEL);
> -	if (!encoder) {
> -		DRM_ERROR("failed to alloc memory when init encoder\n");
> -		return -ENOMEM;
> +
> +	ret = hibmc_ddc_create(dev, hibmc_connector);
> +	if (ret) {
> +		DRM_ERROR("failed to create ddc: %d\n", ret);
> +		return ret;
>  	}
>
>  	encoder->possible_crtcs = 0x1;
> @@ -181,6 +196,18 @@ int hibmc_vdac_init(struct hibmc_drm_private *priv)
>  	}
>
>  	drm_encoder_helper_add(encoder, &hibmc_encoder_helper_funcs);
> +
> +	ret = drm_connector_init_with_ddc(dev, connector,
> +					  &hibmc_connector_funcs,
> +					  DRM_MODE_CONNECTOR_VGA,
> +					  &hibmc_connector->adapter);
> +	if (ret) {
> +		DRM_ERROR("failed to init connector: %d\n", ret);
> +		return ret;
> +	}
> +
> +	drm_connector_helper_add(connector, &hibmc_connector_helper_funcs);
> +	drm_connector_attach_encoder(connector, encoder);
>
>  	return 0;
> --
> 2.20.1
>

19 Oct '21
From: gouhao <gouhao(a)uniontech.com>
Fix hibmc failing to get EDID.
issue: https://gitee.com/openeuler/kernel/issues/I469VQ
gouhao (5):
drm-Split-out-drm_probe_helper.h
drm-Add-ddc-link-in-sysfs-created-by-drm_connector
drm-Add-drm_connector_init-variant-with-ddc
drm/hisilicon: Support i2c driver algorithms for bit-shift adapters
drm/hisilicon: Features to support reading resolutions from EDID
drivers/gpu/drm/drm_connector.c | 35 +++++++
drivers/gpu/drm/drm_sysfs.c | 8 ++
drivers/gpu/drm/hisilicon/hibmc/Makefile | 3 +-
.../gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h | 26 +++++
.../gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c | 99 +++++++++++++++++++
.../gpu/drm/hisilicon/hibmc/hibmc_drm_vdac.c | 55 ++++++++---
include/drm/drm_connector.h | 18 ++++
include/drm/drm_probe_helper.h | 27 +++++
8 files changed, 256 insertions(+), 15 deletions(-)
create mode 100644 drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_i2c.c
create mode 100644 include/drm/drm_probe_helper.h
--
2.20.1
openEuler patches
Andi Kleen (1):
perf stat: Add a new --quiet option to 'perf stat'
Chris Packham (1):
i2c: mpc: Make use of i2c_recover_bus()
Desmond Cheong Zhi Xi (1):
drm: Lock pointer access in drm_master_release()
Jiri Olsa (1):
bpf: Forbid trampoline attach for functions with variable arguments
Kees Cook (1):
proc: Track /proc/$pid/attr/ opener mm_struct
Liangyan (1):
tracing: Correct the length check which causes memory corruption
Steven Rostedt (VMware) (1):
ftrace: Do not blindly read the ip address in ftrace_bug()
mpiglet (1):
arm64/mpam: add return value check for acpi_get_table()
wenzhiwei11 (1):
arm64/mpam: fix the problem that the ret variable is not initialized
yin-xiujiang (1):
arm64/mpam: fix device_errcode out of bounds
arch/arm64/kernel/mpam/mpam_ctrlmon.c | 2 +-
arch/arm64/kernel/mpam/mpam_device.c | 2 +-
drivers/acpi/arm64/mpam.c | 2 +-
drivers/gpu/drm/drm_auth.c | 3 ++-
drivers/i2c/busses/i2c-mpc.c | 18 ++++++++++++++++--
fs/proc/base.c | 9 ++++++++-
kernel/bpf/btf.c | 12 ++++++++++++
kernel/trace/ftrace.c | 8 +++++++-
kernel/trace/trace.c | 2 +-
tools/perf/Documentation/perf-stat.txt | 4 ++++
tools/perf/builtin-stat.c | 6 +++++-
tools/perf/util/stat.h | 1 +
12 files changed, 59 insertions(+), 10 deletions(-)
--
2.25.1

[PATCH openEuler-1.0-LTS] soc: aspeed: lpc-ctrl: Fix boundary check for mmap
by Yang Yingliang 18 Oct '21
by Yang Yingliang 18 Oct '21
18 Oct '21
From: Iwona Winiarska <iwona.winiarska(a)intel.com>
stable inclusion
from linux-4.19.207
commit 9c8891b638319ddba9cfa330247922cd960c95b0
CVE: CVE-2021-42252
--------------------------------
commit b49a0e69a7b1a68c8d3f64097d06dabb770fec96 upstream.
The check mixes pages (vm_pgoff) with bytes (vm_start, vm_end) on one
side of the comparison, and uses resource address (rather than just the
resource size) on the other side of the comparison.
This can allow malicious userspace to easily bypass the boundary check and
map pages that are located outside memory-region reserved by the driver.
Fixes: 6c4e97678501 ("drivers/misc: Add Aspeed LPC control driver")
Cc: stable(a)vger.kernel.org
Signed-off-by: Iwona Winiarska <iwona.winiarska(a)intel.com>
Reviewed-by: Andrew Jeffery <andrew(a)aj.id.au>
Tested-by: Andrew Jeffery <andrew(a)aj.id.au>
Reviewed-by: Joel Stanley <joel(a)aj.id.au>
Signed-off-by: Joel Stanley <joel(a)jms.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/misc/aspeed-lpc-ctrl.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/misc/aspeed-lpc-ctrl.c b/drivers/misc/aspeed-lpc-ctrl.c
index a024f8042259a..870ab0dfcde06 100644
--- a/drivers/misc/aspeed-lpc-ctrl.c
+++ b/drivers/misc/aspeed-lpc-ctrl.c
@@ -50,7 +50,7 @@ static int aspeed_lpc_ctrl_mmap(struct file *file, struct vm_area_struct *vma)
unsigned long vsize = vma->vm_end - vma->vm_start;
pgprot_t prot = vma->vm_page_prot;
- if (vma->vm_pgoff + vsize > lpc_ctrl->mem_base + lpc_ctrl->mem_size)
+ if (vma->vm_pgoff + vma_pages(vma) > lpc_ctrl->mem_size >> PAGE_SHIFT)
return -EINVAL;
/* ast2400/2500 AHB accesses are not cache coherent */
--
2.25.1
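Note on the corrected condition: the old check added a page offset (vm_pgoff) to a byte
length (vsize) and compared the sum against mem_base + mem_size, so a large offset could
still slip past the bound. The new check keeps every term in pages and bounds it by the
region size alone. A minimal sketch of the same idea, with a hypothetical helper name
(not the code as merged):

#include <linux/mm.h>

/*
 * Sketch of the corrected bound check: compare page offset plus page count
 * against the number of pages in the reserved region, nothing else.
 */
static int lpc_region_mmap_in_bounds(struct vm_area_struct *vma,
				     resource_size_t mem_size)
{
	unsigned long vpages = vma_pages(vma);	/* (vm_end - vm_start) >> PAGE_SHIFT */

	if (vma->vm_pgoff + vpages > (mem_size >> PAGE_SHIFT))
		return -EINVAL;	/* request runs past the reserved region */

	return 0;
}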

[PATCH openEuler-1.0-LTS 1/2] mmap: userswap: fix memory leak in do_mmap
by Yang Yingliang 18 Oct '21
From: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4AHP2
CVE: NA
-------------------------------------------------
When userswap is enabled, the memory pointed to by 'pages' is not freed on
the abnormal branch in do_mmap(). To fix the issue while keeping do_mmap()
mostly unchanged, rename do_mmap() to __do_mmap() and move the memory
allocation and freeing out of __do_mmap(). When __do_mmap() returns an
error value, jump to the error label to free the memory.
Signed-off-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
mm/mmap.c | 406 ++++++++++++++++++++++++++++--------------------------
1 file changed, 211 insertions(+), 195 deletions(-)
diff --git a/mm/mmap.c b/mm/mmap.c
index 90cff771af771..074fbc0c559a7 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1373,173 +1373,8 @@ int unregister_mmap_notifier(struct notifier_block *nb)
EXPORT_SYMBOL_GPL(unregister_mmap_notifier);
#endif
-#ifdef CONFIG_USERSWAP
-/*
- * Check if pages between 'addr ~ addr+len' can be user swapped. If so, get
- * the reference of the pages and return the pages through input parameters
- * 'ppages'.
- */
-int pages_can_be_swapped(struct mm_struct *mm, unsigned long addr,
- unsigned long len, struct page ***ppages)
-{
- struct vm_area_struct *vma;
- struct page *page = NULL;
- struct page **pages = NULL;
- unsigned long addr_start, addr_end;
- unsigned long ret;
- int i, page_num = 0;
-
- pages = kmalloc(sizeof(struct page *) * (len / PAGE_SIZE), GFP_KERNEL);
- if (!pages)
- return -ENOMEM;
-
- addr_start = addr;
- addr_end = addr + len;
- while (addr < addr_end) {
- vma = find_vma(mm, addr);
- if (!vma || !vma_is_anonymous(vma) ||
- (vma->vm_flags & VM_LOCKED) || vma->vm_file
- || (vma->vm_flags & VM_STACK) || (vma->vm_flags & (VM_IO | VM_PFNMAP))) {
- ret = -EINVAL;
- goto out;
- }
- if (!(vma->vm_flags & VM_UFFD_MISSING)) {
- ret = -EAGAIN;
- goto out;
- }
-get_again:
- /* follow_page will inc page ref, dec the ref after we remap the page */
- page = follow_page(vma, addr, FOLL_GET);
- if (IS_ERR_OR_NULL(page)) {
- ret = -ENODEV;
- goto out;
- }
- pages[page_num] = page;
- page_num++;
- if (!PageAnon(page) || !PageSwapBacked(page) || PageHuge(page) || PageSwapCache(page)) {
- ret = -EINVAL;
- goto out;
- } else if (PageTransCompound(page)) {
- if (trylock_page(page)) {
- if (!split_huge_page(page)) {
- put_page(page);
- page_num--;
- unlock_page(page);
- goto get_again;
- } else {
- unlock_page(page);
- ret = -EINVAL;
- goto out;
- }
- } else {
- ret = -EINVAL;
- goto out;
- }
- }
- if (page_mapcount(page) > 1 || page_mapcount(page) + 1 != page_count(page)) {
- ret = -EBUSY;
- goto out;
- }
- addr += PAGE_SIZE;
- }
-
- *ppages = pages;
- return 0;
-
-out:
- for (i = 0; i < page_num; i++)
- put_page(pages[i]);
- if (pages)
- kfree(pages);
- *ppages = NULL;
- return ret;
-}
-
-/*
- * In uswap situation, we use the bit 0 of the returned address to indicate
- * whether the pages are dirty.
- */
-#define USWAP_PAGES_DIRTY 1
-
-/* unmap the pages between 'addr ~ addr+len' and remap them to a new address */
-unsigned long do_user_swap(struct mm_struct *mm, unsigned long addr_start,
- unsigned long len, struct page **pages, unsigned long new_addr)
-{
- struct vm_area_struct *vma;
- struct page *page;
- pmd_t *pmd;
- pte_t *pte, old_pte;
- spinlock_t *ptl;
- unsigned long addr, addr_end;
- bool pages_dirty = false;
- int i, err;
-
- addr_end = addr_start + len;
- lru_add_drain();
- mmu_notifier_invalidate_range_start(mm, addr_start, addr_end);
- addr = addr_start;
- i = 0;
- while (addr < addr_end) {
- page = pages[i];
- vma = find_vma(mm, addr);
- if (!vma) {
- mmu_notifier_invalidate_range_end(mm, addr_start, addr_end);
- WARN_ON("find_vma failed\n");
- return -EINVAL;
- }
- pmd = mm_find_pmd(mm, addr);
- if (!pmd) {
- mmu_notifier_invalidate_range_end(mm, addr_start, addr_end);
- WARN_ON("mm_find_pmd failed, addr:%llx\n");
- return -ENXIO;
- }
- pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
- flush_cache_page(vma, addr, pte_pfn(*pte));
- old_pte = ptep_clear_flush(vma, addr, pte);
- if (pte_dirty(old_pte) || PageDirty(page))
- pages_dirty = true;
- set_pte(pte, swp_entry_to_pte(swp_entry(SWP_USERSWAP_ENTRY, page_to_pfn(page))));
- dec_mm_counter(mm, MM_ANONPAGES);
- page_remove_rmap(page, false);
- put_page(page);
-
- pte_unmap_unlock(pte, ptl);
- vma->vm_flags |= VM_USWAP;
- page->mapping = NULL;
- addr += PAGE_SIZE;
- i++;
- }
- mmu_notifier_invalidate_range_end(mm, addr_start, addr_end);
-
- addr_start = new_addr;
- addr_end = new_addr + len;
- addr = addr_start;
- vma = find_vma(mm, addr);
- i = 0;
- while (addr < addr_end) {
- page = pages[i];
- if (addr > vma->vm_end - 1)
- vma = find_vma(mm, addr);
- err = vm_insert_page(vma, addr, page);
- if (err) {
- pr_err("vm_insert_page failed:%d\n", err);
- }
- i++;
- addr += PAGE_SIZE;
- }
- vma->vm_flags |= VM_USWAP;
-
- if (pages_dirty)
- new_addr = new_addr | USWAP_PAGES_DIRTY;
-
- return new_addr;
-}
-#endif
-
-/*
- * The caller must hold down_write(&current->mm->mmap_sem).
- */
-unsigned long do_mmap(struct file *file, unsigned long addr,
+static inline
+unsigned long __do_mmap(struct file *file, unsigned long addr,
unsigned long len, unsigned long prot,
unsigned long flags, vm_flags_t vm_flags,
unsigned long pgoff, unsigned long *populate,
@@ -1547,12 +1382,6 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
{
struct mm_struct *mm = current->mm;
int pkey = 0;
-#ifdef CONFIG_USERSWAP
- struct page **pages = NULL;
- unsigned long addr_start = addr;
- int i, page_num = 0;
- unsigned long ret;
-#endif
*populate = 0;
@@ -1569,17 +1398,6 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
if (!(file && path_noexec(&file->f_path)))
prot |= PROT_EXEC;
-#ifdef CONFIG_USERSWAP
- if (enable_userswap && (flags & MAP_REPLACE)) {
- if (offset_in_page(addr) || (len % PAGE_SIZE))
- return -EINVAL;
- page_num = len / PAGE_SIZE;
- ret = pages_can_be_swapped(mm, addr, len, &pages);
- if (ret)
- return ret;
- }
-#endif
-
/* force arch specific MAP_FIXED handling in get_unmapped_area */
if (flags & MAP_FIXED_NOREPLACE)
flags |= MAP_FIXED;
@@ -1752,25 +1570,203 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
if (flags & MAP_CHECKNODE)
set_vm_checknode(&vm_flags, flags);
-#ifdef CONFIG_USERSWAP
- /* mark the vma as special to avoid merging with other vmas */
- if (enable_userswap && (flags & MAP_REPLACE))
- vm_flags |= VM_SPECIAL;
-#endif
-
addr = mmap_region(file, addr, len, vm_flags, pgoff, uf);
if (!IS_ERR_VALUE(addr) &&
((vm_flags & VM_LOCKED) ||
(flags & (MAP_POPULATE | MAP_NONBLOCK)) == MAP_POPULATE))
*populate = len;
-#ifndef CONFIG_USERSWAP
return addr;
-#else
- if (!enable_userswap || !(flags & MAP_REPLACE))
- return addr;
+}
+
+#ifdef CONFIG_USERSWAP
+/*
+ * Check if pages between 'addr ~ addr+len' can be user swapped. If so, get
+ * the reference of the pages and return the pages through input parameters
+ * 'ppages'.
+ */
+int pages_can_be_swapped(struct mm_struct *mm, unsigned long addr,
+ unsigned long len, struct page ***ppages)
+{
+ struct vm_area_struct *vma;
+ struct page *page = NULL;
+ struct page **pages = NULL;
+ unsigned long addr_start, addr_end;
+ unsigned long ret;
+ int i, page_num = 0;
+
+ pages = kmalloc(sizeof(struct page *) * (len / PAGE_SIZE), GFP_KERNEL);
+ if (!pages)
+ return -ENOMEM;
+
+ addr_start = addr;
+ addr_end = addr + len;
+ while (addr < addr_end) {
+ vma = find_vma(mm, addr);
+ if (!vma || !vma_is_anonymous(vma) ||
+ (vma->vm_flags & VM_LOCKED) || vma->vm_file
+ || (vma->vm_flags & VM_STACK) || (vma->vm_flags & (VM_IO | VM_PFNMAP))) {
+ ret = -EINVAL;
+ goto out;
+ }
+ if (!(vma->vm_flags & VM_UFFD_MISSING)) {
+ ret = -EAGAIN;
+ goto out;
+ }
+get_again:
+ /* follow_page will inc page ref, dec the ref after we remap the page */
+ page = follow_page(vma, addr, FOLL_GET);
+ if (IS_ERR_OR_NULL(page)) {
+ ret = -ENODEV;
+ goto out;
+ }
+ pages[page_num] = page;
+ page_num++;
+ if (!PageAnon(page) || !PageSwapBacked(page) || PageHuge(page) || PageSwapCache(page)) {
+ ret = -EINVAL;
+ goto out;
+ } else if (PageTransCompound(page)) {
+ if (trylock_page(page)) {
+ if (!split_huge_page(page)) {
+ put_page(page);
+ page_num--;
+ unlock_page(page);
+ goto get_again;
+ } else {
+ unlock_page(page);
+ ret = -EINVAL;
+ goto out;
+ }
+ } else {
+ ret = -EINVAL;
+ goto out;
+ }
+ }
+ if (page_mapcount(page) > 1 || page_mapcount(page) + 1 != page_count(page)) {
+ ret = -EBUSY;
+ goto out;
+ }
+ addr += PAGE_SIZE;
+ }
+
+ *ppages = pages;
+ return 0;
+
+out:
+ for (i = 0; i < page_num; i++)
+ put_page(pages[i]);
+ if (pages)
+ kfree(pages);
+ *ppages = NULL;
+ return ret;
+}
+
+/*
+ * In uswap situation, we use the bit 0 of the returned address to indicate
+ * whether the pages are dirty.
+ */
+#define USWAP_PAGES_DIRTY 1
+
+/* unmap the pages between 'addr ~ addr+len' and remap them to a new address */
+unsigned long do_user_swap(struct mm_struct *mm, unsigned long addr_start,
+ unsigned long len, struct page **pages, unsigned long new_addr)
+{
+ struct vm_area_struct *vma;
+ struct page *page;
+ pmd_t *pmd;
+ pte_t *pte, old_pte;
+ spinlock_t *ptl;
+ unsigned long addr, addr_end;
+ bool pages_dirty = false;
+ int i, err;
+ addr_end = addr_start + len;
+ lru_add_drain();
+ mmu_notifier_invalidate_range_start(mm, addr_start, addr_end);
+ addr = addr_start;
+ i = 0;
+ while (addr < addr_end) {
+ page = pages[i];
+ vma = find_vma(mm, addr);
+ if (!vma) {
+ mmu_notifier_invalidate_range_end(mm, addr_start, addr_end);
+ WARN_ON("find_vma failed\n");
+ return -EINVAL;
+ }
+ pmd = mm_find_pmd(mm, addr);
+ if (!pmd) {
+ mmu_notifier_invalidate_range_end(mm, addr_start, addr_end);
+ WARN_ON("mm_find_pmd failed, addr:%llx\n");
+ return -ENXIO;
+ }
+ pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
+ flush_cache_page(vma, addr, pte_pfn(*pte));
+ old_pte = ptep_clear_flush(vma, addr, pte);
+ if (pte_dirty(old_pte) || PageDirty(page))
+ pages_dirty = true;
+ set_pte(pte, swp_entry_to_pte(swp_entry(SWP_USERSWAP_ENTRY, page_to_pfn(page))));
+ dec_mm_counter(mm, MM_ANONPAGES);
+ page_remove_rmap(page, false);
+ put_page(page);
+
+ pte_unmap_unlock(pte, ptl);
+ vma->vm_flags |= VM_USWAP;
+ page->mapping = NULL;
+ addr += PAGE_SIZE;
+ i++;
+ }
+ mmu_notifier_invalidate_range_end(mm, addr_start, addr_end);
+
+ addr_start = new_addr;
+ addr_end = new_addr + len;
+ addr = addr_start;
+ vma = find_vma(mm, addr);
+ i = 0;
+ while (addr < addr_end) {
+ page = pages[i];
+ if (addr > vma->vm_end - 1)
+ vma = find_vma(mm, addr);
+ err = vm_insert_page(vma, addr, page);
+ if (err) {
+ pr_err("vm_insert_page failed:%d\n", err);
+ }
+ i++;
+ addr += PAGE_SIZE;
+ }
+ vma->vm_flags |= VM_USWAP;
+
+ if (pages_dirty)
+ new_addr = new_addr | USWAP_PAGES_DIRTY;
+
+ return new_addr;
+}
+
+static inline
+unsigned long do_uswap_mmap(struct file *file, unsigned long addr,
+ unsigned long len, unsigned long prot,
+ unsigned long flags, vm_flags_t vm_flags,
+ unsigned long pgoff, unsigned long *populate,
+ struct list_head *uf)
+{
+ struct mm_struct *mm = current->mm;
+ unsigned long addr_start = addr;
+ struct page **pages = NULL;
+ unsigned long ret;
+ int i, page_num = 0;
+
+ if (!len || offset_in_page(addr) || (len % PAGE_SIZE))
+ return -EINVAL;
+
+ page_num = len / PAGE_SIZE;
+ ret = pages_can_be_swapped(mm, addr, len, &pages);
+ if (ret)
+ return ret;
+
+ /* mark the vma as special to avoid merging with other vmas */
+ vm_flags |= VM_SPECIAL;
+
+ addr = __do_mmap(file, addr, len, prot, flags, vm_flags, pgoff,
+ populate, uf);
if (IS_ERR_VALUE(addr)) {
- pr_info("mmap_region failed, return addr:%lx\n", addr);
ret = addr;
goto out;
}
@@ -1780,10 +1776,30 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
/* follow_page() above increased the reference*/
for (i = 0; i < page_num; i++)
put_page(pages[i]);
+
if (pages)
kfree(pages);
+
return ret;
+}
+#endif
+
+/*
+ * The caller must hold down_write(&current->mm->mmap_sem).
+ */
+unsigned long do_mmap(struct file *file, unsigned long addr,
+ unsigned long len, unsigned long prot,
+ unsigned long flags, vm_flags_t vm_flags,
+ unsigned long pgoff, unsigned long *populate,
+ struct list_head *uf)
+{
+#ifdef CONFIG_USERSWAP
+ if (enable_userswap && (flags & MAP_REPLACE))
+ return do_uswap_mmap(file, addr, len, prot, flags, vm_flags,
+ pgoff, populate, uf);
#endif
+ return __do_mmap(file, addr, len, prot, flags, vm_flags,
+ pgoff, populate, uf);
}
unsigned long ksys_mmap_pgoff(unsigned long addr, unsigned long len,
--
2.25.1
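Condensed from the long diff above, the resulting shape of the userswap path is roughly
the following sketch (argument handling abbreviated; the unchanged remap step between the
two hunks is only indicated by a comment, so this is not the verbatim code):

/*
 * Sketch of the split: __do_mmap() now carries the old do_mmap() body,
 * while do_uswap_mmap() owns the 'pages' array and releases it on every
 * exit path, so an error return from __do_mmap() no longer leaks it.
 */
static inline unsigned long do_uswap_mmap(struct file *file, unsigned long addr,
					  unsigned long len, unsigned long prot,
					  unsigned long flags, vm_flags_t vm_flags,
					  unsigned long pgoff, unsigned long *populate,
					  struct list_head *uf)
{
	struct page **pages = NULL;
	unsigned long addr_start = addr;
	unsigned long ret;
	int i, page_num;

	if (!len || offset_in_page(addr) || (len % PAGE_SIZE))
		return -EINVAL;

	page_num = len / PAGE_SIZE;
	ret = pages_can_be_swapped(current->mm, addr, len, &pages);
	if (ret)
		return ret;

	vm_flags |= VM_SPECIAL;		/* keep this vma from merging with others */

	addr = __do_mmap(file, addr, len, prot, flags, vm_flags,
			 pgoff, populate, uf);
	if (IS_ERR_VALUE(addr)) {
		ret = addr;
		goto out;		/* the branch that used to leak 'pages' */
	}

	/* ... unchanged context not shown in the hunks: ret is set by the
	 * remap step, which moves the pages from addr_start to the new
	 * address ... */

out:
	for (i = 0; i < page_num; i++)	/* drop the references follow_page() took */
		put_page(pages[i]);
	kfree(pages);
	return ret;
}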

Re: [PATCH openEuler-21.03] MIPS: Fix kernel hang under FUNCTION_GRAPH_TRACER and PREEMPT_TRACER
by chengjian (D) 18 Oct '21
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
On 2021/10/16 23:06, jack wrote:
> From: Tiezhu Yang <yangtiezhu(a)loongson.cn>
>
> stable inclusion
> from stable-v5.10.44
> commit 7519ece673e300b0362572edbde7e030552705ec
> bugzilla:https://bugzilla.openeuler.org/show_bug.cgi?id=417
> CVE: NA
>
> -------------------------------------------------
>
> [ Upstream commit 78cf0eb926cb1abeff2106bae67752e032fe5f3e ]
>
> When update the latest mainline kernel with the following three configs,
> the kernel hangs during startup:
>
> (1) CONFIG_FUNCTION_GRAPH_TRACER=y
> (2) CONFIG_PREEMPT_TRACER=y
> (3) CONFIG_FTRACE_STARTUP_TEST=y
>
> When update the latest mainline kernel with the above two configs (1)
> and (2), the kernel starts normally, but it still hangs when execute
> the following command:
>
> echo "function_graph" > /sys/kernel/debug/tracing/current_tracer
>
> Without CONFIG_PREEMPT_TRACER=y, the above two kinds of kernel hangs
> disappeared, so it seems that CONFIG_PREEMPT_TRACER has some influences
> with function_graph tracer at the first glance.
>
> I use ejtag to find out the epc address is related with preempt_enable()
> in the file arch/mips/lib/mips-atomic.c, because function tracing can
> trace the preempt_{enable,disable} calls that are traced, replace them
> with preempt_{enable,disable}_notrace to prevent function tracing from
> going into an infinite loop, and then it can fix the kernel hang issue.
>
> By the way, it seems that this commit is a complement and improvement of
> commit f93a1a00f2bd ("MIPS: Fix crash that occurs when function tracing
> is enabled").
>
> Signed-off-by: Tiezhu Yang <yangtiezhu(a)loongson.cn>
> Cc: Steven Rostedt <rostedt(a)goodmis.org>
> Signed-off-by: Thomas Bogendoerfer <tsbogend(a)alpha.franken.de>
> Signed-off-by: Sasha Levin <sashal(a)kernel.org>
> Signed-off-by: jack <18380124974(a)163.com>
> ---
> arch/mips/lib/mips-atomic.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/mips/lib/mips-atomic.c b/arch/mips/lib/mips-atomic.c
> index de03838b343b..a9b72eacfc0b 100644
> --- a/arch/mips/lib/mips-atomic.c
> +++ b/arch/mips/lib/mips-atomic.c
> @@ -37,7 +37,7 @@
> */
> notrace void arch_local_irq_disable(void)
> {
> - preempt_disable();
> + preempt_disable_notrace();
>
> __asm__ __volatile__(
> " .set push \n"
> @@ -53,7 +53,7 @@ notrace void arch_local_irq_disable(void)
> : /* no inputs */
> : "memory");
>
> - preempt_enable();
> + preempt_enable_notrace();
> }
> EXPORT_SYMBOL(arch_local_irq_disable);
>
> @@ -61,7 +61,7 @@ notrace unsigned long arch_local_irq_save(void)
> {
> unsigned long flags;
>
> - preempt_disable();
> + preempt_disable_notrace();
>
> __asm__ __volatile__(
> " .set push \n"
> @@ -78,7 +78,7 @@ notrace unsigned long arch_local_irq_save(void)
> : /* no inputs */
> : "memory");
>
> - preempt_enable();
> + preempt_enable_notrace();
>
> return flags;
> }
> @@ -88,7 +88,7 @@ notrace void arch_local_irq_restore(unsigned long flags)
> {
> unsigned long __tmp1;
>
> - preempt_disable();
> + preempt_disable_notrace();
>
> __asm__ __volatile__(
> " .set push \n"
> @@ -106,7 +106,7 @@ notrace void arch_local_irq_restore(unsigned long flags)
> : "0" (flags)
> : "memory");
>
> - preempt_enable();
> + preempt_enable_notrace();
> }
> EXPORT_SYMBOL(arch_local_irq_restore);
>
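A side note on the rule the fix follows: any helper that the tracer's own entry code can
reach must avoid the traceable preempt primitives, otherwise enabling the tracer recurses
back into the helper. A minimal sketch of the safe shape (illustrative function name, not
the MIPS code itself):

#include <linux/compiler.h>
#include <linux/preempt.h>

/*
 * Sketch: a notrace helper reachable from the function-graph entry path.
 * It must also use the _notrace preempt variants, because the plain
 * preempt_disable()/preempt_enable() are themselves traceable and would
 * re-enter the tracer in an endless loop.
 */
notrace void helper_called_from_tracer_path(void)
{
	preempt_disable_notrace();

	/* ... the low-level work (IRQ mask manipulation on MIPS) ... */

	preempt_enable_notrace();
}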

[PATCH kernel-4.19] arm64/mpam: fix the problem that the ret variable is not initialized
by Yang Yingliang 18 Oct '21
From: wenzhiwei11 <wenzhiwei(a)kylinos.cn>
kylin inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4AHUL
CVE: NA
---------------------------------------------------
Initialize the variable "ret" in schemata_list_init().
Signed-off-by: wenzhiwei11 <wenzhiwei(a)kylinos.cn> # openEuler_contributor
Reviewed-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/kernel/mpam/mpam_ctrlmon.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/mpam/mpam_ctrlmon.c b/arch/arm64/kernel/mpam/mpam_ctrlmon.c
index b1d32d432556c..b7508ad8c5314 100644
--- a/arch/arm64/kernel/mpam/mpam_ctrlmon.c
+++ b/arch/arm64/kernel/mpam/mpam_ctrlmon.c
@@ -127,7 +127,7 @@ static int add_schema(enum resctrl_conf_type t, struct resctrl_resource *r)
int schemata_list_init(void)
{
- int ret;
+ int ret = 0;
struct mpam_resctrl_res *res;
struct resctrl_resource *r;
--
2.25.1

18 Oct '21
From: Desmond Cheong Zhi Xi <desmondcheongzx(a)gmail.com>
stable inclusion
from stable-v5.10.44
commit aa8591a58cbd2986090709e4202881f18e8ae30e
bugzilla:https://bugzilla.openeuler.org/show_bug.cgi?id=435
CVE: NA
-------------------------------------------------
commit c336a5ee984708db4826ef9e47d184e638e29717 upstream.
This patch eliminates the following smatch warning:
drivers/gpu/drm/drm_auth.c:320 drm_master_release() warn: unlocked access 'master' (line 318) expected lock '&dev->master_mutex'
The 'file_priv->master' field should be protected by the mutex lock to
'&dev->master_mutex'. This is because other processes can concurrently
modify this field and free the current 'file_priv->master'
pointer. This could result in a use-after-free error when 'master' is
dereferenced in subsequent function calls to
'drm_legacy_lock_master_cleanup()' or to 'drm_lease_revoke()'.
An example of a scenario that would produce this error can be seen
from a similar bug in 'drm_getunique()' that was reported by Syzbot:
https://syzkaller.appspot.com/bug?id=148d2f1dfac64af52ffd27b661981a540724f8…
In the Syzbot report, another process concurrently acquired the
device's master mutex in 'drm_setmaster_ioctl()', then overwrote
'fpriv->master' in 'drm_new_set_master()'. The old value of
'fpriv->master' was subsequently freed before the mutex was unlocked.
Reported-by: Dan Carpenter <dan.carpenter(a)oracle.com>
Signed-off-by: Desmond Cheong Zhi Xi <desmondcheongzx(a)gmail.com>
Cc: stable(a)vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter(a)ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210609092119.173590-1-desmo…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: holmes <holmes(a)my.swjtu.edu.cn>
---
drivers/gpu/drm/drm_auth.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c
index f2d46b7ac6f9..232abbba3686 100644
--- a/drivers/gpu/drm/drm_auth.c
+++ b/drivers/gpu/drm/drm_auth.c
@@ -314,9 +314,10 @@ int drm_master_open(struct drm_file *file_priv)
void drm_master_release(struct drm_file *file_priv)
{
struct drm_device *dev = file_priv->minor->dev;
- struct drm_master *master = file_priv->master;
+ struct drm_master *master;
mutex_lock(&dev->master_mutex);
+ master = file_priv->master;
if (file_priv->magic)
idr_remove(&file_priv->master->magic_map, file_priv->magic);
--
2.23.0

18 Oct '21
From: Trond Myklebust <trond.myklebust(a)hammerspace.com>
mainline inclusion
from mainline-v5.7-rc1
commit 3c9e502b59fbd243cfac7cc6c875e432d285102a
category: bugfix
bugzilla: 182252
CVE: NA
-----------------------------------------------
Add a helper nfs_client_for_each_server() to iterate through all the
filesystems that are attached to a struct nfs_client, and apply
a function to all the active ones.
Signed-off-by: Trond Myklebust <trond.myklebust(a)hammerspace.com>
Signed-off-by: ChenXiaoSong <chenxiaosong2(a)huawei.com>
Reviewed-by: Zhang Xiaoxu <zhangxiaoxu5(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/nfs/internal.h | 4 +++-
fs/nfs/super.c | 35 +++++++++++++++++++++++++++++++++++
2 files changed, 38 insertions(+), 1 deletion(-)
diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
index cc07189a501fa..c3c0b40f163f1 100644
--- a/fs/nfs/internal.h
+++ b/fs/nfs/internal.h
@@ -417,7 +417,9 @@ extern int __init register_nfs_fs(void);
extern void __exit unregister_nfs_fs(void);
extern bool nfs_sb_active(struct super_block *sb);
extern void nfs_sb_deactive(struct super_block *sb);
-
+extern int nfs_client_for_each_server(struct nfs_client *clp,
+ int (*fn)(struct nfs_server *, void *),
+ void *data);
/* io.c */
extern void nfs_start_io_read(struct inode *inode);
extern void nfs_end_io_read(struct inode *inode);
diff --git a/fs/nfs/super.c b/fs/nfs/super.c
index fe107348aabe6..48bcdcf4d039e 100644
--- a/fs/nfs/super.c
+++ b/fs/nfs/super.c
@@ -429,6 +429,41 @@ void nfs_sb_deactive(struct super_block *sb)
}
EXPORT_SYMBOL_GPL(nfs_sb_deactive);
+static int __nfs_list_for_each_server(struct list_head *head,
+ int (*fn)(struct nfs_server *, void *),
+ void *data)
+{
+ struct nfs_server *server, *last = NULL;
+ int ret = 0;
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(server, head, client_link) {
+ if (!nfs_sb_active(server->super))
+ continue;
+ rcu_read_unlock();
+ if (last)
+ nfs_sb_deactive(last->super);
+ last = server;
+ ret = fn(server, data);
+ if (ret)
+ goto out;
+ rcu_read_lock();
+ }
+ rcu_read_unlock();
+out:
+ if (last)
+ nfs_sb_deactive(last->super);
+ return ret;
+}
+
+int nfs_client_for_each_server(struct nfs_client *clp,
+ int (*fn)(struct nfs_server *, void *),
+ void *data)
+{
+ return __nfs_list_for_each_server(&clp->cl_superblocks, fn, data);
+}
+EXPORT_SYMBOL_GPL(nfs_client_for_each_server);
+
/*
* Deliver file system statistics to userspace
*/
--
2.25.1
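A usage sketch for the new helper (the callback and wrapper below are hypothetical,
purely to illustrate the calling convention; they are not part of the patch):

#include "internal.h"	/* fs/nfs/internal.h, for nfs_client_for_each_server() */

/* Hypothetical callback: count the active servers hanging off one nfs_client. */
static int count_active_server(struct nfs_server *server, void *data)
{
	(*(unsigned int *)data)++;
	return 0;	/* a non-zero return stops the iteration early */
}

static unsigned int nfs_count_active_servers(struct nfs_client *clp)
{
	unsigned int count = 0;

	nfs_client_for_each_server(clp, count_active_server, &count);
	return count;
}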

18 Oct '21
From: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4AHP2
CVE: NA
-------------------------------------------------
When userswap is enabled, the memory pointed to by 'pages' is not freed on
the abnormal branch in do_mmap(). To fix the issue while keeping do_mmap()
mostly unchanged, rename do_mmap() to __do_mmap() and move the memory
allocation and freeing out of __do_mmap(). When __do_mmap() returns an
error value, jump to the error label to free the memory.
Signed-off-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
mm/mmap.c | 404 +++++++++++++++++++++++++++---------------------------
1 file changed, 204 insertions(+), 200 deletions(-)
diff --git a/mm/mmap.c b/mm/mmap.c
index 378e1869ac7a0..69848726063c7 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1384,172 +1384,6 @@ static unsigned long __mmap_region(struct mm_struct *mm,
unsigned long len, vm_flags_t vm_flags,
unsigned long pgoff, struct list_head *uf);
-#ifdef CONFIG_USERSWAP
-/*
- * Check if pages between 'addr ~ addr+len' can be user swapped. If so, get
- * the reference of the pages and return the pages through input parameters
- * 'ppages'.
- */
-int pages_can_be_swapped(struct mm_struct *mm, unsigned long addr,
- unsigned long len, struct page ***ppages)
-{
- struct vm_area_struct *vma;
- struct page *page = NULL;
- struct page **pages = NULL;
- unsigned long addr_start, addr_end;
- unsigned long ret;
- int i, page_num = 0;
-
- pages = kmalloc(sizeof(struct page *) * (len / PAGE_SIZE), GFP_KERNEL);
- if (!pages)
- return -ENOMEM;
-
- addr_start = addr;
- addr_end = addr + len;
- while (addr < addr_end) {
- vma = find_vma(mm, addr);
- if (!vma || !vma_is_anonymous(vma) ||
- (vma->vm_flags & VM_LOCKED) || vma->vm_file
- || (vma->vm_flags & VM_STACK) || (vma->vm_flags & (VM_IO | VM_PFNMAP))) {
- ret = -EINVAL;
- goto out;
- }
- if (!(vma->vm_flags & VM_UFFD_MISSING)) {
- ret = -EAGAIN;
- goto out;
- }
-get_again:
- /* follow_page will inc page ref, dec the ref after we remap the page */
- page = follow_page(vma, addr, FOLL_GET);
- if (IS_ERR_OR_NULL(page)) {
- ret = -ENODEV;
- goto out;
- }
- pages[page_num] = page;
- page_num++;
- if (!PageAnon(page) || !PageSwapBacked(page) || PageHuge(page) || PageSwapCache(page)) {
- ret = -EINVAL;
- goto out;
- } else if (PageTransCompound(page)) {
- if (trylock_page(page)) {
- if (!split_huge_page(page)) {
- put_page(page);
- page_num--;
- unlock_page(page);
- goto get_again;
- } else {
- unlock_page(page);
- ret = -EINVAL;
- goto out;
- }
- } else {
- ret = -EINVAL;
- goto out;
- }
- }
- if (page_mapcount(page) > 1 || page_mapcount(page) + 1 != page_count(page)) {
- ret = -EBUSY;
- goto out;
- }
- addr += PAGE_SIZE;
- }
-
- *ppages = pages;
- return 0;
-
-out:
- for (i = 0; i < page_num; i++)
- put_page(pages[i]);
- if (pages)
- kfree(pages);
- *ppages = NULL;
- return ret;
-}
-
-/*
- * In uswap situation, we use the bit 0 of the returned address to indicate
- * whether the pages are dirty.
- */
-#define USWAP_PAGES_DIRTY 1
-
-/* unmap the pages between 'addr ~ addr+len' and remap them to a new address */
-unsigned long do_user_swap(struct mm_struct *mm, unsigned long addr_start,
- unsigned long len, struct page **pages, unsigned long new_addr)
-{
- struct vm_area_struct *vma;
- struct page *page;
- pmd_t *pmd;
- pte_t *pte, old_pte;
- spinlock_t *ptl;
- unsigned long addr, addr_end;
- bool pages_dirty = false;
- int i, err;
-
- addr_end = addr_start + len;
- lru_add_drain();
- mmu_notifier_invalidate_range_start(mm, addr_start, addr_end);
- addr = addr_start;
- i = 0;
- while (addr < addr_end) {
- page = pages[i];
- vma = find_vma(mm, addr);
- if (!vma) {
- mmu_notifier_invalidate_range_end(mm, addr_start, addr_end);
- WARN_ON("find_vma failed\n");
- return -EINVAL;
- }
- pmd = mm_find_pmd(mm, addr);
- if (!pmd) {
- mmu_notifier_invalidate_range_end(mm, addr_start, addr_end);
- WARN_ON("mm_find_pmd failed, addr:%llx\n");
- return -ENXIO;
- }
- pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
- flush_cache_page(vma, addr, pte_pfn(*pte));
- old_pte = ptep_clear_flush(vma, addr, pte);
- if (pte_dirty(old_pte) || PageDirty(page))
- pages_dirty = true;
- set_pte(pte, swp_entry_to_pte(swp_entry(SWP_USERSWAP_ENTRY, page_to_pfn(page))));
- dec_mm_counter(mm, MM_ANONPAGES);
- page_remove_rmap(page, false);
- put_page(page);
-
- pte_unmap_unlock(pte, ptl);
- vma->vm_flags |= VM_USWAP;
- page->mapping = NULL;
- addr += PAGE_SIZE;
- i++;
- }
- mmu_notifier_invalidate_range_end(mm, addr_start, addr_end);
-
- addr_start = new_addr;
- addr_end = new_addr + len;
- addr = addr_start;
- vma = find_vma(mm, addr);
- i = 0;
- while (addr < addr_end) {
- page = pages[i];
- if (addr > vma->vm_end - 1)
- vma = find_vma(mm, addr);
- err = vm_insert_page(vma, addr, page);
- if (err) {
- pr_err("vm_insert_page failed:%d\n", err);
- }
- i++;
- addr += PAGE_SIZE;
- }
- vma->vm_flags |= VM_USWAP;
-
- if (pages_dirty)
- new_addr = new_addr | USWAP_PAGES_DIRTY;
-
- return new_addr;
-}
-#endif
-
-/*
- * The caller must hold down_write(&current->mm->mmap_sem).
- */
unsigned long __do_mmap(struct mm_struct *mm, struct file *file,
unsigned long addr, unsigned long len,
unsigned long prot, unsigned long flags,
@@ -1557,12 +1391,6 @@ unsigned long __do_mmap(struct mm_struct *mm, struct file *file,
unsigned long *populate, struct list_head *uf)
{
int pkey = 0;
-#ifdef CONFIG_USERSWAP
- struct page **pages = NULL;
- unsigned long addr_start = addr;
- int i, page_num = 0;
- unsigned long ret;
-#endif
*populate = 0;
@@ -1579,17 +1407,6 @@ unsigned long __do_mmap(struct mm_struct *mm, struct file *file,
if (!(file && path_noexec(&file->f_path)))
prot |= PROT_EXEC;
-#ifdef CONFIG_USERSWAP
- if (enable_userswap && (flags & MAP_REPLACE)) {
- if (offset_in_page(addr) || (len % PAGE_SIZE))
- return -EINVAL;
- page_num = len / PAGE_SIZE;
- ret = pages_can_be_swapped(mm, addr, len, &pages);
- if (ret)
- return ret;
- }
-#endif
-
/* force arch specific MAP_FIXED handling in get_unmapped_area */
if (flags & MAP_FIXED_NOREPLACE)
flags |= MAP_FIXED;
@@ -1766,25 +1583,203 @@ unsigned long __do_mmap(struct mm_struct *mm, struct file *file,
if (flags & MAP_CHECKNODE)
set_vm_checknode(&vm_flags, flags);
-#ifdef CONFIG_USERSWAP
- /* mark the vma as special to avoid merging with other vmas */
- if (enable_userswap && (flags & MAP_REPLACE))
- vm_flags |= VM_SPECIAL;
-#endif
-
addr = __mmap_region(mm, file, addr, len, vm_flags, pgoff, uf);
if (!IS_ERR_VALUE(addr) &&
((vm_flags & VM_LOCKED) ||
(flags & (MAP_POPULATE | MAP_NONBLOCK)) == MAP_POPULATE))
*populate = len;
-#ifndef CONFIG_USERSWAP
return addr;
-#else
- if (!enable_userswap || !(flags & MAP_REPLACE))
- return addr;
+}
+#ifdef CONFIG_USERSWAP
+/*
+ * Check if pages between 'addr ~ addr+len' can be user swapped. If so, get
+ * the reference of the pages and return the pages through input parameters
+ * 'ppages'.
+ */
+int pages_can_be_swapped(struct mm_struct *mm, unsigned long addr,
+ unsigned long len, struct page ***ppages)
+{
+ struct vm_area_struct *vma;
+ struct page *page = NULL;
+ struct page **pages = NULL;
+ unsigned long addr_start, addr_end;
+ unsigned long ret;
+ int i, page_num = 0;
+
+ pages = kmalloc(sizeof(struct page *) * (len / PAGE_SIZE), GFP_KERNEL);
+ if (!pages)
+ return -ENOMEM;
+
+ addr_start = addr;
+ addr_end = addr + len;
+ while (addr < addr_end) {
+ vma = find_vma(mm, addr);
+ if (!vma || !vma_is_anonymous(vma) ||
+ (vma->vm_flags & VM_LOCKED) || vma->vm_file
+ || (vma->vm_flags & VM_STACK) || (vma->vm_flags & (VM_IO | VM_PFNMAP))) {
+ ret = -EINVAL;
+ goto out;
+ }
+ if (!(vma->vm_flags & VM_UFFD_MISSING)) {
+ ret = -EAGAIN;
+ goto out;
+ }
+get_again:
+ /* follow_page will inc page ref, dec the ref after we remap the page */
+ page = follow_page(vma, addr, FOLL_GET);
+ if (IS_ERR_OR_NULL(page)) {
+ ret = -ENODEV;
+ goto out;
+ }
+ pages[page_num] = page;
+ page_num++;
+ if (!PageAnon(page) || !PageSwapBacked(page) || PageHuge(page) || PageSwapCache(page)) {
+ ret = -EINVAL;
+ goto out;
+ } else if (PageTransCompound(page)) {
+ if (trylock_page(page)) {
+ if (!split_huge_page(page)) {
+ put_page(page);
+ page_num--;
+ unlock_page(page);
+ goto get_again;
+ } else {
+ unlock_page(page);
+ ret = -EINVAL;
+ goto out;
+ }
+ } else {
+ ret = -EINVAL;
+ goto out;
+ }
+ }
+ if (page_mapcount(page) > 1 || page_mapcount(page) + 1 != page_count(page)) {
+ ret = -EBUSY;
+ goto out;
+ }
+ addr += PAGE_SIZE;
+ }
+
+ *ppages = pages;
+ return 0;
+
+out:
+ for (i = 0; i < page_num; i++)
+ put_page(pages[i]);
+ if (pages)
+ kfree(pages);
+ *ppages = NULL;
+ return ret;
+}
+
+/*
+ * In uswap situation, we use the bit 0 of the returned address to indicate
+ * whether the pages are dirty.
+ */
+#define USWAP_PAGES_DIRTY 1
+
+/* unmap the pages between 'addr ~ addr+len' and remap them to a new address */
+unsigned long do_user_swap(struct mm_struct *mm, unsigned long addr_start,
+ unsigned long len, struct page **pages, unsigned long new_addr)
+{
+ struct vm_area_struct *vma;
+ struct page *page;
+ pmd_t *pmd;
+ pte_t *pte, old_pte;
+ spinlock_t *ptl;
+ unsigned long addr, addr_end;
+ bool pages_dirty = false;
+ int i, err;
+
+ addr_end = addr_start + len;
+ lru_add_drain();
+ mmu_notifier_invalidate_range_start(mm, addr_start, addr_end);
+ addr = addr_start;
+ i = 0;
+ while (addr < addr_end) {
+ page = pages[i];
+ vma = find_vma(mm, addr);
+ if (!vma) {
+ mmu_notifier_invalidate_range_end(mm, addr_start, addr_end);
+ WARN_ON("find_vma failed\n");
+ return -EINVAL;
+ }
+ pmd = mm_find_pmd(mm, addr);
+ if (!pmd) {
+ mmu_notifier_invalidate_range_end(mm, addr_start, addr_end);
+ WARN_ON("mm_find_pmd failed, addr:%llx\n");
+ return -ENXIO;
+ }
+ pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
+ flush_cache_page(vma, addr, pte_pfn(*pte));
+ old_pte = ptep_clear_flush(vma, addr, pte);
+ if (pte_dirty(old_pte) || PageDirty(page))
+ pages_dirty = true;
+ set_pte(pte, swp_entry_to_pte(swp_entry(SWP_USERSWAP_ENTRY, page_to_pfn(page))));
+ dec_mm_counter(mm, MM_ANONPAGES);
+ page_remove_rmap(page, false);
+ put_page(page);
+
+ pte_unmap_unlock(pte, ptl);
+ vma->vm_flags |= VM_USWAP;
+ page->mapping = NULL;
+ addr += PAGE_SIZE;
+ i++;
+ }
+ mmu_notifier_invalidate_range_end(mm, addr_start, addr_end);
+
+ addr_start = new_addr;
+ addr_end = new_addr + len;
+ addr = addr_start;
+ vma = find_vma(mm, addr);
+ i = 0;
+ while (addr < addr_end) {
+ page = pages[i];
+ if (addr > vma->vm_end - 1)
+ vma = find_vma(mm, addr);
+ err = vm_insert_page(vma, addr, page);
+ if (err) {
+ pr_err("vm_insert_page failed:%d\n", err);
+ }
+ i++;
+ addr += PAGE_SIZE;
+ }
+ vma->vm_flags |= VM_USWAP;
+
+ if (pages_dirty)
+ new_addr = new_addr | USWAP_PAGES_DIRTY;
+
+ return new_addr;
+}
+
+static inline
+unsigned long do_uswap_mmap(struct file *file, unsigned long addr,
+ unsigned long len, unsigned long prot,
+ unsigned long flags, vm_flags_t vm_flags,
+ unsigned long pgoff, unsigned long *populate,
+ struct list_head *uf)
+{
+ struct mm_struct *mm = current->mm;
+ unsigned long addr_start = addr;
+ struct page **pages = NULL;
+ unsigned long ret;
+ int i, page_num = 0;
+
+ if (!len || offset_in_page(addr) || (len % PAGE_SIZE))
+ return -EINVAL;
+
+ page_num = len / PAGE_SIZE;
+ ret = pages_can_be_swapped(mm, addr, len, &pages);
+ if (ret)
+ return ret;
+
+ /* mark the vma as special to avoid merging with other vmas */
+ vm_flags |= VM_SPECIAL;
+
+ addr = __do_mmap(current->mm, file, addr, len, prot, flags, vm_flags,
+ pgoff, populate, uf);
if (IS_ERR_VALUE(addr)) {
- pr_info("mmap_region failed, return addr:%lx\n", addr);
ret = addr;
goto out;
}
@@ -1794,23 +1789,32 @@ unsigned long __do_mmap(struct mm_struct *mm, struct file *file,
/* follow_page() above increased the reference*/
for (i = 0; i < page_num; i++)
put_page(pages[i]);
+
if (pages)
kfree(pages);
+
return ret;
-#endif
}
+#endif
/*
 * The caller must hold down_write(&current->mm->mmap_sem).
*/
-unsigned long do_mmap(struct file *file, unsigned long addr, unsigned long len,
- unsigned long prot, unsigned long flags, vm_flags_t vm_flags,
- unsigned long pgoff, unsigned long *populate, struct list_head *uf)
+unsigned long do_mmap(struct file *file, unsigned long addr,
+ unsigned long len, unsigned long prot,
+ unsigned long flags, vm_flags_t vm_flags,
+ unsigned long pgoff, unsigned long *populate,
+ struct list_head *uf)
{
- return __do_mmap(current->mm, file, addr, len, prot, flags, vm_flags, pgoff, populate, uf);
+#ifdef CONFIG_USERSWAP
+ if (enable_userswap && (flags & MAP_REPLACE))
+ return do_uswap_mmap(file, addr, len, prot, flags, vm_flags,
+ pgoff, populate, uf);
+#endif
+ return __do_mmap(current->mm, file, addr, len, prot, flags, vm_flags,
+ pgoff, populate, uf);
}
-
unsigned long ksys_mmap_pgoff(unsigned long addr, unsigned long len,
unsigned long prot, unsigned long flags,
unsigned long fd, unsigned long pgoff)
--
2.25.1

[PATCH kernel-4.19] blktrace: Fix uaf in blk_trace access after removing by sysfs
by Yang Yingliang 18 Oct '21
From: Zhihao Cheng <chengzhihao1(a)huawei.com>
mainline inclusion
from mainline-5.15-rc3
commit 5afedf670caf30a2b5a52da96eb7eac7dee6a9c9
category: bugfix
bugzilla: 181454
CVE: NA
---------------------------
There is an use-after-free problem triggered by following process:
P1(sda)                              P2(sdb)
                                     echo 0 > /sys/block/sdb/trace/enable
                                       blk_trace_remove_queue
                                         synchronize_rcu
                                         blk_trace_free
                                           relay_close
rcu_read_lock
__blk_add_trace
  trace_note_tsk
  (Iterate running_trace_list)
                                           relay_close_buf
                                             relay_destroy_buf
                                               kfree(buf)
    trace_note(sdb's bt)
      relay_reserve
        buf->offset <- nullptr deference (use-after-free) !!!
rcu_read_unlock
[ 502.714379] BUG: kernel NULL pointer dereference, address:
0000000000000010
[ 502.715260] #PF: supervisor read access in kernel mode
[ 502.715903] #PF: error_code(0x0000) - not-present page
[ 502.716546] PGD 103984067 P4D 103984067 PUD 17592b067 PMD 0
[ 502.717252] Oops: 0000 [#1] SMP
[ 502.720308] RIP: 0010:trace_note.isra.0+0x86/0x360
[ 502.732872] Call Trace:
[ 502.733193] __blk_add_trace.cold+0x137/0x1a3
[ 502.733734] blk_add_trace_rq+0x7b/0xd0
[ 502.734207] blk_add_trace_rq_issue+0x54/0xa0
[ 502.734755] blk_mq_start_request+0xde/0x1b0
[ 502.735287] scsi_queue_rq+0x528/0x1140
...
[ 502.742704] sg_new_write.isra.0+0x16e/0x3e0
[ 502.747501] sg_ioctl+0x466/0x1100
Reproduce method:
ioctl(/dev/sda, BLKTRACESETUP, blk_user_trace_setup[buf_size=127])
ioctl(/dev/sda, BLKTRACESTART)
ioctl(/dev/sdb, BLKTRACESETUP, blk_user_trace_setup[buf_size=127])
ioctl(/dev/sdb, BLKTRACESTART)
echo 0 > /sys/block/sdb/trace/enable &
// Add delay(mdelay/msleep) before kernel enters blk_trace_free()
ioctl$SG_IO(/dev/sda, SG_IO, ...)
// Enters trace_note_tsk() after blk_trace_free() returned
// Use mdelay in rcu region rather than msleep(which may schedule out)
Remove blk_trace from running_list before calling blk_trace_free() by
sysfs if blk_trace is at Blktrace_running state.
Fixes: c71a896154119f ("blktrace: add ftrace plugin")
Signed-off-by: Zhihao Cheng <chengzhihao1(a)huawei.com>
Link: https://lore.kernel.org/r/20210923134921.109194-1-chengzhihao1@huawei.com
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/trace/blktrace.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index da0ee8cc15a72..2df442fb46bb2 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -1662,6 +1662,14 @@ static int blk_trace_remove_queue(struct request_queue *q)
if (bt == NULL)
return -EINVAL;
+ if (bt->trace_state == Blktrace_running) {
+ bt->trace_state = Blktrace_stopped;
+ spin_lock_irq(&running_trace_lock);
+ list_del_init(&bt->running_list);
+ spin_unlock_irq(&running_trace_lock);
+ relay_flush(bt->rchan);
+ }
+
put_probe_ref();
synchronize_rcu();
blk_trace_free(bt);
--
2.25.1

18 Oct '21
From: Desmond Cheong Zhi Xi <desmondcheongzx(a)gmail.com>
stable inclusion
from stable-v5.10.44
commit aa8591a58cbd2986090709e4202881f18e8ae30e
bugzilla:https://bugzilla.openeuler.org/show_bug.cgi?id=435
CVE: NA
-------------------------------------------------
commit c336a5ee984708db4826ef9e47d184e638e29717 upstream.
This patch eliminates the following smatch warning:
drivers/gpu/drm/drm_auth.c:320 drm_master_release() warn: unlocked access 'master' (line 318) expected lock '&dev->master_mutex'
The 'file_priv->master' field should be protected by the mutex lock to
'&dev->master_mutex'. This is because other processes can concurrently
modify this field and free the current 'file_priv->master'
pointer. This could result in a use-after-free error when 'master' is
dereferenced in subsequent function calls to
'drm_legacy_lock_master_cleanup()' or to 'drm_lease_revoke()'.
An example of a scenario that would produce this error can be seen
from a similar bug in 'drm_getunique()' that was reported by Syzbot:
https://syzkaller.appspot.com/bug?id=148d2f1dfac64af52ffd27b661981a540724f8…
In the Syzbot report, another process concurrently acquired the
device's master mutex in 'drm_setmaster_ioctl()', then overwrote
'fpriv->master' in 'drm_new_set_master()'. The old value of
'fpriv->master' was subsequently freed before the mutex was unlocked.
Reported-by: Dan Carpenter <dan.carpenter(a)oracle.com>
Signed-off-by: Desmond Cheong Zhi Xi <desmondcheongzx(a)gmail.com>
Cc: stable(a)vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter(a)ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210609092119.173590-1-desmo…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: holmes <holmes(a)my.swjtu.edu.cn>
---
drivers/gpu/drm/drm_auth.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c
index f2d46b7ac6f9..232abbba3686 100644
--- a/drivers/gpu/drm/drm_auth.c
+++ b/drivers/gpu/drm/drm_auth.c
@@ -314,9 +314,10 @@ int drm_master_open(struct drm_file *file_priv)
void drm_master_release(struct drm_file *file_priv)
{
struct drm_device *dev = file_priv->minor->dev;
- struct drm_master *master = file_priv->master;
+ struct drm_master *master;
mutex_lock(&dev->master_mutex);
+ master = file_priv->master;
if (file_priv->magic)
idr_remove(&file_priv->master->magic_map, file_priv->magic);
--
2.23.0

18 Oct '21
Ramaxel inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4DBD7
CVE: NA
Initial commit of the spfc module for the Ramaxel Super FC adapter
Changes since v1:
- Add UNF_PORT_CFG_SET_PORT_STATE state
Yanling Song (1):
scsi: spfc: initial commit the spfc module
arch/arm64/configs/openeuler_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
drivers/scsi/Kconfig | 1 +
drivers/scsi/Makefile | 1 +
drivers/scsi/spfc/Kconfig | 17 +
drivers/scsi/spfc/Makefile | 47 +
drivers/scsi/spfc/common/unf_common.h | 1755 +++++++
drivers/scsi/spfc/common/unf_disc.c | 1276 +++++
drivers/scsi/spfc/common/unf_disc.h | 51 +
drivers/scsi/spfc/common/unf_event.c | 517 ++
drivers/scsi/spfc/common/unf_event.h | 84 +
drivers/scsi/spfc/common/unf_exchg.c | 2317 +++++++++
drivers/scsi/spfc/common/unf_exchg.h | 436 ++
drivers/scsi/spfc/common/unf_exchg_abort.c | 825 +++
drivers/scsi/spfc/common/unf_exchg_abort.h | 23 +
drivers/scsi/spfc/common/unf_fcstruct.h | 459 ++
drivers/scsi/spfc/common/unf_gs.c | 2521 +++++++++
drivers/scsi/spfc/common/unf_gs.h | 58 +
drivers/scsi/spfc/common/unf_init.c | 353 ++
drivers/scsi/spfc/common/unf_io.c | 1220 +++++
drivers/scsi/spfc/common/unf_io.h | 96 +
drivers/scsi/spfc/common/unf_io_abnormal.c | 986 ++++
drivers/scsi/spfc/common/unf_io_abnormal.h | 19 +
drivers/scsi/spfc/common/unf_log.h | 178 +
drivers/scsi/spfc/common/unf_lport.c | 1008 ++++
drivers/scsi/spfc/common/unf_lport.h | 519 ++
drivers/scsi/spfc/common/unf_ls.c | 4884 ++++++++++++++++++
drivers/scsi/spfc/common/unf_ls.h | 61 +
drivers/scsi/spfc/common/unf_npiv.c | 1005 ++++
drivers/scsi/spfc/common/unf_npiv.h | 47 +
drivers/scsi/spfc/common/unf_npiv_portman.c | 360 ++
drivers/scsi/spfc/common/unf_npiv_portman.h | 17 +
drivers/scsi/spfc/common/unf_portman.c | 2431 +++++++++
drivers/scsi/spfc/common/unf_portman.h | 96 +
drivers/scsi/spfc/common/unf_rport.c | 2286 ++++++++
drivers/scsi/spfc/common/unf_rport.h | 301 ++
drivers/scsi/spfc/common/unf_scsi.c | 1463 ++++++
drivers/scsi/spfc/common/unf_scsi_common.h | 570 ++
drivers/scsi/spfc/common/unf_service.c | 1439 ++++++
drivers/scsi/spfc/common/unf_service.h | 66 +
drivers/scsi/spfc/common/unf_type.h | 216 +
drivers/scsi/spfc/hw/spfc_chipitf.c | 1105 ++++
drivers/scsi/spfc/hw/spfc_chipitf.h | 797 +++
drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c | 1646 ++++++
drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h | 215 +
drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c | 891 ++++
drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h | 65 +
drivers/scsi/spfc/hw/spfc_cqm_main.c | 1257 +++++
drivers/scsi/spfc/hw/spfc_cqm_main.h | 414 ++
drivers/scsi/spfc/hw/spfc_cqm_object.c | 959 ++++
drivers/scsi/spfc/hw/spfc_cqm_object.h | 279 +
drivers/scsi/spfc/hw/spfc_hba.c | 1751 +++++++
drivers/scsi/spfc/hw/spfc_hba.h | 341 ++
drivers/scsi/spfc/hw/spfc_hw_wqe.h | 1645 ++++++
drivers/scsi/spfc/hw/spfc_io.c | 1193 +++++
drivers/scsi/spfc/hw/spfc_io.h | 138 +
drivers/scsi/spfc/hw/spfc_lld.c | 998 ++++
drivers/scsi/spfc/hw/spfc_lld.h | 76 +
drivers/scsi/spfc/hw/spfc_module.h | 297 ++
drivers/scsi/spfc/hw/spfc_parent_context.h | 269 +
drivers/scsi/spfc/hw/spfc_queue.c | 4857 +++++++++++++++++
drivers/scsi/spfc/hw/spfc_queue.h | 711 +++
drivers/scsi/spfc/hw/spfc_service.c | 2169 ++++++++
drivers/scsi/spfc/hw/spfc_service.h | 282 +
drivers/scsi/spfc/hw/spfc_utils.c | 102 +
drivers/scsi/spfc/hw/spfc_utils.h | 202 +
drivers/scsi/spfc/hw/spfc_wqe.c | 646 +++
drivers/scsi/spfc/hw/spfc_wqe.h | 239 +
68 files changed, 53555 insertions(+)
create mode 100644 drivers/scsi/spfc/Kconfig
create mode 100644 drivers/scsi/spfc/Makefile
create mode 100644 drivers/scsi/spfc/common/unf_common.h
create mode 100644 drivers/scsi/spfc/common/unf_disc.c
create mode 100644 drivers/scsi/spfc/common/unf_disc.h
create mode 100644 drivers/scsi/spfc/common/unf_event.c
create mode 100644 drivers/scsi/spfc/common/unf_event.h
create mode 100644 drivers/scsi/spfc/common/unf_exchg.c
create mode 100644 drivers/scsi/spfc/common/unf_exchg.h
create mode 100644 drivers/scsi/spfc/common/unf_exchg_abort.c
create mode 100644 drivers/scsi/spfc/common/unf_exchg_abort.h
create mode 100644 drivers/scsi/spfc/common/unf_fcstruct.h
create mode 100644 drivers/scsi/spfc/common/unf_gs.c
create mode 100644 drivers/scsi/spfc/common/unf_gs.h
create mode 100644 drivers/scsi/spfc/common/unf_init.c
create mode 100644 drivers/scsi/spfc/common/unf_io.c
create mode 100644 drivers/scsi/spfc/common/unf_io.h
create mode 100644 drivers/scsi/spfc/common/unf_io_abnormal.c
create mode 100644 drivers/scsi/spfc/common/unf_io_abnormal.h
create mode 100644 drivers/scsi/spfc/common/unf_log.h
create mode 100644 drivers/scsi/spfc/common/unf_lport.c
create mode 100644 drivers/scsi/spfc/common/unf_lport.h
create mode 100644 drivers/scsi/spfc/common/unf_ls.c
create mode 100644 drivers/scsi/spfc/common/unf_ls.h
create mode 100644 drivers/scsi/spfc/common/unf_npiv.c
create mode 100644 drivers/scsi/spfc/common/unf_npiv.h
create mode 100644 drivers/scsi/spfc/common/unf_npiv_portman.c
create mode 100644 drivers/scsi/spfc/common/unf_npiv_portman.h
create mode 100644 drivers/scsi/spfc/common/unf_portman.c
create mode 100644 drivers/scsi/spfc/common/unf_portman.h
create mode 100644 drivers/scsi/spfc/common/unf_rport.c
create mode 100644 drivers/scsi/spfc/common/unf_rport.h
create mode 100644 drivers/scsi/spfc/common/unf_scsi.c
create mode 100644 drivers/scsi/spfc/common/unf_scsi_common.h
create mode 100644 drivers/scsi/spfc/common/unf_service.c
create mode 100644 drivers/scsi/spfc/common/unf_service.h
create mode 100644 drivers/scsi/spfc/common/unf_type.h
create mode 100644 drivers/scsi/spfc/hw/spfc_chipitf.c
create mode 100644 drivers/scsi/spfc/hw/spfc_chipitf.h
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_main.c
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_main.h
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_object.c
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_object.h
create mode 100644 drivers/scsi/spfc/hw/spfc_hba.c
create mode 100644 drivers/scsi/spfc/hw/spfc_hba.h
create mode 100644 drivers/scsi/spfc/hw/spfc_hw_wqe.h
create mode 100644 drivers/scsi/spfc/hw/spfc_io.c
create mode 100644 drivers/scsi/spfc/hw/spfc_io.h
create mode 100644 drivers/scsi/spfc/hw/spfc_lld.c
create mode 100644 drivers/scsi/spfc/hw/spfc_lld.h
create mode 100644 drivers/scsi/spfc/hw/spfc_module.h
create mode 100644 drivers/scsi/spfc/hw/spfc_parent_context.h
create mode 100644 drivers/scsi/spfc/hw/spfc_queue.c
create mode 100644 drivers/scsi/spfc/hw/spfc_queue.h
create mode 100644 drivers/scsi/spfc/hw/spfc_service.c
create mode 100644 drivers/scsi/spfc/hw/spfc_service.h
create mode 100644 drivers/scsi/spfc/hw/spfc_utils.c
create mode 100644 drivers/scsi/spfc/hw/spfc_utils.h
create mode 100644 drivers/scsi/spfc/hw/spfc_wqe.c
create mode 100644 drivers/scsi/spfc/hw/spfc_wqe.h
--
2.30.0
backport psi feature from upstream 5.4
bugzilla: https://gitee.com/openeuler/kernel/issues/I47QS2
Baruch Siach (1):
psi: fix reference to kernel commandline enable
Dan Schatzberg (1):
kernel/sched/psi.c: expose pressure metrics on root cgroup
Johannes Weiner (11):
sched: loadavg: consolidate LOAD_INT, LOAD_FRAC, CALC_LOAD
sched: loadavg: make calc_load_n() public
sched: sched.h: make rq locking and clock functions available in
stats.h
sched: introduce this_rq_lock_irq()
psi: pressure stall information for CPU, memory, and IO
psi: cgroup support
psi: make disabling/enabling easier for vendor kernels
psi: fix aggregation idle shut-off
psi: avoid divide-by-zero crash inside virtual machines
fs: kernfs: add poll file operation
sched/psi: Fix sampling error and rare div0 crashes with cgroups and
high uptime
Josef Bacik (1):
blk-iolatency: use a percentile approache for ssd's
Liu Xinpeng (2):
psi:enable psi in config
psi:avoid kabi change
Olof Johansson (1):
kernel/sched/psi.c: simplify cgroup_move_task()
Suren Baghdasaryan (6):
psi: introduce state_mask to represent stalled psi states
psi: make psi_enable static
psi: rename psi fields in preparation for psi trigger addition
psi: split update_stats into parts
psi: track changed states
include/: refactor headers to allow kthread.h inclusion in psi_types.h
Documentation/accounting/psi.txt | 73 +++
Documentation/admin-guide/cgroup-v2.rst | 18 +
Documentation/admin-guide/kernel-parameters.txt | 4 +
arch/arm64/configs/openeuler_defconfig | 2 +
arch/powerpc/platforms/cell/cpufreq_spudemand.c | 2 +-
arch/powerpc/platforms/cell/spufs/sched.c | 9 +-
arch/s390/appldata/appldata_os.c | 4 -
arch/x86/configs/openeuler_defconfig | 2 +
block/blk-iolatency.c | 183 +++++-
drivers/cpuidle/governors/menu.c | 4 -
drivers/spi/spi-rockchip.c | 1 +
fs/kernfs/file.c | 31 +-
fs/proc/loadavg.c | 3 -
include/linux/cgroup-defs.h | 12 +
include/linux/cgroup.h | 17 +
include/linux/kernfs.h | 8 +
include/linux/kthread.h | 4 +
include/linux/psi.h | 55 ++
include/linux/psi_types.h | 95 +++
include/linux/sched.h | 13 +
include/linux/sched/loadavg.h | 24 +-
init/Kconfig | 28 +
kernel/cgroup/cgroup.c | 55 +-
kernel/debug/kdb/kdb_main.c | 7 +-
kernel/fork.c | 4 +
kernel/kthread.c | 3 +
kernel/sched/Makefile | 1 +
kernel/sched/core.c | 16 +-
kernel/sched/loadavg.c | 139 ++--
kernel/sched/psi.c | 823 ++++++++++++++++++++++++
kernel/sched/sched.h | 178 ++---
kernel/sched/stats.h | 86 +++
kernel/workqueue.c | 23 +
kernel/workqueue_internal.h | 6 +-
mm/compaction.c | 5 +
mm/filemap.c | 11 +
mm/page_alloc.c | 9 +
mm/vmscan.c | 9 +
38 files changed, 1726 insertions(+), 241 deletions(-)
create mode 100644 Documentation/accounting/psi.txt
create mode 100644 include/linux/psi.h
create mode 100644 include/linux/psi_types.h
create mode 100644 kernel/sched/psi.c
--
1.8.3.1

Re: Re: [PATCH kernel-4.19 v4 19/22] kernel/sched/psi.c: expose pressure metrics on root cgroup
by 刘新朋 17 Oct '21
backport psi feature from upstream 5.4
bugzilla: https://gitee.com/openeuler/kernel/issues/I47QS2
Dan Schatzberg (1):
kernel/sched/psi.c: expose pressure metrics on root cgroup
Johannes Weiner (11):
sched: loadavg: consolidate LOAD_INT, LOAD_FRAC, CALC_LOAD
sched: loadavg: make calc_load_n() public
sched: sched.h: make rq locking and clock functions available in
stats.h
sched: introduce this_rq_lock_irq()
psi: pressure stall information for CPU, memory, and IO
psi: cgroup support
psi: make disabling/enabling easier for vendor kernels
psi: fix aggregation idle shut-off
psi: avoid divide-by-zero crash inside virtual machines
fs: kernfs: add poll file operation
sched/psi: Fix sampling error and rare div0 crashes with cgroups and
high uptime
Josef Bacik (1):
blk-iolatency: use a percentile approache for ssd's
Liu Xinpeng (2):
psi:enable psi in config
psi:avoid kabi change
Olof Johansson (1):
kernel/sched/psi.c: simplify cgroup_move_task()
Suren Baghdasaryan (6):
psi: introduce state_mask to represent stalled psi states
psi: make psi_enable static
psi: rename psi fields in preparation for psi trigger addition
psi: split update_stats into parts
psi: track changed states
include/: refactor headers to allow kthread.h inclusion in psi_types.h
Documentation/accounting/psi.txt | 73 +++
Documentation/admin-guide/cgroup-v2.rst | 18 +
Documentation/admin-guide/kernel-parameters.txt | 4 +
arch/arm64/configs/openeuler_defconfig | 2 +
arch/powerpc/platforms/cell/cpufreq_spudemand.c | 2 +-
arch/powerpc/platforms/cell/spufs/sched.c | 9 +-
arch/s390/appldata/appldata_os.c | 4 -
arch/x86/configs/openeuler_defconfig | 2 +
block/blk-iolatency.c | 183 +++++-
drivers/cpuidle/governors/menu.c | 4 -
drivers/spi/spi-rockchip.c | 1 +
fs/kernfs/file.c | 31 +-
fs/proc/loadavg.c | 3 -
include/linux/cgroup-defs.h | 12 +
include/linux/cgroup.h | 16 +
include/linux/kernfs.h | 8 +
include/linux/kthread.h | 4 +
include/linux/psi.h | 55 ++
include/linux/psi_types.h | 95 +++
include/linux/sched.h | 13 +
include/linux/sched/loadavg.h | 24 +-
init/Kconfig | 28 +
kernel/cgroup/cgroup.c | 55 +-
kernel/debug/kdb/kdb_main.c | 7 +-
kernel/fork.c | 4 +
kernel/kthread.c | 3 +
kernel/sched/Makefile | 1 +
kernel/sched/core.c | 16 +-
kernel/sched/loadavg.c | 139 ++--
kernel/sched/psi.c | 823 ++++++++++++++++++++++++
kernel/sched/sched.h | 178 ++---
kernel/sched/stats.h | 86 +++
kernel/workqueue.c | 23 +
kernel/workqueue_internal.h | 6 +-
mm/compaction.c | 5 +
mm/filemap.c | 11 +
mm/page_alloc.c | 9 +
mm/vmscan.c | 9 +
38 files changed, 1725 insertions(+), 241 deletions(-)
create mode 100644 Documentation/accounting/psi.txt
create mode 100644 include/linux/psi.h
create mode 100644 include/linux/psi_types.h
create mode 100644 kernel/sched/psi.c
--
1.8.3.1
Backport LTS 5.10.58 patches from upstream.
Aharon Landau (1):
RDMA/mlx5: Delay emptying a cache entry when a new MR is added to it
recently
Alex Deucher (1):
drm/amdgpu/display: only enable aux backlight control for OLED panels
Alex Xu (Hello71) (1):
pipe: increase minimum default pipe size to 2 pages
Alexander Monakov (1):
ALSA: hda/realtek: add mic quirk for Acer SF314-42
Alexander Tsoy (1):
ALSA: usb-audio: Add registration quirk for JBL Quantum 600
Allen Pais (1):
optee: fix tee out of memory failure seen during kexec reboot
Andy Shevchenko (1):
serial: 8250_pci: Enumerate Elkhart Lake UARTs via dedicated driver
Anirudh Rayabharam (2):
firmware_loader: use -ETIMEDOUT instead of -EAGAIN in
fw_load_sysfs_fallback
firmware_loader: fix use-after-free in firmware_fallback_sysfs
Antoine Tenart (1):
net: ipv6: fix returned variable type in ip6_skb_dst_mtu
Arnd Bergmann (2):
soc: ixp4xx: fix printing resources
soc: ixp4xx/qmgr: fix invalid __iomem access
Brian Norris (1):
clk: fix leak on devm_clk_bulk_get_all() unwind
Christoph Hellwig (1):
libata: fix ata_pio_sector for CONFIG_HIGHMEM
Claudiu Beznea (1):
usb: host: ohci-at91: suspend/resume ports after/before OHCI accesses
Colin Ian King (2):
ARM: imx: fix missing 3rd argument in macro imx_mmdc_perf_init
interconnect: Fix undersized devress_alloc allocation
Dan Carpenter (1):
bnx2x: fix an error code in bnx2x_nic_load()
Daniele Palmas (1):
USB: serial: option: add Telit FD980 composition 0x1056
Dario Binacchi (2):
clk: stm32f4: fix post divisor setup for I2S/SAI PLLs
ARM: dts: am437x-l4: fix typo in can@0 node
David Bauer (1):
USB: serial: ftdi_sio: add device ID for Auto-M3 OP-COM v2
Dmitry Osipenko (2):
clk: tegra: Implement disable_unused() of tegra_clk_sdmmc_mux_ops
usb: otg-fsm: Fix hrtimer list corruption
Dmitry Safonov (1):
net/xfrm/compat: Copy xfrm_spdattr_type_t atributes
Dongliang Mu (1):
spi: meson-spicc: fix memory leak in meson_spicc_remove
Fei Qin (1):
nfp: update ethtool reporting of pauseframe control
Filip Schauer (1):
drivers core: Fix oops when driver probe fails
Frederic Weisbecker (1):
xfrm: Fix RCU vs hash_resize_mutex lock inversion
H. Nikolaus Schaller (2):
omap5-board-common: remove not physically existing vdds_1v8_main
fixed-regulator
mips: Fix non-POSIX regexp
Hans Verkuil (1):
media: videobuf2-core: dequeue if start_streaming fails
Harshvardhan Jha (1):
net: qede: Fix end of loop tests for list_for_each_entry
Huang Pei (1):
MIPS: check return value of pgtable_pmd_page_ctor
Hui Su (1):
scripts/tracing: fix the bug that can't parse raw_trace_func
Jakub Sitnicki (1):
net, gro: Set inner transport header offset in tcp/udp GRO hook
Jaroslav Kysela (1):
ALSA: pcm - fix mmap capability check for the snd-dummy driver
Jens Wiklander (1):
tee: add tee_shm_alloc_kernel_buf()
Johan Hovold (1):
media: rtl28xxu: fix zero-length control request
Jon Hunter (1):
serial: tegra: Only print FIFO error message when an error occurs
Jonathan Gray (1):
drm/i915: avoid uninitialised var in eb_parse()
Juergen Borleis (1):
dmaengine: imx-dma: configure the generic DMA type to make it work
Kajol Jain (1):
fpga: dfl: fme: Fix cpu hotplug issue in performance reporting
Kamal Agrawal (1):
tracing: Fix NULL pointer dereference in start_creating
Kevin Hilman (1):
bus: ti-sysc: AM3: RNG is GP only
Kunihiko Hayashi (1):
dmaengine: uniphier-xdmac: Use readl_poll_timeout_atomic() in atomic
state
Kyle Tso (1):
usb: typec: tcpm: Keep other events when receiving FRS and
Sourcing_vbus events
Letu Ren (1):
net/qla3xxx: fix schedule while atomic in ql_wait_for_drvr_lock and
ql_adapter_reset
Li Manyi (1):
scsi: sr: Return correct event when media event code is 3
Like Xu (1):
perf/x86/amd: Don't touch the AMD64_EVENTSEL_HOSTONLY bit inside the
guest
Maciej W. Rozycki (2):
serial: 8250: Mask out floating 16/32-bit bus bits
MIPS: Malta: Do not byte-swap accesses to the CBUS UART
Marek Vasut (5):
ARM: dts: imx: Swap M53Menlo pinctrl_power_button/pinctrl_power_out
pins
spi: imx: mx51-ecspi: Reinstate low-speed CONFIGREG delay
spi: imx: mx51-ecspi: Fix low-speed CONFIGREG delay calculation
ARM: dts: stm32: Disable LAN8710 EDPD on DHCOM
ARM: dts: stm32: Fix touchscreen IRQ line assignment on DHCOM
Mario Kleiner (1):
serial: 8250_pci: Avoid irq sharing for MSI(-X) interrupts.
Mark Rutland (1):
arm64: stacktrace: avoid tracing arch_stack_walk()
Masami Hiramatsu (1):
tracing: Reject string operand in the histogram expression
Mathieu Desnoyers (2):
tracepoint: static call: Compare data on transition from 2->1 callees
tracepoint: Fix static call function vs data state mismatch
Matt Roper (1):
drm/i915: Correct SFC_DONE register offset
Matteo Croce (1):
virt_wifi: fix error on connect
Matthias Schiffer (1):
gpio: tqmx86: really make IRQ optional
Maxim Devaev (2):
usb: gadget: f_hid: added GET_IDLE and SET_IDLE handlers
usb: gadget: f_hid: idle uses the highest byte for duration
Maxime Chevallier (1):
ARM: dts: imx6qdl-sr-som: Increase the PHY reset duration to 10ms
Michael Walle (1):
arm64: dts: ls1028: sl28: fix networking for variant 2
Mike Tipton (3):
interconnect: Zero initial BW after sync-state
interconnect: Always call pre_aggregate before aggregate
interconnect: qcom: icc-rpmh: Ensure floor BW is enforced for all
nodes
Nikos Liolios (1):
ALSA: hda/realtek: Fix headset mic for Acer SWIFT SF314-56 (ALC256)
Oleksandr Suvorov (1):
ARM: dts: colibri-imx6ull: limit SDIO clock to 25MHz
Oleksij Rempel (1):
net: dsa: qca: ar9331: reorder MDIO write sequence
Pali Rohár (1):
arm64: dts: armada-3720-turris-mox: remove mrvl,i2c-fast-mode
Paolo Bonzini (2):
KVM: x86: accept userspace interrupt only if no event is injected
KVM: Do not leak memory for duplicate debugfs directories
Pavel Skripkin (6):
net: xfrm: fix memory leak in xfrm_user_rcv_msg
net: pegasus: fix uninit-value in get_interrupt_interval
net: fec: fix use-after-free in fec_drv_remove
net: vxge: fix use-after-free in vxge_device_unregister
staging: rtl8712: get rid of flush_scheduled_work
staging: rtl8712: error handling refactoring
Pawel Laszczak (1):
usb: cdns3: Fixed incorrect gadget state
Peter Zijlstra (1):
sched/rt: Fix double enqueue caused by rt_effective_prio
Phil Elwell (1):
usb: gadget: f_hid: fixed NULL pointer dereference
Prarit Bhargava (1):
alpha: Send stop IPI to send to online CPUs
Qiang.zhang (1):
USB: usbtmc: Fix RCU stall warning
Rafael J. Wysocki (1):
Revert "ACPICA: Fix memory leak caused by _CID repair function"
Rasmus Villemoes (1):
Revert "gpio: mpc8xxx: change the gpio interrupt flags."
Sean Christopherson (1):
KVM: x86/mmu: Fix per-cpu counter corruption on 32-bit builds
Shirish S (1):
drm/amdgpu/display: fix DMUB firmware version info
Shreyansh Chouhan (1):
reiserfs: check directory items on read from disk
Steve Bennett (1):
net: phy: micrel: Fix detection of ksz87xx switch
Steve French (1):
smb3: rc uninitialized in one fallocate path
Steven Rostedt (VMware) (1):
tracing / histogram: Give calculation hist_fields a size
Takashi Iwai (2):
ALSA: seq: Fix racy deletion of subscriber
ALSA: usb-audio: Fix superfluous autosuspend recovery
Tero Kristo (1):
ARM: omap2+: hwmod: fix potential NULL pointer access
Tetsuo Handa (1):
Bluetooth: defer cleanup of resources in hci_unregister_dev()
Theodore Ts'o (1):
ext4: fix potential htree corruption when growing large_dir
directories
Thomas Gleixner (1):
timers: Move clearing of base::timer_running under base:: Lock
Tony Lindgren (1):
bus: ti-sysc: Fix gpt12 system timer issue with reserved status
Tyler Hicks (4):
optee: Clear stale cache entries during initialization
optee: Fix memory leak when failing to register shm pages
optee: Refuse to load the driver under the kdump kernel
tpm_ftpm_tee: Free and unregister TEE shared memory during kexec
Vladimir Oltean (6):
arm64: dts: ls1028a: fix node name for the sysclk
arm64: dts: armada-3720-turris-mox: fixed indices for the SDHC
controllers
net: dsa: sja1105: overwrite dynamic FDB entries with static ones in
.port_fdb_add
net: dsa: sja1105: invalidate dynamic FDB entries learned concurrently
with statically added ones
net: dsa: sja1105: be stateless with FDB entries on
SJA1105P/Q/R/S/SJA1110 too
net: dsa: sja1105: match FDB entries regardless of inner/outer VLAN
tag
Wang Hai (1):
net: natsemi: Fix missing pci_disable_device() in probe and remove
Wei Shuyu (1):
md/raid10: properly indicate failure when ending a failed write
request
Wesley Cheng (1):
usb: dwc3: gadget: Avoid runtime resume if disabling pullup
Will Deacon (1):
arm64: vdso: Avoid ISB after reading from cntvct_el0
Willy Tarreau (1):
USB: serial: ch341: fix character loss at high transfer rates
Xiangyang Zhang (1):
staging: rtl8723bs: Fix a resource leak in sd_int_dpc
Xin Long (1):
sctp: move the active_key update after sh_keys is added
Xiu Jianfeng (1):
selinux: correct the return value when loads initial sids
Yang Yingliang (2):
ARM: imx: add missing iounmap()
ARM: imx: add missing clk_disable_unprepare()
Yu Kuai (2):
blk-iolatency: error out if blk_get_queue() failed in
iolatency_set_limit()
reiserfs: add check for root_inode in reiserfs_fill_super
Yunsheng Lin (1):
net: sched: fix lockdep_set_class() typo error for sch->seqlock
Zhang Qilong (3):
dmaengine: stm32-dma: Fix PM usage counter imbalance in stm32 dma ops
dmaengine: stm32-dmamux: Fix PM usage counter unbalance in stm32
dmamux ops
usb: gadget: remove leaked entry from udc driver list
Zheyu Ma (1):
pcmcia: i82092: fix a null pointer dereference bug
Zhiyong Tao (1):
serial: 8250_mtk: fix uart corruption issue when rx power off
chihhao.chen (1):
ALSA: usb-audio: fix incorrect clock source setting
arch/alpha/kernel/smp.c | 2 +-
arch/arm/boot/dts/am437x-l4.dtsi | 2 +-
arch/arm/boot/dts/imx53-m53menlo.dts | 4 +-
arch/arm/boot/dts/imx6qdl-sr-som.dtsi | 8 +-
arch/arm/boot/dts/imx6ull-colibri-wifi.dtsi | 1 +
arch/arm/boot/dts/omap5-board-common.dtsi | 9 +-
arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi | 24 +++--
arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi | 1 +
arch/arm/mach-imx/mmdc.c | 17 ++-
arch/arm/mach-omap2/omap_hwmod.c | 10 +-
.../fsl-ls1028a-kontron-sl28-var2.dts | 2 +
.../arm64/boot/dts/freescale/fsl-ls1028a.dtsi | 2 +-
.../dts/marvell/armada-3720-turris-mox.dts | 3 +
arch/arm64/include/asm/arch_timer.h | 21 ----
arch/arm64/include/asm/barrier.h | 19 ++++
arch/arm64/include/asm/vdso/gettimeofday.h | 6 +-
arch/arm64/kernel/stacktrace.c | 2 +-
arch/mips/Makefile | 2 +-
arch/mips/include/asm/pgalloc.h | 17 +--
arch/mips/mti-malta/malta-platform.c | 3 +-
arch/x86/events/perf_event.h | 3 +-
arch/x86/kvm/mmu/mmu.c | 2 +-
arch/x86/kvm/x86.c | 13 ++-
block/blk-iolatency.c | 6 +-
drivers/acpi/acpica/nsrepair2.c | 7 --
drivers/ata/libata-sff.c | 35 ++++--
drivers/base/dd.c | 4 +-
drivers/base/firmware_loader/fallback.c | 14 +--
drivers/base/firmware_loader/firmware.h | 10 +-
drivers/base/firmware_loader/main.c | 2 +
drivers/bus/ti-sysc.c | 22 ++--
drivers/char/tpm/tpm_ftpm_tee.c | 8 +-
drivers/clk/clk-devres.c | 9 +-
drivers/clk/clk-stm32f4.c | 10 +-
drivers/clk/tegra/clk-sdmmc-mux.c | 10 ++
drivers/dma/imx-dma.c | 2 +
drivers/dma/stm32-dma.c | 4 +-
drivers/dma/stm32-dmamux.c | 6 +-
drivers/dma/uniphier-xdmac.c | 4 +-
drivers/fpga/dfl-fme-perf.c | 2 +
drivers/gpio/gpio-mpc8xxx.c | 2 +-
drivers/gpio/gpio-tqmx86.c | 6 +-
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 6 +-
.../gpu/drm/i915/gem/i915_gem_execbuffer.c | 7 ++
drivers/gpu/drm/i915/i915_reg.h | 2 +-
drivers/infiniband/hw/mlx5/mr.c | 4 +-
drivers/interconnect/core.c | 9 +-
drivers/interconnect/qcom/icc-rpmh.c | 12 +--
drivers/md/raid1.c | 2 -
drivers/md/raid10.c | 4 +-
.../media/common/videobuf2/videobuf2-core.c | 13 ++-
drivers/media/usb/dvb-usb-v2/rtl28xxu.c | 11 +-
drivers/net/dsa/qca/ar9331.c | 14 ++-
drivers/net/dsa/sja1105/sja1105_main.c | 85 +++++++++++----
.../net/ethernet/broadcom/bnx2x/bnx2x_cmn.c | 3 +-
drivers/net/ethernet/freescale/fec_main.c | 2 +-
drivers/net/ethernet/natsemi/natsemi.c | 8 +-
.../net/ethernet/neterion/vxge/vxge-main.c | 6 +-
.../ethernet/netronome/nfp/nfp_net_ethtool.c | 2 +
.../net/ethernet/qlogic/qede/qede_filter.c | 4 +-
drivers/net/ethernet/qlogic/qla3xxx.c | 6 +-
drivers/net/phy/micrel.c | 10 +-
drivers/net/usb/pegasus.c | 14 ++-
drivers/net/wireless/virt_wifi.c | 52 +++++----
drivers/pcmcia/i82092.c | 1 +
drivers/scsi/sr.c | 2 +-
drivers/soc/ixp4xx/ixp4xx-npe.c | 11 +-
drivers/soc/ixp4xx/ixp4xx-qmgr.c | 9 +-
drivers/spi/spi-imx.c | 52 +++++----
drivers/spi/spi-meson-spicc.c | 2 +
drivers/staging/rtl8712/hal_init.c | 30 ++++--
drivers/staging/rtl8712/rtl8712_led.c | 8 ++
drivers/staging/rtl8712/rtl871x_led.h | 1 +
drivers/staging/rtl8712/rtl871x_pwrctrl.c | 8 ++
drivers/staging/rtl8712/rtl871x_pwrctrl.h | 1 +
drivers/staging/rtl8712/usb_intf.c | 51 ++++-----
drivers/staging/rtl8723bs/hal/sdio_ops.c | 2 +
drivers/tee/optee/call.c | 36 ++++++-
drivers/tee/optee/core.c | 40 +++++++
drivers/tee/optee/optee_private.h | 1 +
drivers/tee/optee/shm_pool.c | 12 ++-
drivers/tee/tee_shm.c | 18 ++++
drivers/tty/serial/8250/8250_mtk.c | 5 +
drivers/tty/serial/8250/8250_pci.c | 7 ++
drivers/tty/serial/8250/8250_port.c | 12 ++-
drivers/tty/serial/serial-tegra.c | 6 +-
drivers/usb/cdns3/ep0.c | 1 +
drivers/usb/class/usbtmc.c | 9 +-
drivers/usb/common/usb-otg-fsm.c | 6 +-
drivers/usb/dwc3/gadget.c | 11 ++
drivers/usb/gadget/function/f_hid.c | 44 ++++++--
drivers/usb/gadget/udc/max3420_udc.c | 14 ++-
drivers/usb/host/ohci-at91.c | 9 +-
drivers/usb/serial/ch341.c | 1 +
drivers/usb/serial/ftdi_sio.c | 1 +
drivers/usb/serial/ftdi_sio_ids.h | 3 +
drivers/usb/serial/option.c | 2 +
drivers/usb/typec/tcpm/tcpm.c | 4 +-
fs/cifs/smb2ops.c | 3 +-
fs/ext4/namei.c | 2 +-
fs/pipe.c | 19 +++-
fs/reiserfs/stree.c | 31 +++++-
fs/reiserfs/super.c | 8 ++
include/linux/tee_drv.h | 1 +
include/linux/usb/otg-fsm.h | 1 +
include/net/bluetooth/hci_core.h | 1 +
include/net/ip6_route.h | 2 +-
include/net/netns/xfrm.h | 1 +
kernel/sched/core.c | 90 ++++++----------
kernel/time/timer.c | 6 +-
kernel/trace/trace.c | 4 +-
kernel/trace/trace_events_hist.c | 24 ++++-
kernel/tracepoint.c | 102 ++++++++++++++----
net/bluetooth/hci_core.c | 16 +--
net/bluetooth/hci_sock.c | 49 ++++++---
net/bluetooth/hci_sysfs.c | 3 +
net/ipv4/tcp_offload.c | 3 +
net/ipv4/udp_offload.c | 4 +
net/sched/sch_generic.c | 2 +-
net/sctp/auth.c | 14 ++-
net/xfrm/xfrm_compat.c | 49 ++++++++-
net/xfrm/xfrm_policy.c | 17 ++-
net/xfrm/xfrm_user.c | 10 ++
scripts/tracing/draw_functrace.py | 6 +-
security/selinux/ss/policydb.c | 10 +-
sound/core/pcm_native.c | 2 +-
sound/core/seq/seq_ports.c | 39 ++++---
sound/pci/hda/patch_realtek.c | 2 +
sound/usb/card.c | 2 +-
sound/usb/clock.c | 6 ++
sound/usb/quirks.c | 1 +
virt/kvm/kvm_main.c | 18 +++-
132 files changed, 1110 insertions(+), 470 deletions(-)
--
2.20.1

[PATCH openEuler-1.0-LTS] qxl_fb.c: fix variable "shadow" which going out of scope in qxlfb_create
by shenzijun 15 Oct '21
From: 沈子俊 <shenzijun(a)kylinos.cn>
kylin inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4AH6F?from=project-issue
CVE: NA
---------------------------------------------------
Free the variable "shadow" on the drm_fb_helper_alloc_fbi() error path so it is not leaked when qxlfb_create() bails out.
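As an illustration, a hedged sketch of the error path in question (the vmalloc() shown is assumed from the surrounding qxlfb_create() code; the vfree() line is the only change this patch makes):
/* Sketch only -- assumed shape of qxlfb_create()'s error handling */
shadow = vmalloc(mode_cmd.pitches[0] * mode_cmd.height);
/* ... framebuffer setup ... */
info = drm_fb_helper_alloc_fbi(&qfbdev->helper);
if (IS_ERR(info)) {
        ret = PTR_ERR(info);
        vfree(shadow);          /* release the shadow buffer instead of leaking it */
        goto out_unref;
}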
Signed-off-by: 沈子俊 <shenzijun(a)kylinos.cn>
---
drivers/gpu/drm/qxl/qxl_fb.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/qxl/qxl_fb.c b/drivers/gpu/drm/qxl/qxl_fb.c
index ca465c0d49fa..a1e91d36b034 100644
--- a/drivers/gpu/drm/qxl/qxl_fb.c
+++ b/drivers/gpu/drm/qxl/qxl_fb.c
@@ -252,6 +252,7 @@ static int qxlfb_create(struct qxl_fbdev *qfbdev,
info = drm_fb_helper_alloc_fbi(&qfbdev->helper);
if (IS_ERR(info)) {
ret = PTR_ERR(info);
+ vfree(shadow);
goto out_unref;
}
--
2.30.0

15 Oct '21
Add smart halt polling and schedule wakeup ipi optimization
Li Hua (2):
arm: Optimize ttwu IPI
sched/idle: Add IAS_SMART_HALT_POLL config for smart halt polling
feature
Xiangyou Xie (1):
sched/idle: introduce smart halt polling
arch/arm/include/asm/thread_info.h | 2 ++
include/linux/kernel.h | 4 +++
init/Kconfig | 8 ++++++
kernel/sched/idle.c | 42 ++++++++++++++++++++++++++++++
kernel/sysctl.c | 9 +++++++
5 files changed, 65 insertions(+)
--
2.20.1

15 Oct '21
bugfix for arch/fs/kernel modules
Arnd Bergmann (2):
asm-generic: fix ffs -Wshadow warning
seqlock: avoid -Wshadow warnings
Chen Jiahao (1):
arm64: seccomp: fix the incorrect name of syscall __NR_compat_exit in
secure computing mode
Liang Wang (1):
lib: use PFN_PHYS() in devmem_is_allowed()
Lin Ruizhe (1):
amba-pl011: Fix no irq issue due to no IRQ domain found
Mark Rutland (1):
arm64: fix compat syscall return truncation
Peter Zijlstra (1):
kthread: Fix PF_KTHREAD vs to_kthread() race
Yu Kuai (4):
block: ensure the memory order between bi_private and bi_status
Revert "[Huawei] block: avoid creating invalid symlink file for
patitions"
Revert "[Backport] block: take bd_mutex around delete_partitions in
del_gendisk"
blk: reuse lookup_sem to serialize partition operations
Zhihao Cheng (2):
mtd: mtdconcat: Judge callback existence based on the master
mtd: mtdconcat: Check _read,_write callbacks existence before
assignment
arch/arm/mm/mmap.c | 2 +-
arch/arm64/include/asm/ptrace.h | 12 +++++++-
arch/arm64/include/asm/seccomp.h | 2 +-
arch/arm64/include/asm/syscall.h | 19 +++++++------
arch/arm64/kernel/ptrace.c | 2 +-
arch/arm64/kernel/signal.c | 3 +-
arch/arm64/kernel/syscall.c | 9 ++----
block/genhd.c | 14 +--------
block/partitions/core.c | 4 +++
drivers/mtd/mtdconcat.c | 33 +++++++++++++++-------
drivers/tty/serial/amba-pl011.c | 12 +++++++-
fs/block_dev.c | 36 ++++++++++++++++++++----
include/asm-generic/bitops/builtin-ffs.h | 5 +---
include/linux/seqlock.h | 14 ++++-----
kernel/kthread.c | 33 ++++++++++++++++++----
kernel/sched/fair.c | 2 +-
16 files changed, 134 insertions(+), 68 deletions(-)
--
2.20.1
Backport LTS 5.10.57 patches from upstream.
Alain Volmat (1):
spi: stm32h7: fix full duplex irq handler handling
Andrei Matei (2):
selftest/bpf: Adjust expected verifier errors
selftest/bpf: Verifier tests for var-off access
Axel Lin (1):
regulator: rt5033: Fix n_voltages settings for BUCK and LDO
Borislav Petkov (1):
efi/mokvar: Reserve the table only if it is in boot services data
ChiYuan Huang (1):
regulator: rtmv20: Fix wrong mask for strobe-polarity-high
Cristian Marussi (1):
firmware: arm_scmi: Add delayed response status check
Daniel Borkmann (3):
bpf, selftests: Adjust few selftest result_unpriv outcomes
bpf: Update selftests to reflect new error states
bpf, selftests: Adjust few selftest outcomes wrt unreachable code
Filipe Manana (2):
btrfs: fix race causing unnecessary inode logging during link and
rename
btrfs: fix lost inode on log replay after mix of fsync, rename and
inode eviction
Greg Kroah-Hartman (2):
Revert "Bluetooth: Shutdown controller after workqueues are flushed or
cancelled"
Revert "watchdog: iTCO_wdt: Account for rebooting on second timeout"
Guenter Roeck (1):
spi: mediatek: Fix fifo transfer
Jason Ekstrand (2):
drm/i915: Revert "drm/i915/gem: Asynchronous cmdparser"
Revert "drm/i915: Propagate errors on awaiting already signaled
fences"
Jia He (1):
qed: fix possible unpaired spin_{un}lock_bh in
_qed_mcp_cmd_and_union()
Keith Busch (1):
nvme: fix nvme_setup_command metadata trace event
Kyle Russell (1):
ASoC: tlv320aic31xx: fix reversed bclk/wclk master bits
Linus Torvalds (1):
ACPI: fix NULL pointer dereference
Nicholas Kazlauskas (1):
drm/amd/display: Fix max vstartup calculation for modes with borders
Oder Chiou (1):
ASoC: rt5682: Fix the issue of garbled recording after
powerd_dbus_suspend
Peter Ujfalusi (2):
ASoC: ti: j721e-evm: Fix unbalanced domain activity tracking during
startup
ASoC: ti: j721e-evm: Check for not initialized parent_clk_id
Pravin B Shelar (1):
net: Fix zero-copy head len calculation.
Sudeep Holla (1):
firmware: arm_scmi: Ensure drivers provide a probe function
Takashi Iwai (1):
r8152: Fix potential PM refcount imbalance
Victor Lu (1):
drm/amd/display: Fix comparison error in dcn21 DML
Yonghong Song (1):
selftests/bpf: Add a test for ptr_to_map_value on stack for helper
access
drivers/firmware/arm_scmi/bus.c | 3 +
drivers/firmware/arm_scmi/driver.c | 8 +-
drivers/firmware/efi/mokvar-table.c | 5 +-
.../drm/amd/display/dc/dcn20/dcn20_resource.c | 6 +-
.../dc/dml/dcn21/display_mode_vba_21.c | 2 +-
.../gpu/drm/i915/gem/i915_gem_execbuffer.c | 164 +-----------------
drivers/gpu/drm/i915/i915_cmd_parser.c | 28 +--
drivers/gpu/drm/i915/i915_request.c | 8 +-
drivers/net/ethernet/qlogic/qed/qed_mcp.c | 23 ++-
drivers/net/usb/r8152.c | 3 +-
drivers/nvme/host/trace.h | 6 +-
drivers/regulator/rtmv20-regulator.c | 2 +-
drivers/spi/spi-mt65xx.c | 19 +-
drivers/spi/spi-stm32.c | 15 +-
drivers/watchdog/iTCO_wdt.c | 12 +-
fs/btrfs/tree-log.c | 5 +-
include/acpi/acpi_bus.h | 3 +-
include/linux/mfd/rt5033-private.h | 4 +-
net/bluetooth/hci_core.c | 16 +-
net/core/skbuff.c | 5 +-
sound/soc/codecs/rt5682.c | 8 +-
sound/soc/codecs/tlv320aic31xx.h | 4 +-
sound/soc/ti/j721e-evm.c | 18 +-
.../selftests/bpf/progs/bpf_iter_task.c | 3 +-
tools/testing/selftests/bpf/test_verifier.c | 2 +-
tools/testing/selftests/bpf/verifier/and.c | 2 +
.../selftests/bpf/verifier/basic_stack.c | 2 +-
tools/testing/selftests/bpf/verifier/bounds.c | 19 +-
.../selftests/bpf/verifier/bounds_deduction.c | 21 +--
.../bpf/verifier/bounds_mix_sign_unsign.c | 13 --
tools/testing/selftests/bpf/verifier/calls.c | 4 +-
.../testing/selftests/bpf/verifier/const_or.c | 4 +-
.../selftests/bpf/verifier/dead_code.c | 2 +
.../bpf/verifier/helper_access_var_len.c | 12 +-
.../testing/selftests/bpf/verifier/int_ptr.c | 6 +-
tools/testing/selftests/bpf/verifier/jmp32.c | 22 +++
tools/testing/selftests/bpf/verifier/jset.c | 10 +-
.../testing/selftests/bpf/verifier/map_ptr.c | 4 +-
.../selftests/bpf/verifier/raw_stack.c | 10 +-
.../selftests/bpf/verifier/stack_ptr.c | 22 +--
tools/testing/selftests/bpf/verifier/unpriv.c | 9 +-
.../selftests/bpf/verifier/value_ptr_arith.c | 17 +-
.../testing/selftests/bpf/verifier/var_off.c | 115 ++++++++++--
43 files changed, 331 insertions(+), 335 deletions(-)
--
2.20.1
backport some bugfix patches for mm/sched/kprobes modules.
Aili Yao (1):
mm,hwpoison: return -EHWPOISON to denote that the page has already
been poisoned
Pu Lehui (1):
powerpc/kprobes: Fix kprobe Oops happens in booke
Tony Luck (1):
mm/memory-failure: use a mutex to avoid memory_failure() races
Yu Jiahua (2):
sched: Access control for sysctl_update_load_latency
sched: Fix branch prediction error in static_key
arch/powerpc/kernel/kprobes.c | 3 ++-
kernel/sched/fair.c | 5 ++++-
mm/memory-failure.c | 40 ++++++++++++++++++++++-------------
3 files changed, 31 insertions(+), 17 deletions(-)
--
2.20.1
Backport LTS 5.10.56 from upstream.
Arkadiusz Kubalewski (2):
i40e: Fix logic of disabling queues
i40e: Fix firmware LLDP agent related warning
Arnaldo Carvalho de Melo (1):
Revert "perf map: Fix dso->nsinfo refcounting"
Bjorn Andersson (1):
drm/msm/dp: Initialize the INTF_CONFIG register
Cong Wang (1):
skmsg: Make sk_psock_destroy() static
Dale Zhao (1):
drm/amd/display: ensure dentist display clock update finished in DCN20
Dan Carpenter (1):
can: hi311x: fix a signedness bug in hi3110_cmd()
Daniel Borkmann (4):
bpf: Introduce BPF nospec instruction for mitigating Spectre v4
bpf: Fix leakage due to insufficient speculative store bypass
mitigation
bpf: Remove superfluous aux sanitation on subprog rejection
bpf: Fix pointer arithmetic mask tightening under state pruning
Desmond Cheong Zhi Xi (1):
btrfs: fix rw device counting in __btrfs_free_extra_devids
Dima Chumak (1):
net/mlx5e: Fix nullptr in mlx5e_hairpin_get_mdev()
Felix Fietkau (1):
mac80211: fix enabling 4-address mode on a sta vif after assoc
Florian Westphal (1):
netfilter: conntrack: adjust stop timestamp to real expiry value
Geetha sowjanya (1):
octeontx2-pf: Fix interface down flag on error
Gilad Naaman (1):
net: Set true network header for ECN decapsulation
Goldwyn Rodrigues (1):
btrfs: mark compressed range uptodate only if all bio succeed
Greg Kroah-Hartman (1):
selftest: fix build error in tools/testing/selftests/vm/userfaultfd.c
Hoang Le (1):
tipc: fix sleeping in tipc accept routine
Hui Wang (1):
Revert "ACPI: resources: Add checks for ACPI IRQ override"
Jan Kiszka (1):
x86/asm: Ensure asm/proto.h can be included stand-alone
Jason Gerecke (1):
HID: wacom: Re-enable touch by default for Cintiq 24HDT / 27QHDT
Jedrzej Jagielski (2):
i40e: Fix queue-to-TC mapping on Tx
i40e: Fix log TC creation failure when max num of queues is exceeded
Jiapeng Chong (1):
mlx4: Fix missing error code in mlx4_load_one()
Jiri Kosina (2):
drm/amdgpu: Avoid printing of stack contents on firmware load error
drm/amdgpu: Fix resource leak on probe error path
Juergen Gross (1):
x86/kvm: fix vcpu-id indexed array sizes
Junxiao Bi (2):
ocfs2: fix zero out valid data
ocfs2: issue zeroout to EOF blocks
Krzysztof Kozlowski (1):
nfc: nfcsim: fix use after free during module unload
Linus Torvalds (1):
pipe: make pipe writes always wake up readers
Lorenz Bauer (2):
bpf: Fix OOB read when printing XDP link fdinfo
bpf: verifier: Allocate idmap scratch in verifier env
Lukasz Cieplicki (1):
i40e: Add additional info to PHY type error
Maor Gottlieb (1):
net/mlx5: Fix flow table chaining
Marcelo Ricardo Leitner (1):
sctp: fix return value check in __sctp_rcv_asconf_lookup
Mike Rapoport (1):
alpha: register early reserved memory in memblock
Naresh Kumar PBS (1):
RDMA/bnxt_re: Fix stats counters
Nguyen Dinh Phi (1):
cfg80211: Fix possible memory leak in function cfg80211_bss_update
Oleksij Rempel (1):
can: j1939: j1939_session_deactivate(): clarify lifetime of session
object
Pablo Neira Ayuso (1):
netfilter: nft_nat: allow to specify layer 4 protocol NAT only
Paolo Bonzini (1):
KVM: add missing compat KVM_CLEAR_DIRTY_LOG
Paul Jakma (1):
NIU: fix incorrect error return, missed in previous revert
Pavel Skripkin (6):
can: mcba_usb_start(): add missing urb->transfer_dma initialization
can: usb_8dev: fix memory leak
can: ems_usb: fix memory leak
can: esd_usb2: fix memory leak
net: qrtr: fix memory leaks
net: llc: fix skb_over_panic
Robert Foss (1):
drm/msm/dpu: Fix sm8250_mdp register length
Shannon Nelson (3):
ionic: remove intr coalesce update from napi
ionic: fix up dim accounting for tx and rx
ionic: count csum_none when offload enabled
Srikar Dronamraju (1):
powerpc/pseries: Fix regression while building external modules
Srinivas Pandruvada (1):
ACPI: DPTF: Fix reading of attributes
Stephane Grosjean (1):
can: peak_usb: pcan_usb_handle_bus_evt(): fix reading rxerr/txerr
values
Steve French (1):
SMB3: fix readpage for large swap cache
Tejun Heo (1):
blk-iocost: fix operation ordering in iocg_wake_fn()
Vitaly Kuznetsov (1):
KVM: x86: Check the right feature bit for MSR_KVM_ASYNC_PF_ACK access
Wang Hai (2):
tulip: windbond-840: Fix missing pci_disable_device() in probe and
remove
sis900: Fix missing pci_disable_device() in probe and remove
Xin Long (2):
tipc: fix implicit-connect for SYN+
tipc: do not write skb_shinfo frags when doing decrytion
Yang Yingliang (1):
io_uring: fix null-ptr-deref in io_sq_offload_start()
Zhang Changzhong (1):
can: j1939: j1939_xtp_rx_dat_one(): fix rxtimer value between
consecutive TP.DT to 750ms
Ziyang Xuan (1):
can: raw: raw_setsockopt(): fix raw_rcv panic for sock UAF
arch/alpha/kernel/setup.c | 13 +-
arch/arm/net/bpf_jit_32.c | 3 +
arch/arm64/net/bpf_jit_comp.c | 13 ++
arch/mips/net/ebpf_jit.c | 3 +
arch/powerpc/net/bpf_jit_comp64.c | 6 +
arch/powerpc/platforms/pseries/setup.c | 2 +-
arch/riscv/net/bpf_jit_comp32.c | 4 +
arch/riscv/net/bpf_jit_comp64.c | 4 +
arch/s390/net/bpf_jit_comp.c | 5 +
arch/sparc/net/bpf_jit_comp_64.c | 3 +
arch/x86/include/asm/proto.h | 2 +
arch/x86/kvm/ioapic.c | 2 +-
arch/x86/kvm/ioapic.h | 4 +-
arch/x86/kvm/x86.c | 4 +-
arch/x86/net/bpf_jit_comp.c | 7 +
arch/x86/net/bpf_jit_comp32.c | 6 +
block/blk-iocost.c | 11 +-
drivers/acpi/dptf/dptf_pch_fivr.c | 51 ++++-
drivers/acpi/resource.c | 9 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 8 +-
drivers/gpu/drm/amd/amdgpu/psp_v12_0.c | 7 +-
.../display/dc/clk_mgr/dcn20/dcn20_clk_mgr.c | 2 +-
.../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c | 2 +-
drivers/gpu/drm/msm/dp/dp_catalog.c | 1 +
drivers/hid/wacom_wac.c | 2 +-
drivers/infiniband/hw/bnxt_re/main.c | 4 +-
drivers/infiniband/hw/bnxt_re/qplib_res.c | 10 +-
drivers/infiniband/hw/bnxt_re/qplib_res.h | 1 +
drivers/net/can/spi/hi311x.c | 2 +-
drivers/net/can/usb/ems_usb.c | 14 +-
drivers/net/can/usb/esd_usb2.c | 16 +-
drivers/net/can/usb/mcba_usb.c | 2 +
drivers/net/can/usb/peak_usb/pcan_usb.c | 10 +-
drivers/net/can/usb/usb_8dev.c | 15 +-
drivers/net/ethernet/dec/tulip/winbond-840.c | 7 +-
.../net/ethernet/intel/i40e/i40e_ethtool.c | 6 +-
drivers/net/ethernet/intel/i40e/i40e_main.c | 61 +++---
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 50 +++++
drivers/net/ethernet/intel/i40e/i40e_txrx.h | 2 +
.../marvell/octeontx2/nic/otx2_ethtool.c | 7 +-
.../ethernet/marvell/octeontx2/nic/otx2_pf.c | 5 +
drivers/net/ethernet/mellanox/mlx4/main.c | 1 +
.../net/ethernet/mellanox/mlx5/core/en_tc.c | 33 +++-
.../net/ethernet/mellanox/mlx5/core/fs_core.c | 10 +-
.../net/ethernet/pensando/ionic/ionic_lif.c | 14 +-
.../net/ethernet/pensando/ionic/ionic_txrx.c | 41 ++--
drivers/net/ethernet/sis/sis900.c | 7 +-
drivers/net/ethernet/sun/niu.c | 3 +-
drivers/nfc/nfcsim.c | 3 +-
fs/btrfs/compression.c | 2 +-
fs/btrfs/volumes.c | 1 +
fs/cifs/file.c | 2 +-
fs/io_uring.c | 2 +-
fs/ocfs2/file.c | 103 ++++++----
fs/pipe.c | 10 +-
include/linux/bpf_types.h | 1 +
include/linux/bpf_verifier.h | 11 +-
include/linux/filter.h | 15 ++
include/linux/skmsg.h | 1 -
include/net/llc_pdu.h | 31 ++-
kernel/bpf/core.c | 19 +-
kernel/bpf/disasm.c | 16 +-
kernel/bpf/verifier.c | 186 ++++++------------
net/can/j1939/transport.c | 11 +-
net/can/raw.c | 20 +-
net/core/skmsg.c | 3 +-
net/ipv4/ip_tunnel.c | 2 +-
net/llc/af_llc.c | 10 +-
net/llc/llc_s_ac.c | 2 +-
net/mac80211/cfg.c | 19 ++
net/mac80211/ieee80211_i.h | 2 +
net/mac80211/mlme.c | 4 +-
net/netfilter/nf_conntrack_core.c | 7 +-
net/netfilter/nft_nat.c | 4 +-
net/qrtr/qrtr.c | 6 +-
net/sctp/input.c | 2 +-
net/tipc/crypto.c | 14 +-
net/tipc/socket.c | 30 +--
net/wireless/scan.c | 6 +-
tools/perf/util/map.c | 2 -
tools/testing/selftests/vm/userfaultfd.c | 2 +-
virt/kvm/kvm_main.c | 28 +++
82 files changed, 706 insertions(+), 366 deletions(-)
--
2.20.1
Backport LTS 5.10.55 from upstream.
Christoph Hellwig (2):
iomap: remove the length variable in iomap_seek_data
iomap: remove the length variable in iomap_seek_hole
Cristian Marussi (1):
firmware: arm_scmi: Fix range check for the maximum number of pending
messages
Desmond Cheong Zhi Xi (3):
hfs: add missing clean-up in hfs_fill_super
hfs: fix high memory mapping in hfs_bnode_read
hfs: add lock nesting notation to hfs_find_init
Eric Dumazet (1):
net: annotate data race around sk_ll_usec
Hyunchul Lee (1):
cifs: fix the out of range assignment to bit fields in
parse_server_interfaces
Maxim Levitsky (1):
KVM: x86: determine if an exception has an error code only when
injecting it.
Miklos Szeredi (1):
af_unix: fix garbage collect vs MSG_PEEK
Paul E. McKenney (2):
rcu-tasks: Don't delete holdouts within trc_inspect_reader()
rcu-tasks: Don't delete holdouts within trc_wait_for_one_reader()
Paul Gortmaker (1):
cgroup1: fix leaked context root causing sporadic NULL deref in LTP
Pavel Begunkov (1):
io_uring: fix link timeout refs
Sudeep Holla (2):
firmware: arm_scmi: Fix possible scmi_linux_errmap buffer overflow
ARM: dts: versatile: Fix up interrupt controller node names
Vasily Averin (2):
ipv6: allocate enough headroom in ip6_finish_output2()
ipv6: ip6_finish_output2: set sk into newly allocated nskb
Xin Long (1):
sctp: move 198 addresses from unusable to private scope
Yang Yingliang (3):
workqueue: fix UAF in pwq_unbound_release_workfn()
net/802/mrp: fix memleak in mrp_request_join()
net/802/garp: fix memleak in garp_request_join()
Yonghong Song (1):
tools: Allow proper CC/CXX/... override with LLVM=1 in
Makefile.include
Zheyu Ma (1):
drm/ttm: add a check against null pointer dereference
arch/arm/boot/dts/versatile-ab.dts | 5 +--
arch/arm/boot/dts/versatile-pb.dts | 2 +-
arch/x86/kvm/x86.c | 13 +++++--
drivers/firmware/arm_scmi/driver.c | 12 +++---
drivers/gpu/drm/ttm/ttm_range_manager.c | 3 ++
fs/cifs/smb2ops.c | 4 +-
fs/hfs/bfind.c | 14 ++++++-
fs/hfs/bnode.c | 25 +++++++++---
fs/hfs/btree.h | 7 ++++
fs/hfs/super.c | 10 ++---
fs/internal.h | 1 -
fs/io_uring.c | 1 -
fs/iomap/seek.c | 25 +++++-------
include/linux/fs_context.h | 1 +
include/net/busy_poll.h | 2 +-
include/net/sctp/constants.h | 4 +-
kernel/cgroup/cgroup-v1.c | 4 +-
kernel/rcu/tasks.h | 6 +--
kernel/workqueue.c | 20 ++++++----
net/802/garp.c | 14 +++++++
net/802/mrp.c | 14 +++++++
net/core/sock.c | 2 +-
net/ipv6/ip6_output.c | 28 ++++++++++++++
net/sctp/protocol.c | 3 +-
net/unix/af_unix.c | 51 ++++++++++++++++++++++++-
tools/scripts/Makefile.include | 12 +++++-
26 files changed, 215 insertions(+), 68 deletions(-)
--
2.20.1

[PATCH openEuler-1.0-LTS 1/2] io_uring: hold uring_lock while completing failed polled io in io_wq_submit_work()
by Yang Yingliang 15 Oct '21
From: Xiaoguang Wang <xiaoguang.wang(a)linux.alibaba.com>
mainline inclusion
from mainline-5.11-rc1
commit c07e6719511e77c4b289f62bfe96423eb6ea061d
category: bugfix
bugzilla: 182869
CVE: NA
---------------------------
io_iopoll_complete() does not hold completion_lock to complete polled io,
so in io_wq_submit_work() we cannot call io_req_complete() directly to
complete polled io; otherwise there may be concurrent access to the cqring,
defer_list, etc., which is not safe. Commit dad1b1242fd5 ("io_uring: always
let io_iopoll_complete() complete polled io") fixed this, but Pavel reported
that IOPOLL requests other than rw can do buffer reg/unreg requests
(IORING_OP_PROVIDE_BUFFERS or IORING_OP_REMOVE_BUFFERS), so that fix is not
sufficient.
Since io_iopoll_complete() is always called under uring_lock, for polled io
we can also take uring_lock here to fix the issue.
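As a straight-line sketch of what the hunk below produces (illustrative only; the diff further down is the authoritative change), the failure path in io_wq_submit_work() becomes:
if (ret) {
        struct io_ring_ctx *lock_ctx = NULL;

        /* io_iopoll_complete() always runs under uring_lock, so completing
         * a polled request here must take the same lock to avoid racing on
         * the cqring and defer_list. */
        if (req->ctx->flags & IORING_SETUP_IOPOLL)
                lock_ctx = req->ctx;

        if (lock_ctx)
                mutex_lock(&lock_ctx->uring_lock);

        req_set_fail_links(req);
        io_req_complete(req, ret);

        if (lock_ctx)
                mutex_unlock(&lock_ctx->uring_lock);
}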
Fixes: dad1b1242fd5 ("io_uring: always let io_iopoll_complete() complete polled io")
Cc: <stable(a)vger.kernel.org> # 5.5+
Signed-off-by: Xiaoguang Wang <xiaoguang.wang(a)linux.alibaba.com>
Reviewed-by: Pavel Begunkov <asml.silence(a)gmail.com>
[axboe: don't deref 'req' after completing it']
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Zhihao Cheng <chengzhihao1(a)huawei.com>
Reviewed-by: Yang Erkun <yangerkun(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/io_uring.c | 29 +++++++++++++++++++----------
1 file changed, 19 insertions(+), 10 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index c944a27178a48..b5fb86e2ed597 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5782,19 +5782,28 @@ static struct io_wq_work *io_wq_submit_work(struct io_wq_work *work)
}
if (ret) {
+ struct io_ring_ctx *lock_ctx = NULL;
+
+ if (req->ctx->flags & IORING_SETUP_IOPOLL)
+ lock_ctx = req->ctx;
+
/*
- * io_iopoll_complete() does not hold completion_lock to complete
- * polled io, so here for polled io, just mark it done and still let
- * io_iopoll_complete() complete it.
+ * io_iopoll_complete() does not hold completion_lock to
+ * complete polled io, so here for polled io, we can not call
+ * io_req_complete() directly, otherwise there maybe concurrent
+ * access to cqring, defer_list, etc, which is not safe. Given
+ * that io_iopoll_complete() is always called under uring_lock,
+ * so here for polled io, we also get uring_lock to complete
+ * it.
*/
- if (req->ctx->flags & IORING_SETUP_IOPOLL) {
- struct kiocb *kiocb = &req->rw.kiocb;
+ if (lock_ctx)
+ mutex_lock(&lock_ctx->uring_lock);
- kiocb_done(kiocb, ret, NULL);
- } else {
- req_set_fail_links(req);
- io_req_complete(req, ret);
- }
+ req_set_fail_links(req);
+ io_req_complete(req, ret);
+
+ if (lock_ctx)
+ mutex_unlock(&lock_ctx->uring_lock);
}
return io_steal_work(req);
--
2.25.1

[PATCH kernel-4.19] block: fix UAF from race of ioc_release_fn() and __ioc_clear_queue()
by Yang Yingliang 15 Oct '21
From: Laibin Qiu <qiulaibin(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 182666
CVE: NA
--------------------------
KASAN reports a use-after-free when running a block test:
[293762.535116]
==================================================================
[293762.535129] BUG: KASAN: use-after-free in
queued_spin_lock_slowpath+0x78/0x4c8
[293762.535133] Write of size 2 at addr ffff8000d5f12bc8 by task sh/9148
[293762.535135]
[293762.535145] CPU: 1 PID: 9148 Comm: sh Kdump: loaded Tainted: G W
4.19.90-vhulk2108.6.0.h824.kasan.eulerosv2r10.aarch64 #1
[293762.535148] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0
02/06/2015
[293762.535150] Call trace:
[293762.535154] dump_backtrace+0x0/0x310
[293762.535158] show_stack+0x28/0x38
[293762.535165] dump_stack+0xec/0x15c
[293762.535172] print_address_description+0x68/0x2d0
[293762.535177] kasan_report+0x130/0x2f0
[293762.535182] __asan_store2+0x80/0xa8
[293762.535189] queued_spin_lock_slowpath+0x78/0x4c8
[293762.535194] __ioc_clear_queue+0x158/0x160
[293762.535198] ioc_clear_queue+0x194/0x258
[293762.535202] elevator_switch_mq+0x64/0x170
[293762.535206] elevator_switch+0x140/0x270
[293762.535211] elv_iosched_store+0x1a4/0x2a0
[293762.535215] queue_attr_store+0x90/0xe0
[293762.535219] sysfs_kf_write+0xa8/0xe8
[293762.535222] kernfs_fop_write+0x1f8/0x378
[293762.535227] __vfs_write+0xe0/0x360
[293762.535233] vfs_write+0xf0/0x270
[293762.535237] ksys_write+0xdc/0x1b8
[293762.535241] __arm64_sys_write+0x50/0x60
[293762.535245] el0_svc_common+0xc8/0x320
[293762.535250] el0_svc_handler+0xf8/0x160
[293762.535253] el0_svc+0x10/0x218
[293762.535254]
[293762.535258] Allocated by task 3466763:
[293762.535264] kasan_kmalloc+0xe0/0x190
[293762.535269] kasan_slab_alloc+0x14/0x20
[293762.535276] kmem_cache_alloc_node+0x1b4/0x420
[293762.535280] create_task_io_context+0x40/0x210
[293762.535284] generic_make_request_checks+0xc78/0xe38
[293762.535288] generic_make_request+0xf8/0x640
[293762.535394] generic_file_direct_write+0x100/0x268
[293762.535401] __generic_file_write_iter+0x128/0x370
[293762.535467] vfs_iter_write+0x64/0x90
[293762.535489] ovl_write_iter+0x2f8/0x458 [overlay]
[293762.535493] __vfs_write+0x264/0x360
[293762.535497] vfs_write+0xf0/0x270
[293762.535501] ksys_write+0xdc/0x1b8
[293762.535505] __arm64_sys_write+0x50/0x60
[293762.535509] el0_svc_common+0xc8/0x320
[293762.535387] ext4_direct_IO+0x3c8/0xe80 [ext4]
[293762.535394] generic_file_direct_write+0x100/0x268
[293762.535401] __generic_file_write_iter+0x128/0x370
[293762.535452] ext4_file_write_iter+0x610/0xa80 [ext4]
[293762.535457] do_iter_readv_writev+0x28c/0x390
[293762.535463] do_iter_write+0xfc/0x360
[293762.535467] vfs_iter_write+0x64/0x90
[293762.535489] ovl_write_iter+0x2f8/0x458 [overlay]
[293762.535493] __vfs_write+0x264/0x360
[293762.535497] vfs_write+0xf0/0x270
[293762.535501] ksys_write+0xdc/0x1b8
[293762.535505] __arm64_sys_write+0x50/0x60
[293762.535509] el0_svc_common+0xc8/0x320
[293762.535513] el0_svc_handler+0xf8/0x160
[293762.535517] el0_svc+0x10/0x218
[293762.535521]
[293762.535523] Freed by task 3466763:
[293762.535528] __kasan_slab_free+0x120/0x228
[293762.535532] kasan_slab_free+0x10/0x18
[293762.535536] kmem_cache_free+0x68/0x248
[293762.535540] put_io_context+0x104/0x190
[293762.535545] put_io_context_active+0x204/0x2c8
[293762.535549] exit_io_context+0x74/0xa0
[293762.535553] do_exit+0x658/0xae0
[293762.535557] do_group_exit+0x74/0x1a8
[293762.535561] get_signal+0x21c/0xf38
[293762.535564] do_signal+0x10c/0x450
[293762.535568] do_notify_resume+0x224/0x4b0
[293762.535573] work_pending+0x8/0x10
[293762.535574]
[293762.535578] The buggy address belongs to the object at
ffff8000d5f12bb8
which belongs to the cache blkdev_ioc of size 136
[293762.535582] The buggy address is located 16 bytes inside of
136-byte region [ffff8000d5f12bb8, ffff8000d5f12c40)
[293762.535583] The buggy address belongs to the page:
[293762.535588] page:ffff7e000357c480 count:1 mapcount:0
mapping:ffff8000d8563c00 index:0x0
[293762.536201] flags: 0x7ffff0000000100(slab)
[293762.536540] raw: 07ffff0000000100 ffff7e0003118588 ffff8000d8adb530
ffff8000d8563c00
[293762.536546] raw: 0000000000000000 0000000000140014 00000001ffffffff
0000000000000000
[293762.536551] page dumped because: kasan: bad access detected
[293762.536552]
[293762.536554] Memory state around the buggy address:
[293762.536558] ffff8000d5f12a80: 00 00 00 00 00 00 fc fc fc fc fc fc fc
fc fb fb
[293762.536562] ffff8000d5f12b00: fb fb fb fb fb fb fb fb fb fb fb fb fb
fb fb fc
[293762.536566] >ffff8000d5f12b80: fc fc fc fc fc fc fc fb fb fb fb fb
fb fb fb fb
[293762.536568] ^
[293762.536572] ffff8000d5f12c00: fb fb fb fb fb fb fb fb fc fc fc fc fc
fc fc fc
[293762.536576] ffff8000d5f12c80: 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00
[293762.536577]
==================================================================
ioc_release_fn() destroys icqs from ioc->icq_list, while
__ioc_clear_queue() destroys icqs from request_queue->icq_list.
ioc_release_fn() takes ioc->lock first and frees the ioc at the end.
__ioc_clear_queue() then gets the ioc from an icq and tries to take
ioc->lock, but the ioc has already been freed, which results in a
use-after-free.
CPU0                                  CPU1
put_io_context                        elevator_switch_mq
  queue_work(&ioc->release_work)        ioc_clear_queue
                                          ^^^ splice q->icq_list
                                          __ioc_clear_queue
                                            ^^^ get icq from icq_list
                                                get ioc from icq->ioc
ioc_release_fn
  spin_lock(ioc->lock)
  ioc_destroy_icq(icq)
  spin_unlock(ioc->lock)
  free(ioc)
                                            spin_lock(ioc->lock) <= UAF
Fix this by keeping request_queue->queue_lock held across
__ioc_clear_queue() in ioc_clear_queue() to avoid this race.
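The net effect on ioc_clear_queue() (a sketch of the post-patch function; the diff below is the authoritative change) is that q->queue_lock now stays held across the whole splice-and-clear:
void ioc_clear_queue(struct request_queue *q)
{
        LIST_HEAD(icq_list);

        spin_lock_irq(q->queue_lock);
        list_splice_init(&q->icq_list, &icq_list);
        /* Clear the icqs while queue_lock is still held, for both the mq
         * and legacy paths, so ioc_release_fn() cannot free an ioc that
         * we are about to lock. */
        __ioc_clear_queue(&icq_list);
        spin_unlock_irq(q->queue_lock);
}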
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
Link:
https://lore.kernel.org/lkml/1c9ad9f2-c487-c793-1ffc-5c3ec0fcc0ae@kernel.dk/
Conflicts:
block/blk-ioc.c
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
block/blk-ioc.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 4c810969c3e2f..281b7a93e340a 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -261,13 +261,8 @@ void ioc_clear_queue(struct request_queue *q)
spin_lock_irq(q->queue_lock);
list_splice_init(&q->icq_list, &icq_list);
- if (q->mq_ops) {
- spin_unlock_irq(q->queue_lock);
- __ioc_clear_queue(&icq_list);
- } else {
- __ioc_clear_queue(&icq_list);
- spin_unlock_irq(q->queue_lock);
- }
+ __ioc_clear_queue(&icq_list);
+ spin_unlock_irq(q->queue_lock);
}
int create_task_io_context(struct task_struct *task, gfp_t gfp_flags, int node)
--
2.25.1

[PATCH kernel-4.19] Driver/SMMUV3: Bugfix for the softlockup when the driver processes events
by Yang Yingliang 15 Oct '21
From: Zhou Guanghui <zhouguanghui1(a)huawei.com>
hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4D63I
CVE: NA
-------------------------------------------------
If the SMMU frequently reports a large number of events,
the events in the event queue cannot be processed in time
and the while loop in the event-queue thread never exits.
Add a cond_resched() to avoid a softlockup.
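A minimal sketch of where the rescheduling point lands in the event handling
loop; only the lines shown in the hunk below are taken from the driver, the
enclosing loop is paraphrased from the openEuler 4.19 tree:
	/* inside arm_smmu_evtq_thread(), for each event popped off the
	 * event queue while holding q->wq.lock: */
	u8 id = FIELD_GET(EVTQ_0_ID, evt[0]);

	spin_unlock(&q->wq.lock);
	cond_resched();		/* yield between events so a flood of SMMU
				 * events cannot keep this thread running
				 * long enough to trip the soft-lockup
				 * detector */
	ret = arm_smmu_handle_evt(smmu, evt);
	spin_lock(&q->wq.lock);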
Signed-off-by: Zhou Guanghui <zhouguanghui1(a)huawei.com>
Signed-off-by: Guo Mengqi <guomengqi3(a)huawei.com>
Reviewed-by: Weilong Chen <chenweilong(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/iommu/arm-smmu-v3.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 292fd9a10c7ff..8b5083c3e0a16 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1832,6 +1832,7 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
u8 id = FIELD_GET(EVTQ_0_ID, evt[0]);
spin_unlock(&q->wq.lock);
+ cond_resched();
ret = arm_smmu_handle_evt(smmu, evt);
spin_lock(&q->wq.lock);
--
2.25.1
1
0
15 Oct '21
From: Eric Dumazet <edumazet(a)google.com>
mainline inclusion
from mainline-v5.5-rc1
commit b60fa1c5d01a10e358c509b904d4bead6114d593
category: bugfix
bugzilla: 182865
CVE: NA
-------------------------------------------------
The introduction of this schedule point was done in commit
2ba2506ca7ca ("[NET]: Add preemption point in qdisc_run")
at a time the loop was not bounded.
Then later in commit d5b8aa1d246f ("net_sched: fix dequeuer fairness")
we added a limit on the number of packets.
Now is the time to remove the schedule point, since the default
limit of 64 packets matches the number of packets a typical NAPI
poll can process in a row.
This solves a latency problem for most TCP receivers under moderate load :
1) host receives a packet.
NET_RX_SOFTIRQ is raised by NIC hard IRQ handler
2) __do_softirq() does its first loop, handling NET_RX_SOFTIRQ
and calling the driver napi->poll() function
3) TCP stores the skb in socket receive queue:
4) TCP calls sk->sk_data_ready() and wakeups a user thread
waiting for EPOLLIN (as a result, need_resched() might now be true)
5) TCP cooks an ACK and sends it.
6) qdisc_run() processes one packet from qdisc, and sees need_resched(),
this raises NET_TX_SOFTIRQ (even if there are no more packets in
the qdisc)
Then we go back to the __do_softirq() in 2), and we see that new
softirqs were raised. Since need_resched() is true, we end up waking
ksoftirqd in this path :
	if (pending) {
		if (time_before(jiffies, end) && !need_resched() &&
		    --max_restart)
			goto restart;

		wakeup_softirqd();
	}
So we have many wakeups of ksoftirqd kernel threads,
and more calls to qdisc_run() with associated lock overhead.
Note that another way to solve the issue would be to change TCP
to first send the ACK packet, then signal the EPOLLIN,
but this changes P99 latencies, as sending the ACK packet
can add a long delay.
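After the change, the dequeue loop in __qdisc_run() only reschedules the qdisc
once the packet quota is spent. A rough sketch of the resulting loop (the
quota initialization from dev_tx_weight is assumed from the surrounding code;
the rest follows the hunk below):
	void __qdisc_run(struct Qdisc *q)
	{
		int quota = dev_tx_weight;	/* 64 packets by default */
		int packets;

		while (qdisc_restart(q, &packets)) {
			quota -= packets;
			/* Defer the rest to NET_TX_SOFTIRQ only when the
			 * quota is exhausted; need_resched() no longer cuts
			 * the run short. */
			if (quota <= 0) {
				__netif_schedule(q);
				break;
			}
		}
	}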
Signed-off-by: Eric Dumazet <edumazet(a)google.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Lu Wei <luwei32(a)huawei.com>
Reviewed-by: Yue Haibing <yuehaibing(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
net/sched/sch_generic.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 4e15913e7519e..c2276a3c9dd8d 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -401,13 +401,8 @@ void __qdisc_run(struct Qdisc *q)
int packets;
while (qdisc_restart(q, &packets)) {
- /*
- * Ordered by possible occurrence: Postpone processing if
- * 1. we've exceeded packet quota
- * 2. another process needs the CPU;
- */
quota -= packets;
- if (quota <= 0 || need_resched()) {
+ if (quota <= 0) {
__netif_schedule(q);
break;
}
--
2.25.1
1
0
15 Oct '21
Dan Schatzberg (1):
kernel/sched/psi.c: expose pressure metrics on root cgroup
Johannes Weiner (11):
sched: loadavg: consolidate LOAD_INT, LOAD_FRAC, CALC_LOAD
sched: loadavg: make calc_load_n() public
sched: sched.h: make rq locking and clock functions available in
stats.h
sched: introduce this_rq_lock_irq()
psi: pressure stall information for CPU, memory, and IO
psi: cgroup support
psi: make disabling/enabling easier for vendor kernels
psi: fix aggregation idle shut-off
psi: avoid divide-by-zero crash inside virtual machines
fs: kernfs: add poll file operation
sched/psi: Fix sampling error and rare div0 crashes with cgroups and
high uptime
Josef Bacik (1):
blk-iolatency: use a percentile approache for ssd's
Liu Xinpeng (2):
psi:enable psi in config
psi:avoid kabi change
Olof Johansson (1):
kernel/sched/psi.c: simplify cgroup_move_task()
Suren Baghdasaryan (6):
psi: introduce state_mask to represent stalled psi states
psi: make psi_enable static
psi: rename psi fields in preparation for psi trigger addition
psi: split update_stats into parts
psi: track changed states
include/: refactor headers to allow kthread.h inclusion in psi_types.h
Documentation/accounting/psi.txt | 73 +++
Documentation/admin-guide/cgroup-v2.rst | 18 +
Documentation/admin-guide/kernel-parameters.txt | 4 +
arch/arm64/configs/openeuler_defconfig | 2 +
arch/powerpc/platforms/cell/cpufreq_spudemand.c | 2 +-
arch/powerpc/platforms/cell/spufs/sched.c | 9 +-
arch/s390/appldata/appldata_os.c | 4 -
arch/x86/configs/openeuler_defconfig | 2 +
block/blk-iolatency.c | 183 +++++-
drivers/cpuidle/governors/menu.c | 4 -
drivers/spi/spi-rockchip.c | 1 +
fs/kernfs/file.c | 31 +-
fs/proc/loadavg.c | 3 -
include/linux/cgroup-defs.h | 12 +
include/linux/cgroup.h | 16 +
include/linux/kernfs.h | 8 +
include/linux/kthread.h | 4 +
include/linux/psi.h | 55 ++
include/linux/psi_types.h | 95 +++
include/linux/sched.h | 13 +
include/linux/sched/loadavg.h | 24 +-
init/Kconfig | 28 +
kernel/cgroup/cgroup.c | 55 +-
kernel/debug/kdb/kdb_main.c | 7 +-
kernel/fork.c | 4 +
kernel/kthread.c | 3 +
kernel/sched/Makefile | 1 +
kernel/sched/core.c | 16 +-
kernel/sched/loadavg.c | 139 ++--
kernel/sched/psi.c | 823 ++++++++++++++++++++++++
kernel/sched/sched.h | 178 ++---
kernel/sched/stats.h | 86 +++
kernel/workqueue.c | 23 +
kernel/workqueue_internal.h | 6 +-
mm/compaction.c | 5 +
mm/filemap.c | 11 +
mm/page_alloc.c | 9 +
mm/vmscan.c | 9 +
38 files changed, 1725 insertions(+), 241 deletions(-)
create mode 100644 Documentation/accounting/psi.txt
create mode 100644 include/linux/psi.h
create mode 100644 include/linux/psi_types.h
create mode 100644 kernel/sched/psi.c
--
1.8.3.1
3
24
[PATCH kernel-4.19 1/4] ath10k: add struct for high latency PN replay protection
by Yang Yingliang 15 Oct '21
15 Oct '21
From: Wen Gong <wgong(a)codeaurora.org>
mainline inclusion
from mainline-v5.3-rc1
commit e1bddde9737ac4687ca6e2fe6c95f67a9bec353b
category: bugfix
bugzilla: 181870
CVE: CVE-2020-26145
-------------------------------------------------
Add the struct for PN replay protection and fragment packet
handler.
Also fix the bitmask of HTT_RX_DESC_HL_INFO_MCAST_BCAST to match what is
currently used by the SDIO firmware. The defines are not used yet, so it is
safe to modify them. Remove the conflicting HTT_RX_DESC_HL_INFO_FRAGMENT
define, as it is not used in ath10k either.
Tested on QCA6174 SDIO with firmware WLAN.RMH.4.4.1-00007-QCARMSWP-1.
Signed-off-by: Wen Gong <wgong(a)codeaurora.org>
Signed-off-by: Kalle Valo <kvalo(a)codeaurora.org>
conflict:
drivers/net/wireless/ath/ath10k/htt.h
Signed-off-by: Wang Hai <wanghai38(a)huawei.com>
Reviewed-by: Yue Haibing <yuehaibing(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/wireless/ath/ath10k/core.h | 8 ++++++
drivers/net/wireless/ath/ath10k/htt.h | 40 ++++++++++++++++++++++++++
2 files changed, 48 insertions(+)
diff --git a/drivers/net/wireless/ath/ath10k/core.h b/drivers/net/wireless/ath/ath10k/core.h
index 5c9fc4070fd24..3cd49d29ac23f 100644
--- a/drivers/net/wireless/ath/ath10k/core.h
+++ b/drivers/net/wireless/ath/ath10k/core.h
@@ -414,6 +414,14 @@ struct ath10k_peer {
/* protected by ar->data_lock */
struct ieee80211_key_conf *keys[WMI_MAX_KEY_INDEX + 1];
+ union htt_rx_pn_t tids_last_pn[ATH10K_TXRX_NUM_EXT_TIDS];
+ bool tids_last_pn_valid[ATH10K_TXRX_NUM_EXT_TIDS];
+ union htt_rx_pn_t frag_tids_last_pn[ATH10K_TXRX_NUM_EXT_TIDS];
+ u32 frag_tids_seq[ATH10K_TXRX_NUM_EXT_TIDS];
+ struct {
+ enum htt_security_types sec_type;
+ int pn_len;
+ } rx_pn[ATH10K_HTT_TXRX_PEER_SECURITY_MAX];
};
struct ath10k_txq {
diff --git a/drivers/net/wireless/ath/ath10k/htt.h b/drivers/net/wireless/ath/ath10k/htt.h
index 5d3ff80f3a1f9..c1ff938d53417 100644
--- a/drivers/net/wireless/ath/ath10k/htt.h
+++ b/drivers/net/wireless/ath/ath10k/htt.h
@@ -719,6 +719,20 @@ struct htt_rx_indication {
struct htt_rx_indication_mpdu_range mpdu_ranges[0];
} __packed;
+struct htt_hl_rx_desc {
+ __le32 info;
+ __le32 pn_31_0;
+ union {
+ struct {
+ __le16 pn_47_32;
+ __le16 pn_63_48;
+ } pn16;
+ __le32 pn_63_32;
+ } u0;
+ __le32 pn_95_64;
+ __le32 pn_127_96;
+} __packed;
+
static inline struct htt_rx_indication_mpdu_range *
htt_rx_ind_get_mpdu_ranges(struct htt_rx_indication *rx_ind)
{
@@ -764,6 +778,21 @@ struct htt_rx_peer_unmap {
__le16 peer_id;
} __packed;
+enum htt_txrx_sec_cast_type {
+ HTT_TXRX_SEC_MCAST = 0,
+ HTT_TXRX_SEC_UCAST
+};
+
+enum htt_rx_pn_check_type {
+ HTT_RX_NON_PN_CHECK = 0,
+ HTT_RX_PN_CHECK
+};
+
+enum htt_rx_tkip_demic_type {
+ HTT_RX_NON_TKIP_MIC = 0,
+ HTT_RX_TKIP_MIC
+};
+
enum htt_security_types {
HTT_SECURITY_NONE,
HTT_SECURITY_WEP128,
@@ -777,6 +806,9 @@ enum htt_security_types {
HTT_NUM_SECURITY_TYPES /* keep this last! */
};
+#define ATH10K_HTT_TXRX_PEER_SECURITY_MAX 2
+#define ATH10K_TXRX_NUM_EXT_TIDS 19
+
enum htt_security_flags {
#define HTT_SECURITY_TYPE_MASK 0x7F
#define HTT_SECURITY_TYPE_LSB 0
@@ -887,6 +919,11 @@ struct htt_rx_fragment_indication {
u8 fw_msdu_rx_desc[0];
} __packed;
+#define ATH10K_IEEE80211_EXTIV BIT(5)
+#define ATH10K_IEEE80211_TKIP_MICLEN 8 /* trailing MIC */
+
+#define HTT_RX_FRAG_IND_INFO0_HEADER_LEN 16
+
#define HTT_RX_FRAG_IND_INFO0_EXT_TID_MASK 0x1F
#define HTT_RX_FRAG_IND_INFO0_EXT_TID_LSB 0
#define HTT_RX_FRAG_IND_INFO0_FLUSH_VALID_MASK 0x20
@@ -1994,6 +2031,9 @@ struct htt_rx_desc {
u8 msdu_payload[0];
};
+#define HTT_RX_DESC_HL_INFO_MCAST_BCAST_MASK 0x00010000
+#define HTT_RX_DESC_HL_INFO_MCAST_BCAST_LSB 16
+
#define HTT_RX_DESC_ALIGN 8
#define HTT_MAC_ADDR_LEN 6
--
2.25.1
1
3
[PATCH openEuler-1.0-LTS 1/2] cache: Workaround HiSilicon Taishan DC CVAU
by Yang Yingliang 15 Oct '21
15 Oct '21
From: Weilong Chen <chenweilong(a)huawei.com>
ascend inclusion
category: feature
bugzilla: 46922
CVE: NA
-------------------------------------
Taishan's L1/L2 caches are inclusive and the data is kept consistent, so a
change in L1 does not require a DC operation to flush the cache line from L1
to L2. It is therefore safe not to clean the data cache by address to the
point of unification. Without the IDC feature, the kernel needs to flush the
icache as well as the dcache, which causes performance degradation.
The flaw refers to V110/V200 variant 1.
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Signed-off-by: Weilong Chen <chenweilong(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
Documentation/arm64/silicon-errata.txt | 1 +
arch/arm64/Kconfig | 9 ++++++++
arch/arm64/include/asm/cpucaps.h | 3 ++-
arch/arm64/include/asm/cputype.h | 2 ++
arch/arm64/kernel/cpu_errata.c | 32 ++++++++++++++++++++++++++
5 files changed, 46 insertions(+), 1 deletion(-)
diff --git a/Documentation/arm64/silicon-errata.txt b/Documentation/arm64/silicon-errata.txt
index eeb3fc9d777b8..553e6aff38625 100644
--- a/Documentation/arm64/silicon-errata.txt
+++ b/Documentation/arm64/silicon-errata.txt
@@ -75,6 +75,7 @@ stable kernels.
| Hisilicon | Hip0{5,6,7} | #161010101 | HISILICON_ERRATUM_161010101 |
| Hisilicon | Hip0{6,7} | #161010701 | N/A |
| Hisilicon | Hip07 | #161600802 | HISILICON_ERRATUM_161600802 |
+| Hisilicon | TSV{110,200} | #1980005 | HISILICON_ERRATUM_1980005 |
| | | | |
| Qualcomm Tech. | Kryo/Falkor v1 | E1003 | QCOM_FALKOR_ERRATUM_1003 |
| Qualcomm Tech. | Falkor v1 | E1009 | QCOM_FALKOR_ERRATUM_1009 |
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 9dfbd052bffc8..542c461e3d6bf 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -619,6 +619,15 @@ config HISILICON_ERRATUM_161600802
If unsure, say Y.
+config HISILICON_ERRATUM_1980005
+ bool "Hisilicon erratum IDC support"
+ default n
+ help
+ The HiSilicon TSV100/200 SoC support idc but report wrong value to
+ kernel.
+
+ If unsure, say N.
+
config QCOM_FALKOR_ERRATUM_E1041
bool "Falkor E1041: Speculative instruction fetches might cause errant memory access"
default y
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index a9090f204a085..d6c863d2cf984 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -56,7 +56,8 @@
#define ARM64_WORKAROUND_1463225 35
#define ARM64_HAS_CRC32 36
#define ARM64_SSBS 37
+#define ARM64_WORKAROUND_HISILICON_1980005 38
-#define ARM64_NCAPS 38
+#define ARM64_NCAPS 39
#endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index 71e77e3900107..23298b0aedaf7 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -155,6 +155,8 @@ struct midr_range {
.rv_max = MIDR_CPU_VAR_REV(v_max, r_max), \
}
+#define MIDR_REV_RANGE(m, v, r_min, r_max) MIDR_RANGE(m, v, r_min, v, r_max)
+#define MIDR_REV(m, v, r) MIDR_RANGE(m, v, r, v, r)
#define MIDR_ALL_VERSIONS(m) MIDR_RANGE(m, 0, 0, 0xf, 0xf)
static inline bool midr_is_cpu_model_range(u32 midr, u32 model, u32 rv_min,
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 3c556ff2f33e1..8dbe94b4ec81e 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -71,6 +71,29 @@ is_kryo_midr(const struct arm64_cpu_capabilities *entry, int scope)
return model == entry->midr_range.model;
}
+#ifdef CONFIG_HISILICON_ERRATUM_1980005
+static bool
+hisilicon_1980005_match(const struct arm64_cpu_capabilities *entry,
+ int scope)
+{
+ static const struct midr_range idc_support_list[] = {
+ MIDR_ALL_VERSIONS(MIDR_HISI_TSV110),
+ MIDR_REV(MIDR_HISI_TSV200, 1, 0),
+ { /* sentinel */ }
+ };
+
+ return is_midr_in_range_list(read_cpuid_id(), idc_support_list);
+}
+
+static void
+hisilicon_1980005_enable(const struct arm64_cpu_capabilities *__unused)
+{
+ cpus_set_cap(ARM64_HAS_CACHE_IDC);
+ arm64_ftr_reg_ctrel0.sys_val |= BIT(CTR_IDC_SHIFT);
+ sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCT, 0);
+}
+#endif
+
static bool
has_mismatched_cache_type(const struct arm64_cpu_capabilities *entry,
int scope)
@@ -848,6 +871,15 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
.cpu_enable = cpu_enable_trap_ctr_access,
},
+#ifdef CONFIG_HISILICON_ERRATUM_1980005
+ {
+ .desc = "Taishan IDC coherence workaround",
+ .capability = ARM64_WORKAROUND_HISILICON_1980005,
+ .matches = hisilicon_1980005_match,
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .cpu_enable = hisilicon_1980005_enable,
+ },
+#endif
#ifdef CONFIG_QCOM_FALKOR_ERRATUM_1003
{
.desc = "Qualcomm Technologies Falkor erratum 1003",
--
2.25.1
1
1
[Meeting Notice] openEuler kernel technical sharing session #13 & biweekly regular meeting, Time: 2021-10-15 14:00-16:30
by Meeting Book 15 Oct '21
15 Oct '21
1
Backport perf bugfix patches from the mailing list.
Li Huafei (2):
perf env: Normalize aarch64.* and arm64.* to arm64 in normalize_arch()
perf annotate: Add error log in symbol__annotate()
tools/perf/util/annotate.c | 4 +++-
tools/perf/util/env.c | 2 +-
2 files changed, 4 insertions(+), 2 deletions(-)
--
2.20.1
1
2
Backport LTS 5.10.54 from upstream.
Adrian Hunter (1):
driver core: Prevent warning when removing a device link from
unregistered consumer
Alain Volmat (1):
spi: stm32: fixes pm_runtime calls in probe/remove
Alan Young (1):
ALSA: pcm: Call substream ack() method upon compat mmap commit
Aleksandr Loktionov (1):
igb: Check if num of q_vectors is smaller than max before array access
Aleksandr Nogikh (1):
net: add kcov handle to skb extensions
Alexander Egorenkov (1):
s390/boot: fix use of expolines in the DMA code
Alexander Tsoy (1):
ALSA: usb-audio: Add registration quirk for JBL Quantum headsets
Alexandru Tachici (1):
spi: spi-bcm2835: Fix deadlock
Amelie Delaunay (1):
usb: typec: stusb160x: register role switch before interrupt
registration
Anand Jain (1):
btrfs: check for missing device in btrfs_trim_fs
Axel Lin (2):
regulator: hi6421: Use correct variable type for regmap api val
argument
regulator: hi6421: Fix getting wrong drvdata
Bhaumik Bhatt (1):
bus: mhi: core: Validate channel ID when processing command
completions
Casey Chen (1):
nvme-pci: do not call nvme_dev_remove_admin from nvme_remove
Charles Baylis (1):
drm: Return -ENOTTY for non-drm ioctls
Charles Keepax (1):
ASoC: wm_adsp: Correct wm_coeff_tlv_get handling
Christoph Hellwig (1):
nvme: set the PRACT bit when using Write Zeroes with T10 PI
Christophe JAILLET (7):
ixgbe: Fix an error handling path in 'ixgbe_probe()'
igc: Fix an error handling path in 'igc_probe()'
igb: Fix an error handling path in 'igb_probe()'
fm10k: Fix an error handling path in 'fm10k_probe()'
e1000e: Fix an error handling path in 'e1000_probe()'
iavf: Fix an error handling path in 'iavf_probe()'
gve: Fix an error handling path in 'gve_probe()'
Clark Wang (1):
spi: imx: add a check for speed_hz before calculating the clock
Colin Ian King (2):
liquidio: Fix unintentional sign extension issue on left shift of u16
s390/bpf: Perform r1 range checking before accessing jit->seen_reg[r1]
Colin Xu (1):
drm/i915/gvt: Clear d3_entered on elsp cmd submission.
Daniel Borkmann (1):
bpf: Fix tail_call_reachable rejection for interpreter when jit failed
David Howells (1):
afs: Fix tracepoint string placement with built-in AFS
David Jeffery (1):
usb: ehci: Prevent missed ehci interrupts with edge-triggered MSI
Dmitry Bogdanov (1):
scsi: target: Fix protect handling in WRITE SAME(32)
Dongliang Mu (1):
usb: hso: fix error handling code of hso_create_net_device
Eric Dumazet (1):
net/tcp_fastopen: fix data races around tfo_active_disable_stamp
Evan Quan (1):
PCI: Mark AMD Navi14 GPU ATS as broken
Florian Fainelli (1):
skbuff: Fix build with SKB extensions disabled
Frederic Weisbecker (1):
posix-cpu-timers: Fix rearm racing against process tick
Greg Kroah-Hartman (1):
nds32: fix up stack guard gap
Greg Thelen (1):
usb: xhci: avoid renesas_usb_fw.mem when it's unusable
Gustavo A. R. Silva (1):
media: ngene: Fix out-of-bounds bug in ngene_command_config_free_buf()
Hangbin Liu (2):
selftests: icmp_redirect: remove from checking for IPv6 route get
selftests: icmp_redirect: IPv6 PMTU info should be cleared after
redirect
Haoran Luo (1):
tracing: Fix bug in rb_per_cpu_empty() that might cause deadloop.
Hui Wang (1):
ALSA: hda/realtek: Fix pop noise and 2 Front Mic issues on a machine
Ian Ray (1):
USB: serial: cp210x: fix comments for GE CS1000
Ilya Dryomov (2):
rbd: don't hold lock_rwsem while running_list is being drained
rbd: always kick acquire on "acquired" and "released" notifications
Jakub Sitnicki (1):
bpf, sockmap, udp: sk_prot needs inuse_idx set for proc stats
Jedrzej Jagielski (1):
igb: Fix position of assignment to *ring
Jianguo Wu (1):
mptcp: fix warning in __skb_flow_dissect() when do syn cookie for
subflow join
John Fastabend (2):
bpf, sockmap: Fix potential memory leak on unlikely error case
bpf, sockmap, tcp: sk_prot needs inuse_idx set for proc stats
John Keeping (1):
USB: serial: cp210x: add ID for CEL EM3588 USB ZigBee stick
Julian Sikorski (1):
USB: usb-storage: Add LaCie Rugged USB3-FW to IGNORE_UAS
Jérôme Glisse (1):
misc: eeprom: at24: Always append device id even if label property is
set.
Kalesh AP (1):
bnxt_en: don't disable an already disabled PCI device
Like Xu (1):
KVM: x86/pmu: Clear anythread deprecated bit when 0xa leaf is
unsupported on the SVM
Likun Gao (1):
drm/amdgpu: update golden setting for sienna_cichlid
Luis Henriques (1):
ceph: don't WARN if we're still opening a session to an MDS
Mahesh Bandewar (1):
bonding: fix build issue
Marc Zyngier (1):
firmware/efi: Tell memblock about EFI iomem reservations
Marcelo Henrique Cerri (1):
proc: Avoid mixing integer types in mem_rw()
Marco De Marco (1):
USB: serial: option: add support for u-blox LARA-R6 family
Marek Behún (2):
net: dsa: mv88e6xxx: enable SerDes RX stats for Topaz
net: dsa: mv88e6xxx: enable SerDes PCS register dump via ethtool -d on
Topaz
Marek Vasut (1):
spi: cadence: Correct initialisation of runtime PM again
Mark Tomlinson (1):
usb: max-3421: Prevent corruption of freed memory
Markus Boehme (1):
ixgbe: Fix packet corruption due to missing DMA sync
Mathias Nyman (4):
xhci: Fix lost USB 2 remote wake
usb: hub: Disable USB 3 device initiated lpm if exit latency is too
high
usb: hub: Fix link power management max exit latency (MEL)
calculations
xhci: add xhci_get_virt_ep() helper
Maxim Schwalm (1):
ASoC: rt5631: Fix regcache sync errors on resume
Maxime Ripard (1):
drm/panel: raspberrypi-touchscreen: Prevent double-free
Michael Chan (3):
bnxt_en: Refresh RoCE capabilities in bnxt_ulp_probe()
bnxt_en: Add missing check for BNXT_STATE_ABORT_ERR in
bnxt_fw_rset_task()
bnxt_en: Validate vlan protocol ID on RX packets
Michal Suchanek (1):
efi/tpm: Differentiate missing and invalid final event log table.
Mike Christie (1):
scsi: iscsi: Fix iface sysfs attr detection
Mike Kravetz (1):
hugetlbfs: fix mount mode command line processing
Mike Rapoport (1):
memblock: make for_each_mem_range() traverse MEMBLOCK_HOTPLUG regions
Minas Harutyunyan (2):
usb: dwc2: gadget: Fix GOUTNAK flow for Slave mode.
usb: dwc2: gadget: Fix sending zero length packet in DDMA mode.
Moritz Fischer (1):
Revert "usb: renesas-xhci: Fix handling of unknown ROM state"
Nguyen Dinh Phi (1):
netrom: Decrease sock refcount when sock timers expire
Nicholas Piggin (4):
KVM: PPC: Book3S: Fix CONFIG_TRANSACTIONAL_MEM=n crash
KVM: PPC: Fix kvm_arch_vcpu_ioctl vcpu_load leak
KVM: PPC: Book3S: Fix H_RTAS rets buffer overflow
KVM: PPC: Book3S HV Nested: Sanitise H_ENTER_NESTED TM state
Nicolas Dichtel (1):
ipv6: fix 'disable_policy' for fwd packets
Nicolas Saenz Julienne (1):
timers: Fix get_next_timer_interrupt() with no timers pending
Paolo Abeni (1):
ipv6: fix another slab-out-of-bounds in fib6_nh_flush_exceptions
Paul Blakey (1):
skbuff: Release nfct refcount on napi stolen or re-used skbs
Pavel Begunkov (2):
io_uring: explicitly count entries for poll reqs
io_uring: remove double poll entry on arm failure
Pavel Skripkin (1):
net: sched: fix memory leak in tcindex_partial_destroy_work
Peilin Ye (1):
net/sched: act_skbmod: Skip non-Ethernet packets
Peter Collingbourne (2):
selftest: use mmap instead of posix_memalign to allocate memory
userfaultfd: do not untag user pointers
Peter Hess (1):
spi: mediatek: fix fifo rx mode
Pierre-Louis Bossart (1):
ALSA: hda: intel-dsp-cfg: add missing ElkhartLake PCI ID
Randy Dunlap (1):
net: hisilicon: rename CACHE_LINE_MASK to avoid redefinition
Riccardo Mancini (13):
perf inject: Fix dso->nsinfo refcounting
perf map: Fix dso->nsinfo refcounting
perf probe: Fix dso->nsinfo refcounting
perf env: Fix sibling_dies memory leak
perf test session_topology: Delete session->evlist
perf test event_update: Fix memory leak of evlist
perf dso: Fix memory leak in dso__new_map()
perf test maps__merge_in: Fix memory leak of maps
perf env: Fix memory leak of cpu_pmu_caps
perf report: Free generated help strings for sort option
perf script: Fix memory 'threads' and 'cpus' leaks on exit
perf lzma: Close lzma stream on exit
perf inject: Close inject.output on exit
Robert Richter (2):
ACPI: Kconfig: Fix table override from built-in initrd
Documentation: Fix intiramfs script name
Roman Skakun (1):
dma-mapping: handle vmalloc addresses in dma_common_{mmap,get_sgtable}
Ronnie Sahlberg (2):
cifs: only write 64kb at a time when fallocating a small region of a
file
cifs: fix fallocate when trying to allocate a hole.
Sayanta Pattanayak (1):
r8169: Avoid duplicate sysfs entry creation error
Shahjada Abul Husain (1):
cxgb4: fix IRQ free race during driver unload
Somnath Kotur (1):
bnxt_en: Check abort error state in bnxt_half_open_nic()
Stephen Boyd (1):
mmc: core: Don't allocate IDA for OF aliases
Steven Rostedt (VMware) (3):
tracepoints: Update static_call before tp_funcs when adding a
tracepoint
tracing/histogram: Rename "cpu" to "common_cpu"
tracing: Synthetic event field_pos is an index not a boolean
Taehee Yoo (8):
bonding: fix suspicious RCU usage in bond_ipsec_add_sa()
bonding: fix null dereference in bond_ipsec_add_sa()
ixgbevf: use xso.real_dev instead of xso.dev in callback functions of
struct xfrmdev_ops
bonding: fix suspicious RCU usage in bond_ipsec_del_sa()
bonding: disallow setting nested bonding + ipsec offload
bonding: Add struct bond_ipesc to manage SA
bonding: fix suspicious RCU usage in bond_ipsec_offload_ok()
bonding: fix incorrect return value of bond_ipsec_offload_ok()
Takashi Iwai (4):
ALSA: usb-audio: Add missing proc text entry for BESPOKEN type
ALSA: sb: Fix potential ABBA deadlock in CSP driver
ALSA: hdmi: Expose all pins on MSI MS-7C94 board
ALSA: pcm: Fix mmap capability check
Tobias Klauser (1):
bpftool: Check malloc return value in mount_bpffs_for_pin
Tom Rix (1):
igc: change default return of igc_read_phy_reg()
Uwe Kleine-König (1):
pwm: sprd: Ensure configuring period and duty_cycle isn't wrongly
skipped
Vasily Gorbik (1):
s390/ftrace: fix ftrace_update_ftrace_func implementation
Vincent Palatin (1):
Revert "USB: quirks: ignore remote wake-up on Fibocom L850-GL LTE
modem"
Vinicius Costa Gomes (2):
igc: Fix use-after-free error during reset
igb: Fix use-after-free error during reset
Vladimir Oltean (1):
net: dsa: sja1105: make VID 4095 a bridge VLAN too
Wei Wang (1):
tcp: disable TFO blackhole logic by default
Xin Long (2):
sctp: trim optlen when it's a huge value in sctp_setsockopt
sctp: update active_key for asoc when old key is being replaced
Xuan Zhuo (2):
bpf, test: fix NULL pointer dereference on invalid
expected_attach_type
xdp, net: Fix use-after-free in bpf_xdp_link_release
Yajun Deng (2):
net: decnet: Fix sleeping inside in af_decnet
net: sched: cls_api: Fix the the wrong parameter
Yang Jihong (1):
perf sched: Fix record failure when CONFIG_SCHEDSTATS is not set
Yoshihiro Shimoda (1):
usb: renesas_usbhs: Fix superfluous irqs happen after usb_pkt_pop()
YueHaibing (1):
stmmac: platform: Fix signedness bug in stmmac_probe_config_dt()
Zhang Qilong (1):
usb: gadget: Fix Unbalanced pm_runtime_enable in tegra_xudc_probe
Zhihao Cheng (1):
nvme-pci: don't WARN_ON in nvme_reset_work if ctrl.state is not
RESETTING
Ziyang Xuan (1):
net: fix uninit-value in caif_seqpkt_sendmsg
Íñigo Huguet (1):
sfc: ensure correct number of XDP queues
Documentation/arm64/tagged-address-abi.rst | 26 ++-
.../early_userspace_support.rst | 8 +-
.../filesystems/ramfs-rootfs-initramfs.rst | 2 +-
Documentation/networking/ip-sysctl.rst | 2 +-
Documentation/trace/histogram.rst | 2 +-
arch/nds32/mm/mmap.c | 2 +-
arch/powerpc/kvm/book3s_hv.c | 2 +
arch/powerpc/kvm/book3s_hv_nested.c | 20 ++
arch/powerpc/kvm/book3s_rtas.c | 25 ++-
arch/powerpc/kvm/powerpc.c | 4 +-
arch/s390/boot/text_dma.S | 19 +-
arch/s390/include/asm/ftrace.h | 1 +
arch/s390/kernel/ftrace.c | 2 +
arch/s390/kernel/mcount.S | 4 +-
arch/s390/net/bpf_jit_comp.c | 2 +-
arch/x86/kvm/cpuid.c | 3 +-
drivers/acpi/Kconfig | 2 +-
drivers/base/core.c | 6 +-
drivers/block/rbd.c | 32 ++-
drivers/bus/mhi/core/main.c | 17 +-
drivers/firmware/efi/efi.c | 13 +-
drivers/firmware/efi/tpm.c | 8 +-
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 1 +
drivers/gpu/drm/drm_ioctl.c | 3 +
drivers/gpu/drm/i915/gvt/handlers.c | 15 ++
.../drm/panel/panel-raspberrypi-touchscreen.c | 1 -
drivers/media/pci/ngene/ngene-core.c | 2 +-
drivers/media/pci/ngene/ngene.h | 14 +-
drivers/misc/eeprom/at24.c | 17 +-
drivers/mmc/core/host.c | 20 +-
drivers/net/bonding/bond_main.c | 183 +++++++++++++++---
drivers/net/dsa/mv88e6xxx/chip.c | 10 +
drivers/net/dsa/mv88e6xxx/serdes.c | 6 +-
drivers/net/dsa/sja1105/sja1105_main.c | 6 +
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 34 +++-
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c | 9 +-
.../cavium/liquidio/cn23xx_pf_device.c | 2 +-
.../net/ethernet/chelsio/cxgb4/cxgb4_main.c | 18 +-
.../net/ethernet/chelsio/cxgb4/cxgb4_uld.c | 3 +
drivers/net/ethernet/google/gve/gve_main.c | 5 +-
drivers/net/ethernet/hisilicon/hip04_eth.c | 6 +-
drivers/net/ethernet/intel/e1000e/netdev.c | 1 +
drivers/net/ethernet/intel/fm10k/fm10k_pci.c | 1 +
drivers/net/ethernet/intel/iavf/iavf_main.c | 1 +
drivers/net/ethernet/intel/igb/igb_main.c | 15 +-
drivers/net/ethernet/intel/igc/igc.h | 2 +-
drivers/net/ethernet/intel/igc/igc_main.c | 3 +
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 4 +-
drivers/net/ethernet/intel/ixgbevf/ipsec.c | 20 +-
drivers/net/ethernet/realtek/r8169_main.c | 3 +-
drivers/net/ethernet/sfc/efx_channels.c | 13 +-
.../ethernet/stmicro/stmmac/stmmac_platform.c | 8 +-
drivers/net/usb/hso.c | 33 +++-
drivers/nvme/host/core.c | 5 +-
drivers/nvme/host/pci.c | 5 +-
drivers/pci/quirks.c | 4 +-
drivers/pwm/pwm-sprd.c | 11 +-
drivers/regulator/hi6421-regulator.c | 30 +--
drivers/scsi/scsi_transport_iscsi.c | 90 ++++-----
drivers/spi/spi-bcm2835.c | 12 +-
drivers/spi/spi-cadence.c | 14 +-
drivers/spi/spi-imx.c | 37 ++--
drivers/spi/spi-mt65xx.c | 16 +-
drivers/spi/spi-stm32.c | 9 +-
drivers/target/target_core_sbc.c | 35 ++--
drivers/usb/core/hub.c | 120 ++++++++----
drivers/usb/core/quirks.c | 4 -
drivers/usb/dwc2/gadget.c | 31 ++-
drivers/usb/gadget/udc/tegra-xudc.c | 1 +
drivers/usb/host/ehci-hcd.c | 18 +-
drivers/usb/host/max3421-hcd.c | 44 ++---
drivers/usb/host/xhci-hub.c | 3 +-
drivers/usb/host/xhci-pci-renesas.c | 16 +-
drivers/usb/host/xhci-pci.c | 7 +
drivers/usb/host/xhci-ring.c | 58 ++++--
drivers/usb/host/xhci.h | 3 +-
drivers/usb/renesas_usbhs/fifo.c | 7 +
drivers/usb/serial/cp210x.c | 5 +-
drivers/usb/serial/option.c | 3 +
drivers/usb/storage/unusual_uas.h | 7 +
drivers/usb/typec/stusb160x.c | 11 +-
fs/afs/cmservice.c | 25 +--
fs/btrfs/extent-tree.c | 3 +
fs/ceph/mds_client.c | 2 +-
fs/cifs/smb2ops.c | 49 +++--
fs/hugetlbfs/inode.c | 2 +-
fs/io_uring.c | 18 +-
fs/proc/base.c | 2 +-
fs/userfaultfd.c | 24 ++-
include/drm/drm_ioctl.h | 1 +
include/linux/memblock.h | 4 +-
include/linux/skbuff.h | 33 ++++
include/net/bonding.h | 9 +-
include/trace/events/afs.h | 67 ++++++-
kernel/bpf/verifier.c | 2 +
kernel/dma/ops_helpers.c | 12 +-
kernel/time/posix-cpu-timers.c | 10 +-
kernel/time/timer.c | 8 +-
kernel/trace/ring_buffer.c | 28 ++-
kernel/trace/trace.c | 4 +
kernel/trace/trace_events_hist.c | 22 ++-
kernel/trace/trace_synth.h | 2 +-
kernel/tracepoint.c | 2 +-
lib/Kconfig.debug | 1 +
mm/memblock.c | 3 +-
net/bpf/test_run.c | 3 +
net/caif/caif_socket.c | 3 +-
net/core/dev.c | 28 ++-
net/core/skbuff.c | 12 ++
net/core/skmsg.c | 16 +-
net/decnet/af_decnet.c | 27 ++-
net/ipv4/tcp_bpf.c | 2 +-
net/ipv4/tcp_fastopen.c | 28 ++-
net/ipv4/tcp_ipv4.c | 2 +-
net/ipv4/udp_bpf.c | 2 +-
net/ipv6/ip6_output.c | 4 +-
net/ipv6/route.c | 2 +-
net/mptcp/syncookies.c | 16 +-
net/netrom/nr_timer.c | 20 +-
net/sched/act_skbmod.c | 12 +-
net/sched/cls_api.c | 2 +-
net/sched/cls_tcindex.c | 5 +-
net/sctp/auth.c | 2 +
net/sctp/socket.c | 4 +
sound/core/pcm_native.c | 25 ++-
sound/hda/intel-dsp-config.c | 4 +
sound/isa/sb/sb16_csp.c | 4 +
sound/pci/hda/patch_hdmi.c | 1 +
sound/pci/hda/patch_realtek.c | 1 +
sound/soc/codecs/rt5631.c | 2 +
sound/soc/codecs/wm_adsp.c | 2 +-
sound/usb/mixer.c | 10 +-
sound/usb/quirks.c | 3 +
tools/bpf/bpftool/common.c | 5 +
tools/perf/builtin-inject.c | 13 +-
tools/perf/builtin-report.c | 33 ++--
tools/perf/builtin-sched.c | 33 +++-
tools/perf/builtin-script.c | 7 +
tools/perf/tests/event_update.c | 2 +-
tools/perf/tests/maps.c | 2 +
tools/perf/tests/topology.c | 1 +
tools/perf/util/dso.c | 4 +-
tools/perf/util/env.c | 2 +
tools/perf/util/lzma.c | 8 +-
tools/perf/util/map.c | 2 +
tools/perf/util/probe-event.c | 4 +-
tools/perf/util/sort.c | 2 +-
tools/perf/util/sort.h | 2 +-
tools/testing/selftests/net/icmp_redirect.sh | 5 +-
tools/testing/selftests/vm/userfaultfd.c | 6 +-
150 files changed, 1381 insertions(+), 592 deletions(-)
--
2.20.1
1
161
sched: load tracking optimization.
Yu Jiahua (3):
sched: Introcude config option SCHED_OPTIMIZE_LOAD_TRACKING
sched: Add switch for update_blocked_averages
sched: Add frequency control for load update in scheduler_tick
include/linux/sched/sysctl.h | 13 ++++
init/Kconfig | 8 +++
kernel/sched/fair.c | 126 +++++++++++++++++++++++++++++++++++
kernel/sysctl.c | 27 ++++++++
4 files changed, 174 insertions(+)
--
2.20.1
1
3
Backport LTS 5.10.53 from upstream.
Alexander Ovechkin (1):
net: send SYNACK packet with accepted fwmark
Alexandre Torgue (6):
ARM: dts: stm32: fix gpio-keys node on STM32 MCU boards
ARM: dts: stm32: fix RCC node name on stm32f429 MCU
ARM: dts: stm32: fix timer nodes on STM32 MCU to prevent warnings
ARM: dts: stm32: fix i2c node name on stm32f746 to prevent warnings
ARM: dts: stm32: move stmmac axi config in ethernet node on stm32mp15
ARM: dts: stm32: fix stpmic node for stm32mp1 boards
Andrew Jeffery (1):
ARM: dts: tacoma: Add phase corrections for eMMC
Benjamin Gaignard (1):
ARM: dts: rockchip: Fix IOMMU nodes properties on rk322x
Bixuan Cui (1):
rtc: mxc_v2: add missing MODULE_DEVICE_TABLE
Colin Ian King (1):
scsi: aic7xxx: Fix unintentional sign extension issue on left shift of
u8
Corentin Labbe (2):
ARM: dts: gemini: rename mdio to the right name
ARM: dts: gemini: add device_type on pci
Daniel Rosenberg (1):
f2fs: Show casefolding support only when supported
Dmitry Osipenko (4):
ARM: tegra: wm8903: Fix polarity of headphones-detection GPIO in
device-trees
ARM: tegra: nexus7: Correct 3v3 regulator GPIO of PM269 variant
memory: tegra: Fix compilation warnings on 64bit platforms
thermal/core/thermal_of: Stop zone device before unregistering it
Doug Berger (1):
net: bcmgenet: ensure EXT_ENERGY_DET_MASK is clear
Elaine Zhang (6):
ARM: dts: rockchip: Fix power-controller node names for rk3066a
ARM: dts: rockchip: Fix power-controller node names for rk3188
ARM: dts: rockchip: Fix power-controller node names for rk3288
arm64: dts: rockchip: Fix power-controller node names for px30
arm64: dts: rockchip: Fix power-controller node names for rk3328
arm64: dts: rockchip: Fix power-controller node names for rk3399
Eric Dumazet (3):
tcp: annotate data races around tp->mtu_info
ipv6: tcp: drop silly ICMPv6 packet too big messages
udp: annotate data races around unix_sk(sk)->gso_size
Etienne Carriere (1):
firmware: arm_scmi: Add SMCCC discovery dependency in Kconfig
Ezequiel Garcia (2):
ARM: dts: rockchip: Fix thermal sensor cells o rk322x
ARM: dts: rockchip: Fix the timer clocks order
Florian Fainelli (1):
net: bcmgenet: Ensure all TX/RX queues DMAs are disabled
Geert Uytterhoeven (1):
thermal/drivers/rcar_gen3_thermal: Do not shadow rcar_gen3_ths_tj_1
Greg Kroah-Hartman (2):
Revert "swap: fix do_swap_page() race with swapoff"
Revert "mm/shmem: fix shmem_swapin() race with swapoff"
Grygorii Strashko (4):
ARM: dts: am57xx-cl-som-am57x: fix ti,no-reset-on-init flag for gpios
ARM: dts: am437x-gp-evm: fix ti,no-reset-on-init flag for gpios
ARM: dts: am335x: fix ti,no-reset-on-init flag for gpios
arm64: dts: ti: k3-am654x/j721e/j7200-common-proc-board: Fix
MCU_RGMII1_TXC direction
Grzegorz Szymaszek (2):
ARM: dts: stm32: fix stm32mp157c-odyssey card detect pin
ARM: dts: stm32: fix the Odyssey SoM eMMC VQMMC supply
Gu Shengxian (1):
bpftool: Properly close va_list 'ap' by va_end() on error
Hangbin Liu (1):
net: ip_tunnel: fix mtu calculation for ETHER tunnel devices
Heiko Carstens (1):
s390: introduce proper type handling call_on_stack() macro
Ilya Leoshkevich (1):
s390/traps: do not test MONITOR CALL without CONFIG_BUG
Jason Ekstrand (1):
dma-buf/sync_file: Don't leak fences on merge failure
Javed Hasan (2):
scsi: libfc: Fix array index out of bound exception
scsi: qedf: Add check to synchronize abort and flush
Joel Stanley (1):
ARM: dts: aspeed: Fix AST2600 machines line names
Johan Jonker (4):
ARM: dts: rockchip: fix pinctrl sleep nodename for rk3036-kylin and
rk3288
arm64: dts: rockchip: fix pinctrl sleep nodename for rk3399.dtsi
arm64: dts: rockchip: fix regulator-gpio states array
ARM: dts: rockchip: fix supply properties in io-domains nodes
John Fastabend (1):
bpf: Track subprog poke descriptors correctly and fix use-after-free
Jonathan Neuschäfer (1):
ARM: imx: pm-imx5: Fix references to imx5_cpu_suspend_info
Kan Liang (1):
perf/x86/intel/uncore: Clean up error handling path of iio mapping
Konstantin Porotchkin (1):
arch/arm64/boot/dts/marvell: fix NAND partitioning scheme
Krzysztof Kozlowski (3):
thermal/drivers/imx_sc: Add missing of_node_put for loop iteration
thermal/drivers/sprd: Add missing of_node_put for loop iteration
rtc: max77686: Do not enforce (incorrect) interrupt trigger type
Linus Walleij (2):
ARM: dts: ux500: Fix orientation of accelerometer
drm/panel: nt35510: Do not fail if DSI read fails
Louis Peens (1):
net/sched: act_ct: remove and free nf_table callbacks
Lucas Stach (1):
arm64: dts: imx8mq: assign PCIe clocks
Marek Behún (4):
net: dsa: mv88e6xxx: enable .port_set_policy() on Topaz
net: dsa: mv88e6xxx: use correct .stats_set_histogram() on Topaz
net: dsa: mv88e6xxx: enable .rmu_disable() on Topaz
net: dsa: mv88e6xxx: enable devlink ATU hash param for Topaz
Marek Vasut (4):
ARM: dts: stm32: Remove extra size-cells on dhcom-pdk2
ARM: dts: stm32: Fix touchscreen node on dhcom-pdk2
ARM: dts: stm32: Drop unused linux,wakeup from touchscreen node on
DHCOM SoM
ARM: dts: stm32: Rename spi-flash/mx66l51235l@N to flash@N on DHCOM
SoM
Masahiro Yamada (2):
kbuild: sink stdout from cmd for silent build
kbuild: do not suppress Kconfig prompts for silent build
Matthias Maennich (1):
kbuild: mkcompile_h: consider timestamp if KBUILD_BUILD_TIMESTAMP is
set
Mian Yousaf Kaukab (1):
arm64: dts: ls208xa: remove bus-num from dspi node
Mike Rapoport (1):
mm/page_alloc: fix memory map initialization for descending nodes
Nguyen Dinh Phi (1):
tcp: fix tcp_init_transfer() to not reset icsk_ca_initialized
Odin Ugedal (1):
sched/fair: Fix CFS bandwidth hrtimer expiry type
Oleksij Rempel (1):
ARM: dts: imx6dl-riotboard: configure PHY clock and set proper EEE
value
Pali Rohár (2):
firmware: turris-mox-rwtm: add marvell,armada-3700-rwtm-firmware
compatible string
arm64: dts: marvell: armada-37xx: move firmware node to generic dtsi
file
Paolo Abeni (1):
tcp: consistently disable header prediction for mptcp
Paulo Alcantara (1):
cifs: prevent NULL deref in cifs_compose_mount_options()
Pavel Skripkin (4):
net: moxa: fix UAF in moxart_mac_probe
net: qcom/emac: fix UAF in emac_remove
net: ti: fix UAF in tlan_remove_one
net: fddi: fix UAF in fza_probe
Peter Xu (2):
mm/thp: simplify copying of huge zero page pmd when fork
mm/userfaultfd: fix uffd-wp special cases for fork()
Philipp Zabel (1):
reset: ti-syscon: fix to_ti_syscon_reset_data macro
Primoz Fiser (1):
ARM: dts: imx6: phyFLEX: Fix UART hardware flow control
Rafał Miłecki (5):
ARM: brcmstb: dts: fix NAND nodes names
ARM: Cygnus: dts: fix NAND nodes names
ARM: NSP: dts: fix NAND nodes names
ARM: dts: BCM63xx: Fix NAND nodes names
ARM: dts: Hurricane 2: Fix NAND nodes names
Ronak Doshi (1):
vmxnet3: fix cksum offload issues for tunnels with non-default udp
ports
Sanket Parmar (1):
usb: cdns3: Enable TDL_CHK only for OUT ep
Sebastian Reichel (2):
ARM: dts: ux500: Fix interrupt cells
ARM: dts: ux500: Rename gpio-controller node
Stefan Wahren (2):
ARM: dts: bcm283x: Fix up MMC node names
ARM: dts: bcm283x: Fix up GPIO LED node names
Sudeep Holla (2):
firmware: arm_scmi: Fix the build when CONFIG_MAILBOX is not selected
arm64: dts: juno: Update SCPI nodes as per the YAML schema
Sujit Kautkar (1):
arm64: dts: qcom: sc7180: Move rmtfs memory region
Suman Anna (1):
ARM: dts: OMAP2+: Replace underscores in sub-mailbox node names
Taehee Yoo (2):
net: netdevsim: use xso.real_dev instead of xso.dev in callback
functions of struct xfrmdev_ops
net: validate lwtstate->data before returning from skb_tunnel_info()
Talal Ahmad (1):
tcp: call sk_wmem_schedule before sk_mem_charge in zerocopy path
Thierry Reding (2):
soc/tegra: fuse: Fix Tegra234-only builds
firmware: tegra: bpmp: Fix Tegra234-only builds
Tony Lindgren (1):
ARM: OMAP2+: Block suspend for am3 and am4 if PM is not configured
Vadim Fedorenko (1):
net: ipv6: fix return value of ip6_skb_dst_mtu
Vasily Averin (1):
netfilter: ctnetlink: suspicious RCU usage in ctnetlink_dump_helpinfo
Vladimir Oltean (1):
net: dsa: properly check for the bridge_leave methods in
dsa_switch_bridge_leave()
Wei Li (1):
tools: bpf: Fix error in 'make -C tools/ bpf_install'
Wolfgang Bumiller (1):
net: bridge: sync fdb to new unicast-filtering ports
Yang Yingliang (1):
thermal/core: Correct function name thermal_zone_device_unregister()
wenxu (1):
net/sched: act_ct: fix err check for nf_conntrack_confirm
Makefile | 9 +-
arch/arm/boot/dts/am335x-baltos.dtsi | 4 +-
arch/arm/boot/dts/am335x-evmsk.dts | 2 +-
.../boot/dts/am335x-moxa-uc-2100-common.dtsi | 2 +-
.../boot/dts/am335x-moxa-uc-8100-common.dtsi | 2 +-
arch/arm/boot/dts/am33xx-l4.dtsi | 2 +-
arch/arm/boot/dts/am437x-gp-evm.dts | 5 +-
arch/arm/boot/dts/am437x-l4.dtsi | 2 +-
arch/arm/boot/dts/am57xx-cl-som-am57x.dts | 13 +--
arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts | 5 +-
arch/arm/boot/dts/aspeed-bmc-opp-tacoma.dts | 6 +-
arch/arm/boot/dts/bcm-cygnus.dtsi | 2 +-
arch/arm/boot/dts/bcm-hr2.dtsi | 2 +-
arch/arm/boot/dts/bcm-nsp.dtsi | 2 +-
arch/arm/boot/dts/bcm2711-rpi-4-b.dts | 4 +-
arch/arm/boot/dts/bcm2711.dtsi | 2 +-
arch/arm/boot/dts/bcm2835-rpi-a-plus.dts | 4 +-
arch/arm/boot/dts/bcm2835-rpi-a.dts | 2 +-
arch/arm/boot/dts/bcm2835-rpi-b-plus.dts | 4 +-
arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts | 2 +-
arch/arm/boot/dts/bcm2835-rpi-b.dts | 2 +-
arch/arm/boot/dts/bcm2835-rpi-cm1.dtsi | 2 +-
arch/arm/boot/dts/bcm2835-rpi-zero-w.dts | 2 +-
arch/arm/boot/dts/bcm2835-rpi-zero.dts | 2 +-
arch/arm/boot/dts/bcm2835-rpi.dtsi | 2 +-
arch/arm/boot/dts/bcm2836-rpi-2-b.dts | 4 +-
arch/arm/boot/dts/bcm2837-rpi-3-a-plus.dts | 4 +-
arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts | 4 +-
arch/arm/boot/dts/bcm2837-rpi-3-b.dts | 2 +-
arch/arm/boot/dts/bcm2837-rpi-cm3.dtsi | 2 +-
arch/arm/boot/dts/bcm283x.dtsi | 2 +-
arch/arm/boot/dts/bcm63138.dtsi | 2 +-
arch/arm/boot/dts/bcm7445-bcm97445svmb.dts | 4 +-
arch/arm/boot/dts/bcm7445.dtsi | 2 +-
arch/arm/boot/dts/bcm911360_entphn.dts | 4 +-
arch/arm/boot/dts/bcm958300k.dts | 4 +-
arch/arm/boot/dts/bcm958305k.dts | 4 +-
arch/arm/boot/dts/bcm958522er.dts | 4 +-
arch/arm/boot/dts/bcm958525er.dts | 4 +-
arch/arm/boot/dts/bcm958525xmc.dts | 4 +-
arch/arm/boot/dts/bcm958622hr.dts | 4 +-
arch/arm/boot/dts/bcm958623hr.dts | 4 +-
arch/arm/boot/dts/bcm958625hr.dts | 4 +-
arch/arm/boot/dts/bcm958625k.dts | 4 +-
arch/arm/boot/dts/bcm963138dvt.dts | 4 +-
arch/arm/boot/dts/bcm988312hr.dts | 4 +-
arch/arm/boot/dts/dm816x.dtsi | 2 +-
arch/arm/boot/dts/dra7-ipu-dsp-common.dtsi | 6 +-
arch/arm/boot/dts/dra7-l4.dtsi | 4 +-
arch/arm/boot/dts/dra72x.dtsi | 6 +-
arch/arm/boot/dts/dra74-ipu-dsp-common.dtsi | 2 +-
arch/arm/boot/dts/dra74x.dtsi | 8 +-
arch/arm/boot/dts/gemini-dlink-dns-313.dts | 2 +-
arch/arm/boot/dts/gemini-nas4220b.dts | 2 +-
arch/arm/boot/dts/gemini-rut1xx.dts | 2 +-
arch/arm/boot/dts/gemini-wbd111.dts | 2 +-
arch/arm/boot/dts/gemini-wbd222.dts | 2 +-
arch/arm/boot/dts/gemini.dtsi | 1 +
arch/arm/boot/dts/imx6dl-riotboard.dts | 2 +
arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi | 5 +-
arch/arm/boot/dts/omap4-l4.dtsi | 4 +-
arch/arm/boot/dts/omap5-l4.dtsi | 4 +-
arch/arm/boot/dts/rk3036-kylin.dts | 2 +-
arch/arm/boot/dts/rk3066a.dtsi | 6 +-
arch/arm/boot/dts/rk3188.dtsi | 14 +--
arch/arm/boot/dts/rk322x.dtsi | 12 +-
arch/arm/boot/dts/rk3288-rock2-som.dtsi | 2 +-
arch/arm/boot/dts/rk3288-vyasa.dts | 4 +-
arch/arm/boot/dts/rk3288.dtsi | 14 +--
arch/arm/boot/dts/ste-ab8500.dtsi | 28 ++---
arch/arm/boot/dts/ste-ab8505.dtsi | 24 ++--
arch/arm/boot/dts/ste-href-ab8500.dtsi | 2 +-
arch/arm/boot/dts/ste-href-tvk1281618-r3.dtsi | 3 +
arch/arm/boot/dts/ste-href.dtsi | 2 +-
arch/arm/boot/dts/ste-snowball.dts | 2 +-
arch/arm/boot/dts/stm32429i-eval.dts | 8 +-
arch/arm/boot/dts/stm32746g-eval.dts | 6 +-
arch/arm/boot/dts/stm32f429-disco.dts | 6 +-
arch/arm/boot/dts/stm32f429.dtsi | 10 +-
arch/arm/boot/dts/stm32f469-disco.dts | 6 +-
arch/arm/boot/dts/stm32f746.dtsi | 12 +-
arch/arm/boot/dts/stm32f769-disco.dts | 6 +-
arch/arm/boot/dts/stm32h743.dtsi | 4 -
arch/arm/boot/dts/stm32mp151.dtsi | 12 +-
arch/arm/boot/dts/stm32mp157a-stinger96.dtsi | 7 +-
.../arm/boot/dts/stm32mp157c-odyssey-som.dtsi | 7 +-
arch/arm/boot/dts/stm32mp157c-odyssey.dts | 2 +-
arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi | 7 +-
arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi | 7 +-
arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi | 2 +-
arch/arm/boot/dts/stm32mp15xx-osd32.dtsi | 7 +-
.../boot/dts/tegra20-acer-a500-picasso.dts | 2 +-
arch/arm/boot/dts/tegra20-harmony.dts | 2 +-
arch/arm/boot/dts/tegra20-medcom-wide.dts | 2 +-
arch/arm/boot/dts/tegra20-plutux.dts | 2 +-
arch/arm/boot/dts/tegra20-seaboard.dts | 2 +-
arch/arm/boot/dts/tegra20-tec.dts | 2 +-
arch/arm/boot/dts/tegra20-ventana.dts | 2 +-
.../tegra30-asus-nexus7-grouper-ti-pmic.dtsi | 2 +-
arch/arm/boot/dts/tegra30-cardhu.dtsi | 2 +-
arch/arm/mach-imx/suspend-imx53.S | 4 +-
arch/arm/mach-omap2/pm33xx-core.c | 40 +++++++
arch/arm64/boot/dts/arm/juno-base.dtsi | 6 +-
.../arm64/boot/dts/freescale/fsl-ls208xa.dtsi | 1 -
arch/arm64/boot/dts/freescale/imx8mq.dtsi | 16 +++
.../dts/marvell/armada-3720-turris-mox.dts | 6 +-
arch/arm64/boot/dts/marvell/armada-37xx.dtsi | 8 ++
arch/arm64/boot/dts/marvell/cn9130-db.dts | 2 +-
arch/arm64/boot/dts/qcom/sc7180-idp.dts | 2 +-
arch/arm64/boot/dts/rockchip/px30.dtsi | 16 +--
.../arm64/boot/dts/rockchip/rk3308-roc-cc.dts | 4 +-
.../boot/dts/rockchip/rk3328-nanopi-r2s.dts | 4 +-
.../arm64/boot/dts/rockchip/rk3328-roc-cc.dts | 4 +-
arch/arm64/boot/dts/rockchip/rk3328.dtsi | 6 +-
.../boot/dts/rockchip/rk3399-gru-scarlet.dtsi | 2 +-
arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi | 4 +-
arch/arm64/boot/dts/rockchip/rk3399.dtsi | 42 +++----
.../arm64/boot/dts/ti/k3-am654-base-board.dts | 2 +-
.../dts/ti/k3-j7200-common-proc-board.dts | 2 +-
.../dts/ti/k3-j721e-common-proc-board.dts | 2 +-
arch/ia64/include/asm/pgtable.h | 5 +-
arch/ia64/mm/init.c | 6 +-
arch/s390/include/asm/stacktrace.h | 97 ++++++++++++++++
arch/s390/kernel/traps.c | 2 +
arch/x86/events/intel/uncore_snbep.c | 6 +-
arch/x86/net/bpf_jit_comp.c | 3 +
drivers/dma-buf/sync_file.c | 13 ++-
drivers/firmware/Kconfig | 2 +-
drivers/firmware/arm_scmi/common.h | 2 +-
drivers/firmware/arm_scmi/driver.c | 2 +
drivers/firmware/tegra/Makefile | 1 +
drivers/firmware/tegra/bpmp-private.h | 3 +-
drivers/firmware/tegra/bpmp.c | 3 +-
drivers/firmware/turris-mox-rwtm.c | 1 +
drivers/gpu/drm/panel/panel-novatek-nt35510.c | 4 +-
drivers/memory/tegra/tegra124-emc.c | 4 +-
drivers/memory/tegra/tegra30-emc.c | 4 +-
drivers/net/dsa/mv88e6xxx/chip.c | 12 +-
.../net/ethernet/broadcom/genet/bcmgenet.c | 23 ++--
.../ethernet/broadcom/genet/bcmgenet_wol.c | 6 -
drivers/net/ethernet/moxa/moxart_ether.c | 4 +-
drivers/net/ethernet/qualcomm/emac/emac.c | 3 +-
drivers/net/ethernet/ti/tlan.c | 3 +-
drivers/net/fddi/defza.c | 3 +-
drivers/net/netdevsim/ipsec.c | 8 +-
drivers/net/vmxnet3/vmxnet3_ethtool.c | 22 +++-
drivers/reset/reset-ti-syscon.c | 4 +-
drivers/rtc/rtc-max77686.c | 4 +-
drivers/rtc/rtc-mxc_v2.c | 1 +
drivers/scsi/aic7xxx/aic7xxx_core.c | 2 +-
drivers/scsi/libfc/fc_rport.c | 13 ++-
drivers/scsi/qedf/qedf_io.c | 22 +++-
drivers/soc/tegra/fuse/fuse-tegra30.c | 3 +-
drivers/thermal/imx_sc_thermal.c | 3 +
drivers/thermal/rcar_gen3_thermal.c | 5 +-
drivers/thermal/sprd_thermal.c | 15 ++-
drivers/thermal/thermal_core.c | 2 +-
drivers/thermal/thermal_of.c | 3 +
drivers/usb/cdns3/gadget.c | 8 +-
fs/cifs/cifs_dfs_ref.c | 3 +
fs/f2fs/sysfs.c | 4 +
include/linux/bpf.h | 1 +
include/linux/huge_mm.h | 2 +-
include/linux/swap.h | 9 --
include/linux/swapops.h | 2 +
include/net/dst_metadata.h | 4 +-
include/net/ip6_route.h | 2 +-
include/net/tcp.h | 4 +
kernel/bpf/core.c | 8 +-
kernel/bpf/verifier.c | 60 ++++------
kernel/sched/fair.c | 4 +-
mm/huge_memory.c | 36 +++---
mm/memory.c | 36 +++---
mm/page_alloc.c | 106 +++++++++++-------
mm/shmem.c | 14 +--
net/bridge/br_if.c | 17 ++-
net/dsa/switch.c | 4 +-
net/ipv4/ip_tunnel.c | 18 ++-
net/ipv4/tcp.c | 3 +
net/ipv4/tcp_input.c | 2 +-
net/ipv4/tcp_ipv4.c | 4 +-
net/ipv4/tcp_output.c | 1 +
net/ipv4/udp.c | 6 +-
net/ipv6/tcp_ipv6.c | 21 +++-
net/ipv6/udp.c | 2 +-
net/ipv6/xfrm6_output.c | 2 +-
net/netfilter/nf_conntrack_netlink.c | 3 +
net/sched/act_ct.c | 14 ++-
scripts/Kbuild.include | 7 +-
scripts/mkcompile_h | 14 ++-
tools/bpf/Makefile | 7 +-
tools/bpf/bpftool/jit_disasm.c | 6 +-
192 files changed, 815 insertions(+), 567 deletions(-)
--
2.20.1
1
120
Backport LTS 5.10.52 from upstream.
Alex Bee (2):
arm64: dts: rockchip: Re-add regulator-boot-on, regulator-always-on
for vdd_gpu on rk3399-roc-pc
arm64: dts: rockchip: Re-add regulator-always-on for vcc_sdio for
rk3399-roc-pc
Alexander Shishkin (1):
intel_th: Wait until port is in reset before programming it
Aneesh Kumar K.V (1):
powerpc/mm/book3s64: Fix possible build error
Arnd Bergmann (2):
partitions: msdos: fix one-byte get_unaligned()
mips: always link byteswap helpers into decompressor
Aswath Govindraju (2):
ARM: dts: am335x: align ti,pindir-d0-out-d1-in property with dt-shema
ARM: dts: am437x: align ti,pindir-d0-out-d1-in property with dt-shema
Athira Rajeev (1):
selftests/powerpc: Fix "no_handler" EBB selftest
Benjamin Herrenschmidt (1):
powerpc/boot: Fixup device-tree on little endian
Bixuan Cui (1):
power: reset: gpio-poweroff: add missing MODULE_DEVICE_TABLE
Chandrakanth Patil (2):
scsi: megaraid_sas: Fix resource leak in case of probe failure
scsi: megaraid_sas: Handle missing interrupts while re-enabling IRQs
Chang S. Bae (1):
x86/signal: Detect and prevent an alternate signal stack overflow
Chao Yu (4):
f2fs: atgc: fix to set default age threshold
f2fs: add MODULE_SOFTDEP to ensure crc32 is included in the initramfs
f2fs: compress: fix to disallow temp extension
f2fs: fix to avoid adding tab before doc section
Christian Brauner (1):
cgroup: verify that source is a string
Christoph Niedermaier (3):
ARM: dts: imx6q-dhcom: Fix ethernet reset time properties
ARM: dts: imx6q-dhcom: Fix ethernet plugin detection problems
ARM: dts: imx6q-dhcom: Add gpios pinctrl for i2c bus recovery
Christophe JAILLET (3):
tty: serial: 8250: serial_cs: Fix a memory leak in error handling path
remoteproc: k3-r5: Fix an error message
scsi: be2iscsi: Fix an error handling path in beiscsi_dev_probe()
Chuck Lever (1):
NFSD: Fix TP_printk() format specifier in nfsd_clid_class
Chunfeng Yun (1):
usb: common: usb-conn-gpio: fix NULL pointer dereference of charger
Chunyan Zhang (1):
thermal/drivers/sprd: Add missing MODULE_DEVICE_TABLE
Corentin Labbe (1):
ARM: dts: gemini-rut1xx: remove duplicate ethernet node
Cristian Marussi (1):
firmware: arm_scmi: Reset Rx buffer to max size during async commands
Dan Carpenter (2):
rtc: fix snprintf() checking in is_rtc_hctosys()
scsi: scsi_dh_alua: Fix signedness bug in alua_rtpg()
Daniel Mack (1):
serial: tty: uartlite: fix console setup
Dimitri John Ledkov (1):
lib/decompress_unlz4.c: correctly handle zero-padding around initrds.
Dmitry Torokhov (1):
i2c: core: Disable client irq on reboot/shutdown
Eli Cohen (3):
vdpa/mlx5: Fix umem sizes assignments on VQ create
vdpa/mlx5: Fix possible failure in umem size calculation
vdpa/mlx5: Clear vq ready indication upon device reset
Fabio Aiuto (1):
staging: rtl8723bs: fix macro value for 2.4Ghz only device
Fabrice Fontaine (1):
s390: disable SSP when needed
Frederic Weisbecker (1):
srcu: Fix broken node geometry after early ssp init
Gao Xiang (1):
nfs: fix acl memory leak of posix_acl_create()
Geert Uytterhoeven (6):
reset: RESET_BRCMSTB_RESCAL should depend on ARCH_BRCMSTB
reset: RESET_INTEL_GW should depend on X86
ARM: dts: r8a7779, marzen: Fix DU clock names
arm64: dts: renesas: Add missing opp-suspend properties
arm64: dts: renesas: r8a7796[01]: Fix OPP table entry voltages
arm64: dts: renesas: r8a779a0: Drop power-domains property from GIC
node
Geoff Levand (1):
powerpc/ps3: Add dma_mask to ps3_dma_region
Geoffrey D. Bennett (4):
ALSA: usb-audio: scarlett2: Fix 18i8 Gen 2 PCM Input count
ALSA: usb-audio: scarlett2: Fix data_mutex lock
ALSA: usb-audio: scarlett2: Fix scarlett2_*_ctl_put() return values
ALSA: usb-audio: scarlett2: Fix 6i6 Gen 2 line out descriptions
Gowtham Tammana (1):
ARM: dts: dra7: Fix duplicate USB4 target module node
Greg Kroah-Hartman (1):
Revert "drm/ast: Remove reference to struct drm_device.pdev"
Hannes Reinecke (2):
scsi: core: Fixup calling convention for scsi_mode_sense()
scsi: scsi_dh_alua: Check for negative result value
Hans de Goede (1):
ACPI: video: Add quirk for the Dell Vostro 3350
Heiko Carstens (4):
s390/processor: always inline stap() and __load_psw_mask()
s390/ipl_parm: fix program check new psw handling
s390/mem_detect: fix diag260() program check new psw handling
s390/mem_detect: fix tprot() program check new psw handling
Icenowy Zheng (1):
arm64: dts: allwinner: a64-sopine-baseboard: change RGMII mode to TXID
James Smart (2):
scsi: lpfc: Fix "Unexpected timeout" error in direct attach topology
scsi: lpfc: Fix crash when lpfc_sli4_hba_setup() fails to initialize
the SGLs
Jan Kiszka (1):
watchdog: iTCO_wdt: Account for rebooting on second timeout
Jaroslav Kysela (1):
ASoC: soc-pcm: fix the return value in dpcm_apply_symmetry()
Javier Martinez Canillas (1):
PCI: rockchip: Register IRQ handlers after device and data are ready
Jeff Layton (1):
ceph: remove bogus checks and WARN_ONs from ceph_set_page_dirty
Jiajun Cao (1):
ALSA: hda: Add IRQ check for platform_get_irq()
Jiapeng Chong (1):
fs/jfs: Fix missing error code in lmLogInit()
Jing Xiangfeng (1):
drm/gma500: Add the missed drm_gem_object_put() in
psb_user_framebuffer_create()
John Garry (1):
scsi: core: Cap scsi_host cmd_per_lun at can_queue
Jon Hunter (1):
PCI: tegra194: Fix tegra_pcie_ep_raise_msi_irq() ill-defined shift
Jonathan Cameron (2):
iio: gyro: fxa21002c: Balance runtime pm + use
pm_runtime_resume_and_get().
iio: magn: bmc150: Balance runtime pm + use
pm_runtime_resume_and_get()
José Roberto de Souza (1):
drm/dp_mst: Add missing drm parameters to recently added call to
drm_dbg_kms()
Kashyap Desai (1):
scsi: megaraid_sas: Early detection of VD deletion through RaidMap
update
Kefeng Wang (1):
KVM: mmio: Fix use-after-free Read in
kvm_vm_ioctl_unregister_coalesced_mmio
Kishon Vijay Abraham I (1):
arm64: dts: ti: k3-j721e-main: Fix external refclk input to SERDES
Koby Elbaz (2):
habanalabs/gaudi: set the correct cpu_id on MME2_QM failure
habanalabs: remove node from list before freeing the node
Krzysztof Kozlowski (10):
power: supply: max17042: Do not enforce (incorrect) interrupt trigger
type
reset: a10sr: add missing of_match_table reference
ARM: exynos: add missing of_node_put for loop iteration
ARM: dts: exynos: fix PWM LED max brightness on Odroid XU/XU3
ARM: dts: exynos: fix PWM LED max brightness on Odroid HC1
ARM: dts: exynos: fix PWM LED max brightness on Odroid XU4
memory: stm32-fmc2-ebi: add missing of_node_put for loop iteration
memory: atmel-ebi: add missing of_node_put for loop iteration
memory: fsl_ifc: fix leak of IO mapping on probe failure
memory: fsl_ifc: fix leak of private memory on probe failure
Krzysztof Wilczyński (1):
PCI/sysfs: Fix dsm_label_utf16s_to_utf8s() buffer overrun
Lai Jiangshan (1):
KVM: X86: Disable hardware breakpoints unconditionally before
kvm_x86->run()
Liguang Zhang (1):
ACPI: AMBA: Fix resource name in /proc/iomem
Linus Torvalds (1):
certs: add 'x509_revocation_list' to gitignore
Linus Walleij (1):
power: supply: ab8500: Avoid NULL pointers
Logan Gunthorpe (1):
PCI/P2PDMA: Avoid pci_get_slot(), which may sleep
Long Li (1):
PCI: hv: Fix a race condition when removing the device
Luiz Sampaio (1):
w1: ds2438: fixing bug that would always get page0
Lukas Wunner (1):
PCI: pciehp: Ignore Link Down/Up caused by DPC
Lv Yunlong (1):
misc/libmasm/module: Fix two use after free in ibmasm_init_one
Marco Elver (1):
kcov: add __no_sanitize_coverage to fix noinstr for all architectures
Marek Behún (2):
firmware: turris-mox-rwtm: fix reply status decoding function
firmware: turris-mox-rwtm: report failures better
Marek Vasut (2):
ARM: dts: stm32: Connect PHY IRQ line on DH STM32MP1 SoM
ARM: dts: stm32: Rework LAN8710Ai PHY reset on DHCOM SoM
Martin Blumenstingl (1):
PCI: intel-gw: Fix INTx enable
Martin Fäcknitz (1):
MIPS: vdso: Invalid GIC access through VDSO
Matthew Auld (1):
drm/i915/gtt: drop the page table optimisation
Maurizio Lombardi (1):
nvme-tcp: can't set sk_user_data without write_lock
Michael Kelley (1):
scsi: storvsc: Correctly handle multiple flags in srb_status
Michael S. Tsirkin (1):
virtio_net: move tx vq operation under tx queue lock
Michael Walle (1):
serial: fsl_lpuart: disable DMA for console and fix sysrq
Mike Christie (7):
scsi: iscsi: Add iscsi_cls_conn refcount helpers
scsi: iscsi: Fix conn use after free during resets
scsi: iscsi: Fix shost->max_id use
scsi: qedi: Fix null ref during abort handling
scsi: qedi: Fix race during abort timeouts
scsi: qedi: Fix TMF session block/unblock use
scsi: qedi: Fix cleanup session block/unblock use
Mike Marshall (1):
orangefs: fix orangefs df output.
Nathan Chancellor (2):
hexagon: handle {,SOFT}IRQENTRY_TEXT in linker script
hexagon: use common DISCARDS macro
NeilBrown (1):
SUNRPC: prevent port reuse on transports which don't request it.
Nick Desaulniers (1):
ARM: 9087/1: kprobes: test-thumb: fix for LLVM_IAS=1
Nicolas Ferre (1):
dt-bindings: i2c: at91: fix example for scl-gpios
Niklas Söderlund (1):
thermal/drivers/rcar_gen3_thermal: Fix coefficient calculations
Nikolay Aleksandrov (2):
net: bridge: multicast: fix PIM hello router port marking race
net: bridge: multicast: fix MRD advertisement router port marking race
Pali Rohár (2):
firmware: turris-mox-rwtm: fail probing when firmware does not support
hwrng
firmware: turris-mox-rwtm: show message about HWRNG registration
Paul Cercueil (2):
drm/ingenic: Fix non-OSD mode
drm/ingenic: Switch IPU plane to type OVERLAY
Paul E. McKenney (1):
rcu: Reject RCU_LOCKDEP_WARN() false positives
Paulo Alcantara (1):
cifs: handle reconnect of tcon when there is no cached dfs referral
Peter Robinson (1):
gpio: pca953x: Add support for the On Semi pca9655
Peter Zijlstra (2):
jump_label: Fix jump_label_text_reserved() vs __init
static_call: Fix static_call_text_reserved() vs __init
Philip Yang (1):
drm/amdkfd: fix sysfs kobj leak
Philipp Zabel (1):
reset: bail if try_module_get() fails
Pierre-Louis Bossart (2):
ASoC: Intel: sof_sdw: add mutual exclusion between PCH DMIC and RT715
ASoC: Intel: kbl_da7219_max98357a: shrink platform_id below 20
characters
Po-Hsu Lin (1):
selftests: timers: rtcpie: skip test if default RTC device does not
exist
Rafał Miłecki (1):
ARM: dts: BCM5301X: Fixup SPI binding
Randy Dunlap (2):
PCI: ftpci100: Rename macro name collision
mips: disable branch profiling in boot/decompress.o
Rashmi A (1):
phy: intel: Fix for warnings due to EMMC clock 175Mhz change in FIP
Robin Gong (1):
dmaengine: fsl-qdma: check dma_set_mask return value
Roger Quadros (1):
arm64: dts: ti: j7200-main: Enable USB2 PHY RX sensitivity workaround
Ruslan Bilovol (1):
usb: gadget: f_hid: fix endianness issue with descriptors
Salvatore Bonaccorso (1):
ARM: dts: sun8i: h3: orangepi-plus: Fix ethernet phy-mode
Sandor Bodo-Merle (2):
PCI: iproc: Fix multi-MSI base vector number allocation
PCI: iproc: Support multi-MSI only on uniprocessor kernel
Sascha Hauer (1):
ubifs: Fix off-by-one error
Sean Christopherson (2):
KVM: x86: Use guest MAXPHYADDR from CPUID.0x8000_0008 iff TDP is
enabled
KVM: x86/mmu: Do not apply HPA (memory encryption) mask to GPAs
Sherry Sun (1):
tty: serial: fsl_lpuart: fix the potential risk of division or modulo
by zero
Siddharth Gupta (1):
remoteproc: core: Fix cdev remove and rproc del
Srinivas Neeli (2):
gpio: zynq: Check return value of pm_runtime_get_sync
gpio: zynq: Check return value of irq_get_irq_data
Stefan Eichenberger (1):
watchdog: imx_sc_wdt: fix pretimeout
Steffen Maier (1):
scsi: zfcp: Report port fc_security as unknown early during remote
cable pull
Stephan Gerhold (1):
power: supply: rt5033_battery: Fix device tree enumeration
Stephen Boyd (1):
arm64: dts: qcom: trogdor: Add no-hpd to DSI bridge node
Steven Rostedt (VMware) (1):
tracing: Do not reference char * as a string in histograms
Suganath Prabu S (1):
scsi: mpt3sas: Fix deadlock while cancelling the running firmware
event
Takashi Iwai (3):
ALSA: usx2y: Avoid camelCase
ALSA: usx2y: Don't call free_pages_exact() with NULL address
ALSA: sb: Fix potential double-free of CSP mixer elements
Takashi Sakamoto (3):
Revert "ALSA: bebob/oxfw: fix Kconfig entry for Mackie d.2 Pro"
ALSA: bebob: add support for ToneWeal FW66
ALSA: firewire-motu: fix detection for S/PDIF source on optical
interface in v2 protocol
Tao Ren (1):
watchdog: aspeed: fix hardware timeout calculation
Thomas Gleixner (3):
x86/fpu: Return proper error codes from user access functions
x86/fpu: Fix copy_xstate_to_kernel() gap handling
x86/fpu: Limit xstate copy size in xstateregs_set()
Tong Zhang (2):
misc: alcor_pci: fix null-ptr-deref when there is no PCI bridge
misc: alcor_pci: fix inverted branch condition
Tony Lindgren (1):
mfd: cpcap: Fix cpcap dmamask not set warnings
Trond Myklebust (8):
NFSv4: Fix delegation return in cases where we have to retry
NFS: nfs_find_open_context() may only select open files
NFSv4: Initialise connection to the server in nfs4_alloc_client()
NFSv4: Fix an Oops in pnfs_mark_request_commit() when doing O_DIRECT
nfsd: Reduce contention for the nfsd_file nf_rwsem
NFSv4/pnfs: Fix the layout barrier update
NFSv4/pnfs: Fix layoutget behaviour after invalidation
NFSv4/pNFS: Don't call _nfs4_pnfs_v3_ds_connect multiple times
Tyrel Datwyler (1):
scsi: core: Fix bad pointer dereference when ehandler kthread is
invalid
Uwe Kleine-König (4):
backlight: lm3630a: Fix return code of .update_status() callback
pwm: spear: Don't modify HW state in .remove callback
pwm: tegra: Don't modify HW state in .remove callback
pwm: imx1: Don't disable clocks at device remove time
Valentin Vidic (1):
s390/sclp_vt220: fix console name to match device
Valentine Barshak (1):
arm64: dts: renesas: v3msk: Fix memory size
Ville Syrjälä (1):
drm/i915/gt: Fix -EDEADLK handling regression
Vitaly Kuznetsov (1):
KVM: nSVM: Check the value written to MSR_VM_HSAVE_PA
Wayne Lin (2):
drm/dp_mst: Do not set proposed vcpi directly
drm/dp_mst: Avoid to mess up payload table by ports in stale topology
Wei Yongjun (1):
watchdog: jz4740: Fix return value check in jz4740_wdt_probe()
Xie Yongji (3):
virtio-blk: Fix memory leak among suspend/resume procedure
virtio_net: Fix error handling in virtnet_restore()
virtio_console: Assure used length from device is limited
Xiyu Yang (2):
iommu/arm-smmu: Fix arm_smmu_device refcount leak when
arm_smmu_rpm_get fails
iommu/arm-smmu: Fix arm_smmu_device refcount leak in address
translation
Xuewen Yan (1):
sched/uclamp: Ignore max aggregation if rq is idle
Yang Yingliang (3):
leds: tlc591xx: fix return value check in tlc591xx_probe()
ALSA: ppc: fix error return code in snd_pmac_probe()
usb: gadget: hid: fix error return code in hid_bind()
Yizhuo Zhai (1):
Input: hideep - fix the uninitialized use in hideep_nvm_unlock()
Yufen Yu (2):
ALSA: ac97: fix PM reference leak in ac97_bus_remove()
ASoC: img: Fix PM reference leak in img_i2s_in_probe()
Zhen Lei (8):
fbmem: Do not delete the mode that is still in use
ASoC: soc-core: Fix the error return code in
snd_soc_of_parse_audio_routing()
um: fix error return code in slip_open()
um: fix error return code in winch_tramp()
ubifs: journal: Fix error return code in ubifs_jnl_write_inode()
ALSA: isa: Fix error return code in snd_cmi8330_probe()
memory: pl353: Fix error return code in pl353_smc_probe()
firmware: tegra: Fix error return code in tegra210_bpmp_init()
Zhihao Cheng (1):
ubifs: Set/Clear I_LINKABLE under i_lock for whiteout inode
Zou Wei (14):
ASoC: intel/boards: add missing MODULE_DEVICE_TABLE
mfd: da9052/stmpe: Add and modify MODULE_DEVICE_TABLE
fsi: Add missing MODULE_DEVICE_TABLE
leds: turris-omnia: add missing MODULE_DEVICE_TABLE
power: supply: sc27xx: Add missing MODULE_DEVICE_TABLE
power: supply: sc2731_charger: Add missing MODULE_DEVICE_TABLE
watchdog: Fix possible use-after-free in wdt_startup()
watchdog: sc520_wdt: Fix possible use-after-free in wdt_turnoff()
watchdog: Fix possible use-after-free by calling del_timer_sync()
PCI: tegra: Add missing MODULE_DEVICE_TABLE
power: supply: charger-manager: add missing MODULE_DEVICE_TABLE
power: supply: ab8500: add missing MODULE_DEVICE_TABLE
pwm: img: Fix PM reference leak in img_pwm_enable()
reset: brcmstb: Add missing MODULE_DEVICE_TABLE
ching Huang (2):
scsi: arcmsr: Fix the wrong CDB payload report to IOP
scsi: arcmsr: Fix doorbell status being updated late on ARC-1886
.../devicetree/bindings/i2c/i2c-at91.txt | 2 +-
Documentation/filesystems/f2fs.rst | 16 +-
arch/arm/boot/dts/am335x-cm-t335.dts | 2 +-
arch/arm/boot/dts/am43x-epos-evm.dts | 4 +-
arch/arm/boot/dts/am5718.dtsi | 6 +-
arch/arm/boot/dts/bcm5301x.dtsi | 18 +-
arch/arm/boot/dts/dra7-l4.dtsi | 22 -
arch/arm/boot/dts/dra71x.dtsi | 4 -
arch/arm/boot/dts/dra72x.dtsi | 4 -
arch/arm/boot/dts/dra74x.dtsi | 92 ++--
arch/arm/boot/dts/exynos5422-odroidhc1.dts | 2 +-
arch/arm/boot/dts/exynos5422-odroidxu4.dts | 2 +-
.../boot/dts/exynos54xx-odroidxu-leds.dtsi | 4 +-
arch/arm/boot/dts/gemini-rut1xx.dts | 12 -
arch/arm/boot/dts/imx6q-dhcom-som.dtsi | 41 +-
arch/arm/boot/dts/r8a7779-marzen.dts | 2 +-
arch/arm/boot/dts/r8a7779.dtsi | 1 +
arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi | 10 +-
arch/arm/boot/dts/sun8i-h3-orangepi-plus.dts | 2 +-
arch/arm/mach-exynos/exynos.c | 2 +
arch/arm/probes/kprobes/test-thumb.c | 10 +-
.../allwinner/sun50i-a64-sopine-baseboard.dts | 2 +-
arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi | 2 +
arch/arm64/boot/dts/renesas/r8a774a1.dtsi | 1 +
arch/arm64/boot/dts/renesas/r8a77960.dtsi | 7 +-
arch/arm64/boot/dts/renesas/r8a77961.dtsi | 7 +-
.../arm64/boot/dts/renesas/r8a77970-v3msk.dts | 2 +-
arch/arm64/boot/dts/renesas/r8a779a0.dtsi | 1 -
.../boot/dts/rockchip/rk3399-roc-pc.dtsi | 3 +
arch/arm64/boot/dts/ti/k3-j7200-main.dtsi | 1 +
.../dts/ti/k3-j721e-common-proc-board.dts | 4 +
arch/arm64/boot/dts/ti/k3-j721e-main.dtsi | 58 +--
arch/hexagon/kernel/vmlinux.lds.S | 9 +-
arch/mips/boot/compressed/Makefile | 4 +-
arch/mips/boot/compressed/decompress.c | 2 +
arch/mips/include/asm/vdso/vdso.h | 2 +-
arch/powerpc/boot/devtree.c | 59 ++-
arch/powerpc/boot/ns16550.c | 9 +-
arch/powerpc/include/asm/ps3.h | 2 +
arch/powerpc/mm/book3s64/radix_tlb.c | 26 +-
arch/powerpc/platforms/ps3/mm.c | 12 +
arch/s390/Makefile | 1 +
arch/s390/boot/ipl_parm.c | 19 +-
arch/s390/boot/mem_detect.c | 47 +-
arch/s390/include/asm/processor.h | 4 +-
arch/s390/kernel/setup.c | 2 +-
arch/s390/purgatory/Makefile | 1 +
arch/um/drivers/chan_user.c | 3 +-
arch/um/drivers/slip_user.c | 3 +-
arch/x86/include/asm/fpu/internal.h | 19 +-
arch/x86/kernel/fpu/regset.c | 2 +-
arch/x86/kernel/fpu/xstate.c | 105 ++--
arch/x86/kernel/signal.c | 24 +-
arch/x86/kvm/cpuid.c | 8 +-
arch/x86/kvm/mmu/mmu.c | 2 +
arch/x86/kvm/mmu/paging.h | 14 +
arch/x86/kvm/mmu/paging_tmpl.h | 4 +-
arch/x86/kvm/mmu/spte.h | 6 -
arch/x86/kvm/svm/svm.c | 11 +-
arch/x86/kvm/x86.c | 2 +
block/partitions/ldm.c | 2 +-
block/partitions/ldm.h | 3 -
block/partitions/msdos.c | 24 +-
certs/.gitignore | 1 +
drivers/acpi/acpi_amba.c | 1 +
drivers/acpi/acpi_video.c | 9 +
drivers/block/virtio_blk.c | 2 +
drivers/char/virtio_console.c | 4 +-
drivers/dma/fsl-qdma.c | 6 +-
drivers/firmware/arm_scmi/driver.c | 4 +
drivers/firmware/tegra/bpmp-tegra210.c | 2 +-
drivers/firmware/turris-mox-rwtm.c | 55 ++-
drivers/fsi/fsi-master-aspeed.c | 1 +
drivers/fsi/fsi-master-ast-cf.c | 1 +
drivers/fsi/fsi-master-gpio.c | 1 +
drivers/fsi/fsi-occ.c | 1 +
drivers/gpio/gpio-pca953x.c | 1 +
drivers/gpio/gpio-zynq.c | 15 +-
drivers/gpu/drm/amd/amdkfd/kfd_process.c | 14 +-
.../amd/amdkfd/kfd_process_queue_manager.c | 1 +
drivers/gpu/drm/ast/ast_main.c | 5 +-
drivers/gpu/drm/drm_dp_mst_topology.c | 68 ++-
drivers/gpu/drm/gma500/framebuffer.c | 7 +-
drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 5 +-
drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c | 2 +-
drivers/gpu/drm/ingenic/ingenic-drm-drv.c | 20 +-
drivers/gpu/drm/ingenic/ingenic-ipu.c | 2 +-
drivers/hwtracing/intel_th/core.c | 17 +
drivers/hwtracing/intel_th/gth.c | 16 +
drivers/hwtracing/intel_th/intel_th.h | 3 +
drivers/i2c/i2c-core-base.c | 3 +
drivers/iio/gyro/fxas21002c_core.c | 11 +-
drivers/iio/magnetometer/bmc150_magn.c | 10 +-
drivers/input/touchscreen/hideep.c | 13 +-
drivers/iommu/arm/arm-smmu/arm-smmu.c | 10 +-
drivers/leds/leds-tlc591xx.c | 8 +-
drivers/leds/leds-turris-omnia.c | 1 +
drivers/memory/atmel-ebi.c | 4 +-
drivers/memory/fsl_ifc.c | 8 +-
drivers/memory/pl353-smc.c | 1 +
drivers/memory/stm32-fmc2-ebi.c | 4 +
drivers/mfd/da9052-i2c.c | 1 +
drivers/mfd/motorola-cpcap.c | 4 +
drivers/mfd/stmpe-i2c.c | 2 +-
drivers/misc/cardreader/alcor_pci.c | 8 +-
drivers/misc/habanalabs/gaudi/gaudi.c | 3 +-
drivers/misc/habanalabs/goya/goya.c | 1 +
drivers/misc/ibmasm/module.c | 5 +-
drivers/net/virtio_net.c | 27 +-
drivers/nvme/target/tcp.c | 1 -
drivers/pci/controller/dwc/pcie-intel-gw.c | 10 +-
drivers/pci/controller/dwc/pcie-tegra194.c | 2 +-
drivers/pci/controller/pci-ftpci100.c | 30 +-
drivers/pci/controller/pci-hyperv.c | 30 +-
drivers/pci/controller/pci-tegra.c | 1 +
drivers/pci/controller/pcie-iproc-msi.c | 29 +-
drivers/pci/controller/pcie-rockchip-host.c | 12 +-
drivers/pci/hotplug/pciehp_hpc.c | 36 ++
drivers/pci/p2pdma.c | 34 +-
drivers/pci/pci-label.c | 2 +-
drivers/pci/pci.h | 4 +
drivers/pci/pcie/dpc.c | 74 ++-
drivers/phy/intel/phy-intel-keembay-emmc.c | 3 +-
drivers/power/reset/gpio-poweroff.c | 1 +
drivers/power/supply/Kconfig | 3 +-
drivers/power/supply/ab8500_btemp.c | 1 +
drivers/power/supply/ab8500_charger.c | 19 +-
drivers/power/supply/ab8500_fg.c | 1 +
drivers/power/supply/charger-manager.c | 1 +
drivers/power/supply/max17042_battery.c | 2 +-
drivers/power/supply/rt5033_battery.c | 7 +
drivers/power/supply/sc2731_charger.c | 1 +
drivers/power/supply/sc27xx_fuel_gauge.c | 1 +
drivers/pwm/pwm-img.c | 2 +-
drivers/pwm/pwm-imx1.c | 2 -
drivers/pwm/pwm-spear.c | 4 -
drivers/pwm/pwm-tegra.c | 13 -
drivers/remoteproc/remoteproc_cdev.c | 2 +-
drivers/remoteproc/remoteproc_core.c | 2 +-
drivers/remoteproc/ti_k3_r5_remoteproc.c | 2 +-
drivers/reset/Kconfig | 4 +-
drivers/reset/core.c | 5 +-
drivers/reset/reset-a10sr.c | 1 +
drivers/reset/reset-brcmstb.c | 1 +
drivers/rtc/proc.c | 4 +-
drivers/s390/char/sclp_vt220.c | 4 +-
drivers/s390/scsi/zfcp_sysfs.c | 1 +
drivers/scsi/arcmsr/arcmsr_hba.c | 19 +-
drivers/scsi/be2iscsi/be_main.c | 5 +-
drivers/scsi/bnx2i/bnx2i_iscsi.c | 2 +-
drivers/scsi/cxgbi/libcxgbi.c | 4 +-
drivers/scsi/device_handler/scsi_dh_alua.c | 11 +-
drivers/scsi/hosts.c | 4 +
drivers/scsi/libiscsi.c | 122 +++--
drivers/scsi/lpfc/lpfc_els.c | 9 +
drivers/scsi/lpfc/lpfc_sli.c | 5 +-
drivers/scsi/megaraid/megaraid_sas.h | 12 +
drivers/scsi/megaraid/megaraid_sas_base.c | 96 +++-
drivers/scsi/megaraid/megaraid_sas_fp.c | 6 +-
drivers/scsi/megaraid/megaraid_sas_fusion.c | 10 +-
drivers/scsi/mpt3sas/mpt3sas_scsih.c | 22 +
drivers/scsi/qedi/qedi.h | 1 +
drivers/scsi/qedi/qedi_fw.c | 24 +-
drivers/scsi/qedi/qedi_iscsi.c | 37 +-
drivers/scsi/qedi/qedi_main.c | 2 +-
drivers/scsi/scsi_lib.c | 10 +-
drivers/scsi/scsi_transport_iscsi.c | 12 +
drivers/scsi/scsi_transport_sas.c | 9 +-
drivers/scsi/sd.c | 12 +-
drivers/scsi/sr.c | 2 +-
drivers/scsi/storvsc_drv.c | 61 +--
drivers/staging/rtl8723bs/hal/odm.h | 5 +-
drivers/thermal/rcar_gen3_thermal.c | 2 +-
drivers/thermal/sprd_thermal.c | 1 +
drivers/tty/serial/8250/serial_cs.c | 11 +-
drivers/tty/serial/fsl_lpuart.c | 9 +
drivers/tty/serial/uartlite.c | 27 +-
drivers/usb/common/usb-conn-gpio.c | 44 +-
drivers/usb/gadget/function/f_hid.c | 2 +-
drivers/usb/gadget/legacy/hid.c | 4 +-
drivers/vdpa/mlx5/net/mlx5_vnet.c | 28 +-
drivers/video/backlight/lm3630a_bl.c | 12 +-
drivers/video/fbdev/core/fbmem.c | 12 +-
drivers/w1/slaves/w1_ds2438.c | 4 +-
drivers/watchdog/aspeed_wdt.c | 2 +-
drivers/watchdog/iTCO_wdt.c | 12 +-
drivers/watchdog/imx_sc_wdt.c | 11 +-
drivers/watchdog/jz4740_wdt.c | 4 +-
drivers/watchdog/lpc18xx_wdt.c | 2 +-
drivers/watchdog/sbc60xxwdt.c | 2 +-
drivers/watchdog/sc520_wdt.c | 2 +-
drivers/watchdog/w83877f_wdt.c | 2 +-
fs/ceph/addr.c | 10 +-
fs/cifs/connect.c | 6 +-
fs/f2fs/gc.c | 1 +
fs/f2fs/namei.c | 16 +-
fs/f2fs/super.c | 1 +
fs/jfs/jfs_logmgr.c | 1 +
fs/nfs/delegation.c | 71 ++-
fs/nfs/delegation.h | 1 +
fs/nfs/direct.c | 17 +-
fs/nfs/inode.c | 4 +
fs/nfs/nfs3proc.c | 4 +-
fs/nfs/nfs4_fs.h | 1 +
fs/nfs/nfs4client.c | 82 ++--
fs/nfs/pnfs.c | 40 +-
fs/nfs/pnfs_nfs.c | 52 +-
fs/nfsd/nfs4state.c | 3 -
fs/nfsd/trace.h | 29 --
fs/nfsd/vfs.c | 18 +-
fs/orangefs/super.c | 2 +-
fs/ubifs/dir.c | 7 +
fs/ubifs/journal.c | 3 +-
fs/ubifs/xattr.c | 2 +-
include/linux/compiler-clang.h | 17 +
include/linux/compiler-gcc.h | 6 +
include/linux/compiler_types.h | 2 +-
include/linux/nfs_fs.h | 1 +
include/linux/rcupdate.h | 2 +-
include/linux/sched/signal.h | 19 +-
include/scsi/libiscsi.h | 11 +-
include/scsi/scsi_transport_iscsi.h | 2 +
kernel/cgroup/cgroup-v1.c | 2 +
kernel/jump_label.c | 13 +-
kernel/rcu/rcu.h | 2 +
kernel/rcu/srcutree.c | 3 +
kernel/rcu/tree.c | 16 +-
kernel/rcu/update.c | 2 +-
kernel/sched/sched.h | 21 +-
kernel/static_call.c | 13 +-
kernel/trace/trace_events_hist.c | 6 +-
lib/decompress_unlz4.c | 8 +
net/bridge/br_multicast.c | 6 +
net/sunrpc/xprtsock.c | 3 +-
sound/ac97/bus.c | 2 +-
sound/firewire/Kconfig | 5 +-
sound/firewire/bebob/bebob.c | 5 +-
sound/firewire/motu/motu-protocol-v2.c | 13 +-
sound/firewire/oxfw/oxfw.c | 2 +-
sound/isa/cmi8330.c | 2 +-
sound/isa/sb/sb16_csp.c | 8 +-
sound/pci/hda/hda_tegra.c | 3 +
sound/ppc/powermac.c | 6 +-
sound/soc/img/img-i2s-in.c | 2 +-
sound/soc/intel/boards/kbl_da7219_max98357a.c | 4 +-
sound/soc/intel/boards/sof_da7219_max98373.c | 1 +
sound/soc/intel/boards/sof_rt5682.c | 1 +
sound/soc/intel/boards/sof_sdw.c | 19 +-
sound/soc/intel/boards/sof_sdw_common.h | 1 +
.../intel/common/soc-acpi-intel-kbl-match.c | 2 +-
sound/soc/soc-core.c | 2 +-
sound/soc/soc-pcm.c | 2 +-
sound/usb/mixer_scarlett_gen2.c | 39 +-
sound/usb/usx2y/usX2Yhwdep.c | 56 +--
sound/usb/usx2y/usX2Yhwdep.h | 2 +-
sound/usb/usx2y/usb_stream.c | 7 +-
sound/usb/usx2y/usbus428ctldefs.h | 102 ++--
sound/usb/usx2y/usbusx2y.c | 218 ++++-----
sound/usb/usx2y/usbusx2y.h | 58 +--
sound/usb/usx2y/usbusx2yaudio.c | 448 +++++++++---------
sound/usb/usx2y/usx2yhwdeppcm.c | 410 ++++++++--------
sound/usb/usx2y/usx2yhwdeppcm.h | 4 +-
.../powerpc/pmu/ebb/no_handler_test.c | 2 -
tools/testing/selftests/timers/rtcpie.c | 10 +-
virt/kvm/coalesced_mmio.c | 2 +-
265 files changed, 2508 insertions(+), 1668 deletions(-)
create mode 100644 arch/x86/kvm/mmu/paging.h
--
2.20.1
Backport 5.10.51 LTS patches
Aaron Liu (1):
drm/amdgpu: enable sdma0 tmz for Raven/Renoir(V2)
Al Cooper (1):
mmc: sdhci: Fix warning message when accessing RPMB in HS400 mode
Alex Bee (2):
drm: rockchip: add missing registers for RK3188
drm: rockchip: add missing registers for RK3066
Amber Lin (1):
drm/amdkfd: Fix circular lock in nocpsch path
Amit Cohen (1):
selftests: Clean forgotten resources as part of cleanup()
Andrey Grodzovsky (2):
drm/scheduler: Fix hang when sched_entity released
drm/sched: Avoid data corruptions
Andy Shevchenko (1):
net: pch_gbe: Use proper accessors to BE data in pch_ptp_match()
Ansuel Smith (1):
net: mdio: ipq8064: add regmap config to disable REGCACHE
Arnd Bergmann (1):
media: subdev: disallow ioctl for saa6588/davinci
Arturo Giusti (1):
udf: Fix NULL pointer dereference in udf_symlink function
Benjamin Drung (1):
media: uvcvideo: Fix pixel format change for Elgato Cam Link 4K
Bibo Mao (1):
hugetlb: clear huge pte during flush function on mips platform
Bixuan Cui (1):
pinctrl: equilibrium: Add missing MODULE_DEVICE_TABLE
Brandon Syu (1):
drm/amd/display: fix HDCP reset sequence on reinitialize
Cameron Nemo (2):
arm64: dts: rockchip: add rk3328 dwc3 usb controller node
arm64: dts: rockchip: Enable USB3 for rk3328 Rock64
Chao Yu (1):
f2fs: fix to avoid racing on fsync_entry_slab by multi filesystem
instances
Christian Löhle (1):
mmc: core: Allow UHS-I voltage switch for SDSC cards if supported
Christophe JAILLET (1):
nvmem: core: add a missing of_node_put
Christophe Leroy (1):
powerpc/mm: Fix lockup on kernel exec fault
Damien Le Moal (2):
dm: Fix dm_accept_partial_bio() relative to zone management commands
dm zoned: check zone capacity
Dan Carpenter (2):
drm/vc4: fix argument ordering in vc4_crtc_get_margins()
ath11k: unlock on error path in ath11k_mac_op_add_interface()
Daniel Borkmann (1):
bpf: Fix up register-based shifts in interpreter to silence KUBSAN
Daniel Lenski (1):
Bluetooth: btusb: Add a new QCA_ROME device (0cf3:e500)
Daniel Vetter (4):
drm/tegra: Don't set allow_fb_modifiers explicitly
drm/msm/mdp4: Fix modifier support enabling
drm/arm/malidp: Always list modifiers
drm/nouveau: Don't set allow_fb_modifiers explicitly
Davide Caratti (1):
net/sched: cls_api: increase max_reclassify_loop
Dinghao Liu (1):
clk: renesas: rcar-usb2-clock-sel: Fix error handling in .probe()
Dmitry Osipenko (3):
clk: tegra: Fix refcounting of gate clocks
clk: tegra: Ensure that PLLU configuration is applied properly
ASoC: tegra: Set driver_name=tegra for all machine drivers
Dmytro Laktyushkin (1):
drm/amd/display: fix use_max_lb flag for 420 pixel formats
Eli Cohen (1):
net/mlx5: Fix lag port remapping logic
Felix Fietkau (1):
mt76: mt7615: fix fixed-rate tx status reporting
Ferry Toth (1):
extcon: intel-mrfld: Sync hardware and software state on init
Fugang Duan (1):
net: fec: add ndo_select_queue to fix TX bandwidth fluctuations
Gerd Rausch (1):
RDMA/cma: Fix rdma_resolve_route() memory leak
Gioh Kim (1):
RDMA/rtrs: Change MAX_SESS_QUEUE_DEPTH
Guchun Chen (1):
drm/amd/display: fix incorrrect valid irq check
Gustavo A. R. Silva (1):
wireless: wext-spy: Fix out-of-bounds warning
Hans de Goede (1):
mmc: sdhci-acpi: Disable write protect detection on Toshiba Encore 2
WT8-B
Haren Myneni (1):
powerpc/powernv/vas: Release reference to tgid during window close
Harry Wentland (1):
drm/amd/display: Reject non-zero src_y and src_x for video planes
Heiner Kallweit (1):
r8169: avoid link-up interrupt issue on RTL8106e if user enables ASPM
Hilda Wu (1):
Bluetooth: btusb: Add support USB ALT 3 for WBS
Horatiu Vultur (1):
net: bridge: mrp: Update ring transitions.
Huang Pei (1):
MIPS: add PMD table accounting into MIPS'pmd_alloc_one
Huy Nguyen (1):
net/mlx5e: IPsec/rep_tc: Fix rep_tc_update_skb drops IPsec packet
Jack Zhang (1):
drm/amd/amdgpu/sriov disable all ip hw status by default
Jacob Keller (2):
ice: fix incorrect payload indicator on PTYPE
ice: mark PTYPE 2 as reserved
Jakub Kicinski (1):
net: ip: avoid OOM kills with large UDP sends over loopback
Jan Kara (1):
rq-qos: fix missed wake-ups in rq_qos_throttle try two
Jeremy Linton (1):
coresight: Propagate symlink failure
Jesse Brandeburg (4):
e100: handle eeprom as little endian
igb: handle vlan types with checker enabled
igb: fix assignment on big endian machines
i40e: fix PTP on 5Gb links
Jian Shen (1):
net: fix mistake path for netdev_features_strings
Jiansong Chen (1):
drm/amdgpu: remove unsafe optimization to drop preamble ib
Jiapeng Chong (1):
RDMA/cxgb4: Fix missing error code in create_qp()
Jing Xiangfeng (1):
drm/radeon: Add the missed drm_gem_object_put() in
radeon_user_framebuffer_create()
Joakim Zhang (1):
net: phy: realtek: add delay to fix RXC generation issue
Joe Thornber (1):
dm space maps: don't reset space map allocation cursor when committing
Johan Hovold (3):
media: dtv5100: fix control-request directions
media: gspca/sq905: fix control-request direction
media: gspca/sunplus: fix zero-length control requests
Johannes Berg (4):
iwlwifi: mvm: don't change band on bound PHY contexts
iwlwifi: pcie: free IML DMA memory allocation
iwlwifi: pcie: fix context info freeing
mac80211: consider per-CPU statistics if present
Jonathan Kim (1):
drm/amdkfd: fix circular locking on get_wave_state
Joseph Greathouse (1):
drm/amdgpu: Update NV SIMD-per-CU to 2
Kai-Heng Feng (1):
Bluetooth: Shutdown controller after workqueues are flushed or
cancelled
Kees Cook (4):
drm/amd/display: Avoid HDCP over-read and corruption
drm/i915/display: Do not zero past infoframes.vsc
lkdtm/bugs: XFAIL UNALIGNED_LOAD_STORE_WRITE
selftests/lkdtm: Fix expected text for CR4 pinning
Kiran K (1):
Bluetooth: Fix alt settings for incoming SCO with transparent coding
format
Konstantin Kharlamov (1):
PCI: Leave Apple Thunderbolt controllers on for s2idle or standby
Kuninori Morimoto (1):
clk: renesas: r8a77995: Add ZA2 clock
KuoHsiang Chou (1):
drm/ast: Fixed CVE for DP501
Lee Gibson (1):
wl1251: Fix possible buffer overflow in wl1251_cmd_scan
Limeng (1):
mfd: syscon: Free the allocated name field of struct regmap_config
Linus Walleij (1):
power: supply: ab8500: Fix an old bug
Liu Ying (1):
drm/bridge: nwl-dsi: Force a full modeset when crtc_state->active is
changed to be true
Liwei Song (1):
ice: set the value of global config lock timeout longer
Longpeng(Mike) (1):
vsock: notify server to shutdown when client has pending signal
Luiz Augusto von Dentz (2):
Bluetooth: L2CAP: Fix invalid access if ECRED Reconfigure fails
Bluetooth: L2CAP: Fix invalid access on ECRED Connection response
Lv Yunlong (1):
ipack/carriers/tpci200: Fix a double free in tpci200_pci_probe
Lyude Paul (1):
drm/dp: Handle zeroed port counts in drm_dp_read_downstream_info()
Marcelo Ricardo Leitner (2):
sctp: validate from_addr_param return
sctp: add size validation when walking chunks
Mark Yacoub (1):
drm/amd/display: Verify Gamma & Degamma LUT sizes in
amdgpu_dm_atomic_check
Mateusz Kwiatkowski (1):
drm/vc4: Fix clock source for VEC PixelValve on BCM2711
Max Gurtovoy (1):
IB/isert: Align target max I/O size to initiator size
Maxime Ripard (3):
drm/vc4: txp: Properly set the possible_crtcs mask
drm/vc4: crtc: Skip the TXP
drm/vc4: hdmi: Prevent clock unbalance
Maximilian Luz (1):
pinctrl/amd: Add device HID for new AMD GPIO controller
Mikulas Patocka (4):
dm writecache: don't split bios when overwriting contiguous cache
content
dm writecache: commit just one block, not a full page
dm writecache: flush origin device when writing and cache is full
dm writecache: write at least 4k when committing
Minchan Kim (1):
selinux: use __GFP_NOWARN with GFP_NOWAIT in the AVC
Nathan Chancellor (2):
powerpc/barrier: Avoid collision with clang's __lwsync macro
qemu_fw_cfg: Make fw_cfg_rev_attr a proper kobj_attribute
Nick Desaulniers (1):
MIPS: set mips32r5 for virt extensions
Nikola Cornij (1):
drm/amd/display: Fix DCN 3.01 DSCCLK validation
Nirmoy Das (1):
drm/amdkfd: use allowed domain for vmbo validation
Odin Ugedal (1):
sched/fair: Ensure _sum and _avg values stay consistent
Pali Rohár (2):
PCI: aardvark: Fix checking for PIO Non-posted Request
PCI: aardvark: Implement workaround for the readback value of VEND_ID
Pascal Terjan (1):
rtl8xxxu: Fix device info for RTL8192EU devices
Paul Burton (2):
tracing: Simplify & fix saved_tgids logic
tracing: Resize tgid_map to pid_max, not PID_MAX_DEFAULT
Paul Cercueil (3):
MIPS: cpu-probe: Fix FPU detection on Ingenic JZ4760(B)
MIPS: ingenic: Select CPU_SUPPORTS_CPUFREQ && MIPS_EXTERNAL_TIMER
MIPS: MT extensions are not available on MIPS32r1
Paul M Stillwell Jr (1):
ice: fix clang warning regarding deadcode.DeadStores
Pavel Begunkov (1):
io_uring: fix false WARN_ONCE
Pavel Skripkin (3):
reiserfs: add check for invalid 1st journal block
media: zr364xx: fix memory leak in zr364xx_start_readpipe
jfs: fix GPF in diFree
Petr Pavlu (1):
ipmi/watchdog: Stop watchdog timer when the current action is 'none'
Ping-Ke Shih (1):
cfg80211: fix default HE tx bitrate mask in 2G band
Radim Pavlik (1):
pinctrl: mcp23s08: fix race condition in irq handler
Roman Li (1):
drm/amd/display: Update scaling settings on modeset
Russ Weight (1):
fpga: stratix10-soc: Add missing fpga_mgr_free() call
Rustam Kovhaev (1):
bpf: Fix false positive kmemleak report in bpf_ringbuf_area_alloc()
Ryder Lee (1):
mt76: mt7915: fix IEEE80211_HE_PHY_CAP7_MAX_NC for station mode
Sai Prakash Ranjan (1):
coresight: tmc-etf: Fix global-out-of-bounds in
tmc_update_etf_buffer()
Samuel Holland (1):
clocksource/arm_arch_timer: Improve Allwinner A64 timer workaround
Sean Young (1):
media, bpf: Do not copy more entries than user space requested
Sebastian Andrzej Siewior (1):
net: Treat __napi_schedule_irqoff() as __napi_schedule() on PREEMPT_RT
Shaul Triebitz (1):
iwlwifi: mvm: fix error print when session protection ends
Srinivas Pandruvada (1):
thermal/drivers/int340x/processor_thermal: Fix tcc setting
Stanley.Yang (1):
drm/amdgpu: fix bad address translation for sienna_cichlid
Steffen Klassert (1):
xfrm: Fix error reporting in xfrm_state_construct.
Tedd Ho-Jeong An (1):
Bluetooth: mgmt: Fix the command returns garbage parameter value
Tetsuo Handa (1):
smackfs: restrict bytes count in smk_set_cipso()
Thomas Gleixner (1):
cpu/hotplug: Cure the cpusets trainwreck
Thomas Hebb (1):
drm/rockchip: dsi: remove extra component_del() call
Thomas Zimmermann (3):
drm/mxsfb: Don't select DRM_KMS_FB_HELPER
drm/zte: Don't select DRM_KMS_FB_HELPER
drm/ast: Remove reference to struct drm_device.pdev
Tiezhu Yang (1):
drm/radeon: Call radeon_suspend_kms() in radeon_pci_shutdown() for
Loongson64
Tim Jiang (1):
Bluetooth: btusb: fix bt fiwmare downloading failure issue for qca
btsoc.
Timo Sigurdsson (1):
ata: ahci_sunxi: Disable DIPM
Tony Lindgren (1):
wlcore/wl12xx: Fix wl12xx get_mac error if device is in ELP
Vladimir Oltean (2):
net: mdio: provide shim implementation of devm_of_mdiobus_register
net: stmmac: the XPCS obscures a potential "PHY not found" error
Vladimir Stempen (1):
drm/amd/display: Release MST resources on switch from MST to SST
Wang Li (1):
drm/mediatek: Fix PM reference leak in mtk_crtc_ddp_hw_init()
Weilun Du (1):
mac80211_hwsim: add concurrent channels scanning support over virtio
Wesley Chalmers (2):
drm/amd/display: Set DISPCLK_MAX_ERRDET_CYCLES to 7
drm/amd/display: Fix off-by-one error in DML
Willy Tarreau (1):
ipv6: use prandom_u32() for ID generation
Wolfram Sang (1):
mmc: core: clear flags before allowing to retune
Xianting Tian (1):
virtio_net: Remove BUG() to avoid machine dead
Xiao Yang (1):
RDMA/rxe: Don't overwrite errno from ib_umem_get()
Xiaochen Shen (1):
selftests/resctrl: Fix incorrect parsing of option "-t"
Xie Yongji (2):
drm/virtio: Fix double free on probe failure
virtio-net: Add validation for used length
Yang Yingliang (10):
net: mscc: ocelot: check return value after calling
platform_get_resource()
net: bcmgenet: check return value after calling
platform_get_resource()
net: mvpp2: check return value after calling platform_get_resource()
net: micrel: check return value after calling platform_get_resource()
net: moxa: Use devm_platform_get_and_ioremap_resource()
net: sgi: ioc3-eth: check return value after calling
platform_get_resource()
fjes: check return value after calling platform_get_resource()
net: ipa: Add missing of_node_put() in ipa_firmware_load()
net: sched: fix error return code in tcf_del_walker()
io_uring: fix clear IORING_SETUP_R_DISABLED in wrong function
Yu Kuai (1):
drm: bridge: cdns-mhdp8546: Fix PM reference leak in
Yu Liu (1):
Bluetooth: Fix the HCI to MGMT status conversion table
Yuchung Cheng (1):
net: tcp better handling of reordering then loss cases
Yun Zhou (1):
seq_buf: Fix overflow in seq_buf_putmem_hex()
Zhenyu Ye (1):
arm64: tlb: fix the TTL value of tlb_get_level
Zheyu Ma (2):
atm: nicstar: use 'dma_free_coherent' instead of 'kfree'
atm: nicstar: register the interrupt handler in the right place
Zou Wei (8):
atm: iphase: fix possible use-after-free in ia_module_exit()
mISDN: fix possible use-after-free in HFC_cleanup()
atm: nicstar: Fix possible use-after-free in nicstar_cleanup()
drm/bridge: lt9611: Add missing MODULE_DEVICE_TABLE
drm/vc4: hdmi: Fix PM reference leak in vc4_hdmi_encoder_pre_crtc_co()
drm/bridge: cdns: Fix PM reference leak in cdns_dsi_transfer()
cw1200: add missing MODULE_DEVICE_TABLE
pinctrl: mcp23s08: Fix missing unlock on error in mcp23s08_irq()
gushengxian (1):
flow_offload: action should not be NULL when it is referenced
mark-yw.chen (1):
Bluetooth: btusb: Fixed too many in-token issue for Mediatek Chip.
xinhui pan (1):
drm/amdkfd: Walk through list with dqm lock hold
zhanglianjie (1):
MIPS: loongsoon64: Reserve memory below starting pfn to prevent Oops
Íñigo Huguet (2):
sfc: avoid double pci_remove of VFs
sfc: error code if SRIOV cannot be disabled
.../arm64/boot/dts/rockchip/rk3328-rock64.dts | 5 +
arch/arm64/boot/dts/rockchip/rk3328.dtsi | 19 +++
arch/arm64/include/asm/tlb.h | 4 +
arch/mips/Kconfig | 2 +
arch/mips/include/asm/cpu-features.h | 4 +-
arch/mips/include/asm/hugetlb.h | 8 +-
arch/mips/include/asm/mipsregs.h | 8 +-
arch/mips/include/asm/pgalloc.h | 10 +-
arch/mips/kernel/cpu-probe.c | 5 +
arch/mips/loongson64/numa.c | 3 +
arch/powerpc/include/asm/barrier.h | 2 +
arch/powerpc/mm/fault.c | 4 +-
arch/powerpc/platforms/powernv/vas-window.c | 9 +-
block/blk-rq-qos.c | 4 +-
drivers/ata/ahci_sunxi.c | 2 +-
drivers/atm/iphase.c | 2 +-
drivers/atm/nicstar.c | 26 ++--
drivers/bluetooth/btusb.c | 24 ++-
drivers/char/ipmi/ipmi_watchdog.c | 22 +--
drivers/clk/renesas/r8a77995-cpg-mssr.c | 1 +
drivers/clk/renesas/rcar-usb2-clock-sel.c | 24 +--
drivers/clk/tegra/clk-periph-gate.c | 72 +++++----
drivers/clk/tegra/clk-periph.c | 11 ++
drivers/clk/tegra/clk-pll.c | 9 +-
drivers/clocksource/arm_arch_timer.c | 2 +-
drivers/extcon/extcon-intel-mrfld.c | 9 ++
drivers/firmware/qemu_fw_cfg.c | 8 +-
drivers/fpga/stratix10-soc.c | 1 +
.../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 21 +--
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 11 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_umc.h | 5 +
drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 4 +-
drivers/gpu/drm/amd/amdgpu/umc_v8_7.c | 2 +-
.../drm/amd/amdkfd/kfd_device_queue_manager.c | 68 +++++----
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 24 ++-
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h | 1 +
.../amd/display/amdgpu_dm/amdgpu_dm_color.c | 41 +++++-
.../gpu/drm/amd/display/dc/core/dc_link_dp.c | 2 +
.../drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c | 9 +-
.../drm/amd/display/dc/dcn20/dcn20_hwseq.c | 2 +-
.../dc/dml/dcn30/display_mode_vba_30.c | 78 ++++------
drivers/gpu/drm/amd/display/dc/irq_types.h | 2 +-
.../gpu/drm/amd/display/modules/hdcp/hdcp.c | 1 -
.../display/modules/hdcp/hdcp1_execution.c | 4 +-
drivers/gpu/drm/amd/include/navi10_enum.h | 2 +-
drivers/gpu/drm/arm/malidp_planes.c | 9 +-
drivers/gpu/drm/ast/ast_dp501.c | 139 +++++++++++++-----
drivers/gpu/drm/ast/ast_drv.h | 12 ++
drivers/gpu/drm/ast/ast_main.c | 11 +-
.../drm/bridge/cadence/cdns-mhdp8546-core.c | 4 +-
drivers/gpu/drm/bridge/cdns-dsi.c | 2 +-
drivers/gpu/drm/bridge/lontium-lt9611.c | 1 +
drivers/gpu/drm/bridge/nwl-dsi.c | 61 +++++---
drivers/gpu/drm/drm_dp_helper.c | 7 +
drivers/gpu/drm/i915/display/intel_dp.c | 2 +-
drivers/gpu/drm/mediatek/mtk_drm_crtc.c | 2 +-
drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 2 -
drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c | 8 +-
drivers/gpu/drm/mxsfb/Kconfig | 1 -
drivers/gpu/drm/nouveau/nouveau_display.c | 1 -
drivers/gpu/drm/radeon/radeon_display.c | 1 +
drivers/gpu/drm/radeon/radeon_drv.c | 8 +-
.../gpu/drm/rockchip/dw-mipi-dsi-rockchip.c | 4 -
drivers/gpu/drm/rockchip/rockchip_vop_reg.c | 21 ++-
drivers/gpu/drm/scheduler/sched_entity.c | 8 +-
drivers/gpu/drm/scheduler/sched_main.c | 24 +++
drivers/gpu/drm/tegra/dc.c | 10 +-
drivers/gpu/drm/tegra/drm.c | 2 -
drivers/gpu/drm/vc4/vc4_crtc.c | 5 +-
drivers/gpu/drm/vc4/vc4_drv.h | 2 +-
drivers/gpu/drm/vc4/vc4_hdmi.c | 10 +-
drivers/gpu/drm/vc4/vc4_txp.c | 2 +-
drivers/gpu/drm/virtio/virtgpu_kms.c | 1 +
drivers/gpu/drm/zte/Kconfig | 1 -
drivers/hwtracing/coresight/coresight-core.c | 2 +-
.../hwtracing/coresight/coresight-tmc-etf.c | 2 +-
drivers/infiniband/core/cma.c | 3 +-
drivers/infiniband/hw/cxgb4/qp.c | 1 +
drivers/infiniband/sw/rxe/rxe_mr.c | 2 +-
drivers/infiniband/ulp/isert/ib_isert.c | 4 +-
drivers/infiniband/ulp/isert/ib_isert.h | 3 -
drivers/infiniband/ulp/rtrs/rtrs-pri.h | 13 +-
drivers/ipack/carriers/tpci200.c | 5 +-
drivers/isdn/hardware/mISDN/hfcpci.c | 2 +-
drivers/md/dm-writecache.c | 48 ++++--
drivers/md/dm-zoned-metadata.c | 7 +
drivers/md/dm.c | 8 +-
.../md/persistent-data/dm-space-map-disk.c | 9 +-
.../persistent-data/dm-space-map-metadata.c | 9 +-
drivers/media/i2c/saa6588.c | 4 +-
drivers/media/pci/bt8xx/bttv-driver.c | 6 +-
drivers/media/pci/saa7134/saa7134-video.c | 6 +-
drivers/media/platform/davinci/vpbe_display.c | 2 +-
drivers/media/platform/davinci/vpbe_venc.c | 6 +-
drivers/media/rc/bpf-lirc.c | 3 +-
drivers/media/usb/dvb-usb/dtv5100.c | 7 +-
drivers/media/usb/gspca/sq905.c | 2 +-
drivers/media/usb/gspca/sunplus.c | 8 +-
drivers/media/usb/uvc/uvc_video.c | 27 ++++
drivers/media/usb/zr364xx/zr364xx.c | 1 +
drivers/mfd/syscon.c | 2 +-
drivers/misc/lkdtm/bugs.c | 3 +
drivers/mmc/core/core.c | 7 +-
drivers/mmc/core/sd.c | 10 +-
drivers/mmc/host/sdhci-acpi.c | 11 ++
drivers/mmc/host/sdhci.c | 4 +
drivers/mmc/host/sdhci.h | 1 +
drivers/net/dsa/ocelot/seville_vsc9953.c | 5 +
drivers/net/ethernet/broadcom/genet/bcmmii.c | 4 +
drivers/net/ethernet/freescale/fec_main.c | 32 ++++
drivers/net/ethernet/intel/e100.c | 12 +-
drivers/net/ethernet/intel/i40e/i40e_ptp.c | 8 +-
drivers/net/ethernet/intel/ice/ice_ethtool.c | 6 +-
.../net/ethernet/intel/ice/ice_lan_tx_rx.h | 4 +-
drivers/net/ethernet/intel/ice/ice_type.h | 2 +-
drivers/net/ethernet/intel/igb/igb_main.c | 9 +-
drivers/net/ethernet/intel/igbvf/netdev.c | 4 +-
.../net/ethernet/marvell/mvpp2/mvpp2_main.c | 4 +
.../net/ethernet/mellanox/mlx5/core/en_rx.c | 6 +-
drivers/net/ethernet/mellanox/mlx5/core/lag.c | 19 ++-
drivers/net/ethernet/micrel/ks8842.c | 4 +
drivers/net/ethernet/moxa/moxart_ether.c | 5 +-
.../ethernet/oki-semi/pch_gbe/pch_gbe_main.c | 19 +--
drivers/net/ethernet/realtek/r8169_main.c | 1 -
drivers/net/ethernet/sfc/ef10_sriov.c | 25 ++--
drivers/net/ethernet/sgi/ioc3-eth.c | 4 +
.../net/ethernet/stmicro/stmmac/stmmac_mdio.c | 21 ++-
drivers/net/fjes/fjes_main.c | 4 +
drivers/net/ipa/ipa_main.c | 1 +
drivers/net/mdio/mdio-ipq8064.c | 33 +++--
drivers/net/phy/realtek.c | 15 +-
drivers/net/virtio_net.c | 22 ++-
drivers/net/wireless/ath/ath11k/mac.c | 4 +-
.../net/wireless/intel/iwlwifi/mvm/mac80211.c | 24 ++-
.../wireless/intel/iwlwifi/mvm/time-event.c | 4 +
.../intel/iwlwifi/pcie/ctxt-info-gen3.c | 15 +-
.../wireless/intel/iwlwifi/pcie/internal.h | 3 +
.../wireless/intel/iwlwifi/pcie/trans-gen2.c | 3 +-
drivers/net/wireless/mac80211_hwsim.c | 48 ++++--
.../net/wireless/mediatek/mt76/mt7615/mac.c | 10 +-
.../net/wireless/mediatek/mt76/mt7915/init.c | 6 +-
.../net/wireless/realtek/rtl8xxxu/rtl8xxxu.h | 11 +-
.../realtek/rtl8xxxu/rtl8xxxu_8192e.c | 59 +++++++-
drivers/net/wireless/st/cw1200/cw1200_sdio.c | 1 +
drivers/net/wireless/ti/wl1251/cmd.c | 9 +-
drivers/net/wireless/ti/wl12xx/main.c | 7 +
drivers/nvmem/core.c | 9 +-
drivers/pci/controller/pci-aardvark.c | 13 +-
drivers/pci/quirks.c | 11 ++
drivers/pinctrl/pinctrl-amd.c | 1 +
drivers/pinctrl/pinctrl-equilibrium.c | 1 +
drivers/pinctrl/pinctrl-mcp23s08.c | 10 +-
.../processor_thermal_device.c | 20 ++-
fs/f2fs/f2fs.h | 2 +
fs/f2fs/recovery.c | 23 +--
fs/f2fs/super.c | 8 +-
fs/io-wq.c | 5 +-
fs/io_uring.c | 2 +-
fs/jfs/inode.c | 3 +-
fs/reiserfs/journal.c | 14 ++
fs/udf/namei.c | 4 +
include/linux/mfd/abx500/ux500_chargalg.h | 2 +-
include/linux/netdev_features.h | 2 +-
include/linux/of_mdio.h | 7 +
include/linux/wait.h | 2 +-
include/media/v4l2-subdev.h | 4 +
include/net/flow_offload.h | 12 +-
include/net/sctp/structs.h | 2 +-
include/uapi/linux/ethtool.h | 4 +-
kernel/bpf/core.c | 61 +++++---
kernel/bpf/ringbuf.c | 2 +
kernel/cpu.c | 49 ++++++
kernel/sched/fair.c | 6 +-
kernel/sched/wait.c | 9 +-
kernel/trace/trace.c | 91 +++++++-----
lib/seq_buf.c | 4 +-
net/bluetooth/hci_core.c | 16 +-
net/bluetooth/hci_event.c | 6 +-
net/bluetooth/l2cap_core.c | 8 +-
net/bluetooth/mgmt.c | 5 +
net/bridge/br_mrp.c | 6 +-
net/core/dev.c | 11 +-
net/ipv4/ip_output.c | 32 ++--
net/ipv4/tcp_input.c | 45 +++---
net/ipv6/ip6_output.c | 32 ++--
net/ipv6/output_core.c | 28 +---
net/mac80211/sta_info.c | 11 +-
net/sched/act_api.c | 3 +-
net/sched/cls_api.c | 2 +-
net/sctp/bind_addr.c | 19 ++-
net/sctp/input.c | 8 +-
net/sctp/ipv6.c | 7 +-
net/sctp/protocol.c | 7 +-
net/sctp/sm_make_chunk.c | 29 ++--
net/vmw_vsock/af_vsock.c | 2 +-
net/wireless/nl80211.c | 9 +-
net/wireless/wext-spy.c | 14 +-
net/xfrm/xfrm_user.c | 28 ++--
security/selinux/avc.c | 13 +-
security/smack/smackfs.c | 2 +
sound/soc/tegra/tegra_alc5632.c | 1 +
sound/soc/tegra/tegra_max98090.c | 1 +
sound/soc/tegra/tegra_rt5640.c | 1 +
sound/soc/tegra/tegra_rt5677.c | 1 +
sound/soc/tegra/tegra_sgtl5000.c | 1 +
sound/soc/tegra/tegra_wm8753.c | 1 +
sound/soc/tegra/tegra_wm8903.c | 1 +
sound/soc/tegra/tegra_wm9712.c | 1 +
sound/soc/tegra/trimslice.c | 1 +
.../net/mlxsw/devlink_trap_l3_drops.sh | 3 +
.../net/mlxsw/devlink_trap_l3_exceptions.sh | 3 +
.../drivers/net/mlxsw/qos_dscp_bridge.sh | 2 +
tools/testing/selftests/lkdtm/tests.txt | 2 +-
.../selftests/net/forwarding/pedit_dsfield.sh | 2 +
.../selftests/net/forwarding/pedit_l4port.sh | 2 +
.../net/forwarding/skbedit_priority.sh | 2 +
tools/testing/selftests/resctrl/README | 2 +-
.../testing/selftests/resctrl/resctrl_tests.c | 4 +-
219 files changed, 1625 insertions(+), 773 deletions(-)
--
2.20.1
backport some bugfix patches for fs/perf/network/sched module.
Liu Jian (1):
igmp: Add ip_mc_list lock in ip_check_mc_rcu
Marco Elver (1):
kcsan: Never set up watchpoints on NULL pointers
Riccardo Mancini (3):
perf probe-file: Delete namelist in del_events() on the error path
perf test bpf: Free obj_buf
perf data: Close all files in close_dir()
Theodore Ts'o (1):
ext4: inline jbd2_journal_[un]register_shrinker()
Xiongfeng Wang (1):
ACPI / PPTT: get PPTT table in the first beginning
Zhang Yi (9):
jbd2: remove the out label in __jbd2_journal_remove_checkpoint()
jbd2: ensure abort the journal if detect IO error when writing
original buffer back
jbd2: don't abort the journal when freeing buffers
jbd2: remove redundant buffer io error checks
jbd2,ext4: add a shrinker to release checkpointed buffers
jbd2: simplify journal_clean_one_cp_list()
ext4: remove bdev_try_to_free_page() callback
fs: remove bdev_try_to_free_page callback
jbd2: export jbd2_journal_[un]register_shrinker()
Zheng Zucheng (1):
Revert "[Huawei] sched: export sched_setscheduler symbol"
arch/arm64/kernel/topology.c | 6 +-
drivers/acpi/pptt.c | 83 ++++++--------
fs/block_dev.c | 15 ---
fs/ext4/super.c | 21 ----
fs/jbd2/checkpoint.c | 206 ++++++++++++++++++++++++++++-------
fs/jbd2/journal.c | 74 +++++++++++++
fs/jbd2/transaction.c | 17 ---
include/linux/acpi.h | 1 +
include/linux/fs.h | 1 -
include/linux/jbd2.h | 35 ++++++
include/trace/events/jbd2.h | 101 +++++++++++++++++
kernel/kcsan/encoding.h | 6 +-
kernel/sched/core.c | 1 -
net/ipv4/igmp.c | 2 +
tools/perf/tests/bpf.c | 2 +
tools/perf/util/data.c | 2 +-
tools/perf/util/probe-file.c | 4 +-
17 files changed, 430 insertions(+), 147 deletions(-)
--
2.20.1
Ramaxel inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4DBD7
CVE: NA
Initial commit of the spfc module for the Ramaxel Super FC adapter
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
arch/arm64/configs/openeuler_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
drivers/scsi/Kconfig | 1 +
drivers/scsi/Makefile | 1 +
drivers/scsi/spfc/Kconfig | 17 +
drivers/scsi/spfc/Makefile | 47 +
drivers/scsi/spfc/common/unf_common.h | 1755 +++++++
drivers/scsi/spfc/common/unf_disc.c | 1276 +++++
drivers/scsi/spfc/common/unf_disc.h | 51 +
drivers/scsi/spfc/common/unf_event.c | 517 ++
drivers/scsi/spfc/common/unf_event.h | 84 +
drivers/scsi/spfc/common/unf_exchg.c | 2317 +++++++++
drivers/scsi/spfc/common/unf_exchg.h | 436 ++
drivers/scsi/spfc/common/unf_exchg_abort.c | 825 +++
drivers/scsi/spfc/common/unf_exchg_abort.h | 23 +
drivers/scsi/spfc/common/unf_fcstruct.h | 459 ++
drivers/scsi/spfc/common/unf_gs.c | 2521 +++++++++
drivers/scsi/spfc/common/unf_gs.h | 58 +
drivers/scsi/spfc/common/unf_init.c | 353 ++
drivers/scsi/spfc/common/unf_io.c | 1220 +++++
drivers/scsi/spfc/common/unf_io.h | 96 +
drivers/scsi/spfc/common/unf_io_abnormal.c | 986 ++++
drivers/scsi/spfc/common/unf_io_abnormal.h | 19 +
drivers/scsi/spfc/common/unf_log.h | 178 +
drivers/scsi/spfc/common/unf_lport.c | 1008 ++++
drivers/scsi/spfc/common/unf_lport.h | 519 ++
drivers/scsi/spfc/common/unf_ls.c | 4884 ++++++++++++++++++
drivers/scsi/spfc/common/unf_ls.h | 61 +
drivers/scsi/spfc/common/unf_npiv.c | 1005 ++++
drivers/scsi/spfc/common/unf_npiv.h | 47 +
drivers/scsi/spfc/common/unf_npiv_portman.c | 360 ++
drivers/scsi/spfc/common/unf_npiv_portman.h | 17 +
drivers/scsi/spfc/common/unf_portman.c | 2431 +++++++++
drivers/scsi/spfc/common/unf_portman.h | 96 +
drivers/scsi/spfc/common/unf_rport.c | 2286 ++++++++
drivers/scsi/spfc/common/unf_rport.h | 301 ++
drivers/scsi/spfc/common/unf_scsi.c | 1463 ++++++
drivers/scsi/spfc/common/unf_scsi_common.h | 570 ++
drivers/scsi/spfc/common/unf_service.c | 1439 ++++++
drivers/scsi/spfc/common/unf_service.h | 66 +
drivers/scsi/spfc/common/unf_type.h | 216 +
drivers/scsi/spfc/hw/spfc_chipitf.c | 1105 ++++
drivers/scsi/spfc/hw/spfc_chipitf.h | 797 +++
drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c | 1646 ++++++
drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h | 215 +
drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c | 891 ++++
drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h | 65 +
drivers/scsi/spfc/hw/spfc_cqm_main.c | 1257 +++++
drivers/scsi/spfc/hw/spfc_cqm_main.h | 414 ++
drivers/scsi/spfc/hw/spfc_cqm_object.c | 959 ++++
drivers/scsi/spfc/hw/spfc_cqm_object.h | 279 +
drivers/scsi/spfc/hw/spfc_hba.c | 1724 +++++++
drivers/scsi/spfc/hw/spfc_hba.h | 341 ++
drivers/scsi/spfc/hw/spfc_hw_wqe.h | 1645 ++++++
drivers/scsi/spfc/hw/spfc_io.c | 1193 +++++
drivers/scsi/spfc/hw/spfc_io.h | 138 +
drivers/scsi/spfc/hw/spfc_lld.c | 998 ++++
drivers/scsi/spfc/hw/spfc_lld.h | 76 +
drivers/scsi/spfc/hw/spfc_module.h | 297 ++
drivers/scsi/spfc/hw/spfc_parent_context.h | 269 +
drivers/scsi/spfc/hw/spfc_queue.c | 4857 +++++++++++++++++
drivers/scsi/spfc/hw/spfc_queue.h | 711 +++
drivers/scsi/spfc/hw/spfc_service.c | 2169 ++++++++
drivers/scsi/spfc/hw/spfc_service.h | 282 +
drivers/scsi/spfc/hw/spfc_utils.c | 102 +
drivers/scsi/spfc/hw/spfc_utils.h | 202 +
drivers/scsi/spfc/hw/spfc_wqe.c | 646 +++
drivers/scsi/spfc/hw/spfc_wqe.h | 239 +
68 files changed, 53528 insertions(+)
create mode 100644 drivers/scsi/spfc/Kconfig
create mode 100644 drivers/scsi/spfc/Makefile
create mode 100644 drivers/scsi/spfc/common/unf_common.h
create mode 100644 drivers/scsi/spfc/common/unf_disc.c
create mode 100644 drivers/scsi/spfc/common/unf_disc.h
create mode 100644 drivers/scsi/spfc/common/unf_event.c
create mode 100644 drivers/scsi/spfc/common/unf_event.h
create mode 100644 drivers/scsi/spfc/common/unf_exchg.c
create mode 100644 drivers/scsi/spfc/common/unf_exchg.h
create mode 100644 drivers/scsi/spfc/common/unf_exchg_abort.c
create mode 100644 drivers/scsi/spfc/common/unf_exchg_abort.h
create mode 100644 drivers/scsi/spfc/common/unf_fcstruct.h
create mode 100644 drivers/scsi/spfc/common/unf_gs.c
create mode 100644 drivers/scsi/spfc/common/unf_gs.h
create mode 100644 drivers/scsi/spfc/common/unf_init.c
create mode 100644 drivers/scsi/spfc/common/unf_io.c
create mode 100644 drivers/scsi/spfc/common/unf_io.h
create mode 100644 drivers/scsi/spfc/common/unf_io_abnormal.c
create mode 100644 drivers/scsi/spfc/common/unf_io_abnormal.h
create mode 100644 drivers/scsi/spfc/common/unf_log.h
create mode 100644 drivers/scsi/spfc/common/unf_lport.c
create mode 100644 drivers/scsi/spfc/common/unf_lport.h
create mode 100644 drivers/scsi/spfc/common/unf_ls.c
create mode 100644 drivers/scsi/spfc/common/unf_ls.h
create mode 100644 drivers/scsi/spfc/common/unf_npiv.c
create mode 100644 drivers/scsi/spfc/common/unf_npiv.h
create mode 100644 drivers/scsi/spfc/common/unf_npiv_portman.c
create mode 100644 drivers/scsi/spfc/common/unf_npiv_portman.h
create mode 100644 drivers/scsi/spfc/common/unf_portman.c
create mode 100644 drivers/scsi/spfc/common/unf_portman.h
create mode 100644 drivers/scsi/spfc/common/unf_rport.c
create mode 100644 drivers/scsi/spfc/common/unf_rport.h
create mode 100644 drivers/scsi/spfc/common/unf_scsi.c
create mode 100644 drivers/scsi/spfc/common/unf_scsi_common.h
create mode 100644 drivers/scsi/spfc/common/unf_service.c
create mode 100644 drivers/scsi/spfc/common/unf_service.h
create mode 100644 drivers/scsi/spfc/common/unf_type.h
create mode 100644 drivers/scsi/spfc/hw/spfc_chipitf.c
create mode 100644 drivers/scsi/spfc/hw/spfc_chipitf.h
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_main.c
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_main.h
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_object.c
create mode 100644 drivers/scsi/spfc/hw/spfc_cqm_object.h
create mode 100644 drivers/scsi/spfc/hw/spfc_hba.c
create mode 100644 drivers/scsi/spfc/hw/spfc_hba.h
create mode 100644 drivers/scsi/spfc/hw/spfc_hw_wqe.h
create mode 100644 drivers/scsi/spfc/hw/spfc_io.c
create mode 100644 drivers/scsi/spfc/hw/spfc_io.h
create mode 100644 drivers/scsi/spfc/hw/spfc_lld.c
create mode 100644 drivers/scsi/spfc/hw/spfc_lld.h
create mode 100644 drivers/scsi/spfc/hw/spfc_module.h
create mode 100644 drivers/scsi/spfc/hw/spfc_parent_context.h
create mode 100644 drivers/scsi/spfc/hw/spfc_queue.c
create mode 100644 drivers/scsi/spfc/hw/spfc_queue.h
create mode 100644 drivers/scsi/spfc/hw/spfc_service.c
create mode 100644 drivers/scsi/spfc/hw/spfc_service.h
create mode 100644 drivers/scsi/spfc/hw/spfc_utils.c
create mode 100644 drivers/scsi/spfc/hw/spfc_utils.h
create mode 100644 drivers/scsi/spfc/hw/spfc_wqe.c
create mode 100644 drivers/scsi/spfc/hw/spfc_wqe.h
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index 8345f906f5fc..0bdb678bff3a 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -7135,3 +7135,4 @@ CONFIG_ETMEM_SCAN=m
CONFIG_ETMEM_SWAP=m
CONFIG_NET_VENDOR_RAMAXEL=y
CONFIG_SPNIC=m
+CONFIG_SPFC=m
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index c1304f2e7de4..57631bbc8839 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -8514,3 +8514,4 @@ CONFIG_ETMEM_SWAP=m
CONFIG_USERSWAP=y
CONFIG_NET_VENDOR_RAMAXEL=y
CONFIG_SPNIC=m
+CONFIG_SPFC=m
diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index 0fbe4edeccd0..170d59df48d1 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -1151,6 +1151,7 @@ source "drivers/scsi/qla2xxx/Kconfig"
source "drivers/scsi/qla4xxx/Kconfig"
source "drivers/scsi/qedi/Kconfig"
source "drivers/scsi/qedf/Kconfig"
+source "drivers/scsi/spfc/Kconfig"
source "drivers/scsi/huawei/Kconfig"
config SCSI_LPFC
diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
index 78a3c832394c..299d3318fac8 100644
--- a/drivers/scsi/Makefile
+++ b/drivers/scsi/Makefile
@@ -85,6 +85,7 @@ obj-$(CONFIG_PCMCIA_QLOGIC) += qlogicfas408.o
obj-$(CONFIG_SCSI_QLOGIC_1280) += qla1280.o
obj-$(CONFIG_SCSI_QLA_FC) += qla2xxx/
obj-$(CONFIG_SCSI_QLA_ISCSI) += libiscsi.o qla4xxx/
+obj-$(CONFIG_SPFC) += spfc/
obj-$(CONFIG_SCSI_LPFC) += lpfc/
obj-$(CONFIG_SCSI_HUAWEI_FC) += huawei/
obj-$(CONFIG_SCSI_BFA_FC) += bfa/
diff --git a/drivers/scsi/spfc/Kconfig b/drivers/scsi/spfc/Kconfig
new file mode 100644
index 000000000000..1021089f355c
--- /dev/null
+++ b/drivers/scsi/spfc/Kconfig
@@ -0,0 +1,17 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Ramaxel SPFC driver configuration
+#
+
+config SPFC
+ tristate "Ramaxel Fabric Channel Host Adapter Support"
+ default m
+ depends on PCI && SCSI
+ depends on SCSI_FC_ATTRS
+ depends on ARM64 || X86_64
+ help
+ This driver supports Ramaxel Fabric Channel PCIe host adapters.
+ To compile this driver into the kernel, choose Y here; to build it
+ as a module, choose M.
+ If unsure, choose N.
+
diff --git a/drivers/scsi/spfc/Makefile b/drivers/scsi/spfc/Makefile
new file mode 100644
index 000000000000..02fe0213e048
--- /dev/null
+++ b/drivers/scsi/spfc/Makefile
@@ -0,0 +1,47 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-$(CONFIG_SPFC) += spfc.o
+
+subdir-ccflags-y += -I$(src)/../../net/ethernet/ramaxel/spnic/hw
+subdir-ccflags-y += -I$(src)/hw
+subdir-ccflags-y += -I$(src)/common
+
+spfc-objs := common/unf_init.o \
+ common/unf_event.o \
+ common/unf_exchg.o \
+ common/unf_exchg_abort.o \
+ common/unf_io.o \
+ common/unf_io_abnormal.o \
+ common/unf_lport.o \
+ common/unf_npiv.o \
+ common/unf_npiv_portman.o \
+ common/unf_disc.o \
+ common/unf_rport.o \
+ common/unf_service.o \
+ common/unf_ls.o \
+ common/unf_gs.o \
+ common/unf_portman.o \
+ common/unf_scsi.o \
+ hw/spfc_utils.o \
+ hw/spfc_lld.o \
+ hw/spfc_io.o \
+ hw/spfc_wqe.o \
+ hw/spfc_service.o \
+ hw/spfc_chipitf.o \
+ hw/spfc_queue.o \
+ hw/spfc_hba.o \
+ hw/spfc_cqm_bat_cla.o \
+ hw/spfc_cqm_bitmap_table.o \
+ hw/spfc_cqm_main.o \
+ hw/spfc_cqm_object.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_hwdev.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_hw_cfg.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_hw_comm.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_prof_adap.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_common.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_hwif.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_wq.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_cmdq.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_eqs.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_mbox.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_mgmt.o \
+ ../../net/ethernet/ramaxel/spnic/hw/sphw_api_cmd.o
diff --git a/drivers/scsi/spfc/common/unf_common.h b/drivers/scsi/spfc/common/unf_common.h
new file mode 100644
index 000000000000..bf9d156e07ce
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_common.h
@@ -0,0 +1,1755 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_COMMON_H
+#define UNF_COMMON_H
+
+#include "unf_type.h"
+#include "unf_fcstruct.h"
+
+/* version num */
+#define SPFC_DRV_VERSION "B101"
+#define SPFC_DRV_DESC "Ramaxel Memory Technology Fibre Channel Driver"
+
+#define UNF_MAX_SECTORS 0xffff
+#define UNF_ORIGIN_HOTTAG_MASK 0x7fff
+#define UNF_HOTTAG_FLAG (1 << 15)
+#define UNF_PKG_FREE_OXID 0x0
+#define UNF_PKG_FREE_RXID 0x1
+
+#define UNF_SPFC_MAXRPORT_NUM (2048)
+#define SPFC_DEFAULT_RPORT_INDEX (UNF_SPFC_MAXRPORT_NUM - 1)
+
+/* session use sq num */
+#define UNF_SQ_NUM_PER_SESSION 3
+
+extern atomic_t fc_mem_ref;
+extern u32 unf_dgb_level;
+extern u32 spfc_dif_type;
+extern u32 spfc_dif_enable;
+extern u8 spfc_guard;
+extern int link_lose_tmo;
+
+/* define bits */
+#define UNF_BIT(n) (0x1UL << (n))
+#define UNF_BIT_0 UNF_BIT(0)
+#define UNF_BIT_1 UNF_BIT(1)
+#define UNF_BIT_2 UNF_BIT(2)
+#define UNF_BIT_3 UNF_BIT(3)
+#define UNF_BIT_4 UNF_BIT(4)
+#define UNF_BIT_5 UNF_BIT(5)
+
+#define UNF_BITS_PER_BYTE 8
+
+#define UNF_NOTIFY_UP_CLEAN_FLASH 2
+
+/* Echo macro define */
+#define ECHO_MG_VERSION_LOCAL 1
+#define ECHO_MG_VERSION_REMOTE 2
+
+#define SPFC_WIN_NPIV_NUM 32
+
+#define UNF_GET_NAME_HIGH_WORD(name) (((name) >> 32) & 0xffffffff)
+#define UNF_GET_NAME_LOW_WORD(name) ((name) & 0xffffffff)
+
+#define UNF_FIRST_LPORT_ID_MASK 0xffffff00
+#define UNF_PORT_ID_MASK 0x000000ff
+#define UNF_FIRST_LPORT_ID 0x00000000
+#define UNF_SECOND_LPORT_ID 0x00000001
+#define UNF_EIGHTH_LPORT_ID 0x00000007
+#define SPFC_MAX_COUNTER_TYPE 128
+
+#define UNF_EVENT_ASYN 0
+#define UNF_EVENT_SYN 1
+#define UNF_GLOBAL_EVENT_ASYN 2
+#define UNF_GLOBAL_EVENT_SYN 3
+
+#define UNF_GET_SLOT_ID_BY_PORTID(port_id) (((port_id) & 0x001f00) >> 8)
+#define UNF_GET_FUNC_ID_BY_PORTID(port_id) ((port_id) & 0x0000ff)
+#define UNF_GET_BOARD_TYPE_AND_SLOT_ID_BY_PORTID(port_id) \
+ (((port_id) & 0x00FF00) >> 8)
+
+#define UNF_FC_SERVER_BOARD_8_G 13 /* 8G mode */
+#define UNF_FC_SERVER_BOARD_16_G 7 /* 16G mode */
+#define UNF_FC_SERVER_BOARD_32_G 6 /* 32G mode */
+
+#define UNF_PORT_TYPE_FC_QSFP 1
+#define UNF_PORT_TYPE_FC_SFP 0
+#define UNF_PORT_UNGRADE_FW_RESET_ACTIVE 0
+#define UNF_PORT_UNGRADE_FW_RESET_INACTIVE 1
+
+enum unf_rport_qos_level {
+ UNF_QOS_LEVEL_DEFAULT = 0,
+ UNF_QOS_LEVEL_MIDDLE,
+ UNF_QOS_LEVEL_HIGH,
+ UNF_QOS_LEVEL_BUTT
+};
+
+struct buff_list {
+ u8 *vaddr;
+ dma_addr_t paddr;
+};
+
+struct buf_describe {
+ struct buff_list *buflist;
+ u32 buf_size;
+ u32 buf_num;
+};
+
+#define IO_STATICS
+struct unf_port_info {
+ u32 local_nport_id;
+ u32 nport_id;
+ u32 rport_index;
+ u64 port_name;
+ enum unf_rport_qos_level qos_level;
+ u8 cs_ctrl;
+ u8 rsvd0[3];
+ u32 sqn_base;
+};
+
+struct unf_cfg_item {
+ char *puc_name;
+ u32 min_value;
+ u32 default_value;
+ u32 max_value;
+};
+
+struct unf_port_param {
+ u32 ra_tov;
+ u32 ed_tov;
+};
+
+/* get wwpn and wwnn */
+struct unf_get_chip_info_argout {
+ u8 board_type;
+ u64 wwpn;
+ u64 wwnn;
+ u64 sys_mac;
+};
+
+/* get sfp info: present and speed */
+struct unf_get_port_info_argout {
+ u8 sfp_speed;
+ u8 present;
+ u8 rsvd[2];
+};
+
+/* SFF-8436(QSFP+) Rev 4.7 */
+struct unf_sfp_plus_field_a0 {
+ u8 identifier;
+ /* offset 1~2 */
+ struct {
+ u8 reserved;
+ u8 status;
+ } status_indicator;
+ /* offset 3~21 */
+ struct {
+ u8 rx_tx_los;
+ u8 tx_fault;
+ u8 all_resv;
+
+ u8 ini_complete : 1;
+ u8 bit_resv : 3;
+ u8 temp_low_warn : 1;
+ u8 temp_high_warn : 1;
+ u8 temp_low_alarm : 1;
+ u8 temp_high_alarm : 1;
+
+ u8 resv : 4;
+ u8 vcc_low_warn : 1;
+ u8 vcc_high_warn : 1;
+ u8 vcc_low_alarm : 1;
+ u8 vcc_high_alarm : 1;
+
+ u8 resv8;
+ u8 rx_pow[2];
+ u8 tx_bias[2];
+ u8 reserved[6];
+ u8 vendor_specifics[3];
+ } interrupt_flag;
+ /* offset 22~33 */
+ struct {
+ u8 temp[2];
+ u8 reserved[2];
+ u8 supply_vol[2];
+ u8 reserveds[2];
+ u8 vendor_specific[4];
+ } module_monitors;
+ /* offset 34~81 */
+ struct {
+ u8 rx_pow[8];
+ u8 tx_bias[8];
+ u8 reserved[16];
+ u8 vendor_specific[16];
+ } channel_monitor_val;
+
+ /* offset 82~85 */
+ u8 reserved[4];
+
+ /* offset 86~97 */
+ struct {
+ /* 86~88 */
+ u8 tx_disable;
+ u8 rx_rate_select;
+ u8 tx_rate_select;
+
+ /* 89~92 */
+ u8 rx4_app_select;
+ u8 rx3_app_select;
+ u8 rx2_app_select;
+ u8 rx1_app_select;
+ /* 93 */
+ u8 power_override : 1;
+ u8 power_set : 1;
+ u8 reserved : 6;
+
+ /* 94~97 */
+ u8 tx4_app_select;
+ u8 tx3_app_select;
+ u8 tx2_app_select;
+ u8 tx1_app_select;
+ /* 98~99 */
+ u8 reserved2[2];
+ } control;
+ /* 100~106 */
+ struct {
+ /* 100 */
+ u8 m_rx1_los : 1;
+ u8 m_rx2_los : 1;
+ u8 m_rx3_los : 1;
+ u8 m_rx4_los : 1;
+ u8 m_tx1_los : 1;
+ u8 m_tx2_los : 1;
+ u8 m_tx3_los : 1;
+ u8 m_tx4_los : 1;
+ /* 101 */
+ u8 m_tx1_fault : 1;
+ u8 m_tx2_fault : 1;
+ u8 m_tx3_fault : 1;
+ u8 m_tx4_fault : 1;
+ u8 reserved : 4;
+ /* 102 */
+ u8 reserved1;
+ /* 103 */
+ u8 mini_cmp_flag : 1;
+ u8 rsv : 3;
+ u8 m_temp_low_warn : 1;
+ u8 m_temp_high_warn : 1;
+ u8 m_temp_low_alarm : 1;
+ u8 m_temp_high_alarm : 1;
+ /* 104 */
+ u8 rsv1 : 4;
+ u8 m_vcc_low_warn : 1;
+ u8 m_vcc_high_warn : 1;
+ u8 m_vcc_low_alarm : 1;
+ u8 m_vcc_high_alarm : 1;
+ /* 105~106 */
+ u8 vendor_specific[2];
+ } module_channel_mask_bit;
+ /* 107~118 */
+ u8 resv[12];
+ /* 119~126 */
+ u8 password_reserved[8];
+ /* 127 */
+ u8 page_select;
+};
+
+/* page 00 */
+struct unf_sfp_plus_field_00 {
+ /* 128~191 */
+ struct {
+ u8 id;
+ u8 id_ext;
+ u8 connector;
+ u8 speci_com[6];
+ u8 mode;
+ u8 speed;
+ u8 encoding;
+ u8 br_nominal;
+ u8 ext_rate_select_com;
+ u8 length_smf;
+ u8 length_om3;
+ u8 length_om2;
+ u8 length_om1;
+ u8 length_copper;
+ u8 device_tech;
+ u8 vendor_name[16];
+ u8 ex_module;
+ u8 vendor_oui[3];
+ u8 vendor_pn[16];
+ u8 vendor_rev[2];
+ /* Wavelength or copper cable attenuation */
+ u8 wave_or_copper_attenuation[2];
+ u8 wave_length_toler[2]; /* Wavelength tolerance */
+ u8 max_temp;
+ u8 cc_base;
+ } base_id_fields;
+ /* 192~223 */
+ struct {
+ u8 options[4];
+ u8 vendor_sn[16];
+ u8 date_code[8];
+ u8 diagn_monit_type;
+ u8 enhance_opt;
+ u8 reserved;
+ u8 ccext;
+ } ext_id_fields;
+ /* 224~255 */
+ u8 vendor_spec_eeprom[32];
+};
+
+/* page 01 */
+struct unf_sfp_plus_field_01 {
+ u8 optional01[128];
+};
+
+/* page 02 */
+struct unf_sfp_plus_field_02 {
+ u8 optional02[128];
+};
+
+/* page 03 */
+struct unf_sfp_plus_field_03 {
+ u8 temp_high_alarm[2];
+ u8 temp_low_alarm[2];
+ u8 temp_high_warn[2];
+ u8 temp_low_warn[2];
+
+ u8 reserved1[8];
+
+ u8 vcc_high_alarm[2];
+ u8 vcc_low_alarm[2];
+ u8 vcc_high_warn[2];
+ u8 vcc_low_warn[2];
+
+ u8 reserved2[8];
+ u8 vendor_specific1[16];
+
+ u8 pow_high_alarm[2];
+ u8 pow_low_alarm[2];
+ u8 pow_high_warn[2];
+ u8 pow_low_warn[2];
+
+ u8 bias_high_alarm[2];
+ u8 bias_low_alarm[2];
+ u8 bias_high_warn[2];
+ u8 bias_low_warn[2];
+
+ u8 tx_power_high_alarm[2];
+ u8 tx_power_low_alarm[2];
+ u8 reserved3[4];
+
+ u8 reserved4[8];
+
+ u8 vendor_specific2[16];
+ u8 reserved5[2];
+ u8 vendor_specific3[12];
+ u8 rx_ampl[2];
+ u8 rx_tx_sq_disable;
+ u8 rx_output_disable;
+ u8 chan_monit_mask[12];
+ u8 reserved6[2];
+};
+
+struct unf_sfp_plus_info {
+ struct unf_sfp_plus_field_a0 sfp_plus_info_a0;
+ struct unf_sfp_plus_field_00 sfp_plus_info_00;
+ struct unf_sfp_plus_field_01 sfp_plus_info_01;
+ struct unf_sfp_plus_field_02 sfp_plus_info_02;
+ struct unf_sfp_plus_field_03 sfp_plus_info_03;
+};
+
+struct unf_sfp_data_field_a0 {
+ /* Offset 0~63 */
+ struct {
+ u8 id;
+ u8 id_ext;
+ u8 connector;
+ u8 transceiver[8];
+ u8 encoding;
+ u8 br_nominal; /* Nominal signalling rate, units of 100MBd. */
+ u8 rate_identifier; /* Type of rate select functionality */
+ /* Link length supported for single mode fiber, units of km */
+ u8 length_smk_km;
+ /* Link length supported for single mode fiber,
+ *units of 100 m
+ */
+ u8 length_smf;
+ /* Link length supported for 50 um OM2 fiber,units of 10 m */
+ u8 length_smf_om2;
+ /* Link length supported for 62.5 um OM1 fiber, units of 10 m */
+ u8 length_smf_om1;
+ /*Link length supported for copper/direct attach cable,
+ *units of m
+ */
+ u8 length_cable;
+ /* Link length supported for 50 um OM3 fiber, units of 10m */
+ u8 length_om3;
+ u8 vendor_name[16]; /* ASCII */
+ /* Code for electronic or optical compatibility*/
+ u8 transceiver2;
+ u8 vendor_oui[3]; /* SFP vendor IEEE company ID */
+ u8 vendor_pn[16]; /* Part number provided by SFP vendor (ASCII) */
+ /* Revision level for part number provided by vendor (ASCII) */
+ u8 vendor_rev[4];
+ /* Laser wavelength (Passive/Active Cable
+ *Specification Compliance)
+ */
+ u8 wave_length[2];
+ u8 unallocated;
+ /* Check code for Base ID Fields (addresses 0 to 62)*/
+ u8 cc_base;
+ } base_id_fields;
+
+ /* Offset 64~95 */
+ struct {
+ u8 options[2];
+ u8 br_max;
+ u8 br_min;
+ u8 vendor_sn[16];
+ u8 date_code[8];
+ u8 diag_monitoring_type;
+ u8 enhanced_options;
+ u8 sff8472_compliance;
+ u8 cc_ext;
+ } ext_id_fields;
+
+ /* Offset 96~255 */
+ struct {
+ u8 vendor_spec_eeprom[32];
+ u8 rsvd[128];
+ } vendor_spec_id_fields;
+};
+
+struct unf_sfp_data_field_a2 {
+ /* Offset 0~119 */
+ struct {
+ /* 0~39 */
+ struct {
+ u8 temp_alarm_high[2];
+ u8 temp_alarm_low[2];
+ u8 temp_warning_high[2];
+ u8 temp_warning_low[2];
+
+ u8 vcc_alarm_high[2];
+ u8 vcc_alarm_low[2];
+ u8 vcc_warning_high[2];
+ u8 vcc_warning_low[2];
+
+ u8 bias_alarm_high[2];
+ u8 bias_alarm_low[2];
+ u8 bias_warning_high[2];
+ u8 bias_warning_low[2];
+
+ u8 tx_alarm_high[2];
+ u8 tx_alarm_low[2];
+ u8 tx_warning_high[2];
+ u8 tx_warning_low[2];
+
+ u8 rx_alarm_high[2];
+ u8 rx_alarm_low[2];
+ u8 rx_warning_high[2];
+ u8 rx_warning_low[2];
+ } alarm_warn_th;
+
+ u8 unallocated0[16];
+ u8 ext_cal_constants[36];
+ u8 unallocated1[3];
+ u8 cc_dmi;
+
+ /* 96~105 */
+ struct {
+ u8 temp[2];
+ u8 vcc[2];
+ u8 tx_bias[2];
+ u8 tx_power[2];
+ u8 rx_power[2];
+ } diag;
+
+ u8 unallocated2[4];
+
+ struct {
+ u8 data_rdy_bar_state : 1;
+ u8 rx_los : 1;
+ u8 tx_fault_state : 1;
+ u8 soft_rate_select_state : 1;
+ u8 rate_select_state : 1;
+ u8 rs_state : 1;
+ u8 soft_tx_disable_select : 1;
+ u8 tx_disable_state : 1;
+ } status_ctrl;
+ u8 rsvd;
+
+ /* 112~113 */
+ struct {
+ /* 112 */
+ u8 tx_alarm_low : 1;
+ u8 tx_alarm_high : 1;
+ u8 tx_bias_alarm_low : 1;
+ u8 tx_bias_alarm_high : 1;
+ u8 vcc_alarm_low : 1;
+ u8 vcc_alarm_high : 1;
+ u8 temp_alarm_low : 1;
+ u8 temp_alarm_high : 1;
+
+ /* 113 */
+ u8 rsvd : 6;
+ u8 rx_alarm_low : 1;
+ u8 rx_alarm_high : 1;
+ } alarm;
+
+ u8 unallocated3[2];
+
+ /* 116~117 */
+ struct {
+ /* 116 */
+ u8 tx_warn_lo : 1;
+ u8 tx_warn_hi : 1;
+ u8 bias_warn_lo : 1;
+ u8 bias_warn_hi : 1;
+ u8 vcc_warn_lo : 1;
+ u8 vcc_warn_hi : 1;
+ u8 temp_warn_lo : 1;
+ u8 temp_warn_hi : 1;
+
+ /* 117 */
+ u8 rsvd : 6;
+ u8 rx_warn_lo : 1;
+ u8 rx_warn_hi : 1;
+ } warning;
+
+ u8 ext_status_and_ctrl[2];
+ } diag;
+
+ /* Offset 120~255 */
+ struct {
+ u8 vendor_spec[8];
+ u8 user_eeprom[120];
+ u8 vendor_ctrl[8];
+ } general_use_fields;
+};
+
+struct unf_sfp_info {
+ struct unf_sfp_data_field_a0 sfp_info_a0;
+ struct unf_sfp_data_field_a2 sfp_info_a2;
+};
+
+struct unf_sfp_err_rome_info {
+ struct unf_sfp_info sfp_info;
+ struct unf_sfp_plus_info sfp_plus_info;
+};
+
+struct unf_err_code {
+ u32 loss_of_signal_count;
+ u32 bad_rx_char_count;
+ u32 loss_of_sync_count;
+ u32 link_fail_count;
+ u32 rx_eof_a_count;
+ u32 dis_frame_count;
+ u32 bad_crc_count;
+ u32 proto_error_count;
+};
+
+/* config file */
+enum unf_port_mode {
+ UNF_PORT_MODE_UNKNOWN = 0x00,
+ UNF_PORT_MODE_TGT = 0x10,
+ UNF_PORT_MODE_INI = 0x20,
+ UNF_PORT_MODE_BOTH = 0x30
+};
+
+enum unf_port_upgrade {
+ UNF_PORT_UNSUPPORT_UPGRADE_REPORT = 0x00,
+ UNF_PORT_SUPPORT_UPGRADE_REPORT = 0x01,
+ UNF_PORT_UPGRADE_BUTT
+};
+
+#define UNF_BYTES_OF_DWORD 0x4
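+/* Convert a big-endian buffer to CPU byte order in place. The buffer is
+ * only converted when size is a whole number of 32-bit words; otherwise it
+ * is left untouched.
+ */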
+static inline void __attribute__((unused)) unf_big_end_to_cpu(u8 *buffer, u32 size)
+{
+ u32 *buf = NULL;
+ u32 word_sum = 0;
+ u32 index = 0;
+
+ if (!buffer)
+ return;
+
+ buf = (u32 *)buffer;
+
+ /* byte to word */
+ if (size % UNF_BYTES_OF_DWORD == 0)
+ word_sum = size / UNF_BYTES_OF_DWORD;
+ else
+ return;
+
+ /* word to byte */
+ while (index < word_sum) {
+ *buf = be32_to_cpu(*buf);
+ buf++;
+ index++;
+ }
+}
+
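+/* Convert a CPU-order buffer to big-endian in place. A trailing partial
+ * word (size not a multiple of 4 bytes) is converted and copied back
+ * byte by byte.
+ */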
+static inline void __attribute__((unused)) unf_cpu_to_big_end(void *buffer, u32 size)
+{
+#define DWORD_BIT 32
+#define BYTE_BIT 8
+ u32 *buf = NULL;
+ u32 word_sum = 0;
+ u32 index = 0;
+ u32 tmp = 0;
+
+ if (!buffer)
+ return;
+
+ buf = (u32 *)buffer;
+
+ /* byte to dword */
+ word_sum = size / UNF_BYTES_OF_DWORD;
+
+ /* dword to byte */
+ while (index < word_sum) {
+ *buf = cpu_to_be32(*buf);
+ buf++;
+ index++;
+ }
+
+ if (size % UNF_BYTES_OF_DWORD) {
+ tmp = cpu_to_be32(*buf);
+ tmp =
+ tmp >> (DWORD_BIT - (size % UNF_BYTES_OF_DWORD) * BYTE_BIT);
+ memcpy(buf, &tmp, (size % UNF_BYTES_OF_DWORD));
+ }
+}
+
+#define UNF_TOP_AUTO_MASK 0x0f
+#define UNF_TOP_UNKNOWN 0xff
+#define SPFC_TOP_AUTO 0x0
+
+#define UNF_NORMAL_MODE 0
+#define UNF_SET_NOMAL_MODE(mode) ((mode) = UNF_NORMAL_MODE)
+
+/*
+ * SCSI status
+ */
+#define SCSI_GOOD 0x00
+#define SCSI_CHECK_CONDITION 0x02
+#define SCSI_CONDITION_MET 0x04
+#define SCSI_BUSY 0x08
+#define SCSI_INTERMEDIATE 0x10
+#define SCSI_INTERMEDIATE_COND_MET 0x14
+#define SCSI_RESERVATION_CONFLICT 0x18
+#define SCSI_TASK_SET_FULL 0x28
+#define SCSI_ACA_ACTIVE 0x30
+#define SCSI_TASK_ABORTED 0x40
+
+enum unf_act_topo {
+ UNF_ACT_TOP_PUBLIC_LOOP = 0x1,
+ UNF_ACT_TOP_PRIVATE_LOOP = 0x2,
+ UNF_ACT_TOP_P2P_DIRECT = 0x4,
+ UNF_ACT_TOP_P2P_FABRIC = 0x8,
+ UNF_TOP_LOOP_MASK = 0x03,
+ UNF_TOP_P2P_MASK = 0x0c,
+ UNF_TOP_FCOE_MASK = 0x30,
+ UNF_ACT_TOP_UNKNOWN
+};
+
+#define UNF_FL_PORT_LOOP_ADDR 0x00
+#define UNF_INVALID_LOOP_ADDR 0xff
+
+#define UNF_LOOP_ROLE_MASTER_OR_SLAVE 0x0
+#define UNF_LOOP_ROLE_ONLY_SLAVE 0x1
+
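+/* Narrow src to a u16 in dest. If src does not fit in 16 bits, log an error
+ * and execute the caller-supplied over_action statement (e.g. a return or
+ * goto).
+ */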
+#define UNF_TOU16_CHECK(dest, src, over_action) \
+ do { \
+ if (unlikely(0xFFFF < (src))) { \
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, \
+ UNF_ERR, "ToU16 error, src 0x%x ", \
+ (src)); \
+ over_action; \
+ } \
+ ((dest) = (u16)(src)); \
+ } while (0)
+
+#define UNF_PORT_SPEED_AUTO 0
+#define UNF_PORT_SPEED_2_G 2
+#define UNF_PORT_SPEED_4_G 4
+#define UNF_PORT_SPEED_8_G 8
+#define UNF_PORT_SPEED_10_G 10
+#define UNF_PORT_SPEED_16_G 16
+#define UNF_PORT_SPEED_32_G 32
+
+#define UNF_PORT_SPEED_UNKNOWN (~0)
+#define UNF_PORT_SFP_SPEED_ERR 0xFF
+
+#define UNF_OP_DEBUG_DUMP 0x0001
+#define UNF_OP_FCPORT_INFO 0x0002
+#define UNF_OP_FCPORT_LINK_CMD_TEST 0x0003
+#define UNF_OP_TEST_MBX 0x0004
+
+/* max frame size */
+#define UNF_MAX_FRAME_SIZE 2112
+
+/* default */
+#define UNF_DEFAULT_FRAME_SIZE 2048
+#define UNF_DEFAULT_EDTOV 2000
+#define UNF_DEFAULT_RATOV 10000
+#define UNF_DEFAULT_FABRIC_RATOV 10000
+#define UNF_MAX_RETRY_COUNT 3
+#define UNF_RRQ_MIN_TIMEOUT_INTERVAL 30000
+#define UNF_LOGO_TIMEOUT_INTERVAL 3000
+#define UNF_SFS_MIN_TIMEOUT_INTERVAL 15000
+#define UNF_WRITE_RRQ_SENDERR_INTERVAL 3000
+#define UNF_REC_TOV 3000
+
+#define UNF_WAIT_SEM_TIMEOUT (5000UL)
+#define UNF_WAIT_ABTS_RSP_TIMEOUT (20000UL)
+#define UNF_MAX_ABTS_WAIT_INTERVAL ((UNF_WAIT_SEM_TIMEOUT - 500) / 1000)
+
+#define UNF_TGT_RRQ_REDUNDANT_TIME 2000
+#define UNF_INI_RRQ_REDUNDANT_TIME 500
+#define UNF_INI_ELS_REDUNDANT_TIME 2000
+
+/* ELS command values */
+#define UNF_ELS_CMND_HIGH_MASK 0xff000000
+#define UNF_ELS_CMND_RJT 0x01000000
+#define UNF_ELS_CMND_ACC 0x02000000
+#define UNF_ELS_CMND_PLOGI 0x03000000
+#define UNF_ELS_CMND_FLOGI 0x04000000
+#define UNF_ELS_CMND_LOGO 0x05000000
+#define UNF_ELS_CMND_RLS 0x0F000000
+#define UNF_ELS_CMND_ECHO 0x10000000
+#define UNF_ELS_CMND_REC 0x13000000
+#define UNF_ELS_CMND_RRQ 0x12000000
+#define UNF_ELS_CMND_PRLI 0x20000000
+#define UNF_ELS_CMND_PRLO 0x21000000
+#define UNF_ELS_CMND_PDISC 0x50000000
+#define UNF_ELS_CMND_FDISC 0x51000000
+#define UNF_ELS_CMND_ADISC 0x52000000
+#define UNF_ELS_CMND_FAN 0x60000000
+#define UNF_ELS_CMND_RSCN 0x61000000
+#define UNF_FCP_CMND_SRR 0x14000000
+#define UNF_GS_CMND_SCR 0x62000000
+
+#define UNF_PLOGI_VERSION_UPPER 0x20
+#define UNF_PLOGI_VERSION_LOWER 0x20
+#define UNF_PLOGI_CONCURRENT_SEQ 0x00FF
+#define UNF_PLOGI_RO_CATEGORY 0x00FE
+#define UNF_PLOGI_SEQ_PER_XCHG 0x0001
+#define UNF_LGN_INFRAMESIZE 2048
+
+/* CT_IU pream defines */
+#define UNF_REV_NPORTID_INIT 0x01000000
+#define UNF_FSTYPE_OPT_INIT 0xfc020000
+#define UNF_FSTYPE_RFT_ID 0x02170000
+#define UNF_FSTYPE_GID_PT 0x01A10000
+#define UNF_FSTYPE_GID_FT 0x01710000
+#define UNF_FSTYPE_RFF_ID 0x021F0000
+#define UNF_FSTYPE_GFF_ID 0x011F0000
+#define UNF_FSTYPE_GNN_ID 0x01130000
+#define UNF_FSTYPE_GPN_ID 0x01120000
+
+#define UNF_CT_IU_RSP_MASK 0xffff0000
+#define UNF_CT_IU_REASON_MASK 0x00ff0000
+#define UNF_CT_IU_EXPLAN_MASK 0x0000ff00
+#define UNF_CT_IU_REJECT 0x80010000
+#define UNF_CT_IU_ACCEPT 0x80020000
+
+#define UNF_FABRIC_FULL_REG 0x00000003
+
+#define UNF_FC4_SCSI_BIT8 0x00000100
+#define UNF_FC4_FCP_TYPE 0x00000008
+#define UNF_FRAG_REASON_VENDOR 0
+
+/* GID_PT, GID_FT */
+#define UNF_GID_PT_TYPE 0x7F000000
+#define UNF_GID_FT_TYPE 0x00000008
+
+/*
+ * FC4 defines
+ */
+#define UNF_FC4_FRAME_PAGE_SIZE 0x10
+#define UNF_FC4_FRAME_PAGE_SIZE_SHIFT 16
+
+#define UNF_FC4_FRAME_PARM_0_FCP 0x08000000
+#define UNF_FC4_FRAME_PARM_0_I_PAIR 0x00002000
+#define UNF_FC4_FRAME_PARM_0_GOOD_RSP_CODE 0x00000100
+#define UNF_FC4_FRAME_PARM_0_MASK \
+ (UNF_FC4_FRAME_PARM_0_FCP | UNF_FC4_FRAME_PARM_0_I_PAIR | \
+ UNF_FC4_FRAME_PARM_0_GOOD_RSP_CODE)
+#define UNF_FC4_FRAME_PARM_3_INI 0x00000020
+#define UNF_FC4_FRAME_PARM_3_TGT 0x00000010
+#define UNF_FC4_FRAME_PARM_3_BOTH \
+ (UNF_FC4_FRAME_PARM_3_INI | UNF_FC4_FRAME_PARM_3_TGT)
+#define UNF_FC4_FRAME_PARM_3_R_XFER_DIS 0x00000002
+#define UNF_FC4_FRAME_PARM_3_W_XFER_DIS 0x00000001
+#define UNF_FC4_FRAME_PARM_3_REC_SUPPORT 0x00000400 /* bit 10 */
+#define UNF_FC4_FRAME_PARM_3_TASK_RETRY_ID_SUPPORT 0x00000200 /* bit 9 */
+#define UNF_FC4_FRAME_PARM_3_RETRY_SUPPORT 0x00000100 /* bit 8 */
+#define UNF_FC4_FRAME_PARM_3_CONF_ALLOW 0x00000080 /* bit 7 */
+
+#define UNF_FC4_FRAME_PARM_3_MASK \
+ (UNF_FC4_FRAME_PARM_3_INI | UNF_FC4_FRAME_PARM_3_TGT | \
+ UNF_FC4_FRAME_PARM_3_R_XFER_DIS)
+
+#define UNF_FC4_TYPE_SHIFT 24
+#define UNF_FC4_TYPE_MASK 0xff
+/* FC4 feature we support */
+#define UNF_GFF_ACC_MASK 0xFF000000
+
+/* Reject CT_IU Reason Codes */
+#define UNF_CTIU_RJT_MASK 0xffff0000
+#define UNF_CTIU_RJT_INVALID_COMMAND 0x00010000
+#define UNF_CTIU_RJT_INVALID_VERSION 0x00020000
+#define UNF_CTIU_RJT_LOGIC_ERR 0x00030000
+#define UNF_CTIU_RJT_INVALID_SIZE 0x00040000
+#define UNF_CTIU_RJT_LOGIC_BUSY 0x00050000
+#define UNF_CTIU_RJT_PROTOCOL_ERR 0x00070000
+#define UNF_CTIU_RJT_UNABLE_PERFORM 0x00090000
+#define UNF_CTIU_RJT_NOT_SUPPORTED 0x000B0000
+
+/* FS_RJT Reason code explanations, FC-GS-2 6.5 */
+#define UNF_CTIU_RJT_EXP_MASK 0x0000FF00
+#define UNF_CTIU_RJT_EXP_NO_ADDTION 0x00000000
+#define UNF_CTIU_RJT_EXP_PORTID_NO_REG 0x00000100
+#define UNF_CTIU_RJT_EXP_PORTNAME_NO_REG 0x00000200
+#define UNF_CTIU_RJT_EXP_NODENAME_NO_REG 0x00000300
+#define UNF_CTIU_RJT_EXP_FC4TYPE_NO_REG 0x00000700
+#define UNF_CTIU_RJT_EXP_PORTTYPE_NO_REG 0x00000A00
+
+/*
+ * LS_RJT defines
+ */
+#define UNF_FC_LS_RJT_REASON_MASK 0x00ff0000
+
+/*
+ * LS_RJT reason code defines
+ */
+#define UNF_LS_OK 0x00000000
+#define UNF_LS_RJT_INVALID_COMMAND 0x00010000
+#define UNF_LS_RJT_LOGICAL_ERROR 0x00030000
+#define UNF_LS_RJT_BUSY 0x00050000
+#define UNF_LS_RJT_PROTOCOL_ERROR 0x00070000
+#define UNF_LS_RJT_REQUEST_DENIED 0x00090000
+#define UNF_LS_RJT_NOT_SUPPORTED 0x000b0000
+#define UNF_LS_RJT_CLASS_ERROR 0x000c0000
+
+/*
+ * LS_RJT code explanation
+ */
+#define UNF_LS_RJT_NO_ADDITIONAL_INFO 0x00000000
+#define UNF_LS_RJT_INV_DATA_FIELD_SIZE 0x00000700
+#define UNF_LS_RJT_INV_COMMON_SERV_PARAM 0x00000F00
+#define UNF_LS_RJT_INVALID_OXID_RXID 0x00001700
+#define UNF_LS_RJT_COMMAND_IN_PROGRESS 0x00001900
+#define UNF_LS_RJT_INSUFFICIENT_RESOURCES 0x00002900
+#define UNF_LS_RJT_COMMAND_NOT_SUPPORTED 0x00002C00
+#define UNF_LS_RJT_UNABLE_TO_SUPLY_REQ_DATA 0x00002A00
+#define UNF_LS_RJT_INVALID_PAYLOAD_LENGTH 0x00002D00
+
+#define UNF_P2P_LOCAL_NPORT_ID 0x000000EF
+#define UNF_P2P_REMOTE_NPORT_ID 0x000000D6
+
+#define UNF_BBCREDIT_MANAGE_NFPORT 0
+#define UNF_BBCREDIT_MANAGE_LPORT 1
+#define UNF_BBCREDIT_LPORT 0
+#define UNF_CONTIN_INCREASE_SUPPORT 1
+#define UNF_CLASS_VALID 1
+#define UNF_CLASS_INVALID 0
+#define UNF_NOT_MEANINGFUL 0
+#define UNF_NO_SERVICE_PARAMS 0
+#define UNF_CLEAN_ADDRESS_DEFAULT 0
+#define UNF_PRIORITY_ENABLE 1
+#define UNF_PRIORITY_DISABLE 0
+#define UNF_SEQUEN_DELIVERY_REQ 1 /* Sequential delivery requested */
+
+#define UNF_FC_PROTOCOL_CLASS_3 0x0
+#define UNF_FC_PROTOCOL_CLASS_2 0x1
+#define UNF_FC_PROTOCOL_CLASS_1 0x2
+#define UNF_FC_PROTOCOL_CLASS_F 0x3
+#define UNF_FC_PROTOCOL_CLASS_OTHER 0x4
+
+#define UNF_RSCN_PORT_ADDR 0x0
+#define UNF_RSCN_AREA_ADDR_GROUP 0x1
+#define UNF_RSCN_DOMAIN_ADDR_GROUP 0x2
+#define UNF_RSCN_FABRIC_ADDR_GROUP 0x3
+
+#define UNF_GET_RSCN_PLD_LEN(cmnd) ((cmnd) & 0x0000ffff)
+#define UNF_RSCN_PAGE_LEN 0x4
+
+#define UNF_PORT_LINK_UP 0x0000
+#define UNF_PORT_LINK_DOWN 0x0001
+#define UNF_PORT_RESET_START 0x0002
+#define UNF_PORT_RESET_END 0x0003
+#define UNF_PORT_LINK_UNKNOWN 0x0004
+#define UNF_PORT_NOP 0x0005
+#define UNF_PORT_CORE_FATAL_ERROR 0x0006
+#define UNF_PORT_CORE_UNRECOVERABLE_ERROR 0x0007
+#define UNF_PORT_CORE_RECOVERABLE_ERROR 0x0008
+#define UNF_PORT_LOGOUT 0x0009
+#define UNF_PORT_CLEAR_VLINK 0x000a
+#define UNF_PORT_UPDATE_PROCESS 0x000b
+#define UNF_PORT_DEBUG_DUMP 0x000c
+#define UNF_PORT_GET_FWLOG 0x000d
+#define UNF_PORT_CLEAN_DONE 0x000e
+#define UNF_PORT_BEGIN_REMOVE 0x000f
+#define UNF_PORT_RELEASE_RPORT_INDEX 0x0010
+#define UNF_PORT_ABNORMAL_RESET 0x0012
+
+/*
+ * SCSI begin
+ */
+#define SCSIOPC_TEST_UNIT_READY 0x00
+#define SCSIOPC_INQUIRY 0x12
+#define SCSIOPC_MODE_SENSE_6 0x1A
+#define SCSIOPC_MODE_SENSE_10 0x5A
+#define SCSIOPC_MODE_SELECT_6 0x15
+#define SCSIOPC_RESERVE 0x16
+#define SCSIOPC_RELEASE 0x17
+#define SCSIOPC_START_STOP_UNIT 0x1B
+#define SCSIOPC_READ_CAPACITY_10 0x25
+#define SCSIOPC_READ_CAPACITY_16 0x9E
+#define SCSIOPC_READ_6 0x08
+#define SCSIOPC_READ_10 0x28
+#define SCSIOPC_READ_12 0xA8
+#define SCSIOPC_READ_16 0x88
+#define SCSIOPC_WRITE_6 0x0A
+#define SCSIOPC_WRITE_10 0x2A
+#define SCSIOPC_WRITE_12 0xAA
+#define SCSIOPC_WRITE_16 0x8A
+#define SCSIOPC_WRITE_VERIFY 0x2E
+#define SCSIOPC_VERIFY_10 0x2F
+#define SCSIOPC_VERIFY_12 0xAF
+#define SCSIOPC_VERIFY_16 0x8F
+#define SCSIOPC_REQUEST_SENSE 0x03
+#define SCSIOPC_REPORT_LUN 0xA0
+#define SCSIOPC_FORMAT_UNIT 0x04
+#define SCSIOPC_SEND_DIAGNOSTIC 0x1D
+#define SCSIOPC_WRITE_SAME_10 0x41
+#define SCSIOPC_WRITE_SAME_16 0x93
+#define SCSIOPC_READ_BUFFER 0x3C
+#define SCSIOPC_WRITE_BUFFER 0x3B
+
+#define SCSIOPC_LOG_SENSE 0x4D
+#define SCSIOPC_MODE_SELECT_10 0x55
+#define SCSIOPC_SYNCHRONIZE_CACHE_10 0x35
+#define SCSIOPC_SYNCHRONIZE_CACHE_16 0x91
+#define SCSIOPC_WRITE_AND_VERIFY_10 0x2E
+#define SCSIOPC_WRITE_AND_VERIFY_12 0xAE
+#define SCSIOPC_WRITE_AND_VERIFY_16 0x8E
+#define SCSIOPC_READ_MEDIA_SERIAL_NUMBER 0xAB
+#define SCSIOPC_REASSIGN_BLOCKS 0x07
+#define SCSIOPC_ATA_PASSTHROUGH_16 0x85
+#define SCSIOPC_ATA_PASSTHROUGH_12 0xa1
+
+/*
+ * SCSI end
+ */
+#define IS_READ_COMMAND(opcode) \
+ ((opcode) == SCSIOPC_READ_6 || (opcode) == SCSIOPC_READ_10 || \
+ (opcode) == SCSIOPC_READ_12 || (opcode) == SCSIOPC_READ_16)
+#define IS_WRITE_COMMAND(opcode) \
+ ((opcode) == SCSIOPC_WRITE_6 || (opcode) == SCSIOPC_WRITE_10 || \
+ (opcode) == SCSIOPC_WRITE_12 || (opcode) == SCSIOPC_WRITE_16)
+
+#define IS_VERIFY_COMMAND(opcode) \
+ ((opcode) == SCSIOPC_VERIFY_10 || (opcode) == SCSIOPC_VERIFY_12 || \
+ (opcode) == SCSIOPC_VERIFY_16)
+
+#define FCP_RSP_LEN_VALID_MASK 0x1
+#define FCP_SNS_LEN_VALID_MASK 0x2
+#define FCP_RESID_OVER_MASK 0x4
+#define FCP_RESID_UNDER_MASK 0x8
+#define FCP_CONF_REQ_MASK 0x10
+#define FCP_SCSI_STATUS_GOOD 0x0
+
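+/* Cancel a delayed work item and wait for it to finish. ret is set to
+ * RETURN_OK when a pending work item was cancelled, or UNF_RETURN_ERROR
+ * when there was nothing pending to cancel.
+ */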
+#define UNF_DELAYED_WORK_SYNC(ret, port_id, work, work_symb) \
+ do { \
+ if (!cancel_delayed_work_sync(work)) { \
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, \
+ UNF_INFO, \
+ "[info]LPort or RPort(0x%x) %s worker " \
+ "can't destroy, or no " \
+ "worker", \
+ port_id, work_symb); \
+ ret = UNF_RETURN_ERROR; \
+ } else { \
+ ret = RETURN_OK; \
+ } \
+ } while (0)
+
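+/* The accessor macros below map the command payload buffer of an
+ * unf_frame_pkg onto the union unf_sfs_u ELS/CT frame layouts and give the
+ * payload length for each frame type.
+ */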
+#define UNF_GET_SFS_ENTRY(pkg) ((union unf_sfs_u *)(void *)(((struct unf_frame_pkg *)(pkg)) \
+ ->unf_cmnd_pload_bl.buffer_ptr))
+/* FLOGI */
+#define UNF_GET_FLOGI_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->flogi.flogi_payload))
+#define UNF_FLOGI_PAYLOAD_LEN sizeof(struct unf_flogi_fdisc_payload)
+
+/* FLOGI ACC */
+#define UNF_GET_FLOGI_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg))) \
+ ->flogi_acc.flogi_payload))
+#define UNF_FLOGI_ACC_PAYLOAD_LEN sizeof(struct unf_flogi_fdisc_payload)
+
+/* FDISC */
+#define UNF_FDISC_PAYLOAD_LEN UNF_FLOGI_PAYLOAD_LEN
+#define UNF_FDISC_ACC_PAYLOAD_LEN UNF_FLOGI_ACC_PAYLOAD_LEN
+
+/* PLOGI */
+#define UNF_GET_PLOGI_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->plogi.payload))
+#define UNF_PLOGI_PAYLOAD_LEN sizeof(struct unf_plogi_payload)
+
+/* PLOGI ACC */
+#define UNF_GET_PLOGI_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->plogi_acc.payload))
+#define UNF_PLOGI_ACC_PAYLOAD_LEN sizeof(struct unf_plogi_payload)
+
+/* LOGO */
+#define UNF_GET_LOGO_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->logo.payload))
+#define UNF_LOGO_PAYLOAD_LEN sizeof(struct unf_logo_payload)
+
+/* ECHO */
+#define UNF_GET_ECHO_PAYLOAD(pkg) \
+ (((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->echo.echo_pld)
+
+/* ECHO PHYADDR */
+#define UNF_GET_ECHO_PAYLOAD_PHYADDR(pkg) \
+ (((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->echo.phy_echo_addr)
+
+#define UNF_ECHO_PAYLOAD_LEN sizeof(struct unf_echo_payload)
+
+/* REC */
+#define UNF_GET_REC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->rec.rec_pld))
+
+#define UNF_REC_PAYLOAD_LEN sizeof(struct unf_rec_pld)
+
+/* ECHO ACC */
+#define UNF_GET_ECHO_ACC_PAYLOAD(pkg) \
+ (((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->echo_acc.echo_pld)
+#define UNF_ECHO_ACC_PAYLOAD_LEN sizeof(struct unf_echo_payload)
+
+/* RRQ */
+#define UNF_GET_RRQ_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->rrq.cmnd))
+#define UNF_RRQ_PAYLOAD_LEN \
+ (sizeof(struct unf_rrq) - sizeof(struct unf_fc_head))
+
+/* PRLI */
+#define UNF_GET_PRLI_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->prli.payload))
+#define UNF_PRLI_PAYLOAD_LEN sizeof(struct unf_prli_payload)
+
+/* PRLI ACC */
+#define UNF_GET_PRLI_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->prli_acc.payload))
+#define UNF_PRLI_ACC_PAYLOAD_LEN sizeof(struct unf_prli_payload)
+
+/* PRLO */
+#define UNF_GET_PRLO_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->prlo.payload))
+#define UNF_PRLO_PAYLOAD_LEN sizeof(struct unf_prli_payload)
+
+#define UNF_GET_PRLO_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->prlo_acc.payload))
+#define UNF_PRLO_ACC_PAYLOAD_LEN sizeof(struct unf_prli_payload)
+
+/* PDISC */
+#define UNF_GET_PDISC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->pdisc.payload))
+#define UNF_PDISC_PAYLOAD_LEN sizeof(struct unf_plogi_payload)
+
+/* PDISC ACC */
+#define UNF_GET_PDISC_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->pdisc_acc.payload))
+#define UNF_PDISC_ACC_PAYLOAD_LEN sizeof(struct unf_plogi_payload)
+
+/* ADISC */
+#define UNF_GET_ADISC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->adisc.adisc_payl))
+#define UNF_ADISC_PAYLOAD_LEN sizeof(struct unf_adisc_payload)
+
+/* ADISC ACC */
+#define UNF_GET_ADISC_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->adisc_acc.adisc_payl))
+#define UNF_ADISC_ACC_PAYLOAD_LEN sizeof(struct unf_adisc_payload)
+
+/* RSCN ACC */
+#define UNF_GET_RSCN_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->els_acc.cmnd))
+#define UNF_RSCN_ACC_PAYLOAD_LEN \
+ (sizeof(struct unf_els_acc) - sizeof(struct unf_fc_head))
+
+/* LOGO ACC */
+#define UNF_GET_LOGO_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->els_acc.cmnd))
+#define UNF_LOGO_ACC_PAYLOAD_LEN \
+ (sizeof(struct unf_els_acc) - sizeof(struct unf_fc_head))
+
+/* RRQ ACC */
+#define UNF_GET_RRQ_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->els_acc.cmnd))
+#define UNF_RRQ_ACC_PAYLOAD_LEN \
+ (sizeof(struct unf_els_acc) - sizeof(struct unf_fc_head))
+
+/* REC ACC */
+#define UNF_GET_REC_ACC_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->els_acc.cmnd))
+#define UNF_REC_ACC_PAYLOAD_LEN \
+ (sizeof(struct unf_els_acc) - sizeof(struct unf_fc_head))
+
+/* GPN_ID */
+#define UNF_GET_GPNID_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->gpn_id.ctiu_pream))
+#define UNF_GPNID_PAYLOAD_LEN \
+ (sizeof(struct unf_gpnid) - sizeof(struct unf_fc_head))
+
+#define UNF_GET_GPNID_RSP_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->gpn_id_rsp.ctiu_pream))
+#define UNF_GPNID_RSP_PAYLOAD_LEN \
+ (sizeof(struct unf_gpnid_rsp) - sizeof(struct unf_fc_head))
+
+/* GNN_ID */
+#define UNF_GET_GNNID_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->gnn_id.ctiu_pream))
+#define UNF_GNNID_PAYLOAD_LEN \
+ (sizeof(struct unf_gnnid) - sizeof(struct unf_fc_head))
+
+#define UNF_GET_GNNID_RSP_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->gnn_id_rsp.ctiu_pream))
+#define UNF_GNNID_RSP_PAYLOAD_LEN \
+ (sizeof(struct unf_gnnid_rsp) - sizeof(struct unf_fc_head))
+
+/* GFF_ID */
+#define UNF_GET_GFFID_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->gff_id.ctiu_pream))
+#define UNF_GFFID_PAYLOAD_LEN \
+ (sizeof(struct unf_gffid) - sizeof(struct unf_fc_head))
+
+#define UNF_GET_GFFID_RSP_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->gff_id_rsp.ctiu_pream))
+#define UNF_GFFID_RSP_PAYLOAD_LEN \
+ (sizeof(struct unf_gffid_rsp) - sizeof(struct unf_fc_head))
+
+/* GID_FT/GID_PT */
+#define UNF_GET_GID_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg)) \
+ ->get_id.gid_req.ctiu_pream))
+
+#define UNF_GID_PAYLOAD_LEN (sizeof(struct unf_ctiu_prem) + sizeof(u32))
+#define UNF_GET_GID_ACC_PAYLOAD(pkg) \
+ (((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg)) \
+ ->get_id.gid_rsp.gid_acc_pld)
+#define UNF_GID_ACC_PAYLOAD_LEN sizeof(struct unf_gid_acc_pld)
+
+/* RFT_ID */
+#define UNF_GET_RFTID_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->rft_id.ctiu_pream))
+#define UNF_RFTID_PAYLOAD_LEN \
+ (sizeof(struct unf_rftid) - sizeof(struct unf_fc_head))
+
+#define UNF_GET_RFTID_RSP_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->rft_id_rsp.ctiu_pream))
+#define UNF_RFTID_RSP_PAYLOAD_LEN sizeof(struct unf_ctiu_prem)
+
+/* RFF_ID */
+#define UNF_GET_RFFID_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->rff_id.ctiu_pream))
+#define UNF_RFFID_PAYLOAD_LEN \
+ (sizeof(struct unf_rffid) - sizeof(struct unf_fc_head))
+
+#define UNF_GET_RFFID_RSP_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->rff_id_rsp.ctiu_pream))
+#define UNF_RFFID_RSP_PAYLOAD_LEN sizeof(struct unf_ctiu_prem)
+
+/* ACC&RJT */
+#define UNF_GET_ELS_ACC_RJT_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->els_rjt.cmnd))
+#define UNF_ELS_ACC_RJT_LEN \
+ (sizeof(struct unf_els_rjt) - sizeof(struct unf_fc_head))
+
+/* SCR */
+#define UNF_SCR_PAYLOAD(pkg) \
+ (((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->scr.payload)
+#define UNF_SCR_PAYLOAD_LEN \
+ (sizeof(struct unf_scr) - sizeof(struct unf_fc_head))
+
+#define UNF_SCR_RSP_PAYLOAD(pkg) \
+ (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->els_acc.cmnd))
+#define UNF_SCR_RSP_PAYLOAD_LEN \
+ (sizeof(struct unf_els_acc) - sizeof(struct unf_fc_head))
+
+#define UNF_GS_RSP_PAYLOAD_LEN \
+ (sizeof(union unf_sfs_u) - sizeof(struct unf_fc_head))
+
+#define UNF_GET_XCHG_TAG(pkg) \
+ (((struct unf_frame_pkg *)(pkg)) \
+ ->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX])
+#define UNF_GET_ABTS_XCHG_TAG(pkg) \
+ ((u16)(((pkg)->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]) >> 16))
+#define UNF_GET_IO_XCHG_TAG(pkg) \
+ ((u16)((pkg)->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]))
+
+#define UNF_GET_HOTPOOL_TAG(pkg) \
+ (((struct unf_frame_pkg *)(pkg)) \
+ ->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX])
+#define UNF_GET_SID(pkg) \
+ (((struct unf_frame_pkg *)(pkg))->frame_head.csctl_sid & \
+ UNF_NPORTID_MASK)
+#define UNF_GET_DID(pkg) \
+ (((struct unf_frame_pkg *)(pkg))->frame_head.rctl_did & \
+ UNF_NPORTID_MASK)
+#define UNF_GET_OXID(pkg) \
+ (((struct unf_frame_pkg *)(pkg))->frame_head.oxid_rxid >> 16)
+#define UNF_GET_RXID(pkg) \
+ ((u16)((struct unf_frame_pkg *)(pkg))->frame_head.oxid_rxid)
+#define UNF_GET_XID_RELEASE_TIMER(pkg) \
+ (((struct unf_frame_pkg *)(pkg))->release_task_id_timer)
+#define UNF_GETXCHGALLOCTIME(pkg) \
+ (((struct unf_frame_pkg *)(pkg)) \
+ ->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME])
+
+#define UNF_SET_XCHG_ALLOC_TIME(pkg, xchg) \
+ (((struct unf_frame_pkg *)(pkg)) \
+ ->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = \
+ (((struct unf_xchg *)(xchg)) \
+ ->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME]))
+#define UNF_SET_ABORT_INFO_IOTYPE(pkg, xchg) \
+ (((struct unf_frame_pkg *)(pkg)) \
+ ->private_data[PKG_PRIVATE_XCHG_ABORT_INFO] |= \
+ (((u8)(((struct unf_xchg *)(xchg))->data_direction & 0x7)) \
+ << 2))
+
+#define UNF_CHECK_NPORT_FPORT_BIT(els_payload) \
+ (((struct unf_flogi_fdisc_payload *)(els_payload)) \
+ ->fabric_parms.co_parms.nport)
+
+#define UNF_GET_RSP_BUF(pkg) \
+ ((void *)(((struct unf_frame_pkg *)(pkg))->unf_rsp_pload_bl.buffer_ptr))
+#define UNF_GET_RSP_LEN(pkg) \
+ (((struct unf_frame_pkg *)(pkg))->unf_rsp_pload_bl.length)
+
+#define UNF_N_PORT 0
+#define UNF_F_PORT 1
+
+#define UNF_GET_RA_TOV_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.r_a_tov)
+#define UNF_GET_RT_TOV_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.r_t_tov)
+#define UNF_GET_E_D_TOV_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.e_d_tov)
+#define UNF_GET_E_D_TOV_RESOLUTION_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.e_d_tov_resolution)
+#define UNF_GET_BB_SC_N_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.bbscn)
+#define UNF_GET_BB_CREDIT_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.bb_credit)
+
+enum unf_pcie_error_code {
+ UNF_PCIE_ERROR_NONE = 0,
+ UNF_PCIE_DATAPARITYDETECTED = 1,
+ UNF_PCIE_SIGNALTARGETABORT,
+ UNF_PCIE_RECEIVEDTARGETABORT,
+ UNF_PCIE_RECEIVEDMASTERABORT,
+ UNF_PCIE_SIGNALEDSYSTEMERROR,
+ UNF_PCIE_DETECTEDPARITYERROR,
+ UNF_PCIE_CORRECTABLEERRORDETECTED,
+ UNF_PCIE_NONFATALERRORDETECTED,
+ UNF_PCIE_FATALERRORDETECTED,
+ UNF_PCIE_UNSUPPORTEDREQUESTDETECTED,
+ UNF_PCIE_AUXILIARYPOWERDETECTED,
+ UNF_PCIE_TRANSACTIONSPENDING,
+
+ UNF_PCIE_UNCORRECTINTERERRSTATUS,
+ UNF_PCIE_UNSUPPORTREQERRSTATUS,
+ UNF_PCIE_ECRCERRORSTATUS,
+ UNF_PCIE_MALFORMEDTLPSTATUS,
+ UNF_PCIE_RECEIVEROVERFLOWSTATUS,
+ UNF_PCIE_UNEXPECTCOMPLETESTATUS,
+ UNF_PCIE_COMPLETERABORTSTATUS,
+ UNF_PCIE_COMPLETIONTIMEOUTSTATUS,
+ UNF_PCIE_FLOWCTRLPROTOCOLERRSTATUS,
+ UNF_PCIE_POISONEDTLPSTATUS,
+ UNF_PCIE_SURPRISEDOWNERRORSTATUS,
+ UNF_PCIE_DATALINKPROTOCOLERRSTATUS,
+ UNF_PCIE_ADVISORYNONFATALERRSTATUS,
+ UNF_PCIE_REPLAYTIMERTIMEOUTSTATUS,
+ UNF_PCIE_REPLAYNUMROLLOVERSTATUS,
+ UNF_PCIE_BADDLLPSTATUS,
+ UNF_PCIE_BADTLPSTATUS,
+ UNF_PCIE_RECEIVERERRORSTATUS,
+
+ UNF_PCIE_BUTT
+};
+
+#define UNF_DMA_HI32(a) (((a) >> 32) & 0xffffffff)
+#define UNF_DMA_LO32(a) ((a) & 0xffffffff)
+
+#define UNF_WWN_LEN 8
+#define UNF_MAC_LEN 6
+
+/* send BLS/ELS/BLS REPLY/ELS REPLY/GS/ */
+/* rcvd BLS/ELS/REQ DONE/REPLY DONE */
+#define UNF_PKG_BLS_REQ 0x0100
+#define UNF_PKG_BLS_REQ_DONE 0x0101
+#define UNF_PKG_BLS_REPLY 0x0102
+#define UNF_PKG_BLS_REPLY_DONE 0x0103
+
+#define UNF_PKG_ELS_REQ 0x0200
+#define UNF_PKG_ELS_REQ_DONE 0x0201
+
+#define UNF_PKG_ELS_REPLY 0x0202
+#define UNF_PKG_ELS_REPLY_DONE 0x0203
+
+#define UNF_PKG_GS_REQ 0x0300
+#define UNF_PKG_GS_REQ_DONE 0x0301
+
+#define UNF_PKG_TGT_XFER 0x0400
+#define UNF_PKG_TGT_RSP 0x0401
+#define UNF_PKG_TGT_RSP_NOSGL 0x0402
+#define UNF_PKG_TGT_RSP_STATUS 0x0403
+
+#define UNF_PKG_INI_IO 0x0500
+#define UNF_PKG_INI_RCV_TGT_RSP 0x0507
+
+/* external sgl struct start */
+struct unf_esgl_page {
+ u64 page_address;
+ dma_addr_t esgl_phy_addr;
+ u32 page_size;
+};
+
+/* external sgl struct end */
+struct unf_esgl {
+ struct list_head entry_esgl;
+ struct unf_esgl_page page;
+};
+
+#define UNF_RESPONE_DATA_LEN 8
+struct unf_frame_payld {
+ u8 *buffer_ptr;
+ dma_addr_t buf_dma_addr;
+ u32 length;
+};
+
+enum pkg_private_index {
+ PKG_PRIVATE_LOWLEVEL_XCHG_ADD = 0,
+ PKG_PRIVATE_XCHG_HOT_POOL_INDEX = 1, /* Hot Pool Index */
+ PKG_PRIVATE_XCHG_RPORT_INDEX = 2, /* RPort index */
+ PKG_PRIVATE_XCHG_VP_INDEX = 3, /* VPort index */
+ PKG_PRIVATE_XCHG_SSQ_INDEX,
+ PKG_PRIVATE_RPORT_RX_SIZE,
+ PKG_PRIVATE_XCHG_TIMEER,
+ PKG_PRIVATE_XCHG_ALLOC_TIME,
+ PKG_PRIVATE_XCHG_ABORT_INFO,
+ PKG_PRIVATE_ECHO_CMD_SND_TIME, /* local send echo cmd time stamp */
+ PKG_PRIVATE_ECHO_ACC_RCV_TIME, /* local receive echo acc time stamp */
+ PKG_PRIVATE_ECHO_CMD_RCV_TIME, /* remote receive echo cmd time stamp */
+ PKG_PRIVATE_ECHO_RSP_SND_TIME, /* remote send echo rsp time stamp */
+ PKG_MAX_PRIVATE_DATA_SIZE
+};
+
+extern u32 dix_flag;
+extern u32 dif_sgl_mode;
+extern u32 dif_app_esc_check;
+extern u32 dif_ref_esc_check;
+
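+/* DIF/DIX (T10 protection information) control flags and block-count helpers */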
+#define UNF_DIF_ACTION_NONE 0
+
+enum unf_adm_dif_mode_E {
+ UNF_SWITCH_DIF_DIX = 0,
+ UNF_APP_REF_ESCAPE,
+ ALL_DIF_MODE = 20,
+};
+
+#define UNF_DIF_CRC_ERR 0x1001
+#define UNF_DIF_APP_ERR 0x1002
+#define UNF_DIF_LBA_ERR 0x1003
+
+#define UNF_VERIFY_CRC_MASK (1 << 1)
+#define UNF_VERIFY_APP_MASK (1 << 2)
+#define UNF_VERIFY_LBA_MASK (1 << 3)
+
+#define UNF_REPLACE_CRC_MASK (1 << 8)
+#define UNF_REPLACE_APP_MASK (1 << 9)
+#define UNF_REPLACE_LBA_MASK (1 << 10)
+
+#define UNF_DIF_ACTION_MASK (0xff << 16)
+#define UNF_DIF_ACTION_INSERT (0x1 << 16)
+#define UNF_DIF_ACTION_VERIFY_AND_DELETE (0x2 << 16)
+#define UNF_DIF_ACTION_VERIFY_AND_FORWARD (0x3 << 16)
+#define UNF_DIF_ACTION_VERIFY_AND_REPLACE (0x4 << 16)
+
+#define UNF_DIF_ACTION_NO_INCREASE_REFTAG (0x1 << 24)
+
+#define UNF_DEFAULT_CRC_GUARD_SEED (0)
+#define UNF_CAL_512_BLOCK_CNT(data_len) ((data_len) >> 9)
+#define UNF_CAL_BLOCK_CNT(data_len, sector_size) ((data_len) / (sector_size))
+#define UNF_CAL_CRC_BLK_CNT(crc_data_len, sector_size) \
+ ((crc_data_len) / ((sector_size) + 8))
+
+#define UNF_DIF_DOUBLE_SGL (1 << 1)
+#define UNF_DIF_SECTSIZE_4KB (1 << 2)
+#define UNF_DIF_SECTSIZE_512 (0 << 2)
+#define UNF_DIF_LBA_NONE_INCREASE (1 << 3)
+#define UNF_DIF_TYPE3 (1 << 4)
+
+#define SECTOR_SIZE_512 512
+#define SECTOR_SIZE_4096 4096
+#define SPFC_DIF_APP_REF_ESC_NOT_CHECK 1
+#define SPFC_DIF_APP_REF_ESC_CHECK 0
+
+struct unf_dif {
+ u16 crc;
+ u16 app_tag;
+ u32 lba;
+};
+
+enum unf_io_state { UNF_INI_IO = 0, UNF_TGT_XFER = 1, UNF_TGT_RSP = 2 };
+
+#define UNF_PKG_LAST_RESPONSE 0
+#define UNF_PKG_NOT_LAST_RESPONSE 1
+
+struct unf_frame_pkg {
+ /* pkt type:BLS/ELS/FC4LS/CMND/XFER/RSP */
+ u32 type;
+ u32 last_pkg_flag;
+ u32 fcp_conf_flag;
+
+#define UNF_FCP_RESPONSE_VALID 0x01
+#define UNF_FCP_SENSE_VALID 0x02
+ u32 response_and_sense_valid_flag; /* resp and sense valid flag */
+ u32 cmnd;
+ struct unf_fc_head frame_head;
+ u32 entry_count;
+ void *xchg_contex;
+ u32 transfer_len;
+ u32 residus_len;
+ u32 status;
+ u32 status_sub_code;
+ enum unf_io_state io_state;
+ u32 qos_level;
+ u32 private_data[PKG_MAX_PRIVATE_DATA_SIZE];
+ struct unf_fcp_cmnd *fcp_cmnd;
+ struct unf_dif_control_info dif_control;
+ struct unf_frame_payld unf_cmnd_pload_bl;
+ struct unf_frame_payld unf_rsp_pload_bl;
+ struct unf_frame_payld unf_sense_pload_bl;
+ void *upper_cmd;
+ u32 abts_maker_status;
+ u32 release_task_id_timer;
+ u8 byte_orders;
+ u8 rx_or_ox_id;
+ u8 class_mode;
+ u8 rsvd;
+ u8 *peresp;
+ u32 rcvrsp_len;
+ ulong timeout;
+ u32 origin_hottag;
+ u32 origin_magicnum;
+};
+
+#define UNF_MAX_SFS_XCHG 2048
+#define UNF_RESERVE_SFS_XCHG 128 /* times on exchange mgr num */
+
+struct unf_lport_cfg_item {
+ u32 port_id;
+ u32 port_mode; /* INI(0x20), TGT(0x10), BOTH(0x30) */
+ u32 port_topology; /* 0x3:loop , 0xc:p2p ,0xf:auto */
+ u32 max_queue_depth;
+ u32 max_io; /* Recommended Value 512-4096 */
+ u32 max_login;
+ u32 max_sfs_xchg;
+ u32 port_speed; /* 0:auto 1:1Gbps 2:2Gbps 4:4Gbps 8:8Gbps 16:16Gbps */
+ u32 tape_support; /* tape support */
+ u32 fcp_conf; /* fcp confirm support */
+ u32 bbscn;
+};
+
+struct unf_port_dynamic_info {
+ u32 sfp_posion;
+ u32 sfp_valid;
+ u32 phy_link;
+ u32 firmware_state;
+ u32 cur_speed;
+ u32 mailbox_timeout_cnt;
+};
+
+struct unf_port_intr_coalsec {
+ u32 delay_timer;
+ u32 depth;
+};
+
+struct unf_port_topo {
+ u32 topo_cfg;
+ enum unf_act_topo topo_act;
+};
+
+struct unf_port_transfer_para {
+ u32 type;
+ u32 value;
+};
+
+struct unf_buf {
+ u8 *buf;
+ u32 buf_len;
+};
+
+/* get ucode & up ver */
+#define SPFC_VER_LEN (16)
+#define SPFC_COMPILE_TIME_LEN (20)
+struct unf_fw_version {
+ u32 message_type;
+ u8 fw_version[SPFC_VER_LEN];
+};
+
+struct unf_port_wwn {
+ u64 sys_port_wwn;
+ u64 sys_node_name;
+};
+
+enum unf_port_config_set_op {
+ UNF_PORT_CFG_SET_SPEED,
+ UNF_PORT_CFG_SET_PORT_SWITCH,
+ UNF_PORT_CFG_SET_POWER_STATE,
+ UNF_PORT_CFG_SET_PORT_STATE,
+ UNF_PORT_CFG_UPDATE_WWN,
+ UNF_PORT_CFG_TEST_FLASH,
+ UNF_PORT_CFG_UPDATE_FABRIC_PARAM,
+ UNF_PORT_CFG_UPDATE_PLOGI_PARAM,
+ UNF_PORT_CFG_SET_BUTT
+};
+
+enum unf_port_cfg_get_op {
+ UNF_PORT_CFG_GET_TOPO_ACT,
+ UNF_PORT_CFG_GET_LOOP_MAP,
+ UNF_PORT_CFG_GET_SFP_PRESENT,
+ UNF_PORT_CFG_GET_FW_VER,
+ UNF_PORT_CFG_GET_HW_VER,
+ UNF_PORT_CFG_GET_WORKBALE_BBCREDIT,
+ UNF_PORT_CFG_GET_WORKBALE_BBSCN,
+ UNF_PORT_CFG_GET_FC_SERDES,
+ UNF_PORT_CFG_GET_LOOP_ALPA,
+ UNF_PORT_CFG_GET_MAC_ADDR,
+ UNF_PORT_CFG_GET_SFP_VER,
+ UNF_PORT_CFG_GET_SFP_SUPPORT_UPDATE,
+ UNF_PORT_CFG_GET_SFP_LOG,
+ UNF_PORT_CFG_GET_PCIE_LINK_STATE,
+ UNF_PORT_CFG_GET_FLASH_DATA_INFO,
+ UNF_PORT_CFG_GET_BUTT,
+};
+
+enum unf_port_config_state {
+ UNF_PORT_CONFIG_STATE_START,
+ UNF_PORT_CONFIG_STATE_STOP,
+ UNF_PORT_CONFIG_STATE_RESET,
+ UNF_PORT_CONFIG_STATE_STOP_INTR,
+ UNF_PORT_CONFIG_STATE_BUTT
+};
+
+enum unf_port_config_update {
+ UNF_PORT_CONFIG_UPDATE_FW_MINIMUM,
+ UNF_PORT_CONFIG_UPDATE_FW_ALL,
+ UNF_PORT_CONFIG_UPDATE_BUTT
+};
+
+enum unf_disable_vp_mode {
+ UNF_DISABLE_VP_MODE_ONLY = 0x8,
+ UNF_DISABLE_VP_MODE_REINIT_LINK = 0x9,
+ UNF_DISABLE_VP_MODE_NOFAB_LOGO = 0xA,
+ UNF_DISABLE_VP_MODE_LOGO_ALL = 0xB
+};
+
+struct unf_vport_info {
+ u16 vp_index;
+ u64 node_name;
+ u64 port_name;
+ u32 port_mode; /* INI, TGT or both */
+ enum unf_disable_vp_mode disable_mode;
+ u32 nport_id; /* maybe acquired by lowlevel and update to common */
+ void *vport;
+};
+
+struct unf_port_login_parms {
+ enum unf_act_topo act_topo;
+
+ u32 rport_index;
+ u32 seq_cnt : 1;
+ u32 ed_tov : 1;
+ u32 reserved : 14;
+ u32 tx_mfs : 16;
+ u32 ed_tov_timer_val;
+
+ u8 remote_rttov_tag;
+ u8 remote_edtov_tag;
+ u16 remote_bb_credit;
+ u16 compared_bbscn;
+ u32 compared_edtov_val;
+ u32 compared_ratov_val;
+ u32 els_cmnd_code;
+};
+
+struct unf_mbox_head_info {
+ /* mbox header */
+ u8 cmnd_type;
+ u8 length;
+ u8 port_id;
+ u8 pad0;
+
+ /* operation */
+ u32 opcode : 4;
+ u32 pad1 : 28;
+};
+
+struct unf_mbox_head_sts {
+ /* mbox header */
+ u8 cmnd_type;
+ u8 length;
+ u8 port_id;
+ u8 pad0;
+
+ /* operation */
+ u16 pad1;
+ u8 pad2;
+ u8 status;
+};
+
+struct unf_low_level_service_op {
+ u32 (*unf_ls_gs_send)(void *hba, struct unf_frame_pkg *pkg);
+ u32 (*unf_bls_send)(void *hba, struct unf_frame_pkg *pkg);
+ u32 (*unf_cmnd_send)(void *hba, struct unf_frame_pkg *pkg);
+ u32 (*unf_rsp_send)(void *handle, struct unf_frame_pkg *pkg);
+ u32 (*unf_release_rport_res)(void *handle, struct unf_port_info *rport_info);
+ u32 (*unf_flush_ini_resp_que)(void *handle);
+ u32 (*unf_alloc_rport_res)(void *handle, struct unf_port_info *rport_info);
+ u32 (*ll_release_xid)(void *handle, struct unf_frame_pkg *pkg);
+ u32 (*unf_xfer_send)(void *handle, struct unf_frame_pkg *pkg);
+};
+
+struct unf_low_level_port_mgr_op {
+ /* fcport/opcode/input parameter */
+ u32 (*ll_port_config_set)(void *fc_port, enum unf_port_config_set_op opcode, void *para_in);
+
+ /* fcport/opcode/output parameter */
+ u32 (*ll_port_config_get)(void *fc_port, enum unf_port_cfg_get_op opcode, void *para_out);
+};
+
+struct unf_chip_info {
+ u8 chip_type;
+ u8 chip_work_mode;
+ u8 disable_err_flag;
+};
+
+struct unf_low_level_functioon_op {
+ struct unf_chip_info chip_info;
+ /* low level type */
+ u32 low_level_type;
+ const char *name;
+ struct pci_dev *dev;
+ u64 sys_node_name;
+ u64 sys_port_name;
+ struct unf_lport_cfg_item lport_cfg_items;
+#define UNF_LOW_LEVEL_MGR_TYPE_ACTIVE 0
+#define UNF_LOW_LEVEL_MGR_TYPE_PASSTIVE 1
+ const u32 xchg_mgr_type;
+
+#define UNF_NO_EXTRA_ABTS_XCHG 0x0
+#define UNF_LL_IOC_ABTS_XCHG 0x1
+ const u32 abts_xchg;
+
+#define UNF_CM_RPORT_SET_QUALIFIER 0x0
+#define UNF_CM_RPORT_SET_QUALIFIER_REUSE 0x1
+#define UNF_CM_RPORT_SET_QUALIFIER_SPFC 0x2
+
+ /* low level pass-through flag. */
+#define UNF_LOW_LEVEL_PASS_THROUGH_FIP 0x0
+#define UNF_LOW_LEVEL_PASS_THROUGH_FABRIC_LOGIN 0x1
+#define UNF_LOW_LEVEL_PASS_THROUGH_PORT_LOGIN 0x2
+ u32 passthrough_flag;
+
+ /* low level parameter */
+ u32 support_max_npiv_num;
+ u32 support_max_ssq_num;
+ u32 support_max_speed;
+ u32 support_min_speed;
+ u32 fc_ser_max_speed;
+
+ u32 support_max_rport;
+
+ u32 support_max_hot_tag_range;
+ u32 sfp_type;
+ u32 update_fw_reset_active;
+ u32 support_upgrade_report;
+ u32 multi_conf_support;
+ u32 port_type;
+#define UNF_LOW_LEVEL_RELEASE_RPORT_SYNC 0x0
+#define UNF_LOW_LEVEL_RELEASE_RPORT_ASYNC 0x1
+ u8 rport_release_type;
+#define UNF_LOW_LEVEL_SIRT_PAGE_MODE_FIXED 0x0
+#define UNF_LOW_LEVEL_SIRT_PAGE_MODE_XCHG 0x1
+ u8 sirt_page_mode;
+ u8 sfp_speed;
+
+ /* IO reference */
+ struct unf_low_level_service_op service_op;
+
+ /* Port Mgr reference */
+ struct unf_low_level_port_mgr_op port_mgr_op;
+
+ u8 chip_id;
+};
+
+struct unf_cm_handle_op {
+ /* return:L_Port */
+ void *(*unf_alloc_local_port)(void *private_data,
+ struct unf_low_level_functioon_op *low_level_op);
+
+ /* input para:L_Port */
+ u32 (*unf_release_local_port)(void *lport);
+
+ /* input para:L_Port, FRAME_PKG_S */
+ u32 (*unf_receive_ls_gs_pkg)(void *lport, struct unf_frame_pkg *pkg);
+
+ /* input para:L_Port, FRAME_PKG_S */
+ u32 (*unf_receive_bls_pkg)(void *lport, struct unf_frame_pkg *pkg);
+ /* input para:L_Port, FRAME_PKG_S */
+ u32 (*unf_send_els_done)(void *lport, struct unf_frame_pkg *pkg);
+
+ /* input para:L_Port, FRAME_PKG_S */
+ u32 (*unf_receive_marker_status)(void *lport, struct unf_frame_pkg *pkg);
+ u32 (*unf_receive_abts_marker_status)(void *lport, struct unf_frame_pkg *pkg);
+ /* input para:L_Port, FRAME_PKG_S */
+ u32 (*unf_receive_ini_response)(void *lport, struct unf_frame_pkg *pkg);
+
+ int (*unf_get_cfg_parms)(char *section_name,
+ struct unf_cfg_item *cfg_parm, u32 *cfg_value,
+ u32 item_num);
+
+ /* TGT IO interface */
+ u32 (*unf_process_fcp_cmnd)(void *lport, struct unf_frame_pkg *pkg);
+
+ /* TGT IO Done */
+ u32 (*unf_tgt_cmnd_xfer_or_rsp_echo)(void *lport, struct unf_frame_pkg *pkg);
+
+ u32 (*unf_cm_get_sgl_entry)(void *pkg, char **buf, u32 *buf_len);
+ u32 (*unf_cm_get_dif_sgl_entry)(void *pkg, char **buf, u32 *buf_len);
+
+ struct unf_esgl_page *(*unf_get_one_free_esgl_page)(void *lport, struct unf_frame_pkg *pkg);
+
+ /* input para:L_Port, EVENT */
+ u32 (*unf_fc_port_event)(void *lport, u32 events, void *input);
+
+ int (*unf_drv_start_work)(void *lport);
+
+ void (*unf_card_rport_chip_err)(struct pci_dev const *pci_dev);
+};
+
+u32 unf_get_cm_handle_ops(struct unf_cm_handle_op *cm_handle);
+int unf_common_init(void);
+void unf_common_exit(void);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_disc.c b/drivers/scsi/spfc/common/unf_disc.c
new file mode 100644
index 000000000000..c48d0ba670d4
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_disc.c
@@ -0,0 +1,1276 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_disc.h"
+#include "unf_log.h"
+#include "unf_common.h"
+#include "unf_event.h"
+#include "unf_lport.h"
+#include "unf_rport.h"
+#include "unf_exchg.h"
+#include "unf_ls.h"
+#include "unf_gs.h"
+#include "unf_portman.h"
+
+#define UNF_LIST_RSCN_PAGE_CNT 2560
+#define UNF_MAX_PORTS_PRI_LOOP 2
+#define UNF_MAX_GS_SEND_NUM 8
+#define UNF_OS_REMOVE_CARD_TIMEOUT (60 * 1000)
+
+static void unf_set_disc_state(struct unf_disc *disc,
+ enum unf_disc_state states)
+{
+ FC_CHECK_RETURN_VOID(disc);
+
+ if (states != disc->states) {
+ /* Reset disc retry count */
+ disc->retry_count = 0;
+ }
+
+ disc->states = states;
+}
+
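+/* Query the low level port manager for the current loop map: on success
+ * loop_map[0] holds the number of ports on the loop and the following
+ * entries hold their AL_PA values.
+ */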
+static inline u32 unf_get_loop_map(struct unf_lport *lport, u8 loop_map[], u32 loop_map_size)
+{
+ struct unf_buf buf = {0};
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport->low_level_func.port_mgr_op.ll_port_config_get,
+ UNF_RETURN_ERROR);
+
+ buf.buf = loop_map;
+ buf.buf_len = loop_map_size;
+
+ ret = lport->low_level_func.port_mgr_op.ll_port_config_get(lport->fc_port,
+ UNF_PORT_CFG_GET_LOOP_MAP,
+ (void *)&buf);
+ return ret;
+}
+
+static void unf_login_with_loop_node(struct unf_lport *lport, u32 alpa)
+{
+ /* Only used for Private Loop LOGIN */
+ struct unf_rport *unf_rport = NULL;
+ ulong rport_flag = 0;
+ u32 port_feature = 0;
+ u32 ret;
+
+ /* Check AL_PA validity */
+ if (lport->nport_id == alpa) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) is the same as RPort with AL_PA(0x%x), do nothing",
+ lport->port_id, alpa);
+ return;
+ }
+
+ if (alpa == 0) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x) is fabric, do nothing",
+ lport->port_id, alpa);
+ return;
+ }
+
+ /* Get & set R_Port: reuse only */
+ unf_rport = unf_get_rport_by_nport_id(lport, alpa);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x) RPort(0x%x_0x%p) login with private loop",
+ lport->port_id, lport->nport_id, alpa, unf_rport);
+
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY, alpa);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) allocate new RPort(0x%x) failed",
+ lport->port_id, lport->nport_id, alpa);
+ return;
+ }
+
+ /* Update R_Port state & N_Port_ID */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, rport_flag);
+ unf_rport->nport_id = alpa;
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, rport_flag);
+
+ /* Private Loop: check whether the PLOGI needs to be delayed */
+ port_feature = unf_rport->options;
+
+ /* check Rport and Lport feature */
+ if (port_feature == UNF_PORT_MODE_UNKNOWN &&
+ lport->options == UNF_PORT_MODE_INI) {
+ /* Start to send PLOGI */
+ ret = unf_send_plogi(lport, unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send PLOGI to RPort(0x%x) failed",
+ lport->port_id, lport->nport_id, unf_rport->nport_id);
+
+ unf_rport_error_recovery(unf_rport);
+ }
+ } else {
+ unf_check_rport_need_delay_plogi(lport, unf_rport, port_feature);
+ }
+}
+
+static int unf_discover_private_loop(void *arg_in, void *arg_out)
+{
+ struct unf_lport *unf_lport = (struct unf_lport *)arg_in;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 i = 0;
+ u8 loop_id = 0;
+ u32 alpa_index = 0;
+ u8 loop_map[UNF_LOOPMAP_COUNT];
+
+ FC_CHECK_RETURN_VALUE(unf_lport, UNF_RETURN_ERROR);
+ memset(loop_map, 0x0, UNF_LOOPMAP_COUNT);
+
+ /* Get Port Loop Map */
+ ret = unf_get_loop_map(unf_lport, loop_map, UNF_LOOPMAP_COUNT);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) get loop map failed", unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Check Loop Map Ports Count */
+ if (loop_map[ARRAY_INDEX_0] > UNF_MAX_PORTS_PRI_LOOP) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) has more than %d ports(%u) in private loop",
+ unf_lport->port_id, UNF_MAX_PORTS_PRI_LOOP, loop_map[ARRAY_INDEX_0]);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* AL_PA = 0 means Public Loop */
+ if (loop_map[ARRAY_INDEX_1] == UNF_FL_PORT_LOOP_ADDR ||
+ loop_map[ARRAY_INDEX_2] == UNF_FL_PORT_LOOP_ADDR) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) one or more AL_PA is 0x00, indicate it's FL_Port",
+ unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Discovery Private Loop Ports */
+ for (i = 0; i < loop_map[ARRAY_INDEX_0]; i++) {
+ alpa_index = i + 1;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) start to disc(0x%x) with count(0x%x)",
+ unf_lport->port_id, loop_map[alpa_index], i);
+
+ /* Check whether the PLOGI needs to be delayed */
+ loop_id = loop_map[alpa_index];
+ unf_login_with_loop_node(unf_lport, (u32)loop_id);
+ }
+
+ return RETURN_OK;
+}
+
+u32 unf_disc_start(void *lport)
+{
+ /*
+ * Called by:
+ * 1. Private Loop Login entry
+ * 2. RSCN payload analysis
+ * 3. SCR callback
+ *
+ * Doing:
+ * Fabric/Public Loop: Send GID_PT
+ * Private Loop: (delay to) send PLOGI or send LOGO immediately
+ * P2P: do nothing
+ */
+ struct unf_lport *unf_lport = (struct unf_lport *)lport;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_cm_event_report *event = NULL;
+ u32 ret = RETURN_OK;
+ ulong flag = 0;
+ enum unf_act_topo act_topo = UNF_ACT_TOP_UNKNOWN;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ act_topo = unf_lport->act_topo;
+ disc = &unf_lport->disc;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]LOGIN: Port(0x%x) with topo(0x%x) begin to discovery",
+ unf_lport->port_id, act_topo);
+
+ if (act_topo == UNF_ACT_TOP_P2P_FABRIC ||
+ act_topo == UNF_ACT_TOP_PUBLIC_LOOP) {
+ /* 1. Fabric or Public Loop Topology: for directory server */
+ unf_rport = unf_get_rport_by_nport_id(unf_lport,
+ UNF_FC_FID_DIR_SERV); /* 0xfffffc */
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) unable to get SNS RPort(0xfffffc)",
+ unf_lport->port_id);
+
+ unf_rport = unf_rport_get_free_and_init(unf_lport, UNF_PORT_TYPE_FC,
+ UNF_FC_FID_DIR_SERV);
+ if (!unf_rport)
+ return UNF_RETURN_ERROR;
+
+ unf_rport->nport_id = UNF_FC_FID_DIR_SERV;
+ }
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_set_disc_state(disc, UNF_DISC_ST_START); /* disc start */
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_NORMAL_ENTER);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ /*
+ * NOTE: Send GID_PT
+ * The Name Server shall, when it receives a GID_PT request,
+ * return all Port Identifiers having registered support for the
+ * specified Port Type. One or more Port Identifiers, having
+ * registered as the specified Port Type, are returned.
+ */
+ ret = unf_send_gid_pt(unf_lport, unf_rport);
+ if (ret != RETURN_OK)
+ unf_disc_error_recovery(unf_lport);
+ } else if (act_topo == UNF_ACT_TOP_PRIVATE_LOOP) {
+ /* Private Loop: to thread process */
+ event = unf_get_one_event_node(unf_lport);
+ FC_CHECK_RETURN_VALUE(event, UNF_RETURN_ERROR);
+
+ event->lport = unf_lport;
+ event->event_asy_flag = UNF_EVENT_ASYN;
+ event->unf_event_task = unf_discover_private_loop;
+ event->para_in = (void *)unf_lport;
+
+ unf_post_one_event_node(unf_lport, event);
+ } else {
+ /* P2P topology mode: Do nothing */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) with topo(0x%x) need do nothing",
+ unf_lport->port_id, act_topo);
+ }
+
+ return ret;
+}
+
+static u32 unf_disc_stop(void *lport)
+{
+ /* Called by the GID_ACC processor */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_lport *root_lport = NULL;
+ struct unf_rport *sns_port = NULL;
+ struct unf_disc_rport *disc_rport = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_disc *root_disc = NULL;
+ struct list_head *node = NULL;
+ ulong flag = 0;
+ u32 ret = RETURN_OK;
+ u32 nport_id = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ unf_lport = (struct unf_lport *)lport;
+ disc = &unf_lport->disc;
+ root_lport = (struct unf_lport *)unf_lport->root_lport;
+ root_disc = &root_lport->disc;
+
+ /* Get R_Port for Directory server */
+ sns_port = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_DIR_SERV);
+ if (!sns_port) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) find fabric RPort(0xfffffc) failed",
+ unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* for R_Port from disc pool busy list */
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ if (list_empty(&disc->disc_rport_mgr.list_disc_rports_busy)) {
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+ return RETURN_OK;
+ }
+
+ node = UNF_OS_LIST_NEXT(&disc->disc_rport_mgr.list_disc_rports_busy);
+ do {
+ /* Delete from Disc busy list */
+ disc_rport = list_entry(node, struct unf_disc_rport, entry_rport);
+ nport_id = disc_rport->nport_id;
+ list_del_init(node);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ /* Add back to (free) Disc R_Port pool (list) */
+ spin_lock_irqsave(&root_disc->rport_busy_pool_lock, flag);
+ list_add_tail(node, &root_disc->disc_rport_mgr.list_disc_rports_pool);
+ spin_unlock_irqrestore(&root_disc->rport_busy_pool_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "Port(0x%x_0x%x) remove nportid:0x%x from rportbusy list",
+ unf_lport->port_id, unf_lport->nport_id, disc_rport->nport_id);
+ /* Send GNN_ID to Name Server */
+ ret = unf_get_and_post_disc_event(unf_lport, sns_port, nport_id,
+ UNF_DISC_GET_NODE_NAME);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
+ unf_lport->nport_id, UNF_DISC_GET_NODE_NAME, nport_id);
+
+ /* NOTE: go to next stage */
+ unf_rcv_gnn_id_rsp_unknown(unf_lport, sns_port, nport_id);
+ }
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ node = UNF_OS_LIST_NEXT(&disc->disc_rport_mgr.list_disc_rports_busy);
+ } while (node != &disc->disc_rport_mgr.list_disc_rports_busy);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ return ret;
+}
+
+static u32 unf_init_rport_pool(struct unf_lport *lport)
+{
+ struct unf_rport_pool *rport_pool = NULL;
+ struct unf_rport *unf_rport = NULL;
+ u32 ret = RETURN_OK;
+ u32 i = 0;
+ u32 bitmap_cnt = 0;
+ ulong flag = 0;
+ u32 max_login = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ /* Init RPort Pool info */
+ rport_pool = &lport->rport_pool;
+ max_login = lport->low_level_func.lport_cfg_items.max_login;
+ rport_pool->rport_pool_completion = NULL;
+ rport_pool->rport_pool_count = max_login;
+ spin_lock_init(&rport_pool->rport_free_pool_lock);
+ INIT_LIST_HEAD(&rport_pool->list_rports_pool); /* free RPort pool */
+
+ /* 1. Alloc RPort Pool buffer/resource (memory) */
+ rport_pool->rport_pool_add = vmalloc((size_t)(max_login * sizeof(struct unf_rport)));
+ if (!rport_pool->rport_pool_add) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) allocate RPort(s) resource failed", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(rport_pool->rport_pool_add, 0, (max_login * sizeof(struct unf_rport)));
+
+ /* 2. Alloc R_Port Pool bitmap */
+ bitmap_cnt = (lport->low_level_func.support_max_rport) / BITS_PER_LONG + 1;
+ rport_pool->rpi_bitmap = vmalloc((size_t)(bitmap_cnt * sizeof(ulong)));
+ if (!rport_pool->rpi_bitmap) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) allocate RPort Bitmap failed", lport->port_id);
+
+ vfree(rport_pool->rport_pool_add);
+ rport_pool->rport_pool_add = NULL;
+ return UNF_RETURN_ERROR;
+ }
+ memset(rport_pool->rpi_bitmap, 0, (bitmap_cnt * sizeof(ulong)));
+
+	/* 3. R_Port resource management: add R_Ports (buffer) to the R_Port pool list */
+ unf_rport = (struct unf_rport *)(rport_pool->rport_pool_add);
+ spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flag);
+ for (i = 0; i < rport_pool->rport_pool_count; i++) {
+ spin_lock_init(&unf_rport->rport_state_lock);
+ list_add_tail(&unf_rport->entry_rport, &rport_pool->list_rports_pool);
+ sema_init(&unf_rport->task_sema, 0);
+ unf_rport++;
+ }
+ spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
+
+ return ret;
+}
+
+static void unf_free_rport_pool(struct unf_lport *lport)
+{
+ struct unf_rport_pool *rport_pool = NULL;
+ bool wait = false;
+ ulong flag = 0;
+ u32 remain = 0;
+ u64 timeout = 0;
+ u32 max_login = 0;
+ u32 i;
+ struct unf_rport *unf_rport = NULL;
+ struct completion rport_pool_completion;
+
+ init_completion(&rport_pool_completion);
+ FC_CHECK_RETURN_VOID(lport);
+
+ rport_pool = &lport->rport_pool;
+ max_login = lport->low_level_func.lport_cfg_items.max_login;
+
+ spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flag);
+ if (rport_pool->rport_pool_count != max_login) {
+ rport_pool->rport_pool_completion = &rport_pool_completion;
+ remain = max_login - rport_pool->rport_pool_count;
+ wait = true;
+ }
+ spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
+
+ if (wait) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) begin to wait for RPort pool completion, remain(0x%x)",
+ lport->port_id, remain);
+
+ unf_show_all_rport(lport);
+
+ timeout = wait_for_completion_timeout(rport_pool->rport_pool_completion,
+ msecs_to_jiffies(UNF_OS_REMOVE_CARD_TIMEOUT));
+ if (timeout == 0)
+ unf_cm_mark_dirty_mem(lport, UNF_LPORT_DIRTY_FLAG_RPORT_POOL_DIRTY);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) wait for RPort pool completion end",
+ lport->port_id);
+
+ spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flag);
+ rport_pool->rport_pool_completion = NULL;
+ spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
+ }
+
+ unf_rport = (struct unf_rport *)(rport_pool->rport_pool_add);
+ for (i = 0; i < rport_pool->rport_pool_count; i++) {
+ if (!unf_rport)
+ break;
+ unf_rport++;
+ }
+
+ if ((lport->dirty_flag & UNF_LPORT_DIRTY_FLAG_RPORT_POOL_DIRTY) == 0) {
+ vfree(rport_pool->rport_pool_add);
+ rport_pool->rport_pool_add = NULL;
+ vfree(rport_pool->rpi_bitmap);
+ rport_pool->rpi_bitmap = NULL;
+ }
+}
+
+static void unf_init_rscn_node(struct unf_port_id_page *port_id_page)
+{
+ FC_CHECK_RETURN_VOID(port_id_page);
+
+ port_id_page->addr_format = 0;
+ port_id_page->event_qualifier = 0;
+ port_id_page->reserved = 0;
+ port_id_page->port_id_area = 0;
+ port_id_page->port_id_domain = 0;
+ port_id_page->port_id_port = 0;
+}
+
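+/*
+ * Take one node off the free RSCN page list (under rscn_id_list_lock),
+ * reinitialize it and hand it to the caller; returns NULL when the free
+ * list is exhausted.
+ */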
+struct unf_port_id_page *unf_get_free_rscn_node(void *rscn_mg)
+{
+	/* Called when saving an RSCN Port_ID */
+ struct unf_rscn_mgr *rscn_mgr = NULL;
+ struct unf_port_id_page *port_id_node = NULL;
+ struct list_head *list_node = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(rscn_mg, NULL);
+ rscn_mgr = (struct unf_rscn_mgr *)rscn_mg;
+
+ spin_lock_irqsave(&rscn_mgr->rscn_id_list_lock, flag);
+ if (list_empty(&rscn_mgr->list_free_rscn_page)) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_WARN,
+ "[warn]No RSCN node anymore");
+
+ spin_unlock_irqrestore(&rscn_mgr->rscn_id_list_lock, flag);
+ return NULL;
+ }
+
+ /* Get from list_free_RSCN_page */
+ list_node = UNF_OS_LIST_NEXT(&rscn_mgr->list_free_rscn_page);
+ list_del(list_node);
+ rscn_mgr->free_rscn_count--;
+ port_id_node = list_entry(list_node, struct unf_port_id_page, list_node_rscn);
+ unf_init_rscn_node(port_id_node);
+ spin_unlock_irqrestore(&rscn_mgr->rscn_id_list_lock, flag);
+
+ return port_id_node;
+}
+
+static void unf_release_rscn_node(void *rscn_mg, void *port_id_node)
+{
+	/* Called from RSCN GID_ACC handling */
+ struct unf_rscn_mgr *rscn_mgr = NULL;
+ struct unf_port_id_page *port_id_page = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(rscn_mg);
+ FC_CHECK_RETURN_VOID(port_id_node);
+ rscn_mgr = (struct unf_rscn_mgr *)rscn_mg;
+ port_id_page = (struct unf_port_id_page *)port_id_node;
+
+ /* Back to list_free_RSCN_page */
+ spin_lock_irqsave(&rscn_mgr->rscn_id_list_lock, flag);
+ rscn_mgr->free_rscn_count++;
+ unf_init_rscn_node(port_id_page);
+ list_add_tail(&port_id_page->list_node_rscn, &rscn_mgr->list_free_rscn_page);
+ spin_unlock_irqrestore(&rscn_mgr->rscn_id_list_lock, flag);
+}
+
+static u32 unf_init_rscn_pool(struct unf_lport *lport)
+{
+ struct unf_rscn_mgr *rscn_mgr = NULL;
+ struct unf_port_id_page *port_id_page = NULL;
+ u32 ret = RETURN_OK;
+ u32 i = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ rscn_mgr = &lport->disc.rscn_mgr;
+
+ /* Get RSCN Pool buffer */
+ rscn_mgr->rscn_pool_add = vmalloc(UNF_LIST_RSCN_PAGE_CNT * sizeof(struct unf_port_id_page));
+ if (!rscn_mgr->rscn_pool_add) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) allocate RSCN pool failed", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(rscn_mgr->rscn_pool_add, 0,
+ UNF_LIST_RSCN_PAGE_CNT * sizeof(struct unf_port_id_page));
+
+ spin_lock_irqsave(&rscn_mgr->rscn_id_list_lock, flag);
+ port_id_page = (struct unf_port_id_page *)(rscn_mgr->rscn_pool_add);
+ for (i = 0; i < UNF_LIST_RSCN_PAGE_CNT; i++) {
+ /* Add tail to list_free_RSCN_page */
+ list_add_tail(&port_id_page->list_node_rscn, &rscn_mgr->list_free_rscn_page);
+
+ rscn_mgr->free_rscn_count++;
+ port_id_page++;
+ }
+ spin_unlock_irqrestore(&rscn_mgr->rscn_id_list_lock, flag);
+
+ return ret;
+}
+
+static void unf_freerscn_pool(struct unf_lport *lport)
+{
+ struct unf_disc *disc = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ disc = &lport->disc;
+ if (disc->rscn_mgr.rscn_pool_add) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_INFO, "[info]Port(0x%x) free RSCN pool", lport->nport_id);
+
+ vfree(disc->rscn_mgr.rscn_pool_add);
+ disc->rscn_mgr.rscn_pool_add = NULL;
+ }
+}
+
+static u32 unf_init_rscn_mgr(struct unf_lport *lport)
+{
+ struct unf_rscn_mgr *rscn_mgr = NULL;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ rscn_mgr = &lport->disc.rscn_mgr;
+
+ INIT_LIST_HEAD(&rscn_mgr->list_free_rscn_page); /* free RSCN page list */
+ INIT_LIST_HEAD(&rscn_mgr->list_using_rscn_page); /* busy RSCN page list */
+ spin_lock_init(&rscn_mgr->rscn_id_list_lock);
+ rscn_mgr->free_rscn_count = 0;
+ rscn_mgr->unf_get_free_rscn_node = unf_get_free_rscn_node;
+ rscn_mgr->unf_release_rscn_node = unf_release_rscn_node;
+
+ ret = unf_init_rscn_pool(lport);
+ return ret;
+}
+
+static void unf_destroy_rscn_mngr(struct unf_lport *lport)
+{
+ struct unf_rscn_mgr *rscn_mgr = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+ rscn_mgr = &lport->disc.rscn_mgr;
+
+ rscn_mgr->free_rscn_count = 0;
+ rscn_mgr->unf_get_free_rscn_node = NULL;
+ rscn_mgr->unf_release_rscn_node = NULL;
+
+ unf_freerscn_pool(lport);
+}
+
+static u32 unf_init_disc_rport_pool(struct unf_lport *lport)
+{
+ struct unf_disc_rport_mg *disc_mgr = NULL;
+ struct unf_disc_rport *disc_rport = NULL;
+ u32 i = 0;
+ u32 max_log_in = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ max_log_in = lport->low_level_func.lport_cfg_items.max_login;
+ disc_mgr = &lport->disc.disc_rport_mgr;
+
+ /* Alloc R_Port Disc Pool buffer */
+ disc_mgr->disc_pool_add =
+ vmalloc(max_log_in * sizeof(struct unf_disc_rport));
+ if (!disc_mgr->disc_pool_add) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) allocate disc RPort pool failed", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(disc_mgr->disc_pool_add, 0, (max_log_in * sizeof(struct unf_disc_rport)));
+
+ /* Add R_Port to (free) DISC R_Port Pool */
+ spin_lock_irqsave(&lport->disc.rport_busy_pool_lock, flag);
+ disc_rport = (struct unf_disc_rport *)(disc_mgr->disc_pool_add);
+ for (i = 0; i < max_log_in; i++) {
+ /* Add tail to list_disc_Rport_pool */
+ list_add_tail(&disc_rport->entry_rport, &disc_mgr->list_disc_rports_pool);
+
+ disc_rport++;
+ }
+ spin_unlock_irqrestore(&lport->disc.rport_busy_pool_lock, flag);
+
+ return RETURN_OK;
+}
+
+static void unf_free_disc_rport_pool(struct unf_lport *lport)
+{
+ struct unf_disc *disc = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ disc = &lport->disc;
+ if (disc->disc_rport_mgr.disc_pool_add) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_INFO, "[info]Port(0x%x) free disc RPort pool", lport->port_id);
+
+ vfree(disc->disc_rport_mgr.disc_pool_add);
+ disc->disc_rport_mgr.disc_pool_add = NULL;
+ }
+}
+
+int unf_discover_port_info(void *arg_in)
+{
+ struct unf_disc_gs_event_info *disc_gs_info = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+
+ FC_CHECK_RETURN_VALUE(arg_in, UNF_RETURN_ERROR);
+
+ disc_gs_info = (struct unf_disc_gs_event_info *)arg_in;
+ unf_lport = (struct unf_lport *)disc_gs_info->lport;
+ unf_rport = (struct unf_rport *)disc_gs_info->rport;
+
+ switch (disc_gs_info->type) {
+ case UNF_DISC_GET_PORT_NAME:
+ ret = unf_send_gpn_id(unf_lport, unf_rport, disc_gs_info->rport_id);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send GPN_ID failed RPort(0x%x)",
+ unf_lport->nport_id, disc_gs_info->rport_id);
+ unf_rcv_gpn_id_rsp_unknown(unf_lport, disc_gs_info->rport_id);
+ }
+ break;
+ case UNF_DISC_GET_FEATURE:
+ ret = unf_send_gff_id(unf_lport, unf_rport, disc_gs_info->rport_id);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send GFF_ID failed to get RPort(0x%x)'s feature",
+ unf_lport->port_id, disc_gs_info->rport_id);
+
+ unf_rcv_gff_id_rsp_unknown(unf_lport, disc_gs_info->rport_id);
+ }
+ break;
+ case UNF_DISC_GET_NODE_NAME:
+ ret = unf_send_gnn_id(unf_lport, unf_rport, disc_gs_info->rport_id);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) GNN_ID send failed with NPort ID(0x%x)",
+ unf_lport->port_id, disc_gs_info->rport_id);
+
+ /* NOTE: Continue to next stage */
+ unf_rcv_gnn_id_rsp_unknown(unf_lport, unf_rport, disc_gs_info->rport_id);
+ }
+ break;
+ default:
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[err]Send GS packet type(0x%x) is unknown", disc_gs_info->type);
+ }
+
+ kfree(disc_gs_info);
+
+ return (int)ret;
+}
+
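+/*
+ * Queue a GS discovery event (GPN_ID/GNN_ID/GFF_ID) for the given N_Port ID
+ * on the root L_Port's discovery thread and wake the thread. Returns
+ * RETURN_OK without queuing when the link is down or the thread is exiting.
+ */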
+u32 unf_get_and_post_disc_event(void *lport, void *sns_port, u32 nport_id,
+ enum unf_disc_type type)
+{
+ struct unf_disc_gs_event_info *disc_gs_info = NULL;
+ ulong flag = 0;
+ struct unf_lport *root_lport = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_disc_manage_info *disc_info = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(sns_port, UNF_RETURN_ERROR);
+
+ unf_lport = (struct unf_lport *)lport;
+
+ if (unf_lport->link_up == UNF_PORT_LINK_DOWN)
+ return RETURN_OK;
+
+ root_lport = unf_lport->root_lport;
+ disc_info = &root_lport->disc.disc_thread_info;
+
+ if (disc_info->thread_exit)
+ return RETURN_OK;
+
+ disc_gs_info = kmalloc(sizeof(struct unf_disc_gs_event_info), GFP_ATOMIC);
+ if (!disc_gs_info)
+ return UNF_RETURN_ERROR;
+
+ disc_gs_info->type = type;
+ disc_gs_info->lport = unf_lport;
+ disc_gs_info->rport = sns_port;
+ disc_gs_info->rport_id = nport_id;
+
+ INIT_LIST_HEAD(&disc_gs_info->list_entry);
+
+ spin_lock_irqsave(&disc_info->disc_event_list_lock, flag);
+ list_add_tail(&disc_gs_info->list_entry, &disc_info->list_head);
+ spin_unlock_irqrestore(&disc_info->disc_event_list_lock, flag);
+ wake_up_process(disc_info->thread);
+ return RETURN_OK;
+}
+
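+/*
+ * Main loop of the per-port discovery kthread: sleep while the event list
+ * is empty or the GS send window (disc_contrl_size) is exhausted, otherwise
+ * dequeue one event and issue the corresponding GS request via
+ * unf_discover_port_info().
+ */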
+static int unf_disc_event_process(void *arg)
+{
+ struct list_head *node = NULL;
+ struct unf_disc_gs_event_info *disc_gs_info = NULL;
+ ulong flags = 0;
+ struct unf_disc *disc = (struct unf_disc *)arg;
+ struct unf_disc_manage_info *disc_info = &disc->disc_thread_info;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Port(0x%x) enter discovery thread.", disc->lport->port_id);
+
+ while (!kthread_should_stop()) {
+ if (disc_info->thread_exit)
+ break;
+
+ spin_lock_irqsave(&disc_info->disc_event_list_lock, flags);
+ if ((list_empty(&disc_info->list_head)) ||
+ (atomic_read(&disc_info->disc_contrl_size) == 0)) {
+ spin_unlock_irqrestore(&disc_info->disc_event_list_lock, flags);
+
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout((long)msecs_to_jiffies(UNF_S_TO_MS));
+ } else {
+ node = UNF_OS_LIST_NEXT(&disc_info->list_head);
+ list_del_init(node);
+ disc_gs_info = list_entry(node, struct unf_disc_gs_event_info, list_entry);
+ spin_unlock_irqrestore(&disc_info->disc_event_list_lock, flags);
+ unf_discover_port_info(disc_gs_info);
+ }
+ }
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
+ "Port(0x%x) discovery thread over.", disc->lport->port_id);
+
+ return RETURN_OK;
+}
+
+void unf_flush_disc_event(void *disc, void *vport)
+{
+ struct unf_disc *unf_disc = (struct unf_disc *)disc;
+ struct unf_disc_manage_info *disc_info = NULL;
+ struct list_head *list = NULL;
+ struct list_head *list_tmp = NULL;
+ struct unf_disc_gs_event_info *disc_gs_info = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(disc);
+
+ disc_info = &unf_disc->disc_thread_info;
+
+ spin_lock_irqsave(&disc_info->disc_event_list_lock, flag);
+ list_for_each_safe(list, list_tmp, &disc_info->list_head) {
+ disc_gs_info = list_entry(list, struct unf_disc_gs_event_info, list_entry);
+
+ if (!vport || disc_gs_info->lport == vport) {
+ list_del_init(&disc_gs_info->list_entry);
+ kfree(disc_gs_info);
+ }
+ }
+
+ if (!vport)
+ atomic_set(&disc_info->disc_contrl_size, UNF_MAX_GS_SEND_NUM);
+ spin_unlock_irqrestore(&disc_info->disc_event_list_lock, flag);
+}
+
+void unf_disc_ctrl_size_inc(void *lport, u32 cmnd)
+{
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_lport = unf_lport->root_lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ if (atomic_read(&unf_lport->disc.disc_thread_info.disc_contrl_size) ==
+ UNF_MAX_GS_SEND_NUM)
+ return;
+
+ if (cmnd == NS_GPN_ID || cmnd == NS_GNN_ID || cmnd == NS_GFF_ID)
+ atomic_inc(&unf_lport->disc.disc_thread_info.disc_contrl_size);
+}
+
+void unf_destroy_disc_thread(void *disc)
+{
+ struct unf_disc_manage_info *disc_info = NULL;
+ struct unf_disc *unf_disc = (struct unf_disc *)disc;
+
+ FC_CHECK_RETURN_VOID(unf_disc);
+
+ disc_info = &unf_disc->disc_thread_info;
+
+ disc_info->thread_exit = true;
+ unf_flush_disc_event(unf_disc, NULL);
+
+ wake_up_process(disc_info->thread);
+ kthread_stop(disc_info->thread);
+ disc_info->thread = NULL;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) destroy discovery thread succeed.",
+ unf_disc->lport->port_id);
+}
+
+u32 unf_crerate_disc_thread(void *disc)
+{
+ struct unf_disc_manage_info *disc_info = NULL;
+ struct unf_disc *unf_disc = (struct unf_disc *)disc;
+
+ FC_CHECK_RETURN_VALUE(unf_disc, UNF_RETURN_ERROR);
+
+	/* Create a new discovery thread for this port. */
+ disc_info = &unf_disc->disc_thread_info;
+
+ memset(disc_info, 0, sizeof(struct unf_disc_manage_info));
+
+ INIT_LIST_HEAD(&disc_info->list_head);
+ spin_lock_init(&disc_info->disc_event_list_lock);
+ atomic_set(&disc_info->disc_contrl_size, UNF_MAX_GS_SEND_NUM);
+
+ disc_info->thread_exit = false;
+ disc_info->thread = kthread_create(unf_disc_event_process, unf_disc, "%x_DiscT",
+ unf_disc->lport->port_id);
+
+ if (IS_ERR(disc_info->thread) || !disc_info->thread) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Port(0x%x) creat discovery thread(0x%p) unsuccessful.",
+ unf_disc->lport->port_id, disc_info->thread);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ wake_up_process(disc_info->thread);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Port(0x%x) creat discovery thread succeed.", unf_disc->lport->port_id);
+
+ return RETURN_OK;
+}
+
+void unf_disc_ref_cnt_dec(struct unf_disc *disc)
+{
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(disc);
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ if (atomic_dec_and_test(&disc->disc_ref_cnt)) {
+ if (disc->disc_completion)
+ complete(disc->disc_completion);
+ }
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+}
+
+void unf_wait_disc_complete(struct unf_lport *lport)
+{
+ struct unf_disc *disc = NULL;
+ bool wait = false;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ u64 time_out = 0;
+
+ struct completion disc_completion;
+
+ init_completion(&disc_completion);
+ disc = &lport->disc;
+
+ UNF_DELAYED_WORK_SYNC(ret, (lport->port_id), (&disc->disc_work),
+ "Disc_work");
+ if (ret == RETURN_OK)
+ unf_disc_ref_cnt_dec(disc);
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ if (atomic_read(&disc->disc_ref_cnt) != 0) {
+ disc->disc_completion = &disc_completion;
+ wait = true;
+ }
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ if (wait) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) begin to wait for discover completion",
+ lport->port_id);
+
+ time_out =
+ wait_for_completion_timeout(disc->disc_completion,
+ msecs_to_jiffies(UNF_OS_REMOVE_CARD_TIMEOUT));
+ if (time_out == 0)
+ unf_cm_mark_dirty_mem(lport, UNF_LPORT_DIRTY_FLAG_DISC_DIRTY);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) wait for discover completion end", lport->port_id);
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ disc->disc_completion = NULL;
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+ }
+}
+
+void unf_disc_mgr_destroy(void *lport)
+{
+ struct unf_disc *disc = NULL;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+ unf_lport = (struct unf_lport *)lport;
+
+ disc = &unf_lport->disc;
+ disc->retry_count = 0;
+ disc->disc_temp.unf_disc_start = NULL;
+ disc->disc_temp.unf_disc_stop = NULL;
+ disc->disc_temp.unf_disc_callback = NULL;
+
+ unf_free_disc_rport_pool(unf_lport);
+ unf_destroy_rscn_mngr(unf_lport);
+ unf_wait_disc_complete(unf_lport);
+
+ if (unf_lport->root_lport != unf_lport)
+ return;
+
+ unf_destroy_disc_thread(disc);
+ unf_free_rport_pool(unf_lport);
+ unf_lport->destroy_step = UNF_LPORT_DESTROY_STEP_6_DESTROY_DISC_MGR;
+}
+
+void unf_disc_error_recovery(void *lport)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct unf_disc *disc = NULL;
+ ulong delay = 0;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_lport = (struct unf_lport *)lport;
+ disc = &unf_lport->disc;
+
+ unf_rport = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_DIR_SERV);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Port(0x%x) find RPort failed", unf_lport->port_id);
+ return;
+ }
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+
+ /* Delay work is pending */
+ if (delayed_work_pending(&disc->disc_work)) {
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) disc_work is running and do nothing",
+ unf_lport->port_id);
+ return;
+ }
+
+ /* Continue to retry */
+ if (disc->retry_count < disc->max_retry_count) {
+ disc->retry_count++;
+ delay = (ulong)unf_lport->ed_tov;
+ if (queue_delayed_work(unf_wq, &disc->disc_work,
+ (ulong)msecs_to_jiffies((u32)delay)))
+ atomic_inc(&disc->disc_ref_cnt);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+ } else {
+ /* Go to next stage */
+ if (disc->states == UNF_DISC_ST_GIDPT_WAIT) {
+ /* GID_PT_WAIT --->>> Send GID_FT */
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_RETRY_TIMEOUT);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ while ((ret != RETURN_OK) &&
+ (disc->retry_count < disc->max_retry_count)) {
+ ret = unf_send_gid_ft(unf_lport, unf_rport);
+ disc->retry_count++;
+ }
+ } else if (disc->states == UNF_DISC_ST_GIDFT_WAIT) {
+ /* GID_FT_WAIT --->>> Send LOGO */
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_RETRY_TIMEOUT);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+ } else {
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+ }
+ }
+}
+
+enum unf_disc_state unf_disc_stat_start(enum unf_disc_state old_state,
+ enum unf_disc_event event)
+{
+ enum unf_disc_state next_state = UNF_DISC_ST_END;
+
+ if (event == UNF_EVENT_DISC_NORMAL_ENTER)
+ next_state = UNF_DISC_ST_GIDPT_WAIT;
+ else
+ next_state = old_state;
+
+ return next_state;
+}
+
+enum unf_disc_state unf_disc_stat_gid_pt_wait(enum unf_disc_state old_state,
+ enum unf_disc_event event)
+{
+ enum unf_disc_state next_state = UNF_DISC_ST_END;
+
+ switch (event) {
+ case UNF_EVENT_DISC_FAILED:
+ next_state = UNF_DISC_ST_GIDPT_WAIT;
+ break;
+
+ case UNF_EVENT_DISC_RETRY_TIMEOUT:
+ next_state = UNF_DISC_ST_GIDFT_WAIT;
+ break;
+
+ case UNF_EVENT_DISC_SUCCESS:
+ next_state = UNF_DISC_ST_END;
+ break;
+
+ case UNF_EVENT_DISC_LINKDOWN:
+ next_state = UNF_DISC_ST_START;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+enum unf_disc_state unf_disc_stat_gid_ft_wait(enum unf_disc_state old_state,
+ enum unf_disc_event event)
+{
+ enum unf_disc_state next_state = UNF_DISC_ST_END;
+
+ switch (event) {
+ case UNF_EVENT_DISC_FAILED:
+ next_state = UNF_DISC_ST_GIDFT_WAIT;
+ break;
+
+ case UNF_EVENT_DISC_RETRY_TIMEOUT:
+ next_state = UNF_DISC_ST_END;
+ break;
+
+ case UNF_EVENT_DISC_LINKDOWN:
+ next_state = UNF_DISC_ST_START;
+ break;
+
+ case UNF_EVENT_DISC_SUCCESS:
+ next_state = UNF_DISC_ST_END;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+enum unf_disc_state unf_disc_stat_end(enum unf_disc_state old_state, enum unf_disc_event event)
+{
+ enum unf_disc_state next_state = UNF_DISC_ST_END;
+
+ if (event == UNF_EVENT_DISC_LINKDOWN)
+ next_state = UNF_DISC_ST_START;
+ else
+ next_state = old_state;
+
+ return next_state;
+}
+
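+/*
+ * Discovery state machine: dispatch the event to the handler for the
+ * current state and commit the resulting state with unf_set_disc_state().
+ */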
+void unf_disc_state_ma(struct unf_lport *lport, enum unf_disc_event event)
+{
+ struct unf_disc *disc = NULL;
+ enum unf_disc_state old_state = UNF_DISC_ST_START;
+ enum unf_disc_state next_state = UNF_DISC_ST_START;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ disc = &lport->disc;
+ old_state = disc->states;
+
+ switch (disc->states) {
+ case UNF_DISC_ST_START:
+ next_state = unf_disc_stat_start(old_state, event);
+ break;
+
+ case UNF_DISC_ST_GIDPT_WAIT:
+ next_state = unf_disc_stat_gid_pt_wait(old_state, event);
+ break;
+
+ case UNF_DISC_ST_GIDFT_WAIT:
+ next_state = unf_disc_stat_gid_ft_wait(old_state, event);
+ break;
+
+ case UNF_DISC_ST_END:
+ next_state = unf_disc_stat_end(old_state, event);
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ unf_set_disc_state(disc, next_state);
+}
+
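+/*
+ * Delayed-work handler for disc->disc_work: look up the fabric R_Port
+ * (0xfffffc) and, depending on the saved discovery state, resend GID_PT or
+ * GID_FT, then drop the disc reference taken when the work was queued.
+ */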
+static void unf_lport_disc_timeout(struct work_struct *work)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_disc *disc = NULL;
+ enum unf_disc_state state = UNF_DISC_ST_END;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ disc = container_of(work, struct unf_disc, disc_work.work);
+ if (!disc) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Get discover pointer failed");
+
+ return;
+ }
+
+ unf_lport = disc->lport;
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Find Port by discovery work failed");
+
+ unf_disc_ref_cnt_dec(disc);
+ return;
+ }
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ state = disc->states;
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ unf_rport = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_DIR_SERV); /* 0xfffffc */
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) find fabric RPort failed", unf_lport->port_id);
+
+ unf_disc_ref_cnt_dec(disc);
+ return;
+ }
+
+ switch (state) {
+ case UNF_DISC_ST_START:
+ break;
+
+ case UNF_DISC_ST_GIDPT_WAIT:
+ (void)unf_send_gid_pt(unf_lport, unf_rport);
+ break;
+
+ case UNF_DISC_ST_GIDFT_WAIT:
+ (void)unf_send_gid_ft(unf_lport, unf_rport);
+ break;
+
+ case UNF_DISC_ST_END:
+ break;
+
+ default:
+ break;
+ }
+
+ unf_disc_ref_cnt_dec(disc);
+}
+
+u32 unf_init_disc_mgr(struct unf_lport *lport)
+{
+ struct unf_disc *disc = NULL;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ disc = &lport->disc;
+ disc->max_retry_count = UNF_DISC_RETRY_TIMES;
+ disc->retry_count = 0;
+ disc->disc_flag = UNF_DISC_NONE;
+ INIT_LIST_HEAD(&disc->list_busy_rports);
+ INIT_LIST_HEAD(&disc->list_delete_rports);
+ INIT_LIST_HEAD(&disc->list_destroy_rports);
+ spin_lock_init(&disc->rport_busy_pool_lock);
+
+ disc->disc_rport_mgr.disc_pool_add = NULL;
+ INIT_LIST_HEAD(&disc->disc_rport_mgr.list_disc_rports_pool);
+ INIT_LIST_HEAD(&disc->disc_rport_mgr.list_disc_rports_busy);
+
+ disc->disc_completion = NULL;
+ disc->lport = lport;
+ INIT_DELAYED_WORK(&disc->disc_work, unf_lport_disc_timeout);
+ disc->disc_temp.unf_disc_start = unf_disc_start;
+ disc->disc_temp.unf_disc_stop = unf_disc_stop;
+ disc->disc_temp.unf_disc_callback = NULL;
+ atomic_set(&disc->disc_ref_cnt, 0);
+
+ /* Init RSCN Manager */
+ ret = unf_init_rscn_mgr(lport);
+ if (ret != RETURN_OK)
+ return UNF_RETURN_ERROR;
+
+ if (lport->root_lport != lport)
+ return ret;
+
+ ret = unf_crerate_disc_thread(disc);
+ if (ret != RETURN_OK) {
+ unf_destroy_rscn_mngr(lport);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Init R_Port free Pool */
+ ret = unf_init_rport_pool(lport);
+ if (ret != RETURN_OK) {
+ unf_destroy_disc_thread(disc);
+ unf_destroy_rscn_mngr(lport);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Init R_Port free disc Pool */
+ ret = unf_init_disc_rport_pool(lport);
+ if (ret != RETURN_OK) {
+ unf_destroy_disc_thread(disc);
+ unf_free_rport_pool(lport);
+ unf_destroy_rscn_mngr(lport);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return ret;
+}
diff --git a/drivers/scsi/spfc/common/unf_disc.h b/drivers/scsi/spfc/common/unf_disc.h
new file mode 100644
index 000000000000..7ecad3eec424
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_disc.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_DISC_H
+#define UNF_DISC_H
+
+#include "unf_type.h"
+
+#define UNF_DISC_RETRY_TIMES 3
+#define UNF_DISC_NONE 0
+#define UNF_DISC_FABRIC 1
+#define UNF_DISC_LOOP 2
+
+enum unf_disc_state {
+ UNF_DISC_ST_START = 0x3000,
+ UNF_DISC_ST_GIDPT_WAIT,
+ UNF_DISC_ST_GIDFT_WAIT,
+ UNF_DISC_ST_END
+};
+
+enum unf_disc_event {
+ UNF_EVENT_DISC_NORMAL_ENTER = 0x8000,
+ UNF_EVENT_DISC_FAILED = 0x8001,
+ UNF_EVENT_DISC_SUCCESS = 0x8002,
+ UNF_EVENT_DISC_RETRY_TIMEOUT = 0x8003,
+ UNF_EVENT_DISC_LINKDOWN = 0x8004
+};
+
+enum unf_disc_type {
+ UNF_DISC_GET_PORT_NAME = 0,
+ UNF_DISC_GET_NODE_NAME,
+ UNF_DISC_GET_FEATURE
+};
+
+struct unf_disc_gs_event_info {
+ void *lport;
+ void *rport;
+ u32 rport_id;
+ enum unf_disc_type type;
+ struct list_head list_entry;
+};
+
+u32 unf_get_and_post_disc_event(void *lport, void *sns_port, u32 nport_id,
+ enum unf_disc_type type);
+
+void unf_flush_disc_event(void *disc, void *vport);
+void unf_disc_ctrl_size_inc(void *lport, u32 cmnd);
+void unf_disc_error_recovery(void *lport);
+void unf_disc_mgr_destroy(void *lport);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_event.c b/drivers/scsi/spfc/common/unf_event.c
new file mode 100644
index 000000000000..cf51c31ca4a3
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_event.c
@@ -0,0 +1,517 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_event.h"
+#include "unf_log.h"
+#include "unf_common.h"
+#include "unf_lport.h"
+
+struct unf_event_list fc_event_list;
+struct unf_global_event_queue global_event_queue;
+
+/* Maximum number of global event nodes */
+#define UNF_MAX_GLOBAL_ENENT_NODE 24
+
+u32 unf_init_event_msg(struct unf_lport *lport)
+{
+ struct unf_event_mgr *event_mgr = NULL;
+ struct unf_cm_event_report *event_node = NULL;
+ u32 ret = RETURN_OK;
+ u32 index = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ event_mgr = &lport->event_mgr;
+
+	/* Allocate and initialize the event node resources */
+ event_mgr->mem_add = vmalloc((size_t)event_mgr->free_event_count *
+ sizeof(struct unf_cm_event_report));
+ if (!event_mgr->mem_add) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) allocate event manager failed",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(event_mgr->mem_add, 0,
+ ((size_t)event_mgr->free_event_count * sizeof(struct unf_cm_event_report)));
+
+ event_node = (struct unf_cm_event_report *)(event_mgr->mem_add);
+
+ spin_lock_irqsave(&event_mgr->port_event_lock, flag);
+ for (index = 0; index < event_mgr->free_event_count; index++) {
+ INIT_LIST_HEAD(&event_node->list_entry);
+ list_add_tail(&event_node->list_entry, &event_mgr->list_free_event);
+ event_node++;
+ }
+ spin_unlock_irqrestore(&event_mgr->port_event_lock, flag);
+
+ return ret;
+}
+
+static void unf_del_event_center_fun_op(struct unf_lport *lport)
+{
+ struct unf_event_mgr *event_mgr = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ event_mgr = &lport->event_mgr;
+ event_mgr->unf_get_free_event_func = NULL;
+ event_mgr->unf_release_event = NULL;
+ event_mgr->unf_post_event_func = NULL;
+}
+
+void unf_init_event_node(struct unf_cm_event_report *event_node)
+{
+ FC_CHECK_RETURN_VOID(event_node);
+
+ event_node->event = UNF_EVENT_TYPE_REQUIRE;
+ event_node->event_asy_flag = UNF_EVENT_ASYN;
+ event_node->delay_times = 0;
+ event_node->para_in = NULL;
+ event_node->para_out = NULL;
+ event_node->result = 0;
+ event_node->lport = NULL;
+ event_node->unf_event_task = NULL;
+}
+
+struct unf_cm_event_report *unf_get_free_event_node(void *lport)
+{
+ struct unf_event_mgr *event_mgr = NULL;
+ struct unf_cm_event_report *event_node = NULL;
+ struct list_head *list_node = NULL;
+ struct unf_lport *unf_lport = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ unf_lport = (struct unf_lport *)lport;
+ unf_lport = unf_lport->root_lport;
+
+ if (unlikely(atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP))
+ return NULL;
+
+ event_mgr = &unf_lport->event_mgr;
+
+ spin_lock_irqsave(&event_mgr->port_event_lock, flags);
+ if (list_empty(&event_mgr->list_free_event)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) have no event node anymore",
+ unf_lport->port_id);
+
+ spin_unlock_irqrestore(&event_mgr->port_event_lock, flags);
+ return NULL;
+ }
+
+ list_node = UNF_OS_LIST_NEXT(&event_mgr->list_free_event);
+ list_del(list_node);
+ event_mgr->free_event_count--;
+ event_node = list_entry(list_node, struct unf_cm_event_report, list_entry);
+
+ unf_init_event_node(event_node);
+ spin_unlock_irqrestore(&event_mgr->port_event_lock, flags);
+
+ return event_node;
+}
+
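+/*
+ * Post an event node for processing: with no L_Port (or no chip_info on the
+ * root L_Port) the node goes onto the global fc_event_list and the global
+ * event thread is woken; otherwise it is queued to the per-chip worker
+ * thread.
+ */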
+void unf_post_event(void *lport, void *event_node)
+{
+ struct unf_cm_event_report *cm_event_node = NULL;
+ struct unf_chip_manage_info *card_thread_info = NULL;
+ struct unf_lport *unf_lport = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(event_node);
+ cm_event_node = (struct unf_cm_event_report *)event_node;
+
+ /* If null, post to global event center */
+ if (!lport) {
+ spin_lock_irqsave(&fc_event_list.fc_event_list_lock, flags);
+ fc_event_list.list_num++;
+ list_add_tail(&cm_event_node->list_entry, &fc_event_list.list_head);
+ spin_unlock_irqrestore(&fc_event_list.fc_event_list_lock, flags);
+
+ wake_up_process(event_task_thread);
+ } else {
+ unf_lport = (struct unf_lport *)lport;
+ unf_lport = unf_lport->root_lport;
+ card_thread_info = unf_lport->chip_info;
+
+ /* Post to global event center */
+ if (!card_thread_info) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_WARN,
+ "[warn]Port(0x%x) has strange event with type(0x%x)",
+ unf_lport->nport_id, cm_event_node->event);
+
+ spin_lock_irqsave(&fc_event_list.fc_event_list_lock, flags);
+ fc_event_list.list_num++;
+ list_add_tail(&cm_event_node->list_entry, &fc_event_list.list_head);
+ spin_unlock_irqrestore(&fc_event_list.fc_event_list_lock, flags);
+
+ wake_up_process(event_task_thread);
+ } else {
+ spin_lock_irqsave(&card_thread_info->chip_event_list_lock, flags);
+ card_thread_info->list_num++;
+ list_add_tail(&cm_event_node->list_entry, &card_thread_info->list_head);
+ spin_unlock_irqrestore(&card_thread_info->chip_event_list_lock, flags);
+
+ wake_up_process(card_thread_info->thread);
+ }
+ }
+}
+
+void unf_check_event_mgr_status(struct unf_event_mgr *event_mgr)
+{
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(event_mgr);
+
+ spin_lock_irqsave(&event_mgr->port_event_lock, flag);
+ if (event_mgr->emg_completion && event_mgr->free_event_count == UNF_MAX_EVENT_NODE)
+ complete(event_mgr->emg_completion);
+
+ spin_unlock_irqrestore(&event_mgr->port_event_lock, flag);
+}
+
+void unf_release_event(void *lport, void *event_node)
+{
+ struct unf_event_mgr *event_mgr = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_cm_event_report *cm_event_node = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(event_node);
+
+ cm_event_node = (struct unf_cm_event_report *)event_node;
+ unf_lport = (struct unf_lport *)lport;
+ unf_lport = unf_lport->root_lport;
+ event_mgr = &unf_lport->event_mgr;
+
+ spin_lock_irqsave(&event_mgr->port_event_lock, flags);
+ event_mgr->free_event_count++;
+ unf_init_event_node(cm_event_node);
+ list_add_tail(&cm_event_node->list_entry, &event_mgr->list_free_event);
+ spin_unlock_irqrestore(&event_mgr->port_event_lock, flags);
+
+ unf_check_event_mgr_status(event_mgr);
+}
+
+void unf_release_global_event(void *event_node)
+{
+ ulong flag = 0;
+ struct unf_cm_event_report *cm_event_node = NULL;
+
+ FC_CHECK_RETURN_VOID(event_node);
+ cm_event_node = (struct unf_cm_event_report *)event_node;
+
+ unf_init_event_node(cm_event_node);
+
+ spin_lock_irqsave(&global_event_queue.global_event_list_lock, flag);
+ global_event_queue.list_number++;
+ list_add_tail(&cm_event_node->list_entry, &global_event_queue.global_event_list);
+ spin_unlock_irqrestore(&global_event_queue.global_event_list_lock, flag);
+}
+
+u32 unf_init_event_center(void *lport)
+{
+ struct unf_event_mgr *event_mgr = NULL;
+ u32 ret = RETURN_OK;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ unf_lport = (struct unf_lport *)lport;
+
+	/* Initialize the event manager */
+ event_mgr = &unf_lport->event_mgr;
+ event_mgr->free_event_count = UNF_MAX_EVENT_NODE;
+ event_mgr->unf_get_free_event_func = unf_get_free_event_node;
+ event_mgr->unf_release_event = unf_release_event;
+ event_mgr->unf_post_event_func = unf_post_event;
+
+ INIT_LIST_HEAD(&event_mgr->list_free_event);
+ spin_lock_init(&event_mgr->port_event_lock);
+ event_mgr->emg_completion = NULL;
+
+ ret = unf_init_event_msg(unf_lport);
+
+ return ret;
+}
+
+void unf_wait_event_mgr_complete(struct unf_event_mgr *event_mgr)
+{
+ struct unf_event_mgr *event_mgr_temp = NULL;
+ bool wait = false;
+ ulong mg_flag = 0;
+
+ struct completion fc_event_completion;
+
+ init_completion(&fc_event_completion);
+ FC_CHECK_RETURN_VOID(event_mgr);
+ event_mgr_temp = event_mgr;
+
+ spin_lock_irqsave(&event_mgr_temp->port_event_lock, mg_flag);
+ if (event_mgr_temp->free_event_count != UNF_MAX_EVENT_NODE) {
+ event_mgr_temp->emg_completion = &fc_event_completion;
+ wait = true;
+ }
+ spin_unlock_irqrestore(&event_mgr_temp->port_event_lock, mg_flag);
+
+ if (wait)
+ wait_for_completion(event_mgr_temp->emg_completion);
+
+ spin_lock_irqsave(&event_mgr_temp->port_event_lock, mg_flag);
+ event_mgr_temp->emg_completion = NULL;
+ spin_unlock_irqrestore(&event_mgr_temp->port_event_lock, mg_flag);
+}
+
+u32 unf_event_center_destroy(void *lport)
+{
+ struct unf_event_mgr *event_mgr = NULL;
+ struct list_head *list = NULL;
+ struct list_head *list_tmp = NULL;
+ struct unf_cm_event_report *event_node = NULL;
+ u32 ret = RETURN_OK;
+ ulong flag = 0;
+ ulong list_lock_flag = 0;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ unf_lport = (struct unf_lport *)lport;
+ event_mgr = &unf_lport->event_mgr;
+
+ spin_lock_irqsave(&fc_event_list.fc_event_list_lock, list_lock_flag);
+ if (!list_empty(&fc_event_list.list_head)) {
+ list_for_each_safe(list, list_tmp, &fc_event_list.list_head) {
+ event_node = list_entry(list, struct unf_cm_event_report, list_entry);
+
+ if (event_node->lport == unf_lport) {
+ list_del_init(&event_node->list_entry);
+ if (event_node->event_asy_flag == UNF_EVENT_SYN) {
+ event_node->result = UNF_RETURN_ERROR;
+ complete(&event_node->event_comp);
+ }
+
+ spin_lock_irqsave(&event_mgr->port_event_lock, flag);
+ event_mgr->free_event_count++;
+ list_add_tail(&event_node->list_entry,
+ &event_mgr->list_free_event);
+ spin_unlock_irqrestore(&event_mgr->port_event_lock, flag);
+ }
+ }
+ }
+ spin_unlock_irqrestore(&fc_event_list.fc_event_list_lock, list_lock_flag);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) begin to wait event",
+ unf_lport->port_id);
+
+ unf_wait_event_mgr_complete(event_mgr);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) wait event process end",
+ unf_lport->port_id);
+
+ unf_del_event_center_fun_op(unf_lport);
+
+ vfree(event_mgr->mem_add);
+ event_mgr->mem_add = NULL;
+ unf_lport->destroy_step = UNF_LPORT_DESTROY_STEP_3_DESTROY_EVENT_CENTER;
+
+ return ret;
+}
+
+static void unf_procee_asyn_event(struct unf_cm_event_report *event_node)
+{
+ struct unf_lport *lport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ lport = (struct unf_lport *)event_node->lport;
+
+ FC_CHECK_RETURN_VOID(lport);
+ if (event_node->unf_event_task) {
+ ret = (u32)event_node->unf_event_task(event_node->para_in,
+ event_node->para_out);
+ }
+
+ if (lport->event_mgr.unf_release_event)
+ lport->event_mgr.unf_release_event(lport, event_node);
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_WARN,
+ "[warn]Port(0x%x) handle event(0x%x) failed",
+ lport->port_id, event_node->event);
+ }
+}
+
+void unf_handle_event(struct unf_cm_event_report *event_node)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 event = 0;
+ u32 event_asy_flag = UNF_EVENT_ASYN;
+
+ FC_CHECK_RETURN_VOID(event_node);
+
+ event = event_node->event;
+ event_asy_flag = event_node->event_asy_flag;
+
+ switch (event_asy_flag) {
+ case UNF_EVENT_SYN: /* synchronous event node */
+ case UNF_GLOBAL_EVENT_SYN:
+ if (event_node->unf_event_task)
+ ret = (u32)event_node->unf_event_task(event_node->para_in,
+ event_node->para_out);
+
+ event_node->result = ret;
+ complete(&event_node->event_comp);
+ break;
+
+ case UNF_EVENT_ASYN: /* asynchronous event node */
+ unf_procee_asyn_event(event_node);
+ break;
+
+ case UNF_GLOBAL_EVENT_ASYN:
+ if (event_node->unf_event_task) {
+ ret = (u32)event_node->unf_event_task(event_node->para_in,
+ event_node->para_out);
+ }
+
+ unf_release_global_event(event_node);
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_WARN,
+ "[warn]handle global event(0x%x) failed", event);
+ }
+ break;
+
+ default:
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_WARN,
+ "[warn]Unknown event(0x%x)", event);
+ break;
+ }
+}
+
+u32 unf_init_global_event_msg(void)
+{
+ struct unf_cm_event_report *event_node = NULL;
+ u32 ret = RETURN_OK;
+ u32 index = 0;
+ ulong flag = 0;
+
+ INIT_LIST_HEAD(&global_event_queue.global_event_list);
+ spin_lock_init(&global_event_queue.global_event_list_lock);
+ global_event_queue.list_number = 0;
+
+ global_event_queue.global_event_add = vmalloc(UNF_MAX_GLOBAL_ENENT_NODE *
+ sizeof(struct unf_cm_event_report));
+ if (!global_event_queue.global_event_add) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Can't allocate global event queue");
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(global_event_queue.global_event_add, 0,
+ (UNF_MAX_GLOBAL_ENENT_NODE * sizeof(struct unf_cm_event_report)));
+
+ event_node = (struct unf_cm_event_report *)(global_event_queue.global_event_add);
+
+ spin_lock_irqsave(&global_event_queue.global_event_list_lock, flag);
+ for (index = 0; index < UNF_MAX_GLOBAL_ENENT_NODE; index++) {
+ INIT_LIST_HEAD(&event_node->list_entry);
+ list_add_tail(&event_node->list_entry, &global_event_queue.global_event_list);
+
+ global_event_queue.list_number++;
+ event_node++;
+ }
+ spin_unlock_irqrestore(&global_event_queue.global_event_list_lock, flag);
+
+ return ret;
+}
+
+void unf_destroy_global_event_msg(void)
+{
+ if (global_event_queue.list_number != UNF_MAX_GLOBAL_ENENT_NODE) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_CRITICAL,
+ "[warn]Global event release not complete with remain nodes(0x%x)",
+ global_event_queue.list_number);
+ }
+
+ vfree(global_event_queue.global_event_add);
+}
+
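+/*
+ * Take a node from the global event queue, fill in the caller's handler and
+ * argument, and post it to the global event thread. For UNF_GLOBAL_EVENT_SYN
+ * the call blocks on event_comp and returns the handler's result; async
+ * callers get RETURN_OK immediately. Illustrative call (not taken from this
+ * patch): unf_schedule_global_event(arg, UNF_GLOBAL_EVENT_ASYN, handler).
+ */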
+u32 unf_schedule_global_event(void *para_in, u32 event_asy_flag,
+ int (*unf_event_task)(void *arg_in, void *arg_out))
+{
+ struct list_head *list_node = NULL;
+ struct unf_cm_event_report *event_node = NULL;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ spinlock_t *event_list_lock = NULL;
+
+ FC_CHECK_RETURN_VALUE(unf_event_task, UNF_RETURN_ERROR);
+
+ if (event_asy_flag != UNF_GLOBAL_EVENT_ASYN && event_asy_flag != UNF_GLOBAL_EVENT_SYN) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Event async flag(0x%x) abnormity",
+ event_asy_flag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ event_list_lock = &global_event_queue.global_event_list_lock;
+ spin_lock_irqsave(event_list_lock, flag);
+ if (list_empty(&global_event_queue.global_event_list)) {
+ spin_unlock_irqrestore(event_list_lock, flag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ list_node = UNF_OS_LIST_NEXT(&global_event_queue.global_event_list);
+ list_del_init(list_node);
+ global_event_queue.list_number--;
+ event_node = list_entry(list_node, struct unf_cm_event_report, list_entry);
+ spin_unlock_irqrestore(event_list_lock, flag);
+
+ /* Initial global event */
+ unf_init_event_node(event_node);
+ init_completion(&event_node->event_comp);
+ event_node->event_asy_flag = event_asy_flag;
+ event_node->unf_event_task = unf_event_task;
+ event_node->para_in = (void *)para_in;
+ event_node->para_out = NULL;
+
+ unf_post_event(NULL, event_node);
+
+ if (event_asy_flag == UNF_GLOBAL_EVENT_SYN) {
+ /* must wait for complete */
+ wait_for_completion(&event_node->event_comp);
+ ret = event_node->result;
+ unf_release_global_event(event_node);
+ } else {
+ ret = RETURN_OK;
+ }
+
+ return ret;
+}
+
+struct unf_cm_event_report *unf_get_one_event_node(void *lport)
+{
+ struct unf_lport *unf_lport = (struct unf_lport *)lport;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(unf_lport->event_mgr.unf_get_free_event_func, NULL);
+
+ return unf_lport->event_mgr.unf_get_free_event_func((void *)unf_lport);
+}
+
+void unf_post_one_event_node(void *lport, struct unf_cm_event_report *event)
+{
+ struct unf_lport *unf_lport = (struct unf_lport *)lport;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(event);
+
+ FC_CHECK_RETURN_VOID(unf_lport->event_mgr.unf_post_event_func);
+
+ unf_lport->event_mgr.unf_post_event_func((void *)unf_lport, event);
+}
diff --git a/drivers/scsi/spfc/common/unf_event.h b/drivers/scsi/spfc/common/unf_event.h
new file mode 100644
index 000000000000..4d23f11986af
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_event.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_EVENT_H
+#define UNF_EVENT_H
+
+#include "unf_type.h"
+
+#define UNF_MAX_EVENT_NODE 256
+
+enum unf_event_type {
+ UNF_EVENT_TYPE_ALARM = 0, /* Alarm */
+ UNF_EVENT_TYPE_REQUIRE, /* Require */
+ UNF_EVENT_TYPE_RECOVERY, /* Recovery */
+ UNF_EVENT_TYPE_BUTT
+};
+
+struct unf_cm_event_report {
+ /* event type */
+ u32 event;
+
+ /* ASY flag */
+ u32 event_asy_flag;
+
+	/* Delay times, must be an async event */
+ u32 delay_times;
+
+ struct list_head list_entry;
+
+ void *lport;
+
+ /* parameter */
+ void *para_in;
+ void *para_out;
+ u32 result;
+
+ /* recovery strategy */
+ int (*unf_event_task)(void *arg_in, void *arg_out);
+
+ struct completion event_comp;
+};
+
+struct unf_event_mgr {
+ spinlock_t port_event_lock;
+ u32 free_event_count;
+
+ struct list_head list_free_event;
+
+ struct completion *emg_completion;
+
+ void *mem_add;
+ struct unf_cm_event_report *(*unf_get_free_event_func)(void *lport);
+ void (*unf_release_event)(void *lport, void *event_node);
+ void (*unf_post_event_func)(void *lport, void *event_node);
+};
+
+struct unf_global_event_queue {
+ void *global_event_add;
+ u32 list_number;
+ struct list_head global_event_list;
+ spinlock_t global_event_list_lock;
+};
+
+struct unf_event_list {
+ struct list_head list_head;
+ spinlock_t fc_event_list_lock;
+ u32 list_num; /* list node number */
+};
+
+void unf_handle_event(struct unf_cm_event_report *event_node);
+u32 unf_init_global_event_msg(void);
+void unf_destroy_global_event_msg(void);
+u32 unf_schedule_global_event(void *para_in, u32 event_asy_flag,
+ int (*unf_event_task)(void *arg_in, void *arg_out));
+struct unf_cm_event_report *unf_get_one_event_node(void *lport);
+void unf_post_one_event_node(void *lport, struct unf_cm_event_report *event);
+u32 unf_event_center_destroy(void *lport);
+u32 unf_init_event_center(void *lport);
+
+extern struct task_struct *event_task_thread;
+extern struct unf_global_event_queue global_event_queue;
+extern struct unf_event_list fc_event_list;
+#endif
+
diff --git a/drivers/scsi/spfc/common/unf_exchg.c b/drivers/scsi/spfc/common/unf_exchg.c
new file mode 100644
index 000000000000..830cad7e6962
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_exchg.c
@@ -0,0 +1,2317 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_exchg.h"
+#include "unf_log.h"
+#include "unf_common.h"
+#include "unf_rport.h"
+#include "unf_service.h"
+#include "unf_io.h"
+#include "unf_exchg_abort.h"
+
+#define SPFC_XCHG_TYPE_MASK 0xFFFF
+#define UNF_DEL_XCHG_TIMER_SAFE(xchg) \
+ do { \
+ if (cancel_delayed_work(&((xchg)->timeout_work))) { \
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR, \
+ "Exchange(0x%p) is free, but timer is pending.", \
+ xchg); \
+ } else { \
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, \
+ UNF_CRITICAL, \
+ "Exchange(0x%p) is free, but timer is running.", \
+ xchg); \
+ } \
+ } while (0)
+
+static struct unf_io_flow_id io_stage_table[] = {
+ {"XCHG_ALLOC"}, {"TGT_RECEIVE_ABTS"},
+ {"TGT_ABTS_DONE"}, {"TGT_IO_SRR"},
+ {"SFS_RESPONSE"}, {"SFS_TIMEOUT"},
+ {"INI_SEND_CMND"}, {"INI_RESPONSE_DONE"},
+ {"INI_EH_ABORT"}, {"INI_EH_DEVICE_RESET"},
+ {"INI_EH_BLS_DONE"}, {"INI_IO_TIMEOUT"},
+ {"INI_REQ_TIMEOUT"}, {"XCHG_CANCEL_TIMER"},
+ {"XCHG_FREE_XCHG"}, {"SEND_ELS"},
+ {"IO_XCHG_WAIT"},
+};
+
+static void unf_init_xchg_attribute(struct unf_xchg *xchg);
+static void unf_delay_work_del_syn(struct unf_xchg *xchg);
+static void unf_free_lport_sfs_xchg(struct unf_xchg_mgr *xchg_mgr,
+ bool done_ini_flag);
+static void unf_free_lport_destroy_xchg(struct unf_xchg_mgr *xchg_mgr);
+
+void unf_wake_up_scsi_task_cmnd(struct unf_lport *lport)
+{
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong hot_pool_lock_flags = 0;
+ ulong xchg_flag = 0;
+ struct unf_xchg_mgr *xchg_mgrs = NULL;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ xchg_mgrs = unf_get_xchg_mgr_by_lport(lport, i);
+
+ if (!xchg_mgrs) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MINOR,
+ "Can't find LPort(0x%x) MgrIdx %u exchange manager.",
+ lport->port_id, i);
+ continue;
+ }
+
+ spin_lock_irqsave(&xchg_mgrs->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+ list_for_each_safe(node, next_node,
+ (&xchg_mgrs->hot_pool->ini_busylist)) {
+ xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+ if (INI_IO_STATE_UPTASK & xchg->io_state &&
+ (atomic_read(&xchg->ref_cnt) > 0)) {
+ UNF_SET_SCSI_CMND_RESULT(xchg, UNF_IO_SUCCESS);
+ up(&xchg->task_sema);
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MINOR,
+ "Wake up task command exchange(0x%p), Hot Pool Tag(0x%x).",
+ xchg, xchg->hotpooltag);
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+ }
+
+ spin_unlock_irqrestore(&xchg_mgrs->hot_pool->xchg_hotpool_lock,
+ hot_pool_lock_flags);
+ }
+}
+
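+/*
+ * Thin wrapper that dispatches to the L_Port's exchange manager template
+ * (unf_xchg_get_free_and_init) to allocate and initialize a free exchange
+ * of the requested type.
+ */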
+void *unf_cm_get_free_xchg(void *lport, u32 xchg_type)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_cm_xchg_mgr_template *xchg_mgr_temp = NULL;
+
+ FC_CHECK_RETURN_VALUE(unlikely(lport), NULL);
+
+ unf_lport = (struct unf_lport *)lport;
+ xchg_mgr_temp = &unf_lport->xchg_mgr_temp;
+
+ /* Find the corresponding Lport Xchg management template. */
+ FC_CHECK_RETURN_VALUE(unlikely(xchg_mgr_temp->unf_xchg_get_free_and_init), NULL);
+
+ return xchg_mgr_temp->unf_xchg_get_free_and_init(unf_lport, xchg_type);
+}
+
+void unf_cm_free_xchg(void *lport, void *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_cm_xchg_mgr_template *xchg_mgr_temp = NULL;
+
+ FC_CHECK_RETURN_VOID(unlikely(lport));
+ FC_CHECK_RETURN_VOID(unlikely(xchg));
+
+ unf_lport = (struct unf_lport *)lport;
+ xchg_mgr_temp = &unf_lport->xchg_mgr_temp;
+ FC_CHECK_RETURN_VOID(unlikely(xchg_mgr_temp->unf_xchg_release));
+
+ /*
+ * unf_cm_free_xchg --->>> unf_free_xchg
+ * --->>> unf_xchg_ref_dec --->>> unf_free_fcp_xchg --->>>
+ * unf_done_ini_xchg
+ */
+ xchg_mgr_temp->unf_xchg_release(lport, xchg);
+}
+
+void *unf_cm_lookup_xchg_by_tag(void *lport, u16 hot_pool_tag)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_cm_xchg_mgr_template *xchg_mgr_temp = NULL;
+
+ FC_CHECK_RETURN_VALUE(unlikely(lport), NULL);
+
+ /* Find the corresponding Lport Xchg management template */
+ unf_lport = (struct unf_lport *)lport;
+ xchg_mgr_temp = &unf_lport->xchg_mgr_temp;
+
+ FC_CHECK_RETURN_VALUE(unlikely(xchg_mgr_temp->unf_look_up_xchg_by_tag), NULL);
+
+ return xchg_mgr_temp->unf_look_up_xchg_by_tag(lport, hot_pool_tag);
+}
+
+void *unf_cm_lookup_xchg_by_id(void *lport, u16 ox_id, u32 oid)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_cm_xchg_mgr_template *xchg_mgr_temp = NULL;
+
+ FC_CHECK_RETURN_VALUE(unlikely(lport), NULL);
+
+ unf_lport = (struct unf_lport *)lport;
+ xchg_mgr_temp = &unf_lport->xchg_mgr_temp;
+
+ /* Find the corresponding Lport Xchg management template */
+ FC_CHECK_RETURN_VALUE(unlikely(xchg_mgr_temp->unf_look_up_xchg_by_id), NULL);
+
+ return xchg_mgr_temp->unf_look_up_xchg_by_id(lport, ox_id, oid);
+}
+
+struct unf_xchg *unf_cm_lookup_xchg_by_cmnd_sn(void *lport, u64 command_sn,
+ u32 world_id, void *pinitiator)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_cm_xchg_mgr_template *xchg_mgr_temp = NULL;
+ struct unf_xchg *xchg = NULL;
+
+ FC_CHECK_RETURN_VALUE(unlikely(lport), NULL);
+
+ unf_lport = (struct unf_lport *)lport;
+ xchg_mgr_temp = &unf_lport->xchg_mgr_temp;
+
+ FC_CHECK_RETURN_VALUE(unlikely(xchg_mgr_temp->unf_look_up_xchg_by_cmnd_sn), NULL);
+
+ xchg = (struct unf_xchg *)xchg_mgr_temp->unf_look_up_xchg_by_cmnd_sn(unf_lport,
+ command_sn,
+ world_id,
+ pinitiator);
+
+ return xchg;
+}
+
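+/*
+ * Split the pre-allocated exchange array between SFS and FCP use: the first
+ * sfs_sum entries each get one unf_sfs_u DMA buffer and are linked onto
+ * list_sfs_xchg_list, the remaining entries go onto list_free_xchg_list,
+ * all under xchg_freepool_lock.
+ */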
+static u32 unf_init_xchg(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr,
+ u32 xchg_sum, u32 sfs_sum)
+{
+ struct unf_xchg *xchg_mem = NULL;
+ union unf_sfs_u *sfs_mm_start = NULL;
+ dma_addr_t sfs_dma_addr;
+ struct unf_xchg *xchg = NULL;
+ struct unf_xchg_free_pool *free_pool = NULL;
+ ulong flags = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VALUE((sfs_sum <= xchg_sum), UNF_RETURN_ERROR);
+
+ free_pool = &xchg_mgr->free_pool;
+ xchg_mem = xchg_mgr->fcp_mm_start;
+ xchg = xchg_mem;
+
+ sfs_mm_start = (union unf_sfs_u *)xchg_mgr->sfs_mm_start;
+ sfs_dma_addr = xchg_mgr->sfs_phy_addr;
+ /* 1. Allocate the SFS UNION memory to each SFS XCHG
+ * and mount the SFS XCHG to the corresponding FREE linked list
+ */
+ free_pool->total_sfs_xchg = 0;
+ free_pool->sfs_xchg_sum = sfs_sum;
+ for (i = 0; i < sfs_sum; i++) {
+ INIT_LIST_HEAD(&xchg->list_xchg_entry);
+ INIT_LIST_HEAD(&xchg->list_esgls);
+ spin_lock_init(&xchg->xchg_state_lock);
+ sema_init(&xchg->task_sema, 0);
+ sema_init(&xchg->echo_info.echo_sync_sema, 0);
+
+ spin_lock_irqsave(&free_pool->xchg_freepool_lock, flags);
+ xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr = sfs_mm_start;
+ xchg->fcp_sfs_union.sfs_entry.sfs_buff_phy_addr = sfs_dma_addr;
+ xchg->fcp_sfs_union.sfs_entry.sfs_buff_len = sizeof(*sfs_mm_start);
+ list_add_tail(&xchg->list_xchg_entry, &free_pool->list_sfs_xchg_list);
+ free_pool->total_sfs_xchg++;
+ spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
+
+ sfs_mm_start++;
+ sfs_dma_addr = sfs_dma_addr + sizeof(union unf_sfs_u);
+ xchg++;
+ }
+
+ free_pool->total_fcp_xchg = 0;
+
+ for (i = 0; (i < xchg_sum - sfs_sum); i++) {
+ INIT_LIST_HEAD(&xchg->list_xchg_entry);
+
+ INIT_LIST_HEAD(&xchg->list_esgls);
+ spin_lock_init(&xchg->xchg_state_lock);
+ sema_init(&xchg->task_sema, 0);
+ sema_init(&xchg->echo_info.echo_sync_sema, 0);
+
+ /* alloc dma buffer for fcp_rsp_iu */
+ spin_lock_irqsave(&free_pool->xchg_freepool_lock, flags);
+ list_add_tail(&xchg->list_xchg_entry, &free_pool->list_free_xchg_list);
+ free_pool->total_fcp_xchg++;
+ spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
+
+ xchg++;
+ }
+
+ free_pool->fcp_xchg_sum = free_pool->total_fcp_xchg;
+
+ return RETURN_OK;
+}
+
+static u32 unf_get_xchg_config_sum(struct unf_lport *lport, u32 *xchg_sum)
+{
+ struct unf_lport_cfg_item *lport_cfg_items = NULL;
+
+ lport_cfg_items = &lport->low_level_func.lport_cfg_items;
+
+	/* Already validated by the lower-level driver; no need to re-check here. */
+ *xchg_sum = lport_cfg_items->max_sfs_xchg + lport_cfg_items->max_io;
+ if ((*xchg_sum / UNF_EXCHG_MGR_NUM) == 0 ||
+ lport_cfg_items->max_sfs_xchg / UNF_EXCHG_MGR_NUM == 0) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) Xchgsum(%u) or SfsXchg(%u) is less than ExchangeMgrNum(%u).",
+ lport->port_id, *xchg_sum, lport_cfg_items->max_sfs_xchg,
+ UNF_EXCHG_MGR_NUM);
+ return UNF_RETURN_ERROR;
+ }
+
+ if (*xchg_sum > (INVALID_VALUE16 - 1)) {
+ /* If the format of ox_id/rx_id is exceeded, this function is
+ * not supported
+ */
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Port(0x%x) Exchange num(0x%x) is Too Big.",
+ lport->port_id, *xchg_sum);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+static void unf_xchg_cancel_timer(void *xchg)
+{
+ struct unf_xchg *tmp_xchg = NULL;
+ bool need_dec_xchg_ref = false;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ tmp_xchg = (struct unf_xchg *)xchg;
+
+ spin_lock_irqsave(&tmp_xchg->xchg_state_lock, flag);
+ if (cancel_delayed_work(&tmp_xchg->timeout_work))
+ need_dec_xchg_ref = true;
+
+ spin_unlock_irqrestore(&tmp_xchg->xchg_state_lock, flag);
+
+ if (need_dec_xchg_ref)
+ unf_xchg_ref_dec(xchg, XCHG_CANCEL_TIMER);
+}
+
+void unf_show_all_xchg(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg *xchg = NULL;
+ struct list_head *xchg_node = NULL;
+ struct list_head *next_xchg_node = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+
+ unf_lport = lport;
+
+ /* hot Xchg */
+ spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN, "INI busy :");
+ list_for_each_safe(xchg_node, next_xchg_node, &xchg_mgr->hot_pool->ini_busylist) {
+ xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "0x%p---0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----%llu.",
+ xchg, (u32)xchg->hotpooltag, (u32)xchg->xchg_type,
+ (u32)xchg->oxid, (u32)xchg->rxid, (u32)xchg->sid, (u32)xchg->did,
+ atomic_read(&xchg->ref_cnt), (u32)xchg->io_state, xchg->alloc_jif);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN, "SFS :");
+ list_for_each_safe(xchg_node, next_xchg_node, &xchg_mgr->hot_pool->sfs_busylist) {
+ xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
+ "0x%p---0x%x---0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----%llu.",
+ xchg, xchg->cmnd_code, (u32)xchg->hotpooltag,
+ (u32)xchg->xchg_type, (u32)xchg->oxid, (u32)xchg->rxid, (u32)xchg->sid,
+ (u32)xchg->did, atomic_read(&xchg->ref_cnt),
+ (u32)xchg->io_state, xchg->alloc_jif);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN, "Destroy list.");
+ list_for_each_safe(xchg_node, next_xchg_node, &xchg_mgr->hot_pool->list_destroy_xchg) {
+ xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
+ "0x%p---0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----%llu.",
+ xchg, (u32)xchg->hotpooltag, (u32)xchg->xchg_type,
+ (u32)xchg->oxid, (u32)xchg->rxid, (u32)xchg->sid, (u32)xchg->did,
+ atomic_read(&xchg->ref_cnt), (u32)xchg->io_state, xchg->alloc_jif);
+ }
+ spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock, flags);
+}
+
+static u32 unf_free_lport_xchg(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr)
+{
+#define UNF_OS_WAITIO_TIMEOUT (10 * 1000)
+
+ ulong free_pool_lock_flags = 0;
+ bool wait = false;
+ u32 total_xchg = 0;
+ u32 total_xchg_sum = 0;
+ u32 ret = RETURN_OK;
+ u64 time_out = 0;
+ struct completion xchg_mgr_completion;
+
+ init_completion(&xchg_mgr_completion);
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg_mgr, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg_mgr->hot_pool, UNF_RETURN_ERROR);
+
+ unf_free_lport_sfs_xchg(xchg_mgr, false);
+
+ /* free INI Mode exchanges belong to L_Port */
+ unf_free_lport_ini_xchg(xchg_mgr, false);
+
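+ /* If some exchanges are still outstanding, register a completion and
+ * wait (with a timeout) until they all return to the free pool
+ */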
+ spin_lock_irqsave(&xchg_mgr->free_pool.xchg_freepool_lock, free_pool_lock_flags);
+ total_xchg = xchg_mgr->free_pool.total_fcp_xchg + xchg_mgr->free_pool.total_sfs_xchg;
+ total_xchg_sum = xchg_mgr->free_pool.fcp_xchg_sum + xchg_mgr->free_pool.sfs_xchg_sum;
+ if (total_xchg != total_xchg_sum) {
+ xchg_mgr->free_pool.xchg_mgr_completion = &xchg_mgr_completion;
+ wait = true;
+ }
+ spin_unlock_irqrestore(&xchg_mgr->free_pool.xchg_freepool_lock, free_pool_lock_flags);
+
+ if (wait) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) begin to wait for exchange manager completion (0x%x:0x%x)",
+ lport->port_id, total_xchg, total_xchg_sum);
+
+ unf_show_all_xchg(lport, xchg_mgr);
+
+ time_out = wait_for_completion_timeout(xchg_mgr->free_pool.xchg_mgr_completion,
+ msecs_to_jiffies(UNF_OS_WAITIO_TIMEOUT));
+ if (time_out == 0)
+ unf_free_lport_destroy_xchg(xchg_mgr);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) wait for exchange manager completion end",
+ lport->port_id);
+
+ spin_lock_irqsave(&xchg_mgr->free_pool.xchg_freepool_lock, free_pool_lock_flags);
+ xchg_mgr->free_pool.xchg_mgr_completion = NULL;
+ spin_unlock_irqrestore(&xchg_mgr->free_pool.xchg_freepool_lock,
+ free_pool_lock_flags);
+ }
+
+ return ret;
+}
+
+void unf_free_lport_all_xchg(struct unf_lport *lport)
+{
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ u32 i;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ xchg_mgr = unf_get_xchg_mgr_by_lport(lport, i);
+ if (unlikely(!xchg_mgr)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) hot pool is NULL",
+ lport->port_id);
+
+ continue;
+ }
+ unf_free_lport_sfs_xchg(xchg_mgr, false);
+
+ /* free INI Mode exchanges belong to L_Port */
+ unf_free_lport_ini_xchg(xchg_mgr, false);
+
+ unf_free_lport_destroy_xchg(xchg_mgr);
+ }
+}
+
+static void unf_delay_work_del_syn(struct unf_xchg *xchg)
+{
+ FC_CHECK_RETURN_VOID(xchg);
+
+ /* synchronous release timer */
+ if (!cancel_delayed_work_sync(&xchg->timeout_work)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Exchange(0x%p), State(0x%x) can't delete work timer, timer is running or no timer.",
+ xchg, xchg->io_state);
+ } else {
+ /* The reference count cannot be decremented directly here; this
+ * prevents the exchange from being moved to the free list while
+ * the card is being unloaded.
+ */
+ unf_cm_free_xchg(xchg->lport, xchg);
+ }
+}
+
+static void unf_free_lport_sfs_xchg(struct unf_xchg_mgr *xchg_mgr, bool done_ini_flag)
+{
+ struct list_head *list = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong hot_pool_lock_flags = 0;
+
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+ FC_CHECK_RETURN_VOID(xchg_mgr->hot_pool);
+
+ spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+ while (!list_empty(&xchg_mgr->hot_pool->sfs_busylist)) {
+ list = UNF_OS_LIST_NEXT(&xchg_mgr->hot_pool->sfs_busylist);
+ list_del_init(list);
+
+ /* Move the SFS exchange to the destroy list first so that it
+ * cannot be accessed again while it is being freed.
+ */
+ list_add_tail(list, &xchg_mgr->hot_pool->list_destroy_xchg);
+
+ xchg = list_entry(list, struct unf_xchg, list_xchg_entry);
+ spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+ unf_delay_work_del_syn(xchg);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Free SFS Exchange(0x%p), State(0x%x), Reference count(%d), Start time(%llu).",
+ xchg, xchg->io_state, atomic_read(&xchg->ref_cnt), xchg->alloc_jif);
+
+ unf_cm_free_xchg(xchg->lport, xchg);
+
+ spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+ }
+ spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+}
+
+void unf_free_lport_ini_xchg(struct unf_xchg_mgr *xchg_mgr, bool done_ini_flag)
+{
+ struct list_head *list = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong hot_pool_lock_flags = 0;
+ u32 up_status = 0;
+
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+ FC_CHECK_RETURN_VOID(xchg_mgr->hot_pool);
+
+ spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+ while (!list_empty(&xchg_mgr->hot_pool->ini_busylist)) {
+ /* for each INI busy_list (exchange) node */
+ list = UNF_OS_LIST_NEXT(&xchg_mgr->hot_pool->ini_busylist);
+
+ /* Move the exchange node to destroy_list to prevent it from being
+ * completed repeatedly
+ */
+ list_del_init(list);
+ list_add_tail(list, &xchg_mgr->hot_pool->list_destroy_xchg);
+ xchg = list_entry(list, struct unf_xchg, list_xchg_entry);
+ if (atomic_read(&xchg->ref_cnt) <= 0)
+ continue;
+
+ spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock,
+ hot_pool_lock_flags);
+ unf_delay_work_del_syn(xchg);
+
+ /* When the INI I/O is completed here, force the command status to
+ * failure; returning OK could cause data inconsistency
+ */
+ up_status = unf_get_up_level_cmnd_errcode(xchg->scsi_cmnd_info.err_code_table,
+ xchg->scsi_cmnd_info.err_code_table_cout,
+ UNF_IO_PORT_LOGOUT);
+
+ if (INI_IO_STATE_UPABORT & xchg->io_state) {
+ /*
+ * About L_Port destroy:
+ * UP_ABORT ---to--->>> ABORT_Port_Removing
+ */
+ up_status = UNF_IO_ABORT_PORT_REMOVING;
+ }
+
+ xchg->scsi_cmnd_info.result = up_status;
+ up(&xchg->task_sema);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Free INI exchange(0x%p) state(0x%x) reference count(%d) start time(%llu)",
+ xchg, xchg->io_state, atomic_read(&xchg->ref_cnt), xchg->alloc_jif);
+
+ unf_cm_free_xchg(xchg->lport, xchg);
+
+ /* go to next INI busy_list (exchange) node */
+ spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+ }
+ spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+}
+
+static void unf_free_lport_destroy_xchg(struct unf_xchg_mgr *xchg_mgr)
+{
+#define UNF_WAIT_DESTROY_EMPTY_STEP_MS 1000
+#define UNF_WAIT_IO_STATE_TGT_FRONT_MS (10 * 1000)
+
+ struct unf_xchg *xchg = NULL;
+ struct list_head *next_xchg_node = NULL;
+ ulong hot_pool_lock_flags = 0;
+ ulong xchg_flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+ FC_CHECK_RETURN_VOID(xchg_mgr->hot_pool);
+
+ /* The timers of the exchanges on the destroy list are deleted here;
+ * at the end, only whether the timer has been released on the target
+ * side needs to be checked.
+ */
+ spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+ while (!list_empty(&xchg_mgr->hot_pool->list_destroy_xchg)) {
+ next_xchg_node = UNF_OS_LIST_NEXT(&xchg_mgr->hot_pool->list_destroy_xchg);
+ xchg = list_entry(next_xchg_node, struct unf_xchg, list_xchg_entry);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Free Exchange(0x%p), Type(0x%x), State(0x%x), Reference count(%d), Start time(%llu)",
+ xchg, xchg->xchg_type, xchg->io_state,
+ atomic_read(&xchg->ref_cnt), xchg->alloc_jif);
+
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+ spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+
+ /* This call either cancels the timer successfully or waits until the
+ * running timer handler has completed
+ */
+ unf_delay_work_del_syn(xchg);
+
+ /*
+ * If the timer was canceled successfully, free the exchange.
+ * If the timer has already fired, the exchange may have been
+ * released; in that case freeing it here simply fails.
+ */
+ unf_cm_free_xchg(xchg->lport, xchg);
+
+ spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+ }
+
+ spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
+}
+
+static void unf_free_all_big_sfs(struct unf_xchg_mgr *xchg_mgr)
+{
+ struct unf_big_sfs *big_sfs = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+ u32 i;
+
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+
+ /* Release the free resources in the busy state */
+ spin_lock_irqsave(&xchg_mgr->big_sfs_pool.big_sfs_pool_lock, flag);
+ list_for_each_safe(node, next_node, &xchg_mgr->big_sfs_pool.list_busypool) {
+ list_del(node);
+ list_add_tail(node, &xchg_mgr->big_sfs_pool.list_freepool);
+ }
+
+ list_for_each_safe(node, next_node, &xchg_mgr->big_sfs_pool.list_freepool) {
+ list_del(node);
+ big_sfs = list_entry(node, struct unf_big_sfs, entry_bigsfs);
+ if (big_sfs->addr)
+ big_sfs->addr = NULL;
+ }
+ spin_unlock_irqrestore(&xchg_mgr->big_sfs_pool.big_sfs_pool_lock, flag);
+
+ if (xchg_mgr->big_sfs_buf_list.buflist) {
+ for (i = 0; i < xchg_mgr->big_sfs_buf_list.buf_num; i++) {
+ kfree(xchg_mgr->big_sfs_buf_list.buflist[i].vaddr);
+ xchg_mgr->big_sfs_buf_list.buflist[i].vaddr = NULL;
+ }
+
+ kfree(xchg_mgr->big_sfs_buf_list.buflist);
+ xchg_mgr->big_sfs_buf_list.buflist = NULL;
+ }
+}
+
+static void unf_free_big_sfs_pool(struct unf_xchg_mgr *xchg_mgr)
+{
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Free Big SFS Pool, Count(0x%x).",
+ xchg_mgr->big_sfs_pool.free_count);
+
+ unf_free_all_big_sfs(xchg_mgr);
+ xchg_mgr->big_sfs_pool.free_count = 0;
+
+ if (xchg_mgr->big_sfs_pool.big_sfs_pool) {
+ vfree(xchg_mgr->big_sfs_pool.big_sfs_pool);
+ xchg_mgr->big_sfs_pool.big_sfs_pool = NULL;
+ }
+}
+
+static void unf_free_xchg_mgr_mem(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr)
+{
+ struct unf_xchg *xchg = NULL;
+ u32 i = 0;
+ u32 xchg_sum = 0;
+ struct unf_xchg_free_pool *free_pool = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+
+ unf_free_big_sfs_pool(xchg_mgr);
+
+ /* Release the SFS DMA memory first; sfs_mm_start is compared with 0
+ * because it holds the start address of the coherent DMA allocation
+ */
+ if (xchg_mgr->sfs_mm_start != 0) {
+ dma_free_coherent(&lport->low_level_func.dev->dev, xchg_mgr->sfs_mem_size,
+ xchg_mgr->sfs_mm_start, xchg_mgr->sfs_phy_addr);
+ xchg_mgr->sfs_mm_start = 0;
+ }
+
+ /* Release Xchg first */
+ if (xchg_mgr->fcp_mm_start) {
+ unf_get_xchg_config_sum(lport, &xchg_sum);
+ xchg_sum = xchg_sum / UNF_EXCHG_MGR_NUM;
+
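+ /* Walk the exchange array; no per-exchange cleanup is needed here,
+ * the whole array is freed with vfree() below
+ */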
+ xchg = xchg_mgr->fcp_mm_start;
+ for (i = 0; i < xchg_sum; i++) {
+ if (!xchg)
+ break;
+ xchg++;
+ }
+
+ vfree(xchg_mgr->fcp_mm_start);
+ xchg_mgr->fcp_mm_start = NULL;
+ }
+
+ /* release the hot pool */
+ if (xchg_mgr->hot_pool) {
+ vfree(xchg_mgr->hot_pool);
+ xchg_mgr->hot_pool = NULL;
+ }
+
+ free_pool = &xchg_mgr->free_pool;
+
+ vfree(xchg_mgr);
+}
+
+static void unf_free_xchg_mgr(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr)
+{
+ ulong flags = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+
+ /* 1. At first, free exchanges for this Exch_Mgr */
+ ret = unf_free_lport_xchg(lport, xchg_mgr);
+
+ /* 2. Delete this Exch_Mgr entry */
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ list_del_init(&xchg_mgr->xchg_mgr_entry);
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+
+ /* 3. free Exch_Mgr memory if necessary */
+ if (ret == RETURN_OK) {
+ /* free memory directly */
+ unf_free_xchg_mgr_mem(lport, xchg_mgr);
+ } else {
+ /* Add it to Dirty list */
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ list_add_tail(&xchg_mgr->xchg_mgr_entry, &lport->list_drty_xchg_mgr_head);
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+
+ /* Mark dirty flag */
+ unf_cm_mark_dirty_mem(lport, UNF_LPORT_DIRTY_FLAG_XCHGMGR_DIRTY);
+ }
+}
+
+void unf_free_all_xchg_mgr(struct unf_lport *lport)
+{
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ ulong flags = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ /* for each L_Port->Exch_Mgr_List */
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ while (!list_empty(&lport->list_xchg_mgr_head)) {
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+
+ xchg_mgr = unf_get_xchg_mgr_by_lport(lport, i);
+ unf_free_xchg_mgr(lport, xchg_mgr);
+ if (i < UNF_EXCHG_MGR_NUM)
+ lport->xchg_mgr[i] = NULL;
+
+ i++;
+
+ /* go to next */
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ }
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+
+ lport->destroy_step = UNF_LPORT_DESTROY_STEP_4_DESTROY_EXCH_MGR;
+}
+
+static u32 unf_init_xchg_mgr(struct unf_xchg_mgr *xchg_mgr)
+{
+ FC_CHECK_RETURN_VALUE(xchg_mgr, UNF_RETURN_ERROR);
+
+ memset(xchg_mgr, 0, sizeof(struct unf_xchg_mgr));
+
+ INIT_LIST_HEAD(&xchg_mgr->xchg_mgr_entry);
+ xchg_mgr->fcp_mm_start = NULL;
+ xchg_mgr->mem_szie = sizeof(struct unf_xchg_mgr);
+
+ return RETURN_OK;
+}
+
+static u32 unf_init_xchg_mgr_free_pool(struct unf_xchg_mgr *xchg_mgr)
+{
+ struct unf_xchg_free_pool *free_pool = NULL;
+
+ FC_CHECK_RETURN_VALUE(xchg_mgr, UNF_RETURN_ERROR);
+
+ free_pool = &xchg_mgr->free_pool;
+ INIT_LIST_HEAD(&free_pool->list_free_xchg_list);
+ INIT_LIST_HEAD(&free_pool->list_sfs_xchg_list);
+ spin_lock_init(&free_pool->xchg_freepool_lock);
+ free_pool->fcp_xchg_sum = 0;
+ free_pool->xchg_mgr_completion = NULL;
+
+ return RETURN_OK;
+}
+
+static u32 unf_init_xchg_hot_pool(struct unf_lport *lport, struct unf_xchg_hot_pool *hot_pool,
+ u32 xchg_sum)
+{
+ FC_CHECK_RETURN_VALUE(hot_pool, UNF_RETURN_ERROR);
+
+ INIT_LIST_HEAD(&hot_pool->sfs_busylist);
+ INIT_LIST_HEAD(&hot_pool->ini_busylist);
+ spin_lock_init(&hot_pool->xchg_hotpool_lock);
+ INIT_LIST_HEAD(&hot_pool->list_destroy_xchg);
+ hot_pool->total_xchges = 0;
+ hot_pool->wait_state = false;
+ hot_pool->lport = lport;
+
+ /* Slab Pool Index */
+ hot_pool->slab_next_index = 0;
+ UNF_TOU16_CHECK(hot_pool->slab_total_sum, xchg_sum, return UNF_RETURN_ERROR);
+
+ return RETURN_OK;
+}
+
+static u32 unf_alloc_and_init_big_sfs_pool(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr)
+{
+#define UNF_MAX_RESOURCE_RESERVED_FOR_RSCN 20
+#define UNF_BIG_SFS_POOL_TYPES 6
+ u32 i = 0;
+ u32 size = 0;
+ u32 align_size = 0;
+ u32 npiv_cnt = 0;
+ struct unf_big_sfs_pool *big_sfs_pool = NULL;
+ struct unf_big_sfs *big_sfs_buff = NULL;
+ u32 buf_total_size;
+ u32 buf_num;
+ u32 buf_cnt_per_huge_buf;
+ u32 alloc_idx;
+ u32 cur_buf_idx = 0;
+ u32 cur_buf_offset = 0;
+
+ FC_CHECK_RETURN_VALUE(xchg_mgr, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ big_sfs_pool = &xchg_mgr->big_sfs_pool;
+
+ INIT_LIST_HEAD(&big_sfs_pool->list_freepool);
+ INIT_LIST_HEAD(&big_sfs_pool->list_busypool);
+ spin_lock_init(&big_sfs_pool->big_sfs_pool_lock);
+ npiv_cnt = lport->low_level_func.support_max_npiv_num;
+
+ /*
+ * The multiplier of UNF_BIG_SFS_POOL_TYPES covers GID_PT/GID_FT, RSCN
+ * and ECHO, plus the case where another command arrives while one is
+ * being responded to. A further 20 resources are reserved for RSCN:
+ * during testing, bursts of RSCNs exhausted the pool and caused
+ * discovery to fail.
+ */
+ big_sfs_pool->free_count = (npiv_cnt + 1) * UNF_BIG_SFS_POOL_TYPES +
+ UNF_MAX_RESOURCE_RESERVED_FOR_RSCN;
+ big_sfs_buff =
+ (struct unf_big_sfs *)vmalloc(big_sfs_pool->free_count * sizeof(struct unf_big_sfs));
+ if (!big_sfs_buff) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Allocate Big SFS buf fail.");
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(big_sfs_buff, 0, big_sfs_pool->free_count * sizeof(struct unf_big_sfs));
+ xchg_mgr->mem_szie += (u32)(big_sfs_pool->free_count * sizeof(struct unf_big_sfs));
+ big_sfs_pool->big_sfs_pool = (void *)big_sfs_buff;
+
+ /*
+ * Use the larger of sizeof(struct unf_gid_acc_pld) and
+ * sizeof(struct unf_rscn_pld) to avoid the icp error; the value is
+ * assigned directly instead of comparing the two sizes.
+ */
+ size = sizeof(struct unf_gid_acc_pld);
+ align_size = ALIGN(size, PAGE_SIZE);
+
+ buf_total_size = align_size * big_sfs_pool->free_count;
+ xchg_mgr->big_sfs_buf_list.buf_size =
+ buf_total_size > BUF_LIST_PAGE_SIZE ? BUF_LIST_PAGE_SIZE
+ : buf_total_size;
+
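+ /* Number of aligned payloads per huge buffer, and the number of huge
+ * buffers required (rounded up)
+ */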
+ buf_cnt_per_huge_buf = xchg_mgr->big_sfs_buf_list.buf_size / align_size;
+ buf_num = big_sfs_pool->free_count % buf_cnt_per_huge_buf
+ ? big_sfs_pool->free_count / buf_cnt_per_huge_buf + 1
+ : big_sfs_pool->free_count / buf_cnt_per_huge_buf;
+
+ xchg_mgr->big_sfs_buf_list.buflist = (struct buff_list *)kmalloc(buf_num *
+ sizeof(struct buff_list), GFP_KERNEL);
+ xchg_mgr->big_sfs_buf_list.buf_num = buf_num;
+
+ if (!xchg_mgr->big_sfs_buf_list.buflist) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Allocate BigSfs pool buf list failed out of memory");
+ goto free_buff;
+ }
+ memset(xchg_mgr->big_sfs_buf_list.buflist, 0, buf_num * sizeof(struct buff_list));
+ for (alloc_idx = 0; alloc_idx < buf_num; alloc_idx++) {
+ xchg_mgr->big_sfs_buf_list.buflist[alloc_idx].vaddr =
+ kmalloc(xchg_mgr->big_sfs_buf_list.buf_size, GFP_ATOMIC);
+ if (xchg_mgr->big_sfs_buf_list.buflist[alloc_idx].vaddr ==
+ NULL) {
+ goto free_buff;
+ }
+ memset(xchg_mgr->big_sfs_buf_list.buflist[alloc_idx].vaddr, 0,
+ xchg_mgr->big_sfs_buf_list.buf_size);
+ }
+
+ for (i = 0; i < big_sfs_pool->free_count; i++) {
+ if (i != 0 && !(i % buf_cnt_per_huge_buf))
+ cur_buf_idx++;
+
+ cur_buf_offset = align_size * (i % buf_cnt_per_huge_buf);
+ big_sfs_buff->addr = xchg_mgr->big_sfs_buf_list.buflist[cur_buf_idx].vaddr +
+ cur_buf_offset;
+ big_sfs_buff->size = size;
+ xchg_mgr->mem_szie += size;
+ list_add_tail(&big_sfs_buff->entry_bigsfs, &big_sfs_pool->list_freepool);
+ big_sfs_buff++;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[EVENT]Allocate BigSfs pool size:%d,align_size:%d,buf_num:%u,buf_size:%u",
+ size, align_size, xchg_mgr->big_sfs_buf_list.buf_num,
+ xchg_mgr->big_sfs_buf_list.buf_size);
+ return RETURN_OK;
+free_buff:
+ unf_free_all_big_sfs(xchg_mgr);
+ vfree(big_sfs_buff);
+ big_sfs_pool->big_sfs_pool = NULL;
+ return UNF_RETURN_ERROR;
+}
+
+static void unf_free_one_big_sfs(struct unf_xchg *xchg)
+{
+ ulong flag = 0;
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ xchg_mgr = xchg->xchg_mgr;
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+ if (!xchg->big_sfs_buf)
+ return;
+
+ if (xchg->cmnd_code != NS_GID_PT && xchg->cmnd_code != NS_GID_FT &&
+ xchg->cmnd_code != ELS_ECHO &&
+ xchg->cmnd_code != (UNF_SET_ELS_ACC_TYPE(ELS_ECHO)) && xchg->cmnd_code != ELS_RSCN &&
+ xchg->cmnd_code != (UNF_SET_ELS_ACC_TYPE(ELS_RSCN))) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "Exchange(0x%p), Command(0x%x) big SFS buf is not NULL.",
+ xchg, xchg->cmnd_code);
+ }
+
+ spin_lock_irqsave(&xchg_mgr->big_sfs_pool.big_sfs_pool_lock, flag);
+ list_del(&xchg->big_sfs_buf->entry_bigsfs);
+ list_add_tail(&xchg->big_sfs_buf->entry_bigsfs,
+ &xchg_mgr->big_sfs_pool.list_freepool);
+ xchg_mgr->big_sfs_pool.free_count++;
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "Free one big SFS buf(0x%p), Count(0x%x), Exchange(0x%p), Command(0x%x).",
+ xchg->big_sfs_buf->addr, xchg_mgr->big_sfs_pool.free_count,
+ xchg, xchg->cmnd_code);
+ spin_unlock_irqrestore(&xchg_mgr->big_sfs_pool.big_sfs_pool_lock, flag);
+}
+
+static void unf_free_exchg_mgr_info(struct unf_lport *lport)
+{
+ u32 i;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flags = 0;
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ list_for_each_safe(node, next_node, &lport->list_xchg_mgr_head) {
+ list_del(node);
+ xchg_mgr = list_entry(node, struct unf_xchg_mgr, xchg_mgr_entry);
+ }
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ xchg_mgr = lport->xchg_mgr[i];
+
+ if (xchg_mgr) {
+ unf_free_big_sfs_pool(xchg_mgr);
+
+ if (xchg_mgr->sfs_mm_start) {
+ dma_free_coherent(&lport->low_level_func.dev->dev,
+ xchg_mgr->sfs_mem_size, xchg_mgr->sfs_mm_start,
+ xchg_mgr->sfs_phy_addr);
+ xchg_mgr->sfs_mm_start = 0;
+ }
+
+ if (xchg_mgr->fcp_mm_start) {
+ vfree(xchg_mgr->fcp_mm_start);
+ xchg_mgr->fcp_mm_start = NULL;
+ }
+
+ if (xchg_mgr->hot_pool) {
+ vfree(xchg_mgr->hot_pool);
+ xchg_mgr->hot_pool = NULL;
+ }
+
+ vfree(xchg_mgr);
+ lport->xchg_mgr[i] = NULL;
+ }
+ }
+}
+
+static u32 unf_alloc_and_init_xchg_mgr(struct unf_lport *lport)
+{
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct unf_xchg *xchg_mem = NULL;
+ void *sfs_mm_start = 0;
+ dma_addr_t sfs_phy_addr = 0;
+ u32 xchg_sum = 0;
+ u32 sfs_xchg_sum = 0;
+ ulong flags = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 slab_num = 0;
+ u32 i = 0;
+
+ ret = unf_get_xchg_config_sum(lport, &xchg_sum);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) can't get Exchange.", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* SFS Exchange Sum */
+ sfs_xchg_sum = lport->low_level_func.lport_cfg_items.max_sfs_xchg /
+ UNF_EXCHG_MGR_NUM;
+ xchg_sum = xchg_sum / UNF_EXCHG_MGR_NUM;
+ slab_num = lport->low_level_func.support_max_hot_tag_range / UNF_EXCHG_MGR_NUM;
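+ /* Exchange, SFS and hot-tag resources are split evenly among the
+ * exchange managers; each loop iteration builds one manager
+ */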
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ /* Alloc Exchange Manager */
+ xchg_mgr = (struct unf_xchg_mgr *)vmalloc(sizeof(struct unf_xchg_mgr));
+ if (!xchg_mgr) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Port(0x%x) allocate Exchange Manager Memory Fail.",
+ lport->port_id);
+ goto exit;
+ }
+
+ /* Init Exchange Manager */
+ ret = unf_init_xchg_mgr(xchg_mgr);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) initialization Exchange Manager unsuccessful.",
+ lport->port_id);
+ goto free_xchg_mgr;
+ }
+
+ /* Initialize the Exchange Free Pool resource */
+ ret = unf_init_xchg_mgr_free_pool(xchg_mgr);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) initialization Exchange Manager Free Pool unsuccessful.",
+ lport->port_id);
+ goto free_xchg_mgr;
+ }
+
+ /* Allocate memory for Hot Pool and Xchg slab */
+ hot_pool = vmalloc(sizeof(struct unf_xchg_hot_pool) +
+ sizeof(struct unf_xchg *) * slab_num);
+ if (!hot_pool) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Port(0x%x) allocate Hot Pool Memory Fail.",
+ lport->port_id);
+ goto free_xchg_mgr;
+ }
+ memset(hot_pool, 0,
+ sizeof(struct unf_xchg_hot_pool) + sizeof(struct unf_xchg *) * slab_num);
+
+ xchg_mgr->mem_szie += (u32)(sizeof(struct unf_xchg_hot_pool) +
+ sizeof(struct unf_xchg *) * slab_num);
+ /* Initialize the Exchange Hot Pool resource */
+ ret = unf_init_xchg_hot_pool(lport, hot_pool, slab_num);
+ if (ret != RETURN_OK)
+ goto free_hot_pool;
+
+ hot_pool->base += (u16)(i * slab_num);
+ /* Allocate the memory of all Xchg (IO/SFS) */
+ xchg_mem = vmalloc(sizeof(struct unf_xchg) * xchg_sum);
+ if (!xchg_mem) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Port(0x%x) allocate Exchange Memory Fail.",
+ lport->port_id);
+ goto free_hot_pool;
+ }
+ memset(xchg_mem, 0, sizeof(struct unf_xchg) * xchg_sum);
+
+ xchg_mgr->mem_szie += (u32)(sizeof(struct unf_xchg) * xchg_sum);
+ xchg_mgr->hot_pool = hot_pool;
+ xchg_mgr->fcp_mm_start = xchg_mem;
+ /* Allocate the memory used by the SFS Xchg to carry the
+ * ELS/BLS/GS command and response
+ */
+ xchg_mgr->sfs_mem_size = (u32)(sizeof(union unf_sfs_u) * sfs_xchg_sum);
+ /* Allocate the DMA space used to send SFS frames. DMA32 addresses
+ * stay below 4 GB, so no cross-4G problem can occur
+ */
+ sfs_mm_start = dma_alloc_coherent(&lport->low_level_func.dev->dev,
+ xchg_mgr->sfs_mem_size,
+ &sfs_phy_addr, GFP_KERNEL);
+ if (!sfs_mm_start) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Port(0x%x) Get Free Pages Fail.",
+ lport->port_id);
+ goto free_xchg_mem;
+ }
+ memset(sfs_mm_start, 0, sizeof(union unf_sfs_u) * sfs_xchg_sum);
+
+ xchg_mgr->mem_szie += xchg_mgr->sfs_mem_size;
+ xchg_mgr->sfs_mm_start = sfs_mm_start;
+ xchg_mgr->sfs_phy_addr = sfs_phy_addr;
+ /* The Xchg is initialized and mounted to the Free Pool */
+ ret = unf_init_xchg(lport, xchg_mgr, xchg_sum, sfs_xchg_sum);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) initialization Exchange unsuccessful, Exchange Number(%d), SFS Exchange number(%d).",
+ lport->port_id, xchg_sum, sfs_xchg_sum);
+ dma_free_coherent(&lport->low_level_func.dev->dev, xchg_mgr->sfs_mem_size,
+ xchg_mgr->sfs_mm_start, xchg_mgr->sfs_phy_addr);
+ xchg_mgr->sfs_mm_start = 0;
+ goto free_xchg_mem;
+ }
+
+ /* Apply for the memory used by GID_PT, GID_FT, and RSCN */
+ ret = unf_alloc_and_init_big_sfs_pool(lport, xchg_mgr);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Port(0x%x) allocate big SFS fail", lport->port_id);
+ dma_free_coherent(&lport->low_level_func.dev->dev, xchg_mgr->sfs_mem_size,
+ xchg_mgr->sfs_mm_start, xchg_mgr->sfs_phy_addr);
+ xchg_mgr->sfs_mm_start = 0;
+ goto free_xchg_mem;
+ }
+
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ lport->xchg_mgr[i] = (void *)xchg_mgr;
+ list_add_tail(&xchg_mgr->xchg_mgr_entry, &lport->list_xchg_mgr_head);
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) ExchangeMgr:(0x%p),Base:(0x%x).",
+ lport->port_id, lport->xchg_mgr[i], hot_pool->base);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Port(0x%x) allocate Exchange Manager size(0x%x).",
+ lport->port_id, xchg_mgr->mem_szie);
+ return RETURN_OK;
+free_xchg_mem:
+ vfree(xchg_mem);
+free_hot_pool:
+ vfree(hot_pool);
+free_xchg_mgr:
+ vfree(xchg_mgr);
+exit:
+ unf_free_exchg_mgr_info(lport);
+ return UNF_RETURN_ERROR;
+}
+
+void unf_xchg_mgr_destroy(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_free_all_xchg_mgr(lport);
+}
+
+u32 unf_alloc_xchg_resource(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ INIT_LIST_HEAD(&lport->list_drty_xchg_mgr_head);
+ INIT_LIST_HEAD(&lport->list_xchg_mgr_head);
+ spin_lock_init(&lport->xchg_mgr_lock);
+
+ /* LPort Xchg Management Unit Alloc */
+ if (unf_alloc_and_init_xchg_mgr(lport) != RETURN_OK)
+ return UNF_RETURN_ERROR;
+
+ return RETURN_OK;
+}
+
+void unf_destroy_dirty_xchg(struct unf_lport *lport, bool show_only)
+{
+ u32 dirty_xchg = 0;
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ ulong flags = 0;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
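+ /* Walk the dirty exchange-manager list: report the outstanding
+ * exchanges and, unless show_only is set, free each dirty manager
+ */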
+ if (lport->dirty_flag & UNF_LPORT_DIRTY_FLAG_XCHGMGR_DIRTY) {
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ list_for_each_safe(node, next_node, &lport->list_drty_xchg_mgr_head) {
+ xchg_mgr = list_entry(node, struct unf_xchg_mgr, xchg_mgr_entry);
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+ if (xchg_mgr) {
+ dirty_xchg = (xchg_mgr->free_pool.total_fcp_xchg +
+ xchg_mgr->free_pool.total_sfs_xchg);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) has %u dirty exchange(s)",
+ lport->port_id, dirty_xchg);
+
+ unf_show_all_xchg(lport, xchg_mgr);
+
+ if (!show_only) {
+ /* Delete Dirty Exchange Mgr entry */
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ list_del_init(&xchg_mgr->xchg_mgr_entry);
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+
+ /* Free Dirty Exchange Mgr memory */
+ unf_free_xchg_mgr_mem(lport, xchg_mgr);
+ }
+ }
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ }
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+ }
+}
+
+struct unf_xchg_mgr *unf_get_xchg_mgr_by_lport(struct unf_lport *lport, u32 idx)
+{
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE((idx < UNF_EXCHG_MGR_NUM), NULL);
+
+ spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
+ xchg_mgr = lport->xchg_mgr[idx];
+ spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
+
+ return xchg_mgr;
+}
+
+struct unf_xchg_hot_pool *unf_get_hot_pool_by_lport(struct unf_lport *lport,
+ u32 mgr_idx)
+{
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ unf_lport = (struct unf_lport *)(lport->root_lport);
+
+ FC_CHECK_RETURN_VALUE(unf_lport, NULL);
+
+ /* Get Xchg Manager */
+ xchg_mgr = unf_get_xchg_mgr_by_lport(unf_lport, mgr_idx);
+ if (!xchg_mgr) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Port(0x%x) Exchange Manager is NULL.",
+ unf_lport->port_id);
+
+ return NULL;
+ }
+
+ /* Get Xchg Manager Hot Pool */
+ return xchg_mgr->hot_pool;
+}
+
+static inline void unf_hot_pool_slab_set(struct unf_xchg_hot_pool *hot_pool,
+ u16 slab_index, struct unf_xchg *xchg)
+{
+ FC_CHECK_RETURN_VOID(hot_pool);
+
+ hot_pool->xchg_slab[slab_index] = xchg;
+}
+
+static inline struct unf_xchg *unf_get_xchg_by_xchg_tag(struct unf_xchg_hot_pool *hot_pool,
+ u16 slab_index)
+{
+ FC_CHECK_RETURN_VALUE(hot_pool, NULL);
+
+ return hot_pool->xchg_slab[slab_index];
+}
+
+static void *unf_look_up_xchg_by_tag(void *lport, u16 hot_pool_tag)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong flags = 0;
+ u32 exchg_mgr_idx = 0;
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ /* In the NPIV case, lport is the V_Port pointer and the exchange
+ * manager of the root L_Port is shared
+ */
+ unf_lport = ((struct unf_lport *)lport)->root_lport;
+ FC_CHECK_RETURN_VALUE(unf_lport, NULL);
+
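+ /* The hot-pool tag space is split evenly across the exchange
+ * managers; derive the manager index from the tag
+ */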
+ exchg_mgr_idx = (hot_pool_tag * UNF_EXCHG_MGR_NUM) /
+ unf_lport->low_level_func.support_max_hot_tag_range;
+ if (unlikely(exchg_mgr_idx >= UNF_EXCHG_MGR_NUM)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) Get ExchgMgr %u err",
+ unf_lport->port_id, exchg_mgr_idx);
+
+ return NULL;
+ }
+
+ xchg_mgr = unf_lport->xchg_mgr[exchg_mgr_idx];
+
+ if (unlikely(!xchg_mgr)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) ExchgMgr %u is null",
+ unf_lport->port_id, exchg_mgr_idx);
+
+ return NULL;
+ }
+
+ hot_pool = xchg_mgr->hot_pool;
+
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Port(0x%x) Hot Pool is NULL.",
+ unf_lport->port_id);
+
+ return NULL;
+ }
+
+ if (unlikely(hot_pool_tag >= (hot_pool->slab_total_sum + hot_pool->base))) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]LPort(0x%x) can't Input Tag(0x%x), Max(0x%x).",
+ unf_lport->port_id, hot_pool_tag,
+ (hot_pool->slab_total_sum + hot_pool->base));
+
+ return NULL;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+ xchg = unf_get_xchg_by_xchg_tag(hot_pool, hot_pool_tag - hot_pool->base);
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+
+ return (void *)xchg;
+}
+
+static void *unf_find_xchg_by_ox_id(void *lport, u16 ox_id, u32 oid)
+{
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct unf_xchg *xchg = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_lport *unf_lport = NULL;
+ ulong flags = 0;
+ ulong xchg_flags = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ /* In the NPIV case, lport is the V_Port pointer and the exchange
+ * manager of the root L_Port is shared
+ */
+ unf_lport = ((struct unf_lport *)lport)->root_lport;
+ FC_CHECK_RETURN_VALUE(unf_lport, NULL);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Port(0x%x) MgrIdex %u Hot Pool is NULL.",
+ unf_lport->port_id, i);
+ continue;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+
+ /* 1. Traverse sfs_busy list */
+ list_for_each_safe(node, next_node, &hot_pool->sfs_busylist) {
+ xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flags);
+ if (unf_check_oxid_matched(ox_id, oid, xchg)) {
+ atomic_inc(&xchg->ref_cnt);
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flags);
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ return xchg;
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flags);
+ }
+
+ /* 2. Traverse INI_Busy List */
+ list_for_each_safe(node, next_node, &hot_pool->ini_busylist) {
+ xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flags);
+ if (unf_check_oxid_matched(ox_id, oid, xchg)) {
+ atomic_inc(&xchg->ref_cnt);
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flags);
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ return xchg;
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock,
+ xchg_flags);
+ }
+
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ }
+
+ return NULL;
+}
+
+static inline bool unf_check_xchg_matched(struct unf_xchg *xchg, u64 command_sn,
+ u32 world_id, void *pinitiator)
+{
+ bool matched = false;
+
+ matched = (command_sn == xchg->cmnd_sn);
+ if (matched && (atomic_read(&xchg->ref_cnt) > 0))
+ return true;
+ else
+ return false;
+}
+
+static void *unf_look_up_xchg_by_cmnd_sn(void *lport, u64 command_sn,
+ u32 world_id, void *pinitiator)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong flags = 0;
+ u32 i;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ /* In the NPIV case, lport is a V_Port pointer. Idle resources are
+ * shared through the root L_Port's exchange manager, but busy
+ * resources are attached to each V_Port, so the V_Port itself is
+ * used here.
+ */
+ unf_lport = (struct unf_lport *)lport;
+ FC_CHECK_RETURN_VALUE(unf_lport, NULL);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) hot pool is NULL",
+ unf_lport->port_id);
+
+ continue;
+ }
+
+ /* from busy_list */
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+ list_for_each_safe(node, next_node, &hot_pool->ini_busylist) {
+ xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
+ if (unf_check_xchg_matched(xchg, command_sn, world_id, pinitiator)) {
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+
+ return xchg;
+ }
+ }
+
+ /* vport: from destroy_list */
+ if (unf_lport != unf_lport->root_lport) {
+ list_for_each_safe(node, next_node, &hot_pool->list_destroy_xchg) {
+ xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
+ if (unf_check_xchg_matched(xchg, command_sn, world_id,
+ pinitiator)) {
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) lookup exchange from destroy list",
+ unf_lport->port_id);
+
+ return xchg;
+ }
+ }
+ }
+
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ }
+
+ return NULL;
+}
+
+static inline u32 unf_alloc_hot_pool_slab(struct unf_xchg_hot_pool *hot_pool, struct unf_xchg *xchg)
+{
+ u16 slab_index = 0;
+
+ FC_CHECK_RETURN_VALUE(hot_pool, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ /* Check whether the hot-pool tag lies within the SIRT range. If so,
+ * set up the management relationship; otherwise handle it as normal
+ * I/O. If the SIRT bitmap is used but the tag is already occupied,
+ * the I/O is discarded.
+ */
+
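+ /* Probe linearly from slab_next_index for a free slot; wrapping back
+ * to the starting index means the hot pool is exhausted
+ */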
+ hot_pool->slab_next_index = (u16)hot_pool->slab_next_index;
+ slab_index = hot_pool->slab_next_index;
+ while (unf_get_xchg_by_xchg_tag(hot_pool, slab_index)) {
+ slab_index++;
+ slab_index = slab_index % hot_pool->slab_total_sum;
+
+ /* Rewind occurs */
+ if (slab_index == hot_pool->slab_next_index) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_MAJOR,
+ "There is No Slab At Hot Pool(0x%p) for xchg(0x%p).",
+ hot_pool, xchg);
+
+ return UNF_RETURN_ERROR;
+ }
+ }
+
+ unf_hot_pool_slab_set(hot_pool, slab_index, xchg);
+ xchg->hotpooltag = slab_index + hot_pool->base;
+ slab_index++;
+ hot_pool->slab_next_index = slab_index % hot_pool->slab_total_sum;
+
+ return RETURN_OK;
+}
+
+struct unf_esgl_page *
+unf_get_and_add_one_free_esgl_page(struct unf_lport *lport, struct unf_xchg *xchg)
+{
+ struct unf_esgl *esgl = NULL;
+ struct list_head *list_head = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(xchg, NULL);
+
+ /* Obtain a new Esgl from the EsglPool and add it to the list_esgls of
+ * the Xchg
+ */
+ spin_lock_irqsave(&lport->esgl_pool.esgl_pool_lock, flag);
+ if (!list_empty(&lport->esgl_pool.list_esgl_pool)) {
+ list_head = UNF_OS_LIST_NEXT(&lport->esgl_pool.list_esgl_pool);
+ list_del(list_head);
+ lport->esgl_pool.esgl_pool_count--;
+ list_add_tail(list_head, &xchg->list_esgls);
+
+ esgl = list_entry(list_head, struct unf_esgl, entry_esgl);
+ atomic_inc(&xchg->esgl_cnt);
+ spin_unlock_irqrestore(&lport->esgl_pool.esgl_pool_lock, flag);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) esgl pool is empty",
+ lport->nport_id);
+
+ spin_unlock_irqrestore(&lport->esgl_pool.esgl_pool_lock, flag);
+ return NULL;
+ }
+
+ return &esgl->page;
+}
+
+void unf_release_esgls(struct unf_xchg *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct list_head *list = NULL;
+ struct list_head *list_tmp = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ FC_CHECK_RETURN_VOID(xchg->lport);
+
+ if (atomic_read(&xchg->esgl_cnt) <= 0)
+ return;
+
+ /* In the NPIV case, the V_Port pointer is stored in the exchange,
+ * and the ESGL pool of the root L_Port is shared.
+ */
+ unf_lport = (xchg->lport)->root_lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ spin_lock_irqsave(&unf_lport->esgl_pool.esgl_pool_lock, flag);
+ if (!list_empty(&xchg->list_esgls)) {
+ list_for_each_safe(list, list_tmp, &xchg->list_esgls) {
+ list_del(list);
+ list_add_tail(list, &unf_lport->esgl_pool.list_esgl_pool);
+ unf_lport->esgl_pool.esgl_pool_count++;
+ atomic_dec(&xchg->esgl_cnt);
+ }
+ }
+ spin_unlock_irqrestore(&unf_lport->esgl_pool.esgl_pool_lock, flag);
+}
+
+static void unf_add_back_to_fcp_list(struct unf_xchg_free_pool *free_pool, struct unf_xchg *xchg)
+{
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(free_pool);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_init_xchg_attribute(xchg);
+
+ /* Released I/O resources are added to the tail of the queue to make
+ * fault locating easier
+ */
+ spin_lock_irqsave(&free_pool->xchg_freepool_lock, flags);
+ list_add_tail(&xchg->list_xchg_entry, &free_pool->list_free_xchg_list);
+ free_pool->total_fcp_xchg++;
+ spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
+}
+
+static void unf_check_xchg_mgr_status(struct unf_xchg_mgr *xchg_mgr)
+{
+ ulong flags = 0;
+ u32 total_xchg = 0;
+ u32 total_xchg_sum = 0;
+
+ FC_CHECK_RETURN_VOID(xchg_mgr);
+
+ spin_lock_irqsave(&xchg_mgr->free_pool.xchg_freepool_lock, flags);
+
+ total_xchg = xchg_mgr->free_pool.total_fcp_xchg + xchg_mgr->free_pool.total_sfs_xchg;
+ total_xchg_sum = xchg_mgr->free_pool.fcp_xchg_sum + xchg_mgr->free_pool.sfs_xchg_sum;
+
+ if (xchg_mgr->free_pool.xchg_mgr_completion && total_xchg == total_xchg_sum)
+ complete(xchg_mgr->free_pool.xchg_mgr_completion);
+
+ spin_unlock_irqrestore(&xchg_mgr->free_pool.xchg_freepool_lock, flags);
+}
+
+static void unf_free_fcp_xchg(struct unf_xchg *xchg)
+{
+ struct unf_xchg_free_pool *free_pool = NULL;
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ /* Releasing a Specified INI I/O and Invoking the scsi_done Process */
+ unf_done_ini_xchg(xchg);
+ free_pool = xchg->free_pool;
+ xchg_mgr = xchg->xchg_mgr;
+ unf_lport = xchg->lport;
+ unf_rport = xchg->rport;
+
+ atomic_dec(&unf_rport->pending_io_cnt);
+ /* Release the Esgls in the Xchg structure and return it to the EsglPool
+ * of the Lport
+ */
+ unf_release_esgls(xchg);
+
+ if (unlikely(xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu)) {
+ kfree(xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu);
+ xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu = NULL;
+ }
+
+ /* Mount I/O resources to the FCP Free linked list */
+ unf_add_back_to_fcp_list(free_pool, xchg);
+
+ /* The exchange is released synchronously and then forcibly released
+ * here to prevent the normal I/O path from accessing it again
+ */
+ if (unlikely(unf_lport->port_removing))
+ unf_check_xchg_mgr_status(xchg_mgr);
+}
+
+static void unf_init_io_xchg_param(struct unf_xchg *xchg, struct unf_lport *lport,
+ struct unf_xchg_mgr *xchg_mgr)
+{
+ static atomic64_t exhd_id;
+
+ xchg->start_jif = atomic64_inc_return(&exhd_id);
+ xchg->xchg_mgr = xchg_mgr;
+ xchg->free_pool = &xchg_mgr->free_pool;
+ xchg->hot_pool = xchg_mgr->hot_pool;
+ xchg->lport = lport;
+ xchg->xchg_type = UNF_XCHG_TYPE_INI;
+ xchg->free_xchg = unf_free_fcp_xchg;
+ xchg->scsi_or_tgt_cmnd_func = NULL;
+ xchg->io_state = UNF_IO_STATE_NEW;
+ xchg->io_send_stage = TGT_IO_SEND_STAGE_NONE;
+ xchg->io_send_result = TGT_IO_SEND_RESULT_INVALID;
+ xchg->io_send_abort = false;
+ xchg->io_abort_result = false;
+ xchg->oxid = INVALID_VALUE16;
+ xchg->abort_oxid = INVALID_VALUE16;
+ xchg->rxid = INVALID_VALUE16;
+ xchg->sid = INVALID_VALUE32;
+ xchg->did = INVALID_VALUE32;
+ xchg->oid = INVALID_VALUE32;
+ xchg->seq_id = INVALID_VALUE8;
+ xchg->cmnd_code = INVALID_VALUE32;
+ xchg->data_len = 0;
+ xchg->resid_len = 0;
+ xchg->data_direction = DMA_NONE;
+ xchg->may_consume_res_cnt = 0;
+ xchg->fast_consume_res_cnt = 0;
+ xchg->io_front_jif = 0;
+ xchg->tmf_state = 0;
+ xchg->ucode_abts_state = INVALID_VALUE32;
+ xchg->abts_state = 0;
+ xchg->rport_bind_jifs = INVALID_VALUE64;
+ xchg->scsi_id = INVALID_VALUE32;
+ xchg->qos_level = 0;
+ xchg->world_id = INVALID_VALUE32;
+
+ memset(&xchg->dif_control, 0, sizeof(struct unf_dif_control_info));
+ memset(&xchg->req_sgl_info, 0, sizeof(struct unf_req_sgl_info));
+ memset(&xchg->dif_sgl_info, 0, sizeof(struct unf_req_sgl_info));
+ xchg->scsi_cmnd_info.result = 0;
+
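+ /* Record a monotonically increasing allocation index as the exchange
+ * allocation time, skipping the reserved values INVALID_VALUE32 and 0
+ */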
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ (u32)atomic64_inc_return(&((struct unf_lport *)lport->root_lport)->exchg_index);
+
+ if (xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] == INVALID_VALUE32) {
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ (u32)atomic64_inc_return(&((struct unf_lport *)lport->root_lport)->exchg_index);
+ }
+
+ if (xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] == 0) {
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ (u32)atomic64_inc_return(&((struct unf_lport *)lport->root_lport)->exchg_index);
+ }
+
+ atomic_set(&xchg->ref_cnt, 0);
+ atomic_set(&xchg->delay_flag, 0);
+
+ if (delayed_work_pending(&xchg->timeout_work))
+ UNF_DEL_XCHG_TIMER_SAFE(xchg);
+
+ INIT_DELAYED_WORK(&xchg->timeout_work, unf_fc_ini_io_xchg_time_out);
+}
+
+static struct unf_xchg *unf_alloc_io_xchg(struct unf_lport *lport,
+ struct unf_xchg_mgr *xchg_mgr)
+{
+ struct unf_xchg *xchg = NULL;
+ struct list_head *list_node = NULL;
+ struct unf_xchg_free_pool *free_pool = NULL;
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VALUE(xchg_mgr, NULL);
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ free_pool = &xchg_mgr->free_pool;
+ hot_pool = xchg_mgr->hot_pool;
+ FC_CHECK_RETURN_VALUE(free_pool, NULL);
+ FC_CHECK_RETURN_VALUE(hot_pool, NULL);
+
+ /* 1. Free Pool */
+ spin_lock_irqsave(&free_pool->xchg_freepool_lock, flags);
+ if (unlikely(list_empty(&free_pool->list_free_xchg_list))) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "Port(0x%x) has no Exchange anymore.",
+ lport->port_id);
+ spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
+ return NULL;
+ }
+
+ /* Select an idle node from free pool */
+ list_node = UNF_OS_LIST_NEXT(&free_pool->list_free_xchg_list);
+ list_del(list_node);
+ free_pool->total_fcp_xchg--;
+ spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
+
+ xchg = list_entry(list_node, struct unf_xchg, list_xchg_entry);
+ /*
+ * Hot Pool:
+ * When the exchange is mounted to the hot pool, its mount and release
+ * modes must be specified; here it is added to the INI busy list.
+ */
+ flags = 0;
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+ if (unf_alloc_hot_pool_slab(hot_pool, xchg) != RETURN_OK) {
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ unf_add_back_to_fcp_list(free_pool, xchg);
+ if (unlikely(lport->port_removing))
+ unf_check_xchg_mgr_status(xchg_mgr);
+
+ return NULL;
+ }
+ list_add_tail(&xchg->list_xchg_entry, &hot_pool->ini_busylist);
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+
+ /* 3. Exchange State */
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ unf_init_io_xchg_param(xchg, lport, xchg_mgr);
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ return xchg;
+}
+
+static void unf_add_back_to_sfs_list(struct unf_xchg_free_pool *free_pool,
+ struct unf_xchg *xchg)
+{
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(free_pool);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_init_xchg_attribute(xchg);
+
+ spin_lock_irqsave(&free_pool->xchg_freepool_lock, flags);
+
+ list_add_tail(&xchg->list_xchg_entry, &free_pool->list_sfs_xchg_list);
+ free_pool->total_sfs_xchg++;
+ spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
+}
+
+static void unf_free_sfs_xchg(struct unf_xchg *xchg)
+{
+ struct unf_xchg_free_pool *free_pool = NULL;
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ free_pool = xchg->free_pool;
+ unf_lport = xchg->lport;
+ xchg_mgr = xchg->xchg_mgr;
+
+ /* The big SFS memory is allocated when GID_PT/GID_FT is sent; if no
+ * response is received, it must be forcibly released here.
+ */
+
+ unf_free_one_big_sfs(xchg);
+
+ unf_add_back_to_sfs_list(free_pool, xchg);
+
+ if (unlikely(unf_lport->port_removing))
+ unf_check_xchg_mgr_status(xchg_mgr);
+}
+
+static void unf_fc_xchg_add_timer(void *xchg, ulong time_ms,
+ enum unf_timer_type time_type)
+{
+ ulong flag = 0;
+ struct unf_xchg *unf_xchg = NULL;
+ ulong times_ms = time_ms;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ unf_xchg = (struct unf_xchg *)xchg;
+ unf_lport = unf_xchg->lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ /* update timeout */
+ switch (time_type) {
+ /* The processing of TGT RRQ timeout is the same as that of TGT IO
+ * timeout. The timeout period is different.
+ */
+ case UNF_TIMER_TYPE_TGT_RRQ:
+ times_ms = times_ms + UNF_TGT_RRQ_REDUNDANT_TIME;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "TGT RRQ Timer set.");
+ break;
+
+ case UNF_TIMER_TYPE_INI_RRQ:
+ times_ms = times_ms - UNF_INI_RRQ_REDUNDANT_TIME;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "INI RRQ Timer set.");
+ break;
+
+ case UNF_TIMER_TYPE_SFS:
+ times_ms = times_ms + UNF_INI_ELS_REDUNDANT_TIME;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "INI ELS Timer set.");
+ break;
+ default:
+ break;
+ }
+
+ /* The exchange owning the timer must still be valid: if its reference
+ * count is 0, the timer must not be added
+ */
+ if (atomic_read(&unf_xchg->ref_cnt) <= 0) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_KEVENT,
+ "[warn]Abnormal Exchange(0x%p), Reference count(0x%x), Can't add timer.",
+ unf_xchg, atomic_read(&unf_xchg->ref_cnt));
+ return;
+ }
+
+ /* Delay Work: Hold for timer */
+ spin_lock_irqsave(&unf_xchg->xchg_state_lock, flag);
+ if (queue_delayed_work(unf_lport->xchg_wq, &unf_xchg->timeout_work,
+ (ulong)msecs_to_jiffies((u32)times_ms))) {
+ /* hold for timer */
+ atomic_inc(&unf_xchg->ref_cnt);
+ }
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flag);
+}
+
+static void unf_init_sfs_xchg_param(struct unf_xchg *xchg,
+ struct unf_lport *lport,
+ struct unf_xchg_mgr *xchg_mgr)
+{
+ xchg->free_pool = &xchg_mgr->free_pool;
+ xchg->hot_pool = xchg_mgr->hot_pool;
+ xchg->lport = lport;
+ xchg->xchg_mgr = xchg_mgr;
+ xchg->free_xchg = unf_free_sfs_xchg;
+ xchg->xchg_type = UNF_XCHG_TYPE_SFS;
+ xchg->io_state = UNF_IO_STATE_NEW;
+ xchg->scsi_cmnd_info.result = 0;
+ xchg->ob_callback_sts = UNF_IO_SUCCESS;
+
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ (u32)atomic64_inc_return(&((struct unf_lport *)lport->root_lport)->exchg_index);
+
+ if (xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] ==
+ INVALID_VALUE32) {
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ (u32)atomic64_inc_return(&((struct unf_lport *)lport->root_lport)->exchg_index);
+ }
+
+ if (xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] == 0) {
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ (u32)atomic64_inc_return(&((struct unf_lport *)lport->root_lport)->exchg_index);
+ }
+
+ if (delayed_work_pending(&xchg->timeout_work))
+ UNF_DEL_XCHG_TIMER_SAFE(xchg);
+
+ INIT_DELAYED_WORK(&xchg->timeout_work, unf_sfs_xchg_time_out);
+}
+
+static struct unf_xchg *unf_alloc_sfs_xchg(struct unf_lport *lport,
+ struct unf_xchg_mgr *xchg_mgr)
+{
+ struct unf_xchg *xchg = NULL;
+ struct list_head *list_node = NULL;
+ struct unf_xchg_free_pool *free_pool = NULL;
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(xchg_mgr, NULL);
+ free_pool = &xchg_mgr->free_pool;
+ hot_pool = xchg_mgr->hot_pool;
+ FC_CHECK_RETURN_VALUE(free_pool, NULL);
+ FC_CHECK_RETURN_VALUE(hot_pool, NULL);
+
+ /* Select an idle node from free pool */
+ spin_lock_irqsave(&free_pool->xchg_freepool_lock, flags);
+ if (list_empty(&free_pool->list_sfs_xchg_list)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Port(0x%x) has no Exchange anymore.",
+ lport->port_id);
+ spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
+ return NULL;
+ }
+
+ list_node = UNF_OS_LIST_NEXT(&free_pool->list_sfs_xchg_list);
+ list_del(list_node);
+ free_pool->total_sfs_xchg--;
+ spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
+
+ xchg = list_entry(list_node, struct unf_xchg, list_xchg_entry);
+ /*
+ * The xchg is mounted to the Hot Pool.
+ * The mount mode and release mode of the xchg must be specified
+ * and stored in the sfs linked list.
+ */
+ flags = 0;
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+ if (unf_alloc_hot_pool_slab(hot_pool, xchg) != RETURN_OK) {
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ unf_add_back_to_sfs_list(free_pool, xchg);
+ if (unlikely(lport->port_removing))
+ unf_check_xchg_mgr_status(xchg_mgr);
+
+ return NULL;
+ }
+
+ list_add_tail(&xchg->list_xchg_entry, &hot_pool->sfs_busylist);
+ hot_pool->total_xchges++;
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ unf_init_sfs_xchg_param(xchg, lport, xchg_mgr);
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ return xchg;
+}
+
+static void *unf_get_new_xchg(void *lport, u32 xchg_type)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 exchg_type = 0;
+ u16 xchg_mgr_type;
+ u32 rtry_cnt = 0;
+ u32 last_exchg_mgr_idx;
+
+ xchg_mgr_type = (xchg_type >> UNF_SHIFT_16);
+ exchg_type = xchg_type & SPFC_XCHG_TYPE_MASK;
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ /* In the case of NPIV, the lport is the Vport pointer,
+ * and the share uses the ExchMgr of the RootLport.
+ */
+ unf_lport = ((struct unf_lport *)lport)->root_lport;
+ FC_CHECK_RETURN_VALUE(unf_lport, NULL);
+
+ if (unlikely((atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP) ||
+ (atomic_read(&((struct unf_lport *)lport)->lport_no_operate_flag) ==
+ UNF_LPORT_NOP))) {
+ return NULL;
+ }
+
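+ /* Pick an exchange manager round-robin; if allocation from it fails,
+ * retry with the next manager, up to UNF_EXCHG_MGR_NUM attempts
+ */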
+ last_exchg_mgr_idx = (u32)atomic64_inc_return(&unf_lport->last_exchg_mgr_idx);
+try_next_mgr:
+ rtry_cnt++;
+ if (unlikely(rtry_cnt > UNF_EXCHG_MGR_NUM))
+ return NULL;
+
+ /* In fixed mode, only exchange manager 0 is used */
+ if (unlikely(xchg_mgr_type == UNF_XCHG_MGR_TYPE_FIXED)) {
+ xchg_mgr = (struct unf_xchg_mgr *)unf_lport->xchg_mgr[ARRAY_INDEX_0];
+ } else {
+ xchg_mgr = (struct unf_xchg_mgr *)unf_lport
+ ->xchg_mgr[last_exchg_mgr_idx % UNF_EXCHG_MGR_NUM];
+ }
+ if (unlikely(!xchg_mgr)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) get exchangemgr %u is null.",
+ unf_lport->port_id, last_exchg_mgr_idx % UNF_EXCHG_MGR_NUM);
+ return NULL;
+ }
+ last_exchg_mgr_idx++;
+
+ /* Allocate entries based on the Exchange type */
+ switch (exchg_type) {
+ case UNF_XCHG_TYPE_SFS:
+ xchg = unf_alloc_sfs_xchg(lport, xchg_mgr);
+ break;
+ case UNF_XCHG_TYPE_INI:
+ xchg = unf_alloc_io_xchg(lport, xchg_mgr);
+ break;
+
+ default:
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Port(0x%x) unexpected Exchange type(0x%x).",
+ unf_lport->port_id, exchg_type);
+ break;
+ }
+
+ if (likely(xchg)) {
+ xchg->oxid = INVALID_VALUE16;
+ xchg->abort_oxid = INVALID_VALUE16;
+ xchg->rxid = INVALID_VALUE16;
+ xchg->debug_hook = false;
+ xchg->alloc_jif = jiffies;
+
+ atomic_set(&xchg->ref_cnt, 1);
+ atomic_set(&xchg->esgl_cnt, 0);
+ } else {
+ goto try_next_mgr;
+ }
+
+ return xchg;
+}
+
+static void unf_free_xchg(void *lport, void *xchg)
+{
+ struct unf_xchg *unf_xchg = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_xchg = (struct unf_xchg *)xchg;
+ unf_xchg_ref_dec(unf_xchg, XCHG_FREE_XCHG);
+}
+
+u32 unf_init_xchg_mgr_temp(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ lport->xchg_mgr_temp.unf_xchg_get_free_and_init = unf_get_new_xchg;
+ lport->xchg_mgr_temp.unf_xchg_release = unf_free_xchg;
+ lport->xchg_mgr_temp.unf_look_up_xchg_by_tag = unf_look_up_xchg_by_tag;
+ lport->xchg_mgr_temp.unf_look_up_xchg_by_id = unf_find_xchg_by_ox_id;
+ lport->xchg_mgr_temp.unf_xchg_add_timer = unf_fc_xchg_add_timer;
+ lport->xchg_mgr_temp.unf_xchg_cancel_timer = unf_xchg_cancel_timer;
+ lport->xchg_mgr_temp.unf_xchg_abort_all_io = unf_xchg_abort_all_xchg;
+ lport->xchg_mgr_temp.unf_look_up_xchg_by_cmnd_sn = unf_look_up_xchg_by_cmnd_sn;
+ lport->xchg_mgr_temp.unf_xchg_abort_by_lun = unf_xchg_abort_by_lun;
+ lport->xchg_mgr_temp.unf_xchg_abort_by_session = unf_xchg_abort_by_session;
+ lport->xchg_mgr_temp.unf_xchg_mgr_io_xchg_abort = unf_xchg_mgr_io_xchg_abort;
+ lport->xchg_mgr_temp.unf_xchg_mgr_sfs_xchg_abort = unf_xchg_mgr_sfs_xchg_abort;
+
+ return RETURN_OK;
+}
+
+void unf_release_xchg_mgr_temp(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(lport);
+
+ if (lport->dirty_flag & UNF_LPORT_DIRTY_FLAG_XCHGMGR_DIRTY) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) has dirty exchange, Don't release exchange manager template.",
+ lport->port_id);
+
+ return;
+ }
+
+ memset(&lport->xchg_mgr_temp, 0, sizeof(struct unf_cm_xchg_mgr_template));
+
+ lport->destroy_step = UNF_LPORT_DESTROY_STEP_7_DESTROY_XCHG_MGR_TMP;
+}
+
+void unf_set_hot_pool_wait_state(struct unf_lport *lport, bool wait_state)
+{
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ ulong pool_lock_flags = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(lport, i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) hot pool is NULL",
+ lport->port_id);
+ continue;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_flags);
+ hot_pool->wait_state = wait_state;
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_flags);
+ }
+}
+
+u32 unf_xchg_ref_inc(struct unf_xchg *xchg, enum unf_ioflow_id io_stage)
+{
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ ulong flags = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ if (unlikely(xchg->debug_hook)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Xchg(0x%p) State(0x%x) SID_DID(0x%x_0x%x) OX_ID_RX_ID(0x%x_0x%x) AllocJiff(%llu) Refcnt(%d) Stage(%s)",
+ xchg, xchg->io_state, xchg->sid, xchg->did,
+ xchg->oxid, xchg->rxid, xchg->alloc_jif,
+ atomic_read(&xchg->ref_cnt),
+ io_stage_table[io_stage].stage);
+ }
+
+ hot_pool = xchg->hot_pool;
+ FC_CHECK_RETURN_VALUE(hot_pool, UNF_RETURN_ERROR);
+
+ /* Exchange -> Hot Pool Tag check */
+ if (unlikely((xchg->hotpooltag >= (hot_pool->slab_total_sum + hot_pool->base)) ||
+ xchg->hotpooltag < hot_pool->base)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Xchg(0x%p) S_ID(%xh) D_ID(0x%x) hot_pool_tag(0x%x) is bigger than slab total num(0x%x) base(0x%x)",
+ xchg, xchg->sid, xchg->did, xchg->hotpooltag,
+ hot_pool->slab_total_sum + hot_pool->base, hot_pool->base);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* atomic read & inc */
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ if (unlikely(atomic_read(&xchg->ref_cnt) <= 0)) {
+ ret = UNF_RETURN_ERROR;
+ } else {
+ if (unf_get_xchg_by_xchg_tag(hot_pool, xchg->hotpooltag - hot_pool->base) == xchg) {
+ atomic_inc(&xchg->ref_cnt);
+ ret = RETURN_OK;
+ } else {
+ ret = UNF_RETURN_ERROR;
+ }
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ return ret;
+}
+
+void unf_xchg_ref_dec(struct unf_xchg *xchg, enum unf_ioflow_id io_stage)
+{
+ /* Atomic dec ref_cnt & test, free exchange if necessary (ref_cnt==0) */
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ void (*free_xchg)(struct unf_xchg *) = NULL;
+ ulong flags = 0;
+ ulong xchg_lock_flags = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ if (xchg->debug_hook) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Xchg(0x%p) State(0x%x) SID_DID(0x%x_0x%x) OXID_RXID(0x%x_0x%x) AllocJiff(%llu) Refcnt(%d) Stage %s",
+ xchg, xchg->io_state, xchg->sid, xchg->did, xchg->oxid,
+ xchg->rxid, xchg->alloc_jif,
+ atomic_read(&xchg->ref_cnt),
+ io_stage_table[io_stage].stage);
+ }
+
+ hot_pool = xchg->hot_pool;
+ FC_CHECK_RETURN_VOID(hot_pool);
+ FC_CHECK_RETURN_VOID((xchg->hotpooltag >= hot_pool->base));
+
+ /*
+ * 1. Atomic dec & test
+ * 2. Free exchange if necessary (ref_cnt == 0)
+ */
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_lock_flags);
+ if (atomic_dec_and_test(&xchg->ref_cnt)) {
+ free_xchg = xchg->free_xchg;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_lock_flags);
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+ unf_hot_pool_slab_set(hot_pool,
+ xchg->hotpooltag - hot_pool->base, NULL);
+ /* Delete exchange list entry */
+ list_del_init(&xchg->list_xchg_entry);
+ hot_pool->total_xchges--;
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+
+ /* unf_free_fcp_xchg --->>> unf_done_ini_xchg */
+ if (free_xchg)
+ free_xchg(xchg);
+ } else {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_lock_falgs);
+ }
+}
+
+static void unf_init_xchg_attribute(struct unf_xchg *xchg)
+{
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ xchg->xchg_mgr = NULL;
+ xchg->free_pool = NULL;
+ xchg->hot_pool = NULL;
+ xchg->lport = NULL;
+ xchg->rport = NULL;
+ xchg->disc_rport = NULL;
+ xchg->io_state = UNF_IO_STATE_NEW;
+ xchg->io_send_stage = TGT_IO_SEND_STAGE_NONE;
+ xchg->io_send_result = TGT_IO_SEND_RESULT_INVALID;
+ xchg->io_send_abort = false;
+ xchg->io_abort_result = false;
+ xchg->abts_state = 0;
+ xchg->oxid = INVALID_VALUE16;
+ xchg->abort_oxid = INVALID_VALUE16;
+ xchg->rxid = INVALID_VALUE16;
+ xchg->sid = INVALID_VALUE32;
+ xchg->did = INVALID_VALUE32;
+ xchg->oid = INVALID_VALUE32;
+ xchg->disc_portid = INVALID_VALUE32;
+ xchg->seq_id = INVALID_VALUE8;
+ xchg->cmnd_code = INVALID_VALUE32;
+ xchg->cmnd_sn = INVALID_VALUE64;
+ xchg->data_len = 0;
+ xchg->resid_len = 0;
+ xchg->data_direction = DMA_NONE;
+ xchg->hotpooltag = INVALID_VALUE16;
+ xchg->big_sfs_buf = NULL;
+ xchg->may_consume_res_cnt = 0;
+ xchg->fast_consume_res_cnt = 0;
+ xchg->io_front_jif = INVALID_VALUE64;
+ xchg->ob_callback_sts = UNF_IO_SUCCESS;
+ xchg->start_jif = 0;
+ xchg->rport_bind_jifs = INVALID_VALUE64;
+ xchg->scsi_id = INVALID_VALUE32;
+ xchg->qos_level = 0;
+ xchg->world_id = INVALID_VALUE32;
+
+ memset(&xchg->seq, 0, sizeof(struct unf_seq));
+ memset(&xchg->fcp_cmnd, 0, sizeof(struct unf_fcp_cmnd));
+ memset(&xchg->scsi_cmnd_info, 0, sizeof(struct unf_scsi_cmd_info));
+ memset(&xchg->dif_info, 0, sizeof(struct dif_info));
+ memset(xchg->private_data, 0, (PKG_MAX_PRIVATE_DATA_SIZE * sizeof(u32)));
+ xchg->echo_info.echo_result = UNF_ELS_ECHO_RESULT_OK;
+ xchg->echo_info.response_time = 0;
+
+ if (xchg->xchg_type == UNF_XCHG_TYPE_SFS) {
+ if (xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) {
+ memset(xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr, 0,
+ sizeof(union unf_sfs_u));
+ xchg->fcp_sfs_union.sfs_entry.cur_offset = 0;
+ }
+ } else if (xchg->xchg_type != UNF_XCHG_TYPE_INI) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+			     "Exchange Type(0x%x) SFS Union not initialized.",
+ xchg->xchg_type);
+ }
+ xchg->xchg_type = UNF_XCHG_TYPE_INVALID;
+ xchg->xfer_or_rsp_echo = NULL;
+ xchg->scsi_or_tgt_cmnd_func = NULL;
+ xchg->ob_callback = NULL;
+ xchg->callback = NULL;
+ xchg->free_xchg = NULL;
+
+ atomic_set(&xchg->ref_cnt, 0);
+ atomic_set(&xchg->esgl_cnt, 0);
+ atomic_set(&xchg->delay_flag, 0);
+
+ if (delayed_work_pending(&xchg->timeout_work))
+ UNF_DEL_XCHG_TIMER_SAFE(xchg);
+
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+}
+
+bool unf_busy_io_completed(struct unf_lport *lport)
+{
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ ulong pool_lock_flags = 0;
+ u32 i;
+
+ FC_CHECK_RETURN_VALUE(lport, true);
+
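+	/*
+	 * Walk every exchange manager's INI busy list: return false as soon
+	 * as one still holds outstanding I/O, true only when all are empty.
+	 */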
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ xchg_mgr = unf_get_xchg_mgr_by_lport(lport, i);
+ if (unlikely(!xchg_mgr)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) Exchange Manager is NULL",
+ lport->port_id);
+ continue;
+ }
+
+ spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock,
+ pool_lock_flags);
+ if (!list_empty(&xchg_mgr->hot_pool->ini_busylist)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Port(0x%x) ini busylist is not empty.",
+ lport->port_id);
+ spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock,
+ pool_lock_flags);
+ return false;
+ }
+ spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock,
+ pool_lock_flags);
+ }
+ return true;
+}
diff --git a/drivers/scsi/spfc/common/unf_exchg.h b/drivers/scsi/spfc/common/unf_exchg.h
new file mode 100644
index 000000000000..5390f932fe44
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_exchg.h
@@ -0,0 +1,436 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_FCEXCH_H
+#define UNF_FCEXCH_H
+
+#include "unf_type.h"
+#include "unf_fcstruct.h"
+#include "unf_lport.h"
+#include "unf_scsi_common.h"
+
+enum unf_ioflow_id {
+ XCHG_ALLOC = 0,
+ TGT_RECEIVE_ABTS,
+ TGT_ABTS_DONE,
+ TGT_IO_SRR,
+ SFS_RESPONSE,
+ SFS_TIMEOUT,
+ INI_SEND_CMND,
+ INI_RESPONSE_DONE,
+ INI_EH_ABORT,
+ INI_EH_DEVICE_RESET,
+ INI_EH_BLS_DONE,
+ INI_IO_TIMEOUT,
+ INI_REQ_TIMEOUT,
+ XCHG_CANCEL_TIMER,
+ XCHG_FREE_XCHG,
+ SEND_ELS,
+ IO_XCHG_WAIT,
+ XCHG_BUTT
+};
+
+enum unf_xchg_type {
+ UNF_XCHG_TYPE_INI = 0, /* INI IO */
+ UNF_XCHG_TYPE_SFS = 1,
+ UNF_XCHG_TYPE_INVALID
+};
+
+enum unf_xchg_mgr_type {
+ UNF_XCHG_MGR_TYPE_RANDOM = 0,
+ UNF_XCHG_MGR_TYPE_FIXED = 1,
+ UNF_XCHG_MGR_TYPE_INVALID
+};
+
+enum tgt_io_send_stage {
+ TGT_IO_SEND_STAGE_NONE = 0,
+ TGT_IO_SEND_STAGE_DOING = 1, /* xfer/rsp into queue */
+ TGT_IO_SEND_STAGE_DONE = 2, /* xfer/rsp into queue complete */
+ TGT_IO_SEND_STAGE_ECHO = 3, /* driver handled TSTS */
+ TGT_IO_SEND_STAGE_INVALID
+};
+
+enum tgt_io_send_result {
+ TGT_IO_SEND_RESULT_OK = 0, /* xfer/rsp enqueue succeed */
+ TGT_IO_SEND_RESULT_FAIL = 1, /* xfer/rsp enqueue fail */
+ TGT_IO_SEND_RESULT_INVALID
+};
+
+struct unf_io_flow_id {
+ char *stage;
+};
+
+#define unf_check_oxid_matched(ox_id, oid, xchg) \
+ (((ox_id) == (xchg)->oxid) && ((oid) == (xchg)->oid) && \
+ (atomic_read(&(xchg)->ref_cnt) > 0))
+
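+/*
+ * Guard against stale packages referencing a recycled exchange: if the
+ * allocation timestamp carried in the package differs from the one recorded
+ * in the exchange, log an error and make the caller return UNF_RETURN_ERROR;
+ * a zero package timestamp is only logged.
+ */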
+#define UNF_CHECK_ALLOCTIME_VALID(lport, xchg_tag, exchg, pkg_alloc_time, \
+ xchg_alloc_time) \
+ do { \
+ if (unlikely(((pkg_alloc_time) != 0) && \
+ ((pkg_alloc_time) != (xchg_alloc_time)))) { \
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR, \
+ "Lport(0x%x_0x%x_0x%x_0x%p) AllocTime is not " \
+				     "equal, PKG AllocTime:0x%x, " \
+				     "Exchg AllocTime:0x%x", \
+ (lport)->port_id, (lport)->nport_id, xchg_tag, \
+ exchg, pkg_alloc_time, xchg_alloc_time); \
+ return UNF_RETURN_ERROR; \
+ }; \
+ if (unlikely((pkg_alloc_time) == 0)) { \
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR, \
+				     "Lport(0x%x_0x%x_0x%x_0x%p) pkgtime err, PKG " \
+				     "AllocTime:0x%x, Exchg AllocTime:0x%x", \
+ (lport)->port_id, (lport)->nport_id, xchg_tag, \
+ exchg, pkg_alloc_time, xchg_alloc_time); \
+ }; \
+ } while (0)
+
+#define UNF_SET_SCSI_CMND_RESULT(xchg, cmnd_result) \
+ ((xchg)->scsi_cmnd_info.result = (cmnd_result))
+
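+/* Exchange timers are derived from the link R_A_TOV: GS/SFS exchanges get
+ * three times R_A_TOV, BLS and ELS exchanges get twice R_A_TOV.
+ */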
+#define UNF_GET_GS_SFS_XCHG_TIMER(lport) (3 * (ulong)(lport)->ra_tov)
+
+#define UNF_GET_BLS_SFS_XCHG_TIMER(lport) (2 * (ulong)(lport)->ra_tov)
+
+#define UNF_GET_ELS_SFS_XCHG_TIMER(lport) (2 * (ulong)(lport)->ra_tov)
+
+#define UNF_ELS_ECHO_RESULT_OK 0
+#define UNF_ELS_ECHO_RESULT_FAIL 1
+
+struct unf_xchg;
+/* Xchg hot pool, busy IO lookup Xchg */
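+/* A hot pool tag is the slab index offset by 'base'; valid tags therefore
+ * fall in the range [base, base + slab_total_sum).
+ */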
+struct unf_xchg_hot_pool {
+ /* Xchg sum, in hot pool */
+ u16 total_xchges;
+ bool wait_state;
+
+ /* pool lock */
+ spinlock_t xchg_hotpool_lock;
+
+	/* Xchg position lists */
+ struct list_head sfs_busylist;
+ struct list_head ini_busylist;
+ struct list_head list_destroy_xchg;
+
+ /* Next free hot point */
+ u16 slab_next_index;
+ u16 slab_total_sum;
+ u16 base;
+
+ struct unf_lport *lport;
+
+ struct unf_xchg *xchg_slab[ARRAY_INDEX_0];
+};
+
+/* Xchg's FREE POOL */
+struct unf_xchg_free_pool {
+ spinlock_t xchg_freepool_lock;
+
+ u32 fcp_xchg_sum;
+
+ /* IO used Xchg */
+ struct list_head list_free_xchg_list;
+ u32 total_fcp_xchg;
+
+ /* SFS used Xchg */
+ struct list_head list_sfs_xchg_list;
+ u32 total_sfs_xchg;
+ u32 sfs_xchg_sum;
+
+ struct completion *xchg_mgr_completion;
+};
+
+struct unf_big_sfs {
+ struct list_head entry_bigsfs;
+ void *addr;
+ u32 size;
+};
+
+struct unf_big_sfs_pool {
+ void *big_sfs_pool;
+ u32 free_count;
+ struct list_head list_freepool;
+ struct list_head list_busypool;
+ spinlock_t big_sfs_pool_lock;
+};
+
+/* Xchg Manager for vport Xchg */
+struct unf_xchg_mgr {
+ /* MG type */
+ u32 mgr_type;
+
+ /* MG entry */
+ struct list_head xchg_mgr_entry;
+
+ /* MG attribution */
+ u32 mem_szie;
+
+ /* MG alloced resource */
+ void *fcp_mm_start;
+
+ u32 sfs_mem_size;
+ void *sfs_mm_start;
+ dma_addr_t sfs_phy_addr;
+
+ struct unf_xchg_free_pool free_pool;
+ struct unf_xchg_hot_pool *hot_pool;
+
+ struct unf_big_sfs_pool big_sfs_pool;
+
+ struct buf_describe big_sfs_buf_list;
+};
+
+struct unf_seq {
+ /* Seq ID */
+ u8 seq_id;
+
+ /* Seq Cnt */
+ u16 seq_cnt;
+
+	/* Seq state and len, maybe used for FCoE */
+ u16 seq_stat;
+ u32 rec_data_len;
+};
+
+union unf_xchg_fcp_sfs {
+ struct unf_sfs_entry sfs_entry;
+ struct unf_fcp_rsp_iu_entry fcp_rsp_entry;
+};
+
+#define UNF_IO_STATE_NEW 0
+#define TGT_IO_STATE_SEND_XFERRDY (1 << 2) /* succeed to send XFer rdy */
+#define TGT_IO_STATE_RSP (1 << 5) /* chip send rsp */
+#define TGT_IO_STATE_ABORT (1 << 7)
+
+#define INI_IO_STATE_UPTASK \
+ (1 << 15) /* INI Upper-layer Task Management Commands */
+#define INI_IO_STATE_UPABORT \
+	(1 << 16) /* INI Upper-layer timeout Abort flag */
+#define INI_IO_STATE_DRABORT (1 << 17) /* INI driver Abort flag */
+#define INI_IO_STATE_DONE (1 << 18) /* INI complete flag */
+#define INI_IO_STATE_WAIT_RRQ (1 << 19) /* INI wait send rrq */
+#define INI_IO_STATE_UPSEND_ERR (1 << 20) /* INI send fail flag */
+/* INI only clear firmware resource flag */
+#define INI_IO_STATE_ABORT_RESOURCE (1 << 21)
+/* ioc abort: INI sends ABTS, waits on a 5s timeout semaphore, then sets this flag */
+#define INI_IO_STATE_ABORT_TIMEOUT (1 << 22)
+#define INI_IO_STATE_RRQSEND_ERR (1 << 23) /* INI send RRQ fail flag */
+#define INI_IO_STATE_LOGO (1 << 24) /* INI busy IO session logo status */
+#define INI_IO_STATE_TMF_ABORT (1 << 25) /* INI TMF ABORT IO flag */
+#define INI_IO_STATE_REC_TIMEOUT_WAIT (1 << 26) /* INI REC TIMEOUT WAIT */
+#define INI_IO_STATE_REC_TIMEOUT (1 << 27) /* INI REC TIMEOUT */
+
+#define TMF_RESPONSE_RECEIVED (1 << 0)
+#define MARKER_STS_RECEIVED (1 << 1)
+#define ABTS_RESPONSE_RECEIVED (1 << 2)
+
+struct unf_scsi_cmd_info {
+ ulong time_out;
+ ulong abort_time_out;
+ void *scsi_cmnd;
+ void (*done)(struct unf_scsi_cmnd *scsi_cmd);
+ ini_get_sgl_entry_buf unf_get_sgl_entry_buf;
+ struct unf_ini_error_code *err_code_table; /* error code table */
+ char *sense_buf;
+ u32 err_code_table_cout; /* Size of the error code table */
+ u32 buf_len;
+ u32 entry_cnt;
+ u32 result; /* Stores command execution results */
+ u32 port_id;
+	/* Re-search for the rport based on scsi_id during retry;
+	 * otherwise data inconsistency will occur
+	 */
+ u32 scsi_id;
+ void *sgl;
+ uplevel_cmd_done uplevel_done;
+};
+
+struct unf_req_sgl_info {
+ void *sgl;
+ void *sgl_start;
+ u32 req_index;
+ u32 entry_index;
+};
+
+struct unf_els_echo_info {
+ u64 response_time;
+ struct semaphore echo_sync_sema;
+ u32 echo_result;
+};
+
+struct unf_xchg {
+ /* Mg resource relative */
+ /* list delete from HotPool */
+ struct unf_xchg_hot_pool *hot_pool;
+
+ /* attach to FreePool */
+ struct unf_xchg_free_pool *free_pool;
+ struct unf_xchg_mgr *xchg_mgr;
+ struct unf_lport *lport; /* Local LPort/VLPort */
+	struct unf_rport *rport; /* Remote Port */
+	struct unf_rport *disc_rport; /* Remote Port used for discovery */
+ struct list_head list_xchg_entry;
+ struct list_head list_abort_xchg_entry;
+ spinlock_t xchg_state_lock;
+
+ /* Xchg reference */
+ atomic_t ref_cnt;
+ atomic_t esgl_cnt;
+ bool debug_hook;
+ /* Xchg attribution */
+ u16 hotpooltag;
+ u16 abort_oxid;
+	u32 xchg_type; /* LS, TGT CMND, REQ, or SCSI Cmnd */
+ u16 oxid;
+ u16 rxid;
+ u32 sid;
+ u32 did;
+ u32 oid; /* ID of the exchange initiator */
+ u32 disc_portid; /* Send GNN_ID/GFF_ID NPortId */
+ u8 seq_id;
+ u8 byte_orders; /* Byte order */
+ struct unf_seq seq;
+
+ u32 cmnd_code;
+ u32 world_id;
+ /* Dif control */
+ struct unf_dif_control_info dif_control;
+ struct dif_info dif_info;
+ /* IO status Abort,timer out */
+ u32 io_state; /* TGT_IO_STATE_E */
+ u32 tmf_state; /* TMF STATE */
+ u32 ucode_abts_state;
+ u32 abts_state;
+
+ /* IO Enqueuing */
+ enum tgt_io_send_stage io_send_stage; /* tgt_io_send_stage */
+ /* IO Enqueuing result, success or failure */
+ enum tgt_io_send_result io_send_result; /* tgt_io_send_result */
+
+	u8 io_send_abort; /* whether an io abort has been sent */
+	/* result of io abort cmd (succ: true; fail: false) */
+	u8 io_abort_result;
+	/* for INI: length of the data transmitted over the PCI link */
+	u32 data_len;
+	/* residual length: greater than 0 means underflow, less than 0 means overflow */
+	int resid_len;
+ /* +++++++++++++++++IO Special++++++++++++++++++++ */
+ /* point to tgt cmnd/req/scsi cmnd */
+ /* Fcp cmnd */
+ struct unf_fcp_cmnd fcp_cmnd;
+
+ struct unf_scsi_cmd_info scsi_cmnd_info;
+
+ struct unf_req_sgl_info req_sgl_info;
+
+ struct unf_req_sgl_info dif_sgl_info;
+
+ u64 cmnd_sn;
+ void *pinitiator;
+
+ /* timestamp */
+ u64 start_jif;
+ u64 alloc_jif;
+
+ u64 io_front_jif;
+
+ u32 may_consume_res_cnt;
+ u32 fast_consume_res_cnt;
+
+ /* scsi req info */
+ u32 data_direction;
+
+ struct unf_big_sfs *big_sfs_buf;
+
+ /* scsi cmnd sense_buffer pointer */
+ union unf_xchg_fcp_sfs fcp_sfs_union;
+
+ /* One exchange may use several External Sgls */
+ struct list_head list_esgls;
+ struct unf_els_echo_info echo_info;
+ struct semaphore task_sema;
+
+ /* for RRQ ,IO Xchg add to SFS Xchg */
+ void *io_xchg;
+
+ /* Xchg delay work */
+ struct delayed_work timeout_work;
+
+ void (*xfer_or_rsp_echo)(struct unf_xchg *xchg, u32 status);
+
+ /* wait list XCHG send function */
+ int (*scsi_or_tgt_cmnd_func)(struct unf_xchg *xchg);
+
+ /* send result callback */
+ void (*ob_callback)(struct unf_xchg *xchg);
+
+ /* Response IO callback */
+ void (*callback)(void *lport, void *rport, void *xchg);
+
+ /* Xchg release function */
+ void (*free_xchg)(struct unf_xchg *xchg);
+
+ /* +++++++++++++++++low level Special++++++++++++++++++++ */
+ /* private data,provide for low level */
+ u32 private_data[PKG_MAX_PRIVATE_DATA_SIZE];
+
+ u64 rport_bind_jifs;
+
+ /* sfs exchg ob callback status */
+ u32 ob_callback_sts;
+ u32 scsi_id;
+ u32 qos_level;
+ void *ls_rsp_addr;
+ void *ls_req;
+ u32 status;
+ atomic_t delay_flag;
+ void *upper_ct;
+};
+
+struct unf_esgl_page *
+unf_get_and_add_one_free_esgl_page(struct unf_lport *lport,
+ struct unf_xchg *xchg);
+void unf_release_xchg_mgr_temp(struct unf_lport *lport);
+u32 unf_init_xchg_mgr_temp(struct unf_lport *lport);
+u32 unf_alloc_xchg_resource(struct unf_lport *lport);
+void unf_free_all_xchg_mgr(struct unf_lport *lport);
+void unf_xchg_mgr_destroy(struct unf_lport *lport);
+u32 unf_xchg_ref_inc(struct unf_xchg *xchg, enum unf_ioflow_id io_stage);
+void unf_xchg_ref_dec(struct unf_xchg *xchg, enum unf_ioflow_id io_stage);
+struct unf_xchg_mgr *unf_get_xchg_mgr_by_lport(struct unf_lport *lport,
+ u32 mgr_idx);
+struct unf_xchg_hot_pool *unf_get_hot_pool_by_lport(struct unf_lport *lport,
+ u32 mgr_idx);
+void unf_free_lport_ini_xchg(struct unf_xchg_mgr *xchg_mgr, bool done_ini_flag);
+struct unf_xchg *unf_cm_lookup_xchg_by_cmnd_sn(void *lport, u64 command_sn,
+ u32 world_id, void *pinitiator);
+void *unf_cm_lookup_xchg_by_id(void *lport, u16 ox_id, u32 oid);
+void unf_cm_xchg_abort_by_lun(struct unf_lport *lport, struct unf_rport *rport,
+ u64 lun_id, void *tm_xchg,
+ bool abort_all_lun_flag);
+void unf_cm_xchg_abort_by_session(struct unf_lport *lport,
+ struct unf_rport *rport);
+
+void unf_cm_xchg_mgr_abort_io_by_id(struct unf_lport *lport,
+ struct unf_rport *rport, u32 sid, u32 did,
+ u32 extra_io_stat);
+void unf_cm_xchg_mgr_abort_sfs_by_id(struct unf_lport *lport,
+ struct unf_rport *rport, u32 sid, u32 did);
+void unf_cm_free_xchg(void *lport, void *xchg);
+void *unf_cm_get_free_xchg(void *lport, u32 xchg_type);
+void *unf_cm_lookup_xchg_by_tag(void *lport, u16 hot_pool_tag);
+void unf_release_esgls(struct unf_xchg *xchg);
+void unf_show_all_xchg(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr);
+void unf_destroy_dirty_xchg(struct unf_lport *lport, bool show_only);
+void unf_wake_up_scsi_task_cmnd(struct unf_lport *lport);
+void unf_set_hot_pool_wait_state(struct unf_lport *lport, bool wait_state);
+void unf_free_lport_all_xchg(struct unf_lport *lport);
+extern u32 unf_get_up_level_cmnd_errcode(struct unf_ini_error_code *err_table,
+ u32 err_table_count, u32 drv_err_code);
+bool unf_busy_io_completed(struct unf_lport *lport);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_exchg_abort.c b/drivers/scsi/spfc/common/unf_exchg_abort.c
new file mode 100644
index 000000000000..68f751be04aa
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_exchg_abort.c
@@ -0,0 +1,825 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_exchg_abort.h"
+#include "unf_log.h"
+#include "unf_common.h"
+#include "unf_rport.h"
+#include "unf_service.h"
+#include "unf_ls.h"
+#include "unf_io.h"
+
+void unf_cm_xchg_mgr_abort_io_by_id(struct unf_lport *lport, struct unf_rport *rport, u32 sid,
+ u32 did, u32 extra_io_state)
+{
+ /*
+ * for target session: set ABORT
+ * 1. R_Port remove
+ * 2. Send PLOGI_ACC callback
+ * 3. RCVD PLOGI
+ * 4. RCVD LOGO
+ */
+ FC_CHECK_RETURN_VOID(lport);
+
+ if (lport->xchg_mgr_temp.unf_xchg_mgr_io_xchg_abort) {
+ /* The SID/DID of the Xchg is in reverse direction in different
+ * phases. Therefore, the reverse direction needs to be
+ * considered
+ */
+ lport->xchg_mgr_temp.unf_xchg_mgr_io_xchg_abort(lport, rport, sid, did,
+ extra_io_state);
+ lport->xchg_mgr_temp.unf_xchg_mgr_io_xchg_abort(lport, rport, did, sid,
+ extra_io_state);
+ }
+}
+
+void unf_cm_xchg_mgr_abort_sfs_by_id(struct unf_lport *lport,
+ struct unf_rport *rport, u32 sid, u32 did)
+{
+ FC_CHECK_RETURN_VOID(lport);
+
+ if (lport->xchg_mgr_temp.unf_xchg_mgr_sfs_xchg_abort) {
+ /* The SID/DID of the Xchg is in reverse direction in different
+ * phases, therefore, the reverse direction needs to be
+ * considered
+ */
+ lport->xchg_mgr_temp.unf_xchg_mgr_sfs_xchg_abort(lport, rport, sid, did);
+ lport->xchg_mgr_temp.unf_xchg_mgr_sfs_xchg_abort(lport, rport, did, sid);
+ }
+}
+
+void unf_cm_xchg_abort_by_lun(struct unf_lport *lport, struct unf_rport *rport,
+ u64 lun_id, void *xchg, bool abort_all_lun_flag)
+{
+ /*
+ * LUN Reset: set UP_ABORT tag, with:
+ * INI_Busy_list, IO_Wait_list,
+ * IO_Delay_list, IO_Delay_transfer_list
+ */
+ void (*unf_xchg_abort_by_lun)(void *, void *, u64, void *, bool) = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_xchg_abort_by_lun = lport->xchg_mgr_temp.unf_xchg_abort_by_lun;
+ if (unf_xchg_abort_by_lun)
+ unf_xchg_abort_by_lun((void *)lport, (void *)rport, lun_id,
+ xchg, abort_all_lun_flag);
+}
+
+void unf_cm_xchg_abort_by_session(struct unf_lport *lport, struct unf_rport *rport)
+{
+ void (*unf_xchg_abort_by_session)(void *, void *) = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_xchg_abort_by_session = lport->xchg_mgr_temp.unf_xchg_abort_by_session;
+ if (unf_xchg_abort_by_session)
+ unf_xchg_abort_by_session((void *)lport, (void *)rport);
+}
+
+static void unf_xchg_abort_all_sfs_xchg(struct unf_lport *lport, bool clean)
+{
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *xchg_node = NULL;
+ struct list_head *next_xchg_node = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong pool_lock_falgs = 0;
+ ulong xchg_lock_flags = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(lport, i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT,
+ UNF_MAJOR, "Port(0x%x) Hot Pool is NULL.", lport->port_id);
+
+ continue;
+ }
+
+ if (!clean) {
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
+
+ /* Clearing the SFS_Busy_list Exchange Resource */
+ list_for_each_safe(xchg_node, next_xchg_node, &hot_pool->sfs_busylist) {
+ xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_lock_flags);
+ if (atomic_read(&xchg->ref_cnt) > 0)
+ xchg->io_state |= TGT_IO_STATE_ABORT;
+
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_lock_flags);
+ }
+
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
+ } else {
+ continue;
+ }
+ }
+}
+
+static void unf_xchg_abort_ini_io_xchg(struct unf_lport *lport, bool clean)
+{
+ /* Clean L_Port/V_Port Link Down I/O: Abort */
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *xchg_node = NULL;
+ struct list_head *next_xchg_node = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong pool_lock_falgs = 0;
+ ulong xchg_lock_flags = 0;
+ u32 io_state = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(lport, i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) hot pool is NULL",
+ lport->port_id);
+
+ continue;
+ }
+
+ if (!clean) {
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
+
+ /* 1. Abort INI_Busy_List IO */
+ list_for_each_safe(xchg_node, next_xchg_node, &hot_pool->ini_busylist) {
+ xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_lock_flags);
+ if (atomic_read(&xchg->ref_cnt) > 0)
+ xchg->io_state |= INI_IO_STATE_DRABORT | io_state;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_lock_flags);
+ }
+
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
+ } else {
+			/* Do nothing, just continue */
+ continue;
+ }
+ }
+}
+
+void unf_xchg_abort_all_xchg(void *lport, u32 xchg_type, bool clean)
+{
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+ unf_lport = (struct unf_lport *)lport;
+
+ switch (xchg_type) {
+ case UNF_XCHG_TYPE_SFS:
+ unf_xchg_abort_all_sfs_xchg(unf_lport, clean);
+ break;
+ /* Clean L_Port/V_Port Link Down I/O: Abort */
+ case UNF_XCHG_TYPE_INI:
+ unf_xchg_abort_ini_io_xchg(unf_lport, clean);
+ break;
+ default:
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) unknown exch type(0x%x)",
+ unf_lport->port_id, xchg_type);
+ break;
+ }
+}
+
+static void unf_xchg_abort_ini_send_tm_cmd(void *lport, void *rport, u64 lun_id)
+{
+ /*
+ * LUN Reset: set UP_ABORT tag, with:
+ * INI_Busy_list, IO_Wait_list,
+ * IO_Delay_list, IO_Delay_transfer_list
+ */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong flags = 0;
+ ulong xchg_flag = 0;
+ u32 i = 0;
+ u64 raw_lun_id = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ unf_lport = ((struct unf_lport *)lport)->root_lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+ unf_rport = (struct unf_rport *)rport;
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) hot pool is NULL",
+ unf_lport->port_id);
+ continue;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+
+ /* 1. for each exchange from busy list */
+ list_for_each_safe(node, next_node, &hot_pool->ini_busylist) {
+ xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
+
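+			/*
+			 * Extract the 16-bit LUN number from the FCP LUN
+			 * field and mark only matching I/O on this R_Port
+			 * for TMF abort.
+			 */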
+ raw_lun_id = *(u64 *)(xchg->fcp_cmnd.lun) >> UNF_SHIFT_16 &
+ UNF_RAW_LUN_ID_MASK;
+ if (lun_id == raw_lun_id && unf_rport == xchg->rport) {
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+ xchg->io_state |= INI_IO_STATE_TMF_ABORT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Exchange(%p) state(0x%x) S_ID(0x%x) D_ID(0x%x) tag(0x%x) abort by TMF CMD",
+ xchg, xchg->io_state,
+ ((struct unf_lport *)lport)->nport_id,
+ unf_rport->nport_id, xchg->hotpooltag);
+ }
+ }
+
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ }
+}
+
+static void unf_xchg_abort_ini_tmf_target_reset(void *lport, void *rport)
+{
+ /*
+ * LUN Reset: set UP_ABORT tag, with:
+ * INI_Busy_list, IO_Wait_list,
+ * IO_Delay_list, IO_Delay_transfer_list
+ */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong flags = 0;
+ ulong xchg_flag = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ unf_lport = ((struct unf_lport *)lport)->root_lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+ unf_rport = (struct unf_rport *)rport;
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) hot pool is NULL",
+ unf_lport->port_id);
+ continue;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+
+ /* 1. for each exchange from busy_list */
+ list_for_each_safe(node, next_node, &hot_pool->ini_busylist) {
+ xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
+ if (unf_rport == xchg->rport) {
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+ xchg->io_state |= INI_IO_STATE_TMF_ABORT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Exchange(%p) state(0x%x) S_ID(0x%x) D_ID(0x%x) tag(0x%x) abort by TMF CMD",
+ xchg, xchg->io_state, unf_lport->nport_id,
+ unf_rport->nport_id, xchg->hotpooltag);
+ }
+ }
+
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ }
+}
+
+void unf_xchg_abort_by_lun(void *lport, void *rport, u64 lun_id, void *xchg,
+ bool abort_all_lun_flag)
+{
+ /* ABORT: set UP_ABORT tag for target LUN I/O */
+ struct unf_xchg *tm_xchg = (struct unf_xchg *)xchg;
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[event]Port(0x%x) LUN_ID(0x%llx) TM_EXCH(0x%p) flag(%d)",
+ ((struct unf_lport *)lport)->port_id, lun_id, xchg,
+ abort_all_lun_flag);
+
+ /* for INI Mode */
+ if (!tm_xchg) {
+ /*
+ * LUN Reset: set UP_ABORT tag, with:
+ * INI_Busy_list, IO_Wait_list,
+ * IO_Delay_list, IO_Delay_transfer_list
+ */
+ unf_xchg_abort_ini_send_tm_cmd(lport, rport, lun_id);
+
+ return;
+ }
+}
+
+void unf_xchg_abort_by_session(void *lport, void *rport)
+{
+ /*
+ * LUN Reset: set UP_ABORT tag, with:
+ * INI_Busy_list, IO_Wait_list,
+ * IO_Delay_list, IO_Delay_transfer_list
+ */
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[event]Port(0x%x) Rport(0x%x) start session reset with TMF",
+ ((struct unf_lport *)lport)->port_id, ((struct unf_rport *)rport)->nport_id);
+
+ unf_xchg_abort_ini_tmf_target_reset(lport, rport);
+}
+
+void unf_xchg_up_abort_io_by_scsi_id(void *lport, u32 scsi_id)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_xchg *xchg = NULL;
+ ulong flags = 0;
+ ulong xchg_flag = 0;
+ u32 i;
+ u32 io_abort_flag = INI_IO_STATE_UPABORT | INI_IO_STATE_UPSEND_ERR |
+ INI_IO_STATE_TMF_ABORT;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_lport = ((struct unf_lport *)lport)->root_lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) hot pool is NULL",
+ unf_lport->port_id);
+ continue;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+
+ /* 1. for each exchange from busy_list */
+ list_for_each_safe(node, next_node, &hot_pool->ini_busylist) {
+ xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+ if (lport == xchg->lport && scsi_id == xchg->scsi_id &&
+ !(xchg->io_state & io_abort_flag)) {
+ xchg->io_state |= INI_IO_STATE_UPABORT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Exchange(%p) scsi_cmd(0x%p) state(0x%x) scsi_id(0x%x) tag(0x%x) upabort by scsi id",
+ xchg, xchg->scsi_cmnd_info.scsi_cmnd,
+ xchg->io_state, scsi_id, xchg->hotpooltag);
+ } else {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+ }
+ }
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ }
+}
+
+static void unf_ini_busy_io_xchg_abort(void *xchg_hot_pool, void *rport,
+ u32 sid, u32 did, u32 extra_io_state)
+{
+ /*
+ * for target session: Set (DRV) ABORT
+ * 1. R_Port remove
+ * 2. Send PLOGI_ACC callback
+ * 3. RCVD PLOGI
+ * 4. RCVD LOGO
+ */
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct unf_xchg *xchg = NULL;
+ struct list_head *xchg_node = NULL;
+ struct list_head *next_xchg_node = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong xchg_lock_flags = 0;
+
+ unf_rport = (struct unf_rport *)rport;
+ hot_pool = (struct unf_xchg_hot_pool *)xchg_hot_pool;
+
+ /* ABORT INI IO: INI_BUSY_LIST */
+ list_for_each_safe(xchg_node, next_xchg_node, &hot_pool->ini_busylist) {
+ xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_lock_flags);
+ if (did == xchg->did && sid == xchg->sid &&
+ unf_rport == xchg->rport &&
+ (atomic_read(&xchg->ref_cnt) > 0)) {
+ xchg->scsi_cmnd_info.result = UNF_SCSI_HOST(DID_IMM_RETRY);
+ xchg->io_state |= INI_IO_STATE_DRABORT;
+ xchg->io_state |= extra_io_state;
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Abort INI:0x%p---0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----%llu.",
+ xchg, (u32)xchg->hotpooltag, (u32)xchg->xchg_type,
+ (u32)xchg->oxid, (u32)xchg->rxid,
+ (u32)xchg->sid, (u32)xchg->did, (u32)xchg->io_state,
+ atomic_read(&xchg->ref_cnt), xchg->alloc_jif);
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_lock_flags);
+ }
+}
+
+void unf_xchg_mgr_io_xchg_abort(void *lport, void *rport, u32 sid, u32 did, u32 extra_io_state)
+{
+ /*
+ * for target session: set ABORT
+ * 1. R_Port remove
+ * 2. Send PLOGI_ACC callback
+ * 3. RCVD PLOGI
+ * 4. RCVD LOGO
+ */
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct unf_lport *unf_lport = NULL;
+ ulong pool_lock_falgs = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ unf_lport = ((struct unf_lport *)lport)->root_lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) hot pool is NULL",
+ unf_lport->port_id);
+
+ continue;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
+
+ /* 1. Clear INI (session) IO: INI Mode */
+ unf_ini_busy_io_xchg_abort(hot_pool, rport, sid, did, extra_io_state);
+
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
+ }
+}
+
+void unf_xchg_mgr_sfs_xchg_abort(void *lport, void *rport, u32 sid, u32 did)
+{
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *xchg_node = NULL;
+ struct list_head *next_xchg_node = NULL;
+ struct unf_xchg *xchg = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong pool_lock_falgs = 0;
+ ulong xchg_lock_flags = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_lport = ((struct unf_lport *)lport)->root_lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
+ if (!hot_pool) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT,
+ UNF_MAJOR, "Port(0x%x) Hot Pool is NULL.",
+ unf_lport->port_id);
+
+ continue;
+ }
+
+ unf_rport = (struct unf_rport *)rport;
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
+
+ /* Clear the SFS exchange of the corresponding connection */
+ list_for_each_safe(xchg_node, next_xchg_node, &hot_pool->sfs_busylist) {
+ xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_lock_flags);
+ if (did == xchg->did && sid == xchg->sid &&
+ unf_rport == xchg->rport && (atomic_read(&xchg->ref_cnt) > 0)) {
+ xchg->io_state |= TGT_IO_STATE_ABORT;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Abort SFS:0x%p---0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----%llu.",
+ xchg, (u32)xchg->hotpooltag, (u32)xchg->xchg_type,
+ (u32)xchg->oxid, (u32)xchg->rxid, (u32)xchg->sid,
+ (u32)xchg->did, (u32)xchg->io_state,
+ atomic_read(&xchg->ref_cnt), xchg->alloc_jif);
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_lock_flags);
+ }
+
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
+ }
+}
+
+static void unf_fc_wait_abts_complete(struct unf_lport *lport, struct unf_xchg *xchg)
+{
+ struct unf_lport *unf_lport = lport;
+ struct unf_scsi_cmnd scsi_cmnd = {0};
+ ulong flag = 0;
+ u32 time_out_value = 2000;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ u32 io_result;
+
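+	/*
+	 * Wait up to ~2s on task_sema for the ABTS/marker completion: on
+	 * success the command is completed with DID_BUS_BUSY so the upper
+	 * layer retries it; on timeout or failure the UPABORT flag is
+	 * cleared and the ABTS timer cancelled.
+	 */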
+ scsi_cmnd.scsi_id = xchg->scsi_cmnd_info.scsi_id;
+ scsi_cmnd.upper_cmnd = xchg->scsi_cmnd_info.scsi_cmnd;
+ scsi_cmnd.done = xchg->scsi_cmnd_info.done;
+ scsi_image_table = &unf_lport->rport_scsi_table;
+
+ if (down_timeout(&xchg->task_sema, (s64)msecs_to_jiffies(time_out_value))) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) recv abts marker timeout,Exch(0x%p) OX_ID(0x%x) RX_ID(0x%x)",
+ unf_lport->port_id, xchg, xchg->oxid, xchg->rxid);
+		goto ABTS_FAILED;
+ }
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ if (xchg->ucode_abts_state == UNF_IO_SUCCESS ||
+ xchg->scsi_cmnd_info.result == UNF_IO_ABORT_PORT_REMOVING) {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) Send ABTS succeed and recv marker Exch(0x%p) OX_ID(0x%x) RX_ID(0x%x) marker status(0x%x)",
+ unf_lport->port_id, xchg, xchg->oxid, xchg->rxid,
+ xchg->ucode_abts_state);
+ io_result = DID_BUS_BUSY;
+ UNF_IO_RESULT_CNT(scsi_image_table, scsi_cmnd.scsi_id, io_result);
+ unf_complete_cmnd(&scsi_cmnd, io_result << UNF_SHIFT_16);
+ return;
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send ABTS failed. Exch(0x%p) hot_tag(0x%x) ret(0x%x) xchg->io_state (0x%x)",
+ unf_lport->port_id, xchg, xchg->hotpooltag,
+ xchg->scsi_cmnd_info.result, xchg->io_state);
+	goto ABTS_FAILED;
+
+ABTS_FAILED:
+ unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ xchg->io_state &= ~INI_IO_STATE_UPABORT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+}
+
+void unf_fc_abort_time_out_cmnd(struct unf_lport *lport, struct unf_xchg *xchg)
+{
+ struct unf_lport *unf_lport = lport;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ if (xchg->io_state & INI_IO_STATE_UPABORT) {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+			     "LPort(0x%x) exchange(0x%p) OX_ID(0x%x), RX_ID(0x%x) Cmdsn(0x%lx) has been aborted.",
+ unf_lport->port_id, xchg, xchg->oxid,
+ xchg->rxid, (ulong)xchg->cmnd_sn);
+ return;
+ }
+ xchg->io_state |= INI_IO_STATE_UPABORT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_KEVENT,
+ "LPort(0x%x) exchg(0x%p) OX_ID(0x%x) RX_ID(0x%x) Cmdsn(0x%lx) timeout abort it",
+ unf_lport->port_id, xchg, xchg->oxid, xchg->rxid, (ulong)xchg->cmnd_sn);
+
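+	/*
+	 * Arm the ABTS response timer before issuing ABTS so a lost response
+	 * is still handled by the exchange timeout path; if the send fails,
+	 * the timer is cancelled and UPABORT is cleared again.
+	 */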
+ unf_lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg,
+ (ulong)UNF_WAIT_ABTS_RSP_TIMEOUT, UNF_TIMER_TYPE_INI_ABTS);
+
+ sema_init(&xchg->task_sema, 0);
+
+ if (unf_send_abts(unf_lport, xchg) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+			     "LPort(0x%x) send ABTS failed. Exchange OX_ID(0x%x), RX_ID(0x%x).",
+ unf_lport->port_id, xchg->oxid, xchg->rxid);
+ unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ xchg->io_state &= ~INI_IO_STATE_UPABORT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+ return;
+ }
+ unf_fc_wait_abts_complete(unf_lport, xchg);
+}
+
+static void unf_fc_ini_io_rec_wait_time_out(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ ulong time_out = 0;
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x) Exch(0x%p) Rec timeout exchange OX_ID(0x%x) RX_ID(0x%x) state(0x%x)",
+ lport->port_id, rport->nport_id, xchg, xchg->oxid,
+ xchg->rxid, xchg->io_state);
+
+ if (xchg->rport_bind_jifs == rport->rport_alloc_jifs) {
+ unf_send_rec(lport, rport, xchg);
+
+ if (xchg->scsi_cmnd_info.abort_time_out > 0) {
+ time_out = (xchg->scsi_cmnd_info.abort_time_out > UNF_REC_TOV) ?
+ (xchg->scsi_cmnd_info.abort_time_out - UNF_REC_TOV) : 0;
+ if (time_out > 0) {
+ lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg, time_out,
+ UNF_TIMER_TYPE_REQ_IO);
+ } else {
+ unf_fc_abort_time_out_cmnd(lport, xchg);
+ }
+ }
+ }
+}
+
+static void unf_fc_ini_send_abts_time_out(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ if (xchg->rport_bind_jifs == rport->rport_alloc_jifs &&
+ xchg->rport_bind_jifs != INVALID_VALUE64) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x) Exch(0x%p) first time to send abts timeout, retry again OX_ID(0x%x) RX_ID(0x%x) HotTag(0x%x) state(0x%x)",
+ lport->port_id, rport->nport_id, xchg, xchg->oxid,
+ xchg->rxid, xchg->hotpooltag, xchg->io_state);
+
+ lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg,
+ (ulong)UNF_WAIT_ABTS_RSP_TIMEOUT, UNF_TIMER_TYPE_INI_ABTS);
+
+ if (unf_send_abts(lport, xchg) != RETURN_OK) {
+ lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+
+ unf_abts_timeout_recovery_default(rport, xchg);
+
+ unf_cm_free_xchg(lport, xchg);
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x) Exch(0x%p) rport is invalid, exchg rport jiff(0x%llx 0x%llx), free exchange OX_ID(0x%x) RX_ID(0x%x) state(0x%x)",
+ lport->port_id, rport->nport_id, xchg,
+ xchg->rport_bind_jifs, rport->rport_alloc_jifs,
+ xchg->oxid, xchg->rxid, xchg->io_state);
+
+ unf_cm_free_xchg(lport, xchg);
+ }
+}
+
+void unf_fc_ini_io_xchg_time_out(struct work_struct *work)
+{
+ struct unf_xchg *xchg = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong flags = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 port_valid_flag = 0;
+
+ xchg = container_of(work, struct unf_xchg, timeout_work.work);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ ret = unf_xchg_ref_inc(xchg, INI_IO_TIMEOUT);
+ FC_CHECK_RETURN_VOID(ret == RETURN_OK);
+
+ unf_lport = xchg->lport;
+ unf_rport = xchg->rport;
+
+ port_valid_flag = (!unf_lport) || (!unf_rport);
+ if (port_valid_flag) {
+ unf_xchg_ref_dec(xchg, INI_IO_TIMEOUT);
+ unf_xchg_ref_dec(xchg, INI_IO_TIMEOUT);
+ return;
+ }
+
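+	/*
+	 * Timeout state machine (evaluated under xchg_state_lock):
+	 * - RRQ send failed earlier: free the exchange
+	 * - second ABTS timed out with no response: default recovery, free
+	 * - first ABTS timed out with no response: retry ABTS once
+	 * - I/O done and ABTS response received: send RRQ to recycle OX_ID
+	 * - REC wait expired: re-issue REC or fall back to abort
+	 * - otherwise: abort the timed-out command with ABTS
+	 */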
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ /* 1. for Send RRQ failed Timer timeout */
+ if (INI_IO_STATE_RRQSEND_ERR & xchg->io_state) {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+			     "[warn]LPort(0x%x) RPort(0x%x) Exch(0x%p) waited long enough after RRQ send failure OX_ID(0x%x) RX_ID(0x%x) state(0x%x)",
+ unf_lport->port_id, unf_rport->nport_id, xchg,
+ xchg->oxid, xchg->rxid, xchg->io_state);
+ unf_notify_chip_free_xid(xchg);
+ unf_cm_free_xchg(unf_lport, xchg);
+ }
+ /* Second ABTS timeout and enter LOGO process */
+ else if ((INI_IO_STATE_ABORT_TIMEOUT & xchg->io_state) &&
+ (!(ABTS_RESPONSE_RECEIVED & xchg->abts_state))) {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) RPort(0x%x) Exch(0x%p) waited long enough for the second ABTS response OX_ID(0x%x) RX_ID(0x%x) state(0x%x)",
+ unf_lport->port_id, unf_rport->nport_id, xchg,
+ xchg->oxid, xchg->rxid, xchg->io_state);
+ unf_abts_timeout_recovery_default(unf_rport, xchg);
+ unf_cm_free_xchg(unf_lport, xchg);
+ }
+ /* First time to send ABTS, timeout and retry to send ABTS again */
+ else if ((INI_IO_STATE_UPABORT & xchg->io_state) &&
+ (!(ABTS_RESPONSE_RECEIVED & xchg->abts_state))) {
+ xchg->io_state |= INI_IO_STATE_ABORT_TIMEOUT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ unf_fc_ini_send_abts_time_out(unf_lport, unf_rport, xchg);
+ }
+ /* 3. IO_DONE */
+ else if ((INI_IO_STATE_DONE & xchg->io_state) &&
+ (ABTS_RESPONSE_RECEIVED & xchg->abts_state)) {
+ /*
+ * for IO_DONE:
+ * 1. INI ABTS first timer time out
+ * 2. INI RCVD ABTS Response
+ * 3. Normal case for I/O Done
+ */
+ /* Send ABTS & RCVD RSP & no timeout */
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ if (unf_send_rrq(unf_lport, unf_rport, xchg) == RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]LPort(0x%x) send RRQ succeed to RPort(0x%x) Exch(0x%p) OX_ID(0x%x) RX_ID(0x%x) state(0x%x)",
+ unf_lport->port_id, unf_rport->nport_id, xchg,
+ xchg->oxid, xchg->rxid, xchg->io_state);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]LPort(0x%x) can't send RRQ to RPort(0x%x) Exch(0x%p) OX_ID(0x%x) RX_ID(0x%x) state(0x%x)",
+ unf_lport->port_id, unf_rport->nport_id, xchg,
+ xchg->oxid, xchg->rxid, xchg->io_state);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ xchg->io_state |= INI_IO_STATE_RRQSEND_ERR;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ unf_lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg,
+ (ulong)UNF_WRITE_RRQ_SENDERR_INTERVAL, UNF_TIMER_TYPE_INI_IO);
+ }
+ } else if (INI_IO_STATE_REC_TIMEOUT_WAIT & xchg->io_state) {
+ xchg->io_state &= ~INI_IO_STATE_REC_TIMEOUT_WAIT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ unf_fc_ini_io_rec_wait_time_out(unf_lport, unf_rport, xchg);
+ } else {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ unf_fc_abort_time_out_cmnd(unf_lport, xchg);
+ }
+
+ unf_xchg_ref_dec(xchg, INI_IO_TIMEOUT);
+ unf_xchg_ref_dec(xchg, INI_IO_TIMEOUT);
+}
+
+void unf_sfs_xchg_time_out(struct work_struct *work)
+{
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(work);
+ xchg = container_of(work, struct unf_xchg, timeout_work.work);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ ret = unf_xchg_ref_inc(xchg, SFS_TIMEOUT);
+ FC_CHECK_RETURN_VOID(ret == RETURN_OK);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ unf_lport = xchg->lport;
+ unf_rport = xchg->rport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ unf_xchg_ref_dec(xchg, SFS_TIMEOUT);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]SFS Exch(%p) Cmnd(0x%x) IO Exch(0x%p) Sid_Did(0x%x:0x%x) HotTag(0x%x) State(0x%x) Timeout.",
+ xchg, xchg->cmnd_code, xchg->io_xchg, xchg->sid, xchg->did,
+ xchg->hotpooltag, xchg->io_state);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ if ((xchg->io_state & TGT_IO_STATE_ABORT) &&
+ xchg->cmnd_code != ELS_RRQ && xchg->cmnd_code != ELS_LOGO) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "SFS Exch(0x%p) Cmnd(0x%x) Hot Pool Tag(0x%x) timeout, but aborted, no need to handle.",
+ xchg, xchg->cmnd_code, xchg->hotpooltag);
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ unf_xchg_ref_dec(xchg, SFS_TIMEOUT);
+ unf_xchg_ref_dec(xchg, SFS_TIMEOUT);
+
+ return;
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ /* The sfs times out. If the sfs is ELS reply,
+ * go to UNF_RPortErrorRecovery/unf_lport_error_recovery.
+ * Otherwise, go to the corresponding obCallback.
+ */
+ if (UNF_XCHG_IS_ELS_REPLY(xchg) && unf_rport) {
+ if (unf_rport->nport_id >= UNF_FC_FID_DOM_MGR)
+ unf_lport_error_recovery(unf_lport);
+ else
+ unf_rport_error_recovery(unf_rport);
+
+ } else if (xchg->ob_callback) {
+ xchg->ob_callback(xchg);
+ } else {
+ /* Do nothing */
+ }
+ unf_notify_chip_free_xid(xchg);
+ unf_xchg_ref_dec(xchg, SFS_TIMEOUT);
+ unf_xchg_ref_dec(xchg, SFS_TIMEOUT);
+}
diff --git a/drivers/scsi/spfc/common/unf_exchg_abort.h b/drivers/scsi/spfc/common/unf_exchg_abort.h
new file mode 100644
index 000000000000..b55f4eea2cce
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_exchg_abort.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_FCEXCH_ABORT_H
+#define UNF_FCEXCH_ABORT_H
+
+#include "unf_type.h"
+#include "unf_exchg.h"
+
+#define UNF_RAW_LUN_ID_MASK 0x000000000000ffff
+
+void unf_xchg_abort_by_lun(void *lport, void *rport, u64 lun_id, void *tm_xchg,
+ bool abort_all_lun_flag);
+void unf_xchg_abort_by_session(void *lport, void *rport);
+void unf_xchg_mgr_io_xchg_abort(void *lport, void *rport, u32 sid, u32 did,
+ u32 extra_io_state);
+void unf_xchg_mgr_sfs_xchg_abort(void *lport, void *rport, u32 sid, u32 did);
+void unf_xchg_abort_all_xchg(void *lport, u32 xchg_type, bool clean);
+void unf_fc_abort_time_out_cmnd(struct unf_lport *lport, struct unf_xchg *xchg);
+void unf_fc_ini_io_xchg_time_out(struct work_struct *work);
+void unf_sfs_xchg_time_out(struct work_struct *work);
+void unf_xchg_up_abort_io_by_scsi_id(void *lport, u32 scsi_id);
+#endif
diff --git a/drivers/scsi/spfc/common/unf_fcstruct.h b/drivers/scsi/spfc/common/unf_fcstruct.h
new file mode 100644
index 000000000000..d6eb8592994b
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_fcstruct.h
@@ -0,0 +1,459 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_FCSTRUCT_H
+#define UNF_FCSTRUCT_H
+
+#include "unf_type.h"
+#include "unf_scsi_common.h"
+
+#define FC_RCTL_BLS 0x80000000
+
+/*
+ * * R_CTL Basic Link Data defines
+ */
+
+#define FC_RCTL_BLS_ACC (FC_RCTL_BLS | 0x04000000)
+#define FC_RCTL_BLS_RJT (FC_RCTL_BLS | 0x05000000)
+
+/*
+ * * BA_RJT reason code defines
+ */
+#define FCXLS_BA_RJT_LOGICAL_ERROR 0x00030000
+
+/*
+ * * BA_RJT code explanation
+ */
+
+#define FCXLS_LS_RJT_INVALID_OXID_RXID 0x00001700
+
+/*
+ * * ELS ACC
+ */
+struct unf_els_acc {
+ struct unf_fc_head frame_hdr;
+ u32 cmnd;
+};
+
+/*
+ * * ELS RJT
+ */
+struct unf_els_rjt {
+ struct unf_fc_head frame_hdr;
+ u32 cmnd;
+ u32 reason_code;
+};
+
+/*
+ * * FLOGI payload,
+ * * FC-LS-2 FLOGI, PLOGI, FDISC or LS_ACC Payload
+ */
+struct unf_flogi_fdisc_payload {
+ u32 cmnd;
+ struct unf_fabric_parm fabric_parms;
+};
+
+/*
+ * * Flogi and Flogi accept frames. They are the same structure
+ */
+struct unf_flogi_fdisc_acc {
+ struct unf_fc_head frame_hdr;
+ struct unf_flogi_fdisc_payload flogi_payload;
+};
+
+/*
+ * * Fdisc and Fdisc accept frames. They are the same structure
+ */
+
+struct unf_fdisc_acc {
+ struct unf_fc_head frame_hdr;
+ struct unf_flogi_fdisc_payload fdisc_payload;
+};
+
+/*
+ * * PLOGI payload
+ */
+struct unf_plogi_payload {
+ u32 cmnd;
+ struct unf_lgn_parm stparms;
+};
+
+/*
+ *Plogi, Plogi accept, Pdisc and Pdisc accept frames. They are all the same
+ *structure.
+ */
+struct unf_plogi_pdisc {
+ struct unf_fc_head frame_hdr;
+ struct unf_plogi_payload payload;
+};
+
+/*
+ * * LOGO logout link service requests invalidation of service parameters and
+ * * port name.
+ * * see FC-PH 4.3 Section 21.4.8
+ */
+struct unf_logo_payload {
+ u32 cmnd;
+ u32 nport_id;
+ u32 high_port_name;
+ u32 low_port_name;
+};
+
+/*
+ * * payload to hold LOGO command
+ */
+struct unf_logo {
+ struct unf_fc_head frame_hdr;
+ struct unf_logo_payload payload;
+};
+
+/*
+ * * payload for ECHO command, refer to FC-LS-2 4.2.4
+ */
+struct unf_echo_payload {
+ u32 cmnd;
+#define UNF_FC_ECHO_PAYLOAD_LENGTH 255 /* Length in words */
+ u32 data[UNF_FC_ECHO_PAYLOAD_LENGTH];
+};
+
+struct unf_echo {
+ struct unf_fc_head frame_hdr;
+ struct unf_echo_payload *echo_pld;
+ dma_addr_t phy_echo_addr;
+};
+
+#define UNF_PRLI_SIRT_EXTRA_SIZE 12
+
+/*
+ * * payload for PRLI and PRLO
+ */
+struct unf_prli_payload {
+ u32 cmnd;
+#define UNF_FC_PRLI_PAYLOAD_LENGTH 7 /* Length in words */
+ u32 parms[UNF_FC_PRLI_PAYLOAD_LENGTH];
+};
+
+/*
+ * * FCHS structure with payload
+ */
+struct unf_prli_prlo {
+ struct unf_fc_head frame_hdr;
+ struct unf_prli_payload payload;
+};
+
+struct unf_adisc_payload {
+ u32 cmnd;
+ u32 hard_address;
+ u32 high_port_name;
+ u32 low_port_name;
+ u32 high_node_name;
+ u32 low_node_name;
+ u32 nport_id;
+};
+
+/*
+ * * FCHS structure with payload
+ */
+struct unf_adisc {
+ struct unf_fc_head frame_hdr; /* FCHS structure */
+	/* Payload data containing ADISC info */
+	struct unf_adisc_payload adisc_payl;
+};
+
+/*
+ * * RLS payload
+ */
+struct unf_rls_payload {
+ u32 cmnd;
+	u32 nport_id; /* in little endian format */
+};
+
+/*
+ * * RLS
+ */
+struct unf_rls {
+ struct unf_fc_head frame_hdr; /* FCHS structure */
+ struct unf_rls_payload rls; /* payload data containing the RLS info */
+};
+
+/*
+ * * RLS accept payload
+ */
+struct unf_rls_acc_payload {
+ u32 cmnd;
+ u32 link_failure_count;
+ u32 loss_of_sync_count;
+ u32 loss_of_signal_count;
+ u32 primitive_seq_count;
+ u32 invalid_trans_word_count;
+ u32 invalid_crc_count;
+};
+
+/*
+ * * RLS accept
+ */
+struct unf_rls_acc {
+ struct unf_fc_head frame_hdr; /* FCHS structure */
+	/* payload data containing the RLS ACC info */
+	struct unf_rls_acc_payload rls;
+};
+
+/*
+ * * FCHS structure with payload
+ */
+struct unf_rrq {
+ struct unf_fc_head frame_hdr;
+ u32 cmnd;
+ u32 sid;
+ u32 oxid_rxid;
+};
+
+#define UNF_SCR_PAYLOAD_CNT 2
+struct unf_scr {
+ struct unf_fc_head frame_hdr;
+ u32 payload[UNF_SCR_PAYLOAD_CNT];
+};
+
+struct unf_ctiu_prem {
+ u32 rev_inid;
+ u32 gstype_gssub_options;
+ u32 cmnd_rsp_size;
+ u32 frag_reason_exp_vend;
+};
+
+#define UNF_FC4TYPE_CNT 8
+struct unf_rftid {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+ u32 nport_id;
+ u32 fc4_types[UNF_FC4TYPE_CNT];
+};
+
+struct unf_rffid {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+ u32 nport_id;
+ u32 fc4_feature;
+};
+
+struct unf_rffid_rsp {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+};
+
+struct unf_gffid {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+ u32 nport_id;
+};
+
+struct unf_gffid_rsp {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+ u32 fc4_feature[32];
+};
+
+struct unf_gnnid {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+ u32 nport_id;
+};
+
+struct unf_gnnid_rsp {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+ u32 node_name[2];
+};
+
+struct unf_gpnid {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+ u32 nport_id;
+};
+
+struct unf_gpnid_rsp {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+ u32 port_name[2];
+};
+
+struct unf_rft_rsp {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+};
+
+struct unf_ls_rjt_pld {
+ u32 srr_op; /* 01000000h */
+ u8 vandor;
+ u8 reason_exp;
+ u8 reason;
+ u8 reserved;
+};
+
+struct unf_ls_rjt {
+ struct unf_fc_head frame_hdr;
+ struct unf_ls_rjt_pld pld;
+};
+
+struct unf_rec_pld {
+ u32 rec_cmnd;
+ u32 xchg_org_sid; /* bit0-bit23 */
+ u16 rx_id;
+ u16 ox_id;
+};
+
+struct unf_rec {
+ struct unf_fc_head frame_hdr;
+ struct unf_rec_pld rec_pld;
+};
+
+struct unf_rec_acc_pld {
+ u32 cmnd;
+ u16 rx_id;
+ u16 ox_id;
+ u32 org_addr_id; /* bit0-bit23 */
+ u32 rsp_addr_id; /* bit0-bit23 */
+};
+
+struct unf_rec_acc {
+ struct unf_fc_head frame_hdr;
+ struct unf_rec_acc_pld payload;
+};
+
+struct unf_gid {
+ struct unf_ctiu_prem ctiu_pream;
+ u32 scope_type;
+};
+
+struct unf_gid_acc {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+};
+
+#define UNF_LOOPMAP_COUNT 128
+struct unf_loop_init {
+ struct unf_fc_head frame_hdr;
+ u32 cmnd;
+#define UNF_FC_ALPA_BIT_MAP_SIZE 4
+ u32 alpha_bit_map[UNF_FC_ALPA_BIT_MAP_SIZE];
+};
+
+struct unf_loop_map {
+ struct unf_fc_head frame_hdr;
+ u32 cmnd;
+ u32 loop_map[32];
+};
+
+struct unf_ctiu_rjt {
+ struct unf_fc_head frame_hdr;
+ struct unf_ctiu_prem ctiu_pream;
+};
+
+struct unf_gid_acc_pld {
+ struct unf_ctiu_prem ctiu_pream;
+
+ u32 gid_port_id[UNF_GID_PORT_CNT];
+};
+
+struct unf_gid_rsp {
+ struct unf_gid_acc_pld *gid_acc_pld;
+};
+
+struct unf_gid_req_rsp {
+ struct unf_fc_head frame_hdr;
+ struct unf_gid gid_req;
+ struct unf_gid_rsp gid_rsp;
+};
+
+/* FC-LS-2 Table 31 RSCN Payload */
+struct unf_rscn_port_id_page {
+ u8 port_id_port;
+ u8 port_id_area;
+ u8 port_id_domain;
+
+ u8 addr_format : 2;
+ u8 event_qualifier : 4;
+ u8 reserved : 2;
+};
+
+struct unf_rscn_pld {
+ u32 cmnd;
+ struct unf_rscn_port_id_page port_id_page[UNF_RSCN_PAGE_SUM];
+};
+
+struct unf_rscn {
+ struct unf_fc_head frame_hdr;
+ struct unf_rscn_pld *rscn_pld;
+};
+
+union unf_sfs_u {
+ struct {
+ struct unf_fc_head frame_head;
+ u8 data[0];
+ } sfs_common;
+ struct unf_els_acc els_acc;
+ struct unf_els_rjt els_rjt;
+ struct unf_plogi_pdisc plogi;
+ struct unf_logo logo;
+ struct unf_echo echo;
+ struct unf_echo echo_acc;
+ struct unf_prli_prlo prli;
+ struct unf_prli_prlo prlo;
+ struct unf_rls rls;
+ struct unf_rls_acc rls_acc;
+ struct unf_plogi_pdisc pdisc;
+ struct unf_adisc adisc;
+ struct unf_rrq rrq;
+ struct unf_flogi_fdisc_acc flogi;
+ struct unf_fdisc_acc fdisc;
+ struct unf_scr scr;
+ struct unf_rec rec;
+ struct unf_rec_acc rec_acc;
+ struct unf_ls_rjt ls_rjt;
+ struct unf_rscn rscn;
+ struct unf_gid_req_rsp get_id;
+ struct unf_rftid rft_id;
+ struct unf_rft_rsp rft_id_rsp;
+ struct unf_rffid rff_id;
+ struct unf_rffid_rsp rff_id_rsp;
+ struct unf_gffid gff_id;
+ struct unf_gffid_rsp gff_id_rsp;
+ struct unf_gnnid gnn_id;
+ struct unf_gnnid_rsp gnn_id_rsp;
+ struct unf_gpnid gpn_id;
+ struct unf_gpnid_rsp gpn_id_rsp;
+ struct unf_plogi_pdisc plogi_acc;
+ struct unf_plogi_pdisc pdisc_acc;
+ struct unf_adisc adisc_acc;
+ struct unf_prli_prlo prli_acc;
+ struct unf_prli_prlo prlo_acc;
+ struct unf_flogi_fdisc_acc flogi_acc;
+ struct unf_fdisc_acc fdisc_acc;
+ struct unf_loop_init lpi;
+ struct unf_loop_map loop_map;
+ struct unf_ctiu_rjt ctiu_rjt;
+};
+
+struct unf_sfs_entry {
+ union unf_sfs_u *fc_sfs_entry_ptr; /* Virtual addr of SFS buffer */
+ u64 sfs_buff_phy_addr; /* Physical addr of SFS buffer */
+ u32 sfs_buff_len; /* Length of bytes in SFS buffer */
+ u32 cur_offset;
+};
+
+struct unf_fcp_rsp_iu_entry {
+ u8 *fcp_rsp_iu;
+ u32 fcp_sense_len;
+};
+
+struct unf_rjt_info {
+ u32 els_cmnd_code;
+ u32 reason_code;
+ u32 reason_explanation;
+ u8 class_mode;
+ u8 ucrsvd[3];
+};
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_gs.c b/drivers/scsi/spfc/common/unf_gs.c
new file mode 100644
index 000000000000..cb5fc1a5d246
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_gs.c
@@ -0,0 +1,2521 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_gs.h"
+#include "unf_log.h"
+#include "unf_exchg.h"
+#include "unf_rport.h"
+#include "unf_service.h"
+#include "unf_portman.h"
+#include "unf_ls.h"
+
+static void unf_gpn_id_callback(void *lport, void *sns_port, void *xchg);
+static void unf_gpn_id_ob_callback(struct unf_xchg *xchg);
+static void unf_gnn_id_ob_callback(struct unf_xchg *xchg);
+static void unf_scr_callback(void *lport, void *rport, void *xchg);
+static void unf_scr_ob_callback(struct unf_xchg *xchg);
+static void unf_gff_id_ob_callback(struct unf_xchg *xchg);
+static void unf_gff_id_callback(void *lport, void *sns_port, void *xchg);
+static void unf_gnn_id_callback(void *lport, void *sns_port, void *xchg);
+static void unf_gid_ft_ob_callback(struct unf_xchg *xchg);
+static void unf_gid_ft_callback(void *lport, void *rport, void *xchg);
+static void unf_gid_pt_ob_callback(struct unf_xchg *xchg);
+static void unf_gid_pt_callback(void *lport, void *rport, void *xchg);
+static void unf_rft_id_ob_callback(struct unf_xchg *xchg);
+static void unf_rft_id_callback(void *lport, void *rport, void *xchg);
+static void unf_rff_id_callback(void *lport, void *rport, void *xchg);
+static void unf_rff_id_ob_callback(struct unf_xchg *xchg);
+
+#define UNF_GET_DOMAIN_ID(x) (((x) & 0xFF0000) >> 16)
+#define UNF_GET_AREA_ID(x) (((x) & 0x00FF00) >> 8)
+
+#define UNF_GID_LAST_PORT_ID 0x80
+#define UNF_GID_CONTROL(nport_id) ((nport_id) >> 24)
+#define UNF_GET_PORT_OPTIONS(fc_4feature) ((fc_4feature) >> 20)
+
+#define UNF_SERVICE_GET_NPORTID_FORM_GID_PAGE(port_id_page) \
+ (((u32)(port_id_page)->port_id_domain << 16) | \
+ ((u32)(port_id_page)->port_id_area << 8) | \
+ ((u32)(port_id_page)->port_id_port))
+
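+/*
+ * A CT reject of "unable to perform" whose explanation is "port ID, port
+ * name or node name not registered" is treated as a benign reject rather
+ * than a hard failure.
+ */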
+#define UNF_GNN_GFF_ID_RJT_REASON(rjt_reason) \
+ ((UNF_CTIU_RJT_UNABLE_PERFORM == \
+ ((rjt_reason) & UNF_CTIU_RJT_MASK)) && \
+ ((UNF_CTIU_RJT_EXP_PORTID_NO_REG == \
+ ((rjt_reason) & UNF_CTIU_RJT_EXP_MASK)) || \
+ (UNF_CTIU_RJT_EXP_PORTNAME_NO_REG == \
+ ((rjt_reason) & UNF_CTIU_RJT_EXP_MASK)) || \
+ (UNF_CTIU_RJT_EXP_NODENAME_NO_REG == \
+ ((rjt_reason) & UNF_CTIU_RJT_EXP_MASK))))
+
+u32 unf_send_scr(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /* after RCVD RFF_ID ACC */
+ struct unf_scr *scr = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, NULL, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for SCR",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = ELS_SCR;
+
+ xchg->callback = unf_scr_callback;
+ xchg->ob_callback = unf_scr_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ scr = &fc_entry->scr;
+ memset(scr, 0, sizeof(struct unf_scr));
+ scr->payload[ARRAY_INDEX_0] = (UNF_GS_CMND_SCR); /* SCR is 0x62 */
+ scr->payload[ARRAY_INDEX_1] = (UNF_FABRIC_FULL_REG); /* Full registration */
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: SCR send %s. Port(0x%x_0x%x)--->RPort(0x%x) with hottag(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ lport->nport_id, rport->nport_id, xchg->hotpooltag);
+
+ return ret;
+}
+
+static void unf_fill_gff_id_pld(struct unf_gffid *gff_id, u32 nport_id)
+{
+ FC_CHECK_RETURN_VOID(gff_id);
+
+ gff_id->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
+ gff_id->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
+ gff_id->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_GFF_ID);
+ gff_id->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
+ gff_id->nport_id = nport_id;
+}
+
+static void unf_ctpass_thru_callback(void *lport, void *rport, void *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_gid_acc_pld *gid_acc_pld = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ union unf_sfs_u *sfs = NULL;
+ u32 cmnd_rsp_size = 0;
+
+ struct send_com_trans_out *out_send = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_xchg = (struct unf_xchg *)xchg;
+ sfs = unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+
+ gid_acc_pld = sfs->get_id.gid_rsp.gid_acc_pld;
+ if (!gid_acc_pld) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) CT PassThru response payload is NULL",
+ unf_lport->port_id);
+
+ return;
+ }
+
+ out_send = (struct send_com_trans_out *)unf_xchg->upper_ct;
+
+ cmnd_rsp_size = (gid_acc_pld->ctiu_pream.cmnd_rsp_size);
+ if (UNF_CT_IU_ACCEPT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
+ out_send->hba_status = 0; /* HBA_STATUS_OK 0 */
+ out_send->total_resp_buffer_cnt = unf_xchg->fcp_sfs_union.sfs_entry.cur_offset;
+ out_send->actual_resp_buffer_cnt = unf_xchg->fcp_sfs_union.sfs_entry.cur_offset;
+ unf_cpu_to_big_end(out_send->resp_buffer, (u32)out_send->total_resp_buffer_cnt);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+			     "[info]LOGIN: Port(0x%x_0x%x) CT PassThru response received, len(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ out_send->total_resp_buffer_cnt);
+ } else if (UNF_CT_IU_REJECT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
+ out_send->hba_status = 13; /* HBA_STATUS_ERROR_ELS_REJECT 13 */
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) CT PassThru was rejected",
+ unf_lport->port_id, unf_lport->nport_id);
+ } else {
+ out_send->hba_status = 1; /* HBA_STATUS_ERROR 1 */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) CT PassThru was UNKNOWN",
+ unf_lport->port_id, unf_lport->nport_id);
+ }
+
+ up(&unf_lport->wmi_task_sema);
+}
+
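+/*
+ * unf_send_ctpass_thru: forward a caller supplied CT IU to the fabric.
+ * The target R_Port is chosen from the CT command code (NS_GIEL goes to
+ * the management server, NS_GA_NXT to the directory server). The response
+ * lands directly in the caller's resp_buffer (gid_acc_pld points at it);
+ * unf_ctpass_thru_callback byte-swaps it and releases lport->wmi_task_sema.
+ */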
+u32 unf_send_ctpass_thru(struct unf_lport *lport, void *buffer, u32 bufflen)
+{
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_rport *sns_port = NULL;
+ struct send_com_trans_in *in_send = (struct send_com_trans_in *)buffer;
+ struct send_com_trans_out *out_send =
+ (struct send_com_trans_out *)buffer;
+ struct unf_ctiu_prem *ctiu_pream = NULL;
+ struct unf_gid *gs_pld = NULL;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(buffer, UNF_RETURN_ERROR);
+
+ ctiu_pream = (struct unf_ctiu_prem *)in_send->req_buffer;
+ unf_cpu_to_big_end(ctiu_pream, sizeof(struct unf_gid));
+
+ if (ctiu_pream->cmnd_rsp_size >> UNF_SHIFT_16 == NS_GIEL) {
+ sns_port = unf_get_rport_by_nport_id(lport, UNF_FC_FID_MGMT_SERV);
+ if (!sns_port) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) can't find SNS port",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ } else if (ctiu_pream->cmnd_rsp_size >> UNF_SHIFT_16 == NS_GA_NXT) {
+ sns_port = unf_get_rport_by_nport_id(lport, UNF_FC_FID_DIR_SERV);
+ if (!sns_port) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) can't find SNS port",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]%s: unsupported cmnd(0x%x)", __func__,
+ ctiu_pream->cmnd_rsp_size >> UNF_SHIFT_16);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, sns_port->nport_id, sns_port, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for GFF_ID",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ xchg->cmnd_code = ctiu_pream->cmnd_rsp_size >> UNF_SHIFT_16;
+ xchg->upper_ct = buffer;
+ xchg->ob_callback = NULL;
+ xchg->callback = unf_ctpass_thru_callback;
+ xchg->oxid = xchg->hotpooltag;
+ unf_fill_package(&pkg, xchg, sns_port);
+ pkg.type = UNF_PKG_GS_REQ;
+ xchg->fcp_sfs_union.sfs_entry.sfs_buff_len = bufflen;
+ gs_pld = &fc_entry->get_id.gid_req; /* GID req payload */
+ memset(gs_pld, 0, sizeof(struct unf_gid));
+ memcpy(gs_pld, (struct unf_gid *)in_send->req_buffer, sizeof(struct unf_gid));
+ fc_entry->get_id.gid_rsp.gid_acc_pld = (struct unf_gid_acc_pld *)out_send->resp_buffer;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+
+ return ret;
+}
+
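+/*
+ * The GFF_ID/GNN_ID/GPN_ID senders below consume one discovery-control
+ * credit (disc.disc_thread_info.disc_contrl_size) once the GS request is
+ * successfully queued; the credit is given back (and the discovery thread
+ * woken) in the corresponding callback or ob_callback.
+ */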
+u32 unf_send_gff_id(struct unf_lport *lport, struct unf_rport *sns_port,
+ u32 nport_id)
+{
+ struct unf_gffid *gff_id = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ struct unf_frame_pkg pkg;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(sns_port, UNF_RETURN_ERROR);
+
+ if (unf_is_lport_valid(lport) != RETURN_OK)
+ /* Lport is invalid, no retry or handle required, return ok */
+ return RETURN_OK;
+
+ unf_lport = (struct unf_lport *)lport->root_lport;
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, sns_port->nport_id, sns_port, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for GFF_ID",
+ lport->port_id);
+
+ return unf_get_and_post_disc_event(lport, sns_port, nport_id, UNF_DISC_GET_FEATURE);
+ }
+
+ xchg->cmnd_code = NS_GFF_ID;
+ xchg->disc_portid = nport_id;
+
+ xchg->ob_callback = unf_gff_id_ob_callback;
+ xchg->callback = unf_gff_id_callback;
+
+ unf_fill_package(&pkg, xchg, sns_port);
+ pkg.type = UNF_PKG_GS_REQ;
+
+ gff_id = &fc_entry->gff_id;
+ memset(gff_id, 0, sizeof(struct unf_gffid));
+ unf_fill_gff_id_pld(gff_id, nport_id);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+ else
+ atomic_dec(&unf_lport->disc.disc_thread_info.disc_contrl_size);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: GFF_ID send %s. Port(0x%x)--->RPort(0x%x). Inquire RPort(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ sns_port->nport_id, nport_id);
+
+ return ret;
+}
+
+static void unf_fill_gnnid_pld(struct unf_gnnid *gnnid_pld, u32 nport_id)
+{
+ /* Inquiry R_Port node name from SW */
+ FC_CHECK_RETURN_VOID(gnnid_pld);
+
+ gnnid_pld->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
+ gnnid_pld->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
+ gnnid_pld->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_GNN_ID);
+ gnnid_pld->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
+
+ gnnid_pld->nport_id = nport_id;
+}
+
+u32 unf_send_gnn_id(struct unf_lport *lport, struct unf_rport *sns_port,
+ u32 nport_id)
+{
+ /* from DISC stop/re-login */
+ struct unf_gnnid *unf_gnnid = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(sns_port, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "Port(0x%x_0x%x) send gnnid to 0x%x.", lport->port_id,
+ lport->nport_id, nport_id);
+
+ if (unf_is_lport_valid(lport) != RETURN_OK)
+ /* Lport is invalid, no retry or handle required, return ok */
+ return RETURN_OK;
+
+ unf_lport = (struct unf_lport *)lport->root_lport;
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, sns_port->nport_id,
+ sns_port, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) exchange can't be NULL for GNN_ID",
+ lport->port_id);
+
+ return unf_get_and_post_disc_event(lport, sns_port, nport_id,
+ UNF_DISC_GET_NODE_NAME);
+ }
+
+ xchg->cmnd_code = NS_GNN_ID;
+ xchg->disc_portid = nport_id;
+
+ xchg->ob_callback = unf_gnn_id_ob_callback;
+ xchg->callback = unf_gnn_id_callback;
+
+ unf_fill_package(&pkg, xchg, sns_port);
+ pkg.type = UNF_PKG_GS_REQ;
+
+ unf_gnnid = &fc_entry->gnn_id; /* GNNID payload */
+ memset(unf_gnnid, 0, sizeof(struct unf_gnnid));
+ unf_fill_gnnid_pld(unf_gnnid, nport_id);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+ else
+ atomic_dec(&unf_lport->disc.disc_thread_info.disc_contrl_size);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: GNN_ID send %s. Port(0x%x_0x%x)--->RPort(0x%x) inquire Nportid(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ lport->nport_id, sns_port->nport_id, nport_id);
+
+ return ret;
+}
+
+static void unf_fill_gpnid_pld(struct unf_gpnid *gpnid_pld, u32 nport_id)
+{
+ FC_CHECK_RETURN_VOID(gpnid_pld);
+
+ gpnid_pld->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
+ gpnid_pld->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
+ gpnid_pld->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_GPN_ID);
+ gpnid_pld->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
+
+ /* Inquiry WWN from SW */
+ gpnid_pld->nport_id = nport_id;
+}
+
+u32 unf_send_gpn_id(struct unf_lport *lport, struct unf_rport *sns_port,
+ u32 nport_id)
+{
+ struct unf_gpnid *gpnid_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(sns_port, UNF_RETURN_ERROR);
+
+ if (unf_is_lport_valid(lport) != RETURN_OK)
+ /* Lport is invalid, no retry or handle required, return ok */
+ return RETURN_OK;
+
+ unf_lport = (struct unf_lport *)lport->root_lport;
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, sns_port->nport_id,
+ sns_port, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for GPN_ID",
+ lport->port_id);
+
+ return unf_get_and_post_disc_event(lport, sns_port, nport_id,
+ UNF_DISC_GET_PORT_NAME);
+ }
+
+ xchg->cmnd_code = NS_GPN_ID;
+ xchg->disc_portid = nport_id;
+
+ xchg->callback = unf_gpn_id_callback;
+ xchg->ob_callback = unf_gpn_id_ob_callback;
+
+ unf_fill_package(&pkg, xchg, sns_port);
+ pkg.type = UNF_PKG_GS_REQ;
+
+ gpnid_pld = &fc_entry->gpn_id;
+ memset(gpnid_pld, 0, sizeof(struct unf_gpnid));
+ unf_fill_gpnid_pld(gpnid_pld, nport_id);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+ else
+ atomic_dec(&unf_lport->disc.disc_thread_info.disc_contrl_size);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: GPN_ID send %s. Port(0x%x)--->RPort(0x%x). Inquire RPort(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ sns_port->nport_id, nport_id);
+
+ return ret;
+}
+
+static void unf_fill_gid_ft_pld(struct unf_gid *gid_pld)
+{
+ FC_CHECK_RETURN_VOID(gid_pld);
+
+ gid_pld->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
+ gid_pld->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
+ gid_pld->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_GID_FT);
+ gid_pld->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
+
+ gid_pld->scope_type = (UNF_GID_FT_TYPE);
+}
+
+u32 unf_send_gid_ft(struct unf_lport *lport, struct unf_rport *rport)
+{
+ struct unf_gid *gid_pld = NULL;
+ struct unf_gid_rsp *gid_rsp = NULL;
+ struct unf_gid_acc_pld *gid_acc_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id,
+ rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for GID_FT",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = NS_GID_FT;
+
+ xchg->ob_callback = unf_gid_ft_ob_callback;
+ xchg->callback = unf_gid_ft_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_GS_REQ;
+
+ gid_pld = &fc_entry->get_id.gid_req; /* GID req payload */
+ unf_fill_gid_ft_pld(gid_pld);
+ gid_rsp = &fc_entry->get_id.gid_rsp; /* GID rsp payload */
+
+ gid_acc_pld = (struct unf_gid_acc_pld *)unf_get_one_big_sfs_buf(xchg);
+ if (!gid_acc_pld) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) allocate GID_FT response buffer failed",
+ lport->port_id);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+ memset(gid_acc_pld, 0, sizeof(struct unf_gid_acc_pld));
+ gid_rsp->gid_acc_pld = gid_acc_pld;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: GID_FT send %s. Port(0x%x)--->RPort(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id);
+
+ return ret;
+}
+
+static void unf_fill_gid_pt_pld(struct unf_gid *gid_pld,
+ struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(gid_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ gid_pld->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
+ gid_pld->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
+ gid_pld->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_GID_PT);
+ gid_pld->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
+
+ /* 0x7F000000 means NX_Port */
+ gid_pld->scope_type = (UNF_GID_PT_TYPE);
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, gid_pld,
+ sizeof(struct unf_gid));
+}
+
+u32 unf_send_gid_pt(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /* from DISC start */
+ struct unf_gid *gid_pld = NULL;
+ struct unf_gid_rsp *gid_rsp = NULL;
+ struct unf_gid_acc_pld *gid_acc_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id,
+ rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for GID_PT",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = NS_GID_PT;
+
+ xchg->ob_callback = unf_gid_pt_ob_callback;
+ xchg->callback = unf_gid_pt_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_GS_REQ;
+
+ gid_pld = &fc_entry->get_id.gid_req; /* GID req payload */
+ unf_fill_gid_pt_pld(gid_pld, lport);
+ gid_rsp = &fc_entry->get_id.gid_rsp; /* GID rsp payload */
+
+ gid_acc_pld = (struct unf_gid_acc_pld *)unf_get_one_big_sfs_buf(xchg);
+ if (!gid_acc_pld) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) allocate GID_PT response buffer failed",
+ lport->port_id);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+ memset(gid_acc_pld, 0, sizeof(struct unf_gid_acc_pld));
+ gid_rsp->gid_acc_pld = gid_acc_pld;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: GID_PT send %s. Port(0x%x_0x%x)--->RPort(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ lport->nport_id, rport->nport_id);
+
+ return ret;
+}
+
+static void unf_fill_rft_id_pld(struct unf_rftid *rftid_pld,
+ struct unf_lport *lport)
+{
+ u32 index = 1;
+
+ FC_CHECK_RETURN_VOID(rftid_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ rftid_pld->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
+ rftid_pld->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
+ rftid_pld->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_RFT_ID);
+ rftid_pld->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
+ rftid_pld->nport_id = (lport->nport_id);
+ rftid_pld->fc4_types[ARRAY_INDEX_0] = (UNF_FC4_SCSI_BIT8);
+
+ for (index = ARRAY_INDEX_2; index < UNF_FC4TYPE_CNT; index++)
+ rftid_pld->fc4_types[index] = 0;
+}
+
+u32 unf_send_rft_id(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /* After PLOGI process */
+ struct unf_rftid *rft_id = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id,
+ rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for RFT_ID",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = NS_RFT_ID;
+
+ xchg->callback = unf_rft_id_callback;
+ xchg->ob_callback = unf_rft_id_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_GS_REQ;
+
+ rft_id = &fc_entry->rft_id;
+ memset(rft_id, 0, sizeof(struct unf_rftid));
+ unf_fill_rft_id_pld(rft_id, lport);
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: RFT_ID send %s. Port(0x%x_0x%x)--->RPort(0x%x). rport(0x%p) wwpn(0x%llx) ",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ lport->nport_id, rport->nport_id, rport, rport->port_name);
+
+ return ret;
+}
+
+static void unf_fill_rff_id_pld(struct unf_rffid *rffid_pld,
+ struct unf_lport *lport, u32 fc4_type)
+{
+ FC_CHECK_RETURN_VOID(rffid_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ rffid_pld->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
+ rffid_pld->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
+ rffid_pld->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_RFF_ID);
+ rffid_pld->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
+ rffid_pld->nport_id = (lport->nport_id);
+ rffid_pld->fc4_feature = (fc4_type | (lport->options << UNF_SHIFT_4));
+}
+
+u32 unf_send_rff_id(struct unf_lport *lport, struct unf_rport *rport,
+ u32 fc4_type)
+{
+ /* from RFT_ID, then Send SCR */
+ struct unf_rffid *rff_id = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_INFO, "%s Enter", __func__);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id,
+ rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for RFF_ID",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = NS_RFF_ID;
+
+ xchg->callback = unf_rff_id_callback;
+ xchg->ob_callback = unf_rff_id_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_GS_REQ;
+
+ rff_id = &fc_entry->rff_id;
+ memset(rff_id, 0, sizeof(struct unf_rffid));
+ unf_fill_rff_id_pld(rff_id, lport, fc4_type);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: RFF_ID feature 0x%x(10:TGT,20:INI,30:COM) send %s. Port(0x%x_0x%x)--->RPortid(0x%x) rport(0x%p)",
+ lport->options, (ret != RETURN_OK) ? "failed" : "succeed",
+ lport->port_id, lport->nport_id, rport->nport_id, rport);
+
+ return ret;
+}
+
+void unf_handle_init_gid_acc(struct unf_gid_acc_pld *gid_acc_pld,
+ struct unf_lport *lport)
+{
+ /*
+ * from SCR ACC callback
+ * NOTE: inquiry disc R_Port used for NPIV
+ */
+ struct unf_disc_rport *disc_rport = NULL;
+ struct unf_disc *disc = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 gid_port_id = 0;
+ u32 nport_id = 0;
+ u32 index = 0;
+ u8 control = 0;
+
+ FC_CHECK_RETURN_VOID(gid_acc_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ /*
+ * 1. Find & Check & Get (new) R_Port from list_disc_rports_pool
+ * then, Add to R_Port Disc_busy_list
+ */
+ while (index < UNF_GID_PORT_CNT) {
+ gid_port_id = (gid_acc_pld->gid_port_id[index]);
+ nport_id = UNF_NPORTID_MASK & gid_port_id;
+ control = UNF_GID_CONTROL(gid_port_id);
+
+ /* for each N_Port_ID from GID_ACC payload */
+ if (lport->nport_id != nport_id && nport_id != 0 &&
+ (!unf_lookup_lport_by_nportid(lport, nport_id))) {
+ /* for New Port, not L_Port */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) get nportid(0x%x) from GID_ACC",
+ lport->port_id, lport->nport_id, nport_id);
+
+ /* Get R_Port from list of RPort Disc Pool */
+ disc_rport = unf_rport_get_free_and_init(lport,
+ UNF_PORT_TYPE_DISC, nport_id);
+ if (!disc_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) can't allocate new rport(0x%x) from disc pool",
+ lport->port_id, lport->nport_id,
+ nport_id);
+
+ index++;
+ continue;
+ }
+ }
+
+ if (UNF_GID_LAST_PORT_ID == (UNF_GID_LAST_PORT_ID & control))
+ break;
+
+ index++;
+ }
+
+ /*
+ * 2. Do port disc stop operation:
+ * NOTE: Do DISC & release R_Port from busy_list back to
+ * list_disc_rports_pool
+ */
+ disc = &lport->disc;
+ if (!disc->disc_temp.unf_disc_stop) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) disc stop function is NULL",
+ lport->port_id, lport->nport_id);
+
+ return;
+ }
+
+ ret = disc->disc_temp.unf_disc_stop(lport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) do disc stop failed",
+ lport->port_id, lport->nport_id);
+ }
+}
+
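+/*
+ * unf_rport_relogin: queue a GNN_ID query to the directory server for the
+ * given N_Port_ID; if the event cannot be queued, fall through to the
+ * "GNN_ID response unknown" handler so discovery still moves on to GFF_ID.
+ */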
+u32 unf_rport_relogin(struct unf_lport *lport, u32 nport_id)
+{
+ /* Send GNN_ID */
+ struct unf_rport *sns_port = NULL;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ /* Get SNS R_Port */
+ sns_port = unf_get_rport_by_nport_id(lport, UNF_FC_FID_DIR_SERV);
+ if (!sns_port) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't find fabric Port", lport->nport_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Send GNN_ID now to SW */
+ ret = unf_get_and_post_disc_event(lport, sns_port, nport_id,
+ UNF_DISC_GET_NODE_NAME);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
+ lport->nport_id, UNF_DISC_GET_NODE_NAME, nport_id);
+
+ /* NOTE: Continue to next stage */
+ unf_rcv_gnn_id_rsp_unknown(lport, sns_port, nport_id);
+ }
+
+ return ret;
+}
+
+u32 unf_rport_check_wwn(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /* Send GPN_ID */
+ struct unf_rport *sns_port = NULL;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ /* Get SNS R_Port */
+ sns_port = unf_get_rport_by_nport_id(lport, UNF_FC_FID_DIR_SERV);
+ if (!sns_port) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't find fabric Port", lport->nport_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Send GPN_ID to SW */
+ ret = unf_get_and_post_disc_event(lport, sns_port, rport->nport_id,
+ UNF_DISC_GET_PORT_NAME);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
+ lport->nport_id, UNF_DISC_GET_PORT_NAME,
+ rport->nport_id);
+
+ unf_rcv_gpn_id_rsp_unknown(lport, rport->nport_id);
+ }
+
+ return ret;
+}
+
+u32 unf_handle_rscn_port_not_indisc(struct unf_lport *lport, u32 rscn_nport_id)
+{
+ /* RSCN Port_ID not in GID_ACC payload table: Link Down */
+ struct unf_rport *unf_rport = NULL;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ /* from R_Port busy list by N_Port_ID */
+ unf_rport = unf_get_rport_by_nport_id(lport, rscn_nport_id);
+ if (unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]Port(0x%x) RPort(0x%x) wwpn(0x%llx) has been removed and link down it",
+ lport->port_id, rscn_nport_id, unf_rport->port_name);
+
+ unf_rport_linkdown(lport, unf_rport);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) has no RPort(0x%x) and do nothing",
+ lport->nport_id, rscn_nport_id);
+ }
+
+ return ret;
+}
+
+u32 unf_handle_rscn_port_indisc(struct unf_lport *lport, u32 rscn_nport_id)
+{
+ /* Send GPN_ID or re-login(GNN_ID) */
+ struct unf_rport *unf_rport = NULL;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ /* from R_Port busy list by N_Port_ID */
+ unf_rport = unf_get_rport_by_nport_id(lport, rscn_nport_id);
+ if (unf_rport) {
+ /* R_Port exist: send GPN_ID */
+ ret = unf_rport_check_wwn(lport, unf_rport);
+ } else {
+ if (UNF_PORT_MODE_INI == (lport->options & UNF_PORT_MODE_INI))
+ /* Re-LOGIN with INI mode: Send GNN_ID */
+ ret = unf_rport_relogin(lport, rscn_nport_id);
+ else
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) with no INI feature. Do nothing",
+ lport->nport_id);
+ }
+
+ return ret;
+}
+
+static u32 unf_handle_rscn_port_addr(struct unf_port_id_page *portid_page,
+ struct unf_gid_acc_pld *gid_acc_pld,
+ struct unf_lport *lport)
+{
+	/*
+	 * Input parameters:
+	 * 1. Port_ID_page: saved from RSCN payload
+	 * 2. GID_ACC_payload: returned in GID_ACC (GID_PT or GID_FT)
+	 *
+	 * Check whether the RSCN Port_ID is within the GID_ACC payload,
+	 * then re-login or link down the rport accordingly.
+	 */
+ u32 rscn_nport_id = 0;
+ u32 gid_port_id = 0;
+ u32 nport_id = 0;
+ u32 index = 0;
+ u8 control = 0;
+ u32 ret = RETURN_OK;
+ bool have_same_id = false;
+
+ FC_CHECK_RETURN_VALUE(portid_page, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(gid_acc_pld, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+	/*
+	 * 1. Get RSCN_NPort_ID from (L_Port->Disc->RSCN_Mgr)->RSCN_Port_ID_Page
+	 */
+ rscn_nport_id = UNF_SERVICE_GET_NPORTID_FORM_GID_PAGE(portid_page);
+
+ /*
+ * 2. for RSCN_NPort_ID
+ * check whether RSCN_NPort_ID within GID_ACC_Payload or not
+ */
+ while (index < UNF_GID_PORT_CNT) {
+ gid_port_id = (gid_acc_pld->gid_port_id[index]);
+ nport_id = UNF_NPORTID_MASK & gid_port_id;
+ control = UNF_GID_CONTROL(gid_port_id);
+
+ if (lport->nport_id != nport_id && nport_id != 0) {
+ /* is not L_Port */
+ if (nport_id == rscn_nport_id) {
+ /* RSCN Port_ID within GID_ACC payload */
+ have_same_id = true;
+ break;
+ }
+ }
+
+ if (UNF_GID_LAST_PORT_ID == (UNF_GID_LAST_PORT_ID & control))
+ break;
+
+ index++;
+ }
+
+ /* 3. RSCN_Port_ID not within GID_ACC payload table */
+ if (!have_same_id) {
+ /* rport has been removed */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[warn]Port(0x%x_0x%x) find RSCN N_Port_ID(0x%x) in GID_ACC table failed",
+ lport->port_id, lport->nport_id, rscn_nport_id);
+
+ /* Link down rport */
+ ret = unf_handle_rscn_port_not_indisc(lport, rscn_nport_id);
+
+ } else { /* 4. RSCN_Port_ID within GID_ACC payload table */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x_0x%x) find RSCN N_Port_ID(0x%x) in GID_ACC table succeed",
+ lport->port_id, lport->nport_id, rscn_nport_id);
+
+ /* Re-login with INI mode */
+ ret = unf_handle_rscn_port_indisc(lport, rscn_nport_id);
+ }
+
+ return ret;
+}
+
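+/*
+ * Mark an R_Port as needing RSCN processing when its N_Port_ID falls
+ * inside the domain/area, domain or whole-fabric address group carried
+ * in the RSCN Port_ID page.
+ */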
+void unf_check_rport_rscn_process(struct unf_rport *rport,
+ struct unf_port_id_page *portid_page)
+{
+ struct unf_rport *unf_rport = rport;
+ struct unf_port_id_page *unf_portid_page = portid_page;
+ u8 addr_format = unf_portid_page->addr_format;
+
+ switch (addr_format) {
+ /* domain+area */
+ case UNF_RSCN_AREA_ADDR_GROUP:
+ if (UNF_GET_DOMAIN_ID(unf_rport->nport_id) == unf_portid_page->port_id_domain &&
+ UNF_GET_AREA_ID(unf_rport->nport_id) == unf_portid_page->port_id_area)
+ unf_rport->rscn_position = UNF_RPORT_NEED_PROCESS;
+
+ break;
+ /* domain */
+ case UNF_RSCN_DOMAIN_ADDR_GROUP:
+ if (UNF_GET_DOMAIN_ID(unf_rport->nport_id) == unf_portid_page->port_id_domain)
+ unf_rport->rscn_position = UNF_RPORT_NEED_PROCESS;
+
+ break;
+ /* all */
+ case UNF_RSCN_FABRIC_ADDR_GROUP:
+ unf_rport->rscn_position = UNF_RPORT_NEED_PROCESS;
+ break;
+ default:
+ break;
+ }
+}
+
+static void unf_set_rport_rscn_position(struct unf_lport *lport,
+ struct unf_port_id_page *portid_page)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct list_head *list_node = NULL;
+ struct list_head *list_nextnode = NULL;
+ struct unf_disc *disc = NULL;
+ ulong disc_flag = 0;
+ ulong rport_flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ disc = &lport->disc;
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_flag);
+ list_for_each_safe(list_node, list_nextnode, &disc->list_busy_rports) {
+ unf_rport = list_entry(list_node, struct unf_rport, entry_rport);
+ spin_lock_irqsave(&unf_rport->rport_state_lock, rport_flag);
+
+ if (unf_rport->nport_id < UNF_FC_FID_DOM_MGR) {
+ if (unf_rport->rscn_position == UNF_RPORT_NOT_NEED_PROCESS)
+ unf_check_rport_rscn_process(unf_rport, portid_page);
+ } else {
+ unf_rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
+ }
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, rport_flag);
+ }
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
+}
+
+static void unf_set_rport_rscn_position_local(struct unf_lport *lport)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct list_head *list_node = NULL;
+ struct list_head *list_nextnode = NULL;
+ struct unf_disc *disc = NULL;
+ ulong disc_flag = 0;
+ ulong rport_flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ disc = &lport->disc;
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_flag);
+ list_for_each_safe(list_node, list_nextnode, &disc->list_busy_rports) {
+ unf_rport = list_entry(list_node, struct unf_rport, entry_rport);
+ spin_lock_irqsave(&unf_rport->rport_state_lock, rport_flag);
+
+ if (unf_rport->nport_id < UNF_FC_FID_DOM_MGR) {
+ if (unf_rport->rscn_position == UNF_RPORT_NEED_PROCESS)
+ unf_rport->rscn_position = UNF_RPORT_ONLY_IN_LOCAL_PROCESS;
+ } else {
+ unf_rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
+ }
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, rport_flag);
+ }
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
+}
+
+static void unf_reset_rport_rscn_setting(struct unf_lport *lport)
+{
+ struct unf_rport *rport = NULL;
+ struct list_head *list_node = NULL;
+ struct list_head *list_nextnode = NULL;
+ struct unf_disc *disc = NULL;
+ ulong rport_flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ disc = &lport->disc;
+
+ list_for_each_safe(list_node, list_nextnode, &disc->list_busy_rports) {
+ rport = list_entry(list_node, struct unf_rport, entry_rport);
+ spin_lock_irqsave(&rport->rport_state_lock, rport_flag);
+ rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
+ }
+}
+
+void unf_compare_nport_id_with_rport_list(struct unf_lport *lport, u32 nport_id,
+ struct unf_port_id_page *portid_page)
+{
+ struct unf_rport *rport = NULL;
+ ulong rport_flag = 0;
+ u8 addr_format = portid_page->addr_format;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ switch (addr_format) {
+ /* domain+area */
+ case UNF_RSCN_AREA_ADDR_GROUP:
+ if ((UNF_GET_DOMAIN_ID(nport_id) != portid_page->port_id_domain) ||
+ (UNF_GET_AREA_ID(nport_id) != portid_page->port_id_area))
+ return;
+
+ break;
+ /* domain */
+ case UNF_RSCN_DOMAIN_ADDR_GROUP:
+ if (UNF_GET_DOMAIN_ID(nport_id) != portid_page->port_id_domain)
+ return;
+
+ break;
+ /* all */
+ case UNF_RSCN_FABRIC_ADDR_GROUP:
+ break;
+ /* can't enter this branch guarantee by outer */
+ default:
+ break;
+ }
+
+ rport = unf_get_rport_by_nport_id(lport, nport_id);
+
+ if (!rport) {
+ if (UNF_PORT_MODE_INI == (lport->options & UNF_PORT_MODE_INI)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) Find Rport(0x%x) by RSCN",
+ lport->nport_id, nport_id);
+ unf_rport_relogin(lport, nport_id);
+ }
+ } else {
+ spin_lock_irqsave(&rport->rport_state_lock, rport_flag);
+ if (rport->rscn_position == UNF_RPORT_NEED_PROCESS)
+ rport->rscn_position = UNF_RPORT_IN_DISC_AND_LOCAL_PROCESS;
+
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
+ }
+}
+
+static void unf_compare_disc_with_local_rport(struct unf_lport *lport,
+ struct unf_gid_acc_pld *pld,
+ struct unf_port_id_page *page)
+{
+ u32 gid_port_id = 0;
+ u32 nport_id = 0;
+ u32 index = 0;
+ u8 control = 0;
+
+ FC_CHECK_RETURN_VOID(pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ while (index < UNF_GID_PORT_CNT) {
+ gid_port_id = (pld->gid_port_id[index]);
+ nport_id = UNF_NPORTID_MASK & gid_port_id;
+ control = UNF_GID_CONTROL(gid_port_id);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_INFO, "[info]Port(0x%x) DISC N_Port_ID(0x%x)",
+ lport->nport_id, nport_id);
+
+ if (nport_id != 0 &&
+ (!unf_lookup_lport_by_nportid(lport, nport_id)))
+ unf_compare_nport_id_with_rport_list(lport, nport_id, page);
+
+ if (UNF_GID_LAST_PORT_ID == (UNF_GID_LAST_PORT_ID & control))
+ break;
+
+ index++;
+ }
+
+ unf_set_rport_rscn_position_local(lport);
+}
+
+static u32 unf_process_each_rport_after_rscn(struct unf_lport *lport,
+ struct unf_rport *sns_port,
+ struct unf_rport *rport)
+{
+ ulong rport_flag = 0;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(sns_port, UNF_RETURN_ERROR);
+	FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&rport->rport_state_lock, rport_flag);
+
+ if (rport->rscn_position == UNF_RPORT_IN_DISC_AND_LOCAL_PROCESS) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]Port(0x%x_0x%x) RPort(0x%x) rescan position(0x%x), check wwpn",
+ lport->port_id, lport->nport_id, rport->nport_id,
+ rport->rscn_position);
+ rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
+ ret = unf_rport_check_wwn(lport, rport);
+ } else if (rport->rscn_position == UNF_RPORT_ONLY_IN_LOCAL_PROCESS) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[event]Port(0x%x_0x%x) RPort(0x%x) rescan position(0x%x), linkdown it",
+ lport->port_id, lport->nport_id, rport->nport_id,
+ rport->rscn_position);
+ rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
+ unf_rport_linkdown(lport, rport);
+ } else {
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
+ }
+
+ return ret;
+}
+
+static u32 unf_process_local_rport_after_rscn(struct unf_lport *lport,
+ struct unf_rport *sns_port)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct list_head *list_node = NULL;
+ struct unf_disc *disc = NULL;
+ ulong disc_flag = 0;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(sns_port, UNF_RETURN_ERROR);
+ disc = &lport->disc;
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_flag);
+ if (list_empty(&disc->list_busy_rports)) {
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ list_node = UNF_OS_LIST_NEXT(&disc->list_busy_rports);
+
+ do {
+ unf_rport = list_entry(list_node, struct unf_rport, entry_rport);
+
+ if (unf_rport->rscn_position == UNF_RPORT_NOT_NEED_PROCESS) {
+ list_node = UNF_OS_LIST_NEXT(list_node);
+ continue;
+ } else {
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
+ ret = unf_process_each_rport_after_rscn(lport, sns_port, unf_rport);
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_flag);
+ list_node = UNF_OS_LIST_NEXT(&disc->list_busy_rports);
+ }
+ } while (list_node != &disc->list_busy_rports);
+
+ unf_reset_rport_rscn_setting(lport);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
+
+ return ret;
+}
+
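+/*
+ * Group-address RSCN handling, in three passes:
+ * 1. unf_set_rport_rscn_position: mark local busy R_Ports inside the
+ *    affected address group as NEED_PROCESS.
+ * 2. unf_compare_disc_with_local_rport: R_Ports also present in the
+ *    GID_ACC list become IN_DISC_AND_LOCAL_PROCESS, the remaining marked
+ *    ones become ONLY_IN_LOCAL_PROCESS.
+ * 3. unf_process_local_rport_after_rscn: re-validate the former via
+ *    GPN_ID (check wwn) and link down the latter.
+ */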
+static u32 unf_handle_rscn_group_addr(struct unf_port_id_page *portid_page,
+ struct unf_gid_acc_pld *gid_acc_pld,
+ struct unf_lport *lport)
+{
+ struct unf_rport *sns_port = NULL;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(portid_page, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(gid_acc_pld, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ sns_port = unf_get_rport_by_nport_id(lport, UNF_FC_FID_DIR_SERV);
+ if (!sns_port) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) find fabric port failed", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ unf_set_rport_rscn_position(lport, portid_page);
+ unf_compare_disc_with_local_rport(lport, gid_acc_pld, portid_page);
+
+ ret = unf_process_local_rport_after_rscn(lport, sns_port);
+
+ return ret;
+}
+
+static void unf_handle_rscn_gid_acc(struct unf_gid_acc_pld *gid_acc_pid,
+ struct unf_lport *lport)
+{
+ /* for N_Port_ID table return from RSCN */
+ struct unf_port_id_page *port_id_page = NULL;
+ struct unf_rscn_mgr *rscn_mgr = NULL;
+ struct list_head *list_node = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(gid_acc_pid);
+ FC_CHECK_RETURN_VOID(lport);
+ rscn_mgr = &lport->disc.rscn_mgr;
+
+ spin_lock_irqsave(&rscn_mgr->rscn_id_list_lock, flag);
+ while (!list_empty(&rscn_mgr->list_using_rscn_page)) {
+		/*
+		 * For each Port_ID_Page on
+		 * L_Port->Disc->RSCN_Mgr->list_using_rscn_page:
+		 * check whether the page's Port_ID is within the
+		 * GID_ACC payload or not.
+		 */
+ list_node = UNF_OS_LIST_NEXT(&rscn_mgr->list_using_rscn_page);
+ port_id_page = list_entry(list_node, struct unf_port_id_page, list_node_rscn);
+ list_del(list_node); /* NOTE: here delete node (from RSCN using Page) */
+ spin_unlock_irqrestore(&rscn_mgr->rscn_id_list_lock, flag);
+
+ switch (port_id_page->addr_format) {
+		/* each RSCN page corresponds to a single N_Port_ID */
+ case UNF_RSCN_PORT_ADDR:
+ (void)unf_handle_rscn_port_addr(port_id_page, gid_acc_pid, lport);
+ break;
+
+		/* each RSCN page corresponds to an address group */
+ case UNF_RSCN_AREA_ADDR_GROUP:
+ case UNF_RSCN_DOMAIN_ADDR_GROUP:
+ case UNF_RSCN_FABRIC_ADDR_GROUP:
+ (void)unf_handle_rscn_group_addr(port_id_page, gid_acc_pid, lport);
+ break;
+
+ default:
+ break;
+ }
+
+ /* NOTE: release this RSCN_Node */
+ rscn_mgr->unf_release_rscn_node(rscn_mgr, port_id_page);
+
+ /* go to next */
+ spin_lock_irqsave(&rscn_mgr->rscn_id_list_lock, flag);
+ }
+
+ spin_unlock_irqrestore(&rscn_mgr->rscn_id_list_lock, flag);
+}
+
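+/*
+ * Dispatch a GID_(FT/PT) ACC according to why the query was issued:
+ * UNF_INIT_DISC (initial discovery after SCR) pulls new R_Ports from the
+ * disc pool, UNF_RSCN_DISC reconciles the name-server list against the
+ * pending RSCN Port_ID pages.
+ */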
+static void unf_gid_acc_handle(struct unf_gid_acc_pld *gid_acc_pid,
+ struct unf_lport *lport)
+{
+#define UNF_NONE_DISC 0x0 /* before entering DISC */
+ struct unf_disc *disc = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(gid_acc_pid);
+ FC_CHECK_RETURN_VOID(lport);
+ disc = &lport->disc;
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ switch (disc->disc_option) {
+ case UNF_INIT_DISC: /* from SCR callback with INI mode */
+ disc->disc_option = UNF_NONE_DISC;
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ unf_handle_init_gid_acc(gid_acc_pid, lport); /* R_Port from Disc_list */
+ break;
+
+ case UNF_RSCN_DISC: /* from RSCN payload parse(analysis) */
+ disc->disc_option = UNF_NONE_DISC;
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ unf_handle_rscn_gid_acc(gid_acc_pid, lport); /* R_Port from busy_list */
+ break;
+
+ default:
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x)'s disc option(0x%x) is abnormal",
+ lport->port_id, lport->nport_id, disc->disc_option);
+ break;
+ }
+}
+
+static void unf_gid_ft_ob_callback(struct unf_xchg *xchg)
+{
+ /* Do recovery */
+ struct unf_lport *lport = NULL;
+ union unf_sfs_u *sfs_ptr = NULL;
+ struct unf_disc *disc = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ sfs_ptr = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!sfs_ptr)
+ return;
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ lport = xchg->lport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+ if (!lport)
+ return;
+
+ disc = &lport->disc;
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_disc_state_ma(lport, UNF_EVENT_DISC_FAILED);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ /* Do DISC recovery operation */
+ unf_disc_error_recovery(lport);
+}
+
+static void unf_gid_ft_callback(void *lport, void *rport, void *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_gid_acc_pld *gid_acc_pld = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ union unf_sfs_u *sfs_ptr = NULL;
+ u32 cmnd_rsp_size = 0;
+ u32 rjt_reason = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_xchg = (struct unf_xchg *)xchg;
+ disc = &unf_lport->disc;
+
+ sfs_ptr = unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ gid_acc_pld = sfs_ptr->get_id.gid_rsp.gid_acc_pld;
+ if (!gid_acc_pld) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) GID_FT response payload is NULL",
+ unf_lport->port_id);
+
+ return;
+ }
+
+ cmnd_rsp_size = gid_acc_pld->ctiu_pream.cmnd_rsp_size;
+ if (UNF_CT_IU_ACCEPT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_SUCCESS);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ /* Process GID_FT ACC */
+ unf_gid_acc_handle(gid_acc_pld, unf_lport);
+ } else if (UNF_CT_IU_REJECT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
+ rjt_reason = (gid_acc_pld->ctiu_pream.frag_reason_exp_vend);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) GID_FT was rejected with reason code(0x%x)",
+ unf_lport->port_id, rjt_reason);
+
+ if (UNF_CTIU_RJT_EXP_FC4TYPE_NO_REG ==
+ (rjt_reason & UNF_CTIU_RJT_EXP_MASK)) {
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_SUCCESS);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ unf_gid_acc_handle(gid_acc_pld, unf_lport);
+ } else {
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_SUCCESS);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+ }
+ } else {
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_FAILED);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ /* Do DISC recovery operation */
+ unf_disc_error_recovery(unf_lport);
+ }
+}
+
+static void unf_gid_pt_ob_callback(struct unf_xchg *xchg)
+{
+ /* Do recovery */
+ struct unf_lport *lport = NULL;
+ union unf_sfs_u *sfs_ptr = NULL;
+ struct unf_disc *disc = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ sfs_ptr = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!sfs_ptr)
+ return;
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ lport = xchg->lport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+ if (!lport)
+ return;
+
+ disc = &lport->disc;
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_disc_state_ma(lport, UNF_EVENT_DISC_FAILED);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ /* Do DISC recovery operation */
+ unf_disc_error_recovery(lport);
+}
+
+static void unf_gid_pt_callback(void *lport, void *rport, void *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_gid_acc_pld *gid_acc_pld = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ union unf_sfs_u *sfs_ptr = NULL;
+ u32 cmnd_rsp_size = 0;
+ u32 rjt_reason = 0;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_rport = (struct unf_rport *)rport;
+ disc = &unf_lport->disc;
+ unf_xchg = (struct unf_xchg *)xchg;
+ sfs_ptr = unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+
+ gid_acc_pld = sfs_ptr->get_id.gid_rsp.gid_acc_pld;
+ if (!gid_acc_pld) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) GID_PT response payload is NULL",
+ unf_lport->port_id);
+ return;
+ }
+
+ cmnd_rsp_size = (gid_acc_pld->ctiu_pream.cmnd_rsp_size);
+ if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_ACCEPT) {
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_SUCCESS);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ unf_gid_acc_handle(gid_acc_pld, unf_lport);
+ } else if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_REJECT) {
+ rjt_reason = (gid_acc_pld->ctiu_pream.frag_reason_exp_vend);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) GID_PT was rejected with reason code(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, rjt_reason);
+
+ if ((rjt_reason & UNF_CTIU_RJT_EXP_MASK) ==
+ UNF_CTIU_RJT_EXP_PORTTYPE_NO_REG) {
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_SUCCESS);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ unf_gid_acc_handle(gid_acc_pld, unf_lport);
+ } else {
+ ret = unf_send_gid_ft(unf_lport, unf_rport);
+ if (ret != RETURN_OK)
+ goto SEND_GID_PT_FT_FAILED;
+ }
+ } else {
+ goto SEND_GID_PT_FT_FAILED;
+ }
+
+ return;
+SEND_GID_PT_FT_FAILED:
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_FAILED);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+ unf_disc_error_recovery(unf_lport);
+}
+
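+/*
+ * GNN_ID send failed: give the discovery-control credit back, wake the
+ * discovery thread and still advance to the GFF_ID stage for this ID.
+ */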
+static void unf_gnn_id_ob_callback(struct unf_xchg *xchg)
+{
+ /* Send GFF_ID */
+ struct unf_lport *lport = NULL;
+ struct unf_rport *sns_port = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 nport_id = 0;
+ struct unf_lport *root_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ lport = xchg->lport;
+ FC_CHECK_RETURN_VOID(lport);
+ sns_port = xchg->rport;
+ FC_CHECK_RETURN_VOID(sns_port);
+ nport_id = xchg->disc_portid;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send GNN_ID failed to inquire RPort(0x%x)",
+ lport->port_id, nport_id);
+
+ root_lport = (struct unf_lport *)lport->root_lport;
+ atomic_inc(&root_lport->disc.disc_thread_info.disc_contrl_size);
+ wake_up_process(root_lport->disc.disc_thread_info.thread);
+
+ /* NOTE: continue next stage */
+ ret = unf_get_and_post_disc_event(lport, sns_port, nport_id, UNF_DISC_GET_FEATURE);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
+ lport->port_id, UNF_DISC_GET_FEATURE, nport_id);
+
+ unf_rcv_gff_id_rsp_unknown(lport, nport_id);
+ }
+}
+
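+/*
+ * GNN_ID ACC: if the returned node name equals the local port's node name,
+ * the remote ID is a stale image of ourselves and is linked down
+ * immediately; otherwise queue GFF_ID to learn its FC-4 features.
+ */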
+static void unf_rcv_gnn_id_acc(struct unf_lport *lport,
+ struct unf_rport *sns_port,
+ struct unf_gnnid_rsp *gnnid_rsp_pld,
+ u32 nport_id)
+{
+ /* Send GFF_ID or Link down immediately */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_sns_port = sns_port;
+ struct unf_gnnid_rsp *unf_gnnid_rsp_pld = gnnid_rsp_pld;
+ struct unf_rport *rport = NULL;
+ u64 node_name = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(sns_port);
+ FC_CHECK_RETURN_VOID(gnnid_rsp_pld);
+
+ node_name = ((u64)(unf_gnnid_rsp_pld->node_name[ARRAY_INDEX_0]) << UNF_SHIFT_32) |
+ ((u64)(unf_gnnid_rsp_pld->node_name[ARRAY_INDEX_1]));
+
+ if (unf_lport->node_name == node_name) {
+ /* R_Port & L_Port with same Node Name */
+ rport = unf_get_rport_by_nport_id(unf_lport, nport_id);
+ if (rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]Port(0x%x) has the same node name(0x%llx) with RPort(0x%x), linkdown it",
+ unf_lport->port_id, node_name, nport_id);
+
+ /* Destroy immediately */
+ unf_rport_immediate_link_down(unf_lport, rport);
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x) got RPort(0x%x) with node name(0x%llx) by GNN_ID",
+ unf_lport->port_id, nport_id, node_name);
+
+ /* Start to Send GFF_ID */
+ ret = unf_get_and_post_disc_event(unf_lport, unf_sns_port,
+ nport_id, UNF_DISC_GET_FEATURE);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
+ unf_lport->port_id, UNF_DISC_GET_FEATURE, nport_id);
+
+ unf_rcv_gff_id_rsp_unknown(unf_lport, nport_id);
+ }
+ }
+}
+
+static void unf_rcv_gnn_id_rjt(struct unf_lport *lport,
+ struct unf_rport *sns_port,
+ struct unf_gnnid_rsp *gnnid_rsp_pld,
+ u32 nport_id)
+{
+ /* Send GFF_ID */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_sns_port = sns_port;
+ struct unf_gnnid_rsp *unf_gnnid_rsp_pld = gnnid_rsp_pld;
+ u32 rjt_reason = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(sns_port);
+ FC_CHECK_RETURN_VOID(gnnid_rsp_pld);
+
+ rjt_reason = (unf_gnnid_rsp_pld->ctiu_pream.frag_reason_exp_vend);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) GNN_ID was rejected with reason code(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, rjt_reason);
+
+ if (!UNF_GNN_GFF_ID_RJT_REASON(rjt_reason)) {
+ /* Node existence: Continue next stage */
+ ret = unf_get_and_post_disc_event(unf_lport, unf_sns_port,
+ nport_id, UNF_DISC_GET_FEATURE);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
+ unf_lport->port_id, UNF_DISC_GET_FEATURE, nport_id);
+
+ unf_rcv_gff_id_rsp_unknown(unf_lport, nport_id);
+ }
+ }
+}
+
+void unf_rcv_gnn_id_rsp_unknown(struct unf_lport *lport,
+ struct unf_rport *sns_port, u32 nport_id)
+{
+ /* Send GFF_ID */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_sns_port = sns_port;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(sns_port);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) Rportid(0x%x) GNN_ID response is unknown. Sending GFF_ID",
+ unf_lport->port_id, unf_lport->nport_id, nport_id);
+
+ ret = unf_get_and_post_disc_event(unf_lport, unf_sns_port, nport_id, UNF_DISC_GET_FEATURE);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
+ unf_lport->port_id, UNF_DISC_GET_FEATURE,
+ nport_id);
+
+ /* NOTE: go to next stage */
+ unf_rcv_gff_id_rsp_unknown(unf_lport, nport_id);
+ }
+}
+
+static void unf_gnn_id_callback(void *lport, void *sns_port, void *xchg)
+{
+ struct unf_lport *unf_lport = (struct unf_lport *)lport;
+ struct unf_rport *unf_sns_port = (struct unf_rport *)sns_port;
+ struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
+ struct unf_gnnid_rsp *gnnid_rsp_pld = NULL;
+ u32 cmnd_rsp_size = 0;
+ u32 nport_id = 0;
+ struct unf_lport *root_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(sns_port);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ nport_id = unf_xchg->disc_portid;
+ gnnid_rsp_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->gnn_id_rsp;
+ cmnd_rsp_size = gnnid_rsp_pld->ctiu_pream.cmnd_rsp_size;
+
+ root_lport = (struct unf_lport *)unf_lport->root_lport;
+ atomic_inc(&root_lport->disc.disc_thread_info.disc_contrl_size);
+ wake_up_process(root_lport->disc.disc_thread_info.thread);
+
+ if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_ACCEPT) {
+ /* Case ACC: send GFF_ID or Link down immediately */
+ unf_rcv_gnn_id_acc(unf_lport, unf_sns_port, gnnid_rsp_pld, nport_id);
+ } else if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_REJECT) {
+ /* Case RJT: send GFF_ID */
+ unf_rcv_gnn_id_rjt(unf_lport, unf_sns_port, gnnid_rsp_pld, nport_id);
+ } else { /* NOTE: continue next stage */
+ /* Case unknown: send GFF_ID */
+ unf_rcv_gnn_id_rsp_unknown(unf_lport, unf_sns_port, nport_id);
+ }
+}
+
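+/*
+ * GFF_ID send failed: return the discovery-control credit, then skip the
+ * feature query and go straight to PLOGI for this N_Port_ID.
+ */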
+static void unf_gff_id_ob_callback(struct unf_xchg *xchg)
+{
+ /* Send PLOGI */
+ struct unf_lport *lport = NULL;
+ struct unf_lport *root_lport = NULL;
+ struct unf_rport *rport = NULL;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 nport_id = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ lport = xchg->lport;
+ nport_id = xchg->disc_portid;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ root_lport = (struct unf_lport *)lport->root_lport;
+ atomic_inc(&root_lport->disc.disc_thread_info.disc_contrl_size);
+ wake_up_process(root_lport->disc.disc_thread_info.thread);
+
+ /* Get (safe) R_Port */
+ rport = unf_get_rport_by_nport_id(lport, nport_id);
+ rport = unf_get_safe_rport(lport, rport, UNF_RPORT_REUSE_ONLY, nport_id);
+ if (!rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't allocate new RPort(0x%x)",
+ lport->port_id, nport_id);
+ return;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) send GFF_ID(0x%x_0x%x) to RPort(0x%x_0x%x) abnormal",
+ lport->port_id, lport->nport_id, xchg->oxid, xchg->rxid,
+ rport->rport_index, rport->nport_id);
+
+ /* Update R_Port state: PLOGI_WAIT */
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rport->nport_id = nport_id;
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ /* NOTE: Start to send PLOGI */
+ ret = unf_send_plogi(lport, rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) send PLOGI failed, enter recovery",
+ lport->port_id);
+
+ /* Do R_Port recovery */
+ unf_rport_error_recovery(rport);
+ }
+}
+
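+/*
+ * GFF_ID ACC: pick the FC-4 feature word (byte-swapping it when the ACC
+ * mask bits are not already in place), remember the remote INI/TGT role
+ * and let unf_check_rport_need_delay_plogi decide whether to PLOGI.
+ * An unknown remote port that is initiator-only is ignored.
+ */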
+void unf_rcv_gff_id_acc(struct unf_lport *lport,
+ struct unf_gffid_rsp *gffid_rsp_pld, u32 nport_id)
+{
+ /* Delay to LOGIN */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *rport = NULL;
+ struct unf_gffid_rsp *unf_gffid_rsp_pld = gffid_rsp_pld;
+ u32 fc_4feacture = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(gffid_rsp_pld);
+
+ fc_4feacture = unf_gffid_rsp_pld->fc4_feature[ARRAY_INDEX_1];
+ if ((UNF_GFF_ACC_MASK & fc_4feacture) == 0)
+ fc_4feacture = be32_to_cpu(unf_gffid_rsp_pld->fc4_feature[ARRAY_INDEX_1]);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x) RPort(0x%x) received GFF_ID ACC. FC4 feature is 0x%x(1:TGT,2:INI,3:COM)",
+ unf_lport->port_id, unf_lport->nport_id, nport_id, fc_4feacture);
+
+ /* Check (& Get new) R_Port */
+ rport = unf_get_rport_by_nport_id(unf_lport, nport_id);
+ if (rport)
+ rport = unf_find_rport(unf_lport, nport_id, rport->port_name);
+
+ if (rport || (UNF_GET_PORT_OPTIONS(fc_4feacture) != UNF_PORT_MODE_INI)) {
+ rport = unf_get_safe_rport(unf_lport, rport, UNF_RPORT_REUSE_ONLY, nport_id);
+ FC_CHECK_RETURN_VOID(rport);
+ } else {
+ return;
+ }
+
+ if ((fc_4feacture & UNF_GFF_ACC_MASK) != 0) {
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rport->options = UNF_GET_PORT_OPTIONS(fc_4feacture);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ } else if (rport->port_name != INVALID_WWPN) {
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rport->options = unf_get_port_feature(rport->port_name);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ }
+
+ /* NOTE: Send PLOGI if necessary */
+ unf_check_rport_need_delay_plogi(unf_lport, rport, rport->options);
+}
+
+void unf_rcv_gff_id_rjt(struct unf_lport *lport,
+ struct unf_gffid_rsp *gffid_rsp_pld, u32 nport_id)
+{
+ /* Delay LOGIN or LOGO */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *rport = NULL;
+ struct unf_gffid_rsp *unf_gffid_rsp_pld = gffid_rsp_pld;
+ u32 rjt_reason = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(gffid_rsp_pld);
+
+ /* Check (& Get new) R_Port */
+ rport = unf_get_rport_by_nport_id(unf_lport, nport_id);
+ if (rport)
+ rport = unf_find_rport(unf_lport, nport_id, rport->port_name);
+
+ if (!rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) get RPort by N_Port_ID(0x%x) failed and alloc new",
+ unf_lport->port_id, nport_id);
+
+ rport = unf_rport_get_free_and_init(unf_lport, UNF_PORT_TYPE_FC, nport_id);
+ FC_CHECK_RETURN_VOID(rport);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rport->nport_id = nport_id;
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ }
+
+ rjt_reason = unf_gffid_rsp_pld->ctiu_pream.frag_reason_exp_vend;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send GFF_ID for RPort(0x%x) but was rejected. Reason code(0x%x)",
+ unf_lport->port_id, nport_id, rjt_reason);
+
+ if (!UNF_GNN_GFF_ID_RJT_REASON(rjt_reason)) {
+ rport = unf_get_safe_rport(lport, rport, UNF_RPORT_REUSE_ONLY, nport_id);
+ FC_CHECK_RETURN_VOID(rport);
+
+ /* Update R_Port state: PLOGI_WAIT */
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rport->nport_id = nport_id;
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ /* Delay to send PLOGI */
+ unf_rport_delay_login(rport);
+ } else {
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ if (rport->rp_state == UNF_RPORT_ST_INIT) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ /* Enter closing state */
+ unf_rport_enter_logo(unf_lport, rport);
+ } else {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ }
+ }
+}
+
+void unf_rcv_gff_id_rsp_unknown(struct unf_lport *lport, u32 nport_id)
+{
+ /* Send PLOGI */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *rport = NULL;
+ ulong flag = 0;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send GFF_ID for RPort(0x%x) but response is unknown",
+ unf_lport->port_id, nport_id);
+
+ /* Get (Safe) R_Port & Set State */
+ rport = unf_get_rport_by_nport_id(unf_lport, nport_id);
+ if (rport)
+ rport = unf_find_rport(unf_lport, nport_id, rport->port_name);
+
+ if (!rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) can't get RPort by NPort ID(0x%x), allocate new RPort",
+ unf_lport->port_id, unf_lport->nport_id, nport_id);
+
+ rport = unf_rport_get_free_and_init(unf_lport, UNF_PORT_TYPE_FC, nport_id);
+ FC_CHECK_RETURN_VOID(rport);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rport->nport_id = nport_id;
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ }
+
+ rport = unf_get_safe_rport(unf_lport, rport, UNF_RPORT_REUSE_ONLY, nport_id);
+ FC_CHECK_RETURN_VOID(rport);
+
+ /* Update R_Port state: PLOGI_WAIT */
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rport->nport_id = nport_id;
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ /* Start to send PLOGI */
+ ret = unf_send_plogi(unf_lport, rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) can not send PLOGI for RPort(0x%x), enter recovery",
+ unf_lport->port_id, nport_id);
+
+ unf_rport_error_recovery(rport);
+ }
+}
+
+static void unf_gff_id_callback(void *lport, void *sns_port, void *xchg)
+{
+ struct unf_lport *unf_lport = (struct unf_lport *)lport;
+ struct unf_lport *root_lport = NULL;
+ struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
+ struct unf_gffid_rsp *gffid_rsp_pld = NULL;
+ u32 cmnd_rsp_size = 0;
+ u32 nport_id = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(sns_port);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ nport_id = unf_xchg->disc_portid;
+
+ gffid_rsp_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->gff_id_rsp;
+ cmnd_rsp_size = (gffid_rsp_pld->ctiu_pream.cmnd_rsp_size);
+
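+	/* Bump the discovery control counter and wake the discovery thread */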
+ root_lport = (struct unf_lport *)unf_lport->root_lport;
+ atomic_inc(&root_lport->disc.disc_thread_info.disc_contrl_size);
+ wake_up_process(root_lport->disc.disc_thread_info.thread);
+
+ if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_ACCEPT) {
+ /* Case for GFF_ID ACC: (Delay)PLOGI */
+ unf_rcv_gff_id_acc(unf_lport, gffid_rsp_pld, nport_id);
+ } else if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_REJECT) {
+ /* Case for GFF_ID RJT: Delay PLOGI or LOGO directly */
+ unf_rcv_gff_id_rjt(unf_lport, gffid_rsp_pld, nport_id);
+ } else {
+ /* Send PLOGI */
+ unf_rcv_gff_id_rsp_unknown(unf_lport, nport_id);
+ }
+}
+
+static void unf_rcv_gpn_id_acc(struct unf_lport *lport,
+ u32 nport_id, u64 port_name)
+{
+ /* then PLOGI or re-login */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *rport = NULL;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ rport = unf_find_valid_rport(unf_lport, port_name, nport_id);
+ if (rport) {
+ /* R_Port with TGT mode & L_Port with INI mode:
+ * send PLOGI with INIT state
+ */
+ if ((rport->options & UNF_PORT_MODE_TGT) == UNF_PORT_MODE_TGT) {
+ rport = unf_get_safe_rport(lport, rport, UNF_RPORT_REUSE_INIT, nport_id);
+ FC_CHECK_RETURN_VOID(rport);
+
+ /* Update R_Port state: PLOGI_WAIT */
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rport->nport_id = nport_id;
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ /* Start to send PLOGI */
+ ret = unf_send_plogi(unf_lport, rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+					     "[warn]LOGIN: Port(0x%x_0x%x) send PLOGI failed for 0x%x, enter recovery",
+ unf_lport->port_id, unf_lport->nport_id, nport_id);
+
+ unf_rport_error_recovery(rport);
+ }
+ } else {
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ if (rport->rp_state != UNF_RPORT_ST_PLOGI_WAIT &&
+ rport->rp_state != UNF_RPORT_ST_PRLI_WAIT &&
+ rport->rp_state != UNF_RPORT_ST_READY) {
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ /* Do LOGO operation */
+ unf_rport_enter_logo(unf_lport, rport);
+ } else {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ }
+ }
+ } else {
+ /* Send GNN_ID */
+ (void)unf_rport_relogin(unf_lport, nport_id);
+ }
+}
+
+static void unf_rcv_gpn_id_rjt(struct unf_lport *lport, u32 nport_id)
+{
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *rport = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ rport = unf_get_rport_by_nport_id(unf_lport, nport_id);
+ if (rport)
+ /* Do R_Port Link down */
+ unf_rport_linkdown(unf_lport, rport);
+}
+
+void unf_rcv_gpn_id_rsp_unknown(struct unf_lport *lport, u32 nport_id)
+{
+ struct unf_lport *unf_lport = lport;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) wrong response of GPN_ID with RPort(0x%x)",
+ unf_lport->port_id, nport_id);
+
+ /* NOTE: go to next stage */
+ (void)unf_rport_relogin(unf_lport, nport_id);
+}
+
+static void unf_gpn_id_ob_callback(struct unf_xchg *xchg)
+{
+ struct unf_lport *lport = NULL;
+ u32 nport_id = 0;
+ struct unf_lport *root_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ lport = xchg->lport;
+ nport_id = xchg->disc_portid;
+ FC_CHECK_RETURN_VOID(lport);
+
+ root_lport = (struct unf_lport *)lport->root_lport;
+ atomic_inc(&root_lport->disc.disc_thread_info.disc_contrl_size);
+ wake_up_process(root_lport->disc.disc_thread_info.thread);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send GPN_ID failed to inquire RPort(0x%x)",
+ lport->port_id, nport_id);
+
+ /* NOTE: go to next stage */
+ (void)unf_rport_relogin(lport, nport_id);
+}
+
+static void unf_gpn_id_callback(void *lport, void *sns_port, void *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_gpnid_rsp *gpnid_rsp_pld = NULL;
+ u64 port_name = 0;
+ u32 cmnd_rsp_size = 0;
+ u32 nport_id = 0;
+ struct unf_lport *root_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(sns_port);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_xchg = (struct unf_xchg *)xchg;
+ nport_id = unf_xchg->disc_portid;
+
+ root_lport = (struct unf_lport *)unf_lport->root_lport;
+ atomic_inc(&root_lport->disc.disc_thread_info.disc_contrl_size);
+ wake_up_process(root_lport->disc.disc_thread_info.thread);
+
+ gpnid_rsp_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->gpn_id_rsp;
+ cmnd_rsp_size = gpnid_rsp_pld->ctiu_pream.cmnd_rsp_size;
+ if (UNF_CT_IU_ACCEPT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
+ /* GPN_ID ACC */
+ port_name = ((u64)(gpnid_rsp_pld->port_name[ARRAY_INDEX_0])
+ << UNF_SHIFT_32) |
+ ((u64)(gpnid_rsp_pld->port_name[ARRAY_INDEX_1]));
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x) GPN_ID ACC with WWN(0x%llx) RPort NPort ID(0x%x)",
+ unf_lport->port_id, port_name, nport_id);
+
+ /* Send PLOGI or LOGO or GNN_ID */
+ unf_rcv_gpn_id_acc(unf_lport, nport_id, port_name);
+ } else if (UNF_CT_IU_REJECT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
+ /* GPN_ID RJT: Link Down */
+ unf_rcv_gpn_id_rjt(unf_lport, nport_id);
+ } else {
+ /* GPN_ID response type unknown: Send GNN_ID */
+ unf_rcv_gpn_id_rsp_unknown(unf_lport, nport_id);
+ }
+}
+
+static void unf_rff_id_ob_callback(struct unf_xchg *xchg)
+{
+ /* Do recovery */
+ struct unf_lport *lport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ lport = xchg->lport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send RFF_ID failed",
+ lport->port_id, lport->nport_id);
+
+ unf_lport_error_recovery(lport);
+}
+
+static void unf_rff_id_callback(void *lport, void *rport, void *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_ctiu_prem *ctiu_prem = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 cmnd_rsp_size = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_xchg = (struct unf_xchg *)xchg;
+ if (unlikely(!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr))
+ return;
+
+ unf_rport = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_FCTRL);
+ unf_rport = unf_get_safe_rport(unf_lport, unf_rport,
+ UNF_RPORT_REUSE_ONLY, UNF_FC_FID_FCTRL);
+ if (unlikely(!unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't allocate RPort(0x%x)",
+ unf_lport->port_id, UNF_FC_FID_FCTRL);
+ return;
+ }
+
+ unf_rport->nport_id = UNF_FC_FID_FCTRL;
+ ctiu_prem = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->rff_id_rsp.ctiu_pream;
+ cmnd_rsp_size = ctiu_prem->cmnd_rsp_size;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]LOGIN: Port(0x%x_0x%x) RFF_ID rsp is (0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ (cmnd_rsp_size & UNF_CT_IU_RSP_MASK));
+
+	/* RSP type check: some SW does not support RFF_ID; go to next stage anyway */
+ if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_ACCEPT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x) receive RFF ACC(0x%x) in state(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ (cmnd_rsp_size & UNF_CT_IU_RSP_MASK), unf_lport->states);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) receive RFF RJT(0x%x) in state(0x%x) with RJT reason code(0x%x) explanation(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ (cmnd_rsp_size & UNF_CT_IU_RSP_MASK), unf_lport->states,
+ (ctiu_prem->frag_reason_exp_vend) & UNF_CT_IU_REASON_MASK,
+ (ctiu_prem->frag_reason_exp_vend) & UNF_CT_IU_EXPLAN_MASK);
+ }
+
+ /* L_Port state check */
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ if (unf_lport->states != UNF_LPORT_ST_RFF_ID_WAIT) {
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) receive RFF reply in state(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, unf_lport->states);
+
+ return;
+ }
+ /* LPort: RFF_ID_WAIT --> SCR_WAIT */
+ unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_REMOTE_ACC);
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ ret = unf_send_scr(unf_lport, unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send SCR failed",
+ unf_lport->port_id, unf_lport->nport_id);
+ unf_lport_error_recovery(unf_lport);
+ }
+}
+
+static void unf_rft_id_ob_callback(struct unf_xchg *xchg)
+{
+ struct unf_lport *lport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ lport = xchg->lport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+ FC_CHECK_RETURN_VOID(lport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send RFT_ID failed",
+ lport->port_id, lport->nport_id);
+ unf_lport_error_recovery(lport);
+}
+
+static void unf_rft_id_callback(void *lport, void *rport, void *xchg)
+{
+ /* RFT_ID --->>> RFF_ID */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_ctiu_prem *ctiu_prem = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 cmnd_rsp_size = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_rport = (struct unf_rport *)rport;
+ unf_xchg = (struct unf_xchg *)xchg;
+
+ if (!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) SFS entry is NULL with state(0x%x)",
+ unf_lport->port_id, unf_lport->states);
+ return;
+ }
+
+ ctiu_prem = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr
+ ->rft_id_rsp.ctiu_pream;
+ cmnd_rsp_size = (ctiu_prem->cmnd_rsp_size);
+
+	FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+		     "[info]LOGIN: Port(0x%x_0x%x) RFT_ID response is (0x%x)",
+		     unf_lport->port_id, unf_lport->nport_id,
+		     (cmnd_rsp_size & UNF_CT_IU_RSP_MASK));
+
+ if (UNF_CT_IU_ACCEPT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
+ /* Case for RFT_ID ACC: send RFF_ID */
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ if (unf_lport->states != UNF_LPORT_ST_RFT_ID_WAIT) {
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) receive RFT_ID ACC in state(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_lport->states);
+
+ return;
+ }
+
+ /* LPort: RFT_ID_WAIT --> RFF_ID_WAIT */
+ unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_REMOTE_ACC);
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ /* Start to send RFF_ID GS command */
+ ret = unf_send_rff_id(unf_lport, unf_rport, UNF_FC4_FCP_TYPE);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send RFF_ID failed",
+ unf_lport->port_id, unf_lport->nport_id);
+ unf_lport_error_recovery(unf_lport);
+ }
+ } else {
+ /* Case for RFT_ID RJT: do recovery */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) receive RFT_ID RJT with reason_code(0x%x) explanation(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ (ctiu_prem->frag_reason_exp_vend) & UNF_CT_IU_REASON_MASK,
+ (ctiu_prem->frag_reason_exp_vend) & UNF_CT_IU_EXPLAN_MASK);
+
+ /* Do L_Port recovery */
+ unf_lport_error_recovery(unf_lport);
+ }
+}
+
+static void unf_scr_ob_callback(struct unf_xchg *xchg)
+{
+	/* Callback function for exception: do L_Port error recovery */
+ struct unf_lport *lport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ lport = xchg->lport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send SCR failed and do port recovery",
+ lport->port_id);
+
+ unf_lport_error_recovery(lport);
+}
+
+static void unf_scr_callback(void *lport, void *rport, void *xchg)
+{
+ /* Callback function for SCR response: Send GID_PT with INI mode */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_els_acc *els_acc = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ ulong port_flag = 0;
+ ulong disc_flag = 0;
+ u32 cmnd = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_xchg = (struct unf_xchg *)xchg;
+ disc = &unf_lport->disc;
+
+ if (!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr)
+ return;
+
+ els_acc = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->els_acc;
+ if (unf_xchg->byte_orders & UNF_BIT_2)
+ cmnd = be32_to_cpu(els_acc->cmnd);
+ else
+ cmnd = (els_acc->cmnd);
+
+ if ((cmnd & UNF_ELS_CMND_HIGH_MASK) == UNF_ELS_CMND_ACC) {
+ spin_lock_irqsave(&unf_lport->lport_state_lock, port_flag);
+ if (unf_lport->states != UNF_LPORT_ST_SCR_WAIT) {
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock,
+ port_flag);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) receive SCR ACC with error state(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_lport->states);
+ return;
+ }
+
+ /* LPort: SCR_WAIT --> READY */
+ unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_REMOTE_ACC);
+ if (unf_lport->states == UNF_LPORT_ST_READY) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x) enter READY state when received SCR response",
+ unf_lport->port_id, unf_lport->nport_id);
+ }
+
+ /* Start to Discovery with INI mode: GID_PT */
+ if ((unf_lport->options & UNF_PORT_MODE_INI) ==
+ UNF_PORT_MODE_INI) {
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock,
+ port_flag);
+
+ if (unf_lport->disc.disc_temp.unf_disc_start) {
+ spin_lock_irqsave(&disc->rport_busy_pool_lock,
+ disc_flag);
+ unf_lport->disc.disc_option = UNF_INIT_DISC;
+ disc->last_disc_jiff = jiffies;
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
+
+ ret = unf_lport->disc.disc_temp.unf_disc_start(unf_lport);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]LOGIN: Port(0x%x) DISC %s with INI mode",
+ unf_lport->port_id,
+ (ret != RETURN_OK) ? "failed" : "succeed");
+ }
+ return;
+ }
+
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, port_flag);
+ /* NOTE: set state with UNF_DISC_ST_END used for
+ * RSCN process
+ */
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_flag);
+ unf_lport->disc.states = UNF_DISC_ST_END;
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) is TGT mode, no need to discovery",
+ unf_lport->port_id);
+
+ return;
+ }
+ unf_lport_error_recovery(unf_lport);
+}
+
+void unf_check_rport_need_delay_plogi(struct unf_lport *lport,
+ struct unf_rport *rport, u32 port_feature)
+{
+ /*
+ * Called by:
+ * 1. Private loop
+ * 2. RCVD GFF_ID ACC
+ */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = rport;
+ ulong flag = 0;
+ u32 nport_id = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ nport_id = unf_rport->nport_id;
+
+ /*
+ * Send GFF_ID means L_Port has INI attribute
+	 *
+ * When to send PLOGI:
+ * 1. R_Port has TGT mode (COM or TGT), send PLOGI immediately
+ * 2. R_Port only with INI, send LOGO immediately
+ * 3. R_Port with unknown attribute, delay to send PLOGI
+ */
+ if ((UNF_PORT_MODE_TGT & port_feature) ||
+ (UNF_LPORT_ENHANCED_FEATURE_ENHANCED_GFF &
+ unf_lport->enhanced_features)) {
+ /* R_Port has TGT mode: send PLOGI immediately */
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY, nport_id);
+ FC_CHECK_RETURN_VOID(unf_rport);
+
+ /* Update R_Port state: PLOGI_WAIT */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = nport_id;
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Start to send PLOGI */
+ ret = unf_send_plogi(unf_lport, unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send PLOGI to RPort(0x%x) failed",
+ unf_lport->port_id, unf_lport->nport_id,
+ nport_id);
+
+ unf_rport_error_recovery(unf_rport);
+ }
+ } else if (port_feature == UNF_PORT_MODE_INI) {
+ /* R_Port only with INI mode: can't send PLOGI
+ * --->>> LOGO/nothing
+ */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ if (unf_rport->rp_state == UNF_RPORT_ST_INIT) {
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send LOGO to RPort(0x%x) which only with INI mode",
+ unf_lport->port_id, unf_lport->nport_id, nport_id);
+
+ /* Enter Closing state */
+ unf_rport_enter_logo(unf_lport, unf_rport);
+ } else {
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+ }
+ } else {
+ /* Unknown R_Port attribute: Delay to send PLOGI */
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY, nport_id);
+ FC_CHECK_RETURN_VOID(unf_rport);
+
+ /* Update R_Port state: PLOGI_WAIT */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = nport_id;
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ unf_rport_delay_login(unf_rport);
+ }
+}
diff --git a/drivers/scsi/spfc/common/unf_gs.h b/drivers/scsi/spfc/common/unf_gs.h
new file mode 100644
index 000000000000..d9856133b3cd
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_gs.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_GS_H
+#define UNF_GS_H
+
+#include "unf_type.h"
+#include "unf_lport.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif /* __cplusplus */
+
+u32 unf_send_scr(struct unf_lport *lport,
+ struct unf_rport *rport);
+u32 unf_send_ctpass_thru(struct unf_lport *lport,
+ void *buffer, u32 bufflen);
+
+u32 unf_send_gid_ft(struct unf_lport *lport,
+ struct unf_rport *rport);
+u32 unf_send_gid_pt(struct unf_lport *lport,
+ struct unf_rport *rport);
+u32 unf_send_gpn_id(struct unf_lport *lport,
+ struct unf_rport *sns_port, u32 nport_id);
+u32 unf_send_gnn_id(struct unf_lport *lport,
+ struct unf_rport *sns_port, u32 nport_id);
+u32 unf_send_gff_id(struct unf_lport *lport,
+ struct unf_rport *sns_port, u32 nport_id);
+
+u32 unf_send_rff_id(struct unf_lport *lport,
+ struct unf_rport *rport, u32 fc4_type);
+u32 unf_send_rft_id(struct unf_lport *lport,
+ struct unf_rport *rport);
+void unf_rcv_gnn_id_rsp_unknown(struct unf_lport *lport,
+ struct unf_rport *sns_port, u32 nport_id);
+void unf_rcv_gpn_id_rsp_unknown(struct unf_lport *lport, u32 nport_id);
+void unf_rcv_gff_id_rsp_unknown(struct unf_lport *lport, u32 nport_id);
+void unf_check_rport_need_delay_plogi(struct unf_lport *lport,
+ struct unf_rport *rport, u32 port_feature);
+
+struct send_com_trans_in {
+ unsigned char port_wwn[8];
+ u32 req_buffer_count;
+ unsigned char req_buffer[ARRAY_INDEX_1];
+};
+
+struct send_com_trans_out {
+ u32 hba_status;
+ u32 total_resp_buffer_cnt;
+ u32 actual_resp_buffer_cnt;
+ unsigned char resp_buffer[ARRAY_INDEX_1];
+};
+
+#ifdef __cplusplus
+}
+#endif /* __cplusplus */
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_init.c b/drivers/scsi/spfc/common/unf_init.c
new file mode 100644
index 000000000000..7e6f98d16977
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_init.c
@@ -0,0 +1,353 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_type.h"
+#include "unf_log.h"
+#include "unf_scsi_common.h"
+#include "unf_event.h"
+#include "unf_exchg.h"
+#include "unf_portman.h"
+#include "unf_rport.h"
+#include "unf_service.h"
+#include "unf_io.h"
+#include "unf_io_abnormal.h"
+
+#define UNF_PID 12
+#define MY_PID UNF_PID
+
+#define RPORT_FEATURE_POOL_SIZE 4096
+struct task_struct *event_task_thread;
+struct workqueue_struct *unf_wq;
+
+atomic_t fc_mem_ref;
+
+struct unf_global_card_thread card_thread_mgr;
+u32 unf_dgb_level = UNF_MAJOR;
+u32 log_print_level = UNF_INFO;
+u32 log_limited_times = UNF_LOGIN_ATT_PRINT_TIMES;
+
+static struct unf_esgl_page *unf_get_one_free_esgl_page
+ (void *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(pkg, NULL);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_xchg = (struct unf_xchg *)pkg->xchg_contex;
+
+ return unf_get_and_add_one_free_esgl_page(unf_lport, unf_xchg);
+}
+
+static int unf_get_cfg_parms(char *section_name, struct unf_cfg_item *cfg_itm,
+ u32 *cfg_value, u32 itemnum)
+{
+ /* Maximum length of a configuration item value, including the end
+ * character
+ */
+#define UNF_MAX_ITEM_VALUE_LEN (256)
+
+ u32 *unf_cfg_value = NULL;
+ struct unf_cfg_item *unf_cfg_itm = NULL;
+ u32 i = 0;
+
+ unf_cfg_itm = cfg_itm;
+ unf_cfg_value = cfg_value;
+
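+	/* Walk the item table until the "End" sentinel, copying default values; the fw_path string slot is skipped */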
+ for (i = 0; i < itemnum; i++) {
+ if (!unf_cfg_itm || !unf_cfg_value) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_ERR,
+ "[err]Config name or value is NULL");
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (strcmp("End", unf_cfg_itm->puc_name) == 0x0)
+ break;
+
+ if (strcmp("fw_path", unf_cfg_itm->puc_name) == 0x0) {
+ unf_cfg_itm++;
+ unf_cfg_value += UNF_MAX_ITEM_VALUE_LEN / sizeof(u32);
+ continue;
+ }
+
+ *unf_cfg_value = unf_cfg_itm->default_value;
+ unf_cfg_itm++;
+ unf_cfg_value++;
+ }
+
+ return RETURN_OK;
+}
+
+struct unf_cm_handle_op unf_cm_handle_ops = {
+ .unf_alloc_local_port = unf_lport_create_and_init,
+ .unf_release_local_port = unf_release_local_port,
+ .unf_receive_ls_gs_pkg = unf_receive_ls_gs_pkg,
+ .unf_receive_bls_pkg = unf_receive_bls_pkg,
+ .unf_send_els_done = unf_send_els_done,
+ .unf_receive_ini_response = unf_ini_scsi_completed,
+ .unf_get_cfg_parms = unf_get_cfg_parms,
+ .unf_receive_marker_status = unf_recv_tmf_marker_status,
+ .unf_receive_abts_marker_status = unf_recv_abts_marker_status,
+
+ .unf_process_fcp_cmnd = NULL,
+ .unf_tgt_cmnd_xfer_or_rsp_echo = NULL,
+ .unf_cm_get_sgl_entry = unf_ini_get_sgl_entry,
+ .unf_cm_get_dif_sgl_entry = unf_ini_get_dif_sgl_entry,
+ .unf_get_one_free_esgl_page = unf_get_one_free_esgl_page,
+ .unf_fc_port_event = unf_fc_port_link_event,
+};
+
+u32 unf_get_cm_handle_ops(struct unf_cm_handle_op *cm_handle)
+{
+ FC_CHECK_RETURN_VALUE(cm_handle, UNF_RETURN_ERROR);
+
+ memcpy(cm_handle, &unf_cm_handle_ops, sizeof(struct unf_cm_handle_op));
+
+ return RETURN_OK;
+}
+
+static void unf_deinit_cm_handle_ops(void)
+{
+ memset(&unf_cm_handle_ops, 0, sizeof(struct unf_cm_handle_op));
+}
+
+int unf_event_process(void *worker_ptr)
+{
+ struct list_head *event_list = NULL;
+ struct unf_cm_event_report *event_node = NULL;
+ struct completion *create_done = (struct completion *)worker_ptr;
+ ulong flags = 0;
+
+ set_user_nice(current, UNF_OS_THRD_PRI_LOW);
+ recalc_sigpending();
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[event]Enter event thread");
+
+ if (create_done)
+ complete(create_done);
+
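+	/* Event loop: sleep briefly when the list is empty, otherwise dequeue and handle one event */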
+ do {
+ spin_lock_irqsave(&fc_event_list.fc_event_list_lock, flags);
+ if (list_empty(&fc_event_list.list_head)) {
+ spin_unlock_irqrestore(&fc_event_list.fc_event_list_lock, flags);
+
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout((long)msecs_to_jiffies(UNF_S_TO_MS));
+ } else {
+ event_list = UNF_OS_LIST_NEXT(&fc_event_list.list_head);
+ list_del_init(event_list);
+ fc_event_list.list_num--;
+ event_node = list_entry(event_list,
+ struct unf_cm_event_report,
+ list_entry);
+ spin_unlock_irqrestore(&fc_event_list.fc_event_list_lock, flags);
+
+ /* Process event node */
+ unf_handle_event(event_node);
+ }
+ } while (!kthread_should_stop());
+
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
+ "[event]Event thread exit");
+
+ return RETURN_OK;
+}
+
+static int unf_creat_event_center(void)
+{
+ struct completion create_done;
+
+ init_completion(&create_done);
+ INIT_LIST_HEAD(&fc_event_list.list_head);
+ fc_event_list.list_num = 0;
+ spin_lock_init(&fc_event_list.fc_event_list_lock);
+
+ event_task_thread = kthread_run(unf_event_process, &create_done, "spfc_event");
+ if (IS_ERR(event_task_thread)) {
+ complete_and_exit(&create_done, 0);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Create event thread failed(0x%p)",
+ event_task_thread);
+
+ return UNF_RETURN_ERROR;
+ }
+ wait_for_completion(&create_done);
+ return RETURN_OK;
+}
+
+static void unf_cm_event_thread_exit(void)
+{
+ if (event_task_thread)
+ kthread_stop(event_task_thread);
+}
+
+static void unf_init_card_mgr_list(void)
+{
+ /* So far, do not care */
+ INIT_LIST_HEAD(&card_thread_mgr.card_list_head);
+
+ spin_lock_init(&card_thread_mgr.global_card_list_lock);
+
+ card_thread_mgr.card_num = 0;
+}
+
+int unf_port_feature_pool_init(void)
+{
+ u32 index = 0;
+ u32 rport_feature_pool_size = 0;
+ struct unf_rport_feature_recard *rport_feature = NULL;
+ unsigned long flags = 0;
+
+ rport_feature_pool_size = sizeof(struct unf_rport_feature_pool);
+ port_feature_pool = vmalloc(rport_feature_pool_size);
+ if (!port_feature_pool) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]cannot allocate rport feature pool");
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(port_feature_pool, 0, rport_feature_pool_size);
+ spin_lock_init(&port_feature_pool->port_fea_pool_lock);
+ INIT_LIST_HEAD(&port_feature_pool->list_busy_head);
+ INIT_LIST_HEAD(&port_feature_pool->list_free_head);
+
+ port_feature_pool->port_feature_pool_addr =
+ vmalloc((size_t)(RPORT_FEATURE_POOL_SIZE * sizeof(struct unf_rport_feature_recard)));
+ if (!port_feature_pool->port_feature_pool_addr) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]cannot allocate rport feature pool address");
+
+ vfree(port_feature_pool);
+ port_feature_pool = NULL;
+
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(port_feature_pool->port_feature_pool_addr, 0,
+ RPORT_FEATURE_POOL_SIZE * sizeof(struct unf_rport_feature_recard));
+ rport_feature = (struct unf_rport_feature_recard *)
+ port_feature_pool->port_feature_pool_addr;
+
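+	/* Chain every pre-allocated feature record onto the free list */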
+ spin_lock_irqsave(&port_feature_pool->port_fea_pool_lock, flags);
+ for (index = 0; index < RPORT_FEATURE_POOL_SIZE; index++) {
+ list_add_tail(&rport_feature->entry_feature, &port_feature_pool->list_free_head);
+ rport_feature++;
+ }
+ spin_unlock_irqrestore(&port_feature_pool->port_fea_pool_lock, flags);
+
+ return RETURN_OK;
+}
+
+void unf_free_port_feature_pool(void)
+{
+ if (port_feature_pool->port_feature_pool_addr) {
+ vfree(port_feature_pool->port_feature_pool_addr);
+ port_feature_pool->port_feature_pool_addr = NULL;
+ }
+
+ vfree(port_feature_pool);
+ port_feature_pool = NULL;
+}
+
+int unf_common_init(void)
+{
+ int ret = RETURN_OK;
+
+ unf_dgb_level = UNF_MAJOR;
+ log_print_level = UNF_KEVENT;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "UNF Driver Version:%s.", SPFC_DRV_VERSION);
+
+ atomic_set(&fc_mem_ref, 0);
+ ret = unf_port_feature_pool_init();
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port Feature Pool init failed");
+ return ret;
+ }
+
+ ret = (int)unf_register_ini_transport();
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]INI interface init failed");
+ goto REG_INITRANSPORT_FAIL;
+ }
+
+ unf_port_mgmt_init();
+ unf_init_card_mgr_list();
+ ret = (int)unf_init_global_event_msg();
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Create global event center failed");
+ goto CREAT_GLBEVENTMSG_FAIL;
+ }
+
+ ret = (int)unf_creat_event_center();
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Create event center (thread) failed");
+ goto CREAT_EVENTCENTER_FAIL;
+ }
+
+ unf_wq = create_workqueue("unf_wq");
+ if (!unf_wq) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Create work queue failed");
+ goto CREAT_WORKQUEUE_FAIL;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+		     "[info]Init common layer succeeded");
+ return ret;
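+
+	/* Error unwind: undo the initialization steps in reverse order */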
+CREAT_WORKQUEUE_FAIL:
+ unf_cm_event_thread_exit();
+CREAT_EVENTCENTER_FAIL:
+ unf_destroy_global_event_msg();
+CREAT_GLBEVENTMSG_FAIL:
+ unf_unregister_ini_transport();
+REG_INITRANSPORT_FAIL:
+ unf_free_port_feature_pool();
+ return UNF_RETURN_ERROR;
+}
+
+static void unf_destroy_dirty_port(void)
+{
+	u32 dirty_port_num = 0;
+
+	unf_show_dirty_port(false, &dirty_port_num);
+
+	FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+		     "[info]Sys has %u dirty L_Port(s)", dirty_port_num);
+}
+
+void unf_common_exit(void)
+{
+ unf_free_port_feature_pool();
+
+ unf_destroy_dirty_port();
+
+ flush_workqueue(unf_wq);
+ destroy_workqueue(unf_wq);
+ unf_wq = NULL;
+
+ unf_cm_event_thread_exit();
+
+ unf_destroy_global_event_msg();
+
+ unf_deinit_cm_handle_ops();
+
+ unf_port_mgmt_deinit();
+
+ unf_unregister_ini_transport();
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[info]SPFC module remove succeed, memory reference count is %d",
+ atomic_read(&fc_mem_ref));
+}
diff --git a/drivers/scsi/spfc/common/unf_io.c b/drivers/scsi/spfc/common/unf_io.c
new file mode 100644
index 000000000000..b1255ecba88c
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_io.c
@@ -0,0 +1,1220 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_io.h"
+#include "unf_log.h"
+#include "unf_portman.h"
+#include "unf_service.h"
+#include "unf_io_abnormal.h"
+
+u32 sector_size_flag;
+
+#define UNF_GET_FCP_CTL(pkg) ((((pkg)->status) >> UNF_SHIFT_8) & 0xFF)
+#define UNF_GET_SCSI_STATUS(pkg) (((pkg)->status) & 0xFF)
+
+static u32 unf_io_success_handler(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg, u32 up_status);
+static u32 unf_ini_error_default_handler(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg,
+ u32 up_status);
+static u32 unf_io_underflow_handler(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg, u32 up_status);
+static u32 unf_ini_dif_error_handler(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg, u32 up_status);
+
+struct unf_ini_error_handler_s {
+ u32 ini_error_code;
+ u32 (*unf_ini_error_handler)(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg, u32 up_status);
+};
+
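+/* Dispatch table mapping low-level I/O completion codes to INI-side handlers */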
+struct unf_ini_error_handler_s ini_error_handler_table[] = {
+ {UNF_IO_SUCCESS, unf_io_success_handler},
+ {UNF_IO_ABORTED, unf_ini_error_default_handler},
+ {UNF_IO_FAILED, unf_ini_error_default_handler},
+ {UNF_IO_ABORT_ABTS, unf_ini_error_default_handler},
+ {UNF_IO_ABORT_LOGIN, unf_ini_error_default_handler},
+ {UNF_IO_ABORT_REET, unf_ini_error_default_handler},
+ {UNF_IO_ABORT_FAILED, unf_ini_error_default_handler},
+ {UNF_IO_OUTOF_ORDER, unf_ini_error_default_handler},
+ {UNF_IO_FTO, unf_ini_error_default_handler},
+ {UNF_IO_LINK_FAILURE, unf_ini_error_default_handler},
+ {UNF_IO_OVER_FLOW, unf_ini_error_default_handler},
+ {UNF_IO_RSP_OVER, unf_ini_error_default_handler},
+ {UNF_IO_LOST_FRAME, unf_ini_error_default_handler},
+ {UNF_IO_UNDER_FLOW, unf_io_underflow_handler},
+ {UNF_IO_HOST_PROG_ERROR, unf_ini_error_default_handler},
+ {UNF_IO_SEST_PROG_ERROR, unf_ini_error_default_handler},
+ {UNF_IO_INVALID_ENTRY, unf_ini_error_default_handler},
+ {UNF_IO_ABORT_SEQ_NOT, unf_ini_error_default_handler},
+ {UNF_IO_REJECT, unf_ini_error_default_handler},
+ {UNF_IO_EDC_IN_ERROR, unf_ini_error_default_handler},
+ {UNF_IO_EDC_OUT_ERROR, unf_ini_error_default_handler},
+ {UNF_IO_UNINIT_KEK_ERR, unf_ini_error_default_handler},
+ {UNF_IO_DEK_OUTOF_RANGE, unf_ini_error_default_handler},
+ {UNF_IO_KEY_UNWRAP_ERR, unf_ini_error_default_handler},
+ {UNF_IO_KEY_TAG_ERR, unf_ini_error_default_handler},
+ {UNF_IO_KEY_ECC_ERR, unf_ini_error_default_handler},
+ {UNF_IO_BLOCK_SIZE_ERROR, unf_ini_error_default_handler},
+ {UNF_IO_ILLEGAL_CIPHER_MODE, unf_ini_error_default_handler},
+ {UNF_IO_CLEAN_UP, unf_ini_error_default_handler},
+ {UNF_IO_ABORTED_BY_TARGET, unf_ini_error_default_handler},
+ {UNF_IO_TRANSPORT_ERROR, unf_ini_error_default_handler},
+ {UNF_IO_LINK_FLASH, unf_ini_error_default_handler},
+ {UNF_IO_TIMEOUT, unf_ini_error_default_handler},
+ {UNF_IO_DMA_ERROR, unf_ini_error_default_handler},
+ {UNF_IO_DIF_ERROR, unf_ini_dif_error_handler},
+ {UNF_IO_INCOMPLETE, unf_ini_error_default_handler},
+ {UNF_IO_DIF_REF_ERROR, unf_ini_dif_error_handler},
+ {UNF_IO_DIF_GEN_ERROR, unf_ini_dif_error_handler},
+ {UNF_IO_NO_XCHG, unf_ini_error_default_handler}
+ };
+
+void unf_done_ini_xchg(struct unf_xchg *xchg)
+{
+ /*
+ * About I/O Done
+ * 1. normal case
+ * 2. Send ABTS & RCVD RSP
+ * 3. Send ABTS & timer timeout
+ */
+ struct unf_scsi_cmnd scsi_cmd = {0};
+ ulong flags = 0;
+ struct unf_scsi_cmd_info *scsi_cmnd_info = NULL;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ u32 scsi_id = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ if (unlikely(!xchg->scsi_cmnd_info.scsi_cmnd))
+ return;
+
+ /* 1. Free RX_ID for INI SIRT: Do not care */
+
+ /*
+ * 2. set & check exchange state
+	 *
+ * for Set UP_ABORT Tag:
+ * 1) L_Port destroy
+ * 2) LUN reset
+ * 3) Target/Session reset
+ * 4) SCSI send Abort(ABTS)
+ */
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ xchg->io_state |= INI_IO_STATE_DONE;
+ if (unlikely(xchg->io_state &
+ (INI_IO_STATE_UPABORT | INI_IO_STATE_UPSEND_ERR | INI_IO_STATE_TMF_ABORT))) {
+ /*
+ * a. UPABORT: scsi have send ABTS
+ * --->>> do not call SCSI_Done, return directly
+		 * b. UPSEND_ERR: error happened during LLDD send of SCSI_CMD
+ * --->>> do not call SCSI_Done, scsi need retry
+ */
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_KEVENT,
+ "[event]Exchange(0x%p) Cmdsn:0x%lx upCmd:%p hottag(0x%x) with state(0x%x) has been aborted or send error",
+ xchg, (ulong)xchg->cmnd_sn, xchg->scsi_cmnd_info.scsi_cmnd,
+ xchg->hotpooltag, xchg->io_state);
+
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ return;
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ scsi_cmnd_info = &xchg->scsi_cmnd_info;
+
+ /*
+ * 3. Set:
+ * scsi_cmnd;
+ * cmnd_done_func;
+ * cmnd up_level_done;
+ * sense_buff_addr;
+ * resid_length;
+ * cmnd_result;
+ * dif_info
+	 *
+ * UNF_SCSI_CMND <<-- UNF_SCSI_CMND_INFO
+ */
+ UNF_SET_HOST_CMND((&scsi_cmd), scsi_cmnd_info->scsi_cmnd);
+ UNF_SER_CMND_DONE_FUNC((&scsi_cmd), scsi_cmnd_info->done);
+ UNF_SET_UP_LEVEL_CMND_DONE_FUNC(&scsi_cmd, scsi_cmnd_info->uplevel_done);
+ scsi_cmd.drv_private = xchg->lport;
+ if (unlikely((UNF_SCSI_STATUS(xchg->scsi_cmnd_info.result)) & FCP_SNS_LEN_VALID_MASK)) {
+ unf_save_sense_data(scsi_cmd.upper_cmnd,
+ (char *)xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu,
+ (int)xchg->fcp_sfs_union.fcp_rsp_entry.fcp_sense_len);
+ }
+ UNF_SET_RESID((&scsi_cmd), (u32)xchg->resid_len);
+ UNF_SET_CMND_RESULT((&scsi_cmd), scsi_cmnd_info->result);
+ memcpy(&scsi_cmd.dif_info, &xchg->dif_info, sizeof(struct dif_info));
+
+ scsi_id = scsi_cmnd_info->scsi_id;
+
+ UNF_DONE_SCSI_CMND((&scsi_cmd));
+
+ /* 4. Update IO result CNT */
+ if (likely(xchg->lport)) {
+ scsi_image_table = &xchg->lport->rport_scsi_table;
+ UNF_IO_RESULT_CNT(scsi_image_table, scsi_id,
+ (scsi_cmnd_info->result >> UNF_SHIFT_16));
+ }
+}
+
+static inline u32 unf_ini_get_sgl_entry_buf(ini_get_sgl_entry_buf ini_get_sgl,
+ void *cmnd, void *driver_sgl,
+ void **upper_sgl, u32 *req_index,
+ u32 *index, char **buf,
+ u32 *buf_len)
+{
+ if (unlikely(!ini_get_sgl)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+			     "Command(0x%p) get SGL entry function is NULL.", cmnd);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return ini_get_sgl(cmnd, driver_sgl, upper_sgl, req_index, index, buf, buf_len);
+}
+
+u32 unf_ini_get_sgl_entry(void *pkg, char **buf, u32 *buf_len)
+{
+ struct unf_frame_pkg *unf_pkg = (struct unf_frame_pkg *)pkg;
+ struct unf_xchg *unf_xchg = NULL;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(buf, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(buf_len, UNF_RETURN_ERROR);
+
+ unf_xchg = (struct unf_xchg *)unf_pkg->xchg_contex;
+ FC_CHECK_RETURN_VALUE(unf_xchg, UNF_RETURN_ERROR);
+
+ /* Get SGL Entry buffer for INI Mode */
+ ret = unf_ini_get_sgl_entry_buf(unf_xchg->scsi_cmnd_info.unf_get_sgl_entry_buf,
+ unf_xchg->scsi_cmnd_info.scsi_cmnd, NULL,
+ &unf_xchg->req_sgl_info.sgl,
+ &unf_xchg->scsi_cmnd_info.port_id,
+ &((unf_xchg->req_sgl_info).entry_index), buf, buf_len);
+
+ return ret;
+}
+
+u32 unf_ini_get_dif_sgl_entry(void *pkg, char **buf, u32 *buf_len)
+{
+ struct unf_frame_pkg *unf_pkg = (struct unf_frame_pkg *)pkg;
+ struct unf_xchg *unf_xchg = NULL;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(buf, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(buf_len, UNF_RETURN_ERROR);
+
+ unf_xchg = (struct unf_xchg *)unf_pkg->xchg_contex;
+ FC_CHECK_RETURN_VALUE(unf_xchg, UNF_RETURN_ERROR);
+
+ /* Get SGL Entry buffer for INI Mode */
+ ret = unf_ini_get_sgl_entry_buf(unf_xchg->scsi_cmnd_info.unf_get_sgl_entry_buf,
+ unf_xchg->scsi_cmnd_info.scsi_cmnd, NULL,
+ &unf_xchg->dif_sgl_info.sgl,
+ &unf_xchg->scsi_cmnd_info.port_id,
+ &((unf_xchg->dif_sgl_info).entry_index), buf, buf_len);
+ return ret;
+}
+
+u32 unf_get_up_level_cmnd_errcode(struct unf_ini_error_code *err_table,
+ u32 err_table_count, u32 drv_err_code)
+{
+ u32 loop = 0;
+
+	/* on failure return UNF_RETURN_ERROR, to be adjusted by the upper level */
+ if (unlikely(!err_table)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Error Code Table is Null, Error Code(0x%x).", drv_err_code);
+
+ return (u32)UNF_SCSI_HOST(DID_ERROR);
+ }
+
+ for (loop = 0; loop < err_table_count; loop++) {
+ if (err_table[loop].drv_errcode == drv_err_code)
+ return err_table[loop].ap_errcode;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+		     "[warn]Unsupported AP error code for driver error code(0x%x).", drv_err_code);
+
+ return (u32)UNF_SCSI_HOST(DID_ERROR);
+}
+
+static u32 unf_ini_status_handle(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg)
+{
+ u32 loop = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 up_status = 0;
+
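+	/* Look up the completion code in the handler table; unmatched codes fall back to the default handler */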
+ for (loop = 0; loop < sizeof(ini_error_handler_table) /
+ sizeof(struct unf_ini_error_handler_s); loop++) {
+ if (UNF_GET_LL_ERR(pkg) == ini_error_handler_table[loop].ini_error_code) {
+ up_status =
+ unf_get_up_level_cmnd_errcode(xchg->scsi_cmnd_info.err_code_table,
+ xchg->scsi_cmnd_info.err_code_table_cout,
+ UNF_GET_LL_ERR(pkg));
+
+ if (ini_error_handler_table[loop].unf_ini_error_handler) {
+ ret = ini_error_handler_table[loop]
+ .unf_ini_error_handler(xchg, pkg, up_status);
+ } else {
+ /* set exchange->result ---to--->>>scsi_result */
+ ret = unf_ini_error_default_handler(xchg, pkg, up_status);
+ }
+
+ return ret;
+ }
+ }
+
+ up_status = unf_get_up_level_cmnd_errcode(xchg->scsi_cmnd_info.err_code_table,
+ xchg->scsi_cmnd_info.err_code_table_cout,
+ UNF_IO_SOFT_ERR);
+
+ ret = unf_ini_error_default_handler(xchg, pkg, up_status);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Can not find com status, SID(0x%x) exchange(0x%p) com_status(0x%x) DID(0x%x) hot_pool_tag(0x%x)",
+ xchg->sid, xchg, pkg->status, xchg->did, xchg->hotpooltag);
+
+ return ret;
+}
+
+static void unf_analysis_response_info(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg,
+ u32 *up_status)
+{
+ u8 *resp_buf = NULL;
+
+	/* LL_Driver uses Little Endian; copy RSP_INFO to COM_Driver */
+ if (unlikely(pkg->unf_rsp_pload_bl.length > UNF_RESPONE_DATA_LEN)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Receive FCP response resp buffer len is invalid 0x%x",
+ pkg->unf_rsp_pload_bl.length);
+ return;
+ }
+
+ resp_buf = (u8 *)pkg->unf_rsp_pload_bl.buffer_ptr;
+ if (resp_buf) {
+		/* If the chip uses Little Endian, convert to Big Endian */
+ if ((pkg->byte_orders & UNF_BIT_3) == 0)
+ unf_cpu_to_big_end(resp_buf, pkg->unf_rsp_pload_bl.length);
+
+		/* Chip DMA data is Big Endian */
+ if (resp_buf[ARRAY_INDEX_3] != UNF_FCP_TM_RSP_COMPLETE) {
+ *up_status = UNF_SCSI_HOST(DID_BUS_BUSY);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%p) DID bus busy, scsi_status(0x%x)",
+ xchg->lport, UNF_GET_SCSI_STATUS(pkg));
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Receive FCP response, resp buffer is NULL resp buffer len is 0x%x",
+ pkg->unf_rsp_pload_bl.length);
+ }
+}
+
+static void unf_analysis_sense_info(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg, u32 *up_status)
+{
+ u32 length = 0;
+
+ /* 4 bytes Align */
+ length = MIN(SCSI_SENSE_DATA_LEN, pkg->unf_sense_pload_bl.length);
+
+ if (unlikely(pkg->unf_sense_pload_bl.length > SCSI_SENSE_DATA_LEN)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[info]Receive FCP response resp buffer len is 0x%x",
+ pkg->unf_sense_pload_bl.length);
+ }
+	/*
+	 * If there is sense info, copy it directly;
+	 * otherwise the chip has already DMAed the data to the sense buffer
+	 */
+
+ if (length != 0 && pkg->unf_rsp_pload_bl.buffer_ptr) {
+ /* has been dma to exchange buffer */
+ if (unlikely(pkg->unf_rsp_pload_bl.length > UNF_RESPONE_DATA_LEN)) {
+ *up_status = UNF_SCSI_HOST(DID_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Receive FCP response resp buffer len is invalid 0x%x",
+ pkg->unf_rsp_pload_bl.length);
+
+ return;
+ }
+
+ xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu = (u8 *)kmalloc(length, GFP_ATOMIC);
+ if (!xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Alloc FCP sense buffer failed");
+ return;
+ }
+
+ memcpy(xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu,
+ ((u8 *)(pkg->unf_rsp_pload_bl.buffer_ptr)) +
+ pkg->unf_rsp_pload_bl.length, length);
+
+ xchg->fcp_sfs_union.fcp_rsp_entry.fcp_sense_len = length;
+ } else {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Receive FCP response, sense buffer is NULL sense buffer len is 0x%x",
+ length);
+ }
+}
+
+static u32 unf_io_success_handler(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg, u32 up_status)
+{
+ u8 scsi_status = 0;
+ u8 control = 0;
+ u32 status = up_status;
+
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ control = UNF_GET_FCP_CTL(pkg);
+ scsi_status = UNF_GET_SCSI_STATUS(pkg);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]Port(0x%p), Exchange(0x%p) Completed, Control(0x%x), Scsi Status(0x%x)",
+ xchg->lport, xchg, control, scsi_status);
+
+ if (control & FCP_SNS_LEN_VALID_MASK) {
+ /* has sense info */
+ if (scsi_status == FCP_SCSI_STATUS_GOOD)
+ scsi_status = SCSI_CHECK_CONDITION;
+
+ unf_analysis_sense_info(xchg, pkg, &status);
+ } else {
+ /*
+ * When the FCP_RSP_LEN_VALID bit is set to one,
+ * the content of the SCSI STATUS CODE field is not reliable
+ * and shall be ignored by the application client.
+ */
+ if (control & FCP_RSP_LEN_VALID_MASK)
+ unf_analysis_response_info(xchg, pkg, &status);
+ }
+
+ xchg->scsi_cmnd_info.result = status | UNF_SCSI_STATUS(scsi_status);
+
+ return RETURN_OK;
+}
+
+static u32 unf_ini_error_default_handler(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg,
+ u32 up_status)
+{
+ /* set exchange->result ---to--->>> scsi_cmnd->result */
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_WARN,
+ "[warn]SID(0x%x) exchange(0x%p) com_status(0x%x) up_status(0x%x) DID(0x%x) hot_pool_tag(0x%x) response_len(0x%x)",
+ xchg->sid, xchg, pkg->status, up_status, xchg->did,
+ xchg->hotpooltag, pkg->residus_len);
+
+ xchg->scsi_cmnd_info.result =
+ up_status | UNF_SCSI_STATUS(UNF_GET_SCSI_STATUS(pkg));
+
+ return RETURN_OK;
+}
+
+static u32 unf_ini_dif_error_handler(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg, u32 up_status)
+{
+ u8 *sense_data = NULL;
+ u16 sense_code = 0;
+
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+	/*
+	 * According to the DIF scheme the driver sets CHECK CONDITION (0x2)
+	 * when a DIF error occurs, and returns values based on the
+	 * upper-layer verification result.
+	 * Check sequence: CRC, LBA, App;
+	 * if a CRC error is found, the subsequent checks are not performed.
+	 */
+ xchg->scsi_cmnd_info.result = UNF_SCSI_STATUS(SCSI_CHECK_CONDITION);
+
+ sense_code = (u16)pkg->status_sub_code;
+ sense_data = (u8 *)kmalloc(SCSI_SENSE_DATA_LEN, GFP_ATOMIC);
+ if (!sense_data) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Alloc FCP sense buffer failed");
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(sense_data, 0, SCSI_SENSE_DATA_LEN);
+ sense_data[ARRAY_INDEX_0] = SENSE_DATA_RESPONSE_CODE; /* response code:0x70 */
+ sense_data[ARRAY_INDEX_2] = ILLEGAL_REQUEST; /* sense key:0x05; */
+ sense_data[ARRAY_INDEX_7] = ADDITINONAL_SENSE_LEN; /* additional sense length:0x7 */
+ sense_data[ARRAY_INDEX_12] = (u8)(sense_code >> UNF_SHIFT_8);
+ sense_data[ARRAY_INDEX_13] = (u8)sense_code;
+
+ xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu = sense_data;
+ xchg->fcp_sfs_union.fcp_rsp_entry.fcp_sense_len = SCSI_SENSE_DATA_LEN;
+
+ /* valid sense data length snscode[13] */
+ return RETURN_OK;
+}
+
+static u32 unf_io_underflow_handler(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg, u32 up_status)
+{
+ /* under flow: residlen > 0 */
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ if (xchg->fcp_cmnd.cdb[ARRAY_INDEX_0] != SCSIOPC_REPORT_LUN &&
+ xchg->fcp_cmnd.cdb[ARRAY_INDEX_0] != SCSIOPC_INQUIRY) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+			     "[info]IO under flow: SID(0x%x) exchange(0x%p) com status(0x%x) up_status(0x%x) DID(0x%x) hot_pool_tag(0x%x) residual_len(0x%x)",
+ xchg->sid, xchg, pkg->status, up_status,
+ xchg->did, xchg->hotpooltag, pkg->residus_len);
+ }
+
+ xchg->resid_len = (int)pkg->residus_len;
+ (void)unf_io_success_handler(xchg, pkg, up_status);
+
+ return RETURN_OK;
+}
+
+void unf_complete_cmnd(struct unf_scsi_cmnd *scsi_cmnd, u32 result_size)
+{
+ /*
+ * Exception during process Que_CMND
+ * 1. L_Port == NULL;
+ * 2. L_Port == removing;
+ * 3. R_Port == NULL;
+ * 4. Xchg == NULL.
+ */
+ FC_CHECK_RETURN_VOID((UNF_GET_CMND_DONE_FUNC(scsi_cmnd)));
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]Command(0x%p), Result(0x%x).", scsi_cmnd, result_size);
+
+ UNF_SET_CMND_RESULT(scsi_cmnd, result_size);
+
+ UNF_DONE_SCSI_CMND(scsi_cmnd);
+}
+
+static inline void unf_bind_xchg_scsi_cmd(struct unf_xchg *xchg,
+ struct unf_scsi_cmnd *scsi_cmnd)
+{
+ struct unf_scsi_cmd_info *scsi_cmnd_info = NULL;
+
+ scsi_cmnd_info = &xchg->scsi_cmnd_info;
+
+ /* UNF_SCSI_CMND_INFO <<-- UNF_SCSI_CMND */
+ scsi_cmnd_info->err_code_table = UNF_GET_ERR_CODE_TABLE(scsi_cmnd);
+ scsi_cmnd_info->err_code_table_cout = UNF_GET_ERR_CODE_TABLE_COUNT(scsi_cmnd);
+ scsi_cmnd_info->done = UNF_GET_CMND_DONE_FUNC(scsi_cmnd);
+ scsi_cmnd_info->scsi_cmnd = UNF_GET_HOST_CMND(scsi_cmnd);
+ scsi_cmnd_info->sense_buf = (char *)UNF_GET_SENSE_BUF_ADDR(scsi_cmnd);
+ scsi_cmnd_info->uplevel_done = UNF_GET_UP_LEVEL_CMND_DONE(scsi_cmnd);
+ scsi_cmnd_info->unf_get_sgl_entry_buf = UNF_GET_SGL_ENTRY_BUF_FUNC(scsi_cmnd);
+ scsi_cmnd_info->sgl = UNF_GET_CMND_SGL(scsi_cmnd);
+ scsi_cmnd_info->time_out = scsi_cmnd->time_out;
+ scsi_cmnd_info->entry_cnt = scsi_cmnd->entry_count;
+ scsi_cmnd_info->port_id = (u32)scsi_cmnd->port_id;
+ scsi_cmnd_info->scsi_id = UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd);
+}
+
+u32 unf_ini_scsi_completed(void *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_fcp_cmnd *fcp_cmnd = NULL;
+ u32 control = 0;
+ u16 xchg_tag = 0x0ffff;
+ u32 ret = UNF_RETURN_ERROR;
+ ulong xchg_flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ unf_lport = (struct unf_lport *)lport;
+ xchg_tag = (u16)pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX];
+
+ /* 1. Find Exchange Context */
+ unf_xchg = unf_cm_lookup_xchg_by_tag(lport, (u16)xchg_tag);
+ if (unlikely(!unf_xchg)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) can not find exchange by tag(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, xchg_tag);
+
+ /* NOTE: return directly */
+ return UNF_RETURN_ERROR;
+ }
+
+ /* 2. Consistency check */
+ UNF_CHECK_ALLOCTIME_VALID(unf_lport, xchg_tag, unf_xchg,
+ pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME],
+ unf_xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME]);
+
+ /* 3. Increase ref_cnt for exchange protecting */
+ ret = unf_xchg_ref_inc(unf_xchg, INI_RESPONSE_DONE); /* hold */
+ FC_CHECK_RETURN_VALUE((ret == RETURN_OK), UNF_RETURN_ERROR);
+
+ fcp_cmnd = &unf_xchg->fcp_cmnd;
+ control = fcp_cmnd->control;
+ control = UNF_GET_TASK_MGMT_FLAGS(control);
+
+ /* 4. Cancel timer if necessary */
+ if (unf_xchg->scsi_cmnd_info.time_out != 0)
+ unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer(unf_xchg);
+
+ /* 5. process scsi TMF if necessary */
+ if (control != 0) {
+ unf_process_scsi_mgmt_result(pkg, unf_xchg);
+ unf_xchg_ref_dec(unf_xchg, INI_RESPONSE_DONE); /* cancel hold */
+
+ /* NOTE: return directly */
+ return RETURN_OK;
+ }
+
+ /* 6. Xchg Abort state check */
+ spin_lock_irqsave(&unf_xchg->xchg_state_lock, xchg_flag);
+ unf_xchg->oxid = UNF_GET_OXID(pkg);
+ unf_xchg->rxid = UNF_GET_RXID(pkg);
+ if (INI_IO_STATE_UPABORT & unf_xchg->io_state) {
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, xchg_flag);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
+ "[warn]Port(0x%x) find exchange(%p) state(0x%x) has been aborted",
+ unf_lport->port_id, unf_xchg, unf_xchg->io_state);
+
+ /* NOTE: release exchange during SCSI ABORT(ABTS) */
+ unf_xchg_ref_dec(unf_xchg, INI_RESPONSE_DONE); /* cancel hold */
+
+ return ret;
+ }
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, xchg_flag);
+
+ /*
+ * 7. INI SCSI CMND Status process
+ * set exchange->result ---to--->>> scsi_result
+ */
+ ret = unf_ini_status_handle(unf_xchg, pkg);
+
+	/* 8. release exchange if necessary */
+ unf_cm_free_xchg(unf_lport, unf_xchg);
+
+ /* 9. dec exch ref_cnt */
+ unf_xchg_ref_dec(unf_xchg, INI_RESPONSE_DONE); /* cancel hold: release resource now */
+
+ return ret;
+}
+
+u32 unf_hardware_start_io(struct unf_lport *lport, struct unf_frame_pkg *pkg)
+{
+ if (unlikely(!lport->low_level_func.service_op.unf_cmnd_send)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) low level send scsi function is NULL",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return lport->low_level_func.service_op.unf_cmnd_send(lport->fc_port, pkg);
+}
+
+struct unf_rport *unf_find_rport_by_scsi_id(struct unf_lport *lport,
+ struct unf_ini_error_code *err_code_table,
+ u32 err_code_table_cout, u32 scsi_id, u32 *scsi_result)
+{
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ struct unf_wwpn_rport_info *wwpn_rport_info = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong flags = 0;
+
+ /* scsi_table -> session_table ->image_table */
+ scsi_image_table = &lport->rport_scsi_table;
+
+ /* 1. Scsi_Id validity check */
+ if (unlikely(scsi_id >= scsi_image_table->max_scsi_id)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Input scsi_id(0x%x) bigger than max_scsi_id(0x%x).",
+ scsi_id, scsi_image_table->max_scsi_id);
+
+ *scsi_result = unf_get_up_level_cmnd_errcode(err_code_table, err_code_table_cout,
+ UNF_IO_SOFT_ERR); /* did_soft_error */
+
+ return NULL;
+ }
+
+ /* 2. GetR_Port_Info/R_Port: use Scsi_Id find from L_Port's
+ * Rport_Scsi_Table (image table)
+ */
+ spin_lock_irqsave(&scsi_image_table->scsi_image_table_lock, flags);
+ wwpn_rport_info = &scsi_image_table->wwn_rport_info_table[scsi_id];
+ unf_rport = wwpn_rport_info->rport;
+ spin_unlock_irqrestore(&scsi_image_table->scsi_image_table_lock, flags);
+
+ if (unlikely(!unf_rport)) {
+ *scsi_result = unf_get_up_level_cmnd_errcode(err_code_table,
+ err_code_table_cout,
+ UNF_IO_PORT_LOGOUT);
+
+ return NULL;
+ }
+
+ return unf_rport;
+}
+
+static u32 unf_build_xchg_fcpcmnd(struct unf_fcp_cmnd *fcp_cmnd,
+ struct unf_scsi_cmnd *scsi_cmnd)
+{
+ memcpy(fcp_cmnd->cdb, &UNF_GET_FCP_CMND(scsi_cmnd), scsi_cmnd->cmnd_len);
+
+ if ((fcp_cmnd->control == UNF_FCP_WR_DATA &&
+ (IS_READ_COMMAND(fcp_cmnd->cdb[ARRAY_INDEX_0]))) ||
+ (fcp_cmnd->control == UNF_FCP_RD_DATA &&
+ (IS_WRITE_COMMAND(fcp_cmnd->cdb[ARRAY_INDEX_0])))) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MINOR,
+ "Scsi command direction inconsistent, CDB[ARRAY_INDEX_0](0x%x), direction(0x%x).",
+ fcp_cmnd->cdb[ARRAY_INDEX_0], fcp_cmnd->control);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ memcpy(fcp_cmnd->lun, scsi_cmnd->lun_id, sizeof(fcp_cmnd->lun));
+
+ unf_big_end_to_cpu((void *)fcp_cmnd->cdb, sizeof(fcp_cmnd->cdb));
+ fcp_cmnd->data_length = UNF_GET_DATA_LEN(scsi_cmnd);
+
+ return RETURN_OK;
+}
+
+static void unf_adjust_xchg_len(struct unf_xchg *xchg, u32 scsi_cmnd)
+{
+ switch (scsi_cmnd) {
+ case SCSIOPC_REQUEST_SENSE: /* requires different buffer */
+ xchg->data_len = UNF_SCSI_SENSE_BUFFERSIZE;
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MINOR, "Request Sense new.");
+ break;
+
+ case SCSIOPC_TEST_UNIT_READY:
+ case SCSIOPC_RESERVE:
+ case SCSIOPC_RELEASE:
+ case SCSIOPC_START_STOP_UNIT:
+ xchg->data_len = 0;
+ break;
+
+ default:
+ break;
+ }
+}
+
+static void unf_copy_dif_control(struct unf_dif_control_info *dif_control,
+ struct unf_scsi_cmnd *scsi_cmnd)
+{
+ dif_control->fcp_dl = scsi_cmnd->dif_control.fcp_dl;
+ dif_control->protect_opcode = scsi_cmnd->dif_control.protect_opcode;
+ dif_control->start_lba = scsi_cmnd->dif_control.start_lba;
+ dif_control->app_tag = scsi_cmnd->dif_control.app_tag;
+
+ dif_control->flags = scsi_cmnd->dif_control.flags;
+ dif_control->dif_sge_count = scsi_cmnd->dif_control.dif_sge_count;
+ dif_control->dif_sgl = scsi_cmnd->dif_control.dif_sgl;
+}
+
+static void unf_adjust_dif_pci_transfer_len(struct unf_xchg *xchg, u32 direction)
+{
+ struct unf_dif_control_info *dif_control = NULL;
+ u32 sector_size = 0;
+
+ dif_control = &xchg->dif_control;
+
+ if (dif_control->protect_opcode == UNF_DIF_ACTION_NONE)
+ return;
+ if ((dif_control->flags & UNF_DIF_SECTSIZE_4KB) == 0)
+ sector_size = SECTOR_SIZE_512;
+ else
+ sector_size = SECTOR_SIZE_4096;
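+
+	/* Adjust fcp_dl (bytes on the link): add one DIF area per data block whenever DIF is carried on the wire */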
+ switch (dif_control->protect_opcode & UNF_DIF_ACTION_MASK) {
+ case UNF_DIF_ACTION_INSERT:
+ if (direction == DMA_TO_DEVICE) {
+			/* Write IO, insert: data with DIF is
+			 * transmitted over the link.
+			 */
+ dif_control->fcp_dl = xchg->data_len +
+ UNF_CAL_BLOCK_CNT(xchg->data_len, sector_size) * UNF_DIF_AREA_SIZE;
+ } else {
+			/* Read I/O with DIF insert: DIF is handled
+			 * internally; the link does not carry DIF.
+			 */
+ dif_control->fcp_dl = xchg->data_len;
+ }
+ break;
+
+ case UNF_DIF_ACTION_VERIFY_AND_DELETE:
+ if (direction == DMA_TO_DEVICE) {
+			/* Write I/O with verify-and-delete: DIF is handled
+			 * internally; the link does not carry DIF.
+			 */
+ dif_control->fcp_dl = xchg->data_len;
+ } else {
+			/* Read I/O with verify-and-delete: the data carried
+			 * on the link includes DIF; internally it does not.
+			 */
+ dif_control->fcp_dl = xchg->data_len +
+ UNF_CAL_BLOCK_CNT(xchg->data_len, sector_size) * UNF_DIF_AREA_SIZE;
+ }
+ break;
+
+ case UNF_DIF_ACTION_VERIFY_AND_FORWARD:
+ dif_control->fcp_dl = xchg->data_len +
+ UNF_CAL_BLOCK_CNT(xchg->data_len, sector_size) * UNF_DIF_AREA_SIZE;
+ break;
+
+ default:
+ dif_control->fcp_dl = xchg->data_len;
+ break;
+ }
+
+ xchg->fcp_cmnd.data_length = dif_control->fcp_dl;
+}
+
+static void unf_get_dma_direction(struct unf_fcp_cmnd *fcp_cmnd,
+ struct unf_scsi_cmnd *scsi_cmnd)
+{
+ if (UNF_GET_DATA_DIRECTION(scsi_cmnd) == DMA_TO_DEVICE) {
+ fcp_cmnd->control = UNF_FCP_WR_DATA;
+ } else if (UNF_GET_DATA_DIRECTION(scsi_cmnd) == DMA_FROM_DEVICE) {
+ fcp_cmnd->control = UNF_FCP_RD_DATA;
+ } else {
+ /* DMA Direction None */
+ fcp_cmnd->control = 0;
+ }
+}
+
+static int unf_save_scsi_cmnd_to_xchg(struct unf_lport *lport,
+ struct unf_rport *rport,
+ struct unf_xchg *xchg,
+ struct unf_scsi_cmnd *scsi_cmnd)
+{
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = rport;
+ struct unf_xchg *unf_xchg = xchg;
+ u32 result_size = 0;
+
+ scsi_cmnd->driver_scribble = (void *)unf_xchg->start_jif;
+ unf_xchg->rport = unf_rport;
+ unf_xchg->rport_bind_jifs = unf_rport->rport_alloc_jifs;
+
+ /* Build Xchg SCSI_CMND info */
+ unf_bind_xchg_scsi_cmd(unf_xchg, scsi_cmnd);
+
+ unf_xchg->data_len = UNF_GET_DATA_LEN(scsi_cmnd);
+ unf_xchg->data_direction = UNF_GET_DATA_DIRECTION(scsi_cmnd);
+ unf_xchg->sid = unf_lport->nport_id;
+ unf_xchg->did = unf_rport->nport_id;
+ unf_xchg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = unf_rport->rport_index;
+ unf_xchg->world_id = scsi_cmnd->world_id;
+ unf_xchg->cmnd_sn = scsi_cmnd->cmnd_sn;
+ unf_xchg->pinitiator = scsi_cmnd->pinitiator;
+ unf_xchg->scsi_id = scsi_cmnd->scsi_id;
+ if (scsi_cmnd->qos_level == UNF_QOS_LEVEL_DEFAULT)
+ unf_xchg->qos_level = unf_rport->qos_level;
+ else
+ unf_xchg->qos_level = scsi_cmnd->qos_level;
+
+ unf_get_dma_direction(&unf_xchg->fcp_cmnd, scsi_cmnd);
+ result_size = unf_build_xchg_fcpcmnd(&unf_xchg->fcp_cmnd, scsi_cmnd);
+ if (unlikely(result_size != RETURN_OK))
+ return UNF_RETURN_ERROR;
+
+ unf_adjust_xchg_len(unf_xchg, UNF_GET_FCP_CMND(scsi_cmnd));
+
+ /* Dif (control) info */
+ unf_copy_dif_control(&unf_xchg->dif_control, scsi_cmnd);
+ memcpy(&unf_xchg->dif_info, &scsi_cmnd->dif_info, sizeof(struct dif_info));
+ unf_adjust_dif_pci_transfer_len(unf_xchg, UNF_GET_DATA_DIRECTION(scsi_cmnd));
+
+ /* single sgl info */
+ if (unf_xchg->data_direction != DMA_NONE && UNF_GET_CMND_SGL(scsi_cmnd)) {
+ unf_xchg->req_sgl_info.sgl = UNF_GET_CMND_SGL(scsi_cmnd);
+ unf_xchg->req_sgl_info.sgl_start = unf_xchg->req_sgl_info.sgl;
+ /* Save the sgl header for easy
+ * location and printing.
+ */
+ unf_xchg->req_sgl_info.req_index = 0;
+ unf_xchg->req_sgl_info.entry_index = 0;
+ }
+
+ if (scsi_cmnd->dif_control.dif_sgl) {
+ unf_xchg->dif_sgl_info.sgl = UNF_INI_GET_DIF_SGL(scsi_cmnd);
+ unf_xchg->dif_sgl_info.entry_index = 0;
+ unf_xchg->dif_sgl_info.req_index = 0;
+ unf_xchg->dif_sgl_info.sgl_start = unf_xchg->dif_sgl_info.sgl;
+ }
+
+ return RETURN_OK;
+}
+
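+/*
+ * For R_Ports that need tape support, the first UNF_MAX_PENDING_IO_CNT
+ * outstanding commands are marked INI_IO_STATE_REC_TIMEOUT_WAIT and use the
+ * shorter UNF_REC_TOV timeout, presumably so that REC-based recovery can
+ * start early for slow tape devices.
+ */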
+static int unf_send_fcpcmnd(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+#define UNF_MAX_PENDING_IO_CNT 3
+ struct unf_scsi_cmd_info *scsi_cmnd_info = NULL;
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = rport;
+ struct unf_xchg *unf_xchg = xchg;
+ struct unf_frame_pkg pkg = {0};
+ u32 result_size = 0;
+ ulong flags = 0;
+
+ memcpy(&pkg.dif_control, &unf_xchg->dif_control, sizeof(struct unf_dif_control_info));
+ pkg.dif_control.fcp_dl = unf_xchg->dif_control.fcp_dl;
+ pkg.transfer_len = unf_xchg->data_len; /* Pcie data transfer length */
+ pkg.xchg_contex = unf_xchg;
+ pkg.qos_level = unf_xchg->qos_level;
+ scsi_cmnd_info = &xchg->scsi_cmnd_info;
+ pkg.entry_count = unf_xchg->scsi_cmnd_info.entry_cnt;
+ if (unf_xchg->data_direction == DMA_NONE || !scsi_cmnd_info->sgl)
+ pkg.entry_count = 0;
+
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ unf_xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
+ pkg.private_data[PKG_PRIVATE_XCHG_VP_INDEX] = unf_lport->vp_index;
+ pkg.private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = unf_rport->rport_index;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] =
+ unf_xchg->hotpooltag | UNF_HOTTAG_FLAG;
+
+ unf_select_sq(unf_xchg, &pkg);
+ pkg.fcp_cmnd = &unf_xchg->fcp_cmnd;
+ pkg.frame_head.csctl_sid = unf_lport->nport_id;
+ pkg.frame_head.rctl_did = unf_rport->nport_id;
+ pkg.upper_cmd = unf_xchg->scsi_cmnd_info.scsi_cmnd;
+
+ /* exch->fcp_rsp_id --->>> pkg->buffer_ptr */
+ pkg.frame_head.oxid_rxid = ((u32)unf_xchg->oxid << (u32)UNF_SHIFT_16 | unf_xchg->rxid);
+
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
+ "[info]LPort (0x%p), Nport ID(0x%x) RPort ID(0x%x) direction(0x%x) magic number(0x%x) IO to entry count(0x%x) hottag(0x%x)",
+ unf_lport, unf_lport->nport_id, unf_rport->nport_id,
+ xchg->data_direction, pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME],
+ pkg.entry_count, unf_xchg->hotpooltag);
+
+ atomic_inc(&unf_rport->pending_io_cnt);
+ if (unf_rport->tape_support_needed &&
+ (atomic_read(&unf_rport->pending_io_cnt) <= UNF_MAX_PENDING_IO_CNT)) {
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ unf_xchg->io_state |= INI_IO_STATE_REC_TIMEOUT_WAIT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ scsi_cmnd_info->abort_time_out = scsi_cmnd_info->time_out;
+ scsi_cmnd_info->time_out = UNF_REC_TOV;
+ }
+ /* 3. add INI I/O timer if necessary */
+ if (scsi_cmnd_info->time_out != 0) {
+		/* I/O inner timer, not used at this time */
+ unf_lport->xchg_mgr_temp.unf_xchg_add_timer(unf_xchg,
+ scsi_cmnd_info->time_out, UNF_TIMER_TYPE_REQ_IO);
+ }
+
+ /* 4. R_Port state check */
+ if (unlikely(unf_rport->lport_ini_state != UNF_PORT_STATE_LINKUP ||
+ unf_rport->rp_state > UNF_RPORT_ST_READY)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[info]Port(0x%x) RPort(0x%p) NPortId(0x%x) inistate(0x%x): RPort state(0x%x) pUpperCmd(0x%p) is not ready",
+ unf_lport->port_id, unf_rport, unf_rport->nport_id,
+ unf_rport->lport_ini_state, unf_rport->rp_state, pkg.upper_cmd);
+
+ result_size = unf_get_up_level_cmnd_errcode(scsi_cmnd_info->err_code_table,
+ scsi_cmnd_info->err_code_table_cout,
+ UNF_IO_INCOMPLETE);
+ scsi_cmnd_info->result = result_size;
+
+ if (scsi_cmnd_info->time_out != 0)
+ unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer(unf_xchg);
+
+ unf_cm_free_xchg(unf_lport, unf_xchg);
+
+ /* DID_IMM_RETRY */
+ return RETURN_OK;
+ } else if (unf_rport->rp_state < UNF_RPORT_ST_READY) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[info]Port(0x%x) RPort(0x%p) NPortId(0x%x) inistate(0x%x): RPort state(0x%x) pUpperCmd(0x%p) is not ready",
+ unf_lport->port_id, unf_rport, unf_rport->nport_id,
+ unf_rport->lport_ini_state, unf_rport->rp_state, pkg.upper_cmd);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ unf_xchg->io_state |= INI_IO_STATE_UPSEND_ERR; /* need retry */
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ if (unlikely(scsi_cmnd_info->time_out != 0))
+ unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)unf_xchg);
+
+ /* Host busy & need scsi retry */
+ return UNF_RETURN_ERROR;
+ }
+
+ /* 5. send scsi_cmnd to FC_LL Driver */
+ if (unf_hardware_start_io(unf_lport, &pkg) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port (0x%x) pUpperCmd(0x%p) Hardware Send IO failed.",
+ unf_lport->port_id, pkg.upper_cmd);
+
+ unf_release_esgls(unf_xchg);
+
+ result_size = unf_get_up_level_cmnd_errcode(scsi_cmnd_info->err_code_table,
+ scsi_cmnd_info->err_code_table_cout,
+ UNF_IO_INCOMPLETE);
+ scsi_cmnd_info->result = result_size;
+
+ if (scsi_cmnd_info->time_out != 0)
+ unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer(unf_xchg);
+
+ unf_cm_free_xchg(unf_lport, unf_xchg);
+
+ /* SCSI_DONE */
+ return RETURN_OK;
+ }
+
+ return RETURN_OK;
+}
+
+int unf_prefer_to_send_scsi_cmnd(struct unf_xchg *xchg)
+{
+ /*
+ * About INI_IO_STATE_DRABORT:
+ * 1. Set ABORT tag: Clean L_Port/V_Port Link Down I/O
+ * with: INI_busy_list, delay_list, delay_transfer_list, wait_list
+ * *
+ * 2. Set ABORT tag: for target session:
+ * with: INI_busy_list, delay_list, delay_transfer_list, wait_list
+ * a. R_Port remove
+ * b. Send PLOGI_ACC callback
+ * c. RCVD PLOGI
+ * d. RCVD LOGO
+ * *
+	 * 3. if ABORT is set: prevent sending scsi_cmnd to the target
+ */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ int ret = RETURN_OK;
+ ulong flags = 0;
+
+ unf_lport = xchg->lport;
+
+ unf_rport = xchg->rport;
+ if (unlikely(!unf_lport || !unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%p) or RPort(0x%p) is NULL", unf_lport, unf_rport);
+
+		/* should never happen; if it does, retry is needed */
+ return UNF_RETURN_ERROR;
+ }
+
+ /* 1. inc ref_cnt to protect exchange */
+ ret = (int)unf_xchg_ref_inc(xchg, INI_SEND_CMND);
+ if (unlikely(ret != RETURN_OK)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) exhg(%p) exception ref(%d) ", unf_lport->port_id,
+ xchg, atomic_read(&xchg->ref_cnt));
+ /* exchange exception, need retry */
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ xchg->io_state |= INI_IO_STATE_UPSEND_ERR;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ /* INI_IO_STATE_UPSEND_ERR: Host busy --->>> need retry */
+ return UNF_RETURN_ERROR;
+ }
+
+ /* 2. Xchg Abort state check: Free EXCH if necessary */
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ if (unlikely((xchg->io_state & INI_IO_STATE_UPABORT) ||
+ (xchg->io_state & INI_IO_STATE_DRABORT))) {
+ /* Prevent to send: UP_ABORT/DRV_ABORT */
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ xchg->scsi_cmnd_info.result = UNF_SCSI_HOST(DID_IMM_RETRY);
+ ret = RETURN_OK;
+
+ unf_xchg_ref_dec(xchg, INI_SEND_CMND);
+ unf_cm_free_xchg(unf_lport, xchg);
+
+ /*
+ * Release exchange & return directly:
+ * 1. FC LLDD rcvd ABTS before scsi_cmnd: do nothing
+ * 2. INI_IO_STATE_UPABORT/INI_IO_STATE_DRABORT: discard this
+ * cmnd directly
+ */
+ return ret;
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ /* 3. Send FCP_CMND to FC_LL Driver */
+ ret = unf_send_fcpcmnd(unf_lport, unf_rport, xchg);
+ if (unlikely(ret != RETURN_OK)) {
+ /* exchange exception, need retry */
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send exhg(%p) hottag(0x%x) to Rport(%p) NPortID(0x%x) state(0x%x) scsi_id(0x%x) failed",
+ unf_lport->port_id, xchg, xchg->hotpooltag, unf_rport,
+ unf_rport->nport_id, unf_rport->rp_state, unf_rport->scsi_id);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+
+ xchg->io_state |= INI_IO_STATE_UPSEND_ERR;
+ /* need retry */
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ /* INI_IO_STATE_UPSEND_ERR: Host busy --->>> need retry */
+ unf_cm_free_xchg(unf_lport, xchg);
+ }
+
+ /* 4. dec ref_cnt */
+ unf_xchg_ref_dec(xchg, INI_SEND_CMND);
+
+ return ret;
+}
+
+struct unf_lport *unf_find_lport_by_scsi_cmd(struct unf_scsi_cmnd *scsi_cmnd)
+{
+ struct unf_lport *unf_lport = NULL;
+
+ /* cmd -->> L_Port */
+ unf_lport = (struct unf_lport *)UNF_GET_HOST_PORT_BY_CMND(scsi_cmnd);
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Find Port by scsi_cmnd(0x%p) failed", scsi_cmnd);
+
+ /* cmnd -->> scsi_host_id -->> L_Port */
+ unf_lport = unf_find_lport_by_scsi_hostid(UNF_GET_SCSI_HOST_ID_BY_CMND(scsi_cmnd));
+ }
+
+ return unf_lport;
+}
+
+int unf_cm_queue_command(struct unf_scsi_cmnd *scsi_cmnd)
+{
+ /* SCSI Command --->>> FC FCP Command */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ u32 cmnd_result = 0;
+ int ret = RETURN_OK;
+ ulong flags = 0;
+ u32 scsi_id = 0;
+ u32 exhg_mgr_type = UNF_XCHG_MGR_TYPE_RANDOM;
+
+ /* 1. Get L_Port */
+ unf_lport = unf_find_lport_by_scsi_cmd(scsi_cmnd);
+
+	/*
+	 * Covers the hot-plug (insert/remove) and card-removal scenarios:
+	 * the L_Port is looked up by SCSI_HOST_ID. slave_alloc is not invoked
+	 * before LUNs are scanned, so the L_Port cannot be taken from the
+	 * command and must be obtained from the L_Port linked list instead.
+	 * *
+	 * After the FC link comes up, the first SCSI command is INQUIRY;
+	 * before INQUIRY, SCSI delivers slave_alloc.
+	 */
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Find Port by scsi cmd(0x%p) failed", scsi_cmnd);
+
+ /* find from ini_error_code_table1 */
+ cmnd_result = unf_get_up_level_cmnd_errcode(scsi_cmnd->err_code_table,
+ scsi_cmnd->err_code_table_cout,
+ UNF_IO_NO_LPORT);
+
+ /* DID_NOT_CONNECT & SCSI_DONE & RETURN_OK(0) & I/O error */
+ unf_complete_cmnd(scsi_cmnd, cmnd_result);
+ return RETURN_OK;
+ }
+
+ /* Get Local SCSI_Image_table & SCSI_ID */
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ scsi_id = scsi_cmnd->scsi_id;
+
+ /* 2. L_Port State check */
+ if (unlikely(unf_lport->port_removing || unf_lport->pcie_link_down)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) is removing(%d) or pcielinkdown(%d) and return with scsi_id(0x%x)",
+ unf_lport->port_id, unf_lport->port_removing,
+ unf_lport->pcie_link_down, UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd));
+
+ cmnd_result = unf_get_up_level_cmnd_errcode(scsi_cmnd->err_code_table,
+ scsi_cmnd->err_code_table_cout,
+ UNF_IO_NO_LPORT);
+
+ UNF_IO_RESULT_CNT(scsi_image_table, scsi_id, (cmnd_result >> UNF_SHIFT_16));
+
+ /* DID_NOT_CONNECT & SCSI_DONE & RETURN_OK(0) & I/O error */
+ unf_complete_cmnd(scsi_cmnd, cmnd_result);
+ return RETURN_OK;
+ }
+
+ /* 3. Get R_Port */
+ unf_rport = unf_find_rport_by_scsi_id(unf_lport, scsi_cmnd->err_code_table,
+ scsi_cmnd->err_code_table_cout,
+ UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd), &cmnd_result);
+ if (unlikely(!unf_rport)) {
+ /* never happen: do not care */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) find RPort by scsi_id(0x%x) failed",
+ unf_lport->port_id, UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd));
+
+ UNF_IO_RESULT_CNT(scsi_image_table, scsi_id, (cmnd_result >> UNF_SHIFT_16));
+
+ /* DID_NOT_CONNECT/DID_SOFT_ERROR & SCSI_DONE & RETURN_OK(0) &
+ * I/O error
+ */
+ unf_complete_cmnd(scsi_cmnd, cmnd_result);
+ return RETURN_OK;
+ }
+
+	/* 4. Get free exchange; if none, return host busy for upper-level retry */
+ unf_xchg = (struct unf_xchg *)unf_cm_get_free_xchg(unf_lport,
+ exhg_mgr_type << UNF_SHIFT_16 | UNF_XCHG_TYPE_INI);
+ if (unlikely(!unf_xchg)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[err]Port(0x%x) get free exchange for INI IO(0x%x) failed",
+ unf_lport->port_id, UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd));
+
+ /* NOTE: need scsi retry */
+ return UNF_RETURN_ERROR;
+ }
+
+ unf_xchg->scsi_cmnd_info.result = UNF_SCSI_HOST(DID_ERROR);
+
+ /* 5. Save the SCSI CMND information in advance. */
+ ret = unf_save_scsi_cmnd_to_xchg(unf_lport, unf_rport, unf_xchg, scsi_cmnd);
+ if (unlikely(ret != RETURN_OK)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[err]Port(0x%x) save scsi_cmnd info(0x%x) to exchange failed",
+ unf_lport->port_id, UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd));
+
+ spin_lock_irqsave(&unf_xchg->xchg_state_lock, flags);
+ unf_xchg->io_state |= INI_IO_STATE_UPSEND_ERR;
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flags);
+
+ /* INI_IO_STATE_UPSEND_ERR: Don't Do SCSI_DONE, need retry I/O */
+ unf_cm_free_xchg(unf_lport, unf_xchg);
+
+ /* NOTE: need scsi retry */
+ return UNF_RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]Get exchange(0x%p) hottag(0x%x) for Pcmd:%p,Cmdsn:0x%lx,WorldId:%d",
+ unf_xchg, unf_xchg->hotpooltag, scsi_cmnd->upper_cmnd,
+ (ulong)scsi_cmnd->cmnd_sn, scsi_cmnd->world_id);
+ /* 6. Send SCSI CMND */
+ ret = unf_prefer_to_send_scsi_cmnd(unf_xchg);
+
+ return ret;
+}
diff --git a/drivers/scsi/spfc/common/unf_io.h b/drivers/scsi/spfc/common/unf_io.h
new file mode 100644
index 000000000000..d8e50eb8035e
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_io.h
@@ -0,0 +1,96 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_IO_H
+#define UNF_IO_H
+
+#include "unf_type.h"
+#include "unf_scsi_common.h"
+#include "unf_exchg.h"
+#include "unf_rport.h"
+
+#define UNF_MAX_TARGET_NUMBER 2048
+#define UNF_DEFAULT_MAX_LUN 0xFFFF
+#define UNF_MAX_DMA_SEGS 0x400
+#define UNF_MAX_SCSI_CMND_LEN 16
+#define UNF_MAX_BUS_CHANNEL 0
+#define UNF_DMA_BOUNDARY 0xffffffffffffffff
+#define UNF_MAX_CMND_PER_LUN 64 /* LUN max command */
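+
+/*
+ * An exchange matches when its scsi_id equals the requested scsi_id and the
+ * LUN either equals the raw LUN id or is INVALID_VALUE64 (treated here as a
+ * match-any-LUN wildcard).
+ */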
+#define UNF_CHECK_LUN_ID_MATCH(lun_id, raw_lun_id, scsi_id, xchg) \
+ (((lun_id) == (raw_lun_id) || (lun_id) == INVALID_VALUE64) && \
+ ((scsi_id) == (xchg)->scsi_id))
+
+#define NO_SENSE 0x00
+#define RECOVERED_ERROR 0x01
+#define NOT_READY 0x02
+#define MEDIUM_ERROR 0x03
+#define HARDWARE_ERROR 0x04
+#define ILLEGAL_REQUEST 0x05
+#define UNIT_ATTENTION 0x06
+#define DATA_PROTECT 0x07
+#define BLANK_CHECK 0x08
+#define COPY_ABORTED 0x0a
+#define ABORTED_COMMAND 0x0b
+#define VOLUME_OVERFLOW 0x0d
+#define MISCOMPARE 0x0e
+
+#define SENSE_DATA_RESPONSE_CODE 0x70
+#define ADDITINONAL_SENSE_LEN 0x7
+
+extern u32 sector_size_flag;
+
+#define UNF_GET_SCSI_HOST_ID_BY_CMND(cmd) ((cmd)->scsi_host_id)
+#define UNF_GET_SCSI_ID_BY_CMND(cmd) ((cmd)->scsi_id)
+#define UNF_GET_HOST_PORT_BY_CMND(cmd) ((cmd)->drv_private)
+#define UNF_GET_FCP_CMND(cmd) ((cmd)->pcmnd[ARRAY_INDEX_0])
+#define UNF_GET_DATA_LEN(cmd) ((cmd)->transfer_len)
+#define UNF_GET_DATA_DIRECTION(cmd) ((cmd)->data_direction)
+
+#define UNF_GET_HOST_CMND(cmd) ((cmd)->upper_cmnd)
+#define UNF_GET_CMND_DONE_FUNC(cmd) ((cmd)->done)
+#define UNF_GET_UP_LEVEL_CMND_DONE(cmd) ((cmd)->uplevel_done)
+#define UNF_GET_SGL_ENTRY_BUF_FUNC(cmd) ((cmd)->unf_ini_get_sgl_entry)
+#define UNF_GET_SENSE_BUF_ADDR(cmd) ((cmd)->sense_buf)
+#define UNF_GET_ERR_CODE_TABLE(cmd) ((cmd)->err_code_table)
+#define UNF_GET_ERR_CODE_TABLE_COUNT(cmd) ((cmd)->err_code_table_cout)
+
+#define UNF_SET_HOST_CMND(cmd, host_cmd) ((cmd)->upper_cmnd = (host_cmd))
+#define UNF_SER_CMND_DONE_FUNC(cmd, pfn) ((cmd)->done = (pfn))
+#define UNF_SET_UP_LEVEL_CMND_DONE_FUNC(cmd, pfn) ((cmd)->uplevel_done = (pfn))
+
+#define UNF_SET_RESID(cmd, uiresid) ((cmd)->resid = (uiresid))
+#define UNF_SET_CMND_RESULT(cmd, uiresult) ((cmd)->result = ((int)(uiresult)))
+
+#define UNF_DONE_SCSI_CMND(cmd) ((cmd)->done(cmd))
+
+#define UNF_GET_CMND_SGL(cmd) ((cmd)->sgl)
+#define UNF_INI_GET_DIF_SGL(cmd) ((cmd)->dif_control.dif_sgl)
+
+u32 unf_ini_scsi_completed(void *lport, struct unf_frame_pkg *pkg);
+u32 unf_ini_get_sgl_entry(void *pkg, char **buf, u32 *buf_len);
+u32 unf_ini_get_dif_sgl_entry(void *pkg, char **buf, u32 *buf_len);
+void unf_complete_cmnd(struct unf_scsi_cmnd *scsi_cmnd, u32 result_size);
+void unf_done_ini_xchg(struct unf_xchg *xchg);
+u32 unf_tmf_timeout_recovery_special(void *rport, void *xchg);
+u32 unf_tmf_timeout_recovery_default(void *rport, void *xchg);
+void unf_abts_timeout_recovery_default(void *rport, void *xchg);
+int unf_cm_queue_command(struct unf_scsi_cmnd *scsi_cmnd);
+int unf_cm_eh_abort_handler(struct unf_scsi_cmnd *scsi_cmnd);
+int unf_cm_eh_device_reset_handler(struct unf_scsi_cmnd *scsi_cmnd);
+int unf_cm_target_reset_handler(struct unf_scsi_cmnd *scsi_cmnd);
+int unf_cm_bus_reset_handler(struct unf_scsi_cmnd *scsi_cmnd);
+int unf_cm_virtual_reset_handler(struct unf_scsi_cmnd *scsi_cmnd);
+struct unf_rport *unf_find_rport_by_scsi_id(struct unf_lport *lport,
+ struct unf_ini_error_code *errcode_table,
+ u32 errcode_table_count,
+ u32 scsi_id, u32 *scsi_result);
+u32 UNF_IOExchgDelayProcess(struct unf_lport *lport, struct unf_xchg *xchg);
+struct unf_lport *unf_find_lport_by_scsi_cmd(struct unf_scsi_cmnd *scsi_cmnd);
+int unf_send_scsi_mgmt_cmnd(struct unf_xchg *xchg, struct unf_lport *lport,
+ struct unf_rport *rport,
+ struct unf_scsi_cmnd *scsi_cmnd,
+ enum unf_task_mgmt_cmd task_mgnt_cmd_type);
+void unf_tmf_abnormal_recovery(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_io_abnormal.c b/drivers/scsi/spfc/common/unf_io_abnormal.c
new file mode 100644
index 000000000000..fece7aa5f441
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_io_abnormal.c
@@ -0,0 +1,986 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_io_abnormal.h"
+#include "unf_log.h"
+#include "unf_scsi_common.h"
+#include "unf_rport.h"
+#include "unf_io.h"
+#include "unf_portman.h"
+#include "unf_service.h"
+
+static int unf_send_abts_success(struct unf_lport *lport, struct unf_xchg *xchg,
+ struct unf_scsi_cmnd *scsi_cmnd,
+ u32 time_out_value)
+{
+ bool need_wait_marker = true;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ u32 scsi_id = 0;
+ u32 return_value = 0;
+ ulong xchg_flag = 0;
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+ need_wait_marker = (xchg->abts_state & MARKER_STS_RECEIVED) ? false : true;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+
+ if (need_wait_marker) {
+ if (down_timeout(&xchg->task_sema, (s64)msecs_to_jiffies(time_out_value))) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) recv abts marker timeout,Exch(0x%p) OX_ID(0x%x 0x%x) RX_ID(0x%x)",
+ lport->port_id, xchg, xchg->oxid,
+ xchg->hotpooltag, xchg->rxid);
+
+ /* Cancel abts rsp timer when sema timeout */
+ lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+
+			/* Clear the INI_IO_STATE_UPABORT flag and let TMF
+			 * handle the I/O
+			 */
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+ xchg->io_state &= ~INI_IO_STATE_UPABORT;
+ xchg->io_state |= INI_IO_STATE_TMF_ABORT;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+
+ return UNF_SCSI_ABORT_FAIL;
+ }
+ } else {
+ xchg->ucode_abts_state = UNF_IO_SUCCESS;
+ }
+
+ scsi_image_table = &lport->rport_scsi_table;
+ scsi_id = scsi_cmnd->scsi_id;
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+ if (xchg->ucode_abts_state == UNF_IO_SUCCESS ||
+ xchg->scsi_cmnd_info.result == UNF_IO_ABORT_PORT_REMOVING) {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) Send ABTS succeed and recv marker Exch(0x%p) OX_ID(0x%x) RX_ID(0x%x) marker status(0x%x)",
+ lport->port_id, xchg, xchg->oxid, xchg->rxid, xchg->ucode_abts_state);
+ return_value = DID_RESET;
+ UNF_IO_RESULT_CNT(scsi_image_table, scsi_id, return_value);
+ unf_complete_cmnd(scsi_cmnd, DID_RESET << UNF_SHIFT_16);
+ return UNF_SCSI_ABORT_SUCCESS;
+ }
+
+ xchg->io_state &= ~INI_IO_STATE_UPABORT;
+ xchg->io_state |= INI_IO_STATE_TMF_ABORT;
+
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+
+ /* Cancel abts rsp timer when sema timeout */
+ lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send ABTS failed. Exch(0x%p) oxid(0x%x) hot_tag(0x%x) ret(0x%x) xchg->io_state (0x%x)",
+ lport->port_id, xchg, xchg->oxid, xchg->hotpooltag,
+ xchg->scsi_cmnd_info.result, xchg->io_state);
+
+ /* return fail and then enter TMF */
+ return UNF_SCSI_ABORT_FAIL;
+}
+
+static int unf_ini_abort_cmnd(struct unf_lport *lport, struct unf_xchg *xchg,
+ struct unf_scsi_cmnd *scsi_cmnd)
+{
+ /*
+ * About INI_IO_STATE_UPABORT:
+ * *
+ * 1. Check: L_Port destroy
+ * 2. Check: I/O XCHG timeout
+ * 3. Set ABORT: send ABTS
+ * 4. Set ABORT: LUN reset
+ * 5. Set ABORT: Target reset
+ * 6. Check: Prevent to send I/O to target
+ * (unf_prefer_to_send_scsi_cmnd)
+ * 7. Check: Done INI XCHG --->>> do not call scsi_done, return directly
+ * 8. Check: INI SCSI Complete --->>> do not call scsi_done, return
+ * directly
+ */
+#define UNF_RPORT_NOTREADY_WAIT_SEM_TIMEOUT (2000)
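+/* Value in milliseconds; converted with msecs_to_jiffies() before down_timeout(). */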
+
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong rport_flag = 0;
+ ulong xchg_flag = 0;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ u32 scsi_id = 0;
+ u32 time_out_value = (u32)UNF_WAIT_SEM_TIMEOUT;
+ u32 return_value = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_SCSI_ABORT_FAIL);
+ unf_lport = lport;
+
+ /* 1. Xchg State Set: INI_IO_STATE_UPABORT */
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+ xchg->io_state |= INI_IO_STATE_UPABORT;
+ unf_rport = xchg->rport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+
+ /* 2. R_Port check */
+ if (unlikely(!unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send ABTS but no RPort, OX_ID(0x%x) RX_ID(0x%x)",
+ unf_lport->port_id, xchg->oxid, xchg->rxid);
+
+ return UNF_SCSI_ABORT_SUCCESS;
+ }
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, rport_flag);
+ if (unlikely(unf_rport->rp_state != UNF_RPORT_ST_READY)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) find RPort's state(0x%x) is not ready but send ABTS also, exchange(0x%p) tag(0x%x)",
+ unf_lport->port_id, unf_rport->rp_state, xchg, xchg->hotpooltag);
+
+		/*
+		 * Important: still send the ABTS and update the timer.
+		 * Purpose: only used to release chip (uCode) resources;
+		 * continue.
+		 */
+ time_out_value = UNF_RPORT_NOTREADY_WAIT_SEM_TIMEOUT;
+ }
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, rport_flag);
+
+ /* 3. L_Port State check */
+ if (unlikely(unf_lport->port_removing)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) is removing", unf_lport->port_id);
+
+ xchg->io_state &= ~INI_IO_STATE_UPABORT;
+
+ return UNF_SCSI_ABORT_FAIL;
+ }
+
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ scsi_id = scsi_cmnd->scsi_id;
+
+ /* If pcie linkdown, complete this io and flush all io */
+ if (unlikely(unf_lport->pcie_link_down)) {
+ return_value = DID_RESET;
+ UNF_IO_RESULT_CNT(scsi_image_table, scsi_id, return_value);
+ unf_complete_cmnd(scsi_cmnd, DID_RESET << UNF_SHIFT_16);
+ unf_free_lport_all_xchg(lport);
+ return UNF_SCSI_ABORT_SUCCESS;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_KEVENT,
+ "[abort]Port(0x%x) Exchg(0x%p) delay(%llu) SID(0x%x) DID(0x%x) wwpn(0x%llx) hottag(0x%x) scsi_id(0x%x) lun_id(0x%x) cmdsn(0x%llx) Ini:%p",
+ unf_lport->port_id, xchg,
+ (u64)jiffies_to_msecs(jiffies) - (u64)jiffies_to_msecs(xchg->alloc_jif),
+ xchg->sid, xchg->did, unf_rport->port_name, xchg->hotpooltag,
+ scsi_cmnd->scsi_id, (u32)scsi_cmnd->raw_lun_id, scsi_cmnd->cmnd_sn,
+ scsi_cmnd->pinitiator);
+
+ /* Init abts marker semaphore */
+ sema_init(&xchg->task_sema, 0);
+
+ if (xchg->scsi_cmnd_info.time_out != 0)
+ unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer(xchg);
+
+ lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg, (ulong)UNF_WAIT_ABTS_RSP_TIMEOUT,
+ UNF_TIMER_TYPE_INI_ABTS);
+
+ /* 4. Send INI ABTS CMND */
+ if (unf_send_abts(unf_lport, xchg) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) Send ABTS failed. Exch(0x%p) hottag(0x%x)",
+ unf_lport->port_id, xchg, xchg->hotpooltag);
+
+ lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
+ xchg->io_state &= ~INI_IO_STATE_UPABORT;
+ xchg->io_state |= INI_IO_STATE_TMF_ABORT;
+
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
+
+ return UNF_SCSI_ABORT_FAIL;
+ }
+
+ return unf_send_abts_success(unf_lport, xchg, scsi_cmnd, time_out_value);
+}
+
+static void unf_flush_ini_resp_que(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(lport);
+
+ if (lport->low_level_func.service_op.unf_flush_ini_resp_que)
+ (void)lport->low_level_func.service_op.unf_flush_ini_resp_que(lport->fc_port);
+}
+
+int unf_cm_eh_abort_handler(struct unf_scsi_cmnd *scsi_cmnd)
+{
+ /*
+ * SCSI ABORT Command --->>> FC ABTS Command
+ * If return ABORT_FAIL, then enter TMF process
+ */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_lport *xchg_lport = NULL;
+ int ret = UNF_SCSI_ABORT_SUCCESS;
+ ulong flag = 0;
+
+ /* 1. Get L_Port: Point to Scsi_host */
+ unf_lport = unf_find_lport_by_scsi_cmd(scsi_cmnd);
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Can't find port by scsi host id(0x%x)",
+ UNF_GET_SCSI_HOST_ID_BY_CMND(scsi_cmnd));
+ return UNF_SCSI_ABORT_FAIL;
+ }
+
+ /* 2. find target Xchg for INI Abort CMND */
+ unf_xchg = unf_cm_lookup_xchg_by_cmnd_sn(unf_lport, scsi_cmnd->cmnd_sn,
+ scsi_cmnd->world_id,
+ scsi_cmnd->pinitiator);
+ if (unlikely(!unf_xchg)) {
+ FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_WARN,
+ "[warn]Port(0x%x) can't find exchange by Cmdsn(0x%lx),Ini:%p",
+ unf_lport->port_id, (ulong)scsi_cmnd->cmnd_sn,
+ scsi_cmnd->pinitiator);
+
+ unf_flush_ini_resp_que(unf_lport);
+
+ return UNF_SCSI_ABORT_SUCCESS;
+ }
+
+ /* 3. increase ref_cnt to protect exchange */
+ ret = (int)unf_xchg_ref_inc(unf_xchg, INI_EH_ABORT);
+ if (unlikely(ret != RETURN_OK)) {
+ unf_flush_ini_resp_que(unf_lport);
+
+ return UNF_SCSI_ABORT_SUCCESS;
+ }
+
+ scsi_cmnd->upper_cmnd = unf_xchg->scsi_cmnd_info.scsi_cmnd;
+ unf_xchg->debug_hook = true;
+
+	/* 4. Exchange L_Port/R_Port get & check */
+ spin_lock_irqsave(&unf_xchg->xchg_state_lock, flag);
+ xchg_lport = unf_xchg->lport;
+ unf_rport = unf_xchg->rport;
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flag);
+
+ if (unlikely(!xchg_lport || !unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Exchange(0x%p)'s L_Port or R_Port is NULL, state(0x%x)",
+ unf_xchg, unf_xchg->io_state);
+
+ unf_xchg_ref_dec(unf_xchg, INI_EH_ABORT);
+
+ if (!xchg_lport)
+ /* for L_Port */
+ return UNF_SCSI_ABORT_FAIL;
+ /* for R_Port */
+ return UNF_SCSI_ABORT_SUCCESS;
+ }
+
+ /* 5. Send INI Abort Cmnd */
+ ret = unf_ini_abort_cmnd(xchg_lport, unf_xchg, scsi_cmnd);
+
+ /* 6. decrease exchange ref_cnt */
+ unf_xchg_ref_dec(unf_xchg, INI_EH_ABORT);
+
+ return ret;
+}
+
+u32 unf_tmf_timeout_recovery_default(void *rport, void *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ ulong flag = 0;
+ struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
+ struct unf_rport *unf_rport = (struct unf_rport *)rport;
+
+ unf_lport = unf_xchg->lport;
+ FC_CHECK_RETURN_VALUE(unf_lport, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ unf_rport_enter_logo(unf_lport, unf_rport);
+
+ return RETURN_OK;
+}
+
+void unf_abts_timeout_recovery_default(void *rport, void *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ ulong flag = 0;
+ ulong flags = 0;
+ struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
+ struct unf_rport *unf_rport = (struct unf_rport *)rport;
+
+ unf_lport = unf_xchg->lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ spin_lock_irqsave(&unf_xchg->xchg_state_lock, flags);
+ if (INI_IO_STATE_DONE & unf_xchg->io_state) {
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flags);
+
+ return;
+ }
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flags);
+
+ if (unf_xchg->rport_bind_jifs != unf_rport->rport_alloc_jifs)
+ return;
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ unf_rport_enter_logo(unf_lport, unf_rport);
+}
+
+u32 unf_tmf_timeout_recovery_special(void *rport, void *xchg)
+{
+ /* Do port reset or R_Port LOGO */
+ int ret = UNF_RETURN_ERROR;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
+ struct unf_rport *unf_rport = (struct unf_rport *)rport;
+
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(unf_xchg->lport, UNF_RETURN_ERROR);
+
+ unf_lport = unf_xchg->lport->root_lport;
+ FC_CHECK_RETURN_VALUE(unf_lport, UNF_RETURN_ERROR);
+
+ /* 1. TMF response timeout & Marker STS timeout */
+ if (!(unf_xchg->tmf_state &
+ (MARKER_STS_RECEIVED | TMF_RESPONSE_RECEIVED))) {
+ /* TMF timeout & marker timeout */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) receive marker status timeout and do recovery",
+ unf_lport->port_id);
+
+ /* Do port reset */
+ ret = unf_cm_reset_port(unf_lport->port_id);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) do reset failed",
+ unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+ }
+
+ /* 2. default case: Do LOGO process */
+ unf_tmf_timeout_recovery_default(unf_rport, unf_xchg);
+
+ return RETURN_OK;
+}
+
+void unf_tmf_abnormal_recovery(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ /*
+ * for device(lun)/target(session) reset:
+ * Do port reset or R_Port LOGO
+ */
+ if (lport->unf_tmf_abnormal_recovery)
+ lport->unf_tmf_abnormal_recovery((void *)rport, (void *)xchg);
+}
+
+int unf_cm_eh_device_reset_handler(struct unf_scsi_cmnd *scsi_cmnd)
+{
+ /* SCSI Device/LUN Reset Command --->>> FC LUN/Device Reset Command */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ u32 cmnd_result = 0;
+ int ret = SUCCESS;
+
+ FC_CHECK_RETURN_VALUE(scsi_cmnd, FAILED);
+ FC_CHECK_RETURN_VALUE(scsi_cmnd->lun_id, FAILED);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[event]Enter device/LUN reset handler");
+
+ /* 1. Get L_Port */
+ unf_lport = unf_find_lport_by_scsi_cmd(scsi_cmnd);
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Can't find port by scsi_host_id(0x%x)",
+ UNF_GET_SCSI_HOST_ID_BY_CMND(scsi_cmnd));
+
+ return FAILED;
+ }
+
+ /* 2. L_Port State checking */
+ if (unlikely(unf_lport->port_removing)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%p) is removing", unf_lport);
+
+ return FAILED;
+ }
+
+	/*
+	 * 3. Get R_Port: if no rport is found or the rport is not ready, return OK.
+	 * Lookup path: L_Port -->> rport_scsi_table (image table) -->> rport_info_table
+	 */
+ unf_rport = unf_find_rport_by_scsi_id(unf_lport, scsi_cmnd->err_code_table,
+ scsi_cmnd->err_code_table_cout,
+ UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd), &cmnd_result);
+ if (unlikely(!unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) Can't find rport by scsi_id(0x%x)",
+ unf_lport->port_id, UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd));
+
+ return SUCCESS;
+ }
+
+ /*
+ * 4. Set the I/O of the corresponding LUN to abort.
+ * *
+ * LUN Reset: set UP_ABORT tag, with:
+ * INI_Busy_list, IO_Wait_list,
+ * IO_Delay_list, IO_Delay_transfer_list
+ */
+ unf_cm_xchg_abort_by_lun(unf_lport, unf_rport, *((u64 *)scsi_cmnd->lun_id), NULL, false);
+
+ /* 5. R_Port state check */
+ if (unlikely(unf_rport->rp_state != UNF_RPORT_ST_READY)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x) state(0x%x) SCSI Command(0x%p), rport is not ready",
+ unf_lport->port_id, unf_rport->nport_id,
+ unf_rport->rp_state, scsi_cmnd);
+
+ return SUCCESS;
+ }
+
+ /* 6. Get & inc ref_cnt free Xchg for Device reset */
+ unf_xchg = (struct unf_xchg *)unf_cm_get_free_xchg(unf_lport, UNF_XCHG_TYPE_INI);
+ if (unlikely(!unf_xchg)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%p) can't get free exchange", unf_lport);
+
+ return FAILED;
+ }
+
+ /* increase ref_cnt for protecting exchange */
+ ret = (int)unf_xchg_ref_inc(unf_xchg, INI_EH_DEVICE_RESET);
+ FC_CHECK_RETURN_VALUE((ret == RETURN_OK), FAILED);
+
+ /* 7. Send Device/LUN Reset to Low level */
+ ret = unf_send_scsi_mgmt_cmnd(unf_xchg, unf_lport, unf_rport, scsi_cmnd,
+ UNF_FCP_TM_LOGICAL_UNIT_RESET);
+ if (unlikely(ret == FAILED)) {
+ /*
+ * Do port reset or R_Port LOGO:
+ * 1. FAILED: send failed
+ * 2. FAILED: semaphore timeout
+		 * 3. SUCCESS: rcvd rsp & semaphore has been woken up
+ */
+ unf_tmf_abnormal_recovery(unf_lport, unf_rport, unf_xchg);
+ }
+
+ /*
+ * 8. Release resource immediately if necessary
+	 * NOTE: here, either the semaphore timed out or the rsp was received
+	 * (the semaphore has been woken up)
+ */
+ if (likely(!unf_lport->port_removing || unf_lport->root_lport != unf_lport))
+ unf_cm_free_xchg(unf_xchg->lport, unf_xchg);
+
+ /* decrease ref_cnt */
+ unf_xchg_ref_dec(unf_xchg, INI_EH_DEVICE_RESET);
+
+ return SUCCESS;
+}
+
+int unf_cm_target_reset_handler(struct unf_scsi_cmnd *scsi_cmnd)
+{
+ /* SCSI Target Reset Command --->>> FC Session Reset/Delete Command */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ u32 cmnd_result = 0;
+ int ret = SUCCESS;
+
+ FC_CHECK_RETURN_VALUE(scsi_cmnd, FAILED);
+ FC_CHECK_RETURN_VALUE(scsi_cmnd->lun_id, FAILED);
+
+ /* 1. Get L_Port */
+ unf_lport = unf_find_lport_by_scsi_cmd(scsi_cmnd);
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Can't find port by scsi_host_id(0x%x)",
+ UNF_GET_SCSI_HOST_ID_BY_CMND(scsi_cmnd));
+
+ return FAILED;
+ }
+
+ /* 2. L_Port State check */
+ if (unlikely(unf_lport->port_removing)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%p) is removing", unf_lport);
+
+ return FAILED;
+ }
+
+	/*
+	 * 3. Get R_Port: if no rport is found or the rport is not ready, return OK.
+	 * Lookup path: L_Port -->> rport_scsi_table (image table) -->> rport_info_table
+	 */
+ unf_rport = unf_find_rport_by_scsi_id(unf_lport, scsi_cmnd->err_code_table,
+ scsi_cmnd->err_code_table_cout,
+ UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd), &cmnd_result);
+ if (unlikely(!unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Can't find rport by scsi_id(0x%x)",
+ UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd));
+
+ return SUCCESS;
+ }
+
+ /*
+ * 4. set UP_ABORT on Target IO and Session IO
+ * *
+ * LUN Reset: set UP_ABORT tag, with:
+ * INI_Busy_list, IO_Wait_list,
+ * IO_Delay_list, IO_Delay_transfer_list
+ */
+ unf_cm_xchg_abort_by_session(unf_lport, unf_rport);
+
+ /* 5. R_Port state check */
+ if (unlikely(unf_rport->rp_state != UNF_RPORT_ST_READY)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x) state(0x%x) is not ready, SCSI Command(0x%p)",
+ unf_lport->port_id, unf_rport->nport_id,
+ unf_rport->rp_state, scsi_cmnd);
+
+ return SUCCESS;
+ }
+
+ /* 6. Get free Xchg for Target Reset CMND */
+ unf_xchg = (struct unf_xchg *)unf_cm_get_free_xchg(unf_lport, UNF_XCHG_TYPE_INI);
+ if (unlikely(!unf_xchg)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%p) can't get free exchange", unf_lport);
+
+ return FAILED;
+ }
+
+ /* increase ref_cnt to protect exchange */
+ ret = (int)unf_xchg_ref_inc(unf_xchg, INI_EH_DEVICE_RESET);
+ FC_CHECK_RETURN_VALUE((ret == RETURN_OK), FAILED);
+
+ /* 7. Send Target Reset Cmnd to low-level */
+ ret = unf_send_scsi_mgmt_cmnd(unf_xchg, unf_lport, unf_rport, scsi_cmnd,
+ UNF_FCP_TM_TARGET_RESET);
+ if (unlikely(ret == FAILED)) {
+ /*
+ * Do port reset or R_Port LOGO:
+ * 1. FAILED: send failed
+ * 2. FAILED: semaphore timeout
+		 * 3. SUCCESS: rcvd rsp & semaphore has been woken up
+ */
+ unf_tmf_abnormal_recovery(unf_lport, unf_rport, unf_xchg);
+ }
+
+ /*
+ * 8. Release resource immediately if necessary
+	 * NOTE: here, either the semaphore timed out or the rsp was received
+	 * (the semaphore has been woken up)
+ */
+ if (likely(!unf_lport->port_removing || unf_lport->root_lport != unf_lport))
+ unf_cm_free_xchg(unf_xchg->lport, unf_xchg);
+
+ /* decrease exchange ref_cnt */
+ unf_xchg_ref_dec(unf_xchg, INI_EH_DEVICE_RESET);
+
+ return SUCCESS;
+}
+
+int unf_cm_bus_reset_handler(struct unf_scsi_cmnd *scsi_cmnd)
+{
+ /* SCSI BUS Reset Command --->>> FC Port Reset Command */
+ struct unf_lport *unf_lport = NULL;
+ int cmnd_result = 0;
+
+ /* 1. Get L_Port */
+ unf_lport = unf_find_lport_by_scsi_cmd(scsi_cmnd);
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Can't find port by scsi_host_id(0x%x)",
+ UNF_GET_SCSI_HOST_ID_BY_CMND(scsi_cmnd));
+
+ return FAILED;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_KEVENT,
+ "[event]Do port reset with scsi_bus_reset");
+
+ cmnd_result = unf_cm_reset_port(unf_lport->port_id);
+ if (unlikely(cmnd_result == UNF_RETURN_ERROR))
+ return FAILED;
+ else
+ return SUCCESS;
+}
+
+void unf_process_scsi_mgmt_result(struct unf_frame_pkg *pkg,
+ struct unf_xchg *xchg)
+{
+ u8 *rsp_info = NULL;
+ u8 rsp_code = 0;
+ u32 code_index = 0;
+
+	/*
+	 * Note: RSP_CODE is the third byte of FCP_RSP_INFO; on little endian
+	 * it is byte 0. For details see FCP-4, Table 26, FCP_RSP_INFO field
+	 * format.
+	 * *
+	 * 1. state setting
+	 * 2. wake up semaphore
+	 */
+ FC_CHECK_RETURN_VOID(pkg);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ xchg->tmf_state |= TMF_RESPONSE_RECEIVED;
+
+ if (UNF_GET_LL_ERR(pkg) != UNF_IO_SUCCESS ||
+ pkg->unf_rsp_pload_bl.length > UNF_RESPONE_DATA_LEN) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Send scsi manage command failed with error code(0x%x) resp len(0x%x)",
+ UNF_GET_LL_ERR(pkg), pkg->unf_rsp_pload_bl.length);
+
+ xchg->scsi_cmnd_info.result = UNF_IO_FAILED;
+
+ /* wakeup semaphore & return */
+ up(&xchg->task_sema);
+
+ return;
+ }
+
+ rsp_info = pkg->unf_rsp_pload_bl.buffer_ptr;
+ if (rsp_info && pkg->unf_rsp_pload_bl.length != 0) {
+		/* convert to little endian if necessary */
+ if (pkg->byte_orders & UNF_BIT_3)
+ unf_big_end_to_cpu(rsp_info, pkg->unf_rsp_pload_bl.length);
+ }
+
+ if (!rsp_info) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]FCP response data pointer is NULL with Xchg TAG(0x%x)",
+ xchg->hotpooltag);
+
+ xchg->scsi_cmnd_info.result = UNF_IO_SUCCESS;
+
+ /* wakeup semaphore & return */
+ up(&xchg->task_sema);
+
+ return;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]FCP response data length(0x%x), RSP_CODE(0x%x:%x:%x:%x:%x:%x:%x:%x)",
+ pkg->unf_rsp_pload_bl.length, rsp_info[ARRAY_INDEX_0],
+ rsp_info[ARRAY_INDEX_1], rsp_info[ARRAY_INDEX_2],
+ rsp_info[ARRAY_INDEX_3], rsp_info[ARRAY_INDEX_4],
+ rsp_info[ARRAY_INDEX_5], rsp_info[ARRAY_INDEX_6],
+ rsp_info[ARRAY_INDEX_7]);
+
+ rsp_code = rsp_info[code_index];
+ if (rsp_code == UNF_FCP_TM_RSP_COMPLETE || rsp_code == UNF_FCP_TM_RSP_SUCCEED)
+ xchg->scsi_cmnd_info.result = UNF_IO_SUCCESS;
+ else
+ xchg->scsi_cmnd_info.result = UNF_IO_FAILED;
+
+ /* wakeup semaphore & return */
+ up(&xchg->task_sema);
+}
+
+static void unf_build_task_mgmt_fcp_cmnd(struct unf_fcp_cmnd *fcp_cmnd,
+ struct unf_scsi_cmnd *scsi_cmnd,
+ enum unf_task_mgmt_cmd task_mgmt)
+{
+ FC_CHECK_RETURN_VOID(fcp_cmnd);
+ FC_CHECK_RETURN_VOID(scsi_cmnd);
+
+ unf_big_end_to_cpu((void *)scsi_cmnd->lun_id, UNF_FCP_LUNID_LEN_8);
+ (*(u64 *)(scsi_cmnd->lun_id)) >>= UNF_SHIFT_8;
+ memcpy(fcp_cmnd->lun, scsi_cmnd->lun_id, sizeof(fcp_cmnd->lun));
+
+ /*
+ * If the TASK MANAGEMENT FLAGS field is set to a nonzero value,
+ * the FCP_CDB field, the FCP_DL field, the TASK ATTRIBUTE field,
+ * the RDDATA bit, and the WRDATA bit shall be ignored and the
+ * FCP_BIDIRECTIONAL_READ_DL field shall not be included in the FCP_CMND
+ * IU payload
+ */
+ fcp_cmnd->control = UNF_SET_TASK_MGMT_FLAGS((u32)(task_mgmt));
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "SCSI cmnd(0x%x) is task mgmt cmnd. ntrl Flag(LITTLE END) is 0x%x.",
+ task_mgmt, fcp_cmnd->control);
+}
+
+int unf_send_scsi_mgmt_cmnd(struct unf_xchg *xchg, struct unf_lport *lport,
+ struct unf_rport *rport,
+ struct unf_scsi_cmnd *scsi_cmnd,
+ enum unf_task_mgmt_cmd task_mgnt_cmd_type)
+{
+ /*
+ * 1. Device/LUN reset
+ * 2. Target/Session reset
+ */
+ struct unf_xchg *unf_xchg = NULL;
+ int ret = SUCCESS;
+ struct unf_frame_pkg pkg = {0};
+ ulong xchg_flag = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, FAILED);
+ FC_CHECK_RETURN_VALUE(rport, FAILED);
+ FC_CHECK_RETURN_VALUE(xchg, FAILED);
+ FC_CHECK_RETURN_VALUE(scsi_cmnd, FAILED);
+ FC_CHECK_RETURN_VALUE(task_mgnt_cmd_type <= UNF_FCP_TM_TERMINATE_TASK &&
+ task_mgnt_cmd_type >= UNF_FCP_TM_QUERY_TASK_SET, FAILED);
+
+ unf_xchg = xchg;
+ unf_xchg->lport = lport;
+ unf_xchg->rport = rport;
+
+ /* 1. State: Up_Task */
+ spin_lock_irqsave(&unf_xchg->xchg_state_lock, xchg_flag);
+ unf_xchg->io_state |= INI_IO_STATE_UPTASK;
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, xchg_flag);
+ pkg.frame_head.oxid_rxid = ((u32)unf_xchg->oxid << (u32)UNF_SHIFT_16) | unf_xchg->rxid;
+
+ /* 2. Set TASK MANAGEMENT FLAGS of FCP_CMND to the corresponding task
+ * management command
+ */
+ unf_build_task_mgmt_fcp_cmnd(&unf_xchg->fcp_cmnd, scsi_cmnd, task_mgnt_cmd_type);
+
+ pkg.xchg_contex = unf_xchg;
+ pkg.private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = rport->rport_index;
+ pkg.fcp_cmnd = &unf_xchg->fcp_cmnd;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = unf_xchg->hotpooltag | UNF_HOTTAG_FLAG;
+ pkg.frame_head.csctl_sid = lport->nport_id;
+ pkg.frame_head.rctl_did = rport->nport_id;
+
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
+
+ if (unlikely(lport->pcie_link_down)) {
+ unf_free_lport_all_xchg(lport);
+ return SUCCESS;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) send task_cmnd(0x%x) to RPort(0x%x) Hottag(0x%x) lunid(0x%llx)",
+ lport->port_id, task_mgnt_cmd_type, rport->nport_id,
+ unf_xchg->hotpooltag, *((u64 *)scsi_cmnd->lun_id));
+
+ /* 3. Init exchange task semaphore */
+ sema_init(&unf_xchg->task_sema, 0);
+
+ /* 4. Send Mgmt Task to low-level */
+ if (unf_hardware_start_io(lport, &pkg) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send task_cmnd(0x%x) to RPort(0x%x) failed",
+ lport->port_id, task_mgnt_cmd_type, rport->nport_id);
+
+ return FAILED;
+ }
+
+	/*
+	 * Semaphore timeout handling.
+	 **
+	 * Code review note: the second parameter must be converted to jiffies.
+	 * The semaphore is taken after the message has been sent successfully
+	 * and is released either on timeout or when it is woken up.
+	 **
+	 * 5. The semaphore is initialized to zero when the Mgmt Task message
+	 * is sent and is woken up when the RSP message is received. If it is
+	 * not woken up, down_timeout() expires, i.e. no RSP message was
+	 * received within the timeout period.
+	 */
+ if (down_timeout(&unf_xchg->task_sema, (s64)msecs_to_jiffies((u32)UNF_WAIT_SEM_TIMEOUT))) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send task_cmnd(0x%x) to RPort(0x%x) timeout scsi id(0x%x) lun id(0x%x)",
+ lport->nport_id, task_mgnt_cmd_type,
+ rport->nport_id, scsi_cmnd->scsi_id,
+ (u32)scsi_cmnd->raw_lun_id);
+ unf_notify_chip_free_xid(unf_xchg);
+ /* semaphore timeout */
+ ret = FAILED;
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ if (lport->states == UNF_LPORT_ST_RESET)
+ ret = SUCCESS;
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ return ret;
+ }
+
+	/*
+	 * 6. NOTE: no timeout (the semaphore has been woken up).
+	 * Check the Scsi_Cmnd (Mgmt Task) result.
+	 * *
+	 * FAILED: error code set or RSP indicates an error
+	 * SUCCESS: otherwise
+	 */
+ if (unf_xchg->scsi_cmnd_info.result == UNF_IO_SUCCESS) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) send task_cmnd(0x%x) to RPort(0x%x) and receive rsp succeed",
+ lport->nport_id, task_mgnt_cmd_type, rport->nport_id);
+
+ ret = SUCCESS;
+ } else {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send task_cmnd(0x%x) to RPort(0x%x) and receive rsp failed scsi id(0x%x) lun id(0x%x)",
+ lport->nport_id, task_mgnt_cmd_type, rport->nport_id,
+ scsi_cmnd->scsi_id, (u32)scsi_cmnd->raw_lun_id);
+
+ ret = FAILED;
+ }
+
+ return ret;
+}
+
+u32 unf_recv_tmf_marker_status(void *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 uret = RETURN_OK;
+ struct unf_xchg *unf_xchg = NULL;
+ u16 hot_pool_tag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ unf_lport = (struct unf_lport *)lport;
+
+ /* Find exchange which point to marker sts */
+ if (!unf_lport->xchg_mgr_temp.unf_look_up_xchg_by_tag) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) tag function is NULL", unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ hot_pool_tag =
+ (u16)(pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]);
+
+ unf_xchg =
+ (struct unf_xchg *)(unf_lport->xchg_mgr_temp
+ .unf_look_up_xchg_by_tag((void *)unf_lport, hot_pool_tag));
+ if (!unf_xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) find exchange by tag(0x%x) failed",
+ unf_lport->port_id, unf_lport->nport_id, hot_pool_tag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /*
+ * NOTE: set exchange TMF state with MARKER_STS_RECEIVED
+ * *
+ * About TMF state
+ * 1. STS received
+ * 2. Response received
+ * 3. Do check if necessary
+ */
+ unf_xchg->tmf_state |= MARKER_STS_RECEIVED;
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Marker STS: D_ID(0x%x) S_ID(0x%x) OX_ID(0x%x) RX_ID(0x%x), EXCH: D_ID(0x%x) S_ID(0x%x) OX_ID(0x%x) RX_ID(0x%x)",
+ pkg->frame_head.rctl_did & UNF_NPORTID_MASK,
+ pkg->frame_head.csctl_sid & UNF_NPORTID_MASK,
+ (u16)(pkg->frame_head.oxid_rxid >> UNF_SHIFT_16),
+ (u16)(pkg->frame_head.oxid_rxid), unf_xchg->did, unf_xchg->sid,
+ unf_xchg->oxid, unf_xchg->rxid);
+
+ return uret;
+}
+
+u32 unf_recv_abts_marker_status(void *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ u16 hot_pool_tag = 0;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ unf_lport = (struct unf_lport *)lport;
+
+ /* Find exchange by tag */
+ if (!unf_lport->xchg_mgr_temp.unf_look_up_xchg_by_tag) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) tag function is NULL", unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ hot_pool_tag = (u16)(pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]);
+
+ unf_xchg =
+ (struct unf_xchg *)(unf_lport->xchg_mgr_temp.unf_look_up_xchg_by_tag((void *)unf_lport,
+ hot_pool_tag));
+ if (!unf_xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) find exchange by tag(0x%x) failed",
+ unf_lport->port_id, unf_lport->nport_id, hot_pool_tag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /*
+ * NOTE: set exchange ABTS state with MARKER_STS_RECEIVED
+ * *
+ * About exchange ABTS state
+ * 1. STS received
+ * 2. Response received
+ * 3. Do check if necessary
+ * *
+ * About Exchange status get from low level
+ * 1. Set: when RCVD ABTS Marker
+ * 2. Set: when RCVD ABTS Req Done
+ * 3. value: set value with pkg->status
+ */
+ spin_lock_irqsave(&unf_xchg->xchg_state_lock, flags);
+ unf_xchg->ucode_abts_state = pkg->status;
+ unf_xchg->abts_state |= MARKER_STS_RECEIVED;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]Port(0x%x) wake up SEMA for Abts marker exchange(0x%p) oxid(0x%x 0x%x) hottag(0x%x) status(0x%x)",
+ unf_lport->port_id, unf_xchg, unf_xchg->oxid, unf_xchg->rxid,
+ unf_xchg->hotpooltag, pkg->abts_maker_status);
+
+	/*
+	 * NOTE: if this is the second ABTS marker, or the ABTS response has
+	 * already been received, there is no need to wake up the semaphore
+	 */
+ if ((INI_IO_STATE_ABORT_TIMEOUT & unf_xchg->io_state) ||
+ (ABTS_RESPONSE_RECEIVED & unf_xchg->abts_state)) {
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]Port(0x%x) no need to wake up SEMA for Abts marker ABTS_STATE(0x%x) IO_STATE(0x%x)",
+ unf_lport->port_id, unf_xchg->abts_state, unf_xchg->io_state);
+
+ return RETURN_OK;
+ }
+
+ if (unf_xchg->io_state & INI_IO_STATE_TMF_ABORT) {
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flags);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]Port(0x%x) receive Abts marker, exchange(%p) state(0x%x) free it",
+ unf_lport->port_id, unf_xchg, unf_xchg->io_state);
+
+ unf_cm_free_xchg(unf_lport, unf_xchg);
+ } else {
+ spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flags);
+ up(&unf_xchg->task_sema);
+ }
+
+ return RETURN_OK;
+}
diff --git a/drivers/scsi/spfc/common/unf_io_abnormal.h b/drivers/scsi/spfc/common/unf_io_abnormal.h
new file mode 100644
index 000000000000..6eced45c6497
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_io_abnormal.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_IO__ABNORMAL_H
+#define UNF_IO__ABNORMAL_H
+
+#include "unf_type.h"
+#include "unf_lport.h"
+#include "unf_exchg.h"
+
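+/* The low-level driver error code is carried in the upper 16 bits of pkg->status. */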
+#define UNF_GET_LL_ERR(pkg) (((pkg)->status) >> 16)
+
+void unf_process_scsi_mgmt_result(struct unf_frame_pkg *pkg,
+ struct unf_xchg *xchg);
+u32 unf_hardware_start_io(struct unf_lport *lport, struct unf_frame_pkg *pkg);
+u32 unf_recv_abts_marker_status(void *lport, struct unf_frame_pkg *pkg);
+u32 unf_recv_tmf_marker_status(void *lport, struct unf_frame_pkg *pkg);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_log.h b/drivers/scsi/spfc/common/unf_log.h
new file mode 100644
index 000000000000..801e23ac0829
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_log.h
@@ -0,0 +1,178 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_LOG_H
+#define UNF_LOG_H
+#include "unf_type.h"
+
+#define UNF_CRITICAL 1
+#define UNF_ERR 2
+#define UNF_WARN 3
+#define UNF_KEVENT 4
+#define UNF_MAJOR 5
+#define UNF_MINOR 6
+#define UNF_INFO 7
+#define UNF_DATA 7
+#define UNF_ALL 7
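+
+/*
+ * Lower value means higher severity. FC_DRV_PRINT() below emits a message
+ * only when its level is less than or equal to log_print_level and maps the
+ * level to the matching printk severity.
+ */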
+
+enum unf_debug_type {
+ UNF_DEBUG_TYPE_MML = 0,
+ UNF_DEBUG_TYPE_DIAGNOSE = 1,
+ UNF_DEBUG_TYPE_MESSAGE = 2,
+ UNF_DEBUG_TYPE_BUTT
+};
+
+enum unf_log_attr {
+ UNF_LOG_LOGIN_ATT = 0x1,
+ UNF_LOG_IO_ATT = 0x2,
+ UNF_LOG_EQUIP_ATT = 0x4,
+ UNF_LOG_REG_ATT = 0x8,
+ UNF_LOG_REG_MML_TEST = 0x10,
+ UNF_LOG_EVENT = 0x20,
+ UNF_LOG_NORMAL = 0x40,
+ UNF_LOG_ABNORMAL = 0X80,
+ UNF_LOG_BUTT
+};
+
+enum event_log {
+ UNF_EVTLOG_DRIVER_SUC = 0,
+ UNF_EVTLOG_DRIVER_INFO,
+ UNF_EVTLOG_DRIVER_WARN,
+ UNF_EVTLOG_DRIVER_ERR,
+ UNF_EVTLOG_LINK_SUC,
+ UNF_EVTLOG_LINK_INFO,
+ UNF_EVTLOG_LINK_WARN,
+ UNF_EVTLOG_LINK_ERR,
+ UNF_EVTLOG_IO_SUC,
+ UNF_EVTLOG_IO_INFO,
+ UNF_EVTLOG_IO_WARN,
+ UNF_EVTLOG_IO_ERR,
+ UNF_EVTLOG_TOOL_SUC,
+ UNF_EVTLOG_TOOL_INFO,
+ UNF_EVTLOG_TOOL_WARN,
+ UNF_EVTLOG_TOOL_ERR,
+ UNF_EVTLOG_BUT
+};
+
+#define UNF_IO_ATT_PRINT_TIMES 2
+#define UNF_LOGIN_ATT_PRINT_TIMES 100
+
+#define UNF_IO_ATT_PRINT_LIMIT msecs_to_jiffies(2 * 1000)
+
+extern u32 unf_dgb_level;
+extern u32 log_print_level;
+extern u32 log_limited_times;
+
+#define DRV_LOG_LIMIT(module_id, log_level, log_att, format, ...) \
+ do { \
+ static unsigned long pre; \
+ static int should_print = UNF_LOGIN_ATT_PRINT_TIMES; \
+ if (time_after_eq(jiffies, pre + (UNF_IO_ATT_PRINT_LIMIT))) { \
+ if (log_att == UNF_LOG_ABNORMAL) { \
+ should_print = UNF_IO_ATT_PRINT_TIMES; \
+ } else { \
+ should_print = log_limited_times; \
+ } \
+ } \
+ if (should_print < 0) { \
+ if (log_att != UNF_LOG_ABNORMAL) \
+ pre = jiffies; \
+ break; \
+ } \
+ if (should_print-- > 0) { \
+ printk(log_level "[%d][FC_UNF]" format "[%s][%-5d]\n", \
+ smp_processor_id(), ##__VA_ARGS__, __func__, \
+ __LINE__); \
+ } \
+ if (should_print == 0) { \
+ printk(log_level "[FC_UNF]log is limited[%s][%-5d]\n", \
+ __func__, __LINE__); \
+ } \
+ pre = jiffies; \
+ } while (0)
+
+#define FC_CHECK_RETURN_VALUE(condition, ret) \
+ do { \
+ if (unlikely(!(condition))) { \
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, \
+ UNF_ERR, "Para check(%s) invalid", \
+ #condition); \
+ return ret; \
+ } \
+ } while (0)
+
+#define FC_CHECK_RETURN_VOID(condition) \
+ do { \
+ if (unlikely(!(condition))) { \
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, \
+ UNF_ERR, "Para check(%s) invalid", \
+ #condition); \
+ return; \
+ } \
+ } while (0)
+
+#define FC_DRV_PRINT(log_att, log_level, format, ...) \
+ do { \
+ if (unlikely((log_level) <= log_print_level)) { \
+ if (log_level == UNF_CRITICAL) { \
+ DRV_LOG_LIMIT(UNF_PID, KERN_CRIT, \
+ log_att, format, ##__VA_ARGS__); \
+ } else if (log_level == UNF_WARN) { \
+ DRV_LOG_LIMIT(UNF_PID, KERN_WARNING, \
+ log_att, format, ##__VA_ARGS__); \
+ } else if (log_level == UNF_ERR) { \
+ DRV_LOG_LIMIT(UNF_PID, KERN_ERR, \
+ log_att, format, ##__VA_ARGS__); \
+ } else if (log_level == UNF_MAJOR || \
+ log_level == UNF_MINOR || \
+ log_level == UNF_KEVENT) { \
+ DRV_LOG_LIMIT(UNF_PID, KERN_NOTICE, \
+ log_att, format, ##__VA_ARGS__); \
+ } else if (log_level == UNF_INFO || \
+ log_level == UNF_DATA) { \
+ DRV_LOG_LIMIT(UNF_PID, KERN_INFO, \
+ log_att, format, ##__VA_ARGS__); \
+ } \
+ } \
+ } while (0)
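+
+/*
+ * Typical usage (illustrative only):
+ *
+ *   FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ *                "[warn]Port(0x%x) link down", lport->port_id);
+ *
+ * The attribute/level pair feeds DRV_LOG_LIMIT(), which rate-limits output
+ * per call site: at most log_limited_times messages (UNF_IO_ATT_PRINT_TIMES
+ * for UNF_LOG_ABNORMAL) are printed within each UNF_IO_ATT_PRINT_LIMIT
+ * window.
+ */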
+
+#define UNF_PRINT_SFS(dbg_level, portid, data, size) \
+ do { \
+ if ((dbg_level) <= log_print_level) { \
+ u32 cnt = 0; \
+ printk(KERN_INFO "[INFO]Port(0x%x) sfs:0x", (portid)); \
+ for (cnt = 0; cnt < (size) / 4; cnt++) { \
+ printk(KERN_INFO "%08x ", \
+ ((u32 *)(data))[cnt]); \
+ } \
+ printk(KERN_INFO "[FC_UNF][%s]\n", __func__); \
+ } \
+ } while (0)
+
+#define UNF_PRINT_SFS_LIMIT(dbg_level, portid, data, size) \
+ do { \
+ if ((dbg_level) <= log_print_level) { \
+ static ulong pre; \
+ static int should_print = UNF_LOGIN_ATT_PRINT_TIMES; \
+ if (time_after_eq( \
+ jiffies, pre + UNF_IO_ATT_PRINT_LIMIT)) { \
+ should_print = log_limited_times; \
+ } \
+ if (should_print < 0) { \
+ pre = jiffies; \
+ break; \
+ } \
+ if (should_print-- > 0) { \
+ UNF_PRINT_SFS(dbg_level, portid, data, size); \
+ } \
+ if (should_print == 0) { \
+ printk( \
+ KERN_INFO \
+ "[FC_UNF]sfs log is limited[%s][%-5d]\n", \
+ __func__, __LINE__); \
+ } \
+ pre = jiffies; \
+ } \
+ } while (0)
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_lport.c b/drivers/scsi/spfc/common/unf_lport.c
new file mode 100644
index 000000000000..66d3ac14d676
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_lport.c
@@ -0,0 +1,1008 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_lport.h"
+#include "unf_log.h"
+#include "unf_rport.h"
+#include "unf_exchg.h"
+#include "unf_service.h"
+#include "unf_ls.h"
+#include "unf_gs.h"
+#include "unf_portman.h"
+
+static void unf_lport_config(struct unf_lport *lport);
+void unf_cm_mark_dirty_mem(struct unf_lport *lport, enum unf_lport_dirty_flag type)
+{
+ FC_CHECK_RETURN_VOID((lport));
+
+ lport->dirty_flag |= (u32)type;
+}
+
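+/*
+ * Schedule the periodic L_Port route work and take an L_Port reference that
+ * is dropped again in unf_destroy_lport_route().
+ */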
+u32 unf_init_lport_route(struct unf_lport *lport)
+{
+ u32 ret = RETURN_OK;
+ int ret_val = 0;
+
+ FC_CHECK_RETURN_VALUE((lport), UNF_RETURN_ERROR);
+
+ /* Init L_Port route work */
+ INIT_DELAYED_WORK(&lport->route_timer_work, unf_lport_route_work);
+
+ /* Delay route work */
+ ret_val = queue_delayed_work(unf_wq, &lport->route_timer_work,
+ (ulong)msecs_to_jiffies(UNF_LPORT_POLL_TIMER));
+	if (unlikely(!ret_val)) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_WARN,
+ "[warn]Port(0x%x) schedule route work failed",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ ret = unf_lport_ref_inc(lport);
+ return ret;
+}
+
+void unf_destroy_lport_route(struct unf_lport *lport)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ /* Cancel (route timer) delay work */
+ UNF_DELAYED_WORK_SYNC(ret, (lport->port_id), (&lport->route_timer_work),
+ "Route Timer work");
+ if (ret == RETURN_OK)
+ /* Corresponding to ADD operation */
+ unf_lport_ref_dec(lport);
+
+ lport->destroy_step = UNF_LPORT_DESTROY_STEP_2_CLOSE_ROUTE;
+}
+
+void unf_init_port_parms(struct unf_lport *lport)
+{
+ INIT_LIST_HEAD(&lport->list_vports_head);
+ INIT_LIST_HEAD(&lport->list_intergrad_vports);
+ INIT_LIST_HEAD(&lport->list_destroy_vports);
+ INIT_LIST_HEAD(&lport->entry_lport);
+ INIT_LIST_HEAD(&lport->list_qos_head);
+
+ spin_lock_init(&lport->qos_mgr_lock);
+ spin_lock_init(&lport->lport_state_lock);
+
+ lport->max_frame_size = max_frame_size;
+ lport->ed_tov = UNF_DEFAULT_EDTOV;
+ lport->ra_tov = UNF_DEFAULT_RATOV;
+ lport->fabric_node_name = 0;
+ lport->qos_level = UNF_QOS_LEVEL_DEFAULT;
+ lport->qos_cs_ctrl = false;
+ lport->priority = (bool)UNF_PRIORITY_DISABLE;
+ lport->port_dirt_exchange = false;
+
+ unf_lport_config(lport);
+
+ unf_set_lport_state(lport, UNF_LPORT_ST_ONLINE);
+
+ lport->link_up = UNF_PORT_LINK_DOWN;
+ lport->port_removing = false;
+ lport->lport_free_completion = NULL;
+ lport->last_tx_fault_jif = 0;
+ lport->enhanced_features = 0;
+ lport->destroy_step = INVALID_VALUE32;
+ lport->dirty_flag = 0;
+ lport->switch_state = false;
+ lport->bbscn_support = false;
+ lport->loop_back_test_mode = false;
+ lport->start_work_state = UNF_START_WORK_STOP;
+ lport->sfp_power_fault_count = 0;
+ lport->sfp_9545_fault_count = 0;
+
+ atomic_set(&lport->lport_no_operate_flag, UNF_LPORT_NORMAL);
+ atomic_set(&lport->port_ref_cnt, 0);
+ atomic_set(&lport->scsi_session_add_success, 0);
+ atomic_set(&lport->scsi_session_add_failed, 0);
+ atomic_set(&lport->scsi_session_del_success, 0);
+ atomic_set(&lport->scsi_session_del_failed, 0);
+ atomic_set(&lport->add_start_work_failed, 0);
+ atomic_set(&lport->add_closing_work_failed, 0);
+ atomic_set(&lport->alloc_scsi_id, 0);
+ atomic_set(&lport->resume_scsi_id, 0);
+ atomic_set(&lport->reuse_scsi_id, 0);
+ atomic_set(&lport->device_alloc, 0);
+ atomic_set(&lport->device_destroy, 0);
+ atomic_set(&lport->session_loss_tmo, 0);
+ atomic_set(&lport->host_no, 0);
+ atomic64_set(&lport->exchg_index, 0x1000);
+ atomic_inc(&lport->port_ref_cnt);
+
+ memset(&lport->port_dynamic_info, 0, sizeof(struct unf_port_dynamic_info));
+ memset(&lport->link_service_info, 0, sizeof(struct unf_link_service_collect));
+ memset(&lport->err_code_sum, 0, sizeof(struct unf_err_code));
+}
+
+void unf_reset_lport_params(struct unf_lport *lport)
+{
+ struct unf_lport *unf_lport = lport;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_lport->link_up = UNF_PORT_LINK_DOWN;
+ unf_lport->nport_id = 0;
+ unf_lport->max_frame_size = max_frame_size;
+ unf_lport->ed_tov = UNF_DEFAULT_EDTOV;
+ unf_lport->ra_tov = UNF_DEFAULT_RATOV;
+ unf_lport->fabric_node_name = 0;
+}
+
+static enum unf_lport_login_state
+unf_lport_state_online(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_LINK_UP:
+ next_state = UNF_LPORT_ST_LINK_UP;
+ break;
+
+ case UNF_EVENT_LPORT_NORMAL_ENTER:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state unf_lport_state_initial(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_LINK_UP:
+ next_state = UNF_LPORT_ST_LINK_UP;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state unf_lport_state_linkup(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_NORMAL_ENTER:
+ next_state = UNF_LPORT_ST_FLOGI_WAIT;
+ break;
+
+ case UNF_EVENT_LPORT_READY:
+ next_state = UNF_LPORT_ST_READY;
+ break;
+
+ case UNF_EVENT_LPORT_LINK_DOWN:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state unf_lport_state_flogi_wait(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_REMOTE_ACC:
+ next_state = UNF_LPORT_ST_PLOGI_WAIT;
+ break;
+
+ case UNF_EVENT_LPORT_READY:
+ next_state = UNF_LPORT_ST_READY;
+ break;
+
+ case UNF_EVENT_LPORT_REMOTE_TIMEOUT:
+ next_state = UNF_LPORT_ST_LOGO;
+ break;
+
+ case UNF_EVENT_LPORT_LINK_DOWN:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state unf_lport_state_plogi_wait(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_REMOTE_ACC:
+ next_state = UNF_LPORT_ST_RFT_ID_WAIT;
+ break;
+
+ case UNF_EVENT_LPORT_REMOTE_TIMEOUT:
+ next_state = UNF_LPORT_ST_LOGO;
+ break;
+
+ case UNF_EVENT_LPORT_LINK_DOWN:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state
+unf_lport_state_rftid_wait(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_REMOTE_ACC:
+ next_state = UNF_LPORT_ST_RFF_ID_WAIT;
+ break;
+
+ case UNF_EVENT_LPORT_REMOTE_TIMEOUT:
+ next_state = UNF_LPORT_ST_LOGO;
+ break;
+
+ case UNF_EVENT_LPORT_LINK_DOWN:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state unf_lport_state_rffid_wait(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_REMOTE_ACC:
+ next_state = UNF_LPORT_ST_SCR_WAIT;
+ break;
+
+ case UNF_EVENT_LPORT_REMOTE_TIMEOUT:
+ next_state = UNF_LPORT_ST_LOGO;
+ break;
+
+ case UNF_EVENT_LPORT_LINK_DOWN:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state unf_lport_state_scr_wait(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_REMOTE_ACC:
+ next_state = UNF_LPORT_ST_READY;
+ break;
+
+ case UNF_EVENT_LPORT_REMOTE_TIMEOUT:
+ next_state = UNF_LPORT_ST_LOGO;
+ break;
+
+ case UNF_EVENT_LPORT_LINK_DOWN:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state
+unf_lport_state_logo(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_NORMAL_ENTER:
+ next_state = UNF_LPORT_ST_OFFLINE;
+ break;
+
+ case UNF_EVENT_LPORT_LINK_DOWN:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state unf_lport_state_offline(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_ONLINE:
+ next_state = UNF_LPORT_ST_ONLINE;
+ break;
+
+ case UNF_EVENT_LPORT_RESET:
+ next_state = UNF_LPORT_ST_RESET;
+ break;
+
+ case UNF_EVENT_LPORT_LINK_DOWN:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state unf_lport_state_reset(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_NORMAL_ENTER:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_lport_login_state unf_lport_state_ready(enum unf_lport_login_state old_state,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+
+ switch (lport_event) {
+ case UNF_EVENT_LPORT_LINK_DOWN:
+ next_state = UNF_LPORT_ST_INITIAL;
+ break;
+
+ case UNF_EVENT_LPORT_RESET:
+ next_state = UNF_LPORT_ST_RESET;
+ break;
+
+ case UNF_EVENT_LPORT_OFFLINE:
+ next_state = UNF_LPORT_ST_LOGO;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
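+/*
+ * L_Port login state machine table: each entry pairs a login state with its
+ * transition handler; unf_lport_state_ma() looks up the current state and
+ * applies the event to pick the next state.
+ */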
+static struct unf_lport_state_ma lport_state[] = {
+ {UNF_LPORT_ST_ONLINE, unf_lport_state_online},
+ {UNF_LPORT_ST_INITIAL, unf_lport_state_initial},
+ {UNF_LPORT_ST_LINK_UP, unf_lport_state_linkup},
+ {UNF_LPORT_ST_FLOGI_WAIT, unf_lport_state_flogi_wait},
+ {UNF_LPORT_ST_PLOGI_WAIT, unf_lport_state_plogi_wait},
+ {UNF_LPORT_ST_RFT_ID_WAIT, unf_lport_state_rftid_wait},
+ {UNF_LPORT_ST_RFF_ID_WAIT, unf_lport_state_rffid_wait},
+ {UNF_LPORT_ST_SCR_WAIT, unf_lport_state_scr_wait},
+ {UNF_LPORT_ST_LOGO, unf_lport_state_logo},
+ {UNF_LPORT_ST_OFFLINE, unf_lport_state_offline},
+ {UNF_LPORT_ST_RESET, unf_lport_state_reset},
+ {UNF_LPORT_ST_READY, unf_lport_state_ready},
+};
+
+void unf_lport_state_ma(struct unf_lport *lport,
+ enum unf_lport_event lport_event)
+{
+ enum unf_lport_login_state old_state = UNF_LPORT_ST_ONLINE;
+ enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
+ u32 index = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ old_state = lport->states;
+
+ while (index < (sizeof(lport_state) / sizeof(struct unf_lport_state_ma))) {
+ if (lport->states == lport_state[index].lport_state) {
+ next_state = lport_state[index].lport_state_ma(old_state, lport_event);
+ break;
+ }
+ index++;
+ }
+
+ if (index >= (sizeof(lport_state) / sizeof(struct unf_lport_state_ma))) {
+ next_state = old_state;
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_MAJOR, "[info]Port(0x%x) hold state(0x%x)",
+ lport->port_id, lport->states);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) with old state(0x%x) event(0x%x) next state(0x%x)",
+ lport->port_id, old_state, lport_event, next_state);
+
+ unf_set_lport_state(lport, next_state);
+}
+
+u32 unf_lport_retry_flogi(struct unf_lport *lport)
+{
+ struct unf_rport *unf_rport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ /* Get (new) R_Port */
+ unf_rport = unf_get_rport_by_nport_id(lport, UNF_FC_FID_FLOGI);
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY, UNF_FC_FID_FLOGI);
+ if (unlikely(!unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Port(0x%x) allocate RPort failed",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Check L_Port state */
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ if (lport->states != UNF_LPORT_ST_FLOGI_WAIT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) no need to retry FLOGI with state(0x%x)",
+ lport->port_id, lport->states);
+
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+ return RETURN_OK;
+ }
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = UNF_FC_FID_FLOGI;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Send FLOGI or FDISC */
+ if (lport->root_lport != lport) {
+ ret = unf_send_fdisc(lport, unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send FDISC failed", lport->port_id);
+
+ /* Do L_Port recovery */
+ unf_lport_error_recovery(lport);
+ }
+ } else {
+ ret = unf_send_flogi(lport, unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send FLOGI failed\n", lport->port_id);
+
+ /* Do L_Port recovery */
+ unf_lport_error_recovery(lport);
+ }
+ }
+
+ return ret;
+}
+
+u32 unf_lport_name_server_register(struct unf_lport *lport,
+ enum unf_lport_login_state state)
+{
+ struct unf_rport *unf_rport = NULL;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 fabric_id = UNF_FC_FID_DIR_SERV;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ if (state == UNF_LPORT_ST_SCR_WAIT)
+ fabric_id = UNF_FC_FID_FCTRL;
+
+ /* Get (safe) R_Port */
+ unf_rport =
+ unf_get_rport_by_nport_id(lport, fabric_id);
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY,
+ fabric_id);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Port(0x%x) allocate RPort failed",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Update R_Port & L_Port state */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = fabric_id;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ unf_lport_state_ma(lport, UNF_EVENT_LPORT_NORMAL_ENTER);
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ switch (state) {
+ /* RFT_ID */
+ case UNF_LPORT_ST_RFT_ID_WAIT:
+ ret = unf_send_rft_id(lport, unf_rport);
+ break;
+ /* RFF_ID */
+ case UNF_LPORT_ST_RFF_ID_WAIT:
+ ret = unf_send_rff_id(lport, unf_rport, UNF_FC4_FCP_TYPE);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) register SCSI FC4Type to fabric(0xfffffc) failed",
+ lport->nport_id);
+ unf_lport_error_recovery(lport);
+ }
+ break;
+
+ /* SCR */
+ case UNF_LPORT_ST_SCR_WAIT:
+ ret = unf_send_scr(lport, unf_rport);
+ break;
+
+ /* PLOGI */
+ case UNF_LPORT_ST_PLOGI_WAIT:
+ default:
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ ret = unf_send_plogi(lport, unf_rport);
+ break;
+ }
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) register fabric(0xfffffc) failed",
+ lport->nport_id);
+
+ /* Do L_Port recovery */
+ unf_lport_error_recovery(lport);
+ }
+
+ return ret;
+}
+
+u32 unf_lport_enter_sns_logo(struct unf_lport *lport, struct unf_rport *rport)
+{
+ struct unf_rport *unf_rport = NULL;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ if (!rport)
+ unf_rport = unf_get_rport_by_nport_id(lport, UNF_FC_FID_DIR_SERV);
+ else
+ unf_rport = rport;
+
+ if (!unf_rport) {
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ unf_lport_state_ma(lport, UNF_EVENT_LPORT_NORMAL_ENTER);
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ return RETURN_OK;
+ }
+
+ /* Update L_Port & R_Port state */
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ unf_lport_state_ma(lport, UNF_EVENT_LPORT_NORMAL_ENTER);
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Do R_Port LOGO state */
+ unf_rport_enter_logo(lport, unf_rport);
+
+ return ret;
+}
+
+void unf_lport_enter_sns_plogi(struct unf_lport *lport)
+{
+ /* Fabric or Public Loop Mode: Login with Name server */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = NULL;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ /* Get (safe) R_Port */
+ unf_rport = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_DIR_SERV);
+ if (unf_rport) {
+ /* for port swap: Delete old R_Port if necessary */
+ if (unf_rport->local_nport_id != lport->nport_id) {
+ unf_rport_immediate_link_down(lport, unf_rport);
+ unf_rport = NULL;
+ }
+ }
+
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY,
+ UNF_FC_FID_DIR_SERV);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Port(0x%x) allocate RPort failed",
+ lport->port_id);
+
+ unf_lport_error_recovery(unf_lport);
+ return;
+ }
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = UNF_FC_FID_DIR_SERV;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Send PLOGI to Fabric(0xfffffc) */
+ ret = unf_send_plogi(unf_lport, unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send PLOGI to name server failed",
+ lport->port_id);
+
+ unf_lport_error_recovery(unf_lport);
+ }
+}
+
+int unf_get_port_params(void *arg_in, void *arg_out)
+{
+ struct unf_lport *unf_lport = (struct unf_lport *)arg_in;
+ struct unf_low_level_port_mgr_op *port_mgr = NULL;
+ struct unf_port_param port_params = {0};
+
+ FC_CHECK_RETURN_VALUE(arg_in, UNF_RETURN_ERROR);
+
+ port_mgr = &unf_lport->low_level_func.port_mgr_op;
+ if (!port_mgr->ll_port_config_get) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_WARN,
+ "[warn]Port(0x%x) low level port_config_get function is NULL",
+ unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
+ "[warn]Port(0x%x) get parameters with default:R_A_TOV(%d) E_D_TOV(%d)",
+ unf_lport->port_id, UNF_DEFAULT_FABRIC_RATOV,
+ UNF_DEFAULT_EDTOV);
+
+ port_params.ra_tov = UNF_DEFAULT_FABRIC_RATOV;
+ port_params.ed_tov = UNF_DEFAULT_EDTOV;
+
+ /* Update parameters with Fabric mode */
+ if (unf_lport->act_topo == UNF_ACT_TOP_PUBLIC_LOOP ||
+ unf_lport->act_topo == UNF_ACT_TOP_P2P_FABRIC) {
+ unf_lport->ra_tov = port_params.ra_tov;
+ unf_lport->ed_tov = port_params.ed_tov;
+ }
+
+ return RETURN_OK;
+}
+
+u32 unf_lport_enter_flogi(struct unf_lport *lport)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct unf_cm_event_report *event = NULL;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 nport_id = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ /* Get (safe) R_Port */
+ nport_id = UNF_FC_FID_FLOGI;
+ unf_rport = unf_get_rport_by_nport_id(lport, UNF_FC_FID_FLOGI);
+
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY, nport_id);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) allocate RPort failed",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+	/* Update L_Port state */
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ unf_lport_state_ma(lport, UNF_EVENT_LPORT_NORMAL_ENTER);
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ /* Update R_Port N_Port_ID */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = UNF_FC_FID_FLOGI;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ event = unf_get_one_event_node(lport);
+ if (event) {
+ event->lport = lport;
+ event->event_asy_flag = UNF_EVENT_ASYN;
+ event->unf_event_task = unf_get_port_params;
+ event->para_in = (void *)lport;
+ unf_post_one_event_node(lport, event);
+ }
+
+ if (lport->root_lport != lport) {
+ /* for NPIV */
+ ret = unf_send_fdisc(lport, unf_rport);
+ if (ret != RETURN_OK)
+ unf_lport_error_recovery(lport);
+ } else {
+ /* for Physical Port */
+ ret = unf_send_flogi(lport, unf_rport);
+ if (ret != RETURN_OK)
+ unf_lport_error_recovery(lport);
+ }
+
+ return ret;
+}
+
+void unf_set_lport_state(struct unf_lport *lport, enum unf_lport_login_state state)
+{
+ FC_CHECK_RETURN_VOID(lport);
+ if (lport->states != state)
+ lport->retries = 0;
+
+ lport->states = state;
+}
+
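+/*
+ * Retry-work handler: dispatched from lport->retry_work, it retries the login
+ * step that matches the current login state (FLOGI, name-server registration,
+ * ...) and then drops the reference taken when the work was queued.
+ */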
+static void unf_lport_timeout(struct work_struct *work)
+{
+ struct unf_lport *unf_lport = NULL;
+ enum unf_lport_login_state state = UNF_LPORT_ST_READY;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(work);
+ unf_lport = container_of(work, struct unf_lport, retry_work.work);
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ state = unf_lport->states;
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) is timeout with state(0x%x)",
+ unf_lport->port_id, state);
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ switch (state) {
+ /* FLOGI retry */
+ case UNF_LPORT_ST_FLOGI_WAIT:
+ (void)unf_lport_retry_flogi(unf_lport);
+ break;
+
+ case UNF_LPORT_ST_PLOGI_WAIT:
+ case UNF_LPORT_ST_RFT_ID_WAIT:
+ case UNF_LPORT_ST_RFF_ID_WAIT:
+ case UNF_LPORT_ST_SCR_WAIT:
+ (void)unf_lport_name_server_register(unf_lport, state);
+ break;
+
+ /* Send LOGO External */
+ case UNF_LPORT_ST_LOGO:
+ break;
+
+ /* Do nothing */
+ case UNF_LPORT_ST_OFFLINE:
+ case UNF_LPORT_ST_READY:
+ case UNF_LPORT_ST_RESET:
+ case UNF_LPORT_ST_ONLINE:
+ case UNF_LPORT_ST_INITIAL:
+ case UNF_LPORT_ST_LINK_UP:
+
+ unf_lport->retries = 0;
+ break;
+ default:
+ break;
+ }
+
+ unf_lport_ref_dec_to_destroy(unf_lport);
+}
+
+static void unf_lport_config(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(lport);
+
+ INIT_DELAYED_WORK(&lport->retry_work, unf_lport_timeout);
+
+ lport->max_retry_count = UNF_MAX_RETRY_COUNT;
+ lport->retries = 0;
+}
+
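+/*
+ * Login error recovery: while retries remain, re-arm retry_work after an
+ * E_D_TOV delay; once max_retry_count is exceeded, move the state machine
+ * with UNF_EVENT_LPORT_REMOTE_TIMEOUT and send LOGO to the name server.
+ */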
+void unf_lport_error_recovery(struct unf_lport *lport)
+{
+ ulong delay = 0;
+ ulong flag = 0;
+ int ret_val = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ ret = unf_lport_ref_inc(lport);
+ if (unlikely(ret != RETURN_OK)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) is removing and no need process",
+ lport->port_id);
+ return;
+ }
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+
+ /* Port State: removing */
+ if (lport->port_removing) {
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) is removing and no need process",
+ lport->port_id);
+
+ unf_lport_ref_dec_to_destroy(lport);
+ return;
+ }
+
+ /* Port State: offline */
+ if (lport->states == UNF_LPORT_ST_OFFLINE) {
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) is offline and no need process",
+ lport->port_id);
+
+ unf_lport_ref_dec_to_destroy(lport);
+ return;
+ }
+
+ /* Queue work state check */
+ if (delayed_work_pending(&lport->retry_work)) {
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ unf_lport_ref_dec_to_destroy(lport);
+ return;
+ }
+
+ /* Do retry operation */
+ if (lport->retries < lport->max_retry_count) {
+ lport->retries++;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) enter recovery and retry %u times",
+ lport->port_id, lport->nport_id, lport->retries);
+
+ delay = (ulong)lport->ed_tov;
+ ret_val = queue_delayed_work(unf_wq, &lport->retry_work,
+ (ulong)msecs_to_jiffies((u32)delay));
+ if (ret_val != 0) {
+ atomic_inc(&lport->port_ref_cnt);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) queue work success and reference count is %d",
+ lport->port_id,
+ atomic_read(&lport->port_ref_cnt));
+ }
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+ } else {
+ unf_lport_state_ma(lport, UNF_EVENT_LPORT_REMOTE_TIMEOUT);
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) register operation timeout and do LOGO",
+ lport->port_id);
+
+ (void)unf_lport_enter_sns_logo(lport, NULL);
+ }
+
+ unf_lport_ref_dec_to_destroy(lport);
+}
+
+struct unf_lport *unf_cm_lookup_vport_by_vp_index(struct unf_lport *lport, u16 vp_index)
+{
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ if (vp_index == 0)
+ return lport;
+
+ if (!lport->lport_mgr_temp.unf_look_up_vport_by_index) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) function do look up vport by index is NULL",
+ lport->port_id);
+
+ return NULL;
+ }
+
+ return lport->lport_mgr_temp.unf_look_up_vport_by_index(lport, vp_index);
+}
+
+struct unf_lport *unf_cm_lookup_vport_by_did(struct unf_lport *lport, u32 did)
+{
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ if (!lport->lport_mgr_temp.unf_look_up_vport_by_did) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) function do look up vport by D_ID is NULL",
+ lport->port_id);
+
+ return NULL;
+ }
+
+ return lport->lport_mgr_temp.unf_look_up_vport_by_did(lport, did);
+}
+
+struct unf_lport *unf_cm_lookup_vport_by_wwpn(struct unf_lport *lport, u64 wwpn)
+{
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ if (!lport->lport_mgr_temp.unf_look_up_vport_by_wwpn) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) function do look up vport by WWPN is NULL",
+ lport->port_id);
+
+ return NULL;
+ }
+
+ return lport->lport_mgr_temp.unf_look_up_vport_by_wwpn(lport, wwpn);
+}
+
+void unf_cm_vport_remove(struct unf_lport *vport)
+{
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(vport);
+ unf_lport = vport->root_lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ if (!unf_lport->lport_mgr_temp.unf_vport_remove) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) function do vport remove is NULL",
+ unf_lport->port_id);
+ return;
+ }
+
+ unf_lport->lport_mgr_temp.unf_vport_remove(vport);
+}
diff --git a/drivers/scsi/spfc/common/unf_lport.h b/drivers/scsi/spfc/common/unf_lport.h
new file mode 100644
index 000000000000..dbd531f15b13
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_lport.h
@@ -0,0 +1,519 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_LPORT_H
+#define UNF_LPORT_H
+
+#include "unf_type.h"
+#include "unf_disc.h"
+#include "unf_event.h"
+#include "unf_common.h"
+
+#define UNF_PORT_TYPE_FC 0
+#define UNF_PORT_TYPE_DISC 1
+#define UNF_FW_UPDATE_PATH_LEN_MAX 255
+#define UNF_EXCHG_MGR_NUM (4)
+#define UNF_ERR_CODE_PRINT_TIME 10 /* error code print times */
+#define UNF_MAX_IO_TYPE_STAT_NUM 48 /* IO abnormal max counter */
+#define UNF_MAX_IO_RETURN_VALUE 0x12
+#define UNF_MAX_SCSI_CMD 0xFF
+#define UNF_MAX_LPRT_SCSI_ID_MAP 2048
+
+enum unf_scsi_error_handle_type {
+ UNF_SCSI_ABORT_IO_TYPE = 0,
+ UNF_SCSI_DEVICE_RESET_TYPE,
+ UNF_SCSI_TARGET_RESET_TYPE,
+ UNF_SCSI_BUS_RESET_TYPE,
+ UNF_SCSI_HOST_RESET_TYPE,
+ UNF_SCSI_VIRTUAL_RESET_TYPE,
+ UNF_SCSI_ERROR_HANDLE_BUTT
+};
+
+enum unf_lport_destroy_step {
+ UNF_LPORT_DESTROY_STEP_0_SET_REMOVING = 0,
+ UNF_LPORT_DESTROY_STEP_1_REPORT_PORT_OUT,
+ UNF_LPORT_DESTROY_STEP_2_CLOSE_ROUTE,
+ UNF_LPORT_DESTROY_STEP_3_DESTROY_EVENT_CENTER,
+ UNF_LPORT_DESTROY_STEP_4_DESTROY_EXCH_MGR,
+ UNF_LPORT_DESTROY_STEP_5_DESTROY_ESGL_POOL,
+ UNF_LPORT_DESTROY_STEP_6_DESTROY_DISC_MGR,
+ UNF_LPORT_DESTROY_STEP_7_DESTROY_XCHG_MGR_TMP,
+ UNF_LPORT_DESTROY_STEP_8_DESTROY_RPORT_MG_TMP,
+ UNF_LPORT_DESTROY_STEP_9_DESTROY_LPORT_MG_TMP,
+ UNF_LPORT_DESTROY_STEP_10_DESTROY_SCSI_TABLE,
+ UNF_LPORT_DESTROY_STEP_11_UNREG_TGT_HOST,
+ UNF_LPORT_DESTROY_STEP_12_UNREG_SCSI_HOST,
+ UNF_LPORT_DESTROY_STEP_13_DESTROY_LW_INTERFACE,
+ UNF_LPORT_DESTROY_STEP_BUTT
+};
+
+enum unf_lport_enhanced_feature {
+	/* Enhanced GFF: connect even if getting the GFF feature fails */
+ UNF_LPORT_ENHANCED_FEATURE_ENHANCED_GFF = 0x0001,
+ UNF_LPORT_ENHANCED_FEATURE_IO_TRANSFERLIST = 0x0002, /* Enhance IO balance */
+ UNF_LPORT_ENHANCED_FEATURE_IO_CHECKPOINT = 0x0004, /* Enhance IO check */
+ UNF_LPORT_ENHANCED_FEATURE_CLOSE_FW_ROUTE = 0x0008, /* Close FW ROUTE */
+	/* read SFP information at the lowest frequency */
+ UNF_LPORT_ENHANCED_FEATURE_READ_SFP_ONCE = 0x0010,
+ UNF_LPORT_ENHANCED_FEATURE_BUTT
+};
+
+enum unf_lport_login_state {
+ UNF_LPORT_ST_ONLINE = 0x2000, /* uninitialized */
+ UNF_LPORT_ST_INITIAL, /* initialized and LinkDown */
+ UNF_LPORT_ST_LINK_UP, /* initialized and Link UP */
+ UNF_LPORT_ST_FLOGI_WAIT, /* waiting for FLOGI completion */
+ UNF_LPORT_ST_PLOGI_WAIT, /* waiting for PLOGI completion */
+ UNF_LPORT_ST_RNN_ID_WAIT, /* waiting for RNN_ID completion */
+ UNF_LPORT_ST_RSNN_NN_WAIT, /* waiting for RSNN_NN completion */
+ UNF_LPORT_ST_RSPN_ID_WAIT, /* waiting for RSPN_ID completion */
+ UNF_LPORT_ST_RPN_ID_WAIT, /* waiting for RPN_ID completion */
+ UNF_LPORT_ST_RFT_ID_WAIT, /* waiting for RFT_ID completion */
+ UNF_LPORT_ST_RFF_ID_WAIT, /* waiting for RFF_ID completion */
+ UNF_LPORT_ST_SCR_WAIT, /* waiting for SCR completion */
+ UNF_LPORT_ST_READY, /* ready for use */
+ UNF_LPORT_ST_LOGO, /* waiting for LOGO completion */
+ UNF_LPORT_ST_RESET, /* being reset and will restart */
+ UNF_LPORT_ST_OFFLINE, /* offline */
+ UNF_LPORT_ST_BUTT
+};
+
+enum unf_lport_event {
+ UNF_EVENT_LPORT_NORMAL_ENTER = 0x8000, /* next state enter */
+	UNF_EVENT_LPORT_ONLINE = 0x8001, /* L_Port online */
+ UNF_EVENT_LPORT_LINK_UP = 0x8002, /* LPort link up */
+ UNF_EVENT_LPORT_LINK_DOWN = 0x8003, /* LPort link down */
+	UNF_EVENT_LPORT_OFFLINE = 0x8004, /* L_Port being stopped */
+ UNF_EVENT_LPORT_RESET = 0x8005,
+	UNF_EVENT_LPORT_REMOTE_ACC = 0x8006, /* remote port ACC received */
+ UNF_EVENT_LPORT_REMOTE_RJT = 0x8007, /* rport reject */
+ UNF_EVENT_LPORT_REMOTE_TIMEOUT = 0x8008, /* rport time out */
+ UNF_EVENT_LPORT_READY = 0x8009,
+ UNF_EVENT_LPORT_REMOTE_BUTT
+};
+
+struct unf_cm_disc_mg_template {
+ /* start input:L_Port,return:ok/fail */
+ u32 (*unf_disc_start)(void *lport);
+ /* stop input: L_Port,return:ok/fail */
+ u32 (*unf_disc_stop)(void *lport);
+
+ /* Callback after disc complete[with event:ok/fail]. */
+ void (*unf_disc_callback)(void *lport, u32 result);
+};
+
+struct unf_chip_manage_info {
+ struct list_head list_chip_thread_entry;
+ struct list_head list_head;
+ spinlock_t chip_event_list_lock;
+ struct task_struct *thread;
+ u32 list_num;
+ u32 slot_id;
+ u8 chip_id;
+ u8 rsv;
+ u8 sfp_9545_fault;
+ u8 sfp_power_fault;
+ atomic_t ref_cnt;
+ u32 thread_exit;
+ struct unf_chip_info chip_info;
+ atomic_t card_loop_test_flag;
+ spinlock_t card_loop_back_state_lock;
+ char update_path[UNF_FW_UPDATE_PATH_LEN_MAX];
+};
+
+enum unf_timer_type {
+ UNF_TIMER_TYPE_TGT_IO,
+ UNF_TIMER_TYPE_INI_IO,
+ UNF_TIMER_TYPE_REQ_IO,
+ UNF_TIMER_TYPE_TGT_RRQ,
+ UNF_TIMER_TYPE_INI_RRQ,
+ UNF_TIMER_TYPE_SFS,
+ UNF_TIMER_TYPE_INI_ABTS
+};
+
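+/*
+ * Exchange-manager operation template: callbacks the common layer uses to
+ * allocate, look up, abort and time exchanges.
+ */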
+struct unf_cm_xchg_mgr_template {
+ void *(*unf_xchg_get_free_and_init)(void *lport, u32 xchg_type);
+ void *(*unf_look_up_xchg_by_id)(void *lport, u16 ox_id, u32 oid);
+ void *(*unf_look_up_xchg_by_tag)(void *lport, u16 hot_pool_tag);
+ void (*unf_xchg_release)(void *lport, void *xchg);
+ void (*unf_xchg_mgr_io_xchg_abort)(void *lport, void *rport, u32 sid, u32 did,
+ u32 extra_io_state);
+ void (*unf_xchg_mgr_sfs_xchg_abort)(void *lport, void *rport, u32 sid, u32 did);
+ void (*unf_xchg_add_timer)(void *xchg, ulong time_ms, enum unf_timer_type time_type);
+ void (*unf_xchg_cancel_timer)(void *xchg);
+ void (*unf_xchg_abort_all_io)(void *lport, u32 xchg_type, bool clean);
+ void *(*unf_look_up_xchg_by_cmnd_sn)(void *lport, u64 command_sn,
+ u32 world_id, void *pinitiator);
+ void (*unf_xchg_abort_by_lun)(void *lport, void *rport, u64 lun_id, void *xchg,
+ bool abort_all_lun_flag);
+
+ void (*unf_xchg_abort_by_session)(void *lport, void *rport);
+};
+
+struct unf_cm_lport_template {
+ void *(*unf_look_up_vport_by_index)(void *lport, u16 vp_index);
+ void *(*unf_look_up_vport_by_port_id)(void *lport, u32 port_id);
+ void *(*unf_look_up_vport_by_wwpn)(void *lport, u64 wwpn);
+ void *(*unf_look_up_vport_by_did)(void *lport, u32 did);
+ void (*unf_vport_remove)(void *vport);
+};
+
+struct unf_lport_state_ma {
+ enum unf_lport_login_state lport_state;
+ enum unf_lport_login_state (*lport_state_ma)(enum unf_lport_login_state old_state,
+ enum unf_lport_event event);
+};
+
+struct unf_rport_pool {
+ u32 rport_pool_count;
+ void *rport_pool_add;
+ struct list_head list_rports_pool;
+ spinlock_t rport_free_pool_lock;
+ /* for synchronous reuse RPort POOL completion */
+ struct completion *rport_pool_completion;
+ ulong *rpi_bitmap;
+};
+
+struct unf_vport_pool {
+ u16 vport_pool_count;
+ void *vport_pool_addr;
+ struct list_head list_vport_pool;
+ spinlock_t vport_pool_lock;
+ struct completion *vport_pool_completion;
+ u16 slab_next_index; /* Next free vport */
+ u16 slab_total_sum; /* Total Vport num */
+ struct unf_lport *vport_slab[ARRAY_INDEX_0];
+};
+
+struct unf_esgl_pool {
+ u32 esgl_pool_count;
+ void *esgl_pool_addr;
+ struct list_head list_esgl_pool;
+ spinlock_t esgl_pool_lock;
+ struct buf_describe esgl_buff_list;
+};
+
+/* little endian */
+struct unf_port_id_page {
+ struct list_head list_node_rscn;
+ u8 port_id_port;
+ u8 port_id_area;
+ u8 port_id_domain;
+ u8 addr_format : 2;
+ u8 event_qualifier : 4;
+ u8 reserved : 2;
+};
+
+struct unf_rscn_mgr {
+ spinlock_t rscn_id_list_lock;
+ u32 free_rscn_count;
+ struct list_head list_free_rscn_page;
+ struct list_head list_using_rscn_page;
+ void *rscn_pool_add;
+ struct unf_port_id_page *(*unf_get_free_rscn_node)(void *rscn_mg);
+ void (*unf_release_rscn_node)(void *rscn_mg, void *rscn_node);
+};
+
+struct unf_disc_rport_mg {
+ void *disc_pool_add;
+ struct list_head list_disc_rports_pool;
+ struct list_head list_disc_rports_busy;
+};
+
+struct unf_disc_manage_info {
+ struct list_head list_head;
+ spinlock_t disc_event_list_lock;
+ atomic_t disc_contrl_size;
+
+ u32 thread_exit;
+ struct task_struct *thread;
+};
+
+struct unf_disc {
+ u32 retry_count;
+ u32 max_retry_count;
+ u32 disc_flag;
+
+ struct completion *disc_completion;
+ atomic_t disc_ref_cnt;
+
+ struct list_head list_busy_rports;
+ struct list_head list_delete_rports;
+ struct list_head list_destroy_rports;
+
+ spinlock_t rport_busy_pool_lock;
+
+ struct unf_lport *lport;
+ enum unf_disc_state states;
+ struct delayed_work disc_work;
+
+ /* Disc operation template */
+ struct unf_cm_disc_mg_template disc_temp;
+
+ /* UNF_INIT_DISC/UNF_RSCN_DISC */
+ u32 disc_option;
+
+ /* RSCN list */
+ struct unf_rscn_mgr rscn_mgr;
+ struct unf_disc_rport_mg disc_rport_mgr;
+ struct unf_disc_manage_info disc_thread_info;
+
+ u64 last_disc_jiff;
+};
+
+enum unf_service_item {
+ UNF_SERVICE_ITEM_FLOGI = 0,
+ UNF_SERVICE_ITEM_PLOGI,
+ UNF_SERVICE_ITEM_PRLI,
+ UNF_SERVICE_ITEM_RSCN,
+ UNF_SERVICE_ITEM_ABTS,
+ UNF_SERVICE_ITEM_PDISC,
+ UNF_SERVICE_ITEM_ADISC,
+ UNF_SERVICE_ITEM_LOGO,
+ UNF_SERVICE_ITEM_SRR,
+ UNF_SERVICE_ITEM_RRQ,
+ UNF_SERVICE_ITEM_ECHO,
+ UNF_SERVICE_BUTT
+};
+
+/* Link service counter */
+struct unf_link_service_collect {
+ u64 service_cnt[UNF_SERVICE_BUTT];
+};
+
+struct unf_pcie_error_count {
+ u32 pcie_error_count[UNF_PCIE_BUTT];
+};
+
+#define INVALID_WWPN 0
+
+enum unf_device_scsi_state {
+ UNF_SCSI_ST_INIT = 0,
+ UNF_SCSI_ST_OFFLINE,
+ UNF_SCSI_ST_ONLINE,
+ UNF_SCSI_ST_DEAD,
+ UNF_SCSI_ST_BUTT
+};
+
+struct unf_wwpn_dfx_counter_info {
+ atomic64_t io_done_cnt[UNF_MAX_IO_RETURN_VALUE];
+ atomic64_t scsi_cmd_cnt[UNF_MAX_SCSI_CMD];
+ atomic64_t target_busy;
+ atomic64_t host_busy;
+ atomic_t error_handle[UNF_SCSI_ERROR_HANDLE_BUTT];
+ atomic_t error_handle_result[UNF_SCSI_ERROR_HANDLE_BUTT];
+ atomic_t device_alloc;
+ atomic_t device_destroy;
+};
+
+#define UNF_MAX_LUN_PER_TARGET 256
+struct unf_wwpn_rport_info {
+ u64 wwpn;
+ struct unf_rport *rport; /* Rport which linkup */
+ void *lport; /* Lport */
+ u32 target_id; /* target_id distribute by scsi */
+ u32 las_ten_scsi_state;
+ atomic_t scsi_state;
+ struct unf_wwpn_dfx_counter_info *dfx_counter;
+ struct delayed_work loss_tmo_work;
+ bool need_scan;
+ struct list_head fc_lun_list;
+ u8 *lun_qos_level;
+};
+
+struct unf_rport_scsi_id_image {
+ spinlock_t scsi_image_table_lock;
+ struct unf_wwpn_rport_info
+ *wwn_rport_info_table;
+ u32 max_scsi_id;
+};
+
+enum unf_lport_dirty_flag {
+ UNF_LPORT_DIRTY_FLAG_NONE = 0,
+ UNF_LPORT_DIRTY_FLAG_XCHGMGR_DIRTY = 0x100,
+ UNF_LPORT_DIRTY_FLAG_RPORT_POOL_DIRTY = 0x200,
+ UNF_LPORT_DIRTY_FLAG_DISC_DIRTY = 0x400,
+ UNF_LPORT_DIRTY_FLAG_BUTT
+};
+
+typedef struct unf_rport *(*unf_rport_set_qualifier)(struct unf_lport *lport,
+ struct unf_rport *rport_by_nport_id,
+ struct unf_rport *rport_by_wwpn,
+ u64 wwpn, u32 sid);
+
+typedef u32 (*unf_tmf_status_recovery)(void *rport, void *xchg);
+
+enum unf_start_work_state {
+ UNF_START_WORK_STOP,
+ UNF_START_WORK_BEGIN,
+ UNF_START_WORK_COMPLETE
+};
+
+struct unf_qos_info {
+ u64 wwpn;
+ u32 nport_id;
+ enum unf_rport_qos_level qos_level;
+ struct list_head entry_qos_info;
+};
+
+struct unf_ini_private_info {
+ u32 driver_type; /* Driver Type */
+ void *lower; /* driver private pointer */
+};
+
+struct unf_product_host_info {
+ void *tgt_host;
+ struct Scsi_Host *host;
+ struct unf_ini_private_info drv_private_info;
+ struct Scsi_Host scsihost;
+};
+
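+/*
+ * Core L_Port object: tracks login state, discovery, R_Port/exchange pools,
+ * low-level driver callbacks and per-port statistics for one local FC port
+ * (physical port or NPIV vport).
+ */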
+struct unf_lport {
+ u32 port_type; /* Port Type, fc or fcoe */
+ atomic_t port_ref_cnt; /* LPort reference counter */
+ void *fc_port; /* hard adapter hba pointer */
+ void *rport, *drport; /* Used for SCSI interface */
+ void *vport;
+ ulong system_io_bus_num;
+
+ struct unf_product_host_info host_info; /* scsi host mg */
+ struct unf_rport_scsi_id_image rport_scsi_table;
+ bool port_removing;
+ bool io_allowed;
+ bool port_dirt_exchange;
+
+ spinlock_t xchg_mgr_lock;
+ struct list_head list_xchg_mgr_head;
+ struct list_head list_drty_xchg_mgr_head;
+ void *xchg_mgr[UNF_EXCHG_MGR_NUM];
+ bool qos_cs_ctrl;
+ bool priority;
+ enum unf_rport_qos_level qos_level;
+ spinlock_t qos_mgr_lock;
+ struct list_head list_qos_head;
+ struct list_head list_vports_head; /* Vport Mg */
+ struct list_head list_intergrad_vports; /* Vport intergrad list */
+ struct list_head list_destroy_vports; /* Vport destroy list */
+
+ struct list_head entry_vport; /* VPort entry, hook in list_vports_head */
+
+ struct list_head entry_lport; /* LPort entry */
+ spinlock_t lport_state_lock; /* UL Port Lock */
+ struct unf_disc disc; /* Disc and rport Mg */
+ struct unf_rport_pool rport_pool; /* rport pool,Vport share Lport pool */
+ struct unf_esgl_pool esgl_pool; /* external sgl pool */
+	u32 port_id; /* Port Management, 0x11000 etc. */
+ enum unf_lport_login_state states;
+ u32 link_up;
+ u32 speed;
+
+ u64 node_name;
+ u64 port_name;
+ u64 fabric_node_name;
+ u32 nport_id;
+ u32 max_frame_size;
+ u32 ed_tov;
+ u32 ra_tov;
+ u32 class_of_service;
+ u32 options; /* ini or tgt */
+ u32 retries;
+ u32 max_retry_count;
+ enum unf_act_topo act_topo;
+ bool switch_state; /* TRUE---->ON,false---->OFF */
+ bool last_switch_state; /* TRUE---->ON,false---->OFF */
+ bool bbscn_support; /* TRUE---->ON,false---->OFF */
+
+ enum unf_start_work_state start_work_state;
+ struct unf_cm_xchg_mgr_template xchg_mgr_temp; /* Xchg Mg operation template */
+	struct unf_cm_lport_template lport_mgr_temp; /* LPort operation template */
+ struct unf_low_level_functioon_op low_level_func;
+	struct unf_event_mgr event_mgr; /* Event manager */
+ struct delayed_work retry_work; /* poll work or delay work */
+
+ struct workqueue_struct *link_event_wq;
+ struct workqueue_struct *xchg_wq;
+ atomic64_t io_stat[UNF_MAX_IO_TYPE_STAT_NUM];
+ struct unf_err_code err_code_sum; /* Error code counter */
+ struct unf_port_dynamic_info port_dynamic_info;
+ struct unf_link_service_collect link_service_info;
+ struct unf_pcie_error_count pcie_error_cnt;
+ unf_rport_set_qualifier unf_qualify_rport; /* Qualify Rport */
+
+ unf_tmf_status_recovery unf_tmf_abnormal_recovery; /* tmf marker recovery */
+
+ struct delayed_work route_timer_work; /* L_Port timer route */
+
+ u16 vp_index; /* Vport Index, Lport:0 */
+ u16 path_id;
+ struct unf_vport_pool *vport_pool; /* Only for Lport */
+ void *lport_mgr[UNF_MAX_LPRT_SCSI_ID_MAP];
+ bool vport_remove_flags;
+
+ void *root_lport; /* Point to physic Lport */
+
+ struct completion *lport_free_completion; /* Free LPort Completion */
+
+#define UNF_LPORT_NOP 1
+#define UNF_LPORT_NORMAL 0
+
+ atomic_t lport_no_operate_flag;
+
+ bool loop_back_test_mode;
+ bool switch_state_before_test_mode; /* TRUE---->ON,false---->OFF */
+ u32 enhanced_features; /* Enhanced Features */
+
+ u32 destroy_step;
+ u32 dirty_flag;
+ struct unf_chip_manage_info *chip_info;
+
+ u8 unique_position;
+ u8 sfp_power_fault_count;
+ u8 sfp_9545_fault_count;
+ u64 last_tx_fault_jif; /* SFP last tx fault jiffies */
+ u32 target_cnt;
+ /* Server card: UNF_FC_SERVER_BOARD_32_G(6) for 32G mode,
+ * UNF_FC_SERVER_BOARD_16_G(7) for 16G mode
+ */
+ u32 card_type;
+ atomic_t scsi_session_add_success;
+ atomic_t scsi_session_add_failed;
+ atomic_t scsi_session_del_success;
+ atomic_t scsi_session_del_failed;
+ atomic_t add_start_work_failed;
+ atomic_t add_closing_work_failed;
+ atomic_t device_alloc;
+ atomic_t device_destroy;
+ atomic_t session_loss_tmo;
+ atomic_t alloc_scsi_id;
+ atomic_t resume_scsi_id;
+ atomic_t reuse_scsi_id;
+ atomic64_t last_exchg_mgr_idx;
+ atomic_t host_no;
+ atomic64_t exchg_index;
+ int scan_world_id;
+ struct semaphore wmi_task_sema;
+ bool ready_to_remove;
+ u32 pcie_link_down_cnt;
+ bool pcie_link_down;
+ u8 fw_version[SPFC_VER_LEN];
+ atomic_t link_lose_tmo;
+ u32 max_ssq_num;
+};
+
+void unf_lport_state_ma(struct unf_lport *lport, enum unf_lport_event lport_event);
+void unf_lport_error_recovery(struct unf_lport *lport);
+void unf_set_lport_state(struct unf_lport *lport, enum unf_lport_login_state state);
+void unf_init_port_parms(struct unf_lport *lport);
+u32 unf_lport_enter_flogi(struct unf_lport *lport);
+void unf_lport_enter_sns_plogi(struct unf_lport *lport);
+u32 unf_init_disc_mgr(struct unf_lport *lport);
+u32 unf_init_lport_route(struct unf_lport *lport);
+void unf_destroy_lport_route(struct unf_lport *lport);
+void unf_reset_lport_params(struct unf_lport *lport);
+void unf_cm_mark_dirty_mem(struct unf_lport *lport, enum unf_lport_dirty_flag type);
+struct unf_lport *unf_cm_lookup_vport_by_vp_index(struct unf_lport *lport, u16 vp_index);
+struct unf_lport *unf_cm_lookup_vport_by_did(struct unf_lport *lport, u32 did);
+struct unf_lport *unf_cm_lookup_vport_by_wwpn(struct unf_lport *lport, u64 wwpn);
+void unf_cm_vport_remove(struct unf_lport *vport);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_ls.c b/drivers/scsi/spfc/common/unf_ls.c
new file mode 100644
index 000000000000..6dab1f9cbb46
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_ls.c
@@ -0,0 +1,4884 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_ls.h"
+#include "unf_log.h"
+#include "unf_service.h"
+#include "unf_portman.h"
+#include "unf_gs.h"
+#include "unf_npiv.h"
+
+static void unf_flogi_acc_ob_callback(struct unf_xchg *xchg);
+static void unf_plogi_acc_ob_callback(struct unf_xchg *xchg);
+static void unf_prli_acc_ob_callback(struct unf_xchg *xchg);
+static void unf_rscn_acc_ob_callback(struct unf_xchg *xchg);
+static void unf_pdisc_acc_ob_callback(struct unf_xchg *xchg);
+static void unf_adisc_acc_ob_callback(struct unf_xchg *xchg);
+static void unf_logo_acc_ob_callback(struct unf_xchg *xchg);
+static void unf_logo_ob_callback(struct unf_xchg *xchg);
+static void unf_logo_callback(void *lport, void *rport, void *xchg);
+static void unf_rrq_callback(void *lport, void *rport, void *xchg);
+static void unf_rrq_ob_callback(struct unf_xchg *xchg);
+static void unf_lport_update_nport_id(struct unf_lport *lport, u32 nport_id);
+static void
+unf_lport_update_time_params(struct unf_lport *lport,
+ struct unf_flogi_fdisc_payload *flogi_payload);
+
+static void unf_login_with_rport_in_n2n(struct unf_lport *lport,
+ u64 remote_port_name,
+ u64 remote_node_name);
+#define UNF_LOWLEVEL_BBCREDIT 0x6
+#define UNF_DEFAULT_BB_SC_N 0
+
+#define UNF_ECHO_REQ_SIZE 0
+#define UNF_ECHO_WAIT_SEM_TIMEOUT(lport) (2 * (ulong)(lport)->ra_tov)
+
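+/* Per-L_Port ELS/GS service counter: bump the counter for one service item. */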
+#define UNF_SERVICE_COLLECT(service_collect, item) \
+ do { \
+ if ((item) < UNF_SERVICE_BUTT) { \
+ (service_collect).service_cnt[(item)]++; \
+ } \
+ } while (0)
+
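+/*
+ * Decide how to proceed after PLOGI based on the remote port's FC-4 features:
+ * send PRLI immediately when the peer reports target mode (or in direct P2P
+ * topology), delay PRLI when the peer mode is unknown or both, and simply
+ * wait for the peer's PRLI when it is initiator-only.
+ */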
+static void unf_check_rport_need_delay_prli(struct unf_lport *lport,
+ struct unf_rport *rport,
+ u32 port_feature)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ port_feature &= UNF_PORT_MODE_BOTH;
+
+ /* Used for: L_Port has INI mode & R_Port is not SW */
+ if (rport->nport_id < UNF_FC_FID_DOM_MGR) {
+ /*
+ * 1. immediately: R_Port only with TGT, or
+ * L_Port only with INI & R_Port has TGT mode, send PRLI
+ * immediately
+ */
+ if ((port_feature == UNF_PORT_MODE_TGT ||
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) ||
+ (UNF_PORT_MODE_TGT == (port_feature & UNF_PORT_MODE_TGT))) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x) Rport(0x%x) with feature(0x%x) send PRLI",
+ lport->port_id, lport->nport_id,
+ rport->nport_id, port_feature);
+ ret = unf_send_prli(lport, rport, ELS_PRLI);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) Rport(0x%x) with feature(0x%x) send PRLI failed",
+ lport->port_id, lport->nport_id,
+ rport->nport_id, port_feature);
+
+ unf_rport_error_recovery(rport);
+ }
+ }
+ /* 2. R_Port has BOTH mode or unknown, Delay to send PRLI */
+ else if (port_feature != UNF_PORT_MODE_INI) {
+ /* Prevent: PRLI done before PLOGI */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x) Rport(0x%x) with feature(0x%x) delay to send PRLI",
+ lport->port_id, lport->nport_id,
+ rport->nport_id, port_feature);
+
+ /* Delay to send PRLI to R_Port */
+ unf_rport_delay_login(rport);
+ } else {
+ /* 3. R_Port only with INI mode: wait for R_Port's PRLI:
+ * Do not care
+ */
+ /* Cancel recovery(timer) work */
+ if (delayed_work_pending(&rport->recovery_work)) {
+ if (cancel_delayed_work(&rport->recovery_work)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]LOGIN: Port(0x%x_0x%x) Rport(0x%x) with feature(0x%x) is pure INI",
+ lport->port_id,
+ lport->nport_id,
+ rport->nport_id,
+ port_feature);
+
+ unf_rport_ref_dec(rport);
+ }
+ }
+
+ /* Server: R_Port only support INI, do not care this
+ * case
+ */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x) Rport(0x%x) with feature(0x%x) wait for PRLI",
+ lport->port_id, lport->nport_id,
+ rport->nport_id, port_feature);
+ }
+ }
+}
+
+static u32 unf_low_level_bb_credit(struct unf_lport *lport)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 bb_credit = UNF_LOWLEVEL_BBCREDIT;
+
+ if (unlikely(!lport))
+ return bb_credit;
+
+ unf_lport = lport;
+
+ if (unlikely(!unf_lport->low_level_func.port_mgr_op.ll_port_config_get))
+ return bb_credit;
+
+ ret = unf_lport->low_level_func.port_mgr_op.ll_port_config_get((void *)unf_lport->fc_port,
+ UNF_PORT_CFG_GET_WORKBALE_BBCREDIT,
+ (void *)&bb_credit);
+ if (unlikely(ret != RETURN_OK)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[warn]Port(0x%x) get BB_Credit failed, use default value(%d)",
+ unf_lport->port_id, UNF_LOWLEVEL_BBCREDIT);
+
+ bb_credit = UNF_LOWLEVEL_BBCREDIT;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) with BB_Credit(%u)", unf_lport->port_id,
+ bb_credit);
+
+ return bb_credit;
+}
+
+u32 unf_low_level_bb_scn(struct unf_lport *lport)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_low_level_port_mgr_op *port_mgr = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 bb_scn = UNF_DEFAULT_BB_SC_N;
+
+ if (unlikely(!lport))
+ return bb_scn;
+
+ unf_lport = lport;
+ port_mgr = &unf_lport->low_level_func.port_mgr_op;
+
+ if (unlikely(!port_mgr->ll_port_config_get))
+ return bb_scn;
+
+ ret = port_mgr->ll_port_config_get((void *)unf_lport->fc_port,
+ UNF_PORT_CFG_GET_WORKBALE_BBSCN,
+ (void *)&bb_scn);
+ if (unlikely(ret != RETURN_OK)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[warn]Port(0x%x) get bbscn failed, use default value(%d)",
+ unf_lport->port_id, UNF_DEFAULT_BB_SC_N);
+
+ bb_scn = UNF_DEFAULT_BB_SC_N;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x)'s bbscn(%d)", unf_lport->port_id, bb_scn);
+
+ return bb_scn;
+}
+
+static void unf_fill_rec_pld(struct unf_rec_pld *rec_pld, u32 sid)
+{
+ FC_CHECK_RETURN_VOID(rec_pld);
+
+ rec_pld->rec_cmnd = (UNF_ELS_CMND_REC);
+ rec_pld->xchg_org_sid = sid;
+ rec_pld->ox_id = INVALID_VALUE16;
+ rec_pld->rx_id = INVALID_VALUE16;
+}
+
+u32 unf_send_rec(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *io_xchg)
+{
+ struct unf_rec_pld *rec_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(io_xchg, UNF_RETURN_ERROR);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for PLOGI",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = ELS_REC;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+ pkg.origin_hottag = io_xchg->hotpooltag;
+ pkg.origin_magicnum = io_xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
+ rec_pld = &fc_entry->rec.rec_pld;
+ memset(rec_pld, 0, sizeof(struct unf_rec_pld));
+
+ unf_fill_rec_pld(rec_pld, lport->nport_id);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]LOGIN: Send REC %s. Port(0x%x_0x%x_0x%llx)--->RPort(0x%x_0x%llx) with hottag(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ lport->nport_id, lport->port_name, rport->nport_id,
+ rport->port_name, xchg->hotpooltag);
+
+ return ret;
+}
+
+static void unf_fill_flogi_pld(struct unf_flogi_fdisc_payload *flogi_pld,
+ struct unf_lport *lport)
+{
+ struct unf_fabric_parm *fabric_parms = NULL;
+
+ FC_CHECK_RETURN_VOID(flogi_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ fabric_parms = &flogi_pld->fabric_parms;
+ if (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC ||
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT ||
+ lport->act_topo == UNF_TOP_P2P_MASK) {
+ /* Fabric or P2P or FCoE VN2VN topology */
+ fabric_parms->co_parms.bb_credit = unf_low_level_bb_credit(lport);
+ fabric_parms->co_parms.lowest_version = UNF_PLOGI_VERSION_LOWER;
+ fabric_parms->co_parms.highest_version = UNF_PLOGI_VERSION_UPPER;
+ fabric_parms->co_parms.bb_receive_data_field_size = (lport->max_frame_size);
+ fabric_parms->co_parms.bbscn = unf_low_level_bb_scn(lport);
+ } else {
+ /* Loop topology here */
+ fabric_parms->co_parms.clean_address = UNF_CLEAN_ADDRESS_DEFAULT;
+ fabric_parms->co_parms.bb_credit = UNF_BBCREDIT_LPORT;
+ fabric_parms->co_parms.lowest_version = UNF_PLOGI_VERSION_LOWER;
+ fabric_parms->co_parms.highest_version = UNF_PLOGI_VERSION_UPPER;
+ fabric_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_LPORT;
+ fabric_parms->co_parms.bb_receive_data_field_size = (lport->max_frame_size);
+ }
+
+ if (lport->low_level_func.support_max_npiv_num != 0)
+ /* support NPIV */
+ fabric_parms->co_parms.clean_address = 1;
+
+ fabric_parms->cl_parms[ARRAY_INDEX_2].valid = UNF_CLASS_VALID;
+
+	/* set the priority according to the user-configured value */
+ if (lport->qos_cs_ctrl)
+ fabric_parms->cl_parms[ARRAY_INDEX_2].priority = UNF_PRIORITY_ENABLE;
+ else
+ fabric_parms->cl_parms[ARRAY_INDEX_2].priority = UNF_PRIORITY_DISABLE;
+
+ fabric_parms->cl_parms[ARRAY_INDEX_2].sequential_delivery = UNF_SEQUEN_DELIVERY_REQ;
+ fabric_parms->cl_parms[ARRAY_INDEX_2].received_data_field_size = (lport->max_frame_size);
+
+ fabric_parms->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
+ fabric_parms->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
+ fabric_parms->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
+ fabric_parms->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
+}
+
+u32 unf_send_flogi(struct unf_lport *lport, struct unf_rport *rport)
+{
+ struct unf_xchg *xchg = NULL;
+ struct unf_flogi_fdisc_payload *flogi_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for FLOGI",
+ lport->port_id);
+
+ return ret;
+ }
+
+ /* FLOGI */
+ xchg->cmnd_code = ELS_FLOGI;
+
+	/* for received FLOGI ACC/RJT processing */
+ xchg->callback = unf_flogi_callback;
+	/* for FLOGI send failure processing */
+ xchg->ob_callback = unf_flogi_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ flogi_pld = &fc_entry->flogi.flogi_payload;
+ memset(flogi_pld, 0, sizeof(struct unf_flogi_fdisc_payload));
+ unf_fill_flogi_pld(flogi_pld, lport);
+ flogi_pld->cmnd = (UNF_ELS_CMND_FLOGI);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Begin to send FLOGI. Port(0x%x)--->RPort(0x%x) with hottag(0x%x)",
+ lport->port_id, rport->nport_id, xchg->hotpooltag);
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, flogi_pld,
+ sizeof(struct unf_flogi_fdisc_payload));
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[warn]LOGIN: Send FLOGI failed. Port(0x%x)--->RPort(0x%x)",
+ lport->port_id, rport->nport_id);
+
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+ }
+
+ return ret;
+}
+
+u32 unf_send_fdisc(struct unf_lport *lport, struct unf_rport *rport)
+{
+ struct unf_xchg *exch = NULL;
+ struct unf_flogi_fdisc_payload *fdisc_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ exch = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!exch) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for FDISC",
+ lport->port_id);
+
+ return ret;
+ }
+
+ exch->cmnd_code = ELS_FDISC;
+
+ exch->callback = unf_fdisc_callback;
+ exch->ob_callback = unf_fdisc_ob_callback;
+
+ unf_fill_package(&pkg, exch, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ fdisc_pld = &fc_entry->fdisc.fdisc_payload;
+ memset(fdisc_pld, 0, sizeof(struct unf_flogi_fdisc_payload));
+ unf_fill_flogi_pld(fdisc_pld, lport);
+ fdisc_pld->cmnd = UNF_ELS_CMND_FDISC;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, exch);
+
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)exch);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: FDISC send %s. Port(0x%x)--->RPort(0x%x) with hottag(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, exch->hotpooltag);
+
+ return ret;
+}
+
+static void unf_fill_plogi_pld(struct unf_plogi_payload *plogi_pld,
+ struct unf_lport *lport)
+{
+ struct unf_lgn_parm *login_parms = NULL;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(plogi_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_lport = lport->root_lport;
+ plogi_pld->cmnd = (UNF_ELS_CMND_PLOGI);
+ login_parms = &plogi_pld->stparms;
+
+ if (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC ||
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ /* P2P or Fabric mode or FCoE VN2VN */
+ login_parms->co_parms.bb_credit = (unf_low_level_bb_credit(lport));
+ login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_NFPORT;
+ login_parms->co_parms.bbscn =
+ (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC)
+ ? 0
+ : unf_low_level_bb_scn(lport);
+ } else {
+ /* Public loop & Private loop mode */
+ login_parms->co_parms.bb_credit = UNF_BBCREDIT_LPORT;
+ login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_LPORT;
+ }
+
+ login_parms->co_parms.lowest_version = UNF_PLOGI_VERSION_LOWER;
+ login_parms->co_parms.highest_version = UNF_PLOGI_VERSION_UPPER;
+ login_parms->co_parms.continuously_increasing = UNF_CONTIN_INCREASE_SUPPORT;
+ login_parms->co_parms.bb_receive_data_field_size = (lport->max_frame_size);
+ login_parms->co_parms.nport_total_concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
+ login_parms->co_parms.relative_offset = (UNF_PLOGI_RO_CATEGORY);
+ login_parms->co_parms.e_d_tov = UNF_DEFAULT_EDTOV;
+ if (unf_lport->priority == UNF_PRIORITY_ENABLE) {
+ login_parms->cl_parms[ARRAY_INDEX_2].priority =
+ UNF_PRIORITY_ENABLE;
+ } else {
+ login_parms->cl_parms[ARRAY_INDEX_2].priority =
+ UNF_PRIORITY_DISABLE;
+ }
+
+ /* for class_3 */
+ login_parms->cl_parms[ARRAY_INDEX_2].valid = UNF_CLASS_VALID;
+ login_parms->cl_parms[ARRAY_INDEX_2].received_data_field_size = (lport->max_frame_size);
+ login_parms->cl_parms[ARRAY_INDEX_2].concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
+ login_parms->cl_parms[ARRAY_INDEX_2].open_sequence_per_exchange = (UNF_PLOGI_SEQ_PER_XCHG);
+
+ login_parms->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
+ login_parms->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
+ login_parms->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
+ login_parms->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, plogi_pld, sizeof(struct unf_plogi_payload));
+}
+
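+/*
+ * Send a PLOGI ELS request: outstanding exchanges with the R_Port are aborted
+ * first, then the login payload is built and sent; the exchange is freed if
+ * the send fails.
+ */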
+u32 unf_send_plogi(struct unf_lport *lport, struct unf_rport *rport)
+{
+ struct unf_plogi_payload *plogi_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for PLOGI",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = ELS_PLOGI;
+
+ xchg->callback = unf_plogi_callback;
+ xchg->ob_callback = unf_plogi_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+ unf_cm_xchg_mgr_abort_io_by_id(lport, rport, xchg->sid, xchg->did, 0);
+
+ plogi_pld = &fc_entry->plogi.payload;
+ memset(plogi_pld, 0, sizeof(struct unf_plogi_payload));
+ unf_fill_plogi_pld(plogi_pld, lport);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Send PLOGI %s. Port(0x%x_0x%x_0x%llx)--->RPort(0x%x_0x%llx) with hottag(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ lport->nport_id, lport->port_name, rport->nport_id,
+ rport->port_name, xchg->hotpooltag);
+
+ return ret;
+}
+
+static void unf_fill_logo_pld(struct unf_logo_payload *logo_pld,
+ struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(logo_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ logo_pld->cmnd = (UNF_ELS_CMND_LOGO);
+ logo_pld->nport_id = (lport->nport_id);
+ logo_pld->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
+ logo_pld->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, logo_pld, sizeof(struct unf_logo_payload));
+}
+
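+/*
+ * Send a LOGO ELS request to the R_Port; logo_retries is incremented for
+ * every attempt, whether or not the send succeeds.
+ */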
+u32 unf_send_logo(struct unf_lport *lport, struct unf_rport *rport)
+{
+ struct unf_logo_payload *logo_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ struct unf_frame_pkg pkg = {0};
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for LOGO",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = ELS_LOGO;
+ /* retry or link down immediately */
+ xchg->callback = unf_logo_callback;
+ /* do nothing */
+ xchg->ob_callback = unf_logo_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ logo_pld = &fc_entry->logo.payload;
+ memset(logo_pld, 0, sizeof(struct unf_logo_payload));
+ unf_fill_logo_pld(logo_pld, lport);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ rport->logo_retries++;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]LOGIN: LOGO send %s. Port(0x%x)--->RPort(0x%x) hottag(0x%x) Retries(%d)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, xchg->hotpooltag, rport->logo_retries);
+
+ return ret;
+}
+
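+/*
+ * Send a LOGO ELS request addressed by D_ID only, without an R_Port context
+ * and without completion callbacks.
+ */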
+u32 unf_send_logo_by_did(struct unf_lport *lport, u32 did)
+{
+ struct unf_logo_payload *logo_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ struct unf_frame_pkg pkg = {0};
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, did, NULL, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for LOGO",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = ELS_LOGO;
+
+ unf_fill_package(&pkg, xchg, NULL);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ logo_pld = &fc_entry->logo.payload;
+ memset(logo_pld, 0, sizeof(struct unf_logo_payload));
+ unf_fill_logo_pld(logo_pld, lport);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: LOGO send %s. Port(0x%x)--->RPort(0x%x) with hottag(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ did, xchg->hotpooltag);
+
+ return ret;
+}
+
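+/*
+ * Completion callback for a sent ECHO: parse the ACC/RJT payload, log the
+ * smartping timestamps when both sides support them, record the result and
+ * wake up the waiter blocked in unf_send_echo().
+ */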
+static void unf_echo_callback(void *lport, void *rport, void *xchg)
+{
+ struct unf_lport *unf_lport = (struct unf_lport *)lport;
+ struct unf_rport *unf_rport = (struct unf_rport *)rport;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_echo_payload *echo_rsp_pld = NULL;
+ u32 cmnd = 0;
+ u32 mag_ver_local = 0;
+ u32 mag_ver_remote = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_xchg = (struct unf_xchg *)xchg;
+ if (!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr)
+ return;
+
+ echo_rsp_pld = unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo_acc.echo_pld;
+ FC_CHECK_RETURN_VOID(echo_rsp_pld);
+
+ if (unf_xchg->byte_orders & UNF_BIT_2) {
+ unf_big_end_to_cpu((u8 *)echo_rsp_pld, sizeof(struct unf_echo_payload));
+ cmnd = echo_rsp_pld->cmnd;
+ } else {
+ cmnd = echo_rsp_pld->cmnd;
+ }
+
+ mag_ver_local = echo_rsp_pld->data[ARRAY_INDEX_0];
+ mag_ver_remote = echo_rsp_pld->data[ARRAY_INDEX_1];
+
+ if (UNF_ELS_CMND_ACC == (cmnd & UNF_ELS_CMND_HIGH_MASK)) {
+ if (mag_ver_local == ECHO_MG_VERSION_LOCAL &&
+ mag_ver_remote == ECHO_MG_VERSION_REMOTE) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "LPort(0x%x) send ECHO to RPort(0x%x), received ACC. local snd echo:(0x%x), remote rcv echo:(0x%x), remote snd echo acc:(0x%x), local rcv echo acc:(0x%x)",
+ unf_lport->port_id, unf_rport->nport_id,
+ unf_xchg->private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME],
+ unf_xchg->private_data[PKG_PRIVATE_ECHO_CMD_RCV_TIME],
+ unf_xchg->private_data[PKG_PRIVATE_ECHO_RSP_SND_TIME],
+ unf_xchg->private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME]);
+ } else if ((mag_ver_local == ECHO_MG_VERSION_LOCAL) &&
+ (mag_ver_remote != ECHO_MG_VERSION_REMOTE)) {
+			/* the peer does not support smartping; only the local
+			 * send and receive response timestamps are available
+			 */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "LPort(0x%x) send ECHO to RPort(0x%x), received ACC. local snd echo:(0x%x), local rcv echo acc:(0x%x)",
+ unf_lport->port_id, unf_rport->nport_id,
+ unf_xchg->private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME],
+ unf_xchg->private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME]);
+ } else if ((mag_ver_local != ECHO_MG_VERSION_LOCAL) &&
+ (mag_ver_remote != ECHO_MG_VERSION_REMOTE)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_MAJOR,
+ "LPort(0x%x) send ECHO to RPort(0x%x), received ACC. local and remote is not FC HBA",
+ unf_lport->port_id, unf_rport->nport_id);
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send ECHO to RPort(0x%x) and received RJT",
+ unf_lport->port_id, unf_rport->nport_id);
+ }
+
+ unf_xchg->echo_info.echo_result = UNF_ELS_ECHO_RESULT_OK;
+ unf_xchg->echo_info.response_time = jiffies - unf_xchg->echo_info.response_time;
+
+ /* wake up semaphore */
+ up(&unf_xchg->echo_info.echo_sync_sema);
+}
+
+static void unf_echo_ob_callback(struct unf_xchg *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ unf_lport = xchg->lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+ unf_rport = xchg->rport;
+ FC_CHECK_RETURN_VOID(unf_rport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send ECHO to RPort(0x%x) but timeout",
+ unf_lport->port_id, unf_rport->nport_id);
+
+ xchg->echo_info.echo_result = UNF_ELS_ECHO_RESULT_FAIL;
+
+ /* wake up semaphore */
+ up(&xchg->echo_info.echo_sync_sema);
+}
+
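+/*
+ * Send an ECHO ELS request and wait synchronously for the response: the
+ * payload is DMA mapped, the caller blocks on a semaphore with a timeout,
+ * and *time returns the link round-trip time computed from the echo
+ * timestamps (remote processing time excluded).
+ */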
+u32 unf_send_echo(struct unf_lport *lport, struct unf_rport *rport, u32 *time)
+{
+ struct unf_echo_payload *echo_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ struct unf_frame_pkg pkg = {0};
+ u32 ret = UNF_RETURN_ERROR;
+ ulong delay = 0;
+ dma_addr_t phy_echo_addr;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(time, UNF_RETURN_ERROR);
+
+ delay = UNF_ECHO_WAIT_SEM_TIMEOUT(lport);
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for ECHO",
+ lport->port_id);
+
+ return ret;
+ }
+
+ /* ECHO */
+ xchg->cmnd_code = ELS_ECHO;
+ xchg->fcp_sfs_union.sfs_entry.cur_offset = UNF_ECHO_REQ_SIZE;
+
+ /* Set callback function, wake up semaphore */
+ xchg->callback = unf_echo_callback;
+ /* wake up semaphore */
+ xchg->ob_callback = unf_echo_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ echo_pld = (struct unf_echo_payload *)unf_get_one_big_sfs_buf(xchg);
+ if (!echo_pld) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't allocate buffer for ECHO",
+ lport->port_id);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ fc_entry->echo.echo_pld = echo_pld;
+ phy_echo_addr = pci_map_single(lport->low_level_func.dev, echo_pld,
+ UNF_ECHO_PAYLOAD_LEN,
+ DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(lport->low_level_func.dev, phy_echo_addr)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Port(0x%x) pci map err", lport->port_id);
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+ fc_entry->echo.phy_echo_addr = phy_echo_addr;
+ memset(echo_pld, 0, sizeof(struct unf_echo_payload));
+ echo_pld->cmnd = (UNF_ELS_CMND_ECHO);
+ echo_pld->data[ARRAY_INDEX_0] = ECHO_MG_VERSION_LOCAL;
+
+ ret = unf_xchg_ref_inc(xchg, SEND_ELS);
+ FC_CHECK_RETURN_VALUE((ret == RETURN_OK), UNF_RETURN_ERROR);
+
+ xchg->echo_info.response_time = jiffies;
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK) {
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+ } else {
+ if (down_timeout(&xchg->echo_info.echo_sync_sema,
+ (long)msecs_to_jiffies((u32)delay))) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]ECHO send %s. Port(0x%x)--->RPort(0x%x) but response timeout ",
+ (ret != RETURN_OK) ? "failed" : "succeed",
+ lport->port_id, rport->nport_id);
+
+ xchg->echo_info.echo_result = UNF_ELS_ECHO_RESULT_FAIL;
+ }
+
+ if (xchg->echo_info.echo_result == UNF_ELS_ECHO_RESULT_FAIL) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_MAJOR, "Echo send fail or timeout");
+
+ ret = UNF_RETURN_ERROR;
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+				     "echo acc rsp, echo_cmd_snd(0x%xus)-->echo_cmd_rcv(0x%xus)-->echo_acc_snd(0x%xus)-->echo_acc_rcv(0x%xus).",
+ xchg->private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME],
+ xchg->private_data[PKG_PRIVATE_ECHO_CMD_RCV_TIME],
+ xchg->private_data[PKG_PRIVATE_ECHO_RSP_SND_TIME],
+ xchg->private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME]);
+
+ *time =
+ (xchg->private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME] -
+ xchg->private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME]) -
+ (xchg->private_data[PKG_PRIVATE_ECHO_RSP_SND_TIME] -
+ xchg->private_data[PKG_PRIVATE_ECHO_CMD_RCV_TIME]);
+ }
+ }
+
+ pci_unmap_single(lport->low_level_func.dev, phy_echo_addr,
+ UNF_ECHO_PAYLOAD_LEN, DMA_BIDIRECTIONAL);
+ fc_entry->echo.phy_echo_addr = 0;
+ unf_xchg_ref_dec(xchg, SEND_ELS);
+
+ return ret;
+}
+
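+/*
+ * Fill the PRLI payload: FCP type with initiator pair, XFER_RDY disabled for
+ * reads, and optional FCP_CONF and tape-support (REC/retry) bits taken from
+ * the low-level configuration.
+ */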
+static void unf_fill_prli_pld(struct unf_prli_payload *prli_pld,
+ struct unf_lport *lport)
+{
+ u32 pld_len = 0;
+
+ FC_CHECK_RETURN_VOID(prli_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ pld_len = sizeof(struct unf_prli_payload) - UNF_PRLI_SIRT_EXTRA_SIZE;
+ prli_pld->cmnd =
+ (UNF_ELS_CMND_PRLI |
+ ((u32)UNF_FC4_FRAME_PAGE_SIZE << UNF_FC4_FRAME_PAGE_SIZE_SHIFT) |
+ ((u32)pld_len));
+
+ prli_pld->parms[ARRAY_INDEX_0] = (UNF_FC4_FRAME_PARM_0_FCP | UNF_FC4_FRAME_PARM_0_I_PAIR);
+ prli_pld->parms[ARRAY_INDEX_1] = UNF_NOT_MEANINGFUL;
+ prli_pld->parms[ARRAY_INDEX_2] = UNF_NOT_MEANINGFUL;
+
+ /* About Read Xfer_rdy disable */
+ prli_pld->parms[ARRAY_INDEX_3] = (UNF_FC4_FRAME_PARM_3_R_XFER_DIS | lport->options);
+
+ /* About FCP confirm */
+ if (lport->low_level_func.lport_cfg_items.fcp_conf)
+ prli_pld->parms[ARRAY_INDEX_3] |= UNF_FC4_FRAME_PARM_3_CONF_ALLOW;
+
+ /* About Tape support */
+ if (lport->low_level_func.lport_cfg_items.tape_support)
+ prli_pld->parms[ARRAY_INDEX_3] |=
+ (UNF_FC4_FRAME_PARM_3_REC_SUPPORT |
+ UNF_FC4_FRAME_PARM_3_RETRY_SUPPORT |
+ UNF_FC4_FRAME_PARM_3_TASK_RETRY_ID_SUPPORT |
+ UNF_FC4_FRAME_PARM_3_CONF_ALLOW);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x)'s PRLI payload: options(0x%x) parameter-3(0x%x)",
+ lport->port_id, lport->options,
+ prli_pld->parms[ARRAY_INDEX_3]);
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, prli_pld, sizeof(struct unf_prli_payload));
+}
+
+u32 unf_send_prli(struct unf_lport *lport, struct unf_rport *rport,
+ u32 cmnd_code)
+{
+ struct unf_prli_payload *prli_pal = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for PRLI",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = cmnd_code;
+
+	/* for received PRLI ACC/RJT processing */
+ xchg->callback = unf_prli_callback;
+	/* for PRLI send failure processing */
+ xchg->ob_callback = unf_prli_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ prli_pal = &fc_entry->prli.payload;
+ memset(prli_pal, 0, sizeof(struct unf_prli_payload));
+ unf_fill_prli_pld(prli_pal, lport);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: PRLI send %s. Port(0x%x)--->RPort(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id);
+
+ return ret;
+}
+
+static void unf_fill_prlo_pld(struct unf_prli_payload *prlo_pld,
+ struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(prlo_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ prlo_pld->cmnd = (UNF_ELS_CMND_PRLO);
+ prlo_pld->parms[ARRAY_INDEX_0] = (UNF_FC4_FRAME_PARM_0_FCP);
+ prlo_pld->parms[ARRAY_INDEX_1] = UNF_NOT_MEANINGFUL;
+ prlo_pld->parms[ARRAY_INDEX_2] = UNF_NOT_MEANINGFUL;
+ prlo_pld->parms[ARRAY_INDEX_3] = UNF_NO_SERVICE_PARAMS;
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, prlo_pld, sizeof(struct unf_prli_payload));
+}
+
+u32 unf_send_prlo(struct unf_lport *lport, struct unf_rport *rport)
+{
+ struct unf_prli_payload *prlo_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for PRLO", lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = ELS_PRLO;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ prlo_pld = &fc_entry->prlo.payload;
+ memset(prlo_pld, 0, sizeof(struct unf_prli_payload));
+ unf_fill_prlo_pld(prlo_pld, lport);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: PRLO send %s. Port(0x%x)--->RPort(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id);
+
+ return ret;
+}
+
+static void unf_fill_pdisc_pld(struct unf_plogi_payload *pdisc_pld,
+ struct unf_lport *lport)
+{
+ struct unf_lgn_parm *login_parms = NULL;
+
+ FC_CHECK_RETURN_VOID(pdisc_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ pdisc_pld->cmnd = (UNF_ELS_CMND_PDISC);
+ login_parms = &pdisc_pld->stparms;
+
+ if (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC ||
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ /* P2P & Fabric */
+ login_parms->co_parms.bb_credit = (unf_low_level_bb_credit(lport));
+ login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_NFPORT;
+ login_parms->co_parms.bbscn =
+ (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC)
+ ? 0
+ : unf_low_level_bb_scn(lport);
+ } else {
+ /* Public loop & Private loop */
+ login_parms->co_parms.bb_credit = UNF_BBCREDIT_LPORT;
+		/* alternate BB credit management: 1 */
+ login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_LPORT;
+ }
+
+ login_parms->co_parms.lowest_version = UNF_PLOGI_VERSION_LOWER;
+ login_parms->co_parms.highest_version = UNF_PLOGI_VERSION_UPPER;
+ login_parms->co_parms.continuously_increasing = UNF_CONTIN_INCREASE_SUPPORT;
+ login_parms->co_parms.bb_receive_data_field_size = (lport->max_frame_size);
+ login_parms->co_parms.nport_total_concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
+ login_parms->co_parms.relative_offset = (UNF_PLOGI_RO_CATEGORY);
+ login_parms->co_parms.e_d_tov = (lport->ed_tov);
+
+ login_parms->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
+ login_parms->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
+ login_parms->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
+ login_parms->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
+
+ /* class-3 */
+ login_parms->cl_parms[ARRAY_INDEX_2].valid = UNF_CLASS_VALID;
+ login_parms->cl_parms[ARRAY_INDEX_2].received_data_field_size = (lport->max_frame_size);
+ login_parms->cl_parms[ARRAY_INDEX_2].concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
+ login_parms->cl_parms[ARRAY_INDEX_2].open_sequence_per_exchange = (UNF_PLOGI_SEQ_PER_XCHG);
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, pdisc_pld, sizeof(struct unf_plogi_payload));
+}
+
+u32 unf_send_pdisc(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /* PLOGI/PDISC with same payload */
+ struct unf_plogi_payload *pdisc_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for PDISC",
+ lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = ELS_PDISC;
+ xchg->callback = NULL;
+ xchg->ob_callback = NULL;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ pdisc_pld = &fc_entry->pdisc.payload;
+ memset(pdisc_pld, 0, sizeof(struct unf_plogi_payload));
+ unf_fill_pdisc_pld(pdisc_pld, lport);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: PDISC send %s. Port(0x%x)--->RPort(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id, rport->nport_id);
+
+ return ret;
+}
+
+static void unf_fill_adisc_pld(struct unf_adisc_payload *adisc_pld,
+ struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(adisc_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ adisc_pld->cmnd = (UNF_ELS_CMND_ADISC);
+ adisc_pld->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
+ adisc_pld->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
+ adisc_pld->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
+ adisc_pld->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, adisc_pld, sizeof(struct unf_adisc_payload));
+}
+
+u32 unf_send_adisc(struct unf_lport *lport, struct unf_rport *rport)
+{
+ struct unf_adisc_payload *adisc_pal = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for ADISC", lport->port_id);
+
+ return ret;
+ }
+
+ xchg->cmnd_code = ELS_ADISC;
+
+ xchg->callback = NULL;
+ xchg->ob_callback = NULL;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+
+ adisc_pal = &fc_entry->adisc.adisc_payl;
+ memset(adisc_pal, 0, sizeof(struct unf_adisc_payload));
+ unf_fill_adisc_pld(adisc_pal, lport);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: ADISC send %s. Port(0x%x)--->RPort(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id);
+
+ return ret;
+}
+
+static void unf_fill_rrq_pld(struct unf_rrq *rrq_pld, struct unf_xchg *xchg)
+{
+ FC_CHECK_RETURN_VOID(rrq_pld);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ rrq_pld->cmnd = UNF_ELS_CMND_RRQ;
+ rrq_pld->sid = xchg->sid;
+ rrq_pld->oxid_rxid = ((u32)xchg->oxid << UNF_SHIFT_16 | xchg->rxid);
+}
+
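+/*
+ * Send an RRQ ELS request after ABTS completes so the OX_ID/RX_ID of the
+ * aborted I/O exchange can be reclaimed; the I/O exchange context is
+ * released from the RRQ callbacks.
+ */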
+u32 unf_send_rrq(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ /* after ABTS Done */
+ struct unf_rrq *rrq_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_frame_pkg pkg = {0};
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ unf_xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
+ if (!unf_xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for RRQ",
+ lport->port_id);
+
+ return ret;
+ }
+
+ unf_xchg->cmnd_code = ELS_RRQ; /* RRQ */
+
+ unf_xchg->callback = unf_rrq_callback; /* release I/O exchange context */
+ unf_xchg->ob_callback = unf_rrq_ob_callback; /* release I/O exchange context */
+ unf_xchg->io_xchg = xchg; /* pointer to IO XCHG */
+
+ unf_fill_package(&pkg, unf_xchg, rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+ rrq_pld = &fc_entry->rrq;
+ memset(rrq_pld, 0, sizeof(struct unf_rrq));
+ unf_fill_rrq_pld(rrq_pld, xchg);
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, unf_xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)unf_xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]RRQ send %s. Port(0x%x)--->RPort(0x%x) free old exchange(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, xchg->hotpooltag);
+
+ return ret;
+}
+
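+/*
+ * Send a FLOGI ACC reply: S_ID is the FLOGI well-known address (0xFFFFFE)
+ * and D_ID is 0, as required for the FLOGI exchange.
+ */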
+u32 unf_send_flogi_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ struct unf_flogi_fdisc_payload *flogi_acc_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_FLOGI);
+
+ xchg->did = 0; /* D_ID must be 0 */
+ xchg->sid = UNF_FC_FID_FLOGI; /* S_ID must be 0xfffffe */
+ xchg->oid = xchg->sid;
+ xchg->callback = NULL;
+ xchg->lport = lport;
+ xchg->rport = rport;
+	xchg->ob_callback = unf_flogi_acc_ob_callback; /* callback for sending
+ * FLOGI response
+ */
+
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REPLY;
+
+ memset(fc_entry, 0, sizeof(union unf_sfs_u));
+ flogi_acc_pld = &fc_entry->flogi_acc.flogi_payload;
+ flogi_acc_pld->cmnd = (UNF_ELS_CMND_ACC);
+ unf_fill_flogi_pld(flogi_acc_pld, lport);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]LOGIN: FLOGI ACC send %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, ox_id, rx_id);
+ return ret;
+}
+
+static void unf_fill_plogi_acc_pld(struct unf_plogi_payload *plogi_acc_pld,
+ struct unf_lport *lport)
+{
+ struct unf_lgn_parm *login_parms = NULL;
+
+ FC_CHECK_RETURN_VOID(plogi_acc_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ plogi_acc_pld->cmnd = (UNF_ELS_CMND_ACC);
+ login_parms = &plogi_acc_pld->stparms;
+
+ if (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC ||
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ login_parms->co_parms.bb_credit = (unf_low_level_bb_credit(lport));
+ login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_NFPORT; /* 0 */
+ login_parms->co_parms.bbscn =
+ (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC)
+ ? 0
+ : unf_low_level_bb_scn(lport);
+ } else {
+ login_parms->co_parms.bb_credit = UNF_BBCREDIT_LPORT;
+ login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_LPORT; /* 1 */
+ }
+
+ login_parms->co_parms.lowest_version = UNF_PLOGI_VERSION_LOWER;
+ login_parms->co_parms.highest_version = UNF_PLOGI_VERSION_UPPER;
+ login_parms->co_parms.continuously_increasing = UNF_CONTIN_INCREASE_SUPPORT;
+ login_parms->co_parms.bb_receive_data_field_size = (lport->max_frame_size);
+ login_parms->co_parms.nport_total_concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
+ login_parms->co_parms.relative_offset = (UNF_PLOGI_RO_CATEGORY);
+ login_parms->co_parms.e_d_tov = (lport->ed_tov);
+ login_parms->cl_parms[ARRAY_INDEX_2].valid = UNF_CLASS_VALID; /* class-3 */
+ login_parms->cl_parms[ARRAY_INDEX_2].received_data_field_size = (lport->max_frame_size);
+ login_parms->cl_parms[ARRAY_INDEX_2].concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
+ login_parms->cl_parms[ARRAY_INDEX_2].open_sequence_per_exchange = (UNF_PLOGI_SEQ_PER_XCHG);
+ login_parms->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
+ login_parms->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
+ login_parms->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
+ login_parms->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, plogi_acc_pld,
+ sizeof(struct unf_plogi_payload));
+}
+
+u32 unf_send_plogi_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ struct unf_plogi_payload *plogi_acc_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_PLOGI);
+
+ xchg->did = rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->callback = NULL;
+ xchg->lport = lport;
+ xchg->rport = rport;
+
+	xchg->ob_callback = unf_plogi_acc_ob_callback; /* callback for sending PLOGI ACC */
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REPLY;
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(fc_entry, 0, sizeof(union unf_sfs_u));
+ plogi_acc_pld = &fc_entry->plogi_acc.payload;
+ unf_fill_plogi_acc_pld(plogi_acc_pld, lport);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ if (rport->nport_id < UNF_FC_FID_DOM_MGR ||
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: PLOGI ACC send %s. Port(0x%x_0x%x_0x%llx)--->RPort(0x%x_0x%llx) with OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed",
+ lport->port_id, lport->nport_id, lport->port_name,
+ rport->nport_id, rport->port_name, ox_id, rx_id);
+ }
+
+ return ret;
+}
+
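+/*
+ * Fill the PRLI ACC payload: for a regular remote N_Port (below the domain
+ * manager address range) report initiator mode, otherwise mirror the local
+ * port mode; tape-support and FCP_CONF bits are added when applicable.
+ */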
+static void unf_fill_prli_acc_pld(struct unf_prli_payload *prli_acc_pld,
+ struct unf_lport *lport,
+ struct unf_rport *rport)
+{
+ u32 port_mode = UNF_FC4_FRAME_PARM_3_TGT;
+
+ FC_CHECK_RETURN_VOID(prli_acc_pld);
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ prli_acc_pld->cmnd =
+ (UNF_ELS_CMND_ACC |
+ ((u32)UNF_FC4_FRAME_PAGE_SIZE << UNF_FC4_FRAME_PAGE_SIZE_SHIFT) |
+ ((u32)(sizeof(struct unf_prli_payload) - UNF_PRLI_SIRT_EXTRA_SIZE)));
+
+ prli_acc_pld->parms[ARRAY_INDEX_0] =
+ (UNF_FC4_FRAME_PARM_0_FCP | UNF_FC4_FRAME_PARM_0_I_PAIR |
+ UNF_FC4_FRAME_PARM_0_GOOD_RSP_CODE);
+ prli_acc_pld->parms[ARRAY_INDEX_1] = UNF_NOT_MEANINGFUL;
+ prli_acc_pld->parms[ARRAY_INDEX_2] = UNF_NOT_MEANINGFUL;
+
+ /* About INI/TGT mode */
+ if (rport->nport_id < UNF_FC_FID_DOM_MGR) {
+ /* return INI (0x20): R_Port has TGT mode, L_Port has INI mode
+ */
+ port_mode = UNF_FC4_FRAME_PARM_3_INI;
+ } else {
+ port_mode = lport->options;
+ }
+
+ /* About Read xfer_rdy disable */
+ prli_acc_pld->parms[ARRAY_INDEX_3] =
+ (UNF_FC4_FRAME_PARM_3_R_XFER_DIS | port_mode); /* 0x2 */
+
+ /* About Tape support */
+ if (rport->tape_support_needed) {
+ prli_acc_pld->parms[ARRAY_INDEX_3] |=
+ (UNF_FC4_FRAME_PARM_3_REC_SUPPORT |
+ UNF_FC4_FRAME_PARM_3_RETRY_SUPPORT |
+ UNF_FC4_FRAME_PARM_3_TASK_RETRY_ID_SUPPORT |
+ UNF_FC4_FRAME_PARM_3_CONF_ALLOW);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "PRLI ACC tape support");
+ }
+
+ /* About confirm */
+ if (lport->low_level_func.lport_cfg_items.fcp_conf)
+ prli_acc_pld->parms[ARRAY_INDEX_3] |=
+ UNF_FC4_FRAME_PARM_3_CONF_ALLOW; /* 0x80 */
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, prli_acc_pld,
+ sizeof(struct unf_prli_payload));
+}
+
+u32 unf_send_prli_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ struct unf_prli_payload *prli_acc_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_PRLI);
+ xchg->did = rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = rport;
+
+ xchg->callback = NULL;
+ xchg->ob_callback =
+	    unf_prli_acc_ob_callback; /* callback when the send succeeds */
+
+ unf_fill_package(&pkg, xchg, rport);
+
+ pkg.type = UNF_PKG_ELS_REPLY;
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(fc_entry, 0, sizeof(union unf_sfs_u));
+ prli_acc_pld = &fc_entry->prli_acc.payload;
+ unf_fill_prli_acc_pld(prli_acc_pld, lport, rport);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ if (rport->nport_id < UNF_FC_FID_DOM_MGR ||
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: PRLI ACC send %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed",
+ lport->port_id, rport->nport_id, ox_id, rx_id);
+ }
+
+ return ret;
+}
+
+u32 unf_send_rec_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ /* Reserved */
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ return RETURN_OK;
+}
+
+static void unf_rrq_acc_ob_callback(struct unf_xchg *xchg)
+{
+ FC_CHECK_RETURN_VOID(xchg);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]RRQ ACC Xchg(0x%p) tag(0x%x)", xchg,
+ xchg->hotpooltag);
+}
+
+static void unf_fill_els_acc_pld(struct unf_els_acc *els_acc_pld)
+{
+ FC_CHECK_RETURN_VOID(els_acc_pld);
+
+ els_acc_pld->cmnd = (UNF_ELS_CMND_ACC);
+}
+
+u32 unf_send_rscn_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ struct unf_els_acc *rscn_acc = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_RSCN);
+ xchg->did = rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = rport;
+
+ xchg->callback = NULL;
+ xchg->ob_callback = unf_rscn_acc_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REPLY;
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(fc_entry, 0, sizeof(union unf_sfs_u));
+ rscn_acc = &fc_entry->els_acc;
+ unf_fill_els_acc_pld(rscn_acc);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: RSCN ACC send %s. Port(0x%x)--->RPort(0x%x) with OXID(0x%x) RXID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, ox_id, rx_id);
+
+ return ret;
+}
+
+u32 unf_send_logo_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ struct unf_els_acc *logo_acc = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_LOGO);
+ xchg->did = rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = rport;
+ xchg->callback = NULL;
+ xchg->ob_callback = unf_logo_acc_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REPLY;
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(fc_entry, 0, sizeof(union unf_sfs_u));
+ logo_acc = &fc_entry->els_acc;
+ unf_fill_els_acc_pld(logo_acc);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ if (rport->nport_id < UNF_FC_FID_DOM_MGR) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: LOGO ACC send %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed",
+ lport->port_id, rport->nport_id, ox_id, rx_id);
+ }
+
+ return ret;
+}
+
+static u32 unf_send_rrq_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ struct unf_els_acc *rrq_acc = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ xchg->did = rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = rport;
+	xchg->callback = NULL; /* do nothing */
+
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(fc_entry, 0, sizeof(union unf_sfs_u));
+ rrq_acc = &fc_entry->els_acc;
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_RRQ);
+	xchg->ob_callback = unf_rrq_acc_ob_callback; /* do nothing */
+ unf_fill_els_acc_pld(rrq_acc);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REPLY;
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]RRQ ACC send %s. Port(0x%x)--->RPort(0x%x) with Xchg(0x%p) OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, xchg, ox_id, rx_id);
+
+ return ret;
+}
+
+static void unf_fill_pdisc_acc_pld(struct unf_plogi_payload *pdisc_acc_pld,
+ struct unf_lport *lport)
+{
+ struct unf_lgn_parm *login_parms = NULL;
+
+ FC_CHECK_RETURN_VOID(pdisc_acc_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ pdisc_acc_pld->cmnd = (UNF_ELS_CMND_ACC);
+ login_parms = &pdisc_acc_pld->stparms;
+
+ if (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC ||
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ login_parms->co_parms.bb_credit = (unf_low_level_bb_credit(lport));
+ login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_NFPORT;
+ login_parms->co_parms.bbscn =
+ (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC)
+ ? 0
+ : unf_low_level_bb_scn(lport);
+ } else {
+ login_parms->co_parms.bb_credit = UNF_BBCREDIT_LPORT;
+ login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_LPORT;
+ }
+
+ login_parms->co_parms.lowest_version = UNF_PLOGI_VERSION_LOWER;
+ login_parms->co_parms.highest_version = UNF_PLOGI_VERSION_UPPER;
+ login_parms->co_parms.continuously_increasing = UNF_CONTIN_INCREASE_SUPPORT;
+ login_parms->co_parms.bb_receive_data_field_size = (lport->max_frame_size);
+ login_parms->co_parms.nport_total_concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
+ login_parms->co_parms.relative_offset = (UNF_PLOGI_RO_CATEGORY);
+ login_parms->co_parms.e_d_tov = (lport->ed_tov);
+
+ login_parms->cl_parms[ARRAY_INDEX_2].valid = UNF_CLASS_VALID; /* class-3 */
+ login_parms->cl_parms[ARRAY_INDEX_2].received_data_field_size = (lport->max_frame_size);
+ login_parms->cl_parms[ARRAY_INDEX_2].concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
+ login_parms->cl_parms[ARRAY_INDEX_2].open_sequence_per_exchange = (UNF_PLOGI_SEQ_PER_XCHG);
+
+ login_parms->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
+ login_parms->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
+ login_parms->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
+ login_parms->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, pdisc_acc_pld,
+ sizeof(struct unf_plogi_payload));
+}
+
+u32 unf_send_pdisc_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ struct unf_plogi_payload *pdisc_acc_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_PDISC);
+ xchg->did = rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = rport;
+
+ xchg->callback = NULL;
+ xchg->ob_callback = unf_pdisc_acc_ob_callback;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REPLY;
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(fc_entry, 0, sizeof(union unf_sfs_u));
+ pdisc_acc_pld = &fc_entry->pdisc_acc.payload;
+ unf_fill_pdisc_acc_pld(pdisc_acc_pld, lport);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Send PDISC ACC %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, ox_id, rx_id);
+
+ return ret;
+}
+
+static void unf_fill_adisc_acc_pld(struct unf_adisc_payload *adisc_acc_pld,
+ struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(adisc_acc_pld);
+ FC_CHECK_RETURN_VOID(lport);
+
+ adisc_acc_pld->cmnd = (UNF_ELS_CMND_ACC);
+
+ adisc_acc_pld->hard_address = (lport->nport_id & UNF_ALPA_MASK);
+ adisc_acc_pld->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
+ adisc_acc_pld->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
+ adisc_acc_pld->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
+ adisc_acc_pld->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
+ adisc_acc_pld->nport_id = lport->nport_id;
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, adisc_acc_pld,
+ sizeof(struct unf_adisc_payload));
+}
+
+u32 unf_send_adisc_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ struct unf_adisc_payload *adisc_acc_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_ADISC);
+ xchg->did = rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = rport;
+
+ xchg->callback = NULL;
+ xchg->ob_callback = unf_adisc_acc_ob_callback;
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REPLY;
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(fc_entry, 0, sizeof(union unf_sfs_u));
+ adisc_acc_pld = &fc_entry->adisc_acc.adisc_payl;
+ unf_fill_adisc_acc_pld(adisc_acc_pld, lport);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Send ADISC ACC %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, ox_id, rx_id);
+
+ return ret;
+}
+
+static void unf_fill_prlo_acc_pld(struct unf_prli_prlo *prlo_acc,
+ struct unf_lport *lport)
+{
+ struct unf_prli_payload *prlo_acc_pld = NULL;
+
+ FC_CHECK_RETURN_VOID(prlo_acc);
+
+ prlo_acc_pld = &prlo_acc->payload;
+ prlo_acc_pld->cmnd =
+ (UNF_ELS_CMND_ACC |
+ ((u32)UNF_FC4_FRAME_PAGE_SIZE << UNF_FC4_FRAME_PAGE_SIZE_SHIFT) |
+ ((u32)sizeof(struct unf_prli_payload)));
+ prlo_acc_pld->parms[ARRAY_INDEX_0] =
+ (UNF_FC4_FRAME_PARM_0_FCP | UNF_FC4_FRAME_PARM_0_GOOD_RSP_CODE);
+ prlo_acc_pld->parms[ARRAY_INDEX_1] = 0;
+ prlo_acc_pld->parms[ARRAY_INDEX_2] = 0;
+ prlo_acc_pld->parms[ARRAY_INDEX_3] = 0;
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, prlo_acc_pld,
+ sizeof(struct unf_prli_payload));
+}
+
+u32 unf_send_prlo_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg)
+{
+ struct unf_prli_prlo *prlo_acc = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_PRLO);
+ xchg->did = rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = rport;
+
+ xchg->callback = NULL;
+ xchg->ob_callback = NULL;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.type = UNF_PKG_ELS_REPLY;
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(fc_entry, 0, sizeof(union unf_sfs_u));
+ prlo_acc = &fc_entry->prlo_acc;
+ unf_fill_prlo_acc_pld(prlo_acc, lport);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Send PRLO ACC %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, ox_id, rx_id);
+
+ return ret;
+}
+
+static void unf_prli_acc_ob_callback(struct unf_xchg *xchg)
+{
+	/* Report R_Port SCSI link up */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong flags = 0;
+ enum unf_rport_login_state rport_state = UNF_RPORT_ST_INIT;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ unf_lport = xchg->lport;
+ unf_rport = xchg->rport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+ FC_CHECK_RETURN_VOID(unf_rport);
+
+ /* Update & Report Link Up */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_READY);
+ rport_state = unf_rport->rp_state;
+ if (unf_rport->nport_id < UNF_FC_FID_DOM_MGR) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[event]LOGIN: Port(0x%x) RPort(0x%x) state(0x%x) WWN(0x%llx) prliacc",
+ unf_lport->port_id, unf_rport->nport_id,
+ unf_rport->rp_state, unf_rport->port_name);
+ }
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ if (rport_state == UNF_RPORT_ST_READY) {
+ unf_rport->logo_retries = 0;
+ unf_update_lport_state_by_linkup_event(unf_lport, unf_rport,
+ unf_rport->options);
+ }
+}
+
+static void unf_schedule_open_work(struct unf_lport *lport,
+ struct unf_rport *rport)
+{
+	/* Used only when the L_Port is TGT-only or the R_Port is INI-only */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = rport;
+ ulong delay = 0;
+ ulong flag = 0;
+ u32 ret = 0;
+ u32 port_feature = INVALID_VALUE32;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ delay = (ulong)unf_lport->ed_tov;
+ port_feature = unf_rport->options & UNF_PORT_MODE_BOTH;
+
+ if (unf_lport->options == UNF_PORT_MODE_TGT ||
+ port_feature == UNF_PORT_MODE_INI) {
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+
+ ret = unf_rport_ref_inc(unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) RPort(0x%x) abnormal, no need open",
+ unf_lport->port_id, unf_lport->nport_id, unf_rport->nport_id);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+ return;
+ }
+
+ /* Delay work pending check */
+ if (delayed_work_pending(&unf_rport->open_work)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) RPort(0x%x) open work is running, no need re-open",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_rport->nport_id);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+ unf_rport_ref_dec(unf_rport);
+ return;
+ }
+
+ /* start open work */
+ if (queue_delayed_work(unf_wq, &unf_rport->open_work,
+ (ulong)msecs_to_jiffies((u32)delay))) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) RPort(0x%x) start open work",
+ unf_lport->port_id, unf_lport->nport_id, unf_rport->nport_id);
+
+ (void)unf_rport_ref_inc(unf_rport);
+ }
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ unf_rport_ref_dec(unf_rport);
+ }
+}
+
+static void unf_plogi_acc_ob_callback(struct unf_xchg *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ unf_lport = xchg->lport;
+ unf_rport = xchg->rport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ FC_CHECK_RETURN_VOID(unf_lport);
+ FC_CHECK_RETURN_VOID(unf_rport);
+
+ /*
+ * 1. According to FC-LS 4.2.7.1:
+	 * after RCVD PLOGI or sending PLOGI ACC, need to terminate open EXCH
+ */
+ unf_cm_xchg_mgr_abort_io_by_id(unf_lport, unf_rport,
+ unf_rport->nport_id, unf_lport->nport_id, 0);
+
+ /* 2. Send PLOGI ACC fail */
+ if (xchg->ob_callback_sts != UNF_IO_SUCCESS) {
+ /* Do R_Port recovery */
+ unf_rport_error_recovery(unf_rport);
+
+		/* Do not care: only used when the L_Port is TGT-only or the
+		 * R_Port is INI-only
+		 */
+ unf_schedule_open_work(unf_lport, unf_rport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x_0x%x) send PLOGI ACC failed(0x%x) with RPort(0x%x) feature(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_lport->options, xchg->ob_callback_sts,
+ unf_rport->nport_id, unf_rport->options);
+
+ return;
+ }
+
+ /* 3. Private Loop: check whether or not need to send PRLI */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
+ if (unf_lport->act_topo == UNF_ACT_TOP_PRIVATE_LOOP &&
+ (unf_rport->rp_state == UNF_RPORT_ST_PRLI_WAIT ||
+ unf_rport->rp_state == UNF_RPORT_ST_READY)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) RPort(0x%x) with State(0x%x) return directly",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_rport->nport_id, unf_rport->rp_state);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+ return;
+ }
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PRLI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ /* 4. Set Port Feature with BOTH: cancel */
+ if (unf_rport->options == UNF_PORT_MODE_UNKNOWN && unf_rport->port_name != INVALID_WWPN)
+ unf_rport->options = unf_get_port_feature(unf_rport->port_name);
+
+ /*
+ * 5. Check whether need to send PRLI delay
+ * Call by: RCVD PLOGI ACC or callback for sending PLOGI ACC succeed
+ */
+ unf_check_rport_need_delay_prli(unf_lport, unf_rport, unf_rport->options);
+
+	/* 6. Do not care: only used when the L_Port is TGT-only or the R_Port
+	 * is INI-only
+	 */
+ unf_schedule_open_work(unf_lport, unf_rport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x_0x%x) send PLOGI ACC succeed with RPort(0x%x) feature(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, unf_lport->options,
+ unf_rport->nport_id, unf_rport->options);
+}
+
+static void unf_flogi_acc_ob_callback(struct unf_xchg *xchg)
+{
+ /* Callback for Sending FLOGI ACC succeed */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong flags = 0;
+ u64 rport_port_name = 0;
+ u64 rport_node_name = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ FC_CHECK_RETURN_VOID(xchg->lport);
+ FC_CHECK_RETURN_VOID(xchg->rport);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ unf_lport = xchg->lport;
+ unf_rport = xchg->rport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
+ if (unf_rport->port_name == 0 && unf_rport->node_name == 0) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x_0x%x_0x%x) already send Plogi with RPort(0x%x) feature(0x%x).",
+ unf_lport->port_id, unf_lport->nport_id, unf_lport->options,
+ unf_rport->nport_id, unf_rport->options);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+ return;
+ }
+
+ rport_port_name = unf_rport->port_name;
+ rport_node_name = unf_rport->node_name;
+
+ /* Swap case: Set WWPN & WWNN with zero */
+ unf_rport->port_name = 0;
+ unf_rport->node_name = 0;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ /* Enter PLOGI stage: after send FLOGI ACC succeed */
+ unf_login_with_rport_in_n2n(unf_lport, rport_port_name, rport_node_name);
+}
+
+static void unf_rscn_acc_ob_callback(struct unf_xchg *xchg)
+{
+}
+
+static void unf_logo_acc_ob_callback(struct unf_xchg *xchg)
+{
+}
+
+static void unf_adisc_acc_ob_callback(struct unf_xchg *xchg)
+{
+}
+
+static void unf_pdisc_acc_ob_callback(struct unf_xchg *xchg)
+{
+}
+
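+/* BB_SC_N negotiation: 0 if either side does not support it, else the larger value */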
+static inline u8 unf_determin_bbscn(u8 local_bbscn, u8 remote_bbscn)
+{
+ if (remote_bbscn == 0 || local_bbscn == 0)
+ local_bbscn = 0;
+ else
+ local_bbscn = local_bbscn > remote_bbscn ? local_bbscn : remote_bbscn;
+
+ return local_bbscn;
+}
+
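+/*
+ * Push the fabric login parameters received in FLOGI (BB credit, BB_SC_N,
+ * E_D_TOV/R_A_TOV, resulting topology) down to the low-level driver via its
+ * port-config hook, if one is registered.
+ */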
+static void unf_cfg_lowlevel_fabric_params(struct unf_lport *lport,
+ struct unf_rport *rport,
+ struct unf_fabric_parm *login_parms)
+{
+ struct unf_port_login_parms login_co_parms = {0};
+ u32 remote_edtov = 0;
+ u32 ret = 0;
+ u8 remote_edtov_resolution = 0; /* 0:ms; 1:ns */
+
+ if (!lport->low_level_func.port_mgr_op.ll_port_config_set)
+ return;
+
+ login_co_parms.remote_rttov_tag = (u8)UNF_GET_RT_TOV_FROM_PARAMS(login_parms);
+ login_co_parms.remote_edtov_tag = 0;
+ login_co_parms.remote_bb_credit = (u16)UNF_GET_BB_CREDIT_FROM_PARAMS(login_parms);
+ login_co_parms.compared_bbscn =
+ (u32)unf_determin_bbscn((u8)lport->low_level_func.lport_cfg_items.bbscn,
+ (u8)UNF_GET_BB_SC_N_FROM_PARAMS(login_parms));
+
+ remote_edtov_resolution = (u8)UNF_GET_E_D_TOV_RESOLUTION_FROM_PARAMS(login_parms);
+ remote_edtov = UNF_GET_E_D_TOV_FROM_PARAMS(login_parms);
+ login_co_parms.compared_edtov_val =
+ remote_edtov_resolution ? (remote_edtov / UNF_OS_MS_TO_NS)
+ : remote_edtov;
+
+ login_co_parms.compared_ratov_val = UNF_GET_RA_TOV_FROM_PARAMS(login_parms);
+ login_co_parms.els_cmnd_code = ELS_FLOGI;
+
+ if (UNF_TOP_P2P_MASK & (u32)lport->act_topo) {
+ login_co_parms.act_topo = (login_parms->co_parms.nport == UNF_F_PORT)
+ ? UNF_ACT_TOP_P2P_FABRIC
+ : UNF_ACT_TOP_P2P_DIRECT;
+ } else {
+ login_co_parms.act_topo = lport->act_topo;
+ }
+
+ ret = lport->low_level_func.port_mgr_op.ll_port_config_set((void *)lport->fc_port,
+ UNF_PORT_CFG_UPDATE_FABRIC_PARAM, (void *)&login_co_parms);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Low level does not support fabric config");
+ }
+}
+
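+/*
+ * Validate a received FLOGI: class-3 service must be supported and the
+ * remote WWPN must differ from the local WWPN, otherwise the frame is
+ * discarded by the caller.
+ */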
+u32 unf_check_flogi_params(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_fabric_parm *fabric_parms)
+{
+ u32 ret = RETURN_OK;
+ u32 high_port_name;
+ u32 low_port_name;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(fabric_parms, UNF_RETURN_ERROR);
+
+ if (fabric_parms->cl_parms[ARRAY_INDEX_2].valid == UNF_CLASS_INVALID) {
+ /* Discard directly */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) NPort_ID(0x%x) FLOGI does not support class3",
+ lport->port_id, rport->nport_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
+ low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
+ if (fabric_parms->high_port_name == high_port_name &&
+ fabric_parms->low_port_name == low_port_name) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]The wwpn(0x%x%x) of lport(0x%x) is the same as the wwpn of rport(0x%x)",
+ high_port_name, low_port_name, lport->port_id, rport->nport_id);
+ return UNF_RETURN_ERROR;
+ }
+
+ return ret;
+}
+
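+/*
+ * Record the FLOGI parameters on the R_Port (max frame size capped at
+ * 2112 bytes) and, when the peer acts as an F_Port, propagate
+ * E_D_TOV/R_A_TOV and the fabric node name to the L_Port before
+ * configuring the chip.
+ */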
+static void unf_save_fabric_params(struct unf_lport *lport,
+ struct unf_rport *rport,
+ struct unf_fabric_parm *fabric_parms)
+{
+ u64 fabric_node_name = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(fabric_parms);
+
+ fabric_node_name = (u64)(((u64)(fabric_parms->high_node_name) << UNF_SHIFT_32) |
+ ((u64)(fabric_parms->low_node_name)));
+
+	/* R_Port 0xfffffe is used for FLOGI; no need to save the WWN */
+ if (fabric_parms->co_parms.bb_receive_data_field_size > UNF_MAX_FRAME_SIZE)
+ rport->max_frame_size = UNF_MAX_FRAME_SIZE; /* 2112 */
+ else
+ rport->max_frame_size = fabric_parms->co_parms.bb_receive_data_field_size;
+
+ /* with Fabric attribute */
+ if (fabric_parms->co_parms.nport == UNF_F_PORT) {
+ rport->ed_tov = fabric_parms->co_parms.e_d_tov;
+ rport->ra_tov = fabric_parms->co_parms.r_a_tov;
+ lport->ed_tov = fabric_parms->co_parms.e_d_tov;
+ lport->ra_tov = fabric_parms->co_parms.r_a_tov;
+ lport->fabric_node_name = fabric_node_name;
+ }
+
+ /* Configure info from FLOGI to chip */
+ unf_cfg_lowlevel_fabric_params(lport, rport, fabric_parms);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) Rport(0x%x) login parameter: E_D_TOV = %u. LPort E_D_TOV = %u. fabric nodename: 0x%x%x",
+ lport->port_id, rport->nport_id, (fabric_parms->co_parms.e_d_tov),
+ lport->ed_tov, fabric_parms->high_node_name, fabric_parms->low_node_name);
+}
+
+u32 unf_flogi_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct unf_flogi_fdisc_acc *flogi_frame = NULL;
+ struct unf_fabric_parm *fabric_login_parms = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ ulong flag = 0;
+ u64 wwpn = 0;
+ u64 wwnn = 0;
+ enum unf_act_topo unf_active_topo;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x)<---RPort(0x%x) Receive FLOGI with OX_ID(0x%x)",
+ lport->port_id, sid, xchg->oxid);
+
+ UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_FLOGI);
+
+ /* Check L_Port state: Offline */
+ if (lport->states >= UNF_LPORT_ST_OFFLINE) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) with state(0x%x) not need to handle FLOGI",
+ lport->port_id, lport->states);
+
+ unf_cm_free_xchg(lport, xchg);
+ return ret;
+ }
+
+ flogi_frame = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->flogi;
+ fabric_login_parms = &flogi_frame->flogi_payload.fabric_parms;
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, &flogi_frame->flogi_payload,
+ sizeof(struct unf_flogi_fdisc_payload));
+ wwpn = (u64)(((u64)(fabric_login_parms->high_port_name) << UNF_SHIFT_32) |
+ ((u64)fabric_login_parms->low_port_name));
+ wwnn = (u64)(((u64)(fabric_login_parms->high_node_name) << UNF_SHIFT_32) |
+ ((u64)fabric_login_parms->low_node_name));
+
+ /* Get (new) R_Port: reuse only */
+ unf_rport = unf_get_rport_by_nport_id(lport, UNF_FC_FID_FLOGI);
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY, UNF_FC_FID_FLOGI);
+ if (unlikely(!unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) has no RPort. do nothing", lport->port_id);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Update R_Port info */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->port_name = wwpn;
+ unf_rport->node_name = wwnn;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Check RCVD FLOGI parameters: only for class-3 */
+ ret = unf_check_flogi_params(lport, unf_rport, fabric_login_parms);
+ if (ret != RETURN_OK) {
+ /* Discard directly */
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Save fabric parameters */
+ unf_save_fabric_params(lport, unf_rport, fabric_login_parms);
+
+ if ((u32)lport->act_topo & UNF_TOP_P2P_MASK) {
+ unf_active_topo =
+ (fabric_login_parms->co_parms.nport == UNF_F_PORT)
+ ? UNF_ACT_TOP_P2P_FABRIC
+ : UNF_ACT_TOP_P2P_DIRECT;
+ unf_lport_update_topo(lport, unf_active_topo);
+ }
+ /* Send ACC for FLOGI */
+ ret = unf_send_flogi_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send FLOGI ACC failed and do recover",
+ lport->port_id);
+
+ /* Do L_Port recovery */
+ unf_lport_error_recovery(lport);
+ }
+
+ return ret;
+}
+
+static void unf_cfg_lowlevel_port_params(struct unf_lport *lport,
+ struct unf_rport *rport,
+ struct unf_lgn_parm *login_parms,
+ u32 cmd_type)
+{
+ struct unf_port_login_parms login_co_parms = {0};
+ u32 ret = 0;
+
+ if (!lport->low_level_func.port_mgr_op.ll_port_config_set)
+ return;
+
+ login_co_parms.rport_index = rport->rport_index;
+ login_co_parms.seq_cnt = 0;
+ login_co_parms.ed_tov = 0; /* ms */
+ login_co_parms.ed_tov_timer_val = lport->ed_tov;
+ login_co_parms.tx_mfs = rport->max_frame_size;
+
+ login_co_parms.remote_rttov_tag = (u8)UNF_GET_RT_TOV_FROM_PARAMS(login_parms);
+ login_co_parms.remote_edtov_tag = 0;
+ login_co_parms.remote_bb_credit = (u16)UNF_GET_BB_CREDIT_FROM_PARAMS(login_parms);
+ login_co_parms.els_cmnd_code = cmd_type;
+
+ if (lport->act_topo == UNF_ACT_TOP_PRIVATE_LOOP) {
+ login_co_parms.compared_bbscn = 0;
+ } else {
+ login_co_parms.compared_bbscn =
+ (u32)unf_determin_bbscn((u8)lport->low_level_func.lport_cfg_items.bbscn,
+ (u8)UNF_GET_BB_SC_N_FROM_PARAMS(login_parms));
+ }
+
+ login_co_parms.compared_edtov_val = lport->ed_tov;
+ login_co_parms.compared_ratov_val = lport->ra_tov;
+
+ ret = lport->low_level_func.port_mgr_op.ll_port_config_set((void *)lport->fc_port,
+ UNF_PORT_CFG_UPDATE_PLOGI_PARAM, (void *)&login_co_parms);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) low level does not support port config", lport->port_id);
+ }
+}
+
+u32 unf_check_plogi_params(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_lgn_parm *login_parms)
+{
+ u32 ret = RETURN_OK;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(login_parms, UNF_RETURN_ERROR);
+
+ /* Parameters check: Class-type */
+ if (login_parms->cl_parms[ARRAY_INDEX_2].valid == UNF_CLASS_INVALID ||
+ login_parms->co_parms.bb_receive_data_field_size == 0) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort N_Port_ID(0x%x) with PLOGI parameters invalid: class3(%u), BBReceiveDataFieldSize(0x%x), send LOGO",
+ lport->port_id, rport->nport_id,
+ login_parms->cl_parms[ARRAY_INDEX_2].valid,
+ login_parms->co_parms.bb_receive_data_field_size);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_LOGO); /* --->>> LOGO */
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ /* Enter LOGO stage */
+ unf_rport_enter_logo(lport, rport);
+ return UNF_RETURN_ERROR;
+ }
+
+	/* On 16G FC Brocade switches, the Domain Controller's PLOGI may
+	 * advertise both CLASS-1 and CLASS-2
+	 */
+ if (login_parms->cl_parms[ARRAY_INDEX_0].valid == UNF_CLASS_VALID ||
+ login_parms->cl_parms[ARRAY_INDEX_1].valid == UNF_CLASS_VALID) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) get PLOGI class1(%u) class2(%u) from N_Port_ID(0x%x)",
+ lport->port_id, login_parms->cl_parms[ARRAY_INDEX_0].valid,
+ login_parms->cl_parms[ARRAY_INDEX_1].valid, rport->nport_id);
+ }
+
+ return ret;
+}
+
+static void unf_save_plogi_params(struct unf_lport *lport,
+ struct unf_rport *rport,
+ struct unf_lgn_parm *login_parms,
+ u32 cmd_code)
+{
+#define UNF_DELAY_TIME 100 /* ms: the side with the smaller WWPN delays sending PRLI (COM mode) */
+
+ u64 wwpn = INVALID_VALUE64;
+ u64 wwnn = INVALID_VALUE64;
+ u32 ed_tov = 0;
+ u32 remote_edtov = 0;
+
+ if (login_parms->co_parms.bb_receive_data_field_size > UNF_MAX_FRAME_SIZE)
+ rport->max_frame_size = UNF_MAX_FRAME_SIZE; /* 2112 */
+ else
+ rport->max_frame_size = login_parms->co_parms.bb_receive_data_field_size;
+
+ wwnn = (u64)(((u64)(login_parms->high_node_name) << UNF_SHIFT_32) |
+ ((u64)login_parms->low_node_name));
+ wwpn = (u64)(((u64)(login_parms->high_port_name) << UNF_SHIFT_32) |
+ ((u64)login_parms->low_port_name));
+
+ remote_edtov = login_parms->co_parms.e_d_tov;
+ ed_tov = login_parms->co_parms.e_d_tov_resolution
+ ? (remote_edtov / UNF_OS_MS_TO_NS)
+ : remote_edtov;
+
+ rport->port_name = wwpn;
+ rport->node_name = wwnn;
+ rport->local_nport_id = lport->nport_id;
+
+ if (lport->act_topo == UNF_ACT_TOP_P2P_DIRECT ||
+ lport->act_topo == UNF_ACT_TOP_PRIVATE_LOOP) {
+ /* P2P or Private Loop or FCoE VN2VN */
+ lport->ed_tov = (lport->ed_tov > ed_tov) ? lport->ed_tov : ed_tov;
+ lport->ra_tov = 2 * lport->ed_tov; /* 2 * E_D_TOV */
+
+ if (ed_tov != 0)
+ rport->ed_tov = ed_tov;
+ else
+ rport->ed_tov = UNF_DEFAULT_EDTOV;
+ } else {
+ /* SAN: E_D_TOV updated by FLOGI */
+ rport->ed_tov = lport->ed_tov;
+ }
+
+	/* Local WWPN is smaller: add a delay before sending PRLI */
+ if (rport->port_name > lport->port_name)
+ rport->ed_tov += UNF_DELAY_TIME; /* 100ms */
+
+ /* Configure port parameters to low level (chip) */
+ unf_cfg_lowlevel_port_params(lport, rport, login_parms, cmd_code);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) RPort(0x%x) with WWPN(0x%llx) WWNN(0x%llx) login: ED_TOV(%u) Port: ED_TOV(%u)",
+ lport->port_id, rport->nport_id, rport->port_name, rport->node_name,
+ ed_tov, lport->ed_tov);
+}
+
+static bool unf_check_bbscn_is_enabled(u8 local_bbscn, u8 remote_bbscn)
+{
+ return unf_determin_bbscn(local_bbscn, remote_bbscn) ? true : false;
+}
+
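+/*
+ * Defer ELS processing from interrupt context to the L_Port event thread:
+ * an extra reference is taken on the exchange, an asynchronous event
+ * running evt_task is queued, and the task drops the reference when it
+ * finishes.
+ */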
+static u32 unf_irq_process_switch2thread(void *lport, struct unf_xchg *xchg,
+ unf_event_task evt_task)
+{
+ struct unf_cm_event_report *event = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ u32 ret = 0;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+ unf_lport = lport;
+ unf_xchg = xchg;
+
+ if (unlikely(!unf_lport->event_mgr.unf_get_free_event_func ||
+ !unf_lport->event_mgr.unf_post_event_func ||
+ !unf_lport->event_mgr.unf_release_event)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) event function is NULL",
+ unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ ret = unf_xchg_ref_inc(unf_xchg, SFS_RESPONSE);
+ FC_CHECK_RETURN_VALUE((ret == RETURN_OK), UNF_RETURN_ERROR);
+
+ event = unf_lport->event_mgr.unf_get_free_event_func((void *)lport);
+ FC_CHECK_RETURN_VALUE(event, UNF_RETURN_ERROR);
+
+ event->lport = unf_lport;
+ event->event_asy_flag = UNF_EVENT_ASYN;
+ event->unf_event_task = evt_task;
+ event->para_in = xchg;
+ unf_lport->event_mgr.unf_post_event_func(unf_lport, event);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) start to switch thread process now",
+ unf_lport->port_id);
+
+ return ret;
+}
+
+u32 unf_plogi_handler_com_process(struct unf_xchg *xchg)
+{
+ struct unf_xchg *unf_xchg = xchg;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_plogi_pdisc *plogi_frame = NULL;
+ struct unf_lgn_parm *login_parms = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(unf_xchg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(unf_xchg->lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(unf_xchg->rport, UNF_RETURN_ERROR);
+
+ unf_lport = unf_xchg->lport;
+ unf_rport = unf_xchg->rport;
+ plogi_frame = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->plogi;
+ login_parms = &plogi_frame->payload.stparms;
+
+ unf_save_plogi_params(unf_lport, unf_rport, login_parms, ELS_PLOGI);
+
+ /* Update state: PLOGI_WAIT */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = unf_xchg->sid;
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Send PLOGI ACC to remote port */
+ ret = unf_send_plogi_acc(unf_lport, unf_rport, unf_xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send PLOGI ACC failed",
+ unf_lport->port_id);
+
+		/* NOTE: the exchange has already been freed internally */
+ unf_rport_error_recovery(unf_rport);
+ return ret;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]LOGIN: Port(0x%x) send PLOGI ACC to Port(0x%x) succeed",
+ unf_lport->port_id, unf_rport->nport_id);
+
+ return ret;
+}
+
+int unf_plogi_async_handle(void *argc_in, void *argc_out)
+{
+ struct unf_xchg *xchg = (struct unf_xchg *)argc_in;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ ret = unf_plogi_handler_com_process(xchg);
+
+ unf_xchg_ref_dec(xchg, SFS_RESPONSE);
+
+ return (int)ret;
+}
+
+u32 unf_plogi_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ struct unf_xchg *unf_xchg = xchg;
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_plogi_pdisc *plogi_frame = NULL;
+ struct unf_lgn_parm *login_parms = NULL;
+ struct unf_rjt_info rjt_info = {0};
+ u64 wwpn = INVALID_VALUE64;
+ u32 ret = UNF_RETURN_ERROR;
+ bool bbscn_enabled = false;
+ bool switch2thread = false;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+	/* 1. The PLOGI may have been sent by the Name Server */
+ if (sid < UNF_FC_FID_DOM_MGR ||
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Receive PLOGI. Port(0x%x_0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
+ lport->port_id, lport->nport_id, sid, xchg->oxid);
+ }
+
+ UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_PLOGI);
+
+ /* 2. State check: Offline */
+ if (unf_lport->states >= UNF_LPORT_ST_OFFLINE) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) received PLOGI with state(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, unf_lport->states);
+
+ unf_cm_free_xchg(unf_lport, unf_xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+	/* Get R_Port by WWPN */
+ plogi_frame = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->plogi;
+ login_parms = &plogi_frame->payload.stparms;
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, unf_lport->port_id, &plogi_frame->payload,
+ sizeof(struct unf_plogi_payload));
+
+ wwpn = (u64)(((u64)(login_parms->high_port_name) << UNF_SHIFT_32) |
+ ((u64)login_parms->low_port_name));
+
+ /* 3. Get (new) R_Port (by wwpn) */
+ unf_rport = unf_find_rport(unf_lport, sid, wwpn);
+ unf_rport = unf_get_safe_rport(unf_lport, unf_rport, UNF_RPORT_REUSE_ONLY, sid);
+ if (!unf_rport) {
+ memset(&rjt_info, 0, sizeof(struct unf_rjt_info));
+ rjt_info.els_cmnd_code = ELS_PLOGI;
+ rjt_info.reason_code = UNF_LS_RJT_BUSY;
+ rjt_info.reason_explanation = UNF_LS_RJT_INSUFFICIENT_RESOURCES;
+
+ /* R_Port is NULL: Send ELS RJT for PLOGI */
+ (void)unf_send_els_rjt_by_did(unf_lport, unf_xchg, sid, &rjt_info);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) has no RPort and send PLOGI reject",
+ unf_lport->port_id);
+ return RETURN_OK;
+ }
+
+ /*
+ * 4. According to FC-LS 4.2.7.1:
+	 * after a PLOGI is received or a PLOGI ACC is sent, any open
+	 * exchanges must be terminated
+ */
+ unf_cm_xchg_mgr_abort_io_by_id(unf_lport, unf_rport, sid, unf_lport->nport_id, 0);
+
+ /* 5. Cancel recovery timer work after RCVD PLOGI */
+ if (cancel_delayed_work(&unf_rport->recovery_work))
+ atomic_dec(&unf_rport->rport_ref_cnt);
+
+ /*
+ * 6. Plogi parameters check
+ * Call by: (RCVD) PLOGI handler & callback function for RCVD PLOGI_ACC
+ */
+ ret = unf_check_plogi_params(unf_lport, unf_rport, login_parms);
+ if (ret != RETURN_OK) {
+ unf_cm_free_xchg(unf_lport, unf_xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ unf_xchg->lport = lport;
+ unf_xchg->rport = unf_rport;
+ unf_xchg->sid = sid;
+
+	/* 7. Check BB_SC_N to decide whether to switch to thread context */
+ bbscn_enabled =
+ unf_check_bbscn_is_enabled((u8)unf_lport->low_level_func.lport_cfg_items.bbscn,
+ (u8)UNF_GET_BB_SC_N_FROM_PARAMS(login_parms));
+ if (unf_lport->act_topo == UNF_ACT_TOP_P2P_DIRECT && bbscn_enabled) {
+ switch2thread = true;
+ unf_lport->bbscn_support = true;
+ }
+
+ /* 8. Process PLOGI Frame: switch to thread if necessary */
+ if (switch2thread && unf_lport->root_lport == unf_lport) {
+		/* Wait for LR completion synchronously */
+ ret = unf_irq_process_switch2thread(unf_lport, unf_xchg, unf_plogi_async_handle);
+ } else {
+ ret = unf_plogi_handler_com_process(unf_xchg);
+ }
+
+ return ret;
+}
+
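+/*
+ * Inspect PRLI service parameter word 3: FC-tape support is recorded only
+ * when the local port is configured for it and the peer advertises REC,
+ * task-retry-identification and retry support; FCP CONF is recorded when
+ * both sides allow it.
+ */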
+static void unf_obtain_tape_capacity(struct unf_lport *lport,
+ struct unf_rport *rport, u32 tape_parm)
+{
+ u32 rec_support = 0;
+ u32 task_retry_support = 0;
+ u32 retry_support = 0;
+
+ rec_support = tape_parm & UNF_FC4_FRAME_PARM_3_REC_SUPPORT;
+ task_retry_support =
+ tape_parm & UNF_FC4_FRAME_PARM_3_TASK_RETRY_ID_SUPPORT;
+ retry_support = tape_parm & UNF_FC4_FRAME_PARM_3_RETRY_SUPPORT;
+
+ if (lport->low_level_func.lport_cfg_items.tape_support &&
+ rec_support && task_retry_support && retry_support) {
+ rport->tape_support_needed = true;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) FC_tape is needed for RPort(0x%x)",
+ lport->port_id, lport->nport_id, rport->nport_id);
+ }
+
+ if ((tape_parm & UNF_FC4_FRAME_PARM_3_CONF_ALLOW) &&
+ lport->low_level_func.lport_cfg_items.fcp_conf) {
+ rport->fcp_conf_needed = true;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) FCP confirm is needed for RPort(0x%x)",
+ lport->port_id, lport->nport_id, rport->nport_id);
+ }
+}
+
+static u32 unf_prli_handler_com_process(struct unf_xchg *xchg)
+{
+ struct unf_prli_prlo *prli = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ ulong flags = 0;
+ u32 sid = 0;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+
+ unf_xchg = xchg;
+ FC_CHECK_RETURN_VALUE(unf_xchg->lport, UNF_RETURN_ERROR);
+ unf_lport = unf_xchg->lport;
+ sid = xchg->sid;
+
+ UNF_SERVICE_COLLECT(unf_lport->link_service_info, UNF_SERVICE_ITEM_PRLI);
+
+ /* 1. Get R_Port: for each R_Port from rport_busy_list */
+ unf_rport = unf_get_rport_by_nport_id(unf_lport, sid);
+ if (!unf_rport) {
+		/* No session (R_Port) exists */
+ (void)unf_send_logo_by_did(unf_lport, sid);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) received PRLI but no RPort SID(0x%x) OX_ID(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, sid, xchg->oxid);
+
+ unf_cm_free_xchg(unf_lport, xchg);
+ return ret;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]LOGIN: Receive PRLI. Port(0x%x)<---RPort(0x%x) with S_ID(0x%x)",
+ unf_lport->port_id, unf_rport->nport_id, sid);
+
+ /* 2. Get PRLI info */
+ prli = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->prli;
+ if (sid < UNF_FC_FID_DOM_MGR || unf_lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Receive PRLI. Port(0x%x_0x%x)<---RPort(0x%x) parameter-3(0x%x) OX_ID(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, sid,
+ prli->payload.parms[ARRAY_INDEX_3], xchg->oxid);
+ }
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, unf_lport->port_id, &prli->payload,
+ sizeof(struct unf_prli_payload));
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
+
+ /* 3. Increase R_Port ref_cnt */
+ ret = unf_rport_ref_inc(unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) RPort(0x%x_0x%p) is being removed, do nothing",
+ unf_lport->port_id, unf_rport->nport_id, unf_rport);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ unf_cm_free_xchg(unf_lport, xchg);
+ return RETURN_ERROR;
+ }
+
+ /* 4. Cancel R_Port Open work */
+ if (cancel_delayed_work(&unf_rport->open_work)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) RPort(0x%x) cancel open work succeed",
+ unf_lport->port_id, unf_lport->nport_id, unf_rport->nport_id);
+
+ /* This is not the last counter */
+ atomic_dec(&unf_rport->rport_ref_cnt);
+ }
+
+ /* 5. Check R_Port state */
+ if (unf_rport->rp_state != UNF_RPORT_ST_PRLI_WAIT &&
+ unf_rport->rp_state != UNF_RPORT_ST_READY) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) RPort(0x%x) with state(0x%x) when received PRLI, send LOGO",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_rport->nport_id, unf_rport->rp_state);
+
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ /* NOTE: Start to send LOGO */
+ unf_rport_enter_logo(unf_lport, unf_rport);
+
+ unf_cm_free_xchg(unf_lport, xchg);
+ unf_rport_ref_dec(unf_rport);
+
+ return RETURN_ERROR;
+ }
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ /* 6. Update R_Port options(INI/TGT/BOTH) */
+ unf_rport->options =
+ prli->payload.parms[ARRAY_INDEX_3] &
+ (UNF_FC4_FRAME_PARM_3_TGT | UNF_FC4_FRAME_PARM_3_INI);
+
+ unf_update_port_feature(unf_rport->port_name, unf_rport->options);
+
+ /* for Confirm */
+ unf_rport->fcp_conf_needed = false;
+
+ unf_obtain_tape_capacity(unf_lport, unf_rport, prli->payload.parms[ARRAY_INDEX_3]);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x_0x%x) RPort(0x%x) parameter-3(0x%x) options(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, unf_rport->nport_id,
+ prli->payload.parms[ARRAY_INDEX_3], unf_rport->options);
+
+ /* 7. Send PRLI ACC */
+ ret = unf_send_prli_acc(unf_lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) RPort(0x%x) send PRLI ACC failed",
+ unf_lport->port_id, unf_lport->nport_id, unf_rport->nport_id);
+
+		/* NOTE: the exchange has already been freed internally */
+ unf_rport_error_recovery(unf_rport);
+ }
+
+ /* 8. Decrease R_Port ref_cnt */
+ unf_rport_ref_dec(unf_rport);
+
+ return ret;
+}
+
+int unf_prli_async_handle(void *argc_in, void *argc_out)
+{
+ struct unf_xchg *xchg = (struct unf_xchg *)argc_in;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ ret = unf_prli_handler_com_process(xchg);
+
+ unf_xchg_ref_dec(xchg, SFS_RESPONSE);
+
+ return (int)ret;
+}
+
+u32 unf_prli_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ bool switch2thread = false;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ xchg->sid = sid;
+ xchg->lport = lport;
+ unf_lport = lport;
+
+ if (lport->bbscn_support &&
+ lport->act_topo == UNF_ACT_TOP_P2P_DIRECT)
+ switch2thread = true;
+
+ if (switch2thread && unf_lport->root_lport == unf_lport) {
+		/* Wait for LR completion synchronously */
+ ret = unf_irq_process_switch2thread(lport, xchg, unf_prli_async_handle);
+ } else {
+ ret = unf_prli_handler_com_process(xchg);
+ }
+
+ return ret;
+}
+
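+/*
+ * Save an RSCN-affected Port_ID page into the RSCN manager, skipping pages
+ * whose domain/area/port combination is already present in the
+ * list_using_rscn_page list.
+ */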
+static void unf_save_rscn_port_id(struct unf_rscn_mgr *rscn_mg,
+ struct unf_rscn_port_id_page *rscn_port_id)
+{
+ struct unf_port_id_page *exit_port_id_page = NULL;
+ struct unf_port_id_page *new_port_id_page = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+ bool is_repeat = false;
+
+ FC_CHECK_RETURN_VOID(rscn_mg);
+ FC_CHECK_RETURN_VOID(rscn_port_id);
+
+	/* 1. Check whether the new RSCN Port_ID (RSCN_Page) is already
+	 * tracked by the RSCN_Mgr
+	 */
+ spin_lock_irqsave(&rscn_mg->rscn_id_list_lock, flag);
+ if (list_empty(&rscn_mg->list_using_rscn_page)) {
+ is_repeat = false;
+ } else {
+		/* Check for duplicates: walk each existing RSCN page in the
+		 * RSCN_Mgr page list
+		 */
+ list_for_each_safe(node, next_node, &rscn_mg->list_using_rscn_page) {
+ exit_port_id_page = list_entry(node, struct unf_port_id_page,
+ list_node_rscn);
+ if (exit_port_id_page->port_id_port == rscn_port_id->port_id_port &&
+ exit_port_id_page->port_id_area == rscn_port_id->port_id_area &&
+ exit_port_id_page->port_id_domain == rscn_port_id->port_id_domain) {
+ is_repeat = true;
+ break;
+ }
+ }
+ }
+ spin_unlock_irqrestore(&rscn_mg->rscn_id_list_lock, flag);
+
+ FC_CHECK_RETURN_VOID(rscn_mg->unf_get_free_rscn_node);
+
+	/* 2. Get a free RSCN node and add it to the RSCN_Mgr */
+ if (!is_repeat) {
+ new_port_id_page = rscn_mg->unf_get_free_rscn_node(rscn_mg);
+ if (!new_port_id_page) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_ERR, "[err]Get free RSCN node failed");
+
+ return;
+ }
+
+ new_port_id_page->addr_format = rscn_port_id->addr_format;
+ new_port_id_page->event_qualifier = rscn_port_id->event_qualifier;
+ new_port_id_page->reserved = rscn_port_id->reserved;
+ new_port_id_page->port_id_domain = rscn_port_id->port_id_domain;
+ new_port_id_page->port_id_area = rscn_port_id->port_id_area;
+ new_port_id_page->port_id_port = rscn_port_id->port_id_port;
+
+ /* Add entry to list: using_rscn_page */
+ spin_lock_irqsave(&rscn_mg->rscn_id_list_lock, flag);
+ list_add_tail(&new_port_id_page->list_node_rscn, &rscn_mg->list_using_rscn_page);
+ spin_unlock_irqrestore(&rscn_mg->rscn_id_list_lock, flag);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+			     "[info]RSCN node is repeated: domain(0x%x) area(0x%x) port(0x%x)",
+ rscn_port_id->port_id_domain, rscn_port_id->port_id_area,
+ rscn_port_id->port_id_port);
+ }
+}
+
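+/*
+ * Walk the Port_ID pages of an RSCN payload: IDs belonging to a local port
+ * are ignored, new IDs are saved into the RSCN manager, and a rediscovery
+ * (GID_PT) is started unless one already ran within the last 10 seconds.
+ */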
+static u32 unf_analysis_rscn_payload(struct unf_lport *lport,
+ struct unf_rscn_pld *rscn_pld)
+{
+#define UNF_OS_DISC_REDISC_TIME 10000
+
+ struct unf_rscn_port_id_page *rscn_port_id = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_rscn_mgr *rscn_mgr = NULL;
+ u32 index = 0;
+ u32 pld_len = 0;
+ u32 port_id_page_cnt = 0;
+ u32 ret = RETURN_OK;
+ ulong flag = 0;
+ bool eb_need_disc_flag = false;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rscn_pld, UNF_RETURN_ERROR);
+
+ /* This field is the length in bytes of the entire Payload, inclusive of
+ * the word 0
+ */
+ pld_len = UNF_GET_RSCN_PLD_LEN(rscn_pld->cmnd);
+ pld_len -= sizeof(rscn_pld->cmnd);
+ port_id_page_cnt = pld_len / UNF_RSCN_PAGE_LEN;
+
+	/* The number of pages within the payload must not exceed 255 */
+ if (port_id_page_cnt > UNF_RSCN_PAGE_SUM) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x_0x%x) page num(0x%x) exceeds 255 in RSCN",
+ lport->port_id, lport->nport_id, port_id_page_cnt);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* L_Port-->Disc-->Rscn_Mgr */
+ disc = &lport->disc;
+ rscn_mgr = &disc->rscn_mgr;
+
+	/* For each ID in the RSCN pages: check whether discovery is needed */
+ while (index < port_id_page_cnt) {
+ rscn_port_id = &rscn_pld->port_id_page[index];
+ if (unf_lookup_lport_by_nportid(lport, *(u32 *)rscn_port_id)) {
+			/* Avoid creating a session with an L_Port that has the
+			 * same N_Port_ID
+			 */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) find local N_Port_ID(0x%x) within RSCN payload",
+ ((struct unf_lport *)(lport->root_lport))->nport_id,
+ *(u32 *)rscn_port_id);
+ } else {
+ /* New RSCN_Page ID find, save it to RSCN_Mgr */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x_0x%x) save RSCN N_Port_ID(0x%x)",
+ lport->port_id, lport->nport_id,
+ *(u32 *)rscn_port_id);
+
+ /* 1. new RSCN_Page ID find, save it to RSCN_Mgr */
+ unf_save_rscn_port_id(rscn_mgr, rscn_port_id);
+ eb_need_disc_flag = true;
+ }
+ index++;
+ }
+
+ if (!eb_need_disc_flag) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "[info]Port(0x%x) find all N_Port_ID and do not need to disc",
+ ((struct unf_lport *)(lport->root_lport))->nport_id);
+
+ return RETURN_OK;
+ }
+
+ /* 2. Do/Start Disc: Check & do Disc (GID_PT) process */
+ if (!disc->disc_temp.unf_disc_start) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x_0x%x) DISC start function is NULL",
+			     lport->port_id, lport->nport_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ if (disc->states == UNF_DISC_ST_END ||
+ ((jiffies - disc->last_disc_jiff) > msecs_to_jiffies(UNF_OS_DISC_REDISC_TIME))) {
+ disc->disc_option = UNF_RSCN_DISC;
+ disc->last_disc_jiff = jiffies;
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ ret = disc->disc_temp.unf_disc_start(lport);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_INFO,
+ "[info]Port(0x%x_0x%x) DISC state(0x%x) with last time(%llu) and don't do DISC",
+ lport->port_id, lport->nport_id, disc->states,
+ disc->last_disc_jiff);
+
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+ }
+
+ return ret;
+}
+
+u32 unf_rscn_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ /*
+ * A RSCN ELS shall be sent to registered Nx_Ports
+ * when an event occurs that may have affected the state of
+ * one or more Nx_Ports, or the ULP state within the Nx_Port.
+ * *
+ * The Payload of a RSCN Request includes a list
+ * containing the addresses of the affected Nx_Ports.
+ * *
+ * Each affected Port_ID page contains the ID of the Nx_Port,
+ * Fabric Controller, E_Port, domain, or area for which the event was
+ * detected.
+ */
+ struct unf_rscn_pld *rscn_pld = NULL;
+ struct unf_rport *unf_rport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 pld_len = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Receive RSCN Port(0x%x_0x%x)<---RPort(0x%x) OX_ID(0x%x)",
+ lport->port_id, lport->nport_id, sid, xchg->oxid);
+
+ UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_RSCN);
+
+ /* 1. Get R_Port by S_ID */
+ unf_rport = unf_get_rport_by_nport_id(lport, sid); /* rport busy_list */
+ if (!unf_rport) {
+ unf_rport = unf_rport_get_free_and_init(lport, UNF_PORT_TYPE_FC, sid);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) received RSCN but has no RPort(0x%x) with OX_ID(0x%x)",
+ lport->port_id, lport->nport_id, sid, xchg->oxid);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ unf_rport->nport_id = sid;
+ }
+
+ rscn_pld = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->rscn.rscn_pld;
+ FC_CHECK_RETURN_VALUE(rscn_pld, UNF_RETURN_ERROR);
+ pld_len = UNF_GET_RSCN_PLD_LEN(rscn_pld->cmnd);
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, rscn_pld, pld_len);
+
+ /* 2. NOTE: Analysis RSCN payload(save & disc if necessary) */
+ ret = unf_analysis_rscn_payload(lport, rscn_pld);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) analysis RSCN failed",
+ lport->port_id, lport->nport_id);
+ }
+
+ /* 3. send rscn_acc after analysis payload */
+ ret = unf_send_rscn_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) send RSCN response failed",
+ lport->port_id, lport->nport_id);
+ }
+
+ return ret;
+}
+
+static void unf_analysis_pdisc_pld(struct unf_lport *lport,
+ struct unf_rport *rport,
+ struct unf_plogi_pdisc *pdisc)
+{
+ struct unf_lgn_parm *pdisc_params = NULL;
+ u64 wwpn = INVALID_VALUE64;
+ u64 wwnn = INVALID_VALUE64;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(pdisc);
+
+ pdisc_params = &pdisc->payload.stparms;
+ if (pdisc_params->co_parms.bb_receive_data_field_size > UNF_MAX_FRAME_SIZE)
+ rport->max_frame_size = UNF_MAX_FRAME_SIZE;
+ else
+ rport->max_frame_size = pdisc_params->co_parms.bb_receive_data_field_size;
+
+ wwnn = (u64)(((u64)(pdisc_params->high_node_name) << UNF_SHIFT_32) |
+ ((u64)pdisc_params->low_node_name));
+ wwpn = (u64)(((u64)(pdisc_params->high_port_name) << UNF_SHIFT_32) |
+ ((u64)pdisc_params->low_port_name));
+
+ rport->port_name = wwpn;
+ rport->node_name = wwnn;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) save PDISC parameters to Rport(0x%x) WWPN(0x%llx) WWNN(0x%llx)",
+ lport->port_id, rport->nport_id, rport->port_name,
+ rport->node_name);
+}
+
+u32 unf_send_pdisc_rjt(struct unf_lport *lport, struct unf_rport *rport, struct unf_xchg *xchg)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_rjt_info rjt_info;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ memset(&rjt_info, 0, sizeof(struct unf_rjt_info));
+ rjt_info.els_cmnd_code = ELS_PDISC;
+ rjt_info.reason_code = UNF_LS_RJT_LOGICAL_ERROR;
+ rjt_info.reason_explanation = UNF_LS_RJT_NO_ADDITIONAL_INFO;
+
+ ret = unf_send_els_rjt_by_rport(lport, xchg, rport, &rjt_info);
+
+ return ret;
+}
+
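+/*
+ * PDISC handling depends on the R_Port state: READY and PRLI_WAIT are
+ * answered with a PDISC ACC, CLOSING (with an active session) and unknown
+ * senders lead to a LOGO, and PDISC from fabric addresses is rejected.
+ */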
+u32 unf_pdisc_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ struct unf_plogi_pdisc *pdisc = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong flags = 0;
+ u32 ret = RETURN_OK;
+ u64 wwpn = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Receive PDISC. Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
+ lport->port_id, sid, xchg->oxid);
+
+ UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_PDISC);
+ pdisc = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->pdisc;
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, &pdisc->payload,
+ sizeof(struct unf_plogi_payload));
+ wwpn = (u64)(((u64)(pdisc->payload.stparms.high_port_name) << UNF_SHIFT_32) |
+ ((u64)pdisc->payload.stparms.low_port_name));
+
+ unf_rport = unf_find_rport(lport, sid, wwpn);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't find RPort by NPort ID(0x%x). Free exchange and send LOGO",
+ lport->port_id, sid);
+
+ unf_cm_free_xchg(lport, xchg);
+ (void)unf_send_logo_by_did(lport, sid);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MINOR,
+ "[info]Port(0x%x) get exist RPort(0x%x) when receive PDISC with S_Id(0x%x)",
+ lport->port_id, unf_rport->nport_id, sid);
+
+ if (sid >= UNF_FC_FID_DOM_MGR)
+ return unf_send_pdisc_rjt(lport, unf_rport, xchg);
+
+ unf_analysis_pdisc_pld(lport, unf_rport, pdisc);
+
+ /* State: READY */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
+ if (unf_rport->rp_state == UNF_RPORT_ST_READY) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) find RPort(0x%x) state is READY when receiving PDISC",
+ lport->port_id, sid);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ ret = unf_send_pdisc_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) handle PDISC failed",
+ lport->port_id);
+
+ return ret;
+ }
+
+ /* Report Down/Up event to scsi */
+ unf_update_lport_state_by_linkup_event(lport,
+ unf_rport, unf_rport->options);
+ } else if ((unf_rport->rp_state == UNF_RPORT_ST_CLOSING) &&
+ (unf_rport->session)) {
+ /* State: Closing */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) find RPort(0x%x) state is 0x%x when receiving PDISC",
+ lport->port_id, sid, unf_rport->rp_state);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ unf_cm_free_xchg(lport, xchg);
+ (void)unf_send_logo_by_did(lport, sid);
+ } else if (unf_rport->rp_state == UNF_RPORT_ST_PRLI_WAIT) {
+ /* State: PRLI_WAIT */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) find RPort(0x%x) state is 0x%x when receiving PDISC",
+ lport->port_id, sid, unf_rport->rp_state);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ ret = unf_send_pdisc_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) handle PDISC failed",
+ lport->port_id);
+
+ return ret;
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) find RPort(0x%x) state is 0x%x when receiving PDISC, send LOGO",
+ lport->port_id, sid, unf_rport->rp_state);
+
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ unf_rport_enter_logo(lport, unf_rport);
+ unf_cm_free_xchg(lport, xchg);
+ }
+ }
+
+ return ret;
+}
+
+static void unf_analysis_adisc_pld(struct unf_lport *lport,
+ struct unf_rport *rport,
+ struct unf_adisc_payload *adisc_pld)
+{
+ u64 wwpn = INVALID_VALUE64;
+ u64 wwnn = INVALID_VALUE64;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(adisc_pld);
+
+ wwnn = (u64)(((u64)(adisc_pld->high_node_name) << UNF_SHIFT_32) |
+ ((u64)adisc_pld->low_node_name));
+ wwpn = (u64)(((u64)(adisc_pld->high_port_name) << UNF_SHIFT_32) |
+ ((u64)adisc_pld->low_port_name));
+
+ rport->port_name = wwpn;
+ rport->node_name = wwnn;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) save ADISC parameters to RPort(0x%x), WWPN(0x%llx) WWNN(0x%llx) NPort ID(0x%x)",
+ lport->port_id, rport->nport_id, rport->port_name,
+ rport->node_name, adisc_pld->nport_id);
+}
+
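+/*
+ * ADISC handling mirrors PDISC: READY and PRLI_WAIT are answered with an
+ * ADISC ACC, CLOSING tries to recover the R_Port before answering, and any
+ * other state results in a LOGO.
+ */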
+u32 unf_adisc_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct unf_adisc_payload *adisc_pld = NULL;
+ ulong flags = 0;
+ u64 wwpn = 0;
+ u32 ret = RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Receive ADISC. Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
+ lport->port_id, sid, xchg->oxid);
+
+ UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_ADISC);
+ adisc_pld = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->adisc.adisc_payl;
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, adisc_pld, sizeof(struct unf_adisc_payload));
+ wwpn = (u64)(((u64)(adisc_pld->high_port_name) << UNF_SHIFT_32) |
+ ((u64)adisc_pld->low_port_name));
+
+ unf_rport = unf_find_rport(lport, sid, wwpn);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't find RPort by NPort ID(0x%x). Free exchange and send LOGO",
+ lport->port_id, sid);
+
+ unf_cm_free_xchg(lport, xchg);
+ (void)unf_send_logo_by_did(lport, sid);
+
+ return ret;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MINOR,
+ "[info]Port(0x%x) get exist RPort(0x%x) when receive ADISC with S_ID(0x%x)",
+ lport->port_id, unf_rport->nport_id, sid);
+
+ unf_analysis_adisc_pld(lport, unf_rport, adisc_pld);
+
+ /* State: READY */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
+ if (unf_rport->rp_state == UNF_RPORT_ST_READY) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) find RPort(0x%x) state is READY when receiving ADISC",
+ lport->port_id, sid);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ /* Return ACC directly */
+ ret = unf_send_adisc_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send ADISC ACC failed", lport->port_id);
+
+ return ret;
+ }
+
+ /* Report Down/Up event to SCSI */
+ unf_update_lport_state_by_linkup_event(lport, unf_rport, unf_rport->options);
+ }
+ /* State: Closing */
+ else if ((unf_rport->rp_state == UNF_RPORT_ST_CLOSING) &&
+ (unf_rport->session)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) find RPort(0x%x) state is 0x%x when receiving ADISC",
+ lport->port_id, sid, unf_rport->rp_state);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ unf_rport = unf_get_safe_rport(lport, unf_rport,
+ UNF_RPORT_REUSE_RECOVER,
+ unf_rport->nport_id);
+ if (unf_rport) {
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
+ unf_rport->nport_id = sid;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ ret = unf_send_adisc_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send ADISC ACC failed",
+ lport->port_id);
+
+ return ret;
+ }
+
+ unf_update_lport_state_by_linkup_event(lport,
+ unf_rport, unf_rport->options);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't find RPort by NPort_ID(0x%x). Free exchange and send LOGO",
+ lport->port_id, sid);
+
+ unf_cm_free_xchg(lport, xchg);
+ (void)unf_send_logo_by_did(lport, sid);
+ }
+ } else if (unf_rport->rp_state == UNF_RPORT_ST_PRLI_WAIT) {
+ /* State: PRLI_WAIT */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) find RPort(0x%x) state is 0x%x when receiving ADISC",
+ lport->port_id, sid, unf_rport->rp_state);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ ret = unf_send_adisc_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send ADISC ACC failed", lport->port_id);
+
+ return ret;
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) find RPort(0x%x) state is 0x%x when receiving ADISC, send LOGO",
+ lport->port_id, sid, unf_rport->rp_state);
+
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ unf_rport_enter_logo(lport, unf_rport);
+ unf_cm_free_xchg(lport, xchg);
+ }
+
+ return ret;
+}
+
+u32 unf_rec_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ struct unf_rport *unf_rport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x) receive REC", lport->port_id);
+
+ /* Send rec acc */
+ ret = unf_send_rec_acc(lport, unf_rport, xchg); /* discard directly */
+
+ return ret;
+}
+
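+/*
+ * RRQ recovers an exchange: the OX_ID is carried in the upper 16 bits of
+ * oxid_rxid and the RX_ID in the lower 16 bits. The referenced I/O
+ * exchange is looked up, its IDs are invalidated and its reference is
+ * dropped before an RRQ ACC is returned.
+ */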
+u32 unf_rrq_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct unf_rrq *rrq = NULL;
+ struct unf_xchg *xchg_reused = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+ u32 unf_sid = 0;
+ ulong flags = 0;
+ struct unf_rjt_info rjt_info = {0};
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_RRQ);
+ rrq = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->rrq;
+ ox_id = (u16)(rrq->oxid_rxid >> UNF_SHIFT_16);
+ rx_id = (u16)(rrq->oxid_rxid);
+ unf_sid = rrq->sid & UNF_NPORTID_MASK;
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_KEVENT,
+ "[warn]Receive RRQ. Port(0x%x)<---RPort(0x%x) sfsXchg(0x%p) OX_ID(0x%x,0x%x) RX_ID(0x%x)",
+ lport->port_id, sid, xchg, ox_id, xchg->oxid, rx_id);
+
+ /* Get R_Port */
+ unf_rport = unf_get_rport_by_nport_id(lport, sid);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) receive RRQ but has no RPort(0x%x)",
+ lport->port_id, sid);
+
+ /* NOTE: send LOGO */
+ unf_send_logo_by_did(lport, unf_sid);
+
+ unf_cm_free_xchg(lport, xchg);
+ return ret;
+ }
+
+ /* Get Target (Abort I/O) exchange context */
+ xchg_reused = unf_cm_lookup_xchg_by_id(lport, ox_id, unf_sid); /* unf_find_xchg_by_ox_id */
+ if (!xchg_reused) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) cannot find exchange with OX_ID(0x%x) RX_ID(0x%x) S_ID(0x%x)",
+ lport->port_id, ox_id, rx_id, unf_sid);
+
+ rjt_info.els_cmnd_code = ELS_RRQ;
+ rjt_info.reason_code = FCXLS_BA_RJT_LOGICAL_ERROR | FCXLS_LS_RJT_INVALID_OXID_RXID;
+
+ /* NOTE: send ELS RJT */
+ if (unf_send_els_rjt_by_rport(lport, xchg, unf_rport, &rjt_info) != RETURN_OK) {
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+ }
+
+ hot_pool = xchg_reused->hot_pool;
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Port(0x%x) OxId(0x%x) Rxid(0x%x) Sid(0x%x) Hot Pool is NULL.",
+ lport->port_id, ox_id, rx_id, unf_sid);
+
+ return ret;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+ xchg_reused->oxid = INVALID_VALUE16;
+ xchg_reused->rxid = INVALID_VALUE16;
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+
+ /* NOTE: release I/O exchange context */
+ unf_xchg_ref_dec(xchg_reused, SFS_RESPONSE);
+
+ /* Send RRQ ACC */
+ ret = unf_send_rrq_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can not send RRQ rsp. Xchg(0x%p) Ioxchg(0x%p) OX_RX_ID(0x%x 0x%x) S_ID(0x%x)",
+ lport->port_id, xchg, xchg_reused, ox_id, rx_id, unf_sid);
+
+ unf_cm_free_xchg(lport, xchg);
+ }
+
+ return ret;
+}
+
+u32 unf_logo_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct unf_rport *logo_rport = NULL;
+ struct unf_logo *logo = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 nport_id = 0;
+ struct unf_rjt_info rjt_info = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_LOGO);
+ logo = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->logo;
+ nport_id = logo->payload.nport_id & UNF_NPORTID_MASK;
+
+ if (sid < UNF_FC_FID_DOM_MGR) {
+ /* R_Port is not fabric port */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]LOGIN: Receive LOGO. Port(0x%x)<---RPort(0x%x) NPort_ID(0x%x) OXID(0x%x)",
+ lport->port_id, sid, nport_id, xchg->oxid);
+ }
+
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, &logo->payload,
+ sizeof(struct unf_logo_payload));
+
+ /*
+	 * 1. If the S_ID differs from the NPort_ID in the LOGO payload:
+	 * immediately link down the R_Port found by that NPort_ID
+ */
+ if (sid != nport_id) {
+ logo_rport = unf_get_rport_by_nport_id(lport, nport_id);
+ if (logo_rport)
+ unf_rport_immediate_link_down(lport, logo_rport);
+ }
+
+ /* 2. Get R_Port by S_ID (frame header) */
+ unf_rport = unf_get_rport_by_nport_id(lport, sid);
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_INIT, sid); /* INIT */
+ if (!unf_rport) {
+ memset(&rjt_info, 0, sizeof(struct unf_rjt_info));
+ rjt_info.els_cmnd_code = ELS_LOGO;
+ rjt_info.reason_code = UNF_LS_RJT_LOGICAL_ERROR;
+ rjt_info.reason_explanation = UNF_LS_RJT_NO_ADDITIONAL_INFO;
+ ret = unf_send_els_rjt_by_did(lport, xchg, sid, &rjt_info);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) receive LOGO but has no RPort(0x%x)",
+ lport->port_id, sid);
+
+ return ret;
+ }
+
+ /*
+ * 3. I/O resource release: set ABORT tag
+ * *
+	 * Called from: R_Port removal; LOGO received; PLOGI received; PLOGI ACC sent
+ */
+ unf_cm_xchg_mgr_abort_io_by_id(lport, unf_rport, sid, lport->nport_id, INI_IO_STATE_LOGO);
+
+ /* 4. Send LOGO ACC */
+ ret = unf_send_logo_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+			     UNF_WARN, "[warn]Port(0x%x) send LOGO ACC failed", lport->port_id);
+ }
+ /*
+ * 5. Do same operations with RCVD LOGO/PRLO & Send LOGO:
+ * retry (LOGIN or LOGO) or link down immediately
+ */
+ unf_process_rport_after_logo(lport, unf_rport);
+
+ return ret;
+}
+
+u32 unf_prlo_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct unf_prli_prlo *prlo = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Receive PRLO. Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
+ lport->port_id, sid, xchg->oxid);
+
+ UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_LOGO);
+
+ /* Get (new) R_Port */
+ unf_rport = unf_get_rport_by_nport_id(lport, sid);
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_INIT, sid); /* INIT */
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) receive PRLO but has no RPort",
+ lport->port_id);
+
+ /* Discard directly */
+ unf_cm_free_xchg(lport, xchg);
+ return ret;
+ }
+
+ prlo = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->prlo;
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, &prlo->payload,
+ sizeof(struct unf_prli_payload));
+
+ /* Send PRLO ACC to remote */
+ ret = unf_send_prlo_acc(lport, unf_rport, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send PRLO ACC failed", lport->port_id);
+ }
+
+ /* Enter Enhanced action after LOGO (retry LOGIN or LOGO) */
+ unf_process_rport_after_logo(lport, unf_rport);
+
+ return ret;
+}
+
+static void unf_fill_echo_acc_pld(struct unf_echo *echo_acc)
+{
+ struct unf_echo_payload *echo_acc_pld = NULL;
+
+ FC_CHECK_RETURN_VOID(echo_acc);
+
+ echo_acc_pld = echo_acc->echo_pld;
+ FC_CHECK_RETURN_VOID(echo_acc_pld);
+
+ echo_acc_pld->cmnd = UNF_ELS_CMND_ACC;
+}
+
+static void unf_echo_acc_callback(struct unf_xchg *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_lport = xchg->lport;
+
+ FC_CHECK_RETURN_VOID(unf_lport);
+ if (xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo_acc.phy_echo_addr) {
+ pci_unmap_single(unf_lport->low_level_func.dev,
+ xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo_acc
+ .phy_echo_addr,
+ UNF_ECHO_PAYLOAD_LEN, DMA_BIDIRECTIONAL);
+ xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo_acc.phy_echo_addr = 0;
+ }
+}
+
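+/*
+ * Build the ECHO ACC in place: the echo payload is DMA-mapped before the
+ * reply is sent and unmapped either in the outbound callback
+ * (unf_echo_acc_callback) or immediately if the send fails.
+ */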
+static u32 unf_send_echo_acc(struct unf_lport *lport, u32 did,
+ struct unf_xchg *xchg)
+{
+ struct unf_echo *echo_acc = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+ struct unf_frame_pkg pkg;
+ dma_addr_t phy_echo_acc_addr;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+ xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_ECHO);
+ xchg->did = did;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+
+ xchg->callback = NULL;
+ xchg->ob_callback = unf_echo_acc_callback;
+
+ unf_fill_package(&pkg, xchg, xchg->rport);
+ pkg.type = UNF_PKG_ELS_REPLY;
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ echo_acc = &fc_entry->echo_acc;
+ unf_fill_echo_acc_pld(echo_acc);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+ phy_echo_acc_addr = pci_map_single(lport->low_level_func.dev,
+ echo_acc->echo_pld,
+ UNF_ECHO_PAYLOAD_LEN,
+ DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(lport->low_level_func.dev, phy_echo_acc_addr)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Port(0x%x) pci map err",
+ lport->port_id);
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+ echo_acc->phy_echo_addr = phy_echo_acc_addr;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK) {
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+ pci_unmap_single(lport->low_level_func.dev,
+ phy_echo_acc_addr, UNF_ECHO_PAYLOAD_LEN,
+ DMA_BIDIRECTIONAL);
+ echo_acc->phy_echo_addr = 0;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]ECHO ACC send %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ did, ox_id, rx_id);
+
+ return ret;
+}
+
+u32 unf_echo_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
+{
+ struct unf_echo_payload *echo_pld = NULL;
+ struct unf_rport *unf_rport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 data_len = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ data_len = xchg->fcp_sfs_union.sfs_entry.cur_offset;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Receive ECHO. Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x))",
+ lport->port_id, sid, xchg->oxid);
+
+ UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_ECHO);
+ echo_pld = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo.echo_pld;
+ UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, echo_pld, data_len);
+ unf_rport = unf_get_rport_by_nport_id(lport, sid);
+ xchg->rport = unf_rport;
+
+ ret = unf_send_echo_acc(lport, sid, xchg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Port(0x%x) send ECHO ACC failed", lport->port_id);
+ }
+
+ return ret;
+}
+
+static void unf_login_with_rport_in_n2n(struct unf_lport *lport,
+ u64 remote_port_name,
+ u64 remote_node_name)
+{
+ /*
+	 * Called in P2P mode when:
+	 * 1. a FLOGI ACC is received, or
+	 * 2. a FLOGI ACC has been sent successfully.
+	 * *
+	 * Compare WWNs: the larger one is the master, which then sends PLOGI
+ */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = NULL;
+ ulong lport_flag = 0;
+ ulong rport_flag = 0;
+ u64 port_name = 0;
+ u64 node_name = 0;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ spin_lock_irqsave(&unf_lport->lport_state_lock, lport_flag);
+ unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_READY); /* LPort: FLOGI_WAIT --> READY */
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, lport_flag);
+
+ port_name = remote_port_name;
+ node_name = remote_node_name;
+
+ if (unf_lport->port_name > port_name) {
+ /* Master case: send PLOGI */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x)'s WWN(0x%llx) is larger than rport(0x%llx), should be master",
+ unf_lport->port_id, unf_lport->port_name, port_name);
+
+ /* Update N_Port_ID now: 0xEF */
+ unf_lport->nport_id = UNF_P2P_LOCAL_NPORT_ID;
+
+ unf_rport = unf_find_valid_rport(lport, port_name, UNF_P2P_REMOTE_NPORT_ID);
+ unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY,
+ UNF_P2P_REMOTE_NPORT_ID);
+ if (unf_rport) {
+ unf_rport->node_name = node_name;
+ unf_rport->port_name = port_name;
+ unf_rport->nport_id = UNF_P2P_REMOTE_NPORT_ID; /* 0xD6 */
+ unf_rport->local_nport_id = UNF_P2P_LOCAL_NPORT_ID; /* 0xEF */
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, rport_flag);
+ if (unf_rport->rp_state == UNF_RPORT_ST_PLOGI_WAIT ||
+ unf_rport->rp_state == UNF_RPORT_ST_PRLI_WAIT ||
+ unf_rport->rp_state == UNF_RPORT_ST_READY) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+					     "[info]LOGIN: Port(0x%x) Rport(0x%x) has sent PLOGI or PRLI with state(0x%x)",
+ unf_lport->port_id,
+ unf_rport->nport_id,
+ unf_rport->rp_state);
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock,
+ rport_flag);
+ return;
+ }
+			/* Update R_Port state: PLOGI_WAIT */
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, rport_flag);
+
+ /* P2P with master: Start to Send PLOGI */
+ ret = unf_send_plogi(unf_lport, unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) with WWN(0x%llx) send PLOGI to(0x%llx) failed",
+ unf_lport->port_id,
+ unf_lport->port_name, port_name);
+
+ unf_rport_error_recovery(unf_rport);
+ }
+ } else {
+ /* Get/Alloc R_Port failed */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) with WWN(0x%llx) allocate RPort(ID:0x%x,WWPN:0x%llx) failed",
+ unf_lport->port_id, unf_lport->port_name,
+ UNF_P2P_REMOTE_NPORT_ID, port_name);
+ }
+ } else {
+		/* Slave case: L_Port's Port Name is smaller than the R_Port's */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) with WWN(0x%llx) is smaller than rport(0x%llx), do nothing",
+ unf_lport->port_id, unf_lport->port_name, port_name);
+ }
+}
+
+void unf_lport_enter_mns_plogi(struct unf_lport *lport)
+{
+ /* Fabric or Public Loop Mode: Login with Name server */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = NULL;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_plogi_payload *plogi_pld = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_xchg *xchg = NULL;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ /* Get (safe) R_Port */
+ unf_rport = unf_rport_get_free_and_init(lport, UNF_PORT_TYPE_FC, UNF_FC_FID_MGMT_SERV);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) allocate RPort failed", lport->port_id);
+ return;
+ }
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = UNF_FC_FID_MGMT_SERV; /* 0xfffffa */
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+
+ /* Get & Set new free exchange */
+ xchg = unf_cm_get_free_xchg(lport, UNF_XCHG_TYPE_SFS);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange can't be NULL for PLOGI", lport->port_id);
+
+ return;
+ }
+
+ xchg->cmnd_code = ELS_PLOGI; /* PLOGI */
+ xchg->did = unf_rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = unf_lport;
+ xchg->rport = unf_rport;
+
+ /* Set callback function */
+	xchg->callback = NULL; /* for rcvd plogi acc/rjt processing */
+	xchg->ob_callback = NULL; /* for send plogi failed processing */
+
+ unf_fill_package(&pkg, xchg, unf_rport);
+ pkg.type = UNF_PKG_ELS_REQ;
+ /* Fill PLOGI payload */
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return;
+ }
+
+ plogi_pld = &fc_entry->plogi.payload;
+ memset(plogi_pld, 0, sizeof(struct unf_plogi_payload));
+ unf_fill_plogi_pld(plogi_pld, lport);
+
+ /* Start to Send PLOGI command */
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+}
+
+static void unf_register_to_switch(struct unf_lport *lport)
+{
+	/* Register to Fabric, used for: FABRIC & PUBLIC LOOP */
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ unf_lport_state_ma(lport, UNF_EVENT_LPORT_REMOTE_ACC);
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ /* Login with Name server: PLOGI */
+ unf_lport_enter_sns_plogi(lport);
+
+ unf_lport_enter_mns_plogi(lport);
+
+ /* Physical Port */
+ if (lport->root_lport == lport &&
+ lport->act_topo == UNF_ACT_TOP_P2P_FABRIC) {
+ unf_linkup_all_vports(lport);
+ }
+}
+
+void unf_fdisc_ob_callback(struct unf_xchg *xchg)
+{
+ /* Do recovery */
+ struct unf_lport *unf_lport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ unf_lport = xchg->lport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: FDISC send failed");
+
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ /* Do L_Port error recovery */
+ unf_lport_error_recovery(unf_lport);
+}
+
+void unf_fdisc_callback(void *lport, void *rport, void *exch)
+{
+ /* Register to Name Server or Do recovery */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg *xchg = NULL;
+ struct unf_flogi_fdisc_payload *fdisc_pld = NULL;
+ ulong flag = 0;
+ u32 cmd = 0;
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_rport = (struct unf_rport *)rport;
+ xchg = (struct unf_xchg *)exch;
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(exch);
+ FC_CHECK_RETURN_VOID(xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr);
+ fdisc_pld = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->fdisc_acc.fdisc_payload;
+ if (xchg->byte_orders & UNF_BIT_2)
+ unf_big_end_to_cpu((u8 *)fdisc_pld, sizeof(struct unf_flogi_fdisc_payload));
+
+ cmd = fdisc_pld->cmnd;
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: FDISC response is (0x%x). Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
+ cmd, unf_lport->port_id, unf_rport->nport_id, xchg->oxid);
+ unf_rport = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_FLOGI);
+ unf_rport = unf_get_safe_rport(unf_lport, unf_rport,
+ UNF_RPORT_REUSE_ONLY, UNF_FC_FID_FLOGI);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) has no Rport", unf_lport->port_id);
+ return;
+ }
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = UNF_FC_FID_FLOGI;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ if ((cmd & UNF_ELS_CMND_HIGH_MASK) == UNF_ELS_CMND_ACC) {
+ /* Case for ACC */
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ if (unf_lport->states != UNF_LPORT_ST_FLOGI_WAIT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) receive Flogi/Fdisc ACC in state(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, unf_lport->states);
+
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+ return;
+ }
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ unf_lport_update_nport_id(unf_lport, xchg->sid);
+ unf_lport_update_time_params(unf_lport, fdisc_pld);
+ unf_register_to_switch(unf_lport);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: FDISC response is (0x%x). Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
+ cmd, unf_lport->port_id, unf_rport->nport_id, xchg->oxid);
+
+ /* Case for RJT: Do L_Port recovery */
+ unf_lport_error_recovery(unf_lport);
+ }
+}
+
+void unf_flogi_ob_callback(struct unf_xchg *xchg)
+{
+ /* Send FLOGI failed & Do L_Port recovery */
+ struct unf_lport *unf_lport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ /* Get L_port from exchange context */
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ unf_lport = xchg->lport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send FLOGI failed",
+ unf_lport->port_id);
+
+ /* Check L_Port state */
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ if (unf_lport->states != UNF_LPORT_ST_FLOGI_WAIT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send FLOGI failed with state(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, unf_lport->states);
+
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+ return;
+ }
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ /* Do L_Port error recovery */
+ unf_lport_error_recovery(unf_lport);
+}
+
+static void unf_lport_update_nport_id(struct unf_lport *lport, u32 nport_id)
+{
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ lport->nport_id = nport_id;
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+}
+
+static void
+unf_lport_update_time_params(struct unf_lport *lport,
+ struct unf_flogi_fdisc_payload *flogi_payload)
+{
+ ulong flag = 0;
+ u32 ed_tov = 0;
+ u32 ra_tov = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(flogi_payload);
+
+ ed_tov = flogi_payload->fabric_parms.co_parms.e_d_tov;
+ ra_tov = flogi_payload->fabric_parms.co_parms.r_a_tov;
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+
+ /* FC-FS-3: 21.3.4, 21.3.5 */
+ if (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC ||
+ lport->act_topo == UNF_ACT_TOP_PUBLIC_LOOP) {
+ lport->ed_tov = ed_tov;
+ lport->ra_tov = ra_tov;
+ } else {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) with topo(0x%x) no need to save time parameters",
+ lport->port_id, lport->nport_id, lport->act_topo);
+ }
+
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+}
+
+static void unf_rcv_flogi_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_flogi_fdisc_payload *flogi_pld,
+ u32 nport_id, struct unf_xchg *xchg)
+{
+ /* PLOGI to Name server or remote port */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = rport;
+ struct unf_flogi_fdisc_payload *unf_flogi_pld = flogi_pld;
+ struct unf_fabric_parm *fabric_params = NULL;
+ u64 port_name = 0;
+ u64 node_name = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(flogi_pld);
+
+ /* Check L_Port state: FLOGI_WAIT */
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ if (unf_lport->states != UNF_LPORT_ST_FLOGI_WAIT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[info]Port(0x%x_0x%x) receive FLOGI ACC with state(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, unf_lport->states);
+
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+ return;
+ }
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ fabric_params = &unf_flogi_pld->fabric_parms;
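+	/* Assemble 64-bit WWNN/WWPN from the high/low 32-bit words of the fabric parameters */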
+ node_name =
+ (u64)(((u64)(fabric_params->high_node_name) << UNF_SHIFT_32) |
+ ((u64)(fabric_params->low_node_name)));
+ port_name =
+ (u64)(((u64)(fabric_params->high_port_name) << UNF_SHIFT_32) |
+ ((u64)(fabric_params->low_port_name)));
+
+	/* FLOGI ACC payload class 3 service priority value */
+ if (unf_lport->root_lport == unf_lport && unf_lport->qos_cs_ctrl &&
+ fabric_params->cl_parms[ARRAY_INDEX_2].priority == UNF_PRIORITY_ENABLE)
+ unf_lport->priority = (bool)UNF_PRIORITY_ENABLE;
+ else
+ unf_lport->priority = (bool)UNF_PRIORITY_DISABLE;
+
+ /* Save Flogi parameters */
+ unf_save_fabric_params(unf_lport, unf_rport, fabric_params);
+
+ if (UNF_CHECK_NPORT_FPORT_BIT(unf_flogi_pld) == UNF_N_PORT) {
+ /* P2P Mode */
+ unf_lport_update_topo(unf_lport, UNF_ACT_TOP_P2P_DIRECT);
+ unf_login_with_rport_in_n2n(unf_lport, port_name, node_name);
+ } else {
+		/* for: UNF_ACT_TOP_PUBLIC_LOOP, UNF_ACT_TOP_P2P_FABRIC
+		 * or UNF_TOP_P2P_MASK
+		 */
+ if (unf_lport->act_topo != UNF_ACT_TOP_PUBLIC_LOOP)
+ unf_lport_update_topo(unf_lport, UNF_ACT_TOP_P2P_FABRIC);
+
+ unf_lport_update_nport_id(unf_lport, nport_id);
+ unf_lport_update_time_params(unf_lport, unf_flogi_pld);
+
+ /* Save process both for Public loop & Fabric */
+ unf_register_to_switch(unf_lport);
+ }
+}
+
+static void unf_flogi_acc_com_process(struct unf_xchg *xchg)
+{
+ /* Maybe within interrupt or thread context */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_flogi_fdisc_payload *flogi_pld = NULL;
+ u32 nport_id = 0;
+ u32 cmnd = 0;
+ ulong flags = 0;
+ struct unf_xchg *unf_xchg = xchg;
+
+ FC_CHECK_RETURN_VOID(unf_xchg);
+ FC_CHECK_RETURN_VOID(unf_xchg->lport);
+
+ unf_lport = unf_xchg->lport;
+ unf_rport = unf_xchg->rport;
+ flogi_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->flogi_acc.flogi_payload;
+ cmnd = flogi_pld->cmnd;
+
+ /* Get N_Port_ID & R_Port */
+ /* Others: 0xFFFFFE */
+ unf_rport = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_FLOGI);
+ nport_id = UNF_FC_FID_FLOGI;
+
+ /* Get Safe R_Port: reuse only */
+ unf_rport = unf_get_safe_rport(unf_lport, unf_rport, UNF_RPORT_REUSE_ONLY, nport_id);
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can not allocate new Rport", unf_lport->port_id);
+
+ return;
+ }
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
+ unf_rport->nport_id = UNF_FC_FID_FLOGI;
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ /* Process FLOGI ACC or RJT */
+ if ((cmnd & UNF_ELS_CMND_HIGH_MASK) == UNF_ELS_CMND_ACC) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: FLOGI response is(0x%x). Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
+ cmnd, unf_lport->port_id, unf_rport->nport_id, unf_xchg->oxid);
+
+ /* Case for ACC */
+ unf_rcv_flogi_acc(unf_lport, unf_rport, flogi_pld, unf_xchg->sid, unf_xchg);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: FLOGI response is(0x%x). Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
+ cmnd, unf_lport->port_id, unf_rport->nport_id,
+ unf_xchg->oxid);
+
+ /* Case for RJT: do L_Port error recovery */
+ unf_lport_error_recovery(unf_lport);
+ }
+}
+
+static int unf_rcv_flogi_acc_async_callback(void *argc_in, void *argc_out)
+{
+ struct unf_xchg *xchg = (struct unf_xchg *)argc_in;
+
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ unf_flogi_acc_com_process(xchg);
+
+ unf_xchg_ref_dec(xchg, SFS_RESPONSE);
+
+ return RETURN_OK;
+}
+
+void unf_flogi_callback(void *lport, void *rport, void *xchg)
+{
+ /* Callback function for FLOGI ACC or RJT */
+ struct unf_lport *unf_lport = (struct unf_lport *)lport;
+ struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
+ struct unf_flogi_fdisc_payload *flogi_pld = NULL;
+ bool bbscn_enabled = false;
+ enum unf_act_topo act_topo = UNF_ACT_TOP_UNKNOWN;
+ bool switch2thread = false;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+ FC_CHECK_RETURN_VOID(unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr);
+
+ unf_xchg->lport = lport;
+ flogi_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->flogi_acc.flogi_payload;
+
+ if (unf_xchg->byte_orders & UNF_BIT_2)
+ unf_big_end_to_cpu((u8 *)flogi_pld, sizeof(struct unf_flogi_fdisc_payload));
+
+ if (unf_lport->act_topo != UNF_ACT_TOP_PUBLIC_LOOP &&
+ (UNF_CHECK_NPORT_FPORT_BIT(flogi_pld) == UNF_F_PORT))
+ /* Get Top Mode (P2P_F) --->>> used for BBSCN */
+ act_topo = UNF_ACT_TOP_P2P_FABRIC;
+
+ bbscn_enabled =
+ unf_check_bbscn_is_enabled((u8)unf_lport->low_level_func.lport_cfg_items.bbscn,
+ (u8)UNF_GET_BB_SC_N_FROM_PARAMS(&flogi_pld->fabric_parms));
+ if (act_topo == UNF_ACT_TOP_P2P_FABRIC && bbscn_enabled) {
+ /* BBSCN Enable or not --->>> used for Context change */
+ unf_lport->bbscn_support = true;
+ switch2thread = true;
+ }
+
+ if (switch2thread && unf_lport->root_lport == unf_lport) {
+ /* Wait for LR done sync: for Root Port */
+ (void)unf_irq_process_switch2thread(unf_lport, unf_xchg,
+ unf_rcv_flogi_acc_async_callback);
+ } else {
+ /* Process FLOGI response directly */
+ unf_flogi_acc_com_process(unf_xchg);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ALL,
+ "[info]Port(0x%x) process FLOGI response: switch(%d) to thread done",
+ unf_lport->port_id, switch2thread);
+}
+
+void unf_plogi_ob_callback(struct unf_xchg *xchg)
+{
+ /* Do L_Port or R_Port recovery */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ unf_lport = xchg->lport;
+ unf_rport = xchg->rport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+
+ FC_CHECK_RETURN_VOID(unf_lport);
+ FC_CHECK_RETURN_VOID(unf_rport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) send PLOGI(0x%x_0x%x) to RPort(%p:0x%x_0x%x) failed",
+ unf_lport->port_id, unf_lport->nport_id, xchg->oxid,
+ xchg->rxid, unf_rport, unf_rport->rport_index,
+ unf_rport->nport_id);
+
+ /* Start to recovery */
+ if (unf_rport->nport_id > UNF_FC_FID_DOM_MGR) {
+ /* with Name server: R_Port is fabric --->>> L_Port error
+ * recovery
+ */
+ unf_lport_error_recovery(unf_lport);
+ } else {
+ /* R_Port is not fabric --->>> R_Port error recovery */
+ unf_rport_error_recovery(unf_rport);
+ }
+}
+
+void unf_rcv_plogi_acc(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_lgn_parm *login_parms)
+{
+ /* PLOGI ACC: PRLI(non fabric) or RFT_ID(fabric) */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = rport;
+ struct unf_lgn_parm *unf_login_parms = login_parms;
+ u64 node_name = 0;
+ u64 port_name = 0;
+ ulong flag = 0;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(login_parms);
+
+ node_name = (u64)(((u64)(unf_login_parms->high_node_name) << UNF_SHIFT_32) |
+ ((u64)(unf_login_parms->low_node_name)));
+ port_name = (u64)(((u64)(unf_login_parms->high_port_name) << UNF_SHIFT_32) |
+ ((u64)(unf_login_parms->low_port_name)));
+
+ /* ACC & Case for: R_Port is fabric (RFT_ID) */
+ if (unf_rport->nport_id >= UNF_FC_FID_DOM_MGR) {
+ /* Check L_Port state */
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ if (unf_lport->states != UNF_LPORT_ST_PLOGI_WAIT) {
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) receive PLOGI ACC with error state(0x%x)",
+ lport->port_id, unf_lport->states);
+
+ return;
+ }
+ unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_REMOTE_ACC);
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ /* PLOGI parameters save */
+ unf_save_plogi_params(unf_lport, unf_rport, unf_login_parms, ELS_ACC);
+
+ /* Update R_Port WWPN & WWNN */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->node_name = node_name;
+ unf_rport->port_name = port_name;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Start to Send RFT_ID */
+ ret = unf_send_rft_id(unf_lport, unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send RFT_ID failed",
+ lport->port_id);
+
+ unf_lport_error_recovery(unf_lport);
+ }
+ } else {
+ /* ACC & Case for: R_Port is not fabric */
+ if (unf_rport->options == UNF_PORT_MODE_UNKNOWN &&
+ unf_rport->port_name != INVALID_WWPN)
+ unf_rport->options = unf_get_port_feature(port_name);
+
+ /* Set Port Feature with BOTH: cancel */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->node_name = node_name;
+ unf_rport->port_name = port_name;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]LOGIN: Port(0x%x)<---LS_ACC(DID:0x%x SID:0x%x) for PLOGI ACC with RPort state(0x%x) NodeName(0x%llx) E_D_TOV(%u)",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_rport->nport_id, unf_rport->rp_state,
+ unf_rport->node_name, unf_rport->ed_tov);
+
+ if (unf_lport->act_topo == UNF_ACT_TOP_PRIVATE_LOOP &&
+ (unf_rport->rp_state == UNF_RPORT_ST_PRLI_WAIT ||
+ unf_rport->rp_state == UNF_RPORT_ST_READY)) {
+ /* Do nothing, return directly */
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+ return;
+ }
+
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PRLI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* PLOGI parameters save */
+ unf_save_plogi_params(unf_lport, unf_rport, unf_login_parms, ELS_ACC);
+
+		/*
+		 * Decide whether sending PRLI needs to be delayed.
+		 * Used for: L_Port with INI mode & R_Port is not Fabric
+		 */
+ unf_check_rport_need_delay_prli(unf_lport, unf_rport, unf_rport->options);
+
+		/* Do not care: only used when the L_Port is TGT mode only
+		 * or the R_Port is INI mode only
+		 */
+ unf_schedule_open_work(unf_lport, unf_rport);
+ }
+}
+
+void unf_plogi_acc_com_process(struct unf_xchg *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
+ struct unf_plogi_payload *plogi_pld = NULL;
+ struct unf_lgn_parm *login_parms = NULL;
+ ulong flag = 0;
+ u64 port_name = 0;
+ u32 rport_nport_id = 0;
+ u32 cmnd = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(unf_xchg);
+ FC_CHECK_RETURN_VOID(unf_xchg->lport);
+ FC_CHECK_RETURN_VOID(unf_xchg->rport);
+
+ unf_lport = unf_xchg->lport;
+ unf_rport = unf_xchg->rport;
+ rport_nport_id = unf_rport->nport_id;
+ plogi_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->plogi_acc.payload;
+ login_parms = &plogi_pld->stparms;
+ cmnd = (plogi_pld->cmnd);
+
+ if (UNF_ELS_CMND_ACC == (cmnd & UNF_ELS_CMND_HIGH_MASK)) {
+ /* Case for PLOGI ACC: Go to next stage */
+ port_name =
+ (u64)(((u64)(login_parms->high_port_name) << UNF_SHIFT_32) |
+ ((u64)(login_parms->low_port_name)));
+
+		/* Get (new) R_Port: 0xfffffc has the same WWN as 0xfffcxx */
+ unf_rport = unf_find_rport(unf_lport, rport_nport_id, port_name);
+ unf_rport = unf_get_safe_rport(unf_lport, unf_rport,
+ UNF_RPORT_REUSE_ONLY, rport_nport_id);
+ if (unlikely(!unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) alloc new RPort with wwpn(0x%llx) failed",
+ unf_lport->port_id, unf_lport->nport_id, port_name);
+ return;
+ }
+
+ /* PLOGI parameters check */
+ ret = unf_check_plogi_params(unf_lport, unf_rport, login_parms);
+ if (ret != RETURN_OK)
+ return;
+
+ /* Update R_Port state */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->nport_id = rport_nport_id;
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Start to process PLOGI ACC */
+ unf_rcv_plogi_acc(unf_lport, unf_rport, login_parms);
+ } else {
+ /* Case for PLOGI RJT: L_Port or R_Port recovery */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x)<---RPort(0x%p) with LS_RJT(DID:0x%x SID:0x%x) for PLOGI",
+ unf_lport->port_id, unf_rport, unf_lport->nport_id,
+ unf_rport->nport_id);
+
+ if (unf_rport->nport_id >= UNF_FC_FID_DOM_MGR)
+ unf_lport_error_recovery(unf_lport);
+ else
+ unf_rport_error_recovery(unf_rport);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: PLOGI response(0x%x). Port(0x%x_0x%x)<---RPort(0x%x_0x%p) wwpn(0x%llx) OX_ID(0x%x)",
+ cmnd, unf_lport->port_id, unf_lport->nport_id, unf_rport->nport_id,
+ unf_rport, port_name, unf_xchg->oxid);
+}
+
+static int unf_rcv_plogi_acc_async_callback(void *argc_in, void *argc_out)
+{
+ struct unf_xchg *xchg = (struct unf_xchg *)argc_in;
+
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ unf_plogi_acc_com_process(xchg);
+
+ unf_xchg_ref_dec(xchg, SFS_RESPONSE);
+
+ return RETURN_OK;
+}
+
+void unf_plogi_callback(void *lport, void *rport, void *xchg)
+{
+ struct unf_lport *unf_lport = (struct unf_lport *)lport;
+ struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
+ struct unf_plogi_payload *plogi_pld = NULL;
+ struct unf_lgn_parm *login_parms = NULL;
+ bool bbscn_enabled = false;
+ bool switch2thread = false;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+ FC_CHECK_RETURN_VOID(unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr);
+
+ plogi_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->plogi_acc.payload;
+ login_parms = &plogi_pld->stparms;
+ unf_xchg->lport = lport;
+
+ if (unf_xchg->byte_orders & UNF_BIT_2)
+ unf_big_end_to_cpu((u8 *)plogi_pld, sizeof(struct unf_plogi_payload));
+
+ bbscn_enabled =
+ unf_check_bbscn_is_enabled((u8)unf_lport->low_level_func.lport_cfg_items.bbscn,
+ (u8)UNF_GET_BB_SC_N_FROM_PARAMS(login_parms));
+ if ((bbscn_enabled) &&
+ unf_lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ switch2thread = true;
+ unf_lport->bbscn_support = true;
+ }
+
+ if (switch2thread && unf_lport->root_lport == unf_lport) {
+ /* Wait for LR done sync: just for ROOT Port */
+ (void)unf_irq_process_switch2thread(unf_lport, unf_xchg,
+ unf_rcv_plogi_acc_async_callback);
+ } else {
+ unf_plogi_acc_com_process(unf_xchg);
+ }
+}
+
+static void unf_logo_ob_callback(struct unf_xchg *xchg)
+{
+ struct unf_lport *lport = NULL;
+ struct unf_rport *rport = NULL;
+ struct unf_rport *old_rport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ u32 nport_id = 0;
+ u32 logo_retry = 0;
+ u32 max_frame_size = 0;
+ u64 port_name = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+ unf_xchg = xchg;
+ old_rport = unf_xchg->rport;
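+	/* Save the old R_Port's retry count, frame size and WWPN before it enters closing */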
+ logo_retry = old_rport->logo_retries;
+ max_frame_size = old_rport->max_frame_size;
+ port_name = old_rport->port_name;
+ unf_rport_enter_closing(old_rport);
+
+ lport = unf_xchg->lport;
+ if (unf_is_lport_valid(lport) != RETURN_OK)
+ return;
+
+ /* Get R_Port by exchange info: Init state */
+ nport_id = unf_xchg->did;
+ rport = unf_get_rport_by_nport_id(lport, nport_id);
+ rport = unf_get_safe_rport(lport, rport, UNF_RPORT_REUSE_INIT, nport_id);
+ if (!rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) cannot allocate RPort", lport->port_id);
+ return;
+ }
+
+ rport->logo_retries = logo_retry;
+ rport->max_frame_size = max_frame_size;
+ rport->port_name = port_name;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[info]LOGIN: Port(0x%x) received LOGO RSP timeout topo(0x%x) retries(%u)",
+ lport->port_id, lport->act_topo, rport->logo_retries);
+
+ /* RCVD LOGO/PRLO & SEND LOGO: the same process */
+ if (rport->logo_retries < UNF_MAX_RETRY_COUNT) {
+ /* <: retry (LOGIN or LOGO) if necessary */
+ unf_process_rport_after_logo(lport, rport);
+ } else {
+ /* >=: Link down */
+ unf_rport_immediate_link_down(lport, rport);
+ }
+}
+
+static void unf_logo_callback(void *lport, void *rport, void *xchg)
+{
+ /* RCVD LOGO ACC/RJT: retry(LOGIN/LOGO) or link down immediately */
+ struct unf_lport *unf_lport = (struct unf_lport *)lport;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_rport *old_rport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_els_rjt *els_acc_rjt = NULL;
+ u32 cmnd = 0;
+ u32 nport_id = 0;
+ u32 logo_retry = 0;
+ u32 max_frame_size = 0;
+ u64 port_name = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_xchg = (struct unf_xchg *)xchg;
+ old_rport = unf_xchg->rport;
+
+ logo_retry = old_rport->logo_retries;
+ max_frame_size = old_rport->max_frame_size;
+ port_name = old_rport->port_name;
+ unf_rport_enter_closing(old_rport);
+
+ if (unf_is_lport_valid(lport) != RETURN_OK)
+ return;
+
+ if (!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr)
+ return;
+
+ /* Get R_Port by exchange info: Init state */
+ els_acc_rjt = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->els_rjt;
+ nport_id = unf_xchg->did;
+ unf_rport = unf_get_rport_by_nport_id(unf_lport, nport_id);
+ unf_rport = unf_get_safe_rport(unf_lport, unf_rport, UNF_RPORT_REUSE_INIT, nport_id);
+
+ if (!unf_rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Port(0x%x) cannot allocate RPort",
+ unf_lport->port_id);
+ return;
+ }
+
+ unf_rport->logo_retries = logo_retry;
+ unf_rport->max_frame_size = max_frame_size;
+ unf_rport->port_name = port_name;
+ cmnd = be32_to_cpu(els_acc_rjt->cmnd);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x) received LOGO RSP(0x%x),topo(0x%x) Port options(0x%x) RPort options(0x%x) retries(%u)",
+ unf_lport->port_id, (cmnd & UNF_ELS_CMND_HIGH_MASK),
+ unf_lport->act_topo, unf_lport->options, unf_rport->options,
+ unf_rport->logo_retries);
+
+ /* RCVD LOGO/PRLO & SEND LOGO: the same process */
+ if (unf_rport->logo_retries < UNF_MAX_RETRY_COUNT) {
+ /* <: retry (LOGIN or LOGO) if necessary */
+ unf_process_rport_after_logo(unf_lport, unf_rport);
+ } else {
+ /* >=: Link down */
+ unf_rport_immediate_link_down(unf_lport, unf_rport);
+ }
+}
+
+void unf_prli_ob_callback(struct unf_xchg *xchg)
+{
+ /* Do R_Port recovery */
+ struct unf_lport *lport = NULL;
+ struct unf_rport *rport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flag);
+ lport = xchg->lport;
+ rport = xchg->rport;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x_0x%x) RPort(0x%x) send PRLI failed and do recovery",
+ lport->port_id, lport->nport_id, rport->nport_id);
+
+ /* Start to do R_Port error recovery */
+ unf_rport_error_recovery(rport);
+}
+
+void unf_prli_callback(void *lport, void *rport, void *xchg)
+{
+ /* RCVD PRLI RSP: ACC or RJT --->>> SCSI Link Up */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_prli_payload *prli_acc_pld = NULL;
+ ulong flag = 0;
+ u32 cmnd = 0;
+ u32 options = 0;
+ u32 fcp_conf = 0;
+ u32 rec_support = 0;
+ u32 task_retry_support = 0;
+ u32 retry_support = 0;
+ u32 tape_support = 0;
+ u32 fc4_type = 0;
+ enum unf_rport_login_state rport_state = UNF_RPORT_ST_INIT;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+ unf_lport = (struct unf_lport *)lport;
+ unf_rport = (struct unf_rport *)rport;
+ unf_xchg = (struct unf_xchg *)xchg;
+
+ if (!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) exchange(%p) entry is NULL",
+ unf_lport->port_id, unf_xchg);
+ return;
+ }
+
+ /* Get PRLI ACC payload */
+ prli_acc_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->prli_acc.payload;
+ if (unf_xchg->byte_orders & UNF_BIT_2) {
+		/* Convert to CPU byte order: INI/TGT mode & confirm info */
+ options = be32_to_cpu(prli_acc_pld->parms[ARRAY_INDEX_3]) &
+ (UNF_FC4_FRAME_PARM_3_TGT | UNF_FC4_FRAME_PARM_3_INI);
+
+ cmnd = be32_to_cpu(prli_acc_pld->cmnd);
+ fcp_conf = be32_to_cpu(prli_acc_pld->parms[ARRAY_INDEX_3]) &
+ UNF_FC4_FRAME_PARM_3_CONF_ALLOW;
+ rec_support = be32_to_cpu(prli_acc_pld->parms[ARRAY_INDEX_3]) &
+ UNF_FC4_FRAME_PARM_3_REC_SUPPORT;
+ task_retry_support = be32_to_cpu(prli_acc_pld->parms[ARRAY_INDEX_3]) &
+ UNF_FC4_FRAME_PARM_3_TASK_RETRY_ID_SUPPORT;
+ retry_support = be32_to_cpu(prli_acc_pld->parms[ARRAY_INDEX_3]) &
+ UNF_FC4_FRAME_PARM_3_RETRY_SUPPORT;
+ fc4_type = be32_to_cpu(prli_acc_pld->parms[ARRAY_INDEX_0]) >>
+ UNF_FC4_TYPE_SHIFT & UNF_FC4_TYPE_MASK;
+ } else {
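+		/* Payload is already in CPU byte order, no swap needed */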
+ options = (prli_acc_pld->parms[ARRAY_INDEX_3]) &
+ (UNF_FC4_FRAME_PARM_3_TGT | UNF_FC4_FRAME_PARM_3_INI);
+
+ cmnd = (prli_acc_pld->cmnd);
+ fcp_conf = prli_acc_pld->parms[ARRAY_INDEX_3] & UNF_FC4_FRAME_PARM_3_CONF_ALLOW;
+ rec_support = prli_acc_pld->parms[ARRAY_INDEX_3] & UNF_FC4_FRAME_PARM_3_REC_SUPPORT;
+ task_retry_support = prli_acc_pld->parms[ARRAY_INDEX_3] &
+ UNF_FC4_FRAME_PARM_3_TASK_RETRY_ID_SUPPORT;
+ retry_support = prli_acc_pld->parms[ARRAY_INDEX_3] &
+ UNF_FC4_FRAME_PARM_3_RETRY_SUPPORT;
+ fc4_type = prli_acc_pld->parms[ARRAY_INDEX_0] >> UNF_FC4_TYPE_SHIFT;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: PRLI RSP: RPort(0x%x) parameter-3(0x%x) option(0x%x) cmd(0x%x) uiRecSupport:%u",
+ unf_rport->nport_id, prli_acc_pld->parms[ARRAY_INDEX_3],
+ options, cmnd, rec_support);
+
+ /* PRLI ACC: R_Port READY & Report R_Port Link Up */
+ if (UNF_ELS_CMND_ACC == (cmnd & UNF_ELS_CMND_HIGH_MASK)) {
+ /* Update R_Port options(INI/TGT/BOTH) */
+ unf_rport->options = options;
+
+ unf_update_port_feature(unf_rport->port_name, unf_rport->options);
+
+ /* NOTE: R_Port only with INI mode, send LOGO */
+ if (unf_rport->options == UNF_PORT_MODE_INI) {
+ /* Update R_Port state: LOGO */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* NOTE: Start to Send LOGO */
+ unf_rport_enter_logo(unf_lport, unf_rport);
+ return;
+ }
+
+ /* About confirm */
+ if (fcp_conf && unf_lport->low_level_func.lport_cfg_items.fcp_conf) {
+ unf_rport->fcp_conf_needed = true;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) FCP config is need for RPort(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_rport->nport_id);
+ }
+
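+		/* Tape support requires REC, task retry ID and retry capabilities together */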
+ tape_support = (rec_support && task_retry_support && retry_support);
+ if (tape_support && unf_lport->low_level_func.lport_cfg_items.tape_support) {
+ unf_rport->tape_support_needed = true;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]Port(0x%x_0x%x) Rec is enabled for RPort(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_rport->nport_id);
+ }
+
+ /* Update R_Port state: READY */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_READY);
+ rport_state = unf_rport->rp_state;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Report R_Port online (Link Up) event to SCSI */
+ if (rport_state == UNF_RPORT_ST_READY) {
+ unf_rport->logo_retries = 0;
+ unf_update_lport_state_by_linkup_event(unf_lport, unf_rport,
+ unf_rport->options);
+ }
+ } else {
+ /* PRLI RJT: Do R_Port error recovery */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Port(0x%x)<---LS_RJT(DID:0x%x SID:0x%x) for PRLI. RPort(0x%p) OX_ID(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id,
+ unf_rport->nport_id, unf_rport, unf_xchg->oxid);
+
+ unf_rport_error_recovery(unf_rport);
+ }
+}
+
+static void unf_rrq_callback(void *lport, void *rport, void *xchg)
+{
+ /* Release I/O */
+ struct unf_lport *unf_lport = NULL;
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_xchg *io_xchg = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_xchg = (struct unf_xchg *)xchg;
+
+ if (!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) exchange(0x%p) SfsEntryPtr is NULL",
+ unf_lport->port_id, unf_xchg);
+ return;
+ }
+
+ io_xchg = (struct unf_xchg *)unf_xchg->io_xchg;
+ if (!io_xchg) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) IO exchange is NULL. RRQ cb sfs xchg(0x%p) tag(0x%x)",
+ unf_lport->port_id, unf_xchg, unf_xchg->hotpooltag);
+ return;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) release IO exch(0x%p) tag(0x%x). RRQ cb sfs xchg(0x%p) tag(0x%x)",
+ unf_lport->port_id, unf_xchg->io_xchg, io_xchg->hotpooltag,
+ unf_xchg, unf_xchg->hotpooltag);
+
+	/* After RRQ succeeds, free the xid */
+ unf_notify_chip_free_xid(io_xchg);
+
+ /* NOTE: release I/O exchange resource */
+ unf_xchg_ref_dec(io_xchg, XCHG_ALLOC);
+}
+
+static void unf_rrq_ob_callback(struct unf_xchg *xchg)
+{
+ /* Release I/O */
+ struct unf_xchg *unf_xchg = NULL;
+ struct unf_xchg *io_xchg = NULL;
+
+ unf_xchg = (struct unf_xchg *)xchg;
+ if (!unf_xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Exchange can't be NULL");
+ return;
+ }
+
+ io_xchg = (struct unf_xchg *)unf_xchg->io_xchg;
+ if (!io_xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]IO exchange can't be NULL with Sfs exch(0x%p) tag(0x%x)",
+ unf_xchg, unf_xchg->hotpooltag);
+ return;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]send RRQ failed: SFS exch(0x%p) tag(0x%x) exch(0x%p) tag(0x%x) OXID_RXID(0x%x_0x%x) SID_DID(0x%x_0x%x)",
+ unf_xchg, unf_xchg->hotpooltag, io_xchg, io_xchg->hotpooltag,
+ io_xchg->oxid, io_xchg->rxid, io_xchg->sid, io_xchg->did);
+
+	/* If RRQ fails or times out, free the xid. */
+ unf_notify_chip_free_xid(io_xchg);
+
+ /* NOTE: Free I/O exchange resource */
+ unf_xchg_ref_dec(io_xchg, XCHG_ALLOC);
+}
+
diff --git a/drivers/scsi/spfc/common/unf_ls.h b/drivers/scsi/spfc/common/unf_ls.h
new file mode 100644
index 000000000000..5fdd9e1a258d
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_ls.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_LS_H
+#define UNF_LS_H
+
+#include "unf_type.h"
+#include "unf_exchg.h"
+#include "unf_rport.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif /* __cplusplus */
+
+u32 unf_send_adisc(struct unf_lport *lport, struct unf_rport *rport);
+u32 unf_send_pdisc(struct unf_lport *lport, struct unf_rport *rport);
+u32 unf_send_flogi(struct unf_lport *lport, struct unf_rport *rport);
+u32 unf_send_fdisc(struct unf_lport *lport, struct unf_rport *rport);
+u32 unf_send_plogi(struct unf_lport *lport, struct unf_rport *rport);
+u32 unf_send_prli(struct unf_lport *lport, struct unf_rport *rport,
+ u32 cmnd_code);
+u32 unf_send_prlo(struct unf_lport *lport, struct unf_rport *rport);
+u32 unf_send_logo(struct unf_lport *lport, struct unf_rport *rport);
+u32 unf_send_logo_by_did(struct unf_lport *lport, u32 did);
+u32 unf_send_echo(struct unf_lport *lport, struct unf_rport *rport, u32 *time);
+u32 unf_send_plogi_rjt_by_did(struct unf_lport *lport, u32 did);
+u32 unf_send_rrq(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg);
+void unf_flogi_ob_callback(struct unf_xchg *xchg);
+void unf_flogi_callback(void *lport, void *rport, void *xchg);
+void unf_fdisc_ob_callback(struct unf_xchg *xchg);
+void unf_fdisc_callback(void *lport, void *rport, void *xchg);
+
+void unf_plogi_ob_callback(struct unf_xchg *xchg);
+void unf_plogi_callback(void *lport, void *rport, void *xchg);
+void unf_prli_ob_callback(struct unf_xchg *xchg);
+void unf_prli_callback(void *lport, void *rport, void *xchg);
+u32 unf_flogi_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_plogi_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_rec_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_prli_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_prlo_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_rscn_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_logo_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_echo_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_pdisc_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_send_pdisc_rjt(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *xchg);
+u32 unf_adisc_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_rrq_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+u32 unf_send_rec(struct unf_lport *lport, struct unf_rport *rport,
+ struct unf_xchg *io_xchg);
+
+u32 unf_low_level_bb_scn(struct unf_lport *lport);
+typedef int (*unf_event_task)(void *arg_in, void *arg_out);
+
+#ifdef __cplusplus
+}
+#endif /* __cplusplus */
+
+#endif /* UNF_LS_H */
diff --git a/drivers/scsi/spfc/common/unf_npiv.c b/drivers/scsi/spfc/common/unf_npiv.c
new file mode 100644
index 000000000000..0d441f1c9e06
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_npiv.c
@@ -0,0 +1,1005 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_npiv.h"
+#include "unf_log.h"
+#include "unf_rport.h"
+#include "unf_exchg.h"
+#include "unf_portman.h"
+#include "unf_npiv_portman.h"
+
+#define UNF_DELETE_VPORT_MAX_WAIT_TIME_MS 60000
+
+u32 unf_init_vport_pool(struct unf_lport *lport)
+{
+ u32 ret = RETURN_OK;
+ u32 i;
+ u16 vport_cnt = 0;
+ struct unf_lport *vport = NULL;
+ struct unf_vport_pool *vport_pool = NULL;
+ u32 vport_pool_size;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, RETURN_ERROR);
+
+ UNF_TOU16_CHECK(vport_cnt, lport->low_level_func.support_max_npiv_num,
+ return RETURN_ERROR);
+ if (vport_cnt == 0) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) do not support NPIV",
+ lport->port_id);
+
+ return RETURN_OK;
+ }
+
+ vport_pool_size = sizeof(struct unf_vport_pool) + sizeof(struct unf_lport *) * vport_cnt;
+ lport->vport_pool = vmalloc(vport_pool_size);
+ if (!lport->vport_pool) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) cannot allocate vport pool",
+ lport->port_id);
+
+ return RETURN_ERROR;
+ }
+ memset(lport->vport_pool, 0, vport_pool_size);
+ vport_pool = lport->vport_pool;
+ vport_pool->vport_pool_count = vport_cnt;
+ vport_pool->vport_pool_completion = NULL;
+ spin_lock_init(&vport_pool->vport_pool_lock);
+ INIT_LIST_HEAD(&vport_pool->list_vport_pool);
+
+ vport_pool->vport_pool_addr =
+ vmalloc((size_t)(vport_cnt * sizeof(struct unf_lport)));
+ if (!vport_pool->vport_pool_addr) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) cannot allocate vport pool address",
+ lport->port_id);
+ vfree(lport->vport_pool);
+ lport->vport_pool = NULL;
+
+ return RETURN_ERROR;
+ }
+
+ memset(vport_pool->vport_pool_addr, 0,
+ vport_cnt * sizeof(struct unf_lport));
+ vport = (struct unf_lport *)vport_pool->vport_pool_addr;
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ for (i = 0; i < vport_cnt; i++) {
+ list_add_tail(&vport->entry_vport, &vport_pool->list_vport_pool);
+ vport++;
+ }
+
+ vport_pool->slab_next_index = 0;
+ vport_pool->slab_total_sum = vport_cnt;
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ return ret;
+}
+
+void unf_free_vport_pool(struct unf_lport *lport)
+{
+ struct unf_vport_pool *vport_pool = NULL;
+ bool wait = false;
+ ulong flag = 0;
+ u32 remain = 0;
+ struct completion vport_pool_completion;
+
+ init_completion(&vport_pool_completion);
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(lport->vport_pool);
+ vport_pool = lport->vport_pool;
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
+
+ if (vport_pool->slab_total_sum != vport_pool->vport_pool_count) {
+ vport_pool->vport_pool_completion = &vport_pool_completion;
+ remain = vport_pool->slab_total_sum - vport_pool->vport_pool_count;
+ wait = true;
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+
+ if (wait) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) begin to wait for vport pool completion remain(0x%x)",
+ lport->port_id, remain);
+
+ wait_for_completion(vport_pool->vport_pool_completion);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) wait for vport pool completion end",
+ lport->port_id);
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
+ vport_pool->vport_pool_completion = NULL;
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+ }
+
+ if (lport->vport_pool->vport_pool_addr) {
+ vfree(lport->vport_pool->vport_pool_addr);
+ lport->vport_pool->vport_pool_addr = NULL;
+ }
+
+ vfree(lport->vport_pool);
+ lport->vport_pool = NULL;
+}
+
+struct unf_lport *unf_get_vport_by_slab_index(struct unf_vport_pool *vport_pool,
+ u16 slab_index)
+{
+ FC_CHECK_RETURN_VALUE(vport_pool, NULL);
+
+ return vport_pool->vport_slab[slab_index];
+}
+
+static inline void unf_vport_pool_slab_set(struct unf_vport_pool *vport_pool,
+ u16 slab_index,
+ struct unf_lport *vport)
+{
+ FC_CHECK_RETURN_VOID(vport_pool);
+
+ vport_pool->vport_slab[slab_index] = vport;
+}
+
+u32 unf_alloc_vp_index(struct unf_vport_pool *vport_pool,
+ struct unf_lport *vport, u16 vpid)
+{
+ u16 slab_index;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VALUE(vport_pool, RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(vport, RETURN_ERROR);
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ if (vpid == 0) {
+ slab_index = vport_pool->slab_next_index;
+ while (unf_get_vport_by_slab_index(vport_pool, slab_index)) {
+ slab_index = (slab_index + 1) % vport_pool->slab_total_sum;
+
+ if (vport_pool->slab_next_index == slab_index) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+					     "[warn]VPort pool has no free slab");
+
+ return RETURN_ERROR;
+ }
+ }
+ } else {
+ slab_index = vpid - 1;
+ if (unf_get_vport_by_slab_index(vport_pool, slab_index)) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_WARN,
+				     "[warn]VPort Index(0x%x) is occupied", vpid);
+
+ return RETURN_ERROR;
+ }
+ }
+
+ unf_vport_pool_slab_set(vport_pool, slab_index, vport);
+
+ vport_pool->slab_next_index = (slab_index + 1) % vport_pool->slab_total_sum;
+
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ spin_lock_irqsave(&vport->lport_state_lock, flags);
+ vport->vp_index = slab_index + 1;
+ spin_unlock_irqrestore(&vport->lport_state_lock, flags);
+
+ return RETURN_OK;
+}
+
+void unf_free_vp_index(struct unf_vport_pool *vport_pool,
+ struct unf_lport *vport)
+{
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(vport_pool);
+ FC_CHECK_RETURN_VOID(vport);
+
+ if (vport->vp_index == 0 ||
+ vport->vp_index > vport_pool->slab_total_sum) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+			     "Input vport index(0x%x) is beyond the normal range, min(0x1), max(0x%x).",
+ vport->vp_index, vport_pool->slab_total_sum);
+ return;
+ }
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ unf_vport_pool_slab_set(vport_pool, vport->vp_index - 1,
+ NULL); /* SlabIndex=VpIndex-1 */
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ spin_lock_irqsave(&vport->lport_state_lock, flags);
+ vport->vp_index = INVALID_VALUE16;
+ spin_unlock_irqrestore(&vport->lport_state_lock, flags);
+}
+
+struct unf_lport *unf_get_free_vport(struct unf_lport *lport)
+{
+ struct unf_lport *vport = NULL;
+ struct list_head *list_head = NULL;
+ struct unf_vport_pool *vport_pool = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(lport->vport_pool, NULL);
+
+ vport_pool = lport->vport_pool;
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
+ if (!list_empty(&vport_pool->list_vport_pool)) {
+ list_head = UNF_OS_LIST_NEXT(&vport_pool->list_vport_pool);
+ list_del(list_head);
+ vport_pool->vport_pool_count--;
+ list_add_tail(list_head, &lport->list_vports_head);
+ vport = list_entry(list_head, struct unf_lport, entry_vport);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]LPort(0x%x)'s vport pool is empty", lport->port_id);
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+
+ return NULL;
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+
+ return vport;
+}
+
+void unf_vport_back_to_pool(void *vport)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_lport *unf_vport = NULL;
+ struct list_head *list = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(vport);
+ unf_vport = vport;
+ unf_lport = (struct unf_lport *)(unf_vport->root_lport);
+ FC_CHECK_RETURN_VOID(unf_lport);
+ FC_CHECK_RETURN_VOID(unf_lport->vport_pool);
+
+ unf_free_vp_index(unf_lport->vport_pool, unf_vport);
+
+ spin_lock_irqsave(&unf_lport->vport_pool->vport_pool_lock, flag);
+
+ list = &unf_vport->entry_vport;
+ list_del(list);
+ list_add_tail(list, &unf_lport->vport_pool->list_vport_pool);
+ unf_lport->vport_pool->vport_pool_count++;
+
+ spin_unlock_irqrestore(&unf_lport->vport_pool->vport_pool_lock, flag);
+}
+
+void unf_init_vport_from_lport(struct unf_lport *vport, struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(vport);
+ FC_CHECK_RETURN_VOID(lport);
+
+ vport->port_type = lport->port_type;
+ vport->fc_port = lport->fc_port;
+ vport->act_topo = lport->act_topo;
+ vport->root_lport = lport;
+ vport->unf_qualify_rport = lport->unf_qualify_rport;
+ vport->link_event_wq = lport->link_event_wq;
+ vport->xchg_wq = lport->xchg_wq;
+
+ memcpy(&vport->xchg_mgr_temp, &lport->xchg_mgr_temp,
+ sizeof(struct unf_cm_xchg_mgr_template));
+
+ memcpy(&vport->event_mgr, &lport->event_mgr, sizeof(struct unf_event_mgr));
+
+ memset(&vport->lport_mgr_temp, 0, sizeof(struct unf_cm_lport_template));
+
+ memcpy(&vport->low_level_func, &lport->low_level_func,
+ sizeof(struct unf_low_level_functioon_op));
+}
+
+void unf_check_vport_pool_status(struct unf_lport *lport)
+{
+ struct unf_vport_pool *vport_pool = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ vport_pool = lport->vport_pool;
+ FC_CHECK_RETURN_VOID(vport_pool);
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+
+ if (vport_pool->vport_pool_completion &&
+ vport_pool->slab_total_sum == vport_pool->vport_pool_count) {
+ complete(vport_pool->vport_pool_completion);
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+}
+
+void unf_vport_fabric_logo(struct unf_lport *vport)
+{
+ struct unf_rport *unf_rport = NULL;
+ ulong flag = 0;
+
+ unf_rport = unf_get_rport_by_nport_id(vport, UNF_FC_FID_FLOGI);
+ FC_CHECK_RETURN_VOID(unf_rport);
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ unf_rport_enter_logo(vport, unf_rport);
+}
+
+void unf_vport_deinit(void *vport)
+{
+ struct unf_lport *unf_vport = NULL;
+
+ FC_CHECK_RETURN_VOID(vport);
+ unf_vport = (struct unf_lport *)vport;
+
+ unf_unregister_scsi_host(unf_vport);
+
+ unf_disc_mgr_destroy(unf_vport);
+
+ unf_release_xchg_mgr_temp(unf_vport);
+
+ unf_release_vport_mgr_temp(unf_vport);
+
+ unf_destroy_scsi_id_table(unf_vport);
+
+ unf_lport_release_lw_funop(unf_vport);
+ unf_vport->fc_port = NULL;
+ unf_vport->vport = NULL;
+
+ if (unf_vport->lport_free_completion) {
+ complete(unf_vport->lport_free_completion);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]VPort(0x%x) point(0x%p) completion free function is NULL",
+ unf_vport->port_id, unf_vport);
+ dump_stack();
+ }
+}
+
+void unf_vport_ref_dec(struct unf_lport *vport)
+{
+ FC_CHECK_RETURN_VOID(vport);
+
+ if (atomic_dec_and_test(&vport->port_ref_cnt)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+			     "[info]VPort(0x%x) point(0x%p) reference count is 0, free vport",
+ vport->port_id, vport);
+
+ unf_vport_deinit(vport);
+ }
+}
+
+u32 unf_vport_init(void *vport)
+{
+ struct unf_lport *unf_vport = NULL;
+
+ FC_CHECK_RETURN_VALUE(vport, RETURN_ERROR);
+ unf_vport = (struct unf_lport *)vport;
+
+ unf_vport->options = UNF_PORT_MODE_INI;
+ unf_vport->nport_id = 0;
+
+ if (unf_init_scsi_id_table(unf_vport) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Vport(0x%x) can not initialize SCSI ID table",
+ unf_vport->port_id);
+
+ return RETURN_ERROR;
+ }
+
+ if (unf_init_disc_mgr(unf_vport) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "[err]Vport(0x%x) can not initialize discovery manager",
+ unf_vport->port_id);
+ unf_destroy_scsi_id_table(unf_vport);
+
+ return RETURN_ERROR;
+ }
+
+ if (unf_register_scsi_host(unf_vport) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "[err]Vport(0x%x) can not register SCSI host",
+ unf_vport->port_id);
+ unf_disc_mgr_destroy(unf_vport);
+ unf_destroy_scsi_id_table(unf_vport);
+
+ return RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[event]Vport(0x%x) Create succeed with wwpn(0x%llx)",
+ unf_vport->port_id, unf_vport->port_name);
+
+ return RETURN_OK;
+}
+
+void unf_vport_remove(void *vport)
+{
+ struct unf_lport *unf_vport = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct completion port_free_completion;
+
+ init_completion(&port_free_completion);
+ FC_CHECK_RETURN_VOID(vport);
+ unf_vport = (struct unf_lport *)vport;
+ unf_lport = (struct unf_lport *)(unf_vport->root_lport);
+ unf_vport->lport_free_completion = &port_free_completion;
+
+ unf_set_lport_removing(unf_vport);
+
+ unf_vport_ref_dec(unf_vport);
+
+ wait_for_completion(unf_vport->lport_free_completion);
+ unf_vport_back_to_pool(unf_vport);
+
+ unf_check_vport_pool_status(unf_lport);
+}
+
+u32 unf_npiv_conf(u32 port_id, u64 wwpn, enum unf_rport_qos_level qos_level)
+{
+#define VPORT_WWN_MASK 0xff00ffffffffffff
+#define VPORT_WWN_SHIFT 48
+
+ struct fc_vport_identifiers vid = {0};
+ struct Scsi_Host *host = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_lport *unf_vport = NULL;
+ u16 vport_id = 0;
+
+ unf_lport = unf_find_lport_by_port_id(port_id);
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Cannot find LPort by (0x%x).", port_id);
+
+ return RETURN_ERROR;
+ }
+
+ unf_vport = unf_cm_lookup_vport_by_wwpn(unf_lport, wwpn);
+ if (unf_vport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+			     "[err]Port(0x%x) already has a vport with wwpn(0x%llx), can't create again",
+ unf_lport->port_id, wwpn);
+
+ return RETURN_ERROR;
+ }
+
+ unf_vport = unf_get_free_vport(unf_lport);
+ if (!unf_vport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Can not get free vport from pool");
+
+ return RETURN_ERROR;
+ }
+
+ unf_init_port_parms(unf_vport);
+ unf_init_vport_from_lport(unf_vport, unf_lport);
+
+ if ((unf_lport->port_name & VPORT_WWN_MASK) == (wwpn & VPORT_WWN_MASK)) {
+ vport_id = (wwpn & ~VPORT_WWN_MASK) >> VPORT_WWN_SHIFT;
+ if (vport_id == 0)
+ vport_id = (unf_lport->port_name & ~VPORT_WWN_MASK) >> VPORT_WWN_SHIFT;
+ }
+
+ if (unf_alloc_vp_index(unf_lport->vport_pool, unf_vport, vport_id) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Vport can not allocate vport index");
+ unf_vport_back_to_pool(unf_vport);
+
+ return RETURN_ERROR;
+ }
+ unf_vport->port_id = (((u32)unf_vport->vp_index) << PORTID_VPINDEX_SHIT) |
+ unf_lport->port_id;
+
+ vid.roles = FC_PORT_ROLE_FCP_INITIATOR;
+ vid.vport_type = FC_PORTTYPE_NPIV;
+ vid.disable = false;
+ vid.node_name = unf_lport->node_name;
+
+ if (wwpn) {
+ vid.port_name = wwpn;
+ } else {
+ if ((unf_lport->port_name & ~VPORT_WWN_MASK) >> VPORT_WWN_SHIFT !=
+ unf_vport->vp_index) {
+ vid.port_name = (unf_lport->port_name & VPORT_WWN_MASK) |
+ (((u64)unf_vport->vp_index) << VPORT_WWN_SHIFT);
+ } else {
+ vid.port_name = (unf_lport->port_name & VPORT_WWN_MASK);
+ }
+ }
+
+ unf_vport->port_name = vid.port_name;
+
+ host = unf_lport->host_info.host;
+
+ if (!fc_vport_create(host, 0, &vid)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "[err]Port(0x%x) failed to create vport wwpn=%llx",
+ unf_lport->port_id, vid.port_name);
+
+ unf_vport_back_to_pool(unf_vport);
+
+ return RETURN_ERROR;
+ }
+
+ unf_vport->qos_level = qos_level;
+ return RETURN_OK;
+}
+
+struct unf_lport *unf_creat_vport(struct unf_lport *lport,
+ struct vport_config *vport_config)
+{
+ u32 ret = RETURN_OK;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_lport *vport = NULL;
+ enum unf_act_topo lport_topo;
+ enum unf_lport_login_state lport_state;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(vport_config, NULL);
+
+ if (vport_config->port_mode != FC_PORT_ROLE_FCP_INITIATOR) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Only support INITIATOR port mode(0x%x)",
+ vport_config->port_mode);
+
+ return NULL;
+ }
+ unf_lport = lport;
+
+ if (unf_lport->root_lport != unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) not root port return",
+ unf_lport->port_id);
+
+ return NULL;
+ }
+
+ vport = unf_cm_lookup_vport_by_wwpn(unf_lport, vport_config->port_name);
+ if (!vport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Port(0x%x) can not find vport with wwpn(0x%llx)",
+ unf_lport->port_id, vport_config->port_name);
+
+ return NULL;
+ }
+
+ ret = unf_vport_init(vport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]VPort(0x%x) can not initialize vport",
+ vport->port_id);
+
+ return NULL;
+ }
+
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ lport_topo = unf_lport->act_topo;
+ lport_state = unf_lport->states;
+
+ vport_config->node_name = unf_lport->node_name;
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ vport->port_name = vport_config->port_name;
+ vport->node_name = vport_config->node_name;
+
+ if (lport_topo == UNF_ACT_TOP_P2P_FABRIC &&
+ lport_state >= UNF_LPORT_ST_PLOGI_WAIT &&
+ lport_state <= UNF_LPORT_ST_READY) {
+ vport->link_up = unf_lport->link_up;
+ (void)unf_lport_login(vport, lport_topo);
+ }
+
+ return vport;
+}
+
+u32 unf_drop_vport(struct unf_lport *vport)
+{
+ u32 ret = RETURN_ERROR;
+ struct fc_vport *unf_vport = NULL;
+
+ FC_CHECK_RETURN_VALUE(vport, RETURN_ERROR);
+
+ unf_vport = vport->vport;
+ if (!unf_vport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "[err]VPort(0x%x) fc_vport in scsi is NULL",
+ vport->port_id);
+
+ return ret;
+ }
+
+ ret = fc_vport_terminate(unf_vport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]VPort(0x%x) terminate vport(%p) in scsi failed",
+ vport->port_id, unf_vport);
+
+ return ret;
+ }
+ return ret;
+}
+
+u32 unf_delete_vport(u32 port_id, u32 vp_index)
+{
+ struct unf_lport *unf_lport = NULL;
+ u16 unf_vp_index = 0;
+ struct unf_lport *vport = NULL;
+
+ unf_lport = unf_find_lport_by_port_id(port_id);
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) can not be found by portid", port_id);
+
+ return RETURN_ERROR;
+ }
+
+ if (atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+			     "[info]Port(0x%x) is in NOP, the destroy-all-vports function will be called",
+ unf_lport->port_id);
+
+ return RETURN_OK;
+ }
+
+ UNF_TOU16_CHECK(unf_vp_index, vp_index, return RETURN_ERROR);
+ vport = unf_cm_lookup_vport_by_vp_index(unf_lport, unf_vp_index);
+ if (!vport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Can not lookup VPort by VPort index(0x%x)",
+ unf_vp_index);
+
+ return RETURN_ERROR;
+ }
+
+ return unf_drop_vport(vport);
+}
+
+void unf_vport_abort_all_sfs_exch(struct unf_lport *vport)
+{
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *xchg_node = NULL;
+ struct list_head *next_xchg_node = NULL;
+ struct unf_xchg *exch = NULL;
+ ulong pool_lock_flags = 0;
+ ulong exch_lock_flags = 0;
+ u32 i;
+
+ FC_CHECK_RETURN_VOID(vport);
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport((struct unf_lport *)(vport->root_lport), i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) hot pool is NULL",
+ ((struct unf_lport *)(vport->root_lport))->port_id);
+ continue;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_flags);
+ list_for_each_safe(xchg_node, next_xchg_node, &hot_pool->sfs_busylist) {
+ exch = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+ spin_lock_irqsave(&exch->xchg_state_lock, exch_lock_flags);
+ if (vport == exch->lport && (atomic_read(&exch->ref_cnt) > 0)) {
+ exch->io_state |= TGT_IO_STATE_ABORT;
+ spin_unlock_irqrestore(&exch->xchg_state_lock, exch_lock_flags);
+ unf_disc_ctrl_size_inc(vport, exch->cmnd_code);
+ /* Transfer exch to destroy chain */
+ list_del(xchg_node);
+ list_add_tail(xchg_node, &hot_pool->list_destroy_xchg);
+ } else {
+ spin_unlock_irqrestore(&exch->xchg_state_lock, exch_lock_flags);
+ }
+ }
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_flags);
+ }
+}
+
+void unf_vport_abort_ini_io_exch(struct unf_lport *vport)
+{
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *xchg_node = NULL;
+ struct list_head *next_xchg_node = NULL;
+ struct unf_xchg *exch = NULL;
+ ulong pool_lock_flags = 0;
+ ulong exch_lock_flags = 0;
+ u32 i;
+
+ FC_CHECK_RETURN_VOID(vport);
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool = unf_get_hot_pool_by_lport((struct unf_lport *)(vport->root_lport), i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+					     "[warn]Port(0x%x) MgrIndex %u hot pool is NULL",
+ ((struct unf_lport *)(vport->root_lport))->port_id, i);
+ continue;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_flags);
+ list_for_each_safe(xchg_node, next_xchg_node, &hot_pool->ini_busylist) {
+ exch = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+
+ if (vport == exch->lport && atomic_read(&exch->ref_cnt) > 0) {
+ /* Transfer exch to destroy chain */
+ list_del(xchg_node);
+ list_add_tail(xchg_node, &hot_pool->list_destroy_xchg);
+
+ spin_lock_irqsave(&exch->xchg_state_lock, exch_lock_flags);
+ exch->io_state |= INI_IO_STATE_DRABORT;
+ spin_unlock_irqrestore(&exch->xchg_state_lock, exch_lock_flags);
+ }
+ }
+
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_flags);
+ }
+}
+
+void unf_vport_abort_exch(struct unf_lport *vport)
+{
+ FC_CHECK_RETURN_VOID(vport);
+
+ unf_vport_abort_all_sfs_exch(vport);
+
+ unf_vport_abort_ini_io_exch(vport);
+}
+
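+/*
+ * Poll the hot pool destroy chains until no exchange still references this
+ * vport, sleeping one second between passes and giving up once
+ * UNF_DELETE_VPORT_MAX_WAIT_TIME_MS has elapsed.
+ */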
+u32 unf_vport_wait_all_exch_removed(struct unf_lport *vport)
+{
+#define UNF_WAIT_EXCH_REMOVE_ONE_TIME_MS 1000
+ struct unf_xchg_hot_pool *hot_pool = NULL;
+ struct list_head *xchg_node = NULL;
+ struct list_head *next_xchg_node = NULL;
+ struct unf_xchg *exch = NULL;
+ u32 vport_uses = 0;
+ ulong flags = 0;
+ u32 wait_timeout = 0;
+ u32 i = 0;
+
+ FC_CHECK_RETURN_VALUE(vport, RETURN_ERROR);
+
+ while (1) {
+ vport_uses = 0;
+
+ for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
+ hot_pool =
+ unf_get_hot_pool_by_lport((struct unf_lport *)(vport->root_lport), i);
+ if (unlikely(!hot_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) hot Pool is NULL",
+ ((struct unf_lport *)(vport->root_lport))->port_id);
+
+ continue;
+ }
+
+ spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
+ list_for_each_safe(xchg_node, next_xchg_node,
+ &hot_pool->list_destroy_xchg) {
+ exch = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
+
+ if (exch->lport != vport)
+ continue;
+ vport_uses++;
+ if (wait_timeout >=
+ UNF_DELETE_VPORT_MAX_WAIT_TIME_MS) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[error]VPort(0x%x) Abort Exch(0x%p) Type(0x%x) OxRxid(0x%x 0x%x),sid did(0x%x 0x%x) SeqId(0x%x) IOState(0x%x) Ref(0x%x)",
+ vport->port_id, exch,
+ (u32)exch->xchg_type,
+ (u32)exch->oxid,
+ (u32)exch->rxid, (u32)exch->sid,
+ (u32)exch->did, (u32)exch->seq_id,
+ (u32)exch->io_state,
+ atomic_read(&exch->ref_cnt));
+ }
+ }
+ spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
+ }
+
+ if (vport_uses == 0) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]VPort(0x%x) has removed all exchanges it used",
+ vport->port_id);
+ break;
+ }
+
+ if (wait_timeout >= UNF_DELETE_VPORT_MAX_WAIT_TIME_MS)
+ return RETURN_ERROR;
+
+ msleep(UNF_WAIT_EXCH_REMOVE_ONE_TIME_MS);
+ wait_timeout += UNF_WAIT_EXCH_REMOVE_ONE_TIME_MS;
+ }
+
+ return RETURN_OK;
+}
+
+u32 unf_vport_wait_rports_removed(struct unf_lport *vport)
+{
+#define UNF_WAIT_RPORT_REMOVE_ONE_TIME_MS 5000
+
+ struct unf_disc *disc = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ u32 vport_uses = 0;
+ ulong flags = 0;
+ u32 wait_timeout = 0;
+ struct unf_rport *unf_rport = NULL;
+
+ FC_CHECK_RETURN_VALUE(vport, RETURN_ERROR);
+ disc = &vport->disc;
+
+ while (1) {
+ vport_uses = 0;
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flags);
+ list_for_each_safe(node, next_node, &disc->list_delete_rports) {
+ unf_rport = list_entry(node, struct unf_rport, entry_rport);
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "[info]Vport(0x%x) Rport(0x%x) point(%p) is in Delete",
+ vport->port_id, unf_rport->nport_id, unf_rport);
+ vport_uses++;
+ }
+
+ list_for_each_safe(node, next_node, &disc->list_destroy_rports) {
+ unf_rport = list_entry(node, struct unf_rport, entry_rport);
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "[info]Vport(0x%x) Rport(0x%x) point(%p) is in Destroy",
+ vport->port_id, unf_rport->nport_id, unf_rport);
+ vport_uses++;
+ }
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flags);
+
+ if (vport_uses == 0) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]VPort(0x%x) has removed all RPorts it used",
+ vport->port_id);
+ break;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Vport(0x%x) has %u RPorts not removed wait timeout(%u ms)",
+ vport->port_id, vport_uses, wait_timeout);
+
+ if (wait_timeout >= UNF_DELETE_VPORT_MAX_WAIT_TIME_MS)
+ return RETURN_ERROR;
+
+ msleep(UNF_WAIT_RPORT_REMOVE_ONE_TIME_MS);
+ wait_timeout += UNF_WAIT_RPORT_REMOVE_ONE_TIME_MS;
+ }
+
+ return RETURN_OK;
+}
+
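+/*
+ * Tear down one vport: log out from the fabric, mark it NOP and removing,
+ * report link down, abort its exchanges, wait for exchanges and rports to
+ * drain (tolerating dirty exchanges while the root port itself is being
+ * removed) and finally remove it from the configuration module.
+ */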
+u32 unf_destroy_one_vport(struct unf_lport *vport)
+{
+ u32 ret;
+ struct unf_lport *root_port = NULL;
+
+ FC_CHECK_RETURN_VALUE(vport, RETURN_ERROR);
+
+ root_port = (struct unf_lport *)vport->root_lport;
+
+ unf_vport_fabric_logo(vport);
+
+ /* 1 set NOP */
+ atomic_set(&vport->lport_no_operate_flag, UNF_LPORT_NOP);
+ vport->port_removing = true;
+
+	/* 2 report linkdown to scsi and delete rports */
+ unf_linkdown_one_vport(vport);
+
+ /* 3 set abort for exchange */
+ unf_vport_abort_exch(vport);
+
+	/* 4 wait for exchanges to return to the free pool */
+ if (!root_port->port_dirt_exchange) {
+ ret = unf_vport_wait_all_exch_removed(vport);
+ if (ret != RETURN_OK) {
+ if (!root_port->port_removing) {
+ vport->port_removing = false;
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[err]VPort(0x%x) can not wait Exchange return freepool",
+ vport->port_id);
+
+ return RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
+ "[warn]Port(0x%x) is removing, there is dirty exchange, continue",
+ root_port->port_id);
+
+ root_port->port_dirt_exchange = true;
+ }
+ }
+
+	/* 5 wait for rports to return to the rport pool */
+ ret = unf_vport_wait_rports_removed(vport);
+ if (ret != RETURN_OK) {
+ vport->port_removing = false;
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[err]VPort(0x%x) can not wait Rport return freepool",
+ vport->port_id);
+
+ return RETURN_ERROR;
+ }
+
+ unf_cm_vport_remove(vport);
+
+ return RETURN_OK;
+}
+
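+/*
+ * Destroy every vport of the lport: gather them (including any on the
+ * transition chain) onto the destroy chain under the vport pool lock, then
+ * drop them one by one with the lock released around unf_drop_vport().
+ */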
+void unf_destroy_all_vports(struct unf_lport *lport)
+{
+ struct unf_vport_pool *vport_pool = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_lport *vport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flags = 0;
+
+ unf_lport = lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ vport_pool = unf_lport->vport_pool;
+ if (unlikely(!vport_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Lport(0x%x) VPort pool is NULL", unf_lport->port_id);
+
+ return;
+ }
+
+	/* Move all vports (including those on the transition chain) to the destroy chain */
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
+ vport = list_entry(node, struct unf_lport, entry_vport);
+ list_del_init(&vport->entry_vport);
+ list_add_tail(&vport->entry_vport, &unf_lport->list_destroy_vports);
+ }
+
+ list_for_each_safe(node, next_node, &unf_lport->list_intergrad_vports) {
+ vport = list_entry(node, struct unf_lport, entry_vport);
+ list_del_init(&vport->entry_vport);
+ list_add_tail(&vport->entry_vport, &unf_lport->list_destroy_vports);
+ atomic_dec(&vport->port_ref_cnt);
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ while (!list_empty(&unf_lport->list_destroy_vports)) {
+ node = UNF_OS_LIST_NEXT(&unf_lport->list_destroy_vports);
+ vport = list_entry(node, struct unf_lport, entry_vport);
+
+ list_del_init(&vport->entry_vport);
+ list_add_tail(&vport->entry_vport, &unf_lport->list_vports_head);
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]VPort(0x%x) Destroy begin", vport->port_id);
+ unf_drop_vport(vport);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[info]VPort(0x%x) Destroy end", vport->port_id);
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+}
+
+u32 unf_init_vport_mgr_temp(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ lport->lport_mgr_temp.unf_look_up_vport_by_index = unf_lookup_vport_by_index;
+ lport->lport_mgr_temp.unf_look_up_vport_by_port_id = unf_lookup_vport_by_portid;
+ lport->lport_mgr_temp.unf_look_up_vport_by_did = unf_lookup_vport_by_did;
+ lport->lport_mgr_temp.unf_look_up_vport_by_wwpn = unf_lookup_vport_by_wwpn;
+ lport->lport_mgr_temp.unf_vport_remove = unf_vport_remove;
+
+ return RETURN_OK;
+}
+
+void unf_release_vport_mgr_temp(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(lport);
+
+ memset(&lport->lport_mgr_temp, 0, sizeof(struct unf_cm_lport_template));
+
+ lport->destroy_step = UNF_LPORT_DESTROY_STEP_9_DESTROY_LPORT_MG_TMP;
+}
diff --git a/drivers/scsi/spfc/common/unf_npiv.h b/drivers/scsi/spfc/common/unf_npiv.h
new file mode 100644
index 000000000000..6f522470f47a
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_npiv.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_NPIV_H
+#define UNF_NPIV_H
+
+#include "unf_type.h"
+#include "unf_common.h"
+#include "unf_lport.h"
+
+/* product VPort configuration */
+struct vport_config {
+ u64 node_name;
+ u64 port_name;
+ u32 port_mode; /* INI, TGT or both */
+};
+
+/* product VPort functions */
+#define PORTID_VPINDEX_MASK 0xff000000
+#define PORTID_VPINDEX_SHIT 24
+u32 unf_npiv_conf(u32 port_id, u64 wwpn, enum unf_rport_qos_level qos_level);
+struct unf_lport *unf_creat_vport(struct unf_lport *lport,
+ struct vport_config *vport_config);
+u32 unf_delete_vport(u32 port_id, u32 vp_index);
+
+/* VPort pool create and release functions */
+u32 unf_init_vport_pool(struct unf_lport *lport);
+void unf_free_vport_pool(struct unf_lport *lport);
+
+/* Lport register stLPortMgTemp functions */
+void unf_vport_remove(void *vport);
+void unf_vport_ref_dec(struct unf_lport *vport);
+
+/* link down all VPorts after receiving a linkdown event */
+void unf_linkdown_all_vports(void *lport);
+/* link up all VPorts after the Lport receives FLOGI ACC */
+void unf_linkup_all_vports(struct unf_lport *lport);
+/* delete all VPorts when the Lport is removed */
+void unf_destroy_all_vports(struct unf_lport *lport);
+void unf_vport_fabric_logo(struct unf_lport *vport);
+u32 unf_destroy_one_vport(struct unf_lport *vport);
+u32 unf_drop_vport(struct unf_lport *vport);
+u32 unf_init_vport_mgr_temp(struct unf_lport *lport);
+void unf_release_vport_mgr_temp(struct unf_lport *lport);
+struct unf_lport *unf_get_vport_by_slab_index(struct unf_vport_pool *vport_pool,
+ u16 slab_index);
+#endif
diff --git a/drivers/scsi/spfc/common/unf_npiv_portman.c b/drivers/scsi/spfc/common/unf_npiv_portman.c
new file mode 100644
index 000000000000..b4f393f2e732
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_npiv_portman.c
@@ -0,0 +1,360 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_npiv_portman.h"
+#include "unf_log.h"
+#include "unf_common.h"
+#include "unf_rport.h"
+#include "unf_npiv.h"
+#include "unf_portman.h"
+
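+/* vp_index is 1-based for callers; slab index 0 corresponds to vp_index 1 */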
+void *unf_lookup_vport_by_index(void *lport, u16 vp_index)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_vport_pool *vport_pool = NULL;
+ struct unf_lport *unf_vport = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ unf_lport = (struct unf_lport *)lport;
+
+ vport_pool = unf_lport->vport_pool;
+ if (unlikely(!vport_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) vport pool is NULL", unf_lport->port_id);
+
+ return NULL;
+ }
+
+ if (vp_index == 0 || vp_index > vport_pool->slab_total_sum) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) input vport index(0x%x) is beyond the normal range(0x1~0x%x)",
+ unf_lport->port_id, vp_index, vport_pool->slab_total_sum);
+
+ return NULL;
+ }
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ unf_vport = unf_get_vport_by_slab_index(vport_pool, vp_index - 1);
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ return (void *)unf_vport;
+}
+
+void *unf_lookup_vport_by_portid(void *lport, u32 port_id)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_vport_pool *vport_pool = NULL;
+ struct unf_lport *unf_vport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ unf_lport = (struct unf_lport *)lport;
+ vport_pool = unf_lport->vport_pool;
+ if (unlikely(!vport_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) vport pool is NULL", unf_lport->port_id);
+
+ return NULL;
+ }
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
+ list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ if (unf_vport->port_id == port_id) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+ return unf_vport;
+ }
+ }
+
+ list_for_each_safe(node, next_node, &unf_lport->list_intergrad_vports) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ if (unf_vport->port_id == port_id) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+ return unf_vport;
+ }
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) has no vport ID(0x%x).",
+ unf_lport->port_id, port_id);
+ return NULL;
+}
+
+void *unf_lookup_vport_by_did(void *lport, u32 did)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_vport_pool *vport_pool = NULL;
+ struct unf_lport *unf_vport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ unf_lport = (struct unf_lport *)lport;
+ vport_pool = unf_lport->vport_pool;
+ if (unlikely(!vport_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) vport pool is NULL", unf_lport->port_id);
+
+ return NULL;
+ }
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
+ list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ if (unf_vport->nport_id == did) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+
+ return unf_vport;
+ }
+ }
+
+ list_for_each_safe(node, next_node, &unf_lport->list_intergrad_vports) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ if (unf_vport->nport_id == did) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+ return unf_vport;
+ }
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) has no vport Nport ID(0x%x)", unf_lport->port_id, did);
+ return NULL;
+}
+
+void *unf_lookup_vport_by_wwpn(void *lport, u64 wwpn)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_vport_pool *vport_pool = NULL;
+ struct unf_lport *unf_vport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ unf_lport = (struct unf_lport *)lport;
+ vport_pool = unf_lport->vport_pool;
+ if (unlikely(!vport_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) vport pool is NULL", unf_lport->port_id);
+
+ return NULL;
+ }
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
+ list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ if (unf_vport->port_name == wwpn) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+
+ return unf_vport;
+ }
+ }
+
+ list_for_each_safe(node, next_node, &unf_lport->list_intergrad_vports) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ if (unf_vport->port_name == wwpn) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+ return unf_vport;
+ }
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) has no vport WWPN(0x%llx)",
+ unf_lport->port_id, wwpn);
+
+ return NULL;
+}
+
+void unf_linkdown_one_vport(struct unf_lport *vport)
+{
+ ulong flag = 0;
+ struct unf_lport *root_lport = NULL;
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_KEVENT,
+ "[info]VPort(0x%x) linkdown", vport->port_id);
+
+ spin_lock_irqsave(&vport->lport_state_lock, flag);
+ vport->link_up = UNF_PORT_LINK_DOWN;
+	vport->nport_id = 0; /* set nport_id to 0 before sending FDISC again */
+ unf_lport_state_ma(vport, UNF_EVENT_LPORT_LINK_DOWN);
+ spin_unlock_irqrestore(&vport->lport_state_lock, flag);
+
+ root_lport = (struct unf_lport *)vport->root_lport;
+
+ unf_flush_disc_event(&root_lport->disc, vport);
+
+ unf_clean_linkdown_rport(vport);
+}
+
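+/*
+ * Link down every vport: move them to the transition chain with an extra
+ * reference held, then take them back one at a time and report link down with
+ * the vport pool lock released around the linkdown work.
+ */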
+void unf_linkdown_all_vports(void *lport)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_vport_pool *vport_pool = NULL;
+ struct unf_lport *unf_vport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_lport = (struct unf_lport *)lport;
+ vport_pool = unf_lport->vport_pool;
+ if (unlikely(!vport_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) VPort pool is NULL", unf_lport->port_id);
+
+ return;
+ }
+
+ /* Transfer to the transition chain */
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ list_del_init(&unf_vport->entry_vport);
+ list_add_tail(&unf_vport->entry_vport, &unf_lport->list_intergrad_vports);
+ (void)unf_lport_ref_inc(unf_vport);
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ while (!list_empty(&unf_lport->list_intergrad_vports)) {
+ node = UNF_OS_LIST_NEXT(&unf_lport->list_intergrad_vports);
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+
+ list_del_init(&unf_vport->entry_vport);
+ list_add_tail(&unf_vport->entry_vport, &unf_lport->list_vports_head);
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ unf_linkdown_one_vport(unf_vport);
+
+ unf_vport_ref_dec(unf_vport);
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+}
+
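+/*
+ * Asynchronous event handler that logs each vport back in once the root port
+ * is up in P2P fabric topology; vports in NOP state are skipped and, if the
+ * root port is not in that state, the vport is linked down instead.
+ */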
+int unf_process_vports_linkup(void *arg_in, void *arg_out)
+{
+#define UNF_WAIT_VPORT_LOGIN_ONE_TIME_MS 100
+ struct unf_vport_pool *vport_pool = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_lport *unf_vport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flags = 0;
+ int ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(arg_in, RETURN_ERROR);
+
+ unf_lport = (struct unf_lport *)arg_in;
+
+ if (atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) is NOP don't continue", unf_lport->port_id);
+
+ return RETURN_OK;
+ }
+
+ if (unf_lport->link_up != UNF_PORT_LINK_UP) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) is not linkup don't continue.",
+ unf_lport->port_id);
+
+ return RETURN_OK;
+ }
+
+ vport_pool = unf_lport->vport_pool;
+ if (unlikely(!vport_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) VPort pool is NULL.", unf_lport->port_id);
+
+ return RETURN_OK;
+ }
+
+ /* Transfer to the transition chain */
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ list_del_init(&unf_vport->entry_vport);
+ list_add_tail(&unf_vport->entry_vport, &unf_lport->list_intergrad_vports);
+ (void)unf_lport_ref_inc(unf_vport);
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ while (!list_empty(&unf_lport->list_intergrad_vports)) {
+ node = UNF_OS_LIST_NEXT(&unf_lport->list_intergrad_vports);
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+
+ list_del_init(&unf_vport->entry_vport);
+ list_add_tail(&unf_vport->entry_vport, &unf_lport->list_vports_head);
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ if (atomic_read(&unf_vport->lport_no_operate_flag) == UNF_LPORT_NOP) {
+ unf_vport_ref_dec(unf_vport);
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ continue;
+ }
+
+ if (unf_lport->link_up == UNF_PORT_LINK_UP &&
+ unf_lport->act_topo == UNF_ACT_TOP_P2P_FABRIC) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Vport(0x%x) begin login", unf_vport->port_id);
+
+ unf_vport->link_up = UNF_PORT_LINK_UP;
+ (void)unf_lport_login(unf_vport, unf_lport->act_topo);
+
+ msleep(UNF_WAIT_VPORT_LOGIN_ONE_TIME_MS);
+ } else {
+ unf_linkdown_one_vport(unf_vport);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Vport(0x%x) login failed because root port linkdown",
+ unf_vport->port_id);
+ }
+
+ unf_vport_ref_dec(unf_vport);
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
+
+ return ret;
+}
+
+void unf_linkup_all_vports(struct unf_lport *lport)
+{
+ struct unf_cm_event_report *event = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ if (unlikely(!lport->event_mgr.unf_get_free_event_func ||
+ !lport->event_mgr.unf_post_event_func ||
+ !lport->event_mgr.unf_release_event)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) Event fun is NULL",
+ lport->port_id);
+ return;
+ }
+
+ event = lport->event_mgr.unf_get_free_event_func((void *)lport);
+ FC_CHECK_RETURN_VOID(event);
+
+ event->lport = lport;
+ event->event_asy_flag = UNF_EVENT_ASYN;
+ event->unf_event_task = unf_process_vports_linkup;
+ event->para_in = (void *)lport;
+
+ lport->event_mgr.unf_post_event_func(lport, event);
+}
diff --git a/drivers/scsi/spfc/common/unf_npiv_portman.h b/drivers/scsi/spfc/common/unf_npiv_portman.h
new file mode 100644
index 000000000000..284c23c9abe4
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_npiv_portman.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_NPIV_PORTMAN_H
+#define UNF_NPIV_PORTMAN_H
+
+#include "unf_type.h"
+#include "unf_lport.h"
+
+/* Lport register stLPortMgTemp functions */
+void *unf_lookup_vport_by_index(void *lport, u16 vp_index);
+void *unf_lookup_vport_by_portid(void *lport, u32 port_id);
+void *unf_lookup_vport_by_did(void *lport, u32 did);
+void *unf_lookup_vport_by_wwpn(void *lport, u64 wwpn);
+void unf_linkdown_one_vport(struct unf_lport *vport);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_portman.c b/drivers/scsi/spfc/common/unf_portman.c
new file mode 100644
index 000000000000..ef8f90eb3777
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_portman.c
@@ -0,0 +1,2431 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_portman.h"
+#include "unf_log.h"
+#include "unf_exchg.h"
+#include "unf_rport.h"
+#include "unf_io.h"
+#include "unf_npiv.h"
+#include "unf_scsi_common.h"
+
+#define UNF_LPORT_CHIP_ERROR(unf_lport) \
+ ((unf_lport)->pcie_error_cnt.pcie_error_count[UNF_PCIE_FATALERRORDETECTED])
+
+struct unf_global_lport global_lport_mgr;
+
+static int unf_port_switch(struct unf_lport *lport, bool switch_flag);
+static u32 unf_build_lport_wwn(struct unf_lport *lport);
+static int unf_lport_destroy(void *lport, void *arg_out);
+static u32 unf_port_linkup(struct unf_lport *lport, void *input);
+static u32 unf_port_linkdown(struct unf_lport *lport, void *input);
+static u32 unf_port_abnormal_reset(struct unf_lport *lport, void *input);
+static u32 unf_port_reset_start(struct unf_lport *lport, void *input);
+static u32 unf_port_reset_end(struct unf_lport *lport, void *input);
+static u32 unf_port_nop(struct unf_lport *lport, void *input);
+static void unf_destroy_card_thread(struct unf_lport *lport);
+static u32 unf_creat_card_thread(struct unf_lport *lport);
+static u32 unf_find_card_thread(struct unf_lport *lport);
+static u32 unf_port_begin_remove(struct unf_lport *lport, void *input);
+
+static struct unf_port_action g_lport_action[] = {
+ {UNF_PORT_LINK_UP, unf_port_linkup},
+ {UNF_PORT_LINK_DOWN, unf_port_linkdown},
+ {UNF_PORT_RESET_START, unf_port_reset_start},
+ {UNF_PORT_RESET_END, unf_port_reset_end},
+ {UNF_PORT_NOP, unf_port_nop},
+ {UNF_PORT_BEGIN_REMOVE, unf_port_begin_remove},
+ {UNF_PORT_RELEASE_RPORT_INDEX, unf_port_release_rport_index},
+ {UNF_PORT_ABNORMAL_RESET, unf_port_abnormal_reset},
+};
+
+static void unf_destroy_dirty_rport(struct unf_lport *lport, bool show_only)
+{
+ u32 dirty_rport = 0;
+
+ /* for whole L_Port */
+ if (lport->dirty_flag & UNF_LPORT_DIRTY_FLAG_RPORT_POOL_DIRTY) {
+ dirty_rport = lport->rport_pool.rport_pool_count;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) has %d dirty RPort(s)",
+ lport->port_id, dirty_rport);
+
+ /* Show L_Port's R_Port(s) from busy_list & destroy_list */
+ unf_show_all_rport(lport);
+
+ /* free R_Port pool memory & bitmap */
+ if (!show_only) {
+ vfree(lport->rport_pool.rport_pool_add);
+ lport->rport_pool.rport_pool_add = NULL;
+ vfree(lport->rport_pool.rpi_bitmap);
+ lport->rport_pool.rpi_bitmap = NULL;
+ }
+ }
+}
+
+void unf_show_dirty_port(bool show_only, u32 *dirty_port_num)
+{
+ struct list_head *node = NULL;
+ struct list_head *node_next = NULL;
+ struct unf_lport *unf_lport = NULL;
+ ulong flags = 0;
+ u32 port_num = 0;
+
+ FC_CHECK_RETURN_VOID(dirty_port_num);
+
+ /* for each dirty L_Port from global L_Port list */
+ spin_lock_irqsave(&global_lport_mgr.global_lport_list_lock, flags);
+ list_for_each_safe(node, node_next, &global_lport_mgr.dirty_list_head) {
+ unf_lport = list_entry(node, struct unf_lport, entry_lport);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) has dirty data(0x%x)",
+ unf_lport->port_id, unf_lport->dirty_flag);
+
+ /* Destroy dirty L_Port's exchange(s) & R_Port(s) */
+ unf_destroy_dirty_xchg(unf_lport, show_only);
+ unf_destroy_dirty_rport(unf_lport, show_only);
+
+ /* Delete (dirty L_Port) list entry if necessary */
+ if (!show_only) {
+ list_del_init(node);
+ vfree(unf_lport);
+ }
+
+ port_num++;
+ }
+ spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
+
+ *dirty_port_num = port_num;
+}
+
+void unf_show_all_rport(struct unf_lport *lport)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_disc *disc = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+ u32 rport_cnt = 0;
+ u32 target_cnt = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_lport = lport;
+ disc = &unf_lport->disc;
+
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "[info]Port(0x%x) disc state(0x%x)", unf_lport->port_id, disc->states);
+
+ /* for each R_Port from busy_list */
+ list_for_each_safe(node, next_node, &disc->list_busy_rports) {
+ unf_rport = list_entry(node, struct unf_rport, entry_rport);
+ rport_cnt++;
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "[info]Port(0x%x) busy RPorts(%u_%p) WWPN(0x%016llx) scsi_id(0x%x) local N_Port_ID(0x%x) N_Port_ID(0x%06x). State(0x%04x) options(0x%04x) index(0x%04x) ref(%d) pend:%d",
+ unf_lport->port_id, rport_cnt, unf_rport,
+ unf_rport->port_name, unf_rport->scsi_id,
+ unf_rport->local_nport_id, unf_rport->nport_id,
+ unf_rport->rp_state, unf_rport->options,
+ unf_rport->rport_index,
+ atomic_read(&unf_rport->rport_ref_cnt),
+ atomic_read(&unf_rport->pending_io_cnt));
+
+ if (unf_rport->nport_id < UNF_FC_FID_DOM_MGR)
+ target_cnt++;
+ }
+
+ unf_lport->target_cnt = target_cnt;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) targetnum=(%u)", unf_lport->port_id,
+ unf_lport->target_cnt);
+
+ /* for each R_Port from destroy_list */
+ list_for_each_safe(node, next_node, &disc->list_destroy_rports) {
+ unf_rport = list_entry(node, struct unf_rport, entry_rport);
+ rport_cnt++;
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "[info]Port(0x%x) destroy RPorts(%u) WWPN(0x%016llx) N_Port_ID(0x%06x) State(0x%04x) options(0x%04x) index(0x%04x) ref(%d)",
+ unf_lport->port_id, rport_cnt, unf_rport->port_name,
+ unf_rport->nport_id, unf_rport->rp_state,
+ unf_rport->options, unf_rport->rport_index,
+ atomic_read(&unf_rport->rport_ref_cnt));
+ }
+
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+}
+
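+/*
+ * Take a reference on the lport; fails if the reference count has already
+ * dropped to zero, i.e. the port is being destroyed.
+ */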
+u32 unf_lport_ref_inc(struct unf_lport *lport)
+{
+ ulong lport_flags = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&lport->lport_state_lock, lport_flags);
+ if (atomic_read(&lport->port_ref_cnt) <= 0) {
+ spin_unlock_irqrestore(&lport->lport_state_lock, lport_flags);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ atomic_inc(&lport->port_ref_cnt);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%p) port_id(0x%x) reference count is %d",
+ lport, lport->port_id, atomic_read(&lport->port_ref_cnt));
+
+ spin_unlock_irqrestore(&lport->lport_state_lock, lport_flags);
+
+ return RETURN_OK;
+}
+
+void unf_lport_ref_dec(struct unf_lport *lport)
+{
+ ulong flags = 0;
+ ulong lport_flags = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "LPort(0x%p), port ID(0x%x), reference count is %d.",
+ lport, lport->port_id, atomic_read(&lport->port_ref_cnt));
+
+ spin_lock_irqsave(&global_lport_mgr.global_lport_list_lock, flags);
+ spin_lock_irqsave(&lport->lport_state_lock, lport_flags);
+ if (atomic_dec_and_test(&lport->port_ref_cnt)) {
+ spin_unlock_irqrestore(&lport->lport_state_lock, lport_flags);
+ list_del(&lport->entry_lport);
+ global_lport_mgr.lport_sum--;
+
+ /* attaches the lport to the destroy linked list for dfx */
+ list_add_tail(&lport->entry_lport, &global_lport_mgr.destroy_list_head);
+ spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
+
+ (void)unf_lport_destroy(lport, NULL);
+ } else {
+ spin_unlock_irqrestore(&lport->lport_state_lock, lport_flags);
+ spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
+ }
+}
+
+void unf_lport_update_topo(struct unf_lport *lport,
+ enum unf_act_topo active_topo)
+{
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ if (active_topo > UNF_ACT_TOP_UNKNOWN || active_topo < UNF_ACT_TOP_PUBLIC_LOOP) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) set invalid topology(0x%x) with current value(0x%x)",
+ lport->nport_id, active_topo, lport->act_topo);
+
+ return;
+ }
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ lport->act_topo = active_topo;
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+}
+
+void unf_set_lport_removing(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(lport);
+
+ lport->fc_port = NULL;
+ lport->port_removing = true;
+ lport->destroy_step = UNF_LPORT_DESTROY_STEP_0_SET_REMOVING;
+}
+
+u32 unf_release_local_port(void *lport)
+{
+ struct unf_lport *unf_lport = lport;
+ struct completion lport_free_completion;
+
+ init_completion(&lport_free_completion);
+ FC_CHECK_RETURN_VALUE(unf_lport, UNF_RETURN_ERROR);
+
+ unf_lport->lport_free_completion = &lport_free_completion;
+ unf_set_lport_removing(unf_lport);
+ unf_lport_ref_dec(unf_lport);
+ wait_for_completion(unf_lport->lport_free_completion);
+ /* for dirty case */
+ if (unf_lport->dirty_flag == 0)
+ vfree(unf_lport);
+
+ return RETURN_OK;
+}
+
+static void unf_free_all_esgl_pages(struct unf_lport *lport)
+{
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+ u32 i;
+
+ FC_CHECK_RETURN_VOID(lport);
+ spin_lock_irqsave(&lport->esgl_pool.esgl_pool_lock, flag);
+ list_for_each_safe(node, next_node, &lport->esgl_pool.list_esgl_pool) {
+ list_del(node);
+ }
+
+ spin_unlock_irqrestore(&lport->esgl_pool.esgl_pool_lock, flag);
+ if (lport->esgl_pool.esgl_buff_list.buflist) {
+ for (i = 0; i < lport->esgl_pool.esgl_buff_list.buf_num; i++) {
+ if (lport->esgl_pool.esgl_buff_list.buflist[i].vaddr) {
+ dma_free_coherent(&lport->low_level_func.dev->dev,
+ lport->esgl_pool.esgl_buff_list.buf_size,
+ lport->esgl_pool.esgl_buff_list.buflist[i].vaddr,
+ lport->esgl_pool.esgl_buff_list.buflist[i].paddr);
+ lport->esgl_pool.esgl_buff_list.buflist[i].vaddr = NULL;
+ }
+ }
+ kfree(lport->esgl_pool.esgl_buff_list.buflist);
+ lport->esgl_pool.esgl_buff_list.buflist = NULL;
+ }
+}
+
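+/*
+ * Build the extended SGL pool: one unf_esgl descriptor per outstanding I/O,
+ * backed by DMA-coherent buffers that are sliced into PAGE_SIZE pages and
+ * linked onto the lport's ESGL free list.
+ */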
+static u32 unf_init_esgl_pool(struct unf_lport *lport)
+{
+ struct unf_esgl *esgl = NULL;
+ u32 ret = RETURN_OK;
+ u32 index = 0;
+ u32 buf_total_size;
+ u32 buf_num;
+ u32 alloc_idx;
+ u32 curbuf_idx = 0;
+ u32 curbuf_offset = 0;
+ u32 buf_cnt_perhugebuf;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ lport->esgl_pool.esgl_pool_count = lport->low_level_func.lport_cfg_items.max_io;
+ spin_lock_init(&lport->esgl_pool.esgl_pool_lock);
+ INIT_LIST_HEAD(&lport->esgl_pool.list_esgl_pool);
+
+ lport->esgl_pool.esgl_pool_addr =
+ vmalloc((size_t)((lport->esgl_pool.esgl_pool_count) * sizeof(struct unf_esgl)));
+ if (!lport->esgl_pool.esgl_pool_addr) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "LPort(0x%x) cannot allocate ESGL Pool.", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ esgl = (struct unf_esgl *)lport->esgl_pool.esgl_pool_addr;
+ memset(esgl, 0, ((lport->esgl_pool.esgl_pool_count) * sizeof(struct unf_esgl)));
+
+ buf_total_size = (u32)(PAGE_SIZE * lport->esgl_pool.esgl_pool_count);
+
+ lport->esgl_pool.esgl_buff_list.buf_size =
+ buf_total_size > BUF_LIST_PAGE_SIZE ? BUF_LIST_PAGE_SIZE : buf_total_size;
+ buf_cnt_perhugebuf = lport->esgl_pool.esgl_buff_list.buf_size / PAGE_SIZE;
+ buf_num = lport->esgl_pool.esgl_pool_count % buf_cnt_perhugebuf
+ ? lport->esgl_pool.esgl_pool_count / buf_cnt_perhugebuf + 1
+ : lport->esgl_pool.esgl_pool_count / buf_cnt_perhugebuf;
+ lport->esgl_pool.esgl_buff_list.buflist =
+ (struct buff_list *)kmalloc(buf_num * sizeof(struct buff_list), GFP_KERNEL);
+ lport->esgl_pool.esgl_buff_list.buf_num = buf_num;
+
+ if (!lport->esgl_pool.esgl_buff_list.buflist) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Allocate Esgl pool buf list failed out of memory");
+ goto free_buff;
+ }
+ memset(lport->esgl_pool.esgl_buff_list.buflist, 0, buf_num * sizeof(struct buff_list));
+
+ for (alloc_idx = 0; alloc_idx < buf_num; alloc_idx++) {
+ lport->esgl_pool.esgl_buff_list.buflist[alloc_idx]
+ .vaddr = dma_alloc_coherent(&lport->low_level_func.dev->dev,
+ lport->esgl_pool.esgl_buff_list.buf_size,
+ &lport->esgl_pool.esgl_buff_list.buflist[alloc_idx].paddr, GFP_KERNEL);
+ if (!lport->esgl_pool.esgl_buff_list.buflist[alloc_idx].vaddr)
+ goto free_buff;
+ memset(lport->esgl_pool.esgl_buff_list.buflist[alloc_idx].vaddr, 0,
+ lport->esgl_pool.esgl_buff_list.buf_size);
+ }
+
+	/* Carve the pool buffers into PAGE_SIZE ESGL pages and record each page's virtual and DMA address */
+ for (index = 0; index < lport->esgl_pool.esgl_pool_count; index++) {
+ if (index != 0 && !(index % buf_cnt_perhugebuf))
+ curbuf_idx++;
+ curbuf_offset = (u32)(PAGE_SIZE * (index % buf_cnt_perhugebuf));
+ esgl->page.page_address =
+ (u64)lport->esgl_pool.esgl_buff_list.buflist[curbuf_idx].vaddr + curbuf_offset;
+ esgl->page.page_size = PAGE_SIZE;
+ esgl->page.esgl_phy_addr =
+ lport->esgl_pool.esgl_buff_list.buflist[curbuf_idx].paddr + curbuf_offset;
+ list_add_tail(&esgl->entry_esgl, &lport->esgl_pool.list_esgl_pool);
+ esgl++;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[EVENT]Allocate bufnum:%u,buf_total_size:%u", buf_num, buf_total_size);
+
+ return ret;
+free_buff:
+ unf_free_all_esgl_pages(lport);
+ vfree(lport->esgl_pool.esgl_pool_addr);
+
+ return UNF_RETURN_ERROR;
+}
+
+static void unf_free_esgl_pool(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(lport);
+
+ unf_free_all_esgl_pages(lport);
+ lport->esgl_pool.esgl_pool_count = 0;
+
+ if (lport->esgl_pool.esgl_pool_addr) {
+ vfree(lport->esgl_pool.esgl_pool_addr);
+ lport->esgl_pool.esgl_pool_addr = NULL;
+ }
+
+ lport->destroy_step = UNF_LPORT_DESTROY_STEP_5_DESTROY_ESGL_POOL;
+}
+
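+/*
+ * The top byte of port_id carries the NPIV vport index (PORTID_VPINDEX_MASK);
+ * strip it to locate the physical port, then resolve the vport by that index.
+ */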
+struct unf_lport *unf_find_lport_by_port_id(u32 port_id)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flags = 0;
+ u32 portid = port_id & (~PORTID_VPINDEX_MASK);
+ u16 vport_index;
+ spinlock_t *lport_list_lock = NULL;
+
+ lport_list_lock = &global_lport_mgr.global_lport_list_lock;
+ vport_index = (port_id & PORTID_VPINDEX_MASK) >> PORTID_VPINDEX_SHIT;
+ spin_lock_irqsave(lport_list_lock, flags);
+
+ list_for_each_safe(node, next_node, &global_lport_mgr.lport_list_head) {
+ unf_lport = list_entry(node, struct unf_lport, entry_lport);
+ if (unf_lport->port_id == portid && !unf_lport->port_removing) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return unf_cm_lookup_vport_by_vp_index(unf_lport, vport_index);
+ }
+ }
+
+ list_for_each_safe(node, next_node, &global_lport_mgr.intergrad_head) {
+ unf_lport = list_entry(node, struct unf_lport, entry_lport);
+ if (unf_lport->port_id == portid && !unf_lport->port_removing) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return unf_cm_lookup_vport_by_vp_index(unf_lport, vport_index);
+ }
+ }
+
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return NULL;
+}
+
+u32 unf_is_vport_valid(struct unf_lport *lport, struct unf_lport *vport)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_vport_pool *vport_pool = NULL;
+ struct unf_lport *unf_vport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+ spinlock_t *vport_pool_lock = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(vport, UNF_RETURN_ERROR);
+
+ unf_lport = lport;
+ vport_pool = unf_lport->vport_pool;
+ if (unlikely(!vport_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) vport pool is NULL", unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ vport_pool_lock = &vport_pool->vport_pool_lock;
+ spin_lock_irqsave(vport_pool_lock, flag);
+ list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+
+ if (unf_vport == vport && !unf_vport->port_removing) {
+ spin_unlock_irqrestore(vport_pool_lock, flag);
+
+ return RETURN_OK;
+ }
+ }
+
+ list_for_each_safe(node, next_node, &unf_lport->list_intergrad_vports) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+
+ if (unf_vport == vport && !unf_vport->port_removing) {
+ spin_unlock_irqrestore(vport_pool_lock, flag);
+
+ return RETURN_OK;
+ }
+ }
+ spin_unlock_irqrestore(vport_pool_lock, flag);
+
+ return UNF_RETURN_ERROR;
+}
+
+u32 unf_is_lport_valid(struct unf_lport *lport)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flags = 0;
+ spinlock_t *lport_list_lock = NULL;
+
+ lport_list_lock = &global_lport_mgr.global_lport_list_lock;
+ spin_lock_irqsave(lport_list_lock, flags);
+
+ list_for_each_safe(node, next_node, &global_lport_mgr.lport_list_head) {
+ unf_lport = list_entry(node, struct unf_lport, entry_lport);
+
+ if (unf_lport == lport && !unf_lport->port_removing) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return RETURN_OK;
+ }
+
+ if (unf_is_vport_valid(unf_lport, lport) == RETURN_OK) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return RETURN_OK;
+ }
+ }
+
+ list_for_each_safe(node, next_node, &global_lport_mgr.intergrad_head) {
+ unf_lport = list_entry(node, struct unf_lport, entry_lport);
+
+ if (unf_lport == lport && !unf_lport->port_removing) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return RETURN_OK;
+ }
+
+ if (unf_is_vport_valid(unf_lport, lport) == RETURN_OK) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return RETURN_OK;
+ }
+ }
+
+ list_for_each_safe(node, next_node, &global_lport_mgr.destroy_list_head) {
+ unf_lport = list_entry(node, struct unf_lport, entry_lport);
+
+ if (unf_lport == lport && !unf_lport->port_removing) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return RETURN_OK;
+ }
+
+ if (unf_is_vport_valid(unf_lport, lport) == RETURN_OK) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return RETURN_OK;
+ }
+ }
+
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return UNF_RETURN_ERROR;
+}
+
+static void unf_clean_linkdown_io(struct unf_lport *lport, bool clean_flag)
+{
+ /* Clean L_Port/V_Port Link Down I/O: Set Abort Tag */
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(lport->xchg_mgr_temp.unf_xchg_abort_all_io);
+
+ lport->xchg_mgr_temp.unf_xchg_abort_all_io(lport, UNF_XCHG_TYPE_INI, clean_flag);
+ lport->xchg_mgr_temp.unf_xchg_abort_all_io(lport, UNF_XCHG_TYPE_SFS, clean_flag);
+}
+
+u32 unf_fc_port_link_event(void *lport, u32 events, void *input)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 index = 0;
+
+ if (unlikely(!lport))
+ return UNF_RETURN_ERROR;
+ unf_lport = (struct unf_lport *)lport;
+
+ ret = unf_lport_ref_inc(unf_lport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) is removing and do nothing",
+ unf_lport->port_id);
+
+ return RETURN_OK;
+ }
+
+ /* process port event */
+ while (index < (sizeof(g_lport_action) / sizeof(struct unf_port_action))) {
+ if (g_lport_action[index].action == events) {
+ ret = g_lport_action[index].unf_action(unf_lport, input);
+
+ unf_lport_ref_dec_to_destroy(unf_lport);
+
+ return ret;
+ }
+ index++;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) receive unknown event(0x%x)",
+ unf_lport->port_id, events);
+
+ unf_lport_ref_dec_to_destroy(unf_lport);
+
+ return ret;
+}
+
+void unf_port_mgmt_init(void)
+{
+ memset(&global_lport_mgr, 0, sizeof(struct unf_global_lport));
+
+ INIT_LIST_HEAD(&global_lport_mgr.lport_list_head);
+
+ INIT_LIST_HEAD(&global_lport_mgr.intergrad_head);
+
+ INIT_LIST_HEAD(&global_lport_mgr.destroy_list_head);
+
+ INIT_LIST_HEAD(&global_lport_mgr.dirty_list_head);
+
+ spin_lock_init(&global_lport_mgr.global_lport_list_lock);
+
+ UNF_SET_NOMAL_MODE(global_lport_mgr.dft_mode);
+
+ global_lport_mgr.start_work = true;
+}
+
+void unf_port_mgmt_deinit(void)
+{
+ if (global_lport_mgr.lport_sum != 0) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+			     "[warn]There are %u unreleased L_Port(s) in the port pool",
+ global_lport_mgr.lport_sum);
+ }
+
+ memset(&global_lport_mgr, 0, sizeof(struct unf_global_lport));
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Common port manager exit succeed");
+}
+
+static void unf_port_register(struct unf_lport *lport)
+{
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Register LPort(0x%p), port ID(0x%x).", lport, lport->port_id);
+
+ /* Add to the global management linked list header */
+ spin_lock_irqsave(&global_lport_mgr.global_lport_list_lock, flags);
+ list_add_tail(&lport->entry_lport, &global_lport_mgr.lport_list_head);
+ global_lport_mgr.lport_sum++;
+ spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
+}
+
+static void unf_port_unregister(struct unf_lport *lport)
+{
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Unregister LPort(0x%p), port ID(0x%x).", lport, lport->port_id);
+
+ /* Remove from the global management linked list header */
+ spin_lock_irqsave(&global_lport_mgr.global_lport_list_lock, flags);
+ list_del(&lport->entry_lport);
+ global_lport_mgr.lport_sum--;
+ spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
+}
+
+int unf_port_start_work(struct unf_lport *lport)
+{
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ if (lport->start_work_state != UNF_START_WORK_STOP) {
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ return RETURN_OK;
+ }
+ lport->start_work_state = UNF_START_WORK_COMPLETE;
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ /* switch sfp to start work */
+ (void)unf_port_switch(lport, true);
+
+ return RETURN_OK;
+}
+
+static u32
+unf_lport_init_lw_funop(struct unf_lport *lport,
+ struct unf_low_level_functioon_op *low_level_op)
+{
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(low_level_op, UNF_RETURN_ERROR);
+
+ lport->port_id = low_level_op->lport_cfg_items.port_id;
+ lport->port_name = low_level_op->sys_port_name;
+ lport->node_name = low_level_op->sys_node_name;
+ lport->options = low_level_op->lport_cfg_items.port_mode;
+ lport->act_topo = UNF_ACT_TOP_UNKNOWN;
+ lport->max_ssq_num = low_level_op->support_max_ssq_num;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Port(0x%x) .", lport->port_id);
+
+ memcpy(&lport->low_level_func, low_level_op, sizeof(struct unf_low_level_functioon_op));
+
+ return RETURN_OK;
+}
+
+void unf_lport_release_lw_funop(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VOID(lport);
+
+ memset(&lport->low_level_func, 0, sizeof(struct unf_low_level_functioon_op));
+
+ lport->destroy_step = UNF_LPORT_DESTROY_STEP_13_DESTROY_LW_INTERFACE;
+}
+
+struct unf_lport *unf_find_lport_by_scsi_hostid(u32 scsi_host_id)
+{
+ struct list_head *node = NULL, *next_node = NULL;
+ struct list_head *vp_node = NULL, *next_vp_node = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_lport *unf_vport = NULL;
+ ulong flags = 0;
+ ulong pool_flags = 0;
+ spinlock_t *vp_pool_lock = NULL;
+ spinlock_t *lport_list_lock = &global_lport_mgr.global_lport_list_lock;
+
+ spin_lock_irqsave(lport_list_lock, flags);
+ list_for_each_safe(node, next_node, &global_lport_mgr.lport_list_head) {
+ unf_lport = list_entry(node, struct unf_lport, entry_lport);
+ vp_pool_lock = &unf_lport->vport_pool->vport_pool_lock;
+ if (scsi_host_id == UNF_GET_SCSI_HOST_ID(unf_lport->host_info.host)) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return unf_lport;
+ }
+
+ /* support NPIV */
+ if (unf_lport->vport_pool) {
+ spin_lock_irqsave(vp_pool_lock, pool_flags);
+ list_for_each_safe(vp_node, next_vp_node, &unf_lport->list_vports_head) {
+ unf_vport = list_entry(vp_node, struct unf_lport, entry_vport);
+
+ if (scsi_host_id ==
+ UNF_GET_SCSI_HOST_ID(unf_vport->host_info.host)) {
+ spin_unlock_irqrestore(vp_pool_lock, pool_flags);
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return unf_vport;
+ }
+ }
+ spin_unlock_irqrestore(vp_pool_lock, pool_flags);
+ }
+ }
+
+ list_for_each_safe(node, next_node, &global_lport_mgr.intergrad_head) {
+ unf_lport = list_entry(node, struct unf_lport, entry_lport);
+ vp_pool_lock = &unf_lport->vport_pool->vport_pool_lock;
+ if (scsi_host_id ==
+ UNF_GET_SCSI_HOST_ID(unf_lport->host_info.host)) {
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return unf_lport;
+ }
+
+ /* support NPIV */
+ if (unf_lport->vport_pool) {
+ spin_lock_irqsave(vp_pool_lock, pool_flags);
+ list_for_each_safe(vp_node, next_vp_node, &unf_lport->list_vports_head) {
+ unf_vport = list_entry(vp_node, struct unf_lport, entry_vport);
+
+ if (scsi_host_id ==
+ UNF_GET_SCSI_HOST_ID(unf_vport->host_info.host)) {
+ spin_unlock_irqrestore(vp_pool_lock, pool_flags);
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ return unf_vport;
+ }
+ }
+ spin_unlock_irqrestore(vp_pool_lock, pool_flags);
+ }
+ }
+ spin_unlock_irqrestore(lport_list_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Can not find port by scsi_host_id(0x%x), may be removing",
+ scsi_host_id);
+
+ return NULL;
+}
+
+u32 unf_init_scsi_id_table(struct unf_lport *lport)
+{
+ struct unf_rport_scsi_id_image *rport_scsi_id_image = NULL;
+ struct unf_wwpn_rport_info *wwpn_port_info = NULL;
+ u32 idx;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ rport_scsi_id_image = &lport->rport_scsi_table;
+ rport_scsi_id_image->max_scsi_id = UNF_MAX_SCSI_ID;
+
+	/* A maximum SCSI ID of zero means the L_Port supports no remote
+	 * connections, which is treated as an error.
+	 */
+ if (rport_scsi_id_image->max_scsi_id == 0) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x), supported maximum login is zero.", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ rport_scsi_id_image->wwn_rport_info_table =
+ vmalloc(rport_scsi_id_image->max_scsi_id * sizeof(struct unf_wwpn_rport_info));
+ if (!rport_scsi_id_image->wwn_rport_info_table) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) can't allocate SCSI ID Table(0x%x).",
+ lport->port_id, rport_scsi_id_image->max_scsi_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(rport_scsi_id_image->wwn_rport_info_table, 0,
+ rport_scsi_id_image->max_scsi_id * sizeof(struct unf_wwpn_rport_info));
+
+ wwpn_port_info = rport_scsi_id_image->wwn_rport_info_table;
+
+ for (idx = 0; idx < rport_scsi_id_image->max_scsi_id; idx++) {
+ INIT_DELAYED_WORK(&wwpn_port_info->loss_tmo_work, unf_sesion_loss_timeout);
+ INIT_LIST_HEAD(&wwpn_port_info->fc_lun_list);
+ wwpn_port_info->lport = lport;
+ wwpn_port_info->target_id = INVALID_VALUE32;
+ wwpn_port_info++;
+ }
+
+ spin_lock_init(&rport_scsi_id_image->scsi_image_table_lock);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Port(0x%x) supported maximum login is %u.",
+ lport->port_id, rport_scsi_id_image->max_scsi_id);
+
+ return RETURN_OK;
+}
+
+void unf_destroy_scsi_id_table(struct unf_lport *lport)
+{
+ struct unf_rport_scsi_id_image *rport_scsi_id_image = NULL;
+ struct unf_wwpn_rport_info *wwn_rport_info = NULL;
+ u32 i = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ rport_scsi_id_image = &lport->rport_scsi_table;
+ if (rport_scsi_id_image->wwn_rport_info_table) {
+ for (i = 0; i < UNF_MAX_SCSI_ID; i++) {
+ wwn_rport_info = &rport_scsi_id_image->wwn_rport_info_table[i];
+ UNF_DELAYED_WORK_SYNC(ret, (lport->port_id),
+ (&wwn_rport_info->loss_tmo_work),
+ "loss tmo Timer work");
+ if (wwn_rport_info->lun_qos_level) {
+ vfree(wwn_rport_info->lun_qos_level);
+ wwn_rport_info->lun_qos_level = NULL;
+ }
+ }
+
+ if (ret) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "Port(0x%x) cancel loss tmo work success", lport->port_id);
+ }
+ vfree(rport_scsi_id_image->wwn_rport_info_table);
+ rport_scsi_id_image->wwn_rport_info_table = NULL;
+ }
+
+ rport_scsi_id_image->max_scsi_id = 0;
+ lport->destroy_step = UNF_LPORT_DESTROY_STEP_10_DESTROY_SCSI_TABLE;
+}
+
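+/*
+ * Bring up the lport resources in order: lowlevel ops, link event and
+ * exchange work queues, SCSI ID table, exchange and ESGL pools, disc manager,
+ * vport/exchange templates, event center, route and vport pool. On any
+ * failure the pieces that were already initialized are unwound.
+ */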
+static u32 unf_lport_init(struct unf_lport *lport, void *private_data,
+ struct unf_low_level_functioon_op *low_level_op)
+{
+ u32 ret = RETURN_OK;
+ char work_queue_name[13];
+
+ unf_init_port_parms(lport);
+
+ /* Associating LPort with FCPort */
+ lport->fc_port = private_data;
+
+	/* vp_index 0 is reserved for the Lport itself; root_lport points back to the Lport */
+ lport->vp_index = 0;
+ lport->root_lport = lport;
+ lport->chip_info = NULL;
+
+ /* Initialize the units related to L_Port and lw func */
+ ret = unf_lport_init_lw_funop(lport, low_level_op);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort(0x%x) initialize lowlevel function unsuccessful.",
+ lport->port_id);
+
+ return ret;
+ }
+
+ /* Init Linkevent workqueue */
+ snprintf(work_queue_name, sizeof(work_queue_name), "%x_lkq", lport->port_id);
+
+ lport->link_event_wq = create_singlethread_workqueue(work_queue_name);
+ if (!lport->link_event_wq) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+			     "[err]Port(0x%x) create link event work queue failed", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ snprintf(work_queue_name, sizeof(work_queue_name), "%x_xchgwq", lport->port_id);
+ lport->xchg_wq = create_workqueue(work_queue_name);
+ if (!lport->xchg_wq) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+			     "[err]Port(0x%x) create Exchg work queue failed",
+ lport->port_id);
+ flush_workqueue(lport->link_event_wq);
+ destroy_workqueue(lport->link_event_wq);
+ lport->link_event_wq = NULL;
+ return UNF_RETURN_ERROR;
+ }
+	/* SCSI table (R_Port) required for initializing INI:
+	 * initialize the SCSI ID table that manages the mapping between
+	 * SCSI ID, WWN and R_Port.
+	 */
+
+ ret = unf_init_scsi_id_table(lport);
+ if (ret != RETURN_OK) {
+ flush_workqueue(lport->link_event_wq);
+ destroy_workqueue(lport->link_event_wq);
+ lport->link_event_wq = NULL;
+
+ flush_workqueue(lport->xchg_wq);
+ destroy_workqueue(lport->xchg_wq);
+ lport->xchg_wq = NULL;
+ return ret;
+ }
+
+ /* Initialize the EXCH resource */
+ ret = unf_alloc_xchg_resource(lport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort(0x%x) can't allocate exchange resource.", lport->port_id);
+
+ flush_workqueue(lport->link_event_wq);
+ destroy_workqueue(lport->link_event_wq);
+ lport->link_event_wq = NULL;
+
+ flush_workqueue(lport->xchg_wq);
+ destroy_workqueue(lport->xchg_wq);
+ lport->xchg_wq = NULL;
+ unf_destroy_scsi_id_table(lport);
+
+ return ret;
+ }
+
+ /* Initialize the ESGL resource pool used by Lport */
+ ret = unf_init_esgl_pool(lport);
+ if (ret != RETURN_OK) {
+ flush_workqueue(lport->link_event_wq);
+ destroy_workqueue(lport->link_event_wq);
+ lport->link_event_wq = NULL;
+
+ flush_workqueue(lport->xchg_wq);
+ destroy_workqueue(lport->xchg_wq);
+ lport->xchg_wq = NULL;
+ unf_free_all_xchg_mgr(lport);
+ unf_destroy_scsi_id_table(lport);
+
+ return ret;
+ }
+ /* Initialize the disc manager under Lport */
+ ret = unf_init_disc_mgr(lport);
+ if (ret != RETURN_OK) {
+ flush_workqueue(lport->link_event_wq);
+ destroy_workqueue(lport->link_event_wq);
+ lport->link_event_wq = NULL;
+
+ flush_workqueue(lport->xchg_wq);
+ destroy_workqueue(lport->xchg_wq);
+ lport->xchg_wq = NULL;
+ unf_free_esgl_pool(lport);
+ unf_free_all_xchg_mgr(lport);
+ unf_destroy_scsi_id_table(lport);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort(0x%x) initialize discover manager unsuccessful.",
+ lport->port_id);
+
+ return ret;
+ }
+
+ /* Initialize the LPort manager */
+ ret = unf_init_vport_mgr_temp(lport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort(0x%x) initialize RPort manager unsuccessful.", lport->port_id);
+
+ goto RELEASE_LPORT;
+ }
+
+ /* Initialize the EXCH manager */
+ ret = unf_init_xchg_mgr_temp(lport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort(0x%x) initialize exchange manager unsuccessful.",
+ lport->port_id);
+ goto RELEASE_LPORT;
+ }
+ /* Initialize the resources required by the event processing center */
+ ret = unf_init_event_center(lport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort(0x%x) initialize event center unsuccessful.", lport->port_id);
+ goto RELEASE_LPORT;
+ }
+ /* Initialize the initialization status of Lport */
+ unf_set_lport_state(lport, UNF_LPORT_ST_INITIAL);
+
+ /* Initialize the Lport route test case */
+ ret = unf_init_lport_route(lport);
+ if (ret != RETURN_OK) {
+ flush_workqueue(lport->link_event_wq);
+ destroy_workqueue(lport->link_event_wq);
+ lport->link_event_wq = NULL;
+
+ flush_workqueue(lport->xchg_wq);
+ destroy_workqueue(lport->xchg_wq);
+ lport->xchg_wq = NULL;
+ (void)unf_event_center_destroy(lport);
+ unf_disc_mgr_destroy(lport);
+ unf_free_esgl_pool(lport);
+ unf_free_all_xchg_mgr(lport);
+ unf_destroy_scsi_id_table(lport);
+
+ return ret;
+ }
+	/* Initialize the vport pool needed for NPIV support */
+ ret = unf_init_vport_pool(lport);
+ if (ret != RETURN_OK) {
+ flush_workqueue(lport->link_event_wq);
+ destroy_workqueue(lport->link_event_wq);
+ lport->link_event_wq = NULL;
+
+ flush_workqueue(lport->xchg_wq);
+ destroy_workqueue(lport->xchg_wq);
+ lport->xchg_wq = NULL;
+
+ unf_destroy_lport_route(lport);
+ (void)unf_event_center_destroy(lport);
+ unf_disc_mgr_destroy(lport);
+ unf_free_esgl_pool(lport);
+ unf_free_all_xchg_mgr(lport);
+ unf_destroy_scsi_id_table(lport);
+
+ return ret;
+ }
+
+ /* qualifier rport callback */
+ lport->unf_qualify_rport = unf_rport_set_qualifier_key_reuse;
+ lport->unf_tmf_abnormal_recovery = unf_tmf_timeout_recovery_special;
+ return RETURN_OK;
+RELEASE_LPORT:
+ flush_workqueue(lport->link_event_wq);
+ destroy_workqueue(lport->link_event_wq);
+ lport->link_event_wq = NULL;
+
+ flush_workqueue(lport->xchg_wq);
+ destroy_workqueue(lport->xchg_wq);
+ lport->xchg_wq = NULL;
+
+ unf_disc_mgr_destroy(lport);
+ unf_free_esgl_pool(lport);
+ unf_free_all_xchg_mgr(lport);
+ unf_destroy_scsi_id_table(lport);
+
+ return ret;
+}
+
+void unf_free_qos_info(struct unf_lport *lport)
+{
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_qos_info *qos_info = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ spin_lock_irqsave(&lport->qos_mgr_lock, flag);
+ list_for_each_safe(node, next_node, &lport->list_qos_head) {
+ qos_info = (struct unf_qos_info *)list_entry(node,
+ struct unf_qos_info, entry_qos_info);
+ list_del_init(&qos_info->entry_qos_info);
+ kfree(qos_info);
+ }
+
+ spin_unlock_irqrestore(&lport->qos_mgr_lock, flag);
+}
+
+u32 unf_lport_deinit(struct unf_lport *lport)
+{
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ unf_free_qos_info(lport);
+
+ unf_unregister_scsi_host(lport);
+
+	/* If the card is unloaded normally the route thread has already been
+	 * stopped; stopping it again here is harmless.
+	 */
+ unf_destroy_lport_route(lport);
+
+	/* Drop this port's reference on the card event thread; the last port
+	 * to do so destroys the thread.
+	 */
+ unf_destroy_card_thread(lport);
+ flush_workqueue(lport->link_event_wq);
+ destroy_workqueue(lport->link_event_wq);
+ lport->link_event_wq = NULL;
+
+ (void)unf_event_center_destroy(lport);
+ unf_free_vport_pool(lport);
+ unf_xchg_mgr_destroy(lport);
+
+ unf_free_esgl_pool(lport);
+
+	/* Reliability note: the disc manager must be destroyed after the
+	 * exchange manager has been released.
+	 */
+ unf_disc_mgr_destroy(lport);
+
+ unf_release_xchg_mgr_temp(lport);
+
+ unf_release_vport_mgr_temp(lport);
+
+ unf_destroy_scsi_id_table(lport);
+
+ flush_workqueue(lport->xchg_wq);
+ destroy_workqueue(lport->xchg_wq);
+ lport->xchg_wq = NULL;
+
+	/* Release the low-level interface template */
+ unf_lport_release_lw_funop(lport);
+ lport->fc_port = NULL;
+
+ return RETURN_OK;
+}
+
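+/*
+ * Per-chip event thread: pops events off the chip event list and handles
+ * them; sleeps briefly when the list is empty and exits on thread_exit or
+ * kthread_stop().
+ */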
+static int unf_card_event_process(void *arg)
+{
+ struct list_head *node = NULL;
+ struct unf_cm_event_report *event_node = NULL;
+ ulong flags = 0;
+ struct unf_chip_manage_info *chip_info = (struct unf_chip_manage_info *)arg;
+
+ set_user_nice(current, UNF_OS_THRD_PRI_LOW);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Slot(%u) chip(0x%x) enter event thread.",
+ chip_info->slot_id, chip_info->chip_id);
+
+ while (!kthread_should_stop()) {
+ if (chip_info->thread_exit)
+ break;
+
+ spin_lock_irqsave(&chip_info->chip_event_list_lock, flags);
+ if (list_empty(&chip_info->list_head)) {
+ spin_unlock_irqrestore(&chip_info->chip_event_list_lock, flags);
+
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout((long)msecs_to_jiffies(UNF_S_TO_MS));
+ } else {
+ node = UNF_OS_LIST_NEXT(&chip_info->list_head);
+ list_del_init(node);
+ chip_info->list_num--;
+ event_node = list_entry(node, struct unf_cm_event_report, list_entry);
+ spin_unlock_irqrestore(&chip_info->chip_event_list_lock, flags);
+ unf_handle_event(event_node);
+ }
+ }
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
+ "Slot(%u) chip(0x%x) exit event thread.",
+ chip_info->slot_id, chip_info->chip_id);
+
+ return RETURN_OK;
+}
+
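+/*
+ * Reclaim this L_Port's pending events from the chip event list (completing
+ * synchronous waiters with an error) and drop the chip reference; the last
+ * user stops the event thread, unlinks the chip info and frees it.
+ */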
+static void unf_destroy_card_thread(struct unf_lport *lport)
+{
+ struct unf_event_mgr *event_mgr = NULL;
+ struct unf_chip_manage_info *chip_info = NULL;
+ struct list_head *list = NULL;
+ struct list_head *list_tmp = NULL;
+ struct unf_cm_event_report *event_node = NULL;
+ ulong event_lock_flag = 0;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+	/* Get the chip management info bound to this port */
+ chip_info = lport->chip_info;
+ if (!chip_info) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) has no event thread.", lport->port_id);
+ return;
+ }
+ event_mgr = &lport->event_mgr;
+
+ spin_lock_irqsave(&chip_info->chip_event_list_lock, flag);
+ if (!list_empty(&chip_info->list_head)) {
+ list_for_each_safe(list, list_tmp, &chip_info->list_head) {
+ event_node = list_entry(list, struct unf_cm_event_report, list_entry);
+
+			/* Reclaim events that belong to this L_Port */
+ if (event_node->lport == lport) {
+ list_del_init(&event_node->list_entry);
+ if (event_node->event_asy_flag == UNF_EVENT_SYN) {
+ event_node->result = UNF_RETURN_ERROR;
+ complete(&event_node->event_comp);
+ }
+
+ spin_lock_irqsave(&event_mgr->port_event_lock, event_lock_flag);
+ event_mgr->free_event_count++;
+ list_add_tail(&event_node->list_entry, &event_mgr->list_free_event);
+ spin_unlock_irqrestore(&event_mgr->port_event_lock,
+ event_lock_flag);
+ }
+ }
+ }
+ spin_unlock_irqrestore(&chip_info->chip_event_list_lock, flag);
+
+	/* When the reference count drops to zero no port uses this chip's
+	 * event thread any more, so the thread resources can be released.
+	 */
+ if (atomic_dec_and_test(&chip_info->ref_cnt)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) destroy slot(%u) chip(0x%x) event thread succeed.",
+ lport->port_id, chip_info->slot_id, chip_info->chip_id);
+ chip_info->thread_exit = true;
+ wake_up_process(chip_info->thread);
+ kthread_stop(chip_info->thread);
+ chip_info->thread = NULL;
+
+ spin_lock_irqsave(&card_thread_mgr.global_card_list_lock, flag);
+ list_del_init(&chip_info->list_chip_thread_entry);
+ card_thread_mgr.card_num--;
+ spin_unlock_irqrestore(&card_thread_mgr.global_card_list_lock, flag);
+
+ vfree(chip_info);
+ }
+
+ lport->chip_info = NULL;
+}
+
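+/*
+ * Allocate chip management info, start a new per-chip event thread and add
+ * it to the global card thread list.
+ */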
+static u32 unf_creat_card_thread(struct unf_lport *lport)
+{
+ ulong flag = 0;
+ struct unf_chip_manage_info *chip_manage_info = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+	/* No existing thread was found: allocate chip management info for a new one */
+ chip_manage_info = (struct unf_chip_manage_info *)
+ vmalloc(sizeof(struct unf_chip_manage_info));
+ if (!chip_manage_info) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Port(0x%x) cannot allocate thread memory.", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(chip_manage_info, 0, sizeof(struct unf_chip_manage_info));
+
+ memcpy(&chip_manage_info->chip_info, &lport->low_level_func.chip_info,
+ sizeof(struct unf_chip_info));
+ chip_manage_info->slot_id = UNF_GET_BOARD_TYPE_AND_SLOT_ID_BY_PORTID(lport->port_id);
+ chip_manage_info->chip_id = lport->low_level_func.chip_id;
+ chip_manage_info->list_num = 0;
+ chip_manage_info->sfp_9545_fault = false;
+ chip_manage_info->sfp_power_fault = false;
+ atomic_set(&chip_manage_info->ref_cnt, 1);
+ atomic_set(&chip_manage_info->card_loop_test_flag, false);
+ spin_lock_init(&chip_manage_info->card_loop_back_state_lock);
+ INIT_LIST_HEAD(&chip_manage_info->list_head);
+ spin_lock_init(&chip_manage_info->chip_event_list_lock);
+
+ chip_manage_info->thread_exit = false;
+ chip_manage_info->thread = kthread_create(unf_card_event_process,
+ chip_manage_info, "%x_et", lport->port_id);
+
+ if (IS_ERR(chip_manage_info->thread) || !chip_manage_info->thread) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+			     "Port(0x%x) create event thread(0x%p) unsuccessful.",
+ lport->port_id, chip_manage_info->thread);
+
+ vfree(chip_manage_info);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ lport->chip_info = chip_manage_info;
+ wake_up_process(chip_manage_info->thread);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+		     "Port(0x%x) create slot(%u) chip(0x%x) event thread succeed.",
+ lport->port_id, chip_manage_info->slot_id,
+ chip_manage_info->chip_id);
+
+ spin_lock_irqsave(&card_thread_mgr.global_card_list_lock, flag);
+ list_add_tail(&chip_manage_info->list_chip_thread_entry, &card_thread_mgr.card_list_head);
+ card_thread_mgr.card_num++;
+ spin_unlock_irqrestore(&card_thread_mgr.global_card_list_lock, flag);
+
+ return RETURN_OK;
+}
+
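+/*
+ * Look up an existing event thread by chip ID and slot ID and take a
+ * reference on it; create a new one if none is found.
+ */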
+static u32 unf_find_card_thread(struct unf_lport *lport)
+{
+ ulong flag = 0;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_chip_manage_info *chip_info = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ spin_lock_irqsave(&card_thread_mgr.global_card_list_lock, flag);
+ list_for_each_safe(node, next_node, &card_thread_mgr.card_list_head) {
+ chip_info = list_entry(node, struct unf_chip_manage_info, list_chip_thread_entry);
+
+ if (chip_info->chip_id == lport->low_level_func.chip_id &&
+ chip_info->slot_id ==
+ UNF_GET_BOARD_TYPE_AND_SLOT_ID_BY_PORTID(lport->port_id)) {
+ atomic_inc(&chip_info->ref_cnt);
+ lport->chip_info = chip_info;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) find card(%u) chip(0x%x) event thread succeed.",
+ lport->port_id, chip_info->slot_id, chip_info->chip_id);
+
+ spin_unlock_irqrestore(&card_thread_mgr.global_card_list_lock, flag);
+
+ return RETURN_OK;
+ }
+ }
+ spin_unlock_irqrestore(&card_thread_mgr.global_card_list_lock, flag);
+
+	ret = unf_creat_card_thread(lport);
+	if (ret != RETURN_OK) {
+		FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+			     "LPort(0x%x) create event thread unsuccessful. Destroy LPort.",
+			     lport->port_id);
+
+		return UNF_RETURN_ERROR;
+	}
+
+	return RETURN_OK;
+}
+
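+/*
+ * Entry point for the low level driver: allocate and initialize an L_Port,
+ * bind it to a chip event thread, register it, build its WWNs, register the
+ * SCSI host and optionally start port work.
+ */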
+void *unf_lport_create_and_init(void *private_data, struct unf_low_level_functioon_op *low_level_op)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ if (!private_data) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Private Data is NULL");
+
+ return NULL;
+ }
+ if (!low_level_op) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LowLevel port(0x%p) function is NULL", private_data);
+
+ return NULL;
+ }
+
+ /* 1. vmalloc & Memset L_Port */
+ unf_lport = vmalloc(sizeof(struct unf_lport));
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Alloc LPort memory failed.");
+
+ return NULL;
+ }
+ memset(unf_lport, 0, sizeof(struct unf_lport));
+
+ /* 2. L_Port Init */
+ if (unf_lport_init(unf_lport, private_data, low_level_op) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort initialize unsuccessful.");
+
+ vfree(unf_lport);
+
+ return NULL;
+ }
+
+	/* 4. Get or create the chip event thread, keyed by chip ID and slot ID */
+ ret = unf_find_card_thread(unf_lport);
+ if (ret != RETURN_OK) {
+ (void)unf_lport_deinit(unf_lport);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort(0x%x) Find Chip thread unsuccessful. Destroy LPort.",
+ unf_lport->port_id);
+
+ vfree(unf_lport);
+ return NULL;
+ }
+
+	/* 5. Register the port in the global port management list */
+ unf_port_register(unf_lport);
+ /* update WWN */
+ if (unf_build_lport_wwn(unf_lport) != RETURN_OK) {
+ unf_port_unregister(unf_lport);
+ (void)unf_lport_deinit(unf_lport);
+ vfree(unf_lport);
+ return NULL;
+ }
+
+ // unf_init_link_lose_tmo(unf_lport);//TO DO
+
+ /* initialize Scsi Host */
+ if (unf_register_scsi_host(unf_lport) != RETURN_OK) {
+ unf_port_unregister(unf_lport);
+ (void)unf_lport_deinit(unf_lport);
+ vfree(unf_lport);
+ return NULL;
+ }
+ /* 7. Here, start work now */
+ if (global_lport_mgr.start_work) {
+ if (unf_port_start_work(unf_lport) != RETURN_OK) {
+ unf_port_unregister(unf_lport);
+
+ (void)unf_lport_deinit(unf_lport);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) start work failed", unf_lport->port_id);
+ vfree(unf_lport);
+ return NULL;
+ }
+ }
+
+ return unf_lport;
+}
+
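+/*
+ * Global-event callback that destroys an L_Port: delete its V_Ports,
+ * deinitialize it, move it off the global list (or onto the dirty list) and
+ * complete the waiter that triggered the free.
+ */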
+static int unf_lport_destroy(void *lport, void *arg_out)
+{
+ struct unf_lport *unf_lport = NULL;
+ ulong flags = 0;
+
+ if (!lport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR, "LPort is NULL.");
+
+ return UNF_RETURN_ERROR;
+ }
+
+ unf_lport = (struct unf_lport *)lport;
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "Destroy LPort(0x%p), ID(0x%x).", unf_lport, unf_lport->port_id);
+	/* NPIV: ensure that all V_Ports are deleted */
+ unf_destroy_all_vports(unf_lport);
+ unf_lport->destroy_step = UNF_LPORT_DESTROY_STEP_1_REPORT_PORT_OUT;
+
+ (void)unf_lport_deinit(lport);
+
+	/* Remove the port from the destroy list; its memory is released next
+	 * (or it is parked on the dirty list below).
+	 */
+ spin_lock_irqsave(&global_lport_mgr.global_lport_list_lock, flags);
+ list_del(&unf_lport->entry_lport);
+
+	/* If the port still holds dirty memory, park it on the dirty port
+	 * list instead of freeing it.
+	 */
+ if (unf_lport->dirty_flag)
+ list_add_tail(&unf_lport->entry_lport, &global_lport_mgr.dirty_list_head);
+ spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
+
+ if (unf_lport->lport_free_completion) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Complete LPort(0x%p), port ID(0x%x)'s Free Completion.",
+ unf_lport, unf_lport->port_id);
+ complete(unf_lport->lport_free_completion);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "LPort(0x%p), port ID(0x%x)'s Free Completion is NULL.",
+ unf_lport, unf_lport->port_id);
+ dump_stack();
+ }
+
+ return RETURN_OK;
+}
+
+static int unf_port_switch(struct unf_lport *lport, bool switch_flag)
+{
+ struct unf_lport *unf_lport = lport;
+ int ret = UNF_RETURN_ERROR;
+ bool flag = false;
+
+ FC_CHECK_RETURN_VALUE(unf_lport, UNF_RETURN_ERROR);
+
+ if (!unf_lport->low_level_func.port_mgr_op.ll_port_config_set) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_WARN,
+ "[warn]Port(0x%x)'s config(switch) function is NULL",
+ unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+	flag = switch_flag;
+
+ ret = (int)unf_lport->low_level_func.port_mgr_op.ll_port_config_set(unf_lport->fc_port,
+ UNF_PORT_CFG_SET_PORT_SWITCH, (void *)&flag);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
+ UNF_WARN, "[warn]Port(0x%x) switch %s failed",
+ unf_lport->port_id, switch_flag ? "On" : "Off");
+
+ return UNF_RETURN_ERROR;
+ }
+
+ unf_lport->switch_state = (bool)flag;
+
+ return RETURN_OK;
+}
+
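+/*
+ * Look up the L_Port, take a reference, allocate an event from its event
+ * manager and post it; events flagged synchronous wait for completion and
+ * return the handler's result.
+ */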
+static int unf_send_event(u32 port_id, u32 syn_flag, void *argc_in, void *argc_out,
+ int (*func)(void *argc_in, void *argc_out))
+{
+ struct unf_lport *lport = NULL;
+ struct unf_cm_event_report *event = NULL;
+ int ret = 0;
+
+ lport = unf_find_lport_by_port_id(port_id);
+ if (!lport) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
+ UNF_INFO, "Cannot find LPort(0x%x).", port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (unf_lport_ref_inc(lport) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "LPort(0x%x) is removing, no need process.",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ if (unlikely(!lport->event_mgr.unf_get_free_event_func ||
+ !lport->event_mgr.unf_post_event_func ||
+ !lport->event_mgr.unf_release_event)) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
+ UNF_MAJOR, "Event function is NULL.");
+
+ unf_lport_ref_dec_to_destroy(lport);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (lport->port_removing) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "LPort(0x%x) is removing, no need process.",
+ lport->port_id);
+
+ unf_lport_ref_dec_to_destroy(lport);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ event = lport->event_mgr.unf_get_free_event_func((void *)lport);
+ if (!event) {
+ unf_lport_ref_dec_to_destroy(lport);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ init_completion(&event->event_comp);
+ event->lport = lport;
+ event->event_asy_flag = syn_flag;
+ event->unf_event_task = func;
+ event->para_in = argc_in;
+ event->para_out = argc_out;
+ lport->event_mgr.unf_post_event_func(lport, event);
+
+ if (event->event_asy_flag) {
+		/* Wait for the event handler to finish; returning early could
+		 * leave the event list in an inconsistent state.
+		 */
+ wait_for_completion(&event->event_comp);
+ ret = (int)event->result;
+ lport->event_mgr.unf_release_event(lport, event);
+ } else {
+ ret = RETURN_OK;
+ }
+
+ unf_lport_ref_dec_to_destroy(lport);
+ return ret;
+}
+
+static int unf_reset_port(void *arg_in, void *arg_out)
+{
+ struct unf_reset_port_argin *input = (struct unf_reset_port_argin *)arg_in;
+ struct unf_lport *lport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ enum unf_port_config_state port_state = UNF_PORT_CONFIG_STATE_RESET;
+
+ FC_CHECK_RETURN_VALUE(input, UNF_RETURN_ERROR);
+
+ lport = unf_find_lport_by_port_id(input->port_id);
+ if (!lport) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
+ UNF_MAJOR, "Not find LPort(0x%x).",
+ input->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* reset port */
+ if (!lport->low_level_func.port_mgr_op.ll_port_config_set) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_MAJOR,
+ "Port(0x%x)'s corresponding function is NULL.", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ lport->act_topo = UNF_ACT_TOP_UNKNOWN;
+ lport->speed = UNF_PORT_SPEED_UNKNOWN;
+ lport->fabric_node_name = 0;
+
+ ret = lport->low_level_func.port_mgr_op.ll_port_config_set(lport->fc_port,
+ UNF_PORT_CFG_SET_PORT_STATE,
+ (void *)&port_state);
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
+ UNF_MAJOR, "Reset port(0x%x) unsuccessful.",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+int unf_cm_reset_port(u32 port_id)
+{
+ int ret = UNF_RETURN_ERROR;
+
+ ret = unf_send_event(port_id, UNF_EVENT_SYN, (void *)&port_id,
+ (void *)NULL, unf_reset_port);
+ return ret;
+}
+
+int unf_lport_reset_port(struct unf_lport *lport, u32 flag)
+{
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ return unf_send_event(lport->port_id, flag, (void *)&lport->port_id,
+ (void *)NULL, unf_reset_port);
+}
+
+static inline u32 unf_get_loop_alpa(struct unf_lport *lport, void *loop_alpa)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport->low_level_func.port_mgr_op.ll_port_config_get,
+ UNF_RETURN_ERROR);
+
+ ret = lport->low_level_func.port_mgr_op.ll_port_config_get(lport->fc_port,
+ UNF_PORT_CFG_GET_LOOP_ALPA, loop_alpa);
+
+ return ret;
+}
+
+static u32 unf_lport_enter_private_loop_login(struct unf_lport *lport)
+{
+ struct unf_lport *unf_lport = lport;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_READY); /* LPort: LINK_UP --> READY */
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ unf_lport_update_topo(unf_lport, UNF_ACT_TOP_PRIVATE_LOOP);
+
+ /* NOP: check L_Port state */
+ if (atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_MAJOR, "[info]Port(0x%x) is NOP, do nothing",
+ unf_lport->port_id);
+
+ return RETURN_OK;
+ }
+
+ /* INI: check L_Port mode */
+ if (UNF_PORT_MODE_INI != (unf_lport->options & UNF_PORT_MODE_INI)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) has no INI feature(0x%x), do nothing",
+ unf_lport->port_id, unf_lport->options);
+
+ return RETURN_OK;
+ }
+
+ if (unf_lport->disc.disc_temp.unf_disc_start) {
+ ret = unf_lport->disc.disc_temp.unf_disc_start(unf_lport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) with nportid(0x%x) start discovery failed",
+ unf_lport->port_id, unf_lport->nport_id);
+ }
+ }
+
+ return ret;
+}
+
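+/*
+ * Drive login according to the active topology: FLOGI for P2P/fabric and
+ * public loop (after reading the loop AL_PA), private-loop discovery
+ * otherwise.
+ */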
+u32 unf_lport_login(struct unf_lport *lport, enum unf_act_topo act_topo)
+{
+ u32 loop_alpa = 0;
+ u32 ret = RETURN_OK;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ /* 1. Update (set) L_Port topo which get from low level */
+ unf_lport_update_topo(lport, act_topo);
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+
+ /* 2. Link state check */
+ if (lport->link_up != UNF_PORT_LINK_UP) {
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) with link_state(0x%x) port_state(0x%x) when login",
+ lport->port_id, lport->link_up, lport->states);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* 3. Update L_Port state */
+ unf_lport_state_ma(lport, UNF_EVENT_LPORT_LINK_UP); /* LPort: INITIAL --> LINK UP */
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]LOGIN: Port(0x%x) start to login with topology(0x%x)",
+ lport->port_id, lport->act_topo);
+
+	/* 4. Start login */
+ if (act_topo == UNF_TOP_P2P_MASK ||
+ act_topo == UNF_ACT_TOP_P2P_FABRIC ||
+ act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ /* P2P or Fabric mode */
+ ret = unf_lport_enter_flogi(lport);
+ } else if (act_topo == UNF_ACT_TOP_PUBLIC_LOOP) {
+ /* Public loop */
+ (void)unf_get_loop_alpa(lport, &loop_alpa);
+
+		/* Before FLOGI only the low 8 bits (AL_PA) are valid; after
+		 * FLOGI ACC the switch assigns the complete address.
+		 */
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ lport->nport_id = loop_alpa;
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ ret = unf_lport_enter_flogi(lport);
+ } else if (act_topo == UNF_ACT_TOP_PRIVATE_LOOP) {
+ /* Private loop */
+ (void)unf_get_loop_alpa(lport, &loop_alpa);
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ lport->nport_id = loop_alpa;
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ ret = unf_lport_enter_private_loop_login(lport);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]LOGIN: Port(0x%x) login with unknown topology(0x%x)",
+ lport->port_id, lport->act_topo);
+ }
+
+ return ret;
+}
+
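+/*
+ * Link-up event handler: record the link speed, query the active topology
+ * from the low level and kick off login.
+ */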
+static u32 unf_port_linkup(struct unf_lport *lport, void *input)
+{
+ struct unf_lport *unf_lport = lport;
+ u32 ret = RETURN_OK;
+ enum unf_act_topo act_topo = UNF_ACT_TOP_UNKNOWN;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ /* If NOP state, stop */
+ if (atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[warn]Port(0x%x) is NOP and do nothing", unf_lport->port_id);
+
+ return RETURN_OK;
+ }
+
+ /* Update port state */
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ unf_lport->link_up = UNF_PORT_LINK_UP;
+ unf_lport->speed = *((u32 *)input);
+ unf_set_lport_state(lport, UNF_LPORT_ST_INITIAL); /* INITIAL state */
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ /* set hot pool wait state: so far, do not care */
+ unf_set_hot_pool_wait_state(unf_lport, true);
+
+ unf_lport->enhanced_features |= UNF_LPORT_ENHANCED_FEATURE_READ_SFP_ONCE;
+
+	/* Get the port's active topology (from the low level) */
+ if (!unf_lport->low_level_func.port_mgr_op.ll_port_config_get) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[warn]Port(0x%x) get topo function is NULL", unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ ret = unf_lport->low_level_func.port_mgr_op.ll_port_config_get(unf_lport->fc_port,
+ UNF_PORT_CFG_GET_TOPO_ACT, (void *)&act_topo);
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[warn]Port(0x%x) get topo from low level failed",
+ unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Start Login process */
+ ret = unf_lport_login(unf_lport, act_topo);
+
+ return ret;
+}
+
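+/*
+ * Link-down event handler: update the port state, then clean up discovery
+ * events, link-down I/O, R_Ports and V_Ports.
+ */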
+static u32 unf_port_linkdown(struct unf_lport *lport, void *input)
+{
+ ulong flag = 0;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ unf_lport = lport;
+
+	/* Prevent reporting link down repeatedly */
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ unf_lport->speed = UNF_PORT_SPEED_UNKNOWN;
+ unf_lport->act_topo = UNF_ACT_TOP_UNKNOWN;
+ if (unf_lport->link_up == UNF_PORT_LINK_DOWN) {
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ return RETURN_OK;
+ }
+ unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_LINK_DOWN);
+ unf_reset_lport_params(unf_lport);
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+ unf_set_hot_pool_wait_state(unf_lport, false);
+
+	/*
+	 * Clear I/O:
+	 * 1. INI side only needs to ABORT,
+	 * 2. TGT side needs a source clear with Wait_IO.
+	 *
+	 * For INI (busy/delay/delay_transfer/wait):
+	 * clean L_Port/V_Port link-down I/O by setting the ABORT tag only.
+	 */
+ unf_flush_disc_event(&unf_lport->disc, NULL);
+
+ unf_clean_linkdown_io(unf_lport, false);
+
+ /* for L_Port's R_Ports */
+ unf_clean_linkdown_rport(unf_lport);
+ /* for L_Port's all Vports */
+ unf_linkdown_all_vports(lport);
+ return RETURN_OK;
+}
+
+static u32 unf_port_abnormal_reset(struct unf_lport *lport, void *input)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ unf_lport = lport;
+
+ ret = (u32)unf_lport_reset_port(unf_lport, UNF_EVENT_ASYN);
+
+ return ret;
+}
+
+static u32 unf_port_reset_start(struct unf_lport *lport, void *input)
+{
+ u32 ret = RETURN_OK;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ unf_set_lport_state(lport, UNF_LPORT_ST_RESET);
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "Port(0x%x) begin to reset.", lport->port_id);
+
+ return ret;
+}
+
+static u32 unf_port_reset_end(struct unf_lport *lport, void *input)
+{
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "Port(0x%x) reset end.", lport->port_id);
+
+	/* Return success for pending task management commands to avoid
+	 * triggering recovery against an offline device.
+	 */
+ unf_wake_up_scsi_task_cmnd(lport);
+
+ spin_lock_irqsave(&lport->lport_state_lock, flag);
+ unf_set_lport_state(lport, UNF_LPORT_ST_INITIAL);
+ spin_unlock_irqrestore(&lport->lport_state_lock, flag);
+
+ return RETURN_OK;
+}
+
+static u32 unf_port_nop(struct unf_lport *lport, void *input)
+{
+ struct unf_lport *unf_lport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ unf_lport = lport;
+
+ atomic_set(&unf_lport->lport_no_operate_flag, UNF_LPORT_NOP);
+
+ spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
+ unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_LINK_DOWN);
+ unf_reset_lport_params(unf_lport);
+ spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
+
+	/* Set the tag so pending I/O is not queued to the wait_list when closing the SFP fails */
+ unf_set_hot_pool_wait_state(unf_lport, false);
+
+ unf_flush_disc_event(&unf_lport->disc, NULL);
+
+ /* L_Port/V_Port's I/O(s): Clean Link Down I/O: Set Abort Tag */
+ unf_clean_linkdown_io(unf_lport, false);
+
+ /* L_Port/V_Port's R_Port(s): report link down event to scsi & clear
+ * resource
+ */
+ unf_clean_linkdown_rport(unf_lport);
+ unf_linkdown_all_vports(unf_lport);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) report NOP event done", unf_lport->nport_id);
+
+ return RETURN_OK;
+}
+
+static u32 unf_port_begin_remove(struct unf_lport *lport, void *input)
+{
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ /* Cancel route timer delay work */
+ unf_destroy_lport_route(lport);
+
+ return RETURN_OK;
+}
+
+static u32 unf_get_pcie_link_state(struct unf_lport *lport)
+{
+ struct unf_lport *unf_lport = lport;
+ bool linkstate = true;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(unf_lport->low_level_func.port_mgr_op.ll_port_config_get,
+ UNF_RETURN_ERROR);
+
+ ret = unf_lport->low_level_func.port_mgr_op.ll_port_config_get(unf_lport->fc_port,
+ UNF_PORT_CFG_GET_PCIE_LINK_STATE, (void *)&linkstate);
+ if (ret != RETURN_OK || linkstate != true) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
+ UNF_KEVENT, "[err]Can't Get Pcie Link State");
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
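+/*
+ * Drop a reference on a root L_Port; when the count reaches zero the port is
+ * moved to the destroy list and an asynchronous destroy event is scheduled.
+ */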
+void unf_root_lport_ref_dec(struct unf_lport *lport)
+{
+ ulong flags = 0;
+ ulong lport_flags = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%p) port_id(0x%x) reference count is %d",
+ lport, lport->port_id, atomic_read(&lport->port_ref_cnt));
+
+ spin_lock_irqsave(&global_lport_mgr.global_lport_list_lock, flags);
+ spin_lock_irqsave(&lport->lport_state_lock, lport_flags);
+ if (atomic_dec_and_test(&lport->port_ref_cnt)) {
+ spin_unlock_irqrestore(&lport->lport_state_lock, lport_flags);
+
+ list_del(&lport->entry_lport);
+ global_lport_mgr.lport_sum--;
+
+		/* Put the L_Port on the destroy list for debugging */
+ list_add_tail(&lport->entry_lport, &global_lport_mgr.destroy_list_head);
+ spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
+
+ ret = unf_schedule_global_event((void *)lport, UNF_GLOBAL_EVENT_ASYN,
+ unf_lport_destroy);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_CRITICAL,
+				     "[warn]Schedule global event failed. remaining nodes(0x%x)",
+ global_event_queue.list_number);
+ }
+ } else {
+ spin_unlock_irqrestore(&lport->lport_state_lock, lport_flags);
+ spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
+ }
+}
+
+void unf_lport_ref_dec_to_destroy(struct unf_lport *lport)
+{
+ if (lport->root_lport != lport)
+ unf_vport_ref_dec(lport);
+ else
+ unf_root_lport_ref_dec(lport);
+}
+
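+/*
+ * Route poll work, rescheduled about once per second: checks the PCIe link
+ * and chip state and stops itself (dropping the port reference) on removal,
+ * repeated PCIe link down, chip error or user request.
+ */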
+void unf_lport_route_work(struct work_struct *work)
+{
+#define UNF_MAX_PCIE_LINK_DOWN_TIMES 3
+ struct unf_lport *unf_lport = NULL;
+ int ret = 0;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ unf_lport = container_of(work, struct unf_lport, route_timer_work.work);
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
+ UNF_KEVENT, "[err]LPort is NULL");
+
+ return;
+ }
+
+ if (unlikely(unf_lport->port_removing)) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_KEVENT,
+ "[warn]LPort(0x%x) route work is closing.", unf_lport->port_id);
+
+ unf_lport_ref_dec_to_destroy(unf_lport);
+
+ return;
+ }
+
+ if (unlikely(unf_get_pcie_link_state(unf_lport)))
+ unf_lport->pcie_link_down_cnt++;
+ else
+ unf_lport->pcie_link_down_cnt = 0;
+
+ if (unf_lport->pcie_link_down_cnt >= UNF_MAX_PCIE_LINK_DOWN_TIMES) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_KEVENT,
+ "[warn]LPort(0x%x) detected pcie linkdown, closing route work",
+ unf_lport->port_id);
+ unf_lport->pcie_link_down = true;
+ unf_free_lport_all_xchg(unf_lport);
+ unf_lport_ref_dec_to_destroy(unf_lport);
+ return;
+ }
+
+ if (unlikely(UNF_LPORT_CHIP_ERROR(unf_lport))) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_KEVENT,
+ "[warn]LPort(0x%x) reported chip error, closing route work. ",
+ unf_lport->port_id);
+
+ unf_lport_ref_dec_to_destroy(unf_lport);
+
+ return;
+ }
+
+ if (unf_lport->enhanced_features &
+ UNF_LPORT_ENHANCED_FEATURE_CLOSE_FW_ROUTE) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_KEVENT,
+ "[warn]User close LPort(0x%x) route work. ", unf_lport->port_id);
+
+ unf_lport_ref_dec_to_destroy(unf_lport);
+
+ return;
+ }
+
+ /* Scheduling 1 second */
+ ret = queue_delayed_work(unf_wq, &unf_lport->route_timer_work,
+ (ulong)msecs_to_jiffies(UNF_LPORT_POLL_TIMER));
+ if (ret == 0) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_KEVENT,
+ "[warn]LPort(0x%x) schedule work unsuccessful.", unf_lport->port_id);
+
+ unf_lport_ref_dec_to_destroy(unf_lport);
+ }
+}
+
+static int unf_cm_get_mac_adr(void *argc_in, void *argc_out)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_get_chip_info_argout *chip_info = NULL;
+
+ FC_CHECK_RETURN_VALUE(argc_in, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(argc_out, UNF_RETURN_ERROR);
+
+ unf_lport = (struct unf_lport *)argc_in;
+ chip_info = (struct unf_get_chip_info_argout *)argc_out;
+
+ if (!unf_lport) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
+ UNF_MAJOR, " LPort is null.");
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (!unf_lport->low_level_func.port_mgr_op.ll_port_config_get) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x)'s corresponding function is NULL.", unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (unf_lport->low_level_func.port_mgr_op.ll_port_config_get(unf_lport->fc_port,
+ UNF_PORT_CFG_GET_MAC_ADDR,
+ chip_info) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+			      "Port(0x%x) get MAC address failed.", unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
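+/*
+ * Fetch the chip WWPN/WWNN via a synchronous event, record the board type
+ * and derive the port's maximum speed from it.
+ */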
+int unf_build_sys_wwn(u32 port_id, u64 *sys_port_name, u64 *sys_node_name)
+{
+ struct unf_get_chip_info_argout wwn = {0};
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE((sys_port_name), UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE((sys_node_name), UNF_RETURN_ERROR);
+
+ unf_lport = unf_find_lport_by_port_id(port_id);
+ if (!unf_lport)
+ return UNF_RETURN_ERROR;
+
+ ret = (u32)unf_send_event(unf_lport->port_id, UNF_EVENT_SYN,
+ (void *)unf_lport, (void *)&wwn, unf_cm_get_mac_adr);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "send event(port get mac adr) fail.");
+ return UNF_RETURN_ERROR;
+ }
+
+ /* save card mode: UNF_FC_SERVER_BOARD_32_G(6):32G;
+ * UNF_FC_SERVER_BOARD_16_G(7):16G MODE
+ */
+ unf_lport->card_type = wwn.board_type;
+
+ /* update port max speed */
+ if (wwn.board_type == UNF_FC_SERVER_BOARD_32_G)
+ unf_lport->low_level_func.fc_ser_max_speed = UNF_PORT_SPEED_32_G;
+ else if (wwn.board_type == UNF_FC_SERVER_BOARD_16_G)
+ unf_lport->low_level_func.fc_ser_max_speed = UNF_PORT_SPEED_16_G;
+ else if (wwn.board_type == UNF_FC_SERVER_BOARD_8_G)
+ unf_lport->low_level_func.fc_ser_max_speed = UNF_PORT_SPEED_8_G;
+ else
+ unf_lport->low_level_func.fc_ser_max_speed = UNF_PORT_SPEED_32_G;
+
+ *sys_port_name = wwn.wwpn;
+ *sys_node_name = wwn.wwnn;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Port(0x%x) Port Name(0x%llx), Node Name(0x%llx.)",
+ port_id, *sys_port_name, *sys_node_name);
+
+ return RETURN_OK;
+}
+
+static u32 unf_update_port_wwn(struct unf_lport *lport,
+ struct unf_port_wwn *port_wwn)
+{
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(port_wwn, UNF_RETURN_ERROR);
+
+	/* Notify the low level to update the WWN */
+ if (!lport->low_level_func.port_mgr_op.ll_port_config_set) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x)'s corresponding function is NULL.",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (lport->low_level_func.port_mgr_op.ll_port_config_set(lport->fc_port,
+ UNF_PORT_CFG_UPDATE_WWN,
+ port_wwn) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) update WWN unsuccessful.",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) update WWN: previous(0x%llx, 0x%llx), now(0x%llx, 0x%llx).",
+ lport->port_id, lport->port_name, lport->node_name,
+ port_wwn->sys_port_wwn, port_wwn->sys_node_name);
+
+ lport->port_name = port_wwn->sys_port_wwn;
+ lport->node_name = port_wwn->sys_node_name;
+
+ return RETURN_OK;
+}
+
+static u32 unf_build_lport_wwn(struct unf_lport *lport)
+{
+ struct unf_port_wwn port_wwn = {0};
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ if (unf_build_sys_wwn(lport->port_id, &port_wwn.sys_port_wwn,
+ &port_wwn.sys_node_name) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "Port(0x%x) build WWN unsuccessful.", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) build WWN succeed", lport->port_id);
+
+ if (unf_update_port_wwn(lport, &port_wwn) != RETURN_OK)
+ return UNF_RETURN_ERROR;
+
+ return RETURN_OK;
+}
+
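+/*
+ * Return an R_Port index to the root L_Port's RPI bitmap; warn if the index
+ * is already free or out of range.
+ */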
+u32 unf_port_release_rport_index(struct unf_lport *lport, void *input)
+{
+ u32 rport_index = INVALID_VALUE32;
+ ulong flag = 0;
+ struct unf_rport_pool *rport_pool = NULL;
+ struct unf_lport *unf_lport = NULL;
+ spinlock_t *rport_pool_lock = NULL;
+
+	FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+	unf_lport = (struct unf_lport *)lport->root_lport;
+
+ if (input) {
+ rport_index = *(u32 *)input;
+ if (rport_index < lport->low_level_func.support_max_rport) {
+ rport_pool = &unf_lport->rport_pool;
+ rport_pool_lock = &rport_pool->rport_free_pool_lock;
+ spin_lock_irqsave(rport_pool_lock, flag);
+ if (test_bit((int)rport_index, rport_pool->rpi_bitmap)) {
+ clear_bit((int)rport_index, rport_pool->rpi_bitmap);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) try to release a free rport index(0x%x)",
+ lport->port_id, rport_index);
+ }
+ spin_unlock_irqrestore(rport_pool_lock, flag);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) try to release a not exist rport index(0x%x)",
+ lport->port_id, rport_index);
+ }
+ }
+
+ return RETURN_OK;
+}
+
+void *unf_lookup_lport_by_nportid(void *lport, u32 nport_id)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_vport_pool *vport_pool = NULL;
+ struct unf_lport *unf_vport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ unf_lport = (struct unf_lport *)lport;
+ unf_lport = unf_lport->root_lport;
+ vport_pool = unf_lport->vport_pool;
+
+ if (unf_lport->nport_id == nport_id)
+ return unf_lport;
+
+ if (unlikely(!vport_pool)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) vport pool is NULL", unf_lport->port_id);
+
+ return NULL;
+ }
+
+ spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
+ list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ if (unf_vport->nport_id == nport_id) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+ return unf_vport;
+ }
+ }
+
+ list_for_each_safe(node, next_node, &unf_lport->list_intergrad_vports) {
+ unf_vport = list_entry(node, struct unf_lport, entry_vport);
+ if (unf_vport->nport_id == nport_id) {
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+ return unf_vport;
+ }
+ }
+ spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "Port(0x%x) has no vport Nport ID(0x%x)",
+ unf_lport->port_id, nport_id);
+
+ return NULL;
+}
+
+int unf_get_link_lose_tmo(struct unf_lport *lport)
+{
+ u32 tmo_value = 0;
+
+ if (!lport)
+ return UNF_LOSE_TMO;
+
+ tmo_value = atomic_read(&lport->link_lose_tmo);
+
+ if (!tmo_value)
+ tmo_value = UNF_LOSE_TMO;
+
+ return (int)tmo_value;
+}
+
+u32 unf_register_scsi_host(struct unf_lport *lport)
+{
+ struct unf_host_param host_param = {0};
+
+ struct Scsi_Host **scsi_host = NULL;
+ struct unf_lport_cfg_item *lport_cfg_items = NULL;
+
+ FC_CHECK_RETURN_VALUE((lport), UNF_RETURN_ERROR);
+
+ /* Point to -->> L_port->Scsi_host */
+ scsi_host = &lport->host_info.host;
+
+ lport_cfg_items = &lport->low_level_func.lport_cfg_items;
+ host_param.can_queue = (int)lport_cfg_items->max_queue_depth;
+
+ /* Performance optimization */
+ host_param.cmnd_per_lun = UNF_MAX_CMND_PER_LUN;
+
+ host_param.sg_table_size = UNF_MAX_DMA_SEGS;
+ host_param.max_id = UNF_MAX_TARGET_NUMBER;
+ host_param.max_lun = UNF_DEFAULT_MAX_LUN;
+ host_param.max_channel = UNF_MAX_BUS_CHANNEL;
+ host_param.max_cmnd_len = UNF_MAX_SCSI_CMND_LEN; /* CDB-16 */
+ host_param.dma_boundary = UNF_DMA_BOUNDARY;
+ host_param.max_sectors = UNF_MAX_SECTORS;
+ host_param.port_id = lport->port_id;
+ host_param.lport = lport;
+ host_param.pdev = &lport->low_level_func.dev->dev;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Port(0x%x) allocate scsi host: can queue(%u), command performance LUN(%u), max lun(%u)",
+ lport->port_id, host_param.can_queue, host_param.cmnd_per_lun,
+ host_param.max_lun);
+
+ if (unf_alloc_scsi_host(scsi_host, &host_param) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) allocate scsi host failed", lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) allocate scsi host(0x%x) succeed",
+ lport->port_id, UNF_GET_SCSI_HOST_ID(*scsi_host));
+
+ return RETURN_OK;
+}
+
+void unf_unregister_scsi_host(struct unf_lport *lport)
+{
+ struct Scsi_Host *scsi_host = NULL;
+ u32 host_no = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+
+ scsi_host = lport->host_info.host;
+
+ if (scsi_host) {
+ host_no = UNF_GET_SCSI_HOST_ID(scsi_host);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[event]Port(0x%x) starting unregister scsi host(0x%x)",
+ lport->port_id, host_no);
+ unf_free_scsi_host(scsi_host);
+		/* Do not set scsi_host to NULL here, since it is not allocated by this module */
+ } else {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[warn]Port(0x%x) unregister scsi host, invalid scsi_host ",
+ lport->port_id);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[event]Port(0x%x) unregister scsi host(0x%x) succeed",
+ lport->port_id, host_no);
+
+ lport->destroy_step = UNF_LPORT_DESTROY_STEP_12_UNREG_SCSI_HOST;
+}
diff --git a/drivers/scsi/spfc/common/unf_portman.h b/drivers/scsi/spfc/common/unf_portman.h
new file mode 100644
index 000000000000..c05d31197bd7
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_portman.h
@@ -0,0 +1,96 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_PORT_MAN_H
+#define UNF_PORT_MAN_H
+
+#include "unf_type.h"
+#include "unf_lport.h"
+
+#define UNF_LPORT_POLL_TIMER ((u32)(1 * 1000))
+#define UNF_TX_CREDIT_REG_32_G 0x2289420
+#define UNF_RX_CREDIT_REG_32_G 0x228950c
+#define UNF_CREDIT_REG_16_G 0x2283418
+#define UNF_PORT_OFFSET_BASE 0x10000
+#define UNF_CREDIT_EMU_VALUE 0x20
+#define UNF_CREDIT_VALUE_32_G 0x8
+#define UNF_CREDIT_VALUE_16_G 0x8000000080008
+
+struct unf_nportid_map {
+ u32 sid;
+ u32 did;
+ void *rport[1024];
+ void *lport;
+};
+
+struct unf_global_card_thread {
+ struct list_head card_list_head;
+ spinlock_t global_card_list_lock;
+ u32 card_num;
+};
+
+/* Global L_Port manager: manages all L_Ports */
+struct unf_global_lport {
+ struct list_head lport_list_head;
+
+	/* Temporary list, used as a staging list during traversal */
+ struct list_head intergrad_head;
+
+	/* Destroy list, used during card removal */
+ struct list_head destroy_list_head;
+
+	/* Dirty list, for abnormal ports */
+ struct list_head dirty_list_head;
+ spinlock_t global_lport_list_lock;
+ u32 lport_sum;
+ u8 dft_mode;
+ bool start_work;
+};
+
+struct unf_port_action {
+ u32 action;
+ u32 (*unf_action)(struct unf_lport *lport, void *input);
+};
+
+struct unf_reset_port_argin {
+ u32 port_id;
+};
+
+extern struct unf_global_lport global_lport_mgr;
+extern struct unf_global_card_thread card_thread_mgr;
+extern struct workqueue_struct *unf_wq;
+
+struct unf_lport *unf_find_lport_by_port_id(u32 port_id);
+struct unf_lport *unf_find_lport_by_scsi_hostid(u32 scsi_host_id);
+void *
+unf_lport_create_and_init(void *private_data,
+ struct unf_low_level_functioon_op *low_level_op);
+u32 unf_fc_port_link_event(void *lport, u32 events, void *input);
+u32 unf_release_local_port(void *lport);
+void unf_lport_route_work(struct work_struct *work);
+void unf_lport_update_topo(struct unf_lport *lport,
+ enum unf_act_topo active_topo);
+void unf_lport_ref_dec(struct unf_lport *lport);
+u32 unf_lport_ref_inc(struct unf_lport *lport);
+void unf_lport_ref_dec_to_destroy(struct unf_lport *lport);
+void unf_port_mgmt_deinit(void);
+void unf_port_mgmt_init(void);
+void unf_show_dirty_port(bool show_only, u32 *dirty_port_num);
+void *unf_lookup_lport_by_nportid(void *lport, u32 nport_id);
+u32 unf_is_lport_valid(struct unf_lport *lport);
+int unf_lport_reset_port(struct unf_lport *lport, u32 flag);
+int unf_cm_ops_handle(u32 type, void **arg_in);
+u32 unf_register_scsi_host(struct unf_lport *lport);
+void unf_unregister_scsi_host(struct unf_lport *lport);
+void unf_destroy_scsi_id_table(struct unf_lport *lport);
+u32 unf_lport_login(struct unf_lport *lport, enum unf_act_topo act_topo);
+u32 unf_init_scsi_id_table(struct unf_lport *lport);
+void unf_set_lport_removing(struct unf_lport *lport);
+void unf_lport_release_lw_funop(struct unf_lport *lport);
+void unf_show_all_rport(struct unf_lport *lport);
+void unf_disc_state_ma(struct unf_lport *lport, enum unf_disc_event evnet);
+int unf_get_link_lose_tmo(struct unf_lport *lport);
+u32 unf_port_release_rport_index(struct unf_lport *lport, void *input);
+int unf_cm_reset_port(u32 port_id);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_rport.c b/drivers/scsi/spfc/common/unf_rport.c
new file mode 100644
index 000000000000..aa4967fc0ab6
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_rport.c
@@ -0,0 +1,2286 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_rport.h"
+#include "unf_log.h"
+#include "unf_exchg.h"
+#include "unf_ls.h"
+#include "unf_service.h"
+#include "unf_portman.h"
+
+/* rport state:ready --->>> link_down --->>> closing --->>> timeout --->>> delete */
+struct unf_rport_feature_pool *port_feature_pool;
+
+void unf_sesion_loss_timeout(struct work_struct *work)
+{
+ struct unf_wwpn_rport_info *wwpn_rport_info = NULL;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ wwpn_rport_info = container_of(work, struct unf_wwpn_rport_info, loss_tmo_work.work);
+ if (unlikely(!wwpn_rport_info)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]wwpn_rport_info is NULL");
+ return;
+ }
+
+ atomic_set(&wwpn_rport_info->scsi_state, UNF_SCSI_ST_DEAD);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[info]Port(0x%x) wwpn(0x%llx) set target(0x%x) scsi state to dead",
+ ((struct unf_lport *)(wwpn_rport_info->lport))->port_id,
+ wwpn_rport_info->wwpn, wwpn_rport_info->target_id);
+}
+
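+/*
+ * Map an R_Port WWPN to a scsi_id: reuse an existing entry for the same
+ * WWPN, otherwise take a free slot, otherwise recycle a dead entry.
+ */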
+u32 unf_alloc_scsi_id(struct unf_lport *lport, struct unf_rport *rport)
+{
+ struct unf_rport_scsi_id_image *rport_scsi_table = NULL;
+ struct unf_wwpn_rport_info *wwn_rport_info = NULL;
+ ulong flags = 0;
+ u32 index = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ spinlock_t *rport_scsi_tb_lock = NULL;
+
+ rport_scsi_table = &lport->rport_scsi_table;
+ rport_scsi_tb_lock = &rport_scsi_table->scsi_image_table_lock;
+ spin_lock_irqsave(rport_scsi_tb_lock, flags);
+
+	/* 1. First, check whether this WWPN already has an entry */
+ for (index = 0; index < rport_scsi_table->max_scsi_id; index++) {
+ wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[index];
+ if (rport->port_name == wwn_rport_info->wwpn) {
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+ UNF_DELAYED_WORK_SYNC(ret, (lport->port_id),
+ (&wwn_rport_info->loss_tmo_work),
+ "loss tmo Timer work");
+
+ /* Plug case: reuse again */
+ spin_lock_irqsave(rport_scsi_tb_lock, flags);
+ wwn_rport_info->rport = rport;
+ wwn_rport_info->las_ten_scsi_state =
+ atomic_read(&wwn_rport_info->scsi_state);
+ atomic_set(&wwn_rport_info->scsi_state, UNF_SCSI_ST_ONLINE);
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) find the same scsi_id(0x%x) by wwpn(0x%llx) RPort(%p) N_Port_ID(0x%x)",
+ lport->port_id, index, wwn_rport_info->wwpn, rport,
+ rport->nport_id);
+
+ atomic_inc(&lport->resume_scsi_id);
+ goto find;
+ }
+ }
+
+ /* 2. Alloc new SCSI ID */
+ for (index = 0; index < rport_scsi_table->max_scsi_id; index++) {
+ wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[index];
+ if (wwn_rport_info->wwpn == INVALID_WWPN) {
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+ UNF_DELAYED_WORK_SYNC(ret, (lport->port_id),
+ (&wwn_rport_info->loss_tmo_work),
+ "loss tmo Timer work");
+ /* Use the free space */
+ spin_lock_irqsave(rport_scsi_tb_lock, flags);
+ wwn_rport_info->rport = rport;
+ wwn_rport_info->wwpn = rport->port_name;
+ wwn_rport_info->las_ten_scsi_state =
+ atomic_read(&wwn_rport_info->scsi_state);
+ atomic_set(&wwn_rport_info->scsi_state, UNF_SCSI_ST_ONLINE);
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+				     "[info]Port(0x%x) alloc new scsi_id(0x%x) by wwpn(0x%llx) RPort(%p) N_Port_ID(0x%x)",
+ lport->port_id, index, wwn_rport_info->wwpn, rport,
+ rport->nport_id);
+
+ atomic_inc(&lport->alloc_scsi_id);
+ goto find;
+ }
+ }
+
+	/* 3. Recycle an entry whose device is already dead */
+ for (index = 0; index < rport_scsi_table->max_scsi_id; index++) {
+ wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[index];
+ if (atomic_read(&wwn_rport_info->scsi_state) == UNF_SCSI_ST_DEAD) {
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+ UNF_DELAYED_WORK_SYNC(ret, (lport->port_id),
+ (&wwn_rport_info->loss_tmo_work),
+ "loss tmo Timer work");
+
+ spin_lock_irqsave(rport_scsi_tb_lock, flags);
+ if (wwn_rport_info->dfx_counter) {
+ memset(wwn_rport_info->dfx_counter, 0,
+ sizeof(struct unf_wwpn_dfx_counter_info));
+ }
+ if (wwn_rport_info->lun_qos_level) {
+ memset(wwn_rport_info->lun_qos_level, 0,
+ sizeof(u8) * UNF_MAX_LUN_PER_TARGET);
+ }
+ wwn_rport_info->rport = rport;
+ wwn_rport_info->wwpn = rport->port_name;
+ wwn_rport_info->las_ten_scsi_state =
+ atomic_read(&wwn_rport_info->scsi_state);
+ atomic_set(&wwn_rport_info->scsi_state, UNF_SCSI_ST_ONLINE);
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[info]Port(0x%x) reuse a dead scsi_id(0x%x) by wwpn(0x%llx) RPort(%p) N_Port_ID(0x%x)",
+ lport->port_id, index, wwn_rport_info->wwpn, rport,
+ rport->nport_id);
+
+ atomic_inc(&lport->reuse_scsi_id);
+ goto find;
+ }
+ }
+
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+		     "[warn]Port(0x%x) has no free scsi_id left, max_value(0x%x)",
+ lport->port_id, index);
+
+ return INVALID_VALUE32;
+
+find:
+ if (!wwn_rport_info->dfx_counter) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Port(0x%x) allocate Rport(0x%x) DFX buffer",
+ lport->port_id, wwn_rport_info->rport->nport_id);
+ wwn_rport_info->dfx_counter = vmalloc(sizeof(struct unf_wwpn_dfx_counter_info));
+ if (!wwn_rport_info->dfx_counter) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) allocate DFX buffer fail",
+ lport->port_id);
+
+ return INVALID_VALUE32;
+ }
+
+ memset(wwn_rport_info->dfx_counter, 0, sizeof(struct unf_wwpn_dfx_counter_info));
+ }
+
+ return index;
+}
+
+u32 unf_get_scsi_id_by_wwpn(struct unf_lport *lport, u64 wwpn)
+{
+ struct unf_rport_scsi_id_image *rport_scsi_table = NULL;
+ struct unf_wwpn_rport_info *wwn_rport_info = NULL;
+ ulong flags = 0;
+ u32 index = 0;
+ spinlock_t *rport_scsi_tb_lock = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, INVALID_VALUE32);
+ rport_scsi_table = &lport->rport_scsi_table;
+ rport_scsi_tb_lock = &rport_scsi_table->scsi_image_table_lock;
+
+ if (wwpn == 0)
+ return INVALID_VALUE32;
+
+ spin_lock_irqsave(rport_scsi_tb_lock, flags);
+
+ for (index = 0; index < rport_scsi_table->max_scsi_id; index++) {
+ wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[index];
+ if (wwn_rport_info->wwpn == wwpn) {
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+ return index;
+ }
+ }
+
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+
+ return INVALID_VALUE32;
+}
+
+void unf_set_device_state(struct unf_lport *lport, u32 scsi_id, int scsi_state)
+{
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ struct unf_wwpn_rport_info *wwpn_rport_info = NULL;
+
+ if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x) RPort scsi_id(0x%x) is larger than 0x%x",
+ lport->port_id, scsi_id, UNF_MAX_SCSI_ID);
+ return;
+ }
+
+ scsi_image_table = &lport->rport_scsi_table;
+ wwpn_rport_info = &scsi_image_table->wwn_rport_info_table[scsi_id];
+ atomic_set(&wwpn_rport_info->scsi_state, scsi_state);
+}
+
+void unf_rport_linkdown(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /*
+ * 1. port_logout
+ * 2. rcvd_rscn_port_not_in_disc
+ * 3. each_rport_after_rscn
+ * 4. rcvd_gpnid_rjt
+ * 5. rport_after_logout(rport is fabric port)
+ */
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ /* 1. Update R_Port state: Link Down Event --->>> closing state */
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_LINK_DOWN);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+	/* 2. Enter the closing (then delete) process */
+ unf_rport_enter_closing(rport);
+}
+
+static struct unf_rport *unf_rport_is_changed(struct unf_lport *lport,
+ struct unf_rport *rport, u32 sid)
+{
+ if (rport) {
+ /* S_ID or D_ID has been changed */
+ if (rport->nport_id != sid || rport->local_nport_id != lport->nport_id) {
+ /* 1. Swap case: (SID or DID changed): Report link down
+ * & delete immediately
+ */
+ unf_rport_immediate_link_down(lport, rport);
+ return NULL;
+ }
+ }
+
+ return rport;
+}
+
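+/*
+ * Decide which R_Port to reuse when one is found by N_Port_ID and another by
+ * WWPN; mismatching or duplicate sessions are forced link down and deleted.
+ */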
+struct unf_rport *unf_rport_set_qualifier_key_reuse(struct unf_lport *lport,
+ struct unf_rport *rport_by_nport_id,
+ struct unf_rport *rport_by_wwpn,
+ u64 wwpn, u32 sid)
+{
+ /* Used for SPFC Chip */
+ struct unf_rport *rport = NULL;
+ struct unf_rport *rporta = NULL;
+ struct unf_rport *rportb = NULL;
+ bool wwpn_flag = false;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ /* About R_Port by N_Port_ID */
+ rporta = unf_rport_is_changed(lport, rport_by_nport_id, sid);
+
+ /* About R_Port by WWpn */
+ rportb = unf_rport_is_changed(lport, rport_by_wwpn, sid);
+
+ if (!rporta && !rportb) {
+ return NULL;
+ } else if (!rporta && rportb) {
+ /* 3. Plug case: reuse again */
+ rport = rportb;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) RPort(0x%p) WWPN(0x%llx) S_ID(0x%x) D_ID(0x%x) reused by wwpn",
+ lport->port_id, rport, rport->port_name,
+ rport->nport_id, rport->local_nport_id);
+
+ return rport;
+ } else if (rporta && !rportb) {
+ wwpn_flag = (rporta->port_name != wwpn && rporta->port_name != 0 &&
+ rporta->port_name != INVALID_VALUE64);
+ if (wwpn_flag) {
+ /* 4. WWPN changed: Report link down & delete
+ * immediately
+ */
+ unf_rport_immediate_link_down(lport, rporta);
+ return NULL;
+ }
+
+		/* Update WWPN */
+ rporta->port_name = wwpn;
+ rport = rporta;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) RPort(0x%p) WWPN(0x%llx) S_ID(0x%x) D_ID(0x%x) reused by N_Port_ID",
+ lport->port_id, rport, rport->port_name,
+ rport->nport_id, rport->local_nport_id);
+
+ return rport;
+ }
+
+ /* 5. Case for A == B && A != NULL && B != NULL */
+ if (rportb == rporta) {
+ rport = rporta;
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) find the same RPort(0x%p) WWPN(0x%llx) S_ID(0x%x) D_ID(0x%x)",
+ lport->port_id, rport, rport->port_name, rport->nport_id,
+ rport->local_nport_id);
+
+ return rport;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) find two duplicate login. RPort(A:0x%p, WWPN:0x%llx, S_ID:0x%x, D_ID:0x%x) RPort(B:0x%p, WWPN:0x%llx, S_ID:0x%x, D_ID:0x%x)",
+ lport->port_id, rporta, rporta->port_name, rporta->nport_id,
+ rporta->local_nport_id, rportb, rportb->port_name, rportb->nport_id,
+ rportb->local_nport_id);
+
+ /* 6. Case for A != B && A != NULL && B != NULL: Immediate
+ * Report && Deletion
+ */
+ unf_rport_immediate_link_down(lport, rporta);
+ unf_rport_immediate_link_down(lport, rportb);
+
+ return NULL;
+}
+
+struct unf_rport *unf_find_valid_rport(struct unf_lport *lport, u64 wwpn, u32 sid)
+{
+ struct unf_rport *rport = NULL;
+ struct unf_rport *rport_by_nport_id = NULL;
+ struct unf_rport *rport_by_wwpn = NULL;
+ ulong flags = 0;
+ spinlock_t *rport_state_lock = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(lport->unf_qualify_rport, NULL);
+
+ /* Get R_Port by WWN & N_Port_ID */
+ rport_by_nport_id = unf_get_rport_by_nport_id(lport, sid);
+ rport_by_wwpn = unf_get_rport_by_wwn(lport, wwpn);
+	/* R_Port check: by WWPN (take the lock only if the R_Port exists) */
+	if (rport_by_wwpn) {
+		rport_state_lock = &rport_by_wwpn->rport_state_lock;
+		spin_lock_irqsave(rport_state_lock, flags);
+ if (rport_by_wwpn->nport_id == UNF_FC_FID_FLOGI) {
+ spin_unlock_irqrestore(rport_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[err]Port(0x%x) RPort(0x%p) find by WWPN(0x%llx) is invalid",
+ lport->port_id, rport_by_wwpn, wwpn);
+
+ rport_by_wwpn = NULL;
+ } else {
+ spin_unlock_irqrestore(rport_state_lock, flags);
+ }
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) RPort(0x%p) find by N_Port_ID(0x%x) and RPort(0x%p) by WWPN(0x%llx)",
+ lport->port_id, lport->nport_id, rport_by_nport_id, sid, rport_by_wwpn, wwpn);
+
+ /* R_Port validity check: get by WWPN & N_Port_ID */
+ rport = lport->unf_qualify_rport(lport, rport_by_nport_id,
+ rport_by_wwpn, wwpn, sid);
+
+ return rport;
+}
+
+void unf_rport_delay_login(struct unf_rport *rport)
+{
+ FC_CHECK_RETURN_VOID(rport);
+
+ /* Do R_Port recovery: PLOGI or PRLI or LOGO */
+ unf_rport_error_recovery(rport);
+}
+
+void unf_rport_enter_logo(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /*
+ * 1. TMF/ABTS timeout recovery :Y
+ * 2. L_Port error recovery --->>> larger than retry_count :Y
+ * 3. R_Port error recovery --->>> larger than retry_count :Y
+ * 4. Check PLOGI parameter --->>> parameter is error :Y
+ * 5. PRLI handler --->>> R_Port state is error :Y
+ * 6. PDISC handler --->>> R_Port state is not PRLI_WAIT :Y
+ * 7. ADISC handler --->>> R_Port state is not PRLI_WAIT :Y
+ * 8. PLOGI wait timeout with R_PORT is INI mode :Y
+ * 9. RCVD GFFID_RJT --->>> R_Port state is INIT :Y
+ * 10. RCVD GPNID_ACC --->>> R_Port state is error :Y
+ * 11. Private Loop mode with LOGO case :Y
+ * 12. P2P mode with LOGO case :Y
+ * 13. Fabric mode with LOGO case :Y
+ * 14. RCVD PRLI_ACC with R_Port is INI :Y
+ * 15. TGT RCVD BLS_REQ with session is error :Y
+ */
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flags);
+
+ if (rport->rp_state == UNF_RPORT_ST_CLOSING ||
+ rport->rp_state == UNF_RPORT_ST_DELETE) {
+ /* 1. Already within Closing or Delete: Do nothing */
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ return;
+ } else if (rport->rp_state == UNF_RPORT_ST_LOGO) {
+ /* 2. Update R_Port state: Normal Enter Event --->>> closing
+ * state
+ */
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_NORMAL_ENTER);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ /* Send Logo if necessary */
+ if (unf_send_logo(lport, rport) != RETURN_OK)
+ unf_rport_enter_closing(rport);
+ } else {
+ /* 3. Update R_Port state: Link Down Event --->>> closing state
+ */
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_LINK_DOWN);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ unf_rport_enter_closing(rport);
+ }
+}
+
+u32 unf_free_scsi_id(struct unf_lport *lport, u32 scsi_id)
+{
+ ulong flags = 0;
+ struct unf_rport_scsi_id_image *rport_scsi_table = NULL;
+ struct unf_wwpn_rport_info *wwn_rport_info = NULL;
+ spinlock_t *rport_scsi_tb_lock = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ if (unlikely(lport->port_removing)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) is removing and do nothing",
+ lport->port_id, lport->nport_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x_0x%x) scsi_id(0x%x) is bigger than %d",
+ lport->port_id, lport->nport_id, scsi_id, UNF_MAX_SCSI_ID);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ rport_scsi_table = &lport->rport_scsi_table;
+ rport_scsi_tb_lock = &rport_scsi_table->scsi_image_table_lock;
+ if (rport_scsi_table->wwn_rport_info_table) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[warn]Port(0x%x_0x%x) RPort(0x%p) free scsi_id(0x%x) wwpn(0x%llx) target_id(0x%x) succeed",
+ lport->port_id, lport->nport_id,
+ rport_scsi_table->wwn_rport_info_table[scsi_id].rport,
+ scsi_id, rport_scsi_table->wwn_rport_info_table[scsi_id].wwpn,
+ rport_scsi_table->wwn_rport_info_table[scsi_id].target_id);
+
+ spin_lock_irqsave(rport_scsi_tb_lock, flags);
+ wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[scsi_id];
+ if (wwn_rport_info->rport) {
+ wwn_rport_info->rport->rport = NULL;
+ wwn_rport_info->rport = NULL;
+ }
+ wwn_rport_info->target_id = INVALID_VALUE32;
+ atomic_set(&wwn_rport_info->scsi_state, UNF_SCSI_ST_DEAD);
+
+ /* NOTE: remain WWPN/Port_Name unchanged(un-cleared) */
+ spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
+
+ return RETURN_OK;
+ }
+
+ return UNF_RETURN_ERROR;
+}
+
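+/* Report INI link down to the SCSI midlayer: mark the device OFFLINE and
+ * delete the scsi_transport_fc remote port bound to this R_Port.
+ */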
+static void unf_report_ini_linkwown_event(struct unf_lport *lport, struct unf_rport *rport)
+{
+ u32 scsi_id = 0;
+ struct fc_rport *unf_rport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ /*
+ * 1. Set local device (rport/rport_info_table) state to OFFLINE.
+ * NOTE: rport->scsi_id is only valid from rport link up to link down.
+ */
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ scsi_id = rport->scsi_id;
+ unf_set_device_state(lport, scsi_id, UNF_SCSI_ST_OFFLINE);
+
+ /* 2. delete scsi's rport */
+ unf_rport = (struct fc_rport *)rport->rport;
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ if (unf_rport) {
+ fc_remote_port_delete(unf_rport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[event]Port(0x%x_0x%x) delete RPort(0x%x) wwpn(0x%llx) scsi_id(0x%x) succeed",
+ lport->port_id, lport->nport_id, rport->nport_id,
+ rport->port_name, scsi_id);
+
+ atomic_inc(&lport->scsi_session_del_success);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[warn]Port(0x%x_0x%x) delete RPort(0x%x_0x%p) failed",
+ lport->port_id, lport->nport_id, rport->nport_id, rport);
+ }
+}
+
+static void unf_report_ini_linkup_event(struct unf_lport *lport, struct unf_rport *rport)
+{
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "[event]Port(0x%x) RPort(0x%x_0x%p) put INI link up work(%p) to work_queue",
+ lport->port_id, rport->nport_id, rport, &rport->start_work);
+
+ if (unlikely(!queue_work(lport->link_event_wq, &rport->start_work))) {
+ atomic_inc(&lport->add_start_work_failed);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[err]Port(0x%x) RPort(0x%x_0x%p) put INI link up to work_queue failed",
+ lport->port_id, rport->nport_id, rport);
+ }
+}
+
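+/* Update the R_Port's local INI link state from the FC-4 parameter word-3
+ * TGT/INI bits and report an INI link up/down event to SCSI only when the
+ * state actually changed.
+ */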
+void unf_update_lport_state_by_linkup_event(struct unf_lport *lport,
+ struct unf_rport *rport,
+ u32 rport_att)
+{
+ /* Report R_Port Link Up/Down Event */
+ ulong flag = 0;
+ enum unf_port_state lport_state = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+
+ /* 1. R_Port no longer has TGT mode */
+ if (((rport_att & UNF_FC4_FRAME_PARM_3_TGT) == 0) &&
+ rport->lport_ini_state == UNF_PORT_STATE_LINKUP) {
+ rport->last_lport_ini_state = rport->lport_ini_state;
+ rport->lport_ini_state = UNF_PORT_STATE_LINKDOWN;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x) does not have TGT attribute(0x%x) any more",
+ lport->port_id, rport->nport_id, rport_att);
+ }
+
+ /* 2. R_Port with TGT mode, L_Port with INI mode */
+ if ((rport_att & UNF_FC4_FRAME_PARM_3_TGT) &&
+ (lport->options & UNF_FC4_FRAME_PARM_3_INI)) {
+ rport->last_lport_ini_state = rport->lport_ini_state;
+ rport->lport_ini_state = UNF_PORT_STATE_LINKUP;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[warn]Port(0x%x) update INI state with last(0x%x) and now(0x%x)",
+ lport->port_id, rport->last_lport_ini_state,
+ rport->lport_ini_state);
+ }
+
+ /* 3. Report L_Port INI/TGT Down/Up event to SCSI */
+ if (rport->last_lport_ini_state == rport->lport_ini_state) {
+ if (rport->nport_id < UNF_FC_FID_DOM_MGR) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x %p) INI state(0x%x) has not been changed",
+ lport->port_id, rport->nport_id, rport,
+ rport->lport_ini_state);
+ }
+
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ return;
+ }
+
+ lport_state = rport->lport_ini_state;
+
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ switch (lport_state) {
+ case UNF_PORT_STATE_LINKDOWN:
+ unf_report_ini_linkwown_event(lport, rport);
+ break;
+ case UNF_PORT_STATE_LINKUP:
+ unf_report_ini_linkup_event(lport, rport);
+ break;
+ default:
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) with unknown link status(0x%x)",
+ lport->port_id, rport->lport_ini_state);
+ break;
+ }
+}
+
+static void unf_rport_callback(void *rport, void *lport, u32 result)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct unf_lport *unf_lport = NULL;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(rport);
+ FC_CHECK_RETURN_VOID(lport);
+ unf_rport = (struct unf_rport *)rport;
+ unf_lport = (struct unf_lport *)lport;
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport->last_lport_ini_state = unf_rport->lport_ini_state;
+ unf_rport->lport_ini_state = UNF_PORT_STATE_LINKDOWN;
+ unf_rport->last_lport_tgt_state = unf_rport->lport_tgt_state;
+ unf_rport->lport_tgt_state = UNF_PORT_STATE_LINKDOWN;
+
+ /* Report R_Port Link Down Event to scsi */
+ if (unf_rport->last_lport_ini_state == unf_rport->lport_ini_state) {
+ if (unf_rport->nport_id < UNF_FC_FID_DOM_MGR) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%x %p) INI state(0x%x) has not been changed",
+ unf_lport->port_id, unf_rport->nport_id,
+ unf_rport, unf_rport->lport_ini_state);
+ }
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ return;
+ }
+
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ unf_report_ini_linkwown_event(unf_lport, unf_rport);
+}
+
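+/* Delayed-work handler for R_Port login recovery: resend PLOGI or PRLI
+ * according to the current login state and retry recovery on failure.
+ */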
+static void unf_rport_recovery_timeout(struct work_struct *work)
+{
+ struct unf_lport *lport = NULL;
+ struct unf_rport *rport = NULL;
+ u32 ret = RETURN_OK;
+ ulong flag = 0;
+ enum unf_rport_login_state rp_state = UNF_RPORT_ST_INIT;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ rport = container_of(work, struct unf_rport, recovery_work.work);
+ if (unlikely(!rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]RPort is NULL");
+
+ return;
+ }
+
+ lport = rport->lport;
+ if (unlikely(!lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]RPort(0x%x) Port is NULL", rport->nport_id);
+
+ /* for timer */
+ unf_rport_ref_dec(rport);
+ return;
+ }
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rp_state = rport->rp_state;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) RPort(0x%x) state(0x%x) recovery timer timeout",
+ lport->port_id, lport->nport_id, rport->nport_id, rp_state);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ switch (rp_state) {
+ case UNF_RPORT_ST_PLOGI_WAIT:
+ if ((lport->act_topo == UNF_ACT_TOP_P2P_DIRECT &&
+ lport->port_name > rport->port_name) ||
+ lport->act_topo != UNF_ACT_TOP_P2P_DIRECT) {
+ /* P2P direct: only the master (larger Port_Name)
+ * sends PLOGI; other topologies always retry
+ */
+ ret = unf_send_plogi(rport->lport, rport);
+ }
+ break;
+ case UNF_RPORT_ST_PRLI_WAIT:
+ ret = unf_send_prli(rport->lport, rport, ELS_PRLI);
+ if (ret != RETURN_OK)
+ unf_rport_error_recovery(rport);
+ fallthrough;
+ default:
+ break;
+ }
+
+ if (ret != RETURN_OK)
+ unf_rport_error_recovery(rport);
+
+ /* company with timer */
+ unf_rport_ref_dec(rport);
+}
+
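+/* Cancel any pending recovery/open work, queue the closing work on the link
+ * event workqueue and, for normal (non well-known) N_Port_IDs, arm the
+ * per-scsi_id dev-loss timeout work.
+ */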
+void unf_schedule_closing_work(struct unf_lport *lport, struct unf_rport *rport)
+{
+ ulong flags = 0;
+ struct unf_rport_scsi_id_image *rport_scsi_table = NULL;
+ struct unf_wwpn_rport_info *wwn_rport_info = NULL;
+ u32 scsi_id = 0;
+ u32 ret = 0;
+ u32 delay = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ delay = (u32)(unf_get_link_lose_tmo(lport) * 1000);
+
+ rport_scsi_table = &lport->rport_scsi_table;
+ scsi_id = rport->scsi_id;
+ spin_lock_irqsave(&rport->rport_state_lock, flags);
+
+ /* 1. Cancel recovery_work */
+ if (cancel_delayed_work(&rport->recovery_work)) {
+ atomic_dec(&rport->rport_ref_cnt);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) RPort(0x%x_0x%p) cancel recovery work succeed",
+ lport->port_id, lport->nport_id, rport->nport_id, rport);
+ }
+
+ /* 2. Cancel Open_work */
+ if (cancel_delayed_work(&rport->open_work)) {
+ atomic_dec(&rport->rport_ref_cnt);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) RPort(0x%x_0x%p) cancel open work succeed",
+ lport->port_id, lport->nport_id, rport->nport_id, rport);
+ }
+
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ /* 3. Work in-queue (switch to thread context) */
+ if (!queue_work(lport->link_event_wq, &rport->closing_work)) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[warn]Port(0x%x) RPort(0x%x_0x%p) add link down to work queue failed",
+ lport->port_id, rport->nport_id, rport);
+
+ atomic_inc(&lport->add_closing_work_failed);
+ } else {
+ spin_lock_irqsave(&rport->rport_state_lock, flags);
+ (void)unf_rport_ref_inc(rport);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "[info]Port(0x%x) RPort(0x%x_0x%p) add link down to work(%p) queue succeed",
+ lport->port_id, rport->nport_id, rport,
+ &rport->closing_work);
+ }
+
+ if (rport->nport_id > UNF_FC_FID_DOM_MGR)
+ return;
+
+ if (scsi_id >= UNF_MAX_SCSI_ID) {
+ scsi_id = unf_get_scsi_id_by_wwpn(lport, rport->port_name);
+ if (scsi_id >= UNF_MAX_SCSI_ID) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
+ "[warn]Port(0x%x) RPort(0x%p) NPortId(0x%x) wwpn(0x%llx) option(0x%x) scsi_id(0x%x) is max than(0x%x)",
+ lport->port_id, rport, rport->nport_id,
+ rport->port_name, rport->options, scsi_id,
+ UNF_MAX_SCSI_ID);
+
+ return;
+ }
+ }
+
+ wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[scsi_id];
+ ret = queue_delayed_work(unf_wq, &wwn_rport_info->loss_tmo_work,
+ (ulong)msecs_to_jiffies((u32)delay));
+ if (!ret) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
+ "[warn]Port(0x%x) add RPort(0x%p) NPortId(0x%x) scsi_id(0x%x) wwpn(0x%llx) loss timeout work failed",
+ lport->port_id, rport, rport->nport_id, scsi_id,
+ rport->port_name);
+ }
+}
+
+static void unf_rport_closing_timeout(struct work_struct *work)
+{
+ /* closing --->>>(timeout)--->>> delete */
+ struct unf_rport *rport = NULL;
+ struct unf_lport *lport = NULL;
+ struct unf_disc *disc = NULL;
+ ulong rport_flag = 0;
+ ulong disc_flag = 0;
+ void (*unf_rport_callback)(void *, void *, u32) = NULL;
+ enum unf_rport_login_state old_state;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ /* Get R_Port & L_Port & Disc */
+ rport = container_of(work, struct unf_rport, closing_work);
+ if (unlikely(!rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]RPort is NULL");
+ return;
+ }
+
+ lport = rport->lport;
+ if (unlikely(!lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]RPort(0x%x_0x%p) Port is NULL",
+ rport->nport_id, rport);
+
+ /* Release directly (for timer) */
+ unf_rport_ref_dec(rport);
+ return;
+ }
+ disc = &lport->disc;
+
+ spin_lock_irqsave(&rport->rport_state_lock, rport_flag);
+
+ old_state = rport->rp_state;
+ /* 1. Update R_Port state: event_timeout --->>> state_delete */
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_CLS_TIMEOUT);
+
+ /* Check R_Port state */
+ if (rport->rp_state != UNF_RPORT_ST_DELETE) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x_0x%x) RPort(0x%x_0x%p) closing timeout with error state(0x%x->0x%x)",
+ lport->port_id, lport->nport_id, rport->nport_id,
+ rport, old_state, rport->rp_state);
+
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
+
+ /* Dec ref_cnt for timer */
+ unf_rport_ref_dec(rport);
+ return;
+ }
+
+ unf_rport_callback = rport->unf_rport_callback;
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
+
+ /* 2. Put R_Port to delete list */
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_flag);
+ list_del_init(&rport->entry_rport);
+ list_add_tail(&rport->entry_rport, &disc->list_delete_rports);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
+
+ /* 3. Report rport link down event to scsi */
+ if (unf_rport_callback) {
+ unf_rport_callback((void *)rport, (void *)rport->lport, RETURN_OK);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]RPort(0x%x) callback is NULL",
+ rport->nport_id);
+ }
+
+ /* 4. Remove/delete R_Port */
+ unf_rport_ref_dec(rport);
+ unf_rport_ref_dec(rport);
+}
+
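+/* Work handler for R_Port link up: allocate a scsi_id, register the remote
+ * port with scsi_transport_fc, set its FCP target role and record the
+ * mapping in the wwpn/rport image table.
+ */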
+static void unf_rport_linkup_to_scsi(struct work_struct *work)
+{
+ struct fc_rport_identifiers rport_ids;
+ struct fc_rport *rport = NULL;
+ ulong flags = 0;
+ struct unf_wwpn_rport_info *wwn_rport_info = NULL;
+ struct unf_rport_scsi_id_image *rport_scsi_table = NULL;
+ u32 scsi_id = 0;
+
+ struct unf_lport *lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ unf_rport = container_of(work, struct unf_rport, start_work);
+ if (unlikely(!unf_rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]RPort is NULL for work(%p)", work);
+ return;
+ }
+
+ lport = unf_rport->lport;
+ if (unlikely(!lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]RPort(0x%x_0x%p) Port is NULL",
+ unf_rport->nport_id, unf_rport);
+ return;
+ }
+
+ /* 1. Alloc R_Port SCSI_ID (image table) */
+ unf_rport->scsi_id = unf_alloc_scsi_id(lport, unf_rport);
+ if (unlikely(unf_rport->scsi_id == INVALID_VALUE32)) {
+ atomic_inc(&lport->scsi_session_add_failed);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[err]Port(0x%x_0x%x) RPort(0x%x_0x%p) wwpn(0x%llx) scsi_id(0x%x) is invalid",
+ lport->port_id, lport->nport_id,
+ unf_rport->nport_id, unf_rport,
+ unf_rport->port_name, unf_rport->scsi_id);
+
+ /* NOTE: return */
+ return;
+ }
+
+ /* 2. Add rport to scsi */
+ scsi_id = unf_rport->scsi_id;
+ rport_ids.node_name = unf_rport->node_name;
+ rport_ids.port_name = unf_rport->port_name;
+ rport_ids.port_id = unf_rport->nport_id;
+ rport_ids.roles = FC_RPORT_ROLE_UNKNOWN;
+ rport = fc_remote_port_add(lport->host_info.host, 0, &rport_ids);
+ if (unlikely(!rport)) {
+ atomic_inc(&lport->scsi_session_add_failed);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x_0x%x) RPort(0x%x_0x%p) wwpn(0x%llx) report link up to scsi failed",
+ lport->port_id, lport->nport_id, unf_rport->nport_id, unf_rport,
+ unf_rport->port_name);
+
+ unf_free_scsi_id(lport, scsi_id);
+ return;
+ }
+
+ /* 3. Change rport role */
+ *((u32 *)rport->dd_data) = scsi_id; /* save local SCSI_ID to scsi rport */
+ rport->supported_classes = FC_COS_CLASS3;
+ rport_ids.roles |= FC_PORT_ROLE_FCP_TARGET;
+ rport->dev_loss_tmo = (u32)unf_get_link_lose_tmo(lport); /* default 30s */
+ fc_remote_port_rolechg(rport, rport_ids.roles);
+
+ /* 4. Save scsi rport info to local R_Port */
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
+ unf_rport->rport = rport;
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
+
+ rport_scsi_table = &lport->rport_scsi_table;
+ spin_lock_irqsave(&rport_scsi_table->scsi_image_table_lock, flags);
+ wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[scsi_id];
+ wwn_rport_info->target_id = rport->scsi_target_id;
+ wwn_rport_info->rport = unf_rport;
+ atomic_set(&wwn_rport_info->scsi_state, UNF_SCSI_ST_ONLINE);
+ spin_unlock_irqrestore(&rport_scsi_table->scsi_image_table_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[event]Port(0x%x_0x%x) RPort(0x%x) wwpn(0x%llx) scsi_id(0x%x) link up to scsi succeed",
+ lport->port_id, lport->nport_id, unf_rport->nport_id,
+ unf_rport->port_name, scsi_id);
+
+ atomic_inc(&lport->scsi_session_add_success);
+}
+
+static void unf_rport_open_timeout(struct work_struct *work)
+{
+ struct unf_rport *rport = NULL;
+ struct unf_lport *lport = NULL;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ rport = container_of(work, struct unf_rport, open_work.work);
+ if (!rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]RPort is NULL");
+
+ return;
+ }
+
+ spin_lock_irqsave(&rport->rport_state_lock, flags);
+ lport = rport->lport;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) RPort(0x%x) open work timeout with state(0x%x)",
+ lport->port_id, lport->nport_id, rport->nport_id,
+ rport->rp_state);
+
+ /* NOTE: R_Port state check */
+ if (rport->rp_state != UNF_RPORT_ST_PRLI_WAIT) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ /* Dec ref_cnt for timer case */
+ unf_rport_ref_dec(rport);
+ return;
+ }
+
+ /* Report R_Port Link Down event */
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_LINK_DOWN);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ unf_rport_enter_closing(rport);
+ /* Dec ref_cnt for timer case */
+ unf_rport_ref_dec(rport);
+}
+
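+/* Allocate the first free RPI index from the pool bitmap (bounded by the
+ * configured max_login) and store it in rport->rport_index.
+ */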
+static u32 unf_alloc_index_for_rport(struct unf_lport *lport, struct unf_rport *rport)
+{
+ ulong rport_flag = 0;
+ ulong pool_flag = 0;
+ u32 alloc_indx = 0;
+ u32 max_rport = 0;
+ struct unf_rport_pool *rport_pool = NULL;
+ spinlock_t *rport_scsi_tb_lock = NULL;
+
+ rport_pool = &lport->rport_pool;
+ rport_scsi_tb_lock = &rport_pool->rport_free_pool_lock;
+ max_rport = lport->low_level_func.lport_cfg_items.max_login;
+
+ max_rport = max_rport > SPFC_DEFAULT_RPORT_INDEX ? SPFC_DEFAULT_RPORT_INDEX : max_rport;
+
+ spin_lock_irqsave(rport_scsi_tb_lock, pool_flag);
+ while (alloc_indx < max_rport) {
+ if (!test_bit((int)alloc_indx, rport_pool->rpi_bitmap)) {
+ /* Case for SPFC */
+ if (unlikely(atomic_read(&lport->lport_no_operate_flag) == UNF_LPORT_NOP)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) is within NOP", lport->port_id);
+
+ spin_unlock_irqrestore(rport_scsi_tb_lock, pool_flag);
+ return UNF_RETURN_ERROR;
+ }
+
+ spin_lock_irqsave(&rport->rport_state_lock, rport_flag);
+ rport->rport_index = alloc_indx;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) RPort(0x%x) alloc index(0x%x) succeed",
+ lport->port_id, rport->nport_id, alloc_indx);
+
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
+
+ /* Set (index) bit */
+ set_bit((int)alloc_indx, rport_pool->rpi_bitmap);
+
+ /* Break here */
+ break;
+ }
+ alloc_indx++;
+ }
+ spin_unlock_irqrestore(rport_scsi_tb_lock, pool_flag);
+
+ if (max_rport == alloc_indx)
+ return UNF_RETURN_ERROR;
+ return RETURN_OK;
+}
+
+static void unf_check_rport_pool_status(struct unf_lport *lport)
+{
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport_pool *rport_pool = NULL;
+ ulong flags = 0;
+ u32 max_rport = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ rport_pool = &unf_lport->rport_pool;
+
+ spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flags);
+ max_rport = unf_lport->low_level_func.lport_cfg_items.max_login;
+ if (rport_pool->rport_pool_completion &&
+ rport_pool->rport_pool_count == max_rport) {
+ complete(rport_pool->rport_pool_completion);
+ }
+
+ spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flags);
+}
+
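+/* Derive the R_Port's base SQ number from its RPI index: sessions are spread
+ * round-robin, each owning UNF_SQ_NUM_PER_SESSION queues starting at sqn_base.
+ */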
+static void unf_init_rport_sq_num(struct unf_rport *rport, struct unf_lport *lport)
+{
+ u32 session_order;
+ u32 ssq_average_session_num;
+
+ ssq_average_session_num = (lport->max_ssq_num - 1) / UNF_SQ_NUM_PER_SESSION;
+ session_order = (rport->rport_index) % ssq_average_session_num;
+ rport->sqn_base = (session_order * UNF_SQ_NUM_PER_SESSION);
+}
+
+void unf_init_rport_params(struct unf_rport *rport, struct unf_lport *lport)
+{
+ struct unf_rport *unf_rport = rport;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(unf_rport);
+ FC_CHECK_RETURN_VOID(lport);
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_set_rport_state(unf_rport, UNF_RPORT_ST_INIT);
+ unf_rport->unf_rport_callback = unf_rport_callback;
+ unf_rport->lport = lport;
+ unf_rport->fcp_conf_needed = false;
+ unf_rport->tape_support_needed = false;
+ unf_rport->max_retries = UNF_MAX_RETRY_COUNT;
+ unf_rport->logo_retries = 0;
+ unf_rport->retries = 0;
+ unf_rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
+ unf_rport->last_lport_ini_state = UNF_PORT_STATE_LINKDOWN;
+ unf_rport->lport_ini_state = UNF_PORT_STATE_LINKDOWN;
+ unf_rport->last_lport_tgt_state = UNF_PORT_STATE_LINKDOWN;
+ unf_rport->lport_tgt_state = UNF_PORT_STATE_LINKDOWN;
+ unf_rport->node_name = 0;
+ unf_rport->port_name = INVALID_WWPN;
+ unf_rport->disc_done = 0;
+ unf_rport->scsi_id = INVALID_VALUE32;
+ unf_rport->data_thread = NULL;
+ sema_init(&unf_rport->task_sema, 0);
+ atomic_set(&unf_rport->rport_ref_cnt, 0);
+ atomic_set(&unf_rport->pending_io_cnt, 0);
+ unf_rport->rport_alloc_jifs = jiffies;
+
+ unf_rport->ed_tov = UNF_DEFAULT_EDTOV + 500;
+ unf_rport->ra_tov = UNF_DEFAULT_RATOV;
+
+ INIT_WORK(&unf_rport->closing_work, unf_rport_closing_timeout);
+ INIT_WORK(&unf_rport->start_work, unf_rport_linkup_to_scsi);
+ INIT_DELAYED_WORK(&unf_rport->recovery_work, unf_rport_recovery_timeout);
+ INIT_DELAYED_WORK(&unf_rport->open_work, unf_rport_open_timeout);
+
+ atomic_inc(&unf_rport->rport_ref_cnt);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+}
+
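+/* Pass the R_Port parameters (RPI index, N_Port_ID, SQ base, QoS/CS_CTRL
+ * priority) down to the low-level driver so it can allocate the parent
+ * context/session resources.
+ */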
+static u32 unf_alloc_ll_rport_resource(struct unf_lport *lport,
+ struct unf_rport *rport, u32 nport_id)
+{
+ u32 ret = RETURN_OK;
+ struct unf_port_info rport_info = {0};
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_qos_info *qos_info = NULL;
+ struct unf_lport *unf_lport = NULL;
+ ulong flag = 0;
+
+ unf_lport = lport->root_lport;
+
+ if (unf_lport->low_level_func.service_op.unf_alloc_rport_res) {
+ spin_lock_irqsave(&lport->qos_mgr_lock, flag);
+ rport_info.qos_level = lport->qos_level;
+ list_for_each_safe(node, next_node, &lport->list_qos_head) {
+ qos_info = (struct unf_qos_info *)list_entry(node, struct unf_qos_info,
+ entry_qos_info);
+
+ if (qos_info && qos_info->nport_id == nport_id) {
+ rport_info.qos_level = qos_info->qos_level;
+ break;
+ }
+ }
+
+ spin_unlock_irqrestore(&lport->qos_mgr_lock, flag);
+
+ unf_init_rport_sq_num(rport, unf_lport);
+
+ rport->qos_level = rport_info.qos_level;
+ rport_info.nport_id = nport_id;
+ rport_info.rport_index = rport->rport_index;
+ rport_info.local_nport_id = lport->nport_id;
+ rport_info.port_name = 0;
+ rport_info.cs_ctrl = UNF_CSCTRL_INVALID;
+ rport_info.sqn_base = rport->sqn_base;
+
+ if (unf_lport->priority == UNF_PRIORITY_ENABLE) {
+ if (rport_info.qos_level == UNF_QOS_LEVEL_DEFAULT)
+ rport_info.cs_ctrl = UNF_CSCTRL_LOW;
+ else if (rport_info.qos_level == UNF_QOS_LEVEL_MIDDLE)
+ rport_info.cs_ctrl = UNF_CSCTRL_MIDDLE;
+ else if (rport_info.qos_level == UNF_QOS_LEVEL_HIGH)
+ rport_info.cs_ctrl = UNF_CSCTRL_HIGH;
+ }
+
+ ret = unf_lport->low_level_func.service_op.unf_alloc_rport_res(unf_lport->fc_port,
+ &rport_info);
+ } else {
+ ret = RETURN_OK;
+ }
+
+ return ret;
+}
+
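+/* Insert a newly allocated R_Port into the disc busy list. If an R_Port with
+ * the same N_Port_ID already exists, the new one is returned to the free pool
+ * and the existing one is reused; otherwise low-level session resources are
+ * allocated and the R_Port parameters are initialised.
+ */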
+static void *unf_add_rport_to_busy_list(struct unf_lport *lport,
+ struct unf_rport *new_rport,
+ u32 nport_id)
+{
+ struct unf_rport_pool *rport_pool = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_rport *unf_new_rport = new_rport;
+ struct unf_rport *old_rport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+ spinlock_t *rport_free_lock = NULL;
+ spinlock_t *rport_busy_lock = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(new_rport, NULL);
+
+ unf_lport = lport->root_lport;
+ disc = &lport->disc;
+ FC_CHECK_RETURN_VALUE(unf_lport, NULL);
+ rport_pool = &unf_lport->rport_pool;
+ rport_free_lock = &rport_pool->rport_free_pool_lock;
+ rport_busy_lock = &disc->rport_busy_pool_lock;
+
+ spin_lock_irqsave(rport_busy_lock, flag);
+ list_for_each_safe(node, next_node, &disc->list_busy_rports) {
+ /* According to N_Port_ID */
+ old_rport = list_entry(node, struct unf_rport, entry_rport);
+ if (old_rport->nport_id == nport_id)
+ break;
+ old_rport = NULL;
+ }
+
+ if (old_rport) {
+ spin_unlock_irqrestore(rport_busy_lock, flag);
+
+ /* Use old R_Port & Add new R_Port back to R_Port Pool */
+ spin_lock_irqsave(rport_free_lock, flag);
+ clear_bit((int)unf_new_rport->rport_index, rport_pool->rpi_bitmap);
+ list_add_tail(&unf_new_rport->entry_rport, &rport_pool->list_rports_pool);
+ rport_pool->rport_pool_count++;
+ spin_unlock_irqrestore(rport_free_lock, flag);
+
+ unf_check_rport_pool_status(unf_lport);
+ return (void *)old_rport;
+ }
+ spin_unlock_irqrestore(rport_busy_lock, flag);
+ if (nport_id != UNF_FC_FID_FLOGI) {
+ if (unf_alloc_ll_rport_resource(lport, unf_new_rport, nport_id) != RETURN_OK) {
+ /* Add new R_Port back to R_Port Pool */
+ spin_lock_irqsave(rport_free_lock, flag);
+ clear_bit((int)unf_new_rport->rport_index, rport_pool->rpi_bitmap);
+ list_add_tail(&unf_new_rport->entry_rport, &rport_pool->list_rports_pool);
+ rport_pool->rport_pool_count++;
+ spin_unlock_irqrestore(rport_free_lock, flag);
+ unf_check_rport_pool_status(unf_lport);
+
+ return NULL;
+ }
+ }
+
+ spin_lock_irqsave(rport_busy_lock, flag);
+ /* Add new R_Port to busy list */
+ list_add_tail(&unf_new_rport->entry_rport, &disc->list_busy_rports);
+ unf_new_rport->nport_id = nport_id;
+ unf_new_rport->local_nport_id = lport->nport_id;
+ spin_unlock_irqrestore(rport_busy_lock, flag);
+ unf_init_rport_params(unf_new_rport, lport);
+
+ return (void *)unf_new_rport;
+}
+
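+/* Allocate an R_Port: DISC type ports come from the disc_rport pool, all
+ * others from the free R_Port pool together with an RPI index and an entry
+ * in the busy list.
+ */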
+void *unf_rport_get_free_and_init(void *lport, u32 port_type, u32 nport_id)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport_pool *rport_pool = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_disc *vport_disc = NULL;
+ struct unf_rport *rport = NULL;
+ struct list_head *list_head = NULL;
+ ulong flag = 0;
+ struct unf_disc_rport *disc_rport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ unf_lport = ((struct unf_lport *)lport)->root_lport;
+ FC_CHECK_RETURN_VALUE(unf_lport, NULL);
+
+ /* Check L_Port state: NOP */
+ if (unlikely(atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP))
+ return NULL;
+
+ rport_pool = &unf_lport->rport_pool;
+ disc = &unf_lport->disc;
+
+ /* 1. UNF_PORT_TYPE_DISC: Get from disc_rport_pool */
+ if (port_type == UNF_PORT_TYPE_DISC) {
+ vport_disc = &((struct unf_lport *)lport)->disc;
+ /* NOTE: list_disc_rports_pool used with list_disc_rports_busy */
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ if (!list_empty(&disc->disc_rport_mgr.list_disc_rports_pool)) {
+ /* Get & delete from Disc R_Port Pool & Add it to Busy list */
+ list_head = UNF_OS_LIST_NEXT(&disc->disc_rport_mgr.list_disc_rports_pool);
+ list_del_init(list_head);
+ disc_rport = list_entry(list_head, struct unf_disc_rport, entry_rport);
+ disc_rport->nport_id = nport_id;
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ /* Add to list_disc_rports_busy */
+ spin_lock_irqsave(&vport_disc->rport_busy_pool_lock, flag);
+ list_add_tail(list_head, &vport_disc->disc_rport_mgr.list_disc_rports_busy);
+ spin_unlock_irqrestore(&vport_disc->rport_busy_pool_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "Port(0x%x_0x%x) add nportid:0x%x to rportbusy list",
+ unf_lport->port_id, unf_lport->nport_id,
+ disc_rport->nport_id);
+ } else {
+ disc_rport = NULL;
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+ }
+
+ /* NOTE: return */
+ return disc_rport;
+ }
+
+ /* 2. UNF_PORT_TYPE_FC (rport_pool): Get from list_rports_pool */
+ spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flag);
+ if (!list_empty(&rport_pool->list_rports_pool)) {
+ /* Get & delete from R_Port free Pool */
+ list_head = UNF_OS_LIST_NEXT(&rport_pool->list_rports_pool);
+ list_del_init(list_head);
+ rport_pool->rport_pool_count--;
+ rport = list_entry(list_head, struct unf_rport, entry_rport);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) RPort pool is empty",
+ unf_lport->port_id, unf_lport->nport_id);
+
+ spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
+
+ return NULL;
+ }
+ spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
+
+ /* 3. Alloc (& set bit) R_Port index */
+ if (unf_alloc_index_for_rport(unf_lport, rport) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) allocate index for new RPort failed",
+ unf_lport->nport_id);
+
+ /* Alloc failed: Add R_Port back to R_Port Pool */
+ spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flag);
+ list_add_tail(&rport->entry_rport, &rport_pool->list_rports_pool);
+ rport_pool->rport_pool_count++;
+ spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
+ unf_check_rport_pool_status(unf_lport);
+ return NULL;
+ }
+
+ /* 4. Add R_Port to busy list */
+ rport = unf_add_rport_to_busy_list(lport, rport, nport_id);
+
+ return (void *)rport;
+}
+
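+/* Ask the low-level driver to release the R_Port session (parent context). */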
+u32 unf_release_rport_res(struct unf_lport *lport, struct unf_rport *rport)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_port_info rport_info;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ memset(&rport_info, 0, sizeof(struct unf_port_info));
+
+ rport_info.rport_index = rport->rport_index;
+ rport_info.nport_id = rport->nport_id;
+ rport_info.port_name = rport->port_name;
+ rport_info.sqn_base = rport->sqn_base;
+
+ /* 2. release R_Port(parent context/Session) resource */
+ if (!lport->low_level_func.service_op.unf_release_rport_res) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) release rport resource function can't be NULL",
+ lport->port_id);
+
+ return ret;
+ }
+
+ ret = lport->low_level_func.service_op.unf_release_rport_res(lport->fc_port, &rport_info);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) rport_index(0x%x, %p) send release session CMND failed",
+ lport->port_id, rport_info.rport_index, rport);
+ }
+
+ return ret;
+}
+
+static void unf_reset_rport_attribute(struct unf_rport *rport)
+{
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(rport);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ rport->unf_rport_callback = NULL;
+ rport->lport = NULL;
+ rport->node_name = INVALID_VALUE64;
+ rport->port_name = INVALID_WWPN;
+ rport->nport_id = INVALID_VALUE32;
+ rport->local_nport_id = INVALID_VALUE32;
+ rport->max_frame_size = UNF_MAX_FRAME_SIZE;
+ rport->ed_tov = UNF_DEFAULT_EDTOV;
+ rport->ra_tov = UNF_DEFAULT_RATOV;
+ rport->rport_index = INVALID_VALUE32;
+ rport->scsi_id = INVALID_VALUE32;
+ rport->rport_alloc_jifs = INVALID_VALUE64;
+
+ /* ini or tgt */
+ rport->options = 0;
+
+ /* fcp conf */
+ rport->fcp_conf_needed = false;
+
+ /* special req retry times */
+ rport->retries = 0;
+ rport->logo_retries = 0;
+
+ /* special req retry times */
+ rport->max_retries = UNF_MAX_RETRY_COUNT;
+
+ /* for target mode */
+ rport->session = NULL;
+ rport->last_lport_ini_state = UNF_PORT_STATE_LINKDOWN;
+ rport->lport_ini_state = UNF_PORT_STATE_LINKDOWN;
+ rport->rp_state = UNF_RPORT_ST_INIT;
+ rport->last_lport_tgt_state = UNF_PORT_STATE_LINKDOWN;
+ rport->lport_tgt_state = UNF_PORT_STATE_LINKDOWN;
+ rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
+ rport->disc_done = 0;
+ rport->sqn_base = 0;
+
+ /* for scsi */
+ rport->data_thread = NULL;
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+}
+
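+/* Final release of an R_Port: abort its I/O and SFS exchanges, free the
+ * low-level session resources and return the R_Port to the free pool.
+ */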
+u32 unf_rport_remove(void *rport)
+{
+ struct unf_lport *lport = NULL;
+ struct unf_rport *unf_rport = NULL;
+ struct unf_rport_pool *rport_pool = NULL;
+ ulong flag = 0;
+ u32 rport_index = 0;
+ u32 nport_id = 0;
+
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ unf_rport = (struct unf_rport *)rport;
+ lport = unf_rport->lport;
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ rport_pool = &((struct unf_lport *)lport->root_lport)->rport_pool;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Remove RPort(0x%p) with remote_nport_id(0x%x) local_nport_id(0x%x)",
+ unf_rport, unf_rport->nport_id, unf_rport->local_nport_id);
+
+ /* 1. Terminate open exchange before rport remove: set ABORT tag */
+ unf_cm_xchg_mgr_abort_io_by_id(lport, unf_rport, unf_rport->nport_id, lport->nport_id, 0);
+
+ /* 2. Abort sfp exchange before rport remove */
+ unf_cm_xchg_mgr_abort_sfs_by_id(lport, unf_rport, unf_rport->nport_id, lport->nport_id);
+
+ /* 3. Release R_Port resource: session reset/delete */
+ if (likely(unf_rport->nport_id != UNF_FC_FID_FLOGI))
+ (void)unf_release_rport_res(lport, unf_rport);
+
+ nport_id = unf_rport->nport_id;
+
+ /* 4.1 Delete R_Port from disc destroy/delete list */
+ spin_lock_irqsave(&lport->disc.rport_busy_pool_lock, flag);
+ list_del_init(&unf_rport->entry_rport);
+ spin_unlock_irqrestore(&lport->disc.rport_busy_pool_lock, flag);
+
+ rport_index = unf_rport->rport_index;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) release RPort(0x%x_%p) with index(0x%x)",
+ lport->port_id, unf_rport->nport_id, unf_rport,
+ unf_rport->rport_index);
+
+ unf_reset_rport_attribute(unf_rport);
+
+ /* 4.2 Add rport to --->>> rport_pool (free pool) & clear bitmap */
+ spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flag);
+ if (unlikely(nport_id == UNF_FC_FID_FLOGI)) {
+ if (test_bit((int)rport_index, rport_pool->rpi_bitmap))
+ clear_bit((int)rport_index, rport_pool->rpi_bitmap);
+ }
+
+ list_add_tail(&unf_rport->entry_rport, &rport_pool->list_rports_pool);
+ rport_pool->rport_pool_count++;
+ spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
+
+ unf_check_rport_pool_status((struct unf_lport *)lport->root_lport);
+ up(&unf_rport->task_sema);
+
+ return RETURN_OK;
+}
+
+u32 unf_rport_ref_inc(struct unf_rport *rport)
+{
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ if (atomic_read(&rport->rport_ref_cnt) <= 0) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Rport(0x%x) reference count is wrong %d",
+ rport->nport_id,
+ atomic_read(&rport->rport_ref_cnt));
+ return UNF_RETURN_ERROR;
+ }
+
+ atomic_inc(&rport->rport_ref_cnt);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Rport(0x%x) reference count is %d", rport->nport_id,
+ atomic_read(&rport->rport_ref_cnt));
+
+ return RETURN_OK;
+}
+
+void unf_rport_ref_dec(struct unf_rport *rport)
+{
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(rport);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Rport(0x%x) reference count is %d", rport->nport_id,
+ atomic_read(&rport->rport_ref_cnt));
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ if (atomic_dec_and_test(&rport->rport_ref_cnt)) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ (void)unf_rport_remove(rport);
+ } else {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ }
+}
+
+void unf_set_rport_state(struct unf_rport *rport,
+ enum unf_rport_login_state states)
+{
+ FC_CHECK_RETURN_VOID(rport);
+
+ if (rport->rp_state != states) {
+ /* Reset R_Port retry count */
+ rport->retries = 0;
+ }
+
+ rport->rp_state = states;
+}
+
+static enum unf_rport_login_state
+unf_rport_stat_init(enum unf_rport_login_state old_state,
+ enum unf_rport_event event)
+{
+ enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
+
+ switch (event) {
+ case UNF_EVENT_RPORT_LOGO:
+ next_state = UNF_RPORT_ST_LOGO;
+ break;
+
+ case UNF_EVENT_RPORT_ENTER_PLOGI:
+ next_state = UNF_RPORT_ST_PLOGI_WAIT;
+ break;
+
+ case UNF_EVENT_RPORT_LINK_DOWN:
+ next_state = UNF_RPORT_ST_CLOSING;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_rport_login_state unf_rport_stat_plogi_wait(enum unf_rport_login_state old_state,
+ enum unf_rport_event event)
+{
+ enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
+
+ switch (event) {
+ case UNF_EVENT_RPORT_ENTER_PRLI:
+ next_state = UNF_RPORT_ST_PRLI_WAIT;
+ break;
+
+ case UNF_EVENT_RPORT_LINK_DOWN:
+ next_state = UNF_RPORT_ST_CLOSING;
+ break;
+
+ case UNF_EVENT_RPORT_LOGO:
+ next_state = UNF_RPORT_ST_LOGO;
+ break;
+
+ case UNF_EVENT_RPORT_RECOVERY:
+ next_state = UNF_RPORT_ST_READY;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_rport_login_state unf_rport_stat_prli_wait(enum unf_rport_login_state old_state,
+ enum unf_rport_event event)
+{
+ enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
+
+ switch (event) {
+ case UNF_EVENT_RPORT_READY:
+ next_state = UNF_RPORT_ST_READY;
+ break;
+
+ case UNF_EVENT_RPORT_LOGO:
+ next_state = UNF_RPORT_ST_LOGO;
+ break;
+
+ case UNF_EVENT_RPORT_LINK_DOWN:
+ next_state = UNF_RPORT_ST_CLOSING;
+ break;
+
+ case UNF_EVENT_RPORT_RECOVERY:
+ next_state = UNF_RPORT_ST_READY;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_rport_login_state unf_rport_stat_ready(enum unf_rport_login_state old_state,
+ enum unf_rport_event event)
+{
+ enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
+
+ switch (event) {
+ case UNF_EVENT_RPORT_LOGO:
+ next_state = UNF_RPORT_ST_LOGO;
+ break;
+
+ case UNF_EVENT_RPORT_LINK_DOWN:
+ next_state = UNF_RPORT_ST_CLOSING;
+ break;
+
+ case UNF_EVENT_RPORT_ENTER_PLOGI:
+ next_state = UNF_RPORT_ST_PLOGI_WAIT;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_rport_login_state unf_rport_stat_closing(enum unf_rport_login_state old_state,
+ enum unf_rport_event event)
+{
+ enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
+
+ switch (event) {
+ case UNF_EVENT_RPORT_CLS_TIMEOUT:
+ next_state = UNF_RPORT_ST_DELETE;
+ break;
+
+ case UNF_EVENT_RPORT_RELOGIN:
+ next_state = UNF_RPORT_ST_INIT;
+ break;
+
+ case UNF_EVENT_RPORT_RECOVERY:
+ next_state = UNF_RPORT_ST_READY;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
+static enum unf_rport_login_state unf_rport_stat_logo(enum unf_rport_login_state old_state,
+ enum unf_rport_event event)
+{
+ enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
+
+ switch (event) {
+ case UNF_EVENT_RPORT_NORMAL_ENTER:
+ next_state = UNF_RPORT_ST_CLOSING;
+ break;
+
+ case UNF_EVENT_RPORT_RECOVERY:
+ next_state = UNF_RPORT_ST_READY;
+ break;
+
+ default:
+ next_state = old_state;
+ break;
+ }
+
+ return next_state;
+}
+
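+/* R_Port login state machine: dispatch the event to the per-state transition
+ * function and apply the resulting state (unknown/DELETE states fall back to
+ * INIT).
+ */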
+void unf_rport_state_ma(struct unf_rport *rport, enum unf_rport_event event)
+{
+ enum unf_rport_login_state old_state = UNF_RPORT_ST_INIT;
+ enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
+
+ FC_CHECK_RETURN_VOID(rport);
+
+ old_state = rport->rp_state;
+
+ switch (rport->rp_state) {
+ case UNF_RPORT_ST_INIT:
+ next_state = unf_rport_stat_init(old_state, event);
+ break;
+ case UNF_RPORT_ST_PLOGI_WAIT:
+ next_state = unf_rport_stat_plogi_wait(old_state, event);
+ break;
+ case UNF_RPORT_ST_PRLI_WAIT:
+ next_state = unf_rport_stat_prli_wait(old_state, event);
+ break;
+ case UNF_RPORT_ST_LOGO:
+ next_state = unf_rport_stat_logo(old_state, event);
+ break;
+ case UNF_RPORT_ST_CLOSING:
+ next_state = unf_rport_stat_closing(old_state, event);
+ break;
+ case UNF_RPORT_ST_READY:
+ next_state = unf_rport_stat_ready(old_state, event);
+ break;
+ case UNF_RPORT_ST_DELETE:
+ default:
+ next_state = UNF_RPORT_ST_INIT;
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_MAJOR, "[info]RPort(0x%x) hold state(0x%x)",
+ rport->nport_id, rport->rp_state);
+ break;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MINOR,
+ "[info]RPort(0x%x) with oldstate(0x%x) event(0x%x) nextstate(0x%x)",
+ rport->nport_id, old_state, event, next_state);
+
+ unf_set_rport_state(rport, next_state);
+}
+
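+/* Link down clean-up: move every busy R_Port (except those already closing)
+ * to the destroy list and schedule its closing work.
+ */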
+void unf_clean_linkdown_rport(struct unf_lport *lport)
+{
+ /* for L_Port's R_Port(s) */
+ struct unf_disc *disc = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct unf_rport *rport = NULL;
+ struct unf_lport *unf_lport = NULL;
+ ulong disc_lock_flag = 0;
+ ulong rport_lock_flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ disc = &lport->disc;
+
+ /* for each busy R_Port */
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_lock_flag);
+ list_for_each_safe(node, next_node, &disc->list_busy_rports) {
+ rport = list_entry(node, struct unf_rport, entry_rport);
+
+ /* 1. Prevent repeated processing: already Closing */
+ spin_lock_irqsave(&rport->rport_state_lock, rport_lock_flag);
+ if (rport->rp_state == UNF_RPORT_ST_CLOSING) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_lock_flag);
+ continue;
+ }
+
+ /* 2. Increase ref_cnt to protect R_Port */
+ if (unf_rport_ref_inc(rport) != RETURN_OK) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_lock_flag);
+ continue;
+ }
+
+ /* 3. Update R_Port state: Link Down Event --->>> closing state
+ */
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_LINK_DOWN);
+
+ /* 4. Put R_Port from busy to destroy list */
+ list_del_init(&rport->entry_rport);
+ list_add_tail(&rport->entry_rport, &disc->list_destroy_rports);
+
+ unf_lport = rport->lport;
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_lock_flag);
+
+ /* 5. Schedule Closing work (Enqueuing workqueue) */
+ unf_schedule_closing_work(unf_lport, rport);
+
+ /* 6. decrease R_Port ref_cnt (company with 2) */
+ unf_rport_ref_dec(rport);
+ }
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_lock_flag);
+}
+
+void unf_rport_enter_closing(struct unf_rport *rport)
+{
+ /*
+ * Called by:
+ * 1. the RSCN handler
+ * 2. the LOGOUT handler
+ *
+ * Triggered from:
+ * 1. R_Port link down
+ * 2. R_Port entering LOGO
+ */
+ ulong rport_lock_flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_lport *lport = NULL;
+ struct unf_disc *disc = NULL;
+
+ FC_CHECK_RETURN_VOID(rport);
+
+ /* 1. Increase ref_cnt to protect R_Port */
+ spin_lock_irqsave(&rport->rport_state_lock, rport_lock_flag);
+ ret = unf_rport_ref_inc(rport);
+ if (ret != RETURN_OK) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_lock_flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]RPort(0x%x_0x%p) is removing and no need process",
+ rport->nport_id, rport);
+
+ return;
+ }
+
+ /* NOTE: R_Port state has been set(with closing) */
+
+ lport = rport->lport;
+ spin_unlock_irqrestore(&rport->rport_state_lock, rport_lock_flag);
+
+ /* 2. Put R_Port from busy to destroy list */
+ disc = &lport->disc;
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, rport_lock_flag);
+ list_del_init(&rport->entry_rport);
+ list_add_tail(&rport->entry_rport, &disc->list_destroy_rports);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, rport_lock_flag);
+
+ /* 3. Schedule Closing work (Enqueuing workqueue) */
+ unf_schedule_closing_work(lport, rport);
+
+ /* 4. dec R_Port ref_cnt */
+ unf_rport_ref_dec(rport);
+}
+
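+/* R_Port error recovery: re-queue the recovery work (ED_TOV/4 delay) while
+ * retries remain, otherwise enter LOGO and tear the R_Port down.
+ */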
+void unf_rport_error_recovery(struct unf_rport *rport)
+{
+ ulong delay = 0;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(rport);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+
+ ret = unf_rport_ref_inc(rport);
+ if (ret != RETURN_OK) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]RPort(0x%x_0x%p) is removing and no need process",
+ rport->nport_id, rport);
+ return;
+ }
+
+ /* Check R_Port state */
+ if (rport->rp_state == UNF_RPORT_ST_CLOSING ||
+ rport->rp_state == UNF_RPORT_ST_DELETE) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]RPort(0x%x_0x%p) offline and no need process",
+ rport->nport_id, rport);
+
+ unf_rport_ref_dec(rport);
+ return;
+ }
+
+ /* Check repeatability with recovery work */
+ if (delayed_work_pending(&rport->recovery_work)) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]RPort(0x%x_0x%p) recovery work is running and no need process",
+ rport->nport_id, rport);
+
+ unf_rport_ref_dec(rport);
+ return;
+ }
+
+ /* NOTE: Re-login or Logout directly (recovery work) */
+ if (rport->retries < rport->max_retries) {
+ rport->retries++;
+ delay = UNF_DEFAULT_EDTOV / 4;
+
+ if (queue_delayed_work(unf_wq, &rport->recovery_work,
+ (ulong)msecs_to_jiffies((u32)delay))) {
+ /* Inc ref_cnt: corresponding to this work timer */
+ (void)unf_rport_ref_inc(rport);
+ }
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]RPort(0x%x_0x%p) state(0x%x) retry login failed",
+ rport->nport_id, rport, rport->rp_state);
+
+ /* Update R_Port state: LOGO event --->>> ST_LOGO */
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_LOGO);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ unf_rport_enter_logo(rport->lport, rport);
+ }
+
+ unf_rport_ref_dec(rport);
+}
+
+static u32 unf_rport_reuse_only(struct unf_rport *rport)
+{
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ ret = unf_rport_ref_inc(rport);
+ if (ret != RETURN_OK) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ /* R_Port with delete state */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]RPort(0x%x_0x%p) is removing and no need process",
+ rport->nport_id, rport);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* R_Port State check: delete */
+ if (rport->rp_state == UNF_RPORT_ST_DELETE ||
+ rport->rp_state == UNF_RPORT_ST_CLOSING) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]RPort(0x%x_0x%p) state(0x%x) is delete or closing no need process",
+ rport->nport_id, rport, rport->rp_state);
+
+ ret = UNF_RETURN_ERROR;
+ }
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ unf_rport_ref_dec(rport);
+
+ return ret;
+}
+
+static u32 unf_rport_reuse_recover(struct unf_rport *rport)
+{
+ ulong flags = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flags);
+ ret = unf_rport_ref_inc(rport);
+ if (ret != RETURN_OK) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ /* R_Port with delete state */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]RPort(0x%x_0x%p) is removing and no need process",
+ rport->nport_id, rport);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* R_Port state check: delete */
+ if (rport->rp_state == UNF_RPORT_ST_DELETE ||
+ rport->rp_state == UNF_RPORT_ST_CLOSING) {
+ ret = UNF_RETURN_ERROR;
+ }
+
+ /* Update R_Port state: recovery --->>> ready */
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_RECOVERY);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ unf_rport_ref_dec(rport);
+
+ return ret;
+}
+
+static u32 unf_rport_reuse_init(struct unf_rport *rport)
+{
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flag);
+ ret = unf_rport_ref_inc(rport);
+ if (ret != RETURN_OK) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ /* R_Port with delete state */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]RPort(0x%x_0x%p) is removing and no need process",
+ rport->nport_id, rport);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]RPort(0x%x)'s state is 0x%x with use_init flag",
+ rport->nport_id, rport->rp_state);
+
+ /* R_Port State check: delete */
+ if (rport->rp_state == UNF_RPORT_ST_DELETE ||
+ rport->rp_state == UNF_RPORT_ST_CLOSING) {
+ ret = UNF_RETURN_ERROR;
+ } else {
+ /* Update R_Port state: re-enter Init state */
+ unf_set_rport_state(rport, UNF_RPORT_ST_INIT);
+ }
+ spin_unlock_irqrestore(&rport->rport_state_lock, flag);
+
+ unf_rport_ref_dec(rport);
+
+ return ret;
+}
+
+struct unf_rport *unf_get_rport_by_nport_id(struct unf_lport *lport,
+ u32 nport_id)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_rport *rport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+ struct unf_rport *find_rport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ unf_lport = (struct unf_lport *)lport;
+ disc = &unf_lport->disc;
+
+ /* for each r_port from rport_busy_list: compare N_Port_ID */
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ list_for_each_safe(node, next_node, &disc->list_busy_rports) {
+ rport = list_entry(node, struct unf_rport, entry_rport);
+ if (rport && rport->nport_id == nport_id) {
+ find_rport = rport;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ return find_rport;
+}
+
+struct unf_rport *unf_get_rport_by_wwn(struct unf_lport *lport, u64 wwpn)
+{
+ struct unf_lport *unf_lport = NULL;
+ struct unf_disc *disc = NULL;
+ struct unf_rport *rport = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flag = 0;
+ struct unf_rport *find_rport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ unf_lport = (struct unf_lport *)lport;
+ disc = &unf_lport->disc;
+
+ /* for each r_port from busy_list: compare wwpn(port name) */
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
+ list_for_each_safe(node, next_node, &disc->list_busy_rports) {
+ rport = list_entry(node, struct unf_rport, entry_rport);
+ if (rport && rport->port_name == wwpn) {
+ find_rport = rport;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
+
+ return find_rport;
+}
+
+struct unf_rport *unf_get_safe_rport(struct unf_lport *lport,
+ struct unf_rport *rport,
+ enum unf_rport_reuse_flag reuse_flag,
+ u32 nport_id)
+{
+ /*
+ * Reuse policy for a newly added or re-plugged R_Port, per caller:
+ *
+ * retry_flogi --->>> reuse_only
+ * name_server_register --->>> reuse_only
+ * SNS_plogi --->>> reuse_only
+ * enter_flogi --->>> reuse_only
+ * logout --->>> reuse_only
+ * flogi_handler --->>> reuse_only
+ * plogi_handler --->>> reuse_only
+ * adisc_handler --->>> reuse_recovery
+ * logout_handler --->>> reuse_init
+ * prlo_handler --->>> reuse_init
+ * login_with_loop --->>> reuse_only
+ * gffid_callback --->>> reuse_only
+ * delay_plogi --->>> reuse_only
+ * gffid_rjt --->>> reuse_only
+ * gffid_rsp_unknown --->>> reuse_only
+ * gpnid_acc --->>> reuse_init
+ * fdisc_callback --->>> reuse_only
+ * flogi_acc --->>> reuse_only
+ * plogi_acc --->>> reuse_only
+ * logo_callback --->>> reuse_init
+ * rffid_callback --->>> reuse_only
+ */
+#define UNF_AVOID_LINK_FLASH_TIME 3000
+
+ struct unf_rport *unf_rport = rport;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ /* 1. Alloc New R_Port or Update R_Port Property */
+ if (!unf_rport) {
+ /* If NULL, get/Alloc new node (R_Port from R_Port pool)
+ * directly
+ */
+ unf_rport = unf_rport_get_free_and_init(lport, UNF_PORT_TYPE_FC, nport_id);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_INFO,
+ "[info]Port(0x%x) get exist RPort(0x%x) with state(0x%x) and reuse_flag(0x%x)",
+ lport->port_id, unf_rport->nport_id,
+ unf_rport->rp_state, reuse_flag);
+
+ switch (reuse_flag) {
+ case UNF_RPORT_REUSE_ONLY:
+ ret = unf_rport_reuse_only(unf_rport);
+ if (ret != RETURN_OK) {
+ /* R_Port within delete list: need get new */
+ unf_rport = unf_rport_get_free_and_init(lport, UNF_PORT_TYPE_FC,
+ nport_id);
+ }
+ break;
+
+ case UNF_RPORT_REUSE_INIT:
+ ret = unf_rport_reuse_init(unf_rport);
+ if (ret != RETURN_OK) {
+ /* R_Port within delete list: need get new */
+ unf_rport = unf_rport_get_free_and_init(lport, UNF_PORT_TYPE_FC,
+ nport_id);
+ }
+ break;
+
+ case UNF_RPORT_REUSE_RECOVER:
+ ret = unf_rport_reuse_recover(unf_rport);
+ if (ret != RETURN_OK) {
+ /* R_Port within delete list,
+ * NOTE: do nothing
+ */
+ unf_rport = NULL;
+ }
+ break;
+
+ default:
+ break;
+ }
+ } /* end else: R_Port != NULL */
+
+ return unf_rport;
+}
+
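+/* Look up the cached INI/TGT feature for a WWPN (busy list first, then free
+ * list); an unknown WWPN gets a record recycled from the free list and
+ * reports UNF_PORT_MODE_UNKNOWN.
+ */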
+u32 unf_get_port_feature(u64 wwpn)
+{
+ struct unf_rport_feature_recard *port_fea = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ ulong flags = 0;
+ struct list_head *list_busy_head = NULL;
+ struct list_head *list_free_head = NULL;
+ spinlock_t *feature_lock = NULL;
+
+ list_busy_head = &port_feature_pool->list_busy_head;
+ list_free_head = &port_feature_pool->list_free_head;
+ feature_lock = &port_feature_pool->port_fea_pool_lock;
+ spin_lock_irqsave(feature_lock, flags);
+ list_for_each_safe(node, next_node, list_busy_head) {
+ port_fea = list_entry(node, struct unf_rport_feature_recard, entry_feature);
+
+ if (port_fea->wwpn == wwpn) {
+ list_del(&port_fea->entry_feature);
+ list_add(&port_fea->entry_feature, list_busy_head);
+ spin_unlock_irqrestore(feature_lock, flags);
+
+ return port_fea->port_feature;
+ }
+ }
+
+ list_for_each_safe(node, next_node, list_free_head) {
+ port_fea = list_entry(node, struct unf_rport_feature_recard, entry_feature);
+
+ if (port_fea->wwpn == wwpn) {
+ list_del(&port_fea->entry_feature);
+ list_add(&port_fea->entry_feature, list_busy_head);
+ spin_unlock_irqrestore(feature_lock, flags);
+
+ return port_fea->port_feature;
+ }
+ }
+
+ /* can't find wwpn */
+ if (list_empty(list_free_head)) {
+ /* free list is empty: recycle all busy records back to the free list */
+ list_splice_init(list_busy_head, list_free_head);
+ }
+
+ port_fea = list_entry(UNF_OS_LIST_PREV(list_free_head),
+ struct unf_rport_feature_recard,
+ entry_feature);
+ list_del(&port_fea->entry_feature);
+ list_add(&port_fea->entry_feature, list_busy_head);
+
+ port_fea->wwpn = wwpn;
+ port_fea->port_feature = UNF_PORT_MODE_UNKNOWN;
+
+ spin_unlock_irqrestore(feature_lock, flags);
+ return UNF_PORT_MODE_UNKNOWN;
+}
+
+void unf_update_port_feature(u64 wwpn, u32 port_feature)
+{
+ struct unf_rport_feature_recard *port_fea = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct list_head *busy_head = NULL;
+ struct list_head *free_head = NULL;
+ ulong flags = 0;
+ spinlock_t *feature_lock = NULL;
+
+ feature_lock = &port_feature_pool->port_fea_pool_lock;
+ busy_head = &port_feature_pool->list_busy_head;
+ free_head = &port_feature_pool->list_free_head;
+
+ spin_lock_irqsave(feature_lock, flags);
+ list_for_each_safe(node, next_node, busy_head) {
+ port_fea = list_entry(node, struct unf_rport_feature_recard, entry_feature);
+
+ if (port_fea->wwpn == wwpn) {
+ port_fea->port_feature = port_feature;
+ list_del(&port_fea->entry_feature);
+ list_add(&port_fea->entry_feature, busy_head);
+ spin_unlock_irqrestore(feature_lock, flags);
+
+ return;
+ }
+ }
+
+ list_for_each_safe(node, next_node, free_head) {
+ port_fea = list_entry(node, struct unf_rport_feature_recard, entry_feature);
+
+ if (port_fea->wwpn == wwpn) {
+ port_fea->port_feature = port_feature;
+ list_del(&port_fea->entry_feature);
+ list_add(&port_fea->entry_feature, busy_head);
+
+ spin_unlock_irqrestore(feature_lock, flags);
+
+ return;
+ }
+ }
+
+ spin_unlock_irqrestore(feature_lock, flags);
+}
diff --git a/drivers/scsi/spfc/common/unf_rport.h b/drivers/scsi/spfc/common/unf_rport.h
new file mode 100644
index 000000000000..a9d58cb29b8a
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_rport.h
@@ -0,0 +1,301 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_RPORT_H
+#define UNF_RPORT_H
+
+#include "unf_type.h"
+#include "unf_common.h"
+#include "unf_lport.h"
+
+extern struct unf_rport_feature_pool *port_feature_pool;
+
+#define UNF_MAX_SCSI_ID 2048
+#define UNF_LOSE_TMO 30
+#define UNF_RPORT_INVALID_INDEX 0xffff
+
+/* RSCN compare DISC list with local RPort macro */
+#define UNF_RPORT_NEED_PROCESS 0x1
+#define UNF_RPORT_ONLY_IN_DISC_PROCESS 0x2
+#define UNF_RPORT_ONLY_IN_LOCAL_PROCESS 0x3
+#define UNF_RPORT_IN_DISC_AND_LOCAL_PROCESS 0x4
+#define UNF_RPORT_NOT_NEED_PROCESS 0x5
+
+#define UNF_ECHO_SEND_MAX_TIMES 1
+
+/* csctrl level value */
+#define UNF_CSCTRL_LOW 0x81
+#define UNF_CSCTRL_MIDDLE 0x82
+#define UNF_CSCTRL_HIGH 0x83
+#define UNF_CSCTRL_INVALID 0x0
+
+enum unf_rport_login_state {
+ UNF_RPORT_ST_INIT = 0x1000, /* initialized */
+ UNF_RPORT_ST_PLOGI_WAIT, /* waiting for PLOGI completion */
+ UNF_RPORT_ST_PRLI_WAIT, /* waiting for PRLI completion */
+ UNF_RPORT_ST_READY, /* ready for use */
+ UNF_RPORT_ST_LOGO, /* port logout sent */
+ UNF_RPORT_ST_CLOSING, /* being closed */
+ UNF_RPORT_ST_DELETE, /* port being deleted */
+ UNF_RPORT_ST_BUTT
+};
+
+enum unf_rport_event {
+ UNF_EVENT_RPORT_NORMAL_ENTER = 0x9000,
+ UNF_EVENT_RPORT_ENTER_PLOGI = 0x9001,
+ UNF_EVENT_RPORT_ENTER_PRLI = 0x9002,
+ UNF_EVENT_RPORT_READY = 0x9003,
+ UNF_EVENT_RPORT_LOGO = 0x9004,
+ UNF_EVENT_RPORT_CLS_TIMEOUT = 0x9005,
+ UNF_EVENT_RPORT_RECOVERY = 0x9006,
+ UNF_EVENT_RPORT_RELOGIN = 0x9007,
+ UNF_EVENT_RPORT_LINK_DOWN = 0x9008,
+ UNF_EVENT_RPORT_BUTT
+};
+
+/* RPort local link state */
+enum unf_port_state {
+ UNF_PORT_STATE_LINKUP = 0x1001,
+ UNF_PORT_STATE_LINKDOWN = 0x1002
+};
+
+enum unf_rport_reuse_flag {
+ UNF_RPORT_REUSE_ONLY = 0x1001,
+ UNF_RPORT_REUSE_INIT = 0x1002,
+ UNF_RPORT_REUSE_RECOVER = 0x1003
+};
+
+struct unf_disc_rport {
+ /* RPort entry */
+ struct list_head entry_rport;
+
+ u32 nport_id; /* Remote port NPortID */
+ u32 disc_done; /* 1:Disc done */
+};
+
+struct unf_rport_feature_pool {
+ struct list_head list_busy_head;
+ struct list_head list_free_head;
+ void *port_feature_pool_addr;
+ spinlock_t port_fea_pool_lock;
+};
+
+struct unf_rport_feature_recard {
+ struct list_head entry_feature;
+ u64 wwpn;
+ u32 port_feature;
+ u32 reserved;
+};
+
+struct unf_os_thread_private_data {
+ struct list_head list;
+ spinlock_t spin_lock;
+ struct task_struct *thread;
+ unsigned int in_process;
+ unsigned int cpu_id;
+ atomic_t user_count;
+};
+
+/* Remote Port struct */
+struct unf_rport {
+ u32 max_frame_size;
+ u32 supported_classes;
+
+ /* Dynamic Attributes */
+ /* Remote Port loss timeout in seconds. */
+ u32 dev_loss_tmo;
+
+ u64 node_name;
+ u64 port_name;
+ u32 nport_id; /* Remote port NPortID */
+ u32 local_nport_id;
+
+ u32 roles;
+
+ /* Remote port local INI state */
+ enum unf_port_state lport_ini_state;
+ enum unf_port_state last_lport_ini_state;
+
+ /* Remote port local TGT state */
+ enum unf_port_state lport_tgt_state;
+ enum unf_port_state last_lport_tgt_state;
+
+	/* Port type: FC or FCoE */
+ u32 port_type;
+
+ /* RPort reference counter */
+ atomic_t rport_ref_cnt;
+
+ /* Pending IO count */
+ atomic_t pending_io_cnt;
+
+ /* RPort entry */
+ struct list_head entry_rport;
+
+	/* Port state: delay reclaim until rp_state is complete */
+ enum unf_rport_login_state rp_state;
+ u32 disc_done; /* 1:Disc done */
+
+ struct unf_lport *lport;
+ void *rport;
+ spinlock_t rport_state_lock;
+
+ /* Port attribution */
+ u32 ed_tov;
+ u32 ra_tov;
+ u32 options; /* ini or tgt */
+ u32 last_report_link_up_options;
+ u32 fcp_conf_needed; /* INI Rport send FCP CONF flag */
+ u32 tape_support_needed; /* INI tape support flag */
+ u32 retries; /* special req retry times */
+ u32 logo_retries; /* logo error recovery retry times */
+ u32 max_retries; /* special req retry times */
+ u64 rport_alloc_jifs; /* Rport alloc jiffies */
+
+ void *session;
+
+ /* binding with SCSI */
+ u32 scsi_id;
+
+ /* disc list compare flag */
+ u32 rscn_position;
+
+ u32 rport_index;
+
+ u32 sqn_base;
+ enum unf_rport_qos_level qos_level;
+
+ /* RPort timer,closing status */
+ struct work_struct closing_work;
+
+ /* RPort timer,rport linkup */
+ struct work_struct start_work;
+
+ /* RPort timer,recovery */
+ struct delayed_work recovery_work;
+
+ /* RPort timer,TGT mode,PRLI waiting */
+ struct delayed_work open_work;
+
+ struct semaphore task_sema;
+	/* Callback after rport ready/delete (with state ok/fail): create/free the TGT session here */
+	/* input: L_Port, R_Port, state: ready -- create session / delete -- free session */
+ void (*unf_rport_callback)(void *rport, void *lport, u32 result);
+
+ struct unf_os_thread_private_data *data_thread;
+};
+
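+/*
+ * DFX statistics helpers: each macro bumps the matching per-scsi_id counter in
+ * wwn_rport_info_table when the io_result/io_type, scsi_id, table and counter
+ * pointer are all valid, and logs the invalid input otherwise.
+ */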
+#define UNF_IO_RESULT_CNT(scsi_table, scsi_id, io_result) \
+ do { \
+ if (likely(((io_result) < UNF_MAX_IO_RETURN_VALUE) && \
+ ((scsi_id) < UNF_MAX_SCSI_ID) && \
+ ((scsi_table)->wwn_rport_info_table) && \
+ (((scsi_table)->wwn_rport_info_table[scsi_id].dfx_counter)))) {\
+ atomic64_inc(&((scsi_table)->wwn_rport_info_table[scsi_id] \
+ .dfx_counter->io_done_cnt[(io_result)])); \
+ } else { \
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, \
+ UNF_ERR, \
+ "[err] io return value(0x%x) or " \
+ "scsi id(0x%x) is invalid", \
+ io_result, scsi_id); \
+ } \
+ } while (0)
+
+#define UNF_SCSI_CMD_CNT(scsi_table, scsi_id, io_type) \
+ do { \
+ if (likely(((io_type) < UNF_MAX_SCSI_CMD) && \
+ ((scsi_id) < UNF_MAX_SCSI_ID) && \
+ ((scsi_table)->wwn_rport_info_table) && \
+ (((scsi_table)->wwn_rport_info_table[scsi_id].dfx_counter)))) { \
+ atomic64_inc(&(((scsi_table)->wwn_rport_info_table[scsi_id]) \
+ .dfx_counter->scsi_cmd_cnt[io_type])); \
+ } else { \
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, \
+ UNF_ERR, \
+ "[err] scsi_cmd(0x%x) or scsi id(0x%x) " \
+ "is invalid", \
+ io_type, scsi_id); \
+ } \
+ } while (0)
+
+#define UNF_SCSI_ERROR_HANDLE_CNT(scsi_table, scsi_id, io_type) \
+ do { \
+ if (likely(((io_type) < UNF_SCSI_ERROR_HANDLE_BUTT) && \
+ ((scsi_id) < UNF_MAX_SCSI_ID) && \
+ ((scsi_table)->wwn_rport_info_table) && \
+ (((scsi_table)->wwn_rport_info_table[scsi_id] \
+ .dfx_counter)))) { \
+ atomic_inc(&((scsi_table)->wwn_rport_info_table[scsi_id] \
+ .dfx_counter->error_handle[io_type])); \
+ } else { \
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, \
+ UNF_ERR, \
+ "[err] scsi_cmd(0x%x) or scsi id(0x%x) " \
+ "is invalid", \
+ (io_type), (scsi_id)); \
+ } \
+ } while (0)
+
+#define UNF_SCSI_ERROR_HANDLE_RESULT_CNT(scsi_table, scsi_id, io_type) \
+ do { \
+ if (likely(((io_type) < UNF_SCSI_ERROR_HANDLE_BUTT) && \
+ ((scsi_id) < UNF_MAX_SCSI_ID) && \
+			   ((scsi_table)->wwn_rport_info_table) && \
+			   (((scsi_table)->wwn_rport_info_table[scsi_id] \
+			   .dfx_counter)))) { \
+			atomic_inc(&((scsi_table)->wwn_rport_info_table[scsi_id] \
+				   .dfx_counter->error_handle_result[io_type])); \
+ } else { \
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, \
+ UNF_ERR, \
+ "[err] scsi_cmd(0x%x) or scsi id(0x%x) " \
+ "is invalid", \
+ io_type, scsi_id); \
+ } \
+ } while (0)
+
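+/* R_Port state machine, lookup, reuse, scsi_id and port feature interfaces */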
+void unf_rport_state_ma(struct unf_rport *rport, enum unf_rport_event event);
+void unf_update_lport_state_by_linkup_event(struct unf_lport *lport,
+ struct unf_rport *rport,
+ u32 rport_att);
+
+void unf_set_rport_state(struct unf_rport *rport, enum unf_rport_login_state states);
+void unf_rport_enter_closing(struct unf_rport *rport);
+u32 unf_release_rport_res(struct unf_lport *lport, struct unf_rport *rport);
+u32 unf_initrport_mgr_temp(struct unf_lport *lport);
+void unf_clean_linkdown_rport(struct unf_lport *lport);
+void unf_rport_error_recovery(struct unf_rport *rport);
+struct unf_rport *unf_get_rport_by_nport_id(struct unf_lport *lport, u32 nport_id);
+struct unf_rport *unf_get_rport_by_wwn(struct unf_lport *lport, u64 wwpn);
+void unf_rport_enter_logo(struct unf_lport *lport, struct unf_rport *rport);
+u32 unf_rport_ref_inc(struct unf_rport *rport);
+void unf_rport_ref_dec(struct unf_rport *rport);
+
+struct unf_rport *unf_rport_set_qualifier_key_reuse(struct unf_lport *lport,
+ struct unf_rport *rport_by_nport_id,
+ struct unf_rport *rport_by_wwpn,
+ u64 wwpn, u32 sid);
+void unf_rport_delay_login(struct unf_rport *rport);
+struct unf_rport *unf_find_valid_rport(struct unf_lport *lport, u64 wwpn,
+ u32 sid);
+void unf_rport_linkdown(struct unf_lport *lport, struct unf_rport *rport);
+void unf_apply_for_session(struct unf_lport *lport, struct unf_rport *rport);
+struct unf_rport *unf_get_safe_rport(struct unf_lport *lport,
+ struct unf_rport *rport,
+ enum unf_rport_reuse_flag reuse_flag,
+ u32 nport_id);
+void *unf_rport_get_free_and_init(void *lport, u32 port_type, u32 nport_id);
+
+void unf_set_device_state(struct unf_lport *lport, u32 scsi_id, int scsi_state);
+u32 unf_get_scsi_id_by_wwpn(struct unf_lport *lport, u64 wwpn);
+u32 unf_get_device_state(struct unf_lport *lport, u32 scsi_id);
+u32 unf_free_scsi_id(struct unf_lport *lport, u32 scsi_id);
+void unf_schedule_closing_work(struct unf_lport *lport, struct unf_rport *rport);
+void unf_sesion_loss_timeout(struct work_struct *work);
+u32 unf_get_port_feature(u64 wwpn);
+void unf_update_port_feature(u64 wwpn, u32 port_feature);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_scsi.c b/drivers/scsi/spfc/common/unf_scsi.c
new file mode 100644
index 000000000000..961e5dd782c6
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_scsi.c
@@ -0,0 +1,1463 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_type.h"
+#include "unf_log.h"
+#include "unf_scsi_common.h"
+#include "unf_lport.h"
+#include "unf_rport.h"
+#include "unf_portman.h"
+#include "unf_exchg.h"
+#include "unf_exchg_abort.h"
+#include "unf_npiv.h"
+#include "unf_io.h"
+
+#define UNF_LUN_ID_MASK 0x00000000ffff0000
+#define UNF_CMD_PER_LUN 3
+
+static int unf_scsi_queue_cmd(struct Scsi_Host *phost, struct scsi_cmnd *pcmd);
+static int unf_scsi_abort_scsi_cmnd(struct scsi_cmnd *v_cmnd);
+static int unf_scsi_device_reset_handler(struct scsi_cmnd *v_cmnd);
+static int unf_scsi_bus_reset_handler(struct scsi_cmnd *v_cmnd);
+static int unf_scsi_target_reset_handler(struct scsi_cmnd *v_cmnd);
+static int unf_scsi_slave_alloc(struct scsi_device *sdev);
+static void unf_scsi_destroy_slave(struct scsi_device *sdev);
+static int unf_scsi_slave_configure(struct scsi_device *sdev);
+static int unf_scsi_scan_finished(struct Scsi_Host *shost, unsigned long time);
+static void unf_scsi_scan_start(struct Scsi_Host *shost);
+
+static struct scsi_transport_template *scsi_transport_template;
+static struct scsi_transport_template *scsi_transport_template_v;
+
+struct unf_ini_error_code ini_error_code_table1[] = {
+ {UNF_IO_SUCCESS, UNF_SCSI_HOST(DID_OK)},
+ {UNF_IO_ABORTED, UNF_SCSI_HOST(DID_ABORT)},
+ {UNF_IO_FAILED, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_ABORT_ABTS, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_ABORT_LOGIN, UNF_SCSI_HOST(DID_NO_CONNECT)},
+ {UNF_IO_ABORT_REET, UNF_SCSI_HOST(DID_RESET)},
+ {UNF_IO_ABORT_FAILED, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_OUTOF_ORDER, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_FTO, UNF_SCSI_HOST(DID_TIME_OUT)},
+ {UNF_IO_LINK_FAILURE, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_OVER_FLOW, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_RSP_OVER, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_LOST_FRAME, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_UNDER_FLOW, UNF_SCSI_HOST(DID_OK)},
+ {UNF_IO_HOST_PROG_ERROR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_SEST_PROG_ERROR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_INVALID_ENTRY, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_ABORT_SEQ_NOT, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_REJECT, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_EDC_IN_ERROR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_EDC_OUT_ERROR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_UNINIT_KEK_ERR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_DEK_OUTOF_RANGE, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_KEY_UNWRAP_ERR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_KEY_TAG_ERR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_KEY_ECC_ERR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_BLOCK_SIZE_ERROR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_ILLEGAL_CIPHER_MODE, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_CLEAN_UP, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_ABORTED_BY_TARGET, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_TRANSPORT_ERROR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_LINK_FLASH, UNF_SCSI_HOST(DID_NO_CONNECT)},
+ {UNF_IO_TIMEOUT, UNF_SCSI_HOST(DID_TIME_OUT)},
+ {UNF_IO_DMA_ERROR, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_NO_LPORT, UNF_SCSI_HOST(DID_NO_CONNECT)},
+ {UNF_IO_NO_XCHG, UNF_SCSI_HOST(DID_SOFT_ERROR)},
+ {UNF_IO_SOFT_ERR, UNF_SCSI_HOST(DID_SOFT_ERROR)},
+ {UNF_IO_PORT_LOGOUT, UNF_SCSI_HOST(DID_NO_CONNECT)},
+ {UNF_IO_ERREND, UNF_SCSI_HOST(DID_ERROR)},
+ {UNF_IO_DIF_ERROR, (UNF_SCSI_HOST(DID_OK) | UNF_SCSI_STATUS(SCSI_CHECK_CONDITION))},
+ {UNF_IO_INCOMPLETE, UNF_SCSI_HOST(DID_IMM_RETRY)},
+ {UNF_IO_DIF_REF_ERROR, (UNF_SCSI_HOST(DID_OK) | UNF_SCSI_STATUS(SCSI_CHECK_CONDITION))},
+ {UNF_IO_DIF_GEN_ERROR, (UNF_SCSI_HOST(DID_OK) | UNF_SCSI_STATUS(SCSI_CHECK_CONDITION))}
+};
+
+u32 ini_err_code_table_cnt1 = sizeof(ini_error_code_table1) / sizeof(struct unf_ini_error_code);
+
+static void unf_set_rport_loss_tmo(struct fc_rport *rport, u32 timeout)
+{
+ if (timeout)
+ rport->dev_loss_tmo = timeout;
+ else
+ rport->dev_loss_tmo = 1;
+}
+
+static void unf_get_host_port_id(struct Scsi_Host *shost)
+{
+ struct unf_lport *unf_lport = NULL;
+
+ unf_lport = (struct unf_lport *)shost->hostdata[0];
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is null");
+ return;
+ }
+
+ fc_host_port_id(shost) = unf_lport->port_id;
+}
+
+static void unf_get_host_speed(struct Scsi_Host *shost)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 speed = FC_PORTSPEED_UNKNOWN;
+
+ unf_lport = (struct unf_lport *)shost->hostdata[0];
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is null");
+ return;
+ }
+
+ switch (unf_lport->speed) {
+ case UNF_PORT_SPEED_2_G:
+ speed = FC_PORTSPEED_2GBIT;
+ break;
+ case UNF_PORT_SPEED_4_G:
+ speed = FC_PORTSPEED_4GBIT;
+ break;
+ case UNF_PORT_SPEED_8_G:
+ speed = FC_PORTSPEED_8GBIT;
+ break;
+ case UNF_PORT_SPEED_16_G:
+ speed = FC_PORTSPEED_16GBIT;
+ break;
+ case UNF_PORT_SPEED_32_G:
+ speed = FC_PORTSPEED_32GBIT;
+ break;
+ default:
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) with unknown speed(0x%x) for FC mode",
+ unf_lport->port_id, unf_lport->speed);
+ break;
+ }
+
+ fc_host_speed(shost) = speed;
+}
+
+static void unf_get_host_port_type(struct Scsi_Host *shost)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 port_type = FC_PORTTYPE_UNKNOWN;
+
+ unf_lport = (struct unf_lport *)shost->hostdata[0];
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is null");
+ return;
+ }
+
+ switch (unf_lport->act_topo) {
+ case UNF_ACT_TOP_PRIVATE_LOOP:
+ port_type = FC_PORTTYPE_LPORT;
+ break;
+ case UNF_ACT_TOP_PUBLIC_LOOP:
+ port_type = FC_PORTTYPE_NLPORT;
+ break;
+ case UNF_ACT_TOP_P2P_DIRECT:
+ port_type = FC_PORTTYPE_PTP;
+ break;
+ case UNF_ACT_TOP_P2P_FABRIC:
+ port_type = FC_PORTTYPE_NPORT;
+ break;
+ default:
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) with unknown topo type(0x%x) for FC mode",
+ unf_lport->port_id, unf_lport->act_topo);
+ break;
+ }
+
+ fc_host_port_type(shost) = port_type;
+}
+
+static void unf_get_symbolic_name(struct Scsi_Host *shost)
+{
+ u8 *name = NULL;
+ struct unf_lport *unf_lport = NULL;
+
+ unf_lport = (struct unf_lport *)(uintptr_t)shost->hostdata[0];
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Check l_port failed");
+ return;
+ }
+
+ name = fc_host_symbolic_name(shost);
+ if (name)
+ snprintf(name, FC_SYMBOLIC_NAME_SIZE, "SPFC_FW_RELEASE:%s SPFC_DRV_RELEASE:%s",
+ unf_lport->fw_version, SPFC_DRV_VERSION);
+}
+
+static void unf_get_host_fabric_name(struct Scsi_Host *shost)
+{
+ struct unf_lport *unf_lport = NULL;
+
+ unf_lport = (struct unf_lport *)shost->hostdata[0];
+
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is null");
+ return;
+ }
+ fc_host_fabric_name(shost) = unf_lport->fabric_node_name;
+}
+
+static void unf_get_host_port_state(struct Scsi_Host *shost)
+{
+ struct unf_lport *unf_lport = NULL;
+ enum fc_port_state port_state;
+
+ unf_lport = (struct unf_lport *)shost->hostdata[0];
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is null");
+ return;
+ }
+
+ switch (unf_lport->link_up) {
+ case UNF_PORT_LINK_DOWN:
+ port_state = FC_PORTSTATE_OFFLINE;
+ break;
+ case UNF_PORT_LINK_UP:
+ port_state = FC_PORTSTATE_ONLINE;
+ break;
+ default:
+ port_state = FC_PORTSTATE_UNKNOWN;
+ break;
+ }
+
+ fc_host_port_state(shost) = port_state;
+}
+
+static void unf_dev_loss_timeout_callbk(struct fc_rport *rport)
+{
+ /*
+ * NOTE: about rport->dd_data
+ * --->>> local SCSI_ID
+	 * 1. Assigned when the scsi rport links up
+	 * 2. Released when the scsi rport links down and the timeout (30s) expires
+	 * 3. Used when the scsi midlayer calls back into slave_alloc
+ */
+ struct Scsi_Host *host = NULL;
+ struct unf_lport *unf_lport = NULL;
+ u32 scsi_id = 0;
+
+ if (unlikely(!rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]SCSI rport is null");
+ return;
+ }
+
+ host = rport_to_shost(rport);
+ if (unlikely(!host)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Host is null");
+ return;
+ }
+
+ scsi_id = *(u32 *)(rport->dd_data); /* according to Local SCSI_ID */
+ if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+				     "[err]rport(0x%p) scsi_id(0x%x) exceeds the max(0x%x)",
+ rport, scsi_id, UNF_MAX_SCSI_ID);
+ return;
+ }
+
+ unf_lport = (struct unf_lport *)host->hostdata[0];
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[event]Port(0x%x_0x%x) rport(0x%p) scsi_id(0x%x) target_id(0x%x) loss timeout",
+ unf_lport->port_id, unf_lport->nport_id, rport,
+ scsi_id, rport->scsi_target_id);
+
+ atomic_inc(&unf_lport->session_loss_tmo);
+
+ /* Free SCSI ID & set table state with DEAD */
+ (void)unf_free_scsi_id(unf_lport, scsi_id);
+ unf_xchg_up_abort_io_by_scsi_id(unf_lport, scsi_id);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(%p) is invalid", unf_lport);
+ }
+
+ *((u32 *)rport->dd_data) = INVALID_VALUE32;
+}
+
+int unf_scsi_create_vport(struct fc_vport *fc_port, bool disabled)
+{
+ struct unf_lport *vport = NULL;
+ struct unf_lport *unf_lport = NULL;
+ struct Scsi_Host *shost = NULL;
+ struct vport_config vport_config = {0};
+
+ shost = vport_to_shost(fc_port);
+
+ unf_lport = (struct unf_lport *)shost->hostdata[0];
+ if (unf_is_lport_valid(unf_lport) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(%p) is invalid", unf_lport);
+
+ return RETURN_ERROR;
+ }
+
+ vport_config.port_name = fc_port->port_name;
+
+ vport_config.port_mode = fc_port->roles;
+
+ vport = unf_creat_vport(unf_lport, &vport_config);
+ if (!vport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+				     "[err]Port(0x%x) create vport failed in the low level driver",
+ unf_lport->port_id);
+
+ return RETURN_ERROR;
+ }
+
+ fc_port->dd_data = vport;
+ vport->vport = fc_port;
+
+ return RETURN_OK;
+}
+
+int unf_scsi_delete_vport(struct fc_vport *fc_port)
+{
+ int ret = RETURN_ERROR;
+ struct unf_lport *vport = NULL;
+
+ vport = (struct unf_lport *)fc_port->dd_data;
+ if (unf_is_lport_valid(vport) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]VPort(%p) is invalid or being removed", vport);
+
+ fc_port->dd_data = NULL;
+
+ return ret;
+ }
+
+ ret = (int)unf_destroy_one_vport(vport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]VPort(0x%x) destroy failed in the driver", vport->port_id);
+
+ return ret;
+ }
+
+ fc_port->dd_data = NULL;
+ return ret;
+}
+
+struct fc_function_template function_template = {
+ .show_host_node_name = 1,
+ .show_host_port_name = 1,
+ .show_host_supported_classes = 1,
+ .show_host_supported_speeds = 1,
+
+ .get_host_port_id = unf_get_host_port_id,
+ .show_host_port_id = 1,
+ .get_host_speed = unf_get_host_speed,
+ .show_host_speed = 1,
+ .get_host_port_type = unf_get_host_port_type,
+ .show_host_port_type = 1,
+ .get_host_symbolic_name = unf_get_symbolic_name,
+ .show_host_symbolic_name = 1,
+ .set_host_system_hostname = NULL,
+ .show_host_system_hostname = 1,
+ .get_host_fabric_name = unf_get_host_fabric_name,
+ .show_host_fabric_name = 1,
+ .get_host_port_state = unf_get_host_port_state,
+ .show_host_port_state = 1,
+
+ .dd_fcrport_size = sizeof(void *),
+ .show_rport_supported_classes = 1,
+
+ .get_starget_node_name = NULL,
+ .show_starget_node_name = 1,
+ .get_starget_port_name = NULL,
+ .show_starget_port_name = 1,
+ .get_starget_port_id = NULL,
+ .show_starget_port_id = 1,
+
+ .set_rport_dev_loss_tmo = unf_set_rport_loss_tmo,
+ .show_rport_dev_loss_tmo = 0,
+
+ .issue_fc_host_lip = NULL,
+ .dev_loss_tmo_callbk = unf_dev_loss_timeout_callbk,
+ .terminate_rport_io = NULL,
+ .get_fc_host_stats = NULL,
+
+ .vport_create = unf_scsi_create_vport,
+ .vport_disable = NULL,
+ .vport_delete = unf_scsi_delete_vport,
+ .bsg_request = NULL,
+ .bsg_timeout = NULL,
+};
+
+struct fc_function_template function_template_v = {
+ .show_host_node_name = 1,
+ .show_host_port_name = 1,
+ .show_host_supported_classes = 1,
+ .show_host_supported_speeds = 1,
+
+ .get_host_port_id = unf_get_host_port_id,
+ .show_host_port_id = 1,
+ .get_host_speed = unf_get_host_speed,
+ .show_host_speed = 1,
+ .get_host_port_type = unf_get_host_port_type,
+ .show_host_port_type = 1,
+ .get_host_symbolic_name = unf_get_symbolic_name,
+ .show_host_symbolic_name = 1,
+ .set_host_system_hostname = NULL,
+ .show_host_system_hostname = 1,
+ .get_host_fabric_name = unf_get_host_fabric_name,
+ .show_host_fabric_name = 1,
+ .get_host_port_state = unf_get_host_port_state,
+ .show_host_port_state = 1,
+
+ .dd_fcrport_size = sizeof(void *),
+ .show_rport_supported_classes = 1,
+
+ .get_starget_node_name = NULL,
+ .show_starget_node_name = 1,
+ .get_starget_port_name = NULL,
+ .show_starget_port_name = 1,
+ .get_starget_port_id = NULL,
+ .show_starget_port_id = 1,
+
+ .set_rport_dev_loss_tmo = unf_set_rport_loss_tmo,
+ .show_rport_dev_loss_tmo = 0,
+
+ .issue_fc_host_lip = NULL,
+ .dev_loss_tmo_callbk = unf_dev_loss_timeout_callbk,
+ .terminate_rport_io = NULL,
+ .get_fc_host_stats = NULL,
+
+ .vport_create = NULL,
+ .vport_disable = NULL,
+ .vport_delete = NULL,
+ .bsg_request = NULL,
+ .bsg_timeout = NULL,
+};
+
+struct scsi_host_template scsi_host_template = {
+ .module = THIS_MODULE,
+ .name = "SPFC",
+
+ .queuecommand = unf_scsi_queue_cmd,
+ .eh_timed_out = fc_eh_timed_out,
+ .eh_abort_handler = unf_scsi_abort_scsi_cmnd,
+ .eh_device_reset_handler = unf_scsi_device_reset_handler,
+
+ .eh_target_reset_handler = unf_scsi_target_reset_handler,
+ .eh_bus_reset_handler = unf_scsi_bus_reset_handler,
+ .eh_host_reset_handler = NULL,
+
+ .slave_configure = unf_scsi_slave_configure,
+ .slave_alloc = unf_scsi_slave_alloc,
+ .slave_destroy = unf_scsi_destroy_slave,
+
+ .scan_finished = unf_scsi_scan_finished,
+ .scan_start = unf_scsi_scan_start,
+
+ .this_id = -1, /* this_id: -1 */
+ .cmd_per_lun = UNF_CMD_PER_LUN,
+ .shost_attrs = NULL,
+ .sg_tablesize = SG_ALL,
+ .max_sectors = UNF_MAX_SECTORS,
+ .supported_mode = MODE_INITIATOR,
+};
+
+void unf_unmap_prot_sgl(struct scsi_cmnd *cmnd)
+{
+ struct device *dev = NULL;
+
+ if ((scsi_get_prot_op(cmnd) != SCSI_PROT_NORMAL) && spfc_dif_enable &&
+ (scsi_prot_sg_count(cmnd))) {
+ dev = cmnd->device->host->dma_dev;
+ dma_unmap_sg(dev, scsi_prot_sglist(cmnd),
+ (int)scsi_prot_sg_count(cmnd),
+ cmnd->sc_data_direction);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "scsi done cmd:%p op:%u, difsglcount:%u", cmnd,
+ scsi_get_prot_op(cmnd), scsi_prot_sg_count(cmnd));
+ }
+}
+
+void unf_scsi_done(struct unf_scsi_cmnd *scsi_cmd)
+{
+ struct scsi_cmnd *cmd = NULL;
+
+ cmd = (struct scsi_cmnd *)scsi_cmd->upper_cmnd;
+ FC_CHECK_RETURN_VOID(scsi_cmd);
+ FC_CHECK_RETURN_VOID(cmd);
+ FC_CHECK_RETURN_VOID(cmd->scsi_done);
+ scsi_set_resid(cmd, (int)scsi_cmd->resid);
+
+ cmd->result = scsi_cmd->result;
+ scsi_dma_unmap(cmd);
+ unf_unmap_prot_sgl(cmd);
+ return cmd->scsi_done(cmd);
+}
+
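+/*
+ * Map the midlayer SCSI protection operation to the UNF DIF action: strip or
+ * insert protection data when only one side of the path is protected, and
+ * verify-and-forward when both the OS and the target side carry it.
+ */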
+static void unf_get_protect_op(struct scsi_cmnd *cmd,
+ struct unf_dif_control_info *dif_control_info)
+{
+ switch (scsi_get_prot_op(cmd)) {
+ /* OS-HBA: Unprotected, HBA-Target: Protected */
+ case SCSI_PROT_READ_STRIP:
+ dif_control_info->protect_opcode |= UNF_DIF_ACTION_VERIFY_AND_DELETE;
+ break;
+ case SCSI_PROT_WRITE_INSERT:
+ dif_control_info->protect_opcode |= UNF_DIF_ACTION_INSERT;
+ break;
+
+ /* OS-HBA: Protected, HBA-Target: Unprotected */
+ case SCSI_PROT_READ_INSERT:
+ dif_control_info->protect_opcode |= UNF_DIF_ACTION_INSERT;
+ break;
+ case SCSI_PROT_WRITE_STRIP:
+ dif_control_info->protect_opcode |= UNF_DIF_ACTION_VERIFY_AND_DELETE;
+ break;
+
+ /* OS-HBA: Protected, HBA-Target: Protected */
+ case SCSI_PROT_READ_PASS:
+ case SCSI_PROT_WRITE_PASS:
+ dif_control_info->protect_opcode |= UNF_DIF_ACTION_VERIFY_AND_FORWARD;
+ break;
+
+ default:
+ dif_control_info->protect_opcode |= UNF_DIF_ACTION_VERIFY_AND_FORWARD;
+ break;
+ }
+}
+
+int unf_get_protect_mode(struct unf_lport *lport, struct scsi_cmnd *scsi_cmd,
+ struct unf_scsi_cmnd *unf_scsi_cmd)
+{
+ struct scsi_cmnd *cmd = NULL;
+ int dif_seg_cnt = 0;
+ struct unf_dif_control_info *dif_control_info = NULL;
+
+ cmd = scsi_cmd;
+ dif_control_info = &unf_scsi_cmd->dif_control;
+
+ unf_get_protect_op(cmd, dif_control_info);
+
+ if (dif_sgl_mode)
+ dif_control_info->flags |= UNF_DIF_DOUBLE_SGL;
+ dif_control_info->flags |= ((cmd->device->sector_size) == SECTOR_SIZE_4096)
+ ? UNF_DIF_SECTSIZE_4KB : UNF_DIF_SECTSIZE_512;
+ dif_control_info->protect_opcode |= UNF_VERIFY_CRC_MASK | UNF_VERIFY_LBA_MASK;
+ dif_control_info->dif_sge_count = scsi_prot_sg_count(cmd);
+ dif_control_info->dif_sgl = scsi_prot_sglist(cmd);
+ dif_control_info->start_lba = cpu_to_le32(((uint32_t)(0xffffffff & scsi_get_lba(cmd))));
+
+ if (cmd->device->sector_size == SECTOR_SIZE_4096)
+ dif_control_info->start_lba = dif_control_info->start_lba >> UNF_SHIFT_3;
+
+ if (scsi_prot_sg_count(cmd)) {
+ dif_seg_cnt = dma_map_sg(&lport->low_level_func.dev->dev, scsi_prot_sglist(cmd),
+ (int)scsi_prot_sg_count(cmd), cmd->sc_data_direction);
+ if (unlikely(!dif_seg_cnt)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) cmd:%p map dif sgl err",
+ lport->port_id, cmd);
+ return UNF_RETURN_ERROR;
+ }
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "build scsi cmd:%p op:%u,difsglcount:%u,difsegcnt:%u", cmd,
+ scsi_get_prot_op(cmd), scsi_prot_sg_count(cmd),
+ dif_seg_cnt);
+ return RETURN_OK;
+}
+
+static u32 unf_get_rport_qos_level(struct scsi_cmnd *cmd, u32 scsi_id,
+ struct unf_rport_scsi_id_image *scsi_image_table)
+{
+ enum unf_rport_qos_level level = 0;
+
+ if (!scsi_image_table->wwn_rport_info_table[scsi_id].lun_qos_level ||
+ cmd->device->lun >= UNF_MAX_LUN_PER_TARGET) {
+ level = 0;
+ } else {
+ level = (scsi_image_table->wwn_rport_info_table[scsi_id]
+ .lun_qos_level[cmd->device->lun]);
+ }
+ return level;
+}
+
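+/*
+ * SGL iterator handed to the low level driver through
+ * unf_scsi_cmnd->unf_ini_get_sgl_entry: it returns the DMA address and length
+ * of the current scatterlist entry and advances to the next one, rejecting
+ * entries that are empty or larger than SPFC_MAX_DMA_LENGTH.
+ */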
+u32 unf_get_frame_entry_buf(void *up_cmnd, void *driver_sgl, void **upper_sgl,
+ u32 *port_id, u32 *index, char **buf, u32 *buf_len)
+{
+#define SPFC_MAX_DMA_LENGTH (0x20000 - 1)
+ struct scatterlist *scsi_sgl = *upper_sgl;
+
+ if (unlikely(!scsi_sgl)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Command(0x%p) can not get SGL.", up_cmnd);
+ return RETURN_ERROR;
+ }
+ *buf = (char *)sg_dma_address(scsi_sgl);
+ *buf_len = sg_dma_len(scsi_sgl);
+ *upper_sgl = (void *)sg_next(scsi_sgl);
+ if (unlikely((*buf_len > SPFC_MAX_DMA_LENGTH) || (*buf_len == 0))) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+			     "[err]Command(0x%p) dmalen:0x%x is not supported.",
+ up_cmnd, *buf_len);
+ return RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
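+/* Fill the driver private unf_scsi_cmnd from the midlayer scsi_cmnd: scsi_id,
+ * LUN, CDB, transfer length, sense buffer, SGL, the per-LUN QoS level and a
+ * non-zero command serial number.
+ */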
+static void unf_init_scsi_cmnd(struct Scsi_Host *host, struct scsi_cmnd *cmd,
+ struct unf_scsi_cmnd *scsi_cmnd,
+ struct unf_rport_scsi_id_image *scsi_image_table,
+ int datasegcnt)
+{
+ static atomic64_t count;
+ enum unf_rport_qos_level level = 0;
+ u32 scsi_id = 0;
+
+ scsi_id = (u32)((u64)cmd->device->hostdata);
+ level = unf_get_rport_qos_level(cmd, scsi_id, scsi_image_table);
+ scsi_cmnd->scsi_host_id = host->host_no; /* save host_no to scsi_cmnd->scsi_host_id */
+ scsi_cmnd->scsi_id = scsi_id;
+ scsi_cmnd->raw_lun_id = ((u64)cmd->device->lun << 16) & UNF_LUN_ID_MASK;
+ scsi_cmnd->data_direction = cmd->sc_data_direction;
+ scsi_cmnd->under_flow = cmd->underflow;
+ scsi_cmnd->cmnd_len = cmd->cmd_len;
+ scsi_cmnd->pcmnd = cmd->cmnd;
+ scsi_cmnd->transfer_len = cpu_to_le32((uint32_t)scsi_bufflen(cmd));
+ scsi_cmnd->sense_buflen = UNF_SCSI_SENSE_BUFFERSIZE;
+ scsi_cmnd->sense_buf = cmd->sense_buffer;
+ scsi_cmnd->time_out = 0;
+ scsi_cmnd->upper_cmnd = cmd;
+ scsi_cmnd->drv_private = (void *)(*(u64 *)shost_priv(host));
+ scsi_cmnd->entry_count = datasegcnt;
+ scsi_cmnd->sgl = scsi_sglist(cmd);
+ scsi_cmnd->unf_ini_get_sgl_entry = unf_get_frame_entry_buf;
+ scsi_cmnd->done = unf_scsi_done;
+ scsi_cmnd->lun_id = (u8 *)&scsi_cmnd->raw_lun_id;
+ scsi_cmnd->err_code_table_cout = ini_err_code_table_cnt1;
+ scsi_cmnd->err_code_table = ini_error_code_table1;
+ scsi_cmnd->world_id = INVALID_WORLD_ID;
+ scsi_cmnd->cmnd_sn = atomic64_inc_return(&count);
+ scsi_cmnd->qos_level = level;
+ if (unlikely(scsi_cmnd->cmnd_sn == 0))
+ scsi_cmnd->cmnd_sn = atomic64_inc_return(&count);
+}
+
+static void unf_io_error_done(struct scsi_cmnd *cmd,
+ struct unf_rport_scsi_id_image *scsi_image_table,
+ u32 scsi_id, u32 result)
+{
+ cmd->result = (int)(result << UNF_SHIFT_16);
+ cmd->scsi_done(cmd);
+ if (scsi_image_table)
+ UNF_IO_RESULT_CNT(scsi_image_table, scsi_id, result);
+}
+
+static bool unf_scan_device_cmd(struct scsi_cmnd *cmd)
+{
+ return ((cmd->cmnd[0] == INQUIRY) || (cmd->cmnd[0] == REPORT_LUNS));
+}
+
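+/*
+ * .queuecommand entry: validate the lport, scsi_id, scsi rport and session
+ * state, DMA-map the data (and, with DIF enabled, protection) scatterlists,
+ * build the unf_scsi_cmnd and hand it to unf_cm_queue_command(). Commands that
+ * cannot be sent are completed with an I/O error, or SCSI_MLQUEUE_* is
+ * returned to let the midlayer retry.
+ */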
+static int unf_scsi_queue_cmd(struct Scsi_Host *phost, struct scsi_cmnd *pcmd)
+{
+ struct Scsi_Host *host = NULL;
+ struct scsi_cmnd *cmd = NULL;
+ struct unf_scsi_cmnd scsi_cmd = {0};
+ u32 scsi_id = 0;
+ u32 scsi_state = 0;
+ int ret = SCSI_MLQUEUE_HOST_BUSY;
+ struct unf_lport *unf_lport = NULL;
+ struct fc_rport *rport = NULL;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ struct unf_rport *unf_rport = NULL;
+ u32 cmnd_result = 0;
+ u32 rport_state_err = 0;
+ bool scan_device_cmd = false;
+ int datasegcnt = 0;
+
+ host = phost;
+ cmd = pcmd;
+ FC_CHECK_RETURN_VALUE(host, RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(cmd, RETURN_ERROR);
+
+ /* Get L_Port from scsi_cmd */
+ unf_lport = (struct unf_lport *)host->hostdata[0];
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Check l_port failed, cmd(%p)", cmd);
+ unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
+ return 0;
+ }
+
+ /* Check device/session local state by device_id */
+ scsi_id = (u32)((u64)cmd->device->hostdata);
+ if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x) scsi_id(0x%x) exceeds the max %d",
+ unf_lport->port_id, scsi_id, UNF_MAX_SCSI_ID);
+ unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
+ return 0;
+ }
+
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ UNF_SCSI_CMD_CNT(scsi_image_table, scsi_id, cmd->cmnd[0]);
+
+ /* Get scsi r_port */
+ rport = starget_to_rport(scsi_target(cmd->device));
+ if (unlikely(!rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) cmd(%p) to get scsi rport failed",
+ unf_lport->port_id, cmd);
+ unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
+ return 0;
+ }
+
+ if (unlikely(!scsi_image_table->wwn_rport_info_table)) {
+ FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_WARN,
+ "[warn]LPort porid(0x%x) WwnRportInfoTable NULL",
+ unf_lport->port_id);
+ unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
+ return 0;
+ }
+
+ if (unlikely(unf_lport->port_removing)) {
+ FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_WARN,
+			     "[warn]Port(0x%x) scsi_id(0x%x) rport(0x%p) target_id(0x%x) cmd(0x%p), lport is being removed",
+ unf_lport->port_id, scsi_id, rport, rport->scsi_target_id, cmd);
+ unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
+ return 0;
+ }
+
+ scsi_state = atomic_read(&scsi_image_table->wwn_rport_info_table[scsi_id].scsi_state);
+ if (unlikely(scsi_state != UNF_SCSI_ST_ONLINE)) {
+ if (scsi_state == UNF_SCSI_ST_OFFLINE) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) scsi_state(0x%x) scsi_id(0x%x) rport(0x%p) target_id(0x%x) cmd(0x%p), target is busy",
+ unf_lport->port_id, scsi_state, scsi_id, rport,
+ rport->scsi_target_id, cmd);
+
+ scan_device_cmd = unf_scan_device_cmd(cmd);
+			/* For REPORT LUNS or INQUIRY commands, do not retry on
+			 * failure to avoid deadlocking the scsi host scan_mutex
+			 */
+ if (scan_device_cmd) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) host(0x%x) scsi_id(0x%x) lun(0x%llx) cmd(0x%x) DID_NO_CONNECT",
+ unf_lport->port_id, host->host_no, scsi_id,
+ (u64)cmd->device->lun, cmd->cmnd[0]);
+ unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
+ return 0;
+ }
+
+ if (likely(scsi_image_table->wwn_rport_info_table)) {
+ if (likely(scsi_image_table->wwn_rport_info_table[scsi_id]
+ .dfx_counter)) {
+ atomic64_inc(&(scsi_image_table
+ ->wwn_rport_info_table[scsi_id]
+ .dfx_counter->target_busy));
+ }
+ }
+
+ /* Target busy: need scsi retry */
+ return SCSI_MLQUEUE_TARGET_BUSY;
+ }
+ /* timeout(DEAD): scsi_done & return 0 & I/O error */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) scsi_id(0x%x) rport(0x%p) target_id(0x%x) cmd(0x%p), target is loss timeout",
+ unf_lport->port_id, scsi_id, rport,
+ rport->scsi_target_id, cmd);
+ unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
+ return 0;
+ }
+
+ if (scsi_sg_count(cmd)) {
+ datasegcnt = dma_map_sg(&unf_lport->low_level_func.dev->dev, scsi_sglist(cmd),
+ (int)scsi_sg_count(cmd), cmd->sc_data_direction);
+ if (unlikely(!datasegcnt)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) scsi_id(0x%x) rport(0x%p) target_id(0x%x) cmd(0x%p), dma map sg err",
+ unf_lport->port_id, scsi_id, rport,
+ rport->scsi_target_id, cmd);
+ unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_BUS_BUSY);
+ return SCSI_MLQUEUE_HOST_BUSY;
+ }
+ }
+
+ /* Construct local SCSI CMND info */
+ unf_init_scsi_cmnd(host, cmd, &scsi_cmd, scsi_image_table, datasegcnt);
+
+ if ((scsi_get_prot_op(cmd) != SCSI_PROT_NORMAL) && spfc_dif_enable) {
+ ret = unf_get_protect_mode(unf_lport, cmd, &scsi_cmd);
+ if (ret != RETURN_OK) {
+ unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_BUS_BUSY);
+ scsi_dma_unmap(cmd);
+ return SCSI_MLQUEUE_HOST_BUSY;
+ }
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) host(0x%x) scsi_id(0x%x) lun(0x%llx) transfer length(0x%x) cmd_len(0x%x) direction(0x%x) cmd(0x%x) under_flow(0x%x) protect_opcode is (0x%x) dif_sgl_mode is %d, sector size(%d)",
+ unf_lport->port_id, host->host_no, scsi_id, (u64)cmd->device->lun,
+ scsi_cmd.transfer_len, scsi_cmd.cmnd_len, cmd->sc_data_direction,
+ scsi_cmd.pcmnd[0], scsi_cmd.under_flow,
+ scsi_cmd.dif_control.protect_opcode, dif_sgl_mode,
+ (cmd->device->sector_size));
+
+ /* Bind the Exchange address corresponding to scsi_cmd to
+ * scsi_cmd->host_scribble
+ */
+ cmd->host_scribble = (unsigned char *)scsi_cmd.cmnd_sn;
+ ret = unf_cm_queue_command(&scsi_cmd);
+ if (ret != RETURN_OK) {
+ unf_rport = unf_find_rport_by_scsi_id(unf_lport, ini_error_code_table1,
+ ini_err_code_table_cnt1,
+ scsi_id, &cmnd_result);
+ rport_state_err = (!unf_rport) ||
+ (unf_rport->lport_ini_state != UNF_PORT_STATE_LINKUP) ||
+ (unf_rport->rp_state == UNF_RPORT_ST_CLOSING);
+ scan_device_cmd = unf_scan_device_cmd(cmd);
+
+		/* For REPORT LUNS or INQUIRY commands, do not retry on failure
+		 * to avoid deadlocking the scsi host scan_mutex
+		 */
+ if (rport_state_err && scan_device_cmd) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) host(0x%x) scsi_id(0x%x) lun(0x%llx) cmd(0x%x) cmResult(0x%x) DID_NO_CONNECT",
+ unf_lport->port_id, host->host_no, scsi_id,
+ (u64)cmd->device->lun, cmd->cmnd[0],
+ cmnd_result);
+ unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
+ scsi_dma_unmap(cmd);
+ unf_unmap_prot_sgl(cmd);
+ return 0;
+ }
+
+ /* Host busy: scsi need to retry */
+ ret = SCSI_MLQUEUE_HOST_BUSY;
+ if (likely(scsi_image_table->wwn_rport_info_table)) {
+ if (likely(scsi_image_table->wwn_rport_info_table[scsi_id].dfx_counter)) {
+ atomic64_inc(&(scsi_image_table->wwn_rport_info_table[scsi_id]
+ .dfx_counter->host_busy));
+ }
+ }
+ scsi_dma_unmap(cmd);
+ unf_unmap_prot_sgl(cmd);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) failed to process INI IO, return(0x%x)",
+ unf_lport->port_id, ret);
+ }
+ return ret;
+}
+
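+/* Build a minimal unf_scsi_cmnd for the error handlers (abort and the LUN,
+ * target and bus reset TMFs); only the abort path needs a done callback.
+ */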
+static void unf_init_abts_tmf_scsi_cmd(struct scsi_cmnd *cmnd,
+ struct unf_scsi_cmnd *scsi_cmd,
+ bool abort_cmd)
+{
+ struct Scsi_Host *scsi_host = NULL;
+
+ scsi_host = cmnd->device->host;
+ scsi_cmd->scsi_host_id = scsi_host->host_no;
+ scsi_cmd->scsi_id = (u32)((u64)cmnd->device->hostdata);
+ scsi_cmd->raw_lun_id = (u64)cmnd->device->lun;
+ scsi_cmd->upper_cmnd = cmnd;
+ scsi_cmd->drv_private = (void *)(*(u64 *)shost_priv(scsi_host));
+ scsi_cmd->cmnd_sn = (u64)(cmnd->host_scribble);
+ scsi_cmd->lun_id = (u8 *)&scsi_cmd->raw_lun_id;
+ if (abort_cmd) {
+ scsi_cmd->done = unf_scsi_done;
+ scsi_cmd->world_id = INVALID_WORLD_ID;
+ }
+}
+
+int unf_scsi_abort_scsi_cmnd(struct scsi_cmnd *cmnd)
+{
+ /* SCSI ABORT Command --->>> FC ABTS */
+ struct unf_scsi_cmnd scsi_cmd = {0};
+ int ret = FAILED;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ struct unf_lport *unf_lport = NULL;
+ u32 scsi_id = 0;
+ u32 err_handle = 0;
+
+ FC_CHECK_RETURN_VALUE(cmnd, FAILED);
+
+ unf_lport = (struct unf_lport *)cmnd->device->host->hostdata[0];
+ scsi_id = (u32)((u64)cmnd->device->hostdata);
+
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ err_handle = UNF_SCSI_ABORT_IO_TYPE;
+ UNF_SCSI_ERROR_HANDLE_CNT(scsi_image_table, scsi_id, err_handle);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[abort]Port(0x%x) scsi_id(0x%x) lun_id(0x%x) cmnd_type(0x%x)",
+ unf_lport->port_id, scsi_id,
+ (u32)cmnd->device->lun, cmnd->cmnd[0]);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Lport(%p) is moving or null", unf_lport);
+ return UNF_SCSI_ABORT_FAIL;
+ }
+
+ /* Check local SCSI_ID validity */
+ if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]scsi_id(0x%x) exceeds the max(0x%x)", scsi_id,
+ UNF_MAX_SCSI_ID);
+ return UNF_SCSI_ABORT_FAIL;
+ }
+
+ /* Block scsi (check rport state -> whether offline or not) */
+ ret = fc_block_scsi_eh(cmnd);
+ if (unlikely(ret != 0)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Block scsi eh failed(0x%x)", ret);
+ return ret;
+ }
+
+ unf_init_abts_tmf_scsi_cmd(cmnd, &scsi_cmd, true);
+ /* Process scsi Abort cmnd */
+ ret = unf_cm_eh_abort_handler(&scsi_cmd);
+ if (ret == UNF_SCSI_ABORT_SUCCESS) {
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ err_handle = UNF_SCSI_ABORT_IO_TYPE;
+ UNF_SCSI_ERROR_HANDLE_RESULT_CNT(scsi_image_table,
+ scsi_id, err_handle);
+ }
+ }
+
+ return ret;
+}
+
+int unf_scsi_device_reset_handler(struct scsi_cmnd *cmnd)
+{
+ /* LUN reset */
+ struct unf_scsi_cmnd scsi_cmd = {0};
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ int ret = FAILED;
+ struct unf_lport *unf_lport = NULL;
+ u32 scsi_id = 0;
+ u32 err_handle = 0;
+
+ FC_CHECK_RETURN_VALUE(cmnd, FAILED);
+
+ unf_lport = (struct unf_lport *)cmnd->device->host->hostdata[0];
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ err_handle = UNF_SCSI_DEVICE_RESET_TYPE;
+ UNF_SCSI_ERROR_HANDLE_CNT(scsi_image_table, scsi_id, err_handle);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[device_reset]Port(0x%x) scsi_id(0x%x) lun_id(0x%x) cmnd_type(0x%x)",
+ unf_lport->port_id, scsi_id, (u32)cmnd->device->lun, cmnd->cmnd[0]);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is invalid");
+
+ return FAILED;
+ }
+
+ /* Check local SCSI_ID validity */
+ scsi_id = (u32)((u64)cmnd->device->hostdata);
+ if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]scsi_id(0x%x) exceeds the max(0x%x)", scsi_id,
+ UNF_MAX_SCSI_ID);
+
+ return FAILED;
+ }
+
+ /* Block scsi (check rport state -> whether offline or not) */
+ ret = fc_block_scsi_eh(cmnd);
+ if (unlikely(ret != 0)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Block scsi eh failed(0x%x)", ret);
+
+ return ret;
+ }
+
+ unf_init_abts_tmf_scsi_cmd(cmnd, &scsi_cmd, false);
+ /* Process scsi device/LUN reset cmnd */
+ ret = unf_cm_eh_device_reset_handler(&scsi_cmd);
+ if (ret == UNF_SCSI_ABORT_SUCCESS) {
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ err_handle = UNF_SCSI_DEVICE_RESET_TYPE;
+ UNF_SCSI_ERROR_HANDLE_RESULT_CNT(scsi_image_table,
+ scsi_id, err_handle);
+ }
+ }
+
+ return ret;
+}
+
+int unf_scsi_bus_reset_handler(struct scsi_cmnd *cmnd)
+{
+ /* BUS Reset */
+ struct unf_scsi_cmnd scsi_cmd = {0};
+ struct unf_lport *unf_lport = NULL;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ int ret = FAILED;
+ u32 scsi_id = 0;
+ u32 err_handle = 0;
+
+ FC_CHECK_RETURN_VALUE(cmnd, FAILED);
+
+ unf_lport = (struct unf_lport *)cmnd->device->host->hostdata[0];
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port is null");
+
+ return FAILED;
+ }
+
+ /* Check local SCSI_ID validity */
+ scsi_id = (u32)((u64)cmnd->device->hostdata);
+ if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]scsi_id(0x%x) exceeds the max(0x%x)", scsi_id,
+ UNF_MAX_SCSI_ID);
+
+ return FAILED;
+ }
+
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ err_handle = UNF_SCSI_BUS_RESET_TYPE;
+ UNF_SCSI_ERROR_HANDLE_CNT(scsi_image_table, scsi_id, err_handle);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info][bus_reset]Port(0x%x) scsi_id(0x%x) lun_id(0x%x) cmnd_type(0x%x)",
+ unf_lport->port_id, scsi_id, (u32)cmnd->device->lun,
+ cmnd->cmnd[0]);
+ }
+
+ /* Block scsi (check rport state -> whether offline or not) */
+ ret = fc_block_scsi_eh(cmnd);
+ if (unlikely(ret != 0)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Block scsi eh failed(0x%x)", ret);
+
+ return ret;
+ }
+
+ unf_init_abts_tmf_scsi_cmd(cmnd, &scsi_cmd, false);
+ /* Process scsi BUS Reset cmnd */
+ ret = unf_cm_bus_reset_handler(&scsi_cmd);
+ if (ret == UNF_SCSI_ABORT_SUCCESS) {
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ err_handle = UNF_SCSI_BUS_RESET_TYPE;
+ UNF_SCSI_ERROR_HANDLE_RESULT_CNT(scsi_image_table, scsi_id, err_handle);
+ }
+ }
+
+ return ret;
+}
+
+int unf_scsi_target_reset_handler(struct scsi_cmnd *cmnd)
+{
+ /* Session reset/delete */
+ struct unf_scsi_cmnd scsi_cmd = {0};
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+ int ret = FAILED;
+ struct unf_lport *unf_lport = NULL;
+ u32 scsi_id = 0;
+ u32 err_handle = 0;
+
+ FC_CHECK_RETURN_VALUE(cmnd, RETURN_ERROR);
+
+ unf_lport = (struct unf_lport *)cmnd->device->host->hostdata[0];
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port is null");
+
+ return FAILED;
+ }
+
+ /* Check local SCSI_ID validity */
+ scsi_id = (u32)((u64)cmnd->device->hostdata);
+ if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]scsi_id(0x%x) exceeds the max(0x%x)", scsi_id, UNF_MAX_SCSI_ID);
+
+ return FAILED;
+ }
+
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ err_handle = UNF_SCSI_TARGET_RESET_TYPE;
+ UNF_SCSI_ERROR_HANDLE_CNT(scsi_image_table, scsi_id, err_handle);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[target_reset]Port(0x%x) scsi_id(0x%x) lun_id(0x%x) cmnd_type(0x%x)",
+ unf_lport->port_id, scsi_id, (u32)cmnd->device->lun, cmnd->cmnd[0]);
+ }
+
+ /* Block scsi (check rport state -> whether offline or not) */
+ ret = fc_block_scsi_eh(cmnd);
+ if (unlikely(ret != 0)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Block scsi eh failed(0x%x)", ret);
+
+ return ret;
+ }
+
+ unf_init_abts_tmf_scsi_cmd(cmnd, &scsi_cmd, false);
+ /* Process scsi Target/Session reset/delete cmnd */
+ ret = unf_cm_target_reset_handler(&scsi_cmd);
+ if (ret == UNF_SCSI_ABORT_SUCCESS) {
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ err_handle = UNF_SCSI_TARGET_RESET_TYPE;
+ UNF_SCSI_ERROR_HANDLE_RESULT_CNT(scsi_image_table, scsi_id, err_handle);
+ }
+ }
+
+ return ret;
+}
+
+static int unf_scsi_slave_alloc(struct scsi_device *sdev)
+{
+ struct fc_rport *rport = NULL;
+ u32 scsi_id = 0;
+ struct unf_lport *unf_lport = NULL;
+ struct Scsi_Host *host = NULL;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+
+ /* About device */
+ if (unlikely(!sdev)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]SDev is null");
+ return -ENXIO;
+ }
+
+ /* About scsi rport */
+ rport = starget_to_rport(scsi_target(sdev));
+ if (unlikely(!rport || fc_remote_port_chkready(rport))) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]SCSI rport is null");
+
+ if (rport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]SCSI rport is not ready(0x%x)",
+ fc_remote_port_chkready(rport));
+ }
+
+ return -ENXIO;
+ }
+
+ /* About host */
+ host = rport_to_shost(rport);
+ if (unlikely(!host)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Host is null");
+
+ return -ENXIO;
+ }
+
+ /* About Local Port */
+ unf_lport = (struct unf_lport *)host->hostdata[0];
+ if (unf_is_lport_valid(unf_lport) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is invalid");
+
+ return -ENXIO;
+ }
+
+ /* About Local SCSI_ID */
+	scsi_id = *(u32 *)rport->dd_data;
+ if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]scsi_id(0x%x) exceeds the max(0x%x)", scsi_id, UNF_MAX_SCSI_ID);
+
+ return -ENXIO;
+ }
+
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ if (scsi_image_table->wwn_rport_info_table[scsi_id].dfx_counter) {
+ atomic_inc(&scsi_image_table->wwn_rport_info_table[scsi_id]
+ .dfx_counter->device_alloc);
+ }
+ atomic_inc(&unf_lport->device_alloc);
+ sdev->hostdata = (void *)(u64)scsi_id;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) use scsi_id(%u) to alloc device[%u:%u:%u:%u]",
+ unf_lport->port_id, scsi_id, host->host_no, sdev->channel, sdev->id,
+ (u32)sdev->lun);
+
+ return 0;
+}
+
+static void unf_scsi_destroy_slave(struct scsi_device *sdev)
+{
+ /*
+ * NOTE: about sdev->hostdata
+ * --->>> pointing to local SCSI_ID
+ * 1. Assignment during slave allocation
+ * 2. Released when callback for slave destroy
+ * 3. Used during: Queue_CMND, Abort CMND, Device Reset, Target Reset &
+ * Bus Reset
+ */
+ struct fc_rport *rport = NULL;
+ u32 scsi_id = 0;
+ struct unf_lport *unf_lport = NULL;
+ struct Scsi_Host *host = NULL;
+ struct unf_rport_scsi_id_image *scsi_image_table = NULL;
+
+ /* About scsi device */
+ if (unlikely(!sdev)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]SDev is null");
+
+ return;
+ }
+
+ /* About scsi rport */
+ rport = starget_to_rport(scsi_target(sdev));
+ if (unlikely(!rport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]SCSI rport is null or remote port is not ready");
+ return;
+ }
+
+ /* About host */
+ host = rport_to_shost(rport);
+ if (unlikely(!host)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Host is null");
+
+ return;
+ }
+
+ /* About L_Port */
+ unf_lport = (struct unf_lport *)host->hostdata[0];
+ if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
+ scsi_image_table = &unf_lport->rport_scsi_table;
+ atomic_inc(&unf_lport->device_destroy);
+
+ scsi_id = (u32)((u64)sdev->hostdata);
+ if (scsi_id < UNF_MAX_SCSI_ID && scsi_image_table->wwn_rport_info_table) {
+ if (scsi_image_table->wwn_rport_info_table[scsi_id].dfx_counter) {
+ atomic_inc(&scsi_image_table->wwn_rport_info_table[scsi_id]
+ .dfx_counter->device_destroy);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) with scsi_id(%u) to destroy slave device[%u:%u:%u:%u]",
+ unf_lport->port_id, scsi_id, host->host_no,
+ sdev->channel, sdev->id, (u32)sdev->lun);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+				     "[warn]Port(0x%x) scsi_id(%u) is invalid while destroying device[%u:%u:%u:%u]",
+ unf_lport->port_id, scsi_id, host->host_no,
+ sdev->channel, sdev->id, (u32)sdev->lun);
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(%p) is invalid", unf_lport);
+ }
+
+ sdev->hostdata = NULL;
+}
+
+static int unf_scsi_slave_configure(struct scsi_device *sdev)
+{
+#define UNF_SCSI_DEV_DEPTH 32
+ blk_queue_update_dma_alignment(sdev->request_queue, 0x7);
+
+ scsi_change_queue_depth(sdev, UNF_SCSI_DEV_DEPTH);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[event]Enter slave configure, set depth is %d, sdev->tagged_supported is (%d)",
+ UNF_SCSI_DEV_DEPTH, sdev->tagged_supported);
+
+ return 0;
+}
+
+static int unf_scsi_scan_finished(struct Scsi_Host *shost, unsigned long time)
+{
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[event]Scan finished");
+
+ return 1;
+}
+
+static void unf_scsi_scan_start(struct Scsi_Host *shost)
+{
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[event]Start scsi scan...");
+}
+
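+/* Initialize the fc_host transport attributes: WWNN/WWPN, dev_loss_tmo, NPIV
+ * vport capacity and the supported speed mask derived from the board type.
+ */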
+void unf_host_init_attr_setting(struct Scsi_Host *scsi_host)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 speed = FC_PORTSPEED_UNKNOWN;
+
+ unf_lport = (struct unf_lport *)scsi_host->hostdata[0];
+ fc_host_supported_classes(scsi_host) = FC_COS_CLASS3;
+ fc_host_dev_loss_tmo(scsi_host) = (u32)unf_get_link_lose_tmo(unf_lport);
+ fc_host_node_name(scsi_host) = unf_lport->node_name;
+ fc_host_port_name(scsi_host) = unf_lport->port_name;
+
+ fc_host_max_npiv_vports(scsi_host) = (u16)((unf_lport == unf_lport->root_lport) ?
+ unf_lport->low_level_func.support_max_npiv_num
+ : 0);
+ fc_host_npiv_vports_inuse(scsi_host) = 0;
+ fc_host_next_vport_number(scsi_host) = 0;
+
+ /* About speed mode */
+ if (unf_lport->low_level_func.fc_ser_max_speed == UNF_PORT_SPEED_32_G &&
+ unf_lport->card_type == UNF_FC_SERVER_BOARD_32_G) {
+ speed = FC_PORTSPEED_32GBIT | FC_PORTSPEED_16GBIT | FC_PORTSPEED_8GBIT;
+ } else if (unf_lport->low_level_func.fc_ser_max_speed == UNF_PORT_SPEED_16_G &&
+ unf_lport->card_type == UNF_FC_SERVER_BOARD_16_G) {
+ speed = FC_PORTSPEED_16GBIT | FC_PORTSPEED_8GBIT | FC_PORTSPEED_4GBIT;
+ } else if (unf_lport->low_level_func.fc_ser_max_speed == UNF_PORT_SPEED_8_G &&
+ unf_lport->card_type == UNF_FC_SERVER_BOARD_8_G) {
+ speed = FC_PORTSPEED_8GBIT | FC_PORTSPEED_4GBIT | FC_PORTSPEED_2GBIT;
+ }
+
+ fc_host_supported_speeds(scsi_host) = speed;
+}
+
+int unf_alloc_scsi_host(struct Scsi_Host **unf_scsi_host,
+ struct unf_host_param *host_param)
+{
+ int ret = RETURN_ERROR;
+ struct Scsi_Host *scsi_host = NULL;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(unf_scsi_host, RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(host_param, RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR, "[event]Alloc scsi host...");
+
+ /* Check L_Port validity */
+ unf_lport = (struct unf_lport *)(host_param->lport);
+ if (unlikely(!unf_lport)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port is NULL and return directly");
+
+ return RETURN_ERROR;
+ }
+
+ scsi_host_template.can_queue = host_param->can_queue;
+ scsi_host_template.cmd_per_lun = host_param->cmnd_per_lun;
+ scsi_host_template.sg_tablesize = host_param->sg_table_size;
+ scsi_host_template.max_sectors = host_param->max_sectors;
+
+ /* Alloc scsi host */
+ scsi_host = scsi_host_alloc(&scsi_host_template, sizeof(u64));
+ if (unlikely(!scsi_host)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Register scsi host failed");
+
+ return RETURN_ERROR;
+ }
+
+ scsi_host->max_channel = host_param->max_channel;
+ scsi_host->max_lun = host_param->max_lun;
+ scsi_host->max_cmd_len = host_param->max_cmnd_len;
+ scsi_host->unchecked_isa_dma = 0;
+ scsi_host->hostdata[0] = (unsigned long)(uintptr_t)unf_lport; /* save lport to scsi */
+ scsi_host->unique_id = scsi_host->host_no;
+ scsi_host->max_id = host_param->max_id;
+ scsi_host->transportt = (unf_lport == unf_lport->root_lport)
+ ? scsi_transport_template
+ : scsi_transport_template_v;
+
+ /* register DIF/DIX protection */
+ if (spfc_dif_enable) {
+ /* Enable DIF and DIX function */
+ scsi_host_set_prot(scsi_host, spfc_dif_type);
+
+ spfc_guard = SHOST_DIX_GUARD_CRC;
+ /* Enable IP checksum algorithm in DIX */
+ if (dix_flag)
+ spfc_guard |= SHOST_DIX_GUARD_IP;
+ scsi_host_set_guard(scsi_host, spfc_guard);
+ }
+
+ /* Add scsi host */
+ ret = scsi_add_host(scsi_host, host_param->pdev);
+ if (unlikely(ret)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Add scsi host failed with return value %d", ret);
+
+ scsi_host_put(scsi_host);
+ return RETURN_ERROR;
+ }
+
+ /* Set scsi host attribute */
+ unf_host_init_attr_setting(scsi_host);
+ *unf_scsi_host = scsi_host;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[event]Alloc and add scsi host(0x%llx) succeed",
+ (u64)scsi_host);
+
+ return RETURN_OK;
+}
+
+void unf_free_scsi_host(struct Scsi_Host *unf_scsi_host)
+{
+ struct Scsi_Host *scsi_host = NULL;
+
+ scsi_host = unf_scsi_host;
+ fc_remove_host(scsi_host);
+ scsi_remove_host(scsi_host);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[event]Remove scsi host(%u) succeed", scsi_host->host_no);
+
+ scsi_host_put(scsi_host);
+}
+
+u32 unf_register_ini_transport(void)
+{
+ /* Register INI Transport */
+ scsi_transport_template = fc_attach_transport(&function_template);
+
+ if (!scsi_transport_template) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Register FC transport to scsi failed");
+
+ return RETURN_ERROR;
+ }
+
+ scsi_transport_template_v = fc_attach_transport(&function_template_v);
+ if (!scsi_transport_template_v) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Register FC vport transport to scsi failed");
+
+ fc_release_transport(scsi_transport_template);
+
+ return RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[event]Register FC transport to scsi succeed");
+
+ return RETURN_OK;
+}
+
+void unf_unregister_ini_transport(void)
+{
+ fc_release_transport(scsi_transport_template);
+ fc_release_transport(scsi_transport_template_v);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[event]Unregister FC transport succeed");
+}
+
+void unf_save_sense_data(void *scsi_cmd, const char *sense, int sens_len)
+{
+ struct scsi_cmnd *cmd = NULL;
+
+ FC_CHECK_RETURN_VOID(scsi_cmd);
+ FC_CHECK_RETURN_VOID(sense);
+
+ cmd = (struct scsi_cmnd *)scsi_cmd;
+ memcpy(cmd->sense_buffer, sense, sens_len);
+}
+
diff --git a/drivers/scsi/spfc/common/unf_scsi_common.h b/drivers/scsi/spfc/common/unf_scsi_common.h
new file mode 100644
index 000000000000..c73b5c3d56ce
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_scsi_common.h
@@ -0,0 +1,570 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_SCSI_COMMON
+#define UNF_SCSI_COMMON
+
+#include "unf_type.h"
+
+#define SCSI_SENSE_DATA_LEN 96
+#define DRV_SCSI_CDB_LEN 16
+#define DRV_SCSI_LUN_LEN 8
+
+#define DRV_ENTRY_PER_SGL 64 /* Number of SGE entries in one SGL */
+
+#define UNF_DIF_AREA_SIZE (8)
+
+struct unf_dif_control_info {
+ u16 app_tag;
+ u16 flags;
+ u32 protect_opcode;
+ u32 fcp_dl;
+ u32 start_lba;
+ u8 actual_dif[UNF_DIF_AREA_SIZE];
+ u8 expected_dif[UNF_DIF_AREA_SIZE];
+ u32 dif_sge_count;
+ void *dif_sgl;
+};
+
+struct dif_result_info {
+ unsigned char actual_idf[UNF_DIF_AREA_SIZE];
+ unsigned char expect_dif[UNF_DIF_AREA_SIZE];
+};
+
+struct drv_sge {
+ char *buf;
+ void *page_ctrl;
+ u32 Length;
+ u32 offset;
+};
+
+struct drv_scsi_cmd_result {
+ u32 Status;
+ u16 sense_data_length; /* sense data length */
+ u8 sense_data[SCSI_SENSE_DATA_LEN]; /* fail sense info */
+};
+
+enum drv_io_direction {
+ DRV_IO_BIDIRECTIONAL = 0,
+ DRV_IO_DIRECTION_WRITE = 1,
+ DRV_IO_DIRECTION_READ = 2,
+ DRV_IO_DIRECTION_NONE = 3,
+};
+
+struct drv_sgl {
+	struct drv_sgl *next_sgl; /* points to the next SGL in the SGL list */
+ unsigned short num_sges_in_chain;
+ unsigned short num_sges_in_sgl;
+ u32 flag;
+ u64 serial_num;
+ struct drv_sge sge[DRV_ENTRY_PER_SGL];
+ struct list_head node;
+ u32 cpu_id;
+};
+
+struct dif_info {
+	/* Result returned when the data protection information is
+	 * inconsistent (added by pangea)
+	 */
+ struct dif_result_info dif_result;
+	/* Data protection information operation code:
+	 * bit[31-24] other operation code
+	 * bit[23-16] data protection information operation
+	 * bit[15-8]  data protection information verification
+	 * bit[7-0]   data protection information replacement
+	 */
+ u32 protect_opcode;
+ unsigned short apptag;
+ u64 start_lba; /* IO start LBA */
+ struct drv_sgl *protection_sgl;
+};
+
+struct drv_device_address {
+ u16 initiator_id; /* ini id */
+ u16 bus_id; /* device bus id */
+ u16 target_id; /* device target id,for PCIe SSD,device id */
+ u16 function_id; /* function id */
+};
+
+struct drv_ini_cmd {
+ struct drv_scsi_cmd_result result;
+ void *upper; /* product private pointer */
+ void *lower; /* driver private pointer */
+ u8 cdb[DRV_SCSI_CDB_LEN]; /* CDB edit by product */
+ u8 lun[DRV_SCSI_LUN_LEN];
+ u16 cmd_len;
+ u16 tag; /* SCSI cmd add by driver */
+ enum drv_io_direction io_direciton;
+ u32 data_length;
+ u32 underflow;
+ u32 overflow;
+ u32 resid;
+ u64 port_id;
+ u64 cmd_sn;
+ struct drv_device_address addr;
+ struct drv_sgl *sgl;
+ void *device;
+ void (*done)(struct drv_ini_cmd *cmd); /* callback pointer */
+ struct dif_info dif_info;
+};
+
+typedef void (*uplevel_cmd_done)(struct drv_ini_cmd *scsi_cmnd);
+
+#ifndef SUCCESS
+#define SUCCESS 0x2002
+#endif
+#ifndef FAILED
+#define FAILED 0x2003
+#endif
+
+#ifndef DRIVER_OK
+#define DRIVER_OK 0x00 /* Driver status */
+#endif
+
+#ifndef PCI_FUNC
+#define PCI_FUNC(devfn) ((devfn) & 0x07)
+#endif
+
+#define UNF_SCSI_ABORT_SUCCESS SUCCESS
+#define UNF_SCSI_ABORT_FAIL FAILED
+
+#define UNF_SCSI_STATUS(byte) (byte)
+#define UNF_SCSI_MSG(byte) ((byte) << 8)
+#define UNF_SCSI_HOST(byte) ((byte) << 16)
+#define UNF_SCSI_DRIVER(byte) ((byte) << 24)
+#define UNF_GET_SCSI_HOST_ID(scsi_host) ((scsi_host)->host_no)
+
+struct unf_ini_error_code {
+ u32 drv_errcode; /* driver error code */
+ u32 ap_errcode; /* up level error code */
+};
+
+typedef u32 (*ini_get_sgl_entry_buf)(void *upper_cmnd, void *driver_sgl,
+ void **upper_sgl, u32 *req_index,
+ u32 *index, char **buf,
+ u32 *buf_len);
+
+#define UNF_SCSI_SENSE_BUFFERSIZE 96
+struct unf_scsi_cmnd {
+ u32 scsi_host_id;
+ u32 scsi_id; /* cmd->dev->id */
+ u64 raw_lun_id;
+ u64 port_id;
+ u32 under_flow; /* Underflow */
+ u32 transfer_len; /* Transfer Length */
+ u32 resid; /* Resid */
+ u32 sense_buflen;
+ int result;
+ u32 entry_count; /* IO Buffer counter */
+ u32 abort;
+ u32 err_code_table_cout; /* error code size */
+ u64 cmnd_sn;
+ ulong time_out; /* EPL driver add timer */
+ u16 cmnd_len; /* Cdb length */
+ u8 data_direction; /* data direction */
+ u8 *pcmnd; /* SCSI CDB */
+ u8 *sense_buf;
+	void *drv_private; /* driver host pointer */
+	void *driver_scribble; /* Xchg pointer */
+ void *upper_cmnd; /* UpperCmnd pointer by driver */
+ u8 *lun_id; /* new lunid */
+ u32 world_id;
+ struct unf_dif_control_info dif_control; /* DIF control */
+ struct unf_ini_error_code *err_code_table; /* error code table */
+ void *sgl; /* Sgl pointer */
+ ini_get_sgl_entry_buf unf_ini_get_sgl_entry;
+ void (*done)(struct unf_scsi_cmnd *cmd);
+ uplevel_cmd_done uplevel_done;
+ struct dif_info dif_info;
+ u32 qos_level;
+ void *pinitiator;
+};
+
+#ifndef FC_PORTSPEED_32GBIT
+#define FC_PORTSPEED_32GBIT 0x40
+#endif
+
+#define UNF_GID_PORT_CNT 2048
+#define UNF_RSCN_PAGE_SUM 255
+
+#define UNF_CPU_ENDIAN
+
+#define UNF_NPORTID_MASK 0x00FFFFFF
+#define UNF_DOMAIN_MASK 0x00FF0000
+#define UNF_AREA_MASK 0x0000FF00
+#define UNF_ALPA_MASK 0x000000FF
+
+struct unf_fc_head {
+ u32 rctl_did; /* Routing control and Destination address of the seq */
+ u32 csctl_sid; /* Class control and Source address of the sequence */
+	/* Data type and initial frame control value of the sequence */
+	u32 type_fctl;
+	u32 seqid_dfctl_seqcnt; /* Seq ID, Data Field and Initial seq count */
+	/* Originator & Responder exchange IDs for the sequence */
+	u32 oxid_rxid;
+ u32 parameter; /* Relative offset of the first frame of the sequence */
+};
+
+#define UNF_FCPRSP_CTL_LEN (24)
+#define UNF_MAX_RSP_INFO_LEN (8)
+#define UNF_RSP_LEN_VLD (1 << 0)
+#define UNF_SENSE_LEN_VLD (1 << 1)
+#define UNF_RESID_OVERRUN (1 << 2)
+#define UNF_RESID_UNDERRUN (1 << 3)
+#define UNF_FCP_CONF_REQ (1 << 4)
+
+/* T10: FCP2r.07 9.4.1 Overview and format of FCP_RSP IU */
+struct unf_fcprsp_iu {
+ u32 reserved[2];
+ u8 reserved2[2];
+ u8 control;
+ u8 fcp_status;
+ u32 fcp_residual;
+ u32 fcp_sense_len; /* Length of sense info field */
+	/* Length of response info field in bytes: 0, 4 or 8 */
+	u32 fcp_response_len;
+ u8 fcp_resp_info[UNF_MAX_RSP_INFO_LEN]; /* Buffer for response info */
+ u8 fcp_sense_info[SCSI_SENSE_DATA_LEN]; /* Buffer for sense info */
+} __attribute__((packed));
+
+#define UNF_CMD_REF_MASK 0xFF000000
+#define UNF_TASK_ATTR_MASK 0x00070000
+#define UNF_TASK_MGMT_MASK 0x0000FF00
+#define UNF_FCP_WR_DATA 0x00000001
+#define UNF_FCP_RD_DATA 0x00000002
+#define UNF_CDB_LEN_MASK 0x0000007C
+#define UNF_FCP_CDB_LEN_16 (16)
+#define UNF_FCP_CDB_LEN_32 (32)
+#define UNF_FCP_LUNID_LEN_8 (8)
+
+/* FCP-4 :Table 27 - RSP_CODE field */
+#define UNF_FCP_TM_RSP_COMPLETE (0)
+#define UNF_FCP_TM_INVALID_CMND (0x2)
+#define UNF_FCP_TM_RSP_REJECT (0x4)
+#define UNF_FCP_TM_RSP_FAIL (0x5)
+#define UNF_FCP_TM_RSP_SUCCEED (0x8)
+#define UNF_FCP_TM_RSP_INCRECT_LUN (0x9)
+
+#define UNF_SET_TASK_MGMT_FLAGS(fcp_tm_code) ((fcp_tm_code) << 8)
+#define UNF_GET_TASK_MGMT_FLAGS(control) (((control) & UNF_TASK_MGMT_MASK) >> 8)
+
+enum unf_task_mgmt_cmd {
+ UNF_FCP_TM_QUERY_TASK_SET = (1 << 0),
+ UNF_FCP_TM_ABORT_TASK_SET = (1 << 1),
+ UNF_FCP_TM_CLEAR_TASK_SET = (1 << 2),
+ UNF_FCP_TM_QUERY_UNIT_ATTENTION = (1 << 3),
+ UNF_FCP_TM_LOGICAL_UNIT_RESET = (1 << 4),
+ UNF_FCP_TM_TARGET_RESET = (1 << 5),
+ UNF_FCP_TM_CLEAR_ACA = (1 << 6),
+ UNF_FCP_TM_TERMINATE_TASK = (1 << 7) /* obsolete */
+};
+
+struct unf_fcp_cmnd {
+ u8 lun[UNF_FCP_LUNID_LEN_8]; /* Logical unit number */
+ u32 control;
+ u8 cdb[UNF_FCP_CDB_LEN_16]; /* Payload data containing cdb info */
+ u32 data_length; /* Number of bytes expected to be transferred */
+} __attribute__((packed));
+
+struct unf_fcp_cmd_hdr {
+ struct unf_fc_head frame_hdr; /* FCHS structure */
+ struct unf_fcp_cmnd fcp_cmnd; /* Fcp Cmnd struct */
+};
+
+/* FC-LS-2 Common Service Parameter applicability */
+struct unf_fabric_coparm {
+#if defined(UNF_CPU_ENDIAN)
+ u32 bb_credit : 16; /* 0 [0-15] */
+ u32 lowest_version : 8; /* 0 [16-23] */
+ u32 highest_version : 8; /* 0 [24-31] */
+#else
+ u32 highest_version : 8; /* 0 [24-31] */
+ u32 lowest_version : 8; /* 0 [16-23] */
+ u32 bb_credit : 16; /* 0 [0-15] */
+#endif
+
+#if defined(UNF_CPU_ENDIAN)
+ u32 bb_receive_data_field_size : 12; /* 1 [0-11] */
+ u32 bbscn : 4; /* 1 [12-15] */
+ u32 payload_length : 1; /* 1 [16] */
+ u32 seq_cnt : 1; /* 1 [17] */
+ u32 dynamic_half_duplex : 1; /* 1 [18] */
+ u32 r_t_tov : 1; /* 1 [19] */
+ u32 reserved_co2 : 6; /* 1 [20-25] */
+ u32 e_d_tov_resolution : 1; /* 1 [26] */
+ u32 alternate_bb_credit_mgmt : 1; /* 1 [27] */
+ u32 nport : 1; /* 1 [28] */
+ u32 mnid_assignment : 1; /* 1 [29] */
+ u32 random_relative_offset : 1; /* 1 [30] */
+ u32 clean_address : 1; /* 1 [31] */
+#else
+ u32 reserved_co2 : 2; /* 1 [24-25] */
+ u32 e_d_tov_resolution : 1; /* 1 [26] */
+ u32 alternate_bb_credit_mgmt : 1; /* 1 [27] */
+ u32 nport : 1; /* 1 [28] */
+ u32 mnid_assignment : 1; /* 1 [29] */
+ u32 random_relative_offset : 1; /* 1 [30] */
+ u32 clean_address : 1; /* 1 [31] */
+
+ u32 payload_length : 1; /* 1 [16] */
+ u32 seq_cnt : 1; /* 1 [17] */
+ u32 dynamic_half_duplex : 1; /* 1 [18] */
+ u32 r_t_tov : 1; /* 1 [19] */
+ u32 reserved_co5 : 4; /* 1 [20-23] */
+
+ u32 bb_receive_data_field_size : 12; /* 1 [0-11] */
+ u32 bbscn : 4; /* 1 [12-15] */
+#endif
+ u32 r_a_tov; /* 2 [0-31] */
+ u32 e_d_tov; /* 3 [0-31] */
+};
+
+/* FC-LS-2 Common Service Parameter applicability */
+/* Common Service Parameters - PLOGI and PLOGI LS_ACC */
+struct lgn_port_coparm {
+#if defined(UNF_CPU_ENDIAN)
+ u32 bb_credit : 16; /* 0 [0-15] */
+ u32 lowest_version : 8; /* 0 [16-23] */
+ u32 highest_version : 8; /* 0 [24-31] */
+#else
+ u32 highest_version : 8; /* 0 [24-31] */
+ u32 lowest_version : 8; /* 0 [16-23] */
+ u32 bb_credit : 16; /* 0 [0-15] */
+#endif
+
+#if defined(UNF_CPU_ENDIAN)
+ u32 bb_receive_data_field_size : 12; /* 1 [0-11] */
+ u32 bbscn : 4; /* 1 [12-15] */
+ u32 payload_length : 1; /* 1 [16] */
+ u32 seq_cnt : 1; /* 1 [17] */
+ u32 dynamic_half_duplex : 1; /* 1 [18] */
+ u32 reserved_co2 : 7; /* 1 [19-25] */
+ u32 e_d_tov_resolution : 1; /* 1 [26] */
+ u32 alternate_bb_credit_mgmt : 1; /* 1 [27] */
+ u32 nport : 1; /* 1 [28] */
+ u32 vendor_version_level : 1; /* 1 [29] */
+ u32 random_relative_offset : 1; /* 1 [30] */
+ u32 continuously_increasing : 1; /* 1 [31] */
+#else
+ u32 reserved_co2 : 2; /* 1 [24-25] */
+ u32 e_d_tov_resolution : 1; /* 1 [26] */
+ u32 alternate_bb_credit_mgmt : 1; /* 1 [27] */
+ u32 nport : 1; /* 1 [28] */
+ u32 vendor_version_level : 1; /* 1 [29] */
+ u32 random_relative_offset : 1; /* 1 [30] */
+ u32 continuously_increasing : 1; /* 1 [31] */
+
+ u32 payload_length : 1; /* 1 [16] */
+ u32 seq_cnt : 1; /* 1 [17] */
+ u32 dynamic_half_duplex : 1; /* 1 [18] */
+ u32 reserved_co5 : 5; /* 1 [19-23] */
+
+ u32 bb_receive_data_field_size : 12; /* 1 [0-11] */
+ u32 reserved_co1 : 4; /* 1 [12-15] */
+#endif
+
+#if defined(UNF_CPU_ENDIAN)
+ u32 relative_offset : 16; /* 2 [0-15] */
+ u32 nport_total_concurrent_sequences : 16; /* 2 [16-31] */
+#else
+ u32 nport_total_concurrent_sequences : 16; /* 2 [16-31] */
+ u32 relative_offset : 16; /* 2 [0-15] */
+#endif
+
+ u32 e_d_tov;
+};
+
+/* FC-LS-2 Class Service Parameters Applicability */
+struct unf_lgn_port_clparm {
+#if defined(UNF_CPU_ENDIAN)
+ u32 reserved_cl1 : 6; /* 0 [0-5] */
+ u32 ic_data_compression_history_buffer_size : 2; /* 0 [6-7] */
+ u32 ic_data_compression_capable : 1; /* 0 [8] */
+
+ u32 ic_ack_generation_assistance : 1; /* 0 [9] */
+ u32 ic_ack_n_capable : 1; /* 0 [10] */
+ u32 ic_ack_o_capable : 1; /* 0 [11] */
+ u32 ic_initial_responder_processes_accociator : 2; /* 0 [12-13] */
+ u32 ic_x_id_reassignment : 2; /* 0 [14-15] */
+
+ u32 reserved_cl2 : 7; /* 0 [16-22] */
+ u32 priority : 1; /* 0 [23] */
+ u32 buffered_class : 1; /* 0 [24] */
+ u32 camp_on : 1; /* 0 [25] */
+ u32 dedicated_simplex : 1; /* 0 [26] */
+ u32 sequential_delivery : 1; /* 0 [27] */
+ u32 stacked_connect_request : 2; /* 0 [28-29] */
+ u32 intermix_mode : 1; /* 0 [30] */
+ u32 valid : 1; /* 0 [31] */
+#else
+ u32 buffered_class : 1; /* 0 [24] */
+ u32 camp_on : 1; /* 0 [25] */
+ u32 dedicated_simplex : 1; /* 0 [26] */
+ u32 sequential_delivery : 1; /* 0 [27] */
+ u32 stacked_connect_request : 2; /* 0 [28-29] */
+ u32 intermix_mode : 1; /* 0 [30] */
+ u32 valid : 1; /* 0 [31] */
+ u32 reserved_cl2 : 7; /* 0 [16-22] */
+ u32 priority : 1; /* 0 [23] */
+ u32 ic_data_compression_capable : 1; /* 0 [8] */
+ u32 ic_ack_generation_assistance : 1; /* 0 [9] */
+ u32 ic_ack_n_capable : 1; /* 0 [10] */
+ u32 ic_ack_o_capable : 1; /* 0 [11] */
+ u32 ic_initial_responder_processes_accociator : 2; /* 0 [12-13] */
+ u32 ic_x_id_reassignment : 2; /* 0 [14-15] */
+
+ u32 reserved_cl1 : 6; /* 0 [0-5] */
+ u32 ic_data_compression_history_buffer_size : 2; /* 0 [6-7] */
+#endif
+
+#if defined(UNF_CPU_ENDIAN)
+ u32 received_data_field_size : 16; /* 1 [0-15] */
+
+ u32 reserved_cl3 : 5; /* 1 [16-20] */
+ u32 rc_data_compression_history_buffer_size : 2; /* 1 [21-22] */
+ u32 rc_data_compression_capable : 1; /* 1 [23] */
+
+ u32 rc_data_categories_per_sequence : 2; /* 1 [24-25] */
+ u32 reserved_cl4 : 1; /* 1 [26] */
+ u32 rc_error_policy_supported : 2; /* 1 [27-28] */
+ u32 rc_x_id_interlock : 1; /* 1 [29] */
+ u32 rc_ack_n_capable : 1; /* 1 [30] */
+ u32 rc_ack_o_capable : 1; /* 1 [31] */
+#else
+ u32 rc_data_categories_per_sequence : 2; /* 1 [24-25] */
+ u32 reserved_cl4 : 1; /* 1 [26] */
+ u32 rc_error_policy_supported : 2; /* 1 [27-28] */
+ u32 rc_x_id_interlock : 1; /* 1 [29] */
+ u32 rc_ack_n_capable : 1; /* 1 [30] */
+ u32 rc_ack_o_capable : 1; /* 1 [31] */
+
+ u32 reserved_cl3 : 5; /* 1 [16-20] */
+ u32 rc_data_compression_history_buffer_size : 2; /* 1 [21-22] */
+ u32 rc_data_compression_capable : 1; /* 1 [23] */
+
+ u32 received_data_field_size : 16; /* 1 [0-15] */
+#endif
+
+#if defined(UNF_CPU_ENDIAN)
+ u32 nport_end_to_end_credit : 15; /* 2 [0-14] */
+ u32 reserved_cl5 : 1; /* 2 [15] */
+
+ u32 concurrent_sequences : 16; /* 2 [16-31] */
+#else
+ u32 concurrent_sequences : 16; /* 2 [16-31] */
+
+ u32 nport_end_to_end_credit : 15; /* 2 [0-14] */
+ u32 reserved_cl5 : 1; /* 2 [15] */
+#endif
+
+#if defined(UNF_CPU_ENDIAN)
+ u32 reserved_cl6 : 16; /* 3 [0-15] */
+ u32 open_sequence_per_exchange : 16; /* 3 [16-31] */
+#else
+ u32 open_sequence_per_exchange : 16; /* 3 [16-31] */
+ u32 reserved_cl6 : 16; /* 3 [0-15] */
+#endif
+};
+
+struct unf_fabric_parm {
+ struct unf_fabric_coparm co_parms;
+ u32 high_port_name;
+ u32 low_port_name;
+ u32 high_node_name;
+ u32 low_node_name;
+ struct unf_lgn_port_clparm cl_parms[3];
+ u32 reserved_1[4];
+ u32 vendor_version_level[4];
+};
+
+struct unf_lgn_parm {
+ struct lgn_port_coparm co_parms;
+ u32 high_port_name;
+ u32 low_port_name;
+ u32 high_node_name;
+ u32 low_node_name;
+ struct unf_lgn_port_clparm cl_parms[3];
+ u32 reserved_1[4];
+ u32 vendor_version_level[4];
+};
+
+#define ELS_RJT 0x1
+#define ELS_ACC 0x2
+#define ELS_PLOGI 0x3
+#define ELS_FLOGI 0x4
+#define ELS_LOGO 0x5
+#define ELS_ECHO 0x10
+#define ELS_RRQ 0x12
+#define ELS_REC 0x13
+#define ELS_PRLI 0x20
+#define ELS_PRLO 0x21
+#define ELS_TPRLO 0x24
+#define ELS_PDISC 0x50
+#define ELS_FDISC 0x51
+#define ELS_ADISC 0x52
+#define ELS_RSCN 0x61 /* registered state change notification */
+#define ELS_SCR 0x62 /* state change registration */
+
+#define NS_GIEL 0X0101
+#define NS_GA_NXT 0X0100
+#define NS_GPN_ID 0x0112 /* get port name by ID */
+#define NS_GNN_ID 0x0113 /* get node name by ID */
+#define NS_GFF_ID 0x011f /* get FC-4 features by ID */
+#define NS_GID_PN 0x0121 /* get ID for port name */
+#define NS_GID_NN 0x0131 /* get IDs for node name */
+#define NS_GID_FT 0x0171 /* get IDs by FC4 type */
+#define NS_GPN_FT 0x0172 /* get port names by FC4 type */
+#define NS_GID_PT 0x01a1 /* get IDs by port type */
+#define NS_RFT_ID 0x0217 /* reg FC4 type for ID */
+#define NS_RPN_ID 0x0212 /* reg port name for ID */
+#define NS_RNN_ID 0x0213 /* reg node name for ID */
+#define NS_RSNPN 0x0218 /* reg symbolic port name */
+#define NS_RFF_ID 0x021f /* reg FC4 Features for ID */
+#define NS_RSNN 0x0239 /* reg symbolic node name */
+#define ST_NULL 0xffff /* reg symbolic node name */
+
+#define BLS_ABTS 0xA001 /* ABTS */
+
+#define FCP_SRR 0x14 /* Sequence Retransmission Request */
+#define UNF_FC_FID_DOM_MGR 0xfffc00 /* domain manager base */
+enum unf_fc_well_known_fabric_id {
+ UNF_FC_FID_NONE = 0x000000, /* No destination */
+ UNF_FC_FID_DOM_CTRL = 0xfffc01, /* domain controller */
+ UNF_FC_FID_BCAST = 0xffffff, /* broadcast */
+ UNF_FC_FID_FLOGI = 0xfffffe, /* fabric login */
+ UNF_FC_FID_FCTRL = 0xfffffd, /* fabric controller */
+ UNF_FC_FID_DIR_SERV = 0xfffffc, /* directory server */
+ UNF_FC_FID_TIME_SERV = 0xfffffb, /* time server */
+ UNF_FC_FID_MGMT_SERV = 0xfffffa, /* management server */
+ UNF_FC_FID_QOS = 0xfffff9, /* QoS Facilitator */
+ UNF_FC_FID_ALIASES = 0xfffff8, /* alias server (FC-PH2) */
+ UNF_FC_FID_SEC_KEY = 0xfffff7, /* Security key dist. server */
+ UNF_FC_FID_CLOCK = 0xfffff6, /* clock synch server */
+ UNF_FC_FID_MCAST_SERV = 0xfffff5 /* multicast server */
+};
+
+#define INVALID_WORLD_ID 0xfffffffc
+
+struct unf_host_param {
+ int can_queue;
+ u16 sg_table_size;
+ short cmnd_per_lun;
+ u32 max_id;
+ u32 max_lun;
+ u32 max_channel;
+ u16 max_cmnd_len;
+ u16 max_sectors;
+ u64 dma_boundary;
+ u32 port_id;
+ void *lport;
+ struct device *pdev;
+};
+
+int unf_alloc_scsi_host(struct Scsi_Host **unf_scsi_host, struct unf_host_param *host_param);
+void unf_free_scsi_host(struct Scsi_Host *unf_scsi_host);
+u32 unf_register_ini_transport(void);
+void unf_unregister_ini_transport(void);
+void unf_save_sense_data(void *scsi_cmd, const char *sense, int sens_len);
+
+#endif
diff --git a/drivers/scsi/spfc/common/unf_service.c b/drivers/scsi/spfc/common/unf_service.c
new file mode 100644
index 000000000000..8f72f6470647
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_service.c
@@ -0,0 +1,1439 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "unf_service.h"
+#include "unf_log.h"
+#include "unf_rport.h"
+#include "unf_ls.h"
+#include "unf_gs.h"
+
+struct unf_els_handle_table els_handle_table[] = {
+ {ELS_PLOGI, unf_plogi_handler}, {ELS_FLOGI, unf_flogi_handler},
+ {ELS_LOGO, unf_logo_handler}, {ELS_ECHO, unf_echo_handler},
+ {ELS_RRQ, unf_rrq_handler}, {ELS_REC, unf_rec_handler},
+ {ELS_PRLI, unf_prli_handler}, {ELS_PRLO, unf_prlo_handler},
+ {ELS_PDISC, unf_pdisc_handler}, {ELS_ADISC, unf_adisc_handler},
+ {ELS_RSCN, unf_rscn_handler} };
+
+u32 max_frame_size = UNF_DEFAULT_FRAME_SIZE;
+
+#define UNF_NEED_BIG_RESPONSE_BUFF(cmnd_code) \
+ (((cmnd_code) == ELS_ECHO) || ((cmnd_code) == NS_GID_PT) || \
+ ((cmnd_code) == NS_GID_FT))
+
+#define NEED_REFRESH_NPORTID(pkg) \
+ ((((pkg)->cmnd == ELS_PLOGI) || ((pkg)->cmnd == ELS_PDISC) || \
+ ((pkg)->cmnd == ELS_ADISC)))
+
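+/* Pick the send queue for an exchange: hash the hot pool tag across the
+ * per-session SQs and add the R_Port's SQ base; falls back to SQ 0 when no
+ * exchange or R_Port is available.
+ */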
+void unf_select_sq(struct unf_xchg *xchg, struct unf_frame_pkg *pkg)
+{
+ u32 ssq_index = 0;
+ struct unf_rport *unf_rport = NULL;
+
+ if (likely(xchg)) {
+ unf_rport = xchg->rport;
+
+ if (unf_rport) {
+ ssq_index = (xchg->hotpooltag % UNF_SQ_NUM_PER_SESSION) +
+ unf_rport->sqn_base;
+ }
+ }
+
+ pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX] = ssq_index;
+}
+
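+/* Send an ELS/GS request or reply through the low-level driver: arm the SFS
+ * exchange timer (GS, ELS, RRQ and LOGO use different timeouts), select the
+ * send queue, and cancel the timer again if the low-level send fails.
+ */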
+u32 unf_ls_gs_cmnd_send(struct unf_lport *lport, struct unf_frame_pkg *pkg,
+ struct unf_xchg *xchg)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ ulong time_out = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ if (unlikely(!lport->low_level_func.service_op.unf_ls_gs_send)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) LS/GS send function is NULL",
+ lport->port_id);
+
+ return ret;
+ }
+
+ if (pkg->type == UNF_PKG_GS_REQ)
+ time_out = UNF_GET_GS_SFS_XCHG_TIMER(lport);
+ else
+ time_out = UNF_GET_ELS_SFS_XCHG_TIMER(lport);
+
+ if (xchg->cmnd_code == ELS_RRQ) {
+ time_out = ((ulong)UNF_GET_ELS_SFS_XCHG_TIMER(lport) > UNF_RRQ_MIN_TIMEOUT_INTERVAL)
+ ? (ulong)UNF_GET_ELS_SFS_XCHG_TIMER(lport)
+ : UNF_RRQ_MIN_TIMEOUT_INTERVAL;
+ } else if (xchg->cmnd_code == ELS_LOGO) {
+ time_out = UNF_LOGO_TIMEOUT_INTERVAL;
+ }
+
+ pkg->private_data[PKG_PRIVATE_XCHG_TIMEER] = (u32)time_out;
+ lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg, time_out, UNF_TIMER_TYPE_SFS);
+
+ unf_select_sq(xchg, pkg);
+
+ ret = lport->low_level_func.service_op.unf_ls_gs_send(lport->fc_port, pkg);
+ if (unlikely(ret != RETURN_OK))
+ lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+
+ return ret;
+}
+
+static u32 unf_bls_cmnd_send(struct unf_lport *lport, struct unf_frame_pkg *pkg,
+ struct unf_xchg *xchg)
+{
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ pkg->private_data[PKG_PRIVATE_XCHG_TIMEER] = (u32)UNF_GET_BLS_SFS_XCHG_TIMER(lport);
+ pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
+
+ unf_select_sq(xchg, pkg);
+
+ return lport->low_level_func.service_op.unf_bls_send(lport->fc_port, pkg);
+}
+
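+/* Fill the frame package from the exchange: FC frame header fields, R_Port
+ * index and receive size (or invalid values when rport is NULL), hot pool
+ * tag, allocation time and the SFS payload buffer.
+ */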
+void unf_fill_package(struct unf_frame_pkg *pkg, struct unf_xchg *xchg,
+ struct unf_rport *rport)
+{
+	/* rport may be NULL */
+ FC_CHECK_RETURN_VOID(pkg);
+ FC_CHECK_RETURN_VOID(xchg);
+
+ pkg->cmnd = xchg->cmnd_code;
+ pkg->fcp_cmnd = &xchg->fcp_cmnd;
+ pkg->frame_head.csctl_sid = xchg->sid;
+ pkg->frame_head.rctl_did = xchg->did;
+ pkg->frame_head.oxid_rxid = ((u32)xchg->oxid << UNF_SHIFT_16 | xchg->rxid);
+ pkg->xchg_contex = xchg;
+
+ FC_CHECK_RETURN_VOID(xchg->lport);
+ pkg->private_data[PKG_PRIVATE_XCHG_VP_INDEX] = xchg->lport->vp_index;
+
+ if (!rport) {
+ pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = UNF_RPORT_INVALID_INDEX;
+ pkg->private_data[PKG_PRIVATE_RPORT_RX_SIZE] = INVALID_VALUE32;
+ } else {
+ if (likely(rport->nport_id != UNF_FC_FID_FLOGI))
+ pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = rport->rport_index;
+ else
+ pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = SPFC_DEFAULT_RPORT_INDEX;
+
+ pkg->private_data[PKG_PRIVATE_RPORT_RX_SIZE] = rport->max_frame_size;
+ }
+
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag | UNF_HOTTAG_FLAG;
+ pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
+ pkg->private_data[PKG_PRIVATE_LOWLEVEL_XCHG_ADD] =
+ xchg->private_data[PKG_PRIVATE_LOWLEVEL_XCHG_ADD];
+ pkg->unf_cmnd_pload_bl.buffer_ptr =
+ (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ pkg->unf_cmnd_pload_bl.buf_dma_addr =
+ xchg->fcp_sfs_union.sfs_entry.sfs_buff_phy_addr;
+
+	/* Low level needs the payload length when sending an ECHO response */
+ pkg->unf_cmnd_pload_bl.length = xchg->fcp_sfs_union.sfs_entry.cur_offset;
+}
+
+struct unf_xchg *unf_get_sfs_free_xchg_and_init(struct unf_lport *lport, u32 did,
+ struct unf_rport *rport,
+ union unf_sfs_u **fc_entry)
+{
+ struct unf_xchg *xchg = NULL;
+ union unf_sfs_u *sfs_fc_entry = NULL;
+
+ xchg = unf_cm_get_free_xchg(lport, UNF_XCHG_TYPE_SFS);
+ if (!xchg)
+ return NULL;
+
+ xchg->did = did;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = rport;
+ xchg->disc_rport = NULL;
+ xchg->callback = NULL;
+ xchg->ob_callback = NULL;
+
+ sfs_fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!sfs_fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return NULL;
+ }
+
+ *fc_entry = sfs_fc_entry;
+
+ return xchg;
+}
+
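+/* Take one buffer from the exchange manager's big SFS free pool (moving it
+ * to the busy list); used for payloads that do not fit a normal SFS entry,
+ * such as RSCN and ECHO. Returns NULL when the pool is exhausted.
+ */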
+void *unf_get_one_big_sfs_buf(struct unf_xchg *xchg)
+{
+ struct unf_big_sfs *big_sfs = NULL;
+ struct list_head *list_head = NULL;
+ struct unf_xchg_mgr *xchg_mgr = NULL;
+ ulong flag = 0;
+ spinlock_t *big_sfs_pool_lock = NULL;
+
+ FC_CHECK_RETURN_VALUE(xchg, NULL);
+ xchg_mgr = xchg->xchg_mgr;
+ FC_CHECK_RETURN_VALUE(xchg_mgr, NULL);
+ big_sfs_pool_lock = &xchg_mgr->big_sfs_pool.big_sfs_pool_lock;
+
+ spin_lock_irqsave(big_sfs_pool_lock, flag);
+ if (!list_empty(&xchg_mgr->big_sfs_pool.list_freepool)) {
+ /* from free to busy */
+ list_head = UNF_OS_LIST_NEXT(&xchg_mgr->big_sfs_pool.list_freepool);
+ list_del(list_head);
+ xchg_mgr->big_sfs_pool.free_count--;
+ list_add_tail(list_head, &xchg_mgr->big_sfs_pool.list_busypool);
+ big_sfs = list_entry(list_head, struct unf_big_sfs, entry_bigsfs);
+ } else {
+ spin_unlock_irqrestore(big_sfs_pool_lock, flag);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Allocate big sfs buf failed, count(0x%x) exchange(0x%p) command(0x%x)",
+ xchg_mgr->big_sfs_pool.free_count, xchg, xchg->cmnd_code);
+
+ return NULL;
+ }
+ spin_unlock_irqrestore(big_sfs_pool_lock, flag);
+
+ xchg->big_sfs_buf = big_sfs;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Allocate one big sfs buffer(0x%p), remaining count(0x%x) exchange(0x%p) command(0x%x)",
+ big_sfs->addr, xchg_mgr->big_sfs_pool.free_count, xchg,
+ xchg->cmnd_code);
+
+ return big_sfs->addr;
+}
+
+static void unf_fill_rjt_pld(struct unf_els_rjt *els_rjt, u32 reason_code,
+ u32 reason_explanation)
+{
+ FC_CHECK_RETURN_VOID(els_rjt);
+
+ els_rjt->cmnd = UNF_ELS_CMND_RJT;
+ els_rjt->reason_code = (reason_code | reason_explanation);
+}
+
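+/* Build a BLS request package for the exchange being aborted and send an
+ * ABTS frame to the target through unf_bls_cmnd_send().
+ */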
+u32 unf_send_abts(struct unf_lport *lport, struct unf_xchg *xchg)
+{
+ struct unf_rport *unf_rport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_frame_pkg pkg;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+ unf_rport = xchg->rport;
+ FC_CHECK_RETURN_VALUE(unf_rport, UNF_RETURN_ERROR);
+
+ /* set pkg info */
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+ pkg.type = UNF_PKG_BLS_REQ;
+ pkg.frame_head.csctl_sid = xchg->sid;
+ pkg.frame_head.rctl_did = xchg->did;
+ pkg.frame_head.oxid_rxid = ((u32)xchg->oxid << UNF_SHIFT_16 | xchg->rxid);
+ pkg.xchg_contex = xchg;
+ pkg.unf_cmnd_pload_bl.buffer_ptr = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+
+ pkg.unf_cmnd_pload_bl.buf_dma_addr = xchg->fcp_sfs_union.sfs_entry.sfs_buff_phy_addr;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag | UNF_HOTTAG_FLAG;
+
+ UNF_SET_XCHG_ALLOC_TIME(&pkg, xchg);
+ UNF_SET_ABORT_INFO_IOTYPE(&pkg, xchg);
+
+ pkg.private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] =
+ xchg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX];
+
+ /* Send ABTS frame to target */
+ ret = unf_bls_cmnd_send(lport, &pkg, xchg);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Port(0x%x_0x%x) send ABTS %s. Abort exch(0x%p) Cmdsn:0x%lx, tag(0x%x) iotype(0x%x)",
+ lport->port_id, lport->nport_id,
+ (ret == UNF_RETURN_ERROR) ? "failed" : "succeed", xchg,
+ (ulong)xchg->cmnd_sn, xchg->hotpooltag, xchg->data_direction);
+
+ return ret;
+}
+
+u32 unf_send_els_rjt_by_rport(struct unf_lport *lport, struct unf_xchg *xchg,
+ struct unf_rport *rport, struct unf_rjt_info *rjt_info)
+{
+ struct unf_els_rjt *els_rjt = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_frame_pkg pkg = {0};
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
+
+ xchg->cmnd_code = UNF_SET_ELS_RJT_TYPE(rjt_info->els_cmnd_code);
+ xchg->did = rport->nport_id;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = rport;
+ xchg->disc_rport = NULL;
+
+ xchg->callback = NULL;
+ xchg->ob_callback = NULL;
+
+ unf_fill_package(&pkg, xchg, rport);
+ pkg.class_mode = UNF_FC_PROTOCOL_CLASS_3;
+ pkg.type = UNF_PKG_ELS_REPLY;
+
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ els_rjt = &fc_entry->els_rjt;
+ memset(els_rjt, 0, sizeof(struct unf_els_rjt));
+ unf_fill_rjt_pld(els_rjt, rjt_info->reason_code, rjt_info->reason_explanation);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Send LS_RJT for 0x%x %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
+ rjt_info->els_cmnd_code,
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
+ rport->nport_id, ox_id, rx_id);
+
+ return ret;
+}
+
+u32 unf_send_els_rjt_by_did(struct unf_lport *lport, struct unf_xchg *xchg,
+ u32 did, struct unf_rjt_info *rjt_info)
+{
+ struct unf_els_rjt *els_rjt = NULL;
+ union unf_sfs_u *fc_entry = NULL;
+ struct unf_frame_pkg pkg = {0};
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ xchg->cmnd_code = UNF_SET_ELS_RJT_TYPE(rjt_info->els_cmnd_code);
+ xchg->did = did;
+ xchg->sid = lport->nport_id;
+ xchg->oid = xchg->sid;
+ xchg->lport = lport;
+ xchg->rport = NULL;
+ xchg->disc_rport = NULL;
+
+ xchg->callback = NULL;
+ xchg->ob_callback = NULL;
+
+ unf_fill_package(&pkg, xchg, NULL);
+ pkg.class_mode = UNF_FC_PROTOCOL_CLASS_3;
+ pkg.type = UNF_PKG_ELS_REPLY;
+
+ if (rjt_info->reason_code == UNF_LS_RJT_CLASS_ERROR &&
+ rjt_info->class_mode != UNF_FC_PROTOCOL_CLASS_3) {
+ pkg.class_mode = rjt_info->class_mode;
+ }
+
+ fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
+ if (!fc_entry) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
+ lport->port_id, xchg->hotpooltag);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+
+ els_rjt = &fc_entry->els_rjt;
+ memset(els_rjt, 0, sizeof(struct unf_els_rjt));
+ unf_fill_rjt_pld(els_rjt, rjt_info->reason_code, rjt_info->reason_explanation);
+ ox_id = xchg->oxid;
+ rx_id = xchg->rxid;
+
+ ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
+ if (ret != RETURN_OK)
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]LOGIN: Send LS_RJT %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
+ (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id, did, ox_id, rx_id);
+
+ return ret;
+}
+
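+/* Default handler for ELS commands without a table entry: log the unknown
+ * command and reply with LS_RJT (not supported), through the R_Port when one
+ * is known for the S_ID, otherwise directly by D_ID.
+ */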
+static u32 unf_els_cmnd_default_handler(struct unf_lport *lport, struct unf_xchg *xchg, u32 sid,
+ u32 els_cmnd_code)
+{
+ struct unf_rport *unf_rport = NULL;
+ struct unf_rjt_info rjt_info = {0};
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_KEVENT,
+ "[info]Receive Unknown ELS command(0x%x). Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
+ els_cmnd_code, lport->port_id, sid, xchg->oxid);
+
+ memset(&rjt_info, 0, sizeof(struct unf_rjt_info));
+ rjt_info.els_cmnd_code = els_cmnd_code;
+ rjt_info.reason_code = UNF_LS_RJT_NOT_SUPPORTED;
+
+ unf_rport = unf_get_rport_by_nport_id(lport, sid);
+ if (unf_rport) {
+ if (unf_rport->rport_index !=
+ xchg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX]) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) NPort handle(0x%x) from low level is not equal to RPort index(0x%x)",
+ lport->port_id, lport->nport_id,
+ xchg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX],
+ unf_rport->rport_index);
+ }
+ ret = unf_send_els_rjt_by_rport(lport, xchg, unf_rport, &rjt_info);
+ } else {
+ ret = unf_send_els_rjt_by_did(lport, xchg, sid, &rjt_info);
+ }
+
+ return ret;
+}
+
+static struct unf_xchg *unf_alloc_xchg_for_rcv_cmnd(struct unf_lport *lport,
+ struct unf_frame_pkg *pkg)
+{
+ struct unf_xchg *xchg = NULL;
+ ulong flags = 0;
+ u32 i = 0;
+ u32 offset = 0;
+ u8 *cmnd_pld = NULL;
+ u32 first_dword = 0;
+ u32 alloc_time = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(pkg, NULL);
+
+ if (!pkg->xchg_contex) {
+ xchg = unf_cm_get_free_xchg(lport, UNF_XCHG_TYPE_SFS);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[warn]Port(0x%x) get new exchange failed",
+ lport->port_id);
+
+ return NULL;
+ }
+
+ offset = (xchg->fcp_sfs_union.sfs_entry.cur_offset);
+ cmnd_pld = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->rscn.rscn_pld;
+ first_dword = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr
+ ->sfs_common.frame_head.rctl_did;
+
+ if (cmnd_pld || first_dword != 0 || offset != 0) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) exchange(0x%p) abnormal, maybe data overrun, start(%llu) command(0x%x)",
+ lport->port_id, xchg, xchg->alloc_jif, pkg->cmnd);
+
+ UNF_PRINT_SFS(UNF_INFO, lport->port_id,
+ xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr,
+ sizeof(union unf_sfs_u));
+ }
+
+ memset(xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr, 0, sizeof(union unf_sfs_u));
+
+ pkg->xchg_contex = (void *)xchg;
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ xchg->fcp_sfs_union.sfs_entry.cur_offset = 0;
+ alloc_time = xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
+ for (i = 0; i < PKG_MAX_PRIVATE_DATA_SIZE; i++)
+ xchg->private_data[i] = pkg->private_data[i];
+
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = alloc_time;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ } else {
+ xchg = (struct unf_xchg *)pkg->xchg_contex;
+ }
+
+ if (!xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) {
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ return NULL;
+ }
+
+ return xchg;
+}
+
+static u8 *unf_calc_big_cmnd_pld_buffer(struct unf_xchg *xchg, u32 cmnd_code)
+{
+ u8 *cmnd_pld = NULL;
+ void *buf = NULL;
+ u8 *dest = NULL;
+
+ FC_CHECK_RETURN_VALUE(xchg, NULL);
+
+ if (cmnd_code == ELS_RSCN)
+ cmnd_pld = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->rscn.rscn_pld;
+ else
+ cmnd_pld = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo.echo_pld;
+
+ if (!cmnd_pld) {
+ buf = unf_get_one_big_sfs_buf(xchg);
+ if (!buf)
+ return NULL;
+
+ if (cmnd_code == ELS_RSCN) {
+ memset(buf, 0, sizeof(struct unf_rscn_pld));
+ xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->rscn.rscn_pld = buf;
+ } else {
+ memset(buf, 0, sizeof(struct unf_echo_payload));
+ xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo.echo_pld = buf;
+ }
+
+ dest = (u8 *)buf;
+ } else {
+ dest = (u8 *)(cmnd_pld + xchg->fcp_sfs_union.sfs_entry.cur_offset);
+ }
+
+ return dest;
+}
+
+static u8 *unf_calc_other_pld_buffer(struct unf_xchg *xchg)
+{
+ u8 *dest = NULL;
+ u32 offset = 0;
+
+ FC_CHECK_RETURN_VALUE(xchg, NULL);
+
+ offset = (sizeof(struct unf_fc_head)) + (xchg->fcp_sfs_union.sfs_entry.cur_offset);
+ dest = (u8 *)((u8 *)(xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) + offset);
+
+ return dest;
+}
+
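+/* Copy a received command frame into an exchange: allocate or reuse the SFS
+ * exchange, pick the payload destination (big SFS buffer for RSCN/ECHO,
+ * in-place SFS entry otherwise), guard against payload overrun and append
+ * the data at the current offset.
+ */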
+struct unf_xchg *unf_mv_data_2_xchg(struct unf_lport *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_xchg *xchg = NULL;
+ u8 *dest = NULL;
+ u32 length = 0;
+ ulong flags = 0;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+ FC_CHECK_RETURN_VALUE(pkg, NULL);
+
+ xchg = unf_alloc_xchg_for_rcv_cmnd(lport, pkg);
+ if (!xchg)
+ return NULL;
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+
+ memcpy(&xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->sfs_common.frame_head,
+ &pkg->frame_head, sizeof(pkg->frame_head));
+
+ if (pkg->cmnd == ELS_RSCN || pkg->cmnd == ELS_ECHO)
+ dest = unf_calc_big_cmnd_pld_buffer(xchg, pkg->cmnd);
+ else
+ dest = unf_calc_other_pld_buffer(xchg);
+
+ if (!dest) {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ return NULL;
+ }
+
+ if (((xchg->fcp_sfs_union.sfs_entry.cur_offset +
+ pkg->unf_cmnd_pload_bl.length) > (u32)sizeof(union unf_sfs_u)) &&
+ pkg->cmnd != ELS_RSCN && pkg->cmnd != ELS_ECHO) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) exchange(0x%p) command(0x%x,0x%x) copy payload overrun(0x%x:0x%x:0x%x)",
+ lport->port_id, xchg, pkg->cmnd, xchg->hotpooltag,
+ xchg->fcp_sfs_union.sfs_entry.cur_offset,
+ pkg->unf_cmnd_pload_bl.length, (u32)sizeof(union unf_sfs_u));
+
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+
+ return NULL;
+ }
+
+ length = pkg->unf_cmnd_pload_bl.length;
+ if (length > 0)
+ memcpy(dest, pkg->unf_cmnd_pload_bl.buffer_ptr, length);
+
+ xchg->fcp_sfs_union.sfs_entry.cur_offset += length;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ return xchg;
+}
+
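+/* Validate a received ELS command: reject non-class-3 frames, FLOGI in
+ * private loop topology and PLOGI to well-known addresses with LS_RJT, and
+ * check that the D_ID matches this port or one of its vports.
+ */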
+static u32 unf_check_els_cmnd_valid(struct unf_lport *lport, struct unf_frame_pkg *pkg,
+ struct unf_xchg *xchg)
+{
+ struct unf_rjt_info rjt_info = {0};
+ struct unf_lport *vport = NULL;
+ u32 sid = 0;
+ u32 did = 0;
+
+ sid = (pkg->frame_head.csctl_sid) & UNF_NPORTID_MASK;
+ did = (pkg->frame_head.rctl_did) & UNF_NPORTID_MASK;
+
+ memset(&rjt_info, 0, sizeof(struct unf_rjt_info));
+
+ if (pkg->class_mode != UNF_FC_PROTOCOL_CLASS_3) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) unsupported class 0x%x cmd 0x%x and send RJT",
+ lport->port_id, pkg->class_mode, pkg->cmnd);
+
+ rjt_info.reason_code = UNF_LS_RJT_CLASS_ERROR;
+ rjt_info.els_cmnd_code = pkg->cmnd;
+ rjt_info.class_mode = pkg->class_mode;
+ (void)unf_send_els_rjt_by_did(lport, xchg, sid, &rjt_info);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ rjt_info.reason_code = UNF_LS_RJT_NOT_SUPPORTED;
+
+ if (pkg->cmnd == ELS_FLOGI && lport->act_topo == UNF_ACT_TOP_PRIVATE_LOOP) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) receive FLOGI in top (0x%x) and send LS_RJT",
+ lport->port_id, lport->act_topo);
+
+ rjt_info.els_cmnd_code = ELS_FLOGI;
+ (void)unf_send_els_rjt_by_did(lport, xchg, sid, &rjt_info);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (pkg->cmnd == ELS_PLOGI && did >= UNF_FC_FID_DOM_MGR) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) receive PLOGI with well-known address(0x%x) and send LS_RJT",
+ lport->port_id, did);
+
+ rjt_info.els_cmnd_code = ELS_PLOGI;
+ (void)unf_send_els_rjt_by_did(lport, xchg, sid, &rjt_info);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if ((lport->nport_id == 0 || lport->nport_id == INVALID_VALUE32) &&
+ (NEED_REFRESH_NPORTID(pkg))) {
+ lport->nport_id = did;
+ } else if ((lport->nport_id != did) && (pkg->cmnd != ELS_FLOGI)) {
+ vport = unf_cm_lookup_vport_by_did(lport, did);
+ if (!vport) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) receive ELS cmd(0x%x) with abnormal D_ID(0x%x)",
+ lport->nport_id, pkg->cmnd, did);
+
+ unf_cm_free_xchg(lport, xchg);
+ return UNF_RETURN_ERROR;
+ }
+ }
+
+ return RETURN_OK;
+}
+
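+/* Entry point for a received ELS request: stage the payload into an
+ * exchange, wait for the last frame, validate the command, switch to the
+ * owning vport if needed, then dispatch through els_handle_table or fall
+ * back to the default (LS_RJT) handler.
+ */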
+static u32 unf_rcv_els_cmnd_req(struct unf_lport *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_xchg *xchg = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 i = 0;
+ u32 sid = 0;
+ u32 did = 0;
+ struct unf_lport *vport = NULL;
+ u32 (*els_cmnd_handler)(struct unf_lport *, u32, struct unf_xchg *) = NULL;
+
+ sid = (pkg->frame_head.csctl_sid) & UNF_NPORTID_MASK;
+ did = (pkg->frame_head.rctl_did) & UNF_NPORTID_MASK;
+
+ xchg = unf_mv_data_2_xchg(lport, pkg);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) receive ElsCmnd(0x%x), exchange is NULL",
+ lport->port_id, pkg->cmnd);
+ return UNF_RETURN_ERROR;
+ }
+
+ if (!pkg->last_pkg_flag) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Exchange(%u) waiting for last WQE",
+ xchg->hotpooltag);
+ return RETURN_OK;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Exchange(%u) get last WQE", xchg->hotpooltag);
+
+ xchg->oxid = UNF_GET_OXID(pkg);
+ xchg->abort_oxid = xchg->oxid;
+ xchg->rxid = UNF_GET_RXID(pkg);
+ xchg->cmnd_code = pkg->cmnd;
+
+ ret = unf_check_els_cmnd_valid(lport, pkg, xchg);
+ if (ret != RETURN_OK)
+ return UNF_RETURN_ERROR;
+
+ if (lport->nport_id != did && pkg->cmnd != ELS_FLOGI) {
+ vport = unf_cm_lookup_vport_by_did(lport, did);
+ if (!vport) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) received unknown ELS command with S_ID(0x%x) D_ID(0x%x))",
+ lport->port_id, sid, did);
+ return UNF_RETURN_ERROR;
+ }
+ lport = vport;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]VPort(0x%x) received ELS command with S_ID(0x%x) D_ID(0x%x)",
+ lport->port_id, sid, did);
+ }
+
+ do {
+ if (pkg->cmnd == els_handle_table[i].cmnd) {
+ els_cmnd_handler = els_handle_table[i].els_cmnd_handler;
+ break;
+ }
+ i++;
+ } while (i < (sizeof(els_handle_table) / sizeof(struct unf_els_handle_table)));
+
+ if (els_cmnd_handler) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) receive ELS(0x%x) from RPort(0x%x) and process it",
+ lport->port_id, pkg->cmnd, sid);
+ ret = els_cmnd_handler(lport, sid, xchg);
+ } else {
+ ret = unf_els_cmnd_default_handler(lport, xchg, sid, pkg->cmnd);
+ }
+ return ret;
+}
+
+u32 unf_send_els_rsp_succ(struct unf_lport *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_xchg *xchg = NULL;
+ u32 ret = RETURN_OK;
+ u16 hot_pool_tag = 0;
+ ulong flags = 0;
+ void (*ob_callback)(struct unf_xchg *) = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ if (!lport->xchg_mgr_temp.unf_look_up_xchg_by_tag) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) lookup exchange by tag function is NULL",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ hot_pool_tag = (u16)(pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]);
+ xchg = (struct unf_xchg *)(lport->xchg_mgr_temp.unf_look_up_xchg_by_tag((void *)lport,
+ hot_pool_tag));
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x) find exchange by tag(0x%x) failed",
+ lport->port_id, hot_pool_tag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ if (xchg->ob_callback &&
+ (!(xchg->io_state & TGT_IO_STATE_ABORT))) {
+ ob_callback = xchg->ob_callback;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) with exchange(0x%p) tag(0x%x) do callback",
+ lport->port_id, xchg, hot_pool_tag);
+
+ ob_callback(xchg);
+ } else {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ }
+
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+ return ret;
+}
+
+static u8 *unf_calc_big_resp_pld_buffer(struct unf_xchg *xchg, u32 cmnd_code)
+{
+ u8 *resp_pld = NULL;
+ u8 *dest = NULL;
+
+ FC_CHECK_RETURN_VALUE(xchg, NULL);
+
+ if (cmnd_code == ELS_ECHO) {
+ resp_pld = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo.echo_pld;
+ } else {
+ resp_pld = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr
+ ->get_id.gid_rsp.gid_acc_pld;
+ }
+
+ if (resp_pld)
+ dest = (u8 *)(resp_pld + xchg->fcp_sfs_union.sfs_entry.cur_offset);
+
+ return dest;
+}
+
+static u8 *unf_calc_other_resp_pld_buffer(struct unf_xchg *xchg)
+{
+ u8 *dest = NULL;
+ u32 offset = 0;
+
+ FC_CHECK_RETURN_VALUE(xchg, NULL);
+
+ offset = xchg->fcp_sfs_union.sfs_entry.cur_offset;
+ dest = (u8 *)((u8 *)(xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) + offset);
+
+ return dest;
+}
+
+u32 unf_mv_resp_2_xchg(struct unf_xchg *xchg, struct unf_frame_pkg *pkg)
+{
+ u8 *dest = NULL;
+ u32 length = 0;
+ u32 offset = 0;
+ u32 max_frame_len = 0;
+ ulong flags = 0;
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+
+ if (UNF_NEED_BIG_RESPONSE_BUFF(xchg->cmnd_code)) {
+ dest = unf_calc_big_resp_pld_buffer(xchg, xchg->cmnd_code);
+ offset = 0;
+ max_frame_len = sizeof(struct unf_gid_acc_pld);
+ } else if (NS_GA_NXT == xchg->cmnd_code || NS_GIEL == xchg->cmnd_code) {
+ dest = unf_calc_big_resp_pld_buffer(xchg, xchg->cmnd_code);
+ offset = 0;
+ max_frame_len = xchg->fcp_sfs_union.sfs_entry.sfs_buff_len;
+ } else {
+ dest = unf_calc_other_resp_pld_buffer(xchg);
+ offset = sizeof(struct unf_fc_head);
+ max_frame_len = sizeof(union unf_sfs_u);
+ }
+
+ if (!dest) {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (xchg->fcp_sfs_union.sfs_entry.cur_offset == 0) {
+ xchg->fcp_sfs_union.sfs_entry.cur_offset += offset;
+ dest = dest + offset;
+ }
+
+ length = pkg->unf_cmnd_pload_bl.length;
+
+ if ((xchg->fcp_sfs_union.sfs_entry.cur_offset + length) >
+ max_frame_len) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Exchange(0x%p) command(0x%x) hotpooltag(0x%x) OX_RX_ID(0x%x) S_ID(0x%x) D_ID(0x%x) copy payload overrun(0x%x:0x%x:0x%x)",
+ xchg, xchg->cmnd_code, xchg->hotpooltag, pkg->frame_head.oxid_rxid,
+ pkg->frame_head.csctl_sid & UNF_NPORTID_MASK,
+ pkg->frame_head.rctl_did & UNF_NPORTID_MASK,
+ xchg->fcp_sfs_union.sfs_entry.cur_offset,
+ pkg->unf_cmnd_pload_bl.length, max_frame_len);
+
+ length = max_frame_len - xchg->fcp_sfs_union.sfs_entry.cur_offset;
+ }
+
+ if (length > 0)
+ memcpy(dest, pkg->unf_cmnd_pload_bl.buffer_ptr, length);
+
+ xchg->fcp_sfs_union.sfs_entry.cur_offset += length;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ return RETURN_OK;
+}
+
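+/* Run the exchange completion callback for an ELS/GS response unless the
+ * exchange is being aborted (RRQ and LOGO are called back regardless);
+ * FLOGI/FDISC refresh the S_ID from the response and ECHO copies its
+ * timestamps first.
+ */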
+static void unf_ls_gs_do_callback(struct unf_xchg *xchg,
+ struct unf_frame_pkg *pkg)
+{
+ ulong flags = 0;
+ void (*callback)(void *, void *, void *) = NULL;
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ if (xchg->callback &&
+ (xchg->cmnd_code == ELS_RRQ ||
+ xchg->cmnd_code == ELS_LOGO ||
+ !(xchg->io_state & TGT_IO_STATE_ABORT))) {
+ callback = xchg->callback;
+
+ if (xchg->cmnd_code == ELS_FLOGI || xchg->cmnd_code == ELS_FDISC)
+ xchg->sid = pkg->frame_head.rctl_did & UNF_NPORTID_MASK;
+
+ if (xchg->cmnd_code == ELS_ECHO) {
+ xchg->private_data[PKG_PRIVATE_ECHO_CMD_RCV_TIME] =
+ pkg->private_data[PKG_PRIVATE_ECHO_CMD_RCV_TIME];
+ xchg->private_data[PKG_PRIVATE_ECHO_RSP_SND_TIME] =
+ pkg->private_data[PKG_PRIVATE_ECHO_RSP_SND_TIME];
+ xchg->private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME] =
+ pkg->private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME];
+ xchg->private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME] =
+ pkg->private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME];
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ callback(xchg->lport, xchg->rport, xchg);
+ } else {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ }
+}
+
+u32 unf_send_ls_gs_cmnd_succ(struct unf_lport *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_xchg *xchg = NULL;
+ u32 ret = RETURN_OK;
+ u16 hot_pool_tag = 0;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ unf_lport = lport;
+
+ if (!unf_lport->xchg_mgr_temp.unf_look_up_xchg_by_tag) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) lookup exchange by tag function can't be NULL",
+ unf_lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ hot_pool_tag = (u16)(pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]);
+ xchg = (struct unf_xchg *)(unf_lport->xchg_mgr_temp
+ .unf_look_up_xchg_by_tag((void *)unf_lport, hot_pool_tag));
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) find exchange by tag(0x%x) failed",
+ unf_lport->port_id, unf_lport->nport_id, hot_pool_tag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ UNF_CHECK_ALLOCTIME_VALID(unf_lport, hot_pool_tag, xchg,
+ pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME],
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME]);
+
+ if ((pkg->frame_head.csctl_sid & UNF_NPORTID_MASK) != xchg->did) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) find exchange invalid, package S_ID(0x%x) exchange S_ID(0x%x) D_ID(0x%x)",
+ unf_lport->port_id, pkg->frame_head.csctl_sid, xchg->sid, xchg->did);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (pkg->last_pkg_flag == UNF_PKG_NOT_LAST_RESPONSE) {
+ ret = unf_mv_resp_2_xchg(xchg, pkg);
+ return ret;
+ }
+
+ xchg->byte_orders = pkg->byte_orders;
+ unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+ unf_ls_gs_do_callback(xchg, pkg);
+ unf_cm_free_xchg((void *)unf_lport, (void *)xchg);
+ return ret;
+}
+
+u32 unf_send_ls_gs_cmnd_failed(struct unf_lport *lport,
+ struct unf_frame_pkg *pkg)
+{
+ struct unf_xchg *xchg = NULL;
+ u32 ret = RETURN_OK;
+ u16 hot_pool_tag = 0;
+ ulong flags = 0;
+ void (*ob_callback)(struct unf_xchg *) = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ if (!lport->xchg_mgr_temp.unf_look_up_xchg_by_tag) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) lookup exchange by tag function can't be NULL",
+ lport->port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ hot_pool_tag = (u16)(pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]);
+ xchg = (struct unf_xchg *)(lport->xchg_mgr_temp.unf_look_up_xchg_by_tag((void *)lport,
+ hot_pool_tag));
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x_0x%x) find exchange by tag(0x%x) failed",
+ lport->port_id, lport->nport_id, hot_pool_tag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ UNF_CHECK_ALLOCTIME_VALID(lport, hot_pool_tag, xchg,
+ pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME],
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME]);
+
+ lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ if (xchg->ob_callback &&
+ (xchg->cmnd_code == ELS_RRQ || xchg->cmnd_code == ELS_LOGO ||
+ (!(xchg->io_state & TGT_IO_STATE_ABORT)))) {
+ ob_callback = xchg->ob_callback;
+ xchg->ob_callback_sts = pkg->status;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ ob_callback(xchg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) exchange(0x%p) tag(0x%x) do callback",
+ lport->port_id, xchg, hot_pool_tag);
+ } else {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ }
+
+ unf_cm_free_xchg((void *)lport, (void *)xchg);
+ return ret;
+}
+
+static u32 unf_rcv_ls_gs_cmnd_reply(struct unf_lport *lport,
+ struct unf_frame_pkg *pkg)
+{
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ if (pkg->status == UNF_IO_SUCCESS || pkg->status == UNF_IO_UNDER_FLOW)
+ ret = unf_send_ls_gs_cmnd_succ(lport, pkg);
+ else
+ ret = unf_send_ls_gs_cmnd_failed(lport, pkg);
+
+ return ret;
+}
+
+u32 unf_receive_ls_gs_pkg(void *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ unf_lport = (struct unf_lport *)lport;
+
+ switch (pkg->type) {
+ case UNF_PKG_ELS_REQ_DONE:
+ case UNF_PKG_GS_REQ_DONE:
+ ret = unf_rcv_ls_gs_cmnd_reply(unf_lport, pkg);
+ break;
+
+ case UNF_PKG_ELS_REQ:
+ ret = unf_rcv_els_cmnd_req(unf_lport, pkg);
+ break;
+
+ default:
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) with exchange type(0x%x) abnormal",
+ unf_lport->port_id, unf_lport->nport_id, pkg->type);
+ break;
+ }
+
+ return ret;
+}
+
+u32 unf_send_els_done(void *lport, struct unf_frame_pkg *pkg)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ if (pkg->type == UNF_PKG_ELS_REPLY_DONE) {
+ if (pkg->status == UNF_IO_SUCCESS || pkg->status == UNF_IO_UNDER_FLOW)
+ ret = unf_send_els_rsp_succ(lport, pkg);
+ else
+ ret = unf_send_ls_gs_cmnd_failed(lport, pkg);
+ }
+
+ return ret;
+}
+
+void unf_rport_immediate_link_down(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /* Swap case: Report Link Down immediately & release R_Port */
+ ulong flags = 0;
+ struct unf_disc *disc = NULL;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ spin_lock_irqsave(&rport->rport_state_lock, flags);
+ /* 1. Inc R_Port ref_cnt */
+ if (unf_rport_ref_inc(rport) != RETURN_OK) {
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) Rport(0x%p,0x%x) is being removed, no need to process",
+ lport->port_id, rport, rport->nport_id);
+
+ return;
+ }
+
+ /* 2. R_PORT state update: Link Down Event --->>> closing state */
+ unf_rport_state_ma(rport, UNF_EVENT_RPORT_LINK_DOWN);
+ spin_unlock_irqrestore(&rport->rport_state_lock, flags);
+
+ /* 3. Put R_Port from busy to destroy list */
+ disc = &lport->disc;
+ spin_lock_irqsave(&disc->rport_busy_pool_lock, flags);
+ list_del_init(&rport->entry_rport);
+ list_add_tail(&rport->entry_rport, &disc->list_destroy_rports);
+ spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flags);
+
+ /* 4. Schedule Closing work (Enqueuing workqueue) */
+ unf_schedule_closing_work(lport, rport);
+
+ unf_rport_ref_dec(rport);
+}
+
+struct unf_rport *unf_find_rport(struct unf_lport *lport, u32 rport_nport_id,
+ u64 lport_name)
+{
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, NULL);
+
+ if (rport_nport_id >= UNF_FC_FID_DOM_MGR) {
+ /* R_Port is Fabric: by N_Port_ID */
+ unf_rport = unf_get_rport_by_nport_id(unf_lport, rport_nport_id);
+ } else {
+ /* Others: by WWPN & N_Port_ID */
+ unf_rport = unf_find_valid_rport(unf_lport, lport_name, rport_nport_id);
+ }
+
+ return unf_rport;
+}
+
+void unf_process_logo_in_pri_loop(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /* Send PLOGI or LOGO */
+ struct unf_rport *unf_rport = rport;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI); /* PLOGI WAIT */
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ /* Private Loop with INI mode, Avoid COM Mode problem */
+ unf_rport_delay_login(unf_rport);
+}
+
+void unf_process_logo_in_n2n(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /* Send PLOGI or LOGO */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = rport;
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
+
+ unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
+ spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
+
+ if (unf_lport->port_name > unf_rport->port_name) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x)'s WWN(0x%llx) is larger than(0x%llx), should be master",
+ unf_lport->port_id, unf_lport->port_name, unf_rport->port_name);
+
+ ret = unf_send_plogi(unf_lport, unf_rport);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]LOGIN: Port(0x%x) send PLOGI failed, enter recovery",
+ lport->port_id);
+
+ unf_rport_error_recovery(unf_rport);
+ }
+ } else {
+ unf_rport_enter_logo(unf_lport, unf_rport);
+ }
+}
+
+void unf_process_logo_in_fabric(struct unf_lport *lport,
+ struct unf_rport *rport)
+{
+ /* Send GFF_ID or LOGO */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = rport;
+ struct unf_rport *sns_port = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ /* L_Port with INI Mode: Send GFF_ID */
+ sns_port = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_DIR_SERV);
+ if (!sns_port) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't find fabric port",
+ unf_lport->port_id);
+ return;
+ }
+
+ ret = unf_get_and_post_disc_event(lport, sns_port, unf_rport->nport_id,
+ UNF_DISC_GET_FEATURE);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
+ unf_lport->port_id, UNF_DISC_GET_FEATURE,
+ unf_rport->nport_id);
+
+ unf_rcv_gff_id_rsp_unknown(unf_lport, unf_rport->nport_id);
+ }
+}
+
+void unf_process_rport_after_logo(struct unf_lport *lport, struct unf_rport *rport)
+{
+ /*
+ * 1. LOGO handler
+	 * 2. PRLO handler
+ * 3. LOGO_CALL_BACK (send LOGO ACC) handler
+ */
+ struct unf_lport *unf_lport = lport;
+ struct unf_rport *unf_rport = rport;
+
+ FC_CHECK_RETURN_VOID(lport);
+ FC_CHECK_RETURN_VOID(rport);
+
+ if (unf_rport->nport_id < UNF_FC_FID_DOM_MGR) {
+ /* R_Port is not fabric port (retry LOGIN or LOGO) */
+ if (unf_lport->act_topo == UNF_ACT_TOP_PRIVATE_LOOP) {
+ /* Private Loop: PLOGI or LOGO */
+ unf_process_logo_in_pri_loop(unf_lport, unf_rport);
+ } else if (unf_lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ /* Point to Point: LOGIN or LOGO */
+ unf_process_logo_in_n2n(unf_lport, unf_rport);
+ } else {
+ /* Fabric or Public Loop: GFF_ID or LOGO */
+ unf_process_logo_in_fabric(unf_lport, unf_rport);
+ }
+ } else {
+ /* Rport is fabric port: link down now */
+ unf_rport_linkdown(unf_lport, unf_rport);
+ }
+}
+
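+/* Handle completion of an ABTS sent by the initiator: look up the exchange
+ * by hot pool tag, record OX_ID/RX_ID, arm the RRQ timer on success or free
+ * the exchange on failure once the marker status has arrived, and wake the
+ * ABTS marker semaphore if the marker is still outstanding.
+ */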
+static u32 unf_rcv_bls_req_done(struct unf_lport *lport, struct unf_frame_pkg *pkg)
+{
+ /*
+ * About I/O resource:
+	 * 1. normal: Release I/O resource during RRQ processing
+ * 2. exception: Release I/O resource immediately
+ */
+ struct unf_xchg *xchg = NULL;
+ u16 hot_pool_tag = 0;
+ ulong flags = 0;
+ ulong time_ms = 0;
+ u32 ret = RETURN_OK;
+ struct unf_lport *unf_lport = NULL;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ unf_lport = lport;
+
+ hot_pool_tag = (u16)pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX];
+ xchg = (struct unf_xchg *)unf_cm_lookup_xchg_by_tag((void *)unf_lport, hot_pool_tag);
+ if (!xchg) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) can't find exchange by tag(0x%x) when receiving ABTS response",
+ unf_lport->port_id, hot_pool_tag);
+ return UNF_RETURN_ERROR;
+ }
+
+ UNF_CHECK_ALLOCTIME_VALID(lport, hot_pool_tag, xchg,
+ pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME],
+ xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME]);
+
+ ret = unf_xchg_ref_inc(xchg, TGT_ABTS_DONE);
+ FC_CHECK_RETURN_VALUE((ret == RETURN_OK), UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ xchg->oxid = UNF_GET_OXID(pkg);
+ xchg->rxid = UNF_GET_RXID(pkg);
+ xchg->io_state |= INI_IO_STATE_DONE;
+ xchg->abts_state |= ABTS_RESPONSE_RECEIVED;
+ if (!(INI_IO_STATE_UPABORT & xchg->io_state)) {
+ /* NOTE: I/O exchange has been released and used again */
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x_0x%x) SID(0x%x) exch(0x%p) (0x%x:0x%x:0x%x:0x%x) state(0x%x) is abnormal with cnt(0x%x)",
+ unf_lport->port_id, unf_lport->nport_id, xchg->sid,
+ xchg, xchg->hotpooltag, xchg->oxid, xchg->rxid,
+ xchg->oid, xchg->io_state,
+ atomic_read(&xchg->ref_cnt));
+
+ unf_xchg_ref_dec(xchg, TGT_ABTS_DONE);
+ return UNF_RETURN_ERROR;
+ }
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
+ /*
+	 * Exchange I/O status check: on success, add the RRQ timer.
+	 * pkg->status is mapped to scsi_cmnd->result.
+	 * FAILED: ERR_Code or X_ID is wrong, or BA_RSP type is wrong
+ */
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ if (pkg->status == UNF_IO_SUCCESS) {
+ /* Succeed: PKG status -->> EXCH status -->> scsi status */
+ UNF_SET_SCSI_CMND_RESULT(xchg, UNF_IO_SUCCESS);
+ xchg->io_state |= INI_IO_STATE_WAIT_RRQ;
+ xchg->rxid = UNF_GET_RXID(pkg);
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ /* Add RRQ timer */
+ time_ms = (ulong)(unf_lport->ra_tov);
+ unf_lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg, time_ms,
+ UNF_TIMER_TYPE_INI_RRQ);
+ } else {
+ /* Failed: PKG status -->> EXCH status -->> scsi status */
+ UNF_SET_SCSI_CMND_RESULT(xchg, UNF_IO_FAILED);
+ if (MARKER_STS_RECEIVED & xchg->abts_state) {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+
+ /* NOTE: release I/O resource immediately */
+ unf_cm_free_xchg(unf_lport, xchg);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) exch(0x%p) OX_RX(0x%x:0x%x) IOstate(0x%x) ABTSstate(0x%x) receive response abnormal ref(0x%x)",
+ unf_lport->port_id, xchg, xchg->oxid, xchg->rxid,
+ xchg->io_state, xchg->abts_state, atomic_read(&xchg->ref_cnt));
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ }
+ }
+
+ /*
+	 * If the ABTS response arrived before the marker status was
+	 * received, just wake up the ABTS marker semaphore.
+ */
+ spin_lock_irqsave(&xchg->xchg_state_lock, flags);
+ if (!(MARKER_STS_RECEIVED & xchg->abts_state)) {
+ xchg->ucode_abts_state = pkg->status;
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ up(&xchg->task_sema);
+ } else {
+ spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
+ }
+
+ unf_xchg_ref_dec(xchg, TGT_ABTS_DONE);
+ return ret;
+}
+
+u32 unf_receive_bls_pkg(void *lport, struct unf_frame_pkg *pkg)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ unf_lport = (struct unf_lport *)lport;
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ if (pkg->type == UNF_PKG_BLS_REQ_DONE) {
+ /* INI: RCVD BLS Req Done */
+ ret = unf_rcv_bls_req_done(lport, pkg);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x) received BLS packet with unexpected type(%xh)",
+ unf_lport->port_id, pkg->type);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return ret;
+}
+
+static void unf_fill_free_xid_pkg(struct unf_xchg *xchg, struct unf_frame_pkg *pkg)
+{
+ pkg->frame_head.csctl_sid = xchg->sid;
+ pkg->frame_head.rctl_did = xchg->did;
+ pkg->frame_head.oxid_rxid = (u32)(((u32)xchg->oxid << UNF_SHIFT_16) | xchg->rxid);
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag | UNF_HOTTAG_FLAG;
+ UNF_SET_XCHG_ALLOC_TIME(pkg, xchg);
+
+ if (xchg->xchg_type == UNF_XCHG_TYPE_SFS) {
+ if (UNF_XCHG_IS_ELS_REPLY(xchg)) {
+ pkg->type = UNF_PKG_ELS_REPLY;
+ pkg->rx_or_ox_id = UNF_PKG_FREE_RXID;
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = INVALID_VALUE32;
+ pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = INVALID_VALUE32;
+ } else {
+ pkg->type = UNF_PKG_ELS_REQ;
+ pkg->rx_or_ox_id = UNF_PKG_FREE_OXID;
+ }
+ } else if (xchg->xchg_type == UNF_XCHG_TYPE_INI) {
+ pkg->type = UNF_PKG_INI_IO;
+ pkg->rx_or_ox_id = UNF_PKG_FREE_OXID;
+ }
+}
+
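+/* Notify the low level (ll_release_xid) to free the XID held by this exchange */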
+void unf_notify_chip_free_xid(struct unf_xchg *xchg)
+{
+ struct unf_lport *unf_lport = NULL;
+ u32 ret = RETURN_ERROR;
+ struct unf_frame_pkg pkg = {0};
+
+ FC_CHECK_RETURN_VOID(xchg);
+ unf_lport = xchg->lport;
+ FC_CHECK_RETURN_VOID(unf_lport);
+
+ unf_fill_free_xid_pkg(xchg, &pkg);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Sid_Did(0x%x)(0x%x) Xchg(0x%p) RXorOX(0x%x) tag(0x%x) xid(0x%x) magic(0x%x) Stat(0x%x)type(0x%x) wait timeout.",
+ xchg->sid, xchg->did, xchg, pkg.rx_or_ox_id,
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX], pkg.frame_head.oxid_rxid,
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME], xchg->io_state, pkg.type);
+
+ ret = unf_lport->low_level_func.service_op.ll_release_xid(unf_lport->fc_port, &pkg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Free xid abnormal:Sid_Did(0x%x 0x%x) Xchg(0x%p) RXorOX(0x%x) xid(0x%x) Stat(0x%x) tag(0x%x) magic(0x%x) type(0x%x).",
+ xchg->sid, xchg->did, xchg, pkg.rx_or_ox_id,
+ pkg.frame_head.oxid_rxid, xchg->io_state,
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX],
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME],
+ pkg.type);
+ }
+}
diff --git a/drivers/scsi/spfc/common/unf_service.h b/drivers/scsi/spfc/common/unf_service.h
new file mode 100644
index 000000000000..0dd2975c6a7b
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_service.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_SERVICE_H
+#define UNF_SERVICE_H
+
+#include "unf_type.h"
+#include "unf_exchg.h"
+#include "unf_rport.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif /* __cplusplus */
+
+extern u32 max_frame_size;
+#define UNF_INIT_DISC 0x1 /* first time DISC */
+#define UNF_RSCN_DISC 0x2 /* RSCN Port Addr DISC */
+#define UNF_SET_ELS_ACC_TYPE(els_cmd) ((u32)(els_cmd) << 16 | ELS_ACC)
+#define UNF_SET_ELS_RJT_TYPE(els_cmd) ((u32)(els_cmd) << 16 | ELS_RJT)
+#define UNF_XCHG_IS_ELS_REPLY(xchg) \
+ ((ELS_ACC == ((xchg)->cmnd_code & 0x0ffff)) || \
+ (ELS_RJT == ((xchg)->cmnd_code & 0x0ffff)))
+
+struct unf_els_handle_table {
+ u32 cmnd;
+ u32 (*els_cmnd_handler)(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
+};
+
+void unf_select_sq(struct unf_xchg *xchg, struct unf_frame_pkg *pkg);
+void unf_fill_package(struct unf_frame_pkg *pkg, struct unf_xchg *xchg,
+ struct unf_rport *rport);
+struct unf_xchg *unf_get_sfs_free_xchg_and_init(struct unf_lport *lport,
+ u32 did,
+ struct unf_rport *rport,
+ union unf_sfs_u **fc_entry);
+void *unf_get_one_big_sfs_buf(struct unf_xchg *xchg);
+u32 unf_mv_resp_2_xchg(struct unf_xchg *xchg, struct unf_frame_pkg *pkg);
+void unf_rport_immediate_link_down(struct unf_lport *lport,
+ struct unf_rport *rport);
+struct unf_rport *unf_find_rport(struct unf_lport *lport, u32 rport_nport_id,
+ u64 port_name);
+void unf_process_logo_in_fabric(struct unf_lport *lport,
+ struct unf_rport *rport);
+void unf_notify_chip_free_xid(struct unf_xchg *xchg);
+
+u32 unf_ls_gs_cmnd_send(struct unf_lport *lport, struct unf_frame_pkg *pkg,
+ struct unf_xchg *xchg);
+u32 unf_receive_ls_gs_pkg(void *lport, struct unf_frame_pkg *pkg);
+struct unf_xchg *unf_mv_data_2_xchg(struct unf_lport *lport,
+ struct unf_frame_pkg *pkg);
+u32 unf_receive_bls_pkg(void *lport, struct unf_frame_pkg *pkg);
+u32 unf_send_els_done(void *lport, struct unf_frame_pkg *pkg);
+u32 unf_send_els_rjt_by_did(struct unf_lport *lport, struct unf_xchg *xchg,
+ u32 did, struct unf_rjt_info *rjt_info);
+u32 unf_send_els_rjt_by_rport(struct unf_lport *lport, struct unf_xchg *xchg,
+ struct unf_rport *rport,
+ struct unf_rjt_info *rjt_info);
+u32 unf_send_abts(struct unf_lport *lport, struct unf_xchg *xchg);
+void unf_process_rport_after_logo(struct unf_lport *lport,
+ struct unf_rport *rport);
+
+#ifdef __cplusplus
+}
+#endif /* __cplusplus */
+
+#endif /* UNF_SERVICE_H */
diff --git a/drivers/scsi/spfc/common/unf_type.h b/drivers/scsi/spfc/common/unf_type.h
new file mode 100644
index 000000000000..28e163d0543c
--- /dev/null
+++ b/drivers/scsi/spfc/common/unf_type.h
@@ -0,0 +1,216 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef UNF_TYPE_H
+#define UNF_TYPE_H
+
+#include <linux/sched.h>
+#include <linux/kthread.h>
+#include <linux/fs.h>
+#include <linux/vmalloc.h>
+#include <linux/version.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/delay.h>
+#include <linux/workqueue.h>
+#include <linux/kref.h>
+#include <linux/scatterlist.h>
+#include <linux/crc-t10dif.h>
+#include <linux/ctype.h>
+#include <linux/types.h>
+#include <linux/compiler.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+#include <linux/random.h>
+#include <linux/jiffies.h>
+#include <linux/cpufreq.h>
+#include <linux/semaphore.h>
+
+#include <scsi/scsi.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_transport_fc.h>
+#include <linux/sched/signal.h>
+
+#ifndef SPFC_FT
+#define SPFC_FT
+#endif
+
+#define BUF_LIST_PAGE_SIZE (PAGE_SIZE << 8)
+
+#define UNF_S_TO_NS (1000000000)
+#define UNF_S_TO_MS (1000)
+
+enum UNF_OS_THRD_PRI_E {
+ UNF_OS_THRD_PRI_HIGHEST = 0,
+ UNF_OS_THRD_PRI_HIGH,
+ UNF_OS_THRD_PRI_SUBHIGH,
+ UNF_OS_THRD_PRI_MIDDLE,
+ UNF_OS_THRD_PRI_LOW,
+ UNF_OS_THRD_PRI_BUTT
+};
+
+#define UNF_OS_LIST_NEXT(a) ((a)->next)
+#define UNF_OS_LIST_PREV(a) ((a)->prev)
+
+#define UNF_OS_PER_NS (1000000000)
+#define UNF_OS_MS_TO_NS (1000000)
+
+#ifndef MIN
+#define MIN(X, Y) ((X) < (Y) ? (X) : (Y))
+#endif
+
+#ifndef MAX
+#define MAX(X, Y) ((X) > (Y) ? (X) : (Y))
+#endif
+
+#ifndef INVALID_VALUE64
+#define INVALID_VALUE64 0xFFFFFFFFFFFFFFFFULL
+#endif /* INVALID_VALUE64 */
+
+#ifndef INVALID_VALUE32
+#define INVALID_VALUE32 0xFFFFFFFF
+#endif /* INVALID_VALUE32 */
+
+#ifndef INVALID_VALUE16
+#define INVALID_VALUE16 0xFFFF
+#endif /* INVALID_VALUE16 */
+
+#ifndef INVALID_VALUE8
+#define INVALID_VALUE8 0xFF
+#endif /* INVALID_VALUE8 */
+
+#ifndef RETURN_OK
+#define RETURN_OK 0
+#endif
+
+#ifndef RETURN_ERROR
+#define RETURN_ERROR (~0)
+#endif
+#define UNF_RETURN_ERROR (~0)
+
+/* define shift bits */
+#define UNF_SHIFT_1 1
+#define UNF_SHIFT_2 2
+#define UNF_SHIFT_3 3
+#define UNF_SHIFT_4 4
+#define UNF_SHIFT_6 6
+#define UNF_SHIFT_7 7
+#define UNF_SHIFT_8 8
+#define UNF_SHIFT_11 11
+#define UNF_SHIFT_12 12
+#define UNF_SHIFT_15 15
+#define UNF_SHIFT_16 16
+#define UNF_SHIFT_17 17
+#define UNF_SHIFT_19 19
+#define UNF_SHIFT_20 20
+#define UNF_SHIFT_23 23
+#define UNF_SHIFT_24 24
+#define UNF_SHIFT_25 25
+#define UNF_SHIFT_26 26
+#define UNF_SHIFT_28 28
+#define UNF_SHIFT_29 29
+#define UNF_SHIFT_32 32
+#define UNF_SHIFT_35 35
+#define UNF_SHIFT_37 37
+#define UNF_SHIFT_39 39
+#define UNF_SHIFT_40 40
+#define UNF_SHIFT_43 43
+#define UNF_SHIFT_48 48
+#define UNF_SHIFT_51 51
+#define UNF_SHIFT_56 56
+#define UNF_SHIFT_57 57
+#define UNF_SHIFT_59 59
+#define UNF_SHIFT_60 60
+#define UNF_SHIFT_61 61
+
+/* array index */
+#define ARRAY_INDEX_0 0
+#define ARRAY_INDEX_1 1
+#define ARRAY_INDEX_2 2
+#define ARRAY_INDEX_3 3
+#define ARRAY_INDEX_4 4
+#define ARRAY_INDEX_5 5
+#define ARRAY_INDEX_6 6
+#define ARRAY_INDEX_7 7
+#define ARRAY_INDEX_8 8
+#define ARRAY_INDEX_10 10
+#define ARRAY_INDEX_11 11
+#define ARRAY_INDEX_12 12
+#define ARRAY_INDEX_13 13
+
+/* define mask bits */
+#define UNF_MASK_BIT_7_0 0xff
+#define UNF_MASK_BIT_15_0 0x0000ffff
+#define UNF_MASK_BIT_31_16 0xffff0000
+
+#define UNF_IO_SUCCESS 0x00000000
+#define UNF_IO_ABORTED 0x00000001 /* the host system aborted the command */
+#define UNF_IO_FAILED 0x00000002
+#define UNF_IO_ABORT_ABTS 0x00000003
+#define UNF_IO_ABORT_LOGIN 0x00000004 /* abort login */
+#define UNF_IO_ABORT_REET 0x00000005 /* reset event aborted the transport */
+#define UNF_IO_ABORT_FAILED 0x00000006 /* abort failed */
+/* data out of order, data reassembly error */
+#define UNF_IO_OUTOF_ORDER 0x00000007
+#define UNF_IO_FTO 0x00000008 /* frame time out */
+#define UNF_IO_LINK_FAILURE 0x00000009
+#define UNF_IO_OVER_FLOW 0x0000000a /* data over run */
+#define UNF_IO_RSP_OVER 0x0000000b
+#define UNF_IO_LOST_FRAME 0x0000000c
+#define UNF_IO_UNDER_FLOW 0x0000000d /* data under run */
+#define UNF_IO_HOST_PROG_ERROR 0x0000000e
+#define UNF_IO_SEST_PROG_ERROR 0x0000000f
+#define UNF_IO_INVALID_ENTRY 0x00000010
+#define UNF_IO_ABORT_SEQ_NOT 0x00000011
+#define UNF_IO_REJECT 0x00000012
+#define UNF_IO_RS_INFO 0x00000013
+#define UNF_IO_EDC_IN_ERROR 0x00000014
+#define UNF_IO_EDC_OUT_ERROR 0x00000015
+#define UNF_IO_UNINIT_KEK_ERR 0x00000016
+#define UNF_IO_DEK_OUTOF_RANGE 0x00000017
+#define UNF_IO_KEY_UNWRAP_ERR 0x00000018
+#define UNF_IO_KEY_TAG_ERR 0x00000019
+#define UNF_IO_KEY_ECC_ERR 0x0000001a
+#define UNF_IO_BLOCK_SIZE_ERROR 0x0000001b
+#define UNF_IO_ILLEGAL_CIPHER_MODE 0x0000001c
+#define UNF_IO_CLEAN_UP 0x0000001d
+#define UNF_SRR_RECEIVE 0x0000001e /* receive srr */
+/* The target device sent an ABTS to abort the I/O.*/
+#define UNF_IO_ABORTED_BY_TARGET 0x0000001f
+#define UNF_IO_TRANSPORT_ERROR 0x00000020
+#define UNF_IO_LINK_FLASH 0x00000021
+#define UNF_IO_TIMEOUT 0x00000022
+#define UNF_IO_PORT_UNAVAILABLE 0x00000023
+#define UNF_IO_PORT_LOGOUT 0x00000024
+#define UNF_IO_PORT_CFG_CHG 0x00000025
+#define UNF_IO_FIRMWARE_RES_UNAVAILABLE 0x00000026
+#define UNF_IO_TASK_MGT_OVERRUN 0x00000027
+#define UNF_IO_DMA_ERROR 0x00000028
+#define UNF_IO_DIF_ERROR 0x00000029
+#define UNF_IO_NO_LPORT 0x0000002a
+#define UNF_IO_NO_XCHG 0x0000002b
+#define UNF_IO_SOFT_ERR 0x0000002c
+#define UNF_IO_XCHG_ADD_ERROR 0x0000002d
+#define UNF_IO_NO_LOGIN 0x0000002e
+#define UNF_IO_NO_BUFFER 0x0000002f
+#define UNF_IO_DID_ERROR 0x00000030
+#define UNF_IO_UNSUPPORT 0x00000031
+#define UNF_IO_NOREADY 0x00000032
+#define UNF_IO_NPORTID_REUSED 0x00000033
+#define UNF_IO_NPORT_HANDLE_REUSED 0x00000034
+#define UNF_IO_NO_NPORT_HANDLE 0x00000035
+#define UNF_IO_ABORT_BY_FW 0x00000036
+#define UNF_IO_ABORT_PORT_REMOVING 0x00000037
+#define UNF_IO_INCOMPLETE 0x00000038
+#define UNF_IO_DIF_REF_ERROR 0x00000039
+#define UNF_IO_DIF_GEN_ERROR 0x0000003a
+
+#define UNF_IO_ERREND 0xFFFFFFFF
+
+#endif
diff --git a/drivers/scsi/spfc/hw/spfc_chipitf.c b/drivers/scsi/spfc/hw/spfc_chipitf.c
new file mode 100644
index 000000000000..be6073ff4dc0
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_chipitf.c
@@ -0,0 +1,1105 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "spfc_chipitf.h"
+#include "sphw_hw.h"
+#include "sphw_crm.h"
+
+#define SPFC_MBOX_TIME_SEC_MAX (60)
+
+#define SPFC_LINK_UP_COUNT 1
+#define SPFC_LINK_DOWN_COUNT 2
+#define SPFC_FC_DELETE_CMND_COUNT 3
+
+#define SPFC_MBX_MAX_TIMEOUT 10000
+
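+/*
+ * Query board type and WWNN/WWPN from the firmware through the
+ * GET_CHIP_INFO mailbox and return them in the caller's argout structure.
+ */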
+u32 spfc_get_chip_msg(void *hba, void *mac)
+{
+ struct spfc_hba_info *spfc_hba = NULL;
+ struct unf_get_chip_info_argout *wwn = NULL;
+ struct spfc_inmbox_get_chip_info get_chip_info;
+ union spfc_outmbox_generic *get_chip_info_sts = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(mac, UNF_RETURN_ERROR);
+
+ spfc_hba = (struct spfc_hba_info *)hba;
+ wwn = (struct unf_get_chip_info_argout *)mac;
+
+ memset(&get_chip_info, 0, sizeof(struct spfc_inmbox_get_chip_info));
+
+ get_chip_info_sts = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
+ if (!get_chip_info_sts) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]malloc outmbox memory failed");
+ return UNF_RETURN_ERROR;
+ }
+ memset(get_chip_info_sts, 0, sizeof(union spfc_outmbox_generic));
+
+ get_chip_info.header.cmnd_type = SPFC_MBOX_GET_CHIP_INFO;
+ get_chip_info.header.length =
+ SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_get_chip_info));
+
+ if (spfc_mb_send_and_wait_mbox(spfc_hba, &get_chip_info,
+ sizeof(struct spfc_inmbox_get_chip_info),
+ get_chip_info_sts) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]spfc can't send and wait mailbox, command type: 0x%x.",
+ get_chip_info.header.cmnd_type);
+
+ goto exit;
+ }
+
+ if (get_chip_info_sts->get_chip_info_sts.status != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) mailbox status incorrect status(0x%x) .",
+ spfc_hba->port_cfg.port_id,
+ get_chip_info_sts->get_chip_info_sts.status);
+
+ goto exit;
+ }
+
+ if (get_chip_info_sts->get_chip_info_sts.header.cmnd_type != SPFC_MBOX_GET_CHIP_INFO_STS) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Port(0x%x) receive mailbox type incorrect type: 0x%x.",
+ spfc_hba->port_cfg.port_id,
+ get_chip_info_sts->get_chip_info_sts.header.cmnd_type);
+
+ goto exit;
+ }
+
+ wwn->board_type = get_chip_info_sts->get_chip_info_sts.board_type;
+ spfc_hba->card_info.card_type = get_chip_info_sts->get_chip_info_sts.board_type;
+ wwn->wwnn = get_chip_info_sts->get_chip_info_sts.wwnn;
+ wwn->wwpn = get_chip_info_sts->get_chip_info_sts.wwpn;
+
+ ret = RETURN_OK;
+exit:
+ kfree(get_chip_info_sts);
+
+ return ret;
+}
+
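+/*
+ * Same GET_CHIP_INFO query, but sent synchronously through the management
+ * channel with only the hwdev handle; only WWNN/WWPN are copied back.
+ */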
+u32 spfc_get_chip_capability(void *hwdev_handle,
+ struct spfc_chip_info *chip_info)
+{
+ struct spfc_inmbox_get_chip_info get_chip_info;
+ union spfc_outmbox_generic *get_chip_info_sts = NULL;
+ u16 out_size = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hwdev_handle, UNF_RETURN_ERROR);
+
+ memset(&get_chip_info, 0, sizeof(struct spfc_inmbox_get_chip_info));
+
+ get_chip_info_sts = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
+ if (!get_chip_info_sts) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "malloc outmbox memory failed");
+ return UNF_RETURN_ERROR;
+ }
+ memset(get_chip_info_sts, 0, sizeof(union spfc_outmbox_generic));
+
+ get_chip_info.header.cmnd_type = SPFC_MBOX_GET_CHIP_INFO;
+ get_chip_info.header.length =
+ SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_get_chip_info));
+ get_chip_info.header.port_id = (u8)sphw_global_func_id(hwdev_handle);
+ out_size = sizeof(union spfc_outmbox_generic);
+
+ if (sphw_msg_to_mgmt_sync(hwdev_handle, COMM_MOD_FC, SPFC_MBOX_GET_CHIP_INFO,
+ (void *)&get_chip_info.header,
+ sizeof(struct spfc_inmbox_get_chip_info),
+ (union spfc_outmbox_generic *)(get_chip_info_sts), &out_size,
+ (SPFC_MBX_MAX_TIMEOUT), SPHW_CHANNEL_FC) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "spfc can't send and wait mailbox, command type: 0x%x.",
+ SPFC_MBOX_GET_CHIP_INFO);
+
+ goto exit;
+ }
+
+ if (get_chip_info_sts->get_chip_info_sts.status != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port mailbox status incorrect status(0x%x) .",
+ get_chip_info_sts->get_chip_info_sts.status);
+
+ goto exit;
+ }
+
+ if (get_chip_info_sts->get_chip_info_sts.header.cmnd_type != SPFC_MBOX_GET_CHIP_INFO_STS) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port receive mailbox type incorrect type: 0x%x.",
+ get_chip_info_sts->get_chip_info_sts.header.cmnd_type);
+
+ goto exit;
+ }
+
+ chip_info->wwnn = get_chip_info_sts->get_chip_info_sts.wwnn;
+ chip_info->wwpn = get_chip_info_sts->get_chip_info_sts.wwpn;
+
+ ret = RETURN_OK;
+exit:
+ kfree(get_chip_info_sts);
+
+ return ret;
+}
+
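+/*
+ * Send the CONFIG_API mailbox that programs topology mode, speed,
+ * BB credit and ESCH values for the port.
+ */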
+u32 spfc_config_port_table(struct spfc_hba_info *hba)
+{
+ struct spfc_inmbox_config_api config_api;
+ union spfc_outmbox_generic *out_mbox = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ memset(&config_api, 0, sizeof(config_api));
+ out_mbox = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
+ if (!out_mbox) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]malloc outmbox memory failed");
+ return UNF_RETURN_ERROR;
+ }
+ memset(out_mbox, 0, sizeof(union spfc_outmbox_generic));
+
+ config_api.header.cmnd_type = SPFC_MBOX_CONFIG_API;
+ config_api.header.length = SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_config_api));
+
+ config_api.op_code = UNDEFINEOPCODE;
+
+	/* Convert the topology cmd from CM into the cmd that the firmware recognizes */
+	/* if the cmd from CM equals UNF_TOP_P2P_MASK, it is changed into
+	 * P2P topology; LL uses SPFC_TOP_NON_LOOP_MASK
+	 */
+ if (((u8)(hba->port_topo_cfg)) == UNF_TOP_P2P_MASK) {
+ config_api.topy_mode = 0x2;
+		/* if the cmd from CM equals UNF_TOP_LOOP_MASK, it is changed
+		 * into loop topology; LL uses SPFC_TOP_LOOP_MASK
+		 */
+ } else if (((u8)(hba->port_topo_cfg)) == UNF_TOP_LOOP_MASK) {
+ config_api.topy_mode = 0x1;
+		/* if the cmd from CM equals UNF_TOP_AUTO_MASK, it is changed
+		 * into auto topology; LL uses SPFC_TOP_AUTO_MASK
+		 */
+ } else if (((u8)(hba->port_topo_cfg)) == UNF_TOP_AUTO_MASK) {
+ config_api.topy_mode = 0x0;
+ } else {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) topo cmd is error, command type: 0x%x",
+ hba->port_cfg.port_id, (u8)(hba->port_topo_cfg));
+
+ goto exit;
+ }
+
+ /* About speed */
+ config_api.sfp_speed = (u8)(hba->port_speed_cfg);
+ config_api.max_speed = (u8)(hba->max_support_speed);
+
+ config_api.rx_6432g_bb_credit = SPFC_LOWLEVEL_DEFAULT_32G_BB_CREDIT;
+ config_api.rx_16g_bb_credit = SPFC_LOWLEVEL_DEFAULT_16G_BB_CREDIT;
+ config_api.rx_84g_bb_credit = SPFC_LOWLEVEL_DEFAULT_8G_BB_CREDIT;
+ config_api.rdy_cnt_bf_fst_frm = SPFC_LOWLEVEL_DEFAULT_LOOP_BB_CREDIT;
+ config_api.esch_32g_value = SPFC_LOWLEVEL_DEFAULT_32G_ESCH_VALUE;
+ config_api.esch_16g_value = SPFC_LOWLEVEL_DEFAULT_16G_ESCH_VALUE;
+ config_api.esch_8g_value = SPFC_LOWLEVEL_DEFAULT_8G_ESCH_VALUE;
+ config_api.esch_4g_value = SPFC_LOWLEVEL_DEFAULT_8G_ESCH_VALUE;
+ config_api.esch_64g_value = SPFC_LOWLEVEL_DEFAULT_8G_ESCH_VALUE;
+ config_api.esch_bust_size = SPFC_LOWLEVEL_DEFAULT_ESCH_BUST_SIZE;
+
+ /* default value:0xFF */
+ config_api.hard_alpa = 0xFF;
+ memcpy(config_api.port_name, hba->sys_port_name, UNF_WWN_LEN);
+
+	/* slave only: the value is 1; participates in loop master
+	 * selection: the value is 0
+	 */
+ config_api.slave = hba->port_loop_role;
+
+ /* 1:auto negotiate, 0:fixed mode negotiate */
+ if (config_api.sfp_speed == 0)
+ config_api.auto_sneg = 0x1;
+ else
+ config_api.auto_sneg = 0x0;
+
+ if (spfc_mb_send_and_wait_mbox(hba, &config_api, sizeof(config_api),
+ out_mbox) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[warn]Port(0x%x) SPFC can't send and wait mailbox, command type: 0x%x",
+ hba->port_cfg.port_id,
+ config_api.header.cmnd_type);
+
+ goto exit;
+ }
+
+ if (out_mbox->config_api_sts.status != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive mailbox type(0x%x) with status(0x%x) error",
+ hba->port_cfg.port_id,
+ out_mbox->config_api_sts.header.cmnd_type,
+ out_mbox->config_api_sts.status);
+
+ goto exit;
+ }
+
+ if (out_mbox->config_api_sts.header.cmnd_type != SPFC_MBOX_CONFIG_API_STS) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive mailbox type(0x%x) error",
+ hba->port_cfg.port_id,
+ out_mbox->config_api_sts.header.cmnd_type);
+
+ goto exit;
+ }
+
+ ret = RETURN_OK;
+exit:
+ kfree(out_mbox);
+
+ return ret;
+}
+
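+/* Turn the FC port on or off through the PORT_SWITCH mailbox */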
+u32 spfc_port_switch(struct spfc_hba_info *hba, bool turn_on)
+{
+ struct spfc_inmbox_port_switch port_switch;
+ union spfc_outmbox_generic *port_switch_sts = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ memset(&port_switch, 0, sizeof(port_switch));
+
+ port_switch_sts = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
+ if (!port_switch_sts) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]malloc outmbox memory failed");
+ return UNF_RETURN_ERROR;
+ }
+ memset(port_switch_sts, 0, sizeof(union spfc_outmbox_generic));
+
+ port_switch.header.cmnd_type = SPFC_MBOX_PORT_SWITCH;
+ port_switch.header.length = SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_port_switch));
+ port_switch.op_code = (u8)turn_on;
+
+ if (spfc_mb_send_and_wait_mbox(hba, &port_switch, sizeof(port_switch),
+ (union spfc_outmbox_generic *)((void *)port_switch_sts)) !=
+ RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[warn]Port(0x%x) SPFC can't send and wait mailbox, command type(0x%x) opcode(0x%x)",
+ hba->port_cfg.port_id,
+ port_switch.header.cmnd_type, port_switch.op_code);
+
+ goto exit;
+ }
+
+ if (port_switch_sts->port_switch_sts.status != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive mailbox type(0x%x) status(0x%x) error",
+ hba->port_cfg.port_id,
+ port_switch_sts->port_switch_sts.header.cmnd_type,
+ port_switch_sts->port_switch_sts.status);
+
+ goto exit;
+ }
+
+ if (port_switch_sts->port_switch_sts.header.cmnd_type != SPFC_MBOX_PORT_SWITCH_STS) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive mailbox type(0x%x) error",
+ hba->port_cfg.port_id,
+ port_switch_sts->port_switch_sts.header.cmnd_type);
+
+ goto exit;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_MAJOR,
+ "[event]Port(0x%x) switch succeed, turns to %s",
+ hba->port_cfg.port_id, (turn_on) ? "on" : "off");
+
+ ret = RETURN_OK;
+exit:
+ kfree(port_switch_sts);
+
+ return ret;
+}
+
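+/*
+ * Push the negotiated login parameters (BB credit, E_D_TOV/R_A_TOV, BB_SC_N)
+ * to the firmware: synchronously when BB_SC_N is non-zero, otherwise as an
+ * asynchronous management message.
+ */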
+u32 spfc_config_login_api(struct spfc_hba_info *hba,
+ struct unf_port_login_parms *login_parms)
+{
+#define SPFC_LOOP_RDYNUM 8
+ int iret = RETURN_OK;
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_inmbox_config_login config_login;
+ union spfc_outmbox_generic *cfg_login_sts = NULL;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ memset(&config_login, 0, sizeof(config_login));
+ cfg_login_sts = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
+ if (!cfg_login_sts) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]malloc outmbox memory failed");
+ return UNF_RETURN_ERROR;
+ }
+ memset(cfg_login_sts, 0, sizeof(union spfc_outmbox_generic));
+
+ config_login.header.cmnd_type = SPFC_MBOX_CONFIG_LOGIN_API;
+ config_login.header.length = SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_config_login));
+ config_login.header.port_id = hba->port_index;
+
+ config_login.op_code = UNDEFINEOPCODE;
+
+ config_login.tx_bb_credit = hba->remote_bb_credit;
+
+ config_login.etov = hba->compared_edtov_val;
+ config_login.rtov = hba->compared_ratov_val;
+
+ config_login.rt_tov_tag = hba->remote_rttov_tag;
+ config_login.ed_tov_tag = hba->remote_edtov_tag;
+ config_login.bb_credit = hba->remote_bb_credit;
+ config_login.bb_scn = SPFC_LSB(hba->compared_bb_scn);
+
+ if (config_login.bb_scn) {
+ config_login.lr_flag = (login_parms->els_cmnd_code == ELS_PLOGI) ? 0 : 1;
+ ret = spfc_mb_send_and_wait_mbox(hba, &config_login, sizeof(config_login),
+ (union spfc_outmbox_generic *)cfg_login_sts);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) SPFC can't send and wait mailbox, command type: 0x%x.",
+ hba->port_cfg.port_id, config_login.header.cmnd_type);
+
+ goto exit;
+ }
+
+ if (cfg_login_sts->config_login_sts.header.cmnd_type !=
+ SPFC_MBOX_CONFIG_LOGIN_API_STS) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Port(0x%x) Receive mailbox type incorrect. Type: 0x%x.",
+ hba->port_cfg.port_id,
+ cfg_login_sts->config_login_sts.header.cmnd_type);
+
+ goto exit;
+ }
+
+ if (cfg_login_sts->config_login_sts.status != STATUS_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "Port(0x%x) Receive mailbox type(0x%x) status incorrect. Status: 0x%x.",
+ hba->port_cfg.port_id,
+ cfg_login_sts->config_login_sts.header.cmnd_type,
+ cfg_login_sts->config_login_sts.status);
+
+ goto exit;
+ }
+ } else {
+ iret = sphw_msg_to_mgmt_async(hba->dev_handle, COMM_MOD_FC,
+ SPFC_MBOX_CONFIG_LOGIN_API, &config_login,
+ sizeof(config_login), SPHW_CHANNEL_FC);
+
+ if (iret != 0) {
+ SPFC_MAILBOX_STAT(hba, SPFC_SEND_CONFIG_LOGINAPI_FAIL);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) spfc can't send config login cmd to up,ret:%d.",
+ hba->port_cfg.port_id, iret);
+
+ goto exit;
+ }
+
+ SPFC_MAILBOX_STAT(hba, SPFC_SEND_CONFIG_LOGINAPI);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "Port(0x%x) Topo(0x%x) Config login param to up: txbbcredit(0x%x), BB_SC_N(0x%x).",
+ hba->port_cfg.port_id, hba->active_topo,
+ config_login.tx_bb_credit, config_login.bb_scn);
+
+ ret = RETURN_OK;
+exit:
+ kfree(cfg_login_sts);
+
+ return ret;
+}
+
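+/*
+ * Serialize driver mailboxes: wait for the previous mailbox to complete,
+ * send this one synchronously to the firmware, then signal completion.
+ */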
+u32 spfc_mb_send_and_wait_mbox(struct spfc_hba_info *hba, const void *in_mbox,
+ u16 in_size,
+ union spfc_outmbox_generic *out_mbox)
+{
+ void *handle = NULL;
+ u16 out_size = 0;
+ ulong time_out = 0;
+ int ret = 0;
+ struct spfc_mbox_header *header = NULL;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(in_mbox, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(out_mbox, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(hba->dev_handle, UNF_RETURN_ERROR);
+ header = (struct spfc_mbox_header *)in_mbox;
+ out_size = sizeof(union spfc_outmbox_generic);
+ handle = hba->dev_handle;
+ header->port_id = (u8)sphw_global_func_id(handle);
+
+	/* Wait for the last mailbox to complete: */
+ time_out = wait_for_completion_timeout(&hba->mbox_complete,
+ (ulong)msecs_to_jiffies(SPFC_MBOX_TIME_SEC_MAX *
+ UNF_S_TO_MS));
+ if (time_out == SPFC_ZERO) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
+ "[err]Port(0x%x) wait mailbox(0x%x) completion timeout: %d sec",
+ hba->port_cfg.port_id, header->cmnd_type,
+ SPFC_MBOX_TIME_SEC_MAX);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Send Msg to uP Sync: timer 10s */
+ ret = sphw_msg_to_mgmt_sync(handle, COMM_MOD_FC, header->cmnd_type,
+ (void *)in_mbox, in_size,
+ (union spfc_outmbox_generic *)out_mbox,
+ &out_size, (SPFC_MBX_MAX_TIMEOUT),
+ SPHW_CHANNEL_FC);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[warn]Port(0x%x) can not send mailbox(0x%x) with ret:%d",
+ hba->port_cfg.port_id, header->cmnd_type, ret);
+
+ complete(&hba->mbox_complete);
+ return UNF_RETURN_ERROR;
+ }
+
+ complete(&hba->mbox_complete);
+
+ return RETURN_OK;
+}
+
+void spfc_initial_dynamic_info(struct spfc_hba_info *fc_port)
+{
+ struct spfc_hba_info *hba = fc_port;
+ ulong flag = 0;
+
+ FC_CHECK_RETURN_VOID(hba);
+
+ spin_lock_irqsave(&hba->hba_lock, flag);
+ hba->active_port_speed = UNF_PORT_SPEED_UNKNOWN;
+ hba->active_topo = UNF_ACT_TOP_UNKNOWN;
+ hba->phy_link = UNF_PORT_LINK_DOWN;
+ hba->queue_set_stage = SPFC_QUEUE_SET_STAGE_INIT;
+ hba->loop_map_valid = LOOP_MAP_INVALID;
+ hba->srq_delay_info.srq_delay_flag = 0;
+ hba->srq_delay_info.root_rq_rcvd_flag = 0;
+ spin_unlock_irqrestore(&hba->hba_lock, flag);
+}
+
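+/*
+ * Link-up event from the firmware: record speed and LED state, work out the
+ * active topology (public/private loop or P2P) and loop map, clear the
+ * flush/clear flags and report link up to COM.
+ */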
+static u32 spfc_recv_fc_linkup(struct spfc_hba_info *hba, void *buf_in)
+{
+#define SPFC_LOOP_MASK 0x1
+#define SPFC_LOOPMAP_COUNT 128
+
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_link_event *link_event = NULL;
+
+ link_event = (struct spfc_link_event *)buf_in;
+ hba->phy_link = UNF_PORT_LINK_UP;
+ hba->active_port_speed = link_event->speed;
+ hba->led_states.green_speed_led = (u8)(link_event->green_speed_led);
+ hba->led_states.yellow_speed_led = (u8)(link_event->yellow_speed_led);
+ hba->led_states.ac_led = (u8)(link_event->ac_led);
+
+ if (link_event->top_type == SPFC_LOOP_MASK &&
+ (link_event->loop_map_info[ARRAY_INDEX_1] == UNF_FL_PORT_LOOP_ADDR ||
+ link_event->loop_map_info[ARRAY_INDEX_2] == UNF_FL_PORT_LOOP_ADDR)) {
+ hba->active_topo = UNF_ACT_TOP_PUBLIC_LOOP; /* Public Loop */
+ hba->active_alpa = link_event->alpa_value; /* AL_PA */
+ memcpy(hba->loop_map, link_event->loop_map_info, SPFC_LOOPMAP_COUNT);
+ hba->loop_map_valid = LOOP_MAP_VALID;
+ } else if (link_event->top_type == SPFC_LOOP_MASK) {
+ hba->active_topo = UNF_ACT_TOP_PRIVATE_LOOP; /* Private Loop */
+ hba->active_alpa = link_event->alpa_value; /* AL_PA */
+ memcpy(hba->loop_map, link_event->loop_map_info, SPFC_LOOPMAP_COUNT);
+ hba->loop_map_valid = LOOP_MAP_VALID;
+ } else {
+ hba->active_topo = UNF_TOP_P2P_MASK; /* P2P_D or P2P_F */
+ }
+
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_KEVENT,
+ "[event]Port(0x%x) receive link up event(0x%x) with speed(0x%x) uP_topo(0x%x) driver_topo(0x%x)",
+ hba->port_cfg.port_id, link_event->link_event,
+ link_event->speed, link_event->top_type, hba->active_topo);
+
+ /* Set clear & flush state */
+ spfc_set_hba_clear_state(hba, false);
+ spfc_set_hba_flush_state(hba, false);
+ spfc_set_rport_flush_state(hba, false);
+
+ /* Report link up event to COM */
+ UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport, UNF_PORT_LINK_UP,
+ &hba->active_port_speed);
+
+ SPFC_LINK_EVENT_STAT(hba, SPFC_LINK_UP_COUNT);
+
+ return ret;
+}
+
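+/*
+ * Link-down event from the firmware: reset the dynamic port info, set the
+ * HBA and R_Port flush state and report link down to COM.
+ */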
+static u32 spfc_recv_fc_linkdown(struct spfc_hba_info *hba, void *buf_in)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_link_event *link_event = NULL;
+
+ link_event = (struct spfc_link_event *)buf_in;
+
+ /* 1. Led state setting */
+ hba->led_states.green_speed_led = (u8)(link_event->green_speed_led);
+ hba->led_states.yellow_speed_led = (u8)(link_event->yellow_speed_led);
+ hba->led_states.ac_led = (u8)(link_event->ac_led);
+
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_KEVENT,
+ "[event]Port(0x%x) receive link down event(0x%x) reason(0x%x)",
+ hba->port_cfg.port_id, link_event->link_event, link_event->reason);
+
+ spfc_initial_dynamic_info(hba);
+
+ /* 2. set HBA flush state */
+ spfc_set_hba_flush_state(hba, true);
+
+ /* 3. set R_Port (parent SQ) flush state */
+ spfc_set_rport_flush_state(hba, true);
+
+ /* 4. Report link down event to COM */
+ UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport, UNF_PORT_LINK_DOWN, 0);
+
+ /* DFX setting */
+ SPFC_LINK_REASON_STAT(hba, link_event->reason);
+ SPFC_LINK_EVENT_STAT(hba, SPFC_LINK_DOWN_COUNT);
+
+ return ret;
+}
+
+static u32 spfc_recv_fc_delcmd(struct spfc_hba_info *hba, void *buf_in)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_link_event *link_event = NULL;
+
+ link_event = (struct spfc_link_event *)buf_in;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) receive delete cmd event(0x%x)",
+ hba->port_cfg.port_id, link_event->link_event);
+
+ /* Send buffer clear cmnd */
+ ret = spfc_clear_fetched_sq_wqe(hba);
+
+ hba->queue_set_stage = SPFC_QUEUE_SET_STAGE_SCANNING;
+ SPFC_LINK_EVENT_STAT(hba, SPFC_FC_DELETE_CMND_COUNT);
+
+ return ret;
+}
+
+static u32 spfc_recv_fc_error(struct spfc_hba_info *hba, void *buf_in)
+{
+#define FC_ERR_LEVEL_DEAD 0
+#define FC_ERR_LEVEL_HIGH 1
+#define FC_ERR_LEVEL_LOW 2
+
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_up_error_event *up_error_event = NULL;
+
+ up_error_event = (struct spfc_up_error_event *)buf_in;
+ if (up_error_event->error_type >= SPFC_UP_ERR_BUTT ||
+ up_error_event->error_value >= SPFC_ERR_VALUE_BUTT) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "Port(0x%x) receive an unsupported UP Error Event Type(0x%x) Value(0x%x).",
+ hba->port_cfg.port_id, up_error_event->error_type,
+ up_error_event->error_value);
+ return ret;
+ }
+
+ switch (up_error_event->error_level) {
+ case FC_ERR_LEVEL_DEAD:
+ ret = RETURN_OK;
+ break;
+
+ case FC_ERR_LEVEL_HIGH:
+ /* port reset */
+ UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport,
+ UNF_PORT_ABNORMAL_RESET, NULL);
+ break;
+
+ case FC_ERR_LEVEL_LOW:
+ ret = RETURN_OK;
+ break;
+
+ default:
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "Port(0x%x) receive an unsupported UP Error Event Level(0x%x), can not process.",
+ hba->port_cfg.port_id,
+ up_error_event->error_level);
+ return ret;
+ }
+ if (up_error_event->error_value < SPFC_ERR_VALUE_BUTT)
+ SPFC_UP_ERR_EVENT_STAT(hba, up_error_event->error_value);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) process UP Error Event Level(0x%x) Type(0x%x) Value(0x%x) %s.",
+ hba->port_cfg.port_id, up_error_event->error_level,
+ up_error_event->error_type, up_error_event->error_value,
+ (ret == UNF_RETURN_ERROR) ? "ERROR" : "OK");
+
+ return ret;
+}
+
+static struct spfc_up2drv_msg_handle up_msg_handle[] = {
+ {SPFC_MBOX_RECV_FC_LINKUP, spfc_recv_fc_linkup},
+ {SPFC_MBOX_RECV_FC_LINKDOWN, spfc_recv_fc_linkdown},
+ {SPFC_MBOX_RECV_FC_DELCMD, spfc_recv_fc_delcmd},
+ {SPFC_MBOX_RECV_FC_ERROR, spfc_recv_fc_error}
+};
+
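+/*
+ * Dispatch asynchronous firmware-to-driver messages (link up/down, delete
+ * cmd, error events) to the handlers registered in up_msg_handle[].
+ */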
+void spfc_up_msg2driver_proc(void *hwdev_handle, void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 index = 0;
+ struct spfc_hba_info *hba = NULL;
+ struct spfc_mbox_header *mbx_header = NULL;
+
+ FC_CHECK_RETURN_VOID(hwdev_handle);
+ FC_CHECK_RETURN_VOID(pri_handle);
+ FC_CHECK_RETURN_VOID(buf_in);
+ FC_CHECK_RETURN_VOID(buf_out);
+ FC_CHECK_RETURN_VOID(out_size);
+
+ hba = (struct spfc_hba_info *)pri_handle;
+ if (!hba) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_ERR, "[err]Hba is null");
+ return;
+ }
+
+ mbx_header = (struct spfc_mbox_header *)buf_in;
+ if (mbx_header->cmnd_type != cmd) {
+ *out_size = sizeof(struct spfc_link_event);
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_ERR,
+ "[err]Port(0x%x) cmd(0x%x) is not matched with header cmd type(0x%x)",
+ hba->port_cfg.port_id, cmd, mbx_header->cmnd_type);
+ return;
+ }
+
+ while (index < (sizeof(up_msg_handle) / sizeof(struct spfc_up2drv_msg_handle))) {
+ if (up_msg_handle[index].cmd == cmd &&
+ up_msg_handle[index].spfc_msg_up2driver_handler) {
+ ret = up_msg_handle[index].spfc_msg_up2driver_handler(hba, buf_in);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_ERR,
+ "[warn]Port(0x%x) process up cmd(0x%x) failed",
+ hba->port_cfg.port_id, cmd);
+ }
+ *out_size = sizeof(struct spfc_link_event);
+ return;
+ }
+ index++;
+ }
+
+ *out_size = sizeof(struct spfc_link_event);
+
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_ERR,
+ "[err]Port(0x%x) process up cmd(0x%x) failed",
+ hba->port_cfg.port_id, cmd);
+}
+
+u32 spfc_get_topo_act(void *hba, void *topo_act)
+{
+ struct spfc_hba_info *spfc_hba = hba;
+ enum unf_act_topo *pen_topo_act = topo_act;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(topo_act, UNF_RETURN_ERROR);
+
+ /* Get topo from low_level */
+ *pen_topo_act = spfc_hba->active_topo;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Get active topology: 0x%x", *pen_topo_act);
+
+ return RETURN_OK;
+}
+
+u32 spfc_get_loop_alpa(void *hba, void *alpa)
+{
+ ulong flags = 0;
+ struct spfc_hba_info *spfc_hba = hba;
+ u8 *alpa_temp = alpa;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(alpa, UNF_RETURN_ERROR);
+
+ spin_lock_irqsave(&spfc_hba->hba_lock, flags);
+ *alpa_temp = spfc_hba->active_alpa;
+ spin_unlock_irqrestore(&spfc_hba->hba_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Get active AL_PA(0x%x)", *alpa_temp);
+
+ return RETURN_OK;
+}
+
+static void spfc_get_fabric_login_params(struct spfc_hba_info *hba,
+ struct unf_port_login_parms *params_addr)
+{
+ ulong flag = 0;
+
+ spin_lock_irqsave(&hba->hba_lock, flag);
+ hba->active_topo = params_addr->act_topo;
+ hba->compared_ratov_val = params_addr->compared_ratov_val;
+ hba->compared_edtov_val = params_addr->compared_edtov_val;
+ hba->compared_bb_scn = params_addr->compared_bbscn;
+ hba->remote_edtov_tag = params_addr->remote_edtov_tag;
+ hba->remote_rttov_tag = params_addr->remote_rttov_tag;
+ hba->remote_bb_credit = params_addr->remote_bb_credit;
+ spin_unlock_irqrestore(&hba->hba_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) topo(0x%x) get fabric params: R_A_TOV(0x%x) E_D_TOV(%u) BB_CREDIT(0x%x) BB_SC_N(0x%x)",
+ hba->port_cfg.port_id, hba->active_topo,
+ hba->compared_ratov_val, hba->compared_edtov_val,
+ hba->remote_bb_credit, hba->compared_bb_scn);
+}
+
+static void spfc_get_port_login_params(struct spfc_hba_info *hba,
+ struct unf_port_login_parms *params_addr)
+{
+ ulong flag = 0;
+
+ spin_lock_irqsave(&hba->hba_lock, flag);
+ hba->compared_ratov_val = params_addr->compared_ratov_val;
+ hba->compared_edtov_val = params_addr->compared_edtov_val;
+ hba->compared_bb_scn = params_addr->compared_bbscn;
+ hba->remote_edtov_tag = params_addr->remote_edtov_tag;
+ hba->remote_rttov_tag = params_addr->remote_rttov_tag;
+ hba->remote_bb_credit = params_addr->remote_bb_credit;
+ spin_unlock_irqrestore(&hba->hba_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "Port(0x%x) Topo(0x%x) Get Port Params: R_A_TOV(0x%x), E_D_TOV(0x%x), BB_CREDIT(0x%x), BB_SC_N(0x%x).",
+ hba->port_cfg.port_id, hba->active_topo,
+ hba->compared_ratov_val, hba->compared_edtov_val,
+ hba->remote_bb_credit, hba->compared_bb_scn);
+}
+
+u32 spfc_update_fabric_param(void *hba, void *para_in)
+{
+ u32 ret = RETURN_OK;
+ struct spfc_hba_info *spfc_hba = hba;
+ struct unf_port_login_parms *login_coparms = para_in;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(para_in, UNF_RETURN_ERROR);
+
+ spfc_get_fabric_login_params(spfc_hba, login_coparms);
+
+ if (spfc_hba->active_topo == UNF_ACT_TOP_P2P_FABRIC ||
+ spfc_hba->active_topo == UNF_ACT_TOP_PUBLIC_LOOP) {
+ if (spfc_hba->work_mode == SPFC_SMARTIO_WORK_MODE_FC)
+ ret = spfc_config_login_api(spfc_hba, login_coparms);
+ }
+
+ return ret;
+}
+
+u32 spfc_update_port_param(void *hba, void *para_in)
+{
+ u32 ret = RETURN_OK;
+ struct spfc_hba_info *spfc_hba = hba;
+ struct unf_port_login_parms *login_coparms =
+ (struct unf_port_login_parms *)para_in;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(para_in, UNF_RETURN_ERROR);
+
+ if (spfc_hba->active_topo == UNF_ACT_TOP_PRIVATE_LOOP ||
+ spfc_hba->active_topo == UNF_ACT_TOP_P2P_DIRECT) {
+ spfc_get_port_login_params(spfc_hba, login_coparms);
+ ret = spfc_config_login_api(spfc_hba, login_coparms);
+ }
+
+ spfc_save_login_parms_in_sq_info(spfc_hba, login_coparms);
+
+ return ret;
+}
+
+u32 spfc_get_workable_bb_credit(void *hba, void *bb_credit)
+{
+ u32 *bb_credit_temp = (u32 *)bb_credit;
+ struct spfc_hba_info *spfc_hba = hba;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(bb_credit, UNF_RETURN_ERROR);
+ if (spfc_hba->active_port_speed == UNF_PORT_SPEED_32_G)
+ *bb_credit_temp = SPFC_LOWLEVEL_DEFAULT_32G_BB_CREDIT;
+ else if (spfc_hba->active_port_speed == UNF_PORT_SPEED_16_G)
+ *bb_credit_temp = SPFC_LOWLEVEL_DEFAULT_16G_BB_CREDIT;
+ else
+ *bb_credit_temp = SPFC_LOWLEVEL_DEFAULT_8G_BB_CREDIT;
+
+ return RETURN_OK;
+}
+
+u32 spfc_get_workable_bb_scn(void *hba, void *bb_scn)
+{
+ u32 *bb_scn_temp = (u32 *)bb_scn;
+ struct spfc_hba_info *spfc_hba = (struct spfc_hba_info *)hba;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(bb_scn, UNF_RETURN_ERROR);
+
+ *bb_scn_temp = spfc_hba->port_bb_scn_cfg;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Return BBSCN(0x%x) to CM", *bb_scn_temp);
+
+ return RETURN_OK;
+}
+
+u32 spfc_get_loop_map(void *hba, void *buf)
+{
+ ulong flags = 0;
+ struct unf_buf *buf_temp = (struct unf_buf *)buf;
+ struct spfc_hba_info *spfc_hba = hba;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(buf_temp, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(buf_temp->buf, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(buf_temp->buf_len, UNF_RETURN_ERROR);
+
+ if (buf_temp->buf_len > UNF_LOOPMAP_COUNT)
+ return UNF_RETURN_ERROR;
+
+ spin_lock_irqsave(&spfc_hba->hba_lock, flags);
+ if (spfc_hba->loop_map_valid != LOOP_MAP_VALID) {
+ spin_unlock_irqrestore(&spfc_hba->hba_lock, flags);
+ return UNF_RETURN_ERROR;
+ }
+ memcpy(buf_temp->buf, spfc_hba->loop_map, buf_temp->buf_len);
+ spin_unlock_irqrestore(&spfc_hba->hba_lock, flags);
+
+ return RETURN_OK;
+}
+
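+/* Issue the PORT_RESET mailbox; sub_type selects a light or heavy reset */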
+u32 spfc_mb_reset_chip(struct spfc_hba_info *hba, u8 sub_type)
+{
+ struct spfc_inmbox_port_reset port_reset;
+ union spfc_outmbox_generic *port_reset_sts = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ memset(&port_reset, 0, sizeof(port_reset));
+
+ port_reset_sts = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
+ if (!port_reset_sts) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "malloc outmbox memory failed");
+ return UNF_RETURN_ERROR;
+ }
+ memset(port_reset_sts, 0, sizeof(union spfc_outmbox_generic));
+ port_reset.header.cmnd_type = SPFC_MBOX_PORT_RESET;
+ port_reset.header.length = SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_port_reset));
+ port_reset.op_code = sub_type;
+
+ if (spfc_mb_send_and_wait_mbox(hba, &port_reset, sizeof(port_reset),
+ port_reset_sts) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[warn]Port(0x%x) can't send and wait mailbox with command type(0x%x)",
+ hba->port_cfg.port_id, port_reset.header.cmnd_type);
+
+ goto exit;
+ }
+
+ if (port_reset_sts->port_reset_sts.status != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
+ "[warn]Port(0x%x) receive mailbox type(0x%x) status(0x%x) incorrect",
+ hba->port_cfg.port_id,
+ port_reset_sts->port_reset_sts.header.cmnd_type,
+ port_reset_sts->port_reset_sts.status);
+
+ goto exit;
+ }
+
+ if (port_reset_sts->port_reset_sts.header.cmnd_type != SPFC_MBOX_PORT_RESET_STS) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
+ "[warn]Port(0x%x) recv mailbox type(0x%x) incorrect",
+ hba->port_cfg.port_id,
+ port_reset_sts->port_reset_sts.header.cmnd_type);
+
+ goto exit;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) reset chip mailbox success",
+ hba->port_cfg.port_id);
+
+ ret = RETURN_OK;
+exit:
+ kfree(port_reset_sts);
+
+ return ret;
+}
+
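+/*
+ * Asynchronously tell the firmware that the SQ WQE buffer clear has finished
+ * and advance the queue-set stage to FLUSHDONE.
+ */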
+u32 spfc_clear_sq_wqe_done(struct spfc_hba_info *hba)
+{
+ int ret1 = RETURN_OK;
+ u32 ret2 = RETURN_OK;
+ struct spfc_inmbox_clear_done clear_done;
+
+ clear_done.header.cmnd_type = SPFC_MBOX_BUFFER_CLEAR_DONE;
+ clear_done.header.length = SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_clear_done));
+ clear_done.header.port_id = hba->port_index;
+
+ ret1 = sphw_msg_to_mgmt_async(hba->dev_handle, COMM_MOD_FC,
+ SPFC_MBOX_BUFFER_CLEAR_DONE, &clear_done,
+ sizeof(clear_done), SPHW_CHANNEL_FC);
+
+ if (ret1 != 0) {
+ SPFC_MAILBOX_STAT(hba, SPFC_SEND_CLEAR_DONE_FAIL);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SPFC Port(0x%x) can't send clear done cmd to up, ret:%d",
+ hba->port_cfg.port_id, ret1);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ SPFC_MAILBOX_STAT(hba, SPFC_SEND_CLEAR_DONE);
+ hba->queue_set_stage = SPFC_QUEUE_SET_STAGE_FLUSHDONE;
+ hba->next_clear_sq = 0;
+
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_KEVENT,
+ "[info]Port(0x%x) clear done msg(0x%x) sent to up succeed with stage(0x%x)",
+ hba->port_cfg.port_id, clear_done.header.cmnd_type,
+ hba->queue_set_stage);
+
+ return ret2;
+}
+
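+/* Query the firmware buffer-clear progress through the GET_CLEAR_STATE mailbox */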
+u32 spfc_mbx_get_fw_clear_stat(struct spfc_hba_info *hba, u32 *clear_state)
+{
+ struct spfc_inmbox_get_clear_state get_clr_state;
+ union spfc_outmbox_generic *port_clear_state_sts = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(clear_state, UNF_RETURN_ERROR);
+
+ memset(&get_clr_state, 0, sizeof(get_clr_state));
+
+ port_clear_state_sts = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
+ if (!port_clear_state_sts) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "malloc outmbox memory failed");
+ return UNF_RETURN_ERROR;
+ }
+ memset(port_clear_state_sts, 0, sizeof(union spfc_outmbox_generic));
+
+ get_clr_state.header.cmnd_type = SPFC_MBOX_GET_CLEAR_STATE;
+ get_clr_state.header.length =
+ SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_get_clear_state));
+
+ if (spfc_mb_send_and_wait_mbox(hba, &get_clr_state, sizeof(get_clr_state),
+ port_clear_state_sts) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "spfc can't send and wait mailbox, command type: 0x%x",
+ get_clr_state.header.cmnd_type);
+
+ goto exit;
+ }
+
+ if (port_clear_state_sts->get_clr_state_sts.status != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
+ "Port(0x%x) Receive mailbox type(0x%x) status incorrect. Status: 0x%x, state 0x%x.",
+ hba->port_cfg.port_id,
+ port_clear_state_sts->get_clr_state_sts.header.cmnd_type,
+ port_clear_state_sts->get_clr_state_sts.status,
+ port_clear_state_sts->get_clr_state_sts.state);
+
+ goto exit;
+ }
+
+ if (port_clear_state_sts->get_clr_state_sts.header.cmnd_type !=
+ SPFC_MBOX_GET_CLEAR_STATE_STS) {
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
+ "Port(0x%x) recv mailbox type(0x%x) incorrect.",
+ hba->port_cfg.port_id,
+ port_clear_state_sts->get_clr_state_sts.header.cmnd_type);
+
+ goto exit;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
+ "Port(0x%x) get port clear state 0x%x.",
+ hba->port_cfg.port_id,
+ port_clear_state_sts->get_clr_state_sts.state);
+
+ *clear_state = port_clear_state_sts->get_clr_state_sts.state;
+
+ ret = RETURN_OK;
+exit:
+ kfree(port_clear_state_sts);
+
+ return ret;
+}
+
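+/*
+ * Report the default SQ (CID/XID) to the firmware: flag != 0 registers it at
+ * probe time, flag == 0 clears it at remove time.
+ */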
+u32 spfc_mbx_config_default_session(void *hba, u32 flag)
+{
+ struct spfc_hba_info *spfc_hba = NULL;
+ struct spfc_inmbox_default_sq_info default_sq_info;
+ union spfc_outmbox_generic default_sq_info_sts;
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ spfc_hba = (struct spfc_hba_info *)hba;
+
+ memset(&default_sq_info, 0, sizeof(struct spfc_inmbox_default_sq_info));
+ memset(&default_sq_info_sts, 0, sizeof(union spfc_outmbox_generic));
+
+ default_sq_info.header.cmnd_type = SPFC_MBOX_SEND_DEFAULT_SQ_INFO;
+ default_sq_info.header.length =
+ SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_default_sq_info));
+ default_sq_info.func_id = sphw_global_func_id(spfc_hba->dev_handle);
+
+	/* When flag is 1 (probe), set the default SQ info; when flag is 0
+	 * (remove), clear it
+	 */
+ if (flag) {
+ default_sq_info.sq_cid = spfc_hba->default_sq_info.sq_cid;
+ default_sq_info.sq_xid = spfc_hba->default_sq_info.sq_xid;
+ default_sq_info.valid = 1;
+ }
+
+ ret =
+ spfc_mb_send_and_wait_mbox(spfc_hba, &default_sq_info, sizeof(default_sq_info),
+ (union spfc_outmbox_generic *)(void *)&default_sq_info_sts);
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "spfc can't send and wait mailbox, command type: 0x%x.",
+ default_sq_info.header.cmnd_type);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (default_sq_info_sts.default_sq_sts.status != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Port(0x%x) mailbox status incorrect status(0x%x) .",
+ spfc_hba->port_cfg.port_id,
+ default_sq_info_sts.default_sq_sts.status);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (SPFC_MBOX_SEND_DEFAULT_SQ_INFO_STS !=
+ default_sq_info_sts.default_sq_sts.header.cmnd_type) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Port(0x%x) receive mailbox type incorrect type: 0x%x.",
+ spfc_hba->port_cfg.port_id,
+ default_sq_info_sts.default_sq_sts.header.cmnd_type);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
diff --git a/drivers/scsi/spfc/hw/spfc_chipitf.h b/drivers/scsi/spfc/hw/spfc_chipitf.h
new file mode 100644
index 000000000000..acd770514edf
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_chipitf.h
@@ -0,0 +1,797 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_CHIPITF_H
+#define SPFC_CHIPITF_H
+
+#include "unf_type.h"
+#include "unf_log.h"
+#include "spfc_utils.h"
+#include "spfc_module.h"
+
+#include "spfc_service.h"
+
+/* CONF_API_CMND */
+#define SPFC_MBOX_CONFIG_API (0x00)
+#define SPFC_MBOX_CONFIG_API_STS (0xA0)
+
+/* GET_CHIP_INFO_API_CMD */
+#define SPFC_MBOX_GET_CHIP_INFO (0x01)
+#define SPFC_MBOX_GET_CHIP_INFO_STS (0xA1)
+
+/* PORT_RESET */
+#define SPFC_MBOX_PORT_RESET (0x02)
+#define SPFC_MBOX_PORT_RESET_STS (0xA2)
+
+/* SFP_SWITCH_API_CMND */
+#define SPFC_MBOX_PORT_SWITCH (0x03)
+#define SPFC_MBOX_PORT_SWITCH_STS (0xA3)
+
+/* CONF_AF_LOGIN_API_CMND */
+#define SPFC_MBOX_CONFIG_LOGIN_API (0x06)
+#define SPFC_MBOX_CONFIG_LOGIN_API_STS (0xA6)
+
+/* BUFFER_CLEAR_DONE_CMND */
+#define SPFC_MBOX_BUFFER_CLEAR_DONE (0x07)
+#define SPFC_MBOX_BUFFER_CLEAR_DONE_STS (0xA7)
+
+#define SPFC_MBOX_GET_UP_STATE (0x09)
+#define SPFC_MBOX_GET_UP_STATE_STS (0xA9)
+
+/* GET CLEAR DONE STATE */
+#define SPFC_MBOX_GET_CLEAR_STATE (0x0E)
+#define SPFC_MBOX_GET_CLEAR_STATE_STS (0xAE)
+
+/* CONFIG TIMER */
+#define SPFC_MBOX_CONFIG_TIMER (0x10)
+#define SPFC_MBOX_CONFIG_TIMER_STS (0xB0)
+
+/* Led Test */
+#define SPFC_MBOX_LED_TEST (0x12)
+#define SPFC_MBOX_LED_TEST_STS (0xB2)
+
+/* set esch */
+#define SPFC_MBOX_SET_ESCH (0x13)
+#define SPFC_MBOX_SET_ESCH_STS (0xB3)
+
+/* set get tx serdes */
+#define SPFC_MBOX_SET_GET_SERDES_TX (0x14)
+#define SPFC_MBOX_SET_GET_SERDES_TX_STS (0xB4)
+
+/* get rx serdes */
+#define SPFC_MBOX_GET_SERDES_RX (0x15)
+#define SPFC_MBOX_GET_SERDES_RX_STS (0xB5)
+
+/* i2c read write */
+#define SPFC_MBOX_I2C_WR_RD (0x16)
+#define SPFC_MBOX_I2C_WR_RD_STS (0xB6)
+
+/* GET UCODE STATS CMD */
+#define SPFC_MBOX_GET_UCODE_STAT (0x18)
+#define SPFC_MBOX_GET_UCODE_STAT_STS (0xB8)
+
+/* gpio read write */
+#define SPFC_MBOX_GPIO_WR_RD (0x19)
+#define SPFC_MBOX_GPIO_WR_RD_STS (0xB9)
+
+#define SPFC_MBOX_SEND_DEFAULT_SQ_INFO (0x26)
+#define SPFC_MBOX_SEND_DEFAULT_SQ_INFO_STS (0xc6)
+
+/* FC: DRV->UP */
+#define SPFC_MBOX_SEND_ELS_CMD (0x2A)
+#define SPFC_MBOX_SEND_VPORT_INFO (0x2B)
+
+/* FC: UP->DRV */
+#define SPFC_MBOX_RECV_FC_LINKUP (0x40)
+#define SPFC_MBOX_RECV_FC_LINKDOWN (0x41)
+#define SPFC_MBOX_RECV_FC_DELCMD (0x42)
+#define SPFC_MBOX_RECV_FC_ERROR (0x43)
+
+#define LOOP_MAP_VALID (1)
+#define LOOP_MAP_INVALID (0)
+
+#define SPFC_MBOX_SIZE (1024)
+#define SPFC_MBOX_HEADER_SIZE (4)
+
+#define UNDEFINEOPCODE (0)
+
+#define VALUEMASK_L 0x00000000FFFFFFFF
+#define VALUEMASK_H 0xFFFFFFFF00000000
+
+#define STATUS_OK (0)
+#define STATUS_FAIL (1)
+
+enum spfc_drv2up_unblock_msg_cmd_code {
+ SPFC_SEND_ELS_CMD,
+ SPFC_SEND_ELS_CMD_FAIL,
+ SPFC_RCV_ELS_CMD_RSP,
+ SPFC_SEND_CONFIG_LOGINAPI,
+ SPFC_SEND_CONFIG_LOGINAPI_FAIL,
+ SPFC_RCV_CONFIG_LOGIN_API_RSP,
+ SPFC_SEND_CLEAR_DONE,
+ SPFC_SEND_CLEAR_DONE_FAIL,
+ SPFC_RCV_CLEAR_DONE_RSP,
+ SPFC_SEND_VPORT_INFO_DONE,
+ SPFC_SEND_VPORT_INFO_FAIL,
+ SPFC_SEND_VPORT_INFO_RSP,
+ SPFC_MBOX_CMD_BUTT
+};
+
+/* up to driver cmd code */
+enum spfc_up2drv_msg_cmd_code {
+ SPFC_UP2DRV_MSG_CMD_LINKUP = 0x1,
+ SPFC_UP2DRV_MSG_CMD_LINKDOWN = 0x2,
+ SPFC_UP2DRV_MSG_CMD_BUTT
+};
+
+/* up to driver handle template */
+struct spfc_up2drv_msg_handle {
+ u8 cmd;
+ u32 (*spfc_msg_up2driver_handler)(struct spfc_hba_info *hba, void *buf_in);
+};
+
+/* tile to driver cmd code */
+enum spfc_tile2drv_msg_cmd_code {
+ SPFC_TILE2DRV_MSG_CMD_SCAN_DONE,
+ SPFC_TILE2DRV_MSG_CMD_FLUSH_DONE,
+ SPFC_TILE2DRV_MSG_CMD_BUTT
+};
+
+/* tile to driver handle template */
+struct spfc_tile2drv_msg_handle {
+ u8 cmd;
+ u32 (*spfc_msg_tile2driver_handler)(struct spfc_hba_info *hba, u8 cmd, u64 val);
+};
+
+/* Mbox Common Header */
+struct spfc_mbox_header {
+ u8 cmnd_type;
+ u8 length;
+ u8 port_id;
+ u8 reserved;
+};
+
+/* open or close the sfp */
+struct spfc_inmbox_port_switch {
+ struct spfc_mbox_header header;
+ u32 op_code : 8;
+ u32 rsvd0 : 24;
+ u32 rsvd1[6];
+};
+
+struct spfc_inmbox_send_vport_info {
+ struct spfc_mbox_header header;
+
+ u64 sys_port_wwn;
+ u64 sys_node_name;
+
+ u32 nport_id : 24;
+ u32 vpi : 8;
+};
+
+struct spfc_outmbox_port_switch_sts {
+ struct spfc_mbox_header header;
+
+ u16 reserved1;
+ u8 reserved2;
+ u8 status;
+};
+
+/* config API */
+struct spfc_inmbox_config_api {
+ struct spfc_mbox_header header;
+
+ u32 op_code : 8;
+ u32 reserved1 : 24;
+
+ u8 topy_mode;
+ u8 sfp_speed;
+ u8 max_speed;
+ u8 hard_alpa;
+
+ u8 port_name[UNF_WWN_LEN];
+
+ u32 slave : 1;
+ u32 auto_sneg : 1;
+ u32 reserved2 : 30;
+
+ u32 rx_6432g_bb_credit : 16; /* 160 */
+ u32 rx_16g_bb_credit : 16; /* 80 */
+ u32 rx_84g_bb_credit : 16; /* 50 */
+ u32 rdy_cnt_bf_fst_frm : 16; /* 8 */
+
+ u32 esch_32g_value;
+ u32 esch_16g_value;
+ u32 esch_8g_value;
+ u32 esch_4g_value;
+ u32 esch_64g_value;
+ u32 esch_bust_size;
+};
+
+struct spfc_outmbox_config_api_sts {
+ struct spfc_mbox_header header;
+ u16 reserved1;
+ u8 reserved2;
+ u8 status;
+};
+
+/* Get chip info */
+struct spfc_inmbox_get_chip_info {
+ struct spfc_mbox_header header;
+};
+
+struct spfc_outmbox_get_chip_info_sts {
+ struct spfc_mbox_header header;
+ u8 status;
+ u8 board_type;
+ u8 rvsd0[2];
+ u64 wwpn;
+ u64 wwnn;
+ u64 rsvd1;
+};
+
+/* Get reg info */
+struct spfc_inmbox_get_reg_info {
+ struct spfc_mbox_header header;
+ u32 op_code : 1;
+ u32 reg_len : 8;
+ u32 rsvd1 : 23;
+ u32 reg_addr;
+ u32 reg_value_l32;
+ u32 reg_value_h32;
+ u32 rsvd2[27];
+};
+
+/* Get reg info sts */
+struct spfc_outmbox_get_reg_info_sts {
+ struct spfc_mbox_header header;
+
+ u16 rsvd0;
+ u8 rsvd1;
+ u8 status;
+ u32 reg_value_l32;
+ u32 reg_value_h32;
+ u32 rsvd2[28];
+};
+
+/* Config login API */
+struct spfc_inmbox_config_login {
+ struct spfc_mbox_header header;
+
+ u32 op_code : 8;
+ u32 reserved1 : 24;
+
+ u16 tx_bb_credit;
+ u16 reserved2;
+
+ u32 rtov;
+ u32 etov;
+
+ u32 rt_tov_tag : 1;
+ u32 ed_tov_tag : 1;
+ u32 bb_credit : 6;
+ u32 bb_scn : 8;
+ u32 lr_flag : 16;
+};
+
+struct spfc_outmbox_config_login_sts {
+ struct spfc_mbox_header header;
+
+ u16 reserved1;
+ u8 reserved2;
+ u8 status;
+};
+
+/* port reset */
+#define SPFC_MBOX_SUBTYPE_LIGHT_RESET (0x0)
+#define SPFC_MBOX_SUBTYPE_HEAVY_RESET (0x1)
+
+struct spfc_inmbox_port_reset {
+ struct spfc_mbox_header header;
+
+ u32 op_code : 8;
+ u32 reserved : 24;
+};
+
+struct spfc_outmbox_port_reset_sts {
+ struct spfc_mbox_header header;
+
+ u16 reserved1;
+ u8 reserved2;
+ u8 status;
+};
+
+/* led test */
+struct spfc_inmbox_led_test {
+ struct spfc_mbox_header header;
+
+	/* 0->act type;1->low speed;2->high speed */
+ u8 led_type;
+	/* 0:twinkle;1:light on;2:light off;0xff:default */
+ u8 led_mode;
+ u8 resvd[ARRAY_INDEX_2];
+};
+
+struct spfc_outmbox_led_test_sts {
+ struct spfc_mbox_header header;
+
+ u16 rsvd1;
+ u8 rsvd2;
+ u8 status;
+};
+
+/* set esch */
+struct spfc_inmbox_set_esch {
+ struct spfc_mbox_header header;
+
+ u32 esch_value;
+ u32 esch_bust_size;
+};
+
+struct spfc_outmbox_set_esch_sts {
+ struct spfc_mbox_header header;
+
+ u16 rsvd1;
+ u8 rsvd2;
+ u8 status;
+};
+
+struct spfc_inmbox_set_serdes_tx {
+ struct spfc_mbox_header header;
+
+ u8 swing; /* amplitude setting */
+ char serdes_pre1; /* pre1 setting */
+ char serdes_pre2; /* pre2 setting */
+ char serdes_post; /* post setting */
+ u8 serdes_main; /* main setting */
+ u8 op_code; /* opcode,0:setting;1:read */
+ u8 rsvd[ARRAY_INDEX_2];
+};
+
+struct spfc_outmbox_set_serdes_tx_sts {
+ struct spfc_mbox_header header;
+ u16 rvsd0;
+ u8 rvsd1;
+ u8 status;
+ u8 swing;
+ char serdes_pre1;
+ char serdes_pre2;
+ char serdes_post;
+ u8 serdes_main;
+ u8 rsvd2[ARRAY_INDEX_3];
+};
+
+struct spfc_inmbox_i2c_wr_rd {
+ struct spfc_mbox_header header;
+ u8 op_code; /* 0 write, 1 read */
+ u8 rsvd[ARRAY_INDEX_3];
+
+ u32 dev_addr;
+ u32 offset;
+ u32 wr_data;
+};
+
+struct spfc_outmbox_i2c_wr_rd_sts {
+ struct spfc_mbox_header header;
+ u8 status;
+ u8 resvd[ARRAY_INDEX_3];
+
+ u32 rd_data;
+};
+
+struct spfc_inmbox_gpio_wr_rd {
+ struct spfc_mbox_header header;
+ u8 op_code; /* 0 write,1 read */
+ u8 rsvd[ARRAY_INDEX_3];
+
+ u32 pin;
+ u32 wr_data;
+};
+
+struct spfc_outmbox_gpio_wr_rd_sts {
+ struct spfc_mbox_header header;
+ u8 status;
+ u8 resvd[ARRAY_INDEX_3];
+
+ u32 rd_data;
+};
+
+struct spfc_inmbox_get_serdes_rx {
+ struct spfc_mbox_header header;
+
+ u8 op_code;
+ u8 h16_macro;
+ u8 h16_lane;
+ u8 rsvd;
+};
+
+struct spfc_inmbox_get_serdes_rx_sts {
+ struct spfc_mbox_header header;
+ u16 rvsd0;
+ u8 rvsd1;
+ u8 status;
+ int left_eye;
+ int right_eye;
+ int low_eye;
+ int high_eye;
+};
+
+struct spfc_ser_op_m_l {
+ u8 op_code;
+ u8 h16_macro;
+ u8 h16_lane;
+ u8 rsvd;
+};
+
+/* get sfp info */
+#define SPFC_MBOX_GET_SFP_INFO_MB_LENGTH 1
+#define OFFSET_TWO_DWORD 2
+#define OFFSET_ONE_DWORD 1
+
+struct spfc_inmbox_get_sfp_info {
+ struct spfc_mbox_header header;
+};
+
+struct spfc_outmbox_get_sfp_info_sts {
+ struct spfc_mbox_header header;
+
+ u32 rcvd : 8;
+ u32 length : 16;
+ u32 status : 8;
+};
+
+/* get ucode stats */
+#define SPFC_UCODE_STAT_NUM 64
+
+struct spfc_outmbox_get_ucode_stat {
+ struct spfc_mbox_header header;
+};
+
+struct spfc_outmbox_get_ucode_stat_sts {
+ struct spfc_mbox_header header;
+
+ u16 rsvd;
+ u8 rsvd2;
+ u8 status;
+
+ u32 ucode_stat[SPFC_UCODE_STAT_NUM];
+};
+
+/* uP-->Driver async event API */
+struct spfc_link_event {
+ struct spfc_mbox_header header;
+
+ u8 link_event;
+ u8 reason;
+ u8 speed;
+ u8 top_type;
+
+ u8 alpa_value;
+ u8 reserved1;
+ u16 paticpate : 1;
+ u16 ac_led : 1;
+ u16 yellow_speed_led : 1;
+ u16 green_speed_led : 1;
+ u16 reserved2 : 12;
+
+ u8 loop_map_info[128];
+};
+
+enum spfc_up_err_type {
+ SPFC_UP_ERR_DRV_PARA = 0,
+ SPFC_UP_ERR_SFP = 1,
+ SPFC_UP_ERR_32G_PUB = 2,
+ SPFC_UP_ERR_32G_UA = 3,
+ SPFC_UP_ERR_32G_MAC = 4,
+ SPFC_UP_ERR_NON32G_DFX = 5,
+ SPFC_UP_ERR_NON32G_MAC = 6,
+ SPFC_UP_ERR_BUTT
+
+};
+
+enum spfc_up_err_value {
+ /* ERR type 0 */
+ SPFC_DRV_2_UP_PARA_ERR = 0,
+
+ /* ERR type 1 */
+ SPFC_SFP_SPEED_ERR,
+
+ /* ERR type 2 */
+ SPFC_32GPUB_UA_RXESCH_FIFO_OF,
+ SPFC_32GPUB_UA_RXESCH_FIFO_UCERR,
+
+ /* ERR type 3 */
+ SPFC_32G_UA_UATX_LEN_ABN,
+ SPFC_32G_UA_RXAFIFO_OF,
+ SPFC_32G_UA_TXAFIFO_OF,
+ SPFC_32G_UA_RXAFIFO_UCERR,
+ SPFC_32G_UA_TXAFIFO_UCERR,
+
+ /* ERR type 4 */
+ SPFC_32G_MAC_RX_BBC_FATAL,
+ SPFC_32G_MAC_TX_BBC_FATAL,
+ SPFC_32G_MAC_TXFIFO_UF,
+ SPFC_32G_MAC_PCS_TXFIFO_UF,
+ SPFC_32G_MAC_RXBBC_CRDT_TO,
+ SPFC_32G_MAC_PCS_RXAFIFO_OF,
+ SPFC_32G_MAC_PCS_TXFIFO_OF,
+ SPFC_32G_MAC_FC2P_RXFIFO_OF,
+ SPFC_32G_MAC_FC2P_TXFIFO_OF,
+ SPFC_32G_MAC_FC2P_CAFIFO_OF,
+ SPFC_32G_MAC_PCS_RXRSFECM_UCEER,
+ SPFC_32G_MAC_PCS_RXAFIFO_UCEER,
+ SPFC_32G_MAC_PCS_TXFIFO_UCEER,
+ SPFC_32G_MAC_FC2P_RXFIFO_UCEER,
+ SPFC_32G_MAC_FC2P_TXFIFO_UCEER,
+
+ /* ERR type 5 */
+ SPFC_NON32G_DFX_FC1_DFX_BF_FIFO,
+ SPFC_NON32G_DFX_FC1_DFX_BP_FIFO,
+ SPFC_NON32G_DFX_FC1_DFX_RX_AFIFO_ERR,
+ SPFC_NON32G_DFX_FC1_DFX_TX_AFIFO_ERR,
+ SPFC_NON32G_DFX_FC1_DFX_DIRQ_RXBUF_FIFO1,
+ SPFC_NON32G_DFX_FC1_DFX_DIRQ_RXBBC_TO,
+ SPFC_NON32G_DFX_FC1_DFX_DIRQ_TXDAT_FIFO,
+ SPFC_NON32G_DFX_FC1_DFX_DIRQ_TXCMD_FIFO,
+ SPFC_NON32G_DFX_FC1_ERR_R_RDY,
+
+ /* ERR type 6 */
+ SPFC_NON32G_MAC_FC1_FAIRNESS_ERROR,
+
+ SPFC_ERR_VALUE_BUTT
+
+};
+
+struct spfc_up_error_event {
+ struct spfc_mbox_header header;
+
+ u8 link_event;
+ u8 error_level;
+ u8 error_type;
+ u8 error_value;
+};
+
+struct spfc_inmbox_clear_done {
+ struct spfc_mbox_header header;
+};
+
+/* receive els cmd */
+struct spfc_inmbox_rcv_els {
+ struct spfc_mbox_header header;
+ u16 pkt_type;
+ u16 pkt_len;
+ u8 frame[ARRAY_INDEX_0];
+};
+
+/* FCF event type */
+enum spfc_fcf_event_type {
+ SPFC_FCF_SELECTED = 0,
+ SPFC_FCF_DEAD,
+ SPFC_FCF_CLEAR_VLINK,
+ SPFC_FCF_CLEAR_VLINK_APPOINTED
+};
+
+struct spfc_nport_id_info {
+ u32 nport_id : 24;
+ u32 vp_index : 8;
+};
+
+struct spfc_inmbox_fcf_event {
+ struct spfc_mbox_header header;
+
+ u8 fcf_map[ARRAY_INDEX_3];
+ u8 event_type;
+
+ u8 fcf_mac_h4[ARRAY_INDEX_4];
+
+ u16 vlan_info;
+ u8 fcf_mac_l2[ARRAY_INDEX_2];
+
+ struct spfc_nport_id_info nport_id_info[UNF_SPFC_MAXNPIV_NUM + 1];
+};
+
+/* send els cmd */
+struct spfc_inmbox_send_els {
+ struct spfc_mbox_header header;
+
+ u8 oper_code;
+ u8 rsvd[ARRAY_INDEX_3];
+
+ u8 resvd;
+ u8 els_cmd_type;
+ u16 pkt_len;
+
+ u8 fcf_mac_h4[ARRAY_INDEX_4];
+
+ u16 vlan_info;
+ u8 fcf_mac_l2[ARRAY_INDEX_2];
+
+ u8 fc_frame[SPFC_FC_HEAD_LEN + UNF_FLOGI_PAYLOAD_LEN];
+};
+
+struct spfc_inmbox_send_els_sts {
+ struct spfc_mbox_header header;
+
+ u16 rx_id;
+ u16 err_code;
+
+ u16 ox_id;
+ u16 rsvd;
+};
+
+struct spfc_inmbox_get_clear_state {
+ struct spfc_mbox_header header;
+ u32 resvd[31];
+};
+
+struct spfc_outmbox_get_clear_state_sts {
+ struct spfc_mbox_header header;
+ u16 rsvd1;
+ u8 state; /* 1: clear in progress, 0: clear done */
+ u8 status; /* 0: ok, non-zero: fail */
+ u32 rsvd2[30];
+};
+
+#define SPFC_FIP_MODE_VN2VF (0)
+#define SPFC_FIP_MODE_VN2VN (1)
+
+/* get up state */
+struct spfc_inmbox_get_up_state {
+ struct spfc_mbox_header header;
+
+ u64 cur_jiff_time;
+};
+
+/* get port state */
+struct spfc_inmbox_get_port_info {
+ struct spfc_mbox_header header;
+};
+
+struct spfc_outmbox_get_up_state_sts {
+ struct spfc_mbox_header header;
+
+ u8 status;
+ u8 rsv0;
+ u16 rsv1;
+ struct unf_port_dynamic_info dymic_info;
+};
+
+struct spfc_outmbox_get_port_info_sts {
+ struct spfc_mbox_header header;
+
+ u32 status : 8;
+ u32 fe_16g_cvis_tts : 8;
+ u32 bb_scn : 8;
+ u32 loop_credit : 8;
+
+ u32 non_loop_rx_credit : 8;
+ u32 non_loop_tx_credit : 8;
+ u32 sfp_speed : 8;
+ u32 present : 8;
+};
+
+struct spfc_inmbox_config_timer {
+ struct spfc_mbox_header header;
+
+ u16 op_code;
+ u16 fun_id;
+ u32 user_data;
+};
+
+struct spfc_inmbox_config_srqc {
+ struct spfc_mbox_header header;
+
+ u16 valid;
+ u16 fun_id;
+ u32 srqc_gpa_hi;
+ u32 srqc_gpa_lo;
+};
+
+struct spfc_outmbox_config_timer_sts {
+ struct spfc_mbox_header header;
+
+ u8 status;
+ u8 rsv[ARRAY_INDEX_3];
+};
+
+struct spfc_outmbox_config_srqc_sts {
+ struct spfc_mbox_header header;
+
+ u8 status;
+ u8 rsv[ARRAY_INDEX_3];
+};
+
+struct spfc_inmbox_default_sq_info {
+ struct spfc_mbox_header header;
+ u32 sq_cid;
+ u32 sq_xid;
+ u16 func_id;
+ u16 valid;
+};
+
+struct spfc_outmbox_default_sq_info_sts {
+ struct spfc_mbox_header header;
+ u8 status;
+ u8 rsv[ARRAY_INDEX_3];
+};
+
+/* Generic Inmailbox and Outmailbox */
+union spfc_inmbox_generic {
+ struct {
+ struct spfc_mbox_header header;
+ u32 rsvd[(SPFC_MBOX_SIZE - SPFC_MBOX_HEADER_SIZE) / sizeof(u32)];
+ } generic;
+
+ struct spfc_inmbox_port_switch port_switch;
+ struct spfc_inmbox_config_api config_api;
+ struct spfc_inmbox_get_chip_info get_chip_info;
+ struct spfc_inmbox_config_login config_login;
+ struct spfc_inmbox_port_reset port_reset;
+ struct spfc_inmbox_set_esch esch_set;
+ struct spfc_inmbox_led_test led_test;
+ struct spfc_inmbox_get_sfp_info get_sfp_info;
+ struct spfc_inmbox_clear_done clear_done;
+ struct spfc_outmbox_get_ucode_stat get_ucode_stat;
+ struct spfc_inmbox_get_clear_state get_clr_state;
+ struct spfc_inmbox_send_vport_info send_vport_info;
+ struct spfc_inmbox_get_up_state get_up_state;
+ struct spfc_inmbox_config_timer timer_config;
+ struct spfc_inmbox_config_srqc config_srqc;
+ struct spfc_inmbox_get_port_info get_port_info;
+};
+
+union spfc_outmbox_generic {
+ struct {
+ struct spfc_mbox_header header;
+ u32 rsvd[(SPFC_MBOX_SIZE - SPFC_MBOX_HEADER_SIZE) / sizeof(u32)];
+ } generic;
+
+ struct spfc_outmbox_port_switch_sts port_switch_sts;
+ struct spfc_outmbox_config_api_sts config_api_sts;
+ struct spfc_outmbox_get_chip_info_sts get_chip_info_sts;
+ struct spfc_outmbox_get_reg_info_sts get_reg_info_sts;
+ struct spfc_outmbox_config_login_sts config_login_sts;
+ struct spfc_outmbox_port_reset_sts port_reset_sts;
+ struct spfc_outmbox_led_test_sts led_test_sts;
+ struct spfc_outmbox_set_esch_sts esch_set_sts;
+ struct spfc_inmbox_get_serdes_rx_sts serdes_rx_get_sts;
+ struct spfc_outmbox_set_serdes_tx_sts serdes_tx_set_sts;
+ struct spfc_outmbox_i2c_wr_rd_sts i2c_wr_rd_sts;
+ struct spfc_outmbox_gpio_wr_rd_sts gpio_wr_rd_sts;
+ struct spfc_outmbox_get_sfp_info_sts get_sfp_info_sts;
+ struct spfc_outmbox_get_ucode_stat_sts get_ucode_stat_sts;
+ struct spfc_outmbox_get_clear_state_sts get_clr_state_sts;
+ struct spfc_outmbox_get_up_state_sts get_up_state_sts;
+ struct spfc_outmbox_config_timer_sts timer_config_sts;
+ struct spfc_outmbox_config_srqc_sts config_srqc_sts;
+ struct spfc_outmbox_get_port_info_sts get_port_info_sts;
+ struct spfc_outmbox_default_sq_info_sts default_sq_sts;
+};
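+
+/* Note on the two generic unions above (a sketch of the sizing intent,
+ * assuming illustrative values SPFC_MBOX_SIZE == 2048 and
+ * SPFC_MBOX_HEADER_SIZE == 4, which are not confirmed here): the rsvd[]
+ * array in the anonymous "generic" member spans the space left after the
+ * header, (SPFC_MBOX_SIZE - SPFC_MBOX_HEADER_SIZE) bytes expressed as u32
+ * words, e.g. (2048 - 4) / 4 = 511 words, so the union is always padded
+ * to the full mailbox size regardless of which command member is used.
+ */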
+
+u32 spfc_get_chip_msg(void *hba, void *mac);
+u32 spfc_config_port_table(struct spfc_hba_info *hba);
+u32 spfc_port_switch(struct spfc_hba_info *hba, bool turn_on);
+u32 spfc_get_loop_map(void *hba, void *buf);
+u32 spfc_get_workable_bb_credit(void *hba, void *bb_credit);
+u32 spfc_get_workable_bb_scn(void *hba, void *bb_scn);
+u32 spfc_get_port_current_info(void *hba, void *port_info);
+u32 spfc_get_port_fec(void *hba, void *para_out);
+
+u32 spfc_get_loop_alpa(void *hba, void *alpa);
+u32 spfc_get_topo_act(void *hba, void *topo_act);
+u32 spfc_config_login_api(struct spfc_hba_info *hba, struct unf_port_login_parms *login_parms);
+u32 spfc_mb_send_and_wait_mbox(struct spfc_hba_info *hba, const void *in_mbox, u16 in_size,
+ union spfc_outmbox_generic *out_mbox);
+void spfc_up_msg2driver_proc(void *hwdev_handle, void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out, u16 *out_size);
+
+u32 spfc_mb_reset_chip(struct spfc_hba_info *hba, u8 sub_type);
+u32 spfc_clear_sq_wqe_done(struct spfc_hba_info *hba);
+u32 spfc_update_fabric_param(void *hba, void *para_in);
+u32 spfc_update_port_param(void *hba, void *para_in);
+u32 spfc_update_fdisc_param(void *hba, void *vport_info);
+u32 spfc_mbx_get_fw_clear_stat(struct spfc_hba_info *hba, u32 *clear_state);
+u32 spfc_get_chip_capability(void *hwdev_handle, struct spfc_chip_info *chip_info);
+u32 spfc_mbx_config_default_session(void *hba, u32 flag);
+
+#endif
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c b/drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c
new file mode 100644
index 000000000000..30fb56a9bfed
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c
@@ -0,0 +1,1646 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/mm.h>
+#include <linux/device.h>
+#include <linux/gfp.h>
+#include "sphw_crm.h"
+#include "sphw_hw.h"
+#include "sphw_hwdev.h"
+#include "sphw_hwif.h"
+
+#include "spfc_cqm_object.h"
+#include "spfc_cqm_bitmap_table.h"
+#include "spfc_cqm_bat_cla.h"
+#include "spfc_cqm_main.h"
+
+static unsigned char cqm_ver = 8;
+module_param(cqm_ver, byte, 0644);
+MODULE_PARM_DESC(cqm_ver, "for cqm version control (default=8)");
+
+static void
+cqm_bat_fill_cla_common_gpa(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ struct cqm_bat_entry_standerd *bat_entry_standerd)
+{
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+ struct sphw_func_attr *func_attr = NULL;
+ struct cqm_bat_entry_vf2pf gpa = {0};
+ u32 cla_gpa_h = 0;
+ dma_addr_t pa;
+
+ if (cla_table->cla_lvl == CQM_CLA_LVL_0)
+ pa = cla_table->cla_z_buf.buf_list[0].pa;
+ else if (cla_table->cla_lvl == CQM_CLA_LVL_1)
+ pa = cla_table->cla_y_buf.buf_list[0].pa;
+ else
+ pa = cla_table->cla_x_buf.buf_list[0].pa;
+
+ gpa.cla_gpa_h = CQM_ADDR_HI(pa) & CQM_CHIP_GPA_HIMASK;
+
+ /* On the SPU, the value of spu_en in the GPA address
+ * in the BAT is determined by the host ID and fun IDx.
+ */
+ if (sphw_host_id(cqm_handle->ex_handle) == CQM_SPU_HOST_ID) {
+ func_attr = &cqm_handle->func_attribute;
+ gpa.acs_spu_en = func_attr->func_global_idx & 0x1;
+ } else {
+ gpa.acs_spu_en = 0;
+ }
+
+ /* In fake mode, fake_vf_en in the GPA address of the BAT
+ * must be set to 1.
+ */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD) {
+ gpa.fake_vf_en = 1;
+ func_attr = &cqm_handle->parent_cqm_handle->func_attribute;
+ gpa.pf_id = func_attr->func_global_idx;
+ } else {
+ gpa.fake_vf_en = 0;
+ }
+
+ memcpy(&cla_gpa_h, &gpa, sizeof(u32));
+ bat_entry_standerd->cla_gpa_h = cla_gpa_h;
+
+ /* GPA is valid when gpa[0] = 1.
+ * CQM_BAT_ENTRY_T_REORDER does not support GPA validity check.
+ */
+ if (cla_table->type == CQM_BAT_ENTRY_T_REORDER)
+ bat_entry_standerd->cla_gpa_l = CQM_ADDR_LW(pa);
+ else
+ bat_entry_standerd->cla_gpa_l = CQM_ADDR_LW(pa) | gpa_check_enable;
+}
+
+static void cqm_bat_fill_cla_common(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ u8 *entry_base_addr)
+{
+ struct cqm_bat_entry_standerd *bat_entry_standerd = NULL;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ u32 cache_line = 0;
+
+ if (cla_table->type == CQM_BAT_ENTRY_T_TIMER && cqm_ver == 8)
+ cache_line = CQM_CHIP_TIMER_CACHELINE;
+ else
+ cache_line = CQM_CHIP_CACHELINE;
+
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_num=0, don't init bat entry\n",
+ cla_table->type);
+ return;
+ }
+
+ bat_entry_standerd = (struct cqm_bat_entry_standerd *)entry_base_addr;
+
+ /* The QPC value is 256/512/1024 and the timer value is 512.
+ * The other cacheline value is 256B.
+ * The conversion operation is performed inside the chip.
+ */
+ if (cla_table->obj_size > cache_line) {
+ if (cla_table->obj_size == CQM_OBJECT_512)
+ bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_512;
+ else
+ bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_1024;
+ bat_entry_standerd->max_number = cla_table->max_buffer_size / cla_table->obj_size;
+ } else {
+ if (cache_line == CQM_CHIP_CACHELINE) {
+ bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_256;
+ bat_entry_standerd->max_number = cla_table->max_buffer_size / cache_line;
+ } else {
+ bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_512;
+ bat_entry_standerd->max_number = cla_table->max_buffer_size / cache_line;
+ }
+ }
+
+ bat_entry_standerd->max_number = bat_entry_standerd->max_number - 1;
+
+ bat_entry_standerd->bypass = CQM_BAT_NO_BYPASS_CACHE;
+ bat_entry_standerd->z = cla_table->cacheline_z;
+ bat_entry_standerd->y = cla_table->cacheline_y;
+ bat_entry_standerd->x = cla_table->cacheline_x;
+ bat_entry_standerd->cla_level = cla_table->cla_lvl;
+
+ cqm_bat_fill_cla_common_gpa(cqm_handle, cla_table, bat_entry_standerd);
+}
+
+static void cqm_bat_fill_cla_cfg(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct cqm_bat_entry_cfg *bat_entry_cfg = NULL;
+
+ bat_entry_cfg = (struct cqm_bat_entry_cfg *)(*entry_base_addr);
+ bat_entry_cfg->cur_conn_cache = 0;
+ bat_entry_cfg->max_conn_cache =
+ func_cap->flow_table_based_conn_cache_number;
+ bat_entry_cfg->cur_conn_num_h_4 = 0;
+ bat_entry_cfg->cur_conn_num_l_16 = 0;
+ bat_entry_cfg->max_conn_num = func_cap->flow_table_based_conn_number;
+
+ /* The bucket number is expressed in units of 64 buckets (a right shift
+ * by 6 bits). The field is at most 16 bits wide, so up to 4M buckets
+ * can be supported. The stored value is the unit count minus 1, so it
+ * can be ANDed with the hash value as a mask.
+ */
+ if ((func_cap->hash_number >> CQM_HASH_NUMBER_UNIT) != 0) {
+ bat_entry_cfg->bucket_num = ((func_cap->hash_number >>
+ CQM_HASH_NUMBER_UNIT) - 1);
+ }
+ if (func_cap->bloomfilter_length != 0) {
+ bat_entry_cfg->bloom_filter_len = func_cap->bloomfilter_length -
+ 1;
+ bat_entry_cfg->bloom_filter_addr = func_cap->bloomfilter_addr;
+ }
+
+ (*entry_base_addr) += sizeof(struct cqm_bat_entry_cfg);
+}
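+
+/* Worked example for the bucket_num encoding above (illustrative only,
+ * assuming CQM_HASH_NUMBER_UNIT == 6 as suggested by the comment): with
+ * func_cap->hash_number == 0x10000 (64K hash entries),
+ * bucket_num = (0x10000 >> 6) - 1 = 1023, which can then be used directly
+ * as a mask: bucket = hash_value & bucket_num.
+ */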
+
+static void cqm_bat_fill_cla_other(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ cqm_bat_fill_cla_common(cqm_handle, cla_table, *entry_base_addr);
+
+ (*entry_base_addr) += sizeof(struct cqm_bat_entry_standerd);
+}
+
+static void cqm_bat_fill_cla_taskmap(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ struct cqm_bat_entry_taskmap *bat_entry_taskmap = NULL;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ int i;
+
+ if (cqm_handle->func_capability.taskmap_number != 0) {
+ bat_entry_taskmap =
+ (struct cqm_bat_entry_taskmap *)(*entry_base_addr);
+ for (i = 0; i < CQM_BAT_ENTRY_TASKMAP_NUM; i++) {
+ bat_entry_taskmap->addr[i].gpa_h =
+ (u32)(cla_table->cla_z_buf.buf_list[i].pa >>
+ CQM_CHIP_GPA_HSHIFT);
+ bat_entry_taskmap->addr[i].gpa_l =
+ (u32)(cla_table->cla_z_buf.buf_list[i].pa &
+ CQM_CHIP_GPA_LOMASK);
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: taskmap bat entry: 0x%x 0x%x\n",
+ bat_entry_taskmap->addr[i].gpa_h,
+ bat_entry_taskmap->addr[i].gpa_l);
+ }
+ }
+
+ (*entry_base_addr) += sizeof(struct cqm_bat_entry_taskmap);
+}
+
+static void cqm_bat_fill_cla_timer(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ /* Only the PPF allocates timer resources. */
+ if (cqm_handle->func_attribute.func_type != CQM_PPF) {
+ (*entry_base_addr) += CQM_BAT_ENTRY_SIZE;
+ } else {
+ cqm_bat_fill_cla_common(cqm_handle, cla_table, *entry_base_addr);
+
+ (*entry_base_addr) += sizeof(struct cqm_bat_entry_standerd);
+ }
+}
+
+static void cqm_bat_fill_cla_invalid(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ u8 **entry_base_addr)
+{
+ (*entry_base_addr) += CQM_BAT_ENTRY_SIZE;
+}
+
+static void cqm_bat_fill_cla(struct cqm_handle *cqm_handle)
+{
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table *cla_table = NULL;
+ u32 entry_type = CQM_BAT_ENTRY_T_INVALID;
+ u8 *entry_base_addr = NULL;
+ u32 i = 0;
+
+ /* Fills each item in the BAT table according to the BAT format. */
+ entry_base_addr = bat_table->bat;
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ entry_type = bat_table->bat_entry_type[i];
+ cla_table = &bat_table->entry[i];
+
+ if (entry_type == CQM_BAT_ENTRY_T_CFG)
+ cqm_bat_fill_cla_cfg(cqm_handle, cla_table, &entry_base_addr);
+ else if (entry_type == CQM_BAT_ENTRY_T_TASKMAP)
+ cqm_bat_fill_cla_taskmap(cqm_handle, cla_table, &entry_base_addr);
+ else if (entry_type == CQM_BAT_ENTRY_T_INVALID)
+ cqm_bat_fill_cla_invalid(cqm_handle, cla_table, &entry_base_addr);
+ else if (entry_type == CQM_BAT_ENTRY_T_TIMER)
+ cqm_bat_fill_cla_timer(cqm_handle, cla_table, &entry_base_addr);
+ else
+ cqm_bat_fill_cla_other(cqm_handle, cla_table, &entry_base_addr);
+
+ /* Check whether entry_base_addr has run past the end of the BAT array. */
+ if (entry_base_addr >= (bat_table->bat + CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE))
+ break;
+ }
+}
+
+u32 cqm_funcid2smfid(struct cqm_handle *cqm_handle)
+{
+ u32 funcid = 0;
+ u32 smf_sel = 0;
+ u32 smf_id = 0;
+ u32 smf_pg_partial = 0;
+ /* SMF_Selection is selected based on
+ * the lower two bits of the function id
+ */
+ u32 lbf_smfsel[4] = {0, 2, 1, 3};
+ /* SMFID is selected based on SMF_PG[1:0] and SMF_Selection(0-1) */
+ u32 smfsel_smfid01[4][2] = { {0, 0}, {0, 0}, {1, 1}, {0, 1} };
+ /* SMFID is selected based on SMF_PG[3:2] and SMF_Selection(2-4) */
+ u32 smfsel_smfid23[4][2] = { {2, 2}, {2, 2}, {3, 3}, {2, 3} };
+
+ /* When the LB mode is disabled, SMF0 is always returned. */
+ if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL) {
+ smf_id = 0;
+ } else {
+ funcid = cqm_handle->func_attribute.func_global_idx & 0x3;
+ smf_sel = lbf_smfsel[funcid];
+
+ if (smf_sel < 2) {
+ smf_pg_partial = cqm_handle->func_capability.smf_pg & 0x3;
+ smf_id = smfsel_smfid01[smf_pg_partial][smf_sel];
+ } else {
+ smf_pg_partial = (cqm_handle->func_capability.smf_pg >> 2) & 0x3;
+ smf_id = smfsel_smfid23[smf_pg_partial][smf_sel - 2];
+ }
+ }
+
+ return smf_id;
+}
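+
+/* Illustrative walk-through of cqm_funcid2smfid() (example values only):
+ * with lb_mode != CQM_LB_MODE_NORMAL, func_global_idx == 6 and
+ * smf_pg == 0x2, funcid = 6 & 0x3 = 2 and smf_sel = lbf_smfsel[2] = 1;
+ * since smf_sel < 2, smf_pg_partial = 0x2 & 0x3 = 2 and
+ * smf_id = smfsel_smfid01[2][1] = 1.
+ */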
+
+/* This function is used in LB mode 1/2. The timer entry, which has an
+ * independent address space per SMF, needs to be configured for all 4 SMFs.
+ */
+static void cqm_update_timer_gpa(struct cqm_handle *cqm_handle, u32 smf_id)
+{
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table *cla_table = NULL;
+ u32 entry_type = CQM_BAT_ENTRY_T_INVALID;
+ u8 *entry_base_addr = NULL;
+ u32 i = 0;
+
+ if (cqm_handle->func_attribute.func_type != CQM_PPF)
+ return;
+
+ if (cqm_handle->func_capability.lb_mode != CQM_LB_MODE_1 &&
+ cqm_handle->func_capability.lb_mode != CQM_LB_MODE_2)
+ return;
+
+ cla_table = &bat_table->timer_entry[smf_id];
+ entry_base_addr = bat_table->bat;
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ entry_type = bat_table->bat_entry_type[i];
+
+ if (entry_type == CQM_BAT_ENTRY_T_TIMER) {
+ cqm_bat_fill_cla_timer(cqm_handle, cla_table, &entry_base_addr);
+ break;
+ }
+
+ if (entry_type == CQM_BAT_ENTRY_T_TASKMAP)
+ entry_base_addr += sizeof(struct cqm_bat_entry_taskmap);
+ else
+ entry_base_addr += CQM_BAT_ENTRY_SIZE;
+
+ /* Check whether entry_base_addr has run past the end of the BAT array. */
+ if (entry_base_addr >=
+ (bat_table->bat + CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE))
+ break;
+ }
+}
+
+static s32 cqm_bat_update_cmd(struct cqm_handle *cqm_handle, struct cqm_cmd_buf *buf_in,
+ u32 smf_id, u32 func_id)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_cmdq_bat_update *bat_update_cmd = NULL;
+ s32 ret = CQM_FAIL;
+
+ bat_update_cmd = (struct cqm_cmdq_bat_update *)(buf_in->buf);
+ bat_update_cmd->offset = 0;
+
+ if (cqm_handle->bat_table.bat_size > CQM_BAT_MAX_SIZE) {
+ cqm_err(handle->dev_hdl,
+ "bat_size = %u, which is more than %d.\n",
+ cqm_handle->bat_table.bat_size, CQM_BAT_MAX_SIZE);
+ return CQM_FAIL;
+ }
+ bat_update_cmd->byte_len = cqm_handle->bat_table.bat_size;
+
+ memcpy(bat_update_cmd->data, cqm_handle->bat_table.bat, bat_update_cmd->byte_len);
+
+ bat_update_cmd->smf_id = smf_id;
+ bat_update_cmd->func_id = func_id;
+
+ cqm_info(handle->dev_hdl, "Bat update: smf_id=%u\n", bat_update_cmd->smf_id);
+ cqm_info(handle->dev_hdl, "Bat update: func_id=%u\n", bat_update_cmd->func_id);
+
+ cqm_swab32((u8 *)bat_update_cmd, sizeof(struct cqm_cmdq_bat_update) >> CQM_DW_SHIFT);
+
+ ret = cqm3_send_cmd_box((void *)(cqm_handle->ex_handle), CQM_MOD_CQM,
+ CQM_CMD_T_BAT_UPDATE, buf_in, NULL, NULL,
+ CQM_CMD_TIMEOUT, SPHW_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm3_send_cmd_box));
+ cqm_err(handle->dev_hdl, "%s: send_cmd_box ret=%d\n", __func__,
+ ret);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_bat_update(struct cqm_handle *cqm_handle)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_cmd_buf *buf_in = NULL;
+ s32 ret = CQM_FAIL;
+ u32 smf_id = 0;
+ u32 func_id = 0;
+ u32 i = 0;
+
+ buf_in = cqm3_cmd_alloc((void *)(cqm_handle->ex_handle));
+ CQM_PTR_CHECK_RET(buf_in, CQM_FAIL, CQM_ALLOC_FAIL(buf_in));
+ buf_in->size = sizeof(struct cqm_cmdq_bat_update);
+
+ /* In non-fake mode, func_id is set to 0xffff, indicating the current func.
+ * In fake mode, the value of func_id is specified. This is a fake func_id.
+ */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD)
+ func_id = cqm_handle->func_attribute.func_global_idx;
+ else
+ func_id = 0xffff;
+
+ /* The LB scenario is supported.
+ * Normal mode is the traditional mode and is configured on SMF0 only.
+ * In mode 0, load is balanced across the four SMFs based on the func ID
+ * (except for the PPF func ID). The PPF in mode 0 must be configured on
+ * all four SMFs so that the timer resources can be shared by the four
+ * timer engines. Modes 1/2 balance load across the four SMFs by flow,
+ * so one function needs to be configured on all four SMFs.
+ */
+ if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type != CQM_PPF)) {
+ smf_id = cqm_funcid2smfid(cqm_handle);
+ ret = cqm_bat_update_cmd(cqm_handle, buf_in, smf_id, func_id);
+ } else if ((cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1) ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2) ||
+ ((cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0) &&
+ (cqm_handle->func_attribute.func_type == CQM_PPF))) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+ cqm_update_timer_gpa(cqm_handle, i);
+
+ /* The smf_pg variable stores the currently enabled SMF. */
+ if (cqm_handle->func_capability.smf_pg & (1U << i)) {
+ smf_id = i;
+ ret = cqm_bat_update_cmd(cqm_handle, buf_in, smf_id, func_id);
+ if (ret != CQM_SUCCESS)
+ goto out;
+ }
+ }
+ } else {
+ cqm_err(handle->dev_hdl, "Bat update: unsupport lb mode=%u\n",
+ cqm_handle->func_capability.lb_mode);
+ ret = CQM_FAIL;
+ }
+
+out:
+ cqm3_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return ret;
+}
+
+s32 cqm_bat_init_ft(struct cqm_handle *cqm_handle, struct cqm_bat_table *bat_table,
+ enum func_type function_type)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ u32 i = 0;
+
+ bat_table->bat_entry_type[CQM_BAT_INDEX0] = CQM_BAT_ENTRY_T_CFG;
+ bat_table->bat_entry_type[CQM_BAT_INDEX1] = CQM_BAT_ENTRY_T_HASH;
+ bat_table->bat_entry_type[CQM_BAT_INDEX2] = CQM_BAT_ENTRY_T_QPC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX3] = CQM_BAT_ENTRY_T_SCQC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX4] = CQM_BAT_ENTRY_T_LUN;
+ bat_table->bat_entry_type[CQM_BAT_INDEX5] = CQM_BAT_ENTRY_T_TASKMAP;
+
+ if (function_type == CQM_PF || function_type == CQM_PPF) {
+ bat_table->bat_entry_type[CQM_BAT_INDEX6] = CQM_BAT_ENTRY_T_L3I;
+ bat_table->bat_entry_type[CQM_BAT_INDEX7] = CQM_BAT_ENTRY_T_CHILDC;
+ bat_table->bat_entry_type[CQM_BAT_INDEX8] = CQM_BAT_ENTRY_T_TIMER;
+ bat_table->bat_entry_type[CQM_BAT_INDEX9] = CQM_BAT_ENTRY_T_XID2CID;
+ bat_table->bat_entry_type[CQM_BAT_INDEX10] = CQM_BAT_ENTRY_T_REORDER;
+ bat_table->bat_size = CQM_BAT_SIZE_FT_PF;
+ } else if (function_type == CQM_VF) {
+ bat_table->bat_size = CQM_BAT_SIZE_FT_VF;
+ } else {
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
+ bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
+
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(function_type));
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_bat_init(struct cqm_handle *cqm_handle)
+{
+ struct cqm_func_capability *capability = &cqm_handle->func_capability;
+ enum func_type function_type = cqm_handle->func_attribute.func_type;
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ u32 i;
+
+ memset(bat_table, 0, sizeof(struct cqm_bat_table));
+
+ /* Initialize the type of each bat entry. */
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
+ bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
+
+ /* Select BATs based on service types. Currently,
+ * feature-related resources of the VF are stored in the BATs of the VF.
+ */
+ if (capability->ft_enable)
+ return cqm_bat_init_ft(cqm_handle, bat_table, function_type);
+
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(capability->ft_enable));
+
+ return CQM_FAIL;
+}
+
+void cqm_bat_uninit(struct cqm_handle *cqm_handle)
+{
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
+ bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
+
+ memset(bat_table->bat, 0, CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE);
+
+ /* Instruct the chip to update the BAT table. */
+ if (cqm_bat_update(cqm_handle) != CQM_SUCCESS)
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bat_update));
+}
+
+s32 cqm_cla_fill_buf(struct cqm_handle *cqm_handle, struct cqm_buf *cla_base_buf,
+ struct cqm_buf *cla_sub_buf, u8 gpa_check_enable)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct sphw_func_attr *func_attr = NULL;
+ dma_addr_t *base = NULL;
+ u64 fake_en = 0;
+ u64 spu_en = 0;
+ u64 pf_id = 0;
+ u32 i = 0;
+ u32 addr_num;
+ u32 buf_index = 0;
+
+ /* Apply for space for base_buf */
+ if (!cla_base_buf->buf_list) {
+ if (cqm_buf_alloc(cqm_handle, cla_base_buf, false) ==
+ CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(cla_base_buf));
+ return CQM_FAIL;
+ }
+ }
+
+ /* Apply for space for sub_buf */
+ if (!cla_sub_buf->buf_list) {
+ if (cqm_buf_alloc(cqm_handle, cla_sub_buf, false) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(cla_sub_buf));
+ cqm_buf_free(cla_base_buf, cqm_handle->dev);
+ return CQM_FAIL;
+ }
+ }
+
+ /* Fill base_buff with the gpa of sub_buf */
+ addr_num = cla_base_buf->buf_size / sizeof(dma_addr_t);
+ base = (dma_addr_t *)(cla_base_buf->buf_list[0].va);
+ for (i = 0; i < cla_sub_buf->buf_number; i++) {
+ /* The SPU SMF supports load balancing from the SMF to the CPI,
+ * depending on the host ID and func ID.
+ */
+ if (sphw_host_id(cqm_handle->ex_handle) == CQM_SPU_HOST_ID) {
+ func_attr = &cqm_handle->func_attribute;
+ spu_en = (u64)(func_attr->func_global_idx & 0x1) << 63;
+ } else {
+ spu_en = 0;
+ }
+
+ /* fake enable */
+ if (cqm_handle->func_capability.fake_func_type ==
+ CQM_FAKE_FUNC_CHILD) {
+ fake_en = 1ULL << 62;
+ func_attr =
+ &cqm_handle->parent_cqm_handle->func_attribute;
+ pf_id = func_attr->func_global_idx;
+ pf_id = (pf_id & 0x1f) << 57;
+ } else {
+ fake_en = 0;
+ pf_id = 0;
+ }
+
+ *base = (((((cla_sub_buf->buf_list[i].pa & CQM_CHIP_GPA_MASK) |
+ spu_en) |
+ fake_en) |
+ pf_id) |
+ gpa_check_enable);
+
+ cqm_swab64((u8 *)base, 1);
+ if ((i + 1) % addr_num == 0) {
+ buf_index++;
+ if (buf_index < cla_base_buf->buf_number)
+ base = cla_base_buf->buf_list[buf_index].va;
+ } else {
+ base++;
+ }
+ }
+
+ return CQM_SUCCESS;
+}
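+
+/* Bit layout of each parent-level entry written by cqm_cla_fill_buf(),
+ * as assembled above: bit 63 = acs_spu_en (SPU host only), bit 62 =
+ * fake enable (fake child functions only), bits 61..57 = parent pf_id,
+ * bit 0 = gpa_check_enable, and the remaining bits carry the child
+ * buffer GPA masked by CQM_CHIP_GPA_MASK. The 64-bit value is then
+ * byte-swapped by cqm_swab64() into the byte order the chip expects.
+ */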
+
+s32 cqm_cla_xyz_lvl1(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ u32 trunk_size)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf *cla_y_buf = NULL;
+ struct cqm_buf *cla_z_buf = NULL;
+ s32 shift = 0;
+ s32 ret = CQM_FAIL;
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+ u32 cache_line = 0;
+
+ if (cla_table->type == CQM_BAT_ENTRY_T_TIMER && cqm_ver == 8)
+ cache_line = CQM_CHIP_TIMER_CACHELINE;
+ else
+ cache_line = CQM_CHIP_CACHELINE;
+
+ if (cla_table->type == CQM_BAT_ENTRY_T_REORDER)
+ gpa_check_enable = 0;
+
+ cla_table->cla_lvl = CQM_CLA_LVL_1;
+
+ shift = cqm_shift(trunk_size / cla_table->obj_size);
+ cla_table->z = shift ? (shift - 1) : (shift);
+ cla_table->y = CQM_MAX_INDEX_BIT;
+ cla_table->x = 0;
+
+ if (cla_table->obj_size >= cache_line) {
+ cla_table->cacheline_z = cla_table->z;
+ cla_table->cacheline_y = cla_table->y;
+ cla_table->cacheline_x = cla_table->x;
+ } else {
+ shift = cqm_shift(trunk_size / cache_line);
+ cla_table->cacheline_z = shift ? (shift - 1) : (shift);
+ cla_table->cacheline_y = CQM_MAX_INDEX_BIT;
+ cla_table->cacheline_x = 0;
+ }
+
+ /* Applying for CLA_Y_BUF Space */
+ cla_y_buf = &cla_table->cla_y_buf;
+ cla_y_buf->buf_size = trunk_size;
+ cla_y_buf->buf_number = 1;
+ cla_y_buf->page_number = cla_y_buf->buf_number <<
+ cla_table->trunk_order;
+ ret = cqm_buf_alloc(cqm_handle, cla_y_buf, false);
+ CQM_CHECK_EQUAL_RET(handle->dev_hdl, ret, CQM_SUCCESS, CQM_FAIL,
+ CQM_ALLOC_FAIL(lvl_1_y_buf));
+
+ /* Applying for CLA_Z_BUF Space */
+ cla_z_buf = &cla_table->cla_z_buf;
+ cla_z_buf->buf_size = trunk_size;
+ cla_z_buf->buf_number =
+ (ALIGN(cla_table->max_buffer_size, trunk_size)) / trunk_size;
+ cla_z_buf->page_number = cla_z_buf->buf_number <<
+ cla_table->trunk_order;
+ /* All buffer space must be statically allocated. */
+ if (cla_table->alloc_static) {
+ ret = cqm_cla_fill_buf(cqm_handle, cla_y_buf, cla_z_buf,
+ gpa_check_enable);
+ CQM_CHECK_EQUAL_RET(handle->dev_hdl, ret, CQM_SUCCESS, CQM_FAIL,
+ CQM_FUNCTION_FAIL(cqm_cla_fill_buf));
+ } else { /* Only the buffer list space is initialized. The buffer space
+ * is dynamically allocated in services.
+ */
+ cla_z_buf->buf_list = vmalloc(cla_z_buf->buf_number *
+ sizeof(struct cqm_buf_list));
+ if (!cla_z_buf->buf_list) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_1_z_buf));
+ cqm_buf_free(cla_y_buf, cqm_handle->dev);
+ return CQM_FAIL;
+ }
+ memset(cla_z_buf->buf_list, 0,
+ cla_z_buf->buf_number * sizeof(struct cqm_buf_list));
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_cla_xyz_lvl2(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ u32 trunk_size)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf *cla_x_buf = NULL;
+ struct cqm_buf *cla_y_buf = NULL;
+ struct cqm_buf *cla_z_buf = NULL;
+ s32 shift = 0;
+ s32 ret = CQM_FAIL;
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+ u32 cache_line = 0;
+
+ if (cla_table->type == CQM_BAT_ENTRY_T_TIMER && cqm_ver == 8)
+ cache_line = CQM_CHIP_TIMER_CACHELINE;
+ else
+ cache_line = CQM_CHIP_CACHELINE;
+
+ if (cla_table->type == CQM_BAT_ENTRY_T_REORDER)
+ gpa_check_enable = 0;
+
+ cla_table->cla_lvl = CQM_CLA_LVL_2;
+
+ shift = cqm_shift(trunk_size / cla_table->obj_size);
+ cla_table->z = shift ? (shift - 1) : (shift);
+ shift = cqm_shift(trunk_size / sizeof(dma_addr_t));
+ cla_table->y = cla_table->z + shift;
+ cla_table->x = CQM_MAX_INDEX_BIT;
+
+ if (cla_table->obj_size >= cache_line) {
+ cla_table->cacheline_z = cla_table->z;
+ cla_table->cacheline_y = cla_table->y;
+ cla_table->cacheline_x = cla_table->x;
+ } else {
+ shift = cqm_shift(trunk_size / cache_line);
+ cla_table->cacheline_z = shift ? (shift - 1) : (shift);
+ shift = cqm_shift(trunk_size / sizeof(dma_addr_t));
+ cla_table->cacheline_y = cla_table->cacheline_z + shift;
+ cla_table->cacheline_x = CQM_MAX_INDEX_BIT;
+ }
+
+ /* Apply for CLA_X_BUF Space */
+ cla_x_buf = &cla_table->cla_x_buf;
+ cla_x_buf->buf_size = trunk_size;
+ cla_x_buf->buf_number = 1;
+ cla_x_buf->page_number = cla_x_buf->buf_number <<
+ cla_table->trunk_order;
+ ret = cqm_buf_alloc(cqm_handle, cla_x_buf, false);
+ CQM_CHECK_EQUAL_RET(handle->dev_hdl, ret, CQM_SUCCESS, CQM_FAIL,
+ CQM_ALLOC_FAIL(lvl_2_x_buf));
+
+ /* Apply for CLA_Z_BUF and CLA_Y_BUF Space */
+ cla_z_buf = &cla_table->cla_z_buf;
+ cla_z_buf->buf_size = trunk_size;
+ cla_z_buf->buf_number =
+ (ALIGN(cla_table->max_buffer_size, trunk_size)) / trunk_size;
+ cla_z_buf->page_number = cla_z_buf->buf_number <<
+ cla_table->trunk_order;
+
+ cla_y_buf = &cla_table->cla_y_buf;
+ cla_y_buf->buf_size = trunk_size;
+ cla_y_buf->buf_number =
+ (ALIGN(cla_z_buf->buf_number * sizeof(dma_addr_t), trunk_size)) /
+ trunk_size;
+ cla_y_buf->page_number = cla_y_buf->buf_number <<
+ cla_table->trunk_order;
+ /* All buffer space must be statically allocated. */
+ if (cla_table->alloc_static) {
+ /* Apply for y buf and z buf, and fill the gpa of
+ * z buf list in y buf
+ */
+ if (cqm_cla_fill_buf(cqm_handle, cla_y_buf, cla_z_buf,
+ gpa_check_enable) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_fill_buf));
+ cqm_buf_free(cla_x_buf, cqm_handle->dev);
+ return CQM_FAIL;
+ }
+
+ /* Fill the gpa of the y buf list into the x buf.
+ * Once the x and y bufs have been allocated, this call cannot
+ * fail, so its return value is deliberately cast to void.
+ */
+ (void)cqm_cla_fill_buf(cqm_handle, cla_x_buf, cla_y_buf,
+ gpa_check_enable);
+ } else { /* Only the buffer list space is initialized. The buffer space
+ * is dynamically allocated in services.
+ */
+ cla_z_buf->buf_list = vmalloc(cla_z_buf->buf_number *
+ sizeof(struct cqm_buf_list));
+ if (!cla_z_buf->buf_list) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_2_z_buf));
+ cqm_buf_free(cla_x_buf, cqm_handle->dev);
+ return CQM_FAIL;
+ }
+ memset(cla_z_buf->buf_list, 0,
+ cla_z_buf->buf_number * sizeof(struct cqm_buf_list));
+
+ cla_y_buf->buf_list = vmalloc(cla_y_buf->buf_number *
+ sizeof(struct cqm_buf_list));
+ if (!cla_y_buf->buf_list) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_2_y_buf));
+ cqm_buf_free(cla_z_buf, cqm_handle->dev);
+ cqm_buf_free(cla_x_buf, cqm_handle->dev);
+ return CQM_FAIL;
+ }
+ memset(cla_y_buf->buf_list, 0,
+ cla_y_buf->buf_number * sizeof(struct cqm_buf_list));
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_cla_xyz_check(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ u32 *size)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ u32 trunk_size = 0;
+
+ /* If the capability(obj_num) is set to 0, the CLA does not need to be
+ * initialized and exits directly.
+ */
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_num=0, don't alloc buffer\n",
+ cla_table->type);
+ return CQM_SUCCESS;
+ }
+
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_num=0x%x, gpa_check_enable=%d\n",
+ cla_table->type, cla_table->obj_num,
+ cqm_handle->func_capability.gpa_check_enable);
+
+ /* Check whether obj_size is 2^n-aligned. An error is reported when
+ * obj_size is 0 or 1.
+ */
+ if (!cqm_check_align(cla_table->obj_size)) {
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_size 0x%x is not align on 2^n\n",
+ cla_table->type, cla_table->obj_size);
+ return CQM_FAIL;
+ }
+
+ trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
+
+ if (trunk_size < cla_table->obj_size) {
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: cla type %u, obj_size 0x%x is out of trunk size\n",
+ cla_table->type, cla_table->obj_size);
+ return CQM_FAIL;
+ }
+
+ *size = trunk_size;
+
+ return CQM_CONTINUE;
+}
+
+s32 cqm_cla_xyz(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf *cla_z_buf = NULL;
+ u32 trunk_size = 0;
+ s32 ret = CQM_FAIL;
+
+ ret = cqm_cla_xyz_check(cqm_handle, cla_table, &trunk_size);
+ if (ret != CQM_CONTINUE)
+ return ret;
+
+ /* Level-0 CLA occupies a small space.
+ * Only CLA_Z_BUF can be allocated during initialization.
+ */
+ if (cla_table->max_buffer_size <= trunk_size) {
+ cla_table->cla_lvl = CQM_CLA_LVL_0;
+
+ cla_table->z = CQM_MAX_INDEX_BIT;
+ cla_table->y = 0;
+ cla_table->x = 0;
+
+ cla_table->cacheline_z = cla_table->z;
+ cla_table->cacheline_y = cla_table->y;
+ cla_table->cacheline_x = cla_table->x;
+
+ /* Applying for CLA_Z_BUF Space */
+ cla_z_buf = &cla_table->cla_z_buf;
+ cla_z_buf->buf_size = trunk_size; /* = PAGE_SIZE << trunk_order */
+ cla_z_buf->buf_number = 1;
+ cla_z_buf->page_number = cla_z_buf->buf_number << cla_table->trunk_order;
+ ret = cqm_buf_alloc(cqm_handle, cla_z_buf, false);
+ CQM_CHECK_EQUAL_RET(handle->dev_hdl, ret, CQM_SUCCESS, CQM_FAIL,
+ CQM_ALLOC_FAIL(lvl_0_z_buf));
+ }
+ /* Level-1 CLA
+ * Allocates CLA_Y_BUF and CLA_Z_BUF during initialization.
+ */
+ else if (cla_table->max_buffer_size <= (trunk_size * (trunk_size / sizeof(dma_addr_t)))) {
+ if (cqm_cla_xyz_lvl1(cqm_handle, cla_table, trunk_size) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_xyz_lvl1));
+ return CQM_FAIL;
+ }
+ }
+ /* Level-2 CLA
+ * Allocates CLA_X_BUF, CLA_Y_BUF, and CLA_Z_BUF during initialization.
+ */
+ else if (cla_table->max_buffer_size <=
+ (trunk_size * (trunk_size / sizeof(dma_addr_t)) *
+ (trunk_size / sizeof(dma_addr_t)))) {
+ if (cqm_cla_xyz_lvl2(cqm_handle, cla_table, trunk_size) ==
+ CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_xyz_lvl2));
+ return CQM_FAIL;
+ }
+ } else { /* The current memory management mode cannot address such a
+ * large buffer; the trunk order value needs to be increased.
+ */
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: cla max_buffer_size 0x%x exceeds support range\n",
+ cla_table->max_buffer_size);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
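+
+/* Capacity thresholds used by cqm_cla_xyz() for level selection, with an
+ * illustrative example assuming trunk_order == 0, 4 KB pages and 8-byte
+ * dma_addr_t (none of which are guaranteed here): level 0 covers up to
+ * trunk_size = 4 KB of objects, level 1 up to
+ * trunk_size * (trunk_size / 8) = 2 MB, and level 2 up to
+ * trunk_size * (trunk_size / 8)^2 = 1 GB; anything larger is rejected.
+ */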
+
+void cqm_cla_init_entry_normal(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ struct cqm_func_capability *capability)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+
+ switch (cla_table->type) {
+ case CQM_BAT_ENTRY_T_HASH:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->hash_number * capability->hash_basic_size;
+ cla_table->obj_size = capability->hash_basic_size;
+ cla_table->obj_num = capability->hash_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_QPC:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->qpc_number * capability->qpc_basic_size;
+ cla_table->obj_size = capability->qpc_basic_size;
+ cla_table->obj_num = capability->qpc_number;
+ cla_table->alloc_static = capability->qpc_alloc_static;
+ cqm_info(handle->dev_hdl, "Cla alloc: qpc alloc_static=%d\n",
+ cla_table->alloc_static);
+ break;
+ case CQM_BAT_ENTRY_T_MPT:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->mpt_number * capability->mpt_basic_size;
+ cla_table->obj_size = capability->mpt_basic_size;
+ cla_table->obj_num = capability->mpt_number;
+ /* Per CCB decision, MPT is used only in static allocation scenarios. */
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_SCQC:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->scqc_number * capability->scqc_basic_size;
+ cla_table->obj_size = capability->scqc_basic_size;
+ cla_table->obj_num = capability->scqc_number;
+ cla_table->alloc_static = capability->scqc_alloc_static;
+ cqm_info(handle->dev_hdl, "Cla alloc: scqc alloc_static=%d\n",
+ cla_table->alloc_static);
+ break;
+ case CQM_BAT_ENTRY_T_SRQC:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->srqc_number * capability->srqc_basic_size;
+ cla_table->obj_size = capability->srqc_basic_size;
+ cla_table->obj_num = capability->srqc_number;
+ cla_table->alloc_static = false;
+ break;
+ default:
+ break;
+ }
+}
+
+void cqm_cla_init_entry_extern(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ struct cqm_func_capability *capability)
+{
+ switch (cla_table->type) {
+ case CQM_BAT_ENTRY_T_GID:
+ /* Level-0 CLA table required */
+ cla_table->max_buffer_size = capability->gid_number * capability->gid_basic_size;
+ cla_table->trunk_order =
+ (u32)cqm_shift(ALIGN(cla_table->max_buffer_size, PAGE_SIZE) / PAGE_SIZE);
+ cla_table->obj_size = capability->gid_basic_size;
+ cla_table->obj_num = capability->gid_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_LUN:
+ cla_table->trunk_order = CLA_TABLE_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->lun_number * capability->lun_basic_size;
+ cla_table->obj_size = capability->lun_basic_size;
+ cla_table->obj_num = capability->lun_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_TASKMAP:
+ cla_table->trunk_order = CQM_4K_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->taskmap_number *
+ capability->taskmap_basic_size;
+ cla_table->obj_size = capability->taskmap_basic_size;
+ cla_table->obj_num = capability->taskmap_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_L3I:
+ cla_table->trunk_order = CLA_TABLE_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->l3i_number * capability->l3i_basic_size;
+ cla_table->obj_size = capability->l3i_basic_size;
+ cla_table->obj_num = capability->l3i_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_CHILDC:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->childc_number *
+ capability->childc_basic_size;
+ cla_table->obj_size = capability->childc_basic_size;
+ cla_table->obj_num = capability->childc_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_TIMER:
+ /* Ensure that the basic size of the timer buffer page does not
+ * exceed 128 x 4 KB. Otherwise, clearing the timer buffer of
+ * the function is complex.
+ */
+ cla_table->trunk_order = CQM_4K_PAGE_ORDER;
+ cla_table->max_buffer_size = capability->timer_number *
+ capability->timer_basic_size;
+ cla_table->obj_size = capability->timer_basic_size;
+ cla_table->obj_num = capability->timer_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_XID2CID:
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->xid2cid_number *
+ capability->xid2cid_basic_size;
+ cla_table->obj_size = capability->xid2cid_basic_size;
+ cla_table->obj_num = capability->xid2cid_number;
+ cla_table->alloc_static = true;
+ break;
+ case CQM_BAT_ENTRY_T_REORDER:
+ /* This entry supports only IWARP and does not support GPA validity check. */
+ cla_table->trunk_order = capability->pagesize_reorder;
+ cla_table->max_buffer_size = capability->reorder_number *
+ capability->reorder_basic_size;
+ cla_table->obj_size = capability->reorder_basic_size;
+ cla_table->obj_num = capability->reorder_number;
+ cla_table->alloc_static = true;
+ break;
+ default:
+ break;
+ }
+}
+
+s32 cqm_cla_init_entry_condition(struct cqm_handle *cqm_handle, u32 entry_type)
+{
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table *cla_table = &bat_table->entry[entry_type];
+ struct cqm_cla_table *cla_table_timer = NULL;
+ u32 i;
+
+ /* When the timer is in LB mode 1 or 2, the timer needs to be
+ * configured for four SMFs and the address space is independent.
+ */
+ if (cla_table->type == CQM_BAT_ENTRY_T_TIMER &&
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2)) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+ cla_table_timer = &bat_table->timer_entry[i];
+ memcpy(cla_table_timer, cla_table, sizeof(struct cqm_cla_table));
+
+ if (cqm_cla_xyz(cqm_handle, cla_table_timer) == CQM_FAIL) {
+ cqm_cla_uninit(cqm_handle, entry_type);
+ return CQM_FAIL;
+ }
+ }
+ }
+
+ if (cqm_cla_xyz(cqm_handle, cla_table) == CQM_FAIL) {
+ cqm_cla_uninit(cqm_handle, entry_type);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_cla_init_entry(struct cqm_handle *cqm_handle,
+ struct cqm_func_capability *capability)
+{
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table *cla_table = NULL;
+ s32 ret;
+ u32 i = 0;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ cla_table->type = bat_table->bat_entry_type[i];
+
+ cqm_cla_init_entry_normal(cqm_handle, cla_table, capability);
+ cqm_cla_init_entry_extern(cqm_handle, cla_table, capability);
+
+ /* Allocate CLA entry space at each level. */
+ if (cla_table->type < CQM_BAT_ENTRY_T_HASH ||
+ cla_table->type > CQM_BAT_ENTRY_T_REORDER) {
+ mutex_init(&cla_table->lock);
+ continue;
+ }
+
+ /* For the PPF, timer resources (8 wheels x 2K scales x 32 B x
+ * func_num) need to be allocated, and the timer entry in the
+ * BAT table must be filled in. For a PF, no timer resources
+ * are allocated and the timer entry in the BAT table is left
+ * unfilled.
+ */
+ if (!(cla_table->type == CQM_BAT_ENTRY_T_TIMER &&
+ cqm_handle->func_attribute.func_type != CQM_PPF)) {
+ ret = cqm_cla_init_entry_condition(cqm_handle, i);
+ if (ret != CQM_SUCCESS)
+ return CQM_FAIL;
+ }
+ mutex_init(&cla_table->lock);
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_cla_init(struct cqm_handle *cqm_handle)
+{
+ struct cqm_func_capability *capability = &cqm_handle->func_capability;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ s32 ret;
+
+ /* Applying for CLA Entries */
+ ret = cqm_cla_init_entry(cqm_handle, capability);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_init_entry));
+ return ret;
+ }
+
+ /* After the CLA entry is applied, the address is filled in the BAT table. */
+ cqm_bat_fill_cla(cqm_handle);
+
+ /* Instruct the chip to update the BAT table. */
+ ret = cqm_bat_update(cqm_handle);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bat_update));
+ goto err;
+ }
+
+ cqm_info(handle->dev_hdl, "Timer start: func_type=%d, timer_enable=%u\n",
+ cqm_handle->func_attribute.func_type,
+ cqm_handle->func_capability.timer_enable);
+
+ if (cqm_handle->func_attribute.func_type == CQM_PPF &&
+ cqm_handle->func_capability.timer_enable == CQM_TIMER_ENABLE) {
+ /* Enable the timer after the timer resources are applied for */
+ cqm_info(handle->dev_hdl, "Timer start: spfc ppf timer start\n");
+ ret = sphw_ppf_tmr_start((void *)(cqm_handle->ex_handle));
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Timer start: spfc ppf timer start, ret=%d\n",
+ ret);
+ goto err;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_cla_uninit(cqm_handle, CQM_BAT_ENTRY_MAX);
+ return CQM_FAIL;
+}
+
+void cqm_cla_uninit(struct cqm_handle *cqm_handle, u32 entry_numb)
+{
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table *cla_table = NULL;
+ s32 inv_flag = 0;
+ u32 i;
+
+ for (i = 0; i < entry_numb; i++) {
+ cla_table = &bat_table->entry[i];
+ if (cla_table->type != CQM_BAT_ENTRY_T_INVALID) {
+ cqm_buf_free_cache_inv(cqm_handle, &cla_table->cla_x_buf, &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle, &cla_table->cla_y_buf, &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle, &cla_table->cla_z_buf, &inv_flag);
+ }
+ }
+
+ /* When the lb mode is 1/2, the timer space allocated to the 4 SMFs
+ * needs to be released.
+ */
+ if (cqm_handle->func_attribute.func_type == CQM_PPF &&
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2)) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+ cla_table = &bat_table->timer_entry[i];
+ cqm_buf_free_cache_inv(cqm_handle, &cla_table->cla_x_buf, &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle, &cla_table->cla_y_buf, &inv_flag);
+ cqm_buf_free_cache_inv(cqm_handle, &cla_table->cla_z_buf, &inv_flag);
+ }
+ }
+}
+
+s32 cqm_cla_update_cmd(struct cqm_handle *cqm_handle, struct cqm_cmd_buf *buf_in,
+ struct cqm_cla_update_cmd *cmd)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_cla_update_cmd *cla_update_cmd = NULL;
+ s32 ret = CQM_FAIL;
+
+ cla_update_cmd = (struct cqm_cla_update_cmd *)(buf_in->buf);
+
+ cla_update_cmd->gpa_h = cmd->gpa_h;
+ cla_update_cmd->gpa_l = cmd->gpa_l;
+ cla_update_cmd->value_h = cmd->value_h;
+ cla_update_cmd->value_l = cmd->value_l;
+ cla_update_cmd->smf_id = cmd->smf_id;
+ cla_update_cmd->func_id = cmd->func_id;
+
+ cqm_swab32((u8 *)cla_update_cmd,
+ (sizeof(struct cqm_cla_update_cmd) >> CQM_DW_SHIFT));
+
+ ret = cqm3_send_cmd_box((void *)(cqm_handle->ex_handle), CQM_MOD_CQM,
+ CQM_CMD_T_CLA_UPDATE, buf_in, NULL, NULL,
+ CQM_CMD_TIMEOUT, SPHW_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm3_send_cmd_box));
+ cqm_err(handle->dev_hdl, "Cla alloc: cqm_cla_update, cqm3_send_cmd_box_ret=%d\n",
+ ret);
+ cqm_err(handle->dev_hdl, "Cla alloc: cqm_cla_update, cla_update_cmd: 0x%x 0x%x 0x%x 0x%x\n",
+ cmd->gpa_h, cmd->gpa_l, cmd->value_h, cmd->value_l);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_cla_update(struct cqm_handle *cqm_handle, struct cqm_buf_list *buf_node_parent,
+ struct cqm_buf_list *buf_node_child, u32 child_index, u8 cla_update_mode)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_cmd_buf *buf_in = NULL;
+ struct cqm_cla_update_cmd cmd;
+ dma_addr_t pa = 0;
+ s32 ret = CQM_FAIL;
+ u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
+ u32 i = 0;
+ u64 spu_en;
+
+ buf_in = cqm3_cmd_alloc(cqm_handle->ex_handle);
+ CQM_PTR_CHECK_RET(buf_in, CQM_FAIL, CQM_ALLOC_FAIL(buf_in));
+ buf_in->size = sizeof(struct cqm_cla_update_cmd);
+
+ /* Fill command format, convert to big endian. */
+ /* SPU function sets bit63: acs_spu_en based on function id. */
+ if (sphw_host_id(cqm_handle->ex_handle) == CQM_SPU_HOST_ID)
+ spu_en = ((u64)(cqm_handle->func_attribute.func_global_idx &
+ 0x1)) << 63;
+ else
+ spu_en = 0;
+
+ pa = ((buf_node_parent->pa + (child_index * sizeof(dma_addr_t))) |
+ spu_en);
+ cmd.gpa_h = CQM_ADDR_HI(pa);
+ cmd.gpa_l = CQM_ADDR_LW(pa);
+
+ pa = (buf_node_child->pa | spu_en);
+ cmd.value_h = CQM_ADDR_HI(pa);
+ cmd.value_l = CQM_ADDR_LW(pa);
+
+ /* current CLA GPA CHECK */
+ if (gpa_check_enable) {
+ switch (cla_update_mode) {
+ /* gpa[0]=1 means this GPA is valid */
+ case CQM_CLA_RECORD_NEW_GPA:
+ cmd.value_l |= 1;
+ break;
+ /* gpa[0]=0 means this GPA is invalid */
+ case CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID:
+ case CQM_CLA_DEL_GPA_WITH_CACHE_INVALID:
+ cmd.value_l &= (~1);
+ break;
+ default:
+ cqm_err(handle->dev_hdl,
+ "Cla alloc: %s, wrong cla_update_mode=%u\n",
+ __func__, cla_update_mode);
+ break;
+ }
+ }
+
+ /* In non-fake mode, set func_id to 0xffff to indicate the
+ * current func. In fake mode, set func_id to the specified
+ * value, which is a fake func_id.
+ */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD)
+ cmd.func_id = cqm_handle->func_attribute.func_global_idx;
+ else
+ cmd.func_id = 0xffff;
+
+ /* Mode 0 is hashed to 4 SMF engines (excluding PPF) by func ID. */
+ if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type != CQM_PPF)) {
+ cmd.smf_id = cqm_funcid2smfid(cqm_handle);
+ ret = cqm_cla_update_cmd(cqm_handle, buf_in, &cmd);
+ }
+ /* Modes 1/2 are allocated to four SMF engines by flow.
+ * Therefore, one function needs to be allocated to four SMF engines.
+ */
+ /* Mode 0 PPF needs to be configured on 4 engines,
+ * and the timer resources need to be shared by the 4 engines.
+ */
+ else if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2 ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type == CQM_PPF)) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+ /* The smf_pg variable stores the currently enabled SMFs. */
+ if (cqm_handle->func_capability.smf_pg & (1U << i)) {
+ cmd.smf_id = i;
+ ret = cqm_cla_update_cmd(cqm_handle, buf_in,
+ &cmd);
+ if (ret != CQM_SUCCESS)
+ goto out;
+ }
+ }
+ } else {
+ cqm_err(handle->dev_hdl, "Cla update: unsupport lb mode=%u\n",
+ cqm_handle->func_capability.lb_mode);
+ ret = CQM_FAIL;
+ }
+
+out:
+ cqm3_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return ret;
+}
+
+s32 cqm_cla_alloc(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ struct cqm_buf_list *buf_node_parent,
+ struct cqm_buf_list *buf_node_child, u32 child_index)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ s32 ret = CQM_FAIL;
+
+ /* Apply for trunk page */
+ buf_node_child->va = (u8 *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+ cla_table->trunk_order);
+ CQM_PTR_CHECK_RET(buf_node_child->va, CQM_FAIL, CQM_ALLOC_FAIL(va));
+
+ /* PCI mapping */
+ buf_node_child->pa = pci_map_single(cqm_handle->dev, buf_node_child->va,
+ PAGE_SIZE << cla_table->trunk_order,
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev, buf_node_child->pa)) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(buf_node_child->pa));
+ goto err1;
+ }
+
+ /* Notify the chip of trunk_pa so that the chip fills in cla entry */
+ ret = cqm_cla_update(cqm_handle, buf_node_parent, buf_node_child,
+ child_index, CQM_CLA_RECORD_NEW_GPA);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_update));
+ goto err2;
+ }
+
+ return CQM_SUCCESS;
+
+err2:
+ pci_unmap_single(cqm_handle->dev, buf_node_child->pa,
+ PAGE_SIZE << cla_table->trunk_order,
+ PCI_DMA_BIDIRECTIONAL);
+err1:
+ free_pages((ulong)(buf_node_child->va), cla_table->trunk_order);
+ buf_node_child->va = NULL;
+ return CQM_FAIL;
+}
+
+void cqm_cla_free(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ struct cqm_buf_list *buf_node_parent,
+ struct cqm_buf_list *buf_node_child, u32 child_index, u8 cla_update_mode)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ u32 trunk_size;
+
+ if (cqm_cla_update(cqm_handle, buf_node_parent, buf_node_child,
+ child_index, cla_update_mode) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_update));
+ return;
+ }
+
+ if (cla_update_mode == CQM_CLA_DEL_GPA_WITH_CACHE_INVALID) {
+ trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
+ if (cqm_cla_cache_invalid(cqm_handle, buf_node_child->pa,
+ trunk_size) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_cache_invalid));
+ return;
+ }
+ }
+
+ /* Remove PCI mapping from the trunk page */
+ pci_unmap_single(cqm_handle->dev, buf_node_child->pa,
+ PAGE_SIZE << cla_table->trunk_order,
+ PCI_DMA_BIDIRECTIONAL);
+
+ /* Release trunk page */
+ free_pages((ulong)(buf_node_child->va), cla_table->trunk_order);
+ buf_node_child->va = NULL;
+}
+
+u8 *cqm_cla_get_unlock_lvl0(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ struct cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
+ u8 *ret_addr = NULL;
+ u32 offset = 0;
+
+ /* Level 0 CLA pages are statically allocated. */
+ offset = index * cla_table->obj_size;
+ ret_addr = (u8 *)(cla_z_buf->buf_list->va) + offset;
+ *pa = cla_z_buf->buf_list->pa + offset;
+
+ return ret_addr;
+}
+
+u8 *cqm_cla_get_unlock_lvl1(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ struct cqm_buf *cla_y_buf = &cla_table->cla_y_buf;
+ struct cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf_list *buf_node_y = NULL;
+ struct cqm_buf_list *buf_node_z = NULL;
+ u32 y_index = 0;
+ u32 z_index = 0;
+ u8 *ret_addr = NULL;
+ u32 offset = 0;
+
+ z_index = index & ((1U << (cla_table->z + 1)) - 1);
+ y_index = index >> (cla_table->z + 1);
+
+ if (y_index >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Cla get: index exceeds buf_number, y_index %u, z_buf_number %u\n",
+ y_index, cla_z_buf->buf_number);
+ return NULL;
+ }
+ buf_node_z = &cla_z_buf->buf_list[y_index];
+ buf_node_y = cla_y_buf->buf_list;
+
+ /* If the z buf node does not exist yet, apply for a page first. */
+ if (!buf_node_z->va) {
+ if (cqm_cla_alloc(cqm_handle, cla_table, buf_node_y, buf_node_z,
+ y_index) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_alloc));
+ cqm_err(handle->dev_hdl,
+ "Cla get: cla_table->type=%u\n",
+ cla_table->type);
+ return NULL;
+ }
+ }
+
+ buf_node_z->refcount += count;
+ offset = z_index * cla_table->obj_size;
+ ret_addr = (u8 *)(buf_node_z->va) + offset;
+ *pa = buf_node_z->pa + offset;
+
+ return ret_addr;
+}
+
+u8 *cqm_cla_get_unlock_lvl2(struct cqm_handle *cqm_handle,
+ struct cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ struct cqm_buf *cla_x_buf = &cla_table->cla_x_buf;
+ struct cqm_buf *cla_y_buf = &cla_table->cla_y_buf;
+ struct cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf_list *buf_node_x = NULL;
+ struct cqm_buf_list *buf_node_y = NULL;
+ struct cqm_buf_list *buf_node_z = NULL;
+ u32 x_index = 0;
+ u32 y_index = 0;
+ u32 z_index = 0;
+ u32 trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
+ u8 *ret_addr = NULL;
+ u32 offset = 0;
+ u64 tmp;
+
+ z_index = index & ((1U << (cla_table->z + 1)) - 1);
+ y_index = (index >> (cla_table->z + 1)) &
+ ((1U << (cla_table->y - cla_table->z)) - 1);
+ x_index = index >> (cla_table->y + 1);
+ tmp = x_index * (trunk_size / sizeof(dma_addr_t)) + y_index;
+
+ if (x_index >= cla_y_buf->buf_number || tmp >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Cla get: index exceeds buf_number, x_index %u, y_index %u, y_buf_number %u, z_buf_number %u\n",
+ x_index, y_index, cla_y_buf->buf_number,
+ cla_z_buf->buf_number);
+ return NULL;
+ }
+
+ buf_node_x = cla_x_buf->buf_list;
+ buf_node_y = &cla_y_buf->buf_list[x_index];
+ buf_node_z = &cla_z_buf->buf_list[tmp];
+
+ /* If the y buf node does not exist yet, apply for pages for the y node. */
+ if (!buf_node_y->va) {
+ if (cqm_cla_alloc(cqm_handle, cla_table, buf_node_x, buf_node_y,
+ x_index) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_alloc));
+ return NULL;
+ }
+ }
+
+	/* If the Z buffer node does not exist, allocate trunk pages for it. */
+ if (!buf_node_z->va) {
+ if (cqm_cla_alloc(cqm_handle, cla_table, buf_node_y, buf_node_z,
+ y_index) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_alloc));
+ if (buf_node_y->refcount == 0)
+ /* To release node Y, cache_invalid is
+ * required.
+ */
+ cqm_cla_free(cqm_handle, cla_table, buf_node_x, buf_node_y, x_index,
+ CQM_CLA_DEL_GPA_WITH_CACHE_INVALID);
+ return NULL;
+ }
+
+ /* reference counting of the y buffer node needs to increase
+ * by 1.
+ */
+ buf_node_y->refcount++;
+ }
+
+ buf_node_z->refcount += count;
+ offset = z_index * cla_table->obj_size;
+ ret_addr = (u8 *)(buf_node_z->va) + offset;
+ *pa = buf_node_z->pa + offset;
+
+ return ret_addr;
+}
+
+u8 *cqm_cla_get_unlock(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ u8 *ret_addr = NULL;
+
+ if (cla_table->cla_lvl == CQM_CLA_LVL_0)
+ ret_addr = cqm_cla_get_unlock_lvl0(cqm_handle, cla_table, index,
+ count, pa);
+ else if (cla_table->cla_lvl == CQM_CLA_LVL_1)
+ ret_addr = cqm_cla_get_unlock_lvl1(cqm_handle, cla_table, index,
+ count, pa);
+ else
+ ret_addr = cqm_cla_get_unlock_lvl2(cqm_handle, cla_table, index,
+ count, pa);
+
+ return ret_addr;
+}
+
+u8 *cqm_cla_get_lock(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa)
+{
+ u8 *ret_addr = NULL;
+
+ mutex_lock(&cla_table->lock);
+
+ ret_addr = cqm_cla_get_unlock(cqm_handle, cla_table, index, count, pa);
+
+ mutex_unlock(&cla_table->lock);
+
+ return ret_addr;
+}
+
+void cqm_cla_put(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ u32 index, u32 count)
+{
+ struct cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
+ struct cqm_buf *cla_y_buf = &cla_table->cla_y_buf;
+ struct cqm_buf *cla_x_buf = &cla_table->cla_x_buf;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_buf_list *buf_node_z = NULL;
+ struct cqm_buf_list *buf_node_y = NULL;
+ struct cqm_buf_list *buf_node_x = NULL;
+ u32 x_index = 0;
+ u32 y_index = 0;
+ u32 trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
+ u64 tmp;
+
+	/* The buffer is allocated statically, so reference counting
+	 * is not needed.
+	 */
+ if (cla_table->alloc_static)
+ return;
+
+ mutex_lock(&cla_table->lock);
+
+ if (cla_table->cla_lvl == CQM_CLA_LVL_1) {
+ y_index = index >> (cla_table->z + 1);
+
+ if (y_index >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Cla put: index exceeds buf_number, y_index %u, z_buf_number %u\n",
+ y_index, cla_z_buf->buf_number);
+ cqm_err(handle->dev_hdl,
+ "Cla put: cla_table->type=%u\n",
+ cla_table->type);
+ mutex_unlock(&cla_table->lock);
+ return;
+ }
+
+ buf_node_z = &cla_z_buf->buf_list[y_index];
+ buf_node_y = cla_y_buf->buf_list;
+
+		/* Release the Z node page when its reference count
+		 * drops to 0.
+		 */
+ buf_node_z->refcount -= count;
+ if (buf_node_z->refcount == 0)
+			/* Cache invalidation is not required for the Z node. */
+ cqm_cla_free(cqm_handle, cla_table, buf_node_y,
+ buf_node_z, y_index,
+ CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID);
+ } else if (cla_table->cla_lvl == CQM_CLA_LVL_2) {
+ y_index = (index >> (cla_table->z + 1)) &
+ ((1U << (cla_table->y - cla_table->z)) - 1);
+ x_index = index >> (cla_table->y + 1);
+ tmp = x_index * (trunk_size / sizeof(dma_addr_t)) + y_index;
+
+ if (x_index >= cla_y_buf->buf_number || tmp >= cla_z_buf->buf_number) {
+ cqm_err(handle->dev_hdl,
+ "Cla put: index exceeds buf_number, x_index %u, y_index %u, y_buf_number %u, z_buf_number %u\n",
+ x_index, y_index, cla_y_buf->buf_number,
+ cla_z_buf->buf_number);
+ mutex_unlock(&cla_table->lock);
+ return;
+ }
+
+ buf_node_x = cla_x_buf->buf_list;
+ buf_node_y = &cla_y_buf->buf_list[x_index];
+ buf_node_z = &cla_z_buf->buf_list[tmp];
+
+		/* Release the Z node page when its reference count
+		 * drops to 0.
+		 */
+ buf_node_z->refcount -= count;
+ if (buf_node_z->refcount == 0) {
+ cqm_cla_free(cqm_handle, cla_table, buf_node_y,
+ buf_node_z, y_index,
+ CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID);
+
+			/* Release the Y node page when its reference
+			 * count drops to 0.
+			 */
+ buf_node_y->refcount--;
+ if (buf_node_y->refcount == 0)
+				/* Node Y requires cache invalidation. */
+ cqm_cla_free(cqm_handle, cla_table, buf_node_x, buf_node_y, x_index,
+ CQM_CLA_DEL_GPA_WITH_CACHE_INVALID);
+ }
+ }
+
+ mutex_unlock(&cla_table->lock);
+}
+
+struct cqm_cla_table *cqm_cla_table_get(struct cqm_bat_table *bat_table, u32 entry_type)
+{
+ struct cqm_cla_table *cla_table = NULL;
+ u32 i = 0;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ if (entry_type == cla_table->type)
+ return cla_table;
+ }
+
+ return NULL;
+}
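
As a standalone sketch (not taken from the patch), the following shows the same index decomposition that cqm_cla_get_unlock_lvl2() and cqm_cla_put() perform: the low (z + 1) bits select the object inside a Z trunk page, the next (y - z) bits select the Z-page slot inside a Y trunk page, and the remaining bits select the Y page. The bit boundaries z = 8 and y = 17 are arbitrary placeholders, since the driver derives them from obj_size and trunk_order.

    #include <stdio.h>

    int main(void)
    {
            unsigned int z = 8, y = 17;     /* placeholder bit boundaries */
            unsigned int index = 0x12345;   /* example object index */
            unsigned int z_index = index & ((1U << (z + 1)) - 1);
            unsigned int y_index = (index >> (z + 1)) & ((1U << (y - z)) - 1);
            unsigned int x_index = index >> (y + 1);

            printf("x=%u y=%u z=%u\n", x_index, y_index, z_index);
            return 0;
    }

With these placeholder widths the index 0x12345 decomposes to x=0, y=145, z=325, i.e. object 325 of Z page 145 under the first Y page.
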
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h b/drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h
new file mode 100644
index 000000000000..d3b871ca3c82
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h
@@ -0,0 +1,215 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef CQM_BAT_CLA_H
+#define CQM_BAT_CLA_H
+
+/* When the connection check is enabled, the maximum number of connections
+ * supported by the chip is 1M - 63, so it cannot reach 1M.
+ */
+#define CQM_BAT_MAX_CONN_NUM (0x100000 - 63)
+#define CQM_BAT_MAX_CACHE_CONN_NUM (0x100000 - 63)
+
+#define CLA_TABLE_PAGE_ORDER 0
+#define CQM_4K_PAGE_ORDER 0
+#define CQM_4K_PAGE_SIZE 4096
+
+#define CQM_BAT_ENTRY_MAX 16
+#define CQM_BAT_ENTRY_SIZE 16
+
+#define CQM_BAT_SIZE_FT_PF 192
+#define CQM_BAT_SIZE_FT_VF 112
+
+#define CQM_BAT_INDEX0 0
+#define CQM_BAT_INDEX1 1
+#define CQM_BAT_INDEX2 2
+#define CQM_BAT_INDEX3 3
+#define CQM_BAT_INDEX4 4
+#define CQM_BAT_INDEX5 5
+#define CQM_BAT_INDEX6 6
+#define CQM_BAT_INDEX7 7
+#define CQM_BAT_INDEX8 8
+#define CQM_BAT_INDEX9 9
+#define CQM_BAT_INDEX10 10
+#define CQM_BAT_INDEX11 11
+#define CQM_BAT_INDEX12 12
+#define CQM_BAT_INDEX13 13
+#define CQM_BAT_INDEX14 14
+#define CQM_BAT_INDEX15 15
+
+enum cqm_bat_entry_type {
+ CQM_BAT_ENTRY_T_CFG = 0,
+ CQM_BAT_ENTRY_T_HASH = 1,
+ CQM_BAT_ENTRY_T_QPC = 2,
+ CQM_BAT_ENTRY_T_SCQC = 3,
+ CQM_BAT_ENTRY_T_SRQC = 4,
+ CQM_BAT_ENTRY_T_MPT = 5,
+ CQM_BAT_ENTRY_T_GID = 6,
+ CQM_BAT_ENTRY_T_LUN = 7,
+ CQM_BAT_ENTRY_T_TASKMAP = 8,
+ CQM_BAT_ENTRY_T_L3I = 9,
+ CQM_BAT_ENTRY_T_CHILDC = 10,
+ CQM_BAT_ENTRY_T_TIMER = 11,
+ CQM_BAT_ENTRY_T_XID2CID = 12,
+ CQM_BAT_ENTRY_T_REORDER = 13,
+ CQM_BAT_ENTRY_T_INVALID = 14,
+ CQM_BAT_ENTRY_T_MAX = 15,
+};
+
+/* CLA update mode */
+#define CQM_CLA_RECORD_NEW_GPA 0
+#define CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID 1
+#define CQM_CLA_DEL_GPA_WITH_CACHE_INVALID 2
+
+#define CQM_CLA_LVL_0 0
+#define CQM_CLA_LVL_1 1
+#define CQM_CLA_LVL_2 2
+
+#define CQM_MAX_INDEX_BIT 19
+
+#define CQM_CHIP_CACHELINE 256
+#define CQM_CHIP_TIMER_CACHELINE 512
+#define CQM_OBJECT_256 256
+#define CQM_OBJECT_512 512
+#define CQM_OBJECT_1024 1024
+#define CQM_CHIP_GPA_MASK 0x1ffffffffffffff
+#define CQM_CHIP_GPA_HIMASK 0x1ffffff
+#define CQM_CHIP_GPA_LOMASK 0xffffffff
+#define CQM_CHIP_GPA_HSHIFT 32
+
+/* The hash number is aligned to 64 buckets, i.e. shifted right by 6 bits */
+#define CQM_HASH_NUMBER_UNIT 6
+
+struct cqm_cla_table {
+ u32 type;
+ u32 max_buffer_size;
+ u32 obj_num;
+ bool alloc_static; /* Whether the buffer is statically allocated */
+ u32 cla_lvl;
+ u32 cacheline_x; /* x value calculated based on cacheline, used by the chip */
+ u32 cacheline_y; /* y value calculated based on cacheline, used by the chip */
+ u32 cacheline_z; /* z value calculated based on cacheline, used by the chip */
+ u32 x; /* x value calculated based on obj_size, used by software */
+ u32 y; /* y value calculated based on obj_size, used by software */
+ u32 z; /* z value calculated based on obj_size, used by software */
+ struct cqm_buf cla_x_buf;
+ struct cqm_buf cla_y_buf;
+ struct cqm_buf cla_z_buf;
+	u32 trunk_order; /* A trunk consists of 2^order contiguous physical pages */
+ u32 obj_size;
+ struct mutex lock; /* Lock for cla buffer allocation and free */
+
+ struct cqm_bitmap bitmap;
+
+ struct cqm_object_table obj_table; /* Mapping table between indexes and objects */
+};
+
+struct cqm_bat_entry_cfg {
+ u32 cur_conn_num_h_4 : 4;
+ u32 rsv1 : 4;
+ u32 max_conn_num : 20;
+ u32 rsv2 : 4;
+
+ u32 max_conn_cache : 10;
+ u32 rsv3 : 6;
+ u32 cur_conn_num_l_16 : 16;
+
+ u32 bloom_filter_addr : 16;
+ u32 cur_conn_cache : 10;
+ u32 rsv4 : 6;
+
+ u32 bucket_num : 16;
+ u32 bloom_filter_len : 16;
+};
+
+#define CQM_BAT_NO_BYPASS_CACHE 0
+#define CQM_BAT_BYPASS_CACHE 1
+
+#define CQM_BAT_ENTRY_SIZE_256 0
+#define CQM_BAT_ENTRY_SIZE_512 1
+#define CQM_BAT_ENTRY_SIZE_1024 2
+
+struct cqm_bat_entry_standerd {
+ u32 entry_size : 2;
+ u32 rsv1 : 6;
+ u32 max_number : 20;
+ u32 rsv2 : 4;
+
+ u32 cla_gpa_h : 32;
+
+ u32 cla_gpa_l : 32;
+
+ u32 rsv3 : 8;
+ u32 z : 5;
+ u32 y : 5;
+ u32 x : 5;
+ u32 rsv24 : 1;
+ u32 bypass : 1;
+ u32 cla_level : 2;
+ u32 rsv5 : 5;
+};
+
+struct cqm_bat_entry_vf2pf {
+ u32 cla_gpa_h : 25;
+ u32 pf_id : 5;
+ u32 fake_vf_en : 1;
+ u32 acs_spu_en : 1;
+};
+
+#define CQM_BAT_ENTRY_TASKMAP_NUM 4
+struct cqm_bat_entry_taskmap_addr {
+ u32 gpa_h;
+ u32 gpa_l;
+};
+
+struct cqm_bat_entry_taskmap {
+ struct cqm_bat_entry_taskmap_addr addr[CQM_BAT_ENTRY_TASKMAP_NUM];
+};
+
+struct cqm_bat_table {
+ u32 bat_entry_type[CQM_BAT_ENTRY_MAX];
+ u8 bat[CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE];
+ struct cqm_cla_table entry[CQM_BAT_ENTRY_MAX];
+ /* In LB mode 1, the timer needs to be configured in 4 SMFs,
+ * and the GPAs must be different and independent.
+ */
+ struct cqm_cla_table timer_entry[4];
+ u32 bat_size;
+};
+
+#define CQM_BAT_MAX_SIZE 256
+struct cqm_cmdq_bat_update {
+ u32 offset;
+ u32 byte_len;
+ u8 data[CQM_BAT_MAX_SIZE];
+ u32 smf_id;
+ u32 func_id;
+};
+
+struct cqm_cla_update_cmd {
+ /* Gpa address to be updated */
+ u32 gpa_h;
+ u32 gpa_l;
+
+ /* Updated Value */
+ u32 value_h;
+ u32 value_l;
+
+ u32 smf_id;
+ u32 func_id;
+};
+
+s32 cqm_bat_init(struct cqm_handle *cqm_handle);
+void cqm_bat_uninit(struct cqm_handle *cqm_handle);
+s32 cqm_cla_init(struct cqm_handle *cqm_handle);
+void cqm_cla_uninit(struct cqm_handle *cqm_handle, u32 entry_numb);
+u8 *cqm_cla_get_unlock(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa);
+u8 *cqm_cla_get_lock(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ u32 index, u32 count, dma_addr_t *pa);
+void cqm_cla_put(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
+ u32 index, u32 count);
+struct cqm_cla_table *cqm_cla_table_get(struct cqm_bat_table *bat_table, u32 entry_type);
+u32 cqm_funcid2smfid(struct cqm_handle *cqm_handle);
+
+#endif /* CQM_BAT_CLA_H */
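
A minimal usage sketch for the API declared above (not part of the patch; the function name, entry type and error codes are illustrative): look up the CLA table for a context type, pin one slot with cqm_cla_get_lock(), and drop the reference with cqm_cla_put() once the hardware no longer needs it.

    static int example_touch_qpc_slot(struct cqm_handle *cqm_handle, u32 index)
    {
            struct cqm_cla_table *cla_table;
            dma_addr_t pa;
            u8 *va;

            cla_table = cqm_cla_table_get(&cqm_handle->bat_table, CQM_BAT_ENTRY_T_QPC);
            if (!cla_table)
                    return -EINVAL;

            /* Takes cla_table->lock and bumps the Z trunk-page refcount. */
            va = cqm_cla_get_lock(cqm_handle, cla_table, index, 1, &pa);
            if (!va)
                    return -ENOMEM;

            /* ... fill the context at va, or program pa into the hardware ... */

            /* Drops the refcount; a dynamically allocated trunk page is
             * freed once its refcount reaches zero.
             */
            cqm_cla_put(cqm_handle, cla_table, index, 1);
            return 0;
    }
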
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c b/drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c
new file mode 100644
index 000000000000..4e482776a14f
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c
@@ -0,0 +1,891 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/device.h>
+#include <linux/mm.h>
+#include <linux/gfp.h>
+
+#include "sphw_crm.h"
+#include "sphw_hw.h"
+#include "sphw_hwdev.h"
+#include "sphw_hwif.h"
+
+#include "spfc_cqm_object.h"
+#include "spfc_cqm_bitmap_table.h"
+#include "spfc_cqm_bat_cla.h"
+#include "spfc_cqm_main.h"
+
+#define common_section
+
+void cqm_swab64(u8 *addr, u32 cnt)
+{
+ u64 *temp = (u64 *)addr;
+ u64 value = 0;
+ u32 i;
+
+ for (i = 0; i < cnt; i++) {
+ value = __swab64(*temp);
+ *temp = value;
+ temp++;
+ }
+}
+
+void cqm_swab32(u8 *addr, u32 cnt)
+{
+ u32 *temp = (u32 *)addr;
+ u32 value = 0;
+ u32 i;
+
+ for (i = 0; i < cnt; i++) {
+ value = __swab32(*temp);
+ *temp = value;
+ temp++;
+ }
+}
+
+s32 cqm_shift(u32 data)
+{
+ s32 shift = -1;
+
+ do {
+ data >>= 1;
+ shift++;
+ } while (data);
+
+ return shift;
+}
+
+bool cqm_check_align(u32 data)
+{
+ if (data == 0)
+ return false;
+
+ do {
+ /* When the value can be exactly divided by 2,
+ * the value of data is shifted right by one bit, that is,
+ * divided by 2.
+ */
+ if ((data & 0x1) == 0)
+ data >>= 1;
+ /* If the value cannot be divisible by 2, the value is
+ * not 2^n-aligned and false is returned.
+ */
+ else
+ return false;
+ } while (data != 1);
+
+ return true;
+}
+
+void *cqm_kmalloc_align(size_t size, gfp_t flags, u16 align_order)
+{
+ void *orig_addr = NULL;
+ void *align_addr = NULL;
+ void *index_addr = NULL;
+
+ orig_addr = kmalloc(size + ((u64)1 << align_order) + sizeof(void *),
+ flags);
+ if (!orig_addr)
+ return NULL;
+
+ index_addr = (void *)((char *)orig_addr + sizeof(void *));
+ align_addr =
+ (void *)((((u64)index_addr + ((u64)1 << align_order) - 1) >>
+ align_order) << align_order);
+
+ /* Record the original memory address for memory release. */
+ index_addr = (void *)((char *)align_addr - sizeof(void *));
+ *(void **)index_addr = orig_addr;
+
+ return align_addr;
+}
+
+void cqm_kfree_align(void *addr)
+{
+ void *index_addr = NULL;
+
+ /* Release the original memory address. */
+ index_addr = (void *)((char *)addr - sizeof(void *));
+
+ kfree(*(void **)index_addr);
+}
+
+void cqm_write_lock(rwlock_t *lock, bool bh)
+{
+ if (bh)
+ write_lock_bh(lock);
+ else
+ write_lock(lock);
+}
+
+void cqm_write_unlock(rwlock_t *lock, bool bh)
+{
+ if (bh)
+ write_unlock_bh(lock);
+ else
+ write_unlock(lock);
+}
+
+void cqm_read_lock(rwlock_t *lock, bool bh)
+{
+ if (bh)
+ read_lock_bh(lock);
+ else
+ read_lock(lock);
+}
+
+void cqm_read_unlock(rwlock_t *lock, bool bh)
+{
+ if (bh)
+ read_unlock_bh(lock);
+ else
+ read_unlock(lock);
+}
+
+s32 cqm_buf_alloc_direct(struct cqm_handle *cqm_handle, struct cqm_buf *buf, bool direct)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct page **pages = NULL;
+ u32 i, j, order;
+
+ order = get_order(buf->buf_size);
+
+ if (!direct) {
+ buf->direct.va = NULL;
+ return CQM_SUCCESS;
+ }
+
+ pages = vmalloc(sizeof(struct page *) * buf->page_number);
+ if (!pages) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(pages));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < buf->buf_number; i++) {
+ for (j = 0; j < ((u32)1 << order); j++)
+ pages[(ulong)(unsigned int)((i << order) + j)] =
+ (void *)virt_to_page((u8 *)(buf->buf_list[i].va) +
+ (PAGE_SIZE * j));
+ }
+
+ buf->direct.va = vmap(pages, buf->page_number, VM_MAP, PAGE_KERNEL);
+ vfree(pages);
+ if (!buf->direct.va) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(buf->direct.va));
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_buf_alloc_page(struct cqm_handle *cqm_handle, struct cqm_buf *buf)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct page *newpage = NULL;
+ u32 order;
+ void *va = NULL;
+ s32 i, node;
+
+ order = get_order(buf->buf_size);
+	/* Allocate pages for each buffer (non-OVS service mode) */
+ if (handle->board_info.service_mode != 0) {
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ va = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+ order);
+ if (!va) {
+ cqm_err(handle->dev_hdl,
+ CQM_ALLOC_FAIL(buf_page));
+ break;
+ }
+			/* Zero the page after allocation; if hash
+			 * entries are involved, the initial value
+			 * must be 0.
+			 */
+ memset(va, 0, buf->buf_size);
+ buf->buf_list[i].va = va;
+ }
+ } else {
+ node = dev_to_node(handle->dev_hdl);
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ newpage = alloc_pages_node(node,
+ GFP_KERNEL | __GFP_ZERO,
+ order);
+ if (!newpage) {
+ cqm_err(handle->dev_hdl,
+ CQM_ALLOC_FAIL(buf_page));
+ break;
+ }
+ va = (void *)page_address(newpage);
+			/* Zero the page after allocation; if hash
+			 * entries are involved, the initial value
+			 * must be 0.
+			 */
+ memset(va, 0, buf->buf_size);
+ buf->buf_list[i].va = va;
+ }
+ }
+
+ if (i != buf->buf_number) {
+ i--;
+ for (; i >= 0; i--) {
+ free_pages((ulong)(buf->buf_list[i].va), order);
+ buf->buf_list[i].va = NULL;
+ }
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_buf_alloc_map(struct cqm_handle *cqm_handle, struct cqm_buf *buf)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct pci_dev *dev = cqm_handle->dev;
+ void *va = NULL;
+ s32 i;
+
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ va = buf->buf_list[i].va;
+ buf->buf_list[i].pa = pci_map_single(dev, va, buf->buf_size,
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(dev, buf->buf_list[i].pa)) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(buf_list));
+ break;
+ }
+ }
+
+ if (i != buf->buf_number) {
+ i--;
+ for (; i >= 0; i--)
+ pci_unmap_single(dev, buf->buf_list[i].pa,
+ buf->buf_size, PCI_DMA_BIDIRECTIONAL);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_buf_alloc(struct cqm_handle *cqm_handle, struct cqm_buf *buf, bool direct)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct pci_dev *dev = cqm_handle->dev;
+ u32 order;
+ s32 i;
+
+ order = get_order(buf->buf_size);
+
+	/* Allocate the buffer list descriptor space */
+ buf->buf_list = vmalloc(buf->buf_number * sizeof(struct cqm_buf_list));
+ CQM_PTR_CHECK_RET(buf->buf_list, CQM_FAIL,
+ CQM_ALLOC_FAIL(linux_buf_list));
+ memset(buf->buf_list, 0, buf->buf_number * sizeof(struct cqm_buf_list));
+
+	/* Allocate pages for each buffer */
+ if (cqm_buf_alloc_page(cqm_handle, buf) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(linux_cqm_buf_alloc_page));
+ goto err1;
+ }
+
+ /* PCI mapping of the buffer */
+ if (cqm_buf_alloc_map(cqm_handle, buf) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(linux_cqm_buf_alloc_map));
+ goto err2;
+ }
+
+ /* direct remapping */
+ if (cqm_buf_alloc_direct(cqm_handle, buf, direct) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_buf_alloc_direct));
+ goto err3;
+ }
+
+ return CQM_SUCCESS;
+
+err3:
+ for (i = 0; i < (s32)buf->buf_number; i++)
+ pci_unmap_single(dev, buf->buf_list[i].pa, buf->buf_size,
+ PCI_DMA_BIDIRECTIONAL);
+err2:
+ for (i = 0; i < (s32)buf->buf_number; i++) {
+ free_pages((ulong)(buf->buf_list[i].va), order);
+ buf->buf_list[i].va = NULL;
+ }
+err1:
+ vfree(buf->buf_list);
+ buf->buf_list = NULL;
+ return CQM_FAIL;
+}
+
+void cqm_buf_free(struct cqm_buf *buf, struct pci_dev *dev)
+{
+ u32 order;
+ s32 i;
+
+ order = get_order(buf->buf_size);
+
+ if (buf->direct.va) {
+ vunmap(buf->direct.va);
+ buf->direct.va = NULL;
+ }
+
+ if (buf->buf_list) {
+ for (i = 0; i < (s32)(buf->buf_number); i++) {
+ if (buf->buf_list[i].va) {
+ pci_unmap_single(dev, buf->buf_list[i].pa,
+ buf->buf_size,
+ PCI_DMA_BIDIRECTIONAL);
+
+ free_pages((ulong)(buf->buf_list[i].va), order);
+ buf->buf_list[i].va = NULL;
+ }
+ }
+
+ vfree(buf->buf_list);
+ buf->buf_list = NULL;
+ }
+}
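
As a hedged sketch of how cqm_buf_alloc()/cqm_buf_free() above are meant to be driven (the numbers and function name are illustrative, not from the patch): the caller fills the geometry fields of struct cqm_buf first, since cqm_buf_alloc() reads buf_size, buf_number and page_number but never sets them.

    static s32 example_alloc_ctx_buf(struct cqm_handle *cqm_handle, struct cqm_buf *buf)
    {
            buf->buf_number = 4;           /* four trunk buffers (example) */
            buf->buf_size = PAGE_SIZE;     /* one page per trunk (example) */
            buf->page_number = buf->buf_number * (buf->buf_size / PAGE_SIZE);

            /* direct=true additionally vmap()s every page into one VA range. */
            if (cqm_buf_alloc(cqm_handle, buf, true) == CQM_FAIL)
                    return CQM_FAIL;

            /* ... use buf->direct.va, or buf->buf_list[i].va/pa per trunk ... */

            cqm_buf_free(buf, cqm_handle->dev);
            return CQM_SUCCESS;
    }
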
+
+s32 cqm_cla_cache_invalid_cmd(struct cqm_handle *cqm_handle, struct cqm_cmd_buf *buf_in,
+ struct cqm_cla_cache_invalid_cmd *cmd)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_cla_cache_invalid_cmd *cla_cache_invalid_cmd = NULL;
+ s32 ret;
+
+ cla_cache_invalid_cmd = (struct cqm_cla_cache_invalid_cmd *)(buf_in->buf);
+ cla_cache_invalid_cmd->gpa_h = cmd->gpa_h;
+ cla_cache_invalid_cmd->gpa_l = cmd->gpa_l;
+ cla_cache_invalid_cmd->cache_size = cmd->cache_size;
+ cla_cache_invalid_cmd->smf_id = cmd->smf_id;
+ cla_cache_invalid_cmd->func_id = cmd->func_id;
+
+ cqm_swab32((u8 *)cla_cache_invalid_cmd,
+		   /* shift right by 2 bits to get the length in dwords (4 bytes) */
+ (sizeof(struct cqm_cla_cache_invalid_cmd) >> 2));
+
+ /* Send the cmdq command. */
+ ret = cqm3_send_cmd_box((void *)(cqm_handle->ex_handle), CQM_MOD_CQM,
+ CQM_CMD_T_CLA_CACHE_INVALID, buf_in, NULL, NULL,
+ CQM_CMD_TIMEOUT, SPHW_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm3_send_cmd_box));
+ cqm_err(handle->dev_hdl,
+ "Cla cache invalid: cqm3_send_cmd_box_ret=%d\n",
+ ret);
+ cqm_err(handle->dev_hdl,
+ "Cla cache invalid: cla_cache_invalid_cmd: 0x%x 0x%x 0x%x\n",
+ cmd->gpa_h, cmd->gpa_l, cmd->cache_size);
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_cla_cache_invalid(struct cqm_handle *cqm_handle, dma_addr_t gpa, u32 cache_size)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_cmd_buf *buf_in = NULL;
+ struct cqm_cla_cache_invalid_cmd cmd;
+ s32 ret = CQM_FAIL;
+ u32 i;
+
+ buf_in = cqm3_cmd_alloc((void *)(cqm_handle->ex_handle));
+ CQM_PTR_CHECK_RET(buf_in, CQM_FAIL, CQM_ALLOC_FAIL(buf_in));
+ buf_in->size = sizeof(struct cqm_cla_cache_invalid_cmd);
+
+ /* Fill command and convert it to big endian */
+ cmd.cache_size = cache_size;
+ cmd.gpa_h = CQM_ADDR_HI(gpa);
+ cmd.gpa_l = CQM_ADDR_LW(gpa);
+
+	/* For a fake child function, func_id carries the fake
+	 * function's global index; in non-fake mode, func_id is
+	 * set to 0xffff.
+	 */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD)
+ cmd.func_id = cqm_handle->func_attribute.func_global_idx;
+ else
+ cmd.func_id = 0xffff;
+
+ /* Mode 0 is hashed to 4 SMF engines (excluding PPF) by func ID. */
+ if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type != CQM_PPF)) {
+ cmd.smf_id = cqm_funcid2smfid(cqm_handle);
+ ret = cqm_cla_cache_invalid_cmd(cqm_handle, buf_in, &cmd);
+ }
+	/* Modes 1/2 distribute flows across the 4 SMF engines, so each
+	 * function must be configured on all 4 engines.
+	 */
+	/* The PPF in mode 0 also needs to be configured on all 4 engines,
+	 * because the timer resources are shared by them.
+	 */
+ else if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
+ cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2 ||
+ (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
+ cqm_handle->func_attribute.func_type == CQM_PPF)) {
+ for (i = 0; i < CQM_LB_SMF_MAX; i++) {
+			/* smf_pg stores the bitmap of currently enabled SMF engines. */
+ if (cqm_handle->func_capability.smf_pg & (1U << i)) {
+ cmd.smf_id = i;
+ ret = cqm_cla_cache_invalid_cmd(cqm_handle,
+ buf_in, &cmd);
+ if (ret != CQM_SUCCESS)
+ goto out;
+ }
+ }
+ } else {
+		cqm_err(handle->dev_hdl, "Cla cache invalid: unsupported lb mode=%u\n",
+ cqm_handle->func_capability.lb_mode);
+ ret = CQM_FAIL;
+ }
+
+out:
+ cqm3_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return ret;
+}
+
+static void free_cache_inv(struct cqm_handle *cqm_handle, struct cqm_buf *buf,
+ s32 *inv_flag)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ u32 order;
+ s32 i;
+
+ order = get_order(buf->buf_size);
+
+ if (!handle->chip_present_flag)
+ return;
+
+ if (!buf->buf_list)
+ return;
+
+ for (i = 0; i < (s32)(buf->buf_number); i++) {
+ if (!buf->buf_list[i].va)
+ continue;
+
+ if (*inv_flag != CQM_SUCCESS)
+ continue;
+
+ /* In the Pangea environment, if the cmdq times out,
+ * no subsequent message is sent.
+ */
+ *inv_flag = cqm_cla_cache_invalid(cqm_handle, buf->buf_list[i].pa,
+ (u32)(PAGE_SIZE << order));
+ if (*inv_flag != CQM_SUCCESS)
+ cqm_err(handle->dev_hdl,
+ "Buffer free: fail to invalid buf_list pa cache, inv_flag=%d\n",
+ *inv_flag);
+ }
+}
+
+void cqm_buf_free_cache_inv(struct cqm_handle *cqm_handle, struct cqm_buf *buf,
+ s32 *inv_flag)
+{
+ /* Send a command to the chip to kick out the cache. */
+ free_cache_inv(cqm_handle, buf, inv_flag);
+
+ /* Clear host resources */
+ cqm_buf_free(buf, cqm_handle->dev);
+}
+
+#define bitmap_section
+
+s32 cqm_single_bitmap_init(struct cqm_bitmap *bitmap)
+{
+ u32 bit_number;
+
+ spin_lock_init(&bitmap->lock);
+
+ /* Max_num of the bitmap is 8-aligned and then
+ * shifted rightward by 3 bits to obtain the number of bytes required.
+ */
+ bit_number = (ALIGN(bitmap->max_num, CQM_NUM_BIT_BYTE) >> CQM_BYTE_BIT_SHIFT);
+ bitmap->table = vmalloc(bit_number);
+ CQM_PTR_CHECK_RET(bitmap->table, CQM_FAIL, CQM_ALLOC_FAIL(bitmap->table));
+ memset(bitmap->table, 0, bit_number);
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_bitmap_init(struct cqm_handle *cqm_handle)
+{
+ struct cqm_func_capability *capability = &cqm_handle->func_capability;
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_cla_table *cla_table = NULL;
+ struct cqm_bitmap *bitmap = NULL;
+ s32 ret = CQM_SUCCESS;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Cla alloc: cla_type %u, obj_num=0, don't init bitmap\n",
+ cla_table->type);
+ continue;
+ }
+
+ bitmap = &cla_table->bitmap;
+
+ switch (cla_table->type) {
+ case CQM_BAT_ENTRY_T_QPC:
+ bitmap->max_num = capability->qpc_number;
+ bitmap->reserved_top = capability->qpc_reserved;
+ bitmap->last = capability->qpc_reserved;
+ cqm_info(handle->dev_hdl,
+ "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = cqm_single_bitmap_init(bitmap);
+ break;
+ case CQM_BAT_ENTRY_T_MPT:
+ bitmap->max_num = capability->mpt_number;
+ bitmap->reserved_top = capability->mpt_reserved;
+ bitmap->last = capability->mpt_reserved;
+ cqm_info(handle->dev_hdl,
+ "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = cqm_single_bitmap_init(bitmap);
+ break;
+ case CQM_BAT_ENTRY_T_SCQC:
+ bitmap->max_num = capability->scqc_number;
+ bitmap->reserved_top = capability->scq_reserved;
+ bitmap->last = capability->scq_reserved;
+ cqm_info(handle->dev_hdl,
+ "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = cqm_single_bitmap_init(bitmap);
+ break;
+ case CQM_BAT_ENTRY_T_SRQC:
+ bitmap->max_num = capability->srqc_number;
+ bitmap->reserved_top = capability->srq_reserved;
+ bitmap->last = capability->srq_reserved;
+ cqm_info(handle->dev_hdl,
+ "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
+ cla_table->type, bitmap->max_num);
+ ret = cqm_single_bitmap_init(bitmap);
+ break;
+ default:
+ break;
+ }
+
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ "Bitmap init: failed to init cla_table_type=%u, obj_num=0x%x\n",
+ cla_table->type, cla_table->obj_num);
+ goto err;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_bitmap_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+void cqm_bitmap_uninit(struct cqm_handle *cqm_handle)
+{
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct cqm_cla_table *cla_table = NULL;
+ struct cqm_bitmap *bitmap = NULL;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ bitmap = &cla_table->bitmap;
+ if (cla_table->type != CQM_BAT_ENTRY_T_INVALID) {
+ if (bitmap->table) {
+ vfree(bitmap->table);
+ bitmap->table = NULL;
+ }
+ }
+ }
+}
+
+u32 cqm_bitmap_check_range(const ulong *table, u32 step, u32 max_num, u32 begin,
+ u32 count)
+{
+ u32 end = (begin + (count - 1));
+ u32 i;
+
+ /* Single-bit check is not performed. */
+ if (count == 1)
+ return begin;
+
+ /* The end value exceeds the threshold. */
+ if (end >= max_num)
+ return max_num;
+
+	/* Bit check: when a set bit is found, the position after it is returned. */
+ for (i = (begin + 1); i <= end; i++) {
+ if (test_bit((s32)i, table))
+ return i + 1;
+ }
+
+	/* Check whether begin and end fall in different steps. */
+ if ((begin & (~(step - 1))) != (end & (~(step - 1))))
+ return (end & (~(step - 1)));
+
+ /* If the check succeeds, begin is returned. */
+ return begin;
+}
+
+void cqm_bitmap_find(struct cqm_bitmap *bitmap, u32 *index, u32 last, u32 step, u32 count)
+{
+ u32 max_num = bitmap->max_num;
+ ulong *table = bitmap->table;
+
+ do {
+ *index = (u32)find_next_zero_bit(table, max_num, last);
+ if (*index < max_num)
+ last = cqm_bitmap_check_range(table, step, max_num,
+ *index, count);
+ else
+ break;
+ } while (last != *index);
+}
+
+u32 cqm_bitmap_alloc(struct cqm_bitmap *bitmap, u32 step, u32 count, bool update_last)
+{
+ u32 index = 0;
+ u32 max_num = bitmap->max_num;
+ u32 last = bitmap->last;
+ ulong *table = bitmap->table;
+ u32 i;
+
+ spin_lock(&bitmap->lock);
+
+ /* Search for an idle bit from the last position. */
+ cqm_bitmap_find(bitmap, &index, last, step, count);
+
+	/* If the preceding search fails, search again from the
+	 * start of the non-reserved range.
+	 */
+ if (index >= max_num) {
+ last = bitmap->reserved_top;
+ cqm_bitmap_find(bitmap, &index, last, step, count);
+ }
+
+ /* Set the found bit to 1 and reset last. */
+ if (index < max_num) {
+ for (i = index; i < (index + count); i++)
+ set_bit(i, table);
+
+ if (update_last) {
+ bitmap->last = (index + count);
+ if (bitmap->last >= bitmap->max_num)
+ bitmap->last = bitmap->reserved_top;
+ }
+ }
+
+ spin_unlock(&bitmap->lock);
+ return index;
+}
+
+u32 cqm_bitmap_alloc_reserved(struct cqm_bitmap *bitmap, u32 count, u32 index)
+{
+ ulong *table = bitmap->table;
+ u32 ret_index;
+
+ if (index >= bitmap->reserved_top || index >= bitmap->max_num || count != 1)
+ return CQM_INDEX_INVALID;
+
+ spin_lock(&bitmap->lock);
+
+ if (test_bit((s32)index, table)) {
+ ret_index = CQM_INDEX_INVALID;
+ } else {
+ set_bit(index, table);
+ ret_index = index;
+ }
+
+ spin_unlock(&bitmap->lock);
+ return ret_index;
+}
+
+void cqm_bitmap_free(struct cqm_bitmap *bitmap, u32 index, u32 count)
+{
+ u32 i;
+
+ spin_lock(&bitmap->lock);
+
+ for (i = index; i < (index + count); i++)
+ clear_bit((s32)i, bitmap->table);
+
+ spin_unlock(&bitmap->lock);
+}
+
+#define obj_table_section
+s32 cqm_single_object_table_init(struct cqm_object_table *obj_table)
+{
+ rwlock_init(&obj_table->lock);
+
+ obj_table->table = vmalloc(obj_table->max_num * sizeof(void *));
+ CQM_PTR_CHECK_RET(obj_table->table, CQM_FAIL, CQM_ALLOC_FAIL(table));
+ memset(obj_table->table, 0, obj_table->max_num * sizeof(void *));
+ return CQM_SUCCESS;
+}
+
+s32 cqm_object_table_init(struct cqm_handle *cqm_handle)
+{
+ struct cqm_func_capability *capability = &cqm_handle->func_capability;
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_object_table *obj_table = NULL;
+ struct cqm_cla_table *cla_table = NULL;
+ s32 ret = CQM_SUCCESS;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ if (cla_table->obj_num == 0) {
+ cqm_info(handle->dev_hdl,
+ "Obj table init: cla_table_type %u, obj_num=0, don't init obj table\n",
+ cla_table->type);
+ continue;
+ }
+
+ obj_table = &cla_table->obj_table;
+
+ switch (cla_table->type) {
+ case CQM_BAT_ENTRY_T_QPC:
+ obj_table->max_num = capability->qpc_number;
+ ret = cqm_single_object_table_init(obj_table);
+ break;
+ case CQM_BAT_ENTRY_T_MPT:
+ obj_table->max_num = capability->mpt_number;
+ ret = cqm_single_object_table_init(obj_table);
+ break;
+ case CQM_BAT_ENTRY_T_SCQC:
+ obj_table->max_num = capability->scqc_number;
+ ret = cqm_single_object_table_init(obj_table);
+ break;
+ case CQM_BAT_ENTRY_T_SRQC:
+ obj_table->max_num = capability->srqc_number;
+ ret = cqm_single_object_table_init(obj_table);
+ break;
+ default:
+ break;
+ }
+
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ "Obj table init: failed to init cla_table_type=%u, obj_num=0x%x\n",
+ cla_table->type, cla_table->obj_num);
+ goto err;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_object_table_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+void cqm_object_table_uninit(struct cqm_handle *cqm_handle)
+{
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct cqm_object_table *obj_table = NULL;
+ struct cqm_cla_table *cla_table = NULL;
+ u32 i;
+
+ for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
+ cla_table = &bat_table->entry[i];
+ obj_table = &cla_table->obj_table;
+ if (cla_table->type != CQM_BAT_ENTRY_T_INVALID) {
+ if (obj_table->table) {
+ vfree(obj_table->table);
+ obj_table->table = NULL;
+ }
+ }
+ }
+}
+
+s32 cqm_object_table_insert(struct cqm_handle *cqm_handle,
+ struct cqm_object_table *object_table,
+ u32 index, struct cqm_object *obj, bool bh)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+
+ if (index >= object_table->max_num) {
+ cqm_err(handle->dev_hdl,
+ "Obj table insert: index 0x%x exceeds max_num 0x%x\n",
+ index, object_table->max_num);
+ return CQM_FAIL;
+ }
+
+ cqm_write_lock(&object_table->lock, bh);
+
+ if (!object_table->table[index]) {
+ object_table->table[index] = obj;
+ cqm_write_unlock(&object_table->lock, bh);
+ return CQM_SUCCESS;
+ }
+
+ cqm_write_unlock(&object_table->lock, bh);
+ cqm_err(handle->dev_hdl,
+ "Obj table insert: object_table->table[0x%x] has been inserted\n",
+ index);
+
+ return CQM_FAIL;
+}
+
+void cqm_object_table_remove(struct cqm_handle *cqm_handle,
+ struct cqm_object_table *object_table,
+ u32 index, const struct cqm_object *obj, bool bh)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+
+ if (index >= object_table->max_num) {
+ cqm_err(handle->dev_hdl,
+ "Obj table remove: index 0x%x exceeds max_num 0x%x\n",
+ index, object_table->max_num);
+ return;
+ }
+
+ cqm_write_lock(&object_table->lock, bh);
+
+ if (object_table->table[index] && object_table->table[index] == obj)
+ object_table->table[index] = NULL;
+ else
+ cqm_err(handle->dev_hdl,
+ "Obj table remove: object_table->table[0x%x] has been removed\n",
+ index);
+
+ cqm_write_unlock(&object_table->lock, bh);
+}
+
+struct cqm_object *cqm_object_table_get(struct cqm_handle *cqm_handle,
+ struct cqm_object_table *object_table,
+ u32 index, bool bh)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_object *obj = NULL;
+
+ if (index >= object_table->max_num) {
+ cqm_err(handle->dev_hdl,
+ "Obj table get: index 0x%x exceeds max_num 0x%x\n",
+ index, object_table->max_num);
+ return NULL;
+ }
+
+ cqm_read_lock(&object_table->lock, bh);
+
+ obj = object_table->table[index];
+ if (obj)
+ atomic_inc(&obj->refcount);
+
+ cqm_read_unlock(&object_table->lock, bh);
+
+ return obj;
+}
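
A hedged sketch (not part of the patch) tying the bitmap and object table above together the way callers are expected to: allocate an index, publish the object, look it up with the reference taken, then tear everything down in reverse order. The function name is illustrative, and a bare atomic_dec() stands in for whatever put helper the rest of the driver uses to drop the lookup reference.

    static s32 example_publish_object(struct cqm_handle *cqm_handle,
                                      struct cqm_cla_table *cla_table,
                                      struct cqm_object *obj)
    {
            struct cqm_object_table *obj_table = &cla_table->obj_table;
            struct cqm_bitmap *bitmap = &cla_table->bitmap;
            struct cqm_object *found = NULL;
            u32 index;

            /* step only constrains multi-bit allocations; with count == 1
             * any value is accepted (2048 here is arbitrary).
             */
            index = cqm_bitmap_alloc(bitmap, 2048, 1, true);
            if (index >= bitmap->max_num)
                    return CQM_FAIL;

            /* Publish the object so asynchronous paths can find it by index. */
            if (cqm_object_table_insert(cqm_handle, obj_table, index, obj, true) != CQM_SUCCESS) {
                    cqm_bitmap_free(bitmap, index, 1);
                    return CQM_FAIL;
            }

            /* Lookups take obj->refcount; the caller must drop it again. */
            found = cqm_object_table_get(cqm_handle, obj_table, index, true);
            if (found)
                    atomic_dec(&found->refcount);

            /* Teardown in reverse order of creation. */
            cqm_object_table_remove(cqm_handle, obj_table, index, obj, true);
            cqm_bitmap_free(bitmap, index, 1);
            return CQM_SUCCESS;
    }
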
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h b/drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h
new file mode 100644
index 000000000000..4a1b353794bf
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef CQM_BITMAP_TABLE_H
+#define CQM_BITMAP_TABLE_H
+
+struct cqm_bitmap {
+ ulong *table;
+ u32 max_num;
+ u32 last;
+ u32 reserved_top; /* reserved index */
+ spinlock_t lock;
+};
+
+struct cqm_object_table {
+	/* Currently a flat array; may later be optimized into a red-black tree. */
+ struct cqm_object **table;
+ u32 max_num;
+ rwlock_t lock;
+};
+
+struct cqm_cla_cache_invalid_cmd {
+ u32 gpa_h;
+ u32 gpa_l;
+
+ u32 cache_size; /* CLA cache size=4096B */
+
+ u32 smf_id;
+ u32 func_id;
+};
+
+struct cqm_handle;
+
+s32 cqm_bitmap_init(struct cqm_handle *cqm_handle);
+void cqm_bitmap_uninit(struct cqm_handle *cqm_handle);
+u32 cqm_bitmap_alloc(struct cqm_bitmap *bitmap, u32 step, u32 count, bool update_last);
+u32 cqm_bitmap_alloc_reserved(struct cqm_bitmap *bitmap, u32 count, u32 index);
+void cqm_bitmap_free(struct cqm_bitmap *bitmap, u32 index, u32 count);
+s32 cqm_object_table_init(struct cqm_handle *cqm_handle);
+void cqm_object_table_uninit(struct cqm_handle *cqm_handle);
+s32 cqm_object_table_insert(struct cqm_handle *cqm_handle,
+ struct cqm_object_table *object_table,
+ u32 index, struct cqm_object *obj, bool bh);
+void cqm_object_table_remove(struct cqm_handle *cqm_handle,
+ struct cqm_object_table *object_table,
+ u32 index, const struct cqm_object *obj, bool bh);
+struct cqm_object *cqm_object_table_get(struct cqm_handle *cqm_handle,
+ struct cqm_object_table *object_table,
+ u32 index, bool bh);
+
+void cqm_swab64(u8 *addr, u32 cnt);
+void cqm_swab32(u8 *addr, u32 cnt);
+bool cqm_check_align(u32 data);
+s32 cqm_shift(u32 data);
+s32 cqm_buf_alloc(struct cqm_handle *cqm_handle, struct cqm_buf *buf, bool direct);
+s32 cqm_buf_alloc_direct(struct cqm_handle *cqm_handle, struct cqm_buf *buf, bool direct);
+void cqm_buf_free(struct cqm_buf *buf, struct pci_dev *dev);
+void cqm_buf_free_cache_inv(struct cqm_handle *cqm_handle, struct cqm_buf *buf,
+ s32 *inv_flag);
+s32 cqm_cla_cache_invalid(struct cqm_handle *cqm_handle, dma_addr_t gpa,
+ u32 cache_size);
+void *cqm_kmalloc_align(size_t size, gfp_t flags, u16 align_order);
+void cqm_kfree_align(void *addr);
+
+#endif
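
cqm_kmalloc_align() and cqm_kfree_align(), declared above, use a common over-allocation trick: allocate size + 2^order + sizeof(void *) bytes, round the usable pointer up to the requested boundary, and stash the original pointer just below the returned address so it can be recovered on free. The standalone user-space sketch below (not part of the patch) shows the same scheme with malloc()/free().

    #include <stdint.h>
    #include <stdlib.h>

    static void *malloc_align(size_t size, unsigned int order)
    {
            void *orig = malloc(size + ((size_t)1 << order) + sizeof(void *));
            uintptr_t aligned;

            if (!orig)
                    return NULL;

            /* Round up past one stashed pointer to the 2^order boundary. */
            aligned = ((uintptr_t)orig + sizeof(void *) + ((uintptr_t)1 << order) - 1) &
                      ~(((uintptr_t)1 << order) - 1);
            ((void **)aligned)[-1] = orig;  /* remember the real allocation */
            return (void *)aligned;
    }

    static void free_align(void *addr)
    {
            free(((void **)addr)[-1]);      /* free the original pointer */
    }
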
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_main.c b/drivers/scsi/spfc/hw/spfc_cqm_main.c
new file mode 100644
index 000000000000..eecf385ec0f3
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_cqm_main.c
@@ -0,0 +1,1257 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/vmalloc.h>
+
+#include "sphw_crm.h"
+#include "sphw_hw.h"
+#include "sphw_hw_cfg.h"
+
+#include "spfc_cqm_main.h"
+
+static unsigned char cqm_lb_mode = CQM_LB_MODE_NORMAL;
+module_param(cqm_lb_mode, byte, 0644);
+MODULE_PARM_DESC(cqm_lb_mode, "for cqm lb mode (default=0xff)");
+
+static unsigned char cqm_fake_mode = CQM_FAKE_MODE_DISABLE;
+module_param(cqm_fake_mode, byte, 0644);
+MODULE_PARM_DESC(cqm_fake_mode, "for cqm fake mode (default=0 disable)");
+
+static unsigned char cqm_platform_mode = CQM_FPGA_MODE;
+module_param(cqm_platform_mode, byte, 0644);
+MODULE_PARM_DESC(cqm_platform_mode, "for cqm platform mode (default=0 FPGA)");
+
+s32 cqm3_init(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = NULL;
+ s32 ret;
+
+ CQM_PTR_CHECK_RET(ex_handle, CQM_FAIL, CQM_PTR_NULL(ex_handle));
+
+ cqm_handle = kmalloc(sizeof(*cqm_handle), GFP_KERNEL | __GFP_ZERO);
+ CQM_PTR_CHECK_RET(cqm_handle, CQM_FAIL, CQM_ALLOC_FAIL(cqm_handle));
+
+	/* Clear the memory in case the allocator did not
+	 * zero it.
+	 */
+ memset(cqm_handle, 0, sizeof(struct cqm_handle));
+
+ cqm_handle->ex_handle = handle;
+ cqm_handle->dev = (struct pci_dev *)(handle->pcidev_hdl);
+ handle->cqm_hdl = (void *)cqm_handle;
+
+ /* Clearing Statistics */
+ memset(&handle->hw_stats.cqm_stats, 0, sizeof(struct cqm_stats));
+
+ /* Reads VF/PF information. */
+ cqm_handle->func_attribute = handle->hwif->attr;
+ cqm_info(handle->dev_hdl, "Func init: function[%u] type %d(0:PF,1:VF,2:PPF)\n",
+ cqm_handle->func_attribute.func_global_idx,
+ cqm_handle->func_attribute.func_type);
+
+ /* Read capability from configuration management module */
+ ret = cqm_capability_init(ex_handle);
+ if (ret == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_capability_init));
+ goto err1;
+ }
+
+	/* For a conflicting fake child function, only the function's
+	 * timer bitmap is enabled and no resources are initialized;
+	 * otherwise the fake function's configuration would be overwritten.
+	 */
+ if (cqm_handle->func_capability.fake_func_type == CQM_FAKE_FUNC_CHILD_CONFLICT) {
+ if (sphw_func_tmr_bitmap_set(ex_handle, true) != CQM_SUCCESS)
+ cqm_err(handle->dev_hdl, "Timer start: enable timer bitmap failed\n");
+
+ handle->cqm_hdl = NULL;
+ kfree(cqm_handle);
+ return CQM_SUCCESS;
+ }
+
+ /* Initialize memory entries such as BAT, CLA, and bitmap. */
+ if (cqm_mem_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_mem_init));
+ goto err1;
+ }
+
+ /* Event callback initialization */
+ if (cqm_event_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_event_init));
+ goto err2;
+ }
+
+	/* Doorbell initialization */
+ if (cqm_db_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_db_init));
+ goto err3;
+ }
+
+ /* Initialize the bloom filter. */
+ if (cqm_bloomfilter_init(ex_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bloomfilter_init));
+ goto err4;
+ }
+
+ /* The timer bitmap is set directly at the beginning of the CQM.
+ * The ifconfig up/down command is not used to set or clear the bitmap.
+ */
+ if (sphw_func_tmr_bitmap_set(ex_handle, true) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Timer start: enable timer bitmap failed\n");
+ goto err5;
+ }
+
+ return CQM_SUCCESS;
+
+err5:
+ cqm_bloomfilter_uninit(ex_handle);
+err4:
+ cqm_db_uninit(ex_handle);
+err3:
+ cqm_event_uninit(ex_handle);
+err2:
+ cqm_mem_uninit(ex_handle);
+err1:
+ handle->cqm_hdl = NULL;
+ kfree(cqm_handle);
+ return CQM_FAIL;
+}
+
+void cqm3_uninit(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = NULL;
+ s32 ret;
+
+ CQM_PTR_CHECK_NO_RET(ex_handle, CQM_PTR_NULL(ex_handle));
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_NO_RET(cqm_handle, CQM_PTR_NULL(cqm_handle));
+
+ /* The timer bitmap is set directly at the beginning of the CQM.
+ * The ifconfig up/down command is not used to set or clear the bitmap.
+ */
+ cqm_info(handle->dev_hdl, "Timer stop: disable timer\n");
+ if (sphw_func_tmr_bitmap_set(ex_handle, false) != CQM_SUCCESS)
+ cqm_err(handle->dev_hdl, "Timer stop: disable timer bitmap failed\n");
+
+ /* After the TMR timer stops, the system releases resources
+ * after a delay of one or two milliseconds.
+ */
+ if (cqm_handle->func_attribute.func_type == CQM_PPF &&
+ cqm_handle->func_capability.timer_enable == CQM_TIMER_ENABLE) {
+ cqm_info(handle->dev_hdl, "Timer stop: spfc ppf timer stop\n");
+ ret = sphw_ppf_tmr_stop(handle);
+ if (ret != CQM_SUCCESS)
+			/* Failure to stop the timer does not affect
+			 * the resource release.
+			 */
+ cqm_info(handle->dev_hdl, "Timer stop: spfc ppf timer stop, ret=%d\n",
+ ret);
+		/* A delay of about 1 ms is required; the exact value is not critical. */
+ usleep_range(900, 1000);
+ }
+
+ /* Release Bloom Filter Table */
+ cqm_bloomfilter_uninit(ex_handle);
+
+ /* Release hardware doorbell */
+ cqm_db_uninit(ex_handle);
+
+ /* Cancel the callback of the event */
+ cqm_event_uninit(ex_handle);
+
+ /* Release various memory tables and require the service
+ * to release all objects.
+ */
+ cqm_mem_uninit(ex_handle);
+
+ /* Release cqm_handle */
+ handle->cqm_hdl = NULL;
+ kfree(cqm_handle);
+}
+
+void cqm_test_mode_init(struct cqm_handle *cqm_handle,
+ struct service_cap *service_capability)
+{
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+
+ if (service_capability->test_mode == 0)
+ return;
+
+ cqm_info(handle->dev_hdl, "Enter CQM test mode\n");
+
+ func_cap->qpc_number = service_capability->test_qpc_num;
+ func_cap->qpc_reserved =
+ GET_MAX(func_cap->qpc_reserved,
+ service_capability->test_qpc_resvd_num);
+ func_cap->xid_alloc_mode = service_capability->test_xid_alloc_mode;
+ func_cap->gpa_check_enable = service_capability->test_gpa_check_enable;
+ func_cap->pagesize_reorder = service_capability->test_page_size_reorder;
+ func_cap->qpc_alloc_static =
+ (bool)(service_capability->test_qpc_alloc_mode);
+ func_cap->scqc_alloc_static =
+ (bool)(service_capability->test_scqc_alloc_mode);
+ func_cap->flow_table_based_conn_number =
+ service_capability->test_max_conn_num;
+ func_cap->flow_table_based_conn_cache_number =
+ service_capability->test_max_cache_conn_num;
+ func_cap->scqc_number = service_capability->test_scqc_num;
+ func_cap->mpt_number = service_capability->test_mpt_num;
+ func_cap->mpt_reserved = service_capability->test_mpt_recvd_num;
+ func_cap->reorder_number = service_capability->test_reorder_num;
+ /* 256K buckets, 256K*64B = 16MB */
+ func_cap->hash_number = service_capability->test_hash_num;
+}
+
+void cqm_service_capability_update(struct cqm_handle *cqm_handle)
+{
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+
+ func_cap->qpc_number = GET_MIN(CQM_MAX_QPC_NUM, func_cap->qpc_number);
+ func_cap->scqc_number = GET_MIN(CQM_MAX_SCQC_NUM, func_cap->scqc_number);
+ func_cap->srqc_number = GET_MIN(CQM_MAX_SRQC_NUM, func_cap->srqc_number);
+ func_cap->childc_number = GET_MIN(CQM_MAX_CHILDC_NUM, func_cap->childc_number);
+}
+
+void cqm_service_valid_init(struct cqm_handle *cqm_handle,
+ struct service_cap *service_capability)
+{
+ enum cfg_svc_type_en type = service_capability->chip_svc_type;
+ struct cqm_service *svc = cqm_handle->service;
+
+ svc[CQM_SERVICE_T_FC].valid = ((u32)type & CFG_SVC_FC_BIT5) ? true : false;
+}
+
+void cqm_service_capability_init_fc(struct cqm_handle *cqm_handle, void *pra)
+{
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct service_cap *service_capability = (struct service_cap *)pra;
+ struct fc_service_cap *fc_cap = &service_capability->fc_cap;
+ struct dev_fc_svc_cap *dev_fc_cap = &fc_cap->dev_fc_cap;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+
+ cqm_info(handle->dev_hdl, "Cap init: fc is valid\n");
+ cqm_info(handle->dev_hdl, "Cap init: fc qpc 0x%x, scqc 0x%x, srqc 0x%x\n",
+ dev_fc_cap->max_parent_qpc_num, dev_fc_cap->scq_num,
+ dev_fc_cap->srq_num);
+ func_cap->hash_number += dev_fc_cap->max_parent_qpc_num;
+ func_cap->hash_basic_size = CQM_HASH_BUCKET_SIZE_64;
+ func_cap->qpc_number += dev_fc_cap->max_parent_qpc_num;
+ func_cap->qpc_basic_size = GET_MAX(fc_cap->parent_qpc_size,
+ func_cap->qpc_basic_size);
+ func_cap->qpc_alloc_static = true;
+ func_cap->scqc_number += dev_fc_cap->scq_num;
+ func_cap->scqc_basic_size = GET_MAX(fc_cap->scqc_size,
+ func_cap->scqc_basic_size);
+ func_cap->srqc_number += dev_fc_cap->srq_num;
+ func_cap->srqc_basic_size = GET_MAX(fc_cap->srqc_size,
+ func_cap->srqc_basic_size);
+ func_cap->lun_number = CQM_LUN_FC_NUM;
+ func_cap->lun_basic_size = CQM_LUN_SIZE_8;
+ func_cap->taskmap_number = CQM_TASKMAP_FC_NUM;
+ func_cap->taskmap_basic_size = PAGE_SIZE;
+ func_cap->childc_number += dev_fc_cap->max_child_qpc_num;
+ func_cap->childc_basic_size = GET_MAX(fc_cap->child_qpc_size,
+ func_cap->childc_basic_size);
+ func_cap->pagesize_reorder = CQM_FC_PAGESIZE_ORDER;
+}
+
+void cqm_service_capability_init(struct cqm_handle *cqm_handle,
+ struct service_cap *service_capability)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ u32 i;
+
+ for (i = 0; i < CQM_SERVICE_T_MAX; i++) {
+ cqm_handle->service[i].valid = false;
+ cqm_handle->service[i].has_register = false;
+ cqm_handle->service[i].buf_order = 0;
+ }
+
+ cqm_service_valid_init(cqm_handle, service_capability);
+
+ cqm_info(handle->dev_hdl, "Cap init: service type %d\n",
+ service_capability->chip_svc_type);
+
+ if (cqm_handle->service[CQM_SERVICE_T_FC].valid)
+ cqm_service_capability_init_fc(cqm_handle, (void *)service_capability);
+}
+
+s32 cqm_get_fake_func_type(struct cqm_handle *cqm_handle)
+{
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ u32 parent_func, child_func_start, child_func_number, i;
+ u32 idx = cqm_handle->func_attribute.func_global_idx;
+
+ /* Currently, only one set of fake configurations is implemented.
+ * fake_cfg_number = 1
+ */
+ for (i = 0; i < func_cap->fake_cfg_number; i++) {
+ parent_func = func_cap->fake_cfg[i].parent_func;
+ child_func_start = func_cap->fake_cfg[i].child_func_start;
+ child_func_number = func_cap->fake_cfg[i].child_func_number;
+
+ if (idx == parent_func) {
+ return CQM_FAKE_FUNC_PARENT;
+ } else if ((idx >= child_func_start) &&
+ (idx < (child_func_start + child_func_number))) {
+ return CQM_FAKE_FUNC_CHILD_CONFLICT;
+ }
+ }
+
+ return CQM_FAKE_FUNC_NORMAL;
+}
+
+s32 cqm_get_child_func_start(struct cqm_handle *cqm_handle)
+{
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct sphw_func_attr *func_attr = &cqm_handle->func_attribute;
+ u32 i;
+
+ /* Currently, only one set of fake configurations is implemented.
+ * fake_cfg_number = 1
+ */
+ for (i = 0; i < func_cap->fake_cfg_number; i++) {
+ if (func_attr->func_global_idx ==
+ func_cap->fake_cfg[i].parent_func)
+ return (s32)(func_cap->fake_cfg[i].child_func_start);
+ }
+
+ return CQM_FAIL;
+}
+
+s32 cqm_get_child_func_number(struct cqm_handle *cqm_handle)
+{
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct sphw_func_attr *func_attr = &cqm_handle->func_attribute;
+ u32 i;
+
+ for (i = 0; i < func_cap->fake_cfg_number; i++) {
+ if (func_attr->func_global_idx ==
+ func_cap->fake_cfg[i].parent_func)
+ return (s32)(func_cap->fake_cfg[i].child_func_number);
+ }
+
+ return CQM_FAIL;
+}
+
+/* Set func_type in fake_cqm_handle to ppf, pf, or vf. */
+void cqm_set_func_type(struct cqm_handle *cqm_handle)
+{
+ u32 idx = cqm_handle->func_attribute.func_global_idx;
+
+ if (idx == 0)
+ cqm_handle->func_attribute.func_type = CQM_PPF;
+ else if (idx < CQM_MAX_PF_NUM)
+ cqm_handle->func_attribute.func_type = CQM_PF;
+ else
+ cqm_handle->func_attribute.func_type = CQM_VF;
+}
+
+void cqm_lb_fake_mode_init(struct cqm_handle *cqm_handle)
+{
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct cqm_fake_cfg *cfg = func_cap->fake_cfg;
+
+ func_cap->lb_mode = cqm_lb_mode;
+ func_cap->fake_mode = cqm_fake_mode;
+
+ /* Initializing the LB Mode */
+ if (func_cap->lb_mode == CQM_LB_MODE_NORMAL) {
+ func_cap->smf_pg = 0;
+ } else {
+ /* The LB mode is tailored on the FPGA.
+ * Only SMF0 and SMF2 are instantiated.
+ */
+ if (cqm_platform_mode == CQM_FPGA_MODE)
+ func_cap->smf_pg = 0x5;
+ else
+ func_cap->smf_pg = 0xF;
+ }
+
+ /* Initializing the FAKE Mode */
+ if (func_cap->fake_mode == CQM_FAKE_MODE_DISABLE) {
+ func_cap->fake_cfg_number = 0;
+ func_cap->fake_func_type = CQM_FAKE_FUNC_NORMAL;
+ } else {
+ func_cap->fake_cfg_number = 1;
+
+		/* When configuring fake mode, ensure that the parent function
+		 * is not inside the child function range; otherwise it would
+		 * be initialized repeatedly.
+		 */
+ cfg[0].child_func_start = CQM_FAKE_CFUNC_START;
+ func_cap->fake_func_type = cqm_get_fake_func_type(cqm_handle);
+ }
+}
+
+s32 cqm_capability_init(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ struct service_cap *service_capability = &handle->cfg_mgmt->svc_cap;
+ struct sphw_func_attr *func_attr = &cqm_handle->func_attribute;
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ u32 total_function_num = 0;
+ int err = 0;
+
+	/* Initialize the PPF capabilities, including timer, PF, and VF info. */
+ if (func_attr->func_type == CQM_PPF) {
+ total_function_num = service_capability->host_total_function;
+ func_cap->timer_enable = service_capability->timer_en;
+ func_cap->pf_num = service_capability->pf_num;
+ func_cap->pf_id_start = service_capability->pf_id_start;
+ func_cap->vf_num = service_capability->vf_num;
+ func_cap->vf_id_start = service_capability->vf_id_start;
+
+ cqm_info(handle->dev_hdl, "Cap init: total function num 0x%x\n",
+ total_function_num);
+ cqm_info(handle->dev_hdl, "Cap init: pf_num 0x%x, pf_id_start 0x%x, vf_num 0x%x, vf_id_start 0x%x\n",
+ func_cap->pf_num, func_cap->pf_id_start,
+ func_cap->vf_num, func_cap->vf_id_start);
+ cqm_info(handle->dev_hdl, "Cap init: timer_enable %u (1: enable; 0: disable)\n",
+ func_cap->timer_enable);
+ }
+
+ func_cap->flow_table_based_conn_number = service_capability->max_connect_num;
+ func_cap->flow_table_based_conn_cache_number = service_capability->max_stick2cache_num;
+ cqm_info(handle->dev_hdl, "Cap init: cfg max_conn_num 0x%x, max_cache_conn_num 0x%x\n",
+ func_cap->flow_table_based_conn_number,
+ func_cap->flow_table_based_conn_cache_number);
+
+ func_cap->bloomfilter_enable = service_capability->bloomfilter_en;
+ cqm_info(handle->dev_hdl, "Cap init: bloomfilter_enable %u (1: enable; 0: disable)\n",
+ func_cap->bloomfilter_enable);
+
+ if (func_cap->bloomfilter_enable) {
+ func_cap->bloomfilter_length = service_capability->bfilter_len;
+ func_cap->bloomfilter_addr =
+ service_capability->bfilter_start_addr;
+ if (func_cap->bloomfilter_length != 0 &&
+ !cqm_check_align(func_cap->bloomfilter_length)) {
+ cqm_err(handle->dev_hdl, "Cap init: bloomfilter_length %u is not the power of 2\n",
+ func_cap->bloomfilter_length);
+
+ err = CQM_FAIL;
+ goto out;
+ }
+ }
+
+ cqm_info(handle->dev_hdl, "Cap init: bloomfilter_length 0x%x, bloomfilter_addr 0x%x\n",
+ func_cap->bloomfilter_length, func_cap->bloomfilter_addr);
+
+ func_cap->qpc_reserved = 0;
+ func_cap->mpt_reserved = 0;
+ func_cap->scq_reserved = 0;
+ func_cap->srq_reserved = 0;
+ func_cap->qpc_alloc_static = false;
+ func_cap->scqc_alloc_static = false;
+
+ func_cap->l3i_number = CQM_L3I_COMM_NUM;
+ func_cap->l3i_basic_size = CQM_L3I_SIZE_8;
+
+ func_cap->timer_number = CQM_TIMER_ALIGN_SCALE_NUM * total_function_num;
+ func_cap->timer_basic_size = CQM_TIMER_SIZE_32;
+
+ func_cap->gpa_check_enable = true;
+
+ cqm_lb_fake_mode_init(cqm_handle);
+ cqm_info(handle->dev_hdl, "Cap init: lb_mode=%u\n", func_cap->lb_mode);
+ cqm_info(handle->dev_hdl, "Cap init: smf_pg=%u\n", func_cap->smf_pg);
+ cqm_info(handle->dev_hdl, "Cap init: fake_mode=%u\n", func_cap->fake_mode);
+ cqm_info(handle->dev_hdl, "Cap init: fake_func_type=%u\n", func_cap->fake_func_type);
+ cqm_info(handle->dev_hdl, "Cap init: fake_cfg_number=%u\n", func_cap->fake_cfg_number);
+
+ cqm_service_capability_init(cqm_handle, service_capability);
+
+ cqm_test_mode_init(cqm_handle, service_capability);
+
+ cqm_service_capability_update(cqm_handle);
+
+ func_cap->ft_enable = service_capability->sf_svc_attr.ft_en;
+ func_cap->rdma_enable = service_capability->sf_svc_attr.rdma_en;
+
+ cqm_info(handle->dev_hdl, "Cap init: pagesize_reorder %u\n", func_cap->pagesize_reorder);
+ cqm_info(handle->dev_hdl, "Cap init: xid_alloc_mode %d, gpa_check_enable %d\n",
+ func_cap->xid_alloc_mode, func_cap->gpa_check_enable);
+ cqm_info(handle->dev_hdl, "Cap init: qpc_alloc_mode %d, scqc_alloc_mode %d\n",
+ func_cap->qpc_alloc_static, func_cap->scqc_alloc_static);
+ cqm_info(handle->dev_hdl, "Cap init: hash_number 0x%x\n", func_cap->hash_number);
+ cqm_info(handle->dev_hdl, "Cap init: qpc_number 0x%x, qpc_reserved 0x%x, qpc_basic_size 0x%x\n",
+ func_cap->qpc_number, func_cap->qpc_reserved, func_cap->qpc_basic_size);
+ cqm_info(handle->dev_hdl, "Cap init: scqc_number 0x%x scqc_reserved 0x%x, scqc_basic_size 0x%x\n",
+ func_cap->scqc_number, func_cap->scq_reserved, func_cap->scqc_basic_size);
+ cqm_info(handle->dev_hdl, "Cap init: srqc_number 0x%x, srqc_basic_size 0x%x\n",
+ func_cap->srqc_number, func_cap->srqc_basic_size);
+ cqm_info(handle->dev_hdl, "Cap init: mpt_number 0x%x, mpt_reserved 0x%x\n",
+ func_cap->mpt_number, func_cap->mpt_reserved);
+ cqm_info(handle->dev_hdl, "Cap init: gid_number 0x%x, lun_number 0x%x\n",
+ func_cap->gid_number, func_cap->lun_number);
+ cqm_info(handle->dev_hdl, "Cap init: taskmap_number 0x%x, l3i_number 0x%x\n",
+ func_cap->taskmap_number, func_cap->l3i_number);
+ cqm_info(handle->dev_hdl, "Cap init: timer_number 0x%x, childc_number 0x%x\n",
+ func_cap->timer_number, func_cap->childc_number);
+ cqm_info(handle->dev_hdl, "Cap init: childc_basic_size 0x%x\n",
+ func_cap->childc_basic_size);
+ cqm_info(handle->dev_hdl, "Cap init: xid2cid_number 0x%x, reorder_number 0x%x\n",
+ func_cap->xid2cid_number, func_cap->reorder_number);
+ cqm_info(handle->dev_hdl, "Cap init: ft_enable %d, rdma_enable %d\n",
+ func_cap->ft_enable, func_cap->rdma_enable);
+
+ return CQM_SUCCESS;
+
+out:
+ if (func_attr->func_type == CQM_PPF)
+ func_cap->timer_enable = 0;
+
+ return err;
+}
+
+void cqm_fake_uninit(struct cqm_handle *cqm_handle)
+{
+ u32 i;
+
+ if (cqm_handle->func_capability.fake_func_type !=
+ CQM_FAKE_FUNC_PARENT)
+ return;
+
+ for (i = 0; i < CQM_FAKE_FUNC_MAX; i++) {
+ kfree(cqm_handle->fake_cqm_handle[i]);
+ cqm_handle->fake_cqm_handle[i] = NULL;
+ }
+}
+
+s32 cqm_fake_init(struct cqm_handle *cqm_handle)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_func_capability *func_cap = NULL;
+ struct cqm_handle *fake_cqm_handle = NULL;
+ struct sphw_func_attr *func_attr = NULL;
+ s32 child_func_start, child_func_number;
+ u32 i;
+
+ func_cap = &cqm_handle->func_capability;
+ if (func_cap->fake_func_type != CQM_FAKE_FUNC_PARENT)
+ return CQM_SUCCESS;
+
+ child_func_start = cqm_get_child_func_start(cqm_handle);
+ if (child_func_start == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_start));
+ return CQM_FAIL;
+ }
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++) {
+ fake_cqm_handle = kmalloc(sizeof(*fake_cqm_handle), GFP_KERNEL | __GFP_ZERO);
+ if (!fake_cqm_handle) {
+ cqm_err(handle->dev_hdl,
+ CQM_ALLOC_FAIL(fake_cqm_handle));
+ goto err;
+ }
+
+		/* Copy the attributes of the parent CQM handle to the child CQM
+		 * handle and adjust the per-function values.
+		 */
+ memcpy(fake_cqm_handle, cqm_handle, sizeof(struct cqm_handle));
+ func_attr = &fake_cqm_handle->func_attribute;
+ func_cap = &fake_cqm_handle->func_capability;
+ func_attr->func_global_idx = (u16)(child_func_start + i);
+ cqm_set_func_type(fake_cqm_handle);
+ func_cap->fake_func_type = CQM_FAKE_FUNC_CHILD;
+ cqm_info(handle->dev_hdl, "Fake func init: function[%u] type %d(0:PF,1:VF,2:PPF)\n",
+ func_attr->func_global_idx, func_attr->func_type);
+
+ fake_cqm_handle->parent_cqm_handle = cqm_handle;
+ cqm_handle->fake_cqm_handle[i] = fake_cqm_handle;
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_fake_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+void cqm_fake_mem_uninit(struct cqm_handle *cqm_handle)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_handle *fake_cqm_handle = NULL;
+ s32 child_func_number;
+ u32 i;
+
+ if (cqm_handle->func_capability.fake_func_type != CQM_FAKE_FUNC_PARENT)
+ return;
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++) {
+ fake_cqm_handle = cqm_handle->fake_cqm_handle[i];
+ cqm_object_table_uninit(fake_cqm_handle);
+ cqm_bitmap_uninit(fake_cqm_handle);
+ cqm_cla_uninit(fake_cqm_handle, CQM_BAT_ENTRY_MAX);
+ cqm_bat_uninit(fake_cqm_handle);
+ }
+}
+
+s32 cqm_fake_mem_init(struct cqm_handle *cqm_handle)
+{
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_handle *fake_cqm_handle = NULL;
+ s32 child_func_number;
+ u32 i;
+
+ if (cqm_handle->func_capability.fake_func_type !=
+ CQM_FAKE_FUNC_PARENT)
+ return CQM_SUCCESS;
+
+ child_func_number = cqm_get_child_func_number(cqm_handle);
+ if (child_func_number == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(child_func_number));
+ return CQM_FAIL;
+ }
+
+ for (i = 0; i < (u32)child_func_number; i++) {
+ fake_cqm_handle = cqm_handle->fake_cqm_handle[i];
+
+ if (cqm_bat_init(fake_cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bat_init));
+ goto err;
+ }
+
+ if (cqm_cla_init(fake_cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_cla_init));
+ goto err;
+ }
+
+ if (cqm_bitmap_init(fake_cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bitmap_init));
+ goto err;
+ }
+
+ if (cqm_object_table_init(fake_cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_init));
+ goto err;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+err:
+ cqm_fake_mem_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+s32 cqm_mem_init(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = NULL;
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+
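+	/* Init order: fake child handles (if this is a fake parent) and their
+	 * memories first, then the BAT/CLA/bitmap/object tables of this
+	 * function.
+	 */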
+ if (cqm_fake_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_fake_init));
+ return CQM_FAIL;
+ }
+
+ if (cqm_fake_mem_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_fake_mem_init));
+ goto err1;
+ }
+
+ if (cqm_bat_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bat_init));
+ goto err2;
+ }
+
+ if (cqm_cla_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_init));
+ goto err3;
+ }
+
+ if (cqm_bitmap_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bitmap_init));
+ goto err4;
+ }
+
+ if (cqm_object_table_init(cqm_handle) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_init));
+ goto err5;
+ }
+
+ return CQM_SUCCESS;
+
+err5:
+ cqm_bitmap_uninit(cqm_handle);
+err4:
+ cqm_cla_uninit(cqm_handle, CQM_BAT_ENTRY_MAX);
+err3:
+ cqm_bat_uninit(cqm_handle);
+err2:
+ cqm_fake_mem_uninit(cqm_handle);
+err1:
+ cqm_fake_uninit(cqm_handle);
+ return CQM_FAIL;
+}
+
+void cqm_mem_uninit(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = NULL;
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+
+ cqm_object_table_uninit(cqm_handle);
+ cqm_bitmap_uninit(cqm_handle);
+ cqm_cla_uninit(cqm_handle, CQM_BAT_ENTRY_MAX);
+ cqm_bat_uninit(cqm_handle);
+ cqm_fake_mem_uninit(cqm_handle);
+ cqm_fake_uninit(cqm_handle);
+}
+
+s32 cqm_event_init(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+
+ if (sphw_aeq_register_swe_cb(ex_handle, SPHW_STATEFULL_EVENT,
+ cqm_aeq_callback) != CHIPIF_SUCCESS) {
+ cqm_err(handle->dev_hdl, "Event: fail to register aeq callback\n");
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+void cqm_event_uninit(void *ex_handle)
+{
+ sphw_aeq_unregister_swe_cb(ex_handle, SPHW_STATEFULL_EVENT);
+}
+
+u32 cqm_aeq_event2type(u8 event)
+{
+ u32 service_type;
+
+ /* Distributes events to different service modules
+ * based on the event type.
+ */
+ if (event >= CQM_AEQ_BASE_T_FC && event < CQM_AEQ_MAX_T_FC)
+ service_type = CQM_SERVICE_T_FC;
+ else
+ service_type = CQM_SERVICE_T_MAX;
+
+ return service_type;
+}
+
+u8 cqm_aeq_callback(void *ex_handle, u8 event, u8 *data)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct service_register_template *service_template = NULL;
+ struct cqm_handle *cqm_handle = NULL;
+ struct cqm_service *service = NULL;
+ u8 event_level = FAULT_LEVEL_MAX;
+ u32 service_type;
+
+ CQM_PTR_CHECK_RET(ex_handle, event_level,
+ CQM_PTR_NULL(aeq_callback_ex_handle));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_aeq_callback_cnt[event]);
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_RET(cqm_handle, event_level,
+ CQM_PTR_NULL(aeq_callback_cqm_handle));
+
+ /* Distributes events to different service modules
+ * based on the event type.
+ */
+ service_type = cqm_aeq_event2type(event);
+ if (service_type == CQM_SERVICE_T_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(event));
+ return event_level;
+ }
+
+ service = &cqm_handle->service[service_type];
+ service_template = &service->service_template;
+
+ if (!service_template->aeq_level_callback)
+ cqm_err(handle->dev_hdl, "Event: service_type %u aeq_level_callback unregistered\n",
+ service_type);
+ else
+ event_level = service_template->aeq_level_callback(service_template->service_handle,
+ event, data);
+
+ if (!service_template->aeq_callback)
+ cqm_err(handle->dev_hdl, "Event: service_type %u aeq_callback unregistered\n",
+ service_type);
+ else
+ service_template->aeq_callback(service_template->service_handle,
+ event, data);
+
+ return event_level;
+}
+
+s32 cqm3_service_register(void *ex_handle, struct service_register_template *service_template)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = NULL;
+ struct cqm_service *service = NULL;
+
+ CQM_PTR_CHECK_RET(ex_handle, CQM_FAIL, CQM_PTR_NULL(ex_handle));
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_RET(cqm_handle, CQM_FAIL, CQM_PTR_NULL(cqm_handle));
+ CQM_PTR_CHECK_RET(service_template, CQM_FAIL,
+ CQM_PTR_NULL(service_template));
+
+ if (service_template->service_type >= CQM_SERVICE_T_MAX) {
+ cqm_err(handle->dev_hdl,
+ CQM_WRONG_VALUE(service_template->service_type));
+ return CQM_FAIL;
+ }
+ service = &cqm_handle->service[service_template->service_type];
+ if (!service->valid) {
+ cqm_err(handle->dev_hdl, "Service register: service_type %u is invalid\n",
+ service_template->service_type);
+ return CQM_FAIL;
+ }
+
+ if (service->has_register) {
+ cqm_err(handle->dev_hdl, "Service register: service_type %u has registered\n",
+ service_template->service_type);
+ return CQM_FAIL;
+ }
+
+ service->has_register = true;
+ (void)memcpy((void *)(&service->service_template),
+ (void *)service_template,
+ sizeof(struct service_register_template));
+
+ return CQM_SUCCESS;
+}
+
+void cqm3_service_unregister(void *ex_handle, u32 service_type)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = NULL;
+ struct cqm_service *service = NULL;
+
+ CQM_PTR_CHECK_NO_RET(ex_handle, CQM_PTR_NULL(ex_handle));
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_NO_RET(cqm_handle, CQM_PTR_NULL(cqm_handle));
+
+ if (service_type >= CQM_SERVICE_T_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return;
+ }
+
+ service = &cqm_handle->service[service_type];
+ if (!service->valid)
+		cqm_err(handle->dev_hdl, "Service unregister: service_type %u is disabled\n",
+ service_type);
+
+ service->has_register = false;
+ memset(&service->service_template, 0, sizeof(struct service_register_template));
+}
+
+struct cqm_cmd_buf *cqm3_cmd_alloc(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+
+ CQM_PTR_CHECK_RET(ex_handle, NULL, CQM_PTR_NULL(ex_handle));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_cmd_alloc_cnt);
+
+ return (struct cqm_cmd_buf *)sphw_alloc_cmd_buf(ex_handle);
+}
+
+void cqm3_cmd_free(void *ex_handle, struct cqm_cmd_buf *cmd_buf)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+
+ CQM_PTR_CHECK_NO_RET(ex_handle, CQM_PTR_NULL(ex_handle));
+ CQM_PTR_CHECK_NO_RET(cmd_buf, CQM_PTR_NULL(cmd_buf));
+ CQM_PTR_CHECK_NO_RET(cmd_buf->buf, CQM_PTR_NULL(buf));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_cmd_free_cnt);
+
+ sphw_free_cmd_buf(ex_handle, (struct sphw_cmd_buf *)cmd_buf);
+}
+
+s32 cqm3_send_cmd_box(void *ex_handle, u8 mod, u8 cmd, struct cqm_cmd_buf *buf_in,
+ struct cqm_cmd_buf *buf_out, u64 *out_param, u32 timeout,
+ u16 channel)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+
+ CQM_PTR_CHECK_RET(ex_handle, CQM_FAIL, CQM_PTR_NULL(ex_handle));
+ CQM_PTR_CHECK_RET(buf_in, CQM_FAIL, CQM_PTR_NULL(buf_in));
+ CQM_PTR_CHECK_RET(buf_in->buf, CQM_FAIL, CQM_PTR_NULL(buf));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_send_cmd_box_cnt);
+
+ return sphw_cmdq_detail_resp(ex_handle, mod, cmd,
+ (struct sphw_cmd_buf *)buf_in,
+ (struct sphw_cmd_buf *)buf_out,
+ out_param, timeout, channel);
+}
+
+int cqm_alloc_fc_db_addr(void *hwdev, void __iomem **db_base,
+ void __iomem **dwqe_base)
+{
+ struct sphw_hwif *hwif = NULL;
+ u32 idx = 0;
+#define SPFC_DB_ADDR_RSVD 12
+#define SPFC_DB_MASK 128
+ u64 db_base_phy_fc;
+
+ if (!hwdev || !db_base)
+ return -EINVAL;
+
+ hwif = ((struct sphw_hwdev *)hwdev)->hwif;
+
+ db_base_phy_fc = hwif->db_base_phy >> SPFC_DB_ADDR_RSVD;
+
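+	/* If the doorbell physical page number is not a multiple of
+	 * SPFC_DB_MASK, advance to the next SPFC_DB_MASK-page boundary.
+	 */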
+ if (db_base_phy_fc & (SPFC_DB_MASK - 1))
+		idx = SPFC_DB_MASK - (db_base_phy_fc & (SPFC_DB_MASK - 1));
+
+ *db_base = hwif->db_base + idx * SPHW_DB_PAGE_SIZE;
+
+ if (!dwqe_base)
+ return 0;
+
+ *dwqe_base = (u8 *)*db_base + SPHW_DWQE_OFFSET;
+
+ return 0;
+}
+
+s32 cqm3_db_addr_alloc(void *ex_handle, void __iomem **db_addr,
+ void __iomem **dwqe_addr)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+
+ CQM_PTR_CHECK_RET(ex_handle, CQM_FAIL, CQM_PTR_NULL(ex_handle));
+ CQM_PTR_CHECK_RET(db_addr, CQM_FAIL, CQM_PTR_NULL(db_addr));
+ CQM_PTR_CHECK_RET(dwqe_addr, CQM_FAIL, CQM_PTR_NULL(dwqe_addr));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_db_addr_alloc_cnt);
+
+ return cqm_alloc_fc_db_addr(ex_handle, db_addr, dwqe_addr);
+}
+
+s32 cqm_db_phy_addr_alloc(void *ex_handle, u64 *db_paddr, u64 *dwqe_addr)
+{
+ return sphw_alloc_db_phy_addr(ex_handle, db_paddr, dwqe_addr);
+}
+
+void cqm3_db_addr_free(void *ex_handle, const void __iomem *db_addr,
+ void __iomem *dwqe_addr)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+
+ CQM_PTR_CHECK_NO_RET(ex_handle, CQM_PTR_NULL(ex_handle));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_db_addr_free_cnt);
+
+ sphw_free_db_addr(ex_handle, db_addr, dwqe_addr);
+}
+
+void cqm_db_phy_addr_free(void *ex_handle, u64 *db_paddr, u64 *dwqe_addr)
+{
+ sphw_free_db_phy_addr(ex_handle, *db_paddr, *dwqe_addr);
+}
+
+s32 cqm_db_init(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = NULL;
+ struct cqm_service *service = NULL;
+ s32 i;
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+
+ /* Allocate hardware doorbells to services. */
+ for (i = 0; i < CQM_SERVICE_T_MAX; i++) {
+ service = &cqm_handle->service[i];
+ if (!service->valid)
+ continue;
+
+ if (cqm3_db_addr_alloc(ex_handle, &service->hardware_db_vaddr,
+ &service->dwqe_vaddr) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm3_db_addr_alloc));
+ break;
+ }
+
+ if (cqm_db_phy_addr_alloc(handle, &service->hardware_db_paddr,
+ &service->dwqe_paddr) != CQM_SUCCESS) {
+ cqm3_db_addr_free(ex_handle, service->hardware_db_vaddr,
+ service->dwqe_vaddr);
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_db_phy_addr_alloc));
+ break;
+ }
+ }
+
+ if (i != CQM_SERVICE_T_MAX) {
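+		/* Partial failure: roll back the doorbells already allocated
+		 * for lower-indexed services before returning.
+		 */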
+ i--;
+ for (; i >= 0; i--) {
+ service = &cqm_handle->service[i];
+ if (!service->valid)
+ continue;
+
+ cqm3_db_addr_free(ex_handle, service->hardware_db_vaddr,
+ service->dwqe_vaddr);
+ cqm_db_phy_addr_free(ex_handle,
+ &service->hardware_db_paddr,
+ &service->dwqe_paddr);
+ }
+ return CQM_FAIL;
+ }
+
+ return CQM_SUCCESS;
+}
+
+void cqm_db_uninit(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = NULL;
+ struct cqm_service *service = NULL;
+ s32 i;
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+
+ /* Release hardware doorbell. */
+ for (i = 0; i < CQM_SERVICE_T_MAX; i++) {
+ service = &cqm_handle->service[i];
+ if (service->valid)
+ cqm3_db_addr_free(ex_handle, service->hardware_db_vaddr,
+ service->dwqe_vaddr);
+ }
+}
+
+s32 cqm3_ring_hardware_db_fc(void *ex_handle, u32 service_type, u8 db_count,
+ u8 pagenum, u64 db)
+{
+#define SPFC_DB_FAKE_VF_OFFSET 32
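+	/* Assumption: the FC doorbell pages start after the pages reserved
+	 * for fake child functions (the offset matches CQM_FAKE_CFUNC_START).
+	 */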
+ struct cqm_handle *cqm_handle = NULL;
+ struct cqm_service *service = NULL;
+ struct sphw_hwdev *handle = NULL;
+ void *dbaddr = NULL;
+
+ handle = (struct sphw_hwdev *)ex_handle;
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ service = &cqm_handle->service[service_type];
+ /* Considering the performance of ringing hardware db,
+ * the parameter is not checked.
+ */
+ wmb();
+ dbaddr = (u8 *)service->hardware_db_vaddr +
+ ((pagenum + SPFC_DB_FAKE_VF_OFFSET) * SPHW_DB_PAGE_SIZE);
+ *((u64 *)dbaddr + db_count) = db;
+ return CQM_SUCCESS;
+}
+
+s32 cqm_ring_direct_wqe_db_fc(void *ex_handle, u32 service_type,
+ void *direct_wqe)
+{
+ struct cqm_handle *cqm_handle = NULL;
+ struct cqm_service *service = NULL;
+ struct sphw_hwdev *handle = NULL;
+ u64 *tmp = (u64 *)direct_wqe;
+ int i;
+
+ handle = (struct sphw_hwdev *)ex_handle;
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ service = &cqm_handle->service[service_type];
+
+ /* Considering the performance of ringing hardware db,
+ * the parameter is not checked.
+ */
+ wmb();
+ *((u64 *)service->dwqe_vaddr + 0) = tmp[2];
+ *((u64 *)service->dwqe_vaddr + 1) = tmp[3];
+ *((u64 *)service->dwqe_vaddr + 2) = tmp[0];
+ *((u64 *)service->dwqe_vaddr + 3) = tmp[1];
+ tmp += 4;
+
+	/* FC uses 256B WQEs. The direct wqe is written at block0,
+	 * and the length is 256B.
+	 */
+ for (i = 4; i < 32; i++)
+ *((u64 *)service->dwqe_vaddr + i) = *tmp++;
+
+ return CQM_SUCCESS;
+}
+
+static s32 bloomfilter_init_cmd(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ struct cqm_func_capability *capability = &cqm_handle->func_capability;
+ struct cqm_bloomfilter_init_cmd *cmd = NULL;
+ struct cqm_cmd_buf *buf_in = NULL;
+ s32 ret;
+
+ buf_in = cqm3_cmd_alloc((void *)(cqm_handle->ex_handle));
+ CQM_PTR_CHECK_RET(buf_in, CQM_FAIL, CQM_ALLOC_FAIL(buf_in));
+
+ /* Fill the command format and convert it to big-endian. */
+ buf_in->size = sizeof(struct cqm_bloomfilter_init_cmd);
+ cmd = (struct cqm_bloomfilter_init_cmd *)(buf_in->buf);
+ cmd->bloom_filter_addr = capability->bloomfilter_addr;
+ cmd->bloom_filter_len = capability->bloomfilter_length;
+
+ cqm_swab32((u8 *)cmd, (sizeof(struct cqm_bloomfilter_init_cmd) >> CQM_DW_SHIFT));
+
+ ret = cqm3_send_cmd_box((void *)(cqm_handle->ex_handle),
+ CQM_MOD_CQM, CQM_CMD_T_BLOOMFILTER_INIT, buf_in,
+ NULL, NULL, CQM_CMD_TIMEOUT,
+ SPHW_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm3_send_cmd_box));
+ cqm_err(handle->dev_hdl, "Bloomfilter: %s ret=%d\n", __func__,
+ ret);
+ cqm_err(handle->dev_hdl, "Bloomfilter: %s: 0x%x 0x%x\n",
+ __func__, cmd->bloom_filter_addr,
+ cmd->bloom_filter_len);
+ cqm3_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return CQM_FAIL;
+ }
+ cqm3_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
+ return CQM_SUCCESS;
+}
+
+s32 cqm_bloomfilter_init(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_bloomfilter_table *bloomfilter_table = NULL;
+ struct cqm_func_capability *capability = NULL;
+ struct cqm_handle *cqm_handle = NULL;
+ u32 array_size;
+ s32 ret;
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ bloomfilter_table = &cqm_handle->bloomfilter_table;
+ capability = &cqm_handle->func_capability;
+
+ if (capability->bloomfilter_length == 0) {
+ cqm_info(handle->dev_hdl,
+ "Bloomfilter: bf_length=0, don't need to init bloomfilter\n");
+ return CQM_SUCCESS;
+ }
+
+	/* The unit of bloomfilter_length is 64B(512bits). Each bit is a table
+	 * node. Therefore the value must be shifted left by 9 bits.
+	 */
+ bloomfilter_table->table_size = capability->bloomfilter_length <<
+ CQM_BF_LENGTH_UNIT;
+	/* The unit of bloomfilter_length is 64B. The unit of an array entry is 32B.
+	 */
+ array_size = capability->bloomfilter_length << 1;
+ if (array_size == 0 || array_size > CQM_BF_BITARRAY_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(array_size));
+ return CQM_FAIL;
+ }
+
+ bloomfilter_table->array_mask = array_size - 1;
+	/* This table is not a bitmap; each element is the counter of the corresponding bit.
+	 */
+ bloomfilter_table->table = vmalloc(bloomfilter_table->table_size * (sizeof(u32)));
+ CQM_PTR_CHECK_RET(bloomfilter_table->table, CQM_FAIL, CQM_ALLOC_FAIL(table));
+
+ memset(bloomfilter_table->table, 0,
+ (bloomfilter_table->table_size * sizeof(u32)));
+
+	/* The bloomfilter must be initialized to 0 by ucode,
+	 * because the bloomfilter is in mem mode.
+	 */
+ if (cqm_handle->func_capability.bloomfilter_enable) {
+ ret = bloomfilter_init_cmd(ex_handle);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ "Bloomfilter: bloomfilter_init_cmd ret=%d\n",
+ ret);
+ vfree(bloomfilter_table->table);
+ bloomfilter_table->table = NULL;
+ return CQM_FAIL;
+ }
+ }
+
+ mutex_init(&bloomfilter_table->lock);
+ return CQM_SUCCESS;
+}
+
+void cqm_bloomfilter_uninit(void *ex_handle)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_bloomfilter_table *bloomfilter_table = NULL;
+ struct cqm_handle *cqm_handle = NULL;
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ bloomfilter_table = &cqm_handle->bloomfilter_table;
+
+ if (bloomfilter_table->table) {
+ vfree(bloomfilter_table->table);
+ bloomfilter_table->table = NULL;
+ }
+}
+
+s32 cqm_bloomfilter_cmd(void *ex_handle, u32 op, u32 k_flag, u64 id)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_cmd_buf *buf_in = NULL;
+ struct cqm_bloomfilter_cmd *cmd = NULL;
+ s32 ret;
+
+ buf_in = cqm3_cmd_alloc(ex_handle);
+ CQM_PTR_CHECK_RET(buf_in, CQM_FAIL, CQM_ALLOC_FAIL(buf_in));
+
+ /* Fill the command format and convert it to big-endian. */
+ buf_in->size = sizeof(struct cqm_bloomfilter_cmd);
+ cmd = (struct cqm_bloomfilter_cmd *)(buf_in->buf);
+ memset((void *)cmd, 0, sizeof(struct cqm_bloomfilter_cmd));
+ cmd->k_en = k_flag;
+ cmd->index_h = (u32)(id >> CQM_DW_OFFSET);
+ cmd->index_l = (u32)(id & CQM_DW_MASK);
+
+ cqm_swab32((u8 *)cmd, (sizeof(struct cqm_bloomfilter_cmd) >> CQM_DW_SHIFT));
+
+ ret = cqm3_send_cmd_box(ex_handle, CQM_MOD_CQM, (u8)op, buf_in, NULL,
+ NULL, CQM_CMD_TIMEOUT, SPHW_CHANNEL_DEFAULT);
+ if (ret != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm3_send_cmd_box));
+ cqm_err(handle->dev_hdl, "Bloomfilter: bloomfilter_cmd ret=%d\n", ret);
+ cqm_err(handle->dev_hdl, "Bloomfilter: op=0x%x, cmd: 0x%x 0x%x 0x%x 0x%x\n",
+ op, *((u32 *)cmd), *(((u32 *)cmd) + CQM_DW_INDEX1),
+ *(((u32 *)cmd) + CQM_DW_INDEX2),
+ *(((u32 *)cmd) + CQM_DW_INDEX3));
+ cqm3_cmd_free(ex_handle, buf_in);
+ return CQM_FAIL;
+ }
+
+ cqm3_cmd_free(ex_handle, buf_in);
+
+ return CQM_SUCCESS;
+}
+
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_main.h b/drivers/scsi/spfc/hw/spfc_cqm_main.h
new file mode 100644
index 000000000000..c8fb37a631bf
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_cqm_main.h
@@ -0,0 +1,414 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef CQM_MAIN_H
+#define CQM_MAIN_H
+
+#include "sphw_hwdev.h"
+#include "sphw_hwif.h"
+#include "spfc_cqm_object.h"
+#include "spfc_cqm_bitmap_table.h"
+#include "spfc_cqm_bat_cla.h"
+
+#define GET_MAX(a, b) ((a) > (b) ? (a) : (b))
+#define GET_MIN(a, b) ((a) < (b) ? (a) : (b))
+#define CQM_DW_SHIFT 2
+#define CQM_QW_SHIFT 3
+#define CQM_BYTE_BIT_SHIFT 3
+#define CQM_NUM_BIT_BYTE 8
+
+#define CHIPIF_SUCCESS 0
+#define CHIPIF_FAIL (-1)
+
+#define CQM_TIMER_ENABLE 1
+#define CQM_TIMER_DISABLE 0
+
+/* The value must be the same as that of sphw_service_type in sphw_crm.h. */
+#define CQM_SERVICE_T_FC SERVICE_T_FC
+#define CQM_SERVICE_T_MAX SERVICE_T_MAX
+
+struct cqm_service {
+ bool valid; /* Whether to enable this service on the function. */
+ bool has_register; /* Registered or Not */
+ u64 hardware_db_paddr;
+ void __iomem *hardware_db_vaddr;
+ u64 dwqe_paddr;
+ void __iomem *dwqe_vaddr;
+ u32 buf_order; /* The size of each buf node is 2^buf_order pages. */
+ struct service_register_template service_template;
+};
+
+struct cqm_fake_cfg {
+ u32 parent_func; /* The parent func_id of the fake vfs. */
+ u32 child_func_start; /* The start func_id of the child fake vfs. */
+ u32 child_func_number; /* The number of the child fake vfs. */
+};
+
+#define CQM_MAX_FACKVF_GROUP 4
+
+struct cqm_func_capability {
+ /* BAT_PTR table(SMLC) */
+ bool ft_enable; /* BAT for flow table enable: support fc service
+ */
+ bool rdma_enable; /* BAT for rdma enable: support RoCE */
+ /* VAT table(SMIR) */
+ bool ft_pf_enable; /* Same as ft_enable. BAT entry for fc on pf
+ */
+ bool rdma_pf_enable; /* Same as rdma_enable. BAT entry for rdma on pf */
+
+ /* Dynamic or static memory allocation during the application of
+ * specified QPC/SCQC for each service.
+ */
+ bool qpc_alloc_static;
+ bool scqc_alloc_static;
+
+ u8 timer_enable; /* Whether the timer function is enabled */
+	u8 bloomfilter_enable; /* Whether the bloomfilter function is enabled */
+	/* Maximum number of connections for fc, which cannot exceed qpc_number */
+ u32 flow_table_based_conn_number;
+ u32 flow_table_based_conn_cache_number; /* Maximum number of sticky caches */
+ u32 bloomfilter_length; /* Size of the bloomfilter table, 64-byte aligned */
+ u32 bloomfilter_addr; /* Start position of the bloomfilter table in the SMF main cache. */
+ u32 qpc_reserved; /* Reserved bit in bitmap */
+ u32 mpt_reserved; /* The ROCE/IWARP MPT also has a reserved bit. */
+
+ /* All basic_size must be 2^n-aligned. */
+	/* The number of hash buckets. The size of the BAT table is aligned to 64 buckets.
+	 * At least 64 buckets are required.
+	 */
+	u32 hash_number;
+	/* The basic size of a hash bucket is 64B, including 5 valid entries and one next entry. */
+ u32 hash_basic_size;
+ u32 qpc_number;
+ u32 qpc_basic_size;
+
+	/* Number of PFs/VFs on the current host */
+ u32 pf_num;
+ u32 pf_id_start;
+ u32 vf_num;
+ u32 vf_id_start;
+
+ u32 lb_mode;
+ /* Only lower 4bit is valid, indicating which SMFs are enabled.
+ * For example, 0101B indicates that SMF0 and SMF2 are enabled.
+ */
+ u32 smf_pg;
+
+ u32 fake_mode;
+ /* Whether the current function belongs to the fake group (parent or child) */
+ u32 fake_func_type;
+ u32 fake_cfg_number; /* Number of current configuration groups */
+ struct cqm_fake_cfg fake_cfg[CQM_MAX_FACKVF_GROUP];
+
+	/* Note: for cqm special test */
+ u32 pagesize_reorder;
+ bool xid_alloc_mode;
+ bool gpa_check_enable;
+ u32 scq_reserved;
+ u32 srq_reserved;
+
+ u32 mpt_number;
+ u32 mpt_basic_size;
+ u32 scqc_number;
+ u32 scqc_basic_size;
+ u32 srqc_number;
+ u32 srqc_basic_size;
+
+ u32 gid_number;
+ u32 gid_basic_size;
+ u32 lun_number;
+ u32 lun_basic_size;
+ u32 taskmap_number;
+ u32 taskmap_basic_size;
+ u32 l3i_number;
+ u32 l3i_basic_size;
+ u32 childc_number;
+ u32 childc_basic_size;
+	u32 child_qpc_id_start; /* The FC service child CTX uses global addressing. */
+ u32 childc_number_all_function; /* The chip supports a maximum of 8096 child CTXs. */
+ u32 timer_number;
+ u32 timer_basic_size;
+ u32 xid2cid_number;
+ u32 xid2cid_basic_size;
+ u32 reorder_number;
+ u32 reorder_basic_size;
+};
+
+#define CQM_PF TYPE_PF
+#define CQM_VF TYPE_VF
+#define CQM_PPF TYPE_PPF
+#define CQM_UNKNOWN TYPE_UNKNOWN
+#define CQM_MAX_PF_NUM 32
+
+#define CQM_LB_MODE_NORMAL 0xff
+#define CQM_LB_MODE_0 0
+#define CQM_LB_MODE_1 1
+#define CQM_LB_MODE_2 2
+
+#define CQM_LB_SMF_MAX 4
+
+#define CQM_FPGA_MODE 0
+#define CQM_EMU_MODE 1
+#define CQM_FAKE_MODE_DISABLE 0
+#define CQM_FAKE_CFUNC_START 32
+
+#define CQM_FAKE_FUNC_NORMAL 0
+#define CQM_FAKE_FUNC_PARENT 1
+#define CQM_FAKE_FUNC_CHILD 2
+#define CQM_FAKE_FUNC_CHILD_CONFLICT 3
+#define CQM_FAKE_FUNC_MAX 32
+
+#define CQM_SPU_HOST_ID 4
+
+#define CQM_QPC_ROCE_PER_DRCT 12
+#define CQM_QPC_NORMAL_RESERVE_DRC 0
+#define CQM_QPC_ROCEAA_ENABLE 1
+#define CQM_QPC_ROCE_VBS_MODE 2
+#define CQM_QPC_NORMAL_WITHOUT_RSERVER_DRC 3
+
+struct cqm_db_common {
+ u32 rsvd1 : 23;
+ u32 c : 1;
+ u32 cos : 3;
+ u32 service_type : 5;
+
+ u32 rsvd2;
+};
+
+struct cqm_bloomfilter_table {
+ u32 *table;
+ u32 table_size; /* The unit is bit */
+ u32 array_mask; /* The unit of array entry is 32B, used to address entry
+ */
+ struct mutex lock;
+};
+
+struct cqm_bloomfilter_init_cmd {
+ u32 bloom_filter_len;
+ u32 bloom_filter_addr;
+};
+
+struct cqm_bloomfilter_cmd {
+ u32 rsv1;
+
+ u32 k_en : 4;
+ u32 rsv2 : 28;
+
+ u32 index_h;
+ u32 index_l;
+};
+
+struct cqm_handle {
+ struct sphw_hwdev *ex_handle;
+ struct pci_dev *dev;
+ struct sphw_func_attr func_attribute; /* vf/pf attributes */
+ struct cqm_func_capability func_capability; /* function capability set */
+ struct cqm_service service[CQM_SERVICE_T_MAX]; /* Service-related structure */
+ struct cqm_bat_table bat_table;
+ struct cqm_bloomfilter_table bloomfilter_table;
+ /* fake-vf-related structure */
+ struct cqm_handle *fake_cqm_handle[CQM_FAKE_FUNC_MAX];
+ struct cqm_handle *parent_cqm_handle;
+};
+
+enum cqm_cmd_type {
+ CQM_CMD_T_INVALID = 0,
+ CQM_CMD_T_BAT_UPDATE,
+ CQM_CMD_T_CLA_UPDATE,
+ CQM_CMD_T_CLA_CACHE_INVALID = 6,
+ CQM_CMD_T_BLOOMFILTER_INIT,
+ CQM_CMD_T_MAX
+};
+
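+/* Fields decoded from the CEQE data word: bits [19:0] carry the cqn/xid,
+ * bits [22:20] the qid, and bits [25:23] the type.
+ */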
+#define CQM_CQN_FROM_CEQE(data) ((data) & 0xfffff)
+#define CQM_XID_FROM_CEQE(data) ((data) & 0xfffff)
+#define CQM_QID_FROM_CEQE(data) (((data) >> 20) & 0x7)
+#define CQM_TYPE_FROM_CEQE(data) (((data) >> 23) & 0x7)
+
+#define CQM_HASH_BUCKET_SIZE_64 64
+
+#define CQM_MAX_QPC_NUM 0x100000
+#define CQM_MAX_SCQC_NUM 0x100000
+#define CQM_MAX_SRQC_NUM 0x100000
+#define CQM_MAX_CHILDC_NUM 0x100000
+
+#define CQM_QPC_SIZE_256 256
+#define CQM_QPC_SIZE_512 512
+#define CQM_QPC_SIZE_1024 1024
+
+#define CQM_SCQC_SIZE_32 32
+#define CQM_SCQC_SIZE_64 64
+#define CQM_SCQC_SIZE_128 128
+
+#define CQM_SRQC_SIZE_32 32
+#define CQM_SRQC_SIZE_64 64
+#define CQM_SRQC_SIZE_128 128
+
+#define CQM_MPT_SIZE_64 64
+
+#define CQM_GID_SIZE_32 32
+
+#define CQM_LUN_SIZE_8 8
+
+#define CQM_L3I_SIZE_8 8
+
+#define CQM_TIMER_SIZE_32 32
+
+#define CQM_XID2CID_SIZE_8 8
+
+#define CQM_XID2CID_SIZE_8K 8192
+
+#define CQM_REORDER_SIZE_256 256
+
+#define CQM_CHILDC_SIZE_256 256
+
+#define CQM_XID2CID_VBS_NUM (18 * 1024) /* 16K virtio VQ + 2K nvme Q */
+
+#define CQM_VBS_QPC_NUM 2048 /* 2K VOLQ */
+
+#define CQM_VBS_QPC_SIZE 512
+
+#define CQM_XID2CID_VIRTIO_NUM (16 * 1024)
+
+#define CQM_GID_RDMA_NUM 128
+
+#define CQM_LUN_FC_NUM 64
+
+#define CQM_TASKMAP_FC_NUM 4
+
+#define CQM_L3I_COMM_NUM 64
+
+#define CQM_CHILDC_ROCE_NUM (8 * 1024)
+#define CQM_CHILDC_OVS_VBS_NUM (8 * 1024)
+#define CQM_CHILDC_TOE_NUM 256
+#define CQM_CHILDC_IPSEC_NUM (4 * 1024)
+
+#define CQM_TIMER_SCALE_NUM (2 * 1024)
+#define CQM_TIMER_ALIGN_WHEEL_NUM 8
+#define CQM_TIMER_ALIGN_SCALE_NUM \
+ (CQM_TIMER_SCALE_NUM * CQM_TIMER_ALIGN_WHEEL_NUM)
+
+#define CQM_QPC_OVS_RSVD (1024 * 1024)
+#define CQM_QPC_ROCE_RSVD 2
+#define CQM_QPC_ROCEAA_SWITCH_QP_NUM 4
+#define CQM_QPC_ROCEAA_RSVD \
+ (4 * 1024 + CQM_QPC_ROCEAA_SWITCH_QP_NUM) /* 4096 Normal QP + 4 Switch QP */
+#define CQM_CQ_ROCEAA_RSVD 64
+#define CQM_SRQ_ROCEAA_RSVD 64
+#define CQM_QPC_ROCE_VBS_RSVD \
+ (1024 + CQM_QPC_ROCE_RSVD) /* (204800 + CQM_QPC_ROCE_RSVD) */
+
+#define CQM_OVS_PAGESIZE_ORDER 8
+#define CQM_OVS_MAX_TIMER_FUNC 48
+
+#define CQM_FC_PAGESIZE_ORDER 0
+
+#define CQM_QHEAD_ALIGN_ORDER 6
+
+#define CQM_CMD_TIMEOUT 300000 /* ms */
+
+#define CQM_DW_MASK 0xffffffff
+#define CQM_DW_OFFSET 32
+#define CQM_DW_INDEX0 0
+#define CQM_DW_INDEX1 1
+#define CQM_DW_INDEX2 2
+#define CQM_DW_INDEX3 3
+
+/* The unit of bloomfilter_length is 64B(512bits). */
+#define CQM_BF_LENGTH_UNIT 9
+#define CQM_BF_BITARRAY_MAX BIT(17)
+
+typedef void (*serv_cap_init_cb)(struct cqm_handle *, void *);
+
+/* Only for llt test */
+s32 cqm_capability_init(void *ex_handle);
+/* Can be defined as static */
+s32 cqm_mem_init(void *ex_handle);
+void cqm_mem_uninit(void *ex_handle);
+s32 cqm_event_init(void *ex_handle);
+void cqm_event_uninit(void *ex_handle);
+u8 cqm_aeq_callback(void *ex_handle, u8 event, u8 *data);
+s32 cqm_get_fake_func_type(struct cqm_handle *cqm_handle);
+s32 cqm_get_child_func_start(struct cqm_handle *cqm_handle);
+s32 cqm_get_child_func_number(struct cqm_handle *cqm_handle);
+
+s32 cqm3_init(void *ex_handle);
+void cqm3_uninit(void *ex_handle);
+s32 cqm3_service_register(void *ex_handle, struct service_register_template *service_template);
+void cqm3_service_unregister(void *ex_handle, u32 service_type);
+
+struct cqm_cmd_buf *cqm3_cmd_alloc(void *ex_handle);
+void cqm3_cmd_free(void *ex_handle, struct cqm_cmd_buf *cmd_buf);
+s32 cqm3_send_cmd_box(void *ex_handle, u8 mod, u8 cmd, struct cqm_cmd_buf *buf_in,
+ struct cqm_cmd_buf *buf_out, u64 *out_param, u32 timeout,
+ u16 channel);
+
+s32 cqm3_db_addr_alloc(void *ex_handle, void __iomem **db_addr, void __iomem **dwqe_addr);
+s32 cqm_db_phy_addr_alloc(void *ex_handle, u64 *db_paddr, u64 *dwqe_addr);
+s32 cqm_db_init(void *ex_handle);
+void cqm_db_uninit(void *ex_handle);
+
+s32 cqm_bloomfilter_cmd(void *ex_handle, u32 op, u32 k_flag, u64 id);
+s32 cqm_bloomfilter_init(void *ex_handle);
+void cqm_bloomfilter_uninit(void *ex_handle);
+
+#define CQM_LOG_ID 0
+
+#define CQM_PTR_NULL(x) "%s: " #x " is null\n", __func__
+#define CQM_ALLOC_FAIL(x) "%s: " #x " alloc fail\n", __func__
+#define CQM_MAP_FAIL(x) "%s: " #x " map fail\n", __func__
+#define CQM_FUNCTION_FAIL(x) "%s: " #x " return failure\n", __func__
+#define CQM_WRONG_VALUE(x) "%s: " #x " %u is wrong\n", __func__, (u32)(x)
+
+#define cqm_err(dev, format, ...) dev_err(dev, "[CQM]" format, ##__VA_ARGS__)
+#define cqm_warn(dev, format, ...) dev_warn(dev, "[CQM]" format, ##__VA_ARGS__)
+#define cqm_notice(dev, format, ...) \
+ dev_notice(dev, "[CQM]" format, ##__VA_ARGS__)
+#define cqm_info(dev, format, ...) dev_info(dev, "[CQM]" format, ##__VA_ARGS__)
+
+#define CQM_32_ALIGN_CHECK_RET(dev_hdl, x, ret, desc) \
+ do { \
+ if (unlikely(((x) & 0x1f) != 0)) { \
+ cqm_err(dev_hdl, desc); \
+ return ret; \
+ } \
+ } while (0)
+#define CQM_64_ALIGN_CHECK_RET(dev_hdl, x, ret, desc) \
+ do { \
+ if (unlikely(((x) & 0x3f) != 0)) { \
+ cqm_err(dev_hdl, desc); \
+ return ret; \
+ } \
+ } while (0)
+
+#define CQM_PTR_CHECK_RET(ptr, ret, desc) \
+ do { \
+ if (unlikely((ptr) == NULL)) { \
+ pr_err("[CQM]" desc); \
+ return ret; \
+ } \
+ } while (0)
+
+#define CQM_PTR_CHECK_NO_RET(ptr, desc) \
+ do { \
+ if (unlikely((ptr) == NULL)) { \
+ pr_err("[CQM]" desc); \
+ return; \
+ } \
+ } while (0)
+#define CQM_CHECK_EQUAL_RET(dev_hdl, actual, expect, ret, desc) \
+ do { \
+ if (unlikely((expect) != (actual))) { \
+ cqm_err(dev_hdl, desc); \
+ return ret; \
+ } \
+ } while (0)
+#define CQM_CHECK_EQUAL_NO_RET(dev_hdl, actual, expect, desc) \
+ do { \
+ if (unlikely((expect) != (actual))) { \
+ cqm_err(dev_hdl, desc); \
+ return; \
+ } \
+ } while (0)
+
+#endif /* CQM_MAIN_H */
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_object.c b/drivers/scsi/spfc/hw/spfc_cqm_object.c
new file mode 100644
index 000000000000..7e41ee633689
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_cqm_object.c
@@ -0,0 +1,959 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/device.h>
+#include <linux/gfp.h>
+#include <linux/mm.h>
+
+#include "sphw_crm.h"
+#include "sphw_hw.h"
+#include "sphw_hwdev.h"
+#include "sphw_hwif.h"
+
+#include "spfc_cqm_object.h"
+#include "spfc_cqm_bitmap_table.h"
+#include "spfc_cqm_bat_cla.h"
+#include "spfc_cqm_main.h"
+
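+/* Allocate bitmap index(es) for a QPC/MPT object: if the caller did not
+ * request a specific xid (CQM_INDEX_INVALID), a free index is taken from
+ * the bitmap; otherwise the requested reserved index is claimed.
+ */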
+s32 cqm_qpc_mpt_bitmap_alloc(struct cqm_object *object, struct cqm_cla_table *cla_table)
+{
+ struct cqm_qpc_mpt *common = container_of(object, struct cqm_qpc_mpt, object);
+ struct cqm_qpc_mpt_info *qpc_mpt_info = container_of(common,
+ struct cqm_qpc_mpt_info,
+ common);
+ struct cqm_handle *cqm_handle = (struct cqm_handle *)object->cqm_handle;
+ struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_bitmap *bitmap = &cla_table->bitmap;
+ u32 index, count;
+
+ count = (ALIGN(object->object_size, cla_table->obj_size)) / cla_table->obj_size;
+ qpc_mpt_info->index_count = count;
+
+ if (qpc_mpt_info->common.xid == CQM_INDEX_INVALID) {
+ /* apply for an index normally */
+ index = cqm_bitmap_alloc(bitmap, 1U << (cla_table->z + 1),
+ count, func_cap->xid_alloc_mode);
+ if (index < bitmap->max_num) {
+ qpc_mpt_info->common.xid = index;
+ } else {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bitmap_alloc));
+ return CQM_FAIL;
+ }
+ } else {
+ /* apply for index to be reserved */
+ index = cqm_bitmap_alloc_reserved(bitmap, count,
+ qpc_mpt_info->common.xid);
+ if (index != qpc_mpt_info->common.xid) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_bitmap_alloc_reserved));
+ return CQM_FAIL;
+ }
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_qpc_mpt_create(struct cqm_object *object)
+{
+ struct cqm_qpc_mpt *common = container_of(object, struct cqm_qpc_mpt, object);
+ struct cqm_qpc_mpt_info *qpc_mpt_info = container_of(common,
+ struct cqm_qpc_mpt_info,
+ common);
+ struct cqm_handle *cqm_handle = (struct cqm_handle *)object->cqm_handle;
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_object_table *object_table = NULL;
+ struct cqm_cla_table *cla_table = NULL;
+ struct cqm_bitmap *bitmap = NULL;
+ u32 index, count;
+
+ /* find the corresponding cla table */
+ if (object->object_type == CQM_OBJECT_SERVICE_CTX) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_QPC);
+ } else {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
+ return CQM_FAIL;
+ }
+
+ CQM_PTR_CHECK_RET(cla_table, CQM_FAIL,
+ CQM_FUNCTION_FAIL(cqm_cla_table_get));
+
+ /* Bitmap applies for index. */
+ if (cqm_qpc_mpt_bitmap_alloc(object, cla_table) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_qpc_mpt_bitmap_alloc));
+ return CQM_FAIL;
+ }
+
+ bitmap = &cla_table->bitmap;
+ index = qpc_mpt_info->common.xid;
+ count = qpc_mpt_info->index_count;
+
+ /* Find the trunk page from the BAT/CLA and allocate the buffer.
+ * Ensure that the released buffer has been cleared.
+ */
+ if (cla_table->alloc_static)
+ qpc_mpt_info->common.vaddr = cqm_cla_get_unlock(cqm_handle,
+ cla_table,
+ index, count,
+ &common->paddr);
+ else
+ qpc_mpt_info->common.vaddr = cqm_cla_get_lock(cqm_handle,
+ cla_table, index,
+ count,
+ &common->paddr);
+
+ if (!qpc_mpt_info->common.vaddr) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_get_lock));
+ cqm_err(handle->dev_hdl, "Qpc mpt init: qpc mpt vaddr is null, cla_table->alloc_static=%d\n",
+ cla_table->alloc_static);
+ goto err1;
+ }
+
+ /* Indexes are associated with objects, and FC is executed
+ * in the interrupt context.
+ */
+ object_table = &cla_table->obj_table;
+ if (object->service_type == CQM_SERVICE_T_FC) {
+ if (cqm_object_table_insert(cqm_handle, object_table, index,
+ object, false) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_insert));
+ goto err2;
+ }
+ } else {
+ if (cqm_object_table_insert(cqm_handle, object_table, index,
+ object, true) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_object_table_insert));
+ goto err2;
+ }
+ }
+
+ return CQM_SUCCESS;
+
+err2:
+ cqm_cla_put(cqm_handle, cla_table, index, count);
+err1:
+ cqm_bitmap_free(bitmap, index, count);
+ return CQM_FAIL;
+}
+
+struct cqm_qpc_mpt *cqm3_object_qpc_mpt_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 object_size, void *object_priv,
+ u32 index)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_qpc_mpt_info *qpc_mpt_info = NULL;
+ struct cqm_handle *cqm_handle = NULL;
+ s32 ret = CQM_FAIL;
+ u32 relative_index;
+ u32 fake_func_id;
+
+ CQM_PTR_CHECK_RET(ex_handle, NULL, CQM_PTR_NULL(ex_handle));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_qpc_mpt_create_cnt);
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_RET(cqm_handle, NULL, CQM_PTR_NULL(cqm_handle));
+
+ if (service_type >= CQM_SERVICE_T_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+	/* check that the service has been registered */
+ if (!cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+
+ if (object_type != CQM_OBJECT_SERVICE_CTX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return NULL;
+ }
+
+	/* fake vf adaptation: switch to the corresponding VF. */
+ if (cqm_handle->func_capability.fake_func_type ==
+ CQM_FAKE_FUNC_PARENT) {
+ fake_func_id = index / cqm_handle->func_capability.qpc_number;
+ relative_index = index % cqm_handle->func_capability.qpc_number;
+
+ cqm_info(handle->dev_hdl, "qpc create: fake_func_id=%u, relative_index=%u\n",
+ fake_func_id, relative_index);
+
+ if ((s32)fake_func_id >=
+ cqm_get_child_func_number(cqm_handle)) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(fake_func_id));
+ return NULL;
+ }
+
+ index = relative_index;
+ cqm_handle = cqm_handle->fake_cqm_handle[fake_func_id];
+ }
+
+ qpc_mpt_info = kmalloc(sizeof(*qpc_mpt_info), GFP_ATOMIC | __GFP_ZERO);
+ CQM_PTR_CHECK_RET(qpc_mpt_info, NULL, CQM_ALLOC_FAIL(qpc_mpt_info));
+
+ qpc_mpt_info->common.object.service_type = service_type;
+ qpc_mpt_info->common.object.object_type = object_type;
+ qpc_mpt_info->common.object.object_size = object_size;
+ atomic_set(&qpc_mpt_info->common.object.refcount, 1);
+ init_completion(&qpc_mpt_info->common.object.free);
+ qpc_mpt_info->common.object.cqm_handle = cqm_handle;
+ qpc_mpt_info->common.xid = index;
+
+ qpc_mpt_info->common.priv = object_priv;
+
+ ret = cqm_qpc_mpt_create(&qpc_mpt_info->common.object);
+ if (ret == CQM_SUCCESS)
+ return &qpc_mpt_info->common;
+
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_qpc_mpt_create));
+ kfree(qpc_mpt_info);
+ return NULL;
+}
+
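+/* Place a link wqe at the end of every buffer so that the buffers chain
+ * into a list (link mode) or close into a ring back to the first buffer
+ * (ring mode); the tail flag forces the last link wqe to the page end.
+ */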
+void cqm_linkwqe_fill(struct cqm_buf *buf, u32 wqe_per_buf, u32 wqe_size,
+ u32 wqe_number, bool tail, u8 link_mode)
+{
+ struct cqm_linkwqe_128B *linkwqe = NULL;
+ struct cqm_linkwqe *wqe = NULL;
+ dma_addr_t addr;
+ u8 *tmp = NULL;
+ u8 *va = NULL;
+ u32 i;
+
+	/* For every buffer except the last one, the link wqe is
+	 * filled directly at the tail.
+	 */
+ for (i = 0; i < buf->buf_number; i++) {
+ va = (u8 *)(buf->buf_list[i].va);
+
+ if (i != (buf->buf_number - 1)) {
+ wqe = (struct cqm_linkwqe *)(va + (u32)(wqe_size * wqe_per_buf));
+ wqe->wf = CQM_WQE_WF_LINK;
+ wqe->ctrlsl = CQM_LINK_WQE_CTRLSL_VALUE;
+ wqe->lp = CQM_LINK_WQE_LP_INVALID;
+ /* The valid value of link wqe needs to be set to 1.
+ * Each service ensures that o-bit=1 indicates that
+ * link wqe is valid and o-bit=0 indicates that
+ * link wqe is invalid.
+ */
+ wqe->o = CQM_LINK_WQE_OWNER_VALID;
+ addr = buf->buf_list[(u32)(i + 1)].pa;
+ wqe->next_page_gpa_h = CQM_ADDR_HI(addr);
+ wqe->next_page_gpa_l = CQM_ADDR_LW(addr);
+ } else { /* linkwqe special padding of the last buffer */
+ if (tail) {
+ /* must be filled at the end of the page */
+ tmp = va + (u32)(wqe_size * wqe_per_buf);
+ wqe = (struct cqm_linkwqe *)tmp;
+ } else {
+ /* The last linkwqe is filled
+ * following the last wqe.
+ */
+ tmp = va + (u32)(wqe_size * (wqe_number -
+ wqe_per_buf *
+ (buf->buf_number -
+ 1)));
+ wqe = (struct cqm_linkwqe *)tmp;
+ }
+ wqe->wf = CQM_WQE_WF_LINK;
+ wqe->ctrlsl = CQM_LINK_WQE_CTRLSL_VALUE;
+
+ /* In link mode, the last link WQE is invalid;
+ * In ring mode, the last link wqe is valid, pointing to
+ * the home page, and the lp is set.
+ */
+ if (link_mode == CQM_QUEUE_LINK_MODE) {
+ wqe->o = CQM_LINK_WQE_OWNER_INVALID;
+ } else {
+ /* The lp field of the last link_wqe is set to
+ * 1, indicating that the meaning of the o-bit
+ * is reversed.
+ */
+ wqe->lp = CQM_LINK_WQE_LP_VALID;
+ wqe->o = CQM_LINK_WQE_OWNER_VALID;
+ addr = buf->buf_list[0].pa;
+ wqe->next_page_gpa_h = CQM_ADDR_HI(addr);
+ wqe->next_page_gpa_l = CQM_ADDR_LW(addr);
+ }
+ }
+
+ if (wqe_size == CQM_LINKWQE_128B) {
+			/* Since the B800 version, the WQE o-bit scheme has
+			 * changed. Both 64B halves of the 128B WQE need the
+			 * o-bit assigned:
+			 * for ifoe, the o-bit is the 63rd bit from the end of
+			 * the last 64B;
+			 * for toe, the o-bit is the 157th bit from the end of
+			 * the last 64B.
+			 */
+ linkwqe = (struct cqm_linkwqe_128B *)wqe;
+ linkwqe->second64B.forth_16B.bs.ifoe_o = CQM_LINK_WQE_OWNER_VALID;
+
+ /* shift 2 bits by right to get length of dw(4B) */
+ cqm_swab32((u8 *)wqe, sizeof(struct cqm_linkwqe_128B) >> 2);
+ } else {
+ /* shift 2 bits by right to get length of dw(4B) */
+ cqm_swab32((u8 *)wqe, sizeof(struct cqm_linkwqe) >> 2);
+ }
+ }
+}
+
+s32 cqm_nonrdma_queue_ctx_create(struct cqm_object *object)
+{
+ struct cqm_queue *common = container_of(object, struct cqm_queue, object);
+ struct cqm_nonrdma_qinfo *qinfo = container_of(common, struct cqm_nonrdma_qinfo,
+ common);
+ struct cqm_handle *cqm_handle = (struct cqm_handle *)object->cqm_handle;
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_object_table *object_table = NULL;
+ struct cqm_cla_table *cla_table = NULL;
+ struct cqm_bitmap *bitmap = NULL;
+ s32 shift;
+
+ if (object->object_type == CQM_OBJECT_NONRDMA_SRQ) {
+ shift = cqm_shift(qinfo->q_ctx_size);
+ common->q_ctx_vaddr = cqm_kmalloc_align(qinfo->q_ctx_size,
+ GFP_KERNEL | __GFP_ZERO,
+ (u16)shift);
+ if (!common->q_ctx_vaddr) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(q_ctx_vaddr));
+ return CQM_FAIL;
+ }
+
+ common->q_ctx_paddr = pci_map_single(cqm_handle->dev,
+ common->q_ctx_vaddr,
+ qinfo->q_ctx_size,
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev,
+ common->q_ctx_paddr)) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(q_ctx_vaddr));
+ cqm_kfree_align(common->q_ctx_vaddr);
+ common->q_ctx_vaddr = NULL;
+ return CQM_FAIL;
+ }
+ } else if (object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+ /* find the corresponding cla table */
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
+ if (!cla_table) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(nonrdma_cqm_cla_table_get));
+ return CQM_FAIL;
+ }
+
+ /* bitmap applies for index */
+ bitmap = &cla_table->bitmap;
+ qinfo->index_count =
+ (ALIGN(qinfo->q_ctx_size, cla_table->obj_size)) /
+ cla_table->obj_size;
+ qinfo->common.index = cqm_bitmap_alloc(bitmap, 1U << (cla_table->z + 1),
+ qinfo->index_count,
+ cqm_handle->func_capability.xid_alloc_mode);
+ if (qinfo->common.index >= bitmap->max_num) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(nonrdma_cqm_bitmap_alloc));
+ return CQM_FAIL;
+ }
+
+ /* find the trunk page from BAT/CLA and allocate the buffer */
+ common->q_ctx_vaddr = cqm_cla_get_lock(cqm_handle, cla_table,
+ qinfo->common.index,
+ qinfo->index_count,
+ &common->q_ctx_paddr);
+ if (!common->q_ctx_vaddr) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(nonrdma_cqm_cla_get_lock));
+ cqm_bitmap_free(bitmap, qinfo->common.index,
+ qinfo->index_count);
+ return CQM_FAIL;
+ }
+
+ /* index and object association */
+ object_table = &cla_table->obj_table;
+ if (object->service_type == CQM_SERVICE_T_FC) {
+ if (cqm_object_table_insert(cqm_handle, object_table,
+ qinfo->common.index, object,
+ false) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(nonrdma_cqm_object_table_insert));
+ cqm_cla_put(cqm_handle, cla_table,
+ qinfo->common.index,
+ qinfo->index_count);
+ cqm_bitmap_free(bitmap, qinfo->common.index,
+ qinfo->index_count);
+ return CQM_FAIL;
+ }
+ } else {
+ if (cqm_object_table_insert(cqm_handle, object_table,
+ qinfo->common.index, object,
+ true) != CQM_SUCCESS) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(nonrdma_cqm_object_table_insert));
+ cqm_cla_put(cqm_handle, cla_table,
+ qinfo->common.index,
+ qinfo->index_count);
+ cqm_bitmap_free(bitmap, qinfo->common.index,
+ qinfo->index_count);
+ return CQM_FAIL;
+ }
+ }
+ }
+
+ return CQM_SUCCESS;
+}
+
+s32 cqm_nonrdma_queue_create(struct cqm_object *object)
+{
+ struct cqm_queue *common = container_of(object, struct cqm_queue, object);
+ struct cqm_nonrdma_qinfo *qinfo = container_of(common, struct cqm_nonrdma_qinfo,
+ common);
+ struct cqm_handle *cqm_handle = (struct cqm_handle *)object->cqm_handle;
+ struct cqm_service *service = cqm_handle->service + object->service_type;
+ struct cqm_buf *q_room_buf = &common->q_room_buf_1;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ u32 wqe_number = qinfo->common.object.object_size;
+ u32 wqe_size = qinfo->wqe_size;
+ u32 order = service->buf_order;
+ u32 buf_number, buf_size;
+	bool tail = false; /* whether the linkwqe must be placed at the end of the page */
+
+ /* When creating a CQ/SCQ queue, the page size is 4 KB,
+ * the linkwqe must be at the end of the page.
+ */
+ if (object->object_type == CQM_OBJECT_NONRDMA_EMBEDDED_CQ ||
+ object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+ /* depth: 2^n-aligned; depth range: 256-32 K */
+ if (wqe_number < CQM_CQ_DEPTH_MIN ||
+ wqe_number > CQM_CQ_DEPTH_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_number));
+ return CQM_FAIL;
+ }
+ if (!cqm_check_align(wqe_number)) {
+ cqm_err(handle->dev_hdl, "Nonrdma queue alloc: wqe_number is not align on 2^n\n");
+ return CQM_FAIL;
+ }
+
+ order = CQM_4K_PAGE_ORDER; /* wqe page 4k */
+ tail = true; /* The linkwqe must be at the end of the page. */
+ buf_size = CQM_4K_PAGE_SIZE;
+ } else {
+ buf_size = (u32)(PAGE_SIZE << order);
+ }
+
+	/* Number of valid wqes per buffer; the -1 deducts the link wqe
+	 * reserved in each buffer.
+	 */
+ qinfo->wqe_per_buf = (buf_size / wqe_size) - 1;
+	/* The depth transferred by the service already includes the
+	 * link wqes.
+	 */
+ buf_number = ALIGN((wqe_size * wqe_number), buf_size) / buf_size;
+
+ /* apply for buffer */
+ q_room_buf->buf_number = buf_number;
+ q_room_buf->buf_size = buf_size;
+ q_room_buf->page_number = buf_number << order;
+ if (cqm_buf_alloc(cqm_handle, q_room_buf, false) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_buf_alloc));
+ return CQM_FAIL;
+ }
+	/* Fill the link wqes; wqe_number - buf_number is the number of
+	 * wqes excluding link wqes.
+	 */
+ cqm_linkwqe_fill(q_room_buf, qinfo->wqe_per_buf, wqe_size,
+ wqe_number - buf_number, tail,
+ common->queue_link_mode);
+
+ /* create queue header */
+ qinfo->common.q_header_vaddr = cqm_kmalloc_align(sizeof(struct cqm_queue_header),
+ GFP_KERNEL | __GFP_ZERO,
+ CQM_QHEAD_ALIGN_ORDER);
+ if (!qinfo->common.q_header_vaddr) {
+ cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(q_header_vaddr));
+ goto err1;
+ }
+
+ common->q_header_paddr = pci_map_single(cqm_handle->dev,
+ qinfo->common.q_header_vaddr,
+ sizeof(struct cqm_queue_header),
+ PCI_DMA_BIDIRECTIONAL);
+ if (pci_dma_mapping_error(cqm_handle->dev, common->q_header_paddr)) {
+ cqm_err(handle->dev_hdl, CQM_MAP_FAIL(q_header_vaddr));
+ goto err2;
+ }
+
+ /* create queue ctx */
+ if (cqm_nonrdma_queue_ctx_create(object) == CQM_FAIL) {
+ cqm_err(handle->dev_hdl,
+ CQM_FUNCTION_FAIL(cqm_nonrdma_queue_ctx_create));
+ goto err3;
+ }
+
+ return CQM_SUCCESS;
+
+err3:
+ pci_unmap_single(cqm_handle->dev, common->q_header_paddr,
+ sizeof(struct cqm_queue_header), PCI_DMA_BIDIRECTIONAL);
+err2:
+ cqm_kfree_align(qinfo->common.q_header_vaddr);
+ qinfo->common.q_header_vaddr = NULL;
+err1:
+ cqm_buf_free(q_room_buf, cqm_handle->dev);
+ return CQM_FAIL;
+}
+
+struct cqm_queue *cqm3_object_fc_srq_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_nonrdma_qinfo *nonrdma_qinfo = NULL;
+ struct cqm_handle *cqm_handle = NULL;
+ struct cqm_service *service = NULL;
+ u32 valid_wqe_per_buffer;
+ u32 wqe_sum; /* include linkwqe, normal wqe */
+ u32 buf_size;
+ u32 buf_num;
+ s32 ret;
+
+ CQM_PTR_CHECK_RET(ex_handle, NULL, CQM_PTR_NULL(ex_handle));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_fc_srq_create_cnt);
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_RET(cqm_handle, NULL, CQM_PTR_NULL(cqm_handle));
+
+ /* service_type must be fc */
+ if (service_type != CQM_SERVICE_T_FC) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+
+	/* check that the service has been registered */
+ if (!cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+
+ /* wqe_size cannot exceed PAGE_SIZE and must be 2^n aligned. */
+ if (wqe_size >= PAGE_SIZE || (!cqm_check_align(wqe_size))) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_size));
+ return NULL;
+ }
+
+	/* The FC RQ is an SRQ. (Unlike the SRQ concept of TOE, for FC it means
+	 * that packets received from all flows are placed on the same RQ.
+	 * The SRQ of TOE is more like an RQ resource pool.)
+	 */
+ if (object_type != CQM_OBJECT_NONRDMA_SRQ) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return NULL;
+ }
+
+ service = &cqm_handle->service[service_type];
+ buf_size = (u32)(PAGE_SIZE << (service->buf_order));
+ /* subtract 1 link wqe */
+ valid_wqe_per_buffer = buf_size / wqe_size - 1;
+ buf_num = wqe_number / valid_wqe_per_buffer;
+ if (wqe_number % valid_wqe_per_buffer != 0)
+ buf_num++;
+
+ /* calculate the total number of WQEs */
+ wqe_sum = buf_num * (valid_wqe_per_buffer + 1);
+ nonrdma_qinfo = kmalloc(sizeof(*nonrdma_qinfo), GFP_KERNEL | __GFP_ZERO);
+ CQM_PTR_CHECK_RET(nonrdma_qinfo, NULL, CQM_ALLOC_FAIL(nonrdma_qinfo));
+
+ /* initialize object member */
+ nonrdma_qinfo->common.object.service_type = service_type;
+ nonrdma_qinfo->common.object.object_type = object_type;
+ /* total number of WQEs */
+ nonrdma_qinfo->common.object.object_size = wqe_sum;
+ atomic_set(&nonrdma_qinfo->common.object.refcount, 1);
+ init_completion(&nonrdma_qinfo->common.object.free);
+ nonrdma_qinfo->common.object.cqm_handle = cqm_handle;
+
+ /* Initialize the doorbell used by the current queue.
+ * The default doorbell is the hardware doorbell.
+ */
+ nonrdma_qinfo->common.current_q_doorbell = CQM_HARDWARE_DOORBELL;
+ /* Currently, the connection mode is fixed. In the future,
+ * the service needs to transfer the connection mode.
+ */
+ nonrdma_qinfo->common.queue_link_mode = CQM_QUEUE_RING_MODE;
+
+ /* initialize public members */
+ nonrdma_qinfo->common.priv = object_priv;
+ nonrdma_qinfo->common.valid_wqe_num = wqe_sum - buf_num;
+
+ /* initialize internal private members */
+ nonrdma_qinfo->wqe_size = wqe_size;
+ /* RQ (also called SRQ of FC) created by FC services,
+ * CTX needs to be created.
+ */
+ nonrdma_qinfo->q_ctx_size = service->service_template.srq_ctx_size;
+
+ ret = cqm_nonrdma_queue_create(&nonrdma_qinfo->common.object);
+ if (ret == CQM_SUCCESS)
+ return &nonrdma_qinfo->common;
+
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_fc_queue_create));
+ kfree(nonrdma_qinfo);
+ return NULL;
+}
+
+struct cqm_queue *cqm3_object_nonrdma_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_nonrdma_qinfo *nonrdma_qinfo = NULL;
+ struct cqm_handle *cqm_handle = NULL;
+ struct cqm_service *service = NULL;
+ s32 ret;
+
+ CQM_PTR_CHECK_RET(ex_handle, NULL, CQM_PTR_NULL(ex_handle));
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_nonrdma_queue_create_cnt);
+
+ cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ CQM_PTR_CHECK_RET(cqm_handle, NULL, CQM_PTR_NULL(cqm_handle));
+
+	/* check that the service has been registered */
+ if (!cqm_handle->service[service_type].has_register) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
+ return NULL;
+ }
+	/* wqe_size cannot exceed PAGE_SIZE, cannot be zero, and must be a
+	 * power of 2; cqm_check_align() verifies this.
+	 */
+ if (wqe_size >= PAGE_SIZE || (!cqm_check_align(wqe_size))) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_size));
+ return NULL;
+ }
+
+ /* nonrdma supports: RQ, SQ, SRQ, CQ, SCQ */
+ if (object_type < CQM_OBJECT_NONRDMA_EMBEDDED_RQ ||
+ object_type > CQM_OBJECT_NONRDMA_SCQ) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
+ return NULL;
+ }
+
+ nonrdma_qinfo = kmalloc(sizeof(*nonrdma_qinfo), GFP_KERNEL | __GFP_ZERO);
+ CQM_PTR_CHECK_RET(nonrdma_qinfo, NULL, CQM_ALLOC_FAIL(nonrdma_qinfo));
+
+ nonrdma_qinfo->common.object.service_type = service_type;
+ nonrdma_qinfo->common.object.object_type = object_type;
+ nonrdma_qinfo->common.object.object_size = wqe_number;
+ atomic_set(&nonrdma_qinfo->common.object.refcount, 1);
+ init_completion(&nonrdma_qinfo->common.object.free);
+ nonrdma_qinfo->common.object.cqm_handle = cqm_handle;
+
+ /* Initialize the doorbell used by the current queue.
+ * The default value is hardware doorbell
+ */
+ nonrdma_qinfo->common.current_q_doorbell = CQM_HARDWARE_DOORBELL;
+ /* Currently, the link mode is hardcoded and needs to be transferred by
+ * the service side.
+ */
+ nonrdma_qinfo->common.queue_link_mode = CQM_QUEUE_RING_MODE;
+
+ nonrdma_qinfo->common.priv = object_priv;
+
+ /* Initialize internal private members */
+ nonrdma_qinfo->wqe_size = wqe_size;
+ service = &cqm_handle->service[service_type];
+ switch (object_type) {
+ case CQM_OBJECT_NONRDMA_SCQ:
+ nonrdma_qinfo->q_ctx_size =
+ service->service_template.scq_ctx_size;
+ break;
+ case CQM_OBJECT_NONRDMA_SRQ:
+ /* Currently, the SRQ of the service is created through a
+ * dedicated interface.
+ */
+ nonrdma_qinfo->q_ctx_size =
+ service->service_template.srq_ctx_size;
+ break;
+ default:
+ break;
+ }
+
+ ret = cqm_nonrdma_queue_create(&nonrdma_qinfo->common.object);
+ if (ret == CQM_SUCCESS)
+ return &nonrdma_qinfo->common;
+
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_nonrdma_queue_create));
+ kfree(nonrdma_qinfo);
+ return NULL;
+}
+
+void cqm_qpc_mpt_delete(struct cqm_object *object)
+{
+ struct cqm_qpc_mpt *common = container_of(object, struct cqm_qpc_mpt, object);
+ struct cqm_qpc_mpt_info *qpc_mpt_info = container_of(common,
+ struct cqm_qpc_mpt_info,
+ common);
+ struct cqm_handle *cqm_handle = (struct cqm_handle *)object->cqm_handle;
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_object_table *object_table = NULL;
+ struct cqm_cla_table *cla_table = NULL;
+ u32 count = qpc_mpt_info->index_count;
+ u32 index = qpc_mpt_info->common.xid;
+ struct cqm_bitmap *bitmap = NULL;
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_qpc_mpt_delete_cnt);
+
+ /* find the corresponding cla table */
+ if (object->object_type == CQM_OBJECT_SERVICE_CTX) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_QPC);
+ } else {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
+ return;
+ }
+
+ CQM_PTR_CHECK_NO_RET(cla_table,
+ CQM_FUNCTION_FAIL(cqm_cla_table_get_qpc));
+
+ /* disassociate index and object */
+ object_table = &cla_table->obj_table;
+ if (object->service_type == CQM_SERVICE_T_FC)
+ cqm_object_table_remove(cqm_handle, object_table, index, object,
+ false);
+ else
+ cqm_object_table_remove(cqm_handle, object_table, index, object,
+ true);
+
+ /* wait for completion to ensure that all references to
+ * the QPC are complete
+ */
+ if (atomic_dec_and_test(&object->refcount))
+ complete(&object->free);
+ else
+ cqm_err(handle->dev_hdl, "Qpc mpt del: object is still referenced by others, waiting for completion\n");
+
+ /* Static QPC allocation must be non-blocking.
+ * Services ensure that the QPC is referenced
+ * when the QPC is deleted.
+ */
+ if (!cla_table->alloc_static)
+ wait_for_completion(&object->free);
+
+ /* release qpc buffer */
+ cqm_cla_put(cqm_handle, cla_table, index, count);
+
+ /* release the index to the bitmap */
+ bitmap = &cla_table->bitmap;
+ cqm_bitmap_free(bitmap, index, count);
+}
+
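+/* Delete dispatcher for QPC/MPT objects: only CQM_OBJECT_SERVICE_CTX is
+ * handled here; any other type returns CQM_FAIL so the caller can try the
+ * queue delete path instead.
+ */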
+s32 cqm_qpc_mpt_delete_ret(struct cqm_object *object)
+{
+ u32 object_type;
+
+ object_type = object->object_type;
+ switch (object_type) {
+ case CQM_OBJECT_SERVICE_CTX:
+ cqm_qpc_mpt_delete(object);
+ return CQM_SUCCESS;
+ default:
+ return CQM_FAIL;
+ }
+}
+
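+/* Tear down a nonrdma queue: an SCQ additionally drops its SCQN mapping;
+ * once the last reference is gone the queue header, the room buffer and
+ * the per-queue CTX (SRQ) or CLA-managed CTX (SCQ) are released.
+ */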
+void cqm_nonrdma_queue_delete(struct cqm_object *object)
+{
+ struct cqm_queue *common = container_of(object, struct cqm_queue, object);
+ struct cqm_nonrdma_qinfo *qinfo = container_of(common, struct cqm_nonrdma_qinfo,
+ common);
+ struct cqm_handle *cqm_handle = (struct cqm_handle *)object->cqm_handle;
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct cqm_buf *q_room_buf = &common->q_room_buf_1;
+ struct sphw_hwdev *handle = cqm_handle->ex_handle;
+ struct cqm_object_table *object_table = NULL;
+ struct cqm_cla_table *cla_table = NULL;
+ struct cqm_bitmap *bitmap = NULL;
+ u32 index = qinfo->common.index;
+ u32 count = qinfo->index_count;
+
+ atomic_inc(&handle->hw_stats.cqm_stats.cqm_nonrdma_queue_delete_cnt);
+
+ /* The SCQ has an independent SCQN association. */
+ if (object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
+ CQM_PTR_CHECK_NO_RET(cla_table, CQM_FUNCTION_FAIL(cqm_cla_table_get_queue));
+
+ /* disassociate index and object */
+ object_table = &cla_table->obj_table;
+ if (object->service_type == CQM_SERVICE_T_FC)
+ cqm_object_table_remove(cqm_handle, object_table, index,
+ object, false);
+ else
+ cqm_object_table_remove(cqm_handle, object_table, index,
+ object, true);
+ }
+
+ /* wait for completion to ensure that all references to
+ * the queue have been released
+ */
+ if (atomic_dec_and_test(&object->refcount))
+ complete(&object->free);
+ else
+ cqm_err(handle->dev_hdl, "Nonrdma queue del: object is still referenced by others, waiting for completion\n");
+
+ wait_for_completion(&object->free);
+
+ /* If the q header exists, release. */
+ if (qinfo->common.q_header_vaddr) {
+ pci_unmap_single(cqm_handle->dev, common->q_header_paddr,
+ sizeof(struct cqm_queue_header),
+ PCI_DMA_BIDIRECTIONAL);
+
+ cqm_kfree_align(qinfo->common.q_header_vaddr);
+ qinfo->common.q_header_vaddr = NULL;
+ }
+
+ cqm_buf_free(q_room_buf, cqm_handle->dev);
+ /* SRQ and SCQ have independent CTXs and release. */
+ if (object->object_type == CQM_OBJECT_NONRDMA_SRQ) {
+ /* The CTX of the nonrdma SRQ is
+ * allocated independently.
+ */
+ if (common->q_ctx_vaddr) {
+ pci_unmap_single(cqm_handle->dev, common->q_ctx_paddr,
+ qinfo->q_ctx_size,
+ PCI_DMA_BIDIRECTIONAL);
+
+ cqm_kfree_align(common->q_ctx_vaddr);
+ common->q_ctx_vaddr = NULL;
+ }
+ } else if (object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
+ /* The CTX of the nonrdma SCQ is managed by BAT/CLA. */
+ cqm_cla_put(cqm_handle, cla_table, index, count);
+
+ /* release the index to the bitmap */
+ bitmap = &cla_table->bitmap;
+ cqm_bitmap_free(bitmap, index, count);
+ }
+}
+
+s32 cqm_nonrdma_queue_delete_ret(struct cqm_object *object)
+{
+ u32 object_type;
+
+ object_type = object->object_type;
+ switch (object_type) {
+ case CQM_OBJECT_NONRDMA_EMBEDDED_RQ:
+ case CQM_OBJECT_NONRDMA_EMBEDDED_SQ:
+ case CQM_OBJECT_NONRDMA_EMBEDDED_CQ:
+ case CQM_OBJECT_NONRDMA_SCQ:
+ cqm_nonrdma_queue_delete(object);
+ return CQM_SUCCESS;
+ case CQM_OBJECT_NONRDMA_SRQ:
+ cqm_nonrdma_queue_delete(object);
+ return CQM_SUCCESS;
+ default:
+ return CQM_FAIL;
+ }
+}
+
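+/* Generic delete entry: validate the handles, then try the QPC/MPT and
+ * nonrdma queue delete paths in turn; the object structure itself is
+ * always freed here, even on error.
+ */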
+void cqm3_object_delete(struct cqm_object *object)
+{
+ struct cqm_handle *cqm_handle = NULL;
+ struct sphw_hwdev *handle = NULL;
+
+ CQM_PTR_CHECK_NO_RET(object, CQM_PTR_NULL(object));
+ if (!object->cqm_handle) {
+ pr_err("[CQM]object del: cqm_handle is null, service type %u, refcount %d\n",
+ object->service_type, (int)object->refcount.counter);
+ kfree(object);
+ return;
+ }
+
+ cqm_handle = (struct cqm_handle *)object->cqm_handle;
+
+ if (!cqm_handle->ex_handle) {
+ pr_err("[CQM]object del: ex_handle is null, service type %u, refcount %d\n",
+ object->service_type, (int)object->refcount.counter);
+ kfree(object);
+ return;
+ }
+
+ handle = cqm_handle->ex_handle;
+
+ if (object->service_type >= CQM_SERVICE_T_MAX) {
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->service_type));
+ kfree(object);
+ return;
+ }
+
+ if (cqm_qpc_mpt_delete_ret(object) == CQM_SUCCESS) {
+ kfree(object);
+ return;
+ }
+
+ if (cqm_nonrdma_queue_delete_ret(object) == CQM_SUCCESS) {
+ kfree(object);
+ return;
+ }
+
+ cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
+ kfree(object);
+}
+
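+/* Look up an object by index in the QPC or SCQC object table; a
+ * successful lookup is expected to be balanced with cqm3_object_put().
+ */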
+struct cqm_object *cqm3_object_get(void *ex_handle, enum cqm_object_type object_type,
+ u32 index, bool bh)
+{
+ struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
+ struct cqm_handle *cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
+ struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
+ struct cqm_object_table *object_table = NULL;
+ struct cqm_cla_table *cla_table = NULL;
+ struct cqm_object *object = NULL;
+
+ /* The data flow path takes performance into consideration and
+ * does not check input parameters.
+ */
+ switch (object_type) {
+ case CQM_OBJECT_SERVICE_CTX:
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_QPC);
+ break;
+ case CQM_OBJECT_NONRDMA_SCQ:
+ cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
+ break;
+ default:
+ return NULL;
+ }
+
+ if (!cla_table) {
+ cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_table_get));
+ return NULL;
+ }
+
+ object_table = &cla_table->obj_table;
+ object = cqm_object_table_get(cqm_handle, object_table, index, bh);
+ return object;
+}
+
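+/* Drop an object reference; the final put wakes up a deleter waiting on
+ * the object's free completion.
+ */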
+void cqm3_object_put(struct cqm_object *object)
+{
+ /* The data flow path takes performance into consideration and
+ * does not check input parameters.
+ */
+ if (atomic_dec_and_test(&object->refcount))
+ complete(&object->free);
+}
+
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_object.h b/drivers/scsi/spfc/hw/spfc_cqm_object.h
new file mode 100644
index 000000000000..3fbaa6a42af0
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_cqm_object.h
@@ -0,0 +1,279 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef CQM_OBJECT_H
+#define CQM_OBJECT_H
+
+#ifdef __cplusplus
+#if __cplusplus
+extern "C" {
+#endif
+#endif /* __cplusplus */
+
+#define CQM_SUCCESS 0
+#define CQM_FAIL (-1)
+/* Ignore the return value and continue */
+#define CQM_CONTINUE 1
+
+/* type of WQE is LINK WQE */
+#define CQM_WQE_WF_LINK 1
+
+/* chain queue mode */
+#define CQM_QUEUE_LINK_MODE 0
+/* RING queue mode */
+#define CQM_QUEUE_RING_MODE 1
+
+#define CQM_CQ_DEPTH_MAX 32768
+#define CQM_CQ_DEPTH_MIN 256
+
+/* linkwqe */
+#define CQM_LINK_WQE_CTRLSL_VALUE 2
+#define CQM_LINK_WQE_LP_VALID 1
+#define CQM_LINK_WQE_LP_INVALID 0
+#define CQM_LINK_WQE_OWNER_VALID 1
+#define CQM_LINK_WQE_OWNER_INVALID 0
+
+#define CQM_ADDR_HI(addr) ((u32)((u64)(addr) >> 32))
+#define CQM_ADDR_LW(addr) ((u32)((u64)(addr) & 0xffffffff))
+
+#define CQM_QPC_LAYOUT_TABLE_SIZE 16
+
+#define CQM_MOD_CQM 8
+
+/* generic linkwqe structure */
+struct cqm_linkwqe {
+ u32 rsv1 : 14; /* <reserved field */
+ u32 wf : 1; /* <wf */
+ u32 rsv2 : 14; /* <reserved field */
+ u32 ctrlsl : 2; /* <ctrlsl */
+ u32 o : 1; /* <o bit */
+
+ u32 rsv3 : 31; /* <reserved field */
+ u32 lp : 1; /* The lp field determines whether the o-bit meaning is reversed. */
+ u32 next_page_gpa_h;
+ u32 next_page_gpa_l;
+ u32 next_buffer_addr_h;
+ u32 next_buffer_addr_l;
+};
+
+/* SRQ linkwqe structure. The wqe size must not exceed the common RQE size. */
+struct cqm_srq_linkwqe {
+ struct cqm_linkwqe linkwqe; /* <generic linkwqe structure */
+ u32 current_buffer_gpa_h;
+ u32 current_buffer_gpa_l;
+ u32 current_buffer_addr_h;
+ u32 current_buffer_addr_l;
+
+ u32 fast_link_page_addr_h;
+ u32 fast_link_page_addr_l;
+
+ u32 fixed_next_buffer_addr_h;
+ u32 fixed_next_buffer_addr_l;
+};
+
+#define CQM_LINKWQE_128B 128
+
+/* first 64B of standard 128B WQE */
+union cqm_linkwqe_first64B {
+ struct cqm_linkwqe basic_linkwqe; /* <generic linkwqe structure */
+ u32 value[16]; /* <reserved field */
+};
+
+/* second 64B of standard 128B WQE */
+struct cqm_linkwqe_second64B {
+ u32 rsvd0[4]; /* <first 16B reserved field */
+ u32 rsvd1[4]; /* <second 16B reserved field */
+ u32 rsvd2[4];
+
+ union {
+ struct {
+ u32 rsvd0[2];
+ u32 rsvd1 : 31;
+ u32 ifoe_o : 1; /* <o bit of ifoe */
+ u32 rsvd2;
+ } bs;
+ u32 value[4];
+ } forth_16B; /* <fourth 16B */
+};
+
+/* standard 128B WQE structure */
+struct cqm_linkwqe_128B {
+ union cqm_linkwqe_first64B first64B; /* <first 64B of standard 128B WQE */
+ struct cqm_linkwqe_second64B second64B; /* <back 64B of standard 128B WQE */
+};
+
+/* AEQ type definition */
+enum cqm_aeq_event_type {
+ CQM_AEQ_BASE_T_FC = 48, /* <FC consists of 8 events:48~55 */
+ CQM_AEQ_MAX_T_FC = 56
+};
+
+/* service registration template */
+struct service_register_template {
+ u32 service_type; /* <service type */
+ u32 srq_ctx_size; /* <SRQ context size */
+ u32 scq_ctx_size; /* <SCQ context size */
+ void *service_handle;
+ u8 (*aeq_level_callback)(void *service_handle, u8 event_type, u8 *val);
+ void (*aeq_callback)(void *service_handle, u8 event_type, u8 *val);
+};
+
+/* object operation type definition */
+enum cqm_object_type {
+ CQM_OBJECT_ROOT_CTX = 0, /* <0:root context, which is compatible with root CTX management */
+ CQM_OBJECT_SERVICE_CTX, /* <1:QPC, connection management object */
+ CQM_OBJECT_NONRDMA_EMBEDDED_RQ = 10, /* <10:RQ of non-RDMA services, managed by LINKWQE */
+ CQM_OBJECT_NONRDMA_EMBEDDED_SQ, /* <11:SQ of non-RDMA services, managed by LINKWQE */
+ /* <12:SRQ of non-RDMA services, managed by MTT, but the CQM needs to apply for MTT. */
+ CQM_OBJECT_NONRDMA_SRQ,
+ /* <13:Embedded CQ for non-RDMA services, managed by LINKWQE */
+ CQM_OBJECT_NONRDMA_EMBEDDED_CQ,
+ CQM_OBJECT_NONRDMA_SCQ, /* <14:SCQ of non-RDMA services, managed by LINKWQE */
+};
+
+/* return value of the failure to apply for the BITMAP table */
+#define CQM_INDEX_INVALID (~(0U))
+
+/* doorbell mode selected by the current Q, hardware doorbell */
+#define CQM_HARDWARE_DOORBELL 1
+
+/* single-node structure of the CQM buffer */
+struct cqm_buf_list {
+ void *va; /* <virtual address */
+ dma_addr_t pa; /* <physical address */
+ u32 refcount; /* <reference count of the buf, which is used for internal buf management. */
+};
+
+/* common management structure of the CQM buffer */
+struct cqm_buf {
+ struct cqm_buf_list *buf_list; /* <buffer list */
+ /* <map the discrete buffer list to a group of consecutive addresses */
+ struct cqm_buf_list direct;
+ u32 page_number; /* <buf_number in quantity of page_number=2^n */
+ u32 buf_number; /* <number of buf_list nodes */
+ u32 buf_size; /* <PAGE_SIZE in quantity of buf_size=2^n */
+};
+
+/* CQM object structure, which can be considered
+ * as the base class abstracted from all queues/CTX.
+ */
+struct cqm_object {
+ u32 service_type; /* <service type */
+ u32 object_type; /* <object type, such as context, queue, mpt, and mtt, etc */
+ u32 object_size; /* <object Size, for queue/CTX/MPT, the unit is Byte*/
+ atomic_t refcount; /* <reference counting */
+ struct completion free; /* <release completed quantity */
+ void *cqm_handle; /* <cqm_handle */
+};
+
+/* structure of the QPC and MPT objects of the CQM */
+struct cqm_qpc_mpt {
+ struct cqm_object object;
+ u32 xid;
+ dma_addr_t paddr; /* <physical address of the QPC/MTT memory */
+ void *priv; /* <private information about the object of the service driver. */
+ u8 *vaddr; /* <virtual address of the QPC/MTT memory */
+};
+
+/* queue header structure */
+struct cqm_queue_header {
+ u64 doorbell_record; /* <SQ/RQ DB content */
+ u64 ci_record; /* <CQ DB content */
+ u64 rsv1;
+ u64 rsv2;
+};
+
+/* queue management structure: for queues of non-RDMA services, embedded queues
+ * are managed by LinkWQE, SRQ and SCQ are managed by MTT, but MTT needs to be
+ * allocated by the CQM; the queue of the RDMA service is managed by the MTT.
+ */
+struct cqm_queue {
+ struct cqm_object object; /* <object base class */
+ /* <The embedded queue and QP do not have indexes, but the SRQ and SCQ do. */
+ u32 index;
+ /* <private information about the object of the service driver */
+ void *priv;
+ /* <doorbell type selected by the current queue. HW/SW are used for the roce QP. */
+ u32 current_q_doorbell;
+ u32 current_q_room;
+ struct cqm_buf q_room_buf_1; /* <nonrdma:only q_room_buf_1 can be set to q_room_buf */
+ struct cqm_buf q_room_buf_2; /* <The CQ of RDMA reallocates the size of the queue room. */
+ struct cqm_queue_header *q_header_vaddr; /* <queue header virtual address */
+ dma_addr_t q_header_paddr; /* <physical address of the queue header */
+ u8 *q_ctx_vaddr; /* <CTX virtual addresses of SRQ and SCQ */
+ dma_addr_t q_ctx_paddr; /* <CTX physical addresses of SRQ and SCQ */
+ u32 valid_wqe_num; /* <number of valid WQEs that are successfully created */
+ u8 *tail_container; /* <tail pointer of the SRQ container */
+ u8 *head_container; /* <head pointer of SRQ container */
+ /* <Determine the connection mode during queue creation, such as link and ring. */
+ u8 queue_link_mode;
+};
+
+struct cqm_qpc_layout_table_node {
+ u32 type;
+ u32 size;
+ u32 offset;
+ struct cqm_object *object;
+};
+
+struct cqm_qpc_mpt_info {
+ struct cqm_qpc_mpt common;
+ /* Different service has different QPC.
+ * The large QPC/mpt will occupy some continuous indexes in bitmap.
+ */
+ u32 index_count;
+ struct cqm_qpc_layout_table_node qpc_layout_table[CQM_QPC_LAYOUT_TABLE_SIZE];
+};
+
+struct cqm_nonrdma_qinfo {
+ struct cqm_queue common;
+ u32 wqe_size;
+ /* Number of WQEs in each buffer (excluding link WQEs)
+ * For SRQ, the value is the number of WQEs contained in a container.
+ */
+ u32 wqe_per_buf;
+ u32 q_ctx_size;
+ /* When different services use CTXs of different sizes,
+ * a large CTX occupies multiple consecutive indexes in the bitmap.
+ */
+ u32 index_count;
+ /* add for srq */
+ u32 container_size;
+};
+
+/* sending command structure */
+struct cqm_cmd_buf {
+ void *buf;
+ dma_addr_t dma;
+ u16 size;
+};
+
+struct cqm_queue *cqm3_object_fc_srq_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv);
+struct cqm_qpc_mpt *cqm3_object_qpc_mpt_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 object_size, void *object_priv,
+ u32 index);
+struct cqm_queue *cqm3_object_nonrdma_queue_create(void *ex_handle, u32 service_type,
+ enum cqm_object_type object_type,
+ u32 wqe_number, u32 wqe_size,
+ void *object_priv);
+void cqm3_object_delete(struct cqm_object *object);
+struct cqm_object *cqm3_object_get(void *ex_handle, enum cqm_object_type object_type,
+ u32 index, bool bh);
+void cqm3_object_put(struct cqm_object *object);
+
+s32 cqm3_ring_hardware_db_fc(void *ex_handle, u32 service_type, u8 db_count,
+ u8 pagenum, u64 db);
+s32 cqm_ring_direct_wqe_db(void *ex_handle, u32 service_type, u8 db_count, void *direct_wqe);
+s32 cqm_ring_direct_wqe_db_fc(void *ex_handle, u32 service_type, void *direct_wqe);
+
+#ifdef __cplusplus
+#if __cplusplus
+}
+#endif
+#endif /* __cplusplus */
+
+#endif /* CQM_OBJECT_H */
diff --git a/drivers/scsi/spfc/hw/spfc_hba.c b/drivers/scsi/spfc/hw/spfc_hba.c
new file mode 100644
index 000000000000..179f49ddd7ad
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_hba.c
@@ -0,0 +1,1724 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "spfc_hba.h"
+#include "spfc_module.h"
+#include "spfc_utils.h"
+#include "spfc_chipitf.h"
+#include "spfc_io.h"
+#include "spfc_lld.h"
+#include "sphw_hw.h"
+#include "spfc_cqm_main.h"
+
+struct spfc_hba_info *spfc_hba[SPFC_HBA_PORT_MAX_NUM];
+ulong probe_bit_map[SPFC_MAX_PROBE_PORT_NUM / SPFC_PORT_NUM_PER_TABLE];
+static ulong card_num_bit_map[SPFC_MAX_PROBE_PORT_NUM / SPFC_PORT_NUM_PER_TABLE];
+static struct spfc_card_num_manage card_num_manage[SPFC_MAX_CARD_NUM];
+spinlock_t probe_spin_lock;
+u32 max_parent_qpc_num;
+
+static int spfc_probe(struct spfc_lld_dev *lld_dev, void **uld_dev, char *uld_dev_name);
+static void spfc_remove(struct spfc_lld_dev *lld_dev, void *uld_dev);
+static u32 spfc_initial_chip_access(struct spfc_hba_info *hba);
+static void spfc_release_chip_access(struct spfc_hba_info *hba);
+static u32 spfc_port_config_set(void *hba, enum unf_port_config_set_op opcode, void *var_in);
+static u32 spfc_port_config_get(void *hba, enum unf_port_cfg_get_op opcode, void *para_out);
+static u32 spfc_port_update_wwn(void *hba, void *para_in);
+static u32 spfc_get_chip_info(struct spfc_hba_info *hba);
+static u32 spfc_delete_scqc_via_cmdq_sync(struct spfc_hba_info *hba, u32 scqn);
+static u32 spfc_delete_srqc_via_cmdq_sync(struct spfc_hba_info *hba, u64 sqrc_gpa);
+static u32 spfc_get_hba_pcie_link_state(void *hba, void *link_state);
+static u32 spfc_port_check_fw_ready(struct spfc_hba_info *hba);
+
+struct spfc_uld_info fc_uld_info = {
+ .probe = spfc_probe,
+ .remove = spfc_remove,
+ .resume = NULL,
+ .event = NULL,
+ .suspend = NULL,
+ .ioctl = NULL
+};
+
+struct service_register_template service_cqm_temp = {
+ .service_type = SERVICE_T_FC,
+ .scq_ctx_size = SPFC_SCQ_CNTX_SIZE,
+ .srq_ctx_size = SPFC_SRQ_CNTX_SIZE, /* srq, scq context_size configuration */
+ .aeq_callback = spfc_process_aeqe, /* the API of asynchronous event from TILE to driver */
+};
+
+/* default configuration: auto speed, auto topology, INI+TGT */
+static struct unf_cfg_item spfc_port_cfg_parm[] = {
+ {"port_id", 0, 0x110000, 0xffffff},
+ /* port mode:INI(0x20), TGT(0x10), BOTH(0x30) */
+ {"port_mode", 0, 0x20, 0xff},
+ /* port topology, 0x3: loop , 0xc:p2p, 0xf:auto, 0x10:vn2vn */
+ {"port_topology", 0, 0xf, 0x20},
+ {"port_alpa", 0, 0xdead, 0xffff}, /* alpa address of port */
+ /* queue depth of originator registered to SCSI midlayer */
+ {"max_queue_depth", 0, 128, 128},
+ {"sest_num", 0, 2048, 2048},
+ {"max_login", 0, 2048, 2048},
+ /* nodename from 32 bit to 64 bit */
+ {"node_name_high", 0, 0x1000286e, 0xffffffff},
+ /* nodename from 0 bit to 31 bit */
+ {"node_name_low", 0, 0xd4bbf12f, 0xffffffff},
+ /* portname from 32 bit to 64 bit */
+ {"port_name_high", 0, 0x2000286e, 0xffffffff},
+ /* portname from 0 bit to 31 bit */
+ {"port_name_low", 0, 0xd4bbf12f, 0xffffffff},
+ /* port speed 0:auto 1:1Gbps 2:2Gbps 3:4Gbps 4:8Gbps 5:16Gbps */
+ {"port_speed", 0, 0, 32},
+ {"interrupt_delay", 0, 0, 100}, /* unit: us */
+ {"tape_support", 0, 0, 1}, /* tape support */
+ {"End", 0, 0, 0}
+};
+
+struct unf_low_level_functioon_op spfc_func_op = {
+ .low_level_type = UNF_SPFC_FC,
+ .name = "SPFC",
+ .xchg_mgr_type = UNF_LOW_LEVEL_MGR_TYPE_PASSTIVE,
+ .abts_xchg = UNF_NO_EXTRA_ABTS_XCHG,
+ .passthrough_flag = UNF_LOW_LEVEL_PASS_THROUGH_PORT_LOGIN,
+ .support_max_npiv_num = UNF_SPFC_MAXNPIV_NUM,
+ .support_max_ssq_num = SPFC_MAX_SSQ_NUM - 1,
+ .chip_id = 0,
+ .support_max_speed = UNF_PORT_SPEED_32_G,
+ .support_max_rport = UNF_SPFC_MAXRPORT_NUM,
+ .sfp_type = UNF_PORT_TYPE_FC_SFP,
+ .rport_release_type = UNF_LOW_LEVEL_RELEASE_RPORT_ASYNC,
+ .sirt_page_mode = UNF_LOW_LEVEL_SIRT_PAGE_MODE_XCHG,
+
+ /* Link service */
+ .service_op = {
+ .unf_ls_gs_send = spfc_send_ls_gs_cmnd,
+ .unf_bls_send = spfc_send_bls_cmnd,
+ .unf_cmnd_send = spfc_send_scsi_cmnd,
+ .unf_release_rport_res = spfc_free_parent_resource,
+ .unf_flush_ini_resp_que = spfc_flush_ini_resp_queue,
+ .unf_alloc_rport_res = spfc_alloc_parent_resource,
+ .ll_release_xid = spfc_free_xid,
+ },
+
+ /* Port Mgr */
+ .port_mgr_op = {
+ .ll_port_config_set = spfc_port_config_set,
+ .ll_port_config_get = spfc_port_config_get,
+ }
+};
+
+struct spfc_port_cfg_op {
+ enum unf_port_config_set_op opcode;
+ u32 (*spfc_operation)(void *hba, void *para);
+};
+
+struct spfc_port_cfg_op spfc_config_set_op[] = {
+ {UNF_PORT_CFG_SET_PORT_SWITCH, spfc_sfp_switch},
+ {UNF_PORT_CFG_UPDATE_WWN, spfc_port_update_wwn},
+ {UNF_PORT_CFG_UPDATE_FABRIC_PARAM, spfc_update_fabric_param},
+ {UNF_PORT_CFG_UPDATE_PLOGI_PARAM, spfc_update_port_param},
+ {UNF_PORT_CFG_SET_BUTT, NULL}
+};
+
+struct spfc_port_cfg_get_op {
+ enum unf_port_cfg_get_op opcode;
+ u32 (*spfc_operation)(void *hba, void *para);
+};
+
+struct spfc_port_cfg_get_op spfc_config_get_op[] = {
+ {UNF_PORT_CFG_GET_TOPO_ACT, spfc_get_topo_act},
+ {UNF_PORT_CFG_GET_LOOP_MAP, spfc_get_loop_map},
+ {UNF_PORT_CFG_GET_WORKBALE_BBCREDIT, spfc_get_workable_bb_credit},
+ {UNF_PORT_CFG_GET_WORKBALE_BBSCN, spfc_get_workable_bb_scn},
+ {UNF_PORT_CFG_GET_LOOP_ALPA, spfc_get_loop_alpa},
+ {UNF_PORT_CFG_GET_MAC_ADDR, spfc_get_chip_msg},
+ {UNF_PORT_CFG_GET_PCIE_LINK_STATE, spfc_get_hba_pcie_link_state},
+ {UNF_PORT_CFG_GET_BUTT, NULL},
+};
+
+static u32 spfc_port_update_wwn(void *hba, void *para_in)
+{
+ struct unf_port_wwn *port_wwn = NULL;
+ struct spfc_hba_info *spfc_hba = hba;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(para_in, UNF_RETURN_ERROR);
+
+ port_wwn = (struct unf_port_wwn *)para_in;
+
+ /* Save the updated WWNs in the hba for later use */
+ *(u64 *)spfc_hba->sys_node_name = port_wwn->sys_node_name;
+ *(u64 *)spfc_hba->sys_port_name = port_wwn->sys_port_wwn;
+
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
+ "[info]Port(0x%x) updates WWNN(0x%llx) WWPN(0x%llx)",
+ spfc_hba->port_cfg.port_id,
+ *(u64 *)spfc_hba->sys_node_name,
+ *(u64 *)spfc_hba->sys_port_name);
+
+ return RETURN_OK;
+}
+
+static u32 spfc_port_config_set(void *hba, enum unf_port_config_set_op opcode,
+ void *var_in)
+{
+ u32 op_idx = 0;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ for (op_idx = 0; op_idx < sizeof(spfc_config_set_op) /
+ sizeof(struct spfc_port_cfg_op); op_idx++) {
+ if (opcode == spfc_config_set_op[op_idx].opcode) {
+ if (!spfc_config_set_op[op_idx].spfc_operation) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Null operation for configuration, opcode(0x%x), operation ID(0x%x)",
+ opcode, op_idx);
+
+ return UNF_RETURN_ERROR;
+ }
+ return spfc_config_set_op[op_idx].spfc_operation(hba, var_in);
+ }
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]No operation code for configuration, opcode(0x%x)",
+ opcode);
+
+ return UNF_RETURN_ERROR;
+}
+
+static u32 spfc_port_config_get(void *hba, enum unf_port_cfg_get_op opcode,
+ void *para_out)
+{
+ u32 op_idx = 0;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ for (op_idx = 0; op_idx < sizeof(spfc_config_get_op) /
+ sizeof(struct spfc_port_cfg_get_op); op_idx++) {
+ if (opcode == spfc_config_get_op[op_idx].opcode) {
+ if (!spfc_config_get_op[op_idx].spfc_operation) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Null operation to get configuration, opcode(0x%x), operation ID(0x%x)",
+ opcode, op_idx);
+ return UNF_RETURN_ERROR;
+ }
+ return spfc_config_get_op[op_idx].spfc_operation(hba, para_out);
+ }
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]No operation to get configuration, opcode(0x%x)",
+ opcode);
+
+ return UNF_RETURN_ERROR;
+}
+
+static u32 spfc_fc_mode_check(void *hw_dev_handle)
+{
+ FC_CHECK_RETURN_VALUE(hw_dev_handle, UNF_RETURN_ERROR);
+
+ if (!sphw_support_fc(hw_dev_handle, NULL)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Work mode is not FC");
+ return UNF_RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Selected work mode is FC");
+
+ return RETURN_OK;
+}
+
+static u32 spfc_check_port_cfg(const struct spfc_port_cfg *port_cfg)
+{
+ bool topo_condition = false;
+ bool speed_condition = false;
+ /* About Work Topology */
+ topo_condition = ((port_cfg->port_topology != UNF_TOP_LOOP_MASK) &&
+ (port_cfg->port_topology != UNF_TOP_P2P_MASK) &&
+ (port_cfg->port_topology != UNF_TOP_AUTO_MASK));
+ if (topo_condition) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Configured port topology(0x%x) is incorrect",
+ port_cfg->port_topology);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* About Work Mode */
+ if (port_cfg->port_mode != UNF_PORT_MODE_INI) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Configured port mode(0x%x) is incorrect",
+ port_cfg->port_mode);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* About Work Speed */
+ speed_condition = ((port_cfg->port_speed != UNF_PORT_SPEED_AUTO) &&
+ (port_cfg->port_speed != UNF_PORT_SPEED_2_G) &&
+ (port_cfg->port_speed != UNF_PORT_SPEED_4_G) &&
+ (port_cfg->port_speed != UNF_PORT_SPEED_8_G) &&
+ (port_cfg->port_speed != UNF_PORT_SPEED_16_G) &&
+ (port_cfg->port_speed != UNF_PORT_SPEED_32_G));
+ if (speed_condition) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Configured port speed(0x%x) is incorrect",
+ port_cfg->port_speed);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Check port configuration OK");
+
+ return RETURN_OK;
+}
+
+static u32 spfc_get_port_cfg(struct spfc_hba_info *hba,
+ struct spfc_chip_info *chip_info, u8 card_num)
+{
+#define UNF_CONFIG_ITEM_LEN 15
+ /* Maximum length of a configuration item name, including the
+ * terminating null character
+ */
+#define UNF_MAX_ITEM_NAME_LEN (32 + 1)
+
+ /* Get and check parameters */
+ char cfg_item[UNF_MAX_ITEM_NAME_LEN];
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_hba_info *spfc_hba = hba;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+ memset((void *)cfg_item, 0, sizeof(cfg_item));
+
+ spfc_hba->card_info.func_num = (sphw_global_func_id(hba->dev_handle)) & UNF_FUN_ID_MASK;
+ spfc_hba->card_info.card_num = card_num;
+
+ /* The PFs used by the FC function range from PF1 to PF2 */
+ snprintf(cfg_item, UNF_MAX_ITEM_NAME_LEN, "spfc_cfg_%1u", (spfc_hba->card_info.func_num));
+
+ cfg_item[UNF_MAX_ITEM_NAME_LEN - 1] = 0;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Get port configuration: %s", cfg_item);
+
+ /* Get configuration parameters from file */
+ UNF_LOWLEVEL_GET_CFG_PARMS(ret, cfg_item, &spfc_port_cfg_parm[ARRAY_INDEX_0],
+ (u32 *)(void *)(&spfc_hba->port_cfg),
+ sizeof(spfc_port_cfg_parm) / sizeof(struct unf_cfg_item));
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) can't get configuration",
+ spfc_hba->port_cfg.port_id);
+
+ return ret;
+ }
+
+ if (max_parent_qpc_num <= SPFC_MAX_PARENT_QPC_NUM) {
+ spfc_hba->port_cfg.sest_num = UNF_SPFC_MAXRPORT_NUM;
+ spfc_hba->port_cfg.max_login = UNF_SPFC_MAXRPORT_NUM;
+ }
+
+ spfc_hba->port_cfg.port_id &= SPFC_PORT_ID_MASK;
+ spfc_hba->port_cfg.port_id |= spfc_hba->card_info.card_num << UNF_SHIFT_8;
+ spfc_hba->port_cfg.port_id |= spfc_hba->card_info.func_num;
+ spfc_hba->port_cfg.tape_support = (u32)chip_info->tape_support;
+
+ /* Parameters check */
+ ret = spfc_check_port_cfg(&spfc_hba->port_cfg);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) check configuration incorrect",
+ spfc_hba->port_cfg.port_id);
+
+ return ret;
+ }
+
+ /* Set configuration which is got from file */
+ spfc_hba->port_speed_cfg = spfc_hba->port_cfg.port_speed;
+ spfc_hba->port_topo_cfg = spfc_hba->port_cfg.port_topology;
+ spfc_hba->port_mode = (enum unf_port_mode)(spfc_hba->port_cfg.port_mode);
+
+ return ret;
+}
+
+void spfc_generate_sys_wwn(struct spfc_hba_info *hba)
+{
+ FC_CHECK_RETURN_VOID(hba);
+
+ *(u64 *)hba->sys_node_name = (((u64)hba->port_cfg.node_name_hi << UNF_SHIFT_32) |
+ (hba->port_cfg.node_name_lo));
+ *(u64 *)hba->sys_port_name = (((u64)hba->port_cfg.port_name_hi << UNF_SHIFT_32) |
+ (hba->port_cfg.port_name_lo));
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]NodeName = 0x%llx, PortName = 0x%llx",
+ *(u64 *)hba->sys_node_name, *(u64 *)hba->sys_port_name);
+}
+
+static u32 spfc_create_queues(struct spfc_hba_info *hba)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ SPFC_FUNCTION_ENTER;
+
+ /* Initialize shared resources of SCQ and SRQ in parent queue */
+ ret = spfc_create_common_share_queues(hba);
+ if (ret != RETURN_OK)
+ goto out_create_common_queue_fail;
+
+ /* Initialize parent queue manager resources */
+ ret = spfc_alloc_parent_queue_mgr(hba);
+ if (ret != RETURN_OK)
+ goto out_free_share_queue_resource;
+
+ /* Initialize shared WQE page pool in parent SQ */
+ ret = spfc_alloc_parent_sq_wqe_page_pool(hba);
+ if (ret != RETURN_OK)
+ goto out_free_parent_queue_resource;
+
+ ret = spfc_create_ssq(hba);
+ if (ret != RETURN_OK)
+ goto out_free_parent_wqe_page_pool;
+
+ /*
+ * Notice: the configuration of SQ and QID(default_sqid)
+ * must be the same in FC
+ */
+ hba->next_clear_sq = 0;
+ hba->default_sqid = SPFC_QID_SQ;
+
+ SPFC_FUNCTION_RETURN;
+ return RETURN_OK;
+out_free_parent_wqe_page_pool:
+ spfc_free_parent_sq_wqe_page_pool(hba);
+
+out_free_parent_queue_resource:
+ spfc_free_parent_queue_mgr(hba);
+
+out_free_share_queue_resource:
+ spfc_flush_scq_ctx(hba);
+ spfc_flush_srq_ctx(hba);
+ spfc_destroy_common_share_queues(hba);
+
+out_create_common_queue_fail:
+ SPFC_FUNCTION_RETURN;
+
+ return ret;
+}
+
+static u32 spfc_alloc_dma_buffers(struct spfc_hba_info *hba)
+{
+ struct pci_dev *pci_dev = NULL;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ pci_dev = hba->pci_dev;
+ FC_CHECK_RETURN_VALUE(pci_dev, UNF_RETURN_ERROR);
+
+ hba->sfp_buf = dma_alloc_coherent(&hba->pci_dev->dev,
+ sizeof(struct unf_sfp_err_rome_info),
+ &hba->sfp_dma_addr, GFP_KERNEL);
+ if (!hba->sfp_buf) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) can't allocate SFP DMA buffer",
+ hba->port_cfg.port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(hba->sfp_buf, 0, sizeof(struct unf_sfp_err_rome_info));
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) allocate sfp buffer(0x%p 0x%llx)",
+ hba->port_cfg.port_id, hba->sfp_buf,
+ (u64)hba->sfp_dma_addr);
+
+ return RETURN_OK;
+}
+
+static void spfc_free_dma_buffers(struct spfc_hba_info *hba)
+{
+ struct pci_dev *pci_dev = NULL;
+
+ FC_CHECK_RETURN_VOID(hba);
+ pci_dev = hba->pci_dev;
+ FC_CHECK_RETURN_VOID(pci_dev);
+
+ if (hba->sfp_buf) {
+ dma_free_coherent(&pci_dev->dev, sizeof(struct unf_sfp_err_rome_info),
+ hba->sfp_buf, hba->sfp_dma_addr);
+
+ hba->sfp_buf = NULL;
+ hba->sfp_dma_addr = 0;
+ }
+}
+
+static void spfc_destroy_queues(struct spfc_hba_info *hba)
+{
+ /* Free ssq */
+ spfc_free_ssq(hba, SPFC_MAX_SSQ_NUM);
+
+ /* Free parent queue resource */
+ spfc_free_parent_queues(hba);
+
+ /* Free queue manager resource */
+ spfc_free_parent_queue_mgr(hba);
+
+ /* Free linked List SQ and WQE page pool resource */
+ spfc_free_parent_sq_wqe_page_pool(hba);
+
+ /* Free shared SRQ and SCQ queue resource */
+ spfc_destroy_common_share_queues(hba);
+}
+
+static u32 spfc_alloc_default_session(struct spfc_hba_info *hba)
+{
+ struct unf_port_info rport_info = {0};
+ u32 wait_sq_cnt = 0;
+
+ rport_info.nport_id = 0xffffff;
+ rport_info.rport_index = SPFC_DEFAULT_RPORT_INDEX;
+ rport_info.local_nport_id = 0xffffff;
+ rport_info.port_name = 0;
+ rport_info.cs_ctrl = 0x81;
+
+ if (spfc_alloc_parent_resource((void *)hba, &rport_info) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Alloc default session resource failed");
+ goto failed;
+ }
+
+ for (;;) {
+ if (hba->default_sq_info.default_sq_flag == 1)
+ break;
+
+ msleep(SPFC_WAIT_SESS_ENABLE_ONE_TIME_MS);
+ wait_sq_cnt++;
+ if (wait_sq_cnt >= SPFC_MAX_WAIT_LOOP_TIMES) {
+ hba->default_sq_info.default_sq_flag = 0xF;
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Wait Default Session enable timeout");
+ goto failed;
+ }
+ }
+
+ if (spfc_mbx_config_default_session(hba, 1) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Notify up config default session table fail");
+ goto failed;
+ }
+
+ return RETURN_OK;
+
+failed:
+ spfc_sess_resource_free_sync((void *)hba, &rport_info);
+ return UNF_RETURN_ERROR;
+}
+
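+/* Bring up per-HBA host resources in order: locks and completions, chip
+ * access channels, chip information, queues, DMA buffers, state defaults
+ * and the default session; unwinds in reverse order on failure.
+ */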
+static u32 spfc_init_host_res(struct spfc_hba_info *hba)
+{
+ u32 ret = RETURN_OK;
+ struct spfc_hba_info *spfc_hba = hba;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+
+ SPFC_FUNCTION_ENTER;
+
+ /* Initialize spin lock */
+ spin_lock_init(&spfc_hba->hba_lock);
+ spin_lock_init(&spfc_hba->flush_state_lock);
+ spin_lock_init(&spfc_hba->clear_state_lock);
+ spin_lock_init(&spfc_hba->spin_lock);
+ spin_lock_init(&spfc_hba->srq_delay_info.srq_lock);
+ /* Initialize init_completion */
+ init_completion(&spfc_hba->hba_init_complete);
+ init_completion(&spfc_hba->mbox_complete);
+ init_completion(&spfc_hba->vpf_complete);
+ init_completion(&spfc_hba->fcfi_complete);
+ init_completion(&spfc_hba->get_sfp_complete);
+ /* Step-1: initialize the communication channel between driver and uP */
+ ret = spfc_initial_chip_access(spfc_hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SPFC port(0x%x) can't initialize chip access",
+ spfc_hba->port_cfg.port_id);
+
+ goto out_unmap_memory;
+ }
+ /* Step-2: get chip configuration information before creating
+ * queue resources
+ */
+ ret = spfc_get_chip_info(spfc_hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SPFC port(0x%x) can't get chip information",
+ spfc_hba->port_cfg.port_id);
+
+ goto out_unmap_memory;
+ }
+
+ /* Step-3: create queue resources */
+ ret = spfc_create_queues(spfc_hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SPFC port(0x%x) can't create queues",
+ spfc_hba->port_cfg.port_id);
+
+ goto out_release_chip_access;
+ }
+ /* Allocate DMA buffer (SFP information) */
+ ret = spfc_alloc_dma_buffers(spfc_hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SPFC port(0x%x) can't allocate DMA buffers",
+ spfc_hba->port_cfg.port_id);
+
+ goto out_destroy_queues;
+ }
+ /* Initialize status parameters */
+ spfc_hba->active_port_speed = UNF_PORT_SPEED_UNKNOWN;
+ spfc_hba->active_topo = UNF_ACT_TOP_UNKNOWN;
+ spfc_hba->sfp_on = false;
+ spfc_hba->port_loop_role = UNF_LOOP_ROLE_MASTER_OR_SLAVE;
+ spfc_hba->phy_link = UNF_PORT_LINK_DOWN;
+ spfc_hba->queue_set_stage = SPFC_QUEUE_SET_STAGE_INIT;
+
+ /* Initialize parameters referring to the lowlevel */
+ spfc_hba->remote_rttov_tag = 0;
+ spfc_hba->port_bb_scn_cfg = SPFC_LOWLEVEL_DEFAULT_BB_SCN;
+
+ /* Initialize timer, and the unit of E_D_TOV is ms */
+ spfc_hba->remote_edtov_tag = 0;
+ spfc_hba->remote_bb_credit = 0;
+ spfc_hba->compared_bb_scn = 0;
+ spfc_hba->compared_edtov_val = UNF_DEFAULT_EDTOV;
+ spfc_hba->compared_ratov_val = UNF_DEFAULT_RATOV;
+ spfc_hba->removing = false;
+ spfc_hba->dev_present = true;
+
+ /* Initialize parameters about cos */
+ spfc_hba->cos_bitmap = cos_bit_map;
+ memset(spfc_hba->cos_rport_cnt, 0, SPFC_MAX_COS_NUM * sizeof(atomic_t));
+
+ /* Mailbox access completion */
+ complete(&spfc_hba->mbox_complete);
+
+ ret = spfc_alloc_default_session(spfc_hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SPFC port(0x%x) can't allocate Default Session",
+ spfc_hba->port_cfg.port_id);
+
+ goto out_destroy_dma_buff;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]SPFC port(0x%x) initialize host resources succeeded",
+ spfc_hba->port_cfg.port_id);
+
+ return ret;
+
+out_destroy_dma_buff:
+ spfc_free_dma_buffers(spfc_hba);
+out_destroy_queues:
+ spfc_flush_scq_ctx(spfc_hba);
+ spfc_flush_srq_ctx(spfc_hba);
+ spfc_destroy_queues(spfc_hba);
+
+out_release_chip_access:
+ spfc_release_chip_access(spfc_hba);
+
+out_unmap_memory:
+ return ret;
+}
+
+static u32 spfc_get_chip_info(struct spfc_hba_info *hba)
+{
+ u32 ret = RETURN_OK;
+ u32 exi_count = 0;
+ u32 exi_base = 0;
+ u32 exi_stride = 0;
+ u32 fun_idx = 0;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ hba->vpid_start = hba->service_cap.dev_fc_cap.vp_id_start;
+ hba->vpid_end = hba->service_cap.dev_fc_cap.vp_id_end;
+ fun_idx = sphw_global_func_id(hba->dev_handle);
+
+ exi_count = (max_parent_qpc_num <= SPFC_MAX_PARENT_QPC_NUM) ?
+ exit_count >> UNF_SHIFT_1 : exit_count;
+ exi_stride = (max_parent_qpc_num <= SPFC_MAX_PARENT_QPC_NUM) ?
+ exit_stride >> UNF_SHIFT_1 : exit_stride;
+ exi_base = exit_base;
+
+ exi_base += (fun_idx * exi_stride);
+ hba->exi_base = SPFC_LSW(exi_base);
+ hba->exi_count = SPFC_LSW(exi_count);
+ hba->max_support_speed = max_speed;
+ hba->port_index = SPFC_LSB(fun_idx);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) base information: PortIndex=0x%x, ExiBase=0x%x, ExiCount=0x%x, VpIdStart=0x%x, VpIdEnd=0x%x, MaxSpeed=0x%x, Speed=0x%x, Topo=0x%x",
+ hba->port_cfg.port_id, hba->port_index, hba->exi_base,
+ hba->exi_count, hba->vpid_start, hba->vpid_end,
+ hba->max_support_speed, hba->port_speed_cfg, hba->port_topo_cfg);
+
+ return ret;
+}
+
+static u32 spfc_initial_chip_access(struct spfc_hba_info *hba)
+{
+ int ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ /* 1. Initialize cqm access related with scq, emb cq, aeq(ucode-->driver) */
+ service_cqm_temp.service_handle = hba;
+
+ ret = cqm3_service_register(hba->dev_handle, &service_cqm_temp);
+ if (ret != CQM_SUCCESS)
+ return UNF_RETURN_ERROR;
+
+ /* 2. Initialize mailbox(driver-->up), aeq(up--->driver) access */
+ ret = sphw_register_mgmt_msg_cb(hba->dev_handle, COMM_MOD_FC, hba,
+ spfc_up_msg2driver_proc);
+ if (ret != CQM_SUCCESS)
+ goto out_unreg_cqm;
+
+ return RETURN_OK;
+
+out_unreg_cqm:
+ cqm3_service_unregister(hba->dev_handle, SERVICE_T_FC);
+
+ return UNF_RETURN_ERROR;
+}
+
+static void spfc_release_chip_access(struct spfc_hba_info *hba)
+{
+ FC_CHECK_RETURN_VOID(hba);
+ FC_CHECK_RETURN_VOID(hba->dev_handle);
+
+ sphw_unregister_mgmt_msg_cb(hba->dev_handle, COMM_MOD_FC);
+
+ cqm3_service_unregister(hba->dev_handle, SERVICE_T_FC);
+}
+
+static void spfc_update_lport_config(struct spfc_hba_info *hba,
+ struct unf_low_level_functioon_op *lowlevel_func)
+{
+#define SPFC_MULTI_CONF_NONSUPPORT 0
+
+ struct unf_lport_cfg_item *lport_cfg = NULL;
+
+ lport_cfg = &lowlevel_func->lport_cfg_items;
+
+ if (hba->port_cfg.max_login < lowlevel_func->support_max_rport)
+ lport_cfg->max_login = hba->port_cfg.max_login;
+ else
+ lport_cfg->max_login = lowlevel_func->support_max_rport;
+
+ if (hba->port_cfg.sest_num >> UNF_SHIFT_1 < UNF_RESERVE_SFS_XCHG)
+ lport_cfg->max_io = hba->port_cfg.sest_num;
+ else
+ lport_cfg->max_io = hba->port_cfg.sest_num - UNF_RESERVE_SFS_XCHG;
+
+ lport_cfg->max_sfs_xchg = UNF_MAX_SFS_XCHG;
+ lport_cfg->port_id = hba->port_cfg.port_id;
+ lport_cfg->port_mode = hba->port_cfg.port_mode;
+ lport_cfg->port_topology = hba->port_cfg.port_topology;
+ lport_cfg->max_queue_depth = hba->port_cfg.max_queue_depth;
+
+ lport_cfg->port_speed = hba->port_cfg.port_speed;
+ lport_cfg->tape_support = hba->port_cfg.tape_support;
+
+ lowlevel_func->sys_port_name = *(u64 *)hba->sys_port_name;
+ lowlevel_func->sys_node_name = *(u64 *)hba->sys_node_name;
+
+ /* Update chip information */
+ lowlevel_func->dev = hba->pci_dev;
+ lowlevel_func->chip_info.chip_work_mode = hba->work_mode;
+ lowlevel_func->chip_info.chip_type = hba->chip_type;
+ lowlevel_func->chip_info.disable_err_flag = 0;
+ lowlevel_func->support_max_speed = hba->max_support_speed;
+ lowlevel_func->support_min_speed = hba->min_support_speed;
+
+ lowlevel_func->chip_id = 0;
+
+ lowlevel_func->sfp_type = UNF_PORT_TYPE_FC_SFP;
+
+ lowlevel_func->multi_conf_support = SPFC_MULTI_CONF_NONSUPPORT;
+ lowlevel_func->support_max_hot_tag_range = hba->port_cfg.sest_num;
+ lowlevel_func->update_fw_reset_active = UNF_PORT_UNGRADE_FW_RESET_INACTIVE;
+ lowlevel_func->port_type = 0; /* DRV_PORT_ENTITY_TYPE_PHYSICAL */
+
+ if ((lport_cfg->port_id & UNF_FIRST_LPORT_ID_MASK) == lport_cfg->port_id)
+ lowlevel_func->support_upgrade_report = UNF_PORT_SUPPORT_UPGRADE_REPORT;
+ else
+ lowlevel_func->support_upgrade_report = UNF_PORT_UNSUPPORT_UPGRADE_REPORT;
+}
+
+static u32 spfc_create_lport(struct spfc_hba_info *hba)
+{
+ void *lport = NULL;
+ struct unf_low_level_functioon_op lowlevel_func;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ spfc_func_op.dev = hba->pci_dev;
+ memcpy(&lowlevel_func, &spfc_func_op, sizeof(struct unf_low_level_functioon_op));
+
+ /* Update port configuration table */
+ spfc_update_lport_config(hba, &lowlevel_func);
+
+ /* Apply for lport resources */
+ UNF_LOWLEVEL_ALLOC_LPORT(lport, hba, &lowlevel_func);
+ if (!lport) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) can't allocate Lport",
+ hba->port_cfg.port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ hba->lport = lport;
+
+ return RETURN_OK;
+}
+
+void spfc_release_probe_index(u32 probe_index)
+{
+ if (probe_index >= SPFC_MAX_PROBE_PORT_NUM) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Probe index(0x%x) is invalid", probe_index);
+
+ return;
+ }
+
+ spin_lock(&probe_spin_lock);
+ if (!test_bit((int)probe_index, (const ulong *)probe_bit_map)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Probe index(0x%x) is not probed",
+ probe_index);
+
+ spin_unlock(&probe_spin_lock);
+
+ return;
+ }
+
+ clear_bit((int)probe_index, probe_bit_map);
+ spin_unlock(&probe_spin_lock);
+}
+
+static void spfc_delete_default_session(struct spfc_hba_info *hba)
+{
+ struct unf_port_info rport_info = {0};
+
+ rport_info.nport_id = 0xffffff;
+ rport_info.rport_index = SPFC_DEFAULT_RPORT_INDEX;
+ rport_info.local_nport_id = 0xffffff;
+ rport_info.port_name = 0;
+ rport_info.cs_ctrl = 0x81;
+
+ /* Notify the firmware (up) via the default session config table first, then delete the default session */
+ (void)spfc_mbx_config_default_session(hba, 0);
+ spfc_sess_resource_free_sync((void *)hba, &rport_info);
+}
+
+static void spfc_release_host_res(struct spfc_hba_info *hba)
+{
+ spfc_free_dma_buffers(hba);
+
+ spfc_destroy_queues(hba);
+
+ spfc_release_chip_access(hba);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) release low level resource done",
+ hba->port_cfg.port_id);
+}
+
+static struct spfc_hba_info *spfc_init_hba(struct pci_dev *pci_dev,
+ void *hw_dev_handle,
+ struct spfc_chip_info *chip_info,
+ u8 card_num)
+{
+ u32 ret = RETURN_OK;
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VALUE(pci_dev, NULL);
+ FC_CHECK_RETURN_VALUE(hw_dev_handle, NULL);
+
+ /* Allocate HBA */
+ hba = kmalloc(sizeof(struct spfc_hba_info), GFP_ATOMIC);
+ FC_CHECK_RETURN_VALUE(hba, NULL);
+ memset(hba, 0, sizeof(struct spfc_hba_info));
+
+ /* Heartbeat default */
+ hba->heart_status = 1;
+ /* Private data in pciDev */
+ hba->pci_dev = pci_dev;
+ hba->dev_handle = hw_dev_handle;
+
+ /* Work mode */
+ hba->work_mode = chip_info->work_mode;
+ /* Create work queue */
+ hba->work_queue = create_singlethread_workqueue("spfc");
+ if (!hba->work_queue) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Spfc create workqueue failed");
+
+ goto out_free_hba;
+ }
+ /* Init delay work */
+ INIT_DELAYED_WORK(&hba->srq_delay_info.del_work, spfc_rcvd_els_from_srq_timeout);
+ INIT_WORK(&hba->els_srq_clear_work, spfc_wq_destroy_els_srq);
+
+ /* Notice: Only use FC features */
+ (void)sphw_support_fc(hw_dev_handle, &hba->service_cap);
+ /* Check parent context available */
+ if (hba->service_cap.dev_fc_cap.max_parent_qpc_num == 0) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]FC parent context is not allocated in this function");
+
+ goto out_destroy_workqueue;
+ }
+ max_parent_qpc_num = hba->service_cap.dev_fc_cap.max_parent_qpc_num;
+
+ /* Get port configuration */
+ ret = spfc_get_port_cfg(hba, chip_info, card_num);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Can't get port configuration");
+
+ goto out_destroy_workqueue;
+ }
+ /* Get WWN */
+ spfc_generate_sys_wwn(hba);
+
+ /* Initialize host resources */
+ ret = spfc_init_host_res(hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SPFC port(0x%x) can't initialize host resource",
+ hba->port_cfg.port_id);
+
+ goto out_destroy_workqueue;
+ }
+ /* Local Port create */
+ ret = spfc_create_lport(hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SPFC port(0x%x) can't create lport",
+ hba->port_cfg.port_id);
+ goto out_release_host_res;
+ }
+ complete(&hba->hba_init_complete);
+
+ /* Print reference count */
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[info]Port(0x%x) probe succeeded. Memory reference is 0x%x",
+ hba->port_cfg.port_id, atomic_read(&fc_mem_ref));
+
+ return hba;
+
+out_release_host_res:
+ spfc_delete_default_session(hba);
+ spfc_flush_scq_ctx(hba);
+ spfc_flush_srq_ctx(hba);
+ spfc_release_host_res(hba);
+
+out_destroy_workqueue:
+ flush_workqueue(hba->work_queue);
+ destroy_workqueue(hba->work_queue);
+ hba->work_queue = NULL;
+
+out_free_hba:
+ kfree(hba);
+
+ return NULL;
+}
+
+void spfc_get_total_probed_num(u32 *probe_cnt)
+{
+ u32 i = 0;
+ u32 cnt = 0;
+
+ spin_lock(&probe_spin_lock);
+ for (i = 0; i < SPFC_MAX_PROBE_PORT_NUM; i++) {
+ if (test_bit((int)i, (const ulong *)probe_bit_map))
+ cnt++;
+ }
+
+ *probe_cnt = cnt;
+ spin_unlock(&probe_spin_lock);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Probed port total number is 0x%x", cnt);
+}
+
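+/* Ports on the same physical card are identified by the parent PCI bus
+ * number: reuse an existing card entry when the bus number matches and
+ * the card is not being removed, otherwise claim the first free entry.
+ */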
+u32 spfc_assign_card_num(struct spfc_lld_dev *lld_dev,
+ struct spfc_chip_info *chip_info, u8 *card_num)
+{
+ u8 i = 0;
+ u64 card_index = 0;
+
+ card_index = (!pci_is_root_bus(lld_dev->pdev->bus)) ?
+ lld_dev->pdev->bus->parent->number : lld_dev->pdev->bus->number;
+
+ spin_lock(&probe_spin_lock);
+
+ for (i = 0; i < SPFC_MAX_CARD_NUM; i++) {
+ if (test_bit((int)i, (const ulong *)card_num_bit_map)) {
+ if (card_num_manage[i].card_number ==
+ card_index && !card_num_manage[i].is_removing
+ ) {
+ card_num_manage[i].port_count++;
+ *card_num = i;
+ spin_unlock(&probe_spin_lock);
+ return RETURN_OK;
+ }
+ }
+ }
+
+ for (i = 0; i < SPFC_MAX_CARD_NUM; i++) {
+ if (!test_bit((int)i, (const ulong *)card_num_bit_map)) {
+ card_num_manage[i].card_number = card_index;
+ card_num_manage[i].port_count = 1;
+ card_num_manage[i].is_removing = false;
+
+ *card_num = i;
+ set_bit(i, card_num_bit_map);
+
+ spin_unlock(&probe_spin_lock);
+
+ return RETURN_OK;
+ }
+ }
+
+ spin_unlock(&probe_spin_lock);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Have probed more than 0x%x ports, probe failed", i);
+
+ return UNF_RETURN_ERROR;
+}
+
+static void spfc_dec_and_free_card_num(u8 card_num)
+{
+ /* 2 ports per card */
+ if (card_num >= SPFC_MAX_CARD_NUM) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Card number(0x%x) is invalid", card_num);
+
+ return;
+ }
+
+ spin_lock(&probe_spin_lock);
+
+ if (test_bit((int)card_num, (const ulong *)card_num_bit_map)) {
+ card_num_manage[card_num].port_count--;
+ card_num_manage[card_num].is_removing = true;
+
+ if (card_num_manage[card_num].port_count == 0) {
+ card_num_manage[card_num].card_number = 0;
+ card_num_manage[card_num].is_removing = false;
+ clear_bit((int)card_num, card_num_bit_map);
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Can not find card number(0x%x)", card_num);
+ }
+
+ spin_unlock(&probe_spin_lock);
+}
+
+u32 spfc_assign_probe_index(u32 *probe_index)
+{
+ u32 i = 0;
+
+ spin_lock(&probe_spin_lock);
+ for (i = 0; i < SPFC_MAX_PROBE_PORT_NUM; i++) {
+ if (!test_bit((int)i, (const ulong *)probe_bit_map)) {
+ *probe_index = i;
+ set_bit(i, probe_bit_map);
+
+ spin_unlock(&probe_spin_lock);
+
+ return RETURN_OK;
+ }
+ }
+ spin_unlock(&probe_spin_lock);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Have probed more than 0x%x ports, probe failed", i);
+
+ return UNF_RETURN_ERROR;
+}
+
+u32 spfc_get_probe_index_by_port_id(u32 port_id, u32 *probe_index)
+{
+ u32 total_probe_num = 0;
+ u32 i = 0;
+ u32 probe_cnt = 0;
+
+ spfc_get_total_probed_num(&total_probe_num);
+
+ for (i = 0; i < SPFC_MAX_PROBE_PORT_NUM; i++) {
+ if (!spfc_hba[i])
+ continue;
+
+ if (total_probe_num == probe_cnt)
+ break;
+
+ if (port_id == spfc_hba[i]->port_cfg.port_id) {
+ *probe_index = spfc_hba[i]->probe_index;
+
+ return RETURN_OK;
+ }
+
+ probe_cnt++;
+ }
+
+ return UNF_RETURN_ERROR;
+}
+
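+/* ULD probe entry: check the allowed probe count and the FC work mode,
+ * reserve a probe index and a card number, initialize the HBA, then name
+ * the uld device after the card and function numbers.
+ */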
+static int spfc_probe(struct spfc_lld_dev *lld_dev, void **uld_dev,
+ char *uld_dev_name)
+{
+ struct pci_dev *pci_dev = NULL;
+ struct spfc_hba_info *hba = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ const u8 work_mode = SPFC_SMARTIO_WORK_MODE_FC;
+ u32 probe_index = 0;
+ u32 probe_total_num = 0;
+ u8 card_num = INVALID_VALUE8;
+ struct spfc_chip_info chip_info;
+
+ FC_CHECK_RETURN_VALUE(lld_dev, UNF_RETURN_ERROR_S32);
+ FC_CHECK_RETURN_VALUE(lld_dev->hwdev, UNF_RETURN_ERROR_S32);
+ FC_CHECK_RETURN_VALUE(lld_dev->pdev, UNF_RETURN_ERROR_S32);
+ FC_CHECK_RETURN_VALUE(uld_dev, UNF_RETURN_ERROR_S32);
+ FC_CHECK_RETURN_VALUE(uld_dev_name, UNF_RETURN_ERROR_S32);
+
+ pci_dev = lld_dev->pdev;
+ memset(&chip_info, 0, sizeof(struct spfc_chip_info));
+ /* 1. Get & check Total_Probed_number */
+ spfc_get_total_probed_num(&probe_total_num);
+ if (probe_total_num >= allowed_probe_num) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Total probe num (0x%x) is larger than allowed number(0x%x)",
+ probe_total_num, allowed_probe_num);
+
+ return UNF_RETURN_ERROR_S32;
+ }
+ /* 2. Check device work mode */
+ ret = spfc_fc_mode_check(lld_dev->hwdev);
+ if (ret != RETURN_OK)
+ return UNF_RETURN_ERROR_S32;
+
+ /* 3. Assign & Get new Probe index */
+ ret = spfc_assign_probe_index(&probe_index);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]AssignProbeIndex fail");
+
+ return UNF_RETURN_ERROR_S32;
+ }
+
+ ret = spfc_get_chip_capability((void *)lld_dev->hwdev, &chip_info);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]GetChipCapability fail");
+ return UNF_RETURN_ERROR_S32;
+ }
+ chip_info.work_mode = work_mode;
+
+ /* Assign & Get new Card number */
+ ret = spfc_assign_card_num(lld_dev, &chip_info, &card_num);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]spfc_assign_card_num fail");
+ spfc_release_probe_index(probe_index);
+
+ return UNF_RETURN_ERROR_S32;
+ }
+
+ /* Init HBA resource */
+ hba = spfc_init_hba(pci_dev, lld_dev->hwdev, &chip_info, card_num);
+ if (!hba) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Probe HBA(0x%x) failed. Memory reference = 0x%x",
+ probe_index, atomic_read(&fc_mem_ref));
+
+ spfc_release_probe_index(probe_index);
+ spfc_dec_and_free_card_num(card_num);
+
+ return UNF_RETURN_ERROR_S32;
+ }
+
+ /* Name by the order of probe */
+ *uld_dev = hba;
+ snprintf(uld_dev_name, SPFC_PORT_NAME_STR_LEN, "%s%02x%02x",
+ SPFC_PORT_NAME_LABEL, hba->card_info.card_num,
+ hba->card_info.func_num);
+ memcpy(hba->port_name, uld_dev_name, SPFC_PORT_NAME_STR_LEN);
+ hba->probe_index = probe_index;
+ spfc_hba[probe_index] = hba;
+
+ return RETURN_OK;
+}
+
+u32 spfc_sfp_switch(void *hba, void *para_in)
+{
+ struct spfc_hba_info *spfc_hba = (struct spfc_hba_info *)hba;
+ bool turn_on = false;
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(para_in, UNF_RETURN_ERROR);
+
+ /* Redundancy check */
+ turn_on = *((bool *)para_in);
+ if ((u32)turn_on == (u32)spfc_hba->sfp_on) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Port(0x%x) FC physical port is already %s",
+ spfc_hba->port_cfg.port_id, (turn_on) ? "on" : "off");
+
+ return ret;
+ }
+
+ if (turn_on) {
+ ret = spfc_port_check_fw_ready(spfc_hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Get port(0x%x) clear state failed, turn on fail",
+ spfc_hba->port_cfg.port_id);
+ return ret;
+ }
+ /* At first, configure port table info if necessary */
+ ret = spfc_config_port_table(spfc_hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) can't configure port table",
+ spfc_hba->port_cfg.port_id);
+
+ return ret;
+ }
+ }
+
+ /* Switch physical port */
+ ret = spfc_port_switch(spfc_hba, turn_on);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Port(0x%x) switch failed",
+ spfc_hba->port_cfg.port_id);
+
+ return ret;
+ }
+
+ /* Update HBA's sfp state */
+ spfc_hba->sfp_on = turn_on;
+
+ return ret;
+}
+
+static u32 spfc_destroy_lport(struct spfc_hba_info *hba)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ UNF_LOWLEVEL_RELEASE_LOCAL_PORT(ret, hba->lport);
+ hba->lport = NULL;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) destroy L_Port done",
+ hba->port_cfg.port_id);
+
+ return ret;
+}
+
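+/* Poll the firmware clear state via mailbox once per second until the
+ * port reports SPFC_PORT_CLEAR_DONE, giving up after roughly 30 seconds.
+ */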
+static u32 spfc_port_check_fw_ready(struct spfc_hba_info *hba)
+{
+#define SPFC_PORT_CLEAR_DONE 0
+#define SPFC_PORT_CLEAR_DOING 1
+#define SPFC_WAIT_ONE_TIME_MS 1000
+#define SPFC_LOOP_TIMES 30
+
+ u32 clear_state = SPFC_PORT_CLEAR_DOING;
+ u32 ret = RETURN_OK;
+ u32 wait_timeout = 0;
+
+ do {
+ msleep(SPFC_WAIT_ONE_TIME_MS);
+ wait_timeout += SPFC_WAIT_ONE_TIME_MS;
+ ret = spfc_mbx_get_fw_clear_stat(hba, &clear_state);
+ if (ret != RETURN_OK)
+ return UNF_RETURN_ERROR;
+
+ /* Fail if the clear is still not done after waiting more than 30 seconds in total */
+ if (wait_timeout > SPFC_LOOP_TIMES * SPFC_WAIT_ONE_TIME_MS &&
+ clear_state != SPFC_PORT_CLEAR_DONE)
+ return UNF_RETURN_ERROR;
+ } while (clear_state != SPFC_PORT_CLEAR_DONE);
+
+ return RETURN_OK;
+}
+
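+/* Light reset of the HBA: wait for any in-progress init/reset, close the
+ * SFP, wait for the firmware clear state, flush the queues, reset the
+ * chip via mailbox, then reopen the queues and restore the SFP state.
+ */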
+u32 spfc_port_reset(struct spfc_hba_info *hba)
+{
+ u32 ret = RETURN_OK;
+ ulong timeout = 0;
+ bool sfp_before_reset = false;
+ bool off_para_in = false;
+ struct pci_dev *pci_dev = NULL;
+ struct spfc_hba_info *spfc_hba = hba;
+
+ FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
+ pci_dev = spfc_hba->pci_dev;
+ FC_CHECK_RETURN_VALUE(pci_dev, UNF_RETURN_ERROR);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) reset HBA begin",
+ spfc_hba->port_cfg.port_id);
+
+ /* Wait for last init/reset completion */
+ timeout = wait_for_completion_timeout(&spfc_hba->hba_init_complete,
+ (ulong)SPFC_PORT_INIT_TIME_SEC_MAX * HZ);
+
+ if (timeout == SPFC_ZERO) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Last HBA initialize/reset timed out after %d seconds",
+ SPFC_PORT_INIT_TIME_SEC_MAX);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Save current port state */
+ sfp_before_reset = spfc_hba->sfp_on;
+
+ /* Inform the reset event to CM level before beginning */
+ UNF_LOWLEVEL_PORT_EVENT(ret, spfc_hba->lport, UNF_PORT_RESET_START, NULL);
+ spfc_hba->reset_time = jiffies;
+
+ /* Close SFP */
+ ret = spfc_sfp_switch(spfc_hba, &off_para_in);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) can't close SFP",
+ spfc_hba->port_cfg.port_id);
+ spfc_hba->sfp_on = sfp_before_reset;
+
+ complete(&spfc_hba->hba_init_complete);
+
+ return ret;
+ }
+
+ ret = spfc_port_check_fw_ready(spfc_hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Get port(0x%x) clear state failed, hang port and report chip error",
+ spfc_hba->port_cfg.port_id);
+
+ complete(&spfc_hba->hba_init_complete);
+
+ return ret;
+ }
+
+ spfc_queue_pre_process(spfc_hba, false);
+
+ ret = spfc_mb_reset_chip(spfc_hba, SPFC_MBOX_SUBTYPE_LIGHT_RESET);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SPFC port(0x%x) can't reset chip mailbox",
+ spfc_hba->port_cfg.port_id);
+
+ UNF_LOWLEVEL_PORT_EVENT(ret, spfc_hba->lport, UNF_PORT_GET_FWLOG, NULL);
+ UNF_LOWLEVEL_PORT_EVENT(ret, spfc_hba->lport, UNF_PORT_DEBUG_DUMP, NULL);
+ }
+
+ /* Inform the success to CM level */
+ UNF_LOWLEVEL_PORT_EVENT(ret, spfc_hba->lport, UNF_PORT_RESET_END, NULL);
+
+ /* Queue open */
+ spfc_queue_post_process(spfc_hba);
+
+ /* Open SFP */
+ (void)spfc_sfp_switch(spfc_hba, &sfp_before_reset);
+
+ complete(&spfc_hba->hba_init_complete);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[event]Port(0x%x) reset HBA done",
+ spfc_hba->port_cfg.port_id);
+
+ return ret;
+#undef SPFC_WAIT_LINKDOWN_EVENT_MS
+}
+
+static u32 spfc_delete_scqc_via_cmdq_sync(struct spfc_hba_info *hba, u32 scqn)
+{
+ /* Via CMND Queue */
+#define SPFC_DEL_SCQC_TIMEOUT 3000
+
+ int ret;
+ struct spfc_cmdqe_delete_scqc del_scqc_cmd;
+ struct sphw_cmd_buf *cmd_buf;
+
+ /* Alloc cmd buffer */
+ cmd_buf = sphw_alloc_cmd_buf(hba->dev_handle);
+ if (!cmd_buf) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]cmdq in_cmd_buf alloc failed");
+
+ SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_DEL_SCQC);
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Build & Send Cmnd */
+ memset(&del_scqc_cmd, 0, sizeof(del_scqc_cmd));
+ del_scqc_cmd.wd0.task_type = SPFC_TASK_T_DEL_SCQC;
+ del_scqc_cmd.wd1.scqn = SPFC_LSW(scqn);
+ spfc_cpu_to_big32(&del_scqc_cmd, sizeof(del_scqc_cmd));
+ memcpy(cmd_buf->buf, &del_scqc_cmd, sizeof(del_scqc_cmd));
+ cmd_buf->size = sizeof(del_scqc_cmd);
+
+ ret = sphw_cmdq_detail_resp(hba->dev_handle, COMM_MOD_FC, 0, cmd_buf,
+ NULL, NULL, SPFC_DEL_SCQC_TIMEOUT,
+ SPHW_CHANNEL_FC);
+
+ /* Free cmnd buffer */
+ sphw_free_cmd_buf(hba->dev_handle, cmd_buf);
+
+ if (ret) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Send del scqc via cmdq failed, ret=0x%x",
+ ret);
+
+ SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_DEL_SCQC);
+ return UNF_RETURN_ERROR;
+ }
+
+ SPFC_IO_STAT(hba, SPFC_TASK_T_DEL_SCQC);
+
+ return RETURN_OK;
+}
+
+static u32 spfc_delete_srqc_via_cmdq_sync(struct spfc_hba_info *hba, u64 sqrc_gpa)
+{
+ /* Via CMND Queue */
+#define SPFC_DEL_SRQC_TIMEOUT 3000
+
+ int ret;
+ struct spfc_cmdqe_delete_srqc del_srqc_cmd;
+ struct sphw_cmd_buf *cmd_buf;
+
+ /* Alloc Cmnd buffer */
+ cmd_buf = sphw_alloc_cmd_buf(hba->dev_handle);
+ if (!cmd_buf) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]cmdq in_cmd_buf allocate failed");
+
+ SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_DEL_SRQC);
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Build & Send Cmnd */
+ memset(&del_srqc_cmd, 0, sizeof(del_srqc_cmd));
+ del_srqc_cmd.wd0.task_type = SPFC_TASK_T_DEL_SRQC;
+ del_srqc_cmd.srqc_gpa_h = SPFC_HIGH_32_BITS(sqrc_gpa);
+ del_srqc_cmd.srqc_gpa_l = SPFC_LOW_32_BITS(sqrc_gpa);
+ spfc_cpu_to_big32(&del_srqc_cmd, sizeof(del_srqc_cmd));
+ memcpy(cmd_buf->buf, &del_srqc_cmd, sizeof(del_srqc_cmd));
+ cmd_buf->size = sizeof(del_srqc_cmd);
+
+ ret = sphw_cmdq_detail_resp(hba->dev_handle, COMM_MOD_FC, 0, cmd_buf,
+ NULL, NULL, SPFC_DEL_SRQC_TIMEOUT,
+ SPHW_CHANNEL_FC);
+
+ /* Free Cmnd Buffer */
+ sphw_free_cmd_buf(hba->dev_handle, cmd_buf);
+
+ if (ret) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Send del srqc via cmdq failed, ret=0x%x",
+ ret);
+
+ SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_DEL_SRQC);
+ return UNF_RETURN_ERROR;
+ }
+
+ SPFC_IO_STAT(hba, SPFC_TASK_T_DEL_SRQC);
+
+ return RETURN_OK;
+}
+
+void spfc_flush_scq_ctx(struct spfc_hba_info *hba)
+{
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Start destroy total 0x%x SCQC", SPFC_TOTAL_SCQ_NUM);
+
+ FC_CHECK_RETURN_VOID(hba);
+
+ (void)spfc_delete_scqc_via_cmdq_sync(hba, 0);
+}
+
+void spfc_flush_srq_ctx(struct spfc_hba_info *hba)
+{
+ struct spfc_srq_info *srq_info = NULL;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Start destroy ELS&IMMI SRQC");
+
+ FC_CHECK_RETURN_VOID(hba);
+
+	/* Check the state to avoid flushing the SRQC again */
+ srq_info = &hba->els_srq_info;
+ if (srq_info->srq_type == SPFC_SRQ_ELS && srq_info->enable) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[event]HBA(0x%x) flush ELS SRQC",
+ hba->port_index);
+
+ (void)spfc_delete_srqc_via_cmdq_sync(hba, srq_info->cqm_srq_info->q_ctx_paddr);
+ }
+}
+
+void spfc_set_hba_flush_state(struct spfc_hba_info *hba, bool in_flush)
+{
+ ulong flag = 0;
+
+ spin_lock_irqsave(&hba->flush_state_lock, flag);
+ hba->in_flushing = in_flush;
+ spin_unlock_irqrestore(&hba->flush_state_lock, flag);
+}
+
+void spfc_set_hba_clear_state(struct spfc_hba_info *hba, bool clear_flag)
+{
+ ulong flag = 0;
+
+ spin_lock_irqsave(&hba->clear_state_lock, flag);
+ hba->port_is_cleared = clear_flag;
+ spin_unlock_irqrestore(&hba->clear_state_lock, flag);
+}
+
+bool spfc_hba_is_present(struct spfc_hba_info *hba)
+{
+ int ret_val = RETURN_OK;
+ bool present_flag = false;
+ u32 vendor_id = 0;
+
+ ret_val = pci_read_config_dword(hba->pci_dev, 0, &vendor_id);
+ vendor_id &= SPFC_PCI_VENDOR_ID_MASK;
+ if (ret_val == RETURN_OK && vendor_id == SPFC_PCI_VENDOR_ID_RAMAXEL) {
+ present_flag = true;
+ } else {
+ present_flag = false;
+ hba->dev_present = false;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+		     "[info]Port %s remove: vendor_id=0x%x, ret=0x%x",
+ present_flag ? "normal" : "surprise", vendor_id, ret_val);
+
+ return present_flag;
+}
+
+static void spfc_exit(struct pci_dev *pci_dev, struct spfc_hba_info *hba)
+{
+#define SPFC_WAIT_CLR_RESOURCE_MS 1000
+ u32 ret = UNF_RETURN_ERROR;
+ bool sfp_switch = false;
+ bool present_flag = true;
+
+ FC_CHECK_RETURN_VOID(pci_dev);
+ FC_CHECK_RETURN_VOID(hba);
+
+ hba->removing = true;
+
+ /* 1. Check HBA present or not */
+ present_flag = spfc_hba_is_present(hba);
+ if (present_flag) {
+ if (hba->phy_link == UNF_PORT_LINK_DOWN)
+ hba->queue_set_stage = SPFC_QUEUE_SET_STAGE_FLUSHDONE;
+
+ /* At first, close sfp */
+ sfp_switch = false;
+ (void)spfc_sfp_switch((void *)hba, (void *)&sfp_switch);
+ }
+
+ /* 2. Report COM with HBA removing: delete route timer delay work */
+ UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport, UNF_PORT_BEGIN_REMOVE, NULL);
+
+	/* 3. Report COM with HBA Nop, COM release I/O(s) & R_Port(s) forcibly */
+ UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport, UNF_PORT_NOP, NULL);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]PCI device(%p) remove port(0x%x) failed",
+ pci_dev, hba->port_index);
+ }
+
+ spfc_delete_default_session(hba);
+
+ if (present_flag)
+ /* 4.1 Wait for all SQ empty, free SRQ buffer & SRQC */
+ spfc_queue_pre_process(hba, true);
+
+ /* 5. Destroy L_Port */
+ (void)spfc_destroy_lport(hba);
+
+	/* 6. If HBA is present */
+ if (present_flag) {
+ /* Enable Queues dispatch */
+ spfc_queue_post_process(hba);
+
+ /* Need reset port if necessary */
+ (void)spfc_mb_reset_chip(hba, SPFC_MBOX_SUBTYPE_HEAVY_RESET);
+
+ /* Flush SCQ context */
+ spfc_flush_scq_ctx(hba);
+
+ /* Flush SRQ context */
+ spfc_flush_srq_ctx(hba);
+
+ sphw_func_rx_tx_flush(hba->dev_handle, SPHW_CHANNEL_FC);
+
+		/* NOTE: while flushing tx/rx, the hash bucket will be cached out in
+		 * UP. Wait for the resources to be cleared completely
+		 */
+ msleep(SPFC_WAIT_CLR_RESOURCE_MS);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) flush scq & srq & root context done",
+ hba->port_cfg.port_id);
+ }
+
+ /* 7. Release host resources */
+ spfc_release_host_res(hba);
+
+ /* 8. Destroy FC work queue */
+ if (hba->work_queue) {
+ flush_workqueue(hba->work_queue);
+ destroy_workqueue(hba->work_queue);
+ hba->work_queue = NULL;
+ }
+
+ /* 9. Release Probe index & Decrease card number */
+ spfc_release_probe_index(hba->probe_index);
+ spfc_dec_and_free_card_num((u8)hba->card_info.card_num);
+
+ /* 10. Free HBA memory */
+ kfree(hba);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+		     "[event]PCI device(%p) remove succeeded, memory reference is 0x%x",
+ pci_dev, atomic_read(&fc_mem_ref));
+}
+
+static void spfc_remove(struct spfc_lld_dev *lld_dev, void *uld_dev)
+{
+ struct pci_dev *pci_dev = NULL;
+ struct spfc_hba_info *hba = (struct spfc_hba_info *)uld_dev;
+ u32 probe_total_num = 0;
+ u32 probe_index = 0;
+
+ FC_CHECK_RETURN_VOID(lld_dev);
+ FC_CHECK_RETURN_VOID(uld_dev);
+ FC_CHECK_RETURN_VOID(lld_dev->hwdev);
+ FC_CHECK_RETURN_VOID(lld_dev->pdev);
+
+ pci_dev = hba->pci_dev;
+
+ /* Get total probed port number */
+ spfc_get_total_probed_num(&probe_total_num);
+ if (probe_total_num < 1) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port manager is empty and no need to remove");
+ return;
+ }
+
+ /* check pci vendor id */
+ if (pci_dev->vendor != SPFC_PCI_VENDOR_ID_RAMAXEL) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Wrong vendor id(0x%x) and exit",
+ pci_dev->vendor);
+ return;
+ }
+
+ /* Check function ability */
+ if (!sphw_support_fc(lld_dev->hwdev, NULL)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "[err]FC is not enabled in this function");
+ return;
+ }
+
+ /* Get probe index */
+ probe_index = hba->probe_index;
+
+ /* Parent context alloc check */
+ if (hba->service_cap.dev_fc_cap.max_parent_qpc_num == 0) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "[err]FC parent context not allocated in this function");
+ return;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]HBA(0x%x) start removing...", hba->port_index);
+
+	/* HBA removing... */
+ spfc_exit(pci_dev, hba);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[event]Port(0x%x) pci device removed, vendorid(0x%04x) devid(0x%04x)",
+ probe_index, pci_dev->vendor, pci_dev->device);
+
+ /* Probe index check */
+ if (probe_index < SPFC_HBA_PORT_MAX_NUM) {
+ spfc_hba[probe_index] = NULL;
+ } else {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Probe index(0x%x) is invalid and remove failed",
+ probe_index);
+ }
+
+ spfc_get_total_probed_num(&probe_total_num);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[event]Removed index=%u, RemainNum=%u, AllowNum=%u",
+ probe_index, probe_total_num, allowed_probe_num);
+}
+
+static u32 spfc_get_hba_pcie_link_state(void *hba, void *link_state)
+{
+ bool *link_state_info = link_state;
+ bool present_flag = true;
+ struct spfc_hba_info *spfc_hba = hba;
+ int ret;
+ bool last_dev_state = true;
+ bool cur_dev_state = true;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(link_state, UNF_RETURN_ERROR);
+ last_dev_state = spfc_hba->dev_present;
+ ret = sphw_get_card_present_state(spfc_hba->dev_handle, (bool *)&present_flag);
+ if (ret || !present_flag) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+			     "[event]port(0x%x) is not present, ret:%d, present_flag:%d",
+ spfc_hba->port_cfg.port_id, ret, present_flag);
+ cur_dev_state = false;
+ } else {
+ cur_dev_state = true;
+ }
+
+ spfc_hba->dev_present = cur_dev_state;
+
+ /* To prevent false alarms, the heartbeat is considered lost only
+	 * when the PCIe link is observed down on two consecutive checks.
+ */
+ if (!last_dev_state && !cur_dev_state)
+ spfc_hba->heart_status = false;
+
+ *link_state_info = spfc_hba->dev_present;
+
+ return RETURN_OK;
+}
diff --git a/drivers/scsi/spfc/hw/spfc_hba.h b/drivers/scsi/spfc/hw/spfc_hba.h
new file mode 100644
index 000000000000..937f00ea8fc7
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_hba.h
@@ -0,0 +1,341 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_HBA_H
+#define SPFC_HBA_H
+
+#include "unf_type.h"
+#include "unf_common.h"
+#include "spfc_queue.h"
+#include "sphw_crm.h"
+#define SPFC_PCI_VENDOR_ID_MASK (0xffff)
+
+#define FW_VER_LEN (32)
+#define HW_VER_LEN (32)
+#define FW_SUB_VER_LEN (24)
+
+#define SPFC_LOWLEVEL_RTTOV_TAG 0
+#define SPFC_LOWLEVEL_EDTOV_TAG 0
+#define SPFC_LOWLEVEL_DEFAULT_LOOP_BB_CREDIT (8)
+#define SPFC_LOWLEVEL_DEFAULT_32G_BB_CREDIT (255)
+#define SPFC_LOWLEVEL_DEFAULT_16G_BB_CREDIT (255)
+#define SPFC_LOWLEVEL_DEFAULT_8G_BB_CREDIT (255)
+#define SPFC_LOWLEVEL_DEFAULT_BB_SCN 0
+#define SPFC_LOWLEVEL_DEFAULT_RA_TOV UNF_DEFAULT_RATOV
+#define SPFC_LOWLEVEL_DEFAULT_ED_TOV UNF_DEFAULT_EDTOV
+
+#define SPFC_LOWLEVEL_DEFAULT_32G_ESCH_VALUE 28081
+#define SPFC_LOWLEVEL_DEFAULT_16G_ESCH_VALUE 14100
+#define SPFC_LOWLEVEL_DEFAULT_8G_ESCH_VALUE 7000
+#define SPFC_LOWLEVEL_DEFAULT_ESCH_BUST_SIZE 0x2000
+
+#define SPFC_PCI_STATUS 0x06
+
+#define SPFC_SMARTIO_WORK_MODE_FC 0x1
+#define SPFC_SMARTIO_WORK_MODE_OTHER 0xF
+#define UNF_FUN_ID_MASK 0x07
+
+#define UNF_SPFC_FC (0x01)
+#define UNF_SPFC_MAXNPIV_NUM 64 /* If NPIV is not supported, initialized to 0 */
+
+#define SPFC_MAX_COS_NUM (8)
+
+#define SPFC_INTR_ENABLE 0x5
+#define SPFC_INTR_DISABLE 0x0
+#define SPFC_CLEAR_FW_INTR 0x1
+#define SPFC_REG_ENABLE_INTR 0x00000200
+
+#define SPFC_PCI_VENDOR_ID_RAMAXEL 0x1E81
+
+#define SPFC_SCQ_CNTX_SIZE 32
+#define SPFC_SRQ_CNTX_SIZE 64
+
+#define SPFC_PORT_INIT_TIME_SEC_MAX 1
+
+#define SPFC_PORT_NAME_LABEL "spfc"
+#define SPFC_PORT_NAME_STR_LEN (16)
+
+#define SPFC_MAX_PROBE_PORT_NUM (64)
+#define SPFC_PORT_NUM_PER_TABLE (64)
+#define SPFC_MAX_CARD_NUM (32)
+
+#define SPFC_HBA_PORT_MAX_NUM SPFC_MAX_PROBE_PORT_NUM
+#define SPFC_SIRT_MIN_RXID 0
+#define SPFC_SIRT_MAX_RXID 255
+
+#define SPFC_GET_HBA_PORT_ID(hba) ((hba)->port_index)
+
+#define SPFC_MAX_WAIT_LOOP_TIMES 10000
+#define SPFC_WAIT_SESS_ENABLE_ONE_TIME_MS 1
+#define SPFC_WAIT_SESS_FREE_ONE_TIME_MS 1
+
+#define SPFC_PORT_ID_MASK 0xff0000
+
+#define SPFC_MAX_PARENT_QPC_NUM 2048
+struct spfc_port_cfg {
+ u32 port_id; /* Port ID */
+ u32 port_mode; /* Port mode:INI(0x20), TGT(0x10), BOTH(0x30) */
+ u32 port_topology; /* Port topo:0x3:loop,0xc:p2p,0xf:auto */
+ u32 port_alpa; /* Port ALPA */
+ u32 max_queue_depth; /* Max Queue depth Registration to SCSI */
+ u32 sest_num; /* IO burst num:512-4096 */
+ u32 max_login; /* Max Login Session. */
+ u32 node_name_hi; /* nodename high 32 bits */
+ u32 node_name_lo; /* nodename low 32 bits */
+ u32 port_name_hi; /* portname high 32 bits */
+ u32 port_name_lo; /* portname low 32 bits */
+ u32 port_speed; /* Port speed 0:auto 4:4Gbps 8:8Gbps 16:16Gbps */
+ u32 interrupt_delay; /* Delay times(ms) in interrupt */
+ u32 tape_support; /* tape support */
+};
+
+#define SPFC_VER_INFO_SIZE 128
+struct spfc_drv_version {
+ char ver[SPFC_VER_INFO_SIZE];
+};
+
+struct spfc_card_info {
+ u32 card_num : 8;
+ u32 func_num : 8;
+ u32 base_func : 8;
+ /* Card type:UNF_FC_SERVER_BOARD_32_G(6) 32G mode,
+ * UNF_FC_SERVER_BOARD_16_G(7)16G mode
+ */
+ u32 card_type : 8;
+};
+
+struct spfc_card_num_manage {
+ bool is_removing;
+ u32 port_count;
+ u64 card_number;
+};
+
+struct spfc_sim_ini_err {
+ u32 err_code;
+ u32 times;
+};
+
+struct spfc_sim_pcie_err {
+ u32 err_code;
+ u32 times;
+};
+
+struct spfc_led_state {
+ u8 green_speed_led;
+ u8 yellow_speed_led;
+ u8 ac_led;
+ u8 rsvd;
+};
+
+enum spfc_led_activity {
+ SPFC_LED_CFG_ACTVE_FRAME = 0,
+ SPFC_LED_CFG_ACTVE_FC = 3
+};
+
+enum spfc_queue_set_stage {
+ SPFC_QUEUE_SET_STAGE_INIT = 0,
+ SPFC_QUEUE_SET_STAGE_SCANNING,
+ SPFC_QUEUE_SET_STAGE_FLUSHING,
+ SPFC_QUEUE_SET_STAGE_FLUSHDONE,
+ SPFC_QUEUE_SET_STAGE_BUTT
+};
+
+struct spfc_vport_info {
+ u64 node_name;
+ u64 port_name;
+ u32 port_mode; /* INI, TGT or both */
+ u32 nport_id; /* maybe acquired by lowlevel and update to common */
+ void *vport;
+ u16 vp_index;
+};
+
+struct spfc_srq_delay_info {
+	u8 srq_delay_flag; /* Check whether a delay is needed */
+ u8 root_rq_rcvd_flag;
+ u16 rsd;
+
+ spinlock_t srq_lock;
+ struct unf_frame_pkg frame_pkg;
+
+ struct delayed_work del_work;
+};
+
+struct spfc_fw_ver_detail {
+ u8 ucode_ver[SPFC_VER_LEN];
+ u8 ucode_compile_time[SPFC_COMPILE_TIME_LEN];
+
+ u8 up_ver[SPFC_VER_LEN];
+ u8 up_compile_time[SPFC_COMPILE_TIME_LEN];
+
+ u8 boot_ver[SPFC_VER_LEN];
+ u8 boot_compile_time[SPFC_COMPILE_TIME_LEN];
+};
+
+/* get wwpn and wwnn */
+struct spfc_chip_info {
+ u8 work_mode;
+ u8 tape_support;
+ u64 wwpn;
+ u64 wwnn;
+};
+
+/* Default SQ info */
+struct spfc_default_sq_info {
+ u32 sq_cid;
+ u32 sq_xid;
+ u32 fun_cid;
+ u32 default_sq_flag;
+};
+
+struct spfc_hba_info {
+ struct pci_dev *pci_dev;
+ void *dev_handle;
+
+ struct fc_service_cap service_cap; /* struct fc_service_cap pstFcoeServiceCap; */
+
+ struct spfc_scq_info scq_info[SPFC_TOTAL_SCQ_NUM];
+ struct spfc_srq_info els_srq_info;
+
+ struct spfc_vport_info vport_info[UNF_SPFC_MAXNPIV_NUM + 1];
+
+ /* PCI IO Memory */
+ void __iomem *bar0;
+ u32 bar0_len;
+
+ struct spfc_parent_queue_mgr *parent_queue_mgr;
+
+	/* Linked-list SQ WQE page pool */
+ struct spfc_sq_wqepage_pool sq_wpg_pool;
+
+ enum spfc_queue_set_stage queue_set_stage;
+ u32 next_clear_sq;
+ u32 default_sqid;
+
+ /* Port parameters, Obtained through firmware */
+ u16 queue_set_max_count;
+ u8 port_type; /* FC or FCoE Port */
+ u8 port_index; /* Phy Port */
+ u32 default_scqn;
+ char fw_ver[FW_VER_LEN]; /* FW version */
+ char hw_ver[HW_VER_LEN]; /* HW version */
+ char mst_fw_ver[FW_SUB_VER_LEN];
+ char fc_fw_ver[FW_SUB_VER_LEN];
+ u8 chip_type; /* chiptype:Smart or fc */
+ u8 work_mode;
+ struct spfc_card_info card_info;
+ char port_name[SPFC_PORT_NAME_STR_LEN];
+ u32 probe_index;
+
+ u16 exi_base;
+ u16 exi_count;
+ u16 vpf_count;
+ u8 vpid_start;
+ u8 vpid_end;
+
+ spinlock_t flush_state_lock;
+ bool in_flushing;
+
+ spinlock_t clear_state_lock;
+ bool port_is_cleared;
+
+ struct spfc_port_cfg port_cfg; /* Obtained through Config */
+
+ void *lport; /* Used in UNF level */
+
+ u8 sys_node_name[UNF_WWN_LEN];
+ u8 sys_port_name[UNF_WWN_LEN];
+
+ struct completion hba_init_complete;
+ struct completion mbox_complete;
+ struct completion vpf_complete;
+ struct completion fcfi_complete;
+ struct completion get_sfp_complete;
+
+ u16 init_stage;
+ u16 removing;
+ bool sfp_on;
+ bool dev_present;
+ bool heart_status;
+ spinlock_t hba_lock;
+ u32 port_topo_cfg;
+ u32 port_bb_scn_cfg;
+ u32 port_loop_role;
+ u32 port_speed_cfg;
+ u32 max_support_speed;
+ u32 min_support_speed;
+ u32 server_max_speed;
+
+ u8 remote_rttov_tag;
+ u8 remote_edtov_tag;
+ u16 compared_bb_scn;
+ u16 remote_bb_credit;
+ u32 compared_edtov_val;
+ u32 compared_ratov_val;
+ enum unf_act_topo active_topo;
+ u32 active_port_speed;
+ u32 active_rxbb_credit;
+ u32 active_bb_scn;
+
+ u32 phy_link;
+
+ enum unf_port_mode port_mode;
+
+ u32 fcp_cfg;
+
+ /* loop */
+ u8 active_alpa;
+ u8 loop_map_valid;
+ u8 loop_map[UNF_LOOPMAP_COUNT];
+
+ /* sfp info dma */
+ void *sfp_buf;
+ dma_addr_t sfp_dma_addr;
+ u32 sfp_status;
+ int chip_temp;
+ u32 sfp_posion;
+
+ u32 cos_bitmap;
+ atomic_t cos_rport_cnt[SPFC_MAX_COS_NUM];
+
+ /* fw debug dma buffer */
+ void *debug_buf;
+ dma_addr_t debug_buf_dma_addr;
+ void *log_buf;
+ dma_addr_t fw_log_dma_addr;
+
+ void *dma_addr;
+ dma_addr_t update_dma_addr;
+
+ struct spfc_sim_ini_err sim_ini_err;
+ struct spfc_sim_pcie_err sim_pcie_err;
+
+ struct spfc_led_state led_states;
+
+ u32 fec_status;
+
+ struct workqueue_struct *work_queue;
+ struct work_struct els_srq_clear_work;
+ u64 reset_time;
+
+ spinlock_t spin_lock;
+
+ struct spfc_srq_delay_info srq_delay_info;
+ struct spfc_fw_ver_detail hardinfo;
+ struct spfc_default_sq_info default_sq_info;
+};
+
+extern struct spfc_hba_info *spfc_hba[SPFC_HBA_PORT_MAX_NUM];
+extern spinlock_t probe_spin_lock;
+extern ulong probe_bit_map[SPFC_MAX_PROBE_PORT_NUM / SPFC_PORT_NUM_PER_TABLE];
+
+u32 spfc_port_reset(struct spfc_hba_info *hba);
+void spfc_flush_scq_ctx(struct spfc_hba_info *hba);
+void spfc_flush_srq_ctx(struct spfc_hba_info *hba);
+void spfc_set_hba_flush_state(struct spfc_hba_info *hba, bool in_flush);
+void spfc_set_hba_clear_state(struct spfc_hba_info *hba, bool clear_flag);
+u32 spfc_get_probe_index_by_port_id(u32 port_id, u32 *probe_index);
+void spfc_get_total_probed_num(u32 *probe_cnt);
+u32 spfc_sfp_switch(void *hba, void *para_in);
+bool spfc_hba_is_present(struct spfc_hba_info *hba);
+
+#endif
diff --git a/drivers/scsi/spfc/hw/spfc_hw_wqe.h b/drivers/scsi/spfc/hw/spfc_hw_wqe.h
new file mode 100644
index 000000000000..e03d24a98579
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_hw_wqe.h
@@ -0,0 +1,1645 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_HW_WQE_H
+#define SPFC_HW_WQE_H
+
+#define FC_ICQ_EN
+#define FC_SCSI_CMDIU_LEN 48
+#define FC_NVME_CMDIU_LEN 96
+#define FC_LS_GS_USERID_CNT_MAX 10
+#define FC_SENSEDATA_USERID_CNT_MAX 2
+#define FC_INVALID_MAGIC_NUM 0xFFFFFFFF
+#define FC_INVALID_HOTPOOLTAG 0xFFFF
+
+/* TASK TYPE: in order to be compatible with EDA, please add new types before BUTT. */
+enum spfc_task_type {
+	SPFC_TASK_T_EMPTY = 0, /* SCQE TYPE: means task type not initialized */
+
+ SPFC_TASK_T_IWRITE = 1, /* SQE TYPE: ini send FCP Write Command */
+ SPFC_TASK_T_IREAD = 2, /* SQE TYPE: ini send FCP Read Command */
+ SPFC_TASK_T_IRESP = 3, /* SCQE TYPE: ini recv fcp rsp for IREAD/IWRITE/ITMF */
+ SPFC_TASK_T_TCMND = 4, /* NA */
+ SPFC_TASK_T_TREAD = 5, /* SQE TYPE: tgt send FCP Read Command */
+ SPFC_TASK_T_TWRITE = 6, /* SQE TYPE: tgt send FCP Write Command (XFER_RDY) */
+ SPFC_TASK_T_TRESP = 7, /* SQE TYPE: tgt send fcp rsp of Read/Write */
+ SPFC_TASK_T_TSTS = 8, /* SCQE TYPE: tgt sts for TREAD/TWRITE/TRESP */
+ SPFC_TASK_T_ABTS = 9, /* SQE TYPE: ini send abts request Command */
+ SPFC_TASK_T_IELS = 10, /* NA */
+ SPFC_TASK_T_ITMF = 11, /* SQE TYPE: ini send tmf request Command */
+ SPFC_TASK_T_CLEAN_UP = 12, /* NA */
+ SPFC_TASK_T_CLEAN_UP_ALL = 13, /* NA */
+ SPFC_TASK_T_UNSOLICITED = 14, /* NA */
+ SPFC_TASK_T_ERR_WARN = 15, /* NA */
+ SPFC_TASK_T_SESS_EN = 16, /* CMDQ TYPE: enable session */
+ SPFC_TASK_T_SESS_DIS = 17, /* NA */
+ SPFC_TASK_T_SESS_DEL = 18, /* NA */
+ SPFC_TASK_T_RQE_REPLENISH = 19, /* NA */
+
+ SPFC_TASK_T_RCV_TCMND = 20, /* SCQE TYPE: tgt recv fcp cmd */
+ SPFC_TASK_T_RCV_ELS_CMD = 21, /* SCQE TYPE: tgt recv els cmd */
+ SPFC_TASK_T_RCV_ABTS_CMD = 22, /* SCQE TYPE: tgt recv abts cmd */
+ SPFC_TASK_T_RCV_IMMEDIATE = 23, /* SCQE TYPE: tgt recv immediate data */
+	/* SQE TYPE: send ELS rsp. PLOGI_ACC, PRLI_ACC will carry the parent
+	 * context parameter indication.
+ */
+ SPFC_TASK_T_ELS_RSP = 24,
+ SPFC_TASK_T_ELS_RSP_STS = 25, /* SCQE TYPE: ELS rsp sts */
+ SPFC_TASK_T_ABTS_RSP = 26, /* CMDQ TYPE: tgt send abts rsp */
+ SPFC_TASK_T_ABTS_RSP_STS = 27, /* SCQE TYPE: tgt abts rsp sts */
+
+ SPFC_TASK_T_ABORT = 28, /* CMDQ TYPE: tgt send Abort Command */
+ SPFC_TASK_T_ABORT_STS = 29, /* SCQE TYPE: Abort sts */
+
+ SPFC_TASK_T_ELS = 30, /* SQE TYPE: send ELS request Command */
+ SPFC_TASK_T_RCV_ELS_RSP = 31, /* SCQE TYPE: recv ELS response */
+
+ SPFC_TASK_T_GS = 32, /* SQE TYPE: send GS request Command */
+ SPFC_TASK_T_RCV_GS_RSP = 33, /* SCQE TYPE: recv GS response */
+
+ SPFC_TASK_T_SESS_EN_STS = 34, /* SCQE TYPE: enable session sts */
+ SPFC_TASK_T_SESS_DIS_STS = 35, /* NA */
+ SPFC_TASK_T_SESS_DEL_STS = 36, /* NA */
+
+ SPFC_TASK_T_RCV_ABTS_RSP = 37, /* SCQE TYPE: ini recv abts rsp */
+
+ SPFC_TASK_T_BUFFER_CLEAR = 38, /* CMDQ TYPE: Buffer Clear */
+ SPFC_TASK_T_BUFFER_CLEAR_STS = 39, /* SCQE TYPE: Buffer Clear sts */
+ SPFC_TASK_T_FLUSH_SQ = 40, /* CMDQ TYPE: flush sq */
+ SPFC_TASK_T_FLUSH_SQ_STS = 41, /* SCQE TYPE: flush sq sts */
+
+ SPFC_TASK_T_SESS_RESET = 42, /* SQE TYPE: Reset session */
+ SPFC_TASK_T_SESS_RESET_STS = 43, /* SCQE TYPE: Reset session sts */
+ SPFC_TASK_T_RQE_REPLENISH_STS = 44, /* NA */
+ SPFC_TASK_T_DUMP_EXCH = 45, /* CMDQ TYPE: dump exch */
+ SPFC_TASK_T_INIT_SRQC = 46, /* CMDQ TYPE: init SRQC */
+ SPFC_TASK_T_CLEAR_SRQ = 47, /* CMDQ TYPE: clear SRQ */
+ SPFC_TASK_T_CLEAR_SRQ_STS = 48, /* SCQE TYPE: clear SRQ sts */
+ SPFC_TASK_T_INIT_SCQC = 49, /* CMDQ TYPE: init SCQC */
+ SPFC_TASK_T_DEL_SCQC = 50, /* CMDQ TYPE: delete SCQC */
+ SPFC_TASK_T_TMF_RESP = 51, /* SQE TYPE: tgt send tmf rsp */
+ SPFC_TASK_T_DEL_SRQC = 52, /* CMDQ TYPE: delete SRQC */
+ SPFC_TASK_T_RCV_IMMI_CONTINUE = 53, /* SCQE TYPE: tgt recv continue immediate data */
+
+ SPFC_TASK_T_ITMF_RESP = 54, /* SCQE TYPE: ini recv tmf rsp */
+ SPFC_TASK_T_ITMF_MARKER_STS = 55, /* SCQE TYPE: tmf marker sts */
+ SPFC_TASK_T_TACK = 56,
+ SPFC_TASK_T_SEND_AEQERR = 57,
+ SPFC_TASK_T_ABTS_MARKER_STS = 58, /* SCQE TYPE: abts marker sts */
+ SPFC_TASK_T_FLR_CLEAR_IO = 59, /* FLR clear io type */
+ SPFC_TASK_T_CREATE_SSQ_CONTEXT = 60,
+ SPFC_TASK_T_CLEAR_SSQ_CONTEXT = 61,
+ SPFC_TASK_T_EXCH_ID_FREE = 62,
+ SPFC_TASK_T_DIFX_RESULT_STS = 63,
+ SPFC_TASK_T_EXCH_ID_FREE_ABORT = 64,
+ SPFC_TASK_T_EXCH_ID_FREE_ABORT_STS = 65,
+ SPFC_TASK_T_PARAM_CHECK_FAIL = 66,
+ SPFC_TASK_T_TGT_UNKNOWN = 67,
+ SPFC_TASK_T_NVME_LS = 70, /* SQE TYPE: Snd Ls Req */
+ SPFC_TASK_T_RCV_NVME_LS_RSP = 71, /* SCQE TYPE: Rcv Ls Rsp */
+
+ SPFC_TASK_T_NVME_LS_RSP = 72, /* SQE TYPE: Snd Ls Rsp */
+ SPFC_TASK_T_RCV_NVME_LS_RSP_STS = 73, /* SCQE TYPE: Rcv Ls Rsp sts */
+
+ SPFC_TASK_T_RCV_NVME_LS_CMD = 74, /* SCQE TYPE: Rcv ls cmd */
+
+ SPFC_TASK_T_NVME_IREAD = 75, /* SQE TYPE: Ini Snd Nvme Read Cmd */
+ SPFC_TASK_T_NVME_IWRITE = 76, /* SQE TYPE: Ini Snd Nvme write Cmd */
+
+ SPFC_TASK_T_NVME_TREAD = 77, /* SQE TYPE: Tgt Snd Nvme Read Cmd */
+ SPFC_TASK_T_NVME_TWRITE = 78, /* SQE TYPE: Tgt Snd Nvme write Cmd */
+
+ SPFC_TASK_T_NVME_IRESP = 79, /* SCQE TYPE: Ini recv nvme rsp for NVMEIREAD/NVMEIWRITE */
+
+ SPFC_TASK_T_INI_IO_ABORT = 80, /* SQE type: INI Abort Cmd */
+ SPFC_TASK_T_INI_IO_ABORT_STS = 81, /* SCQE type: INI Abort sts */
+
+ SPFC_TASK_T_INI_LS_ABORT = 82, /* SQE type: INI ls abort Cmd */
+ SPFC_TASK_T_INI_LS_ABORT_STS = 83, /* SCQE type: INI ls abort sts */
+ SPFC_TASK_T_EXCHID_TIMEOUT_STS = 84, /* SCQE TYPE: EXCH_ID TIME OUT */
+ SPFC_TASK_T_PARENT_ERR_STS = 85, /* SCQE TYPE: PARENT ERR */
+
+ SPFC_TASK_T_NOP = 86,
+ SPFC_TASK_T_NOP_STS = 87,
+
+ SPFC_TASK_T_DFX_INFO = 126,
+ SPFC_TASK_T_BUTT
+};
+
+/* error code for error report */
+
+enum spfc_err_code {
+ FC_CQE_COMPLETED = 0, /* Successful */
+ FC_SESS_HT_INSERT_FAIL = 1, /* Offload fail: hash insert fail */
+ FC_SESS_HT_INSERT_DUPLICATE = 2, /* Offload fail: duplicate offload */
+ FC_SESS_HT_BIT_SET_FAIL = 3, /* Offload fail: bloom filter set fail */
+ FC_SESS_HT_DELETE_FAIL = 4, /* Offload fail: hash delete fail(duplicate delete) */
+ FC_CQE_BUFFER_CLEAR_IO_COMPLETED = 5, /* IO done in buffer clear */
+ FC_CQE_SESSION_ONLY_CLEAR_IO_COMPLETED = 6, /* IO done in session rst mode=1 */
+ FC_CQE_SESSION_RST_CLEAR_IO_COMPLETED = 7, /* IO done in session rst mode=3 */
+ FC_CQE_TMF_RSP_IO_COMPLETED = 8, /* IO done in tgt tmf rsp */
+ FC_CQE_TMF_IO_COMPLETED = 9, /* IO done in ini tmf */
+ FC_CQE_DRV_ABORT_IO_COMPLETED = 10, /* IO done in tgt abort */
+ /*
+	 *IO done in fcp rsp process. Used for the scenario: 1.abort before cmd 2.
+	 *send fcp rsp directly after recv cmd.
+ */
+ FC_CQE_DRV_ABORT_IO_IN_RSP_COMPLETED = 11,
+ /*
+	 *IO done in fcp cmd process. Used for the scenario: 1.abort before cmd 2.child setup fail.
+ */
+ FC_CQE_DRV_ABORT_IO_IN_CMD_COMPLETED = 12,
+ FC_CQE_WQE_FLUSH_IO_COMPLETED = 13, /* IO done in FLUSH SQ */
+ FC_ERROR_CODE_DATA_DIFX_FAILED = 14, /* fcp data format check: DIFX check error */
+ /* fcp data format check: task_type is not read */
+ FC_ERROR_CODE_DATA_TASK_TYPE_INCORRECT = 15,
+ FC_ERROR_CODE_DATA_OOO_RO = 16, /* fcp data format check: data offset is not continuous */
+ FC_ERROR_CODE_DATA_EXCEEDS_DATA2TRNS = 17, /* fcp data format check: data is over run */
+ /* fcp rsp format check: payload is too short */
+ FC_ERROR_CODE_FCP_RSP_INVALID_LENGTH_FIELD = 18,
+ /* fcp rsp format check: fcp_conf need, but exch don't hold seq initiative */
+ FC_ERROR_CODE_FCP_RSP_CONF_REQ_NOT_SUPPORTED_YET = 19,
+ /* fcp rsp format check: fcp_conf is required, but it's the last seq */
+ FC_ERROR_CODE_FCP_RSP_OPENED_SEQ = 20,
+ /* xfer rdy format check: payload is too short */
+ FC_ERROR_CODE_XFER_INVALID_PAYLOAD_SIZE = 21,
+	/* xfer rdy format check: last data out hasn't finished */
+ FC_ERROR_CODE_XFER_PEND_XFER_SET = 22,
+ /* xfer rdy format check: data offset is not continuous */
+ FC_ERROR_CODE_XFER_OOO_RO = 23,
+ FC_ERROR_CODE_XFER_NULL_BURST_LEN = 24, /* xfer rdy format check: burst len is 0 */
+ FC_ERROR_CODE_REC_TIMER_EXPIRE = 25, /* Timer expire: REC_TIMER */
+ FC_ERROR_CODE_E_D_TIMER_EXPIRE = 26, /* Timer expire: E_D_TIMER */
+ FC_ERROR_CODE_ABORT_TIMER_EXPIRE = 27, /* Timer expire: Abort timer */
+ FC_ERROR_CODE_ABORT_MAGIC_NUM_NOT_MATCH = 28, /* Abort IO magic number mismatch */
+ FC_IMMI_CMDPKT_SETUP_FAIL = 29, /* RX immediate data cmd pkt child setup fail */
+ FC_ERROR_CODE_DATA_SEQ_ID_NOT_EQUAL = 30, /* RX fcp data sequence id not equal */
+ FC_ELS_GS_RSP_EXCH_CHECK_FAIL = 31, /* ELS/GS exch info check fail */
+ FC_CQE_ELS_GS_SRQE_GET_FAIL = 32, /* ELS/GS process get SRQE fail */
+ FC_CQE_DATA_DMA_REQ_FAIL = 33, /* SMF soli-childdma rsp error */
+ FC_CQE_SESSION_CLOSED = 34, /* Session is closed */
+ FC_SCQ_IS_FULL = 35, /* SCQ is full */
+ FC_SRQ_IS_FULL = 36, /* SRQ is full */
+ FC_ERROR_DUCHILDCTX_SETUP_FAIL = 37, /* dpchild ctx setup fail */
+ FC_ERROR_INVALID_TXMFS = 38, /* invalid txmfs */
+	FC_ERROR_OFFLOAD_LACKOF_SCQE_FAIL = 39, /* offload fail, lack of SCQE, through AEQ */
+	FC_ERROR_INVALID_TASK_ID = 40, /* tx invalid task id */
+	FC_ERROR_INVALID_PKT_LEN = 41, /* tx els gs packet len check */
+ FC_CQE_ELS_GS_REQ_CLR_IO_COMPLETED = 42, /* IO done in els gs tx */
+ FC_CQE_ELS_RSP_CLR_IO_COMPLETED = 43, /* IO done in els rsp tx */
+ FC_ERROR_CODE_RESID_UNDER_ERR = 44, /* FCP RSP RESID ERROR */
+ FC_ERROR_EXCH_ID_FREE_ERR = 45, /* Abnormal free xid failed */
+ FC_ALLOC_EXCH_ID_FAILED = 46, /* ucode alloc EXCH ID failed */
+ FC_ERROR_DUPLICATE_IO_RECEIVED = 47, /* Duplicate tcmnd or tmf rsp received */
+ FC_ERROR_RXID_MISCOMPARE = 48,
+ FC_ERROR_FAILOVER_CLEAR_VALID_HOST = 49, /* Failover cleared valid host io */
+ FC_ERROR_EXCH_ID_NOT_MATCH = 50, /* SCQ TYPE: xid not match */
+ FC_ERROR_ABORT_FAIL = 51, /* SCQ TYPE: abort fail */
+ FC_ERROR_SHARD_TABLE_OP_FAIL = 52, /* SCQ TYPE: shard table OP fail */
+ FC_ERROR_E0E1_FAIL = 53,
+ FC_INSERT_EXCH_ID_HASH_FAILED = 54, /* ucode INSERT EXCH ID HASH failed */
+ FC_ERROR_CODE_FCP_RSP_UPDMA_FAILED = 55, /* up dma req failed,while fcp rsp is rcving */
+ FC_ERROR_CODE_SID_DID_NOT_MATCH = 56, /* sid or did not match */
+ FC_ERROR_DATA_NOT_REL_OFF = 57, /* data not rel off */
+ FC_ERROR_CODE_EXCH_ID_TIMEOUT = 58, /* exch id timeout */
+ FC_ERROR_PARENT_CHECK_FAIL = 59,
+ FC_ERROR_RECV_REC_REJECT = 60, /* RECV REC RSP REJECT */
+ FC_ERROR_RECV_SRR_REJECT = 61, /* RECV REC SRR REJECT */
+ FC_ERROR_REC_NOT_FIND_EXID_INVALID = 62,
+ FC_ERROR_RECV_REC_NO_ERR = 63,
+ FC_ERROR_PARENT_CTX_ERR = 64
+};
+
+/* AEQ EVENT TYPE */
+enum spfc_aeq_evt_type {
+	/* SCQ and SRQ not enough, HOST will initiate an operation on the associated SCQ/SRQ */
+ FC_AEQ_EVENT_QUEUE_ERROR = 48,
+	FC_AEQ_EVENT_WQE_FATAL_ERROR = 49, /* WQE MSN check error, HOST will reset port */
+ FC_AEQ_EVENT_CTX_FATAL_ERROR = 50, /* serious chip error, HOST will reset chip */
+ FC_AEQ_EVENT_OFFLOAD_ERROR = 51,
+ FC_FC_AEQ_EVENT_TYPE_LAST
+};
+
+enum spfc_protocol_class {
+ FC_PROTOCOL_CLASS_3 = 0x0,
+ FC_PROTOCOL_CLASS_2 = 0x1,
+ FC_PROTOCOL_CLASS_1 = 0x2,
+ FC_PROTOCOL_CLASS_F = 0x3,
+ FC_PROTOCOL_CLASS_OTHER = 0x4
+};
+
+enum spfc_aeq_evt_err_code {
+ /* detail type of resource lack */
+ FC_SCQ_IS_FULL_ERR = 0,
+ FC_SRQ_IS_FULL_ERR,
+
+ /* detail type of FC_AEQ_EVENT_WQE_FATAL_ERROR */
+ FC_SQE_CHILD_SETUP_WQE_MSN_ERR = 2,
+ FC_SQE_CHILD_SETUP_WQE_GPA_ERR,
+ FC_CMDPKT_CHILD_SETUP_INVALID_WQE_ERR_1,
+ FC_CMDPKT_CHILD_SETUP_INVALID_WQE_ERR_2,
+ FC_CLEAEQ_WQE_ERR,
+ FC_WQEFETCH_WQE_MSN_ERR,
+ FC_WQEFETCH_QUINFO_ERR,
+
+ /* detail type of FC_AEQ_EVENT_CTX_FATAL_ERROR */
+ FC_SCQE_ERR_BIT_ERR = 9,
+ FC_UPDMA_ADDR_REQ_SRQ_ERR,
+ FC_SOLICHILDDMA_ADDR_REQ_ERR,
+ FC_UNSOLICHILDDMA_ADDR_REQ_ERR,
+ FC_SQE_CHILD_SETUP_QINFO_ERR_1,
+ FC_SQE_CHILD_SETUP_QINFO_ERR_2,
+ FC_CMDPKT_CHILD_SETUP_QINFO_ERR_1,
+ FC_CMDPKT_CHILD_SETUP_QINFO_ERR_2,
+ FC_CMDPKT_CHILD_SETUP_PMSN_ERR,
+ FC_CLEAEQ_CTX_ERR,
+ FC_WQEFETCH_CTX_ERR,
+ FC_FLUSH_QPC_ERR_LQP,
+ FC_FLUSH_QPC_ERR_SMF,
+ FC_PREFETCH_QPC_ERR_PCM_MHIT_LQP,
+ FC_PREFETCH_QPC_ERR_PCM_MHIT_FQG,
+ FC_PREFETCH_QPC_ERR_PCM_ABM_FQG,
+ FC_PREFETCH_QPC_ERR_MAP_FQG,
+ FC_PREFETCH_QPC_ERR_MAP_LQP,
+ FC_PREFETCH_QPC_ERR_SMF_RTN,
+ FC_PREFETCH_QPC_ERR_CFG,
+ FC_PREFETCH_QPC_ERR_FLSH_HIT,
+ FC_PREFETCH_QPC_ERR_FLSH_ACT,
+ FC_PREFETCH_QPC_ERR_ABM_W_RSC,
+ FC_PREFETCH_QPC_ERR_RW_ABM,
+ FC_PREFETCH_QPC_ERR_DEFAULT,
+ FC_CHILDHASH_INSERT_SW_ERR,
+ FC_CHILDHASH_LOOKUP_SW_ERR,
+ FC_CHILDHASH_DEL_SW_ERR,
+ FC_EXCH_ID_FREE_SW_ERR,
+ FC_FLOWHASH_INSERT_SW_ERR,
+ FC_FLOWHASH_LOOKUP_SW_ERR,
+ FC_FLOWHASH_DEL_SW_ERR,
+ FC_FLUSH_QPC_ERR_USED,
+ FC_FLUSH_QPC_ERR_OUTER_LOCK,
+ FC_SETUP_SESSION_ERR,
+
+ FC_AEQ_EVT_ERR_CODE_BUTT
+
+};
+
+/* AEQ data structure */
+struct spfc_aqe_data {
+ union {
+ struct {
+ u32 conn_id : 16;
+ u32 rsvd : 8;
+ u32 evt_code : 8;
+ } wd0;
+
+ u32 data0;
+ };
+
+ union {
+ struct {
+ u32 xid : 20;
+ u32 rsvd : 12;
+ } wd1;
+
+ u32 data1;
+ };
+};
+
+/* Control Section: Common Header */
+struct spfc_wqe_ctrl_ch {
+ union {
+ struct {
+ u32 bdsl : 8;
+ u32 drv_sl : 2;
+ u32 rsvd0 : 4;
+ u32 wf : 1;
+ u32 cf : 1;
+ u32 tsl : 5;
+ u32 va : 1;
+ u32 df : 1;
+ u32 cr : 1;
+ u32 dif_sl : 3;
+ u32 csl : 2;
+ u32 ctrl_sl : 2;
+ u32 owner : 1;
+ } wd0;
+
+ u32 ctrl_ch_val;
+ };
+};
+
+/* Control Section: Queue Specific Field */
+struct spfc_wqe_ctrl_qsf {
+ u32 wqe_sn : 16;
+ u32 dump_wqe_sn : 16;
+};
+
+/* DIF info definition in WQE */
+struct spfc_fc_dif_info {
+ struct {
+ u32 app_tag_ctrl : 3; /* DIF/DIX APP TAG Control */
+ /* Bit 0: scenario of the reference tag verify mode.
+ *Bit 1: scenario of the reference tag insert/replace mode.
+ */
+ u32 ref_tag_mode : 2;
+		/* 0: fixed; 1: increment; */
+ u32 ref_tag_ctrl : 3; /* The DIF/DIX Reference tag control */
+ u32 grd_agm_ini_ctrl : 3;
+ u32 grd_agm_ctrl : 2; /* Bit 0: DIF/DIX guard verify algorithm control */
+ /* Bit 1: DIF/DIX guard replace or insert algorithm control */
+ u32 grd_ctrl : 3; /* The DIF/DIX Guard control */
+ u32 dif_verify_type : 2; /* verify type */
+ u32 difx_ref_esc : 1; /* Check blocks whose reference tag contains 0xFFFF flag */
+ u32 difx_app_esc : 1;/* Check blocks whose application tag contains 0xFFFF flag */
+ u32 rsvd : 8;
+ u32 sct_size : 1; /* Sector size, 1: 4K; 0: 512 */
+ u32 smd_tp : 2;
+ u32 difx_en : 1;
+ } wd0;
+
+ struct {
+ u32 cmp_app_tag_msk : 16;
+ u32 rsvd : 7;
+ u32 lun_qos_en : 2;
+ u32 vpid : 7;
+ } wd1;
+
+ u16 cmp_app_tag;
+ u16 rep_app_tag;
+
+ u32 cmp_ref_tag;
+ u32 rep_ref_tag;
+};
+
+/* Task Section: TMF SQE for INI */
+struct spfc_tmf_info {
+ union {
+ struct {
+ u32 reset_exch_end : 16;
+ u32 reset_exch_start : 16;
+ } bs;
+ u32 value;
+ } w0;
+
+ union {
+ struct {
+ u32 reset_did : 24;
+ u32 reset_type : 2;
+ u32 marker_sts : 1;
+ u32 rsvd0 : 5;
+ } bs;
+ u32 value;
+ } w1;
+
+ union {
+ struct {
+ u32 reset_sid : 24;
+ u32 rsvd0 : 8;
+ } bs;
+ u32 value;
+ } w2;
+
+ u8 reset_lun[8];
+};
+
+/* Task Section: CMND SQE for INI */
+struct spfc_sqe_icmnd {
+ u8 fcp_cmnd_iu[FC_SCSI_CMDIU_LEN];
+ union {
+ struct spfc_fc_dif_info dif_info;
+ struct spfc_tmf_info tmf;
+ } info;
+};
+
+/* Task Section: ABTS SQE */
+struct spfc_sqe_abts {
+ u32 fh_parm_abts;
+ u32 hotpooltag;
+ u32 release_timer;
+};
+
+struct spfc_keys {
+ struct {
+ u32 smac1 : 8;
+ u32 smac0 : 8;
+ u32 rsv : 16;
+ } wd0;
+
+ u8 smac[4];
+
+ u8 dmac[6];
+ u8 sid[3];
+ u8 did[3];
+
+ struct {
+ u32 port_id : 3;
+ u32 host_id : 2;
+ u32 rsvd : 27;
+ } wd5;
+ u32 rsvd;
+};
+
+/* BDSL: Session Enable WQE. The keys field only uses 26 bytes of room */
+struct spfc_cmdqe_sess_en {
+ struct {
+ u32 rx_id : 16;
+ u32 port_id : 8;
+ u32 task_type : 8;
+ } wd0;
+
+ struct {
+ u32 cid : 20;
+ u32 rsvd1 : 12;
+ } wd1;
+
+ struct {
+ u32 conn_id : 16;
+ u32 scqn : 16;
+ } wd2;
+
+ struct {
+ u32 xid_p : 20;
+ u32 rsvd3 : 12;
+ } wd3;
+
+ u32 context_gpa_hi;
+ u32 context_gpa_lo;
+ struct spfc_keys keys;
+ u32 context[64];
+};
+
+/* Control Section */
+struct spfc_wqe_ctrl {
+ struct spfc_wqe_ctrl_ch ch;
+ struct spfc_wqe_ctrl_qsf qsf;
+};
+
+struct spfc_sqe_els_rsp {
+ struct {
+ u32 echo_flag : 16;
+ u32 data_len : 16;
+ } wd0;
+
+ struct {
+ u32 rsvd1 : 27;
+ u32 offload_flag : 1;
+ u32 lp_bflag : 1;
+ u32 clr_io : 1;
+ u32 para_update : 2;
+ } wd1;
+
+ struct {
+ u32 seq_cnt : 1;
+ u32 e_d_tov : 1;
+ u32 rsvd2 : 6;
+		u32 class_mode : 8; /* 0:class3, 1:class2 */
+ u32 tx_mfs : 16;
+ } wd2;
+
+ u32 e_d_tov_timer_val;
+
+ struct {
+ u32 conf : 1;
+ u32 rec : 1;
+ u32 xfer_dis : 1;
+ u32 immi_taskid_cnt : 13;
+ u32 immi_taskid_start : 16;
+ } wd4;
+
+ u32 first_burst_len;
+
+ struct {
+ u32 reset_exch_end : 16;
+ u32 reset_exch_start : 16;
+ } wd6;
+
+ struct {
+ u32 scqn : 16;
+ u32 hotpooltag : 16;
+ } wd7;
+
+ u32 magic_local;
+ u32 magic_remote;
+ u32 ts_rcv_echo_req;
+ u32 sid;
+ u32 did;
+ u32 context_gpa_hi;
+ u32 context_gpa_lo;
+};
+
+struct spfc_sqe_reset_session {
+ struct {
+ u32 reset_exch_end : 16;
+ u32 reset_exch_start : 16;
+ } wd0;
+
+ struct {
+ u32 reset_did : 24;
+ u32 mode : 2;
+ u32 rsvd : 6;
+ } wd1;
+
+ struct {
+ u32 reset_sid : 24;
+ u32 rsvd : 8;
+ } wd2;
+
+ struct {
+ u32 scqn : 16;
+ u32 rsvd : 16;
+ } wd3;
+};
+
+struct spfc_sqe_nop_sq {
+ struct {
+ u32 scqn : 16;
+ u32 rsvd : 16;
+ } wd0;
+ u32 magic_num;
+};
+
+struct spfc_sqe_t_els_gs {
+ u16 echo_flag;
+ u16 data_len;
+
+ struct {
+ u32 rsvd1 : 9;
+ u32 offload_flag : 1;
+ u32 origin_hottag : 16;
+ u32 rec_flag : 1;
+ u32 rec_support : 1;
+ u32 lp_bflag : 1;
+ u32 clr_io : 1;
+ u32 para_update : 2;
+ } wd4;
+
+ struct {
+ u32 seq_cnt : 1;
+ u32 e_d_tov : 1;
+ u32 rsvd2 : 14;
+ u32 tx_mfs : 16;
+ } wd5;
+
+ u32 e_d_tov_timer_val;
+
+ struct {
+ u32 reset_exch_end : 16;
+ u32 reset_exch_start : 16;
+ } wd6;
+
+ struct {
+ u32 scqn : 16;
+ u32 hotpooltag : 16; /* used for send ELS rsp */
+ } wd7;
+
+ u32 sid;
+ u32 did;
+ u32 context_gpa_hi;
+ u32 context_gpa_lo;
+ u32 origin_magicnum;
+};
+
+struct spfc_sqe_els_gs_elsrsp_comm {
+ u16 rsvd;
+ u16 data_len;
+};
+
+struct spfc_sqe_lpb_msg {
+ struct {
+ u32 reset_exch_end : 16;
+ u32 reset_exch_start : 16;
+ } w0;
+
+ struct {
+ u32 reset_did : 24;
+ u32 reset_type : 2;
+ u32 rsvd0 : 6;
+ } w1;
+
+ struct {
+ u32 reset_sid : 24;
+ u32 rsvd0 : 8;
+ } w2;
+
+ u16 tmf_exch_id;
+ u16 rsvd1;
+
+ u8 reset_lun[8];
+};
+
+/* SQE Task Section's Contents except Common Header */
+union spfc_sqe_ts_cont {
+ struct spfc_sqe_icmnd icmnd;
+ struct spfc_sqe_abts abts;
+ struct spfc_sqe_els_rsp els_rsp;
+ struct spfc_sqe_t_els_gs t_els_gs;
+ struct spfc_sqe_els_gs_elsrsp_comm els_gs_elsrsp_comm;
+ struct spfc_sqe_reset_session reset_session;
+ struct spfc_sqe_lpb_msg lpb_msg;
+ struct spfc_sqe_nop_sq nop_sq;
+ u32 value[17];
+};
+
+struct spfc_sqe_nvme_icmnd_part2 {
+ u8 nvme_cmnd_iu_part2_data[FC_NVME_CMDIU_LEN - FC_SCSI_CMDIU_LEN];
+};
+
+union spfc_sqe_ts_ex {
+ struct spfc_sqe_nvme_icmnd_part2 nvme_icmnd_part2;
+ u32 value[12];
+};
+
+struct spfc_sqe_ts {
+ /* SQE Task Section's Common Header */
+ u32 local_xid : 16; /* local exch_id, icmnd/els send used for hotpooltag */
+ u32 crc_inj : 1;
+ u32 immi_std : 1;
+ u32 cdb_type : 1; /* cdb_type = 0:CDB_LEN = 16B, cdb_type = 1:CDB_LEN = 32B */
+ u32 rsvd : 5; /* used for loopback saving bdsl's num */
+ u32 task_type : 8;
+
+ struct {
+ u16 conn_id;
+ u16 remote_xid;
+ } wd0;
+
+ u32 xid : 20;
+ u32 sqn : 12;
+ u32 cid;
+ u32 magic_num;
+ union spfc_sqe_ts_cont cont;
+};
+
+struct spfc_constant_sge {
+ u32 buf_addr_hi;
+ u32 buf_addr_lo;
+};
+
+struct spfc_variable_sge {
+ u32 buf_addr_hi;
+ u32 buf_addr_lo;
+
+ struct {
+ u32 buf_len : 31;
+ u32 r_flag : 1;
+ } wd0;
+
+ struct {
+ u32 buf_addr_gpa : 16;
+ u32 xid : 14;
+ u32 extension_flag : 1;
+ u32 last_flag : 1;
+ } wd1;
+};
+
+#define FC_WQE_SIZE 256
+/* SQE, should not be over 256B */
+struct spfc_sqe {
+ struct spfc_wqe_ctrl ctrl_sl;
+ u32 sid;
+ u32 did;
+	u64 wqe_gpa; /* gpa shifted right by 6 bits */
+ u64 db_val;
+ union spfc_sqe_ts_ex ts_ex;
+ struct spfc_variable_sge esge[3];
+ struct spfc_wqe_ctrl ectrl_sl;
+ struct spfc_sqe_ts ts_sl;
+ struct spfc_variable_sge sge[2];
+};
+
+struct spfc_rqe_ctrl {
+ struct spfc_wqe_ctrl_ch ch;
+
+ struct {
+ u16 wqe_msn;
+ u16 dump_wqe_msn;
+ } wd0;
+};
+
+struct spfc_rqe_drv {
+ struct {
+ u32 rsvd0 : 16;
+ u32 user_id : 16;
+ } wd0;
+
+ u32 rsvd1;
+};
+
+/* RQE, should not be over 32B */
+struct spfc_rqe {
+ struct spfc_rqe_ctrl ctrl_sl;
+ u32 cqe_gpa_h;
+ u32 cqe_gpa_l;
+ struct spfc_constant_sge bds_sl;
+ struct spfc_rqe_drv drv_sl;
+};
+
+struct spfc_cmdqe_abort {
+ struct {
+ u32 rx_id : 16;
+ u32 rsvd0 : 8;
+ u32 task_type : 8;
+ } wd0;
+
+ struct {
+ u32 ox_id : 16;
+ u32 rsvd1 : 12;
+ u32 trsp_send : 1;
+ u32 tcmd_send : 1;
+ u32 immi : 1;
+ u32 reply_sts : 1;
+ } wd1;
+
+ struct {
+ u32 conn_id : 16;
+ u32 scqn : 16;
+ } wd2;
+
+ struct {
+ u32 xid : 20;
+ u32 rsvd : 12;
+ } wd3;
+
+ struct {
+ u32 cid : 20;
+ u32 rsvd : 12;
+ } wd4;
+ struct {
+ u32 hotpooltag : 16;
+ u32 rsvd : 16;
+ } wd5; /* v6 new define */
+	/* abort timeout. Used when the abort and the io cmd reach ucode via
+	 * different paths and the io cmd will not arrive.
+ */
+ u32 time_out;
+ u32 magic_num;
+};
+
+struct spfc_cmdqe_abts_rsp {
+ struct {
+ u32 rx_id : 16;
+ u32 rsvd0 : 8;
+ u32 task_type : 8;
+ } wd0;
+
+ struct {
+ u32 ox_id : 16;
+ u32 rsvd1 : 4;
+ u32 port_id : 4;
+ u32 payload_len : 7;
+ u32 rsp_type : 1;
+ } wd1;
+
+ struct {
+ u32 conn_id : 16;
+ u32 scqn : 16;
+ } wd2;
+
+ struct {
+ u32 xid : 20;
+ u32 rsvd : 12;
+ } wd3;
+
+ struct {
+ u32 cid : 20;
+ u32 rsvd : 12;
+ } wd4;
+
+ struct {
+ u32 req_rx_id : 16;
+ u32 hotpooltag : 16;
+ } wd5;
+
+	/* payload length depends on rsp_type: 1 DWORD or 3 DWORD */
+ u32 payload[3];
+};
+
+struct spfc_cmdqe_buffer_clear {
+ struct {
+ u32 rsvd1 : 16;
+ u32 rsvd0 : 8;
+ u32 wqe_type : 8;
+ } wd0;
+
+ struct {
+ u32 rx_id_end : 16;
+ u32 rx_id_start : 16;
+ } wd1;
+
+ u32 scqn;
+ u32 wd3;
+};
+
+struct spfc_cmdqe_flush_sq {
+ struct {
+ u32 entry_count : 16;
+ u32 rsvd : 8;
+ u32 wqe_type : 8;
+ } wd0;
+
+ struct {
+ u32 scqn : 16;
+ u32 port_id : 4;
+ u32 pos : 11;
+ u32 last_wqe : 1;
+ } wd1;
+
+ struct {
+ u32 rsvd : 4;
+ u32 clr_pos : 12;
+ u32 pkt_ptr : 16;
+ } wd2;
+
+ struct {
+ u32 first_sq_xid : 24;
+ u32 sqqid_start_per_session : 4;
+ u32 sqcnt_per_session : 4;
+ } wd3;
+};
+
+struct spfc_cmdqe_dump_exch {
+ struct {
+ u32 rsvd1 : 16;
+ u32 rsvd0 : 8;
+ u32 task_type : 8;
+ } wd0;
+
+ u16 oqid_wr;
+ u16 oqid_rd;
+
+ u32 host_id;
+ u32 func_id;
+ u32 cache_id;
+ u32 exch_id;
+};
+
+struct spfc_cmdqe_creat_srqc {
+ struct {
+ u32 rsvd1 : 16;
+ u32 rsvd0 : 8;
+ u32 task_type : 8;
+ } wd0;
+
+ u32 srqc_gpa_h;
+ u32 srqc_gpa_l;
+
+ u32 srqc[16]; /* srqc_size=64B */
+};
+
+struct spfc_cmdqe_delete_srqc {
+ struct {
+ u32 rsvd1 : 16;
+ u32 rsvd0 : 8;
+ u32 task_type : 8;
+ } wd0;
+
+ u32 srqc_gpa_h;
+ u32 srqc_gpa_l;
+};
+
+struct spfc_cmdqe_clr_srq {
+ struct {
+ u32 rsvd1 : 16;
+ u32 rsvd0 : 8;
+ u32 task_type : 8;
+ } wd0;
+
+ struct {
+ u32 scqn : 16;
+ u32 srq_type : 16;
+ } wd1;
+
+ u32 srqc_gpa_h;
+ u32 srqc_gpa_l;
+};
+
+struct spfc_cmdqe_creat_scqc {
+ struct {
+ u32 rsvd1 : 16;
+ u32 rsvd0 : 8;
+ u32 task_type : 8;
+ } wd0;
+
+ struct {
+ u32 scqn : 16;
+ u32 rsvd2 : 16;
+ } wd1;
+
+ u32 scqc[16]; /* scqc_size=64B */
+};
+
+struct spfc_cmdqe_delete_scqc {
+ struct {
+ u32 rsvd1 : 16;
+ u32 rsvd0 : 8;
+ u32 task_type : 8;
+ } wd0;
+
+ struct {
+ u32 scqn : 16;
+ u32 rsvd2 : 16;
+ } wd1;
+};
+
+struct spfc_cmdqe_creat_ssqc {
+ struct {
+ u32 rsvd1 : 4;
+ u32 xid : 20;
+ u32 task_type : 8;
+ } wd0;
+
+ struct {
+ u32 scqn : 16;
+ u32 rsvd2 : 16;
+ } wd1;
+ u32 context_gpa_hi;
+ u32 context_gpa_lo;
+
+ u32 ssqc[64]; /* ssqc_size=256B */
+};
+
+struct spfc_cmdqe_delete_ssqc {
+ struct {
+ u32 entry_count : 4;
+ u32 xid : 20;
+ u32 task_type : 8;
+ } wd0;
+
+ struct {
+ u32 scqn : 16;
+ u32 rsvd2 : 16;
+ } wd1;
+ u32 context_gpa_hi;
+ u32 context_gpa_lo;
+};
+
+/* add xid free via cmdq */
+struct spfc_cmdqe_exch_id_free {
+ struct {
+ u32 task_id : 16;
+ u32 port_id : 8;
+ u32 rsvd0 : 8;
+ } wd0;
+
+ u32 magic_num;
+
+ struct {
+ u32 scqn : 16;
+ u32 hotpool_tag : 16;
+ } wd2;
+ struct {
+ u32 rsvd1 : 31;
+ u32 clear_abort_flag : 1;
+ } wd3;
+ u32 sid;
+ u32 did;
+ u32 type; /* ELS/ELS RSP/IO */
+};
+
+struct spfc_cmdqe_cmdqe_dfx {
+ struct {
+ u32 rsvd1 : 4;
+ u32 xid : 20;
+ u32 task_type : 8;
+ } wd0;
+
+ struct {
+ u32 qid_crclen : 12;
+ u32 cid : 20;
+ } wd1;
+ u32 context_gpa_hi;
+ u32 context_gpa_lo;
+ u32 dfx_type;
+
+ u32 rsv[16];
+};
+
+struct spfc_sqe_t_rsp {
+ struct {
+ u32 rsvd1 : 16;
+ u32 fcp_rsp_len : 8;
+ u32 busy_rsp : 3;
+ u32 immi : 1;
+ u32 mode : 1;
+ u32 conf : 1;
+ u32 fill : 2;
+ } wd0;
+
+ u32 hotpooltag;
+
+ union {
+ struct {
+ u32 addr_h;
+ u32 addr_l;
+ } gpa;
+
+ struct {
+ u32 data[23]; /* FCP_RESP payload buf, 92B rsvd */
+ } buf;
+ } payload;
+};
+
+struct spfc_sqe_tmf_t_rsp {
+ struct {
+ u32 scqn : 16;
+ u32 fcp_rsp_len : 8;
+		u32 pkt_nosnd_flag : 3; /* tmf rsp snd flag, 0: snd, 1: not snd, driver ignores it */
+ u32 reset_type : 2;
+ u32 conf : 1;
+ u32 fill : 2;
+ } wd0;
+
+ struct {
+ u32 reset_exch_end : 16;
+ u32 reset_exch_start : 16;
+ } wd1;
+
+ struct {
+		u16 hotpooltag; /* tmf rsp hotpooltag, driver ignores it */
+ u16 rsvd;
+ } wd2;
+
+ u8 lun[8]; /* Lun ID */
+ u32 data[20]; /* FCP_RESP payload buf, 80B rsvd */
+};
+
+struct spfc_sqe_tresp_ts {
+ /* SQE Task Section's Common Header */
+ u16 local_xid;
+ u8 rsvd0;
+ u8 task_type;
+
+ struct {
+ u16 conn_id;
+ u16 remote_xid;
+ } wd0;
+
+ u32 xid : 20;
+ u32 sqn : 12;
+ u32 cid;
+ u32 magic_num;
+ struct spfc_sqe_t_rsp t_rsp;
+};
+
+struct spfc_sqe_tmf_resp_ts {
+ /* SQE Task Section's Common Header */
+ u16 local_xid;
+ u8 rsvd0;
+ u8 task_type;
+
+ struct {
+ u16 conn_id;
+ u16 remote_xid;
+ } wd0;
+
+ u32 xid : 20;
+ u32 sqn : 12;
+ u32 cid;
+ u32 magic_num; /* magic num */
+ struct spfc_sqe_tmf_t_rsp tmf_rsp;
+};
+
+/* SQE for fcp response, max TSL is 120B */
+struct spfc_sqe_tresp {
+ struct spfc_wqe_ctrl ctrl_sl;
+ u64 taskrsvd;
+ u64 wqe_gpa;
+ u64 db_val;
+ union spfc_sqe_ts_ex ts_ex;
+ struct spfc_variable_sge esge[3];
+ struct spfc_wqe_ctrl ectrl_sl;
+ struct spfc_sqe_tresp_ts ts_sl;
+};
+
+/* SQE for tmf response, max TSL is 120B */
+struct spfc_sqe_tmf_rsp {
+ struct spfc_wqe_ctrl ctrl_sl;
+ u64 taskrsvd;
+ u64 wqe_gpa;
+ u64 db_val;
+ union spfc_sqe_ts_ex ts_ex;
+ struct spfc_variable_sge esge[3];
+ struct spfc_wqe_ctrl ectrl_sl;
+ struct spfc_sqe_tmf_resp_ts ts_sl;
+};
+
+/* SCQE Common Header */
+struct spfc_scqe_ch {
+ struct {
+ u32 task_type : 8;
+ u32 sqn : 13;
+ u32 cqe_remain_cnt : 3;
+ u32 err_code : 7;
+ u32 owner : 1;
+ } wd0;
+};
+
+struct spfc_scqe_type {
+ struct spfc_scqe_ch ch;
+
+ u32 rsvd0;
+
+ u16 conn_id;
+ u16 rsvd4;
+
+ u32 rsvd1[12];
+
+ struct {
+ u32 done : 1;
+ u32 rsvd : 23;
+ u32 dif_vry_rst : 8;
+ } wd0;
+};
+
+struct spfc_scqe_sess_sts {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 xid_qpn : 20;
+ u32 rsvd1 : 12;
+ } wd0;
+
+ struct {
+ u32 conn_id : 16;
+ u32 rsvd3 : 16;
+ } wd1;
+
+ struct {
+ u32 cid : 20;
+ u32 rsvd2 : 12;
+ } wd2;
+
+ u64 rsvd3;
+};
+
+struct spfc_scqe_comm_rsp_sts {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd0;
+
+ struct {
+ u32 conn_id : 16;
+ u32 hotpooltag : 16; /* ucode return hotpooltag to drv */
+ } wd1;
+
+ u32 magic_num;
+};
+
+struct spfc_scqe_iresp {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd0;
+
+ struct {
+ u32 conn_id : 16;
+ u32 rsvd0 : 3;
+ u32 user_id_num : 8;
+ u32 dif_info : 5;
+ } wd1;
+
+ struct {
+ u32 scsi_status : 8;
+ u32 fcp_flag : 8;
+ u32 hotpooltag : 16; /* ucode return hotpooltag to drv */
+ } wd2;
+
+ u32 fcp_resid;
+ u32 fcp_sns_len;
+ u32 fcp_rsp_len;
+ u32 magic_num;
+ u16 user_id[FC_SENSEDATA_USERID_CNT_MAX];
+ u32 rsv1;
+};
+
+struct spfc_scqe_nvme_iresp {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd0;
+
+ struct {
+ u32 conn_id : 16;
+ u32 eresp_flag : 8;
+ u32 user_id_num : 8;
+ } wd1;
+
+ struct {
+ u32 scsi_status : 8;
+ u32 fcp_flag : 8;
+ u32 hotpooltag : 16; /* ucode return hotpooltag to drv */
+ } wd2;
+ u32 magic_num;
+ u32 eresp[8];
+};
+
+#pragma pack(1)
+struct spfc_dif_result {
+ u8 vrd_rpt;
+ u16 pad;
+ u8 rcv_pi_vb;
+ u32 rcv_pi_h;
+ u32 rcv_pi_l;
+ u16 vrf_agm_imm;
+ u16 ri_agm_imm;
+};
+
+#pragma pack()
+
+struct spfc_scqe_dif_result {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd0;
+
+ struct {
+ u32 conn_id : 16;
+ u32 rsvd0 : 11;
+ u32 dif_info : 5;
+ } wd1;
+
+ struct {
+ u32 scsi_status : 8;
+ u32 fcp_flag : 8;
+ u32 hotpooltag : 16; /* ucode return hotpooltag to drv */
+ } wd2;
+
+ u32 fcp_resid;
+ u32 fcp_sns_len;
+ u32 fcp_rsp_len;
+ u32 magic_num;
+
+ u32 rsv1[3];
+ struct spfc_dif_result difinfo;
+};
+
+struct spfc_scqe_rcv_abts_rsp {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd0;
+
+ struct {
+ u32 conn_id : 16;
+ u32 hotpooltag : 16;
+ } wd1;
+
+ struct {
+ u32 fh_rctrl : 8;
+ u32 rsvd0 : 24;
+ } wd2;
+
+ struct {
+ u32 did : 24;
+ u32 rsvd1 : 8;
+ } wd3;
+
+ struct {
+ u32 sid : 24;
+ u32 rsvd2 : 8;
+ } wd4;
+
+	/* payload length depends on fh_rctrl: 1 DWORD or 3 DWORD */
+ u32 payload[3];
+ u32 magic_num;
+};
+
+struct spfc_scqe_fcp_rsp_sts {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd0;
+
+ struct {
+ u32 conn_id : 16;
+ u32 rsvd0 : 10;
+ u32 immi : 1;
+ u32 dif_info : 5;
+ } wd1;
+
+ u32 magic_num;
+ u32 hotpooltag;
+ u32 xfer_rsp;
+ u32 rsvd[5];
+
+ u32 dif_tmp[4]; /* HW will overwrite it */
+};
+
+struct spfc_scqe_rcv_els_cmd {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 did : 24;
+ u32 class_mode : 8; /* 0:class3, 1:class2 */
+ } wd0;
+
+ struct {
+ u32 sid : 24;
+ u32 rsvd1 : 8;
+ } wd1;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd2;
+
+ struct {
+ u32 user_id_num : 16;
+ u32 data_len : 16;
+ } wd3;
+	/* User ID of SRQ SGE, used for driver buffer release */
+ u16 user_id[FC_LS_GS_USERID_CNT_MAX];
+ u32 ts;
+};
+
+struct spfc_scqe_param_check_scq {
+ struct spfc_scqe_ch ch;
+
+ u8 rsvd0[3];
+ u8 port_id;
+
+ u16 scqn;
+ u16 check_item;
+
+ u16 exch_id_load;
+ u16 exch_id;
+
+ u16 historty_type;
+ u16 entry_count;
+
+ u32 xid;
+
+ u32 gpa_h;
+ u32 gpa_l;
+
+ u32 magic_num;
+ u32 hotpool_tag;
+
+ u32 payload_len;
+ u32 sub_err;
+
+ u32 rsvd2[3];
+};
+
+struct spfc_scqe_rcv_abts_cmd {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 did : 24;
+ u32 rsvd0 : 8;
+ } wd0;
+
+ struct {
+ u32 sid : 24;
+ u32 rsvd1 : 8;
+ } wd1;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd2;
+};
+
+struct spfc_scqe_rcv_els_gs_rsp {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd1;
+
+ struct {
+ u32 conn_id : 16;
+ u32 data_len : 16; /* ELS/GS RSP Payload length */
+ } wd2;
+
+ struct {
+ u32 did : 24;
+ u32 rsvd : 6;
+ u32 echo_rsp : 1;
+ u32 end_rsp : 1;
+ } wd3;
+
+ struct {
+ u32 sid : 24;
+ u32 user_id_num : 8;
+ } wd4;
+
+ struct {
+ u32 rsvd : 16;
+ u32 hotpooltag : 16;
+ } wd5;
+
+ u32 magic_num;
+ u16 user_id[FC_LS_GS_USERID_CNT_MAX];
+};
+
+struct spfc_scqe_rcv_flush_sts {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rsvd0 : 4;
+ u32 clr_pos : 12;
+ u32 port_id : 8;
+ u32 last_flush : 8;
+ } wd0;
+};
+
+struct spfc_scqe_rcv_clear_buf_sts {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rsvd0 : 24;
+ u32 port_id : 8;
+ } wd0;
+};
+
+struct spfc_scqe_clr_srq_rsp {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 srq_type : 16;
+ u32 cur_wqe_msn : 16;
+ } wd0;
+};
+
+struct spfc_scqe_itmf_marker_sts {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd1;
+
+ struct {
+ u32 did : 24;
+ u32 end_rsp : 8;
+ } wd2;
+
+ struct {
+ u32 sid : 24;
+ u32 rsvd1 : 8;
+ } wd3;
+
+ struct {
+ u32 hotpooltag : 16;
+ u32 rsvd : 16;
+ } wd4;
+
+ u32 magic_num;
+};
+
+struct spfc_scqe_abts_marker_sts {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd1;
+
+ struct {
+ u32 did : 24;
+ u32 end_rsp : 8;
+ } wd2;
+
+ struct {
+ u32 sid : 24;
+ u32 io_state : 8;
+ } wd3;
+
+ struct {
+ u32 hotpooltag : 16;
+ u32 rsvd : 16;
+ } wd4;
+
+ u32 magic_num;
+};
+
+struct spfc_scqe_ini_abort_sts {
+ struct spfc_scqe_ch ch;
+
+ struct {
+ u32 rx_id : 16;
+ u32 ox_id : 16;
+ } wd1;
+
+ struct {
+ u32 did : 24;
+ u32 rsvd : 8;
+ } wd2;
+
+ struct {
+ u32 sid : 24;
+ u32 io_state : 8;
+ } wd3;
+
+ struct {
+ u32 hotpooltag : 16;
+ u32 rsvd : 16;
+ } wd4;
+
+ u32 magic_num;
+};
+
+struct spfc_scqe_sq_nop_sts {
+ struct spfc_scqe_ch ch;
+ struct {
+ u32 rsvd : 16;
+ u32 sqn : 16;
+ } wd0;
+ struct {
+ u32 rsvd : 16;
+ u32 conn_id : 16;
+ } wd1;
+ u32 magic_num;
+};
+
+/* SCQE, should not be over 64B */
+#define FC_SCQE_SIZE 64
+union spfc_scqe {
+ struct spfc_scqe_type common;
+ struct spfc_scqe_sess_sts sess_sts; /* session enable/disable/delete sts */
+ struct spfc_scqe_comm_rsp_sts comm_sts; /* aborts/abts_rsp/els rsp sts */
+ struct spfc_scqe_rcv_clear_buf_sts clear_sts; /* clear buffer sts */
+ struct spfc_scqe_rcv_flush_sts flush_sts; /* flush sq sts */
+ struct spfc_scqe_iresp iresp;
+ struct spfc_scqe_rcv_abts_rsp rcv_abts_rsp; /* recv abts rsp */
+ struct spfc_scqe_fcp_rsp_sts fcp_rsp_sts; /* Read/Write/Rsp sts */
+ struct spfc_scqe_rcv_els_cmd rcv_els_cmd; /* recv els cmd */
+ struct spfc_scqe_rcv_abts_cmd rcv_abts_cmd; /* recv abts cmd */
+ struct spfc_scqe_rcv_els_gs_rsp rcv_els_gs_rsp; /* recv els/gs rsp */
+ struct spfc_scqe_clr_srq_rsp clr_srq_sts;
+ struct spfc_scqe_itmf_marker_sts itmf_marker_sts; /* tmf marker */
+ struct spfc_scqe_abts_marker_sts abts_marker_sts; /* abts marker */
+ struct spfc_scqe_dif_result dif_result;
+ struct spfc_scqe_param_check_scq param_check_sts;
+ struct spfc_scqe_nvme_iresp nvme_iresp;
+ struct spfc_scqe_ini_abort_sts ini_abort_sts;
+ struct spfc_scqe_sq_nop_sts sq_nop_sts;
+};
+
+struct spfc_cmdqe_type {
+ struct {
+ u32 rx_id : 16;
+ u32 rsvd0 : 8;
+ u32 task_type : 8;
+ } wd0;
+};
+
+struct spfc_cmdqe_send_ack {
+ struct {
+ u32 rx_id : 16;
+ u32 immi_stand : 1;
+ u32 rsvd0 : 7;
+ u32 task_type : 8;
+ } wd0;
+
+ u32 xid;
+ u32 cid;
+};
+
+struct spfc_cmdqe_send_aeq_err {
+ struct {
+ u32 errorevent : 8;
+ u32 errortype : 8;
+ u32 portid : 8;
+ u32 task_type : 8;
+ } wd0;
+};
+
+/* CMDQE, variable length */
+union spfc_cmdqe {
+ struct spfc_cmdqe_type common;
+ struct spfc_cmdqe_sess_en session_enable;
+ struct spfc_cmdqe_abts_rsp snd_abts_rsp;
+ struct spfc_cmdqe_abort snd_abort;
+ struct spfc_cmdqe_buffer_clear buffer_clear;
+ struct spfc_cmdqe_flush_sq flush_sq;
+ struct spfc_cmdqe_dump_exch dump_exch;
+ struct spfc_cmdqe_creat_srqc create_srqc;
+ struct spfc_cmdqe_delete_srqc delete_srqc;
+ struct spfc_cmdqe_clr_srq clear_srq;
+ struct spfc_cmdqe_creat_scqc create_scqc;
+ struct spfc_cmdqe_delete_scqc delete_scqc;
+ struct spfc_cmdqe_send_ack send_ack;
+ struct spfc_cmdqe_send_aeq_err send_aeqerr;
+ struct spfc_cmdqe_creat_ssqc createssqc;
+ struct spfc_cmdqe_delete_ssqc deletessqc;
+ struct spfc_cmdqe_cmdqe_dfx dfx_info;
+ struct spfc_cmdqe_exch_id_free xid_free;
+};
+
+#endif
diff --git a/drivers/scsi/spfc/hw/spfc_io.c b/drivers/scsi/spfc/hw/spfc_io.c
new file mode 100644
index 000000000000..2b1d1c607b13
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_io.c
@@ -0,0 +1,1193 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "spfc_io.h"
+#include "spfc_module.h"
+#include "spfc_service.h"
+
+#define SPFC_SGE_WD1_XID_MASK 0x3fff
+
+u32 dif_protect_opcode = INVALID_VALUE32;
+u32 dif_app_esc_check = SPFC_DIF_APP_REF_ESC_CHECK;
+u32 dif_ref_esc_check = SPFC_DIF_APP_REF_ESC_CHECK;
+u32 grd_agm_ini_ctrl = SPFC_DIF_CRC_CS_INITIAL_CONFIG_BY_BIT0_1;
+u32 ref_tag_no_increase;
+u32 dix_flag;
+u32 grd_ctrl;
+u32 grd_agm_ctrl = SPFC_DIF_GUARD_VERIFY_ALGORITHM_CTL_T10_CRC16;
+u32 cmp_app_tag_mask = 0xffff;
+u32 app_tag_ctrl;
+u32 ref_tag_ctrl;
+u32 ref_tag_mod = INVALID_VALUE32;
+u32 rep_ref_tag;
+u32 rx_rep_ref_tag;
+u16 cmp_app_tag;
+u16 rep_app_tag;
+
+static void spfc_dif_err_count(struct spfc_hba_info *hba, u8 info)
+{
+ u8 dif_info = info;
+
+ if (dif_info & SPFC_TX_DIF_ERROR_FLAG) {
+ SPFC_DIF_ERR_STAT(hba, SPFC_DIF_SEND_DIFERR_ALL);
+ if (dif_info & SPFC_DIF_ERROR_CODE_CRC)
+ SPFC_DIF_ERR_STAT(hba, SPFC_DIF_SEND_DIFERR_CRC);
+
+ if (dif_info & SPFC_DIF_ERROR_CODE_APP)
+ SPFC_DIF_ERR_STAT(hba, SPFC_DIF_SEND_DIFERR_APP);
+
+ if (dif_info & SPFC_DIF_ERROR_CODE_REF)
+ SPFC_DIF_ERR_STAT(hba, SPFC_DIF_SEND_DIFERR_REF);
+ } else {
+ SPFC_DIF_ERR_STAT(hba, SPFC_DIF_RECV_DIFERR_ALL);
+ if (dif_info & SPFC_DIF_ERROR_CODE_CRC)
+ SPFC_DIF_ERR_STAT(hba, SPFC_DIF_RECV_DIFERR_CRC);
+
+ if (dif_info & SPFC_DIF_ERROR_CODE_APP)
+ SPFC_DIF_ERR_STAT(hba, SPFC_DIF_RECV_DIFERR_APP);
+
+ if (dif_info & SPFC_DIF_ERROR_CODE_REF)
+ SPFC_DIF_ERR_STAT(hba, SPFC_DIF_RECV_DIFERR_REF);
+ }
+}
+
+void spfc_build_no_dif_control(struct unf_frame_pkg *pkg,
+ struct spfc_fc_dif_info *info)
+{
+ struct spfc_fc_dif_info *dif_info = info;
+
+ /* dif enable or disable */
+ dif_info->wd0.difx_en = SPFC_DIF_DISABLE;
+
+ dif_info->wd1.vpid = pkg->qos_level;
+ dif_info->wd1.lun_qos_en = 1;
+}
+
+void spfc_dif_action_forward(struct spfc_fc_dif_info *dif_info_l1,
+ struct unf_dif_control_info *dif_ctrl_u1)
+{
+ dif_info_l1->wd0.grd_ctrl |=
+ (dif_ctrl_u1->protect_opcode & UNF_VERIFY_CRC_MASK)
+ ? SPFC_DIF_GARD_REF_APP_CTRL_VERIFY
+ : SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
+ dif_info_l1->wd0.grd_ctrl |=
+ (dif_ctrl_u1->protect_opcode & UNF_REPLACE_CRC_MASK)
+ ? SPFC_DIF_GARD_REF_APP_CTRL_REPLACE
+ : SPFC_DIF_GARD_REF_APP_CTRL_FORWARD;
+
+ dif_info_l1->wd0.ref_tag_ctrl |=
+ (dif_ctrl_u1->protect_opcode & UNF_VERIFY_LBA_MASK)
+ ? SPFC_DIF_GARD_REF_APP_CTRL_VERIFY
+ : SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
+ dif_info_l1->wd0.ref_tag_ctrl |=
+ (dif_ctrl_u1->protect_opcode & UNF_REPLACE_LBA_MASK)
+ ? SPFC_DIF_GARD_REF_APP_CTRL_REPLACE
+ : SPFC_DIF_GARD_REF_APP_CTRL_FORWARD;
+
+ dif_info_l1->wd0.app_tag_ctrl |=
+ (dif_ctrl_u1->protect_opcode & UNF_VERIFY_APP_MASK)
+ ? SPFC_DIF_GARD_REF_APP_CTRL_VERIFY
+ : SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
+ dif_info_l1->wd0.app_tag_ctrl |=
+ (dif_ctrl_u1->protect_opcode & UNF_REPLACE_APP_MASK)
+ ? SPFC_DIF_GARD_REF_APP_CTRL_REPLACE
+ : SPFC_DIF_GARD_REF_APP_CTRL_FORWARD;
+}
+
+void spfc_dif_action_delete(struct spfc_fc_dif_info *dif_info_l1,
+ struct unf_dif_control_info *dif_ctrl_u1)
+{
+ dif_info_l1->wd0.grd_ctrl |=
+ (dif_ctrl_u1->protect_opcode & UNF_VERIFY_CRC_MASK)
+ ? SPFC_DIF_GARD_REF_APP_CTRL_VERIFY
+ : SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
+ dif_info_l1->wd0.grd_ctrl |= SPFC_DIF_GARD_REF_APP_CTRL_DELETE;
+
+ dif_info_l1->wd0.ref_tag_ctrl |=
+ (dif_ctrl_u1->protect_opcode & UNF_VERIFY_LBA_MASK)
+ ? SPFC_DIF_GARD_REF_APP_CTRL_VERIFY
+ : SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
+ dif_info_l1->wd0.ref_tag_ctrl |= SPFC_DIF_GARD_REF_APP_CTRL_DELETE;
+
+ dif_info_l1->wd0.app_tag_ctrl |=
+ (dif_ctrl_u1->protect_opcode & UNF_VERIFY_APP_MASK)
+ ? SPFC_DIF_GARD_REF_APP_CTRL_VERIFY
+ : SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
+ dif_info_l1->wd0.app_tag_ctrl |= SPFC_DIF_GARD_REF_APP_CTRL_DELETE;
+}
+
+static void spfc_convert_dif_action(struct unf_dif_control_info *dif_ctrl,
+ struct spfc_fc_dif_info *dif_info)
+{
+ struct spfc_fc_dif_info *dif_info_l1 = NULL;
+ struct unf_dif_control_info *dif_ctrl_u1 = NULL;
+
+ dif_info_l1 = dif_info;
+ dif_ctrl_u1 = dif_ctrl;
+
+ switch (UNF_DIF_ACTION_MASK & dif_ctrl_u1->protect_opcode) {
+ case UNF_DIF_ACTION_VERIFY_AND_REPLACE:
+ case UNF_DIF_ACTION_VERIFY_AND_FORWARD:
+ spfc_dif_action_forward(dif_info_l1, dif_ctrl_u1);
+ break;
+
+ case UNF_DIF_ACTION_INSERT:
+ dif_info_l1->wd0.grd_ctrl |=
+ SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
+ dif_info_l1->wd0.grd_ctrl |= SPFC_DIF_GARD_REF_APP_CTRL_INSERT;
+ dif_info_l1->wd0.ref_tag_ctrl |=
+ SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
+ dif_info_l1->wd0.ref_tag_ctrl |=
+ SPFC_DIF_GARD_REF_APP_CTRL_INSERT;
+ dif_info_l1->wd0.app_tag_ctrl |=
+ SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
+ dif_info_l1->wd0.app_tag_ctrl |=
+ SPFC_DIF_GARD_REF_APP_CTRL_INSERT;
+ break;
+
+ case UNF_DIF_ACTION_VERIFY_AND_DELETE:
+ spfc_dif_action_delete(dif_info_l1, dif_ctrl_u1);
+ break;
+
+ default:
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "Unknown dif protect opcode 0x%x",
+ dif_ctrl_u1->protect_opcode);
+ break;
+ }
+}
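
The guard/ref/app tag control nibbles built above combine a verify bit with an operation code; the values are the SPFC_DIF_GARD_REF_APP_CTRL_* defines in spfc_io.h below (VERIFY 0x4, INSERT 0x0, DELETE 0x1, FORWARD 0x2, REPLACE 0x3). For example, "verify and replace" yields 0x4 | 0x3 = 0x7, while a plain insert stays at 0x0 because there is nothing to verify yet. An illustrative sketch of that composition, not taken from the patch:

    #include <assert.h>
    #include <stdint.h>

    #define CTRL_VERIFY  0x4  /* SPFC_DIF_GARD_REF_APP_CTRL_VERIFY  */
    #define CTRL_INSERT  0x0  /* SPFC_DIF_GARD_REF_APP_CTRL_INSERT  */
    #define CTRL_DELETE  0x1  /* SPFC_DIF_GARD_REF_APP_CTRL_DELETE  */
    #define CTRL_FORWARD 0x2  /* SPFC_DIF_GARD_REF_APP_CTRL_FORWARD */
    #define CTRL_REPLACE 0x3  /* SPFC_DIF_GARD_REF_APP_CTRL_REPLACE */

    static uint32_t dif_tag_ctrl(int verify, uint32_t op)
    {
            /* the verify bit is OR'ed on top of the operation code */
            return (verify ? CTRL_VERIFY : 0) | op;
    }

    int main(void)
    {
            assert(dif_tag_ctrl(1, CTRL_REPLACE) == 0x7); /* verify and replace */
            assert(dif_tag_ctrl(1, CTRL_DELETE) == 0x5);  /* verify and delete  */
            assert(dif_tag_ctrl(0, CTRL_INSERT) == 0x0);  /* insert only        */
            return 0;
    }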
+
+void spfc_get_dif_info_l1(struct spfc_fc_dif_info *dif_info_l1,
+ struct unf_dif_control_info *dif_ctrl_u1)
+{
+ dif_info_l1->wd1.cmp_app_tag_msk = cmp_app_tag_mask;
+
+ dif_info_l1->rep_app_tag = dif_ctrl_u1->app_tag;
+ dif_info_l1->rep_ref_tag = dif_ctrl_u1->start_lba;
+
+ dif_info_l1->cmp_app_tag = dif_ctrl_u1->app_tag;
+ dif_info_l1->cmp_ref_tag = dif_ctrl_u1->start_lba;
+
+ if (cmp_app_tag != 0)
+ dif_info_l1->cmp_app_tag = cmp_app_tag;
+
+ if (rep_app_tag != 0)
+ dif_info_l1->rep_app_tag = rep_app_tag;
+
+ if (rep_ref_tag != 0)
+ dif_info_l1->rep_ref_tag = rep_ref_tag;
+}
+
+void spfc_build_dif_control(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg,
+ struct spfc_fc_dif_info *dif_info)
+{
+ struct spfc_fc_dif_info *dif_info_l1 = NULL;
+ struct unf_dif_control_info *dif_ctrl_u1 = NULL;
+
+ dif_info_l1 = dif_info;
+ dif_ctrl_u1 = &pkg->dif_control;
+
+ /* dif enable or disable */
+ dif_info_l1->wd0.difx_en = SPFC_DIF_ENABLE;
+
+ dif_info_l1->wd1.vpid = pkg->qos_level;
+ dif_info_l1->wd1.lun_qos_en = 1;
+
+	/* sector size mode: 512B+8 or 4KB+8 */
+ dif_info_l1->wd0.sct_size = (dif_ctrl_u1->flags & UNF_DIF_SECTSIZE_4KB)
+ ? SPFC_DIF_SECTOR_4KB_MODE
+ : SPFC_DIF_SECTOR_512B_MODE;
+
+ /* dif type 1 */
+ dif_info_l1->wd0.dif_verify_type = dif_type;
+
+	/* Check whether an all-0xffff app or ref tag escapes verification.
+	 * If a type1 app tag of all 0xff should still have its sector checked,
+	 * set dif_info_l1->wd0.difx_app_esc = SPFC_DIF_APP_REF_ESC_CHECK.
+	 */
+
+ dif_info_l1->wd0.difx_app_esc = dif_app_esc_check;
+
+	/* For a type1 ref tag of all 0xff, the sector must still be checked */
+ dif_info_l1->wd0.difx_ref_esc = dif_ref_esc_check;
+
+ /* Currently, only t10 crc is supported */
+ dif_info_l1->wd0.grd_agm_ctrl = 0;
+
+	/* The CRC guard seed initial value is selected by bits 0 and 1;
+	 * the default is 0, i.e. UNF_DEFAULT_CRC_GUARD_SEED.
+	 */
+ dif_info_l1->wd0.grd_agm_ini_ctrl = grd_agm_ini_ctrl;
+ dif_info_l1->wd0.app_tag_ctrl = 0;
+ dif_info_l1->wd0.grd_ctrl = 0;
+ dif_info_l1->wd0.ref_tag_ctrl = 0;
+
+	/* Convert the upper layer's protect opcode into the corresponding
+	 * verify, replace, forward, insert and delete controls.
+	 */
+ if (dif_protect_opcode != INVALID_VALUE32) {
+ dif_ctrl_u1->protect_opcode =
+ dif_protect_opcode |
+ (dif_ctrl_u1->protect_opcode & UNF_DIF_ACTION_MASK);
+ }
+
+ spfc_convert_dif_action(dif_ctrl_u1, dif_info_l1);
+ dif_info_l1->wd0.app_tag_ctrl |= app_tag_ctrl;
+
+	/* ref tag auto-increment mode */
+ dif_info_l1->wd0.ref_tag_mode =
+ (dif_ctrl_u1->protect_opcode & UNF_DIF_ACTION_NO_INCREASE_REFTAG)
+ ? (BOTH_NONE)
+ : (BOTH_INCREASE);
+
+ if (ref_tag_mod != INVALID_VALUE32)
+ dif_info_l1->wd0.ref_tag_mode = ref_tag_mod;
+
+ /* This parameter is used only when type 3 is set to 0xffff. */
+ spfc_get_dif_info_l1(dif_info_l1, dif_ctrl_u1);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Port(0x%x) sid_did(0x%x_0x%x) package type(0x%x) apptag(0x%x) flag(0x%x) opcode(0x%x) fcpdl(0x%x) statlba(0x%x)",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did, pkg->type, pkg->dif_control.app_tag,
+ pkg->dif_control.flags, pkg->dif_control.protect_opcode,
+ pkg->dif_control.fcp_dl, pkg->dif_control.start_lba);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Port(0x%x) cover dif control info, app:cmp_tag(0x%x) cmp_tag_mask(0x%x) rep_tag(0x%x), ref:tag_mode(0x%x) cmp_tag(0x%x) rep_tag(0x%x).",
+ hba->port_cfg.port_id, dif_info_l1->cmp_app_tag,
+ dif_info_l1->wd1.cmp_app_tag_msk, dif_info_l1->rep_app_tag,
+ dif_info_l1->wd0.ref_tag_mode, dif_info_l1->cmp_ref_tag,
+ dif_info_l1->rep_ref_tag);
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "Port(0x%x) cover dif control info, ctrl:grd(0x%x) ref(0x%x) app(0x%x).",
+ hba->port_cfg.port_id, dif_info_l1->wd0.grd_ctrl,
+ dif_info_l1->wd0.ref_tag_ctrl,
+ dif_info_l1->wd0.app_tag_ctrl);
+}
+
+static u32 spfc_fill_external_sgl_page(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg,
+ struct unf_esgl_page *esgl_page,
+ u32 sge_num, int direction,
+ u32 context_id, u32 dif_flag)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 index = 0;
+ u32 sge_num_per_page = 0;
+ u32 buffer_addr = 0;
+ u32 buf_len = 0;
+ char *buf = NULL;
+ ulong phys = 0;
+ struct unf_esgl_page *unf_esgl_page = NULL;
+ struct spfc_variable_sge *sge = NULL;
+
+ unf_esgl_page = esgl_page;
+ while (sge_num > 0) {
+ /* Obtains the initial address of the sge page */
+ sge = (struct spfc_variable_sge *)unf_esgl_page->page_address;
+
+ /* Calculate the number of sge on each page */
+ sge_num_per_page = (unf_esgl_page->page_size) / sizeof(struct spfc_variable_sge);
+
+ /* Fill in sgl page. The last sge of each page is link sge by
+ * default
+ */
+ for (index = 0; index < (sge_num_per_page - 1); index++) {
+ UNF_GET_SGL_ENTRY(ret, (void *)pkg, &buf, &buf_len, dif_flag);
+ if (ret != RETURN_OK)
+ return UNF_RETURN_ERROR;
+ phys = (ulong)buf;
+ sge[index].buf_addr_hi = UNF_DMA_HI32(phys);
+ sge[index].buf_addr_lo = UNF_DMA_LO32(phys);
+ sge[index].wd0.buf_len = buf_len;
+ sge[index].wd0.r_flag = 0;
+ sge[index].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
+ sge[index].wd1.last_flag = SPFC_WQE_SGE_NOT_LAST_FLAG;
+
+ /* Parity bit */
+ sge[index].wd1.buf_addr_gpa = (sge[index].buf_addr_lo >> UNF_SHIFT_16);
+ sge[index].wd1.xid = (context_id & SPFC_SGE_WD1_XID_MASK);
+
+ spfc_cpu_to_big32(&sge[index], sizeof(struct spfc_variable_sge));
+
+ sge_num--;
+ if (sge_num == 0)
+ break;
+ }
+
+		/* Set the end flag on the last SGE of the page once all
+		 * SGEs have been filled.
+		 */
+ if (sge_num == 0) {
+ sge[index].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
+ sge[index].wd1.last_flag = SPFC_WQE_SGE_LAST_FLAG;
+
+ /* Parity bit */
+ buffer_addr = be32_to_cpu(sge[index].buf_addr_lo);
+ sge[index].wd1.buf_addr_gpa = (buffer_addr >> UNF_SHIFT_16);
+ sge[index].wd1.xid = (context_id & SPFC_SGE_WD1_XID_MASK);
+
+ spfc_cpu_to_big32(&sge[index].wd1, SPFC_DWORD_BYTE);
+ }
+		/* If exactly one SGE remains, fill it into the slot reserved
+		 * at the end of the page.
+		 */
+ else if (sge_num == 1) {
+ UNF_GET_SGL_ENTRY(ret, (void *)pkg, &buf, &buf_len,
+ dif_flag);
+ if (ret != RETURN_OK)
+ return UNF_RETURN_ERROR;
+ phys = (ulong)buf;
+ sge[index].buf_addr_hi = UNF_DMA_HI32(phys);
+ sge[index].buf_addr_lo = UNF_DMA_LO32(phys);
+ sge[index].wd0.buf_len = buf_len;
+ sge[index].wd0.r_flag = 0;
+ sge[index].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
+ sge[index].wd1.last_flag = SPFC_WQE_SGE_LAST_FLAG;
+
+ /* Parity bit */
+ sge[index].wd1.buf_addr_gpa = (sge[index].buf_addr_lo >> UNF_SHIFT_16);
+ sge[index].wd1.xid = (context_id & SPFC_SGE_WD1_XID_MASK);
+
+ spfc_cpu_to_big32(&sge[index], sizeof(struct spfc_variable_sge));
+
+ sge_num--;
+ } else {
+ /* Apply for a new sgl page and fill in link sge */
+ UNF_GET_FREE_ESGL_PAGE(unf_esgl_page, hba->lport, pkg);
+ if (!unf_esgl_page) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Get free esgl page failed.");
+ return UNF_RETURN_ERROR;
+ }
+ phys = unf_esgl_page->esgl_phy_addr;
+ sge[index].buf_addr_hi = UNF_DMA_HI32(phys);
+ sge[index].buf_addr_lo = UNF_DMA_LO32(phys);
+
+			/* For a link (cascading) SGE, only the address of the
+			 * next page and the extension flag need to be filled
+			 * in; the other fields are left at zero.
+			 */
+ sge[index].wd0.buf_len = 0;
+ sge[index].wd0.r_flag = 0;
+ sge[index].wd1.extension_flag = SPFC_WQE_SGE_EXTEND_FLAG;
+ sge[index].wd1.last_flag = SPFC_WQE_SGE_NOT_LAST_FLAG;
+
+ /* parity bit */
+ sge[index].wd1.buf_addr_gpa = (sge[index].buf_addr_lo >> UNF_SHIFT_16);
+ sge[index].wd1.xid = (context_id & SPFC_SGE_WD1_XID_MASK);
+
+ spfc_cpu_to_big32(&sge[index], sizeof(struct spfc_variable_sge));
+ }
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]Port(0x%x) SID(0x%x) DID(0x%x) RXID(0x%x) build esgl left sge num: %u.",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did,
+ pkg->frame_head.oxid_rxid, sge_num);
+ }
+
+ return RETURN_OK;
+}
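
spfc_fill_external_sgl_page() packs (page_size / sizeof(struct spfc_variable_sge)) entries into each extended SGL page and, unless one entry or fewer remains, keeps the last slot of every page as a link SGE pointing at the next page. As a worked example (the SGE size here is hypothetical; it is not stated in this patch): with a 2048-byte page and a 16-byte SGE there are 128 slots per page, i.e. 127 data SGEs plus one link SGE, so 300 remaining SGEs would span three chained pages (127 + 127 + 46).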
+
+static u32 spfc_build_local_dif_sgl(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, struct spfc_sqe *sqe,
+ int direction, u32 bd_sge_num)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ char *buf = NULL;
+ u32 buf_len = 0;
+ ulong phys = 0;
+ u32 dif_sge_place = 0;
+
+ /* DIF SGE must be followed by BD SGE */
+ dif_sge_place = ((bd_sge_num <= pkg->entry_count) ? bd_sge_num : pkg->entry_count);
+
+	/* A count of zero is a special case: no buffer is attached. Simply set
+	 * the length to zero, the Last bit to one and the E (extension) bit to
+	 * zero.
+	 */
+ if (pkg->dif_control.dif_sge_count == 0) {
+ sqe->sge[dif_sge_place].buf_addr_hi = 0;
+ sqe->sge[dif_sge_place].buf_addr_lo = 0;
+ sqe->sge[dif_sge_place].wd0.buf_len = 0;
+ } else {
+ UNF_CM_GET_DIF_SGL_ENTRY(ret, (void *)pkg, &buf, &buf_len);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "DOUBLE DIF Get Dif Buf Fail.");
+ return UNF_RETURN_ERROR;
+ }
+ phys = (ulong)buf;
+ sqe->sge[dif_sge_place].buf_addr_hi = UNF_DMA_HI32(phys);
+ sqe->sge[dif_sge_place].buf_addr_lo = UNF_DMA_LO32(phys);
+ sqe->sge[dif_sge_place].wd0.buf_len = buf_len;
+ }
+
+	/* RDMA flag; not used by FC, so set it to 0. */
+ sqe->sge[dif_sge_place].wd0.r_flag = 0;
+
+ /* parity bit */
+ sqe->sge[dif_sge_place].wd1.buf_addr_gpa = 0;
+ sqe->sge[dif_sge_place].wd1.xid = 0;
+
+ /* The local sgl does not use the cascading SGE. Therefore, the value of
+ * this field is always 0.
+ */
+ sqe->sge[dif_sge_place].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
+ sqe->sge[dif_sge_place].wd1.last_flag = SPFC_WQE_SGE_LAST_FLAG;
+
+ spfc_cpu_to_big32(&sqe->sge[dif_sge_place], sizeof(struct spfc_variable_sge));
+
+ return RETURN_OK;
+}
+
+static u32 spfc_build_external_dif_sgl(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg,
+ struct spfc_sqe *sqe, int direction,
+ u32 bd_sge_num)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct unf_esgl_page *esgl_page = NULL;
+ ulong phys = 0;
+ u32 left_sge_num = 0;
+ u32 dif_sge_place = 0;
+ struct spfc_parent_ssq_info *ssq = NULL;
+ u32 ssqn = 0;
+
+ ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
+ ssq = &hba->parent_queue_mgr->shared_queue[ssqn].parent_ssq_info;
+
+ /* DIF SGE must be followed by BD SGE */
+ dif_sge_place = ((bd_sge_num <= pkg->entry_count) ? bd_sge_num : pkg->entry_count);
+
+ /* Allocate the first page first */
+ UNF_GET_FREE_ESGL_PAGE(esgl_page, hba->lport, pkg);
+ if (!esgl_page) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "DOUBLE DIF Get External Page Fail.");
+ return UNF_RETURN_ERROR;
+ }
+
+ phys = esgl_page->esgl_phy_addr;
+
+ /* Configuring the Address of the Cascading Page */
+ sqe->sge[dif_sge_place].buf_addr_hi = UNF_DMA_HI32(phys);
+ sqe->sge[dif_sge_place].buf_addr_lo = UNF_DMA_LO32(phys);
+
+ /* Configuring Control Information About the Cascading Page */
+ sqe->sge[dif_sge_place].wd0.buf_len = 0;
+ sqe->sge[dif_sge_place].wd0.r_flag = 0;
+ sqe->sge[dif_sge_place].wd1.extension_flag = SPFC_WQE_SGE_EXTEND_FLAG;
+ sqe->sge[dif_sge_place].wd1.last_flag = SPFC_WQE_SGE_NOT_LAST_FLAG;
+
+ /* parity bit */
+ sqe->sge[dif_sge_place].wd1.buf_addr_gpa = 0;
+ sqe->sge[dif_sge_place].wd1.xid = 0;
+
+ spfc_cpu_to_big32(&sqe->sge[dif_sge_place], sizeof(struct spfc_variable_sge));
+
+ /* Fill in the sge information on the cascading page */
+ left_sge_num = pkg->dif_control.dif_sge_count;
+ ret = spfc_fill_external_sgl_page(hba, pkg, esgl_page, left_sge_num,
+ direction, ssq->context_id, true);
+ if (ret != RETURN_OK)
+ return UNF_RETURN_ERROR;
+
+ return RETURN_OK;
+}
+
+static u32 spfc_build_local_sgl(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, struct spfc_sqe *sqe,
+ int direction)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ char *buf = NULL;
+ u32 buf_len = 0;
+ u32 index = 0;
+ ulong phys = 0;
+
+ for (index = 0; index < pkg->entry_count; index++) {
+ UNF_CM_GET_SGL_ENTRY(ret, (void *)pkg, &buf, &buf_len);
+ if (ret != RETURN_OK)
+ return UNF_RETURN_ERROR;
+
+ phys = (ulong)buf;
+ sqe->sge[index].buf_addr_hi = UNF_DMA_HI32(phys);
+ sqe->sge[index].buf_addr_lo = UNF_DMA_LO32(phys);
+ sqe->sge[index].wd0.buf_len = buf_len;
+
+		/* RDMA flag; not used by FC, so set it to 0. */
+ sqe->sge[index].wd0.r_flag = 0;
+
+ /* parity bit */
+ sqe->sge[index].wd1.buf_addr_gpa = SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE;
+ sqe->sge[index].wd1.xid = 0;
+
+ /* The local sgl does not use the cascading SGE. Therefore, the
+ * value of this field is always 0.
+ */
+ sqe->sge[index].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
+ sqe->sge[index].wd1.last_flag = SPFC_WQE_SGE_NOT_LAST_FLAG;
+
+ if (index == (pkg->entry_count - 1)) {
+ /* Sets the last WQE end flag 1 */
+ sqe->sge[index].wd1.last_flag = SPFC_WQE_SGE_LAST_FLAG;
+ }
+
+ spfc_cpu_to_big32(&sqe->sge[index], sizeof(struct spfc_variable_sge));
+ }
+
+ /* Adjust the length of the BDSL field in the CTRL domain. */
+ SPFC_ADJUST_DATA(sqe->ctrl_sl.ch.wd0.bdsl,
+ SPFC_BYTES_TO_QW_NUM((pkg->entry_count *
+ sizeof(struct spfc_variable_sge))));
+
+	/* entry_count == 0 is a special case: no buffer is attached. Simply
+	 * set the length to zero, the Last bit to one and the E (extension)
+	 * bit to zero.
+	 */
+ if (pkg->entry_count == 0) {
+ sqe->sge[ARRAY_INDEX_0].buf_addr_hi = 0;
+ sqe->sge[ARRAY_INDEX_0].buf_addr_lo = 0;
+ sqe->sge[ARRAY_INDEX_0].wd0.buf_len = 0;
+
+ /* rdma flag. This field is not used in fc. Set it to 0. */
+ sqe->sge[ARRAY_INDEX_0].wd0.r_flag = 0;
+
+ /* parity bit */
+ sqe->sge[ARRAY_INDEX_0].wd1.buf_addr_gpa = SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE;
+ sqe->sge[ARRAY_INDEX_0].wd1.xid = 0;
+
+ /* The local sgl does not use the cascading SGE. Therefore, the
+ * value of this field is always 0.
+ */
+ sqe->sge[ARRAY_INDEX_0].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
+ sqe->sge[ARRAY_INDEX_0].wd1.last_flag = SPFC_WQE_SGE_LAST_FLAG;
+
+ spfc_cpu_to_big32(&sqe->sge[ARRAY_INDEX_0], sizeof(struct spfc_variable_sge));
+
+ /* Adjust the length of the BDSL field in the CTRL domain. */
+ SPFC_ADJUST_DATA(sqe->ctrl_sl.ch.wd0.bdsl,
+ SPFC_BYTES_TO_QW_NUM(sizeof(struct spfc_variable_sge)));
+ }
+
+ return RETURN_OK;
+}
+
+static u32 spfc_build_external_sgl(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, struct spfc_sqe *sqe,
+ int direction, u32 bd_sge_num)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ char *buf = NULL;
+ struct unf_esgl_page *esgl_page = NULL;
+ ulong phys = 0;
+ u32 buf_len = 0;
+ u32 index = 0;
+ u32 left_sge_num = 0;
+ u32 local_sge_num = 0;
+ struct spfc_parent_ssq_info *ssq = NULL;
+ u16 ssqn = 0;
+
+ ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
+ ssq = &hba->parent_queue_mgr->shared_queue[ssqn].parent_ssq_info;
+
+ /* Ensure that the value of bd_sge_num is greater than or equal to one
+ */
+ local_sge_num = bd_sge_num - 1;
+
+ for (index = 0; index < local_sge_num; index++) {
+ UNF_CM_GET_SGL_ENTRY(ret, (void *)pkg, &buf, &buf_len);
+ if (unlikely(ret != RETURN_OK))
+ return UNF_RETURN_ERROR;
+
+ phys = (ulong)buf;
+
+ sqe->sge[index].buf_addr_hi = UNF_DMA_HI32(phys);
+ sqe->sge[index].buf_addr_lo = UNF_DMA_LO32(phys);
+ sqe->sge[index].wd0.buf_len = buf_len;
+
+ /* RDMA flag, which is not used by FC. */
+ sqe->sge[index].wd0.r_flag = 0;
+ sqe->sge[index].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
+ sqe->sge[index].wd1.last_flag = SPFC_WQE_SGE_NOT_LAST_FLAG;
+
+ /* parity bit */
+ sqe->sge[index].wd1.buf_addr_gpa = SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE;
+ sqe->sge[index].wd1.xid = 0;
+
+ spfc_cpu_to_big32(&sqe->sge[index], sizeof(struct spfc_variable_sge));
+ }
+
+ /* Calculate the number of remaining sge. */
+ left_sge_num = pkg->entry_count - local_sge_num;
+ /* Adjust the length of the BDSL field in the CTRL domain. */
+ SPFC_ADJUST_DATA(sqe->ctrl_sl.ch.wd0.bdsl,
+ SPFC_BYTES_TO_QW_NUM((bd_sge_num * sizeof(struct spfc_variable_sge))));
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "alloc extended sgl page,leftsge:%d", left_sge_num);
+ /* Allocating the first cascading page */
+ UNF_GET_FREE_ESGL_PAGE(esgl_page, hba->lport, pkg);
+ if (unlikely(!esgl_page))
+ return UNF_RETURN_ERROR;
+
+ phys = esgl_page->esgl_phy_addr;
+
+ /* Configuring the Address of the Cascading Page */
+ sqe->sge[index].buf_addr_hi = (u32)UNF_DMA_HI32(phys);
+ sqe->sge[index].buf_addr_lo = (u32)UNF_DMA_LO32(phys);
+
+ /* Configuring Control Information About the Cascading Page */
+ sqe->sge[index].wd0.buf_len = 0;
+ sqe->sge[index].wd0.r_flag = 0;
+ sqe->sge[index].wd1.extension_flag = SPFC_WQE_SGE_EXTEND_FLAG;
+ sqe->sge[index].wd1.last_flag = SPFC_WQE_SGE_NOT_LAST_FLAG;
+
+ /* parity bit */
+ sqe->sge[index].wd1.buf_addr_gpa = SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE;
+ sqe->sge[index].wd1.xid = 0;
+
+ spfc_cpu_to_big32(&sqe->sge[index], sizeof(struct spfc_variable_sge));
+
+ /* Fill in the sge information on the cascading page. */
+ ret = spfc_fill_external_sgl_page(hba, pkg, esgl_page, left_sge_num,
+ direction, ssq->context_id, false);
+ if (ret != RETURN_OK)
+ return UNF_RETURN_ERROR;
+ /* Copy the extended data sge to the extended sge of the extended wqe.*/
+ if (left_sge_num > 0) {
+ memcpy(sqe->esge, (void *)esgl_page->page_address,
+ SPFC_WQE_MAX_ESGE_NUM * sizeof(struct spfc_variable_sge));
+ }
+
+ return RETURN_OK;
+}
+
+u32 spfc_build_sgl_by_local_sge_num(struct unf_frame_pkg *pkg,
+ struct spfc_hba_info *hba, struct spfc_sqe *sqe,
+ int direction, u32 bd_sge_num)
+{
+ u32 ret = RETURN_OK;
+
+ if (pkg->entry_count <= bd_sge_num)
+ ret = spfc_build_local_sgl(hba, pkg, sqe, direction);
+ else
+ ret = spfc_build_external_sgl(hba, pkg, sqe, direction, bd_sge_num);
+
+ return ret;
+}
+
+u32 spfc_conf_dual_sgl_info(struct unf_frame_pkg *pkg,
+ struct spfc_hba_info *hba, struct spfc_sqe *sqe,
+ int direction, u32 bd_sge_num, bool double_sgl)
+{
+ u32 ret = RETURN_OK;
+
+ if (double_sgl) {
+ /* Adjust the length of the DIF_SL field in the CTRL domain */
+ SPFC_ADJUST_DATA(sqe->ctrl_sl.ch.wd0.dif_sl,
+ SPFC_BYTES_TO_QW_NUM(sizeof(struct spfc_variable_sge)));
+
+ if (pkg->dif_control.dif_sge_count <= SPFC_WQE_SGE_DIF_ENTRY_NUM)
+ ret = spfc_build_local_dif_sgl(hba, pkg, sqe, direction, bd_sge_num);
+ else
+ ret = spfc_build_external_dif_sgl(hba, pkg, sqe, direction, bd_sge_num);
+ }
+
+ return ret;
+}
+
+u32 spfc_build_sgl(struct spfc_hba_info *hba, struct unf_frame_pkg *pkg,
+ struct spfc_sqe *sqe, int direction, u32 dif_flag)
+{
+#define SPFC_ESGE_CNT 3
+ u32 ret = RETURN_OK;
+ u32 bd_sge_num = SPFC_WQE_SGE_ENTRY_NUM;
+ bool double_sgl = false;
+
+ if (dif_flag != 0 && (pkg->dif_control.flags & UNF_DIF_DOUBLE_SGL)) {
+ bd_sge_num = SPFC_WQE_SGE_ENTRY_NUM - SPFC_WQE_SGE_DIF_ENTRY_NUM;
+ double_sgl = true;
+ }
+
+	/* If all SGEs fit into the WQE's local SGE area, use it directly;
+	 * otherwise fall back to the external SGL (esgl).
+	 */
+ ret = spfc_build_sgl_by_local_sge_num(pkg, hba, sqe, direction, bd_sge_num);
+
+ if (unlikely(ret != RETURN_OK))
+ return ret;
+
+ /* Configuring Dual SGL Information for DIF */
+ ret = spfc_conf_dual_sgl_info(pkg, hba, sqe, direction, bd_sge_num, double_sgl);
+
+ return ret;
+}
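
spfc_build_sgl() therefore works with a data-SGE budget: normally the full SPFC_WQE_SGE_ENTRY_NUM local slots, reduced by SPFC_WQE_SGE_DIF_ENTRY_NUM when a separate DIF SGL shares the WQE. Whether the local area or an external SGL is used then depends only on pkg->entry_count versus that budget. A sketch of the selection, with made-up entry counts (the real macro values are not shown in this patch):

    #include <stdio.h>

    /* hypothetical values, for illustration only */
    #define WQE_SGE_ENTRY_NUM     8
    #define WQE_SGE_DIF_ENTRY_NUM 1

    static const char *pick_sgl(unsigned int entry_count, int double_sgl)
    {
            unsigned int budget = WQE_SGE_ENTRY_NUM;

            if (double_sgl)
                    budget -= WQE_SGE_DIF_ENTRY_NUM;  /* DIF SGEs share the WQE */

            return entry_count <= budget ? "local sgl" : "external sgl";
    }

    int main(void)
    {
            printf("%s\n", pick_sgl(7, 0));  /* local sgl */
            printf("%s\n", pick_sgl(8, 1));  /* external sgl: budget shrank to 7 */
            return 0;
    }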
+
+void spfc_adjust_dix(struct unf_frame_pkg *pkg, struct spfc_fc_dif_info *dif_info,
+ u8 task_type)
+{
+ u8 tasktype = task_type;
+ struct spfc_fc_dif_info *dif_info_l1 = NULL;
+
+ dif_info_l1 = dif_info;
+
+ if (dix_flag == 1) {
+ if (tasktype == SPFC_SQE_FCP_IWRITE ||
+ tasktype == SPFC_SQE_FCP_TRD) {
+ if ((UNF_DIF_ACTION_MASK & pkg->dif_control.protect_opcode) ==
+ UNF_DIF_ACTION_VERIFY_AND_FORWARD) {
+ dif_info_l1->wd0.grd_ctrl |=
+ SPFC_DIF_GARD_REF_APP_CTRL_REPLACE;
+ dif_info_l1->wd0.grd_agm_ctrl =
+ SPFC_DIF_GUARD_VERIFY_IP_CHECKSUM_REPLACE_CRC16;
+ }
+
+ if ((UNF_DIF_ACTION_MASK & pkg->dif_control.protect_opcode) ==
+ UNF_DIF_ACTION_VERIFY_AND_DELETE) {
+ dif_info_l1->wd0.grd_agm_ctrl =
+ SPFC_DIF_GUARD_VERIFY_IP_CHECKSUM_REPLACE_CRC16;
+ }
+ }
+
+ if (tasktype == SPFC_SQE_FCP_IREAD ||
+ tasktype == SPFC_SQE_FCP_TWR) {
+ if ((UNF_DIF_ACTION_MASK &
+ pkg->dif_control.protect_opcode) ==
+ UNF_DIF_ACTION_VERIFY_AND_FORWARD) {
+ dif_info_l1->wd0.grd_ctrl |=
+ SPFC_DIF_GARD_REF_APP_CTRL_REPLACE;
+ dif_info_l1->wd0.grd_agm_ctrl =
+ SPFC_DIF_GUARD_VERIFY_CRC16_REPLACE_IP_CHECKSUM;
+ }
+
+ if ((UNF_DIF_ACTION_MASK &
+ pkg->dif_control.protect_opcode) ==
+ UNF_DIF_ACTION_INSERT) {
+ dif_info_l1->wd0.grd_agm_ctrl =
+ SPFC_DIF_GUARD_VERIFY_CRC16_REPLACE_IP_CHECKSUM;
+ }
+ }
+ }
+
+ if (grd_agm_ctrl != 0)
+ dif_info_l1->wd0.grd_agm_ctrl = grd_agm_ctrl;
+
+ if (grd_ctrl != 0)
+ dif_info_l1->wd0.grd_ctrl = grd_ctrl;
+}
+
+void spfc_get_dma_direction_by_fcp_cmnd(const struct unf_fcp_cmnd *fcp_cmnd,
+ int *dma_direction, u8 *task_type)
+{
+ if (UNF_FCP_WR_DATA & fcp_cmnd->control) {
+ *task_type = SPFC_SQE_FCP_IWRITE;
+ *dma_direction = DMA_TO_DEVICE;
+ } else if (UNF_GET_TASK_MGMT_FLAGS(fcp_cmnd->control) != 0) {
+ *task_type = SPFC_SQE_FCP_ITMF;
+ *dma_direction = DMA_FROM_DEVICE;
+ } else {
+ *task_type = SPFC_SQE_FCP_IREAD;
+ *dma_direction = DMA_FROM_DEVICE;
+ }
+}
+
+static inline u32 spfc_build_icmnd_wqe(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg,
+ struct spfc_sqe *sge)
+{
+ u32 ret = RETURN_OK;
+ int direction = 0;
+ u8 tasktype = 0;
+ struct unf_fcp_cmnd *fcp_cmnd = NULL;
+ struct spfc_sqe *sqe = sge;
+ u32 dif_flag = 0;
+
+ fcp_cmnd = pkg->fcp_cmnd;
+ if (unlikely(!fcp_cmnd)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+			     "[err]Package's FCP command pointer is NULL.");
+
+ return UNF_RETURN_ERROR;
+ }
+
+ spfc_get_dma_direction_by_fcp_cmnd(fcp_cmnd, &direction, &tasktype);
+
+ spfc_build_icmnd_wqe_ts_header(pkg, sqe, tasktype, hba->exi_base, hba->port_index);
+
+ spfc_build_icmnd_wqe_ctrls(pkg, sqe);
+
+ spfc_build_icmnd_wqe_ts(hba, pkg, &sqe->ts_sl, &sqe->ts_ex);
+
+ if (sqe->ts_sl.task_type != SPFC_SQE_FCP_ITMF) {
+ if (pkg->dif_control.protect_opcode == UNF_DIF_ACTION_NONE) {
+ dif_flag = 0;
+ spfc_build_no_dif_control(pkg, &sqe->ts_sl.cont.icmnd.info.dif_info);
+ } else {
+ dif_flag = 1;
+ spfc_build_dif_control(hba, pkg, &sqe->ts_sl.cont.icmnd.info.dif_info);
+ spfc_adjust_dix(pkg,
+ &sqe->ts_sl.cont.icmnd.info.dif_info,
+ tasktype);
+ }
+ }
+
+ ret = spfc_build_sgl(hba, pkg, sqe, direction, dif_flag);
+
+ sqe->sid = UNF_GET_SID(pkg);
+ sqe->did = UNF_GET_DID(pkg);
+
+ return ret;
+}
+
+u32 spfc_send_scsi_cmnd(void *hba, struct unf_frame_pkg *pkg)
+{
+ struct spfc_hba_info *spfc_hba = NULL;
+ struct spfc_parent_sq_info *parent_sq = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_sqe sqe;
+ u16 ssqn;
+ struct spfc_parent_queue_info *parent_queue = NULL;
+
+ /* input param check */
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+
+ SPFC_CHECK_PKG_ALLOCTIME(pkg);
+ memset(&sqe, 0, sizeof(struct spfc_sqe));
+ spfc_hba = hba;
+
+ /* 1. find parent sq for scsi_cmnd(pkg) */
+ parent_sq = spfc_find_parent_sq_by_pkg(spfc_hba, pkg);
+ if (unlikely(!parent_sq)) {
+ /* Do not need to print info */
+ return UNF_RETURN_ERROR;
+ }
+
+ pkg->qos_level += spfc_hba->vpid_start;
+
+ /* 2. build cmnd wqe (to sqe) for scsi_cmnd(pkg) */
+ ret = spfc_build_icmnd_wqe(spfc_hba, pkg, &sqe);
+ if (unlikely(ret != RETURN_OK)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[fail]Port(0x%x) Build WQE failed, SID(0x%x) DID(0x%x) pkg type(0x%x) hottag(0x%x).",
+ spfc_hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did, pkg->type, UNF_GET_XCHG_TAG(pkg));
+
+ return ret;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "Port(0x%x) RPort(0x%x) send FCP_CMND TYPE(0x%x) Local_Xid(0x%x) hottag(0x%x) LBA(0x%llx)",
+ spfc_hba->port_cfg.port_id, parent_sq->rport_index,
+ sqe.ts_sl.task_type, sqe.ts_sl.local_xid,
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX],
+ *((u64 *)pkg->fcp_cmnd->cdb));
+
+ ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
+ if (sqe.ts_sl.task_type == SPFC_SQE_FCP_ITMF) {
+ parent_queue = container_of(parent_sq, struct spfc_parent_queue_info,
+ parent_sq_info);
+ ret = spfc_suspend_sqe_and_send_nop(spfc_hba, parent_queue, &sqe, pkg);
+ return ret;
+ }
+ /* 3. En-Queue Parent SQ for scsi_cmnd(pkg) sqe */
+ ret = spfc_parent_sq_enqueue(parent_sq, &sqe, ssqn);
+
+ return ret;
+}
+
+static void spfc_ini_status_default_handler(struct spfc_scqe_iresp *iresp,
+ struct unf_frame_pkg *pkg)
+{
+ u8 control = 0;
+ u16 com_err_code = 0;
+
+ control = iresp->wd2.fcp_flag & SPFC_CTRL_MASK;
+
+ if (iresp->fcp_resid != 0) {
+ com_err_code = UNF_IO_FAILED;
+ pkg->residus_len = iresp->fcp_resid;
+ } else {
+ com_err_code = UNF_IO_SUCCESS;
+ pkg->residus_len = 0;
+ }
+
+ pkg->status = spfc_fill_pkg_status(com_err_code, control, iresp->wd2.scsi_status);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]Fill package with status: 0x%x, residus len: 0x%x",
+ pkg->status, pkg->residus_len);
+}
+
+static void spfc_check_fcp_rsp_iu(struct spfc_scqe_iresp *iresp,
+ struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg)
+{
+ u8 scsi_status = 0;
+ u8 control = 0;
+
+ control = (u8)iresp->wd2.fcp_flag;
+ scsi_status = (u8)iresp->wd2.scsi_status;
+
+	/* The FCP_RSP IU arrives little-endian in the IOB WQE and is passed
+	 * on to the COM layer's pkg as-is.
+	 */
+ if (control & FCP_RESID_UNDER_MASK) {
+ /* under flow: usually occurs in inquiry */
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]I_STS IOB posts under flow with residus len: %u, FCP residue: %u.",
+ pkg->residus_len, iresp->fcp_resid);
+
+ if (pkg->residus_len != iresp->fcp_resid)
+ pkg->status = spfc_fill_pkg_status(UNF_IO_FAILED, control, scsi_status);
+ else
+ pkg->status = spfc_fill_pkg_status(UNF_IO_UNDER_FLOW, control, scsi_status);
+ }
+
+ if (control & FCP_RESID_OVER_MASK) {
+ /* over flow: error happened */
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]I_STS IOB posts over flow with residus len: %u, FCP residue: %u.",
+ pkg->residus_len, iresp->fcp_resid);
+
+ if (pkg->residus_len != iresp->fcp_resid)
+ pkg->status = spfc_fill_pkg_status(UNF_IO_FAILED, control, scsi_status);
+ else
+ pkg->status = spfc_fill_pkg_status(UNF_IO_OVER_FLOW, control, scsi_status);
+ }
+
+ pkg->unf_rsp_pload_bl.length = 0;
+ pkg->unf_sense_pload_bl.length = 0;
+
+ if (control & FCP_RSP_LEN_VALID_MASK) {
+ /* dma by chip */
+ pkg->unf_rsp_pload_bl.buffer_ptr = NULL;
+
+ pkg->unf_rsp_pload_bl.length = iresp->fcp_rsp_len;
+ pkg->byte_orders |= UNF_BIT_3;
+ }
+
+ if (control & FCP_SNS_LEN_VALID_MASK) {
+ /* dma by chip */
+ pkg->unf_sense_pload_bl.buffer_ptr = NULL;
+
+ pkg->unf_sense_pload_bl.length = iresp->fcp_sns_len;
+ pkg->byte_orders |= UNF_BIT_4;
+ }
+
+ if (iresp->wd1.user_id_num == 1 &&
+ (pkg->unf_sense_pload_bl.length + pkg->unf_rsp_pload_bl.length > 0)) {
+ pkg->unf_rsp_pload_bl.buffer_ptr =
+ (u8 *)spfc_get_els_buf_by_user_id(hba, (u16)iresp->user_id[ARRAY_INDEX_0]);
+ } else if (iresp->wd1.user_id_num > 1) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]receive buff num 0x%x > 1 0x%x",
+ iresp->wd1.user_id_num, control);
+ }
+}
+
+u16 spfc_get_com_err_code(struct unf_frame_pkg *pkg)
+{
+ u16 com_err_code = UNF_IO_FAILED;
+ u32 status_subcode = 0;
+
+ status_subcode = pkg->status_sub_code;
+
+ if (likely(status_subcode == 0))
+ com_err_code = 0;
+ else if (status_subcode == UNF_DIF_CRC_ERR)
+ com_err_code = UNF_IO_DIF_ERROR;
+ else if (status_subcode == UNF_DIF_LBA_ERR)
+ com_err_code = UNF_IO_DIF_REF_ERROR;
+ else if (status_subcode == UNF_DIF_APP_ERR)
+ com_err_code = UNF_IO_DIF_GEN_ERROR;
+
+ return com_err_code;
+}
+
+void spfc_process_ini_fail_io(struct spfc_hba_info *hba, union spfc_scqe *iresp,
+ struct unf_frame_pkg *pkg)
+{
+ u16 com_err_code = UNF_IO_FAILED;
+
+ /* 1. error stats process */
+ if (SPFC_GET_SCQE_STATUS((union spfc_scqe *)(void *)iresp) != 0) {
+ switch (SPFC_GET_SCQE_STATUS((union spfc_scqe *)(void *)iresp)) {
+ /* I/O not complete: 1.session reset; 2.clear buffer */
+ case FC_CQE_BUFFER_CLEAR_IO_COMPLETED:
+ case FC_CQE_SESSION_RST_CLEAR_IO_COMPLETED:
+ case FC_CQE_SESSION_ONLY_CLEAR_IO_COMPLETED:
+ case FC_CQE_WQE_FLUSH_IO_COMPLETED:
+ com_err_code = UNF_IO_CLEAN_UP;
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[warn]Port(0x%x) INI IO not complete, OX_ID(0x%x) RX_ID(0x%x) status(0x%x)",
+ hba->port_cfg.port_id,
+ ((struct spfc_scqe_iresp *)iresp)->wd0.ox_id,
+ ((struct spfc_scqe_iresp *)iresp)->wd0.rx_id,
+ com_err_code);
+
+ break;
+ /* Allocate task id(oxid) fail */
+ case FC_ERROR_INVALID_TASK_ID:
+ com_err_code = UNF_IO_NO_XCHG;
+ break;
+ case FC_ALLOC_EXCH_ID_FAILED:
+ com_err_code = UNF_IO_NO_XCHG;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[warn]Port(0x%x) INI IO, tag 0x%x alloc oxid fail.",
+ hba->port_cfg.port_id,
+ ((struct spfc_scqe_iresp *)iresp)->wd2.hotpooltag);
+ break;
+ case FC_ERROR_CODE_DATA_DIFX_FAILED:
+ com_err_code = pkg->status >> UNF_SHIFT_16;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[warn]Port(0x%x) INI IO, tag 0x%x tx dif error.",
+ hba->port_cfg.port_id,
+ ((struct spfc_scqe_iresp *)iresp)->wd2.hotpooltag);
+ break;
+ /* any other: I/O failed --->>> DID error */
+ default:
+ com_err_code = UNF_IO_FAILED;
+ break;
+ }
+
+ /* fill pkg status & return directly */
+ pkg->status =
+ spfc_fill_pkg_status(com_err_code,
+ ((struct spfc_scqe_iresp *)iresp)->wd2.fcp_flag,
+ ((struct spfc_scqe_iresp *)iresp)->wd2.scsi_status);
+
+ return;
+ }
+
+ /* 2. default stats process */
+ spfc_ini_status_default_handler((struct spfc_scqe_iresp *)iresp, pkg);
+
+ /* 3. FCP RSP IU check */
+ spfc_check_fcp_rsp_iu((struct spfc_scqe_iresp *)iresp, hba, pkg);
+}
+
+void spfc_process_dif_result(struct spfc_hba_info *hba, union spfc_scqe *wqe,
+ struct unf_frame_pkg *pkg)
+{
+ u16 com_err_code = UNF_IO_FAILED;
+ u8 dif_info = 0;
+
+ dif_info = wqe->common.wd0.dif_vry_rst;
+ if (dif_info == SPFC_TX_DIF_ERROR_FLAG) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[error]Port(0x%x) TGT recv tx dif result abnormal.",
+ hba->port_cfg.port_id);
+ }
+
+ pkg->status_sub_code =
+ (dif_info & SPFC_DIF_ERROR_CODE_CRC)
+ ? UNF_DIF_CRC_ERR
+ : ((dif_info & SPFC_DIF_ERROR_CODE_REF)
+ ? UNF_DIF_LBA_ERR
+ : ((dif_info & SPFC_DIF_ERROR_CODE_APP) ? UNF_DIF_APP_ERR : 0));
+ com_err_code = spfc_get_com_err_code(pkg);
+ pkg->status = (u32)(com_err_code) << UNF_SHIFT_16;
+
+ if (unlikely(com_err_code != 0)) {
+ spfc_dif_err_count(hba, dif_info);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[error]Port(0x%x) INI io status with dif result(0x%x),subcode(0x%x) pkg->status(0x%x)",
+ hba->port_cfg.port_id, dif_info,
+ pkg->status_sub_code, pkg->status);
+ }
+}
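
The nested conditional expression in spfc_process_dif_result() above reads more easily as the equivalent chain below (a fragment only, using the same identifiers as the driver); CRC errors take precedence over ref-tag errors, which take precedence over app-tag errors:

    if (dif_info & SPFC_DIF_ERROR_CODE_CRC)
            pkg->status_sub_code = UNF_DIF_CRC_ERR;
    else if (dif_info & SPFC_DIF_ERROR_CODE_REF)
            pkg->status_sub_code = UNF_DIF_LBA_ERR;
    else if (dif_info & SPFC_DIF_ERROR_CODE_APP)
            pkg->status_sub_code = UNF_DIF_APP_ERR;
    else
            pkg->status_sub_code = 0;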
+
+u32 spfc_scq_recv_iresp(struct spfc_hba_info *hba, union spfc_scqe *wqe)
+{
+#define SPFC_IRSP_USERID_LEN ((FC_SENSEDATA_USERID_CNT_MAX + 1) / 2)
+ struct spfc_scqe_iresp *iresp = NULL;
+ struct unf_frame_pkg pkg;
+ u32 ret = RETURN_OK;
+ u16 hot_tag;
+
+ FC_CHECK_RETURN_VALUE((hba), UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE((wqe), UNF_RETURN_ERROR);
+
+ iresp = (struct spfc_scqe_iresp *)(void *)wqe;
+
+ /* 1. Constraints: I_STS remain cnt must be zero */
+ if (unlikely(SPFC_GET_SCQE_REMAIN_CNT(wqe) != 0)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) ini_wqe(OX_ID:0x%x RX_ID:0x%x) HotTag(0x%x) remain_cnt(0x%x) abnormal, status(0x%x)",
+ hba->port_cfg.port_id, iresp->wd0.ox_id,
+ iresp->wd0.rx_id, iresp->wd2.hotpooltag,
+ SPFC_GET_SCQE_REMAIN_CNT(wqe),
+ SPFC_GET_SCQE_STATUS(wqe));
+
+ UNF_PRINT_SFS_LIMIT(UNF_MAJOR, hba->port_cfg.port_id, wqe, sizeof(union spfc_scqe));
+
+ /* return directly */
+ return UNF_RETURN_ERROR;
+ }
+
+ spfc_swap_16_in_32((u32 *)iresp->user_id, SPFC_IRSP_USERID_LEN);
+
+ memset(&pkg, 0, sizeof(struct unf_frame_pkg));
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = iresp->magic_num;
+ pkg.frame_head.oxid_rxid = (((iresp->wd0.ox_id) << UNF_SHIFT_16) | (iresp->wd0.rx_id));
+
+ hot_tag = (u16)iresp->wd2.hotpooltag & UNF_ORIGIN_HOTTAG_MASK;
+ /* 2. HotTag validity check */
+ if (likely(hot_tag >= hba->exi_base && (hot_tag < hba->exi_base + hba->exi_count))) {
+ pkg.status = UNF_IO_SUCCESS;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] =
+ hot_tag - hba->exi_base;
+ } else {
+ /* OX_ID error: return by COM */
+ pkg.status = UNF_IO_FAILED;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = INVALID_VALUE16;
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) ini_cmnd_wqe(OX_ID:0x%x RX_ID:0x%x) ox_id invalid, status(0x%x)",
+ hba->port_cfg.port_id, iresp->wd0.ox_id, iresp->wd0.rx_id,
+ SPFC_GET_SCQE_STATUS(wqe));
+
+ UNF_PRINT_SFS_LIMIT(UNF_MAJOR, hba->port_cfg.port_id, wqe,
+ sizeof(union spfc_scqe));
+ }
+
+ /* process dif result */
+ spfc_process_dif_result(hba, wqe, &pkg);
+
+ /* 3. status check */
+ if (unlikely(SPFC_GET_SCQE_STATUS(wqe) ||
+ iresp->wd2.scsi_status != 0 || iresp->fcp_resid != 0 ||
+ ((iresp->wd2.fcp_flag & SPFC_CTRL_MASK) != 0))) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[warn]Port(0x%x) scq_status(0x%x) scsi_status(0x%x) fcp_resid(0x%x) fcp_flag(0x%x)",
+ hba->port_cfg.port_id, SPFC_GET_SCQE_STATUS(wqe),
+ iresp->wd2.scsi_status, iresp->fcp_resid,
+ iresp->wd2.fcp_flag);
+
+ /* set pkg status & check fcp_rsp IU */
+ spfc_process_ini_fail_io(hba, (union spfc_scqe *)iresp, &pkg);
+ }
+
+ /* 4. LL_Driver ---to--->>> COM_Driver */
+ UNF_LOWLEVEL_SCSI_COMPLETED(ret, hba->lport, &pkg);
+ if (iresp->wd1.user_id_num == 1 &&
+ (pkg.unf_sense_pload_bl.length + pkg.unf_rsp_pload_bl.length > 0)) {
+ spfc_post_els_srq_wqe(&hba->els_srq_info, (u16)iresp->user_id[ARRAY_INDEX_0]);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Port(0x%x) rport(0x%x) recv(%s) hottag(0x%x) OX_ID(0x%x) RX_ID(0x%x) return(%s)",
+ hba->port_cfg.port_id, iresp->wd1.conn_id,
+ (SPFC_SCQE_FCP_IRSP == (SPFC_GET_SCQE_TYPE(wqe)) ? "IRESP" : "ITMF_RSP"),
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX], iresp->wd0.ox_id,
+ iresp->wd0.rx_id, (ret == RETURN_OK) ? "OK" : "ERROR");
+
+ return ret;
+}
diff --git a/drivers/scsi/spfc/hw/spfc_io.h b/drivers/scsi/spfc/hw/spfc_io.h
new file mode 100644
index 000000000000..26d10a51bbe4
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_io.h
@@ -0,0 +1,138 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_IO_H
+#define SPFC_IO_H
+
+#include "unf_type.h"
+#include "unf_common.h"
+#include "spfc_hba.h"
+
+#ifdef __cplusplus
+#if __cplusplus
+extern "C" {
+#endif
+#endif /* __cplusplus */
+
+#define BYTE_PER_DWORD 4
+#define SPFC_TRESP_DIRECT_CARRY_LEN (23 * 4)
+#define FCP_RESP_IU_LEN_BYTE_GOOD_STATUS 24
+#define SPFC_TRSP_IU_CONTROL_OFFSET 2
+#define SPFC_TRSP_IU_FCP_CONF_REP (1 << 12)
+
+struct spfc_dif_io_param {
+ u32 all_len;
+ u32 buf_len;
+ char **buf;
+ char *in_buf;
+ int drect;
+};
+
+enum dif_mode_type {
+ DIF_MODE_NONE = 0x0,
+ DIF_MODE_INSERT = 0x1,
+ DIF_MODE_REMOVE = 0x2,
+ DIF_MODE_FORWARD_OR_REPLACE = 0x3
+};
+
+enum ref_tag_mode_type {
+ BOTH_NONE = 0x0,
+ RECEIVE_INCREASE = 0x1,
+ REPLACE_INCREASE = 0x2,
+ BOTH_INCREASE = 0x3
+};
+
+#define SPFC_DIF_DISABLE 0
+#define SPFC_DIF_ENABLE 1
+#define SPFC_DIF_SINGLE_SGL 0
+#define SPFC_DIF_DOUBLE_SGL 1
+#define SPFC_DIF_SECTOR_512B_MODE 0
+#define SPFC_DIF_SECTOR_4KB_MODE 1
+#define SPFC_DIF_TYPE1 0x01
+#define SPFC_DIF_TYPE3 0x03
+#define SPFC_DIF_GUARD_VERIFY_ALGORITHM_CTL_T10_CRC16 0x0
+#define SPFC_DIF_GUARD_VERIFY_CRC16_REPLACE_IP_CHECKSUM 0x1
+#define SPFC_DIF_GUARD_VERIFY_IP_CHECKSUM_REPLACE_CRC16 0x2
+#define SPFC_DIF_GUARD_VERIFY_ALGORITHM_CTL_IP_CHECKSUM 0x3
+#define SPFC_DIF_CRC16_INITIAL_SELECTOR_DEFAUL 0
+#define SPFC_DIF_CRC_CS_INITIAL_CONFIG_BY_REGISTER 0
+#define SPFC_DIF_CRC_CS_INITIAL_CONFIG_BY_BIT0_1 0x4
+
+#define SPFC_DIF_GARD_REF_APP_CTRL_VERIFY 0x4
+#define SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY 0x0
+#define SPFC_DIF_GARD_REF_APP_CTRL_INSERT 0x0
+#define SPFC_DIF_GARD_REF_APP_CTRL_DELETE 0x1
+#define SPFC_DIF_GARD_REF_APP_CTRL_FORWARD 0x2
+#define SPFC_DIF_GARD_REF_APP_CTRL_REPLACE 0x3
+
+#define SPFC_BUILD_RESPONSE_INFO_NON_GAP_MODE0 0
+#define SPFC_BUILD_RESPONSE_INFO_GPA_MODE1 1
+#define SPFC_CONF_SUPPORT 1
+#define SPFC_CONF_NOT_SUPPORT 0
+#define SPFC_XID_INTERVAL 2048
+
+#define SPFC_DIF_ERROR_CODE_MASK 0xe
+#define SPFC_DIF_ERROR_CODE_CRC 0x2
+#define SPFC_DIF_ERROR_CODE_REF 0x4
+#define SPFC_DIF_ERROR_CODE_APP 0x8
+#define SPFC_TX_DIF_ERROR_FLAG (1 << 7)
+
+#define SPFC_DIF_PAYLOAD_TYPE (1 << 0)
+#define SPFC_DIF_CRC_TYPE (1 << 1)
+#define SPFC_DIF_APP_TYPE (1 << 2)
+#define SPFC_DIF_REF_TYPE (1 << 3)
+
+#define SPFC_DIF_SEND_DIFERR_ALL (0)
+#define SPFC_DIF_SEND_DIFERR_CRC (1)
+#define SPFC_DIF_SEND_DIFERR_APP (2)
+#define SPFC_DIF_SEND_DIFERR_REF (3)
+#define SPFC_DIF_RECV_DIFERR_ALL (4)
+#define SPFC_DIF_RECV_DIFERR_CRC (5)
+#define SPFC_DIF_RECV_DIFERR_APP (6)
+#define SPFC_DIF_RECV_DIFERR_REF (7)
+#define SPFC_DIF_ERR_ENABLE (382855)
+#define SPFC_DIF_ERR_DISABLE (0)
+
+#define SPFC_DIF_LENGTH (8)
+#define SPFC_SECT_SIZE_512 (512)
+#define SPFC_SECT_SIZE_4096 (4096)
+#define SPFC_SECT_SIZE_512_8 (520)
+#define SPFC_SECT_SIZE_4096_8 (4104)
+#define SPFC_DIF_SECT_SIZE_APP_OFFSET (2)
+#define SPFC_DIF_SECT_SIZE_LBA_OFFSET (4)
+
+#define SPFC_MAX_IO_TAG (2048)
+#define SPFC_PRINT_WORD (8)
+
+extern u32 dif_protect_opcode;
+extern u32 dif_sect_size;
+extern u32 no_dif_sect_size;
+extern u32 grd_agm_ini_ctrl;
+extern u32 ref_tag_mod;
+extern u32 grd_ctrl;
+extern u32 grd_agm_ctrl;
+extern u32 cmp_app_tag_mask;
+extern u32 app_tag_ctrl;
+extern u32 ref_tag_ctrl;
+extern u32 rep_ref_tag;
+extern u32 rx_rep_ref_tag;
+extern u16 cmp_app_tag;
+extern u16 rep_app_tag;
+
+#define spfc_fill_pkg_status(com_err_code, control, scsi_status) \
+ (((u32)(com_err_code) << 16) | ((u32)(control) << 8) | \
+ (u32)(scsi_status))
+#define SPFC_CTRL_MASK 0x1f
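
spfc_fill_pkg_status() simply packs the three fields into one word: the common error code in bits 31:16, the FCP control flags in bits 15:8 and the SCSI status in bits 7:0, while SPFC_CTRL_MASK keeps the low five bits of the fcp_flag field. A standalone check of the packing (the macro is copied here with uint32_t so the snippet compiles on its own, and the field values are only examples):

    #include <assert.h>
    #include <stdint.h>

    #define spfc_fill_pkg_status(com_err_code, control, scsi_status) \
            (((uint32_t)(com_err_code) << 16) | ((uint32_t)(control) << 8) | \
             (uint32_t)(scsi_status))

    int main(void)
    {
            /* example values: error code 0x1, fcp_flag 0x0a, SCSI status 0x02 */
            assert(spfc_fill_pkg_status(0x1, 0x0a, 0x02) == 0x00010a02u);
            return 0;
    }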
+
+u32 spfc_send_scsi_cmnd(void *hba, struct unf_frame_pkg *pkg);
+u32 spfc_scq_recv_iresp(struct spfc_hba_info *hba, union spfc_scqe *wqe);
+void spfc_process_dif_result(struct spfc_hba_info *hba, union spfc_scqe *wqe,
+ struct unf_frame_pkg *pkg);
+
+#ifdef __cplusplus
+#if __cplusplus
+}
+#endif
+#endif /* __cplusplus */
+
+#endif /* SPFC_IO_H */
diff --git a/drivers/scsi/spfc/hw/spfc_lld.c b/drivers/scsi/spfc/hw/spfc_lld.c
new file mode 100644
index 000000000000..6c252cc60bd6
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_lld.c
@@ -0,0 +1,998 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/io-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/inetdevice.h>
+#include <net/addrconf.h>
+#include <linux/time.h>
+#include <linux/timex.h>
+#include <linux/rtc.h>
+#include <linux/aer.h>
+#include <linux/debugfs.h>
+
+#include "spfc_lld.h"
+#include "sphw_hw.h"
+#include "sphw_mt.h"
+#include "sphw_hw_cfg.h"
+#include "sphw_hw_comm.h"
+#include "sphw_common.h"
+#include "spfc_cqm_main.h"
+#include "spfc_module.h"
+
+#define SPFC_DRV_NAME "spfc"
+#define SPFC_CHIP_NAME "spfc"
+
+#define PCI_VENDOR_ID_RAMAXEL 0x1E81
+#define SPFC_DEV_ID_PF_STD 0x9010
+#define SPFC_DEV_ID_VF 0x9008
+
+#define SPFC_VF_PCI_CFG_REG_BAR 0
+#define SPFC_PF_PCI_CFG_REG_BAR 1
+
+#define SPFC_PCI_INTR_REG_BAR 2
+#define SPFC_PCI_MGMT_REG_BAR 3
+#define SPFC_PCI_DB_BAR 4
+
+#define SPFC_SECOND_BASE (1000)
+#define SPFC_SYNC_YEAR_OFFSET (1900)
+#define SPFC_SYNC_MONTH_OFFSET (1)
+#define SPFC_MINUTE_BASE (60)
+#define SPFC_WAIT_TOOL_CNT_TIMEOUT 10000
+
+#define SPFC_MIN_TIME_IN_USECS 900
+#define SPFC_MAX_TIME_IN_USECS 1000
+#define SPFC_MAX_LOOP_TIMES 10000
+
+#define SPFC_TOOL_MIN_TIME_IN_USECS 9900
+#define SPFC_TOOL_MAX_TIME_IN_USECS 10000
+
+#define SPFC_EVENT_PROCESS_TIMEOUT 10000
+
+#define FIND_BIT(num, n) (((num) & (1UL << (n))) ? 1 : 0)
+#define SET_BIT(num, n) ((num) | (1UL << (n)))
+#define CLEAR_BIT(num, n) ((num) & (~(1UL << (n))))
+
+#define MAX_CARD_ID 64
+static unsigned long card_bit_map;
+LIST_HEAD(g_spfc_chip_list);
+struct spfc_uld_info g_uld_info[SERVICE_T_MAX] = { {0} };
+
+struct unf_cm_handle_op spfc_cm_op_handle = {0};
+
+u32 allowed_probe_num = SPFC_MAX_PORT_NUM;
+u32 dif_sgl_mode;
+u32 max_speed = SPFC_SPEED_32G;
+u32 accum_db_num = 1;
+u32 dif_type = 0x1;
+u32 wqe_page_size = 4096;
+u32 wqe_pre_load = 6;
+u32 combo_length = 128;
+u32 cos_bit_map = 0x1f;
+u32 spfc_dif_type;
+u32 spfc_dif_enable;
+u8 spfc_guard;
+int link_lose_tmo = 30;
+
+u32 exit_count = 4096;
+u32 exit_stride = 4096;
+u32 exit_base;
+
+/* dfx counter */
+atomic64_t rx_tx_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+atomic64_t rx_tx_err[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+atomic64_t scq_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+atomic64_t aeq_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+atomic64_t dif_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+atomic64_t mail_box_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+atomic64_t up_err_event_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+u64 link_event_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_LINK_EVENT_CNT];
+u64 link_reason_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_LINK_REASON_CNT];
+u64 hba_stat[SPFC_MAX_PORT_NUM][SPFC_HBA_STAT_BUTT];
+atomic64_t com_up_event_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+
+#ifndef MAX_SIZE
+#define MAX_SIZE (16)
+#endif
+
+struct spfc_lld_lock g_lld_lock;
+
+/* g_device_mutex */
+struct mutex g_device_mutex;
+
+/* pci device initialize lock */
+struct mutex g_pci_init_mutex;
+
+#define WAIT_LLD_DEV_HOLD_TIMEOUT (10 * 60 * 1000) /* 10 minutes */
+#define WAIT_LLD_DEV_NODE_CHANGED (10 * 60 * 1000) /* 10 minutes */
+#define WAIT_LLD_DEV_REF_CNT_EMPTY (2 * 60 * 1000) /* 2 minutes */
+
+void lld_dev_cnt_init(struct spfc_pcidev *pci_adapter)
+{
+ atomic_set(&pci_adapter->ref_cnt, 0);
+}
+
+void lld_dev_hold(struct spfc_lld_dev *dev)
+{
+ struct spfc_pcidev *pci_adapter = pci_get_drvdata(dev->pdev);
+
+ atomic_inc(&pci_adapter->ref_cnt);
+}
+
+void lld_dev_put(struct spfc_lld_dev *dev)
+{
+ struct spfc_pcidev *pci_adapter = pci_get_drvdata(dev->pdev);
+
+ atomic_dec(&pci_adapter->ref_cnt);
+}
+
+static void spfc_sync_time_to_fmw(struct spfc_pcidev *pdev_pri)
+{
+ struct tm tm = {0};
+ u64 tv_msec;
+ int err;
+
+ tv_msec = ktime_to_ms(ktime_get_real());
+ err = sphw_sync_time(pdev_pri->hwdev, tv_msec);
+ if (err) {
+ sdk_err(&pdev_pri->pcidev->dev, "Synchronize UTC time to firmware failed, errno:%d.\n",
+ err);
+ } else {
+ time64_to_tm(tv_msec / MSEC_PER_SEC, 0, &tm);
+ sdk_info(&pdev_pri->pcidev->dev, "Synchronize UTC time to firmware succeed. UTC time %ld-%02d-%02d %02d:%02d:%02d.\n",
+ tm.tm_year + 1900, tm.tm_mon + 1,
+ tm.tm_mday, tm.tm_hour,
+ tm.tm_min, tm.tm_sec);
+ }
+}
+
+void wait_lld_dev_unused(struct spfc_pcidev *pci_adapter)
+{
+ u32 loop_cnt = 0;
+
+ while (loop_cnt < SPFC_WAIT_TOOL_CNT_TIMEOUT) {
+ if (!atomic_read(&pci_adapter->ref_cnt))
+ return;
+
+ usleep_range(SPFC_TOOL_MIN_TIME_IN_USECS, SPFC_TOOL_MAX_TIME_IN_USECS);
+ loop_cnt++;
+ }
+}
+
+static void lld_lock_chip_node(void)
+{
+ u32 loop_cnt;
+
+ mutex_lock(&g_lld_lock.lld_mutex);
+
+ loop_cnt = 0;
+ while (loop_cnt < WAIT_LLD_DEV_NODE_CHANGED) {
+ if (!test_and_set_bit(SPFC_NODE_CHANGE, &g_lld_lock.status))
+ break;
+
+ loop_cnt++;
+
+ if (loop_cnt % SPFC_MAX_LOOP_TIMES == 0)
+ pr_warn("[warn]Wait for lld node change complete for %us",
+ loop_cnt / UNF_S_TO_MS);
+
+ usleep_range(SPFC_MIN_TIME_IN_USECS, SPFC_MAX_TIME_IN_USECS);
+ }
+
+ if (loop_cnt == WAIT_LLD_DEV_NODE_CHANGED)
+ pr_warn("[warn]Wait for lld node change complete timeout when try to get lld lock");
+
+ loop_cnt = 0;
+ while (loop_cnt < WAIT_LLD_DEV_REF_CNT_EMPTY) {
+ if (!atomic_read(&g_lld_lock.dev_ref_cnt))
+ break;
+
+ loop_cnt++;
+
+ if (loop_cnt % SPFC_MAX_LOOP_TIMES == 0)
+ pr_warn("[warn]Wait for lld dev unused for %us, reference count: %d",
+ loop_cnt / UNF_S_TO_MS, atomic_read(&g_lld_lock.dev_ref_cnt));
+
+ usleep_range(SPFC_MIN_TIME_IN_USECS, SPFC_MAX_TIME_IN_USECS);
+ }
+
+ if (loop_cnt == WAIT_LLD_DEV_REF_CNT_EMPTY)
+ pr_warn("[warn]Wait for lld dev unused timeout");
+
+ mutex_unlock(&g_lld_lock.lld_mutex);
+}
+
+static void lld_unlock_chip_node(void)
+{
+ clear_bit(SPFC_NODE_CHANGE, &g_lld_lock.status);
+}
+
+void lld_hold(void)
+{
+ u32 loop_cnt = 0;
+
+	/* ensure that no chip node is currently being changed */
+ mutex_lock(&g_lld_lock.lld_mutex);
+
+ while (loop_cnt < WAIT_LLD_DEV_HOLD_TIMEOUT) {
+ if (!test_bit(SPFC_NODE_CHANGE, &g_lld_lock.status))
+ break;
+
+ loop_cnt++;
+
+ if (loop_cnt % SPFC_MAX_LOOP_TIMES == 0)
+ pr_warn("[warn]Wait lld node change complete for %u",
+ loop_cnt / UNF_S_TO_MS);
+
+ usleep_range(SPFC_MIN_TIME_IN_USECS, SPFC_MAX_TIME_IN_USECS);
+ }
+
+ if (loop_cnt == WAIT_LLD_DEV_HOLD_TIMEOUT)
+		pr_warn("[warn]Wait lld node change complete timeout when trying to hold lld dev %u",
+ loop_cnt / UNF_S_TO_MS);
+
+ atomic_inc(&g_lld_lock.dev_ref_cnt);
+
+ mutex_unlock(&g_lld_lock.lld_mutex);
+}
+
+void lld_put(void)
+{
+ atomic_dec(&g_lld_lock.dev_ref_cnt);
+}
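
lld_hold()/lld_put() bracket any access that must not race with a chip node being added or removed, while lld_lock_chip_node()/lld_unlock_chip_node() are taken by the add/remove path itself and wait for the reference count to drain. The intended usage pattern, as spfc_register_uld() below follows it, looks roughly like this (sketch only, not part of the patch):

    lld_hold();   /* blocks while a chip node change is in flight */
    /* ... read or update g_uld_info / walk g_spfc_chip_list safely ... */
    lld_put();    /* drop the reference taken by lld_hold() */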
+
+static void spfc_lld_lock_init(void)
+{
+ mutex_init(&g_lld_lock.lld_mutex);
+ atomic_set(&g_lld_lock.dev_ref_cnt, 0);
+}
+
+static void spfc_realease_cmo_op_handle(void)
+{
+ memset(&spfc_cm_op_handle, 0, sizeof(struct unf_cm_handle_op));
+}
+
+static void spfc_check_module_para(void)
+{
+ if (spfc_dif_enable) {
+ dif_sgl_mode = true;
+ spfc_dif_type = SHOST_DIF_TYPE1_PROTECTION | SHOST_DIX_TYPE1_PROTECTION;
+ dix_flag = 1;
+ }
+
+ if (dif_sgl_mode != 0)
+ dif_sgl_mode = 1;
+}
+
+void spfc_event_process(void *adapter, struct sphw_event_info *event)
+{
+ struct spfc_pcidev *dev = adapter;
+
+ if (test_and_set_bit(SERVICE_T_FC, &dev->state)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[WARN]Event: fc is in detach");
+ return;
+ }
+
+ if (g_uld_info[SERVICE_T_FC].event)
+ g_uld_info[SERVICE_T_FC].event(&dev->lld_dev, dev->uld_dev[SERVICE_T_FC], event);
+
+ clear_bit(SERVICE_T_FC, &dev->state);
+}
+
+int spfc_stateful_init(void *hwdev)
+{
+ struct sphw_hwdev *dev = hwdev;
+ int stateful_en;
+ int err;
+
+ if (!dev)
+ return -EINVAL;
+
+ if (dev->statufull_ref_cnt++)
+ return 0;
+
+ stateful_en = IS_FT_TYPE(dev) | IS_RDMA_TYPE(dev);
+ if (stateful_en && SPHW_IS_PPF(dev)) {
+ err = sphw_ppf_ext_db_init(dev);
+ if (err)
+ goto out;
+ }
+
+ err = cqm3_init(dev);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Failed to init cqm, err: %d\n", err);
+ goto init_cqm_err;
+ }
+
+	sdk_info(dev->dev_hdl, "Initialize stateful resource success\n");
+
+ return 0;
+
+init_cqm_err:
+ if (stateful_en)
+ sphw_ppf_ext_db_deinit(dev);
+
+out:
+ dev->statufull_ref_cnt--;
+
+ return err;
+}
+
+void spfc_stateful_deinit(void *hwdev)
+{
+ struct sphw_hwdev *dev = hwdev;
+ u32 stateful_en;
+
+ if (!dev || !dev->statufull_ref_cnt)
+ return;
+
+ if (--dev->statufull_ref_cnt)
+ return;
+
+ cqm3_uninit(hwdev);
+
+ stateful_en = IS_FT_TYPE(dev) | IS_RDMA_TYPE(dev);
+ if (stateful_en)
+ sphw_ppf_ext_db_deinit(hwdev);
+
+	sdk_info(dev->dev_hdl, "Clear stateful resource success\n");
+}
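
spfc_stateful_init()/spfc_stateful_deinit() are reference counted through statufull_ref_cnt: only the first init call sets up the extended doorbell and CQM resources, later calls just bump the counter, and only the last matching deinit tears everything down. A hedged usage sketch, where hwdev stands for any valid sphw hardware device handle:

    if (spfc_stateful_init(hwdev))   /* first caller does the real set-up */
            return -EFAULT;
    /* ... use CQM / extended doorbell resources ... */
    spfc_stateful_deinit(hwdev);     /* last matching call frees them again */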
+
+static int attach_uld(struct spfc_pcidev *dev, struct spfc_uld_info *uld_info)
+{
+ void *uld_dev = NULL;
+ int err;
+
+ mutex_lock(&dev->pdev_mutex);
+ if (dev->uld_dev[SERVICE_T_FC]) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]fc driver has attached to pcie device");
+ err = 0;
+ goto out_unlock;
+ }
+
+ err = spfc_stateful_init(dev->hwdev);
+ if (err) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "[err]Failed to initialize stateful resources");
+ goto out_unlock;
+ }
+
+ err = uld_info->probe(&dev->lld_dev, &uld_dev,
+ dev->uld_dev_name[SERVICE_T_FC]);
+ if (err || !uld_dev) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Failed to add object for fc driver to pcie device");
+ goto probe_failed;
+ }
+
+ dev->uld_dev[SERVICE_T_FC] = uld_dev;
+ mutex_unlock(&dev->pdev_mutex);
+
+ return RETURN_OK;
+
+probe_failed:
+ spfc_stateful_deinit(dev->hwdev);
+
+out_unlock:
+ mutex_unlock(&dev->pdev_mutex);
+
+ return err;
+}
+
+static void detach_uld(struct spfc_pcidev *dev)
+{
+ struct spfc_uld_info *uld_info = &g_uld_info[SERVICE_T_FC];
+ u32 cnt = 0;
+
+ mutex_lock(&dev->pdev_mutex);
+ if (!dev->uld_dev[SERVICE_T_FC]) {
+ mutex_unlock(&dev->pdev_mutex);
+ return;
+ }
+
+ while (cnt < SPFC_EVENT_PROCESS_TIMEOUT) {
+ if (!test_and_set_bit(SERVICE_T_FC, &dev->state))
+ break;
+ usleep_range(900, 1000);
+ cnt++;
+ }
+
+ uld_info->remove(&dev->lld_dev, dev->uld_dev[SERVICE_T_FC]);
+ dev->uld_dev[SERVICE_T_FC] = NULL;
+ spfc_stateful_deinit(dev->hwdev);
+ if (cnt < SPFC_EVENT_PROCESS_TIMEOUT)
+ clear_bit(SERVICE_T_FC, &dev->state);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "Detach fc driver from pcie device succeed");
+ mutex_unlock(&dev->pdev_mutex);
+}
+
+int spfc_register_uld(struct spfc_uld_info *uld_info)
+{
+ memset(g_uld_info, 0, sizeof(g_uld_info));
+ spfc_lld_lock_init();
+ mutex_init(&g_device_mutex);
+ mutex_init(&g_pci_init_mutex);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[event]Module Init Success, wait for pci init and probe");
+
+ if (!uld_info || !uld_info->probe || !uld_info->remove) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Invalid information of fc driver to register");
+ return -EINVAL;
+ }
+
+ lld_hold();
+
+ if (g_uld_info[SERVICE_T_FC].probe) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]fc driver has registered");
+ lld_put();
+ return -EINVAL;
+ }
+
+ memcpy(&g_uld_info[SERVICE_T_FC], uld_info, sizeof(*uld_info));
+
+ lld_put();
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[KEVENT]Register spfc driver succeed");
+ return RETURN_OK;
+}
+
+void spfc_unregister_uld(void)
+{
+ struct spfc_uld_info *uld_info = NULL;
+
+ lld_hold();
+ uld_info = &g_uld_info[SERVICE_T_FC];
+ memset(uld_info, 0, sizeof(*uld_info));
+ lld_put();
+}
+
+static int spfc_pci_init(struct pci_dev *pdev)
+{
+ struct spfc_pcidev *pci_adapter = NULL;
+ int err = 0;
+
+ pci_adapter = kzalloc(sizeof(*pci_adapter), GFP_KERNEL);
+ if (!pci_adapter) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Failed to alloc pci device adapter");
+ return -ENOMEM;
+ }
+ pci_adapter->pcidev = pdev;
+ mutex_init(&pci_adapter->pdev_mutex);
+
+ pci_set_drvdata(pdev, pci_adapter);
+
+ err = pci_enable_device(pdev);
+ if (err) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Failed to enable PCI device");
+ goto pci_enable_err;
+ }
+
+ err = pci_request_regions(pdev, SPFC_DRV_NAME);
+ if (err) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Failed to request regions");
+ goto pci_regions_err;
+ }
+
+ pci_enable_pcie_error_reporting(pdev);
+
+ pci_set_master(pdev);
+
+ err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
+ if (err) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Couldn't set 64-bit DMA mask");
+ err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
+ if (err) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_ERR, "[err]Failed to set DMA mask");
+ goto dma_mask_err;
+ }
+ }
+
+ err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
+ if (err) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Couldn't set 64-bit coherent DMA mask");
+ err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
+ if (err) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_ERR,
+ "[err]Failed to set coherent DMA mask");
+			goto dma_consistent_mask_err;
+ }
+ }
+
+ return 0;
+
+dma_consistent_mask_err:
+dma_mask_err:
+ pci_clear_master(pdev);
+ pci_release_regions(pdev);
+
+pci_regions_err:
+ pci_disable_device(pdev);
+
+pci_enable_err:
+ pci_set_drvdata(pdev, NULL);
+ kfree(pci_adapter);
+
+ return err;
+}
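
The two fall-back blocks above first try a 64-bit streaming DMA mask and then a 64-bit coherent mask, dropping to 32 bit in either case. On newer kernels the same intent is usually expressed with dma_set_mask_and_coherent(); a rough sketch, not a drop-in equivalent of the exact fallback order used in the patch:

    if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) &&
        dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
            goto dma_mask_err;   /* neither mask could be set */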
+
+static void spfc_pci_deinit(struct pci_dev *pdev)
+{
+ struct spfc_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ pci_clear_master(pdev);
+ pci_release_regions(pdev);
+ pci_disable_pcie_error_reporting(pdev);
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+ kfree(pci_adapter);
+}
+
+static int alloc_chip_node(struct spfc_pcidev *pci_adapter)
+{
+ struct card_node *chip_node = NULL;
+ unsigned char i;
+ unsigned char bus_number = 0;
+
+ if (!pci_is_root_bus(pci_adapter->pcidev->bus))
+ bus_number = pci_adapter->pcidev->bus->number;
+
+ if (bus_number != 0) {
+ list_for_each_entry(chip_node, &g_spfc_chip_list, node) {
+ if (chip_node->bus_num == bus_number) {
+ pci_adapter->chip_node = chip_node;
+ return 0;
+ }
+ }
+ } else if (pci_adapter->pcidev->device == SPFC_DEV_ID_VF) {
+ list_for_each_entry(chip_node, &g_spfc_chip_list, node) {
+ if (chip_node) {
+ pci_adapter->chip_node = chip_node;
+ return 0;
+ }
+ }
+ }
+
+ for (i = 0; i < MAX_CARD_ID; i++) {
+ if (!test_and_set_bit(i, &card_bit_map))
+ break;
+ }
+
+ if (i == MAX_CARD_ID) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Failed to alloc card id");
+ return -EFAULT;
+ }
+
+ chip_node = kzalloc(sizeof(*chip_node), GFP_KERNEL);
+ if (!chip_node) {
+ clear_bit(i, &card_bit_map);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Failed to alloc chip node");
+ return -ENOMEM;
+ }
+
+ /* bus number */
+ chip_node->bus_num = bus_number;
+
+ snprintf(chip_node->chip_name, IFNAMSIZ, "%s%d", SPFC_CHIP_NAME, i);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[INFO]Add new chip %s to global list succeed",
+ chip_node->chip_name);
+
+ list_add_tail(&chip_node->node, &g_spfc_chip_list);
+
+ INIT_LIST_HEAD(&chip_node->func_list);
+ pci_adapter->chip_node = chip_node;
+
+ return 0;
+}
+
+#ifdef CONFIG_X86
+void cfg_order_reg(struct spfc_pcidev *pci_adapter)
+{
+ u8 cpu_model[] = {0x3c, 0x3f, 0x45, 0x46, 0x3d, 0x47, 0x4f, 0x56};
+ struct cpuinfo_x86 *cpuinfo = NULL;
+ u32 i;
+
+ if (sphw_func_type(pci_adapter->hwdev) == TYPE_VF)
+ return;
+
+ cpuinfo = &cpu_data(0);
+
+ for (i = 0; i < sizeof(cpu_model); i++) {
+ if (cpu_model[i] == cpuinfo->x86_model)
+ sphw_set_pcie_order_cfg(pci_adapter->hwdev);
+ }
+}
+#endif
+
+static int mapping_bar(struct pci_dev *pdev, struct spfc_pcidev *pci_adapter)
+{
+ int cfg_bar;
+
+ cfg_bar = pdev->is_virtfn ? SPFC_VF_PCI_CFG_REG_BAR : SPFC_PF_PCI_CFG_REG_BAR;
+
+ pci_adapter->cfg_reg_base = pci_ioremap_bar(pdev, cfg_bar);
+ if (!pci_adapter->cfg_reg_base) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Failed to map configuration regs");
+ return -ENOMEM;
+ }
+
+ pci_adapter->intr_reg_base = pci_ioremap_bar(pdev, SPFC_PCI_INTR_REG_BAR);
+ if (!pci_adapter->intr_reg_base) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Failed to map interrupt regs");
+ goto map_intr_bar_err;
+ }
+
+ if (!pdev->is_virtfn) {
+ pci_adapter->mgmt_reg_base = pci_ioremap_bar(pdev, SPFC_PCI_MGMT_REG_BAR);
+ if (!pci_adapter->mgmt_reg_base) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_ERR, "Failed to map mgmt regs");
+ goto map_mgmt_bar_err;
+ }
+ }
+
+ pci_adapter->db_base_phy = pci_resource_start(pdev, SPFC_PCI_DB_BAR);
+ pci_adapter->db_dwqe_len = pci_resource_len(pdev, SPFC_PCI_DB_BAR);
+ pci_adapter->db_base = pci_ioremap_bar(pdev, SPFC_PCI_DB_BAR);
+ if (!pci_adapter->db_base) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "Failed to map doorbell regs");
+ goto map_db_err;
+ }
+
+ return 0;
+
+map_db_err:
+ if (!pdev->is_virtfn)
+ iounmap(pci_adapter->mgmt_reg_base);
+
+map_mgmt_bar_err:
+ iounmap(pci_adapter->intr_reg_base);
+
+map_intr_bar_err:
+ iounmap(pci_adapter->cfg_reg_base);
+
+ return -ENOMEM;
+}
+
+static void unmapping_bar(struct spfc_pcidev *pci_adapter)
+{
+ iounmap(pci_adapter->db_base);
+
+ if (!pci_adapter->pcidev->is_virtfn)
+ iounmap(pci_adapter->mgmt_reg_base);
+
+ iounmap(pci_adapter->intr_reg_base);
+ iounmap(pci_adapter->cfg_reg_base);
+}
+
+static int spfc_func_init(struct pci_dev *pdev, struct spfc_pcidev *pci_adapter)
+{
+ struct sphw_init_para init_para = {0};
+ int err;
+
+ init_para.adapter_hdl = pci_adapter;
+ init_para.pcidev_hdl = pdev;
+ init_para.dev_hdl = &pdev->dev;
+ init_para.cfg_reg_base = pci_adapter->cfg_reg_base;
+ init_para.intr_reg_base = pci_adapter->intr_reg_base;
+ init_para.mgmt_reg_base = pci_adapter->mgmt_reg_base;
+ init_para.db_base = pci_adapter->db_base;
+ init_para.db_base_phy = pci_adapter->db_base_phy;
+ init_para.db_dwqe_len = pci_adapter->db_dwqe_len;
+ init_para.hwdev = &pci_adapter->hwdev;
+ init_para.chip_node = pci_adapter->chip_node;
+ err = sphw_init_hwdev(&init_para);
+ if (err) {
+ pci_adapter->hwdev = NULL;
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Failed to initialize hardware device");
+ return -EFAULT;
+ }
+
+ pci_adapter->lld_dev.pdev = pdev;
+ pci_adapter->lld_dev.hwdev = pci_adapter->hwdev;
+
+ sphw_event_register(pci_adapter->hwdev, pci_adapter, spfc_event_process);
+
+ if (sphw_func_type(pci_adapter->hwdev) != TYPE_VF)
+ spfc_sync_time_to_fmw(pci_adapter);
+ lld_lock_chip_node();
+ list_add_tail(&pci_adapter->node, &pci_adapter->chip_node->func_list);
+ lld_unlock_chip_node();
+ err = attach_uld(pci_adapter, &g_uld_info[SERVICE_T_FC]);
+
+ if (err) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "[err]Spfc3 attach uld failed");
+ goto attach_fc_err;
+ }
+
+#ifdef CONFIG_X86
+ cfg_order_reg(pci_adapter);
+#endif
+
+ return 0;
+
+attach_fc_err:
+ lld_lock_chip_node();
+ list_del(&pci_adapter->node);
+ lld_unlock_chip_node();
+ wait_lld_dev_unused(pci_adapter);
+
+ return err;
+}
+
+static void spfc_func_deinit(struct pci_dev *pdev)
+{
+ struct spfc_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ lld_lock_chip_node();
+ list_del(&pci_adapter->node);
+ lld_unlock_chip_node();
+ wait_lld_dev_unused(pci_adapter);
+
+ detach_uld(pci_adapter);
+ sphw_disable_mgmt_msg_report(pci_adapter->hwdev);
+ sphw_flush_mgmt_workq(pci_adapter->hwdev);
+ sphw_event_unregister(pci_adapter->hwdev);
+ sphw_free_hwdev(pci_adapter->hwdev);
+}
+
+static void free_chip_node(struct spfc_pcidev *pci_adapter)
+{
+ struct card_node *chip_node = pci_adapter->chip_node;
+ int id, err;
+
+ if (list_empty(&chip_node->func_list)) {
+ list_del(&chip_node->node);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[INFO]Delete chip %s from global list succeed",
+ chip_node->chip_name);
+ err = sscanf(chip_node->chip_name, SPFC_CHIP_NAME "%d", &id);
+ if (err < 0) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_ERR, "[err]Failed to get spfc id");
+ }
+
+ clear_bit(id, &card_bit_map);
+
+ kfree(chip_node);
+ }
+}
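+
+/*
+ * A chip node is shared by every PCI function of one card and is only freed
+ * once its func_list is empty. The card index is recovered by parsing the
+ * chip name (SPFC_CHIP_NAME followed by the index, matching the snprintf()
+ * in alloc_chip_node()) so its slot in card_bit_map can be cleared.
+ */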
+
+static int spfc_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct spfc_pcidev *pci_adapter = NULL;
+ int err;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[event]Spfc3 Pcie device probe begin");
+
+ mutex_lock(&g_pci_init_mutex);
+ err = spfc_pci_init(pdev);
+ if (err) {
+ mutex_unlock(&g_pci_init_mutex);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "[err]pci init failed, return %d", err);
+ return err;
+ }
+ pci_adapter = pci_get_drvdata(pdev);
+ err = mapping_bar(pdev, pci_adapter);
+ if (err) {
+ mutex_unlock(&g_pci_init_mutex);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Failed to map bar");
+ goto map_bar_failed;
+ }
+ mutex_unlock(&g_pci_init_mutex);
+ pci_adapter->id = *id;
+ lld_dev_cnt_init(pci_adapter);
+
+ /* if chip information of pcie function exist, add the function into chip */
+ lld_lock_chip_node();
+ err = alloc_chip_node(pci_adapter);
+ if (err) {
+ lld_unlock_chip_node();
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Failed to add new chip node to global list");
+ goto alloc_chip_node_fail;
+ }
+
+ lld_unlock_chip_node();
+ err = spfc_func_init(pdev, pci_adapter);
+ if (err) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "[err]spfc func init failed");
+ goto func_init_err;
+ }
+
+ return 0;
+
+func_init_err:
+ lld_lock_chip_node();
+ free_chip_node(pci_adapter);
+ lld_unlock_chip_node();
+
+alloc_chip_node_fail:
+ unmapping_bar(pci_adapter);
+
+map_bar_failed:
+ spfc_pci_deinit(pdev);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Pcie device probe failed");
+ return err;
+}
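+
+/*
+ * Probe order is PCI init -> BAR mapping -> chip node -> function init, and
+ * the error labels unwind in exactly the reverse order; spfc_remove() below
+ * performs the same teardown for a fully probed device.
+ */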
+
+static void spfc_remove(struct pci_dev *pdev)
+{
+ struct spfc_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ if (!pci_adapter)
+ return;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[INFO]Pcie device remove begin");
+ sphw_detect_hw_present(pci_adapter->hwdev);
+ spfc_func_deinit(pdev);
+ lld_lock_chip_node();
+ free_chip_node(pci_adapter);
+ lld_unlock_chip_node();
+ unmapping_bar(pci_adapter);
+ mutex_lock(&g_pci_init_mutex);
+ spfc_pci_deinit(pdev);
+ mutex_unlock(&g_pci_init_mutex);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[INFO]Pcie device removed");
+}
+
+static void spfc_shutdown(struct pci_dev *pdev)
+{
+ struct spfc_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Shutdown device");
+
+ if (pci_adapter)
+ sphw_shutdown_hwdev(pci_adapter->hwdev);
+
+ pci_disable_device(pdev);
+}
+
+static pci_ers_result_t spfc_io_error_detected(struct pci_dev *pdev,
+ pci_channel_state_t state)
+{
+ struct spfc_pcidev *pci_adapter = NULL;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Uncorrectable error detected, log and cleanup error status: 0x%08x",
+ state);
+
+ pci_aer_clear_nonfatal_status(pdev);
+ pci_adapter = pci_get_drvdata(pdev);
+
+ if (pci_adapter)
+ sphw_record_pcie_error(pci_adapter->hwdev);
+
+ return PCI_ERS_RESULT_CAN_RECOVER;
+}
+
+static int unf_global_value_init(void)
+{
+ memset(rx_tx_stat, 0, sizeof(rx_tx_stat));
+ memset(rx_tx_err, 0, sizeof(rx_tx_err));
+ memset(scq_err_stat, 0, sizeof(scq_err_stat));
+ memset(aeq_err_stat, 0, sizeof(aeq_err_stat));
+ memset(dif_err_stat, 0, sizeof(dif_err_stat));
+ memset(link_event_stat, 0, sizeof(link_event_stat));
+ memset(link_reason_stat, 0, sizeof(link_reason_stat));
+ memset(hba_stat, 0, sizeof(hba_stat));
+ memset(&spfc_cm_op_handle, 0, sizeof(struct unf_cm_handle_op));
+ memset(up_err_event_stat, 0, sizeof(up_err_event_stat));
+ memset(mail_box_stat, 0, sizeof(mail_box_stat));
+ memset(spfc_hba, 0, sizeof(spfc_hba));
+
+ spin_lock_init(&probe_spin_lock);
+
+	/* Get COM handlers used by the low level driver */
+ if (unf_get_cm_handle_ops(&spfc_cm_op_handle) != RETURN_OK) {
+ spfc_realease_cmo_op_handle();
+ return RETURN_ERROR_S32;
+ }
+
+ return RETURN_OK;
+}
+
+static const struct pci_device_id spfc_pci_table[] = {
+ {PCI_VDEVICE(RAMAXEL, SPFC_DEV_ID_PF_STD), 0},
+ {0, 0}
+};
+
+MODULE_DEVICE_TABLE(pci, spfc_pci_table);
+
+static struct pci_error_handlers spfc_err_handler = {
+ .error_detected = spfc_io_error_detected,
+};
+
+static struct pci_driver spfc_driver = {.name = SPFC_DRV_NAME,
+ .id_table = spfc_pci_table,
+ .probe = spfc_probe,
+ .remove = spfc_remove,
+ .shutdown = spfc_shutdown,
+ .err_handler = &spfc_err_handler};
+
+static __init int spfc_lld_init(void)
+{
+ if (unf_common_init() != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]UNF_Common_init failed");
+ return RETURN_ERROR_S32;
+ }
+
+ spfc_check_module_para();
+
+ if (unf_global_value_init() != RETURN_OK)
+ return RETURN_ERROR_S32;
+
+ spfc_register_uld(&fc_uld_info);
+ return pci_register_driver(&spfc_driver);
+}
+
+static __exit void spfc_lld_exit(void)
+{
+ pci_unregister_driver(&spfc_driver);
+ spfc_unregister_uld();
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[event]SPFC module removing...");
+
+ spfc_realease_cmo_op_handle();
+
+	/* Unregister the FC COM module (common level) */
+ unf_common_exit();
+}
+
+module_init(spfc_lld_init);
+module_exit(spfc_lld_exit);
+
+MODULE_AUTHOR("Ramaxel Memory Technology, Ltd");
+MODULE_DESCRIPTION(SPFC_DRV_DESC);
+MODULE_VERSION(SPFC_DRV_VERSION);
+MODULE_LICENSE("GPL");
+
+module_param(allowed_probe_num, uint, 0444);
+module_param(dif_sgl_mode, uint, 0444);
+module_param(max_speed, uint, 0444);
+module_param(wqe_page_size, uint, 0444);
+module_param(combo_length, uint, 0444);
+module_param(cos_bit_map, uint, 0444);
+module_param(spfc_dif_enable, uint, 0444);
+MODULE_PARM_DESC(spfc_dif_enable, "enable or disable DIF (1/0), default is 0 (disabled).");
+module_param(link_lose_tmo, uint, 0444);
+MODULE_PARM_DESC(link_lose_tmo, "set the link loss timeout, default is 30s.");
+
diff --git a/drivers/scsi/spfc/hw/spfc_lld.h b/drivers/scsi/spfc/hw/spfc_lld.h
new file mode 100644
index 000000000000..f7b4a5e5ce07
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_lld.h
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_LLD_H
+#define SPFC_LLD_H
+
+#include "sphw_crm.h"
+
+struct spfc_lld_dev {
+ struct pci_dev *pdev;
+ void *hwdev;
+};
+
+struct spfc_uld_info {
+ /* uld_dev: should not return null even the function capability
+	/* uld_dev: must not be returned as NULL, even when the function
+	 * capability does not support the upper-layer driver.
+	 * uld_dev_name: the NIC driver should copy the net device name,
+	 * the FC driver may copy the FC device name, and other
+	 * upper-layer drivers do not need to copy anything.
+ int (*probe)(struct spfc_lld_dev *lld_dev, void **uld_dev,
+ char *uld_dev_name);
+ void (*remove)(struct spfc_lld_dev *lld_dev, void *uld_dev);
+ int (*suspend)(struct spfc_lld_dev *lld_dev, void *uld_dev,
+ pm_message_t state);
+ int (*resume)(struct spfc_lld_dev *lld_dev, void *uld_dev);
+ void (*event)(struct spfc_lld_dev *lld_dev, void *uld_dev,
+ struct sphw_event_info *event);
+ int (*ioctl)(void *uld_dev, u32 cmd, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+};
+
+/* Per-PCI-device private structure */
+struct spfc_pcidev {
+ struct pci_dev *pcidev;
+ void *hwdev;
+ struct card_node *chip_node;
+ struct spfc_lld_dev lld_dev;
+ /* such as fc_dev */
+ void *uld_dev[SERVICE_T_MAX];
+ /* Record the service object name */
+ char uld_dev_name[SERVICE_T_MAX][IFNAMSIZ];
+	/* List node used by the driver to link all function
+	 * devices of this chip together
+ */
+ struct list_head node;
+ void __iomem *cfg_reg_base;
+ void __iomem *intr_reg_base;
+ void __iomem *mgmt_reg_base;
+ u64 db_dwqe_len;
+ u64 db_base_phy;
+ void __iomem *db_base;
+ /* lock for attach/detach uld */
+ struct mutex pdev_mutex;
+	/* set while the ULD driver is processing an event */
+ unsigned long state;
+ struct pci_device_id id;
+ atomic_t ref_cnt;
+};
+
+enum spfc_lld_status {
+ SPFC_NODE_CHANGE = BIT(0),
+};
+
+struct spfc_lld_lock {
+ /* lock for chip list */
+ struct mutex lld_mutex;
+ unsigned long status;
+ atomic_t dev_ref_cnt;
+};
+
+#ifndef MAX_SIZE
+#define MAX_SIZE (16)
+#endif
+
+#endif
diff --git a/drivers/scsi/spfc/hw/spfc_module.h b/drivers/scsi/spfc/hw/spfc_module.h
new file mode 100644
index 000000000000..153d59955339
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_module.h
@@ -0,0 +1,297 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_MODULE_H
+#define SPFC_MODULE_H
+#include "unf_type.h"
+#include "unf_log.h"
+#include "unf_common.h"
+#include "spfc_utils.h"
+#include "spfc_hba.h"
+
+#define SPFC_FT_ENABLE (1)
+#define SPFC_FC_DISABLE (0)
+
+#define SPFC_P2P_DIRECT (0)
+#define SPFC_P2P_FABRIC (1)
+#define SPFC_LOOP (2)
+#define SPFC_ATUOSPEED (1)
+#define SPFC_FIXEDSPEED (0)
+#define SPFC_AUTOTOPO (0)
+#define SPFC_P2PTOPO (0x2)
+#define SPFC_LOOPTOPO (0x1)
+#define SPFC_SPEED_2G (0x2)
+#define SPFC_SPEED_4G (0x4)
+#define SPFC_SPEED_8G (0x8)
+#define SPFC_SPEED_16G (0x10)
+#define SPFC_SPEED_32G (0x20)
+
+#define SPFC_MAX_PORT_NUM SPFC_MAX_PROBE_PORT_NUM
+#define SPFC_MAX_PORT_TASK_TYPE_STAT_NUM (128)
+#define SPFC_MAX_LINK_EVENT_CNT (4)
+#define SPFC_MAX_LINK_REASON_CNT (256)
+
+#define SPFC_MML_LOGCTRO_NUM (14)
+
+#define WWN_SIZE 8 /* Size of WWPN, WWN & WWNN */
+
+/*
+ * Define the data type
+ */
+struct spfc_log_ctrl {
+ char *log_option;
+ u32 state;
+};
+
+/*
+ * Declare the global variables.
+ */
+extern struct unf_cm_handle_op spfc_cm_op_handle;
+extern struct spfc_uld_info fc_uld_info;
+extern u32 allowed_probe_num;
+extern u32 max_speed;
+extern u32 accum_db_num;
+extern u32 wqe_page_size;
+extern u32 dif_type;
+extern u32 wqe_pre_load;
+extern u32 combo_length;
+extern u32 cos_bit_map;
+extern u32 exit_count;
+extern u32 exit_stride;
+extern u32 exit_base;
+
+extern atomic64_t rx_tx_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+extern atomic64_t rx_tx_err[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+extern atomic64_t scq_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+extern atomic64_t aeq_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+extern atomic64_t dif_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+extern atomic64_t mail_box_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+extern atomic64_t com_up_event_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+extern u64 link_event_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_LINK_EVENT_CNT];
+extern u64 link_reason_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_LINK_REASON_CNT];
+extern atomic64_t up_err_event_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
+extern u64 hba_stat[SPFC_MAX_PORT_NUM][SPFC_HBA_STAT_BUTT];
+#define SPFC_LINK_EVENT_STAT(hba, link_ent) \
+ (link_event_stat[(hba)->probe_index][link_ent]++)
+#define SPFC_LINK_REASON_STAT(hba, link_rsn) \
+ (link_reason_stat[(hba)->probe_index][link_rsn]++)
+#define SPFC_HBA_STAT(hba, hba_stat_type) \
+ (hba_stat[(hba)->probe_index][hba_stat_type]++)
+
+#define SPFC_UP_ERR_EVENT_STAT(hba, err_type) \
+ (atomic64_inc(&up_err_event_stat[(hba)->probe_index][err_type]))
+#define SPFC_UP_ERR_EVENT_STAT_READ(probe_index, io_type) \
+ (atomic64_read(&up_err_event_stat[probe_index][io_type]))
+#define SPFC_DIF_ERR_STAT(hba, dif_err) \
+ (atomic64_inc(&dif_err_stat[(hba)->probe_index][dif_err]))
+#define SPFC_DIF_ERR_STAT_READ(probe_index, dif_err) \
+ (atomic64_read(&dif_err_stat[probe_index][dif_err]))
+
+#define SPFC_IO_STAT(hba, io_type) \
+ (atomic64_inc(&rx_tx_stat[(hba)->probe_index][io_type]))
+#define SPFC_IO_STAT_READ(probe_index, io_type) \
+ (atomic64_read(&rx_tx_stat[probe_index][io_type]))
+
+#define SPFC_ERR_IO_STAT(hba, io_type) \
+ (atomic64_inc(&rx_tx_err[(hba)->probe_index][io_type]))
+#define SPFC_ERR_IO_STAT_READ(probe_index, io_type) \
+ (atomic64_read(&rx_tx_err[probe_index][io_type]))
+
+#define SPFC_SCQ_ERR_TYPE_STAT(hba, err_type) \
+ (atomic64_inc(&scq_err_stat[(hba)->probe_index][err_type]))
+#define SPFC_SCQ_ERR_TYPE_STAT_READ(probe_index, io_type) \
+ (atomic64_read(&scq_err_stat[probe_index][io_type]))
+#define SPFC_AEQ_ERR_TYPE_STAT(hba, err_type) \
+ (atomic64_inc(&aeq_err_stat[(hba)->probe_index][err_type]))
+#define SPFC_AEQ_ERR_TYPE_STAT_READ(probe_index, io_type) \
+ (atomic64_read(&aeq_err_stat[probe_index][io_type]))
+
+#define SPFC_MAILBOX_STAT(hba, io_type) \
+ (atomic64_inc(&mail_box_stat[(hba)->probe_index][io_type]))
+#define SPFC_MAILBOX_STAT_READ(probe_index, io_type) \
+ (atomic64_read(&mail_box_stat[probe_index][io_type]))
+
+#define SPFC_COM_UP_ERR_EVENT_STAT(hba, err_type) \
+ (atomic64_inc( \
+ &com_up_event_err_stat[(hba)->probe_index][err_type]))
+#define SPFC_COM_UP_ERR_EVENT_STAT_READ(probe_index, err_type) \
+ (atomic64_read(&com_up_event_err_stat[probe_index][err_type]))
+
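+/*
+ * The UNF_LOWLEVEL_* / UNF_CM_* wrappers below all follow one pattern: if
+ * the corresponding callback has been registered in spfc_cm_op_handle
+ * (filled by unf_get_cm_handle_ops() at module init), it is invoked;
+ * otherwise the macro yields UNF_RETURN_ERROR (or NULL for pointer results)
+ * so the caller can fail gracefully when the common FC layer is not wired up.
+ */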
+#define UNF_LOWLEVEL_ALLOC_LPORT(lport, fc_port, low_level) \
+ do { \
+ if (spfc_cm_op_handle.unf_alloc_local_port) { \
+ lport = spfc_cm_op_handle.unf_alloc_local_port( \
+ (fc_port), (low_level)); \
+ } else { \
+ lport = NULL; \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_RECEIVE_LS_GS_PKG(ret, fc_port, pkg) \
+ do { \
+ if (spfc_cm_op_handle.unf_receive_ls_gs_pkg) { \
+ ret = spfc_cm_op_handle.unf_receive_ls_gs_pkg( \
+ (fc_port), (pkg)); \
+ } else { \
+ ret = UNF_RETURN_ERROR; \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_SEND_ELS_DONE(ret, fc_port, pkg) \
+ do { \
+ if (spfc_cm_op_handle.unf_send_els_done) { \
+ ret = spfc_cm_op_handle.unf_send_els_done((fc_port), \
+ (pkg)); \
+ } else { \
+ ret = UNF_RETURN_ERROR; \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_GET_CFG_PARMS(ret, section_name, cfg_parm, cfg_value, \
+ item_num) \
+ do { \
+ if (spfc_cm_op_handle.unf_get_cfg_parms) { \
+ ret = (u32)spfc_cm_op_handle.unf_get_cfg_parms( \
+ (section_name), (cfg_parm), (cfg_value), \
+ (item_num)); \
+ } else { \
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN, \
+ "Get config parameter function is NULL."); \
+ ret = UNF_RETURN_ERROR; \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_RELEASE_LOCAL_PORT(ret, lport) \
+ do { \
+ if (unlikely(!spfc_cm_op_handle.unf_release_local_port)) { \
+ ret = UNF_RETURN_ERROR; \
+ } else { \
+ ret = \
+ spfc_cm_op_handle.unf_release_local_port(lport); \
+ } \
+ } while (0)
+
+#define UNF_CM_GET_SGL_ENTRY(ret, pkg, buf, buf_len) \
+ do { \
+ if (unlikely(!spfc_cm_op_handle.unf_cm_get_sgl_entry)) { \
+ ret = UNF_RETURN_ERROR; \
+ } else { \
+ ret = spfc_cm_op_handle.unf_cm_get_sgl_entry( \
+ pkg, buf, buf_len); \
+ } \
+ } while (0)
+
+#define UNF_CM_GET_DIF_SGL_ENTRY(ret, pkg, buf, buf_len) \
+ do { \
+ if (unlikely(!spfc_cm_op_handle.unf_cm_get_dif_sgl_entry)) { \
+ ret = UNF_RETURN_ERROR; \
+ } else { \
+ ret = spfc_cm_op_handle.unf_cm_get_dif_sgl_entry( \
+ pkg, buf, buf_len); \
+ } \
+ } while (0)
+
+#define UNF_GET_SGL_ENTRY(ret, pkg, buf, buf_len, dif_flag) \
+ do { \
+ if (dif_flag) { \
+ UNF_CM_GET_DIF_SGL_ENTRY(ret, pkg, buf, buf_len); \
+ } else { \
+ UNF_CM_GET_SGL_ENTRY(ret, pkg, buf, buf_len); \
+ } \
+ } while (0)
+
+#define UNF_GET_FREE_ESGL_PAGE(ret, lport, pkg) \
+ do { \
+ if (unlikely( \
+ !spfc_cm_op_handle.unf_get_one_free_esgl_page)) { \
+ ret = NULL; \
+ } else { \
+ ret = \
+ spfc_cm_op_handle.unf_get_one_free_esgl_page( \
+ lport, pkg); \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_FCP_CMND_RECEIVED(ret, lport, pkg) \
+ do { \
+ if (unlikely(!spfc_cm_op_handle.unf_process_fcp_cmnd)) { \
+ ret = UNF_RETURN_ERROR; \
+ } else { \
+ ret = spfc_cm_op_handle.unf_process_fcp_cmnd(lport, \
+ pkg); \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_SCSI_COMPLETED(ret, lport, pkg) \
+ do { \
+		if (unlikely(!spfc_cm_op_handle.unf_receive_ini_response)) {  \
+ ret = UNF_RETURN_ERROR; \
+ } else { \
+ ret = spfc_cm_op_handle.unf_receive_ini_response( \
+ lport, pkg); \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_PORT_EVENT(ret, lport, event, input) \
+ do { \
+ if (unlikely(!spfc_cm_op_handle.unf_fc_port_event)) { \
+ ret = UNF_RETURN_ERROR; \
+ } else { \
+ ret = spfc_cm_op_handle.unf_fc_port_event( \
+ lport, event, input); \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_RECEIVE_FC4LS_PKG(ret, fc_port, pkg) \
+ do { \
+ if (spfc_cm_op_handle.unf_receive_fc4ls_pkg) { \
+ ret = spfc_cm_op_handle.unf_receive_fc4ls_pkg( \
+ (fc_port), (pkg)); \
+ } else { \
+ ret = UNF_RETURN_ERROR; \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_SEND_FC4LS_DONE(ret, lport, pkg) \
+ do { \
+ if (spfc_cm_op_handle.unf_send_fc4ls_done) { \
+ ret = spfc_cm_op_handle.unf_send_fc4ls_done((lport), \
+ (pkg)); \
+ } else { \
+ ret = UNF_RETURN_ERROR; \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_RECEIVE_BLS_PKG(ret, lport, pkg) \
+ do { \
+ if (spfc_cm_op_handle.unf_receive_bls_pkg) { \
+ ret = spfc_cm_op_handle.unf_receive_bls_pkg((lport), \
+ (pkg)); \
+ } else { \
+ ret = UNF_RETURN_ERROR; \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_RECEIVE_MARKER_STS(ret, lport, pkg) \
+ do { \
+ if (spfc_cm_op_handle.unf_receive_marker_status) { \
+ ret = spfc_cm_op_handle.unf_receive_marker_status( \
+ (lport), (pkg)); \
+ } else { \
+ ret = UNF_RETURN_ERROR; \
+ } \
+ } while (0)
+
+#define UNF_LOWLEVEL_RECEIVE_ABTS_MARKER_STS(ret, lport, pkg) \
+ do { \
+ if (spfc_cm_op_handle.unf_receive_abts_marker_status) { \
+ ret = \
+ spfc_cm_op_handle.unf_receive_abts_marker_status( \
+ (lport), (pkg)); \
+ } else { \
+ ret = UNF_RETURN_ERROR; \
+ } \
+ } while (0)
+
+#endif
diff --git a/drivers/scsi/spfc/hw/spfc_parent_context.h b/drivers/scsi/spfc/hw/spfc_parent_context.h
new file mode 100644
index 000000000000..dc4baffe5c44
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_parent_context.h
@@ -0,0 +1,269 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_PARENT_CONTEXT_H
+#define SPFC_PARENT_CONTEXT_H
+
+enum fc_parent_status {
+ FC_PARENT_STATUS_INVALID = 0,
+ FC_PARENT_STATUS_NORMAL,
+ FC_PARENT_STATUS_CLOSING
+};
+
+#define SPFC_PARENT_CONTEXT_KEY_ALIGN_SIZE (48)
+
+#define SPFC_PARENT_CONTEXT_TIMER_SIZE (32) /* 24+2*N,N=timer count */
+
+#define FC_CALC_CID(_xid) \
+ (((((_xid) >> 5) & 0x1ff) << 11) | ((((_xid) >> 5) & 0x1ff) << 2) | \
+ (((_xid) >> 3) & 0x3))
+
+#define MAX_PKT_SIZE_PER_DISPATCH (fc_child_ctx_ex->per_xmit_data_size)
+
+/* immediate data DIF info definition in parent context */
+struct immi_dif_info {
+ union {
+ u32 value;
+ struct {
+ u32 app_tag_ctrl : 3; /* DIF/DIX APP TAG Control */
+ u32 ref_tag_mode : 2; /* Bit 0: scenario of the reference tag verify mode */
+ /* Bit 1: scenario of the reference tag insert/replace
+			 * mode 0: fixed; 1: incrementing;
+ */
+ u32 ref_tag_ctrl : 3; /* The DIF/DIX Reference tag control */
+ u32 grd_agm_ini_ctrl : 3;
+ u32 grd_agm_ctrl : 2; /* Bit 0: DIF/DIX guard verify algorithm control */
+ /* Bit 1: DIF/DIX guard replace or insert algorithm control */
+ u32 grd_ctrl : 3; /* The DIF/DIX Guard control */
+ u32 dif_verify_type : 2; /* verify type */
+ /* Check blocks whose reference tag contains 0xFFFF flag */
+ u32 difx_ref_esc : 1;
+ /* Check blocks whose application tag contains 0xFFFF flag */
+ u32 difx_app_esc : 1;
+ u32 rsvd : 8;
+ u32 sct_size : 1; /* Sector size, 1: 4K; 0: 512 */
+ u32 smd_tp : 2;
+ u32 difx_en : 1;
+ } info;
+ } dif_dw3;
+
+ u32 cmp_app_tag : 16;
+ u32 rep_app_tag : 16;
+ /* The ref tag value for verify compare, do not support replace or insert ref tag */
+ u32 cmp_ref_tag;
+ u32 rep_ref_tag;
+
+ u32 rsv1 : 16;
+ u32 cmp_app_tag_msk : 16;
+};
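+
+/*
+ * This block carries the per-I/O T10 DIF/DIX protection settings (guard,
+ * application-tag and reference-tag control, escape handling, sector size)
+ * that presumably get applied by the hardware when immediate data is
+ * transferred with protection information enabled.
+ */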
+
+/* parent context SW section definition: SW(80B) */
+struct spfc_sw_section {
+ u16 scq_num_rcv_cmd;
+ u16 scq_num_max_scqn;
+
+ struct {
+ u32 xid : 13;
+ u32 vport : 7;
+ u32 csctrl : 8;
+ u32 rsvd0 : 4;
+ } sw_ctxt_vport_xid;
+
+ u32 scq_num_scqn_mask : 12;
+ u32 cid : 20; /* ucode init */
+
+ u16 conn_id;
+ u16 immi_rq_page_size;
+
+ u16 immi_taskid_min;
+ u16 immi_taskid_max;
+
+ union {
+ u32 pctxt_val0;
+ struct {
+ u32 srv_type : 5; /* driver init */
+			u32 srr_support : 2; /* sequence retransmission support flag */
+ u32 rsvd1 : 5;
+ u32 port_id : 4; /* driver init */
+ u32 vlan_id : 16; /* driver init */
+ } dw;
+ } sw_ctxt_misc;
+
+ u32 rsvd2;
+ u32 per_xmit_data_size;
+
+ /* RW fields */
+ u32 cmd_scq_gpa_h;
+ u32 cmd_scq_gpa_l;
+	u32 e_d_tov_timer_val; /* E_D_TOV timer value: should be set in ms by the driver */
+	u16 mfs_unaligned_bytes; /* mfs unaligned bytes per 64KB dispatch */
+ u16 tx_mfs; /* remote port max receive fc payload length */
+ u32 xfer_rdy_dis_max_len_remote; /* max data len allowed in xfer_rdy dis scenario */
+ u32 xfer_rdy_dis_max_len_local;
+
+ union {
+ struct {
+ u32 priority : 3; /* vlan priority */
+ u32 rsvd4 : 2;
+ u32 status : 8; /* status of flow */
+ u32 cos : 3; /* doorbell cos value */
+ u32 oq_cos_data : 3; /* esch oq cos for data */
+ u32 oq_cos_cmd : 3; /* esch oq cos for cmd/xferrdy/rsp */
+			/* used for parent context cache consistency judgment, 1: done */
+ u32 flush_done : 1;
+ u32 work_mode : 2; /* 0:Target, 1:Initiator, 2:Target&Initiator */
+ u32 seq_cnt : 1; /* seq_cnt */
+ u32 e_d_tov : 1; /* E_D_TOV resolution */
+ u32 vlan_enable : 1; /* Vlan enable flag */
+ u32 conf_support : 1; /* Response confirm support flag */
+ u32 rec_support : 1; /* REC support flag */
+ u32 write_xfer_rdy : 1; /* WRITE Xfer_Rdy disable or enable */
+ u32 sgl_num : 1; /* Double or single SGL, 1: double; 0: single */
+ } dw;
+ u32 pctxt_val1;
+ } sw_ctxt_config;
+ struct immi_dif_info immi_dif_info; /* immediate data dif control info(20B) */
+};
+
+struct spfc_hw_rsvd_queue {
+ /* bitmap[0]:255-192 */
+ /* bitmap[1]:191-128 */
+ /* bitmap[2]:127-64 */
+ /* bitmap[3]:63-0 */
+ u64 seq_id_bitmap[4];
+ struct {
+ u64 last_req_seq_id : 8;
+ u64 xid : 20;
+ u64 rsvd0 : 36;
+ } wd0;
+};
+
+struct spfc_sq_qinfo {
+ u64 rsvd_0 : 10;
+ u64 pmsn_type : 1; /* 0: get pmsn from queue header; 1: get pmsn from ucode */
+ u64 rsvd_1 : 4;
+ u64 cur_wqe_o : 1; /* should be opposite from loop_o */
+ u64 rsvd_2 : 48;
+
+ u64 cur_sqe_gpa;
+ u64 pmsn_gpa; /* sq's queue header gpa */
+
+ u64 sqe_dmaattr_idx : 6;
+ u64 sq_so_ro : 2;
+ u64 rsvd_3 : 2;
+ u64 ring : 1; /* 0: link; 1: ring */
+ u64 loop_o : 1; /* init to be the first round o-bit */
+ u64 rsvd_4 : 4;
+ u64 zerocopy_dmaattr_idx : 6;
+ u64 zerocopy_so_ro : 2;
+ u64 parity : 8;
+ u64 r : 1;
+ u64 s : 1;
+ u64 enable_256 : 1;
+ u64 rsvd_5 : 23;
+ u64 pcie_template : 6;
+};
+
+struct spfc_cq_qinfo {
+ u64 pcie_template_hi : 3;
+ u64 parity_2 : 1;
+ u64 cur_cqe_gpa : 60;
+
+ u64 pi : 15;
+ u64 pi_o : 1;
+ u64 ci : 15;
+ u64 ci_o : 1;
+	u64 c_eqn_msi_x : 10; /* if init_mode = 2, this is msi/msi-x; otherwise the low 5 bits are c_eqn */
+ u64 parity_1 : 1;
+ u64 ci_type : 1; /* 0: get ci from queue header; 1: get ci from ucode */
+ u64 cq_depth : 3; /* valid when ring = 1 */
+ u64 armq : 1; /* 0: IDLE state; 1: NEXT state */
+ u64 cur_cqe_cnt : 8;
+ u64 cqe_max_cnt : 8;
+
+ u64 cqe_dmaattr_idx : 6;
+ u64 cq_so_ro : 2;
+ u64 init_mode : 2; /* 1: armQ; 2: msi/msi-x; others: rsvd */
+	u64 next_o : 1; /* next page valid o-bit */
+ u64 loop_o : 1; /* init to be the first round o-bit */
+ u64 next_cq_wqe_page_gpa : 52;
+
+ u64 pcie_template_lo : 3;
+ u64 parity_0 : 1;
+ u64 ci_gpa : 60; /* cq's queue header gpa */
+};
+
+struct spfc_scq_qinfo {
+ union {
+ struct {
+ u64 scq_n : 20; /* scq number */
+ u64 sq_min_preld_cache_num : 4;
+ u64 sq_th0_preld_cache_num : 5;
+ u64 sq_th1_preld_cache_num : 5;
+ u64 sq_th2_preld_cache_num : 5;
+ u64 rq_min_preld_cache_num : 4;
+ u64 rq_th0_preld_cache_num : 5;
+ u64 rq_th1_preld_cache_num : 5;
+ u64 rq_th2_preld_cache_num : 5;
+ u64 parity : 6;
+ } info;
+
+ u64 pctxt_val1;
+ } hw_scqc_config;
+};
+
+struct spfc_srq_qinfo {
+ u64 parity : 4;
+ u64 srqc_gpa : 60;
+};
+
+/* here is the layout of service type 12/13 */
+struct spfc_parent_context {
+ u8 key[SPFC_PARENT_CONTEXT_KEY_ALIGN_SIZE];
+ struct spfc_scq_qinfo resp_scq_qinfo;
+ struct spfc_srq_qinfo imm_srq_info;
+ struct spfc_sq_qinfo sq_qinfo;
+ u8 timer_section[SPFC_PARENT_CONTEXT_TIMER_SIZE];
+ struct spfc_hw_rsvd_queue hw_rsvdq;
+ struct spfc_srq_qinfo els_srq_info;
+ struct spfc_sw_section sw_section;
+};
+
+/* here is the layout of service type 13 */
+struct spfc_ssq_parent_context {
+ u8 rsvd0[64];
+ struct spfc_sq_qinfo sq1_qinfo;
+ u8 rsvd1[32];
+ struct spfc_sq_qinfo sq2_qinfo;
+ u8 rsvd2[32];
+ struct spfc_sq_qinfo sq3_qinfo;
+ struct spfc_scq_qinfo sq_pretchinfo;
+ u8 rsvd3[24];
+};
+
+/* FC Key Section */
+struct spfc_fc_key_section {
+ u32 xid_h : 4;
+ u32 key_size : 2;
+ u32 rsvd1 : 1;
+ u32 srv_type : 5;
+ u32 csize : 2;
+ u32 rsvd0 : 17;
+ u32 v : 1;
+
+ u32 tag_fp_h : 4;
+ u32 rsvd2 : 12;
+ u32 xid_l : 16;
+
+ u16 tag_fp_l;
+ u8 smac[6]; /* Source MAC */
+ u8 dmac[6]; /* Dest MAC */
+ u8 sid[3]; /* Source FC ID */
+ u8 did[3]; /* Dest FC ID */
+ u8 svlan[4]; /* Svlan */
+ u8 cvlan[4]; /* Cvlan */
+
+ u32 next_ptr_h;
+};
+
+#endif
diff --git a/drivers/scsi/spfc/hw/spfc_queue.c b/drivers/scsi/spfc/hw/spfc_queue.c
new file mode 100644
index 000000000000..3f73fa26aad1
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_queue.c
@@ -0,0 +1,4857 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "spfc_queue.h"
+#include "unf_log.h"
+#include "unf_lport.h"
+#include "spfc_module.h"
+#include "spfc_utils.h"
+#include "spfc_service.h"
+#include "spfc_chipitf.h"
+#include "spfc_parent_context.h"
+#include "sphw_hw.h"
+#include "sphw_crm.h"
+
+#define SPFC_UCODE_CMD_MODIFY_QUEUE_CONTEXT 0
+
+#define SPFC_DONE_MASK (0x00000001)
+#define SPFC_OWNER_MASK (0x80000000)
+
+#define SPFC_SQ_LINK_PRE (1 << 2)
+
+#define SPFC_SQ_HEADER_ADDR_ALIGN_SIZE (64)
+#define SPFC_SQ_HEADER_ADDR_ALIGN_SIZE_MASK (SPFC_SQ_HEADER_ADDR_ALIGN_SIZE - 1)
+
+#define SPFC_ADDR_64_ALIGN(addr) \
+ (((addr) + (SPFC_SQ_HEADER_ADDR_ALIGN_SIZE_MASK)) & \
+ ~(SPFC_SQ_HEADER_ADDR_ALIGN_SIZE_MASK))
+
+u32 spfc_get_parity_value(u64 *src_data, u32 row, u32 col)
+{
+ u32 i = 0;
+ u32 j = 0;
+ u32 offset = 0;
+ u32 group = 0;
+ u32 bit_offset = 0;
+ u32 bit_val = 0;
+ u32 tmp_val = 0;
+ u32 dest_data = 0;
+
+ for (i = 0; i < row; i++) {
+ for (j = 0; j < col; j++) {
+ offset = (row * j + i);
+ group = offset / (sizeof(src_data[ARRAY_INDEX_0]) * UNF_BITS_PER_BYTE);
+ bit_offset = offset % (sizeof(src_data[ARRAY_INDEX_0]) * UNF_BITS_PER_BYTE);
+ tmp_val = (src_data[group] >> bit_offset) & SPFC_PARITY_MASK;
+
+ if (j == 0) {
+ bit_val = tmp_val;
+ continue;
+ }
+
+ bit_val ^= tmp_val;
+ }
+
+ bit_val = (~bit_val) & SPFC_PARITY_MASK;
+
+ dest_data |= (bit_val << i);
+ }
+
+ return dest_data;
+}
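+
+/*
+ * spfc_get_parity_value() treats src_data as a col x row bit matrix stored
+ * column by column (bit index = row * j + i): for each row i it XORs the
+ * col bits of that row, inverts the result (odd parity) and packs it into
+ * bit i of the return value. The SCQC/SQC context setup below feeds the
+ * queue_bus words through this helper to fill the hardware parity fields.
+ */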
+
+static void spfc_update_producer_info(u16 q_depth, u16 *pus_pi, u16 *pus_owner)
+{
+ u16 current_pi = 0;
+ u16 next_pi = 0;
+ u16 owner = 0;
+
+ current_pi = *pus_pi;
+ next_pi = current_pi + 1;
+
+ if (next_pi < q_depth) {
+ *pus_pi = next_pi;
+ } else {
+		/* PI wraps around */
+		*pus_pi = 0;
+
+		/* owner bit flips */
+ owner = *pus_owner;
+ *pus_owner = !owner;
+ }
+}
+
+static void spfc_update_consumer_info(u16 q_depth, u16 *pus_ci, u16 *pus_owner)
+{
+ u16 current_ci = 0;
+ u16 next_ci = 0;
+ u16 owner = 0;
+
+ current_ci = *pus_ci;
+ next_ci = current_ci + 1;
+
+ if (next_ci < q_depth) {
+ *pus_ci = next_ci;
+ } else {
+		/* CI wraps around */
+		*pus_ci = 0;
+
+		/* owner bit flips */
+ owner = *pus_owner;
+ *pus_owner = !owner;
+ }
+}
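+
+/*
+ * Both helpers above implement the usual owner-bit scheme: the index simply
+ * increments until it reaches the queue depth, and on wrap back to 0 the
+ * owner bit is toggled. spfc_is_cqe_done() later compares the done/owner
+ * bits of an entry against the current owner value to tell entries of this
+ * round apart from stale ones of the previous round.
+ */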
+
+static inline void spfc_update_cq_header(struct ci_record *ci_record, u16 ci,
+ u16 owner)
+{
+ u32 size = 0;
+ struct ci_record record = {0};
+
+ size = sizeof(struct ci_record);
+ memcpy(&record, ci_record, size);
+ spfc_big_to_cpu64(&record, size);
+ record.cmsn = ci + (u16)(owner << SPFC_CQ_HEADER_OWNER_SHIFT);
+ record.dump_cmsn = record.cmsn;
+ spfc_cpu_to_big64(&record, size);
+
+ wmb();
+ memcpy(ci_record, &record, size);
+}
+
+static void spfc_update_srq_header(struct db_record *pmsn_record, u16 pmsn)
+{
+ u32 size = 0;
+ struct db_record record = {0};
+
+ size = sizeof(struct db_record);
+ memcpy(&record, pmsn_record, size);
+ spfc_big_to_cpu64(&record, size);
+ record.pmsn = pmsn;
+ record.dump_pmsn = record.pmsn;
+ spfc_cpu_to_big64(&record, sizeof(struct db_record));
+
+ wmb();
+ memcpy(pmsn_record, &record, size);
+}
+
+static void spfc_set_srq_wqe_owner_be(struct spfc_wqe_ctrl *sqe_ctrl_in_wp,
+ u32 owner)
+{
+ struct spfc_wqe_ctrl_ch wqe_ctrl_ch;
+
+ mb();
+
+ wqe_ctrl_ch.ctrl_ch_val = be32_to_cpu(sqe_ctrl_in_wp->ch.ctrl_ch_val);
+ wqe_ctrl_ch.wd0.owner = owner;
+ sqe_ctrl_in_wp->ch.ctrl_ch_val = cpu_to_be32(wqe_ctrl_ch.ctrl_ch_val);
+
+ mb();
+}
+
+static inline void spfc_set_sq_wqe_owner_be(void *sqe)
+{
+ u32 *sqe_dw = (u32 *)sqe;
+ u32 *e_sqe_dw = (u32 *)((u8 *)sqe + SPFC_EXTEND_WQE_OFFSET);
+
+ /* Ensure that the write of WQE is complete */
+ mb();
+ e_sqe_dw[SPFC_SQE_SECOND_OBIT_DW_POS] |= SPFC_SQE_OBIT_SET_MASK_BE;
+ e_sqe_dw[SPFC_SQE_FIRST_OBIT_DW_POS] |= SPFC_SQE_OBIT_SET_MASK_BE;
+ sqe_dw[SPFC_SQE_SECOND_OBIT_DW_POS] |= SPFC_SQE_OBIT_SET_MASK_BE;
+ sqe_dw[SPFC_SQE_FIRST_OBIT_DW_POS] |= SPFC_SQE_OBIT_SET_MASK_BE;
+ mb();
+}
+
+void spfc_clear_sq_wqe_owner_be(struct spfc_sqe *sqe)
+{
+ u32 *sqe_dw = (u32 *)sqe;
+ u32 *e_sqe_dw = (u32 *)((u8 *)sqe + SPFC_EXTEND_WQE_OFFSET);
+
+ mb();
+ sqe_dw[SPFC_SQE_SECOND_OBIT_DW_POS] &= SPFC_SQE_OBIT_CLEAR_MASK_BE;
+ mb();
+ sqe_dw[SPFC_SQE_FIRST_OBIT_DW_POS] &= SPFC_SQE_OBIT_CLEAR_MASK_BE;
+ e_sqe_dw[SPFC_SQE_SECOND_OBIT_DW_POS] &= SPFC_SQE_OBIT_CLEAR_MASK_BE;
+ e_sqe_dw[SPFC_SQE_FIRST_OBIT_DW_POS] &= SPFC_SQE_OBIT_CLEAR_MASK_BE;
+}
+
+static void spfc_set_direct_wqe_owner_be(void *sqe, u16 owner)
+{
+ if (owner)
+ spfc_set_sq_wqe_owner_be(sqe);
+ else
+ spfc_clear_sq_wqe_owner_be(sqe);
+}
+
+static void spfc_set_srq_link_wqe_owner_be(struct spfc_linkwqe *link_wqe,
+ u32 owner, u16 pmsn)
+{
+ struct spfc_linkwqe local_lw;
+
+ mb();
+ local_lw.val_wd1 = be32_to_cpu(link_wqe->val_wd1);
+ local_lw.wd1.msn = pmsn;
+ local_lw.wd1.dump_msn = (local_lw.wd1.msn & SPFC_LOCAL_LW_WD1_DUMP_MSN_MASK);
+ link_wqe->val_wd1 = cpu_to_be32(local_lw.val_wd1);
+
+ local_lw.val_wd0 = be32_to_cpu(link_wqe->val_wd0);
+ local_lw.wd0.o = owner;
+ link_wqe->val_wd0 = cpu_to_be32(local_lw.val_wd0);
+ mb();
+}
+
+static inline bool spfc_is_scq_link_wqe(struct spfc_scq_info *scq_info)
+{
+ u16 custom_scqe_num = 0;
+
+ custom_scqe_num = scq_info->ci + 1;
+
+	return (custom_scqe_num % scq_info->wqe_num_per_buf == 0) ||
+	       (scq_info->valid_wqe_num == custom_scqe_num);
+}
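+
+/*
+ * The last entry of each SCQ buffer (and the final valid entry of the queue)
+ * is a link WQE rather than a completion; the processing loop uses this
+ * helper to detect that position and simply advances CI past it.
+ */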
+
+static struct spfc_wqe_page *
+spfc_add_tail_wqe_page(struct spfc_parent_ssq_info *ssq)
+{
+ struct spfc_hba_info *hba = NULL;
+ struct spfc_wqe_page *esgl = NULL;
+ struct list_head *free_list_head = NULL;
+ ulong flag = 0;
+
+ hba = (struct spfc_hba_info *)ssq->hba;
+
+ spin_lock_irqsave(&hba->sq_wpg_pool.wpg_pool_lock, flag);
+
+	/* Get a WqePage from hba->sq_wpg_pool.list_free_wpg_pool and add it
+	 * to the tail of ssq->list_linked_list_sq
+ */
+ if (!list_empty(&hba->sq_wpg_pool.list_free_wpg_pool)) {
+ free_list_head = UNF_OS_LIST_NEXT(&hba->sq_wpg_pool.list_free_wpg_pool);
+ list_del(free_list_head);
+ list_add_tail(free_list_head, &ssq->list_linked_list_sq);
+ esgl = list_entry(free_list_head, struct spfc_wqe_page, entry_wpg);
+
+ /* WqePage Pool counter */
+ atomic_inc(&hba->sq_wpg_pool.wpg_in_use);
+ } else {
+ esgl = NULL;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]SQ pool is empty when SQ(0x%x) try to get wqe page",
+ ssq->sqn);
+ SPFC_HBA_STAT(hba, SPFC_STAT_SQ_POOL_EMPTY);
+ }
+
+ spin_unlock_irqrestore(&hba->sq_wpg_pool.wpg_pool_lock, flag);
+
+ return esgl;
+}
+
+static inline struct spfc_sqe *spfc_get_wqe_page_entry(struct spfc_wqe_page *wpg,
+ u32 wqe_offset)
+{
+ struct spfc_sqe *sqe_wpg = NULL;
+
+ sqe_wpg = (struct spfc_sqe *)(wpg->wpg_addr);
+ sqe_wpg += wqe_offset;
+
+ return sqe_wpg;
+}
+
+static void spfc_free_head_wqe_page(struct spfc_parent_ssq_info *ssq)
+{
+ struct spfc_hba_info *hba = NULL;
+ struct spfc_wqe_page *sq_wpg = NULL;
+ struct list_head *entry_head_wqe_page = NULL;
+ ulong flag = 0;
+
+ atomic_dec(&ssq->wqe_page_cnt);
+
+ hba = (struct spfc_hba_info *)ssq->hba;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "Port(0x%x) free wqe page nowpagecnt:%d",
+ hba->port_cfg.port_id,
+ atomic_read(&ssq->wqe_page_cnt));
+ sq_wpg = SPFC_GET_SQ_HEAD(ssq);
+
+ memset((void *)sq_wpg->wpg_addr, WQE_MARKER_0, hba->sq_wpg_pool.wpg_size);
+
+ spin_lock_irqsave(&hba->sq_wpg_pool.wpg_pool_lock, flag);
+ entry_head_wqe_page = &sq_wpg->entry_wpg;
+ list_del(entry_head_wqe_page);
+ list_add_tail(entry_head_wqe_page, &hba->sq_wpg_pool.list_free_wpg_pool);
+
+ /* WqePage Pool counter */
+ atomic_dec(&hba->sq_wpg_pool.wpg_in_use);
+ spin_unlock_irqrestore(&hba->sq_wpg_pool.wpg_pool_lock, flag);
+}
+
+static void spfc_free_link_list_wpg(struct spfc_parent_ssq_info *ssq)
+{
+ ulong flag = 0;
+ struct spfc_hba_info *hba = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct list_head *entry_head_wqe_page = NULL;
+ struct spfc_wqe_page *sq_wpg = NULL;
+
+ hba = (struct spfc_hba_info *)ssq->hba;
+
+ list_for_each_safe(node, next_node, &ssq->list_linked_list_sq) {
+ sq_wpg = list_entry(node, struct spfc_wqe_page, entry_wpg);
+ memset((void *)sq_wpg->wpg_addr, WQE_MARKER_0, hba->sq_wpg_pool.wpg_size);
+
+ spin_lock_irqsave(&hba->sq_wpg_pool.wpg_pool_lock, flag);
+ entry_head_wqe_page = &sq_wpg->entry_wpg;
+ list_del(entry_head_wqe_page);
+ list_add_tail(entry_head_wqe_page, &hba->sq_wpg_pool.list_free_wpg_pool);
+
+ /* WqePage Pool counter */
+ atomic_dec(&ssq->wqe_page_cnt);
+ atomic_dec(&hba->sq_wpg_pool.wpg_in_use);
+
+ spin_unlock_irqrestore(&hba->sq_wpg_pool.wpg_pool_lock, flag);
+ }
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]Port(0x%x) RPort(0x%x) Sq(0x%x) link list destroyed, Sq.WqePageCnt=0x%x, SqWpgPool.wpg_in_use=0x%x",
+ hba->port_cfg.port_id, ssq->sqn, ssq->context_id,
+ atomic_read(&ssq->wqe_page_cnt), atomic_read(&hba->sq_wpg_pool.wpg_in_use));
+}
+
+struct spfc_wqe_page *
+spfc_add_one_wqe_page(struct spfc_parent_ssq_info *ssq)
+{
+ u32 wqe_inx = 0;
+ struct spfc_wqe_page *wqe_page = NULL;
+ struct spfc_sqe *sqe_in_wp = NULL;
+ struct spfc_linkwqe *link_wqe_in_wpg = NULL;
+ struct spfc_linkwqe link_wqe;
+
+ /* Add a new Wqe Page */
+ wqe_page = spfc_add_tail_wqe_page(ssq);
+
+ if (!wqe_page)
+ return NULL;
+
+ for (wqe_inx = 0; wqe_inx <= ssq->wqe_num_per_buf; wqe_inx++) {
+ sqe_in_wp = spfc_get_wqe_page_entry(wqe_page, wqe_inx);
+ sqe_in_wp->ctrl_sl.ch.ctrl_ch_val = 0;
+ sqe_in_wp->ectrl_sl.ch.ctrl_ch_val = 0;
+ }
+
+ /* Set last WqePage as linkwqe */
+ link_wqe_in_wpg = (struct spfc_linkwqe *)spfc_get_wqe_page_entry(wqe_page,
+ ssq->wqe_num_per_buf);
+ link_wqe.val_wd0 = 0;
+ link_wqe.val_wd1 = 0;
+ link_wqe.next_page_addr_hi = (ssq->queue_style == SPFC_QUEUE_RING_STYLE)
+ ? SPFC_MSD(wqe_page->wpg_phy_addr)
+ : 0;
+ link_wqe.next_page_addr_lo = (ssq->queue_style == SPFC_QUEUE_RING_STYLE)
+ ? SPFC_LSD(wqe_page->wpg_phy_addr)
+ : 0;
+ link_wqe.wd0.wf = CQM_WQE_WF_LINK;
+ link_wqe.wd0.ctrlsl = CQM_LINK_WQE_CTRLSL_VALUE;
+ link_wqe.wd0.o = !(ssq->last_pi_owner);
+ link_wqe.wd1.lp = (ssq->queue_style == SPFC_QUEUE_RING_STYLE)
+ ? CQM_LINK_WQE_LP_VALID
+ : CQM_LINK_WQE_LP_INVALID;
+ spfc_cpu_to_big32(&link_wqe, sizeof(struct spfc_linkwqe));
+ memcpy(link_wqe_in_wpg, &link_wqe, sizeof(struct spfc_linkwqe));
+ memcpy((u8 *)link_wqe_in_wpg + SPFC_EXTEND_WQE_OFFSET,
+ &link_wqe, sizeof(struct spfc_linkwqe));
+
+ return wqe_page;
+}
+
+static inline struct spfc_scqe_type *
+spfc_get_scq_entry(struct spfc_scq_info *scq_info)
+{
+ u32 buf_id = 0;
+ u16 buf_offset = 0;
+ u16 ci = 0;
+ struct cqm_buf_list *buf = NULL;
+
+ FC_CHECK_RETURN_VALUE(scq_info, NULL);
+
+ ci = scq_info->ci;
+ buf_id = ci / scq_info->wqe_num_per_buf;
+ buf = &scq_info->cqm_scq_info->q_room_buf_1.buf_list[buf_id];
+ buf_offset = (u16)(ci % scq_info->wqe_num_per_buf);
+
+ return (struct spfc_scqe_type *)(buf->va) + buf_offset;
+}
+
+static inline bool spfc_is_cqe_done(u32 *done, u32 *owner, u16 driver_owner)
+{
+	return ((u16)(!!(*done & SPFC_DONE_MASK)) == driver_owner) &&
+	       ((u16)(!!(*owner & SPFC_OWNER_MASK)) == driver_owner);
+}
+
+u32 spfc_process_scq_cqe_entity(ulong info, u32 proc_cnt)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 index = 0;
+ struct wq_header *queue_header = NULL;
+ struct spfc_scqe_type *scqe = NULL;
+ struct spfc_scqe_type tmp_scqe;
+ struct spfc_scq_info *scq_info = (struct spfc_scq_info *)info;
+
+ FC_CHECK_RETURN_VALUE(scq_info, ret);
+ SPFC_FUNCTION_ENTER;
+
+ queue_header = (struct wq_header *)(void *)(scq_info->cqm_scq_info->q_header_vaddr);
+
+ for (index = 0; index < proc_cnt;) {
+ /* If linked wqe, then update CI */
+ if (spfc_is_scq_link_wqe(scq_info)) {
+ spfc_update_consumer_info(scq_info->valid_wqe_num,
+ &scq_info->ci,
+ &scq_info->ci_owner);
+ spfc_update_cq_header(&queue_header->ci_record,
+ scq_info->ci, scq_info->ci_owner);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_INFO,
+ "[info]Current wqe is a linked wqe");
+ continue;
+ }
+
+ /* Get SCQE and then check obit & donebit whether been set */
+ scqe = spfc_get_scq_entry(scq_info);
+ if (unlikely(!scqe)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Scqe is NULL");
+ break;
+ }
+
+ if (!spfc_is_cqe_done((u32 *)(void *)&scqe->wd0,
+ (u32 *)(void *)&scqe->ch.wd0,
+ scq_info->ci_owner)) {
+ atomic_set(&scq_info->flush_stat, SPFC_QUEUE_FLUSH_DONE);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_INFO, "[info]Now has no valid scqe");
+ break;
+ }
+
+ /* rmb & do memory copy */
+ rmb();
+ memcpy(&tmp_scqe, scqe, sizeof(struct spfc_scqe_type));
+ /* process SCQ entry */
+ ret = spfc_rcv_scq_entry_from_scq(scq_info->hba, (void *)&tmp_scqe,
+ scq_info->queue_id);
+ if (unlikely(ret != RETURN_OK)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]QueueId(0x%x) scqn(0x%x) scqe process error at CI(0x%x)",
+ scq_info->queue_id, scq_info->scqn, scq_info->ci);
+ }
+
+ /* Update Driver's CI & Obit */
+ spfc_update_consumer_info(scq_info->valid_wqe_num,
+ &scq_info->ci, &scq_info->ci_owner);
+ spfc_update_cq_header(&queue_header->ci_record, scq_info->ci,
+ scq_info->ci_owner);
+ index++;
+ }
+
+ /* Re-schedule again if necessary */
+ if (index == proc_cnt)
+ tasklet_schedule(&scq_info->tasklet);
+
+ SPFC_FUNCTION_RETURN;
+
+ return index;
+}
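+
+/*
+ * SCQ processing runs in tasklet context with a per-invocation budget
+ * (proc_cnt). Each valid CQE is copied to a local buffer after the rmb() and
+ * only then handed to spfc_rcv_scq_entry_from_scq(), presumably so the
+ * handler works on a stable snapshot; if the budget is used up, the tasklet
+ * reschedules itself to drain the remainder.
+ */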
+
+void spfc_set_scq_irg_cfg(struct spfc_hba_info *hba, u32 mode, u16 msix_index)
+{
+#define SPFC_POLLING_MODE_ITERRUPT_PENDING_CNT 5
+#define SPFC_POLLING_MODE_ITERRUPT_COALESC_TIMER_CFG 10
+ u8 pending_limt = 0;
+ u8 coalesc_timer_cfg = 0;
+
+ struct interrupt_info info = {0};
+
+ if (mode != SPFC_SCQ_INTR_LOW_LATENCY_MODE) {
+ pending_limt = SPFC_POLLING_MODE_ITERRUPT_PENDING_CNT;
+ coalesc_timer_cfg =
+ SPFC_POLLING_MODE_ITERRUPT_COALESC_TIMER_CFG;
+ }
+
+ memset(&info, 0, sizeof(info));
+ info.interrupt_coalesc_set = 1;
+ info.lli_set = 0;
+ info.pending_limt = pending_limt;
+ info.coalesc_timer_cfg = coalesc_timer_cfg;
+ info.resend_timer_cfg = 0;
+ info.msix_index = msix_index;
+
+ sphw_set_interrupt_cfg(hba->dev_handle, info, SPHW_CHANNEL_FC);
+}
+
+void spfc_process_scq_cqe(ulong info)
+{
+ struct spfc_scq_info *scq_info = (struct spfc_scq_info *)info;
+
+ FC_CHECK_RETURN_VOID(scq_info);
+
+ spfc_process_scq_cqe_entity(info, SPFC_CQE_MAX_PROCESS_NUM_PER_INTR);
+}
+
+irqreturn_t spfc_scq_irq(int irq, void *scq_info)
+{
+ SPFC_FUNCTION_ENTER;
+
+ FC_CHECK_RETURN_VALUE(scq_info, IRQ_NONE);
+
+ tasklet_schedule(&((struct spfc_scq_info *)scq_info)->tasklet);
+
+ SPFC_FUNCTION_RETURN;
+
+ return IRQ_HANDLED;
+}
+
+static u32 spfc_alloc_scq_int(struct spfc_scq_info *scq_info)
+{
+ int ret = UNF_RETURN_ERROR_S32;
+ u16 act_num = 0;
+ struct irq_info irq_info;
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VALUE(scq_info, UNF_RETURN_ERROR);
+
+ /* 1. Alloc & check SCQ IRQ */
+ hba = (struct spfc_hba_info *)(scq_info->hba);
+ ret = sphw_alloc_irqs(hba->dev_handle, SERVICE_T_FC, SPFC_INT_NUM_PER_QUEUE,
+ &irq_info, &act_num);
+ if (ret != RETURN_OK || act_num != SPFC_INT_NUM_PER_QUEUE) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Allocate scq irq failed, return %d", ret);
+ return UNF_RETURN_ERROR;
+ }
+
+ if (irq_info.msix_entry_idx >= SPFC_SCQ_INT_ID_MAX) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SCQ irq id exceed %d, msix_entry_idx %d",
+ SPFC_SCQ_INT_ID_MAX, irq_info.msix_entry_idx);
+ sphw_free_irq(hba->dev_handle, SERVICE_T_FC, irq_info.irq_id);
+ return UNF_RETURN_ERROR;
+ }
+
+ scq_info->irq_id = (u32)(irq_info.irq_id);
+ scq_info->msix_entry_idx = (u16)(irq_info.msix_entry_idx);
+
+ snprintf(scq_info->irq_name, SPFC_IRQ_NAME_MAX, "fc_scq%u_%x_msix%u",
+ scq_info->queue_id, hba->port_cfg.port_id, scq_info->msix_entry_idx);
+
+ /* 2. SCQ IRQ tasklet init */
+ tasklet_init(&scq_info->tasklet, spfc_process_scq_cqe, (ulong)(uintptr_t)scq_info);
+
+ /* 3. Request IRQ for SCQ */
+ ret = request_irq(scq_info->irq_id, spfc_scq_irq, 0, scq_info->irq_name, scq_info);
+
+ sphw_set_msix_state(hba->dev_handle, scq_info->msix_entry_idx, SPHW_MSIX_ENABLE);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Request SCQ irq failed, SCQ Index = %u, return %d",
+ scq_info->queue_id, ret);
+ sphw_free_irq(hba->dev_handle, SERVICE_T_FC, scq_info->irq_id);
+ memset(scq_info->irq_name, 0, SPFC_IRQ_NAME_MAX);
+ scq_info->irq_id = 0;
+ scq_info->msix_entry_idx = 0;
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+static void spfc_free_scq_int(struct spfc_scq_info *scq_info)
+{
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VOID(scq_info);
+
+ hba = (struct spfc_hba_info *)(scq_info->hba);
+ sphw_set_msix_state(hba->dev_handle, scq_info->msix_entry_idx, SPHW_MSIX_DISABLE);
+ free_irq(scq_info->irq_id, scq_info);
+ tasklet_kill(&scq_info->tasklet);
+ sphw_free_irq(hba->dev_handle, SERVICE_T_FC, scq_info->irq_id);
+ memset(scq_info->irq_name, 0, SPFC_IRQ_NAME_MAX);
+ scq_info->irq_id = 0;
+ scq_info->msix_entry_idx = 0;
+}
+
+static void spfc_init_scq_info(struct spfc_hba_info *hba, struct cqm_queue *cqm_scq,
+ u32 queue_id, struct spfc_scq_info **scq_info)
+{
+ FC_CHECK_RETURN_VOID(hba);
+ FC_CHECK_RETURN_VOID(cqm_scq);
+ FC_CHECK_RETURN_VOID(scq_info);
+
+ *scq_info = &hba->scq_info[queue_id];
+ (*scq_info)->queue_id = queue_id;
+ (*scq_info)->scqn = cqm_scq->index;
+ (*scq_info)->hba = (void *)hba;
+
+ (*scq_info)->cqm_scq_info = cqm_scq;
+ (*scq_info)->wqe_num_per_buf =
+ cqm_scq->q_room_buf_1.buf_size / SPFC_SCQE_SIZE;
+ (*scq_info)->wqe_size = SPFC_SCQE_SIZE;
+ (*scq_info)->valid_wqe_num = (SPFC_SCQ_IS_STS(queue_id) ? SPFC_STS_SCQ_DEPTH
+ : SPFC_CMD_SCQ_DEPTH);
+ (*scq_info)->scqc_cq_depth = (SPFC_SCQ_IS_STS(queue_id) ? SPFC_STS_SCQC_CQ_DEPTH
+ : SPFC_CMD_SCQC_CQ_DEPTH);
+ (*scq_info)->scqc_ci_type = SPFC_STS_SCQ_CI_TYPE;
+ (*scq_info)->ci = 0;
+ (*scq_info)->ci_owner = 1;
+}
+
+static void spfc_init_scq_header(struct wq_header *queue_header)
+{
+ FC_CHECK_RETURN_VOID(queue_header);
+
+ memset(queue_header, 0, sizeof(struct wq_header));
+
+ /* Obit default is 1 */
+ queue_header->db_record.pmsn = 1 << UNF_SHIFT_15;
+ queue_header->db_record.dump_pmsn = queue_header->db_record.pmsn;
+ queue_header->ci_record.cmsn = 1 << UNF_SHIFT_15;
+ queue_header->ci_record.dump_cmsn = queue_header->ci_record.cmsn;
+
+ /* Big endian convert */
+ spfc_cpu_to_big64((void *)queue_header, sizeof(struct wq_header));
+}
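+
+/*
+ * The queue header keeps the producer/consumer MSN together with an owner
+ * bit: initializing pmsn/cmsn to 1 << UNF_SHIFT_15 sets that owner bit for
+ * the first round (spfc_update_cq_header() later rewrites cmsn as ci plus
+ * the owner bit shifted by SPFC_CQ_HEADER_OWNER_SHIFT, presumably also bit
+ * 15). The whole header is stored big-endian for the hardware.
+ */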
+
+static void spfc_cfg_scq_ctx(struct spfc_scq_info *scq_info,
+ struct spfc_cq_qinfo *scq_ctx)
+{
+ struct cqm_queue *cqm_scq_info = NULL;
+ struct spfc_queue_info_bus queue_bus;
+ u64 parity = 0;
+
+ FC_CHECK_RETURN_VOID(scq_info);
+
+ cqm_scq_info = scq_info->cqm_scq_info;
+
+ scq_ctx->pcie_template_hi = 0;
+ scq_ctx->cur_cqe_gpa = cqm_scq_info->q_room_buf_1.buf_list->pa >> SPFC_CQE_GPA_SHIFT;
+ scq_ctx->pi = 0;
+ scq_ctx->pi_o = 1;
+ scq_ctx->ci = scq_info->ci;
+ scq_ctx->ci_o = scq_info->ci_owner;
+ scq_ctx->c_eqn_msi_x = scq_info->msix_entry_idx;
+ scq_ctx->ci_type = scq_info->scqc_ci_type;
+ scq_ctx->cq_depth = scq_info->scqc_cq_depth;
+ scq_ctx->armq = SPFC_ARMQ_IDLE;
+ scq_ctx->cur_cqe_cnt = 0;
+ scq_ctx->cqe_max_cnt = 0;
+ scq_ctx->cqe_dmaattr_idx = 0;
+ scq_ctx->cq_so_ro = 0;
+ scq_ctx->init_mode = SPFC_CQ_INT_MODE;
+ scq_ctx->next_o = 1;
+ scq_ctx->loop_o = 1;
+ scq_ctx->next_cq_wqe_page_gpa = cqm_scq_info->q_room_buf_1.buf_list[ARRAY_INDEX_1].pa >>
+ SPFC_NEXT_CQE_GPA_SHIFT;
+ scq_ctx->pcie_template_lo = 0;
+
+ scq_ctx->ci_gpa = (cqm_scq_info->q_header_paddr + offsetof(struct wq_header, ci_record)) >>
+ SPFC_CQE_GPA_SHIFT;
+
+ memset(&queue_bus, 0, sizeof(struct spfc_queue_info_bus));
+ queue_bus.bus[ARRAY_INDEX_0] |= ((u64)(scq_info->scqn & SPFC_SCQN_MASK)); /* bits 20 */
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(scq_ctx->pcie_template_lo)) << UNF_SHIFT_20);
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(scq_ctx->ci_gpa & SPFC_SCQ_CTX_CI_GPA_MASK)) <<
+ UNF_SHIFT_23); /* bits 28 */
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(scq_ctx->cqe_dmaattr_idx)) << UNF_SHIFT_51);
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(scq_ctx->cq_so_ro)) << UNF_SHIFT_57); /* bits 2 */
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(scq_ctx->init_mode)) << UNF_SHIFT_59); /* bits 2 */
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(scq_ctx->c_eqn_msi_x &
+ SPFC_SCQ_CTX_C_EQN_MSI_X_MASK)) << UNF_SHIFT_61);
+ queue_bus.bus[ARRAY_INDEX_1] |= ((u64)(scq_ctx->c_eqn_msi_x >> UNF_SHIFT_3)); /* bits 7 */
+ queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(scq_ctx->ci_type)) << UNF_SHIFT_7); /* bits 1 */
+ queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(scq_ctx->cq_depth)) << UNF_SHIFT_8); /* bits 3 */
+ queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(scq_ctx->cqe_max_cnt)) << UNF_SHIFT_11);
+ queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(scq_ctx->pcie_template_hi)) << UNF_SHIFT_19);
+
+ parity = spfc_get_parity_value(queue_bus.bus, SPFC_SCQC_BUS_ROW, SPFC_SCQC_BUS_COL);
+ scq_ctx->parity_0 = parity & SPFC_PARITY_MASK;
+ scq_ctx->parity_1 = (parity >> UNF_SHIFT_1) & SPFC_PARITY_MASK;
+ scq_ctx->parity_2 = (parity >> UNF_SHIFT_2) & SPFC_PARITY_MASK;
+
+ spfc_cpu_to_big64((void *)scq_ctx, sizeof(struct spfc_cq_qinfo));
+}
+
+static u32 spfc_creat_scqc_via_cmdq_sync(struct spfc_hba_info *hba,
+ struct spfc_cq_qinfo *scqc, u32 scqn)
+{
+#define SPFC_INIT_SCQC_TIMEOUT 3000
+ int ret;
+ u32 covrt_size;
+ struct spfc_cmdqe_creat_scqc init_scqc_cmd;
+ struct sphw_cmd_buf *cmdq_in_buf;
+
+ cmdq_in_buf = sphw_alloc_cmd_buf(hba->dev_handle);
+ if (!cmdq_in_buf) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]cmdq in_cmd_buf alloc failed");
+
+ SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_INIT_SCQC);
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(&init_scqc_cmd, 0, sizeof(init_scqc_cmd));
+ init_scqc_cmd.wd0.task_type = SPFC_TASK_T_INIT_SCQC;
+ init_scqc_cmd.wd1.scqn = SPFC_LSW(scqn);
+ covrt_size = sizeof(init_scqc_cmd) - sizeof(init_scqc_cmd.scqc);
+ spfc_cpu_to_big32(&init_scqc_cmd, covrt_size);
+
+ /* scqc is already big endian */
+ memcpy(init_scqc_cmd.scqc, scqc, sizeof(*scqc));
+ memcpy(cmdq_in_buf->buf, &init_scqc_cmd, sizeof(init_scqc_cmd));
+ cmdq_in_buf->size = sizeof(init_scqc_cmd);
+
+ ret = sphw_cmdq_detail_resp(hba->dev_handle, COMM_MOD_FC, 0,
+ cmdq_in_buf, NULL, NULL,
+ SPFC_INIT_SCQC_TIMEOUT, SPHW_CHANNEL_FC);
+ sphw_free_cmd_buf(hba->dev_handle, cmdq_in_buf);
+ if (ret) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Send creat scqc via cmdq failed, ret=%d",
+ ret);
+
+ SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_INIT_SCQC);
+ return UNF_RETURN_ERROR;
+ }
+
+ SPFC_IO_STAT(hba, SPFC_TASK_T_INIT_SCQC);
+
+ return RETURN_OK;
+}
+
+static u32 spfc_delete_ssqc_via_cmdq_sync(struct spfc_hba_info *hba, u32 xid,
+ u64 context_gpa, u32 entry_count)
+{
+#define SPFC_DELETE_SSQC_TIMEOUT 3000
+ int ret = RETURN_OK;
+ struct spfc_cmdqe_delete_ssqc delete_ssqc_cmd;
+ struct sphw_cmd_buf *cmdq_in_buf = NULL;
+
+ cmdq_in_buf = sphw_alloc_cmd_buf(hba->dev_handle);
+ if (!cmdq_in_buf) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]cmdq in_cmd_buf alloc failed");
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(&delete_ssqc_cmd, 0, sizeof(delete_ssqc_cmd));
+ delete_ssqc_cmd.wd0.task_type = SPFC_TASK_T_CLEAR_SSQ_CONTEXT;
+ delete_ssqc_cmd.wd0.xid = xid;
+ delete_ssqc_cmd.wd0.entry_count = entry_count;
+ delete_ssqc_cmd.wd1.scqn = SPFC_LSW(0);
+ delete_ssqc_cmd.context_gpa_hi = SPFC_HIGH_32_BITS(context_gpa);
+ delete_ssqc_cmd.context_gpa_lo = SPFC_LOW_32_BITS(context_gpa);
+ spfc_cpu_to_big32(&delete_ssqc_cmd, sizeof(delete_ssqc_cmd));
+ memcpy(cmdq_in_buf->buf, &delete_ssqc_cmd, sizeof(delete_ssqc_cmd));
+ cmdq_in_buf->size = sizeof(delete_ssqc_cmd);
+
+ ret = sphw_cmdq_detail_resp(hba->dev_handle, COMM_MOD_FC, 0,
+ cmdq_in_buf, NULL, NULL,
+ SPFC_DELETE_SSQC_TIMEOUT,
+ SPHW_CHANNEL_FC);
+
+ sphw_free_cmd_buf(hba->dev_handle, cmdq_in_buf);
+
+ return ret;
+}
+
+static void spfc_free_ssq_qpc(struct spfc_hba_info *hba, u32 free_sq_num)
+{
+ u32 global_sq_index = 0;
+ u32 qid = 0;
+ struct spfc_parent_shared_queue_info *ssq_info = NULL;
+
+ SPFC_FUNCTION_ENTER;
+ for (global_sq_index = 0; global_sq_index < free_sq_num;) {
+ for (qid = 1; qid <= SPFC_SQ_NUM_PER_QPC; qid++) {
+ ssq_info = &hba->parent_queue_mgr->shared_queue[global_sq_index];
+ if (qid == SPFC_SQ_NUM_PER_QPC ||
+ global_sq_index == free_sq_num - 1) {
+ if (ssq_info->parent_ctx.cqm_parent_ctx_obj) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[INFO]qid 0x%x, global_sq_index 0x%x, free_sq_num 0x%x",
+ qid, global_sq_index, free_sq_num);
+ cqm3_object_delete(&ssq_info->parent_ctx
+ .cqm_parent_ctx_obj->object);
+ ssq_info->parent_ctx.cqm_parent_ctx_obj = NULL;
+ }
+ }
+ global_sq_index++;
+ if (global_sq_index >= free_sq_num)
+ break;
+ }
+ }
+}
+
+void spfc_free_ssq(void *handle, u32 free_sq_num)
+{
+#define SPFC_FREE_SSQ_WAIT_MS 1000
+ u32 global_sq_index = 0;
+ u32 qid = 0;
+ struct spfc_parent_shared_queue_info *ssq_info = NULL;
+ struct spfc_parent_ssq_info *sq_ctrl = NULL;
+ struct cqm_qpc_mpt *prnt_ctx = NULL;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 entry_count = 0;
+ struct spfc_hba_info *hba = NULL;
+
+ SPFC_FUNCTION_ENTER;
+
+ hba = (struct spfc_hba_info *)handle;
+ for (global_sq_index = 0; global_sq_index < free_sq_num;) {
+ for (qid = 1; qid <= SPFC_SQ_NUM_PER_QPC; qid++) {
+ ssq_info = &hba->parent_queue_mgr->shared_queue[global_sq_index];
+ sq_ctrl = &ssq_info->parent_ssq_info;
+ /* Free data cos */
+ spfc_free_link_list_wpg(sq_ctrl);
+ if (sq_ctrl->queue_head_original) {
+ pci_unmap_single(hba->pci_dev,
+ sq_ctrl->queue_hdr_phy_addr_original,
+ sizeof(struct spfc_queue_header) +
+ SPFC_SQ_HEADER_ADDR_ALIGN_SIZE,
+ DMA_BIDIRECTIONAL);
+ kfree(sq_ctrl->queue_head_original);
+ sq_ctrl->queue_head_original = NULL;
+ }
+ if (qid == SPFC_SQ_NUM_PER_QPC || global_sq_index == free_sq_num - 1) {
+ if (ssq_info->parent_ctx.cqm_parent_ctx_obj) {
+ prnt_ctx = ssq_info->parent_ctx.cqm_parent_ctx_obj;
+ entry_count = (qid == SPFC_SQ_NUM_PER_QPC ?
+ SPFC_SQ_NUM_PER_QPC :
+ free_sq_num - global_sq_index);
+ ret = spfc_delete_ssqc_via_cmdq_sync(hba, prnt_ctx->xid,
+ prnt_ctx->paddr,
+ entry_count);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]ucode delete ssq fail, glbindex 0x%x, qid 0x%x, glsqindex 0x%x",
+ global_sq_index, qid, free_sq_num);
+ }
+ }
+ }
+ global_sq_index++;
+ if (global_sq_index >= free_sq_num)
+ break;
+ }
+ }
+
+ msleep(SPFC_FREE_SSQ_WAIT_MS);
+
+ spfc_free_ssq_qpc(hba, free_sq_num);
+}
+
+u32 spfc_creat_ssqc_via_cmdq_sync(struct spfc_hba_info *hba,
+ struct spfc_ssq_parent_context *ssqc,
+ u32 xid, u64 context_gpa)
+{
+#define SPFC_INIT_SSQC_TIMEOUT 3000
+ int ret;
+ u32 covrt_size;
+ struct spfc_cmdqe_creat_ssqc create_ssqc_cmd;
+ struct sphw_cmd_buf *cmdq_in_buf = NULL;
+
+ cmdq_in_buf = sphw_alloc_cmd_buf(hba->dev_handle);
+ if (!cmdq_in_buf) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]cmdq in_cmd_buf alloc failed");
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(&create_ssqc_cmd, 0, sizeof(create_ssqc_cmd));
+ create_ssqc_cmd.wd0.task_type = SPFC_TASK_T_CREATE_SSQ_CONTEXT;
+ create_ssqc_cmd.wd0.xid = xid;
+ create_ssqc_cmd.wd1.scqn = SPFC_LSW(0);
+ create_ssqc_cmd.context_gpa_hi = SPFC_HIGH_32_BITS(context_gpa);
+ create_ssqc_cmd.context_gpa_lo = SPFC_LOW_32_BITS(context_gpa);
+ covrt_size = sizeof(create_ssqc_cmd) - sizeof(create_ssqc_cmd.ssqc);
+ spfc_cpu_to_big32(&create_ssqc_cmd, covrt_size);
+
+	/* ssqc is already big endian */
+ memcpy(create_ssqc_cmd.ssqc, ssqc, sizeof(*ssqc));
+ memcpy(cmdq_in_buf->buf, &create_ssqc_cmd, sizeof(create_ssqc_cmd));
+ cmdq_in_buf->size = sizeof(create_ssqc_cmd);
+ ret = sphw_cmdq_detail_resp(hba->dev_handle, COMM_MOD_FC, 0,
+ cmdq_in_buf, NULL, NULL,
+ SPFC_INIT_SSQC_TIMEOUT, SPHW_CHANNEL_FC);
+ sphw_free_cmd_buf(hba->dev_handle, cmdq_in_buf);
+ if (ret)
+ return UNF_RETURN_ERROR;
+ return RETURN_OK;
+}
+
+void spfc_init_sq_prnt_ctxt_sq_qinfo(struct spfc_sq_qinfo *sq_info,
+ struct spfc_parent_ssq_info *ssq)
+{
+ struct spfc_wqe_page *head_wqe_page = NULL;
+ struct spfc_sq_qinfo *prnt_sq_ctx = NULL;
+ struct spfc_queue_info_bus queue_bus;
+
+ SPFC_FUNCTION_ENTER;
+
+ /* Obtains the Parent Context address */
+ head_wqe_page = SPFC_GET_SQ_HEAD(ssq);
+
+ prnt_sq_ctx = sq_info;
+
+ /* The PMSN is updated by the host driver */
+ prnt_sq_ctx->pmsn_type = SPFC_PMSN_CI_TYPE_FROM_HOST;
+
+	/* Indicates the O bit value of the valid SQEs in the current round of
+	 * the SQ. For a linked-list SQ the value is always 1; 0 is
+	 * invalid.
+ */
+ prnt_sq_ctx->loop_o =
+ SPFC_OWNER_DRIVER_PRODUCT; /* current valid o-bit */
+
+ /* should be opposite from loop_o */
+ prnt_sq_ctx->cur_wqe_o = ~(prnt_sq_ctx->loop_o);
+
+ /* the first sqe's gpa */
+ prnt_sq_ctx->cur_sqe_gpa = head_wqe_page->wpg_phy_addr;
+
+	/* Indicates the GPA of the queue header initialized for the SQ
+	 * in host memory. The value must be 16-byte aligned.
+ */
+ prnt_sq_ctx->pmsn_gpa = ssq->queue_hdr_phy_addr;
+ if (wqe_pre_load != 0)
+ prnt_sq_ctx->pmsn_gpa |= SPFC_SQ_LINK_PRE;
+
+ /* This field is used to fill in the dmaattr_idx field of the ComboDMA.
+ * The default value is 0
+ */
+ prnt_sq_ctx->sqe_dmaattr_idx = SPFC_DMA_ATTR_OFST;
+
+ /* This field is filled using the value of RO_SO in the SGL0 of the
+ * ComboDMA
+ */
+ prnt_sq_ctx->sq_so_ro = SPFC_PCIE_RELAXED_ORDERING;
+
+ prnt_sq_ctx->ring = ssq->queue_style;
+
+ /* This field is used to set the SGL0 field of the Child solicDMA */
+ prnt_sq_ctx->zerocopy_dmaattr_idx = SPFC_DMA_ATTR_OFST;
+
+ prnt_sq_ctx->zerocopy_so_ro = SPFC_PCIE_RELAXED_ORDERING;
+ prnt_sq_ctx->enable_256 = SPFC_256BWQE_ENABLE;
+
+ /* PCIe attribute information */
+ prnt_sq_ctx->pcie_template = SPFC_PCIE_TEMPLATE;
+
+ memset(&queue_bus, 0, sizeof(struct spfc_queue_info_bus));
+ queue_bus.bus[ARRAY_INDEX_0] |= ((u64)(ssq->context_id & SPFC_SSQ_CTX_MASK)); /* bits 20 */
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->sqe_dmaattr_idx)) << UNF_SHIFT_20);
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->sq_so_ro)) << UNF_SHIFT_26);
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->ring)) << UNF_SHIFT_28); /* bits 1 */
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->zerocopy_dmaattr_idx))
+ << UNF_SHIFT_29); /* bits 6 */
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->zerocopy_so_ro)) << UNF_SHIFT_35);
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->pcie_template)) << UNF_SHIFT_37);
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->pmsn_gpa >> UNF_SHIFT_4))
+ << UNF_SHIFT_43); /* bits 21 */
+ queue_bus.bus[ARRAY_INDEX_1] |= ((u64)(prnt_sq_ctx->pmsn_gpa >> UNF_SHIFT_25));
+ queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(prnt_sq_ctx->pmsn_type)) << UNF_SHIFT_39);
+ prnt_sq_ctx->parity = spfc_get_parity_value(queue_bus.bus, SPFC_SQC_BUS_ROW,
+ SPFC_SQC_BUS_COL);
+ spfc_cpu_to_big64(prnt_sq_ctx, sizeof(struct spfc_sq_qinfo));
+
+ SPFC_FUNCTION_RETURN;
+}
+
+u32 spfc_create_ssq(void *handle)
+{
+ u32 ret = RETURN_OK;
+ u32 global_sq_index = 0;
+ u32 qid = 0;
+ struct cqm_qpc_mpt *prnt_ctx = NULL;
+ struct spfc_parent_shared_queue_info *ssq_info = NULL;
+ struct spfc_parent_ssq_info *sq_ctrl = NULL;
+ u32 queue_header_alloc_size = 0;
+ struct spfc_wqe_page *head_wpg = NULL;
+ struct spfc_ssq_parent_context prnt_ctx_info;
+ struct spfc_sq_qinfo *sq_info = NULL;
+ struct spfc_scq_qinfo *psq_pretchinfo = NULL;
+ struct spfc_queue_info_bus queue_bus;
+ struct spfc_fc_key_section *keysection = NULL;
+ struct spfc_hba_info *hba = NULL;
+ dma_addr_t origin_addr;
+
+ FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
+ hba = (struct spfc_hba_info *)handle;
+ for (global_sq_index = 0; global_sq_index < SPFC_MAX_SSQ_NUM;) {
+ qid = 0;
+ prnt_ctx = cqm3_object_qpc_mpt_create(hba->dev_handle, SERVICE_T_FC,
+ CQM_OBJECT_SERVICE_CTX,
+ SPFC_CNTX_SIZE_256B, NULL,
+ CQM_INDEX_INVALID);
+ if (!prnt_ctx) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Create ssq context failed, CQM_INDEX is 0x%x",
+ CQM_INDEX_INVALID);
+ goto ssq_ctx_create_fail;
+ }
+ memset(&prnt_ctx_info, 0, sizeof(prnt_ctx_info));
+ keysection = (struct spfc_fc_key_section *)&prnt_ctx_info;
+ keysection->xid_h = (prnt_ctx->xid >> UNF_SHIFT_16) & SPFC_KEYSECTION_XID_H_MASK;
+ keysection->xid_l = prnt_ctx->xid & SPFC_KEYSECTION_XID_L_MASK;
+ spfc_cpu_to_big32(keysection, sizeof(struct spfc_fc_key_section));
+ for (qid = 0; qid < SPFC_SQ_NUM_PER_QPC; qid++) {
+ sq_info = (struct spfc_sq_qinfo *)((u8 *)(&prnt_ctx_info) + ((qid + 1) *
+ SPFC_SQ_SPACE_OFFSET));
+ ssq_info = &hba->parent_queue_mgr->shared_queue[global_sq_index];
+ ssq_info->parent_ctx.cqm_parent_ctx_obj = prnt_ctx;
+ /* Initialize struct spfc_parent_sq_info */
+ sq_ctrl = &ssq_info->parent_ssq_info;
+ sq_ctrl->hba = (void *)hba;
+ sq_ctrl->context_id = prnt_ctx->xid;
+ sq_ctrl->sq_queue_id = qid + SPFC_SQ_QID_START_PER_QPC;
+ sq_ctrl->cache_id = FC_CALC_CID(prnt_ctx->xid);
+ sq_ctrl->sqn = global_sq_index;
+ sq_ctrl->max_sqe_num = hba->exi_count;
+ /* Reduce one Link Wqe */
+ sq_ctrl->wqe_num_per_buf = hba->sq_wpg_pool.wqe_per_wpg - 1;
+ sq_ctrl->wqe_size = SPFC_SQE_SIZE;
+ sq_ctrl->wqe_offset = 0;
+ sq_ctrl->head_start_cmsn = 0;
+ sq_ctrl->head_end_cmsn = SPFC_GET_WP_END_CMSN(0, sq_ctrl->wqe_num_per_buf);
+ sq_ctrl->last_cmsn = 0;
+			/* Linked List SQ Owner Bit: 1 valid, 0 invalid */
+ sq_ctrl->last_pi_owner = 1;
+ atomic_set(&sq_ctrl->sq_valid, true);
+ sq_ctrl->accum_wqe_cnt = 0;
+ sq_ctrl->service_type = SPFC_SERVICE_TYPE_FC_SQ;
+ sq_ctrl->queue_style = (global_sq_index == SPFC_DIRECTWQE_SQ_INDEX) ?
+ SPFC_QUEUE_RING_STYLE : SPFC_QUEUE_LINK_STYLE;
+ INIT_LIST_HEAD(&sq_ctrl->list_linked_list_sq);
+ atomic_set(&sq_ctrl->wqe_page_cnt, 0);
+ atomic_set(&sq_ctrl->sq_db_cnt, 0);
+ atomic_set(&sq_ctrl->sqe_minus_cqe_cnt, 1);
+ atomic_set(&sq_ctrl->sq_wqe_cnt, 0);
+ atomic_set(&sq_ctrl->sq_cqe_cnt, 0);
+ spin_lock_init(&sq_ctrl->parent_sq_enqueue_lock);
+ memset(sq_ctrl->io_stat, 0, sizeof(sq_ctrl->io_stat));
+
+			/* Allocate and initialize the queue header space.
+			 * 64B alignment is required, so an extra 64B is
+			 * allocated to allow for alignment.
+			 */
+ queue_header_alloc_size = sizeof(struct spfc_queue_header) +
+ SPFC_SQ_HEADER_ADDR_ALIGN_SIZE;
+ sq_ctrl->queue_head_original = kmalloc(queue_header_alloc_size, GFP_ATOMIC);
+ if (!sq_ctrl->queue_head_original) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SQ(0x%x) create SQ queue header failed",
+ global_sq_index);
+ goto ssq_qheader_create_fail;
+ }
+
+ memset((u8 *)sq_ctrl->queue_head_original, 0, queue_header_alloc_size);
+
+ sq_ctrl->queue_hdr_phy_addr_original =
+ pci_map_single(hba->pci_dev, sq_ctrl->queue_head_original,
+ queue_header_alloc_size, DMA_BIDIRECTIONAL);
+ origin_addr = sq_ctrl->queue_hdr_phy_addr_original;
+ if (pci_dma_mapping_error(hba->pci_dev, origin_addr)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]SQ(0x%x) SQ queue header DMA mapping failed",
+ global_sq_index);
+ goto ssq_qheader_dma_map_fail;
+ }
+
+ /* Obtains the 64B alignment address */
+ sq_ctrl->queue_header = (struct spfc_queue_header *)(uintptr_t)
+ SPFC_ADDR_64_ALIGN((u64)((uintptr_t)(sq_ctrl->queue_head_original)));
+ sq_ctrl->queue_hdr_phy_addr = SPFC_ADDR_64_ALIGN(origin_addr);
+
+			/* Each SQ gets one WQE page by default; wqe_page_cnt
+			 * is incremented accordingly.
+			 */
+ head_wpg = spfc_add_one_wqe_page(sq_ctrl);
+ if (!head_wpg) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]SQ(0x%x) create SQ first wqe page failed",
+ global_sq_index);
+ goto ssq_headwpg_create_fail;
+ }
+
+ atomic_inc(&sq_ctrl->wqe_page_cnt);
+ spfc_init_sq_prnt_ctxt_sq_qinfo(sq_info, sq_ctrl);
+ global_sq_index++;
+ if (global_sq_index == SPFC_MAX_SSQ_NUM)
+ break;
+ }
+ psq_pretchinfo = &prnt_ctx_info.sq_pretchinfo;
+ psq_pretchinfo->hw_scqc_config.info.rq_th2_preld_cache_num = wqe_pre_load;
+ psq_pretchinfo->hw_scqc_config.info.rq_th1_preld_cache_num = wqe_pre_load;
+ psq_pretchinfo->hw_scqc_config.info.rq_th0_preld_cache_num = wqe_pre_load;
+ psq_pretchinfo->hw_scqc_config.info.rq_min_preld_cache_num = wqe_pre_load;
+ psq_pretchinfo->hw_scqc_config.info.sq_th2_preld_cache_num = wqe_pre_load;
+ psq_pretchinfo->hw_scqc_config.info.sq_th1_preld_cache_num = wqe_pre_load;
+ psq_pretchinfo->hw_scqc_config.info.sq_th0_preld_cache_num = wqe_pre_load;
+ psq_pretchinfo->hw_scqc_config.info.sq_min_preld_cache_num = wqe_pre_load;
+ psq_pretchinfo->hw_scqc_config.info.scq_n = (u64)0;
+ psq_pretchinfo->hw_scqc_config.info.parity = 0;
+
+ memset(&queue_bus, 0, sizeof(struct spfc_queue_info_bus));
+ queue_bus.bus[ARRAY_INDEX_0] = psq_pretchinfo->hw_scqc_config.pctxt_val1;
+ psq_pretchinfo->hw_scqc_config.info.parity =
+ spfc_get_parity_value(queue_bus.bus, SPFC_HW_SCQC_BUS_ROW,
+ SPFC_HW_SCQC_BUS_COL);
+ spfc_cpu_to_big64(psq_pretchinfo, sizeof(struct spfc_scq_qinfo));
+ ret = spfc_creat_ssqc_via_cmdq_sync(hba, &prnt_ctx_info,
+ prnt_ctx->xid, prnt_ctx->paddr);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]SQ(0x%x) create ssqc failed.",
+ global_sq_index);
+ goto ssq_cmdqsync_fail;
+ }
+ }
+
+ return RETURN_OK;
+
+ssq_headwpg_create_fail:
+ pci_unmap_single(hba->pci_dev, sq_ctrl->queue_hdr_phy_addr_original,
+ queue_header_alloc_size, DMA_BIDIRECTIONAL);
+
+ssq_qheader_dma_map_fail:
+ kfree(sq_ctrl->queue_head_original);
+ sq_ctrl->queue_head_original = NULL;
+
+ssq_qheader_create_fail:
+ cqm3_object_delete(&prnt_ctx->object);
+ ssq_info->parent_ctx.cqm_parent_ctx_obj = NULL;
+ if (qid > 0) {
+ while (qid--) {
+ ssq_info = &hba->parent_queue_mgr->shared_queue[global_sq_index - qid];
+ ssq_info->parent_ctx.cqm_parent_ctx_obj = NULL;
+ }
+ }
+
+ssq_ctx_create_fail:
+ssq_cmdqsync_fail:
+ if (global_sq_index > 0)
+ spfc_free_ssq(hba, global_sq_index);
+
+ return UNF_RETURN_ERROR;
+}
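
The queue-header allocation in spfc_create_ssq() over-allocates by 64 bytes and then rounds the start address up to a 64-byte boundary. The standalone userspace sketch below illustrates why the extra 64 bytes guarantee the aligned header still fits inside the allocation; ALIGN_UP_64 is a hypothetical macro standing in for SPFC_ADDR_64_ALIGN, which is assumed here to round addresses up.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Round an address up to the next 64-byte boundary (assumption about
 * what SPFC_ADDR_64_ALIGN does in the driver).
 */
#define ALIGN_UP_64(addr) (((addr) + 63ULL) & ~63ULL)

struct queue_header_example {
    uint64_t doorbell_record;   /* placeholder fields */
    uint64_t ci_record;
};

int main(void)
{
    /* Over-allocate by 64B so an aligned header always fits. */
    size_t alloc_size = sizeof(struct queue_header_example) + 64;
    void *original = malloc(alloc_size);
    if (!original)
        return 1;

    uintptr_t aligned = ALIGN_UP_64((uintptr_t)original);
    struct queue_header_example *hdr = (struct queue_header_example *)aligned;

    /* hdr always lies inside [original, original + alloc_size) */
    printf("original=%p aligned=%p offset=%zu\n",
           original, (void *)hdr, (size_t)(aligned - (uintptr_t)original));

    free(original);    /* free() must get the original pointer, not hdr */
    return 0;
}

The same rule is why the driver keeps both queue_head_original (for kfree/unmap) and queue_header (the aligned pointer actually handed to hardware).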
+
+static u32 spfc_create_scq(struct spfc_hba_info *hba)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 scq_index = 0;
+ u32 scq_cfg_num = 0;
+ struct cqm_queue *cqm_scq = NULL;
+ void *handle = NULL;
+ struct spfc_scq_info *scq_info = NULL;
+ struct spfc_cq_qinfo cq_qinfo;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ handle = hba->dev_handle;
+ /* Create SCQ by CQM interface */
+ for (scq_index = 0; scq_index < SPFC_TOTAL_SCQ_NUM; scq_index++) {
+		/*
+		 * 1. Create/Allocate SCQ
+		 *
+		 * Notice: SCQ[0, 2, 4 ...]--->CMD SCQ,
+		 * SCQ[1, 3, 5 ...]--->STS SCQ,
+		 * SCQ[SPFC_TOTAL_SCQ_NUM-1]--->Default SCQ
+		 */
+ cqm_scq = cqm3_object_nonrdma_queue_create(handle, SERVICE_T_FC,
+ CQM_OBJECT_NONRDMA_SCQ,
+ SPFC_SCQ_IS_STS(scq_index) ?
+ SPFC_STS_SCQ_DEPTH :
+ SPFC_CMD_SCQ_DEPTH,
+ SPFC_SCQE_SIZE, hba);
+ if (!cqm_scq) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_WARN, "[err]Create scq failed");
+
+ goto free_scq;
+ }
+
+ /* 2. Initialize SCQ (info) */
+ spfc_init_scq_info(hba, cqm_scq, scq_index, &scq_info);
+
+ /* 3. Allocate & Initialize SCQ interrupt */
+ ret = spfc_alloc_scq_int(scq_info);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Allocate scq interrupt failed");
+
+ cqm3_object_delete(&cqm_scq->object);
+ memset(scq_info, 0, sizeof(struct spfc_scq_info));
+ goto free_scq;
+ }
+
+ /* 4. Initialize SCQ queue header */
+ spfc_init_scq_header((struct wq_header *)(void *)cqm_scq->q_header_vaddr);
+
+ /* 5. Initialize & Create SCQ CTX */
+ memset(&cq_qinfo, 0, sizeof(cq_qinfo));
+ spfc_cfg_scq_ctx(scq_info, &cq_qinfo);
+ ret = spfc_creat_scqc_via_cmdq_sync(hba, &cq_qinfo, scq_info->scqn);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Create scq context failed");
+
+ cqm3_object_delete(&cqm_scq->object);
+ memset(scq_info, 0, sizeof(struct spfc_scq_info));
+ goto free_scq;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Create SCQ[%u] Scqn=%u WqeNum=%u WqeSize=%u WqePerBuf=%u CqDepth=%u CiType=%u irq=%u msix=%u",
+ scq_info->queue_id, scq_info->scqn,
+ scq_info->valid_wqe_num, scq_info->wqe_size,
+ scq_info->wqe_num_per_buf, scq_info->scqc_cq_depth,
+ scq_info->scqc_ci_type, scq_info->irq_id,
+ scq_info->msix_entry_idx);
+ }
+
+	/* The last SCQ handles SCQE delivery during the clear-buffer flow */
+ hba->default_scqn = scq_info->scqn;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Default Scqn=%u CqmScqIndex=%u", hba->default_scqn,
+ cqm_scq->index);
+
+ return RETURN_OK;
+
+free_scq:
+ spfc_flush_scq_ctx(hba);
+
+ scq_cfg_num = scq_index;
+ for (scq_index = 0; scq_index < scq_cfg_num; scq_index++) {
+ scq_info = &hba->scq_info[scq_index];
+ spfc_free_scq_int(scq_info);
+ cqm_scq = scq_info->cqm_scq_info;
+ cqm3_object_delete(&cqm_scq->object);
+ memset(scq_info, 0, sizeof(struct spfc_scq_info));
+ }
+
+ return UNF_RETURN_ERROR;
+}
+
+static void spfc_destroy_scq(struct spfc_hba_info *hba)
+{
+ u32 scq_index = 0;
+ struct cqm_queue *cqm_scq = NULL;
+ struct spfc_scq_info *scq_info = NULL;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Start destroy total %d SCQ", SPFC_TOTAL_SCQ_NUM);
+
+ FC_CHECK_RETURN_VOID(hba);
+
+ /* Use CQM to delete SCQ */
+ for (scq_index = 0; scq_index < SPFC_TOTAL_SCQ_NUM; scq_index++) {
+ scq_info = &hba->scq_info[scq_index];
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ALL,
+ "[info]Destroy SCQ%u, Scqn=%u, Irq=%u, msix=%u, name=%s",
+ scq_index, scq_info->scqn, scq_info->irq_id,
+ scq_info->msix_entry_idx, scq_info->irq_name);
+
+ spfc_free_scq_int(scq_info);
+ cqm_scq = scq_info->cqm_scq_info;
+ cqm3_object_delete(&cqm_scq->object);
+ memset(scq_info, 0, sizeof(struct spfc_scq_info));
+ }
+}
+
+static void spfc_init_srq_info(struct spfc_hba_info *hba, struct cqm_queue *cqm_srq,
+ struct spfc_srq_info *srq_info)
+{
+ FC_CHECK_RETURN_VOID(hba);
+ FC_CHECK_RETURN_VOID(cqm_srq);
+ FC_CHECK_RETURN_VOID(srq_info);
+
+ srq_info->hba = (void *)hba;
+
+ srq_info->cqm_srq_info = cqm_srq;
+ srq_info->wqe_num_per_buf = cqm_srq->q_room_buf_1.buf_size / SPFC_SRQE_SIZE - 1;
+ srq_info->wqe_size = SPFC_SRQE_SIZE;
+ srq_info->valid_wqe_num = cqm_srq->valid_wqe_num;
+ srq_info->pi = 0;
+ srq_info->pi_owner = SPFC_SRQ_INIT_LOOP_O;
+ srq_info->pmsn = 0;
+ srq_info->srqn = cqm_srq->index;
+ srq_info->first_rqe_recv_dma = 0;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Init srq info(srq index 0x%x) valid wqe num 0x%x, buffer size 0x%x, wqe num per buf 0x%x",
+ cqm_srq->index, srq_info->valid_wqe_num,
+ cqm_srq->q_room_buf_1.buf_size, srq_info->wqe_num_per_buf);
+}
+
+static void spfc_init_srq_header(struct wq_header *queue_header)
+{
+ FC_CHECK_RETURN_VOID(queue_header);
+
+ memset(queue_header, 0, sizeof(struct wq_header));
+}
+
+/*
+ *Function Name : spfc_get_srq_entry
+ *Function Description: Obtain RQE in SRQ via PI.
+ *Input Parameters : *srq_info,
+ * **linked_rqe,
+ * position
+ *Output Parameters : N/A
+ *Return Type : struct spfc_rqe*
+ */
+static struct spfc_rqe *spfc_get_srq_entry(struct spfc_srq_info *srq_info,
+ struct spfc_rqe **linked_rqe, u16 position)
+{
+ u32 buf_id = 0;
+ u32 wqe_num_per_buf = 0;
+ u16 buf_offset = 0;
+ struct cqm_buf_list *buf = NULL;
+
+ FC_CHECK_RETURN_VALUE(srq_info, NULL);
+
+ wqe_num_per_buf = srq_info->wqe_num_per_buf;
+
+ buf_id = position / wqe_num_per_buf;
+ buf = &srq_info->cqm_srq_info->q_room_buf_1.buf_list[buf_id];
+ buf_offset = position % ((u16)wqe_num_per_buf);
+
+ if (buf_offset + 1 == wqe_num_per_buf)
+ *linked_rqe = (struct spfc_rqe *)(buf->va) + wqe_num_per_buf;
+ else
+ *linked_rqe = NULL;
+
+ return (struct spfc_rqe *)(buf->va) + buf_offset;
+}
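
For clarity, here is a small worked example of the position-to-buffer mapping used by spfc_get_srq_entry() above; the numbers are made up for illustration only.

#include <stdio.h>

int main(void)
{
    unsigned wqe_num_per_buf = 63;   /* e.g. a 64-entry buffer minus one link WQE */
    unsigned position = 130;         /* example producer position */

    unsigned buf_id = position / wqe_num_per_buf;       /* -> 2 */
    unsigned buf_offset = position % wqe_num_per_buf;   /* -> 4 */
    int at_link_slot = (buf_offset + 1 == wqe_num_per_buf);

    /* When at_link_slot is true, the slot following this RQE holds the
     * link WQE chaining to the next buffer, mirroring the driver logic.
     */
    printf("buf_id=%u offset=%u link_wqe_next=%d\n",
           buf_id, buf_offset, at_link_slot);
    return 0;
}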
+
+void spfc_post_els_srq_wqe(struct spfc_srq_info *srq_info, u16 buf_id)
+{
+ struct spfc_rqe *rqe = NULL;
+ struct spfc_rqe tmp_rqe;
+ struct spfc_rqe *linked_rqe = NULL;
+ struct wq_header *wq_header = NULL;
+ struct spfc_drq_buff_entry *buff_entry = NULL;
+
+ FC_CHECK_RETURN_VOID(srq_info);
+ FC_CHECK_RETURN_VOID(buf_id < srq_info->valid_wqe_num);
+
+ buff_entry = srq_info->els_buff_entry_head + buf_id;
+
+ spin_lock(&srq_info->srq_spin_lock);
+
+	/* Obtain the RQE; the link WQE is not included */
+ rqe = spfc_get_srq_entry(srq_info, &linked_rqe, srq_info->pi);
+ if (!rqe) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]post els srq,get srqe failed, valid wqe num 0x%x, pi 0x%x, pmsn 0x%x",
+ srq_info->valid_wqe_num, srq_info->pi,
+ srq_info->pmsn);
+
+ spin_unlock(&srq_info->srq_spin_lock);
+ return;
+ }
+
+ /* Initialize RQE */
+ /* cs section is not used */
+ memset(&tmp_rqe, 0, sizeof(struct spfc_rqe));
+
+	/* The O-bit defaults to invalid and is set valid at the end */
+ spfc_build_srq_wqe_ctrls(&tmp_rqe, !srq_info->pi_owner, srq_info->pmsn + 1);
+
+ tmp_rqe.bds_sl.buf_addr_hi = SPFC_HIGH_32_BITS(buff_entry->buff_dma);
+ tmp_rqe.bds_sl.buf_addr_lo = SPFC_LOW_32_BITS(buff_entry->buff_dma);
+ tmp_rqe.drv_sl.wd0.user_id = buf_id;
+
+ /* convert to big endian */
+ spfc_cpu_to_big32(&tmp_rqe, sizeof(struct spfc_rqe));
+
+ memcpy(rqe, &tmp_rqe, sizeof(struct spfc_rqe));
+
+ /* reset Obit */
+ spfc_set_srq_wqe_owner_be((struct spfc_wqe_ctrl *)(void *)(&rqe->ctrl_sl),
+ srq_info->pi_owner);
+
+ if (linked_rqe) {
+ /* Update Obit in linked WQE */
+ spfc_set_srq_link_wqe_owner_be((struct spfc_linkwqe *)(void *)linked_rqe,
+ srq_info->pi_owner, srq_info->pmsn + 1);
+ }
+
+ /* Update PI and PMSN */
+ spfc_update_producer_info((u16)(srq_info->valid_wqe_num),
+ &srq_info->pi, &srq_info->pi_owner);
+
+	/* pmsn is 16 bits wide; it wraps around automatically once it
+	 * passes the maximum value.
+	 */
+ srq_info->pmsn++;
+
+ /* Update pmsn in queue header */
+ wq_header = (struct wq_header *)(void *)srq_info->cqm_srq_info->q_header_vaddr;
+ spfc_update_srq_header(&wq_header->db_record, srq_info->pmsn);
+
+ spin_unlock(&srq_info->srq_spin_lock);
+}
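
The PI/PMSN update above is delegated to spfc_update_producer_info(), whose internals are not shown in this hunk. The sketch below is a hedged illustration of the usual pattern: a producer index that wraps at the queue depth, an owner (O) bit that toggles on each wrap, and a 16-bit PMSN that wraps by unsigned overflow. It is an assumption about the mechanism, not the driver's actual implementation.

#include <stdint.h>
#include <stdio.h>

/* Assumed behaviour: pi wraps at the queue depth and the owner bit is
 * inverted on every wrap, so hardware can tell old entries from new
 * ones without a separate valid flag.
 */
static void update_producer(uint16_t depth, uint16_t *pi, uint16_t *owner)
{
    (*pi)++;
    if (*pi >= depth) {
        *pi = 0;
        *owner = !(*owner);   /* toggle O-bit on wrap */
    }
}

int main(void)
{
    uint16_t pi = 0, owner = 1, pmsn = 0xFFFE;
    uint16_t depth = 4;

    for (int i = 0; i < 6; i++) {
        update_producer(depth, &pi, &owner);
        pmsn++;   /* 16-bit counter wraps from 0xFFFF to 0x0000 */
        printf("pi=%u owner=%u pmsn=0x%04x\n", pi, owner, pmsn);
    }
    return 0;
}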
+
+/*
+ *Function Name : spfc_cfg_srq_ctx
+ *Function Description: Initialize the CTX of the SRQ that receives the
+ *                      immediate data. The RQE of the SRQ needs to be
+ *                      initialized when the RQE is filled.
+ *Input Parameters    : *srq_info,
+ *                      *srq_ctx,
+ *                      sge_size,
+ *                      rqe_gpa
+ *Output Parameters : N/A
+ *Return Type : void
+ */
+static void spfc_cfg_srq_ctx(struct spfc_srq_info *srq_info,
+ struct spfc_srq_ctx *ctx, u32 sge_size,
+ u64 rqe_gpa)
+{
+ struct spfc_srq_ctx *srq_ctx = NULL;
+ struct cqm_queue *cqm_srq_info = NULL;
+ struct spfc_queue_info_bus queue_bus;
+
+ FC_CHECK_RETURN_VOID(srq_info);
+ FC_CHECK_RETURN_VOID(ctx);
+
+ cqm_srq_info = srq_info->cqm_srq_info;
+ srq_ctx = ctx;
+ srq_ctx->last_rq_pmsn = 0;
+ srq_ctx->cur_rqe_msn = 0;
+ srq_ctx->pcie_template = 0;
+	/* The value of CTX needs to be updated
+	 * when the RQE is configured
+	 */
+ srq_ctx->cur_rqe_gpa = rqe_gpa;
+ srq_ctx->cur_sge_v = 0;
+ srq_ctx->cur_sge_l = 0;
+	/* The information received by the SRQ is reported through the
+	 * SCQ. The interrupt and ArmCQ are disabled.
+	 */
+ srq_ctx->int_mode = 0;
+ srq_ctx->ceqn_msix = 0;
+ srq_ctx->cur_sge_remain_len = 0;
+ srq_ctx->cur_sge_id = 0;
+ srq_ctx->consant_sge_len = sge_size;
+ srq_ctx->cur_wqe_o = 0;
+ srq_ctx->pmsn_type = SPFC_PMSN_CI_TYPE_FROM_HOST;
+ srq_ctx->bdsl = 0;
+ srq_ctx->cr = 0;
+ srq_ctx->csl = 0;
+ srq_ctx->cf = 0;
+ srq_ctx->ctrl_sl = 0;
+ srq_ctx->cur_sge_gpa = 0;
+ srq_ctx->cur_pmsn_gpa = cqm_srq_info->q_header_paddr;
+ srq_ctx->prefetch_max_masn = 0;
+ srq_ctx->cqe_max_cnt = 0;
+ srq_ctx->cur_cqe_cnt = 0;
+ srq_ctx->arm_q = 0;
+ srq_ctx->cq_so_ro = 0;
+ srq_ctx->cqe_dma_attr_idx = 0;
+ srq_ctx->rq_so_ro = 0;
+ srq_ctx->rqe_dma_attr_idx = 0;
+ srq_ctx->loop_o = SPFC_SRQ_INIT_LOOP_O;
+ srq_ctx->ring = SPFC_QUEUE_RING;
+
+ memset(&queue_bus, 0, sizeof(struct spfc_queue_info_bus));
+ queue_bus.bus[ARRAY_INDEX_0] |= ((u64)(cqm_srq_info->q_ctx_paddr >> UNF_SHIFT_4));
+ queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(srq_ctx->rqe_dma_attr_idx &
+ SPFC_SRQ_CTX_rqe_dma_attr_idx_MASK))
+ << UNF_SHIFT_60); /* bits 4 */
+
+ queue_bus.bus[ARRAY_INDEX_1] |= ((u64)(srq_ctx->rqe_dma_attr_idx >> UNF_SHIFT_4));
+ queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(srq_ctx->rq_so_ro)) << UNF_SHIFT_2); /* bits 2 */
+ queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(srq_ctx->cur_pmsn_gpa >> UNF_SHIFT_4))
+ << UNF_SHIFT_4); /* bits 60 */
+
+ queue_bus.bus[ARRAY_INDEX_2] |= ((u64)(srq_ctx->consant_sge_len)); /* bits 17 */
+ queue_bus.bus[ARRAY_INDEX_2] |= (((u64)(srq_ctx->pcie_template)) << UNF_SHIFT_17);
+
+ srq_ctx->parity = spfc_get_parity_value((void *)queue_bus.bus, SPFC_SRQC_BUS_ROW,
+ SPFC_SRQC_BUS_COL);
+
+ spfc_cpu_to_big64((void *)srq_ctx, sizeof(struct spfc_srq_ctx));
+}
+
+static u32 spfc_creat_srqc_via_cmdq_sync(struct spfc_hba_info *hba,
+ struct spfc_srq_ctx *srqc,
+ u64 ctx_gpa)
+{
+#define SPFC_INIT_SRQC_TIMEOUT 3000
+
+ int ret;
+ u32 covrt_size;
+ struct spfc_cmdqe_creat_srqc init_srq_cmd;
+ struct sphw_cmd_buf *cmdq_in_buf;
+
+ cmdq_in_buf = sphw_alloc_cmd_buf(hba->dev_handle);
+ if (!cmdq_in_buf) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]cmdq in_cmd_buf alloc failed");
+
+ SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_INIT_SRQC);
+ return UNF_RETURN_ERROR;
+ }
+
+ memset(&init_srq_cmd, 0, sizeof(init_srq_cmd));
+ init_srq_cmd.wd0.task_type = SPFC_TASK_T_INIT_SRQC;
+ init_srq_cmd.srqc_gpa_h = SPFC_HIGH_32_BITS(ctx_gpa);
+ init_srq_cmd.srqc_gpa_l = SPFC_LOW_32_BITS(ctx_gpa);
+ covrt_size = sizeof(init_srq_cmd) - sizeof(init_srq_cmd.srqc);
+ spfc_cpu_to_big32(&init_srq_cmd, covrt_size);
+
+ /* srqc is already big-endian */
+ memcpy(init_srq_cmd.srqc, srqc, sizeof(*srqc));
+ memcpy(cmdq_in_buf->buf, &init_srq_cmd, sizeof(init_srq_cmd));
+ cmdq_in_buf->size = sizeof(init_srq_cmd);
+
+ ret = sphw_cmdq_detail_resp(hba->dev_handle, COMM_MOD_FC, 0,
+ cmdq_in_buf, NULL, NULL,
+ SPFC_INIT_SRQC_TIMEOUT, SPHW_CHANNEL_FC);
+
+ sphw_free_cmd_buf(hba->dev_handle, cmdq_in_buf);
+
+ if (ret) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+			     "[err]Send create srqc via cmdq failed, ret=%d",
+ ret);
+
+ SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_INIT_SRQC);
+ return UNF_RETURN_ERROR;
+ }
+
+ SPFC_IO_STAT(hba, SPFC_TASK_T_INIT_SRQC);
+
+ return RETURN_OK;
+}
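
spfc_cpu_to_big32() is a driver-internal helper that is not defined in this hunk; from its use above it appears to convert a buffer to big-endian one 32-bit word at a time, leaving any payload that is already big-endian (the srqc here) untouched. A minimal userspace sketch of that pattern, using htonl() as a stand-in, assuming word-wise conversion:

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Convert a buffer in place, one 32-bit word at a time, from CPU order
 * to big-endian -- a hypothetical stand-in for spfc_cpu_to_big32().
 */
static void example_cpu_to_big32(void *buf, size_t size)
{
    uint32_t *word = buf;
    for (size_t i = 0; i < size / sizeof(uint32_t); i++)
        word[i] = htonl(word[i]);
}

struct cmd_header_example {
    uint32_t task_type;
    uint32_t ctx_gpa_hi;
    uint32_t ctx_gpa_lo;
};

int main(void)
{
    struct cmd_header_example hdr = { 0x12, 0xAABBCCDD, 0x00112233 };

    /* Convert only the header words; a payload that is already
     * big-endian (like the srqc above) would be copied untouched.
     */
    example_cpu_to_big32(&hdr, sizeof(hdr));
    printf("task_type on the wire: 0x%08x\n", (unsigned int)hdr.task_type);
    return 0;
}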
+
+static void spfc_init_els_srq_wqe(struct spfc_srq_info *srq_info)
+{
+ u32 rqe_index = 0;
+ struct spfc_drq_buff_entry *buf_entry = NULL;
+
+ FC_CHECK_RETURN_VOID(srq_info);
+
+ for (rqe_index = 0; rqe_index < srq_info->valid_wqe_num - 1; rqe_index++) {
+ buf_entry = srq_info->els_buff_entry_head + rqe_index;
+ spfc_post_els_srq_wqe(srq_info, buf_entry->buff_id);
+ }
+}
+
+static void spfc_free_els_srq_buff(struct spfc_hba_info *hba, u32 srq_valid_wqe)
+{
+ u32 buff_index = 0;
+ struct spfc_srq_info *srq_info = NULL;
+ struct spfc_drq_buff_entry *buff_entry = NULL;
+
+ FC_CHECK_RETURN_VOID(hba);
+
+ srq_info = &hba->els_srq_info;
+
+ if (!srq_info->els_buff_entry_head)
+ return;
+
+ for (buff_index = 0; buff_index < srq_valid_wqe; buff_index++) {
+ buff_entry = &srq_info->els_buff_entry_head[buff_index];
+ buff_entry->buff_addr = NULL;
+ }
+
+ if (srq_info->buf_list.buflist) {
+ for (buff_index = 0; buff_index < srq_info->buf_list.buf_num;
+ buff_index++) {
+ if (srq_info->buf_list.buflist[buff_index].paddr != 0) {
+ pci_unmap_single(hba->pci_dev,
+ srq_info->buf_list.buflist[buff_index].paddr,
+ srq_info->buf_list.buf_size,
+ DMA_FROM_DEVICE);
+ srq_info->buf_list.buflist[buff_index].paddr = 0;
+ }
+ kfree(srq_info->buf_list.buflist[buff_index].vaddr);
+ srq_info->buf_list.buflist[buff_index].vaddr = NULL;
+ }
+
+ kfree(srq_info->buf_list.buflist);
+ srq_info->buf_list.buflist = NULL;
+ }
+
+ kfree(srq_info->els_buff_entry_head);
+ srq_info->els_buff_entry_head = NULL;
+}
+
+static u32 spfc_alloc_els_srq_buff(struct spfc_hba_info *hba, u32 srq_valid_wqe)
+{
+ u32 req_buff_size = 0;
+ u32 buff_index = 0;
+ struct spfc_srq_info *srq_info = NULL;
+ struct spfc_drq_buff_entry *buff_entry = NULL;
+ u32 buf_total_size;
+ u32 buf_num;
+ u32 alloc_idx;
+ u32 cur_buf_idx = 0;
+ u32 cur_buf_offset = 0;
+ u32 buf_cnt_perhugebuf;
+
+ srq_info = &hba->els_srq_info;
+
+	/* Allocate the entry buffer array */
+ req_buff_size = (u32)(srq_valid_wqe * sizeof(struct spfc_drq_buff_entry));
+ srq_info->els_buff_entry_head = (struct spfc_drq_buff_entry *)kmalloc(req_buff_size,
+ GFP_KERNEL);
+ if (!srq_info->els_buff_entry_head) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Allocate ELS Srq receive buffer entries failed");
+
+ return UNF_RETURN_ERROR;
+ }
+ memset(srq_info->els_buff_entry_head, 0, req_buff_size);
+
+ buf_total_size = SPFC_SRQ_ELS_SGE_LEN * srq_valid_wqe;
+
+ srq_info->buf_list.buf_size = buf_total_size > BUF_LIST_PAGE_SIZE
+ ? BUF_LIST_PAGE_SIZE
+ : buf_total_size;
+ buf_cnt_perhugebuf = srq_info->buf_list.buf_size / SPFC_SRQ_ELS_SGE_LEN;
+ buf_num = srq_valid_wqe % buf_cnt_perhugebuf ?
+ srq_valid_wqe / buf_cnt_perhugebuf + 1 :
+ srq_valid_wqe / buf_cnt_perhugebuf;
+ srq_info->buf_list.buflist = (struct buff_list *)kmalloc(buf_num * sizeof(struct buff_list),
+ GFP_KERNEL);
+ srq_info->buf_list.buf_num = buf_num;
+
+ if (!srq_info->buf_list.buflist) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Allocate ELS buf list failed out of memory");
+ goto free_buff;
+ }
+ memset(srq_info->buf_list.buflist, 0, buf_num * sizeof(struct buff_list));
+
+ for (alloc_idx = 0; alloc_idx < buf_num; alloc_idx++) {
+ srq_info->buf_list.buflist[alloc_idx].vaddr = kmalloc(srq_info->buf_list.buf_size,
+ GFP_KERNEL);
+ if (!srq_info->buf_list.buflist[alloc_idx].vaddr)
+ goto free_buff;
+
+ memset(srq_info->buf_list.buflist[alloc_idx].vaddr, 0, srq_info->buf_list.buf_size);
+
+ srq_info->buf_list.buflist[alloc_idx].paddr =
+ pci_map_single(hba->pci_dev, srq_info->buf_list.buflist[alloc_idx].vaddr,
+ srq_info->buf_list.buf_size, DMA_FROM_DEVICE);
+ if (pci_dma_mapping_error(hba->pci_dev,
+ srq_info->buf_list.buflist[alloc_idx].paddr)) {
+ srq_info->buf_list.buflist[alloc_idx].paddr = 0;
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Map els srq buffer failed");
+
+ goto free_buff;
+ }
+ }
+
+	/* Assign receive buffers to the entries of the free list */
+ for (buff_index = 0; buff_index < srq_valid_wqe; buff_index++) {
+ buff_entry = &srq_info->els_buff_entry_head[buff_index];
+ cur_buf_idx = buff_index / buf_cnt_perhugebuf;
+ cur_buf_offset = SPFC_SRQ_ELS_SGE_LEN * (buff_index % buf_cnt_perhugebuf);
+ buff_entry->buff_addr = srq_info->buf_list.buflist[cur_buf_idx].vaddr +
+ cur_buf_offset;
+ buff_entry->buff_dma = srq_info->buf_list.buflist[cur_buf_idx].paddr +
+ cur_buf_offset;
+ buff_entry->buff_id = (u16)buff_index;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[EVENT]Allocate bufnum:%u,buf_total_size:%u", buf_num,
+ buf_total_size);
+
+ return RETURN_OK;
+
+free_buff:
+ spfc_free_els_srq_buff(hba, srq_valid_wqe);
+ return UNF_RETURN_ERROR;
+}
+
+void spfc_send_clear_srq_cmd(struct spfc_hba_info *hba,
+ struct spfc_srq_info *srq_info)
+{
+ union spfc_cmdqe cmdqe;
+ struct cqm_queue *cqm_fcp_srq = NULL;
+ ulong flag = 0;
+
+ memset(&cmdqe, 0, sizeof(union spfc_cmdqe));
+
+ spin_lock_irqsave(&srq_info->srq_spin_lock, flag);
+ cqm_fcp_srq = srq_info->cqm_srq_info;
+ if (!cqm_fcp_srq) {
+ srq_info->state = SPFC_CLEAN_DONE;
+ spin_unlock_irqrestore(&srq_info->srq_spin_lock, flag);
+ return;
+ }
+
+ cmdqe.clear_srq.wd0.task_type = SPFC_TASK_T_CLEAR_SRQ;
+ cmdqe.clear_srq.wd1.scqn = SPFC_LSW(hba->default_scqn);
+ cmdqe.clear_srq.wd1.srq_type = srq_info->srq_type;
+ cmdqe.clear_srq.srqc_gpa_h = SPFC_HIGH_32_BITS(cqm_fcp_srq->q_ctx_paddr);
+ cmdqe.clear_srq.srqc_gpa_l = SPFC_LOW_32_BITS(cqm_fcp_srq->q_ctx_paddr);
+
+ (void)queue_delayed_work(hba->work_queue, &srq_info->del_work,
+ (ulong)msecs_to_jiffies(SPFC_SRQ_DEL_STAGE_TIMEOUT_MS));
+ spin_unlock_irqrestore(&srq_info->srq_spin_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port 0x%x begin to clear srq 0x%x(0x%x,0x%llx)",
+ hba->port_cfg.port_id, srq_info->srq_type,
+ SPFC_LSW(hba->default_scqn),
+ (u64)cqm_fcp_srq->q_ctx_paddr);
+
+ /* Run the ROOT CMDQ command to issue the clear srq command. If the
+ * command fails to be delivered, retry upon timeout.
+ */
+ (void)spfc_root_cmdq_enqueue(hba, &cmdqe, sizeof(cmdqe.clear_srq));
+}
+
+/*
+ *Function Name : spfc_srq_clr_timeout
+ *Function Description: Delete srq when timeout.
+ *Input Parameters : *work
+ *Output Parameters : N/A
+ *Return Type : void
+ */
+static void spfc_srq_clr_timeout(struct work_struct *work)
+{
+#define SPFC_MAX_DEL_SRQ_RETRY_TIMES 2
+ struct spfc_srq_info *srq = NULL;
+ struct spfc_hba_info *hba = NULL;
+ struct cqm_queue *cqm_fcp_imm_srq = NULL;
+ ulong flag = 0;
+
+ srq = container_of(work, struct spfc_srq_info, del_work.work);
+
+ spin_lock_irqsave(&srq->srq_spin_lock, flag);
+ hba = srq->hba;
+ cqm_fcp_imm_srq = srq->cqm_srq_info;
+ spin_unlock_irqrestore(&srq->srq_spin_lock, flag);
+
+ if (hba && cqm_fcp_imm_srq) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port 0x%x clear srq 0x%x stat 0x%x timeout",
+ hba->port_cfg.port_id, srq->srq_type, srq->state);
+
+ /* If the delivery fails or the execution times out after the
+ * delivery, try again once
+ */
+ srq->del_retry_time++;
+ if (srq->del_retry_time < SPFC_MAX_DEL_SRQ_RETRY_TIMES)
+ spfc_send_clear_srq_cmd(hba, srq);
+ else
+ srq->del_retry_time = 0;
+ }
+}
+
+static u32 spfc_create_els_srq(struct spfc_hba_info *hba)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct cqm_queue *cqm_srq = NULL;
+ struct wq_header *wq_header = NULL;
+ struct spfc_srq_info *srq_info = NULL;
+ struct spfc_srq_ctx srq_ctx = {0};
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ cqm_srq = cqm3_object_fc_srq_create(hba->dev_handle, SERVICE_T_FC,
+ CQM_OBJECT_NONRDMA_SRQ, SPFC_SRQ_ELS_DATA_DEPTH,
+ SPFC_SRQE_SIZE, hba);
+ if (!cqm_srq) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Create Els Srq failed");
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Initialize SRQ */
+ srq_info = &hba->els_srq_info;
+ spfc_init_srq_info(hba, cqm_srq, srq_info);
+ srq_info->srq_type = SPFC_SRQ_ELS;
+ srq_info->enable = true;
+ srq_info->state = SPFC_CLEAN_DONE;
+ srq_info->del_retry_time = 0;
+
+	/* Initialize the srq lock; safe even if the SRQ is created repeatedly */
+ spin_lock_init(&srq_info->srq_spin_lock);
+ srq_info->spin_lock_init = true;
+
+ /* Initialize queue header */
+ wq_header = (struct wq_header *)(void *)cqm_srq->q_header_vaddr;
+ spfc_init_srq_header(wq_header);
+ INIT_DELAYED_WORK(&srq_info->del_work, spfc_srq_clr_timeout);
+
+ /* Apply for RQ buffer */
+ ret = spfc_alloc_els_srq_buff(hba, srq_info->valid_wqe_num);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Allocate Els Srq buffer failed");
+
+ cqm3_object_delete(&cqm_srq->object);
+ memset(srq_info, 0, sizeof(struct spfc_srq_info));
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Fill RQE, update queue header */
+ spfc_init_els_srq_wqe(srq_info);
+
+ /* Fill SRQ CTX */
+ memset(&srq_ctx, 0, sizeof(srq_ctx));
+ spfc_cfg_srq_ctx(srq_info, &srq_ctx, SPFC_SRQ_ELS_SGE_LEN,
+ srq_info->cqm_srq_info->q_room_buf_1.buf_list->pa);
+
+ ret = spfc_creat_srqc_via_cmdq_sync(hba, &srq_ctx, srq_info->cqm_srq_info->q_ctx_paddr);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "[err]Create Els Srqc failed");
+
+ spfc_free_els_srq_buff(hba, srq_info->valid_wqe_num);
+ cqm3_object_delete(&cqm_srq->object);
+ memset(srq_info, 0, sizeof(struct spfc_srq_info));
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+void spfc_wq_destroy_els_srq(struct work_struct *work)
+{
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VOID(work);
+ hba =
+ container_of(work, struct spfc_hba_info, els_srq_clear_work);
+ spfc_destroy_els_srq(hba);
+}
+
+void spfc_destroy_els_srq(void *handle)
+{
+	/*
+	 * After the clear ELS SRQ status is received,
+	 * destroy the ELS SRQ.
+	 */
+ struct spfc_srq_info *srq_info = NULL;
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VOID(handle);
+
+ hba = (struct spfc_hba_info *)handle;
+ srq_info = &hba->els_srq_info;
+
+ /* release receive buffer */
+ spfc_free_els_srq_buff(hba, srq_info->valid_wqe_num);
+
+ /* release srq info */
+ if (srq_info->cqm_srq_info) {
+ cqm3_object_delete(&srq_info->cqm_srq_info->object);
+ srq_info->cqm_srq_info = NULL;
+ }
+ if (srq_info->spin_lock_init)
+ srq_info->spin_lock_init = false;
+ srq_info->hba = NULL;
+ srq_info->enable = false;
+ srq_info->state = SPFC_CLEAN_DONE;
+}
+
+/*
+ *Function Name : spfc_create_srq
+ *Function Description: Create the SRQs: four SRQs for receiving immediate
+ *                      data and one SRQ for receiving ELS data.
+ *Input Parameters    : *hba
+ *Output Parameters   : N/A
+ *Return Type         : u32
+ */
+static u32 spfc_create_srq(struct spfc_hba_info *hba)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+
+ /* Create ELS SRQ */
+ ret = spfc_create_els_srq(hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Create Els Srq failed");
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+/*
+ *Function Name : spfc_destroy_srq
+ *Function Description: Release the SRQ resources, including the SRQ for
+ *                      receiving the immediate data and the SRQ for
+ *                      receiving the ELS data.
+ *Input Parameters    : *hba
+ *Output Parameters   : N/A
+ *Return Type : void
+ */
+static void spfc_destroy_srq(struct spfc_hba_info *hba)
+{
+ FC_CHECK_RETURN_VOID(hba);
+
+ spfc_destroy_els_srq(hba);
+}
+
+u32 spfc_create_common_share_queues(void *handle)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
+ hba = (struct spfc_hba_info *)handle;
+	/* Create & initialize 8 pairs of SCQs */
+ ret = spfc_create_scq(hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Create scq failed");
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Alloc SRQ resource for SIRT & ELS */
+ ret = spfc_create_srq(hba);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Create srq failed");
+
+ spfc_flush_scq_ctx(hba);
+ spfc_destroy_scq(hba);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+void spfc_destroy_common_share_queues(void *hba)
+{
+ FC_CHECK_RETURN_VOID(hba);
+
+ spfc_destroy_scq((struct spfc_hba_info *)hba);
+ spfc_destroy_srq((struct spfc_hba_info *)hba);
+}
+
+static u8 spfc_map_fcp_data_cos(struct spfc_hba_info *hba)
+{
+ u8 i = 0;
+ u8 min_cnt_index = SPFC_PACKET_COS_FC_DATA;
+ bool get_init_index = false;
+
+ for (i = 0; i < SPFC_MAX_COS_NUM; i++) {
+		/* Skip CoS values that are not valid for FC or that are
+		 * reserved for the CMD
+		 */
+ if ((!(hba->cos_bitmap & ((u32)1 << i))) || i == SPFC_PACKET_COS_FC_CMD)
+ continue;
+
+ if (!get_init_index) {
+ min_cnt_index = i;
+ get_init_index = true;
+ continue;
+ }
+
+ if (atomic_read(&hba->cos_rport_cnt[i]) <
+ atomic_read(&hba->cos_rport_cnt[min_cnt_index]))
+ min_cnt_index = i;
+ }
+
+ atomic_inc(&hba->cos_rport_cnt[min_cnt_index]);
+
+ return min_cnt_index;
+}
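
spfc_map_fcp_data_cos() above picks, among the CoS values enabled in cos_bitmap (excluding the command CoS), the one currently carrying the fewest rports. A standalone sketch of that selection over plain integers; the values and the COS_FC_CMD name are made up for illustration.

#include <stdio.h>

#define MAX_COS_NUM 8
#define COS_FC_CMD  0   /* hypothetical: CoS reserved for commands */

int main(void)
{
    unsigned cos_bitmap = 0x0F;                       /* CoS 0-3 enabled (example) */
    int rport_cnt[MAX_COS_NUM] = { 9, 4, 7, 2, 0, 0, 0, 0 };
    int best = -1;

    for (int i = 0; i < MAX_COS_NUM; i++) {
        if (!(cos_bitmap & (1U << i)) || i == COS_FC_CMD)
            continue;   /* skip disabled CoS and the command CoS */
        if (best < 0 || rport_cnt[i] < rport_cnt[best])
            best = i;
    }

    if (best < 0)
        return 1;       /* no usable data CoS */

    rport_cnt[best]++;  /* the chosen CoS now carries one more rport */
    printf("selected data CoS = %d\n", best);   /* prints 3 */
    return 0;
}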
+
+static void spfc_update_cos_rport_cnt(struct spfc_hba_info *hba, u8 cos_index)
+{
+ if (cos_index >= SPFC_MAX_COS_NUM ||
+ cos_index == SPFC_PACKET_COS_FC_CMD ||
+ (!(hba->cos_bitmap & ((u32)1 << cos_index))) ||
+ (atomic_read(&hba->cos_rport_cnt[cos_index]) == 0))
+ return;
+
+ atomic_dec(&hba->cos_rport_cnt[cos_index]);
+}
+
+void spfc_invalid_parent_sq(struct spfc_parent_sq_info *sq_info)
+{
+ sq_info->rport_index = INVALID_VALUE32;
+ sq_info->context_id = INVALID_VALUE32;
+ sq_info->sq_queue_id = INVALID_VALUE32;
+ sq_info->cache_id = INVALID_VALUE32;
+ sq_info->local_port_id = INVALID_VALUE32;
+ sq_info->remote_port_id = INVALID_VALUE32;
+ sq_info->hba = NULL;
+ sq_info->del_start_jiff = INVALID_VALUE64;
+ sq_info->port_in_flush = false;
+ sq_info->sq_in_sess_rst = false;
+ sq_info->oqid_rd = INVALID_VALUE16;
+ sq_info->oqid_wr = INVALID_VALUE16;
+ sq_info->srq_ctx_addr = 0;
+ sq_info->sqn_base = 0;
+ atomic_set(&sq_info->sq_cached, false);
+ sq_info->vport_id = 0;
+ sq_info->sirt_dif_control.protect_opcode = UNF_DIF_ACTION_NONE;
+ sq_info->need_offloaded = INVALID_VALUE8;
+ atomic_set(&sq_info->sq_valid, false);
+ atomic_set(&sq_info->flush_done_wait_cnt, 0);
+ memset(&sq_info->delay_sqe, 0, sizeof(struct spfc_delay_sqe_ctrl_info));
+ memset(sq_info->io_stat, 0, sizeof(sq_info->io_stat));
+}
+
+static void spfc_parent_sq_opreate_timeout(struct work_struct *work)
+{
+ ulong flag = 0;
+ struct spfc_parent_sq_info *parent_sq = NULL;
+ struct spfc_parent_queue_info *parent_queue = NULL;
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ parent_sq = container_of(work, struct spfc_parent_sq_info, del_work.work);
+ parent_queue = container_of(parent_sq, struct spfc_parent_queue_info, parent_sq_info);
+ hba = (struct spfc_hba_info *)parent_sq->hba;
+ FC_CHECK_RETURN_VOID(hba);
+
+ spin_lock_irqsave(&parent_queue->parent_queue_state_lock, flag);
+ if (parent_queue->offload_state == SPFC_QUEUE_STATE_DESTROYING) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "Port(0x%x) sq rport index(0x%x) local nportid(0x%x),remote nportid(0x%x) reset timeout.",
+ hba->port_cfg.port_id, parent_sq->rport_index,
+ parent_sq->local_port_id,
+ parent_sq->remote_port_id);
+ }
+ spin_unlock_irqrestore(&parent_queue->parent_queue_state_lock, flag);
+}
+
+static void spfc_parent_sq_wait_flush_done_timeout(struct work_struct *work)
+{
+ ulong flag = 0;
+ struct spfc_parent_sq_info *parent_sq = NULL;
+ struct spfc_parent_queue_info *parent_queue = NULL;
+ struct spfc_hba_info *hba = NULL;
+ u32 ctx_flush_done;
+ u32 *ctx_dw = NULL;
+ int ret;
+ int sq_state = SPFC_STAT_PARENT_SQ_QUEUE_DELAYED_WORK;
+ spinlock_t *prtq_state_lock = NULL;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ parent_sq = container_of(work, struct spfc_parent_sq_info, flush_done_timeout_work.work);
+
+ FC_CHECK_RETURN_VOID(parent_sq);
+
+ parent_queue = container_of(parent_sq, struct spfc_parent_queue_info, parent_sq_info);
+ prtq_state_lock = &parent_queue->parent_queue_state_lock;
+ hba = (struct spfc_hba_info *)parent_sq->hba;
+ FC_CHECK_RETURN_VOID(hba);
+ FC_CHECK_RETURN_VOID(parent_queue);
+
+ spin_lock_irqsave(prtq_state_lock, flag);
+ if (parent_queue->offload_state != SPFC_QUEUE_STATE_DESTROYING) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) sq rport index(0x%x) is not destroying status,offloadsts is %d",
+ hba->port_cfg.port_id, parent_sq->rport_index,
+ parent_queue->offload_state);
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ return;
+ }
+
+ if (parent_queue->parent_ctx.cqm_parent_ctx_obj) {
+ ctx_dw = (u32 *)((void *)(parent_queue->parent_ctx.cqm_parent_ctx_obj->vaddr));
+ ctx_flush_done = ctx_dw[SPFC_CTXT_FLUSH_DONE_DW_POS] & SPFC_CTXT_FLUSH_DONE_MASK_BE;
+ if (ctx_flush_done == 0) {
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ if (atomic_read(&parent_queue->parent_sq_info.flush_done_wait_cnt) <
+ SPFC_SQ_WAIT_FLUSH_DONE_TIMEOUT_CNT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[info]Port(0x%x) sq rport index(0x%x) wait flush done timeout %d times",
+ hba->port_cfg.port_id, parent_sq->rport_index,
+ atomic_read(&(parent_queue->parent_sq_info
+ .flush_done_wait_cnt)));
+
+ atomic_inc(&parent_queue->parent_sq_info.flush_done_wait_cnt);
+
+ /* Delay Free Sq info */
+ ret = queue_delayed_work(hba->work_queue,
+ &(parent_queue->parent_sq_info
+ .flush_done_timeout_work),
+ (ulong)msecs_to_jiffies((u32)
+ SPFC_SQ_WAIT_FLUSH_DONE_TIMEOUT_MS));
+ if (!ret) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) rport(0x%x) queue delayed work failed ret:%d",
+ hba->port_cfg.port_id,
+ parent_sq->rport_index, ret);
+ SPFC_HBA_STAT(hba, sq_state);
+ }
+
+ return;
+ }
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) sq rport index(0x%x) has wait flush done %d times,do not free sq",
+ hba->port_cfg.port_id,
+ parent_sq->rport_index,
+ atomic_read(&(parent_queue->parent_sq_info
+ .flush_done_wait_cnt)));
+
+ SPFC_HBA_STAT(hba, SPFC_STAT_CTXT_FLUSH_DONE);
+ return;
+ }
+ }
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) sq rport index(0x%x) flush done bit is ok,free sq now",
+ hba->port_cfg.port_id, parent_sq->rport_index);
+
+ spfc_free_parent_queue_info(hba, parent_queue);
+}
+
+static void spfc_free_parent_sq(struct spfc_hba_info *hba,
+ struct spfc_parent_queue_info *parq_info)
+{
+#define SPFC_WAIT_PRT_CTX_FUSH_DONE_LOOP_TIMES 100
+ u32 ctx_flush_done = 0;
+ u32 *ctx_dw = NULL;
+ struct spfc_parent_sq_info *sq_info = NULL;
+ u32 uidelaycnt = 0;
+ struct list_head *list = NULL;
+ struct spfc_suspend_sqe_info *suspend_sqe = NULL;
+ ulong flag = 0;
+
+ sq_info = &parq_info->parent_sq_info;
+
+ spin_lock_irqsave(&parq_info->parent_queue_state_lock, flag);
+ while (!list_empty(&sq_info->suspend_sqe_list)) {
+ list = UNF_OS_LIST_NEXT(&sq_info->suspend_sqe_list);
+ list_del(list);
+ suspend_sqe = list_entry(list, struct spfc_suspend_sqe_info, list_sqe_entry);
+ if (suspend_sqe) {
+ if (!cancel_delayed_work(&suspend_sqe->timeout_work)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[warn]reset worker timer maybe timeout");
+ }
+
+ kfree(suspend_sqe);
+ }
+ }
+ spin_unlock_irqrestore(&parq_info->parent_queue_state_lock, flag);
+
+ /* Free data cos */
+ spfc_update_cos_rport_cnt(hba, parq_info->queue_data_cos);
+
+ if (parq_info->parent_ctx.cqm_parent_ctx_obj) {
+ ctx_dw = (u32 *)((void *)(parq_info->parent_ctx.cqm_parent_ctx_obj->vaddr));
+ ctx_flush_done = ctx_dw[SPFC_CTXT_FLUSH_DONE_DW_POS] & SPFC_CTXT_FLUSH_DONE_MASK_BE;
+ mb();
+ if (parq_info->offload_state == SPFC_QUEUE_STATE_DESTROYING &&
+ ctx_flush_done == 0) {
+ do {
+ ctx_flush_done = ctx_dw[SPFC_CTXT_FLUSH_DONE_DW_POS] &
+ SPFC_CTXT_FLUSH_DONE_MASK_BE;
+ mb();
+ if (ctx_flush_done != 0)
+ break;
+ uidelaycnt++;
+ } while (uidelaycnt < SPFC_WAIT_PRT_CTX_FUSH_DONE_LOOP_TIMES);
+
+ if (ctx_flush_done == 0) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]Port(0x%x) Rport(0x%x) flush done is not set",
+ hba->port_cfg.port_id,
+ sq_info->rport_index);
+ }
+ }
+
+ cqm3_object_delete(&parq_info->parent_ctx.cqm_parent_ctx_obj->object);
+ parq_info->parent_ctx.cqm_parent_ctx_obj = NULL;
+ }
+
+ spfc_invalid_parent_sq(sq_info);
+}
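
spfc_free_parent_sq() above polls a flush-done flag that firmware sets in the DMA'd context, giving up after a fixed number of iterations. A hedged userspace sketch of that bounded-polling shape; the flag here is just a plain variable, whereas in the driver it is a device-written dword re-read around memory barriers.

#include <stdbool.h>
#include <stdio.h>

#define FLUSH_DONE_POLL_LIMIT 100

/* Stand-in for the device-written flush-done dword. */
static volatile unsigned int ctx_flush_done;

/* Poll the flag up to FLUSH_DONE_POLL_LIMIT times; return true if it
 * was observed set, false if we gave up.
 */
static bool wait_flush_done(void)
{
    for (int i = 0; i < FLUSH_DONE_POLL_LIMIT; i++) {
        if (ctx_flush_done != 0)
            return true;
        /* the driver re-reads after a barrier; here we just retry */
    }
    return false;
}

int main(void)
{
    ctx_flush_done = 1;   /* pretend the hardware already set the flag */
    printf("flush done: %s\n", wait_flush_done() ? "yes" : "timed out");
    return 0;
}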
+
+u32 spfc_alloc_parent_sq(struct spfc_hba_info *hba,
+ struct spfc_parent_queue_info *parq_info,
+ struct unf_port_info *rport_info)
+{
+ struct spfc_parent_sq_info *sq_ctrl = NULL;
+ struct cqm_qpc_mpt *prnt_ctx = NULL;
+ ulong flag = 0;
+
+	/* Create parent context via CQM */
+ prnt_ctx = cqm3_object_qpc_mpt_create(hba->dev_handle, SERVICE_T_FC,
+ CQM_OBJECT_SERVICE_CTX, SPFC_CNTX_SIZE_256B,
+ parq_info, CQM_INDEX_INVALID);
+ if (!prnt_ctx) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Create parent context failed, CQM_INDEX is 0x%x",
+ CQM_INDEX_INVALID);
+ goto parent_create_fail;
+ }
+
+ parq_info->parent_ctx.cqm_parent_ctx_obj = prnt_ctx;
+ /* Initialize struct spfc_parent_sq_info */
+ sq_ctrl = &parq_info->parent_sq_info;
+ sq_ctrl->hba = (void *)hba;
+ sq_ctrl->rport_index = rport_info->rport_index;
+ sq_ctrl->sqn_base = rport_info->sqn_base;
+ sq_ctrl->context_id = prnt_ctx->xid;
+ sq_ctrl->sq_queue_id = SPFC_QID_SQ;
+ sq_ctrl->cache_id = INVALID_VALUE32;
+ sq_ctrl->local_port_id = INVALID_VALUE32;
+ sq_ctrl->remote_port_id = INVALID_VALUE32;
+ sq_ctrl->sq_in_sess_rst = false;
+ atomic_set(&sq_ctrl->sq_valid, true);
+ sq_ctrl->del_start_jiff = INVALID_VALUE64;
+ sq_ctrl->service_type = SPFC_SERVICE_TYPE_FC;
+ sq_ctrl->vport_id = (u8)rport_info->qos_level;
+ sq_ctrl->cs_ctrl = (u8)rport_info->cs_ctrl;
+ sq_ctrl->sirt_dif_control.protect_opcode = UNF_DIF_ACTION_NONE;
+ sq_ctrl->need_offloaded = INVALID_VALUE8;
+ atomic_set(&sq_ctrl->flush_done_wait_cnt, 0);
+
+ /* Check whether the HBA is in the Linkdown state. Note that
+ * offload_state must be in the non-FREE state.
+ */
+ spin_lock_irqsave(&hba->flush_state_lock, flag);
+ sq_ctrl->port_in_flush = hba->in_flushing;
+ spin_unlock_irqrestore(&hba->flush_state_lock, flag);
+ memset(sq_ctrl->io_stat, 0, sizeof(sq_ctrl->io_stat));
+
+ INIT_DELAYED_WORK(&sq_ctrl->del_work, spfc_parent_sq_opreate_timeout);
+ INIT_DELAYED_WORK(&sq_ctrl->flush_done_timeout_work,
+ spfc_parent_sq_wait_flush_done_timeout);
+ INIT_LIST_HEAD(&sq_ctrl->suspend_sqe_list);
+
+ memset(&sq_ctrl->delay_sqe, 0, sizeof(struct spfc_delay_sqe_ctrl_info));
+
+ return RETURN_OK;
+
+parent_create_fail:
+ parq_info->parent_ctx.cqm_parent_ctx_obj = NULL;
+
+ return UNF_RETURN_ERROR;
+}
+
+static void
+spfc_init_prnt_ctxt_scq_qinfo(void *hba,
+ struct spfc_parent_queue_info *prnt_qinfo)
+{
+ u32 resp_scqn = 0;
+ struct spfc_parent_context *ctx = NULL;
+ struct spfc_scq_qinfo *resp_prnt_scq_ctxt = NULL;
+ struct spfc_queue_info_bus queue_bus;
+
+ /* Obtains the queue id of the scq returned by the CQM when the SCQ is
+ * created
+ */
+ resp_scqn = prnt_qinfo->parent_sts_scq_info.cqm_queue_id;
+
+ /* Obtains the Parent Context address */
+ ctx = (struct spfc_parent_context *)(prnt_qinfo->parent_ctx.parent_ctx);
+
+ resp_prnt_scq_ctxt = &ctx->resp_scq_qinfo;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.rq_th2_preld_cache_num = wqe_pre_load;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.rq_th1_preld_cache_num = wqe_pre_load;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.rq_th0_preld_cache_num = wqe_pre_load;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.rq_min_preld_cache_num = wqe_pre_load;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.sq_th2_preld_cache_num = wqe_pre_load;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.sq_th1_preld_cache_num = wqe_pre_load;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.sq_th0_preld_cache_num = wqe_pre_load;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.sq_min_preld_cache_num = wqe_pre_load;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.scq_n = (u64)resp_scqn;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.parity = 0;
+
+ memset(&queue_bus, 0, sizeof(struct spfc_queue_info_bus));
+ queue_bus.bus[ARRAY_INDEX_0] = resp_prnt_scq_ctxt->hw_scqc_config.pctxt_val1;
+ resp_prnt_scq_ctxt->hw_scqc_config.info.parity = spfc_get_parity_value(queue_bus.bus,
+ SPFC_HW_SCQC_BUS_ROW,
+ SPFC_HW_SCQC_BUS_COL
+ );
+ spfc_cpu_to_big64(resp_prnt_scq_ctxt, sizeof(struct spfc_scq_qinfo));
+}
+
+static void
+spfc_init_prnt_ctxt_srq_qinfo(void *handle, struct spfc_parent_queue_info *prnt_qinfo)
+{
+ struct spfc_parent_context *ctx = NULL;
+ struct cqm_queue *cqm_els_srq = NULL;
+ struct spfc_parent_sq_info *sq = NULL;
+ struct spfc_queue_info_bus queue_bus;
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ /* Obtains the SQ address */
+ sq = &prnt_qinfo->parent_sq_info;
+
+ /* Obtains the Parent Context address */
+ ctx = (struct spfc_parent_context *)(prnt_qinfo->parent_ctx.parent_ctx);
+
+ cqm_els_srq = hba->els_srq_info.cqm_srq_info;
+
+ /* Initialize the Parent SRQ INFO used when the ELS is received */
+ ctx->els_srq_info.srqc_gpa = cqm_els_srq->q_ctx_paddr >> UNF_SHIFT_4;
+
+ memset(&queue_bus, 0, sizeof(struct spfc_queue_info_bus));
+ queue_bus.bus[ARRAY_INDEX_0] = ctx->els_srq_info.srqc_gpa;
+ ctx->els_srq_info.parity = spfc_get_parity_value(queue_bus.bus, SPFC_HW_SRQC_BUS_ROW,
+ SPFC_HW_SRQC_BUS_COL);
+ spfc_cpu_to_big64(&ctx->els_srq_info, sizeof(struct spfc_srq_qinfo));
+
+ ctx->imm_srq_info.srqc_gpa = 0;
+ sq->srq_ctx_addr = 0;
+}
+
+static u16 spfc_get_max_sequence_id(void)
+{
+ return SPFC_HRQI_SEQ_ID_MAX;
+}
+
+static void spfc_init_prnt_rsvd_qinfo(struct spfc_parent_queue_info *prnt_qinfo)
+{
+ struct spfc_parent_context *ctx = NULL;
+ struct spfc_hw_rsvd_queue *hw_rsvd_qinfo = NULL;
+ u16 max_seq = 0;
+ u32 each = 0, seq_index = 0;
+
+ /* Obtains the Parent Context address */
+ ctx = (struct spfc_parent_context *)(prnt_qinfo->parent_ctx.parent_ctx);
+ hw_rsvd_qinfo = (struct spfc_hw_rsvd_queue *)&ctx->hw_rsvdq;
+ memset(hw_rsvd_qinfo->seq_id_bitmap, 0, sizeof(hw_rsvd_qinfo->seq_id_bitmap));
+
+ max_seq = spfc_get_max_sequence_id();
+
+	/* special handling for sequence id 0, which is always reserved by
+	 * ucode for sending fcp-cmd
+	 */
+ hw_rsvd_qinfo->seq_id_bitmap[SPFC_HRQI_SEQ_SEPCIAL_ID] = 1;
+ seq_index = SPFC_HRQI_SEQ_SEPCIAL_ID - (max_seq >> SPFC_HRQI_SEQ_INDEX_SHIFT);
+
+ /* Set the unavailable mask to start from max + 1 */
+ for (each = (max_seq % SPFC_HRQI_SEQ_INDEX_MAX) + 1;
+ each < SPFC_HRQI_SEQ_INDEX_MAX; each++) {
+ hw_rsvd_qinfo->seq_id_bitmap[seq_index] |= ((u64)0x1) << each;
+ }
+
+ hw_rsvd_qinfo->seq_id_bitmap[seq_index] =
+ cpu_to_be64(hw_rsvd_qinfo->seq_id_bitmap[seq_index]);
+
+	/* special handling for sequence id 0 */
+ if (seq_index != SPFC_HRQI_SEQ_SEPCIAL_ID)
+ hw_rsvd_qinfo->seq_id_bitmap[SPFC_HRQI_SEQ_SEPCIAL_ID] =
+ cpu_to_be64(hw_rsvd_qinfo->seq_id_bitmap[SPFC_HRQI_SEQ_SEPCIAL_ID]);
+
+ for (each = 0; each < seq_index; each++)
+ hw_rsvd_qinfo->seq_id_bitmap[each] = SPFC_HRQI_SEQ_INVALID_ID;
+
+	/* Regardless of the seq id range, last_req_seq_id is fixed at 0xff */
+ hw_rsvd_qinfo->wd0.last_req_seq_id = SPFC_HRQI_SEQ_ID_MAX;
+ hw_rsvd_qinfo->wd0.xid = prnt_qinfo->parent_sq_info.context_id;
+
+ *(u64 *)&hw_rsvd_qinfo->wd0 =
+ cpu_to_be64(*(u64 *)&hw_rsvd_qinfo->wd0);
+}
+
+/*
+ *Function Name : spfc_init_prnt_sw_section_info
+ *Function Description: Initialize the SW Section area that can be accessed by
+ * the Parent Context uCode.
+ *Input Parameters : *hba,
+ * *prnt_qinfo
+ *Output Parameters : N/A
+ *Return Type : void
+ */
+static void spfc_init_prnt_sw_section_info(struct spfc_hba_info *hba,
+ struct spfc_parent_queue_info *prnt_qinfo)
+{
+#define SPFC_VLAN_ENABLE (1)
+#define SPFC_MB_PER_KB 1024
+ u16 rport_index;
+ struct spfc_parent_context *ctx = NULL;
+ struct spfc_sw_section *sw_setion = NULL;
+ u16 total_scq_num = SPFC_TOTAL_SCQ_NUM;
+ u32 queue_id;
+ dma_addr_t queue_hdr_paddr;
+
+ /* Obtains the Parent Context address */
+ ctx = (struct spfc_parent_context *)(prnt_qinfo->parent_ctx.parent_ctx);
+ sw_setion = &ctx->sw_section;
+
+ /* xid+vPortId */
+ sw_setion->sw_ctxt_vport_xid.xid = prnt_qinfo->parent_sq_info.context_id;
+ spfc_cpu_to_big32(&sw_setion->sw_ctxt_vport_xid, sizeof(sw_setion->sw_ctxt_vport_xid));
+
+ /* conn_id */
+ rport_index = SPFC_LSW(prnt_qinfo->parent_sq_info.rport_index);
+ sw_setion->conn_id = cpu_to_be16(rport_index);
+
+ /* Immediate parameters */
+ sw_setion->immi_rq_page_size = 0;
+
+ /* Parent SCQ INFO used for sending packets to the Cmnd */
+ sw_setion->scq_num_rcv_cmd = cpu_to_be16((u16)prnt_qinfo->parent_cmd_scq_info.cqm_queue_id);
+ sw_setion->scq_num_max_scqn = cpu_to_be16(total_scq_num);
+
+ /* sw_ctxt_misc */
+ sw_setion->sw_ctxt_misc.dw.srv_type = prnt_qinfo->parent_sq_info.service_type;
+ sw_setion->sw_ctxt_misc.dw.port_id = hba->port_index;
+
+ /* only the VN2VF mode is supported */
+ sw_setion->sw_ctxt_misc.dw.vlan_id = 0;
+ spfc_cpu_to_big32(&sw_setion->sw_ctxt_misc.pctxt_val0,
+ sizeof(sw_setion->sw_ctxt_misc.pctxt_val0));
+
+ /* Configuring the combo length */
+ sw_setion->per_xmit_data_size = cpu_to_be32(combo_length * SPFC_MB_PER_KB);
+ sw_setion->sw_ctxt_config.dw.work_mode = SPFC_PORT_MODE_INI;
+ sw_setion->sw_ctxt_config.dw.status = FC_PARENT_STATUS_INVALID;
+ sw_setion->sw_ctxt_config.dw.cos = 0;
+ sw_setion->sw_ctxt_config.dw.oq_cos_cmd = SPFC_PACKET_COS_FC_CMD;
+ sw_setion->sw_ctxt_config.dw.oq_cos_data = prnt_qinfo->queue_data_cos;
+ sw_setion->sw_ctxt_config.dw.priority = 0;
+ sw_setion->sw_ctxt_config.dw.vlan_enable = SPFC_VLAN_ENABLE;
+ sw_setion->sw_ctxt_config.dw.sgl_num = dif_sgl_mode;
+ spfc_cpu_to_big32(&sw_setion->sw_ctxt_config.pctxt_val1,
+ sizeof(sw_setion->sw_ctxt_config.pctxt_val1));
+ spfc_cpu_to_big32(&sw_setion->immi_dif_info, sizeof(sw_setion->immi_dif_info));
+
+ queue_id = prnt_qinfo->parent_cmd_scq_info.local_queue_id;
+ queue_hdr_paddr = hba->scq_info[queue_id].cqm_scq_info->q_header_paddr;
+ sw_setion->cmd_scq_gpa_h = SPFC_HIGH_32_BITS(queue_hdr_paddr);
+ sw_setion->cmd_scq_gpa_l = SPFC_LOW_32_BITS(queue_hdr_paddr);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[info]Port(0x%x) RPort(0x%x) CmdLocalScqn(0x%x) QheaderGpaH(0x%x) QheaderGpaL(0x%x)",
+ hba->port_cfg.port_id, prnt_qinfo->parent_sq_info.rport_index, queue_id,
+ sw_setion->cmd_scq_gpa_h, sw_setion->cmd_scq_gpa_l);
+
+ spfc_cpu_to_big32(&sw_setion->cmd_scq_gpa_h, sizeof(sw_setion->cmd_scq_gpa_h));
+ spfc_cpu_to_big32(&sw_setion->cmd_scq_gpa_l, sizeof(sw_setion->cmd_scq_gpa_l));
+}
+
+static void spfc_init_parent_context(void *hba, struct spfc_parent_queue_info *prnt_qinfo)
+{
+ struct spfc_parent_context *ctx = NULL;
+
+ ctx = (struct spfc_parent_context *)(prnt_qinfo->parent_ctx.parent_ctx);
+
+ /* Initialize Parent Context */
+ memset(ctx, 0, SPFC_CNTX_SIZE_256B);
+
+ /* Initialize the Queue Info hardware area */
+ spfc_init_prnt_ctxt_scq_qinfo(hba, prnt_qinfo);
+ spfc_init_prnt_ctxt_srq_qinfo(hba, prnt_qinfo);
+ spfc_init_prnt_rsvd_qinfo(prnt_qinfo);
+
+ /* Initialize Software Section */
+ spfc_init_prnt_sw_section_info(hba, prnt_qinfo);
+}
+
+void spfc_map_shared_queue_qid(struct spfc_hba_info *hba,
+ struct spfc_parent_queue_info *parent_queue_info,
+ u32 rport_index)
+{
+ u32 cmd_scqn_local = 0;
+ u32 sts_scqn_local = 0;
+
+	/* SCQs are assigned per connection so that commands and responses
+	 * are distributed evenly
+	 */
+ cmd_scqn_local = SPFC_RPORTID_TO_CMD_SCQN(rport_index);
+ sts_scqn_local = SPFC_RPORTID_TO_STS_SCQN(rport_index);
+ parent_queue_info->parent_cmd_scq_info.local_queue_id = cmd_scqn_local;
+ parent_queue_info->parent_sts_scq_info.local_queue_id = sts_scqn_local;
+ parent_queue_info->parent_cmd_scq_info.cqm_queue_id =
+ hba->scq_info[cmd_scqn_local].scqn;
+ parent_queue_info->parent_sts_scq_info.cqm_queue_id =
+ hba->scq_info[sts_scqn_local].scqn;
+
+	/* Each session shares the immediate SRQ and ELS SRQ */
+ parent_queue_info->parent_els_srq_info.local_queue_id = 0;
+ parent_queue_info->parent_els_srq_info.cqm_queue_id = hba->els_srq_info.srqn;
+
+ /* Allocate fcp data cos value */
+ parent_queue_info->queue_data_cos = spfc_map_fcp_data_cos(hba);
+
+ /* Allocate Parent SQ vPort */
+ parent_queue_info->parent_sq_info.vport_id += parent_queue_info->queue_vport_id;
+}
+
+u32 spfc_send_session_enable(struct spfc_hba_info *hba, struct unf_port_info *rport_info)
+{
+ struct spfc_parent_queue_info *parent_queue_info = NULL;
+ dma_addr_t ctx_phy_addr = 0;
+ void *ctx_addr = NULL;
+ union spfc_cmdqe session_enable;
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_parent_context *ctx = NULL;
+ struct spfc_sw_section *sw_setion = NULL;
+ struct spfc_host_keys key;
+ u32 tx_mfs = 2048;
+ u32 edtov_timer = 2000;
+ ulong flag = 0;
+ spinlock_t *prtq_state_lock = NULL;
+ u32 index;
+
+ memset(&session_enable, 0, sizeof(union spfc_cmdqe));
+ memset(&key, 0, sizeof(struct spfc_host_keys));
+ index = rport_info->rport_index;
+ parent_queue_info = &hba->parent_queue_mgr->parent_queue[index];
+ prtq_state_lock = &parent_queue_info->parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ ctx = (struct spfc_parent_context *)(parent_queue_info->parent_ctx.parent_ctx);
+ sw_setion = &ctx->sw_section;
+
+ sw_setion->tx_mfs = cpu_to_be16((u16)(tx_mfs));
+ sw_setion->e_d_tov_timer_val = cpu_to_be32(edtov_timer);
+
+ spfc_big_to_cpu32(&sw_setion->sw_ctxt_misc.pctxt_val0,
+ sizeof(sw_setion->sw_ctxt_misc.pctxt_val0));
+ sw_setion->sw_ctxt_misc.dw.port_id = SPFC_GET_NETWORK_PORT_ID(hba);
+ spfc_cpu_to_big32(&sw_setion->sw_ctxt_misc.pctxt_val0,
+ sizeof(sw_setion->sw_ctxt_misc.pctxt_val0));
+
+ spfc_big_to_cpu32(&sw_setion->sw_ctxt_config.pctxt_val1,
+ sizeof(sw_setion->sw_ctxt_config.pctxt_val1));
+ spfc_cpu_to_big32(&sw_setion->sw_ctxt_config.pctxt_val1,
+ sizeof(sw_setion->sw_ctxt_config.pctxt_val1));
+
+ parent_queue_info->parent_sq_info.rport_index = rport_info->rport_index;
+ parent_queue_info->parent_sq_info.local_port_id = rport_info->local_nport_id;
+ parent_queue_info->parent_sq_info.remote_port_id = rport_info->nport_id;
+ parent_queue_info->parent_sq_info.context_id =
+ parent_queue_info->parent_ctx.cqm_parent_ctx_obj->xid;
+
+	/* Fill in the context for the chip */
+ ctx_phy_addr = parent_queue_info->parent_ctx.cqm_parent_ctx_obj->paddr;
+ ctx_addr = parent_queue_info->parent_ctx.cqm_parent_ctx_obj->vaddr;
+ memcpy(ctx_addr, parent_queue_info->parent_ctx.parent_ctx,
+ sizeof(struct spfc_parent_context));
+ session_enable.session_enable.wd0.task_type = SPFC_TASK_T_SESS_EN;
+ session_enable.session_enable.wd2.conn_id = rport_info->rport_index;
+ session_enable.session_enable.wd2.scqn = hba->default_scqn;
+ session_enable.session_enable.wd3.xid_p =
+ parent_queue_info->parent_ctx.cqm_parent_ctx_obj->xid;
+ session_enable.session_enable.context_gpa_hi = SPFC_HIGH_32_BITS(ctx_phy_addr);
+ session_enable.session_enable.context_gpa_lo = SPFC_LOW_32_BITS(ctx_phy_addr);
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ key.wd3.sid_2 = (rport_info->local_nport_id & SPFC_KEY_WD3_SID_2_MASK) >> UNF_SHIFT_16;
+ key.wd3.sid_1 = (rport_info->local_nport_id & SPFC_KEY_WD3_SID_1_MASK) >> UNF_SHIFT_8;
+ key.wd4.sid_0 = rport_info->local_nport_id & SPFC_KEY_WD3_SID_0_MASK;
+ key.wd4.did_0 = rport_info->nport_id & SPFC_KEY_WD4_DID_0_MASK;
+ key.wd4.did_1 = (rport_info->nport_id & SPFC_KEY_WD4_DID_1_MASK) >> UNF_SHIFT_8;
+ key.wd4.did_2 = (rport_info->nport_id & SPFC_KEY_WD4_DID_2_MASK) >> UNF_SHIFT_16;
+ key.wd5.host_id = 0;
+ key.wd5.port_id = hba->port_index;
+
+ memcpy(&session_enable.session_enable.keys, &key, sizeof(struct spfc_host_keys));
+
+ memcpy((void *)(uintptr_t)session_enable.session_enable.context,
+ parent_queue_info->parent_ctx.parent_ctx,
+ sizeof(struct spfc_parent_context));
+ spfc_big_to_cpu32((void *)(uintptr_t)session_enable.session_enable.context,
+ sizeof(struct spfc_parent_context));
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
+ "[info] xid:0x%x, sid:0x%x,did:0x%x parentcontext:",
+ parent_queue_info->parent_ctx.cqm_parent_ctx_obj->xid,
+ rport_info->local_nport_id, rport_info->nport_id);
+
+ ret = spfc_root_cmdq_enqueue(hba, &session_enable, sizeof(session_enable.session_enable));
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[err]RootCMDQEnqueue Error, free default session parent resource");
+ return UNF_RETURN_ERROR;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) send default session enable success,rport index(0x%x),context id(0x%x) SID=(0x%x), DID=(0x%x)",
+ hba->port_cfg.port_id, rport_info->rport_index,
+ parent_queue_info->parent_sq_info.context_id,
+ rport_info->local_nport_id, rport_info->nport_id);
+
+ return RETURN_OK;
+}
+
+u32 spfc_alloc_parent_resource(void *handle, struct unf_port_info *rport_info)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_hba_info *hba = NULL;
+ struct spfc_parent_queue_info *parent_queue_info = NULL;
+ ulong flag = 0;
+ spinlock_t *prtq_state_lock = NULL;
+ u32 index;
+
+ FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport_info, UNF_RETURN_ERROR);
+
+ hba = (struct spfc_hba_info *)handle;
+ if (!hba->parent_queue_mgr) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) cannot find parent queue pool",
+ hba->port_cfg.port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ index = rport_info->rport_index;
+ if (index >= UNF_SPFC_MAXRPORT_NUM) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x) allocate parent resource failed, invalid rport index(0x%x), rport nportid(0x%x)",
+ hba->port_cfg.port_id, index,
+ rport_info->nport_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ parent_queue_info = &hba->parent_queue_mgr->parent_queue[index];
+ prtq_state_lock = &parent_queue_info->parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ if (parent_queue_info->offload_state != SPFC_QUEUE_STATE_FREE) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x) allocate parent resource failed, invalid rport index(0x%x), rport nportid(0x%x), offload state(0x%x)",
+ hba->port_cfg.port_id, index, rport_info->nport_id,
+ parent_queue_info->offload_state);
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ return UNF_RETURN_ERROR;
+ }
+
+ parent_queue_info->offload_state = SPFC_QUEUE_STATE_INITIALIZED;
+ /* Create Parent Context and Link List SQ */
+ ret = spfc_alloc_parent_sq(hba, parent_queue_info, rport_info);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "Port(0x%x) alloc session resource failed. rport index(0x%x), rport nportid(0x%x).",
+ hba->port_cfg.port_id, index,
+ rport_info->nport_id);
+
+ parent_queue_info->offload_state = SPFC_QUEUE_STATE_FREE;
+ spfc_invalid_parent_sq(&parent_queue_info->parent_sq_info);
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Allocate the corresponding queue xid to each parent */
+ spfc_map_shared_queue_qid(hba, parent_queue_info, rport_info->rport_index);
+
+ /* Initialize Parent Context, including hardware area and ucode area */
+ spfc_init_parent_context(hba, parent_queue_info);
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+	/* Only the default session is enabled explicitly; others are enabled implicitly */
+ if (unlikely(rport_info->rport_index == SPFC_DEFAULT_RPORT_INDEX))
+ return spfc_send_session_enable(handle, rport_info);
+
+ parent_queue_info->parent_sq_info.local_port_id = rport_info->local_nport_id;
+ parent_queue_info->parent_sq_info.remote_port_id = rport_info->nport_id;
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) allocate parent sq success,rport index(0x%x),rport nportid(0x%x),context id(0x%x)",
+ hba->port_cfg.port_id, rport_info->rport_index,
+ rport_info->nport_id,
+ parent_queue_info->parent_sq_info.context_id);
+
+ return ret;
+}
+
+u32 spfc_free_parent_resource(void *handle, struct unf_port_info *rport_info)
+{
+ struct spfc_parent_queue_info *parent_queue_info = NULL;
+ ulong flag = 0;
+ ulong rst_flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ enum spfc_session_reset_mode mode = SPFC_SESS_RST_DELETE_IO_CONN_BOTH;
+ struct spfc_hba_info *hba = NULL;
+ spinlock_t *prtq_state_lock = NULL;
+ spinlock_t *sq_enq_lock = NULL;
+ u32 index;
+
+ FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(rport_info, UNF_RETURN_ERROR);
+
+ hba = (struct spfc_hba_info *)handle;
+ if (!hba->parent_queue_mgr) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[warn]Port(0x%x) cannot find parent queue pool",
+ hba->port_cfg.port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* get parent queue info (by rport index) */
+ if (rport_info->rport_index >= UNF_SPFC_MAXRPORT_NUM) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[warn]Port(0x%x) free parent resource failed, invlaid rport_index(%u) rport_nport_id(0x%x)",
+ hba->port_cfg.port_id, rport_info->rport_index, rport_info->nport_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ index = rport_info->rport_index;
+ parent_queue_info = &hba->parent_queue_mgr->parent_queue[index];
+ prtq_state_lock = &parent_queue_info->parent_queue_state_lock;
+ sq_enq_lock = &parent_queue_info->parent_sq_info.parent_sq_enqueue_lock;
+
+ spin_lock_irqsave(prtq_state_lock, flag);
+ /* 1. for has been offload */
+ if (parent_queue_info->offload_state == SPFC_QUEUE_STATE_OFFLOADED) {
+ parent_queue_info->offload_state = SPFC_QUEUE_STATE_DESTROYING;
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ /* set reset state, in order to prevent I/O in_SQ */
+ spin_lock_irqsave(sq_enq_lock, rst_flag);
+ parent_queue_info->parent_sq_info.sq_in_sess_rst = true;
+ spin_unlock_irqrestore(sq_enq_lock, rst_flag);
+
+ /* check pcie device state */
+ if (!hba->dev_present) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) hba is not present, free directly. rport_index(0x%x:0x%x) local_nportid(0x%x) remote_nportid(0x%x:0x%x)",
+ hba->port_cfg.port_id, rport_info->rport_index,
+ parent_queue_info->parent_sq_info.rport_index,
+ parent_queue_info->parent_sq_info.local_port_id,
+ rport_info->nport_id,
+ parent_queue_info->parent_sq_info.remote_port_id);
+
+ spfc_free_parent_queue_info(hba, parent_queue_info);
+ return RETURN_OK;
+ }
+
+ parent_queue_info->parent_sq_info.del_start_jiff = jiffies;
+ (void)queue_delayed_work(hba->work_queue,
+ &parent_queue_info->parent_sq_info.del_work,
+ (ulong)msecs_to_jiffies((u32)
+ SPFC_SQ_DEL_STAGE_TIMEOUT_MS));
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) begin to reset parent session, rport_index(0x%x:0x%x) local_nportid(0x%x) remote_nportid(0x%x:0x%x)",
+ hba->port_cfg.port_id, rport_info->rport_index,
+ parent_queue_info->parent_sq_info.rport_index,
+ parent_queue_info->parent_sq_info.local_port_id,
+ rport_info->nport_id,
+ parent_queue_info->parent_sq_info.remote_port_id);
+		/* Forcibly use the mode that deletes both the I/O and the connection */
+ mode = SPFC_SESS_RST_DELETE_IO_CONN_BOTH;
+ ret = spfc_send_session_rst_cmd(hba, parent_queue_info, mode);
+
+ return ret;
+ } else if (parent_queue_info->offload_state == SPFC_QUEUE_STATE_INITIALIZED) {
+ /* 2. for resource has been alloc, but not offload */
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) parent sq is not offloaded, free directly. rport_index(0x%x:0x%x) local_nportid(0x%x) remote_nportid(0x%x:0x%x)",
+ hba->port_cfg.port_id, rport_info->rport_index,
+ parent_queue_info->parent_sq_info.rport_index,
+ parent_queue_info->parent_sq_info.local_port_id,
+ rport_info->nport_id,
+ parent_queue_info->parent_sq_info.remote_port_id);
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ spfc_free_parent_queue_info(hba, parent_queue_info);
+
+ return RETURN_OK;
+ } else if (parent_queue_info->offload_state ==
+ SPFC_QUEUE_STATE_OFFLOADING) {
+ /* 3. for driver has offloading CMND to uCode */
+ spfc_push_destroy_parent_queue_sqe(hba, parent_queue_info, rport_info);
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) parent sq is offloading, push to delay free. rport_index(0x%x:0x%x) local_nportid(0x%x) remote_nportid(0x%x:0x%x)",
+ hba->port_cfg.port_id, rport_info->rport_index,
+ parent_queue_info->parent_sq_info.rport_index,
+ parent_queue_info->parent_sq_info.local_port_id,
+ rport_info->nport_id,
+ parent_queue_info->parent_sq_info.remote_port_id);
+
+ return RETURN_OK;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) parent sq is not created, do not need free state(0x%x) rport_index(0x%x:0x%x) local_nportid(0x%x) remote_nportid(0x%x:0x%x)",
+ hba->port_cfg.port_id, parent_queue_info->offload_state,
+ rport_info->rport_index,
+ parent_queue_info->parent_sq_info.rport_index,
+ parent_queue_info->parent_sq_info.local_port_id,
+ rport_info->nport_id,
+ parent_queue_info->parent_sq_info.remote_port_id);
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ return RETURN_OK;
+}
+
+void spfc_free_parent_queue_mgr(void *handle)
+{
+ u32 index = 0;
+ struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VOID(handle);
+
+ hba = (struct spfc_hba_info *)handle;
+ if (!hba->parent_queue_mgr)
+ return;
+ parent_queue_mgr = hba->parent_queue_mgr;
+
+ for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
+ if (parent_queue_mgr->parent_queue[index].parent_ctx.parent_ctx)
+ parent_queue_mgr->parent_queue[index].parent_ctx.parent_ctx = NULL;
+ }
+
+ if (parent_queue_mgr->parent_sq_buf_list.buflist) {
+ for (index = 0; index < parent_queue_mgr->parent_sq_buf_list.buf_num; index++) {
+ if (parent_queue_mgr->parent_sq_buf_list.buflist[index].paddr != 0) {
+ pci_unmap_single(hba->pci_dev,
+ parent_queue_mgr->parent_sq_buf_list
+ .buflist[index].paddr,
+ parent_queue_mgr->parent_sq_buf_list.buf_size,
+ DMA_BIDIRECTIONAL);
+ parent_queue_mgr->parent_sq_buf_list.buflist[index].paddr = 0;
+ }
+ kfree(parent_queue_mgr->parent_sq_buf_list.buflist[index].vaddr);
+ parent_queue_mgr->parent_sq_buf_list.buflist[index].vaddr = NULL;
+ }
+
+ kfree(parent_queue_mgr->parent_sq_buf_list.buflist);
+ parent_queue_mgr->parent_sq_buf_list.buflist = NULL;
+ }
+
+ vfree(parent_queue_mgr);
+ hba->parent_queue_mgr = NULL;
+}
+
+void spfc_free_parent_queues(void *handle)
+{
+ u32 index = 0;
+ ulong flag = 0;
+ struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
+ struct spfc_hba_info *hba = NULL;
+ spinlock_t *prtq_state_lock = NULL;
+
+ FC_CHECK_RETURN_VOID(handle);
+
+ hba = (struct spfc_hba_info *)handle;
+ parent_queue_mgr = hba->parent_queue_mgr;
+
+ for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
+ prtq_state_lock = &parent_queue_mgr->parent_queue[index].parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ if (SPFC_QUEUE_STATE_DESTROYING ==
+ parent_queue_mgr->parent_queue[index].offload_state) {
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ (void)cancel_delayed_work_sync(&parent_queue_mgr->parent_queue[index]
+ .parent_sq_info.del_work);
+ (void)cancel_delayed_work_sync(&parent_queue_mgr->parent_queue[index]
+ .parent_sq_info.flush_done_timeout_work);
+
+ /* free parent queue */
+ spfc_free_parent_queue_info(hba, &parent_queue_mgr->parent_queue[index]);
+ continue;
+ }
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ }
+}
+
+/*
+ *Function Name : spfc_alloc_parent_queue_mgr
+ *Function Description: Allocate and initialize parent queue manager.
+ *Input Parameters : *handle
+ *Output Parameters : N/A
+ *Return Type         : u32
+ */
+u32 spfc_alloc_parent_queue_mgr(void *handle)
+{
+ u32 index = 0;
+ struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
+ u32 buf_total_size;
+ u32 buf_num;
+ u32 alloc_idx;
+ u32 cur_buf_idx = 0;
+ u32 cur_buf_offset = 0;
+ u32 prt_ctx_size = sizeof(struct spfc_parent_context);
+ u32 buf_cnt_perhugebuf;
+ struct spfc_hba_info *hba = NULL;
+ u32 init_val = INVALID_VALUE32;
+ dma_addr_t paddr;
+
+ FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
+
+ hba = (struct spfc_hba_info *)handle;
+ parent_queue_mgr = (struct spfc_parent_queue_mgr *)vmalloc(sizeof
+ (struct spfc_parent_queue_mgr));
+ if (!parent_queue_mgr) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) cannot allocate queue manager",
+ hba->port_cfg.port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ hba->parent_queue_mgr = parent_queue_mgr;
+ memset(parent_queue_mgr, 0, sizeof(struct spfc_parent_queue_mgr));
+
+ for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
+ spin_lock_init(&parent_queue_mgr->parent_queue[index].parent_queue_state_lock);
+ parent_queue_mgr->parent_queue[index].offload_state = SPFC_QUEUE_STATE_FREE;
+ spin_lock_init(&(parent_queue_mgr->parent_queue[index]
+ .parent_sq_info.parent_sq_enqueue_lock));
+ parent_queue_mgr->parent_queue[index].parent_cmd_scq_info.cqm_queue_id = init_val;
+ parent_queue_mgr->parent_queue[index].parent_sts_scq_info.cqm_queue_id = init_val;
+ parent_queue_mgr->parent_queue[index].parent_els_srq_info.cqm_queue_id = init_val;
+ parent_queue_mgr->parent_queue[index].parent_sq_info.del_start_jiff = init_val;
+ parent_queue_mgr->parent_queue[index].queue_vport_id = hba->vpid_start;
+ }
+
+ buf_total_size = prt_ctx_size * UNF_SPFC_MAXRPORT_NUM;
+ parent_queue_mgr->parent_sq_buf_list.buf_size = buf_total_size > BUF_LIST_PAGE_SIZE ?
+ BUF_LIST_PAGE_SIZE : buf_total_size;
+ buf_cnt_perhugebuf = parent_queue_mgr->parent_sq_buf_list.buf_size / prt_ctx_size;
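+	/* buf_num is the number of huge buffers needed to hold one parent
+	 * context per rport, rounded up when UNF_SPFC_MAXRPORT_NUM is not a
+	 * multiple of buf_cnt_perhugebuf
+	 */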
+ buf_num = UNF_SPFC_MAXRPORT_NUM % buf_cnt_perhugebuf ?
+ UNF_SPFC_MAXRPORT_NUM / buf_cnt_perhugebuf + 1 :
+ UNF_SPFC_MAXRPORT_NUM / buf_cnt_perhugebuf;
+ parent_queue_mgr->parent_sq_buf_list.buflist =
+ (struct buff_list *)kmalloc(buf_num * sizeof(struct buff_list), GFP_KERNEL);
+ parent_queue_mgr->parent_sq_buf_list.buf_num = buf_num;
+
+ if (!parent_queue_mgr->parent_sq_buf_list.buflist) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Allocate QueuMgr buf list failed out of memory");
+ goto free_parent_queue;
+ }
+ memset(parent_queue_mgr->parent_sq_buf_list.buflist, 0, buf_num * sizeof(struct buff_list));
+
+ for (alloc_idx = 0; alloc_idx < buf_num; alloc_idx++) {
+ parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].vaddr =
+ kmalloc(parent_queue_mgr->parent_sq_buf_list.buf_size, GFP_KERNEL);
+ if (!parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].vaddr)
+ goto free_parent_queue;
+
+ memset(parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].vaddr, 0,
+ parent_queue_mgr->parent_sq_buf_list.buf_size);
+
+ parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].paddr =
+ pci_map_single(hba->pci_dev,
+ parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].vaddr,
+ parent_queue_mgr->parent_sq_buf_list.buf_size,
+ DMA_BIDIRECTIONAL);
+ paddr = parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].paddr;
+ if (pci_dma_mapping_error(hba->pci_dev, paddr)) {
+ parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].paddr = 0;
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[err]Map QueuMgr address failed");
+
+ goto free_parent_queue;
+ }
+ }
+
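+	/* Carve each rport's parent context (virtual address and DMA address)
+	 * out of the huge buffers allocated above
+	 */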
+ for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
+ cur_buf_idx = index / buf_cnt_perhugebuf;
+ cur_buf_offset = prt_ctx_size * (index % buf_cnt_perhugebuf);
+
+ parent_queue_mgr->parent_queue[index].parent_ctx.parent_ctx =
+ parent_queue_mgr->parent_sq_buf_list.buflist[cur_buf_idx].vaddr +
+ cur_buf_offset;
+ parent_queue_mgr->parent_queue[index].parent_ctx.parent_ctx_addr =
+ parent_queue_mgr->parent_sq_buf_list.buflist[cur_buf_idx].paddr +
+ cur_buf_offset;
+ }
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
+ "[EVENT]Allocate bufnum:%u,buf_total_size:%u", buf_num, buf_total_size);
+
+ return RETURN_OK;
+
+free_parent_queue:
+ spfc_free_parent_queue_mgr(hba);
+ return UNF_RETURN_ERROR;
+}
+
+static void spfc_rlease_all_wqe_pages(struct spfc_hba_info *hba)
+{
+ u32 index;
+ struct spfc_wqe_page *wpg = NULL;
+
+ FC_CHECK_RETURN_VOID((hba));
+
+ wpg = hba->sq_wpg_pool.wpg_pool_addr;
+
+ for (index = 0; index < hba->sq_wpg_pool.wpg_cnt; index++) {
+ if (wpg->wpg_addr) {
+ dma_pool_free(hba->sq_wpg_pool.wpg_dma_pool,
+ wpg->wpg_addr, wpg->wpg_phy_addr);
+ wpg->wpg_addr = NULL;
+ wpg->wpg_phy_addr = 0;
+ }
+
+ wpg++;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port[%u] free total %u wqepages", hba->port_index,
+ index);
+}
+
+u32 spfc_alloc_parent_sq_wqe_page_pool(void *handle)
+{
+ u32 index = 0;
+ struct spfc_sq_wqepage_pool *wpg_pool = NULL;
+ struct spfc_wqe_page *wpg = NULL;
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ wpg_pool = &hba->sq_wpg_pool;
+
+ INIT_LIST_HEAD(&wpg_pool->list_free_wpg_pool);
+ spin_lock_init(&wpg_pool->wpg_pool_lock);
+ atomic_set(&wpg_pool->wpg_in_use, 0);
+
+ /* Calculate the number of Wqe Page required in the pool */
+ wpg_pool->wpg_size = wqe_page_size;
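+	/* Reserve SPFC_MIN_WP_NUM pages for each shared SQ, plus enough pages
+	 * to hold one SQE for every exchange of this port
+	 */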
+ wpg_pool->wpg_cnt = SPFC_MIN_WP_NUM * SPFC_MAX_SSQ_NUM +
+ ((hba->exi_count * SPFC_SQE_SIZE) / wpg_pool->wpg_size);
+ wpg_pool->wqe_per_wpg = wpg_pool->wpg_size / SPFC_SQE_SIZE;
+
+	/* Create DMA pool */
+ wpg_pool->wpg_dma_pool = dma_pool_create("spfc_wpg_pool",
+ &hba->pci_dev->dev,
+ wpg_pool->wpg_size,
+ SPFC_SQE_SIZE, 0);
+ if (!wpg_pool->wpg_dma_pool) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Cannot allocate SQ WqePage DMA pool");
+
+ goto out_create_dma_pool_err;
+ }
+
+	/* Allocate an array to record all WqePage addresses */
+ wpg_pool->wpg_pool_addr = (struct spfc_wqe_page *)vmalloc(wpg_pool->wpg_cnt *
+ sizeof(struct spfc_wqe_page));
+ if (!wpg_pool->wpg_pool_addr) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Allocate SQ WqePageAddr array failed");
+
+ goto out_alloc_wpg_array_err;
+ }
+ wpg = wpg_pool->wpg_pool_addr;
+ memset(wpg, 0, wpg_pool->wpg_cnt * sizeof(struct spfc_wqe_page));
+
+ for (index = 0; index < wpg_pool->wpg_cnt; index++) {
+ wpg->wpg_addr = dma_pool_alloc(wpg_pool->wpg_dma_pool, GFP_KERNEL,
+ (u64 *)&wpg->wpg_phy_addr);
+ if (!wpg->wpg_addr) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_ERR, "[err]Dma pool allocated failed");
+ break;
+ }
+
+ /* To ensure security, clear the memory */
+ memset(wpg->wpg_addr, 0, wpg_pool->wpg_size);
+
+ /* Add to the idle linked list */
+ INIT_LIST_HEAD(&wpg->entry_wpg);
+ list_add_tail(&wpg->entry_wpg, &wpg_pool->list_free_wpg_pool);
+
+ wpg++;
+ }
+ /* ALL allocated successfully */
+ if (wpg_pool->wpg_cnt == index)
+ return RETURN_OK;
+
+ spfc_rlease_all_wqe_pages(hba);
+ vfree(wpg_pool->wpg_pool_addr);
+ wpg_pool->wpg_pool_addr = NULL;
+
+out_alloc_wpg_array_err:
+ dma_pool_destroy(wpg_pool->wpg_dma_pool);
+ wpg_pool->wpg_dma_pool = NULL;
+
+out_create_dma_pool_err:
+ return UNF_RETURN_ERROR;
+}
+
+void spfc_free_parent_sq_wqe_page_pool(void *handle)
+{
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VOID((handle));
+ hba = (struct spfc_hba_info *)handle;
+ spfc_rlease_all_wqe_pages(hba);
+ hba->sq_wpg_pool.wpg_cnt = 0;
+
+ if (hba->sq_wpg_pool.wpg_pool_addr) {
+ vfree(hba->sq_wpg_pool.wpg_pool_addr);
+ hba->sq_wpg_pool.wpg_pool_addr = NULL;
+ }
+
+ dma_pool_destroy(hba->sq_wpg_pool.wpg_dma_pool);
+ hba->sq_wpg_pool.wpg_dma_pool = NULL;
+}
+
+static u32 spfc_parent_sq_ring_direct_wqe_doorbell(struct spfc_parent_ssq_info *sq, u8 *direct_wqe)
+{
+ u32 ret = RETURN_OK;
+ int ravl;
+ u16 pmsn;
+ u64 queue_hdr_db_val;
+ struct spfc_hba_info *hba;
+
+ hba = (struct spfc_hba_info *)sq->hba;
+ pmsn = sq->last_cmsn;
+
+ if (sq->cache_id == INVALID_VALUE32) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]SQ(0x%x) invalid cid", sq->context_id);
+ return RETURN_ERROR;
+ }
+ /* Fill Doorbell Record */
+ queue_hdr_db_val = sq->queue_header->door_bell_record;
+ queue_hdr_db_val &= (u64)(~(0xFFFFFFFF));
+ queue_hdr_db_val |= (u64)((u64)pmsn << UNF_SHIFT_16 | pmsn);
+ sq->queue_header->door_bell_record =
+ cpu_to_be64(queue_hdr_db_val);
+
+ ravl = cqm_ring_direct_wqe_db_fc(hba->dev_handle, SERVICE_T_FC, direct_wqe);
+ if (unlikely(ravl != CQM_SUCCESS)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]SQ(0x%x) send DB failed", sq->context_id);
+
+ ret = RETURN_ERROR;
+ }
+
+ atomic_inc(&sq->sq_db_cnt);
+
+ return ret;
+}
+
+u32 spfc_parent_sq_ring_doorbell(struct spfc_parent_ssq_info *sq, u8 qos_level, u32 c)
+{
+ u32 ret = RETURN_OK;
+ int ravl;
+ u16 pmsn;
+ u8 pmsn_lo;
+ u8 pmsn_hi;
+ u64 db_val_qw;
+ struct spfc_hba_info *hba;
+ struct spfc_parent_sq_db door_bell;
+
+ hba = (struct spfc_hba_info *)sq->hba;
+ pmsn = sq->last_cmsn;
+ /* Obtain the low 8 Bit of PMSN */
+ pmsn_lo = (u8)(pmsn & SPFC_PMSN_MASK);
+ /* Obtain the high 8 Bit of PMSN */
+ pmsn_hi = (u8)((pmsn >> UNF_SHIFT_8) & SPFC_PMSN_MASK);
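+	/* The low byte of the PMSN is passed with the hardware doorbell call;
+	 * the high byte is carried in wd1.pi_hi of the doorbell payload
+	 */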
+ door_bell.wd0.service_type = SPFC_LSW(sq->service_type);
+ door_bell.wd0.cos = 0;
+	/* c = 0: data type, c = 1: control type; the two types are handled differently in MQM */
+ door_bell.wd0.c = c;
+ door_bell.wd0.arm = SPFC_DB_ARM_DISABLE;
+ door_bell.wd0.cntx_size = SPFC_CNTX_SIZE_T_256B;
+ door_bell.wd0.xid = sq->context_id;
+ door_bell.wd1.sm_data = sq->cache_id;
+ door_bell.wd1.qid = sq->sq_queue_id;
+ door_bell.wd1.pi_hi = (u32)pmsn_hi;
+
+ if (sq->cache_id == INVALID_VALUE32) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]SQ(0x%x) invalid cid", sq->context_id);
+ return UNF_RETURN_ERROR;
+ }
+ /* Fill Doorbell Record */
+ db_val_qw = sq->queue_header->door_bell_record;
+ db_val_qw &= (u64)(~(SPFC_DB_VAL_MASK));
+ db_val_qw |= (u64)((u64)pmsn << UNF_SHIFT_16 | pmsn);
+ sq->queue_header->door_bell_record = cpu_to_be64(db_val_qw);
+
+ /* ring doorbell */
+ db_val_qw = *(u64 *)&door_bell;
+ ravl = cqm3_ring_hardware_db_fc(hba->dev_handle, SERVICE_T_FC, pmsn_lo,
+ (qos_level & SPFC_QOS_LEVEL_MASK),
+ db_val_qw);
+ if (unlikely(ravl != CQM_SUCCESS)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]SQ(0x%x) send DB(0x%llx) failed",
+ sq->context_id, db_val_qw);
+
+ ret = UNF_RETURN_ERROR;
+ }
+
+ /* Doorbell success counter */
+ atomic_inc(&sq->sq_db_cnt);
+
+ return ret;
+}
+
+u32 spfc_direct_sq_enqueue(struct spfc_parent_ssq_info *ssq, struct spfc_sqe *io_sqe, u8 wqe_type)
+{
+ u32 ret = RETURN_OK;
+ u32 msn_wd = INVALID_VALUE32;
+ u16 link_wqe_msn = 0;
+ ulong flag = 0;
+ struct spfc_wqe_page *tail_wpg = NULL;
+ struct spfc_sqe *sqe_in_wp = NULL;
+ struct spfc_linkwqe *link_wqe = NULL;
+ struct spfc_linkwqe *link_wqe_last_part = NULL;
+ u64 wqe_gpa;
+ struct spfc_direct_wqe_db dre_door_bell;
+
+ spin_lock_irqsave(&ssq->parent_sq_enqueue_lock, flag);
+ tail_wpg = SPFC_GET_SQ_TAIL(ssq);
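+	/* Ring-style SQ: when the page is full, refresh the link WQE at the
+	 * end of the page, wrap back to offset 0 and toggle the owner bit
+	 */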
+ if (ssq->wqe_offset == ssq->wqe_num_per_buf) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_INFO,
+ "[info]Ssq(0x%x), xid(0x%x) qid(0x%x) add wqepage at Pmsn(0x%x), sqe_minus_cqe_cnt(0x%x)",
+ ssq->sqn, ssq->context_id, ssq->sq_queue_id,
+ ssq->last_cmsn,
+ atomic_read(&ssq->sqe_minus_cqe_cnt));
+
+ link_wqe_msn = SPFC_MSN_DEC(ssq->last_cmsn);
+ link_wqe = (struct spfc_linkwqe *)spfc_get_wqe_page_entry(tail_wpg,
+ ssq->wqe_offset);
+ msn_wd = be32_to_cpu(link_wqe->val_wd1);
+ msn_wd |= ((u32)(link_wqe_msn & SPFC_MSNWD_L_MASK));
+ msn_wd |= (((u32)(link_wqe_msn & SPFC_MSNWD_H_MASK)) << UNF_SHIFT_16);
+ link_wqe->val_wd1 = cpu_to_be32(msn_wd);
+ link_wqe_last_part = (struct spfc_linkwqe *)((u8 *)link_wqe +
+ SPFC_EXTEND_WQE_OFFSET);
+ link_wqe_last_part->val_wd1 = link_wqe->val_wd1;
+ spfc_set_direct_wqe_owner_be(link_wqe, ssq->last_pi_owner);
+ ssq->wqe_offset = 0;
+ ssq->last_pi_owner = !ssq->last_pi_owner;
+ }
+ sqe_in_wp =
+ (struct spfc_sqe *)spfc_get_wqe_page_entry(tail_wpg, ssq->wqe_offset);
+ spfc_build_wqe_owner_pmsn(io_sqe, (ssq->last_pi_owner), ssq->last_cmsn);
+ SPFC_IO_STAT((struct spfc_hba_info *)ssq->hba, wqe_type);
+
+ wqe_gpa = tail_wpg->wpg_phy_addr + (ssq->wqe_offset * sizeof(struct spfc_sqe));
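+	/* The SQE records its own DMA address right-shifted by UNF_SHIFT_6,
+	 * i.e. in 64-byte units
+	 */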
+ io_sqe->wqe_gpa = (wqe_gpa >> UNF_SHIFT_6);
+
+ dre_door_bell.wd0.ddb = IWARP_FC_DDB_TYPE;
+ dre_door_bell.wd0.cos = 0;
+ dre_door_bell.wd0.c = 0;
+ dre_door_bell.wd0.pi_hi =
+ (u32)(ssq->last_cmsn >> UNF_SHIFT_12) & SPFC_DB_WD0_PI_H_MASK;
+ dre_door_bell.wd0.cntx_size = SPFC_CNTX_SIZE_T_256B;
+ dre_door_bell.wd0.xid = ssq->context_id;
+ dre_door_bell.wd1.sm_data = ssq->cache_id;
+ dre_door_bell.wd1.pi_lo = (u32)(ssq->last_cmsn & SPFC_DB_WD0_PI_L_MASK);
+ io_sqe->db_val = *(u64 *)&dre_door_bell;
+
+ spfc_convert_parent_wqe_to_big_endian(io_sqe);
+ memcpy(sqe_in_wp, io_sqe, sizeof(struct spfc_sqe));
+ spfc_set_direct_wqe_owner_be(sqe_in_wp, ssq->last_pi_owner);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_INFO,
+ "[INFO]Ssq(0x%x) xid:0x%x,qid:0x%x wqegpa:0x%llx,o:0x%x,outstandind:0x%x,pmsn:0x%x,cmsn:0x%x",
+ ssq->sqn, ssq->context_id, ssq->sq_queue_id, wqe_gpa,
+ ssq->last_pi_owner, atomic_read(&ssq->sqe_minus_cqe_cnt),
+ ssq->last_cmsn, SPFC_GET_QUEUE_CMSN(ssq));
+
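+	/* Batch doorbells: ring the hardware only once every accum_db_num WQEs */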
+ ssq->accum_wqe_cnt++;
+ if (ssq->accum_wqe_cnt == accum_db_num) {
+ ret = spfc_parent_sq_ring_direct_wqe_doorbell(ssq, (void *)sqe_in_wp);
+ if (unlikely(ret != RETURN_OK))
+ SPFC_ERR_IO_STAT((struct spfc_hba_info *)ssq->hba, wqe_type);
+ ssq->accum_wqe_cnt = 0;
+ }
+
+ ssq->wqe_offset += 1;
+ ssq->last_cmsn = SPFC_MSN_INC(ssq->last_cmsn);
+ atomic_inc(&ssq->sq_wqe_cnt);
+ atomic_inc(&ssq->sqe_minus_cqe_cnt);
+ SPFC_SQ_IO_STAT(ssq, wqe_type);
+ spin_unlock_irqrestore(&ssq->parent_sq_enqueue_lock, flag);
+ return ret;
+}
+
+u32 spfc_parent_ssq_enqueue(struct spfc_parent_ssq_info *ssq, struct spfc_sqe *io_sqe, u8 wqe_type)
+{
+ u32 ret = RETURN_OK;
+ u32 addr_wd = INVALID_VALUE32;
+ u32 msn_wd = INVALID_VALUE32;
+ u16 link_wqe_msn = 0;
+ ulong flag = 0;
+ struct spfc_wqe_page *new_wqe_page = NULL;
+ struct spfc_wqe_page *tail_wpg = NULL;
+ struct spfc_sqe *sqe_in_wp = NULL;
+ struct spfc_linkwqe *link_wqe = NULL;
+ struct spfc_linkwqe *link_wqe_last_part = NULL;
+ u32 cur_cmsn = 0;
+ u8 qos_level = (u8)io_sqe->ts_sl.cont.icmnd.info.dif_info.wd1.vpid;
+ u32 c = SPFC_DB_C_BIT_CONTROL_TYPE;
+
+ if (ssq->queue_style == SPFC_QUEUE_RING_STYLE)
+ return spfc_direct_sq_enqueue(ssq, io_sqe, wqe_type);
+
+ spin_lock_irqsave(&ssq->parent_sq_enqueue_lock, flag);
+ tail_wpg = SPFC_GET_SQ_TAIL(ssq);
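+	/* When the current WQE page is full, reclaim completed pages, allocate
+	 * a new page and chain it through the link WQE at the end of this page
+	 */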
+ if (ssq->wqe_offset == ssq->wqe_num_per_buf) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_INFO,
+ "[info]Ssq(0x%x), xid(0x%x) qid(0x%x) add wqepage at Pmsn(0x%x), WpgCnt(0x%x)",
+ ssq->sqn, ssq->context_id, ssq->sq_queue_id,
+ ssq->last_cmsn,
+ atomic_read(&ssq->wqe_page_cnt));
+ cur_cmsn = SPFC_GET_QUEUE_CMSN(ssq);
+ spfc_free_sq_wqe_page(ssq, cur_cmsn);
+ new_wqe_page = spfc_add_one_wqe_page(ssq);
+ if (unlikely(!new_wqe_page)) {
+ SPFC_ERR_IO_STAT((struct spfc_hba_info *)ssq->hba, wqe_type);
+ spin_unlock_irqrestore(&ssq->parent_sq_enqueue_lock, flag);
+ return UNF_RETURN_ERROR;
+ }
+ link_wqe = (struct spfc_linkwqe *)spfc_get_wqe_page_entry(tail_wpg,
+ ssq->wqe_offset);
+ addr_wd = SPFC_MSD(new_wqe_page->wpg_phy_addr);
+ link_wqe->next_page_addr_hi = cpu_to_be32(addr_wd);
+ addr_wd = SPFC_LSD(new_wqe_page->wpg_phy_addr);
+ link_wqe->next_page_addr_lo = cpu_to_be32(addr_wd);
+ link_wqe_msn = SPFC_MSN_DEC(ssq->last_cmsn);
+ msn_wd = be32_to_cpu(link_wqe->val_wd1);
+ msn_wd |= ((u32)(link_wqe_msn & SPFC_MSNWD_L_MASK));
+ msn_wd |= (((u32)(link_wqe_msn & SPFC_MSNWD_H_MASK)) << UNF_SHIFT_16);
+ link_wqe->val_wd1 = cpu_to_be32(msn_wd);
+ link_wqe_last_part = (struct spfc_linkwqe *)((u8 *)link_wqe +
+ SPFC_EXTEND_WQE_OFFSET);
+ link_wqe_last_part->next_page_addr_hi = link_wqe->next_page_addr_hi;
+ link_wqe_last_part->next_page_addr_lo = link_wqe->next_page_addr_lo;
+ link_wqe_last_part->val_wd1 = link_wqe->val_wd1;
+ spfc_set_sq_wqe_owner_be(link_wqe);
+ ssq->wqe_offset = 0;
+ tail_wpg = SPFC_GET_SQ_TAIL(ssq);
+ atomic_inc(&ssq->wqe_page_cnt);
+ }
+
+ spfc_build_wqe_owner_pmsn(io_sqe, !(ssq->last_pi_owner), ssq->last_cmsn);
+ SPFC_IO_STAT((struct spfc_hba_info *)ssq->hba, wqe_type);
+ spfc_convert_parent_wqe_to_big_endian(io_sqe);
+ sqe_in_wp = (struct spfc_sqe *)spfc_get_wqe_page_entry(tail_wpg, ssq->wqe_offset);
+ memcpy(sqe_in_wp, io_sqe, sizeof(struct spfc_sqe));
+ spfc_set_sq_wqe_owner_be(sqe_in_wp);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_INFO,
+ "[INFO]Ssq(0x%x) xid:0x%x,qid:0x%x wqegpa:0x%llx, qos_level:0x%x, c:0x%x",
+ ssq->sqn, ssq->context_id, ssq->sq_queue_id,
+ virt_to_phys(sqe_in_wp), qos_level, c);
+
+ ssq->accum_wqe_cnt++;
+ if (ssq->accum_wqe_cnt == accum_db_num) {
+ ret = spfc_parent_sq_ring_doorbell(ssq, qos_level, c);
+ if (unlikely(ret != RETURN_OK))
+ SPFC_ERR_IO_STAT((struct spfc_hba_info *)ssq->hba, wqe_type);
+ ssq->accum_wqe_cnt = 0;
+ }
+ ssq->wqe_offset += 1;
+ ssq->last_cmsn = SPFC_MSN_INC(ssq->last_cmsn);
+ atomic_inc(&ssq->sq_wqe_cnt);
+ atomic_inc(&ssq->sqe_minus_cqe_cnt);
+ SPFC_SQ_IO_STAT(ssq, wqe_type);
+ spin_unlock_irqrestore(&ssq->parent_sq_enqueue_lock, flag);
+ return ret;
+}
+
+u32 spfc_parent_sq_enqueue(struct spfc_parent_sq_info *sq, struct spfc_sqe *io_sqe, u16 ssqn)
+{
+ u8 wqe_type = 0;
+ struct spfc_hba_info *hba = (struct spfc_hba_info *)sq->hba;
+ struct spfc_parent_ssq_info *ssq = NULL;
+
+ if (unlikely(ssqn >= SPFC_MAX_SSQ_NUM)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Ssqn 0x%x is invalid.", ssqn);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ wqe_type = (u8)SPFC_GET_WQE_TYPE(io_sqe);
+
+ /* Serial enqueue */
+ io_sqe->ts_sl.xid = sq->context_id;
+ io_sqe->ts_sl.cid = sq->cache_id;
+ io_sqe->ts_sl.sqn = ssqn;
+
+ /* Choose SSQ */
+ ssq = &hba->parent_queue_mgr->shared_queue[ssqn].parent_ssq_info;
+
+ /* If the SQ is invalid, the wqe is discarded */
+ if (unlikely(!atomic_read(&sq->sq_valid))) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]SQ is invalid, reject wqe(0x%x)", wqe_type);
+
+ return UNF_RETURN_ERROR;
+ }
+
+	/* If heartbeat detection has failed (heart_status is false), only
+	 * control sessions are allowed to enqueue; I/O WQEs are rejected
+	 */
+ if (unlikely(!hba->heart_status && SPFC_WQE_IS_IO(io_sqe))) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[err]Heart status is false");
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (sq->need_offloaded != SPFC_NEED_DO_OFFLOAD) {
+ /* Ensure to be offloaded */
+ if (unlikely(!atomic_read(&sq->sq_cached))) {
+ SPFC_ERR_IO_STAT((struct spfc_hba_info *)sq->hba, wqe_type);
+ SPFC_HBA_STAT((struct spfc_hba_info *)sq->hba,
+ SPFC_STAT_PARENT_SQ_NOT_OFFLOADED);
+
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[err]RPort(0x%x) Session(0x%x) is not offloaded, reject wqe(0x%x)",
+ sq->rport_index, sq->context_id, wqe_type);
+
+ return UNF_RETURN_ERROR;
+ }
+ }
+
+	/* If the SQ is in the flush state, reject I/O WQEs but temporarily
+	 * allow control sessions to enqueue
+	 */
+ if (unlikely(sq->port_in_flush && SPFC_WQE_IS_IO(io_sqe))) {
+ SPFC_ERR_IO_STAT((struct spfc_hba_info *)sq->hba, wqe_type);
+ SPFC_HBA_STAT((struct spfc_hba_info *)sq->hba, SPFC_STAT_PARENT_IO_FLUSHED);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Session(0x%x) in flush, Sqn(0x%x) cmsn(0x%x), reject wqe(0x%x)",
+ sq->context_id, ssqn, SPFC_GET_QUEUE_CMSN(ssq),
+ wqe_type);
+
+ return UNF_RETURN_ERROR;
+ }
+
+	/* If the SQ is in the session deletion state and the WQE belongs to
+	 * the I/O path, return the I/O failure directly
+	 */
+ if (unlikely(sq->sq_in_sess_rst && SPFC_WQE_IS_IO(io_sqe))) {
+ SPFC_ERR_IO_STAT((struct spfc_hba_info *)sq->hba, wqe_type);
+ SPFC_HBA_STAT((struct spfc_hba_info *)sq->hba, SPFC_STAT_PARENT_IO_FLUSHED);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Session(0x%x) in session reset, reject wqe(0x%x)",
+ sq->context_id, wqe_type);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return spfc_parent_ssq_enqueue(ssq, io_sqe, wqe_type);
+}
+
+static bool spfc_msn_in_wqe_page(u32 start_msn, u32 end_msn, u32 cur_cmsn)
+{
+ bool ret = true;
+
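+	/* The MSN space wraps around, so the [start_msn, end_msn] window may
+	 * itself wrap; handle both the normal and the wrapped case
+	 */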
+ if (end_msn >= start_msn) {
+ if (cur_cmsn < start_msn || cur_cmsn > end_msn)
+ ret = false;
+ else
+ ret = true;
+ } else {
+ if (cur_cmsn > end_msn && cur_cmsn < start_msn)
+ ret = false;
+ else
+ ret = true;
+ }
+
+ return ret;
+}
+
+void spfc_free_sq_wqe_page(struct spfc_parent_ssq_info *ssq, u32 cur_cmsn)
+{
+ u16 wpg_start_cmsn = 0;
+ u16 wpg_end_cmsn = 0;
+ bool wqe_page_in_use = false;
+
+ /* If there is only zero or one Wqe Page, no release is required */
+ if (atomic_read(&ssq->wqe_page_cnt) <= SPFC_MIN_WP_NUM)
+ return;
+
+ /* Check whether the current MSN is within the MSN range covered by the
+ * WqePage
+ */
+ wpg_start_cmsn = ssq->head_start_cmsn;
+ wpg_end_cmsn = ssq->head_end_cmsn;
+ wqe_page_in_use = spfc_msn_in_wqe_page(wpg_start_cmsn, wpg_end_cmsn, cur_cmsn);
+
+ /* If the value of CMSN is within the current Wqe Page, no release is
+ * required
+ */
+ if (wqe_page_in_use)
+ return;
+
+	/* If the next WqePage is available and the CMSN is not in the current
+	 * WqePage, the current WqePage is released
+	 */
+ while (!wqe_page_in_use &&
+ (atomic_read(&ssq->wqe_page_cnt) > SPFC_MIN_WP_NUM)) {
+ /* Free WqePage */
+ spfc_free_head_wqe_page(ssq);
+
+ /* Obtain the start MSN of the next WqePage */
+ wpg_start_cmsn = SPFC_MSN_INC(wpg_end_cmsn);
+
+ /* obtain the end MSN of the next WqePage */
+ wpg_end_cmsn =
+ SPFC_GET_WP_END_CMSN(wpg_start_cmsn, ssq->wqe_num_per_buf);
+
+ /* Set new MSN range */
+ ssq->head_start_cmsn = wpg_start_cmsn;
+ ssq->head_end_cmsn = wpg_end_cmsn;
+ cur_cmsn = SPFC_GET_QUEUE_CMSN(ssq);
+ /* Check whether the current MSN is within the MSN range covered
+ * by the WqePage
+ */
+ wqe_page_in_use = spfc_msn_in_wqe_page(wpg_start_cmsn, wpg_end_cmsn, cur_cmsn);
+ }
+}
+
+/*
+ *Function Name       : spfc_update_sq_wqe_completion_stat
+ *Function Description: Update the completion statistics of the CQE
+ *corresponding to the WQE on the connection SQ.
+ *Input Parameters    : *ssq, *scqe
+ *Output Parameters : N/A
+ *Return Type : void
+ */
+static void spfc_update_sq_wqe_completion_stat(struct spfc_parent_ssq_info *ssq,
+ union spfc_scqe *scqe)
+{
+ struct spfc_scqe_rcv_els_gs_rsp *els_gs_rsp = NULL;
+
+ els_gs_rsp = (struct spfc_scqe_rcv_els_gs_rsp *)scqe;
+
+	/* For ELS/GS RSP CQEs, no statistics are required for intermediate
+	 * frames or for completions whose error is not passed to the CM
+	 */
+ if (unlikely(SPFC_GET_SCQE_TYPE(scqe) == SPFC_SCQE_ELS_RSP) ||
+ (SPFC_GET_SCQE_TYPE(scqe) == SPFC_SCQE_GS_RSP)) {
+ if (!els_gs_rsp->wd3.end_rsp || !SPFC_SCQE_ERR_TO_CM(scqe))
+ return;
+ }
+
+	/* When the SQ statistics are updated, the PlogiAcc or PlogiAccSts that
+	 * is implicitly unloaded will enter here, and one more CQE count is
+	 * added
+	 */
+ atomic_inc(&ssq->sq_cqe_cnt);
+ atomic_dec(&ssq->sqe_minus_cqe_cnt);
+ SPFC_SQ_IO_STAT(ssq, SPFC_GET_SCQE_TYPE(scqe));
+}
+
+/*
+ *Function Name : spfc_reclaim_sq_wqe_page
+ *Function Description: Reclaim the Wqe Page that has been used up in the Linked
+ * List SQ.
+ *Input Parameters : *handle,
+ * *scqe
+ *Output Parameters : N/A
+ *Return Type : u32
+ */
+u32 spfc_reclaim_sq_wqe_page(void *handle, union spfc_scqe *scqe)
+{
+ u32 ret = RETURN_OK;
+ u32 cur_cmsn = 0;
+ u32 sqn = INVALID_VALUE32;
+ struct spfc_parent_ssq_info *ssq = NULL;
+ struct spfc_parent_shared_queue_info *parent_queue_info = NULL;
+ struct spfc_hba_info *hba = NULL;
+ ulong flag = 0;
+
+ hba = (struct spfc_hba_info *)handle;
+ sqn = SPFC_GET_SCQE_SQN(scqe);
+ if (sqn >= SPFC_MAX_SSQ_NUM) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) do not have sqn: 0x%x",
+ hba->port_cfg.port_id, sqn);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ parent_queue_info = &hba->parent_queue_mgr->shared_queue[sqn];
+ ssq = &parent_queue_info->parent_ssq_info;
+ /* If there is only zero or one Wqe Page, no release is required */
+ if (atomic_read(&ssq->wqe_page_cnt) <= SPFC_MIN_WP_NUM) {
+ spfc_update_sq_wqe_completion_stat(ssq, scqe);
+ return RETURN_OK;
+ }
+
+ spin_lock_irqsave(&ssq->parent_sq_enqueue_lock, flag);
+ cur_cmsn = SPFC_GET_QUEUE_CMSN(ssq);
+ spfc_free_sq_wqe_page(ssq, cur_cmsn);
+ spin_unlock_irqrestore(&ssq->parent_sq_enqueue_lock, flag);
+
+ spfc_update_sq_wqe_completion_stat(ssq, scqe);
+
+ return ret;
+}
+
+u32 spfc_root_cmdq_enqueue(void *handle, union spfc_cmdqe *cmdqe, u16 cmd_len)
+{
+#define SPFC_ROOTCMDQ_TIMEOUT_MS 3000
+ u8 wqe_type = 0;
+ int cmq_ret = 0;
+ struct sphw_cmd_buf *cmd_buf = NULL;
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ wqe_type = (u8)cmdqe->common.wd0.task_type;
+ SPFC_IO_STAT(hba, wqe_type);
+
+ cmd_buf = sphw_alloc_cmd_buf(hba->dev_handle);
+ if (!cmd_buf) {
+ SPFC_ERR_IO_STAT(hba, wqe_type);
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) CqmHandle(0x%p) allocate cmdq buffer failed",
+ hba->port_cfg.port_id, hba->dev_handle);
+
+ return UNF_RETURN_ERROR;
+ }
+
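+	/* Copy the command into the CMDQ buffer and convert it to big endian
+	 * before sending it asynchronously on the FC channel
+	 */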
+ memcpy(cmd_buf->buf, cmdqe, cmd_len);
+ spfc_cpu_to_big32(cmd_buf->buf, cmd_len);
+ cmd_buf->size = cmd_len;
+
+ cmq_ret = sphw_cmdq_async(hba->dev_handle, COMM_MOD_FC, 0, cmd_buf, SPHW_CHANNEL_FC);
+
+ if (cmq_ret != RETURN_OK) {
+ sphw_free_cmd_buf(hba->dev_handle, cmd_buf);
+ SPFC_ERR_IO_STAT(hba, wqe_type);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) CqmHandle(0x%p) send buff clear cmnd failed(0x%x)",
+ hba->port_cfg.port_id, hba->dev_handle, cmq_ret);
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+struct spfc_parent_queue_info *
+spfc_find_parent_queue_info_by_pkg(void *handle, struct unf_frame_pkg *pkg)
+{
+ u32 rport_index = 0;
+ struct spfc_parent_queue_info *parent_queue_info = NULL;
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ rport_index = pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX];
+
+ if (unlikely(rport_index >= UNF_SPFC_MAXRPORT_NUM)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[warn]Port(0x%x) send pkg sid_did(0x%x_0x%x), but uplevel allocate invalid rport index: 0x%x",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did, rport_index);
+
+ return NULL;
+ }
+
+ /* parent -->> session */
+ parent_queue_info = &hba->parent_queue_mgr->parent_queue[rport_index];
+
+ return parent_queue_info;
+}
+
+struct spfc_parent_queue_info *spfc_find_parent_queue_info_by_id(struct spfc_hba_info *hba,
+ u32 local_id, u32 remote_id)
+{
+ u32 index = 0;
+ ulong flag = 0;
+ struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
+ struct spfc_parent_queue_info *parent_queue_info = NULL;
+ spinlock_t *prtq_state_lock = NULL;
+ u32 lport_id;
+ u32 rport_id;
+
+ parent_queue_mgr = hba->parent_queue_mgr;
+ if (!parent_queue_mgr)
+ return NULL;
+
+ /* rport_number -->> parent_number -->> session_number */
+ for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
+ prtq_state_lock = &parent_queue_mgr->parent_queue[index].parent_queue_state_lock;
+ lport_id = parent_queue_mgr->parent_queue[index].parent_sq_info.local_port_id;
+ rport_id = parent_queue_mgr->parent_queue[index].parent_sq_info.remote_port_id;
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ /* local_id & remote_id & offload */
+ if (local_id == lport_id && remote_id == rport_id &&
+ parent_queue_mgr->parent_queue[index].offload_state ==
+ SPFC_QUEUE_STATE_OFFLOADED) {
+ parent_queue_info = &parent_queue_mgr->parent_queue[index];
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ return parent_queue_info;
+ }
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ }
+
+ return NULL;
+}
+
+struct spfc_parent_queue_info *spfc_find_offload_parent_queue(void *handle, u32 local_id,
+ u32 remote_id, u32 rport_index)
+{
+ u32 index = 0;
+ ulong flag = 0;
+ struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
+ struct spfc_parent_queue_info *parent_queue_info = NULL;
+ struct spfc_hba_info *hba = NULL;
+ spinlock_t *prtq_state_lock = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ parent_queue_mgr = hba->parent_queue_mgr;
+ if (!parent_queue_mgr)
+ return NULL;
+
+ for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
+ if (rport_index == index)
+ continue;
+ prtq_state_lock = &parent_queue_mgr->parent_queue[index].parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ if (local_id == parent_queue_mgr->parent_queue[index]
+ .parent_sq_info.local_port_id &&
+ remote_id == parent_queue_mgr->parent_queue[index]
+ .parent_sq_info.remote_port_id &&
+ parent_queue_mgr->parent_queue[index].offload_state !=
+ SPFC_QUEUE_STATE_FREE &&
+ parent_queue_mgr->parent_queue[index].offload_state !=
+ SPFC_QUEUE_STATE_INITIALIZED) {
+ parent_queue_info = &parent_queue_mgr->parent_queue[index];
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ return parent_queue_info;
+ }
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ }
+
+ return NULL;
+}
+
+struct spfc_parent_sq_info *spfc_find_parent_sq_by_pkg(void *handle, struct unf_frame_pkg *pkg)
+{
+ struct spfc_parent_queue_info *parent_queue_info = NULL;
+ struct cqm_qpc_mpt *cqm_parent_ctxt_obj = NULL;
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ parent_queue_info = spfc_find_parent_queue_info_by_pkg(hba, pkg);
+ if (unlikely(!parent_queue_info)) {
+ parent_queue_info = spfc_find_parent_queue_info_by_id(hba,
+ pkg->frame_head.csctl_sid &
+ UNF_NPORTID_MASK,
+ pkg->frame_head.rctl_did &
+ UNF_NPORTID_MASK);
+ if (!parent_queue_info) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[err]Port(0x%x) send pkg sid_did(0x%x_0x%x), get a null parent queue information",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+
+ return NULL;
+ }
+ }
+
+ cqm_parent_ctxt_obj = (parent_queue_info->parent_ctx.cqm_parent_ctx_obj);
+ if (unlikely(!cqm_parent_ctxt_obj)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[err]Port(0x%x) send pkg sid_did(0x%x_0x%x) with this rport has not alloc parent sq information",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+
+ return NULL;
+ }
+
+ return &parent_queue_info->parent_sq_info;
+}
+
+u32 spfc_check_all_parent_queue_free(struct spfc_hba_info *hba)
+{
+ u32 index = 0;
+ ulong flag = 0;
+ struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
+ spinlock_t *prtq_state_lock = NULL;
+
+ parent_queue_mgr = hba->parent_queue_mgr;
+ if (!parent_queue_mgr) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[err]Port(0x%x) get a null parent queue mgr",
+ hba->port_cfg.port_id);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
+ prtq_state_lock = &parent_queue_mgr->parent_queue[index].parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ if (parent_queue_mgr->parent_queue[index].offload_state != SPFC_QUEUE_STATE_FREE) {
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ return UNF_RETURN_ERROR;
+ }
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ }
+
+ return RETURN_OK;
+}
+
+void spfc_flush_specific_scq(struct spfc_hba_info *hba, u32 index)
+{
+	/* The SCQ tasklet (soft interrupt) is scheduled and must finish
+	 * processing within a 2-second timeout period
+	 */
+ struct spfc_scq_info *scq_info = NULL;
+ u32 flush_done_time = 0;
+
+ scq_info = &hba->scq_info[index];
+ atomic_set(&scq_info->flush_stat, SPFC_QUEUE_FLUSH_DOING);
+ tasklet_schedule(&scq_info->tasklet);
+
+	/* Wait for a maximum of 2 seconds. If the SCQ soft interrupt is not
+	 * scheduled within 2 seconds, only a timeout warning is reported
+	 */
+ while ((atomic_read(&scq_info->flush_stat) != SPFC_QUEUE_FLUSH_DONE) &&
+ (flush_done_time < SPFC_QUEUE_FLUSH_WAIT_TIMEOUT_MS)) {
+ msleep(SPFC_QUEUE_FLUSH_WAIT_MS);
+ flush_done_time += SPFC_QUEUE_FLUSH_WAIT_MS;
+ tasklet_schedule(&scq_info->tasklet);
+ }
+
+ if (atomic_read(&scq_info->flush_stat) != SPFC_QUEUE_FLUSH_DONE) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
+ "[warn]Port(0x%x) special scq(0x%x) flush timeout",
+ hba->port_cfg.port_id, index);
+ }
+}
+
+static void spfc_flush_cmd_scq(struct spfc_hba_info *hba)
+{
+ u32 index = 0;
+
+ for (index = SPFC_CMD_SCQN_START; index < SPFC_SESSION_SCQ_NUM;
+ index += SPFC_SCQS_PER_SESSION) {
+ spfc_flush_specific_scq(hba, index);
+ }
+}
+
+static void spfc_flush_sts_scq(struct spfc_hba_info *hba)
+{
+ u32 index = 0;
+
+ /* for each STS SCQ */
+ for (index = SPFC_STS_SCQN_START; index < SPFC_SESSION_SCQ_NUM;
+ index += SPFC_SCQS_PER_SESSION) {
+ spfc_flush_specific_scq(hba, index);
+ }
+}
+
+static void spfc_flush_all_scq(struct spfc_hba_info *hba)
+{
+ spfc_flush_cmd_scq(hba);
+ spfc_flush_sts_scq(hba);
+ /* Flush Default SCQ */
+ spfc_flush_specific_scq(hba, SPFC_SESSION_SCQ_NUM);
+}
+
+void spfc_wait_all_queues_empty(struct spfc_hba_info *hba)
+{
+ spfc_flush_all_scq(hba);
+}
+
+void spfc_set_rport_flush_state(void *handle, bool in_flush)
+{
+ u32 index = 0;
+ ulong flag = 0;
+ struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ parent_queue_mgr = hba->parent_queue_mgr;
+ if (!parent_queue_mgr) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) parent queue manager is empty",
+ hba->port_cfg.port_id);
+ return;
+ }
+
+ /*
+ * for each HBA's R_Port(SQ),
+ * set state with been flushing or flush done
+ */
+ for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
+ spin_lock_irqsave(&parent_queue_mgr->parent_queue[index]
+ .parent_sq_info.parent_sq_enqueue_lock, flag);
+ if (parent_queue_mgr->parent_queue[index].offload_state != SPFC_QUEUE_STATE_FREE) {
+ parent_queue_mgr->parent_queue[index]
+ .parent_sq_info.port_in_flush = in_flush;
+ }
+ spin_unlock_irqrestore(&parent_queue_mgr->parent_queue[index]
+ .parent_sq_info.parent_sq_enqueue_lock, flag);
+ }
+}
+
+u32 spfc_clear_fetched_sq_wqe(void *handle)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ union spfc_cmdqe cmdqe;
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
+
+ hba = (struct spfc_hba_info *)handle;
+	/*
+	 * After link down, WQEs that the hardware has already fetched from the
+	 * SQ can no longer be flushed through the normal SQ path. Send a
+	 * BUFFER_CLEAR command through the root CMDQ so the hardware releases
+	 * all fetched WQEs in this port's exchange range before continuing.
+	 */
+ memset(&cmdqe, 0, sizeof(union spfc_cmdqe));
+ spfc_build_cmdqe_common(&cmdqe, SPFC_TASK_T_BUFFER_CLEAR, 0);
+ cmdqe.buffer_clear.wd1.rx_id_start = hba->exi_base;
+ cmdqe.buffer_clear.wd1.rx_id_end = hba->exi_base + hba->exi_count - 1;
+ cmdqe.buffer_clear.scqn = hba->default_scqn;
+
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
+ "[info]Port(0x%x) start clear all fetched wqe in start(0x%x) - end(0x%x) scqn(0x%x) stage(0x%x)",
+ hba->port_cfg.port_id, cmdqe.buffer_clear.wd1.rx_id_start,
+ cmdqe.buffer_clear.wd1.rx_id_end, cmdqe.buffer_clear.scqn,
+ hba->queue_set_stage);
+
+ /* Send BUFFER_CLEAR command via ROOT CMDQ */
+ ret = spfc_root_cmdq_enqueue(hba, &cmdqe, sizeof(cmdqe.buffer_clear));
+
+ return ret;
+}
+
+u32 spfc_clear_pending_sq_wqe(void *handle)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 cmdqe_len = 0;
+ ulong flag = 0;
+ struct spfc_parent_ssq_info *ssq_info = NULL;
+ union spfc_cmdqe cmdqe;
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ memset(&cmdqe, 0, sizeof(union spfc_cmdqe));
+ spfc_build_cmdqe_common(&cmdqe, SPFC_TASK_T_FLUSH_SQ, 0);
+ cmdqe.flush_sq.wd0.wqe_type = SPFC_TASK_T_FLUSH_SQ;
+ cmdqe.flush_sq.wd1.scqn = SPFC_LSW(hba->default_scqn);
+ cmdqe.flush_sq.wd1.port_id = hba->port_index;
+
+ ssq_info = &hba->parent_queue_mgr->shared_queue[ARRAY_INDEX_0].parent_ssq_info;
+
+ spin_lock_irqsave(&ssq_info->parent_sq_enqueue_lock, flag);
+ cmdqe.flush_sq.wd3.first_sq_xid = ssq_info->context_id;
+ spin_unlock_irqrestore(&ssq_info->parent_sq_enqueue_lock, flag);
+ cmdqe.flush_sq.wd0.entry_count = SPFC_MAX_SSQ_NUM;
+ cmdqe.flush_sq.wd3.sqqid_start_per_session = SPFC_SQ_QID_START_PER_QPC;
+ cmdqe.flush_sq.wd3.sqcnt_per_session = SPFC_SQ_NUM_PER_QPC;
+ cmdqe.flush_sq.wd1.last_wqe = 1;
+
+ /* Clear pending Queue */
+ cmdqe_len = (u32)(sizeof(cmdqe.flush_sq));
+ ret = spfc_root_cmdq_enqueue(hba, &cmdqe, (u16)cmdqe_len);
+
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
+ "[info]Port(0x%x) clear total 0x%x SQ in this CMDQE(last=%u), stage (0x%x)",
+ hba->port_cfg.port_id, SPFC_MAX_SSQ_NUM,
+ cmdqe.flush_sq.wd1.last_wqe, hba->queue_set_stage);
+
+ return ret;
+}
+
+u32 spfc_wait_queue_set_flush_done(struct spfc_hba_info *hba)
+{
+ u32 flush_done_time = 0;
+ u32 ret = RETURN_OK;
+
+ while ((hba->queue_set_stage != SPFC_QUEUE_SET_STAGE_FLUSHDONE) &&
+ (flush_done_time < SPFC_QUEUE_FLUSH_WAIT_TIMEOUT_MS)) {
+ msleep(SPFC_QUEUE_FLUSH_WAIT_MS);
+ flush_done_time += SPFC_QUEUE_FLUSH_WAIT_MS;
+ }
+
+ if (hba->queue_set_stage != SPFC_QUEUE_SET_STAGE_FLUSHDONE) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
+ "[warn]Port(0x%x) queue sets flush timeout with stage(0x%x)",
+ hba->port_cfg.port_id, hba->queue_set_stage);
+
+ ret = UNF_RETURN_ERROR;
+ }
+
+ return ret;
+}
+
+void spfc_disable_all_scq_schedule(struct spfc_hba_info *hba)
+{
+ struct spfc_scq_info *scq_info = NULL;
+ u32 index = 0;
+
+ for (index = 0; index < SPFC_TOTAL_SCQ_NUM; index++) {
+ scq_info = &hba->scq_info[index];
+ tasklet_disable(&scq_info->tasklet);
+ }
+}
+
+void spfc_disable_queues_dispatch(struct spfc_hba_info *hba)
+{
+ spfc_disable_all_scq_schedule(hba);
+}
+
+void spfc_enable_all_scq_schedule(struct spfc_hba_info *hba)
+{
+ struct spfc_scq_info *scq_info = NULL;
+ u32 index = 0;
+
+ for (index = 0; index < SPFC_TOTAL_SCQ_NUM; index++) {
+ scq_info = &hba->scq_info[index];
+ tasklet_enable(&scq_info->tasklet);
+ }
+}
+
+void spfc_enalbe_queues_dispatch(void *handle)
+{
+ spfc_enable_all_scq_schedule((struct spfc_hba_info *)handle);
+}
+
+/*
+ *Function Name : spfc_clear_els_srq
+ *Function Description: When the port is removed, the resources related to
+ *the ELS SRQ are released.
+ *Input Parameters : *hba
+ *Output Parameters : N/A
+ *Return Type : void
+ */
+void spfc_clear_els_srq(struct spfc_hba_info *hba)
+{
+#define SPFC_WAIT_CLR_SRQ_CTX_MS 500
+#define SPFC_WAIT_CLR_SRQ_CTX_LOOP_TIMES 60
+
+ u32 index = 0;
+ ulong flag = 0;
+ struct spfc_srq_info *srq_info = NULL;
+
+ srq_info = &hba->els_srq_info;
+
+ spin_lock_irqsave(&srq_info->srq_spin_lock, flag);
+ if (!srq_info->enable || srq_info->state == SPFC_CLEAN_DOING) {
+ spin_unlock_irqrestore(&srq_info->srq_spin_lock, flag);
+
+ return;
+ }
+ srq_info->enable = false;
+ srq_info->state = SPFC_CLEAN_DOING;
+ spin_unlock_irqrestore(&srq_info->srq_spin_lock, flag);
+
+ spfc_send_clear_srq_cmd(hba, &hba->els_srq_info);
+
+	/* wait for uCode to clear the SRQ context; total timeout is 30s (60 x 500ms) */
+ while ((srq_info->state != SPFC_CLEAN_DONE) &&
+ (index < SPFC_WAIT_CLR_SRQ_CTX_LOOP_TIMES)) {
+ msleep(SPFC_WAIT_CLR_SRQ_CTX_MS);
+ index++;
+ }
+
+ if (srq_info->state != SPFC_CLEAN_DONE) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
+ "[warn]SPFC Port(0x%x) clear els srq timeout",
+ hba->port_cfg.port_id);
+ }
+}
+
+u32 spfc_wait_all_parent_queue_free(struct spfc_hba_info *hba)
+{
+#define SPFC_MAX_LOOP_TIMES 6000
+#define SPFC_WAIT_ONE_TIME_MS 5
+ u32 index = 0;
+ u32 ret = UNF_RETURN_ERROR;
+
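+	/* Poll every SPFC_WAIT_ONE_TIME_MS until all parent queues are free,
+	 * for at most SPFC_MAX_LOOP_TIMES iterations (30 seconds in total)
+	 */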
+ do {
+ ret = spfc_check_all_parent_queue_free(hba);
+ if (ret == RETURN_OK)
+ break;
+
+ index++;
+ msleep(SPFC_WAIT_ONE_TIME_MS);
+ } while (index < SPFC_MAX_LOOP_TIMES);
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
+ "[warn]Port(0x%x) wait all parent queue state free timeout",
+ hba->port_cfg.port_id);
+ }
+
+ return ret;
+}
+
+/*
+ *Function Name : spfc_queue_pre_process
+ *Function Description: When the port is reset or removed, the queues need
+ *			 to be preprocessed.
+ *Input Parameters : *handle,
+ * clean
+ *Output Parameters : N/A
+ *Return Type : void
+ */
+void spfc_queue_pre_process(void *handle, bool clean)
+{
+#define SPFC_WAIT_LINKDOWN_EVENT_MS 500
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ /* From port reset & port remove */
+ /* 1. Wait for 2s and wait for QUEUE to be FLUSH Done. */
+ if (spfc_wait_queue_set_flush_done(hba) != RETURN_OK) {
+ /*
+ * During the process of removing the card, if the port is
+ * disabled and the flush done is not available, the chip is
+ * powered off or the pcie link is disconnected. In this case,
+ * you can proceed with the next step.
+ */
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+ "[warn]SPFC Port(0x%x) clean queue sets timeout",
+ hba->port_cfg.port_id);
+ }
+
+ /*
+ * 2. Port remove:
+ * 2.1 free parent queue
+ * 2.2 clear & destroy ELS/SIRT SRQ
+ */
+ if (clean) {
+ if (spfc_wait_all_parent_queue_free(hba) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT,
+ UNF_WARN,
+ "[warn]SPFC Port(0x%x) free all parent queue timeout",
+ hba->port_cfg.port_id);
+ }
+
+ /* clear & than destroy ELS/SIRT SRQ */
+ spfc_clear_els_srq(hba);
+ }
+
+ msleep(SPFC_WAIT_LINKDOWN_EVENT_MS);
+
+ /*
+ * 3. The internal resources of the port chip are flush done. However,
+ * there may be residual scqe or rq in the queue. The scheduling is
+ * forcibly refreshed once.
+ */
+ spfc_wait_all_queues_empty(hba);
+
+ /* 4. Disable tasklet scheduling for upstream queues on the software
+ * layer
+ */
+ spfc_disable_queues_dispatch(hba);
+}
+
+void spfc_queue_post_process(void *hba)
+{
+ spfc_enalbe_queues_dispatch((struct spfc_hba_info *)hba);
+}
+
+/*
+ *Function Name : spfc_push_delay_sqe
+ *Function Description: Check whether the parent sq is still being deleted.
+ *			 If so, cache the sqe for delayed delivery.
+ *Input Parameters : *hba,
+ * *offload_parent_queue,
+ * *sqe,
+ * *pkg
+ *Output Parameters : N/A
+ *Return Type : u32
+ */
+u32 spfc_push_delay_sqe(void *hba,
+ struct spfc_parent_queue_info *offload_parent_queue,
+ struct spfc_sqe *sqe, struct unf_frame_pkg *pkg)
+{
+ ulong flag = 0;
+ spinlock_t *prtq_state_lock = NULL;
+
+ prtq_state_lock = &offload_parent_queue->parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ if (offload_parent_queue->offload_state != SPFC_QUEUE_STATE_INITIALIZED &&
+ offload_parent_queue->offload_state != SPFC_QUEUE_STATE_FREE) {
+ memcpy(&offload_parent_queue->parent_sq_info.delay_sqe.sqe,
+ sqe, sizeof(struct spfc_sqe));
+ offload_parent_queue->parent_sq_info.delay_sqe.start_jiff = jiffies;
+ offload_parent_queue->parent_sq_info.delay_sqe.time_out =
+ pkg->private_data[PKG_PRIVATE_XCHG_TIMEER];
+ offload_parent_queue->parent_sq_info.delay_sqe.valid = true;
+ offload_parent_queue->parent_sq_info.delay_sqe.rport_index =
+ pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX];
+ offload_parent_queue->parent_sq_info.delay_sqe.sid =
+ pkg->frame_head.csctl_sid & UNF_NPORTID_MASK;
+ offload_parent_queue->parent_sq_info.delay_sqe.did =
+ pkg->frame_head.rctl_did & UNF_NPORTID_MASK;
+ offload_parent_queue->parent_sq_info.delay_sqe.xid =
+ sqe->ts_sl.xid;
+ offload_parent_queue->parent_sq_info.delay_sqe.ssqn =
+ (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) RPort(0x%x) delay send ELS, OXID(0x%x), RXID(0x%x)",
+ ((struct spfc_hba_info *)hba)->port_cfg.port_id,
+ pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX],
+ UNF_GET_OXID(pkg), UNF_GET_RXID(pkg));
+
+ return RETURN_OK;
+ }
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ return UNF_RETURN_ERROR;
+}
+
+static u32 spfc_pop_session_valid_check(struct spfc_hba_info *hba,
+ struct spfc_delay_sqe_ctrl_info *sqe_info, u32 rport_index)
+{
+ if (!sqe_info->valid)
+ return UNF_RETURN_ERROR;
+
+ if (jiffies_to_msecs(jiffies - sqe_info->start_jiff) >= sqe_info->time_out) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) pop delay enable session failed, start time 0x%llx, timeout value 0x%x",
+ hba->port_cfg.port_id, sqe_info->start_jiff,
+ sqe_info->time_out);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (rport_index >= UNF_SPFC_MAXRPORT_NUM) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) pop delay enable session failed, rport index(0x%x) is invalid",
+ hba->port_cfg.port_id, rport_index);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+/*
+ *Function Name : spfc_pop_delay_sqe
+ *Function Description: The sqe that is delayed due to the deletion of the old
+ *			 connection is sent to the root sq for processing.
+ *Input Parameters : *hba, *sqe_info
+ *Output Parameters : N/A
+ *Return Type : void
+ */
+static void spfc_pop_delay_sqe(struct spfc_hba_info *hba,
+ struct spfc_delay_sqe_ctrl_info *sqe_info)
+{
+ ulong flag;
+ u32 delay_rport_index = INVALID_VALUE32;
+ struct spfc_parent_queue_info *parent_queue = NULL;
+ enum spfc_parent_queue_state offload_state =
+ SPFC_QUEUE_STATE_DESTROYING;
+ struct spfc_delay_destroy_ctrl_info destroy_sqe_info;
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_parent_sq_info *sq_info = NULL;
+ spinlock_t *prtq_state_lock = NULL;
+
+ memset(&destroy_sqe_info, 0, sizeof(struct spfc_delay_destroy_ctrl_info));
+ delay_rport_index = sqe_info->rport_index;
+
+ /* According to the sequence, the rport index id is reported and then
+ * the sqe of the new link setup request is delivered.
+ */
+ ret = spfc_pop_session_valid_check(hba, sqe_info, delay_rport_index);
+
+ if (ret != RETURN_OK)
+ return;
+
+ parent_queue = &hba->parent_queue_mgr->parent_queue[delay_rport_index];
+ sq_info = &parent_queue->parent_sq_info;
+ prtq_state_lock = &parent_queue->parent_queue_state_lock;
+ /* Before the root sq is delivered, check the status again to
+ * ensure that the initialization status is not uninstalled. Other
+ * states are not processed and are discarded directly.
+ */
+ spin_lock_irqsave(prtq_state_lock, flag);
+ offload_state = parent_queue->offload_state;
+
+ /* Before re-enqueuing the rootsq, check whether the offload status and
+ * connection information is consistent to prevent the old request from
+ * being sent after the connection status is changed.
+ */
+ if (offload_state == SPFC_QUEUE_STATE_INITIALIZED &&
+ parent_queue->parent_sq_info.local_port_id == sqe_info->sid &&
+ parent_queue->parent_sq_info.remote_port_id == sqe_info->did &&
+ SPFC_CHECK_XID_MATCHED(parent_queue->parent_sq_info.context_id,
+ sqe_info->sqe.ts_sl.xid)) {
+ parent_queue->offload_state = SPFC_QUEUE_STATE_OFFLOADING;
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) pop up delay session enable, sqe start time 0x%llx, timeout value 0x%x, rport index 0x%x, offload state 0x%x",
+ hba->port_cfg.port_id, sqe_info->start_jiff,
+ sqe_info->time_out, delay_rport_index, offload_state);
+
+ if (spfc_parent_sq_enqueue(sq_info, &sqe_info->sqe, sqe_info->ssqn) != RETURN_OK) {
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ if (parent_queue->offload_state == SPFC_QUEUE_STATE_OFFLOADING)
+ parent_queue->offload_state = offload_state;
+
+ if (parent_queue->parent_sq_info.destroy_sqe.valid) {
+ memcpy(&destroy_sqe_info,
+ &parent_queue->parent_sq_info.destroy_sqe,
+ sizeof(struct spfc_delay_destroy_ctrl_info));
+
+ parent_queue->parent_sq_info.destroy_sqe.valid = false;
+ }
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ spfc_pop_destroy_parent_queue_sqe((void *)hba, &destroy_sqe_info);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) pop up delay session enable fail, recover offload state 0x%x",
+ hba->port_cfg.port_id, parent_queue->offload_state);
+ return;
+ }
+ } else {
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port 0x%x pop delay session enable failed, sqe start time 0x%llx, timeout value 0x%x, rport index 0x%x, offload state 0x%x",
+ hba->port_cfg.port_id, sqe_info->start_jiff,
+ sqe_info->time_out, delay_rport_index,
+ offload_state);
+ }
+}
+
+void spfc_push_destroy_parent_queue_sqe(void *hba,
+ struct spfc_parent_queue_info *offloading_parent_queue,
+ struct unf_port_info *rport_info)
+{
+ offloading_parent_queue->parent_sq_info.destroy_sqe.valid = true;
+ offloading_parent_queue->parent_sq_info.destroy_sqe.rport_index = rport_info->rport_index;
+ offloading_parent_queue->parent_sq_info.destroy_sqe.time_out =
+ SPFC_SQ_DEL_STAGE_TIMEOUT_MS;
+ offloading_parent_queue->parent_sq_info.destroy_sqe.start_jiff = jiffies;
+ offloading_parent_queue->parent_sq_info.destroy_sqe.rport_info.nport_id =
+ rport_info->nport_id;
+ offloading_parent_queue->parent_sq_info.destroy_sqe.rport_info.rport_index =
+ rport_info->rport_index;
+ offloading_parent_queue->parent_sq_info.destroy_sqe.rport_info.port_name =
+ rport_info->port_name;
+}
+
+/*
+ *Function Name : spfc_pop_destroy_parent_queue_sqe
+ *Function Description: The delete-connection sqe that was delayed because
+ *                      the connection was being torn down is sent to
+ *                      the parent sq for processing.
+ *Input Parameters : *handle, *destroy_sqe_info
+ *Output Parameters : N/A
+ *Return Type : void
+ */
+void spfc_pop_destroy_parent_queue_sqe(void *handle,
+ struct spfc_delay_destroy_ctrl_info *destroy_sqe_info)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ ulong flag;
+ u32 index = INVALID_VALUE32;
+ struct spfc_parent_queue_info *parent_queue = NULL;
+ enum spfc_parent_queue_state offload_state =
+ SPFC_QUEUE_STATE_DESTROYING;
+ struct spfc_hba_info *hba = NULL;
+ spinlock_t *prtq_state_lock = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ if (!destroy_sqe_info->valid)
+ return;
+
+ if (jiffies_to_msecs(jiffies - destroy_sqe_info->start_jiff) < destroy_sqe_info->time_out) {
+ index = destroy_sqe_info->rport_index;
+ parent_queue = &hba->parent_queue_mgr->parent_queue[index];
+ prtq_state_lock = &parent_queue->parent_queue_state_lock;
+		/* Before delivery, check the state again to ensure the queue
+		 * has not been torn down (it must still be initialized or
+		 * offloaded). Requests in any other state are not processed
+		 * and are discarded directly.
+		 */
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ offload_state = parent_queue->offload_state;
+ if (offload_state == SPFC_QUEUE_STATE_OFFLOADED ||
+ offload_state == SPFC_QUEUE_STATE_INITIALIZED) {
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port 0x%x pop up delay destroy parent sq, sqe start time 0x%llx, timeout value 0x%x, rport index 0x%x, offload state 0x%x",
+ hba->port_cfg.port_id,
+ destroy_sqe_info->start_jiff,
+ destroy_sqe_info->time_out,
+ index, offload_state);
+ ret = spfc_free_parent_resource(hba, &destroy_sqe_info->rport_info);
+ } else {
+ ret = UNF_RETURN_ERROR;
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ }
+ }
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port 0x%x pop delay destroy parent sq failed, sqe start time 0x%llx, timeout value 0x%x, rport index 0x%x, rport nport id 0x%x,offload state 0x%x",
+ hba->port_cfg.port_id, destroy_sqe_info->start_jiff,
+ destroy_sqe_info->time_out, index,
+ destroy_sqe_info->rport_info.nport_id, offload_state);
+ }
+}
+
+void spfc_free_parent_queue_info(void *handle, struct spfc_parent_queue_info *parent_queue_info)
+{
+ ulong flag = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 rport_index = INVALID_VALUE32;
+ struct spfc_hba_info *hba = NULL;
+ struct spfc_delay_sqe_ctrl_info sqe_info;
+ spinlock_t *prtq_state_lock = NULL;
+
+ memset(&sqe_info, 0, sizeof(struct spfc_delay_sqe_ctrl_info));
+ hba = (struct spfc_hba_info *)handle;
+ prtq_state_lock = &parent_queue_info->parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) begin to free parent sq, rport_index(0x%x)",
+ hba->port_cfg.port_id, parent_queue_info->parent_sq_info.rport_index);
+
+ if (parent_queue_info->offload_state == SPFC_QUEUE_STATE_FREE) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[info]Port(0x%x) duplicate free parent sq, rport_index(0x%x)",
+ hba->port_cfg.port_id,
+ parent_queue_info->parent_sq_info.rport_index);
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ return;
+ }
+
+ if (parent_queue_info->parent_sq_info.delay_sqe.valid) {
+ memcpy(&sqe_info, &parent_queue_info->parent_sq_info.delay_sqe,
+ sizeof(struct spfc_delay_sqe_ctrl_info));
+ }
+
+ rport_index = parent_queue_info->parent_sq_info.rport_index;
+
+	/* The parent context and SQ information are released here. After
+	 * initialization, the parent context and SQ information are
+	 * associated with the sq in the parent queue.
+	 */
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ spfc_free_parent_sq(hba, parent_queue_info);
+ spin_lock_irqsave(prtq_state_lock, flag);
+
+	/* Reset all queue ids to the invalid value */
+ parent_queue_info->parent_cmd_scq_info.cqm_queue_id = INVALID_VALUE32;
+ parent_queue_info->parent_sts_scq_info.cqm_queue_id = INVALID_VALUE32;
+ parent_queue_info->parent_els_srq_info.cqm_queue_id = INVALID_VALUE32;
+ parent_queue_info->offload_state = SPFC_QUEUE_STATE_FREE;
+
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+
+ UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport, UNF_PORT_RELEASE_RPORT_INDEX,
+ (void *)&rport_index);
+
+ spfc_pop_delay_sqe(hba, &sqe_info);
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[warn]Port(0x%x) free parent sq with rport_index(0x%x) failed",
+ hba->port_cfg.port_id, rport_index);
+ }
+}
+
+static void spfc_do_port_reset(struct work_struct *work)
+{
+ struct spfc_suspend_sqe_info *suspend_sqe = NULL;
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VOID(work);
+
+ suspend_sqe = container_of(work, struct spfc_suspend_sqe_info,
+ timeout_work.work);
+ hba = (struct spfc_hba_info *)suspend_sqe->hba;
+ FC_CHECK_RETURN_VOID(hba);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+		     "[info]Port(0x%x) magic num(0x%x) do port reset.",
+ hba->port_cfg.port_id, suspend_sqe->magic_num);
+
+ spfc_port_reset(hba);
+}
+
+static void
+spfc_push_sqe_suspend(void *hba, struct spfc_parent_queue_info *parent_queue,
+ struct spfc_sqe *sqe, struct unf_frame_pkg *pkg, u32 magic_num)
+{
+#define SPFC_SQ_NOP_TIMEOUT_MS 1000
+ ulong flag = 0;
+ u32 sqn_base;
+ struct spfc_parent_sq_info *sq = NULL;
+ struct spfc_suspend_sqe_info *suspend_sqe = NULL;
+
+ sq = &parent_queue->parent_sq_info;
+ suspend_sqe =
+ kmalloc(sizeof(struct spfc_suspend_sqe_info), GFP_ATOMIC);
+ if (!suspend_sqe) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[err]alloc suspend sqe memory failed");
+ return;
+ }
+ memset(suspend_sqe, 0, sizeof(struct spfc_suspend_sqe_info));
+ memcpy(&suspend_sqe->sqe, sqe, sizeof(struct spfc_sqe));
+ suspend_sqe->magic_num = magic_num;
+ suspend_sqe->old_offload_sts = sq->need_offloaded;
+ suspend_sqe->hba = sq->hba;
+
+ if (pkg) {
+ memcpy(&suspend_sqe->pkg, pkg, sizeof(struct unf_frame_pkg));
+ } else {
+ sqn_base = sq->sqn_base;
+ suspend_sqe->pkg.private_data[PKG_PRIVATE_XCHG_SSQ_INDEX] =
+ sqn_base;
+ }
+
+ INIT_DELAYED_WORK(&suspend_sqe->timeout_work, spfc_do_port_reset);
+ INIT_LIST_HEAD(&suspend_sqe->list_sqe_entry);
+
+ spin_lock_irqsave(&parent_queue->parent_queue_state_lock, flag);
+ list_add_tail(&suspend_sqe->list_sqe_entry, &sq->suspend_sqe_list);
+ spin_unlock_irqrestore(&parent_queue->parent_queue_state_lock, flag);
+
+ (void)queue_delayed_work(((struct spfc_hba_info *)hba)->work_queue,
+ &suspend_sqe->timeout_work,
+ (ulong)msecs_to_jiffies((u32)SPFC_SQ_NOP_TIMEOUT_MS));
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+		     "[info]Port(0x%x) magic num(0x%x) suspend sqe",
+ ((struct spfc_hba_info *)hba)->port_cfg.port_id, magic_num);
+}
+
+u32 spfc_pop_suspend_sqe(void *handle, struct spfc_parent_queue_info *parent_queue,
+ struct spfc_suspend_sqe_info *suspen_sqe)
+{
+ ulong flag;
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_parent_sq_info *sq = NULL;
+ u16 ssqn;
+ struct unf_frame_pkg *pkg = NULL;
+ struct spfc_hba_info *hba = (struct spfc_hba_info *)handle;
+ u8 task_type;
+ spinlock_t *prtq_state_lock = NULL;
+
+ sq = &parent_queue->parent_sq_info;
+ task_type = suspen_sqe->sqe.ts_sl.task_type;
+ pkg = &suspen_sqe->pkg;
+ if (!pkg) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_MAJOR, "[error]pkt is null.");
+ return UNF_RETURN_ERROR;
+ }
+
+ ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) pop up suspend wqe sqn (0x%x) TaskType(0x%x)",
+ hba->port_cfg.port_id, ssqn, task_type);
+
+ prtq_state_lock = &parent_queue->parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flag);
+ if (SPFC_RPORT_NOT_OFFLOADED(parent_queue) &&
+ (task_type == SPFC_SQE_ELS_RSP ||
+ task_type == SPFC_TASK_T_ELS)) {
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ /* Send PLOGI or PLOGI ACC or SCR if session not offload */
+ ret = spfc_send_els_via_default_session(hba, &suspen_sqe->sqe, pkg, parent_queue);
+ } else {
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ ret = spfc_parent_sq_enqueue(sq, &suspen_sqe->sqe, ssqn);
+ }
+ return ret;
+}
+
+static void spfc_build_nop_sqe(struct spfc_hba_info *hba, struct spfc_parent_sq_info *sq,
+ struct spfc_sqe *sqe, u32 magic_num, u32 scqn)
+{
+ sqe->ts_sl.task_type = SPFC_SQE_NOP;
+ sqe->ts_sl.wd0.conn_id = (u16)(sq->rport_index);
+ sqe->ts_sl.cont.nop_sq.wd0.scqn = scqn;
+ sqe->ts_sl.cont.nop_sq.magic_num = magic_num;
+ spfc_build_common_wqe_ctrls(&sqe->ctrl_sl,
+ sizeof(struct spfc_sqe_ts) / SPFC_WQE_SECTION_CHUNK_SIZE);
+}
+
+u32 spfc_send_nop_cmd(void *handle, struct spfc_parent_sq_info *parent_sq_info,
+ u32 magic_num, u16 sqn)
+{
+ struct spfc_sqe empty_sq_sqe;
+ struct spfc_hba_info *hba = (struct spfc_hba_info *)handle;
+ u32 ret;
+
+ memset(&empty_sq_sqe, 0, sizeof(struct spfc_sqe));
+
+ spfc_build_nop_sqe(hba, parent_sq_info, &empty_sq_sqe, magic_num, hba->default_scqn);
+ ret = spfc_parent_sq_enqueue(parent_sq_info, &empty_sq_sqe, sqn);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]send nop cmd scqn(0x%x) sq(0x%x).",
+ hba->default_scqn, sqn);
+ return ret;
+}
+
+u32 spfc_suspend_sqe_and_send_nop(void *handle,
+ struct spfc_parent_queue_info *parent_queue,
+ struct spfc_sqe *sqe, struct unf_frame_pkg *pkg)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 magic_num;
+ struct spfc_hba_info *hba = (struct spfc_hba_info *)handle;
+ struct spfc_parent_sq_info *parent_sq = &parent_queue->parent_sq_info;
+ struct unf_lport *lport = (struct unf_lport *)hba->lport;
+
+ FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
+
+ if (pkg) {
+ magic_num = pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
+ } else {
+ magic_num = (u32)atomic64_inc_return(&((struct unf_lport *)
+ lport->root_lport)->exchg_index);
+ }
+
+ spfc_push_sqe_suspend(hba, parent_queue, sqe, pkg, magic_num);
+ if (SPFC_RPORT_NOT_OFFLOADED(parent_queue))
+ parent_sq->need_offloaded = SPFC_NEED_DO_OFFLOAD;
+
+ ret = spfc_send_nop_cmd(hba, parent_sq, magic_num,
+ (u16)parent_sq->sqn_base);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+			     "[err]Port(0x%x) rport_index(0x%x) send sq empty failed.",
+ hba->port_cfg.port_id, parent_sq->rport_index);
+ }
+ return ret;
+}
+
+void spfc_build_session_rst_wqe(void *handle, struct spfc_parent_sq_info *sq,
+ struct spfc_sqe *sqe, enum spfc_session_reset_mode mode, u32 scqn)
+{
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ /* The reset session command does not occupy xid. Therefore,
+ * 0xffff can be used to align with the microcode.
+ */
+ sqe->ts_sl.task_type = SPFC_SQE_SESS_RST;
+ sqe->ts_sl.local_xid = 0xffff;
+ sqe->ts_sl.wd0.conn_id = (u16)(sq->rport_index);
+ sqe->ts_sl.wd0.remote_xid = 0xffff;
+ sqe->ts_sl.cont.reset_session.wd0.reset_exch_start = hba->exi_base;
+ sqe->ts_sl.cont.reset_session.wd0.reset_exch_end = hba->exi_base + (hba->exi_count - 1);
+ sqe->ts_sl.cont.reset_session.wd1.reset_did = sq->remote_port_id;
+ sqe->ts_sl.cont.reset_session.wd1.mode = mode;
+ sqe->ts_sl.cont.reset_session.wd2.reset_sid = sq->local_port_id;
+ sqe->ts_sl.cont.reset_session.wd3.scqn = scqn;
+
+ spfc_build_common_wqe_ctrls(&sqe->ctrl_sl,
+ sizeof(struct spfc_sqe_ts) / SPFC_WQE_SECTION_CHUNK_SIZE);
+}
+
+u32 spfc_send_session_rst_cmd(void *handle,
+ struct spfc_parent_queue_info *parent_queue_info,
+ enum spfc_session_reset_mode mode)
+{
+ struct spfc_parent_sq_info *sq = NULL;
+ struct spfc_sqe rst_sess_sqe;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 sts_scqn = 0;
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+ memset(&rst_sess_sqe, 0, sizeof(struct spfc_sqe));
+ sq = &parent_queue_info->parent_sq_info;
+ sts_scqn = hba->default_scqn;
+
+ spfc_build_session_rst_wqe(hba, sq, &rst_sess_sqe, mode, sts_scqn);
+ ret = spfc_suspend_sqe_and_send_nop(hba, parent_queue_info, &rst_sess_sqe, NULL);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]RPort(0x%x) send SESS_RST(%d) start_exch_id(0x%x) end_exch_id(0x%x), scqn(0x%x) ctx_id(0x%x) cid(0x%x)",
+ sq->rport_index, mode,
+ rst_sess_sqe.ts_sl.cont.reset_session.wd0.reset_exch_start,
+ rst_sess_sqe.ts_sl.cont.reset_session.wd0.reset_exch_end,
+ rst_sess_sqe.ts_sl.cont.reset_session.wd3.scqn,
+ sq->context_id, sq->cache_id);
+ return ret;
+}
+
+void spfc_rcvd_els_from_srq_timeout(struct work_struct *work)
+{
+ struct spfc_hba_info *hba = NULL;
+
+ hba = container_of(work, struct spfc_hba_info, srq_delay_info.del_work.work);
+
+	/* If the frame has not been processed yet, push it to the CM layer;
+	 * it may already have been processed when the root rq received data.
+	 */
+ if (hba->srq_delay_info.srq_delay_flag) {
+ spfc_recv_els_cmnd(hba, &hba->srq_delay_info.frame_pkg,
+ hba->srq_delay_info.frame_pkg.unf_cmnd_pload_bl.buffer_ptr,
+ 0, false);
+ hba->srq_delay_info.srq_delay_flag = 0;
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+			     "[info]Port(0x%x) srq delay work timeout, send saved plogi to CM",
+ hba->port_cfg.port_id);
+ }
+}
+
+u32 spfc_flush_ini_resp_queue(void *handle)
+{
+ struct spfc_hba_info *hba = NULL;
+
+ FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
+ hba = (struct spfc_hba_info *)handle;
+
+ spfc_flush_sts_scq(hba);
+
+ return RETURN_OK;
+}
+
+static void spfc_handle_aeq_queue_error(struct spfc_hba_info *hba,
+ struct spfc_aqe_data *aeq_msg)
+{
+ u32 sts_scqn_local = 0;
+ u32 full_ci = INVALID_VALUE32;
+ u32 full_ci_owner = INVALID_VALUE32;
+ struct spfc_scq_info *scq_info = NULL;
+
+ sts_scqn_local = SPFC_RPORTID_TO_STS_SCQN(aeq_msg->wd0.conn_id);
+ scq_info = &hba->scq_info[sts_scqn_local];
+ full_ci = scq_info->ci;
+ full_ci_owner = scq_info->ci_owner;
+
+	/* Currently a flush is forced on the sts scq; the AEQE is returned
+	 * whether or not the scq has been processed.
+	 */
+ tasklet_schedule(&scq_info->tasklet);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) RPort(0x%x) LocalScqn(0x%x) CqmScqn(0x%x) is full, force flush CI from (%u|0x%x) to (%u|0x%x)",
+ hba->port_cfg.port_id, aeq_msg->wd0.conn_id,
+ sts_scqn_local, scq_info->scqn, full_ci_owner, full_ci,
+ scq_info->ci_owner, scq_info->ci);
+}
+
+void spfc_process_aeqe(void *handle, u8 event_type, u8 *val)
+{
+ u32 ret = RETURN_OK;
+ struct spfc_hba_info *hba = (struct spfc_hba_info *)handle;
+ struct spfc_aqe_data aeq_msg;
+ u8 event_code = INVALID_VALUE8;
+ u64 event_val = *((u64 *)val);
+
+ FC_CHECK_RETURN_VOID(hba);
+
+ memcpy(&aeq_msg, (struct spfc_aqe_data *)&event_val, sizeof(struct spfc_aqe_data));
+ event_code = (u8)aeq_msg.wd0.evt_code;
+
+ switch (event_type) {
+ case FC_AEQ_EVENT_QUEUE_ERROR:
+ spfc_handle_aeq_queue_error(hba, &aeq_msg);
+ break;
+
+ case FC_AEQ_EVENT_WQE_FATAL_ERROR:
+ UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport,
+ UNF_PORT_ABNORMAL_RESET, NULL);
+ break;
+
+ case FC_AEQ_EVENT_CTX_FATAL_ERROR:
+ break;
+
+ case FC_AEQ_EVENT_OFFLOAD_ERROR:
+ ret = spfc_handle_aeq_off_load_err(hba, &aeq_msg);
+ break;
+
+ default:
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+			     "[warn]Port(0x%x) receive an unsupported AEQ EventType(0x%x) EventVal(0x%llx).",
+ hba->port_cfg.port_id, event_type, (u64)event_val);
+ return;
+ }
+
+ if (event_code < FC_AEQ_EVT_ERR_CODE_BUTT)
+ SPFC_AEQ_ERR_TYPE_STAT(hba, aeq_msg.wd0.evt_code);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
+ "[info]Port(0x%x) receive AEQ EventType(0x%x) EventVal(0x%llx) EvtCode(0x%x) Conn_id(0x%x) Xid(0x%x) %s",
+ hba->port_cfg.port_id, event_type, (u64)event_val, event_code,
+ aeq_msg.wd0.conn_id, aeq_msg.wd1.xid,
+ (ret == UNF_RETURN_ERROR) ? "ERROR" : "OK");
+}
+
+void spfc_sess_resource_free_sync(void *handle,
+ struct unf_port_info *rport_info)
+{
+ struct spfc_parent_queue_info *parent_queue_info = NULL;
+ ulong flag = 0;
+ u32 wait_sq_cnt = 0;
+ struct spfc_hba_info *hba = NULL;
+ spinlock_t *prtq_state_lock = NULL;
+ u32 index = SPFC_DEFAULT_RPORT_INDEX;
+
+ FC_CHECK_RETURN_VOID(handle);
+ FC_CHECK_RETURN_VOID(rport_info);
+
+ hba = (struct spfc_hba_info *)handle;
+ parent_queue_info = &hba->parent_queue_mgr->parent_queue[index];
+ prtq_state_lock = &parent_queue_info->parent_queue_state_lock;
+ (void)spfc_free_parent_resource((void *)hba, rport_info);
+
+ for (;;) {
+ spin_lock_irqsave(prtq_state_lock, flag);
+ if (parent_queue_info->offload_state == SPFC_QUEUE_STATE_FREE) {
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ break;
+ }
+ spin_unlock_irqrestore(prtq_state_lock, flag);
+ msleep(SPFC_WAIT_SESS_FREE_ONE_TIME_MS);
+ wait_sq_cnt++;
+ if (wait_sq_cnt >= SPFC_MAX_WAIT_LOOP_TIMES)
+ break;
+ }
+}
diff --git a/drivers/scsi/spfc/hw/spfc_queue.h b/drivers/scsi/spfc/hw/spfc_queue.h
new file mode 100644
index 000000000000..b1184eb17556
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_queue.h
@@ -0,0 +1,711 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_QUEUE_H
+#define SPFC_QUEUE_H
+
+#include "unf_type.h"
+#include "spfc_wqe.h"
+#include "spfc_cqm_main.h"
+#define SPFC_MIN_WP_NUM (2)
+#define SPFC_EXTEND_WQE_OFFSET (128)
+#define SPFC_SQE_SIZE (256)
+#define WQE_MARKER_0 (0x0)
+#define WQE_MARKER_6B (0x6b)
+
+/* PARENT SQ & Context defines */
+#define SPFC_MAX_MSN (65535)
+#define SPFC_MSN_MASK (0xffff000000000000LL)
+#define SPFC_SQE_TS_SIZE (72)
+#define SPFC_SQE_FIRST_OBIT_DW_POS (0)
+#define SPFC_SQE_SECOND_OBIT_DW_POS (30)
+#define SPFC_SQE_OBIT_SET_MASK_BE (0x80)
+#define SPFC_SQE_OBIT_CLEAR_MASK_BE (0xffffff7f)
+#define SPFC_MAX_SQ_TASK_TYPE_CNT (128)
+#define SPFC_SQ_NUM_PER_QPC (3)
+#define SPFC_SQ_QID_START_PER_QPC 0
+#define SPFC_SQ_SPACE_OFFSET (64)
+#define SPFC_MAX_SSQ_NUM (SPFC_SQ_NUM_PER_QPC * 63 + 1) /* must be a multiple of 3 */
+#define SPFC_DIRECTWQE_SQ_INDEX (SPFC_MAX_SSQ_NUM - 1)
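+/*
+ * Illustrative arithmetic (a minimal sketch, assuming the values above):
+ * SPFC_MAX_SSQ_NUM == 3 * 63 + 1 == 190 and SPFC_DIRECTWQE_SQ_INDEX == 189,
+ * i.e. the per-QPC part (3 * 63 == 189) appears to be the multiple of 3
+ * referred to above and the last SQ index is used for direct wqe delivery.
+ */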
+
+/* Note: if the location of the flush done bit changes, this definition must
+ * be modified accordingly
+ */
+#define SPFC_CTXT_FLUSH_DONE_DW_POS (58)
+#define SPFC_CTXT_FLUSH_DONE_MASK_BE (0x4000)
+#define SPFC_CTXT_FLUSH_DONE_MASK_LE (0x400000)
+
+#define SPFC_PCIE_TEMPLATE (0)
+#define SPFC_DMA_ATTR_OFST (0)
+
+/*
+ *When the driver assembles a WQE SGE, the GPA parity bits are multiplexed as follows:
+ * {rsvd'2,zerocopysoro'2,zerocopy_dmaattr_idx'6,pcie_template'6}
+ */
+#define SPFC_PCIE_TEMPLATE_OFFSET 0
+#define SPFC_PCIE_ZEROCOPY_DMAATTR_IDX_OFFSET 6
+#define SPFC_PCIE_ZEROCOPY_SO_RO_OFFSET 12
+#define SPFC_PCIE_RELAXED_ORDERING (1)
+#define SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE \
+ (SPFC_PCIE_RELAXED_ORDERING << SPFC_PCIE_ZEROCOPY_SO_RO_OFFSET | \
+ SPFC_DMA_ATTR_OFST << SPFC_PCIE_ZEROCOPY_DMAATTR_IDX_OFFSET | \
+ SPFC_PCIE_TEMPLATE)
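+/*
+ * Illustrative expansion, assuming the definitions above:
+ * SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE == (1 << 12) | (0 << 6) | 0 == 0x1000,
+ * i.e. relaxed ordering set in the so/ro field with the default DMA
+ * attribute index and PCIe template.
+ */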
+
+#define SPFC_GET_SQ_HEAD(sq) \
+ list_entry(UNF_OS_LIST_NEXT(&(sq)->list_linked_list_sq), \
+ struct spfc_wqe_page, entry_wpg)
+#define SPFC_GET_SQ_TAIL(sq) \
+ list_entry(UNF_OS_LIST_PREV(&(sq)->list_linked_list_sq), \
+ struct spfc_wqe_page, entry_wpg)
+#define SPFC_SQ_IO_STAT(ssq, io_type) \
+ (atomic_inc(&(ssq)->io_stat[io_type]))
+#define SPFC_SQ_IO_STAT_READ(ssq, io_type) \
+ (atomic_read(&(ssq)->io_stat[io_type]))
+#define SPFC_GET_QUEUE_CMSN(ssq) \
+ ((u32)(be64_to_cpu(((((ssq)->queue_header)->ci_record) & SPFC_MSN_MASK))))
+#define SPFC_GET_WP_END_CMSN(head_start_cmsn, wqe_num_per_buf) \
+ ((u16)(((u32)(head_start_cmsn) + (u32)(wqe_num_per_buf) - 1) % (SPFC_MAX_MSN + 1)))
+#define SPFC_MSN_INC(msn) (((SPFC_MAX_MSN) == (msn)) ? 0 : ((msn) + 1))
+#define SPFC_MSN_DEC(msn) (((msn) == 0) ? (SPFC_MAX_MSN) : ((msn) - 1))
+#define SPFC_QUEUE_MSN_OFFSET(start_cmsn, end_cmsn) \
+ ((u32)((((u32)(end_cmsn) + (SPFC_MAX_MSN)) - (u32)(start_cmsn)) % (SPFC_MAX_MSN + 1)))
+#define SPFC_MSN32_ADD(msn, inc) (((msn) + (inc)) % (SPFC_MAX_MSN + 1))
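+/*
+ * Worked examples for the MSN ring arithmetic above (illustrative only,
+ * assuming SPFC_MAX_MSN == 65535):
+ *   SPFC_MSN_INC(65535)           == 0
+ *   SPFC_MSN_DEC(0)               == 65535
+ *   SPFC_GET_WP_END_CMSN(100, 16) == 115
+ *   SPFC_MSN32_ADD(65530, 10)     == 4
+ */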
+
+/*
+ *SCQ defines
+ */
+#define SPFC_INT_NUM_PER_QUEUE (1)
+#define SPFC_SCQ_INT_ID_MAX (2048) /* 11BIT */
+#define SPFC_SCQE_SIZE (64)
+#define SPFC_CQE_GPA_SHIFT (4)
+#define SPFC_NEXT_CQE_GPA_SHIFT (12)
+/* 1-Update Ci by Tile, 0-Update Ci by Hardware */
+#define SPFC_PMSN_CI_TYPE_FROM_HOST (0)
+#define SPFC_PMSN_CI_TYPE_FROM_UCODE (1)
+#define SPFC_ARMQ_IDLE (0)
+#define SPFC_CQ_INT_MODE (2)
+#define SPFC_CQ_HEADER_OWNER_SHIFT (15)
+
+/* SCQC_CQ_DEPTH 0-256, 1-512, 2-1k, 3-2k, 4-4k, 5-8k, 6-16k, 7-32k.
+ * include LinkWqe
+ */
+#define SPFC_CMD_SCQ_DEPTH (4096)
+#define SPFC_STS_SCQ_DEPTH (8192)
+
+#define SPFC_CMD_SCQC_CQ_DEPTH (spfc_log2n(SPFC_CMD_SCQ_DEPTH >> 8))
+#define SPFC_STS_SCQC_CQ_DEPTH (spfc_log2n(SPFC_STS_SCQ_DEPTH >> 8))
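+/*
+ * Illustrative encoding, assuming spfc_log2n() returns log2 of its argument:
+ * SPFC_CMD_SCQC_CQ_DEPTH == spfc_log2n(4096 >> 8) == spfc_log2n(16) == 4
+ * (the "4-4k" code above) and SPFC_STS_SCQC_CQ_DEPTH == spfc_log2n(32) == 5
+ * ("5-8k").
+ */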
+#define SPFC_STS_SCQ_CI_TYPE SPFC_PMSN_CI_TYPE_FROM_HOST
+
+#define SPFC_CMD_SCQ_CI_TYPE SPFC_PMSN_CI_TYPE_FROM_UCODE
+
+#define SPFC_SCQ_INTR_LOW_LATENCY_MODE 0
+#define SPFC_SCQ_INTR_POLLING_MODE 1
+#define SPFC_SCQ_PROC_CNT_PER_SECOND_THRESHOLD (30000)
+
+#define SPFC_CQE_MAX_PROCESS_NUM_PER_INTR (128)
+#define SPFC_SESSION_SCQ_NUM (16)
+
+/* SCQ[0, 2, 4 ...] are CMD SCQs, SCQ[1, 3, 5 ...] are STS SCQs, and
+ * SCQ[SPFC_TOTAL_SCQ_NUM - 1] is the default SCQ
+ */
+#define SPFC_CMD_SCQN_START (0)
+#define SPFC_STS_SCQN_START (1)
+#define SPFC_SCQS_PER_SESSION (2)
+
+#define SPFC_TOTAL_SCQ_NUM (SPFC_SESSION_SCQ_NUM + 1)
+
+#define SPFC_SCQ_IS_STS(scq_index) \
+ (((scq_index) % SPFC_SCQS_PER_SESSION) || ((scq_index) == SPFC_SESSION_SCQ_NUM))
+#define SPFC_SCQ_IS_CMD(scq_index) (!SPFC_SCQ_IS_STS(scq_index))
+#define SPFC_RPORTID_TO_CMD_SCQN(rport_index) \
+ (((rport_index) * SPFC_SCQS_PER_SESSION) % SPFC_SESSION_SCQ_NUM)
+#define SPFC_RPORTID_TO_STS_SCQN(rport_index) \
+ ((((rport_index) * SPFC_SCQS_PER_SESSION) + 1) % SPFC_SESSION_SCQ_NUM)
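+/*
+ * Worked mapping examples (illustrative only, with SPFC_SCQS_PER_SESSION == 2
+ * and SPFC_SESSION_SCQ_NUM == 16):
+ *   rport_index 5 -> CMD SCQN (5 * 2) % 16 == 10, STS SCQN 11
+ *   rport_index 9 -> CMD SCQN 18 % 16 == 2,       STS SCQN 3
+ * SPFC_SCQ_IS_STS() also treats index SPFC_SESSION_SCQ_NUM (the default SCQ)
+ * as a status queue.
+ */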
+
+/*
+ *SRQ defines
+ */
+#define SPFC_SRQE_SIZE (32)
+#define SPFC_SRQ_INIT_LOOP_O (1)
+#define SPFC_QUEUE_RING (1)
+#define SPFC_SRQ_ELS_DATA_NUM (1)
+#define SPFC_SRQ_ELS_SGE_LEN (256)
+#define SPFC_SRQ_ELS_DATA_DEPTH (31750) /* depth should be a multiple of 127 */
+
+#define SPFC_IRQ_NAME_MAX (30)
+
+/* Support 2048 sessions(xid) */
+#define SPFC_CQM_XID_MASK (0x7ff)
+
+#define SPFC_QUEUE_FLUSH_DOING (0)
+#define SPFC_QUEUE_FLUSH_DONE (1)
+#define SPFC_QUEUE_FLUSH_WAIT_TIMEOUT_MS (2000)
+#define SPFC_QUEUE_FLUSH_WAIT_MS (2)
+
+/*
+ *RPort defines
+ */
+#define SPFC_RPORT_OFFLOADED(prnt_qinfo) \
+ ((prnt_qinfo)->offload_state == SPFC_QUEUE_STATE_OFFLOADED)
+#define SPFC_RPORT_NOT_OFFLOADED(prnt_qinfo) \
+ ((prnt_qinfo)->offload_state != SPFC_QUEUE_STATE_OFFLOADED)
+#define SPFC_RPORT_FLUSH_NOT_NEEDED(prnt_qinfo) \
+ (((prnt_qinfo)->offload_state == SPFC_QUEUE_STATE_INITIALIZED) || \
+ ((prnt_qinfo)->offload_state == SPFC_QUEUE_STATE_OFFLOADING) || \
+ ((prnt_qinfo)->offload_state == SPFC_QUEUE_STATE_FREE))
+#define SPFC_CHECK_XID_MATCHED(sq_xid, sqe_xid) \
+ (((sq_xid) & SPFC_CQM_XID_MASK) == ((sqe_xid) & SPFC_CQM_XID_MASK))
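+/*
+ * Example (illustrative only): with SPFC_CQM_XID_MASK == 0x7ff, xids 0x1234
+ * and 0xa234 both mask to 0x234 and are considered matched, while 0x1234 and
+ * 0x1235 are not.
+ */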
+#define SPFC_PORT_MODE_TGT (0) /* Port mode */
+#define SPFC_PORT_MODE_INI (1)
+#define SPFC_PORT_MODE_BOTH (2)
+
+/*
+ *Hardware Reserved Queue Info defines
+ */
+#define SPFC_HRQI_SEQ_ID_MAX (255)
+#define SPFC_HRQI_SEQ_INDEX_MAX (64)
+#define SPFC_HRQI_SEQ_INDEX_SHIFT (6)
+#define SPFC_HRQI_SEQ_SEPCIAL_ID (3)
+#define SPFC_HRQI_SEQ_INVALID_ID (~0LL)
+
+enum spfc_session_reset_mode {
+ SPFC_SESS_RST_DELETE_IO_ONLY = 1,
+ SPFC_SESS_RST_DELETE_CONN_ONLY = 2,
+ SPFC_SESS_RST_DELETE_IO_CONN_BOTH = 3,
+ SPFC_SESS_RST_MODE_BUTT
+};
+
+/* linkwqe */
+#define CQM_LINK_WQE_CTRLSL_VALUE 2
+#define CQM_LINK_WQE_LP_VALID 1
+#define CQM_LINK_WQE_LP_INVALID 0
+
+/* bit mask */
+#define SPFC_SCQN_MASK 0xfffff
+#define SPFC_SCQ_CTX_CI_GPA_MASK 0xfffffff
+#define SPFC_SCQ_CTX_C_EQN_MSI_X_MASK 0x7
+#define SPFC_PARITY_MASK 0x1
+#define SPFC_KEYSECTION_XID_H_MASK 0xf
+#define SPFC_KEYSECTION_XID_L_MASK 0xffff
+#define SPFC_SRQ_CTX_rqe_dma_attr_idx_MASK 0xf
+#define SPFC_SSQ_CTX_MASK 0xfffff
+#define SPFC_KEY_WD3_SID_2_MASK 0x00ff0000
+#define SPFC_KEY_WD3_SID_1_MASK 0x00ff00
+#define SPFC_KEY_WD3_SID_0_MASK 0x0000ff
+#define SPFC_KEY_WD4_DID_2_MASK 0x00ff0000
+#define SPFC_KEY_WD4_DID_1_MASK 0x00ff00
+#define SPFC_KEY_WD4_DID_0_MASK 0x0000ff
+#define SPFC_LOCAL_LW_WD1_DUMP_MSN_MASK 0x7fff
+#define SPFC_PMSN_MASK 0xff
+#define SPFC_QOS_LEVEL_MASK 0x3
+#define SPFC_DB_VAL_MASK 0xFFFFFFFF
+#define SPFC_MSNWD_L_MASK 0xffff
+#define SPFC_MSNWD_H_MASK 0x7fff
+#define SPFC_DB_WD0_PI_H_MASK 0xf
+#define SPFC_DB_WD0_PI_L_MASK 0xfff
+
+#define SPFC_DB_C_BIT_DATA_TYPE 0
+#define SPFC_DB_C_BIT_CONTROL_TYPE 1
+
+#define SPFC_OWNER_DRIVER_PRODUCT (1)
+
+#define SPFC_256BWQE_ENABLE (1)
+#define SPFC_DB_ARM_DISABLE (0)
+
+#define SPFC_CNTX_SIZE_T_256B (0)
+#define SPFC_CNTX_SIZE_256B (256)
+
+#define SPFC_SERVICE_TYPE_FC (12)
+#define SPFC_SERVICE_TYPE_FC_SQ (13)
+
+#define SPFC_PACKET_COS_FC_CMD (0)
+#define SPFC_PACKET_COS_FC_DATA (1)
+
+#define SPFC_QUEUE_LINK_STYLE (0)
+#define SPFC_QUEUE_RING_STYLE (1)
+
+#define SPFC_NEED_DO_OFFLOAD (1)
+#define SPFC_QID_SQ (0)
+
+/*
+ *SCQ defines
+ */
+struct spfc_scq_info {
+ struct cqm_queue *cqm_scq_info;
+ u32 wqe_num_per_buf;
+ u32 wqe_size;
+ u32 scqc_cq_depth; /* 0-256, 1-512, 2-1k, 3-2k, 4-4k, 5-8k, 6-16k, 7-32k */
+ u16 scqc_ci_type;
+	u16 valid_wqe_num; /* SCQ depth, including the link wqe */
+ u16 ci;
+ u16 ci_owner;
+ u32 queue_id;
+ u32 scqn;
+ char irq_name[SPFC_IRQ_NAME_MAX];
+ u16 msix_entry_idx;
+ u32 irq_id;
+ struct tasklet_struct tasklet;
+ atomic_t flush_stat;
+ void *hba;
+ u32 reserved;
+ struct task_struct *delay_task;
+ bool task_exit;
+ u32 intr_mode;
+};
+
+struct spfc_srq_ctx {
+ /* DW0 */
+ u64 pcie_template : 6;
+ u64 rsvd0 : 2;
+ u64 parity : 8;
+ u64 cur_rqe_usr_id : 16;
+ u64 cur_rqe_msn : 16;
+ u64 last_rq_pmsn : 16;
+
+ /* DW1 */
+ u64 cur_rqe_gpa;
+
+ /* DW2 */
+ u64 ctrl_sl : 1;
+ u64 cf : 1;
+ u64 csl : 2;
+ u64 cr : 1;
+ u64 bdsl : 4;
+ u64 pmsn_type : 1;
+ u64 cur_wqe_o : 1;
+ u64 consant_sge_len : 17;
+ u64 cur_sge_id : 4;
+ u64 cur_sge_remain_len : 17;
+ u64 ceqn_msix : 11;
+ u64 int_mode : 2;
+ u64 cur_sge_l : 1;
+ u64 cur_sge_v : 1;
+
+ /* DW3 */
+ u64 cur_sge_gpa;
+
+ /* DW4 */
+ u64 cur_pmsn_gpa;
+
+ /* DW5 */
+ u64 rsvd3 : 5;
+ u64 ring : 1;
+ u64 loop_o : 1;
+ u64 rsvd2 : 1;
+ u64 rqe_dma_attr_idx : 6;
+ u64 rq_so_ro : 2;
+ u64 cqe_dma_attr_idx : 6;
+ u64 cq_so_ro : 2;
+ u64 rsvd1 : 7;
+ u64 arm_q : 1;
+ u64 cur_cqe_cnt : 8;
+ u64 cqe_max_cnt : 8;
+ u64 prefetch_max_masn : 16;
+
+ /* DW6~DW7 */
+ u64 rsvd4;
+ u64 rsvd5;
+};
+
+struct spfc_drq_buff_entry {
+ u16 buff_id;
+ void *buff_addr;
+ dma_addr_t buff_dma;
+};
+
+enum spfc_clean_state { SPFC_CLEAN_DONE, SPFC_CLEAN_DOING, SPFC_CLEAN_BUTT };
+enum spfc_srq_type { SPFC_SRQ_ELS = 1, SPFC_SRQ_IMMI, SPFC_SRQ_BUTT };
+
+struct spfc_srq_info {
+ enum spfc_srq_type srq_type;
+
+ struct cqm_queue *cqm_srq_info;
+	u32 wqe_num_per_buf; /* Wqe number per buf, not including the link wqe */
+ u32 wqe_size;
+	u32 valid_wqe_num; /* valid wqe number, not including the link wqe */
+ u16 pi;
+ u16 pi_owner;
+ u16 pmsn;
+ u16 ci;
+ u16 cmsn;
+ u32 srqn;
+
+ dma_addr_t first_rqe_recv_dma;
+
+ struct spfc_drq_buff_entry *els_buff_entry_head;
+ struct buf_describe buf_list;
+ spinlock_t srq_spin_lock;
+ bool spin_lock_init;
+ bool enable;
+ enum spfc_clean_state state;
+
+ atomic_t ref;
+
+ struct delayed_work del_work;
+ u32 del_retry_time;
+ void *hba;
+};
+
+/*
+ * The doorbell record keeps the PI of the WQE that will be produced next.
+ * The PI is 15 bits wide with 1 o-bit.
+ */
+struct db_record {
+ u64 pmsn : 16;
+ u64 dump_pmsn : 16;
+ u64 rsvd0 : 32;
+};
+
+/*
+ * The ci record keeps the CI of the WQE that will be consumed next.
+ * The ci is 15 bits wide with 1 o-bit.
+ */
+struct ci_record {
+ u64 cmsn : 16;
+ u64 dump_cmsn : 16;
+ u64 rsvd0 : 32;
+};
+
+/* The accumulated data in the WQ header */
+struct accumulate {
+ u64 data_2_uc;
+ u64 data_2_drv;
+};
+
+/* The WQ header structure */
+struct wq_header {
+ struct db_record db_record;
+ struct ci_record ci_record;
+ struct accumulate soft_data;
+};
+
+/* Link list Sq WqePage Pool */
+/* queue header struct */
+struct spfc_queue_header {
+ u64 door_bell_record;
+ u64 ci_record;
+ u64 rsv1;
+ u64 rsv2;
+};
+
+/* WPG-WQEPAGE, LLSQ-LINKED LIST SQ */
+struct spfc_wqe_page {
+ struct list_head entry_wpg;
+
+ /* Wqe Page virtual addr */
+ void *wpg_addr;
+
+ /* Wqe Page physical addr */
+ u64 wpg_phy_addr;
+};
+
+struct spfc_sq_wqepage_pool {
+ u32 wpg_cnt;
+ u32 wpg_size;
+ u32 wqe_per_wpg;
+
+ /* PCI DMA Pool */
+ struct dma_pool *wpg_dma_pool;
+ struct spfc_wqe_page *wpg_pool_addr;
+ struct list_head list_free_wpg_pool;
+ spinlock_t wpg_pool_lock;
+ atomic_t wpg_in_use;
+};
+
+#define SPFC_SQ_DEL_STAGE_TIMEOUT_MS (3 * 1000)
+#define SPFC_SRQ_DEL_STAGE_TIMEOUT_MS (10 * 1000)
+#define SPFC_SQ_WAIT_FLUSH_DONE_TIMEOUT_MS (10 * 1000)
+#define SPFC_SQ_WAIT_FLUSH_DONE_TIMEOUT_CNT (3)
+
+#define SPFC_SRQ_PROCESS_DELAY_MS (20)
+
+/* PLOGI parameters */
+struct spfc_plogi_copram {
+ u32 seq_cnt : 1;
+ u32 ed_tov : 1;
+ u32 rsvd : 14;
+ u32 tx_mfs : 16;
+ u32 ed_tov_time;
+};
+
+struct spfc_delay_sqe_ctrl_info {
+ bool valid;
+ u32 rport_index;
+ u32 time_out;
+ u64 start_jiff;
+ u32 sid;
+ u32 did;
+ u32 xid;
+ u16 ssqn;
+ struct spfc_sqe sqe;
+};
+
+struct spfc_suspend_sqe_info {
+ void *hba;
+ u32 magic_num;
+ u8 old_offload_sts;
+ struct unf_frame_pkg pkg;
+ struct spfc_sqe sqe;
+ struct delayed_work timeout_work;
+ struct list_head list_sqe_entry;
+};
+
+struct spfc_delay_destroy_ctrl_info {
+ bool valid;
+ u32 rport_index;
+ u32 time_out;
+ u64 start_jiff;
+ struct unf_port_info rport_info;
+};
+
+/* PARENT SQ Info */
+struct spfc_parent_sq_info {
+ void *hba;
+ spinlock_t parent_sq_enqueue_lock;
+ u32 rport_index;
+ u32 context_id;
+	/* Fixed value, used for the doorbell */
+ u32 sq_queue_id;
+	/* When a session is offloaded, the tile returns the CacheId to the
+	 * driver, which is then used for the doorbell
+	 */
+ u32 cache_id;
+	/* service type, fc */
+ u32 service_type;
+ /* OQID */
+ u16 oqid_rd;
+ u16 oqid_wr;
+ u32 local_port_id;
+ u32 remote_port_id;
+ u32 sqn_base;
+ bool port_in_flush;
+ bool sq_in_sess_rst;
+ atomic_t sq_valid;
+ /* Used by NPIV QoS */
+ u8 vport_id;
+ /* Used by NPIV QoS */
+ u8 cs_ctrl;
+ struct delayed_work del_work;
+ struct delayed_work flush_done_timeout_work;
+ u64 del_start_jiff;
+ dma_addr_t srq_ctx_addr;
+ atomic_t sq_cached;
+ atomic_t flush_done_wait_cnt;
+ struct spfc_plogi_copram plogi_co_parms;
+ /* dif control info for immi */
+ struct unf_dif_control_info sirt_dif_control;
+ struct spfc_delay_sqe_ctrl_info delay_sqe;
+ struct spfc_delay_destroy_ctrl_info destroy_sqe;
+ struct list_head suspend_sqe_list;
+ atomic_t io_stat[SPFC_MAX_SQ_TASK_TYPE_CNT];
+ u8 need_offloaded;
+};
+
+/* parent context doorbell */
+struct spfc_parent_sq_db {
+ struct {
+ u32 xid : 20;
+ u32 cntx_size : 2;
+ u32 arm : 1;
+ u32 c : 1;
+ u32 cos : 3;
+ u32 service_type : 5;
+ } wd0;
+
+ struct {
+ u32 pi_hi : 8;
+ u32 sm_data : 20;
+ u32 qid : 4;
+ } wd1;
+};
+
+#define IWARP_FC_DDB_TYPE 3
+
+/* direct wqe doorbell */
+struct spfc_direct_wqe_db {
+ struct {
+ u32 xid : 20;
+ u32 cntx_size : 2;
+ u32 pi_hi : 4;
+ u32 c : 1;
+ u32 cos : 3;
+ u32 ddb : 2;
+ } wd0;
+
+ struct {
+ u32 pi_lo : 12;
+ u32 sm_data : 20;
+ } wd1;
+};
+
+struct spfc_parent_cmd_scq_info {
+ u32 cqm_queue_id;
+ u32 local_queue_id;
+};
+
+struct spfc_parent_st_scq_info {
+ u32 cqm_queue_id;
+ u32 local_queue_id;
+};
+
+struct spfc_parent_els_srq_info {
+ u32 cqm_queue_id;
+ u32 local_queue_id;
+};
+
+enum spfc_parent_queue_state {
+ SPFC_QUEUE_STATE_INITIALIZED = 0,
+ SPFC_QUEUE_STATE_OFFLOADING = 1,
+ SPFC_QUEUE_STATE_OFFLOADED = 2,
+ SPFC_QUEUE_STATE_DESTROYING = 3,
+ SPFC_QUEUE_STATE_FREE = 4,
+ SPFC_QUEUE_STATE_BUTT
+};
+
+struct spfc_parent_ctx {
+ dma_addr_t parent_ctx_addr;
+ void *parent_ctx;
+ struct cqm_qpc_mpt *cqm_parent_ctx_obj;
+};
+
+struct spfc_parent_queue_info {
+ spinlock_t parent_queue_state_lock;
+ struct spfc_parent_ctx parent_ctx;
+ enum spfc_parent_queue_state offload_state;
+ struct spfc_parent_sq_info parent_sq_info;
+ struct spfc_parent_cmd_scq_info parent_cmd_scq_info;
+	struct spfc_parent_st_scq_info parent_sts_scq_info;
+ struct spfc_parent_els_srq_info parent_els_srq_info;
+ u8 queue_vport_id;
+ u8 queue_data_cos;
+};
+
+struct spfc_parent_ssq_info {
+ void *hba;
+ spinlock_t parent_sq_enqueue_lock;
+ atomic_t wqe_page_cnt;
+ u32 context_id;
+ u32 cache_id;
+ u32 sq_queue_id;
+ u32 sqn;
+ u32 service_type;
+ u32 max_sqe_num; /* SQ depth */
+ u32 wqe_num_per_buf;
+ u32 wqe_size;
+ u32 accum_wqe_cnt;
+ u32 wqe_offset;
+ u16 head_start_cmsn;
+ u16 head_end_cmsn;
+ u16 last_cmsn;
+ u16 last_pi_owner;
+ u32 queue_style;
+ atomic_t sq_valid;
+ void *queue_head_original;
+ struct spfc_queue_header *queue_header;
+ dma_addr_t queue_hdr_phy_addr_original;
+ dma_addr_t queue_hdr_phy_addr;
+ struct list_head list_linked_list_sq;
+ atomic_t sq_db_cnt;
+ atomic_t sq_wqe_cnt;
+ atomic_t sq_cqe_cnt;
+ atomic_t sqe_minus_cqe_cnt;
+ atomic_t io_stat[SPFC_MAX_SQ_TASK_TYPE_CNT];
+};
+
+struct spfc_parent_shared_queue_info {
+ struct spfc_parent_ctx parent_ctx;
+ struct spfc_parent_ssq_info parent_ssq_info;
+};
+
+struct spfc_parent_queue_mgr {
+ struct spfc_parent_queue_info parent_queue[UNF_SPFC_MAXRPORT_NUM];
+ struct spfc_parent_shared_queue_info shared_queue[SPFC_MAX_SSQ_NUM];
+ struct buf_describe parent_sq_buf_list;
+};
+
+#define SPFC_SRQC_BUS_ROW 8
+#define SPFC_SRQC_BUS_COL 19
+#define SPFC_SQC_BUS_ROW 8
+#define SPFC_SQC_BUS_COL 13
+#define SPFC_HW_SCQC_BUS_ROW 6
+#define SPFC_HW_SCQC_BUS_COL 10
+#define SPFC_HW_SRQC_BUS_ROW 4
+#define SPFC_HW_SRQC_BUS_COL 15
+#define SPFC_SCQC_BUS_ROW 3
+#define SPFC_SCQC_BUS_COL 29
+
+#define SPFC_QUEUE_INFO_BUS_NUM 4
+struct spfc_queue_info_bus {
+ u64 bus[SPFC_QUEUE_INFO_BUS_NUM];
+};
+
+u32 spfc_free_parent_resource(void *handle, struct unf_port_info *rport_info);
+u32 spfc_alloc_parent_resource(void *handle, struct unf_port_info *rport_info);
+u32 spfc_alloc_parent_queue_mgr(void *handle);
+void spfc_free_parent_queue_mgr(void *handle);
+u32 spfc_create_common_share_queues(void *handle);
+u32 spfc_create_ssq(void *handle);
+void spfc_destroy_common_share_queues(void *v_pstHba);
+u32 spfc_alloc_parent_sq_wqe_page_pool(void *handle);
+void spfc_free_parent_sq_wqe_page_pool(void *handle);
+struct spfc_parent_queue_info *
+spfc_find_parent_queue_info_by_pkg(void *handle, struct unf_frame_pkg *pkg);
+struct spfc_parent_sq_info *
+spfc_find_parent_sq_by_pkg(void *handle, struct unf_frame_pkg *pkg);
+u32 spfc_root_cmdq_enqueue(void *handle, union spfc_cmdqe *cmdqe, u16 cmd_len);
+void spfc_process_scq_cqe(ulong scq_info);
+u32 spfc_process_scq_cqe_entity(ulong scq_info, u32 proc_cnt);
+void spfc_post_els_srq_wqe(struct spfc_srq_info *srq_info, u16 buf_id);
+void spfc_process_aeqe(void *handle, u8 event_type, u8 *event_val);
+u32 spfc_parent_sq_enqueue(struct spfc_parent_sq_info *sq, struct spfc_sqe *io_sqe,
+ u16 ssqn);
+u32 spfc_parent_ssq_enqueue(struct spfc_parent_ssq_info *ssq,
+ struct spfc_sqe *io_sqe, u8 wqe_type);
+void spfc_free_sq_wqe_page(struct spfc_parent_ssq_info *ssq, u32 cur_cmsn);
+u32 spfc_reclaim_sq_wqe_page(void *handle, union spfc_scqe *scqe);
+void spfc_set_rport_flush_state(void *handle, bool in_flush);
+u32 spfc_clear_fetched_sq_wqe(void *handle);
+u32 spfc_clear_pending_sq_wqe(void *handle);
+void spfc_free_parent_queues(void *handle);
+void spfc_free_ssq(void *handle, u32 free_sq_num);
+void spfc_enalbe_queues_dispatch(void *handle);
+void spfc_queue_pre_process(void *handle, bool clean);
+void spfc_queue_post_process(void *handle);
+void spfc_free_parent_queue_info(void *handle, struct spfc_parent_queue_info *parent_queue_info);
+u32 spfc_send_session_rst_cmd(void *handle,
+ struct spfc_parent_queue_info *parent_queue_info,
+ enum spfc_session_reset_mode mode);
+u32 spfc_send_nop_cmd(void *handle, struct spfc_parent_sq_info *parent_sq_info,
+ u32 magic_num, u16 sqn);
+void spfc_build_session_rst_wqe(void *handle, struct spfc_parent_sq_info *sq,
+ struct spfc_sqe *sqe,
+ enum spfc_session_reset_mode mode, u32 scqn);
+void spfc_wq_destroy_els_srq(struct work_struct *work);
+void spfc_destroy_els_srq(void *handle);
+u32 spfc_push_delay_sqe(void *hba,
+ struct spfc_parent_queue_info *offload_parent_queue,
+ struct spfc_sqe *sqe, struct unf_frame_pkg *pkg);
+void spfc_push_destroy_parent_queue_sqe(void *hba,
+ struct spfc_parent_queue_info *offloading_parent_queue,
+ struct unf_port_info *rport_info);
+void spfc_pop_destroy_parent_queue_sqe(void *handle,
+ struct spfc_delay_destroy_ctrl_info *destroy_sqe_info);
+struct spfc_parent_queue_info *spfc_find_offload_parent_queue(void *handle,
+ u32 local_id,
+ u32 remote_id,
+ u32 rport_index);
+u32 spfc_flush_ini_resp_queue(void *handle);
+void spfc_rcvd_els_from_srq_timeout(struct work_struct *work);
+u32 spfc_send_aeq_info_via_cmdq(void *hba, u32 aeq_error_type);
+u32 spfc_parent_sq_ring_doorbell(struct spfc_parent_ssq_info *sq, u8 qos_level,
+ u32 c);
+void spfc_sess_resource_free_sync(void *handle,
+ struct unf_port_info *rport_info);
+u32 spfc_suspend_sqe_and_send_nop(void *handle,
+ struct spfc_parent_queue_info *parent_queue,
+ struct spfc_sqe *sqe, struct unf_frame_pkg *pkg);
+u32 spfc_pop_suspend_sqe(void *handle,
+ struct spfc_parent_queue_info *parent_queue,
+ struct spfc_suspend_sqe_info *suspen_sqe);
+#endif
diff --git a/drivers/scsi/spfc/hw/spfc_service.c b/drivers/scsi/spfc/hw/spfc_service.c
new file mode 100644
index 000000000000..fa3958357de3
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_service.c
@@ -0,0 +1,2169 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "spfc_service.h"
+#include "unf_log.h"
+#include "spfc_io.h"
+#include "spfc_chipitf.h"
+
+#define SPFC_ELS_SRQ_BUF_NUM (0x9)
+#define SPFC_LS_GS_USERID_LEN ((FC_LS_GS_USERID_CNT_MAX + 1) / 2)
+
+struct unf_scqe_handle_table {
+ u32 scqe_type; /* ELS type */
+ bool reclaim_sq_wpg;
+ u32 (*scqe_handle_func)(struct spfc_hba_info *hba, union spfc_scqe *scqe);
+};
+
+static u32 spfc_get_els_rsp_pld_len(u16 els_type, u16 els_cmnd,
+ u32 *els_acc_pld_len)
+{
+ u32 ret = RETURN_OK;
+
+ FC_CHECK_RETURN_VALUE(els_acc_pld_len, UNF_RETURN_ERROR);
+
+ /* RJT */
+ if (els_type == ELS_RJT) {
+ *els_acc_pld_len = UNF_ELS_ACC_RJT_LEN;
+ return RETURN_OK;
+ }
+
+ /* ACC */
+ switch (els_cmnd) {
+	/* FLOGI and PDISC use the same payload length as PLOGI. */
+ case ELS_FLOGI:
+ case ELS_PDISC:
+ case ELS_PLOGI:
+ *els_acc_pld_len = UNF_PLOGI_ACC_PAYLOAD_LEN;
+ break;
+
+ case ELS_PRLI:
+		/* If sirt is enabled, the PRLI ACC payload is extended by 12 bytes */
+ *els_acc_pld_len = (UNF_PRLI_ACC_PAYLOAD_LEN - UNF_PRLI_SIRT_EXTRA_SIZE);
+
+ break;
+
+ case ELS_LOGO:
+ *els_acc_pld_len = UNF_LOGO_ACC_PAYLOAD_LEN;
+ break;
+
+ case ELS_PRLO:
+ *els_acc_pld_len = UNF_PRLO_ACC_PAYLOAD_LEN;
+ break;
+
+ case ELS_RSCN:
+ *els_acc_pld_len = UNF_RSCN_ACC_PAYLOAD_LEN;
+ break;
+
+ case ELS_ADISC:
+ *els_acc_pld_len = UNF_ADISC_ACC_PAYLOAD_LEN;
+ break;
+
+ case ELS_RRQ:
+ *els_acc_pld_len = UNF_RRQ_ACC_PAYLOAD_LEN;
+ break;
+
+ case ELS_SCR:
+ *els_acc_pld_len = UNF_SCR_RSP_PAYLOAD_LEN;
+ break;
+
+ case ELS_ECHO:
+ *els_acc_pld_len = UNF_ECHO_ACC_PAYLOAD_LEN;
+ break;
+
+ case ELS_REC:
+ *els_acc_pld_len = UNF_REC_ACC_PAYLOAD_LEN;
+ break;
+
+ default:
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_WARN, "[warn]Unknown ELS command(0x%x)",
+ els_cmnd);
+ ret = UNF_RETURN_ERROR;
+ break;
+ }
+
+ return ret;
+}
+
+struct unf_els_cmd_paylod_table {
+ u16 els_cmnd; /* ELS type */
+ u32 els_req_pld_len;
+ u32 els_rsp_pld_len;
+};
+
+static const struct unf_els_cmd_paylod_table els_pld_table_map[] = {
+ {ELS_FDISC, UNF_FDISC_PAYLOAD_LEN, UNF_FDISC_ACC_PAYLOAD_LEN},
+ {ELS_FLOGI, UNF_FLOGI_PAYLOAD_LEN, UNF_FLOGI_ACC_PAYLOAD_LEN},
+ {ELS_PLOGI, UNF_PLOGI_PAYLOAD_LEN, UNF_PLOGI_ACC_PAYLOAD_LEN},
+ {ELS_SCR, UNF_SCR_PAYLOAD_LEN, UNF_SCR_RSP_PAYLOAD_LEN},
+ {ELS_PDISC, UNF_PDISC_PAYLOAD_LEN, UNF_PDISC_ACC_PAYLOAD_LEN},
+ {ELS_LOGO, UNF_LOGO_PAYLOAD_LEN, UNF_LOGO_ACC_PAYLOAD_LEN},
+ {ELS_PRLO, UNF_PRLO_PAYLOAD_LEN, UNF_PRLO_ACC_PAYLOAD_LEN},
+ {ELS_ADISC, UNF_ADISC_PAYLOAD_LEN, UNF_ADISC_ACC_PAYLOAD_LEN},
+ {ELS_RRQ, UNF_RRQ_PAYLOAD_LEN, UNF_RRQ_ACC_PAYLOAD_LEN},
+ {ELS_RSCN, 0, UNF_RSCN_ACC_PAYLOAD_LEN},
+ {ELS_ECHO, UNF_ECHO_PAYLOAD_LEN, UNF_ECHO_ACC_PAYLOAD_LEN},
+ {ELS_REC, UNF_REC_PAYLOAD_LEN, UNF_REC_ACC_PAYLOAD_LEN}
+};
+
+static u32 spfc_get_els_req_acc_pld_len(u16 els_cmnd, u32 *req_pld_len, u32 *rsp_pld_len)
+{
+ u32 ret = RETURN_OK;
+ u32 i;
+
+ FC_CHECK_RETURN_VALUE(req_pld_len, UNF_RETURN_ERROR);
+
+ for (i = 0; i < (sizeof(els_pld_table_map) /
+ sizeof(struct unf_els_cmd_paylod_table));
+ i++) {
+ if (els_pld_table_map[i].els_cmnd == els_cmnd) {
+ *req_pld_len = els_pld_table_map[i].els_req_pld_len;
+ *rsp_pld_len = els_pld_table_map[i].els_rsp_pld_len;
+ return ret;
+ }
+ }
+
+ switch (els_cmnd) {
+ case ELS_PRLI:
+		/* If sirt is enabled, the PRLI ACC payload is extended by 12 bytes */
+ *req_pld_len = SPFC_GET_PRLI_PAYLOAD_LEN;
+ *rsp_pld_len = SPFC_GET_PRLI_PAYLOAD_LEN;
+
+ break;
+
+ default:
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Unknown ELS_CMD(0x%x)", els_cmnd);
+ ret = UNF_RETURN_ERROR;
+ break;
+ }
+
+ return ret;
+}
+
+static u32 spfc_check_parent_qinfo_valid(struct spfc_hba_info *hba, struct unf_frame_pkg *pkg,
+ struct spfc_parent_queue_info **prt_qinfo)
+{
+ if (!*prt_qinfo) {
+ if (pkg->type == UNF_PKG_ELS_REQ || pkg->type == UNF_PKG_ELS_REPLY) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send LS SID(0x%x) DID(0x%x) with null prtqinfo",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+ pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = SPFC_DEFAULT_RPORT_INDEX;
+ *prt_qinfo = spfc_find_parent_queue_info_by_pkg(hba, pkg);
+ if (!*prt_qinfo)
+ return UNF_RETURN_ERROR;
+ } else {
+ return UNF_RETURN_ERROR;
+ }
+ }
+
+ if (pkg->type == UNF_PKG_GS_REQ && SPFC_RPORT_NOT_OFFLOADED(*prt_qinfo)) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) send GS SID(0x%x) DID(0x%x), send GS Request before PLOGI",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+ return UNF_RETURN_ERROR;
+ }
+ return RETURN_OK;
+}
+
+static void spfc_get_pkt_cmnd_type_code(struct unf_frame_pkg *pkg,
+ u16 *ls_gs_cmnd_code,
+ u16 *ls_gs_cmnd_type)
+{
+ *ls_gs_cmnd_type = SPFC_GET_LS_GS_CMND_CODE(pkg->cmnd);
+ if (SPFC_PKG_IS_ELS_RSP(*ls_gs_cmnd_type)) {
+ *ls_gs_cmnd_code = SPFC_GET_ELS_RSP_CODE(pkg->cmnd);
+ } else if (pkg->type == UNF_PKG_GS_REQ) {
+ *ls_gs_cmnd_code = *ls_gs_cmnd_type;
+ } else {
+ *ls_gs_cmnd_code = *ls_gs_cmnd_type;
+ *ls_gs_cmnd_type = ELS_CMND;
+ }
+}
+
+static u32 spfc_get_gs_req_rsp_pld_len(u16 cmnd_code, u32 *gs_pld_len, u32 *gs_rsp_pld_len)
+{
+ FC_CHECK_RETURN_VALUE(gs_pld_len, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(gs_rsp_pld_len, UNF_RETURN_ERROR);
+
+ switch (cmnd_code) {
+ case NS_GPN_ID:
+ *gs_pld_len = UNF_GPNID_PAYLOAD_LEN;
+ *gs_rsp_pld_len = UNF_GPNID_RSP_PAYLOAD_LEN;
+ break;
+
+ case NS_GNN_ID:
+ *gs_pld_len = UNF_GNNID_PAYLOAD_LEN;
+ *gs_rsp_pld_len = UNF_GNNID_RSP_PAYLOAD_LEN;
+ break;
+
+ case NS_GFF_ID:
+ *gs_pld_len = UNF_GFFID_PAYLOAD_LEN;
+ *gs_rsp_pld_len = UNF_GFFID_RSP_PAYLOAD_LEN;
+ break;
+
+ case NS_GID_FT:
+ case NS_GID_PT:
+ *gs_pld_len = UNF_GID_PAYLOAD_LEN;
+ *gs_rsp_pld_len = UNF_GID_ACC_PAYLOAD_LEN;
+ break;
+
+ case NS_RFT_ID:
+ *gs_pld_len = UNF_RFTID_PAYLOAD_LEN;
+ *gs_rsp_pld_len = UNF_RFTID_RSP_PAYLOAD_LEN;
+ break;
+
+ case NS_RFF_ID:
+ *gs_pld_len = UNF_RFFID_PAYLOAD_LEN;
+ *gs_rsp_pld_len = UNF_RFFID_RSP_PAYLOAD_LEN;
+ break;
+ case NS_GA_NXT:
+ *gs_pld_len = UNF_GID_PAYLOAD_LEN;
+ *gs_rsp_pld_len = UNF_GID_ACC_PAYLOAD_LEN;
+ break;
+
+ case NS_GIEL:
+ *gs_pld_len = UNF_RFTID_RSP_PAYLOAD_LEN;
+ *gs_rsp_pld_len = UNF_GID_ACC_PAYLOAD_LEN;
+ break;
+
+ default:
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+			     "[warn]Unknown GS command type(0x%x)", cmnd_code);
+ return UNF_RETURN_ERROR;
+ }
+
+ return RETURN_OK;
+}
+
+static void *spfc_get_els_frame_addr(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg,
+ u16 els_cmnd_code, u16 els_cmnd_type,
+ u64 *phy_addr)
+{
+ void *frame_pld_addr = NULL;
+ dma_addr_t els_frame_addr = 0;
+
+ if (els_cmnd_code == ELS_ECHO) {
+ frame_pld_addr = (void *)UNF_GET_ECHO_PAYLOAD(pkg);
+ els_frame_addr = UNF_GET_ECHO_PAYLOAD_PHYADDR(pkg);
+ } else if (els_cmnd_code == ELS_RSCN) {
+ if (els_cmnd_type == ELS_CMND) {
+			/* Not supported */
+ frame_pld_addr = NULL;
+ els_frame_addr = 0;
+ } else {
+ frame_pld_addr = (void *)UNF_GET_RSCN_ACC_PAYLOAD(pkg);
+ els_frame_addr = pkg->unf_cmnd_pload_bl.buf_dma_addr +
+ sizeof(struct unf_fc_head);
+ }
+ } else {
+ frame_pld_addr = (void *)SPFC_GET_CMND_PAYLOAD_ADDR(pkg);
+ els_frame_addr = pkg->unf_cmnd_pload_bl.buf_dma_addr +
+ sizeof(struct unf_fc_head);
+ }
+ *phy_addr = els_frame_addr;
+ return frame_pld_addr;
+}
+
+static u32 spfc_get_frame_info(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, void **frame_pld_addr,
+ u32 *frame_pld_len, u64 *frame_phy_addr,
+ u32 *acc_pld_len)
+{
+ u32 ret = RETURN_OK;
+ u16 ls_gs_cmnd_code = SPFC_ZERO;
+ u16 ls_gs_cmnd_type = SPFC_ZERO;
+
+ spfc_get_pkt_cmnd_type_code(pkg, &ls_gs_cmnd_code, &ls_gs_cmnd_type);
+
+ if (pkg->type == UNF_PKG_GS_REQ) {
+ ret = spfc_get_gs_req_rsp_pld_len(ls_gs_cmnd_code,
+ frame_pld_len, acc_pld_len);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) send GS SID(0x%x) DID(0x%x), get error GS request and response payload length",
+ hba->port_cfg.port_id,
+ pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+
+ return ret;
+ }
+ *frame_pld_addr = (void *)(SPFC_GET_CMND_PAYLOAD_ADDR(pkg));
+ *frame_phy_addr = pkg->unf_cmnd_pload_bl.buf_dma_addr + sizeof(struct unf_fc_head);
+ if (ls_gs_cmnd_code == NS_GID_FT || ls_gs_cmnd_code == NS_GID_PT)
+ *frame_pld_addr = (void *)(UNF_GET_GID_PAYLOAD(pkg));
+ } else {
+ *frame_pld_addr = spfc_get_els_frame_addr(hba, pkg, ls_gs_cmnd_code,
+ ls_gs_cmnd_type, frame_phy_addr);
+ if (SPFC_PKG_IS_ELS_RSP(ls_gs_cmnd_type)) {
+ ret = spfc_get_els_rsp_pld_len(ls_gs_cmnd_type, ls_gs_cmnd_code,
+ frame_pld_len);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) get els cmd (0x%x) rsp len failed.",
+ hba->port_cfg.port_id,
+ ls_gs_cmnd_code);
+ return ret;
+ }
+ } else {
+ ret = spfc_get_els_req_acc_pld_len(ls_gs_cmnd_code, frame_pld_len,
+ acc_pld_len);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) get els cmd (0x%x) req and acc len failed.",
+ hba->port_cfg.port_id,
+ ls_gs_cmnd_code);
+ return ret;
+ }
+ }
+ }
+ return ret;
+}
+
+static u32
+spfc_send_ls_gs_via_parent(struct spfc_hba_info *hba, struct unf_frame_pkg *pkg,
+ struct spfc_parent_queue_info *prt_queue_info)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ls_gs_cmnd_code = SPFC_ZERO;
+ u16 ls_gs_cmnd_type = SPFC_ZERO;
+ u16 remote_exid = 0;
+ u16 hot_tag = 0;
+ struct spfc_parent_sq_info *parent_sq_info = NULL;
+ struct spfc_sqe tmp_sqe;
+ struct spfc_sqe *sqe = NULL;
+ void *frame_pld_addr = NULL;
+ u32 frame_pld_len = 0;
+ u32 acc_pld_len = 0;
+ u64 frame_pa = 0;
+ ulong flags = 0;
+ u16 ssqn = 0;
+ spinlock_t *prtq_state_lock = NULL;
+
+ ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
+ sqe = &tmp_sqe;
+ memset(sqe, 0, sizeof(struct spfc_sqe));
+
+ parent_sq_info = &prt_queue_info->parent_sq_info;
+ hot_tag = (u16)UNF_GET_HOTPOOL_TAG(pkg) + hba->exi_base;
+
+ spfc_get_pkt_cmnd_type_code(pkg, &ls_gs_cmnd_code, &ls_gs_cmnd_type);
+
+ ret = spfc_get_frame_info(hba, pkg, &frame_pld_addr, &frame_pld_len,
+ &frame_pa, &acc_pld_len);
+ if (ret != RETURN_OK)
+ return ret;
+
+ if (SPFC_PKG_IS_ELS_RSP(ls_gs_cmnd_type)) {
+ remote_exid = UNF_GET_OXID(pkg);
+ spfc_build_els_wqe_ts_rsp(sqe, prt_queue_info, pkg,
+ frame_pld_addr, ls_gs_cmnd_type,
+ ls_gs_cmnd_code);
+
+ /* Assemble the SQE Task Section Els Common part */
+ spfc_build_service_wqe_ts_common(&sqe->ts_sl, parent_sq_info->rport_index,
+ UNF_GET_RXID(pkg), remote_exid,
+ SPFC_LSW(frame_pld_len));
+ } else {
+ remote_exid = UNF_GET_RXID(pkg);
+		/* send els req, only use local_xid for hotpooltag */
+ spfc_build_els_wqe_ts_req(sqe, parent_sq_info,
+ prt_queue_info->parent_sts_scq_info.cqm_queue_id,
+ frame_pld_addr, pkg);
+ spfc_build_service_wqe_ts_common(&sqe->ts_sl, parent_sq_info->rport_index, hot_tag,
+ remote_exid, SPFC_LSW(frame_pld_len));
+ }
+ /* Assemble the SQE Control Section part */
+ spfc_build_service_wqe_ctrl_section(&sqe->ctrl_sl, SPFC_BYTES_TO_QW_NUM(SPFC_SQE_TS_SIZE),
+ SPFC_BYTES_TO_QW_NUM(sizeof(struct spfc_variable_sge)));
+
+ /* Build SGE */
+ spfc_build_els_gs_wqe_sge(sqe, frame_pld_addr, frame_pa, frame_pld_len,
+ parent_sq_info->context_id, hba);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) RPort(0x%x) send ELS/GS Type(0x%x) Code(0x%x) HotTag(0x%x)",
+ hba->port_cfg.port_id, parent_sq_info->rport_index, ls_gs_cmnd_type,
+ ls_gs_cmnd_code, hot_tag);
+ if (ls_gs_cmnd_code == ELS_PLOGI || ls_gs_cmnd_code == ELS_LOGO) {
+ ret = spfc_suspend_sqe_and_send_nop(hba, prt_queue_info, sqe, pkg);
+ return ret;
+ }
+ prtq_state_lock = &prt_queue_info->parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flags);
+ if (SPFC_RPORT_NOT_OFFLOADED(prt_queue_info)) {
+ spin_unlock_irqrestore(prtq_state_lock, flags);
+ /* Send PLOGI or PLOGI ACC or SCR if session not offload */
+ ret = spfc_send_els_via_default_session(hba, sqe, pkg, prt_queue_info);
+ } else {
+ spin_unlock_irqrestore(prtq_state_lock, flags);
+ ret = spfc_parent_sq_enqueue(parent_sq_info, sqe, ssqn);
+ }
+
+ return ret;
+}
+
+u32 spfc_send_ls_gs_cmnd(void *handle, struct unf_frame_pkg *pkg)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_hba_info *hba = NULL;
+ struct spfc_parent_queue_info *prt_qinfo = NULL;
+ u16 ls_gs_cmnd_code = SPFC_ZERO;
+ union unf_sfs_u *sfs_entry = NULL;
+ struct unf_rrq *rrq_pld = NULL;
+ u16 ox_id = 0;
+ u16 rx_id = 0;
+
+ /* Check Parameters */
+ FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(UNF_GET_SFS_ENTRY(pkg), UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(SPFC_GET_CMND_PAYLOAD_ADDR(pkg), UNF_RETURN_ERROR);
+
+ SPFC_CHECK_PKG_ALLOCTIME(pkg);
+ hba = (struct spfc_hba_info *)handle;
+ ls_gs_cmnd_code = SPFC_GET_LS_GS_CMND_CODE(pkg->cmnd);
+
+	/* RRQ requests need special processing */
+ if (ls_gs_cmnd_code == ELS_RRQ) {
+ sfs_entry = UNF_GET_SFS_ENTRY(pkg);
+ rrq_pld = &sfs_entry->rrq;
+ ox_id = (u16)(rrq_pld->oxid_rxid >> UNF_SHIFT_16);
+ rx_id = (u16)(rrq_pld->oxid_rxid & SPFC_RXID_MASK);
+ rrq_pld->oxid_rxid = (u32)ox_id << UNF_SHIFT_16 | rx_id;
+ }
+
+ prt_qinfo = spfc_find_parent_queue_info_by_pkg(hba, pkg);
+ ret = spfc_check_parent_qinfo_valid(hba, pkg, &prt_qinfo);
+
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
+ "[error]Port(0x%x) send ELS/GS SID(0x%x) DID(0x%x) check qinfo invalid",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+ return UNF_RETURN_ERROR;
+ }
+
+ ret = spfc_send_ls_gs_via_parent(hba, pkg, prt_qinfo);
+
+ return ret;
+}
+
+void spfc_save_login_parms_in_sq_info(struct spfc_hba_info *hba,
+ struct unf_port_login_parms *login_params)
+{
+ u32 rport_index = login_params->rport_index;
+ struct spfc_parent_sq_info *parent_sq_info = NULL;
+
+ if (rport_index >= UNF_SPFC_MAXRPORT_NUM) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[err]Port(0x%x) save login parms,but uplevel alloc invalid rport index: 0x%x",
+ hba->port_cfg.port_id, rport_index);
+
+ return;
+ }
+
+ parent_sq_info = &hba->parent_queue_mgr->parent_queue[rport_index].parent_sq_info;
+
+ parent_sq_info->plogi_co_parms.seq_cnt = login_params->seq_cnt;
+ parent_sq_info->plogi_co_parms.ed_tov = login_params->ed_tov;
+ parent_sq_info->plogi_co_parms.tx_mfs = (login_params->tx_mfs <
+ SPFC_DEFAULT_TX_MAX_FREAM_SIZE) ?
+ SPFC_DEFAULT_TX_MAX_FREAM_SIZE :
+ login_params->tx_mfs;
+ parent_sq_info->plogi_co_parms.ed_tov_time = login_params->ed_tov_timer_val;
+}
+
+static void
+spfc_recover_offloading_state(struct spfc_parent_queue_info *prt_queue_info,
+ enum spfc_parent_queue_state offload_state)
+{
+ ulong flags = 0;
+
+ spin_lock_irqsave(&prt_queue_info->parent_queue_state_lock, flags);
+
+ if (prt_queue_info->offload_state == SPFC_QUEUE_STATE_OFFLOADING)
+ prt_queue_info->offload_state = offload_state;
+
+ spin_unlock_irqrestore(&prt_queue_info->parent_queue_state_lock, flags);
+}
+
+static bool spfc_check_need_delay_offload(void *hba, struct unf_frame_pkg *pkg, u32 rport_index,
+ struct spfc_parent_queue_info *cur_prt_queue_info,
+ struct spfc_parent_queue_info **offload_prt_queue_info)
+{
+ ulong flags = 0;
+ struct spfc_parent_queue_info *prt_queue_info = NULL;
+ spinlock_t *prtq_state_lock = NULL;
+
+ prtq_state_lock = &cur_prt_queue_info->parent_queue_state_lock;
+ spin_lock_irqsave(prtq_state_lock, flags);
+
+ if (cur_prt_queue_info->offload_state == SPFC_QUEUE_STATE_OFFLOADING) {
+ spin_unlock_irqrestore(prtq_state_lock, flags);
+
+ prt_queue_info = spfc_find_offload_parent_queue(hba, pkg->frame_head.csctl_sid &
+ UNF_NPORTID_MASK,
+ pkg->frame_head.rctl_did &
+ UNF_NPORTID_MASK, rport_index);
+ if (prt_queue_info) {
+ *offload_prt_queue_info = prt_queue_info;
+ return true;
+ }
+ } else {
+ spin_unlock_irqrestore(prtq_state_lock, flags);
+ }
+
+ return false;
+}
+
+static u16 spfc_build_wqe_with_offload(struct spfc_hba_info *hba, struct spfc_sqe *sqe,
+ struct spfc_parent_queue_info *prt_queue_info,
+ struct unf_frame_pkg *pkg,
+ enum spfc_parent_queue_state last_offload_state)
+{
+ u32 tx_mfs = 2048;
+ u32 edtov_timer = 2000;
+ dma_addr_t ctx_pa = 0;
+ u16 els_cmnd_type = SPFC_ZERO;
+ u16 els_cmnd_code = SPFC_ZERO;
+ void *ctx_va = NULL;
+ struct spfc_parent_context *parent_ctx_info = NULL;
+ struct spfc_sw_section *sw_setction = NULL;
+ struct spfc_parent_sq_info *parent_sq_info = &prt_queue_info->parent_sq_info;
+ u16 offload_flag = 0;
+
+ els_cmnd_type = SPFC_GET_ELS_RSP_TYPE(pkg->cmnd);
+ if (SPFC_PKG_IS_ELS_RSP(els_cmnd_type)) {
+ els_cmnd_code = SPFC_GET_ELS_RSP_CODE(pkg->cmnd);
+ } else {
+ els_cmnd_code = els_cmnd_type;
+ els_cmnd_type = ELS_CMND;
+ }
+
+ offload_flag = SPFC_CHECK_NEED_OFFLOAD(els_cmnd_code, els_cmnd_type, last_offload_state);
+
+ parent_ctx_info = (struct spfc_parent_context *)(prt_queue_info->parent_ctx.parent_ctx);
+ sw_setction = &parent_ctx_info->sw_section;
+
+ sw_setction->tx_mfs = cpu_to_be16((u16)(tx_mfs));
+ sw_setction->e_d_tov_timer_val = cpu_to_be32(edtov_timer);
+
+ spfc_big_to_cpu32(&sw_setction->sw_ctxt_misc.pctxt_val0,
+ sizeof(sw_setction->sw_ctxt_misc.pctxt_val0));
+ sw_setction->sw_ctxt_misc.dw.port_id = SPFC_GET_NETWORK_PORT_ID(hba);
+ spfc_cpu_to_big32(&sw_setction->sw_ctxt_misc.pctxt_val0,
+ sizeof(sw_setction->sw_ctxt_misc.pctxt_val0));
+
+ spfc_big_to_cpu32(&sw_setction->sw_ctxt_config.pctxt_val1,
+ sizeof(sw_setction->sw_ctxt_config.pctxt_val1));
+ spfc_cpu_to_big32(&sw_setction->sw_ctxt_config.pctxt_val1,
+ sizeof(sw_setction->sw_ctxt_config.pctxt_val1));
+
+	/* Fill in the context for the chip */
+ ctx_pa = prt_queue_info->parent_ctx.cqm_parent_ctx_obj->paddr;
+ ctx_va = prt_queue_info->parent_ctx.cqm_parent_ctx_obj->vaddr;
+
+	/* No need to write the key and no need to do BIG TO CPU32 */
+ memcpy(ctx_va, prt_queue_info->parent_ctx.parent_ctx, sizeof(struct spfc_parent_context));
+
+ if (SPFC_PKG_IS_ELS_RSP(els_cmnd_type)) {
+ sqe->ts_sl.cont.els_rsp.context_gpa_hi = SPFC_HIGH_32_BITS(ctx_pa);
+ sqe->ts_sl.cont.els_rsp.context_gpa_lo = SPFC_LOW_32_BITS(ctx_pa);
+ sqe->ts_sl.cont.els_rsp.wd1.offload_flag = offload_flag;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]sid 0x%x, did 0x%x, GPA HIGH 0x%x,GPA LOW 0x%x, scq 0x%x,offload flag 0x%x",
+ parent_sq_info->local_port_id,
+ parent_sq_info->remote_port_id,
+ sqe->ts_sl.cont.els_rsp.context_gpa_hi,
+ sqe->ts_sl.cont.els_rsp.context_gpa_lo,
+ prt_queue_info->parent_sts_scq_info.cqm_queue_id,
+ offload_flag);
+ } else {
+ sqe->ts_sl.cont.t_els_gs.context_gpa_hi = SPFC_HIGH_32_BITS(ctx_pa);
+ sqe->ts_sl.cont.t_els_gs.context_gpa_lo = SPFC_LOW_32_BITS(ctx_pa);
+ sqe->ts_sl.cont.t_els_gs.wd4.offload_flag = offload_flag;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]sid 0x%x, did 0x%x, GPA HIGH 0x%x,GPA LOW 0x%x, scq 0x%x,offload flag 0x%x",
+ parent_sq_info->local_port_id,
+ parent_sq_info->remote_port_id,
+ sqe->ts_sl.cont.t_els_gs.context_gpa_hi,
+ sqe->ts_sl.cont.t_els_gs.context_gpa_lo,
+ prt_queue_info->parent_sts_scq_info.cqm_queue_id,
+ offload_flag);
+ }
+
+ if (offload_flag) {
+ prt_queue_info->offload_state = SPFC_QUEUE_STATE_OFFLOADING;
+ parent_sq_info->need_offloaded = SPFC_NEED_DO_OFFLOAD;
+ }
+
+ return offload_flag;
+}
+
+u32 spfc_send_els_via_default_session(struct spfc_hba_info *hba, struct spfc_sqe *io_sqe,
+ struct unf_frame_pkg *pkg,
+ struct spfc_parent_queue_info *prt_queue_info)
+{
+ ulong flags = 0;
+ bool sqe_delay = false;
+ u32 ret = UNF_RETURN_ERROR;
+ u16 els_cmnd_code = SPFC_ZERO;
+ u16 els_cmnd_type = SPFC_ZERO;
+ u16 ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
+ u32 rport_index = pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX];
+ struct spfc_sqe *sqe = io_sqe;
+ struct spfc_parent_queue_info *default_prt_queue_info = NULL;
+ struct spfc_parent_sq_info *parent_sq_info = &prt_queue_info->parent_sq_info;
+ struct spfc_parent_queue_info *offload_queue_info = NULL;
+ enum spfc_parent_queue_state last_offload_state = SPFC_QUEUE_STATE_INITIALIZED;
+ struct spfc_delay_destroy_ctrl_info delay_ctl_info;
+ u16 offload_flag = 0;
+ u32 default_index = SPFC_DEFAULT_RPORT_INDEX;
+
+ memset(&delay_ctl_info, 0, sizeof(struct spfc_delay_destroy_ctrl_info));
+ /* Determine the ELS type in pkg */
+ els_cmnd_type = SPFC_GET_LS_GS_CMND_CODE(pkg->cmnd);
+
+ if (SPFC_PKG_IS_ELS_RSP(els_cmnd_type)) {
+ els_cmnd_code = SPFC_GET_ELS_RSP_CODE(pkg->cmnd);
+ } else {
+ els_cmnd_code = els_cmnd_type;
+ els_cmnd_type = ELS_CMND;
+ }
+
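+	/* Sample the offload state and build the WQE under the parent-queue
+	 * state lock so the offload decision matches the current state
+	 */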
+ spin_lock_irqsave(&prt_queue_info->parent_queue_state_lock, flags);
+
+ last_offload_state = prt_queue_info->offload_state;
+
+ offload_flag = spfc_build_wqe_with_offload(hba, sqe, prt_queue_info,
+ pkg, last_offload_state);
+
+ spin_unlock_irqrestore(&prt_queue_info->parent_queue_state_lock, flags);
+
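+	/* No offload triggered: route the ELS through the default session;
+	 * otherwise the SQE may have to be delayed until offload completes
+	 */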
+ if (!offload_flag) {
+ default_prt_queue_info = &hba->parent_queue_mgr->parent_queue[default_index];
+ if (!default_prt_queue_info) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
+ "[ERR]cmd(0x%x), type(0x%x) send fail, default session null",
+ els_cmnd_code, els_cmnd_type);
+ return UNF_RETURN_ERROR;
+ }
+ parent_sq_info = &default_prt_queue_info->parent_sq_info;
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]cmd(0x%x), type(0x%x) send via default session",
+ els_cmnd_code, els_cmnd_type);
+ } else {
+		/* This xid is needed to judge delayed offload; it is written
+		 * again when the SQE is enqueued
+		 */
+ sqe->ts_sl.xid = parent_sq_info->context_id;
+ sqe_delay = spfc_check_need_delay_offload(hba, pkg, rport_index, prt_queue_info,
+ &offload_queue_info);
+
+ if (sqe_delay) {
+ ret = spfc_push_delay_sqe(hba, offload_queue_info, sqe, pkg);
+ if (ret == RETURN_OK) {
+ spfc_recover_offloading_state(prt_queue_info, last_offload_state);
+ return ret;
+ }
+ }
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
+ "[info]cmd(0x%x), type(0x%x) do secretly offload",
+ els_cmnd_code, els_cmnd_type);
+ }
+
+ ret = spfc_parent_sq_enqueue(parent_sq_info, sqe, ssqn);
+
+ if (ret != RETURN_OK) {
+ spfc_recover_offloading_state(prt_queue_info, last_offload_state);
+
+ spin_lock_irqsave(&prt_queue_info->parent_queue_state_lock,
+ flags);
+
+ if (prt_queue_info->parent_sq_info.destroy_sqe.valid) {
+ memcpy(&delay_ctl_info, &prt_queue_info->parent_sq_info.destroy_sqe,
+ sizeof(struct spfc_delay_destroy_ctrl_info));
+
+ prt_queue_info->parent_sq_info.destroy_sqe.valid = false;
+ }
+
+ spin_unlock_irqrestore(&prt_queue_info->parent_queue_state_lock, flags);
+
+ spfc_pop_destroy_parent_queue_sqe((void *)hba, &delay_ctl_info);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) RPort(0x%x) send ELS Type(0x%x) Code(0x%x) fail, recover offload status(%u)",
+ hba->port_cfg.port_id, rport_index, els_cmnd_type,
+ els_cmnd_code, prt_queue_info->offload_state);
+ }
+
+ return ret;
+}
+
+static u32 spfc_rcv_ls_gs_rsp_payload(struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u32 hot_tag,
+ u8 *els_pld_buf, u32 pld_len)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
+ if (pkg->type == UNF_PKG_GS_REQ_DONE)
+ spfc_big_to_cpu32(els_pld_buf, pld_len);
+ else
+ pkg->byte_orders |= SPFC_BIT_2;
+
+ pkg->unf_cmnd_pload_bl.buffer_ptr = els_pld_buf;
+ pkg->unf_cmnd_pload_bl.length = pld_len;
+
+ pkg->last_pkg_flag = UNF_PKG_NOT_LAST_RESPONSE;
+
+ UNF_LOWLEVEL_RECEIVE_LS_GS_PKG(ret, hba->lport, pkg);
+
+ return ret;
+}
+
+u32 spfc_scq_recv_abts_rsp(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ /* Default path, which is sent from SCQ to the driver */
+ u8 status = 0;
+ u32 ret = UNF_RETURN_ERROR;
+ u32 ox_id = INVALID_VALUE32;
+ u32 hot_tag = INVALID_VALUE32;
+ struct unf_frame_pkg pkg = {0};
+ struct spfc_scqe_rcv_abts_rsp *abts_rsp = NULL;
+
+ abts_rsp = &scqe->rcv_abts_rsp;
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = abts_rsp->magic_num;
+
+ ox_id = (u32)(abts_rsp->wd0.ox_id);
+
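+	/* The hotpooltag in the SCQE is a chip exchange id: it must fall in
+	 * [exi_base, exi_base + exi_count) before it is converted back to a
+	 * driver hot-pool index
+	 */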
+ hot_tag = abts_rsp->wd1.hotpooltag & UNF_ORIGIN_HOTTAG_MASK;
+ if (unlikely(hot_tag < (u32)hba->exi_base ||
+ hot_tag >= (u32)(hba->exi_base + hba->exi_count))) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) has bad HotTag(0x%x) for bls_rsp",
+ hba->port_cfg.port_id, hot_tag);
+
+ status = UNF_IO_FAILED;
+ hot_tag = INVALID_VALUE32;
+ } else {
+ hot_tag -= hba->exi_base;
+ if (unlikely(SPFC_SCQE_HAS_ERRCODE(scqe))) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) BLS response has error code(0x%x) tag(0x%x)",
+ hba->port_cfg.port_id,
+ SPFC_GET_SCQE_STATUS(scqe), (u32)hot_tag);
+
+ status = UNF_IO_FAILED;
+ } else {
+ pkg.frame_head.rctl_did = abts_rsp->wd3.did;
+ pkg.frame_head.csctl_sid = abts_rsp->wd4.sid;
+ pkg.frame_head.oxid_rxid = (u32)(abts_rsp->wd0.rx_id) | ox_id <<
+ UNF_SHIFT_16;
+
+ /* BLS_ACC/BLS_RJT: IO_succeed */
+ if (abts_rsp->wd2.fh_rctrl == SPFC_RCTL_BLS_ACC) {
+ status = UNF_IO_SUCCESS;
+ } else if (abts_rsp->wd2.fh_rctrl == SPFC_RCTL_BLS_RJT) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) ABTS RJT: %08x-%08x-%08x",
+ hba->port_cfg.port_id,
+ abts_rsp->payload[ARRAY_INDEX_0],
+ abts_rsp->payload[ARRAY_INDEX_1],
+ abts_rsp->payload[ARRAY_INDEX_2]);
+
+ status = UNF_IO_SUCCESS;
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) BLS response RCTL is error",
+ hba->port_cfg.port_id);
+ SPFC_ERR_IO_STAT(hba, SPFC_SCQE_ABTS_RSP);
+ status = UNF_IO_FAILED;
+ }
+ }
+ }
+
+ /* Set PKG/exchange status & Process BLS_RSP */
+ pkg.status = status;
+ ret = spfc_rcv_bls_rsp(hba, &pkg, hot_tag);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) recv ABTS rsp OX_ID(0x%x) RX_ID(0x%x) HotTag(0x%x) SID(0x%x) DID(0x%x) %s",
+ hba->port_cfg.port_id, ox_id, abts_rsp->wd0.rx_id, hot_tag,
+ abts_rsp->wd4.sid, abts_rsp->wd3.did,
+ (ret == RETURN_OK) ? "OK" : "ERROR");
+
+ return ret;
+}
+
+u32 spfc_recv_els_cmnd(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u8 *els_pld, u32 pld_len,
+ bool first)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+	/* Convert the payload to little endian */
+ spfc_big_to_cpu32(els_pld, pld_len);
+
+ pkg->type = UNF_PKG_ELS_REQ;
+
+ pkg->unf_cmnd_pload_bl.buffer_ptr = els_pld;
+
+ /* Payload length */
+ pkg->unf_cmnd_pload_bl.length = pld_len;
+
+	/* Obtain the Cmnd type from the payload. The Cmnd is in little endian */
+ if (first)
+ pkg->cmnd = UNF_GET_FC_PAYLOAD_ELS_CMND(pkg->unf_cmnd_pload_bl.buffer_ptr);
+
+ /* Errors have been processed in SPFC_RecvElsError */
+ pkg->status = UNF_IO_SUCCESS;
+
+ /* Send PKG to the CM layer */
+ UNF_LOWLEVEL_RECEIVE_LS_GS_PKG(ret, hba->lport, pkg);
+
+ if (ret != RETURN_OK) {
+ pkg->rx_or_ox_id = UNF_PKG_FREE_RXID;
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = INVALID_VALUE32;
+ pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = INVALID_VALUE32;
+ ret = spfc_free_xid((void *)hba, pkg);
+
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) recv %s ox_id(0x%x) RXID(0x%x) PldLen(0x%x) failed, Free xid %s",
+ hba->port_cfg.port_id,
+ UNF_GET_FC_HEADER_RCTL(&pkg->frame_head) == SPFC_FC_RCTL_ELS_REQ ?
+ "ELS REQ" : "ELS RSP",
+ UNF_GET_OXID(pkg), UNF_GET_RXID(pkg), pld_len,
+ (ret == RETURN_OK) ? "OK" : "ERROR");
+ }
+
+ return ret;
+}
+
+u32 spfc_rcv_ls_gs_rsp(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u32 hot_tag)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
+ if (pkg->type == UNF_PKG_ELS_REQ_DONE)
+ pkg->byte_orders |= SPFC_BIT_2;
+
+ pkg->last_pkg_flag = UNF_PKG_LAST_RESPONSE;
+
+ UNF_LOWLEVEL_RECEIVE_LS_GS_PKG(ret, hba->lport, pkg);
+
+ return ret;
+}
+
+u32 spfc_rcv_els_rsp_sts(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u32 hot_tag)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ pkg->type = UNF_PKG_ELS_REPLY_DONE;
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
+
+ UNF_LOWLEVEL_SEND_ELS_DONE(ret, hba->lport, pkg);
+
+ return ret;
+}
+
+u32 spfc_rcv_bls_rsp(const struct spfc_hba_info *hba, struct unf_frame_pkg *pkg,
+ u32 hot_tag)
+{
+	/*
+	 * 1. from SCQ (normal)
+	 * 2. from Root RQ (parent does not exist)
+	 *
+	 * single frame, single sequence
+	 */
+ u32 ret = UNF_RETURN_ERROR;
+
+ pkg->type = UNF_PKG_BLS_REQ_DONE;
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
+ pkg->last_pkg_flag = UNF_PKG_LAST_RESPONSE;
+
+ UNF_LOWLEVEL_RECEIVE_BLS_PKG(ret, hba->lport, pkg);
+
+ return ret;
+}
+
+u32 spfc_rsv_bls_rsp_sts(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u32 rx_id)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ pkg->type = UNF_PKG_BLS_REPLY_DONE;
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = rx_id;
+
+ UNF_LOWLEVEL_RECEIVE_BLS_PKG(ret, hba->lport, pkg);
+
+ return ret;
+}
+
+u32 spfc_rcv_tmf_marker_sts(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u32 hot_tag)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
+
+ /* Send PKG info to COM */
+ UNF_LOWLEVEL_RECEIVE_MARKER_STS(ret, hba->lport, pkg);
+
+ return ret;
+}
+
+u32 spfc_rcv_abts_marker_sts(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u32 hot_tag)
+{
+ u32 ret = UNF_RETURN_ERROR;
+
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
+
+ UNF_LOWLEVEL_RECEIVE_ABTS_MARKER_STS(ret, hba->lport, pkg);
+
+ return ret;
+}
+
+static void spfc_scqe_error_pre_proc(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ /* Currently, only printing and statistics collection are performed */
+ SPFC_ERR_IO_STAT(hba, SPFC_GET_SCQE_TYPE(scqe));
+ SPFC_SCQ_ERR_TYPE_STAT(hba, SPFC_GET_SCQE_STATUS(scqe));
+
+ FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_WARN,
+		     "[warn]Port(0x%x)-Task_type(%u) SCQE contains error code(%u), additional info(0x%x)",
+ hba->port_cfg.port_id, scqe->common.ch.wd0.task_type,
+ scqe->common.ch.wd0.err_code, scqe->common.conn_id);
+}
+
+void *spfc_get_els_buf_by_user_id(struct spfc_hba_info *hba, u16 user_id)
+{
+ struct spfc_drq_buff_entry *srq_buf_entry = NULL;
+ struct spfc_srq_info *srq_info = NULL;
+
+ FC_CHECK_RETURN_VALUE(hba, NULL);
+
+ srq_info = &hba->els_srq_info;
+ FC_CHECK_RETURN_VALUE(user_id < srq_info->valid_wqe_num, NULL);
+
+ srq_buf_entry = &srq_info->els_buff_entry_head[user_id];
+
+ return srq_buf_entry->buff_addr;
+}
+
+static u32 spfc_check_srq_buf_valid(struct spfc_hba_info *hba,
+ u16 *buf_id_array, u32 buf_num)
+{
+ u32 index = 0;
+ u32 buf_id = 0;
+ void *srq_buf = NULL;
+
+ for (index = 0; index < buf_num; index++) {
+ buf_id = buf_id_array[index];
+
+ if (buf_id < hba->els_srq_info.valid_wqe_num)
+ srq_buf = spfc_get_els_buf_by_user_id(hba, (u16)buf_id);
+ else
+ srq_buf = NULL;
+
+ if (!srq_buf) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) get srq buffer user id(0x%x) is null",
+ hba->port_cfg.port_id, buf_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ }
+
+ return RETURN_OK;
+}
+
+static void spfc_reclaim_srq_buf(struct spfc_hba_info *hba, u16 *buf_id_array,
+ u32 buf_num)
+{
+ u32 index = 0;
+ u32 buf_id = 0;
+ void *srq_buf = NULL;
+
+ for (index = 0; index < buf_num; index++) {
+ buf_id = buf_id_array[index];
+ if (buf_id < hba->els_srq_info.valid_wqe_num)
+ srq_buf = spfc_get_els_buf_by_user_id(hba, (u16)buf_id);
+ else
+ srq_buf = NULL;
+
+		/* A NULL buffer means the buffer id is invalid; stop
+		 * reclaiming and exit directly.
+		 */
+ if (!srq_buf)
+ break;
+
+ spfc_post_els_srq_wqe(&hba->els_srq_info, (u16)buf_id);
+ }
+}
+
+static u32 spfc_check_ls_gs_valid(struct spfc_hba_info *hba, union spfc_scqe *scqe,
+ struct unf_frame_pkg *pkg, u16 *buf_id_array,
+ u32 buf_num, u32 frame_len)
+{
+ u32 hot_tag;
+
+ hot_tag = UNF_GET_HOTPOOL_TAG(pkg);
+
+	/* Discard the frame directly if it is too short, the SCQE carries an
+	 * error code, or the buffer count exceeds the SRQ limit
+	 */
+ if ((sizeof(struct spfc_fc_frame_header) > frame_len) ||
+ (SPFC_SCQE_HAS_ERRCODE(scqe)) || buf_num > SPFC_ELS_SRQ_BUF_NUM) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) get scqe type(0x%x) payload len(0x%x),scq status(0x%x),user id num(0x%x) abnormal",
+ hba->port_cfg.port_id, SPFC_GET_SCQE_TYPE(scqe), frame_len,
+ SPFC_GET_SCQE_STATUS(scqe), buf_num);
+
+ /* ELS RSP Special Processing */
+ if (SPFC_GET_SCQE_TYPE(scqe) == SPFC_SCQE_ELS_RSP ||
+ SPFC_GET_SCQE_TYPE(scqe) == SPFC_SCQE_GS_RSP) {
+ if (SPFC_SCQE_ERR_TO_CM(scqe)) {
+ pkg->status = UNF_IO_FAILED;
+ (void)spfc_rcv_ls_gs_rsp(hba, pkg, hot_tag);
+ } else {
+ if (SPFC_GET_SCQE_TYPE(scqe) == SPFC_SCQE_ELS_RSP)
+ SPFC_HBA_STAT(hba, SPFC_STAT_ELS_RSP_EXCH_REUSE);
+ else
+ SPFC_HBA_STAT(hba, SPFC_STAT_GS_RSP_EXCH_REUSE);
+ }
+ }
+
+ /* Reclaim srq */
+ if (buf_num <= SPFC_ELS_SRQ_BUF_NUM)
+ spfc_reclaim_srq_buf(hba, buf_id_array, buf_num);
+
+ return UNF_RETURN_ERROR;
+ }
+
+	/* ELS CMD: check the validity of the buffers sent by the ucode */
+ if (SPFC_GET_SCQE_TYPE(scqe) == SPFC_SCQE_ELS_CMND) {
+ if (spfc_check_srq_buf_valid(hba, buf_id_array, buf_num) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) get els cmnd scqe user id num(0x%x) abnormal, as some srq buff is null",
+ hba->port_cfg.port_id, buf_num);
+
+ spfc_reclaim_srq_buf(hba, buf_id_array, buf_num);
+
+ return UNF_RETURN_ERROR;
+ }
+ }
+
+ return RETURN_OK;
+}
+
+u32 spfc_scq_recv_els_cmnd(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ u32 ret = RETURN_OK;
+ u32 pld_len = 0;
+ u32 header_len = 0;
+ u32 frame_len = 0;
+ u32 rcv_data_len = 0;
+ u32 max_buf_num = 0;
+ u16 buf_id = 0;
+ u32 index = 0;
+ u8 *pld_addr = NULL;
+ struct unf_frame_pkg pkg = {0};
+ struct spfc_scqe_rcv_els_cmd *els_cmd = NULL;
+ struct spfc_fc_frame_header *els_frame = NULL;
+ struct spfc_fc_frame_header tmp_frame = {0};
+ void *els_buf = NULL;
+ bool first = false;
+
+ els_cmd = &scqe->rcv_els_cmd;
+ frame_len = els_cmd->wd3.data_len;
+ max_buf_num = els_cmd->wd3.user_id_num;
+ spfc_swap_16_in_32((u32 *)els_cmd->user_id, SPFC_LS_GS_USERID_LEN);
+
+ pkg.xchg_contex = NULL;
+ pkg.status = UNF_IO_SUCCESS;
+
+	/* Check the error code and buffer validity; on any exception,
+	 * discard the frame
+	 */
+ ret = spfc_check_ls_gs_valid(hba, scqe, &pkg, els_cmd->user_id,
+ max_buf_num, frame_len);
+ if (ret != RETURN_OK) {
+ pkg.rx_or_ox_id = UNF_PKG_FREE_RXID;
+ pkg.frame_head.oxid_rxid =
+ (u32)(els_cmd->wd2.rx_id) | (u32)(els_cmd->wd2.ox_id) << UNF_SHIFT_16;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = INVALID_VALUE32;
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = INVALID_VALUE32;
+ pkg.frame_head.csctl_sid = els_cmd->wd1.sid;
+ pkg.frame_head.rctl_did = els_cmd->wd0.did;
+ spfc_free_xid((void *)hba, &pkg);
+ return RETURN_OK;
+ }
+
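+	/* The ELS frame may span several SRQ buffers: the first one carries
+	 * the FC header, later ones carry payload only; each slice is pushed
+	 * to the CM layer and its SRQ buffer is recycled
+	 */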
+ /* Send data to COM cyclically */
+ for (index = 0; index < max_buf_num; index++) {
+		/* Abnormal case: record it only, no further processing for now */
+ if (rcv_data_len >= frame_len) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+				     "[err]Port(0x%x) get els cmd data len(0x%x) is bigger than frame len(0x%x)",
+ hba->port_cfg.port_id, rcv_data_len, frame_len);
+ }
+
+ buf_id = (u16)els_cmd->user_id[index];
+ els_buf = spfc_get_els_buf_by_user_id(hba, buf_id);
+
+		/* Obtain the payload address */
+ pld_addr = (u8 *)(els_buf);
+ header_len = 0;
+ first = false;
+ if (index == 0) {
+ els_frame = (struct spfc_fc_frame_header *)els_buf;
+ pld_addr = (u8 *)(els_frame + 1);
+
+ header_len = sizeof(struct spfc_fc_frame_header);
+ first = true;
+
+ memcpy(&tmp_frame, els_frame, sizeof(struct spfc_fc_frame_header));
+ spfc_big_to_cpu32(&tmp_frame, sizeof(struct spfc_fc_frame_header));
+ memcpy(&pkg.frame_head, &tmp_frame, sizeof(pkg.frame_head));
+ pkg.frame_head.oxid_rxid = (u32)((pkg.frame_head.oxid_rxid &
+ SPFC_OXID_MASK) | (els_cmd->wd2.rx_id));
+ }
+
+		/* Calculate the payload length */
+ pkg.last_pkg_flag = 0;
+ pld_len = SPFC_SRQ_ELS_SGE_LEN;
+
+ if ((rcv_data_len + SPFC_SRQ_ELS_SGE_LEN) >= frame_len) {
+ pkg.last_pkg_flag = 1;
+ pld_len = frame_len - rcv_data_len;
+ }
+
+ pkg.class_mode = els_cmd->wd0.class_mode;
+
+ /* Push data to COM */
+ if (ret == RETURN_OK) {
+ ret = spfc_recv_els_cmnd(hba, &pkg, pld_addr,
+ (pld_len - header_len), first);
+ }
+
+ /* Reclaim srq buffer */
+ spfc_post_els_srq_wqe(&hba->els_srq_info, buf_id);
+
+ rcv_data_len += pld_len;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) recv ELS Type(0x%x) Cmnd(0x%x) ox_id(0x%x) RXID(0x%x) SID(0x%x) DID(0x%x) %u",
+ hba->port_cfg.port_id, pkg.type, pkg.cmnd, els_cmd->wd2.ox_id,
+ els_cmd->wd2.rx_id, els_cmd->wd1.sid, els_cmd->wd0.did, ret);
+
+ return ret;
+}
+
+static u32 spfc_get_ls_gs_pld_len(struct spfc_hba_info *hba, u32 rcv_data_len, u32 frame_len)
+{
+ u32 pld_len;
+
+	/* Abnormal case: record it only, no further processing for now */
+ if (rcv_data_len >= frame_len) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x) get els rsp data len(0x%x) is bigger than frame len(0x%x)",
+ hba->port_cfg.port_id, rcv_data_len, frame_len);
+ }
+
+ pld_len = SPFC_SRQ_ELS_SGE_LEN;
+ if ((rcv_data_len + SPFC_SRQ_ELS_SGE_LEN) >= frame_len)
+ pld_len = frame_len - rcv_data_len;
+
+ return pld_len;
+}
+
+u32 spfc_scq_recv_ls_gs_rsp(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ u32 ret = RETURN_OK;
+ u32 pld_len = 0;
+ u32 header_len = 0;
+ u32 frame_len = 0;
+ u32 rcv_data_len = 0;
+ u32 max_buf_num = 0;
+ u16 buf_id = 0;
+ u32 hot_tag = INVALID_VALUE32;
+ u32 index = 0;
+ u32 ox_id = (~0);
+ struct unf_frame_pkg pkg = {0};
+ struct spfc_scqe_rcv_els_gs_rsp *ls_gs_rsp_scqe = NULL;
+ struct spfc_fc_frame_header *els_frame = NULL;
+ void *ls_gs_buf = NULL;
+ u8 *pld_addr = NULL;
+ u8 task_type;
+
+ ls_gs_rsp_scqe = &scqe->rcv_els_gs_rsp;
+ frame_len = ls_gs_rsp_scqe->wd2.data_len;
+ max_buf_num = ls_gs_rsp_scqe->wd4.user_id_num;
+ spfc_swap_16_in_32((u32 *)ls_gs_rsp_scqe->user_id, SPFC_LS_GS_USERID_LEN);
+
+ ox_id = ls_gs_rsp_scqe->wd1.ox_id;
+ hot_tag = ((u16)(ls_gs_rsp_scqe->wd5.hotpooltag) & UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+ pkg.frame_head.oxid_rxid = (u32)(ls_gs_rsp_scqe->wd1.rx_id) | ox_id << UNF_SHIFT_16;
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = ls_gs_rsp_scqe->magic_num;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
+ pkg.frame_head.csctl_sid = ls_gs_rsp_scqe->wd4.sid;
+ pkg.frame_head.rctl_did = ls_gs_rsp_scqe->wd3.did;
+ pkg.status = UNF_IO_SUCCESS;
+ pkg.type = UNF_PKG_ELS_REQ_DONE;
+
+ task_type = SPFC_GET_SCQE_TYPE(scqe);
+ if (task_type == SPFC_SCQE_GS_RSP) {
+ if (ls_gs_rsp_scqe->wd3.end_rsp)
+ SPFC_HBA_STAT(hba, SPFC_STAT_LAST_GS_SCQE);
+ pkg.type = UNF_PKG_GS_REQ_DONE;
+ }
+
+	/* Handle the exception first: when the LS/GS RSP carries an error
+	 * code, only the ox_id is available to report the error to the CM
+	 * layer.
+	 */
+ ret = spfc_check_ls_gs_valid(hba, scqe, &pkg, ls_gs_rsp_scqe->user_id,
+ max_buf_num, frame_len);
+ if (ret != RETURN_OK)
+ return RETURN_OK;
+
+ if (ls_gs_rsp_scqe->wd3.echo_rsp) {
+ pkg.private_data[PKG_PRIVATE_ECHO_CMD_RCV_TIME] =
+ ls_gs_rsp_scqe->user_id[ARRAY_INDEX_5];
+ pkg.private_data[PKG_PRIVATE_ECHO_RSP_SND_TIME] =
+ ls_gs_rsp_scqe->user_id[ARRAY_INDEX_6];
+ pkg.private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME] =
+ ls_gs_rsp_scqe->user_id[ARRAY_INDEX_7];
+ pkg.private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME] =
+ ls_gs_rsp_scqe->user_id[ARRAY_INDEX_8];
+ }
+
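+	/* Walk the user_id buffer list: strip the FC header from the first
+	 * buffer, push each payload slice, and return every SRQ buffer
+	 */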
+ /* Send data to COM cyclically */
+ for (index = 0; index < max_buf_num; index++) {
+ /* Obtain buffer address */
+ ls_gs_buf = NULL;
+ buf_id = (u16)ls_gs_rsp_scqe->user_id[index];
+ ls_gs_buf = spfc_get_els_buf_by_user_id(hba, buf_id);
+
+ if (unlikely(!ls_gs_buf)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) ox_id(0x%x) RXID(0x%x) SID(0x%x) DID(0x%x) Index(0x%x) get els rsp buff user id(0x%x) abnormal",
+ hba->port_cfg.port_id, ox_id,
+ ls_gs_rsp_scqe->wd1.rx_id, ls_gs_rsp_scqe->wd4.sid,
+ ls_gs_rsp_scqe->wd3.did, index, buf_id);
+
+ if (index == 0) {
+ pkg.status = UNF_IO_FAILED;
+ ret = spfc_rcv_ls_gs_rsp(hba, &pkg, hot_tag);
+ }
+
+ return ret;
+ }
+
+ header_len = 0;
+ pld_addr = (u8 *)(ls_gs_buf);
+ if (index == 0) {
+ header_len = sizeof(struct spfc_fc_frame_header);
+ els_frame = (struct spfc_fc_frame_header *)ls_gs_buf;
+ pld_addr = (u8 *)(els_frame + 1);
+ }
+
+		/* Calculate the payload length */
+ pld_len = spfc_get_ls_gs_pld_len(hba, rcv_data_len, frame_len);
+
+ /* Push data to COM */
+ if (ret == RETURN_OK) {
+ ret = spfc_rcv_ls_gs_rsp_payload(hba, &pkg, hot_tag, pld_addr,
+ (pld_len - header_len));
+ }
+
+ /* Reclaim srq buffer */
+ spfc_post_els_srq_wqe(&hba->els_srq_info, buf_id);
+
+ rcv_data_len += pld_len;
+ }
+
+ if (ls_gs_rsp_scqe->wd3.end_rsp && ret == RETURN_OK)
+ ret = spfc_rcv_ls_gs_rsp(hba, &pkg, hot_tag);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) receive LS/GS RSP ox_id(0x%x) RXID(0x%x) SID(0x%x) DID(0x%x) end_rsp(0x%x) user_num(0x%x)",
+ hba->port_cfg.port_id, ox_id, ls_gs_rsp_scqe->wd1.rx_id,
+ ls_gs_rsp_scqe->wd4.sid, ls_gs_rsp_scqe->wd3.did,
+ ls_gs_rsp_scqe->wd3.end_rsp,
+ ls_gs_rsp_scqe->wd4.user_id_num);
+
+ return ret;
+}
+
+u32 spfc_scq_recv_els_rsp_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 rx_id = INVALID_VALUE32;
+ u32 hot_tag = INVALID_VALUE32;
+ struct unf_frame_pkg pkg = {0};
+ struct spfc_scqe_comm_rsp_sts *els_rsp_sts_scqe = NULL;
+
+ els_rsp_sts_scqe = &scqe->comm_sts;
+ rx_id = (u32)els_rsp_sts_scqe->wd0.rx_id;
+
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
+ els_rsp_sts_scqe->magic_num;
+ pkg.frame_head.oxid_rxid = rx_id | (u32)(els_rsp_sts_scqe->wd0.ox_id) << UNF_SHIFT_16;
+ hot_tag = (u32)((els_rsp_sts_scqe->wd1.hotpooltag & UNF_ORIGIN_HOTTAG_MASK) -
+ hba->exi_base);
+
+ if (unlikely(SPFC_SCQE_HAS_ERRCODE(scqe)))
+ pkg.status = UNF_IO_FAILED;
+ else
+ pkg.status = UNF_IO_SUCCESS;
+
+ ret = spfc_rcv_els_rsp_sts(hba, &pkg, hot_tag);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) recv ELS RSP STS ox_id(0x%x) RXID(0x%x) HotTag(0x%x) %s",
+ hba->port_cfg.port_id, els_rsp_sts_scqe->wd0.ox_id, rx_id,
+ hot_tag, (ret == RETURN_OK) ? "OK" : "ERROR");
+
+ return ret;
+}
+
+static u32 spfc_check_rport_valid(const struct spfc_parent_queue_info *prt_queue_info, u32 scqe_xid)
+{
+ if (prt_queue_info->parent_ctx.cqm_parent_ctx_obj) {
+ if ((prt_queue_info->parent_sq_info.context_id & SPFC_CQM_XID_MASK) ==
+ (scqe_xid & SPFC_CQM_XID_MASK)) {
+ return RETURN_OK;
+ }
+ }
+
+ return UNF_RETURN_ERROR;
+}
+
+u32 spfc_scq_recv_offload_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ u32 valid = UNF_RETURN_ERROR;
+ u32 rport_index = 0;
+ u32 cid = 0;
+ u32 xid = 0;
+ ulong flags = 0;
+ struct spfc_parent_queue_info *prt_qinfo = NULL;
+ struct spfc_parent_sq_info *parent_sq_info = NULL;
+ struct spfc_scqe_sess_sts *offload_sts_scqe = NULL;
+ struct spfc_delay_destroy_ctrl_info delay_ctl_info;
+
+ memset(&delay_ctl_info, 0, sizeof(struct spfc_delay_destroy_ctrl_info));
+ offload_sts_scqe = &scqe->sess_sts;
+ rport_index = offload_sts_scqe->wd1.conn_id;
+ cid = offload_sts_scqe->wd2.cid;
+ xid = offload_sts_scqe->wd0.xid_qpn;
+
+ if (rport_index >= UNF_SPFC_MAXRPORT_NUM) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive an error offload status: rport(0x%x) is invalid, cacheid(0x%x)",
+ hba->port_cfg.port_id, rport_index, cid);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (rport_index == SPFC_DEFAULT_RPORT_INDEX &&
+ hba->default_sq_info.default_sq_flag == 0xF) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) default session timeout: rport(0x%x) cacheid(0x%x)",
+ hba->port_cfg.port_id, rport_index, cid);
+ return UNF_RETURN_ERROR;
+ }
+
+ prt_qinfo = &hba->parent_queue_mgr->parent_queue[rport_index];
+ parent_sq_info = &prt_qinfo->parent_sq_info;
+
+ valid = spfc_check_rport_valid(prt_qinfo, xid);
+ if (valid != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive an error offload status: rport(0x%x), context id(0x%x) is invalid",
+ hba->port_cfg.port_id, rport_index, xid);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* Offload failed */
+ if (SPFC_GET_SCQE_STATUS(scqe) != SPFC_COMPLETION_STATUS_SUCCESS) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x), rport(0x%x), context id(0x%x), cache id(0x%x), offload failed",
+ hba->port_cfg.port_id, rport_index, xid, cid);
+
+ spin_lock_irqsave(&prt_qinfo->parent_queue_state_lock, flags);
+ if (prt_qinfo->offload_state != SPFC_QUEUE_STATE_OFFLOADED) {
+ prt_qinfo->offload_state = SPFC_QUEUE_STATE_INITIALIZED;
+ parent_sq_info->need_offloaded = INVALID_VALUE8;
+ }
+ spin_unlock_irqrestore(&prt_qinfo->parent_queue_state_lock,
+ flags);
+
+ return UNF_RETURN_ERROR;
+ }
+
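+	/* Offload succeeded: record the cache id, mark the queue offloaded
+	 * and capture any pending delayed-destroy request so it can be
+	 * replayed outside the lock
+	 */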
+ spin_lock_irqsave(&prt_qinfo->parent_queue_state_lock, flags);
+ prt_qinfo->parent_sq_info.cache_id = cid;
+ prt_qinfo->offload_state = SPFC_QUEUE_STATE_OFFLOADED;
+ parent_sq_info->need_offloaded = SPFC_HAVE_OFFLOAD;
+ atomic_set(&prt_qinfo->parent_sq_info.sq_cached, true);
+
+ if (prt_qinfo->parent_sq_info.destroy_sqe.valid) {
+ delay_ctl_info.valid = prt_qinfo->parent_sq_info.destroy_sqe.valid;
+ delay_ctl_info.rport_index = prt_qinfo->parent_sq_info.destroy_sqe.rport_index;
+ delay_ctl_info.time_out = prt_qinfo->parent_sq_info.destroy_sqe.time_out;
+ delay_ctl_info.start_jiff = prt_qinfo->parent_sq_info.destroy_sqe.start_jiff;
+ delay_ctl_info.rport_info.nport_id =
+ prt_qinfo->parent_sq_info.destroy_sqe.rport_info.nport_id;
+ delay_ctl_info.rport_info.rport_index =
+ prt_qinfo->parent_sq_info.destroy_sqe.rport_info.rport_index;
+ delay_ctl_info.rport_info.port_name =
+ prt_qinfo->parent_sq_info.destroy_sqe.rport_info.port_name;
+ prt_qinfo->parent_sq_info.destroy_sqe.valid = false;
+ }
+ spin_unlock_irqrestore(&prt_qinfo->parent_queue_state_lock, flags);
+
+ if (rport_index == SPFC_DEFAULT_RPORT_INDEX) {
+ hba->default_sq_info.sq_cid = cid;
+ hba->default_sq_info.sq_xid = xid;
+ hba->default_sq_info.default_sq_flag = 1;
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_MAJOR, "[info]Receive default Session info");
+ }
+
+ spfc_pop_destroy_parent_queue_sqe((void *)hba, &delay_ctl_info);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) offload success: rport index(0x%x),rport nportid(0x%x),context id(0x%x),cache id(0x%x).",
+ hba->port_cfg.port_id, rport_index,
+ prt_qinfo->parent_sq_info.remote_port_id, xid, cid);
+
+ return RETURN_OK;
+}
+
+static u32 spfc_send_bls_via_parent(struct spfc_hba_info *hba, struct unf_frame_pkg *pkg)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u16 ox_id = INVALID_VALUE16;
+ u16 rx_id = INVALID_VALUE16;
+ struct spfc_sqe tmp_sqe;
+ struct spfc_sqe *sqe = NULL;
+ struct spfc_parent_sq_info *parent_sq_info = NULL;
+ struct spfc_parent_queue_info *prt_qinfo = NULL;
+ u16 ssqn;
+
+ FC_CHECK_RETURN_VALUE((pkg->type == UNF_PKG_BLS_REQ), UNF_RETURN_ERROR);
+
+ sqe = &tmp_sqe;
+ memset(sqe, 0, sizeof(struct spfc_sqe));
+
+ prt_qinfo = spfc_find_parent_queue_info_by_pkg(hba, pkg);
+ if (!prt_qinfo) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send BLS SID_DID(0x%x_0x%x) with null parent queue information",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+
+ return ret;
+ }
+
+ parent_sq_info = spfc_find_parent_sq_by_pkg(hba, pkg);
+ if (!parent_sq_info) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send ABTS SID_DID(0x%x_0x%x) with null parent queue information",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+
+ return ret;
+ }
+
+ rx_id = UNF_GET_RXID(pkg);
+ ox_id = UNF_GET_OXID(pkg);
+
+	/* Assemble the SQE Control Section. The ABTS has no payload, so
+	 * bdsl = 0
+	 */
+ spfc_build_service_wqe_ctrl_section(&sqe->ctrl_sl, SPFC_BYTES_TO_QW_NUM(SPFC_SQE_TS_SIZE),
+ 0);
+
+	/* Assemble the SQE Task Section BLS Common part. DW2 of the BLS WQE
+	 * is reserved and set to 0
+	 */
+ spfc_build_service_wqe_ts_common(&sqe->ts_sl, parent_sq_info->rport_index, ox_id, rx_id, 0);
+
+ /* Assemble the special part of the ABTS */
+ spfc_build_bls_wqe_ts_req(sqe, pkg, hba);
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) RPort(0x%x) send ABTS_REQ ox_id(0x%x) RXID(0x%x), HotTag(0x%x)",
+ hba->port_cfg.port_id, parent_sq_info->rport_index, ox_id,
+ rx_id, (u16)(UNF_GET_HOTPOOL_TAG(pkg) + hba->exi_base));
+
+ ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
+ ret = spfc_parent_sq_enqueue(parent_sq_info, sqe, ssqn);
+
+ return ret;
+}
+
+u32 spfc_send_bls_cmnd(void *handle, struct unf_frame_pkg *pkg)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ struct spfc_hba_info *hba = NULL;
+ ulong flags = 0;
+ struct spfc_parent_queue_info *prt_qinfo = NULL;
+
+ FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg->type == UNF_PKG_BLS_REQ || pkg->type == UNF_PKG_BLS_REPLY,
+ UNF_RETURN_ERROR);
+
+ SPFC_CHECK_PKG_ALLOCTIME(pkg);
+ hba = (struct spfc_hba_info *)handle;
+
+ prt_qinfo = spfc_find_parent_queue_info_by_pkg(hba, pkg);
+ if (!prt_qinfo) {
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+ "[warn]Port(0x%x) send BLS SID_DID(0x%x_0x%x) with null parent queue information",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+
+ return ret;
+ }
+
+ spin_lock_irqsave(&prt_qinfo->parent_queue_state_lock, flags);
+
+ if (SPFC_RPORT_OFFLOADED(prt_qinfo)) {
+ spin_unlock_irqrestore(&prt_qinfo->parent_queue_state_lock, flags);
+ ret = spfc_send_bls_via_parent(hba, pkg);
+ } else {
+ spin_unlock_irqrestore(&prt_qinfo->parent_queue_state_lock, flags);
+ FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
+			     "[error]Port(0x%x) send BLS SID_DID(0x%x_0x%x) but the session is not offloaded, do nothing",
+ hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
+ pkg->frame_head.rctl_did);
+ }
+
+ return ret;
+}
+
+static u32 spfc_scq_rcv_flush_sq_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ /*
+ * RCVD sq flush sts
+ * --->>> continue flush or clear done
+ */
+ u32 ret = UNF_RETURN_ERROR;
+
+ if (scqe->flush_sts.wd0.port_id != hba->port_index) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_CRITICAL,
+ "[err]Port(0x%x) clear_sts_port_idx(0x%x) not match hba_port_idx(0x%x), stage(0x%x)",
+ hba->port_cfg.port_id, scqe->clear_sts.wd0.port_id,
+ hba->port_index, hba->queue_set_stage);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ if (scqe->flush_sts.wd0.last_flush) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_INFO,
+ "[info]Port(0x%x) flush sq(0x%x) done, stage(0x%x)",
+ hba->port_cfg.port_id, hba->next_clear_sq, hba->queue_set_stage);
+
+ /* If the Flush STS is last one, send cmd done */
+ ret = spfc_clear_sq_wqe_done(hba);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
+ "[info]Port(0x%x) continue flush sq(0x%x), stage(0x%x)",
+ hba->port_cfg.port_id, hba->next_clear_sq, hba->queue_set_stage);
+
+ ret = spfc_clear_pending_sq_wqe(hba);
+ }
+
+ return ret;
+}
+
+static u32 spfc_scq_rcv_buf_clear_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ /*
+ * clear: fetched sq wqe
+ * ---to--->>> pending sq wqe
+ */
+ u32 ret = UNF_RETURN_ERROR;
+
+ if (scqe->clear_sts.wd0.port_id != hba->port_index) {
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_CRITICAL,
+ "[err]Port(0x%x) clear_sts_port_idx(0x%x) not match hba_port_idx(0x%x), stage(0x%x)",
+ hba->port_cfg.port_id, scqe->clear_sts.wd0.port_id,
+ hba->port_index, hba->queue_set_stage);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* set port with I/O cleared state */
+ spfc_set_hba_clear_state(hba, true);
+
+ FC_DRV_PRINT(UNF_LOG_EVENT, UNF_KEVENT,
+ "[info]Port(0x%x) cleared all fetched wqe, start clear sq pending wqe, stage (0x%x)",
+ hba->port_cfg.port_id, hba->queue_set_stage);
+
+ hba->queue_set_stage = SPFC_QUEUE_SET_STAGE_FLUSHING;
+ ret = spfc_clear_pending_sq_wqe(hba);
+
+ return ret;
+}
+
+u32 spfc_scq_recv_sess_rst_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ u32 rport_index = INVALID_VALUE32;
+ ulong flags = 0;
+ struct spfc_parent_queue_info *parent_queue_info = NULL;
+ struct spfc_scqe_sess_sts *sess_sts_scqe = (struct spfc_scqe_sess_sts *)(void *)scqe;
+ u32 flush_done;
+ u32 *ctx_array = NULL;
+ int ret;
+ spinlock_t *prtq_state_lock = NULL;
+
+ rport_index = sess_sts_scqe->wd1.conn_id;
+ if (rport_index >= UNF_SPFC_MAXRPORT_NUM) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+			     "[err]Port(0x%x) receive reset session cmd sts failed, invalid rport(0x%x) status_code(0x%x) remain_cnt(0x%x)",
+ hba->port_cfg.port_id, rport_index,
+ sess_sts_scqe->ch.wd0.err_code,
+ sess_sts_scqe->ch.wd0.cqe_remain_cnt);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ parent_queue_info = &hba->parent_queue_mgr->parent_queue[rport_index];
+ prtq_state_lock = &parent_queue_info->parent_queue_state_lock;
+ /*
+ * If only session reset is used, the offload status of sq remains
+ * unchanged. If a link is deleted, the offload status is set to
+ * destroying and is irreversible.
+ */
+ spin_lock_irqsave(prtq_state_lock, flags);
+
+	/*
+	 * For fault tolerance, handle the delete-connection sts even if the
+	 * connection deletion has already timed out: a non-zero return means
+	 * the timer was cancelled successfully, 0 means the timer is already
+	 * being processed.
+	 */
+ if (!cancel_delayed_work(&parent_queue_info->parent_sq_info.del_work)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) rport_index(0x%x) delete rport timer maybe timeout",
+ hba->port_cfg.port_id, rport_index);
+ }
+
+	/*
+	 * If the SessRstSts arrives too late and the Parent Queue Info
+	 * resource has already been released, return OK.
+	 */
+ if (parent_queue_info->offload_state != SPFC_QUEUE_STATE_DESTROYING) {
+ spin_unlock_irqrestore(prtq_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[info]Port(0x%x) reset session cmd complete, no need to free parent qinfo, rport(0x%x) status_code(0x%x) remain_cnt(0x%x)",
+ hba->port_cfg.port_id, rport_index,
+ sess_sts_scqe->ch.wd0.err_code,
+ sess_sts_scqe->ch.wd0.cqe_remain_cnt);
+
+ return RETURN_OK;
+ }
+
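+	/* Hardware sets a flush-done flag in the parent context; if it is not
+	 * set yet, defer freeing the parent session via delayed work
+	 */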
+ if (parent_queue_info->parent_ctx.cqm_parent_ctx_obj) {
+ ctx_array = (u32 *)((void *)(parent_queue_info->parent_ctx
+ .cqm_parent_ctx_obj->vaddr));
+ flush_done = ctx_array[SPFC_CTXT_FLUSH_DONE_DW_POS] & SPFC_CTXT_FLUSH_DONE_MASK_BE;
+ mb();
+ if (flush_done == 0) {
+ spin_unlock_irqrestore(prtq_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) rport(0x%x) flushdone is not set, delay to free parent session",
+ hba->port_cfg.port_id, rport_index);
+
+			/* If the flushdone bit is not set, delay freeing the SQ info */
+ ret = queue_delayed_work(hba->work_queue,
+ &(parent_queue_info->parent_sq_info
+ .flush_done_timeout_work),
+ (ulong)msecs_to_jiffies((u32)
+ SPFC_SQ_WAIT_FLUSH_DONE_TIMEOUT_MS));
+ if (!ret) {
+ SPFC_HBA_STAT(hba, SPFC_STAT_PARENT_SQ_QUEUE_DELAYED_WORK);
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) rport(0x%x) queue delayed work failed ret:%d",
+ hba->port_cfg.port_id, rport_index,
+ ret);
+ }
+
+ return RETURN_OK;
+ }
+ }
+
+ spin_unlock_irqrestore(prtq_state_lock, flags);
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) begin to free parent session with rport(0x%x)",
+ hba->port_cfg.port_id, rport_index);
+
+ spfc_free_parent_queue_info(hba, parent_queue_info);
+
+ return RETURN_OK;
+}
+
+static u32 spfc_scq_rcv_clear_srq_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ /*
+ * clear ELS/Immi SRQ
+ * ---then--->>> Destroy SRQ
+ */
+ struct spfc_srq_info *srq_info = NULL;
+
+ if (SPFC_GET_SCQE_STATUS(scqe) != 0) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) clear srq failed, status(0x%x)",
+ hba->port_cfg.port_id, SPFC_GET_SCQE_STATUS(scqe));
+
+ return RETURN_OK;
+ }
+
+ srq_info = &hba->els_srq_info;
+
+	/*
+	 * 1: cancelling the timer succeeded
+	 * 0: the timer is being processed; the SQ is released when the timer
+	 * times out
+	 */
+ if (cancel_delayed_work(&srq_info->del_work))
+ queue_work(hba->work_queue, &hba->els_srq_clear_work);
+
+ return RETURN_OK;
+}
+
+u32 spfc_scq_recv_marker_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 ox_id = INVALID_VALUE32;
+ u32 rx_id = INVALID_VALUE32;
+ u32 hot_tag = INVALID_VALUE32;
+ struct unf_frame_pkg pkg = {0};
+ struct spfc_scqe_itmf_marker_sts *tmf_marker_sts_scqe = NULL;
+
+ tmf_marker_sts_scqe = &scqe->itmf_marker_sts;
+ ox_id = (u32)tmf_marker_sts_scqe->wd1.ox_id;
+ rx_id = (u32)tmf_marker_sts_scqe->wd1.rx_id;
+ hot_tag = (tmf_marker_sts_scqe->wd4.hotpooltag & UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+ pkg.frame_head.oxid_rxid = rx_id | (u32)(ox_id) << UNF_SHIFT_16;
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = tmf_marker_sts_scqe->magic_num;
+ pkg.frame_head.csctl_sid = tmf_marker_sts_scqe->wd3.sid;
+ pkg.frame_head.rctl_did = tmf_marker_sts_scqe->wd2.did;
+
+ /* 1. set pkg status */
+ if (unlikely(SPFC_SCQE_HAS_ERRCODE(scqe)))
+ pkg.status = UNF_IO_FAILED;
+ else
+ pkg.status = UNF_IO_SUCCESS;
+
+	/* 2. Process the received marker STS: set the exchange state */
+ ret = spfc_rcv_tmf_marker_sts(hba, &pkg, hot_tag);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[event]Port(0x%x) recv marker STS OX_ID(0x%x) RX_ID(0x%x) HotTag(0x%x) result %s",
+ hba->port_cfg.port_id, ox_id, rx_id, hot_tag,
+ (ret == RETURN_OK) ? "succeed" : "failed");
+
+ return ret;
+}
+
+u32 spfc_scq_recv_abts_marker_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ u32 ox_id = INVALID_VALUE32;
+ u32 rx_id = INVALID_VALUE32;
+ u32 hot_tag = INVALID_VALUE32;
+ struct unf_frame_pkg pkg = {0};
+ struct spfc_scqe_abts_marker_sts *abts_marker_sts_scqe = NULL;
+
+ abts_marker_sts_scqe = &scqe->abts_marker_sts;
+ if (!abts_marker_sts_scqe) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]ABTS marker STS is NULL");
+ return ret;
+ }
+
+ ox_id = (u32)abts_marker_sts_scqe->wd1.ox_id;
+ rx_id = (u32)abts_marker_sts_scqe->wd1.rx_id;
+ hot_tag = (abts_marker_sts_scqe->wd4.hotpooltag & UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+ pkg.frame_head.oxid_rxid = rx_id | (u32)(ox_id) << UNF_SHIFT_16;
+ pkg.frame_head.csctl_sid = abts_marker_sts_scqe->wd3.sid;
+ pkg.frame_head.rctl_did = abts_marker_sts_scqe->wd2.did;
+ pkg.abts_maker_status = (u32)abts_marker_sts_scqe->wd3.io_state;
+ pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = abts_marker_sts_scqe->magic_num;
+
+ if (unlikely(SPFC_SCQE_HAS_ERRCODE(scqe)))
+ pkg.status = UNF_IO_FAILED;
+ else
+ pkg.status = UNF_IO_SUCCESS;
+
+ ret = spfc_rcv_abts_marker_sts(hba, &pkg, hot_tag);
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) recv abts marker STS ox_id(0x%x) RXID(0x%x) HotTag(0x%x) %s",
+ hba->port_cfg.port_id, ox_id, rx_id, hot_tag,
+ (ret == RETURN_OK) ? "SUCCEED" : "FAILED");
+
+ return ret;
+}
+
+u32 spfc_handle_aeq_off_load_err(struct spfc_hba_info *hba, struct spfc_aqe_data *aeq_msg)
+{
+ u32 ret = RETURN_OK;
+ u32 rport_index = 0;
+ u32 xid = 0;
+ struct spfc_parent_queue_info *prt_qinfo = NULL;
+ struct spfc_delay_destroy_ctrl_info delay_ctl_info;
+ ulong flags = 0;
+
+ memset(&delay_ctl_info, 0, sizeof(struct spfc_delay_destroy_ctrl_info));
+
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive Offload Err Event, EvtCode(0x%x) Conn_id(0x%x) Xid(0x%x)",
+ hba->port_cfg.port_id, aeq_msg->wd0.evt_code,
+ aeq_msg->wd0.conn_id, aeq_msg->wd1.xid);
+
+	/* Currently, only the offload failure caused by insufficient scqe is
+	 * processed. Other errors are not handled for now.
+	 */
+ if (unlikely(aeq_msg->wd0.evt_code != FC_ERROR_OFFLOAD_LACKOF_SCQE_FAIL)) {
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive an unsupported error code of AEQ Event,EvtCode(0x%x) Conn_id(0x%x)",
+ hba->port_cfg.port_id, aeq_msg->wd0.evt_code,
+ aeq_msg->wd0.conn_id);
+
+ return UNF_RETURN_ERROR;
+ }
+ SPFC_SCQ_ERR_TYPE_STAT(hba, FC_ERROR_OFFLOAD_LACKOF_SCQE_FAIL);
+
+ rport_index = aeq_msg->wd0.conn_id;
+ xid = aeq_msg->wd1.xid;
+
+ if (rport_index >= UNF_SPFC_MAXRPORT_NUM) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive an error offload status: rport(0x%x) is invalid, Xid(0x%x)",
+ hba->port_cfg.port_id, rport_index, aeq_msg->wd1.xid);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ prt_qinfo = &hba->parent_queue_mgr->parent_queue[rport_index];
+ if (spfc_check_rport_valid(prt_qinfo, xid) != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) receive an error offload status: rport(0x%x), context id(0x%x) is invalid",
+ hba->port_cfg.port_id, rport_index, xid);
+
+ return UNF_RETURN_ERROR;
+ }
+
+ /* The offload status is restored only when the offload status is offloading */
+ spin_lock_irqsave(&prt_qinfo->parent_queue_state_lock, flags);
+ if (prt_qinfo->offload_state == SPFC_QUEUE_STATE_OFFLOADING)
+ prt_qinfo->offload_state = SPFC_QUEUE_STATE_INITIALIZED;
+ spin_unlock_irqrestore(&prt_qinfo->parent_queue_state_lock, flags);
+
+ if (prt_qinfo->parent_sq_info.destroy_sqe.valid) {
+ delay_ctl_info.valid = prt_qinfo->parent_sq_info.destroy_sqe.valid;
+ delay_ctl_info.rport_index = prt_qinfo->parent_sq_info.destroy_sqe.rport_index;
+ delay_ctl_info.time_out = prt_qinfo->parent_sq_info.destroy_sqe.time_out;
+ delay_ctl_info.start_jiff = prt_qinfo->parent_sq_info.destroy_sqe.start_jiff;
+ delay_ctl_info.rport_info.nport_id =
+ prt_qinfo->parent_sq_info.destroy_sqe.rport_info.nport_id;
+ delay_ctl_info.rport_info.rport_index =
+ prt_qinfo->parent_sq_info.destroy_sqe.rport_info.rport_index;
+ delay_ctl_info.rport_info.port_name =
+ prt_qinfo->parent_sq_info.destroy_sqe.rport_info.port_name;
+ prt_qinfo->parent_sq_info.destroy_sqe.valid = false;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[info]Port(0x%x) pop up delay sqe, start:0x%llx, timeout:0x%x, rport:0x%x, offload state:0x%x",
+ hba->port_cfg.port_id, delay_ctl_info.start_jiff,
+ delay_ctl_info.time_out,
+ prt_qinfo->parent_sq_info.destroy_sqe.rport_info.rport_index,
+ SPFC_QUEUE_STATE_INITIALIZED);
+
+ ret = spfc_free_parent_resource(hba, &delay_ctl_info.rport_info);
+ if (ret != RETURN_OK) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[err]Port(0x%x) pop delay destroy parent sq failed, rport(0x%x), rport nport id 0x%x",
+ hba->port_cfg.port_id,
+ delay_ctl_info.rport_info.rport_index,
+ delay_ctl_info.rport_info.nport_id);
+ }
+ }
+
+ return ret;
+}
+
+u32 spfc_free_xid(void *handle, struct unf_frame_pkg *pkg)
+{
+ u32 ret = RETURN_ERROR;
+ u16 rx_id = INVALID_VALUE16;
+ u16 ox_id = INVALID_VALUE16;
+ u16 hot_tag = INVALID_VALUE16;
+ struct spfc_hba_info *hba = (struct spfc_hba_info *)handle;
+ union spfc_cmdqe tmp_cmd_wqe;
+ union spfc_cmdqe *cmd_wqe = NULL;
+
+ FC_CHECK_RETURN_VALUE(hba, RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(pkg, RETURN_ERROR);
+ SPFC_CHECK_PKG_ALLOCTIME(pkg);
+
+ cmd_wqe = &tmp_cmd_wqe;
+ memset(cmd_wqe, 0, sizeof(union spfc_cmdqe));
+
+ rx_id = UNF_GET_RXID(pkg);
+ ox_id = UNF_GET_OXID(pkg);
+ if (UNF_GET_HOTPOOL_TAG(pkg) != INVALID_VALUE32)
+ hot_tag = (u16)UNF_GET_HOTPOOL_TAG(pkg) + hba->exi_base;
+
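+	/* Build an EXCH_ID_FREE command carrying the hot tag, sid/did, type
+	 * and magic number so the ucode can release the exchange and report
+	 * on the default SCQ
+	 */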
+ spfc_build_cmdqe_common(cmd_wqe, SPFC_TASK_T_EXCH_ID_FREE, rx_id);
+ cmd_wqe->xid_free.wd2.hotpool_tag = hot_tag;
+ cmd_wqe->xid_free.magic_num = UNF_GETXCHGALLOCTIME(pkg);
+ cmd_wqe->xid_free.sid = pkg->frame_head.csctl_sid;
+ cmd_wqe->xid_free.did = pkg->frame_head.rctl_did;
+ cmd_wqe->xid_free.type = pkg->type;
+
+ if (pkg->rx_or_ox_id == UNF_PKG_FREE_OXID)
+ cmd_wqe->xid_free.wd0.task_id = ox_id;
+ else
+ cmd_wqe->xid_free.wd0.task_id = rx_id;
+
+ cmd_wqe->xid_free.wd0.port_id = hba->port_index;
+ cmd_wqe->xid_free.wd2.scqn = hba->default_scqn;
+ ret = spfc_root_cmdq_enqueue(hba, cmd_wqe, sizeof(cmd_wqe->xid_free));
+
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
+ "[info]Port(0x%x) ox_id(0x%x) RXID(0x%x) hottag(0x%x) magic_num(0x%x) Sid(0x%x) Did(0x%x), send free xid %s",
+ hba->port_cfg.port_id, ox_id, rx_id, hot_tag,
+ cmd_wqe->xid_free.magic_num, cmd_wqe->xid_free.sid,
+ cmd_wqe->xid_free.did,
+ (ret == RETURN_OK) ? "OK" : "ERROR");
+
+ return ret;
+}
+
+u32 spfc_scq_free_xid_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ u32 hot_tag = INVALID_VALUE32;
+ u32 magic_num = INVALID_VALUE32;
+ u32 ox_id = INVALID_VALUE32;
+ u32 rx_id = INVALID_VALUE32;
+ struct spfc_scqe_comm_rsp_sts *free_xid_sts_scqe = NULL;
+
+ free_xid_sts_scqe = &scqe->comm_sts;
+ magic_num = free_xid_sts_scqe->magic_num;
+ ox_id = (u32)free_xid_sts_scqe->wd0.ox_id;
+ rx_id = (u32)free_xid_sts_scqe->wd0.rx_id;
+
+ if (free_xid_sts_scqe->wd1.hotpooltag != INVALID_VALUE16) {
+ hot_tag = (free_xid_sts_scqe->wd1.hotpooltag &
+ UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+ }
+
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
+ "Port(0x%x) hottag(0x%x) magicnum(0x%x) ox_id(0x%x) rxid(0x%x) sts(%d)",
+ hba->port_cfg.port_id, hot_tag, magic_num, ox_id, rx_id,
+ SPFC_GET_SCQE_STATUS(scqe));
+
+ return RETURN_OK;
+}
+
+u32 spfc_scq_exchg_timeout_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ u32 hot_tag = INVALID_VALUE32;
+ u32 magic_num = INVALID_VALUE32;
+ u32 ox_id = INVALID_VALUE32;
+ u32 rx_id = INVALID_VALUE32;
+ struct spfc_scqe_comm_rsp_sts *time_out_scqe = NULL;
+
+ time_out_scqe = &scqe->comm_sts;
+ magic_num = time_out_scqe->magic_num;
+ ox_id = (u32)time_out_scqe->wd0.ox_id;
+ rx_id = (u32)time_out_scqe->wd0.rx_id;
+
+ if (time_out_scqe->wd1.hotpooltag != INVALID_VALUE16)
+ hot_tag = (time_out_scqe->wd1.hotpooltag & UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+
+ FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
+ "Port(0x%x) recv timer time out sts hotpooltag(0x%x) magicnum(0x%x) ox_id(0x%x) rxid(0x%x) sts(%d)",
+ hba->port_cfg.port_id, hot_tag, magic_num, ox_id, rx_id,
+ SPFC_GET_SCQE_STATUS(scqe));
+
+ return RETURN_OK;
+}
+
+u32 spfc_scq_rcv_sq_nop_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
+{
+ struct spfc_scqe_sq_nop_sts *sq_nop_scqe = NULL;
+ struct spfc_parent_queue_info *prt_qinfo = NULL;
+ struct spfc_parent_sq_info *parent_sq_info = NULL;
+ struct list_head *node = NULL;
+ struct list_head *next_node = NULL;
+ struct spfc_suspend_sqe_info *suspend_sqe = NULL;
+ struct spfc_suspend_sqe_info *sqe = NULL;
+ u32 rport_index = 0;
+ u32 magic_num;
+ u16 sqn;
+ u32 sqn_base;
+ u32 sqn_max;
+ u32 ret = RETURN_OK;
+ ulong flags = 0;
+
+ sq_nop_scqe = &scqe->sq_nop_sts;
+ rport_index = sq_nop_scqe->wd1.conn_id;
+ magic_num = sq_nop_scqe->magic_num;
+ sqn = sq_nop_scqe->wd0.sqn;
+ prt_qinfo = &hba->parent_queue_mgr->parent_queue[rport_index];
+ parent_sq_info = &prt_qinfo->parent_sq_info;
+ sqn_base = parent_sq_info->sqn_base;
+ sqn_max = sqn_base + UNF_SQ_NUM_PER_SESSION - 1;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+		     "[info]Port(0x%x) rport(0x%x), magic_num(0x%x) receive nop sq sts from sq(0x%x)",
+ hba->port_cfg.port_id, rport_index, magic_num, sqn);
+
+ spin_lock_irqsave(&prt_qinfo->parent_queue_state_lock, flags);
+ list_for_each_safe(node, next_node, &parent_sq_info->suspend_sqe_list) {
+ sqe = list_entry(node, struct spfc_suspend_sqe_info, list_sqe_entry);
+ if (sqe->magic_num != magic_num)
+ continue;
+ suspend_sqe = sqe;
+ if (sqn == sqn_max)
+ list_del(node);
+ break;
+ }
+ spin_unlock_irqrestore(&prt_qinfo->parent_queue_state_lock, flags);
+
+ if (suspend_sqe) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) rport_index(0x%x) find suspend sqe.",
+ hba->port_cfg.port_id, rport_index);
+ if (sqn < sqn_max) {
+ ret = spfc_send_nop_cmd(hba, parent_sq_info, magic_num, sqn + 1);
+ } else if (sqn == sqn_max) {
+ if (!cancel_delayed_work(&suspend_sqe->timeout_work)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "[warn]Port(0x%x) rport(0x%x) reset worker timer maybe timeout",
+ hba->port_cfg.port_id, rport_index);
+ }
+ parent_sq_info->need_offloaded = suspend_sqe->old_offload_sts;
+ ret = spfc_pop_suspend_sqe(hba, prt_qinfo, suspend_sqe);
+ kfree(suspend_sqe);
+ }
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+			     "[warn]Port(0x%x) rport(0x%x) magicnum(0x%x) can't find suspend sqe",
+ hba->port_cfg.port_id, rport_index, magic_num);
+ }
+ return ret;
+}
+
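+/* SCQE dispatch table: maps each SCQE type to its handler and marks whether
+ * the completion should reclaim an SQ WQE page
+ */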
+static const struct unf_scqe_handle_table scqe_handle_table[] = {
+ {/* INI rcvd FCP RSP */
+ SPFC_SCQE_FCP_IRSP, true, spfc_scq_recv_iresp},
+ {/* INI/TGT rcvd ELS_CMND */
+ SPFC_SCQE_ELS_CMND, false, spfc_scq_recv_els_cmnd},
+ {/* INI/TGT rcvd ELS_RSP */
+ SPFC_SCQE_ELS_RSP, true, spfc_scq_recv_ls_gs_rsp},
+ {/* INI/TGT rcvd GS_RSP */
+ SPFC_SCQE_GS_RSP, true, spfc_scq_recv_ls_gs_rsp},
+ {/* INI rcvd BLS_RSP */
+ SPFC_SCQE_ABTS_RSP, true, spfc_scq_recv_abts_rsp},
+ {/* INI/TGT rcvd ELS_RSP STS(Done) */
+ SPFC_SCQE_ELS_RSP_STS, true, spfc_scq_recv_els_rsp_sts},
+ {/* INI or TGT rcvd Session enable STS */
+ SPFC_SCQE_SESS_EN_STS, false, spfc_scq_recv_offload_sts},
+ {/* INI or TGT rcvd flush (pending) SQ STS */
+ SPFC_SCQE_FLUSH_SQ_STS, false, spfc_scq_rcv_flush_sq_sts},
+ {/* INI or TGT rcvd Buffer clear STS */
+ SPFC_SCQE_BUF_CLEAR_STS, false, spfc_scq_rcv_buf_clear_sts},
+ {/* INI or TGT rcvd session reset STS */
+ SPFC_SCQE_SESS_RST_STS, false, spfc_scq_recv_sess_rst_sts},
+ {/* ELS/IMMI SRQ */
+ SPFC_SCQE_CLEAR_SRQ_STS, false, spfc_scq_rcv_clear_srq_sts},
+ {/* INI rcvd TMF RSP */
+ SPFC_SCQE_FCP_ITMF_RSP, true, spfc_scq_recv_iresp},
+ {/* INI rcvd TMF Marker STS */
+ SPFC_SCQE_ITMF_MARKER_STS, false, spfc_scq_recv_marker_sts},
+ {/* INI rcvd ABTS Marker STS */
+ SPFC_SCQE_ABTS_MARKER_STS, false, spfc_scq_recv_abts_marker_sts},
+ {SPFC_SCQE_XID_FREE_ABORT_STS, false, spfc_scq_free_xid_sts},
+ {SPFC_SCQE_EXCHID_TIMEOUT_STS, false, spfc_scq_exchg_timeout_sts},
+ {SPFC_SQE_NOP_STS, true, spfc_scq_rcv_sq_nop_sts},
+
+};
+
+u32 spfc_rcv_scq_entry_from_scq(struct spfc_hba_info *hba, union spfc_scqe *scqe, u32 scqn)
+{
+ u32 ret = UNF_RETURN_ERROR;
+ bool reclaim = false;
+ u32 index = 0;
+ u32 total = 0;
+
+ FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(scqe, UNF_RETURN_ERROR);
+ FC_CHECK_RETURN_VALUE(scqn < SPFC_TOTAL_SCQ_NUM, UNF_RETURN_ERROR);
+
+ SPFC_IO_STAT(hba, SPFC_GET_SCQE_TYPE(scqe));
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
+ "[info]Port(0x%x) receive scqe type %d from SCQ[%u]",
+ hba->port_cfg.port_id, SPFC_GET_SCQE_TYPE(scqe), scqn);
+
+	/* 1. error code checking */
+ if (unlikely(SPFC_SCQE_HAS_ERRCODE(scqe))) {
+		/* So far, just print and count */
+ spfc_scqe_error_pre_proc(hba, scqe);
+ }
+
+	/* 2. Process the SCQE with its corresponding handler */
+ total = sizeof(scqe_handle_table) / sizeof(struct unf_scqe_handle_table);
+ while (index < total) {
+ if (SPFC_GET_SCQE_TYPE(scqe) == scqe_handle_table[index].scqe_type) {
+ ret = scqe_handle_table[index].scqe_handle_func(hba, scqe);
+ reclaim = scqe_handle_table[index].reclaim_sq_wpg;
+
+ break;
+ }
+
+ index++;
+ }
+
+ /* 3. SCQE type check */
+ if (unlikely(total == index)) {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
+ "[warn]Unknown SCQE type %d",
+ SPFC_GET_SCQE_TYPE(scqe));
+
+ UNF_PRINT_SFS_LIMIT(UNF_ERR, hba->port_cfg.port_id, scqe, sizeof(union spfc_scqe));
+ }
+
+	/* 4. If the SCQE is for an SQ-WQE, reclaim the linked-list SQ free page */
+ if (reclaim) {
+ if (SPFC_GET_SCQE_SQN(scqe) < SPFC_MAX_SSQ_NUM) {
+ ret = spfc_reclaim_sq_wqe_page(hba, scqe);
+ } else {
+			/* NOTE: for buffer clear, the SCQE conn_id is 0xFFFF; count it against the HBA */
+ SPFC_HBA_STAT((struct spfc_hba_info *)hba, SPFC_STAT_SQ_IO_BUFFER_CLEARED);
+ }
+ }
+
+ return ret;
+}
+
diff --git a/drivers/scsi/spfc/hw/spfc_service.h b/drivers/scsi/spfc/hw/spfc_service.h
new file mode 100644
index 000000000000..e2555c55f4d1
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_service.h
@@ -0,0 +1,282 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_SERVICE_H
+#define SPFC_SERVICE_H
+
+#include "unf_type.h"
+#include "unf_common.h"
+#include "unf_scsi_common.h"
+#include "spfc_hba.h"
+
+#define SPFC_HAVE_OFFLOAD (0)
+
+/* FC txmfs */
+#define SPFC_DEFAULT_TX_MAX_FREAM_SIZE (256)
+
+#define SPFC_GET_NETWORK_PORT_ID(hba) \
+ (((hba)->port_index > 1) ? ((hba)->port_index + 2) : (hba)->port_index)
+
+#define SPFC_GET_PRLI_PAYLOAD_LEN \
+ (UNF_PRLI_PAYLOAD_LEN - UNF_PRLI_SIRT_EXTRA_SIZE)
+/* Start addr of the header/payload of the cmnd buffer in the pkg */
+#define SPFC_FC_HEAD_LEN (sizeof(struct unf_fc_head))
+#define SPFC_PAYLOAD_OFFSET (sizeof(struct unf_fc_head))
+#define SPFC_GET_CMND_PAYLOAD_ADDR(pkg) UNF_GET_FLOGI_PAYLOAD(pkg)
+#define SPFC_GET_CMND_HEADER_ADDR(pkg) \
+ ((pkg)->unf_cmnd_pload_bl.buffer_ptr)
+#define SPFC_GET_RSP_HEADER_ADDR(pkg) \
+ ((pkg)->unf_rsp_pload_bl.buffer_ptr)
+#define SPFC_GET_RSP_PAYLOAD_ADDR(pkg) \
+ ((pkg)->unf_rsp_pload_bl.buffer_ptr + SPFC_PAYLOAD_OFFSET)
+#define SPFC_GET_CMND_FC_HEADER(pkg) \
+ (&(UNF_GET_SFS_ENTRY(pkg)->sfs_common.frame_head))
+#define SPFC_PKG_IS_ELS_RSP(cmd_type) \
+ (((cmd_type) == ELS_ACC) || ((cmd_type) == ELS_RJT))
+#define SPFC_XID_IS_VALID(exid, base, exi_count) \
+ (((exid) >= (base)) && ((exid) < ((base) + (exi_count))))
+#define SPFC_CHECK_NEED_OFFLOAD(cmd_code, cmd_type, offload_state) \
+ (((cmd_code) == ELS_PLOGI) && ((cmd_type) != ELS_RJT) && \
+ ((offload_state) == SPFC_QUEUE_STATE_INITIALIZED))
+
+#define UNF_FC_PAYLOAD_ELS_MASK (0xFF000000)
+#define UNF_FC_PAYLOAD_ELS_SHIFT (24)
+#define UNF_FC_PAYLOAD_ELS_DWORD (0)
+
+/* Note: this pfcpayload is little endian */
+#define UNF_GET_FC_PAYLOAD_ELS_CMND(pfcpayload) \
+ UNF_GET_SHIFTMASK(((u32 *)(void *)(pfcpayload))[UNF_FC_PAYLOAD_ELS_DWORD], \
+ UNF_FC_PAYLOAD_ELS_SHIFT, UNF_FC_PAYLOAD_ELS_MASK)
+
+/* Note: this pfcpayload is big endian */
+#define SPFC_GET_FC_PAYLOAD_ELS_CMND(pfcpayload) \
+ UNF_GET_SHIFTMASK(be32_to_cpu(((u32 *)(void *)(pfcpayload))[UNF_FC_PAYLOAD_ELS_DWORD]), \
+ UNF_FC_PAYLOAD_ELS_SHIFT, UNF_FC_PAYLOAD_ELS_MASK)
+
+#define UNF_FC_PAYLOAD_RX_SZ_MASK (0x00000FFF)
+#define UNF_FC_PAYLOAD_RX_SZ_SHIFT (16)
+#define UNF_FC_PAYLOAD_RX_SZ_DWORD (2)
+
+/* Note: this pfcpayload is little endian */
+#define UNF_GET_FC_PAYLOAD_RX_SZ(pfcpayload) \
+ ((u16)(((u32 *)(void *)(pfcpayload))[UNF_FC_PAYLOAD_RX_SZ_DWORD] & \
+ UNF_FC_PAYLOAD_RX_SZ_MASK))
+
+/* Note: this pfcpayload is big endian */
+#define SPFC_GET_FC_PAYLOAD_RX_SZ(pfcpayload) \
+ (be32_to_cpu((u16)(((u32 *)(void *)(pfcpayload))[UNF_FC_PAYLOAD_RX_SZ_DWORD]) & \
+ UNF_FC_PAYLOAD_RX_SZ_MASK))
+
+#define SPFC_GET_RA_TOV_FROM_PAYLOAD(pfcpayload) \
+ (((struct unf_flogi_fdisc_payload *)(pfcpayload))->fabric_parms.co_parms.r_a_tov)
+#define SPFC_GET_RT_TOV_FROM_PAYLOAD(pfcpayload) \
+ (((struct unf_flogi_fdisc_payload *)(pfcpayload))->fabric_parms.co_parms.r_t_tov)
+#define SPFC_GET_E_D_TOV_FROM_PAYLOAD(pfcpayload) \
+ (((struct unf_flogi_fdisc_payload *)(pfcpayload))->fabric_parms.co_parms.e_d_tov)
+#define SPFC_GET_E_D_TOV_RESOLUTION_FROM_PAYLOAD(pfcpayload) \
+ (((struct unf_flogi_fdisc_payload *)(pfcpayload))->fabric_parms.co_parms.e_d_tov_resolution)
+#define SPFC_GET_BB_SC_N_FROM_PAYLOAD(pfcpayload) \
+ (((struct unf_flogi_fdisc_payload *)(pfcpayload))->fabric_parms.co_parms.bbscn)
+#define SPFC_GET_BB_CREDIT_FROM_PAYLOAD(pfcpayload) \
+ (((struct unf_flogi_fdisc_payload *)(pfcpayload))->fabric_parms.co_parms.bb_credit)
+
+#define SPFC_GET_RA_TOV_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.r_a_tov)
+#define SPFC_GET_RT_TOV_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.r_t_tov)
+#define SPFC_GET_E_D_TOV_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.e_d_tov)
+#define SPFC_GET_E_D_TOV_RESOLUTION_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.e_d_tov_resolution)
+#define SPFC_GET_BB_SC_N_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.bbscn)
+#define SPFC_GET_BB_CREDIT_FROM_PARAMS(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.bb_credit)
+#define SPFC_CHECK_NPORT_FPORT_BIT(pfcparams) \
+ (((struct unf_fabric_parm *)(pfcparams))->co_parms.nport)
+
+#define UNF_FC_RCTL_BLS_MASK (0x80)
+#define SPFC_UNSOLICITED_FRAME_IS_BLS(hdr) (UNF_GET_FC_HEADER_RCTL(hdr) & UNF_FC_RCTL_BLS_MASK)
+
+#define SPFC_LOW_SEQ_CNT (0)
+#define SPFC_HIGH_SEQ_CNT (0xFFFF)
+
+/* struct unf_frame_pkg.cmnd meaning:
+ * The least significant 16 bits indicate whether to send ELS CMND or ELS RSP
+ * (ACC or RJT). The most significant 16 bits indicate the corresponding ELS
+ * CMND when the lower 16 bits are ELS RSP.
+ */
+#define SPFC_ELS_CMND_MASK (0xffff)
+#define SPFC_ELS_CMND__RELEVANT_SHIFT (16UL)
+#define SPFC_GET_LS_GS_CMND_CODE(cmnd) ((u16)((cmnd) & SPFC_ELS_CMND_MASK))
+#define SPFC_GET_ELS_RSP_TYPE(cmnd) ((u16)((cmnd) & SPFC_ELS_CMND_MASK))
+#define SPFC_GET_ELS_RSP_CODE(cmnd) \
+ ((u16)((cmnd) >> SPFC_ELS_CMND__RELEVANT_SHIFT & SPFC_ELS_CMND_MASK))
+
+/* ELS CMND Request */
+#define ELS_CMND (0)
+
+/* fh_f_ctl - Frame control flags. */
+#define SPFC_FC_EX_CTX BIT(23) /* sent by responder to exchange */
+#define SPFC_FC_SEQ_CTX BIT(22) /* sent by responder to sequence */
+#define SPFC_FC_FIRST_SEQ BIT(21) /* first sequence of this exchange */
+#define SPFC_FC_LAST_SEQ BIT(20) /* last sequence of this exchange */
+#define SPFC_FC_END_SEQ BIT(19) /* last frame of sequence */
+#define SPFC_FC_END_CONN BIT(18) /* end of class 1 connection pending */
+#define SPFC_FC_RES_B17 BIT(17) /* reserved */
+#define SPFC_FC_SEQ_INIT BIT(16) /* transfer of sequence initiative */
+#define SPFC_FC_X_ID_REASS BIT(15) /* exchange ID has been changed */
+#define SPFC_FC_X_ID_INVAL BIT(14) /* exchange ID invalidated */
+#define SPFC_FC_ACK_1 BIT(12) /* 13:12 = 1: ACK_1 expected */
+#define SPFC_FC_ACK_N (2 << 12) /* 13:12 = 2: ACK_N expected */
+#define SPFC_FC_ACK_0 (3 << 12) /* 13:12 = 3: ACK_0 expected */
+#define SPFC_FC_RES_B11 BIT(11) /* reserved */
+#define SPFC_FC_RES_B10 BIT(10) /* reserved */
+#define SPFC_FC_RETX_SEQ BIT(9) /* retransmitted sequence */
+#define SPFC_FC_UNI_TX BIT(8) /* unidirectional transmit (class 1) */
+#define SPFC_FC_CONT_SEQ(i) ((i) << 6)
+#define SPFC_FC_ABT_SEQ(i) ((i) << 4)
+#define SPFC_FC_REL_OFF BIT(3) /* parameter is relative offset */
+#define SPFC_FC_RES2 BIT(2) /* reserved */
+#define SPFC_FC_FILL(i) ((i) & 3) /* 1:0: bytes of trailing fill */
+
+#define SPFC_FCTL_REQ (SPFC_FC_FIRST_SEQ | SPFC_FC_END_SEQ | SPFC_FC_SEQ_INIT)
+#define SPFC_FCTL_RESP \
+ (SPFC_FC_EX_CTX | SPFC_FC_LAST_SEQ | SPFC_FC_END_SEQ | SPFC_FC_SEQ_INIT)
+#define SPFC_RCTL_BLS_REQ (0x81)
+#define SPFC_RCTL_BLS_ACC (0x84)
+#define SPFC_RCTL_BLS_RJT (0x85)
+
+#define PHY_PORT_TYPE_FC 0x1 /* Physical port type of FC */
+#define PHY_PORT_TYPE_FCOE 0x2 /* Physical port type of FCoE */
+#define SPFC_FC_COS_VALUE (0X4)
+
+#define SPFC_CDB16_LBA_MASK 0xffff
+#define SPFC_CDB16_TRANSFERLEN_MASK 0xff
+#define SPFC_RXID_MASK 0xffff
+#define SPFC_OXID_MASK 0xffff0000
+
+enum spfc_fc_fh_type {
+ SPFC_FC_TYPE_BLS = 0x00, /* basic link service */
+ SPFC_FC_TYPE_ELS = 0x01, /* extended link service */
+ SPFC_FC_TYPE_IP = 0x05, /* IP over FC, RFC 4338 */
+ SPFC_FC_TYPE_FCP = 0x08, /* SCSI FCP */
+ SPFC_FC_TYPE_CT = 0x20, /* Fibre Channel Services (FC-CT) */
+ SPFC_FC_TYPE_ILS = 0x22 /* internal link service */
+};
+
+enum spfc_fc_fh_rctl {
+ SPFC_FC_RCTL_DD_UNCAT = 0x00, /* uncategorized information */
+ SPFC_FC_RCTL_DD_SOL_DATA = 0x01, /* solicited data */
+ SPFC_FC_RCTL_DD_UNSOL_CTL = 0x02, /* unsolicited control */
+ SPFC_FC_RCTL_DD_SOL_CTL = 0x03, /* solicited control or reply */
+ SPFC_FC_RCTL_DD_UNSOL_DATA = 0x04, /* unsolicited data */
+ SPFC_FC_RCTL_DD_DATA_DESC = 0x05, /* data descriptor */
+ SPFC_FC_RCTL_DD_UNSOL_CMD = 0x06, /* unsolicited command */
+ SPFC_FC_RCTL_DD_CMD_STATUS = 0x07, /* command status */
+
+#define SPFC_FC_RCTL_ILS_REQ SPFC_FC_RCTL_DD_UNSOL_CTL /* ILS request */
+#define SPFC_FC_RCTL_ILS_REP SPFC_FC_RCTL_DD_SOL_CTL /* ILS reply */
+
+ /*
+ * Extended Link_Data
+ */
+ SPFC_FC_RCTL_ELS_REQ = 0x22, /* extended link services request */
+ SPFC_FC_RCTL_ELS_RSP = 0x23, /* extended link services reply */
+ SPFC_FC_RCTL_ELS4_REQ = 0x32, /* FC-4 ELS request */
+ SPFC_FC_RCTL_ELS4_RSP = 0x33, /* FC-4 ELS reply */
+ /*
+ * Optional Extended Headers
+ */
+ SPFC_FC_RCTL_VFTH = 0x50, /* virtual fabric tagging header */
+ SPFC_FC_RCTL_IFRH = 0x51, /* inter-fabric routing header */
+ SPFC_FC_RCTL_ENCH = 0x52, /* encapsulation header */
+ /*
+ * Basic Link Services fh_r_ctl values.
+ */
+ SPFC_FC_RCTL_BA_NOP = 0x80, /* basic link service NOP */
+ SPFC_FC_RCTL_BA_ABTS = 0x81, /* basic link service abort */
+ SPFC_FC_RCTL_BA_RMC = 0x82, /* remove connection */
+ SPFC_FC_RCTL_BA_ACC = 0x84, /* basic accept */
+ SPFC_FC_RCTL_BA_RJT = 0x85, /* basic reject */
+ SPFC_FC_RCTL_BA_PRMT = 0x86, /* dedicated connection preempted */
+ /*
+ * Link Control Information.
+ */
+ SPFC_FC_RCTL_ACK_1 = 0xc0, /* acknowledge_1 */
+ SPFC_FC_RCTL_ACK_0 = 0xc1, /* acknowledge_0 */
+ SPFC_FC_RCTL_P_RJT = 0xc2, /* port reject */
+ SPFC_FC_RCTL_F_RJT = 0xc3, /* fabric reject */
+ SPFC_FC_RCTL_P_BSY = 0xc4, /* port busy */
+ SPFC_FC_RCTL_F_BSY = 0xc5, /* fabric busy to data frame */
+ SPFC_FC_RCTL_F_BSYL = 0xc6, /* fabric busy to link control frame */
+ SPFC_FC_RCTL_LCR = 0xc7, /* link credit reset */
+ SPFC_FC_RCTL_END = 0xc9 /* end */
+};
+
+struct spfc_fc_frame_header {
+ u8 rctl; /* routing control */
+ u8 did[ARRAY_INDEX_3]; /* Destination ID */
+
+ u8 cs_ctrl; /* class of service control / pri */
+ u8 sid[ARRAY_INDEX_3]; /* Source ID */
+
+ u8 type; /* see enum fc_fh_type below */
+ u8 frame_ctrl[ARRAY_INDEX_3]; /* frame control */
+
+ u8 seq_id; /* sequence ID */
+ u8 df_ctrl; /* data field control */
+ u16 seq_cnt; /* sequence count */
+
+ u16 oxid; /* originator exchange ID */
+ u16 rxid; /* responder exchange ID */
+ u32 param_offset; /* parameter or relative offset */
+};
+
+u32 spfc_recv_els_cmnd(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u8 *els_pld, u32 pld_len,
+ bool first);
+u32 spfc_rcv_ls_gs_rsp(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u32 hot_tag);
+u32 spfc_rcv_els_rsp_sts(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u32 hot_tag);
+u32 spfc_rcv_bls_rsp(const struct spfc_hba_info *hba, struct unf_frame_pkg *pkg,
+ u32 hot_tag);
+u32 spfc_rsv_bls_rsp_sts(const struct spfc_hba_info *hba,
+ struct unf_frame_pkg *pkg, u32 rx_id);
+void spfc_save_login_parms_in_sq_info(struct spfc_hba_info *hba,
+ struct unf_port_login_parms *login_params);
+u32 spfc_handle_aeq_off_load_err(struct spfc_hba_info *hba,
+ struct spfc_aqe_data *aeq_msg);
+u32 spfc_free_xid(void *handle, struct unf_frame_pkg *pkg);
+u32 spfc_scq_free_xid_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe);
+u32 spfc_scq_exchg_timeout_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe);
+u32 spfc_scq_rcv_sq_nop_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe);
+u32 spfc_send_els_via_default_session(struct spfc_hba_info *hba, struct spfc_sqe *io_sqe,
+ struct unf_frame_pkg *pkg,
+ struct spfc_parent_queue_info *prt_queue_info);
+u32 spfc_send_ls_gs_cmnd(void *handle, struct unf_frame_pkg *pkg);
+u32 spfc_send_bls_cmnd(void *handle, struct unf_frame_pkg *pkg);
+
+/* Receive Frame from SCQ */
+u32 spfc_rcv_scq_entry_from_scq(struct spfc_hba_info *hba,
+ union spfc_scqe *scqe, u32 scqn);
+void *spfc_get_els_buf_by_user_id(struct spfc_hba_info *hba, u16 user_id);
+
+#define SPFC_CHECK_PKG_ALLOCTIME(pkg) \
+ do { \
+ if (unlikely(UNF_GETXCHGALLOCTIME(pkg) == 0)) { \
+ FC_DRV_PRINT(UNF_LOG_NORMAL, \
+ UNF_WARN, \
+ "[warn]Invalid MagicNum,S_ID(0x%x) " \
+ "D_ID(0x%x) OXID(0x%x) " \
+ "RX_ID(0x%x) Pkg type(0x%x) hot " \
+ "pooltag(0x%x)", \
+ UNF_GET_SID(pkg), UNF_GET_DID(pkg), \
+ UNF_GET_OXID(pkg), UNF_GET_RXID(pkg), \
+ ((struct unf_frame_pkg *)(pkg))->type, \
+ UNF_GET_XCHG_TAG(pkg)); \
+ } \
+ } while (0)
+
+#endif
diff --git a/drivers/scsi/spfc/hw/spfc_utils.c b/drivers/scsi/spfc/hw/spfc_utils.c
new file mode 100644
index 000000000000..328c388c95fe
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_utils.c
@@ -0,0 +1,102 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "spfc_utils.h"
+#include "unf_log.h"
+#include "unf_common.h"
+
+void spfc_cpu_to_big64(void *addr, u32 size)
+{
+ u32 index = 0;
+ u32 cnt = 0;
+ u64 *temp = NULL;
+
+ FC_CHECK_VALID(addr, dump_stack(); return);
+ FC_CHECK_VALID((size % SPFC_QWORD_BYTE) == 0, dump_stack(); return);
+
+ temp = (u64 *)addr;
+ cnt = SPFC_SHIFT_TO_U64(size);
+
+ for (index = 0; index < cnt; index++) {
+ *temp = cpu_to_be64(*temp);
+ temp++;
+ }
+}
+
+void spfc_big_to_cpu64(void *addr, u32 size)
+{
+ u32 index = 0;
+ u32 cnt = 0;
+ u64 *temp = NULL;
+
+ FC_CHECK_VALID(addr, dump_stack(); return);
+ FC_CHECK_VALID((size % SPFC_QWORD_BYTE) == 0, dump_stack(); return);
+
+ temp = (u64 *)addr;
+ cnt = SPFC_SHIFT_TO_U64(size);
+
+ for (index = 0; index < cnt; index++) {
+ *temp = be64_to_cpu(*temp);
+ temp++;
+ }
+}
+
+void spfc_cpu_to_big32(void *addr, u32 size)
+{
+ unf_cpu_to_big_end(addr, size);
+}
+
+void spfc_big_to_cpu32(void *addr, u32 size)
+{
+ if (size % UNF_BYTES_OF_DWORD)
+ dump_stack();
+
+ unf_big_end_to_cpu(addr, size);
+}
+
+void spfc_cpu_to_be24(u8 *data, u32 value)
+{
+ data[ARRAY_INDEX_0] = (value >> UNF_SHIFT_16) & UNF_MASK_BIT_7_0;
+ data[ARRAY_INDEX_1] = (value >> UNF_SHIFT_8) & UNF_MASK_BIT_7_0;
+ data[ARRAY_INDEX_2] = value & UNF_MASK_BIT_7_0;
+}
+
+u32 spfc_big_to_cpu24(u8 *data)
+{
+ return (data[ARRAY_INDEX_0] << UNF_SHIFT_16) |
+ (data[ARRAY_INDEX_1] << UNF_SHIFT_8) | data[ARRAY_INDEX_2];
+}
+
+void spfc_print_buff(u32 dbg_level, void *buff, u32 size)
+{
+ u32 *spfc_buff = NULL;
+ u32 loop = 0;
+ u32 index = 0;
+
+ FC_CHECK_VALID(buff, dump_stack(); return);
+ FC_CHECK_VALID(0 == (size % SPFC_DWORD_BYTE), dump_stack(); return);
+
+ if ((dbg_level) <= unf_dgb_level) {
+ spfc_buff = (u32 *)buff;
+ loop = size / SPFC_DWORD_BYTE;
+
+ for (index = 0; index < loop; index++) {
+ spfc_buff = (u32 *)buff + index;
+ FC_DRV_PRINT(UNF_LOG_NORMAL,
+ UNF_MAJOR, "Buff DW%u 0x%08x.", index, *spfc_buff);
+ }
+ }
+}
+
+u32 spfc_log2n(u32 val)
+{
+ u32 result = 0;
+ u32 logn = (val >> UNF_SHIFT_1);
+
+ while (logn) {
+ logn >>= UNF_SHIFT_1;
+ result++;
+ }
+
+ return result;
+}
diff --git a/drivers/scsi/spfc/hw/spfc_utils.h b/drivers/scsi/spfc/hw/spfc_utils.h
new file mode 100644
index 000000000000..6b4330da3f1d
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_utils.h
@@ -0,0 +1,202 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_UTILS_H
+#define SPFC_UTILS_H
+
+#include "unf_type.h"
+#include "unf_log.h"
+
+#define SPFC_ZERO (0)
+
+#define SPFC_BIT(n) (0x1UL << (n))
+#define SPFC_BIT_0 SPFC_BIT(0)
+#define SPFC_BIT_1 SPFC_BIT(1)
+#define SPFC_BIT_2 SPFC_BIT(2)
+#define SPFC_BIT_3 SPFC_BIT(3)
+#define SPFC_BIT_4 SPFC_BIT(4)
+#define SPFC_BIT_5 SPFC_BIT(5)
+#define SPFC_BIT_6 SPFC_BIT(6)
+#define SPFC_BIT_7 SPFC_BIT(7)
+#define SPFC_BIT_8 SPFC_BIT(8)
+#define SPFC_BIT_9 SPFC_BIT(9)
+#define SPFC_BIT_10 SPFC_BIT(10)
+#define SPFC_BIT_11 SPFC_BIT(11)
+#define SPFC_BIT_12 SPFC_BIT(12)
+#define SPFC_BIT_13 SPFC_BIT(13)
+#define SPFC_BIT_14 SPFC_BIT(14)
+#define SPFC_BIT_15 SPFC_BIT(15)
+#define SPFC_BIT_16 SPFC_BIT(16)
+#define SPFC_BIT_17 SPFC_BIT(17)
+#define SPFC_BIT_18 SPFC_BIT(18)
+#define SPFC_BIT_19 SPFC_BIT(19)
+#define SPFC_BIT_20 SPFC_BIT(20)
+#define SPFC_BIT_21 SPFC_BIT(21)
+#define SPFC_BIT_22 SPFC_BIT(22)
+#define SPFC_BIT_23 SPFC_BIT(23)
+#define SPFC_BIT_24 SPFC_BIT(24)
+#define SPFC_BIT_25 SPFC_BIT(25)
+#define SPFC_BIT_26 SPFC_BIT(26)
+#define SPFC_BIT_27 SPFC_BIT(27)
+#define SPFC_BIT_28 SPFC_BIT(28)
+#define SPFC_BIT_29 SPFC_BIT(29)
+#define SPFC_BIT_30 SPFC_BIT(30)
+#define SPFC_BIT_31 SPFC_BIT(31)
+
+#define SPFC_GET_BITS(data, mask) ((data) & (mask)) /* Obtains the bit */
+#define SPFC_SET_BITS(data, mask) ((data) |= (mask)) /* set the bit */
+#define SPFC_CLR_BITS(data, mask) ((data) &= ~(mask)) /* clear the bit */
+
+#define SPFC_LSB(x) ((u8)(x))
+#define SPFC_MSB(x) ((u8)((u16)(x) >> 8))
+
+#define SPFC_LSW(x) ((u16)(x))
+#define SPFC_MSW(x) ((u16)((u32)(x) >> 16))
+
+#define SPFC_LSD(x) ((u32)((u64)(x)))
+#define SPFC_MSD(x) ((u32)((((u64)(x)) >> 16) >> 16))
+
+#define SPFC_BYTES_TO_QW_NUM(x) ((x) >> 3)
+#define SPFC_BYTES_TO_DW_NUM(x) ((x) >> 2)
+
+#define UNF_GET_SHIFTMASK(__src, __shift, __mask) (((__src) & (__mask)) >> (__shift))
+#define UNF_FC_SET_SHIFTMASK(__des, __val, __shift, __mask) \
+ ((__des) = (((__des) & ~(__mask)) | (((__val) << (__shift)) & (__mask))))
+
+/* R_CTL */
+#define UNF_FC_HEADER_RCTL_MASK (0xFF000000)
+#define UNF_FC_HEADER_RCTL_SHIFT (24)
+#define UNF_FC_HEADER_RCTL_DWORD (0)
+#define UNF_GET_FC_HEADER_RCTL(__pfcheader) \
+ UNF_GET_SHIFTMASK(((u32 *)(void *)(__pfcheader))[UNF_FC_HEADER_RCTL_DWORD], \
+ UNF_FC_HEADER_RCTL_SHIFT, UNF_FC_HEADER_RCTL_MASK)
+
+#define UNF_SET_FC_HEADER_RCTL(__pfcheader, __val) \
+ do { \
+ UNF_FC_SET_SHIFTMASK(((u32 *)(void *)(__pfcheader))[UNF_FC_HEADER_RCTL_DWORD], \
+ __val, UNF_FC_HEADER_RCTL_SHIFT, UNF_FC_HEADER_RCTL_MASK); \
+ } while (0)
+
+/* PRLI PARAM 3 */
+#define SPFC_PRLI_PARAM_WXFER_ENABLE_MASK (0x00000001)
+#define SPFC_PRLI_PARAM_WXFER_ENABLE_SHIFT (0)
+#define SPFC_PRLI_PARAM_WXFER_DWORD (3)
+#define SPFC_GET_PRLI_PARAM_WXFER(__pfcheader) \
+ UNF_GET_SHIFTMASK(((u32 *)(void *)(__pfcheader))[SPFC_PRLI_PARAM_WXFER_DWORD], \
+ SPFC_PRLI_PARAM_WXFER_ENABLE_SHIFT, \
+ SPFC_PRLI_PARAM_WXFER_ENABLE_MASK)
+
+#define SPFC_PRLI_PARAM_CONF_ENABLE_MASK (0x00000080)
+#define SPFC_PRLI_PARAM_CONF_ENABLE_SHIFT (7)
+#define SPFC_PRLI_PARAM_CONF_DWORD (3)
+#define SPFC_GET_PRLI_PARAM_CONF(__pfcheader) \
+ UNF_GET_SHIFTMASK(((u32 *)(void *)(__pfcheader))[SPFC_PRLI_PARAM_CONF_DWORD], \
+ SPFC_PRLI_PARAM_CONF_ENABLE_SHIFT, \
+ SPFC_PRLI_PARAM_CONF_ENABLE_MASK)
+
+#define SPFC_PRLI_PARAM_REC_ENABLE_MASK (0x00000400)
+#define SPFC_PRLI_PARAM_REC_ENABLE_SHIFT (10)
+#define SPFC_PRLI_PARAM_CONF_REC (3)
+#define SPFC_GET_PRLI_PARAM_REC(__pfcheader) \
+ UNF_GET_SHIFTMASK(((u32 *)(void *)(__pfcheader))[SPFC_PRLI_PARAM_CONF_REC], \
+ SPFC_PRLI_PARAM_REC_ENABLE_SHIFT, SPFC_PRLI_PARAM_REC_ENABLE_MASK)
+
+#define SPFC_FUNCTION_ENTER \
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ALL, \
+ "%s Enter.", __func__)
+#define SPFC_FUNCTION_RETURN \
+ FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ALL, \
+ "%s Return.", __func__)
+
+#define SPFC_SPIN_LOCK_IRQSAVE(interrupt, hw_adapt_lock, flags) \
+ do { \
+ if ((interrupt) == false) { \
+ spin_lock_irqsave(&(hw_adapt_lock), flags); \
+ } \
+ } while (0)
+
+#define SPFC_SPIN_UNLOCK_IRQRESTORE(interrupt, hw_adapt_lock, flags) \
+ do { \
+ if ((interrupt) == false) { \
+ spin_unlock_irqrestore(&(hw_adapt_lock), flags); \
+ } \
+ } while (0)
+
+#define FC_CHECK_VALID(condition, fail_do) \
+ do { \
+ if (unlikely(!(condition))) { \
+ FC_DRV_PRINT(UNF_LOG_REG_ATT, \
+ UNF_ERR, "Para check(%s) invalid", \
+ #condition); \
+ fail_do; \
+ } \
+ } while (0)
+
+#define RETURN_ERROR_S32 (-1)
+#define UNF_RETURN_ERROR_S32 (-1)
+
+enum SPFC_LOG_CTRL_E {
+ SPFC_LOG_ALL = 0,
+ SPFC_LOG_SCQE_RX,
+ SPFC_LOG_ELS_TX,
+ SPFC_LOG_ELS_RX,
+ SPFC_LOG_GS_TX,
+ SPFC_LOG_GS_RX,
+ SPFC_LOG_BLS_TX,
+ SPFC_LOG_BLS_RX,
+ SPFC_LOG_FCP_TX,
+ SPFC_LOG_FCP_RX,
+ SPFC_LOG_SESS_TX,
+ SPFC_LOG_SESS_RX,
+ SPFC_LOG_DIF_TX,
+ SPFC_LOG_DIF_RX
+};
+
+extern u32 spfc_log_en;
+#define SPFC_LOG_EN(hba, log_ctrl) (spfc_log_en + (log_ctrl))
+
+enum SPFC_HBA_ERR_STAT_E {
+ SPFC_STAT_CTXT_FLUSH_DONE = 0,
+ SPFC_STAT_SQ_WAIT_EMPTY,
+ SPFC_STAT_LAST_GS_SCQE,
+ SPFC_STAT_SQ_POOL_EMPTY,
+ SPFC_STAT_PARENT_IO_FLUSHED,
+ SPFC_STAT_ROOT_IO_FLUSHED, /* 5 */
+ SPFC_STAT_ROOT_SQ_FULL,
+ SPFC_STAT_ELS_RSP_EXCH_REUSE,
+ SPFC_STAT_GS_RSP_EXCH_REUSE,
+ SPFC_STAT_SQ_IO_BUFFER_CLEARED,
+ SPFC_STAT_PARENT_SQ_NOT_OFFLOADED, /* 10 */
+ SPFC_STAT_PARENT_SQ_QUEUE_DELAYED_WORK,
+ SPFC_STAT_PARENT_SQ_INVALID_CACHED_ID,
+ SPFC_HBA_STAT_BUTT
+};
+
+#define SPFC_DWORD_BYTE (4)
+#define SPFC_QWORD_BYTE (8)
+#define SPFC_SHIFT_TO_U64(x) ((x) >> 3)
+#define SPFC_SHIFT_TO_U32(x) ((x) >> 2)
+
+void spfc_cpu_to_big64(void *addr, u32 size);
+void spfc_big_to_cpu64(void *addr, u32 size);
+void spfc_cpu_to_big32(void *addr, u32 size);
+void spfc_big_to_cpu32(void *addr, u32 size);
+void spfc_cpu_to_be24(u8 *data, u32 value);
+u32 spfc_big_to_cpu24(u8 *data);
+
+void spfc_print_buff(u32 dbg_level, void *buff, u32 size);
+
+u32 spfc_log2n(u32 val);
+
+static inline void spfc_swap_16_in_32(u32 *paddr, u32 length)
+{
+ u32 i;
+
+ for (i = 0; i < length; i++) {
+ paddr[i] =
+ ((((paddr[i]) & UNF_MASK_BIT_31_16) >> UNF_SHIFT_16) |
+ (((paddr[i]) & UNF_MASK_BIT_15_0) << UNF_SHIFT_16));
+ }
+}
+
+#endif /* SPFC_UTILS_H */
diff --git a/drivers/scsi/spfc/hw/spfc_wqe.c b/drivers/scsi/spfc/hw/spfc_wqe.c
new file mode 100644
index 000000000000..61909c51bc8c
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_wqe.c
@@ -0,0 +1,646 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#include "spfc_wqe.h"
+#include "spfc_module.h"
+#include "spfc_service.h"
+
+void spfc_build_tmf_rsp_wqe_ts_header(struct unf_frame_pkg *pkg,
+ struct spfc_sqe_tmf_rsp *sqe, u16 exi_base,
+ u32 scqn)
+{
+ sqe->ts_sl.task_type = SPFC_SQE_FCP_TMF_TRSP;
+ sqe->ts_sl.wd0.conn_id =
+ (u16)(pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX]);
+
+ if (UNF_GET_RXID(pkg) == INVALID_VALUE16)
+ sqe->ts_sl.local_xid = INVALID_VALUE16;
+ else
+ sqe->ts_sl.local_xid = UNF_GET_RXID(pkg) + exi_base;
+
+ sqe->ts_sl.tmf_rsp.wd0.scqn = scqn;
+ sqe->ts_sl.magic_num = UNF_GETXCHGALLOCTIME(pkg);
+}
+
+void spfc_build_common_wqe_ctrls(struct spfc_wqe_ctrl *ctrl_sl, u8 task_len)
+{
+ /* "BDSL" field of CtrlS - defines the size of BDS, which varies from 0
+ * to 2040 bytes (8 bits of 8 bytes' chunk)
+ */
+ ctrl_sl->ch.wd0.bdsl = 0;
+
+ /* "DrvSL" field of CtrlS - defines the size of DrvS, which varies from
+ * 0 to 24 bytes
+ */
+ ctrl_sl->ch.wd0.drv_sl = 0;
+
+ /* a.
+ * b1 - linking WQE, which will be only used in linked page architecture
+ * instead of ring, it's a special control WQE which does not contain
+ * any buffer or inline data information, and will only be consumed by
+ * hardware. The size is aligned to WQEBB/WQE b0 - normal WQE, either
+ * normal SEG WQE or inline data WQE
+ */
+ ctrl_sl->ch.wd0.wf = 0;
+
+ /*
+ * "CF" field of CtrlS - Completion Format - defines the format of CS.
+ * a. b0 - Status information is embedded inside of Completion Section
+ * b. b1 - Completion Section keeps SGL, where Status information
+ * should be written. (For the definition of SGLs see
+ * section 4.1.)
+ */
+ ctrl_sl->ch.wd0.cf = 0;
+
+ /* "TSL" field of CtrlS - defines the size of TS, which varies from 0 to
+ * 248 bytes
+ */
+ ctrl_sl->ch.wd0.tsl = task_len;
+
+ /*
+ * Variable length SGE (vSGE). The size of SGE is 16 bytes. The vSGE
+ * format is of two types, which are defined by "VA " field of CtrlS.
+ * "VA" stands for Virtual Address: o b0. SGE comprises 64-bits
+ * buffer's pointer and 31-bits Length, each SGE can only support up to
+ * 2G-1B, it can guarantee each single SGE length can not exceed 2GB by
+ * nature, A byte count value of zero means a 0byte data transfer. o b1.
+ * SGE comprises 64-bits buffer's pointer, 31-bits Length and 30-bits
+ * Key of the Translation table , each SGE can only support up to 2G-1B,
+ * it can guarantee each single SGE length can not exceed 2GB by nature,
+ * A byte count value of zero means a 0byte data transfer
+ */
+ ctrl_sl->ch.wd0.va = 0;
+
+ /*
+ * "DF" field of CtrlS - Data Format - defines the format of BDS
+ * a. b0 - BDS carries the list of SGEs (SGL)
+ * b. b1 - BDS carries the inline data
+ */
+ ctrl_sl->ch.wd0.df = 0;
+
+ /* "CR" - Completion is Required - marks CQE generation request per WQE
+ */
+ ctrl_sl->ch.wd0.cr = 1;
+
+ /* "DIFSL" field of CtrlS - defines the size of DIFS, which varies from
+ * 0 to 56 bytes
+ */
+ ctrl_sl->ch.wd0.dif_sl = 0;
+
+ /* "CSL" field of CtrlS - defines the size of CS, which varies from 0 to
+ * 24 bytes
+ */
+ ctrl_sl->ch.wd0.csl = 0;
+
+ /* CtrlSL describes the size of CtrlS in 8 bytes chunks. The
+ * value Zero is not valid
+ */
+ ctrl_sl->ch.wd0.ctrl_sl = 1;
+
+ /* "O" - Owner - marks ownership of WQE */
+ ctrl_sl->ch.wd0.owner = 0;
+}
+
+void spfc_build_trd_twr_wqe_ctrls(struct unf_frame_pkg *pkg, struct spfc_sqe *sqe)
+{
+ /* "BDSL" field of CtrlS - defines the size of BDS, which varies from 0
+ * to 2040 bytes (8 bits of 8 bytes' chunk)
+ */
+ /* A TRD/TWR WQE carries 2 SGEs by default, 4 DWs per SGE; the value is
+ * 4 because the unit is 2 DWs. In double SGL mode, bdsl is 2.
+ */
+ sqe->ctrl_sl.ch.wd0.bdsl = SPFC_T_RD_WR_WQE_CTR_BDSL_SIZE;
+
+ /* "DrvSL" field of CtrlS - defines the size of DrvS, which varies from
+ * 0 to 24 bytes
+ */
+ /* DrvSL = 0 */
+ sqe->ctrl_sl.ch.wd0.drv_sl = 0;
+
+ /* a.
+ * b1 - linking WQE, which will be only used in linked page architecture
+ * instead of ring, it's a special control WQE which does not contain
+ * any buffer or inline data information, and will only be consumed by
+ * hardware. The size is aligned to WQEBB/WQE b0 - normal WQE, either
+ * normal SEG WQE or inline data WQE
+ */
+ /* normal wqe */
+ sqe->ctrl_sl.ch.wd0.wf = 0;
+
+ /*
+ * "CF" field of CtrlS - Completion Format - defines the format of CS.
+ * a. b0 - Status information is embedded inside of Completion Section
+ * b. b1 - Completion Section keeps SGL, where Status information
+ * should be written. (For the definition of SGLs see section 4.1)
+ */
+ /* by SCQE mode, the value is ignored */
+ sqe->ctrl_sl.ch.wd0.cf = 0;
+
+ /* "TSL" field of CtrlS - defines the size of TS, which varies from 0 to
+ * 248 bytes
+ */
+ /* TSL is configured by 56 bytes */
+ sqe->ctrl_sl.ch.wd0.tsl =
+ sizeof(struct spfc_sqe_ts) / SPFC_WQE_SECTION_CHUNK_SIZE;
+
+ /*
+ * Variable length SGE (vSGE). The size of SGE is 16 bytes. The vSGE
+ * format is of two types, which are defined by "VA " field of CtrlS.
+ * "VA" stands for Virtual Address: o b0. SGE comprises 64-bits buffer's
+ * pointer and 31-bits Length, each SGE can only support up to 2G-1B, it
+ * can guarantee each single SGE length can not exceed 2GB by nature, A
+ * byte count value of zero means a 0byte data transfer. o b1. SGE
+ * comprises 64-bits buffer's pointer, 31-bits Length and 30-bits Key of
+ * the Translation table , each SGE can only support up to 2G-1B, it can
+ * guarantee each single SGE length can not exceed 2GB by nature, A byte
+ * count value of zero means a 0byte data transfer
+ */
+ sqe->ctrl_sl.ch.wd0.va = 0;
+
+ /*
+ * "DF" field of CtrlS - Data Format - defines the format of BDS
+ * a. b0 - BDS carries the list of SGEs (SGL)
+ * b. b1 - BDS carries the inline data
+ */
+ sqe->ctrl_sl.ch.wd0.df = 0;
+
+ /* "CR" - Completion is Required - marks CQE generation request per WQE
+ */
+ /* by SCQE mode, this value is ignored */
+ sqe->ctrl_sl.ch.wd0.cr = 1;
+
+ /* "DIFSL" field of CtrlS - defines the size of DIFS, which varies from
+ * 0 to 56 bytes.
+ */
+ sqe->ctrl_sl.ch.wd0.dif_sl = 0;
+
+ /* "CSL" field of CtrlS - defines the size of CS, which varies from 0 to
+ * 24 bytes
+ */
+ sqe->ctrl_sl.ch.wd0.csl = 0;
+
+ /* CtrlSL describes the size of CtrlS in 8 bytes chunks. The
+ * value Zero is not valid.
+ */
+ sqe->ctrl_sl.ch.wd0.ctrl_sl = SPFC_T_RD_WR_WQE_CTR_CTRLSL_SIZE;
+
+ /* "O" - Owner - marks ownership of WQE */
+ sqe->ctrl_sl.ch.wd0.owner = 0;
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_service_wqe_ts_common
+ * Function Description : Construct the DW1~DW3 field in the Parent SQ WQE
+ * request of the ELS and ELS_RSP requests.
+ * Input Parameters : struct spfc_sqe_ts *sqe_ts u32 rport_index u16 local_xid
+ * u16 remote_xid u16 data_len
+ * Output Parameters : N/A
+ * Return Type : void
+ ****************************************************************************
+ */
+void spfc_build_service_wqe_ts_common(struct spfc_sqe_ts *sqe_ts, u32 rport_index,
+ u16 local_xid, u16 remote_xid, u16 data_len)
+{
+ sqe_ts->local_xid = local_xid;
+
+ sqe_ts->wd0.conn_id = (u16)rport_index;
+ sqe_ts->wd0.remote_xid = remote_xid;
+
+ sqe_ts->cont.els_gs_elsrsp_comm.data_len = data_len;
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_els_gs_wqe_sge
+ * Function Description : Construct the SGE field of the ELS and ELS_RSP WQE.
+ * The SGE and frame content are converted to big-endian in this
+ * function.
+ * Input Parameters: struct spfc_sqe *sqe void *buf_addr u32 buf_len u32 xid
+ * Output Parameters : N/A
+ * Return Type : void
+ ****************************************************************************
+ */
+void spfc_build_els_gs_wqe_sge(struct spfc_sqe *sqe, void *buf_addr, u64 phy_addr,
+ u32 buf_len, u32 xid, void *handle)
+{
+ u64 els_rsp_phy_addr;
+ struct spfc_variable_sge *sge = NULL;
+
+ /* Fill in SGE and convert it to big-endian. */
+ sge = &sqe->sge[ARRAY_INDEX_0];
+ els_rsp_phy_addr = phy_addr;
+ sge->buf_addr_hi = SPFC_HIGH_32_BITS(els_rsp_phy_addr);
+ sge->buf_addr_lo = SPFC_LOW_32_BITS(els_rsp_phy_addr);
+ sge->wd0.buf_len = buf_len;
+ sge->wd0.r_flag = 0;
+ sge->wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
+ sge->wd1.buf_addr_gpa = SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE;
+ sge->wd1.xid = 0;
+ sge->wd1.last_flag = SPFC_WQE_SGE_LAST_FLAG;
+ spfc_cpu_to_big32(sge, sizeof(*sge));
+
+ /* Convert the payload of the FC frame to big-endian. */
+ if (buf_addr)
+ spfc_cpu_to_big32(buf_addr, buf_len);
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_els_wqe_ts_rsp
+ * Function Description : Construct the DW2~DW6 field in the Parent SQ WQE
+ * of the ELS_RSP request.
+ * Input Parameters : struct spfc_sqe *sqe void *sq_info void *frame_pld
+ * u16 type u16 cmnd u32 scqn
+ * Output Parameters: N/A
+ * Return Type : void
+ ****************************************************************************
+ */
+void spfc_build_els_wqe_ts_rsp(struct spfc_sqe *sqe, void *info,
+ struct unf_frame_pkg *pkg, void *frame_pld,
+ u16 type, u16 cmnd)
+{
+ struct unf_prli_payload *prli_acc_pld = NULL;
+ struct spfc_sqe_els_rsp *els_rsp = NULL;
+ struct spfc_sqe_ts *sqe_ts = NULL;
+ struct spfc_parent_sq_info *sq_info = NULL;
+ struct spfc_hba_info *hba = NULL;
+ struct unf_fc_head *pkg_fc_hdr_info = NULL;
+ struct spfc_parent_queue_info *prnt_q_info = (struct spfc_parent_queue_info *)info;
+
+ FC_CHECK_RETURN_VOID(sqe);
+ FC_CHECK_RETURN_VOID(frame_pld);
+
+ sqe_ts = &sqe->ts_sl;
+ els_rsp = &sqe_ts->cont.els_rsp;
+ sqe_ts->task_type = SPFC_SQE_ELS_RSP;
+
+ /* The default chip does not need to update parameters. */
+ els_rsp->wd1.para_update = 0x0;
+
+ sq_info = &prnt_q_info->parent_sq_info;
+ hba = (struct spfc_hba_info *)sq_info->hba;
+
+ pkg_fc_hdr_info = &pkg->frame_head;
+ els_rsp->sid = pkg_fc_hdr_info->csctl_sid;
+ els_rsp->did = pkg_fc_hdr_info->rctl_did;
+ els_rsp->wd7.hotpooltag = UNF_GET_HOTPOOL_TAG(pkg) + hba->exi_base;
+ els_rsp->wd2.class_mode = FC_PROTOCOL_CLASS_3;
+
+ if (type == ELS_RJT)
+ els_rsp->wd2.class_mode = pkg->class_mode;
+
+ /* When the PLOGI request is sent, the microcode needs to be instructed
+ * to clear the I/O related to the link to avoid data inconsistency
+ * caused by the disorder of the IO.
+ */
+ if ((cmnd == ELS_LOGO || cmnd == ELS_PLOGI)) {
+ els_rsp->wd1.clr_io = 1;
+ els_rsp->wd6.reset_exch_start = hba->exi_base;
+ els_rsp->wd6.reset_exch_end =
+ hba->exi_base + (hba->exi_count - 1);
+ els_rsp->wd7.scqn =
+ prnt_q_info->parent_sts_scq_info.cqm_queue_id;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "Port(0x%x) send cmd(0x%x) to RPort(0x%x),rport index(0x%x), notify clean io start 0x%x, end 0x%x, scqn 0x%x.",
+ sq_info->local_port_id, cmnd, sq_info->remote_port_id,
+ sq_info->rport_index, els_rsp->wd6.reset_exch_start,
+ els_rsp->wd6.reset_exch_end, els_rsp->wd7.scqn);
+
+ return;
+ }
+
+ if (type == ELS_RJT)
+ return;
+
+ /* Enter WQE in the PrliAcc negotiation parameter, and fill in the
+ * Update flag in WQE.
+ */
+ if (cmnd == ELS_PRLI) {
+ /* The chip updates the PLOGI ACC negotiation parameters. */
+ els_rsp->wd2.seq_cnt = sq_info->plogi_co_parms.seq_cnt;
+ els_rsp->wd2.e_d_tov = sq_info->plogi_co_parms.ed_tov;
+ els_rsp->wd2.tx_mfs = sq_info->plogi_co_parms.tx_mfs;
+ els_rsp->e_d_tov_timer_val = sq_info->plogi_co_parms.ed_tov_time;
+
+ /* The chip updates the PRLI ACC parameter. */
+ prli_acc_pld = (struct unf_prli_payload *)frame_pld;
+ els_rsp->wd4.xfer_dis = SPFC_GET_PRLI_PARAM_WXFER(prli_acc_pld->parms);
+ els_rsp->wd4.conf = SPFC_GET_PRLI_PARAM_CONF(prli_acc_pld->parms);
+ els_rsp->wd4.rec = SPFC_GET_PRLI_PARAM_REC(prli_acc_pld->parms);
+
+ els_rsp->wd1.para_update = 0x03;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "Port(0x%x) save rport index(0x%x) login parms,seqcnt:0x%x,e_d_tov:0x%x,txmfs:0x%x,e_d_tovtimerval:0x%x, xfer_dis:0x%x,conf:0x%x,rec:0x%x.",
+ sq_info->local_port_id, sq_info->rport_index,
+ els_rsp->wd2.seq_cnt, els_rsp->wd2.e_d_tov,
+ els_rsp->wd2.tx_mfs, els_rsp->e_d_tov_timer_val,
+ els_rsp->wd4.xfer_dis, els_rsp->wd4.conf, els_rsp->wd4.rec);
+ }
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_els_wqe_ts_req
+ * Function Description: Construct the DW2~DW4 field in the Parent SQ WQE
+ * of the ELS request.
+ * Input Parameters: struct spfc_sqe *sqe void *sq_info u16 cmnd u32 scqn
+ * Output Parameters: N/A
+ * Return Type: void
+ ****************************************************************************
+ */
+void spfc_build_els_wqe_ts_req(struct spfc_sqe *sqe, void *info, u32 scqn,
+ void *frame_pld, struct unf_frame_pkg *pkg)
+{
+ struct spfc_sqe_ts *sqe_ts = NULL;
+ struct spfc_sqe_t_els_gs *els_req = NULL;
+ struct spfc_parent_sq_info *sq_info = NULL;
+ struct spfc_hba_info *hba = NULL;
+ struct unf_fc_head *pkg_fc_hdr_info = NULL;
+ u16 cmnd;
+
+ cmnd = SPFC_GET_LS_GS_CMND_CODE(pkg->cmnd);
+
+ sqe_ts = &sqe->ts_sl;
+ if (pkg->type == UNF_PKG_GS_REQ)
+ sqe_ts->task_type = SPFC_SQE_GS_CMND;
+ else
+ sqe_ts->task_type = SPFC_SQE_ELS_CMND;
+
+ sqe_ts->magic_num = UNF_GETXCHGALLOCTIME(pkg);
+
+ els_req = &sqe_ts->cont.t_els_gs;
+ pkg_fc_hdr_info = &pkg->frame_head;
+
+ sq_info = (struct spfc_parent_sq_info *)info;
+ hba = (struct spfc_hba_info *)sq_info->hba;
+ els_req->sid = pkg_fc_hdr_info->csctl_sid;
+ els_req->did = pkg_fc_hdr_info->rctl_did;
+
+ /* When the PLOGI request is sent, the microcode needs to be instructed
+ * to clear the I/O related to the link to avoid data inconsistency
+ * caused by the disorder of the IO.
+ */
+ if ((cmnd == ELS_LOGO || cmnd == ELS_PLOGI) && hba) {
+ els_req->wd4.clr_io = 1;
+ els_req->wd6.reset_exch_start = hba->exi_base;
+ els_req->wd6.reset_exch_end = hba->exi_base + (hba->exi_count - 1);
+ els_req->wd7.scqn = scqn;
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "Port(0x%x) Rport(0x%x) SID(0x%x) send %s to DID(0x%x), notify clean io start 0x%x, end 0x%x, scqn 0x%x.",
+ hba->port_cfg.port_id, sq_info->rport_index,
+ sq_info->local_port_id, (cmnd == ELS_PLOGI) ? "PLOGI" : "LOGO",
+ sq_info->remote_port_id, els_req->wd6.reset_exch_start,
+ els_req->wd6.reset_exch_end, scqn);
+
+ return;
+ }
+
+ /* The chip updates the PLOGI ACC negotiation parameters. */
+ if (cmnd == ELS_PRLI) {
+ els_req->wd5.seq_cnt = sq_info->plogi_co_parms.seq_cnt;
+ els_req->wd5.e_d_tov = sq_info->plogi_co_parms.ed_tov;
+ els_req->wd5.tx_mfs = sq_info->plogi_co_parms.tx_mfs;
+ els_req->e_d_tov_timer_val = sq_info->plogi_co_parms.ed_tov_time;
+
+ els_req->wd4.rec_support = hba->port_cfg.tape_support ? 1 : 0;
+ els_req->wd4.para_update = 0x01;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
+ UNF_INFO,
+ "Port(0x%x) save rport index(0x%x) login parms,seqcnt:0x%x,e_d_tov:0x%x,txmfs:0x%x,e_d_tovtimerval:0x%x.",
+ sq_info->local_port_id, sq_info->rport_index,
+ els_req->wd5.seq_cnt, els_req->wd5.e_d_tov,
+ els_req->wd5.tx_mfs, els_req->e_d_tov_timer_val);
+ }
+
+ if (cmnd == ELS_ECHO)
+ els_req->echo_flag = true;
+
+ if (cmnd == ELS_REC) {
+ els_req->wd4.rec_flag = 1;
+ els_req->wd4.origin_hottag = pkg->origin_hottag + hba->exi_base;
+ els_req->origin_magicnum = pkg->origin_magicnum;
+
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
+ "Port(0x%x) Rport(0x%x) SID(0x%x) send Rec to DID(0x%x), origin_hottag 0x%x",
+ hba->port_cfg.port_id, sq_info->rport_index,
+ sq_info->local_port_id, sq_info->remote_port_id,
+ els_req->wd4.origin_hottag);
+ }
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_bls_wqe_ts_req
+ * Function Description: Construct the DW2 field in the Parent SQ WQE of
+ * the ELS request, that is, ABTS parameter.
+ * Input Parameters:struct unf_frame_pkg *pkg void *hba
+ * Output Parameters: N/A
+ * Return Type: void
+ ****************************************************************************
+ */
+void spfc_build_bls_wqe_ts_req(struct spfc_sqe *sqe, struct unf_frame_pkg *pkg, void *handle)
+{
+ struct spfc_sqe_abts *abts;
+
+ sqe->ts_sl.task_type = SPFC_SQE_BLS_CMND;
+ sqe->ts_sl.magic_num = UNF_GETXCHGALLOCTIME(pkg);
+
+ abts = &sqe->ts_sl.cont.abts;
+ abts->fh_parm_abts = pkg->frame_head.parameter;
+ abts->hotpooltag = UNF_GET_HOTPOOL_TAG(pkg) +
+ ((struct spfc_hba_info *)handle)->exi_base;
+ abts->release_timer = UNF_GET_XID_RELEASE_TIMER(pkg);
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_service_wqe_ctrl_section
+ * Function Description: fill Parent SQ WQE and Root SQ WQE's Control Section
+ * Input Parameters : struct spfc_wqe_ctrl *wqe_cs u32 ts_size u32 bdsl
+ * Output Parameters : N/A
+ * Return Type : void
+ ****************************************************************************
+ */
+void spfc_build_service_wqe_ctrl_section(struct spfc_wqe_ctrl *wqe_cs, u32 ts_size,
+ u32 bdsl)
+{
+ wqe_cs->ch.wd0.bdsl = bdsl;
+ wqe_cs->ch.wd0.drv_sl = 0;
+ wqe_cs->ch.wd0.rsvd0 = 0;
+ wqe_cs->ch.wd0.wf = 0;
+ wqe_cs->ch.wd0.cf = 0;
+ wqe_cs->ch.wd0.tsl = ts_size;
+ wqe_cs->ch.wd0.va = 0;
+ wqe_cs->ch.wd0.df = 0;
+ wqe_cs->ch.wd0.cr = 1;
+ wqe_cs->ch.wd0.dif_sl = 0;
+ wqe_cs->ch.wd0.csl = 0;
+ wqe_cs->ch.wd0.ctrl_sl = SPFC_BYTES_TO_QW_NUM(sizeof(*wqe_cs)); /* divided by 8 */
+ wqe_cs->ch.wd0.owner = 0;
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_wqe_owner_pmsn
+ * Function Description: This field is filled using the value of Control
+ * Section of Parent SQ WQE.
+ * Input Parameters: struct spfc_wqe_ctrl *wqe_cs u16 owner u16 pmsn
+ * Output Parameters : N/A
+ * Return Type: void
+ ****************************************************************************
+ */
+void spfc_build_wqe_owner_pmsn(struct spfc_sqe *io_sqe, u16 owner, u16 pmsn)
+{
+ struct spfc_wqe_ctrl *wqe_cs = &io_sqe->ctrl_sl;
+ struct spfc_wqe_ctrl *wqee_cs = &io_sqe->ectrl_sl;
+
+ wqe_cs->qsf.wqe_sn = pmsn;
+ wqe_cs->qsf.dump_wqe_sn = wqe_cs->qsf.wqe_sn;
+ wqe_cs->ch.wd0.owner = (u32)owner;
+ wqee_cs->ch.ctrl_ch_val = wqe_cs->ch.ctrl_ch_val;
+ wqee_cs->qsf.wqe_sn = wqe_cs->qsf.wqe_sn;
+ wqee_cs->qsf.dump_wqe_sn = wqe_cs->qsf.dump_wqe_sn;
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_convert_parent_wqe_to_big_endian
+ * Function Description: Set the Done field of Parent SQ WQE and convert
+ * Control Section and Task Section to big-endian.
+ * Input Parameters:struct spfc_sqe *sqe
+ * Output Parameters : N/A
+ * Return Type : void
+ ****************************************************************************
+ */
+void spfc_convert_parent_wqe_to_big_endian(struct spfc_sqe *sqe)
+{
+ if (likely(sqe->ts_sl.task_type != SPFC_TASK_T_TRESP &&
+ sqe->ts_sl.task_type != SPFC_TASK_T_TMF_RESP)) {
+ /* Convert Control Section and Task Section to big-endian. Before
+ * the SGE enters the queue, the upper-layer driver converts the
+ * SGE and Task Section to the big-endian mode.
+ */
+ spfc_cpu_to_big32(&sqe->ctrl_sl, sizeof(sqe->ctrl_sl));
+ spfc_cpu_to_big32(&sqe->ts_sl, sizeof(sqe->ts_sl));
+ spfc_cpu_to_big32(&sqe->ectrl_sl, sizeof(sqe->ectrl_sl));
+ spfc_cpu_to_big32(&sqe->sid, sizeof(sqe->sid));
+ spfc_cpu_to_big32(&sqe->did, sizeof(sqe->did));
+ spfc_cpu_to_big32(&sqe->wqe_gpa, sizeof(sqe->wqe_gpa));
+ spfc_cpu_to_big32(&sqe->db_val, sizeof(sqe->db_val));
+ } else {
+ /* SPFC_TASK_T_TRESP may use the SGE as the Task Section, so
+ * convert the entire SQE to big-endian.
+ */
+ spfc_cpu_to_big32(sqe, sizeof(struct spfc_sqe_tresp));
+ }
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_cmdqe_common
+ * Function Description : Assemble the Cmdqe Common part.
+ * Input Parameters: union spfc_cmdqe *cmd_qe enum spfc_task_type task_type u16 rxid
+ * Output Parameters : N/A
+ * Return Type: void
+ ****************************************************************************
+ */
+void spfc_build_cmdqe_common(union spfc_cmdqe *cmd_qe, enum spfc_task_type task_type,
+ u16 rxid)
+{
+ cmd_qe->common.wd0.task_type = task_type;
+ cmd_qe->common.wd0.rx_id = rxid;
+ cmd_qe->common.wd0.rsvd0 = 0;
+}
+
+#define SPFC_STANDARD_SIRT_ENABLE (1)
+#define SPFC_STANDARD_SIRT_DISABLE (0)
+#define SPFC_UNKNOWN_ID (0xFFFF)
+
+void spfc_build_icmnd_wqe_ts_header(struct unf_frame_pkg *pkg, struct spfc_sqe *sqe,
+ u8 task_type, u16 exi_base, u8 port_idx)
+{
+ sqe->ts_sl.local_xid = (u16)UNF_GET_HOTPOOL_TAG(pkg) + exi_base;
+ sqe->ts_sl.task_type = task_type;
+ sqe->ts_sl.wd0.conn_id =
+ (u16)(pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX]);
+
+ sqe->ts_sl.wd0.remote_xid = SPFC_UNKNOWN_ID;
+ sqe->ts_sl.magic_num = UNF_GETXCHGALLOCTIME(pkg);
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_icmnd_wqe_ts
+ * Function Description : Constructing the TS Domain of the ICmnd
+ * Input Parameters: void *hba struct unf_frame_pkg *pkg
+ * struct spfc_sqe_ts *sqe_ts
+ * Output Parameters :N/A
+ * Return Type : void
+ ****************************************************************************
+ */
+void spfc_build_icmnd_wqe_ts(void *handle, struct unf_frame_pkg *pkg,
+ struct spfc_sqe_ts *sqe_ts, union spfc_sqe_ts_ex *sqe_tsex)
+{
+ struct spfc_sqe_icmnd *icmnd = &sqe_ts->cont.icmnd;
+ struct spfc_hba_info *hba = NULL;
+
+ hba = (struct spfc_hba_info *)handle;
+
+ sqe_ts->cdb_type = 0;
+ memcpy(icmnd->fcp_cmnd_iu, pkg->fcp_cmnd, sizeof(struct unf_fcp_cmnd));
+
+ if (sqe_ts->task_type == SPFC_SQE_FCP_ITMF) {
+ icmnd->info.tmf.w0.bs.reset_exch_start = hba->exi_base;
+ icmnd->info.tmf.w0.bs.reset_exch_end = hba->exi_base + hba->exi_count - 1;
+
+ icmnd->info.tmf.w1.bs.reset_did = UNF_GET_DID(pkg);
+ /* delivers the marker status flag to the microcode. */
+ icmnd->info.tmf.w1.bs.marker_sts = 1;
+ SPFC_GET_RESET_TYPE(UNF_GET_TASK_MGMT_FLAGS(pkg->fcp_cmnd->control),
+ icmnd->info.tmf.w1.bs.reset_type);
+
+ icmnd->info.tmf.w2.bs.reset_sid = UNF_GET_SID(pkg);
+
+ memcpy(icmnd->info.tmf.reset_lun, pkg->fcp_cmnd->lun,
+ sizeof(icmnd->info.tmf.reset_lun));
+ }
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_icmnd_wqe_ctrls
+ * Function Description : The CtrlS domain of the ICmnd is constructed. The
+ * analysis result is the same as that of the TWTR.
+ * Input Parameters: struct unf_frame_pkg *pkg struct spfc_sqe *sqe
+ * Output Parameters: N/A
+ * Return Type: void
+ ****************************************************************************
+ */
+void spfc_build_icmnd_wqe_ctrls(struct unf_frame_pkg *pkg, struct spfc_sqe *sqe)
+{
+ spfc_build_trd_twr_wqe_ctrls(pkg, sqe);
+}
+
+/* ****************************************************************************
+ * Function Name : spfc_build_srq_wqe_ctrls
+ * Function Description : Construct the CtrlS domain of the ICmnd. The analysis
+ * result is the same as that of the TWTR.
+ * Input Parameters : struct spfc_rqe *rqe u16 owner u16 pmsn
+ * Output Parameters : N/A
+ * Return Type : void
+ ****************************************************************************
+ */
+void spfc_build_srq_wqe_ctrls(struct spfc_rqe *rqe, u16 owner, u16 pmsn)
+{
+ struct spfc_wqe_ctrl_ch *wqe_ctrls = NULL;
+
+ wqe_ctrls = &rqe->ctrl_sl.ch;
+ wqe_ctrls->wd0.owner = owner;
+ wqe_ctrls->wd0.ctrl_sl = sizeof(struct spfc_wqe_ctrl) >> UNF_SHIFT_3;
+ wqe_ctrls->wd0.csl = 1;
+ wqe_ctrls->wd0.dif_sl = 0;
+ wqe_ctrls->wd0.cr = 1;
+ wqe_ctrls->wd0.df = 0;
+ wqe_ctrls->wd0.va = 0;
+ wqe_ctrls->wd0.tsl = 0;
+ wqe_ctrls->wd0.cf = 0;
+ wqe_ctrls->wd0.wf = 0;
+ wqe_ctrls->wd0.drv_sl = sizeof(struct spfc_rqe_drv) >> UNF_SHIFT_3;
+ wqe_ctrls->wd0.bdsl = sizeof(struct spfc_constant_sge) >> UNF_SHIFT_3;
+
+ rqe->ctrl_sl.wd0.wqe_msn = pmsn;
+ rqe->ctrl_sl.wd0.dump_wqe_msn = rqe->ctrl_sl.wd0.wqe_msn;
+}
diff --git a/drivers/scsi/spfc/hw/spfc_wqe.h b/drivers/scsi/spfc/hw/spfc_wqe.h
new file mode 100644
index 000000000000..ec6d7bbdf8f9
--- /dev/null
+++ b/drivers/scsi/spfc/hw/spfc_wqe.h
@@ -0,0 +1,239 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
+
+#ifndef SPFC_WQE_H
+#define SPFC_WQE_H
+
+#include "unf_type.h"
+#include "unf_common.h"
+#include "spfc_hw_wqe.h"
+#include "spfc_parent_context.h"
+
+/* TGT WQE type */
+/* DRV->uCode via Parent SQ */
+#define SPFC_SQE_FCP_TRD SPFC_TASK_T_TREAD
+#define SPFC_SQE_FCP_TWR SPFC_TASK_T_TWRITE
+#define SPFC_SQE_FCP_TRSP SPFC_TASK_T_TRESP
+#define SPFC_SQE_FCP_TACK SPFC_TASK_T_TACK
+#define SPFC_SQE_ELS_CMND SPFC_TASK_T_ELS
+#define SPFC_SQE_ELS_RSP SPFC_TASK_T_ELS_RSP
+#define SPFC_SQE_GS_CMND SPFC_TASK_T_GS
+#define SPFC_SQE_BLS_CMND SPFC_TASK_T_ABTS
+#define SPFC_SQE_FCP_IREAD SPFC_TASK_T_IREAD
+#define SPFC_SQE_FCP_IWRITE SPFC_TASK_T_IWRITE
+#define SPFC_SQE_FCP_ITMF SPFC_TASK_T_ITMF
+#define SPFC_SQE_SESS_RST SPFC_TASK_T_SESS_RESET
+#define SPFC_SQE_FCP_TMF_TRSP SPFC_TASK_T_TMF_RESP
+#define SPFC_SQE_NOP SPFC_TASK_T_NOP
+/* DRV->uCode Via CMDQ */
+#define SPFC_CMDQE_ABTS_RSP SPFC_TASK_T_ABTS_RSP
+#define SPFC_CMDQE_ABORT SPFC_TASK_T_ABORT
+#define SPFC_CMDQE_SESS_DIS SPFC_TASK_T_SESS_DIS
+#define SPFC_CMDQE_SESS_DEL SPFC_TASK_T_SESS_DEL
+
+/* uCode->Drv Via CMD SCQ */
+#define SPFC_SCQE_FCP_TCMND SPFC_TASK_T_RCV_TCMND
+#define SPFC_SCQE_ELS_CMND SPFC_TASK_T_RCV_ELS_CMD
+#define SPFC_SCQE_ABTS_CMD SPFC_TASK_T_RCV_ABTS_CMD
+#define SPFC_SCQE_FCP_IRSP SPFC_TASK_T_IRESP
+#define SPFC_SCQE_FCP_ITMF_RSP SPFC_TASK_T_ITMF_RESP
+
+/* uCode->Drv Via STS SCQ */
+#define SPFC_SCQE_FCP_TSTS SPFC_TASK_T_TSTS
+#define SPFC_SCQE_GS_RSP SPFC_TASK_T_RCV_GS_RSP
+#define SPFC_SCQE_ELS_RSP SPFC_TASK_T_RCV_ELS_RSP
+#define SPFC_SCQE_ABTS_RSP SPFC_TASK_T_RCV_ABTS_RSP
+#define SPFC_SCQE_ELS_RSP_STS SPFC_TASK_T_ELS_RSP_STS
+#define SPFC_SCQE_ABORT_STS SPFC_TASK_T_ABORT_STS
+#define SPFC_SCQE_SESS_EN_STS SPFC_TASK_T_SESS_EN_STS
+#define SPFC_SCQE_SESS_DIS_STS SPFC_TASK_T_SESS_DIS_STS
+#define SPFC_SCQE_SESS_DEL_STS SPFC_TASK_T_SESS_DEL_STS
+#define SPFC_SCQE_SESS_RST_STS SPFC_TASK_T_SESS_RESET_STS
+#define SPFC_SCQE_ITMF_MARKER_STS SPFC_TASK_T_ITMF_MARKER_STS
+#define SPFC_SCQE_ABTS_MARKER_STS SPFC_TASK_T_ABTS_MARKER_STS
+#define SPFC_SCQE_FLUSH_SQ_STS SPFC_TASK_T_FLUSH_SQ_STS
+#define SPFC_SCQE_BUF_CLEAR_STS SPFC_TASK_T_BUFFER_CLEAR_STS
+#define SPFC_SCQE_CLEAR_SRQ_STS SPFC_TASK_T_CLEAR_SRQ_STS
+#define SPFC_SCQE_DIFX_RESULT_STS SPFC_TASK_T_DIFX_RESULT_STS
+#define SPFC_SCQE_XID_FREE_ABORT_STS SPFC_TASK_T_EXCH_ID_FREE_ABORT_STS
+#define SPFC_SCQE_EXCHID_TIMEOUT_STS SPFC_TASK_T_EXCHID_TIMEOUT_STS
+#define SPFC_SQE_NOP_STS SPFC_TASK_T_NOP_STS
+
+#define SPFC_LOW_32_BITS(__addr) ((u32)((u64)(__addr) & 0xffffffff))
+#define SPFC_HIGH_32_BITS(__addr) ((u32)(((u64)(__addr) >> 32) & 0xffffffff))
+
+/* Error Code from SCQ */
+#define SPFC_COMPLETION_STATUS_SUCCESS FC_CQE_COMPLETED
+#define SPFC_COMPLETION_STATUS_ABORTED_SETUP_FAIL FC_IMMI_CMDPKT_SETUP_FAIL
+
+#define SPFC_COMPLETION_STATUS_TIMEOUT FC_ERROR_CODE_E_D_TIMER_EXPIRE
+#define SPFC_COMPLETION_STATUS_DIF_ERROR FC_ERROR_CODE_DATA_DIFX_FAILED
+#define SPFC_COMPLETION_STATUS_DATA_OOO FC_ERROR_CODE_DATA_OOO_RO
+#define SPFC_COMPLETION_STATUS_DATA_OVERFLOW \
+ FC_ERROR_CODE_DATA_EXCEEDS_DATA2TRNS
+
+#define SPFC_SCQE_INVALID_CONN_ID (0xffff)
+#define SPFC_GET_SCQE_TYPE(scqe) ((scqe)->common.ch.wd0.task_type)
+#define SPFC_GET_SCQE_STATUS(scqe) ((scqe)->common.ch.wd0.err_code)
+#define SPFC_GET_SCQE_REMAIN_CNT(scqe) ((scqe)->common.ch.wd0.cqe_remain_cnt)
+#define SPFC_GET_SCQE_CONN_ID(scqe) ((scqe)->common.conn_id)
+#define SPFC_GET_SCQE_SQN(scqe) ((scqe)->common.ch.wd0.sqn)
+#define SPFC_GET_WQE_TYPE(wqe) ((wqe)->ts_sl.task_type)
+
+#define SPFC_WQE_IS_IO(wqe) \
+ ((SPFC_GET_WQE_TYPE(wqe) != SPFC_SQE_SESS_RST) && \
+ (SPFC_GET_WQE_TYPE(wqe) != SPFC_SQE_NOP))
+#define SPFC_SCQE_HAS_ERRCODE(scqe) \
+ (SPFC_GET_SCQE_STATUS(scqe) != SPFC_COMPLETION_STATUS_SUCCESS)
+#define SPFC_SCQE_ERR_TO_CM(scqe) \
+ (SPFC_GET_SCQE_STATUS(scqe) != FC_ELS_GS_RSP_EXCH_CHECK_FAIL)
+#define SPFC_SCQE_EXCH_ABORTED(scqe) \
+ ((SPFC_GET_SCQE_STATUS(scqe) >= \
+ FC_CQE_BUFFER_CLEAR_IO_COMPLETED) && \
+ (SPFC_GET_SCQE_STATUS(scqe) <= FC_CQE_WQE_FLUSH_IO_COMPLETED))
+#define SPFC_SCQE_CONN_ID_VALID(scqe) \
+ (SPFC_GET_SCQE_CONN_ID(scqe) != SPFC_SCQE_INVALID_CONN_ID)
+
+/*
+ * checksum error bitmap define
+ */
+#define NIC_RX_CSUM_HW_BYPASS_ERR (1)
+#define NIC_RX_CSUM_IP_CSUM_ERR (1 << 1)
+#define NIC_RX_CSUM_TCP_CSUM_ERR (1 << 2)
+#define NIC_RX_CSUM_UDP_CSUM_ERR (1 << 3)
+#define NIC_RX_CSUM_SCTP_CRC_ERR (1 << 4)
+
+#define SPFC_WQE_SECTION_CHUNK_SIZE 8 /* 8 bytes' chunk */
+#define SPFC_T_RESP_WQE_CTR_TSL_SIZE 15 /* 8 bytes' chunk */
+#define SPFC_T_RD_WR_WQE_CTR_TSL_SIZE 9 /* 8 bytes' chunk */
+#define SPFC_T_RD_WR_WQE_CTR_BDSL_SIZE 4 /* 8 bytes' chunk */
+#define SPFC_T_RD_WR_WQE_CTR_CTRLSL_SIZE 1 /* 8 bytes' chunk */
+
+#define SPFC_WQE_MAX_ESGE_NUM 3 /* 3 ESGE In Extended wqe */
+#define SPFC_WQE_SGE_ENTRY_NUM 2 /* BD SGE and DIF SGE count */
+#define SPFC_WQE_SGE_DIF_ENTRY_NUM 1 /* DIF SGE count */
+#define SPFC_WQE_SGE_LAST_FLAG 1
+#define SPFC_WQE_SGE_NOT_LAST_FLAG 0
+#define SPFC_WQE_SGE_EXTEND_FLAG 1
+#define SPFC_WQE_SGE_NOT_EXTEND_FLAG 0
+
+#define SPFC_FCP_TMF_PORT_RESET (0)
+#define SPFC_FCP_TMF_LUN_RESET (1)
+#define SPFC_FCP_TMF_TGT_RESET (2)
+#define SPFC_FCP_TMF_RSVD (3)
+
+#define SPFC_ADJUST_DATA(old_va, new_va) \
+ { \
+ (old_va) = new_va; \
+ }
+
+#define SPFC_GET_RESET_TYPE(tmf_flag, reset_flag) \
+ { \
+ switch (tmf_flag) { \
+ case UNF_FCP_TM_ABORT_TASK_SET: \
+ case UNF_FCP_TM_LOGICAL_UNIT_RESET: \
+ (reset_flag) = SPFC_FCP_TMF_LUN_RESET; \
+ break; \
+ case UNF_FCP_TM_TARGET_RESET: \
+ (reset_flag) = SPFC_FCP_TMF_TGT_RESET; \
+ break; \
+ case UNF_FCP_TM_CLEAR_TASK_SET: \
+ (reset_flag) = SPFC_FCP_TMF_PORT_RESET; \
+ break; \
+ default: \
+ (reset_flag) = SPFC_FCP_TMF_RSVD; \
+ } \
+ }
+
+/* Link WQE structure */
+struct spfc_linkwqe {
+ union {
+ struct {
+ u32 rsv1 : 14;
+ u32 wf : 1;
+ u32 rsv2 : 14;
+ u32 ctrlsl : 2;
+ u32 o : 1;
+ } wd0;
+
+ u32 val_wd0;
+ };
+
+ union {
+ struct {
+ u32 msn : 16;
+ u32 dump_msn : 15;
+ u32 lp : 1; /* lp means whether O bit is overturn */
+ } wd1;
+
+ u32 val_wd1;
+ };
+
+ u32 next_page_addr_hi;
+ u32 next_page_addr_lo;
+};
+
+/* Session Enable */
+struct spfc_host_keys {
+ struct {
+ u32 smac1 : 8;
+ u32 smac0 : 8;
+ u32 rsv : 16;
+ } wd0;
+
+ u8 smac[ARRAY_INDEX_4];
+
+ u8 dmac[ARRAY_INDEX_4];
+ struct {
+ u8 sid_1;
+ u8 sid_2;
+ u8 dmac_rvd[ARRAY_INDEX_2];
+ } wd3;
+ struct {
+ u8 did_0;
+ u8 did_1;
+ u8 did_2;
+ u8 sid_0;
+ } wd4;
+
+ struct {
+ u32 port_id : 3;
+ u32 host_id : 2;
+ u32 rsvd : 27;
+ } wd5;
+ u32 rsvd;
+};
+
+/* Parent SQ WQE Related function */
+void spfc_build_service_wqe_ctrl_section(struct spfc_wqe_ctrl *wqe_cs, u32 ts_size,
+ u32 bdsl);
+void spfc_build_service_wqe_ts_common(struct spfc_sqe_ts *sqe_ts, u32 rport_index,
+ u16 local_xid, u16 remote_xid,
+ u16 data_len);
+void spfc_build_els_gs_wqe_sge(struct spfc_sqe *sqe, void *buf_addr, u64 phy_addr,
+ u32 buf_len, u32 xid, void *handle);
+void spfc_build_els_wqe_ts_req(struct spfc_sqe *sqe, void *info, u32 scqn,
+ void *frame_pld, struct unf_frame_pkg *pkg);
+void spfc_build_els_wqe_ts_rsp(struct spfc_sqe *sqe, void *info,
+ struct unf_frame_pkg *pkg, void *frame_pld,
+ u16 type, u16 cmnd);
+void spfc_build_bls_wqe_ts_req(struct spfc_sqe *sqe, struct unf_frame_pkg *pkg,
+ void *handle);
+void spfc_build_trd_twr_wqe_ctrls(struct unf_frame_pkg *pkg, struct spfc_sqe *sqe);
+void spfc_build_wqe_owner_pmsn(struct spfc_sqe *io_sqe, u16 owner, u16 pmsn);
+void spfc_convert_parent_wqe_to_big_endian(struct spfc_sqe *sqe);
+void spfc_build_icmnd_wqe_ctrls(struct unf_frame_pkg *pkg, struct spfc_sqe *sqe);
+void spfc_build_icmnd_wqe_ts(void *handle, struct unf_frame_pkg *pkg,
+ struct spfc_sqe_ts *sqe_ts, union spfc_sqe_ts_ex *sqe_tsex);
+void spfc_build_icmnd_wqe_ts_header(struct unf_frame_pkg *pkg, struct spfc_sqe *sqe,
+ u8 task_type, u16 exi_base, u8 port_idx);
+
+void spfc_build_cmdqe_common(union spfc_cmdqe *cmd_qe, enum spfc_task_type task_type,
+ u16 rxid);
+void spfc_build_srq_wqe_ctrls(struct spfc_rqe *rqe, u16 owner, u16 pmsn);
+void spfc_build_common_wqe_ctrls(struct spfc_wqe_ctrl *ctrl_sl, u8 task_len);
+void spfc_build_tmf_rsp_wqe_ts_header(struct unf_frame_pkg *pkg,
+ struct spfc_sqe_tmf_rsp *sqe, u16 exi_base,
+ u32 scqn);
+
+#endif
--
2.30.0
[PATCH openEuler-21.09 2/2] tools: add a tool to calculate the CPU utilization rate
by Hongyu Li 13 Oct '21
13 Oct '21
openEuler inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4CIJQ
CVE: NA
----------------------------------------------------------------------
This tool can help calculate the CPU utilization rate with higher precision.
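For reference (not part of this patch): assuming the kernel was built with CONFIG_PROC_IDLE=y (patch 1/2) so that /proc/stat2 exists, the tool can be built and started from the source tree with, for example,
  make -C tools/accounting idle_cal && ./tools/accounting/idle_cal
It then samples /proc/stat2, sleeps one second, samples it again, clears the terminal and prints the aggregate and per-CPU figures, repeating once per second.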
Signed-off-by: Hongyu Li <543306408(a)qq.com>
---
tools/accounting/Makefile | 2 +-
tools/accounting/idle_cal.c | 91 +++++++++++++++++++++++++++++++++++++
2 files changed, 92 insertions(+), 1 deletion(-)
create mode 100644 tools/accounting/idle_cal.c
diff --git a/tools/accounting/Makefile b/tools/accounting/Makefile
index 03687f19cbb1..d14151e28173 100644
--- a/tools/accounting/Makefile
+++ b/tools/accounting/Makefile
@@ -2,7 +2,7 @@
CC := $(CROSS_COMPILE)gcc
CFLAGS := -I../../usr/include
-PROGS := getdelays
+PROGS := getdelays idle_cal
all: $(PROGS)
diff --git a/tools/accounting/idle_cal.c b/tools/accounting/idle_cal.c
new file mode 100644
index 000000000000..6d621b37c70e
--- /dev/null
+++ b/tools/accounting/idle_cal.c
@@ -0,0 +1,91 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * idle_cal.c
+ *
+ * Copyright (C) 2021
+ *
+ * CPU idle time accounting
+ */
+
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <fcntl.h>
+#include <string.h>
+#include <unistd.h>
+#include <time.h>
+#include <limits.h>
+#include <sys/time.h>
+
+#define BUFFSIZE 4096
+#define HZ 100
+#define FILE_NAME "/proc/stat2"
+
+struct cpu_info {
+ char name[BUFFSIZE];
+ long long value[1];
+};
+
+int main(void)
+{
+ int cpu_number = sysconf(_SC_NPROCESSORS_ONLN);
+ struct cpu_info *cpus = (struct cpu_info *)malloc(sizeof(struct cpu_info)*(cpu_number+1));
+ struct cpu_info *cpus_2 = (struct cpu_info *)malloc(sizeof(struct cpu_info)*(cpu_number+1));
+
+ char buf[BUFFSIZE];
+ long long sub;
+ double value;
+
+ while (1) {
+ FILE *fp = fopen(FILE_NAME, "r");
+ int i = 0;
+ struct timeval start, end;
+
+
+ while (i < cpu_number+1) {
+ int n = fscanf(fp, "%s %lld\n", cpus[i].name, &cpus[i].value[0]);
+
+ if (n < 0) {
+ printf("wrong");
+ return -1;
+ }
+ i += 1;
+ }
+
+ gettimeofday(&start, NULL);
+ fflush(fp);
+ fclose(fp);
+ i = 0;
+
+ sleep(1);
+
+ FILE *fp_2 = fopen(FILE_NAME, "r");
+
+ while (i < cpu_number+1) {
+ int n = fscanf(fp_2, "%s %lld\n", cpus_2[i].name, &cpus_2[i].value[0]);
+
+ if (n < 0) {
+ printf("wrong");
+ return -1;
+ }
+ i += 1;
+ }
+
+ gettimeofday(&end, NULL);
+ fflush(fp_2);
+ fclose(fp_2);
+
+ sub = end.tv_sec-start.tv_sec;
+ value = sub*1000000.0+end.tv_usec-start.tv_usec;
+ system("reset");
+ printf("CPU idle rate %f\n", 1000000/HZ*(cpus_2[0].value[0]-cpus[0].value[0])
+ /value);
+
+ for (int i = 1; i < cpu_number+1; i++) {
+ printf("CPU%d idle rate %f\n", i-1, 1-1000000/HZ
+ *(cpus_2[i].value[0]-cpus[i].value[0])/value);
+ }
+ }
+ return 0;
+}
+
--
2.17.1
openEuler inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4CIJQ
CVE: NA
----------------------------------------------------------------------
The default way of calculating CPU utilization is to check which task is
running at each tick, so usage is only sampled at tick granularity. This
leads to inaccurate CPU utilization results.
This problem can be solved by counting the idle time via the scheduler
rather than the tick interval: record a timestamp when a CPU starts
executing its idle task and accumulate the elapsed time when it leaves
the idle task.
The idle time of each CPU is exported through the /proc/stat2 file. This
gives higher precision in accounting the CPU idle time than /proc/stat.
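For illustration only (not part of this patch), here is a minimal user-space sketch of how the new file could be consumed. It assumes the layout produced above (an aggregate "cpu" line followed by one "cpuN" line per online CPU, values in USER_HZ clock ticks via nsec_to_clock_t()), that USER_HZ is 100, and the helper name read_cpu0_idle() is made up for the example:

  #include <stdio.h>
  #include <unistd.h>

  /* Hypothetical helper: return cpu0's idle tick counter from /proc/stat2. */
  static long long read_cpu0_idle(void)
  {
          FILE *fp = fopen("/proc/stat2", "r");
          long long total = 0, cpu0 = -1;

          if (!fp)
                  return -1;
          /* line 1: "cpu <total>", line 2: "cpu0 <idle>" */
          if (fscanf(fp, "cpu %lld cpu0 %lld", &total, &cpu0) != 2)
                  cpu0 = -1;
          fclose(fp);
          return cpu0;
  }

  int main(void)
  {
          long long t1 = read_cpu0_idle();
          long long t2;

          sleep(1);
          t2 = read_cpu0_idle();

          if (t1 >= 0 && t2 >= 0)
                  /* 100 idle ticks correspond to 1 s of idle time when USER_HZ == 100 */
                  printf("cpu0 utilization ~= %.1f%%\n",
                         100.0 * (1.0 - (double)(t2 - t1) / 100.0));
          return 0;
  }

The accounting can also be switched off at boot with proc_idle=false on the kernel command line; this is handled by the __setup() hook added to kernel/sched/idle.c below.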
Signed-off-by: Hongyu Li <543306408(a)qq.com>
---
fs/proc/Kconfig | 7 ++++
fs/proc/Makefile | 1 +
fs/proc/stat2.c | 91 ++++++++++++++++++++++++++++++++++++++++++
kernel/sched/cputime.c | 34 ++++++++++++++++
kernel/sched/idle.c | 38 ++++++++++++++++++
5 files changed, 171 insertions(+)
create mode 100644 fs/proc/stat2.c
diff --git a/fs/proc/Kconfig b/fs/proc/Kconfig
index c930001056f9..33588a37579e 100644
--- a/fs/proc/Kconfig
+++ b/fs/proc/Kconfig
@@ -107,3 +107,10 @@ config PROC_PID_ARCH_STATUS
config PROC_CPU_RESCTRL
def_bool n
depends on PROC_FS
+
+config PROC_IDLE
+ bool "include /proc/stat2 file"
+ depends on PROC_FS
+ default y
+ help
+ Provide the CPU idle time in the /proc/stat2 file.
diff --git a/fs/proc/Makefile b/fs/proc/Makefile
index 8704d41dd67c..b0d5f2b347d7 100644
--- a/fs/proc/Makefile
+++ b/fs/proc/Makefile
@@ -34,5 +34,6 @@ proc-$(CONFIG_PROC_VMCORE) += vmcore.o
proc-$(CONFIG_PRINTK) += kmsg.o
proc-$(CONFIG_PROC_PAGE_MONITOR) += page.o
proc-$(CONFIG_BOOT_CONFIG) += bootconfig.o
+proc-$(CONFIG_PROC_IDLE) += stat2.o
obj-$(CONFIG_ETMEM_SCAN) += etmem_scan.o
obj-$(CONFIG_ETMEM_SWAP) += etmem_swap.o
diff --git a/fs/proc/stat2.c b/fs/proc/stat2.c
new file mode 100644
index 000000000000..6036a946c71d
--- /dev/null
+++ b/fs/proc/stat2.c
@@ -0,0 +1,91 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * linux/fs/proc/stat2.c
+ *
+ * Copyright (C) 2007
+ *
+ * CPU idle time accounting
+ */
+
+#include <linux/cpumask.h>
+#include <linux/device.h>
+#include <linux/fs.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/kernel_stat.h>
+#include <linux/module.h>
+#include <linux/proc_fs.h>
+#include <linux/sched.h>
+#include <linux/sched/stat.h>
+#include <linux/seq_file.h>
+#include <linux/slab.h>
+#include <linux/time.h>
+#include <linux/irqnr.h>
+#include <linux/sched/cputime.h>
+#include <linux/tick.h>
+
+#ifdef CONFIG_PROC_IDLE
+
+#define PROC_NAME "stat2"
+
+extern u64 cal_idle_sum_exec_runtime(int cpu);
+
+static u64 get_idle_sum_exec_runtime(int cpu)
+{
+ u64 idle = cal_idle_sum_exec_runtime(cpu);
+
+ return idle;
+}
+
+static int show_idle(struct seq_file *p, void *v)
+{
+ int i;
+ u64 idle;
+
+ idle = 0;
+
+ for_each_possible_cpu(i) {
+
+ idle += get_idle_sum_exec_runtime(i);
+
+ }
+
+ seq_put_decimal_ull(p, "cpu ", nsec_to_clock_t(idle));
+ seq_putc(p, '\n');
+
+ for_each_online_cpu(i) {
+
+ idle = get_idle_sum_exec_runtime(i);
+
+ seq_printf(p, "cpu%d", i);
+ seq_put_decimal_ull(p, " ", nsec_to_clock_t(idle));
+ seq_putc(p, '\n');
+ }
+
+ return 0;
+}
+
+static int idle_open(struct inode *inode, struct file *file)
+{
+ unsigned int size = 32 + 32 * num_online_cpus();
+
+ return single_open_size(file, show_idle, NULL, size);
+}
+
+static struct proc_ops idle_procs_ops = {
+ .proc_open = idle_open,
+ .proc_read_iter = seq_read_iter,
+ .proc_lseek = seq_lseek,
+ .proc_release = single_release,
+};
+
+static int __init kernel_module_init(void)
+{
+ proc_create(PROC_NAME, 0, NULL, &idle_procs_ops);
+ return 0;
+}
+
+fs_initcall(kernel_module_init);
+
+#endif /*CONFIG_PROC_IDLE*/
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 5a55d2300452..25218a8f822f 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -19,6 +19,8 @@
*/
DEFINE_PER_CPU(struct irqtime, cpu_irqtime);
+extern struct static_key_true proc_idle;
+
static int sched_clock_irqtime;
void enable_sched_clock_irqtime(void)
@@ -1078,3 +1080,35 @@ void kcpustat_cpu_fetch(struct kernel_cpustat *dst, int cpu)
EXPORT_SYMBOL_GPL(kcpustat_cpu_fetch);
#endif /* CONFIG_VIRT_CPU_ACCOUNTING_GEN */
+
+
+#ifdef CONFIG_PROC_IDLE
+
+
+u64 cal_idle_sum_exec_runtime(int cpu)
+{
+ struct rq *rq = cpu_rq(cpu);
+ struct sched_entity *idle_se = &rq->idle->se;
+ u64 idle = idle_se->sum_exec_runtime;
+
+ if (!static_branch_likely(&proc_idle))
+ return 0ULL;
+
+ if (rq->curr == rq->idle) {
+ u64 now = sched_clock();
+ u64 delta_exec;
+
+ delta_exec = now - idle_se->exec_start;
+ if (unlikely((s64)delta_exec <= 0))
+ return idle;
+
+ schedstat_set(idle_se->statistics.exec_max,
+ max(delta_exec, idle_se->statistics.exec_max));
+
+ idle += delta_exec;
+ }
+
+ return idle;
+}
+
+#endif /* CONFIG_PROC_IDLE */
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 36b545f17206..3714a1c0d57b 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -10,6 +10,8 @@
#include <trace/events/power.h>
+DEFINE_STATIC_KEY_TRUE(proc_idle);
+
/* Linker adds these: start and end of __cpuidle functions */
extern char __cpuidle_text_start[], __cpuidle_text_end[];
@@ -424,8 +426,35 @@ static void check_preempt_curr_idle(struct rq *rq, struct task_struct *p, int fl
static void put_prev_task_idle(struct rq *rq, struct task_struct *prev)
{
+#ifdef CONFIG_PROC_IDLE
+ struct sched_entity *idle_se = &rq->idle->se;
+ u64 now, delta_exec;
+
+ if (!static_branch_likely(&proc_idle))
+ return;
+
+ now = sched_clock();
+ delta_exec = now - idle_se->exec_start;
+ if (unlikely((s64)delta_exec <= 0))
+ return;
+
+ schedstat_set(idle_se->statistics.exec_max,
+ max(delta_exec, idle_se->statistics.exec_max));
+
+ idle_se->sum_exec_runtime += delta_exec;
+#endif
}
+#ifdef CONFIG_PROC_IDLE
+static int __init init_proc_idle(char *str)
+{
+ if (!strcmp(str, "false"))
+ static_branch_disable(&proc_idle);
+
+ return 1;
+}
+__setup("proc_idle=", init_proc_idle);
+#endif
static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
{
update_idle_core(rq);
@@ -436,6 +465,15 @@ struct task_struct *pick_next_task_idle(struct rq *rq)
{
struct task_struct *next = rq->idle;
+#ifdef CONFIG_PROC_IDLE
+ if (static_branch_likely(&proc_idle)) {
+ struct sched_entity *idle_se = &rq->idle->se;
+ u64 now = sched_clock();
+
+ idle_se->exec_start = now;
+ }
+#endif
+
set_next_task_idle(rq, next, true);
return next;
--
2.17.1
[PATCH openEuler-21.09 0/2] Improve the precision of accounting CPU utilization rate
by Hongyu Li 13 Oct '21
The current way of calculating the CPU utilization rate is not accurate.
The accounting system only works at the granularity of the tick interval;
however, a process can give up the CPU before the tick ends.
This can be fixed by counting the idle time via the scheduler. We can
use the sum_exec_runtime of the idle task of each CPU to calculate the
CPU utilization rate. The idle time of each CPU is exposed in the
/proc/stat2 file. An example of using this file is also attached.
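For illustration, a minimal user-space sketch of that calculation (this is
not the attached idle_cal.c; it assumes the "cpu0" line of /proc/stat2
reports idle time in USER_HZ clock ticks, as produced by nsec_to_clock_t()
in patch 1, and samples over a fixed one-second interval):

#include <stdio.h>
#include <unistd.h>

/* Read the idle tick count of cpu0; the first line of /proc/stat2 is the
 * system-wide "cpu" total, the second line is "cpu0". */
static unsigned long long read_cpu0_idle(void)
{
	unsigned long long idle = 0;
	char name[16];
	FILE *f = fopen("/proc/stat2", "r");

	if (!f)
		return 0;
	fscanf(f, "%15s %llu", name, &idle);	/* "cpu"  total   */
	fscanf(f, "%15s %llu", name, &idle);	/* "cpu0" per-cpu */
	fclose(f);
	return idle;
}

int main(void)
{
	long hz = sysconf(_SC_CLK_TCK);		/* USER_HZ */
	unsigned long long i1 = read_cpu0_idle(), i2;

	sleep(1);
	i2 = read_cpu0_idle();
	/* utilization = 1 - idle_time / wall_time over the interval */
	printf("cpu0 utilization: %.2f%%\n",
	       100.0 * (1.0 - (double)(i2 - i1) / hz));
	return 0;
}

Because the idle task's sum_exec_runtime keeps accumulating while the CPU
is idle, two samples of the file are enough; no per-tick bookkeeping is
needed on the reader's side.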
Hongyu Li (2):
eulerfs: add the /proc/stat2 file
tools: add a tool to calculate the CPU utilization rate
fs/proc/Kconfig | 7 +++
fs/proc/Makefile | 1 +
fs/proc/stat2.c | 91 +++++++++++++++++++++++++++++++++++++
kernel/sched/cputime.c | 34 ++++++++++++++
kernel/sched/idle.c | 38 ++++++++++++++++
tools/accounting/Makefile | 2 +-
tools/accounting/idle_cal.c | 91 +++++++++++++++++++++++++++++++++++++
7 files changed, 263 insertions(+), 1 deletion(-)
create mode 100644 fs/proc/stat2.c
create mode 100644 tools/accounting/idle_cal.c
--
2.17.1
13 Oct '21
From: Yanling Song <songyl(a)ramaxel.com>
Ramaxel inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I494HF
CVE: NA
This initial commit contains Ramaxel's spraid module.
The spraid controller has two modes: HBA mode and RAID mode. RAID mode
supports RAID levels 0/1/5/6/10/50/60.
The spraid driver works under the SCSI subsystem and translates SCSI
commands for the Ramaxel RAID chip.
Signed-off-by: Yanling Song <songyl(a)ramaxel.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
drivers/scsi/Kconfig | 1 +
drivers/scsi/Makefile | 1 +
drivers/scsi/spraid/Kconfig | 11 +
drivers/scsi/spraid/Makefile | 7 +
drivers/scsi/spraid/spraid.h | 655 ++++++
drivers/scsi/spraid/spraid_main.c | 3616 +++++++++++++++++++++++++++++
6 files changed, 4291 insertions(+)
create mode 100644 drivers/scsi/spraid/Kconfig
create mode 100644 drivers/scsi/spraid/Makefile
create mode 100644 drivers/scsi/spraid/spraid.h
create mode 100644 drivers/scsi/spraid/spraid_main.c
diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index 871f8ea7b928..0fbe4edeccd0 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -484,6 +484,7 @@ source "drivers/scsi/megaraid/Kconfig.megaraid"
source "drivers/scsi/mpt3sas/Kconfig"
source "drivers/scsi/smartpqi/Kconfig"
source "drivers/scsi/ufs/Kconfig"
+source "drivers/scsi/spraid/Kconfig"
config SCSI_HPTIOP
tristate "HighPoint RocketRAID 3xxx/4xxx Controller support"
diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
index ce61ba07fadd..78a3c832394c 100644
--- a/drivers/scsi/Makefile
+++ b/drivers/scsi/Makefile
@@ -97,6 +97,7 @@ obj-$(CONFIG_SCSI_ZALON) += zalon7xx.o
obj-$(CONFIG_SCSI_DC395x) += dc395x.o
obj-$(CONFIG_SCSI_AM53C974) += esp_scsi.o am53c974.o
obj-$(CONFIG_CXLFLASH) += cxlflash/
+obj-$(CONFIG_RAMAXEL_SPRAID) += spraid/
obj-$(CONFIG_MEGARAID_LEGACY) += megaraid.o
obj-$(CONFIG_MEGARAID_NEWGEN) += megaraid/
obj-$(CONFIG_MEGARAID_SAS) += megaraid/
diff --git a/drivers/scsi/spraid/Kconfig b/drivers/scsi/spraid/Kconfig
new file mode 100644
index 000000000000..83962efaab07
--- /dev/null
+++ b/drivers/scsi/spraid/Kconfig
@@ -0,0 +1,11 @@
+#
+# Ramaxel driver configuration
+#
+
+config RAMAXEL_SPRAID
+ tristate "Ramaxel spraid Adapter"
+ depends on PCI && SCSI
+ depends on ARM64 || X86_64
+ default m
+ help
+ This driver supports Ramaxel's spraid controllers.
diff --git a/drivers/scsi/spraid/Makefile b/drivers/scsi/spraid/Makefile
new file mode 100644
index 000000000000..aadc2ffd37eb
--- /dev/null
+++ b/drivers/scsi/spraid/Makefile
@@ -0,0 +1,7 @@
+#
+# Makefile for the Ramaxel device drivers.
+#
+
+obj-$(CONFIG_RAMAXEL_SPRAID) += spraid.o
+
+spraid-objs := spraid_main.o
\ No newline at end of file
diff --git a/drivers/scsi/spraid/spraid.h b/drivers/scsi/spraid/spraid.h
new file mode 100644
index 000000000000..da46d8e1b4b6
--- /dev/null
+++ b/drivers/scsi/spraid/spraid.h
@@ -0,0 +1,655 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __SPRAID_H_
+#define __SPRAID_H_
+
+#define SPRAID_CAP_MQES(cap) ((cap) & 0xffff)
+#define SPRAID_CAP_STRIDE(cap) (((cap) >> 32) & 0xf)
+#define SPRAID_CAP_MPSMIN(cap) (((cap) >> 48) & 0xf)
+#define SPRAID_CAP_MPSMAX(cap) (((cap) >> 52) & 0xf)
+#define SPRAID_CAP_TIMEOUT(cap) (((cap) >> 24) & 0xff)
+#define SPRAID_CAP_DMAMASK(cap) (((cap) >> 37) & 0xff)
+
+#define SPRAID_DEFAULT_MAX_CHANNEL 4
+#define SPRAID_DEFAULT_MAX_ID 240
+#define SPRAID_DEFAULT_MAX_LUN_PER_HOST 8
+#define MAX_SECTORS 2048
+
+#define IO_SQE_SIZE sizeof(struct spraid_ioq_command)
+#define ADMIN_SQE_SIZE sizeof(struct spraid_admin_command)
+#define SQE_SIZE(qid) (((qid) > 0) ? IO_SQE_SIZE : ADMIN_SQE_SIZE)
+#define CQ_SIZE(depth) ((depth) * sizeof(struct spraid_completion))
+#define SQ_SIZE(qid, depth) ((depth) * SQE_SIZE(qid))
+
+#define SENSE_SIZE(depth) ((depth) * SCSI_SENSE_BUFFERSIZE)
+
+#define SPRAID_AQ_DEPTH 128
+#define SPRAID_NR_AEN_COMMANDS 1
+#define SPRAID_AQ_BLK_MQ_DEPTH (SPRAID_AQ_DEPTH - SPRAID_NR_AEN_COMMANDS)
+#define SPRAID_AQ_MQ_TAG_DEPTH (SPRAID_AQ_BLK_MQ_DEPTH - 1)
+
+#define SPRAID_ADMIN_QUEUE_NUM 1
+#define SPRAID_PTCMDS_PERQ 1
+#define SPRAID_IO_BLK_MQ_DEPTH (hdev->shost->can_queue)
+#define SPRAID_NR_IOQ_PTCMDS (SPRAID_PTCMDS_PERQ * hdev->shost->nr_hw_queues)
+
+#define FUA_MASK 0x08
+#define SPRAID_MINORS BIT(MINORBITS)
+
+#define COMMAND_IS_WRITE(cmd) ((cmd)->common.opcode & 1)
+
+#define SPRAID_IO_IOSQES 7
+#define SPRAID_IO_IOCQES 4
+#define PRP_ENTRY_SIZE 8
+
+#define SMALL_POOL_SIZE 256
+#define MAX_SMALL_POOL_NUM 16
+#define MAX_CMD_PER_DEV 32
+#define MAX_CDB_LEN 32
+
+#define SPRAID_UP_TO_MULTY4(x) (((x) + 4) & (~0x03))
+
+#define CQE_STATUS_SUCCESS (0x0)
+
+#define PCI_VENDOR_ID_RAMAXEL_LOGIC 0x1E81
+
+#define SPRAID_SERVER_DEVICE_HAB_DID 0x2100
+#define SPRAID_SERVER_DEVICE_RAID_DID 0x2200
+
+#define IO_6_DEFAULT_TX_LEN 256
+
+#define SPRAID_INT_PAGES 2
+#define SPRAID_INT_BYTES(hdev) (SPRAID_INT_PAGES * (hdev)->page_size)
+
+enum {
+ SPRAID_REQ_CANCELLED = (1 << 0),
+ SPRAID_REQ_USERCMD = (1 << 1),
+};
+
+enum {
+ SPRAID_SC_SUCCESS = 0x0,
+ SPRAID_SC_INVALID_OPCODE = 0x1,
+ SPRAID_SC_INVALID_FIELD = 0x2,
+
+ SPRAID_SC_ABORT_LIMIT = 0x103,
+ SPRAID_SC_ABORT_MISSING = 0x104,
+ SPRAID_SC_ASYNC_LIMIT = 0x105,
+
+ SPRAID_SC_DNR = 0x4000,
+};
+
+enum {
+ SPRAID_REG_CAP = 0x0000,
+ SPRAID_REG_CC = 0x0014,
+ SPRAID_REG_CSTS = 0x001c,
+ SPRAID_REG_AQA = 0x0024,
+ SPRAID_REG_ASQ = 0x0028,
+ SPRAID_REG_ACQ = 0x0030,
+ SPRAID_REG_DBS = 0x1000,
+};
+
+enum {
+ SPRAID_CC_ENABLE = 1 << 0,
+ SPRAID_CC_CSS_NVM = 0 << 4,
+ SPRAID_CC_MPS_SHIFT = 7,
+ SPRAID_CC_AMS_SHIFT = 11,
+ SPRAID_CC_SHN_SHIFT = 14,
+ SPRAID_CC_IOSQES_SHIFT = 16,
+ SPRAID_CC_IOCQES_SHIFT = 20,
+ SPRAID_CC_AMS_RR = 0 << SPRAID_CC_AMS_SHIFT,
+ SPRAID_CC_SHN_NONE = 0 << SPRAID_CC_SHN_SHIFT,
+ SPRAID_CC_IOSQES = SPRAID_IO_IOSQES << SPRAID_CC_IOSQES_SHIFT,
+ SPRAID_CC_IOCQES = SPRAID_IO_IOCQES << SPRAID_CC_IOCQES_SHIFT,
+ SPRAID_CC_SHN_NORMAL = 1 << SPRAID_CC_SHN_SHIFT,
+ SPRAID_CC_SHN_MASK = 3 << SPRAID_CC_SHN_SHIFT,
+ SPRAID_CSTS_CFS_SHIFT = 1,
+ SPRAID_CSTS_SHST_SHIFT = 2,
+ SPRAID_CSTS_PP_SHIFT = 5,
+ SPRAID_CSTS_RDY = 1 << 0,
+ SPRAID_CSTS_SHST_CMPLT = 2 << 2,
+ SPRAID_CSTS_SHST_MASK = 3 << 2,
+ SPRAID_CSTS_CFS_MASK = 1 << SPRAID_CSTS_CFS_SHIFT,
+ SPRAID_CSTS_PP_MASK = 1 << SPRAID_CSTS_PP_SHIFT,
+};
+
+enum {
+ SPRAID_ADMIN_DELETE_SQ = 0x00,
+ SPRAID_ADMIN_CREATE_SQ = 0x01,
+ SPRAID_ADMIN_DELETE_CQ = 0x04,
+ SPRAID_ADMIN_CREATE_CQ = 0x05,
+ SPRAID_ADMIN_ABORT_CMD = 0x08,
+ SPRAID_ADMIN_SET_FEATURES = 0x09,
+ SPRAID_ADMIN_ASYNC_EVENT = 0x0c,
+ SPRAID_ADMIN_GET_INFO = 0xc6,
+ SPRAID_ADMIN_RESET = 0xc8,
+};
+
+enum {
+ SPRAID_GET_INFO_CTRL = 0,
+ SPRAID_GET_INFO_DEV_LIST = 1,
+};
+
+enum {
+ SPRAID_RESET_TARGET = 0,
+ SPRAID_RESET_BUS = 1,
+};
+
+enum {
+ SPRAID_AEN_ERROR = 0,
+ SPRAID_AEN_NOTICE = 2,
+ SPRAID_AEN_VS = 7,
+};
+
+enum {
+ SPRAID_AEN_DEV_CHANGED = 0x00,
+ SPRAID_AEN_HOST_PROBING = 0x10,
+};
+
+enum {
+ SPRAID_AEN_TIMESYN = 0x07
+};
+
+enum {
+ SPRAID_CMD_WRITE = 0x01,
+ SPRAID_CMD_READ = 0x02,
+
+ SPRAID_CMD_NONIO_NONE = 0x80,
+ SPRAID_CMD_NONIO_TODEV = 0x81,
+ SPRAID_CMD_NONIO_FROMDEV = 0x82,
+};
+
+enum {
+ SPRAID_QUEUE_PHYS_CONTIG = (1 << 0),
+ SPRAID_CQ_IRQ_ENABLED = (1 << 1),
+
+ SPRAID_FEAT_NUM_QUEUES = 0x07,
+ SPRAID_FEAT_ASYNC_EVENT = 0x0b,
+ SPRAID_FEAT_TIMESTAMP = 0x0e,
+};
+
+enum spraid_state {
+ SPRAID_NEW,
+ SPRAID_LIVE,
+ SPRAID_RESETTING,
+ SPRAID_DELETING,
+ SPRAID_DEAD,
+};
+
+struct spraid_completion {
+ __le32 result;
+ union {
+ struct {
+ __u8 sense_len;
+ __u8 resv[3];
+ };
+ __le32 result1;
+ };
+ __le16 sq_head;
+ __le16 sq_id;
+ __u16 cmd_id;
+ __le16 status;
+};
+
+struct spraid_ctrl_info {
+ __le32 nd;
+ __le16 max_cmds;
+ __le16 max_channel;
+ __le32 max_tgt_id;
+ __le16 max_lun;
+ __le16 max_num_sge;
+ __le16 lun_num_in_boot;
+ __u8 mdts;
+ __u8 acl;
+ __u8 aerl;
+ __u8 card_type;
+ __u16 rsvd;
+ __u32 rtd3e;
+ __u8 sn[32];
+ __u8 fr[16];
+ __u8 rsvd1[4020];
+};
+
+struct spraid_dev {
+ struct pci_dev *pdev;
+ struct device *dev;
+ struct Scsi_Host *shost;
+ struct spraid_queue *queues;
+ struct dma_pool *prp_page_pool;
+ struct dma_pool *prp_small_pool[MAX_SMALL_POOL_NUM];
+ mempool_t *iod_mempool;
+ struct blk_mq_tag_set admin_tagset;
+ struct request_queue *admin_q;
+ void __iomem *bar;
+ u32 max_qid;
+ u32 num_vecs;
+ u32 queue_count;
+ u32 ioq_depth;
+ int db_stride;
+ u32 __iomem *dbs;
+ struct rw_semaphore devices_rwsem;
+ int numa_node;
+ u32 page_size;
+ u32 ctrl_config;
+ u32 online_queues;
+ u64 cap;
+ struct device ctrl_device;
+ struct cdev cdev;
+ int instance;
+ struct spraid_ctrl_info *ctrl_info;
+ struct spraid_dev_info *devices;
+
+ struct spraid_ioq_ptcmd *ioq_ptcmds;
+ struct list_head ioq_pt_list;
+ spinlock_t ioq_pt_lock;
+
+ struct work_struct aen_work;
+ struct work_struct scan_work;
+ struct work_struct timesyn_work;
+ struct work_struct reset_work;
+
+ enum spraid_state state;
+ spinlock_t state_lock;
+};
+
+struct spraid_sgl_desc {
+ __le64 addr;
+ __le32 length;
+ __u8 rsvd[3];
+ __u8 type;
+};
+
+union spraid_data_ptr {
+ struct {
+ __le64 prp1;
+ __le64 prp2;
+ };
+ struct spraid_sgl_desc sgl;
+};
+
+struct spraid_admin_common_command {
+ __u8 opcode;
+ __u8 flags;
+ __u16 command_id;
+ __le32 hdid;
+ __le32 cdw2[4];
+ union spraid_data_ptr dptr;
+ __le32 cdw10;
+ __le32 cdw11;
+ __le32 cdw12;
+ __le32 cdw13;
+ __le32 cdw14;
+ __le32 cdw15;
+};
+
+struct spraid_features {
+ __u8 opcode;
+ __u8 flags;
+ __u16 command_id;
+ __le32 hdid;
+ __u64 rsvd2[2];
+ union spraid_data_ptr dptr;
+ __le32 fid;
+ __le32 dword11;
+ __le32 dword12;
+ __le32 dword13;
+ __le32 dword14;
+ __le32 dword15;
+};
+
+struct spraid_create_cq {
+ __u8 opcode;
+ __u8 flags;
+ __u16 command_id;
+ __u32 rsvd1[5];
+ __le64 prp1;
+ __u64 rsvd8;
+ __le16 cqid;
+ __le16 qsize;
+ __le16 cq_flags;
+ __le16 irq_vector;
+ __u32 rsvd12[4];
+};
+
+struct spraid_create_sq {
+ __u8 opcode;
+ __u8 flags;
+ __u16 command_id;
+ __u32 rsvd1[5];
+ __le64 prp1;
+ __u64 rsvd8;
+ __le16 sqid;
+ __le16 qsize;
+ __le16 sq_flags;
+ __le16 cqid;
+ __u32 rsvd12[4];
+};
+
+struct spraid_delete_queue {
+ __u8 opcode;
+ __u8 flags;
+ __u16 command_id;
+ __u32 rsvd1[9];
+ __le16 qid;
+ __u16 rsvd10;
+ __u32 rsvd11[5];
+};
+
+struct spraid_get_info {
+ __u8 opcode;
+ __u8 flags;
+ __u16 command_id;
+ __le32 hdid;
+ __u32 rsvd2[4];
+ union spraid_data_ptr dptr;
+ __u8 type;
+ __u8 rsvd10[3];
+ __le32 cdw11;
+ __u32 rsvd12[4];
+};
+
+enum {
+ SPRAID_CMD_FLAG_SGL_METABUF = (1 << 6),
+ SPRAID_CMD_FLAG_SGL_METASEG = (1 << 7),
+ SPRAID_CMD_FLAG_SGL_ALL = SPRAID_CMD_FLAG_SGL_METABUF | SPRAID_CMD_FLAG_SGL_METASEG,
+};
+
+enum spraid_cmd_state {
+ SPRAID_CMD_IDLE = 0,
+ SPRAID_CMD_IN_FLIGHT = 1,
+ SPRAID_CMD_COMPLETE = 2,
+ SPRAID_CMD_TIMEOUT = 3,
+ SPRAID_CMD_TMO_COMPLETE = 4,
+};
+
+struct spraid_abort_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __u16 command_id;
+ __le32 hdid;
+ __u64 rsvd2[4];
+ __le16 sqid;
+ __le16 cid;
+ __u32 rsvd11[5];
+};
+
+struct spraid_reset_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __u16 command_id;
+ __le32 hdid;
+ __u64 rsvd2[4];
+ __u8 type;
+ __u8 rsvd10[3];
+ __u32 rsvd11[5];
+};
+
+struct spraid_admin_command {
+ union {
+ struct spraid_admin_common_command common;
+ struct spraid_features features;
+ struct spraid_create_cq create_cq;
+ struct spraid_create_sq create_sq;
+ struct spraid_delete_queue delete_queue;
+ struct spraid_get_info get_info;
+ struct spraid_abort_cmd abort;
+ struct spraid_reset_cmd reset;
+ };
+};
+
+struct spraid_ioq_common_command {
+ __u8 opcode;
+ __u8 flags;
+ __u16 command_id;
+ __le32 hdid;
+ __le16 sense_len;
+ __u8 cdb_len;
+ __u8 rsvd2;
+ __le32 cdw3[3];
+ union spraid_data_ptr dptr;
+ __le32 cdw10[6];
+ __u8 cdb[32];
+ __le64 sense_addr;
+ __le32 cdw26[6];
+};
+
+struct spraid_rw_command {
+ __u8 opcode;
+ __u8 flags;
+ __u16 command_id;
+ __le32 hdid;
+ __le16 sense_len;
+ __u8 cdb_len;
+ __u8 rsvd2;
+ __u32 rsvd3[3];
+ union spraid_data_ptr dptr;
+ __le64 slba;
+ __le16 nlb;
+ __le16 control;
+ __u32 rsvd13[3];
+ __u8 cdb[32];
+ __le64 sense_addr;
+ __u32 rsvd26[6];
+};
+
+struct spraid_scsi_nonio {
+ __u8 opcode;
+ __u8 flags;
+ __u16 command_id;
+ __le32 hdid;
+ __le16 sense_len;
+ __u8 cdb_length;
+ __u8 rsvd2;
+ __u32 rsvd3[3];
+ union spraid_data_ptr dptr;
+ __u32 rsvd10[5];
+ __le32 buffer_len;
+ __u8 cdb[32];
+ __le64 sense_addr;
+ __u32 rsvd26[6];
+};
+
+struct spraid_ioq_command {
+ union {
+ struct spraid_ioq_common_command common;
+ struct spraid_rw_command rw;
+ struct spraid_scsi_nonio scsi_nonio;
+ };
+};
+
+#define SPRAID_IOCTL_RESET_CMD _IOWR('N', 0x80, struct spraid_passthru_common_cmd)
+#define SPRAID_IOCTL_ADMIN_CMD _IOWR('N', 0x41, struct spraid_passthru_common_cmd)
+
+struct spraid_passthru_common_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __u16 rsvd0;
+ __u32 nsid;
+ union {
+ struct {
+ __u16 subopcode;
+ __u16 rsvd1;
+ } info_0;
+ __u32 cdw2;
+ };
+ union {
+ struct {
+ __u16 data_len;
+ __u16 param_len;
+ } info_1;
+ __u32 cdw3;
+ };
+ __u64 metadata;
+
+ __u64 addr;
+ __u64 prp2;
+
+ __u32 cdw10;
+ __u32 cdw11;
+ __u32 cdw12;
+ __u32 cdw13;
+ __u32 cdw14;
+ __u32 cdw15;
+ __u32 timeout_ms;
+ __u32 result0;
+ __u32 result1;
+};
+
+#define SPRAID_IOCTL_IOQ_CMD _IOWR('N', 0x42, struct spraid_ioq_passthru_cmd)
+
+struct spraid_ioq_passthru_cmd {
+ __u8 opcode;
+ __u8 flags;
+ __u16 rsvd0;
+ __u32 nsid;
+ union {
+ struct {
+ __u16 res_sense_len;
+ __u8 cdb_len;
+ __u8 rsvd0;
+ } info_0;
+ __u32 cdw2;
+ };
+ union {
+ struct {
+ __u16 subopcode;
+ __u16 rsvd1;
+ } info_1;
+ __u32 cdw3;
+ };
+ union {
+ struct {
+ __u16 rsvd;
+ __u16 param_len;
+ } info_2;
+ __u32 cdw4;
+ };
+ __u32 cdw5;
+ __u64 addr;
+ __u64 prp2;
+ union {
+ struct {
+ __u16 eid;
+ __u16 sid;
+ } info_3;
+ __u32 cdw10;
+ };
+ union {
+ struct {
+ __u16 did;
+ __u8 did_flag;
+ __u8 rsvd2;
+ } info_4;
+ __u32 cdw11;
+ };
+ __u32 cdw12;
+ __u32 cdw13;
+ __u32 cdw14;
+ __u32 data_len;
+ __u32 cdw16;
+ __u32 cdw17;
+ __u32 cdw18;
+ __u32 cdw19;
+ __u32 cdw20;
+ __u32 cdw21;
+ __u32 cdw22;
+ __u32 cdw23;
+ __u64 sense_addr;
+ __u32 cdw26[4];
+ __u32 timeout_ms;
+ __u32 result0;
+ __u32 result1;
+};
+
+struct spraid_ioq_ptcmd {
+ int qid;
+ int cid;
+ u32 result0;
+ u32 result1;
+ u16 status;
+ void *priv;
+ enum spraid_cmd_state state;
+ struct completion cmd_done;
+ struct list_head list;
+};
+
+struct spraid_admin_request {
+ struct spraid_admin_command *cmd;
+ u32 result0;
+ u32 result1;
+ u16 flags;
+ u16 status;
+};
+
+struct spraid_queue {
+ struct spraid_dev *hdev;
+ spinlock_t sq_lock; /* spinlock for lock handling */
+
+ spinlock_t cq_lock ____cacheline_aligned_in_smp; /* spinlock for lock handling */
+
+ void *sq_cmds;
+
+ struct spraid_completion *cqes;
+
+ dma_addr_t sq_dma_addr;
+ dma_addr_t cq_dma_addr;
+ u32 __iomem *q_db;
+ u8 cq_phase;
+ u8 sqes;
+ u16 qid;
+ u16 sq_tail;
+ u16 cq_head;
+ u16 last_cq_head;
+ u16 q_depth;
+ s16 cq_vector;
+ void *sense;
+ dma_addr_t sense_dma_addr;
+ struct dma_pool *prp_small_pool;
+};
+
+struct spraid_iod {
+ struct spraid_admin_request req;
+ struct spraid_queue *spraidq;
+ enum spraid_cmd_state state;
+ int npages;
+ u32 nsge;
+ u32 length;
+ bool use_sgl;
+ bool sg_drv_mgmt;
+ dma_addr_t first_dma;
+ void *sense;
+ dma_addr_t sense_dma;
+ struct scatterlist *sg;
+ struct scatterlist inline_sg[0];
+};
+
+#define SPRAID_DEV_INFO_ATTR_BOOT(attr) ((attr) & 0x01)
+#define SPRAID_DEV_INFO_ATTR_HDD(attr) ((attr) & 0x02)
+#define SPRAID_DEV_INFO_ATTR_PT(attr) (((attr) & 0x22) == 0x02)
+#define SPRAID_DEV_INFO_ATTR_RAWDISK(attr) ((attr) & 0x20)
+
+#define SPRAID_DEV_INFO_FLAG_VALID(flag) ((flag) & 0x01)
+#define SPRAID_DEV_INFO_FLAG_CHANGE(flag) ((flag) & 0x02)
+
+struct spraid_dev_info {
+ __le32 hdid;
+ __le16 target;
+ __u8 channel;
+ __u8 lun;
+ __u8 attr;
+ __u8 flag;
+ __le16 max_io_kb;
+};
+
+#define MAX_DEV_ENTRY_PER_PAGE_4K 340
+struct spraid_dev_list {
+ __le32 dev_num;
+ __u32 rsvd0[3];
+ struct spraid_dev_info devices[MAX_DEV_ENTRY_PER_PAGE_4K];
+};
+
+struct spraid_sdev_hostdata {
+ u32 hdid;
+};
+
+#endif
+
diff --git a/drivers/scsi/spraid/spraid_main.c b/drivers/scsi/spraid/spraid_main.c
new file mode 100644
index 000000000000..a0a75ecb0027
--- /dev/null
+++ b/drivers/scsi/spraid/spraid_main.c
@@ -0,0 +1,3616 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Linux spraid device driver
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+#define pr_fmt(fmt) "spraid: " fmt
+
+#include <linux/sched/signal.h>
+#include <linux/version.h>
+#include <linux/pci.h>
+#include <linux/aer.h>
+#include <linux/module.h>
+#include <linux/ioport.h>
+#include <linux/device.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/cdev.h>
+#include <linux/sysfs.h>
+#include <linux/gfp.h>
+#include <linux/types.h>
+#include <linux/ratelimit.h>
+#include <linux/once.h>
+#include <linux/debugfs.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
+#include <linux/blkdev.h>
+
+#include <scsi/scsi.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_transport.h>
+#include <scsi/scsi_dbg.h>
+
+#include "spraid.h"
+
+static u32 admin_tmout = 60;
+module_param(admin_tmout, uint, 0644);
+MODULE_PARM_DESC(admin_tmout, "admin commands timeout (seconds)");
+
+static u32 scmd_tmout_pt = 30;
+module_param(scmd_tmout_pt, uint, 0644);
+MODULE_PARM_DESC(scmd_tmout_pt, "scsi commands timeout for passthrough(seconds)");
+
+static u32 scmd_tmout_nonpt = 180;
+module_param(scmd_tmout_nonpt, uint, 0644);
+MODULE_PARM_DESC(scmd_tmout_nonpt, "scsi commands timeout for rawdisk&raid(seconds)");
+
+static u32 wait_abl_tmout = 3;
+module_param(wait_abl_tmout, uint, 0644);
+MODULE_PARM_DESC(wait_abl_tmout, "wait abnormal io timeout(seconds)");
+
+static bool use_sgl_force;
+module_param(use_sgl_force, bool, 0644);
+MODULE_PARM_DESC(use_sgl_force, "force IO use sgl format, default false");
+
+static int ioq_depth_set(const char *val, const struct kernel_param *kp);
+static const struct kernel_param_ops ioq_depth_ops = {
+ .set = ioq_depth_set,
+ .get = param_get_uint,
+};
+
+static u32 io_queue_depth = 1024;
+module_param_cb(io_queue_depth, &ioq_depth_ops, &io_queue_depth, 0644);
+MODULE_PARM_DESC(io_queue_depth, "set io queue depth, should >= 2");
+
+static int log_debug_switch_set(const char *val, const struct kernel_param *kp)
+{
+ u8 n = 0;
+ int ret;
+
+ ret = kstrtou8(val, 10, &n);
+ if (ret != 0)
+ return -EINVAL;
+
+ return param_set_byte(val, kp);
+}
+
+static const struct kernel_param_ops log_debug_switch_ops = {
+ .set = log_debug_switch_set,
+ .get = param_get_byte,
+};
+
+static unsigned char log_debug_switch;
+module_param_cb(log_debug_switch, &log_debug_switch_ops, &log_debug_switch, 0644);
+MODULE_PARM_DESC(log_debug_switch, "set log state, default non-zero for switch on");
+
+static int small_pool_num_set(const char *val, const struct kernel_param *kp)
+{
+ u8 n = 0;
+ int ret;
+
+ ret = kstrtou8(val, 10, &n);
+ if (ret != 0)
+ return -EINVAL;
+ if (n > MAX_SMALL_POOL_NUM)
+ n = MAX_SMALL_POOL_NUM;
+ if (n < 1)
+ n = 1;
+ *((u8 *)kp->arg) = n;
+
+ return 0;
+}
+
+static const struct kernel_param_ops small_pool_num_ops = {
+ .set = small_pool_num_set,
+ .get = param_get_byte,
+};
+
+static unsigned char small_pool_num = 4;
+module_param_cb(small_pool_num, &small_pool_num_ops, &small_pool_num, 0644);
+MODULE_PARM_DESC(small_pool_num, "set prp small pool num, default 4, MAX 16");
+
+static void spraid_free_queue(struct spraid_queue *spraidq);
+static void spraid_handle_aen_notice(struct spraid_dev *hdev, u32 result);
+static void spraid_handle_aen_vs(struct spraid_dev *hdev, u32 result);
+
+static DEFINE_IDA(spraid_instance_ida);
+static dev_t spraid_chr_devt;
+static struct class *spraid_class;
+
+#define SPRAID_CAP_TIMEOUT_UNIT_MS (HZ / 2)
+
+static struct workqueue_struct *spraid_wq;
+
+#define dev_log_dbg(dev, fmt, ...) do { \
+ if (unlikely(log_debug_switch)) \
+ dev_info(dev, "[%s] [%d] " fmt, \
+ __func__, __LINE__, ##__VA_ARGS__); \
+} while (0)
+
+#define SPRAID_DRV_VERSION "1.0.0.0"
+
+#define ADMIN_TIMEOUT (admin_tmout * HZ)
+#define ADMIN_ERR_TIMEOUT 32757
+
+#define SPRAID_WAIT_ABNL_CMD_TIMEOUT (wait_abl_tmout * 2)
+
+#define SPRAID_DMA_MSK_BIT_MAX 64
+
+enum FW_STAT_CODE {
+ FW_STAT_OK = 0,
+ FW_STAT_NEED_CHECK,
+ FW_STAT_ERROR,
+ FW_STAT_EP_PCIE_ERROR,
+ FW_STAT_NAC_DMA_ERROR,
+ FW_STAT_ABORTED,
+ FW_STAT_NEED_RETRY
+};
+
+static int ioq_depth_set(const char *val, const struct kernel_param *kp)
+{
+ int n = 0;
+ int ret;
+
+ ret = kstrtoint(val, 10, &n);
+ if (ret != 0 || n < 2)
+ return -EINVAL;
+
+ return param_set_int(val, kp);
+}
+
+static int spraid_remap_bar(struct spraid_dev *hdev, u32 size)
+{
+ struct pci_dev *pdev = hdev->pdev;
+
+ if (size > pci_resource_len(pdev, 0)) {
+ dev_err(hdev->dev, "Input size[%u] exceed bar0 length[%llu]\n",
+ size, pci_resource_len(pdev, 0));
+ return -ENOMEM;
+ }
+
+ if (hdev->bar)
+ iounmap(hdev->bar);
+
+ hdev->bar = ioremap(pci_resource_start(pdev, 0), size);
+ if (!hdev->bar) {
+ dev_err(hdev->dev, "ioremap for bar0 failed\n");
+ return -ENOMEM;
+ }
+ hdev->dbs = hdev->bar + SPRAID_REG_DBS;
+
+ return 0;
+}
+
+static int spraid_dev_map(struct spraid_dev *hdev)
+{
+ struct pci_dev *pdev = hdev->pdev;
+ int ret;
+
+ ret = pci_request_mem_regions(pdev, "spraid");
+ if (ret) {
+ dev_err(hdev->dev, "fail to request memory regions\n");
+ return ret;
+ }
+
+ ret = spraid_remap_bar(hdev, SPRAID_REG_DBS + 4096);
+ if (ret) {
+ pci_release_mem_regions(pdev);
+ return ret;
+ }
+
+ return 0;
+}
+
+static void spraid_dev_unmap(struct spraid_dev *hdev)
+{
+ struct pci_dev *pdev = hdev->pdev;
+
+ if (hdev->bar) {
+ iounmap(hdev->bar);
+ hdev->bar = NULL;
+ }
+ pci_release_mem_regions(pdev);
+}
+
+static int spraid_pci_enable(struct spraid_dev *hdev)
+{
+ struct pci_dev *pdev = hdev->pdev;
+ int ret = -ENOMEM;
+ u64 maskbit = SPRAID_DMA_MSK_BIT_MAX;
+
+ if (pci_enable_device_mem(pdev)) {
+ dev_err(hdev->dev, "Enable pci device memory resources failed\n");
+ return ret;
+ }
+ pci_set_master(pdev);
+
+ if (readl(hdev->bar + SPRAID_REG_CSTS) == U32_MAX) {
+ ret = -ENODEV;
+ dev_err(hdev->dev, "Read csts register failed\n");
+ goto disable;
+ }
+
+ ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES);
+ if (ret < 0) {
+ dev_err(hdev->dev, "Allocate one IRQ for setup admin channel failed\n");
+ goto disable;
+ }
+
+ hdev->cap = lo_hi_readq(hdev->bar + SPRAID_REG_CAP);
+ hdev->ioq_depth = min_t(u32, SPRAID_CAP_MQES(hdev->cap) + 1, io_queue_depth);
+ hdev->db_stride = 1 << SPRAID_CAP_STRIDE(hdev->cap);
+
+ maskbit = SPRAID_CAP_DMAMASK(hdev->cap);
+ if (maskbit < 32 || maskbit > SPRAID_DMA_MSK_BIT_MAX) {
+ dev_err(hdev->dev, "err, dma mask invalid[%llu], set to default\n", maskbit);
+ maskbit = SPRAID_DMA_MSK_BIT_MAX;
+ }
+ if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(maskbit))) {
+ dev_err(hdev->dev, "set dma mask and coherent failed\n");
+ goto disable;
+ }
+
+ dev_info(hdev->dev, "set dma mask[%llu] success\n", maskbit);
+
+ pci_enable_pcie_error_reporting(pdev);
+ pci_save_state(pdev);
+
+ return 0;
+
+disable:
+ pci_disable_device(pdev);
+ return ret;
+}
+
+static inline
+struct spraid_admin_request *spraid_admin_req(struct request *req)
+{
+ return blk_mq_rq_to_pdu(req);
+}
+
+static int spraid_npages_prp(u32 size, struct spraid_dev *hdev)
+{
+ u32 nprps = DIV_ROUND_UP(size + hdev->page_size, hdev->page_size);
+
+ return DIV_ROUND_UP(PRP_ENTRY_SIZE * nprps, PAGE_SIZE - PRP_ENTRY_SIZE);
+}
+
+static int spraid_npages_sgl(u32 nseg)
+{
+ return DIV_ROUND_UP(nseg * sizeof(struct spraid_sgl_desc), PAGE_SIZE);
+}
+
+static void **spraid_iod_list(struct spraid_iod *iod)
+{
+ return (void **)(iod->inline_sg + (iod->sg_drv_mgmt ? iod->nsge : 0));
+}
+
+static u32 spraid_iod_ext_size(struct spraid_dev *hdev, u32 size, u32 nsge,
+ bool sg_drv_mgmt, bool use_sgl)
+{
+ size_t alloc_size, sg_size;
+
+ if (use_sgl)
+ alloc_size = sizeof(__le64 *) * spraid_npages_sgl(nsge);
+ else
+ alloc_size = sizeof(__le64 *) * spraid_npages_prp(size, hdev);
+
+ sg_size = sg_drv_mgmt ? (sizeof(struct scatterlist) * nsge) : 0;
+ return sg_size + alloc_size;
+}
+
+static u32 spraid_cmd_size(struct spraid_dev *hdev, bool sg_drv_mgmt, bool use_sgl)
+{
+ u32 alloc_size = spraid_iod_ext_size(hdev, SPRAID_INT_BYTES(hdev),
+ SPRAID_INT_PAGES, sg_drv_mgmt, use_sgl);
+
+ dev_info(hdev->dev, "sg_drv_mgmt: %s, use_sgl: %s, iod size: %lu, alloc_size: %u\n",
+ sg_drv_mgmt ? "true" : "false", use_sgl ? "true" : "false",
+ sizeof(struct spraid_iod), alloc_size);
+
+ return sizeof(struct spraid_iod) + alloc_size;
+}
+
+static int spraid_setup_prps(struct spraid_dev *hdev, struct spraid_iod *iod)
+{
+ struct scatterlist *sg = iod->sg;
+ u64 dma_addr = sg_dma_address(sg);
+ int dma_len = sg_dma_len(sg);
+ __le64 *prp_list, *old_prp_list;
+ u32 page_size = hdev->page_size;
+ int offset = dma_addr & (page_size - 1);
+ void **list = spraid_iod_list(iod);
+ int length = iod->length;
+ struct dma_pool *pool;
+ dma_addr_t prp_dma;
+ int nprps, i;
+
+ length -= (page_size - offset);
+ if (length <= 0) {
+ iod->first_dma = 0;
+ return 0;
+ }
+
+ dma_len -= (page_size - offset);
+ if (dma_len) {
+ dma_addr += (page_size - offset);
+ } else {
+ sg = sg_next(sg);
+ dma_addr = sg_dma_address(sg);
+ dma_len = sg_dma_len(sg);
+ }
+
+ if (length <= page_size) {
+ iod->first_dma = dma_addr;
+ return 0;
+ }
+
+ nprps = DIV_ROUND_UP(length, page_size);
+ if (nprps <= (SMALL_POOL_SIZE / PRP_ENTRY_SIZE)) {
+ pool = iod->spraidq->prp_small_pool;
+ iod->npages = 0;
+ } else {
+ pool = hdev->prp_page_pool;
+ iod->npages = 1;
+ }
+
+ prp_list = dma_pool_alloc(pool, GFP_ATOMIC, &prp_dma);
+ if (!prp_list) {
+ dev_err_ratelimited(hdev->dev, "Allocate first prp_list memory failed\n");
+ iod->first_dma = dma_addr;
+ iod->npages = -1;
+ return -ENOMEM;
+ }
+ list[0] = prp_list;
+ iod->first_dma = prp_dma;
+ i = 0;
+ for (;;) {
+ if (i == page_size / PRP_ENTRY_SIZE) {
+ old_prp_list = prp_list;
+
+ prp_list = dma_pool_alloc(pool, GFP_ATOMIC, &prp_dma);
+ if (!prp_list) {
+ dev_err_ratelimited(hdev->dev, "Allocate %dth prp_list memory failed\n",
+ iod->npages + 1);
+ return -ENOMEM;
+ }
+ list[iod->npages++] = prp_list;
+ prp_list[0] = old_prp_list[i - 1];
+ old_prp_list[i - 1] = cpu_to_le64(prp_dma);
+ i = 1;
+ }
+ prp_list[i++] = cpu_to_le64(dma_addr);
+ dma_len -= page_size;
+ dma_addr += page_size;
+ length -= page_size;
+ if (length <= 0)
+ break;
+ if (dma_len > 0)
+ continue;
+ if (unlikely(dma_len < 0))
+ goto bad_sgl;
+ sg = sg_next(sg);
+ dma_addr = sg_dma_address(sg);
+ dma_len = sg_dma_len(sg);
+ }
+
+ return 0;
+
+bad_sgl:
+ dev_err(hdev->dev, "Setup prps, invalid SGL for payload: %d nents: %d\n",
+ iod->length, iod->nsge);
+ return -EIO;
+}
+
+#define SGES_PER_PAGE (PAGE_SIZE / sizeof(struct spraid_sgl_desc))
+
+static void spraid_submit_cmd(struct spraid_queue *spraidq, const void *cmd)
+{
+ u32 sqes = SQE_SIZE(spraidq->qid);
+ unsigned long flags;
+ struct spraid_admin_common_command *acd = (struct spraid_admin_common_command *)cmd;
+
+ spin_lock_irqsave(&spraidq->sq_lock, flags);
+ memcpy((spraidq->sq_cmds + sqes * spraidq->sq_tail), cmd, sqes);
+ if (++spraidq->sq_tail == spraidq->q_depth)
+ spraidq->sq_tail = 0;
+
+ writel(spraidq->sq_tail, spraidq->q_db);
+ spin_unlock_irqrestore(&spraidq->sq_lock, flags);
+
+ dev_log_dbg(spraidq->hdev->dev, "cid[%d], qid[%d], opcode[0x%x], flags[0x%x], hdid[%u]\n",
+ acd->command_id, spraidq->qid, acd->opcode, acd->flags, le32_to_cpu(acd->hdid));
+}
+
+static u32 spraid_mod64(u64 dividend, u32 divisor)
+{
+ u64 d;
+ u32 remainder;
+
+ if (!divisor)
+ pr_err("DIVISOR is zero, in div fn\n");
+
+ d = dividend;
+ remainder = do_div(d, divisor);
+ return remainder;
+}
+
+static inline bool spraid_is_rw_scmd(struct scsi_cmnd *scmd)
+{
+ switch (scmd->cmnd[0]) {
+ case READ_6:
+ case READ_10:
+ case READ_12:
+ case READ_16:
+ case READ_32:
+ case WRITE_6:
+ case WRITE_10:
+ case WRITE_12:
+ case WRITE_16:
+ case WRITE_32:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static bool spraid_is_prp(struct spraid_dev *hdev, struct scsi_cmnd *scmd, u32 nsge)
+{
+ struct scatterlist *sg = scsi_sglist(scmd);
+ u32 page_size = hdev->page_size;
+ bool is_prp = true;
+ int i = 0;
+
+ scsi_for_each_sg(scmd, sg, nsge, i) {
+ if (i != 0 && i != nsge - 1) {
+ if (spraid_mod64(sg_dma_len(sg), page_size) ||
+ spraid_mod64(sg_dma_address(sg), page_size)) {
+ is_prp = false;
+ break;
+ }
+ }
+
+ if (nsge > 1 && i == 0) {
+ if ((spraid_mod64((sg_dma_address(sg) + sg_dma_len(sg)), page_size))) {
+ is_prp = false;
+ break;
+ }
+ }
+
+ if (nsge > 1 && i == (nsge - 1)) {
+ if (spraid_mod64(sg_dma_address(sg), page_size)) {
+ is_prp = false;
+ break;
+ }
+ }
+ }
+
+ return is_prp;
+}
+
+enum {
+ SPRAID_SGL_FMT_DATA_DESC = 0x00,
+ SPRAID_SGL_FMT_SEG_DESC = 0x02,
+ SPRAID_SGL_FMT_LAST_SEG_DESC = 0x03,
+ SPRAID_KEY_SGL_FMT_DATA_DESC = 0x04,
+ SPRAID_TRANSPORT_SGL_DATA_DESC = 0x05
+};
+
+static void spraid_sgl_set_data(struct spraid_sgl_desc *sge, struct scatterlist *sg)
+{
+ sge->addr = cpu_to_le64(sg_dma_address(sg));
+ sge->length = cpu_to_le32(sg_dma_len(sg));
+ sge->type = SPRAID_SGL_FMT_DATA_DESC << 4;
+}
+
+static void spraid_sgl_set_seg(struct spraid_sgl_desc *sge, dma_addr_t dma_addr, int entries)
+{
+ sge->addr = cpu_to_le64(dma_addr);
+ if (entries <= SGES_PER_PAGE) {
+ sge->length = cpu_to_le32(entries * sizeof(*sge));
+ sge->type = SPRAID_SGL_FMT_LAST_SEG_DESC << 4;
+ } else {
+ sge->length = cpu_to_le32(PAGE_SIZE);
+ sge->type = SPRAID_SGL_FMT_SEG_DESC << 4;
+ }
+}
+
+static int spraid_setup_ioq_cmd_sgl(struct spraid_dev *hdev,
+ struct scsi_cmnd *scmd, struct spraid_ioq_command *ioq_cmd,
+ struct spraid_iod *iod)
+{
+ struct spraid_sgl_desc *sg_list, *link, *old_sg_list;
+ struct scatterlist *sg = scsi_sglist(scmd);
+ void **list = spraid_iod_list(iod);
+ struct dma_pool *pool;
+ int nsge = iod->nsge;
+ dma_addr_t sgl_dma;
+ int i = 0;
+
+ ioq_cmd->common.flags |= SPRAID_CMD_FLAG_SGL_METABUF;
+
+ if (nsge == 1) {
+ spraid_sgl_set_data(&ioq_cmd->common.dptr.sgl, sg);
+ return 0;
+ }
+
+ if (nsge <= (SMALL_POOL_SIZE / sizeof(struct spraid_sgl_desc))) {
+ pool = iod->spraidq->prp_small_pool;
+ iod->npages = 0;
+ } else {
+ pool = hdev->prp_page_pool;
+ iod->npages = 1;
+ }
+
+ sg_list = dma_pool_alloc(pool, GFP_ATOMIC, &sgl_dma);
+ if (!sg_list) {
+ dev_err_ratelimited(hdev->dev, "Allocate first sgl_list failed\n");
+ iod->npages = -1;
+ return -ENOMEM;
+ }
+
+ list[0] = sg_list;
+ iod->first_dma = sgl_dma;
+ spraid_sgl_set_seg(&ioq_cmd->common.dptr.sgl, sgl_dma, nsge);
+ do {
+ if (i == SGES_PER_PAGE) {
+ old_sg_list = sg_list;
+ link = &old_sg_list[SGES_PER_PAGE - 1];
+
+ sg_list = dma_pool_alloc(pool, GFP_ATOMIC, &sgl_dma);
+ if (!sg_list) {
+ dev_err_ratelimited(hdev->dev, "Allocate %dth sgl_list failed\n",
+ iod->npages + 1);
+ return -ENOMEM;
+ }
+ list[iod->npages++] = sg_list;
+
+ i = 0;
+ memcpy(&sg_list[i++], link, sizeof(*link));
+ spraid_sgl_set_seg(link, sgl_dma, nsge);
+ }
+
+ spraid_sgl_set_data(&sg_list[i++], sg);
+ sg = sg_next(sg);
+ } while (--nsge > 0);
+
+ return 0;
+}
+
+#define SPRAID_RW_FUA BIT(14)
+
+static void spraid_setup_rw_cmd(struct spraid_dev *hdev,
+ struct spraid_rw_command *rw,
+ struct scsi_cmnd *scmd)
+{
+ u32 start_lba_lo, start_lba_hi;
+ u32 datalength = 0;
+ u16 control = 0;
+
+ start_lba_lo = 0;
+ start_lba_hi = 0;
+
+ if (scmd->sc_data_direction == DMA_TO_DEVICE) {
+ rw->opcode = SPRAID_CMD_WRITE;
+ } else if (scmd->sc_data_direction == DMA_FROM_DEVICE) {
+ rw->opcode = SPRAID_CMD_READ;
+ } else {
+ dev_err(hdev->dev, "Invalid IO for unsupported data direction: %d\n",
+ scmd->sc_data_direction);
+ WARN_ON(1);
+ }
+
+ /* 6-byte READ(0x08) or WRITE(0x0A) cdb */
+ if (scmd->cmd_len == 6) {
+ datalength = (u32)(scmd->cmnd[4] == 0 ?
+ IO_6_DEFAULT_TX_LEN : scmd->cmnd[4]);
+ start_lba_lo = ((u32)scmd->cmnd[1] << 16) |
+ ((u32)scmd->cmnd[2] << 8) | (u32)scmd->cmnd[3];
+
+ start_lba_lo &= 0x1FFFFF;
+ }
+
+ /* 10-byte READ(0x28) or WRITE(0x2A) cdb */
+ else if (scmd->cmd_len == 10) {
+ datalength = (u32)scmd->cmnd[8] | ((u32)scmd->cmnd[7] << 8);
+ start_lba_lo = ((u32)scmd->cmnd[2] << 24) |
+ ((u32)scmd->cmnd[3] << 16) |
+ ((u32)scmd->cmnd[4] << 8) | (u32)scmd->cmnd[5];
+
+ if (scmd->cmnd[1] & FUA_MASK)
+ control |= SPRAID_RW_FUA;
+ }
+
+ /* 12-byte READ(0xA8) or WRITE(0xAA) cdb */
+ else if (scmd->cmd_len == 12) {
+ datalength = ((u32)scmd->cmnd[6] << 24) |
+ ((u32)scmd->cmnd[7] << 16) |
+ ((u32)scmd->cmnd[8] << 8) | (u32)scmd->cmnd[9];
+ start_lba_lo = ((u32)scmd->cmnd[2] << 24) |
+ ((u32)scmd->cmnd[3] << 16) |
+ ((u32)scmd->cmnd[4] << 8) | (u32)scmd->cmnd[5];
+
+ if (scmd->cmnd[1] & FUA_MASK)
+ control |= SPRAID_RW_FUA;
+ }
+ /* 16-byte READ(0x88) or WRITE(0x8A) cdb */
+ else if (scmd->cmd_len == 16) {
+ datalength = ((u32)scmd->cmnd[10] << 24) |
+ ((u32)scmd->cmnd[11] << 16) |
+ ((u32)scmd->cmnd[12] << 8) | (u32)scmd->cmnd[13];
+ start_lba_lo = ((u32)scmd->cmnd[6] << 24) |
+ ((u32)scmd->cmnd[7] << 16) |
+ ((u32)scmd->cmnd[8] << 8) | (u32)scmd->cmnd[9];
+ start_lba_hi = ((u32)scmd->cmnd[2] << 24) |
+ ((u32)scmd->cmnd[3] << 16) |
+ ((u32)scmd->cmnd[4] << 8) | (u32)scmd->cmnd[5];
+
+ if (scmd->cmnd[1] & FUA_MASK)
+ control |= SPRAID_RW_FUA;
+ }
+ /* 32-byte variable-length READ(32) or WRITE(32) cdb */
+ else if (scmd->cmd_len == 32) {
+ datalength = ((u32)scmd->cmnd[28] << 24) |
+ ((u32)scmd->cmnd[29] << 16) |
+ ((u32)scmd->cmnd[30] << 8) | (u32)scmd->cmnd[31];
+ start_lba_lo = ((u32)scmd->cmnd[16] << 24) |
+ ((u32)scmd->cmnd[17] << 16) |
+ ((u32)scmd->cmnd[18] << 8) | (u32)scmd->cmnd[19];
+ start_lba_hi = ((u32)scmd->cmnd[12] << 24) |
+ ((u32)scmd->cmnd[13] << 16) |
+ ((u32)scmd->cmnd[14] << 8) | (u32)scmd->cmnd[15];
+
+ if (scmd->cmnd[10] & FUA_MASK)
+ control |= SPRAID_RW_FUA;
+ }
+
+ if (unlikely(datalength > U16_MAX || datalength == 0)) {
+ dev_err(hdev->dev, "Invalid IO for illegal transfer data length: %u\n",
+ datalength);
+ WARN_ON(1);
+ }
+
+ rw->slba = cpu_to_le64(((u64)start_lba_hi << 32) | start_lba_lo);
+ /* 0base for nlb */
+ rw->nlb = cpu_to_le16((u16)(datalength - 1));
+ rw->control = cpu_to_le16(control);
+}
+
+static void spraid_setup_nonio_cmd(struct spraid_dev *hdev,
+ struct spraid_scsi_nonio *scsi_nonio, struct scsi_cmnd *scmd)
+{
+ scsi_nonio->buffer_len = cpu_to_le32(scsi_bufflen(scmd));
+
+ switch (scmd->sc_data_direction) {
+ case DMA_NONE:
+ scsi_nonio->opcode = SPRAID_CMD_NONIO_NONE;
+ break;
+ case DMA_TO_DEVICE:
+ scsi_nonio->opcode = SPRAID_CMD_NONIO_TODEV;
+ break;
+ case DMA_FROM_DEVICE:
+ scsi_nonio->opcode = SPRAID_CMD_NONIO_FROMDEV;
+ break;
+ default:
+ dev_err(hdev->dev, "Invalid IO for unsupported data direction: %d\n",
+ scmd->sc_data_direction);
+ WARN_ON(1);
+ }
+}
+
+static void spraid_setup_ioq_cmd(struct spraid_dev *hdev,
+ struct spraid_ioq_command *ioq_cmd, struct scsi_cmnd *scmd)
+{
+ memcpy(ioq_cmd->common.cdb, scmd->cmnd, scmd->cmd_len);
+ ioq_cmd->common.cdb_len = scmd->cmd_len;
+
+ if (spraid_is_rw_scmd(scmd))
+ spraid_setup_rw_cmd(hdev, &ioq_cmd->rw, scmd);
+ else
+ spraid_setup_nonio_cmd(hdev, &ioq_cmd->scsi_nonio, scmd);
+}
+
+static int spraid_init_iod(struct spraid_dev *hdev,
+ struct spraid_iod *iod, struct spraid_ioq_command *ioq_cmd,
+ struct scsi_cmnd *scmd)
+{
+ if (unlikely(!iod->sense)) {
+ dev_err(hdev->dev, "Allocate sense data buffer failed\n");
+ return -ENOMEM;
+ }
+ ioq_cmd->common.sense_addr = cpu_to_le64(iod->sense_dma);
+ ioq_cmd->common.sense_len = cpu_to_le16(SCSI_SENSE_BUFFERSIZE);
+
+ iod->nsge = 0;
+ iod->npages = -1;
+ iod->use_sgl = 0;
+ iod->sg_drv_mgmt = false;
+ WRITE_ONCE(iod->state, SPRAID_CMD_IDLE);
+
+ return 0;
+}
+
+static void spraid_free_iod_res(struct spraid_dev *hdev, struct spraid_iod *iod)
+{
+ const int last_prp = hdev->page_size / sizeof(__le64) - 1;
+ dma_addr_t dma_addr, next_dma_addr;
+ struct spraid_sgl_desc *sg_list;
+ __le64 *prp_list;
+ void *addr;
+ int i;
+
+ dma_addr = iod->first_dma;
+ if (iod->npages == 0)
+ dma_pool_free(iod->spraidq->prp_small_pool, spraid_iod_list(iod)[0], dma_addr);
+
+ for (i = 0; i < iod->npages; i++) {
+ addr = spraid_iod_list(iod)[i];
+
+ if (iod->use_sgl) {
+ sg_list = addr;
+ next_dma_addr =
+ le64_to_cpu((sg_list[SGES_PER_PAGE - 1]).addr);
+ } else {
+ prp_list = addr;
+ next_dma_addr = le64_to_cpu(prp_list[last_prp]);
+ }
+
+ dma_pool_free(hdev->prp_page_pool, addr, dma_addr);
+ dma_addr = next_dma_addr;
+ }
+
+ if (iod->sg_drv_mgmt && iod->sg != iod->inline_sg) {
+ iod->sg_drv_mgmt = false;
+ mempool_free(iod->sg, hdev->iod_mempool);
+ }
+
+ iod->sense = NULL;
+ iod->npages = -1;
+}
+
+static int spraid_io_map_data(struct spraid_dev *hdev, struct spraid_iod *iod,
+ struct scsi_cmnd *scmd, struct spraid_ioq_command *ioq_cmd)
+{
+ int ret;
+
+ iod->nsge = scsi_dma_map(scmd);
+
+ /* No data to DMA, it may be scsi no-rw command */
+ if (unlikely(iod->nsge == 0))
+ return 0;
+
+ iod->length = scsi_bufflen(scmd);
+ iod->sg = scsi_sglist(scmd);
+ iod->use_sgl = !spraid_is_prp(hdev, scmd, iod->nsge);
+
+ if (iod->use_sgl) {
+ ret = spraid_setup_ioq_cmd_sgl(hdev, scmd, ioq_cmd, iod);
+ } else {
+ ret = spraid_setup_prps(hdev, iod);
+ ioq_cmd->common.dptr.prp1 =
+ cpu_to_le64(sg_dma_address(iod->sg));
+ ioq_cmd->common.dptr.prp2 = cpu_to_le64(iod->first_dma);
+ }
+
+ if (ret)
+ scsi_dma_unmap(scmd);
+
+ return ret;
+}
+
+static void spraid_map_status(struct spraid_iod *iod, struct scsi_cmnd *scmd,
+ struct spraid_completion *cqe)
+{
+ scsi_set_resid(scmd, 0);
+
+ switch ((le16_to_cpu(cqe->status) >> 1) & 0x7f) {
+ case FW_STAT_OK:
+ set_host_byte(scmd, DID_OK);
+ break;
+ case FW_STAT_NEED_CHECK:
+ set_host_byte(scmd, DID_OK);
+ scmd->result |= le16_to_cpu(cqe->status) >> 8;
+ if (scmd->result & SAM_STAT_CHECK_CONDITION) {
+ memset(scmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
+ memcpy(scmd->sense_buffer, iod->sense, SCSI_SENSE_BUFFERSIZE);
+ set_driver_byte(scmd, DRIVER_SENSE);
+ }
+ break;
+ case FW_STAT_ABORTED:
+ set_host_byte(scmd, DID_ABORT);
+ break;
+ case FW_STAT_NEED_RETRY:
+ set_host_byte(scmd, DID_REQUEUE);
+ break;
+ default:
+ set_host_byte(scmd, DID_BAD_TARGET);
+ break;
+ }
+}
+
+static inline void spraid_get_tag_from_scmd(struct scsi_cmnd *scmd, u16 *qid, u16 *cid)
+{
+ u32 tag = blk_mq_unique_tag(scmd->request);
+
+ *qid = blk_mq_unique_tag_to_hwq(tag) + 1;
+ *cid = blk_mq_unique_tag_to_tag(tag);
+}
+
+static int spraid_queue_command(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
+{
+ struct spraid_iod *iod = scsi_cmd_priv(scmd);
+ struct spraid_dev *hdev = shost_priv(shost);
+ struct scsi_device *sdev = scmd->device;
+ struct spraid_sdev_hostdata *hostdata;
+ struct spraid_ioq_command ioq_cmd;
+ struct spraid_queue *ioq;
+ unsigned long elapsed;
+ u16 hwq, cid;
+ int ret;
+
+ if (unlikely(!scmd)) {
+ dev_err(hdev->dev, "err, scmd is null, return 0\n");
+ return 0;
+ }
+
+ if (unlikely(hdev->state != SPRAID_LIVE)) {
+ set_host_byte(scmd, DID_NO_CONNECT);
+ scmd->scsi_done(scmd);
+ dev_err(hdev->dev, "[%s] err, hdev state is not live\n", __func__);
+ return 0;
+ }
+
+ if (log_debug_switch)
+ scsi_print_command(scmd);
+
+ spraid_get_tag_from_scmd(scmd, &hwq, &cid);
+ hostdata = sdev->hostdata;
+ ioq = &hdev->queues[hwq];
+ memset(&ioq_cmd, 0, sizeof(ioq_cmd));
+ ioq_cmd.rw.hdid = cpu_to_le32(hostdata->hdid);
+ ioq_cmd.rw.command_id = cid;
+
+ spraid_setup_ioq_cmd(hdev, &ioq_cmd, scmd);
+
+ ret = cid * SCSI_SENSE_BUFFERSIZE;
+ iod->sense = ioq->sense + ret;
+ iod->sense_dma = ioq->sense_dma_addr + ret;
+
+ ret = spraid_init_iod(hdev, iod, &ioq_cmd, scmd);
+ if (unlikely(ret))
+ return SCSI_MLQUEUE_HOST_BUSY;
+
+ iod->spraidq = ioq;
+ ret = spraid_io_map_data(hdev, iod, scmd, &ioq_cmd);
+ if (unlikely(ret)) {
+ dev_err(hdev->dev, "spraid_io_map_data Err.\n");
+ set_host_byte(scmd, DID_ERROR);
+ scmd->scsi_done(scmd);
+ ret = 0;
+ goto deinit_iod;
+ }
+
+ WRITE_ONCE(iod->state, SPRAID_CMD_IN_FLIGHT);
+ spraid_submit_cmd(ioq, &ioq_cmd);
+ elapsed = jiffies - scmd->jiffies_at_alloc;
+ dev_log_dbg(hdev->dev, "cid[%d], qid[%d] submit IO cost %3ld.%3ld seconds\n",
+ cid, hwq, elapsed / HZ, elapsed % HZ);
+ return 0;
+
+deinit_iod:
+ spraid_free_iod_res(hdev, iod);
+ return ret;
+}
+
+static int spraid_match_dev(struct spraid_dev *hdev, u16 idx, struct scsi_device *sdev)
+{
+ if (SPRAID_DEV_INFO_FLAG_VALID(hdev->devices[idx].flag)) {
+ if (sdev->channel == hdev->devices[idx].channel &&
+ sdev->id == le16_to_cpu(hdev->devices[idx].target) &&
+ sdev->lun < hdev->devices[idx].lun) {
+ dev_info(hdev->dev, "Match device success, channel:target:lun[%d:%d:%d]\n",
+ hdev->devices[idx].channel,
+ hdev->devices[idx].target,
+ hdev->devices[idx].lun);
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+static int spraid_slave_alloc(struct scsi_device *sdev)
+{
+ struct spraid_sdev_hostdata *hostdata;
+ struct spraid_dev *hdev;
+ u16 idx;
+
+ hdev = shost_priv(sdev->host);
+ hostdata = kzalloc(sizeof(*hostdata), GFP_KERNEL);
+ if (!hostdata) {
+ dev_err(hdev->dev, "Alloc scsi host data memory failed\n");
+ return -ENOMEM;
+ }
+
+ down_read(&hdev->devices_rwsem);
+ for (idx = 0; idx < le32_to_cpu(hdev->ctrl_info->nd); idx++) {
+ if (spraid_match_dev(hdev, idx, sdev))
+ goto scan_host;
+ }
+ up_read(&hdev->devices_rwsem);
+
+ kfree(hostdata);
+ return -ENXIO;
+
+scan_host:
+ hostdata->hdid = le32_to_cpu(hdev->devices[idx].hdid);
+ sdev->hostdata = hostdata;
+ up_read(&hdev->devices_rwsem);
+ return 0;
+}
+
+static void spraid_slave_destroy(struct scsi_device *sdev)
+{
+ kfree(sdev->hostdata);
+ sdev->hostdata = NULL;
+}
+
+static int spraid_slave_configure(struct scsi_device *sdev)
+{
+ u16 idx;
+ unsigned int timeout = scmd_tmout_nonpt * HZ;
+ struct spraid_dev *hdev = shost_priv(sdev->host);
+ struct spraid_sdev_hostdata *hostdata = sdev->hostdata;
+ u32 max_sec = sdev->host->max_sectors;
+
+ if (hostdata) {
+ idx = hostdata->hdid - 1;
+ if (sdev->channel == hdev->devices[idx].channel &&
+ sdev->id == le16_to_cpu(hdev->devices[idx].target) &&
+ sdev->lun < hdev->devices[idx].lun) {
+ if (SPRAID_DEV_INFO_ATTR_PT(hdev->devices[idx].attr))
+ timeout = scmd_tmout_pt * HZ;
+ else
+ timeout = scmd_tmout_nonpt * HZ;
+ max_sec = le16_to_cpu(hdev->devices[idx].max_io_kb) << 1;
+ } else {
+ dev_err(hdev->dev, "[%s] err, sdev->channel:id:lun[%d:%d:%lld];"
+ "devices[%d], channel:target:lun[%d:%d:%d]\n",
+ __func__, sdev->channel, sdev->id, sdev->lun,
+ idx, hdev->devices[idx].channel,
+ hdev->devices[idx].target,
+ hdev->devices[idx].lun);
+ }
+ } else {
+ dev_err(hdev->dev, "[%s] err, sdev->hostdata is null\n", __func__);
+ }
+
+ blk_queue_rq_timeout(sdev->request_queue, timeout);
+ sdev->eh_timeout = timeout;
+
+ if ((max_sec == 0) || (max_sec > sdev->host->max_sectors))
+ max_sec = sdev->host->max_sectors;
+ blk_queue_max_hw_sectors(sdev->request_queue, max_sec);
+
+ dev_info(hdev->dev, "[%s] sdev->channel:id:lun[%d:%d:%lld], scmd_timeout[%d]s, maxsec[%d]\n",
+ __func__, sdev->channel, sdev->id, sdev->lun, timeout / HZ, max_sec);
+
+ return 0;
+}
+
+static void spraid_shost_init(struct spraid_dev *hdev)
+{
+ struct pci_dev *pdev = hdev->pdev;
+ u8 domain, bus;
+ u32 dev_func;
+
+ domain = pci_domain_nr(pdev->bus);
+ bus = pdev->bus->number;
+ dev_func = pdev->devfn;
+
+ hdev->shost->nr_hw_queues = hdev->online_queues - 1;
+ hdev->shost->can_queue = (hdev->ioq_depth - SPRAID_PTCMDS_PERQ);
+
+ hdev->shost->sg_tablesize = le16_to_cpu(hdev->ctrl_info->max_num_sge);
+ /* 512B per sector */
+ hdev->shost->max_sectors = (1U << ((hdev->ctrl_info->mdts) * 1U) << 12) / 512;
+ hdev->shost->cmd_per_lun = MAX_CMD_PER_DEV;
+ hdev->shost->max_channel = le16_to_cpu(hdev->ctrl_info->max_channel) - 1;
+ hdev->shost->max_id = le32_to_cpu(hdev->ctrl_info->max_tgt_id);
+ hdev->shost->max_lun = le16_to_cpu(hdev->ctrl_info->max_lun);
+
+ hdev->shost->this_id = -1;
+ hdev->shost->unique_id = (domain << 16) | (bus << 8) | dev_func;
+ hdev->shost->max_cmd_len = MAX_CDB_LEN;
+ hdev->shost->hostt->cmd_size = max(spraid_cmd_size(hdev, false, true),
+ spraid_cmd_size(hdev, false, false));
+}
+
+static inline void spraid_host_deinit(struct spraid_dev *hdev)
+{
+ ida_free(&spraid_instance_ida, hdev->instance);
+}
+
+static int spraid_alloc_queue(struct spraid_dev *hdev, u16 qid, u16 depth)
+{
+ struct spraid_queue *spraidq = &hdev->queues[qid];
+ int ret = 0;
+
+ if (hdev->queue_count > qid) {
+ dev_info(hdev->dev, "[%s] warn: queue[%d] is exist\n", __func__, qid);
+ return 0;
+ }
+
+ spraidq->cqes = dma_alloc_coherent(hdev->dev, CQ_SIZE(depth),
+ &spraidq->cq_dma_addr, GFP_KERNEL | __GFP_ZERO);
+ if (!spraidq->cqes)
+ return -ENOMEM;
+
+ spraidq->sq_cmds = dma_alloc_coherent(hdev->dev, SQ_SIZE(qid, depth),
+ &spraidq->sq_dma_addr, GFP_KERNEL);
+ if (!spraidq->sq_cmds) {
+ ret = -ENOMEM;
+ goto free_cqes;
+ }
+
+ spin_lock_init(&spraidq->sq_lock);
+ spin_lock_init(&spraidq->cq_lock);
+ spraidq->hdev = hdev;
+ spraidq->q_depth = depth;
+ spraidq->qid = qid;
+ spraidq->cq_vector = -1;
+ hdev->queue_count++;
+
+ /* alloc sense buffer */
+ spraidq->sense = dma_alloc_coherent(hdev->dev, SENSE_SIZE(depth),
+ &spraidq->sense_dma_addr, GFP_KERNEL | __GFP_ZERO);
+ if (!spraidq->sense) {
+ ret = -ENOMEM;
+ goto free_sq_cmds;
+ }
+
+ return 0;
+
+free_sq_cmds:
+ dma_free_coherent(hdev->dev, SQ_SIZE(qid, depth), (void *)spraidq->sq_cmds,
+ spraidq->sq_dma_addr);
+free_cqes:
+ dma_free_coherent(hdev->dev, CQ_SIZE(depth), (void *)spraidq->cqes,
+ spraidq->cq_dma_addr);
+ return ret;
+}
+
+static int spraid_wait_ready(struct spraid_dev *hdev, u64 cap, bool enabled)
+{
+ unsigned long timeout =
+ ((SPRAID_CAP_TIMEOUT(cap) + 1) * SPRAID_CAP_TIMEOUT_UNIT_MS) + jiffies;
+ u32 bit = enabled ? SPRAID_CSTS_RDY : 0;
+
+ while ((readl(hdev->bar + SPRAID_REG_CSTS) & SPRAID_CSTS_RDY) != bit) {
+ usleep_range(1000, 2000);
+ if (fatal_signal_pending(current))
+ return -EINTR;
+
+ if (time_after(jiffies, timeout)) {
+ dev_err(hdev->dev, "Device not ready; aborting %s\n",
+ enabled ? "initialisation" : "reset");
+ return -ENODEV;
+ }
+ }
+ return 0;
+}
+
+static int spraid_shutdown_ctrl(struct spraid_dev *hdev)
+{
+ unsigned long timeout = hdev->ctrl_info->rtd3e + jiffies;
+
+ hdev->ctrl_config &= ~SPRAID_CC_SHN_MASK;
+ hdev->ctrl_config |= SPRAID_CC_SHN_NORMAL;
+ writel(hdev->ctrl_config, hdev->bar + SPRAID_REG_CC);
+
+ while ((readl(hdev->bar + SPRAID_REG_CSTS) & SPRAID_CSTS_SHST_MASK) !=
+ SPRAID_CSTS_SHST_CMPLT) {
+ msleep(100);
+ if (fatal_signal_pending(current))
+ return -EINTR;
+ if (time_after(jiffies, timeout)) {
+ dev_err(hdev->dev, "Device shutdown incomplete; abort shutdown\n");
+ return -ENODEV;
+ }
+ }
+ return 0;
+}
+
+static int spraid_disable_ctrl(struct spraid_dev *hdev)
+{
+ hdev->ctrl_config &= ~SPRAID_CC_SHN_MASK;
+ hdev->ctrl_config &= ~SPRAID_CC_ENABLE;
+ writel(hdev->ctrl_config, hdev->bar + SPRAID_REG_CC);
+
+ return spraid_wait_ready(hdev, hdev->cap, false);
+}
+
+static int spraid_enable_ctrl(struct spraid_dev *hdev)
+{
+ u64 cap = hdev->cap;
+ u32 dev_page_min = SPRAID_CAP_MPSMIN(cap) + 12;
+ u32 page_shift = PAGE_SHIFT;
+
+ if (page_shift < dev_page_min) {
+ dev_err(hdev->dev, "Minimum device page size[%u], too large for host[%u]\n",
+ 1U << dev_page_min, 1U << page_shift);
+ return -ENODEV;
+ }
+
+ page_shift = min_t(unsigned int, SPRAID_CAP_MPSMAX(cap) + 12, PAGE_SHIFT);
+ hdev->page_size = 1U << page_shift;
+
+ hdev->ctrl_config = SPRAID_CC_CSS_NVM;
+ hdev->ctrl_config |= (page_shift - 12) << SPRAID_CC_MPS_SHIFT;
+ hdev->ctrl_config |= SPRAID_CC_AMS_RR | SPRAID_CC_SHN_NONE;
+ hdev->ctrl_config |= SPRAID_CC_IOSQES | SPRAID_CC_IOCQES;
+ hdev->ctrl_config |= SPRAID_CC_ENABLE;
+ writel(hdev->ctrl_config, hdev->bar + SPRAID_REG_CC);
+
+ return spraid_wait_ready(hdev, cap, true);
+}
+
+static void spraid_init_queue(struct spraid_queue *spraidq, u16 qid)
+{
+ struct spraid_dev *hdev = spraidq->hdev;
+
+ memset((void *)spraidq->cqes, 0, CQ_SIZE(spraidq->q_depth));
+
+ spraidq->sq_tail = 0;
+ spraidq->cq_head = 0;
+ spraidq->cq_phase = 1;
+ spraidq->q_db = &hdev->dbs[qid * 2 * hdev->db_stride];
+ spraidq->prp_small_pool = hdev->prp_small_pool[qid % small_pool_num];
+ hdev->online_queues++;
+}
+
+static inline bool spraid_cqe_pending(struct spraid_queue *spraidq)
+{
+ return (le16_to_cpu(spraidq->cqes[spraidq->cq_head].status) & 1) ==
+ spraidq->cq_phase;
+}
+
+static void spraid_complete_ioq_cmnd(struct spraid_queue *ioq, struct spraid_completion *cqe)
+{
+ struct spraid_dev *hdev = ioq->hdev;
+ struct blk_mq_tags *tags;
+ struct scsi_cmnd *scmd;
+ struct spraid_iod *iod;
+ struct request *req;
+ unsigned long elapsed;
+
+ tags = hdev->shost->tag_set.tags[ioq->qid - 1];
+ req = blk_mq_tag_to_rq(tags, cqe->cmd_id);
+ if (unlikely(!req || !blk_mq_request_started(req))) {
+ dev_warn(hdev->dev, "Invalid id %d completed on queue %d\n",
+ cqe->cmd_id, ioq->qid);
+ return;
+ }
+
+ scmd = blk_mq_rq_to_pdu(req);
+ iod = scsi_cmd_priv(scmd);
+
+ elapsed = jiffies - scmd->jiffies_at_alloc;
+ dev_log_dbg(hdev->dev, "cid[%d], qid[%d] finish IO cost %3ld.%3ld seconds\n",
+ cqe->cmd_id, ioq->qid, elapsed / HZ, elapsed % HZ);
+
+ if (cmpxchg(&iod->state, SPRAID_CMD_IN_FLIGHT, SPRAID_CMD_COMPLETE) !=
+ SPRAID_CMD_IN_FLIGHT) {
+ dev_warn(hdev->dev, "cid[%d], qid[%d] enters abnormal handler, cost %3ld.%3ld seconds\n",
+ cqe->cmd_id, ioq->qid, elapsed / HZ, elapsed % HZ);
+ WRITE_ONCE(iod->state, SPRAID_CMD_TMO_COMPLETE);
+
+ if (iod->nsge) {
+ iod->nsge = 0;
+ scsi_dma_unmap(scmd);
+ }
+ spraid_free_iod_res(hdev, iod);
+
+ return;
+ }
+
+ spraid_map_status(iod, scmd, cqe);
+ if (iod->nsge) {
+ iod->nsge = 0;
+ scsi_dma_unmap(scmd);
+ }
+ spraid_free_iod_res(hdev, iod);
+ scmd->scsi_done(scmd);
+}
+
+static inline void spraid_end_admin_request(struct request *req, __le16 status,
+ __le32 result0, __le32 result1)
+{
+ struct spraid_admin_request *rq = spraid_admin_req(req);
+
+ rq->status = le16_to_cpu(status) >> 1;
+ rq->result0 = le32_to_cpu(result0);
+ rq->result1 = le32_to_cpu(result1);
+ blk_mq_complete_request(req);
+}
+
+static void spraid_complete_adminq_cmnd(struct spraid_queue *adminq, struct spraid_completion *cqe)
+{
+ struct blk_mq_tags *tags = adminq->hdev->admin_tagset.tags[0];
+ struct request *req;
+
+ req = blk_mq_tag_to_rq(tags, cqe->cmd_id);
+ if (unlikely(!req)) {
+ dev_warn(adminq->hdev->dev, "Invalid id %d completed on queue %d\n",
+ cqe->cmd_id, le16_to_cpu(cqe->sq_id));
+ return;
+ }
+ spraid_end_admin_request(req, cqe->status, cqe->result, cqe->result1);
+}
+
+static void spraid_complete_aen(struct spraid_queue *spraidq, struct spraid_completion *cqe)
+{
+ struct spraid_dev *hdev = spraidq->hdev;
+ u32 result = le32_to_cpu(cqe->result);
+
+ dev_info(hdev->dev, "rcv aen, status[%x], result[%x]\n",
+ le16_to_cpu(cqe->status) >> 1, result);
+
+ if ((le16_to_cpu(cqe->status) >> 1) != SPRAID_SC_SUCCESS)
+ return;
+ switch (result & 0x7) {
+ case SPRAID_AEN_NOTICE:
+ spraid_handle_aen_notice(hdev, result);
+ break;
+ case SPRAID_AEN_VS:
+ spraid_handle_aen_vs(hdev, result);
+ break;
+ default:
+ dev_warn(hdev->dev, "Unsupported async event type: %u\n",
+ result & 0x7);
+ break;
+ }
+ queue_work(spraid_wq, &hdev->aen_work);
+}
+
+static void spraid_put_ioq_ptcmd(struct spraid_dev *hdev, struct spraid_ioq_ptcmd *cmd);
+
+static void spraid_complete_ioq_sync_cmnd(struct spraid_queue *ioq, struct spraid_completion *cqe)
+{
+ struct spraid_dev *hdev = ioq->hdev;
+ struct spraid_ioq_ptcmd *ptcmd;
+
+ ptcmd = hdev->ioq_ptcmds + (ioq->qid - 1) * SPRAID_PTCMDS_PERQ +
+ cqe->cmd_id - SPRAID_IO_BLK_MQ_DEPTH;
+
+ ptcmd->status = le16_to_cpu(cqe->status) >> 1;
+ ptcmd->result0 = le32_to_cpu(cqe->result);
+ ptcmd->result1 = le32_to_cpu(cqe->result1);
+
+ complete(&ptcmd->cmd_done);
+
+ spraid_put_ioq_ptcmd(hdev, ptcmd);
+}
+
+static inline void spraid_handle_cqe(struct spraid_queue *spraidq, u16 idx)
+{
+ struct spraid_completion *cqe = &spraidq->cqes[idx];
+ struct spraid_dev *hdev = spraidq->hdev;
+
+ if (unlikely(cqe->cmd_id >= spraidq->q_depth)) {
+ dev_err(hdev->dev, "Invalid command id[%d] completed on queue %d\n",
+ cqe->cmd_id, cqe->sq_id);
+ return;
+ }
+
+ dev_log_dbg(hdev->dev, "cid[%d], qid[%d], result[0x%x], sq_id[%d], status[0x%x]\n",
+ cqe->cmd_id, spraidq->qid, le32_to_cpu(cqe->result),
+ le16_to_cpu(cqe->sq_id), le16_to_cpu(cqe->status));
+
+ if (unlikely(spraidq->qid == 0 && cqe->cmd_id >= SPRAID_AQ_BLK_MQ_DEPTH)) {
+ spraid_complete_aen(spraidq, cqe);
+ return;
+ }
+
+ if (unlikely(spraidq->qid && cqe->cmd_id >= SPRAID_IO_BLK_MQ_DEPTH)) {
+ spraid_complete_ioq_sync_cmnd(spraidq, cqe);
+ return;
+ }
+
+ if (spraidq->qid)
+ spraid_complete_ioq_cmnd(spraidq, cqe);
+ else
+ spraid_complete_adminq_cmnd(spraidq, cqe);
+}
+
+static void spraid_complete_cqes(struct spraid_queue *spraidq, u16 start, u16 end)
+{
+ while (start != end) {
+ spraid_handle_cqe(spraidq, start);
+ if (++start == spraidq->q_depth)
+ start = 0;
+ }
+}
+
+static inline void spraid_update_cq_head(struct spraid_queue *spraidq)
+{
+ if (++spraidq->cq_head == spraidq->q_depth) {
+ spraidq->cq_head = 0;
+ spraidq->cq_phase = !spraidq->cq_phase;
+ }
+}
+
+static inline bool spraid_process_cq(struct spraid_queue *spraidq, u16 *start, u16 *end, int tag)
+{
+ bool found = false;
+
+ *start = spraidq->cq_head;
+ while (!found && spraid_cqe_pending(spraidq)) {
+ if (spraidq->cqes[spraidq->cq_head].cmd_id == tag)
+ found = true;
+ spraid_update_cq_head(spraidq);
+ }
+ *end = spraidq->cq_head;
+
+ if (*start != *end)
+ writel(spraidq->cq_head, spraidq->q_db + spraidq->hdev->db_stride);
+
+ return found;
+}
+
+static bool spraid_poll_cq(struct spraid_queue *spraidq, int cid)
+{
+ u16 start, end;
+ bool found;
+
+ if (!spraid_cqe_pending(spraidq))
+ return 0;
+
+ spin_lock_irq(&spraidq->cq_lock);
+ found = spraid_process_cq(spraidq, &start, &end, cid);
+ spin_unlock_irq(&spraidq->cq_lock);
+
+ spraid_complete_cqes(spraidq, start, end);
+ return found;
+}
+
+static irqreturn_t spraid_irq(int irq, void *data)
+{
+ struct spraid_queue *spraidq = data;
+ irqreturn_t ret = IRQ_NONE;
+ u16 start, end;
+
+ spin_lock(&spraidq->cq_lock);
+ if (spraidq->cq_head != spraidq->last_cq_head)
+ ret = IRQ_HANDLED;
+
+ spraid_process_cq(spraidq, &start, &end, -1);
+ spraidq->last_cq_head = spraidq->cq_head;
+ spin_unlock(&spraidq->cq_lock);
+
+ if (start != end) {
+ spraid_complete_cqes(spraidq, start, end);
+ ret = IRQ_HANDLED;
+ }
+ return ret;
+}
+
+static int spraid_setup_admin_queue(struct spraid_dev *hdev)
+{
+ struct spraid_queue *adminq = &hdev->queues[0];
+ u32 aqa;
+ int ret;
+
+ dev_info(hdev->dev, "[%s] start disable ctrl\n", __func__);
+
+ ret = spraid_disable_ctrl(hdev);
+ if (ret)
+ return ret;
+
+ ret = spraid_alloc_queue(hdev, 0, SPRAID_AQ_DEPTH);
+ if (ret)
+ return ret;
+
+ aqa = adminq->q_depth - 1;
+ aqa |= aqa << 16;
+ writel(aqa, hdev->bar + SPRAID_REG_AQA);
+ lo_hi_writeq(adminq->sq_dma_addr, hdev->bar + SPRAID_REG_ASQ);
+ lo_hi_writeq(adminq->cq_dma_addr, hdev->bar + SPRAID_REG_ACQ);
+
+ dev_info(hdev->dev, "[%s] start enable ctrl\n", __func__);
+
+ ret = spraid_enable_ctrl(hdev);
+ if (ret) {
+ ret = -ENODEV;
+ goto free_queue;
+ }
+
+ adminq->cq_vector = 0;
+ spraid_init_queue(adminq, 0);
+ ret = pci_request_irq(hdev->pdev, adminq->cq_vector, spraid_irq, NULL,
+ adminq, "spraid%d_q%d", hdev->instance, adminq->qid);
+
+ if (ret) {
+ adminq->cq_vector = -1;
+ hdev->online_queues--;
+ goto free_queue;
+ }
+
+ dev_info(hdev->dev, "[%s] success, queuecount:[%d], onlinequeue:[%d]\n",
+ __func__, hdev->queue_count, hdev->online_queues);
+
+ return 0;
+
+free_queue:
+ spraid_free_queue(adminq);
+ return ret;
+}
+
+static u32 spraid_bar_size(struct spraid_dev *hdev, u32 nr_ioqs)
+{
+ return (SPRAID_REG_DBS + ((nr_ioqs + 1) * 8 * hdev->db_stride));
+}
+
+static inline void spraid_clear_spraid_request(struct request *req)
+{
+ if (!(req->rq_flags & RQF_DONTPREP)) {
+ spraid_admin_req(req)->flags = 0;
+ req->rq_flags |= RQF_DONTPREP;
+ }
+}
+
+static struct request *spraid_alloc_admin_request(struct request_queue *q,
+ struct spraid_admin_command *cmd,
+ blk_mq_req_flags_t flags)
+{
+ u32 op = COMMAND_IS_WRITE(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
+ struct request *req;
+
+ req = blk_mq_alloc_request(q, op, flags);
+ if (IS_ERR(req))
+ return req;
+ req->cmd_flags |= REQ_FAILFAST_DRIVER;
+ spraid_clear_spraid_request(req);
+ spraid_admin_req(req)->cmd = cmd;
+
+ return req;
+}
+
+static int spraid_submit_admin_sync_cmd(struct request_queue *q,
+ struct spraid_admin_command *cmd,
+ u32 *result, void *buffer,
+ u32 bufflen, u32 timeout, int at_head, blk_mq_req_flags_t flags)
+{
+ struct request *req;
+ int ret;
+
+ req = spraid_alloc_admin_request(q, cmd, flags);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ req->timeout = timeout ? timeout : ADMIN_TIMEOUT;
+ if (buffer && bufflen) {
+ ret = blk_rq_map_kern(q, req, buffer, bufflen, GFP_KERNEL);
+ if (ret)
+ goto out;
+ }
+ blk_execute_rq(req->q, NULL, req, at_head);
+
+ if (result)
+ *result = spraid_admin_req(req)->result0;
+
+ if (spraid_admin_req(req)->flags & SPRAID_REQ_CANCELLED)
+ ret = -EINTR;
+ else
+ ret = spraid_admin_req(req)->status;
+
+out:
+ blk_mq_free_request(req);
+ return ret;
+}
+
+static int spraid_create_cq(struct spraid_dev *hdev, u16 qid,
+ struct spraid_queue *spraidq, u16 cq_vector)
+{
+ struct spraid_admin_command admin_cmd;
+ int flags = SPRAID_QUEUE_PHYS_CONTIG | SPRAID_CQ_IRQ_ENABLED;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.create_cq.opcode = SPRAID_ADMIN_CREATE_CQ;
+ admin_cmd.create_cq.prp1 = cpu_to_le64(spraidq->cq_dma_addr);
+ admin_cmd.create_cq.cqid = cpu_to_le16(qid);
+ admin_cmd.create_cq.qsize = cpu_to_le16(spraidq->q_depth - 1);
+ admin_cmd.create_cq.cq_flags = cpu_to_le16(flags);
+ admin_cmd.create_cq.irq_vector = cpu_to_le16(cq_vector);
+
+ return spraid_submit_admin_sync_cmd(hdev->admin_q, &admin_cmd, NULL,
+ NULL, 0, 0, 0, 0);
+}
+
+static int spraid_create_sq(struct spraid_dev *hdev, u16 qid,
+ struct spraid_queue *spraidq)
+{
+ struct spraid_admin_command admin_cmd;
+ int flags = SPRAID_QUEUE_PHYS_CONTIG;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.create_sq.opcode = SPRAID_ADMIN_CREATE_SQ;
+ admin_cmd.create_sq.prp1 = cpu_to_le64(spraidq->sq_dma_addr);
+ admin_cmd.create_sq.sqid = cpu_to_le16(qid);
+ admin_cmd.create_sq.qsize = cpu_to_le16(spraidq->q_depth - 1);
+ admin_cmd.create_sq.sq_flags = cpu_to_le16(flags);
+ admin_cmd.create_sq.cqid = cpu_to_le16(qid);
+
+ return spraid_submit_admin_sync_cmd(hdev->admin_q, &admin_cmd, NULL,
+ NULL, 0, 0, 0, 0);
+}
+
+static void spraid_free_queue(struct spraid_queue *spraidq)
+{
+ struct spraid_dev *hdev = spraidq->hdev;
+
+ hdev->queue_count--;
+ dma_free_coherent(hdev->dev, CQ_SIZE(spraidq->q_depth),
+ (void *)spraidq->cqes, spraidq->cq_dma_addr);
+ dma_free_coherent(hdev->dev, SQ_SIZE(spraidq->qid, spraidq->q_depth),
+ spraidq->sq_cmds, spraidq->sq_dma_addr);
+ dma_free_coherent(hdev->dev, SENSE_SIZE(spraidq->q_depth),
+ spraidq->sense, spraidq->sense_dma_addr);
+}
+
+static void spraid_free_admin_queue(struct spraid_dev *hdev)
+{
+ spraid_free_queue(&hdev->queues[0]);
+}
+
+static void spraid_free_io_queues(struct spraid_dev *hdev)
+{
+ int i;
+
+ for (i = hdev->queue_count - 1; i >= 1; i--)
+ spraid_free_queue(&hdev->queues[i]);
+}
+
+static int spraid_delete_queue(struct spraid_dev *hdev, u8 op, u16 id)
+{
+ struct spraid_admin_command admin_cmd;
+ int ret;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.delete_queue.opcode = op;
+ admin_cmd.delete_queue.qid = cpu_to_le16(id);
+
+ ret = spraid_submit_admin_sync_cmd(hdev->admin_q, &admin_cmd, NULL,
+ NULL, 0, 0, 0, 0);
+
+ if (ret)
+ dev_err(hdev->dev, "Delete %s:[%d] failed\n",
+ (op == SPRAID_ADMIN_DELETE_CQ) ? "cq" : "sq", id);
+
+ return ret;
+}
+
+static int spraid_delete_cq(struct spraid_dev *hdev, u16 cqid)
+{
+ return spraid_delete_queue(hdev, SPRAID_ADMIN_DELETE_CQ, cqid);
+}
+
+static int spraid_delete_sq(struct spraid_dev *hdev, u16 sqid)
+{
+ return spraid_delete_queue(hdev, SPRAID_ADMIN_DELETE_SQ, sqid);
+}
+
+static int spraid_create_queue(struct spraid_queue *spraidq, u16 qid)
+{
+ struct spraid_dev *hdev = spraidq->hdev;
+ u16 cq_vector;
+ int ret;
+
+ cq_vector = (hdev->num_vecs == 1) ? 0 : qid;
+ ret = spraid_create_cq(hdev, qid, spraidq, cq_vector);
+ if (ret)
+ return ret;
+
+ ret = spraid_create_sq(hdev, qid, spraidq);
+ if (ret)
+ goto delete_cq;
+
+ spraid_init_queue(spraidq, qid);
+ spraidq->cq_vector = cq_vector;
+
+ ret = pci_request_irq(hdev->pdev, cq_vector, spraid_irq, NULL,
+ spraidq, "spraid%d_q%d", hdev->instance, qid);
+
+ if (ret) {
+ dev_err(hdev->dev, "Request queue[%d] irq failed\n", qid);
+ goto delete_sq;
+ }
+
+ return 0;
+
+delete_sq:
+ spraidq->cq_vector = -1;
+ hdev->online_queues--;
+ spraid_delete_sq(hdev, qid);
+delete_cq:
+ spraid_delete_cq(hdev, qid);
+
+ return ret;
+}
+
+static int spraid_create_io_queues(struct spraid_dev *hdev)
+{
+ u32 i, max;
+ int ret = 0;
+
+ max = min(hdev->max_qid, hdev->queue_count - 1);
+ for (i = hdev->online_queues; i <= max; i++) {
+ ret = spraid_create_queue(&hdev->queues[i], i);
+ if (ret) {
+ dev_err(hdev->dev, "Create queue[%d] failed\n", i);
+ break;
+ }
+ }
+
+ dev_info(hdev->dev, "[%s] queue_count[%d], online_queue[%d]",
+ __func__, hdev->queue_count, hdev->online_queues);
+
+ return ret >= 0 ? 0 : ret;
+}
+
+static int spraid_set_features(struct spraid_dev *hdev, u32 fid, u32 dword11, void *buffer,
+ size_t buflen, u32 *result)
+{
+ struct spraid_admin_command admin_cmd;
+ u32 res;
+ int ret;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.features.opcode = SPRAID_ADMIN_SET_FEATURES;
+ admin_cmd.features.fid = cpu_to_le32(fid);
+ admin_cmd.features.dword11 = cpu_to_le32(dword11);
+
+ ret = spraid_submit_admin_sync_cmd(hdev->admin_q, &admin_cmd, &res,
+ buffer, buflen, 0, 0, 0);
+
+ if (!ret && result)
+ *result = res;
+
+ return ret;
+}
+
+static int spraid_configure_timestamp(struct spraid_dev *hdev)
+{
+ __le64 ts;
+ int ret;
+
+ ts = cpu_to_le64(ktime_to_ms(ktime_get_real()));
+ ret = spraid_set_features(hdev, SPRAID_FEAT_TIMESTAMP, 0, &ts, sizeof(ts), NULL);
+
+ if (ret)
+ dev_err(hdev->dev, "set timestamp failed: %d\n", ret);
+ return ret;
+}
+
+static int spraid_set_queue_cnt(struct spraid_dev *hdev, u32 *cnt)
+{
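+	/* Encode the zero-based requested queue count into both 16-bit halves of dword11 */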
+ u32 q_cnt = (*cnt - 1) | ((*cnt - 1) << 16);
+ u32 nr_ioqs, result;
+ int status;
+
+ status = spraid_set_features(hdev, SPRAID_FEAT_NUM_QUEUES, q_cnt, NULL, 0, &result);
+ if (status) {
+ dev_err(hdev->dev, "Set queue count failed, status: %d\n",
+ status);
+ return -EIO;
+ }
+
+ nr_ioqs = min(result & 0xffff, result >> 16) + 1;
+ *cnt = min(*cnt, nr_ioqs);
+ if (*cnt == 0) {
+ dev_err(hdev->dev, "Illegal queue count: zero\n");
+ return -EIO;
+ }
+ return 0;
+}
+
+static int spraid_setup_io_queues(struct spraid_dev *hdev)
+{
+ struct spraid_queue *adminq = &hdev->queues[0];
+ struct pci_dev *pdev = hdev->pdev;
+ u32 nr_ioqs = num_online_cpus();
+ u32 i, size;
+ int ret;
+
+ struct irq_affinity affd = {
+ .pre_vectors = 1
+ };
+
+ ret = spraid_set_queue_cnt(hdev, &nr_ioqs);
+ if (ret < 0)
+ return ret;
+
+ size = spraid_bar_size(hdev, nr_ioqs);
+ ret = spraid_remap_bar(hdev, size);
+ if (ret)
+ return -ENOMEM;
+
+ adminq->q_db = hdev->dbs;
+
+ pci_free_irq(pdev, 0, adminq);
+ pci_free_irq_vectors(pdev);
+
+ ret = pci_alloc_irq_vectors_affinity(pdev, 1, (nr_ioqs + 1),
+ PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
+ if (ret <= 0)
+ return -EIO;
+
+ hdev->num_vecs = ret;
+
+ hdev->max_qid = max(ret - 1, 1);
+
+ ret = pci_request_irq(pdev, adminq->cq_vector, spraid_irq, NULL,
+ adminq, "spraid%d_q%d", hdev->instance, adminq->qid);
+ if (ret) {
+ dev_err(hdev->dev, "Request admin irq failed\n");
+ adminq->cq_vector = -1;
+ return ret;
+ }
+
+ for (i = hdev->queue_count; i <= hdev->max_qid; i++) {
+ ret = spraid_alloc_queue(hdev, i, hdev->ioq_depth);
+ if (ret)
+ break;
+ }
+ dev_info(hdev->dev, "[%s] max_qid: %d, queue_count: %d, online_queue: %d, ioq_depth: %d\n",
+ __func__, hdev->max_qid, hdev->queue_count,
+ hdev->online_queues, hdev->ioq_depth);
+
+ return spraid_create_io_queues(hdev);
+}
+
+static void spraid_delete_io_queues(struct spraid_dev *hdev)
+{
+ u16 queues = hdev->online_queues - 1;
+ u8 opcode = SPRAID_ADMIN_DELETE_SQ;
+ u16 i, pass;
+
+ if (!pci_device_is_present(hdev->pdev)) {
+ dev_err(hdev->dev, "pci_device is not present, skip disable io queues\n");
+ return;
+ }
+
+ if (hdev->online_queues < 2) {
+ dev_err(hdev->dev, "[%s] err, io queue has been delete\n", __func__);
+ return;
+ }
+
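+	/* Two passes: delete all submission queues first, then the matching completion queues */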
+ for (pass = 0; pass < 2; pass++) {
+ for (i = queues; i > 0; i--)
+ if (spraid_delete_queue(hdev, opcode, i))
+ break;
+
+ opcode = SPRAID_ADMIN_DELETE_CQ;
+ }
+}
+
+static void spraid_remove_io_queues(struct spraid_dev *hdev)
+{
+ spraid_delete_io_queues(hdev);
+ spraid_free_io_queues(hdev);
+}
+
+static void spraid_pci_disable(struct spraid_dev *hdev)
+{
+ struct pci_dev *pdev = hdev->pdev;
+ u32 i;
+
+ for (i = 0; i < hdev->online_queues; i++)
+ pci_free_irq(pdev, hdev->queues[i].cq_vector, &hdev->queues[i]);
+ pci_free_irq_vectors(pdev);
+ if (pci_is_enabled(pdev)) {
+ pci_disable_pcie_error_reporting(pdev);
+ pci_disable_device(pdev);
+ }
+ hdev->online_queues = 0;
+}
+
+static void spraid_disable_admin_queue(struct spraid_dev *hdev, bool shutdown)
+{
+ struct spraid_queue *adminq = &hdev->queues[0];
+ u16 start, end;
+
+ if (pci_device_is_present(hdev->pdev)) {
+ if (shutdown)
+ spraid_shutdown_ctrl(hdev);
+ else
+ spraid_disable_ctrl(hdev);
+ }
+
+ if (hdev->queue_count == 0) {
+ dev_err(hdev->dev, "[%s] err, admin queue has been delete\n", __func__);
+ return;
+ }
+
+ spin_lock_irq(&adminq->cq_lock);
+ spraid_process_cq(adminq, &start, &end, -1);
+ spin_unlock_irq(&adminq->cq_lock);
+
+ spraid_complete_cqes(adminq, start, end);
+ spraid_free_admin_queue(hdev);
+}
+
+static int spraid_create_dma_pools(struct spraid_dev *hdev)
+{
+ int i;
+ char poolname[20] = { 0 };
+
+ hdev->prp_page_pool = dma_pool_create("prp list page", hdev->dev,
+ PAGE_SIZE, PAGE_SIZE, 0);
+
+ if (!hdev->prp_page_pool) {
+ dev_err(hdev->dev, "create prp_page_pool failed\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < small_pool_num; i++) {
+ sprintf(poolname, "prp_list_256_%d", i);
+ hdev->prp_small_pool[i] = dma_pool_create(poolname, hdev->dev, SMALL_POOL_SIZE,
+ SMALL_POOL_SIZE, 0);
+
+ if (!hdev->prp_small_pool[i]) {
+ dev_err(hdev->dev, "create prp_small_pool %d failed\n", i);
+ goto destroy_prp_small_pool;
+ }
+ }
+
+ return 0;
+
+destroy_prp_small_pool:
+ while (i > 0)
+ dma_pool_destroy(hdev->prp_small_pool[--i]);
+ dma_pool_destroy(hdev->prp_page_pool);
+
+ return -ENOMEM;
+}
+
+static void spraid_destroy_dma_pools(struct spraid_dev *hdev)
+{
+ int i;
+
+ for (i = 0; i < small_pool_num; i++)
+ dma_pool_destroy(hdev->prp_small_pool[i]);
+ dma_pool_destroy(hdev->prp_page_pool);
+}
+
+static int spraid_get_dev_list(struct spraid_dev *hdev, struct spraid_dev_info *devices)
+{
+ u32 nd = le32_to_cpu(hdev->ctrl_info->nd);
+ struct spraid_admin_command admin_cmd;
+ struct spraid_dev_list *list_buf;
+ u32 i, idx, hdid, ndev;
+ int ret = 0;
+
+ list_buf = kmalloc(sizeof(*list_buf), GFP_KERNEL);
+ if (!list_buf)
+ return -ENOMEM;
+
+ for (idx = 0; idx < nd;) {
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.get_info.opcode = SPRAID_ADMIN_GET_INFO;
+ admin_cmd.get_info.type = SPRAID_GET_INFO_DEV_LIST;
+ admin_cmd.get_info.cdw11 = cpu_to_le32(idx);
+
+ ret = spraid_submit_admin_sync_cmd(hdev->admin_q, &admin_cmd, NULL, list_buf,
+ sizeof(*list_buf), 0, 0, 0);
+
+ if (ret) {
+ dev_err(hdev->dev, "Get device list failed, nd: %u, idx: %u, ret: %d\n",
+ nd, idx, ret);
+ goto out;
+ }
+ ndev = le32_to_cpu(list_buf->dev_num);
+
+ dev_info(hdev->dev, "ndev numbers: %u\n", ndev);
+
+ for (i = 0; i < ndev; i++) {
+ hdid = le32_to_cpu(list_buf->devices[i].hdid);
+ dev_info(hdev->dev, "list_buf->devices[%d], hdid: %u target: %d, channel: %d, lun: %d, attr[%x]\n",
+ i, hdid,
+ le16_to_cpu(list_buf->devices[i].target),
+ list_buf->devices[i].channel,
+ list_buf->devices[i].lun,
+ list_buf->devices[i].attr);
+ if (hdid > nd || hdid == 0) {
+ dev_err(hdev->dev, "err, hdid[%d] invalid\n", hdid);
+ continue;
+ }
+ memcpy(&devices[hdid - 1], &list_buf->devices[i],
+ sizeof(struct spraid_dev_info));
+ }
+ idx += ndev;
+
+ if (idx < MAX_DEV_ENTRY_PER_PAGE_4K)
+ break;
+ }
+
+out:
+ kfree(list_buf);
+ return ret;
+}
+
+static void spraid_send_aen(struct spraid_dev *hdev)
+{
+ struct spraid_queue *adminq = &hdev->queues[0];
+ struct spraid_admin_command admin_cmd;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.common.opcode = SPRAID_ADMIN_ASYNC_EVENT;
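+	/* Use the first cid outside the admin blk-mq tag range so the completion is routed to spraid_complete_aen() */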
+ admin_cmd.common.command_id = SPRAID_AQ_BLK_MQ_DEPTH;
+
+ spraid_submit_cmd(adminq, &admin_cmd);
+ dev_info(hdev->dev, "send aen, cid[%d]\n", SPRAID_AQ_BLK_MQ_DEPTH);
+}
+
+static int spraid_add_device(struct spraid_dev *hdev, struct spraid_dev_info *device)
+{
+ struct Scsi_Host *shost = hdev->shost;
+ struct scsi_device *sdev;
+
+ sdev = scsi_device_lookup(shost, device->channel, le16_to_cpu(device->target), 0);
+ if (sdev) {
+ dev_warn(hdev->dev, "Device is already exist, channel: %d, target_id: %d, lun: %d\n",
+ device->channel, le16_to_cpu(device->target), 0);
+ scsi_device_put(sdev);
+ return -EEXIST;
+ }
+ scsi_add_device(shost, device->channel, le16_to_cpu(device->target), 0);
+ return 0;
+}
+
+static int spraid_rescan_device(struct spraid_dev *hdev, struct spraid_dev_info *device)
+{
+ struct Scsi_Host *shost = hdev->shost;
+ struct scsi_device *sdev;
+
+ sdev = scsi_device_lookup(shost, device->channel, le16_to_cpu(device->target), 0);
+ if (!sdev) {
+ dev_warn(hdev->dev, "Device is not exit, channel: %d, target_id: %d, lun: %d\n",
+ device->channel, le16_to_cpu(device->target), 0);
+ return -ENODEV;
+ }
+
+ scsi_rescan_device(&sdev->sdev_gendev);
+ scsi_device_put(sdev);
+ return 0;
+}
+
+static int spraid_remove_device(struct spraid_dev *hdev, struct spraid_dev_info *org_device)
+{
+ struct Scsi_Host *shost = hdev->shost;
+ struct scsi_device *sdev;
+
+ sdev = scsi_device_lookup(shost, org_device->channel, le16_to_cpu(org_device->target), 0);
+ if (!sdev) {
+ dev_warn(hdev->dev, "Device is not exit, channel: %d, target_id: %d, lun: %d\n",
+ org_device->channel, le16_to_cpu(org_device->target), 0);
+ return -ENODEV;
+ }
+
+ scsi_remove_device(sdev);
+ scsi_device_put(sdev);
+ return 0;
+}
+
+static int spraid_dev_list_init(struct spraid_dev *hdev)
+{
+ u32 nd = le32_to_cpu(hdev->ctrl_info->nd);
+ int i, ret;
+
+ hdev->devices = kzalloc_node(nd * sizeof(struct spraid_dev_info),
+ GFP_KERNEL, hdev->numa_node);
+ if (!hdev->devices)
+ return -ENOMEM;
+
+ ret = spraid_get_dev_list(hdev, hdev->devices);
+ if (ret) {
+ dev_err(hdev->dev, "Ignore failure of getting device list within initialization\n");
+ return 0;
+ }
+
+ for (i = 0; i < nd; i++) {
+ if (SPRAID_DEV_INFO_FLAG_VALID(hdev->devices[i].flag) &&
+ SPRAID_DEV_INFO_ATTR_BOOT(hdev->devices[i].attr)) {
+ spraid_add_device(hdev, &hdev->devices[i]);
+ break;
+ }
+ }
+ return 0;
+}
+
+static void spraid_scan_work(struct work_struct *work)
+{
+ struct spraid_dev *hdev =
+ container_of(work, struct spraid_dev, scan_work);
+ struct spraid_dev_info *devices, *org_devices;
+ u32 nd = le32_to_cpu(hdev->ctrl_info->nd);
+ u8 flag, org_flag;
+ int i, ret;
+
+ devices = kcalloc(nd, sizeof(struct spraid_dev_info), GFP_KERNEL);
+ if (!devices)
+ return;
+ ret = spraid_get_dev_list(hdev, devices);
+ if (ret)
+ goto free_list;
+ org_devices = hdev->devices;
+ for (i = 0; i < nd; i++) {
+ org_flag = org_devices[i].flag;
+ flag = devices[i].flag;
+
+ dev_log_dbg(hdev->dev, "i: %d, org_flag: 0x%x, flag: 0x%x\n",
+ i, org_flag, flag);
+
+ if (SPRAID_DEV_INFO_FLAG_VALID(flag)) {
+ if (!SPRAID_DEV_INFO_FLAG_VALID(org_flag)) {
+ down_write(&hdev->devices_rwsem);
+ memcpy(&org_devices[i], &devices[i],
+ sizeof(struct spraid_dev_info));
+ up_write(&hdev->devices_rwsem);
+ spraid_add_device(hdev, &devices[i]);
+ } else if (SPRAID_DEV_INFO_FLAG_CHANGE(flag)) {
+ spraid_rescan_device(hdev, &devices[i]);
+ }
+ } else {
+ if (SPRAID_DEV_INFO_FLAG_VALID(org_flag)) {
+ down_write(&hdev->devices_rwsem);
+ org_devices[i].flag &= 0xfe;
+ up_write(&hdev->devices_rwsem);
+ spraid_remove_device(hdev, &org_devices[i]);
+ }
+ }
+ }
+free_list:
+ kfree(devices);
+}
+
+static void spraid_timesyn_work(struct work_struct *work)
+{
+ struct spraid_dev *hdev =
+ container_of(work, struct spraid_dev, timesyn_work);
+
+ spraid_configure_timestamp(hdev);
+}
+
+static void spraid_queue_scan(struct spraid_dev *hdev)
+{
+ queue_work(spraid_wq, &hdev->scan_work);
+}
+
+static void spraid_handle_aen_notice(struct spraid_dev *hdev, u32 result)
+{
+ switch ((result & 0xff00) >> 8) {
+ case SPRAID_AEN_DEV_CHANGED:
+ spraid_queue_scan(hdev);
+ break;
+ case SPRAID_AEN_HOST_PROBING:
+ break;
+ default:
+ dev_warn(hdev->dev, "async event result %08x\n", result);
+ }
+}
+
+static void spraid_handle_aen_vs(struct spraid_dev *hdev, u32 result)
+{
+ switch (result) {
+ case SPRAID_AEN_TIMESYN:
+ queue_work(spraid_wq, &hdev->timesyn_work);
+ break;
+ default:
+ dev_warn(hdev->dev, "async event result: %x\n", result);
+ }
+}
+
+static void spraid_async_event_work(struct work_struct *work)
+{
+ struct spraid_dev *hdev =
+ container_of(work, struct spraid_dev, aen_work);
+
+ spraid_send_aen(hdev);
+}
+
+static int spraid_alloc_resources(struct spraid_dev *hdev)
+{
+ int ret, nqueue;
+
+ ret = ida_alloc(&spraid_instance_ida, GFP_KERNEL);
+ if (ret < 0) {
+ dev_err(hdev->dev, "Get instance id failed\n");
+ return ret;
+ }
+ hdev->instance = ret;
+
+ hdev->ctrl_info = kzalloc_node(sizeof(*hdev->ctrl_info),
+ GFP_KERNEL, hdev->numa_node);
+ if (!hdev->ctrl_info) {
+ ret = -ENOMEM;
+ goto release_instance;
+ }
+
+ ret = spraid_create_dma_pools(hdev);
+ if (ret)
+ goto free_ctrl_info;
+ nqueue = num_possible_cpus() + 1;
+ hdev->queues = kcalloc_node(nqueue, sizeof(struct spraid_queue),
+ GFP_KERNEL, hdev->numa_node);
+ if (!hdev->queues) {
+ ret = -ENOMEM;
+ goto destroy_dma_pools;
+ }
+
+ dev_info(hdev->dev, "[%s] queues num: %d\n", __func__, nqueue);
+
+ return 0;
+
+destroy_dma_pools:
+ spraid_destroy_dma_pools(hdev);
+free_ctrl_info:
+ kfree(hdev->ctrl_info);
+release_instance:
+ ida_free(&spraid_instance_ida, hdev->instance);
+ return ret;
+}
+
+static void spraid_free_resources(struct spraid_dev *hdev)
+{
+ kfree(hdev->queues);
+ spraid_destroy_dma_pools(hdev);
+ kfree(hdev->ctrl_info);
+ ida_free(&spraid_instance_ida, hdev->instance);
+}
+
+static void spraid_setup_passthrough(struct request *req, struct spraid_admin_command *cmd)
+{
+ memcpy(cmd, spraid_admin_req(req)->cmd, sizeof(*cmd));
+ cmd->common.flags &= ~SPRAID_CMD_FLAG_SGL_ALL;
+}
+
+static inline void spraid_clear_hreq(struct request *req)
+{
+ if (!(req->rq_flags & RQF_DONTPREP)) {
+ spraid_admin_req(req)->flags = 0;
+ req->rq_flags |= RQF_DONTPREP;
+ }
+}
+
+static blk_status_t spraid_setup_admin_cmd(struct request *req, struct spraid_admin_command *cmd)
+{
+ spraid_clear_hreq(req);
+
+ memset(cmd, 0, sizeof(*cmd));
+ switch (req_op(req)) {
+ case REQ_OP_DRV_IN:
+ case REQ_OP_DRV_OUT:
+ spraid_setup_passthrough(req, cmd);
+ break;
+ default:
+ WARN_ON_ONCE(1);
+ return BLK_STS_IOERR;
+ }
+
+ cmd->common.command_id = req->tag;
+ return BLK_STS_OK;
+}
+
+static void spraid_unmap_data(struct spraid_dev *hdev, struct request *req)
+{
+ struct spraid_iod *iod = blk_mq_rq_to_pdu(req);
+ enum dma_data_direction dma_dir = rq_data_dir(req) ?
+ DMA_TO_DEVICE : DMA_FROM_DEVICE;
+
+ if (iod->nsge)
+ dma_unmap_sg(hdev->dev, iod->sg, iod->nsge, dma_dir);
+
+ spraid_free_iod_res(hdev, iod);
+}
+
+static blk_status_t spraid_admin_map_data(struct spraid_dev *hdev, struct request *req,
+ struct spraid_admin_command *cmd)
+{
+ struct spraid_iod *iod = blk_mq_rq_to_pdu(req);
+ struct request_queue *admin_q = req->q;
+ enum dma_data_direction dma_dir = rq_data_dir(req) ?
+ DMA_TO_DEVICE : DMA_FROM_DEVICE;
+ blk_status_t ret = BLK_STS_IOERR;
+ int nr_mapped;
+ int res;
+
+ sg_init_table(iod->sg, blk_rq_nr_phys_segments(req));
+ iod->nsge = blk_rq_map_sg(admin_q, req, iod->sg);
+ if (!iod->nsge)
+ goto out;
+
+ dev_info(hdev->dev, "nseg: %u, nsge: %u\n",
+ blk_rq_nr_phys_segments(req), iod->nsge);
+
+ ret = BLK_STS_RESOURCE;
+ nr_mapped = dma_map_sg_attrs(hdev->dev, iod->sg, iod->nsge, dma_dir, DMA_ATTR_NO_WARN);
+ if (!nr_mapped)
+ goto out;
+
+ res = spraid_setup_prps(hdev, iod);
+ if (res)
+ goto unmap;
+ cmd->common.dptr.prp1 = cpu_to_le64(sg_dma_address(iod->sg));
+ cmd->common.dptr.prp2 = cpu_to_le64(iod->first_dma);
+ return BLK_STS_OK;
+
+unmap:
+ dma_unmap_sg(hdev->dev, iod->sg, iod->nsge, dma_dir);
+out:
+ return ret;
+}
+
+static blk_status_t spraid_init_admin_iod(struct request *rq, struct spraid_dev *hdev)
+{
+ struct spraid_iod *iod = blk_mq_rq_to_pdu(rq);
+ int nents = blk_rq_nr_phys_segments(rq);
+ unsigned int size = blk_rq_payload_bytes(rq);
+
+ if (nents > SPRAID_INT_PAGES || size > SPRAID_INT_BYTES(hdev)) {
+ iod->sg = mempool_alloc(hdev->iod_mempool, GFP_ATOMIC);
+ if (!iod->sg)
+ return BLK_STS_RESOURCE;
+ } else {
+ iod->sg = iod->inline_sg;
+ }
+
+ iod->nsge = 0;
+ iod->use_sgl = false;
+ iod->npages = -1;
+ iod->length = size;
+ iod->sg_drv_mgmt = true;
+
+ return BLK_STS_OK;
+}
+
+static blk_status_t spraid_queue_admin_rq(struct blk_mq_hw_ctx *hctx,
+ const struct blk_mq_queue_data *bd)
+{
+ struct spraid_queue *adminq = hctx->driver_data;
+ struct spraid_dev *hdev = adminq->hdev;
+ struct request *req = bd->rq;
+ struct spraid_iod *iod = blk_mq_rq_to_pdu(req);
+ struct spraid_admin_command cmd;
+ blk_status_t ret;
+
+ ret = spraid_setup_admin_cmd(req, &cmd);
+ if (ret)
+ goto out;
+
+ ret = spraid_init_admin_iod(req, hdev);
+ if (ret)
+ goto out;
+
+ if (blk_rq_nr_phys_segments(req)) {
+ ret = spraid_admin_map_data(hdev, req, &cmd);
+ if (ret)
+ goto cleanup_iod;
+ }
+
+ blk_mq_start_request(req);
+ spraid_submit_cmd(adminq, &cmd);
+ return BLK_STS_OK;
+
+cleanup_iod:
+ spraid_free_iod_res(hdev, iod);
+out:
+ return ret;
+}
+
+static blk_status_t spraid_error_status(struct request *req)
+{
+ switch (spraid_admin_req(req)->status & 0x7ff) {
+ case SPRAID_SC_SUCCESS:
+ return BLK_STS_OK;
+ default:
+ return BLK_STS_IOERR;
+ }
+}
+
+static void spraid_complete_admin_rq(struct request *req)
+{
+ struct spraid_iod *iod = blk_mq_rq_to_pdu(req);
+ struct spraid_dev *hdev = iod->spraidq->hdev;
+
+ if (blk_rq_nr_phys_segments(req))
+ spraid_unmap_data(hdev, req);
+ blk_mq_end_request(req, spraid_error_status(req));
+}
+
+static int spraid_admin_init_hctx(struct blk_mq_hw_ctx *hctx, void *data, unsigned int hctx_idx)
+{
+ struct spraid_dev *hdev = data;
+ struct spraid_queue *adminq = &hdev->queues[0];
+
+ WARN_ON(hctx_idx != 0);
+ WARN_ON(hdev->admin_tagset.tags[0] != hctx->tags);
+
+ hctx->driver_data = adminq;
+ return 0;
+}
+
+static int spraid_admin_init_request(struct blk_mq_tag_set *set, struct request *req,
+ unsigned int hctx_idx, unsigned int numa_node)
+{
+ struct spraid_dev *hdev = set->driver_data;
+ struct spraid_iod *iod = blk_mq_rq_to_pdu(req);
+ struct spraid_queue *adminq = &hdev->queues[0];
+
+ WARN_ON(!adminq);
+ iod->spraidq = adminq;
+ return 0;
+}
+
+static enum blk_eh_timer_return
+spraid_admin_timeout(struct request *req, bool reserved)
+{
+ struct spraid_iod *iod = blk_mq_rq_to_pdu(req);
+ struct spraid_queue *spraidq = iod->spraidq;
+ struct spraid_dev *hdev = spraidq->hdev;
+
+ dev_err(hdev->dev, "Admin cid[%d] qid[%d] timeout\n",
+ req->tag, spraidq->qid);
+
+ if (spraid_poll_cq(spraidq, req->tag)) {
+ dev_warn(hdev->dev, "cid[%d] qid[%d] timeout, completion polled\n",
+ req->tag, spraidq->qid);
+ return BLK_EH_DONE;
+ }
+
+ spraid_end_admin_request(req, cpu_to_le16(-EINVAL), 0, 0);
+ return BLK_EH_DONE;
+}
+
+static int spraid_get_ctrl_info(struct spraid_dev *hdev, struct spraid_ctrl_info *ctrl_info)
+{
+ struct spraid_admin_command admin_cmd;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.get_info.opcode = SPRAID_ADMIN_GET_INFO;
+ admin_cmd.get_info.type = SPRAID_GET_INFO_CTRL;
+
+ return spraid_submit_admin_sync_cmd(hdev->admin_q, &admin_cmd, NULL,
+ ctrl_info, sizeof(struct spraid_ctrl_info), 0, 0, 0);
+}
+
+static int spraid_init_ctrl_info(struct spraid_dev *hdev)
+{
+ int ret;
+
+ hdev->ctrl_info->nd = cpu_to_le32(240);
+ hdev->ctrl_info->mdts = 8;
+ hdev->ctrl_info->max_cmds = cpu_to_le16(4096);
+ hdev->ctrl_info->max_num_sge = cpu_to_le16(128);
+ hdev->ctrl_info->max_channel = cpu_to_le16(4);
+ hdev->ctrl_info->max_tgt_id = cpu_to_le32(3239);
+ hdev->ctrl_info->max_lun = cpu_to_le16(2);
+
+ ret = spraid_get_ctrl_info(hdev, hdev->ctrl_info);
+ if (ret)
+ dev_err(hdev->dev, "get controller info failed: %d\n", ret);
+
+ dev_info(hdev->dev, "[%s]nd = %d\n", __func__, hdev->ctrl_info->nd);
+ dev_info(hdev->dev, "[%s]max_cmd = %d\n", __func__, hdev->ctrl_info->max_cmds);
+ dev_info(hdev->dev, "[%s]max_channel = %d\n", __func__, hdev->ctrl_info->max_channel);
+ dev_info(hdev->dev, "[%s]max_tgt_id = %d\n", __func__, hdev->ctrl_info->max_tgt_id);
+ dev_info(hdev->dev, "[%s]max_lun = %d\n", __func__, hdev->ctrl_info->max_lun);
+ dev_info(hdev->dev, "[%s]max_num_sge = %d\n", __func__, hdev->ctrl_info->max_num_sge);
+ dev_info(hdev->dev, "[%s]lun_num_boot = %d\n", __func__, hdev->ctrl_info->lun_num_in_boot);
+ dev_info(hdev->dev, "[%s]mdts = %d\n", __func__, hdev->ctrl_info->mdts);
+ dev_info(hdev->dev, "[%s]acl = %d\n", __func__, hdev->ctrl_info->acl);
+ dev_info(hdev->dev, "[%s]aer1 = %d\n", __func__, hdev->ctrl_info->aerl);
+ dev_info(hdev->dev, "[%s]card_type = %d\n", __func__, hdev->ctrl_info->card_type);
+ dev_info(hdev->dev, "[%s]rtd3e = %d\n", __func__, hdev->ctrl_info->rtd3e);
+ dev_info(hdev->dev, "[%s]sn = %s\n", __func__, hdev->ctrl_info->sn);
+ dev_info(hdev->dev, "[%s]fr = %s\n", __func__, hdev->ctrl_info->fr);
+
+ return 0;
+}
+
+#define SPRAID_MAX_ADMIN_PAYLOAD_SIZE BIT(16)
+static int spraid_alloc_iod_ext_mem_pool(struct spraid_dev *hdev)
+{
+ u16 max_sge = le16_to_cpu(hdev->ctrl_info->max_num_sge);
+ size_t alloc_size;
+
+ alloc_size = spraid_iod_ext_size(hdev, SPRAID_MAX_ADMIN_PAYLOAD_SIZE,
+ max_sge, true, false);
+ if (alloc_size > PAGE_SIZE)
+ dev_warn(hdev->dev, "It is unreasonable for sg allocation more than one page\n");
+ hdev->iod_mempool = mempool_create_node(1, mempool_kmalloc, mempool_kfree,
+ (void *)alloc_size, GFP_KERNEL, hdev->numa_node);
+ if (!hdev->iod_mempool) {
+ dev_err(hdev->dev, "Create iod extension memory pool failed\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void spraid_free_iod_ext_mem_pool(struct spraid_dev *hdev)
+{
+ mempool_destroy(hdev->iod_mempool);
+}
+
+static int spraid_submit_user_cmd(struct request_queue *q, struct spraid_admin_command *cmd,
+ void __user *ubuffer, unsigned int bufflen, u32 *result,
+ unsigned int timeout)
+{
+ struct request *req;
+ struct bio *bio = NULL;
+ int ret;
+
+ req = spraid_alloc_admin_request(q, cmd, 0);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ req->timeout = timeout ? timeout : ADMIN_TIMEOUT;
+ spraid_admin_req(req)->flags |= SPRAID_REQ_USERCMD;
+
+ if (ubuffer && bufflen) {
+ ret = blk_rq_map_user(q, req, NULL, ubuffer, bufflen, GFP_KERNEL);
+ if (ret)
+ goto out;
+ bio = req->bio;
+ }
+ blk_execute_rq(req->q, NULL, req, 0);
+ if (spraid_admin_req(req)->flags & SPRAID_REQ_CANCELLED)
+ ret = -EINTR;
+ else
+ ret = spraid_admin_req(req)->status;
+ if (result) {
+ result[0] = spraid_admin_req(req)->result0;
+ result[1] = spraid_admin_req(req)->result1;
+ }
+ if (bio)
+ blk_rq_unmap_user(bio);
+out:
+ blk_mq_free_request(req);
+ return ret;
+}
+
+static int spraid_user_admin_cmd(struct spraid_dev *hdev,
+ struct spraid_passthru_common_cmd __user *ucmd)
+{
+ struct spraid_passthru_common_cmd cmd;
+ struct spraid_admin_command admin_cmd;
+ u32 timeout = 0;
+ int status;
+
+ if (!capable(CAP_SYS_ADMIN)) {
+ dev_err(hdev->dev, "Current user hasn't administrator right, reject service\n");
+ return -EACCES;
+ }
+
+ if (copy_from_user(&cmd, ucmd, sizeof(cmd))) {
+ dev_err(hdev->dev, "Copy command from user space to kernel space failed\n");
+ return -EFAULT;
+ }
+
+ if (cmd.flags) {
+ dev_err(hdev->dev, "Invalid flags in user command\n");
+ return -EINVAL;
+ }
+
+ dev_info(hdev->dev, "user_admin_cmd opcode: 0x%x, subopcode: 0x%x\n",
+ cmd.opcode, cmd.cdw2 & 0x7ff);
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.common.opcode = cmd.opcode;
+ admin_cmd.common.flags = cmd.flags;
+ admin_cmd.common.hdid = cpu_to_le32(cmd.nsid);
+ admin_cmd.common.cdw2[0] = cpu_to_le32(cmd.cdw2);
+ admin_cmd.common.cdw2[1] = cpu_to_le32(cmd.cdw3);
+ admin_cmd.common.cdw10 = cpu_to_le32(cmd.cdw10);
+ admin_cmd.common.cdw11 = cpu_to_le32(cmd.cdw11);
+ admin_cmd.common.cdw12 = cpu_to_le32(cmd.cdw12);
+ admin_cmd.common.cdw13 = cpu_to_le32(cmd.cdw13);
+ admin_cmd.common.cdw14 = cpu_to_le32(cmd.cdw14);
+ admin_cmd.common.cdw15 = cpu_to_le32(cmd.cdw15);
+
+ if (cmd.timeout_ms)
+ timeout = msecs_to_jiffies(cmd.timeout_ms);
+
+ status = spraid_submit_user_cmd(hdev->admin_q, &admin_cmd,
+ (void __user *)(uintptr_t)cmd.addr, cmd.info_1.data_len,
+ &cmd.result0, timeout);
+
+ dev_info(hdev->dev, "user_admin_cmd status: 0x%x, result0: 0x%x, result1: 0x%x\n",
+ status, cmd.result0, cmd.result1);
+
+ if (status >= 0) {
+ if (put_user(cmd.result0, &ucmd->result0))
+ return -EFAULT;
+ if (put_user(cmd.result1, &ucmd->result1))
+ return -EFAULT;
+ }
+
+ return status;
+}
+
+static int spraid_alloc_ioq_ptcmds(struct spraid_dev *hdev)
+{
+ int i;
+ int ptnum = SPRAID_NR_IOQ_PTCMDS;
+
+ INIT_LIST_HEAD(&hdev->ioq_pt_list);
+ spin_lock_init(&hdev->ioq_pt_lock);
+
+ hdev->ioq_ptcmds = kcalloc_node(ptnum, sizeof(struct spraid_ioq_ptcmd),
+ GFP_KERNEL, hdev->numa_node);
+
+ if (!hdev->ioq_ptcmds) {
+ dev_err(hdev->dev, "Alloc ioq_ptcmds failed\n");
+ return -ENOMEM;
+ }
+
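+	/* Spread passthrough commands across the IO queues; cids start at SPRAID_IO_BLK_MQ_DEPTH so they never collide with regular blk-mq tags */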
+ for (i = 0; i < ptnum; i++) {
+ hdev->ioq_ptcmds[i].qid = i / SPRAID_PTCMDS_PERQ + 1;
+ hdev->ioq_ptcmds[i].cid = i % SPRAID_PTCMDS_PERQ + SPRAID_IO_BLK_MQ_DEPTH;
+ list_add_tail(&(hdev->ioq_ptcmds[i].list), &hdev->ioq_pt_list);
+ }
+
+ dev_info(hdev->dev, "Alloc ioq_ptcmds success, ptnum[%d]\n", ptnum);
+
+ return 0;
+}
+
+static struct spraid_ioq_ptcmd *spraid_get_ioq_ptcmd(struct spraid_dev *hdev)
+{
+ struct spraid_ioq_ptcmd *cmd = NULL;
+ unsigned long flags;
+
+ spin_lock_irqsave(&hdev->ioq_pt_lock, flags);
+ if (list_empty(&hdev->ioq_pt_list)) {
+ spin_unlock_irqrestore(&hdev->ioq_pt_lock, flags);
+ dev_err(hdev->dev, "err, ioq ptcmd list empty\n");
+ return NULL;
+ }
+ cmd = list_entry((&hdev->ioq_pt_list)->next, struct spraid_ioq_ptcmd, list);
+ list_del_init(&cmd->list);
+ spin_unlock_irqrestore(&hdev->ioq_pt_lock, flags);
+
+ WRITE_ONCE(cmd->state, SPRAID_CMD_IDLE);
+
+ return cmd;
+}
+
+static void spraid_put_ioq_ptcmd(struct spraid_dev *hdev, struct spraid_ioq_ptcmd *cmd)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&hdev->ioq_pt_lock, flags);
+ list_add(&cmd->list, (&hdev->ioq_pt_list)->next);
+ spin_unlock_irqrestore(&hdev->ioq_pt_lock, flags);
+}
+
+static int spraid_submit_ioq_sync_cmd(struct spraid_dev *hdev, struct spraid_ioq_command *cmd,
+ u32 *result, void **sense, u32 timeout)
+{
+ struct spraid_queue *ioq;
+ int ret;
+ dma_addr_t sense_dma;
+ struct spraid_ioq_ptcmd *pt_cmd = spraid_get_ioq_ptcmd(hdev);
+
+ *sense = NULL;
+
+ if (!pt_cmd)
+ return -EFAULT;
+
+ dev_info(hdev->dev, "[%s] ptcmd, cid[%d], qid[%d]\n", __func__, pt_cmd->cid, pt_cmd->qid);
+
+ init_completion(&pt_cmd->cmd_done);
+
+ ioq = &hdev->queues[pt_cmd->qid];
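+	/* Each cid owns a fixed SCSI_SENSE_BUFFERSIZE slot inside the queue's preallocated sense buffer */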
+ ret = pt_cmd->cid * SCSI_SENSE_BUFFERSIZE;
+ pt_cmd->priv = ioq->sense + ret;
+ sense_dma = ioq->sense_dma_addr + ret;
+
+ cmd->common.sense_addr = cpu_to_le64(sense_dma);
+ cmd->common.sense_len = cpu_to_le16(SCSI_SENSE_BUFFERSIZE);
+ cmd->common.command_id = pt_cmd->cid;
+
+ spraid_submit_cmd(ioq, cmd);
+
+ if (!wait_for_completion_timeout(&pt_cmd->cmd_done, timeout)) {
+ dev_err(hdev->dev, "[%s] cid[%d], qid[%d] timeout\n",
+ __func__, pt_cmd->cid, pt_cmd->qid);
+ WRITE_ONCE(pt_cmd->state, SPRAID_CMD_TIMEOUT);
+ return -EINVAL;
+ }
+
+ if (result) {
+ result[0] = pt_cmd->result0;
+ result[1] = pt_cmd->result1;
+ }
+
+ if ((pt_cmd->status & 0x17f) == 0x101)
+ *sense = pt_cmd->priv;
+
+ return pt_cmd->status;
+}
+
+static int spraid_user_ioq_cmd(struct spraid_dev *hdev,
+ struct spraid_ioq_passthru_cmd __user *ucmd)
+{
+ struct spraid_ioq_passthru_cmd cmd;
+ struct spraid_ioq_command ioq_cmd;
+ u32 timeout = 0;
+ int status = 0;
+ u8 *data_ptr = NULL;
+ dma_addr_t data_dma;
+ enum dma_data_direction dma_dir = DMA_NONE;
+ void *sense = NULL;
+
+ if (!capable(CAP_SYS_ADMIN)) {
+ dev_err(hdev->dev, "Current user hasn't administrator right, reject service\n");
+ return -EACCES;
+ }
+
+ if (copy_from_user(&cmd, ucmd, sizeof(cmd))) {
+ dev_err(hdev->dev, "Copy command from user space to kernel space failed\n");
+ return -EFAULT;
+ }
+
+ if (cmd.data_len > PAGE_SIZE) {
+ dev_err(hdev->dev, "[%s] data len bigger than 4k\n", __func__);
+ return -EFAULT;
+ }
+
+ dev_info(hdev->dev, "[%s] opcode: 0x%x, subopcode: 0x%x, datalen: %d\n",
+ __func__, cmd.opcode, cmd.info_1.subopcode, cmd.data_len);
+
+ if (cmd.addr && cmd.data_len) {
+ data_ptr = dma_alloc_coherent(hdev->dev, PAGE_SIZE, &data_dma, GFP_KERNEL);
+ if (!data_ptr)
+ return -ENOMEM;
+
+ dma_dir = (cmd.opcode & 1) ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+ }
+
+ if (dma_dir == DMA_TO_DEVICE) {
+ if (copy_from_user(data_ptr, (void __user *)(uintptr_t)cmd.addr, cmd.data_len)) {
+ dev_err(hdev->dev, "[%s] copy user data failed\n", __func__);
+ status = -EFAULT;
+ goto free_dma_mem;
+ }
+ }
+
+ memset(&ioq_cmd, 0, sizeof(ioq_cmd));
+ ioq_cmd.common.opcode = cmd.opcode;
+ ioq_cmd.common.flags = cmd.flags;
+ ioq_cmd.common.hdid = cpu_to_le32(cmd.nsid);
+ ioq_cmd.common.sense_len = cpu_to_le16(cmd.info_0.res_sense_len);
+ ioq_cmd.common.cdb_len = cmd.info_0.cdb_len;
+ ioq_cmd.common.rsvd2 = cmd.info_0.rsvd0;
+ ioq_cmd.common.cdw3[0] = cpu_to_le32(cmd.cdw3);
+ ioq_cmd.common.cdw3[1] = cpu_to_le32(cmd.cdw4);
+ ioq_cmd.common.cdw3[2] = cpu_to_le32(cmd.cdw5);
+ ioq_cmd.common.dptr.prp1 = cpu_to_le64(data_dma);
+
+ ioq_cmd.common.cdw10[0] = cpu_to_le32(cmd.cdw10);
+ ioq_cmd.common.cdw10[1] = cpu_to_le32(cmd.cdw11);
+ ioq_cmd.common.cdw10[2] = cpu_to_le32(cmd.cdw12);
+ ioq_cmd.common.cdw10[3] = cpu_to_le32(cmd.cdw13);
+ ioq_cmd.common.cdw10[4] = cpu_to_le32(cmd.cdw14);
+ ioq_cmd.common.cdw10[5] = cpu_to_le32(cmd.data_len);
+
+ memcpy(ioq_cmd.common.cdb, &cmd.cdw16, cmd.info_0.cdb_len);
+
+ ioq_cmd.common.cdw26[0] = cpu_to_le32(cmd.cdw26[0]);
+ ioq_cmd.common.cdw26[1] = cpu_to_le32(cmd.cdw26[1]);
+ ioq_cmd.common.cdw26[2] = cpu_to_le32(cmd.cdw26[2]);
+ ioq_cmd.common.cdw26[3] = cpu_to_le32(cmd.cdw26[3]);
+
+ if (cmd.timeout_ms)
+ timeout = msecs_to_jiffies(cmd.timeout_ms);
+ timeout = timeout ? timeout : ADMIN_TIMEOUT;
+
+ status = spraid_submit_ioq_sync_cmd(hdev, &ioq_cmd, &cmd.result0, &sense, timeout);
+
+ if (status >= 0) {
+ if (put_user(cmd.result0, &ucmd->result0)) {
+ status = -EFAULT;
+ goto free_dma_mem;
+ }
+ if (put_user(cmd.result1, &ucmd->result1)) {
+ status = -EFAULT;
+ goto free_dma_mem;
+ }
+ if (dma_dir == DMA_FROM_DEVICE &&
+ copy_to_user((void __user *)(uintptr_t)cmd.addr, data_ptr, cmd.data_len)) {
+ status = -EFAULT;
+ goto free_dma_mem;
+ }
+ }
+
+ if (sense) {
+		if (copy_to_user((void __user *)(uintptr_t)cmd.sense_addr,
+ sense, cmd.info_0.res_sense_len)) {
+ status = -EFAULT;
+ goto free_dma_mem;
+ }
+ }
+
+free_dma_mem:
+ if (data_ptr)
+ dma_free_coherent(hdev->dev, PAGE_SIZE, data_ptr, data_dma);
+
+ return status;
+
+}
+
+static int spraid_reset_work_sync(struct spraid_dev *hdev);
+
+static int spraid_user_reset_cmd(struct spraid_dev *hdev)
+{
+ int ret;
+
+ dev_info(hdev->dev, "[%s] start user reset cmd\n", __func__);
+ ret = spraid_reset_work_sync(hdev);
+ dev_info(hdev->dev, "[%s] stop user reset cmd[%d]\n", __func__, ret);
+
+ return ret;
+}
+
+static int hdev_open(struct inode *inode, struct file *file)
+{
+ struct spraid_dev *hdev =
+ container_of(inode->i_cdev, struct spraid_dev, cdev);
+ file->private_data = hdev;
+ return 0;
+}
+
+static long hdev_ioctl(struct file *file, u32 cmd, unsigned long arg)
+{
+ struct spraid_dev *hdev = file->private_data;
+ void __user *argp = (void __user *)arg;
+
+ switch (cmd) {
+ case SPRAID_IOCTL_ADMIN_CMD:
+ return spraid_user_admin_cmd(hdev, argp);
+ case SPRAID_IOCTL_IOQ_CMD:
+ return spraid_user_ioq_cmd(hdev, argp);
+ case SPRAID_IOCTL_RESET_CMD:
+ return spraid_user_reset_cmd(hdev);
+ default:
+ return -ENOTTY;
+ }
+}
+
+static const struct file_operations spraid_dev_fops = {
+ .owner = THIS_MODULE,
+ .open = hdev_open,
+ .unlocked_ioctl = hdev_ioctl,
+ .compat_ioctl = hdev_ioctl,
+};
+
+static int spraid_create_cdev(struct spraid_dev *hdev)
+{
+ int ret;
+
+ device_initialize(&hdev->ctrl_device);
+ hdev->ctrl_device.devt = MKDEV(MAJOR(spraid_chr_devt), hdev->instance);
+ hdev->ctrl_device.class = spraid_class;
+ hdev->ctrl_device.parent = hdev->dev;
+ dev_set_drvdata(&hdev->ctrl_device, hdev);
+ ret = dev_set_name(&hdev->ctrl_device, "spraid%d", hdev->instance);
+ if (ret)
+ return ret;
+ cdev_init(&hdev->cdev, &spraid_dev_fops);
+ hdev->cdev.owner = THIS_MODULE;
+ ret = cdev_device_add(&hdev->cdev, &hdev->ctrl_device);
+ if (ret) {
+ dev_err(hdev->dev, "Add cdev failed, ret: %d", ret);
+ put_device(&hdev->ctrl_device);
+ kfree_const(hdev->ctrl_device.kobj.name);
+ return ret;
+ }
+
+ return 0;
+}
+
+static inline void spraid_remove_cdev(struct spraid_dev *hdev)
+{
+ cdev_device_del(&hdev->cdev, &hdev->ctrl_device);
+}
+
+static const struct blk_mq_ops spraid_admin_mq_ops = {
+ .queue_rq = spraid_queue_admin_rq,
+ .complete = spraid_complete_admin_rq,
+ .init_hctx = spraid_admin_init_hctx,
+ .init_request = spraid_admin_init_request,
+ .timeout = spraid_admin_timeout,
+};
+
+static void spraid_remove_admin_tagset(struct spraid_dev *hdev)
+{
+ if (hdev->admin_q && !blk_queue_dying(hdev->admin_q)) {
+ blk_mq_unquiesce_queue(hdev->admin_q);
+ blk_cleanup_queue(hdev->admin_q);
+ blk_mq_free_tag_set(&hdev->admin_tagset);
+ }
+}
+
+static int spraid_alloc_admin_tags(struct spraid_dev *hdev)
+{
+ if (!hdev->admin_q) {
+ hdev->admin_tagset.ops = &spraid_admin_mq_ops;
+ hdev->admin_tagset.nr_hw_queues = 1;
+
+ hdev->admin_tagset.queue_depth = SPRAID_AQ_MQ_TAG_DEPTH;
+ hdev->admin_tagset.timeout = ADMIN_TIMEOUT;
+ hdev->admin_tagset.numa_node = hdev->numa_node;
+ hdev->admin_tagset.cmd_size =
+ spraid_cmd_size(hdev, true, false);
+ hdev->admin_tagset.flags = BLK_MQ_F_NO_SCHED;
+ hdev->admin_tagset.driver_data = hdev;
+
+ if (blk_mq_alloc_tag_set(&hdev->admin_tagset)) {
+ dev_err(hdev->dev, "Allocate admin tagset failed\n");
+ return -ENOMEM;
+ }
+
+ hdev->admin_q = blk_mq_init_queue(&hdev->admin_tagset);
+ if (IS_ERR(hdev->admin_q)) {
+ dev_err(hdev->dev, "Initialize admin request queue failed\n");
+ blk_mq_free_tag_set(&hdev->admin_tagset);
+ return -ENOMEM;
+ }
+ if (!blk_get_queue(hdev->admin_q)) {
+ dev_err(hdev->dev, "Get admin request queue failed\n");
+ spraid_remove_admin_tagset(hdev);
+ hdev->admin_q = NULL;
+ return -ENODEV;
+ }
+ } else {
+ blk_mq_unquiesce_queue(hdev->admin_q);
+ }
+ return 0;
+}
+
+static bool spraid_check_scmd_completed(struct scsi_cmnd *scmd)
+{
+ struct spraid_dev *hdev = shost_priv(scmd->device->host);
+ struct spraid_iod *iod = scsi_cmd_priv(scmd);
+ struct spraid_queue *spraidq;
+ u16 hwq, cid;
+
+ spraid_get_tag_from_scmd(scmd, &hwq, &cid);
+ spraidq = &hdev->queues[hwq];
+ if (READ_ONCE(iod->state) == SPRAID_CMD_COMPLETE || spraid_poll_cq(spraidq, cid)) {
+ dev_warn(hdev->dev, "cid[%d], qid[%d] has been completed\n",
+ cid, spraidq->qid);
+ return true;
+ }
+ return false;
+}
+
+static enum blk_eh_timer_return spraid_scmd_timeout(struct scsi_cmnd *scmd)
+{
+ struct spraid_iod *iod = scsi_cmd_priv(scmd);
+ unsigned int timeout = scmd->device->request_queue->rq_timeout;
+
+ if (spraid_check_scmd_completed(scmd))
+ goto out;
+
+ if (time_after(jiffies, scmd->jiffies_at_alloc + timeout)) {
+ if (cmpxchg(&iod->state, SPRAID_CMD_IN_FLIGHT, SPRAID_CMD_TIMEOUT) ==
+ SPRAID_CMD_IN_FLIGHT) {
+ return BLK_EH_DONE;
+ }
+ }
+out:
+ return BLK_EH_RESET_TIMER;
+}
+
+/* send the abort command via the admin queue for now */
+static int spraid_send_abort_cmd(struct spraid_dev *hdev, u32 hdid, u16 qid, u16 cid)
+{
+ struct spraid_admin_command admin_cmd;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.abort.opcode = SPRAID_ADMIN_ABORT_CMD;
+ admin_cmd.abort.hdid = cpu_to_le32(hdid);
+ admin_cmd.abort.sqid = cpu_to_le16(qid);
+ admin_cmd.abort.cid = cpu_to_le16(cid);
+
+ return spraid_submit_admin_sync_cmd(hdev->admin_q, &admin_cmd, NULL,
+ NULL, 0, 0, 0, 0);
+}
+
+/* send the reset command via the admin queue for now */
+static int spraid_send_reset_cmd(struct spraid_dev *hdev, int type, u32 hdid)
+{
+ struct spraid_admin_command admin_cmd;
+
+ memset(&admin_cmd, 0, sizeof(admin_cmd));
+ admin_cmd.reset.opcode = SPRAID_ADMIN_RESET;
+ admin_cmd.reset.hdid = cpu_to_le32(hdid);
+ admin_cmd.reset.type = type;
+
+ return spraid_submit_admin_sync_cmd(hdev->admin_q, &admin_cmd, NULL,
+ NULL, 0, 0, 0, 0);
+}
+
+static bool spraid_change_host_state(struct spraid_dev *hdev, enum spraid_state newstate)
+{
+ unsigned long flags;
+ enum spraid_state oldstate;
+ bool change = false;
+
+ spin_lock_irqsave(&hdev->state_lock, flags);
+
+ oldstate = hdev->state;
+ switch (newstate) {
+ case SPRAID_LIVE:
+ switch (oldstate) {
+ case SPRAID_NEW:
+ case SPRAID_RESETTING:
+ change = true;
+ break;
+ default:
+ break;
+ }
+ break;
+ case SPRAID_RESETTING:
+ switch (oldstate) {
+ case SPRAID_LIVE:
+ change = true;
+ break;
+ default:
+ break;
+ }
+ break;
+ case SPRAID_DELETING:
+ if (oldstate != SPRAID_DELETING)
+ change = true;
+ break;
+ case SPRAID_DEAD:
+ switch (oldstate) {
+ case SPRAID_NEW:
+ case SPRAID_LIVE:
+ case SPRAID_RESETTING:
+ change = true;
+ break;
+ default:
+ break;
+ }
+ break;
+ default:
+ break;
+ }
+ if (change)
+ hdev->state = newstate;
+ spin_unlock_irqrestore(&hdev->state_lock, flags);
+
+ dev_info(hdev->dev, "[%s][%d]->[%d], change[%d]\n", __func__, oldstate, newstate, change);
+
+ return change;
+}
+
+static void spraid_back_fault_cqe(struct spraid_queue *ioq, struct spraid_completion *cqe)
+{
+ struct spraid_dev *hdev = ioq->hdev;
+ struct blk_mq_tags *tags;
+ struct scsi_cmnd *scmd;
+ struct spraid_iod *iod;
+ struct request *req;
+
+ tags = hdev->shost->tag_set.tags[ioq->qid - 1];
+ req = blk_mq_tag_to_rq(tags, cqe->cmd_id);
+ if (unlikely(!req || !blk_mq_request_started(req)))
+ return;
+
+ scmd = blk_mq_rq_to_pdu(req);
+ iod = scsi_cmd_priv(scmd);
+
+ set_host_byte(scmd, DID_NO_CONNECT);
+ if (iod->nsge)
+ scsi_dma_unmap(scmd);
+ spraid_free_iod_res(hdev, iod);
+ scmd->scsi_done(scmd);
+ dev_warn(hdev->dev, "Back fault CQE, cid[%d], qid[%d]\n",
+ cqe->cmd_id, ioq->qid);
+}
+
+static void spraid_back_all_io(struct spraid_dev *hdev)
+{
+ int i, j;
+ struct spraid_queue *ioq;
+ struct spraid_completion cqe = { 0 };
+
+ for (i = 1; i <= hdev->shost->nr_hw_queues; i++) {
+ ioq = &hdev->queues[i];
+ for (j = 0; j < hdev->shost->can_queue; j++) {
+ cqe.cmd_id = j;
+ spraid_back_fault_cqe(ioq, &cqe);
+ }
+ }
+}
+
+static void spraid_dev_disable(struct spraid_dev *hdev, bool shutdown)
+{
+ struct spraid_queue *adminq = &hdev->queues[0];
+ u16 start, end;
+ unsigned long timeout = jiffies + 600 * HZ;
+
+ if (pci_device_is_present(hdev->pdev)) {
+ if (shutdown)
+ spraid_shutdown_ctrl(hdev);
+ else
+ spraid_disable_ctrl(hdev);
+ }
+
+ while (!time_after(jiffies, timeout)) {
+ if (!pci_device_is_present(hdev->pdev)) {
+ dev_info(hdev->dev, "[%s] pci_device not present, skip wait\n", __func__);
+ break;
+ }
+ if (!spraid_wait_ready(hdev, hdev->cap, false)) {
+ dev_info(hdev->dev, "[%s] wait ready success after reset\n", __func__);
+ break;
+ }
+ dev_info(hdev->dev, "[%s] waiting csts_rdy ready\n", __func__);
+ }
+
+ if (hdev->queue_count == 0) {
+ dev_err(hdev->dev, "[%s] warn, queue has been delete\n", __func__);
+ return;
+ }
+
+ spin_lock_irq(&adminq->cq_lock);
+ spraid_process_cq(adminq, &start, &end, -1);
+ spin_unlock_irq(&adminq->cq_lock);
+ spraid_complete_cqes(adminq, start, end);
+
+ spraid_pci_disable(hdev);
+
+ spraid_back_all_io(hdev);
+}
+
+static void spraid_reset_work(struct work_struct *work)
+{
+ int ret;
+ struct spraid_dev *hdev = container_of(work, struct spraid_dev, reset_work);
+
+ if (hdev->state != SPRAID_RESETTING) {
+ dev_err(hdev->dev, "[%s] err, host is not reset state\n", __func__);
+ return;
+ }
+
+ dev_info(hdev->dev, "[%s] enter host reset\n", __func__);
+
+ if (hdev->ctrl_config & SPRAID_CC_ENABLE) {
+ dev_info(hdev->dev, "[%s] start dev_disable\n", __func__);
+ spraid_dev_disable(hdev, false);
+ }
+
+ ret = spraid_pci_enable(hdev);
+ if (ret)
+ goto out;
+
+ ret = spraid_setup_admin_queue(hdev);
+ if (ret)
+ goto pci_disable;
+
+ ret = spraid_alloc_admin_tags(hdev);
+ if (ret)
+ goto pci_disable;
+
+ ret = spraid_setup_io_queues(hdev);
+ if (ret || hdev->online_queues <= hdev->shost->nr_hw_queues)
+ goto pci_disable;
+
+ spraid_change_host_state(hdev, SPRAID_LIVE);
+
+ spraid_send_aen(hdev);
+
+ return;
+
+pci_disable:
+ spraid_pci_disable(hdev);
+out:
+ spraid_change_host_state(hdev, SPRAID_DEAD);
+ dev_err(hdev->dev, "[%s] err, host reset failed\n", __func__);
+}
+
+static int spraid_reset_work_sync(struct spraid_dev *hdev)
+{
+ if (!spraid_change_host_state(hdev, SPRAID_RESETTING)) {
+ dev_info(hdev->dev, "[%s] can't change to reset state\n", __func__);
+ return -EBUSY;
+ }
+
+ if (!queue_work(spraid_wq, &hdev->reset_work)) {
+ dev_err(hdev->dev, "[%s] err, host is already in reset state\n", __func__);
+ return -EBUSY;
+ }
+
+ flush_work(&hdev->reset_work);
+ if (hdev->state != SPRAID_LIVE)
+ return -ENODEV;
+
+ return 0;
+}
+
+static int spraid_wait_abnl_cmd_done(struct spraid_iod *iod)
+{
+ u16 times = 0;
+
+ do {
+ if (READ_ONCE(iod->state) == SPRAID_CMD_TMO_COMPLETE)
+ break;
+ msleep(500);
+ times++;
+ } while (times <= SPRAID_WAIT_ABNL_CMD_TIMEOUT);
+
+	/* the command never reached SPRAID_CMD_TMO_COMPLETE within the wait window after the abort/reset */
+ if (times >= SPRAID_WAIT_ABNL_CMD_TIMEOUT)
+ return -ETIMEDOUT;
+
+ return 0;
+}
+
+static int spraid_abort_handler(struct scsi_cmnd *scmd)
+{
+ struct spraid_dev *hdev = shost_priv(scmd->device->host);
+ struct spraid_iod *iod = scsi_cmd_priv(scmd);
+ struct spraid_sdev_hostdata *hostdata;
+ u16 hwq, cid;
+ int ret;
+
+ scsi_print_command(scmd);
+
+ if (!spraid_wait_abnl_cmd_done(iod) || spraid_check_scmd_completed(scmd) ||
+ hdev->state != SPRAID_LIVE)
+ return SUCCESS;
+
+ hostdata = scmd->device->hostdata;
+ spraid_get_tag_from_scmd(scmd, &hwq, &cid);
+
+ dev_warn(hdev->dev, "cid[%d] qid[%d] timeout, aborting\n", cid, hwq);
+ ret = spraid_send_abort_cmd(hdev, hostdata->hdid, hwq, cid);
+ if (ret != ADMIN_ERR_TIMEOUT) {
+ ret = spraid_wait_abnl_cmd_done(iod);
+ if (ret) {
+ dev_warn(hdev->dev, "cid[%d] qid[%d] abort failed, not found\n", cid, hwq);
+ return FAILED;
+ }
+ dev_warn(hdev->dev, "cid[%d] qid[%d] abort succ\n", cid, hwq);
+ return SUCCESS;
+ }
+ dev_warn(hdev->dev, "cid[%d] qid[%d] abort failed, timeout\n", cid, hwq);
+ return FAILED;
+}
+
+static int spraid_tgt_reset_handler(struct scsi_cmnd *scmd)
+{
+ struct spraid_dev *hdev = shost_priv(scmd->device->host);
+ struct spraid_iod *iod = scsi_cmd_priv(scmd);
+ struct spraid_sdev_hostdata *hostdata;
+ u16 hwq, cid;
+ int ret;
+
+ scsi_print_command(scmd);
+
+ if (!spraid_wait_abnl_cmd_done(iod) || spraid_check_scmd_completed(scmd) ||
+ hdev->state != SPRAID_LIVE)
+ return SUCCESS;
+
+ hostdata = scmd->device->hostdata;
+ spraid_get_tag_from_scmd(scmd, &hwq, &cid);
+
+ dev_warn(hdev->dev, "cid[%d] qid[%d] timeout, target reset\n", cid, hwq);
+ ret = spraid_send_reset_cmd(hdev, SPRAID_RESET_TARGET, hostdata->hdid);
+ if (ret == 0) {
+ ret = spraid_wait_abnl_cmd_done(iod);
+ if (ret) {
+ dev_warn(hdev->dev, "cid[%d] qid[%d]target reset failed, not found\n",
+ cid, hwq);
+ return FAILED;
+ }
+
+ dev_warn(hdev->dev, "cid[%d] qid[%d] target reset success\n", cid, hwq);
+ return SUCCESS;
+ }
+
+ dev_warn(hdev->dev, "cid[%d] qid[%d] ret[%d] target reset failed\n", cid, hwq, ret);
+ return FAILED;
+}
+
+static int spraid_bus_reset_handler(struct scsi_cmnd *scmd)
+{
+ struct spraid_dev *hdev = shost_priv(scmd->device->host);
+ struct spraid_iod *iod = scsi_cmd_priv(scmd);
+ struct spraid_sdev_hostdata *hostdata;
+ u16 hwq, cid;
+ int ret;
+
+ scsi_print_command(scmd);
+
+ if (!spraid_wait_abnl_cmd_done(iod) || spraid_check_scmd_completed(scmd) ||
+ hdev->state != SPRAID_LIVE)
+ return SUCCESS;
+
+ hostdata = scmd->device->hostdata;
+ spraid_get_tag_from_scmd(scmd, &hwq, &cid);
+
+ dev_warn(hdev->dev, "cid[%d] qid[%d] timeout, bus reset\n", cid, hwq);
+ ret = spraid_send_reset_cmd(hdev, SPRAID_RESET_BUS, hostdata->hdid);
+ if (ret == 0) {
+ ret = spraid_wait_abnl_cmd_done(iod);
+ if (ret) {
+ dev_warn(hdev->dev, "cid[%d] qid[%d] bus reset failed, not found\n",
+ cid, hwq);
+ return FAILED;
+ }
+
+ dev_warn(hdev->dev, "cid[%d] qid[%d] bus reset succ\n", cid, hwq);
+ return SUCCESS;
+ }
+
+ dev_warn(hdev->dev, "cid[%d] qid[%d] ret[%d] bus reset failed\n", cid, hwq, ret);
+ return FAILED;
+}
+
+static int spraid_shost_reset_handler(struct scsi_cmnd *scmd)
+{
+ u16 hwq, cid;
+ struct spraid_dev *hdev = shost_priv(scmd->device->host);
+
+ scsi_print_command(scmd);
+ if (spraid_check_scmd_completed(scmd) || hdev->state != SPRAID_LIVE)
+ return SUCCESS;
+
+ spraid_get_tag_from_scmd(scmd, &hwq, &cid);
+ dev_warn(hdev->dev, "cid[%d] qid[%d] host reset\n", cid, hwq);
+
+ if (spraid_reset_work_sync(hdev)) {
+ dev_warn(hdev->dev, "cid[%d] qid[%d] host reset failed\n", cid, hwq);
+ return FAILED;
+ }
+
+ dev_warn(hdev->dev, "cid[%d] qid[%d] host reset success\n", cid, hwq);
+
+ return SUCCESS;
+}
+
+static ssize_t csts_pp_show(struct device *cdev, struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct spraid_dev *hdev = shost_priv(shost);
+ int ret = -1;
+
+ if (pci_device_is_present(hdev->pdev)) {
+ ret = (readl(hdev->bar + SPRAID_REG_CSTS) & SPRAID_CSTS_PP_MASK);
+ ret >>= SPRAID_CSTS_PP_SHIFT;
+ }
+
+ return snprintf(buf, PAGE_SIZE, "%d\n", ret);
+}
+
+static ssize_t csts_shst_show(struct device *cdev, struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct spraid_dev *hdev = shost_priv(shost);
+ int ret = -1;
+
+ if (pci_device_is_present(hdev->pdev)) {
+ ret = (readl(hdev->bar + SPRAID_REG_CSTS) & SPRAID_CSTS_SHST_MASK);
+ ret >>= SPRAID_CSTS_SHST_SHIFT;
+ }
+
+ return snprintf(buf, PAGE_SIZE, "%d\n", ret);
+}
+
+static ssize_t csts_cfs_show(struct device *cdev, struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct spraid_dev *hdev = shost_priv(shost);
+ int ret = -1;
+
+ if (pci_device_is_present(hdev->pdev)) {
+ ret = (readl(hdev->bar + SPRAID_REG_CSTS) & SPRAID_CSTS_CFS_MASK);
+ ret >>= SPRAID_CSTS_CFS_SHIFT;
+ }
+
+ return snprintf(buf, PAGE_SIZE, "%d\n", ret);
+}
+
+static ssize_t csts_rdy_show(struct device *cdev, struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct spraid_dev *hdev = shost_priv(shost);
+ int ret = -1;
+
+ if (pci_device_is_present(hdev->pdev))
+ ret = (readl(hdev->bar + SPRAID_REG_CSTS) & SPRAID_CSTS_RDY);
+
+ return snprintf(buf, PAGE_SIZE, "%d\n", ret);
+}
+
+static ssize_t fw_version_show(struct device *cdev, struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct spraid_dev *hdev = shost_priv(shost);
+
+ return snprintf(buf, sizeof(hdev->ctrl_info->fr), "%s\n", hdev->ctrl_info->fr);
+}
+
+static DEVICE_ATTR_RO(csts_pp);
+static DEVICE_ATTR_RO(csts_shst);
+static DEVICE_ATTR_RO(csts_cfs);
+static DEVICE_ATTR_RO(csts_rdy);
+static DEVICE_ATTR_RO(fw_version);
+
+static struct device_attribute *spraid_host_attrs[] = {
+ &dev_attr_csts_pp,
+ &dev_attr_csts_shst,
+ &dev_attr_csts_cfs,
+ &dev_attr_csts_rdy,
+ &dev_attr_fw_version,
+ NULL,
+};
+
+static struct scsi_host_template spraid_driver_template = {
+ .module = THIS_MODULE,
+ .name = "Ramaxel Logic spraid driver",
+ .proc_name = "spraid",
+ .queuecommand = spraid_queue_command,
+ .slave_alloc = spraid_slave_alloc,
+ .slave_destroy = spraid_slave_destroy,
+ .slave_configure = spraid_slave_configure,
+ .eh_timed_out = spraid_scmd_timeout,
+ .eh_abort_handler = spraid_abort_handler,
+ .eh_target_reset_handler = spraid_tgt_reset_handler,
+ .eh_bus_reset_handler = spraid_bus_reset_handler,
+ .eh_host_reset_handler = spraid_shost_reset_handler,
+ .change_queue_depth = scsi_change_queue_depth,
+ .host_tagset = 1,
+ .this_id = -1,
+ .shost_attrs = spraid_host_attrs,
+};
+
+static void spraid_shutdown(struct pci_dev *pdev)
+{
+ struct spraid_dev *hdev = pci_get_drvdata(pdev);
+
+ spraid_remove_io_queues(hdev);
+ spraid_disable_admin_queue(hdev, true);
+}
+
+static int spraid_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct spraid_dev *hdev;
+ struct Scsi_Host *shost;
+ int node, ret;
+
+ shost = scsi_host_alloc(&spraid_driver_template, sizeof(*hdev));
+ if (!shost) {
+ dev_err(&pdev->dev, "Failed to allocate scsi host\n");
+ return -ENOMEM;
+ }
+ hdev = shost_priv(shost);
+ hdev->pdev = pdev;
+ hdev->dev = get_device(&pdev->dev);
+
+ node = dev_to_node(hdev->dev);
+ if (node == NUMA_NO_NODE) {
+ node = first_memory_node;
+ set_dev_node(hdev->dev, node);
+ }
+ hdev->numa_node = node;
+ hdev->shost = shost;
+ pci_set_drvdata(pdev, hdev);
+
+ ret = spraid_dev_map(hdev);
+ if (ret)
+ goto put_dev;
+
+ init_rwsem(&hdev->devices_rwsem);
+ INIT_WORK(&hdev->aen_work, spraid_async_event_work);
+ INIT_WORK(&hdev->scan_work, spraid_scan_work);
+ INIT_WORK(&hdev->timesyn_work, spraid_timesyn_work);
+ INIT_WORK(&hdev->reset_work, spraid_reset_work);
+ spin_lock_init(&hdev->state_lock);
+
+ ret = spraid_alloc_resources(hdev);
+ if (ret)
+ goto dev_unmap;
+
+ ret = spraid_pci_enable(hdev);
+ if (ret)
+ goto resources_free;
+
+ ret = spraid_setup_admin_queue(hdev);
+ if (ret)
+ goto pci_disable;
+
+ ret = spraid_alloc_admin_tags(hdev);
+ if (ret)
+ goto disable_admin_q;
+
+ ret = spraid_init_ctrl_info(hdev);
+ if (ret)
+ goto free_admin_tagset;
+
+ ret = spraid_alloc_iod_ext_mem_pool(hdev);
+ if (ret)
+ goto free_admin_tagset;
+
+ ret = spraid_setup_io_queues(hdev);
+ if (ret)
+ goto free_iod_mempool;
+
+ spraid_shost_init(hdev);
+
+ ret = scsi_add_host(hdev->shost, hdev->dev);
+ if (ret) {
+ dev_err(hdev->dev, "Add shost to system failed, ret: %d\n",
+ ret);
+ goto remove_io_queues;
+ }
+
+ ret = spraid_create_cdev(hdev);
+ if (ret)
+ goto remove_io_queues;
+
+ if (hdev->online_queues == SPRAID_ADMIN_QUEUE_NUM) {
+ dev_warn(hdev->dev, "warn only admin queue can be used\n");
+ return 0;
+ }
+
+ hdev->state = SPRAID_LIVE;
+
+ spraid_send_aen(hdev);
+
+ ret = spraid_dev_list_init(hdev);
+ if (ret)
+ goto remove_cdev;
+
+ ret = spraid_configure_timestamp(hdev);
+ if (ret)
+ dev_warn(hdev->dev, "init set timestamp failed\n");
+
+ ret = spraid_alloc_ioq_ptcmds(hdev);
+ if (ret)
+ goto remove_cdev;
+
+ scsi_scan_host(hdev->shost);
+
+ return 0;
+
+remove_cdev:
+ spraid_remove_cdev(hdev);
+remove_io_queues:
+ spraid_remove_io_queues(hdev);
+free_iod_mempool:
+ spraid_free_iod_ext_mem_pool(hdev);
+free_admin_tagset:
+ spraid_remove_admin_tagset(hdev);
+disable_admin_q:
+ spraid_disable_admin_queue(hdev, false);
+pci_disable:
+ spraid_pci_disable(hdev);
+resources_free:
+ spraid_free_resources(hdev);
+dev_unmap:
+ spraid_dev_unmap(hdev);
+put_dev:
+ put_device(hdev->dev);
+ scsi_host_put(shost);
+
+ return -ENODEV;
+}
+
+static void spraid_remove(struct pci_dev *pdev)
+{
+ struct spraid_dev *hdev = pci_get_drvdata(pdev);
+ struct Scsi_Host *shost = hdev->shost;
+
+ dev_info(hdev->dev, "enter spraid remove\n");
+
+ spraid_change_host_state(hdev, SPRAID_DELETING);
+
+ if (!pci_device_is_present(pdev)) {
+ scsi_block_requests(shost);
+ spraid_back_all_io(hdev);
+ scsi_unblock_requests(shost);
+ }
+
+ flush_work(&hdev->reset_work);
+ scsi_remove_host(shost);
+
+ kfree(hdev->ioq_ptcmds);
+ kfree(hdev->devices);
+ spraid_remove_cdev(hdev);
+ spraid_remove_io_queues(hdev);
+ spraid_free_iod_ext_mem_pool(hdev);
+ spraid_remove_admin_tagset(hdev);
+ spraid_disable_admin_queue(hdev, false);
+ spraid_pci_disable(hdev);
+ spraid_free_resources(hdev);
+ spraid_dev_unmap(hdev);
+ put_device(hdev->dev);
+ scsi_host_put(shost);
+
+ dev_info(hdev->dev, "exit spraid remove\n");
+}
+
+static const struct pci_device_id spraid_id_table[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_RAMAXEL_LOGIC, SPRAID_SERVER_DEVICE_HAB_DID) },
+ { PCI_DEVICE(PCI_VENDOR_ID_RAMAXEL_LOGIC, SPRAID_SERVER_DEVICE_RAID_DID) },
+ { 0, }
+};
+MODULE_DEVICE_TABLE(pci, spraid_id_table);
+
+static struct pci_driver spraid_driver = {
+ .name = "spraid",
+ .id_table = spraid_id_table,
+ .probe = spraid_probe,
+ .remove = spraid_remove,
+ .shutdown = spraid_shutdown,
+};
+
+static int __init spraid_init(void)
+{
+ int ret;
+
+ spraid_wq = alloc_workqueue("spraid-wq", WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_SYSFS, 0);
+ if (!spraid_wq)
+ return -ENOMEM;
+
+ ret = alloc_chrdev_region(&spraid_chr_devt, 0, SPRAID_MINORS, "spraid");
+ if (ret < 0)
+ goto destroy_wq;
+
+ spraid_class = class_create(THIS_MODULE, "spraid");
+ if (IS_ERR(spraid_class)) {
+ ret = PTR_ERR(spraid_class);
+ goto unregister_chrdev;
+ }
+
+ ret = pci_register_driver(&spraid_driver);
+ if (ret < 0)
+ goto destroy_class;
+
+ return 0;
+
+destroy_class:
+ class_destroy(spraid_class);
+unregister_chrdev:
+ unregister_chrdev_region(spraid_chr_devt, SPRAID_MINORS);
+destroy_wq:
+ destroy_workqueue(spraid_wq);
+
+ return ret;
+}
+
+static void __exit spraid_exit(void)
+{
+ pci_unregister_driver(&spraid_driver);
+ class_destroy(spraid_class);
+ unregister_chrdev_region(spraid_chr_devt, SPRAID_MINORS);
+ destroy_workqueue(spraid_wq);
+ ida_destroy(&spraid_instance_ida);
+}
+
+MODULE_AUTHOR("Ramaxel Memory Technology");
+MODULE_DESCRIPTION("Ramaxel Memory Technology SPraid Driver");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(SPRAID_DRV_VERSION);
+module_init(spraid_init);
+module_exit(spraid_exit);
--
2.20.1

[PATCH] arm64/mpam: fix the problem that the ret variable is not initialized
by wenzhiwei 13 Oct '21
From: wenzhiwei11 <wenzhiwei(a)kylinos.cn>
kylin inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4AHUL?from=project-issue
CVE: NA
---------------------------------------------------
Initialize the variable "ret" in schemata_list_init() so it cannot be returned uninitialized.
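The hazard is the usual uninitialized-return pattern: if the loop never assigns "ret" on the path actually taken, the function returns whatever happens to be in that stack slot. A minimal standalone sketch of the pattern and the fix (hypothetical names, not the mpam code):

#include <stdio.h>

static int add_schema_stub(int id)
{
	return 0;	/* stands in for add_schema() succeeding */
}

static int schemata_list_init_sketch(const int *enabled, int count)
{
	int ret = 0;	/* the fix: defined even if the loop assigns nothing */
	int i;

	for (i = 0; i < count; i++) {
		if (!enabled[i])
			continue;	/* skipped entries never touch 'ret' */
		ret = add_schema_stub(i);
		if (ret)
			break;
	}
	return ret;	/* without the initializer this could be garbage */
}

int main(void)
{
	printf("no resources enabled -> ret = %d\n",
	       schemata_list_init_sketch(NULL, 0));
	return 0;
}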
Signed-off-by: 温志伟 <wenzhiwei(a)kylinos.cn>
---
arch/arm64/kernel/mpam/mpam_ctrlmon.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/mpam/mpam_ctrlmon.c b/arch/arm64/kernel/mpam/mpam_ctrlmon.c
index a4a298a455e0..9759eb223dac 100644
--- a/arch/arm64/kernel/mpam/mpam_ctrlmon.c
+++ b/arch/arm64/kernel/mpam/mpam_ctrlmon.c
@@ -131,7 +131,7 @@ static int add_schema(enum resctrl_conf_type t, struct resctrl_resource *r)
int schemata_list_init(void)
{
- int ret;
+ int ret = 0;
struct mpam_resctrl_res *res;
struct resctrl_resource *r;
--
2.30.0
Hi,
I have unsubscribed from the list (by sending an email to kernel-leave(a)openeuler.org).
Why is the system still sending me emails?
B.R.
1
0

[PATCH kernel-4.19 1/2] sched/topology: Make sched_init_numa() use a set for the deduplicating sort
by Yang Yingliang 13 Oct '21
From: Valentin Schneider <valentin.schneider(a)arm.com>
mainline inclusion
from mainline-v5.12-rc1
commit 620a6dc40754dc218f5b6389b5d335e9a107fd29
category: bugfix
bugzilla: 182847,https://gitee.com/openeuler/kernel/issues/I48TV8
CVE: NA
----------------------------------------------------------
The deduplicating sort in sched_init_numa() assumes that the first line in
the distance table contains all unique values in the entire table. I've
been trying to pen what this exactly means for the topology, but it's not
straightforward. For instance, topology.c uses this example:
node 0 1 2 3
0: 10 20 20 30
1: 20 10 20 20
2: 20 20 10 20
3: 30 20 20 10
0 ----- 1
| / |
| / |
| / |
2 ----- 3
Which works out just fine. However, if we swap nodes 0 and 1:
1 ----- 0
| / |
| / |
| / |
2 ----- 3
we get this distance table:
node 0 1 2 3
0: 10 20 20 20
1: 20 10 20 30
2: 20 20 10 20
3: 20 30 20 10
Which breaks the deduplicating sort (non-representative first line). In
this case this would just be a renumbering exercise, but it so happens that
we can have a deduplicating sort that goes through the whole table in O(n²)
at the extra cost of a temporary memory allocation (i.e. any form of set).
The ACPI spec (SLIT) mentions distances are encoded on 8 bits. Following
this, implement the set as a 256-bits bitmap. Should this not be
satisfactory (i.e. we want to support 32-bit values), then we'll have to go
for some other sparse set implementation.
This has the added benefit of letting us allocate just the right amount of
memory for sched_domains_numa_distance[], rather than an arbitrary
(nr_node_ids + 1).
Note: DT binding equivalent (distance-map) decodes distances as 32-bit
values.
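For illustration, a small userspace sketch of the approach (not the scheduler code): collect every 8-bit distance from the whole table into a 256-entry membership array (the plain-C analogue of the bitmap), then walk it in index order, which yields the unique distances already sorted. The table used is the second example above.

#include <stdio.h>

#define DISTANCE_BITS		8
#define NR_DISTANCE_VALUES	(1 << DISTANCE_BITS)

/* The "swapped nodes 0 and 1" distance table from the example above. */
static const int dist[4][4] = {
	{ 10, 20, 20, 20 },
	{ 20, 10, 20, 30 },
	{ 20, 20, 10, 20 },
	{ 20, 30, 20, 10 },
};

int main(void)
{
	unsigned char seen[NR_DISTANCE_VALUES] = { 0 };	/* plays the role of the bitmap */
	int unique[NR_DISTANCE_VALUES];
	int nr_levels = 0;
	int i, j;

	/* O(nr_nodes^2) pass over the whole table, not just row 0. */
	for (i = 0; i < 4; i++)
		for (j = 0; j < 4; j++)
			seen[dist[i][j]] = 1;

	/* Walking the set entries in index order gives a sorted dedup. */
	for (i = 0; i < NR_DISTANCE_VALUES; i++)
		if (seen[i])
			unique[nr_levels++] = i;

	printf("nr_levels = %d:", nr_levels);
	for (i = 0; i < nr_levels; i++)
		printf(" %d", unique[i]);
	printf("\n");		/* prints: nr_levels = 3: 10 20 30 */
	return 0;
}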
Signed-off-by: Valentin Schneider <valentin.schneider(a)arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Link: https://lkml.kernel.org/r/20210122123943.1217-2-valentin.schneider@arm.com
Signed-off-by: Jialin Zhang <zhangjialin11(a)huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/linux/topology.h | 1 +
kernel/sched/topology.c | 99 +++++++++++++++++++---------------------
2 files changed, 49 insertions(+), 51 deletions(-)
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 5cc8595dd0e4e..a19771cd267d7 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -47,6 +47,7 @@ int arch_update_cpu_topology(void);
/* Conform to ACPI 2.0 SLIT distance definitions */
#define LOCAL_DISTANCE 10
#define REMOTE_DISTANCE 20
+#define DISTANCE_BITS 8
#ifndef node_distance
#define node_distance(from,to) ((from) == (to) ? LOCAL_DISTANCE : REMOTE_DISTANCE)
#endif
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index f383a8f5a88bc..23f0a69b2ed49 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1442,66 +1442,58 @@ static void check_node_limit(void)
static inline void check_node_limit(void) { }
#endif /* CONFIG_SCHED_STEAL */
+
+#define NR_DISTANCE_VALUES (1 << DISTANCE_BITS)
+
void sched_init_numa(void)
{
- int next_distance, curr_distance = node_distance(0, 0);
struct sched_domain_topology_level *tl;
- int level = 0;
- int i, j, k;
-
- sched_domains_numa_distance = kzalloc(sizeof(int) * (nr_node_ids + 1), GFP_KERNEL);
- if (!sched_domains_numa_distance)
- return;
-
- /* Includes NUMA identity node at level 0. */
- sched_domains_numa_distance[level++] = curr_distance;
- sched_domains_numa_levels = level;
+ unsigned long *distance_map;
+ int nr_levels = 0;
+ int i, j;
/*
* O(nr_nodes^2) deduplicating selection sort -- in order to find the
* unique distances in the node_distance() table.
- *
- * Assumes node_distance(0,j) includes all distances in
- * node_distance(i,j) in order to avoid cubic time.
*/
- next_distance = curr_distance;
+ distance_map = bitmap_alloc(NR_DISTANCE_VALUES, GFP_KERNEL);
+ if (!distance_map)
+ return;
+
+ bitmap_zero(distance_map, NR_DISTANCE_VALUES);
for (i = 0; i < nr_node_ids; i++) {
for (j = 0; j < nr_node_ids; j++) {
- for (k = 0; k < nr_node_ids; k++) {
- int distance = node_distance(i, k);
-
- if (distance > curr_distance &&
- (distance < next_distance ||
- next_distance == curr_distance))
- next_distance = distance;
-
- /*
- * While not a strong assumption it would be nice to know
- * about cases where if node A is connected to B, B is not
- * equally connected to A.
- */
- if (sched_debug() && node_distance(k, i) != distance)
- sched_numa_warn("Node-distance not symmetric");
+ int distance = node_distance(i, j);
- if (sched_debug() && i && !find_numa_distance(distance))
- sched_numa_warn("Node-0 not representative");
+ if (distance < LOCAL_DISTANCE || distance >= NR_DISTANCE_VALUES) {
+ sched_numa_warn("Invalid distance value range");
+ return;
}
- if (next_distance != curr_distance) {
- sched_domains_numa_distance[level++] = next_distance;
- sched_domains_numa_levels = level;
- curr_distance = next_distance;
- } else break;
+
+ bitmap_set(distance_map, distance, 1);
}
+ }
+ /*
+ * We can now figure out how many unique distance values there are and
+ * allocate memory accordingly.
+ */
+ nr_levels = bitmap_weight(distance_map, NR_DISTANCE_VALUES);
- /*
- * In case of sched_debug() we verify the above assumption.
- */
- if (!sched_debug())
- break;
+ sched_domains_numa_distance = kcalloc(nr_levels, sizeof(int), GFP_KERNEL);
+ if (!sched_domains_numa_distance) {
+ bitmap_free(distance_map);
+ return;
}
+ for (i = 0, j = 0; i < nr_levels; i++, j++) {
+ j = find_next_bit(distance_map, NR_DISTANCE_VALUES, j);
+ sched_domains_numa_distance[i] = j;
+ }
+
+ bitmap_free(distance_map);
+
/*
- * 'level' contains the number of unique distances
+ * 'nr_levels' contains the number of unique distances
*
* The sched_domains_numa_distance[] array includes the actual distance
* numbers.
@@ -1510,15 +1502,15 @@ void sched_init_numa(void)
/*
* Here, we should temporarily reset sched_domains_numa_levels to 0.
* If it fails to allocate memory for array sched_domains_numa_masks[][],
- * the array will contain less then 'level' members. This could be
+ * the array will contain less then 'nr_levels' members. This could be
* dangerous when we use it to iterate array sched_domains_numa_masks[][]
* in other functions.
*
- * We reset it to 'level' at the end of this function.
+ * We reset it to 'nr_levels' at the end of this function.
*/
sched_domains_numa_levels = 0;
- sched_domains_numa_masks = kzalloc(sizeof(void *) * level, GFP_KERNEL);
+ sched_domains_numa_masks = kzalloc(sizeof(void *) * nr_levels, GFP_KERNEL);
if (!sched_domains_numa_masks)
return;
@@ -1526,7 +1518,7 @@ void sched_init_numa(void)
* Now for each level, construct a mask per node which contains all
* CPUs of nodes that are that many hops away from us.
*/
- for (i = 0; i < level; i++) {
+ for (i = 0; i < nr_levels; i++) {
sched_domains_numa_masks[i] =
kzalloc(nr_node_ids * sizeof(void *), GFP_KERNEL);
if (!sched_domains_numa_masks[i])
@@ -1534,12 +1526,17 @@ void sched_init_numa(void)
for (j = 0; j < nr_node_ids; j++) {
struct cpumask *mask = kzalloc(cpumask_size(), GFP_KERNEL);
+ int k;
+
if (!mask)
return;
sched_domains_numa_masks[i][j] = mask;
for_each_node(k) {
+ if (sched_debug() && (node_distance(j, k) != node_distance(k, j)))
+ sched_numa_warn("Node-distance not symmetric");
+
if (node_distance(j, k) > sched_domains_numa_distance[i])
continue;
@@ -1551,7 +1548,7 @@ void sched_init_numa(void)
/* Compute default topology size */
for (i = 0; sched_domain_topology[i].mask; i++);
- tl = kzalloc((i + level + 1) *
+ tl = kzalloc((i + nr_levels) *
sizeof(struct sched_domain_topology_level), GFP_KERNEL);
if (!tl)
return;
@@ -1574,7 +1571,7 @@ void sched_init_numa(void)
/*
* .. and append 'j' levels of NUMA goodness.
*/
- for (j = 1; j < level; i++, j++) {
+ for (j = 1; j < nr_levels; i++, j++) {
tl[i] = (struct sched_domain_topology_level){
.mask = sd_numa_mask,
.sd_flags = cpu_numa_flags,
@@ -1586,8 +1583,8 @@ void sched_init_numa(void)
sched_domain_topology = tl;
- sched_domains_numa_levels = level;
- sched_max_numa_distance = sched_domains_numa_distance[level - 1];
+ sched_domains_numa_levels = nr_levels;
+ sched_max_numa_distance = sched_domains_numa_distance[nr_levels - 1];
init_numa_topology_type();
check_node_limit();
--
2.25.1

[PATCH openEuler-21.03] arch/arm64/kernel/mpam: fix Uninitialized scalar variable
by shenzijun 13 Oct '21
From: 沈子俊 <shenzijun(a)kylinos.cn>
kylin inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4AHUK?from=project-issue
CVE: NA
---------------------------------------------------
Initialize the variable "ret" in resctrl_group_mkdir_info_resdir() so it cannot be returned uninitialized.
Signed-off-by: 沈子俊 <shenzijun(a)kylinos.cn>
---
arch/arm64/kernel/mpam/mpam_ctrlmon.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/mpam/mpam_ctrlmon.c b/arch/arm64/kernel/mpam/mpam_ctrlmon.c
index b1d32d432556..6bcb4ca65b18 100644
--- a/arch/arm64/kernel/mpam/mpam_ctrlmon.c
+++ b/arch/arm64/kernel/mpam/mpam_ctrlmon.c
@@ -733,7 +733,7 @@ static int resctrl_group_mkdir_info_resdir(struct resctrl_resource *r,
char *name,unsigned long fflags, struct kernfs_node *kn_info)
{
struct kernfs_node *kn_subdir;
- int ret;
+ int ret = 0;
kn_subdir = kernfs_create_dir(kn_info, name,
kn_info->mode, r);
--
2.30.0

[PATCH openEuler-5.10 001/582] Revert "evm: Refuse EVM_ALLOW_METADATA_WRITES only if an HMAC key is loaded"
by Zheng Zengkai 13 Oct '21
hulk inclusion
category: feature
feature: IMA Digest Lists extension
bugzilla: 46797
---------------------------
This reverts commit 9b772f4948fa513c501ae37c7afc89aa8613314c.
backport patch from LTS 5.10.50 instead.
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
Documentation/ABI/testing/evm | 5 ++---
security/integrity/evm/evm_secfs.c | 2 +-
2 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/Documentation/ABI/testing/evm b/Documentation/ABI/testing/evm
index eb6d70fd6fa2..3c477ba48a31 100644
--- a/Documentation/ABI/testing/evm
+++ b/Documentation/ABI/testing/evm
@@ -49,9 +49,8 @@ Description:
modification of EVM-protected metadata and
disable all further modification of policy
- Note that once an HMAC key has been loaded, it will no longer
- be possible to enable metadata modification and, if it is
- already enabled, it will be disabled.
+ Note that once a key has been loaded, it will no longer be
+ possible to enable metadata modification.
Until key loading has been signaled EVM can not create
or validate the 'security.evm' xattr, but returns
diff --git a/security/integrity/evm/evm_secfs.c b/security/integrity/evm/evm_secfs.c
index 92fe26ace797..cfc3075769bb 100644
--- a/security/integrity/evm/evm_secfs.c
+++ b/security/integrity/evm/evm_secfs.c
@@ -84,7 +84,7 @@ static ssize_t evm_write_key(struct file *file, const char __user *buf,
* keys are loaded.
*/
if ((i & EVM_ALLOW_METADATA_WRITES) &&
- ((evm_initialized & EVM_INIT_HMAC) != 0) &&
+ ((evm_initialized & EVM_KEY_MASK) != 0) &&
!(evm_initialized & EVM_ALLOW_METADATA_WRITES))
return -EPERM;
--
2.20.1

[PATCH openEuler-1.0-LTS 1/2] block: fix blk-iolatency accounting underflow
by Yang Yingliang 13 Oct '21
From: Dennis Zhou <dennis(a)kernel.org>
mainline inclusion
from mainline-v5.0-rc1
commit 13369816cb648f897ce9cbf57e55eeb742ce4eb3
category: bugfix
bugzilla: 182234
CVE: NA
---------------------------
The blk-iolatency controller measures the time from rq_qos_throttle() to
rq_qos_done_bio() and attributes this time to the first bio that needs
to create the request. This means if a bio is plug-mergeable or
bio-mergeable, it gets to bypass the blk-iolatency controller.
The recent series [1], to tag all bios w/ blkgs undermined how iolatency
was determining which bios it was charging and should process in
rq_qos_done_bio(). Because all bios are being tagged, this caused the
atomic_t for the struct rq_wait inflight count to underflow and result
in a stall.
This patch adds a new flag BIO_TRACKED to let controllers know that a
bio is going through the rq_qos path. blk-iolatency now checks if this
flag is set to see if it should process the bio in rq_qos_done_bio().
Overloading BLK_QUEUE_ENTERED works, but makes the flag rules confusing.
BIO_THROTTLED was another candidate, but the flag is set for all bios
that have gone through blk-throttle code. Overloading a flag comes with
the burden of making sure that when either implementation changes, a
change in setting rules for one doesn't cause a bug in the other. So
here, we unfortunately opt for adding a new flag.
[1] https://lore.kernel.org/lkml/20181205171039.73066-1-dennis@kernel.org/
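For illustration only, a tiny standalone sketch of the tag-and-check idea (hypothetical structures and names, not the block layer API): the throttle side marks each object it actually charged, and the completion side only releases objects carrying that mark, so requests that bypassed throttling can no longer drive the inflight counter negative.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for struct bio and its flag word. */
struct fake_bio {
	unsigned int flags;
};

#define FAKE_BIO_TRACKED	(1u << 0)	/* "went through the throttle path" */

static void throttle(struct fake_bio *bio)
{
	/* Tag the bio so the completion side knows it was charged. */
	bio->flags |= FAKE_BIO_TRACKED;
	/* ... per-controller throttling would happen here ... */
}

static void done_bio(struct fake_bio *bio, int *inflight)
{
	/* Without this check, merged bios that skipped throttle() would
	 * decrement a counter they never incremented -> underflow. */
	if (!(bio->flags & FAKE_BIO_TRACKED))
		return;
	(*inflight)--;
}

int main(void)
{
	struct fake_bio charged = { 0 }, merged = { 0 };
	int inflight = 0;

	throttle(&charged);
	inflight++;			/* only the charged bio is counted */

	done_bio(&merged, &inflight);	/* ignored: never tagged */
	done_bio(&charged, &inflight);	/* balanced decrement */

	printf("inflight = %d\n", inflight);	/* prints 0, no underflow */
	return 0;
}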
Fixes: 5cdf2e3fea5e ("blkcg: associate blkg when associating a device")
Signed-off-by: Dennis Zhou <dennis(a)kernel.org>
Cc: Josef Bacik <josef(a)toxicpanda.com>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Conflicts:
block/blk-iolatency.c
block/blk-rq-qos.c
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
block/blk-iolatency.c | 2 +-
block/blk-rq-qos.c | 6 ++++++
include/linux/blk_types.h | 1 +
3 files changed, 8 insertions(+), 1 deletion(-)
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 0529e94a20f7f..ae6ae7f50c6f7 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -563,7 +563,7 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio)
int inflight = 0;
blkg = bio->bi_blkg;
- if (!blkg)
+ if (!blkg || !bio_flagged(bio, BIO_TRACKED))
return;
iolat = blkg_to_lat(bio->bi_blkg);
diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
index 43bcd4e7a7f9a..d1eaa118ce309 100644
--- a/block/blk-rq-qos.c
+++ b/block/blk-rq-qos.c
@@ -72,6 +72,12 @@ void rq_qos_throttle(struct request_queue *q, struct bio *bio,
{
struct rq_qos *rqos;
+ /*
+ * BIO_TRACKED lets controllers know that a bio went through the
+ * normal rq_qos path.
+ */
+ bio_set_flag(bio, BIO_TRACKED);
+
for(rqos = q->rq_qos; rqos; rqos = rqos->next) {
if (rqos->ops->throttle)
rqos->ops->throttle(rqos, bio, lock);
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index c07caa2a28429..8075b9955bb3c 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -233,6 +233,7 @@ struct bio {
#define BIO_TRACE_COMPLETION 10 /* bio_endio() should trace the final completion
* of this bio. */
#define BIO_QUEUE_ENTERED 11 /* can use blk_queue_enter_live() */
+#define BIO_TRACKED 12 /* set if bio goes through the rq_qos path */
/* See BVEC_POOL_OFFSET below before adding new flags */
--
2.25.1

13 Oct '21
From: Dennis Zhou <dennis(a)kernel.org>
mainline inclusion
from mainline-v5.0-rc1
commit 13369816cb648f897ce9cbf57e55eeb742ce4eb3
category: bugfix
bugzilla: 182234
CVE: NA
---------------------------
The blk-iolatency controller measures the time from rq_qos_throttle() to
rq_qos_done_bio() and attributes this time to the first bio that needs
to create the request. This means if a bio is plug-mergeable or
bio-mergeable, it gets to bypass the blk-iolatency controller.
The recent series [1], to tag all bios w/ blkgs undermined how iolatency
was determining which bios it was charging and should process in
rq_qos_done_bio(). Because all bios are being tagged, this caused the
atomic_t for the struct rq_wait inflight count to underflow and result
in a stall.
This patch adds a new flag BIO_TRACKED to let controllers know that a
bio is going through the rq_qos path. blk-iolatency now checks if this
flag is set to see if it should process the bio in rq_qos_done_bio().
Overloading BLK_QUEUE_ENTERED works, but makes the flag rules confusing.
BIO_THROTTLED was another candidate, but the flag is set for all bios
that have gone through blk-throttle code. Overloading a flag comes with
the burden of making sure that when either implementation changes, a
change in setting rules for one doesn't cause a bug in the other. So
here, we unfortunately opt for adding a new flag.
[1] https://lore.kernel.org/lkml/20181205171039.73066-1-dennis@kernel.org/
Fixes: 5cdf2e3fea5e ("blkcg: associate blkg when associating a device")
Signed-off-by: Dennis Zhou <dennis(a)kernel.org>
Cc: Josef Bacik <josef(a)toxicpanda.com>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Conflicts:
block/blk-iolatency.c
block/blk-rq-qos.c
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
block/blk-iolatency.c | 2 +-
block/blk-rq-qos.c | 6 ++++++
include/linux/blk_types.h | 1 +
3 files changed, 8 insertions(+), 1 deletion(-)
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 8897f7c579ec7..b10336b401f6e 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -563,7 +563,7 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio)
int inflight = 0;
blkg = bio->bi_blkg;
- if (!blkg)
+ if (!blkg || !bio_flagged(bio, BIO_TRACKED))
return;
iolat = blkg_to_lat(bio->bi_blkg);
diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
index 8f277518b21ba..86456d016cf92 100644
--- a/block/blk-rq-qos.c
+++ b/block/blk-rq-qos.c
@@ -72,6 +72,12 @@ void rq_qos_throttle(struct request_queue *q, struct bio *bio,
{
struct rq_qos *rqos;
+ /*
+ * BIO_TRACKED lets controllers know that a bio went through the
+ * normal rq_qos path.
+ */
+ bio_set_flag(bio, BIO_TRACKED);
+
for(rqos = q->rq_qos; rqos; rqos = rqos->next) {
if (rqos->ops->throttle)
rqos->ops->throttle(rqos, bio, lock);
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 29ffec0814e9a..78e701f34dc42 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -236,6 +236,7 @@ struct bio {
#define BIO_TRACE_COMPLETION 10 /* bio_endio() should trace the final completion
* of this bio. */
#define BIO_QUEUE_ENTERED 11 /* can use blk_queue_enter_live() */
+#define BIO_TRACKED 12 /* set if bio goes through the rq_qos path */
/* See BVEC_POOL_OFFSET below before adding new flags */
--
2.25.1

[PATCH kernel-4.19] ovl: fix missing negative dentry check in ovl_rename()
by Yang Yingliang 13 Oct '21
From: Zheng Liang <zhengliang6(a)huawei.com>
mainline inclusion
from mainline-v5.15-rc5
commit a295aef603e109a47af355477326bd41151765b6
category: bugfix
bugzilla: NA
CVE: NA
-------------------------------------------------
The following reproducer
mkdir lower upper work merge
touch lower/old
touch lower/new
mount -t overlay overlay -olowerdir=lower,upperdir=upper,workdir=work merge
rm merge/new
mv merge/old merge/new & unlink upper/new
may result in this race:
PROCESS A:
rename("merge/old", "merge/new");
overwrite=true,ovl_lower_positive(old)=true,
ovl_dentry_is_whiteout(new)=true -> flags |= RENAME_EXCHANGE
PROCESS B:
unlink("upper/new");
PROCESS A:
lookup newdentry in new_upperdir
call vfs_rename() with negative newdentry and RENAME_EXCHANGE
Fix by adding the missing check for negative newdentry.
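In outline, the fix adds one more branch to the target-dentry checks. A simplified standalone sketch of that decision (hypothetical helper, not the overlayfs code):

#include <stdbool.h>
#include <stdio.h>

#define RENAME_EXCHANGE	(1 << 1)

/* Simplified view of the target checks: a target that went negative under
 * us (e.g. unlinked concurrently) must not be passed to an exchange rename. */
static bool target_ok(bool newdentry_negative, bool new_opaque,
		      bool is_whiteout, unsigned int flags)
{
	if (!newdentry_negative) {
		/* Positive target: only acceptable if it is an opaque whiteout. */
		if (!new_opaque || !is_whiteout)
			return false;
	} else {
		/* Negative target: RENAME_EXCHANGE would be invalid, bail out. */
		if (flags & RENAME_EXCHANGE)
			return false;
	}
	return true;
}

int main(void)
{
	/* The race from the reproducer: exchange requested, target went negative. */
	printf("exchange with negative target allowed? %s\n",
	       target_ok(true, true, true, RENAME_EXCHANGE) ? "yes" : "no");
	return 0;
}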
Signed-off-by: Zheng Liang <zhengliang6(a)huawei.com>
Fixes: e9be9d5e76e3 ("overlay filesystem")
Cc: <stable(a)vger.kernel.org> # v3.18
Signed-off-by: Miklos Szeredi <mszeredi(a)redhat.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/overlayfs/dir.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
index 9902c1706be91..1de8ef95ad960 100644
--- a/fs/overlayfs/dir.c
+++ b/fs/overlayfs/dir.c
@@ -1188,9 +1188,13 @@ static int ovl_rename(struct inode *olddir, struct dentry *old,
goto out_dput;
}
} else {
- if (!d_is_negative(newdentry) &&
- (!new_opaque || !ovl_is_whiteout(newdentry)))
- goto out_dput;
+ if (!d_is_negative(newdentry)) {
+ if (!new_opaque || !ovl_is_whiteout(newdentry))
+ goto out_dput;
+ } else {
+ if (flags & RENAME_EXCHANGE)
+ goto out_dput;
+ }
}
if (olddentry == trap)
--
2.25.1

[PATCH openEuler-1.0-LTS 1/2] Revert "ext4: fix panic when mount failed with parallel flush_stashed_error_work"
by Yang Yingliang 13 Oct '21
From: yangerkun <yangerkun(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 172146, https://gitee.com/openeuler/kernel/issues/I46HJ6
CVE: NA
---------------------------
This reverts commit fde31a90246972ddb417381d2b26831a001edaa1.
We will include the mainline patch, revert this now.
Signed-off-by: yangerkun <yangerkun(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/ext4/super.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 280e991e61f47..d793e597c0623 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -4832,7 +4832,6 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
sbi->s_ea_block_cache = NULL;
}
if (sbi->s_journal) {
- flush_work(&sbi->s_error_work);
jbd2_journal_destroy(sbi->s_journal);
sbi->s_journal = NULL;
}
--
2.25.1

[PATCH kernel-4.19 1/2] Revert "ext4: fix panic when mount failed with parallel flush_stashed_error_work"
by Yang Yingliang 13 Oct '21
From: yangerkun <yangerkun(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 172146, https://gitee.com/openeuler/kernel/issues/I46HJ6
CVE: NA
---------------------------
This reverts commit 881543818e4af30765ffb2604cd70a90f6007427.
We will include the mainline patch, revert this now.
Signed-off-by: yangerkun <yangerkun(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/ext4/super.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 280e991e61f47..d793e597c0623 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -4832,7 +4832,6 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
sbi->s_ea_block_cache = NULL;
}
if (sbi->s_journal) {
- flush_work(&sbi->s_error_work);
jbd2_journal_destroy(sbi->s_journal);
sbi->s_journal = NULL;
}
--
2.25.1

13 Oct '21
From: zhengbin <zhengbin13(a)huawei.com>
mainline inclusion
from mainline-5.6-rc1
commit 4756ee183f25b1fa2a7306a439da3bcd687244e0
category: bugfix
bugzilla: 181871, https://gitee.com/openeuler/kernel/issues/I4DP71
CVE: NA
---------------------------
Fixes coccicheck warning:
fs/ext4/extents.c:5271:6-12: WARNING: Assignment of 0/1 to bool variable
fs/ext4/extents.c:5287:4-10: WARNING: Assignment of 0/1 to bool variable
Reported-by: Hulk Robot <hulkci(a)huawei.com>
Signed-off-by: zhengbin <zhengbin13(a)huawei.com>
Reviewed-by: Jan Kara <jack(a)suse.cz>
Link: https://lore.kernel.org/r/1577241959-138695-1-git-send-email-zhengbin13@hua…
Signed-off-by: Theodore Ts'o <tytso(a)mit.edu>
Signed-off-by: yangerkun <yangerkun(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/ext4/extents.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 6a0aab71a01f0..fe7398a5888b6 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -5295,7 +5295,7 @@ ext4_ext_shift_path_extents(struct ext4_ext_path *path, ext4_lblk_t shift,
{
int depth, err = 0;
struct ext4_extent *ex_start, *ex_last;
- bool update = 0;
+ bool update = false;
depth = path->p_depth;
while (depth >= 0) {
@@ -5311,7 +5311,7 @@ ext4_ext_shift_path_extents(struct ext4_ext_path *path, ext4_lblk_t shift,
goto out;
if (ex_start == EXT_FIRST_EXTENT(path[depth].p_hdr))
- update = 1;
+ update = true;
while (ex_start <= ex_last) {
if (SHIFT == SHIFT_LEFT) {
--
2.25.1

13 Oct '21
This initial commit contains Ramaxel's spnic module
Changes since v3:
- Change VXLAN_OFFLOAD_PORT_LE to VXLAN_OFFLOAD_PORT_BE;
Changes since v2:
- Solve the "new blank line at EOF" error;
- Change SPHW_COMM_H to SPHW_HW_COMM_H;
Changes since v1:
- Solve the compile error
Yanling Song (2):
net: spnic: initial commit the common module of Ramaxel NIC driver
spnic: add NIC layer
arch/arm64/configs/openeuler_defconfig | 2 +
arch/x86/configs/openeuler_defconfig | 2 +
drivers/net/ethernet/Kconfig | 1 +
drivers/net/ethernet/Makefile | 1 +
drivers/net/ethernet/ramaxel/Kconfig | 20 +
drivers/net/ethernet/ramaxel/Makefile | 6 +
drivers/net/ethernet/ramaxel/spnic/Kconfig | 14 +
drivers/net/ethernet/ramaxel/spnic/Makefile | 39 +
.../ethernet/ramaxel/spnic/hw/sphw_api_cmd.c | 1165 +++++++++++
.../ethernet/ramaxel/spnic/hw/sphw_api_cmd.h | 277 +++
.../ethernet/ramaxel/spnic/hw/sphw_cfg_cmd.h | 126 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c | 1605 +++++++++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_cmdq.h | 195 ++
.../ethernet/ramaxel/spnic/hw/sphw_comm_cmd.h | 60 +
.../ramaxel/spnic/hw/sphw_comm_msg_intf.h | 273 +++
.../ethernet/ramaxel/spnic/hw/sphw_common.c | 88 +
.../ethernet/ramaxel/spnic/hw/sphw_common.h | 118 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_crm.h | 984 +++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_csr.h | 171 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_eqs.c | 1374 +++++++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_eqs.h | 157 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_hw.h | 649 ++++++
.../ethernet/ramaxel/spnic/hw/sphw_hw_cfg.c | 1338 ++++++++++++
.../ethernet/ramaxel/spnic/hw/sphw_hw_cfg.h | 326 +++
.../ethernet/ramaxel/spnic/hw/sphw_hw_comm.c | 1253 ++++++++++++
.../ethernet/ramaxel/spnic/hw/sphw_hw_comm.h | 42 +
.../ethernet/ramaxel/spnic/hw/sphw_hwdev.c | 1402 +++++++++++++
.../ethernet/ramaxel/spnic/hw/sphw_hwdev.h | 93 +
.../net/ethernet/ramaxel/spnic/hw/sphw_hwif.c | 911 +++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_hwif.h | 102 +
.../net/ethernet/ramaxel/spnic/hw/sphw_mbox.c | 1808 +++++++++++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_mbox.h | 273 +++
.../net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c | 1382 +++++++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_mgmt.h | 156 ++
.../ramaxel/spnic/hw/sphw_mgmt_msg_base.h | 19 +
.../net/ethernet/ramaxel/spnic/hw/sphw_mt.h | 534 +++++
.../ramaxel/spnic/hw/sphw_prof_adap.c | 94 +
.../ramaxel/spnic/hw/sphw_prof_adap.h | 49 +
.../ethernet/ramaxel/spnic/hw/sphw_profile.h | 36 +
.../net/ethernet/ramaxel/spnic/hw/sphw_wq.c | 152 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_wq.h | 119 ++
.../net/ethernet/ramaxel/spnic/spnic_dbg.c | 752 +++++++
.../net/ethernet/ramaxel/spnic/spnic_dcb.c | 965 +++++++++
.../net/ethernet/ramaxel/spnic/spnic_dcb.h | 56 +
.../ethernet/ramaxel/spnic/spnic_dev_mgmt.c | 811 ++++++++
.../ethernet/ramaxel/spnic/spnic_dev_mgmt.h | 78 +
.../ethernet/ramaxel/spnic/spnic_ethtool.c | 988 +++++++++
.../ramaxel/spnic/spnic_ethtool_stats.c | 1035 ++++++++++
.../net/ethernet/ramaxel/spnic/spnic_filter.c | 411 ++++
.../net/ethernet/ramaxel/spnic/spnic_irq.c | 178 ++
.../net/ethernet/ramaxel/spnic/spnic_lld.c | 937 +++++++++
.../net/ethernet/ramaxel/spnic/spnic_lld.h | 75 +
.../ethernet/ramaxel/spnic/spnic_mag_cfg.c | 778 +++++++
.../ethernet/ramaxel/spnic/spnic_mag_cmd.h | 643 ++++++
.../net/ethernet/ramaxel/spnic/spnic_main.c | 925 +++++++++
.../ramaxel/spnic/spnic_mgmt_interface.h | 617 ++++++
.../ethernet/ramaxel/spnic/spnic_netdev_ops.c | 1526 ++++++++++++++
.../net/ethernet/ramaxel/spnic/spnic_nic.h | 148 ++
.../ethernet/ramaxel/spnic/spnic_nic_cfg.c | 1321 ++++++++++++
.../ethernet/ramaxel/spnic/spnic_nic_cfg.h | 724 +++++++
.../ethernet/ramaxel/spnic/spnic_nic_cfg_vf.c | 647 ++++++
.../ethernet/ramaxel/spnic/spnic_nic_cmd.h | 105 +
.../ethernet/ramaxel/spnic/spnic_nic_dbg.c | 151 ++
.../ethernet/ramaxel/spnic/spnic_nic_dbg.h | 16 +
.../ethernet/ramaxel/spnic/spnic_nic_dev.h | 352 ++++
.../ethernet/ramaxel/spnic/spnic_nic_event.c | 506 +++++
.../net/ethernet/ramaxel/spnic/spnic_nic_io.c | 1123 ++++++++++
.../net/ethernet/ramaxel/spnic/spnic_nic_io.h | 309 +++
.../net/ethernet/ramaxel/spnic/spnic_nic_qp.h | 421 ++++
.../net/ethernet/ramaxel/spnic/spnic_ntuple.c | 841 ++++++++
.../ethernet/ramaxel/spnic/spnic_pci_id_tbl.h | 12 +
.../net/ethernet/ramaxel/spnic/spnic_rss.c | 750 +++++++
.../net/ethernet/ramaxel/spnic/spnic_rss.h | 48 +
.../ethernet/ramaxel/spnic/spnic_rss_cfg.c | 390 ++++
drivers/net/ethernet/ramaxel/spnic/spnic_rx.c | 1249 ++++++++++++
drivers/net/ethernet/ramaxel/spnic/spnic_rx.h | 118 ++
.../net/ethernet/ramaxel/spnic/spnic_sriov.c | 200 ++
.../net/ethernet/ramaxel/spnic/spnic_sriov.h | 24 +
drivers/net/ethernet/ramaxel/spnic/spnic_tx.c | 879 ++++++++
drivers/net/ethernet/ramaxel/spnic/spnic_tx.h | 129 ++
80 files changed, 38654 insertions(+)
create mode 100644 drivers/net/ethernet/ramaxel/Kconfig
create mode 100644 drivers/net/ethernet/ramaxel/Makefile
create mode 100644 drivers/net/ethernet/ramaxel/spnic/Kconfig
create mode 100644 drivers/net/ethernet/ramaxel/spnic/Makefile
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_api_cmd.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_api_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_cfg_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_cmdq.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_comm_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_comm_msg_intf.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_common.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_common.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_crm.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_csr.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_eqs.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_eqs.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_cfg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_cfg.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_comm.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_comm.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwdev.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwdev.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwif.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwif.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt_msg_base.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mt.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_prof_adap.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_prof_adap.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_profile.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_wq.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_wq.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dbg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dcb.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dcb.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dev_mgmt.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dev_mgmt.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_ethtool.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_ethtool_stats.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_filter.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_irq.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_lld.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_lld.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_mag_cfg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_mag_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_main.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_mgmt_interface.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_netdev_ops.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg_vf.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_dbg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_dbg.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_dev.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_event.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_io.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_io.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_qp.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_ntuple.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_pci_id_tbl.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rss.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rss.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rss_cfg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rx.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rx.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_sriov.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_sriov.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_tx.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_tx.h
--
2.30.0

[PATCH openEuler-5.10 001/136] module: limit enabling module.sig_enforce
by Zheng Zengkai 12 Oct '21
From: Mimi Zohar <zohar(a)linux.ibm.com>
stable inclusion
from stable-5.10.47
commit 3051f230f19feb02dfe5b36794f8c883b576e184
bugzilla: 172973 https://gitee.com/openeuler/kernel/issues/I4DAKB
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 0c18f29aae7ce3dadd26d8ee3505d07cc982df75 ]
Irrespective as to whether CONFIG_MODULE_SIG is configured, specifying
"module.sig_enforce=1" on the boot command line sets "sig_enforce".
Only allow "sig_enforce" to be set when CONFIG_MODULE_SIG is configured.
This patch makes the presence of /sys/module/module/parameters/sig_enforce
dependent on CONFIG_MODULE_SIG=y.
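The shape of the fix is the common compile-time fallback: when the option is off, the variable and its parameter are simply not built, and a constant macro keeps the remaining readers compiling. A generic userspace sketch of that pattern (hypothetical option and function names, not the module loader code):

#include <stdbool.h>
#include <stdio.h>

/* Toggle this to mimic the config option being enabled or not. */
#define CONFIG_EXAMPLE_SIG 1

#if CONFIG_EXAMPLE_SIG
static bool sig_enforce;		/* real storage; could back a writable parameter */
static void set_sig_enforced(void)	{ sig_enforce = true; }
#else
#define sig_enforce false		/* no storage, no parameter, users still compile */
static void set_sig_enforced(void)	{ }
#endif

static bool is_sig_enforced(void)
{
	return sig_enforce;
}

int main(void)
{
	set_sig_enforced();
	printf("enforced: %d\n", is_sig_enforced());
	return 0;
}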
Fixes: fda784e50aac ("module: export module signature enforcement status")
Reported-by: Nayna Jain <nayna(a)linux.ibm.com>
Tested-by: Mimi Zohar <zohar(a)linux.ibm.com>
Tested-by: Jessica Yu <jeyu(a)kernel.org>
Signed-off-by: Mimi Zohar <zohar(a)linux.ibm.com>
Signed-off-by: Jessica Yu <jeyu(a)kernel.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Chen Jun <chenjun102(a)huawei.com>
Acked-by: Weilong Chen <chenweilong(a)huawei.com>
Signed-off-by: Chen Jun <chenjun102(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
kernel/module.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/kernel/module.c b/kernel/module.c
index a54dda703feb..c5af21dcb873 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -272,9 +272,18 @@ static void module_assert_mutex_or_preempt(void)
#endif
}
+#ifdef CONFIG_MODULE_SIG
static bool sig_enforce = IS_ENABLED(CONFIG_MODULE_SIG_FORCE);
module_param(sig_enforce, bool_enable_only, 0644);
+void set_module_sig_enforced(void)
+{
+ sig_enforce = true;
+}
+#else
+#define sig_enforce false
+#endif
+
/*
* Export sig_enforce kernel cmdline parameter to allow other subsystems rely
* on that instead of directly to CONFIG_MODULE_SIG_FORCE config.
@@ -285,11 +294,6 @@ bool is_module_sig_enforced(void)
}
EXPORT_SYMBOL(is_module_sig_enforced);
-void set_module_sig_enforced(void)
-{
- sig_enforce = true;
-}
-
/* Block module loading/unloading? */
int modules_disabled = 0;
core_param(nomodule, modules_disabled, bint, 0);
--
2.20.1

[Meeting Notice] openEuler kernel technical sharing session No. 13 & biweekly regular meeting Time: 2021-10-15 14:00-16:30
by Meeting Book 12 Oct '21

[PATCH openEuler-1.0-LTS] net: 6pack: fix slab-out-of-bounds in decode_data
by Yang Yingliang 12 Oct '21
From: Pavel Skripkin <paskripkin(a)gmail.com>
stable inclusion
from linux-4.19.205
commit 4e370cc081a78ee23528311ca58fd98a06768ec7
CVE: CVE-2021-42008
--------------------------------
[ Upstream commit 19d1532a187669ce86d5a2696eb7275310070793 ]
Syzbot reported slab-out-of bounds write in decode_data().
The problem was in missing validation checks.
Syzbot's reproducer generated malicious input, which caused
decode_data() to be called repeatedly from sixpack_decode(). Since
rx_count_cooked is only 400 bytes and no one has reported that
400 bytes is not enough, just check whether the input is malicious
and complain about the buffer overrun.
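The fix itself is a plain bounds check before appending to a fixed-size buffer. A standalone sketch of the pattern (hypothetical names, size mirroring the description above, not the 6pack code):

#include <stdio.h>

#define COOKED_SIZE 400		/* mirrors the 400-byte cooked buffer */

struct decoder {
	unsigned char cooked_buf[COOKED_SIZE];
	int rx_count_cooked;
};

/* Each call may append up to three decoded bytes, so refuse input that
 * would run past the end of cooked_buf instead of writing blindly. */
static int decode_chunk(struct decoder *d, const unsigned char *raw)
{
	if (d->rx_count_cooked + 2 >= COOKED_SIZE) {
		fprintf(stderr, "cooked buffer overrun, dropping frame\n");
		d->rx_count_cooked = 0;
		return -1;
	}
	d->cooked_buf[d->rx_count_cooked++] = raw[0];
	d->cooked_buf[d->rx_count_cooked++] = raw[1];
	d->cooked_buf[d->rx_count_cooked++] = raw[2];
	return 0;
}

int main(void)
{
	struct decoder d = { .rx_count_cooked = 0 };
	unsigned char raw[3] = { 1, 2, 3 };
	int i, dropped = 0;

	/* Feed more chunks than the buffer can hold; the check catches it. */
	for (i = 0; i < 200; i++)
		if (decode_chunk(&d, raw))
			dropped++;

	printf("dropped %d oversized chunk(s)\n", dropped);
	return 0;
}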
Fail log:
==================================================================
BUG: KASAN: slab-out-of-bounds in drivers/net/hamradio/6pack.c:843
Write of size 1 at addr ffff888087c5544e by task kworker/u4:0/7
CPU: 0 PID: 7 Comm: kworker/u4:0 Not tainted 5.6.0-rc3-syzkaller #0
...
Workqueue: events_unbound flush_to_ldisc
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x197/0x210 lib/dump_stack.c:118
print_address_description.constprop.0.cold+0xd4/0x30b mm/kasan/report.c:374
__kasan_report.cold+0x1b/0x32 mm/kasan/report.c:506
kasan_report+0x12/0x20 mm/kasan/common.c:641
__asan_report_store1_noabort+0x17/0x20 mm/kasan/generic_report.c:137
decode_data.part.0+0x23b/0x270 drivers/net/hamradio/6pack.c:843
decode_data drivers/net/hamradio/6pack.c:965 [inline]
sixpack_decode drivers/net/hamradio/6pack.c:968 [inline]
Reported-and-tested-by: syzbot+fc8cd9a673d4577fb2e4(a)syzkaller.appspotmail.com
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Pavel Skripkin <paskripkin(a)gmail.com>
Reviewed-by: Dan Carpenter <dan.carpenter(a)oracle.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
---
drivers/net/hamradio/6pack.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/hamradio/6pack.c b/drivers/net/hamradio/6pack.c
index 8c636c4932274..1001e9a2edd4f 100644
--- a/drivers/net/hamradio/6pack.c
+++ b/drivers/net/hamradio/6pack.c
@@ -859,6 +859,12 @@ static void decode_data(struct sixpack *sp, unsigned char inbyte)
return;
}
+ if (sp->rx_count_cooked + 2 >= sizeof(sp->cooked_buf)) {
+ pr_err("6pack: cooked buffer overrun, data loss\n");
+ sp->rx_count = 0;
+ return;
+ }
+
buf = sp->raw_buf;
sp->cooked_buf[sp->rx_count_cooked++] =
buf[0] | ((buf[1] << 2) & 0xc0);
--
2.25.1

12 Oct '21
This initial commit contains Ramaxel's spnic module
Changes since v2:
- Solve the "new blank line at EOF" error;
- Change SPHW_COMM_H to SPHW_HW_COMM_H;
Changes since v1:
- Solve the compile error
Yanling Song (2):
net: spnic: initial commit the common module of Ramaxel NIC driver
spnic: add NIC layer
arch/arm64/configs/openeuler_defconfig | 2 +
arch/x86/configs/openeuler_defconfig | 2 +
drivers/net/ethernet/Kconfig | 1 +
drivers/net/ethernet/Makefile | 1 +
drivers/net/ethernet/ramaxel/Kconfig | 20 +
drivers/net/ethernet/ramaxel/Makefile | 6 +
drivers/net/ethernet/ramaxel/spnic/Kconfig | 14 +
drivers/net/ethernet/ramaxel/spnic/Makefile | 39 +
.../ethernet/ramaxel/spnic/hw/sphw_api_cmd.c | 1165 +++++++++++
.../ethernet/ramaxel/spnic/hw/sphw_api_cmd.h | 277 +++
.../ethernet/ramaxel/spnic/hw/sphw_cfg_cmd.h | 126 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c | 1605 +++++++++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_cmdq.h | 195 ++
.../ethernet/ramaxel/spnic/hw/sphw_comm_cmd.h | 60 +
.../ramaxel/spnic/hw/sphw_comm_msg_intf.h | 273 +++
.../ethernet/ramaxel/spnic/hw/sphw_common.c | 88 +
.../ethernet/ramaxel/spnic/hw/sphw_common.h | 118 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_crm.h | 984 +++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_csr.h | 171 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_eqs.c | 1374 +++++++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_eqs.h | 157 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_hw.h | 649 ++++++
.../ethernet/ramaxel/spnic/hw/sphw_hw_cfg.c | 1338 ++++++++++++
.../ethernet/ramaxel/spnic/hw/sphw_hw_cfg.h | 326 +++
.../ethernet/ramaxel/spnic/hw/sphw_hw_comm.c | 1253 ++++++++++++
.../ethernet/ramaxel/spnic/hw/sphw_hw_comm.h | 42 +
.../ethernet/ramaxel/spnic/hw/sphw_hwdev.c | 1402 +++++++++++++
.../ethernet/ramaxel/spnic/hw/sphw_hwdev.h | 93 +
.../net/ethernet/ramaxel/spnic/hw/sphw_hwif.c | 911 +++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_hwif.h | 102 +
.../net/ethernet/ramaxel/spnic/hw/sphw_mbox.c | 1808 +++++++++++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_mbox.h | 273 +++
.../net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c | 1382 +++++++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_mgmt.h | 156 ++
.../ramaxel/spnic/hw/sphw_mgmt_msg_base.h | 19 +
.../net/ethernet/ramaxel/spnic/hw/sphw_mt.h | 534 +++++
.../ramaxel/spnic/hw/sphw_prof_adap.c | 94 +
.../ramaxel/spnic/hw/sphw_prof_adap.h | 49 +
.../ethernet/ramaxel/spnic/hw/sphw_profile.h | 36 +
.../net/ethernet/ramaxel/spnic/hw/sphw_wq.c | 152 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_wq.h | 119 ++
.../net/ethernet/ramaxel/spnic/spnic_dbg.c | 752 +++++++
.../net/ethernet/ramaxel/spnic/spnic_dcb.c | 965 +++++++++
.../net/ethernet/ramaxel/spnic/spnic_dcb.h | 56 +
.../ethernet/ramaxel/spnic/spnic_dev_mgmt.c | 811 ++++++++
.../ethernet/ramaxel/spnic/spnic_dev_mgmt.h | 78 +
.../ethernet/ramaxel/spnic/spnic_ethtool.c | 988 +++++++++
.../ramaxel/spnic/spnic_ethtool_stats.c | 1035 ++++++++++
.../net/ethernet/ramaxel/spnic/spnic_filter.c | 411 ++++
.../net/ethernet/ramaxel/spnic/spnic_irq.c | 178 ++
.../net/ethernet/ramaxel/spnic/spnic_lld.c | 937 +++++++++
.../net/ethernet/ramaxel/spnic/spnic_lld.h | 75 +
.../ethernet/ramaxel/spnic/spnic_mag_cfg.c | 778 +++++++
.../ethernet/ramaxel/spnic/spnic_mag_cmd.h | 643 ++++++
.../net/ethernet/ramaxel/spnic/spnic_main.c | 925 +++++++++
.../ramaxel/spnic/spnic_mgmt_interface.h | 617 ++++++
.../ethernet/ramaxel/spnic/spnic_netdev_ops.c | 1526 ++++++++++++++
.../net/ethernet/ramaxel/spnic/spnic_nic.h | 148 ++
.../ethernet/ramaxel/spnic/spnic_nic_cfg.c | 1321 ++++++++++++
.../ethernet/ramaxel/spnic/spnic_nic_cfg.h | 724 +++++++
.../ethernet/ramaxel/spnic/spnic_nic_cfg_vf.c | 647 ++++++
.../ethernet/ramaxel/spnic/spnic_nic_cmd.h | 105 +
.../ethernet/ramaxel/spnic/spnic_nic_dbg.c | 151 ++
.../ethernet/ramaxel/spnic/spnic_nic_dbg.h | 16 +
.../ethernet/ramaxel/spnic/spnic_nic_dev.h | 352 ++++
.../ethernet/ramaxel/spnic/spnic_nic_event.c | 506 +++++
.../net/ethernet/ramaxel/spnic/spnic_nic_io.c | 1123 ++++++++++
.../net/ethernet/ramaxel/spnic/spnic_nic_io.h | 309 +++
.../net/ethernet/ramaxel/spnic/spnic_nic_qp.h | 421 ++++
.../net/ethernet/ramaxel/spnic/spnic_ntuple.c | 841 ++++++++
.../ethernet/ramaxel/spnic/spnic_pci_id_tbl.h | 12 +
.../net/ethernet/ramaxel/spnic/spnic_rss.c | 750 +++++++
.../net/ethernet/ramaxel/spnic/spnic_rss.h | 48 +
.../ethernet/ramaxel/spnic/spnic_rss_cfg.c | 390 ++++
drivers/net/ethernet/ramaxel/spnic/spnic_rx.c | 1249 ++++++++++++
drivers/net/ethernet/ramaxel/spnic/spnic_rx.h | 118 ++
.../net/ethernet/ramaxel/spnic/spnic_sriov.c | 200 ++
.../net/ethernet/ramaxel/spnic/spnic_sriov.h | 24 +
drivers/net/ethernet/ramaxel/spnic/spnic_tx.c | 879 ++++++++
drivers/net/ethernet/ramaxel/spnic/spnic_tx.h | 129 ++
80 files changed, 38654 insertions(+)
create mode 100644 drivers/net/ethernet/ramaxel/Kconfig
create mode 100644 drivers/net/ethernet/ramaxel/Makefile
create mode 100644 drivers/net/ethernet/ramaxel/spnic/Kconfig
create mode 100644 drivers/net/ethernet/ramaxel/spnic/Makefile
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_api_cmd.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_api_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_cfg_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_cmdq.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_comm_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_comm_msg_intf.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_common.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_common.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_crm.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_csr.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_eqs.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_eqs.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_cfg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_cfg.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_comm.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_comm.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwdev.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwdev.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwif.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwif.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt_msg_base.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mt.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_prof_adap.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_prof_adap.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_profile.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_wq.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_wq.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dbg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dcb.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dcb.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dev_mgmt.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dev_mgmt.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_ethtool.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_ethtool_stats.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_filter.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_irq.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_lld.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_lld.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_mag_cfg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_mag_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_main.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_mgmt_interface.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_netdev_ops.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg_vf.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_dbg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_dbg.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_dev.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_event.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_io.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_io.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_qp.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_ntuple.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_pci_id_tbl.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rss.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rss.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rss_cfg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rx.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rx.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_sriov.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_sriov.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_tx.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_tx.h
--
2.30.0

12 Oct '21
This initial commit contains Ramaxel's spnic module
Changes since v1:
- Solve the compile error
Yanling Song (2):
spnic: initial commit the common module of Ramaxel NIC driver
spnic: add NIC layer
arch/arm64/configs/openeuler_defconfig | 2 +
arch/x86/configs/openeuler_defconfig | 2 +
drivers/net/ethernet/Kconfig | 1 +
drivers/net/ethernet/Makefile | 1 +
drivers/net/ethernet/ramaxel/Kconfig | 20 +
drivers/net/ethernet/ramaxel/Makefile | 6 +
drivers/net/ethernet/ramaxel/spnic/Kconfig | 15 +
drivers/net/ethernet/ramaxel/spnic/Makefile | 40 +
.../ethernet/ramaxel/spnic/hw/sphw_api_cmd.c | 1165 +++++++++++
.../ethernet/ramaxel/spnic/hw/sphw_api_cmd.h | 277 +++
.../ethernet/ramaxel/spnic/hw/sphw_cfg_cmd.h | 126 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c | 1606 +++++++++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_cmdq.h | 196 ++
.../ethernet/ramaxel/spnic/hw/sphw_comm_cmd.h | 60 +
.../ramaxel/spnic/hw/sphw_comm_msg_intf.h | 273 +++
.../ethernet/ramaxel/spnic/hw/sphw_common.c | 88 +
.../ethernet/ramaxel/spnic/hw/sphw_common.h | 118 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_crm.h | 984 +++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_csr.h | 171 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_eqs.c | 1374 +++++++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_eqs.h | 157 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_hw.h | 649 ++++++
.../ethernet/ramaxel/spnic/hw/sphw_hw_cfg.c | 1339 ++++++++++++
.../ethernet/ramaxel/spnic/hw/sphw_hw_cfg.h | 327 +++
.../ethernet/ramaxel/spnic/hw/sphw_hw_comm.c | 1253 ++++++++++++
.../ethernet/ramaxel/spnic/hw/sphw_hw_comm.h | 42 +
.../ethernet/ramaxel/spnic/hw/sphw_hwdev.c | 1402 +++++++++++++
.../ethernet/ramaxel/spnic/hw/sphw_hwdev.h | 93 +
.../net/ethernet/ramaxel/spnic/hw/sphw_hwif.c | 911 +++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_hwif.h | 102 +
.../net/ethernet/ramaxel/spnic/hw/sphw_mbox.c | 1808 +++++++++++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_mbox.h | 274 +++
.../net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c | 1382 +++++++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_mgmt.h | 156 ++
.../ramaxel/spnic/hw/sphw_mgmt_msg_base.h | 19 +
.../net/ethernet/ramaxel/spnic/hw/sphw_mt.h | 534 +++++
.../ramaxel/spnic/hw/sphw_prof_adap.c | 94 +
.../ramaxel/spnic/hw/sphw_prof_adap.h | 49 +
.../ethernet/ramaxel/spnic/hw/sphw_profile.h | 36 +
.../net/ethernet/ramaxel/spnic/hw/sphw_wq.c | 152 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_wq.h | 119 ++
.../net/ethernet/ramaxel/spnic/spnic_dbg.c | 753 +++++++
.../net/ethernet/ramaxel/spnic/spnic_dcb.c | 965 +++++++++
.../net/ethernet/ramaxel/spnic/spnic_dcb.h | 56 +
.../ethernet/ramaxel/spnic/spnic_dev_mgmt.c | 811 ++++++++
.../ethernet/ramaxel/spnic/spnic_dev_mgmt.h | 78 +
.../ethernet/ramaxel/spnic/spnic_ethtool.c | 989 +++++++++
.../ramaxel/spnic/spnic_ethtool_stats.c | 1035 ++++++++++
.../net/ethernet/ramaxel/spnic/spnic_filter.c | 412 ++++
.../net/ethernet/ramaxel/spnic/spnic_irq.c | 178 ++
.../net/ethernet/ramaxel/spnic/spnic_lld.c | 937 +++++++++
.../net/ethernet/ramaxel/spnic/spnic_lld.h | 75 +
.../ethernet/ramaxel/spnic/spnic_mag_cfg.c | 778 +++++++
.../ethernet/ramaxel/spnic/spnic_mag_cmd.h | 643 ++++++
.../net/ethernet/ramaxel/spnic/spnic_main.c | 925 +++++++++
.../ramaxel/spnic/spnic_mgmt_interface.h | 617 ++++++
.../ethernet/ramaxel/spnic/spnic_netdev_ops.c | 1526 ++++++++++++++
.../net/ethernet/ramaxel/spnic/spnic_nic.h | 148 ++
.../ethernet/ramaxel/spnic/spnic_nic_cfg.c | 1321 ++++++++++++
.../ethernet/ramaxel/spnic/spnic_nic_cfg.h | 724 +++++++
.../ethernet/ramaxel/spnic/spnic_nic_cfg_vf.c | 647 ++++++
.../ethernet/ramaxel/spnic/spnic_nic_cmd.h | 105 +
.../ethernet/ramaxel/spnic/spnic_nic_dbg.c | 151 ++
.../ethernet/ramaxel/spnic/spnic_nic_dbg.h | 16 +
.../ethernet/ramaxel/spnic/spnic_nic_dev.h | 353 ++++
.../ethernet/ramaxel/spnic/spnic_nic_event.c | 506 +++++
.../net/ethernet/ramaxel/spnic/spnic_nic_io.c | 1124 ++++++++++
.../net/ethernet/ramaxel/spnic/spnic_nic_io.h | 309 +++
.../net/ethernet/ramaxel/spnic/spnic_nic_qp.h | 421 ++++
.../net/ethernet/ramaxel/spnic/spnic_ntuple.c | 841 ++++++++
.../ethernet/ramaxel/spnic/spnic_pci_id_tbl.h | 13 +
.../net/ethernet/ramaxel/spnic/spnic_rss.c | 750 +++++++
.../net/ethernet/ramaxel/spnic/spnic_rss.h | 48 +
.../ethernet/ramaxel/spnic/spnic_rss_cfg.c | 391 ++++
drivers/net/ethernet/ramaxel/spnic/spnic_rx.c | 1250 ++++++++++++
drivers/net/ethernet/ramaxel/spnic/spnic_rx.h | 118 ++
.../net/ethernet/ramaxel/spnic/spnic_sriov.c | 200 ++
.../net/ethernet/ramaxel/spnic/spnic_sriov.h | 24 +
drivers/net/ethernet/ramaxel/spnic/spnic_tx.c | 879 ++++++++
drivers/net/ethernet/ramaxel/spnic/spnic_tx.h | 129 ++
80 files changed, 38669 insertions(+)
create mode 100644 drivers/net/ethernet/ramaxel/Kconfig
create mode 100644 drivers/net/ethernet/ramaxel/Makefile
create mode 100644 drivers/net/ethernet/ramaxel/spnic/Kconfig
create mode 100644 drivers/net/ethernet/ramaxel/spnic/Makefile
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_api_cmd.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_api_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_cfg_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_cmdq.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_comm_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_comm_msg_intf.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_common.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_common.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_crm.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_csr.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_eqs.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_eqs.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_cfg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_cfg.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_comm.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_comm.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwdev.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwdev.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwif.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwif.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt_msg_base.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mt.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_prof_adap.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_prof_adap.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_profile.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_wq.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_wq.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dbg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dcb.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dcb.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dev_mgmt.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dev_mgmt.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_ethtool.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_ethtool_stats.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_filter.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_irq.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_lld.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_lld.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_mag_cfg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_mag_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_main.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_mgmt_interface.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_netdev_ops.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg_vf.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_dbg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_dbg.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_dev.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_event.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_io.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_io.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_qp.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_ntuple.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_pci_id_tbl.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rss.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rss.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rss_cfg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rx.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rx.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_sriov.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_sriov.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_tx.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_tx.h
--
2.30.0
2
5

[PATCH kernel-4.19] bpf: Fix integer overflow in prealloc_elems_and_freelist()
by Yang Yingliang 11 Oct '21
by Yang Yingliang 11 Oct '21
11 Oct '21
From: Xu Kuohai <xukuohai(a)huawei.com>
mainline inclusion
from mainline-5.15-rc4
commit 30e29a9a2bc6a4888335a6ede968b75cd329657a
category: bugfix
bugzilla: NA
CVE: CVE-2021-41864
-------------------------------------------------
In prealloc_elems_and_freelist(), the multiplication to calculate the
size passed to bpf_map_area_alloc() could lead to an integer overflow.
As a result, out-of-bounds write could occur in pcpu_freelist_populate()
as reported by KASAN:
[...]
[ 16.968613] BUG: KASAN: slab-out-of-bounds in pcpu_freelist_populate+0xd9/0x100
[ 16.969408] Write of size 8 at addr ffff888104fc6ea0 by task crash/78
[ 16.970038]
[ 16.970195] CPU: 0 PID: 78 Comm: crash Not tainted 5.15.0-rc2+ #1
[ 16.970878] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
[ 16.972026] Call Trace:
[ 16.972306] dump_stack_lvl+0x34/0x44
[ 16.972687] print_address_description.constprop.0+0x21/0x140
[ 16.973297] ? pcpu_freelist_populate+0xd9/0x100
[ 16.973777] ? pcpu_freelist_populate+0xd9/0x100
[ 16.974257] kasan_report.cold+0x7f/0x11b
[ 16.974681] ? pcpu_freelist_populate+0xd9/0x100
[ 16.975190] pcpu_freelist_populate+0xd9/0x100
[ 16.975669] stack_map_alloc+0x209/0x2a0
[ 16.976106] __sys_bpf+0xd83/0x2ce0
[...]
The possibility of this overflow was originally discussed in [0], but
was overlooked.
Fix the integer overflow by changing elem_size to u64 from u32.
[0] https://lore.kernel.org/bpf/728b238e-a481-eb50-98e9-b0f430ab01e7@gmail.com/
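As a quick illustration (a minimal userspace sketch with made-up sizes, not the kernel code): with both operands u32, the multiplication wraps before the result is widened, whereas widening one operand to u64 first keeps the full product:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t elem_size = 1U << 20;    /* hypothetical per-element size   */
	uint32_t max_entries = 1U << 13;  /* hypothetical number of elements */

	/* product computed in 32 bits and only then widened: already wrapped */
	uint64_t wrapped = (uint64_t)(elem_size * max_entries);
	/* one operand widened first: the full 64-bit product survives */
	uint64_t correct = (uint64_t)elem_size * max_entries;

	printf("u32 product: %llu\n", (unsigned long long)wrapped);  /* 0 */
	printf("u64 product: %llu\n", (unsigned long long)correct);  /* 8589934592 */
	return 0;
}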
Fixes: 557c0c6e7df8 ("bpf: convert stackmap to pre-allocation")
Signed-off-by: Tatsuhiko Yasumatsu <th.yasumatsu(a)gmail.com>
Signed-off-by: Daniel Borkmann <daniel(a)iogearbox.net>
Link: https://lore.kernel.org/bpf/20210930135545.173698-1-th.yasumatsu@gmail.com
Signed-off-by: Xu Kuohai <xukuohai(a)huawei.com>
Reviewed-by: Yang Jihong <yangjihong1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/bpf/stackmap.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index a47d623f59fe7..92310b07cb98e 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -63,7 +63,8 @@ static inline int stack_map_data_size(struct bpf_map *map)
static int prealloc_elems_and_freelist(struct bpf_stack_map *smap)
{
- u32 elem_size = sizeof(struct stack_map_bucket) + smap->map.value_size;
+ u64 elem_size = sizeof(struct stack_map_bucket) +
+ (u64)smap->map.value_size;
int err;
smap->elems = bpf_map_area_alloc(elem_size * smap->map.max_entries,
--
2.25.1
1
0

You recommend, we translate: the openEuler G11N SIG article/book translation program. Please email the titles and links of the articles/books you would like translated to the G11N mailing list before October 17. Thanks!
by suqin (D) 11 Oct '21
by suqin (D) 11 Oct '21
11 Oct '21
Please email the titles and links of the articles/books you would like translated, before October 17, to the G11N mailing list: g11n(a)openeuler.org<mailto:g11n@openeuler.org> (subscribe at <https://mailweb.openeuler.org/postorius/lists/g11n.openeuler.org/>)
Any comments or suggestions about this activity are welcome.
· Article or book selection
We will organize volunteers to translate articles or books that meet the following criteria:
* Copyright: freely redistributable, free electronic articles or books (before starting a translation, we will ask the original author for permission)
* Content: related to open source technology, operating systems, or software
* Source language: English
* Length (English words): articles, < 10,000 words each; books, < 100,000 words
* How to respond: please subscribe to the openEuler G11N mailing list (how to subscribe? <https://gitee.com/openeuler/globalization/blob/master/openeuler-g11n-contri…>), and email the titles and links of the articles or books you would like translated to the G11N mailing list.
After receiving your suggestions, we will draw up and publish a translation plan based on the translation workload and project schedule. Thank you!
1
0

[PATCH openEuler-5.10 01/32] net: hns3: make hclgevf_cmd_caps_bit_map0 and hclge_cmd_caps_bit_map0 static
by Zheng Zengkai 11 Oct '21
by Zheng Zengkai 11 Oct '21
11 Oct '21
From: chongjiapeng <jiapeng.chong(a)linux.alibaba.com>
mainline inclusion
from mainline-v5.15-rc1
commit 0c0383918a3e
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4CVS3
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
----------------------------------------------------------------------
These symbols are not used outside of hclge_cmd.c and hclgevf_cmd.c, so mark
them static.
Fix the following sparse warning:
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c:345:35:
warning: symbol 'hclgevf_cmd_caps_bit_map0' was not declared. Should it
be static?
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c:365:33: warning:
symbol 'hclge_cmd_caps_bit_map0' was not declared. Should it be static?
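The pattern behind the warning, as a trivial standalone sketch (names are illustrative, this is not the hns3 code): a lookup table referenced only inside its own .c file should be declared static, otherwise sparse complains that the symbol has no external declaration:
#include <stdio.h>

struct caps_bit_map {
	int cap_bit;
	int feature_bit;
};

/* file-local table: 'static' removes the external linkage sparse warns about */
static const struct caps_bit_map caps_map[] = {
	{ 0, 10 },
	{ 1, 11 },
};

int main(void)
{
	for (unsigned int i = 0; i < sizeof(caps_map) / sizeof(caps_map[0]); i++)
		printf("cap %d -> feature %d\n", caps_map[i].cap_bit, caps_map[i].feature_bit);
	return 0;
}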
Reported-by: Abaci Robot <abaci(a)linux.alibaba.com>
Signed-off-by: chongjiapeng <jiapeng.chong(a)linux.alibaba.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1(a)huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1(a)huawei.com>
---
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c | 2 +-
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
index 474c6d1664e7..ac9b69513332 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
@@ -362,7 +362,7 @@ static void hclge_set_default_capability(struct hclge_dev *hdev)
}
}
-const struct hclge_caps_bit_map hclge_cmd_caps_bit_map0[] = {
+static const struct hclge_caps_bit_map hclge_cmd_caps_bit_map0[] = {
{HCLGE_CAP_UDP_GSO_B, HNAE3_DEV_SUPPORT_UDP_GSO_B},
{HCLGE_CAP_PTP_B, HNAE3_DEV_SUPPORT_PTP_B},
{HCLGE_CAP_INT_QL_B, HNAE3_DEV_SUPPORT_INT_QL_B},
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
index 59772b0e9531..f89bfb352adf 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
@@ -342,7 +342,7 @@ static void hclgevf_set_default_capability(struct hclgevf_dev *hdev)
set_bit(HNAE3_DEV_SUPPORT_FEC_B, ae_dev->caps);
}
-const struct hclgevf_caps_bit_map hclgevf_cmd_caps_bit_map0[] = {
+static const struct hclgevf_caps_bit_map hclgevf_cmd_caps_bit_map0[] = {
{HCLGEVF_CAP_UDP_GSO_B, HNAE3_DEV_SUPPORT_UDP_GSO_B},
{HCLGEVF_CAP_INT_QL_B, HNAE3_DEV_SUPPORT_INT_QL_B},
{HCLGEVF_CAP_TQP_TXRX_INDEP_B, HNAE3_DEV_SUPPORT_TQP_TXRX_INDEP_B},
--
2.20.1
1
31

09 Oct '21
From: yin-xiujiang <yinxiujiang(a)kylinos.cn>
kylin inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4AHUH?from=project-issue
CVE: NA
---------------------------------------------------
The size of mpam_msc_err_str is _MPAM_NUM_ERRCODE, so device_errcode needs
to be less than _MPAM_NUM_ERRCODE before it is used as an index.
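For illustration, a minimal sketch of the off-by-one with a made-up table (not the MPAM code): an array of N entries has valid indices 0..N-1, so the guard must use '<', not '<=':
#include <stdio.h>

#define NUM_ERRCODE 4
static const char *err_str[NUM_ERRCODE] = { "err0", "err1", "err2", "err3" };

static const char *lookup(unsigned int code)
{
	if (code < NUM_ERRCODE)  /* '<= NUM_ERRCODE' would read one past the end */
		return err_str[code];
	return "out of range";
}

int main(void)
{
	printf("%s\n", lookup(2));  /* valid index */
	printf("%s\n", lookup(4));  /* equals NUM_ERRCODE, correctly rejected */
	return 0;
}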
Signed-off-by: yin-xiujiang <yinxiujiang(a)kylinos.cn>
---
arch/arm64/kernel/mpam/mpam_device.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/mpam/mpam_device.c b/arch/arm64/kernel/mpam/mpam_device.c
index 86aaf52146bc..fd538fd22c6e 100644
--- a/arch/arm64/kernel/mpam/mpam_device.c
+++ b/arch/arm64/kernel/mpam/mpam_device.c
@@ -435,7 +435,7 @@ static irqreturn_t mpam_handle_error_irq(int irq, void *data)
return IRQ_NONE;
/* No-one expects MPAM errors! */
- if (device_errcode <= _MPAM_NUM_ERRCODE)
+ if (device_errcode < _MPAM_NUM_ERRCODE)
pr_err_ratelimited("unexpected error '%s' [esr:%x]\n",
mpam_msc_err_str[device_errcode],
device_esr);
--
2.27.0
2
1
From: Andi Kleen <andi(a)firstfloor.org>
mainline inclusion
from mainline-5.11
commit 55a4de94c64bacffbcd802c954764e0de2ab217f
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4CMQA
CVE: NA
--------------------------------
Add a new --quiet option to 'perf stat'. This is useful with 'perf stat
record' to write the data only to the perf.data file, which can lower
measurement overhead because the data doesn't need to be formatted.
On my 4C desktop:
% time ./perf stat record -e $(python -c 'print ",\
".join(["cycles"]*1000)') -a -I 1000 sleep 5
...
real 0m5.377s
user 0m0.238s
sys 0m0.452s
% time ./perf stat record --quiet -e $(python -c 'print ",\
".join(["cycles"]*1000)') -a -I 1000 sleep 5
real 0m5.452s
user 0m0.183s
sys 0m0.423s
In this example it cuts the user time by 20%. On systems with more cores
the savings are higher.
Signed-off-by: Andi Kleen <andi(a)firstfloor.org>
Acked-by: Jiri Olsa <jolsa(a)kernel.org>
Cc: Alexey Budankov <alexey.budankov(a)linux.intel.com>
Link: http://lore.kernel.org/lkml/20201027002737.30942-1-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme(a)redhat.com>
Signed-off-by: yin-xiujiang <yinxiujiang(a)kylinos.cn>
---
tools/perf/Documentation/perf-stat.txt | 4 ++++
tools/perf/builtin-stat.c | 6 +++++-
tools/perf/util/stat.h | 1 +
3 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/tools/perf/Documentation/perf-stat.txt b/tools/perf/Documentation/perf-stat.txt
index 9f9f29025e49..f9bcd95bf352 100644
--- a/tools/perf/Documentation/perf-stat.txt
+++ b/tools/perf/Documentation/perf-stat.txt
@@ -320,6 +320,10 @@ STAT RECORD
-----------
Stores stat data into perf data file.
+--quiet::
+Don't print output. This is useful with perf stat record below to only
+write data to the perf.data file.
+
-o file::
--output file::
Output file name.
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index b01af171d94f..89e80a3bc9c3 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -973,6 +973,8 @@ static void print_counters(struct timespec *ts, int argc, const char **argv)
if (STAT_RECORD && perf_stat.data.is_pipe)
return;
+ if (stat_config.quiet)
+ return;
perf_evlist__print_counters(evsel_list, &stat_config, &target,
ts, argc, argv);
}
@@ -1171,6 +1173,8 @@ static struct option stat_options[] = {
"threads of same physical core"),
OPT_BOOLEAN(0, "summary", &stat_config.summary,
"print summary for interval mode"),
+ OPT_BOOLEAN(0, "quiet", &stat_config.quiet,
+ "don't print output (useful with record)"),
#ifdef HAVE_LIBPFM
OPT_CALLBACK(0, "pfm-events", &evsel_list, "event",
"libpfm4 event selector. use 'perf list' to list available events",
@@ -2132,7 +2136,7 @@ int cmd_stat(int argc, const char **argv)
goto out;
}
- if (!output) {
+ if (!output && !stat_config.quiet) {
struct timespec tm;
mode = append_file ? "a" : "w";
diff --git a/tools/perf/util/stat.h b/tools/perf/util/stat.h
index 487010c624be..05adf8165025 100644
--- a/tools/perf/util/stat.h
+++ b/tools/perf/util/stat.h
@@ -122,6 +122,7 @@ struct perf_stat_config {
bool metric_no_group;
bool metric_no_merge;
bool stop_read_counter;
+ bool quiet;
FILE *output;
unsigned int interval;
unsigned int timeout;
--
2.27.0
2
1
issue: https://gitee.com/openeuler/kernel/issues/I4AFRJ?from=project-issue
jiazhenyuan (4):
tcp: address problems caused by EDT misshaps
tcp: always set retrans_stamp on recovery
tcp: create a helper to model exponential backoff
tcp: adjust rto_base in retransmits_timed_out()
net/ipv4/tcp_input.c | 16 +++++++----
net/ipv4/tcp_output.c | 9 +++---
net/ipv4/tcp_timer.c | 65 ++++++++++++++++++++-----------------------
3 files changed, 44 insertions(+), 46 deletions(-)
--
2.27.0
2
6
Davidlohr Bueso (1):
lib/timerqueue: Rely on rbtree semantics for next timer
Xiongfeng Wang (1):
timerqueue: fix kabi for struct timerqueue_head
include/linux/timerqueue.h | 14 +++++++++-----
lib/timerqueue.c | 30 ++++++++++++------------------
2 files changed, 21 insertions(+), 23 deletions(-)
--
2.25.1
1
2

[PATCH openEuler-5.10] imans: Use initial ima namespace domain tag when IMANS is disabled.
by Zheng Zengkai 08 Oct '21
by Zheng Zengkai 08 Oct '21
08 Oct '21
From: Ajo Jose Panoor <ajo.jose.panoor(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4C9AE
CVE: NA
-----------------------------------------------------------------
As part of the imans support, a key domain tag is added to the search
criteria in the digsig module. When IMA namespace support is disabled, the
initial ima namespace domain tag should be used instead of the one from
nsproxy.
Signed-off-by: Ajo Jose Panoor <ajo.jose.panoor(a)huawei.com>
Reviewed-by: Zhang Tianxing <zhangtianxing3(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
security/integrity/digsig.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/security/integrity/digsig.c b/security/integrity/digsig.c
index 2c923dc0dbd3..c866a5c2c9b1 100644
--- a/security/integrity/digsig.c
+++ b/security/integrity/digsig.c
@@ -74,8 +74,11 @@ static struct key_tag *domain_tag_from_id(const unsigned int id)
return ERR_PTR(-EINVAL);
if (id == INTEGRITY_KEYRING_IMA)
+#ifdef CONFIG_IMA_NS
return current->nsproxy->ima_ns->key_domain;
-
+#else
+ return init_ima_ns.key_domain;
+#endif
return NULL;
}
--
2.20.1
1
0

[PATCH openEuler-1.0-LTS] Fix incorrect call to free srb in qla2xxx_mqueuecommand()
by yinxiujiang 08 Oct '21
by yinxiujiang 08 Oct '21
08 Oct '21
From: Arun Easi <aeasi(a)marvell.com>
stable inclusion
from linux-4.19.207
commit c5ab9b67d8b061de74e2ca51bf787ee599bd7f89
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4AFG0?from=project-issue
CVE: NA
-------------------------------------------------
RIP: 0010:kmem_cache_free+0xfa/0x1b0
Call Trace:
qla2xxx_mqueuecommand+0x2b5/0x2c0 [qla2xxx]
scsi_queue_rq+0x5e2/0xa40
__blk_mq_try_issue_directly+0x128/0x1d0
blk_mq_request_issue_directly+0x4e/0xb0
Fix an incorrect call to free the srb in qla2xxx_mqueuecommand(), as the srb
is now allocated by upper layers. This fixes a smatch warning about an
unintended srb free.
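The underlying bug class is ownership confusion; a minimal sketch under the assumption that the caller allocates and releases the object (names are illustrative, this is not the qla2xxx code):
#include <stdio.h>
#include <stdlib.h>

struct srb {
	int busy;
};

/* The callee may fail, but it must not free sp: the caller owns it. */
static int start_command(struct srb *sp)
{
	sp->busy = 1;   /* pretend we tried something and it failed */
	return -1;      /* report the error only; no free(sp) here   */
}

int main(void)
{
	struct srb *sp = malloc(sizeof(*sp));

	if (!sp)
		return 1;
	if (start_command(sp) < 0)
		printf("command failed, caller cleans up exactly once\n");
	free(sp);       /* single point of release, no double free */
	return 0;
}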
Link: https://lore.kernel.org/r/20210329085229.4367-7-njavali@marvell.com
Fixes: af2a0c51b120 ("scsi: qla2xxx: Fix SRB leak on switch command timeout")
Cc: stable(a)vger.kernel.org # 5.5
Reported-by: Laurence Oberman <loberman(a)redhat.com>
Reported-by: Dan Carpenter <dan.carpenter(a)oracle.com>
Reported-by: magicyan2022
Reviewed-by: Himanshu Madhani <himanshu.madhani(a)oracle.com>
Signed-off-by: Arun Easi <aeasi(a)marvell.com>
Signed-off-by: Nilesh Javali <njavali(a)marvell.com>
Signed-off-by: Martin K. Petersen <martin.petersen(a)oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: yin-xiujiang <yinxiujiang(a)kylinos.cn>
---
drivers/scsi/qla2xxx/qla_os.c | 7 -------
1 file changed, 7 deletions(-)
diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
index bfbf213b15c0..8e9d386146ac 100644
--- a/drivers/scsi/qla2xxx/qla_os.c
+++ b/drivers/scsi/qla2xxx/qla_os.c
@@ -1028,8 +1028,6 @@ qla2xxx_mqueuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd,
if (rval != QLA_SUCCESS) {
ql_dbg(ql_dbg_io + ql_dbg_verbose, vha, 0x3078,
"Start scsi failed rval=%d for cmd=%p.\n", rval, cmd);
- if (rval == QLA_INTERFACE_ERROR)
- goto qc24_free_sp_fail_command;
goto qc24_host_busy_free_sp;
}
@@ -1044,11 +1042,6 @@ qla2xxx_mqueuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd,
qc24_target_busy:
return SCSI_MLQUEUE_TARGET_BUSY;
-qc24_free_sp_fail_command:
- sp->free(sp);
- CMD_SP(cmd) = NULL;
- qla2xxx_rel_qpair_sp(sp->qpair, sp);
-
qc24_fail_command:
cmd->scsi_done(cmd);
--
2.27.0
3
2

30 Sep '21
openEuler inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4CIJQ
CVE: NA
----------------------------------------------------------------------
This tool can help calculate the CPU idle rate with higher precision.
Signed-off-by: Hongyu Li <543306408(a)qq.com>
---
tools/accounting/Makefile | 2 +-
tools/accounting/idle_cal.c | 91 +++++++++++++++++++++++++++++++++++++
2 files changed, 92 insertions(+), 1 deletion(-)
create mode 100644 tools/accounting/idle_cal.c
diff --git a/tools/accounting/Makefile b/tools/accounting/Makefile
index 03687f19cbb1..d14151e28173 100644
--- a/tools/accounting/Makefile
+++ b/tools/accounting/Makefile
@@ -2,7 +2,7 @@
CC := $(CROSS_COMPILE)gcc
CFLAGS := -I../../usr/include
-PROGS := getdelays
+PROGS := getdelays idle_cal
all: $(PROGS)
diff --git a/tools/accounting/idle_cal.c b/tools/accounting/idle_cal.c
new file mode 100644
index 000000000000..397b0fc1c69c
--- /dev/null
+++ b/tools/accounting/idle_cal.c
@@ -0,0 +1,91 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * idle_cal.c
+ *
+ * Copyright (C) 2007
+ *
+ * cpu idle time accounting
+ */
+
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <fcntl.h>
+#include <string.h>
+#include <unistd.h>
+#include <time.h>
+#include <limits.h>
+#include <sys/time.h>
+
+#define CPUS 4
+#define BUFFSIZE 4096
+#define HZ 100
+#define FILE_NAME "/proc/stat2"
+
+struct cpu_info {
+ char name[BUFFSIZE];
+ long long value[1];
+};
+
+int main(void)
+{
+ char buf[BUFFSIZE];
+ struct cpu_info cpus[CPUS+1];
+ struct cpu_info cpus_2[CPUS+1];
+ long long sub;
+ double value;
+
+ FILE *fp = fopen(FILE_NAME, "r");
+ int i = 0;
+ struct timeval start, end;
+
+
+ while (i < CPUS+1) {
+ int n = fscanf(fp, "%s %lld\n", cpus[i].name, &cpus[i].value[0]);
+
+ if (n < 0) {
+ printf("wrong");
+ return -1;
+ }
+ i += 1;
+ }
+
+ for (int i = 0; i < CPUS+1; i++)
+ printf("%s %lld\n", cpus[i].name, cpus[i].value[0]);
+ gettimeofday(&start, NULL);
+ fflush(fp);
+ fclose(fp);
+ i = 0;
+
+ sleep(2);
+
+ FILE *fp_2 = fopen(FILE_NAME, "r");
+
+ while (i < CPUS+1) {
+ int n = fscanf(fp_2, "%s %lld\n", cpus_2[i].name, &cpus_2[i].value[0]);
+
+ if (n < 0) {
+ printf("wrong");
+ return -1;
+ }
+ i += 1;
+ }
+
+ for (int i = 0; i < CPUS+1; i++)
+ printf("%s %lld\n", cpus_2[i].name, cpus_2[i].value[0]);
+ gettimeofday(&end, NULL);
+ fclose(fp_2);
+
+ sub = end.tv_sec-start.tv_sec;
+ value = sub*1000000.0+end.tv_usec-start.tv_usec;
+
+ printf("CPU idle rate %f\n", 1000000/HZ*(cpus_2[0].value[0]-cpus[0].value[0])/value);
+
+ for (int i = 1; i < CPUS+1; i++) {
+ printf("CPU%d idle rate %f\n", i-1, 1000000/HZ
+ *(cpus_2[i].value[0]-cpus[i].value[0])/value);
+ }
+
+ return 0;
+}
+
--
2.17.1
2
1
openEuler inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4CIJQ
CVE: NA
----------------------------------------------------------------------
The default way of calculating CPU utilization is to check which task is
running at each tick and charge it for the whole tick interval. This leads to
inaccurate CPU utilization results.
This problem can be solved by counting the idle time via the scheduler rather
than via the tick interval. We record the time when the idle process starts
executing and compute its execution time when the idle process quits.
The idle time of each CPU is exposed in the /proc/stat2 file. This gives
higher precision in accounting the CPU idle time than /proc/stat.
Signed-off-by: Hongyu Li <543306408(a)qq.com>
---
fs/proc/Kconfig | 7 ++++
fs/proc/Makefile | 1 +
fs/proc/stat2.c | 91 ++++++++++++++++++++++++++++++++++++++++++
kernel/sched/cputime.c | 34 ++++++++++++++++
kernel/sched/idle.c | 28 +++++++++++++
5 files changed, 161 insertions(+)
create mode 100644 fs/proc/stat2.c
diff --git a/fs/proc/Kconfig b/fs/proc/Kconfig
index c930001056f9..33588a37579e 100644
--- a/fs/proc/Kconfig
+++ b/fs/proc/Kconfig
@@ -107,3 +107,10 @@ config PROC_PID_ARCH_STATUS
config PROC_CPU_RESCTRL
def_bool n
depends on PROC_FS
+
+config PROC_IDLE
+ bool "include /proc/stat2 file"
+ depends on PROC_FS
+ default y
+ help
+ Provide the CPU idle time in the /proc/stat2 file.
diff --git a/fs/proc/Makefile b/fs/proc/Makefile
index 8704d41dd67c..b0d5f2b347d7 100644
--- a/fs/proc/Makefile
+++ b/fs/proc/Makefile
@@ -34,5 +34,6 @@ proc-$(CONFIG_PROC_VMCORE) += vmcore.o
proc-$(CONFIG_PRINTK) += kmsg.o
proc-$(CONFIG_PROC_PAGE_MONITOR) += page.o
proc-$(CONFIG_BOOT_CONFIG) += bootconfig.o
+proc-$(CONFIG_PROC_IDLE) += stat2.o
obj-$(CONFIG_ETMEM_SCAN) += etmem_scan.o
obj-$(CONFIG_ETMEM_SWAP) += etmem_swap.o
diff --git a/fs/proc/stat2.c b/fs/proc/stat2.c
new file mode 100644
index 000000000000..6036a946c71d
--- /dev/null
+++ b/fs/proc/stat2.c
@@ -0,0 +1,91 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * linux/fs/proc/stat2.c
+ *
+ * Copyright (C) 2007
+ *
+ * cpu idle time accounting
+ */
+
+#include <linux/cpumask.h>
+#include <linux/device.h>
+#include <linux/fs.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/kernel_stat.h>
+#include <linux/module.h>
+#include <linux/proc_fs.h>
+#include <linux/sched.h>
+#include <linux/sched/stat.h>
+#include <linux/seq_file.h>
+#include <linux/slab.h>
+#include <linux/time.h>
+#include <linux/irqnr.h>
+#include <linux/sched/cputime.h>
+#include <linux/tick.h>
+
+#ifdef CONFIG_PROC_IDLE
+
+#define PROC_NAME "stat2"
+
+extern u64 cal_idle_sum_exec_runtime(int cpu);
+
+static u64 get_idle_sum_exec_runtime(int cpu)
+{
+ u64 idle = cal_idle_sum_exec_runtime(cpu);
+
+ return idle;
+}
+
+static int show_idle(struct seq_file *p, void *v)
+{
+ int i;
+ u64 idle;
+
+ idle = 0;
+
+ for_each_possible_cpu(i) {
+
+ idle += get_idle_sum_exec_runtime(i);
+
+ }
+
+ seq_put_decimal_ull(p, "cpu ", nsec_to_clock_t(idle));
+ seq_putc(p, '\n');
+
+ for_each_online_cpu(i) {
+
+ idle = get_idle_sum_exec_runtime(i);
+
+ seq_printf(p, "cpu%d", i);
+ seq_put_decimal_ull(p, " ", nsec_to_clock_t(idle));
+ seq_putc(p, '\n');
+ }
+
+ return 0;
+}
+
+static int idle_open(struct inode *inode, struct file *file)
+{
+ unsigned int size = 32 + 32 * num_online_cpus();
+
+ return single_open_size(file, show_idle, NULL, size);
+}
+
+static struct proc_ops idle_procs_ops = {
+ .proc_open = idle_open,
+ .proc_read_iter = seq_read_iter,
+ .proc_lseek = seq_lseek,
+ .proc_release = single_release,
+};
+
+static int __init kernel_module_init(void)
+{
+ proc_create(PROC_NAME, 0, NULL, &idle_procs_ops);
+ return 0;
+}
+
+fs_initcall(kernel_module_init);
+
+#endif /*CONFIG_PROC_IDLE*/
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 5a55d2300452..25218a8f822f 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -19,6 +19,8 @@
*/
DEFINE_PER_CPU(struct irqtime, cpu_irqtime);
+extern struct static_key_true proc_idle;
+
static int sched_clock_irqtime;
void enable_sched_clock_irqtime(void)
@@ -1078,3 +1080,35 @@ void kcpustat_cpu_fetch(struct kernel_cpustat *dst, int cpu)
EXPORT_SYMBOL_GPL(kcpustat_cpu_fetch);
#endif /* CONFIG_VIRT_CPU_ACCOUNTING_GEN */
+
+
+#ifdef CONFIG_PROC_IDLE
+
+
+u64 cal_idle_sum_exec_runtime(int cpu)
+{
+ struct rq *rq = cpu_rq(cpu);
+ struct sched_entity *idle_se = &rq->idle->se;
+ u64 idle = idle_se->sum_exec_runtime;
+
+ if (!static_branch_likely(&proc_idle))
+ return 0ULL;
+
+ if (rq->curr == rq->idle) {
+ u64 now = sched_clock();
+ u64 delta_exec;
+
+ delta_exec = now - idle_se->exec_start;
+ if (unlikely((s64)delta_exec <= 0))
+ return idle;
+
+ schedstat_set(idle_se->statistics.exec_max,
+ max(delta_exec, idle_se->statistics.exec_max));
+
+ idle += delta_exec;
+ }
+
+ return idle;
+}
+
+#endif /* CONFIG_PROC_IDLE */
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 36b545f17206..15f076ab5823 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -10,6 +10,8 @@
#include <trace/events/power.h>
+DEFINE_STATIC_KEY_TRUE(proc_idle);
+
/* Linker adds these: start and end of __cpuidle functions */
extern char __cpuidle_text_start[], __cpuidle_text_end[];
@@ -424,6 +426,23 @@ static void check_preempt_curr_idle(struct rq *rq, struct task_struct *p, int fl
static void put_prev_task_idle(struct rq *rq, struct task_struct *prev)
{
+#ifdef CONFIG_PROC_IDLE
+ if (!static_branch_likely(&proc_idle))
+ return;
+
+ struct sched_entity *idle_se = &rq->idle->se;
+ u64 now = sched_clock();
+ u64 delta_exec;
+
+ delta_exec = now - idle_se->exec_start;
+ if (unlikely((s64)delta_exec <= 0))
+ return;
+
+ schedstat_set(idle_se->statistics.exec_max,
+ max(delta_exec, idle_se->statistics.exec_max));
+
+ idle_se->sum_exec_runtime += delta_exec;
+#endif
}
static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
@@ -436,6 +455,15 @@ struct task_struct *pick_next_task_idle(struct rq *rq)
{
struct task_struct *next = rq->idle;
+#ifdef CONFIG_PROC_IDLE
+ if (static_branch_likely(&proc_idle)) {
+ struct sched_entity *idle_se = &rq->idle->se;
+ u64 now = sched_clock();
+
+ idle_se->exec_start = now;
+ }
+#endif
+
set_next_task_idle(rq, next, true);
return next;
--
2.17.1
2
1

[PATCH openEuler-21.09 0/2] Improve the precision of accounting the CPU utilization rate
by Hongyu Li 30 Sep '21
by Hongyu Li 30 Sep '21
30 Sep '21
The current way of calculating the CPU utilization rate is not accurate.
The accounting system only works at the granularity of two ticks; however,
a process can give up the CPU before the tick ends.
This can be fixed by counting the idle time via the scheduler. We can
use the sum_exec_runtime of the idle process of each CPU to calculate
the CPU utilization rate. The idle time of each CPU is given in the
/proc/stat2 file. An example of using this file is also attached.
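Roughly, the idle rate falls out of two samples of the per-CPU idle runtime; a minimal sketch of the arithmetic (the tick rate and the sample values below are assumptions that match the example tool in this series, not measured data):
#include <stdio.h>

#define HZ 100  /* assumed tick rate used to convert idle ticks to microseconds */

int main(void)
{
	long long idle_ticks_before = 1000, idle_ticks_after = 1150;
	double wall_delta_us = 2000000.0;  /* two seconds between the two reads */

	double idle_us = (idle_ticks_after - idle_ticks_before) * (1000000.0 / HZ);

	printf("idle rate: %.3f\n", idle_us / wall_delta_us);  /* prints 0.750 */
	return 0;
}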
Hongyu Li (2):
eulerfs: add the /proc/stat2 file.
tools: add a tool to calculate the idle rate
fs/proc/Kconfig | 7 +++
fs/proc/Makefile | 1 +
fs/proc/stat2.c | 91 +++++++++++++++++++++++++++++++++++++
kernel/sched/cputime.c | 34 ++++++++++++++
kernel/sched/idle.c | 28 ++++++++++++
tools/accounting/Makefile | 2 +-
tools/accounting/idle_cal.c | 91 +++++++++++++++++++++++++++++++++++++
7 files changed, 253 insertions(+), 1 deletion(-)
create mode 100644 fs/proc/stat2.c
create mode 100644 tools/accounting/idle_cal.c
--
2.17.1
1
0
Hi Community:
Does Euler 2.9 support BPFTrace or not?
It is very necessary for us now.
B.R.
2
1

30 Sep '21
From: Alexander Potapenko <glider(a)google.com>
Patch series "KFENCE: A low-overhead sampling-based memory safety error detector", v7.
This adds the Kernel Electric-Fence (KFENCE) infrastructure. KFENCE is a
low-overhead sampling-based memory safety error detector of heap
use-after-free, invalid-free, and out-of-bounds access errors. This
series enables KFENCE for the x86 and arm64 architectures, and adds
KFENCE hooks to the SLAB and SLUB allocators.
KFENCE is designed to be enabled in production kernels, and has near
zero performance overhead. Compared to KASAN, KFENCE trades performance
for precision. The main motivation behind KFENCE's design, is that with
enough total uptime KFENCE will detect bugs in code paths not typically
exercised by non-production test workloads. One way to quickly achieve a
large enough total uptime is when the tool is deployed across a large
fleet of machines.
KFENCE objects each reside on a dedicated page, at either the left or
right page boundaries. The pages to the left and right of the object
page are "guard pages", whose attributes are changed to a protected
state, and cause page faults on any attempted access to them. Such page
faults are then intercepted by KFENCE, which handles the fault
gracefully by reporting a memory access error.
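As a userspace analogy only (an illustrative sketch of the guard-page idea, not how KFENCE implements it in the kernel): mapping an inaccessible page next to the object page turns a linear overrun into an immediate fault instead of silent corruption:
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	/* two adjacent pages: [object page][guard page] */
	char *base = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (base == MAP_FAILED)
		return 1;
	/* revoke all access to the second page: this is the guard page */
	if (mprotect(base + page, page, PROT_NONE))
		return 1;

	memset(base, 0, page);  /* fine: stays within the object page */
	printf("in-bounds access ok\n");
	/* base[page] = 0;  <- would fault (SIGSEGV) on the guard page */
	return 0;
}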
Guarded allocations are set up based on a sample interval (can be set
via kfence.sample_interval). After expiration of the sample interval,
the next allocation through the main allocator (SLAB or SLUB) returns a
guarded allocation from the KFENCE object pool. At this point, the timer
is reset, and the next allocation is set up after the expiration of the
interval.
To enable/disable a KFENCE allocation through the main allocator's
fast-path without overhead, KFENCE relies on static branches via the
static keys infrastructure. The static branch is toggled to redirect the
allocation to KFENCE.
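A minimal userspace sketch of the sampling gate (a plain boolean stands in for the static branch and a wall-clock timer for the kernel timer; every name and value here is an illustrative assumption):
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static bool gate_open;                 /* stands in for the static branch */
static time_t last_sample;
static const int sample_interval = 1;  /* seconds, hypothetical */

static void *guarded_alloc(size_t size)
{
	printf("guarded allocation of %zu bytes\n", size);  /* placeholder pool */
	return malloc(size);
}

static void *alloc(size_t size)
{
	time_t now = time(NULL);

	if (now - last_sample >= sample_interval)
		gate_open = true;          /* the timer would toggle the branch */

	if (gate_open) {                   /* rare slow path */
		gate_open = false;
		last_sample = now;
		return guarded_alloc(size);
	}
	return malloc(size);               /* usual fast path */
}

int main(void)
{
	for (int i = 0; i < 5; i++)
		free(alloc(64));
	return 0;
}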
The KFENCE memory pool is of fixed size, and if the pool is exhausted no
further KFENCE allocations occur. The default config is conservative
with only 255 objects, resulting in a pool size of 2 MiB (with 4 KiB
pages).
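The 2 MiB figure is easy to check by hand: each object gets its own page plus a guard page, and one extra leading guard page rounds the pool to an even number of pages, giving (255 + 1) * 2 pages. A worked check, assuming 4 KiB pages:
#include <stdio.h>

int main(void)
{
	unsigned long page_size = 4096;   /* assumed 4 KiB pages */
	unsigned long num_objects = 255;  /* default CONFIG_KFENCE_NUM_OBJECTS */
	unsigned long pool = (num_objects + 1) * 2 * page_size;

	printf("pool size: %lu bytes (%lu MiB)\n", pool, pool >> 20);  /* 2 MiB */
	return 0;
}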
We have verified by running synthetic benchmarks (sysbench I/O,
hackbench) and production server-workload benchmarks that a kernel with
KFENCE (using sample intervals 100-500ms) is performance-neutral
compared to a non-KFENCE baseline kernel.
KFENCE is inspired by GWP-ASan [1], a userspace tool with similar
properties. The name "KFENCE" is a homage to the Electric Fence Malloc
Debugger [2].
For more details, see Documentation/dev-tools/kfence.rst added in the
series -- also viewable here:
https://raw.githubusercontent.com/google/kasan/kfence/Documentation/dev-too…
[1] http://llvm.org/docs/GwpAsan.html
[2] https://linux.die.net/man/3/efence
This patch (of 9):
This adds the Kernel Electric-Fence (KFENCE) infrastructure. KFENCE is a
low-overhead sampling-based memory safety error detector of heap
use-after-free, invalid-free, and out-of-bounds access errors.
KFENCE is designed to be enabled in production kernels, and has near
zero performance overhead. Compared to KASAN, KFENCE trades performance
for precision. The main motivation behind KFENCE's design is that with
enough total uptime KFENCE will detect bugs in code paths not typically
exercised by non-production test workloads. One way to quickly achieve a
large enough total uptime is when the tool is deployed across a large
fleet of machines.
KFENCE objects each reside on a dedicated page, at either the left or
right page boundaries. The pages to the left and right of the object
page are "guard pages", whose attributes are changed to a protected
state, and cause page faults on any attempted access to them. Such page
faults are then intercepted by KFENCE, which handles the fault
gracefully by reporting a memory access error. To detect out-of-bounds
writes to memory within the object's page itself, KFENCE also uses
pattern-based redzones. The following figure illustrates the page
layout:
---+-----------+-----------+-----------+-----------+-----------+---
| xxxxxxxxx | O : | xxxxxxxxx | : O | xxxxxxxxx |
| xxxxxxxxx | B : | xxxxxxxxx | : B | xxxxxxxxx |
| x GUARD x | J : RED- | x GUARD x | RED- : J | x GUARD x |
| xxxxxxxxx | E : ZONE | xxxxxxxxx | ZONE : E | xxxxxxxxx |
| xxxxxxxxx | C : | xxxxxxxxx | : C | xxxxxxxxx |
| xxxxxxxxx | T : | xxxxxxxxx | : T | xxxxxxxxx |
---+-----------+-----------+-----------+-----------+-----------+---
Guarded allocations are set up based on a sample interval (can be set
via kfence.sample_interval). After expiration of the sample interval, a
guarded allocation from the KFENCE object pool is returned to the main
allocator (SLAB or SLUB). At this point, the timer is reset, and the
next allocation is set up after the expiration of the interval.
To enable/disable a KFENCE allocation through the main allocator's
fast-path without overhead, KFENCE relies on static branches via the
static keys infrastructure. The static branch is toggled to redirect the
allocation to KFENCE. To date, we have verified by running synthetic
benchmarks (sysbench I/O, hackbench) that a kernel compiled with KFENCE
is performance-neutral compared to the non-KFENCE baseline.
For more details, see Documentation/dev-tools/kfence.rst (added later in
the series).
[elver(a)google.com: fix parameter description for kfence_object_start()]
Link: https://lkml.kernel.org/r/20201106092149.GA2851373@elver.google.com
[elver(a)google.com: avoid stalling work queue task without allocations]
Link: https://lkml.kernel.org/r/CADYN=9J0DQhizAGB0-jz4HOBBh+05kMBXb4c0cXMS7Qi5NAJ…
Link: https://lkml.kernel.org/r/20201110135320.3309507-1-elver@google.com
[elver(a)google.com: fix potential deadlock due to wake_up()]
Link: https://lkml.kernel.org/r/000000000000c0645805b7f982e4@google.com
Link: https://lkml.kernel.org/r/20210104130749.1768991-1-elver@google.com
[elver(a)google.com: add option to use KFENCE without static keys]
Link: https://lkml.kernel.org/r/20210111091544.3287013-1-elver@google.com
[elver(a)google.com: add missing copyright and description headers]
Link: https://lkml.kernel.org/r/20210118092159.145934-1-elver@google.com
Link: https://lkml.kernel.org/r/20201103175841.3495947-2-elver@google.com
Signed-off-by: Marco Elver <elver(a)google.com>
Signed-off-by: Alexander Potapenko <glider(a)google.com>
Reviewed-by: Dmitry Vyukov <dvyukov(a)google.com>
Reviewed-by: SeongJae Park <sjpark(a)amazon.de>
Co-developed-by: Marco Elver <elver(a)google.com>
Reviewed-by: Jann Horn <jannh(a)google.com>
Cc: "H. Peter Anvin" <hpa(a)zytor.com>
Cc: Paul E. McKenney <paulmck(a)kernel.org>
Cc: Andrey Konovalov <andreyknvl(a)google.com>
Cc: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Catalin Marinas <catalin.marinas(a)arm.com>
Cc: Christopher Lameter <cl(a)linux.com>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: David Rientjes <rientjes(a)google.com>
Cc: Eric Dumazet <edumazet(a)google.com>
Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Cc: Hillf Danton <hdanton(a)sina.com>
Cc: Ingo Molnar <mingo(a)redhat.com>
Cc: Jonathan Corbet <corbet(a)lwn.net>
Cc: Joonsoo Kim <iamjoonsoo.kim(a)lge.com>
Cc: Joern Engel <joern(a)purestorage.com>
Cc: Kees Cook <keescook(a)chromium.org>
Cc: Mark Rutland <mark.rutland(a)arm.com>
Cc: Pekka Enberg <penberg(a)kernel.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Vlastimil Babka <vbabka(a)suse.cz>
Cc: Will Deacon <will(a)kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Yingjie Shang <1415317271(a)qq.com>
Reviewed-by: Bixuan Cui <cuibixuan(a)huawei.com>
---
include/linux/kfence.h | 216 +++++++++++
init/main.c | 3 +
lib/Kconfig.debug | 1 +
lib/Kconfig.kfence | 67 ++++
mm/Makefile | 1 +
mm/kfence/Makefile | 3 +
mm/kfence/core.c | 840 +++++++++++++++++++++++++++++++++++++++++
mm/kfence/kfence.h | 113 ++++++
mm/kfence/report.c | 240 ++++++++++++
9 files changed, 1484 insertions(+)
create mode 100644 include/linux/kfence.h
create mode 100644 lib/Kconfig.kfence
create mode 100644 mm/kfence/Makefile
create mode 100644 mm/kfence/core.c
create mode 100644 mm/kfence/kfence.h
create mode 100644 mm/kfence/report.c
diff --git a/include/linux/kfence.h b/include/linux/kfence.h
new file mode 100644
index 000000000000..81f3911cb298
--- /dev/null
+++ b/include/linux/kfence.h
@@ -0,0 +1,216 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Kernel Electric-Fence (KFENCE). Public interface for allocator and fault
+ * handler integration. For more info see Documentation/dev-tools/kfence.rst.
+ *
+ * Copyright (C) 2020, Google LLC.
+ */
+
+#ifndef _LINUX_KFENCE_H
+#define _LINUX_KFENCE_H
+
+#include <linux/mm.h>
+#include <linux/types.h>
+
+#ifdef CONFIG_KFENCE
+
+/*
+ * We allocate an even number of pages, as it simplifies calculations to map
+ * address to metadata indices; effectively, the very first page serves as an
+ * extended guard page, but otherwise has no special purpose.
+ */
+#define KFENCE_POOL_SIZE ((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 * PAGE_SIZE)
+extern char *__kfence_pool;
+
+#ifdef CONFIG_KFENCE_STATIC_KEYS
+#include <linux/static_key.h>
+DECLARE_STATIC_KEY_FALSE(kfence_allocation_key);
+#else
+#include <linux/atomic.h>
+extern atomic_t kfence_allocation_gate;
+#endif
+
+/**
+ * is_kfence_address() - check if an address belongs to KFENCE pool
+ * @addr: address to check
+ *
+ * Return: true or false depending on whether the address is within the KFENCE
+ * object range.
+ *
+ * KFENCE objects live in a separate page range and are not to be intermixed
+ * with regular heap objects (e.g. KFENCE objects must never be added to the
+ * allocator freelists). Failing to do so may and will result in heap
+ * corruptions, therefore is_kfence_address() must be used to check whether
+ * an object requires specific handling.
+ *
+ * Note: This function may be used in fast-paths, and is performance critical.
+ * Future changes should take this into account; for instance, we want to avoid
+ * introducing another load and therefore need to keep KFENCE_POOL_SIZE a
+ * constant (until immediate patching support is added to the kernel).
+ */
+static __always_inline bool is_kfence_address(const void *addr)
+{
+ /*
+ * The non-NULL check is required in case the __kfence_pool pointer was
+ * never initialized; keep it in the slow-path after the range-check.
+ */
+ return unlikely((unsigned long)((char *)addr - __kfence_pool) < KFENCE_POOL_SIZE && addr);
+}
+
+/**
+ * kfence_alloc_pool() - allocate the KFENCE pool via memblock
+ */
+void __init kfence_alloc_pool(void);
+
+/**
+ * kfence_init() - perform KFENCE initialization at boot time
+ *
+ * Requires that kfence_alloc_pool() was called before. This sets up the
+ * allocation gate timer, and requires that workqueues are available.
+ */
+void __init kfence_init(void);
+
+/**
+ * kfence_shutdown_cache() - handle shutdown_cache() for KFENCE objects
+ * @s: cache being shut down
+ *
+ * Before shutting down a cache, one must ensure there are no remaining objects
+ * allocated from it. Because KFENCE objects are not referenced from the cache
+ * directly, we need to check them here.
+ *
+ * Note that shutdown_cache() is internal to SL*B, and kmem_cache_destroy() does
+ * not return if allocated objects still exist: it prints an error message and
+ * simply aborts destruction of a cache, leaking memory.
+ *
+ * If the only such objects are KFENCE objects, we will not leak the entire
+ * cache, but instead try to provide more useful debug info by making allocated
+ * objects "zombie allocations". Objects may then still be used or freed (which
+ * is handled gracefully), but usage will result in showing KFENCE error reports
+ * which include stack traces to the user of the object, the original allocation
+ * site, and caller to shutdown_cache().
+ */
+void kfence_shutdown_cache(struct kmem_cache *s);
+
+/*
+ * Allocate a KFENCE object. Allocators must not call this function directly,
+ * use kfence_alloc() instead.
+ */
+void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags);
+
+/**
+ * kfence_alloc() - allocate a KFENCE object with a low probability
+ * @s: struct kmem_cache with object requirements
+ * @size: exact size of the object to allocate (can be less than @s->size
+ * e.g. for kmalloc caches)
+ * @flags: GFP flags
+ *
+ * Return:
+ * * NULL - must proceed with allocating as usual,
+ * * non-NULL - pointer to a KFENCE object.
+ *
+ * kfence_alloc() should be inserted into the heap allocation fast path,
+ * allowing it to transparently return KFENCE-allocated objects with a low
+ * probability using a static branch (the probability is controlled by the
+ * kfence.sample_interval boot parameter).
+ */
+static __always_inline void *kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
+{
+#ifdef CONFIG_KFENCE_STATIC_KEYS
+ if (static_branch_unlikely(&kfence_allocation_key))
+#else
+ if (unlikely(!atomic_read(&kfence_allocation_gate)))
+#endif
+ return __kfence_alloc(s, size, flags);
+ return NULL;
+}
+
+/**
+ * kfence_ksize() - get actual amount of memory allocated for a KFENCE object
+ * @addr: pointer to a heap object
+ *
+ * Return:
+ * * 0 - not a KFENCE object, must call __ksize() instead,
+ * * non-0 - this many bytes can be accessed without causing a memory error.
+ *
+ * kfence_ksize() returns the number of bytes requested for a KFENCE object at
+ * allocation time. This number may be less than the object size of the
+ * corresponding struct kmem_cache.
+ */
+size_t kfence_ksize(const void *addr);
+
+/**
+ * kfence_object_start() - find the beginning of a KFENCE object
+ * @addr: address within a KFENCE-allocated object
+ *
+ * Return: address of the beginning of the object.
+ *
+ * SL[AU]B-allocated objects are laid out within a page one by one, so it is
+ * easy to calculate the beginning of an object given a pointer inside it and
+ * the object size. The same is not true for KFENCE, which places a single
+ * object at either end of the page. This helper function is used to find the
+ * beginning of a KFENCE-allocated object.
+ */
+void *kfence_object_start(const void *addr);
+
+/**
+ * __kfence_free() - release a KFENCE heap object to KFENCE pool
+ * @addr: object to be freed
+ *
+ * Requires: is_kfence_address(addr)
+ *
+ * Release a KFENCE object and mark it as freed.
+ */
+void __kfence_free(void *addr);
+
+/**
+ * kfence_free() - try to release an arbitrary heap object to KFENCE pool
+ * @addr: object to be freed
+ *
+ * Return:
+ * * false - object doesn't belong to KFENCE pool and was ignored,
+ * * true - object was released to KFENCE pool.
+ *
+ * Release a KFENCE object and mark it as freed. May be called on any object,
+ * even non-KFENCE objects, to simplify integration of the hooks into the
+ * allocator's free codepath. The allocator must check the return value to
+ * determine if it was a KFENCE object or not.
+ */
+static __always_inline __must_check bool kfence_free(void *addr)
+{
+ if (!is_kfence_address(addr))
+ return false;
+ __kfence_free(addr);
+ return true;
+}
+
+/**
+ * kfence_handle_page_fault() - perform page fault handling for KFENCE pages
+ * @addr: faulting address
+ *
+ * Return:
+ * * false - address outside KFENCE pool,
+ * * true - page fault handled by KFENCE, no additional handling required.
+ *
+ * A page fault inside KFENCE pool indicates a memory error, such as an
+ * out-of-bounds access, a use-after-free or an invalid memory access. In these
+ * cases KFENCE prints an error message and marks the offending page as
+ * present, so that the kernel can proceed.
+ */
+bool __must_check kfence_handle_page_fault(unsigned long addr);
+
+#else /* CONFIG_KFENCE */
+
+static inline bool is_kfence_address(const void *addr) { return false; }
+static inline void kfence_alloc_pool(void) { }
+static inline void kfence_init(void) { }
+static inline void kfence_shutdown_cache(struct kmem_cache *s) { }
+static inline void *kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags) { return NULL; }
+static inline size_t kfence_ksize(const void *addr) { return 0; }
+static inline void *kfence_object_start(const void *addr) { return NULL; }
+static inline void __kfence_free(void *addr) { }
+static inline bool __must_check kfence_free(void *addr) { return false; }
+static inline bool __must_check kfence_handle_page_fault(unsigned long addr) { return false; }
+
+#endif
+
+#endif /* _LINUX_KFENCE_H */
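For illustration only, here is a minimal sketch of how an allocator's fast paths could use the hooks documented above; normal_slab_alloc() and normal_slab_free() are hypothetical stand-ins for the allocator's existing code paths, not real kernel functions:

static void *example_slab_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
{
	void *obj = kfence_alloc(s, size, flags);	/* NULL with high probability */

	if (obj)
		return obj;				/* KFENCE-guarded object */
	return normal_slab_alloc(s, size, flags);	/* hypothetical regular path */
}

static void example_slab_free(void *obj)
{
	if (kfence_free(obj))				/* true iff obj is in the KFENCE pool */
		return;
	normal_slab_free(obj);				/* hypothetical regular path */
}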
diff --git a/init/main.c b/init/main.c
index fc0277a9e7a3..2d027dc11b10 100644
--- a/init/main.c
+++ b/init/main.c
@@ -40,6 +40,7 @@
#include <linux/security.h>
#include <linux/smp.h>
#include <linux/profile.h>
+#include <linux/kfence.h>
#include <linux/rcupdate.h>
#include <linux/moduleparam.h>
#include <linux/kallsyms.h>
@@ -828,6 +829,7 @@ static void __init mm_init(void)
*/
page_ext_init_flatmem();
init_debug_pagealloc();
+ kfence_alloc_pool();
report_meminit();
mem_init();
kmem_cache_init();
@@ -956,6 +958,7 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
hrtimers_init();
softirq_init();
timekeeping_init();
+ kfence_init();
/*
* For best initial stack canary entropy, prepare it after:
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 799e49a29797..29e64b4cd412 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -878,6 +878,7 @@ config DEBUG_STACKOVERFLOW
If in doubt, say "N".
source "lib/Kconfig.kasan"
+source "lib/Kconfig.kfence"
endmenu # "Memory Debugging"
diff --git a/lib/Kconfig.kfence b/lib/Kconfig.kfence
new file mode 100644
index 000000000000..b88ac9d6b2e6
--- /dev/null
+++ b/lib/Kconfig.kfence
@@ -0,0 +1,67 @@
+# SPDX-License-Identifier: GPL-2.0-only
+
+config HAVE_ARCH_KFENCE
+ bool
+
+menuconfig KFENCE
+ bool "KFENCE: low-overhead sampling-based memory safety error detector"
+ depends on HAVE_ARCH_KFENCE && !KASAN && (SLAB || SLUB)
+ select STACKTRACE
+ help
+ KFENCE is a low-overhead sampling-based detector of heap out-of-bounds
+ access, use-after-free, and invalid-free errors. KFENCE is designed
+ to have negligible cost to permit enabling it in production
+ environments.
+
+ Note that KFENCE is not a substitute for explicit testing with tools
+ such as KASAN. KFENCE can detect a subset of bugs that KASAN can
+ detect, albeit at very different performance profiles. If you can
+ afford to use KASAN, continue using KASAN, for example in test
+ environments. If your kernel targets production use, and cannot
+ enable KASAN due to its cost, consider using KFENCE.
+
+if KFENCE
+
+config KFENCE_STATIC_KEYS
+ bool "Use static keys to set up allocations"
+ default y
+ depends on JUMP_LABEL # To ensure performance, require jump labels
+ help
+ Use static keys (static branches) to set up KFENCE allocations. Using
+ static keys is normally recommended, because it avoids a dynamic
+ branch in the allocator's fast path. However, with very low sample
+ intervals, or on systems that do not support jump labels, a dynamic
+ branch may still be an acceptable performance trade-off.
+
+config KFENCE_SAMPLE_INTERVAL
+ int "Default sample interval in milliseconds"
+ default 100
+ help
+ The KFENCE sample interval determines the frequency with which heap
+ allocations will be guarded by KFENCE. May be overridden via boot
+ parameter "kfence.sample_interval".
+
+ Set this to 0 to disable KFENCE by default, in which case only
+ setting "kfence.sample_interval" to a non-zero value enables KFENCE.
+
+config KFENCE_NUM_OBJECTS
+ int "Number of guarded objects available"
+ range 1 65535
+ default 255
+ help
+ The number of guarded objects available. For each KFENCE object, 2
+ pages are required: one contains the object itself, and the two
+ adjacent pages serve as guard pages.
+
+config KFENCE_STRESS_TEST_FAULTS
+ int "Stress testing of fault handling and error reporting" if EXPERT
+ default 0
+ help
+ The inverse probability with which to randomly protect KFENCE object
+ pages, resulting in spurious use-after-frees. The main purpose of
+ this option is to stress test KFENCE with concurrent error reports
+ and allocations/frees. A value of 0 disables stress testing logic.
+
+ Only for KFENCE testing; set to 0 if you are not a KFENCE developer.
+
+endif # KFENCE
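As a worked example with the defaults above (KFENCE_NUM_OBJECTS=255) and assuming 4 KiB pages, the pool sized by KFENCE_POOL_SIZE in include/linux/kfence.h comes to

	(255 + 1) * 2 * 4096 bytes = 2 MiB

i.e. one object page plus one guard page per object, plus one extra leading guard page. Per the help text above, KFENCE can also be disabled or retuned on the kernel command line, e.g. kfence.sample_interval=0 (off) or kfence.sample_interval=500.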
diff --git a/mm/Makefile b/mm/Makefile
index 2b1991759835..3d81a67d66a2 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -82,6 +82,7 @@ obj-$(CONFIG_PAGE_POISONING) += page_poison.o
obj-$(CONFIG_SLAB) += slab.o
obj-$(CONFIG_SLUB) += slub.o
obj-$(CONFIG_KASAN) += kasan/
+obj-$(CONFIG_KFENCE) += kfence/
obj-$(CONFIG_FAILSLAB) += failslab.o
obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
obj-$(CONFIG_MEMTEST) += memtest.o
diff --git a/mm/kfence/Makefile b/mm/kfence/Makefile
new file mode 100644
index 000000000000..d991e9a349f0
--- /dev/null
+++ b/mm/kfence/Makefile
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: GPL-2.0
+
+obj-$(CONFIG_KFENCE) := core.o report.o
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
new file mode 100644
index 000000000000..d6a32c13336b
--- /dev/null
+++ b/mm/kfence/core.c
@@ -0,0 +1,840 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KFENCE guarded object allocator and fault handling.
+ *
+ * Copyright (C) 2020, Google LLC.
+ */
+
+#define pr_fmt(fmt) "kfence: " fmt
+
+#include <linux/atomic.h>
+#include <linux/bug.h>
+#include <linux/debugfs.h>
+#include <linux/kcsan-checks.h>
+#include <linux/kfence.h>
+#include <linux/list.h>
+#include <linux/lockdep.h>
+#include <linux/memblock.h>
+#include <linux/moduleparam.h>
+#include <linux/random.h>
+#include <linux/rcupdate.h>
+#include <linux/seq_file.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+
+#include <asm/kfence.h>
+
+#include "kfence.h"
+
+/* Disables KFENCE on the first warning assuming an irrecoverable error. */
+#define KFENCE_WARN_ON(cond) \
+ ({ \
+ const bool __cond = WARN_ON(cond); \
+ if (unlikely(__cond)) \
+ WRITE_ONCE(kfence_enabled, false); \
+ __cond; \
+ })
+
+/* === Data ================================================================= */
+
+static bool kfence_enabled __read_mostly;
+
+static unsigned long kfence_sample_interval __read_mostly = CONFIG_KFENCE_SAMPLE_INTERVAL;
+
+#ifdef MODULE_PARAM_PREFIX
+#undef MODULE_PARAM_PREFIX
+#endif
+#define MODULE_PARAM_PREFIX "kfence."
+
+static int param_set_sample_interval(const char *val, const struct kernel_param *kp)
+{
+ unsigned long num;
+ int ret = kstrtoul(val, 0, &num);
+
+ if (ret < 0)
+ return ret;
+
+ if (!num) /* Using 0 to indicate KFENCE is disabled. */
+ WRITE_ONCE(kfence_enabled, false);
+ else if (!READ_ONCE(kfence_enabled) && system_state != SYSTEM_BOOTING)
+ return -EINVAL; /* Cannot (re-)enable KFENCE on-the-fly. */
+
+ *((unsigned long *)kp->arg) = num;
+ return 0;
+}
+
+static int param_get_sample_interval(char *buffer, const struct kernel_param *kp)
+{
+ if (!READ_ONCE(kfence_enabled))
+ return sprintf(buffer, "0\n");
+
+ return param_get_ulong(buffer, kp);
+}
+
+static const struct kernel_param_ops sample_interval_param_ops = {
+ .set = param_set_sample_interval,
+ .get = param_get_sample_interval,
+};
+module_param_cb(sample_interval, &sample_interval_param_ops, &kfence_sample_interval, 0600);
+
+/* The pool of pages used for guard pages and objects. */
+char *__kfence_pool __ro_after_init;
+EXPORT_SYMBOL(__kfence_pool); /* Export for test modules. */
+
+/*
+ * Per-object metadata, with one-to-one mapping of object metadata to
+ * backing pages (in __kfence_pool).
+ */
+static_assert(CONFIG_KFENCE_NUM_OBJECTS > 0);
+struct kfence_metadata kfence_metadata[CONFIG_KFENCE_NUM_OBJECTS];
+
+/* Freelist with available objects. */
+static struct list_head kfence_freelist = LIST_HEAD_INIT(kfence_freelist);
+static DEFINE_RAW_SPINLOCK(kfence_freelist_lock); /* Lock protecting freelist. */
+
+#ifdef CONFIG_KFENCE_STATIC_KEYS
+/* The static key to set up a KFENCE allocation. */
+DEFINE_STATIC_KEY_FALSE(kfence_allocation_key);
+#endif
+
+/* Gates the allocation, ensuring only one succeeds in a given period. */
+atomic_t kfence_allocation_gate = ATOMIC_INIT(1);
+
+/* Statistics counters for debugfs. */
+enum kfence_counter_id {
+ KFENCE_COUNTER_ALLOCATED,
+ KFENCE_COUNTER_ALLOCS,
+ KFENCE_COUNTER_FREES,
+ KFENCE_COUNTER_ZOMBIES,
+ KFENCE_COUNTER_BUGS,
+ KFENCE_COUNTER_COUNT,
+};
+static atomic_long_t counters[KFENCE_COUNTER_COUNT];
+static const char *const counter_names[] = {
+ [KFENCE_COUNTER_ALLOCATED] = "currently allocated",
+ [KFENCE_COUNTER_ALLOCS] = "total allocations",
+ [KFENCE_COUNTER_FREES] = "total frees",
+ [KFENCE_COUNTER_ZOMBIES] = "zombie allocations",
+ [KFENCE_COUNTER_BUGS] = "total bugs",
+};
+static_assert(ARRAY_SIZE(counter_names) == KFENCE_COUNTER_COUNT);
+
+/* === Internals ============================================================ */
+
+static bool kfence_protect(unsigned long addr)
+{
+ return !KFENCE_WARN_ON(!kfence_protect_page(ALIGN_DOWN(addr, PAGE_SIZE), true));
+}
+
+static bool kfence_unprotect(unsigned long addr)
+{
+ return !KFENCE_WARN_ON(!kfence_protect_page(ALIGN_DOWN(addr, PAGE_SIZE), false));
+}
+
+static inline struct kfence_metadata *addr_to_metadata(unsigned long addr)
+{
+ long index;
+
+ /* The checks do not affect performance; only called from slow-paths. */
+
+ if (!is_kfence_address((void *)addr))
+ return NULL;
+
+ /*
+ * May be an invalid index if called with an address at the edge of
+ * __kfence_pool, in which case we would report an "invalid access"
+ * error.
+ */
+ index = (addr - (unsigned long)__kfence_pool) / (PAGE_SIZE * 2) - 1;
+ if (index < 0 || index >= CONFIG_KFENCE_NUM_OBJECTS)
+ return NULL;
+
+ return &kfence_metadata[index];
+}
+
+static inline unsigned long metadata_to_pageaddr(const struct kfence_metadata *meta)
+{
+ unsigned long offset = (meta - kfence_metadata + 1) * PAGE_SIZE * 2;
+ unsigned long pageaddr = (unsigned long)&__kfence_pool[offset];
+
+ /* The checks do not affect performance; only called from slow-paths. */
+
+ /* Only call with a pointer into kfence_metadata. */
+ if (KFENCE_WARN_ON(meta < kfence_metadata ||
+ meta >= kfence_metadata + CONFIG_KFENCE_NUM_OBJECTS))
+ return 0;
+
+ /*
+ * This metadata object only ever maps to 1 page; verify that the stored
+ * address is in the expected range.
+ */
+ if (KFENCE_WARN_ON(ALIGN_DOWN(meta->addr, PAGE_SIZE) != pageaddr))
+ return 0;
+
+ return pageaddr;
+}
+
+/*
+ * Update the object's metadata state, including updating the alloc/free stacks
+ * depending on the state transition.
+ */
+static noinline void metadata_update_state(struct kfence_metadata *meta,
+ enum kfence_object_state next)
+{
+ struct kfence_track *track =
+ next == KFENCE_OBJECT_FREED ? &meta->free_track : &meta->alloc_track;
+
+ lockdep_assert_held(&meta->lock);
+
+ /*
+ * Skip over 1 (this) function; noinline ensures we do not accidentally
+ * skip over the caller by never inlining.
+ */
+ track->num_stack_entries = stack_trace_save(track->stack_entries, KFENCE_STACK_DEPTH, 1);
+ track->pid = task_pid_nr(current);
+
+ /*
+ * Pairs with READ_ONCE() in
+ * kfence_shutdown_cache(),
+ * kfence_handle_page_fault().
+ */
+ WRITE_ONCE(meta->state, next);
+}
+
+/* Write canary byte to @addr. */
+static inline bool set_canary_byte(u8 *addr)
+{
+ *addr = KFENCE_CANARY_PATTERN(addr);
+ return true;
+}
+
+/* Check canary byte at @addr. */
+static inline bool check_canary_byte(u8 *addr)
+{
+ if (likely(*addr == KFENCE_CANARY_PATTERN(addr)))
+ return true;
+
+ atomic_long_inc(&counters[KFENCE_COUNTER_BUGS]);
+ kfence_report_error((unsigned long)addr, addr_to_metadata((unsigned long)addr),
+ KFENCE_ERROR_CORRUPTION);
+ return false;
+}
+
+/* __always_inline this to ensure we won't do an indirect call to fn. */
+static __always_inline void for_each_canary(const struct kfence_metadata *meta, bool (*fn)(u8 *))
+{
+ const unsigned long pageaddr = ALIGN_DOWN(meta->addr, PAGE_SIZE);
+ unsigned long addr;
+
+ lockdep_assert_held(&meta->lock);
+
+ /*
+ * We'll iterate over each canary byte per-side until fn() returns
+ * false. However, we'll still iterate over the canary bytes to the
+ * right of the object even if there was an error in the canary bytes to
+ * the left of the object. Specifically, if check_canary_byte()
+ * generates an error, showing both sides might give more clues as to
+ * what the error is about when displaying which bytes were corrupted.
+ */
+
+ /* Apply to left of object. */
+ for (addr = pageaddr; addr < meta->addr; addr++) {
+ if (!fn((u8 *)addr))
+ break;
+ }
+
+ /* Apply to right of object. */
+ for (addr = meta->addr + meta->size; addr < pageaddr + PAGE_SIZE; addr++) {
+ if (!fn((u8 *)addr))
+ break;
+ }
+}
+
+static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t gfp)
+{
+ struct kfence_metadata *meta = NULL;
+ unsigned long flags;
+ struct page *page;
+ void *addr;
+
+ /* Try to obtain a free object. */
+ raw_spin_lock_irqsave(&kfence_freelist_lock, flags);
+ if (!list_empty(&kfence_freelist)) {
+ meta = list_entry(kfence_freelist.next, struct kfence_metadata, list);
+ list_del_init(&meta->list);
+ }
+ raw_spin_unlock_irqrestore(&kfence_freelist_lock, flags);
+ if (!meta)
+ return NULL;
+
+ if (unlikely(!raw_spin_trylock_irqsave(&meta->lock, flags))) {
+ /*
+ * This is extremely unlikely -- we are reporting on a
+ * use-after-free, which locked meta->lock, and the reporting
+ * code via printk calls kmalloc() which ends up in
+ * kfence_alloc() and tries to grab the same object that we're
+ * reporting on. While it has never been observed, lockdep does
+ * report that there is a possibility of deadlock. Fix it by
+ * using trylock and bailing out gracefully.
+ */
+ raw_spin_lock_irqsave(&kfence_freelist_lock, flags);
+ /* Put the object back on the freelist. */
+ list_add_tail(&meta->list, &kfence_freelist);
+ raw_spin_unlock_irqrestore(&kfence_freelist_lock, flags);
+
+ return NULL;
+ }
+
+ meta->addr = metadata_to_pageaddr(meta);
+ /* Unprotect if we're reusing this page. */
+ if (meta->state == KFENCE_OBJECT_FREED)
+ kfence_unprotect(meta->addr);
+
+ /*
+ * Note: for allocations made before RNG initialization, will always
+ * return zero. We still benefit from enabling KFENCE as early as
+ * possible, even when the RNG is not yet available, as this will allow
+ * KFENCE to detect bugs due to earlier allocations. The only downside
+ * is that the out-of-bounds accesses detected are deterministic for
+ * such allocations.
+ */
+ if (prandom_u32_max(2)) {
+ /* Allocate on the "right" side, re-calculate address. */
+ meta->addr += PAGE_SIZE - size;
+ meta->addr = ALIGN_DOWN(meta->addr, cache->align);
+ }
+
+ addr = (void *)meta->addr;
+
+ /* Update remaining metadata. */
+ metadata_update_state(meta, KFENCE_OBJECT_ALLOCATED);
+ /* Pairs with READ_ONCE() in kfence_shutdown_cache(). */
+ WRITE_ONCE(meta->cache, cache);
+ meta->size = size;
+ for_each_canary(meta, set_canary_byte);
+
+ /* Set required struct page fields. */
+ page = virt_to_page(meta->addr);
+ page->slab_cache = cache;
+
+ raw_spin_unlock_irqrestore(&meta->lock, flags);
+
+ /* Memory initialization. */
+
+ /*
+ * We check slab_want_init_on_alloc() ourselves, rather than letting
+ * SL*B do the initialization, as otherwise we might overwrite KFENCE's
+ * redzone.
+ */
+ if (unlikely(slab_want_init_on_alloc(gfp, cache)))
+ memzero_explicit(addr, size);
+ if (cache->ctor)
+ cache->ctor(addr);
+
+ if (CONFIG_KFENCE_STRESS_TEST_FAULTS && !prandom_u32_max(CONFIG_KFENCE_STRESS_TEST_FAULTS))
+ kfence_protect(meta->addr); /* Random "faults" by protecting the object. */
+
+ atomic_long_inc(&counters[KFENCE_COUNTER_ALLOCATED]);
+ atomic_long_inc(&counters[KFENCE_COUNTER_ALLOCS]);
+
+ return addr;
+}
+
+static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool zombie)
+{
+ struct kcsan_scoped_access assert_page_exclusive;
+ unsigned long flags;
+
+ raw_spin_lock_irqsave(&meta->lock, flags);
+
+ if (meta->state != KFENCE_OBJECT_ALLOCATED || meta->addr != (unsigned long)addr) {
+ /* Invalid or double-free, bail out. */
+ atomic_long_inc(&counters[KFENCE_COUNTER_BUGS]);
+ kfence_report_error((unsigned long)addr, meta, KFENCE_ERROR_INVALID_FREE);
+ raw_spin_unlock_irqrestore(&meta->lock, flags);
+ return;
+ }
+
+ /* Detect racy use-after-free, or incorrect reallocation of this page by KFENCE. */
+ kcsan_begin_scoped_access((void *)ALIGN_DOWN((unsigned long)addr, PAGE_SIZE), PAGE_SIZE,
+ KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT,
+ &assert_page_exclusive);
+
+ if (CONFIG_KFENCE_STRESS_TEST_FAULTS)
+ kfence_unprotect((unsigned long)addr); /* To check canary bytes. */
+
+ /* Restore page protection if there was an OOB access. */
+ if (meta->unprotected_page) {
+ kfence_protect(meta->unprotected_page);
+ meta->unprotected_page = 0;
+ }
+
+ /* Check canary bytes for memory corruption. */
+ for_each_canary(meta, check_canary_byte);
+
+ /*
+ * Clear memory if init-on-free is set. While we protect the page, the
+ * data is still there, and after a use-after-free is detected, we
+ * unprotect the page, so the data is still accessible.
+ */
+ if (!zombie && unlikely(slab_want_init_on_free(meta->cache)))
+ memzero_explicit(addr, meta->size);
+
+ /* Mark the object as freed. */
+ metadata_update_state(meta, KFENCE_OBJECT_FREED);
+
+ raw_spin_unlock_irqrestore(&meta->lock, flags);
+
+ /* Protect to detect use-after-frees. */
+ kfence_protect((unsigned long)addr);
+
+ kcsan_end_scoped_access(&assert_page_exclusive);
+ if (!zombie) {
+ /* Add it to the tail of the freelist for reuse. */
+ raw_spin_lock_irqsave(&kfence_freelist_lock, flags);
+ KFENCE_WARN_ON(!list_empty(&meta->list));
+ list_add_tail(&meta->list, &kfence_freelist);
+ raw_spin_unlock_irqrestore(&kfence_freelist_lock, flags);
+
+ atomic_long_dec(&counters[KFENCE_COUNTER_ALLOCATED]);
+ atomic_long_inc(&counters[KFENCE_COUNTER_FREES]);
+ } else {
+ /* See kfence_shutdown_cache(). */
+ atomic_long_inc(&counters[KFENCE_COUNTER_ZOMBIES]);
+ }
+}
+
+static void rcu_guarded_free(struct rcu_head *h)
+{
+ struct kfence_metadata *meta = container_of(h, struct kfence_metadata, rcu_head);
+
+ kfence_guarded_free((void *)meta->addr, meta, false);
+}
+
+static bool __init kfence_init_pool(void)
+{
+ unsigned long addr = (unsigned long)__kfence_pool;
+ struct page *pages;
+ int i;
+
+ if (!__kfence_pool)
+ return false;
+
+ if (!arch_kfence_init_pool())
+ goto err;
+
+ pages = virt_to_page(addr);
+
+ /*
+ * Set up object pages: they must have PG_slab set, to avoid freeing
+ * these as real pages.
+ *
+ * We also want to avoid inserting kfence_free() in the kfree()
+ * fast-path in SLUB, and therefore need to ensure kfree() correctly
+ * enters __slab_free() slow-path.
+ */
+ for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
+ if (!i || (i % 2))
+ continue;
+
+ /* Verify we do not have a compound head page. */
+ if (WARN_ON(compound_head(&pages[i]) != &pages[i]))
+ goto err;
+
+ __SetPageSlab(&pages[i]);
+ }
+
+ /*
+ * Protect the first 2 pages. The first page is mostly unnecessary, and
+ * merely serves as an extended guard page. However, adding one
+ * additional page in the beginning gives us an even number of pages,
+ * which simplifies the mapping of address to metadata index.
+ */
+ for (i = 0; i < 2; i++) {
+ if (unlikely(!kfence_protect(addr)))
+ goto err;
+
+ addr += PAGE_SIZE;
+ }
+
+ for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
+ struct kfence_metadata *meta = &kfence_metadata[i];
+
+ /* Initialize metadata. */
+ INIT_LIST_HEAD(&meta->list);
+ raw_spin_lock_init(&meta->lock);
+ meta->state = KFENCE_OBJECT_UNUSED;
+ meta->addr = addr; /* Initialize for validation in metadata_to_pageaddr(). */
+ list_add_tail(&meta->list, &kfence_freelist);
+
+ /* Protect the right redzone. */
+ if (unlikely(!kfence_protect(addr + PAGE_SIZE)))
+ goto err;
+
+ addr += 2 * PAGE_SIZE;
+ }
+
+ return true;
+
+err:
+ /*
+ * Only release unprotected pages, and do not try to go back and change
+ * page attributes due to risk of failing to do so as well. If changing
+ * page attributes for some pages fails, it is very likely that it also
+ * fails for the first page, and therefore expect addr==__kfence_pool in
+ * most failure cases.
+ */
+ memblock_free_late(__pa(addr), KFENCE_POOL_SIZE - (addr - (unsigned long)__kfence_pool));
+ __kfence_pool = NULL;
+ return false;
+}
+
+/* === DebugFS Interface ==================================================== */
+
+static int stats_show(struct seq_file *seq, void *v)
+{
+ int i;
+
+ seq_printf(seq, "enabled: %i\n", READ_ONCE(kfence_enabled));
+ for (i = 0; i < KFENCE_COUNTER_COUNT; i++)
+ seq_printf(seq, "%s: %ld\n", counter_names[i], atomic_long_read(&counters[i]));
+
+ return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(stats);
+
+/*
+ * debugfs seq_file operations for /sys/kernel/debug/kfence/objects.
+ * start_object() and next_object() return the object index + 1, because NULL is used
+ * to stop iteration.
+ */
+static void *start_object(struct seq_file *seq, loff_t *pos)
+{
+ if (*pos < CONFIG_KFENCE_NUM_OBJECTS)
+ return (void *)((long)*pos + 1);
+ return NULL;
+}
+
+static void stop_object(struct seq_file *seq, void *v)
+{
+}
+
+static void *next_object(struct seq_file *seq, void *v, loff_t *pos)
+{
+ ++*pos;
+ if (*pos < CONFIG_KFENCE_NUM_OBJECTS)
+ return (void *)((long)*pos + 1);
+ return NULL;
+}
+
+static int show_object(struct seq_file *seq, void *v)
+{
+ struct kfence_metadata *meta = &kfence_metadata[(long)v - 1];
+ unsigned long flags;
+
+ raw_spin_lock_irqsave(&meta->lock, flags);
+ kfence_print_object(seq, meta);
+ raw_spin_unlock_irqrestore(&meta->lock, flags);
+ seq_puts(seq, "---------------------------------\n");
+
+ return 0;
+}
+
+static const struct seq_operations object_seqops = {
+ .start = start_object,
+ .next = next_object,
+ .stop = stop_object,
+ .show = show_object,
+};
+
+static int open_objects(struct inode *inode, struct file *file)
+{
+ return seq_open(file, &object_seqops);
+}
+
+static const struct file_operations objects_fops = {
+ .open = open_objects,
+ .read = seq_read,
+ .llseek = seq_lseek,
+};
+
+static int __init kfence_debugfs_init(void)
+{
+ struct dentry *kfence_dir = debugfs_create_dir("kfence", NULL);
+
+ debugfs_create_file("stats", 0444, kfence_dir, NULL, &stats_fops);
+ debugfs_create_file("objects", 0400, kfence_dir, NULL, &objects_fops);
+ return 0;
+}
+
+late_initcall(kfence_debugfs_init);
+
+/* === Allocation Gate Timer ================================================ */
+
+/*
+ * Set up delayed work, which will enable and disable the static key. We need to
+ * use a work queue (rather than a simple timer), since enabling and disabling a
+ * static key cannot be done from an interrupt.
+ *
+ * Note: Toggling a static branch currently causes IPIs, and here we'll end up
+ * with a total of 2 IPIs to all CPUs. If this ends up a problem in future (with
+ * more aggressive sampling intervals), we could get away with a variant that
+ * avoids IPIs, at the cost of not immediately capturing allocations if the
+ * instructions remain cached.
+ */
+static struct delayed_work kfence_timer;
+static void toggle_allocation_gate(struct work_struct *work)
+{
+ if (!READ_ONCE(kfence_enabled))
+ return;
+
+ /* Enable static key, and await allocation to happen. */
+ atomic_set(&kfence_allocation_gate, 0);
+#ifdef CONFIG_KFENCE_STATIC_KEYS
+ static_branch_enable(&kfence_allocation_key);
+ /*
+ * Await an allocation. Timeout after 1 second, in case the kernel stops
+ * doing allocations, to avoid stalling this worker task for too long.
+ */
+ {
+ unsigned long end_wait = jiffies + HZ;
+
+ do {
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ if (atomic_read(&kfence_allocation_gate) != 0)
+ break;
+ schedule_timeout(1);
+ } while (time_before(jiffies, end_wait));
+ __set_current_state(TASK_RUNNING);
+ }
+ /* Disable static key and reset timer. */
+ static_branch_disable(&kfence_allocation_key);
+#endif
+ schedule_delayed_work(&kfence_timer, msecs_to_jiffies(kfence_sample_interval));
+}
+static DECLARE_DELAYED_WORK(kfence_timer, toggle_allocation_gate);
+
+/* === Public interface ===================================================== */
+
+void __init kfence_alloc_pool(void)
+{
+ if (!kfence_sample_interval)
+ return;
+
+ __kfence_pool = memblock_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
+
+ if (!__kfence_pool)
+ pr_err("failed to allocate pool\n");
+}
+
+void __init kfence_init(void)
+{
+ /* Setting kfence_sample_interval to 0 on boot disables KFENCE. */
+ if (!kfence_sample_interval)
+ return;
+
+ if (!kfence_init_pool()) {
+ pr_err("%s failed\n", __func__);
+ return;
+ }
+
+ WRITE_ONCE(kfence_enabled, true);
+ schedule_delayed_work(&kfence_timer, 0);
+ pr_info("initialized - using %lu bytes for %d objects", KFENCE_POOL_SIZE,
+ CONFIG_KFENCE_NUM_OBJECTS);
+ if (IS_ENABLED(CONFIG_DEBUG_KERNEL))
+ pr_cont(" at 0x%px-0x%px\n", (void *)__kfence_pool,
+ (void *)(__kfence_pool + KFENCE_POOL_SIZE));
+ else
+ pr_cont("\n");
+}
+
+void kfence_shutdown_cache(struct kmem_cache *s)
+{
+ unsigned long flags;
+ struct kfence_metadata *meta;
+ int i;
+
+ for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
+ bool in_use;
+
+ meta = &kfence_metadata[i];
+
+ /*
+ * If we observe some inconsistent cache and state pair where we
+ * should have returned false here, cache destruction is racing
+ * with either kmem_cache_alloc() or kmem_cache_free(). Taking
+ * the lock will not help, as different critical section
+ * serialization will have the same outcome.
+ */
+ if (READ_ONCE(meta->cache) != s ||
+ READ_ONCE(meta->state) != KFENCE_OBJECT_ALLOCATED)
+ continue;
+
+ raw_spin_lock_irqsave(&meta->lock, flags);
+ in_use = meta->cache == s && meta->state == KFENCE_OBJECT_ALLOCATED;
+ raw_spin_unlock_irqrestore(&meta->lock, flags);
+
+ if (in_use) {
+ /*
+ * This cache still has allocations, and we should not
+ * release them back into the freelist so they can still
+ * safely be used and retain the kernel's default
+ * behaviour of keeping the allocations alive (leak the
+ * cache); however, they effectively become "zombie
+ * allocations" as the KFENCE objects are the only ones
+ * still in use and the owning cache is being destroyed.
+ *
+ * We mark them freed, so that any subsequent use shows
+ * more useful error messages that will include stack
+ * traces of the user of the object, the original
+ * allocation, and caller to shutdown_cache().
+ */
+ kfence_guarded_free((void *)meta->addr, meta, /*zombie=*/true);
+ }
+ }
+
+ for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
+ meta = &kfence_metadata[i];
+
+ /* See above. */
+ if (READ_ONCE(meta->cache) != s || READ_ONCE(meta->state) != KFENCE_OBJECT_FREED)
+ continue;
+
+ raw_spin_lock_irqsave(&meta->lock, flags);
+ if (meta->cache == s && meta->state == KFENCE_OBJECT_FREED)
+ meta->cache = NULL;
+ raw_spin_unlock_irqrestore(&meta->lock, flags);
+ }
+}
+
+void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
+{
+ /*
+ * allocation_gate only needs to become non-zero, so it doesn't make
+ * sense to continue writing to it and pay the associated contention
+ * cost, in case we have a large number of concurrent allocations.
+ */
+ if (atomic_read(&kfence_allocation_gate) || atomic_inc_return(&kfence_allocation_gate) > 1)
+ return NULL;
+
+ if (!READ_ONCE(kfence_enabled))
+ return NULL;
+
+ if (size > PAGE_SIZE)
+ return NULL;
+
+ return kfence_guarded_alloc(s, size, flags);
+}
+
+size_t kfence_ksize(const void *addr)
+{
+ const struct kfence_metadata *meta = addr_to_metadata((unsigned long)addr);
+
+ /*
+ * Read locklessly -- if there is a race with __kfence_alloc(), this is
+ * either a use-after-free or invalid access.
+ */
+ return meta ? meta->size : 0;
+}
+
+void *kfence_object_start(const void *addr)
+{
+ const struct kfence_metadata *meta = addr_to_metadata((unsigned long)addr);
+
+ /*
+ * Read locklessly -- if there is a race with __kfence_alloc(), this is
+ * either a use-after-free or invalid access.
+ */
+ return meta ? (void *)meta->addr : NULL;
+}
+
+void __kfence_free(void *addr)
+{
+ struct kfence_metadata *meta = addr_to_metadata((unsigned long)addr);
+
+ /*
+ * If the objects of the cache are SLAB_TYPESAFE_BY_RCU, defer freeing
+ * the object, as the object page may be recycled for other-typed
+ * objects once it has been freed. meta->cache may be NULL if the cache
+ * was destroyed.
+ */
+ if (unlikely(meta->cache && (meta->cache->flags & SLAB_TYPESAFE_BY_RCU)))
+ call_rcu(&meta->rcu_head, rcu_guarded_free);
+ else
+ kfence_guarded_free(addr, meta, false);
+}
+
+bool kfence_handle_page_fault(unsigned long addr)
+{
+ const int page_index = (addr - (unsigned long)__kfence_pool) / PAGE_SIZE;
+ struct kfence_metadata *to_report = NULL;
+ enum kfence_error_type error_type;
+ unsigned long flags;
+
+ if (!is_kfence_address((void *)addr))
+ return false;
+
+ if (!READ_ONCE(kfence_enabled)) /* If disabled at runtime ... */
+ return kfence_unprotect(addr); /* ... unprotect and proceed. */
+
+ atomic_long_inc(&counters[KFENCE_COUNTER_BUGS]);
+
+ if (page_index % 2) {
+ /* This is a redzone, report a buffer overflow. */
+ struct kfence_metadata *meta;
+ int distance = 0;
+
+ meta = addr_to_metadata(addr - PAGE_SIZE);
+ if (meta && READ_ONCE(meta->state) == KFENCE_OBJECT_ALLOCATED) {
+ to_report = meta;
+ /* Data race ok; distance calculation approximate. */
+ distance = addr - data_race(meta->addr + meta->size);
+ }
+
+ meta = addr_to_metadata(addr + PAGE_SIZE);
+ if (meta && READ_ONCE(meta->state) == KFENCE_OBJECT_ALLOCATED) {
+ /* Data race ok; distance calculation approximate. */
+ if (!to_report || distance > data_race(meta->addr) - addr)
+ to_report = meta;
+ }
+
+ if (!to_report)
+ goto out;
+
+ raw_spin_lock_irqsave(&to_report->lock, flags);
+ to_report->unprotected_page = addr;
+ error_type = KFENCE_ERROR_OOB;
+
+ /*
+ * If the object was freed before we took the lock, we can still
+ * report this as an OOB -- the report will simply show the
+ * stacktrace of the free as well.
+ */
+ } else {
+ to_report = addr_to_metadata(addr);
+ if (!to_report)
+ goto out;
+
+ raw_spin_lock_irqsave(&to_report->lock, flags);
+ error_type = KFENCE_ERROR_UAF;
+ /*
+ * We may race with __kfence_alloc(), and it is possible that a
+ * freed object may be reallocated. We simply report this as a
+ * use-after-free, with the stack trace showing the place where
+ * the object was re-allocated.
+ */
+ }
+
+out:
+ if (to_report) {
+ kfence_report_error(addr, to_report, error_type);
+ raw_spin_unlock_irqrestore(&to_report->lock, flags);
+ } else {
+ /* This may be a UAF or OOB access, but we can't be sure. */
+ kfence_report_error(addr, NULL, KFENCE_ERROR_INVALID);
+ }
+
+ return kfence_unprotect(addr); /* Unprotect and let access proceed. */
+}
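The file above relies on a small per-architecture interface from <asm/kfence.h>: arch_kfence_init_pool(), called from kfence_init_pool(), and kfence_protect_page(), used by kfence_protect()/kfence_unprotect(); report.c below additionally expects a KFENCE_SKIP_ARCH_FAULT_HANDLER string. A rough outline of the expected signatures, as a hypothetical sketch rather than any real architecture port:

/* <asm/kfence.h> -- hypothetical outline, not a real port. */
static inline bool arch_kfence_init_pool(void)
{
	/* e.g. ensure __kfence_pool is mapped at page granularity */
	return true;
}

static inline bool kfence_protect_page(unsigned long addr, bool protect)
{
	/* toggle the present bit of the page-table entry mapping @addr */
	return true;	/* return false on failure */
}

/* Name of the arch fault-handling function skipped in report stack traces. */
#define KFENCE_SKIP_ARCH_FAULT_HANDLER "example_arch_fault_handler"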
diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
new file mode 100644
index 000000000000..1014060f9707
--- /dev/null
+++ b/mm/kfence/kfence.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Kernel Electric-Fence (KFENCE). For more info please see
+ * Documentation/dev-tools/kfence.rst.
+ *
+ * Copyright (C) 2020, Google LLC.
+ */
+
+#ifndef MM_KFENCE_KFENCE_H
+#define MM_KFENCE_KFENCE_H
+
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+
+#include "../slab.h" /* for struct kmem_cache */
+
+/* For non-debug builds, avoid leaking kernel pointers into dmesg. */
+#ifdef CONFIG_DEBUG_KERNEL
+#define PTR_FMT "%px"
+#else
+#define PTR_FMT "%p"
+#endif
+
+/*
+ * Get the canary byte pattern for @addr. Use a pattern that varies based on the
+ * lower 3 bits of the address, to detect memory corruptions with higher
+ * probability, where similar constants are used.
+ */
+#define KFENCE_CANARY_PATTERN(addr) ((u8)0xaa ^ (u8)((unsigned long)(addr) & 0x7))
+
+/* Maximum stack depth for reports. */
+#define KFENCE_STACK_DEPTH 64
+
+/* KFENCE object states. */
+enum kfence_object_state {
+ KFENCE_OBJECT_UNUSED, /* Object is unused. */
+ KFENCE_OBJECT_ALLOCATED, /* Object is currently allocated. */
+ KFENCE_OBJECT_FREED, /* Object was allocated, and then freed. */
+};
+
+/* Alloc/free tracking information. */
+struct kfence_track {
+ pid_t pid;
+ int num_stack_entries;
+ unsigned long stack_entries[KFENCE_STACK_DEPTH];
+};
+
+/* KFENCE metadata per guarded allocation. */
+struct kfence_metadata {
+ struct list_head list; /* Freelist node; access under kfence_freelist_lock. */
+ struct rcu_head rcu_head; /* For delayed freeing. */
+
+ /*
+ * Lock protecting below data; to ensure consistency of the below data,
+ * since the following may execute concurrently: __kfence_alloc(),
+ * __kfence_free(), kfence_handle_page_fault(). However, note that we
+ * cannot grab the same metadata off the freelist twice, and multiple
+ * __kfence_alloc() cannot run concurrently on the same metadata.
+ */
+ raw_spinlock_t lock;
+
+ /* The current state of the object; see above. */
+ enum kfence_object_state state;
+
+ /*
+ * Allocated object address; cannot be calculated from size, because of
+ * alignment requirements.
+ *
+ * Invariant: ALIGN_DOWN(addr, PAGE_SIZE) is constant.
+ */
+ unsigned long addr;
+
+ /*
+ * The size of the original allocation.
+ */
+ size_t size;
+
+ /*
+ * The kmem_cache cache of the last allocation; NULL if never allocated
+ * or the cache has already been destroyed.
+ */
+ struct kmem_cache *cache;
+
+ /*
+ * In case of an invalid access, the page that was unprotected; we
+ * optimistically only store one address.
+ */
+ unsigned long unprotected_page;
+
+ /* Allocation and free stack information. */
+ struct kfence_track alloc_track;
+ struct kfence_track free_track;
+};
+
+extern struct kfence_metadata kfence_metadata[CONFIG_KFENCE_NUM_OBJECTS];
+
+/* KFENCE error types for report generation. */
+enum kfence_error_type {
+ KFENCE_ERROR_OOB, /* Detected an out-of-bounds access. */
+ KFENCE_ERROR_UAF, /* Detected a use-after-free access. */
+ KFENCE_ERROR_CORRUPTION, /* Detected a memory corruption on free. */
+ KFENCE_ERROR_INVALID, /* Invalid access of unknown type. */
+ KFENCE_ERROR_INVALID_FREE, /* Invalid free. */
+};
+
+void kfence_report_error(unsigned long address, const struct kfence_metadata *meta,
+ enum kfence_error_type type);
+
+void kfence_print_object(struct seq_file *seq, const struct kfence_metadata *meta);
+
+#endif /* MM_KFENCE_KFENCE_H */
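To illustrate the canary pattern defined above: a redzone byte at an address whose low three bits are 0x5 is expected to hold

	KFENCE_CANARY_PATTERN(addr) = 0xaa ^ 0x5 = 0xaf

so adjacent canary bytes cycle through eight distinct values, which is what check_canary_byte() in core.c compares against on free.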
diff --git a/mm/kfence/report.c b/mm/kfence/report.c
new file mode 100644
index 000000000000..64f27c8d46a3
--- /dev/null
+++ b/mm/kfence/report.c
@@ -0,0 +1,240 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KFENCE reporting.
+ *
+ * Copyright (C) 2020, Google LLC.
+ */
+
+#include <stdarg.h>
+
+#include <linux/kernel.h>
+#include <linux/lockdep.h>
+#include <linux/printk.h>
+#include <linux/seq_file.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+
+#include <asm/kfence.h>
+
+#include "kfence.h"
+
+/* Helper function to either print to a seq_file or to console. */
+__printf(2, 3)
+static void seq_con_printf(struct seq_file *seq, const char *fmt, ...)
+{
+ va_list args;
+
+ va_start(args, fmt);
+ if (seq)
+ seq_vprintf(seq, fmt, args);
+ else
+ vprintk(fmt, args);
+ va_end(args);
+}
+
+/*
+ * Get the number of stack entries to skip to get out of MM internals. @type is
+ * optional, and if set to NULL, assumes an allocation or free stack.
+ */
+static int get_stack_skipnr(const unsigned long stack_entries[], int num_entries,
+ const enum kfence_error_type *type)
+{
+ char buf[64];
+ int skipnr, fallback = 0;
+ bool is_access_fault = false;
+
+ if (type) {
+ /* Depending on error type, find different stack entries. */
+ switch (*type) {
+ case KFENCE_ERROR_UAF:
+ case KFENCE_ERROR_OOB:
+ case KFENCE_ERROR_INVALID:
+ is_access_fault = true;
+ break;
+ case KFENCE_ERROR_CORRUPTION:
+ case KFENCE_ERROR_INVALID_FREE:
+ break;
+ }
+ }
+
+ for (skipnr = 0; skipnr < num_entries; skipnr++) {
+ int len = scnprintf(buf, sizeof(buf), "%ps", (void *)stack_entries[skipnr]);
+
+ if (is_access_fault) {
+ if (!strncmp(buf, KFENCE_SKIP_ARCH_FAULT_HANDLER, len))
+ goto found;
+ } else {
+ if (str_has_prefix(buf, "kfence_") || str_has_prefix(buf, "__kfence_") ||
+ !strncmp(buf, "__slab_free", len)) {
+ /*
+ * In case of tail calls from any of the below
+ * to any of the above.
+ */
+ fallback = skipnr + 1;
+ }
+
+ /* Also the *_bulk() variants by only checking prefixes. */
+ if (str_has_prefix(buf, "kfree") ||
+ str_has_prefix(buf, "kmem_cache_free") ||
+ str_has_prefix(buf, "__kmalloc") ||
+ str_has_prefix(buf, "kmem_cache_alloc"))
+ goto found;
+ }
+ }
+ if (fallback < num_entries)
+ return fallback;
+found:
+ skipnr++;
+ return skipnr < num_entries ? skipnr : 0;
+}
+
+static void kfence_print_stack(struct seq_file *seq, const struct kfence_metadata *meta,
+ bool show_alloc)
+{
+ const struct kfence_track *track = show_alloc ? &meta->alloc_track : &meta->free_track;
+
+ if (track->num_stack_entries) {
+ /* Skip allocation/free internals stack. */
+ int i = get_stack_skipnr(track->stack_entries, track->num_stack_entries, NULL);
+
+ /* stack_trace_seq_print() does not exist; open code our own. */
+ for (; i < track->num_stack_entries; i++)
+ seq_con_printf(seq, " %pS\n", (void *)track->stack_entries[i]);
+ } else {
+ seq_con_printf(seq, " no %s stack\n", show_alloc ? "allocation" : "deallocation");
+ }
+}
+
+void kfence_print_object(struct seq_file *seq, const struct kfence_metadata *meta)
+{
+ const int size = abs(meta->size);
+ const unsigned long start = meta->addr;
+ const struct kmem_cache *const cache = meta->cache;
+
+ lockdep_assert_held(&meta->lock);
+
+ if (meta->state == KFENCE_OBJECT_UNUSED) {
+ seq_con_printf(seq, "kfence-#%zd unused\n", meta - kfence_metadata);
+ return;
+ }
+
+ seq_con_printf(seq,
+ "kfence-#%zd [0x" PTR_FMT "-0x" PTR_FMT
+ ", size=%d, cache=%s] allocated by task %d:\n",
+ meta - kfence_metadata, (void *)start, (void *)(start + size - 1), size,
+ (cache && cache->name) ? cache->name : "<destroyed>", meta->alloc_track.pid);
+ kfence_print_stack(seq, meta, true);
+
+ if (meta->state == KFENCE_OBJECT_FREED) {
+ seq_con_printf(seq, "\nfreed by task %d:\n", meta->free_track.pid);
+ kfence_print_stack(seq, meta, false);
+ }
+}
+
+/*
+ * Show bytes at @address that are different from the expected canary values,
+ * up to @bytes_to_show.
+ */
+static void print_diff_canary(unsigned long address, size_t bytes_to_show,
+ const struct kfence_metadata *meta)
+{
+ const unsigned long show_until_addr = address + bytes_to_show;
+ const u8 *cur, *end;
+
+ /* Do not show contents of object nor read into following guard page. */
+ end = (const u8 *)(address < meta->addr ? min(show_until_addr, meta->addr)
+ : min(show_until_addr, PAGE_ALIGN(address)));
+
+ pr_cont("[");
+ for (cur = (const u8 *)address; cur < end; cur++) {
+ if (*cur == KFENCE_CANARY_PATTERN(cur))
+ pr_cont(" .");
+ else if (IS_ENABLED(CONFIG_DEBUG_KERNEL))
+ pr_cont(" 0x%02x", *cur);
+ else /* Do not leak kernel memory in non-debug builds. */
+ pr_cont(" !");
+ }
+ pr_cont(" ]");
+}
+
+void kfence_report_error(unsigned long address, const struct kfence_metadata *meta,
+ enum kfence_error_type type)
+{
+ unsigned long stack_entries[KFENCE_STACK_DEPTH] = { 0 };
+ int num_stack_entries = stack_trace_save(stack_entries, KFENCE_STACK_DEPTH, 1);
+ int skipnr = get_stack_skipnr(stack_entries, num_stack_entries, &type);
+ const ptrdiff_t object_index = meta ? meta - kfence_metadata : -1;
+
+ /* Require non-NULL meta, except if KFENCE_ERROR_INVALID. */
+ if (WARN_ON(type != KFENCE_ERROR_INVALID && !meta))
+ return;
+
+ if (meta)
+ lockdep_assert_held(&meta->lock);
+ /*
+ * Because we may generate reports in printk-unfriendly parts of the
+ * kernel, such as scheduler code, the use of printk() could deadlock.
+ * Until such time that all printing code here is safe in all parts of
+ * the kernel, accept the risk, and just get our message out (given the
+ * system might already behave unpredictably due to the memory error).
+ * As such, also disable lockdep to hide warnings, and avoid disabling
+ * lockdep for the rest of the kernel.
+ */
+ lockdep_off();
+
+ pr_err("==================================================================\n");
+ /* Print report header. */
+ switch (type) {
+ case KFENCE_ERROR_OOB: {
+ const bool left_of_object = address < meta->addr;
+
+ pr_err("BUG: KFENCE: out-of-bounds in %pS\n\n", (void *)stack_entries[skipnr]);
+ pr_err("Out-of-bounds access at 0x" PTR_FMT " (%luB %s of kfence-#%zd):\n",
+ (void *)address,
+ left_of_object ? meta->addr - address : address - meta->addr,
+ left_of_object ? "left" : "right", object_index);
+ break;
+ }
+ case KFENCE_ERROR_UAF:
+ pr_err("BUG: KFENCE: use-after-free in %pS\n\n", (void *)stack_entries[skipnr]);
+ pr_err("Use-after-free access at 0x" PTR_FMT " (in kfence-#%zd):\n",
+ (void *)address, object_index);
+ break;
+ case KFENCE_ERROR_CORRUPTION:
+ pr_err("BUG: KFENCE: memory corruption in %pS\n\n", (void *)stack_entries[skipnr]);
+ pr_err("Corrupted memory at 0x" PTR_FMT " ", (void *)address);
+ print_diff_canary(address, 16, meta);
+ pr_cont(" (in kfence-#%zd):\n", object_index);
+ break;
+ case KFENCE_ERROR_INVALID:
+ pr_err("BUG: KFENCE: invalid access in %pS\n\n", (void *)stack_entries[skipnr]);
+ pr_err("Invalid access at 0x" PTR_FMT ":\n", (void *)address);
+ break;
+ case KFENCE_ERROR_INVALID_FREE:
+ pr_err("BUG: KFENCE: invalid free in %pS\n\n", (void *)stack_entries[skipnr]);
+ pr_err("Invalid free of 0x" PTR_FMT " (in kfence-#%zd):\n", (void *)address,
+ object_index);
+ break;
+ }
+
+ /* Print stack trace and object info. */
+ stack_trace_print(stack_entries + skipnr, num_stack_entries - skipnr, 0);
+
+ if (meta) {
+ pr_err("\n");
+ kfence_print_object(NULL, meta);
+ }
+
+ /* Print report footer. */
+ pr_err("\n");
+ dump_stack_print_info(KERN_ERR);
+ pr_err("==================================================================\n");
+
+ lockdep_on();
+
+ if (panic_on_warn)
+ panic("panic_on_warn set ...\n");
+
+ /* We encountered a memory unsafety error, taint the kernel! */
+ add_taint(TAINT_BAD_PAGE, LOCKDEP_STILL_OK);
+}
--
2.25.1
[PATCH kernel-4.19 1/2] ACPI / APEI: Add a notifier chain for unknown (vendor) CPER records
by Yang Yingliang 30 Sep '21
From: Shiju Jose <shiju.jose(a)huawei.com>
mainline inclusion
from mainline-v5.10-rc1
commit 9aa9cf3ee9451d08adafc03cef8e44c7ea3898e7
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4CMAR
CVE: NA
--------------------------------
CPER records describing a firmware-first error are identified by GUID.
The ghes driver currently logs but otherwise ignores any unknown CPER records.
This prevents describing errors that can't be represented by a standard
entry, which would otherwise allow a driver to recover from an error.
The UEFI spec calls these 'Non-standard Section Body' (N.2.3 of
version 2.8).
Add a notifier chain for these non-standard/vendor-records. Callers
must identify their type of records by GUID.
Record data is copied to memory from the ghes_estatus_pool to allow
us to keep it until after the notifier has run.
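For illustration, a driver interested in its own vendor records might hook the chain roughly like this; example_vendor_guid and the function names are placeholders, not part of this patch:

static int example_vendor_notify(struct notifier_block *nb,
				 unsigned long severity, void *data)
{
	struct acpi_hest_generic_data *gdata = data;

	/* Callers must identify their type of records by GUID. */
	if (!guid_equal((guid_t *)gdata->section_type, &example_vendor_guid))
		return NOTIFY_DONE;

	/* ... decode and recover from the vendor-specific error ... */
	return NOTIFY_OK;
}

static struct notifier_block example_vendor_nb = {
	.notifier_call = example_vendor_notify,
};

/* e.g. from the driver's probe(): */
ghes_register_vendor_record_notifier(&example_vendor_nb);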
Co-developed-by: James Morse <james.morse(a)arm.com>
Link: https://lore.kernel.org/r/20200903123456.1823-2-shiju.jose@huawei.com
Signed-off-by: James Morse <james.morse(a)arm.com>
Signed-off-by: Shiju Jose <shiju.jose(a)huawei.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi(a)arm.com>
Acked-by: "Rafael J. Wysocki" <rjw(a)rjwysocki.net>
Signed-off-by: Weilong Chen <chenweilong(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/acpi/apei/ghes.c | 64 ++++++++++++++++++++++++++++++++++++++++
include/acpi/ghes.h | 18 +++++++++++
2 files changed, 82 insertions(+)
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index e807e8f74d1e0..08658c433b1fc 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -84,6 +84,12 @@
((struct acpi_hest_generic_status *) \
((struct ghes_estatus_node *)(estatus_node) + 1))
+#define GHES_VENDOR_ENTRY_LEN(gdata_len) \
+ (sizeof(struct ghes_vendor_record_entry) + (gdata_len))
+#define GHES_GDATA_FROM_VENDOR_ENTRY(vendor_entry) \
+ ((struct acpi_hest_generic_data *) \
+ ((struct ghes_vendor_record_entry *)(vendor_entry) + 1))
+
static inline bool is_hest_type_generic_v2(struct ghes *ghes)
{
return ghes->generic->header.type == ACPI_HEST_TYPE_GENERIC_ERROR_V2;
@@ -126,6 +132,12 @@ EXPORT_SYMBOL(ghes_ts_err_chain);
static DEFINE_RAW_SPINLOCK(ghes_ioremap_lock_nmi);
static DEFINE_SPINLOCK(ghes_ioremap_lock_irq);
+struct ghes_vendor_record_entry {
+ struct work_struct work;
+ int error_severity;
+ char vendor_record[];
+};
+
static struct gen_pool *ghes_estatus_pool;
static unsigned long ghes_estatus_pool_size_request;
@@ -464,6 +476,57 @@ static void ghes_handle_aer(struct acpi_hest_generic_data *gdata)
#endif
}
+static BLOCKING_NOTIFIER_HEAD(vendor_record_notify_list);
+
+int ghes_register_vendor_record_notifier(struct notifier_block *nb)
+{
+ return blocking_notifier_chain_register(&vendor_record_notify_list, nb);
+}
+EXPORT_SYMBOL_GPL(ghes_register_vendor_record_notifier);
+
+void ghes_unregister_vendor_record_notifier(struct notifier_block *nb)
+{
+ blocking_notifier_chain_unregister(&vendor_record_notify_list, nb);
+}
+EXPORT_SYMBOL_GPL(ghes_unregister_vendor_record_notifier);
+
+static void ghes_vendor_record_work_func(struct work_struct *work)
+{
+ struct ghes_vendor_record_entry *entry;
+ struct acpi_hest_generic_data *gdata;
+ u32 len;
+
+ entry = container_of(work, struct ghes_vendor_record_entry, work);
+ gdata = GHES_GDATA_FROM_VENDOR_ENTRY(entry);
+
+ blocking_notifier_call_chain(&vendor_record_notify_list,
+ entry->error_severity, gdata);
+
+ len = GHES_VENDOR_ENTRY_LEN(acpi_hest_get_record_size(gdata));
+ gen_pool_free(ghes_estatus_pool, (unsigned long)entry, len);
+}
+
+static void ghes_defer_non_standard_event(struct acpi_hest_generic_data *gdata,
+ int sev)
+{
+ struct acpi_hest_generic_data *copied_gdata;
+ struct ghes_vendor_record_entry *entry;
+ u32 len;
+
+ len = GHES_VENDOR_ENTRY_LEN(acpi_hest_get_record_size(gdata));
+ entry = (void *)gen_pool_alloc(ghes_estatus_pool, len);
+ if (!entry)
+ return;
+
+ copied_gdata = GHES_GDATA_FROM_VENDOR_ENTRY(entry);
+ memcpy(copied_gdata, gdata, acpi_hest_get_record_size(gdata));
+ entry->error_severity = sev;
+
+ INIT_WORK(&entry->work, ghes_vendor_record_work_func);
+ schedule_work(&entry->work);
+}
+
+
void __weak ghes_arm_process_error(struct ghes *ghes,
struct cper_sec_proc_arm *err, int sec_sev)
{
@@ -517,6 +580,7 @@ static void ghes_do_proc(struct ghes *ghes,
} else {
void *err = acpi_hest_get_payload(gdata);
+ ghes_defer_non_standard_event(gdata, sev);
log_non_standard_event(sec_type, fru_id, fru_text,
sec_sev, err,
gdata->error_data_length);
diff --git a/include/acpi/ghes.h b/include/acpi/ghes.h
index 9aaeaaa3d1a7f..0e96c78f9f180 100644
--- a/include/acpi/ghes.h
+++ b/include/acpi/ghes.h
@@ -52,6 +52,24 @@ enum {
GHES_SEV_PANIC = 0x3,
};
+#ifdef CONFIG_ACPI_APEI_GHES
+/**
+ * ghes_register_vendor_record_notifier - register a notifier for vendor
+ * records that the kernel would otherwise ignore.
+ * @nb: pointer to the notifier_block structure of the event handler.
+ *
+ * return 0 : SUCCESS, non-zero : FAIL
+ */
+int ghes_register_vendor_record_notifier(struct notifier_block *nb);
+
+/**
+ * ghes_unregister_vendor_record_notifier - unregister the previously
+ * registered vendor record notifier.
+ * @nb: pointer to the notifier_block structure of the vendor record handler.
+ */
+void ghes_unregister_vendor_record_notifier(struct notifier_block *nb);
+#endif
+
/* From drivers/edac/ghes_edac.c */
#ifdef CONFIG_EDAC_GHES
--
2.25.1
[PATCH kernel-4.19] blk-mq-sched: Fix blk_mq_sched_alloc_tags() error handling
by Yang Yingliang 30 Sep '21
From: John Garry <john.garry(a)huawei.com>
mainline inclusion
from mainline-v5.14-rc1
commit b93af3055d6f32d3b0361cfdb110c9399c1241ba
category: bugfix
bugzilla: 177012
CVE: NA
---------------------------
If the blk_mq_sched_alloc_tags() -> blk_mq_alloc_rqs() call fails, then we
call blk_mq_sched_free_tags() -> blk_mq_free_rqs().
It is incorrect to do so, as any rqs would have already been freed in the
blk_mq_alloc_rqs() call.
Fix by calling blk_mq_free_rq_map() directly instead.
Fixes: 6917ff0b5bd41 ("blk-mq-sched: refactor scheduler initialization")
Signed-off-by: John Garry <john.garry(a)huawei.com>
Reviewed-by: Ming Lei <ming.lei(a)redhat.com>
Link: https://lore.kernel.org/r/1627378373-148090-1-git-send-email-john.garry@hua…
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
conflicts:
block/blk-mq-sched.c
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
block/blk-mq-sched.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index ce4b2ac6d6977..3521eca1b2984 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -513,8 +513,10 @@ static int blk_mq_sched_alloc_tags(struct request_queue *q,
return -ENOMEM;
ret = blk_mq_alloc_rqs(set, hctx->sched_tags, hctx_idx, q->nr_requests);
- if (ret)
- blk_mq_sched_free_tags(set, hctx, hctx_idx);
+ if (ret) {
+ blk_mq_free_rq_map(hctx->sched_tags);
+ hctx->sched_tags = NULL;
+ }
return ret;
}
--
2.25.1
[PATCH kernel-4.19 1/3] jbd2: Drop unnecessary branch from jbd2_journal_forget()
by Yang Yingliang 30 Sep '21
From: Jan Kara <jack(a)suse.cz>
mainline inclusion
from mainline-5.5-rc1
commit 6d69843e5d3f0c394e1e3004cc2b36efbe402b71
category: bugfix
bugzilla: 176007
CVE: NA
---------------------------
We have cleared both dirty & jbddirty bits from the bh. So there's no
difference between bforget() and brelse(). Thus there's no point jumping
to no_jbd branch.
Signed-off-by: Jan Kara <jack(a)suse.cz>
Link: https://lore.kernel.org/r/20190809124233.13277-5-jack@suse.cz
Signed-off-by: Theodore Ts'o <tytso(a)mit.edu>
Conflicts:
fs/jbd2/transaction.c
Signed-off-by: yangerkun <yangerkun(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/jbd2/transaction.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
index 4055929a043cf..c48658a4d53c0 100644
--- a/fs/jbd2/transaction.c
+++ b/fs/jbd2/transaction.c
@@ -1639,10 +1639,6 @@ int jbd2_journal_forget (handle_t *handle, struct buffer_head *bh)
__jbd2_journal_file_buffer(jh, transaction, BJ_Forget);
} else {
__jbd2_journal_unfile_buffer(jh);
- if (!buffer_jbd(bh)) {
- spin_unlock(&journal->j_list_lock);
- goto not_jbd;
- }
}
spin_unlock(&journal->j_list_lock);
} else if (jh->b_transaction) {
--
2.25.1
30 Sep '21
From: Longfang Liu <liulongfang(a)huawei.com>
mainline inclusion
from mainline-v5.11-rc5
commit 643a4df7fe3f6831d14536fd692be85f92670a52
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4CDK3?from=project-issue
CVE: NA
----------------------------------------
On systems that use Synopsys USB host controllers, suspending while a
USB audio player is in use causes the controller to keep sending
interrupt signals to the system. When the number of interrupts exceeds
100,000, the system forcibly disables the interrupt and prints a
calltrace error.
When the system goes to suspend, the last interrupt is reported to the
driver, but the system state has already been set to suspend. As a
result the last interrupt is never processed and its interrupt flag is
not cleared. The uncleared flag keeps triggering new interrupt events,
so the driver receives more than 100,000 interrupts, which makes the
system forcibly disable interrupt reporting and print the calltrace
error.
So, when the driver goes to sleep and changes the system state to
suspend, the interrupt flag needs to be cleared.
Signed-off-by: Longfang Liu <liulongfang(a)huawei.com>
Acked-by: Alan Stern <stern(a)rowland.harvard.edu>
Link: https://lore.kernel.org/r/1610416647-45774-1-git-send-email-liulongfang@hua…
Cc: stable <stable(a)vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Longfang Liu <liulongfang(a)huawei.com>
Acked-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/usb/host/ehci-hub.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/usb/host/ehci-hub.c b/drivers/usb/host/ehci-hub.c
index ce0eaf7d7c12a..a99c1ac5d8c8b 100644
--- a/drivers/usb/host/ehci-hub.c
+++ b/drivers/usb/host/ehci-hub.c
@@ -346,8 +346,12 @@ static int ehci_bus_suspend (struct usb_hcd *hcd)
unlink_empty_async_suspended(ehci);
+ /* Some Synopsys controllers mistakenly leave IAA turned on */
+ ehci_writel(ehci, STS_IAA, &ehci->regs->status);
+
/* Any IAA cycle that started before the suspend is now invalid */
end_iaa_cycle(ehci);
+
ehci_handle_start_intr_unlinks(ehci);
ehci_handle_intr_unlinks(ehci);
end_free_itds(ehci);
--
2.25.1
30 Sep '21
From: Valentin Schneider <valentin.schneider(a)arm.com>
mainline inclusion
from mainline-v5.11-rc1
commit b5b217346de85ed1b03fdecd5c5076b34fbb2f0b
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4CAA9
CVE: NA
----------------------------------------------------------
NUMA topologies where the shortest path between some two nodes requires
three or more hops (i.e. diameter > 2) end up being misrepresented in the
scheduler topology structures.
This is currently detected when booting a kernel with CONFIG_SCHED_DEBUG=y
+ sched_debug on the cmdline, although this will only yield a warning about
sched_group spans not matching sched_domain spans:
ERROR: groups don't span domain->span
Add an explicit warning for that case, triggered regardless of
CONFIG_SCHED_DEBUG, and decorate it with an appropriate comment.
The topology described in the comment can be booted up on QEMU by appending
the following to your usual QEMU incantation:
-smp cores=4 \
-numa node,cpus=0,nodeid=0 -numa node,cpus=1,nodeid=1, \
-numa node,cpus=2,nodeid=2, -numa node,cpus=3,nodeid=3, \
-numa dist,src=0,dst=1,val=20, -numa dist,src=0,dst=2,val=30, \
-numa dist,src=0,dst=3,val=40, -numa dist,src=1,dst=2,val=20, \
-numa dist,src=1,dst=3,val=30, -numa dist,src=2,dst=3,val=20
A somewhat more realistic topology (6-node mesh) with the same affliction
can be conjured with:
-smp cores=6 \
-numa node,cpus=0,nodeid=0 -numa node,cpus=1,nodeid=1, \
-numa node,cpus=2,nodeid=2, -numa node,cpus=3,nodeid=3, \
-numa node,cpus=4,nodeid=4, -numa node,cpus=5,nodeid=5, \
-numa dist,src=0,dst=1,val=20, -numa dist,src=0,dst=2,val=30, \
-numa dist,src=0,dst=3,val=40, -numa dist,src=0,dst=4,val=30, \
-numa dist,src=0,dst=5,val=20, \
-numa dist,src=1,dst=2,val=20, -numa dist,src=1,dst=3,val=30, \
-numa dist,src=1,dst=4,val=20, -numa dist,src=1,dst=5,val=30, \
-numa dist,src=2,dst=3,val=20, -numa dist,src=2,dst=4,val=30, \
-numa dist,src=2,dst=5,val=40, \
-numa dist,src=3,dst=4,val=20, -numa dist,src=3,dst=5,val=30, \
-numa dist,src=4,dst=5,val=20
Signed-off-by: Valentin Schneider <valentin.schneider(a)arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Acked-by: Mel Gorman <mgorman(a)techsingularity.net>
Link: https://lore.kernel.org/lkml/jhjtux5edo2.mognet@arm.com
Signed-off-by: Cheng Jian <cj.chengjian(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/sched/topology.c | 33 +++++++++++++++++++++++++++++++++
1 file changed, 33 insertions(+)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 0002b269ed3da..cd2005832d59c 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -461,6 +461,7 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
{
struct rq *rq = cpu_rq(cpu);
struct sched_domain *tmp;
+ int numa_distance = 0;
/* Remove the sched domains which do not contribute to scheduling. */
for (tmp = sd; tmp; ) {
@@ -492,6 +493,38 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
sd->child = NULL;
}
+ for (tmp = sd; tmp; tmp = tmp->parent)
+ numa_distance += !!(tmp->flags & SD_NUMA);
+
+ /*
+ * FIXME: Diameter >=3 is misrepresented.
+ *
+ * Smallest diameter=3 topology is:
+ *
+ * node 0 1 2 3
+ * 0: 10 20 30 40
+ * 1: 20 10 20 30
+ * 2: 30 20 10 20
+ * 3: 40 30 20 10
+ *
+ * 0 --- 1 --- 2 --- 3
+ *
+ * NUMA-3 0-3 N/A N/A 0-3
+ * groups: {0-2},{1-3} {1-3},{0-2}
+ *
+ * NUMA-2 0-2 0-3 0-3 1-3
+ * groups: {0-1},{1-3} {0-2},{2-3} {1-3},{0-1} {2-3},{0-2}
+ *
+ * NUMA-1 0-1 0-2 1-3 2-3
+ * groups: {0},{1} {1},{2},{0} {2},{3},{1} {3},{2}
+ *
+ * NUMA-0 0 1 2 3
+ *
+ * The NUMA-2 groups for nodes 0 and 3 are obviously buggered, as the
+ * group span isn't a subset of the domain span.
+ */
+ WARN_ONCE(numa_distance > 2, "Shortest NUMA path spans too many nodes\n");
+
sched_domain_debug(sd, cpu);
rq_attach_root(rq, rd);
--
2.25.1

[PATCH kernel-4.19] ipc: replace costly bailout check in sysvipc_find_ipc()
by Yang Yingliang 30 Sep '21
From: Rafael Aquini <aquini(a)redhat.com>
mainline inclusion
from mainline-v5.15
commit 20401d1058f3f841f35a594ac2fc1293710e55b9
category: bugfix
bugzilla: NA
CVE: CVE-2021-3669
--------------------------------
sysvipc_find_ipc() was left with a costly way to check if the offset
position fed to it is bigger than the total number of IPC IDs in use. So
much so that the time it takes to iterate over /proc/sysvipc/* files grows
exponentially for a custom benchmark that creates "N" SYSV shm segments
and then times the read of /proc/sysvipc/shm (milliseconds):
12 msecs to read 1024 segs from /proc/sysvipc/shm
18 msecs to read 2048 segs from /proc/sysvipc/shm
65 msecs to read 4096 segs from /proc/sysvipc/shm
325 msecs to read 8192 segs from /proc/sysvipc/shm
1303 msecs to read 16384 segs from /proc/sysvipc/shm
5182 msecs to read 32768 segs from /proc/sysvipc/shm
The root problem lies with the loop that computes the total amount of ids
in use to check if the "pos" fed to sysvipc_find_ipc() grew bigger than
"ids->in_use". That is a quite inefficient way to get to the maximum
index in the id lookup table, especially when that value is already
provided by struct ipc_ids.max_idx.
This patch follows up on the optimization introduced via commit
15df03c879836 ("sysvipc: make get_maxid O(1) again") and gets rid of the
aforementioned costly loop replacing it by a simpler checkpoint based on
ipc_get_maxidx() returned value, which allows for a smooth linear increase
in time complexity for the same custom benchmark:
2 msecs to read 1024 segs from /proc/sysvipc/shm
2 msecs to read 2048 segs from /proc/sysvipc/shm
4 msecs to read 4096 segs from /proc/sysvipc/shm
9 msecs to read 8192 segs from /proc/sysvipc/shm
19 msecs to read 16384 segs from /proc/sysvipc/shm
39 msecs to read 32768 segs from /proc/sysvipc/shm
Link: https://lkml.kernel.org/r/20210809203554.1562989-1-aquini@redhat.com
Signed-off-by: Rafael Aquini <aquini(a)redhat.com>
Acked-by: Davidlohr Bueso <dbueso(a)suse.de>
Acked-by: Manfred Spraul <manfred(a)colorfullife.com>
Cc: Waiman Long <llong(a)redhat.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Conflicts:
ipc/util.c
Signed-off-by: zhiwentao <zhiwentao(a)huawei.com>
Reviewed-by: Wang Hui <john.wanghui(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
ipc/util.c | 17 +++++------------
1 file changed, 5 insertions(+), 12 deletions(-)
diff --git a/ipc/util.c b/ipc/util.c
index af1b572effb14..53204b1d2e207 100644
--- a/ipc/util.c
+++ b/ipc/util.c
@@ -725,21 +725,14 @@ struct pid_namespace *ipc_seq_pid_ns(struct seq_file *s)
static struct kern_ipc_perm *sysvipc_find_ipc(struct ipc_ids *ids, loff_t pos,
loff_t *new_pos)
{
- struct kern_ipc_perm *ipc;
- int total, id;
-
- total = 0;
- for (id = 0; id < pos && total < ids->in_use; id++) {
- ipc = idr_find(&ids->ipcs_idr, id);
- if (ipc != NULL)
- total++;
- }
- ipc = NULL;
- if (total >= ids->in_use)
+ struct kern_ipc_perm *ipc = NULL;
+ int max_idx = ipc_get_maxidx(ids);
+
+ if (max_idx == -1 || pos > max_idx)
goto out;
- for (; pos < IPCMNI; pos++) {
+ for (; pos <= max_idx; pos++) {
ipc = idr_find(&ids->ipcs_idr, pos);
if (ipc != NULL) {
rcu_read_lock();
--
2.25.1
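For reference, below is a minimal userspace sketch of the kind of benchmark
described in the commit message: it creates N SysV shm segments and times one
full read of /proc/sysvipc/shm. It is only an illustration (the segment count,
buffer size and error handling are simplified assumptions, and SHMMNI must be
large enough for N segments):

/* Hypothetical benchmark sketch: time reading /proc/sysvipc/shm with N segments. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(int argc, char **argv)
{
        int n = argc > 1 ? atoi(argv[1]) : 1024;
        int *ids = calloc(n, sizeof(*ids));
        char buf[4096];
        struct timespec t0, t1;
        FILE *f;
        int i;

        for (i = 0; i < n; i++)
                ids[i] = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        f = fopen("/proc/sysvipc/shm", "r");
        if (!f)
                return 1;
        while (fread(buf, 1, sizeof(buf), f) > 0)
                ;       /* just consume the whole file */
        fclose(f);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("%ld msecs to read %d segs from /proc/sysvipc/shm\n",
               (long)((t1.tv_sec - t0.tv_sec) * 1000 +
                      (t1.tv_nsec - t0.tv_nsec) / 1000000), n);

        for (i = 0; i < n; i++)
                shmctl(ids[i], IPC_RMID, NULL);
        free(ids);
        return 0;
}

On a kernel without the fix the reported times should blow up roughly as in
the first table above; with the fix they should stay close to linear.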

[PATCH kernel-4.19 1/4] net: hns3: fix memory override when bd_num is bigger than port info size
by Yang Yingliang 30 Sep '21
From: Yonglong Liu <liuyonglong(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
----------------------------
The bd_num comes from firmware; it may be bigger than the size of
struct hclge_port_info, which can cause a memory overwrite problem.
Signed-off-by: Yonglong Liu <liuyonglong(a)huawei.com>
Reviewed-by: Jian Shen <shenjian15(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
.../net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_port.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_port.c b/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_port.c
index 5404048ad60d1..ebbc25cc86940 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_port.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_cae/hns3_cae_port.c
@@ -88,9 +88,9 @@ int hns3_get_port_info(const struct hns3_nic_priv *net_priv,
desc_data = (__le32 *)(&desc.data[0]);
bd_num = le32_to_cpu(*desc_data);
- if (bd_num > hdev->hw.cmq.csq.desc_num) {
- dev_err(&hdev->pdev->dev, "get invalid BD num %u(max %u)\n",
- bd_num, hdev->hw.cmq.csq.desc_num);
+ if (bd_num * sizeof(struct hclge_desc) >
+ sizeof(struct hclge_port_info)) {
+ dev_err(&hdev->pdev->dev, "get invalid BD num %u\n", bd_num);
return -EINVAL;
}
--
2.25.1

[PATCH kernel-4.19] scsi: hisi_sas: Optimize the code flow of setting sense data when ssp I/O abnormally completed
by Yang Yingliang 30 Sep '21
From: yangxingui <yangxingui(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA
---------------------------
In the data underflow scenario, if a correct response frame and sense data
have been written to host memory and the CQ RSPNS_GOOD bit is 0, then the
driver sends the sense data to the upper layer.
Signed-off-by: yangxingui <yangxingui(a)huawei.com>
Reviewed-by: Xiang Chen <chenxiang66(a)hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/scsi/hisi_sas/hisi_sas.h | 8 +++++--
drivers/scsi/hisi_sas/hisi_sas_main.c | 17 -------------
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c | 33 ++++++++++++++++++++++----
3 files changed, 34 insertions(+), 24 deletions(-)
diff --git a/drivers/scsi/hisi_sas/hisi_sas.h b/drivers/scsi/hisi_sas/hisi_sas.h
index 193fc960d87fd..742ffcaeaa95c 100644
--- a/drivers/scsi/hisi_sas/hisi_sas.h
+++ b/drivers/scsi/hisi_sas/hisi_sas.h
@@ -102,6 +102,12 @@ enum hisi_sas_dev_type {
HISI_SAS_DEV_TYPE_SATA,
};
+enum datapres_field {
+ NO_DATA = 0,
+ RESPONSE_DATA = 1,
+ SENSE_DATA = 2,
+};
+
struct hisi_sas_hw_error {
u32 irq_msk;
u32 msk;
@@ -571,8 +577,6 @@ extern void hisi_sas_free(struct hisi_hba *hisi_hba);
extern u8 hisi_sas_get_ata_protocol(struct host_to_dev_fis *fis,
int direction);
extern struct hisi_sas_port *to_hisi_sas_port(struct asd_sas_port *sas_port);
-extern void hisi_sas_set_sense_data(struct sas_task *task,
- struct hisi_sas_slot *slot);
extern void hisi_sas_sata_done(struct sas_task *task,
struct hisi_sas_slot *slot);
extern int hisi_sas_get_fw_info(struct hisi_hba *hisi_hba);
diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
index ea08d53c11495..67befcc033126 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
@@ -105,23 +105,6 @@ u8 hisi_sas_get_ata_protocol(struct host_to_dev_fis *fis, int direction)
}
EXPORT_SYMBOL_GPL(hisi_sas_get_ata_protocol);
-void hisi_sas_set_sense_data(struct sas_task *task,
- struct hisi_sas_slot *slot)
-{
- struct ssp_response_iu *iu =
- hisi_sas_status_buf_addr_mem(slot) +
- sizeof(struct hisi_sas_err_record);
- if (iu->datapres == 2) {
- struct task_status_struct *ts = &task->task_status;
-
- ts->buf_valid_size =
- min_t(int, SAS_STATUS_BUF_SIZE,
- be32_to_cpu(iu->sense_data_len));
- memcpy(ts->buf, iu->sense_data, ts->buf_valid_size);
- }
-}
-EXPORT_SYMBOL_GPL(hisi_sas_set_sense_data);
-
void hisi_sas_sata_done(struct sas_task *task,
struct hisi_sas_slot *slot)
{
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
index 06b4f2db62f7b..9ce1177a8e455 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
@@ -395,6 +395,8 @@
#define CMPLT_HDR_ERROR_PHASE_MSK (0xff << CMPLT_HDR_ERROR_PHASE_OFF)
#define CMPLT_HDR_RSPNS_XFRD_OFF 10
#define CMPLT_HDR_RSPNS_XFRD_MSK (0x1 << CMPLT_HDR_RSPNS_XFRD_OFF)
+#define CMPLT_HDR_RSPNS_GOOD_OFF 11
+#define CMPLT_HDR_RSPNS_GOOD_MSK (0x1 << CMPLT_HDR_RSPNS_GOOD_OFF)
#define CMPLT_HDR_ERX_OFF 12
#define CMPLT_HDR_ERX_MSK (0x1 << CMPLT_HDR_ERX_OFF)
#define CMPLT_HDR_ABORT_STAT_OFF 13
@@ -2208,6 +2210,24 @@ static irqreturn_t fatal_axi_int_v3_hw(int irq_no, void *p)
return IRQ_HANDLED;
}
+static void hisi_sas_set_sense_data(struct sas_task *task,
+ struct hisi_sas_slot *slot)
+{
+ struct ssp_response_iu *iu =
+ hisi_sas_status_buf_addr_mem(slot) +
+ sizeof(struct hisi_sas_err_record);
+ if ((iu->status == SAM_STAT_CHECK_CONDITION) &&
+ (iu->datapres == SENSE_DATA)) {
+ struct task_status_struct *ts = &task->task_status;
+
+ ts->buf_valid_size =
+ min_t(int, SAS_STATUS_BUF_SIZE,
+ be32_to_cpu(iu->sense_data_len));
+ memcpy(ts->buf, iu->sense_data, ts->buf_valid_size);
+ ts->stat = SAM_STAT_CHECK_CONDITION;
+ }
+}
+
static void
slot_err_v3_hw(struct hisi_hba *hisi_hba, struct sas_task *task,
struct hisi_sas_slot *slot)
@@ -2224,17 +2244,20 @@ slot_err_v3_hw(struct hisi_hba *hisi_hba, struct sas_task *task,
switch (task->task_proto) {
case SAS_PROTOCOL_SSP:
- if (complete_hdr->dw3 & CMPLT_HDR_IO_IN_TARGET_MSK) {
- ts->stat = SAS_QUEUE_FULL;
- slot->abort = 1;
- } else if (dma_rx_err_type & RX_DATA_LEN_UNDERFLOW_MSK) {
+ if (dma_rx_err_type & RX_DATA_LEN_UNDERFLOW_MSK) {
ts->residual = trans_tx_fail_type;
ts->stat = SAS_DATA_UNDERRUN;
+ if ((!(complete_hdr->dw0 & CMPLT_HDR_RSPNS_GOOD_MSK)) &&
+ (complete_hdr->dw0 & CMPLT_HDR_RSPNS_XFRD_MSK)) {
+ hisi_sas_set_sense_data(task, slot);
+ }
+ } else if (complete_hdr->dw3 & CMPLT_HDR_IO_IN_TARGET_MSK) {
+ ts->stat = SAS_QUEUE_FULL;
+ slot->abort = 1;
} else {
ts->stat = SAS_OPEN_REJECT;
ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
}
- hisi_sas_set_sense_data(task, slot);
break;
case SAS_PROTOCOL_SATA:
case SAS_PROTOCOL_STP:
--
2.25.1

29 Sep '21
Hi Community:
How can I get the kernel config values in EulerOS?
For example, in other releases, I can do that with "zcat /proc/config.gz".
B.R.
The openEuler Summit is the developer summit hosted by the openEuler open source community. The first summit after the full upgrade will be held in Beijing on November 10.
openEuler has been upgraded from a server operating system to an operating system for digital infrastructure, supporting the full range of digital infrastructure scenarios such as IT, CT and OT,
and covering servers, cloud, edge, embedded and other form factors.
With the release of openEuler 21.09, openEuler now covers four major application scenarios: server, cloud native, edge computing and embedded.
Through openness and open source, openEuler keeps exploring the boundaries of technological innovation. Developers, users, community contributors and software enthusiasts gather at the openEuler Summit
to walk through the latest openEuler release, discuss the future technical roadmap, and let technology, ecosystem and business spark off each other.
*This summit aims to promote openEuler's continued exploration and innovation in technical directions such as diversified computing, cloud computing, edge computing and embedded systems.*
Open source is an attitude and sharing is a spirit. Call for Speaker, Call for Sponsor and Call for Demo are now fully open for submissions.
Summit registration is open at the same time.
*We sincerely invite you to submit talk proposals, give joint talks, become a co-building organization, contribute demo showcases, and take part in community building.*
- *Summit website:* https://www.openeuler.org/zh/interaction/summit-list/summit2021/
- *Call for Speaker:* https://shimowendang.com/forms/X6X9jj9KPcdQwVr8/fill
- *Call for Sponsor:* https://shimowendang.com/forms/k76zTLKvumYwRdsP/fill
- *Call for Demo:* https://shimowendang.com/forms/LjHs8JlsLSsW92kl/fill
From: Hongyu Li <543306408(a)qq.com>
openEuler inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler-competition/summer-2021/issues/I3EBT6
CVE: NA
----------------------------------------------------------------------
The /proc/idle file gives higher precision when accounting CPU idle
time compared with the traditional /proc/stat approach.
Signed-off-by: Hongyu Li <543306408(a)qq.com>
---
fs/proc/Kconfig | 7 ++++
fs/proc/Makefile | 1 +
fs/proc/proc_idle.c | 82 ++++++++++++++++++++++++++++++++++++++++++
kernel/sched/cputime.c | 16 +++++++++
kernel/sched/idle.c | 21 +++++++++++
5 files changed, 127 insertions(+)
create mode 100644 fs/proc/proc_idle.c
diff --git a/fs/proc/Kconfig b/fs/proc/Kconfig
index c930001056f9..46620afe9cac 100644
--- a/fs/proc/Kconfig
+++ b/fs/proc/Kconfig
@@ -107,3 +107,10 @@ config PROC_PID_ARCH_STATUS
config PROC_CPU_RESCTRL
def_bool n
depends on PROC_FS
+
+config PROC_IDLE
+ bool "include /proc/idle file"
+ depends on PROC_FS
+ default y
+ help
+ Provide the CPU idle time in the /proc/idle file.
diff --git a/fs/proc/Makefile b/fs/proc/Makefile
index 8704d41dd67c..69dd2da3a080 100644
--- a/fs/proc/Makefile
+++ b/fs/proc/Makefile
@@ -34,5 +34,6 @@ proc-$(CONFIG_PROC_VMCORE) += vmcore.o
proc-$(CONFIG_PRINTK) += kmsg.o
proc-$(CONFIG_PROC_PAGE_MONITOR) += page.o
proc-$(CONFIG_BOOT_CONFIG) += bootconfig.o
+proc-$(CONFIG_PROC_IDLE) += proc_idle.o
obj-$(CONFIG_ETMEM_SCAN) += etmem_scan.o
obj-$(CONFIG_ETMEM_SWAP) += etmem_swap.o
diff --git a/fs/proc/proc_idle.c b/fs/proc/proc_idle.c
new file mode 100644
index 000000000000..bbb52e247448
--- /dev/null
+++ b/fs/proc/proc_idle.c
@@ -0,0 +1,82 @@
+#include <linux/cpumask.h>
+#include <linux/device.h>
+#include <linux/fs.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/kernel_stat.h>
+#include <linux/module.h>
+#include <linux/proc_fs.h>
+#include <linux/sched.h>
+#include <linux/sched/stat.h>
+#include <linux/seq_file.h>
+#include <linux/slab.h>
+#include <linux/time.h>
+#include <linux/irqnr.h>
+#include <linux/sched/cputime.h>
+#include <linux/tick.h>
+
+#ifdef CONFIG_PROC_IDLE
+
+#define PROC_NAME "idle"
+
+extern u64 cpu_rq_get(int cpu);
+
+static u64 get_idle_sum_exec_runtime(int cpu)
+{
+ u64 idle = cpu_rq_get(cpu);
+
+ return idle;
+}
+
+static int show_idle(struct seq_file *p, void *v)
+{
+ int i;
+ u64 idle;
+
+ idle = 0;
+
+ for_each_possible_cpu(i) {
+
+ idle += get_idle_sum_exec_runtime(i);
+
+ }
+
+ seq_put_decimal_ull(p, "cpu ", nsec_to_clock_t(idle));
+ seq_putc(p, '\n');
+
+ for_each_online_cpu(i) {
+
+ idle = get_idle_sum_exec_runtime(i);
+
+ seq_printf(p, "cpu%d", i);
+ seq_put_decimal_ull(p, " ", nsec_to_clock_t(idle));
+ seq_putc(p, '\n');
+ }
+
+ return 0;
+}
+
+static int idle_open(struct inode *inode, struct file *file)
+{
+ unsigned int size = 1024 + 128 * num_online_cpus();
+
+ return single_open_size(file, show_idle, NULL, size);
+}
+
+static struct proc_ops idle_procs_ops = {
+ .proc_open = idle_open,
+ .proc_read_iter = seq_read_iter,
+ .proc_lseek = seq_lseek,
+ .proc_release = single_release,
+};
+
+static int __init kernel_module_init(void)
+{
+ proc_create(PROC_NAME, 0, NULL, &idle_procs_ops);
+ return 0;
+}
+
+fs_initcall(kernel_module_init);
+
+#endif /*CONFIG_PROC_IDLE*/
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 5a55d2300452..bb280852bb06 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -1078,3 +1078,19 @@ void kcpustat_cpu_fetch(struct kernel_cpustat *dst, int cpu)
EXPORT_SYMBOL_GPL(kcpustat_cpu_fetch);
#endif /* CONFIG_VIRT_CPU_ACCOUNTING_GEN */
+
+
+#ifdef CONFIG_PROC_IDLE
+
+
+u64 cpu_rq_get(int cpu)
+{
+ struct rq *rq = cpu_rq(cpu);
+ struct sched_entity *idle_se = &rq->idle->se;
+ u64 idle = idle_se->sum_exec_runtime;
+
+ return idle;
+}
+EXPORT_SYMBOL_GPL(cpu_rq_get);
+
+#endif /* CONFIG_PROC_IDLE */
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 36b545f17206..e3fb940a61e8 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -424,6 +424,20 @@ static void check_preempt_curr_idle(struct rq *rq, struct task_struct *p, int fl
static void put_prev_task_idle(struct rq *rq, struct task_struct *prev)
{
+#ifdef CONFIG_PROC_IDLE
+ struct sched_entity *idle_se = &rq->idle->se;
+ u64 now = sched_clock();
+ u64 delta_exec;
+
+ delta_exec = now - idle_se->exec_start;
+ if (unlikely((s64)delta_exec <= 0))
+ return;
+
+ schedstat_set(idle_se->statistics.exec_max,
+ max(delta_exec, idle_se->statistics.exec_max));
+
+ idle_se->sum_exec_runtime += delta_exec;
+#endif
}
static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
@@ -436,6 +450,13 @@ struct task_struct *pick_next_task_idle(struct rq *rq)
{
struct task_struct *next = rq->idle;
+#ifdef CONFIG_PROC_IDLE
+ struct sched_entity *idle_se = &rq->idle->se;
+ u64 now = sched_clock();
+
+ idle_se->exec_start = now;
+#endif
+
set_next_task_idle(rq, next, true);
return next;
--
2.17.1
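If this patch is applied, the new file can be read like any other procfs file.
A minimal reader sketch is shown below; it is not part of the submission and
assumes the values are reported in USER_HZ clock ticks, as implied by the
nsec_to_clock_t() conversion in show_idle():

/* Illustrative reader for the proposed /proc/idle format: "cpu <ticks>" lines. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        FILE *f = fopen("/proc/idle", "r");
        char name[16];
        unsigned long long ticks;
        long hz = sysconf(_SC_CLK_TCK);

        if (!f) {
                perror("/proc/idle");
                return 1;
        }
        while (fscanf(f, "%15s %llu", name, &ticks) == 2)
                printf("%s idle: %.2f seconds\n", name, (double)ticks / hz);
        fclose(f);
        return 0;
}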

[PATCH kernel-4.19] Bluetooth: fix use-after-free error in lock_sock_nested()
by Yang Yingliang 29 Sep '21
From: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
mainline inclusion
from mainline-v5.16
commit 1bff51ea59a9afb67d2dd78518ab0582a54a472c
category: bugfix
bugzilla: NA
CVE: CVE-2021-3752
---------------------------
use-after-free error in lock_sock_nested is reported:
[ 179.140137][ T3731] =====================================================
[ 179.142675][ T3731] BUG: KMSAN: use-after-free in lock_sock_nested+0x280/0x2c0
[ 179.145494][ T3731] CPU: 4 PID: 3731 Comm: kworker/4:2 Not tainted 5.12.0-rc6+ #54
[ 179.148432][ T3731] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
[ 179.151806][ T3731] Workqueue: events l2cap_chan_timeout
[ 179.152730][ T3731] Call Trace:
[ 179.153301][ T3731] dump_stack+0x24c/0x2e0
[ 179.154063][ T3731] kmsan_report+0xfb/0x1e0
[ 179.154855][ T3731] __msan_warning+0x5c/0xa0
[ 179.155579][ T3731] lock_sock_nested+0x280/0x2c0
[ 179.156436][ T3731] ? kmsan_get_metadata+0x116/0x180
[ 179.157257][ T3731] l2cap_sock_teardown_cb+0xb8/0x890
[ 179.158154][ T3731] ? __msan_metadata_ptr_for_load_8+0x10/0x20
[ 179.159141][ T3731] ? kmsan_get_metadata+0x116/0x180
[ 179.159994][ T3731] ? kmsan_get_shadow_origin_ptr+0x84/0xb0
[ 179.160959][ T3731] ? l2cap_sock_recv_cb+0x420/0x420
[ 179.161834][ T3731] l2cap_chan_del+0x3e1/0x1d50
[ 179.162608][ T3731] ? kmsan_get_metadata+0x116/0x180
[ 179.163435][ T3731] ? kmsan_get_shadow_origin_ptr+0x84/0xb0
[ 179.164406][ T3731] l2cap_chan_close+0xeea/0x1050
[ 179.165189][ T3731] ? kmsan_internal_unpoison_shadow+0x42/0x70
[ 179.166180][ T3731] l2cap_chan_timeout+0x1da/0x590
[ 179.167066][ T3731] ? __msan_metadata_ptr_for_load_8+0x10/0x20
[ 179.168023][ T3731] ? l2cap_chan_create+0x560/0x560
[ 179.168818][ T3731] process_one_work+0x121d/0x1ff0
[ 179.169598][ T3731] worker_thread+0x121b/0x2370
[ 179.170346][ T3731] kthread+0x4ef/0x610
[ 179.171010][ T3731] ? process_one_work+0x1ff0/0x1ff0
[ 179.171828][ T3731] ? kthread_blkcg+0x110/0x110
[ 179.172587][ T3731] ret_from_fork+0x1f/0x30
[ 179.173348][ T3731]
[ 179.173752][ T3731] Uninit was created at:
[ 179.174409][ T3731] kmsan_internal_poison_shadow+0x5c/0xf0
[ 179.175373][ T3731] kmsan_slab_free+0x76/0xc0
[ 179.176060][ T3731] kfree+0x3a5/0x1180
[ 179.176664][ T3731] __sk_destruct+0x8af/0xb80
[ 179.177375][ T3731] __sk_free+0x812/0x8c0
[ 179.178032][ T3731] sk_free+0x97/0x130
[ 179.178686][ T3731] l2cap_sock_release+0x3d5/0x4d0
[ 179.179457][ T3731] sock_close+0x150/0x450
[ 179.180117][ T3731] __fput+0x6bd/0xf00
[ 179.180787][ T3731] ____fput+0x37/0x40
[ 179.181481][ T3731] task_work_run+0x140/0x280
[ 179.182219][ T3731] do_exit+0xe51/0x3e60
[ 179.182930][ T3731] do_group_exit+0x20e/0x450
[ 179.183656][ T3731] get_signal+0x2dfb/0x38f0
[ 179.184344][ T3731] arch_do_signal_or_restart+0xaa/0xe10
[ 179.185266][ T3731] exit_to_user_mode_prepare+0x2d2/0x560
[ 179.186136][ T3731] syscall_exit_to_user_mode+0x35/0x60
[ 179.186984][ T3731] do_syscall_64+0xc5/0x140
[ 179.187681][ T3731] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 179.188604][ T3731] =====================================================
In our case, there are two Thread A and B:
Context: Thread A: Context: Thread B:
l2cap_chan_timeout() __se_sys_shutdown()
l2cap_chan_close() l2cap_sock_shutdown()
l2cap_chan_del() l2cap_chan_close()
l2cap_sock_teardown_cb() l2cap_sock_teardown_cb()
Once l2cap_sock_teardown_cb() has executed, this sock is marked as SOCK_ZAPPED,
and can be treated as killable in l2cap_sock_kill() if sock_orphan() has
executed. At this time we close the sock through sock_close(), which ends up
calling l2cap_sock_kill(), like Thread C:
Context: Thread C:
sock_close()
l2cap_sock_release()
sock_orphan()
l2cap_sock_kill() #free sock if refcnt is 1
Once C has completed, if A or B reaches l2cap_sock_teardown_cb() again,
a use-after-free happens.
We should set chan->data to NULL when the sock is destructed, to indicate that
the teardown operation is no longer allowed in l2cap_sock_teardown_cb(), and we
should also avoid killing an already killed socket in l2cap_sock_close_cb().
Signed-off-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz(a)intel.com>
Signed-off-by: Marcel Holtmann <marcel(a)holtmann.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
net/bluetooth/l2cap_sock.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
index d1d5fa35dcc9a..b4f6b74c40038 100644
--- a/net/bluetooth/l2cap_sock.c
+++ b/net/bluetooth/l2cap_sock.c
@@ -1308,6 +1308,9 @@ static void l2cap_sock_close_cb(struct l2cap_chan *chan)
{
struct sock *sk = chan->data;
+ if (!sk)
+ return;
+
l2cap_sock_kill(sk);
}
@@ -1316,6 +1319,9 @@ static void l2cap_sock_teardown_cb(struct l2cap_chan *chan, int err)
struct sock *sk = chan->data;
struct sock *parent;
+ if (!sk)
+ return;
+
BT_DBG("chan %p state %s", chan, state_to_string(chan->state));
/* This callback can be called both for server (BT_LISTEN)
@@ -1498,8 +1504,10 @@ static void l2cap_sock_destruct(struct sock *sk)
{
BT_DBG("sk %p", sk);
- if (l2cap_pi(sk)->chan)
+ if (l2cap_pi(sk)->chan) {
+ l2cap_pi(sk)->chan->data = NULL;
l2cap_chan_put(l2cap_pi(sk)->chan);
+ }
if (l2cap_pi(sk)->rx_busy_skb) {
kfree_skb(l2cap_pi(sk)->rx_busy_skb);
--
2.25.1

[PATCH openEuler-5.10 01/17] slub: fix unreclaimable slab stat for bulk free
by Zheng Zengkai 29 Sep '21
From: Shakeel Butt <shakeelb(a)google.com>
mainline inclusion
from mainline-v5.14-rc4
commit f227f0faf63b46a113c4d1aca633c80195622dd2
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4C12I
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
----------------------------------------------------------------------
SLUB uses the page allocator for higher order allocations and updates the
unreclaimable slab stat for such allocations. At the moment, the bulk
free for SLUB does not share code with the normal free code path for these
types of allocations and has missed the stat update. So, fix the stat
update by using common code. The user visible impact of the bug is the
potential for an inconsistent unreclaimable slab stat visible through
meminfo and vmstat.
Link: https://lkml.kernel.org/r/20210728155354.3440560-1-shakeelb@google.com
Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
Signed-off-by: Shakeel Butt <shakeelb(a)google.com>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Acked-by: Roman Gushchin <guro(a)fb.com>
Reviewed-by: Muchun Song <songmuchun(a)bytedance.com>
Cc: Christoph Lameter <cl(a)linux.com>
Cc: Pekka Enberg <penberg(a)kernel.org>
Cc: David Rientjes <rientjes(a)google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim(a)lge.com>
Cc: Vlastimil Babka <vbabka(a)suse.cz>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Chen Huang <chenhuang5(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
mm/slub.c | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index f06f002bb098..4a83fa347672 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3164,6 +3164,16 @@ struct detached_freelist {
struct kmem_cache *s;
};
+static inline void free_nonslab_page(struct page *page)
+{
+ unsigned int order = compound_order(page);
+
+ VM_BUG_ON_PAGE(!PageCompound(page), page);
+ kfree_hook(page_address(page));
+ mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
+ __free_pages(page, order);
+}
+
/*
* This function progressively scans the array with free objects (with
* a limited look ahead) and extract objects belonging to the same
@@ -3200,9 +3210,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
if (!s) {
/* Handle kalloc'ed objects */
if (unlikely(!PageSlab(page))) {
- BUG_ON(!PageCompound(page));
- kfree_hook(object);
- __free_pages(page, compound_order(page));
+ free_nonslab_page(page);
p[size] = NULL; /* mark object processed */
return size;
}
@@ -4102,13 +4110,7 @@ void kfree(const void *x)
page = virt_to_head_page(x);
if (unlikely(!PageSlab(page))) {
- unsigned int order = compound_order(page);
-
- BUG_ON(!PageCompound(page));
- kfree_hook(object);
- mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
- -(PAGE_SIZE << order));
- __free_pages(page, order);
+ free_nonslab_page(page);
return;
}
slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_);
--
2.20.1

29 Sep '21
From: Piotr Krysiuk <piotras(a)gmail.com>
mainline inclusion
from mainline-v5.16
commit 37cb28ec7d3a36a5bace7063a3dba633ab110f8b
category: bugfix
bugzilla: NA
CVE: CVE-2021-38300
-------------------------------------------------
The conditional branch instructions on MIPS use 18-bit signed offsets
allowing for a branch range of 128 KBytes (backward and forward).
However, this limit is not observed by the cBPF JIT compiler, and so
the JIT compiler emits out-of-range branches when translating certain
cBPF programs. A specific example of such a cBPF program is included in
the "BPF_MAXINSNS: exec all MSH" test from lib/test_bpf.c that executes
anomalous machine code containing incorrect branch offsets under JIT.
Furthermore, this issue can be abused to craft undesirable machine
code, where the control flow is hijacked to execute arbitrary Kernel
code.
The following steps can be used to reproduce the issue:
# echo 1 > /proc/sys/net/core/bpf_jit_enable
# modprobe test_bpf test_name="BPF_MAXINSNS: exec all MSH"
This should produce multiple warnings from build_bimm() similar to:
------------[ cut here ]------------
WARNING: CPU: 0 PID: 209 at arch/mips/mm/uasm-mips.c:210 build_insn+0x558/0x590
Micro-assembler field overflow
Modules linked in: test_bpf(+)
CPU: 0 PID: 209 Comm: modprobe Not tainted 5.14.3 #1
Stack : 00000000 807bb824 82b33c9c 801843c0 00000000 00000004 00000000 63c9b5ee
82b33af4 80999898 80910000 80900000 82fd6030 00000001 82b33a98 82087180
00000000 00000000 80873b28 00000000 000000fc 82b3394c 00000000 2e34312e
6d6d6f43 809a180f 809a1836 6f6d203a 80900000 00000001 82b33bac 80900000
00027f80 00000000 00000000 807bb824 00000000 804ed790 001cc317 00000001
[...]
Call Trace:
[<80108f44>] show_stack+0x38/0x118
[<807a7aac>] dump_stack_lvl+0x5c/0x7c
[<807a4b3c>] __warn+0xcc/0x140
[<807a4c3c>] warn_slowpath_fmt+0x8c/0xb8
[<8011e198>] build_insn+0x558/0x590
[<8011e358>] uasm_i_bne+0x20/0x2c
[<80127b48>] build_body+0xa58/0x2a94
[<80129c98>] bpf_jit_compile+0x114/0x1e4
[<80613fc4>] bpf_prepare_filter+0x2ec/0x4e4
[<8061423c>] bpf_prog_create+0x80/0xc4
[<c0a006e4>] test_bpf_init+0x300/0xba8 [test_bpf]
[<8010051c>] do_one_initcall+0x50/0x1d4
[<801c5e54>] do_init_module+0x60/0x220
[<801c8b20>] sys_finit_module+0xc4/0xfc
[<801144d0>] syscall_common+0x34/0x58
[...]
---[ end trace a287d9742503c645 ]---
Then the anomalous machine code executes:
=> 0xc0a18000: addiu sp,sp,-16
0xc0a18004: sw s3,0(sp)
0xc0a18008: sw s4,4(sp)
0xc0a1800c: sw s5,8(sp)
0xc0a18010: sw ra,12(sp)
0xc0a18014: move s5,a0
0xc0a18018: move s4,zero
0xc0a1801c: move s3,zero
# __BPF_STMT(BPF_LDX | BPF_B | BPF_MSH, 0)
0xc0a18020: lui t6,0x8012
0xc0a18024: ori t4,t6,0x9e14
0xc0a18028: li a1,0
0xc0a1802c: jalr t4
0xc0a18030: move a0,s5
0xc0a18034: bnez v0,0xc0a1ffb8 # incorrect branch offset
0xc0a18038: move v0,zero
0xc0a1803c: andi s4,s3,0xf
0xc0a18040: b 0xc0a18048
0xc0a18044: sll s4,s4,0x2
[...]
# __BPF_STMT(BPF_LDX | BPF_B | BPF_MSH, 0)
0xc0a1ffa0: lui t6,0x8012
0xc0a1ffa4: ori t4,t6,0x9e14
0xc0a1ffa8: li a1,0
0xc0a1ffac: jalr t4
0xc0a1ffb0: move a0,s5
0xc0a1ffb4: bnez v0,0xc0a1ffb8 # incorrect branch offset
0xc0a1ffb8: move v0,zero
0xc0a1ffbc: andi s4,s3,0xf
0xc0a1ffc0: b 0xc0a1ffc8
0xc0a1ffc4: sll s4,s4,0x2
# __BPF_STMT(BPF_LDX | BPF_B | BPF_MSH, 0)
0xc0a1ffc8: lui t6,0x8012
0xc0a1ffcc: ori t4,t6,0x9e14
0xc0a1ffd0: li a1,0
0xc0a1ffd4: jalr t4
0xc0a1ffd8: move a0,s5
0xc0a1ffdc: bnez v0,0xc0a3ffb8 # correct branch offset
0xc0a1ffe0: move v0,zero
0xc0a1ffe4: andi s4,s3,0xf
0xc0a1ffe8: b 0xc0a1fff0
0xc0a1ffec: sll s4,s4,0x2
[...]
# epilogue
0xc0a3ffb8: lw s3,0(sp)
0xc0a3ffbc: lw s4,4(sp)
0xc0a3ffc0: lw s5,8(sp)
0xc0a3ffc4: lw ra,12(sp)
0xc0a3ffc8: addiu sp,sp,16
0xc0a3ffcc: jr ra
0xc0a3ffd0: nop
To mitigate this issue, we assert the branch ranges for each emit call
that could generate an out-of-range branch.
Fixes: 36366e367ee9 ("MIPS: BPF: Restore MIPS32 cBPF JIT")
Fixes: c6610de353da ("MIPS: net: Add BPF JIT")
Signed-off-by: Piotr Krysiuk <piotras(a)gmail.com>
Signed-off-by: Daniel Borkmann <daniel(a)iogearbox.net>
Tested-by: Johan Almbladh <johan.almbladh(a)anyfinetworks.com>
Acked-by: Johan Almbladh <johan.almbladh(a)anyfinetworks.com>
Cc: Paul Burton <paulburton(a)kernel.org>
Cc: Thomas Bogendoerfer <tsbogend(a)alpha.franken.de>
Link: https://lore.kernel.org/bpf/20210915160437.4080-1-piotras@gmail.com
Signed-off-by: Pu Lehui <pulehui(a)huawei.com>
Reviewed-by: Kuohai Xu <xukuohai(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/mips/net/bpf_jit.c | 57 +++++++++++++++++++++++++++++++----------
1 file changed, 43 insertions(+), 14 deletions(-)
diff --git a/arch/mips/net/bpf_jit.c b/arch/mips/net/bpf_jit.c
index 4d8cb9bb8365d..43e6597c720c2 100644
--- a/arch/mips/net/bpf_jit.c
+++ b/arch/mips/net/bpf_jit.c
@@ -662,6 +662,11 @@ static void build_epilogue(struct jit_ctx *ctx)
((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative : func) : \
func##_positive)
+static bool is_bad_offset(int b_off)
+{
+ return b_off > 0x1ffff || b_off < -0x20000;
+}
+
static int build_body(struct jit_ctx *ctx)
{
const struct bpf_prog *prog = ctx->skf;
@@ -728,7 +733,10 @@ static int build_body(struct jit_ctx *ctx)
/* Load return register on DS for failures */
emit_reg_move(r_ret, r_zero, ctx);
/* Return with error */
- emit_b(b_imm(prog->len, ctx), ctx);
+ b_off = b_imm(prog->len, ctx);
+ if (is_bad_offset(b_off))
+ return -E2BIG;
+ emit_b(b_off, ctx);
emit_nop(ctx);
break;
case BPF_LD | BPF_W | BPF_IND:
@@ -775,8 +783,10 @@ static int build_body(struct jit_ctx *ctx)
emit_jalr(MIPS_R_RA, r_s0, ctx);
emit_reg_move(MIPS_R_A0, r_skb, ctx); /* delay slot */
/* Check the error value */
- emit_bcond(MIPS_COND_NE, r_ret, 0,
- b_imm(prog->len, ctx), ctx);
+ b_off = b_imm(prog->len, ctx);
+ if (is_bad_offset(b_off))
+ return -E2BIG;
+ emit_bcond(MIPS_COND_NE, r_ret, 0, b_off, ctx);
emit_reg_move(r_ret, r_zero, ctx);
/* We are good */
/* X <- P[1:K] & 0xf */
@@ -855,8 +865,10 @@ static int build_body(struct jit_ctx *ctx)
/* A /= X */
ctx->flags |= SEEN_X | SEEN_A;
/* Check if r_X is zero */
- emit_bcond(MIPS_COND_EQ, r_X, r_zero,
- b_imm(prog->len, ctx), ctx);
+ b_off = b_imm(prog->len, ctx);
+ if (is_bad_offset(b_off))
+ return -E2BIG;
+ emit_bcond(MIPS_COND_EQ, r_X, r_zero, b_off, ctx);
emit_load_imm(r_ret, 0, ctx); /* delay slot */
emit_div(r_A, r_X, ctx);
break;
@@ -864,8 +876,10 @@ static int build_body(struct jit_ctx *ctx)
/* A %= X */
ctx->flags |= SEEN_X | SEEN_A;
/* Check if r_X is zero */
- emit_bcond(MIPS_COND_EQ, r_X, r_zero,
- b_imm(prog->len, ctx), ctx);
+ b_off = b_imm(prog->len, ctx);
+ if (is_bad_offset(b_off))
+ return -E2BIG;
+ emit_bcond(MIPS_COND_EQ, r_X, r_zero, b_off, ctx);
emit_load_imm(r_ret, 0, ctx); /* delay slot */
emit_mod(r_A, r_X, ctx);
break;
@@ -926,7 +940,10 @@ static int build_body(struct jit_ctx *ctx)
break;
case BPF_JMP | BPF_JA:
/* pc += K */
- emit_b(b_imm(i + k + 1, ctx), ctx);
+ b_off = b_imm(i + k + 1, ctx);
+ if (is_bad_offset(b_off))
+ return -E2BIG;
+ emit_b(b_off, ctx);
emit_nop(ctx);
break;
case BPF_JMP | BPF_JEQ | BPF_K:
@@ -1056,12 +1073,16 @@ static int build_body(struct jit_ctx *ctx)
break;
case BPF_RET | BPF_A:
ctx->flags |= SEEN_A;
- if (i != prog->len - 1)
+ if (i != prog->len - 1) {
/*
* If this is not the last instruction
* then jump to the epilogue
*/
- emit_b(b_imm(prog->len, ctx), ctx);
+ b_off = b_imm(prog->len, ctx);
+ if (is_bad_offset(b_off))
+ return -E2BIG;
+ emit_b(b_off, ctx);
+ }
emit_reg_move(r_ret, r_A, ctx); /* delay slot */
break;
case BPF_RET | BPF_K:
@@ -1075,7 +1096,10 @@ static int build_body(struct jit_ctx *ctx)
* If this is not the last instruction
* then jump to the epilogue
*/
- emit_b(b_imm(prog->len, ctx), ctx);
+ b_off = b_imm(prog->len, ctx);
+ if (is_bad_offset(b_off))
+ return -E2BIG;
+ emit_b(b_off, ctx);
emit_nop(ctx);
}
break;
@@ -1133,8 +1157,10 @@ static int build_body(struct jit_ctx *ctx)
/* Load *dev pointer */
emit_load_ptr(r_s0, r_skb, off, ctx);
/* error (0) in the delay slot */
- emit_bcond(MIPS_COND_EQ, r_s0, r_zero,
- b_imm(prog->len, ctx), ctx);
+ b_off = b_imm(prog->len, ctx);
+ if (is_bad_offset(b_off))
+ return -E2BIG;
+ emit_bcond(MIPS_COND_EQ, r_s0, r_zero, b_off, ctx);
emit_reg_move(r_ret, r_zero, ctx);
if (code == (BPF_ANC | SKF_AD_IFINDEX)) {
BUILD_BUG_ON(FIELD_SIZEOF(struct net_device, ifindex) != 4);
@@ -1244,7 +1270,10 @@ void bpf_jit_compile(struct bpf_prog *fp)
/* Generate the actual JIT code */
build_prologue(&ctx);
- build_body(&ctx);
+ if (build_body(&ctx)) {
+ module_memfree(ctx.target);
+ goto out;
+ }
build_epilogue(&ctx);
/* Update the icache */
--
2.25.1
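To make the limit concrete, here is a small host-side sketch (not kernel code)
that applies the same bounds as the is_bad_offset() helper added above to
branch distances expressed in 4-byte instructions; the instruction counts are
illustrative assumptions:

/* MIPS conditional branches encode an 18-bit signed byte offset,
 * so only targets within -0x20000 .. 0x1ffff bytes are reachable.
 */
#include <stdio.h>
#include <stdbool.h>

static bool is_bad_offset(int b_off)
{
        return b_off > 0x1ffff || b_off < -0x20000;
}

int main(void)
{
        int insns[] = { 1000, 32767, 32768, 40000 };
        int i;

        for (i = 0; i < 4; i++) {
                int b_off = insns[i] * 4;       /* 4 bytes per instruction */

                printf("%6d insns -> offset 0x%x: %s\n", insns[i], b_off,
                       is_bad_offset(b_off) ? "out of range" : "ok");
        }
        return 0;
}

Once roughly 32K translated instructions separate a branch from its target,
the offset no longer fits, which is why build_body() now returns -E2BIG and
bpf_jit_compile() bails out instead of emitting machine code with bogus
offsets.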

[PATCH openEuler-1.0-LTS] scsi: qla2xxx: Fix crash in qla2xxx_mqueuecommand()
by Yang Yingliang 29 Sep '21
From: Arun Easi <aeasi(a)marvell.com>
stable inclusion
from linux-4.19.191
commit c5ab9b67d8b061de74e2ca51bf787ee599bd7f89
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4AFG0?from=project-issue
CVE: NA
--------------------------------
commit 6641df81ab799f28a5d564f860233dd26cca0d93 upstream.
RIP: 0010:kmem_cache_free+0xfa/0x1b0
Call Trace:
qla2xxx_mqueuecommand+0x2b5/0x2c0 [qla2xxx]
scsi_queue_rq+0x5e2/0xa40
__blk_mq_try_issue_directly+0x128/0x1d0
blk_mq_request_issue_directly+0x4e/0xb0
Fix the incorrect call to free the srb in qla2xxx_mqueuecommand(), as the srb
is now allocated by upper layers. This fixes the smatch warning about an
unintended free of the srb.
Link: https://lore.kernel.org/r/20210329085229.4367-7-njavali@marvell.com
Fixes: af2a0c51b120 ("scsi: qla2xxx: Fix SRB leak on switch command timeout")
Cc: stable(a)vger.kernel.org # 5.5
Reported-by: Laurence Oberman <loberman(a)redhat.com>
Reported-by: Dan Carpenter <dan.carpenter(a)oracle.com>
Reviewed-by: Himanshu Madhani <himanshu.madhani(a)oracle.com>
Signed-off-by: Arun Easi <aeasi(a)marvell.com>
Signed-off-by: Nilesh Javali <njavali(a)marvell.com>
Signed-off-by: Martin K. Petersen <martin.petersen(a)oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: yin-xiujiang <yinxiujiang(a)kylinos.cn> # openEuler_contributor
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/scsi/qla2xxx/qla_os.c | 7 -------
1 file changed, 7 deletions(-)
diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
index bfbf213b15c0b..8e9d386146ac6 100644
--- a/drivers/scsi/qla2xxx/qla_os.c
+++ b/drivers/scsi/qla2xxx/qla_os.c
@@ -1028,8 +1028,6 @@ qla2xxx_mqueuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd,
if (rval != QLA_SUCCESS) {
ql_dbg(ql_dbg_io + ql_dbg_verbose, vha, 0x3078,
"Start scsi failed rval=%d for cmd=%p.\n", rval, cmd);
- if (rval == QLA_INTERFACE_ERROR)
- goto qc24_free_sp_fail_command;
goto qc24_host_busy_free_sp;
}
@@ -1044,11 +1042,6 @@ qla2xxx_mqueuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd,
qc24_target_busy:
return SCSI_MLQUEUE_TARGET_BUSY;
-qc24_free_sp_fail_command:
- sp->free(sp);
- CMD_SP(cmd) = NULL;
- qla2xxx_rel_qpair_sp(sp->qpair, sp);
-
qc24_fail_command:
cmd->scsi_done(cmd);
--
2.25.1

[PATCH kernel-4.19] crypto: ccp - fix resource leaks in ccp_run_aes_gcm_cmd()
by Yang Yingliang 28 Sep '21
From: Dan Carpenter <dan.carpenter(a)oracle.com>
mainline inclusion
from mainline-v5.16
commit 505d9dcb0f7ddf9d075e729523a33d38642ae680
category: bugfix
bugzilla: NA
CVE: CVE-2021-3764, CVE-2021-3744
---------------------------
There are three bugs in this code:
1) If ccp_init_data() fails for &src then we need to free aad.
Use goto e_aad instead of goto e_ctx.
2) The label to free the &final_wa was named incorrectly as "e_tag" but
it should have been "e_final_wa". One error path leaked &final_wa.
3) The &tag was leaked on one error path. In that case, I added a free
before the goto because the resource was local to that block.
Fixes: 36cf515b9bbe ("crypto: ccp - Enable support for AES GCM on v5 CCPs")
Reported-by: "minihanshen(沈明航)" <minihanshen(a)tencent.com>
Signed-off-by: Dan Carpenter <dan.carpenter(a)oracle.com>
Reviewed-by: John Allen <john.allen(a)amd.com>
Tested-by: John Allen <john.allen(a)amd.com>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: weiyang wang <wangweiyang2(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/crypto/ccp/ccp-ops.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
index 4dca1c7b9d47b..cdffff7c8abb9 100644
--- a/drivers/crypto/ccp/ccp-ops.c
+++ b/drivers/crypto/ccp/ccp-ops.c
@@ -768,7 +768,7 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q,
in_place ? DMA_BIDIRECTIONAL
: DMA_TO_DEVICE);
if (ret)
- goto e_ctx;
+ goto e_aad;
if (in_place) {
dst = src;
@@ -853,7 +853,7 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q,
op.u.aes.size = 0;
ret = cmd_q->ccp->vdata->perform->aes(&op);
if (ret)
- goto e_dst;
+ goto e_final_wa;
if (aes->action == CCP_AES_ACTION_ENCRYPT) {
/* Put the ciphered tag after the ciphertext. */
@@ -863,17 +863,19 @@ static int ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q,
ret = ccp_init_dm_workarea(&tag, cmd_q, authsize,
DMA_BIDIRECTIONAL);
if (ret)
- goto e_tag;
+ goto e_final_wa;
ret = ccp_set_dm_area(&tag, 0, p_tag, 0, authsize);
- if (ret)
- goto e_tag;
+ if (ret) {
+ ccp_dm_free(&tag);
+ goto e_final_wa;
+ }
ret = crypto_memneq(tag.address, final_wa.address,
authsize) ? -EBADMSG : 0;
ccp_dm_free(&tag);
}
-e_tag:
+e_final_wa:
ccp_dm_free(&final_wa);
e_dst:
--
2.25.1
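The bugs above are all instances of the same pattern: goto-based unwinding
where each error label must release exactly what was acquired before the
failure. A generic sketch of the shape (not the ccp code itself, the resource
names are illustrative):

/* Error labels unwind in reverse order of acquisition: jumping to the
 * wrong label either leaks a resource or frees one that was never set up.
 */
#include <stdlib.h>

static int do_work(void)
{
        void *aad, *src, *final_wa;
        int ret = -1;

        aad = malloc(64);
        if (!aad)
                return -1;
        src = malloc(64);
        if (!src)
                goto e_aad;             /* only aad is live here */
        final_wa = malloc(64);
        if (!final_wa)
                goto e_src;             /* aad and src are live */

        if (getenv("FAIL"))             /* pretend the last step can fail */
                goto e_final_wa;
        ret = 0;

e_final_wa:
        free(final_wa);
e_src:
        free(src);
e_aad:
        free(aad);
        return ret;
}

int main(void)
{
        return do_work() ? 1 : 0;
}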

28 Sep '21
This initial commit contains Ramaxel's spnic module
Yanling Song (2):
spnic: initial commit the common module of Ramaxel NIC driver
spnic: add NIC layer
arch/arm64/configs/openeuler_defconfig | 2 +
arch/x86/configs/openeuler_defconfig | 2 +
drivers/net/ethernet/Kconfig | 1 +
drivers/net/ethernet/Makefile | 1 +
drivers/net/ethernet/ramaxel/Kconfig | 20 +
drivers/net/ethernet/ramaxel/Makefile | 6 +
drivers/net/ethernet/ramaxel/spnic/Kconfig | 15 +
drivers/net/ethernet/ramaxel/spnic/Makefile | 40 +
.../ethernet/ramaxel/spnic/hw/sphw_api_cmd.c | 1165 +++++++++++
.../ethernet/ramaxel/spnic/hw/sphw_api_cmd.h | 277 +++
.../ethernet/ramaxel/spnic/hw/sphw_cfg_cmd.h | 126 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c | 1606 +++++++++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_cmdq.h | 196 ++
.../ethernet/ramaxel/spnic/hw/sphw_comm_cmd.h | 60 +
.../ramaxel/spnic/hw/sphw_comm_msg_intf.h | 273 +++
.../ethernet/ramaxel/spnic/hw/sphw_common.c | 88 +
.../ethernet/ramaxel/spnic/hw/sphw_common.h | 118 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_crm.h | 984 +++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_csr.h | 171 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_eqs.c | 1374 +++++++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_eqs.h | 157 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_hw.h | 649 ++++++
.../ethernet/ramaxel/spnic/hw/sphw_hw_cfg.c | 1339 ++++++++++++
.../ethernet/ramaxel/spnic/hw/sphw_hw_cfg.h | 327 +++
.../ethernet/ramaxel/spnic/hw/sphw_hw_comm.c | 1253 ++++++++++++
.../ethernet/ramaxel/spnic/hw/sphw_hw_comm.h | 42 +
.../ethernet/ramaxel/spnic/hw/sphw_hwdev.c | 1402 +++++++++++++
.../ethernet/ramaxel/spnic/hw/sphw_hwdev.h | 93 +
.../net/ethernet/ramaxel/spnic/hw/sphw_hwif.c | 911 +++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_hwif.h | 102 +
.../net/ethernet/ramaxel/spnic/hw/sphw_mbox.c | 1808 +++++++++++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_mbox.h | 274 +++
.../net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c | 1382 +++++++++++++
.../net/ethernet/ramaxel/spnic/hw/sphw_mgmt.h | 156 ++
.../ramaxel/spnic/hw/sphw_mgmt_msg_base.h | 19 +
.../net/ethernet/ramaxel/spnic/hw/sphw_mt.h | 534 +++++
.../ramaxel/spnic/hw/sphw_prof_adap.c | 94 +
.../ramaxel/spnic/hw/sphw_prof_adap.h | 49 +
.../ethernet/ramaxel/spnic/hw/sphw_profile.h | 36 +
.../net/ethernet/ramaxel/spnic/hw/sphw_wq.c | 152 ++
.../net/ethernet/ramaxel/spnic/hw/sphw_wq.h | 119 ++
.../net/ethernet/ramaxel/spnic/spnic_dbg.c | 753 +++++++
.../net/ethernet/ramaxel/spnic/spnic_dcb.c | 965 +++++++++
.../net/ethernet/ramaxel/spnic/spnic_dcb.h | 56 +
.../ethernet/ramaxel/spnic/spnic_dev_mgmt.c | 811 ++++++++
.../ethernet/ramaxel/spnic/spnic_dev_mgmt.h | 78 +
.../ethernet/ramaxel/spnic/spnic_ethtool.c | 983 +++++++++
.../ramaxel/spnic/spnic_ethtool_stats.c | 1035 ++++++++++
.../net/ethernet/ramaxel/spnic/spnic_filter.c | 412 ++++
.../net/ethernet/ramaxel/spnic/spnic_irq.c | 178 ++
.../net/ethernet/ramaxel/spnic/spnic_lld.c | 937 +++++++++
.../net/ethernet/ramaxel/spnic/spnic_lld.h | 75 +
.../ethernet/ramaxel/spnic/spnic_mag_cfg.c | 778 +++++++
.../ethernet/ramaxel/spnic/spnic_mag_cmd.h | 643 ++++++
.../net/ethernet/ramaxel/spnic/spnic_main.c | 925 +++++++++
.../ramaxel/spnic/spnic_mgmt_interface.h | 617 ++++++
.../ethernet/ramaxel/spnic/spnic_netdev_ops.c | 1526 ++++++++++++++
.../net/ethernet/ramaxel/spnic/spnic_nic.h | 148 ++
.../ethernet/ramaxel/spnic/spnic_nic_cfg.c | 1321 ++++++++++++
.../ethernet/ramaxel/spnic/spnic_nic_cfg.h | 724 +++++++
.../ethernet/ramaxel/spnic/spnic_nic_cfg_vf.c | 647 ++++++
.../ethernet/ramaxel/spnic/spnic_nic_cmd.h | 105 +
.../ethernet/ramaxel/spnic/spnic_nic_dbg.c | 151 ++
.../ethernet/ramaxel/spnic/spnic_nic_dbg.h | 16 +
.../ethernet/ramaxel/spnic/spnic_nic_dev.h | 353 ++++
.../ethernet/ramaxel/spnic/spnic_nic_event.c | 506 +++++
.../net/ethernet/ramaxel/spnic/spnic_nic_io.c | 1124 ++++++++++
.../net/ethernet/ramaxel/spnic/spnic_nic_io.h | 309 +++
.../net/ethernet/ramaxel/spnic/spnic_nic_qp.h | 421 ++++
.../net/ethernet/ramaxel/spnic/spnic_ntuple.c | 841 ++++++++
.../ethernet/ramaxel/spnic/spnic_pci_id_tbl.h | 13 +
.../net/ethernet/ramaxel/spnic/spnic_rss.c | 750 +++++++
.../net/ethernet/ramaxel/spnic/spnic_rss.h | 48 +
.../ethernet/ramaxel/spnic/spnic_rss_cfg.c | 391 ++++
drivers/net/ethernet/ramaxel/spnic/spnic_rx.c | 1250 ++++++++++++
drivers/net/ethernet/ramaxel/spnic/spnic_rx.h | 118 ++
.../net/ethernet/ramaxel/spnic/spnic_sriov.c | 200 ++
.../net/ethernet/ramaxel/spnic/spnic_sriov.h | 24 +
drivers/net/ethernet/ramaxel/spnic/spnic_tx.c | 879 ++++++++
drivers/net/ethernet/ramaxel/spnic/spnic_tx.h | 129 ++
80 files changed, 38663 insertions(+)
create mode 100644 drivers/net/ethernet/ramaxel/Kconfig
create mode 100644 drivers/net/ethernet/ramaxel/Makefile
create mode 100644 drivers/net/ethernet/ramaxel/spnic/Kconfig
create mode 100644 drivers/net/ethernet/ramaxel/spnic/Makefile
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_api_cmd.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_api_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_cfg_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_cmdq.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_comm_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_comm_msg_intf.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_common.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_common.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_crm.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_csr.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_eqs.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_eqs.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_cfg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_cfg.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_comm.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_comm.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwdev.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwdev.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwif.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwif.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt_msg_base.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mt.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_prof_adap.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_prof_adap.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_profile.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_wq.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_wq.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dbg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dcb.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dcb.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dev_mgmt.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dev_mgmt.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_ethtool.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_ethtool_stats.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_filter.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_irq.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_lld.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_lld.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_mag_cfg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_mag_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_main.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_mgmt_interface.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_netdev_ops.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg_vf.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cmd.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_dbg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_dbg.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_dev.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_event.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_io.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_io.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_qp.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_ntuple.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_pci_id_tbl.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rss.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rss.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rss_cfg.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rx.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rx.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_sriov.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_sriov.h
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_tx.c
create mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_tx.h
--
2.30.0

28 Sep '21
From: Zhenhua Huang <zhenhuah(a)codeaurora.org>
mainline inclusion
from mainline-v5.11-rc1
commit 7fb7ab6d618a4dc7ea3f3eafc92388a35b4f8894
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I49LXW
CVE: NA
-------------------------------------------------
Page owner information for the pages used by page owner itself is missing
on arm32 targets. The reason is that dummy_handle and failure_handle are
not initialized correctly: the buddy allocator is used to initialize these
two handles, but it is not ready yet when page owner calls it. This
change fixes that by initializing page owner after buddy initialization.
The flows before and after this change are:
original logic:
1. allocate memory for page_ext (using memblock).
2. invoke the init callback of page_ext_ops such as page_owner (using the
buddy allocator).
3. initialize buddy.
after this change:
1. allocate memory for page_ext (using memblock).
2. initialize buddy.
3. invoke the init callback of page_ext_ops such as page_owner (using the
buddy allocator).
With this change, failure_handle and dummy_handle get their correct values,
and the page owner output now includes, for example, the entry for page
owner itself:
Page allocated via order 2, mask 0x6202c0(GFP_USER|__GFP_NOWARN),
pid 1006, ts 67278156558 ns
PFN 543776 type Unmovable Block 531 type Unmovable Flags 0x0()
init_page_owner+0x28/0x2f8
invoke_init_callbacks_flatmem+0x24/0x34
start_kernel+0x33c/0x5d8
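As an aside for readers of this changelog, the ordering dependency can be
modelled with a minimal standalone user-space sketch; the names below
(depot_save, page_owner_init_cb, buddy_ready) are stand-ins for
illustration only, not the kernel symbols:

/* Standalone sketch (user space, hypothetical names, not the kernel code):
 * an init callback that needs the page allocator caches handles of 0 when
 * it runs before the allocator is ready; running the same callback after
 * the allocator is up yields valid handles. */
#include <stdbool.h>
#include <stdio.h>

static bool buddy_ready;			/* "buddy initialized" flag */
static unsigned long dummy_handle, failure_handle;

/* stands in for the depot save that needs pages from the buddy allocator */
static unsigned long depot_save(void)
{
	return buddy_ready ? 0x1234 : 0;
}

/* stands in for page owner's init callback */
static void page_owner_init_cb(void)
{
	dummy_handle = depot_save();
	failure_handle = depot_save();
}

int main(void)
{
	page_owner_init_cb();		/* original order: callback first */
	buddy_ready = true;		/* buddy becomes ready too late */
	printf("early init: dummy=%#lx failure=%#lx\n",
	       dummy_handle, failure_handle);

	page_owner_init_cb();		/* fixed order: callback after buddy */
	printf("late init:  dummy=%#lx failure=%#lx\n",
	       dummy_handle, failure_handle);
	return 0;
}

The first call mirrors the original logic and caches zero handles; the
second mirrors the new late callback and caches valid ones.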
Link: https://lkml.kernel.org/r/1603104925-5888-1-git-send-email-zhenhuah@codeaur…
Signed-off-by: Zhenhua Huang <zhenhuah(a)codeaurora.org>
Acked-by: Vlastimil Babka <vbabka(a)suse.cz>
Cc: Joonsoo Kim <iamjoonsoo.kim(a)lge.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
(cherry picked from commit 7fb7ab6d618a4dc7ea3f3eafc92388a35b4f8894)
Signed-off-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yuanzheng Song <songyuanzheng(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
include/linux/page_ext.h | 8 ++++++++
init/main.c | 2 ++
mm/page_ext.c | 10 ++++++++--
3 files changed, 18 insertions(+), 2 deletions(-)
diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index cfce186f0c4e..aff81ba31bd8 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -44,8 +44,12 @@ static inline void page_ext_init_flatmem(void)
{
}
extern void page_ext_init(void);
+static inline void page_ext_init_flatmem_late(void)
+{
+}
#else
extern void page_ext_init_flatmem(void);
+extern void page_ext_init_flatmem_late(void);
static inline void page_ext_init(void)
{
}
@@ -76,6 +80,10 @@ static inline void page_ext_init(void)
{
}
+static inline void page_ext_init_flatmem_late(void)
+{
+}
+
static inline void page_ext_init_flatmem(void)
{
}
diff --git a/init/main.c b/init/main.c
index fc0277a9e7a3..f6fe37a744b7 100644
--- a/init/main.c
+++ b/init/main.c
@@ -830,6 +830,8 @@ static void __init mm_init(void)
init_debug_pagealloc();
report_meminit();
mem_init();
+ /* page_owner must be initialized after buddy is ready */
+ page_ext_init_flatmem_late();
kmem_cache_init();
kmemleak_init();
pgtable_init();
diff --git a/mm/page_ext.c b/mm/page_ext.c
index cf931ebb58d7..df6f74aac8e1 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -99,12 +99,19 @@ static void __init invoke_init_callbacks(void)
}
}
+#ifndef CONFIG_SPARSEMEM
+void __init page_ext_init_flatmem_late(void)
+{
+ invoke_init_callbacks();
+}
+#endif
+
static inline struct page_ext *get_entry(void *base, unsigned long index)
{
return base + page_ext_size * index;
}
-#if !defined(CONFIG_SPARSEMEM)
+#ifndef CONFIG_SPARSEMEM
void __meminit pgdat_page_ext_init(struct pglist_data *pgdat)
@@ -177,7 +184,6 @@ void __init page_ext_init_flatmem(void)
goto fail;
}
pr_info("allocated %ld bytes of page_ext\n", total_usage);
- invoke_init_callbacks();
return;
fail:
--
2.20.1

[PATCH openEuler-1.0-LTS] net: fix TCP timeout retransmits always falling 2 short of the configured count
by jiazhenyuan 28 Sep '21
mainline inclusion
from mainline-v5.3.0
commit 3256a2d6ab1f71f9a1bd2d7f6f18eb8108c48d17
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4AFRJ?from=project-issue
CVE: NA
---------------------------------------------------------------
tcp: adjust rto_base in retransmits_timed_out()
The cited commit exposed an old retransmits_timed_out() bug
which assumed it could call tcp_model_timeout() with
TCP_RTO_MIN as rto_base for all states.
But flows in SYN_SENT or SYN_RECV state use a different
RTO base (1 sec instead of 200 ms, unless BPF chooses
another value).
This caused a reduction of SYN retransmits from 6 to 4 with
the default /proc/sys/net/ipv4/tcp_syn_retries value.
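As an aside, the arithmetic behind the drop from 6 to 4 can be reproduced
with a small standalone sketch of the boundary-to-timeout mapping
(simplified constants and rounding; an illustration, not the kernel code):

/* Standalone sketch (not the kernel code) of the retransmit-budget model:
 * with boundary = 6 (the default tcp_syn_retries), a 200 ms rto_base gives
 * a 25400 ms budget, while the correct 1 s base for SYN_SENT/SYN_RECV
 * gives 127000 ms. */
#include <stdio.h>

#define MSEC_PER_SEC	1000u
#define TCP_RTO_MAX_MS	(120u * MSEC_PER_SEC)	/* 120 s, in ms */

static unsigned int ilog2_u(unsigned int v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

/* simplified boundary -> timeout mapping, in milliseconds */
static unsigned int model_timeout(unsigned int boundary, unsigned int rto_base)
{
	unsigned int thresh = ilog2_u(TCP_RTO_MAX_MS / rto_base);

	if (boundary <= thresh)
		return ((2u << boundary) - 1) * rto_base;
	return ((2u << thresh) - 1) * rto_base +
	       (boundary - thresh) * TCP_RTO_MAX_MS;
}

int main(void)
{
	unsigned int boundary = 6;	/* default tcp_syn_retries */

	printf("200 ms base: %u ms\n", model_timeout(boundary, 200));
	printf("1 s base:    %u ms\n", model_timeout(boundary, 1000));
	return 0;
}

Since SYN retransmit intervals double (1, 2, 4, 8, ... seconds), only four
retransmits fit into the smaller 25400 ms budget, matching the observed
reduction from 6 to 4.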
Fixes: a41e8a8 ("tcp: better handle TCP_USER_TIMEOUT in SYN_SENT state")
Signed-off-by: Eric Dumazet <edumazet(a)google.com>
Cc: Yuchung Cheng <ycheng(a)google.com>
Cc: Marek Majkowski <marek(a)cloudflare.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: jiazhenyuan <jiazhenyuan(a)uniontech.com>
---
net/ipv4/tcp_timer.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
index 681882a40968..81a47e87c35d 100644
--- a/net/ipv4/tcp_timer.c
+++ b/net/ipv4/tcp_timer.c
@@ -190,7 +190,7 @@ static bool retransmits_timed_out(struct sock *sk,
unsigned int boundary,
unsigned int timeout)
{
- const unsigned int rto_base = TCP_RTO_MIN;
+ unsigned int rto_base = TCP_RTO_MIN;
unsigned int linear_backoff_thresh, start_ts;
if (!inet_csk(sk)->icsk_retransmits)
@@ -201,6 +201,9 @@ static bool retransmits_timed_out(struct sock *sk,
return false;
if (likely(timeout == 0)) {
+ if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV))
+ rto_base = tcp_timeout_init(sk);
+
linear_backoff_thresh = ilog2(TCP_RTO_MAX/rto_base);
if (boundary <= linear_backoff_thresh)
--
2.27.0

[PATCH openEuler-1.0-LTS 01/14] Revert "bpf: Fix truncation handling for mod32 dst reg wrt zero"
by Yang Yingliang 28 Sep '21
From: He Fengqing <hefengqing(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: NA
CVE: CVE-2021-3444
-------------------------------------------------
This reverts commit 946dd60de74146a418f62275e5a6f83496f74dcd.
Signed-off-by: He Fengqing <hefengqing(a)huawei.com>
Reviewed-by: Kuohai Xu <xukuohai(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/bpf/verifier.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 962dc7c48430f..61a535eec0a9b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -6322,7 +6322,7 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
bool isdiv = BPF_OP(insn->code) == BPF_DIV;
struct bpf_insn *patchlet;
struct bpf_insn chk_and_div[] = {
- /* [R,W]x div 0 -> 0 */
+ /* Rx div 0 -> 0 */
BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
BPF_JNE | BPF_K, insn->src_reg,
0, 2, 0),
@@ -6331,18 +6331,16 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
*insn,
};
struct bpf_insn chk_and_mod[] = {
- /* [R,W]x mod 0 -> [R,W]x */
+ /* Rx mod 0 -> Rx */
BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
BPF_JEQ | BPF_K, insn->src_reg,
- 0, 1 + (is64 ? 0 : 1), 0),
+ 0, 1, 0),
*insn,
- BPF_JMP_IMM(BPF_JA, 0, 0, 1),
- BPF_MOV32_REG(insn->dst_reg, insn->dst_reg),
};
patchlet = isdiv ? chk_and_div : chk_and_mod;
cnt = isdiv ? ARRAY_SIZE(chk_and_div) :
- ARRAY_SIZE(chk_and_mod) - (is64 ? 2 : 0);
+ ARRAY_SIZE(chk_and_mod);
new_prog = bpf_patch_insn_data(env, i + delta, patchlet, cnt);
if (!new_prog)
--
2.25.1