
[PATCH openEuler-1.0-LTS 01/15] binder: use euid from cred instead of using task
by Laibin Qiu 15 Apr '22
From: Todd Kjos <tkjos(a)google.com>
stable inclusion
from linux-4.19.218
commit 5d40061285b81a7e213dc9b37acc4a0545eedf32
--------------------------------
commit 29bc22ac5e5bc63275e850f0c8fc549e3d0e306b upstream.
Save the 'struct cred' associated with a binder process
at initial open to avoid potential race conditions
when converting to an euid.
Set a transaction's sender_euid from the 'struct cred'
saved at binder_open() instead of looking up the euid
from the binder proc's 'struct task'. This ensures
the euid is associated with the security context of
the task that opened binder.
Cc: stable(a)vger.kernel.org # 4.4+
Fixes: 457b9a6f09f0 ("Staging: android: add binder driver")
Signed-off-by: Todd Kjos <tkjos(a)google.com>
Suggested-by: Stephen Smalley <stephen.smalley.work(a)gmail.com>
Suggested-by: Jann Horn <jannh(a)google.com>
Acked-by: Casey Schaufler <casey(a)schaufler-ca.com>
Signed-off-by: Paul Moore <paul(a)paul-moore.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yongqiang Liu <liuyongqiang13(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
---
drivers/android/binder.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 18c32637047f..3ba23d29f50a 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -483,6 +483,9 @@ enum binder_deferred_state {
* @files files_struct for process
* (protected by @files_lock)
* @files_lock mutex to protect @files
+ * @cred struct cred associated with the `struct file`
+ * in binder_open()
+ * (invariant after initialized)
* @deferred_work_node: element for binder_deferred_list
* (protected by binder_deferred_lock)
* @deferred_work: bitmap of deferred work to perform
@@ -529,6 +532,7 @@ struct binder_proc {
struct task_struct *tsk;
struct files_struct *files;
struct mutex files_lock;
+ const struct cred *cred;
struct hlist_node deferred_work_node;
int deferred_work;
bool is_dead;
@@ -2956,7 +2960,7 @@ static void binder_transaction(struct binder_proc *proc,
t->from = thread;
else
t->from = NULL;
- t->sender_euid = task_euid(proc->tsk);
+ t->sender_euid = proc->cred->euid;
t->to_proc = target_proc;
t->to_thread = target_thread;
t->code = tr->code;
@@ -4328,6 +4332,7 @@ static void binder_free_proc(struct binder_proc *proc)
BUG_ON(!list_empty(&proc->delivered_death));
binder_alloc_deferred_release(&proc->alloc);
put_task_struct(proc->tsk);
+ put_cred(proc->cred);
binder_stats_deleted(BINDER_STAT_PROC);
kfree(proc);
}
@@ -4786,6 +4791,7 @@ static int binder_open(struct inode *nodp, struct file *filp)
get_task_struct(current->group_leader);
proc->tsk = current->group_leader;
mutex_init(&proc->files_lock);
+ proc->cred = get_cred(filp->f_cred);
INIT_LIST_HEAD(&proc->todo);
proc->default_priority = task_nice(current);
binder_dev = container_of(filp->private_data, struct binder_device,
--
2.22.0
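The core idea of the fix above, pinning the opener's credentials with a reference count at open() and reading the euid from that snapshot at transaction time instead of dereferencing the task, can be sketched outside the kernel. The struct and helper names below are toy stand-ins for illustration, not the binder or cred API:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins for the kernel's refcounted, immutable credentials. */
struct cred {
    unsigned int euid;
    int refcount;
};

struct proc_ctx {
    const struct cred *cred;  /* snapshot taken at open, invariant after */
};

static struct cred *get_cred(struct cred *c)
{
    c->refcount++;
    return c;
}

static void put_cred(struct cred *c)
{
    c->refcount--;
}

/* At open: capture the opener's credentials once. */
static void ctx_open(struct proc_ctx *ctx, struct cred *opener_cred)
{
    ctx->cred = get_cred(opener_cred);
}

/* At transaction time: read the euid from the snapshot, not the task. */
static unsigned int ctx_sender_euid(const struct proc_ctx *ctx)
{
    return ctx->cred->euid;
}

static void ctx_release(struct proc_ctx *ctx)
{
    put_cred((struct cred *)ctx->cred);
    ctx->cred = NULL;
}
```

Because kernel cred objects are immutable (a task switches to a new cred object on setuid rather than editing the old one), the snapshot keeps reporting the euid that was in effect at open() regardless of what the task does later, which closes the race the patch describes.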
Hello everyone,
The openEuler Developer Summit will be held online on April 13-15: the Kernel SIG working meeting is on Thursday at 14:00, and the kernel sub-forum keynote talks are on Friday at 14:00. Everyone is welcome to attend.
Attending Thursday afternoon's SIG working meeting requires registration; please register in advance so you can connect at the time: https://us06web.zoom.us/webinar/register/WN_nxtHqu_VSbOuWhiP16e-Bw
To watch Friday afternoon's kernel sub-forum keynotes, follow the livestream on the official website: https://www.openeuler.org/zh/interaction/summit-li…
Thank you all for your support.

13 Apr '22
zhaoxin inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I52DS7
CVE: NA
Add support for the temperature sensor inside CPU.
Supported are all known variants of the Zhaoxin processors.
v1: Fix some character encoding mistakes.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/hwmon/Kconfig | 9 +
drivers/hwmon/Makefile | 1 +
drivers/hwmon/via-cputemp.c | 1 -
drivers/hwmon/zhaoxin-cputemp.c | 292 ++++++++++++++++++++++++++++++++
4 files changed, 302 insertions(+), 1 deletion(-)
create mode 100644 drivers/hwmon/zhaoxin-cputemp.c
diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
index 0c2b032ee617..c11c3c46898a 100644
--- a/drivers/hwmon/Kconfig
+++ b/drivers/hwmon/Kconfig
@@ -1889,6 +1889,15 @@ config SENSORS_VIA_CPUTEMP
sensor inside your CPU. Supported are all known variants of
the VIA C7 and Nano.
+config SENSORS_ZHAOXIN_CPUTEMP
+ tristate "Zhaoxin CPU temperature sensor"
+ depends on X86
+ select HWMON_VID
+ help
+ If you say yes here you get support for the temperature
+ sensor inside your CPU. Supported are all known variants of
+ the Zhaoxin processors.
+
config SENSORS_VIA686A
tristate "VIA686A"
depends on PCI
diff --git a/drivers/hwmon/Makefile b/drivers/hwmon/Makefile
index 9db2903b61e5..2427687972c9 100644
--- a/drivers/hwmon/Makefile
+++ b/drivers/hwmon/Makefile
@@ -184,6 +184,7 @@ obj-$(CONFIG_SENSORS_TMP421) += tmp421.o
obj-$(CONFIG_SENSORS_TMP513) += tmp513.o
obj-$(CONFIG_SENSORS_VEXPRESS) += vexpress-hwmon.o
obj-$(CONFIG_SENSORS_VIA_CPUTEMP)+= via-cputemp.o
+obj-$(CONFIG_SENSORS_ZHAOXIN_CPUTEMP) += zhaoxin-cputemp.o
obj-$(CONFIG_SENSORS_VIA686A) += via686a.o
obj-$(CONFIG_SENSORS_VT1211) += vt1211.o
obj-$(CONFIG_SENSORS_VT8231) += vt8231.o
diff --git a/drivers/hwmon/via-cputemp.c b/drivers/hwmon/via-cputemp.c
index e5d18dac8ee7..0a5057dbe51a 100644
--- a/drivers/hwmon/via-cputemp.c
+++ b/drivers/hwmon/via-cputemp.c
@@ -273,7 +273,6 @@ static const struct x86_cpu_id __initconst cputemp_ids[] = {
X86_MATCH_VENDOR_FAM_MODEL(CENTAUR, 6, X86_CENTAUR_FAM6_C7_A, NULL),
X86_MATCH_VENDOR_FAM_MODEL(CENTAUR, 6, X86_CENTAUR_FAM6_C7_D, NULL),
X86_MATCH_VENDOR_FAM_MODEL(CENTAUR, 6, X86_CENTAUR_FAM6_NANO, NULL),
- X86_MATCH_VENDOR_FAM_MODEL(CENTAUR, 7, X86_MODEL_ANY, NULL),
{}
};
MODULE_DEVICE_TABLE(x86cpu, cputemp_ids);
diff --git a/drivers/hwmon/zhaoxin-cputemp.c b/drivers/hwmon/zhaoxin-cputemp.c
new file mode 100644
index 000000000000..dcedb51515f0
--- /dev/null
+++ b/drivers/hwmon/zhaoxin-cputemp.c
@@ -0,0 +1,292 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * zhaoxin-cputemp.c - Driver for Zhaoxin CPU core temperature monitoring
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/hwmon.h>
+#include <linux/hwmon-vid.h>
+#include <linux/sysfs.h>
+#include <linux/hwmon-sysfs.h>
+#include <linux/err.h>
+#include <linux/mutex.h>
+#include <linux/list.h>
+#include <linux/platform_device.h>
+#include <linux/cpu.h>
+#include <asm/msr.h>
+#include <asm/processor.h>
+#include <asm/cpu_device_id.h>
+
+#define DRVNAME "zhaoxin_cputemp"
+
+enum { SHOW_TEMP, SHOW_LABEL, SHOW_NAME };
+
+/* Functions declaration */
+
+struct zhaoxin_cputemp_data {
+ struct device *hwmon_dev;
+ const char *name;
+ u8 vrm;
+ u32 id;
+ u32 msr_temp;
+ u32 msr_vid;
+};
+
+/* Sysfs stuff */
+
+static ssize_t name_show(struct device *dev, struct device_attribute *devattr,
+ char *buf)
+{
+ int ret;
+ struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
+ struct zhaoxin_cputemp_data *data = dev_get_drvdata(dev);
+
+ if (attr->index == SHOW_NAME)
+ ret = sprintf(buf, "%s\n", data->name);
+ else /* show label */
+ ret = sprintf(buf, "Core %d\n", data->id);
+ return ret;
+}
+
+static ssize_t temp_show(struct device *dev, struct device_attribute *devattr, char *buf)
+{
+ struct zhaoxin_cputemp_data *data = dev_get_drvdata(dev);
+ u32 eax, edx;
+ int err;
+
+ err = rdmsr_safe_on_cpu(data->id, data->msr_temp, &eax, &edx);
+ if (err)
+ return -EAGAIN;
+
+ return sprintf(buf, "%lu\n", ((unsigned long)eax & 0xffffff) * 1000);
+}
+
+static ssize_t cpu0_vid_show(struct device *dev, struct device_attribute *devattr, char *buf)
+{
+ struct zhaoxin_cputemp_data *data = dev_get_drvdata(dev);
+ u32 eax, edx;
+ int err;
+
+ err = rdmsr_safe_on_cpu(data->id, data->msr_vid, &eax, &edx);
+ if (err)
+ return -EAGAIN;
+
+ return sprintf(buf, "%d\n", vid_from_reg(~edx & 0x7f, data->vrm));
+}
+
+static SENSOR_DEVICE_ATTR_RO(temp1_input, temp, SHOW_TEMP);
+static SENSOR_DEVICE_ATTR_RO(temp1_label, name, SHOW_LABEL);
+static SENSOR_DEVICE_ATTR_RO(name, name, SHOW_NAME);
+
+static struct attribute *zhaoxin_cputemp_attributes[] = {
+ &sensor_dev_attr_name.dev_attr.attr,
+ &sensor_dev_attr_temp1_label.dev_attr.attr,
+ &sensor_dev_attr_temp1_input.dev_attr.attr,
+ NULL
+};
+
+static const struct attribute_group zhaoxin_cputemp_group = {
+ .attrs = zhaoxin_cputemp_attributes,
+};
+
+/* Optional attributes */
+static DEVICE_ATTR_RO(cpu0_vid);
+
+static int zhaoxin_cputemp_probe(struct platform_device *pdev)
+{
+ struct zhaoxin_cputemp_data *data;
+ int err;
+ u32 eax, edx;
+
+ data = devm_kzalloc(&pdev->dev, sizeof(struct zhaoxin_cputemp_data), GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+
+ data->id = pdev->id;
+ data->name = "zhaoxin_cputemp";
+ data->msr_temp = 0x1423;
+
+ /* test if we can access the TEMPERATURE MSR */
+ err = rdmsr_safe_on_cpu(data->id, data->msr_temp, &eax, &edx);
+ if (err) {
+ dev_err(&pdev->dev, "Unable to access TEMPERATURE MSR, giving up\n");
+ return err;
+ }
+
+ platform_set_drvdata(pdev, data);
+
+ err = sysfs_create_group(&pdev->dev.kobj, &zhaoxin_cputemp_group);
+ if (err)
+ return err;
+
+ if (data->msr_vid)
+ data->vrm = vid_which_vrm();
+
+ if (data->vrm) {
+ err = device_create_file(&pdev->dev, &dev_attr_cpu0_vid);
+ if (err)
+ goto exit_remove;
+ }
+
+ data->hwmon_dev = hwmon_device_register(&pdev->dev);
+ if (IS_ERR(data->hwmon_dev)) {
+ err = PTR_ERR(data->hwmon_dev);
+ dev_err(&pdev->dev, "Class registration failed (%d)\n", err);
+ goto exit_remove;
+ }
+
+ return 0;
+
+exit_remove:
+ if (data->vrm)
+ device_remove_file(&pdev->dev, &dev_attr_cpu0_vid);
+ sysfs_remove_group(&pdev->dev.kobj, &zhaoxin_cputemp_group);
+ return err;
+}
+
+static int zhaoxin_cputemp_remove(struct platform_device *pdev)
+{
+ struct zhaoxin_cputemp_data *data = platform_get_drvdata(pdev);
+
+ hwmon_device_unregister(data->hwmon_dev);
+ if (data->vrm)
+ device_remove_file(&pdev->dev, &dev_attr_cpu0_vid);
+ sysfs_remove_group(&pdev->dev.kobj, &zhaoxin_cputemp_group);
+ return 0;
+}
+
+static struct platform_driver zhaoxin_cputemp_driver = {
+ .driver = {
+ .name = DRVNAME,
+ },
+ .probe = zhaoxin_cputemp_probe,
+ .remove = zhaoxin_cputemp_remove,
+};
+
+struct pdev_entry {
+ struct list_head list;
+ struct platform_device *pdev;
+ unsigned int cpu;
+};
+
+static LIST_HEAD(pdev_list);
+static DEFINE_MUTEX(pdev_list_mutex);
+
+static int zhaoxin_cputemp_online(unsigned int cpu)
+{
+ int err;
+ struct platform_device *pdev;
+ struct pdev_entry *pdev_entry;
+
+ pdev = platform_device_alloc(DRVNAME, cpu);
+ if (!pdev) {
+ err = -ENOMEM;
+ pr_err("Device allocation failed\n");
+ goto exit;
+ }
+
+ pdev_entry = kzalloc(sizeof(struct pdev_entry), GFP_KERNEL);
+ if (!pdev_entry) {
+ err = -ENOMEM;
+ goto exit_device_put;
+ }
+
+ err = platform_device_add(pdev);
+ if (err) {
+ pr_err("Device addition failed (%d)\n", err);
+ goto exit_device_free;
+ }
+
+ pdev_entry->pdev = pdev;
+ pdev_entry->cpu = cpu;
+ mutex_lock(&pdev_list_mutex);
+ list_add_tail(&pdev_entry->list, &pdev_list);
+ mutex_unlock(&pdev_list_mutex);
+
+ return 0;
+
+exit_device_free:
+ kfree(pdev_entry);
+exit_device_put:
+ platform_device_put(pdev);
+exit:
+ return err;
+}
+
+static int zhaoxin_cputemp_down_prep(unsigned int cpu)
+{
+ struct pdev_entry *p;
+
+ mutex_lock(&pdev_list_mutex);
+ list_for_each_entry(p, &pdev_list, list) {
+ if (p->cpu == cpu) {
+ platform_device_unregister(p->pdev);
+ list_del(&p->list);
+ mutex_unlock(&pdev_list_mutex);
+ kfree(p);
+ return 0;
+ }
+ }
+ mutex_unlock(&pdev_list_mutex);
+ return 0;
+}
+
+static const struct x86_cpu_id __initconst cputemp_ids[] = {
+ X86_MATCH_VENDOR_FAM_MODEL(CENTAUR, 7, X86_MODEL_ANY, NULL),
+ X86_MATCH_VENDOR_FAM_MODEL(ZHAOXIN, 7, X86_MODEL_ANY, NULL),
+ {}
+};
+MODULE_DEVICE_TABLE(x86cpu, cputemp_ids);
+
+static enum cpuhp_state zhaoxin_temp_online;
+
+static int __init zhaoxin_cputemp_init(void)
+{
+ int err;
+
+ if (!x86_match_cpu(cputemp_ids))
+ return -ENODEV;
+
+ err = platform_driver_register(&zhaoxin_cputemp_driver);
+ if (err)
+ goto exit;
+
+ err = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "hwmon/zhaoxin:online",
+ zhaoxin_cputemp_online, zhaoxin_cputemp_down_prep);
+ if (err < 0)
+ goto exit_driver_unreg;
+ zhaoxin_temp_online = err;
+
+#ifndef CONFIG_HOTPLUG_CPU
+ if (list_empty(&pdev_list)) {
+ err = -ENODEV;
+ goto exit_hp_unreg;
+ }
+#endif
+ return 0;
+
+#ifndef CONFIG_HOTPLUG_CPU
+exit_hp_unreg:
+ cpuhp_remove_state_nocalls(zhaoxin_temp_online);
+#endif
+exit_driver_unreg:
+ platform_driver_unregister(&zhaoxin_cputemp_driver);
+exit:
+ return err;
+}
+
+static void __exit zhaoxin_cputemp_exit(void)
+{
+ cpuhp_remove_state(zhaoxin_temp_online);
+ platform_driver_unregister(&zhaoxin_cputemp_driver);
+}
+
+MODULE_DESCRIPTION("Zhaoxin CPU temperature monitor");
+MODULE_LICENSE("GPL");
+
+module_init(zhaoxin_cputemp_init)
+module_exit(zhaoxin_cputemp_exit)
--
2.20.1
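The driver's temp_show() reports `((unsigned long)eax & 0xffffff) * 1000`, i.e. it takes the low 24 bits of the MSR read as degrees Celsius and scales to the millidegree unit that hwmon's `temp1_input` convention expects. A minimal user-space sketch of just that decode step (the mask and scale mirror the patch; the MSR layout itself is an assumption taken from the driver, not a published spec):

```c
#include <assert.h>

/* Decode the low 24 bits of the temperature MSR value (as read into EAX)
 * into millidegrees Celsius, the unit hwmon's temp1_input reports. */
static unsigned long zhaoxin_temp_mdegc(unsigned int eax)
{
    return ((unsigned long)eax & 0xffffff) * 1000;
}
```

So a raw read of 45 in the low bits surfaces as 45000 in sysfs, and any bits above bit 23 are ignored.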

13 Apr '22
From: Halil Pasic <pasic(a)linux.ibm.com>
mainline inclusion
from mainline-v5.17-rc6
commit ddbd89deb7d32b1fbb879f48d68fda1a8ac58e8e
category: bugfix
bugzilla: 186478, https://gitee.com/src-openeuler/kernel/issues/I503W4
CVE: CVE-2022-0854
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
The problem I'm addressing was discovered by the LTP test covering
cve-2018-1000204.
A short description of what happens follows:
1) The test case issues a command code 00 (TEST UNIT READY) via the SG_IO
interface with: dxfer_len == 524288, dxfer_dir == SG_DXFER_FROM_DEV
and a corresponding dxferp. The peculiar thing about this is that TUR
is not reading from the device.
2) In sg_start_req() the invocation of blk_rq_map_user() effectively
bounces the user-space buffer. As if the device was to transfer into
it. Since commit a45b599ad808 ("scsi: sg: allocate with __GFP_ZERO in
sg_build_indirect()") we make sure this first bounce buffer is
allocated with GFP_ZERO.
3) For the rest of the story we keep ignoring that we have a TUR, so the
device won't touch the buffer we prepare, as though we had a
DMA_FROM_DEVICE type of situation. My setup uses a virtio-scsi device
and the buffer allocated by SG is mapped by the function
virtqueue_add_split() which uses DMA_FROM_DEVICE for the "in" sgs (here
scatter-gather and not scsi generics). This mapping involves bouncing
via the swiotlb (we need swiotlb to do virtio in protected guest like
s390 Secure Execution, or AMD SEV).
4) When the SCSI TUR is done, we first copy back the content of the second
(that is swiotlb) bounce buffer (which most likely contains some
previous IO data), to the first bounce buffer, which contains all
zeros. Then we copy back the content of the first bounce buffer to
the user-space buffer.
5) The test case detects that the buffer, which it zero-initialized,
is not all zeros, and fails.
One can argue that this is an swiotlb problem, because without swiotlb
we leak all zeros, and the swiotlb should be transparent in a sense that
it does not affect the outcome (if all other participants are well
behaved).
Copying the content of the original buffer into the swiotlb buffer is
the only way I can think of to make swiotlb transparent in such
scenarios. So let's do just that if in doubt, but allow the driver
to tell us that the whole mapped buffer is going to be overwritten,
in which case we can preserve the old behavior and avoid the performance
impact of the extra bounce.
Signed-off-by: Halil Pasic <pasic(a)linux.ibm.com>
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Conflicts:
kernel/dma/swiotlb.c
Signed-off-by: Liu Shixin <liushixin2(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Reviewed-by: Weilong Chen <chenweilong(a)huawei.com>
---
Documentation/core-api/dma-attributes.rst | 8 ++++++++
include/linux/dma-mapping.h | 8 ++++++++
kernel/dma/swiotlb.c | 3 ++-
3 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/Documentation/core-api/dma-attributes.rst b/Documentation/core-api/dma-attributes.rst
index 1887d92e8e92..17706dc91ec9 100644
--- a/Documentation/core-api/dma-attributes.rst
+++ b/Documentation/core-api/dma-attributes.rst
@@ -130,3 +130,11 @@ accesses to DMA buffers in both privileged "supervisor" and unprivileged
subsystem that the buffer is fully accessible at the elevated privilege
level (and ideally inaccessible or at least read-only at the
lesser-privileged levels).
+
+DMA_ATTR_OVERWRITE
+------------------
+
+This is a hint to the DMA-mapping subsystem that the device is expected to
+overwrite the entire mapped size, thus the caller does not require any of the
+previous buffer contents to be preserved. This allows bounce-buffering
+implementations to optimise DMA_FROM_DEVICE transfers.
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index a7d70cdee25e..a9361178c5db 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -61,6 +61,14 @@
*/
#define DMA_ATTR_PRIVILEGED (1UL << 9)
+/*
+ * This is a hint to the DMA-mapping subsystem that the device is expected
+ * to overwrite the entire mapped size, thus the caller does not require any
+ * of the previous buffer contents to be preserved. This allows
+ * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
+ */
+#define DMA_ATTR_OVERWRITE (1UL << 10)
+
/*
* A dma_addr_t can hold any valid DMA or bus address for the platform. It can
* be given to a device to use as a DMA source or target. It is specific to a
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 0ed0e1f215c7..62b1e5fa8673 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -598,7 +598,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
tlb_addr = slot_addr(io_tlb_start, index) + offset;
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
- (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
+ (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
+ dir == DMA_BIDIRECTIONAL))
swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
return tlb_addr;
}
--
2.20.1
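The patched condition in swiotlb_tbl_map_single() can be read as a standalone predicate: on map, bounce the original buffer's contents into the swiotlb slot unless the caller both skips CPU sync or maps DMA_FROM_DEVICE *and* promises via DMA_ATTR_OVERWRITE that the device will overwrite the whole mapping. A user-space mirror of that logic (the enum ordering and attribute bit positions follow the kernel headers as quoted in the diff; treat them as assumptions here):

```c
#include <assert.h>
#include <stdbool.h>

enum dma_data_direction { DMA_BIDIRECTIONAL, DMA_TO_DEVICE, DMA_FROM_DEVICE };

#define DMA_ATTR_SKIP_CPU_SYNC (1UL << 5)
#define DMA_ATTR_OVERWRITE     (1UL << 10)

/* Mirror of the patched predicate: copy the original buffer into the
 * bounce slot on map unless the caller explicitly opts out. Without
 * DMA_ATTR_OVERWRITE, even DMA_FROM_DEVICE mappings are bounced, which
 * is what keeps swiotlb transparent for devices that don't write the
 * whole buffer (the TUR case in the commit message). */
static bool must_bounce_on_map(unsigned long attrs, enum dma_data_direction dir)
{
    if (attrs & DMA_ATTR_SKIP_CPU_SYNC)
        return false;
    return !(attrs & DMA_ATTR_OVERWRITE) ||
           dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL;
}
```

The default (no attributes) therefore preserves the old data-leak-free behavior, and only drivers that set DMA_ATTR_OVERWRITE skip the extra bounce on DMA_FROM_DEVICE maps.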


[PATCH openEuler-1.0-LTS 01/62] PM: wakeup: simplify the output logic of pm_show_wakelocks()
by Laibin Qiu 12 Apr '22
From: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
stable inclusion
from linux-4.19.228
commit cf94a8a3ff3d7aa06742f1beb5166fa45ba72122
--------------------------------
commit c9d967b2ce40d71e968eb839f36c936b8a9cf1ea upstream.
The buffer handling in pm_show_wakelocks() is tricky, and hopefully
correct. Ensure it really is correct by using sysfs_emit_at() which
handles all of the tricky string handling logic in a PAGE_SIZE buffer
for us automatically as this is a sysfs file being read from.
Reviewed-by: Lee Jones <lee.jones(a)linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yongqiang Liu <liuyongqiang13(a)huawei.com>
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
---
kernel/power/wakelock.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/kernel/power/wakelock.c b/kernel/power/wakelock.c
index 4210152e56f0..aad7c8fcb22f 100644
--- a/kernel/power/wakelock.c
+++ b/kernel/power/wakelock.c
@@ -39,23 +39,19 @@ ssize_t pm_show_wakelocks(char *buf, bool show_active)
{
struct rb_node *node;
struct wakelock *wl;
- char *str = buf;
- char *end = buf + PAGE_SIZE;
+ int len = 0;
mutex_lock(&wakelocks_lock);
for (node = rb_first(&wakelocks_tree); node; node = rb_next(node)) {
wl = rb_entry(node, struct wakelock, node);
if (wl->ws.active == show_active)
- str += scnprintf(str, end - str, "%s ", wl->name);
+ len += sysfs_emit_at(buf, len, "%s ", wl->name);
}
- if (str > buf)
- str--;
-
- str += scnprintf(str, end - str, "\n");
+ len += sysfs_emit_at(buf, len, "\n");
mutex_unlock(&wakelocks_lock);
- return (str - buf);
+ return len;
}
#if CONFIG_PM_WAKELOCKS_LIMIT > 0
--
2.22.0
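The patch replaces open-coded pointer arithmetic (`str`, `end`, the trailing-space back-up) with sysfs_emit_at(), which takes a byte offset into the PAGE_SIZE sysfs buffer and returns the number of characters appended. A user-space stand-in shows the offset-based pattern; `emit_at` is a hypothetical simplification, not the real sysfs_emit_at() (which also enforces page alignment and uses scnprintf semantics):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Append `s` at byte offset `at` inside a PAGE_SIZE buffer and return the
 * number of characters added. The bounds check is the part the open-coded
 * str/end arithmetic in the old code had to get right by hand. */
static int emit_at(char *buf, int at, const char *s)
{
    if (at < 0 || at >= PAGE_SIZE)
        return 0;
    return snprintf(buf + at, (size_t)(PAGE_SIZE - at), "%s", s);
}
```

The caller just accumulates `len += emit_at(buf, len, ...)` per entry, exactly as the rewritten pm_show_wakelocks() does, and returns `len` at the end.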

[PATCH OLK-5.10] xhci: Fix a logic issue when display Zhaoxin XHCI root hub speed
by LeoLiuoc 11 Apr '22
Fix a logic issue when displaying the Zhaoxin XHCI root hub speed.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/usb/host/xhci.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index 7f1e5296d0f6..71dcc1ba73ac 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -5261,10 +5261,10 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
if (XHCI_EXT_PORT_PSIV(xhci->port_caps[j].psi[i]) >= 5)
minor_rev = 1;
}
- if (minor_rev != 1) {
- hcd->speed = HCD_USB3;
- hcd->self.root_hub->speed = USB_SPEED_SUPER;
- }
+ }
+ if (minor_rev != 1) {
+ hcd->speed = HCD_USB3;
+ hcd->self.root_hub->speed = USB_SPEED_SUPER;
}
}
--
2.20.1
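The diff moves the `minor_rev != 1` downgrade out of the inner loop: the decision whether any port speed ID indicates SuperSpeedPlus must be made only after *all* PSIs have been scanned, otherwise an early iteration can downgrade the root hub before a later PSI sets minor_rev. A toy version of the corrected control flow (function and parameter names are illustrative; only the PSIV >= 5 rule comes from the patch context):

```c
#include <assert.h>

/* Decide SuperSpeedPlus support only after scanning every port speed ID,
 * mirroring the brace move in the patch: the check now runs once, after
 * the loop, instead of on every iteration. */
static int root_hub_is_super_plus(const int *psiv, int n)
{
    int minor_rev = 0;
    for (int i = 0; i < n; i++)
        if (psiv[i] >= 5)
            minor_rev = 1;
    return minor_rev;
}
```

With the check inside the loop, an input like {3, 4, 5} could be misjudged on the first iteration; after the loop it is correctly treated as SuperSpeedPlus.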

[PATCH openEuler-1.0-LTS 1/3] printk: Convert a use of sprintf to snprintf in console_unlock
by Laibin Qiu 11 Apr '22
From: Nathan Chancellor <natechancellor(a)gmail.com>
mainline inclusion
from mainline-v5.8-rc1
commit 5661dd95a2958634485bb1a53f90a6ab621d4b0c
category: bugfix
bugzilla: 91291
CVE: N/A
--------------------------------
When CONFIG_PRINTK is disabled (e.g. when building allnoconfig), clang
warns:
../kernel/printk/printk.c:2416:10: warning: 'sprintf' will always
overflow; destination buffer has size 0, but format string expands to at
least 33 [-Wfortify-source]
len = sprintf(text,
^
1 warning generated.
It is not wrong; text has a zero size when CONFIG_PRINTK is disabled
because LOG_LINE_MAX and PREFIX_MAX are both zero. Change to snprintf so
that this case is explicitly handled without any risk of overflow.
Link: https://github.com/ClangBuiltLinux/linux/issues/846
Link: https://github.com/llvm/llvm-project/commit/6d485ff455ea2b37fef9e06e426dae6…
Link: http://lkml.kernel.org/r/20200130221644.2273-1-natechancellor@gmail.com
Cc: Steven Rostedt <rostedt(a)goodmis.org>
Cc: linux-kernel(a)vger.kernel.org
Cc: clang-built-linux(a)googlegroups.com
Signed-off-by: Nathan Chancellor <natechancellor(a)gmail.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky(a)gmail.com>
Signed-off-by: Petr Mladek <pmladek(a)suse.com>
Signed-off-by: Yi Yang <yiyang13(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
---
kernel/printk/printk.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 0fe45941b5c7..c645a7221d0b 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -2423,9 +2423,9 @@ void console_unlock(void)
printk_safe_enter_irqsave(flags);
raw_spin_lock(&logbuf_lock);
if (console_seq < log_first_seq) {
- len = sprintf(text,
- "** %llu printk messages dropped **\n",
- log_first_seq - console_seq);
+ len = snprintf(text, sizeof(text),
+ "** %llu printk messages dropped **\n",
+ log_first_seq - console_seq);
/* messages are gone, move to first one */
console_seq = log_first_seq;
--
2.22.0
Fix CWE-404 bug: Leak of memory or pointers to system resources

[PATCH openEuler-5.10 01/28] mm: share_pool: adjust sp_make_share_k2u behavior when coredump
by Zheng Zengkai 07 Apr '22
From: Guo Mengqi <guomengqi3(a)huawei.com>
ascend inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MUV2
CVE: NA
When k2u is executed on the whole sharepool group
and one process coredumps, k2u will skip the coredumped process and
continue with the remaining processes in the group.
Signed-off-by: Guo Mengqi <guomengqi3(a)huawei.com>
Reviewed-by: Weilong Chen <chenweilong(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Weilong Chen <chenweilong(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
mm/share_pool.c | 50 +++++++++++++++++++++++++++++++------------------
1 file changed, 32 insertions(+), 18 deletions(-)
diff --git a/mm/share_pool.c b/mm/share_pool.c
index 494a829d6f3a..f18bcd188027 100644
--- a/mm/share_pool.c
+++ b/mm/share_pool.c
@@ -666,8 +666,25 @@ static unsigned long sp_mmap(struct mm_struct *mm, struct file *file,
struct sp_area *spa, unsigned long *populate,
unsigned long prot);
static void sp_munmap(struct mm_struct *mm, unsigned long addr, unsigned long size);
+
+#define K2U_NORMAL 0
+#define K2U_COREDUMP 1
+
+struct sp_k2u_context {
+ unsigned long kva;
+ unsigned long kva_aligned;
+ unsigned long size;
+ unsigned long size_aligned;
+ unsigned long sp_flags;
+ int state;
+ int spg_id;
+ bool to_task;
+ struct timespec64 start;
+ struct timespec64 end;
+};
+
static unsigned long sp_remap_kva_to_vma(unsigned long kva, struct sp_area *spa,
- struct mm_struct *mm, unsigned long prot);
+ struct mm_struct *mm, unsigned long prot, struct sp_k2u_context *kc);
static void free_sp_group_id(int spg_id)
{
@@ -1313,7 +1330,7 @@ int mg_sp_group_add_task(int pid, unsigned long prot, int spg_id)
spin_unlock(&sp_area_lock);
if (spa->type == SPA_TYPE_K2SPG && spa->kva) {
- addr = sp_remap_kva_to_vma(spa->kva, spa, mm, prot);
+ addr = sp_remap_kva_to_vma(spa->kva, spa, mm, prot, NULL);
if (IS_ERR_VALUE(addr))
pr_warn("add group remap k2u failed %ld\n", addr);
@@ -2586,7 +2603,7 @@ static unsigned long __sp_remap_get_pfn(unsigned long kva)
/* when called by k2u to group, always make sure rw_lock of spg is down */
static unsigned long sp_remap_kva_to_vma(unsigned long kva, struct sp_area *spa,
- struct mm_struct *mm, unsigned long prot)
+ struct mm_struct *mm, unsigned long prot, struct sp_k2u_context *kc)
{
struct vm_area_struct *vma;
unsigned long ret_addr;
@@ -2598,6 +2615,8 @@ static unsigned long sp_remap_kva_to_vma(unsigned long kva, struct sp_area *spa,
if (unlikely(mm->core_state)) {
pr_err("k2u mmap: encountered coredump, abort\n");
ret_addr = -EBUSY;
+ if (kc)
+ kc->state = K2U_COREDUMP;
goto put_mm;
}
@@ -2683,7 +2702,7 @@ static void *sp_make_share_kva_to_task(unsigned long kva, unsigned long size, un
spa->kva = kva;
- uva = (void *)sp_remap_kva_to_vma(kva, spa, current->mm, prot);
+ uva = (void *)sp_remap_kva_to_vma(kva, spa, current->mm, prot, NULL);
__sp_area_drop(spa);
if (IS_ERR(uva))
pr_err("remap k2u to task failed %ld\n", PTR_ERR(uva));
@@ -2711,6 +2730,8 @@ static void *sp_make_share_kva_to_spg(unsigned long kva, unsigned long size,
struct mm_struct *mm;
struct sp_group_node *spg_node;
void *uva = ERR_PTR(-ENODEV);
+ struct sp_k2u_context kc;
+ unsigned long ret_addr = -ENODEV;
down_read(&spg->rw_lock);
spa = sp_alloc_area(size, sp_flags, spg, SPA_TYPE_K2SPG, current->tgid);
@@ -2725,12 +2746,17 @@ static void *sp_make_share_kva_to_spg(unsigned long kva, unsigned long size,
list_for_each_entry(spg_node, &spg->procs, proc_node) {
mm = spg_node->master->mm;
- uva = (void *)sp_remap_kva_to_vma(kva, spa, mm, spg_node->prot);
- if (IS_ERR(uva)) {
+ kc.state = K2U_NORMAL;
+ ret_addr = sp_remap_kva_to_vma(kva, spa, mm, spg_node->prot, &kc);
+ if (IS_ERR_VALUE(ret_addr)) {
+ if (kc.state == K2U_COREDUMP)
+ continue;
+ uva = (void *)ret_addr;
pr_err("remap k2u to spg failed %ld\n", PTR_ERR(uva));
__sp_free(spg, spa->va_start, spa_size(spa), mm);
goto out;
}
+ uva = (void *)ret_addr;
}
out:
@@ -2755,18 +2781,6 @@ static bool vmalloc_area_set_flag(unsigned long kva, unsigned long flags)
return false;
}
-struct sp_k2u_context {
- unsigned long kva;
- unsigned long kva_aligned;
- unsigned long size;
- unsigned long size_aligned;
- unsigned long sp_flags;
- int spg_id;
- bool to_task;
- struct timespec64 start;
- struct timespec64 end;
-};
-
static void trace_sp_k2u_begin(struct sp_k2u_context *kc)
{
if (!sysctl_sp_perf_k2u)
--
2.20.1
Hello everyone, the openEuler Developer Conference Kernel SIG working meeting will be held on the afternoon of April 14. Everyone is warmly invited to attend, and topics are being solicited here. Topics are not limited to requirement discussions, community governance, issue feedback, and so on.
To submit a topic, please fill it in at the following link:
https://etherpad.openeuler.org/p/sig-kernel-22.09-planning
Thank you all for your support.
From: Yi Li <yili(a)winhong.com>
Some distributions are about to switch to Python 3 support only.
This means that /usr/bin/python, which is Python 2, is not available
anymore. Hence, switch scripts to use Python 3 explicitly.
*** ERROR: ambiguous python shebang in ***/power/pm-graph/sleepgraph.py:
#!/usr/bin/python. Change it to python3 (or python2) explicitly.
*** ERROR: ambiguous python shebang in ***/power/pm-graph/bootgraph.py:
#!/usr/bin/python. Change it to python3 (or python2) explicitly.
*** ERROR: ambiguous python shebang in ***/intel_pstate_tracer.py:
#!/usr/bin/env python. Change it to python3 (or python2) explicitly.
Use a sed command to fix all the Python files.
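The commit message does not show the sed invocation itself, so the following is only one plausible form of it (the temp file and expression are illustrative): rewrite a trailing `python` on the shebang line to `python3`, leaving the `env` form and the `#!` / `#! ` spacing intact.

```shell
# Illustrative: fix an ambiguous shebang in place with sed.
f=$(mktemp)
printf '#!/usr/bin/env python\nprint("hello")\n' > "$f"

# Only touch line 1, and only a trailing "python".
sed -i '1s/python$/python3/' "$f"

head -1 "$f"   # -> #!/usr/bin/env python3
rm -f "$f"
```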
Signed-off-by: Yi Li <yili(a)winhong.com>
---
Documentation/networking/device_drivers/atm/cxacru-cf.py | 2 +-
Documentation/sphinx/maintainers_include.py | 2 +-
Documentation/target/tcm_mod_builder.py | 2 +-
Documentation/trace/postprocess/decode_msr.py | 2 +-
arch/ia64/scripts/unwcheck.py | 2 +-
drivers/staging/greybus/tools/lbtest | 2 +-
scripts/jobserver-exec | 2 +-
scripts/show_delta | 2 +-
scripts/spdxcheck.py | 2 +-
scripts/tracing/draw_functrace.py | 2 +-
tools/hv/lsvmbus | 2 +-
tools/perf/python/tracepoint.py | 2 +-
tools/perf/python/twatch.py | 2 +-
tools/perf/scripts/python/exported-sql-viewer.py | 2 +-
tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py | 2 +-
.../selftests/drivers/net/mlxsw/sharedbuffer_configuration.py | 2 +-
16 files changed, 16 insertions(+), 16 deletions(-)
diff --git a/Documentation/networking/device_drivers/atm/cxacru-cf.py b/Documentation/networking/device_drivers/atm/cxacru-cf.py
index b41d298398c8..1b960fb45982 100644
--- a/Documentation/networking/device_drivers/atm/cxacru-cf.py
+++ b/Documentation/networking/device_drivers/atm/cxacru-cf.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# Copyright 2009 Simon Arlott
#
# This program is free software; you can redistribute it and/or modify it
diff --git a/Documentation/sphinx/maintainers_include.py b/Documentation/sphinx/maintainers_include.py
index dc8fed48d3c2..5b7cb116886b 100755
--- a/Documentation/sphinx/maintainers_include.py
+++ b/Documentation/sphinx/maintainers_include.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
# -*- coding: utf-8; mode: python -*-
# pylint: disable=R0903, C0330, R0914, R0912, E0401
diff --git a/Documentation/target/tcm_mod_builder.py b/Documentation/target/tcm_mod_builder.py
index 54492aa813b9..92b740465e73 100755
--- a/Documentation/target/tcm_mod_builder.py
+++ b/Documentation/target/tcm_mod_builder.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# The TCM v4 multi-protocol fabric module generation script for drivers/target/$NEW_MOD
#
# Copyright (c) 2010 Rising Tide Systems
diff --git a/Documentation/trace/postprocess/decode_msr.py b/Documentation/trace/postprocess/decode_msr.py
index aa9cc7abd5c2..009122b700a6 100644
--- a/Documentation/trace/postprocess/decode_msr.py
+++ b/Documentation/trace/postprocess/decode_msr.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# add symbolic names to read_msr / write_msr in trace
# decode_msr msr-index.h < trace
import sys
diff --git a/arch/ia64/scripts/unwcheck.py b/arch/ia64/scripts/unwcheck.py
index bfd1b671e35f..9581742f0db2 100644
--- a/arch/ia64/scripts/unwcheck.py
+++ b/arch/ia64/scripts/unwcheck.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
#
# Usage: unwcheck.py FILE
diff --git a/drivers/staging/greybus/tools/lbtest b/drivers/staging/greybus/tools/lbtest
index 47c481239e98..4e6fcfbd815a 100755
--- a/drivers/staging/greybus/tools/lbtest
+++ b/drivers/staging/greybus/tools/lbtest
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# SPDX-License-Identifier: BSD-3-Clause
# Copyright (c) 2015 Google, Inc.
diff --git a/scripts/jobserver-exec b/scripts/jobserver-exec
index 0fdb31a790a8..48d141e3ec56 100755
--- a/scripts/jobserver-exec
+++ b/scripts/jobserver-exec
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0+
#
# This determines how many parallel tasks "make" is expecting, as it is
diff --git a/scripts/show_delta b/scripts/show_delta
index 28e67e178194..4660a988b2ad 100755
--- a/scripts/show_delta
+++ b/scripts/show_delta
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0-only
#
# show_deltas: Read list of printk messages instrumented with
diff --git a/scripts/spdxcheck.py b/scripts/spdxcheck.py
index bc87200f9c7c..cbdb5c83c08f 100755
--- a/scripts/spdxcheck.py
+++ b/scripts/spdxcheck.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
# Copyright Thomas Gleixner <tglx(a)linutronix.de>
diff --git a/scripts/tracing/draw_functrace.py b/scripts/tracing/draw_functrace.py
index 7011fbe003ff..025a5c8106a6 100755
--- a/scripts/tracing/draw_functrace.py
+++ b/scripts/tracing/draw_functrace.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0-only
"""
diff --git a/tools/hv/lsvmbus b/tools/hv/lsvmbus
index 099f2c44dbed..f83698f14da2 100644
--- a/tools/hv/lsvmbus
+++ b/tools/hv/lsvmbus
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
import os
diff --git a/tools/perf/python/tracepoint.py b/tools/perf/python/tracepoint.py
index 461848c7f57d..640c740f460a 100755
--- a/tools/perf/python/tracepoint.py
+++ b/tools/perf/python/tracepoint.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
# -*- python -*-
# -*- coding: utf-8 -*-
diff --git a/tools/perf/python/twatch.py b/tools/perf/python/twatch.py
index 04f3db29b9bc..56ed003a1d97 100755
--- a/tools/perf/python/twatch.py
+++ b/tools/perf/python/twatch.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0-only
# -*- python -*-
# -*- coding: utf-8 -*-
diff --git a/tools/perf/scripts/python/exported-sql-viewer.py b/tools/perf/scripts/python/exported-sql-viewer.py
index 711d4f9f5645..2768fc8f6141 100755
--- a/tools/perf/scripts/python/exported-sql-viewer.py
+++ b/tools/perf/scripts/python/exported-sql-viewer.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
# exported-sql-viewer.py: view data from sql database
# Copyright (c) 2014-2018, Intel Corporation.
diff --git a/tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py b/tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py
index e15e20696d17..a14953f61e5d 100755
--- a/tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py
+++ b/tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0-only
# -*- coding: utf-8 -*-
#
diff --git a/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer_configuration.py b/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer_configuration.py
index 2223337eed0c..bdacd8eb00f3 100755
--- a/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer_configuration.py
+++ b/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer_configuration.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
import subprocess
--
2.25.3

[PATCH openEuler-1.0-LTS] serial: 8250: Fix max baud limit in generic 8250 port
by Laibin Qiu 02 Apr '22
From: Serge Semin <Sergey.Semin(a)baikalelectronics.ru>
stable inclusion
from linux-4.19.130
commit 0eeaf62981ecc79e8395ca8caa1570eaf3a12257
category: bugfix
bugzilla: 89545
CVE: N/A
------------------------------------------------
[ Upstream commit 7b668c064ec33f3d687c3a413d05e355172e6c92 ]
Standard 8250 UART ports are designed in a way so they can communicate
with baud rates up to 1/16 of a reference frequency. It's expected from
most of the currently supported UART controllers. That's why the former
version of serial8250_get_baud_rate() method called uart_get_baud_rate()
with min and max baud rates passed as (port->uartclk / 16 / UART_DIV_MAX)
and ((port->uartclk + tolerance) / 16) respectively. Doing otherwise, like
it was suggested in commit ("serial: 8250_mtk: support big baud rate."),
caused acceptance of bauds, which was higher than the normal UART
controllers actually supported. As a result if some user-space program
requested to set a baud greater than (uartclk / 16) it would have been
permitted without truncation, but then serial8250_get_divisor(baud)
(which calls uart_get_divisor() to get the reference clock divisor) would
have returned a zero divisor. Setting a zero divisor causes an
unpredictable effect that varies from chip to chip; in the case of the
DW APB UART, communication just stops.
Let's fix this problem by bringing back the (uartclk + tolerance) / 16
limit on the maximum baud supported by the generic 8250 port. The
Mediatek 8250 UART driver shouldn't have touched it in the first place,
especially since that glue driver already provides a custom
set_termios() callback which takes the extended baud rate values into
account and updates the standard and vendor-specific divisor latch
registers accordingly.
Fixes: 81bb549fdf14 ("serial: 8250_mtk: support big baud rate.")
Signed-off-by: Serge Semin <Sergey.Semin(a)baikalelectronics.ru>
Cc: Alexey Malahov <Alexey.Malahov(a)baikalelectronics.ru>
Cc: Thomas Bogendoerfer <tsbogend(a)alpha.franken.de>
Cc: Paul Burton <paulburton(a)kernel.org>
Cc: Ralf Baechle <ralf(a)linux-mips.org>
Cc: Arnd Bergmann <arnd(a)arndb.de>
Cc: Long Cheng <long.cheng(a)mediatek.com>
Cc: Andy Shevchenko <andriy.shevchenko(a)linux.intel.com>
Cc: Maxime Ripard <mripard(a)kernel.org>
Cc: Catalin Marinas <catalin.marinas(a)arm.com>
Cc: Will Deacon <will(a)kernel.org>
Cc: Russell King <linux(a)armlinux.org.uk>
Cc: linux-mips(a)vger.kernel.org
Cc: linux-arm-kernel(a)lists.infradead.org
Cc: linux-mediatek(a)lists.infradead.org
Link: https://lore.kernel.org/r/20200506233136.11842-2-Sergey.Semin@baikalelectro…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Yi Yang <yiyang13(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
---
drivers/tty/serial/8250/8250_port.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
index 1867c2546cd9..3a0604b025e9 100644
--- a/drivers/tty/serial/8250/8250_port.c
+++ b/drivers/tty/serial/8250/8250_port.c
@@ -2647,6 +2647,8 @@ static unsigned int serial8250_get_baud_rate(struct uart_port *port,
struct ktermios *termios,
struct ktermios *old)
{
+ unsigned int tolerance = port->uartclk / 100;
+
/*
* Ask the core to calculate the divisor for us.
* Allow 1% tolerance at the upper limit so uart clks marginally
@@ -2655,7 +2657,7 @@ static unsigned int serial8250_get_baud_rate(struct uart_port *port,
*/
return uart_get_baud_rate(port, termios, old,
port->uartclk / 16 / UART_DIV_MAX,
- port->uartclk);
+ (port->uartclk + tolerance) / 16);
}
void
--
2.22.0
Hi all,
Can anyone help figure out why makecache failed?
[root@localhost yum.repos.d]# yum makecache
OS 3.3 kB/s | 217 B 00:00
Error: Failed to download metadata for repo 'OS': repomd.xml parser error: Parse error at line: 3 (Couldn't find end of Start Tag meta
cat openEuler.repo
# http://license.coscl.org.cn/MulanPSL2
#THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR
#IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR
#PURPOSE.
#See the Mulan PSL v2 for more details.
[OS]
name=OS
baseurl=http://121.36.97.194/openEuler-20.03-LTS-SP3/OS/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://121.36.97.194/openEuler-20.03-LTS-SP3/OS/$basearch/RPM-GPG-KEY-openEuler
Call for topics for the openEuler Developer Conference: the official deadline is 9:00 a.m. on March 30. If you have a topic you would like to share at the conference, please fill it in at the following link. Thank you:
https://shimo.im/forms/7DO3Fg79JSAfaflt/fill
From: Yun Xu <xuyun(a)ramaxel.com>
Ramaxel inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4ZR0O
CVE: NA
------------------------------------
There are some issues in the driver that cannot be fixed now.
The driver does not meet the LTS quality requirements of
openEuler, so remove it.
Signed-off-by: Yun Xu <xuyun(a)ramaxel.com>
Signed-off-by: Yanling Song <songyl(a)ramaxel.com>
Reviewed-by: Yun Xu <xuyun(a)ramaxel.com>
Acked-by: Xie Xiuqi <xiexiuqi(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
arch/arm64/configs/openeuler_defconfig | 1 -
arch/x86/configs/openeuler_defconfig | 1 -
drivers/scsi/Kconfig | 1 -
drivers/scsi/Makefile | 1 -
drivers/scsi/spfc/Kconfig | 16 -
drivers/scsi/spfc/Makefile | 47 -
drivers/scsi/spfc/common/unf_common.h | 1753 -------
drivers/scsi/spfc/common/unf_disc.c | 1276 -----
drivers/scsi/spfc/common/unf_disc.h | 51 -
drivers/scsi/spfc/common/unf_event.c | 517 --
drivers/scsi/spfc/common/unf_event.h | 83 -
drivers/scsi/spfc/common/unf_exchg.c | 2317 ---------
drivers/scsi/spfc/common/unf_exchg.h | 436 --
drivers/scsi/spfc/common/unf_exchg_abort.c | 825 ---
drivers/scsi/spfc/common/unf_exchg_abort.h | 23 -
drivers/scsi/spfc/common/unf_fcstruct.h | 459 --
drivers/scsi/spfc/common/unf_gs.c | 2521 ---------
drivers/scsi/spfc/common/unf_gs.h | 58 -
drivers/scsi/spfc/common/unf_init.c | 353 --
drivers/scsi/spfc/common/unf_io.c | 1219 -----
drivers/scsi/spfc/common/unf_io.h | 96 -
drivers/scsi/spfc/common/unf_io_abnormal.c | 986 ----
drivers/scsi/spfc/common/unf_io_abnormal.h | 19 -
drivers/scsi/spfc/common/unf_log.h | 178 -
drivers/scsi/spfc/common/unf_lport.c | 1008 ----
drivers/scsi/spfc/common/unf_lport.h | 519 --
drivers/scsi/spfc/common/unf_ls.c | 4883 ------------------
drivers/scsi/spfc/common/unf_ls.h | 61 -
drivers/scsi/spfc/common/unf_npiv.c | 1005 ----
drivers/scsi/spfc/common/unf_npiv.h | 47 -
drivers/scsi/spfc/common/unf_npiv_portman.c | 360 --
drivers/scsi/spfc/common/unf_npiv_portman.h | 17 -
drivers/scsi/spfc/common/unf_portman.c | 2431 ---------
drivers/scsi/spfc/common/unf_portman.h | 96 -
drivers/scsi/spfc/common/unf_rport.c | 2286 --------
drivers/scsi/spfc/common/unf_rport.h | 301 --
drivers/scsi/spfc/common/unf_scsi.c | 1462 ------
drivers/scsi/spfc/common/unf_scsi_common.h | 570 --
drivers/scsi/spfc/common/unf_service.c | 1430 -----
drivers/scsi/spfc/common/unf_service.h | 66 -
drivers/scsi/spfc/common/unf_type.h | 216 -
drivers/scsi/spfc/hw/spfc_chipitf.c | 1105 ----
drivers/scsi/spfc/hw/spfc_chipitf.h | 797 ---
drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c | 1611 ------
drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h | 215 -
drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c | 885 ----
drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h | 65 -
drivers/scsi/spfc/hw/spfc_cqm_main.c | 987 ----
drivers/scsi/spfc/hw/spfc_cqm_main.h | 411 --
drivers/scsi/spfc/hw/spfc_cqm_object.c | 937 ----
drivers/scsi/spfc/hw/spfc_cqm_object.h | 279 -
drivers/scsi/spfc/hw/spfc_hba.c | 1751 -------
drivers/scsi/spfc/hw/spfc_hba.h | 341 --
drivers/scsi/spfc/hw/spfc_hw_wqe.h | 1645 ------
drivers/scsi/spfc/hw/spfc_io.c | 1193 -----
drivers/scsi/spfc/hw/spfc_io.h | 138 -
drivers/scsi/spfc/hw/spfc_lld.c | 997 ----
drivers/scsi/spfc/hw/spfc_lld.h | 76 -
drivers/scsi/spfc/hw/spfc_module.h | 297 --
drivers/scsi/spfc/hw/spfc_parent_context.h | 269 -
drivers/scsi/spfc/hw/spfc_queue.c | 4852 -----------------
drivers/scsi/spfc/hw/spfc_queue.h | 711 ---
drivers/scsi/spfc/hw/spfc_service.c | 2170 --------
drivers/scsi/spfc/hw/spfc_service.h | 282 -
drivers/scsi/spfc/hw/spfc_utils.c | 102 -
drivers/scsi/spfc/hw/spfc_utils.h | 202 -
drivers/scsi/spfc/hw/spfc_wqe.c | 646 ---
drivers/scsi/spfc/hw/spfc_wqe.h | 239 -
drivers/scsi/spfc/sphw_api_cmd.c | 1 -
drivers/scsi/spfc/sphw_cmdq.c | 1 -
drivers/scsi/spfc/sphw_common.c | 1 -
drivers/scsi/spfc/sphw_eqs.c | 1 -
drivers/scsi/spfc/sphw_hw_cfg.c | 1 -
drivers/scsi/spfc/sphw_hw_comm.c | 1 -
drivers/scsi/spfc/sphw_hwdev.c | 1 -
drivers/scsi/spfc/sphw_hwif.c | 1 -
drivers/scsi/spfc/sphw_mbox.c | 1 -
drivers/scsi/spfc/sphw_mgmt.c | 1 -
drivers/scsi/spfc/sphw_prof_adap.c | 1 -
drivers/scsi/spfc/sphw_wq.c | 1 -
80 files changed, 53210 deletions(-)
delete mode 100644 drivers/scsi/spfc/Kconfig
delete mode 100644 drivers/scsi/spfc/Makefile
delete mode 100644 drivers/scsi/spfc/common/unf_common.h
delete mode 100644 drivers/scsi/spfc/common/unf_disc.c
delete mode 100644 drivers/scsi/spfc/common/unf_disc.h
delete mode 100644 drivers/scsi/spfc/common/unf_event.c
delete mode 100644 drivers/scsi/spfc/common/unf_event.h
delete mode 100644 drivers/scsi/spfc/common/unf_exchg.c
delete mode 100644 drivers/scsi/spfc/common/unf_exchg.h
delete mode 100644 drivers/scsi/spfc/common/unf_exchg_abort.c
delete mode 100644 drivers/scsi/spfc/common/unf_exchg_abort.h
delete mode 100644 drivers/scsi/spfc/common/unf_fcstruct.h
delete mode 100644 drivers/scsi/spfc/common/unf_gs.c
delete mode 100644 drivers/scsi/spfc/common/unf_gs.h
delete mode 100644 drivers/scsi/spfc/common/unf_init.c
delete mode 100644 drivers/scsi/spfc/common/unf_io.c
delete mode 100644 drivers/scsi/spfc/common/unf_io.h
delete mode 100644 drivers/scsi/spfc/common/unf_io_abnormal.c
delete mode 100644 drivers/scsi/spfc/common/unf_io_abnormal.h
delete mode 100644 drivers/scsi/spfc/common/unf_log.h
delete mode 100644 drivers/scsi/spfc/common/unf_lport.c
delete mode 100644 drivers/scsi/spfc/common/unf_lport.h
delete mode 100644 drivers/scsi/spfc/common/unf_ls.c
delete mode 100644 drivers/scsi/spfc/common/unf_ls.h
delete mode 100644 drivers/scsi/spfc/common/unf_npiv.c
delete mode 100644 drivers/scsi/spfc/common/unf_npiv.h
delete mode 100644 drivers/scsi/spfc/common/unf_npiv_portman.c
delete mode 100644 drivers/scsi/spfc/common/unf_npiv_portman.h
delete mode 100644 drivers/scsi/spfc/common/unf_portman.c
delete mode 100644 drivers/scsi/spfc/common/unf_portman.h
delete mode 100644 drivers/scsi/spfc/common/unf_rport.c
delete mode 100644 drivers/scsi/spfc/common/unf_rport.h
delete mode 100644 drivers/scsi/spfc/common/unf_scsi.c
delete mode 100644 drivers/scsi/spfc/common/unf_scsi_common.h
delete mode 100644 drivers/scsi/spfc/common/unf_service.c
delete mode 100644 drivers/scsi/spfc/common/unf_service.h
delete mode 100644 drivers/scsi/spfc/common/unf_type.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_chipitf.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_chipitf.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_cqm_main.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_cqm_main.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_cqm_object.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_cqm_object.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_hba.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_hba.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_hw_wqe.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_io.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_io.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_lld.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_lld.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_module.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_parent_context.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_queue.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_queue.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_service.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_service.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_utils.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_utils.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_wqe.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_wqe.h
delete mode 120000 drivers/scsi/spfc/sphw_api_cmd.c
delete mode 120000 drivers/scsi/spfc/sphw_cmdq.c
delete mode 120000 drivers/scsi/spfc/sphw_common.c
delete mode 120000 drivers/scsi/spfc/sphw_eqs.c
delete mode 120000 drivers/scsi/spfc/sphw_hw_cfg.c
delete mode 120000 drivers/scsi/spfc/sphw_hw_comm.c
delete mode 120000 drivers/scsi/spfc/sphw_hwdev.c
delete mode 120000 drivers/scsi/spfc/sphw_hwif.c
delete mode 120000 drivers/scsi/spfc/sphw_mbox.c
delete mode 120000 drivers/scsi/spfc/sphw_mgmt.c
delete mode 120000 drivers/scsi/spfc/sphw_prof_adap.c
delete mode 120000 drivers/scsi/spfc/sphw_wq.c
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index 401ab0f99631..1fd250cc103c 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -2403,7 +2403,6 @@ CONFIG_SCSI_QLA_FC=m
CONFIG_SCSI_QLA_ISCSI=m
CONFIG_QEDI=m
CONFIG_QEDF=m
-CONFIG_SPFC=m
CONFIG_SCSI_HUAWEI_FC=m
CONFIG_SCSI_FC_HIFC=m
CONFIG_SCSI_LPFC=m
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index 64afe7021c45..cd9aaba18fa4 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -2367,7 +2367,6 @@ CONFIG_SCSI_QLA_FC=m
CONFIG_SCSI_QLA_ISCSI=m
CONFIG_QEDI=m
CONFIG_QEDF=m
-CONFIG_SPFC=m
CONFIG_SCSI_HUAWEI_FC=m
CONFIG_SCSI_FC_HIFC=m
CONFIG_SCSI_LPFC=m
diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index 170d59df48d1..0fbe4edeccd0 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -1151,7 +1151,6 @@ source "drivers/scsi/qla2xxx/Kconfig"
source "drivers/scsi/qla4xxx/Kconfig"
source "drivers/scsi/qedi/Kconfig"
source "drivers/scsi/qedf/Kconfig"
-source "drivers/scsi/spfc/Kconfig"
source "drivers/scsi/huawei/Kconfig"
config SCSI_LPFC
diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
index 299d3318fac8..78a3c832394c 100644
--- a/drivers/scsi/Makefile
+++ b/drivers/scsi/Makefile
@@ -85,7 +85,6 @@ obj-$(CONFIG_PCMCIA_QLOGIC) += qlogicfas408.o
obj-$(CONFIG_SCSI_QLOGIC_1280) += qla1280.o
obj-$(CONFIG_SCSI_QLA_FC) += qla2xxx/
obj-$(CONFIG_SCSI_QLA_ISCSI) += libiscsi.o qla4xxx/
-obj-$(CONFIG_SPFC) += spfc/
obj-$(CONFIG_SCSI_LPFC) += lpfc/
obj-$(CONFIG_SCSI_HUAWEI_FC) += huawei/
obj-$(CONFIG_SCSI_BFA_FC) += bfa/
diff --git a/drivers/scsi/spfc/Kconfig b/drivers/scsi/spfc/Kconfig
deleted file mode 100644
index 9d4566d90809..000000000000
--- a/drivers/scsi/spfc/Kconfig
+++ /dev/null
@@ -1,16 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-#
-# Ramaxel SPFC driver configuration
-#
-
-config SPFC
- tristate "Ramaxel Fabric Channel Host Adapter Support"
- default m
- depends on PCI && SCSI
- depends on SCSI_FC_ATTRS
- depends on ARM64 || X86_64
- help
- This driver supports Ramaxel Fabric Channel PCIe host adapter.
- To compile this driver as part of the kernel, choose Y here.
- If unsure, choose N.
- The default is M.
diff --git a/drivers/scsi/spfc/Makefile b/drivers/scsi/spfc/Makefile
deleted file mode 100644
index 849b730ac733..000000000000
--- a/drivers/scsi/spfc/Makefile
+++ /dev/null
@@ -1,47 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-obj-$(CONFIG_SPFC) += spfc.o
-
-subdir-ccflags-y += -I$(srctree)/$(src)/../../net/ethernet/ramaxel/spnic/hw
-subdir-ccflags-y += -I$(srctree)/$(src)/hw
-subdir-ccflags-y += -I$(srctree)/$(src)/common
-
-spfc-objs := common/unf_init.o \
- common/unf_event.o \
- common/unf_exchg.o \
- common/unf_exchg_abort.o \
- common/unf_io.o \
- common/unf_io_abnormal.o \
- common/unf_lport.o \
- common/unf_npiv.o \
- common/unf_npiv_portman.o \
- common/unf_disc.o \
- common/unf_rport.o \
- common/unf_service.o \
- common/unf_ls.o \
- common/unf_gs.o \
- common/unf_portman.o \
- common/unf_scsi.o \
- hw/spfc_utils.o \
- hw/spfc_lld.o \
- hw/spfc_io.o \
- hw/spfc_wqe.o \
- hw/spfc_service.o \
- hw/spfc_chipitf.o \
- hw/spfc_queue.o \
- hw/spfc_hba.o \
- hw/spfc_cqm_bat_cla.o \
- hw/spfc_cqm_bitmap_table.o \
- hw/spfc_cqm_main.o \
- hw/spfc_cqm_object.o \
- sphw_hwdev.o \
- sphw_hw_cfg.o \
- sphw_hw_comm.o \
- sphw_prof_adap.o \
- sphw_common.o \
- sphw_hwif.o \
- sphw_wq.o \
- sphw_cmdq.o \
- sphw_eqs.o \
- sphw_mbox.o \
- sphw_mgmt.o \
- sphw_api_cmd.o
diff --git a/drivers/scsi/spfc/common/unf_common.h b/drivers/scsi/spfc/common/unf_common.h
deleted file mode 100644
index 9613649308bf..000000000000
--- a/drivers/scsi/spfc/common/unf_common.h
+++ /dev/null
@@ -1,1753 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_COMMON_H
-#define UNF_COMMON_H
-
-#include "unf_type.h"
-#include "unf_fcstruct.h"
-
-/* version num */
-#define SPFC_DRV_VERSION "B101"
-#define SPFC_DRV_DESC "Ramaxel Memory Technology Fibre Channel Driver"
-
-#define UNF_MAX_SECTORS 0xffff
-#define UNF_PKG_FREE_OXID 0x0
-#define UNF_PKG_FREE_RXID 0x1
-
-#define UNF_SPFC_MAXRPORT_NUM (2048)
-#define SPFC_DEFAULT_RPORT_INDEX (UNF_SPFC_MAXRPORT_NUM - 1)
-
-/* session use sq num */
-#define UNF_SQ_NUM_PER_SESSION 3
-
-extern atomic_t fc_mem_ref;
-extern u32 unf_dgb_level;
-extern u32 spfc_dif_type;
-extern u32 spfc_dif_enable;
-extern u8 spfc_guard;
-extern int link_lose_tmo;
-
-/* define bits */
-#define UNF_BIT(n) (0x1UL << (n))
-#define UNF_BIT_0 UNF_BIT(0)
-#define UNF_BIT_1 UNF_BIT(1)
-#define UNF_BIT_2 UNF_BIT(2)
-#define UNF_BIT_3 UNF_BIT(3)
-#define UNF_BIT_4 UNF_BIT(4)
-#define UNF_BIT_5 UNF_BIT(5)
-
-#define UNF_BITS_PER_BYTE 8
-
-#define UNF_NOTIFY_UP_CLEAN_FLASH 2
-
-/* Echo macro define */
-#define ECHO_MG_VERSION_LOCAL 1
-#define ECHO_MG_VERSION_REMOTE 2
-
-#define SPFC_WIN_NPIV_NUM 32
-
-#define UNF_GET_NAME_HIGH_WORD(name) (((name) >> 32) & 0xffffffff)
-#define UNF_GET_NAME_LOW_WORD(name) ((name) & 0xffffffff)
-
-#define UNF_FIRST_LPORT_ID_MASK 0xffffff00
-#define UNF_PORT_ID_MASK 0x000000ff
-#define UNF_FIRST_LPORT_ID 0x00000000
-#define UNF_SECOND_LPORT_ID 0x00000001
-#define UNF_EIGHTH_LPORT_ID 0x00000007
-#define SPFC_MAX_COUNTER_TYPE 128
-
-#define UNF_EVENT_ASYN 0
-#define UNF_EVENT_SYN 1
-#define UNF_GLOBAL_EVENT_ASYN 2
-#define UNF_GLOBAL_EVENT_SYN 3
-
-#define UNF_GET_SLOT_ID_BY_PORTID(port_id) (((port_id) & 0x001f00) >> 8)
-#define UNF_GET_FUNC_ID_BY_PORTID(port_id) ((port_id) & 0x0000ff)
-#define UNF_GET_BOARD_TYPE_AND_SLOT_ID_BY_PORTID(port_id) \
- (((port_id) & 0x00FF00) >> 8)
-
-#define UNF_FC_SERVER_BOARD_8_G 13 /* 8G mode */
-#define UNF_FC_SERVER_BOARD_16_G 7 /* 16G mode */
-#define UNF_FC_SERVER_BOARD_32_G 6 /* 32G mode */
-
-#define UNF_PORT_TYPE_FC_QSFP 1
-#define UNF_PORT_TYPE_FC_SFP 0
-#define UNF_PORT_UNGRADE_FW_RESET_ACTIVE 0
-#define UNF_PORT_UNGRADE_FW_RESET_INACTIVE 1
-
-enum unf_rport_qos_level {
- UNF_QOS_LEVEL_DEFAULT = 0,
- UNF_QOS_LEVEL_MIDDLE,
- UNF_QOS_LEVEL_HIGH,
- UNF_QOS_LEVEL_BUTT
-};
-
-struct buff_list {
- u8 *vaddr;
- dma_addr_t paddr;
-};
-
-struct buf_describe {
- struct buff_list *buflist;
- u32 buf_size;
- u32 buf_num;
-};
-
-#define IO_STATICS
-struct unf_port_info {
- u32 local_nport_id;
- u32 nport_id;
- u32 rport_index;
- u64 port_name;
- enum unf_rport_qos_level qos_level;
- u8 cs_ctrl;
- u8 rsvd0[3];
- u32 sqn_base;
-};
-
-struct unf_cfg_item {
- char *puc_name;
- u32 min_value;
- u32 default_value;
- u32 max_value;
-};
-
-struct unf_port_param {
- u32 ra_tov;
- u32 ed_tov;
-};
-
-/* get wwpn adn wwnn */
-struct unf_get_chip_info_argout {
- u8 board_type;
- u64 wwpn;
- u64 wwnn;
- u64 sys_mac;
-};
-
-/* get sfp info: present and speed */
-struct unf_get_port_info_argout {
- u8 sfp_speed;
- u8 present;
- u8 rsvd[2];
-};
-
-/* SFF-8436(QSFP+) Rev 4.7 */
-struct unf_sfp_plus_field_a0 {
- u8 identifier;
- /* offset 1~2 */
- struct {
- u8 reserved;
- u8 status;
- } status_indicator;
- /* offset 3~21 */
- struct {
- u8 rx_tx_los;
- u8 tx_fault;
- u8 all_resv;
-
- u8 ini_complete : 1;
- u8 bit_resv : 3;
- u8 temp_low_warn : 1;
- u8 temp_high_warn : 1;
- u8 temp_low_alarm : 1;
- u8 temp_high_alarm : 1;
-
- u8 resv : 4;
- u8 vcc_low_warn : 1;
- u8 vcc_high_warn : 1;
- u8 vcc_low_alarm : 1;
- u8 vcc_high_alarm : 1;
-
- u8 resv8;
- u8 rx_pow[2];
- u8 tx_bias[2];
- u8 reserved[6];
- u8 vendor_specifics[3];
- } interrupt_flag;
- /* offset 22~33 */
- struct {
- u8 temp[2];
- u8 reserved[2];
- u8 supply_vol[2];
- u8 reserveds[2];
- u8 vendor_specific[4];
- } module_monitors;
- /* offset 34~81 */
- struct {
- u8 rx_pow[8];
- u8 tx_bias[8];
- u8 reserved[16];
- u8 vendor_specific[16];
- } channel_monitor_val;
-
- /* offset 82~85 */
- u8 reserved[4];
-
- /* offset 86~97 */
- struct {
- /* 86~88 */
- u8 tx_disable;
- u8 rx_rate_select;
- u8 tx_rate_select;
-
- /* 89~92 */
- u8 rx4_app_select;
- u8 rx3_app_select;
- u8 rx2_app_select;
- u8 rx1_app_select;
- /* 93 */
- u8 power_override : 1;
- u8 power_set : 1;
- u8 reserved : 6;
-
- /* 94~97 */
- u8 tx4_app_select;
- u8 tx3_app_select;
- u8 tx2_app_select;
- u8 tx1_app_select;
- /* 98~99 */
- u8 reserved2[2];
- } control;
- /* 100~106 */
- struct {
- /* 100 */
- u8 m_rx1_los : 1;
- u8 m_rx2_los : 1;
- u8 m_rx3_los : 1;
- u8 m_rx4_los : 1;
- u8 m_tx1_los : 1;
- u8 m_tx2_los : 1;
- u8 m_tx3_los : 1;
- u8 m_tx4_los : 1;
- /* 101 */
- u8 m_tx1_fault : 1;
- u8 m_tx2_fault : 1;
- u8 m_tx3_fault : 1;
- u8 m_tx4_fault : 1;
- u8 reserved : 4;
- /* 102 */
- u8 reserved1;
- /* 103 */
- u8 mini_cmp_flag : 1;
- u8 rsv : 3;
- u8 m_temp_low_warn : 1;
- u8 m_temp_high_warn : 1;
- u8 m_temp_low_alarm : 1;
- u8 m_temp_high_alarm : 1;
- /* 104 */
- u8 rsv1 : 4;
- u8 m_vcc_low_warn : 1;
- u8 m_vcc_high_warn : 1;
- u8 m_vcc_low_alarm : 1;
- u8 m_vcc_high_alarm : 1;
- /* 105~106 */
- u8 vendor_specific[2];
- } module_channel_mask_bit;
- /* 107~118 */
- u8 resv[12];
- /* 119~126 */
- u8 password_reserved[8];
- /* 127 */
- u8 page_select;
-};
-
-/* page 00 */
-struct unf_sfp_plus_field_00 {
- /* 128~191 */
- struct {
- u8 id;
- u8 id_ext;
- u8 connector;
- u8 speci_com[6];
- u8 mode;
- u8 speed;
- u8 encoding;
- u8 br_nominal;
- u8 ext_rate_select_com;
- u8 length_smf;
- u8 length_om3;
- u8 length_om2;
- u8 length_om1;
- u8 length_copper;
- u8 device_tech;
- u8 vendor_name[16];
- u8 ex_module;
- u8 vendor_oui[3];
- u8 vendor_pn[16];
- u8 vendor_rev[2];
- /* Wave length or Copper cable Attenuation*/
- u8 wave_or_copper_attenuation[2];
- u8 wave_length_toler[2]; /* Wavelength tolerance */
- u8 max_temp;
- u8 cc_base;
- } base_id_fields;
- /* 192~223 */
- struct {
- u8 options[4];
- u8 vendor_sn[16];
- u8 date_code[8];
- u8 diagn_monit_type;
- u8 enhance_opt;
- u8 reserved;
- u8 ccext;
- } ext_id_fields;
- /* 224~255 */
- u8 vendor_spec_eeprom[32];
-};
-
-/* page 01 */
-struct unf_sfp_plus_field_01 {
- u8 optional01[128];
-};
-
-/* page 02 */
-struct unf_sfp_plus_field_02 {
- u8 optional02[128];
-};
-
-/* page 03 */
-struct unf_sfp_plus_field_03 {
- u8 temp_high_alarm[2];
- u8 temp_low_alarm[2];
- u8 temp_high_warn[2];
- u8 temp_low_warn[2];
-
- u8 reserved1[8];
-
- u8 vcc_high_alarm[2];
- u8 vcc_low_alarm[2];
- u8 vcc_high_warn[2];
- u8 vcc_low_warn[2];
-
- u8 reserved2[8];
- u8 vendor_specific1[16];
-
- u8 pow_high_alarm[2];
- u8 pow_low_alarm[2];
- u8 pow_high_warn[2];
- u8 pow_low_warn[2];
-
- u8 bias_high_alarm[2];
- u8 bias_low_alarm[2];
- u8 bias_high_warn[2];
- u8 bias_low_warn[2];
-
- u8 tx_power_high_alarm[2];
- u8 tx_power_low_alarm[2];
- u8 reserved3[4];
-
- u8 reserved4[8];
-
- u8 vendor_specific2[16];
- u8 reserved5[2];
- u8 vendor_specific3[12];
- u8 rx_ampl[2];
- u8 rx_tx_sq_disable;
- u8 rx_output_disable;
- u8 chan_monit_mask[12];
- u8 reserved6[2];
-};
-
-struct unf_sfp_plus_info {
- struct unf_sfp_plus_field_a0 sfp_plus_info_a0;
- struct unf_sfp_plus_field_00 sfp_plus_info_00;
- struct unf_sfp_plus_field_01 sfp_plus_info_01;
- struct unf_sfp_plus_field_02 sfp_plus_info_02;
- struct unf_sfp_plus_field_03 sfp_plus_info_03;
-};
-
-struct unf_sfp_data_field_a0 {
- /* Offset 0~63 */
- struct {
- u8 id;
- u8 id_ext;
- u8 connector;
- u8 transceiver[8];
- u8 encoding;
- u8 br_nominal; /* Nominal signalling rate, units of 100MBd. */
- u8 rate_identifier; /* Type of rate select functionality */
- /* Link length supported for single mode fiber, units of km */
- u8 length_smk_km;
- /* Link length supported for single mode fiber,
- *units of 100 m
- */
- u8 length_smf;
- /* Link length supported for 50 um OM2 fiber,units of 10 m */
- u8 length_smf_om2;
- /* Link length supported for 62.5 um OM1 fiber, units of 10 m */
- u8 length_smf_om1;
- /*Link length supported for copper/direct attach cable,
- *units of m
- */
- u8 length_cable;
- /* Link length supported for 50 um OM3 fiber, units of 10m */
- u8 length_om3;
- u8 vendor_name[16]; /* ASCII */
- /* Code for electronic or optical compatibility*/
- u8 transceiver2;
- u8 vendor_oui[3]; /* SFP vendor IEEE company ID */
- u8 vendor_pn[16]; /* Part number provided by SFP vendor (ASCII)
- */
- /* Revision level for part number provided by vendor (ASCII) */
- u8 vendor_rev[4];
- /* Laser wavelength (Passive/Active Cable
- *Specification Compliance)
- */
- u8 wave_length[2];
- u8 unallocated;
- /* Check code for Base ID Fields (addresses 0 to 62)*/
- u8 cc_base;
- } base_id_fields;
-
- /* Offset 64~95 */
- struct {
- u8 options[2];
- u8 br_max;
- u8 br_min;
- u8 vendor_sn[16];
- u8 date_code[8];
- u8 diag_monitoring_type;
- u8 enhanced_options;
- u8 sff8472_compliance;
- u8 cc_ext;
- } ext_id_fields;
-
- /* Offset 96~255 */
- struct {
- u8 vendor_spec_eeprom[32];
- u8 rsvd[128];
- } vendor_spec_id_fields;
-};
-
-struct unf_sfp_data_field_a2 {
- /* Offset 0~119 */
- struct {
- /* 0~39 */
- struct {
- u8 temp_alarm_high[2];
- u8 temp_alarm_low[2];
- u8 temp_warning_high[2];
- u8 temp_warning_low[2];
-
- u8 vcc_alarm_high[2];
- u8 vcc_alarm_low[2];
- u8 vcc_warning_high[2];
- u8 vcc_warning_low[2];
-
- u8 bias_alarm_high[2];
- u8 bias_alarm_low[2];
- u8 bias_warning_high[2];
- u8 bias_warning_low[2];
-
- u8 tx_alarm_high[2];
- u8 tx_alarm_low[2];
- u8 tx_warning_high[2];
- u8 tx_warning_low[2];
-
- u8 rx_alarm_high[2];
- u8 rx_alarm_low[2];
- u8 rx_warning_high[2];
- u8 rx_warning_low[2];
- } alarm_warn_th;
-
- u8 unallocated0[16];
- u8 ext_cal_constants[36];
- u8 unallocated1[3];
- u8 cc_dmi;
-
- /* 96~105 */
- struct {
- u8 temp[2];
- u8 vcc[2];
- u8 tx_bias[2];
- u8 tx_power[2];
- u8 rx_power[2];
- } diag;
-
- u8 unallocated2[4];
-
- struct {
- u8 data_rdy_bar_state : 1;
- u8 rx_los : 1;
- u8 tx_fault_state : 1;
- u8 soft_rate_select_state : 1;
- u8 rate_select_state : 1;
- u8 rs_state : 1;
- u8 soft_tx_disable_select : 1;
- u8 tx_disable_state : 1;
- } status_ctrl;
- u8 rsvd;
-
- /* 112~113 */
- struct {
- /* 112 */
- u8 tx_alarm_low : 1;
- u8 tx_alarm_high : 1;
- u8 tx_bias_alarm_low : 1;
- u8 tx_bias_alarm_high : 1;
- u8 vcc_alarm_low : 1;
- u8 vcc_alarm_high : 1;
- u8 temp_alarm_low : 1;
- u8 temp_alarm_high : 1;
-
- /* 113 */
- u8 rsvd : 6;
- u8 rx_alarm_low : 1;
- u8 rx_alarm_high : 1;
- } alarm;
-
- u8 unallocated3[2];
-
- /* 116~117 */
- struct {
- /* 116 */
- u8 tx_warn_lo : 1;
- u8 tx_warn_hi : 1;
- u8 bias_warn_lo : 1;
- u8 bias_warn_hi : 1;
- u8 vcc_warn_lo : 1;
- u8 vcc_warn_hi : 1;
- u8 temp_warn_lo : 1;
- u8 temp_warn_hi : 1;
-
- /* 117 */
- u8 rsvd : 6;
- u8 rx_warn_lo : 1;
- u8 rx_warn_hi : 1;
- } warning;
-
- u8 ext_status_and_ctrl[2];
- } diag;
-
- /* Offset 120~255 */
- struct {
- u8 vendor_spec[8];
- u8 user_eeprom[120];
- u8 vendor_ctrl[8];
- } general_use_fields;
-};
-
-struct unf_sfp_info {
- struct unf_sfp_data_field_a0 sfp_info_a0;
- struct unf_sfp_data_field_a2 sfp_info_a2;
-};
-
-struct unf_sfp_err_rome_info {
- struct unf_sfp_info sfp_info;
- struct unf_sfp_plus_info sfp_plus_info;
-};
-
-struct unf_err_code {
- u32 loss_of_signal_count;
- u32 bad_rx_char_count;
- u32 loss_of_sync_count;
- u32 link_fail_count;
- u32 rx_eof_a_count;
- u32 dis_frame_count;
- u32 bad_crc_count;
- u32 proto_error_count;
-};
-
-/* config file */
-enum unf_port_mode {
- UNF_PORT_MODE_UNKNOWN = 0x00,
- UNF_PORT_MODE_TGT = 0x10,
- UNF_PORT_MODE_INI = 0x20,
- UNF_PORT_MODE_BOTH = 0x30
-};
-
-enum unf_port_upgrade {
- UNF_PORT_UNSUPPORT_UPGRADE_REPORT = 0x00,
- UNF_PORT_SUPPORT_UPGRADE_REPORT = 0x01,
- UNF_PORT_UPGRADE_BUTT
-};
-
-#define UNF_BYTES_OF_DWORD 0x4
-static inline void __attribute__((unused)) unf_big_end_to_cpu(u8 *buffer, u32 size)
-{
- u32 *buf = NULL;
- u32 word_sum = 0;
- u32 index = 0;
-
- if (!buffer)
- return;
-
- buf = (u32 *)buffer;
-
- /* byte to word */
- if (size % UNF_BYTES_OF_DWORD == 0)
- word_sum = size / UNF_BYTES_OF_DWORD;
- else
- return;
-
- /* word to byte */
- while (index < word_sum) {
- *buf = be32_to_cpu(*buf);
- buf++;
- index++;
- }
-}
-
-static inline void __attribute__((unused)) unf_cpu_to_big_end(void *buffer, u32 size)
-{
-#define DWORD_BIT 32
-#define BYTE_BIT 8
- u32 *buf = NULL;
- u32 word_sum = 0;
- u32 index = 0;
- u32 tmp = 0;
-
- if (!buffer)
- return;
-
- buf = (u32 *)buffer;
-
- /* byte to dword */
- word_sum = size / UNF_BYTES_OF_DWORD;
-
- /* dword to byte */
- while (index < word_sum) {
- *buf = cpu_to_be32(*buf);
- buf++;
- index++;
- }
-
- if (size % UNF_BYTES_OF_DWORD) {
- tmp = cpu_to_be32(*buf);
- tmp =
- tmp >> (DWORD_BIT - (size % UNF_BYTES_OF_DWORD) * BYTE_BIT);
- memcpy(buf, &tmp, (size % UNF_BYTES_OF_DWORD));
- }
-}
-
-#define UNF_TOP_AUTO_MASK 0x0f
-#define UNF_TOP_UNKNOWN 0xff
-#define SPFC_TOP_AUTO 0x0
-
-#define UNF_NORMAL_MODE 0
-#define UNF_SET_NOMAL_MODE(mode) ((mode) = UNF_NORMAL_MODE)
-
-/*
- * * SCSI status
- */
-#define SCSI_GOOD 0x00
-#define SCSI_CHECK_CONDITION 0x02
-#define SCSI_CONDITION_MET 0x04
-#define SCSI_BUSY 0x08
-#define SCSI_INTERMEDIATE 0x10
-#define SCSI_INTERMEDIATE_COND_MET 0x14
-#define SCSI_RESERVATION_CONFLICT 0x18
-#define SCSI_TASK_SET_FULL 0x28
-#define SCSI_ACA_ACTIVE 0x30
-#define SCSI_TASK_ABORTED 0x40
-
-enum unf_act_topo {
- UNF_ACT_TOP_PUBLIC_LOOP = 0x1,
- UNF_ACT_TOP_PRIVATE_LOOP = 0x2,
- UNF_ACT_TOP_P2P_DIRECT = 0x4,
- UNF_ACT_TOP_P2P_FABRIC = 0x8,
- UNF_TOP_LOOP_MASK = 0x03,
- UNF_TOP_P2P_MASK = 0x0c,
- UNF_TOP_FCOE_MASK = 0x30,
- UNF_ACT_TOP_UNKNOWN
-};
-
-#define UNF_FL_PORT_LOOP_ADDR 0x00
-#define UNF_INVALID_LOOP_ADDR 0xff
-
-#define UNF_LOOP_ROLE_MASTER_OR_SLAVE 0x0
-#define UNF_LOOP_ROLE_ONLY_SLAVE 0x1
-
-#define UNF_TOU16_CHECK(dest, src, over_action) \
- do { \
- if (unlikely(0xFFFF < (src))) { \
- FC_DRV_PRINT(UNF_LOG_REG_ATT, \
- UNF_ERR, "ToU16 error, src 0x%x ", \
- (src)); \
- over_action; \
- } \
- ((dest) = (u16)(src)); \
- } while (0)
-
-#define UNF_PORT_SPEED_AUTO 0
-#define UNF_PORT_SPEED_2_G 2
-#define UNF_PORT_SPEED_4_G 4
-#define UNF_PORT_SPEED_8_G 8
-#define UNF_PORT_SPEED_10_G 10
-#define UNF_PORT_SPEED_16_G 16
-#define UNF_PORT_SPEED_32_G 32
-
-#define UNF_PORT_SPEED_UNKNOWN (~0)
-#define UNF_PORT_SFP_SPEED_ERR 0xFF
-
-#define UNF_OP_DEBUG_DUMP 0x0001
-#define UNF_OP_FCPORT_INFO 0x0002
-#define UNF_OP_FCPORT_LINK_CMD_TEST 0x0003
-#define UNF_OP_TEST_MBX 0x0004
-
-/* max frame size */
-#define UNF_MAX_FRAME_SIZE 2112
-
-/* default */
-#define UNF_DEFAULT_FRAME_SIZE 2048
-#define UNF_DEFAULT_EDTOV 2000
-#define UNF_DEFAULT_RATOV 10000
-#define UNF_DEFAULT_FABRIC_RATOV 10000
-#define UNF_MAX_RETRY_COUNT 3
-#define UNF_RRQ_MIN_TIMEOUT_INTERVAL 30000
-#define UNF_LOGO_TIMEOUT_INTERVAL 3000
-#define UNF_SFS_MIN_TIMEOUT_INTERVAL 15000
-#define UNF_WRITE_RRQ_SENDERR_INTERVAL 3000
-#define UNF_REC_TOV 3000
-
-#define UNF_WAIT_SEM_TIMEOUT (5000UL)
-#define UNF_WAIT_ABTS_RSP_TIMEOUT (20000UL)
-#define UNF_MAX_ABTS_WAIT_INTERVAL ((UNF_WAIT_SEM_TIMEOUT - 500) / 1000)
-
-#define UNF_TGT_RRQ_REDUNDANT_TIME 2000
-#define UNF_INI_RRQ_REDUNDANT_TIME 500
-#define UNF_INI_ELS_REDUNDANT_TIME 2000
-
-/* ELS command values */
-#define UNF_ELS_CMND_HIGH_MASK 0xff000000
-#define UNF_ELS_CMND_RJT 0x01000000
-#define UNF_ELS_CMND_ACC 0x02000000
-#define UNF_ELS_CMND_PLOGI 0x03000000
-#define UNF_ELS_CMND_FLOGI 0x04000000
-#define UNF_ELS_CMND_LOGO 0x05000000
-#define UNF_ELS_CMND_RLS 0x0F000000
-#define UNF_ELS_CMND_ECHO 0x10000000
-#define UNF_ELS_CMND_REC 0x13000000
-#define UNF_ELS_CMND_RRQ 0x12000000
-#define UNF_ELS_CMND_PRLI 0x20000000
-#define UNF_ELS_CMND_PRLO 0x21000000
-#define UNF_ELS_CMND_PDISC 0x50000000
-#define UNF_ELS_CMND_FDISC 0x51000000
-#define UNF_ELS_CMND_ADISC 0x52000000
-#define UNF_ELS_CMND_FAN 0x60000000
-#define UNF_ELS_CMND_RSCN 0x61000000
-#define UNF_FCP_CMND_SRR 0x14000000
-#define UNF_GS_CMND_SCR 0x62000000
-
-#define UNF_PLOGI_VERSION_UPPER 0x20
-#define UNF_PLOGI_VERSION_LOWER 0x20
-#define UNF_PLOGI_CONCURRENT_SEQ 0x00FF
-#define UNF_PLOGI_RO_CATEGORY 0x00FE
-#define UNF_PLOGI_SEQ_PER_XCHG 0x0001
-#define UNF_LGN_INFRAMESIZE 2048
-
-/* CT_IU pream defines */
-#define UNF_REV_NPORTID_INIT 0x01000000
-#define UNF_FSTYPE_OPT_INIT 0xfc020000
-#define UNF_FSTYPE_RFT_ID 0x02170000
-#define UNF_FSTYPE_GID_PT 0x01A10000
-#define UNF_FSTYPE_GID_FT 0x01710000
-#define UNF_FSTYPE_RFF_ID 0x021F0000
-#define UNF_FSTYPE_GFF_ID 0x011F0000
-#define UNF_FSTYPE_GNN_ID 0x01130000
-#define UNF_FSTYPE_GPN_ID 0x01120000
-
-#define UNF_CT_IU_RSP_MASK 0xffff0000
-#define UNF_CT_IU_REASON_MASK 0x00ff0000
-#define UNF_CT_IU_EXPLAN_MASK 0x0000ff00
-#define UNF_CT_IU_REJECT 0x80010000
-#define UNF_CT_IU_ACCEPT 0x80020000
-
-#define UNF_FABRIC_FULL_REG 0x00000003
-
-#define UNF_FC4_SCSI_BIT8 0x00000100
-#define UNF_FC4_FCP_TYPE 0x00000008
-#define UNF_FRAG_REASON_VENDOR 0
-
-/* GID_PT, GID_FT */
-#define UNF_GID_PT_TYPE 0x7F000000
-#define UNF_GID_FT_TYPE 0x00000008
-
-/*
- *FC4 defines
- */
-#define UNF_FC4_FRAME_PAGE_SIZE 0x10
-#define UNF_FC4_FRAME_PAGE_SIZE_SHIFT 16
-
-#define UNF_FC4_FRAME_PARM_0_FCP 0x08000000
-#define UNF_FC4_FRAME_PARM_0_I_PAIR 0x00002000
-#define UNF_FC4_FRAME_PARM_0_GOOD_RSP_CODE 0x00000100
-#define UNF_FC4_FRAME_PARM_0_MASK \
- (UNF_FC4_FRAME_PARM_0_FCP | UNF_FC4_FRAME_PARM_0_I_PAIR | \
- UNF_FC4_FRAME_PARM_0_GOOD_RSP_CODE)
-#define UNF_FC4_FRAME_PARM_3_INI 0x00000020
-#define UNF_FC4_FRAME_PARM_3_TGT 0x00000010
-#define UNF_FC4_FRAME_PARM_3_BOTH \
- (UNF_FC4_FRAME_PARM_3_INI | UNF_FC4_FRAME_PARM_3_TGT)
-#define UNF_FC4_FRAME_PARM_3_R_XFER_DIS 0x00000002
-#define UNF_FC4_FRAME_PARM_3_W_XFER_DIS 0x00000001
-#define UNF_FC4_FRAME_PARM_3_REC_SUPPORT 0x00000400 /* bit 10 */
-#define UNF_FC4_FRAME_PARM_3_TASK_RETRY_ID_SUPPORT 0x00000200 /* bit 9 */
-#define UNF_FC4_FRAME_PARM_3_RETRY_SUPPORT 0x00000100 /* bit 8 */
-#define UNF_FC4_FRAME_PARM_3_CONF_ALLOW 0x00000080 /* bit 7 */
-
-#define UNF_FC4_FRAME_PARM_3_MASK \
- (UNF_FC4_FRAME_PARM_3_INI | UNF_FC4_FRAME_PARM_3_TGT | \
- UNF_FC4_FRAME_PARM_3_R_XFER_DIS)
-
-#define UNF_FC4_TYPE_SHIFT 24
-#define UNF_FC4_TYPE_MASK 0xff
-/* FC4 feature we support */
-#define UNF_GFF_ACC_MASK 0xFF000000
-
-/* Reject CT_IU Reason Codes */
-#define UNF_CTIU_RJT_MASK 0xffff0000
-#define UNF_CTIU_RJT_INVALID_COMMAND 0x00010000
-#define UNF_CTIU_RJT_INVALID_VERSION 0x00020000
-#define UNF_CTIU_RJT_LOGIC_ERR 0x00030000
-#define UNF_CTIU_RJT_INVALID_SIZE 0x00040000
-#define UNF_CTIU_RJT_LOGIC_BUSY 0x00050000
-#define UNF_CTIU_RJT_PROTOCOL_ERR 0x00070000
-#define UNF_CTIU_RJT_UNABLE_PERFORM 0x00090000
-#define UNF_CTIU_RJT_NOT_SUPPORTED 0x000B0000
-
-/* FS_RJT Reason code explanations, FC-GS-2 6.5 */
-#define UNF_CTIU_RJT_EXP_MASK 0x0000FF00
-#define UNF_CTIU_RJT_EXP_NO_ADDTION 0x00000000
-#define UNF_CTIU_RJT_EXP_PORTID_NO_REG 0x00000100
-#define UNF_CTIU_RJT_EXP_PORTNAME_NO_REG 0x00000200
-#define UNF_CTIU_RJT_EXP_NODENAME_NO_REG 0x00000300
-#define UNF_CTIU_RJT_EXP_FC4TYPE_NO_REG 0x00000700
-#define UNF_CTIU_RJT_EXP_PORTTYPE_NO_REG 0x00000A00
-
-/*
- * LS_RJT defines
- */
-#define UNF_FC_LS_RJT_REASON_MASK 0x00ff0000
-
-/*
- * LS_RJT reason code defines
- */
-#define UNF_LS_OK 0x00000000
-#define UNF_LS_RJT_INVALID_COMMAND 0x00010000
-#define UNF_LS_RJT_LOGICAL_ERROR 0x00030000
-#define UNF_LS_RJT_BUSY 0x00050000
-#define UNF_LS_RJT_PROTOCOL_ERROR 0x00070000
-#define UNF_LS_RJT_REQUEST_DENIED 0x00090000
-#define UNF_LS_RJT_NOT_SUPPORTED 0x000b0000
-#define UNF_LS_RJT_CLASS_ERROR 0x000c0000
-
-/*
- * LS_RJT code explanation
- */
-#define UNF_LS_RJT_NO_ADDITIONAL_INFO 0x00000000
-#define UNF_LS_RJT_INV_DATA_FIELD_SIZE 0x00000700
-#define UNF_LS_RJT_INV_COMMON_SERV_PARAM 0x00000F00
-#define UNF_LS_RJT_INVALID_OXID_RXID 0x00001700
-#define UNF_LS_RJT_COMMAND_IN_PROGRESS 0x00001900
-#define UNF_LS_RJT_INSUFFICIENT_RESOURCES 0x00002900
-#define UNF_LS_RJT_COMMAND_NOT_SUPPORTED 0x00002C00
-#define UNF_LS_RJT_UNABLE_TO_SUPLY_REQ_DATA 0x00002A00
-#define UNF_LS_RJT_INVALID_PAYLOAD_LENGTH 0x00002D00
-
-#define UNF_P2P_LOCAL_NPORT_ID 0x000000EF
-#define UNF_P2P_REMOTE_NPORT_ID 0x000000D6
-
-#define UNF_BBCREDIT_MANAGE_NFPORT 0
-#define UNF_BBCREDIT_MANAGE_LPORT 1
-#define UNF_BBCREDIT_LPORT 0
-#define UNF_CONTIN_INCREASE_SUPPORT 1
-#define UNF_CLASS_VALID 1
-#define UNF_CLASS_INVALID 0
-#define UNF_NOT_MEANINGFUL 0
-#define UNF_NO_SERVICE_PARAMS 0
-#define UNF_CLEAN_ADDRESS_DEFAULT 0
-#define UNF_PRIORITY_ENABLE 1
-#define UNF_PRIORITY_DISABLE 0
-#define UNF_SEQUEN_DELIVERY_REQ 1 /* Sequential delivery requested */
-
-#define UNF_FC_PROTOCOL_CLASS_3 0x0
-#define UNF_FC_PROTOCOL_CLASS_2 0x1
-#define UNF_FC_PROTOCOL_CLASS_1 0x2
-#define UNF_FC_PROTOCOL_CLASS_F 0x3
-#define UNF_FC_PROTOCOL_CLASS_OTHER 0x4
-
-#define UNF_RSCN_PORT_ADDR 0x0
-#define UNF_RSCN_AREA_ADDR_GROUP 0x1
-#define UNF_RSCN_DOMAIN_ADDR_GROUP 0x2
-#define UNF_RSCN_FABRIC_ADDR_GROUP 0x3
-
-#define UNF_GET_RSCN_PLD_LEN(cmnd) ((cmnd) & 0x0000ffff)
-#define UNF_RSCN_PAGE_LEN 0x4
-
-#define UNF_PORT_LINK_UP 0x0000
-#define UNF_PORT_LINK_DOWN 0x0001
-#define UNF_PORT_RESET_START 0x0002
-#define UNF_PORT_RESET_END 0x0003
-#define UNF_PORT_LINK_UNKNOWN 0x0004
-#define UNF_PORT_NOP 0x0005
-#define UNF_PORT_CORE_FATAL_ERROR 0x0006
-#define UNF_PORT_CORE_UNRECOVERABLE_ERROR 0x0007
-#define UNF_PORT_CORE_RECOVERABLE_ERROR 0x0008
-#define UNF_PORT_LOGOUT 0x0009
-#define UNF_PORT_CLEAR_VLINK 0x000a
-#define UNF_PORT_UPDATE_PROCESS 0x000b
-#define UNF_PORT_DEBUG_DUMP 0x000c
-#define UNF_PORT_GET_FWLOG 0x000d
-#define UNF_PORT_CLEAN_DONE 0x000e
-#define UNF_PORT_BEGIN_REMOVE 0x000f
-#define UNF_PORT_RELEASE_RPORT_INDEX 0x0010
-#define UNF_PORT_ABNORMAL_RESET 0x0012
-
-/*
- * SCSI begin
- */
-#define SCSIOPC_TEST_UNIT_READY 0x00
-#define SCSIOPC_INQUIRY 0x12
-#define SCSIOPC_MODE_SENSE_6 0x1A
-#define SCSIOPC_MODE_SENSE_10 0x5A
-#define SCSIOPC_MODE_SELECT_6 0x15
-#define SCSIOPC_RESERVE 0x16
-#define SCSIOPC_RELEASE 0x17
-#define SCSIOPC_START_STOP_UNIT 0x1B
-#define SCSIOPC_READ_CAPACITY_10 0x25
-#define SCSIOPC_READ_CAPACITY_16 0x9E
-#define SCSIOPC_READ_6 0x08
-#define SCSIOPC_READ_10 0x28
-#define SCSIOPC_READ_12 0xA8
-#define SCSIOPC_READ_16 0x88
-#define SCSIOPC_WRITE_6 0x0A
-#define SCSIOPC_WRITE_10 0x2A
-#define SCSIOPC_WRITE_12 0xAA
-#define SCSIOPC_WRITE_16 0x8A
-#define SCSIOPC_WRITE_VERIFY 0x2E
-#define SCSIOPC_VERIFY_10 0x2F
-#define SCSIOPC_VERIFY_12 0xAF
-#define SCSIOPC_VERIFY_16 0x8F
-#define SCSIOPC_REQUEST_SENSE 0x03
-#define SCSIOPC_REPORT_LUN 0xA0
-#define SCSIOPC_FORMAT_UNIT 0x04
-#define SCSIOPC_SEND_DIAGNOSTIC 0x1D
-#define SCSIOPC_WRITE_SAME_10 0x41
-#define SCSIOPC_WRITE_SAME_16 0x93
-#define SCSIOPC_READ_BUFFER 0x3C
-#define SCSIOPC_WRITE_BUFFER 0x3B
-
-#define SCSIOPC_LOG_SENSE 0x4D
-#define SCSIOPC_MODE_SELECT_10 0x55
-#define SCSIOPC_SYNCHRONIZE_CACHE_10 0x35
-#define SCSIOPC_SYNCHRONIZE_CACHE_16 0x91
-#define SCSIOPC_WRITE_AND_VERIFY_10 0x2E
-#define SCSIOPC_WRITE_AND_VERIFY_12 0xAE
-#define SCSIOPC_WRITE_AND_VERIFY_16 0x8E
-#define SCSIOPC_READ_MEDIA_SERIAL_NUMBER 0xAB
-#define SCSIOPC_REASSIGN_BLOCKS 0x07
-#define SCSIOPC_ATA_PASSTHROUGH_16 0x85
-#define SCSIOPC_ATA_PASSTHROUGH_12 0xa1
-
-/*
- * SCSI end
- */
-#define IS_READ_COMMAND(opcode) \
- ((opcode) == SCSIOPC_READ_6 || (opcode) == SCSIOPC_READ_10 || \
- (opcode) == SCSIOPC_READ_12 || (opcode) == SCSIOPC_READ_16)
-#define IS_WRITE_COMMAND(opcode) \
- ((opcode) == SCSIOPC_WRITE_6 || (opcode) == SCSIOPC_WRITE_10 || \
- (opcode) == SCSIOPC_WRITE_12 || (opcode) == SCSIOPC_WRITE_16)
-
-#define IS_VERIFY_COMMAND(opcode) \
- ((opcode) == SCSIOPC_VERIFY_10 || (opcode) == SCSIOPC_VERIFY_12 || \
- (opcode) == SCSIOPC_VERIFY_16)
-
-#define FCP_RSP_LEN_VALID_MASK 0x1
-#define FCP_SNS_LEN_VALID_MASK 0x2
-#define FCP_RESID_OVER_MASK 0x4
-#define FCP_RESID_UNDER_MASK 0x8
-#define FCP_CONF_REQ_MASK 0x10
-#define FCP_SCSI_STATUS_GOOD 0x0
-
-#define UNF_DELAYED_WORK_SYNC(ret, port_id, work, work_symb) \
- do { \
- if (!cancel_delayed_work_sync(work)) { \
- FC_DRV_PRINT(UNF_LOG_REG_ATT, \
- UNF_INFO, \
- "[info]LPort or RPort(0x%x) %s worker " \
- "can't destroy, or no " \
- "worker", \
- port_id, work_symb); \
- ret = UNF_RETURN_ERROR; \
- } else { \
- ret = RETURN_OK; \
- } \
- } while (0)
-
-#define UNF_GET_SFS_ENTRY(pkg) ((union unf_sfs_u *)(void *)(((struct unf_frame_pkg *)(pkg)) \
- ->unf_cmnd_pload_bl.buffer_ptr))
-/* FLOGI */
-#define UNF_GET_FLOGI_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->flogi.flogi_payload))
-#define UNF_FLOGI_PAYLOAD_LEN sizeof(struct unf_flogi_fdisc_payload)
-
-/* FLOGI ACC */
-#define UNF_GET_FLOGI_ACC_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg))) \
- ->flogi_acc.flogi_payload))
-#define UNF_FLOGI_ACC_PAYLOAD_LEN sizeof(struct unf_flogi_fdisc_payload)
-
-/* FDISC */
-#define UNF_FDISC_PAYLOAD_LEN UNF_FLOGI_PAYLOAD_LEN
-#define UNF_FDISC_ACC_PAYLOAD_LEN UNF_FLOGI_ACC_PAYLOAD_LEN
-
-/* PLOGI */
-#define UNF_GET_PLOGI_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->plogi.payload))
-#define UNF_PLOGI_PAYLOAD_LEN sizeof(struct unf_plogi_payload)
-
-/* PLOGI ACC */
-#define UNF_GET_PLOGI_ACC_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->plogi_acc.payload))
-#define UNF_PLOGI_ACC_PAYLOAD_LEN sizeof(struct unf_plogi_payload)
-
-/* LOGO */
-#define UNF_GET_LOGO_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->logo.payload))
-#define UNF_LOGO_PAYLOAD_LEN sizeof(struct unf_logo_payload)
-
-/* ECHO */
-#define UNF_GET_ECHO_PAYLOAD(pkg) \
- (((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->echo.echo_pld)
-
-/* ECHO PHYADDR */
-#define UNF_GET_ECHO_PAYLOAD_PHYADDR(pkg) \
- (((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->echo.phy_echo_addr)
-
-#define UNF_ECHO_PAYLOAD_LEN sizeof(struct unf_echo_payload)
-
-/* REC */
-#define UNF_GET_REC_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->rec.rec_pld))
-
-#define UNF_REC_PAYLOAD_LEN sizeof(struct unf_rec_pld)
-
-/* ECHO ACC */
-#define UNF_GET_ECHO_ACC_PAYLOAD(pkg) \
- (((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->echo_acc.echo_pld)
-#define UNF_ECHO_ACC_PAYLOAD_LEN sizeof(struct unf_echo_payload)
-
-/* RRQ */
-#define UNF_GET_RRQ_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->rrq.cmnd))
-#define UNF_RRQ_PAYLOAD_LEN \
- (sizeof(struct unf_rrq) - sizeof(struct unf_fc_head))
-
-/* PRLI */
-#define UNF_GET_PRLI_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->prli.payload))
-#define UNF_PRLI_PAYLOAD_LEN sizeof(struct unf_prli_payload)
-
-/* PRLI ACC */
-#define UNF_GET_PRLI_ACC_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->prli_acc.payload))
-#define UNF_PRLI_ACC_PAYLOAD_LEN sizeof(struct unf_prli_payload)
-
-/* PRLO */
-#define UNF_GET_PRLO_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->prlo.payload))
-#define UNF_PRLO_PAYLOAD_LEN sizeof(struct unf_prli_payload)
-
-#define UNF_GET_PRLO_ACC_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->prlo_acc.payload))
-#define UNF_PRLO_ACC_PAYLOAD_LEN sizeof(struct unf_prli_payload)
-
-/* PDISC */
-#define UNF_GET_PDISC_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->pdisc.payload))
-#define UNF_PDISC_PAYLOAD_LEN sizeof(struct unf_plogi_payload)
-
-/* PDISC ACC */
-#define UNF_GET_PDISC_ACC_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->pdisc_acc.payload))
-#define UNF_PDISC_ACC_PAYLOAD_LEN sizeof(struct unf_plogi_payload)
-
-/* ADISC */
-#define UNF_GET_ADISC_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->adisc.adisc_payl))
-#define UNF_ADISC_PAYLOAD_LEN sizeof(struct unf_adisc_payload)
-
-/* ADISC ACC */
-#define UNF_GET_ADISC_ACC_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->adisc_acc.adisc_payl))
-#define UNF_ADISC_ACC_PAYLOAD_LEN sizeof(struct unf_adisc_payload)
-
-/* RSCN ACC */
-#define UNF_GET_RSCN_ACC_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->els_acc.cmnd))
-#define UNF_RSCN_ACC_PAYLOAD_LEN \
- (sizeof(struct unf_els_acc) - sizeof(struct unf_fc_head))
-
-/* LOGO ACC */
-#define UNF_GET_LOGO_ACC_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->els_acc.cmnd))
-#define UNF_LOGO_ACC_PAYLOAD_LEN \
- (sizeof(struct unf_els_acc) - sizeof(struct unf_fc_head))
-
-/* RRQ ACC */
-#define UNF_GET_RRQ_ACC_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->els_acc.cmnd))
-#define UNF_RRQ_ACC_PAYLOAD_LEN \
- (sizeof(struct unf_els_acc) - sizeof(struct unf_fc_head))
-
-/* REC ACC */
-#define UNF_GET_REC_ACC_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)(UNF_GET_SFS_ENTRY(pkg)))->els_acc.cmnd))
-#define UNF_REC_ACC_PAYLOAD_LEN \
- (sizeof(struct unf_els_acc) - sizeof(struct unf_fc_head))
-
-/* GPN_ID */
-#define UNF_GET_GPNID_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->gpn_id.ctiu_pream))
-#define UNF_GPNID_PAYLOAD_LEN \
- (sizeof(struct unf_gpnid) - sizeof(struct unf_fc_head))
-
-#define UNF_GET_GPNID_RSP_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->gpn_id_rsp.ctiu_pream))
-#define UNF_GPNID_RSP_PAYLOAD_LEN \
- (sizeof(struct unf_gpnid_rsp) - sizeof(struct unf_fc_head))
-
-/* GNN_ID */
-#define UNF_GET_GNNID_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->gnn_id.ctiu_pream))
-#define UNF_GNNID_PAYLOAD_LEN \
- (sizeof(struct unf_gnnid) - sizeof(struct unf_fc_head))
-
-#define UNF_GET_GNNID_RSP_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->gnn_id_rsp.ctiu_pream))
-#define UNF_GNNID_RSP_PAYLOAD_LEN \
- (sizeof(struct unf_gnnid_rsp) - sizeof(struct unf_fc_head))
-
-/* GFF_ID */
-#define UNF_GET_GFFID_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->gff_id.ctiu_pream))
-#define UNF_GFFID_PAYLOAD_LEN \
- (sizeof(struct unf_gffid) - sizeof(struct unf_fc_head))
-
-#define UNF_GET_GFFID_RSP_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->gff_id_rsp.ctiu_pream))
-#define UNF_GFFID_RSP_PAYLOAD_LEN \
- (sizeof(struct unf_gffid_rsp) - sizeof(struct unf_fc_head))
-
-/* GID_FT/GID_PT */
-#define UNF_GET_GID_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg)) \
- ->get_id.gid_req.ctiu_pream))
-
-#define UNF_GID_PAYLOAD_LEN (sizeof(struct unf_ctiu_prem) + sizeof(u32))
-#define UNF_GET_GID_ACC_PAYLOAD(pkg) \
- (((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg)) \
- ->get_id.gid_rsp.gid_acc_pld)
-#define UNF_GID_ACC_PAYLOAD_LEN sizeof(struct unf_gid_acc_pld)
-
-/* RFT_ID */
-#define UNF_GET_RFTID_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->rft_id.ctiu_pream))
-#define UNF_RFTID_PAYLOAD_LEN \
- (sizeof(struct unf_rftid) - sizeof(struct unf_fc_head))
-
-#define UNF_GET_RFTID_RSP_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->rft_id_rsp.ctiu_pream))
-#define UNF_RFTID_RSP_PAYLOAD_LEN sizeof(struct unf_ctiu_prem)
-
-/* RFF_ID */
-#define UNF_GET_RFFID_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->rff_id.ctiu_pream))
-#define UNF_RFFID_PAYLOAD_LEN \
- (sizeof(struct unf_rffid) - sizeof(struct unf_fc_head))
-
-#define UNF_GET_RFFID_RSP_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->rff_id_rsp.ctiu_pream))
-#define UNF_RFFID_RSP_PAYLOAD_LEN sizeof(struct unf_ctiu_prem)
-
-/* ACC&RJT */
-#define UNF_GET_ELS_ACC_RJT_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->els_rjt.cmnd))
-#define UNF_ELS_ACC_RJT_LEN \
- (sizeof(struct unf_els_rjt) - sizeof(struct unf_fc_head))
-
-/* SCR */
-#define UNF_SCR_PAYLOAD(pkg) \
- (((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->scr.payload)
-#define UNF_SCR_PAYLOAD_LEN \
- (sizeof(struct unf_scr) - sizeof(struct unf_fc_head))
-
-#define UNF_SCR_RSP_PAYLOAD(pkg) \
- (&(((union unf_sfs_u *)UNF_GET_SFS_ENTRY(pkg))->els_acc.cmnd))
-#define UNF_SCR_RSP_PAYLOAD_LEN \
- (sizeof(struct unf_els_acc) - sizeof(struct unf_fc_head))
-
-#define UNF_GS_RSP_PAYLOAD_LEN \
- (sizeof(union unf_sfs_u) - sizeof(struct unf_fc_head))
-
-#define UNF_GET_XCHG_TAG(pkg) \
- (((struct unf_frame_pkg *)(pkg)) \
- ->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX])
-#define UNF_GET_ABTS_XCHG_TAG(pkg) \
- ((u16)(((pkg)->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]) >> 16))
-#define UNF_GET_IO_XCHG_TAG(pkg) \
- ((u16)((pkg)->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]))
-
-#define UNF_GET_HOTPOOL_TAG(pkg) \
- (((struct unf_frame_pkg *)(pkg)) \
- ->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX])
-#define UNF_GET_SID(pkg) \
- (((struct unf_frame_pkg *)(pkg))->frame_head.csctl_sid & \
- UNF_NPORTID_MASK)
-#define UNF_GET_DID(pkg) \
- (((struct unf_frame_pkg *)(pkg))->frame_head.rctl_did & \
- UNF_NPORTID_MASK)
-#define UNF_GET_OXID(pkg) \
- (((struct unf_frame_pkg *)(pkg))->frame_head.oxid_rxid >> 16)
-#define UNF_GET_RXID(pkg) \
- ((u16)((struct unf_frame_pkg *)(pkg))->frame_head.oxid_rxid)
-#define UNF_GET_XID_RELEASE_TIMER(pkg) \
- (((struct unf_frame_pkg *)(pkg))->release_task_id_timer)
-#define UNF_GETXCHGALLOCTIME(pkg) \
- (((struct unf_frame_pkg *)(pkg)) \
- ->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME])
-
-#define UNF_SET_XCHG_ALLOC_TIME(pkg, xchg) \
- (((struct unf_frame_pkg *)(pkg)) \
- ->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = \
- (((struct unf_xchg *)(xchg)) \
- ->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME]))
-#define UNF_SET_ABORT_INFO_IOTYPE(pkg, xchg) \
- (((struct unf_frame_pkg *)(pkg)) \
- ->private_data[PKG_PRIVATE_XCHG_ABORT_INFO] |= \
- (((u8)(((struct unf_xchg *)(xchg))->data_direction & 0x7)) \
- << 2))
-
-#define UNF_CHECK_NPORT_FPORT_BIT(els_payload) \
- (((struct unf_flogi_fdisc_payload *)(els_payload)) \
- ->fabric_parms.co_parms.nport)
-
-#define UNF_GET_RSP_BUF(pkg) \
- ((void *)(((struct unf_frame_pkg *)(pkg))->unf_rsp_pload_bl.buffer_ptr))
-#define UNF_GET_RSP_LEN(pkg) \
- (((struct unf_frame_pkg *)(pkg))->unf_rsp_pload_bl.length)
-
-#define UNF_N_PORT 0
-#define UNF_F_PORT 1
-
-#define UNF_GET_RA_TOV_FROM_PARAMS(pfcparams) \
- (((struct unf_fabric_parm *)(pfcparams))->co_parms.r_a_tov)
-#define UNF_GET_RT_TOV_FROM_PARAMS(pfcparams) \
- (((struct unf_fabric_parm *)(pfcparams))->co_parms.r_t_tov)
-#define UNF_GET_E_D_TOV_FROM_PARAMS(pfcparams) \
- (((struct unf_fabric_parm *)(pfcparams))->co_parms.e_d_tov)
-#define UNF_GET_E_D_TOV_RESOLUTION_FROM_PARAMS(pfcparams) \
- (((struct unf_fabric_parm *)(pfcparams))->co_parms.e_d_tov_resolution)
-#define UNF_GET_BB_SC_N_FROM_PARAMS(pfcparams) \
- (((struct unf_fabric_parm *)(pfcparams))->co_parms.bbscn)
-#define UNF_GET_BB_CREDIT_FROM_PARAMS(pfcparams) \
- (((struct unf_fabric_parm *)(pfcparams))->co_parms.bb_credit)
-
-enum unf_pcie_error_code {
- UNF_PCIE_ERROR_NONE = 0,
- UNF_PCIE_DATAPARITYDETECTED = 1,
- UNF_PCIE_SIGNALTARGETABORT,
- UNF_PCIE_RECEIVEDTARGETABORT,
- UNF_PCIE_RECEIVEDMASTERABORT,
- UNF_PCIE_SIGNALEDSYSTEMERROR,
- UNF_PCIE_DETECTEDPARITYERROR,
- UNF_PCIE_CORRECTABLEERRORDETECTED,
- UNF_PCIE_NONFATALERRORDETECTED,
- UNF_PCIE_FATALERRORDETECTED,
- UNF_PCIE_UNSUPPORTEDREQUESTDETECTED,
- UNF_PCIE_AUXILIARYPOWERDETECTED,
- UNF_PCIE_TRANSACTIONSPENDING,
-
- UNF_PCIE_UNCORRECTINTERERRSTATUS,
- UNF_PCIE_UNSUPPORTREQERRSTATUS,
- UNF_PCIE_ECRCERRORSTATUS,
- UNF_PCIE_MALFORMEDTLPSTATUS,
- UNF_PCIE_RECEIVEROVERFLOWSTATUS,
- UNF_PCIE_UNEXPECTCOMPLETESTATUS,
- UNF_PCIE_COMPLETERABORTSTATUS,
- UNF_PCIE_COMPLETIONTIMEOUTSTATUS,
- UNF_PCIE_FLOWCTRLPROTOCOLERRSTATUS,
- UNF_PCIE_POISONEDTLPSTATUS,
- UNF_PCIE_SURPRISEDOWNERRORSTATUS,
- UNF_PCIE_DATALINKPROTOCOLERRSTATUS,
- UNF_PCIE_ADVISORYNONFATALERRSTATUS,
- UNF_PCIE_REPLAYTIMERTIMEOUTSTATUS,
- UNF_PCIE_REPLAYNUMROLLOVERSTATUS,
- UNF_PCIE_BADDLLPSTATUS,
- UNF_PCIE_BADTLPSTATUS,
- UNF_PCIE_RECEIVERERRORSTATUS,
-
- UNF_PCIE_BUTT
-};
-
-#define UNF_DMA_HI32(a) (((a) >> 32) & 0xffffffff)
-#define UNF_DMA_LO32(a) ((a) & 0xffffffff)
-
-#define UNF_WWN_LEN 8
-#define UNF_MAC_LEN 6
-
-/* send BLS/ELS/BLS REPLY/ELS REPLY/GS/ */
-/* rcvd BLS/ELS/REQ DONE/REPLY DONE */
-#define UNF_PKG_BLS_REQ 0x0100
-#define UNF_PKG_BLS_REQ_DONE 0x0101
-#define UNF_PKG_BLS_REPLY 0x0102
-#define UNF_PKG_BLS_REPLY_DONE 0x0103
-
-#define UNF_PKG_ELS_REQ 0x0200
-#define UNF_PKG_ELS_REQ_DONE 0x0201
-
-#define UNF_PKG_ELS_REPLY 0x0202
-#define UNF_PKG_ELS_REPLY_DONE 0x0203
-
-#define UNF_PKG_GS_REQ 0x0300
-#define UNF_PKG_GS_REQ_DONE 0x0301
-
-#define UNF_PKG_TGT_XFER 0x0400
-#define UNF_PKG_TGT_RSP 0x0401
-#define UNF_PKG_TGT_RSP_NOSGL 0x0402
-#define UNF_PKG_TGT_RSP_STATUS 0x0403
-
-#define UNF_PKG_INI_IO 0x0500
-#define UNF_PKG_INI_RCV_TGT_RSP 0x0507
-
-/* external sgl struct start */
-struct unf_esgl_page {
- u64 page_address;
- dma_addr_t esgl_phy_addr;
- u32 page_size;
-};
-
-/* external sgl struct end */
-struct unf_esgl {
- struct list_head entry_esgl;
- struct unf_esgl_page page;
-};
-
-#define UNF_RESPONE_DATA_LEN 8
-struct unf_frame_payld {
- u8 *buffer_ptr;
- dma_addr_t buf_dma_addr;
- u32 length;
-};
-
-enum pkg_private_index {
- PKG_PRIVATE_LOWLEVEL_XCHG_ADD = 0,
- PKG_PRIVATE_XCHG_HOT_POOL_INDEX = 1, /* Hot Pool Index */
- PKG_PRIVATE_XCHG_RPORT_INDEX = 2, /* RPort index */
- PKG_PRIVATE_XCHG_VP_INDEX = 3, /* VPort index */
- PKG_PRIVATE_XCHG_SSQ_INDEX,
- PKG_PRIVATE_RPORT_RX_SIZE,
- PKG_PRIVATE_XCHG_TIMEER,
- PKG_PRIVATE_XCHG_ALLOC_TIME,
- PKG_PRIVATE_XCHG_ABORT_INFO,
- PKG_PRIVATE_ECHO_CMD_SND_TIME, /* local send echo cmd time stamp */
- PKG_PRIVATE_ECHO_ACC_RCV_TIME, /* local receive echo acc time stamp */
- PKG_PRIVATE_ECHO_CMD_RCV_TIME, /* remote receive echo cmd time stamp */
- PKG_PRIVATE_ECHO_RSP_SND_TIME, /* remote send echo rsp time stamp */
- PKG_MAX_PRIVATE_DATA_SIZE
-};
-
-extern u32 dix_flag;
-extern u32 dif_sgl_mode;
-extern u32 dif_app_esc_check;
-extern u32 dif_ref_esc_check;
-
-#define UNF_DIF_ACTION_NONE 0
-
-enum unf_adm_dif_mode_E {
- UNF_SWITCH_DIF_DIX = 0,
- UNF_APP_REF_ESCAPE,
- ALL_DIF_MODE = 20,
-};
-
-#define UNF_DIF_CRC_ERR 0x1001
-#define UNF_DIF_APP_ERR 0x1002
-#define UNF_DIF_LBA_ERR 0x1003
-
-#define UNF_VERIFY_CRC_MASK (1 << 1)
-#define UNF_VERIFY_APP_MASK (1 << 2)
-#define UNF_VERIFY_LBA_MASK (1 << 3)
-
-#define UNF_REPLACE_CRC_MASK (1 << 8)
-#define UNF_REPLACE_APP_MASK (1 << 9)
-#define UNF_REPLACE_LBA_MASK (1 << 10)
-
-#define UNF_DIF_ACTION_MASK (0xff << 16)
-#define UNF_DIF_ACTION_INSERT (0x1 << 16)
-#define UNF_DIF_ACTION_VERIFY_AND_DELETE (0x2 << 16)
-#define UNF_DIF_ACTION_VERIFY_AND_FORWARD (0x3 << 16)
-#define UNF_DIF_ACTION_VERIFY_AND_REPLACE (0x4 << 16)
-
-#define UNF_DIF_ACTION_NO_INCREASE_REFTAG (0x1 << 24)
-
-#define UNF_DEFAULT_CRC_GUARD_SEED (0)
-#define UNF_CAL_512_BLOCK_CNT(data_len) ((data_len) >> 9)
-#define UNF_CAL_BLOCK_CNT(data_len, sector_size) ((data_len) / (sector_size))
-#define UNF_CAL_CRC_BLK_CNT(crc_data_len, sector_size) \
- ((crc_data_len) / ((sector_size) + 8))
-
-#define UNF_DIF_DOUBLE_SGL (1 << 1)
-#define UNF_DIF_SECTSIZE_4KB (1 << 2)
-#define UNF_DIF_SECTSIZE_512 (0 << 2)
-#define UNF_DIF_LBA_NONE_INCREASE (1 << 3)
-#define UNF_DIF_TYPE3 (1 << 4)
-
-#define SECTOR_SIZE_512 512
-#define SECTOR_SIZE_4096 4096
-#define SPFC_DIF_APP_REF_ESC_NOT_CHECK 1
-#define SPFC_DIF_APP_REF_ESC_CHECK 0
-
-struct unf_dif {
- u16 crc;
- u16 app_tag;
- u32 lba;
-};
-
-enum unf_io_state { UNF_INI_IO = 0, UNF_TGT_XFER = 1, UNF_TGT_RSP = 2 };
-
-#define UNF_PKG_LAST_RESPONSE 0
-#define UNF_PKG_NOT_LAST_RESPONSE 1
-
-struct unf_frame_pkg {
- /* pkt type:BLS/ELS/FC4LS/CMND/XFER/RSP */
- u32 type;
- u32 last_pkg_flag;
- u32 fcp_conf_flag;
-
-#define UNF_FCP_RESPONSE_VALID 0x01
-#define UNF_FCP_SENSE_VALID 0x02
- u32 response_and_sense_valid_flag; /* resp and sense valid flag */
- u32 cmnd;
- struct unf_fc_head frame_head;
- u32 entry_count;
- void *xchg_contex;
- u32 transfer_len;
- u32 residus_len;
- u32 status;
- u32 status_sub_code;
- enum unf_io_state io_state;
- u32 qos_level;
- u32 private_data[PKG_MAX_PRIVATE_DATA_SIZE];
- struct unf_fcp_cmnd *fcp_cmnd;
- struct unf_dif_control_info dif_control;
- struct unf_frame_payld unf_cmnd_pload_bl;
- struct unf_frame_payld unf_rsp_pload_bl;
- struct unf_frame_payld unf_sense_pload_bl;
- void *upper_cmd;
- u32 abts_maker_status;
- u32 release_task_id_timer;
- u8 byte_orders;
- u8 rx_or_ox_id;
- u8 class_mode;
- u8 rsvd;
- u8 *peresp;
- u32 rcvrsp_len;
- ulong timeout;
- u32 origin_hottag;
- u32 origin_magicnum;
-};
-
-#define UNF_MAX_SFS_XCHG 2048
-#define UNF_RESERVE_SFS_XCHG 128 /* times on exchange mgr num */
-
-struct unf_lport_cfg_item {
- u32 port_id;
- u32 port_mode; /* INI(0x20), TGT(0x10), BOTH(0x30) */
- u32 port_topology; /* 0x3: loop, 0xc: p2p, 0xf: auto */
- u32 max_queue_depth;
- u32 max_io; /* Recommended Value 512-4096 */
- u32 max_login;
- u32 max_sfs_xchg;
- u32 port_speed; /* 0:auto 1:1Gbps 2:2Gbps 4:4Gbps 8:8Gbps 16:16Gbps */
- u32 tape_support; /* tape support */
- u32 fcp_conf; /* fcp confirm support */
- u32 bbscn;
-};
-
-struct unf_port_dynamic_info {
- u32 sfp_posion;
- u32 sfp_valid;
- u32 phy_link;
- u32 firmware_state;
- u32 cur_speed;
- u32 mailbox_timeout_cnt;
-};
-
-struct unf_port_intr_coalsec {
- u32 delay_timer;
- u32 depth;
-};
-
-struct unf_port_topo {
- u32 topo_cfg;
- enum unf_act_topo topo_act;
-};
-
-struct unf_port_transfer_para {
- u32 type;
- u32 value;
-};
-
-struct unf_buf {
- u8 *buf;
- u32 buf_len;
-};
-
-/* get ucode & up ver */
-#define SPFC_VER_LEN (16)
-#define SPFC_COMPILE_TIME_LEN (20)
-struct unf_fw_version {
- u32 message_type;
- u8 fw_version[SPFC_VER_LEN];
-};
-
-struct unf_port_wwn {
- u64 sys_port_wwn;
- u64 sys_node_name;
-};
-
-enum unf_port_config_set_op {
- UNF_PORT_CFG_SET_SPEED,
- UNF_PORT_CFG_SET_PORT_SWITCH,
- UNF_PORT_CFG_SET_POWER_STATE,
- UNF_PORT_CFG_SET_PORT_STATE,
- UNF_PORT_CFG_UPDATE_WWN,
- UNF_PORT_CFG_TEST_FLASH,
- UNF_PORT_CFG_UPDATE_FABRIC_PARAM,
- UNF_PORT_CFG_UPDATE_PLOGI_PARAM,
- UNF_PORT_CFG_SET_BUTT
-};
-
-enum unf_port_cfg_get_op {
- UNF_PORT_CFG_GET_TOPO_ACT,
- UNF_PORT_CFG_GET_LOOP_MAP,
- UNF_PORT_CFG_GET_SFP_PRESENT,
- UNF_PORT_CFG_GET_FW_VER,
- UNF_PORT_CFG_GET_HW_VER,
- UNF_PORT_CFG_GET_WORKBALE_BBCREDIT,
- UNF_PORT_CFG_GET_WORKBALE_BBSCN,
- UNF_PORT_CFG_GET_FC_SERDES,
- UNF_PORT_CFG_GET_LOOP_ALPA,
- UNF_PORT_CFG_GET_MAC_ADDR,
- UNF_PORT_CFG_GET_SFP_VER,
- UNF_PORT_CFG_GET_SFP_SUPPORT_UPDATE,
- UNF_PORT_CFG_GET_SFP_LOG,
- UNF_PORT_CFG_GET_PCIE_LINK_STATE,
- UNF_PORT_CFG_GET_FLASH_DATA_INFO,
- UNF_PORT_CFG_GET_BUTT,
-};
-
-enum unf_port_config_state {
- UNF_PORT_CONFIG_STATE_START,
- UNF_PORT_CONFIG_STATE_STOP,
- UNF_PORT_CONFIG_STATE_RESET,
- UNF_PORT_CONFIG_STATE_STOP_INTR,
- UNF_PORT_CONFIG_STATE_BUTT
-};
-
-enum unf_port_config_update {
- UNF_PORT_CONFIG_UPDATE_FW_MINIMUM,
- UNF_PORT_CONFIG_UPDATE_FW_ALL,
- UNF_PORT_CONFIG_UPDATE_BUTT
-};
-
-enum unf_disable_vp_mode {
- UNF_DISABLE_VP_MODE_ONLY = 0x8,
- UNF_DISABLE_VP_MODE_REINIT_LINK = 0x9,
- UNF_DISABLE_VP_MODE_NOFAB_LOGO = 0xA,
- UNF_DISABLE_VP_MODE_LOGO_ALL = 0xB
-};
-
-struct unf_vport_info {
- u16 vp_index;
- u64 node_name;
- u64 port_name;
- u32 port_mode; /* INI, TGT or both */
- enum unf_disable_vp_mode disable_mode;
- u32 nport_id; /* maybe acquired by lowlevel and update to common */
- void *vport;
-};
-
-struct unf_port_login_parms {
- enum unf_act_topo act_topo;
-
- u32 rport_index;
- u32 seq_cnt : 1;
- u32 ed_tov : 1;
- u32 reserved : 14;
- u32 tx_mfs : 16;
- u32 ed_tov_timer_val;
-
- u8 remote_rttov_tag;
- u8 remote_edtov_tag;
- u16 remote_bb_credit;
- u16 compared_bbscn;
- u32 compared_edtov_val;
- u32 compared_ratov_val;
- u32 els_cmnd_code;
-};
-
-struct unf_mbox_head_info {
- /* mbox header */
- u8 cmnd_type;
- u8 length;
- u8 port_id;
- u8 pad0;
-
- /* operation */
- u32 opcode : 4;
- u32 pad1 : 28;
-};
-
-struct unf_mbox_head_sts {
- /* mbox header */
- u8 cmnd_type;
- u8 length;
- u8 port_id;
- u8 pad0;
-
- /* operation */
- u16 pad1;
- u8 pad2;
- u8 status;
-};
-
-struct unf_low_level_service_op {
- u32 (*unf_ls_gs_send)(void *hba, struct unf_frame_pkg *pkg);
- u32 (*unf_bls_send)(void *hba, struct unf_frame_pkg *pkg);
- u32 (*unf_cmnd_send)(void *hba, struct unf_frame_pkg *pkg);
- u32 (*unf_rsp_send)(void *handle, struct unf_frame_pkg *pkg);
- u32 (*unf_release_rport_res)(void *handle, struct unf_port_info *rport_info);
- u32 (*unf_flush_ini_resp_que)(void *handle);
- u32 (*unf_alloc_rport_res)(void *handle, struct unf_port_info *rport_info);
- u32 (*ll_release_xid)(void *handle, struct unf_frame_pkg *pkg);
- u32 (*unf_xfer_send)(void *handle, struct unf_frame_pkg *pkg);
-};
-
-struct unf_low_level_port_mgr_op {
- /* fcport/opcode/input parameter */
- u32 (*ll_port_config_set)(void *fc_port, enum unf_port_config_set_op opcode, void *para_in);
-
- /* fcport/opcode/output parameter */
- u32 (*ll_port_config_get)(void *fc_port, enum unf_port_cfg_get_op opcode, void *para_out);
-};
-
-struct unf_chip_info {
- u8 chip_type;
- u8 chip_work_mode;
- u8 disable_err_flag;
-};
-
-struct unf_low_level_functioon_op {
- struct unf_chip_info chip_info;
- /* low level type */
- u32 low_level_type;
- const char *name;
- struct pci_dev *dev;
- u64 sys_node_name;
- u64 sys_port_name;
- struct unf_lport_cfg_item lport_cfg_items;
-#define UNF_LOW_LEVEL_MGR_TYPE_ACTIVE 0
-#define UNF_LOW_LEVEL_MGR_TYPE_PASSTIVE 1
- const u32 xchg_mgr_type;
-
-#define UNF_NO_EXTRA_ABTS_XCHG 0x0
-#define UNF_LL_IOC_ABTS_XCHG 0x1
- const u32 abts_xchg;
-
-#define UNF_CM_RPORT_SET_QUALIFIER 0x0
-#define UNF_CM_RPORT_SET_QUALIFIER_REUSE 0x1
-#define UNF_CM_RPORT_SET_QUALIFIER_SPFC 0x2
-
- /* low level pass-through flag. */
-#define UNF_LOW_LEVEL_PASS_THROUGH_FIP 0x0
-#define UNF_LOW_LEVEL_PASS_THROUGH_FABRIC_LOGIN 0x1
-#define UNF_LOW_LEVEL_PASS_THROUGH_PORT_LOGIN 0x2
- u32 passthrough_flag;
-
- /* low level parameter */
- u32 support_max_npiv_num;
- u32 support_max_ssq_num;
- u32 support_max_speed;
- u32 support_min_speed;
- u32 fc_ser_max_speed;
-
- u32 support_max_rport;
-
- u32 support_max_hot_tag_range;
- u32 sfp_type;
- u32 update_fw_reset_active;
- u32 support_upgrade_report;
- u32 multi_conf_support;
- u32 port_type;
-#define UNF_LOW_LEVEL_RELEASE_RPORT_SYNC 0x0
-#define UNF_LOW_LEVEL_RELEASE_RPORT_ASYNC 0x1
- u8 rport_release_type;
-#define UNF_LOW_LEVEL_SIRT_PAGE_MODE_FIXED 0x0
-#define UNF_LOW_LEVEL_SIRT_PAGE_MODE_XCHG 0x1
- u8 sirt_page_mode;
- u8 sfp_speed;
-
- /* IO reference */
- struct unf_low_level_service_op service_op;
-
- /* Port Mgr reference */
- struct unf_low_level_port_mgr_op port_mgr_op;
-
- u8 chip_id;
-};
-
-struct unf_cm_handle_op {
- /* return:L_Port */
- void *(*unf_alloc_local_port)(void *private_data,
- struct unf_low_level_functioon_op *low_level_op);
-
- /* input para:L_Port */
- u32 (*unf_release_local_port)(void *lport);
-
- /* input para:L_Port, FRAME_PKG_S */
- u32 (*unf_receive_ls_gs_pkg)(void *lport, struct unf_frame_pkg *pkg);
-
- /* input para:L_Port, FRAME_PKG_S */
- u32 (*unf_receive_bls_pkg)(void *lport, struct unf_frame_pkg *pkg);
- /* input para:L_Port, FRAME_PKG_S */
- u32 (*unf_send_els_done)(void *lport, struct unf_frame_pkg *pkg);
-
- /* input para:L_Port, FRAME_PKG_S */
- u32 (*unf_receive_marker_status)(void *lport, struct unf_frame_pkg *pkg);
- u32 (*unf_receive_abts_marker_status)(void *lport, struct unf_frame_pkg *pkg);
- /* input para:L_Port, FRAME_PKG_S */
- u32 (*unf_receive_ini_response)(void *lport, struct unf_frame_pkg *pkg);
-
- int (*unf_get_cfg_parms)(char *section_name,
- struct unf_cfg_item *cfg_parm, u32 *cfg_value,
- u32 item_num);
-
- /* TGT IO interface */
- u32 (*unf_process_fcp_cmnd)(void *lport, struct unf_frame_pkg *pkg);
-
- /* TGT IO Done */
- u32 (*unf_tgt_cmnd_xfer_or_rsp_echo)(void *lport, struct unf_frame_pkg *pkg);
-
- u32 (*unf_cm_get_sgl_entry)(void *pkg, char **buf, u32 *buf_len);
- u32 (*unf_cm_get_dif_sgl_entry)(void *pkg, char **buf, u32 *buf_len);
-
- struct unf_esgl_page *(*unf_get_one_free_esgl_page)(void *lport, struct unf_frame_pkg *pkg);
-
- /* input para:L_Port, EVENT */
- u32 (*unf_fc_port_event)(void *lport, u32 events, void *input);
-
- int (*unf_drv_start_work)(void *lport);
-
- void (*unf_card_rport_chip_err)(struct pci_dev const *pci_dev);
-};
-
-u32 unf_get_cm_handle_ops(struct unf_cm_handle_op *cm_handle);
-int unf_common_init(void);
-void unf_common_exit(void);
-
-#endif
diff --git a/drivers/scsi/spfc/common/unf_disc.c b/drivers/scsi/spfc/common/unf_disc.c
deleted file mode 100644
index c48d0ba670d4..000000000000
--- a/drivers/scsi/spfc/common/unf_disc.c
+++ /dev/null
@@ -1,1276 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "unf_disc.h"
-#include "unf_log.h"
-#include "unf_common.h"
-#include "unf_event.h"
-#include "unf_lport.h"
-#include "unf_rport.h"
-#include "unf_exchg.h"
-#include "unf_ls.h"
-#include "unf_gs.h"
-#include "unf_portman.h"
-
-#define UNF_LIST_RSCN_PAGE_CNT 2560
-#define UNF_MAX_PORTS_PRI_LOOP 2
-#define UNF_MAX_GS_SEND_NUM 8
-#define UNF_OS_REMOVE_CARD_TIMEOUT (60 * 1000)
-
-static void unf_set_disc_state(struct unf_disc *disc,
- enum unf_disc_state states)
-{
- FC_CHECK_RETURN_VOID(disc);
-
- if (states != disc->states) {
- /* Reset disc retry count */
- disc->retry_count = 0;
- }
-
- disc->states = states;
-}
-
-static inline u32 unf_get_loop_map(struct unf_lport *lport, u8 loop_map[], u32 loop_map_size)
-{
- struct unf_buf buf = {0};
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(lport->low_level_func.port_mgr_op.ll_port_config_get,
- UNF_RETURN_ERROR);
-
- buf.buf = loop_map;
- buf.buf_len = loop_map_size;
-
- ret = lport->low_level_func.port_mgr_op.ll_port_config_get(lport->fc_port,
- UNF_PORT_CFG_GET_LOOP_MAP,
- (void *)&buf);
- return ret;
-}
-
-static void unf_login_with_loop_node(struct unf_lport *lport, u32 alpa)
-{
- /* Only used for Private Loop LOGIN */
- struct unf_rport *unf_rport = NULL;
- ulong rport_flag = 0;
- u32 port_feature = 0;
- u32 ret;
-
- /* Check AL_PA validity */
- if (lport->nport_id == alpa) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) is the same as RPort with AL_PA(0x%x), do nothing",
- lport->port_id, alpa);
- return;
- }
-
- if (alpa == 0) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) RPort(0x%x) is fabric, do nothing",
- lport->port_id, alpa);
- return;
- }
-
- /* Get & set R_Port: reuse only */
- unf_rport = unf_get_rport_by_nport_id(lport, alpa);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Port(0x%x_0x%x) RPort(0x%x_0x%p) login with private loop",
- lport->port_id, lport->nport_id, alpa, unf_rport);
-
- unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY, alpa);
- if (!unf_rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) allocate new RPort(0x%x) failed",
- lport->port_id, lport->nport_id, alpa);
- return;
- }
-
- /* Update R_Port state & N_Port_ID */
- spin_lock_irqsave(&unf_rport->rport_state_lock, rport_flag);
- unf_rport->nport_id = alpa;
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, rport_flag);
-
- /* Private Loop: check whether need delay to send PLOGI or not */
- port_feature = unf_rport->options;
-
- /* check Rport and Lport feature */
- if (port_feature == UNF_PORT_MODE_UNKNOWN &&
- lport->options == UNF_PORT_MODE_INI) {
- /* Start to send PLOGI */
- ret = unf_send_plogi(lport, unf_rport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) send PLOGI to RPort(0x%x) failed",
- lport->port_id, lport->nport_id, unf_rport->nport_id);
-
- unf_rport_error_recovery(unf_rport);
- }
- } else {
- unf_check_rport_need_delay_plogi(lport, unf_rport, port_feature);
- }
-}
-
-static int unf_discover_private_loop(void *arg_in, void *arg_out)
-{
- struct unf_lport *unf_lport = (struct unf_lport *)arg_in;
- u32 ret = UNF_RETURN_ERROR;
- u32 i = 0;
- u8 loop_id = 0;
- u32 alpa_index = 0;
- u8 loop_map[UNF_LOOPMAP_COUNT];
-
- FC_CHECK_RETURN_VALUE(unf_lport, UNF_RETURN_ERROR);
- memset(loop_map, 0x0, UNF_LOOPMAP_COUNT);
-
- /* Get Port Loop Map */
- ret = unf_get_loop_map(unf_lport, loop_map, UNF_LOOPMAP_COUNT);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) get loop map failed", unf_lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- /* Check Loop Map Ports Count */
- if (loop_map[ARRAY_INDEX_0] > UNF_MAX_PORTS_PRI_LOOP) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) has more than %d ports(%u) in private loop",
- unf_lport->port_id, UNF_MAX_PORTS_PRI_LOOP, loop_map[ARRAY_INDEX_0]);
-
- return UNF_RETURN_ERROR;
- }
-
- /* AL_PA = 0 means Public Loop */
- if (loop_map[ARRAY_INDEX_1] == UNF_FL_PORT_LOOP_ADDR ||
- loop_map[ARRAY_INDEX_2] == UNF_FL_PORT_LOOP_ADDR) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) one or more AL_PA is 0x00, indicate it's FL_Port",
- unf_lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- /* Discover Private Loop Ports */
- for (i = 0; i < loop_map[ARRAY_INDEX_0]; i++) {
- alpa_index = i + 1;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) start to disc(0x%x) with count(0x%x)",
- unf_lport->port_id, loop_map[alpa_index], i);
-
- /* Check whether need delay to send PLOGI or not */
- loop_id = loop_map[alpa_index];
- unf_login_with_loop_node(unf_lport, (u32)loop_id);
- }
-
- return RETURN_OK;
-}
-
-u32 unf_disc_start(void *lport)
-{
- /*
- * Call by:
- * 1. Enter Private Loop Login
- * 2. Analysis RSCN payload
- * 3. SCR callback
- *
- * Doing:
- * Fabric/Public Loop: Send GID_PT
- * Private Loop: (delay to) send PLOGI or send LOGO immediately
- * P2P: do nothing
- */
- struct unf_lport *unf_lport = (struct unf_lport *)lport;
- struct unf_rport *unf_rport = NULL;
- struct unf_disc *disc = NULL;
- struct unf_cm_event_report *event = NULL;
- u32 ret = RETURN_OK;
- ulong flag = 0;
- enum unf_act_topo act_topo = UNF_ACT_TOP_UNKNOWN;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- act_topo = unf_lport->act_topo;
- disc = &unf_lport->disc;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]LOGIN: Port(0x%x) with topo(0x%x) begin to discovery",
- unf_lport->port_id, act_topo);
-
- if (act_topo == UNF_ACT_TOP_P2P_FABRIC ||
- act_topo == UNF_ACT_TOP_PUBLIC_LOOP) {
- /* 1. Fabric or Public Loop Topology: for directory server */
- unf_rport = unf_get_rport_by_nport_id(unf_lport,
- UNF_FC_FID_DIR_SERV); /* 0xfffffc */
- if (!unf_rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) unable to get SNS RPort(0xfffffc)",
- unf_lport->port_id);
-
- unf_rport = unf_rport_get_free_and_init(unf_lport, UNF_PORT_TYPE_FC,
- UNF_FC_FID_DIR_SERV);
- if (!unf_rport)
- return UNF_RETURN_ERROR;
-
- unf_rport->nport_id = UNF_FC_FID_DIR_SERV;
- }
-
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- unf_set_disc_state(disc, UNF_DISC_ST_START); /* disc start */
- unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_NORMAL_ENTER);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- /*
- * NOTE: Send GID_PT
- * The Name Server shall, when it receives a GID_PT request,
- * return all Port Identifiers having registered support for the
- * specified Port Type. One or more Port Identifiers, having
- * registered as the specified Port Type, are returned.
- */
- ret = unf_send_gid_pt(unf_lport, unf_rport);
- if (ret != RETURN_OK)
- unf_disc_error_recovery(unf_lport);
- } else if (act_topo == UNF_ACT_TOP_PRIVATE_LOOP) {
- /* Private Loop: to thread process */
- event = unf_get_one_event_node(unf_lport);
- FC_CHECK_RETURN_VALUE(event, UNF_RETURN_ERROR);
-
- event->lport = unf_lport;
- event->event_asy_flag = UNF_EVENT_ASYN;
- event->unf_event_task = unf_discover_private_loop;
- event->para_in = (void *)unf_lport;
-
- unf_post_one_event_node(unf_lport, event);
- } else {
- /* P2P topology mode: Do nothing */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) with topo(0x%x) need do nothing",
- unf_lport->port_id, act_topo);
- }
-
- return ret;
-}
-
-static u32 unf_disc_stop(void *lport)
-{
- /* Call by GID_ACC processor */
- struct unf_lport *unf_lport = NULL;
- struct unf_lport *root_lport = NULL;
- struct unf_rport *sns_port = NULL;
- struct unf_disc_rport *disc_rport = NULL;
- struct unf_disc *disc = NULL;
- struct unf_disc *root_disc = NULL;
- struct list_head *node = NULL;
- ulong flag = 0;
- u32 ret = RETURN_OK;
- u32 nport_id = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- unf_lport = (struct unf_lport *)lport;
- disc = &unf_lport->disc;
- root_lport = (struct unf_lport *)unf_lport->root_lport;
- root_disc = &root_lport->disc;
-
- /* Get R_Port for Directory server */
- sns_port = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_DIR_SERV);
- if (!sns_port) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) find fabric RPort(0xfffffc) failed",
- unf_lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- /* for R_Port from disc pool busy list */
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- if (list_empty(&disc->disc_rport_mgr.list_disc_rports_busy)) {
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
- return RETURN_OK;
- }
-
- node = UNF_OS_LIST_NEXT(&disc->disc_rport_mgr.list_disc_rports_busy);
- do {
- /* Delete from Disc busy list */
- disc_rport = list_entry(node, struct unf_disc_rport, entry_rport);
- nport_id = disc_rport->nport_id;
- list_del_init(node);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- /* Add back to (free) Disc R_Port pool (list) */
- spin_lock_irqsave(&root_disc->rport_busy_pool_lock, flag);
- list_add_tail(node, &root_disc->disc_rport_mgr.list_disc_rports_pool);
- spin_unlock_irqrestore(&root_disc->rport_busy_pool_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "Port(0x%x_0x%x) remove nportid:0x%x from rportbusy list",
- unf_lport->port_id, unf_lport->nport_id, disc_rport->nport_id);
- /* Send GNN_ID to Name Server */
- ret = unf_get_and_post_disc_event(unf_lport, sns_port, nport_id,
- UNF_DISC_GET_NODE_NAME);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
- unf_lport->nport_id, UNF_DISC_GET_NODE_NAME, nport_id);
-
- /* NOTE: go to next stage */
- unf_rcv_gnn_id_rsp_unknown(unf_lport, sns_port, nport_id);
- }
-
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- node = UNF_OS_LIST_NEXT(&disc->disc_rport_mgr.list_disc_rports_busy);
- } while (node != &disc->disc_rport_mgr.list_disc_rports_busy);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- return ret;
-}
-
-static u32 unf_init_rport_pool(struct unf_lport *lport)
-{
- struct unf_rport_pool *rport_pool = NULL;
- struct unf_rport *unf_rport = NULL;
- u32 ret = RETURN_OK;
- u32 i = 0;
- u32 bitmap_cnt = 0;
- ulong flag = 0;
- u32 max_login = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- /* Init RPort Pool info */
- rport_pool = &lport->rport_pool;
- max_login = lport->low_level_func.lport_cfg_items.max_login;
- rport_pool->rport_pool_completion = NULL;
- rport_pool->rport_pool_count = max_login;
- spin_lock_init(&rport_pool->rport_free_pool_lock);
- INIT_LIST_HEAD(&rport_pool->list_rports_pool); /* free RPort pool */
-
- /* 1. Alloc RPort Pool buffer/resource (memory) */
- rport_pool->rport_pool_add = vmalloc((size_t)(max_login * sizeof(struct unf_rport)));
- if (!rport_pool->rport_pool_add) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) allocate RPort(s) resource failed", lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
- memset(rport_pool->rport_pool_add, 0, (max_login * sizeof(struct unf_rport)));
-
- /* 2. Alloc R_Port Pool bitmap */
- bitmap_cnt = (lport->low_level_func.support_max_rport) / BITS_PER_LONG + 1;
- rport_pool->rpi_bitmap = vmalloc((size_t)(bitmap_cnt * sizeof(ulong)));
- if (!rport_pool->rpi_bitmap) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) allocate RPort Bitmap failed", lport->port_id);
-
- vfree(rport_pool->rport_pool_add);
- rport_pool->rport_pool_add = NULL;
- return UNF_RETURN_ERROR;
- }
- memset(rport_pool->rpi_bitmap, 0, (bitmap_cnt * sizeof(ulong)));
-
- /* 3. Rport resource Management: Add Rports (buffer) to Rport Pool List
- */
- unf_rport = (struct unf_rport *)(rport_pool->rport_pool_add);
- spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flag);
- for (i = 0; i < rport_pool->rport_pool_count; i++) {
- spin_lock_init(&unf_rport->rport_state_lock);
- list_add_tail(&unf_rport->entry_rport, &rport_pool->list_rports_pool);
- sema_init(&unf_rport->task_sema, 0);
- unf_rport++;
- }
- spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
-
- return ret;
-}
-
-static void unf_free_rport_pool(struct unf_lport *lport)
-{
- struct unf_rport_pool *rport_pool = NULL;
- bool wait = false;
- ulong flag = 0;
- u32 remain = 0;
- u64 timeout = 0;
- u32 max_login = 0;
- u32 i;
- struct unf_rport *unf_rport = NULL;
- struct completion rport_pool_completion;
-
- init_completion(&rport_pool_completion);
- FC_CHECK_RETURN_VOID(lport);
-
- rport_pool = &lport->rport_pool;
- max_login = lport->low_level_func.lport_cfg_items.max_login;
-
- spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flag);
- if (rport_pool->rport_pool_count != max_login) {
- rport_pool->rport_pool_completion = &rport_pool_completion;
- remain = max_login - rport_pool->rport_pool_count;
- wait = true;
- }
- spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
-
- if (wait) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) begin to wait for RPort pool completion, remain(0x%x)",
- lport->port_id, remain);
-
- unf_show_all_rport(lport);
-
- timeout = wait_for_completion_timeout(rport_pool->rport_pool_completion,
- msecs_to_jiffies(UNF_OS_REMOVE_CARD_TIMEOUT));
- if (timeout == 0)
- unf_cm_mark_dirty_mem(lport, UNF_LPORT_DIRTY_FLAG_RPORT_POOL_DIRTY);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) wait for RPort pool completion end",
- lport->port_id);
-
- spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flag);
- rport_pool->rport_pool_completion = NULL;
- spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
- }
-
- unf_rport = (struct unf_rport *)(rport_pool->rport_pool_add);
- for (i = 0; i < rport_pool->rport_pool_count; i++) {
- if (!unf_rport)
- break;
- unf_rport++;
- }
-
- if ((lport->dirty_flag & UNF_LPORT_DIRTY_FLAG_RPORT_POOL_DIRTY) == 0) {
- vfree(rport_pool->rport_pool_add);
- rport_pool->rport_pool_add = NULL;
- vfree(rport_pool->rpi_bitmap);
- rport_pool->rpi_bitmap = NULL;
- }
-}
-
-static void unf_init_rscn_node(struct unf_port_id_page *port_id_page)
-{
- FC_CHECK_RETURN_VOID(port_id_page);
-
- port_id_page->addr_format = 0;
- port_id_page->event_qualifier = 0;
- port_id_page->reserved = 0;
- port_id_page->port_id_area = 0;
- port_id_page->port_id_domain = 0;
- port_id_page->port_id_port = 0;
-}
-
-struct unf_port_id_page *unf_get_free_rscn_node(void *rscn_mg)
-{
- /* Call by Save RSCN Port_ID */
- struct unf_rscn_mgr *rscn_mgr = NULL;
- struct unf_port_id_page *port_id_node = NULL;
- struct list_head *list_node = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(rscn_mg, NULL);
- rscn_mgr = (struct unf_rscn_mgr *)rscn_mg;
-
- spin_lock_irqsave(&rscn_mgr->rscn_id_list_lock, flag);
- if (list_empty(&rscn_mgr->list_free_rscn_page)) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_WARN,
- "[warn]No RSCN node anymore");
-
- spin_unlock_irqrestore(&rscn_mgr->rscn_id_list_lock, flag);
- return NULL;
- }
-
- /* Get from list_free_RSCN_page */
- list_node = UNF_OS_LIST_NEXT(&rscn_mgr->list_free_rscn_page);
- list_del(list_node);
- rscn_mgr->free_rscn_count--;
- port_id_node = list_entry(list_node, struct unf_port_id_page, list_node_rscn);
- unf_init_rscn_node(port_id_node);
- spin_unlock_irqrestore(&rscn_mgr->rscn_id_list_lock, flag);
-
- return port_id_node;
-}
-
-static void unf_release_rscn_node(void *rscn_mg, void *port_id_node)
-{
- /* Call by RSCN GID_ACC */
- struct unf_rscn_mgr *rscn_mgr = NULL;
- struct unf_port_id_page *port_id_page = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(rscn_mg);
- FC_CHECK_RETURN_VOID(port_id_node);
- rscn_mgr = (struct unf_rscn_mgr *)rscn_mg;
- port_id_page = (struct unf_port_id_page *)port_id_node;
-
- /* Back to list_free_RSCN_page */
- spin_lock_irqsave(&rscn_mgr->rscn_id_list_lock, flag);
- rscn_mgr->free_rscn_count++;
- unf_init_rscn_node(port_id_page);
- list_add_tail(&port_id_page->list_node_rscn, &rscn_mgr->list_free_rscn_page);
- spin_unlock_irqrestore(&rscn_mgr->rscn_id_list_lock, flag);
-}
-
-static u32 unf_init_rscn_pool(struct unf_lport *lport)
-{
- struct unf_rscn_mgr *rscn_mgr = NULL;
- struct unf_port_id_page *port_id_page = NULL;
- u32 ret = RETURN_OK;
- u32 i = 0;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- rscn_mgr = &lport->disc.rscn_mgr;
-
- /* Get RSCN Pool buffer */
- rscn_mgr->rscn_pool_add = vmalloc(UNF_LIST_RSCN_PAGE_CNT * sizeof(struct unf_port_id_page));
- if (!rscn_mgr->rscn_pool_add) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Port(0x%x) allocate RSCN pool failed", lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
- memset(rscn_mgr->rscn_pool_add, 0,
- UNF_LIST_RSCN_PAGE_CNT * sizeof(struct unf_port_id_page));
-
- spin_lock_irqsave(&rscn_mgr->rscn_id_list_lock, flag);
- port_id_page = (struct unf_port_id_page *)(rscn_mgr->rscn_pool_add);
- for (i = 0; i < UNF_LIST_RSCN_PAGE_CNT; i++) {
- /* Add tail to list_free_RSCN_page */
- list_add_tail(&port_id_page->list_node_rscn, &rscn_mgr->list_free_rscn_page);
-
- rscn_mgr->free_rscn_count++;
- port_id_page++;
- }
- spin_unlock_irqrestore(&rscn_mgr->rscn_id_list_lock, flag);
-
- return ret;
-}
-
-static void unf_freerscn_pool(struct unf_lport *lport)
-{
- struct unf_disc *disc = NULL;
-
- FC_CHECK_RETURN_VOID(lport);
-
- disc = &lport->disc;
- if (disc->rscn_mgr.rscn_pool_add) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_INFO, "[info]Port(0x%x) free RSCN pool", lport->nport_id);
-
- vfree(disc->rscn_mgr.rscn_pool_add);
- disc->rscn_mgr.rscn_pool_add = NULL;
- }
-}
-
-static u32 unf_init_rscn_mgr(struct unf_lport *lport)
-{
- struct unf_rscn_mgr *rscn_mgr = NULL;
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- rscn_mgr = &lport->disc.rscn_mgr;
-
- INIT_LIST_HEAD(&rscn_mgr->list_free_rscn_page); /* free RSCN page list */
- INIT_LIST_HEAD(&rscn_mgr->list_using_rscn_page); /* busy RSCN page list */
- spin_lock_init(&rscn_mgr->rscn_id_list_lock);
- rscn_mgr->free_rscn_count = 0;
- rscn_mgr->unf_get_free_rscn_node = unf_get_free_rscn_node;
- rscn_mgr->unf_release_rscn_node = unf_release_rscn_node;
-
- ret = unf_init_rscn_pool(lport);
- return ret;
-}
-
-static void unf_destroy_rscn_mngr(struct unf_lport *lport)
-{
- struct unf_rscn_mgr *rscn_mgr = NULL;
-
- FC_CHECK_RETURN_VOID(lport);
- rscn_mgr = &lport->disc.rscn_mgr;
-
- rscn_mgr->free_rscn_count = 0;
- rscn_mgr->unf_get_free_rscn_node = NULL;
- rscn_mgr->unf_release_rscn_node = NULL;
-
- unf_freerscn_pool(lport);
-}
-
-static u32 unf_init_disc_rport_pool(struct unf_lport *lport)
-{
- struct unf_disc_rport_mg *disc_mgr = NULL;
- struct unf_disc_rport *disc_rport = NULL;
- u32 i = 0;
- u32 max_log_in = 0;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- max_log_in = lport->low_level_func.lport_cfg_items.max_login;
- disc_mgr = &lport->disc.disc_rport_mgr;
-
- /* Alloc R_Port Disc Pool buffer */
- disc_mgr->disc_pool_add =
- vmalloc(max_log_in * sizeof(struct unf_disc_rport));
- if (!disc_mgr->disc_pool_add) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Port(0x%x) allocate disc RPort pool failed", lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
- memset(disc_mgr->disc_pool_add, 0, (max_log_in * sizeof(struct unf_disc_rport)));
-
- /* Add R_Port to (free) DISC R_Port Pool */
- spin_lock_irqsave(&lport->disc.rport_busy_pool_lock, flag);
- disc_rport = (struct unf_disc_rport *)(disc_mgr->disc_pool_add);
- for (i = 0; i < max_log_in; i++) {
- /* Add tail to list_disc_Rport_pool */
- list_add_tail(&disc_rport->entry_rport, &disc_mgr->list_disc_rports_pool);
-
- disc_rport++;
- }
- spin_unlock_irqrestore(&lport->disc.rport_busy_pool_lock, flag);
-
- return RETURN_OK;
-}
-
-static void unf_free_disc_rport_pool(struct unf_lport *lport)
-{
- struct unf_disc *disc = NULL;
-
- FC_CHECK_RETURN_VOID(lport);
-
- disc = &lport->disc;
- if (disc->disc_rport_mgr.disc_pool_add) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_INFO, "[info]Port(0x%x) free disc RPort pool", lport->port_id);
-
- vfree(disc->disc_rport_mgr.disc_pool_add);
- disc->disc_rport_mgr.disc_pool_add = NULL;
- }
-}
-
-int unf_discover_port_info(void *arg_in)
-{
- struct unf_disc_gs_event_info *disc_gs_info = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
-
- FC_CHECK_RETURN_VALUE(arg_in, UNF_RETURN_ERROR);
-
- disc_gs_info = (struct unf_disc_gs_event_info *)arg_in;
- unf_lport = (struct unf_lport *)disc_gs_info->lport;
- unf_rport = (struct unf_rport *)disc_gs_info->rport;
-
- switch (disc_gs_info->type) {
- case UNF_DISC_GET_PORT_NAME:
- ret = unf_send_gpn_id(unf_lport, unf_rport, disc_gs_info->rport_id);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) send GPN_ID failed RPort(0x%x)",
- unf_lport->nport_id, disc_gs_info->rport_id);
- unf_rcv_gpn_id_rsp_unknown(unf_lport, disc_gs_info->rport_id);
- }
- break;
- case UNF_DISC_GET_FEATURE:
- ret = unf_send_gff_id(unf_lport, unf_rport, disc_gs_info->rport_id);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) send GFF_ID failed to get RPort(0x%x)'s feature",
- unf_lport->port_id, disc_gs_info->rport_id);
-
- unf_rcv_gff_id_rsp_unknown(unf_lport, disc_gs_info->rport_id);
- }
- break;
- case UNF_DISC_GET_NODE_NAME:
- ret = unf_send_gnn_id(unf_lport, unf_rport, disc_gs_info->rport_id);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) GNN_ID send failed with NPort ID(0x%x)",
- unf_lport->port_id, disc_gs_info->rport_id);
-
- /* NOTE: Continue to next stage */
- unf_rcv_gnn_id_rsp_unknown(unf_lport, unf_rport, disc_gs_info->rport_id);
- }
- break;
- default:
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
- "[err]Send GS packet type(0x%x) is unknown", disc_gs_info->type);
- }
-
- kfree(disc_gs_info);
-
- return (int)ret;
-}
-
-u32 unf_get_and_post_disc_event(void *lport, void *sns_port, u32 nport_id,
- enum unf_disc_type type)
-{
- struct unf_disc_gs_event_info *disc_gs_info = NULL;
- ulong flag = 0;
- struct unf_lport *root_lport = NULL;
- struct unf_lport *unf_lport = NULL;
- struct unf_disc_manage_info *disc_info = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(sns_port, UNF_RETURN_ERROR);
-
- unf_lport = (struct unf_lport *)lport;
-
- if (unf_lport->link_up == UNF_PORT_LINK_DOWN)
- return RETURN_OK;
-
- root_lport = unf_lport->root_lport;
- disc_info = &root_lport->disc.disc_thread_info;
-
- if (disc_info->thread_exit)
- return RETURN_OK;
-
- disc_gs_info = kmalloc(sizeof(struct unf_disc_gs_event_info), GFP_ATOMIC);
- if (!disc_gs_info)
- return UNF_RETURN_ERROR;
-
- disc_gs_info->type = type;
- disc_gs_info->lport = unf_lport;
- disc_gs_info->rport = sns_port;
- disc_gs_info->rport_id = nport_id;
-
- INIT_LIST_HEAD(&disc_gs_info->list_entry);
-
- spin_lock_irqsave(&disc_info->disc_event_list_lock, flag);
- list_add_tail(&disc_gs_info->list_entry, &disc_info->list_head);
- spin_unlock_irqrestore(&disc_info->disc_event_list_lock, flag);
- wake_up_process(disc_info->thread);
- return RETURN_OK;
-}
-
-static int unf_disc_event_process(void *arg)
-{
- struct list_head *node = NULL;
- struct unf_disc_gs_event_info *disc_gs_info = NULL;
- ulong flags = 0;
- struct unf_disc *disc = (struct unf_disc *)arg;
- struct unf_disc_manage_info *disc_info = &disc->disc_thread_info;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "Port(0x%x) enter discovery thread.", disc->lport->port_id);
-
- while (!kthread_should_stop()) {
- if (disc_info->thread_exit)
- break;
-
- spin_lock_irqsave(&disc_info->disc_event_list_lock, flags);
- if ((list_empty(&disc_info->list_head)) ||
- (atomic_read(&disc_info->disc_contrl_size) == 0)) {
- spin_unlock_irqrestore(&disc_info->disc_event_list_lock, flags);
-
- set_current_state(TASK_INTERRUPTIBLE);
- schedule_timeout((long)msecs_to_jiffies(UNF_S_TO_MS));
- } else {
- node = UNF_OS_LIST_NEXT(&disc_info->list_head);
- list_del_init(node);
- disc_gs_info = list_entry(node, struct unf_disc_gs_event_info, list_entry);
- spin_unlock_irqrestore(&disc_info->disc_event_list_lock, flags);
- unf_discover_port_info(disc_gs_info);
- }
- }
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
- "Port(0x%x) discovery thread over.", disc->lport->port_id);
-
- return RETURN_OK;
-}
-
-void unf_flush_disc_event(void *disc, void *vport)
-{
- struct unf_disc *unf_disc = (struct unf_disc *)disc;
- struct unf_disc_manage_info *disc_info = NULL;
- struct list_head *list = NULL;
- struct list_head *list_tmp = NULL;
- struct unf_disc_gs_event_info *disc_gs_info = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(disc);
-
- disc_info = &unf_disc->disc_thread_info;
-
- spin_lock_irqsave(&disc_info->disc_event_list_lock, flag);
- list_for_each_safe(list, list_tmp, &disc_info->list_head) {
- disc_gs_info = list_entry(list, struct unf_disc_gs_event_info, list_entry);
-
- if (!vport || disc_gs_info->lport == vport) {
- list_del_init(&disc_gs_info->list_entry);
- kfree(disc_gs_info);
- }
- }
-
- if (!vport)
- atomic_set(&disc_info->disc_contrl_size, UNF_MAX_GS_SEND_NUM);
- spin_unlock_irqrestore(&disc_info->disc_event_list_lock, flag);
-}
-
-void unf_disc_ctrl_size_inc(void *lport, u32 cmnd)
-{
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VOID(lport);
-
- unf_lport = (struct unf_lport *)lport;
- unf_lport = unf_lport->root_lport;
- FC_CHECK_RETURN_VOID(unf_lport);
-
- if (atomic_read(&unf_lport->disc.disc_thread_info.disc_contrl_size) ==
- UNF_MAX_GS_SEND_NUM)
- return;
-
- if (cmnd == NS_GPN_ID || cmnd == NS_GNN_ID || cmnd == NS_GFF_ID)
- atomic_inc(&unf_lport->disc.disc_thread_info.disc_contrl_size);
-}
-
-void unf_destroy_disc_thread(void *disc)
-{
- struct unf_disc_manage_info *disc_info = NULL;
- struct unf_disc *unf_disc = (struct unf_disc *)disc;
-
- FC_CHECK_RETURN_VOID(unf_disc);
-
- disc_info = &unf_disc->disc_thread_info;
-
- disc_info->thread_exit = true;
- unf_flush_disc_event(unf_disc, NULL);
-
- wake_up_process(disc_info->thread);
- kthread_stop(disc_info->thread);
- disc_info->thread = NULL;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Port(0x%x) destroy discovery thread succeed.",
- unf_disc->lport->port_id);
-}
-
-u32 unf_crerate_disc_thread(void *disc)
-{
- struct unf_disc_manage_info *disc_info = NULL;
- struct unf_disc *unf_disc = (struct unf_disc *)disc;
-
- FC_CHECK_RETURN_VALUE(unf_disc, UNF_RETURN_ERROR);
-
- /* If the thread cannot be found, apply for a new thread. */
- disc_info = &unf_disc->disc_thread_info;
-
- memset(disc_info, 0, sizeof(struct unf_disc_manage_info));
-
- INIT_LIST_HEAD(&disc_info->list_head);
- spin_lock_init(&disc_info->disc_event_list_lock);
- atomic_set(&disc_info->disc_contrl_size, UNF_MAX_GS_SEND_NUM);
-
- disc_info->thread_exit = false;
- disc_info->thread = kthread_create(unf_disc_event_process, unf_disc, "%x_DiscT",
- unf_disc->lport->port_id);
-
- if (IS_ERR(disc_info->thread) || !disc_info->thread) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "Port(0x%x) creat discovery thread(0x%p) unsuccessful.",
- unf_disc->lport->port_id, disc_info->thread);
-
- return UNF_RETURN_ERROR;
- }
-
- wake_up_process(disc_info->thread);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "Port(0x%x) creat discovery thread succeed.", unf_disc->lport->port_id);
-
- return RETURN_OK;
-}
-
-void unf_disc_ref_cnt_dec(struct unf_disc *disc)
-{
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(disc);
-
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- if (atomic_dec_and_test(&disc->disc_ref_cnt)) {
- if (disc->disc_completion)
- complete(disc->disc_completion);
- }
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-}
-
-void unf_wait_disc_complete(struct unf_lport *lport)
-{
- struct unf_disc *disc = NULL;
- bool wait = false;
- ulong flag = 0;
- u32 ret = UNF_RETURN_ERROR;
- u64 time_out = 0;
-
- struct completion disc_completion;
-
- init_completion(&disc_completion);
- disc = &lport->disc;
-
- UNF_DELAYED_WORK_SYNC(ret, (lport->port_id), (&disc->disc_work),
- "Disc_work");
- if (ret == RETURN_OK)
- unf_disc_ref_cnt_dec(disc);
-
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- if (atomic_read(&disc->disc_ref_cnt) != 0) {
- disc->disc_completion = &disc_completion;
- wait = true;
- }
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- if (wait) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) begin to wait for discover completion",
- lport->port_id);
-
- time_out =
- wait_for_completion_timeout(disc->disc_completion,
- msecs_to_jiffies(UNF_OS_REMOVE_CARD_TIMEOUT));
- if (time_out == 0)
- unf_cm_mark_dirty_mem(lport, UNF_LPORT_DIRTY_FLAG_DISC_DIRTY);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) wait for discover completion end", lport->port_id);
-
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- disc->disc_completion = NULL;
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
- }
-}
-
-void unf_disc_mgr_destroy(void *lport)
-{
- struct unf_disc *disc = NULL;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VOID(lport);
- unf_lport = (struct unf_lport *)lport;
-
- disc = &unf_lport->disc;
- disc->retry_count = 0;
- disc->disc_temp.unf_disc_start = NULL;
- disc->disc_temp.unf_disc_stop = NULL;
- disc->disc_temp.unf_disc_callback = NULL;
-
- unf_free_disc_rport_pool(unf_lport);
- unf_destroy_rscn_mngr(unf_lport);
- unf_wait_disc_complete(unf_lport);
-
- if (unf_lport->root_lport != unf_lport)
- return;
-
- unf_destroy_disc_thread(disc);
- unf_free_rport_pool(unf_lport);
- unf_lport->destroy_step = UNF_LPORT_DESTROY_STEP_6_DESTROY_DISC_MGR;
-}
-
-void unf_disc_error_recovery(void *lport)
-{
- struct unf_rport *unf_rport = NULL;
- struct unf_disc *disc = NULL;
- ulong delay = 0;
- ulong flag = 0;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VOID(lport);
-
- unf_lport = (struct unf_lport *)lport;
- disc = &unf_lport->disc;
-
- unf_rport = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_DIR_SERV);
- if (!unf_rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_WARN, "[warn]Port(0x%x) find RPort failed", unf_lport->port_id);
- return;
- }
-
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
-
- /* Delay work is pending */
- if (delayed_work_pending(&disc->disc_work)) {
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) disc_work is running and do nothing",
- unf_lport->port_id);
- return;
- }
-
- /* Continue to retry */
- if (disc->retry_count < disc->max_retry_count) {
- disc->retry_count++;
- delay = (ulong)unf_lport->ed_tov;
- if (queue_delayed_work(unf_wq, &disc->disc_work,
- (ulong)msecs_to_jiffies((u32)delay)))
- atomic_inc(&disc->disc_ref_cnt);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
- } else {
- /* Go to next stage */
- if (disc->states == UNF_DISC_ST_GIDPT_WAIT) {
- /* GID_PT_WAIT --->>> Send GID_FT */
- unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_RETRY_TIMEOUT);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- while ((ret != RETURN_OK) &&
- (disc->retry_count < disc->max_retry_count)) {
- ret = unf_send_gid_ft(unf_lport, unf_rport);
- disc->retry_count++;
- }
- } else if (disc->states == UNF_DISC_ST_GIDFT_WAIT) {
- /* GID_FT_WAIT --->>> Send LOGO */
- unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_RETRY_TIMEOUT);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
- } else {
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
- }
- }
-}
-
-enum unf_disc_state unf_disc_stat_start(enum unf_disc_state old_state,
- enum unf_disc_event event)
-{
- enum unf_disc_state next_state = UNF_DISC_ST_END;
-
- if (event == UNF_EVENT_DISC_NORMAL_ENTER)
- next_state = UNF_DISC_ST_GIDPT_WAIT;
- else
- next_state = old_state;
-
- return next_state;
-}
-
-enum unf_disc_state unf_disc_stat_gid_pt_wait(enum unf_disc_state old_state,
- enum unf_disc_event event)
-{
- enum unf_disc_state next_state = UNF_DISC_ST_END;
-
- switch (event) {
- case UNF_EVENT_DISC_FAILED:
- next_state = UNF_DISC_ST_GIDPT_WAIT;
- break;
-
- case UNF_EVENT_DISC_RETRY_TIMEOUT:
- next_state = UNF_DISC_ST_GIDFT_WAIT;
- break;
-
- case UNF_EVENT_DISC_SUCCESS:
- next_state = UNF_DISC_ST_END;
- break;
-
- case UNF_EVENT_DISC_LINKDOWN:
- next_state = UNF_DISC_ST_START;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-enum unf_disc_state unf_disc_stat_gid_ft_wait(enum unf_disc_state old_state,
- enum unf_disc_event event)
-{
- enum unf_disc_state next_state = UNF_DISC_ST_END;
-
- switch (event) {
- case UNF_EVENT_DISC_FAILED:
- next_state = UNF_DISC_ST_GIDFT_WAIT;
- break;
-
- case UNF_EVENT_DISC_RETRY_TIMEOUT:
- next_state = UNF_DISC_ST_END;
- break;
-
- case UNF_EVENT_DISC_LINKDOWN:
- next_state = UNF_DISC_ST_START;
- break;
-
- case UNF_EVENT_DISC_SUCCESS:
- next_state = UNF_DISC_ST_END;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-enum unf_disc_state unf_disc_stat_end(enum unf_disc_state old_state, enum unf_disc_event event)
-{
- enum unf_disc_state next_state = UNF_DISC_ST_END;
-
- if (event == UNF_EVENT_DISC_LINKDOWN)
- next_state = UNF_DISC_ST_START;
- else
- next_state = old_state;
-
- return next_state;
-}
-
-void unf_disc_state_ma(struct unf_lport *lport, enum unf_disc_event event)
-{
- struct unf_disc *disc = NULL;
- enum unf_disc_state old_state = UNF_DISC_ST_START;
- enum unf_disc_state next_state = UNF_DISC_ST_START;
-
- FC_CHECK_RETURN_VOID(lport);
-
- disc = &lport->disc;
- old_state = disc->states;
-
- switch (disc->states) {
- case UNF_DISC_ST_START:
- next_state = unf_disc_stat_start(old_state, event);
- break;
-
- case UNF_DISC_ST_GIDPT_WAIT:
- next_state = unf_disc_stat_gid_pt_wait(old_state, event);
- break;
-
- case UNF_DISC_ST_GIDFT_WAIT:
- next_state = unf_disc_stat_gid_ft_wait(old_state, event);
- break;
-
- case UNF_DISC_ST_END:
- next_state = unf_disc_stat_end(old_state, event);
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- unf_set_disc_state(disc, next_state);
-}
-
-static void unf_lport_disc_timeout(struct work_struct *work)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- struct unf_disc *disc = NULL;
- enum unf_disc_state state = UNF_DISC_ST_END;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(work);
-
- disc = container_of(work, struct unf_disc, disc_work.work);
- if (!disc) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_WARN, "[warn]Get discover pointer failed");
-
- return;
- }
-
- unf_lport = disc->lport;
- if (!unf_lport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Find Port by discovery work failed");
-
- unf_disc_ref_cnt_dec(disc);
- return;
- }
-
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- state = disc->states;
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- unf_rport = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_DIR_SERV); /* 0xfffffc */
- if (!unf_rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) find fabric RPort failed", unf_lport->port_id);
-
- unf_disc_ref_cnt_dec(disc);
- return;
- }
-
- switch (state) {
- case UNF_DISC_ST_START:
- break;
-
- case UNF_DISC_ST_GIDPT_WAIT:
- (void)unf_send_gid_pt(unf_lport, unf_rport);
- break;
-
- case UNF_DISC_ST_GIDFT_WAIT:
- (void)unf_send_gid_ft(unf_lport, unf_rport);
- break;
-
- case UNF_DISC_ST_END:
- break;
-
- default:
- break;
- }
-
- unf_disc_ref_cnt_dec(disc);
-}
-
-u32 unf_init_disc_mgr(struct unf_lport *lport)
-{
- struct unf_disc *disc = NULL;
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- disc = &lport->disc;
- disc->max_retry_count = UNF_DISC_RETRY_TIMES;
- disc->retry_count = 0;
- disc->disc_flag = UNF_DISC_NONE;
- INIT_LIST_HEAD(&disc->list_busy_rports);
- INIT_LIST_HEAD(&disc->list_delete_rports);
- INIT_LIST_HEAD(&disc->list_destroy_rports);
- spin_lock_init(&disc->rport_busy_pool_lock);
-
- disc->disc_rport_mgr.disc_pool_add = NULL;
- INIT_LIST_HEAD(&disc->disc_rport_mgr.list_disc_rports_pool);
- INIT_LIST_HEAD(&disc->disc_rport_mgr.list_disc_rports_busy);
-
- disc->disc_completion = NULL;
- disc->lport = lport;
- INIT_DELAYED_WORK(&disc->disc_work, unf_lport_disc_timeout);
- disc->disc_temp.unf_disc_start = unf_disc_start;
- disc->disc_temp.unf_disc_stop = unf_disc_stop;
- disc->disc_temp.unf_disc_callback = NULL;
- atomic_set(&disc->disc_ref_cnt, 0);
-
- /* Init RSCN Manager */
- ret = unf_init_rscn_mgr(lport);
- if (ret != RETURN_OK)
- return UNF_RETURN_ERROR;
-
- if (lport->root_lport != lport)
- return ret;
-
- ret = unf_crerate_disc_thread(disc);
- if (ret != RETURN_OK) {
- unf_destroy_rscn_mngr(lport);
-
- return UNF_RETURN_ERROR;
- }
-
- /* Init R_Port free Pool */
- ret = unf_init_rport_pool(lport);
- if (ret != RETURN_OK) {
- unf_destroy_disc_thread(disc);
- unf_destroy_rscn_mngr(lport);
-
- return UNF_RETURN_ERROR;
- }
-
- /* Init R_Port free disc Pool */
- ret = unf_init_disc_rport_pool(lport);
- if (ret != RETURN_OK) {
- unf_destroy_disc_thread(disc);
- unf_free_rport_pool(lport);
- unf_destroy_rscn_mngr(lport);
-
- return UNF_RETURN_ERROR;
- }
-
- return ret;
-}
diff --git a/drivers/scsi/spfc/common/unf_disc.h b/drivers/scsi/spfc/common/unf_disc.h
deleted file mode 100644
index 7ecad3eec424..000000000000
--- a/drivers/scsi/spfc/common/unf_disc.h
+++ /dev/null
@@ -1,51 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_DISC_H
-#define UNF_DISC_H
-
-#include "unf_type.h"
-
-#define UNF_DISC_RETRY_TIMES 3
-#define UNF_DISC_NONE 0
-#define UNF_DISC_FABRIC 1
-#define UNF_DISC_LOOP 2
-
-enum unf_disc_state {
- UNF_DISC_ST_START = 0x3000,
- UNF_DISC_ST_GIDPT_WAIT,
- UNF_DISC_ST_GIDFT_WAIT,
- UNF_DISC_ST_END
-};
-
-enum unf_disc_event {
- UNF_EVENT_DISC_NORMAL_ENTER = 0x8000,
- UNF_EVENT_DISC_FAILED = 0x8001,
- UNF_EVENT_DISC_SUCCESS = 0x8002,
- UNF_EVENT_DISC_RETRY_TIMEOUT = 0x8003,
- UNF_EVENT_DISC_LINKDOWN = 0x8004
-};
-
-enum unf_disc_type {
- UNF_DISC_GET_PORT_NAME = 0,
- UNF_DISC_GET_NODE_NAME,
- UNF_DISC_GET_FEATURE
-};
-
-struct unf_disc_gs_event_info {
- void *lport;
- void *rport;
- u32 rport_id;
- enum unf_disc_type type;
- struct list_head list_entry;
-};
-
-u32 unf_get_and_post_disc_event(void *lport, void *sns_port, u32 nport_id,
- enum unf_disc_type type);
-
-void unf_flush_disc_event(void *disc, void *vport);
-void unf_disc_ctrl_size_inc(void *lport, u32 cmnd);
-void unf_disc_error_recovery(void *lport);
-void unf_disc_mgr_destroy(void *lport);
-
-#endif
diff --git a/drivers/scsi/spfc/common/unf_event.c b/drivers/scsi/spfc/common/unf_event.c
deleted file mode 100644
index cf51c31ca4a3..000000000000
--- a/drivers/scsi/spfc/common/unf_event.c
+++ /dev/null
@@ -1,517 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "unf_event.h"
-#include "unf_log.h"
-#include "unf_common.h"
-#include "unf_lport.h"
-
-struct unf_event_list fc_event_list;
-struct unf_global_event_queue global_event_queue;
-
-/* Max global event node */
-#define UNF_MAX_GLOBAL_ENENT_NODE 24
-
-u32 unf_init_event_msg(struct unf_lport *lport)
-{
- struct unf_event_mgr *event_mgr = NULL;
- struct unf_cm_event_report *event_node = NULL;
- u32 ret = RETURN_OK;
- u32 index = 0;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- event_mgr = &lport->event_mgr;
-
- /* Get and Initial Event Node resource */
- event_mgr->mem_add = vmalloc((size_t)event_mgr->free_event_count *
- sizeof(struct unf_cm_event_report));
- if (!event_mgr->mem_add) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Port(0x%x) allocate event manager failed",
- lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
- memset(event_mgr->mem_add, 0,
- ((size_t)event_mgr->free_event_count * sizeof(struct unf_cm_event_report)));
-
- event_node = (struct unf_cm_event_report *)(event_mgr->mem_add);
-
- spin_lock_irqsave(&event_mgr->port_event_lock, flag);
- for (index = 0; index < event_mgr->free_event_count; index++) {
- INIT_LIST_HEAD(&event_node->list_entry);
- list_add_tail(&event_node->list_entry, &event_mgr->list_free_event);
- event_node++;
- }
- spin_unlock_irqrestore(&event_mgr->port_event_lock, flag);
-
- return ret;
-}
-
-static void unf_del_event_center_fun_op(struct unf_lport *lport)
-{
- struct unf_event_mgr *event_mgr = NULL;
-
- FC_CHECK_RETURN_VOID(lport);
-
- event_mgr = &lport->event_mgr;
- event_mgr->unf_get_free_event_func = NULL;
- event_mgr->unf_release_event = NULL;
- event_mgr->unf_post_event_func = NULL;
-}
-
-void unf_init_event_node(struct unf_cm_event_report *event_node)
-{
- FC_CHECK_RETURN_VOID(event_node);
-
- event_node->event = UNF_EVENT_TYPE_REQUIRE;
- event_node->event_asy_flag = UNF_EVENT_ASYN;
- event_node->delay_times = 0;
- event_node->para_in = NULL;
- event_node->para_out = NULL;
- event_node->result = 0;
- event_node->lport = NULL;
- event_node->unf_event_task = NULL;
-}
-
-struct unf_cm_event_report *unf_get_free_event_node(void *lport)
-{
- struct unf_event_mgr *event_mgr = NULL;
- struct unf_cm_event_report *event_node = NULL;
- struct list_head *list_node = NULL;
- struct unf_lport *unf_lport = NULL;
- ulong flags = 0;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
- unf_lport = (struct unf_lport *)lport;
- unf_lport = unf_lport->root_lport;
-
- if (unlikely(atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP))
- return NULL;
-
- event_mgr = &unf_lport->event_mgr;
-
- spin_lock_irqsave(&event_mgr->port_event_lock, flags);
- if (list_empty(&event_mgr->list_free_event)) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Port(0x%x) have no event node anymore",
- unf_lport->port_id);
-
- spin_unlock_irqrestore(&event_mgr->port_event_lock, flags);
- return NULL;
- }
-
- list_node = UNF_OS_LIST_NEXT(&event_mgr->list_free_event);
- list_del(list_node);
- event_mgr->free_event_count--;
- event_node = list_entry(list_node, struct unf_cm_event_report, list_entry);
-
- unf_init_event_node(event_node);
- spin_unlock_irqrestore(&event_mgr->port_event_lock, flags);
-
- return event_node;
-}
-
-void unf_post_event(void *lport, void *event_node)
-{
- struct unf_cm_event_report *cm_event_node = NULL;
- struct unf_chip_manage_info *card_thread_info = NULL;
- struct unf_lport *unf_lport = NULL;
- ulong flags = 0;
-
- FC_CHECK_RETURN_VOID(event_node);
- cm_event_node = (struct unf_cm_event_report *)event_node;
-
- /* If null, post to global event center */
- if (!lport) {
- spin_lock_irqsave(&fc_event_list.fc_event_list_lock, flags);
- fc_event_list.list_num++;
- list_add_tail(&cm_event_node->list_entry, &fc_event_list.list_head);
- spin_unlock_irqrestore(&fc_event_list.fc_event_list_lock, flags);
-
- wake_up_process(event_task_thread);
- } else {
- unf_lport = (struct unf_lport *)lport;
- unf_lport = unf_lport->root_lport;
- card_thread_info = unf_lport->chip_info;
-
- /* Post to global event center */
- if (!card_thread_info) {
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_WARN,
- "[warn]Port(0x%x) has strange event with type(0x%x)",
- unf_lport->nport_id, cm_event_node->event);
-
- spin_lock_irqsave(&fc_event_list.fc_event_list_lock, flags);
- fc_event_list.list_num++;
- list_add_tail(&cm_event_node->list_entry, &fc_event_list.list_head);
- spin_unlock_irqrestore(&fc_event_list.fc_event_list_lock, flags);
-
- wake_up_process(event_task_thread);
- } else {
- spin_lock_irqsave(&card_thread_info->chip_event_list_lock, flags);
- card_thread_info->list_num++;
- list_add_tail(&cm_event_node->list_entry, &card_thread_info->list_head);
- spin_unlock_irqrestore(&card_thread_info->chip_event_list_lock, flags);
-
- wake_up_process(card_thread_info->thread);
- }
- }
-}
-
-void unf_check_event_mgr_status(struct unf_event_mgr *event_mgr)
-{
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(event_mgr);
-
- spin_lock_irqsave(&event_mgr->port_event_lock, flag);
- if (event_mgr->emg_completion && event_mgr->free_event_count == UNF_MAX_EVENT_NODE)
- complete(event_mgr->emg_completion);
-
- spin_unlock_irqrestore(&event_mgr->port_event_lock, flag);
-}
-
-void unf_release_event(void *lport, void *event_node)
-{
- struct unf_event_mgr *event_mgr = NULL;
- struct unf_lport *unf_lport = NULL;
- struct unf_cm_event_report *cm_event_node = NULL;
- ulong flags = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(event_node);
-
- cm_event_node = (struct unf_cm_event_report *)event_node;
- unf_lport = (struct unf_lport *)lport;
- unf_lport = unf_lport->root_lport;
- event_mgr = &unf_lport->event_mgr;
-
- spin_lock_irqsave(&event_mgr->port_event_lock, flags);
- event_mgr->free_event_count++;
- unf_init_event_node(cm_event_node);
- list_add_tail(&cm_event_node->list_entry, &event_mgr->list_free_event);
- spin_unlock_irqrestore(&event_mgr->port_event_lock, flags);
-
- unf_check_event_mgr_status(event_mgr);
-}
-
-void unf_release_global_event(void *event_node)
-{
- ulong flag = 0;
- struct unf_cm_event_report *cm_event_node = NULL;
-
- FC_CHECK_RETURN_VOID(event_node);
- cm_event_node = (struct unf_cm_event_report *)event_node;
-
- unf_init_event_node(cm_event_node);
-
- spin_lock_irqsave(&global_event_queue.global_event_list_lock, flag);
- global_event_queue.list_number++;
- list_add_tail(&cm_event_node->list_entry, &global_event_queue.global_event_list);
- spin_unlock_irqrestore(&global_event_queue.global_event_list_lock, flag);
-}
-
-u32 unf_init_event_center(void *lport)
-{
- struct unf_event_mgr *event_mgr = NULL;
- u32 ret = RETURN_OK;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- unf_lport = (struct unf_lport *)lport;
-
- /* Initial Disc manager */
- event_mgr = &unf_lport->event_mgr;
- event_mgr->free_event_count = UNF_MAX_EVENT_NODE;
- event_mgr->unf_get_free_event_func = unf_get_free_event_node;
- event_mgr->unf_release_event = unf_release_event;
- event_mgr->unf_post_event_func = unf_post_event;
-
- INIT_LIST_HEAD(&event_mgr->list_free_event);
- spin_lock_init(&event_mgr->port_event_lock);
- event_mgr->emg_completion = NULL;
-
- ret = unf_init_event_msg(unf_lport);
-
- return ret;
-}
-
-void unf_wait_event_mgr_complete(struct unf_event_mgr *event_mgr)
-{
- struct unf_event_mgr *event_mgr_temp = NULL;
- bool wait = false;
- ulong mg_flag = 0;
-
- struct completion fc_event_completion;
-
- init_completion(&fc_event_completion);
- FC_CHECK_RETURN_VOID(event_mgr);
- event_mgr_temp = event_mgr;
-
- spin_lock_irqsave(&event_mgr_temp->port_event_lock, mg_flag);
- if (event_mgr_temp->free_event_count != UNF_MAX_EVENT_NODE) {
- event_mgr_temp->emg_completion = &fc_event_completion;
- wait = true;
- }
- spin_unlock_irqrestore(&event_mgr_temp->port_event_lock, mg_flag);
-
- if (wait)
- wait_for_completion(event_mgr_temp->emg_completion);
-
- spin_lock_irqsave(&event_mgr_temp->port_event_lock, mg_flag);
- event_mgr_temp->emg_completion = NULL;
- spin_unlock_irqrestore(&event_mgr_temp->port_event_lock, mg_flag);
-}
-
-u32 unf_event_center_destroy(void *lport)
-{
- struct unf_event_mgr *event_mgr = NULL;
- struct list_head *list = NULL;
- struct list_head *list_tmp = NULL;
- struct unf_cm_event_report *event_node = NULL;
- u32 ret = RETURN_OK;
- ulong flag = 0;
- ulong list_lock_flag = 0;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- unf_lport = (struct unf_lport *)lport;
- event_mgr = &unf_lport->event_mgr;
-
- spin_lock_irqsave(&fc_event_list.fc_event_list_lock, list_lock_flag);
- if (!list_empty(&fc_event_list.list_head)) {
- list_for_each_safe(list, list_tmp, &fc_event_list.list_head) {
- event_node = list_entry(list, struct unf_cm_event_report, list_entry);
-
- if (event_node->lport == unf_lport) {
- list_del_init(&event_node->list_entry);
- if (event_node->event_asy_flag == UNF_EVENT_SYN) {
- event_node->result = UNF_RETURN_ERROR;
- complete(&event_node->event_comp);
- }
-
- spin_lock_irqsave(&event_mgr->port_event_lock, flag);
- event_mgr->free_event_count++;
- list_add_tail(&event_node->list_entry,
- &event_mgr->list_free_event);
- spin_unlock_irqrestore(&event_mgr->port_event_lock, flag);
- }
- }
- }
- spin_unlock_irqrestore(&fc_event_list.fc_event_list_lock, list_lock_flag);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) begin to wait event",
- unf_lport->port_id);
-
- unf_wait_event_mgr_complete(event_mgr);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) wait event process end",
- unf_lport->port_id);
-
- unf_del_event_center_fun_op(unf_lport);
-
- vfree(event_mgr->mem_add);
- event_mgr->mem_add = NULL;
- unf_lport->destroy_step = UNF_LPORT_DESTROY_STEP_3_DESTROY_EVENT_CENTER;
-
- return ret;
-}
-
-static void unf_procee_asyn_event(struct unf_cm_event_report *event_node)
-{
- struct unf_lport *lport = NULL;
- u32 ret = UNF_RETURN_ERROR;
-
- lport = (struct unf_lport *)event_node->lport;
-
- FC_CHECK_RETURN_VOID(lport);
- if (event_node->unf_event_task) {
- ret = (u32)event_node->unf_event_task(event_node->para_in,
- event_node->para_out);
- }
-
- if (lport->event_mgr.unf_release_event)
- lport->event_mgr.unf_release_event(lport, event_node);
-
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_WARN,
- "[warn]Port(0x%x) handle event(0x%x) failed",
- lport->port_id, event_node->event);
- }
-}
-
-void unf_handle_event(struct unf_cm_event_report *event_node)
-{
- u32 ret = UNF_RETURN_ERROR;
- u32 event = 0;
- u32 event_asy_flag = UNF_EVENT_ASYN;
-
- FC_CHECK_RETURN_VOID(event_node);
-
- event = event_node->event;
- event_asy_flag = event_node->event_asy_flag;
-
- switch (event_asy_flag) {
- case UNF_EVENT_SYN: /* synchronous event node */
- case UNF_GLOBAL_EVENT_SYN:
- if (event_node->unf_event_task)
- ret = (u32)event_node->unf_event_task(event_node->para_in,
- event_node->para_out);
-
- event_node->result = ret;
- complete(&event_node->event_comp);
- break;
-
- case UNF_EVENT_ASYN: /* asynchronous event node */
- unf_procee_asyn_event(event_node);
- break;
-
- case UNF_GLOBAL_EVENT_ASYN:
- if (event_node->unf_event_task) {
- ret = (u32)event_node->unf_event_task(event_node->para_in,
- event_node->para_out);
- }
-
- unf_release_global_event(event_node);
-
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_WARN,
- "[warn]handle global event(0x%x) failed", event);
- }
- break;
-
- default:
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_WARN,
- "[warn]Unknown event(0x%x)", event);
- break;
- }
-}
-
-u32 unf_init_global_event_msg(void)
-{
- struct unf_cm_event_report *event_node = NULL;
- u32 ret = RETURN_OK;
- u32 index = 0;
- ulong flag = 0;
-
- INIT_LIST_HEAD(&global_event_queue.global_event_list);
- spin_lock_init(&global_event_queue.global_event_list_lock);
- global_event_queue.list_number = 0;
-
- global_event_queue.global_event_add = vmalloc(UNF_MAX_GLOBAL_ENENT_NODE *
- sizeof(struct unf_cm_event_report));
- if (!global_event_queue.global_event_add) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Can't allocate global event queue");
-
- return UNF_RETURN_ERROR;
- }
- memset(global_event_queue.global_event_add, 0,
- (UNF_MAX_GLOBAL_ENENT_NODE * sizeof(struct unf_cm_event_report)));
-
- event_node = (struct unf_cm_event_report *)(global_event_queue.global_event_add);
-
- spin_lock_irqsave(&global_event_queue.global_event_list_lock, flag);
- for (index = 0; index < UNF_MAX_GLOBAL_ENENT_NODE; index++) {
- INIT_LIST_HEAD(&event_node->list_entry);
- list_add_tail(&event_node->list_entry, &global_event_queue.global_event_list);
-
- global_event_queue.list_number++;
- event_node++;
- }
- spin_unlock_irqrestore(&global_event_queue.global_event_list_lock, flag);
-
- return ret;
-}
-
-void unf_destroy_global_event_msg(void)
-{
- if (global_event_queue.list_number != UNF_MAX_GLOBAL_ENENT_NODE) {
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_CRITICAL,
- "[warn]Global event release not complete with remain nodes(0x%x)",
- global_event_queue.list_number);
- }
-
- vfree(global_event_queue.global_event_add);
-}
-
-u32 unf_schedule_global_event(void *para_in, u32 event_asy_flag,
- int (*unf_event_task)(void *arg_in, void *arg_out))
-{
- struct list_head *list_node = NULL;
- struct unf_cm_event_report *event_node = NULL;
- ulong flag = 0;
- u32 ret = UNF_RETURN_ERROR;
- spinlock_t *event_list_lock = NULL;
-
- FC_CHECK_RETURN_VALUE(unf_event_task, UNF_RETURN_ERROR);
-
- if (event_asy_flag != UNF_GLOBAL_EVENT_ASYN && event_asy_flag != UNF_GLOBAL_EVENT_SYN) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Event async flag(0x%x) abnormity",
- event_asy_flag);
-
- return UNF_RETURN_ERROR;
- }
-
- event_list_lock = &global_event_queue.global_event_list_lock;
- spin_lock_irqsave(event_list_lock, flag);
- if (list_empty(&global_event_queue.global_event_list)) {
- spin_unlock_irqrestore(event_list_lock, flag);
-
- return UNF_RETURN_ERROR;
- }
-
- list_node = UNF_OS_LIST_NEXT(&global_event_queue.global_event_list);
- list_del_init(list_node);
- global_event_queue.list_number--;
- event_node = list_entry(list_node, struct unf_cm_event_report, list_entry);
- spin_unlock_irqrestore(event_list_lock, flag);
-
- /* Initial global event */
- unf_init_event_node(event_node);
- init_completion(&event_node->event_comp);
- event_node->event_asy_flag = event_asy_flag;
- event_node->unf_event_task = unf_event_task;
- event_node->para_in = (void *)para_in;
- event_node->para_out = NULL;
-
- unf_post_event(NULL, event_node);
-
- if (event_asy_flag == UNF_GLOBAL_EVENT_SYN) {
- /* must wait for complete */
- wait_for_completion(&event_node->event_comp);
- ret = event_node->result;
- unf_release_global_event(event_node);
- } else {
- ret = RETURN_OK;
- }
-
- return ret;
-}
-
-struct unf_cm_event_report *unf_get_one_event_node(void *lport)
-{
- struct unf_lport *unf_lport = (struct unf_lport *)lport;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
- FC_CHECK_RETURN_VALUE(unf_lport->event_mgr.unf_get_free_event_func, NULL);
-
- return unf_lport->event_mgr.unf_get_free_event_func((void *)unf_lport);
-}
-
-void unf_post_one_event_node(void *lport, struct unf_cm_event_report *event)
-{
- struct unf_lport *unf_lport = (struct unf_lport *)lport;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(event);
-
- FC_CHECK_RETURN_VOID(unf_lport->event_mgr.unf_post_event_func);
- FC_CHECK_RETURN_VOID(event);
-
- unf_lport->event_mgr.unf_post_event_func((void *)unf_lport, event);
-}
diff --git a/drivers/scsi/spfc/common/unf_event.h b/drivers/scsi/spfc/common/unf_event.h
deleted file mode 100644
index 3fbd72bff8d7..000000000000
--- a/drivers/scsi/spfc/common/unf_event.h
+++ /dev/null
@@ -1,83 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_EVENT_H
-#define UNF_EVENT_H
-
-#include "unf_type.h"
-
-#define UNF_MAX_EVENT_NODE 256
-
-enum unf_event_type {
- UNF_EVENT_TYPE_ALARM = 0, /* Alarm */
- UNF_EVENT_TYPE_REQUIRE, /* Require */
- UNF_EVENT_TYPE_RECOVERY, /* Recovery */
- UNF_EVENT_TYPE_BUTT
-};
-
-struct unf_cm_event_report {
- /* event type */
- u32 event;
-
- /* ASY flag */
- u32 event_asy_flag;
-
- /* Delay times,must be async event */
- u32 delay_times;
-
- struct list_head list_entry;
-
- void *lport;
-
- /* parameter */
- void *para_in;
- void *para_out;
- u32 result;
-
- /* recovery strategy */
- int (*unf_event_task)(void *arg_in, void *arg_out);
-
- struct completion event_comp;
-};
-
-struct unf_event_mgr {
- spinlock_t port_event_lock;
- u32 free_event_count;
-
- struct list_head list_free_event;
-
- struct completion *emg_completion;
-
- void *mem_add;
- struct unf_cm_event_report *(*unf_get_free_event_func)(void *lport);
- void (*unf_release_event)(void *lport, void *event_node);
- void (*unf_post_event_func)(void *lport, void *event_node);
-};
-
-struct unf_global_event_queue {
- void *global_event_add;
- u32 list_number;
- struct list_head global_event_list;
- spinlock_t global_event_list_lock;
-};
-
-struct unf_event_list {
- struct list_head list_head;
- spinlock_t fc_event_list_lock;
- u32 list_num; /* list node number */
-};
-
-void unf_handle_event(struct unf_cm_event_report *event_node);
-u32 unf_init_global_event_msg(void);
-void unf_destroy_global_event_msg(void);
-u32 unf_schedule_global_event(void *para_in, u32 event_asy_flag,
- int (*unf_event_task)(void *arg_in, void *arg_out));
-struct unf_cm_event_report *unf_get_one_event_node(void *lport);
-void unf_post_one_event_node(void *lport, struct unf_cm_event_report *event);
-u32 unf_event_center_destroy(void *lport);
-u32 unf_init_event_center(void *lport);
-
-extern struct task_struct *event_task_thread;
-extern struct unf_global_event_queue global_event_queue;
-extern struct unf_event_list fc_event_list;
-#endif
diff --git a/drivers/scsi/spfc/common/unf_exchg.c b/drivers/scsi/spfc/common/unf_exchg.c
deleted file mode 100644
index ab35cc318b6f..000000000000
--- a/drivers/scsi/spfc/common/unf_exchg.c
+++ /dev/null
@@ -1,2317 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "unf_exchg.h"
-#include "unf_log.h"
-#include "unf_common.h"
-#include "unf_rport.h"
-#include "unf_service.h"
-#include "unf_io.h"
-#include "unf_exchg_abort.h"
-
-#define SPFC_XCHG_TYPE_MASK 0xFFFF
-#define UNF_DEL_XCHG_TIMER_SAFE(xchg) \
- do { \
- if (cancel_delayed_work(&((xchg)->timeout_work))) { \
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR, \
- "Exchange(0x%p) is free, but timer is pending.", \
- xchg); \
- } else { \
- FC_DRV_PRINT(UNF_LOG_IO_ATT, \
- UNF_CRITICAL, \
- "Exchange(0x%p) is free, but timer is running.", \
- xchg); \
- } \
- } while (0)
-
-static struct unf_io_flow_id io_stage_table[] = {
- {"XCHG_ALLOC"}, {"TGT_RECEIVE_ABTS"},
- {"TGT_ABTS_DONE"}, {"TGT_IO_SRR"},
- {"SFS_RESPONSE"}, {"SFS_TIMEOUT"},
- {"INI_SEND_CMND"}, {"INI_RESPONSE_DONE"},
- {"INI_EH_ABORT"}, {"INI_EH_DEVICE_RESET"},
- {"INI_EH_BLS_DONE"}, {"INI_IO_TIMEOUT"},
- {"INI_REQ_TIMEOUT"}, {"XCHG_CANCEL_TIMER"},
- {"XCHG_FREE_XCHG"}, {"SEND_ELS"},
- {"IO_XCHG_WAIT"},
-};
-
-static void unf_init_xchg_attribute(struct unf_xchg *xchg);
-static void unf_delay_work_del_syn(struct unf_xchg *xchg);
-static void unf_free_lport_sfs_xchg(struct unf_xchg_mgr *xchg_mgr,
- bool done_ini_flag);
-static void unf_free_lport_destroy_xchg(struct unf_xchg_mgr *xchg_mgr);
-
-void unf_wake_up_scsi_task_cmnd(struct unf_lport *lport)
-{
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- struct unf_xchg *xchg = NULL;
- ulong hot_pool_lock_flags = 0;
- ulong xchg_flag = 0;
- struct unf_xchg_mgr *xchg_mgrs = NULL;
- u32 i;
-
- FC_CHECK_RETURN_VOID(lport);
-
- for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
- xchg_mgrs = unf_get_xchg_mgr_by_lport(lport, i);
-
- if (!xchg_mgrs) {
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MINOR,
- "Can't find LPort(0x%x) MgrIdx %u exchange manager.",
- lport->port_id, i);
- continue;
- }
-
- spin_lock_irqsave(&xchg_mgrs->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
- list_for_each_safe(node, next_node,
- (&xchg_mgrs->hot_pool->ini_busylist)) {
- xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
- if (INI_IO_STATE_UPTASK & xchg->io_state &&
- (atomic_read(&xchg->ref_cnt) > 0)) {
- UNF_SET_SCSI_CMND_RESULT(xchg, UNF_IO_SUCCESS);
- up(&xchg->task_sema);
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MINOR,
- "Wake up task command exchange(0x%p), Hot Pool Tag(0x%x).",
- xchg, xchg->hotpooltag);
- }
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
- }
-
- spin_unlock_irqrestore(&xchg_mgrs->hot_pool->xchg_hotpool_lock,
- hot_pool_lock_flags);
- }
-}
-
-void *unf_cm_get_free_xchg(void *lport, u32 xchg_type)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_cm_xchg_mgr_template *xchg_mgr_temp = NULL;
-
- FC_CHECK_RETURN_VALUE(unlikely(lport), NULL);
-
- unf_lport = (struct unf_lport *)lport;
- xchg_mgr_temp = &unf_lport->xchg_mgr_temp;
-
- /* Find the corresponding Lport Xchg management template. */
- FC_CHECK_RETURN_VALUE(unlikely(xchg_mgr_temp->unf_xchg_get_free_and_init), NULL);
-
- return xchg_mgr_temp->unf_xchg_get_free_and_init(unf_lport, xchg_type);
-}
-
-void unf_cm_free_xchg(void *lport, void *xchg)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_cm_xchg_mgr_template *xchg_mgr_temp = NULL;
-
- FC_CHECK_RETURN_VOID(unlikely(lport));
- FC_CHECK_RETURN_VOID(unlikely(xchg));
-
- unf_lport = (struct unf_lport *)lport;
- xchg_mgr_temp = &unf_lport->xchg_mgr_temp;
- FC_CHECK_RETURN_VOID(unlikely(xchg_mgr_temp->unf_xchg_release));
-
- /*
- * unf_cm_free_xchg --->>> unf_free_xchg
- * --->>> unf_xchg_ref_dec --->>> unf_free_fcp_xchg --->>>
- * unf_done_ini_xchg
- */
- xchg_mgr_temp->unf_xchg_release(lport, xchg);
-}
-
-void *unf_cm_lookup_xchg_by_tag(void *lport, u16 hot_pool_tag)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_cm_xchg_mgr_template *xchg_mgr_temp = NULL;
-
- FC_CHECK_RETURN_VALUE(unlikely(lport), NULL);
-
- /* Find the corresponding Lport Xchg management template */
- unf_lport = (struct unf_lport *)lport;
- xchg_mgr_temp = &unf_lport->xchg_mgr_temp;
-
- FC_CHECK_RETURN_VALUE(unlikely(xchg_mgr_temp->unf_look_up_xchg_by_tag), NULL);
-
- return xchg_mgr_temp->unf_look_up_xchg_by_tag(lport, hot_pool_tag);
-}
-
-void *unf_cm_lookup_xchg_by_id(void *lport, u16 ox_id, u32 oid)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_cm_xchg_mgr_template *xchg_mgr_temp = NULL;
-
- FC_CHECK_RETURN_VALUE(unlikely(lport), NULL);
-
- unf_lport = (struct unf_lport *)lport;
- xchg_mgr_temp = &unf_lport->xchg_mgr_temp;
-
- /* Find the corresponding Lport Xchg management template */
- FC_CHECK_RETURN_VALUE(unlikely(xchg_mgr_temp->unf_look_up_xchg_by_id), NULL);
-
- return xchg_mgr_temp->unf_look_up_xchg_by_id(lport, ox_id, oid);
-}
-
-struct unf_xchg *unf_cm_lookup_xchg_by_cmnd_sn(void *lport, u64 command_sn,
- u32 world_id, void *pinitiator)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_cm_xchg_mgr_template *xchg_mgr_temp = NULL;
- struct unf_xchg *xchg = NULL;
-
- FC_CHECK_RETURN_VALUE(unlikely(lport), NULL);
-
- unf_lport = (struct unf_lport *)lport;
- xchg_mgr_temp = &unf_lport->xchg_mgr_temp;
-
- FC_CHECK_RETURN_VALUE(unlikely(xchg_mgr_temp->unf_look_up_xchg_by_cmnd_sn), NULL);
-
- xchg = (struct unf_xchg *)xchg_mgr_temp->unf_look_up_xchg_by_cmnd_sn(unf_lport,
- command_sn,
- world_id,
- pinitiator);
-
- return xchg;
-}
-
-static u32 unf_init_xchg(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr,
- u32 xchg_sum, u32 sfs_sum)
-{
- struct unf_xchg *xchg_mem = NULL;
- union unf_sfs_u *sfs_mm_start = NULL;
- dma_addr_t sfs_dma_addr;
- struct unf_xchg *xchg = NULL;
- struct unf_xchg_free_pool *free_pool = NULL;
- ulong flags = 0;
- u32 i = 0;
-
- FC_CHECK_RETURN_VALUE((sfs_sum <= xchg_sum), UNF_RETURN_ERROR);
-
- free_pool = &xchg_mgr->free_pool;
- xchg_mem = xchg_mgr->fcp_mm_start;
- xchg = xchg_mem;
-
- sfs_mm_start = (union unf_sfs_u *)xchg_mgr->sfs_mm_start;
- sfs_dma_addr = xchg_mgr->sfs_phy_addr;
- /* 1. Allocate the SFS UNION memory to each SFS XCHG
- * and mount the SFS XCHG to the corresponding FREE linked list
- */
- free_pool->total_sfs_xchg = 0;
- free_pool->sfs_xchg_sum = sfs_sum;
- for (i = 0; i < sfs_sum; i++) {
- INIT_LIST_HEAD(&xchg->list_xchg_entry);
- INIT_LIST_HEAD(&xchg->list_esgls);
- spin_lock_init(&xchg->xchg_state_lock);
- sema_init(&xchg->task_sema, 0);
- sema_init(&xchg->echo_info.echo_sync_sema, 0);
-
- spin_lock_irqsave(&free_pool->xchg_freepool_lock, flags);
- xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr = sfs_mm_start;
- xchg->fcp_sfs_union.sfs_entry.sfs_buff_phy_addr = sfs_dma_addr;
- xchg->fcp_sfs_union.sfs_entry.sfs_buff_len = sizeof(*sfs_mm_start);
- list_add_tail(&xchg->list_xchg_entry, &free_pool->list_sfs_xchg_list);
- free_pool->total_sfs_xchg++;
- spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
-
- sfs_mm_start++;
- sfs_dma_addr = sfs_dma_addr + sizeof(union unf_sfs_u);
- xchg++;
- }
-
- free_pool->total_fcp_xchg = 0;
-
- for (i = 0; (i < xchg_sum - sfs_sum); i++) {
- INIT_LIST_HEAD(&xchg->list_xchg_entry);
-
- INIT_LIST_HEAD(&xchg->list_esgls);
- spin_lock_init(&xchg->xchg_state_lock);
- sema_init(&xchg->task_sema, 0);
- sema_init(&xchg->echo_info.echo_sync_sema, 0);
-
- /* alloc dma buffer for fcp_rsp_iu */
- spin_lock_irqsave(&free_pool->xchg_freepool_lock, flags);
- list_add_tail(&xchg->list_xchg_entry, &free_pool->list_free_xchg_list);
- free_pool->total_fcp_xchg++;
- spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
-
- xchg++;
- }
-
- free_pool->fcp_xchg_sum = free_pool->total_fcp_xchg;
-
- return RETURN_OK;
-}
-
-static u32 unf_get_xchg_config_sum(struct unf_lport *lport, u32 *xchg_sum)
-{
- struct unf_lport_cfg_item *lport_cfg_items = NULL;
-
- lport_cfg_items = &lport->low_level_func.lport_cfg_items;
-
- /* It has been checked at the bottom layer. Don't need to check it
- * again.
- */
- *xchg_sum = lport_cfg_items->max_sfs_xchg + lport_cfg_items->max_io;
- if ((*xchg_sum / UNF_EXCHG_MGR_NUM) == 0 ||
- lport_cfg_items->max_sfs_xchg / UNF_EXCHG_MGR_NUM == 0) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) Xchgsum(%u) or SfsXchg(%u) is less than ExchangeMgrNum(%u).",
- lport->port_id, *xchg_sum, lport_cfg_items->max_sfs_xchg,
- UNF_EXCHG_MGR_NUM);
- return UNF_RETURN_ERROR;
- }
-
- if (*xchg_sum > (INVALID_VALUE16 - 1)) {
- /* If the format of ox_id/rx_id is exceeded, this function is
- * not supported
- */
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "Port(0x%x) Exchange num(0x%x) is Too Big.",
- lport->port_id, *xchg_sum);
-
- return UNF_RETURN_ERROR;
- }
-
- return RETURN_OK;
-}
-
-static void unf_xchg_cancel_timer(void *xchg)
-{
- struct unf_xchg *tmp_xchg = NULL;
- bool need_dec_xchg_ref = false;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
- tmp_xchg = (struct unf_xchg *)xchg;
-
- spin_lock_irqsave(&tmp_xchg->xchg_state_lock, flag);
- if (cancel_delayed_work(&tmp_xchg->timeout_work))
- need_dec_xchg_ref = true;
-
- spin_unlock_irqrestore(&tmp_xchg->xchg_state_lock, flag);
-
- if (need_dec_xchg_ref)
- unf_xchg_ref_dec(xchg, XCHG_CANCEL_TIMER);
-}
-
-void unf_show_all_xchg(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_xchg *xchg = NULL;
- struct list_head *xchg_node = NULL;
- struct list_head *next_xchg_node = NULL;
- ulong flags = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(xchg_mgr);
-
- unf_lport = lport;
-
- /* hot Xchg */
- spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN, "INI busy :");
- list_for_each_safe(xchg_node, next_xchg_node, &xchg_mgr->hot_pool->ini_busylist) {
- xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
- "0x%p---0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----%llu.",
- xchg, (u32)xchg->hotpooltag, (u32)xchg->xchg_type,
- (u32)xchg->oxid, (u32)xchg->rxid, (u32)xchg->sid, (u32)xchg->did,
- atomic_read(&xchg->ref_cnt), (u32)xchg->io_state, xchg->alloc_jif);
- }
-
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN, "SFS :");
- list_for_each_safe(xchg_node, next_xchg_node, &xchg_mgr->hot_pool->sfs_busylist) {
- xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
- "0x%p---0x%x---0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----%llu.",
- xchg, xchg->cmnd_code, (u32)xchg->hotpooltag,
- (u32)xchg->xchg_type, (u32)xchg->oxid, (u32)xchg->rxid, (u32)xchg->sid,
- (u32)xchg->did, atomic_read(&xchg->ref_cnt),
- (u32)xchg->io_state, xchg->alloc_jif);
- }
-
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN, "Destroy list.");
- list_for_each_safe(xchg_node, next_xchg_node, &xchg_mgr->hot_pool->list_destroy_xchg) {
- xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
- "0x%p---0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----%llu.",
- xchg, (u32)xchg->hotpooltag, (u32)xchg->xchg_type,
- (u32)xchg->oxid, (u32)xchg->rxid, (u32)xchg->sid, (u32)xchg->did,
- atomic_read(&xchg->ref_cnt), (u32)xchg->io_state, xchg->alloc_jif);
- }
- spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock, flags);
-}
-
-static u32 unf_free_lport_xchg(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr)
-{
-#define UNF_OS_WAITIO_TIMEOUT (10 * 1000)
-
- ulong free_pool_lock_flags = 0;
- bool wait = false;
- u32 total_xchg = 0;
- u32 total_xchg_sum = 0;
- u32 ret = RETURN_OK;
- u64 time_out = 0;
- struct completion xchg_mgr_completion;
-
- init_completion(&xchg_mgr_completion);
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg_mgr, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg_mgr->hot_pool, UNF_RETURN_ERROR);
-
- unf_free_lport_sfs_xchg(xchg_mgr, false);
-
- /* free INI Mode exchanges belong to L_Port */
- unf_free_lport_ini_xchg(xchg_mgr, false);
-
- spin_lock_irqsave(&xchg_mgr->free_pool.xchg_freepool_lock, free_pool_lock_flags);
- total_xchg = xchg_mgr->free_pool.total_fcp_xchg + xchg_mgr->free_pool.total_sfs_xchg;
- total_xchg_sum = xchg_mgr->free_pool.fcp_xchg_sum + xchg_mgr->free_pool.sfs_xchg_sum;
- if (total_xchg != total_xchg_sum) {
- xchg_mgr->free_pool.xchg_mgr_completion = &xchg_mgr_completion;
- wait = true;
- }
- spin_unlock_irqrestore(&xchg_mgr->free_pool.xchg_freepool_lock, free_pool_lock_flags);
-
- if (wait) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) begin to wait for exchange manager completion (0x%x:0x%x)",
- lport->port_id, total_xchg, total_xchg_sum);
-
- unf_show_all_xchg(lport, xchg_mgr);
-
- time_out = wait_for_completion_timeout(xchg_mgr->free_pool.xchg_mgr_completion,
- msecs_to_jiffies(UNF_OS_WAITIO_TIMEOUT));
- if (time_out == 0)
- unf_free_lport_destroy_xchg(xchg_mgr);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) wait for exchange manager completion end",
- lport->port_id);
-
- spin_lock_irqsave(&xchg_mgr->free_pool.xchg_freepool_lock, free_pool_lock_flags);
- xchg_mgr->free_pool.xchg_mgr_completion = NULL;
- spin_unlock_irqrestore(&xchg_mgr->free_pool.xchg_freepool_lock,
- free_pool_lock_flags);
- }
-
- return ret;
-}
-
-void unf_free_lport_all_xchg(struct unf_lport *lport)
-{
- struct unf_xchg_mgr *xchg_mgr = NULL;
- u32 i;
-
- FC_CHECK_RETURN_VOID(lport);
-
- for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
- xchg_mgr = unf_get_xchg_mgr_by_lport(lport, i);
- ;
- if (unlikely(!xchg_mgr)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%x) hot pool is NULL",
- lport->port_id);
-
- continue;
- }
- unf_free_lport_sfs_xchg(xchg_mgr, false);
-
- /* free INI Mode exchanges belong to L_Port */
- unf_free_lport_ini_xchg(xchg_mgr, false);
-
- unf_free_lport_destroy_xchg(xchg_mgr);
- }
-}
-
-static void unf_delay_work_del_syn(struct unf_xchg *xchg)
-{
- FC_CHECK_RETURN_VOID(xchg);
-
- /* synchronous release timer */
- if (!cancel_delayed_work_sync(&xchg->timeout_work)) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Exchange(0x%p), State(0x%x) can't delete work timer, timer is running or no timer.",
- xchg, xchg->io_state);
- } else {
- /* The reference count cannot be directly subtracted.
- * This prevents the XCHG from being moved to the Free linked
- * list when the card is unloaded.
- */
- unf_cm_free_xchg(xchg->lport, xchg);
- }
-}
-
-static void unf_free_lport_sfs_xchg(struct unf_xchg_mgr *xchg_mgr, bool done_ini_flag)
-{
- struct list_head *list = NULL;
- struct unf_xchg *xchg = NULL;
- ulong hot_pool_lock_flags = 0;
-
- FC_CHECK_RETURN_VOID(xchg_mgr);
- FC_CHECK_RETURN_VOID(xchg_mgr->hot_pool);
-
- spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
- while (!list_empty(&xchg_mgr->hot_pool->sfs_busylist)) {
- list = UNF_OS_LIST_NEXT(&xchg_mgr->hot_pool->sfs_busylist);
- list_del_init(list);
-
- /* Prevent the xchg of the sfs from being accessed repeatedly.
- * The xchg is first mounted to the destroy linked list.
- */
- list_add_tail(list, &xchg_mgr->hot_pool->list_destroy_xchg);
-
- xchg = list_entry(list, struct unf_xchg, list_xchg_entry);
- spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
- unf_delay_work_del_syn(xchg);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Free SFS Exchange(0x%p), State(0x%x), Reference count(%d), Start time(%llu).",
- xchg, xchg->io_state, atomic_read(&xchg->ref_cnt), xchg->alloc_jif);
-
- unf_cm_free_xchg(xchg->lport, xchg);
-
- spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
- }
- spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
-}
-
-void unf_free_lport_ini_xchg(struct unf_xchg_mgr *xchg_mgr, bool done_ini_flag)
-{
- struct list_head *list = NULL;
- struct unf_xchg *xchg = NULL;
- ulong hot_pool_lock_flags = 0;
- u32 up_status = 0;
-
- FC_CHECK_RETURN_VOID(xchg_mgr);
- FC_CHECK_RETURN_VOID(xchg_mgr->hot_pool);
-
- spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
- while (!list_empty(&xchg_mgr->hot_pool->ini_busylist)) {
- /* for each INI busy_list (exchange) node */
- list = UNF_OS_LIST_NEXT(&xchg_mgr->hot_pool->ini_busylist);
-
- /* Put exchange node to destroy_list, prevent done repeatly */
- list_del_init(list);
- list_add_tail(list, &xchg_mgr->hot_pool->list_destroy_xchg);
- xchg = list_entry(list, struct unf_xchg, list_xchg_entry);
- if (atomic_read(&xchg->ref_cnt) <= 0)
- continue;
-
- spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock,
- hot_pool_lock_flags);
- unf_delay_work_del_syn(xchg);
-
- /* In the case of INI done, the command should be set to fail to
- * prevent data inconsistency caused by the return of OK
- */
- up_status = unf_get_up_level_cmnd_errcode(xchg->scsi_cmnd_info.err_code_table,
- xchg->scsi_cmnd_info.err_code_table_cout,
- UNF_IO_PORT_LOGOUT);
-
- if (INI_IO_STATE_UPABORT & xchg->io_state) {
- /*
- * About L_Port destroy:
- * UP_ABORT ---to--->>> ABORT_Port_Removing
- */
- up_status = UNF_IO_ABORT_PORT_REMOVING;
- }
-
- xchg->scsi_cmnd_info.result = up_status;
- up(&xchg->task_sema);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Free INI exchange(0x%p) state(0x%x) reference count(%d) start time(%llu)",
- xchg, xchg->io_state, atomic_read(&xchg->ref_cnt), xchg->alloc_jif);
-
- unf_cm_free_xchg(xchg->lport, xchg);
-
- /* go to next INI busy_list (exchange) node */
- spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
- }
- spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
-}
-
-static void unf_free_lport_destroy_xchg(struct unf_xchg_mgr *xchg_mgr)
-{
-#define UNF_WAIT_DESTROY_EMPTY_STEP_MS 1000
-#define UNF_WAIT_IO_STATE_TGT_FRONT_MS (10 * 1000)
-
- struct unf_xchg *xchg = NULL;
- struct list_head *next_xchg_node = NULL;
- ulong hot_pool_lock_flags = 0;
- ulong xchg_flag = 0;
-
- FC_CHECK_RETURN_VOID(xchg_mgr);
- FC_CHECK_RETURN_VOID(xchg_mgr->hot_pool);
-
- /* In this case, the timer on the destroy linked list is deleted.
- * You only need to check whether the timer is released at the end of
- * the tgt.
- */
- spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
- while (!list_empty(&xchg_mgr->hot_pool->list_destroy_xchg)) {
- next_xchg_node = UNF_OS_LIST_NEXT(&xchg_mgr->hot_pool->list_destroy_xchg);
- xchg = list_entry(next_xchg_node, struct unf_xchg, list_xchg_entry);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Free Exchange(0x%p), Type(0x%x), State(0x%x), Reference count(%d), Start time(%llu)",
- xchg, xchg->xchg_type, xchg->io_state,
- atomic_read(&xchg->ref_cnt), xchg->alloc_jif);
-
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
- spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
-
- /* This interface can be invoked to ensure that the timer is
- * successfully canceled or wait until the timer execution is
- * complete
- */
- unf_delay_work_del_syn(xchg);
-
- /*
- * If the timer is canceled successfully, delete Xchg
- * If the timer has burst, the Xchg may have been released,In
- * this case, deleting the Xchg will be failed
- */
- unf_cm_free_xchg(xchg->lport, xchg);
-
- spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
- };
-
- spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock, hot_pool_lock_flags);
-}
-
-static void unf_free_all_big_sfs(struct unf_xchg_mgr *xchg_mgr)
-{
- struct unf_big_sfs *big_sfs = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flag = 0;
- u32 i;
-
- FC_CHECK_RETURN_VOID(xchg_mgr);
-
- /* Release the free resources in the busy state */
- spin_lock_irqsave(&xchg_mgr->big_sfs_pool.big_sfs_pool_lock, flag);
- list_for_each_safe(node, next_node, &xchg_mgr->big_sfs_pool.list_busypool) {
- list_del(node);
- list_add_tail(node, &xchg_mgr->big_sfs_pool.list_freepool);
- }
-
- list_for_each_safe(node, next_node, &xchg_mgr->big_sfs_pool.list_freepool) {
- list_del(node);
- big_sfs = list_entry(node, struct unf_big_sfs, entry_bigsfs);
- if (big_sfs->addr)
- big_sfs->addr = NULL;
- }
- spin_unlock_irqrestore(&xchg_mgr->big_sfs_pool.big_sfs_pool_lock, flag);
-
- if (xchg_mgr->big_sfs_buf_list.buflist) {
- for (i = 0; i < xchg_mgr->big_sfs_buf_list.buf_num; i++) {
- kfree(xchg_mgr->big_sfs_buf_list.buflist[i].vaddr);
- xchg_mgr->big_sfs_buf_list.buflist[i].vaddr = NULL;
- }
-
- kfree(xchg_mgr->big_sfs_buf_list.buflist);
- xchg_mgr->big_sfs_buf_list.buflist = NULL;
- }
-}
-
-static void unf_free_big_sfs_pool(struct unf_xchg_mgr *xchg_mgr)
-{
- FC_CHECK_RETURN_VOID(xchg_mgr);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "Free Big SFS Pool, Count(0x%x).",
- xchg_mgr->big_sfs_pool.free_count);
-
- unf_free_all_big_sfs(xchg_mgr);
- xchg_mgr->big_sfs_pool.free_count = 0;
-
- if (xchg_mgr->big_sfs_pool.big_sfs_pool) {
- vfree(xchg_mgr->big_sfs_pool.big_sfs_pool);
- xchg_mgr->big_sfs_pool.big_sfs_pool = NULL;
- }
-}
-
-static void unf_free_xchg_mgr_mem(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr)
-{
- struct unf_xchg *xchg = NULL;
- u32 i = 0;
- u32 xchg_sum = 0;
- struct unf_xchg_free_pool *free_pool = NULL;
-
- FC_CHECK_RETURN_VOID(xchg_mgr);
-
- unf_free_big_sfs_pool(xchg_mgr);
-
- /* The sfs is released first, and the XchgMgr is allocated by the get
- * free page. Therefore, the XchgMgr is compared with the '0'
- */
- if (xchg_mgr->sfs_mm_start != 0) {
- dma_free_coherent(&lport->low_level_func.dev->dev, xchg_mgr->sfs_mem_size,
- xchg_mgr->sfs_mm_start, xchg_mgr->sfs_phy_addr);
- xchg_mgr->sfs_mm_start = 0;
- }
-
- /* Release Xchg first */
- if (xchg_mgr->fcp_mm_start) {
- unf_get_xchg_config_sum(lport, &xchg_sum);
- xchg_sum = xchg_sum / UNF_EXCHG_MGR_NUM;
-
- xchg = xchg_mgr->fcp_mm_start;
- for (i = 0; i < xchg_sum; i++) {
- if (!xchg)
- break;
- xchg++;
- }
-
- vfree(xchg_mgr->fcp_mm_start);
- xchg_mgr->fcp_mm_start = NULL;
- }
-
- /* release the hot pool */
- if (xchg_mgr->hot_pool) {
- vfree(xchg_mgr->hot_pool);
- xchg_mgr->hot_pool = NULL;
- }
-
- free_pool = &xchg_mgr->free_pool;
-
- vfree(xchg_mgr);
-}
-
-static void unf_free_xchg_mgr(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr)
-{
- ulong flags = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(xchg_mgr);
-
- /* 1. At first, free exchanges for this Exch_Mgr */
- ret = unf_free_lport_xchg(lport, xchg_mgr);
-
- /* 2. Delete this Exch_Mgr entry */
- spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
- list_del_init(&xchg_mgr->xchg_mgr_entry);
- spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
-
- /* 3. free Exch_Mgr memory if necessary */
- if (ret == RETURN_OK) {
- /* free memory directly */
- unf_free_xchg_mgr_mem(lport, xchg_mgr);
- } else {
- /* Add it to Dirty list */
- spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
- list_add_tail(&xchg_mgr->xchg_mgr_entry, &lport->list_drty_xchg_mgr_head);
- spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
-
- /* Mark dirty flag */
- unf_cm_mark_dirty_mem(lport, UNF_LPORT_DIRTY_FLAG_XCHGMGR_DIRTY);
- }
-}
-
-void unf_free_all_xchg_mgr(struct unf_lport *lport)
-{
- struct unf_xchg_mgr *xchg_mgr = NULL;
- ulong flags = 0;
- u32 i = 0;
-
- FC_CHECK_RETURN_VOID(lport);
-
- /* for each L_Port->Exch_Mgr_List */
- spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
- while (!list_empty(&lport->list_xchg_mgr_head)) {
- spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
-
- xchg_mgr = unf_get_xchg_mgr_by_lport(lport, i);
- unf_free_xchg_mgr(lport, xchg_mgr);
- if (i < UNF_EXCHG_MGR_NUM)
- lport->xchg_mgr[i] = NULL;
-
- i++;
-
- /* go to next */
- spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
- }
- spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
-
- lport->destroy_step = UNF_LPORT_DESTROY_STEP_4_DESTROY_EXCH_MGR;
-}
-
-static u32 unf_init_xchg_mgr(struct unf_xchg_mgr *xchg_mgr)
-{
- FC_CHECK_RETURN_VALUE(xchg_mgr, UNF_RETURN_ERROR);
-
- memset(xchg_mgr, 0, sizeof(struct unf_xchg_mgr));
-
- INIT_LIST_HEAD(&xchg_mgr->xchg_mgr_entry);
- xchg_mgr->fcp_mm_start = NULL;
- xchg_mgr->mem_szie = sizeof(struct unf_xchg_mgr);
-
- return RETURN_OK;
-}
-
-static u32 unf_init_xchg_mgr_free_pool(struct unf_xchg_mgr *xchg_mgr)
-{
- struct unf_xchg_free_pool *free_pool = NULL;
-
- FC_CHECK_RETURN_VALUE(xchg_mgr, UNF_RETURN_ERROR);
-
- free_pool = &xchg_mgr->free_pool;
- INIT_LIST_HEAD(&free_pool->list_free_xchg_list);
- INIT_LIST_HEAD(&free_pool->list_sfs_xchg_list);
- spin_lock_init(&free_pool->xchg_freepool_lock);
- free_pool->fcp_xchg_sum = 0;
- free_pool->xchg_mgr_completion = NULL;
-
- return RETURN_OK;
-}
-
-static u32 unf_init_xchg_hot_pool(struct unf_lport *lport, struct unf_xchg_hot_pool *hot_pool,
- u32 xchg_sum)
-{
- FC_CHECK_RETURN_VALUE(hot_pool, UNF_RETURN_ERROR);
-
- INIT_LIST_HEAD(&hot_pool->sfs_busylist);
- INIT_LIST_HEAD(&hot_pool->ini_busylist);
- spin_lock_init(&hot_pool->xchg_hotpool_lock);
- INIT_LIST_HEAD(&hot_pool->list_destroy_xchg);
- hot_pool->total_xchges = 0;
- hot_pool->wait_state = false;
- hot_pool->lport = lport;
-
- /* Slab Pool Index */
- hot_pool->slab_next_index = 0;
- UNF_TOU16_CHECK(hot_pool->slab_total_sum, xchg_sum, return UNF_RETURN_ERROR);
-
- return RETURN_OK;
-}
-
-static u32 unf_alloc_and_init_big_sfs_pool(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr)
-{
-#define UNF_MAX_RESOURCE_RESERVED_FOR_RSCN 20
-#define UNF_BIG_SFS_POOL_TYPES 6
- u32 i = 0;
- u32 size = 0;
- u32 align_size = 0;
- u32 npiv_cnt = 0;
- struct unf_big_sfs_pool *big_sfs_pool = NULL;
- struct unf_big_sfs *big_sfs_buff = NULL;
- u32 buf_total_size;
- u32 buf_num;
- u32 buf_cnt_per_huge_buf;
- u32 alloc_idx;
- u32 cur_buf_idx = 0;
- u32 cur_buf_offset = 0;
-
- FC_CHECK_RETURN_VALUE(xchg_mgr, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- big_sfs_pool = &xchg_mgr->big_sfs_pool;
-
- INIT_LIST_HEAD(&big_sfs_pool->list_freepool);
- INIT_LIST_HEAD(&big_sfs_pool->list_busypool);
- spin_lock_init(&big_sfs_pool->big_sfs_pool_lock);
- npiv_cnt = lport->low_level_func.support_max_npiv_num;
-
- /*
- * The value*6 indicates GID_PT/GID_FT, RSCN, and ECHO
- * Another command is received when a command is being responded
- * A maximum of 20 resources are reserved for the RSCN. During the test,
- * multiple rscn are found. As a result, the resources are insufficient
- * and the disc fails.
- */
- big_sfs_pool->free_count = (npiv_cnt + 1) * UNF_BIG_SFS_POOL_TYPES +
- UNF_MAX_RESOURCE_RESERVED_FOR_RSCN;
- big_sfs_buff =
- (struct unf_big_sfs *)vmalloc(big_sfs_pool->free_count * sizeof(struct unf_big_sfs));
- if (!big_sfs_buff) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "Allocate Big SFS buf fail.");
-
- return UNF_RETURN_ERROR;
- }
- memset(big_sfs_buff, 0, big_sfs_pool->free_count * sizeof(struct unf_big_sfs));
- xchg_mgr->mem_szie += (u32)(big_sfs_pool->free_count * sizeof(struct unf_big_sfs));
- big_sfs_pool->big_sfs_pool = (void *)big_sfs_buff;
-
- /*
- * Use the larger value of sizeof (struct unf_gid_acc_pld) and sizeof
- * (struct unf_rscn_pld) to avoid the icp error.Therefore, the value is
- * directly assigned instead of being compared.
- */
- size = sizeof(struct unf_gid_acc_pld);
- align_size = ALIGN(size, PAGE_SIZE);
-
- buf_total_size = align_size * big_sfs_pool->free_count;
- xchg_mgr->big_sfs_buf_list.buf_size =
- buf_total_size > BUF_LIST_PAGE_SIZE ? BUF_LIST_PAGE_SIZE
- : buf_total_size;
-
- buf_cnt_per_huge_buf = xchg_mgr->big_sfs_buf_list.buf_size / align_size;
- buf_num = big_sfs_pool->free_count % buf_cnt_per_huge_buf
- ? big_sfs_pool->free_count / buf_cnt_per_huge_buf + 1
- : big_sfs_pool->free_count / buf_cnt_per_huge_buf;
-
- xchg_mgr->big_sfs_buf_list.buflist = (struct buff_list *)kmalloc(buf_num *
- sizeof(struct buff_list), GFP_KERNEL);
- xchg_mgr->big_sfs_buf_list.buf_num = buf_num;
-
- if (!xchg_mgr->big_sfs_buf_list.buflist) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Allocate BigSfs pool buf list failed out of memory");
- goto free_buff;
- }
- memset(xchg_mgr->big_sfs_buf_list.buflist, 0, buf_num * sizeof(struct buff_list));
- for (alloc_idx = 0; alloc_idx < buf_num; alloc_idx++) {
- xchg_mgr->big_sfs_buf_list.buflist[alloc_idx].vaddr =
- kmalloc(xchg_mgr->big_sfs_buf_list.buf_size, GFP_ATOMIC);
- if (xchg_mgr->big_sfs_buf_list.buflist[alloc_idx].vaddr ==
- NULL) {
- goto free_buff;
- }
- memset(xchg_mgr->big_sfs_buf_list.buflist[alloc_idx].vaddr, 0,
- xchg_mgr->big_sfs_buf_list.buf_size);
- }
-
- for (i = 0; i < big_sfs_pool->free_count; i++) {
- if (i != 0 && !(i % buf_cnt_per_huge_buf))
- cur_buf_idx++;
-
- cur_buf_offset = align_size * (i % buf_cnt_per_huge_buf);
- big_sfs_buff->addr = xchg_mgr->big_sfs_buf_list.buflist[cur_buf_idx].vaddr +
- cur_buf_offset;
- big_sfs_buff->size = size;
- xchg_mgr->mem_szie += size;
- list_add_tail(&big_sfs_buff->entry_bigsfs, &big_sfs_pool->list_freepool);
- big_sfs_buff++;
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[EVENT]Allocate BigSfs pool size:%d,align_size:%d,buf_num:%u,buf_size:%u",
- size, align_size, xchg_mgr->big_sfs_buf_list.buf_num,
- xchg_mgr->big_sfs_buf_list.buf_size);
- return RETURN_OK;
-free_buff:
- unf_free_all_big_sfs(xchg_mgr);
- vfree(big_sfs_buff);
- big_sfs_pool->big_sfs_pool = NULL;
- return UNF_RETURN_ERROR;
-}
-
-static void unf_free_one_big_sfs(struct unf_xchg *xchg)
-{
- ulong flag = 0;
- struct unf_xchg_mgr *xchg_mgr = NULL;
-
- FC_CHECK_RETURN_VOID(xchg);
- xchg_mgr = xchg->xchg_mgr;
- FC_CHECK_RETURN_VOID(xchg_mgr);
- if (!xchg->big_sfs_buf)
- return;
-
- if (xchg->cmnd_code != NS_GID_PT && xchg->cmnd_code != NS_GID_FT &&
- xchg->cmnd_code != ELS_ECHO &&
- xchg->cmnd_code != (UNF_SET_ELS_ACC_TYPE(ELS_ECHO)) && xchg->cmnd_code != ELS_RSCN &&
- xchg->cmnd_code != (UNF_SET_ELS_ACC_TYPE(ELS_RSCN))) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "Exchange(0x%p), Command(0x%x) big SFS buf is not NULL.",
- xchg, xchg->cmnd_code);
- }
-
- spin_lock_irqsave(&xchg_mgr->big_sfs_pool.big_sfs_pool_lock, flag);
- list_del(&xchg->big_sfs_buf->entry_bigsfs);
- list_add_tail(&xchg->big_sfs_buf->entry_bigsfs,
- &xchg_mgr->big_sfs_pool.list_freepool);
- xchg_mgr->big_sfs_pool.free_count++;
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "Free one big SFS buf(0x%p), Count(0x%x), Exchange(0x%p), Command(0x%x).",
- xchg->big_sfs_buf->addr, xchg_mgr->big_sfs_pool.free_count,
- xchg, xchg->cmnd_code);
- spin_unlock_irqrestore(&xchg_mgr->big_sfs_pool.big_sfs_pool_lock, flag);
-}
-
-static void unf_free_exchg_mgr_info(struct unf_lport *lport)
-{
- u32 i;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flags = 0;
- struct unf_xchg_mgr *xchg_mgr = NULL;
-
- spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
- list_for_each_safe(node, next_node, &lport->list_xchg_mgr_head) {
- list_del(node);
- xchg_mgr = list_entry(node, struct unf_xchg_mgr, xchg_mgr_entry);
- }
- spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
-
- for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
- xchg_mgr = lport->xchg_mgr[i];
-
- if (xchg_mgr) {
- unf_free_big_sfs_pool(xchg_mgr);
-
- if (xchg_mgr->sfs_mm_start) {
- dma_free_coherent(&lport->low_level_func.dev->dev,
- xchg_mgr->sfs_mem_size, xchg_mgr->sfs_mm_start,
- xchg_mgr->sfs_phy_addr);
- xchg_mgr->sfs_mm_start = 0;
- }
-
- if (xchg_mgr->fcp_mm_start) {
- vfree(xchg_mgr->fcp_mm_start);
- xchg_mgr->fcp_mm_start = NULL;
- }
-
- if (xchg_mgr->hot_pool) {
- vfree(xchg_mgr->hot_pool);
- xchg_mgr->hot_pool = NULL;
- }
-
- vfree(xchg_mgr);
- lport->xchg_mgr[i] = NULL;
- }
- }
-}
-
-static u32 unf_alloc_and_init_xchg_mgr(struct unf_lport *lport)
-{
- struct unf_xchg_mgr *xchg_mgr = NULL;
- struct unf_xchg_hot_pool *hot_pool = NULL;
- struct unf_xchg *xchg_mem = NULL;
- void *sfs_mm_start = 0;
- dma_addr_t sfs_phy_addr = 0;
- u32 xchg_sum = 0;
- u32 sfs_xchg_sum = 0;
- ulong flags = 0;
- u32 ret = UNF_RETURN_ERROR;
- u32 slab_num = 0;
- u32 i = 0;
-
- ret = unf_get_xchg_config_sum(lport, &xchg_sum);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Port(0x%x) can't get Exchange.", lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- /* SFS Exchange Sum */
- sfs_xchg_sum = lport->low_level_func.lport_cfg_items.max_sfs_xchg /
- UNF_EXCHG_MGR_NUM;
- xchg_sum = xchg_sum / UNF_EXCHG_MGR_NUM;
- slab_num = lport->low_level_func.support_max_hot_tag_range / UNF_EXCHG_MGR_NUM;
- for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
- /* Alloc Exchange Manager */
- xchg_mgr = (struct unf_xchg_mgr *)vmalloc(sizeof(struct unf_xchg_mgr));
- if (!xchg_mgr) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "Port(0x%x) allocate Exchange Manager Memory Fail.",
- lport->port_id);
- goto exit;
- }
-
- /* Init Exchange Manager */
- ret = unf_init_xchg_mgr(xchg_mgr);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Port(0x%x) initialization Exchange Manager unsuccessful.",
- lport->port_id);
- goto free_xchg_mgr;
- }
-
- /* Initialize the Exchange Free Pool resource */
- ret = unf_init_xchg_mgr_free_pool(xchg_mgr);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Port(0x%x) initialization Exchange Manager Free Pool unsuccessful.",
- lport->port_id);
- goto free_xchg_mgr;
- }
-
- /* Allocate memory for Hot Pool and Xchg slab */
- hot_pool = vmalloc(sizeof(struct unf_xchg_hot_pool) +
- sizeof(struct unf_xchg *) * slab_num);
- if (!hot_pool) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "Port(0x%x) allocate Hot Pool Memory Fail.",
- lport->port_id);
- goto free_xchg_mgr;
- }
- memset(hot_pool, 0,
- sizeof(struct unf_xchg_hot_pool) + sizeof(struct unf_xchg *) * slab_num);
-
- xchg_mgr->mem_szie += (u32)(sizeof(struct unf_xchg_hot_pool) +
- sizeof(struct unf_xchg *) * slab_num);
- /* Initialize the Exchange Hot Pool resource */
- ret = unf_init_xchg_hot_pool(lport, hot_pool, slab_num);
- if (ret != RETURN_OK)
- goto free_hot_pool;
-
- hot_pool->base += (u16)(i * slab_num);
- /* Allocate the memory of all Xchg (IO/SFS) */
- xchg_mem = vmalloc(sizeof(struct unf_xchg) * xchg_sum);
- if (!xchg_mem) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "Port(0x%x) allocate Exchange Memory Fail.",
- lport->port_id);
- goto free_hot_pool;
- }
- memset(xchg_mem, 0, sizeof(struct unf_xchg) * xchg_sum);
-
- xchg_mgr->mem_szie += (u32)(sizeof(struct unf_xchg) * xchg_sum);
- xchg_mgr->hot_pool = hot_pool;
- xchg_mgr->fcp_mm_start = xchg_mem;
- /* Allocate the memory used by the SFS Xchg to carry the
- * ELS/BLS/GS command and response
- */
- xchg_mgr->sfs_mem_size = (u32)(sizeof(union unf_sfs_u) * sfs_xchg_sum);
-
- /* Apply for the DMA space for sending sfs frames.
- * If the value of DMA32 is less than 4 GB, cross-4G problems
- * will not occur
- */
- sfs_mm_start = dma_alloc_coherent(&lport->low_level_func.dev->dev,
- xchg_mgr->sfs_mem_size,
- &sfs_phy_addr, GFP_KERNEL);
- if (!sfs_mm_start) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "Port(0x%x) Get Free Pagers Fail .",
- lport->port_id);
- goto free_xchg_mem;
- }
- memset(sfs_mm_start, 0, sizeof(union unf_sfs_u) * sfs_xchg_sum);
-
- xchg_mgr->mem_szie += xchg_mgr->sfs_mem_size;
- xchg_mgr->sfs_mm_start = sfs_mm_start;
- xchg_mgr->sfs_phy_addr = sfs_phy_addr;
- /* The Xchg is initialized and mounted to the Free Pool */
- ret = unf_init_xchg(lport, xchg_mgr, xchg_sum, sfs_xchg_sum);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Port(0x%x) initialization Exchange unsuccessful, Exchange Number(%d), SFS Exchange number(%d).",
- lport->port_id, xchg_sum, sfs_xchg_sum);
- dma_free_coherent(&lport->low_level_func.dev->dev, xchg_mgr->sfs_mem_size,
- xchg_mgr->sfs_mm_start, xchg_mgr->sfs_phy_addr);
- xchg_mgr->sfs_mm_start = 0;
- goto free_xchg_mem;
- }
-
- /* Apply for the memory used by GID_PT, GID_FT, and RSCN */
- ret = unf_alloc_and_init_big_sfs_pool(lport, xchg_mgr);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "Port(0x%x) allocate big SFS fail", lport->port_id);
- dma_free_coherent(&lport->low_level_func.dev->dev, xchg_mgr->sfs_mem_size,
- xchg_mgr->sfs_mm_start, xchg_mgr->sfs_phy_addr);
- xchg_mgr->sfs_mm_start = 0;
- goto free_xchg_mem;
- }
-
- spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
- lport->xchg_mgr[i] = (void *)xchg_mgr;
- list_add_tail(&xchg_mgr->xchg_mgr_entry, &lport->list_xchg_mgr_head);
- spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) ExchangeMgr:(0x%p),Base:(0x%x).",
- lport->port_id, lport->xchg_mgr[i], hot_pool->base);
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "Port(0x%x) allocate Exchange Manager size(0x%x).",
- lport->port_id, xchg_mgr->mem_szie);
- return RETURN_OK;
-free_xchg_mem:
- vfree(xchg_mem);
-free_hot_pool:
- vfree(hot_pool);
-free_xchg_mgr:
- vfree(xchg_mgr);
-exit:
- unf_free_exchg_mgr_info(lport);
- return UNF_RETURN_ERROR;
-}
-
-void unf_xchg_mgr_destroy(struct unf_lport *lport)
-{
- FC_CHECK_RETURN_VOID(lport);
-
- unf_free_all_xchg_mgr(lport);
-}
-
-u32 unf_alloc_xchg_resource(struct unf_lport *lport)
-{
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- INIT_LIST_HEAD(&lport->list_drty_xchg_mgr_head);
- INIT_LIST_HEAD(&lport->list_xchg_mgr_head);
- spin_lock_init(&lport->xchg_mgr_lock);
-
- /* LPort Xchg Management Unit Alloc */
- if (unf_alloc_and_init_xchg_mgr(lport) != RETURN_OK)
- return UNF_RETURN_ERROR;
-
- return RETURN_OK;
-}
-
-void unf_destroy_dirty_xchg(struct unf_lport *lport, bool show_only)
-{
- u32 dirty_xchg = 0;
- struct unf_xchg_mgr *xchg_mgr = NULL;
- ulong flags = 0;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
-
- FC_CHECK_RETURN_VOID(lport);
-
- if (lport->dirty_flag & UNF_LPORT_DIRTY_FLAG_XCHGMGR_DIRTY) {
- spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
- list_for_each_safe(node, next_node, &lport->list_drty_xchg_mgr_head) {
- xchg_mgr = list_entry(node, struct unf_xchg_mgr, xchg_mgr_entry);
- spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
- if (xchg_mgr) {
- dirty_xchg = (xchg_mgr->free_pool.total_fcp_xchg +
- xchg_mgr->free_pool.total_sfs_xchg);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) has %u dirty exchange(s)",
- lport->port_id, dirty_xchg);
-
- unf_show_all_xchg(lport, xchg_mgr);
-
- if (!show_only) {
- /* Delete Dirty Exchange Mgr entry */
- spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
- list_del_init(&xchg_mgr->xchg_mgr_entry);
- spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
-
- /* Free Dirty Exchange Mgr memory */
- unf_free_xchg_mgr_mem(lport, xchg_mgr);
- }
- }
- spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
- }
- spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
- }
-}
-
-struct unf_xchg_mgr *unf_get_xchg_mgr_by_lport(struct unf_lport *lport, u32 idx)
-{
- struct unf_xchg_mgr *xchg_mgr = NULL;
- ulong flags = 0;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
- FC_CHECK_RETURN_VALUE((idx < UNF_EXCHG_MGR_NUM), NULL);
-
- spin_lock_irqsave(&lport->xchg_mgr_lock, flags);
- xchg_mgr = lport->xchg_mgr[idx];
- spin_unlock_irqrestore(&lport->xchg_mgr_lock, flags);
-
- return xchg_mgr;
-}
-
-struct unf_xchg_hot_pool *unf_get_hot_pool_by_lport(struct unf_lport *lport,
- u32 mgr_idx)
-{
- struct unf_xchg_mgr *xchg_mgr = NULL;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
-
- unf_lport = (struct unf_lport *)(lport->root_lport);
-
- FC_CHECK_RETURN_VALUE(unf_lport, NULL);
-
- /* Get Xchg Manager */
- xchg_mgr = unf_get_xchg_mgr_by_lport(unf_lport, mgr_idx);
- if (!xchg_mgr) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "Port(0x%x) Exchange Manager is NULL.",
- unf_lport->port_id);
-
- return NULL;
- }
-
- /* Get Xchg Manager Hot Pool */
- return xchg_mgr->hot_pool;
-}
-
-static inline void unf_hot_pool_slab_set(struct unf_xchg_hot_pool *hot_pool,
- u16 slab_index, struct unf_xchg *xchg)
-{
- FC_CHECK_RETURN_VOID(hot_pool);
-
- hot_pool->xchg_slab[slab_index] = xchg;
-}
-
-static inline struct unf_xchg *unf_get_xchg_by_xchg_tag(struct unf_xchg_hot_pool *hot_pool,
- u16 slab_index)
-{
- FC_CHECK_RETURN_VALUE(hot_pool, NULL);
-
- return hot_pool->xchg_slab[slab_index];
-}
-
-static void *unf_look_up_xchg_by_tag(void *lport, u16 hot_pool_tag)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_xchg_hot_pool *hot_pool = NULL;
- struct unf_xchg *xchg = NULL;
- ulong flags = 0;
- u32 exchg_mgr_idx = 0;
- struct unf_xchg_mgr *xchg_mgr = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
-
- /* In the case of NPIV, lport is the Vport pointer,
- * the share uses the ExchMgr of RootLport
- */
- unf_lport = ((struct unf_lport *)lport)->root_lport;
- FC_CHECK_RETURN_VALUE(unf_lport, NULL);
-
- exchg_mgr_idx = (hot_pool_tag * UNF_EXCHG_MGR_NUM) /
- unf_lport->low_level_func.support_max_hot_tag_range;
- if (unlikely(exchg_mgr_idx >= UNF_EXCHG_MGR_NUM)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%x) Get ExchgMgr %u err",
- unf_lport->port_id, exchg_mgr_idx);
-
- return NULL;
- }
-
- xchg_mgr = unf_lport->xchg_mgr[exchg_mgr_idx];
-
- if (unlikely(!xchg_mgr)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%x) ExchgMgr %u is null",
- unf_lport->port_id, exchg_mgr_idx);
-
- return NULL;
- }
-
- hot_pool = xchg_mgr->hot_pool;
-
- if (unlikely(!hot_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "Port(0x%x) Hot Pool is NULL.",
- unf_lport->port_id);
-
- return NULL;
- }
-
- if (unlikely(hot_pool_tag >= (hot_pool->slab_total_sum + hot_pool->base))) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]LPort(0x%x) can't Input Tag(0x%x), Max(0x%x).",
- unf_lport->port_id, hot_pool_tag,
- (hot_pool->slab_total_sum + hot_pool->base));
-
- return NULL;
- }
-
- spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
- xchg = unf_get_xchg_by_xchg_tag(hot_pool, hot_pool_tag - hot_pool->base);
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
-
- return (void *)xchg;
-}
-
-static void *unf_find_xchg_by_ox_id(void *lport, u16 ox_id, u32 oid)
-{
- struct unf_xchg_hot_pool *hot_pool = NULL;
- struct unf_xchg *xchg = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- struct unf_lport *unf_lport = NULL;
- ulong flags = 0;
- ulong xchg_flags = 0;
- u32 i = 0;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
-
- /* In the case of NPIV, the lport is the Vport pointer,
- * and the share uses the ExchMgr of the RootLport
- */
- unf_lport = ((struct unf_lport *)lport)->root_lport;
- FC_CHECK_RETURN_VALUE(unf_lport, NULL);
-
- for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
- hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
- if (unlikely(!hot_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "Port(0x%x) MgrIdex %u Hot Pool is NULL.",
- unf_lport->port_id, i);
- continue;
- }
-
- spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
-
- /* 1. Traverse sfs_busy list */
- list_for_each_safe(node, next_node, &hot_pool->sfs_busylist) {
- xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
- spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flags);
- if (unf_check_oxid_matched(ox_id, oid, xchg)) {
- atomic_inc(&xchg->ref_cnt);
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flags);
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
- return xchg;
- }
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flags);
- }
-
- /* 2. Traverse INI_Busy List */
- list_for_each_safe(node, next_node, &hot_pool->ini_busylist) {
- xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
- spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flags);
- if (unf_check_oxid_matched(ox_id, oid, xchg)) {
- atomic_inc(&xchg->ref_cnt);
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flags);
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
- return xchg;
- }
- spin_unlock_irqrestore(&xchg->xchg_state_lock,
- xchg_flags);
- }
-
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
- }
-
- return NULL;
-}
-
-static inline bool unf_check_xchg_matched(struct unf_xchg *xchg, u64 command_sn,
- u32 world_id, void *pinitiator)
-{
- bool matched = false;
-
- matched = (command_sn == xchg->cmnd_sn);
- if (matched && (atomic_read(&xchg->ref_cnt) > 0))
- return true;
- else
- return false;
-}
-
-static void *unf_look_up_xchg_by_cmnd_sn(void *lport, u64 command_sn,
- u32 world_id, void *pinitiator)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_xchg_hot_pool *hot_pool = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- struct unf_xchg *xchg = NULL;
- ulong flags = 0;
- u32 i;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
-
- /* In NPIV, lport is a Vport pointer, and idle resources are shared by
- * ExchMgr of RootLport. However, busy resources are mounted on each
- * vport. Therefore, vport needs to be used.
- */
- unf_lport = (struct unf_lport *)lport;
- FC_CHECK_RETURN_VALUE(unf_lport, NULL);
-
- for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
- hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
- if (unlikely(!hot_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%x) hot pool is NULL",
- unf_lport->port_id);
-
- continue;
- }
-
- /* from busy_list */
- spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
- list_for_each_safe(node, next_node, &hot_pool->ini_busylist) {
- xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
- if (unf_check_xchg_matched(xchg, command_sn, world_id, pinitiator)) {
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
-
- return xchg;
- }
- }
-
- /* vport: from destroy_list */
- if (unf_lport != unf_lport->root_lport) {
- list_for_each_safe(node, next_node, &hot_pool->list_destroy_xchg) {
- xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
- if (unf_check_xchg_matched(xchg, command_sn, world_id,
- pinitiator)) {
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]Port(0x%x) lookup exchange from destroy list",
- unf_lport->port_id);
-
- return xchg;
- }
- }
- }
-
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
- }
-
- return NULL;
-}
-
-static inline u32 unf_alloc_hot_pool_slab(struct unf_xchg_hot_pool *hot_pool, struct unf_xchg *xchg)
-{
- u16 slab_index = 0;
-
- FC_CHECK_RETURN_VALUE(hot_pool, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- /* Check whether the hotpool tag is in the specified range sirt.
- * If yes, set up the management relationship. If no, handle the problem
- * according to the normal IO. If the sirt digitmap is used but the tag
- * is occupied, it indicates that the I/O is discarded.
- */
-
- hot_pool->slab_next_index = (u16)hot_pool->slab_next_index;
- slab_index = hot_pool->slab_next_index;
- while (unf_get_xchg_by_xchg_tag(hot_pool, slab_index)) {
- slab_index++;
- slab_index = slab_index % hot_pool->slab_total_sum;
-
- /* Rewind occurs */
- if (slab_index == hot_pool->slab_next_index) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_MAJOR,
- "There is No Slab At Hot Pool(0x%p) for xchg(0x%p).",
- hot_pool, xchg);
-
- return UNF_RETURN_ERROR;
- }
- }
-
- unf_hot_pool_slab_set(hot_pool, slab_index, xchg);
- xchg->hotpooltag = slab_index + hot_pool->base;
- slab_index++;
- hot_pool->slab_next_index = slab_index % hot_pool->slab_total_sum;
-
- return RETURN_OK;
-}
-
-struct unf_esgl_page *
-unf_get_and_add_one_free_esgl_page(struct unf_lport *lport, struct unf_xchg *xchg)
-{
- struct unf_esgl *esgl = NULL;
- struct list_head *list_head = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
- FC_CHECK_RETURN_VALUE(xchg, NULL);
-
- /* Obtain a new Esgl from the EsglPool and add it to the list_esgls of
- * the Xchg
- */
- spin_lock_irqsave(&lport->esgl_pool.esgl_pool_lock, flag);
- if (!list_empty(&lport->esgl_pool.list_esgl_pool)) {
- list_head = UNF_OS_LIST_NEXT(&lport->esgl_pool.list_esgl_pool);
- list_del(list_head);
- lport->esgl_pool.esgl_pool_count--;
- list_add_tail(list_head, &xchg->list_esgls);
-
- esgl = list_entry(list_head, struct unf_esgl, entry_esgl);
- atomic_inc(&xchg->esgl_cnt);
- spin_unlock_irqrestore(&lport->esgl_pool.esgl_pool_lock, flag);
- } else {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) esgl pool is empty",
- lport->nport_id);
-
- spin_unlock_irqrestore(&lport->esgl_pool.esgl_pool_lock, flag);
- return NULL;
- }
-
- return &esgl->page;
-}
-
-void unf_release_esgls(struct unf_xchg *xchg)
-{
- struct unf_lport *unf_lport = NULL;
- struct list_head *list = NULL;
- struct list_head *list_tmp = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
- FC_CHECK_RETURN_VOID(xchg->lport);
-
- if (atomic_read(&xchg->esgl_cnt) <= 0)
- return;
-
- /* In the case of NPIV, the Vport pointer is saved in v_pstExch,
- * and the EsglPool of RootLport is shared.
- */
- unf_lport = (xchg->lport)->root_lport;
- FC_CHECK_RETURN_VOID(unf_lport);
-
- spin_lock_irqsave(&unf_lport->esgl_pool.esgl_pool_lock, flag);
- if (!list_empty(&xchg->list_esgls)) {
- list_for_each_safe(list, list_tmp, &xchg->list_esgls) {
- list_del(list);
- list_add_tail(list, &unf_lport->esgl_pool.list_esgl_pool);
- unf_lport->esgl_pool.esgl_pool_count++;
- atomic_dec(&xchg->esgl_cnt);
- }
- }
- spin_unlock_irqrestore(&unf_lport->esgl_pool.esgl_pool_lock, flag);
-}
-
-static void unf_add_back_to_fcp_list(struct unf_xchg_free_pool *free_pool, struct unf_xchg *xchg)
-{
- ulong flags = 0;
-
- FC_CHECK_RETURN_VOID(free_pool);
- FC_CHECK_RETURN_VOID(xchg);
-
- unf_init_xchg_attribute(xchg);
-
- /* The released I/O resources are added to the queue tail to facilitate
- * fault locating
- */
- spin_lock_irqsave(&free_pool->xchg_freepool_lock, flags);
- list_add_tail(&xchg->list_xchg_entry, &free_pool->list_free_xchg_list);
- free_pool->total_fcp_xchg++;
- spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
-}
-
-static void unf_check_xchg_mgr_status(struct unf_xchg_mgr *xchg_mgr)
-{
- ulong flags = 0;
- u32 total_xchg = 0;
- u32 total_xchg_sum = 0;
-
- FC_CHECK_RETURN_VOID(xchg_mgr);
-
- spin_lock_irqsave(&xchg_mgr->free_pool.xchg_freepool_lock, flags);
-
- total_xchg = xchg_mgr->free_pool.total_fcp_xchg + xchg_mgr->free_pool.total_sfs_xchg;
- total_xchg_sum = xchg_mgr->free_pool.fcp_xchg_sum + xchg_mgr->free_pool.sfs_xchg_sum;
-
- if (xchg_mgr->free_pool.xchg_mgr_completion && total_xchg == total_xchg_sum)
- complete(xchg_mgr->free_pool.xchg_mgr_completion);
-
- spin_unlock_irqrestore(&xchg_mgr->free_pool.xchg_freepool_lock, flags);
-}
-
-static void unf_free_fcp_xchg(struct unf_xchg *xchg)
-{
- struct unf_xchg_free_pool *free_pool = NULL;
- struct unf_xchg_mgr *xchg_mgr = NULL;
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- /* Releasing a Specified INI I/O and Invoking the scsi_done Process */
- unf_done_ini_xchg(xchg);
- free_pool = xchg->free_pool;
- xchg_mgr = xchg->xchg_mgr;
- unf_lport = xchg->lport;
- unf_rport = xchg->rport;
-
- atomic_dec(&unf_rport->pending_io_cnt);
- /* Release the Esgls in the Xchg structure and return it to the EsglPool
- * of the Lport
- */
- unf_release_esgls(xchg);
-
- if (unlikely(xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu)) {
- kfree(xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu);
- xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu = NULL;
- }
-
- /* Mount I/O resources to the FCP Free linked list */
- unf_add_back_to_fcp_list(free_pool, xchg);
-
- /* The Xchg is released synchronously and then forcibly released to
- * prevent the Xchg from accessing the Xchg in the normal I/O process
- */
- if (unlikely(unf_lport->port_removing))
- unf_check_xchg_mgr_status(xchg_mgr);
-}
-
-static void unf_init_io_xchg_param(struct unf_xchg *xchg, struct unf_lport *lport,
- struct unf_xchg_mgr *xchg_mgr)
-{
- static atomic64_t exhd_id;
-
- xchg->start_jif = atomic64_inc_return(&exhd_id);
- xchg->xchg_mgr = xchg_mgr;
- xchg->free_pool = &xchg_mgr->free_pool;
- xchg->hot_pool = xchg_mgr->hot_pool;
- xchg->lport = lport;
- xchg->xchg_type = UNF_XCHG_TYPE_INI;
- xchg->free_xchg = unf_free_fcp_xchg;
- xchg->scsi_or_tgt_cmnd_func = NULL;
- xchg->io_state = UNF_IO_STATE_NEW;
- xchg->io_send_stage = TGT_IO_SEND_STAGE_NONE;
- xchg->io_send_result = TGT_IO_SEND_RESULT_INVALID;
- xchg->io_send_abort = false;
- xchg->io_abort_result = false;
- xchg->oxid = INVALID_VALUE16;
- xchg->abort_oxid = INVALID_VALUE16;
- xchg->rxid = INVALID_VALUE16;
- xchg->sid = INVALID_VALUE32;
- xchg->did = INVALID_VALUE32;
- xchg->oid = INVALID_VALUE32;
- xchg->seq_id = INVALID_VALUE8;
- xchg->cmnd_code = INVALID_VALUE32;
- xchg->data_len = 0;
- xchg->resid_len = 0;
- xchg->data_direction = DMA_NONE;
- xchg->may_consume_res_cnt = 0;
- xchg->fast_consume_res_cnt = 0;
- xchg->io_front_jif = 0;
- xchg->tmf_state = 0;
- xchg->ucode_abts_state = INVALID_VALUE32;
- xchg->abts_state = 0;
- xchg->rport_bind_jifs = INVALID_VALUE64;
- xchg->scsi_id = INVALID_VALUE32;
- xchg->qos_level = 0;
- xchg->world_id = INVALID_VALUE32;
-
- memset(&xchg->dif_control, 0, sizeof(struct unf_dif_control_info));
- memset(&xchg->req_sgl_info, 0, sizeof(struct unf_req_sgl_info));
- memset(&xchg->dif_sgl_info, 0, sizeof(struct unf_req_sgl_info));
- xchg->scsi_cmnd_info.result = 0;
-
- xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
- (u32)atomic64_inc_return(&((struct unf_lport *)lport->root_lport)->exchg_index);
-
- if (xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] == INVALID_VALUE32) {
- xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
- (u32)atomic64_inc_return(&((struct unf_lport *)lport->root_lport)->exchg_index);
- }
-
- if (xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] == 0) {
- xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
- (u32)atomic64_inc_return(&((struct unf_lport *)lport->root_lport)->exchg_index);
- }
-
- atomic_set(&xchg->ref_cnt, 0);
- atomic_set(&xchg->delay_flag, 0);
-
- if (delayed_work_pending(&xchg->timeout_work))
- UNF_DEL_XCHG_TIMER_SAFE(xchg);
-
- INIT_DELAYED_WORK(&xchg->timeout_work, unf_fc_ini_io_xchg_time_out);
-}
-
-static struct unf_xchg *unf_alloc_io_xchg(struct unf_lport *lport,
- struct unf_xchg_mgr *xchg_mgr)
-{
- struct unf_xchg *xchg = NULL;
- struct list_head *list_node = NULL;
- struct unf_xchg_free_pool *free_pool = NULL;
- struct unf_xchg_hot_pool *hot_pool = NULL;
- ulong flags = 0;
-
- FC_CHECK_RETURN_VALUE(xchg_mgr, NULL);
- FC_CHECK_RETURN_VALUE(lport, NULL);
-
- free_pool = &xchg_mgr->free_pool;
- hot_pool = xchg_mgr->hot_pool;
- FC_CHECK_RETURN_VALUE(free_pool, NULL);
- FC_CHECK_RETURN_VALUE(hot_pool, NULL);
-
- /* 1. Free Pool */
- spin_lock_irqsave(&free_pool->xchg_freepool_lock, flags);
- if (unlikely(list_empty(&free_pool->list_free_xchg_list))) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "Port(0x%x) have no Exchange anymore.",
- lport->port_id);
- spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
- return NULL;
- }
-
- /* Select an idle node from free pool */
- list_node = UNF_OS_LIST_NEXT(&free_pool->list_free_xchg_list);
- list_del(list_node);
- free_pool->total_fcp_xchg--;
- spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
-
- xchg = list_entry(list_node, struct unf_xchg, list_xchg_entry);
- /*
- * Hot Pool:
- * When xchg is mounted to Hot Pool, the mount mode and release mode
- * of Xchg must be specified and stored in the sfs linked list.
- */
- flags = 0;
- spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
- if (unf_alloc_hot_pool_slab(hot_pool, xchg) != RETURN_OK) {
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
- unf_add_back_to_fcp_list(free_pool, xchg);
- if (unlikely(lport->port_removing))
- unf_check_xchg_mgr_status(xchg_mgr);
-
- return NULL;
- }
- list_add_tail(&xchg->list_xchg_entry, &hot_pool->ini_busylist);
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
-
- /* 3. Exchange State */
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- unf_init_io_xchg_param(xchg, lport, xchg_mgr);
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- return xchg;
-}
-
-static void unf_add_back_to_sfs_list(struct unf_xchg_free_pool *free_pool,
- struct unf_xchg *xchg)
-{
- ulong flags = 0;
-
- FC_CHECK_RETURN_VOID(free_pool);
- FC_CHECK_RETURN_VOID(xchg);
-
- unf_init_xchg_attribute(xchg);
-
- spin_lock_irqsave(&free_pool->xchg_freepool_lock, flags);
-
- list_add_tail(&xchg->list_xchg_entry, &free_pool->list_sfs_xchg_list);
- free_pool->total_sfs_xchg++;
- spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
-}
-
-static void unf_free_sfs_xchg(struct unf_xchg *xchg)
-{
- struct unf_xchg_free_pool *free_pool = NULL;
- struct unf_xchg_mgr *xchg_mgr = NULL;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- free_pool = xchg->free_pool;
- unf_lport = xchg->lport;
- xchg_mgr = xchg->xchg_mgr;
-
- /* The memory is applied for when the GID_PT/GID_FT is sent.
- * If no response is received, the GID_PT/GID_FT needs to be forcibly
- * released.
- */
-
- unf_free_one_big_sfs(xchg);
-
- unf_add_back_to_sfs_list(free_pool, xchg);
-
- if (unlikely(unf_lport->port_removing))
- unf_check_xchg_mgr_status(xchg_mgr);
-}
-
-static void unf_fc_xchg_add_timer(void *xchg, ulong time_ms,
- enum unf_timer_type time_type)
-{
- ulong flag = 0;
- struct unf_xchg *unf_xchg = NULL;
- ulong times_ms = time_ms;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VOID(xchg);
- unf_xchg = (struct unf_xchg *)xchg;
- unf_lport = unf_xchg->lport;
- FC_CHECK_RETURN_VOID(unf_lport);
-
- /* update timeout */
- switch (time_type) {
- /* The processing of TGT RRQ timeout is the same as that of TGT IO
- * timeout. The timeout period is different.
- */
- case UNF_TIMER_TYPE_TGT_RRQ:
- times_ms = times_ms + UNF_TGT_RRQ_REDUNDANT_TIME;
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "TGT RRQ Timer set.");
- break;
-
- case UNF_TIMER_TYPE_INI_RRQ:
- times_ms = times_ms - UNF_INI_RRQ_REDUNDANT_TIME;
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "INI RRQ Timer set.");
- break;
-
- case UNF_TIMER_TYPE_SFS:
- times_ms = times_ms + UNF_INI_ELS_REDUNDANT_TIME;
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "INI ELS Timer set.");
- break;
- default:
- break;
- }
-
- /* The xchg of the timer must be valid. If the reference count of xchg
- * is 0, the timer must not be added
- */
- if (atomic_read(&unf_xchg->ref_cnt) <= 0) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_KEVENT,
- "[warn]Abnormal Exchange(0x%p), Reference count(0x%x), Can't add timer.",
- unf_xchg, atomic_read(&unf_xchg->ref_cnt));
- return;
- }
-
- /* Delay Work: Hold for timer */
- spin_lock_irqsave(&unf_xchg->xchg_state_lock, flag);
- if (queue_delayed_work(unf_lport->xchg_wq, &unf_xchg->timeout_work,
- (ulong)msecs_to_jiffies((u32)times_ms))) {
- /* hold for timer */
- atomic_inc(&unf_xchg->ref_cnt);
- }
- spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flag);
-}
-
-static void unf_init_sfs_xchg_param(struct unf_xchg *xchg,
- struct unf_lport *lport,
- struct unf_xchg_mgr *xchg_mgr)
-{
- xchg->free_pool = &xchg_mgr->free_pool;
- xchg->hot_pool = xchg_mgr->hot_pool;
- xchg->lport = lport;
- xchg->xchg_mgr = xchg_mgr;
- xchg->free_xchg = unf_free_sfs_xchg;
- xchg->xchg_type = UNF_XCHG_TYPE_SFS;
- xchg->io_state = UNF_IO_STATE_NEW;
- xchg->scsi_cmnd_info.result = 0;
- xchg->ob_callback_sts = UNF_IO_SUCCESS;
-
- xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
- (u32)atomic64_inc_return(&((struct unf_lport *)lport->root_lport)->exchg_index);
-
- if (xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] ==
- INVALID_VALUE32) {
- xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
- (u32)atomic64_inc_return(&((struct unf_lport *)lport->root_lport)->exchg_index);
- }
-
- if (xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] == 0) {
- xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
- (u32)atomic64_inc_return(&((struct unf_lport *)lport->root_lport)->exchg_index);
- }
-
- if (delayed_work_pending(&xchg->timeout_work))
- UNF_DEL_XCHG_TIMER_SAFE(xchg);
-
- INIT_DELAYED_WORK(&xchg->timeout_work, unf_sfs_xchg_time_out);
-}
-
-static struct unf_xchg *unf_alloc_sfs_xchg(struct unf_lport *lport,
- struct unf_xchg_mgr *xchg_mgr)
-{
- struct unf_xchg *xchg = NULL;
- struct list_head *list_node = NULL;
- struct unf_xchg_free_pool *free_pool = NULL;
- struct unf_xchg_hot_pool *hot_pool = NULL;
- ulong flags = 0;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
- FC_CHECK_RETURN_VALUE(xchg_mgr, NULL);
- free_pool = &xchg_mgr->free_pool;
- hot_pool = xchg_mgr->hot_pool;
- FC_CHECK_RETURN_VALUE(free_pool, NULL);
- FC_CHECK_RETURN_VALUE(hot_pool, NULL);
-
- /* Select an idle node from free pool */
- spin_lock_irqsave(&free_pool->xchg_freepool_lock, flags);
- if (list_empty(&free_pool->list_sfs_xchg_list)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "Port(0x%x) have no Exchange anymore.",
- lport->port_id);
- spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
- return NULL;
- }
-
- list_node = UNF_OS_LIST_NEXT(&free_pool->list_sfs_xchg_list);
- list_del(list_node);
- free_pool->total_sfs_xchg--;
- spin_unlock_irqrestore(&free_pool->xchg_freepool_lock, flags);
-
- xchg = list_entry(list_node, struct unf_xchg, list_xchg_entry);
- /*
- * The xchg is mounted to the Hot Pool.
- * The mount mode and release mode of the xchg must be specified
- * and stored in the sfs linked list.
- */
- flags = 0;
- spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
- if (unf_alloc_hot_pool_slab(hot_pool, xchg) != RETURN_OK) {
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
- unf_add_back_to_sfs_list(free_pool, xchg);
- if (unlikely(lport->port_removing))
- unf_check_xchg_mgr_status(xchg_mgr);
-
- return NULL;
- }
-
- list_add_tail(&xchg->list_xchg_entry, &hot_pool->sfs_busylist);
- hot_pool->total_xchges++;
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- unf_init_sfs_xchg_param(xchg, lport, xchg_mgr);
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- return xchg;
-}
-
-static void *unf_get_new_xchg(void *lport, u32 xchg_type)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_xchg_mgr *xchg_mgr = NULL;
- struct unf_xchg *xchg = NULL;
- u32 exchg_type = 0;
- u16 xchg_mgr_type;
- u32 rtry_cnt = 0;
- u32 last_exchg_mgr_idx;
-
- xchg_mgr_type = (xchg_type >> UNF_SHIFT_16);
- exchg_type = xchg_type & SPFC_XCHG_TYPE_MASK;
- FC_CHECK_RETURN_VALUE(lport, NULL);
-
- /* In the case of NPIV, the lport is the Vport pointer,
- * and the share uses the ExchMgr of the RootLport.
- */
- unf_lport = ((struct unf_lport *)lport)->root_lport;
- FC_CHECK_RETURN_VALUE(unf_lport, NULL);
-
- if (unlikely((atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP) ||
- (atomic_read(&((struct unf_lport *)lport)->lport_no_operate_flag) ==
- UNF_LPORT_NOP))) {
- return NULL;
- }
-
- last_exchg_mgr_idx = (u32)atomic64_inc_return(&unf_lport->last_exchg_mgr_idx);
-try_next_mgr:
- rtry_cnt++;
- if (unlikely(rtry_cnt > UNF_EXCHG_MGR_NUM))
- return NULL;
-
- /* If Fixed mode,only use XchgMgr 0 */
- if (unlikely(xchg_mgr_type == UNF_XCHG_MGR_TYPE_FIXED)) {
- xchg_mgr = (struct unf_xchg_mgr *)unf_lport->xchg_mgr[ARRAY_INDEX_0];
- } else {
- xchg_mgr = (struct unf_xchg_mgr *)unf_lport
- ->xchg_mgr[last_exchg_mgr_idx % UNF_EXCHG_MGR_NUM];
- }
- if (unlikely(!xchg_mgr)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%x) get exchangemgr %u is null.",
- unf_lport->port_id, last_exchg_mgr_idx % UNF_EXCHG_MGR_NUM);
- return NULL;
- }
- last_exchg_mgr_idx++;
-
- /* Allocate entries based on the Exchange type */
- switch (exchg_type) {
- case UNF_XCHG_TYPE_SFS:
- xchg = unf_alloc_sfs_xchg(lport, xchg_mgr);
- break;
- case UNF_XCHG_TYPE_INI:
- xchg = unf_alloc_io_xchg(lport, xchg_mgr);
- break;
-
- default:
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "Port(0x%x) unwonted, Exchange type(0x%x).",
- unf_lport->port_id, exchg_type);
- break;
- }
-
- if (likely(xchg)) {
- xchg->oxid = INVALID_VALUE16;
- xchg->abort_oxid = INVALID_VALUE16;
- xchg->rxid = INVALID_VALUE16;
- xchg->debug_hook = false;
- xchg->alloc_jif = jiffies;
-
- atomic_set(&xchg->ref_cnt, 1);
- atomic_set(&xchg->esgl_cnt, 0);
- } else {
- goto try_next_mgr;
- }
-
- return xchg;
-}
-
-static void unf_free_xchg(void *lport, void *xchg)
-{
- struct unf_xchg *unf_xchg = NULL;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- unf_xchg = (struct unf_xchg *)xchg;
- unf_xchg_ref_dec(unf_xchg, XCHG_FREE_XCHG);
-}
-
-u32 unf_init_xchg_mgr_temp(struct unf_lport *lport)
-{
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- lport->xchg_mgr_temp.unf_xchg_get_free_and_init = unf_get_new_xchg;
- lport->xchg_mgr_temp.unf_xchg_release = unf_free_xchg;
- lport->xchg_mgr_temp.unf_look_up_xchg_by_tag = unf_look_up_xchg_by_tag;
- lport->xchg_mgr_temp.unf_look_up_xchg_by_id = unf_find_xchg_by_ox_id;
- lport->xchg_mgr_temp.unf_xchg_add_timer = unf_fc_xchg_add_timer;
- lport->xchg_mgr_temp.unf_xchg_cancel_timer = unf_xchg_cancel_timer;
- lport->xchg_mgr_temp.unf_xchg_abort_all_io = unf_xchg_abort_all_xchg;
- lport->xchg_mgr_temp.unf_look_up_xchg_by_cmnd_sn = unf_look_up_xchg_by_cmnd_sn;
- lport->xchg_mgr_temp.unf_xchg_abort_by_lun = unf_xchg_abort_by_lun;
- lport->xchg_mgr_temp.unf_xchg_abort_by_session = unf_xchg_abort_by_session;
- lport->xchg_mgr_temp.unf_xchg_mgr_io_xchg_abort = unf_xchg_mgr_io_xchg_abort;
- lport->xchg_mgr_temp.unf_xchg_mgr_sfs_xchg_abort = unf_xchg_mgr_sfs_xchg_abort;
-
- return RETURN_OK;
-}
-
-void unf_release_xchg_mgr_temp(struct unf_lport *lport)
-{
- FC_CHECK_RETURN_VOID(lport);
-
- if (lport->dirty_flag & UNF_LPORT_DIRTY_FLAG_XCHGMGR_DIRTY) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Port(0x%x) has dirty exchange, Don't release exchange manager template.",
- lport->port_id);
-
- return;
- }
-
- memset(&lport->xchg_mgr_temp, 0, sizeof(struct unf_cm_xchg_mgr_template));
-
- lport->destroy_step = UNF_LPORT_DESTROY_STEP_7_DESTROY_XCHG_MGR_TMP;
-}
-
-void unf_set_hot_pool_wait_state(struct unf_lport *lport, bool wait_state)
-{
- struct unf_xchg_hot_pool *hot_pool = NULL;
- ulong pool_lock_flags = 0;
- u32 i = 0;
-
- FC_CHECK_RETURN_VOID(lport);
-
- for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
- hot_pool = unf_get_hot_pool_by_lport(lport, i);
- if (unlikely(!hot_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) hot pool is NULL",
- lport->port_id);
- continue;
- }
-
- spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_flags);
- hot_pool->wait_state = wait_state;
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_flags);
- }
-}
-
-u32 unf_xchg_ref_inc(struct unf_xchg *xchg, enum unf_ioflow_id io_stage)
-{
- struct unf_xchg_hot_pool *hot_pool = NULL;
- ulong flags = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- if (unlikely(xchg->debug_hook)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]Xchg(0x%p) State(0x%x) SID_DID(0x%x_0x%x) OX_ID_RX_ID(0x%x_0x%x) AllocJiff(%llu) Refcnt(%d) Stage(%s)",
- xchg, xchg->io_state, xchg->sid, xchg->did,
- xchg->oxid, xchg->rxid, xchg->alloc_jif,
- atomic_read(&xchg->ref_cnt),
- io_stage_table[io_stage].stage);
- }
-
- hot_pool = xchg->hot_pool;
- FC_CHECK_RETURN_VALUE(hot_pool, UNF_RETURN_ERROR);
-
- /* Exchange -> Hot Pool Tag check */
- if (unlikely((xchg->hotpooltag >= (hot_pool->slab_total_sum + hot_pool->base)) ||
- xchg->hotpooltag < hot_pool->base)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Xchg(0x%p) S_ID(%xh) D_ID(0x%x) hot_pool_tag(0x%x) is bigger than slab total num(0x%x) base(0x%x)",
- xchg, xchg->sid, xchg->did, xchg->hotpooltag,
- hot_pool->slab_total_sum + hot_pool->base, hot_pool->base);
-
- return UNF_RETURN_ERROR;
- }
-
- /* atomic read & inc */
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- if (unlikely(atomic_read(&xchg->ref_cnt) <= 0)) {
- ret = UNF_RETURN_ERROR;
- } else {
- if (unf_get_xchg_by_xchg_tag(hot_pool, xchg->hotpooltag - hot_pool->base) == xchg) {
- atomic_inc(&xchg->ref_cnt);
- ret = RETURN_OK;
- } else {
- ret = UNF_RETURN_ERROR;
- }
- }
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- return ret;
-}
-
-void unf_xchg_ref_dec(struct unf_xchg *xchg, enum unf_ioflow_id io_stage)
-{
- /* Atomic dec ref_cnt & test, free exchange if necessary (ref_cnt==0) */
- struct unf_xchg_hot_pool *hot_pool = NULL;
- void (*free_xchg)(struct unf_xchg *) = NULL;
- ulong flags = 0;
- ulong xchg_lock_falgs = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- if (xchg->debug_hook) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]Xchg(0x%p) State(0x%x) SID_DID(0x%x_0x%x) OXID_RXID(0x%x_0x%x) AllocJiff(%llu) Refcnt(%d) Statge %s",
- xchg, xchg->io_state, xchg->sid, xchg->did, xchg->oxid,
- xchg->rxid, xchg->alloc_jif,
- atomic_read(&xchg->ref_cnt),
- io_stage_table[io_stage].stage);
- }
-
- hot_pool = xchg->hot_pool;
- FC_CHECK_RETURN_VOID(hot_pool);
- FC_CHECK_RETURN_VOID((xchg->hotpooltag >= hot_pool->base));
-
- /*
- * 1. Atomic dec & test
- * 2. Free exchange if necessary (ref_cnt == 0)
- */
- spin_lock_irqsave(&xchg->xchg_state_lock, xchg_lock_falgs);
- if (atomic_dec_and_test(&xchg->ref_cnt)) {
- free_xchg = xchg->free_xchg;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_lock_falgs);
- spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
- unf_hot_pool_slab_set(hot_pool,
- xchg->hotpooltag - hot_pool->base, NULL);
- /* Delete exchange list entry */
- list_del_init(&xchg->list_xchg_entry);
- hot_pool->total_xchges--;
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
-
- /* unf_free_fcp_xchg --->>> unf_done_ini_xchg */
- if (free_xchg)
- free_xchg(xchg);
- } else {
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_lock_falgs);
- }
-}
-
-static void unf_init_xchg_attribute(struct unf_xchg *xchg)
-{
- ulong flags = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- xchg->xchg_mgr = NULL;
- xchg->free_pool = NULL;
- xchg->hot_pool = NULL;
- xchg->lport = NULL;
- xchg->rport = NULL;
- xchg->disc_rport = NULL;
- xchg->io_state = UNF_IO_STATE_NEW;
- xchg->io_send_stage = TGT_IO_SEND_STAGE_NONE;
- xchg->io_send_result = TGT_IO_SEND_RESULT_INVALID;
- xchg->io_send_abort = false;
- xchg->io_abort_result = false;
- xchg->abts_state = 0;
- xchg->oxid = INVALID_VALUE16;
- xchg->abort_oxid = INVALID_VALUE16;
- xchg->rxid = INVALID_VALUE16;
- xchg->sid = INVALID_VALUE32;
- xchg->did = INVALID_VALUE32;
- xchg->oid = INVALID_VALUE32;
- xchg->disc_portid = INVALID_VALUE32;
- xchg->seq_id = INVALID_VALUE8;
- xchg->cmnd_code = INVALID_VALUE32;
- xchg->cmnd_sn = INVALID_VALUE64;
- xchg->data_len = 0;
- xchg->resid_len = 0;
- xchg->data_direction = DMA_NONE;
- xchg->hotpooltag = INVALID_VALUE16;
- xchg->big_sfs_buf = NULL;
- xchg->may_consume_res_cnt = 0;
- xchg->fast_consume_res_cnt = 0;
- xchg->io_front_jif = INVALID_VALUE64;
- xchg->ob_callback_sts = UNF_IO_SUCCESS;
- xchg->start_jif = 0;
- xchg->rport_bind_jifs = INVALID_VALUE64;
- xchg->scsi_id = INVALID_VALUE32;
- xchg->qos_level = 0;
- xchg->world_id = INVALID_VALUE32;
-
- memset(&xchg->seq, 0, sizeof(struct unf_seq));
- memset(&xchg->fcp_cmnd, 0, sizeof(struct unf_fcp_cmnd));
- memset(&xchg->scsi_cmnd_info, 0, sizeof(struct unf_scsi_cmd_info));
- memset(&xchg->dif_info, 0, sizeof(struct dif_info));
- memset(xchg->private_data, 0, (PKG_MAX_PRIVATE_DATA_SIZE * sizeof(u32)));
- xchg->echo_info.echo_result = UNF_ELS_ECHO_RESULT_OK;
- xchg->echo_info.response_time = 0;
-
- if (xchg->xchg_type == UNF_XCHG_TYPE_SFS) {
- if (xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) {
- memset(xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr, 0,
- sizeof(union unf_sfs_u));
- xchg->fcp_sfs_union.sfs_entry.cur_offset = 0;
- }
- } else if (xchg->xchg_type != UNF_XCHG_TYPE_INI) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "Exchange Type(0x%x) SFS Union uninited.",
- xchg->xchg_type);
- }
- xchg->xchg_type = UNF_XCHG_TYPE_INVALID;
- xchg->xfer_or_rsp_echo = NULL;
- xchg->scsi_or_tgt_cmnd_func = NULL;
- xchg->ob_callback = NULL;
- xchg->callback = NULL;
- xchg->free_xchg = NULL;
-
- atomic_set(&xchg->ref_cnt, 0);
- atomic_set(&xchg->esgl_cnt, 0);
- atomic_set(&xchg->delay_flag, 0);
-
- if (delayed_work_pending(&xchg->timeout_work))
- UNF_DEL_XCHG_TIMER_SAFE(xchg);
-
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-}
-
-bool unf_busy_io_completed(struct unf_lport *lport)
-{
- struct unf_xchg_mgr *xchg_mgr = NULL;
- ulong pool_lock_flags = 0;
- u32 i;
-
- FC_CHECK_RETURN_VALUE(lport, true);
-
- for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
- xchg_mgr = unf_get_xchg_mgr_by_lport(lport, i);
- if (unlikely(!xchg_mgr)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) Exchange Manager is NULL",
- lport->port_id);
- continue;
- }
-
- spin_lock_irqsave(&xchg_mgr->hot_pool->xchg_hotpool_lock,
- pool_lock_flags);
- if (!list_empty(&xchg_mgr->hot_pool->ini_busylist)) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[info]Port(0x%x) ini busylist is not empty.",
- lport->port_id);
- spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock,
- pool_lock_flags);
- return false;
- }
- spin_unlock_irqrestore(&xchg_mgr->hot_pool->xchg_hotpool_lock,
- pool_lock_flags);
- }
- return true;
-}
diff --git a/drivers/scsi/spfc/common/unf_exchg.h b/drivers/scsi/spfc/common/unf_exchg.h
deleted file mode 100644
index 0a48be31b971..000000000000
--- a/drivers/scsi/spfc/common/unf_exchg.h
+++ /dev/null
@@ -1,436 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_EXCHG_H
-#define UNF_EXCHG_H
-
-#include "unf_type.h"
-#include "unf_fcstruct.h"
-#include "unf_lport.h"
-#include "unf_scsi_common.h"
-
-enum unf_ioflow_id {
- XCHG_ALLOC = 0,
- TGT_RECEIVE_ABTS,
- TGT_ABTS_DONE,
- TGT_IO_SRR,
- SFS_RESPONSE,
- SFS_TIMEOUT,
- INI_SEND_CMND,
- INI_RESPONSE_DONE,
- INI_EH_ABORT,
- INI_EH_DEVICE_RESET,
- INI_EH_BLS_DONE,
- INI_IO_TIMEOUT,
- INI_REQ_TIMEOUT,
- XCHG_CANCEL_TIMER,
- XCHG_FREE_XCHG,
- SEND_ELS,
- IO_XCHG_WAIT,
- XCHG_BUTT
-};
-
-enum unf_xchg_type {
- UNF_XCHG_TYPE_INI = 0, /* INI IO */
- UNF_XCHG_TYPE_SFS = 1,
- UNF_XCHG_TYPE_INVALID
-};
-
-enum unf_xchg_mgr_type {
- UNF_XCHG_MGR_TYPE_RANDOM = 0,
- UNF_XCHG_MGR_TYPE_FIXED = 1,
- UNF_XCHG_MGR_TYPE_INVALID
-};
-
-enum tgt_io_send_stage {
- TGT_IO_SEND_STAGE_NONE = 0,
- TGT_IO_SEND_STAGE_DOING = 1, /* xfer/rsp into queue */
- TGT_IO_SEND_STAGE_DONE = 2, /* xfer/rsp into queue complete */
- TGT_IO_SEND_STAGE_ECHO = 3, /* driver handled TSTS */
- TGT_IO_SEND_STAGE_INVALID
-};
-
-enum tgt_io_send_result {
- TGT_IO_SEND_RESULT_OK = 0, /* xfer/rsp enqueue succeed */
- TGT_IO_SEND_RESULT_FAIL = 1, /* xfer/rsp enqueue fail */
- TGT_IO_SEND_RESULT_INVALID
-};
-
-struct unf_io_flow_id {
- char *stage;
-};
-
-#define unf_check_oxid_matched(ox_id, oid, xchg) \
- (((ox_id) == (xchg)->oxid) && ((oid) == (xchg)->oid) && \
- (atomic_read(&(xchg)->ref_cnt) > 0))
-
-#define UNF_CHECK_ALLOCTIME_VALID(lport, xchg_tag, exchg, pkg_alloc_time, \
- xchg_alloc_time) \
- do { \
- if (unlikely(((pkg_alloc_time) != 0) && \
- ((pkg_alloc_time) != (xchg_alloc_time)))) { \
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR, \
- "Lport(0x%x_0x%x_0x%x_0x%p) AllocTime is not " \
- "equal,PKG " \
- "AllocTime:0x%x,Exhg AllocTime:0x%x", \
- (lport)->port_id, (lport)->nport_id, xchg_tag, \
- exchg, pkg_alloc_time, xchg_alloc_time); \
- return UNF_RETURN_ERROR; \
- }; \
- if (unlikely((pkg_alloc_time) == 0)) { \
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR, \
- "Lport(0x%x_0x%x_0x%x_0x%p) pkgtime err,PKG " \
- "AllocTime:0x%x,Exhg AllocTime:0x%x", \
- (lport)->port_id, (lport)->nport_id, xchg_tag, \
- exchg, pkg_alloc_time, xchg_alloc_time); \
- }; \
- } while (0)
-
-#define UNF_SET_SCSI_CMND_RESULT(xchg, cmnd_result) \
- ((xchg)->scsi_cmnd_info.result = (cmnd_result))
-
-#define UNF_GET_GS_SFS_XCHG_TIMER(lport) (3 * (ulong)(lport)->ra_tov)
-
-#define UNF_GET_BLS_SFS_XCHG_TIMER(lport) (2 * (ulong)(lport)->ra_tov)
-
-#define UNF_GET_ELS_SFS_XCHG_TIMER(lport) (2 * (ulong)(lport)->ra_tov)
-
-#define UNF_ELS_ECHO_RESULT_OK 0
-#define UNF_ELS_ECHO_RESULT_FAIL 1
-
-struct unf_xchg;
-/* Xchg hot pool, busy IO lookup Xchg */
-struct unf_xchg_hot_pool {
- /* Xchg sum, in hot pool */
- u16 total_xchges;
- bool wait_state;
-
- /* pool lock */
- spinlock_t xchg_hotpool_lock;
-
- /* Xchg posiontion list */
- struct list_head sfs_busylist;
- struct list_head ini_busylist;
- struct list_head list_destroy_xchg;
-
- /* Next free hot point */
- u16 slab_next_index;
- u16 slab_total_sum;
- u16 base;
-
- struct unf_lport *lport;
-
- struct unf_xchg *xchg_slab[ARRAY_INDEX_0];
-};
-
-/* Xchg's FREE POOL */
-struct unf_xchg_free_pool {
- spinlock_t xchg_freepool_lock;
-
- u32 fcp_xchg_sum;
-
- /* IO used Xchg */
- struct list_head list_free_xchg_list;
- u32 total_fcp_xchg;
-
- /* SFS used Xchg */
- struct list_head list_sfs_xchg_list;
- u32 total_sfs_xchg;
- u32 sfs_xchg_sum;
-
- struct completion *xchg_mgr_completion;
-};
-
-struct unf_big_sfs {
- struct list_head entry_bigsfs;
- void *addr;
- u32 size;
-};
-
-struct unf_big_sfs_pool {
- void *big_sfs_pool;
- u32 free_count;
- struct list_head list_freepool;
- struct list_head list_busypool;
- spinlock_t big_sfs_pool_lock;
-};
-
-/* Xchg Manager for vport Xchg */
-struct unf_xchg_mgr {
- /* MG type */
- u32 mgr_type;
-
- /* MG entry */
- struct list_head xchg_mgr_entry;
-
- /* MG attribution */
- u32 mem_szie;
-
- /* MG alloced resource */
- void *fcp_mm_start;
-
- u32 sfs_mem_size;
- void *sfs_mm_start;
- dma_addr_t sfs_phy_addr;
-
- struct unf_xchg_free_pool free_pool;
- struct unf_xchg_hot_pool *hot_pool;
-
- struct unf_big_sfs_pool big_sfs_pool;
-
- struct buf_describe big_sfs_buf_list;
-};
-
-struct unf_seq {
- /* Seq ID */
- u8 seq_id;
-
- /* Seq Cnt */
- u16 seq_cnt;
-
- /* Seq state and len,maybe used for fcoe */
- u16 seq_stat;
- u32 rec_data_len;
-};
-
-union unf_xchg_fcp_sfs {
- struct unf_sfs_entry sfs_entry;
- struct unf_fcp_rsp_iu_entry fcp_rsp_entry;
-};
-
-#define UNF_IO_STATE_NEW 0
-#define TGT_IO_STATE_SEND_XFERRDY (1 << 2) /* succeed to send XFer rdy */
-#define TGT_IO_STATE_RSP (1 << 5) /* chip send rsp */
-#define TGT_IO_STATE_ABORT (1 << 7)
-
-#define INI_IO_STATE_UPTASK \
- (1 << 15) /* INI Upper-layer Task Management Commands */
-#define INI_IO_STATE_UPABORT \
- (1 << 16) /* INI Upper-layer timeout Abort flag \
- */
-#define INI_IO_STATE_DRABORT (1 << 17) /* INI driver Abort flag */
-#define INI_IO_STATE_DONE (1 << 18) /* INI complete flag */
-#define INI_IO_STATE_WAIT_RRQ (1 << 19) /* INI wait send rrq */
-#define INI_IO_STATE_UPSEND_ERR (1 << 20) /* INI send fail flag */
-/* INI only clear firmware resource flag */
-#define INI_IO_STATE_ABORT_RESOURCE (1 << 21)
-/* ioc abort:INI send ABTS ,5S timeout Semaphore,than set 1 */
-#define INI_IO_STATE_ABORT_TIMEOUT (1 << 22)
-#define INI_IO_STATE_RRQSEND_ERR (1 << 23) /* INI send RRQ fail flag */
-#define INI_IO_STATE_LOGO (1 << 24) /* INI busy IO session logo status */
-#define INI_IO_STATE_TMF_ABORT (1 << 25) /* INI TMF ABORT IO flag */
-#define INI_IO_STATE_REC_TIMEOUT_WAIT (1 << 26) /* INI REC TIMEOUT WAIT */
-#define INI_IO_STATE_REC_TIMEOUT (1 << 27) /* INI REC TIMEOUT */
-
-#define TMF_RESPONSE_RECEIVED (1 << 0)
-#define MARKER_STS_RECEIVED (1 << 1)
-#define ABTS_RESPONSE_RECEIVED (1 << 2)
-
-struct unf_scsi_cmd_info {
- ulong time_out;
- ulong abort_time_out;
- void *scsi_cmnd;
- void (*done)(struct unf_scsi_cmnd *scsi_cmd);
- ini_get_sgl_entry_buf unf_get_sgl_entry_buf;
- struct unf_ini_error_code *err_code_table; /* error code table */
- char *sense_buf;
- u32 err_code_table_cout; /* Size of the error code table */
- u32 buf_len;
- u32 entry_cnt;
- u32 result; /* Stores command execution results */
- u32 port_id;
-/* Re-search for rport based on scsiid during retry. Otherwise,
- *data inconsistency will occur
- */
- u32 scsi_id;
- void *sgl;
- uplevel_cmd_done uplevel_done;
-};
-
-struct unf_req_sgl_info {
- void *sgl;
- void *sgl_start;
- u32 req_index;
- u32 entry_index;
-};
-
-struct unf_els_echo_info {
- u64 response_time;
- struct semaphore echo_sync_sema;
- u32 echo_result;
-};
-
-struct unf_xchg {
- /* Mg resource relative */
- /* list delete from HotPool */
- struct unf_xchg_hot_pool *hot_pool;
-
- /* attach to FreePool */
- struct unf_xchg_free_pool *free_pool;
- struct unf_xchg_mgr *xchg_mgr;
- struct unf_lport *lport; /* Local LPort/VLPort */
- struct unf_rport *rport; /* Rmote Port */
- struct unf_rport *disc_rport; /* Discover Rmote Port */
- struct list_head list_xchg_entry;
- struct list_head list_abort_xchg_entry;
- spinlock_t xchg_state_lock;
-
- /* Xchg reference */
- atomic_t ref_cnt;
- atomic_t esgl_cnt;
- bool debug_hook;
- /* Xchg attribution */
- u16 hotpooltag;
- u16 abort_oxid;
- u32 xchg_type; /* LS,TGT CMND ,REQ,or SCSI Cmnd */
- u16 oxid;
- u16 rxid;
- u32 sid;
- u32 did;
- u32 oid; /* ID of the exchange initiator */
- u32 disc_portid; /* Send GNN_ID/GFF_ID NPortId */
- u8 seq_id;
- u8 byte_orders; /* Byte order */
- struct unf_seq seq;
-
- u32 cmnd_code;
- u32 world_id;
- /* Dif control */
- struct unf_dif_control_info dif_control;
- struct dif_info dif_info;
- /* IO status Abort,timer out */
- u32 io_state; /* TGT_IO_STATE_E */
- u32 tmf_state; /* TMF STATE */
- u32 ucode_abts_state;
- u32 abts_state;
-
- /* IO Enqueuing */
- enum tgt_io_send_stage io_send_stage; /* tgt_io_send_stage */
- /* IO Enqueuing result, success or failure */
- enum tgt_io_send_result io_send_result; /* tgt_io_send_result */
-
- u8 io_send_abort; /* is or not send io abort */
- /*result of io abort cmd(succ:true; fail:false)*/
- u8 io_abort_result;
- /* for INI,Indicates the length of the data transmitted over the PCI
- * link
- */
- u32 data_len;
- /* ResidLen,greater than 0 UnderFlow or Less than Overflow */
- int resid_len;
- /* +++++++++++++++++IO Special++++++++++++++++++++ */
- /* point to tgt cmnd/req/scsi cmnd */
- /* Fcp cmnd */
- struct unf_fcp_cmnd fcp_cmnd;
-
- struct unf_scsi_cmd_info scsi_cmnd_info;
-
- struct unf_req_sgl_info req_sgl_info;
-
- struct unf_req_sgl_info dif_sgl_info;
-
- u64 cmnd_sn;
- void *pinitiator;
-
- /* timestamp */
- u64 start_jif;
- u64 alloc_jif;
-
- u64 io_front_jif;
-
- u32 may_consume_res_cnt;
- u32 fast_consume_res_cnt;
-
- /* scsi req info */
- u32 data_direction;
-
- struct unf_big_sfs *big_sfs_buf;
-
- /* scsi cmnd sense_buffer pointer */
- union unf_xchg_fcp_sfs fcp_sfs_union;
-
- /* One exchange may use several External Sgls */
- struct list_head list_esgls;
- struct unf_els_echo_info echo_info;
- struct semaphore task_sema;
-
- /* for RRQ ,IO Xchg add to SFS Xchg */
- void *io_xchg;
-
- /* Xchg delay work */
- struct delayed_work timeout_work;
-
- void (*xfer_or_rsp_echo)(struct unf_xchg *xchg, u32 status);
-
- /* wait list XCHG send function */
- int (*scsi_or_tgt_cmnd_func)(struct unf_xchg *xchg);
-
- /* send result callback */
- void (*ob_callback)(struct unf_xchg *xchg);
-
- /* Response IO callback */
- void (*callback)(void *lport, void *rport, void *xchg);
-
- /* Xchg release function */
- void (*free_xchg)(struct unf_xchg *xchg);
-
- /* +++++++++++++++++low level Special++++++++++++++++++++ */
- /* private data,provide for low level */
- u32 private_data[PKG_MAX_PRIVATE_DATA_SIZE];
-
- u64 rport_bind_jifs;
-
- /* sfs exchg ob callback status */
- u32 ob_callback_sts;
- u32 scsi_id;
- u32 qos_level;
- void *ls_rsp_addr;
- void *ls_req;
- u32 status;
- atomic_t delay_flag;
- void *upper_ct;
-};
-
-struct unf_esgl_page *
-unf_get_and_add_one_free_esgl_page(struct unf_lport *lport,
- struct unf_xchg *xchg);
-void unf_release_xchg_mgr_temp(struct unf_lport *lport);
-u32 unf_init_xchg_mgr_temp(struct unf_lport *lport);
-u32 unf_alloc_xchg_resource(struct unf_lport *lport);
-void unf_free_all_xchg_mgr(struct unf_lport *lport);
-void unf_xchg_mgr_destroy(struct unf_lport *lport);
-u32 unf_xchg_ref_inc(struct unf_xchg *xchg, enum unf_ioflow_id io_stage);
-void unf_xchg_ref_dec(struct unf_xchg *xchg, enum unf_ioflow_id io_stage);
-struct unf_xchg_mgr *unf_get_xchg_mgr_by_lport(struct unf_lport *lport,
- u32 mgr_idx);
-struct unf_xchg_hot_pool *unf_get_hot_pool_by_lport(struct unf_lport *lport,
- u32 mgr_idx);
-void unf_free_lport_ini_xchg(struct unf_xchg_mgr *xchg_mgr, bool done_ini_flag);
-struct unf_xchg *unf_cm_lookup_xchg_by_cmnd_sn(void *lport, u64 command_sn,
- u32 world_id, void *pinitiator);
-void *unf_cm_lookup_xchg_by_id(void *lport, u16 ox_id, u32 oid);
-void unf_cm_xchg_abort_by_lun(struct unf_lport *lport, struct unf_rport *rport,
- u64 lun_id, void *tm_xchg,
- bool abort_all_lun_flag);
-void unf_cm_xchg_abort_by_session(struct unf_lport *lport,
- struct unf_rport *rport);
-
-void unf_cm_xchg_mgr_abort_io_by_id(struct unf_lport *lport,
- struct unf_rport *rport, u32 sid, u32 did,
- u32 extra_io_stat);
-void unf_cm_xchg_mgr_abort_sfs_by_id(struct unf_lport *lport,
- struct unf_rport *rport, u32 sid, u32 did);
-void unf_cm_free_xchg(void *lport, void *xchg);
-void *unf_cm_get_free_xchg(void *lport, u32 xchg_type);
-void *unf_cm_lookup_xchg_by_tag(void *lport, u16 hot_pool_tag);
-void unf_release_esgls(struct unf_xchg *xchg);
-void unf_show_all_xchg(struct unf_lport *lport, struct unf_xchg_mgr *xchg_mgr);
-void unf_destroy_dirty_xchg(struct unf_lport *lport, bool show_only);
-void unf_wake_up_scsi_task_cmnd(struct unf_lport *lport);
-void unf_set_hot_pool_wait_state(struct unf_lport *lport, bool wait_state);
-void unf_free_lport_all_xchg(struct unf_lport *lport);
-extern u32 unf_get_up_level_cmnd_errcode(struct unf_ini_error_code *err_table,
- u32 err_table_count, u32 drv_err_code);
-bool unf_busy_io_completed(struct unf_lport *lport);
-
-#endif
diff --git a/drivers/scsi/spfc/common/unf_exchg_abort.c b/drivers/scsi/spfc/common/unf_exchg_abort.c
deleted file mode 100644
index 68f751be04aa..000000000000
--- a/drivers/scsi/spfc/common/unf_exchg_abort.c
+++ /dev/null
@@ -1,825 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "unf_exchg_abort.h"
-#include "unf_log.h"
-#include "unf_common.h"
-#include "unf_rport.h"
-#include "unf_service.h"
-#include "unf_ls.h"
-#include "unf_io.h"
-
-void unf_cm_xchg_mgr_abort_io_by_id(struct unf_lport *lport, struct unf_rport *rport, u32 sid,
- u32 did, u32 extra_io_state)
-{
- /*
- * for target session: set ABORT
- * 1. R_Port remove
- * 2. Send PLOGI_ACC callback
- * 3. RCVD PLOGI
- * 4. RCVD LOGO
- */
- FC_CHECK_RETURN_VOID(lport);
-
- if (lport->xchg_mgr_temp.unf_xchg_mgr_io_xchg_abort) {
- /* The SID/DID of the Xchg is in reverse direction in different
- * phases. Therefore, the reverse direction needs to be
- * considered
- */
- lport->xchg_mgr_temp.unf_xchg_mgr_io_xchg_abort(lport, rport, sid, did,
- extra_io_state);
- lport->xchg_mgr_temp.unf_xchg_mgr_io_xchg_abort(lport, rport, did, sid,
- extra_io_state);
- }
-}
-
-void unf_cm_xchg_mgr_abort_sfs_by_id(struct unf_lport *lport,
- struct unf_rport *rport, u32 sid, u32 did)
-{
- FC_CHECK_RETURN_VOID(lport);
-
- if (lport->xchg_mgr_temp.unf_xchg_mgr_sfs_xchg_abort) {
- /* The SID/DID of the Xchg is in reverse direction in different
- * phases, therefore, the reverse direction needs to be
- * considered
- */
- lport->xchg_mgr_temp.unf_xchg_mgr_sfs_xchg_abort(lport, rport, sid, did);
- lport->xchg_mgr_temp.unf_xchg_mgr_sfs_xchg_abort(lport, rport, did, sid);
- }
-}
-
-void unf_cm_xchg_abort_by_lun(struct unf_lport *lport, struct unf_rport *rport,
- u64 lun_id, void *xchg, bool abort_all_lun_flag)
-{
- /*
- * LUN Reset: set UP_ABORT tag, with:
- * INI_Busy_list, IO_Wait_list,
- * IO_Delay_list, IO_Delay_transfer_list
- */
- void (*unf_xchg_abort_by_lun)(void *, void *, u64, void *, bool) = NULL;
-
- FC_CHECK_RETURN_VOID(lport);
-
- unf_xchg_abort_by_lun = lport->xchg_mgr_temp.unf_xchg_abort_by_lun;
- if (unf_xchg_abort_by_lun)
- unf_xchg_abort_by_lun((void *)lport, (void *)rport, lun_id,
- xchg, abort_all_lun_flag);
-}
-
-void unf_cm_xchg_abort_by_session(struct unf_lport *lport, struct unf_rport *rport)
-{
- void (*unf_xchg_abort_by_session)(void *, void *) = NULL;
-
- FC_CHECK_RETURN_VOID(lport);
-
- unf_xchg_abort_by_session = lport->xchg_mgr_temp.unf_xchg_abort_by_session;
- if (unf_xchg_abort_by_session)
- unf_xchg_abort_by_session((void *)lport, (void *)rport);
-}
-
-static void unf_xchg_abort_all_sfs_xchg(struct unf_lport *lport, bool clean)
-{
- struct unf_xchg_hot_pool *hot_pool = NULL;
- struct list_head *xchg_node = NULL;
- struct list_head *next_xchg_node = NULL;
- struct unf_xchg *xchg = NULL;
- ulong pool_lock_falgs = 0;
- ulong xchg_lock_flags = 0;
- u32 i = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
- hot_pool = unf_get_hot_pool_by_lport(lport, i);
- if (unlikely(!hot_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT,
- UNF_MAJOR, "Port(0x%x) Hot Pool is NULL.", lport->port_id);
-
- continue;
- }
-
- if (!clean) {
- spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
-
- /* Clearing the SFS_Busy_list Exchange Resource */
- list_for_each_safe(xchg_node, next_xchg_node, &hot_pool->sfs_busylist) {
- xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
- spin_lock_irqsave(&xchg->xchg_state_lock, xchg_lock_flags);
- if (atomic_read(&xchg->ref_cnt) > 0)
- xchg->io_state |= TGT_IO_STATE_ABORT;
-
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_lock_flags);
- }
-
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
- } else {
- continue;
- }
- }
-}
-
-static void unf_xchg_abort_ini_io_xchg(struct unf_lport *lport, bool clean)
-{
- /* Clean L_Port/V_Port Link Down I/O: Abort */
- struct unf_xchg_hot_pool *hot_pool = NULL;
- struct list_head *xchg_node = NULL;
- struct list_head *next_xchg_node = NULL;
- struct unf_xchg *xchg = NULL;
- ulong pool_lock_falgs = 0;
- ulong xchg_lock_flags = 0;
- u32 io_state = 0;
- u32 i = 0;
-
- FC_CHECK_RETURN_VOID(lport);
-
- for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
- hot_pool = unf_get_hot_pool_by_lport(lport, i);
- if (unlikely(!hot_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) hot pool is NULL",
- lport->port_id);
-
- continue;
- }
-
- if (!clean) {
- spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
-
- /* 1. Abort INI_Busy_List IO */
- list_for_each_safe(xchg_node, next_xchg_node, &hot_pool->ini_busylist) {
- xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
- spin_lock_irqsave(&xchg->xchg_state_lock, xchg_lock_flags);
- if (atomic_read(&xchg->ref_cnt) > 0)
- xchg->io_state |= INI_IO_STATE_DRABORT | io_state;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_lock_flags);
- }
-
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
- } else {
- /* Do nothing, just return */
- continue;
- }
- }
-}
-
-void unf_xchg_abort_all_xchg(void *lport, u32 xchg_type, bool clean)
-{
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VOID(lport);
- unf_lport = (struct unf_lport *)lport;
-
- switch (xchg_type) {
- case UNF_XCHG_TYPE_SFS:
- unf_xchg_abort_all_sfs_xchg(unf_lport, clean);
- break;
- /* Clean L_Port/V_Port Link Down I/O: Abort */
- case UNF_XCHG_TYPE_INI:
- unf_xchg_abort_ini_io_xchg(unf_lport, clean);
- break;
- default:
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) unknown exch type(0x%x)",
- unf_lport->port_id, xchg_type);
- break;
- }
-}
-
-static void unf_xchg_abort_ini_send_tm_cmd(void *lport, void *rport, u64 lun_id)
-{
- /*
- * LUN Reset: set UP_ABORT tag, with:
- * INI_Busy_list, IO_Wait_list,
- * IO_Delay_list, IO_Delay_transfer_list
- */
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- struct unf_xchg_hot_pool *hot_pool = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- struct unf_xchg *xchg = NULL;
- ulong flags = 0;
- ulong xchg_flag = 0;
- u32 i = 0;
- u64 raw_lun_id = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
-
- unf_lport = ((struct unf_lport *)lport)->root_lport;
- FC_CHECK_RETURN_VOID(unf_lport);
- unf_rport = (struct unf_rport *)rport;
-
- for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
- hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
- if (unlikely(!hot_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%x) hot pool is NULL",
- unf_lport->port_id);
- continue;
- }
-
- spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
-
- /* 1. for each exchange from busy list */
- list_for_each_safe(node, next_node, &hot_pool->ini_busylist) {
- xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
-
- raw_lun_id = *(u64 *)(xchg->fcp_cmnd.lun) >> UNF_SHIFT_16 &
- UNF_RAW_LUN_ID_MASK;
- if (lun_id == raw_lun_id && unf_rport == xchg->rport) {
- spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
- xchg->io_state |= INI_IO_STATE_TMF_ABORT;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]Exchange(%p) state(0x%x) S_ID(0x%x) D_ID(0x%x) tag(0x%x) abort by TMF CMD",
- xchg, xchg->io_state,
- ((struct unf_lport *)lport)->nport_id,
- unf_rport->nport_id, xchg->hotpooltag);
- }
- }
-
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
- }
-}
-
-static void unf_xchg_abort_ini_tmf_target_reset(void *lport, void *rport)
-{
- /*
- * LUN Reset: set UP_ABORT tag, with:
- * INI_Busy_list, IO_Wait_list,
- * IO_Delay_list, IO_Delay_transfer_list
- */
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- struct unf_xchg_hot_pool *hot_pool = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- struct unf_xchg *xchg = NULL;
- ulong flags = 0;
- ulong xchg_flag = 0;
- u32 i = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
-
- unf_lport = ((struct unf_lport *)lport)->root_lport;
- FC_CHECK_RETURN_VOID(unf_lport);
- unf_rport = (struct unf_rport *)rport;
-
- for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
- hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
- if (unlikely(!hot_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%x) hot pool is NULL",
- unf_lport->port_id);
- continue;
- }
-
- spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
-
- /* 1. for each exchange from busy_list */
- list_for_each_safe(node, next_node, &hot_pool->ini_busylist) {
- xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
- if (unf_rport == xchg->rport) {
- spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
- xchg->io_state |= INI_IO_STATE_TMF_ABORT;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]Exchange(%p) state(0x%x) S_ID(0x%x) D_ID(0x%x) tag(0x%x) abort by TMF CMD",
- xchg, xchg->io_state, unf_lport->nport_id,
- unf_rport->nport_id, xchg->hotpooltag);
- }
- }
-
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
- }
-}
-
-void unf_xchg_abort_by_lun(void *lport, void *rport, u64 lun_id, void *xchg,
- bool abort_all_lun_flag)
-{
- /* ABORT: set UP_ABORT tag for target LUN I/O */
- struct unf_xchg *tm_xchg = (struct unf_xchg *)xchg;
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[event]Port(0x%x) LUN_ID(0x%llx) TM_EXCH(0x%p) flag(%d)",
- ((struct unf_lport *)lport)->port_id, lun_id, xchg,
- abort_all_lun_flag);
-
- /* for INI Mode */
- if (!tm_xchg) {
- /*
- * LUN Reset: set UP_ABORT tag, with:
- * INI_Busy_list, IO_Wait_list,
- * IO_Delay_list, IO_Delay_transfer_list
- */
- unf_xchg_abort_ini_send_tm_cmd(lport, rport, lun_id);
-
- return;
- }
-}
-
-void unf_xchg_abort_by_session(void *lport, void *rport)
-{
- /*
- * LUN Reset: set UP_ABORT tag, with:
- * INI_Busy_list, IO_Wait_list,
- * IO_Delay_list, IO_Delay_transfer_list
- */
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[event]Port(0x%x) Rport(0x%x) start session reset with TMF",
- ((struct unf_lport *)lport)->port_id, ((struct unf_rport *)rport)->nport_id);
-
- unf_xchg_abort_ini_tmf_target_reset(lport, rport);
-}
-
-void unf_xchg_up_abort_io_by_scsi_id(void *lport, u32 scsi_id)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_xchg_hot_pool *hot_pool = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- struct unf_xchg *xchg = NULL;
- ulong flags = 0;
- ulong xchg_flag = 0;
- u32 i;
- u32 io_abort_flag = INI_IO_STATE_UPABORT | INI_IO_STATE_UPSEND_ERR |
- INI_IO_STATE_TMF_ABORT;
-
- FC_CHECK_RETURN_VOID(lport);
-
- unf_lport = ((struct unf_lport *)lport)->root_lport;
- FC_CHECK_RETURN_VOID(unf_lport);
-
- for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
- hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
- if (unlikely(!hot_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%x) hot pool is NULL",
- unf_lport->port_id);
- continue;
- }
-
- spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
-
- /* 1. for each exchange from busy_list */
- list_for_each_safe(node, next_node, &hot_pool->ini_busylist) {
- xchg = list_entry(node, struct unf_xchg, list_xchg_entry);
- spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
- if (lport == xchg->lport && scsi_id == xchg->scsi_id &&
- !(xchg->io_state & io_abort_flag)) {
- xchg->io_state |= INI_IO_STATE_UPABORT;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]Exchange(%p) scsi_cmd(0x%p) state(0x%x) scsi_id(0x%x) tag(0x%x) upabort by scsi id",
- xchg, xchg->scsi_cmnd_info.scsi_cmnd,
- xchg->io_state, scsi_id, xchg->hotpooltag);
- } else {
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
- }
- }
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
- }
-}
-
-static void unf_ini_busy_io_xchg_abort(void *xchg_hot_pool, void *rport,
- u32 sid, u32 did, u32 extra_io_state)
-{
- /*
- * for target session: Set (DRV) ABORT
- * 1. R_Port remove
- * 2. Send PLOGI_ACC callback
- * 3. RCVD PLOGI
- * 4. RCVD LOGO
- */
- struct unf_xchg_hot_pool *hot_pool = NULL;
- struct unf_xchg *xchg = NULL;
- struct list_head *xchg_node = NULL;
- struct list_head *next_xchg_node = NULL;
- struct unf_rport *unf_rport = NULL;
- ulong xchg_lock_flags = 0;
-
- unf_rport = (struct unf_rport *)rport;
- hot_pool = (struct unf_xchg_hot_pool *)xchg_hot_pool;
-
- /* ABORT INI IO: INI_BUSY_LIST */
- list_for_each_safe(xchg_node, next_xchg_node, &hot_pool->ini_busylist) {
- xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, xchg_lock_flags);
- if (did == xchg->did && sid == xchg->sid &&
- unf_rport == xchg->rport &&
- (atomic_read(&xchg->ref_cnt) > 0)) {
- xchg->scsi_cmnd_info.result = UNF_SCSI_HOST(DID_IMM_RETRY);
- xchg->io_state |= INI_IO_STATE_DRABORT;
- xchg->io_state |= extra_io_state;
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]Abort INI:0x%p---0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----%llu.",
- xchg, (u32)xchg->hotpooltag, (u32)xchg->xchg_type,
- (u32)xchg->oxid, (u32)xchg->rxid,
- (u32)xchg->sid, (u32)xchg->did, (u32)xchg->io_state,
- atomic_read(&xchg->ref_cnt), xchg->alloc_jif);
- }
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_lock_flags);
- }
-}
-
-void unf_xchg_mgr_io_xchg_abort(void *lport, void *rport, u32 sid, u32 did, u32 extra_io_state)
-{
- /*
- * for target session: set ABORT
- * 1. R_Port remove
- * 2. Send PLOGI_ACC callback
- * 3. RCVD PLOGI
- * 4. RCVD LOGO
- */
- struct unf_xchg_hot_pool *hot_pool = NULL;
- struct unf_lport *unf_lport = NULL;
- ulong pool_lock_falgs = 0;
- u32 i = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- unf_lport = ((struct unf_lport *)lport)->root_lport;
- FC_CHECK_RETURN_VOID(unf_lport);
-
- for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
- hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
- if (unlikely(!hot_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) hot pool is NULL",
- unf_lport->port_id);
-
- continue;
- }
-
- spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
-
- /* 1. Clear INI (session) IO: INI Mode */
- unf_ini_busy_io_xchg_abort(hot_pool, rport, sid, did, extra_io_state);
-
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
- }
-}
-
-void unf_xchg_mgr_sfs_xchg_abort(void *lport, void *rport, u32 sid, u32 did)
-{
- struct unf_xchg_hot_pool *hot_pool = NULL;
- struct list_head *xchg_node = NULL;
- struct list_head *next_xchg_node = NULL;
- struct unf_xchg *xchg = NULL;
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- ulong pool_lock_falgs = 0;
- ulong xchg_lock_flags = 0;
- u32 i = 0;
-
- FC_CHECK_RETURN_VOID(lport);
-
- unf_lport = ((struct unf_lport *)lport)->root_lport;
- FC_CHECK_RETURN_VOID(unf_lport);
-
- for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
- hot_pool = unf_get_hot_pool_by_lport(unf_lport, i);
- if (!hot_pool) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT,
- UNF_MAJOR, "Port(0x%x) Hot Pool is NULL.",
- unf_lport->port_id);
-
- continue;
- }
-
- unf_rport = (struct unf_rport *)rport;
-
- spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
-
- /* Clear the SFS exchange of the corresponding connection */
- list_for_each_safe(xchg_node, next_xchg_node, &hot_pool->sfs_busylist) {
- xchg = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, xchg_lock_flags);
- if (did == xchg->did && sid == xchg->sid &&
- unf_rport == xchg->rport && (atomic_read(&xchg->ref_cnt) > 0)) {
- xchg->io_state |= TGT_IO_STATE_ABORT;
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "Abort SFS:0x%p---0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----0x%x----%llu.",
- xchg, (u32)xchg->hotpooltag, (u32)xchg->xchg_type,
- (u32)xchg->oxid, (u32)xchg->rxid, (u32)xchg->sid,
- (u32)xchg->did, (u32)xchg->io_state,
- atomic_read(&xchg->ref_cnt), xchg->alloc_jif);
- }
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_lock_flags);
- }
-
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_falgs);
- }
-}
-
-static void unf_fc_wait_abts_complete(struct unf_lport *lport, struct unf_xchg *xchg)
-{
- struct unf_lport *unf_lport = lport;
- struct unf_scsi_cmnd scsi_cmnd = {0};
- ulong flag = 0;
- u32 time_out_value = 2000;
- struct unf_rport_scsi_id_image *scsi_image_table = NULL;
- u32 io_result;
-
- scsi_cmnd.scsi_id = xchg->scsi_cmnd_info.scsi_id;
- scsi_cmnd.upper_cmnd = xchg->scsi_cmnd_info.scsi_cmnd;
- scsi_cmnd.done = xchg->scsi_cmnd_info.done;
- scsi_image_table = &unf_lport->rport_scsi_table;
-
- if (down_timeout(&xchg->task_sema, (s64)msecs_to_jiffies(time_out_value))) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) recv abts marker timeout,Exch(0x%p) OX_ID(0x%x) RX_ID(0x%x)",
- unf_lport->port_id, xchg, xchg->oxid, xchg->rxid);
- goto ABTS_FIAILED;
- }
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flag);
- if (xchg->ucode_abts_state == UNF_IO_SUCCESS ||
- xchg->scsi_cmnd_info.result == UNF_IO_ABORT_PORT_REMOVING) {
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]Port(0x%x) Send ABTS succeed and recv marker Exch(0x%p) OX_ID(0x%x) RX_ID(0x%x) marker status(0x%x)",
- unf_lport->port_id, xchg, xchg->oxid, xchg->rxid,
- xchg->ucode_abts_state);
- io_result = DID_BUS_BUSY;
- UNF_IO_RESULT_CNT(scsi_image_table, scsi_cmnd.scsi_id, io_result);
- unf_complete_cmnd(&scsi_cmnd, io_result << UNF_SHIFT_16);
- return;
- }
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Port(0x%x) send ABTS failed. Exch(0x%p) hot_tag(0x%x) ret(0x%x) xchg->io_state (0x%x)",
- unf_lport->port_id, xchg, xchg->hotpooltag,
- xchg->scsi_cmnd_info.result, xchg->io_state);
- goto ABTS_FIAILED;
-
-ABTS_FIAILED:
- unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
- spin_lock_irqsave(&xchg->xchg_state_lock, flag);
- xchg->io_state &= ~INI_IO_STATE_UPABORT;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
-}
-
-void unf_fc_abort_time_out_cmnd(struct unf_lport *lport, struct unf_xchg *xchg)
-{
- struct unf_lport *unf_lport = lport;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(xchg);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flag);
- if (xchg->io_state & INI_IO_STATE_UPABORT) {
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "LPort(0x%x) xchange(0x%p) OX_ID(0x%x), RX_ID(0x%x) Cmdsn(0x%lx) has been aborted.",
- unf_lport->port_id, xchg, xchg->oxid,
- xchg->rxid, (ulong)xchg->cmnd_sn);
- return;
- }
- xchg->io_state |= INI_IO_STATE_UPABORT;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_KEVENT,
- "LPort(0x%x) exchg(0x%p) OX_ID(0x%x) RX_ID(0x%x) Cmdsn(0x%lx) timeout abort it",
- unf_lport->port_id, xchg, xchg->oxid, xchg->rxid, (ulong)xchg->cmnd_sn);
-
- unf_lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg,
- (ulong)UNF_WAIT_ABTS_RSP_TIMEOUT, UNF_TIMER_TYPE_INI_ABTS);
-
- sema_init(&xchg->task_sema, 0);
-
- if (unf_send_abts(unf_lport, xchg) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "LPort(0x%x) send ABTS, Send ABTS unsuccessful. Exchange OX_ID(0x%x), RX_ID(0x%x).",
- unf_lport->port_id, xchg->oxid, xchg->rxid);
- unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
- spin_lock_irqsave(&xchg->xchg_state_lock, flag);
- xchg->io_state &= ~INI_IO_STATE_UPABORT;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
- return;
- }
- unf_fc_wait_abts_complete(unf_lport, xchg);
-}
-
-static void unf_fc_ini_io_rec_wait_time_out(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *xchg)
-{
- ulong time_out = 0;
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) RPort(0x%x) Exch(0x%p) Rec timeout exchange OX_ID(0x%x) RX_ID(0x%x) state(0x%x)",
- lport->port_id, rport->nport_id, xchg, xchg->oxid,
- xchg->rxid, xchg->io_state);
-
- if (xchg->rport_bind_jifs == rport->rport_alloc_jifs) {
- unf_send_rec(lport, rport, xchg);
-
- if (xchg->scsi_cmnd_info.abort_time_out > 0) {
- time_out = (xchg->scsi_cmnd_info.abort_time_out > UNF_REC_TOV) ?
- (xchg->scsi_cmnd_info.abort_time_out - UNF_REC_TOV) : 0;
- if (time_out > 0) {
- lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg, time_out,
- UNF_TIMER_TYPE_REQ_IO);
- } else {
- unf_fc_abort_time_out_cmnd(lport, xchg);
- }
- }
- }
-}
-
-static void unf_fc_ini_send_abts_time_out(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *xchg)
-{
- if (xchg->rport_bind_jifs == rport->rport_alloc_jifs &&
- xchg->rport_bind_jifs != INVALID_VALUE64) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) RPort(0x%x) Exch(0x%p) first time to send abts timeout, retry again OX_ID(0x%x) RX_ID(0x%x) HotTag(0x%x) state(0x%x)",
- lport->port_id, rport->nport_id, xchg, xchg->oxid,
- xchg->rxid, xchg->hotpooltag, xchg->io_state);
-
- lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg,
- (ulong)UNF_WAIT_ABTS_RSP_TIMEOUT, UNF_TIMER_TYPE_INI_ABTS);
-
- if (unf_send_abts(lport, xchg) != RETURN_OK) {
- lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
-
- unf_abts_timeout_recovery_default(rport, xchg);
-
- unf_cm_free_xchg(lport, xchg);
- }
- } else {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) RPort(0x%x) Exch(0x%p) rport is invalid, exchg rport jiff(0x%llx 0x%llx), free exchange OX_ID(0x%x) RX_ID(0x%x) state(0x%x)",
- lport->port_id, rport->nport_id, xchg,
- xchg->rport_bind_jifs, rport->rport_alloc_jifs,
- xchg->oxid, xchg->rxid, xchg->io_state);
-
- unf_cm_free_xchg(lport, xchg);
- }
-}
-
-void unf_fc_ini_io_xchg_time_out(struct work_struct *work)
-{
- struct unf_xchg *xchg = NULL;
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- ulong flags = 0;
- u32 ret = UNF_RETURN_ERROR;
- u32 port_valid_flag = 0;
-
- xchg = container_of(work, struct unf_xchg, timeout_work.work);
- FC_CHECK_RETURN_VOID(xchg);
-
- ret = unf_xchg_ref_inc(xchg, INI_IO_TIMEOUT);
- FC_CHECK_RETURN_VOID(ret == RETURN_OK);
-
- unf_lport = xchg->lport;
- unf_rport = xchg->rport;
-
- port_valid_flag = (!unf_lport) || (!unf_rport);
- if (port_valid_flag) {
- unf_xchg_ref_dec(xchg, INI_IO_TIMEOUT);
- unf_xchg_ref_dec(xchg, INI_IO_TIMEOUT);
- return;
- }
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- /* 1. for Send RRQ failed Timer timeout */
- if (INI_IO_STATE_RRQSEND_ERR & xchg->io_state) {
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[info]LPort(0x%x) RPort(0x%x) Exch(0x%p) had wait enough time for RRQ send failed OX_ID(0x%x) RX_ID(0x%x) state(0x%x)",
- unf_lport->port_id, unf_rport->nport_id, xchg,
- xchg->oxid, xchg->rxid, xchg->io_state);
- unf_notify_chip_free_xid(xchg);
- unf_cm_free_xchg(unf_lport, xchg);
- }
- /* Second ABTS timeout and enter LOGO process */
- else if ((INI_IO_STATE_ABORT_TIMEOUT & xchg->io_state) &&
- (!(ABTS_RESPONSE_RECEIVED & xchg->abts_state))) {
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) RPort(0x%x) Exch(0x%p) had wait enough time for second abts send OX_ID(0x%x) RX_ID(0x%x) state(0x%x)",
- unf_lport->port_id, unf_rport->nport_id, xchg,
- xchg->oxid, xchg->rxid, xchg->io_state);
- unf_abts_timeout_recovery_default(unf_rport, xchg);
- unf_cm_free_xchg(unf_lport, xchg);
- }
- /* First time to send ABTS, timeout and retry to send ABTS again */
- else if ((INI_IO_STATE_UPABORT & xchg->io_state) &&
- (!(ABTS_RESPONSE_RECEIVED & xchg->abts_state))) {
- xchg->io_state |= INI_IO_STATE_ABORT_TIMEOUT;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- unf_fc_ini_send_abts_time_out(unf_lport, unf_rport, xchg);
- }
- /* 3. IO_DONE */
- else if ((INI_IO_STATE_DONE & xchg->io_state) &&
- (ABTS_RESPONSE_RECEIVED & xchg->abts_state)) {
- /*
- * for IO_DONE:
- * 1. INI ABTS first timer time out
- * 2. INI RCVD ABTS Response
- * 3. Normal case for I/O Done
- */
- /* Send ABTS & RCVD RSP & no timeout */
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- if (unf_send_rrq(unf_lport, unf_rport, xchg) == RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]LPort(0x%x) send RRQ succeed to RPort(0x%x) Exch(0x%p) OX_ID(0x%x) RX_ID(0x%x) state(0x%x)",
- unf_lport->port_id, unf_rport->nport_id, xchg,
- xchg->oxid, xchg->rxid, xchg->io_state);
- } else {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]LPort(0x%x) can't send RRQ to RPort(0x%x) Exch(0x%p) OX_ID(0x%x) RX_ID(0x%x) state(0x%x)",
- unf_lport->port_id, unf_rport->nport_id, xchg,
- xchg->oxid, xchg->rxid, xchg->io_state);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- xchg->io_state |= INI_IO_STATE_RRQSEND_ERR;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- unf_lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg,
- (ulong)UNF_WRITE_RRQ_SENDERR_INTERVAL, UNF_TIMER_TYPE_INI_IO);
- }
- } else if (INI_IO_STATE_REC_TIMEOUT_WAIT & xchg->io_state) {
- xchg->io_state &= ~INI_IO_STATE_REC_TIMEOUT_WAIT;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- unf_fc_ini_io_rec_wait_time_out(unf_lport, unf_rport, xchg);
- } else {
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- unf_fc_abort_time_out_cmnd(unf_lport, xchg);
- }
-
- unf_xchg_ref_dec(xchg, INI_IO_TIMEOUT);
- unf_xchg_ref_dec(xchg, INI_IO_TIMEOUT);
-}
-
-void unf_sfs_xchg_time_out(struct work_struct *work)
-{
- struct unf_xchg *xchg = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- ulong flags = 0;
-
- FC_CHECK_RETURN_VOID(work);
- xchg = container_of(work, struct unf_xchg, timeout_work.work);
- FC_CHECK_RETURN_VOID(xchg);
-
- ret = unf_xchg_ref_inc(xchg, SFS_TIMEOUT);
- FC_CHECK_RETURN_VOID(ret == RETURN_OK);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- unf_lport = xchg->lport;
- unf_rport = xchg->rport;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- unf_xchg_ref_dec(xchg, SFS_TIMEOUT);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]SFS Exch(%p) Cmnd(0x%x) IO Exch(0x%p) Sid_Did(0x%x:0x%x) HotTag(0x%x) State(0x%x) Timeout.",
- xchg, xchg->cmnd_code, xchg->io_xchg, xchg->sid, xchg->did,
- xchg->hotpooltag, xchg->io_state);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- if ((xchg->io_state & TGT_IO_STATE_ABORT) &&
- xchg->cmnd_code != ELS_RRQ && xchg->cmnd_code != ELS_LOGO) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "SFS Exch(0x%p) Cmnd(0x%x) Hot Pool Tag(0x%x) timeout, but aborted, no need to handle.",
- xchg, xchg->cmnd_code, xchg->hotpooltag);
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- unf_xchg_ref_dec(xchg, SFS_TIMEOUT);
- unf_xchg_ref_dec(xchg, SFS_TIMEOUT);
-
- return;
- }
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- /* The sfs times out. If the sfs is ELS reply,
- * go to UNF_RPortErrorRecovery/unf_lport_error_recovery.
- * Otherwise, go to the corresponding obCallback.
- */
- if (UNF_XCHG_IS_ELS_REPLY(xchg) && unf_rport) {
- if (unf_rport->nport_id >= UNF_FC_FID_DOM_MGR)
- unf_lport_error_recovery(unf_lport);
- else
- unf_rport_error_recovery(unf_rport);
-
- } else if (xchg->ob_callback) {
- xchg->ob_callback(xchg);
- } else {
- /* Do nothing */
- }
- unf_notify_chip_free_xid(xchg);
- unf_xchg_ref_dec(xchg, SFS_TIMEOUT);
- unf_xchg_ref_dec(xchg, SFS_TIMEOUT);
-}
diff --git a/drivers/scsi/spfc/common/unf_exchg_abort.h b/drivers/scsi/spfc/common/unf_exchg_abort.h
deleted file mode 100644
index 75b5a1bab733..000000000000
--- a/drivers/scsi/spfc/common/unf_exchg_abort.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_EXCHG_ABORT_H
-#define UNF_EXCHG_ABORT_H
-
-#include "unf_type.h"
-#include "unf_exchg.h"
-
-#define UNF_RAW_LUN_ID_MASK 0x000000000000ffff
-
-void unf_xchg_abort_by_lun(void *lport, void *rport, u64 lun_id, void *tm_xchg,
- bool abort_all_lun_flag);
-void unf_xchg_abort_by_session(void *lport, void *rport);
-void unf_xchg_mgr_io_xchg_abort(void *lport, void *rport, u32 sid, u32 did,
- u32 extra_io_state);
-void unf_xchg_mgr_sfs_xchg_abort(void *lport, void *rport, u32 sid, u32 did);
-void unf_xchg_abort_all_xchg(void *lport, u32 xchg_type, bool clean);
-void unf_fc_abort_time_out_cmnd(struct unf_lport *lport, struct unf_xchg *xchg);
-void unf_fc_ini_io_xchg_time_out(struct work_struct *work);
-void unf_sfs_xchg_time_out(struct work_struct *work);
-void unf_xchg_up_abort_io_by_scsi_id(void *lport, u32 scsi_id);
-#endif
diff --git a/drivers/scsi/spfc/common/unf_fcstruct.h b/drivers/scsi/spfc/common/unf_fcstruct.h
deleted file mode 100644
index d6eb8592994b..000000000000
--- a/drivers/scsi/spfc/common/unf_fcstruct.h
+++ /dev/null
@@ -1,459 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_FCSTRUCT_H
-#define UNF_FCSTRUCT_H
-
-#include "unf_type.h"
-#include "unf_scsi_common.h"
-
-#define FC_RCTL_BLS 0x80000000
-
-/*
- * * R_CTL Basic Link Data defines
- */
-
-#define FC_RCTL_BLS_ACC (FC_RCTL_BLS | 0x04000000)
-#define FC_RCTL_BLS_RJT (FC_RCTL_BLS | 0x05000000)
-
-/*
- * * BA_RJT reason code defines
- */
-#define FCXLS_BA_RJT_LOGICAL_ERROR 0x00030000
-
-/*
- * * BA_RJT code explanation
- */
-
-#define FCXLS_LS_RJT_INVALID_OXID_RXID 0x00001700
-
-/*
- * * ELS ACC
- */
-struct unf_els_acc {
- struct unf_fc_head frame_hdr;
- u32 cmnd;
-};
-
-/*
- * * ELS RJT
- */
-struct unf_els_rjt {
- struct unf_fc_head frame_hdr;
- u32 cmnd;
- u32 reason_code;
-};
-
-/*
- * * FLOGI payload,
- * * FC-LS-2 FLOGI, PLOGI, FDISC or LS_ACC Payload
- */
-struct unf_flogi_fdisc_payload {
- u32 cmnd;
- struct unf_fabric_parm fabric_parms;
-};
-
-/*
- * * Flogi and Flogi accept frames. They are the same structure
- */
-struct unf_flogi_fdisc_acc {
- struct unf_fc_head frame_hdr;
- struct unf_flogi_fdisc_payload flogi_payload;
-};
-
-/*
- * * Fdisc and Fdisc accept frames. They are the same structure
- */
-
-struct unf_fdisc_acc {
- struct unf_fc_head frame_hdr;
- struct unf_flogi_fdisc_payload fdisc_payload;
-};
-
-/*
- * * PLOGI payload
- */
-struct unf_plogi_payload {
- u32 cmnd;
- struct unf_lgn_parm stparms;
-};
-
-/*
- *Plogi, Plogi accept, Pdisc and Pdisc accept frames. They are all the same
- *structure.
- */
-struct unf_plogi_pdisc {
- struct unf_fc_head frame_hdr;
- struct unf_plogi_payload payload;
-};
-
-/*
- * * LOGO logout link service requests invalidation of service parameters and
- * * port name.
- * * see FC-PH 4.3 Section 21.4.8
- */
-struct unf_logo_payload {
- u32 cmnd;
- u32 nport_id;
- u32 high_port_name;
- u32 low_port_name;
-};
-
-/*
- * * payload to hold LOGO command
- */
-struct unf_logo {
- struct unf_fc_head frame_hdr;
- struct unf_logo_payload payload;
-};
-
-/*
- * * payload for ECHO command, refer to FC-LS-2 4.2.4
- */
-struct unf_echo_payload {
- u32 cmnd;
-#define UNF_FC_ECHO_PAYLOAD_LENGTH 255 /* Length in words */
- u32 data[UNF_FC_ECHO_PAYLOAD_LENGTH];
-};
-
-struct unf_echo {
- struct unf_fc_head frame_hdr;
- struct unf_echo_payload *echo_pld;
- dma_addr_t phy_echo_addr;
-};
-
-#define UNF_PRLI_SIRT_EXTRA_SIZE 12
-
-/*
- * * payload for PRLI and PRLO
- */
-struct unf_prli_payload {
- u32 cmnd;
-#define UNF_FC_PRLI_PAYLOAD_LENGTH 7 /* Length in words */
- u32 parms[UNF_FC_PRLI_PAYLOAD_LENGTH];
-};
-
-/*
- * * FCHS structure with payload
- */
-struct unf_prli_prlo {
- struct unf_fc_head frame_hdr;
- struct unf_prli_payload payload;
-};
-
-struct unf_adisc_payload {
- u32 cmnd;
- u32 hard_address;
- u32 high_port_name;
- u32 low_port_name;
- u32 high_node_name;
- u32 low_node_name;
- u32 nport_id;
-};
-
-/*
- * * FCHS structure with payload
- */
-struct unf_adisc {
- struct unf_fc_head frame_hdr; /* FCHS structure */
- struct unf_adisc_payload
- adisc_payl; /* Payload data containing ADISC info
- */
-};
-
-/*
- * * RLS payload
- */
-struct unf_rls_payload {
- u32 cmnd;
- u32 nport_id; /* in litle endian format */
-};
-
-/*
- * * RLS
- */
-struct unf_rls {
- struct unf_fc_head frame_hdr; /* FCHS structure */
- struct unf_rls_payload rls; /* payload data containing the RLS info */
-};
-
-/*
- * * RLS accept payload
- */
-struct unf_rls_acc_payload {
- u32 cmnd;
- u32 link_failure_count;
- u32 loss_of_sync_count;
- u32 loss_of_signal_count;
- u32 primitive_seq_count;
- u32 invalid_trans_word_count;
- u32 invalid_crc_count;
-};
-
-/*
- * * RLS accept
- */
-struct unf_rls_acc {
- struct unf_fc_head frame_hdr; /* FCHS structure */
- struct unf_rls_acc_payload
- rls; /* payload data containing the RLS ACC info
- */
-};
-
-/*
- * * FCHS structure with payload
- */
-struct unf_rrq {
- struct unf_fc_head frame_hdr;
- u32 cmnd;
- u32 sid;
- u32 oxid_rxid;
-};
-
-#define UNF_SCR_PAYLOAD_CNT 2
-struct unf_scr {
- struct unf_fc_head frame_hdr;
- u32 payload[UNF_SCR_PAYLOAD_CNT];
-};
-
-struct unf_ctiu_prem {
- u32 rev_inid;
- u32 gstype_gssub_options;
- u32 cmnd_rsp_size;
- u32 frag_reason_exp_vend;
-};
-
-#define UNF_FC4TYPE_CNT 8
-struct unf_rftid {
- struct unf_fc_head frame_hdr;
- struct unf_ctiu_prem ctiu_pream;
- u32 nport_id;
- u32 fc4_types[UNF_FC4TYPE_CNT];
-};
-
-struct unf_rffid {
- struct unf_fc_head frame_hdr;
- struct unf_ctiu_prem ctiu_pream;
- u32 nport_id;
- u32 fc4_feature;
-};
-
-struct unf_rffid_rsp {
- struct unf_fc_head frame_hdr;
- struct unf_ctiu_prem ctiu_pream;
-};
-
-struct unf_gffid {
- struct unf_fc_head frame_hdr;
- struct unf_ctiu_prem ctiu_pream;
- u32 nport_id;
-};
-
-struct unf_gffid_rsp {
- struct unf_fc_head frame_hdr;
- struct unf_ctiu_prem ctiu_pream;
- u32 fc4_feature[32];
-};
-
-struct unf_gnnid {
- struct unf_fc_head frame_hdr;
- struct unf_ctiu_prem ctiu_pream;
- u32 nport_id;
-};
-
-struct unf_gnnid_rsp {
- struct unf_fc_head frame_hdr;
- struct unf_ctiu_prem ctiu_pream;
- u32 node_name[2];
-};
-
-struct unf_gpnid {
- struct unf_fc_head frame_hdr;
- struct unf_ctiu_prem ctiu_pream;
- u32 nport_id;
-};
-
-struct unf_gpnid_rsp {
- struct unf_fc_head frame_hdr;
- struct unf_ctiu_prem ctiu_pream;
- u32 port_name[2];
-};
-
-struct unf_rft_rsp {
- struct unf_fc_head frame_hdr;
- struct unf_ctiu_prem ctiu_pream;
-};
-
-struct unf_ls_rjt_pld {
- u32 srr_op; /* 01000000h */
- u8 vandor;
- u8 reason_exp;
- u8 reason;
- u8 reserved;
-};
-
-struct unf_ls_rjt {
- struct unf_fc_head frame_hdr;
- struct unf_ls_rjt_pld pld;
-};
-
-struct unf_rec_pld {
- u32 rec_cmnd;
- u32 xchg_org_sid; /* bit0-bit23 */
- u16 rx_id;
- u16 ox_id;
-};
-
-struct unf_rec {
- struct unf_fc_head frame_hdr;
- struct unf_rec_pld rec_pld;
-};
-
-struct unf_rec_acc_pld {
- u32 cmnd;
- u16 rx_id;
- u16 ox_id;
- u32 org_addr_id; /* bit0-bit23 */
- u32 rsp_addr_id; /* bit0-bit23 */
-};
-
-struct unf_rec_acc {
- struct unf_fc_head frame_hdr;
- struct unf_rec_acc_pld payload;
-};
-
-struct unf_gid {
- struct unf_ctiu_prem ctiu_pream;
- u32 scope_type;
-};
-
-struct unf_gid_acc {
- struct unf_fc_head frame_hdr;
- struct unf_ctiu_prem ctiu_pream;
-};
-
-#define UNF_LOOPMAP_COUNT 128
-struct unf_loop_init {
- struct unf_fc_head frame_hdr;
- u32 cmnd;
-#define UNF_FC_ALPA_BIT_MAP_SIZE 4
- u32 alpha_bit_map[UNF_FC_ALPA_BIT_MAP_SIZE];
-};
-
-struct unf_loop_map {
- struct unf_fc_head frame_hdr;
- u32 cmnd;
- u32 loop_map[32];
-};
-
-struct unf_ctiu_rjt {
- struct unf_fc_head frame_hdr;
- struct unf_ctiu_prem ctiu_pream;
-};
-
-struct unf_gid_acc_pld {
- struct unf_ctiu_prem ctiu_pream;
-
- u32 gid_port_id[UNF_GID_PORT_CNT];
-};
-
-struct unf_gid_rsp {
- struct unf_gid_acc_pld *gid_acc_pld;
-};
-
-struct unf_gid_req_rsp {
- struct unf_fc_head frame_hdr;
- struct unf_gid gid_req;
- struct unf_gid_rsp gid_rsp;
-};
-
-/* FC-LS-2 Table 31 RSCN Payload */
-struct unf_rscn_port_id_page {
- u8 port_id_port;
- u8 port_id_area;
- u8 port_id_domain;
-
- u8 addr_format : 2;
- u8 event_qualifier : 4;
- u8 reserved : 2;
-};
-
-struct unf_rscn_pld {
- u32 cmnd;
- struct unf_rscn_port_id_page port_id_page[UNF_RSCN_PAGE_SUM];
-};
-
-struct unf_rscn {
- struct unf_fc_head frame_hdr;
- struct unf_rscn_pld *rscn_pld;
-};
-
-union unf_sfs_u {
- struct {
- struct unf_fc_head frame_head;
- u8 data[0];
- } sfs_common;
- struct unf_els_acc els_acc;
- struct unf_els_rjt els_rjt;
- struct unf_plogi_pdisc plogi;
- struct unf_logo logo;
- struct unf_echo echo;
- struct unf_echo echo_acc;
- struct unf_prli_prlo prli;
- struct unf_prli_prlo prlo;
- struct unf_rls rls;
- struct unf_rls_acc rls_acc;
- struct unf_plogi_pdisc pdisc;
- struct unf_adisc adisc;
- struct unf_rrq rrq;
- struct unf_flogi_fdisc_acc flogi;
- struct unf_fdisc_acc fdisc;
- struct unf_scr scr;
- struct unf_rec rec;
- struct unf_rec_acc rec_acc;
- struct unf_ls_rjt ls_rjt;
- struct unf_rscn rscn;
- struct unf_gid_req_rsp get_id;
- struct unf_rftid rft_id;
- struct unf_rft_rsp rft_id_rsp;
- struct unf_rffid rff_id;
- struct unf_rffid_rsp rff_id_rsp;
- struct unf_gffid gff_id;
- struct unf_gffid_rsp gff_id_rsp;
- struct unf_gnnid gnn_id;
- struct unf_gnnid_rsp gnn_id_rsp;
- struct unf_gpnid gpn_id;
- struct unf_gpnid_rsp gpn_id_rsp;
- struct unf_plogi_pdisc plogi_acc;
- struct unf_plogi_pdisc pdisc_acc;
- struct unf_adisc adisc_acc;
- struct unf_prli_prlo prli_acc;
- struct unf_prli_prlo prlo_acc;
- struct unf_flogi_fdisc_acc flogi_acc;
- struct unf_fdisc_acc fdisc_acc;
- struct unf_loop_init lpi;
- struct unf_loop_map loop_map;
- struct unf_ctiu_rjt ctiu_rjt;
-};
-
-struct unf_sfs_entry {
- union unf_sfs_u *fc_sfs_entry_ptr; /* Virtual addr of SFS buffer */
- u64 sfs_buff_phy_addr; /* Physical addr of SFS buffer */
- u32 sfs_buff_len; /* Length of bytes in SFS buffer */
- u32 cur_offset;
-};
-
-struct unf_fcp_rsp_iu_entry {
- u8 *fcp_rsp_iu;
- u32 fcp_sense_len;
-};
-
-struct unf_rjt_info {
- u32 els_cmnd_code;
- u32 reason_code;
- u32 reason_explanation;
- u8 class_mode;
- u8 ucrsvd[3];
-};
-
-#endif
diff --git a/drivers/scsi/spfc/common/unf_gs.c b/drivers/scsi/spfc/common/unf_gs.c
deleted file mode 100644
index cb5fc1a5d246..000000000000
--- a/drivers/scsi/spfc/common/unf_gs.c
+++ /dev/null
@@ -1,2521 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "unf_gs.h"
-#include "unf_log.h"
-#include "unf_exchg.h"
-#include "unf_rport.h"
-#include "unf_service.h"
-#include "unf_portman.h"
-#include "unf_ls.h"
-
-static void unf_gpn_id_callback(void *lport, void *sns_port, void *xchg);
-static void unf_gpn_id_ob_callback(struct unf_xchg *xchg);
-static void unf_gnn_id_ob_callback(struct unf_xchg *xchg);
-static void unf_scr_callback(void *lport, void *rport, void *xchg);
-static void unf_scr_ob_callback(struct unf_xchg *xchg);
-static void unf_gff_id_ob_callback(struct unf_xchg *xchg);
-static void unf_gff_id_callback(void *lport, void *sns_port, void *xchg);
-static void unf_gnn_id_callback(void *lport, void *sns_port, void *xchg);
-static void unf_gid_ft_ob_callback(struct unf_xchg *xchg);
-static void unf_gid_ft_callback(void *lport, void *rport, void *xchg);
-static void unf_gid_pt_ob_callback(struct unf_xchg *xchg);
-static void unf_gid_pt_callback(void *lport, void *rport, void *xchg);
-static void unf_rft_id_ob_callback(struct unf_xchg *xchg);
-static void unf_rft_id_callback(void *lport, void *rport, void *xchg);
-static void unf_rff_id_callback(void *lport, void *rport, void *xchg);
-static void unf_rff_id_ob_callback(struct unf_xchg *xchg);
-
-#define UNF_GET_DOMAIN_ID(x) (((x) & 0xFF0000) >> 16)
-#define UNF_GET_AREA_ID(x) (((x) & 0x00FF00) >> 8)
-
-#define UNF_GID_LAST_PORT_ID 0x80
-#define UNF_GID_CONTROL(nport_id) ((nport_id) >> 24)
-#define UNF_GET_PORT_OPTIONS(fc_4feature) ((fc_4feature) >> 20)
-
-#define UNF_SERVICE_GET_NPORTID_FORM_GID_PAGE(port_id_page) \
- (((u32)(port_id_page)->port_id_domain << 16) | \
- ((u32)(port_id_page)->port_id_area << 8) | \
- ((u32)(port_id_page)->port_id_port))
-
-#define UNF_GNN_GFF_ID_RJT_REASON(rjt_reason) \
- ((UNF_CTIU_RJT_UNABLE_PERFORM == \
- ((rjt_reason) & UNF_CTIU_RJT_MASK)) && \
- ((UNF_CTIU_RJT_EXP_PORTID_NO_REG == \
- ((rjt_reason) & UNF_CTIU_RJT_EXP_MASK)) || \
- (UNF_CTIU_RJT_EXP_PORTNAME_NO_REG == \
- ((rjt_reason) & UNF_CTIU_RJT_EXP_MASK)) || \
- (UNF_CTIU_RJT_EXP_NODENAME_NO_REG == \
- ((rjt_reason) & UNF_CTIU_RJT_EXP_MASK))))
-
-u32 unf_send_scr(struct unf_lport *lport, struct unf_rport *rport)
-{
- /* after RCVD RFF_ID ACC */
- struct unf_scr *scr = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg = {0};
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, NULL, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for SCR",
- lport->port_id);
-
- return ret;
- }
-
- xchg->cmnd_code = ELS_SCR;
-
- xchg->callback = unf_scr_callback;
- xchg->ob_callback = unf_scr_ob_callback;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_ELS_REQ;
-
- scr = &fc_entry->scr;
- memset(scr, 0, sizeof(struct unf_scr));
- scr->payload[ARRAY_INDEX_0] = (UNF_GS_CMND_SCR); /* SCR is 0x62 */
- scr->payload[ARRAY_INDEX_1] = (UNF_FABRIC_FULL_REG); /* Full registration */
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: SCR send %s. Port(0x%x_0x%x)--->RPort(0x%x) with hottag(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- lport->nport_id, rport->nport_id, xchg->hotpooltag);
-
- return ret;
-}
-
-static void unf_fill_gff_id_pld(struct unf_gffid *gff_id, u32 nport_id)
-{
- FC_CHECK_RETURN_VOID(gff_id);
-
- gff_id->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
- gff_id->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
- gff_id->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_GFF_ID);
- gff_id->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
- gff_id->nport_id = nport_id;
-}
-
-static void unf_ctpass_thru_callback(void *lport, void *rport, void *xchg)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_gid_acc_pld *gid_acc_pld = NULL;
- struct unf_xchg *unf_xchg = NULL;
- union unf_sfs_u *sfs = NULL;
- u32 cmnd_rsp_size = 0;
-
- struct send_com_trans_out *out_send = NULL;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
- FC_CHECK_RETURN_VOID(xchg);
-
- unf_lport = (struct unf_lport *)lport;
- unf_xchg = (struct unf_xchg *)xchg;
- sfs = unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
-
- gid_acc_pld = sfs->get_id.gid_rsp.gid_acc_pld;
- if (!gid_acc_pld) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) CT PassThru response payload is NULL",
- unf_lport->port_id);
-
- return;
- }
-
- out_send = (struct send_com_trans_out *)unf_xchg->upper_ct;
-
- cmnd_rsp_size = (gid_acc_pld->ctiu_pream.cmnd_rsp_size);
- if (UNF_CT_IU_ACCEPT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
- out_send->hba_status = 0; /* HBA_STATUS_OK 0 */
- out_send->total_resp_buffer_cnt = unf_xchg->fcp_sfs_union.sfs_entry.cur_offset;
- out_send->actual_resp_buffer_cnt = unf_xchg->fcp_sfs_union.sfs_entry.cur_offset;
- unf_cpu_to_big_end(out_send->resp_buffer, (u32)out_send->total_resp_buffer_cnt);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Port(0x%x_0x%x) CT PassThru was receive len is(0x%0x)",
- unf_lport->port_id, unf_lport->nport_id,
- out_send->total_resp_buffer_cnt);
- } else if (UNF_CT_IU_REJECT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
- out_send->hba_status = 13; /* HBA_STATUS_ERROR_ELS_REJECT 13 */
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) CT PassThru was rejected",
- unf_lport->port_id, unf_lport->nport_id);
- } else {
- out_send->hba_status = 1; /* HBA_STATUS_ERROR 1 */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) CT PassThru was UNKNOWN",
- unf_lport->port_id, unf_lport->nport_id);
- }
-
- up(&unf_lport->wmi_task_sema);
-}
-
-u32 unf_send_ctpass_thru(struct unf_lport *lport, void *buffer, u32 bufflen)
-{
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_rport *sns_port = NULL;
- struct send_com_trans_in *in_send = (struct send_com_trans_in *)buffer;
- struct send_com_trans_out *out_send =
- (struct send_com_trans_out *)buffer;
- struct unf_ctiu_prem *ctiu_pream = NULL;
- struct unf_gid *gs_pld = NULL;
- struct unf_frame_pkg pkg = {0};
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(buffer, UNF_RETURN_ERROR);
-
- ctiu_pream = (struct unf_ctiu_prem *)in_send->req_buffer;
- unf_cpu_to_big_end(ctiu_pream, sizeof(struct unf_gid));
-
- if (ctiu_pream->cmnd_rsp_size >> UNF_SHIFT_16 == NS_GIEL) {
- sns_port = unf_get_rport_by_nport_id(lport, UNF_FC_FID_MGMT_SERV);
- if (!sns_port) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) can't find SNS port",
- lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
- } else if (ctiu_pream->cmnd_rsp_size >> UNF_SHIFT_16 == NS_GA_NXT) {
- sns_port = unf_get_rport_by_nport_id(lport, UNF_FC_FID_DIR_SERV);
- if (!sns_port) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) can't find SNS port",
- lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[info]%s cmnd(0x%x) is error:", __func__,
- ctiu_pream->cmnd_rsp_size >> UNF_SHIFT_16);
-
- return UNF_RETURN_ERROR;
- }
-
- xchg = unf_get_sfs_free_xchg_and_init(lport, sns_port->nport_id, sns_port, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for GFF_ID",
- lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- xchg->cmnd_code = ctiu_pream->cmnd_rsp_size >> UNF_SHIFT_16;
- xchg->upper_ct = buffer;
- xchg->ob_callback = NULL;
- xchg->callback = unf_ctpass_thru_callback;
- xchg->oxid = xchg->hotpooltag;
- unf_fill_package(&pkg, xchg, sns_port);
- pkg.type = UNF_PKG_GS_REQ;
- xchg->fcp_sfs_union.sfs_entry.sfs_buff_len = bufflen;
- gs_pld = &fc_entry->get_id.gid_req; /* GID req payload */
- memset(gs_pld, 0, sizeof(struct unf_gid));
- memcpy(gs_pld, (struct unf_gid *)in_send->req_buffer, sizeof(struct unf_gid));
- fc_entry->get_id.gid_rsp.gid_acc_pld = (struct unf_gid_acc_pld *)out_send->resp_buffer;
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
-
- return ret;
-}
-
-u32 unf_send_gff_id(struct unf_lport *lport, struct unf_rport *sns_port,
- u32 nport_id)
-{
- struct unf_gffid *gff_id = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- u32 ret = UNF_RETURN_ERROR;
-
- struct unf_frame_pkg pkg;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VALUE(sns_port, UNF_RETURN_ERROR);
-
- if (unf_is_lport_valid(lport) != RETURN_OK)
- /* Lport is invalid, no retry or handle required, return ok */
- return RETURN_OK;
-
- unf_lport = (struct unf_lport *)lport->root_lport;
-
- memset(&pkg, 0, sizeof(struct unf_frame_pkg));
-
- xchg = unf_get_sfs_free_xchg_and_init(lport, sns_port->nport_id, sns_port, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for GFF_ID",
- lport->port_id);
-
- return unf_get_and_post_disc_event(lport, sns_port, nport_id, UNF_DISC_GET_FEATURE);
- }
-
- xchg->cmnd_code = NS_GFF_ID;
- xchg->disc_portid = nport_id;
-
- xchg->ob_callback = unf_gff_id_ob_callback;
- xchg->callback = unf_gff_id_callback;
-
- unf_fill_package(&pkg, xchg, sns_port);
- pkg.type = UNF_PKG_GS_REQ;
-
- gff_id = &fc_entry->gff_id;
- memset(gff_id, 0, sizeof(struct unf_gffid));
- unf_fill_gff_id_pld(gff_id, nport_id);
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
- else
- atomic_dec(&unf_lport->disc.disc_thread_info.disc_contrl_size);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: GFF_ID send %s. Port(0x%x)--->RPort(0x%x). Inquire RPort(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- sns_port->nport_id, nport_id);
-
- return ret;
-}
-
-static void unf_fill_gnnid_pld(struct unf_gnnid *gnnid_pld, u32 nport_id)
-{
- /* Inquiry R_Port node name from SW */
- FC_CHECK_RETURN_VOID(gnnid_pld);
-
- gnnid_pld->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
- gnnid_pld->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
- gnnid_pld->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_GNN_ID);
- gnnid_pld->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
-
- gnnid_pld->nport_id = nport_id;
-}
-
-u32 unf_send_gnn_id(struct unf_lport *lport, struct unf_rport *sns_port,
- u32 nport_id)
-{
- /* from DISC stop/re-login */
- struct unf_gnnid *unf_gnnid = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VALUE(sns_port, UNF_RETURN_ERROR);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "Port(0x%x_0x%x) send gnnid to 0x%x.", lport->port_id,
- lport->nport_id, nport_id);
-
- if (unf_is_lport_valid(lport) != RETURN_OK)
- /* Lport is invalid, no retry or handle required, return ok */
- return RETURN_OK;
-
- unf_lport = (struct unf_lport *)lport->root_lport;
-
- memset(&pkg, 0, sizeof(struct unf_frame_pkg));
-
- xchg = unf_get_sfs_free_xchg_and_init(lport, sns_port->nport_id,
- sns_port, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) exchange can't be NULL for GNN_ID",
- lport->port_id);
-
- return unf_get_and_post_disc_event(lport, sns_port, nport_id,
- UNF_DISC_GET_NODE_NAME);
- }
-
- xchg->cmnd_code = NS_GNN_ID;
- xchg->disc_portid = nport_id;
-
- xchg->ob_callback = unf_gnn_id_ob_callback;
- xchg->callback = unf_gnn_id_callback;
-
- unf_fill_package(&pkg, xchg, sns_port);
- pkg.type = UNF_PKG_GS_REQ;
-
- unf_gnnid = &fc_entry->gnn_id; /* GNNID payload */
- memset(unf_gnnid, 0, sizeof(struct unf_gnnid));
- unf_fill_gnnid_pld(unf_gnnid, nport_id);
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
- else
- atomic_dec(&unf_lport->disc.disc_thread_info.disc_contrl_size);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: GNN_ID send %s. Port(0x%x_0x%x)--->RPort(0x%x) inquire Nportid(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- lport->nport_id, sns_port->nport_id, nport_id);
-
- return ret;
-}
-
-static void unf_fill_gpnid_pld(struct unf_gpnid *gpnid_pld, u32 nport_id)
-{
- FC_CHECK_RETURN_VOID(gpnid_pld);
-
- gpnid_pld->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
- gpnid_pld->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
- gpnid_pld->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_GPN_ID);
- gpnid_pld->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
-
- /* Inquiry WWN from SW */
- gpnid_pld->nport_id = nport_id;
-}
-
-u32 unf_send_gpn_id(struct unf_lport *lport, struct unf_rport *sns_port,
- u32 nport_id)
-{
- struct unf_gpnid *gpnid_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VALUE(sns_port, UNF_RETURN_ERROR);
-
- if (unf_is_lport_valid(lport) != RETURN_OK)
- /* Lport is invalid, no retry or handle required, return ok */
- return RETURN_OK;
-
- unf_lport = (struct unf_lport *)lport->root_lport;
-
- memset(&pkg, 0, sizeof(struct unf_frame_pkg));
-
- xchg = unf_get_sfs_free_xchg_and_init(lport, sns_port->nport_id,
- sns_port, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for GPN_ID",
- lport->port_id);
-
- return unf_get_and_post_disc_event(lport, sns_port, nport_id,
- UNF_DISC_GET_PORT_NAME);
- }
-
- xchg->cmnd_code = NS_GPN_ID;
- xchg->disc_portid = nport_id;
-
- xchg->callback = unf_gpn_id_callback;
- xchg->ob_callback = unf_gpn_id_ob_callback;
-
- unf_fill_package(&pkg, xchg, sns_port);
- pkg.type = UNF_PKG_GS_REQ;
-
- gpnid_pld = &fc_entry->gpn_id;
- memset(gpnid_pld, 0, sizeof(struct unf_gpnid));
- unf_fill_gpnid_pld(gpnid_pld, nport_id);
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
- else
- atomic_dec(&unf_lport->disc.disc_thread_info.disc_contrl_size);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: GPN_ID send %s. Port(0x%x)--->RPort(0x%x). Inquire RPort(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- sns_port->nport_id, nport_id);
-
- return ret;
-}
-
-static void unf_fill_gid_ft_pld(struct unf_gid *gid_pld)
-{
- FC_CHECK_RETURN_VOID(gid_pld);
-
- gid_pld->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
- gid_pld->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
- gid_pld->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_GID_FT);
- gid_pld->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
-
- gid_pld->scope_type = (UNF_GID_FT_TYPE);
-}
-
-u32 unf_send_gid_ft(struct unf_lport *lport, struct unf_rport *rport)
-{
- struct unf_gid *gid_pld = NULL;
- struct unf_gid_rsp *gid_rsp = NULL;
- struct unf_gid_acc_pld *gid_acc_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg = {0};
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id,
- rport, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for GID_FT",
- lport->port_id);
-
- return ret;
- }
-
- xchg->cmnd_code = NS_GID_FT;
-
- xchg->ob_callback = unf_gid_ft_ob_callback;
- xchg->callback = unf_gid_ft_callback;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_GS_REQ;
-
- gid_pld = &fc_entry->get_id.gid_req; /* GID req payload */
- unf_fill_gid_ft_pld(gid_pld);
- gid_rsp = &fc_entry->get_id.gid_rsp; /* GID rsp payload */
-
- gid_acc_pld = (struct unf_gid_acc_pld *)unf_get_one_big_sfs_buf(xchg);
- if (!gid_acc_pld) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) allocate GID_FT response buffer failed",
- lport->port_id);
-
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
- memset(gid_acc_pld, 0, sizeof(struct unf_gid_acc_pld));
- gid_rsp->gid_acc_pld = gid_acc_pld;
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: GID_FT send %s. Port(0x%x)--->RPort(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- rport->nport_id);
-
- return ret;
-}
-
-static void unf_fill_gid_pt_pld(struct unf_gid *gid_pld,
- struct unf_lport *lport)
-{
- FC_CHECK_RETURN_VOID(gid_pld);
- FC_CHECK_RETURN_VOID(lport);
-
- gid_pld->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
- gid_pld->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
- gid_pld->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_GID_PT);
- gid_pld->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
-
- /* 0x7F000000 means NX_Port */
- gid_pld->scope_type = (UNF_GID_PT_TYPE);
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, gid_pld,
- sizeof(struct unf_gid));
-}
-
-u32 unf_send_gid_pt(struct unf_lport *lport, struct unf_rport *rport)
-{
- /* from DISC start */
- struct unf_gid *gid_pld = NULL;
- struct unf_gid_rsp *gid_rsp = NULL;
- struct unf_gid_acc_pld *gid_acc_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg = {0};
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id,
- rport, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for GID_PT",
- lport->port_id);
-
- return ret;
- }
-
- xchg->cmnd_code = NS_GID_PT;
-
- xchg->ob_callback = unf_gid_pt_ob_callback;
- xchg->callback = unf_gid_pt_callback;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_GS_REQ;
-
- gid_pld = &fc_entry->get_id.gid_req; /* GID req payload */
- unf_fill_gid_pt_pld(gid_pld, lport);
- gid_rsp = &fc_entry->get_id.gid_rsp; /* GID rsp payload */
-
- gid_acc_pld = (struct unf_gid_acc_pld *)unf_get_one_big_sfs_buf(xchg);
- if (!gid_acc_pld) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%0x) Allocate GID_PT response buffer failed",
- lport->port_id);
-
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
- memset(gid_acc_pld, 0, sizeof(struct unf_gid_acc_pld));
- gid_rsp->gid_acc_pld = gid_acc_pld;
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: GID_PT send %s. Port(0x%x_0x%x)--->RPort(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- lport->nport_id, rport->nport_id);
-
- return ret;
-}
-
-static void unf_fill_rft_id_pld(struct unf_rftid *rftid_pld,
- struct unf_lport *lport)
-{
- u32 index = 1;
-
- FC_CHECK_RETURN_VOID(rftid_pld);
- FC_CHECK_RETURN_VOID(lport);
-
- rftid_pld->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
- rftid_pld->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
- rftid_pld->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_RFT_ID);
- rftid_pld->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
- rftid_pld->nport_id = (lport->nport_id);
- rftid_pld->fc4_types[ARRAY_INDEX_0] = (UNF_FC4_SCSI_BIT8);
-
- for (index = ARRAY_INDEX_2; index < UNF_FC4TYPE_CNT; index++)
- rftid_pld->fc4_types[index] = 0;
-}
-
-u32 unf_send_rft_id(struct unf_lport *lport, struct unf_rport *rport)
-{
- /* After PLOGI process */
- struct unf_rftid *rft_id = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- memset(&pkg, 0, sizeof(struct unf_frame_pkg));
-
- xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id,
- rport, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for RFT_ID",
- lport->port_id);
-
- return ret;
- }
-
- xchg->cmnd_code = NS_RFT_ID;
-
- xchg->callback = unf_rft_id_callback;
- xchg->ob_callback = unf_rft_id_ob_callback;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_GS_REQ;
-
- rft_id = &fc_entry->rft_id;
- memset(rft_id, 0, sizeof(struct unf_rftid));
- unf_fill_rft_id_pld(rft_id, lport);
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: RFT_ID send %s. Port(0x%x_0x%x)--->RPort(0x%x). rport(0x%p) wwpn(0x%llx) ",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- lport->nport_id, rport->nport_id, rport, rport->port_name);
-
- return ret;
-}
-
-static void unf_fill_rff_id_pld(struct unf_rffid *rffid_pld,
- struct unf_lport *lport, u32 fc4_type)
-{
- FC_CHECK_RETURN_VOID(rffid_pld);
- FC_CHECK_RETURN_VOID(lport);
-
- rffid_pld->ctiu_pream.rev_inid = (UNF_REV_NPORTID_INIT);
- rffid_pld->ctiu_pream.gstype_gssub_options = (UNF_FSTYPE_OPT_INIT);
- rffid_pld->ctiu_pream.cmnd_rsp_size = (UNF_FSTYPE_RFF_ID);
- rffid_pld->ctiu_pream.frag_reason_exp_vend = UNF_FRAG_REASON_VENDOR;
- rffid_pld->nport_id = (lport->nport_id);
- rffid_pld->fc4_feature = (fc4_type | (lport->options << UNF_SHIFT_4));
-}
-
-u32 unf_send_rff_id(struct unf_lport *lport, struct unf_rport *rport,
- u32 fc4_type)
-{
- /* from RFT_ID, then Send SCR */
- struct unf_rffid *rff_id = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg = {0};
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_INFO, "%s Enter", __func__);
-
- xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id,
- rport, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for RFF_ID",
- lport->port_id);
-
- return ret;
- }
-
- xchg->cmnd_code = NS_RFF_ID;
-
- xchg->callback = unf_rff_id_callback;
- xchg->ob_callback = unf_rff_id_ob_callback;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_GS_REQ;
-
- rff_id = &fc_entry->rff_id;
- memset(rff_id, 0, sizeof(struct unf_rffid));
- unf_fill_rff_id_pld(rff_id, lport, fc4_type);
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: RFF_ID feature 0x%x(10:TGT,20:INI,30:COM) send %s. Port(0x%x_0x%x)--->RPortid(0x%x) rport(0x%p)",
- lport->options, (ret != RETURN_OK) ? "failed" : "succeed",
- lport->port_id, lport->nport_id, rport->nport_id, rport);
-
- return ret;
-}
-
-void unf_handle_init_gid_acc(struct unf_gid_acc_pld *gid_acc_pld,
- struct unf_lport *lport)
-{
- /*
- * from SCR ACC callback
- * NOTE: inquiry disc R_Port used for NPIV
- */
- struct unf_disc_rport *disc_rport = NULL;
- struct unf_disc *disc = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u32 gid_port_id = 0;
- u32 nport_id = 0;
- u32 index = 0;
- u8 control = 0;
-
- FC_CHECK_RETURN_VOID(gid_acc_pld);
- FC_CHECK_RETURN_VOID(lport);
-
- /*
- * 1. Find & Check & Get (new) R_Port from list_disc_rports_pool
- * then, Add to R_Port Disc_busy_list
- */
- while (index < UNF_GID_PORT_CNT) {
- gid_port_id = (gid_acc_pld->gid_port_id[index]);
- nport_id = UNF_NPORTID_MASK & gid_port_id;
- control = UNF_GID_CONTROL(gid_port_id);
-
- /* for each N_Port_ID from GID_ACC payload */
- if (lport->nport_id != nport_id && nport_id != 0 &&
- (!unf_lookup_lport_by_nportid(lport, nport_id))) {
- /* for New Port, not L_Port */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x_0x%x) get nportid(0x%x) from GID_ACC",
- lport->port_id, lport->nport_id, nport_id);
-
- /* Get R_Port from list of RPort Disc Pool */
- disc_rport = unf_rport_get_free_and_init(lport,
- UNF_PORT_TYPE_DISC, nport_id);
- if (!disc_rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) can't allocate new rport(0x%x) from disc pool",
- lport->port_id, lport->nport_id,
- nport_id);
-
- index++;
- continue;
- }
- }
-
- if (UNF_GID_LAST_PORT_ID == (UNF_GID_LAST_PORT_ID & control))
- break;
-
- index++;
- }
-
- /*
- * 2. Do port disc stop operation:
- * NOTE: Do DISC & release R_Port from busy_list back to
- * list_disc_rports_pool
- */
- disc = &lport->disc;
- if (!disc->disc_temp.unf_disc_stop) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) disc stop function is NULL",
- lport->port_id, lport->nport_id);
-
- return;
- }
-
- ret = disc->disc_temp.unf_disc_stop(lport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) do disc stop failed",
- lport->port_id, lport->nport_id);
- }
-}
-
-u32 unf_rport_relogin(struct unf_lport *lport, u32 nport_id)
-{
- /* Send GNN_ID */
- struct unf_rport *sns_port = NULL;
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- /* Get SNS R_Port */
- sns_port = unf_get_rport_by_nport_id(lport, UNF_FC_FID_DIR_SERV);
- if (!sns_port) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) can't find fabric Port", lport->nport_id);
-
- return UNF_RETURN_ERROR;
- }
-
- /* Send GNN_ID now to SW */
- ret = unf_get_and_post_disc_event(lport, sns_port, nport_id,
- UNF_DISC_GET_NODE_NAME);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
- lport->nport_id, UNF_DISC_GET_NODE_NAME, nport_id);
-
- /* NOTE: Continue to next stage */
- unf_rcv_gnn_id_rsp_unknown(lport, sns_port, nport_id);
- }
-
- return ret;
-}
-
-u32 unf_rport_check_wwn(struct unf_lport *lport, struct unf_rport *rport)
-{
- /* Send GPN_ID */
- struct unf_rport *sns_port = NULL;
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- /* Get SNS R_Port */
- sns_port = unf_get_rport_by_nport_id(lport, UNF_FC_FID_DIR_SERV);
- if (!sns_port) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) can't find fabric Port", lport->nport_id);
-
- return UNF_RETURN_ERROR;
- }
-
- /* Send GPN_ID to SW */
- ret = unf_get_and_post_disc_event(lport, sns_port, rport->nport_id,
- UNF_DISC_GET_PORT_NAME);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
- lport->nport_id, UNF_DISC_GET_PORT_NAME,
- rport->nport_id);
-
- unf_rcv_gpn_id_rsp_unknown(lport, rport->nport_id);
- }
-
- return ret;
-}
-
-u32 unf_handle_rscn_port_not_indisc(struct unf_lport *lport, u32 rscn_nport_id)
-{
- /* RSCN Port_ID not in GID_ACC payload table: Link Down */
- struct unf_rport *unf_rport = NULL;
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- /* from R_Port busy list by N_Port_ID */
- unf_rport = unf_get_rport_by_nport_id(lport, rscn_nport_id);
- if (unf_rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[info]Port(0x%x) RPort(0x%x) wwpn(0x%llx) has been removed and link down it",
- lport->port_id, rscn_nport_id, unf_rport->port_name);
-
- unf_rport_linkdown(lport, unf_rport);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) has no RPort(0x%x) and do nothing",
- lport->nport_id, rscn_nport_id);
- }
-
- return ret;
-}
-
-u32 unf_handle_rscn_port_indisc(struct unf_lport *lport, u32 rscn_nport_id)
-{
- /* Send GPN_ID or re-login(GNN_ID) */
- struct unf_rport *unf_rport = NULL;
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- /* from R_Port busy list by N_Port_ID */
- unf_rport = unf_get_rport_by_nport_id(lport, rscn_nport_id);
- if (unf_rport) {
- /* R_Port exist: send GPN_ID */
- ret = unf_rport_check_wwn(lport, unf_rport);
- } else {
- if (UNF_PORT_MODE_INI == (lport->options & UNF_PORT_MODE_INI))
- /* Re-LOGIN with INI mode: Send GNN_ID */
- ret = unf_rport_relogin(lport, rscn_nport_id);
- else
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) with no INI feature. Do nothing",
- lport->nport_id);
- }
-
- return ret;
-}
-
-static u32 unf_handle_rscn_port_addr(struct unf_port_id_page *portid_page,
- struct unf_gid_acc_pld *gid_acc_pld,
- struct unf_lport *lport)
-{
- /*
- * Input parameters:
- * 1. Port_ID_page: saved from RSCN payload
- * 2. GID_ACC_payload: back from GID_ACC (GID_PT or GID_FT)
- * *
- * Do work: check whether RSCN Port_ID within GID_ACC payload or not
- * then, re-login or link down rport
- */
- u32 rscn_nport_id = 0;
- u32 gid_port_id = 0;
- u32 nport_id = 0;
- u32 index = 0;
- u8 control = 0;
- u32 ret = RETURN_OK;
- bool have_same_id = false;
-
- FC_CHECK_RETURN_VALUE(portid_page, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(gid_acc_pld, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- /* 1. get RSCN_NPort_ID from (L_Port->Disc->RSCN_Mgr)->RSCN_Port_ID_Page
- */
- rscn_nport_id = UNF_SERVICE_GET_NPORTID_FORM_GID_PAGE(portid_page);
-
- /*
- * 2. for RSCN_NPort_ID
- * check whether RSCN_NPort_ID within GID_ACC_Payload or not
- */
- while (index < UNF_GID_PORT_CNT) {
- gid_port_id = (gid_acc_pld->gid_port_id[index]);
- nport_id = UNF_NPORTID_MASK & gid_port_id;
- control = UNF_GID_CONTROL(gid_port_id);
-
- if (lport->nport_id != nport_id && nport_id != 0) {
- /* is not L_Port */
- if (nport_id == rscn_nport_id) {
- /* RSCN Port_ID within GID_ACC payload */
- have_same_id = true;
- break;
- }
- }
-
- if (UNF_GID_LAST_PORT_ID == (UNF_GID_LAST_PORT_ID & control))
- break;
-
- index++;
- }
-
- /* 3. RSCN_Port_ID not within GID_ACC payload table */
- if (!have_same_id) {
- /* rport has been removed */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[warn]Port(0x%x_0x%x) find RSCN N_Port_ID(0x%x) in GID_ACC table failed",
- lport->port_id, lport->nport_id, rscn_nport_id);
-
- /* Link down rport */
- ret = unf_handle_rscn_port_not_indisc(lport, rscn_nport_id);
-
- } else { /* 4. RSCN_Port_ID within GID_ACC payload table */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x_0x%x) find RSCN N_Port_ID(0x%x) in GID_ACC table succeed",
- lport->port_id, lport->nport_id, rscn_nport_id);
-
- /* Re-login with INI mode */
- ret = unf_handle_rscn_port_indisc(lport, rscn_nport_id);
- }
-
- return ret;
-}
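The loop in unf_handle_rscn_port_addr above scans the GID_ACC table: each 32-bit entry packs a control byte (with a last-entry flag) above a 24-bit N_Port_ID, and the scan skips the local port's own ID. A minimal standalone sketch of that scan, not the driver's code; the constant values are assumptions mirroring UNF_NPORTID_MASK / UNF_GID_LAST_PORT_ID:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define NPORTID_MASK    0x00ffffffu  /* low 24 bits: N_Port_ID */
#define GID_CONTROL(e)  ((uint8_t)((e) >> 24))
#define GID_LAST_ENTRY  0x80u        /* control bit: last port in table */

/* Scan a GID_ACC port-ID table for target_id, skipping our own ID and
 * empty entries, and stopping at the entry flagged as last. */
static bool gid_table_contains(const uint32_t *table, size_t max_entries,
			       uint32_t self_id, uint32_t target_id)
{
	for (size_t i = 0; i < max_entries; i++) {
		uint32_t nport_id = table[i] & NPORTID_MASK;

		if (nport_id != 0 && nport_id != self_id &&
		    nport_id == target_id)
			return true;
		if (GID_CONTROL(table[i]) & GID_LAST_ENTRY)
			break;  /* fabric marked this as the final entry */
	}
	return false;
}
```

A hit maps to the "within GID_ACC payload" branch (re-login); a miss maps to the link-down branch.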
-
-void unf_check_rport_rscn_process(struct unf_rport *rport,
- struct unf_port_id_page *portid_page)
-{
- struct unf_rport *unf_rport = rport;
- struct unf_port_id_page *unf_portid_page = portid_page;
- u8 addr_format = unf_portid_page->addr_format;
-
- switch (addr_format) {
- /* domain+area */
- case UNF_RSCN_AREA_ADDR_GROUP:
- if (UNF_GET_DOMAIN_ID(unf_rport->nport_id) == unf_portid_page->port_id_domain &&
- UNF_GET_AREA_ID(unf_rport->nport_id) == unf_portid_page->port_id_area)
- unf_rport->rscn_position = UNF_RPORT_NEED_PROCESS;
-
- break;
- /* domain */
- case UNF_RSCN_DOMAIN_ADDR_GROUP:
- if (UNF_GET_DOMAIN_ID(unf_rport->nport_id) == unf_portid_page->port_id_domain)
- unf_rport->rscn_position = UNF_RPORT_NEED_PROCESS;
-
- break;
- /* all */
- case UNF_RSCN_FABRIC_ADDR_GROUP:
- unf_rport->rscn_position = UNF_RPORT_NEED_PROCESS;
- break;
- default:
- break;
- }
-}
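The switch in unf_check_rport_rscn_process matches an R_Port's N_Port_ID against an RSCN page at domain, domain+area, or whole-fabric granularity. A hedged standalone sketch of that address-group matching; the enum names and FC N_Port_ID field layout (domain 23:16, area 15:8) are illustrative stand-ins, not the driver's definitions:

```c
#include <stdbool.h>
#include <stdint.h>

/* A 24-bit FC N_Port_ID: domain(23:16), area(15:8), al_pa(7:0). */
#define DOMAIN_ID(id) (((id) >> 16) & 0xffu)
#define AREA_ID(id)   (((id) >> 8) & 0xffu)

enum rscn_addr_format {
	RSCN_PORT_ADDR,     /* exact port */
	RSCN_AREA_GROUP,    /* domain + area */
	RSCN_DOMAIN_GROUP,  /* domain only */
	RSCN_FABRIC_GROUP,  /* whole fabric */
};

/* Return true when nport_id falls inside the RSCN page's address group. */
static bool rscn_page_matches(enum rscn_addr_format fmt, uint32_t page_id,
			      uint32_t nport_id)
{
	switch (fmt) {
	case RSCN_PORT_ADDR:
		return page_id == nport_id;
	case RSCN_AREA_GROUP:
		return DOMAIN_ID(page_id) == DOMAIN_ID(nport_id) &&
		       AREA_ID(page_id) == AREA_ID(nport_id);
	case RSCN_DOMAIN_GROUP:
		return DOMAIN_ID(page_id) == DOMAIN_ID(nport_id);
	case RSCN_FABRIC_GROUP:
		return true;
	default:
		return false;
	}
}
```

In the driver, a match sets rscn_position to UNF_RPORT_NEED_PROCESS rather than returning a bool.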
-
-static void unf_set_rport_rscn_position(struct unf_lport *lport,
- struct unf_port_id_page *portid_page)
-{
- struct unf_rport *unf_rport = NULL;
- struct list_head *list_node = NULL;
- struct list_head *list_nextnode = NULL;
- struct unf_disc *disc = NULL;
- ulong disc_flag = 0;
- ulong rport_flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- disc = &lport->disc;
-
- spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_flag);
- list_for_each_safe(list_node, list_nextnode, &disc->list_busy_rports) {
- unf_rport = list_entry(list_node, struct unf_rport, entry_rport);
- spin_lock_irqsave(&unf_rport->rport_state_lock, rport_flag);
-
- if (unf_rport->nport_id < UNF_FC_FID_DOM_MGR) {
- if (unf_rport->rscn_position == UNF_RPORT_NOT_NEED_PROCESS)
- unf_check_rport_rscn_process(unf_rport, portid_page);
- } else {
- unf_rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
- }
-
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, rport_flag);
- }
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
-}
-
-static void unf_set_rport_rscn_position_local(struct unf_lport *lport)
-{
- struct unf_rport *unf_rport = NULL;
- struct list_head *list_node = NULL;
- struct list_head *list_nextnode = NULL;
- struct unf_disc *disc = NULL;
- ulong disc_flag = 0;
- ulong rport_flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- disc = &lport->disc;
-
- spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_flag);
- list_for_each_safe(list_node, list_nextnode, &disc->list_busy_rports) {
- unf_rport = list_entry(list_node, struct unf_rport, entry_rport);
- spin_lock_irqsave(&unf_rport->rport_state_lock, rport_flag);
-
- if (unf_rport->nport_id < UNF_FC_FID_DOM_MGR) {
- if (unf_rport->rscn_position == UNF_RPORT_NEED_PROCESS)
- unf_rport->rscn_position = UNF_RPORT_ONLY_IN_LOCAL_PROCESS;
- } else {
- unf_rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
- }
-
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, rport_flag);
- }
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
-}
-
-static void unf_reset_rport_rscn_setting(struct unf_lport *lport)
-{
- struct unf_rport *rport = NULL;
- struct list_head *list_node = NULL;
- struct list_head *list_nextnode = NULL;
- struct unf_disc *disc = NULL;
- ulong rport_flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- disc = &lport->disc;
-
- list_for_each_safe(list_node, list_nextnode, &disc->list_busy_rports) {
- rport = list_entry(list_node, struct unf_rport, entry_rport);
- spin_lock_irqsave(&rport->rport_state_lock, rport_flag);
- rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
- spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
- }
-}
-
-void unf_compare_nport_id_with_rport_list(struct unf_lport *lport, u32 nport_id,
- struct unf_port_id_page *portid_page)
-{
- struct unf_rport *rport = NULL;
- ulong rport_flag = 0;
- u8 addr_format = portid_page->addr_format;
-
- FC_CHECK_RETURN_VOID(lport);
-
- switch (addr_format) {
- /* domain+area */
- case UNF_RSCN_AREA_ADDR_GROUP:
- if ((UNF_GET_DOMAIN_ID(nport_id) != portid_page->port_id_domain) ||
- (UNF_GET_AREA_ID(nport_id) != portid_page->port_id_area))
- return;
-
- break;
- /* domain */
- case UNF_RSCN_DOMAIN_ADDR_GROUP:
- if (UNF_GET_DOMAIN_ID(nport_id) != portid_page->port_id_domain)
- return;
-
- break;
- /* all */
- case UNF_RSCN_FABRIC_ADDR_GROUP:
- break;
- /* unreachable: guaranteed by the caller */
- default:
- break;
- }
-
- rport = unf_get_rport_by_nport_id(lport, nport_id);
-
- if (!rport) {
- if (UNF_PORT_MODE_INI == (lport->options & UNF_PORT_MODE_INI)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[event]Port(0x%x) Find Rport(0x%x) by RSCN",
- lport->nport_id, nport_id);
- unf_rport_relogin(lport, nport_id);
- }
- } else {
- spin_lock_irqsave(&rport->rport_state_lock, rport_flag);
- if (rport->rscn_position == UNF_RPORT_NEED_PROCESS)
- rport->rscn_position = UNF_RPORT_IN_DISC_AND_LOCAL_PROCESS;
-
- spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
- }
-}
-
-static void unf_compare_disc_with_local_rport(struct unf_lport *lport,
- struct unf_gid_acc_pld *pld,
- struct unf_port_id_page *page)
-{
- u32 gid_port_id = 0;
- u32 nport_id = 0;
- u32 index = 0;
- u8 control = 0;
-
- FC_CHECK_RETURN_VOID(pld);
- FC_CHECK_RETURN_VOID(lport);
-
- while (index < UNF_GID_PORT_CNT) {
- gid_port_id = (pld->gid_port_id[index]);
- nport_id = UNF_NPORTID_MASK & gid_port_id;
- control = UNF_GID_CONTROL(gid_port_id);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_INFO, "[info]Port(0x%x) DISC N_Port_ID(0x%x)",
- lport->nport_id, nport_id);
-
- if (nport_id != 0 &&
- (!unf_lookup_lport_by_nportid(lport, nport_id)))
- unf_compare_nport_id_with_rport_list(lport, nport_id, page);
-
- if (UNF_GID_LAST_PORT_ID == (UNF_GID_LAST_PORT_ID & control))
- break;
-
- index++;
- }
-
- unf_set_rport_rscn_position_local(lport);
-}
-
-static u32 unf_process_each_rport_after_rscn(struct unf_lport *lport,
- struct unf_rport *sns_port,
- struct unf_rport *rport)
-{
- ulong rport_flag = 0;
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(sns_port, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- spin_lock_irqsave(&rport->rport_state_lock, rport_flag);
-
- if (rport->rscn_position == UNF_RPORT_IN_DISC_AND_LOCAL_PROCESS) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[info]Port(0x%x_0x%x) RPort(0x%x) rescan position(0x%x), check wwpn",
- lport->port_id, lport->nport_id, rport->nport_id,
- rport->rscn_position);
- rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
- spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
- ret = unf_rport_check_wwn(lport, rport);
- } else if (rport->rscn_position == UNF_RPORT_ONLY_IN_LOCAL_PROCESS) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[event]Port(0x%x_0x%x) RPort(0x%x) rescan position(0x%x), linkdown it",
- lport->port_id, lport->nport_id, rport->nport_id,
- rport->rscn_position);
- rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
- spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
- unf_rport_linkdown(lport, rport);
- } else {
- spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
- }
-
- return ret;
-}
-
-static u32 unf_process_local_rport_after_rscn(struct unf_lport *lport,
- struct unf_rport *sns_port)
-{
- struct unf_rport *unf_rport = NULL;
- struct list_head *list_node = NULL;
- struct unf_disc *disc = NULL;
- ulong disc_flag = 0;
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(sns_port, UNF_RETURN_ERROR);
- disc = &lport->disc;
-
- spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_flag);
- if (list_empty(&disc->list_busy_rports)) {
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
-
- return UNF_RETURN_ERROR;
- }
-
- list_node = UNF_OS_LIST_NEXT(&disc->list_busy_rports);
-
- do {
- unf_rport = list_entry(list_node, struct unf_rport, entry_rport);
-
- if (unf_rport->rscn_position == UNF_RPORT_NOT_NEED_PROCESS) {
- list_node = UNF_OS_LIST_NEXT(list_node);
- continue;
- } else {
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
- ret = unf_process_each_rport_after_rscn(lport, sns_port, unf_rport);
- spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_flag);
- list_node = UNF_OS_LIST_NEXT(&disc->list_busy_rports);
- }
- } while (list_node != &disc->list_busy_rports);
-
- unf_reset_rport_rscn_setting(lport);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
-
- return ret;
-}
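unf_process_local_rport_after_rscn cannot hold rport_busy_pool_lock across the per-rport work, so it drops the lock, processes the node, re-acquires the lock, and restarts from the list head; each processed node has its rscn_position cleared first, which guarantees forward progress on restart. A single-threaded sketch of that restart-from-head discipline, with hypothetical types and a pthread mutex standing in for the spinlock (the driver clears the flag inside the per-node handler; here it is cleared before dropping the lock, which is equivalent for this purpose):

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct node {
	struct node *next;
	bool needs_process;
};

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for per-rport work done without the pool lock held. */
static int processed;
static void process_node(struct node *n) { (void)n; processed++; }

/* Walk the list; for every node still flagged, clear the flag, drop the
 * lock, do the work, then re-acquire and restart from the head.  The
 * restart is safe because cleared nodes are skipped next time around. */
static void process_flagged(struct node *head)
{
	struct node *n;

	pthread_mutex_lock(&pool_lock);
	n = head;
	while (n) {
		if (!n->needs_process) {
			n = n->next;
			continue;
		}
		n->needs_process = false;	/* guarantee forward progress */
		pthread_mutex_unlock(&pool_lock);
		process_node(n);		/* may sleep / take other locks */
		pthread_mutex_lock(&pool_lock);
		n = head;			/* list may have changed: restart */
	}
	pthread_mutex_unlock(&pool_lock);
}
```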
-
-static u32 unf_handle_rscn_group_addr(struct unf_port_id_page *portid_page,
- struct unf_gid_acc_pld *gid_acc_pld,
- struct unf_lport *lport)
-{
- struct unf_rport *sns_port = NULL;
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VALUE(portid_page, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(gid_acc_pld, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- sns_port = unf_get_rport_by_nport_id(lport, UNF_FC_FID_DIR_SERV);
- if (!sns_port) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) find fabric port failed", lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- unf_set_rport_rscn_position(lport, portid_page);
- unf_compare_disc_with_local_rport(lport, gid_acc_pld, portid_page);
-
- ret = unf_process_local_rport_after_rscn(lport, sns_port);
-
- return ret;
-}
-
-static void unf_handle_rscn_gid_acc(struct unf_gid_acc_pld *gid_acc_pid,
- struct unf_lport *lport)
-{
- /* for N_Port_ID table return from RSCN */
- struct unf_port_id_page *port_id_page = NULL;
- struct unf_rscn_mgr *rscn_mgr = NULL;
- struct list_head *list_node = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(gid_acc_pid);
- FC_CHECK_RETURN_VOID(lport);
- rscn_mgr = &lport->disc.rscn_mgr;
-
- spin_lock_irqsave(&rscn_mgr->rscn_id_list_lock, flag);
- while (!list_empty(&rscn_mgr->list_using_rscn_page)) {
- /*
- * For each Port_ID_Page on
- * L_Port->Disc->RSCN_Mgr->list_using_rscn_page:
- * check whether the page's Port_ID is within the GID_ACC payload
- */
- list_node = UNF_OS_LIST_NEXT(&rscn_mgr->list_using_rscn_page);
- port_id_page = list_entry(list_node, struct unf_port_id_page, list_node_rscn);
- list_del(list_node); /* NOTE: here delete node (from RSCN using Page) */
- spin_unlock_irqrestore(&rscn_mgr->rscn_id_list_lock, flag);
-
- switch (port_id_page->addr_format) {
- /* each RSCN page corresponds to one N_Port_ID */
- case UNF_RSCN_PORT_ADDR:
- (void)unf_handle_rscn_port_addr(port_id_page, gid_acc_pid, lport);
- break;
-
- /* each RSCN page corresponds to an address group */
- case UNF_RSCN_AREA_ADDR_GROUP:
- case UNF_RSCN_DOMAIN_ADDR_GROUP:
- case UNF_RSCN_FABRIC_ADDR_GROUP:
- (void)unf_handle_rscn_group_addr(port_id_page, gid_acc_pid, lport);
- break;
-
- default:
- break;
- }
-
- /* NOTE: release this RSCN_Node */
- rscn_mgr->unf_release_rscn_node(rscn_mgr, port_id_page);
-
- /* go to next */
- spin_lock_irqsave(&rscn_mgr->rscn_id_list_lock, flag);
- }
-
- spin_unlock_irqrestore(&rscn_mgr->rscn_id_list_lock, flag);
-}
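The drain loop in unf_handle_rscn_gid_acc pops the head of the using-page list under the lock, releases the lock to handle the page, then re-takes the lock, keeping the lock hold time to O(1) per page. A single-threaded sketch of that pop-then-unlock drain, again with hypothetical types and a pthread mutex for the spinlock:

```c
#include <pthread.h>
#include <stddef.h>

struct page {
	struct page *next;
	int id;
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

static int handled;
static void handle_page(struct page *p) { handled += p->id; }

/* Unlink one node at a time under the lock; do the real work unlocked. */
static void drain_pages(struct page **head)
{
	pthread_mutex_lock(&list_lock);
	while (*head) {
		struct page *p = *head;

		*head = p->next;		/* unlink while still locked */
		pthread_mutex_unlock(&list_lock);
		handle_page(p);			/* may block; list stays usable */
		pthread_mutex_lock(&list_lock);
	}
	pthread_mutex_unlock(&list_lock);
}
```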
-
-static void unf_gid_acc_handle(struct unf_gid_acc_pld *gid_acc_pid,
- struct unf_lport *lport)
-{
-#define UNF_NONE_DISC 0x0 /* before entering DISC */
- struct unf_disc *disc = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(gid_acc_pid);
- FC_CHECK_RETURN_VOID(lport);
- disc = &lport->disc;
-
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- switch (disc->disc_option) {
- case UNF_INIT_DISC: /* from SCR callback with INI mode */
- disc->disc_option = UNF_NONE_DISC;
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- unf_handle_init_gid_acc(gid_acc_pid, lport); /* R_Port from Disc_list */
- break;
-
- case UNF_RSCN_DISC: /* from RSCN payload parsing */
- disc->disc_option = UNF_NONE_DISC;
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- unf_handle_rscn_gid_acc(gid_acc_pid, lport); /* R_Port from busy_list */
- break;
-
- default:
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x)'s disc option(0x%x) is abnormal",
- lport->port_id, lport->nport_id, disc->disc_option);
- break;
- }
-}
-
-static void unf_gid_ft_ob_callback(struct unf_xchg *xchg)
-{
- /* Do recovery */
- struct unf_lport *lport = NULL;
- union unf_sfs_u *sfs_ptr = NULL;
- struct unf_disc *disc = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- sfs_ptr = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
- if (!sfs_ptr)
- return;
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flag);
- lport = xchg->lport;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
- if (!lport)
- return;
-
- disc = &lport->disc;
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- unf_disc_state_ma(lport, UNF_EVENT_DISC_FAILED);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- /* Do DISC recovery operation */
- unf_disc_error_recovery(lport);
-}
-
-static void unf_gid_ft_callback(void *lport, void *rport, void *xchg)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_disc *disc = NULL;
- struct unf_gid_acc_pld *gid_acc_pld = NULL;
- struct unf_xchg *unf_xchg = NULL;
- union unf_sfs_u *sfs_ptr = NULL;
- u32 cmnd_rsp_size = 0;
- u32 rjt_reason = 0;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
- FC_CHECK_RETURN_VOID(xchg);
-
- unf_lport = (struct unf_lport *)lport;
- unf_xchg = (struct unf_xchg *)xchg;
- disc = &unf_lport->disc;
-
- sfs_ptr = unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
- gid_acc_pld = sfs_ptr->get_id.gid_rsp.gid_acc_pld;
- if (!gid_acc_pld) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) GID_FT response payload is NULL",
- unf_lport->port_id);
-
- return;
- }
-
- cmnd_rsp_size = gid_acc_pld->ctiu_pream.cmnd_rsp_size;
- if (UNF_CT_IU_ACCEPT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_SUCCESS);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- /* Process GID_FT ACC */
- unf_gid_acc_handle(gid_acc_pld, unf_lport);
- } else if (UNF_CT_IU_REJECT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
- rjt_reason = (gid_acc_pld->ctiu_pream.frag_reason_exp_vend);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) GID_FT was rejected with reason code(0x%x)",
- unf_lport->port_id, rjt_reason);
-
- if (UNF_CTIU_RJT_EXP_FC4TYPE_NO_REG ==
- (rjt_reason & UNF_CTIU_RJT_EXP_MASK)) {
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_SUCCESS);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- unf_gid_acc_handle(gid_acc_pld, unf_lport);
- } else {
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_SUCCESS);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
- }
- } else {
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_FAILED);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- /* Do DISC recovery operation */
- unf_disc_error_recovery(unf_lport);
- }
-}
-
-static void unf_gid_pt_ob_callback(struct unf_xchg *xchg)
-{
- /* Do recovery */
- struct unf_lport *lport = NULL;
- union unf_sfs_u *sfs_ptr = NULL;
- struct unf_disc *disc = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- sfs_ptr = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
- if (!sfs_ptr)
- return;
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flag);
- lport = xchg->lport;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
- if (!lport)
- return;
-
- disc = &lport->disc;
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- unf_disc_state_ma(lport, UNF_EVENT_DISC_FAILED);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- /* Do DISC recovery operation */
- unf_disc_error_recovery(lport);
-}
-
-static void unf_gid_pt_callback(void *lport, void *rport, void *xchg)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- struct unf_disc *disc = NULL;
- struct unf_gid_acc_pld *gid_acc_pld = NULL;
- struct unf_xchg *unf_xchg = NULL;
- union unf_sfs_u *sfs_ptr = NULL;
- u32 cmnd_rsp_size = 0;
- u32 rjt_reason = 0;
- ulong flag = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
- FC_CHECK_RETURN_VOID(xchg);
-
- unf_lport = (struct unf_lport *)lport;
- unf_rport = (struct unf_rport *)rport;
- disc = &unf_lport->disc;
- unf_xchg = (struct unf_xchg *)xchg;
- sfs_ptr = unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
-
- gid_acc_pld = sfs_ptr->get_id.gid_rsp.gid_acc_pld;
- if (!gid_acc_pld) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) GID_PT response payload is NULL",
- unf_lport->port_id);
- return;
- }
-
- cmnd_rsp_size = (gid_acc_pld->ctiu_pream.cmnd_rsp_size);
- if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_ACCEPT) {
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_SUCCESS);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- unf_gid_acc_handle(gid_acc_pld, unf_lport);
- } else if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_REJECT) {
- rjt_reason = (gid_acc_pld->ctiu_pream.frag_reason_exp_vend);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) GID_PT was rejected with reason code(0x%x)",
- unf_lport->port_id, unf_lport->nport_id, rjt_reason);
-
- if ((rjt_reason & UNF_CTIU_RJT_EXP_MASK) ==
- UNF_CTIU_RJT_EXP_PORTTYPE_NO_REG) {
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_SUCCESS);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- unf_gid_acc_handle(gid_acc_pld, unf_lport);
- } else {
- ret = unf_send_gid_ft(unf_lport, unf_rport);
- if (ret != RETURN_OK)
- goto SEND_GID_PT_FT_FAILED;
- }
- } else {
- goto SEND_GID_PT_FT_FAILED;
- }
-
- return;
-SEND_GID_PT_FT_FAILED:
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- unf_disc_state_ma(unf_lport, UNF_EVENT_DISC_FAILED);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
- unf_disc_error_recovery(unf_lport);
-}
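Both the GID_FT and GID_PT callbacks above branch three ways on the CT IU cmnd/rsp word: accept, reject, or unknown. A compact sketch of that classification; the constant values here are assumptions following the FC-GS CT response command codes (accept 0x8002, reject 0x8001, carried in the upper half-word), with names mirroring UNF_CT_IU_RSP_MASK:

```c
#include <stdint.h>

#define CT_IU_RSP_MASK	0xffff0000u
#define CT_IU_ACCEPT	0x80020000u  /* assumed FC-GS CT accept code */
#define CT_IU_REJECT	0x80010000u  /* assumed FC-GS CT reject code */

enum ct_rsp_kind { CT_RSP_ACC, CT_RSP_RJT, CT_RSP_UNKNOWN };

/* Classify a CT response from its cmnd_rsp_size word. */
static enum ct_rsp_kind classify_ct_rsp(uint32_t cmnd_rsp_size)
{
	switch (cmnd_rsp_size & CT_IU_RSP_MASK) {
	case CT_IU_ACCEPT:
		return CT_RSP_ACC;
	case CT_IU_REJECT:
		return CT_RSP_RJT;
	default:
		return CT_RSP_UNKNOWN;
	}
}
```

In the driver, ACC advances discovery, RJT is further inspected for the "not registered" explanation, and anything else triggers DISC recovery.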
-
-static void unf_gnn_id_ob_callback(struct unf_xchg *xchg)
-{
- /* Send GFF_ID */
- struct unf_lport *lport = NULL;
- struct unf_rport *sns_port = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u32 nport_id = 0;
- struct unf_lport *root_lport = NULL;
-
- FC_CHECK_RETURN_VOID(xchg);
- lport = xchg->lport;
- FC_CHECK_RETURN_VOID(lport);
- sns_port = xchg->rport;
- FC_CHECK_RETURN_VOID(sns_port);
- nport_id = xchg->disc_portid;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) send GNN_ID failed to inquire RPort(0x%x)",
- lport->port_id, nport_id);
-
- root_lport = (struct unf_lport *)lport->root_lport;
- atomic_inc(&root_lport->disc.disc_thread_info.disc_contrl_size);
- wake_up_process(root_lport->disc.disc_thread_info.thread);
-
- /* NOTE: continue next stage */
- ret = unf_get_and_post_disc_event(lport, sns_port, nport_id, UNF_DISC_GET_FEATURE);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
- lport->port_id, UNF_DISC_GET_FEATURE, nport_id);
-
- unf_rcv_gff_id_rsp_unknown(lport, nport_id);
- }
-}
-
-static void unf_rcv_gnn_id_acc(struct unf_lport *lport,
- struct unf_rport *sns_port,
- struct unf_gnnid_rsp *gnnid_rsp_pld,
- u32 nport_id)
-{
- /* Send GFF_ID or Link down immediately */
- struct unf_lport *unf_lport = lport;
- struct unf_rport *unf_sns_port = sns_port;
- struct unf_gnnid_rsp *unf_gnnid_rsp_pld = gnnid_rsp_pld;
- struct unf_rport *rport = NULL;
- u64 node_name = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(sns_port);
- FC_CHECK_RETURN_VOID(gnnid_rsp_pld);
-
- node_name = ((u64)(unf_gnnid_rsp_pld->node_name[ARRAY_INDEX_0]) << UNF_SHIFT_32) |
- ((u64)(unf_gnnid_rsp_pld->node_name[ARRAY_INDEX_1]));
-
- if (unf_lport->node_name == node_name) {
- /* R_Port & L_Port with same Node Name */
- rport = unf_get_rport_by_nport_id(unf_lport, nport_id);
- if (rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[info]Port(0x%x) has the same node name(0x%llx) with RPort(0x%x), linkdown it",
- unf_lport->port_id, node_name, nport_id);
-
- /* Destroy immediately */
- unf_rport_immediate_link_down(unf_lport, rport);
- }
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Port(0x%x) got RPort(0x%x) with node name(0x%llx) by GNN_ID",
- unf_lport->port_id, nport_id, node_name);
-
- /* Start to Send GFF_ID */
- ret = unf_get_and_post_disc_event(unf_lport, unf_sns_port,
- nport_id, UNF_DISC_GET_FEATURE);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
- unf_lport->port_id, UNF_DISC_GET_FEATURE, nport_id);
-
- unf_rcv_gff_id_rsp_unknown(unf_lport, nport_id);
- }
- }
-}
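unf_rcv_gnn_id_acc rebuilds the 64-bit node WWN from the two 32-bit words of the GNN_ID payload, word 0 being the high half. The same assembly as a standalone helper:

```c
#include <stdint.h>

/* Combine the two 32-bit words of a GNN_ID response into one 64-bit
 * node name (word 0 is the high half, word 1 the low half). */
static uint64_t wwn_from_words(const uint32_t words[2])
{
	return ((uint64_t)words[0] << 32) | (uint64_t)words[1];
}
```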
-
-static void unf_rcv_gnn_id_rjt(struct unf_lport *lport,
- struct unf_rport *sns_port,
- struct unf_gnnid_rsp *gnnid_rsp_pld,
- u32 nport_id)
-{
- /* Send GFF_ID */
- struct unf_lport *unf_lport = lport;
- struct unf_rport *unf_sns_port = sns_port;
- struct unf_gnnid_rsp *unf_gnnid_rsp_pld = gnnid_rsp_pld;
- u32 rjt_reason = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(sns_port);
- FC_CHECK_RETURN_VOID(gnnid_rsp_pld);
-
- rjt_reason = (unf_gnnid_rsp_pld->ctiu_pream.frag_reason_exp_vend);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) GNN_ID was rejected with reason code(0x%x)",
- unf_lport->port_id, unf_lport->nport_id, rjt_reason);
-
- if (!UNF_GNN_GFF_ID_RJT_REASON(rjt_reason)) {
- /* Node exists: continue to the next stage */
- ret = unf_get_and_post_disc_event(unf_lport, unf_sns_port,
- nport_id, UNF_DISC_GET_FEATURE);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
- unf_lport->port_id, UNF_DISC_GET_FEATURE, nport_id);
-
- unf_rcv_gff_id_rsp_unknown(unf_lport, nport_id);
- }
- }
-}
-
-void unf_rcv_gnn_id_rsp_unknown(struct unf_lport *lport,
- struct unf_rport *sns_port, u32 nport_id)
-{
- /* Send GFF_ID */
- struct unf_lport *unf_lport = lport;
- struct unf_rport *unf_sns_port = sns_port;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(sns_port);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) Rportid(0x%x) GNN_ID response is unknown. Sending GFF_ID",
- unf_lport->port_id, unf_lport->nport_id, nport_id);
-
- ret = unf_get_and_post_disc_event(unf_lport, unf_sns_port, nport_id, UNF_DISC_GET_FEATURE);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
- unf_lport->port_id, UNF_DISC_GET_FEATURE,
- nport_id);
-
- /* NOTE: go to next stage */
- unf_rcv_gff_id_rsp_unknown(unf_lport, nport_id);
- }
-}
-
-static void unf_gnn_id_callback(void *lport, void *sns_port, void *xchg)
-{
- struct unf_lport *unf_lport = (struct unf_lport *)lport;
- struct unf_rport *unf_sns_port = (struct unf_rport *)sns_port;
- struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
- struct unf_gnnid_rsp *gnnid_rsp_pld = NULL;
- u32 cmnd_rsp_size = 0;
- u32 nport_id = 0;
- struct unf_lport *root_lport = NULL;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(sns_port);
- FC_CHECK_RETURN_VOID(xchg);
-
- nport_id = unf_xchg->disc_portid;
- gnnid_rsp_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->gnn_id_rsp;
- cmnd_rsp_size = gnnid_rsp_pld->ctiu_pream.cmnd_rsp_size;
-
- root_lport = (struct unf_lport *)unf_lport->root_lport;
- atomic_inc(&root_lport->disc.disc_thread_info.disc_contrl_size);
- wake_up_process(root_lport->disc.disc_thread_info.thread);
-
- if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_ACCEPT) {
- /* Case ACC: send GFF_ID or Link down immediately */
- unf_rcv_gnn_id_acc(unf_lport, unf_sns_port, gnnid_rsp_pld, nport_id);
- } else if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_REJECT) {
- /* Case RJT: send GFF_ID */
- unf_rcv_gnn_id_rjt(unf_lport, unf_sns_port, gnnid_rsp_pld, nport_id);
- } else { /* NOTE: continue next stage */
- /* Case unknown: send GFF_ID */
- unf_rcv_gnn_id_rsp_unknown(unf_lport, unf_sns_port, nport_id);
- }
-}
-
-static void unf_gff_id_ob_callback(struct unf_xchg *xchg)
-{
- /* Send PLOGI */
- struct unf_lport *lport = NULL;
- struct unf_lport *root_lport = NULL;
- struct unf_rport *rport = NULL;
- ulong flag = 0;
- u32 ret = UNF_RETURN_ERROR;
- u32 nport_id = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flag);
- lport = xchg->lport;
- nport_id = xchg->disc_portid;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
-
- FC_CHECK_RETURN_VOID(lport);
-
- root_lport = (struct unf_lport *)lport->root_lport;
- atomic_inc(&root_lport->disc.disc_thread_info.disc_contrl_size);
- wake_up_process(root_lport->disc.disc_thread_info.thread);
-
- /* Get (safe) R_Port */
- rport = unf_get_rport_by_nport_id(lport, nport_id);
- rport = unf_get_safe_rport(lport, rport, UNF_RPORT_REUSE_ONLY, nport_id);
- if (!rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) can't allocate new RPort(0x%x)",
- lport->port_id, nport_id);
- return;
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) send GFF_ID(0x%x_0x%x) to RPort(0x%x_0x%x) abnormal",
- lport->port_id, lport->nport_id, xchg->oxid, xchg->rxid,
- rport->rport_index, rport->nport_id);
-
- /* Update R_Port state: PLOGI_WAIT */
- spin_lock_irqsave(&rport->rport_state_lock, flag);
- rport->nport_id = nport_id;
- unf_rport_state_ma(rport, UNF_EVENT_RPORT_ENTER_PLOGI);
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- /* NOTE: Start to send PLOGI */
- ret = unf_send_plogi(lport, rport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) send PLOGI failed, enter recovery",
- lport->port_id);
-
- /* Do R_Port recovery */
- unf_rport_error_recovery(rport);
- }
-}
-
-void unf_rcv_gff_id_acc(struct unf_lport *lport,
- struct unf_gffid_rsp *gffid_rsp_pld, u32 nport_id)
-{
- /* Delay to LOGIN */
- struct unf_lport *unf_lport = lport;
- struct unf_rport *rport = NULL;
- struct unf_gffid_rsp *unf_gffid_rsp_pld = gffid_rsp_pld;
- u32 fc_4feacture = 0;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(gffid_rsp_pld);
-
- fc_4feacture = unf_gffid_rsp_pld->fc4_feature[ARRAY_INDEX_1];
- if ((UNF_GFF_ACC_MASK & fc_4feacture) == 0)
- fc_4feacture = be32_to_cpu(unf_gffid_rsp_pld->fc4_feature[ARRAY_INDEX_1]);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Port(0x%x_0x%x) RPort(0x%x) received GFF_ID ACC. FC4 feature is 0x%x(1:TGT,2:INI,3:COM)",
- unf_lport->port_id, unf_lport->nport_id, nport_id, fc_4feacture);
-
- /* Check (& Get new) R_Port */
- rport = unf_get_rport_by_nport_id(unf_lport, nport_id);
- if (rport)
- rport = unf_find_rport(unf_lport, nport_id, rport->port_name);
-
- if (rport || (UNF_GET_PORT_OPTIONS(fc_4feacture) != UNF_PORT_MODE_INI)) {
- rport = unf_get_safe_rport(unf_lport, rport, UNF_RPORT_REUSE_ONLY, nport_id);
- FC_CHECK_RETURN_VOID(rport);
- } else {
- return;
- }
-
- if ((fc_4feacture & UNF_GFF_ACC_MASK) != 0) {
- spin_lock_irqsave(&rport->rport_state_lock, flag);
- rport->options = UNF_GET_PORT_OPTIONS(fc_4feacture);
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
- } else if (rport->port_name != INVALID_WWPN) {
- spin_lock_irqsave(&rport->rport_state_lock, flag);
- rport->options = unf_get_port_feature(rport->port_name);
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
- }
-
- /* NOTE: Send PLOGI if necessary */
- unf_check_rport_need_delay_plogi(unf_lport, rport, rport->options);
-}
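The GFF_ID ACC handler above derives the peer's mode from its FC-4 feature bits, which the driver logs as 1:TGT, 2:INI, 3:COM. A small sketch of that decode; the bit assignments are an assumption mirroring that logging convention, not a quote of the driver's UNF_GET_PORT_OPTIONS:

```c
#include <stdint.h>

#define FC4_FEATURE_TARGET    0x1u
#define FC4_FEATURE_INITIATOR 0x2u

enum port_mode { MODE_NONE, MODE_TGT, MODE_INI, MODE_COM };

/* Map FC-4 feature bits to a port mode (COM = target and initiator). */
static enum port_mode decode_fc4_feature(uint32_t feature)
{
	uint32_t bits = feature & (FC4_FEATURE_TARGET | FC4_FEATURE_INITIATOR);

	switch (bits) {
	case FC4_FEATURE_TARGET:
		return MODE_TGT;
	case FC4_FEATURE_INITIATOR:
		return MODE_INI;
	case FC4_FEATURE_TARGET | FC4_FEATURE_INITIATOR:
		return MODE_COM;
	default:
		return MODE_NONE;
	}
}
```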
-
-void unf_rcv_gff_id_rjt(struct unf_lport *lport,
- struct unf_gffid_rsp *gffid_rsp_pld, u32 nport_id)
-{
- /* Delay LOGIN or LOGO */
- struct unf_lport *unf_lport = lport;
- struct unf_rport *rport = NULL;
- struct unf_gffid_rsp *unf_gffid_rsp_pld = gffid_rsp_pld;
- u32 rjt_reason = 0;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(gffid_rsp_pld);
-
- /* Check (& Get new) R_Port */
- rport = unf_get_rport_by_nport_id(unf_lport, nport_id);
- if (rport)
- rport = unf_find_rport(unf_lport, nport_id, rport->port_name);
-
- if (!rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) get RPort by N_Port_ID(0x%x) failed and alloc new",
- unf_lport->port_id, nport_id);
-
- rport = unf_rport_get_free_and_init(unf_lport, UNF_PORT_TYPE_FC, nport_id);
- FC_CHECK_RETURN_VOID(rport);
-
- spin_lock_irqsave(&rport->rport_state_lock, flag);
- rport->nport_id = nport_id;
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
- }
-
- rjt_reason = unf_gffid_rsp_pld->ctiu_pream.frag_reason_exp_vend;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) send GFF_ID for RPort(0x%x) but was rejected. Reason code(0x%x)",
- unf_lport->port_id, nport_id, rjt_reason);
-
- if (!UNF_GNN_GFF_ID_RJT_REASON(rjt_reason)) {
- rport = unf_get_safe_rport(lport, rport, UNF_RPORT_REUSE_ONLY, nport_id);
- FC_CHECK_RETURN_VOID(rport);
-
- /* Update R_Port state: PLOGI_WAIT */
- spin_lock_irqsave(&rport->rport_state_lock, flag);
- rport->nport_id = nport_id;
- unf_rport_state_ma(rport, UNF_EVENT_RPORT_ENTER_PLOGI);
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- /* Delay to send PLOGI */
- unf_rport_delay_login(rport);
- } else {
- spin_lock_irqsave(&rport->rport_state_lock, flag);
- if (rport->rp_state == UNF_RPORT_ST_INIT) {
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- /* Enter closing state */
- unf_rport_enter_logo(unf_lport, rport);
- } else {
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
- }
- }
-}
-
-void unf_rcv_gff_id_rsp_unknown(struct unf_lport *lport, u32 nport_id)
-{
- /* Send PLOGI */
- struct unf_lport *unf_lport = lport;
- struct unf_rport *rport = NULL;
- ulong flag = 0;
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VOID(lport);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) send GFF_ID for RPort(0x%x) but response is unknown",
- unf_lport->port_id, nport_id);
-
- /* Get (Safe) R_Port & Set State */
- rport = unf_get_rport_by_nport_id(unf_lport, nport_id);
- if (rport)
- rport = unf_find_rport(unf_lport, nport_id, rport->port_name);
-
- if (!rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) can't get RPort by NPort ID(0x%x), allocate new RPort",
- unf_lport->port_id, unf_lport->nport_id, nport_id);
-
- rport = unf_rport_get_free_and_init(unf_lport, UNF_PORT_TYPE_FC, nport_id);
- FC_CHECK_RETURN_VOID(rport);
-
- spin_lock_irqsave(&rport->rport_state_lock, flag);
- rport->nport_id = nport_id;
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
- }
-
- rport = unf_get_safe_rport(unf_lport, rport, UNF_RPORT_REUSE_ONLY, nport_id);
- FC_CHECK_RETURN_VOID(rport);
-
- /* Update R_Port state: PLOGI_WAIT */
- spin_lock_irqsave(&rport->rport_state_lock, flag);
- rport->nport_id = nport_id;
- unf_rport_state_ma(rport, UNF_EVENT_RPORT_ENTER_PLOGI);
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- /* Start to send PLOGI */
- ret = unf_send_plogi(unf_lport, rport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) can not send PLOGI for RPort(0x%x), enter recovery",
- unf_lport->port_id, nport_id);
-
- unf_rport_error_recovery(rport);
- }
-}
-
-static void unf_gff_id_callback(void *lport, void *sns_port, void *xchg)
-{
- struct unf_lport *unf_lport = (struct unf_lport *)lport;
- struct unf_lport *root_lport = NULL;
- struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
- struct unf_gffid_rsp *gffid_rsp_pld = NULL;
- u32 cmnd_rsp_size = 0;
- u32 nport_id = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(sns_port);
- FC_CHECK_RETURN_VOID(xchg);
-
- nport_id = unf_xchg->disc_portid;
-
- gffid_rsp_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->gff_id_rsp;
- cmnd_rsp_size = (gffid_rsp_pld->ctiu_pream.cmnd_rsp_size);
-
- root_lport = (struct unf_lport *)unf_lport->root_lport;
- atomic_inc(&root_lport->disc.disc_thread_info.disc_contrl_size);
- wake_up_process(root_lport->disc.disc_thread_info.thread);
-
- if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_ACCEPT) {
- /* Case for GFF_ID ACC: (Delay)PLOGI */
- unf_rcv_gff_id_acc(unf_lport, gffid_rsp_pld, nport_id);
- } else if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_REJECT) {
- /* Case for GFF_ID RJT: Delay PLOGI or LOGO directly */
- unf_rcv_gff_id_rjt(unf_lport, gffid_rsp_pld, nport_id);
- } else {
- /* Send PLOGI */
- unf_rcv_gff_id_rsp_unknown(unf_lport, nport_id);
- }
-}
-
-static void unf_rcv_gpn_id_acc(struct unf_lport *lport,
- u32 nport_id, u64 port_name)
-{
- /* then PLOGI or re-login */
- struct unf_lport *unf_lport = lport;
- struct unf_rport *rport = NULL;
- ulong flag = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- rport = unf_find_valid_rport(unf_lport, port_name, nport_id);
- if (rport) {
- /* R_Port with TGT mode & L_Port with INI mode:
- * send PLOGI with INIT state
- */
- if ((rport->options & UNF_PORT_MODE_TGT) == UNF_PORT_MODE_TGT) {
- rport = unf_get_safe_rport(lport, rport, UNF_RPORT_REUSE_INIT, nport_id);
- FC_CHECK_RETURN_VOID(rport);
-
- /* Update R_Port state: PLOGI_WAIT */
- spin_lock_irqsave(&rport->rport_state_lock, flag);
- rport->nport_id = nport_id;
- unf_rport_state_ma(rport, UNF_EVENT_RPORT_ENTER_PLOGI);
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- /* Start to send PLOGI */
- ret = unf_send_plogi(unf_lport, rport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
-					     "[warn]LOGIN: Port(0x%x_0x%x) send PLOGI failed for 0x%x, enter recovery",
- unf_lport->port_id, unf_lport->nport_id, nport_id);
-
- unf_rport_error_recovery(rport);
- }
- } else {
- spin_lock_irqsave(&rport->rport_state_lock, flag);
- if (rport->rp_state != UNF_RPORT_ST_PLOGI_WAIT &&
- rport->rp_state != UNF_RPORT_ST_PRLI_WAIT &&
- rport->rp_state != UNF_RPORT_ST_READY) {
- unf_rport_state_ma(rport, UNF_EVENT_RPORT_LOGO);
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- /* Do LOGO operation */
- unf_rport_enter_logo(unf_lport, rport);
- } else {
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
- }
- }
- } else {
- /* Send GNN_ID */
- (void)unf_rport_relogin(unf_lport, nport_id);
- }
-}
-
-static void unf_rcv_gpn_id_rjt(struct unf_lport *lport, u32 nport_id)
-{
- struct unf_lport *unf_lport = lport;
- struct unf_rport *rport = NULL;
-
- FC_CHECK_RETURN_VOID(lport);
-
- rport = unf_get_rport_by_nport_id(unf_lport, nport_id);
- if (rport)
- /* Do R_Port Link down */
- unf_rport_linkdown(unf_lport, rport);
-}
-
-void unf_rcv_gpn_id_rsp_unknown(struct unf_lport *lport, u32 nport_id)
-{
- struct unf_lport *unf_lport = lport;
-
- FC_CHECK_RETURN_VOID(lport);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) wrong response of GPN_ID with RPort(0x%x)",
- unf_lport->port_id, nport_id);
-
- /* NOTE: go to next stage */
- (void)unf_rport_relogin(unf_lport, nport_id);
-}
-
-static void unf_gpn_id_ob_callback(struct unf_xchg *xchg)
-{
- struct unf_lport *lport = NULL;
- u32 nport_id = 0;
- struct unf_lport *root_lport = NULL;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- lport = xchg->lport;
- nport_id = xchg->disc_portid;
- FC_CHECK_RETURN_VOID(lport);
-
- root_lport = (struct unf_lport *)lport->root_lport;
- atomic_inc(&root_lport->disc.disc_thread_info.disc_contrl_size);
- wake_up_process(root_lport->disc.disc_thread_info.thread);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) send GPN_ID failed to inquire RPort(0x%x)",
- lport->port_id, nport_id);
-
- /* NOTE: go to next stage */
- (void)unf_rport_relogin(lport, nport_id);
-}
-
-static void unf_gpn_id_callback(void *lport, void *sns_port, void *xchg)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_xchg *unf_xchg = NULL;
- struct unf_gpnid_rsp *gpnid_rsp_pld = NULL;
- u64 port_name = 0;
- u32 cmnd_rsp_size = 0;
- u32 nport_id = 0;
- struct unf_lport *root_lport = NULL;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(sns_port);
- FC_CHECK_RETURN_VOID(xchg);
-
- unf_lport = (struct unf_lport *)lport;
- unf_xchg = (struct unf_xchg *)xchg;
- nport_id = unf_xchg->disc_portid;
-
- root_lport = (struct unf_lport *)unf_lport->root_lport;
- atomic_inc(&root_lport->disc.disc_thread_info.disc_contrl_size);
- wake_up_process(root_lport->disc.disc_thread_info.thread);
-
- gpnid_rsp_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->gpn_id_rsp;
- cmnd_rsp_size = gpnid_rsp_pld->ctiu_pream.cmnd_rsp_size;
- if (UNF_CT_IU_ACCEPT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
- /* GPN_ID ACC */
- port_name = ((u64)(gpnid_rsp_pld->port_name[ARRAY_INDEX_0])
- << UNF_SHIFT_32) |
- ((u64)(gpnid_rsp_pld->port_name[ARRAY_INDEX_1]));
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Port(0x%x) GPN_ID ACC with WWN(0x%llx) RPort NPort ID(0x%x)",
- unf_lport->port_id, port_name, nport_id);
-
- /* Send PLOGI or LOGO or GNN_ID */
- unf_rcv_gpn_id_acc(unf_lport, nport_id, port_name);
- } else if (UNF_CT_IU_REJECT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
- /* GPN_ID RJT: Link Down */
- unf_rcv_gpn_id_rjt(unf_lport, nport_id);
- } else {
- /* GPN_ID response type unknown: Send GNN_ID */
- unf_rcv_gpn_id_rsp_unknown(unf_lport, nport_id);
- }
-}
-
-static void unf_rff_id_ob_callback(struct unf_xchg *xchg)
-{
- /* Do recovery */
- struct unf_lport *lport = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flag);
- lport = xchg->lport;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
-
- FC_CHECK_RETURN_VOID(lport);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) send RFF_ID failed",
- lport->port_id, lport->nport_id);
-
- unf_lport_error_recovery(lport);
-}
-
-static void unf_rff_id_callback(void *lport, void *rport, void *xchg)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- struct unf_xchg *unf_xchg = NULL;
- struct unf_ctiu_prem *ctiu_prem = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u32 cmnd_rsp_size = 0;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
- FC_CHECK_RETURN_VOID(xchg);
-
- unf_lport = (struct unf_lport *)lport;
- unf_xchg = (struct unf_xchg *)xchg;
- if (unlikely(!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr))
- return;
-
- unf_rport = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_FCTRL);
- unf_rport = unf_get_safe_rport(unf_lport, unf_rport,
- UNF_RPORT_REUSE_ONLY, UNF_FC_FID_FCTRL);
- if (unlikely(!unf_rport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) can't allocate RPort(0x%x)",
- unf_lport->port_id, UNF_FC_FID_FCTRL);
- return;
- }
-
- unf_rport->nport_id = UNF_FC_FID_FCTRL;
- ctiu_prem = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->rff_id_rsp.ctiu_pream;
- cmnd_rsp_size = ctiu_prem->cmnd_rsp_size;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]LOGIN: Port(0x%x_0x%x) RFF_ID rsp is (0x%x)",
- unf_lport->port_id, unf_lport->nport_id,
- (cmnd_rsp_size & UNF_CT_IU_RSP_MASK));
-
- /* RSP Type check: some SW not support RFF_ID, go to next stage also */
- if ((cmnd_rsp_size & UNF_CT_IU_RSP_MASK) == UNF_CT_IU_ACCEPT) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Port(0x%x_0x%x) receive RFF ACC(0x%x) in state(0x%x)",
- unf_lport->port_id, unf_lport->nport_id,
- (cmnd_rsp_size & UNF_CT_IU_RSP_MASK), unf_lport->states);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) receive RFF RJT(0x%x) in state(0x%x) with RJT reason code(0x%x) explanation(0x%x)",
- unf_lport->port_id, unf_lport->nport_id,
- (cmnd_rsp_size & UNF_CT_IU_RSP_MASK), unf_lport->states,
- (ctiu_prem->frag_reason_exp_vend) & UNF_CT_IU_REASON_MASK,
- (ctiu_prem->frag_reason_exp_vend) & UNF_CT_IU_EXPLAN_MASK);
- }
-
- /* L_Port state check */
- spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
- if (unf_lport->states != UNF_LPORT_ST_RFF_ID_WAIT) {
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) receive RFF reply in state(0x%x)",
- unf_lport->port_id, unf_lport->nport_id, unf_lport->states);
-
- return;
- }
- /* LPort: RFF_ID_WAIT --> SCR_WAIT */
- unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_REMOTE_ACC);
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
-
- ret = unf_send_scr(unf_lport, unf_rport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) send SCR failed",
- unf_lport->port_id, unf_lport->nport_id);
- unf_lport_error_recovery(unf_lport);
- }
-}
-
-static void unf_rft_id_ob_callback(struct unf_xchg *xchg)
-{
- struct unf_lport *lport = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
- spin_lock_irqsave(&xchg->xchg_state_lock, flag);
- lport = xchg->lport;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
- FC_CHECK_RETURN_VOID(lport);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) send RFT_ID failed",
- lport->port_id, lport->nport_id);
- unf_lport_error_recovery(lport);
-}
-
-static void unf_rft_id_callback(void *lport, void *rport, void *xchg)
-{
- /* RFT_ID --->>> RFF_ID */
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- struct unf_xchg *unf_xchg = NULL;
- struct unf_ctiu_prem *ctiu_prem = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u32 cmnd_rsp_size = 0;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
- FC_CHECK_RETURN_VOID(xchg);
-
- unf_lport = (struct unf_lport *)lport;
- unf_rport = (struct unf_rport *)rport;
- unf_xchg = (struct unf_xchg *)xchg;
-
- if (!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) SFS entry is NULL with state(0x%x)",
- unf_lport->port_id, unf_lport->states);
- return;
- }
-
- ctiu_prem = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr
- ->rft_id_rsp.ctiu_pream;
- cmnd_rsp_size = (ctiu_prem->cmnd_rsp_size);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Port(0x%x_0x%x) RFT_ID response is (0x%x)",
- (cmnd_rsp_size & UNF_CT_IU_RSP_MASK), unf_lport->port_id,
- unf_lport->nport_id);
-
- if (UNF_CT_IU_ACCEPT == (cmnd_rsp_size & UNF_CT_IU_RSP_MASK)) {
- /* Case for RFT_ID ACC: send RFF_ID */
- spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
- if (unf_lport->states != UNF_LPORT_ST_RFT_ID_WAIT) {
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x_0x%x) receive RFT_ID ACC in state(0x%x)",
- unf_lport->port_id, unf_lport->nport_id,
- unf_lport->states);
-
- return;
- }
-
- /* LPort: RFT_ID_WAIT --> RFF_ID_WAIT */
- unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_REMOTE_ACC);
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
-
- /* Start to send RFF_ID GS command */
- ret = unf_send_rff_id(unf_lport, unf_rport, UNF_FC4_FCP_TYPE);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) send RFF_ID failed",
- unf_lport->port_id, unf_lport->nport_id);
- unf_lport_error_recovery(unf_lport);
- }
- } else {
- /* Case for RFT_ID RJT: do recovery */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) receive RFT_ID RJT with reason_code(0x%x) explanation(0x%x)",
- unf_lport->port_id, unf_lport->nport_id,
- (ctiu_prem->frag_reason_exp_vend) & UNF_CT_IU_REASON_MASK,
- (ctiu_prem->frag_reason_exp_vend) & UNF_CT_IU_EXPLAN_MASK);
-
- /* Do L_Port recovery */
- unf_lport_error_recovery(unf_lport);
- }
-}
-
-static void unf_scr_ob_callback(struct unf_xchg *xchg)
-{
-	/* Callback function for exception: Do L_Port error recovery */
- struct unf_lport *lport = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flag);
- lport = xchg->lport;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
-
- FC_CHECK_RETURN_VOID(lport);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) send SCR failed and do port recovery",
- lport->port_id);
-
- unf_lport_error_recovery(lport);
-}
-
-static void unf_scr_callback(void *lport, void *rport, void *xchg)
-{
- /* Callback function for SCR response: Send GID_PT with INI mode */
- struct unf_lport *unf_lport = NULL;
- struct unf_disc *disc = NULL;
- struct unf_xchg *unf_xchg = NULL;
- struct unf_els_acc *els_acc = NULL;
- u32 ret = UNF_RETURN_ERROR;
- ulong port_flag = 0;
- ulong disc_flag = 0;
- u32 cmnd = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(xchg);
-
- unf_lport = (struct unf_lport *)lport;
- unf_xchg = (struct unf_xchg *)xchg;
- disc = &unf_lport->disc;
-
- if (!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr)
- return;
-
- els_acc = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->els_acc;
- if (unf_xchg->byte_orders & UNF_BIT_2)
- cmnd = be32_to_cpu(els_acc->cmnd);
- else
- cmnd = (els_acc->cmnd);
-
- if ((cmnd & UNF_ELS_CMND_HIGH_MASK) == UNF_ELS_CMND_ACC) {
- spin_lock_irqsave(&unf_lport->lport_state_lock, port_flag);
- if (unf_lport->states != UNF_LPORT_ST_SCR_WAIT) {
- spin_unlock_irqrestore(&unf_lport->lport_state_lock,
- port_flag);
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) receive SCR ACC with error state(0x%x)",
- unf_lport->port_id, unf_lport->nport_id,
- unf_lport->states);
- return;
- }
-
- /* LPort: SCR_WAIT --> READY */
- unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_REMOTE_ACC);
- if (unf_lport->states == UNF_LPORT_ST_READY) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Port(0x%x_0x%x) enter READY state when received SCR response",
- unf_lport->port_id, unf_lport->nport_id);
- }
-
- /* Start to Discovery with INI mode: GID_PT */
- if ((unf_lport->options & UNF_PORT_MODE_INI) ==
- UNF_PORT_MODE_INI) {
- spin_unlock_irqrestore(&unf_lport->lport_state_lock,
- port_flag);
-
- if (unf_lport->disc.disc_temp.unf_disc_start) {
- spin_lock_irqsave(&disc->rport_busy_pool_lock,
- disc_flag);
- unf_lport->disc.disc_option = UNF_INIT_DISC;
- disc->last_disc_jiff = jiffies;
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
-
- ret = unf_lport->disc.disc_temp.unf_disc_start(unf_lport);
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]LOGIN: Port(0x%x) DISC %s with INI mode",
- unf_lport->port_id,
- (ret != RETURN_OK) ? "failed" : "succeed");
- }
- return;
- }
-
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, port_flag);
- /* NOTE: set state with UNF_DISC_ST_END used for
- * RSCN process
- */
- spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_flag);
- unf_lport->disc.states = UNF_DISC_ST_END;
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) is TGT mode, no need to discovery",
- unf_lport->port_id);
-
- return;
- }
- unf_lport_error_recovery(unf_lport);
-}
-
-void unf_check_rport_need_delay_plogi(struct unf_lport *lport,
- struct unf_rport *rport, u32 port_feature)
-{
- /*
- * Called by:
- * 1. Private loop
- * 2. RCVD GFF_ID ACC
- */
- struct unf_lport *unf_lport = lport;
- struct unf_rport *unf_rport = rport;
- ulong flag = 0;
- u32 nport_id = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
- nport_id = unf_rport->nport_id;
-
- /*
- * Send GFF_ID means L_Port has INI attribute
- * *
- * When to send PLOGI:
- * 1. R_Port has TGT mode (COM or TGT), send PLOGI immediately
- * 2. R_Port only with INI, send LOGO immediately
- * 3. R_Port with unknown attribute, delay to send PLOGI
- */
- if ((UNF_PORT_MODE_TGT & port_feature) ||
- (UNF_LPORT_ENHANCED_FEATURE_ENHANCED_GFF &
- unf_lport->enhanced_features)) {
- /* R_Port has TGT mode: send PLOGI immediately */
- unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY, nport_id);
- FC_CHECK_RETURN_VOID(unf_rport);
-
- /* Update R_Port state: PLOGI_WAIT */
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport->nport_id = nport_id;
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- /* Start to send PLOGI */
- ret = unf_send_plogi(unf_lport, unf_rport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) send PLOGI to RPort(0x%x) failed",
- unf_lport->port_id, unf_lport->nport_id,
- nport_id);
-
- unf_rport_error_recovery(unf_rport);
- }
- } else if (port_feature == UNF_PORT_MODE_INI) {
- /* R_Port only with INI mode: can't send PLOGI
- * --->>> LOGO/nothing
- */
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- if (unf_rport->rp_state == UNF_RPORT_ST_INIT) {
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) send LOGO to RPort(0x%x) which only with INI mode",
- unf_lport->port_id, unf_lport->nport_id, nport_id);
-
- /* Enter Closing state */
- unf_rport_enter_logo(unf_lport, unf_rport);
- } else {
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
- }
- } else {
- /* Unknown R_Port attribute: Delay to send PLOGI */
- unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY, nport_id);
- FC_CHECK_RETURN_VOID(unf_rport);
-
- /* Update R_Port state: PLOGI_WAIT */
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport->nport_id = nport_id;
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- unf_rport_delay_login(unf_rport);
- }
-}
diff --git a/drivers/scsi/spfc/common/unf_gs.h b/drivers/scsi/spfc/common/unf_gs.h
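The GS callbacks removed above (unf_gff_id_callback, unf_gpn_id_callback, unf_rff_id_callback) all share one dispatch shape: mask the CT IU preamble's cmnd_rsp_size with UNF_CT_IU_RSP_MASK and branch three ways on accept, reject, or anything else. A minimal user-space sketch of that pattern follows; the constant values here are illustrative stand-ins (FC-GS encodes the CT response code in the upper 16 bits, 0x8002 for accept and 0x8001 for reject), not the driver's actual UNF_CT_IU_* definitions.

```c
#include <stdint.h>

/* Illustrative values only: the real UNF_CT_IU_* constants live in the
 * driver's headers. FC-GS carries the CT response code in the upper
 * 16 bits of the preamble word (0x8002 = accept, 0x8001 = reject). */
#define CT_IU_RSP_MASK  0xffff0000u
#define CT_IU_ACCEPT    0x80020000u
#define CT_IU_REJECT    0x80010000u

enum gs_action { GS_SEND_PLOGI, GS_HANDLE_RJT, GS_RETRY_UNKNOWN };

/* Three-way dispatch on the masked response code, mirroring the
 * ACC / RJT / unknown branches in the deleted callbacks above. */
static enum gs_action classify_ct_rsp(uint32_t cmnd_rsp_size)
{
	uint32_t code = cmnd_rsp_size & CT_IU_RSP_MASK;

	if (code == CT_IU_ACCEPT)
		return GS_SEND_PLOGI;    /* ACC: proceed with (delayed) PLOGI */
	if (code == CT_IU_REJECT)
		return GS_HANDLE_RJT;    /* RJT: LOGO or delayed PLOGI */
	return GS_RETRY_UNKNOWN;         /* anything else: re-login path */
}
```

Note that the low 16 bits (the residual size field) are deliberately ignored by the mask, which is why the driver can compare the masked value directly against the accept/reject constants.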
deleted file mode 100644
index d9856133b3cd..000000000000
--- a/drivers/scsi/spfc/common/unf_gs.h
+++ /dev/null
@@ -1,58 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_GS_H
-#define UNF_GS_H
-
-#include "unf_type.h"
-#include "unf_lport.h"
-
-#ifdef __cplusplus
-extern "C" {
-#endif /* __cplusplus */
-
-u32 unf_send_scr(struct unf_lport *lport,
- struct unf_rport *rport);
-u32 unf_send_ctpass_thru(struct unf_lport *lport,
- void *buffer, u32 bufflen);
-
-u32 unf_send_gid_ft(struct unf_lport *lport,
- struct unf_rport *rport);
-u32 unf_send_gid_pt(struct unf_lport *lport,
- struct unf_rport *rport);
-u32 unf_send_gpn_id(struct unf_lport *lport,
- struct unf_rport *sns_port, u32 nport_id);
-u32 unf_send_gnn_id(struct unf_lport *lport,
- struct unf_rport *sns_port, u32 nport_id);
-u32 unf_send_gff_id(struct unf_lport *lport,
- struct unf_rport *sns_port, u32 nport_id);
-
-u32 unf_send_rff_id(struct unf_lport *lport,
- struct unf_rport *rport, u32 fc4_type);
-u32 unf_send_rft_id(struct unf_lport *lport,
- struct unf_rport *rport);
-void unf_rcv_gnn_id_rsp_unknown(struct unf_lport *lport,
- struct unf_rport *sns_port, u32 nport_id);
-void unf_rcv_gpn_id_rsp_unknown(struct unf_lport *lport, u32 nport_id);
-void unf_rcv_gff_id_rsp_unknown(struct unf_lport *lport, u32 nport_id);
-void unf_check_rport_need_delay_plogi(struct unf_lport *lport,
- struct unf_rport *rport, u32 port_feature);
-
-struct send_com_trans_in {
- unsigned char port_wwn[8];
- u32 req_buffer_count;
- unsigned char req_buffer[ARRAY_INDEX_1];
-};
-
-struct send_com_trans_out {
- u32 hba_status;
- u32 total_resp_buffer_cnt;
- u32 actual_resp_buffer_cnt;
- unsigned char resp_buffer[ARRAY_INDEX_1];
-};
-
-#ifdef __cplusplus
-}
-#endif /* __cplusplus */
-
-#endif
diff --git a/drivers/scsi/spfc/common/unf_init.c b/drivers/scsi/spfc/common/unf_init.c
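The send_com_trans_in/out structs deleted above end with a one-element array (req_buffer[ARRAY_INDEX_1]) used as a variable-length tail, an old C idiom where the caller over-allocates the struct so the trailing member can hold the real payload. The sketch below shows that allocation pattern with an illustrative struct of the same shape; the helper name is hypothetical.

```c
#include <stdint.h>
#include <stdlib.h>

/* Same layout idea as send_com_trans_in above: req_buffer is declared
 * with one element but is really req_buffer_count bytes long, so the
 * struct and its payload come from a single over-sized allocation. */
struct trans_in {
	unsigned char port_wwn[8];
	uint32_t      req_buffer_count;
	unsigned char req_buffer[1];   /* variable-length tail */
};

static struct trans_in *trans_in_alloc(uint32_t payload_len)
{
	/* One allocation covers the header plus the payload tail;
	 * the "- 1" accounts for the element already in the struct. */
	struct trans_in *t = calloc(1, sizeof(*t) + payload_len - 1);

	if (t)
		t->req_buffer_count = payload_len;
	return t;
}
```

Modern C would use a flexible array member (`unsigned char req_buffer[];`) instead, which drops the `- 1` adjustment, but the one-element form keeps compatibility with pre-C99 compilers.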
deleted file mode 100644
index 7e6f98d16977..000000000000
--- a/drivers/scsi/spfc/common/unf_init.c
+++ /dev/null
@@ -1,353 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "unf_type.h"
-#include "unf_log.h"
-#include "unf_scsi_common.h"
-#include "unf_event.h"
-#include "unf_exchg.h"
-#include "unf_portman.h"
-#include "unf_rport.h"
-#include "unf_service.h"
-#include "unf_io.h"
-#include "unf_io_abnormal.h"
-
-#define UNF_PID 12
-#define MY_PID UNF_PID
-
-#define RPORT_FEATURE_POOL_SIZE 4096
-struct task_struct *event_task_thread;
-struct workqueue_struct *unf_wq;
-
-atomic_t fc_mem_ref;
-
-struct unf_global_card_thread card_thread_mgr;
-u32 unf_dgb_level = UNF_MAJOR;
-u32 log_print_level = UNF_INFO;
-u32 log_limited_times = UNF_LOGIN_ATT_PRINT_TIMES;
-
-static struct unf_esgl_page *unf_get_one_free_esgl_page
- (void *lport, struct unf_frame_pkg *pkg)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_xchg *unf_xchg = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
- FC_CHECK_RETURN_VALUE(pkg, NULL);
-
- unf_lport = (struct unf_lport *)lport;
- unf_xchg = (struct unf_xchg *)pkg->xchg_contex;
-
- return unf_get_and_add_one_free_esgl_page(unf_lport, unf_xchg);
-}
-
-static int unf_get_cfg_parms(char *section_name, struct unf_cfg_item *cfg_itm,
- u32 *cfg_value, u32 itemnum)
-{
- /* Maximum length of a configuration item value, including the end
- * character
- */
-#define UNF_MAX_ITEM_VALUE_LEN (256)
-
- u32 *unf_cfg_value = NULL;
- struct unf_cfg_item *unf_cfg_itm = NULL;
- u32 i = 0;
-
- unf_cfg_itm = cfg_itm;
- unf_cfg_value = cfg_value;
-
- for (i = 0; i < itemnum; i++) {
- if (!unf_cfg_itm || !unf_cfg_value) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT,
- UNF_ERR,
- "[err]Config name or value is NULL");
-
- return UNF_RETURN_ERROR;
- }
-
- if (strcmp("End", unf_cfg_itm->puc_name) == 0x0)
- break;
-
- if (strcmp("fw_path", unf_cfg_itm->puc_name) == 0x0) {
- unf_cfg_itm++;
- unf_cfg_value += UNF_MAX_ITEM_VALUE_LEN / sizeof(u32);
- continue;
- }
-
- *unf_cfg_value = unf_cfg_itm->default_value;
- unf_cfg_itm++;
- unf_cfg_value++;
- }
-
- return RETURN_OK;
-}
-
-struct unf_cm_handle_op unf_cm_handle_ops = {
- .unf_alloc_local_port = unf_lport_create_and_init,
- .unf_release_local_port = unf_release_local_port,
- .unf_receive_ls_gs_pkg = unf_receive_ls_gs_pkg,
- .unf_receive_bls_pkg = unf_receive_bls_pkg,
- .unf_send_els_done = unf_send_els_done,
- .unf_receive_ini_response = unf_ini_scsi_completed,
- .unf_get_cfg_parms = unf_get_cfg_parms,
- .unf_receive_marker_status = unf_recv_tmf_marker_status,
- .unf_receive_abts_marker_status = unf_recv_abts_marker_status,
-
- .unf_process_fcp_cmnd = NULL,
- .unf_tgt_cmnd_xfer_or_rsp_echo = NULL,
- .unf_cm_get_sgl_entry = unf_ini_get_sgl_entry,
- .unf_cm_get_dif_sgl_entry = unf_ini_get_dif_sgl_entry,
- .unf_get_one_free_esgl_page = unf_get_one_free_esgl_page,
- .unf_fc_port_event = unf_fc_port_link_event,
-};
-
-u32 unf_get_cm_handle_ops(struct unf_cm_handle_op *cm_handle)
-{
- FC_CHECK_RETURN_VALUE(cm_handle, UNF_RETURN_ERROR);
-
- memcpy(cm_handle, &unf_cm_handle_ops, sizeof(struct unf_cm_handle_op));
-
- return RETURN_OK;
-}
-
-static void unf_deinit_cm_handle_ops(void)
-{
- memset(&unf_cm_handle_ops, 0, sizeof(struct unf_cm_handle_op));
-}
-
-int unf_event_process(void *worker_ptr)
-{
- struct list_head *event_list = NULL;
- struct unf_cm_event_report *event_node = NULL;
- struct completion *create_done = (struct completion *)worker_ptr;
- ulong flags = 0;
-
- set_user_nice(current, UNF_OS_THRD_PRI_LOW);
- recalc_sigpending();
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[event]Enter event thread");
-
- if (create_done)
- complete(create_done);
-
- do {
- spin_lock_irqsave(&fc_event_list.fc_event_list_lock, flags);
- if (list_empty(&fc_event_list.list_head)) {
- spin_unlock_irqrestore(&fc_event_list.fc_event_list_lock, flags);
-
- set_current_state(TASK_INTERRUPTIBLE);
- schedule_timeout((long)msecs_to_jiffies(UNF_S_TO_MS));
- } else {
- event_list = UNF_OS_LIST_NEXT(&fc_event_list.list_head);
- list_del_init(event_list);
- fc_event_list.list_num--;
- event_node = list_entry(event_list,
- struct unf_cm_event_report,
- list_entry);
- spin_unlock_irqrestore(&fc_event_list.fc_event_list_lock, flags);
-
- /* Process event node */
- unf_handle_event(event_node);
- }
- } while (!kthread_should_stop());
-
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
- "[event]Event thread exit");
-
- return RETURN_OK;
-}
-
-static int unf_creat_event_center(void)
-{
- struct completion create_done;
-
- init_completion(&create_done);
- INIT_LIST_HEAD(&fc_event_list.list_head);
- fc_event_list.list_num = 0;
- spin_lock_init(&fc_event_list.fc_event_list_lock);
-
- event_task_thread = kthread_run(unf_event_process, &create_done, "spfc_event");
- if (IS_ERR(event_task_thread)) {
- complete_and_exit(&create_done, 0);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Create event thread failed(0x%p)",
- event_task_thread);
-
- return UNF_RETURN_ERROR;
- }
- wait_for_completion(&create_done);
- return RETURN_OK;
-}
-
-static void unf_cm_event_thread_exit(void)
-{
- if (event_task_thread)
- kthread_stop(event_task_thread);
-}
-
-static void unf_init_card_mgr_list(void)
-{
- /* So far, do not care */
- INIT_LIST_HEAD(&card_thread_mgr.card_list_head);
-
- spin_lock_init(&card_thread_mgr.global_card_list_lock);
-
- card_thread_mgr.card_num = 0;
-}
-
-int unf_port_feature_pool_init(void)
-{
- u32 index = 0;
- u32 rport_feature_pool_size = 0;
- struct unf_rport_feature_recard *rport_feature = NULL;
- unsigned long flags = 0;
-
- rport_feature_pool_size = sizeof(struct unf_rport_feature_pool);
- port_feature_pool = vmalloc(rport_feature_pool_size);
- if (!port_feature_pool) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]cannot allocate rport feature pool");
-
- return UNF_RETURN_ERROR;
- }
- memset(port_feature_pool, 0, rport_feature_pool_size);
- spin_lock_init(&port_feature_pool->port_fea_pool_lock);
- INIT_LIST_HEAD(&port_feature_pool->list_busy_head);
- INIT_LIST_HEAD(&port_feature_pool->list_free_head);
-
- port_feature_pool->port_feature_pool_addr =
- vmalloc((size_t)(RPORT_FEATURE_POOL_SIZE * sizeof(struct unf_rport_feature_recard)));
- if (!port_feature_pool->port_feature_pool_addr) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]cannot allocate rport feature pool address");
-
- vfree(port_feature_pool);
- port_feature_pool = NULL;
-
- return UNF_RETURN_ERROR;
- }
-
- memset(port_feature_pool->port_feature_pool_addr, 0,
- RPORT_FEATURE_POOL_SIZE * sizeof(struct unf_rport_feature_recard));
- rport_feature = (struct unf_rport_feature_recard *)
- port_feature_pool->port_feature_pool_addr;
-
- spin_lock_irqsave(&port_feature_pool->port_fea_pool_lock, flags);
- for (index = 0; index < RPORT_FEATURE_POOL_SIZE; index++) {
- list_add_tail(&rport_feature->entry_feature, &port_feature_pool->list_free_head);
- rport_feature++;
- }
- spin_unlock_irqrestore(&port_feature_pool->port_fea_pool_lock, flags);
-
- return RETURN_OK;
-}
-
-void unf_free_port_feature_pool(void)
-{
- if (port_feature_pool->port_feature_pool_addr) {
- vfree(port_feature_pool->port_feature_pool_addr);
- port_feature_pool->port_feature_pool_addr = NULL;
- }
-
- vfree(port_feature_pool);
- port_feature_pool = NULL;
-}
-
-int unf_common_init(void)
-{
- int ret = RETURN_OK;
-
- unf_dgb_level = UNF_MAJOR;
- log_print_level = UNF_KEVENT;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
- "UNF Driver Version:%s.", SPFC_DRV_VERSION);
-
- atomic_set(&fc_mem_ref, 0);
- ret = unf_port_feature_pool_init();
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port Feature Pool init failed");
- return ret;
- }
-
- ret = (int)unf_register_ini_transport();
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]INI interface init failed");
- goto REG_INITRANSPORT_FAIL;
- }
-
- unf_port_mgmt_init();
- unf_init_card_mgr_list();
- ret = (int)unf_init_global_event_msg();
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Create global event center failed");
- goto CREAT_GLBEVENTMSG_FAIL;
- }
-
- ret = (int)unf_creat_event_center();
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Create event center (thread) failed");
- goto CREAT_EVENTCENTER_FAIL;
- }
-
- unf_wq = create_workqueue("unf_wq");
- if (!unf_wq) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Create work queue failed");
- goto CREAT_WORKQUEUE_FAIL;
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Init common layer succeed");
- return ret;
-CREAT_WORKQUEUE_FAIL:
- unf_cm_event_thread_exit();
-CREAT_EVENTCENTER_FAIL:
- unf_destroy_global_event_msg();
-CREAT_GLBEVENTMSG_FAIL:
- unf_unregister_ini_transport();
-REG_INITRANSPORT_FAIL:
- unf_free_port_feature_pool();
- return UNF_RETURN_ERROR;
-}
-
-static void unf_destroy_dirty_port(void)
-{
- u32 ditry_port_num = 0;
-
- unf_show_dirty_port(false, &ditry_port_num);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Sys has %u dirty L_Port(s)", ditry_port_num);
-}
-
-void unf_common_exit(void)
-{
- unf_free_port_feature_pool();
-
- unf_destroy_dirty_port();
-
- flush_workqueue(unf_wq);
- destroy_workqueue(unf_wq);
- unf_wq = NULL;
-
- unf_cm_event_thread_exit();
-
- unf_destroy_global_event_msg();
-
- unf_deinit_cm_handle_ops();
-
- unf_port_mgmt_deinit();
-
- unf_unregister_ini_transport();
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
- "[info]SPFC module remove succeed, memory reference count is %d",
- atomic_read(&fc_mem_ref));
-}
diff --git a/drivers/scsi/spfc/common/unf_io.c b/drivers/scsi/spfc/common/unf_io.c
deleted file mode 100644
index 5de69f8ddc6d..000000000000
--- a/drivers/scsi/spfc/common/unf_io.c
+++ /dev/null
@@ -1,1219 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "unf_io.h"
-#include "unf_log.h"
-#include "unf_portman.h"
-#include "unf_service.h"
-#include "unf_io_abnormal.h"
-
-u32 sector_size_flag;
-
-#define UNF_GET_FCP_CTL(pkg) ((((pkg)->status) >> UNF_SHIFT_8) & 0xFF)
-#define UNF_GET_SCSI_STATUS(pkg) (((pkg)->status) & 0xFF)
-
-static u32 unf_io_success_handler(struct unf_xchg *xchg,
- struct unf_frame_pkg *pkg, u32 up_status);
-static u32 unf_ini_error_default_handler(struct unf_xchg *xchg,
- struct unf_frame_pkg *pkg,
- u32 up_status);
-static u32 unf_io_underflow_handler(struct unf_xchg *xchg,
- struct unf_frame_pkg *pkg, u32 up_status);
-static u32 unf_ini_dif_error_handler(struct unf_xchg *xchg,
- struct unf_frame_pkg *pkg, u32 up_status);
-
-struct unf_ini_error_handler_s {
- u32 ini_error_code;
- u32 (*unf_ini_error_handler)(struct unf_xchg *xchg,
- struct unf_frame_pkg *pkg, u32 up_status);
-};
-
-struct unf_ini_error_handler_s ini_error_handler_table[] = {
- {UNF_IO_SUCCESS, unf_io_success_handler},
- {UNF_IO_ABORTED, unf_ini_error_default_handler},
- {UNF_IO_FAILED, unf_ini_error_default_handler},
- {UNF_IO_ABORT_ABTS, unf_ini_error_default_handler},
- {UNF_IO_ABORT_LOGIN, unf_ini_error_default_handler},
- {UNF_IO_ABORT_REET, unf_ini_error_default_handler},
- {UNF_IO_ABORT_FAILED, unf_ini_error_default_handler},
- {UNF_IO_OUTOF_ORDER, unf_ini_error_default_handler},
- {UNF_IO_FTO, unf_ini_error_default_handler},
- {UNF_IO_LINK_FAILURE, unf_ini_error_default_handler},
- {UNF_IO_OVER_FLOW, unf_ini_error_default_handler},
- {UNF_IO_RSP_OVER, unf_ini_error_default_handler},
- {UNF_IO_LOST_FRAME, unf_ini_error_default_handler},
- {UNF_IO_UNDER_FLOW, unf_io_underflow_handler},
- {UNF_IO_HOST_PROG_ERROR, unf_ini_error_default_handler},
- {UNF_IO_SEST_PROG_ERROR, unf_ini_error_default_handler},
- {UNF_IO_INVALID_ENTRY, unf_ini_error_default_handler},
- {UNF_IO_ABORT_SEQ_NOT, unf_ini_error_default_handler},
- {UNF_IO_REJECT, unf_ini_error_default_handler},
- {UNF_IO_EDC_IN_ERROR, unf_ini_error_default_handler},
- {UNF_IO_EDC_OUT_ERROR, unf_ini_error_default_handler},
- {UNF_IO_UNINIT_KEK_ERR, unf_ini_error_default_handler},
- {UNF_IO_DEK_OUTOF_RANGE, unf_ini_error_default_handler},
- {UNF_IO_KEY_UNWRAP_ERR, unf_ini_error_default_handler},
- {UNF_IO_KEY_TAG_ERR, unf_ini_error_default_handler},
- {UNF_IO_KEY_ECC_ERR, unf_ini_error_default_handler},
- {UNF_IO_BLOCK_SIZE_ERROR, unf_ini_error_default_handler},
- {UNF_IO_ILLEGAL_CIPHER_MODE, unf_ini_error_default_handler},
- {UNF_IO_CLEAN_UP, unf_ini_error_default_handler},
- {UNF_IO_ABORTED_BY_TARGET, unf_ini_error_default_handler},
- {UNF_IO_TRANSPORT_ERROR, unf_ini_error_default_handler},
- {UNF_IO_LINK_FLASH, unf_ini_error_default_handler},
- {UNF_IO_TIMEOUT, unf_ini_error_default_handler},
- {UNF_IO_DMA_ERROR, unf_ini_error_default_handler},
- {UNF_IO_DIF_ERROR, unf_ini_dif_error_handler},
- {UNF_IO_INCOMPLETE, unf_ini_error_default_handler},
- {UNF_IO_DIF_REF_ERROR, unf_ini_dif_error_handler},
- {UNF_IO_DIF_GEN_ERROR, unf_ini_dif_error_handler},
- {UNF_IO_NO_XCHG, unf_ini_error_default_handler}
- };
-
-void unf_done_ini_xchg(struct unf_xchg *xchg)
-{
- /*
- * About I/O Done
- * 1. normal case
- * 2. Send ABTS & RCVD RSP
- * 3. Send ABTS & timer timeout
- */
- struct unf_scsi_cmnd scsi_cmd = {0};
- ulong flags = 0;
- struct unf_scsi_cmd_info *scsi_cmnd_info = NULL;
- struct unf_rport_scsi_id_image *scsi_image_table = NULL;
- u32 scsi_id = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- if (unlikely(!xchg->scsi_cmnd_info.scsi_cmnd))
- return;
-
- /* 1. Free RX_ID for INI SIRT: Do not care */
-
- /*
- * 2. set & check exchange state
- * *
- * for Set UP_ABORT Tag:
- * 1) L_Port destroy
- * 2) LUN reset
- * 3) Target/Session reset
- * 4) SCSI send Abort(ABTS)
- */
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- xchg->io_state |= INI_IO_STATE_DONE;
- if (unlikely(xchg->io_state &
- (INI_IO_STATE_UPABORT | INI_IO_STATE_UPSEND_ERR | INI_IO_STATE_TMF_ABORT))) {
- /*
- * a. UPABORT: scsi have send ABTS
- * --->>> do not call SCSI_Done, return directly
-	 * b. UPSEND_ERR: error happened during LLDD sending SCSI_CMD
- * --->>> do not call SCSI_Done, scsi need retry
- */
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_KEVENT,
- "[event]Exchange(0x%p) Cmdsn:0x%lx upCmd:%p hottag(0x%x) with state(0x%x) has been aborted or send error",
- xchg, (ulong)xchg->cmnd_sn, xchg->scsi_cmnd_info.scsi_cmnd,
- xchg->hotpooltag, xchg->io_state);
-
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- return;
- }
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- scsi_cmnd_info = &xchg->scsi_cmnd_info;
-
- /*
- * 3. Set:
- * scsi_cmnd;
- * cmnd_done_func;
- * cmnd up_level_done;
- * sense_buff_addr;
- * resid_length;
- * cmnd_result;
- * dif_info
- * **
- * UNF_SCSI_CMND <<-- UNF_SCSI_CMND_INFO
- */
- UNF_SET_HOST_CMND((&scsi_cmd), scsi_cmnd_info->scsi_cmnd);
- UNF_SER_CMND_DONE_FUNC((&scsi_cmd), scsi_cmnd_info->done);
- UNF_SET_UP_LEVEL_CMND_DONE_FUNC(&scsi_cmd, scsi_cmnd_info->uplevel_done);
- scsi_cmd.drv_private = xchg->lport;
- if (unlikely((UNF_SCSI_STATUS(xchg->scsi_cmnd_info.result)) & FCP_SNS_LEN_VALID_MASK)) {
- unf_save_sense_data(scsi_cmd.upper_cmnd,
- (char *)xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu,
- (int)xchg->fcp_sfs_union.fcp_rsp_entry.fcp_sense_len);
- }
- UNF_SET_RESID((&scsi_cmd), (u32)xchg->resid_len);
- UNF_SET_CMND_RESULT((&scsi_cmd), scsi_cmnd_info->result);
- memcpy(&scsi_cmd.dif_info, &xchg->dif_info, sizeof(struct dif_info));
-
- scsi_id = scsi_cmnd_info->scsi_id;
-
- UNF_DONE_SCSI_CMND((&scsi_cmd));
-
- /* 4. Update IO result CNT */
- if (likely(xchg->lport)) {
- scsi_image_table = &xchg->lport->rport_scsi_table;
- UNF_IO_RESULT_CNT(scsi_image_table, scsi_id,
- (scsi_cmnd_info->result >> UNF_SHIFT_16));
- }
-}
-
-static inline u32 unf_ini_get_sgl_entry_buf(ini_get_sgl_entry_buf ini_get_sgl,
- void *cmnd, void *driver_sgl,
- void **upper_sgl, u32 *req_index,
- u32 *index, char **buf,
- u32 *buf_len)
-{
- if (unlikely(!ini_get_sgl)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "Command(0x%p) Get sgl Entry func Null.", cmnd);
-
- return UNF_RETURN_ERROR;
- }
-
- return ini_get_sgl(cmnd, driver_sgl, upper_sgl, req_index, index, buf, buf_len);
-}
-
-u32 unf_ini_get_sgl_entry(void *pkg, char **buf, u32 *buf_len)
-{
- struct unf_frame_pkg *unf_pkg = (struct unf_frame_pkg *)pkg;
- struct unf_xchg *unf_xchg = NULL;
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(buf, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(buf_len, UNF_RETURN_ERROR);
-
- unf_xchg = (struct unf_xchg *)unf_pkg->xchg_contex;
- FC_CHECK_RETURN_VALUE(unf_xchg, UNF_RETURN_ERROR);
-
- /* Get SGL Entry buffer for INI Mode */
- ret = unf_ini_get_sgl_entry_buf(unf_xchg->scsi_cmnd_info.unf_get_sgl_entry_buf,
- unf_xchg->scsi_cmnd_info.scsi_cmnd, NULL,
- &unf_xchg->req_sgl_info.sgl,
- &unf_xchg->scsi_cmnd_info.port_id,
- &((unf_xchg->req_sgl_info).entry_index), buf, buf_len);
-
- return ret;
-}
-
-u32 unf_ini_get_dif_sgl_entry(void *pkg, char **buf, u32 *buf_len)
-{
- struct unf_frame_pkg *unf_pkg = (struct unf_frame_pkg *)pkg;
- struct unf_xchg *unf_xchg = NULL;
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(buf, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(buf_len, UNF_RETURN_ERROR);
-
- unf_xchg = (struct unf_xchg *)unf_pkg->xchg_contex;
- FC_CHECK_RETURN_VALUE(unf_xchg, UNF_RETURN_ERROR);
-
- /* Get SGL Entry buffer for INI Mode */
- ret = unf_ini_get_sgl_entry_buf(unf_xchg->scsi_cmnd_info.unf_get_sgl_entry_buf,
- unf_xchg->scsi_cmnd_info.scsi_cmnd, NULL,
- &unf_xchg->dif_sgl_info.sgl,
- &unf_xchg->scsi_cmnd_info.port_id,
- &((unf_xchg->dif_sgl_info).entry_index), buf, buf_len);
- return ret;
-}
-
-u32 unf_get_up_level_cmnd_errcode(struct unf_ini_error_code *err_table,
- u32 err_table_count, u32 drv_err_code)
-{
- u32 loop = 0;
-
-	/* on failure return UNF_RETURN_ERROR, adjusted by the upper level */
- if (unlikely(!err_table)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "Error Code Table is Null, Error Code(0x%x).", drv_err_code);
-
- return (u32)UNF_SCSI_HOST(DID_ERROR);
- }
-
- for (loop = 0; loop < err_table_count; loop++) {
- if (err_table[loop].drv_errcode == drv_err_code)
- return err_table[loop].ap_errcode;
- }
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Unsupported Ap Error code by Error Code(0x%x).", drv_err_code);
-
- return (u32)UNF_SCSI_HOST(DID_ERROR);
-}
-
-static u32 unf_ini_status_handle(struct unf_xchg *xchg,
- struct unf_frame_pkg *pkg)
-{
- u32 loop = 0;
- u32 ret = UNF_RETURN_ERROR;
- u32 up_status = 0;
-
- for (loop = 0; loop < sizeof(ini_error_handler_table) /
- sizeof(struct unf_ini_error_handler_s); loop++) {
- if (UNF_GET_LL_ERR(pkg) == ini_error_handler_table[loop].ini_error_code) {
- up_status =
- unf_get_up_level_cmnd_errcode(xchg->scsi_cmnd_info.err_code_table,
- xchg->scsi_cmnd_info.err_code_table_cout,
- UNF_GET_LL_ERR(pkg));
-
- if (ini_error_handler_table[loop].unf_ini_error_handler) {
- ret = ini_error_handler_table[loop]
- .unf_ini_error_handler(xchg, pkg, up_status);
- } else {
- /* set exchange->result ---to--->>>scsi_result */
- ret = unf_ini_error_default_handler(xchg, pkg, up_status);
- }
-
- return ret;
- }
- }
-
- up_status = unf_get_up_level_cmnd_errcode(xchg->scsi_cmnd_info.err_code_table,
- xchg->scsi_cmnd_info.err_code_table_cout,
- UNF_IO_SOFT_ERR);
-
- ret = unf_ini_error_default_handler(xchg, pkg, up_status);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Can not find com status, SID(0x%x) exchange(0x%p) com_status(0x%x) DID(0x%x) hot_pool_tag(0x%x)",
- xchg->sid, xchg, pkg->status, xchg->did, xchg->hotpooltag);
-
- return ret;
-}
-
-static void unf_analysis_response_info(struct unf_xchg *xchg,
- struct unf_frame_pkg *pkg,
- u32 *up_status)
-{
- u8 *resp_buf = NULL;
-
-	/* LL_Driver uses Little Endian, and copies RSP_INFO to COM_Driver */
- if (unlikely(pkg->unf_rsp_pload_bl.length > UNF_RESPONE_DATA_LEN)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Receive FCP response resp buffer len is invalid 0x%x",
- pkg->unf_rsp_pload_bl.length);
- return;
- }
-
- resp_buf = (u8 *)pkg->unf_rsp_pload_bl.buffer_ptr;
- if (resp_buf) {
-		/* If the chip uses Little Endian, then change it to Big Endian */
- if ((pkg->byte_orders & UNF_BIT_3) == 0)
- unf_cpu_to_big_end(resp_buf, pkg->unf_rsp_pload_bl.length);
-
-		/* Chip DMA data with Big Endian */
- if (resp_buf[ARRAY_INDEX_3] != UNF_FCP_TM_RSP_COMPLETE) {
- *up_status = UNF_SCSI_HOST(DID_BUS_BUSY);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%p) DID bus busy, scsi_status(0x%x)",
- xchg->lport, UNF_GET_SCSI_STATUS(pkg));
- }
- } else {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Receive FCP response, resp buffer is NULL resp buffer len is 0x%x",
- pkg->unf_rsp_pload_bl.length);
- }
-}
-
-static void unf_analysis_sense_info(struct unf_xchg *xchg,
- struct unf_frame_pkg *pkg, u32 *up_status)
-{
- u32 length = 0;
-
- /* 4 bytes Align */
- length = MIN(SCSI_SENSE_DATA_LEN, pkg->unf_sense_pload_bl.length);
-
- if (unlikely(pkg->unf_sense_pload_bl.length > SCSI_SENSE_DATA_LEN)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[info]Receive FCP response resp buffer len is 0x%x",
- pkg->unf_sense_pload_bl.length);
- }
-	/*
-	 * If sense info is present then copy it directly;
-	 * otherwise, the chip has already DMA'd the data to the sense buffer
-	 */
-
- if (length != 0 && pkg->unf_rsp_pload_bl.buffer_ptr) {
- /* has been dma to exchange buffer */
- if (unlikely(pkg->unf_rsp_pload_bl.length > UNF_RESPONE_DATA_LEN)) {
- *up_status = UNF_SCSI_HOST(DID_ERROR);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Receive FCP response resp buffer len is invalid 0x%x",
- pkg->unf_rsp_pload_bl.length);
-
- return;
- }
-
- xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu = (u8 *)kmalloc(length, GFP_ATOMIC);
- if (!xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Alloc FCP sense buffer failed");
- return;
- }
-
- memcpy(xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu,
- ((u8 *)(pkg->unf_rsp_pload_bl.buffer_ptr)) +
- pkg->unf_rsp_pload_bl.length, length);
-
- xchg->fcp_sfs_union.fcp_rsp_entry.fcp_sense_len = length;
- } else {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Receive FCP response, sense buffer is NULL sense buffer len is 0x%x",
- length);
- }
-}
-
-static u32 unf_io_success_handler(struct unf_xchg *xchg,
- struct unf_frame_pkg *pkg, u32 up_status)
-{
- u8 scsi_status = 0;
- u8 control = 0;
- u32 status = up_status;
-
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
-
- control = UNF_GET_FCP_CTL(pkg);
- scsi_status = UNF_GET_SCSI_STATUS(pkg);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "[info]Port(0x%p), Exchange(0x%p) Completed, Control(0x%x), Scsi Status(0x%x)",
- xchg->lport, xchg, control, scsi_status);
-
- if (control & FCP_SNS_LEN_VALID_MASK) {
- /* has sense info */
- if (scsi_status == FCP_SCSI_STATUS_GOOD)
- scsi_status = SCSI_CHECK_CONDITION;
-
- unf_analysis_sense_info(xchg, pkg, &status);
- } else {
- /*
- * When the FCP_RSP_LEN_VALID bit is set to one,
- * the content of the SCSI STATUS CODE field is not reliable
- * and shall be ignored by the application client.
- */
- if (control & FCP_RSP_LEN_VALID_MASK)
- unf_analysis_response_info(xchg, pkg, &status);
- }
-
- xchg->scsi_cmnd_info.result = status | UNF_SCSI_STATUS(scsi_status);
-
- return RETURN_OK;
-}
-
-static u32 unf_ini_error_default_handler(struct unf_xchg *xchg,
- struct unf_frame_pkg *pkg,
- u32 up_status)
-{
- /* set exchange->result ---to--->>> scsi_cmnd->result */
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
-
- FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_WARN,
- "[warn]SID(0x%x) exchange(0x%p) com_status(0x%x) up_status(0x%x) DID(0x%x) hot_pool_tag(0x%x) response_len(0x%x)",
- xchg->sid, xchg, pkg->status, up_status, xchg->did,
- xchg->hotpooltag, pkg->residus_len);
-
- xchg->scsi_cmnd_info.result =
- up_status | UNF_SCSI_STATUS(UNF_GET_SCSI_STATUS(pkg));
-
- return RETURN_OK;
-}
-
-static u32 unf_ini_dif_error_handler(struct unf_xchg *xchg,
- struct unf_frame_pkg *pkg, u32 up_status)
-{
- u8 *sense_data = NULL;
- u16 sense_code = 0;
-
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
-
-	/*
-	 * According to the DIF scheme,
-	 * the driver sets check condition (0x2) when a DIF error occurs,
-	 * and returns the values based on the upper-layer verification result.
-	 * Check sequence: CRC, LBA, App;
-	 * if a CRC error is found, the subsequent checks are not performed
-	 */
- xchg->scsi_cmnd_info.result = UNF_SCSI_STATUS(SCSI_CHECK_CONDITION);
-
- sense_code = (u16)pkg->status_sub_code;
- sense_data = (u8 *)kmalloc(SCSI_SENSE_DATA_LEN, GFP_ATOMIC);
- if (!sense_data) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Alloc FCP sense buffer failed");
-
- return UNF_RETURN_ERROR;
- }
- memset(sense_data, 0, SCSI_SENSE_DATA_LEN);
- sense_data[ARRAY_INDEX_0] = SENSE_DATA_RESPONSE_CODE; /* response code:0x70 */
- sense_data[ARRAY_INDEX_2] = ILLEGAL_REQUEST; /* sense key:0x05; */
- sense_data[ARRAY_INDEX_7] = ADDITINONAL_SENSE_LEN; /* additional sense length:0x7 */
- sense_data[ARRAY_INDEX_12] = (u8)(sense_code >> UNF_SHIFT_8);
- sense_data[ARRAY_INDEX_13] = (u8)sense_code;
-
- xchg->fcp_sfs_union.fcp_rsp_entry.fcp_rsp_iu = sense_data;
- xchg->fcp_sfs_union.fcp_rsp_entry.fcp_sense_len = SCSI_SENSE_DATA_LEN;
-
- /* valid sense data length snscode[13] */
- return RETURN_OK;
-}
-
-static u32 unf_io_underflow_handler(struct unf_xchg *xchg,
- struct unf_frame_pkg *pkg, u32 up_status)
-{
- /* under flow: residlen > 0 */
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
-
- if (xchg->fcp_cmnd.cdb[ARRAY_INDEX_0] != SCSIOPC_REPORT_LUN &&
- xchg->fcp_cmnd.cdb[ARRAY_INDEX_0] != SCSIOPC_INQUIRY) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "[info]IO under flow: SID(0x%x) exchange(0x%p) com status(0x%x) up_status(0x%x) DID(0x%x) hot_pool_tag(0x%x) response SID(0x%x)",
- xchg->sid, xchg, pkg->status, up_status,
- xchg->did, xchg->hotpooltag, pkg->residus_len);
- }
-
- xchg->resid_len = (int)pkg->residus_len;
- (void)unf_io_success_handler(xchg, pkg, up_status);
-
- return RETURN_OK;
-}
-
-void unf_complete_cmnd(struct unf_scsi_cmnd *scsi_cmnd, u32 result_size)
-{
- /*
- * Exception during process Que_CMND
- * 1. L_Port == NULL;
- * 2. L_Port == removing;
- * 3. R_Port == NULL;
- * 4. Xchg == NULL.
- */
- FC_CHECK_RETURN_VOID((UNF_GET_CMND_DONE_FUNC(scsi_cmnd)));
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "[info]Command(0x%p), Result(0x%x).", scsi_cmnd, result_size);
-
- UNF_SET_CMND_RESULT(scsi_cmnd, result_size);
-
- UNF_DONE_SCSI_CMND(scsi_cmnd);
-}
-
-static inline void unf_bind_xchg_scsi_cmd(struct unf_xchg *xchg,
- struct unf_scsi_cmnd *scsi_cmnd)
-{
- struct unf_scsi_cmd_info *scsi_cmnd_info = NULL;
-
- scsi_cmnd_info = &xchg->scsi_cmnd_info;
-
- /* UNF_SCSI_CMND_INFO <<-- UNF_SCSI_CMND */
- scsi_cmnd_info->err_code_table = UNF_GET_ERR_CODE_TABLE(scsi_cmnd);
- scsi_cmnd_info->err_code_table_cout = UNF_GET_ERR_CODE_TABLE_COUNT(scsi_cmnd);
- scsi_cmnd_info->done = UNF_GET_CMND_DONE_FUNC(scsi_cmnd);
- scsi_cmnd_info->scsi_cmnd = UNF_GET_HOST_CMND(scsi_cmnd);
- scsi_cmnd_info->sense_buf = (char *)UNF_GET_SENSE_BUF_ADDR(scsi_cmnd);
- scsi_cmnd_info->uplevel_done = UNF_GET_UP_LEVEL_CMND_DONE(scsi_cmnd);
- scsi_cmnd_info->unf_get_sgl_entry_buf = UNF_GET_SGL_ENTRY_BUF_FUNC(scsi_cmnd);
- scsi_cmnd_info->sgl = UNF_GET_CMND_SGL(scsi_cmnd);
- scsi_cmnd_info->time_out = scsi_cmnd->time_out;
- scsi_cmnd_info->entry_cnt = scsi_cmnd->entry_count;
- scsi_cmnd_info->port_id = (u32)scsi_cmnd->port_id;
- scsi_cmnd_info->scsi_id = UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd);
-}
-
-u32 unf_ini_scsi_completed(void *lport, struct unf_frame_pkg *pkg)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_xchg *unf_xchg = NULL;
- struct unf_fcp_cmnd *fcp_cmnd = NULL;
- u32 control = 0;
- u16 xchg_tag = 0x0ffff;
- u32 ret = UNF_RETURN_ERROR;
- ulong xchg_flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
-
- unf_lport = (struct unf_lport *)lport;
- xchg_tag = (u16)pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX];
-
- /* 1. Find Exchange Context */
- unf_xchg = unf_cm_lookup_xchg_by_tag(lport, (u16)xchg_tag);
- if (unlikely(!unf_xchg)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) can not find exchange by tag(0x%x)",
- unf_lport->port_id, unf_lport->nport_id, xchg_tag);
-
- /* NOTE: return directly */
- return UNF_RETURN_ERROR;
- }
-
- /* 2. Consistency check */
- UNF_CHECK_ALLOCTIME_VALID(unf_lport, xchg_tag, unf_xchg,
- pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME],
- unf_xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME]);
-
- /* 3. Increase ref_cnt for exchange protecting */
- ret = unf_xchg_ref_inc(unf_xchg, INI_RESPONSE_DONE); /* hold */
- FC_CHECK_RETURN_VALUE((ret == RETURN_OK), UNF_RETURN_ERROR);
-
- fcp_cmnd = &unf_xchg->fcp_cmnd;
- control = fcp_cmnd->control;
- control = UNF_GET_TASK_MGMT_FLAGS(control);
-
- /* 4. Cancel timer if necessary */
- if (unf_xchg->scsi_cmnd_info.time_out != 0)
- unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer(unf_xchg);
-
- /* 5. process scsi TMF if necessary */
- if (control != 0) {
- unf_process_scsi_mgmt_result(pkg, unf_xchg);
- unf_xchg_ref_dec(unf_xchg, INI_RESPONSE_DONE); /* cancel hold */
-
- /* NOTE: return directly */
- return RETURN_OK;
- }
-
- /* 6. Xchg Abort state check */
- spin_lock_irqsave(&unf_xchg->xchg_state_lock, xchg_flag);
- unf_xchg->oxid = UNF_GET_OXID(pkg);
- unf_xchg->rxid = UNF_GET_RXID(pkg);
- if (INI_IO_STATE_UPABORT & unf_xchg->io_state) {
- spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, xchg_flag);
-
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
- "[warn]Port(0x%x) find exchange(%p) state(0x%x) has been aborted",
- unf_lport->port_id, unf_xchg, unf_xchg->io_state);
-
- /* NOTE: release exchange during SCSI ABORT(ABTS) */
- unf_xchg_ref_dec(unf_xchg, INI_RESPONSE_DONE); /* cancel hold */
-
- return ret;
- }
- spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, xchg_flag);
-
- /*
- * 7. INI SCSI CMND Status process
- * set exchange->result ---to--->>> scsi_result
- */
- ret = unf_ini_status_handle(unf_xchg, pkg);
-
-	/* 8. release exchange if necessary */
- unf_cm_free_xchg(unf_lport, unf_xchg);
-
- /* 9. dec exch ref_cnt */
- unf_xchg_ref_dec(unf_xchg, INI_RESPONSE_DONE); /* cancel hold: release resource now */
-
- return ret;
-}
-
-u32 unf_hardware_start_io(struct unf_lport *lport, struct unf_frame_pkg *pkg)
-{
- if (unlikely(!lport->low_level_func.service_op.unf_cmnd_send)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%x) low level send scsi function is NULL",
- lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- return lport->low_level_func.service_op.unf_cmnd_send(lport->fc_port, pkg);
-}
-
-struct unf_rport *unf_find_rport_by_scsi_id(struct unf_lport *lport,
- struct unf_ini_error_code *err_code_table,
- u32 err_code_table_cout, u32 scsi_id, u32 *scsi_result)
-{
- struct unf_rport_scsi_id_image *scsi_image_table = NULL;
- struct unf_wwpn_rport_info *wwpn_rport_info = NULL;
- struct unf_rport *unf_rport = NULL;
- ulong flags = 0;
-
- /* scsi_table -> session_table ->image_table */
- scsi_image_table = &lport->rport_scsi_table;
-
- /* 1. Scsi_Id validity check */
- if (unlikely(scsi_id >= scsi_image_table->max_scsi_id)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Input scsi_id(0x%x) bigger than max_scsi_id(0x%x).",
- scsi_id, scsi_image_table->max_scsi_id);
-
- *scsi_result = unf_get_up_level_cmnd_errcode(err_code_table, err_code_table_cout,
- UNF_IO_SOFT_ERR); /* did_soft_error */
-
- return NULL;
- }
-
-	/* 2. Get R_Port_Info/R_Port: use Scsi_Id to find it from L_Port's
-	 * Rport_Scsi_Table (image table)
-	 */
- spin_lock_irqsave(&scsi_image_table->scsi_image_table_lock, flags);
- wwpn_rport_info = &scsi_image_table->wwn_rport_info_table[scsi_id];
- unf_rport = wwpn_rport_info->rport;
- spin_unlock_irqrestore(&scsi_image_table->scsi_image_table_lock, flags);
-
- if (unlikely(!unf_rport)) {
- *scsi_result = unf_get_up_level_cmnd_errcode(err_code_table,
- err_code_table_cout,
- UNF_IO_PORT_LOGOUT);
-
- return NULL;
- }
-
- return unf_rport;
-}
-
-static u32 unf_build_xchg_fcpcmnd(struct unf_fcp_cmnd *fcp_cmnd,
- struct unf_scsi_cmnd *scsi_cmnd)
-{
- memcpy(fcp_cmnd->cdb, &UNF_GET_FCP_CMND(scsi_cmnd), scsi_cmnd->cmnd_len);
-
- if ((fcp_cmnd->control == UNF_FCP_WR_DATA &&
- (IS_READ_COMMAND(fcp_cmnd->cdb[ARRAY_INDEX_0]))) ||
- (fcp_cmnd->control == UNF_FCP_RD_DATA &&
- (IS_WRITE_COMMAND(fcp_cmnd->cdb[ARRAY_INDEX_0])))) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MINOR,
- "Scsi command direction inconsistent, CDB[ARRAY_INDEX_0](0x%x), direction(0x%x).",
- fcp_cmnd->cdb[ARRAY_INDEX_0], fcp_cmnd->control);
-
- return UNF_RETURN_ERROR;
- }
-
- memcpy(fcp_cmnd->lun, scsi_cmnd->lun_id, sizeof(fcp_cmnd->lun));
-
- unf_big_end_to_cpu((void *)fcp_cmnd->cdb, sizeof(fcp_cmnd->cdb));
- fcp_cmnd->data_length = UNF_GET_DATA_LEN(scsi_cmnd);
-
- return RETURN_OK;
-}
-
-static void unf_adjust_xchg_len(struct unf_xchg *xchg, u32 scsi_cmnd)
-{
- switch (scsi_cmnd) {
- case SCSIOPC_REQUEST_SENSE: /* requires different buffer */
- xchg->data_len = UNF_SCSI_SENSE_BUFFERSIZE;
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MINOR, "Request Sense new.");
- break;
-
- case SCSIOPC_TEST_UNIT_READY:
- case SCSIOPC_RESERVE:
- case SCSIOPC_RELEASE:
- case SCSIOPC_START_STOP_UNIT:
- xchg->data_len = 0;
- break;
-
- default:
- break;
- }
-}
-
-static void unf_copy_dif_control(struct unf_dif_control_info *dif_control,
- struct unf_scsi_cmnd *scsi_cmnd)
-{
- dif_control->fcp_dl = scsi_cmnd->dif_control.fcp_dl;
- dif_control->protect_opcode = scsi_cmnd->dif_control.protect_opcode;
- dif_control->start_lba = scsi_cmnd->dif_control.start_lba;
- dif_control->app_tag = scsi_cmnd->dif_control.app_tag;
-
- dif_control->flags = scsi_cmnd->dif_control.flags;
- dif_control->dif_sge_count = scsi_cmnd->dif_control.dif_sge_count;
- dif_control->dif_sgl = scsi_cmnd->dif_control.dif_sgl;
-}
-
-static void unf_adjust_dif_pci_transfer_len(struct unf_xchg *xchg, u32 direction)
-{
- struct unf_dif_control_info *dif_control = NULL;
- u32 sector_size = 0;
-
- dif_control = &xchg->dif_control;
-
- if (dif_control->protect_opcode == UNF_DIF_ACTION_NONE)
- return;
- if ((dif_control->flags & UNF_DIF_SECTSIZE_4KB) == 0)
- sector_size = SECTOR_SIZE_512;
- else
- sector_size = SECTOR_SIZE_4096;
- switch (dif_control->protect_opcode & UNF_DIF_ACTION_MASK) {
- case UNF_DIF_ACTION_INSERT:
- if (direction == DMA_TO_DEVICE) {
-			/* write IO, insert: indicates that data with DIF is
-			 * transmitted over the link.
-			 */
- dif_control->fcp_dl = xchg->data_len +
- UNF_CAL_BLOCK_CNT(xchg->data_len, sector_size) * UNF_DIF_AREA_SIZE;
- } else {
-			/* read IO, insert: indicates that DIF is carried
-			 * internally, and the link does not carry DIF.
-			 */
- dif_control->fcp_dl = xchg->data_len;
- }
- break;
-
- case UNF_DIF_ACTION_VERIFY_AND_DELETE:
- if (direction == DMA_TO_DEVICE) {
-			/* write IO, delete: indicates that DIF is carried
-			 * internally, and the link does not carry DIF.
-			 */
- dif_control->fcp_dl = xchg->data_len;
- } else {
-			/* read IO, delete: indicates that data with DIF is
-			 * carried on the link and DIF is not carried
-			 * internally.
-			 */
- dif_control->fcp_dl = xchg->data_len +
- UNF_CAL_BLOCK_CNT(xchg->data_len, sector_size) * UNF_DIF_AREA_SIZE;
- }
- break;
-
- case UNF_DIF_ACTION_VERIFY_AND_FORWARD:
- dif_control->fcp_dl = xchg->data_len +
- UNF_CAL_BLOCK_CNT(xchg->data_len, sector_size) * UNF_DIF_AREA_SIZE;
- break;
-
- default:
- dif_control->fcp_dl = xchg->data_len;
- break;
- }
-
- xchg->fcp_cmnd.data_length = dif_control->fcp_dl;
-}
-
-static void unf_get_dma_direction(struct unf_fcp_cmnd *fcp_cmnd,
- struct unf_scsi_cmnd *scsi_cmnd)
-{
- if (UNF_GET_DATA_DIRECTION(scsi_cmnd) == DMA_TO_DEVICE) {
- fcp_cmnd->control = UNF_FCP_WR_DATA;
- } else if (UNF_GET_DATA_DIRECTION(scsi_cmnd) == DMA_FROM_DEVICE) {
- fcp_cmnd->control = UNF_FCP_RD_DATA;
- } else {
- /* DMA Direction None */
- fcp_cmnd->control = 0;
- }
-}
-
-static int unf_save_scsi_cmnd_to_xchg(struct unf_lport *lport,
- struct unf_rport *rport,
- struct unf_xchg *xchg,
- struct unf_scsi_cmnd *scsi_cmnd)
-{
- struct unf_lport *unf_lport = lport;
- struct unf_rport *unf_rport = rport;
- struct unf_xchg *unf_xchg = xchg;
- u32 result_size = 0;
-
- scsi_cmnd->driver_scribble = (void *)unf_xchg->start_jif;
- unf_xchg->rport = unf_rport;
- unf_xchg->rport_bind_jifs = unf_rport->rport_alloc_jifs;
-
- /* Build Xchg SCSI_CMND info */
- unf_bind_xchg_scsi_cmd(unf_xchg, scsi_cmnd);
-
- unf_xchg->data_len = UNF_GET_DATA_LEN(scsi_cmnd);
- unf_xchg->data_direction = UNF_GET_DATA_DIRECTION(scsi_cmnd);
- unf_xchg->sid = unf_lport->nport_id;
- unf_xchg->did = unf_rport->nport_id;
- unf_xchg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = unf_rport->rport_index;
- unf_xchg->world_id = scsi_cmnd->world_id;
- unf_xchg->cmnd_sn = scsi_cmnd->cmnd_sn;
- unf_xchg->pinitiator = scsi_cmnd->pinitiator;
- unf_xchg->scsi_id = scsi_cmnd->scsi_id;
- if (scsi_cmnd->qos_level == UNF_QOS_LEVEL_DEFAULT)
- unf_xchg->qos_level = unf_rport->qos_level;
- else
- unf_xchg->qos_level = scsi_cmnd->qos_level;
-
- unf_get_dma_direction(&unf_xchg->fcp_cmnd, scsi_cmnd);
- result_size = unf_build_xchg_fcpcmnd(&unf_xchg->fcp_cmnd, scsi_cmnd);
- if (unlikely(result_size != RETURN_OK))
- return UNF_RETURN_ERROR;
-
- unf_adjust_xchg_len(unf_xchg, UNF_GET_FCP_CMND(scsi_cmnd));
-
- /* Dif (control) info */
- unf_copy_dif_control(&unf_xchg->dif_control, scsi_cmnd);
- memcpy(&unf_xchg->dif_info, &scsi_cmnd->dif_info, sizeof(struct dif_info));
- unf_adjust_dif_pci_transfer_len(unf_xchg, UNF_GET_DATA_DIRECTION(scsi_cmnd));
-
- /* single sgl info */
- if (unf_xchg->data_direction != DMA_NONE && UNF_GET_CMND_SGL(scsi_cmnd)) {
- unf_xchg->req_sgl_info.sgl = UNF_GET_CMND_SGL(scsi_cmnd);
- unf_xchg->req_sgl_info.sgl_start = unf_xchg->req_sgl_info.sgl;
- /* Save the sgl header for easy
- * location and printing.
- */
- unf_xchg->req_sgl_info.req_index = 0;
- unf_xchg->req_sgl_info.entry_index = 0;
- }
-
- if (scsi_cmnd->dif_control.dif_sgl) {
- unf_xchg->dif_sgl_info.sgl = UNF_INI_GET_DIF_SGL(scsi_cmnd);
- unf_xchg->dif_sgl_info.entry_index = 0;
- unf_xchg->dif_sgl_info.req_index = 0;
- unf_xchg->dif_sgl_info.sgl_start = unf_xchg->dif_sgl_info.sgl;
- }
-
- return RETURN_OK;
-}
-
-static int unf_send_fcpcmnd(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *xchg)
-{
-#define UNF_MAX_PENDING_IO_CNT 3
- struct unf_scsi_cmd_info *scsi_cmnd_info = NULL;
- struct unf_lport *unf_lport = lport;
- struct unf_rport *unf_rport = rport;
- struct unf_xchg *unf_xchg = xchg;
- struct unf_frame_pkg pkg = {0};
- u32 result_size = 0;
- ulong flags = 0;
-
- memcpy(&pkg.dif_control, &unf_xchg->dif_control, sizeof(struct unf_dif_control_info));
- pkg.dif_control.fcp_dl = unf_xchg->dif_control.fcp_dl;
- pkg.transfer_len = unf_xchg->data_len; /* Pcie data transfer length */
- pkg.xchg_contex = unf_xchg;
- pkg.qos_level = unf_xchg->qos_level;
- scsi_cmnd_info = &xchg->scsi_cmnd_info;
- pkg.entry_count = unf_xchg->scsi_cmnd_info.entry_cnt;
- if (unf_xchg->data_direction == DMA_NONE || !scsi_cmnd_info->sgl)
- pkg.entry_count = 0;
-
- pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
- unf_xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
- pkg.private_data[PKG_PRIVATE_XCHG_VP_INDEX] = unf_lport->vp_index;
- pkg.private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = unf_rport->rport_index;
- pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = unf_xchg->hotpooltag;
-
- unf_select_sq(unf_xchg, &pkg);
- pkg.fcp_cmnd = &unf_xchg->fcp_cmnd;
- pkg.frame_head.csctl_sid = unf_lport->nport_id;
- pkg.frame_head.rctl_did = unf_rport->nport_id;
- pkg.upper_cmd = unf_xchg->scsi_cmnd_info.scsi_cmnd;
-
- /* exch->fcp_rsp_id --->>> pkg->buffer_ptr */
- pkg.frame_head.oxid_rxid = ((u32)unf_xchg->oxid << (u32)UNF_SHIFT_16 | unf_xchg->rxid);
-
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
- "[info]LPort (0x%p), Nport ID(0x%x) RPort ID(0x%x) direction(0x%x) magic number(0x%x) IO to entry count(0x%x) hottag(0x%x)",
- unf_lport, unf_lport->nport_id, unf_rport->nport_id,
- xchg->data_direction, pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME],
- pkg.entry_count, unf_xchg->hotpooltag);
-
- atomic_inc(&unf_rport->pending_io_cnt);
- if (unf_rport->tape_support_needed &&
- (atomic_read(&unf_rport->pending_io_cnt) <= UNF_MAX_PENDING_IO_CNT)) {
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- unf_xchg->io_state |= INI_IO_STATE_REC_TIMEOUT_WAIT;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- scsi_cmnd_info->abort_time_out = scsi_cmnd_info->time_out;
- scsi_cmnd_info->time_out = UNF_REC_TOV;
- }
- /* 3. add INI I/O timer if necessary */
- if (scsi_cmnd_info->time_out != 0) {
-		/* I/O inner timer, not used at this time */
- unf_lport->xchg_mgr_temp.unf_xchg_add_timer(unf_xchg,
- scsi_cmnd_info->time_out, UNF_TIMER_TYPE_REQ_IO);
- }
-
- /* 4. R_Port state check */
- if (unlikely(unf_rport->lport_ini_state != UNF_PORT_STATE_LINKUP ||
- unf_rport->rp_state > UNF_RPORT_ST_READY)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[info]Port(0x%x) RPort(0x%p) NPortId(0x%x) inistate(0x%x): RPort state(0x%x) pUpperCmd(0x%p) is not ready",
- unf_lport->port_id, unf_rport, unf_rport->nport_id,
- unf_rport->lport_ini_state, unf_rport->rp_state, pkg.upper_cmd);
-
- result_size = unf_get_up_level_cmnd_errcode(scsi_cmnd_info->err_code_table,
- scsi_cmnd_info->err_code_table_cout,
- UNF_IO_INCOMPLETE);
- scsi_cmnd_info->result = result_size;
-
- if (scsi_cmnd_info->time_out != 0)
- unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer(unf_xchg);
-
- unf_cm_free_xchg(unf_lport, unf_xchg);
-
- /* DID_IMM_RETRY */
- return RETURN_OK;
- } else if (unf_rport->rp_state < UNF_RPORT_ST_READY) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[info]Port(0x%x) RPort(0x%p) NPortId(0x%x) inistate(0x%x): RPort state(0x%x) pUpperCmd(0x%p) is not ready",
- unf_lport->port_id, unf_rport, unf_rport->nport_id,
- unf_rport->lport_ini_state, unf_rport->rp_state, pkg.upper_cmd);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- unf_xchg->io_state |= INI_IO_STATE_UPSEND_ERR; /* need retry */
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- if (unlikely(scsi_cmnd_info->time_out != 0))
- unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)unf_xchg);
-
- /* Host busy & need scsi retry */
- return UNF_RETURN_ERROR;
- }
-
- /* 5. send scsi_cmnd to FC_LL Driver */
- if (unf_hardware_start_io(unf_lport, &pkg) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port (0x%x) pUpperCmd(0x%p) Hardware Send IO failed.",
- unf_lport->port_id, pkg.upper_cmd);
-
- unf_release_esgls(unf_xchg);
-
- result_size = unf_get_up_level_cmnd_errcode(scsi_cmnd_info->err_code_table,
- scsi_cmnd_info->err_code_table_cout,
- UNF_IO_INCOMPLETE);
- scsi_cmnd_info->result = result_size;
-
- if (scsi_cmnd_info->time_out != 0)
- unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer(unf_xchg);
-
- unf_cm_free_xchg(unf_lport, unf_xchg);
-
- /* SCSI_DONE */
- return RETURN_OK;
- }
-
- return RETURN_OK;
-}
-
-int unf_prefer_to_send_scsi_cmnd(struct unf_xchg *xchg)
-{
- /*
- * About INI_IO_STATE_DRABORT:
- * 1. Set ABORT tag: Clean L_Port/V_Port Link Down I/O
- * with: INI_busy_list, delay_list, delay_transfer_list, wait_list
- * *
- * 2. Set ABORT tag: for target session:
- * with: INI_busy_list, delay_list, delay_transfer_list, wait_list
- * a. R_Port remove
- * b. Send PLOGI_ACC callback
- * c. RCVD PLOGI
- * d. RCVD LOGO
- * *
- * 3. if set ABORT: prevent send scsi_cmnd to target
- */
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- int ret = RETURN_OK;
- ulong flags = 0;
-
- unf_lport = xchg->lport;
-
- unf_rport = xchg->rport;
- if (unlikely(!unf_lport || !unf_rport)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%p) or RPort(0x%p) is NULL", unf_lport, unf_rport);
-
-		/* if this happens (it never should): need retry */
- return UNF_RETURN_ERROR;
- }
-
- /* 1. inc ref_cnt to protect exchange */
- ret = (int)unf_xchg_ref_inc(xchg, INI_SEND_CMND);
- if (unlikely(ret != RETURN_OK)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) exhg(%p) exception ref(%d) ", unf_lport->port_id,
- xchg, atomic_read(&xchg->ref_cnt));
- /* exchange exception, need retry */
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- xchg->io_state |= INI_IO_STATE_UPSEND_ERR;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- /* INI_IO_STATE_UPSEND_ERR: Host busy --->>> need retry */
- return UNF_RETURN_ERROR;
- }
-
- /* 2. Xchg Abort state check: Free EXCH if necessary */
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- if (unlikely((xchg->io_state & INI_IO_STATE_UPABORT) ||
- (xchg->io_state & INI_IO_STATE_DRABORT))) {
- /* Prevent to send: UP_ABORT/DRV_ABORT */
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- xchg->scsi_cmnd_info.result = UNF_SCSI_HOST(DID_IMM_RETRY);
- ret = RETURN_OK;
-
- unf_xchg_ref_dec(xchg, INI_SEND_CMND);
- unf_cm_free_xchg(unf_lport, xchg);
-
- /*
- * Release exchange & return directly:
- * 1. FC LLDD rcvd ABTS before scsi_cmnd: do nothing
- * 2. INI_IO_STATE_UPABORT/INI_IO_STATE_DRABORT: discard this
- * cmnd directly
- */
- return ret;
- }
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- /* 3. Send FCP_CMND to FC_LL Driver */
- ret = unf_send_fcpcmnd(unf_lport, unf_rport, xchg);
- if (unlikely(ret != RETURN_OK)) {
- /* exchange exception, need retry */
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) send exhg(%p) hottag(0x%x) to Rport(%p) NPortID(0x%x) state(0x%x) scsi_id(0x%x) failed",
- unf_lport->port_id, xchg, xchg->hotpooltag, unf_rport,
- unf_rport->nport_id, unf_rport->rp_state, unf_rport->scsi_id);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
-
- xchg->io_state |= INI_IO_STATE_UPSEND_ERR;
- /* need retry */
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- /* INI_IO_STATE_UPSEND_ERR: Host busy --->>> need retry */
- unf_cm_free_xchg(unf_lport, xchg);
- }
-
- /* 4. dec ref_cnt */
- unf_xchg_ref_dec(xchg, INI_SEND_CMND);
-
- return ret;
-}
-
-struct unf_lport *unf_find_lport_by_scsi_cmd(struct unf_scsi_cmnd *scsi_cmnd)
-{
- struct unf_lport *unf_lport = NULL;
-
- /* cmd -->> L_Port */
- unf_lport = (struct unf_lport *)UNF_GET_HOST_PORT_BY_CMND(scsi_cmnd);
- if (unlikely(!unf_lport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Find Port by scsi_cmnd(0x%p) failed", scsi_cmnd);
-
- /* cmnd -->> scsi_host_id -->> L_Port */
- unf_lport = unf_find_lport_by_scsi_hostid(UNF_GET_SCSI_HOST_ID_BY_CMND(scsi_cmnd));
- }
-
- return unf_lport;
-}
-
-int unf_cm_queue_command(struct unf_scsi_cmnd *scsi_cmnd)
-{
- /* SCSI Command --->>> FC FCP Command */
- struct unf_lport *unf_lport = NULL;
- struct unf_xchg *unf_xchg = NULL;
- struct unf_rport *unf_rport = NULL;
- struct unf_rport_scsi_id_image *scsi_image_table = NULL;
- u32 cmnd_result = 0;
- int ret = RETURN_OK;
- ulong flags = 0;
- u32 scsi_id = 0;
- u32 exhg_mgr_type = UNF_XCHG_MGR_TYPE_RANDOM;
-
- /* 1. Get L_Port */
- unf_lport = unf_find_lport_by_scsi_cmd(scsi_cmnd);
-
-	/*
-	 * Covers the hot-plug (insertion/removal) and card-removal
-	 * scenarios: the L_Port is searched for by SCSI_HOST_ID. Slave
-	 * alloc is not invoked while LUNs are unscanned, so the L_Port
-	 * cannot be taken from the command and must be obtained from the
-	 * L_Port linked list.
-	 * *
-	 * After FC link up, the first SCSI command is INQUIRY; before
-	 * INQUIRY, SCSI delivers slave_alloc.
-	 */
- if (!unf_lport) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Find Port by scsi cmd(0x%p) failed", scsi_cmnd);
-
- /* find from ini_error_code_table1 */
- cmnd_result = unf_get_up_level_cmnd_errcode(scsi_cmnd->err_code_table,
- scsi_cmnd->err_code_table_cout,
- UNF_IO_NO_LPORT);
-
- /* DID_NOT_CONNECT & SCSI_DONE & RETURN_OK(0) & I/O error */
- unf_complete_cmnd(scsi_cmnd, cmnd_result);
- return RETURN_OK;
- }
-
- /* Get Local SCSI_Image_table & SCSI_ID */
- scsi_image_table = &unf_lport->rport_scsi_table;
- scsi_id = scsi_cmnd->scsi_id;
-
- /* 2. L_Port State check */
- if (unlikely(unf_lport->port_removing || unf_lport->pcie_link_down)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) is removing(%d) or pcielinkdown(%d) and return with scsi_id(0x%x)",
- unf_lport->port_id, unf_lport->port_removing,
- unf_lport->pcie_link_down, UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd));
-
- cmnd_result = unf_get_up_level_cmnd_errcode(scsi_cmnd->err_code_table,
- scsi_cmnd->err_code_table_cout,
- UNF_IO_NO_LPORT);
-
- UNF_IO_RESULT_CNT(scsi_image_table, scsi_id, (cmnd_result >> UNF_SHIFT_16));
-
- /* DID_NOT_CONNECT & SCSI_DONE & RETURN_OK(0) & I/O error */
- unf_complete_cmnd(scsi_cmnd, cmnd_result);
- return RETURN_OK;
- }
-
- /* 3. Get R_Port */
- unf_rport = unf_find_rport_by_scsi_id(unf_lport, scsi_cmnd->err_code_table,
- scsi_cmnd->err_code_table_cout,
- UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd), &cmnd_result);
- if (unlikely(!unf_rport)) {
- /* never happen: do not care */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) find RPort by scsi_id(0x%x) failed",
- unf_lport->port_id, UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd));
-
- UNF_IO_RESULT_CNT(scsi_image_table, scsi_id, (cmnd_result >> UNF_SHIFT_16));
-
- /* DID_NOT_CONNECT/DID_SOFT_ERROR & SCSI_DONE & RETURN_OK(0) &
- * I/O error
- */
- unf_complete_cmnd(scsi_cmnd, cmnd_result);
- return RETURN_OK;
- }
-
-	/* 4. If no free exchange: return host busy, retried by upper level */
- unf_xchg = (struct unf_xchg *)unf_cm_get_free_xchg(unf_lport,
- exhg_mgr_type << UNF_SHIFT_16 | UNF_XCHG_TYPE_INI);
- if (unlikely(!unf_xchg)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[err]Port(0x%x) get free exchange for INI IO(0x%x) failed",
- unf_lport->port_id, UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd));
-
- /* NOTE: need scsi retry */
- return UNF_RETURN_ERROR;
- }
-
- unf_xchg->scsi_cmnd_info.result = UNF_SCSI_HOST(DID_ERROR);
-
- /* 5. Save the SCSI CMND information in advance. */
- ret = unf_save_scsi_cmnd_to_xchg(unf_lport, unf_rport, unf_xchg, scsi_cmnd);
- if (unlikely(ret != RETURN_OK)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[err]Port(0x%x) save scsi_cmnd info(0x%x) to exchange failed",
- unf_lport->port_id, UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd));
-
- spin_lock_irqsave(&unf_xchg->xchg_state_lock, flags);
- unf_xchg->io_state |= INI_IO_STATE_UPSEND_ERR;
- spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flags);
-
- /* INI_IO_STATE_UPSEND_ERR: Don't Do SCSI_DONE, need retry I/O */
- unf_cm_free_xchg(unf_lport, unf_xchg);
-
- /* NOTE: need scsi retry */
- return UNF_RETURN_ERROR;
- }
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "[info]Get exchange(0x%p) hottag(0x%x) for Pcmd:%p,Cmdsn:0x%lx,WorldId:%d",
- unf_xchg, unf_xchg->hotpooltag, scsi_cmnd->upper_cmnd,
- (ulong)scsi_cmnd->cmnd_sn, scsi_cmnd->world_id);
- /* 6. Send SCSI CMND */
- ret = unf_prefer_to_send_scsi_cmnd(unf_xchg);
-
- return ret;
-}
diff --git a/drivers/scsi/spfc/common/unf_io.h b/drivers/scsi/spfc/common/unf_io.h
deleted file mode 100644
index d8e50eb8035e..000000000000
--- a/drivers/scsi/spfc/common/unf_io.h
+++ /dev/null
@@ -1,96 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_IO_H
-#define UNF_IO_H
-
-#include "unf_type.h"
-#include "unf_scsi_common.h"
-#include "unf_exchg.h"
-#include "unf_rport.h"
-
-#define UNF_MAX_TARGET_NUMBER 2048
-#define UNF_DEFAULT_MAX_LUN 0xFFFF
-#define UNF_MAX_DMA_SEGS 0x400
-#define UNF_MAX_SCSI_CMND_LEN 16
-#define UNF_MAX_BUS_CHANNEL 0
-#define UNF_DMA_BOUNDARY 0xffffffffffffffff
-#define UNF_MAX_CMND_PER_LUN 64 /* LUN max command */
-#define UNF_CHECK_LUN_ID_MATCH(lun_id, raw_lun_id, scsi_id, xchg) \
- (((lun_id) == (raw_lun_id) || (lun_id) == INVALID_VALUE64) && \
- ((scsi_id) == (xchg)->scsi_id))
-
-#define NO_SENSE 0x00
-#define RECOVERED_ERROR 0x01
-#define NOT_READY 0x02
-#define MEDIUM_ERROR 0x03
-#define HARDWARE_ERROR 0x04
-#define ILLEGAL_REQUEST 0x05
-#define UNIT_ATTENTION 0x06
-#define DATA_PROTECT 0x07
-#define BLANK_CHECK 0x08
-#define COPY_ABORTED 0x0a
-#define ABORTED_COMMAND 0x0b
-#define VOLUME_OVERFLOW 0x0d
-#define MISCOMPARE 0x0e
-
-#define SENSE_DATA_RESPONSE_CODE 0x70
-#define ADDITINONAL_SENSE_LEN 0x7
-
-extern u32 sector_size_flag;
-
-#define UNF_GET_SCSI_HOST_ID_BY_CMND(cmd) ((cmd)->scsi_host_id)
-#define UNF_GET_SCSI_ID_BY_CMND(cmd) ((cmd)->scsi_id)
-#define UNF_GET_HOST_PORT_BY_CMND(cmd) ((cmd)->drv_private)
-#define UNF_GET_FCP_CMND(cmd) ((cmd)->pcmnd[ARRAY_INDEX_0])
-#define UNF_GET_DATA_LEN(cmd) ((cmd)->transfer_len)
-#define UNF_GET_DATA_DIRECTION(cmd) ((cmd)->data_direction)
-
-#define UNF_GET_HOST_CMND(cmd) ((cmd)->upper_cmnd)
-#define UNF_GET_CMND_DONE_FUNC(cmd) ((cmd)->done)
-#define UNF_GET_UP_LEVEL_CMND_DONE(cmd) ((cmd)->uplevel_done)
-#define UNF_GET_SGL_ENTRY_BUF_FUNC(cmd) ((cmd)->unf_ini_get_sgl_entry)
-#define UNF_GET_SENSE_BUF_ADDR(cmd) ((cmd)->sense_buf)
-#define UNF_GET_ERR_CODE_TABLE(cmd) ((cmd)->err_code_table)
-#define UNF_GET_ERR_CODE_TABLE_COUNT(cmd) ((cmd)->err_code_table_cout)
-
-#define UNF_SET_HOST_CMND(cmd, host_cmd) ((cmd)->upper_cmnd = (host_cmd))
-#define UNF_SER_CMND_DONE_FUNC(cmd, pfn) ((cmd)->done = (pfn))
-#define UNF_SET_UP_LEVEL_CMND_DONE_FUNC(cmd, pfn) ((cmd)->uplevel_done = (pfn))
-
-#define UNF_SET_RESID(cmd, uiresid) ((cmd)->resid = (uiresid))
-#define UNF_SET_CMND_RESULT(cmd, uiresult) ((cmd)->result = ((int)(uiresult)))
-
-#define UNF_DONE_SCSI_CMND(cmd) ((cmd)->done(cmd))
-
-#define UNF_GET_CMND_SGL(cmd) ((cmd)->sgl)
-#define UNF_INI_GET_DIF_SGL(cmd) ((cmd)->dif_control.dif_sgl)
-
-u32 unf_ini_scsi_completed(void *lport, struct unf_frame_pkg *pkg);
-u32 unf_ini_get_sgl_entry(void *pkg, char **buf, u32 *buf_len);
-u32 unf_ini_get_dif_sgl_entry(void *pkg, char **buf, u32 *buf_len);
-void unf_complete_cmnd(struct unf_scsi_cmnd *scsi_cmnd, u32 result_size);
-void unf_done_ini_xchg(struct unf_xchg *xchg);
-u32 unf_tmf_timeout_recovery_special(void *rport, void *xchg);
-u32 unf_tmf_timeout_recovery_default(void *rport, void *xchg);
-void unf_abts_timeout_recovery_default(void *rport, void *xchg);
-int unf_cm_queue_command(struct unf_scsi_cmnd *scsi_cmnd);
-int unf_cm_eh_abort_handler(struct unf_scsi_cmnd *scsi_cmnd);
-int unf_cm_eh_device_reset_handler(struct unf_scsi_cmnd *scsi_cmnd);
-int unf_cm_target_reset_handler(struct unf_scsi_cmnd *scsi_cmnd);
-int unf_cm_bus_reset_handler(struct unf_scsi_cmnd *scsi_cmnd);
-int unf_cm_virtual_reset_handler(struct unf_scsi_cmnd *scsi_cmnd);
-struct unf_rport *unf_find_rport_by_scsi_id(struct unf_lport *lport,
- struct unf_ini_error_code *errcode_table,
- u32 errcode_table_count,
- u32 scsi_id, u32 *scsi_result);
-u32 UNF_IOExchgDelayProcess(struct unf_lport *lport, struct unf_xchg *xchg);
-struct unf_lport *unf_find_lport_by_scsi_cmd(struct unf_scsi_cmnd *scsi_cmnd);
-int unf_send_scsi_mgmt_cmnd(struct unf_xchg *xchg, struct unf_lport *lport,
- struct unf_rport *rport,
- struct unf_scsi_cmnd *scsi_cmnd,
- enum unf_task_mgmt_cmd task_mgnt_cmd_type);
-void unf_tmf_abnormal_recovery(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *xchg);
-
-#endif
diff --git a/drivers/scsi/spfc/common/unf_io_abnormal.c b/drivers/scsi/spfc/common/unf_io_abnormal.c
deleted file mode 100644
index 4e268ac026ca..000000000000
--- a/drivers/scsi/spfc/common/unf_io_abnormal.c
+++ /dev/null
@@ -1,986 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "unf_io_abnormal.h"
-#include "unf_log.h"
-#include "unf_scsi_common.h"
-#include "unf_rport.h"
-#include "unf_io.h"
-#include "unf_portman.h"
-#include "unf_service.h"
-
-static int unf_send_abts_success(struct unf_lport *lport, struct unf_xchg *xchg,
- struct unf_scsi_cmnd *scsi_cmnd,
- u32 time_out_value)
-{
- bool need_wait_marker = true;
- struct unf_rport_scsi_id_image *scsi_image_table = NULL;
- u32 scsi_id = 0;
- u32 return_value = 0;
- ulong xchg_flag = 0;
-
- spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
- need_wait_marker = (xchg->abts_state & MARKER_STS_RECEIVED) ? false : true;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
-
- if (need_wait_marker) {
- if (down_timeout(&xchg->task_sema, (s64)msecs_to_jiffies(time_out_value))) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) recv abts marker timeout,Exch(0x%p) OX_ID(0x%x 0x%x) RX_ID(0x%x)",
- lport->port_id, xchg, xchg->oxid,
- xchg->hotpooltag, xchg->rxid);
-
- /* Cancel abts rsp timer when sema timeout */
- lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
-
-			/* Cancel the flag INI_IO_STATE_UPABORT and process
-			 * the I/O in TMF
-			 */
- spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
- xchg->io_state &= ~INI_IO_STATE_UPABORT;
- xchg->io_state |= INI_IO_STATE_TMF_ABORT;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
-
- return UNF_SCSI_ABORT_FAIL;
- }
- } else {
- xchg->ucode_abts_state = UNF_IO_SUCCESS;
- }
-
- scsi_image_table = &lport->rport_scsi_table;
- scsi_id = scsi_cmnd->scsi_id;
-
- spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
- if (xchg->ucode_abts_state == UNF_IO_SUCCESS ||
- xchg->scsi_cmnd_info.result == UNF_IO_ABORT_PORT_REMOVING) {
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]Port(0x%x) Send ABTS succeed and recv marker Exch(0x%p) OX_ID(0x%x) RX_ID(0x%x) marker status(0x%x)",
- lport->port_id, xchg, xchg->oxid, xchg->rxid, xchg->ucode_abts_state);
- return_value = DID_RESET;
- UNF_IO_RESULT_CNT(scsi_image_table, scsi_id, return_value);
- unf_complete_cmnd(scsi_cmnd, DID_RESET << UNF_SHIFT_16);
- return UNF_SCSI_ABORT_SUCCESS;
- }
-
- xchg->io_state &= ~INI_IO_STATE_UPABORT;
- xchg->io_state |= INI_IO_STATE_TMF_ABORT;
-
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
-
- /* Cancel abts rsp timer when sema timeout */
- lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Port(0x%x) send ABTS failed. Exch(0x%p) oxid(0x%x) hot_tag(0x%x) ret(0x%x) xchg->io_state (0x%x)",
- lport->port_id, xchg, xchg->oxid, xchg->hotpooltag,
- xchg->scsi_cmnd_info.result, xchg->io_state);
-
- /* return fail and then enter TMF */
- return UNF_SCSI_ABORT_FAIL;
-}
-
-static int unf_ini_abort_cmnd(struct unf_lport *lport, struct unf_xchg *xchg,
- struct unf_scsi_cmnd *scsi_cmnd)
-{
- /*
- * About INI_IO_STATE_UPABORT:
- * *
- * 1. Check: L_Port destroy
- * 2. Check: I/O XCHG timeout
- * 3. Set ABORT: send ABTS
- * 4. Set ABORT: LUN reset
- * 5. Set ABORT: Target reset
- * 6. Check: Prevent to send I/O to target
- * (unf_prefer_to_send_scsi_cmnd)
- * 7. Check: Done INI XCHG --->>> do not call scsi_done, return directly
- * 8. Check: INI SCSI Complete --->>> do not call scsi_done, return
- * directly
- */
-#define UNF_RPORT_NOTREADY_WAIT_SEM_TIMEOUT (2000)
-
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- ulong rport_flag = 0;
- ulong xchg_flag = 0;
- struct unf_rport_scsi_id_image *scsi_image_table = NULL;
- u32 scsi_id = 0;
- u32 time_out_value = (u32)UNF_WAIT_SEM_TIMEOUT;
- u32 return_value = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_SCSI_ABORT_FAIL);
- unf_lport = lport;
-
- /* 1. Xchg State Set: INI_IO_STATE_UPABORT */
- spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
- xchg->io_state |= INI_IO_STATE_UPABORT;
- unf_rport = xchg->rport;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
-
- /* 2. R_Port check */
- if (unlikely(!unf_rport)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) send ABTS but no RPort, OX_ID(0x%x) RX_ID(0x%x)",
- unf_lport->port_id, xchg->oxid, xchg->rxid);
-
- return UNF_SCSI_ABORT_SUCCESS;
- }
-
- spin_lock_irqsave(&unf_rport->rport_state_lock, rport_flag);
- if (unlikely(unf_rport->rp_state != UNF_RPORT_ST_READY)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) find RPort's state(0x%x) is not ready but send ABTS also, exchange(0x%p) tag(0x%x)",
- unf_lport->port_id, unf_rport->rp_state, xchg, xchg->hotpooltag);
-
- /*
-		 * Important: still send the ABTS & update the timer
-		 * Purpose: only used to release the chip (uCode) resource;
-		 * continue
- */
- time_out_value = UNF_RPORT_NOTREADY_WAIT_SEM_TIMEOUT;
- }
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, rport_flag);
-
- /* 3. L_Port State check */
- if (unlikely(unf_lport->port_removing)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) is removing", unf_lport->port_id);
-
- xchg->io_state &= ~INI_IO_STATE_UPABORT;
-
- return UNF_SCSI_ABORT_FAIL;
- }
-
- scsi_image_table = &unf_lport->rport_scsi_table;
- scsi_id = scsi_cmnd->scsi_id;
-
- /* If pcie linkdown, complete this io and flush all io */
- if (unlikely(unf_lport->pcie_link_down)) {
- return_value = DID_RESET;
- UNF_IO_RESULT_CNT(scsi_image_table, scsi_id, return_value);
- unf_complete_cmnd(scsi_cmnd, DID_RESET << UNF_SHIFT_16);
- unf_free_lport_all_xchg(lport);
- return UNF_SCSI_ABORT_SUCCESS;
- }
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_KEVENT,
- "[abort]Port(0x%x) Exchg(0x%p) delay(%llu) SID(0x%x) DID(0x%x) wwpn(0x%llx) hottag(0x%x) scsi_id(0x%x) lun_id(0x%x) cmdsn(0x%llx) Ini:%p",
- unf_lport->port_id, xchg,
- (u64)jiffies_to_msecs(jiffies) - (u64)jiffies_to_msecs(xchg->alloc_jif),
- xchg->sid, xchg->did, unf_rport->port_name, xchg->hotpooltag,
- scsi_cmnd->scsi_id, (u32)scsi_cmnd->raw_lun_id, scsi_cmnd->cmnd_sn,
- scsi_cmnd->pinitiator);
-
- /* Init abts marker semaphore */
- sema_init(&xchg->task_sema, 0);
-
- if (xchg->scsi_cmnd_info.time_out != 0)
- unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer(xchg);
-
- lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg, (ulong)UNF_WAIT_ABTS_RSP_TIMEOUT,
- UNF_TIMER_TYPE_INI_ABTS);
-
- /* 4. Send INI ABTS CMND */
- if (unf_send_abts(unf_lport, xchg) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) Send ABTS failed. Exch(0x%p) hottag(0x%x)",
- unf_lport->port_id, xchg, xchg->hotpooltag);
-
- lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, xchg_flag);
- xchg->io_state &= ~INI_IO_STATE_UPABORT;
- xchg->io_state |= INI_IO_STATE_TMF_ABORT;
-
- spin_unlock_irqrestore(&xchg->xchg_state_lock, xchg_flag);
-
- return UNF_SCSI_ABORT_FAIL;
- }
-
- return unf_send_abts_success(unf_lport, xchg, scsi_cmnd, time_out_value);
-}
-
-static void unf_flush_ini_resp_que(struct unf_lport *lport)
-{
- FC_CHECK_RETURN_VOID(lport);
-
- if (lport->low_level_func.service_op.unf_flush_ini_resp_que)
- (void)lport->low_level_func.service_op.unf_flush_ini_resp_que(lport->fc_port);
-}
-
-int unf_cm_eh_abort_handler(struct unf_scsi_cmnd *scsi_cmnd)
-{
- /*
- * SCSI ABORT Command --->>> FC ABTS Command
- * If return ABORT_FAIL, then enter TMF process
- */
- struct unf_lport *unf_lport = NULL;
- struct unf_xchg *unf_xchg = NULL;
- struct unf_rport *unf_rport = NULL;
- struct unf_lport *xchg_lport = NULL;
- int ret = UNF_SCSI_ABORT_SUCCESS;
- ulong flag = 0;
-
- /* 1. Get L_Port: Point to Scsi_host */
- unf_lport = unf_find_lport_by_scsi_cmd(scsi_cmnd);
- if (!unf_lport) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Can't find port by scsi host id(0x%x)",
- UNF_GET_SCSI_HOST_ID_BY_CMND(scsi_cmnd));
- return UNF_SCSI_ABORT_FAIL;
- }
-
- /* 2. find target Xchg for INI Abort CMND */
- unf_xchg = unf_cm_lookup_xchg_by_cmnd_sn(unf_lport, scsi_cmnd->cmnd_sn,
- scsi_cmnd->world_id,
- scsi_cmnd->pinitiator);
- if (unlikely(!unf_xchg)) {
- FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_WARN,
- "[warn]Port(0x%x) can't find exchange by Cmdsn(0x%lx),Ini:%p",
- unf_lport->port_id, (ulong)scsi_cmnd->cmnd_sn,
- scsi_cmnd->pinitiator);
-
- unf_flush_ini_resp_que(unf_lport);
-
- return UNF_SCSI_ABORT_SUCCESS;
- }
-
- /* 3. increase ref_cnt to protect exchange */
- ret = (int)unf_xchg_ref_inc(unf_xchg, INI_EH_ABORT);
- if (unlikely(ret != RETURN_OK)) {
- unf_flush_ini_resp_que(unf_lport);
-
- return UNF_SCSI_ABORT_SUCCESS;
- }
-
- scsi_cmnd->upper_cmnd = unf_xchg->scsi_cmnd_info.scsi_cmnd;
- unf_xchg->debug_hook = true;
-
-	/* 4. Exchange L_Port/R_Port get & check */
- spin_lock_irqsave(&unf_xchg->xchg_state_lock, flag);
- xchg_lport = unf_xchg->lport;
- unf_rport = unf_xchg->rport;
- spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flag);
-
- if (unlikely(!xchg_lport || !unf_rport)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Exchange(0x%p)'s L_Port or R_Port is NULL, state(0x%x)",
- unf_xchg, unf_xchg->io_state);
-
- unf_xchg_ref_dec(unf_xchg, INI_EH_ABORT);
-
- if (!xchg_lport)
- /* for L_Port */
- return UNF_SCSI_ABORT_FAIL;
- /* for R_Port */
- return UNF_SCSI_ABORT_SUCCESS;
- }
-
- /* 5. Send INI Abort Cmnd */
- ret = unf_ini_abort_cmnd(xchg_lport, unf_xchg, scsi_cmnd);
-
- /* 6. decrease exchange ref_cnt */
- unf_xchg_ref_dec(unf_xchg, INI_EH_ABORT);
-
- return ret;
-}
-
-u32 unf_tmf_timeout_recovery_default(void *rport, void *xchg)
-{
- struct unf_lport *unf_lport = NULL;
- ulong flag = 0;
- struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
- struct unf_rport *unf_rport = (struct unf_rport *)rport;
-
- unf_lport = unf_xchg->lport;
- FC_CHECK_RETURN_VALUE(unf_lport, UNF_RETURN_ERROR);
-
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- unf_rport_enter_logo(unf_lport, unf_rport);
-
- return RETURN_OK;
-}
-
-void unf_abts_timeout_recovery_default(void *rport, void *xchg)
-{
- struct unf_lport *unf_lport = NULL;
- ulong flag = 0;
- ulong flags = 0;
- struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
- struct unf_rport *unf_rport = (struct unf_rport *)rport;
-
- unf_lport = unf_xchg->lport;
- FC_CHECK_RETURN_VOID(unf_lport);
-
- spin_lock_irqsave(&unf_xchg->xchg_state_lock, flags);
- if (INI_IO_STATE_DONE & unf_xchg->io_state) {
- spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flags);
-
- return;
- }
- spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flags);
-
- if (unf_xchg->rport_bind_jifs != unf_rport->rport_alloc_jifs)
- return;
-
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- unf_rport_enter_logo(unf_lport, unf_rport);
-}
-
-u32 unf_tmf_timeout_recovery_special(void *rport, void *xchg)
-{
- /* Do port reset or R_Port LOGO */
- int ret = UNF_RETURN_ERROR;
- struct unf_lport *unf_lport = NULL;
- struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
- struct unf_rport *unf_rport = (struct unf_rport *)rport;
-
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(unf_xchg->lport, UNF_RETURN_ERROR);
-
- unf_lport = unf_xchg->lport->root_lport;
- FC_CHECK_RETURN_VALUE(unf_lport, UNF_RETURN_ERROR);
-
- /* 1. TMF response timeout & Marker STS timeout */
- if (!(unf_xchg->tmf_state &
- (MARKER_STS_RECEIVED | TMF_RESPONSE_RECEIVED))) {
- /* TMF timeout & marker timeout */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) receive marker status timeout and do recovery",
- unf_lport->port_id);
-
- /* Do port reset */
- ret = unf_cm_reset_port(unf_lport->port_id);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) do reset failed",
- unf_lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- return RETURN_OK;
- }
-
- /* 2. default case: Do LOGO process */
- unf_tmf_timeout_recovery_default(unf_rport, unf_xchg);
-
- return RETURN_OK;
-}
-
-void unf_tmf_abnormal_recovery(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *xchg)
-{
- /*
- * for device(lun)/target(session) reset:
- * Do port reset or R_Port LOGO
- */
- if (lport->unf_tmf_abnormal_recovery)
- lport->unf_tmf_abnormal_recovery((void *)rport, (void *)xchg);
-}
-
-int unf_cm_eh_device_reset_handler(struct unf_scsi_cmnd *scsi_cmnd)
-{
- /* SCSI Device/LUN Reset Command --->>> FC LUN/Device Reset Command */
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- struct unf_xchg *unf_xchg = NULL;
- u32 cmnd_result = 0;
- int ret = SUCCESS;
-
- FC_CHECK_RETURN_VALUE(scsi_cmnd, FAILED);
- FC_CHECK_RETURN_VALUE(scsi_cmnd->lun_id, FAILED);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[event]Enter device/LUN reset handler");
-
- /* 1. Get L_Port */
- unf_lport = unf_find_lport_by_scsi_cmd(scsi_cmnd);
- if (!unf_lport) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Can't find port by scsi_host_id(0x%x)",
- UNF_GET_SCSI_HOST_ID_BY_CMND(scsi_cmnd));
-
- return FAILED;
- }
-
- /* 2. L_Port State checking */
- if (unlikely(unf_lport->port_removing)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%p) is removing", unf_lport);
-
- return FAILED;
- }
-
-	/*
-	 * 3. Get R_Port: if no rport is found or the rport is not ready,
-	 * return ok. Path: L_Port -->> rport_scsi_table (image table) -->>
-	 * rport_info_table
-	 */
- unf_rport = unf_find_rport_by_scsi_id(unf_lport, scsi_cmnd->err_code_table,
- scsi_cmnd->err_code_table_cout,
- UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd), &cmnd_result);
- if (unlikely(!unf_rport)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) Can't find rport by scsi_id(0x%x)",
- unf_lport->port_id, UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd));
-
- return SUCCESS;
- }
-
- /*
- * 4. Set the I/O of the corresponding LUN to abort.
- * *
- * LUN Reset: set UP_ABORT tag, with:
- * INI_Busy_list, IO_Wait_list,
- * IO_Delay_list, IO_Delay_transfer_list
- */
- unf_cm_xchg_abort_by_lun(unf_lport, unf_rport, *((u64 *)scsi_cmnd->lun_id), NULL, false);
-
- /* 5. R_Port state check */
- if (unlikely(unf_rport->rp_state != UNF_RPORT_ST_READY)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) RPort(0x%x) state(0x%x) SCSI Command(0x%p), rport is not ready",
- unf_lport->port_id, unf_rport->nport_id,
- unf_rport->rp_state, scsi_cmnd);
-
- return SUCCESS;
- }
-
- /* 6. Get & inc ref_cnt free Xchg for Device reset */
- unf_xchg = (struct unf_xchg *)unf_cm_get_free_xchg(unf_lport, UNF_XCHG_TYPE_INI);
- if (unlikely(!unf_xchg)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%p) can't get free exchange", unf_lport);
-
- return FAILED;
- }
-
- /* increase ref_cnt for protecting exchange */
- ret = (int)unf_xchg_ref_inc(unf_xchg, INI_EH_DEVICE_RESET);
- FC_CHECK_RETURN_VALUE((ret == RETURN_OK), FAILED);
-
- /* 7. Send Device/LUN Reset to Low level */
- ret = unf_send_scsi_mgmt_cmnd(unf_xchg, unf_lport, unf_rport, scsi_cmnd,
- UNF_FCP_TM_LOGICAL_UNIT_RESET);
- if (unlikely(ret == FAILED)) {
- /*
- * Do port reset or R_Port LOGO:
- * 1. FAILED: send failed
- * 2. FAILED: semaphore timeout
-		 * 3. SUCCESS: rcvd rsp & semaphore has been woken up
- */
- unf_tmf_abnormal_recovery(unf_lport, unf_rport, unf_xchg);
- }
-
- /*
- * 8. Release resource immediately if necessary
-	 * NOTE: here, semaphore timeout or rcvd rsp (semaphore has been
-	 * woken up)
- */
- if (likely(!unf_lport->port_removing || unf_lport->root_lport != unf_lport))
- unf_cm_free_xchg(unf_xchg->lport, unf_xchg);
-
- /* decrease ref_cnt */
- unf_xchg_ref_dec(unf_xchg, INI_EH_DEVICE_RESET);
-
- return SUCCESS;
-}
-
-int unf_cm_target_reset_handler(struct unf_scsi_cmnd *scsi_cmnd)
-{
- /* SCSI Target Reset Command --->>> FC Session Reset/Delete Command */
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- struct unf_xchg *unf_xchg = NULL;
- u32 cmnd_result = 0;
- int ret = SUCCESS;
-
- FC_CHECK_RETURN_VALUE(scsi_cmnd, FAILED);
- FC_CHECK_RETURN_VALUE(scsi_cmnd->lun_id, FAILED);
-
- /* 1. Get L_Port */
- unf_lport = unf_find_lport_by_scsi_cmd(scsi_cmnd);
- if (!unf_lport) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Can't find port by scsi_host_id(0x%x)",
- UNF_GET_SCSI_HOST_ID_BY_CMND(scsi_cmnd));
-
- return FAILED;
- }
-
- /* 2. L_Port State check */
- if (unlikely(unf_lport->port_removing)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%p) is removing", unf_lport);
-
- return FAILED;
- }
-
-	/*
-	 * 3. Get R_Port: if no rport is found or the rport is not ready,
-	 * return ok. Path: L_Port -->> rport_scsi_table (image table) -->>
-	 * rport_info_table
-	 */
- unf_rport = unf_find_rport_by_scsi_id(unf_lport, scsi_cmnd->err_code_table,
- scsi_cmnd->err_code_table_cout,
- UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd), &cmnd_result);
- if (unlikely(!unf_rport)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Can't find rport by scsi_id(0x%x)",
- UNF_GET_SCSI_ID_BY_CMND(scsi_cmnd));
-
- return SUCCESS;
- }
-
- /*
- * 4. set UP_ABORT on Target IO and Session IO
- * *
- * LUN Reset: set UP_ABORT tag, with:
- * INI_Busy_list, IO_Wait_list,
- * IO_Delay_list, IO_Delay_transfer_list
- */
- unf_cm_xchg_abort_by_session(unf_lport, unf_rport);
-
- /* 5. R_Port state check */
- if (unlikely(unf_rport->rp_state != UNF_RPORT_ST_READY)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) RPort(0x%x) state(0x%x) is not ready, SCSI Command(0x%p)",
- unf_lport->port_id, unf_rport->nport_id,
- unf_rport->rp_state, scsi_cmnd);
-
- return SUCCESS;
- }
-
- /* 6. Get free Xchg for Target Reset CMND */
- unf_xchg = (struct unf_xchg *)unf_cm_get_free_xchg(unf_lport, UNF_XCHG_TYPE_INI);
- if (unlikely(!unf_xchg)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%p) can't get free exchange", unf_lport);
-
- return FAILED;
- }
-
- /* increase ref_cnt to protect exchange */
- ret = (int)unf_xchg_ref_inc(unf_xchg, INI_EH_DEVICE_RESET);
- FC_CHECK_RETURN_VALUE((ret == RETURN_OK), FAILED);
-
- /* 7. Send Target Reset Cmnd to low-level */
- ret = unf_send_scsi_mgmt_cmnd(unf_xchg, unf_lport, unf_rport, scsi_cmnd,
- UNF_FCP_TM_TARGET_RESET);
- if (unlikely(ret == FAILED)) {
- /*
- * Do port reset or R_Port LOGO:
- * 1. FAILED: send failed
- * 2. FAILED: semaphore timeout
-		 * 3. SUCCESS: rcvd rsp & semaphore has been woken up
- */
- unf_tmf_abnormal_recovery(unf_lport, unf_rport, unf_xchg);
- }
-
- /*
- * 8. Release resource immediately if necessary
-	 * NOTE: here, semaphore timeout or rcvd rsp (semaphore has been
-	 * woken up)
- */
- if (likely(!unf_lport->port_removing || unf_lport->root_lport != unf_lport))
- unf_cm_free_xchg(unf_xchg->lport, unf_xchg);
-
- /* decrease exchange ref_cnt */
- unf_xchg_ref_dec(unf_xchg, INI_EH_DEVICE_RESET);
-
- return SUCCESS;
-}
-
-int unf_cm_bus_reset_handler(struct unf_scsi_cmnd *scsi_cmnd)
-{
- /* SCSI BUS Reset Command --->>> FC Port Reset Command */
- struct unf_lport *unf_lport = NULL;
- int cmnd_result = 0;
-
- /* 1. Get L_Port */
- unf_lport = unf_find_lport_by_scsi_cmd(scsi_cmnd);
- if (!unf_lport) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Can't find port by scsi_host_id(0x%x)",
- UNF_GET_SCSI_HOST_ID_BY_CMND(scsi_cmnd));
-
- return FAILED;
- }
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_KEVENT,
- "[event]Do port reset with scsi_bus_reset");
-
- cmnd_result = unf_cm_reset_port(unf_lport->port_id);
- if (unlikely(cmnd_result == UNF_RETURN_ERROR))
- return FAILED;
- else
- return SUCCESS;
-}
-
-void unf_process_scsi_mgmt_result(struct unf_frame_pkg *pkg,
- struct unf_xchg *xchg)
-{
- u8 *rsp_info = NULL;
- u8 rsp_code = 0;
- u32 code_index = 0;
-
-	/*
-	 * LLT found that RSP_CODE is the third byte of FCP_RSP_INFO;
-	 * on little endian it should be byte 0. For details, see FCP-4
-	 * Table 26, FCP_RSP_INFO field format.
- * *
- * 1. state setting
- * 2. wake up semaphore
- */
- FC_CHECK_RETURN_VOID(pkg);
- FC_CHECK_RETURN_VOID(xchg);
-
- xchg->tmf_state |= TMF_RESPONSE_RECEIVED;
-
- if (UNF_GET_LL_ERR(pkg) != UNF_IO_SUCCESS ||
- pkg->unf_rsp_pload_bl.length > UNF_RESPONE_DATA_LEN) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Send scsi manage command failed with error code(0x%x) resp len(0x%x)",
- UNF_GET_LL_ERR(pkg), pkg->unf_rsp_pload_bl.length);
-
- xchg->scsi_cmnd_info.result = UNF_IO_FAILED;
-
- /* wakeup semaphore & return */
- up(&xchg->task_sema);
-
- return;
- }
-
- rsp_info = pkg->unf_rsp_pload_bl.buffer_ptr;
- if (rsp_info && pkg->unf_rsp_pload_bl.length != 0) {
- /* change to little end if necessary */
- if (pkg->byte_orders & UNF_BIT_3)
- unf_big_end_to_cpu(rsp_info, pkg->unf_rsp_pload_bl.length);
- }
-
- if (!rsp_info) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]FCP response data pointer is NULL with Xchg TAG(0x%x)",
- xchg->hotpooltag);
-
- xchg->scsi_cmnd_info.result = UNF_IO_SUCCESS;
-
- /* wakeup semaphore & return */
- up(&xchg->task_sema);
-
- return;
- }
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]FCP response data length(0x%x), RSP_CODE(0x%x:%x:%x:%x:%x:%x:%x:%x)",
- pkg->unf_rsp_pload_bl.length, rsp_info[ARRAY_INDEX_0],
- rsp_info[ARRAY_INDEX_1], rsp_info[ARRAY_INDEX_2],
- rsp_info[ARRAY_INDEX_3], rsp_info[ARRAY_INDEX_4],
- rsp_info[ARRAY_INDEX_5], rsp_info[ARRAY_INDEX_6],
- rsp_info[ARRAY_INDEX_7]);
-
- rsp_code = rsp_info[code_index];
- if (rsp_code == UNF_FCP_TM_RSP_COMPLETE || rsp_code == UNF_FCP_TM_RSP_SUCCEED)
- xchg->scsi_cmnd_info.result = UNF_IO_SUCCESS;
- else
- xchg->scsi_cmnd_info.result = UNF_IO_FAILED;
-
- /* wakeup semaphore & return */
- up(&xchg->task_sema);
-}
-
-static void unf_build_task_mgmt_fcp_cmnd(struct unf_fcp_cmnd *fcp_cmnd,
- struct unf_scsi_cmnd *scsi_cmnd,
- enum unf_task_mgmt_cmd task_mgmt)
-{
- FC_CHECK_RETURN_VOID(fcp_cmnd);
- FC_CHECK_RETURN_VOID(scsi_cmnd);
-
- unf_big_end_to_cpu((void *)scsi_cmnd->lun_id, UNF_FCP_LUNID_LEN_8);
- (*(u64 *)(scsi_cmnd->lun_id)) >>= UNF_SHIFT_8;
- memcpy(fcp_cmnd->lun, scsi_cmnd->lun_id, sizeof(fcp_cmnd->lun));
-
- /*
- * If the TASK MANAGEMENT FLAGS field is set to a nonzero value,
- * the FCP_CDB field, the FCP_DL field, the TASK ATTRIBUTE field,
- * the RDDATA bit, and the WRDATA bit shall be ignored and the
- * FCP_BIDIRECTIONAL_READ_DL field shall not be included in the FCP_CMND
- * IU payload
- */
- fcp_cmnd->control = UNF_SET_TASK_MGMT_FLAGS((u32)(task_mgmt));
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "SCSI cmnd(0x%x) is task mgmt cmnd. ntrl Flag(LITTLE END) is 0x%x.",
- task_mgmt, fcp_cmnd->control);
-}
-
-int unf_send_scsi_mgmt_cmnd(struct unf_xchg *xchg, struct unf_lport *lport,
- struct unf_rport *rport,
- struct unf_scsi_cmnd *scsi_cmnd,
- enum unf_task_mgmt_cmd task_mgnt_cmd_type)
-{
- /*
- * 1. Device/LUN reset
- * 2. Target/Session reset
- */
- struct unf_xchg *unf_xchg = NULL;
- int ret = SUCCESS;
- struct unf_frame_pkg pkg = {0};
- ulong xchg_flag = 0;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, FAILED);
- FC_CHECK_RETURN_VALUE(rport, FAILED);
- FC_CHECK_RETURN_VALUE(xchg, FAILED);
- FC_CHECK_RETURN_VALUE(scsi_cmnd, FAILED);
- FC_CHECK_RETURN_VALUE(task_mgnt_cmd_type <= UNF_FCP_TM_TERMINATE_TASK &&
- task_mgnt_cmd_type >= UNF_FCP_TM_QUERY_TASK_SET, FAILED);
-
- unf_xchg = xchg;
- unf_xchg->lport = lport;
- unf_xchg->rport = rport;
-
- /* 1. State: Up_Task */
- spin_lock_irqsave(&unf_xchg->xchg_state_lock, xchg_flag);
- unf_xchg->io_state |= INI_IO_STATE_UPTASK;
- spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, xchg_flag);
- pkg.frame_head.oxid_rxid = ((u32)unf_xchg->oxid << (u32)UNF_SHIFT_16) | unf_xchg->rxid;
-
- /* 2. Set TASK MANAGEMENT FLAGS of FCP_CMND to the corresponding task
- * management command
- */
- unf_build_task_mgmt_fcp_cmnd(&unf_xchg->fcp_cmnd, scsi_cmnd, task_mgnt_cmd_type);
-
- pkg.xchg_contex = unf_xchg;
- pkg.private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = rport->rport_index;
- pkg.fcp_cmnd = &unf_xchg->fcp_cmnd;
- pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = unf_xchg->hotpooltag;
- pkg.frame_head.csctl_sid = lport->nport_id;
- pkg.frame_head.rctl_did = rport->nport_id;
-
- pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
- xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
-
- if (unlikely(lport->pcie_link_down)) {
- unf_free_lport_all_xchg(lport);
- return SUCCESS;
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
- "[event]Port(0x%x) send task_cmnd(0x%x) to RPort(0x%x) Hottag(0x%x) lunid(0x%llx)",
- lport->port_id, task_mgnt_cmd_type, rport->nport_id,
- unf_xchg->hotpooltag, *((u64 *)scsi_cmnd->lun_id));
-
- /* 3. Init exchange task semaphore */
- sema_init(&unf_xchg->task_sema, 0);
-
- /* 4. Send Mgmt Task to low-level */
- if (unf_hardware_start_io(lport, &pkg) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) send task_cmnd(0x%x) to RPort(0x%x) failed",
- lport->port_id, task_mgnt_cmd_type, rport->nport_id);
-
- return FAILED;
- }
-
- /*
- * semaphore timeout
- **
- * Code review: The second input parameter needs to be converted to
- jiffies.
- * set semaphore after the message is sent successfully.The semaphore is
- returned when the semaphore times out or is woken up.
- **
- * 5. The semaphore is cleared and counted when the Mgmt Task message is
- sent, and is Wake Up when the RSP message is received.
- * If the semaphore is not Wake Up, the semaphore is triggered after
- timeout. That is, no RSP message is received within the timeout period.
- */
- if (down_timeout(&unf_xchg->task_sema, (s64)msecs_to_jiffies((u32)UNF_WAIT_SEM_TIMEOUT))) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) send task_cmnd(0x%x) to RPort(0x%x) timeout scsi id(0x%x) lun id(0x%x)",
- lport->nport_id, task_mgnt_cmd_type,
- rport->nport_id, scsi_cmnd->scsi_id,
- (u32)scsi_cmnd->raw_lun_id);
- unf_notify_chip_free_xid(unf_xchg);
- /* semaphore timeout */
- ret = FAILED;
- spin_lock_irqsave(&lport->lport_state_lock, flag);
- if (lport->states == UNF_LPORT_ST_RESET)
- ret = SUCCESS;
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- return ret;
- }
-
- /*
- * 6. NOTE: no timeout (has been waken up)
- * Do Scsi_Cmnd(Mgmt Task) result checking
- * *
- * FAILED: with error code or RSP is error
- * SUCCESS: others
- */
- if (unf_xchg->scsi_cmnd_info.result == UNF_IO_SUCCESS) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]Port(0x%x) send task_cmnd(0x%x) to RPort(0x%x) and receive rsp succeed",
- lport->nport_id, task_mgnt_cmd_type, rport->nport_id);
-
- ret = SUCCESS;
- } else {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) send task_cmnd(0x%x) to RPort(0x%x) and receive rsp failed scsi id(0x%x) lun id(0x%x)",
- lport->nport_id, task_mgnt_cmd_type, rport->nport_id,
- scsi_cmnd->scsi_id, (u32)scsi_cmnd->raw_lun_id);
-
- ret = FAILED;
- }
-
- return ret;
-}
-
-u32 unf_recv_tmf_marker_status(void *lport, struct unf_frame_pkg *pkg)
-{
- struct unf_lport *unf_lport = NULL;
- u32 uret = RETURN_OK;
- struct unf_xchg *unf_xchg = NULL;
- u16 hot_pool_tag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
- unf_lport = (struct unf_lport *)lport;
-
- /* Find exchange which point to marker sts */
- if (!unf_lport->xchg_mgr_temp.unf_look_up_xchg_by_tag) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) tag function is NULL", unf_lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- hot_pool_tag =
- (u16)(pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]);
-
- unf_xchg =
- (struct unf_xchg *)(unf_lport->xchg_mgr_temp
- .unf_look_up_xchg_by_tag((void *)unf_lport, hot_pool_tag));
- if (!unf_xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) find exchange by tag(0x%x) failed",
- unf_lport->port_id, unf_lport->nport_id, hot_pool_tag);
-
- return UNF_RETURN_ERROR;
- }
-
- /*
- * NOTE: set exchange TMF state with MARKER_STS_RECEIVED
- * *
- * About TMF state
- * 1. STS received
- * 2. Response received
- * 3. Do check if necessary
- */
- unf_xchg->tmf_state |= MARKER_STS_RECEIVED;
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]Marker STS: D_ID(0x%x) S_ID(0x%x) OX_ID(0x%x) RX_ID(0x%x), EXCH: D_ID(0x%x) S_ID(0x%x) OX_ID(0x%x) RX_ID(0x%x)",
- pkg->frame_head.rctl_did & UNF_NPORTID_MASK,
- pkg->frame_head.csctl_sid & UNF_NPORTID_MASK,
- (u16)(pkg->frame_head.oxid_rxid >> UNF_SHIFT_16),
- (u16)(pkg->frame_head.oxid_rxid), unf_xchg->did, unf_xchg->sid,
- unf_xchg->oxid, unf_xchg->rxid);
-
- return uret;
-}
-
-u32 unf_recv_abts_marker_status(void *lport, struct unf_frame_pkg *pkg)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_xchg *unf_xchg = NULL;
- u16 hot_pool_tag = 0;
- ulong flags = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
- unf_lport = (struct unf_lport *)lport;
-
- /* Find exchange by tag */
- if (!unf_lport->xchg_mgr_temp.unf_look_up_xchg_by_tag) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) tag function is NULL", unf_lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- hot_pool_tag = (u16)(pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]);
-
- unf_xchg =
- (struct unf_xchg *)(unf_lport->xchg_mgr_temp.unf_look_up_xchg_by_tag((void *)unf_lport,
- hot_pool_tag));
- if (!unf_xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) find exchange by tag(0x%x) failed",
- unf_lport->port_id, unf_lport->nport_id, hot_pool_tag);
-
- return UNF_RETURN_ERROR;
- }
-
- /*
- * NOTE: set exchange ABTS state with MARKER_STS_RECEIVED
- * *
- * About exchange ABTS state
- * 1. STS received
- * 2. Response received
- * 3. Do check if necessary
- * *
- * About Exchange status get from low level
- * 1. Set: when RCVD ABTS Marker
- * 2. Set: when RCVD ABTS Req Done
- * 3. value: set value with pkg->status
- */
- spin_lock_irqsave(&unf_xchg->xchg_state_lock, flags);
- unf_xchg->ucode_abts_state = pkg->status;
- unf_xchg->abts_state |= MARKER_STS_RECEIVED;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[info]Port(0x%x) wake up SEMA for Abts marker exchange(0x%p) oxid(0x%x 0x%x) hottag(0x%x) status(0x%x)",
- unf_lport->port_id, unf_xchg, unf_xchg->oxid, unf_xchg->rxid,
- unf_xchg->hotpooltag, pkg->abts_maker_status);
-
- /*
- * NOTE: Second time for ABTS marker received, or
- * ABTS response have been received, no need to wake up sema
- */
- if ((INI_IO_STATE_ABORT_TIMEOUT & unf_xchg->io_state) ||
- (ABTS_RESPONSE_RECEIVED & unf_xchg->abts_state)) {
- spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[info]Port(0x%x) no need to wake up SEMA for Abts marker ABTS_STATE(0x%x) IO_STATE(0x%x)",
- unf_lport->port_id, unf_xchg->abts_state, unf_xchg->io_state);
-
- return RETURN_OK;
- }
-
- if (unf_xchg->io_state & INI_IO_STATE_TMF_ABORT) {
- spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flags);
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[info]Port(0x%x) receive Abts marker, exchange(%p) state(0x%x) free it",
- unf_lport->port_id, unf_xchg, unf_xchg->io_state);
-
- unf_cm_free_xchg(unf_lport, unf_xchg);
- } else {
- spin_unlock_irqrestore(&unf_xchg->xchg_state_lock, flags);
- up(&unf_xchg->task_sema);
- }
-
- return RETURN_OK;
-}
diff --git a/drivers/scsi/spfc/common/unf_io_abnormal.h b/drivers/scsi/spfc/common/unf_io_abnormal.h
deleted file mode 100644
index 31cc8e30e51a..000000000000
--- a/drivers/scsi/spfc/common/unf_io_abnormal.h
+++ /dev/null
@@ -1,19 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_IO_ABNORMAL_H
-#define UNF_IO_ABNORMAL_H
-
-#include "unf_type.h"
-#include "unf_lport.h"
-#include "unf_exchg.h"
-
-#define UNF_GET_LL_ERR(pkg) (((pkg)->status) >> 16)
-
-void unf_process_scsi_mgmt_result(struct unf_frame_pkg *pkg,
- struct unf_xchg *xchg);
-u32 unf_hardware_start_io(struct unf_lport *lport, struct unf_frame_pkg *pkg);
-u32 unf_recv_abts_marker_status(void *lport, struct unf_frame_pkg *pkg);
-u32 unf_recv_tmf_marker_status(void *lport, struct unf_frame_pkg *pkg);
-
-#endif
diff --git a/drivers/scsi/spfc/common/unf_log.h b/drivers/scsi/spfc/common/unf_log.h
deleted file mode 100644
index 801e23ac0829..000000000000
--- a/drivers/scsi/spfc/common/unf_log.h
+++ /dev/null
@@ -1,178 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_LOG_H
-#define UNF_LOG_H
-#include "unf_type.h"
-
-#define UNF_CRITICAL 1
-#define UNF_ERR 2
-#define UNF_WARN 3
-#define UNF_KEVENT 4
-#define UNF_MAJOR 5
-#define UNF_MINOR 6
-#define UNF_INFO 7
-#define UNF_DATA 7
-#define UNF_ALL 7
-
-enum unf_debug_type {
- UNF_DEBUG_TYPE_MML = 0,
- UNF_DEBUG_TYPE_DIAGNOSE = 1,
- UNF_DEBUG_TYPE_MESSAGE = 2,
- UNF_DEBUG_TYPE_BUTT
-};
-
-enum unf_log_attr {
- UNF_LOG_LOGIN_ATT = 0x1,
- UNF_LOG_IO_ATT = 0x2,
- UNF_LOG_EQUIP_ATT = 0x4,
- UNF_LOG_REG_ATT = 0x8,
- UNF_LOG_REG_MML_TEST = 0x10,
- UNF_LOG_EVENT = 0x20,
- UNF_LOG_NORMAL = 0x40,
- UNF_LOG_ABNORMAL = 0X80,
- UNF_LOG_BUTT
-};
-
-enum event_log {
- UNF_EVTLOG_DRIVER_SUC = 0,
- UNF_EVTLOG_DRIVER_INFO,
- UNF_EVTLOG_DRIVER_WARN,
- UNF_EVTLOG_DRIVER_ERR,
- UNF_EVTLOG_LINK_SUC,
- UNF_EVTLOG_LINK_INFO,
- UNF_EVTLOG_LINK_WARN,
- UNF_EVTLOG_LINK_ERR,
- UNF_EVTLOG_IO_SUC,
- UNF_EVTLOG_IO_INFO,
- UNF_EVTLOG_IO_WARN,
- UNF_EVTLOG_IO_ERR,
- UNF_EVTLOG_TOOL_SUC,
- UNF_EVTLOG_TOOL_INFO,
- UNF_EVTLOG_TOOL_WARN,
- UNF_EVTLOG_TOOL_ERR,
- UNF_EVTLOG_BUT
-};
-
-#define UNF_IO_ATT_PRINT_TIMES 2
-#define UNF_LOGIN_ATT_PRINT_TIMES 100
-
-#define UNF_IO_ATT_PRINT_LIMIT msecs_to_jiffies(2 * 1000)
-
-extern u32 unf_dgb_level;
-extern u32 log_print_level;
-extern u32 log_limited_times;
-
-#define DRV_LOG_LIMIT(module_id, log_level, log_att, format, ...) \
- do { \
- static unsigned long pre; \
- static int should_print = UNF_LOGIN_ATT_PRINT_TIMES; \
- if (time_after_eq(jiffies, pre + (UNF_IO_ATT_PRINT_LIMIT))) { \
- if (log_att == UNF_LOG_ABNORMAL) { \
- should_print = UNF_IO_ATT_PRINT_TIMES; \
- } else { \
- should_print = log_limited_times; \
- } \
- } \
- if (should_print < 0) { \
- if (log_att != UNF_LOG_ABNORMAL) \
- pre = jiffies; \
- break; \
- } \
- if (should_print-- > 0) { \
- printk(log_level "[%d][FC_UNF]" format "[%s][%-5d]\n", \
- smp_processor_id(), ##__VA_ARGS__, __func__, \
- __LINE__); \
- } \
- if (should_print == 0) { \
- printk(log_level "[FC_UNF]log is limited[%s][%-5d]\n", \
- __func__, __LINE__); \
- } \
- pre = jiffies; \
- } while (0)
-
-#define FC_CHECK_RETURN_VALUE(condition, ret) \
- do { \
- if (unlikely(!(condition))) { \
- FC_DRV_PRINT(UNF_LOG_REG_ATT, \
- UNF_ERR, "Para check(%s) invalid", \
- #condition); \
- return ret; \
- } \
- } while (0)
-
-#define FC_CHECK_RETURN_VOID(condition) \
- do { \
- if (unlikely(!(condition))) { \
- FC_DRV_PRINT(UNF_LOG_REG_ATT, \
- UNF_ERR, "Para check(%s) invalid", \
- #condition); \
- return; \
- } \
- } while (0)
-
-#define FC_DRV_PRINT(log_att, log_level, format, ...) \
- do { \
- if (unlikely((log_level) <= log_print_level)) { \
- if (log_level == UNF_CRITICAL) { \
- DRV_LOG_LIMIT(UNF_PID, KERN_CRIT, \
- log_att, format, ##__VA_ARGS__); \
- } else if (log_level == UNF_WARN) { \
- DRV_LOG_LIMIT(UNF_PID, KERN_WARNING, \
- log_att, format, ##__VA_ARGS__); \
- } else if (log_level == UNF_ERR) { \
- DRV_LOG_LIMIT(UNF_PID, KERN_ERR, \
- log_att, format, ##__VA_ARGS__); \
- } else if (log_level == UNF_MAJOR || \
- log_level == UNF_MINOR || \
- log_level == UNF_KEVENT) { \
- DRV_LOG_LIMIT(UNF_PID, KERN_NOTICE, \
- log_att, format, ##__VA_ARGS__); \
- } else if (log_level == UNF_INFO || \
- log_level == UNF_DATA) { \
- DRV_LOG_LIMIT(UNF_PID, KERN_INFO, \
- log_att, format, ##__VA_ARGS__); \
- } \
- } \
- } while (0)
-
-#define UNF_PRINT_SFS(dbg_level, portid, data, size) \
- do { \
- if ((dbg_level) <= log_print_level) { \
- u32 cnt = 0; \
- printk(KERN_INFO "[INFO]Port(0x%x) sfs:0x", (portid)); \
- for (cnt = 0; cnt < (size) / 4; cnt++) { \
- printk(KERN_INFO "%08x ", \
- ((u32 *)(data))[cnt]); \
- } \
- printk(KERN_INFO "[FC_UNF][%s]\n", __func__); \
- } \
- } while (0)
-
-#define UNF_PRINT_SFS_LIMIT(dbg_level, portid, data, size) \
- do { \
- if ((dbg_level) <= log_print_level) { \
- static ulong pre; \
- static int should_print = UNF_LOGIN_ATT_PRINT_TIMES; \
- if (time_after_eq( \
- jiffies, pre + UNF_IO_ATT_PRINT_LIMIT)) { \
- should_print = log_limited_times; \
- } \
- if (should_print < 0) { \
- pre = jiffies; \
- break; \
- } \
- if (should_print-- > 0) { \
- UNF_PRINT_SFS(dbg_level, portid, data, size); \
- } \
- if (should_print == 0) { \
- printk( \
- KERN_INFO \
- "[FC_UNF]sfs log is limited[%s][%-5d]\n", \
- __func__, __LINE__); \
- } \
- pre = jiffies; \
- } \
- } while (0)
-
-#endif
diff --git a/drivers/scsi/spfc/common/unf_lport.c b/drivers/scsi/spfc/common/unf_lport.c
deleted file mode 100644
index 66d3ac14d676..000000000000
--- a/drivers/scsi/spfc/common/unf_lport.c
+++ /dev/null
@@ -1,1008 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "unf_lport.h"
-#include "unf_log.h"
-#include "unf_rport.h"
-#include "unf_exchg.h"
-#include "unf_service.h"
-#include "unf_ls.h"
-#include "unf_gs.h"
-#include "unf_portman.h"
-
-static void unf_lport_config(struct unf_lport *lport);
-void unf_cm_mark_dirty_mem(struct unf_lport *lport, enum unf_lport_dirty_flag type)
-{
- FC_CHECK_RETURN_VOID((lport));
-
- lport->dirty_flag |= (u32)type;
-}
-
-u32 unf_init_lport_route(struct unf_lport *lport)
-{
- u32 ret = RETURN_OK;
- int ret_val = 0;
-
- FC_CHECK_RETURN_VALUE((lport), UNF_RETURN_ERROR);
-
- /* Init L_Port route work */
- INIT_DELAYED_WORK(&lport->route_timer_work, unf_lport_route_work);
-
- /* Delay route work */
- ret_val = queue_delayed_work(unf_wq, &lport->route_timer_work,
- (ulong)msecs_to_jiffies(UNF_LPORT_POLL_TIMER));
- if (unlikely((!(bool)(ret_val)))) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_WARN,
- "[warn]Port(0x%x) schedule route work failed",
- lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- ret = unf_lport_ref_inc(lport);
- return ret;
-}
-
-void unf_destroy_lport_route(struct unf_lport *lport)
-{
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VOID(lport);
-
- /* Cancel (route timer) delay work */
- UNF_DELAYED_WORK_SYNC(ret, (lport->port_id), (&lport->route_timer_work),
- "Route Timer work");
- if (ret == RETURN_OK)
- /* Corresponding to ADD operation */
- unf_lport_ref_dec(lport);
-
- lport->destroy_step = UNF_LPORT_DESTROY_STEP_2_CLOSE_ROUTE;
-}
-
-void unf_init_port_parms(struct unf_lport *lport)
-{
- INIT_LIST_HEAD(&lport->list_vports_head);
- INIT_LIST_HEAD(&lport->list_intergrad_vports);
- INIT_LIST_HEAD(&lport->list_destroy_vports);
- INIT_LIST_HEAD(&lport->entry_lport);
- INIT_LIST_HEAD(&lport->list_qos_head);
-
- spin_lock_init(&lport->qos_mgr_lock);
- spin_lock_init(&lport->lport_state_lock);
-
- lport->max_frame_size = max_frame_size;
- lport->ed_tov = UNF_DEFAULT_EDTOV;
- lport->ra_tov = UNF_DEFAULT_RATOV;
- lport->fabric_node_name = 0;
- lport->qos_level = UNF_QOS_LEVEL_DEFAULT;
- lport->qos_cs_ctrl = false;
- lport->priority = (bool)UNF_PRIORITY_DISABLE;
- lport->port_dirt_exchange = false;
-
- unf_lport_config(lport);
-
- unf_set_lport_state(lport, UNF_LPORT_ST_ONLINE);
-
- lport->link_up = UNF_PORT_LINK_DOWN;
- lport->port_removing = false;
- lport->lport_free_completion = NULL;
- lport->last_tx_fault_jif = 0;
- lport->enhanced_features = 0;
- lport->destroy_step = INVALID_VALUE32;
- lport->dirty_flag = 0;
- lport->switch_state = false;
- lport->bbscn_support = false;
- lport->loop_back_test_mode = false;
- lport->start_work_state = UNF_START_WORK_STOP;
- lport->sfp_power_fault_count = 0;
- lport->sfp_9545_fault_count = 0;
-
- atomic_set(&lport->lport_no_operate_flag, UNF_LPORT_NORMAL);
- atomic_set(&lport->port_ref_cnt, 0);
- atomic_set(&lport->scsi_session_add_success, 0);
- atomic_set(&lport->scsi_session_add_failed, 0);
- atomic_set(&lport->scsi_session_del_success, 0);
- atomic_set(&lport->scsi_session_del_failed, 0);
- atomic_set(&lport->add_start_work_failed, 0);
- atomic_set(&lport->add_closing_work_failed, 0);
- atomic_set(&lport->alloc_scsi_id, 0);
- atomic_set(&lport->resume_scsi_id, 0);
- atomic_set(&lport->reuse_scsi_id, 0);
- atomic_set(&lport->device_alloc, 0);
- atomic_set(&lport->device_destroy, 0);
- atomic_set(&lport->session_loss_tmo, 0);
- atomic_set(&lport->host_no, 0);
- atomic64_set(&lport->exchg_index, 0x1000);
- atomic_inc(&lport->port_ref_cnt);
-
- memset(&lport->port_dynamic_info, 0, sizeof(struct unf_port_dynamic_info));
- memset(&lport->link_service_info, 0, sizeof(struct unf_link_service_collect));
- memset(&lport->err_code_sum, 0, sizeof(struct unf_err_code));
-}
-
-void unf_reset_lport_params(struct unf_lport *lport)
-{
- struct unf_lport *unf_lport = lport;
-
- FC_CHECK_RETURN_VOID(lport);
-
- unf_lport->link_up = UNF_PORT_LINK_DOWN;
- unf_lport->nport_id = 0;
- unf_lport->max_frame_size = max_frame_size;
- unf_lport->ed_tov = UNF_DEFAULT_EDTOV;
- unf_lport->ra_tov = UNF_DEFAULT_RATOV;
- unf_lport->fabric_node_name = 0;
-}
-
-static enum unf_lport_login_state
-unf_lport_state_online(enum unf_lport_login_state old_state,
- enum unf_lport_event lport_event)
-{
- enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
-
- switch (lport_event) {
- case UNF_EVENT_LPORT_LINK_UP:
- next_state = UNF_LPORT_ST_LINK_UP;
- break;
-
- case UNF_EVENT_LPORT_NORMAL_ENTER:
- next_state = UNF_LPORT_ST_INITIAL;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-static enum unf_lport_login_state unf_lport_state_initial(enum unf_lport_login_state old_state,
- enum unf_lport_event lport_event)
-{
- enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
-
- switch (lport_event) {
- case UNF_EVENT_LPORT_LINK_UP:
- next_state = UNF_LPORT_ST_LINK_UP;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-static enum unf_lport_login_state unf_lport_state_linkup(enum unf_lport_login_state old_state,
- enum unf_lport_event lport_event)
-{
- enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
-
- switch (lport_event) {
- case UNF_EVENT_LPORT_NORMAL_ENTER:
- next_state = UNF_LPORT_ST_FLOGI_WAIT;
- break;
-
- case UNF_EVENT_LPORT_READY:
- next_state = UNF_LPORT_ST_READY;
- break;
-
- case UNF_EVENT_LPORT_LINK_DOWN:
- next_state = UNF_LPORT_ST_INITIAL;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-static enum unf_lport_login_state unf_lport_state_flogi_wait(enum unf_lport_login_state old_state,
- enum unf_lport_event lport_event)
-{
- enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
-
- switch (lport_event) {
- case UNF_EVENT_LPORT_REMOTE_ACC:
- next_state = UNF_LPORT_ST_PLOGI_WAIT;
- break;
-
- case UNF_EVENT_LPORT_READY:
- next_state = UNF_LPORT_ST_READY;
- break;
-
- case UNF_EVENT_LPORT_REMOTE_TIMEOUT:
- next_state = UNF_LPORT_ST_LOGO;
- break;
-
- case UNF_EVENT_LPORT_LINK_DOWN:
- next_state = UNF_LPORT_ST_INITIAL;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-static enum unf_lport_login_state unf_lport_state_plogi_wait(enum unf_lport_login_state old_state,
- enum unf_lport_event lport_event)
-{
- enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
-
- switch (lport_event) {
- case UNF_EVENT_LPORT_REMOTE_ACC:
- next_state = UNF_LPORT_ST_RFT_ID_WAIT;
- break;
-
- case UNF_EVENT_LPORT_REMOTE_TIMEOUT:
- next_state = UNF_LPORT_ST_LOGO;
- break;
-
- case UNF_EVENT_LPORT_LINK_DOWN:
- next_state = UNF_LPORT_ST_INITIAL;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-static enum unf_lport_login_state
-unf_lport_state_rftid_wait(enum unf_lport_login_state old_state,
- enum unf_lport_event lport_event)
-{
- enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
-
- switch (lport_event) {
- case UNF_EVENT_LPORT_REMOTE_ACC:
- next_state = UNF_LPORT_ST_RFF_ID_WAIT;
- break;
-
- case UNF_EVENT_LPORT_REMOTE_TIMEOUT:
- next_state = UNF_LPORT_ST_LOGO;
- break;
-
- case UNF_EVENT_LPORT_LINK_DOWN:
- next_state = UNF_LPORT_ST_INITIAL;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-static enum unf_lport_login_state unf_lport_state_rffid_wait(enum unf_lport_login_state old_state,
- enum unf_lport_event lport_event)
-{
- enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
-
- switch (lport_event) {
- case UNF_EVENT_LPORT_REMOTE_ACC:
- next_state = UNF_LPORT_ST_SCR_WAIT;
- break;
-
- case UNF_EVENT_LPORT_REMOTE_TIMEOUT:
- next_state = UNF_LPORT_ST_LOGO;
- break;
-
- case UNF_EVENT_LPORT_LINK_DOWN:
- next_state = UNF_LPORT_ST_INITIAL;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-static enum unf_lport_login_state unf_lport_state_scr_wait(enum unf_lport_login_state old_state,
- enum unf_lport_event lport_event)
-{
- enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
-
- switch (lport_event) {
- case UNF_EVENT_LPORT_REMOTE_ACC:
- next_state = UNF_LPORT_ST_READY;
- break;
-
- case UNF_EVENT_LPORT_REMOTE_TIMEOUT:
- next_state = UNF_LPORT_ST_LOGO;
- break;
-
- case UNF_EVENT_LPORT_LINK_DOWN:
- next_state = UNF_LPORT_ST_INITIAL;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-static enum unf_lport_login_state
-unf_lport_state_logo(enum unf_lport_login_state old_state,
- enum unf_lport_event lport_event)
-{
- enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
-
- switch (lport_event) {
- case UNF_EVENT_LPORT_NORMAL_ENTER:
- next_state = UNF_LPORT_ST_OFFLINE;
- break;
-
- case UNF_EVENT_LPORT_LINK_DOWN:
- next_state = UNF_LPORT_ST_INITIAL;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-static enum unf_lport_login_state unf_lport_state_offline(enum unf_lport_login_state old_state,
- enum unf_lport_event lport_event)
-{
- enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
-
- switch (lport_event) {
- case UNF_EVENT_LPORT_ONLINE:
- next_state = UNF_LPORT_ST_ONLINE;
- break;
-
- case UNF_EVENT_LPORT_RESET:
- next_state = UNF_LPORT_ST_RESET;
- break;
-
- case UNF_EVENT_LPORT_LINK_DOWN:
- next_state = UNF_LPORT_ST_INITIAL;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-static enum unf_lport_login_state unf_lport_state_reset(enum unf_lport_login_state old_state,
- enum unf_lport_event lport_event)
-{
- enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
-
- switch (lport_event) {
- case UNF_EVENT_LPORT_NORMAL_ENTER:
- next_state = UNF_LPORT_ST_INITIAL;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-static enum unf_lport_login_state unf_lport_state_ready(enum unf_lport_login_state old_state,
- enum unf_lport_event lport_event)
-{
- enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
-
- switch (lport_event) {
- case UNF_EVENT_LPORT_LINK_DOWN:
- next_state = UNF_LPORT_ST_INITIAL;
- break;
-
- case UNF_EVENT_LPORT_RESET:
- next_state = UNF_LPORT_ST_RESET;
- break;
-
- case UNF_EVENT_LPORT_OFFLINE:
- next_state = UNF_LPORT_ST_LOGO;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-static struct unf_lport_state_ma lport_state[] = {
- {UNF_LPORT_ST_ONLINE, unf_lport_state_online},
- {UNF_LPORT_ST_INITIAL, unf_lport_state_initial},
- {UNF_LPORT_ST_LINK_UP, unf_lport_state_linkup},
- {UNF_LPORT_ST_FLOGI_WAIT, unf_lport_state_flogi_wait},
- {UNF_LPORT_ST_PLOGI_WAIT, unf_lport_state_plogi_wait},
- {UNF_LPORT_ST_RFT_ID_WAIT, unf_lport_state_rftid_wait},
- {UNF_LPORT_ST_RFF_ID_WAIT, unf_lport_state_rffid_wait},
- {UNF_LPORT_ST_SCR_WAIT, unf_lport_state_scr_wait},
- {UNF_LPORT_ST_LOGO, unf_lport_state_logo},
- {UNF_LPORT_ST_OFFLINE, unf_lport_state_offline},
- {UNF_LPORT_ST_RESET, unf_lport_state_reset},
- {UNF_LPORT_ST_READY, unf_lport_state_ready},
-};
-
-void unf_lport_state_ma(struct unf_lport *lport,
- enum unf_lport_event lport_event)
-{
- enum unf_lport_login_state old_state = UNF_LPORT_ST_ONLINE;
- enum unf_lport_login_state next_state = UNF_LPORT_ST_ONLINE;
- u32 index = 0;
-
- FC_CHECK_RETURN_VOID(lport);
-
- old_state = lport->states;
-
- while (index < (sizeof(lport_state) / sizeof(struct unf_lport_state_ma))) {
- if (lport->states == lport_state[index].lport_state) {
- next_state = lport_state[index].lport_state_ma(old_state, lport_event);
- break;
- }
- index++;
- }
-
- if (index >= (sizeof(lport_state) / sizeof(struct unf_lport_state_ma))) {
- next_state = old_state;
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_MAJOR, "[info]Port(0x%x) hold state(0x%x)",
- lport->port_id, lport->states);
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) with old state(0x%x) event(0x%x) next state(0x%x)",
- lport->port_id, old_state, lport_event, next_state);
-
- unf_set_lport_state(lport, next_state);
-}
-
-u32 unf_lport_retry_flogi(struct unf_lport *lport)
-{
- struct unf_rport *unf_rport = NULL;
- u32 ret = UNF_RETURN_ERROR;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- /* Get (new) R_Port */
- unf_rport = unf_get_rport_by_nport_id(lport, UNF_FC_FID_FLOGI);
- unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY, UNF_FC_FID_FLOGI);
- if (unlikely(!unf_rport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_WARN, "[warn]Port(0x%x) allocate RPort failed",
- lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- /* Check L_Port state */
- spin_lock_irqsave(&lport->lport_state_lock, flag);
- if (lport->states != UNF_LPORT_ST_FLOGI_WAIT) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) no need to retry FLOGI with state(0x%x)",
- lport->port_id, lport->states);
-
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
- return RETURN_OK;
- }
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport->nport_id = UNF_FC_FID_FLOGI;
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- /* Send FLOGI or FDISC */
- if (lport->root_lport != lport) {
- ret = unf_send_fdisc(lport, unf_rport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) send FDISC failed", lport->port_id);
-
- /* Do L_Port recovery */
- unf_lport_error_recovery(lport);
- }
- } else {
- ret = unf_send_flogi(lport, unf_rport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) send FLOGI failed\n", lport->port_id);
-
- /* Do L_Port recovery */
- unf_lport_error_recovery(lport);
- }
- }
-
- return ret;
-}
-
-u32 unf_lport_name_server_register(struct unf_lport *lport,
- enum unf_lport_login_state state)
-{
- struct unf_rport *unf_rport = NULL;
- ulong flag = 0;
- u32 ret = UNF_RETURN_ERROR;
- u32 fabric_id = UNF_FC_FID_DIR_SERV;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- if (state == UNF_LPORT_ST_SCR_WAIT)
- fabric_id = UNF_FC_FID_FCTRL;
-
- /* Get (safe) R_Port */
- unf_rport =
- unf_get_rport_by_nport_id(lport, fabric_id);
- unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY,
- fabric_id);
- if (!unf_rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_WARN, "[warn]Port(0x%x) allocate RPort failed",
- lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- /* Update R_Port & L_Port state */
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport->nport_id = fabric_id;
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- spin_lock_irqsave(&lport->lport_state_lock, flag);
- unf_lport_state_ma(lport, UNF_EVENT_LPORT_NORMAL_ENTER);
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- switch (state) {
- /* RFT_ID */
- case UNF_LPORT_ST_RFT_ID_WAIT:
- ret = unf_send_rft_id(lport, unf_rport);
- break;
- /* RFF_ID */
- case UNF_LPORT_ST_RFF_ID_WAIT:
- ret = unf_send_rff_id(lport, unf_rport, UNF_FC4_FCP_TYPE);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) register SCSI FC4Type to fabric(0xfffffc) failed",
- lport->nport_id);
- unf_lport_error_recovery(lport);
- }
- break;
-
- /* SCR */
- case UNF_LPORT_ST_SCR_WAIT:
- ret = unf_send_scr(lport, unf_rport);
- break;
-
- /* PLOGI */
- case UNF_LPORT_ST_PLOGI_WAIT:
- default:
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- ret = unf_send_plogi(lport, unf_rport);
- break;
- }
-
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) register fabric(0xfffffc) failed",
- lport->nport_id);
-
- /* Do L_Port recovery */
- unf_lport_error_recovery(lport);
- }
-
- return ret;
-}
-
-u32 unf_lport_enter_sns_logo(struct unf_lport *lport, struct unf_rport *rport)
-{
- struct unf_rport *unf_rport = NULL;
- ulong flag = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- if (!rport)
- unf_rport = unf_get_rport_by_nport_id(lport, UNF_FC_FID_DIR_SERV);
- else
- unf_rport = rport;
-
- if (!unf_rport) {
- spin_lock_irqsave(&lport->lport_state_lock, flag);
- unf_lport_state_ma(lport, UNF_EVENT_LPORT_NORMAL_ENTER);
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- return RETURN_OK;
- }
-
- /* Update L_Port & R_Port state */
- spin_lock_irqsave(&lport->lport_state_lock, flag);
- unf_lport_state_ma(lport, UNF_EVENT_LPORT_NORMAL_ENTER);
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- /* Do R_Port LOGO state */
- unf_rport_enter_logo(lport, unf_rport);
-
- return ret;
-}
-
-void unf_lport_enter_sns_plogi(struct unf_lport *lport)
-{
- /* Fabric or Public Loop Mode: Login with Name server */
- struct unf_lport *unf_lport = lport;
- struct unf_rport *unf_rport = NULL;
- ulong flag = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VOID(lport);
-
- /* Get (safe) R_Port */
- unf_rport = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_DIR_SERV);
- if (unf_rport) {
- /* for port swap: Delete old R_Port if necessary */
- if (unf_rport->local_nport_id != lport->nport_id) {
- unf_rport_immediate_link_down(lport, unf_rport);
- unf_rport = NULL;
- }
- }
-
- unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY,
- UNF_FC_FID_DIR_SERV);
- if (!unf_rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_WARN, "[warn]Port(0x%x) allocate RPort failed",
- lport->port_id);
-
- unf_lport_error_recovery(unf_lport);
- return;
- }
-
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport->nport_id = UNF_FC_FID_DIR_SERV;
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- /* Send PLOGI to Fabric(0xfffffc) */
- ret = unf_send_plogi(unf_lport, unf_rport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) send PLOGI to name server failed",
- lport->port_id);
-
- unf_lport_error_recovery(unf_lport);
- }
-}
-
-int unf_get_port_params(void *arg_in, void *arg_out)
-{
- struct unf_lport *unf_lport = (struct unf_lport *)arg_in;
- struct unf_low_level_port_mgr_op *port_mgr = NULL;
- struct unf_port_param port_params = {0};
-
- FC_CHECK_RETURN_VALUE(arg_in, UNF_RETURN_ERROR);
-
- port_mgr = &unf_lport->low_level_func.port_mgr_op;
- if (!port_mgr->ll_port_config_get) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_WARN,
- "[warn]Port(0x%x) low level port_config_get function is NULL",
- unf_lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
- "[warn]Port(0x%x) get parameters with default:R_A_TOV(%d) E_D_TOV(%d)",
- unf_lport->port_id, UNF_DEFAULT_FABRIC_RATOV,
- UNF_DEFAULT_EDTOV);
-
- port_params.ra_tov = UNF_DEFAULT_FABRIC_RATOV;
- port_params.ed_tov = UNF_DEFAULT_EDTOV;
-
- /* Update parameters with Fabric mode */
- if (unf_lport->act_topo == UNF_ACT_TOP_PUBLIC_LOOP ||
- unf_lport->act_topo == UNF_ACT_TOP_P2P_FABRIC) {
- unf_lport->ra_tov = port_params.ra_tov;
- unf_lport->ed_tov = port_params.ed_tov;
- }
-
- return RETURN_OK;
-}
-
-u32 unf_lport_enter_flogi(struct unf_lport *lport)
-{
- struct unf_rport *unf_rport = NULL;
- struct unf_cm_event_report *event = NULL;
- ulong flag = 0;
- u32 ret = UNF_RETURN_ERROR;
- u32 nport_id = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- /* Get (safe) R_Port */
- nport_id = UNF_FC_FID_FLOGI;
- unf_rport = unf_get_rport_by_nport_id(lport, UNF_FC_FID_FLOGI);
-
- unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY, nport_id);
- if (!unf_rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) allocate RPort failed",
- lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- /* Update L_Port state */
- spin_lock_irqsave(&lport->lport_state_lock, flag);
- unf_lport_state_ma(lport, UNF_EVENT_LPORT_NORMAL_ENTER);
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- /* Update R_Port N_Port_ID */
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport->nport_id = UNF_FC_FID_FLOGI;
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- event = unf_get_one_event_node(lport);
- if (event) {
- event->lport = lport;
- event->event_asy_flag = UNF_EVENT_ASYN;
- event->unf_event_task = unf_get_port_params;
- event->para_in = (void *)lport;
- unf_post_one_event_node(lport, event);
- }
-
- if (lport->root_lport != lport) {
- /* for NPIV */
- ret = unf_send_fdisc(lport, unf_rport);
- if (ret != RETURN_OK)
- unf_lport_error_recovery(lport);
- } else {
- /* for Physical Port */
- ret = unf_send_flogi(lport, unf_rport);
- if (ret != RETURN_OK)
- unf_lport_error_recovery(lport);
- }
-
- return ret;
-}
-
-void unf_set_lport_state(struct unf_lport *lport, enum unf_lport_login_state state)
-{
- FC_CHECK_RETURN_VOID(lport);
- if (lport->states != state)
- lport->retries = 0;
-
- lport->states = state;
-}
-
-static void unf_lport_timeout(struct work_struct *work)
-{
- struct unf_lport *unf_lport = NULL;
- enum unf_lport_login_state state = UNF_LPORT_ST_READY;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(work);
- unf_lport = container_of(work, struct unf_lport, retry_work.work);
- spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
- state = unf_lport->states;
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) is timeout with state(0x%x)",
- unf_lport->port_id, state);
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
-
- switch (state) {
- /* FLOGI retry */
- case UNF_LPORT_ST_FLOGI_WAIT:
- (void)unf_lport_retry_flogi(unf_lport);
- break;
-
- case UNF_LPORT_ST_PLOGI_WAIT:
- case UNF_LPORT_ST_RFT_ID_WAIT:
- case UNF_LPORT_ST_RFF_ID_WAIT:
- case UNF_LPORT_ST_SCR_WAIT:
- (void)unf_lport_name_server_register(unf_lport, state);
- break;
-
- /* Send LOGO External */
- case UNF_LPORT_ST_LOGO:
- break;
-
- /* Do nothing */
- case UNF_LPORT_ST_OFFLINE:
- case UNF_LPORT_ST_READY:
- case UNF_LPORT_ST_RESET:
- case UNF_LPORT_ST_ONLINE:
- case UNF_LPORT_ST_INITIAL:
- case UNF_LPORT_ST_LINK_UP:
-
- unf_lport->retries = 0;
- break;
- default:
- break;
- }
-
- unf_lport_ref_dec_to_destroy(unf_lport);
-}
-
-static void unf_lport_config(struct unf_lport *lport)
-{
- FC_CHECK_RETURN_VOID(lport);
-
- INIT_DELAYED_WORK(&lport->retry_work, unf_lport_timeout);
-
- lport->max_retry_count = UNF_MAX_RETRY_COUNT;
- lport->retries = 0;
-}
-
-void unf_lport_error_recovery(struct unf_lport *lport)
-{
- ulong delay = 0;
- ulong flag = 0;
- int ret_val = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VOID(lport);
-
- ret = unf_lport_ref_inc(lport);
- if (unlikely(ret != RETURN_OK)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) is removing and no need process",
- lport->port_id);
- return;
- }
-
- spin_lock_irqsave(&lport->lport_state_lock, flag);
-
- /* Port State: removing */
- if (lport->port_removing) {
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) is removing and no need process",
- lport->port_id);
-
- unf_lport_ref_dec_to_destroy(lport);
- return;
- }
-
- /* Port State: offline */
- if (lport->states == UNF_LPORT_ST_OFFLINE) {
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) is offline and no need process",
- lport->port_id);
-
- unf_lport_ref_dec_to_destroy(lport);
- return;
- }
-
- /* Queue work state check */
- if (delayed_work_pending(&lport->retry_work)) {
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- unf_lport_ref_dec_to_destroy(lport);
- return;
- }
-
- /* Do retry operation */
- if (lport->retries < lport->max_retry_count) {
- lport->retries++;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x_0x%x) enter recovery and retry %u times",
- lport->port_id, lport->nport_id, lport->retries);
-
- delay = (ulong)lport->ed_tov;
- ret_val = queue_delayed_work(unf_wq, &lport->retry_work,
- (ulong)msecs_to_jiffies((u32)delay));
- if (ret_val != 0) {
- atomic_inc(&lport->port_ref_cnt);
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) queue work success and reference count is %d",
- lport->port_id,
- atomic_read(&lport->port_ref_cnt));
- }
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
- } else {
- unf_lport_state_ma(lport, UNF_EVENT_LPORT_REMOTE_TIMEOUT);
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) register operation timeout and do LOGO",
- lport->port_id);
-
- (void)unf_lport_enter_sns_logo(lport, NULL);
- }
-
- unf_lport_ref_dec_to_destroy(lport);
-}
-
-struct unf_lport *unf_cm_lookup_vport_by_vp_index(struct unf_lport *lport, u16 vp_index)
-{
- FC_CHECK_RETURN_VALUE(lport, NULL);
-
- if (vp_index == 0)
- return lport;
-
- if (!lport->lport_mgr_temp.unf_look_up_vport_by_index) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) function do look up vport by index is NULL",
- lport->port_id);
-
- return NULL;
- }
-
- return lport->lport_mgr_temp.unf_look_up_vport_by_index(lport, vp_index);
-}
-
-struct unf_lport *unf_cm_lookup_vport_by_did(struct unf_lport *lport, u32 did)
-{
- FC_CHECK_RETURN_VALUE(lport, NULL);
-
- if (!lport->lport_mgr_temp.unf_look_up_vport_by_did) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) function do look up vport by D_ID is NULL",
- lport->port_id);
-
- return NULL;
- }
-
- return lport->lport_mgr_temp.unf_look_up_vport_by_did(lport, did);
-}
-
-struct unf_lport *unf_cm_lookup_vport_by_wwpn(struct unf_lport *lport, u64 wwpn)
-{
- FC_CHECK_RETURN_VALUE(lport, NULL);
-
- if (!lport->lport_mgr_temp.unf_look_up_vport_by_wwpn) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) function do look up vport by WWPN is NULL",
- lport->port_id);
-
- return NULL;
- }
-
- return lport->lport_mgr_temp.unf_look_up_vport_by_wwpn(lport, wwpn);
-}
-
-void unf_cm_vport_remove(struct unf_lport *vport)
-{
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VOID(vport);
- unf_lport = vport->root_lport;
- FC_CHECK_RETURN_VOID(unf_lport);
-
- if (!unf_lport->lport_mgr_temp.unf_vport_remove) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) function do vport remove is NULL",
- unf_lport->port_id);
- return;
- }
-
- unf_lport->lport_mgr_temp.unf_vport_remove(vport);
-}
diff --git a/drivers/scsi/spfc/common/unf_lport.h b/drivers/scsi/spfc/common/unf_lport.h
deleted file mode 100644
index dbd531f15b13..000000000000
--- a/drivers/scsi/spfc/common/unf_lport.h
+++ /dev/null
@@ -1,519 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_LPORT_H
-#define UNF_LPORT_H
-
-#include "unf_type.h"
-#include "unf_disc.h"
-#include "unf_event.h"
-#include "unf_common.h"
-
-#define UNF_PORT_TYPE_FC 0
-#define UNF_PORT_TYPE_DISC 1
-#define UNF_FW_UPDATE_PATH_LEN_MAX 255
-#define UNF_EXCHG_MGR_NUM (4)
-#define UNF_ERR_CODE_PRINT_TIME 10 /* error code print times */
-#define UNF_MAX_IO_TYPE_STAT_NUM 48 /* IO abnormal max counter */
-#define UNF_MAX_IO_RETURN_VALUE 0x12
-#define UNF_MAX_SCSI_CMD 0xFF
-#define UNF_MAX_LPRT_SCSI_ID_MAP 2048
-
-enum unf_scsi_error_handle_type {
- UNF_SCSI_ABORT_IO_TYPE = 0,
- UNF_SCSI_DEVICE_RESET_TYPE,
- UNF_SCSI_TARGET_RESET_TYPE,
- UNF_SCSI_BUS_RESET_TYPE,
- UNF_SCSI_HOST_RESET_TYPE,
- UNF_SCSI_VIRTUAL_RESET_TYPE,
- UNF_SCSI_ERROR_HANDLE_BUTT
-};
-
-enum unf_lport_destroy_step {
- UNF_LPORT_DESTROY_STEP_0_SET_REMOVING = 0,
- UNF_LPORT_DESTROY_STEP_1_REPORT_PORT_OUT,
- UNF_LPORT_DESTROY_STEP_2_CLOSE_ROUTE,
- UNF_LPORT_DESTROY_STEP_3_DESTROY_EVENT_CENTER,
- UNF_LPORT_DESTROY_STEP_4_DESTROY_EXCH_MGR,
- UNF_LPORT_DESTROY_STEP_5_DESTROY_ESGL_POOL,
- UNF_LPORT_DESTROY_STEP_6_DESTROY_DISC_MGR,
- UNF_LPORT_DESTROY_STEP_7_DESTROY_XCHG_MGR_TMP,
- UNF_LPORT_DESTROY_STEP_8_DESTROY_RPORT_MG_TMP,
- UNF_LPORT_DESTROY_STEP_9_DESTROY_LPORT_MG_TMP,
- UNF_LPORT_DESTROY_STEP_10_DESTROY_SCSI_TABLE,
- UNF_LPORT_DESTROY_STEP_11_UNREG_TGT_HOST,
- UNF_LPORT_DESTROY_STEP_12_UNREG_SCSI_HOST,
- UNF_LPORT_DESTROY_STEP_13_DESTROY_LW_INTERFACE,
- UNF_LPORT_DESTROY_STEP_BUTT
-};
-
-enum unf_lport_enhanced_feature {
- /* Enhance GFF feature connect even if fail to get GFF feature */
- UNF_LPORT_ENHANCED_FEATURE_ENHANCED_GFF = 0x0001,
- UNF_LPORT_ENHANCED_FEATURE_IO_TRANSFERLIST = 0x0002, /* Enhance IO balance */
- UNF_LPORT_ENHANCED_FEATURE_IO_CHECKPOINT = 0x0004, /* Enhance IO check */
- UNF_LPORT_ENHANCED_FEATURE_CLOSE_FW_ROUTE = 0x0008, /* Close FW ROUTE */
- /* lowest frequency read SFP information */
- UNF_LPORT_ENHANCED_FEATURE_READ_SFP_ONCE = 0x0010,
- UNF_LPORT_ENHANCED_FEATURE_BUTT
-};
-
-enum unf_lport_login_state {
- UNF_LPORT_ST_ONLINE = 0x2000, /* uninitialized */
- UNF_LPORT_ST_INITIAL, /* initialized and LinkDown */
- UNF_LPORT_ST_LINK_UP, /* initialized and Link UP */
- UNF_LPORT_ST_FLOGI_WAIT, /* waiting for FLOGI completion */
- UNF_LPORT_ST_PLOGI_WAIT, /* waiting for PLOGI completion */
- UNF_LPORT_ST_RNN_ID_WAIT, /* waiting for RNN_ID completion */
- UNF_LPORT_ST_RSNN_NN_WAIT, /* waiting for RSNN_NN completion */
- UNF_LPORT_ST_RSPN_ID_WAIT, /* waiting for RSPN_ID completion */
- UNF_LPORT_ST_RPN_ID_WAIT, /* waiting for RPN_ID completion */
- UNF_LPORT_ST_RFT_ID_WAIT, /* waiting for RFT_ID completion */
- UNF_LPORT_ST_RFF_ID_WAIT, /* waiting for RFF_ID completion */
- UNF_LPORT_ST_SCR_WAIT, /* waiting for SCR completion */
- UNF_LPORT_ST_READY, /* ready for use */
- UNF_LPORT_ST_LOGO, /* waiting for LOGO completion */
- UNF_LPORT_ST_RESET, /* being reset and will restart */
- UNF_LPORT_ST_OFFLINE, /* offline */
- UNF_LPORT_ST_BUTT
-};
-
-enum unf_lport_event {
- UNF_EVENT_LPORT_NORMAL_ENTER = 0x8000, /* next state enter */
- UNF_EVENT_LPORT_ONLINE = 0x8001, /* LPort link up */
- UNF_EVENT_LPORT_LINK_UP = 0x8002, /* LPort link up */
- UNF_EVENT_LPORT_LINK_DOWN = 0x8003, /* LPort link down */
- UNF_EVENT_LPORT_OFFLINE = 0x8004, /* LPort being stopped */
- UNF_EVENT_LPORT_RESET = 0x8005,
- UNF_EVENT_LPORT_REMOTE_ACC = 0x8006, /* next state enter */
- UNF_EVENT_LPORT_REMOTE_RJT = 0x8007, /* rport reject */
- UNF_EVENT_LPORT_REMOTE_TIMEOUT = 0x8008, /* rport time out */
- UNF_EVENT_LPORT_READY = 0x8009,
- UNF_EVENT_LPORT_REMOTE_BUTT
-};
-
-struct unf_cm_disc_mg_template {
- /* start input:L_Port,return:ok/fail */
- u32 (*unf_disc_start)(void *lport);
- /* stop input: L_Port,return:ok/fail */
- u32 (*unf_disc_stop)(void *lport);
-
- /* Callback after disc complete[with event:ok/fail]. */
- void (*unf_disc_callback)(void *lport, u32 result);
-};
-
-struct unf_chip_manage_info {
- struct list_head list_chip_thread_entry;
- struct list_head list_head;
- spinlock_t chip_event_list_lock;
- struct task_struct *thread;
- u32 list_num;
- u32 slot_id;
- u8 chip_id;
- u8 rsv;
- u8 sfp_9545_fault;
- u8 sfp_power_fault;
- atomic_t ref_cnt;
- u32 thread_exit;
- struct unf_chip_info chip_info;
- atomic_t card_loop_test_flag;
- spinlock_t card_loop_back_state_lock;
- char update_path[UNF_FW_UPDATE_PATH_LEN_MAX];
-};
-
-enum unf_timer_type {
- UNF_TIMER_TYPE_TGT_IO,
- UNF_TIMER_TYPE_INI_IO,
- UNF_TIMER_TYPE_REQ_IO,
- UNF_TIMER_TYPE_TGT_RRQ,
- UNF_TIMER_TYPE_INI_RRQ,
- UNF_TIMER_TYPE_SFS,
- UNF_TIMER_TYPE_INI_ABTS
-};
-
-struct unf_cm_xchg_mgr_template {
- void *(*unf_xchg_get_free_and_init)(void *lport, u32 xchg_type);
- void *(*unf_look_up_xchg_by_id)(void *lport, u16 ox_id, u32 oid);
- void *(*unf_look_up_xchg_by_tag)(void *lport, u16 hot_pool_tag);
- void (*unf_xchg_release)(void *lport, void *xchg);
- void (*unf_xchg_mgr_io_xchg_abort)(void *lport, void *rport, u32 sid, u32 did,
- u32 extra_io_state);
- void (*unf_xchg_mgr_sfs_xchg_abort)(void *lport, void *rport, u32 sid, u32 did);
- void (*unf_xchg_add_timer)(void *xchg, ulong time_ms, enum unf_timer_type time_type);
- void (*unf_xchg_cancel_timer)(void *xchg);
- void (*unf_xchg_abort_all_io)(void *lport, u32 xchg_type, bool clean);
- void *(*unf_look_up_xchg_by_cmnd_sn)(void *lport, u64 command_sn,
- u32 world_id, void *pinitiator);
- void (*unf_xchg_abort_by_lun)(void *lport, void *rport, u64 lun_id, void *xchg,
- bool abort_all_lun_flag);
-
- void (*unf_xchg_abort_by_session)(void *lport, void *rport);
-};
-
-struct unf_cm_lport_template {
- void *(*unf_look_up_vport_by_index)(void *lport, u16 vp_index);
- void *(*unf_look_up_vport_by_port_id)(void *lport, u32 port_id);
- void *(*unf_look_up_vport_by_wwpn)(void *lport, u64 wwpn);
- void *(*unf_look_up_vport_by_did)(void *lport, u32 did);
- void (*unf_vport_remove)(void *vport);
-};
-
-struct unf_lport_state_ma {
- enum unf_lport_login_state lport_state;
- enum unf_lport_login_state (*lport_state_ma)(enum unf_lport_login_state old_state,
- enum unf_lport_event event);
-};
-
-struct unf_rport_pool {
- u32 rport_pool_count;
- void *rport_pool_add;
- struct list_head list_rports_pool;
- spinlock_t rport_free_pool_lock;
- /* for synchronous reuse RPort POOL completion */
- struct completion *rport_pool_completion;
- ulong *rpi_bitmap;
-};
-
-struct unf_vport_pool {
- u16 vport_pool_count;
- void *vport_pool_addr;
- struct list_head list_vport_pool;
- spinlock_t vport_pool_lock;
- struct completion *vport_pool_completion;
- u16 slab_next_index; /* Next free vport */
- u16 slab_total_sum; /* Total Vport num */
- struct unf_lport *vport_slab[ARRAY_INDEX_0];
-};
-
-struct unf_esgl_pool {
- u32 esgl_pool_count;
- void *esgl_pool_addr;
- struct list_head list_esgl_pool;
- spinlock_t esgl_pool_lock;
- struct buf_describe esgl_buff_list;
-};
-
-/* little endian */
-struct unf_port_id_page {
- struct list_head list_node_rscn;
- u8 port_id_port;
- u8 port_id_area;
- u8 port_id_domain;
- u8 addr_format : 2;
- u8 event_qualifier : 4;
- u8 reserved : 2;
-};
-
-struct unf_rscn_mgr {
- spinlock_t rscn_id_list_lock;
- u32 free_rscn_count;
- struct list_head list_free_rscn_page;
- struct list_head list_using_rscn_page;
- void *rscn_pool_add;
- struct unf_port_id_page *(*unf_get_free_rscn_node)(void *rscn_mg);
- void (*unf_release_rscn_node)(void *rscn_mg, void *rscn_node);
-};
-
-struct unf_disc_rport_mg {
- void *disc_pool_add;
- struct list_head list_disc_rports_pool;
- struct list_head list_disc_rports_busy;
-};
-
-struct unf_disc_manage_info {
- struct list_head list_head;
- spinlock_t disc_event_list_lock;
- atomic_t disc_contrl_size;
-
- u32 thread_exit;
- struct task_struct *thread;
-};
-
-struct unf_disc {
- u32 retry_count;
- u32 max_retry_count;
- u32 disc_flag;
-
- struct completion *disc_completion;
- atomic_t disc_ref_cnt;
-
- struct list_head list_busy_rports;
- struct list_head list_delete_rports;
- struct list_head list_destroy_rports;
-
- spinlock_t rport_busy_pool_lock;
-
- struct unf_lport *lport;
- enum unf_disc_state states;
- struct delayed_work disc_work;
-
- /* Disc operation template */
- struct unf_cm_disc_mg_template disc_temp;
-
- /* UNF_INIT_DISC/UNF_RSCN_DISC */
- u32 disc_option;
-
- /* RSCN list */
- struct unf_rscn_mgr rscn_mgr;
- struct unf_disc_rport_mg disc_rport_mgr;
- struct unf_disc_manage_info disc_thread_info;
-
- u64 last_disc_jiff;
-};
-
-enum unf_service_item {
- UNF_SERVICE_ITEM_FLOGI = 0,
- UNF_SERVICE_ITEM_PLOGI,
- UNF_SERVICE_ITEM_PRLI,
- UNF_SERVICE_ITEM_RSCN,
- UNF_SERVICE_ITEM_ABTS,
- UNF_SERVICE_ITEM_PDISC,
- UNF_SERVICE_ITEM_ADISC,
- UNF_SERVICE_ITEM_LOGO,
- UNF_SERVICE_ITEM_SRR,
- UNF_SERVICE_ITEM_RRQ,
- UNF_SERVICE_ITEM_ECHO,
- UNF_SERVICE_BUTT
-};
-
-/* Link service counter */
-struct unf_link_service_collect {
- u64 service_cnt[UNF_SERVICE_BUTT];
-};
-
-struct unf_pcie_error_count {
- u32 pcie_error_count[UNF_PCIE_BUTT];
-};
-
-#define INVALID_WWPN 0
-
-enum unf_device_scsi_state {
- UNF_SCSI_ST_INIT = 0,
- UNF_SCSI_ST_OFFLINE,
- UNF_SCSI_ST_ONLINE,
- UNF_SCSI_ST_DEAD,
- UNF_SCSI_ST_BUTT
-};
-
-struct unf_wwpn_dfx_counter_info {
- atomic64_t io_done_cnt[UNF_MAX_IO_RETURN_VALUE];
- atomic64_t scsi_cmd_cnt[UNF_MAX_SCSI_CMD];
- atomic64_t target_busy;
- atomic64_t host_busy;
- atomic_t error_handle[UNF_SCSI_ERROR_HANDLE_BUTT];
- atomic_t error_handle_result[UNF_SCSI_ERROR_HANDLE_BUTT];
- atomic_t device_alloc;
- atomic_t device_destroy;
-};
-
-#define UNF_MAX_LUN_PER_TARGET 256
-struct unf_wwpn_rport_info {
- u64 wwpn;
- struct unf_rport *rport; /* Rport which is link-up */
- void *lport; /* Lport */
- u32 target_id; /* target_id assigned by SCSI */
- u32 las_ten_scsi_state;
- atomic_t scsi_state;
- struct unf_wwpn_dfx_counter_info *dfx_counter;
- struct delayed_work loss_tmo_work;
- bool need_scan;
- struct list_head fc_lun_list;
- u8 *lun_qos_level;
-};
-
-struct unf_rport_scsi_id_image {
- spinlock_t scsi_image_table_lock;
- struct unf_wwpn_rport_info
- *wwn_rport_info_table;
- u32 max_scsi_id;
-};
-
-enum unf_lport_dirty_flag {
- UNF_LPORT_DIRTY_FLAG_NONE = 0,
- UNF_LPORT_DIRTY_FLAG_XCHGMGR_DIRTY = 0x100,
- UNF_LPORT_DIRTY_FLAG_RPORT_POOL_DIRTY = 0x200,
- UNF_LPORT_DIRTY_FLAG_DISC_DIRTY = 0x400,
- UNF_LPORT_DIRTY_FLAG_BUTT
-};
-
-typedef struct unf_rport *(*unf_rport_set_qualifier)(struct unf_lport *lport,
- struct unf_rport *rport_by_nport_id,
- struct unf_rport *rport_by_wwpn,
- u64 wwpn, u32 sid);
-
-typedef u32 (*unf_tmf_status_recovery)(void *rport, void *xchg);
-
-enum unf_start_work_state {
- UNF_START_WORK_STOP,
- UNF_START_WORK_BEGIN,
- UNF_START_WORK_COMPLETE
-};
-
-struct unf_qos_info {
- u64 wwpn;
- u32 nport_id;
- enum unf_rport_qos_level qos_level;
- struct list_head entry_qos_info;
-};
-
-struct unf_ini_private_info {
- u32 driver_type; /* Driver Type */
- void *lower; /* driver private pointer */
-};
-
-struct unf_product_host_info {
- void *tgt_host;
- struct Scsi_Host *host;
- struct unf_ini_private_info drv_private_info;
- struct Scsi_Host scsihost;
-};
-
-struct unf_lport {
- u32 port_type; /* Port Type, fc or fcoe */
- atomic_t port_ref_cnt; /* LPort reference counter */
- void *fc_port; /* hard adapter hba pointer */
- void *rport, *drport; /* Used for SCSI interface */
- void *vport;
- ulong system_io_bus_num;
-
- struct unf_product_host_info host_info; /* scsi host mg */
- struct unf_rport_scsi_id_image rport_scsi_table;
- bool port_removing;
- bool io_allowed;
- bool port_dirt_exchange;
-
- spinlock_t xchg_mgr_lock;
- struct list_head list_xchg_mgr_head;
- struct list_head list_drty_xchg_mgr_head;
- void *xchg_mgr[UNF_EXCHG_MGR_NUM];
- bool qos_cs_ctrl;
- bool priority;
- enum unf_rport_qos_level qos_level;
- spinlock_t qos_mgr_lock;
- struct list_head list_qos_head;
- struct list_head list_vports_head; /* Vport Mg */
- struct list_head list_intergrad_vports; /* Vport intergrad list */
- struct list_head list_destroy_vports; /* Vport destroy list */
-
- struct list_head entry_vport; /* VPort entry, hook in list_vports_head */
-
- struct list_head entry_lport; /* LPort entry */
- spinlock_t lport_state_lock; /* UL Port Lock */
- struct unf_disc disc; /* Disc and rport Mg */
- struct unf_rport_pool rport_pool; /* rport pool, Vports share Lport pool */
- struct unf_esgl_pool esgl_pool; /* external sgl pool */
- u32 port_id; /* Port Management ,0x11000 etc. */
- enum unf_lport_login_state states;
- u32 link_up;
- u32 speed;
-
- u64 node_name;
- u64 port_name;
- u64 fabric_node_name;
- u32 nport_id;
- u32 max_frame_size;
- u32 ed_tov;
- u32 ra_tov;
- u32 class_of_service;
- u32 options; /* ini or tgt */
- u32 retries;
- u32 max_retry_count;
- enum unf_act_topo act_topo;
- bool switch_state; /* TRUE---->ON,false---->OFF */
- bool last_switch_state; /* TRUE---->ON,false---->OFF */
- bool bbscn_support; /* TRUE---->ON,false---->OFF */
-
- enum unf_start_work_state start_work_state;
- struct unf_cm_xchg_mgr_template xchg_mgr_temp; /* Xchg Mg operation template */
- struct unf_cm_lport_template lport_mgr_temp; /* Xchg LPort operation template */
- struct unf_low_level_functioon_op low_level_func;
- struct unf_event_mgr event_mgr; /* Disc and rport Mg */
- struct delayed_work retry_work; /* poll work or delay work */
-
- struct workqueue_struct *link_event_wq;
- struct workqueue_struct *xchg_wq;
- atomic64_t io_stat[UNF_MAX_IO_TYPE_STAT_NUM];
- struct unf_err_code err_code_sum; /* Error code counter */
- struct unf_port_dynamic_info port_dynamic_info;
- struct unf_link_service_collect link_service_info;
- struct unf_pcie_error_count pcie_error_cnt;
- unf_rport_set_qualifier unf_qualify_rport; /* Qualify Rport */
-
- unf_tmf_status_recovery unf_tmf_abnormal_recovery; /* tmf marker recovery */
-
- struct delayed_work route_timer_work; /* L_Port timer route */
-
- u16 vp_index; /* Vport Index, Lport:0 */
- u16 path_id;
- struct unf_vport_pool *vport_pool; /* Only for Lport */
- void *lport_mgr[UNF_MAX_LPRT_SCSI_ID_MAP];
- bool vport_remove_flags;
-
- void *root_lport; /* Point to physic Lport */
-
- struct completion *lport_free_completion; /* Free LPort Completion */
-
-#define UNF_LPORT_NOP 1
-#define UNF_LPORT_NORMAL 0
-
- atomic_t lport_no_operate_flag;
-
- bool loop_back_test_mode;
- bool switch_state_before_test_mode; /* TRUE---->ON,false---->OFF */
- u32 enhanced_features; /* Enhanced Features */
-
- u32 destroy_step;
- u32 dirty_flag;
- struct unf_chip_manage_info *chip_info;
-
- u8 unique_position;
- u8 sfp_power_fault_count;
- u8 sfp_9545_fault_count;
- u64 last_tx_fault_jif; /* SFP last tx fault jiffies */
- u32 target_cnt;
- /* Server card: UNF_FC_SERVER_BOARD_32_G(6) for 32G mode,
- * UNF_FC_SERVER_BOARD_16_G(7) for 16G mode
- */
- u32 card_type;
- atomic_t scsi_session_add_success;
- atomic_t scsi_session_add_failed;
- atomic_t scsi_session_del_success;
- atomic_t scsi_session_del_failed;
- atomic_t add_start_work_failed;
- atomic_t add_closing_work_failed;
- atomic_t device_alloc;
- atomic_t device_destroy;
- atomic_t session_loss_tmo;
- atomic_t alloc_scsi_id;
- atomic_t resume_scsi_id;
- atomic_t reuse_scsi_id;
- atomic64_t last_exchg_mgr_idx;
- atomic_t host_no;
- atomic64_t exchg_index;
- int scan_world_id;
- struct semaphore wmi_task_sema;
- bool ready_to_remove;
- u32 pcie_link_down_cnt;
- bool pcie_link_down;
- u8 fw_version[SPFC_VER_LEN];
- atomic_t link_lose_tmo;
- u32 max_ssq_num;
-};
-
-void unf_lport_state_ma(struct unf_lport *lport, enum unf_lport_event lport_event);
-void unf_lport_error_recovery(struct unf_lport *lport);
-void unf_set_lport_state(struct unf_lport *lport, enum unf_lport_login_state state);
-void unf_init_port_parms(struct unf_lport *lport);
-u32 unf_lport_enter_flogi(struct unf_lport *lport);
-void unf_lport_enter_sns_plogi(struct unf_lport *lport);
-u32 unf_init_disc_mgr(struct unf_lport *lport);
-u32 unf_init_lport_route(struct unf_lport *lport);
-void unf_destroy_lport_route(struct unf_lport *lport);
-void unf_reset_lport_params(struct unf_lport *lport);
-void unf_cm_mark_dirty_mem(struct unf_lport *lport, enum unf_lport_dirty_flag type);
-struct unf_lport *unf_cm_lookup_vport_by_vp_index(struct unf_lport *lport, u16 vp_index);
-struct unf_lport *unf_cm_lookup_vport_by_did(struct unf_lport *lport, u32 did);
-struct unf_lport *unf_cm_lookup_vport_by_wwpn(struct unf_lport *lport, u64 wwpn);
-void unf_cm_vport_remove(struct unf_lport *vport);
-
-#endif
diff --git a/drivers/scsi/spfc/common/unf_ls.c b/drivers/scsi/spfc/common/unf_ls.c
deleted file mode 100644
index 6a2e1fd1872f..000000000000
--- a/drivers/scsi/spfc/common/unf_ls.c
+++ /dev/null
@@ -1,4883 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "unf_ls.h"
-#include "unf_log.h"
-#include "unf_service.h"
-#include "unf_portman.h"
-#include "unf_gs.h"
-#include "unf_npiv.h"
-
-static void unf_flogi_acc_ob_callback(struct unf_xchg *xchg);
-static void unf_plogi_acc_ob_callback(struct unf_xchg *xchg);
-static void unf_prli_acc_ob_callback(struct unf_xchg *xchg);
-static void unf_rscn_acc_ob_callback(struct unf_xchg *xchg);
-static void unf_pdisc_acc_ob_callback(struct unf_xchg *xchg);
-static void unf_adisc_acc_ob_callback(struct unf_xchg *xchg);
-static void unf_logo_acc_ob_callback(struct unf_xchg *xchg);
-static void unf_logo_ob_callback(struct unf_xchg *xchg);
-static void unf_logo_callback(void *lport, void *rport, void *xchg);
-static void unf_rrq_callback(void *lport, void *rport, void *xchg);
-static void unf_rrq_ob_callback(struct unf_xchg *xchg);
-static void unf_lport_update_nport_id(struct unf_lport *lport, u32 nport_id);
-static void
-unf_lport_update_time_params(struct unf_lport *lport,
- struct unf_flogi_fdisc_payload *flogi_payload);
-
-static void unf_login_with_rport_in_n2n(struct unf_lport *lport,
- u64 remote_port_name,
- u64 remote_node_name);
-#define UNF_LOWLEVEL_BBCREDIT 0x6
-#define UNF_DEFAULT_BB_SC_N 0
-
-#define UNF_ECHO_REQ_SIZE 0
-#define UNF_ECHO_WAIT_SEM_TIMEOUT(lport) (2 * (ulong)(lport)->ra_tov)
-
-#define UNF_SERVICE_COLLECT(service_collect, item) \
- do { \
- if ((item) < UNF_SERVICE_BUTT) { \
- (service_collect).service_cnt[(item)]++; \
- } \
- } while (0)
-
-static void unf_check_rport_need_delay_prli(struct unf_lport *lport,
- struct unf_rport *rport,
- u32 port_feature)
-{
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
-
- port_feature &= UNF_PORT_MODE_BOTH;
-
- /* Used for: L_Port has INI mode & R_Port is not SW */
- if (rport->nport_id < UNF_FC_FID_DOM_MGR) {
- /*
- * 1. immediately: R_Port only with TGT, or
- * L_Port only with INI & R_Port has TGT mode, send PRLI
- * immediately
- */
- if ((port_feature == UNF_PORT_MODE_TGT ||
- lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) ||
- (UNF_PORT_MODE_TGT == (port_feature & UNF_PORT_MODE_TGT))) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_MAJOR,
- "[info]LOGIN: Port(0x%x_0x%x) Rport(0x%x) with feature(0x%x) send PRLI",
- lport->port_id, lport->nport_id,
- rport->nport_id, port_feature);
- ret = unf_send_prli(lport, rport, ELS_PRLI);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) Rport(0x%x) with feature(0x%x) send PRLI failed",
- lport->port_id, lport->nport_id,
- rport->nport_id, port_feature);
-
- unf_rport_error_recovery(rport);
- }
- }
- /* 2. R_Port has BOTH mode or unknown, Delay to send PRLI */
- else if (port_feature != UNF_PORT_MODE_INI) {
- /* Prevent: PRLI done before PLOGI */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Port(0x%x_0x%x) Rport(0x%x) with feature(0x%x) delay to send PRLI",
- lport->port_id, lport->nport_id,
- rport->nport_id, port_feature);
-
- /* Delay to send PRLI to R_Port */
- unf_rport_delay_login(rport);
- } else {
- /* 3. R_Port only with INI mode: wait for R_Port's PRLI:
- * Do not care
- */
- /* Cancel recovery(timer) work */
- if (delayed_work_pending(&rport->recovery_work)) {
- if (cancel_delayed_work(&rport->recovery_work)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]LOGIN: Port(0x%x_0x%x) Rport(0x%x) with feature(0x%x) is pure INI",
- lport->port_id,
- lport->nport_id,
- rport->nport_id,
- port_feature);
-
- unf_rport_ref_dec(rport);
- }
- }
-
- /* Server: R_Port only support INI, do not care this
- * case
- */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Port(0x%x_0x%x) Rport(0x%x) with feature(0x%x) wait for PRLI",
- lport->port_id, lport->nport_id,
- rport->nport_id, port_feature);
- }
- }
-}
-
-static u32 unf_low_level_bb_credit(struct unf_lport *lport)
-{
- struct unf_lport *unf_lport = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u32 bb_credit = UNF_LOWLEVEL_BBCREDIT;
-
- if (unlikely(!lport))
- return bb_credit;
-
- unf_lport = lport;
-
- if (unlikely(!unf_lport->low_level_func.port_mgr_op.ll_port_config_get))
- return bb_credit;
-
- ret = unf_lport->low_level_func.port_mgr_op.ll_port_config_get((void *)unf_lport->fc_port,
- UNF_PORT_CFG_GET_WORKBALE_BBCREDIT,
- (void *)&bb_credit);
- if (unlikely(ret != RETURN_OK)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[warn]Port(0x%x) get BB_Credit failed, use default value(%d)",
- unf_lport->port_id, UNF_LOWLEVEL_BBCREDIT);
-
- bb_credit = UNF_LOWLEVEL_BBCREDIT;
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) with BB_Credit(%u)", unf_lport->port_id,
- bb_credit);
-
- return bb_credit;
-}
-
-u32 unf_low_level_bb_scn(struct unf_lport *lport)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_low_level_port_mgr_op *port_mgr = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u32 bb_scn = UNF_DEFAULT_BB_SC_N;
-
- if (unlikely(!lport))
- return bb_scn;
-
- unf_lport = lport;
- port_mgr = &unf_lport->low_level_func.port_mgr_op;
-
- if (unlikely(!port_mgr->ll_port_config_get))
- return bb_scn;
-
- ret = port_mgr->ll_port_config_get((void *)unf_lport->fc_port,
- UNF_PORT_CFG_GET_WORKBALE_BBSCN,
- (void *)&bb_scn);
- if (unlikely(ret != RETURN_OK)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[warn]Port(0x%x) get bbscn failed, use default value(%d)",
- unf_lport->port_id, UNF_DEFAULT_BB_SC_N);
-
- bb_scn = UNF_DEFAULT_BB_SC_N;
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x)'s bbscn(%d)", unf_lport->port_id, bb_scn);
-
- return bb_scn;
-}
-
-static void unf_fill_rec_pld(struct unf_rec_pld *rec_pld, u32 sid)
-{
- FC_CHECK_RETURN_VOID(rec_pld);
-
- rec_pld->rec_cmnd = (UNF_ELS_CMND_REC);
- rec_pld->xchg_org_sid = sid;
- rec_pld->ox_id = INVALID_VALUE16;
- rec_pld->rx_id = INVALID_VALUE16;
-}
-
-u32 unf_send_rec(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *io_xchg)
-{
- struct unf_rec_pld *rec_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(io_xchg, UNF_RETURN_ERROR);
-
- memset(&pkg, 0, sizeof(struct unf_frame_pkg));
-
- xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for PLOGI",
- lport->port_id);
-
- return ret;
- }
-
- xchg->cmnd_code = ELS_REC;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_ELS_REQ;
- pkg.origin_hottag = io_xchg->hotpooltag;
- pkg.origin_magicnum = io_xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
- rec_pld = &fc_entry->rec.rec_pld;
- memset(rec_pld, 0, sizeof(struct unf_rec_pld));
-
- unf_fill_rec_pld(rec_pld, lport->nport_id);
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[info]LOGIN: Send REC %s. Port(0x%x_0x%x_0x%llx)--->RPort(0x%x_0x%llx) with hottag(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- lport->nport_id, lport->port_name, rport->nport_id,
- rport->port_name, xchg->hotpooltag);
-
- return ret;
-}
-
-static void unf_fill_flogi_pld(struct unf_flogi_fdisc_payload *flogi_pld,
- struct unf_lport *lport)
-{
- struct unf_fabric_parm *fabric_parms = NULL;
-
- FC_CHECK_RETURN_VOID(flogi_pld);
- FC_CHECK_RETURN_VOID(lport);
-
- fabric_parms = &flogi_pld->fabric_parms;
- if (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC ||
- lport->act_topo == UNF_ACT_TOP_P2P_DIRECT ||
- lport->act_topo == UNF_TOP_P2P_MASK) {
- /* Fabric or P2P or FCoE VN2VN topology */
- fabric_parms->co_parms.bb_credit = unf_low_level_bb_credit(lport);
- fabric_parms->co_parms.lowest_version = UNF_PLOGI_VERSION_LOWER;
- fabric_parms->co_parms.highest_version = UNF_PLOGI_VERSION_UPPER;
- fabric_parms->co_parms.bb_receive_data_field_size = (lport->max_frame_size);
- fabric_parms->co_parms.bbscn = unf_low_level_bb_scn(lport);
- } else {
- /* Loop topology here */
- fabric_parms->co_parms.clean_address = UNF_CLEAN_ADDRESS_DEFAULT;
- fabric_parms->co_parms.bb_credit = UNF_BBCREDIT_LPORT;
- fabric_parms->co_parms.lowest_version = UNF_PLOGI_VERSION_LOWER;
- fabric_parms->co_parms.highest_version = UNF_PLOGI_VERSION_UPPER;
- fabric_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_LPORT;
- fabric_parms->co_parms.bb_receive_data_field_size = (lport->max_frame_size);
- }
-
- if (lport->low_level_func.support_max_npiv_num != 0)
- /* support NPIV */
- fabric_parms->co_parms.clean_address = 1;
-
- fabric_parms->cl_parms[ARRAY_INDEX_2].valid = UNF_CLASS_VALID;
-
- /* according the user value to set the priority */
- if (lport->qos_cs_ctrl)
- fabric_parms->cl_parms[ARRAY_INDEX_2].priority = UNF_PRIORITY_ENABLE;
- else
- fabric_parms->cl_parms[ARRAY_INDEX_2].priority = UNF_PRIORITY_DISABLE;
-
- fabric_parms->cl_parms[ARRAY_INDEX_2].sequential_delivery = UNF_SEQUEN_DELIVERY_REQ;
- fabric_parms->cl_parms[ARRAY_INDEX_2].received_data_field_size = (lport->max_frame_size);
-
- fabric_parms->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
- fabric_parms->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
- fabric_parms->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
- fabric_parms->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
-}
-
-u32 unf_send_flogi(struct unf_lport *lport, struct unf_rport *rport)
-{
- struct unf_xchg *xchg = NULL;
- struct unf_flogi_fdisc_payload *flogi_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg = {0};
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for FLOGI",
- lport->port_id);
-
- return ret;
- }
-
- /* FLOGI */
- xchg->cmnd_code = ELS_FLOGI;
-
-	/* for rcvd flogi acc/rjt processor */
- xchg->callback = unf_flogi_callback;
-	/* for send flogi failed processor */
- xchg->ob_callback = unf_flogi_ob_callback;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_ELS_REQ;
-
- flogi_pld = &fc_entry->flogi.flogi_payload;
- memset(flogi_pld, 0, sizeof(struct unf_flogi_fdisc_payload));
- unf_fill_flogi_pld(flogi_pld, lport);
- flogi_pld->cmnd = (UNF_ELS_CMND_FLOGI);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Begin to send FLOGI. Port(0x%x)--->RPort(0x%x) with hottag(0x%x)",
- lport->port_id, rport->nport_id, xchg->hotpooltag);
-
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, flogi_pld,
- sizeof(struct unf_flogi_fdisc_payload));
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[warn]LOGIN: Send FLOGI failed. Port(0x%x)--->RPort(0x%x)",
- lport->port_id, rport->nport_id);
-
- unf_cm_free_xchg((void *)lport, (void *)xchg);
- }
-
- return ret;
-}
-
-u32 unf_send_fdisc(struct unf_lport *lport, struct unf_rport *rport)
-{
- struct unf_xchg *exch = NULL;
- struct unf_flogi_fdisc_payload *fdisc_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg = {0};
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- exch = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
- if (!exch) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for FDISC",
- lport->port_id);
-
- return ret;
- }
-
- exch->cmnd_code = ELS_FDISC;
-
- exch->callback = unf_fdisc_callback;
- exch->ob_callback = unf_fdisc_ob_callback;
-
- unf_fill_package(&pkg, exch, rport);
- pkg.type = UNF_PKG_ELS_REQ;
-
- fdisc_pld = &fc_entry->fdisc.fdisc_payload;
- memset(fdisc_pld, 0, sizeof(struct unf_flogi_fdisc_payload));
- unf_fill_flogi_pld(fdisc_pld, lport);
- fdisc_pld->cmnd = UNF_ELS_CMND_FDISC;
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, exch);
-
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)exch);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: FDISC send %s. Port(0x%x)--->RPort(0x%x) with hottag(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- rport->nport_id, exch->hotpooltag);
-
- return ret;
-}
-
-static void unf_fill_plogi_pld(struct unf_plogi_payload *plogi_pld,
- struct unf_lport *lport)
-{
- struct unf_lgn_parm *login_parms = NULL;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VOID(plogi_pld);
- FC_CHECK_RETURN_VOID(lport);
-
- unf_lport = lport->root_lport;
- plogi_pld->cmnd = (UNF_ELS_CMND_PLOGI);
- login_parms = &plogi_pld->stparms;
-
- if (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC ||
- lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
- /* P2P or Fabric mode or FCoE VN2VN */
- login_parms->co_parms.bb_credit = (unf_low_level_bb_credit(lport));
- login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_NFPORT;
- login_parms->co_parms.bbscn =
- (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC)
- ? 0
- : unf_low_level_bb_scn(lport);
- } else {
- /* Public loop & Private loop mode */
- login_parms->co_parms.bb_credit = UNF_BBCREDIT_LPORT;
- login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_LPORT;
- }
-
- login_parms->co_parms.lowest_version = UNF_PLOGI_VERSION_LOWER;
- login_parms->co_parms.highest_version = UNF_PLOGI_VERSION_UPPER;
- login_parms->co_parms.continuously_increasing = UNF_CONTIN_INCREASE_SUPPORT;
- login_parms->co_parms.bb_receive_data_field_size = (lport->max_frame_size);
- login_parms->co_parms.nport_total_concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
- login_parms->co_parms.relative_offset = (UNF_PLOGI_RO_CATEGORY);
- login_parms->co_parms.e_d_tov = UNF_DEFAULT_EDTOV;
- if (unf_lport->priority == UNF_PRIORITY_ENABLE) {
- login_parms->cl_parms[ARRAY_INDEX_2].priority =
- UNF_PRIORITY_ENABLE;
- } else {
- login_parms->cl_parms[ARRAY_INDEX_2].priority =
- UNF_PRIORITY_DISABLE;
- }
-
- /* for class_3 */
- login_parms->cl_parms[ARRAY_INDEX_2].valid = UNF_CLASS_VALID;
- login_parms->cl_parms[ARRAY_INDEX_2].received_data_field_size = (lport->max_frame_size);
- login_parms->cl_parms[ARRAY_INDEX_2].concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
- login_parms->cl_parms[ARRAY_INDEX_2].open_sequence_per_exchange = (UNF_PLOGI_SEQ_PER_XCHG);
-
- login_parms->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
- login_parms->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
- login_parms->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
- login_parms->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
-
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, plogi_pld, sizeof(struct unf_plogi_payload));
-}
-
-u32 unf_send_plogi(struct unf_lport *lport, struct unf_rport *rport)
-{
- struct unf_plogi_payload *plogi_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- memset(&pkg, 0, sizeof(struct unf_frame_pkg));
-
- xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for PLOGI",
- lport->port_id);
-
- return ret;
- }
-
- xchg->cmnd_code = ELS_PLOGI;
-
- xchg->callback = unf_plogi_callback;
- xchg->ob_callback = unf_plogi_ob_callback;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_ELS_REQ;
- unf_cm_xchg_mgr_abort_io_by_id(lport, rport, xchg->sid, xchg->did, 0);
-
- plogi_pld = &fc_entry->plogi.payload;
- memset(plogi_pld, 0, sizeof(struct unf_plogi_payload));
- unf_fill_plogi_pld(plogi_pld, lport);
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Send PLOGI %s. Port(0x%x_0x%x_0x%llx)--->RPort(0x%x_0x%llx) with hottag(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- lport->nport_id, lport->port_name, rport->nport_id,
- rport->port_name, xchg->hotpooltag);
-
- return ret;
-}
-
-static void unf_fill_logo_pld(struct unf_logo_payload *logo_pld,
- struct unf_lport *lport)
-{
- FC_CHECK_RETURN_VOID(logo_pld);
- FC_CHECK_RETURN_VOID(lport);
-
- logo_pld->cmnd = (UNF_ELS_CMND_LOGO);
- logo_pld->nport_id = (lport->nport_id);
- logo_pld->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
- logo_pld->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
-
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, logo_pld, sizeof(struct unf_logo_payload));
-}
-
-u32 unf_send_logo(struct unf_lport *lport, struct unf_rport *rport)
-{
- struct unf_logo_payload *logo_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- struct unf_frame_pkg pkg = {0};
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for LOGO",
- lport->port_id);
-
- return ret;
- }
-
- xchg->cmnd_code = ELS_LOGO;
- /* retry or link down immediately */
- xchg->callback = unf_logo_callback;
- /* do nothing */
- xchg->ob_callback = unf_logo_ob_callback;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_ELS_REQ;
-
- logo_pld = &fc_entry->logo.payload;
- memset(logo_pld, 0, sizeof(struct unf_logo_payload));
- unf_fill_logo_pld(logo_pld, lport);
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- rport->logo_retries++;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[info]LOGIN: LOGO send %s. Port(0x%x)--->RPort(0x%x) hottag(0x%x) Retries(%d)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- rport->nport_id, xchg->hotpooltag, rport->logo_retries);
-
- return ret;
-}
-
-u32 unf_send_logo_by_did(struct unf_lport *lport, u32 did)
-{
- struct unf_logo_payload *logo_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- struct unf_frame_pkg pkg = {0};
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- xchg = unf_get_sfs_free_xchg_and_init(lport, did, NULL, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for LOGO",
- lport->port_id);
-
- return ret;
- }
-
- xchg->cmnd_code = ELS_LOGO;
-
- unf_fill_package(&pkg, xchg, NULL);
- pkg.type = UNF_PKG_ELS_REQ;
-
- logo_pld = &fc_entry->logo.payload;
- memset(logo_pld, 0, sizeof(struct unf_logo_payload));
- unf_fill_logo_pld(logo_pld, lport);
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: LOGO send %s. Port(0x%x)--->RPort(0x%x) with hottag(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- did, xchg->hotpooltag);
-
- return ret;
-}
-
-static void unf_echo_callback(void *lport, void *rport, void *xchg)
-{
- struct unf_lport *unf_lport = (struct unf_lport *)lport;
- struct unf_rport *unf_rport = (struct unf_rport *)rport;
- struct unf_xchg *unf_xchg = NULL;
- struct unf_echo_payload *echo_rsp_pld = NULL;
- u32 cmnd = 0;
- u32 mag_ver_local = 0;
- u32 mag_ver_remote = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
- FC_CHECK_RETURN_VOID(xchg);
-
- unf_xchg = (struct unf_xchg *)xchg;
- if (!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr)
- return;
-
- echo_rsp_pld = unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo_acc.echo_pld;
- FC_CHECK_RETURN_VOID(echo_rsp_pld);
-
- if (unf_xchg->byte_orders & UNF_BIT_2) {
- unf_big_end_to_cpu((u8 *)echo_rsp_pld, sizeof(struct unf_echo_payload));
- cmnd = echo_rsp_pld->cmnd;
- } else {
- cmnd = echo_rsp_pld->cmnd;
- }
-
- mag_ver_local = echo_rsp_pld->data[ARRAY_INDEX_0];
- mag_ver_remote = echo_rsp_pld->data[ARRAY_INDEX_1];
-
- if (UNF_ELS_CMND_ACC == (cmnd & UNF_ELS_CMND_HIGH_MASK)) {
- if (mag_ver_local == ECHO_MG_VERSION_LOCAL &&
- mag_ver_remote == ECHO_MG_VERSION_REMOTE) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "LPort(0x%x) send ECHO to RPort(0x%x), received ACC. local snd echo:(0x%x), remote rcv echo:(0x%x), remote snd echo acc:(0x%x), local rcv echo acc:(0x%x)",
- unf_lport->port_id, unf_rport->nport_id,
- unf_xchg->private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME],
- unf_xchg->private_data[PKG_PRIVATE_ECHO_CMD_RCV_TIME],
- unf_xchg->private_data[PKG_PRIVATE_ECHO_RSP_SND_TIME],
- unf_xchg->private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME]);
- } else if ((mag_ver_local == ECHO_MG_VERSION_LOCAL) &&
- (mag_ver_remote != ECHO_MG_VERSION_REMOTE)) {
-			/* the peer doesn't support smartping, only local snd and
-			 * rcv rsp time stamp
-			 */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "LPort(0x%x) send ECHO to RPort(0x%x), received ACC. local snd echo:(0x%x), local rcv echo acc:(0x%x)",
- unf_lport->port_id, unf_rport->nport_id,
- unf_xchg->private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME],
- unf_xchg->private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME]);
- } else if ((mag_ver_local != ECHO_MG_VERSION_LOCAL) &&
- (mag_ver_remote != ECHO_MG_VERSION_REMOTE)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_MAJOR,
- "LPort(0x%x) send ECHO to RPort(0x%x), received ACC. local and remote is not FC HBA",
- unf_lport->port_id, unf_rport->nport_id);
- }
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) send ECHO to RPort(0x%x) and received RJT",
- unf_lport->port_id, unf_rport->nport_id);
- }
-
- unf_xchg->echo_info.echo_result = UNF_ELS_ECHO_RESULT_OK;
- unf_xchg->echo_info.response_time = jiffies - unf_xchg->echo_info.response_time;
-
- /* wake up semaphore */
- up(&unf_xchg->echo_info.echo_sync_sema);
-}
-
-static void unf_echo_ob_callback(struct unf_xchg *xchg)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
-
- FC_CHECK_RETURN_VOID(xchg);
- unf_lport = xchg->lport;
- FC_CHECK_RETURN_VOID(unf_lport);
- unf_rport = xchg->rport;
- FC_CHECK_RETURN_VOID(unf_rport);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) send ECHO to RPort(0x%x) but timeout",
- unf_lport->port_id, unf_rport->nport_id);
-
- xchg->echo_info.echo_result = UNF_ELS_ECHO_RESULT_FAIL;
-
- /* wake up semaphore */
- up(&xchg->echo_info.echo_sync_sema);
-}
-
-u32 unf_send_echo(struct unf_lport *lport, struct unf_rport *rport, u32 *time)
-{
- struct unf_echo_payload *echo_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- struct unf_frame_pkg pkg = {0};
- u32 ret = UNF_RETURN_ERROR;
- ulong delay = 0;
- dma_addr_t phy_echo_addr;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(time, UNF_RETURN_ERROR);
-
- delay = UNF_ECHO_WAIT_SEM_TIMEOUT(lport);
- xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for ECHO",
- lport->port_id);
-
- return ret;
- }
-
- /* ECHO */
- xchg->cmnd_code = ELS_ECHO;
- xchg->fcp_sfs_union.sfs_entry.cur_offset = UNF_ECHO_REQ_SIZE;
-
- /* Set callback function, wake up semaphore */
- xchg->callback = unf_echo_callback;
- /* wake up semaphore */
- xchg->ob_callback = unf_echo_ob_callback;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_ELS_REQ;
-
- echo_pld = (struct unf_echo_payload *)unf_get_one_big_sfs_buf(xchg);
- if (!echo_pld) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) can't allocate buffer for ECHO",
- lport->port_id);
-
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
-
- fc_entry->echo.echo_pld = echo_pld;
- phy_echo_addr = pci_map_single(lport->low_level_func.dev, echo_pld,
- UNF_ECHO_PAYLOAD_LEN,
- DMA_BIDIRECTIONAL);
- if (pci_dma_mapping_error(lport->low_level_func.dev, phy_echo_addr)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_WARN, "[warn]Port(0x%x) pci map err", lport->port_id);
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
- fc_entry->echo.phy_echo_addr = phy_echo_addr;
- memset(echo_pld, 0, sizeof(struct unf_echo_payload));
- echo_pld->cmnd = (UNF_ELS_CMND_ECHO);
- echo_pld->data[ARRAY_INDEX_0] = ECHO_MG_VERSION_LOCAL;
-
- ret = unf_xchg_ref_inc(xchg, SEND_ELS);
- FC_CHECK_RETURN_VALUE((ret == RETURN_OK), UNF_RETURN_ERROR);
-
- xchg->echo_info.response_time = jiffies;
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK) {
- unf_cm_free_xchg((void *)lport, (void *)xchg);
- } else {
- if (down_timeout(&xchg->echo_info.echo_sync_sema,
- (long)msecs_to_jiffies((u32)delay))) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]ECHO send %s. Port(0x%x)--->RPort(0x%x) but response timeout ",
- (ret != RETURN_OK) ? "failed" : "succeed",
- lport->port_id, rport->nport_id);
-
- xchg->echo_info.echo_result = UNF_ELS_ECHO_RESULT_FAIL;
- }
-
- if (xchg->echo_info.echo_result == UNF_ELS_ECHO_RESULT_FAIL) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_MAJOR, "Echo send fail or timeout");
-
- ret = UNF_RETURN_ERROR;
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
-				     "echo acc rsp,echo_cmd_snd(0x%xus)-->echo_cmd_rcv(0x%xus)-->echo_acc_snd(0x%xus)-->echo_acc_rcv(0x%xus).",
- xchg->private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME],
- xchg->private_data[PKG_PRIVATE_ECHO_CMD_RCV_TIME],
- xchg->private_data[PKG_PRIVATE_ECHO_RSP_SND_TIME],
- xchg->private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME]);
-
- *time =
- (xchg->private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME] -
- xchg->private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME]) -
- (xchg->private_data[PKG_PRIVATE_ECHO_RSP_SND_TIME] -
- xchg->private_data[PKG_PRIVATE_ECHO_CMD_RCV_TIME]);
- }
- }
-
- pci_unmap_single(lport->low_level_func.dev, phy_echo_addr,
- UNF_ECHO_PAYLOAD_LEN, DMA_BIDIRECTIONAL);
- fc_entry->echo.phy_echo_addr = 0;
- unf_xchg_ref_dec(xchg, SEND_ELS);
-
- return ret;
-}
-
-static void unf_fill_prli_pld(struct unf_prli_payload *prli_pld,
- struct unf_lport *lport)
-{
- u32 pld_len = 0;
-
- FC_CHECK_RETURN_VOID(prli_pld);
- FC_CHECK_RETURN_VOID(lport);
-
- pld_len = sizeof(struct unf_prli_payload) - UNF_PRLI_SIRT_EXTRA_SIZE;
- prli_pld->cmnd =
- (UNF_ELS_CMND_PRLI |
- ((u32)UNF_FC4_FRAME_PAGE_SIZE << UNF_FC4_FRAME_PAGE_SIZE_SHIFT) |
- ((u32)pld_len));
-
- prli_pld->parms[ARRAY_INDEX_0] = (UNF_FC4_FRAME_PARM_0_FCP | UNF_FC4_FRAME_PARM_0_I_PAIR);
- prli_pld->parms[ARRAY_INDEX_1] = UNF_NOT_MEANINGFUL;
- prli_pld->parms[ARRAY_INDEX_2] = UNF_NOT_MEANINGFUL;
-
- /* About Read Xfer_rdy disable */
- prli_pld->parms[ARRAY_INDEX_3] = (UNF_FC4_FRAME_PARM_3_R_XFER_DIS | lport->options);
-
- /* About FCP confirm */
- if (lport->low_level_func.lport_cfg_items.fcp_conf)
- prli_pld->parms[ARRAY_INDEX_3] |= UNF_FC4_FRAME_PARM_3_CONF_ALLOW;
-
- /* About Tape support */
- if (lport->low_level_func.lport_cfg_items.tape_support)
- prli_pld->parms[ARRAY_INDEX_3] |=
- (UNF_FC4_FRAME_PARM_3_REC_SUPPORT |
- UNF_FC4_FRAME_PARM_3_RETRY_SUPPORT |
- UNF_FC4_FRAME_PARM_3_TASK_RETRY_ID_SUPPORT |
- UNF_FC4_FRAME_PARM_3_CONF_ALLOW);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x)'s PRLI payload: options(0x%x) parameter-3(0x%x)",
- lport->port_id, lport->options,
- prli_pld->parms[ARRAY_INDEX_3]);
-
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, prli_pld, sizeof(struct unf_prli_payload));
-}
-
-u32 unf_send_prli(struct unf_lport *lport, struct unf_rport *rport,
- u32 cmnd_code)
-{
- struct unf_prli_payload *prli_pal = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg = {0};
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for PRLI",
- lport->port_id);
-
- return ret;
- }
-
- xchg->cmnd_code = cmnd_code;
-
-	/* for rcvd prli acc/rjt processor */
- xchg->callback = unf_prli_callback;
-	/* for send prli failed processor */
- xchg->ob_callback = unf_prli_ob_callback;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_ELS_REQ;
-
- prli_pal = &fc_entry->prli.payload;
- memset(prli_pal, 0, sizeof(struct unf_prli_payload));
- unf_fill_prli_pld(prli_pal, lport);
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: PRLI send %s. Port(0x%x)--->RPort(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- rport->nport_id);
-
- return ret;
-}
-
-static void unf_fill_prlo_pld(struct unf_prli_payload *prlo_pld,
- struct unf_lport *lport)
-{
- FC_CHECK_RETURN_VOID(prlo_pld);
- FC_CHECK_RETURN_VOID(lport);
-
- prlo_pld->cmnd = (UNF_ELS_CMND_PRLO);
- prlo_pld->parms[ARRAY_INDEX_0] = (UNF_FC4_FRAME_PARM_0_FCP);
- prlo_pld->parms[ARRAY_INDEX_1] = UNF_NOT_MEANINGFUL;
- prlo_pld->parms[ARRAY_INDEX_2] = UNF_NOT_MEANINGFUL;
- prlo_pld->parms[ARRAY_INDEX_3] = UNF_NO_SERVICE_PARAMS;
-
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, prlo_pld, sizeof(struct unf_prli_payload));
-}
-
-u32 unf_send_prlo(struct unf_lport *lport, struct unf_rport *rport)
-{
- struct unf_prli_payload *prlo_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- memset(&pkg, 0, sizeof(struct unf_frame_pkg));
-
- xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for PRLO", lport->port_id);
-
- return ret;
- }
-
- xchg->cmnd_code = ELS_PRLO;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_ELS_REQ;
-
- prlo_pld = &fc_entry->prlo.payload;
- memset(prlo_pld, 0, sizeof(struct unf_prli_payload));
- unf_fill_prlo_pld(prlo_pld, lport);
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: PRLO send %s. Port(0x%x)--->RPort(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- rport->nport_id);
-
- return ret;
-}
-
-static void unf_fill_pdisc_pld(struct unf_plogi_payload *pdisc_pld,
- struct unf_lport *lport)
-{
- struct unf_lgn_parm *login_parms = NULL;
-
- FC_CHECK_RETURN_VOID(pdisc_pld);
- FC_CHECK_RETURN_VOID(lport);
-
- pdisc_pld->cmnd = (UNF_ELS_CMND_PDISC);
- login_parms = &pdisc_pld->stparms;
-
- if (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC ||
- lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
- /* P2P & Fabric */
- login_parms->co_parms.bb_credit = (unf_low_level_bb_credit(lport));
- login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_NFPORT;
- login_parms->co_parms.bbscn =
- (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC)
- ? 0
- : unf_low_level_bb_scn(lport);
- } else {
- /* Public loop & Private loop */
- login_parms->co_parms.bb_credit = UNF_BBCREDIT_LPORT;
- /* :1 */
- login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_LPORT;
- }
-
- login_parms->co_parms.lowest_version = UNF_PLOGI_VERSION_LOWER;
- login_parms->co_parms.highest_version = UNF_PLOGI_VERSION_UPPER;
- login_parms->co_parms.continuously_increasing = UNF_CONTIN_INCREASE_SUPPORT;
- login_parms->co_parms.bb_receive_data_field_size = (lport->max_frame_size);
- login_parms->co_parms.nport_total_concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
- login_parms->co_parms.relative_offset = (UNF_PLOGI_RO_CATEGORY);
- login_parms->co_parms.e_d_tov = (lport->ed_tov);
-
- login_parms->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
- login_parms->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
- login_parms->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
- login_parms->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
-
- /* class-3 */
- login_parms->cl_parms[ARRAY_INDEX_2].valid = UNF_CLASS_VALID;
- login_parms->cl_parms[ARRAY_INDEX_2].received_data_field_size = (lport->max_frame_size);
- login_parms->cl_parms[ARRAY_INDEX_2].concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
- login_parms->cl_parms[ARRAY_INDEX_2].open_sequence_per_exchange = (UNF_PLOGI_SEQ_PER_XCHG);
-
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, pdisc_pld, sizeof(struct unf_plogi_payload));
-}
-
-u32 unf_send_pdisc(struct unf_lport *lport, struct unf_rport *rport)
-{
- /* PLOGI/PDISC with same payload */
- struct unf_plogi_payload *pdisc_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg = {0};
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for PDISC",
- lport->port_id);
-
- return ret;
- }
-
- xchg->cmnd_code = ELS_PDISC;
- xchg->callback = NULL;
- xchg->ob_callback = NULL;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_ELS_REQ;
-
- pdisc_pld = &fc_entry->pdisc.payload;
- memset(pdisc_pld, 0, sizeof(struct unf_plogi_payload));
- unf_fill_pdisc_pld(pdisc_pld, lport);
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: PDISC send %s. Port(0x%x)--->RPort(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id, rport->nport_id);
-
- return ret;
-}
-
-static void unf_fill_adisc_pld(struct unf_adisc_payload *adisc_pld,
- struct unf_lport *lport)
-{
- FC_CHECK_RETURN_VOID(adisc_pld);
- FC_CHECK_RETURN_VOID(lport);
-
- adisc_pld->cmnd = (UNF_ELS_CMND_ADISC);
- adisc_pld->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
- adisc_pld->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
- adisc_pld->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
- adisc_pld->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
-
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, adisc_pld, sizeof(struct unf_adisc_payload));
-}
-
-u32 unf_send_adisc(struct unf_lport *lport, struct unf_rport *rport)
-{
- struct unf_adisc_payload *adisc_pal = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg = {0};
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for ADISC", lport->port_id);
-
- return ret;
- }
-
- xchg->cmnd_code = ELS_ADISC;
-
- xchg->callback = NULL;
- xchg->ob_callback = NULL;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_ELS_REQ;
-
- adisc_pal = &fc_entry->adisc.adisc_payl;
- memset(adisc_pal, 0, sizeof(struct unf_adisc_payload));
- unf_fill_adisc_pld(adisc_pal, lport);
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: ADISC send %s. Port(0x%x)--->RPort(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- rport->nport_id);
-
- return ret;
-}
-
-static void unf_fill_rrq_pld(struct unf_rrq *rrq_pld, struct unf_xchg *xchg)
-{
- FC_CHECK_RETURN_VOID(rrq_pld);
- FC_CHECK_RETURN_VOID(xchg);
-
- rrq_pld->cmnd = UNF_ELS_CMND_RRQ;
- rrq_pld->sid = xchg->sid;
- rrq_pld->oxid_rxid = ((u32)xchg->oxid << UNF_SHIFT_16 | xchg->rxid);
-}
-
-u32 unf_send_rrq(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *xchg)
-{
- /* after ABTS Done */
- struct unf_rrq *rrq_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *unf_xchg = NULL;
- struct unf_frame_pkg pkg = {0};
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- unf_xchg = unf_get_sfs_free_xchg_and_init(lport, rport->nport_id, rport, &fc_entry);
- if (!unf_xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for RRQ",
- lport->port_id);
-
- return ret;
- }
-
- unf_xchg->cmnd_code = ELS_RRQ; /* RRQ */
-
- unf_xchg->callback = unf_rrq_callback; /* release I/O exchange context */
- unf_xchg->ob_callback = unf_rrq_ob_callback; /* release I/O exchange context */
- unf_xchg->io_xchg = xchg; /* pointer to IO XCHG */
-
- unf_fill_package(&pkg, unf_xchg, rport);
- pkg.type = UNF_PKG_ELS_REQ;
- rrq_pld = &fc_entry->rrq;
- memset(rrq_pld, 0, sizeof(struct unf_rrq));
- unf_fill_rrq_pld(rrq_pld, xchg);
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, unf_xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)unf_xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]RRQ send %s. Port(0x%x)--->RPort(0x%x) free old exchange(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- rport->nport_id, xchg->hotpooltag);
-
- return ret;
-}
-
-u32 unf_send_flogi_acc(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *xchg)
-{
- struct unf_flogi_fdisc_payload *flogi_acc_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg = {0};
- u16 ox_id = 0;
- u16 rx_id = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_FLOGI);
-
- xchg->did = 0; /* D_ID must be 0 */
- xchg->sid = UNF_FC_FID_FLOGI; /* S_ID must be 0xfffffe */
- xchg->oid = xchg->sid;
- xchg->callback = NULL;
- xchg->lport = lport;
- xchg->rport = rport;
-	xchg->ob_callback = unf_flogi_acc_ob_callback; /* callback for sending FLOGI response */
-
- fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
- if (!fc_entry) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
- lport->port_id, xchg->hotpooltag);
-
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_ELS_REPLY;
-
- memset(fc_entry, 0, sizeof(union unf_sfs_u));
- flogi_acc_pld = &fc_entry->flogi_acc.flogi_payload;
- flogi_acc_pld->cmnd = (UNF_ELS_CMND_ACC);
- unf_fill_flogi_pld(flogi_acc_pld, lport);
- ox_id = xchg->oxid;
- rx_id = xchg->rxid;
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]LOGIN: FLOGI ACC send %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- rport->nport_id, ox_id, rx_id);
- return ret;
-}
-
-static void unf_fill_plogi_acc_pld(struct unf_plogi_payload *plogi_acc_pld,
- struct unf_lport *lport)
-{
- struct unf_lgn_parm *login_parms = NULL;
-
- FC_CHECK_RETURN_VOID(plogi_acc_pld);
- FC_CHECK_RETURN_VOID(lport);
-
- plogi_acc_pld->cmnd = (UNF_ELS_CMND_ACC);
- login_parms = &plogi_acc_pld->stparms;
-
- if (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC ||
- lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
- login_parms->co_parms.bb_credit = (unf_low_level_bb_credit(lport));
- login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_NFPORT; /* 0 */
- login_parms->co_parms.bbscn =
- (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC)
- ? 0
- : unf_low_level_bb_scn(lport);
- } else {
- login_parms->co_parms.bb_credit = UNF_BBCREDIT_LPORT;
- login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_LPORT; /* 1 */
- }
-
- login_parms->co_parms.lowest_version = UNF_PLOGI_VERSION_LOWER;
- login_parms->co_parms.highest_version = UNF_PLOGI_VERSION_UPPER;
- login_parms->co_parms.continuously_increasing = UNF_CONTIN_INCREASE_SUPPORT;
- login_parms->co_parms.bb_receive_data_field_size = (lport->max_frame_size);
- login_parms->co_parms.nport_total_concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
- login_parms->co_parms.relative_offset = (UNF_PLOGI_RO_CATEGORY);
- login_parms->co_parms.e_d_tov = (lport->ed_tov);
- login_parms->cl_parms[ARRAY_INDEX_2].valid = UNF_CLASS_VALID; /* class-3 */
- login_parms->cl_parms[ARRAY_INDEX_2].received_data_field_size = (lport->max_frame_size);
- login_parms->cl_parms[ARRAY_INDEX_2].concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
- login_parms->cl_parms[ARRAY_INDEX_2].open_sequence_per_exchange = (UNF_PLOGI_SEQ_PER_XCHG);
- login_parms->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
- login_parms->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
- login_parms->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
- login_parms->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
-
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, plogi_acc_pld,
- sizeof(struct unf_plogi_payload));
-}
-
-u32 unf_send_plogi_acc(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *xchg)
-{
- struct unf_plogi_payload *plogi_acc_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg = {0};
- u16 ox_id = 0;
- u16 rx_id = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_PLOGI);
-
- xchg->did = rport->nport_id;
- xchg->sid = lport->nport_id;
- xchg->oid = xchg->sid;
- xchg->callback = NULL;
- xchg->lport = lport;
- xchg->rport = rport;
-
- xchg->ob_callback = unf_plogi_acc_ob_callback; /* call back for sending PLOGI ACC */
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_ELS_REPLY;
- fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
- if (!fc_entry) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
- lport->port_id, xchg->hotpooltag);
-
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
-
- memset(fc_entry, 0, sizeof(union unf_sfs_u));
- plogi_acc_pld = &fc_entry->plogi_acc.payload;
- unf_fill_plogi_acc_pld(plogi_acc_pld, lport);
- ox_id = xchg->oxid;
- rx_id = xchg->rxid;
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- if (rport->nport_id < UNF_FC_FID_DOM_MGR ||
- lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: PLOGI ACC send %s. Port(0x%x_0x%x_0x%llx)--->RPort(0x%x_0x%llx) with OX_ID(0x%x) RX_ID(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed",
- lport->port_id, lport->nport_id, lport->port_name,
- rport->nport_id, rport->port_name, ox_id, rx_id);
- }
-
- return ret;
-}
-
-static void unf_fill_prli_acc_pld(struct unf_prli_payload *prli_acc_pld,
- struct unf_lport *lport,
- struct unf_rport *rport)
-{
- u32 port_mode = UNF_FC4_FRAME_PARM_3_TGT;
-
- FC_CHECK_RETURN_VOID(prli_acc_pld);
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
-
- prli_acc_pld->cmnd =
- (UNF_ELS_CMND_ACC |
- ((u32)UNF_FC4_FRAME_PAGE_SIZE << UNF_FC4_FRAME_PAGE_SIZE_SHIFT) |
- ((u32)(sizeof(struct unf_prli_payload) - UNF_PRLI_SIRT_EXTRA_SIZE)));
-
- prli_acc_pld->parms[ARRAY_INDEX_0] =
- (UNF_FC4_FRAME_PARM_0_FCP | UNF_FC4_FRAME_PARM_0_I_PAIR |
- UNF_FC4_FRAME_PARM_0_GOOD_RSP_CODE);
- prli_acc_pld->parms[ARRAY_INDEX_1] = UNF_NOT_MEANINGFUL;
- prli_acc_pld->parms[ARRAY_INDEX_2] = UNF_NOT_MEANINGFUL;
-
- /* About INI/TGT mode */
- if (rport->nport_id < UNF_FC_FID_DOM_MGR) {
-		/* return INI (0x20): R_Port has TGT mode, L_Port has INI mode */
- port_mode = UNF_FC4_FRAME_PARM_3_INI;
- } else {
- port_mode = lport->options;
- }
-
- /* About Read xfer_rdy disable */
- prli_acc_pld->parms[ARRAY_INDEX_3] =
- (UNF_FC4_FRAME_PARM_3_R_XFER_DIS | port_mode); /* 0x2 */
-
- /* About Tape support */
- if (rport->tape_support_needed) {
- prli_acc_pld->parms[ARRAY_INDEX_3] |=
- (UNF_FC4_FRAME_PARM_3_REC_SUPPORT |
- UNF_FC4_FRAME_PARM_3_RETRY_SUPPORT |
- UNF_FC4_FRAME_PARM_3_TASK_RETRY_ID_SUPPORT |
- UNF_FC4_FRAME_PARM_3_CONF_ALLOW);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "PRLI ACC tape support");
- }
-
- /* About confirm */
- if (lport->low_level_func.lport_cfg_items.fcp_conf)
- prli_acc_pld->parms[ARRAY_INDEX_3] |=
- UNF_FC4_FRAME_PARM_3_CONF_ALLOW; /* 0x80 */
-
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, prli_acc_pld,
- sizeof(struct unf_prli_payload));
-}
-
-u32 unf_send_prli_acc(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *xchg)
-{
- struct unf_prli_payload *prli_acc_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg = {0};
- u16 ox_id = 0;
- u16 rx_id = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_PRLI);
- xchg->did = rport->nport_id;
- xchg->sid = lport->nport_id;
- xchg->oid = xchg->sid;
- xchg->lport = lport;
- xchg->rport = rport;
-
- xchg->callback = NULL;
-	xchg->ob_callback =
-		unf_prli_acc_ob_callback; /* callback when send succeeds */
-
- unf_fill_package(&pkg, xchg, rport);
-
- pkg.type = UNF_PKG_ELS_REPLY;
- fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
- if (!fc_entry) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) entry can't be NULL with tag(0x%x)",
- lport->port_id, xchg->hotpooltag);
-
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
-
- memset(fc_entry, 0, sizeof(union unf_sfs_u));
- prli_acc_pld = &fc_entry->prli_acc.payload;
- unf_fill_prli_acc_pld(prli_acc_pld, lport, rport);
- ox_id = xchg->oxid;
- rx_id = xchg->rxid;
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- if (rport->nport_id < UNF_FC_FID_DOM_MGR ||
- lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: PRLI ACC send %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed",
- lport->port_id, rport->nport_id, ox_id, rx_id);
- }
-
- return ret;
-}
-
-u32 unf_send_rec_acc(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *xchg)
-{
- /* Reserved */
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- return RETURN_OK;
-}
-
-static void unf_rrq_acc_ob_callback(struct unf_xchg *xchg)
-{
- FC_CHECK_RETURN_VOID(xchg);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]RRQ ACC Xchg(0x%p) tag(0x%x)", xchg,
- xchg->hotpooltag);
-}
-
-static void unf_fill_els_acc_pld(struct unf_els_acc *els_acc_pld)
-{
- FC_CHECK_RETURN_VOID(els_acc_pld);
-
- els_acc_pld->cmnd = (UNF_ELS_CMND_ACC);
-}
-
-u32 unf_send_rscn_acc(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *xchg)
-{
- struct unf_els_acc *rscn_acc = NULL;
- union unf_sfs_u *fc_entry = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u16 ox_id = 0;
- u16 rx_id = 0;
- struct unf_frame_pkg pkg;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- memset(&pkg, 0, sizeof(struct unf_frame_pkg));
- xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_RSCN);
- xchg->did = rport->nport_id;
- xchg->sid = lport->nport_id;
- xchg->oid = xchg->sid;
- xchg->lport = lport;
- xchg->rport = rport;
-
- xchg->callback = NULL;
- xchg->ob_callback = unf_rscn_acc_ob_callback;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_ELS_REPLY;
- fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
- if (!fc_entry) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
- lport->port_id, xchg->hotpooltag);
-
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
-
- memset(fc_entry, 0, sizeof(union unf_sfs_u));
- rscn_acc = &fc_entry->els_acc;
- unf_fill_els_acc_pld(rscn_acc);
- ox_id = xchg->oxid;
- rx_id = xchg->rxid;
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: RSCN ACC send %s. Port(0x%x)--->RPort(0x%x) with OXID(0x%x) RXID(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- rport->nport_id, ox_id, rx_id);
-
- return ret;
-}
-
-u32 unf_send_logo_acc(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *xchg)
-{
- struct unf_els_acc *logo_acc = NULL;
- union unf_sfs_u *fc_entry = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u16 ox_id = 0;
- u16 rx_id = 0;
- struct unf_frame_pkg pkg;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- memset(&pkg, 0, sizeof(struct unf_frame_pkg));
-
- xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_LOGO);
- xchg->did = rport->nport_id;
- xchg->sid = lport->nport_id;
- xchg->oid = xchg->sid;
- xchg->lport = lport;
- xchg->rport = rport;
- xchg->callback = NULL;
- xchg->ob_callback = unf_logo_acc_ob_callback;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_ELS_REPLY;
- fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
- if (!fc_entry) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
- lport->port_id, xchg->hotpooltag);
-
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
-
- memset(fc_entry, 0, sizeof(union unf_sfs_u));
- logo_acc = &fc_entry->els_acc;
- unf_fill_els_acc_pld(logo_acc);
- ox_id = xchg->oxid;
- rx_id = xchg->rxid;
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- if (rport->nport_id < UNF_FC_FID_DOM_MGR) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: LOGO ACC send %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed",
- lport->port_id, rport->nport_id, ox_id, rx_id);
- }
-
- return ret;
-}
-
-static u32 unf_send_rrq_acc(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *xchg)
-{
- struct unf_els_acc *rrq_acc = NULL;
- union unf_sfs_u *fc_entry = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u16 ox_id = 0;
- u16 rx_id = 0;
- struct unf_frame_pkg pkg = {0};
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- xchg->did = rport->nport_id;
- xchg->sid = lport->nport_id;
- xchg->oid = xchg->sid;
- xchg->lport = lport;
- xchg->rport = rport;
-	xchg->callback = NULL; /* do nothing */
-
- fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
- if (!fc_entry) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
- lport->port_id, xchg->hotpooltag);
-
- return UNF_RETURN_ERROR;
- }
-
- memset(fc_entry, 0, sizeof(union unf_sfs_u));
- rrq_acc = &fc_entry->els_acc;
- xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_RRQ);
-	xchg->ob_callback = unf_rrq_acc_ob_callback; /* do nothing */
- unf_fill_els_acc_pld(rrq_acc);
- ox_id = xchg->oxid;
- rx_id = xchg->rxid;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_ELS_REPLY;
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]RRQ ACC send %s. Port(0x%x)--->RPort(0x%x) with Xchg(0x%p) OX_ID(0x%x) RX_ID(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- rport->nport_id, xchg, ox_id, rx_id);
-
- return ret;
-}
-
-static void unf_fill_pdisc_acc_pld(struct unf_plogi_payload *pdisc_acc_pld,
- struct unf_lport *lport)
-{
- struct unf_lgn_parm *login_parms = NULL;
-
- FC_CHECK_RETURN_VOID(pdisc_acc_pld);
- FC_CHECK_RETURN_VOID(lport);
-
- pdisc_acc_pld->cmnd = (UNF_ELS_CMND_ACC);
- login_parms = &pdisc_acc_pld->stparms;
-
- if (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC ||
- lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
- login_parms->co_parms.bb_credit = (unf_low_level_bb_credit(lport));
- login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_NFPORT;
- login_parms->co_parms.bbscn =
- (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC)
- ? 0
- : unf_low_level_bb_scn(lport);
- } else {
- login_parms->co_parms.bb_credit = UNF_BBCREDIT_LPORT;
- login_parms->co_parms.alternate_bb_credit_mgmt = UNF_BBCREDIT_MANAGE_LPORT;
- }
-
- login_parms->co_parms.lowest_version = UNF_PLOGI_VERSION_LOWER;
- login_parms->co_parms.highest_version = UNF_PLOGI_VERSION_UPPER;
- login_parms->co_parms.continuously_increasing = UNF_CONTIN_INCREASE_SUPPORT;
- login_parms->co_parms.bb_receive_data_field_size = (lport->max_frame_size);
- login_parms->co_parms.nport_total_concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
- login_parms->co_parms.relative_offset = (UNF_PLOGI_RO_CATEGORY);
- login_parms->co_parms.e_d_tov = (lport->ed_tov);
-
- login_parms->cl_parms[ARRAY_INDEX_2].valid = UNF_CLASS_VALID; /* class-3 */
- login_parms->cl_parms[ARRAY_INDEX_2].received_data_field_size = (lport->max_frame_size);
- login_parms->cl_parms[ARRAY_INDEX_2].concurrent_sequences = (UNF_PLOGI_CONCURRENT_SEQ);
- login_parms->cl_parms[ARRAY_INDEX_2].open_sequence_per_exchange = (UNF_PLOGI_SEQ_PER_XCHG);
-
- login_parms->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
- login_parms->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
- login_parms->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
- login_parms->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
-
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, pdisc_acc_pld,
- sizeof(struct unf_plogi_payload));
-}
-
-u32 unf_send_pdisc_acc(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *xchg)
-{
- struct unf_plogi_payload *pdisc_acc_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u16 ox_id = 0;
- u16 rx_id = 0;
- struct unf_frame_pkg pkg;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- memset(&pkg, 0, sizeof(struct unf_frame_pkg));
-
- xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_PDISC);
- xchg->did = rport->nport_id;
- xchg->sid = lport->nport_id;
- xchg->oid = xchg->sid;
- xchg->lport = lport;
- xchg->rport = rport;
-
- xchg->callback = NULL;
- xchg->ob_callback = unf_pdisc_acc_ob_callback;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_ELS_REPLY;
- fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
- if (!fc_entry) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
- lport->port_id, xchg->hotpooltag);
-
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
-
- memset(fc_entry, 0, sizeof(union unf_sfs_u));
- pdisc_acc_pld = &fc_entry->pdisc_acc.payload;
- unf_fill_pdisc_acc_pld(pdisc_acc_pld, lport);
- ox_id = xchg->oxid;
- rx_id = xchg->rxid;
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Send PDISC ACC %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- rport->nport_id, ox_id, rx_id);
-
- return ret;
-}
-
-static void unf_fill_adisc_acc_pld(struct unf_adisc_payload *adisc_acc_pld,
- struct unf_lport *lport)
-{
- FC_CHECK_RETURN_VOID(adisc_acc_pld);
- FC_CHECK_RETURN_VOID(lport);
-
- adisc_acc_pld->cmnd = (UNF_ELS_CMND_ACC);
-
- adisc_acc_pld->hard_address = (lport->nport_id & UNF_ALPA_MASK);
- adisc_acc_pld->high_node_name = UNF_GET_NAME_HIGH_WORD(lport->node_name);
- adisc_acc_pld->low_node_name = UNF_GET_NAME_LOW_WORD(lport->node_name);
- adisc_acc_pld->high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
- adisc_acc_pld->low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
- adisc_acc_pld->nport_id = lport->nport_id;
-
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, adisc_acc_pld,
- sizeof(struct unf_adisc_payload));
-}
-
-u32 unf_send_adisc_acc(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *xchg)
-{
- struct unf_adisc_payload *adisc_acc_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg = {0};
- u16 ox_id = 0;
- u16 rx_id = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_ADISC);
- xchg->did = rport->nport_id;
- xchg->sid = lport->nport_id;
- xchg->oid = xchg->sid;
- xchg->lport = lport;
- xchg->rport = rport;
-
- xchg->callback = NULL;
- xchg->ob_callback = unf_adisc_acc_ob_callback;
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_ELS_REPLY;
- fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
- if (!fc_entry) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
- lport->port_id, xchg->hotpooltag);
-
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
-
- memset(fc_entry, 0, sizeof(union unf_sfs_u));
- adisc_acc_pld = &fc_entry->adisc_acc.adisc_payl;
- unf_fill_adisc_acc_pld(adisc_acc_pld, lport);
- ox_id = xchg->oxid;
- rx_id = xchg->rxid;
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Send ADISC ACC %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- rport->nport_id, ox_id, rx_id);
-
- return ret;
-}
-
-static void unf_fill_prlo_acc_pld(struct unf_prli_prlo *prlo_acc,
- struct unf_lport *lport)
-{
- struct unf_prli_payload *prlo_acc_pld = NULL;
-
- FC_CHECK_RETURN_VOID(prlo_acc);
-
- prlo_acc_pld = &prlo_acc->payload;
- prlo_acc_pld->cmnd =
- (UNF_ELS_CMND_ACC |
- ((u32)UNF_FC4_FRAME_PAGE_SIZE << UNF_FC4_FRAME_PAGE_SIZE_SHIFT) |
- ((u32)sizeof(struct unf_prli_payload)));
- prlo_acc_pld->parms[ARRAY_INDEX_0] =
- (UNF_FC4_FRAME_PARM_0_FCP | UNF_FC4_FRAME_PARM_0_GOOD_RSP_CODE);
- prlo_acc_pld->parms[ARRAY_INDEX_1] = 0;
- prlo_acc_pld->parms[ARRAY_INDEX_2] = 0;
- prlo_acc_pld->parms[ARRAY_INDEX_3] = 0;
-
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, prlo_acc_pld,
- sizeof(struct unf_prli_payload));
-}
-
-u32 unf_send_prlo_acc(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *xchg)
-{
- struct unf_prli_prlo *prlo_acc = NULL;
- union unf_sfs_u *fc_entry = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u16 ox_id = 0;
- u16 rx_id = 0;
- struct unf_frame_pkg pkg;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- memset(&pkg, 0, sizeof(struct unf_frame_pkg));
-
- xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_PRLO);
- xchg->did = rport->nport_id;
- xchg->sid = lport->nport_id;
- xchg->oid = xchg->sid;
- xchg->lport = lport;
- xchg->rport = rport;
-
- xchg->callback = NULL;
- xchg->ob_callback = NULL;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.type = UNF_PKG_ELS_REPLY;
- fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
- if (!fc_entry) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
- lport->port_id, xchg->hotpooltag);
-
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
-
- memset(fc_entry, 0, sizeof(union unf_sfs_u));
- prlo_acc = &fc_entry->prlo_acc;
- unf_fill_prlo_acc_pld(prlo_acc, lport);
- ox_id = xchg->oxid;
- rx_id = xchg->rxid;
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Send PRLO ACC %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- rport->nport_id, ox_id, rx_id);
-
- return ret;
-}
-
-static void unf_prli_acc_ob_callback(struct unf_xchg *xchg)
-{
- /* Report R_Port scsi Link Up */
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- ulong flags = 0;
- enum unf_rport_login_state rport_state = UNF_RPORT_ST_INIT;
-
- FC_CHECK_RETURN_VOID(xchg);
- unf_lport = xchg->lport;
- unf_rport = xchg->rport;
- FC_CHECK_RETURN_VOID(unf_lport);
- FC_CHECK_RETURN_VOID(unf_rport);
-
- /* Update & Report Link Up */
- spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_READY);
- rport_state = unf_rport->rp_state;
- if (unf_rport->nport_id < UNF_FC_FID_DOM_MGR) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[event]LOGIN: Port(0x%x) RPort(0x%x) state(0x%x) WWN(0x%llx) prliacc",
- unf_lport->port_id, unf_rport->nport_id,
- unf_rport->rp_state, unf_rport->port_name);
- }
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
-
- if (rport_state == UNF_RPORT_ST_READY) {
- unf_rport->logo_retries = 0;
- unf_update_lport_state_by_linkup_event(unf_lport, unf_rport,
- unf_rport->options);
- }
-}
-
-static void unf_schedule_open_work(struct unf_lport *lport,
- struct unf_rport *rport)
-{
-	/* Used only when L_Port is TGT-only, or R_Port is INI-only */
- struct unf_lport *unf_lport = lport;
- struct unf_rport *unf_rport = rport;
- ulong delay = 0;
- ulong flag = 0;
- u32 ret = 0;
- u32 port_feature = INVALID_VALUE32;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
-
- delay = (ulong)unf_lport->ed_tov;
- port_feature = unf_rport->options & UNF_PORT_MODE_BOTH;
-
- if (unf_lport->options == UNF_PORT_MODE_TGT ||
- port_feature == UNF_PORT_MODE_INI) {
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
-
- ret = unf_rport_ref_inc(unf_rport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) RPort(0x%x) abnormal, no need open",
- unf_lport->port_id, unf_lport->nport_id, unf_rport->nport_id);
-
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
- return;
- }
-
- /* Delay work pending check */
- if (delayed_work_pending(&unf_rport->open_work)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) RPort(0x%x) open work is running, no need re-open",
- unf_lport->port_id, unf_lport->nport_id,
- unf_rport->nport_id);
-
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
- unf_rport_ref_dec(unf_rport);
- return;
- }
-
- /* start open work */
- if (queue_delayed_work(unf_wq, &unf_rport->open_work,
- (ulong)msecs_to_jiffies((u32)delay))) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x_0x%x) RPort(0x%x) start open work",
- unf_lport->port_id, unf_lport->nport_id, unf_rport->nport_id);
-
- (void)unf_rport_ref_inc(unf_rport);
- }
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- unf_rport_ref_dec(unf_rport);
- }
-}
-
-static void unf_plogi_acc_ob_callback(struct unf_xchg *xchg)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- ulong flags = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- unf_lport = xchg->lport;
- unf_rport = xchg->rport;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- FC_CHECK_RETURN_VOID(unf_lport);
- FC_CHECK_RETURN_VOID(unf_rport);
-
- /*
- * 1. According to FC-LS 4.2.7.1:
-	 * after RCVD PLOGI or sending PLOGI ACC, need to terminate open EXCH
- */
- unf_cm_xchg_mgr_abort_io_by_id(unf_lport, unf_rport,
- unf_rport->nport_id, unf_lport->nport_id, 0);
-
- /* 2. Send PLOGI ACC fail */
- if (xchg->ob_callback_sts != UNF_IO_SUCCESS) {
- /* Do R_Port recovery */
- unf_rport_error_recovery(unf_rport);
-
-		/* No-op unless L_Port is TGT-only or R_Port is INI-only */
- unf_schedule_open_work(unf_lport, unf_rport);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x_0x%x) send PLOGI ACC failed(0x%x) with RPort(0x%x) feature(0x%x)",
- unf_lport->port_id, unf_lport->nport_id,
- unf_lport->options, xchg->ob_callback_sts,
- unf_rport->nport_id, unf_rport->options);
-
- return;
- }
-
- /* 3. Private Loop: check whether or not need to send PRLI */
- spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
- if (unf_lport->act_topo == UNF_ACT_TOP_PRIVATE_LOOP &&
- (unf_rport->rp_state == UNF_RPORT_ST_PRLI_WAIT ||
- unf_rport->rp_state == UNF_RPORT_ST_READY)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x_0x%x) RPort(0x%x) with State(0x%x) return directly",
- unf_lport->port_id, unf_lport->nport_id,
- unf_rport->nport_id, unf_rport->rp_state);
-
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
- return;
- }
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PRLI);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
-
- /* 4. Set Port Feature with BOTH: cancel */
- if (unf_rport->options == UNF_PORT_MODE_UNKNOWN && unf_rport->port_name != INVALID_WWPN)
- unf_rport->options = unf_get_port_feature(unf_rport->port_name);
-
- /*
- * 5. Check whether need to send PRLI delay
- * Call by: RCVD PLOGI ACC or callback for sending PLOGI ACC succeed
- */
- unf_check_rport_need_delay_prli(unf_lport, unf_rport, unf_rport->options);
-
-	/* 6. No-op unless L_Port is TGT-only or R_Port is INI-only */
- unf_schedule_open_work(unf_lport, unf_rport);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Port(0x%x_0x%x_0x%x) send PLOGI ACC succeed with RPort(0x%x) feature(0x%x)",
- unf_lport->port_id, unf_lport->nport_id, unf_lport->options,
- unf_rport->nport_id, unf_rport->options);
-}
-
-static void unf_flogi_acc_ob_callback(struct unf_xchg *xchg)
-{
- /* Callback for Sending FLOGI ACC succeed */
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- ulong flags = 0;
- u64 rport_port_name = 0;
- u64 rport_node_name = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
- FC_CHECK_RETURN_VOID(xchg->lport);
- FC_CHECK_RETURN_VOID(xchg->rport);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- unf_lport = xchg->lport;
- unf_rport = xchg->rport;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
- if (unf_rport->port_name == 0 && unf_rport->node_name == 0) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
-			     "[info]LOGIN: Port(0x%x_0x%x_0x%x) already sent PLOGI with RPort(0x%x) feature(0x%x).",
- unf_lport->port_id, unf_lport->nport_id, unf_lport->options,
- unf_rport->nport_id, unf_rport->options);
-
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
- return;
- }
-
- rport_port_name = unf_rport->port_name;
- rport_node_name = unf_rport->node_name;
-
- /* Swap case: Set WWPN & WWNN with zero */
- unf_rport->port_name = 0;
- unf_rport->node_name = 0;
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
-
- /* Enter PLOGI stage: after send FLOGI ACC succeed */
- unf_login_with_rport_in_n2n(unf_lport, rport_port_name, rport_node_name);
-}
-
-static void unf_rscn_acc_ob_callback(struct unf_xchg *xchg)
-{
-}
-
-static void unf_logo_acc_ob_callback(struct unf_xchg *xchg)
-{
-}
-
-static void unf_adisc_acc_ob_callback(struct unf_xchg *xchg)
-{
-}
-
-static void unf_pdisc_acc_ob_callback(struct unf_xchg *xchg)
-{
-}
-
-static inline u8 unf_determin_bbscn(u8 local_bbscn, u8 remote_bbscn)
-{
- if (remote_bbscn == 0 || local_bbscn == 0)
- local_bbscn = 0;
- else
- local_bbscn = local_bbscn > remote_bbscn ? local_bbscn : remote_bbscn;
-
- return local_bbscn;
-}
-
-static void unf_cfg_lowlevel_fabric_params(struct unf_lport *lport,
- struct unf_rport *rport,
- struct unf_fabric_parm *login_parms)
-{
- struct unf_port_login_parms login_co_parms = {0};
- u32 remote_edtov = 0;
- u32 ret = 0;
- u8 remote_edtov_resolution = 0; /* 0:ms; 1:ns */
-
- if (!lport->low_level_func.port_mgr_op.ll_port_config_set)
- return;
-
- login_co_parms.remote_rttov_tag = (u8)UNF_GET_RT_TOV_FROM_PARAMS(login_parms);
- login_co_parms.remote_edtov_tag = 0;
- login_co_parms.remote_bb_credit = (u16)UNF_GET_BB_CREDIT_FROM_PARAMS(login_parms);
- login_co_parms.compared_bbscn =
- (u32)unf_determin_bbscn((u8)lport->low_level_func.lport_cfg_items.bbscn,
- (u8)UNF_GET_BB_SC_N_FROM_PARAMS(login_parms));
-
- remote_edtov_resolution = (u8)UNF_GET_E_D_TOV_RESOLUTION_FROM_PARAMS(login_parms);
- remote_edtov = UNF_GET_E_D_TOV_FROM_PARAMS(login_parms);
- login_co_parms.compared_edtov_val =
- remote_edtov_resolution ? (remote_edtov / UNF_OS_MS_TO_NS)
- : remote_edtov;
-
- login_co_parms.compared_ratov_val = UNF_GET_RA_TOV_FROM_PARAMS(login_parms);
- login_co_parms.els_cmnd_code = ELS_FLOGI;
-
- if (UNF_TOP_P2P_MASK & (u32)lport->act_topo) {
- login_co_parms.act_topo = (login_parms->co_parms.nport == UNF_F_PORT)
- ? UNF_ACT_TOP_P2P_FABRIC
- : UNF_ACT_TOP_P2P_DIRECT;
- } else {
- login_co_parms.act_topo = lport->act_topo;
- }
-
- ret = lport->low_level_func.port_mgr_op.ll_port_config_set((void *)lport->fc_port,
- UNF_PORT_CFG_UPDATE_FABRIC_PARAM, (void *)&login_co_parms);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Lowlevel unsupport fabric config");
- }
-}
-
-u32 unf_check_flogi_params(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_fabric_parm *fabric_parms)
-{
- u32 ret = RETURN_OK;
- u32 high_port_name;
- u32 low_port_name;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(fabric_parms, UNF_RETURN_ERROR);
-
- if (fabric_parms->cl_parms[ARRAY_INDEX_2].valid == UNF_CLASS_INVALID) {
- /* Discard directly */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) NPort_ID(0x%x) FLOGI not support class3",
- lport->port_id, rport->nport_id);
-
- return UNF_RETURN_ERROR;
- }
-
- high_port_name = UNF_GET_NAME_HIGH_WORD(lport->port_name);
- low_port_name = UNF_GET_NAME_LOW_WORD(lport->port_name);
- if (fabric_parms->high_port_name == high_port_name &&
- fabric_parms->low_port_name == low_port_name) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]The wwpn(0x%x%x) of lport(0x%x) is same as the wwpn of rport(0x%x)",
- high_port_name, low_port_name, lport->port_id, rport->nport_id);
- return UNF_RETURN_ERROR;
- }
-
- return ret;
-}
-
-static void unf_save_fabric_params(struct unf_lport *lport,
- struct unf_rport *rport,
- struct unf_fabric_parm *fabric_parms)
-{
- u64 fabric_node_name = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
- FC_CHECK_RETURN_VOID(fabric_parms);
-
- fabric_node_name = (u64)(((u64)(fabric_parms->high_node_name) << UNF_SHIFT_32) |
- ((u64)(fabric_parms->low_node_name)));
-
-	/* The R_Port for 0xfffffe is used for FLOGI; no need to save its WWN */
- if (fabric_parms->co_parms.bb_receive_data_field_size > UNF_MAX_FRAME_SIZE)
- rport->max_frame_size = UNF_MAX_FRAME_SIZE; /* 2112 */
- else
- rport->max_frame_size = fabric_parms->co_parms.bb_receive_data_field_size;
-
- /* with Fabric attribute */
- if (fabric_parms->co_parms.nport == UNF_F_PORT) {
- rport->ed_tov = fabric_parms->co_parms.e_d_tov;
- rport->ra_tov = fabric_parms->co_parms.r_a_tov;
- lport->ed_tov = fabric_parms->co_parms.e_d_tov;
- lport->ra_tov = fabric_parms->co_parms.r_a_tov;
- lport->fabric_node_name = fabric_node_name;
- }
-
- /* Configure info from FLOGI to chip */
- unf_cfg_lowlevel_fabric_params(lport, rport, fabric_parms);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) Rport(0x%x) login parameter: E_D_TOV = %u. LPort E_D_TOV = %u. fabric nodename: 0x%x%x",
- lport->port_id, rport->nport_id, (fabric_parms->co_parms.e_d_tov),
- lport->ed_tov, fabric_parms->high_node_name, fabric_parms->low_node_name);
-}
-
-u32 unf_flogi_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
-{
- struct unf_rport *unf_rport = NULL;
- struct unf_flogi_fdisc_acc *flogi_frame = NULL;
- struct unf_fabric_parm *fabric_login_parms = NULL;
- u32 ret = UNF_RETURN_ERROR;
- ulong flag = 0;
- u64 wwpn = 0;
- u64 wwnn = 0;
- enum unf_act_topo unf_active_topo;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]LOGIN: Port(0x%x)<---RPort(0x%x) Receive FLOGI with OX_ID(0x%x)",
- lport->port_id, sid, xchg->oxid);
-
- UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_FLOGI);
-
- /* Check L_Port state: Offline */
- if (lport->states >= UNF_LPORT_ST_OFFLINE) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) with state(0x%x) not need to handle FLOGI",
- lport->port_id, lport->states);
-
- unf_cm_free_xchg(lport, xchg);
- return ret;
- }
-
- flogi_frame = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->flogi;
- fabric_login_parms = &flogi_frame->flogi_payload.fabric_parms;
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, &flogi_frame->flogi_payload,
- sizeof(struct unf_flogi_fdisc_payload));
- wwpn = (u64)(((u64)(fabric_login_parms->high_port_name) << UNF_SHIFT_32) |
- ((u64)fabric_login_parms->low_port_name));
- wwnn = (u64)(((u64)(fabric_login_parms->high_node_name) << UNF_SHIFT_32) |
- ((u64)fabric_login_parms->low_node_name));
-
- /* Get (new) R_Port: reuse only */
- unf_rport = unf_get_rport_by_nport_id(lport, UNF_FC_FID_FLOGI);
- unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY, UNF_FC_FID_FLOGI);
- if (unlikely(!unf_rport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) has no RPort. do nothing", lport->port_id);
-
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
-
- /* Update R_Port info */
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport->port_name = wwpn;
- unf_rport->node_name = wwnn;
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- /* Check RCVD FLOGI parameters: only for class-3 */
- ret = unf_check_flogi_params(lport, unf_rport, fabric_login_parms);
- if (ret != RETURN_OK) {
- /* Discard directly */
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
-
- /* Save fabric parameters */
- unf_save_fabric_params(lport, unf_rport, fabric_login_parms);
-
- if ((u32)lport->act_topo & UNF_TOP_P2P_MASK) {
- unf_active_topo =
- (fabric_login_parms->co_parms.nport == UNF_F_PORT)
- ? UNF_ACT_TOP_P2P_FABRIC
- : UNF_ACT_TOP_P2P_DIRECT;
- unf_lport_update_topo(lport, unf_active_topo);
- }
- /* Send ACC for FLOGI */
- ret = unf_send_flogi_acc(lport, unf_rport, xchg);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) send FLOGI ACC failed and do recover",
- lport->port_id);
-
- /* Do L_Port recovery */
- unf_lport_error_recovery(lport);
- }
-
- return ret;
-}
-
-static void unf_cfg_lowlevel_port_params(struct unf_lport *lport,
- struct unf_rport *rport,
- struct unf_lgn_parm *login_parms,
- u32 cmd_type)
-{
- struct unf_port_login_parms login_co_parms = {0};
- u32 ret = 0;
-
- if (!lport->low_level_func.port_mgr_op.ll_port_config_set)
- return;
-
- login_co_parms.rport_index = rport->rport_index;
- login_co_parms.seq_cnt = 0;
- login_co_parms.ed_tov = 0; /* ms */
- login_co_parms.ed_tov_timer_val = lport->ed_tov;
- login_co_parms.tx_mfs = rport->max_frame_size;
-
- login_co_parms.remote_rttov_tag = (u8)UNF_GET_RT_TOV_FROM_PARAMS(login_parms);
- login_co_parms.remote_edtov_tag = 0;
- login_co_parms.remote_bb_credit = (u16)UNF_GET_BB_CREDIT_FROM_PARAMS(login_parms);
- login_co_parms.els_cmnd_code = cmd_type;
-
- if (lport->act_topo == UNF_ACT_TOP_PRIVATE_LOOP) {
- login_co_parms.compared_bbscn = 0;
- } else {
- login_co_parms.compared_bbscn =
- (u32)unf_determin_bbscn((u8)lport->low_level_func.lport_cfg_items.bbscn,
- (u8)UNF_GET_BB_SC_N_FROM_PARAMS(login_parms));
- }
-
- login_co_parms.compared_edtov_val = lport->ed_tov;
- login_co_parms.compared_ratov_val = lport->ra_tov;
-
- ret = lport->low_level_func.port_mgr_op.ll_port_config_set((void *)lport->fc_port,
- UNF_PORT_CFG_UPDATE_PLOGI_PARAM, (void *)&login_co_parms);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) Lowlevel unsupport port config", lport->port_id);
- }
-}
-
-u32 unf_check_plogi_params(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_lgn_parm *login_parms)
-{
- u32 ret = RETURN_OK;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(login_parms, UNF_RETURN_ERROR);
-
- /* Parameters check: Class-type */
- if (login_parms->cl_parms[ARRAY_INDEX_2].valid == UNF_CLASS_INVALID ||
- login_parms->co_parms.bb_receive_data_field_size == 0) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) RPort N_Port_ID(0x%x) with PLOGI parameters invalid: class3(%u), BBReceiveDataFieldSize(0x%x), send LOGO",
- lport->port_id, rport->nport_id,
- login_parms->cl_parms[ARRAY_INDEX_2].valid,
- login_parms->co_parms.bb_receive_data_field_size);
-
- spin_lock_irqsave(&rport->rport_state_lock, flag);
- unf_rport_state_ma(rport, UNF_EVENT_RPORT_LOGO); /* --->>> LOGO */
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- /* Enter LOGO stage */
- unf_rport_enter_logo(lport, rport);
- return UNF_RETURN_ERROR;
- }
-
-	/* On 16G FC Brocade switches, the Domain Controller's PLOGI may
-	 * advertise both CLASS-1 and CLASS-2
-	 */
- if (login_parms->cl_parms[ARRAY_INDEX_0].valid == UNF_CLASS_VALID ||
- login_parms->cl_parms[ARRAY_INDEX_1].valid == UNF_CLASS_VALID) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) get PLOGI class1(%u) class2(%u) from N_Port_ID(0x%x)",
- lport->port_id, login_parms->cl_parms[ARRAY_INDEX_0].valid,
- login_parms->cl_parms[ARRAY_INDEX_1].valid, rport->nport_id);
- }
-
- return ret;
-}
-
-static void unf_save_plogi_params(struct unf_lport *lport,
- struct unf_rport *rport,
- struct unf_lgn_parm *login_parms,
- u32 cmd_code)
-{
-#define UNF_DELAY_TIME 100 /* delay (ms) before sending PRLI when the local WWPN is smaller */
-
- u64 wwpn = INVALID_VALUE64;
- u64 wwnn = INVALID_VALUE64;
- u32 ed_tov = 0;
- u32 remote_edtov = 0;
-
- if (login_parms->co_parms.bb_receive_data_field_size > UNF_MAX_FRAME_SIZE)
- rport->max_frame_size = UNF_MAX_FRAME_SIZE; /* 2112 */
- else
- rport->max_frame_size = login_parms->co_parms.bb_receive_data_field_size;
-
- wwnn = (u64)(((u64)(login_parms->high_node_name) << UNF_SHIFT_32) |
- ((u64)login_parms->low_node_name));
- wwpn = (u64)(((u64)(login_parms->high_port_name) << UNF_SHIFT_32) |
- ((u64)login_parms->low_port_name));
-
- remote_edtov = login_parms->co_parms.e_d_tov;
- ed_tov = login_parms->co_parms.e_d_tov_resolution
- ? (remote_edtov / UNF_OS_MS_TO_NS)
- : remote_edtov;
-
- rport->port_name = wwpn;
- rport->node_name = wwnn;
- rport->local_nport_id = lport->nport_id;
-
- if (lport->act_topo == UNF_ACT_TOP_P2P_DIRECT ||
- lport->act_topo == UNF_ACT_TOP_PRIVATE_LOOP) {
- /* P2P or Private Loop or FCoE VN2VN */
- lport->ed_tov = (lport->ed_tov > ed_tov) ? lport->ed_tov : ed_tov;
- lport->ra_tov = 2 * lport->ed_tov; /* 2 * E_D_TOV */
-
- if (ed_tov != 0)
- rport->ed_tov = ed_tov;
- else
- rport->ed_tov = UNF_DEFAULT_EDTOV;
- } else {
- /* SAN: E_D_TOV updated by FLOGI */
- rport->ed_tov = lport->ed_tov;
- }
-
- /* WWPN smaller: delay to send PRLI */
- if (rport->port_name > lport->port_name)
- rport->ed_tov += UNF_DELAY_TIME; /* 100ms */
-
- /* Configure port parameters to low level (chip) */
- unf_cfg_lowlevel_port_params(lport, rport, login_parms, cmd_code);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) RPort(0x%x) with WWPN(0x%llx) WWNN(0x%llx) login: ED_TOV(%u) Port: ED_TOV(%u)",
- lport->port_id, rport->nport_id, rport->port_name, rport->node_name,
- ed_tov, lport->ed_tov);
-}
-
-static bool unf_check_bbscn_is_enabled(u8 local_bbscn, u8 remote_bbscn)
-{
- return unf_determin_bbscn(local_bbscn, remote_bbscn) ? true : false;
-}
-
-static u32 unf_irq_process_switch2thread(void *lport, struct unf_xchg *xchg,
- unf_event_task evt_task)
-{
- struct unf_cm_event_report *event = NULL;
- struct unf_xchg *unf_xchg = NULL;
- u32 ret = 0;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
- unf_lport = lport;
- unf_xchg = xchg;
-
- if (unlikely(!unf_lport->event_mgr.unf_get_free_event_func ||
- !unf_lport->event_mgr.unf_post_event_func ||
- !unf_lport->event_mgr.unf_release_event)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) event function is NULL",
- unf_lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- ret = unf_xchg_ref_inc(unf_xchg, SFS_RESPONSE);
- FC_CHECK_RETURN_VALUE((ret == RETURN_OK), UNF_RETURN_ERROR);
-
- event = unf_lport->event_mgr.unf_get_free_event_func((void *)lport);
- FC_CHECK_RETURN_VALUE(event, UNF_RETURN_ERROR);
-
- event->lport = unf_lport;
- event->event_asy_flag = UNF_EVENT_ASYN;
- event->unf_event_task = evt_task;
- event->para_in = xchg;
- unf_lport->event_mgr.unf_post_event_func(unf_lport, event);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) start to switch thread process now",
- unf_lport->port_id);
-
- return ret;
-}
-
-u32 unf_plogi_handler_com_process(struct unf_xchg *xchg)
-{
- struct unf_xchg *unf_xchg = xchg;
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- struct unf_plogi_pdisc *plogi_frame = NULL;
- struct unf_lgn_parm *login_parms = NULL;
- u32 ret = UNF_RETURN_ERROR;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(unf_xchg, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(unf_xchg->lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(unf_xchg->rport, UNF_RETURN_ERROR);
-
- unf_lport = unf_xchg->lport;
- unf_rport = unf_xchg->rport;
- plogi_frame = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->plogi;
- login_parms = &plogi_frame->payload.stparms;
-
- unf_save_plogi_params(unf_lport, unf_rport, login_parms, ELS_PLOGI);
-
- /* Update state: PLOGI_WAIT */
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport->nport_id = unf_xchg->sid;
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- /* Send PLOGI ACC to remote port */
- ret = unf_send_plogi_acc(unf_lport, unf_rport, unf_xchg);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) send PLOGI ACC failed",
- unf_lport->port_id);
-
-		/* NOTE: the exchange has already been freed internally */
- unf_rport_error_recovery(unf_rport);
- return ret;
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]LOGIN: Port(0x%x) send PLOGI ACC to Port(0x%x) succeed",
- unf_lport->port_id, unf_rport->nport_id);
-
- return ret;
-}
-
-int unf_plogi_async_handle(void *argc_in, void *argc_out)
-{
- struct unf_xchg *xchg = (struct unf_xchg *)argc_in;
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- ret = unf_plogi_handler_com_process(xchg);
-
- unf_xchg_ref_dec(xchg, SFS_RESPONSE);
-
- return (int)ret;
-}
-
-u32 unf_plogi_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
-{
- struct unf_xchg *unf_xchg = xchg;
- struct unf_lport *unf_lport = lport;
- struct unf_rport *unf_rport = NULL;
- struct unf_plogi_pdisc *plogi_frame = NULL;
- struct unf_lgn_parm *login_parms = NULL;
- struct unf_rjt_info rjt_info = {0};
- u64 wwpn = INVALID_VALUE64;
- u32 ret = UNF_RETURN_ERROR;
- bool bbscn_enabled = false;
- bool switch2thread = false;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- /* 1. Maybe: PLOGI is sent by Name server */
- if (sid < UNF_FC_FID_DOM_MGR ||
- lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Receive PLOGI. Port(0x%x_0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
- lport->port_id, lport->nport_id, sid, xchg->oxid);
- }
-
- UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_PLOGI);
-
- /* 2. State check: Offline */
- if (unf_lport->states >= UNF_LPORT_ST_OFFLINE) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) received PLOGI with state(0x%x)",
- unf_lport->port_id, unf_lport->nport_id, unf_lport->states);
-
- unf_cm_free_xchg(unf_lport, unf_xchg);
- return UNF_RETURN_ERROR;
- }
-
- /* Get R_Port by WWpn */
- plogi_frame = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->plogi;
- login_parms = &plogi_frame->payload.stparms;
-
- UNF_PRINT_SFS_LIMIT(UNF_INFO, unf_lport->port_id, &plogi_frame->payload,
- sizeof(struct unf_plogi_payload));
-
- wwpn = (u64)(((u64)(login_parms->high_port_name) << UNF_SHIFT_32) |
- ((u64)login_parms->low_port_name));
-
- /* 3. Get (new) R_Port (by wwpn) */
- unf_rport = unf_find_rport(unf_lport, sid, wwpn);
- unf_rport = unf_get_safe_rport(unf_lport, unf_rport, UNF_RPORT_REUSE_ONLY, sid);
- if (!unf_rport) {
- memset(&rjt_info, 0, sizeof(struct unf_rjt_info));
- rjt_info.els_cmnd_code = ELS_PLOGI;
- rjt_info.reason_code = UNF_LS_RJT_BUSY;
- rjt_info.reason_explanation = UNF_LS_RJT_INSUFFICIENT_RESOURCES;
-
- /* R_Port is NULL: Send ELS RJT for PLOGI */
- (void)unf_send_els_rjt_by_did(unf_lport, unf_xchg, sid, &rjt_info);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) has no RPort and send PLOGI reject",
- unf_lport->port_id);
- return RETURN_OK;
- }
-
-	/*
-	 * 4. According to FC-LS 4.2.7.1:
-	 * after receiving PLOGI or sending PLOGI ACC, terminate open exchanges
-	 */
- unf_cm_xchg_mgr_abort_io_by_id(unf_lport, unf_rport, sid, unf_lport->nport_id, 0);
-
- /* 5. Cancel recovery timer work after RCVD PLOGI */
- if (cancel_delayed_work(&unf_rport->recovery_work))
- atomic_dec(&unf_rport->rport_ref_cnt);
-
- /*
- * 6. Plogi parameters check
- * Call by: (RCVD) PLOGI handler & callback function for RCVD PLOGI_ACC
- */
- ret = unf_check_plogi_params(unf_lport, unf_rport, login_parms);
- if (ret != RETURN_OK) {
- unf_cm_free_xchg(unf_lport, unf_xchg);
- return UNF_RETURN_ERROR;
- }
-
- unf_xchg->lport = lport;
- unf_xchg->rport = unf_rport;
- unf_xchg->sid = sid;
-
- /* 7. About bbscn for context change */
- bbscn_enabled =
- unf_check_bbscn_is_enabled((u8)unf_lport->low_level_func.lport_cfg_items.bbscn,
- (u8)UNF_GET_BB_SC_N_FROM_PARAMS(login_parms));
- if (unf_lport->act_topo == UNF_ACT_TOP_P2P_DIRECT && bbscn_enabled) {
- switch2thread = true;
- unf_lport->bbscn_support = true;
- }
-
- /* 8. Process PLOGI Frame: switch to thread if necessary */
- if (switch2thread && unf_lport->root_lport == unf_lport) {
- /* Wait for LR complete sync */
- ret = unf_irq_process_switch2thread(unf_lport, unf_xchg, unf_plogi_async_handle);
- } else {
- ret = unf_plogi_handler_com_process(unf_xchg);
- }
-
- return ret;
-}
-
-static void unf_obtain_tape_capacity(struct unf_lport *lport,
- struct unf_rport *rport, u32 tape_parm)
-{
- u32 rec_support = 0;
- u32 task_retry_support = 0;
- u32 retry_support = 0;
-
- rec_support = tape_parm & UNF_FC4_FRAME_PARM_3_REC_SUPPORT;
- task_retry_support =
- tape_parm & UNF_FC4_FRAME_PARM_3_TASK_RETRY_ID_SUPPORT;
- retry_support = tape_parm & UNF_FC4_FRAME_PARM_3_RETRY_SUPPORT;
-
- if (lport->low_level_func.lport_cfg_items.tape_support &&
- rec_support && task_retry_support && retry_support) {
- rport->tape_support_needed = true;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x_0x%x) FC_tape is needed for RPort(0x%x)",
- lport->port_id, lport->nport_id, rport->nport_id);
- }
-
- if ((tape_parm & UNF_FC4_FRAME_PARM_3_CONF_ALLOW) &&
- lport->low_level_func.lport_cfg_items.fcp_conf) {
- rport->fcp_conf_needed = true;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x_0x%x) FCP confirm is needed for RPort(0x%x)",
- lport->port_id, lport->nport_id, rport->nport_id);
- }
-}
-
-static u32 unf_prli_handler_com_process(struct unf_xchg *xchg)
-{
- struct unf_prli_prlo *prli = NULL;
- u32 ret = UNF_RETURN_ERROR;
- ulong flags = 0;
- u32 sid = 0;
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- struct unf_xchg *unf_xchg = NULL;
-
- unf_xchg = xchg;
- FC_CHECK_RETURN_VALUE(unf_xchg->lport, UNF_RETURN_ERROR);
- unf_lport = unf_xchg->lport;
- sid = xchg->sid;
-
- UNF_SERVICE_COLLECT(unf_lport->link_service_info, UNF_SERVICE_ITEM_PRLI);
-
- /* 1. Get R_Port: for each R_Port from rport_busy_list */
- unf_rport = unf_get_rport_by_nport_id(unf_lport, sid);
- if (!unf_rport) {
-		/* no session (R_Port) exists */
- (void)unf_send_logo_by_did(unf_lport, sid);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) received PRLI but no RPort SID(0x%x) OX_ID(0x%x)",
- unf_lport->port_id, unf_lport->nport_id, sid, xchg->oxid);
-
- unf_cm_free_xchg(unf_lport, xchg);
- return ret;
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]LOGIN: Receive PRLI. Port(0x%x)<---RPort(0x%x) with S_ID(0x%x)",
- unf_lport->port_id, unf_rport->nport_id, sid);
-
- /* 2. Get PRLI info */
- prli = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->prli;
- if (sid < UNF_FC_FID_DOM_MGR || unf_lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Receive PRLI. Port(0x%x_0x%x)<---RPort(0x%x) parameter-3(0x%x) OX_ID(0x%x)",
- unf_lport->port_id, unf_lport->nport_id, sid,
- prli->payload.parms[ARRAY_INDEX_3], xchg->oxid);
- }
-
- UNF_PRINT_SFS_LIMIT(UNF_INFO, unf_lport->port_id, &prli->payload,
- sizeof(struct unf_prli_payload));
-
- spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
-
- /* 3. Increase R_Port ref_cnt */
- ret = unf_rport_ref_inc(unf_rport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) RPort(0x%x_0x%p) is removing and do nothing",
- unf_lport->port_id, unf_rport->nport_id, unf_rport);
-
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
-
- unf_cm_free_xchg(unf_lport, xchg);
- return RETURN_ERROR;
- }
-
- /* 4. Cancel R_Port Open work */
- if (cancel_delayed_work(&unf_rport->open_work)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x_0x%x) RPort(0x%x) cancel open work succeed",
- unf_lport->port_id, unf_lport->nport_id, unf_rport->nport_id);
-
- /* This is not the last counter */
- atomic_dec(&unf_rport->rport_ref_cnt);
- }
-
- /* 5. Check R_Port state */
- if (unf_rport->rp_state != UNF_RPORT_ST_PRLI_WAIT &&
- unf_rport->rp_state != UNF_RPORT_ST_READY) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) RPort(0x%x) with state(0x%x) when received PRLI, send LOGO",
- unf_lport->port_id, unf_lport->nport_id,
- unf_rport->nport_id, unf_rport->rp_state);
-
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
-
- /* NOTE: Start to send LOGO */
- unf_rport_enter_logo(unf_lport, unf_rport);
-
- unf_cm_free_xchg(unf_lport, xchg);
- unf_rport_ref_dec(unf_rport);
-
- return RETURN_ERROR;
- }
-
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
-
- /* 6. Update R_Port options(INI/TGT/BOTH) */
- unf_rport->options =
- prli->payload.parms[ARRAY_INDEX_3] &
- (UNF_FC4_FRAME_PARM_3_TGT | UNF_FC4_FRAME_PARM_3_INI);
-
- unf_update_port_feature(unf_rport->port_name, unf_rport->options);
-
- /* for Confirm */
- unf_rport->fcp_conf_needed = false;
-
- unf_obtain_tape_capacity(unf_lport, unf_rport, prli->payload.parms[ARRAY_INDEX_3]);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x_0x%x) RPort(0x%x) parameter-3(0x%x) options(0x%x)",
- unf_lport->port_id, unf_lport->nport_id, unf_rport->nport_id,
- prli->payload.parms[ARRAY_INDEX_3], unf_rport->options);
-
- /* 7. Send PRLI ACC */
- ret = unf_send_prli_acc(unf_lport, unf_rport, xchg);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) RPort(0x%x) send PRLI ACC failed",
- unf_lport->port_id, unf_lport->nport_id, unf_rport->nport_id);
-
-		/* NOTE: the exchange has already been freed internally */
- unf_rport_error_recovery(unf_rport);
- }
-
- /* 8. Decrease R_Port ref_cnt */
- unf_rport_ref_dec(unf_rport);
-
- return ret;
-}
-
-int unf_prli_async_handle(void *argc_in, void *argc_out)
-{
- struct unf_xchg *xchg = (struct unf_xchg *)argc_in;
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- ret = unf_prli_handler_com_process(xchg);
-
- unf_xchg_ref_dec(xchg, SFS_RESPONSE);
-
- return (int)ret;
-}
-
-u32 unf_prli_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
-{
- u32 ret = UNF_RETURN_ERROR;
- bool switch2thread = false;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- xchg->sid = sid;
- xchg->lport = lport;
- unf_lport = lport;
-
- if (lport->bbscn_support &&
- lport->act_topo == UNF_ACT_TOP_P2P_DIRECT)
- switch2thread = true;
-
- if (switch2thread && unf_lport->root_lport == unf_lport) {
- /* Wait for LR done sync */
- ret = unf_irq_process_switch2thread(lport, xchg, unf_prli_async_handle);
- } else {
- ret = unf_prli_handler_com_process(xchg);
- }
-
- return ret;
-}
-
-static void unf_save_rscn_port_id(struct unf_rscn_mgr *rscn_mg,
- struct unf_rscn_port_id_page *rscn_port_id)
-{
- struct unf_port_id_page *exit_port_id_page = NULL;
- struct unf_port_id_page *new_port_id_page = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flag = 0;
- bool is_repeat = false;
-
- FC_CHECK_RETURN_VOID(rscn_mg);
- FC_CHECK_RETURN_VOID(rscn_port_id);
-
-	/* 1. Check whether the new RSCN Port_ID (RSCN page) is already
-	 * present in the RSCN_Mgr
-	 */
- spin_lock_irqsave(&rscn_mg->rscn_id_list_lock, flag);
- if (list_empty(&rscn_mg->list_using_rscn_page)) {
- is_repeat = false;
- } else {
-		/* Check for repeats: for each existing RSCN page from the
-		 * RSCN_Mgr page list
-		 */
- list_for_each_safe(node, next_node, &rscn_mg->list_using_rscn_page) {
- exit_port_id_page = list_entry(node, struct unf_port_id_page,
- list_node_rscn);
- if (exit_port_id_page->port_id_port == rscn_port_id->port_id_port &&
- exit_port_id_page->port_id_area == rscn_port_id->port_id_area &&
- exit_port_id_page->port_id_domain == rscn_port_id->port_id_domain) {
- is_repeat = true;
- break;
- }
- }
- }
- spin_unlock_irqrestore(&rscn_mg->rscn_id_list_lock, flag);
-
- FC_CHECK_RETURN_VOID(rscn_mg->unf_get_free_rscn_node);
-
-	/* 2. Get a free RSCN node and add it to the RSCN_Mgr */
- if (!is_repeat) {
- new_port_id_page = rscn_mg->unf_get_free_rscn_node(rscn_mg);
- if (!new_port_id_page) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_ERR, "[err]Get free RSCN node failed");
-
- return;
- }
-
- new_port_id_page->addr_format = rscn_port_id->addr_format;
- new_port_id_page->event_qualifier = rscn_port_id->event_qualifier;
- new_port_id_page->reserved = rscn_port_id->reserved;
- new_port_id_page->port_id_domain = rscn_port_id->port_id_domain;
- new_port_id_page->port_id_area = rscn_port_id->port_id_area;
- new_port_id_page->port_id_port = rscn_port_id->port_id_port;
-
- /* Add entry to list: using_rscn_page */
- spin_lock_irqsave(&rscn_mg->rscn_id_list_lock, flag);
- list_add_tail(&new_port_id_page->list_node_rscn, &rscn_mg->list_using_rscn_page);
- spin_unlock_irqrestore(&rscn_mg->rscn_id_list_lock, flag);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) has repeat RSCN node with domain(0x%x) area(0x%x)",
- rscn_port_id->port_id_domain, rscn_port_id->port_id_area,
- rscn_port_id->port_id_port);
- }
-}
-
-static u32 unf_analysis_rscn_payload(struct unf_lport *lport,
- struct unf_rscn_pld *rscn_pld)
-{
-#define UNF_OS_DISC_REDISC_TIME 10000
-
- struct unf_rscn_port_id_page *rscn_port_id = NULL;
- struct unf_disc *disc = NULL;
- struct unf_rscn_mgr *rscn_mgr = NULL;
- u32 index = 0;
- u32 pld_len = 0;
- u32 port_id_page_cnt = 0;
- u32 ret = RETURN_OK;
- ulong flag = 0;
- bool eb_need_disc_flag = false;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rscn_pld, UNF_RETURN_ERROR);
-
- /* This field is the length in bytes of the entire Payload, inclusive of
- * the word 0
- */
- pld_len = UNF_GET_RSCN_PLD_LEN(rscn_pld->cmnd);
- pld_len -= sizeof(rscn_pld->cmnd);
- port_id_page_cnt = pld_len / UNF_RSCN_PAGE_LEN;
-
-	/* The payload carries no more than 255 pages */
- if (port_id_page_cnt > UNF_RSCN_PAGE_SUM) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x_0x%x) page num(0x%x) exceed 255 in RSCN",
- lport->port_id, lport->nport_id, port_id_page_cnt);
-
- return UNF_RETURN_ERROR;
- }
-
- /* L_Port-->Disc-->Rscn_Mgr */
- disc = &lport->disc;
- rscn_mgr = &disc->rscn_mgr;
-
-	/* For each ID in the RSCN pages: check whether discovery is needed */
- while (index < port_id_page_cnt) {
- rscn_port_id = &rscn_pld->port_id_page[index];
- if (unf_lookup_lport_by_nportid(lport, *(u32 *)rscn_port_id)) {
-			/* Avoid creating a session with an L_Port that has
-			 * the same N_Port_ID
-			 */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) find local N_Port_ID(0x%x) within RSCN payload",
- ((struct unf_lport *)(lport->root_lport))->nport_id,
- *(u32 *)rscn_port_id);
- } else {
- /* New RSCN_Page ID find, save it to RSCN_Mgr */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x_0x%x) save RSCN N_Port_ID(0x%x)",
- lport->port_id, lport->nport_id,
- *(u32 *)rscn_port_id);
-
- /* 1. new RSCN_Page ID find, save it to RSCN_Mgr */
- unf_save_rscn_port_id(rscn_mgr, rscn_port_id);
- eb_need_disc_flag = true;
- }
- index++;
- }
-
- if (!eb_need_disc_flag) {
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
- "[info]Port(0x%x) find all N_Port_ID and do not need to disc",
- ((struct unf_lport *)(lport->root_lport))->nport_id);
-
- return RETURN_OK;
- }
-
- /* 2. Do/Start Disc: Check & do Disc (GID_PT) process */
- if (!disc->disc_temp.unf_disc_start) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) DISC start function is NULL",
			     lport->port_id, lport->nport_id);
-
- return UNF_RETURN_ERROR;
- }
-
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- if (disc->states == UNF_DISC_ST_END ||
- ((jiffies - disc->last_disc_jiff) > msecs_to_jiffies(UNF_OS_DISC_REDISC_TIME))) {
- disc->disc_option = UNF_RSCN_DISC;
- disc->last_disc_jiff = jiffies;
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- ret = disc->disc_temp.unf_disc_start(lport);
- } else {
- FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_INFO,
- "[info]Port(0x%x_0x%x) DISC state(0x%x) with last time(%llu) and don't do DISC",
- lport->port_id, lport->nport_id, disc->states,
- disc->last_disc_jiff);
-
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
- }
-
- return ret;
-}
-
-u32 unf_rscn_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
-{
-	/*
-	 * An RSCN ELS shall be sent to registered Nx_Ports
-	 * when an event occurs that may have affected the state of
-	 * one or more Nx_Ports, or the ULP state within the Nx_Port.
-	 *
-	 * The Payload of an RSCN Request includes a list
-	 * containing the addresses of the affected Nx_Ports.
-	 *
-	 * Each affected Port_ID page contains the ID of the Nx_Port,
-	 * Fabric Controller, E_Port, domain, or area for which the event was
-	 * detected.
-	 */
- struct unf_rscn_pld *rscn_pld = NULL;
- struct unf_rport *unf_rport = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u32 pld_len = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Receive RSCN Port(0x%x_0x%x)<---RPort(0x%x) OX_ID(0x%x)",
- lport->port_id, lport->nport_id, sid, xchg->oxid);
-
- UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_RSCN);
-
- /* 1. Get R_Port by S_ID */
- unf_rport = unf_get_rport_by_nport_id(lport, sid); /* rport busy_list */
- if (!unf_rport) {
- unf_rport = unf_rport_get_free_and_init(lport, UNF_PORT_TYPE_FC, sid);
- if (!unf_rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) received RSCN but has no RPort(0x%x) with OX_ID(0x%x)",
- lport->port_id, lport->nport_id, sid, xchg->oxid);
-
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
-
- unf_rport->nport_id = sid;
- }
-
- rscn_pld = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->rscn.rscn_pld;
- FC_CHECK_RETURN_VALUE(rscn_pld, UNF_RETURN_ERROR);
- pld_len = UNF_GET_RSCN_PLD_LEN(rscn_pld->cmnd);
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, rscn_pld, pld_len);
-
-	/* 2. NOTE: Analyze RSCN payload (save & disc if necessary) */
- ret = unf_analysis_rscn_payload(lport, rscn_pld);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) analysis RSCN failed",
- lport->port_id, lport->nport_id);
- }
-
-	/* 3. Send RSCN ACC after analyzing payload */
- ret = unf_send_rscn_acc(lport, unf_rport, xchg);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) send RSCN response failed",
- lport->port_id, lport->nport_id);
- }
-
- return ret;
-}
-
-static void unf_analysis_pdisc_pld(struct unf_lport *lport,
- struct unf_rport *rport,
- struct unf_plogi_pdisc *pdisc)
-{
- struct unf_lgn_parm *pdisc_params = NULL;
- u64 wwpn = INVALID_VALUE64;
- u64 wwnn = INVALID_VALUE64;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
- FC_CHECK_RETURN_VOID(pdisc);
-
- pdisc_params = &pdisc->payload.stparms;
- if (pdisc_params->co_parms.bb_receive_data_field_size > UNF_MAX_FRAME_SIZE)
- rport->max_frame_size = UNF_MAX_FRAME_SIZE;
- else
- rport->max_frame_size = pdisc_params->co_parms.bb_receive_data_field_size;
-
- wwnn = (u64)(((u64)(pdisc_params->high_node_name) << UNF_SHIFT_32) |
- ((u64)pdisc_params->low_node_name));
- wwpn = (u64)(((u64)(pdisc_params->high_port_name) << UNF_SHIFT_32) |
- ((u64)pdisc_params->low_port_name));
-
- rport->port_name = wwpn;
- rport->node_name = wwnn;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) save PDISC parameters to Rport(0x%x) WWPN(0x%llx) WWNN(0x%llx)",
- lport->port_id, rport->nport_id, rport->port_name,
- rport->node_name);
-}
-
-u32 unf_send_pdisc_rjt(struct unf_lport *lport, struct unf_rport *rport, struct unf_xchg *xchg)
-{
- u32 ret = UNF_RETURN_ERROR;
- struct unf_rjt_info rjt_info;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- memset(&rjt_info, 0, sizeof(struct unf_rjt_info));
- rjt_info.els_cmnd_code = ELS_PDISC;
- rjt_info.reason_code = UNF_LS_RJT_LOGICAL_ERROR;
- rjt_info.reason_explanation = UNF_LS_RJT_NO_ADDITIONAL_INFO;
-
- ret = unf_send_els_rjt_by_rport(lport, xchg, rport, &rjt_info);
-
- return ret;
-}
-
-u32 unf_pdisc_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
-{
- struct unf_plogi_pdisc *pdisc = NULL;
- struct unf_rport *unf_rport = NULL;
- ulong flags = 0;
- u32 ret = RETURN_OK;
- u64 wwpn = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Receive PDISC. Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
- lport->port_id, sid, xchg->oxid);
-
- UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_PDISC);
- pdisc = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->pdisc;
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, &pdisc->payload,
- sizeof(struct unf_plogi_payload));
- wwpn = (u64)(((u64)(pdisc->payload.stparms.high_port_name) << UNF_SHIFT_32) |
- ((u64)pdisc->payload.stparms.low_port_name));
-
- unf_rport = unf_find_rport(lport, sid, wwpn);
- if (!unf_rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) can't find RPort by NPort ID(0x%x). Free exchange and send LOGO",
- lport->port_id, sid);
-
- unf_cm_free_xchg(lport, xchg);
- (void)unf_send_logo_by_did(lport, sid);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MINOR,
- "[info]Port(0x%x) get exist RPort(0x%x) when receive PDISC with S_Id(0x%x)",
- lport->port_id, unf_rport->nport_id, sid);
-
- if (sid >= UNF_FC_FID_DOM_MGR)
- return unf_send_pdisc_rjt(lport, unf_rport, xchg);
-
- unf_analysis_pdisc_pld(lport, unf_rport, pdisc);
-
- /* State: READY */
- spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
- if (unf_rport->rp_state == UNF_RPORT_ST_READY) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) find RPort(0x%x) state is READY when receiving PDISC",
- lport->port_id, sid);
-
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
-
- ret = unf_send_pdisc_acc(lport, unf_rport, xchg);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) handle PDISC failed",
- lport->port_id);
-
- return ret;
- }
-
- /* Report Down/Up event to scsi */
- unf_update_lport_state_by_linkup_event(lport,
- unf_rport, unf_rport->options);
- } else if ((unf_rport->rp_state == UNF_RPORT_ST_CLOSING) &&
- (unf_rport->session)) {
- /* State: Closing */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) find RPort(0x%x) state is 0x%x when receiving PDISC",
- lport->port_id, sid, unf_rport->rp_state);
-
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
-
- unf_cm_free_xchg(lport, xchg);
- (void)unf_send_logo_by_did(lport, sid);
- } else if (unf_rport->rp_state == UNF_RPORT_ST_PRLI_WAIT) {
- /* State: PRLI_WAIT */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) find RPort(0x%x) state is 0x%x when receiving PDISC",
- lport->port_id, sid, unf_rport->rp_state);
-
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
-
- ret = unf_send_pdisc_acc(lport, unf_rport, xchg);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) handle PDISC failed",
- lport->port_id);
-
- return ret;
- }
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) find RPort(0x%x) state is 0x%x when receiving PDISC, send LOGO",
- lport->port_id, sid, unf_rport->rp_state);
-
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
-
- unf_rport_enter_logo(lport, unf_rport);
- unf_cm_free_xchg(lport, xchg);
- }
- }
-
- return ret;
-}
-
-static void unf_analysis_adisc_pld(struct unf_lport *lport,
- struct unf_rport *rport,
- struct unf_adisc_payload *adisc_pld)
-{
- u64 wwpn = INVALID_VALUE64;
- u64 wwnn = INVALID_VALUE64;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
- FC_CHECK_RETURN_VOID(adisc_pld);
-
- wwnn = (u64)(((u64)(adisc_pld->high_node_name) << UNF_SHIFT_32) |
- ((u64)adisc_pld->low_node_name));
- wwpn = (u64)(((u64)(adisc_pld->high_port_name) << UNF_SHIFT_32) |
- ((u64)adisc_pld->low_port_name));
-
- rport->port_name = wwpn;
- rport->node_name = wwnn;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) save ADISC parameters to RPort(0x%x), WWPN(0x%llx) WWNN(0x%llx) NPort ID(0x%x)",
- lport->port_id, rport->nport_id, rport->port_name,
- rport->node_name, adisc_pld->nport_id);
-}
-
-u32 unf_adisc_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
-{
- struct unf_rport *unf_rport = NULL;
- struct unf_adisc_payload *adisc_pld = NULL;
- ulong flags = 0;
- u64 wwpn = 0;
- u32 ret = RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Receive ADISC. Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
- lport->port_id, sid, xchg->oxid);
-
- UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_ADISC);
- adisc_pld = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->adisc.adisc_payl;
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, adisc_pld, sizeof(struct unf_adisc_payload));
- wwpn = (u64)(((u64)(adisc_pld->high_port_name) << UNF_SHIFT_32) |
- ((u64)adisc_pld->low_port_name));
-
- unf_rport = unf_find_rport(lport, sid, wwpn);
- if (!unf_rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) can't find RPort by NPort ID(0x%x). Free exchange and send LOGO",
- lport->port_id, sid);
-
- unf_cm_free_xchg(lport, xchg);
- (void)unf_send_logo_by_did(lport, sid);
-
- return ret;
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MINOR,
- "[info]Port(0x%x) get exist RPort(0x%x) when receive ADISC with S_ID(0x%x)",
- lport->port_id, unf_rport->nport_id, sid);
-
- unf_analysis_adisc_pld(lport, unf_rport, adisc_pld);
-
- /* State: READY */
- spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
- if (unf_rport->rp_state == UNF_RPORT_ST_READY) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) find RPort(0x%x) state is READY when receiving ADISC",
- lport->port_id, sid);
-
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
-
- /* Return ACC directly */
- ret = unf_send_adisc_acc(lport, unf_rport, xchg);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) send ADISC ACC failed", lport->port_id);
-
- return ret;
- }
-
- /* Report Down/Up event to SCSI */
- unf_update_lport_state_by_linkup_event(lport, unf_rport, unf_rport->options);
- }
- /* State: Closing */
- else if ((unf_rport->rp_state == UNF_RPORT_ST_CLOSING) &&
- (unf_rport->session)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) find RPort(0x%x) state is 0x%x when receiving ADISC",
- lport->port_id, sid, unf_rport->rp_state);
-
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
-
- unf_rport = unf_get_safe_rport(lport, unf_rport,
- UNF_RPORT_REUSE_RECOVER,
- unf_rport->nport_id);
- if (unf_rport) {
- spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
- unf_rport->nport_id = sid;
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
-
- ret = unf_send_adisc_acc(lport, unf_rport, xchg);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) send ADISC ACC failed",
- lport->port_id);
-
- return ret;
- }
-
- unf_update_lport_state_by_linkup_event(lport,
- unf_rport, unf_rport->options);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) can't find RPort by NPort_ID(0x%x). Free exchange and send LOGO",
- lport->port_id, sid);
-
- unf_cm_free_xchg(lport, xchg);
- (void)unf_send_logo_by_did(lport, sid);
- }
- } else if (unf_rport->rp_state == UNF_RPORT_ST_PRLI_WAIT) {
- /* State: PRLI_WAIT */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) find RPort(0x%x) state is 0x%x when receiving ADISC",
- lport->port_id, sid, unf_rport->rp_state);
-
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
-
- ret = unf_send_adisc_acc(lport, unf_rport, xchg);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) send ADISC ACC failed", lport->port_id);
-
- return ret;
- }
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) find RPort(0x%x) state is 0x%x when receiving ADISC, send LOGO",
- lport->port_id, sid, unf_rport->rp_state);
-
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
-
- unf_rport_enter_logo(lport, unf_rport);
- unf_cm_free_xchg(lport, xchg);
- }
-
- return ret;
-}
-
-u32 unf_rec_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
-{
- struct unf_rport *unf_rport = NULL;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Port(0x%x) receive REC", lport->port_id);
-
- /* Send rec acc */
- ret = unf_send_rec_acc(lport, unf_rport, xchg); /* discard directly */
-
- return ret;
-}
-
-u32 unf_rrq_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
-{
- struct unf_rport *unf_rport = NULL;
- struct unf_rrq *rrq = NULL;
- struct unf_xchg *xchg_reused = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u16 ox_id = 0;
- u16 rx_id = 0;
- u32 unf_sid = 0;
- ulong flags = 0;
- struct unf_rjt_info rjt_info = {0};
- struct unf_xchg_hot_pool *hot_pool = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_RRQ);
- rrq = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->rrq;
- ox_id = (u16)(rrq->oxid_rxid >> UNF_SHIFT_16);
- rx_id = (u16)(rrq->oxid_rxid);
- unf_sid = rrq->sid & UNF_NPORTID_MASK;
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_KEVENT,
- "[warn]Receive RRQ. Port(0x%x)<---RPort(0x%x) sfsXchg(0x%p) OX_ID(0x%x,0x%x) RX_ID(0x%x)",
- lport->port_id, sid, xchg, ox_id, xchg->oxid, rx_id);
-
- /* Get R_Port */
- unf_rport = unf_get_rport_by_nport_id(lport, sid);
- if (!unf_rport) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) receive RRQ but has no RPort(0x%x)",
- lport->port_id, sid);
-
- /* NOTE: send LOGO */
- unf_send_logo_by_did(lport, unf_sid);
-
- unf_cm_free_xchg(lport, xchg);
- return ret;
- }
-
- /* Get Target (Abort I/O) exchange context */
- xchg_reused = unf_cm_lookup_xchg_by_id(lport, ox_id, unf_sid); /* unf_find_xchg_by_ox_id */
- if (!xchg_reused) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) cannot find exchange with OX_ID(0x%x) RX_ID(0x%x) S_ID(0x%x)",
- lport->port_id, ox_id, rx_id, unf_sid);
-
- rjt_info.els_cmnd_code = ELS_RRQ;
- rjt_info.reason_code = FCXLS_BA_RJT_LOGICAL_ERROR | FCXLS_LS_RJT_INVALID_OXID_RXID;
-
- /* NOTE: send ELS RJT */
- if (unf_send_els_rjt_by_rport(lport, xchg, unf_rport, &rjt_info) != RETURN_OK) {
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
-
- return RETURN_OK;
- }
-
- hot_pool = xchg_reused->hot_pool;
- if (unlikely(!hot_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "Port(0x%x) OxId(0x%x) Rxid(0x%x) Sid(0x%x) Hot Pool is NULL.",
- lport->port_id, ox_id, rx_id, unf_sid);
-
- return ret;
- }
-
- spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
- xchg_reused->oxid = INVALID_VALUE16;
- xchg_reused->rxid = INVALID_VALUE16;
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
-
- /* NOTE: release I/O exchange context */
- unf_xchg_ref_dec(xchg_reused, SFS_RESPONSE);
-
- /* Send RRQ ACC */
- ret = unf_send_rrq_acc(lport, unf_rport, xchg);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
-			     "[warn]Port(0x%x) cannot send RRQ rsp. Xchg(0x%p) Ioxchg(0x%p) OX_RX_ID(0x%x 0x%x) S_ID(0x%x)",
- lport->port_id, xchg, xchg_reused, ox_id, rx_id, unf_sid);
-
- unf_cm_free_xchg(lport, xchg);
- }
-
- return ret;
-}
-
-u32 unf_logo_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
-{
- struct unf_rport *unf_rport = NULL;
- struct unf_rport *logo_rport = NULL;
- struct unf_logo *logo = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u32 nport_id = 0;
- struct unf_rjt_info rjt_info = {0};
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_LOGO);
- logo = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->logo;
- nport_id = logo->payload.nport_id & UNF_NPORTID_MASK;
-
- if (sid < UNF_FC_FID_DOM_MGR) {
- /* R_Port is not fabric port */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[info]LOGIN: Receive LOGO. Port(0x%x)<---RPort(0x%x) NPort_ID(0x%x) OXID(0x%x)",
- lport->port_id, sid, nport_id, xchg->oxid);
- }
-
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, &logo->payload,
- sizeof(struct unf_logo_payload));
-
- /*
- * 1. S_ID unequal to NPort_ID:
-	 * immediately link down the R_Port found by NPort_ID
- */
- if (sid != nport_id) {
- logo_rport = unf_get_rport_by_nport_id(lport, nport_id);
- if (logo_rport)
- unf_rport_immediate_link_down(lport, logo_rport);
- }
-
- /* 2. Get R_Port by S_ID (frame header) */
- unf_rport = unf_get_rport_by_nport_id(lport, sid);
- unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_INIT, sid); /* INIT */
- if (!unf_rport) {
- memset(&rjt_info, 0, sizeof(struct unf_rjt_info));
- rjt_info.els_cmnd_code = ELS_LOGO;
- rjt_info.reason_code = UNF_LS_RJT_LOGICAL_ERROR;
- rjt_info.reason_explanation = UNF_LS_RJT_NO_ADDITIONAL_INFO;
- ret = unf_send_els_rjt_by_did(lport, xchg, sid, &rjt_info);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) receive LOGO but has no RPort(0x%x)",
- lport->port_id, sid);
-
- return ret;
- }
-
- /*
- * 3. I/O resource release: set ABORT tag
- * *
-	 * Called from: R_Port removal; RCVD LOGO; RCVD PLOGI; send PLOGI ACC
- */
- unf_cm_xchg_mgr_abort_io_by_id(lport, unf_rport, sid, lport->nport_id, INI_IO_STATE_LOGO);
-
- /* 4. Send LOGO ACC */
- ret = unf_send_logo_acc(lport, unf_rport, xchg);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_WARN, "[warn]Port(0x%x) send LOGO failed", lport->port_id);
- }
- /*
-	 * 5. Do the same operations as for RCVD LOGO/PRLO & Send LOGO:
- * retry (LOGIN or LOGO) or link down immediately
- */
- unf_process_rport_after_logo(lport, unf_rport);
-
- return ret;
-}
-
-u32 unf_prlo_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
-{
- struct unf_rport *unf_rport = NULL;
- struct unf_prli_prlo *prlo = NULL;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Receive PRLO. Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
- lport->port_id, sid, xchg->oxid);
-
- UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_LOGO);
-
- /* Get (new) R_Port */
- unf_rport = unf_get_rport_by_nport_id(lport, sid);
- unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_INIT, sid); /* INIT */
- if (!unf_rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) receive PRLO but has no RPort",
- lport->port_id);
-
- /* Discard directly */
- unf_cm_free_xchg(lport, xchg);
- return ret;
- }
-
- prlo = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->prlo;
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, &prlo->payload,
- sizeof(struct unf_prli_payload));
-
- /* Send PRLO ACC to remote */
- ret = unf_send_prlo_acc(lport, unf_rport, xchg);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) send PRLO ACC failed", lport->port_id);
- }
-
- /* Enter Enhanced action after LOGO (retry LOGIN or LOGO) */
- unf_process_rport_after_logo(lport, unf_rport);
-
- return ret;
-}
-
-static void unf_fill_echo_acc_pld(struct unf_echo *echo_acc)
-{
- struct unf_echo_payload *echo_acc_pld = NULL;
-
- FC_CHECK_RETURN_VOID(echo_acc);
-
- echo_acc_pld = echo_acc->echo_pld;
- FC_CHECK_RETURN_VOID(echo_acc_pld);
-
- echo_acc_pld->cmnd = UNF_ELS_CMND_ACC;
-}
-
-static void unf_echo_acc_callback(struct unf_xchg *xchg)
-{
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- unf_lport = xchg->lport;
-
- FC_CHECK_RETURN_VOID(unf_lport);
- if (xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo_acc.phy_echo_addr) {
- pci_unmap_single(unf_lport->low_level_func.dev,
- xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo_acc
- .phy_echo_addr,
- UNF_ECHO_PAYLOAD_LEN, DMA_BIDIRECTIONAL);
- xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo_acc.phy_echo_addr = 0;
- }
-}
-
-static u32 unf_send_echo_acc(struct unf_lport *lport, u32 did,
- struct unf_xchg *xchg)
-{
- struct unf_echo *echo_acc = NULL;
- union unf_sfs_u *fc_entry = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u16 ox_id = 0;
- u16 rx_id = 0;
- struct unf_frame_pkg pkg;
- dma_addr_t phy_echo_acc_addr;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- memset(&pkg, 0, sizeof(struct unf_frame_pkg));
- xchg->cmnd_code = UNF_SET_ELS_ACC_TYPE(ELS_ECHO);
- xchg->did = did;
- xchg->sid = lport->nport_id;
- xchg->oid = xchg->sid;
- xchg->lport = lport;
-
- xchg->callback = NULL;
- xchg->ob_callback = unf_echo_acc_callback;
-
- unf_fill_package(&pkg, xchg, xchg->rport);
- pkg.type = UNF_PKG_ELS_REPLY;
- fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
- if (!fc_entry) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
- lport->port_id, xchg->hotpooltag);
-
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
-
- echo_acc = &fc_entry->echo_acc;
- unf_fill_echo_acc_pld(echo_acc);
- ox_id = xchg->oxid;
- rx_id = xchg->rxid;
- phy_echo_acc_addr = pci_map_single(lport->low_level_func.dev,
- echo_acc->echo_pld,
- UNF_ECHO_PAYLOAD_LEN,
- DMA_BIDIRECTIONAL);
- if (pci_dma_mapping_error(lport->low_level_func.dev, phy_echo_acc_addr)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_WARN, "[warn]Port(0x%x) pci map err",
- lport->port_id);
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
- echo_acc->phy_echo_addr = phy_echo_acc_addr;
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK) {
- unf_cm_free_xchg((void *)lport, (void *)xchg);
- pci_unmap_single(lport->low_level_func.dev,
- phy_echo_acc_addr, UNF_ECHO_PAYLOAD_LEN,
- DMA_BIDIRECTIONAL);
- echo_acc->phy_echo_addr = 0;
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]ECHO ACC send %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- did, ox_id, rx_id);
-
- return ret;
-}
-
-u32 unf_echo_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg)
-{
- struct unf_echo_payload *echo_pld = NULL;
- struct unf_rport *unf_rport = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u32 data_len = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- data_len = xchg->fcp_sfs_union.sfs_entry.cur_offset;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Receive ECHO. Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x))",
- lport->port_id, sid, xchg->oxid);
-
- UNF_SERVICE_COLLECT(lport->link_service_info, UNF_SERVICE_ITEM_ECHO);
- echo_pld = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo.echo_pld;
- UNF_PRINT_SFS_LIMIT(UNF_INFO, lport->port_id, echo_pld, data_len);
- unf_rport = unf_get_rport_by_nport_id(lport, sid);
- xchg->rport = unf_rport;
-
- ret = unf_send_echo_acc(lport, sid, xchg);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_WARN, "[warn]Port(0x%x) send ECHO ACC failed", lport->port_id);
- }
-
- return ret;
-}
-
-static void unf_login_with_rport_in_n2n(struct unf_lport *lport,
- u64 remote_port_name,
- u64 remote_node_name)
-{
- /*
-	 * Called by (P2P):
- * 1. RCVD FLOGI ACC
- * 2. Send FLOGI ACC succeed
- * *
- * Compare WWN, larger is master, then send PLOGI
- */
- struct unf_lport *unf_lport = lport;
- struct unf_rport *unf_rport = NULL;
- ulong lport_flag = 0;
- ulong rport_flag = 0;
- u64 port_name = 0;
- u64 node_name = 0;
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VOID(lport);
-
- spin_lock_irqsave(&unf_lport->lport_state_lock, lport_flag);
- unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_READY); /* LPort: FLOGI_WAIT --> READY */
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, lport_flag);
-
- port_name = remote_port_name;
- node_name = remote_node_name;
-
- if (unf_lport->port_name > port_name) {
- /* Master case: send PLOGI */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x)'s WWN(0x%llx) is larger than rport(0x%llx), should be master",
- unf_lport->port_id, unf_lport->port_name, port_name);
-
- /* Update N_Port_ID now: 0xEF */
- unf_lport->nport_id = UNF_P2P_LOCAL_NPORT_ID;
-
- unf_rport = unf_find_valid_rport(lport, port_name, UNF_P2P_REMOTE_NPORT_ID);
- unf_rport = unf_get_safe_rport(lport, unf_rport, UNF_RPORT_REUSE_ONLY,
- UNF_P2P_REMOTE_NPORT_ID);
- if (unf_rport) {
- unf_rport->node_name = node_name;
- unf_rport->port_name = port_name;
- unf_rport->nport_id = UNF_P2P_REMOTE_NPORT_ID; /* 0xD6 */
- unf_rport->local_nport_id = UNF_P2P_LOCAL_NPORT_ID; /* 0xEF */
-
- spin_lock_irqsave(&unf_rport->rport_state_lock, rport_flag);
- if (unf_rport->rp_state == UNF_RPORT_ST_PLOGI_WAIT ||
- unf_rport->rp_state == UNF_RPORT_ST_PRLI_WAIT ||
- unf_rport->rp_state == UNF_RPORT_ST_READY) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Port(0x%x) Rport(0x%x) have sent PLOGI or PRLI with state(0x%x)",
- unf_lport->port_id,
- unf_rport->nport_id,
- unf_rport->rp_state);
-
- spin_unlock_irqrestore(&unf_rport->rport_state_lock,
- rport_flag);
- return;
- }
-			/* Update R_Port state: PLOGI_WAIT */
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, rport_flag);
-
- /* P2P with master: Start to Send PLOGI */
- ret = unf_send_plogi(unf_lport, unf_rport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) with WWN(0x%llx) send PLOGI to(0x%llx) failed",
- unf_lport->port_id,
- unf_lport->port_name, port_name);
-
- unf_rport_error_recovery(unf_rport);
- }
- } else {
- /* Get/Alloc R_Port failed */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) with WWN(0x%llx) allocate RPort(ID:0x%x,WWPN:0x%llx) failed",
- unf_lport->port_id, unf_lport->port_name,
- UNF_P2P_REMOTE_NPORT_ID, port_name);
- }
- } else {
-		/* Slave case: L_Port's Port Name is smaller than R_Port's */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) with WWN(0x%llx) is smaller than rport(0x%llx), do nothing",
- unf_lport->port_id, unf_lport->port_name, port_name);
- }
-}
-
-void unf_lport_enter_mns_plogi(struct unf_lport *lport)
-{
-	/* Fabric or Public Loop Mode: Login with Management server */
- struct unf_lport *unf_lport = lport;
- struct unf_rport *unf_rport = NULL;
- ulong flag = 0;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_plogi_payload *plogi_pld = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_xchg *xchg = NULL;
- struct unf_frame_pkg pkg;
-
- FC_CHECK_RETURN_VOID(lport);
-
- /* Get (safe) R_Port */
- unf_rport = unf_rport_get_free_and_init(lport, UNF_PORT_TYPE_FC, UNF_FC_FID_MGMT_SERV);
- if (!unf_rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) allocate RPort failed", lport->port_id);
- return;
- }
-
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport->nport_id = UNF_FC_FID_MGMT_SERV; /* 0xfffffa */
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- memset(&pkg, 0, sizeof(struct unf_frame_pkg));
-
- /* Get & Set new free exchange */
- xchg = unf_cm_get_free_xchg(lport, UNF_XCHG_TYPE_SFS);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange can't be NULL for PLOGI", lport->port_id);
-
- return;
- }
-
- xchg->cmnd_code = ELS_PLOGI; /* PLOGI */
- xchg->did = unf_rport->nport_id;
- xchg->sid = lport->nport_id;
- xchg->oid = xchg->sid;
- xchg->lport = unf_lport;
- xchg->rport = unf_rport;
-
- /* Set callback function */
-	xchg->callback = NULL;	  /* for received PLOGI ACC/RJT processing */
-	xchg->ob_callback = NULL; /* for PLOGI send failure processing */
-
- unf_fill_package(&pkg, xchg, unf_rport);
- pkg.type = UNF_PKG_ELS_REQ;
- /* Fill PLOGI payload */
- fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
- if (!fc_entry) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
- lport->port_id, xchg->hotpooltag);
-
- unf_cm_free_xchg(lport, xchg);
- return;
- }
-
- plogi_pld = &fc_entry->plogi.payload;
- memset(plogi_pld, 0, sizeof(struct unf_plogi_payload));
- unf_fill_plogi_pld(plogi_pld, lport);
-
- /* Start to Send PLOGI command */
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-}
-
-static void unf_register_to_switch(struct unf_lport *lport)
-{
-	/* Register to Fabric, used for: FABRIC & PUBLIC LOOP */
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
-
- spin_lock_irqsave(&lport->lport_state_lock, flag);
- unf_lport_state_ma(lport, UNF_EVENT_LPORT_REMOTE_ACC);
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- /* Login with Name server: PLOGI */
- unf_lport_enter_sns_plogi(lport);
-
- unf_lport_enter_mns_plogi(lport);
-
- /* Physical Port */
- if (lport->root_lport == lport &&
- lport->act_topo == UNF_ACT_TOP_P2P_FABRIC) {
- unf_linkup_all_vports(lport);
- }
-}
-
-void unf_fdisc_ob_callback(struct unf_xchg *xchg)
-{
- /* Do recovery */
- struct unf_lport *unf_lport = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flag);
- unf_lport = xchg->lport;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: FDISC send failed");
-
- FC_CHECK_RETURN_VOID(unf_lport);
-
- /* Do L_Port error recovery */
- unf_lport_error_recovery(unf_lport);
-}
-
-void unf_fdisc_callback(void *lport, void *rport, void *exch)
-{
- /* Register to Name Server or Do recovery */
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- struct unf_xchg *xchg = NULL;
- struct unf_flogi_fdisc_payload *fdisc_pld = NULL;
- ulong flag = 0;
- u32 cmd = 0;
-
- unf_lport = (struct unf_lport *)lport;
- unf_rport = (struct unf_rport *)rport;
- xchg = (struct unf_xchg *)exch;
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
- FC_CHECK_RETURN_VOID(exch);
- FC_CHECK_RETURN_VOID(xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr);
- fdisc_pld = &xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->fdisc_acc.fdisc_payload;
- if (xchg->byte_orders & UNF_BIT_2)
- unf_big_end_to_cpu((u8 *)fdisc_pld, sizeof(struct unf_flogi_fdisc_payload));
-
- cmd = fdisc_pld->cmnd;
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: FDISC response is (0x%x). Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
- cmd, unf_lport->port_id, unf_rport->nport_id, xchg->oxid);
- unf_rport = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_FLOGI);
- unf_rport = unf_get_safe_rport(unf_lport, unf_rport,
- UNF_RPORT_REUSE_ONLY, UNF_FC_FID_FLOGI);
- if (!unf_rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) has no Rport", unf_lport->port_id);
- return;
- }
-
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport->nport_id = UNF_FC_FID_FLOGI;
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- if ((cmd & UNF_ELS_CMND_HIGH_MASK) == UNF_ELS_CMND_ACC) {
- /* Case for ACC */
- spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
- if (unf_lport->states != UNF_LPORT_ST_FLOGI_WAIT) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) receive Flogi/Fdisc ACC in state(0x%x)",
- unf_lport->port_id, unf_lport->nport_id, unf_lport->states);
-
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
- return;
- }
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
-
- unf_lport_update_nport_id(unf_lport, xchg->sid);
- unf_lport_update_time_params(unf_lport, fdisc_pld);
- unf_register_to_switch(unf_lport);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: FDISC response is (0x%x). Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
- cmd, unf_lport->port_id, unf_rport->nport_id, xchg->oxid);
-
- /* Case for RJT: Do L_Port recovery */
- unf_lport_error_recovery(unf_lport);
- }
-}
-
-void unf_flogi_ob_callback(struct unf_xchg *xchg)
-{
- /* Send FLOGI failed & Do L_Port recovery */
- struct unf_lport *unf_lport = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- /* Get L_port from exchange context */
- spin_lock_irqsave(&xchg->xchg_state_lock, flag);
- unf_lport = xchg->lport;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
- FC_CHECK_RETURN_VOID(unf_lport);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) send FLOGI failed",
- unf_lport->port_id);
-
- /* Check L_Port state */
- spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
- if (unf_lport->states != UNF_LPORT_ST_FLOGI_WAIT) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) send FLOGI failed with state(0x%x)",
- unf_lport->port_id, unf_lport->nport_id, unf_lport->states);
-
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
- return;
- }
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
-
- /* Do L_Port error recovery */
- unf_lport_error_recovery(unf_lport);
-}
-
-static void unf_lport_update_nport_id(struct unf_lport *lport, u32 nport_id)
-{
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
-
- spin_lock_irqsave(&lport->lport_state_lock, flag);
- lport->nport_id = nport_id;
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-}
-
-static void
-unf_lport_update_time_params(struct unf_lport *lport,
- struct unf_flogi_fdisc_payload *flogi_payload)
-{
- ulong flag = 0;
- u32 ed_tov = 0;
- u32 ra_tov = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(flogi_payload);
-
- ed_tov = flogi_payload->fabric_parms.co_parms.e_d_tov;
- ra_tov = flogi_payload->fabric_parms.co_parms.r_a_tov;
-
- spin_lock_irqsave(&lport->lport_state_lock, flag);
-
- /* FC-FS-3: 21.3.4, 21.3.5 */
- if (lport->act_topo == UNF_ACT_TOP_P2P_FABRIC ||
- lport->act_topo == UNF_ACT_TOP_PUBLIC_LOOP) {
- lport->ed_tov = ed_tov;
- lport->ra_tov = ra_tov;
- } else {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_MAJOR,
- "[info]Port(0x%x_0x%x) with topo(0x%x) no need to save time parameters",
- lport->port_id, lport->nport_id, lport->act_topo);
- }
-
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-}
-
-static void unf_rcv_flogi_acc(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_flogi_fdisc_payload *flogi_pld,
- u32 nport_id, struct unf_xchg *xchg)
-{
- /* PLOGI to Name server or remote port */
- struct unf_lport *unf_lport = lport;
- struct unf_rport *unf_rport = rport;
- struct unf_flogi_fdisc_payload *unf_flogi_pld = flogi_pld;
- struct unf_fabric_parm *fabric_params = NULL;
- u64 port_name = 0;
- u64 node_name = 0;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
- FC_CHECK_RETURN_VOID(flogi_pld);
-
- /* Check L_Port state: FLOGI_WAIT */
- spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
- if (unf_lport->states != UNF_LPORT_ST_FLOGI_WAIT) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[info]Port(0x%x_0x%x) receive FLOGI ACC with state(0x%x)",
- unf_lport->port_id, unf_lport->nport_id, unf_lport->states);
-
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
- return;
- }
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
-
- fabric_params = &unf_flogi_pld->fabric_parms;
- node_name =
- (u64)(((u64)(fabric_params->high_node_name) << UNF_SHIFT_32) |
- ((u64)(fabric_params->low_node_name)));
- port_name =
- (u64)(((u64)(fabric_params->high_port_name) << UNF_SHIFT_32) |
- ((u64)(fabric_params->low_port_name)));
-
-	/* FLOGI ACC payload class 3 service priority value */
- if (unf_lport->root_lport == unf_lport && unf_lport->qos_cs_ctrl &&
- fabric_params->cl_parms[ARRAY_INDEX_2].priority == UNF_PRIORITY_ENABLE)
- unf_lport->priority = (bool)UNF_PRIORITY_ENABLE;
- else
- unf_lport->priority = (bool)UNF_PRIORITY_DISABLE;
-
- /* Save Flogi parameters */
- unf_save_fabric_params(unf_lport, unf_rport, fabric_params);
-
- if (UNF_CHECK_NPORT_FPORT_BIT(unf_flogi_pld) == UNF_N_PORT) {
- /* P2P Mode */
- unf_lport_update_topo(unf_lport, UNF_ACT_TOP_P2P_DIRECT);
- unf_login_with_rport_in_n2n(unf_lport, port_name, node_name);
- } else {
- /* for:
- * UNF_ACT_TOP_PUBLIC_LOOP/UNF_ACT_TOP_P2P_FABRIC
- * /UNF_TOP_P2P_MASK
- */
- if (unf_lport->act_topo != UNF_ACT_TOP_PUBLIC_LOOP)
- unf_lport_update_topo(unf_lport, UNF_ACT_TOP_P2P_FABRIC);
-
- unf_lport_update_nport_id(unf_lport, nport_id);
- unf_lport_update_time_params(unf_lport, unf_flogi_pld);
-
- /* Save process both for Public loop & Fabric */
- unf_register_to_switch(unf_lport);
- }
-}
-
-static void unf_flogi_acc_com_process(struct unf_xchg *xchg)
-{
- /* Maybe within interrupt or thread context */
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- struct unf_flogi_fdisc_payload *flogi_pld = NULL;
- u32 nport_id = 0;
- u32 cmnd = 0;
- ulong flags = 0;
- struct unf_xchg *unf_xchg = xchg;
-
- FC_CHECK_RETURN_VOID(unf_xchg);
- FC_CHECK_RETURN_VOID(unf_xchg->lport);
-
- unf_lport = unf_xchg->lport;
- unf_rport = unf_xchg->rport;
- flogi_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->flogi_acc.flogi_payload;
- cmnd = flogi_pld->cmnd;
-
- /* Get N_Port_ID & R_Port */
- /* Others: 0xFFFFFE */
- unf_rport = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_FLOGI);
- nport_id = UNF_FC_FID_FLOGI;
-
- /* Get Safe R_Port: reuse only */
- unf_rport = unf_get_safe_rport(unf_lport, unf_rport, UNF_RPORT_REUSE_ONLY, nport_id);
- if (!unf_rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
-				     "[warn]Port(0x%x) cannot allocate new RPort", unf_lport->port_id);
-
- return;
- }
-
- spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
- unf_rport->nport_id = UNF_FC_FID_FLOGI;
-
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
-
- /* Process FLOGI ACC or RJT */
- if ((cmnd & UNF_ELS_CMND_HIGH_MASK) == UNF_ELS_CMND_ACC) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: FLOGI response is(0x%x). Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
- cmnd, unf_lport->port_id, unf_rport->nport_id, unf_xchg->oxid);
-
- /* Case for ACC */
- unf_rcv_flogi_acc(unf_lport, unf_rport, flogi_pld, unf_xchg->sid, unf_xchg);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: FLOGI response is(0x%x). Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
- cmnd, unf_lport->port_id, unf_rport->nport_id,
- unf_xchg->oxid);
-
- /* Case for RJT: do L_Port error recovery */
- unf_lport_error_recovery(unf_lport);
- }
-}
-
-static int unf_rcv_flogi_acc_async_callback(void *argc_in, void *argc_out)
-{
- struct unf_xchg *xchg = (struct unf_xchg *)argc_in;
-
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- unf_flogi_acc_com_process(xchg);
-
- unf_xchg_ref_dec(xchg, SFS_RESPONSE);
-
- return RETURN_OK;
-}
-
-void unf_flogi_callback(void *lport, void *rport, void *xchg)
-{
- /* Callback function for FLOGI ACC or RJT */
- struct unf_lport *unf_lport = (struct unf_lport *)lport;
- struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
- struct unf_flogi_fdisc_payload *flogi_pld = NULL;
- bool bbscn_enabled = false;
- enum unf_act_topo act_topo = UNF_ACT_TOP_UNKNOWN;
- bool switch2thread = false;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
- FC_CHECK_RETURN_VOID(xchg);
- FC_CHECK_RETURN_VOID(unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr);
-
- unf_xchg->lport = lport;
- flogi_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->flogi_acc.flogi_payload;
-
- if (unf_xchg->byte_orders & UNF_BIT_2)
- unf_big_end_to_cpu((u8 *)flogi_pld, sizeof(struct unf_flogi_fdisc_payload));
-
- if (unf_lport->act_topo != UNF_ACT_TOP_PUBLIC_LOOP &&
- (UNF_CHECK_NPORT_FPORT_BIT(flogi_pld) == UNF_F_PORT))
- /* Get Top Mode (P2P_F) --->>> used for BBSCN */
- act_topo = UNF_ACT_TOP_P2P_FABRIC;
-
- bbscn_enabled =
- unf_check_bbscn_is_enabled((u8)unf_lport->low_level_func.lport_cfg_items.bbscn,
- (u8)UNF_GET_BB_SC_N_FROM_PARAMS(&flogi_pld->fabric_parms));
- if (act_topo == UNF_ACT_TOP_P2P_FABRIC && bbscn_enabled) {
- /* BBSCN Enable or not --->>> used for Context change */
- unf_lport->bbscn_support = true;
- switch2thread = true;
- }
-
- if (switch2thread && unf_lport->root_lport == unf_lport) {
- /* Wait for LR done sync: for Root Port */
- (void)unf_irq_process_switch2thread(unf_lport, unf_xchg,
- unf_rcv_flogi_acc_async_callback);
- } else {
- /* Process FLOGI response directly */
- unf_flogi_acc_com_process(unf_xchg);
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ALL,
- "[info]Port(0x%x) process FLOGI response: switch(%d) to thread done",
- unf_lport->port_id, switch2thread);
-}
-
-void unf_plogi_ob_callback(struct unf_xchg *xchg)
-{
- /* Do L_Port or R_Port recovery */
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flag);
- unf_lport = xchg->lport;
- unf_rport = xchg->rport;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
-
- FC_CHECK_RETURN_VOID(unf_lport);
- FC_CHECK_RETURN_VOID(unf_rport);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) send PLOGI(0x%x_0x%x) to RPort(%p:0x%x_0x%x) failed",
- unf_lport->port_id, unf_lport->nport_id, xchg->oxid,
- xchg->rxid, unf_rport, unf_rport->rport_index,
- unf_rport->nport_id);
-
- /* Start to recovery */
- if (unf_rport->nport_id > UNF_FC_FID_DOM_MGR) {
- /* with Name server: R_Port is fabric --->>> L_Port error
- * recovery
- */
- unf_lport_error_recovery(unf_lport);
- } else {
- /* R_Port is not fabric --->>> R_Port error recovery */
- unf_rport_error_recovery(unf_rport);
- }
-}
-
-void unf_rcv_plogi_acc(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_lgn_parm *login_parms)
-{
- /* PLOGI ACC: PRLI(non fabric) or RFT_ID(fabric) */
- struct unf_lport *unf_lport = lport;
- struct unf_rport *unf_rport = rport;
- struct unf_lgn_parm *unf_login_parms = login_parms;
- u64 node_name = 0;
- u64 port_name = 0;
- ulong flag = 0;
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
- FC_CHECK_RETURN_VOID(login_parms);
-
- node_name = (u64)(((u64)(unf_login_parms->high_node_name) << UNF_SHIFT_32) |
- ((u64)(unf_login_parms->low_node_name)));
- port_name = (u64)(((u64)(unf_login_parms->high_port_name) << UNF_SHIFT_32) |
- ((u64)(unf_login_parms->low_port_name)));
-
- /* ACC & Case for: R_Port is fabric (RFT_ID) */
- if (unf_rport->nport_id >= UNF_FC_FID_DOM_MGR) {
- /* Check L_Port state */
- spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
- if (unf_lport->states != UNF_LPORT_ST_PLOGI_WAIT) {
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) receive PLOGI ACC with error state(0x%x)",
- lport->port_id, unf_lport->states);
-
- return;
- }
- unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_REMOTE_ACC);
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
-
- /* PLOGI parameters save */
- unf_save_plogi_params(unf_lport, unf_rport, unf_login_parms, ELS_ACC);
-
- /* Update R_Port WWPN & WWNN */
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport->node_name = node_name;
- unf_rport->port_name = port_name;
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- /* Start to Send RFT_ID */
- ret = unf_send_rft_id(unf_lport, unf_rport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) send RFT_ID failed",
- lport->port_id);
-
- unf_lport_error_recovery(unf_lport);
- }
- } else {
- /* ACC & Case for: R_Port is not fabric */
- if (unf_rport->options == UNF_PORT_MODE_UNKNOWN &&
- unf_rport->port_name != INVALID_WWPN)
- unf_rport->options = unf_get_port_feature(port_name);
-
- /* Set Port Feature with BOTH: cancel */
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport->node_name = node_name;
- unf_rport->port_name = port_name;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]LOGIN: Port(0x%x)<---LS_ACC(DID:0x%x SID:0x%x) for PLOGI ACC with RPort state(0x%x) NodeName(0x%llx) E_D_TOV(%u)",
- unf_lport->port_id, unf_lport->nport_id,
- unf_rport->nport_id, unf_rport->rp_state,
- unf_rport->node_name, unf_rport->ed_tov);
-
- if (unf_lport->act_topo == UNF_ACT_TOP_PRIVATE_LOOP &&
- (unf_rport->rp_state == UNF_RPORT_ST_PRLI_WAIT ||
- unf_rport->rp_state == UNF_RPORT_ST_READY)) {
- /* Do nothing, return directly */
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
- return;
- }
-
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PRLI);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- /* PLOGI parameters save */
- unf_save_plogi_params(unf_lport, unf_rport, unf_login_parms, ELS_ACC);
-
-		/*
-		 * Decide whether sending PRLI needs to be delayed.
-		 * Used when the L_Port is in INI mode and the R_Port is not a fabric port.
-		 */
- unf_check_rport_need_delay_prli(unf_lport, unf_rport, unf_rport->options);
-
-		/* Covers the case where the L_Port is TGT-only or the
-		 * R_Port is INI-only
-		 */
- unf_schedule_open_work(unf_lport, unf_rport);
- }
-}
-
-void unf_plogi_acc_com_process(struct unf_xchg *xchg)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
- struct unf_plogi_payload *plogi_pld = NULL;
- struct unf_lgn_parm *login_parms = NULL;
- ulong flag = 0;
- u64 port_name = 0;
- u32 rport_nport_id = 0;
- u32 cmnd = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VOID(unf_xchg);
- FC_CHECK_RETURN_VOID(unf_xchg->lport);
- FC_CHECK_RETURN_VOID(unf_xchg->rport);
-
- unf_lport = unf_xchg->lport;
- unf_rport = unf_xchg->rport;
- rport_nport_id = unf_rport->nport_id;
- plogi_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->plogi_acc.payload;
- login_parms = &plogi_pld->stparms;
- cmnd = (plogi_pld->cmnd);
-
- if (UNF_ELS_CMND_ACC == (cmnd & UNF_ELS_CMND_HIGH_MASK)) {
- /* Case for PLOGI ACC: Go to next stage */
- port_name =
- (u64)(((u64)(login_parms->high_port_name) << UNF_SHIFT_32) |
- ((u64)(login_parms->low_port_name)));
-
- /* Get (new) R_Port: 0xfffffc has same WWN with 0xfffcxx */
- unf_rport = unf_find_rport(unf_lport, rport_nport_id, port_name);
- unf_rport = unf_get_safe_rport(unf_lport, unf_rport,
- UNF_RPORT_REUSE_ONLY, rport_nport_id);
- if (unlikely(!unf_rport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) alloc new RPort with wwpn(0x%llx) failed",
- unf_lport->port_id, unf_lport->nport_id, port_name);
- return;
- }
-
- /* PLOGI parameters check */
- ret = unf_check_plogi_params(unf_lport, unf_rport, login_parms);
- if (ret != RETURN_OK)
- return;
-
- /* Update R_Port state */
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport->nport_id = rport_nport_id;
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- /* Start to process PLOGI ACC */
- unf_rcv_plogi_acc(unf_lport, unf_rport, login_parms);
- } else {
- /* Case for PLOGI RJT: L_Port or R_Port recovery */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x)<---RPort(0x%p) with LS_RJT(DID:0x%x SID:0x%x) for PLOGI",
- unf_lport->port_id, unf_rport, unf_lport->nport_id,
- unf_rport->nport_id);
-
- if (unf_rport->nport_id >= UNF_FC_FID_DOM_MGR)
- unf_lport_error_recovery(unf_lport);
- else
- unf_rport_error_recovery(unf_rport);
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: PLOGI response(0x%x). Port(0x%x_0x%x)<---RPort(0x%x_0x%p) wwpn(0x%llx) OX_ID(0x%x)",
- cmnd, unf_lport->port_id, unf_lport->nport_id, unf_rport->nport_id,
- unf_rport, port_name, unf_xchg->oxid);
-}
-
-static int unf_rcv_plogi_acc_async_callback(void *argc_in, void *argc_out)
-{
- struct unf_xchg *xchg = (struct unf_xchg *)argc_in;
-
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- unf_plogi_acc_com_process(xchg);
-
- unf_xchg_ref_dec(xchg, SFS_RESPONSE);
-
- return RETURN_OK;
-}
-
-void unf_plogi_callback(void *lport, void *rport, void *xchg)
-{
- struct unf_lport *unf_lport = (struct unf_lport *)lport;
- struct unf_xchg *unf_xchg = (struct unf_xchg *)xchg;
- struct unf_plogi_payload *plogi_pld = NULL;
- struct unf_lgn_parm *login_parms = NULL;
- bool bbscn_enabled = false;
- bool switch2thread = false;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
- FC_CHECK_RETURN_VOID(xchg);
- FC_CHECK_RETURN_VOID(unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr);
-
- plogi_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->plogi_acc.payload;
- login_parms = &plogi_pld->stparms;
- unf_xchg->lport = lport;
-
- if (unf_xchg->byte_orders & UNF_BIT_2)
- unf_big_end_to_cpu((u8 *)plogi_pld, sizeof(struct unf_plogi_payload));
-
- bbscn_enabled =
- unf_check_bbscn_is_enabled((u8)unf_lport->low_level_func.lport_cfg_items.bbscn,
- (u8)UNF_GET_BB_SC_N_FROM_PARAMS(login_parms));
- if ((bbscn_enabled) &&
- unf_lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
- switch2thread = true;
- unf_lport->bbscn_support = true;
- }
-
- if (switch2thread && unf_lport->root_lport == unf_lport) {
- /* Wait for LR done sync: just for ROOT Port */
- (void)unf_irq_process_switch2thread(unf_lport, unf_xchg,
- unf_rcv_plogi_acc_async_callback);
- } else {
- unf_plogi_acc_com_process(unf_xchg);
- }
-}
-
-static void unf_logo_ob_callback(struct unf_xchg *xchg)
-{
- struct unf_lport *lport = NULL;
- struct unf_rport *rport = NULL;
- struct unf_rport *old_rport = NULL;
- struct unf_xchg *unf_xchg = NULL;
- u32 nport_id = 0;
- u32 logo_retry = 0;
- u32 max_frame_size = 0;
- u64 port_name = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
- unf_xchg = xchg;
- old_rport = unf_xchg->rport;
- logo_retry = old_rport->logo_retries;
- max_frame_size = old_rport->max_frame_size;
- port_name = old_rport->port_name;
- unf_rport_enter_closing(old_rport);
-
- lport = unf_xchg->lport;
- if (unf_is_lport_valid(lport) != RETURN_OK)
- return;
-
- /* Get R_Port by exchange info: Init state */
- nport_id = unf_xchg->did;
- rport = unf_get_rport_by_nport_id(lport, nport_id);
- rport = unf_get_safe_rport(lport, rport, UNF_RPORT_REUSE_INIT, nport_id);
- if (!rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) cannot allocate RPort", lport->port_id);
- return;
- }
-
- rport->logo_retries = logo_retry;
- rport->max_frame_size = max_frame_size;
- rport->port_name = port_name;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[info]LOGIN: Port(0x%x) received LOGO RSP timeout topo(0x%x) retries(%u)",
- lport->port_id, lport->act_topo, rport->logo_retries);
-
- /* RCVD LOGO/PRLO & SEND LOGO: the same process */
- if (rport->logo_retries < UNF_MAX_RETRY_COUNT) {
- /* <: retry (LOGIN or LOGO) if necessary */
- unf_process_rport_after_logo(lport, rport);
- } else {
- /* >=: Link down */
- unf_rport_immediate_link_down(lport, rport);
- }
-}
-
-static void unf_logo_callback(void *lport, void *rport, void *xchg)
-{
- /* RCVD LOGO ACC/RJT: retry(LOGIN/LOGO) or link down immediately */
- struct unf_lport *unf_lport = (struct unf_lport *)lport;
- struct unf_rport *unf_rport = NULL;
- struct unf_rport *old_rport = NULL;
- struct unf_xchg *unf_xchg = NULL;
- struct unf_els_rjt *els_acc_rjt = NULL;
- u32 cmnd = 0;
- u32 nport_id = 0;
- u32 logo_retry = 0;
- u32 max_frame_size = 0;
- u64 port_name = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- unf_xchg = (struct unf_xchg *)xchg;
- old_rport = unf_xchg->rport;
-
- logo_retry = old_rport->logo_retries;
- max_frame_size = old_rport->max_frame_size;
- port_name = old_rport->port_name;
- unf_rport_enter_closing(old_rport);
-
- if (unf_is_lport_valid(lport) != RETURN_OK)
- return;
-
- if (!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr)
- return;
-
- /* Get R_Port by exchange info: Init state */
- els_acc_rjt = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->els_rjt;
- nport_id = unf_xchg->did;
- unf_rport = unf_get_rport_by_nport_id(unf_lport, nport_id);
- unf_rport = unf_get_safe_rport(unf_lport, unf_rport, UNF_RPORT_REUSE_INIT, nport_id);
-
- if (!unf_rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_WARN, "[warn]Port(0x%x) cannot allocate RPort",
- unf_lport->port_id);
- return;
- }
-
- unf_rport->logo_retries = logo_retry;
- unf_rport->max_frame_size = max_frame_size;
- unf_rport->port_name = port_name;
- cmnd = be32_to_cpu(els_acc_rjt->cmnd);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Port(0x%x) received LOGO RSP(0x%x),topo(0x%x) Port options(0x%x) RPort options(0x%x) retries(%u)",
- unf_lport->port_id, (cmnd & UNF_ELS_CMND_HIGH_MASK),
- unf_lport->act_topo, unf_lport->options, unf_rport->options,
- unf_rport->logo_retries);
-
- /* RCVD LOGO/PRLO & SEND LOGO: the same process */
- if (unf_rport->logo_retries < UNF_MAX_RETRY_COUNT) {
- /* <: retry (LOGIN or LOGO) if necessary */
- unf_process_rport_after_logo(unf_lport, unf_rport);
- } else {
- /* >=: Link down */
- unf_rport_immediate_link_down(unf_lport, unf_rport);
- }
-}
-
-void unf_prli_ob_callback(struct unf_xchg *xchg)
-{
- /* Do R_Port recovery */
- struct unf_lport *lport = NULL;
- struct unf_rport *rport = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(xchg);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flag);
- lport = xchg->lport;
- rport = xchg->rport;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flag);
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x_0x%x) RPort(0x%x) send PRLI failed and do recovery",
- lport->port_id, lport->nport_id, rport->nport_id);
-
- /* Start to do R_Port error recovery */
- unf_rport_error_recovery(rport);
-}
-
-void unf_prli_callback(void *lport, void *rport, void *xchg)
-{
- /* RCVD PRLI RSP: ACC or RJT --->>> SCSI Link Up */
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- struct unf_xchg *unf_xchg = NULL;
- struct unf_prli_payload *prli_acc_pld = NULL;
- ulong flag = 0;
- u32 cmnd = 0;
- u32 options = 0;
- u32 fcp_conf = 0;
- u32 rec_support = 0;
- u32 task_retry_support = 0;
- u32 retry_support = 0;
- u32 tape_support = 0;
- u32 fc4_type = 0;
- enum unf_rport_login_state rport_state = UNF_RPORT_ST_INIT;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
- FC_CHECK_RETURN_VOID(xchg);
- unf_lport = (struct unf_lport *)lport;
- unf_rport = (struct unf_rport *)rport;
- unf_xchg = (struct unf_xchg *)xchg;
-
- if (!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) exchange(%p) entry is NULL",
- unf_lport->port_id, unf_xchg);
- return;
- }
-
- /* Get PRLI ACC payload */
- prli_acc_pld = &unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->prli_acc.payload;
- if (unf_xchg->byte_orders & UNF_BIT_2) {
- /* Change to little End, About INI/TGT mode & confirm info */
- options = be32_to_cpu(prli_acc_pld->parms[ARRAY_INDEX_3]) &
- (UNF_FC4_FRAME_PARM_3_TGT | UNF_FC4_FRAME_PARM_3_INI);
-
- cmnd = be32_to_cpu(prli_acc_pld->cmnd);
- fcp_conf = be32_to_cpu(prli_acc_pld->parms[ARRAY_INDEX_3]) &
- UNF_FC4_FRAME_PARM_3_CONF_ALLOW;
- rec_support = be32_to_cpu(prli_acc_pld->parms[ARRAY_INDEX_3]) &
- UNF_FC4_FRAME_PARM_3_REC_SUPPORT;
- task_retry_support = be32_to_cpu(prli_acc_pld->parms[ARRAY_INDEX_3]) &
- UNF_FC4_FRAME_PARM_3_TASK_RETRY_ID_SUPPORT;
- retry_support = be32_to_cpu(prli_acc_pld->parms[ARRAY_INDEX_3]) &
- UNF_FC4_FRAME_PARM_3_RETRY_SUPPORT;
- fc4_type = be32_to_cpu(prli_acc_pld->parms[ARRAY_INDEX_0]) >>
- UNF_FC4_TYPE_SHIFT & UNF_FC4_TYPE_MASK;
- } else {
- options = (prli_acc_pld->parms[ARRAY_INDEX_3]) &
- (UNF_FC4_FRAME_PARM_3_TGT | UNF_FC4_FRAME_PARM_3_INI);
-
- cmnd = (prli_acc_pld->cmnd);
- fcp_conf = prli_acc_pld->parms[ARRAY_INDEX_3] & UNF_FC4_FRAME_PARM_3_CONF_ALLOW;
- rec_support = prli_acc_pld->parms[ARRAY_INDEX_3] & UNF_FC4_FRAME_PARM_3_REC_SUPPORT;
- task_retry_support = prli_acc_pld->parms[ARRAY_INDEX_3] &
- UNF_FC4_FRAME_PARM_3_TASK_RETRY_ID_SUPPORT;
- retry_support = prli_acc_pld->parms[ARRAY_INDEX_3] &
- UNF_FC4_FRAME_PARM_3_RETRY_SUPPORT;
- fc4_type = prli_acc_pld->parms[ARRAY_INDEX_0] >> UNF_FC4_TYPE_SHIFT;
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: PRLI RSP: RPort(0x%x) parameter-3(0x%x) option(0x%x) cmd(0x%x) uiRecSupport:%u",
- unf_rport->nport_id, prli_acc_pld->parms[ARRAY_INDEX_3],
- options, cmnd, rec_support);
-
- /* PRLI ACC: R_Port READY & Report R_Port Link Up */
- if (UNF_ELS_CMND_ACC == (cmnd & UNF_ELS_CMND_HIGH_MASK)) {
- /* Update R_Port options(INI/TGT/BOTH) */
- unf_rport->options = options;
-
- unf_update_port_feature(unf_rport->port_name, unf_rport->options);
-
- /* NOTE: R_Port only with INI mode, send LOGO */
- if (unf_rport->options == UNF_PORT_MODE_INI) {
- /* Update R_Port state: LOGO */
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- /* NOTE: Start to Send LOGO */
- unf_rport_enter_logo(unf_lport, unf_rport);
- return;
- }
-
- /* About confirm */
- if (fcp_conf && unf_lport->low_level_func.lport_cfg_items.fcp_conf) {
- unf_rport->fcp_conf_needed = true;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
-				     "[info]Port(0x%x_0x%x) FCP config is needed for RPort(0x%x)",
- unf_lport->port_id, unf_lport->nport_id,
- unf_rport->nport_id);
- }
-
- tape_support = (rec_support && task_retry_support && retry_support);
- if (tape_support && unf_lport->low_level_func.lport_cfg_items.tape_support) {
- unf_rport->tape_support_needed = true;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[info]Port(0x%x_0x%x) Rec is enabled for RPort(0x%x)",
- unf_lport->port_id, unf_lport->nport_id,
- unf_rport->nport_id);
- }
-
- /* Update R_Port state: READY */
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_READY);
- rport_state = unf_rport->rp_state;
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- /* Report R_Port online (Link Up) event to SCSI */
- if (rport_state == UNF_RPORT_ST_READY) {
- unf_rport->logo_retries = 0;
- unf_update_lport_state_by_linkup_event(unf_lport, unf_rport,
- unf_rport->options);
- }
- } else {
- /* PRLI RJT: Do R_Port error recovery */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Port(0x%x)<---LS_RJT(DID:0x%x SID:0x%x) for PRLI. RPort(0x%p) OX_ID(0x%x)",
- unf_lport->port_id, unf_lport->nport_id,
- unf_rport->nport_id, unf_rport, unf_xchg->oxid);
-
- unf_rport_error_recovery(unf_rport);
- }
-}
-
-static void unf_rrq_callback(void *lport, void *rport, void *xchg)
-{
- /* Release I/O */
- struct unf_lport *unf_lport = NULL;
- struct unf_xchg *unf_xchg = NULL;
- struct unf_xchg *io_xchg = NULL;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
- FC_CHECK_RETURN_VOID(xchg);
-
- unf_lport = (struct unf_lport *)lport;
- unf_xchg = (struct unf_xchg *)xchg;
-
- if (!unf_xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) exchange(0x%p) SfsEntryPtr is NULL",
- unf_lport->port_id, unf_xchg);
- return;
- }
-
- io_xchg = (struct unf_xchg *)unf_xchg->io_xchg;
- if (!io_xchg) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) IO exchange is NULL. RRQ cb sfs xchg(0x%p) tag(0x%x)",
- unf_lport->port_id, unf_xchg, unf_xchg->hotpooltag);
- return;
- }
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]Port(0x%x) release IO exch(0x%p) tag(0x%x). RRQ cb sfs xchg(0x%p) tag(0x%x)",
- unf_lport->port_id, unf_xchg->io_xchg, io_xchg->hotpooltag,
- unf_xchg, unf_xchg->hotpooltag);
-
-	/* After RRQ succeeds, free the xid */
- unf_notify_chip_free_xid(io_xchg);
-
- /* NOTE: release I/O exchange resource */
- unf_xchg_ref_dec(io_xchg, XCHG_ALLOC);
-}
-
-static void unf_rrq_ob_callback(struct unf_xchg *xchg)
-{
- /* Release I/O */
- struct unf_xchg *unf_xchg = NULL;
- struct unf_xchg *io_xchg = NULL;
-
- unf_xchg = (struct unf_xchg *)xchg;
- if (!unf_xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_WARN, "[warn]Exchange can't be NULL");
- return;
- }
-
- io_xchg = (struct unf_xchg *)unf_xchg->io_xchg;
- if (!io_xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]IO exchange can't be NULL with Sfs exch(0x%p) tag(0x%x)",
- unf_xchg, unf_xchg->hotpooltag);
- return;
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[info]send RRQ failed: SFS exch(0x%p) tag(0x%x) exch(0x%p) tag(0x%x) OXID_RXID(0x%x_0x%x) SID_DID(0x%x_0x%x)",
- unf_xchg, unf_xchg->hotpooltag, io_xchg, io_xchg->hotpooltag,
- io_xchg->oxid, io_xchg->rxid, io_xchg->sid, io_xchg->did);
-
-	/* If RRQ fails or times out, free the xid. */
- unf_notify_chip_free_xid(io_xchg);
-
- /* NOTE: Free I/O exchange resource */
- unf_xchg_ref_dec(io_xchg, XCHG_ALLOC);
-}
diff --git a/drivers/scsi/spfc/common/unf_ls.h b/drivers/scsi/spfc/common/unf_ls.h
deleted file mode 100644
index 5fdd9e1a258d..000000000000
--- a/drivers/scsi/spfc/common/unf_ls.h
+++ /dev/null
@@ -1,61 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_LS_H
-#define UNF_LS_H
-
-#include "unf_type.h"
-#include "unf_exchg.h"
-#include "unf_rport.h"
-
-#ifdef __cplusplus
-extern "C" {
-#endif /* __cplusplus */
-
-u32 unf_send_adisc(struct unf_lport *lport, struct unf_rport *rport);
-u32 unf_send_pdisc(struct unf_lport *lport, struct unf_rport *rport);
-u32 unf_send_flogi(struct unf_lport *lport, struct unf_rport *rport);
-u32 unf_send_fdisc(struct unf_lport *lport, struct unf_rport *rport);
-u32 unf_send_plogi(struct unf_lport *lport, struct unf_rport *rport);
-u32 unf_send_prli(struct unf_lport *lport, struct unf_rport *rport,
- u32 cmnd_code);
-u32 unf_send_prlo(struct unf_lport *lport, struct unf_rport *rport);
-u32 unf_send_logo(struct unf_lport *lport, struct unf_rport *rport);
-u32 unf_send_logo_by_did(struct unf_lport *lport, u32 did);
-u32 unf_send_echo(struct unf_lport *lport, struct unf_rport *rport, u32 *time);
-u32 unf_send_plogi_rjt_by_did(struct unf_lport *lport, u32 did);
-u32 unf_send_rrq(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *xchg);
-void unf_flogi_ob_callback(struct unf_xchg *xchg);
-void unf_flogi_callback(void *lport, void *rport, void *xchg);
-void unf_fdisc_ob_callback(struct unf_xchg *xchg);
-void unf_fdisc_callback(void *lport, void *rport, void *xchg);
-
-void unf_plogi_ob_callback(struct unf_xchg *xchg);
-void unf_plogi_callback(void *lport, void *rport, void *xchg);
-void unf_prli_ob_callback(struct unf_xchg *xchg);
-void unf_prli_callback(void *lport, void *rport, void *xchg);
-u32 unf_flogi_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
-u32 unf_plogi_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
-u32 unf_rec_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
-u32 unf_prli_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
-u32 unf_prlo_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
-u32 unf_rscn_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
-u32 unf_logo_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
-u32 unf_echo_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
-u32 unf_pdisc_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
-u32 unf_send_pdisc_rjt(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *xchg);
-u32 unf_adisc_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
-u32 unf_rrq_handler(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
-u32 unf_send_rec(struct unf_lport *lport, struct unf_rport *rport,
- struct unf_xchg *io_xchg);
-
-u32 unf_low_level_bb_scn(struct unf_lport *lport);
-typedef int (*unf_event_task)(void *arg_in, void *arg_out);
-
-#ifdef __cplusplus
-}
-#endif /* __cplusplus */
-
-#endif /* UNF_LS_H */
diff --git a/drivers/scsi/spfc/common/unf_npiv.c b/drivers/scsi/spfc/common/unf_npiv.c
deleted file mode 100644
index 0d441f1c9e06..000000000000
--- a/drivers/scsi/spfc/common/unf_npiv.c
+++ /dev/null
@@ -1,1005 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "unf_npiv.h"
-#include "unf_log.h"
-#include "unf_rport.h"
-#include "unf_exchg.h"
-#include "unf_portman.h"
-#include "unf_npiv_portman.h"
-
-#define UNF_DELETE_VPORT_MAX_WAIT_TIME_MS 60000
-
-u32 unf_init_vport_pool(struct unf_lport *lport)
-{
- u32 ret = RETURN_OK;
- u32 i;
- u16 vport_cnt = 0;
- struct unf_lport *vport = NULL;
- struct unf_vport_pool *vport_pool = NULL;
- u32 vport_pool_size;
- ulong flags = 0;
-
- FC_CHECK_RETURN_VALUE(lport, RETURN_ERROR);
-
- UNF_TOU16_CHECK(vport_cnt, lport->low_level_func.support_max_npiv_num,
- return RETURN_ERROR);
- if (vport_cnt == 0) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Port(0x%x) do not support NPIV",
- lport->port_id);
-
- return RETURN_OK;
- }
-
- vport_pool_size = sizeof(struct unf_vport_pool) + sizeof(struct unf_lport *) * vport_cnt;
- lport->vport_pool = vmalloc(vport_pool_size);
- if (!lport->vport_pool) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) cannot allocate vport pool",
- lport->port_id);
-
- return RETURN_ERROR;
- }
- memset(lport->vport_pool, 0, vport_pool_size);
- vport_pool = lport->vport_pool;
- vport_pool->vport_pool_count = vport_cnt;
- vport_pool->vport_pool_completion = NULL;
- spin_lock_init(&vport_pool->vport_pool_lock);
- INIT_LIST_HEAD(&vport_pool->list_vport_pool);
-
- vport_pool->vport_pool_addr =
- vmalloc((size_t)(vport_cnt * sizeof(struct unf_lport)));
- if (!vport_pool->vport_pool_addr) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) cannot allocate vport pool address",
- lport->port_id);
- vfree(lport->vport_pool);
- lport->vport_pool = NULL;
-
- return RETURN_ERROR;
- }
-
- memset(vport_pool->vport_pool_addr, 0,
- vport_cnt * sizeof(struct unf_lport));
- vport = (struct unf_lport *)vport_pool->vport_pool_addr;
-
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
- for (i = 0; i < vport_cnt; i++) {
- list_add_tail(&vport->entry_vport, &vport_pool->list_vport_pool);
- vport++;
- }
-
- vport_pool->slab_next_index = 0;
- vport_pool->slab_total_sum = vport_cnt;
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
-
- return ret;
-}
-
-void unf_free_vport_pool(struct unf_lport *lport)
-{
- struct unf_vport_pool *vport_pool = NULL;
- bool wait = false;
- ulong flag = 0;
- u32 remain = 0;
- struct completion vport_pool_completion;
-
- init_completion(&vport_pool_completion);
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(lport->vport_pool);
- vport_pool = lport->vport_pool;
-
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
-
- if (vport_pool->slab_total_sum != vport_pool->vport_pool_count) {
- vport_pool->vport_pool_completion = &vport_pool_completion;
- remain = vport_pool->slab_total_sum - vport_pool->vport_pool_count;
- wait = true;
- }
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
-
- if (wait) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) begin to wait for vport pool completion remain(0x%x)",
- lport->port_id, remain);
-
- wait_for_completion(vport_pool->vport_pool_completion);
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) wait for vport pool completion end",
- lport->port_id);
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
- vport_pool->vport_pool_completion = NULL;
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
- }
-
- if (lport->vport_pool->vport_pool_addr) {
- vfree(lport->vport_pool->vport_pool_addr);
- lport->vport_pool->vport_pool_addr = NULL;
- }
-
- vfree(lport->vport_pool);
- lport->vport_pool = NULL;
-}
-
-struct unf_lport *unf_get_vport_by_slab_index(struct unf_vport_pool *vport_pool,
- u16 slab_index)
-{
- FC_CHECK_RETURN_VALUE(vport_pool, NULL);
-
- return vport_pool->vport_slab[slab_index];
-}
-
-static inline void unf_vport_pool_slab_set(struct unf_vport_pool *vport_pool,
- u16 slab_index,
- struct unf_lport *vport)
-{
- FC_CHECK_RETURN_VOID(vport_pool);
-
- vport_pool->vport_slab[slab_index] = vport;
-}
-
-u32 unf_alloc_vp_index(struct unf_vport_pool *vport_pool,
- struct unf_lport *vport, u16 vpid)
-{
- u16 slab_index;
- ulong flags = 0;
-
- FC_CHECK_RETURN_VALUE(vport_pool, RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(vport, RETURN_ERROR);
-
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
- if (vpid == 0) {
- slab_index = vport_pool->slab_next_index;
- while (unf_get_vport_by_slab_index(vport_pool, slab_index)) {
- slab_index = (slab_index + 1) % vport_pool->slab_total_sum;
-
- if (vport_pool->slab_next_index == slab_index) {
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]VPort pool has no slab ");
-
- return RETURN_ERROR;
- }
- }
- } else {
- slab_index = vpid - 1;
- if (unf_get_vport_by_slab_index(vport_pool, slab_index)) {
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT,
- UNF_WARN,
- "[warn]VPort Index(0x%x) is occupy", vpid);
-
- return RETURN_ERROR;
- }
- }
-
- unf_vport_pool_slab_set(vport_pool, slab_index, vport);
-
- vport_pool->slab_next_index = (slab_index + 1) % vport_pool->slab_total_sum;
-
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
-
- spin_lock_irqsave(&vport->lport_state_lock, flags);
- vport->vp_index = slab_index + 1;
- spin_unlock_irqrestore(&vport->lport_state_lock, flags);
-
- return RETURN_OK;
-}
-
-void unf_free_vp_index(struct unf_vport_pool *vport_pool,
- struct unf_lport *vport)
-{
- ulong flags = 0;
-
- FC_CHECK_RETURN_VOID(vport_pool);
- FC_CHECK_RETURN_VOID(vport);
-
- if (vport->vp_index == 0 ||
- vport->vp_index > vport_pool->slab_total_sum) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "Input vpoot index(0x%x) is beyond the normal range, min(0x1), max(0x%x).",
- vport->vp_index, vport_pool->slab_total_sum);
- return;
- }
-
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
- unf_vport_pool_slab_set(vport_pool, vport->vp_index - 1,
- NULL); /* SlabIndex=VpIndex-1 */
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
-
- spin_lock_irqsave(&vport->lport_state_lock, flags);
- vport->vp_index = INVALID_VALUE16;
- spin_unlock_irqrestore(&vport->lport_state_lock, flags);
-}
-
-struct unf_lport *unf_get_free_vport(struct unf_lport *lport)
-{
- struct unf_lport *vport = NULL;
- struct list_head *list_head = NULL;
- struct unf_vport_pool *vport_pool = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
- FC_CHECK_RETURN_VALUE(lport->vport_pool, NULL);
-
- vport_pool = lport->vport_pool;
-
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
- if (!list_empty(&vport_pool->list_vport_pool)) {
- list_head = UNF_OS_LIST_NEXT(&vport_pool->list_vport_pool);
- list_del(list_head);
- vport_pool->vport_pool_count--;
- list_add_tail(list_head, &lport->list_vports_head);
- vport = list_entry(list_head, struct unf_lport, entry_vport);
- } else {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]LPort(0x%x)'s vport pool is empty", lport->port_id);
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
-
- return NULL;
- }
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
-
- return vport;
-}
-
-void unf_vport_back_to_pool(void *vport)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_lport *unf_vport = NULL;
- struct list_head *list = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(vport);
- unf_vport = vport;
- unf_lport = (struct unf_lport *)(unf_vport->root_lport);
- FC_CHECK_RETURN_VOID(unf_lport);
- FC_CHECK_RETURN_VOID(unf_lport->vport_pool);
-
- unf_free_vp_index(unf_lport->vport_pool, unf_vport);
-
- spin_lock_irqsave(&unf_lport->vport_pool->vport_pool_lock, flag);
-
- list = &unf_vport->entry_vport;
- list_del(list);
- list_add_tail(list, &unf_lport->vport_pool->list_vport_pool);
- unf_lport->vport_pool->vport_pool_count++;
-
- spin_unlock_irqrestore(&unf_lport->vport_pool->vport_pool_lock, flag);
-}
-
-void unf_init_vport_from_lport(struct unf_lport *vport, struct unf_lport *lport)
-{
- FC_CHECK_RETURN_VOID(vport);
- FC_CHECK_RETURN_VOID(lport);
-
- vport->port_type = lport->port_type;
- vport->fc_port = lport->fc_port;
- vport->act_topo = lport->act_topo;
- vport->root_lport = lport;
- vport->unf_qualify_rport = lport->unf_qualify_rport;
- vport->link_event_wq = lport->link_event_wq;
- vport->xchg_wq = lport->xchg_wq;
-
- memcpy(&vport->xchg_mgr_temp, &lport->xchg_mgr_temp,
- sizeof(struct unf_cm_xchg_mgr_template));
-
- memcpy(&vport->event_mgr, &lport->event_mgr, sizeof(struct unf_event_mgr));
-
- memset(&vport->lport_mgr_temp, 0, sizeof(struct unf_cm_lport_template));
-
- memcpy(&vport->low_level_func, &lport->low_level_func,
- sizeof(struct unf_low_level_functioon_op));
-}
-
-void unf_check_vport_pool_status(struct unf_lport *lport)
-{
- struct unf_vport_pool *vport_pool = NULL;
- ulong flags = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- vport_pool = lport->vport_pool;
- FC_CHECK_RETURN_VOID(vport_pool);
-
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
-
- if (vport_pool->vport_pool_completion &&
- vport_pool->slab_total_sum == vport_pool->vport_pool_count) {
- complete(vport_pool->vport_pool_completion);
- }
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
-}
-
-void unf_vport_fabric_logo(struct unf_lport *vport)
-{
- struct unf_rport *unf_rport = NULL;
- ulong flag = 0;
-
- unf_rport = unf_get_rport_by_nport_id(vport, UNF_FC_FID_FLOGI);
- FC_CHECK_RETURN_VOID(unf_rport);
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_LOGO);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- unf_rport_enter_logo(vport, unf_rport);
-}
-
-void unf_vport_deinit(void *vport)
-{
- struct unf_lport *unf_vport = NULL;
-
- FC_CHECK_RETURN_VOID(vport);
- unf_vport = (struct unf_lport *)vport;
-
- unf_unregister_scsi_host(unf_vport);
-
- unf_disc_mgr_destroy(unf_vport);
-
- unf_release_xchg_mgr_temp(unf_vport);
-
- unf_release_vport_mgr_temp(unf_vport);
-
- unf_destroy_scsi_id_table(unf_vport);
-
- unf_lport_release_lw_funop(unf_vport);
- unf_vport->fc_port = NULL;
- unf_vport->vport = NULL;
-
- if (unf_vport->lport_free_completion) {
- complete(unf_vport->lport_free_completion);
- } else {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]VPort(0x%x) point(0x%p) completion free function is NULL",
- unf_vport->port_id, unf_vport);
- dump_stack();
- }
-}
-
-void unf_vport_ref_dec(struct unf_lport *vport)
-{
- FC_CHECK_RETURN_VOID(vport);
-
- if (atomic_dec_and_test(&vport->port_ref_cnt)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]VPort(0x%x) point(0x%p) reference count is 0 and freevport",
- vport->port_id, vport);
-
- unf_vport_deinit(vport);
- }
-}
-
-u32 unf_vport_init(void *vport)
-{
- struct unf_lport *unf_vport = NULL;
-
- FC_CHECK_RETURN_VALUE(vport, RETURN_ERROR);
- unf_vport = (struct unf_lport *)vport;
-
- unf_vport->options = UNF_PORT_MODE_INI;
- unf_vport->nport_id = 0;
-
- if (unf_init_scsi_id_table(unf_vport) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Vport(0x%x) can not initialize SCSI ID table",
- unf_vport->port_id);
-
- return RETURN_ERROR;
- }
-
- if (unf_init_disc_mgr(unf_vport) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Vport(0x%x) can not initialize discover manager",
- unf_vport->port_id);
- unf_destroy_scsi_id_table(unf_vport);
-
- return RETURN_ERROR;
- }
-
- if (unf_register_scsi_host(unf_vport) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Vport(0x%x) vport can not register SCSI host",
- unf_vport->port_id);
- unf_disc_mgr_destroy(unf_vport);
- unf_destroy_scsi_id_table(unf_vport);
-
- return RETURN_ERROR;
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
- "[event]Vport(0x%x) Create succeed with wwpn(0x%llx)",
- unf_vport->port_id, unf_vport->port_name);
-
- return RETURN_OK;
-}
-
-void unf_vport_remove(void *vport)
-{
- struct unf_lport *unf_vport = NULL;
- struct unf_lport *unf_lport = NULL;
- struct completion port_free_completion;
-
- init_completion(&port_free_completion);
- FC_CHECK_RETURN_VOID(vport);
- unf_vport = (struct unf_lport *)vport;
- unf_lport = (struct unf_lport *)(unf_vport->root_lport);
- unf_vport->lport_free_completion = &port_free_completion;
-
- unf_set_lport_removing(unf_vport);
-
- unf_vport_ref_dec(unf_vport);
-
- wait_for_completion(unf_vport->lport_free_completion);
- unf_vport_back_to_pool(unf_vport);
-
- unf_check_vport_pool_status(unf_lport);
-}
-
-u32 unf_npiv_conf(u32 port_id, u64 wwpn, enum unf_rport_qos_level qos_level)
-{
-#define VPORT_WWN_MASK 0xff00ffffffffffff
-#define VPORT_WWN_SHIFT 48
-
- struct fc_vport_identifiers vid = {0};
- struct Scsi_Host *host = NULL;
- struct unf_lport *unf_lport = NULL;
- struct unf_lport *unf_vport = NULL;
- u16 vport_id = 0;
-
- unf_lport = unf_find_lport_by_port_id(port_id);
- if (!unf_lport) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Cannot find LPort by (0x%x).", port_id);
-
- return RETURN_ERROR;
- }
-
- unf_vport = unf_cm_lookup_vport_by_wwpn(unf_lport, wwpn);
- if (unf_vport) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Port(0x%x) has find vport with wwpn(0x%llx), can't create again",
- unf_lport->port_id, wwpn);
-
- return RETURN_ERROR;
- }
-
- unf_vport = unf_get_free_vport(unf_lport);
- if (!unf_vport) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Can not get free vport from pool");
-
- return RETURN_ERROR;
- }
-
- unf_init_port_parms(unf_vport);
- unf_init_vport_from_lport(unf_vport, unf_lport);
-
- if ((unf_lport->port_name & VPORT_WWN_MASK) == (wwpn & VPORT_WWN_MASK)) {
- vport_id = (wwpn & ~VPORT_WWN_MASK) >> VPORT_WWN_SHIFT;
- if (vport_id == 0)
- vport_id = (unf_lport->port_name & ~VPORT_WWN_MASK) >> VPORT_WWN_SHIFT;
- }
-
- if (unf_alloc_vp_index(unf_lport->vport_pool, unf_vport, vport_id) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Vport can not allocate vport index");
- unf_vport_back_to_pool(unf_vport);
-
- return RETURN_ERROR;
- }
- unf_vport->port_id = (((u32)unf_vport->vp_index) << PORTID_VPINDEX_SHIT) |
- unf_lport->port_id;
-
- vid.roles = FC_PORT_ROLE_FCP_INITIATOR;
- vid.vport_type = FC_PORTTYPE_NPIV;
- vid.disable = false;
- vid.node_name = unf_lport->node_name;
-
- if (wwpn) {
- vid.port_name = wwpn;
- } else {
- if ((unf_lport->port_name & ~VPORT_WWN_MASK) >> VPORT_WWN_SHIFT !=
- unf_vport->vp_index) {
- vid.port_name = (unf_lport->port_name & VPORT_WWN_MASK) |
- (((u64)unf_vport->vp_index) << VPORT_WWN_SHIFT);
- } else {
- vid.port_name = (unf_lport->port_name & VPORT_WWN_MASK);
- }
- }
-
- unf_vport->port_name = vid.port_name;
-
- host = unf_lport->host_info.host;
-
- if (!fc_vport_create(host, 0, &vid)) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) Cannot Failed to create vport wwpn=%llx",
- unf_lport->port_id, vid.port_name);
-
- unf_vport_back_to_pool(unf_vport);
-
- return RETURN_ERROR;
- }
-
- unf_vport->qos_level = qos_level;
- return RETURN_OK;
-}
-
-struct unf_lport *unf_creat_vport(struct unf_lport *lport,
- struct vport_config *vport_config)
-{
- u32 ret = RETURN_OK;
- struct unf_lport *unf_lport = NULL;
- struct unf_lport *vport = NULL;
- enum unf_act_topo lport_topo;
- enum unf_lport_login_state lport_state;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
- FC_CHECK_RETURN_VALUE(vport_config, NULL);
-
- if (vport_config->port_mode != FC_PORT_ROLE_FCP_INITIATOR) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Only support INITIATOR port mode(0x%x)",
- vport_config->port_mode);
-
- return NULL;
- }
- unf_lport = lport;
-
- if (unf_lport->root_lport != unf_lport) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) not root port return",
- unf_lport->port_id);
-
- return NULL;
- }
-
- vport = unf_cm_lookup_vport_by_wwpn(unf_lport, vport_config->port_name);
- if (!vport) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Port(0x%x) can not find vport with wwpn(0x%llx)",
- unf_lport->port_id, vport_config->port_name);
-
- return NULL;
- }
-
- ret = unf_vport_init(vport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]VPort(0x%x) can not initialize vport",
- vport->port_id);
-
- return NULL;
- }
-
- spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
- lport_topo = unf_lport->act_topo;
- lport_state = unf_lport->states;
-
- vport_config->node_name = unf_lport->node_name;
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
-
- vport->port_name = vport_config->port_name;
- vport->node_name = vport_config->node_name;
-
- if (lport_topo == UNF_ACT_TOP_P2P_FABRIC &&
- lport_state >= UNF_LPORT_ST_PLOGI_WAIT &&
- lport_state <= UNF_LPORT_ST_READY) {
- vport->link_up = unf_lport->link_up;
- (void)unf_lport_login(vport, lport_topo);
- }
-
- return vport;
-}
-
-u32 unf_drop_vport(struct unf_lport *vport)
-{
- u32 ret = RETURN_ERROR;
- struct fc_vport *unf_vport = NULL;
-
- FC_CHECK_RETURN_VALUE(vport, RETURN_ERROR);
-
- unf_vport = vport->vport;
- if (!unf_vport) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]VPort(0x%x) find vport in scsi is NULL",
- vport->port_id);
-
- return ret;
- }
-
- ret = fc_vport_terminate(unf_vport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]VPort(0x%x) terminate vport(%p) in scsi failed",
- vport->port_id, unf_vport);
-
- return ret;
- }
- return ret;
-}
-
-u32 unf_delete_vport(u32 port_id, u32 vp_index)
-{
- struct unf_lport *unf_lport = NULL;
- u16 unf_vp_index = 0;
- struct unf_lport *vport = NULL;
-
- unf_lport = unf_find_lport_by_port_id(port_id);
- if (!unf_lport) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) can not be found by portid", port_id);
-
- return RETURN_ERROR;
- }
-
- if (atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) is in NOP, destroy all vports function will be called",
- unf_lport->port_id);
-
- return RETURN_OK;
- }
-
- UNF_TOU16_CHECK(unf_vp_index, vp_index, return RETURN_ERROR);
- vport = unf_cm_lookup_vport_by_vp_index(unf_lport, unf_vp_index);
- if (!vport) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Can not lookup VPort by VPort index(0x%x)",
- unf_vp_index);
-
- return RETURN_ERROR;
- }
-
- return unf_drop_vport(vport);
-}
-
-void unf_vport_abort_all_sfs_exch(struct unf_lport *vport)
-{
- struct unf_xchg_hot_pool *hot_pool = NULL;
- struct list_head *xchg_node = NULL;
- struct list_head *next_xchg_node = NULL;
- struct unf_xchg *exch = NULL;
- ulong pool_lock_flags = 0;
- ulong exch_lock_flags = 0;
- u32 i;
-
- FC_CHECK_RETURN_VOID(vport);
- for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
- hot_pool = unf_get_hot_pool_by_lport((struct unf_lport *)(vport->root_lport), i);
- if (unlikely(!hot_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) hot pool is NULL",
- ((struct unf_lport *)(vport->root_lport))->port_id);
- continue;
- }
-
- spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_flags);
- list_for_each_safe(xchg_node, next_xchg_node, &hot_pool->sfs_busylist) {
- exch = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
- spin_lock_irqsave(&exch->xchg_state_lock, exch_lock_flags);
- if (vport == exch->lport && (atomic_read(&exch->ref_cnt) > 0)) {
- exch->io_state |= TGT_IO_STATE_ABORT;
- spin_unlock_irqrestore(&exch->xchg_state_lock, exch_lock_flags);
- unf_disc_ctrl_size_inc(vport, exch->cmnd_code);
- /* Transfer exch to destroy chain */
- list_del(xchg_node);
- list_add_tail(xchg_node, &hot_pool->list_destroy_xchg);
- } else {
- spin_unlock_irqrestore(&exch->xchg_state_lock, exch_lock_flags);
- }
- }
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_flags);
- }
-}
-
-void unf_vport_abort_ini_io_exch(struct unf_lport *vport)
-{
- struct unf_xchg_hot_pool *hot_pool = NULL;
- struct list_head *xchg_node = NULL;
- struct list_head *next_xchg_node = NULL;
- struct unf_xchg *exch = NULL;
- ulong pool_lock_flags = 0;
- ulong exch_lock_flags = 0;
- u32 i;
-
- FC_CHECK_RETURN_VOID(vport);
- for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
- hot_pool = unf_get_hot_pool_by_lport((struct unf_lport *)(vport->root_lport), i);
- if (unlikely(!hot_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) MgrIdex %u hot pool is NULL",
- ((struct unf_lport *)(vport->root_lport))->port_id, i);
- continue;
- }
-
- spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, pool_lock_flags);
- list_for_each_safe(xchg_node, next_xchg_node, &hot_pool->ini_busylist) {
- exch = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
-
- if (vport == exch->lport && atomic_read(&exch->ref_cnt) > 0) {
- /* Transfer exch to destroy chain */
- list_del(xchg_node);
- list_add_tail(xchg_node, &hot_pool->list_destroy_xchg);
-
- spin_lock_irqsave(&exch->xchg_state_lock, exch_lock_flags);
- exch->io_state |= INI_IO_STATE_DRABORT;
- spin_unlock_irqrestore(&exch->xchg_state_lock, exch_lock_flags);
- }
- }
-
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, pool_lock_flags);
- }
-}
-
-void unf_vport_abort_exch(struct unf_lport *vport)
-{
- FC_CHECK_RETURN_VOID(vport);
-
- unf_vport_abort_all_sfs_exch(vport);
-
- unf_vport_abort_ini_io_exch(vport);
-}
-
-u32 unf_vport_wait_all_exch_removed(struct unf_lport *vport)
-{
-#define UNF_WAIT_EXCH_REMOVE_ONE_TIME_MS 1000
- struct unf_xchg_hot_pool *hot_pool = NULL;
- struct list_head *xchg_node = NULL;
- struct list_head *next_xchg_node = NULL;
- struct unf_xchg *exch = NULL;
- u32 vport_uses = 0;
- ulong flags = 0;
- u32 wait_timeout = 0;
- u32 i = 0;
-
- FC_CHECK_RETURN_VALUE(vport, RETURN_ERROR);
-
- while (1) {
- vport_uses = 0;
-
- for (i = 0; i < UNF_EXCHG_MGR_NUM; i++) {
- hot_pool =
- unf_get_hot_pool_by_lport((struct unf_lport *)(vport->root_lport), i);
- if (unlikely(!hot_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) hot Pool is NULL",
- ((struct unf_lport *)(vport->root_lport))->port_id);
-
- continue;
- }
-
- spin_lock_irqsave(&hot_pool->xchg_hotpool_lock, flags);
- list_for_each_safe(xchg_node, next_xchg_node,
- &hot_pool->list_destroy_xchg) {
- exch = list_entry(xchg_node, struct unf_xchg, list_xchg_entry);
-
- if (exch->lport != vport)
- continue;
- vport_uses++;
- if (wait_timeout >=
- UNF_DELETE_VPORT_MAX_WAIT_TIME_MS) {
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
- "[error]VPort(0x%x) Abort Exch(0x%p) Type(0x%x) OxRxid(0x%x 0x%x),sid did(0x%x 0x%x) SeqId(0x%x) IOState(0x%x) Ref(0x%x)",
- vport->port_id, exch,
- (u32)exch->xchg_type,
- (u32)exch->oxid,
- (u32)exch->rxid, (u32)exch->sid,
- (u32)exch->did, (u32)exch->seq_id,
- (u32)exch->io_state,
- atomic_read(&exch->ref_cnt));
- }
- }
- spin_unlock_irqrestore(&hot_pool->xchg_hotpool_lock, flags);
- }
-
- if (vport_uses == 0) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]VPort(0x%x) has removed all exchanges it used",
- vport->port_id);
- break;
- }
-
- if (wait_timeout >= UNF_DELETE_VPORT_MAX_WAIT_TIME_MS)
- return RETURN_ERROR;
-
- msleep(UNF_WAIT_EXCH_REMOVE_ONE_TIME_MS);
- wait_timeout += UNF_WAIT_EXCH_REMOVE_ONE_TIME_MS;
- }
-
- return RETURN_OK;
-}
-
-u32 unf_vport_wait_rports_removed(struct unf_lport *vport)
-{
-#define UNF_WAIT_RPORT_REMOVE_ONE_TIME_MS 5000
-
- struct unf_disc *disc = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- u32 vport_uses = 0;
- ulong flags = 0;
- u32 wait_timeout = 0;
- struct unf_rport *unf_rport = NULL;
-
- FC_CHECK_RETURN_VALUE(vport, RETURN_ERROR);
- disc = &vport->disc;
-
- while (1) {
- vport_uses = 0;
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flags);
- list_for_each_safe(node, next_node, &disc->list_delete_rports) {
- unf_rport = list_entry(node, struct unf_rport, entry_rport);
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
- "[info]Vport(0x%x) Rport(0x%x) point(%p) is in Delete",
- vport->port_id, unf_rport->nport_id, unf_rport);
- vport_uses++;
- }
-
- list_for_each_safe(node, next_node, &disc->list_destroy_rports) {
- unf_rport = list_entry(node, struct unf_rport, entry_rport);
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
- "[info]Vport(0x%x) Rport(0x%x) point(%p) is in Destroy",
- vport->port_id, unf_rport->nport_id, unf_rport);
- vport_uses++;
- }
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flags);
-
- if (vport_uses == 0) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]VPort(0x%x) has removed all RPorts it used",
- vport->port_id);
- break;
- }
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Vport(0x%x) has %u RPorts not removed wait timeout(%u ms)",
- vport->port_id, vport_uses, wait_timeout);
-
- if (wait_timeout >= UNF_DELETE_VPORT_MAX_WAIT_TIME_MS)
- return RETURN_ERROR;
-
- msleep(UNF_WAIT_RPORT_REMOVE_ONE_TIME_MS);
- wait_timeout += UNF_WAIT_RPORT_REMOVE_ONE_TIME_MS;
- }
-
- return RETURN_OK;
-}
-
-u32 unf_destroy_one_vport(struct unf_lport *vport)
-{
- u32 ret;
- struct unf_lport *root_port = NULL;
-
- FC_CHECK_RETURN_VALUE(vport, RETURN_ERROR);
-
- root_port = (struct unf_lport *)vport->root_lport;
-
- unf_vport_fabric_logo(vport);
-
- /* 1 set NOP */
- atomic_set(&vport->lport_no_operate_flag, UNF_LPORT_NOP);
- vport->port_removing = true;
-
- /* 2 report linkdown to scsi and delele rpot */
- unf_linkdown_one_vport(vport);
-
- /* 3 set abort for exchange */
- unf_vport_abort_exch(vport);
-
- /* 4 wait exch return freepool */
- if (!root_port->port_dirt_exchange) {
- ret = unf_vport_wait_all_exch_removed(vport);
- if (ret != RETURN_OK) {
- if (!root_port->port_removing) {
- vport->port_removing = false;
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
- "[err]VPort(0x%x) can not wait Exchange return freepool",
- vport->port_id);
-
- return RETURN_ERROR;
- }
-
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
- "[warn]Port(0x%x) is removing, there is dirty exchange, continue",
- root_port->port_id);
-
- root_port->port_dirt_exchange = true;
- }
- }
-
- /* wait rport return rportpool */
- ret = unf_vport_wait_rports_removed(vport);
- if (ret != RETURN_OK) {
- vport->port_removing = false;
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
- "[err]VPort(0x%x) can not wait Rport return freepool",
- vport->port_id);
-
- return RETURN_ERROR;
- }
-
- unf_cm_vport_remove(vport);
-
- return RETURN_OK;
-}
-
-void unf_destroy_all_vports(struct unf_lport *lport)
-{
- struct unf_vport_pool *vport_pool = NULL;
- struct unf_lport *unf_lport = NULL;
- struct unf_lport *vport = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flags = 0;
-
- unf_lport = lport;
- FC_CHECK_RETURN_VOID(unf_lport);
-
- vport_pool = unf_lport->vport_pool;
- if (unlikely(!vport_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Lport(0x%x) VPort pool is NULL", unf_lport->port_id);
-
- return;
- }
-
- /* Transfer to the transition chain */
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
- list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
- vport = list_entry(node, struct unf_lport, entry_vport);
- list_del_init(&vport->entry_vport);
- list_add_tail(&vport->entry_vport, &unf_lport->list_destroy_vports);
- }
-
- list_for_each_safe(node, next_node, &unf_lport->list_intergrad_vports) {
- vport = list_entry(node, struct unf_lport, entry_vport);
- list_del_init(&vport->entry_vport);
- list_add_tail(&vport->entry_vport, &unf_lport->list_destroy_vports);
- atomic_dec(&vport->port_ref_cnt);
- }
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
-
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
- while (!list_empty(&unf_lport->list_destroy_vports)) {
- node = UNF_OS_LIST_NEXT(&unf_lport->list_destroy_vports);
- vport = list_entry(node, struct unf_lport, entry_vport);
-
- list_del_init(&vport->entry_vport);
- list_add_tail(&vport->entry_vport, &unf_lport->list_vports_head);
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]VPort(0x%x) Destroy begin", vport->port_id);
- unf_drop_vport(vport);
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
- "[info]VPort(0x%x) Destroy end", vport->port_id);
-
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
- }
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
-}
-
-u32 unf_init_vport_mgr_temp(struct unf_lport *lport)
-{
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- lport->lport_mgr_temp.unf_look_up_vport_by_index = unf_lookup_vport_by_index;
- lport->lport_mgr_temp.unf_look_up_vport_by_port_id = unf_lookup_vport_by_portid;
- lport->lport_mgr_temp.unf_look_up_vport_by_did = unf_lookup_vport_by_did;
- lport->lport_mgr_temp.unf_look_up_vport_by_wwpn = unf_lookup_vport_by_wwpn;
- lport->lport_mgr_temp.unf_vport_remove = unf_vport_remove;
-
- return RETURN_OK;
-}
-
-void unf_release_vport_mgr_temp(struct unf_lport *lport)
-{
- FC_CHECK_RETURN_VOID(lport);
-
- memset(&lport->lport_mgr_temp, 0, sizeof(struct unf_cm_lport_template));
-
- lport->destroy_step = UNF_LPORT_DESTROY_STEP_9_DESTROY_LPORT_MG_TMP;
-}
diff --git a/drivers/scsi/spfc/common/unf_npiv.h b/drivers/scsi/spfc/common/unf_npiv.h
deleted file mode 100644
index 6f522470f47a..000000000000
--- a/drivers/scsi/spfc/common/unf_npiv.h
+++ /dev/null
@@ -1,47 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_NPIV_H
-#define UNF_NPIV_H
-
-#include "unf_type.h"
-#include "unf_common.h"
-#include "unf_lport.h"
-
-/* product VPORT configure */
-struct vport_config {
- u64 node_name;
- u64 port_name;
- u32 port_mode; /* INI, TGT or both */
-};
-
-/* product Vport function */
-#define PORTID_VPINDEX_MASK 0xff000000
-#define PORTID_VPINDEX_SHIT 24
-u32 unf_npiv_conf(u32 port_id, u64 wwpn, enum unf_rport_qos_level qos_level);
-struct unf_lport *unf_creat_vport(struct unf_lport *lport,
- struct vport_config *vport_config);
-u32 unf_delete_vport(u32 port_id, u32 vp_index);
-
-/* Vport pool creat and release function */
-u32 unf_init_vport_pool(struct unf_lport *lport);
-void unf_free_vport_pool(struct unf_lport *lport);
-
-/* Lport resigster stLPortMgTemp function */
-void unf_vport_remove(void *vport);
-void unf_vport_ref_dec(struct unf_lport *vport);
-
-/* linkdown all Vport after receive linkdown event */
-void unf_linkdown_all_vports(void *lport);
-/* Lport receive Flogi Acc linkup all Vport */
-void unf_linkup_all_vports(struct unf_lport *lport);
-/* Lport remove delete all Vport */
-void unf_destroy_all_vports(struct unf_lport *lport);
-void unf_vport_fabric_logo(struct unf_lport *vport);
-u32 unf_destroy_one_vport(struct unf_lport *vport);
-u32 unf_drop_vport(struct unf_lport *vport);
-u32 unf_init_vport_mgr_temp(struct unf_lport *lport);
-void unf_release_vport_mgr_temp(struct unf_lport *lport);
-struct unf_lport *unf_get_vport_by_slab_index(struct unf_vport_pool *vport_pool,
- u16 slab_index);
-#endif
diff --git a/drivers/scsi/spfc/common/unf_npiv_portman.c b/drivers/scsi/spfc/common/unf_npiv_portman.c
deleted file mode 100644
index b4f393f2e732..000000000000
--- a/drivers/scsi/spfc/common/unf_npiv_portman.c
+++ /dev/null
@@ -1,360 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "unf_npiv_portman.h"
-#include "unf_log.h"
-#include "unf_common.h"
-#include "unf_rport.h"
-#include "unf_npiv.h"
-#include "unf_portman.h"
-
-void *unf_lookup_vport_by_index(void *lport, u16 vp_index)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_vport_pool *vport_pool = NULL;
- struct unf_lport *unf_vport = NULL;
- ulong flags = 0;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
-
- unf_lport = (struct unf_lport *)lport;
-
- vport_pool = unf_lport->vport_pool;
- if (unlikely(!vport_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) vport pool is NULL", unf_lport->port_id);
-
- return NULL;
- }
-
- if (vp_index == 0 || vp_index > vport_pool->slab_total_sum) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%x) input vport index(0x%x) is beyond the normal range(0x1~0x%x)",
- unf_lport->port_id, vp_index, vport_pool->slab_total_sum);
-
- return NULL;
- }
-
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
- unf_vport = unf_get_vport_by_slab_index(vport_pool, vp_index - 1);
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
-
- return (void *)unf_vport;
-}
-
-void *unf_lookup_vport_by_portid(void *lport, u32 port_id)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_vport_pool *vport_pool = NULL;
- struct unf_lport *unf_vport = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
-
- unf_lport = (struct unf_lport *)lport;
- vport_pool = unf_lport->vport_pool;
- if (unlikely(!vport_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) vport pool is NULL", unf_lport->port_id);
-
- return NULL;
- }
-
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
- list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
- unf_vport = list_entry(node, struct unf_lport, entry_vport);
- if (unf_vport->port_id == port_id) {
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
- return unf_vport;
- }
- }
-
- list_for_each_safe(node, next_node, &unf_lport->list_intergrad_vports) {
- unf_vport = list_entry(node, struct unf_lport, entry_vport);
- if (unf_vport->port_id == port_id) {
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
- return unf_vport;
- }
- }
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) has no vport ID(0x%x).",
- unf_lport->port_id, port_id);
- return NULL;
-}
-
-void *unf_lookup_vport_by_did(void *lport, u32 did)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_vport_pool *vport_pool = NULL;
- struct unf_lport *unf_vport = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
-
- unf_lport = (struct unf_lport *)lport;
- vport_pool = unf_lport->vport_pool;
- if (unlikely(!vport_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) vport pool is NULL", unf_lport->port_id);
-
- return NULL;
- }
-
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
- list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
- unf_vport = list_entry(node, struct unf_lport, entry_vport);
- if (unf_vport->nport_id == did) {
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
-
- return unf_vport;
- }
- }
-
- list_for_each_safe(node, next_node, &unf_lport->list_intergrad_vports) {
- unf_vport = list_entry(node, struct unf_lport, entry_vport);
- if (unf_vport->nport_id == did) {
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
- return unf_vport;
- }
- }
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) has no vport Nport ID(0x%x)", unf_lport->port_id, did);
- return NULL;
-}
-
-void *unf_lookup_vport_by_wwpn(void *lport, u64 wwpn)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_vport_pool *vport_pool = NULL;
- struct unf_lport *unf_vport = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
-
- unf_lport = (struct unf_lport *)lport;
- vport_pool = unf_lport->vport_pool;
- if (unlikely(!vport_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) vport pool is NULL", unf_lport->port_id);
-
- return NULL;
- }
-
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
- list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
- unf_vport = list_entry(node, struct unf_lport, entry_vport);
- if (unf_vport->port_name == wwpn) {
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
-
- return unf_vport;
- }
- }
-
- list_for_each_safe(node, next_node, &unf_lport->list_intergrad_vports) {
- unf_vport = list_entry(node, struct unf_lport, entry_vport);
- if (unf_vport->port_name == wwpn) {
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
- return unf_vport;
- }
- }
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) has no vport WWPN(0x%llx)",
- unf_lport->port_id, wwpn);
-
- return NULL;
-}
-
-void unf_linkdown_one_vport(struct unf_lport *vport)
-{
- ulong flag = 0;
- struct unf_lport *root_lport = NULL;
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_KEVENT,
- "[info]VPort(0x%x) linkdown", vport->port_id);
-
- spin_lock_irqsave(&vport->lport_state_lock, flag);
- vport->link_up = UNF_PORT_LINK_DOWN;
- vport->nport_id = 0; /* set nportid 0 before send fdisc again */
- unf_lport_state_ma(vport, UNF_EVENT_LPORT_LINK_DOWN);
- spin_unlock_irqrestore(&vport->lport_state_lock, flag);
-
- root_lport = (struct unf_lport *)vport->root_lport;
-
- unf_flush_disc_event(&root_lport->disc, vport);
-
- unf_clean_linkdown_rport(vport);
-}
-
-void unf_linkdown_all_vports(void *lport)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_vport_pool *vport_pool = NULL;
- struct unf_lport *unf_vport = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flags = 0;
-
- FC_CHECK_RETURN_VOID(lport);
-
- unf_lport = (struct unf_lport *)lport;
- vport_pool = unf_lport->vport_pool;
- if (unlikely(!vport_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%x) VPort pool is NULL", unf_lport->port_id);
-
- return;
- }
-
- /* Transfer to the transition chain */
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
- list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
- unf_vport = list_entry(node, struct unf_lport, entry_vport);
- list_del_init(&unf_vport->entry_vport);
- list_add_tail(&unf_vport->entry_vport, &unf_lport->list_intergrad_vports);
- (void)unf_lport_ref_inc(unf_vport);
- }
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
-
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
- while (!list_empty(&unf_lport->list_intergrad_vports)) {
- node = UNF_OS_LIST_NEXT(&unf_lport->list_intergrad_vports);
- unf_vport = list_entry(node, struct unf_lport, entry_vport);
-
- list_del_init(&unf_vport->entry_vport);
- list_add_tail(&unf_vport->entry_vport, &unf_lport->list_vports_head);
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
-
- unf_linkdown_one_vport(unf_vport);
-
- unf_vport_ref_dec(unf_vport);
-
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
- }
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
-}
-
-int unf_process_vports_linkup(void *arg_in, void *arg_out)
-{
-#define UNF_WAIT_VPORT_LOGIN_ONE_TIME_MS 100
- struct unf_vport_pool *vport_pool = NULL;
- struct unf_lport *unf_lport = NULL;
- struct unf_lport *unf_vport = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flags = 0;
- int ret = RETURN_OK;
-
- FC_CHECK_RETURN_VALUE(arg_in, RETURN_ERROR);
-
- unf_lport = (struct unf_lport *)arg_in;
-
- if (atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) is NOP don't continue", unf_lport->port_id);
-
- return RETURN_OK;
- }
-
- if (unf_lport->link_up != UNF_PORT_LINK_UP) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) is not linkup don't continue.",
- unf_lport->port_id);
-
- return RETURN_OK;
- }
-
- vport_pool = unf_lport->vport_pool;
- if (unlikely(!vport_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%x) VPort pool is NULL.", unf_lport->port_id);
-
- return RETURN_OK;
- }
-
- /* Transfer to the transition chain */
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
- list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
- unf_vport = list_entry(node, struct unf_lport, entry_vport);
- list_del_init(&unf_vport->entry_vport);
- list_add_tail(&unf_vport->entry_vport, &unf_lport->list_intergrad_vports);
- (void)unf_lport_ref_inc(unf_vport);
- }
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
-
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
- while (!list_empty(&unf_lport->list_intergrad_vports)) {
- node = UNF_OS_LIST_NEXT(&unf_lport->list_intergrad_vports);
- unf_vport = list_entry(node, struct unf_lport, entry_vport);
-
- list_del_init(&unf_vport->entry_vport);
- list_add_tail(&unf_vport->entry_vport, &unf_lport->list_vports_head);
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
-
- if (atomic_read(&unf_vport->lport_no_operate_flag) == UNF_LPORT_NOP) {
- unf_vport_ref_dec(unf_vport);
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
- continue;
- }
-
- if (unf_lport->link_up == UNF_PORT_LINK_UP &&
- unf_lport->act_topo == UNF_ACT_TOP_P2P_FABRIC) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]Vport(0x%x) begin login", unf_vport->port_id);
-
- unf_vport->link_up = UNF_PORT_LINK_UP;
- (void)unf_lport_login(unf_vport, unf_lport->act_topo);
-
- msleep(UNF_WAIT_VPORT_LOGIN_ONE_TIME_MS);
- } else {
- unf_linkdown_one_vport(unf_vport);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Vport(0x%x) login failed because root port linkdown",
- unf_vport->port_id);
- }
-
- unf_vport_ref_dec(unf_vport);
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flags);
- }
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flags);
-
- return ret;
-}
-
-void unf_linkup_all_vports(struct unf_lport *lport)
-{
- struct unf_cm_event_report *event = NULL;
-
- FC_CHECK_RETURN_VOID(lport);
-
- if (unlikely(!lport->event_mgr.unf_get_free_event_func ||
- !lport->event_mgr.unf_post_event_func ||
- !lport->event_mgr.unf_release_event)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) Event fun is NULL",
- lport->port_id);
- return;
- }
-
- event = lport->event_mgr.unf_get_free_event_func((void *)lport);
- FC_CHECK_RETURN_VOID(event);
-
- event->lport = lport;
- event->event_asy_flag = UNF_EVENT_ASYN;
- event->unf_event_task = unf_process_vports_linkup;
- event->para_in = (void *)lport;
-
- lport->event_mgr.unf_post_event_func(lport, event);
-}
diff --git a/drivers/scsi/spfc/common/unf_npiv_portman.h b/drivers/scsi/spfc/common/unf_npiv_portman.h
deleted file mode 100644
index 284c23c9abe4..000000000000
--- a/drivers/scsi/spfc/common/unf_npiv_portman.h
+++ /dev/null
@@ -1,17 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_NPIV_PORTMAN_H
-#define UNF_NPIV_PORTMAN_H
-
-#include "unf_type.h"
-#include "unf_lport.h"
-
-/* Lport resigster stLPortMgTemp function */
-void *unf_lookup_vport_by_index(void *lport, u16 vp_index);
-void *unf_lookup_vport_by_portid(void *lport, u32 port_id);
-void *unf_lookup_vport_by_did(void *lport, u32 did);
-void *unf_lookup_vport_by_wwpn(void *lport, u64 wwpn);
-void unf_linkdown_one_vport(struct unf_lport *vport);
-
-#endif
diff --git a/drivers/scsi/spfc/common/unf_portman.c b/drivers/scsi/spfc/common/unf_portman.c
deleted file mode 100644
index ef8f90eb3777..000000000000
--- a/drivers/scsi/spfc/common/unf_portman.c
+++ /dev/null
@@ -1,2431 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "unf_portman.h"
-#include "unf_log.h"
-#include "unf_exchg.h"
-#include "unf_rport.h"
-#include "unf_io.h"
-#include "unf_npiv.h"
-#include "unf_scsi_common.h"
-
-#define UNF_LPORT_CHIP_ERROR(unf_lport) \
- ((unf_lport)->pcie_error_cnt.pcie_error_count[UNF_PCIE_FATALERRORDETECTED])
-
-struct unf_global_lport global_lport_mgr;
-
-static int unf_port_switch(struct unf_lport *lport, bool switch_flag);
-static u32 unf_build_lport_wwn(struct unf_lport *lport);
-static int unf_lport_destroy(void *lport, void *arg_out);
-static u32 unf_port_linkup(struct unf_lport *lport, void *input);
-static u32 unf_port_linkdown(struct unf_lport *lport, void *input);
-static u32 unf_port_abnormal_reset(struct unf_lport *lport, void *input);
-static u32 unf_port_reset_start(struct unf_lport *lport, void *input);
-static u32 unf_port_reset_end(struct unf_lport *lport, void *input);
-static u32 unf_port_nop(struct unf_lport *lport, void *input);
-static void unf_destroy_card_thread(struct unf_lport *lport);
-static u32 unf_creat_card_thread(struct unf_lport *lport);
-static u32 unf_find_card_thread(struct unf_lport *lport);
-static u32 unf_port_begin_remove(struct unf_lport *lport, void *input);
-
-static struct unf_port_action g_lport_action[] = {
- {UNF_PORT_LINK_UP, unf_port_linkup},
- {UNF_PORT_LINK_DOWN, unf_port_linkdown},
- {UNF_PORT_RESET_START, unf_port_reset_start},
- {UNF_PORT_RESET_END, unf_port_reset_end},
- {UNF_PORT_NOP, unf_port_nop},
- {UNF_PORT_BEGIN_REMOVE, unf_port_begin_remove},
- {UNF_PORT_RELEASE_RPORT_INDEX, unf_port_release_rport_index},
- {UNF_PORT_ABNORMAL_RESET, unf_port_abnormal_reset},
-};
-
-static void unf_destroy_dirty_rport(struct unf_lport *lport, bool show_only)
-{
- u32 dirty_rport = 0;
-
- /* for whole L_Port */
- if (lport->dirty_flag & UNF_LPORT_DIRTY_FLAG_RPORT_POOL_DIRTY) {
- dirty_rport = lport->rport_pool.rport_pool_count;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) has %d dirty RPort(s)",
- lport->port_id, dirty_rport);
-
- /* Show L_Port's R_Port(s) from busy_list & destroy_list */
- unf_show_all_rport(lport);
-
- /* free R_Port pool memory & bitmap */
- if (!show_only) {
- vfree(lport->rport_pool.rport_pool_add);
- lport->rport_pool.rport_pool_add = NULL;
- vfree(lport->rport_pool.rpi_bitmap);
- lport->rport_pool.rpi_bitmap = NULL;
- }
- }
-}
-
-void unf_show_dirty_port(bool show_only, u32 *dirty_port_num)
-{
- struct list_head *node = NULL;
- struct list_head *node_next = NULL;
- struct unf_lport *unf_lport = NULL;
- ulong flags = 0;
- u32 port_num = 0;
-
- FC_CHECK_RETURN_VOID(dirty_port_num);
-
- /* for each dirty L_Port from global L_Port list */
- spin_lock_irqsave(&global_lport_mgr.global_lport_list_lock, flags);
- list_for_each_safe(node, node_next, &global_lport_mgr.dirty_list_head) {
- unf_lport = list_entry(node, struct unf_lport, entry_lport);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) has dirty data(0x%x)",
- unf_lport->port_id, unf_lport->dirty_flag);
-
- /* Destroy dirty L_Port's exchange(s) & R_Port(s) */
- unf_destroy_dirty_xchg(unf_lport, show_only);
- unf_destroy_dirty_rport(unf_lport, show_only);
-
- /* Delete (dirty L_Port) list entry if necessary */
- if (!show_only) {
- list_del_init(node);
- vfree(unf_lport);
- }
-
- port_num++;
- }
- spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
-
- *dirty_port_num = port_num;
-}
-
-void unf_show_all_rport(struct unf_lport *lport)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_rport *unf_rport = NULL;
- struct unf_disc *disc = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flag = 0;
- u32 rport_cnt = 0;
- u32 target_cnt = 0;
-
- FC_CHECK_RETURN_VOID(lport);
-
- unf_lport = lport;
- disc = &unf_lport->disc;
-
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
- "[info]Port(0x%x) disc state(0x%x)", unf_lport->port_id, disc->states);
-
- /* for each R_Port from busy_list */
- list_for_each_safe(node, next_node, &disc->list_busy_rports) {
- unf_rport = list_entry(node, struct unf_rport, entry_rport);
- rport_cnt++;
-
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
- "[info]Port(0x%x) busy RPorts(%u_%p) WWPN(0x%016llx) scsi_id(0x%x) local N_Port_ID(0x%x) N_Port_ID(0x%06x). State(0x%04x) options(0x%04x) index(0x%04x) ref(%d) pend:%d",
- unf_lport->port_id, rport_cnt, unf_rport,
- unf_rport->port_name, unf_rport->scsi_id,
- unf_rport->local_nport_id, unf_rport->nport_id,
- unf_rport->rp_state, unf_rport->options,
- unf_rport->rport_index,
- atomic_read(&unf_rport->rport_ref_cnt),
- atomic_read(&unf_rport->pending_io_cnt));
-
- if (unf_rport->nport_id < UNF_FC_FID_DOM_MGR)
- target_cnt++;
- }
-
- unf_lport->target_cnt = target_cnt;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) targetnum=(%u)", unf_lport->port_id,
- unf_lport->target_cnt);
-
- /* for each R_Port from destroy_list */
- list_for_each_safe(node, next_node, &disc->list_destroy_rports) {
- unf_rport = list_entry(node, struct unf_rport, entry_rport);
- rport_cnt++;
-
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
- "[info]Port(0x%x) destroy RPorts(%u) WWPN(0x%016llx) N_Port_ID(0x%06x) State(0x%04x) options(0x%04x) index(0x%04x) ref(%d)",
- unf_lport->port_id, rport_cnt, unf_rport->port_name,
- unf_rport->nport_id, unf_rport->rp_state,
- unf_rport->options, unf_rport->rport_index,
- atomic_read(&unf_rport->rport_ref_cnt));
- }
-
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-}
-
-u32 unf_lport_ref_inc(struct unf_lport *lport)
-{
- ulong lport_flags = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- spin_lock_irqsave(&lport->lport_state_lock, lport_flags);
- if (atomic_read(&lport->port_ref_cnt) <= 0) {
- spin_unlock_irqrestore(&lport->lport_state_lock, lport_flags);
-
- return UNF_RETURN_ERROR;
- }
-
- atomic_inc(&lport->port_ref_cnt);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%p) port_id(0x%x) reference count is %d",
- lport, lport->port_id, atomic_read(&lport->port_ref_cnt));
-
- spin_unlock_irqrestore(&lport->lport_state_lock, lport_flags);
-
- return RETURN_OK;
-}
-
-void unf_lport_ref_dec(struct unf_lport *lport)
-{
- ulong flags = 0;
- ulong lport_flags = 0;
-
- FC_CHECK_RETURN_VOID(lport);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "LPort(0x%p), port ID(0x%x), reference count is %d.",
- lport, lport->port_id, atomic_read(&lport->port_ref_cnt));
-
- spin_lock_irqsave(&global_lport_mgr.global_lport_list_lock, flags);
- spin_lock_irqsave(&lport->lport_state_lock, lport_flags);
- if (atomic_dec_and_test(&lport->port_ref_cnt)) {
- spin_unlock_irqrestore(&lport->lport_state_lock, lport_flags);
- list_del(&lport->entry_lport);
- global_lport_mgr.lport_sum--;
-
- /* attaches the lport to the destroy linked list for dfx */
- list_add_tail(&lport->entry_lport, &global_lport_mgr.destroy_list_head);
- spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
-
- (void)unf_lport_destroy(lport, NULL);
- } else {
- spin_unlock_irqrestore(&lport->lport_state_lock, lport_flags);
- spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
- }
-}
-
-void unf_lport_update_topo(struct unf_lport *lport,
- enum unf_act_topo active_topo)
-{
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
-
- if (active_topo > UNF_ACT_TOP_UNKNOWN || active_topo < UNF_ACT_TOP_PUBLIC_LOOP) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) set invalid topology(0x%x) with current value(0x%x)",
- lport->nport_id, active_topo, lport->act_topo);
-
- return;
- }
-
- spin_lock_irqsave(&lport->lport_state_lock, flag);
- lport->act_topo = active_topo;
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-}
-
-void unf_set_lport_removing(struct unf_lport *lport)
-{
- FC_CHECK_RETURN_VOID(lport);
-
- lport->fc_port = NULL;
- lport->port_removing = true;
- lport->destroy_step = UNF_LPORT_DESTROY_STEP_0_SET_REMOVING;
-}
-
-u32 unf_release_local_port(void *lport)
-{
- struct unf_lport *unf_lport = lport;
- struct completion lport_free_completion;
-
- init_completion(&lport_free_completion);
- FC_CHECK_RETURN_VALUE(unf_lport, UNF_RETURN_ERROR);
-
- unf_lport->lport_free_completion = &lport_free_completion;
- unf_set_lport_removing(unf_lport);
- unf_lport_ref_dec(unf_lport);
- wait_for_completion(unf_lport->lport_free_completion);
- /* for dirty case */
- if (unf_lport->dirty_flag == 0)
- vfree(unf_lport);
-
- return RETURN_OK;
-}
-
-static void unf_free_all_esgl_pages(struct unf_lport *lport)
-{
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flag = 0;
- u32 i;
-
- FC_CHECK_RETURN_VOID(lport);
- spin_lock_irqsave(&lport->esgl_pool.esgl_pool_lock, flag);
- list_for_each_safe(node, next_node, &lport->esgl_pool.list_esgl_pool) {
- list_del(node);
- }
-
- spin_unlock_irqrestore(&lport->esgl_pool.esgl_pool_lock, flag);
- if (lport->esgl_pool.esgl_buff_list.buflist) {
- for (i = 0; i < lport->esgl_pool.esgl_buff_list.buf_num; i++) {
- if (lport->esgl_pool.esgl_buff_list.buflist[i].vaddr) {
- dma_free_coherent(&lport->low_level_func.dev->dev,
- lport->esgl_pool.esgl_buff_list.buf_size,
- lport->esgl_pool.esgl_buff_list.buflist[i].vaddr,
- lport->esgl_pool.esgl_buff_list.buflist[i].paddr);
- lport->esgl_pool.esgl_buff_list.buflist[i].vaddr = NULL;
- }
- }
- kfree(lport->esgl_pool.esgl_buff_list.buflist);
- lport->esgl_pool.esgl_buff_list.buflist = NULL;
- }
-}
-
-static u32 unf_init_esgl_pool(struct unf_lport *lport)
-{
- struct unf_esgl *esgl = NULL;
- u32 ret = RETURN_OK;
- u32 index = 0;
- u32 buf_total_size;
- u32 buf_num;
- u32 alloc_idx;
- u32 curbuf_idx = 0;
- u32 curbuf_offset = 0;
- u32 buf_cnt_perhugebuf;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- lport->esgl_pool.esgl_pool_count = lport->low_level_func.lport_cfg_items.max_io;
- spin_lock_init(&lport->esgl_pool.esgl_pool_lock);
- INIT_LIST_HEAD(&lport->esgl_pool.list_esgl_pool);
-
- lport->esgl_pool.esgl_pool_addr =
- vmalloc((size_t)((lport->esgl_pool.esgl_pool_count) * sizeof(struct unf_esgl)));
- if (!lport->esgl_pool.esgl_pool_addr) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "LPort(0x%x) cannot allocate ESGL Pool.", lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
- esgl = (struct unf_esgl *)lport->esgl_pool.esgl_pool_addr;
- memset(esgl, 0, ((lport->esgl_pool.esgl_pool_count) * sizeof(struct unf_esgl)));
-
- buf_total_size = (u32)(PAGE_SIZE * lport->esgl_pool.esgl_pool_count);
-
- lport->esgl_pool.esgl_buff_list.buf_size =
- buf_total_size > BUF_LIST_PAGE_SIZE ? BUF_LIST_PAGE_SIZE : buf_total_size;
- buf_cnt_perhugebuf = lport->esgl_pool.esgl_buff_list.buf_size / PAGE_SIZE;
- buf_num = lport->esgl_pool.esgl_pool_count % buf_cnt_perhugebuf
- ? lport->esgl_pool.esgl_pool_count / buf_cnt_perhugebuf + 1
- : lport->esgl_pool.esgl_pool_count / buf_cnt_perhugebuf;
- lport->esgl_pool.esgl_buff_list.buflist =
- (struct buff_list *)kmalloc(buf_num * sizeof(struct buff_list), GFP_KERNEL);
- lport->esgl_pool.esgl_buff_list.buf_num = buf_num;
-
- if (!lport->esgl_pool.esgl_buff_list.buflist) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Allocate Esgl pool buf list failed out of memory");
- goto free_buff;
- }
- memset(lport->esgl_pool.esgl_buff_list.buflist, 0, buf_num * sizeof(struct buff_list));
-
- for (alloc_idx = 0; alloc_idx < buf_num; alloc_idx++) {
- lport->esgl_pool.esgl_buff_list.buflist[alloc_idx]
- .vaddr = dma_alloc_coherent(&lport->low_level_func.dev->dev,
- lport->esgl_pool.esgl_buff_list.buf_size,
- &lport->esgl_pool.esgl_buff_list.buflist[alloc_idx].paddr, GFP_KERNEL);
- if (!lport->esgl_pool.esgl_buff_list.buflist[alloc_idx].vaddr)
- goto free_buff;
- memset(lport->esgl_pool.esgl_buff_list.buflist[alloc_idx].vaddr, 0,
- lport->esgl_pool.esgl_buff_list.buf_size);
- }
-
- /* allocates the Esgl page, and the DMA uses the */
- for (index = 0; index < lport->esgl_pool.esgl_pool_count; index++) {
- if (index != 0 && !(index % buf_cnt_perhugebuf))
- curbuf_idx++;
- curbuf_offset = (u32)(PAGE_SIZE * (index % buf_cnt_perhugebuf));
- esgl->page.page_address =
- (u64)lport->esgl_pool.esgl_buff_list.buflist[curbuf_idx].vaddr + curbuf_offset;
- esgl->page.page_size = PAGE_SIZE;
- esgl->page.esgl_phy_addr =
- lport->esgl_pool.esgl_buff_list.buflist[curbuf_idx].paddr + curbuf_offset;
- list_add_tail(&esgl->entry_esgl, &lport->esgl_pool.list_esgl_pool);
- esgl++;
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[EVENT]Allocate bufnum:%u,buf_total_size:%u", buf_num, buf_total_size);
-
- return ret;
-free_buff:
- unf_free_all_esgl_pages(lport);
- vfree(lport->esgl_pool.esgl_pool_addr);
-
- return UNF_RETURN_ERROR;
-}
-
-static void unf_free_esgl_pool(struct unf_lport *lport)
-{
- FC_CHECK_RETURN_VOID(lport);
-
- unf_free_all_esgl_pages(lport);
- lport->esgl_pool.esgl_pool_count = 0;
-
- if (lport->esgl_pool.esgl_pool_addr) {
- vfree(lport->esgl_pool.esgl_pool_addr);
- lport->esgl_pool.esgl_pool_addr = NULL;
- }
-
- lport->destroy_step = UNF_LPORT_DESTROY_STEP_5_DESTROY_ESGL_POOL;
-}
-
-struct unf_lport *unf_find_lport_by_port_id(u32 port_id)
-{
- struct unf_lport *unf_lport = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flags = 0;
- u32 portid = port_id & (~PORTID_VPINDEX_MASK);
- u16 vport_index;
- spinlock_t *lport_list_lock = NULL;
-
- lport_list_lock = &global_lport_mgr.global_lport_list_lock;
- vport_index = (port_id & PORTID_VPINDEX_MASK) >> PORTID_VPINDEX_SHIT;
- spin_lock_irqsave(lport_list_lock, flags);
-
- list_for_each_safe(node, next_node, &global_lport_mgr.lport_list_head) {
- unf_lport = list_entry(node, struct unf_lport, entry_lport);
- if (unf_lport->port_id == portid && !unf_lport->port_removing) {
- spin_unlock_irqrestore(lport_list_lock, flags);
-
- return unf_cm_lookup_vport_by_vp_index(unf_lport, vport_index);
- }
- }
-
- list_for_each_safe(node, next_node, &global_lport_mgr.intergrad_head) {
- unf_lport = list_entry(node, struct unf_lport, entry_lport);
- if (unf_lport->port_id == portid && !unf_lport->port_removing) {
- spin_unlock_irqrestore(lport_list_lock, flags);
-
- return unf_cm_lookup_vport_by_vp_index(unf_lport, vport_index);
- }
- }
-
- spin_unlock_irqrestore(lport_list_lock, flags);
-
- return NULL;
-}
-
-u32 unf_is_vport_valid(struct unf_lport *lport, struct unf_lport *vport)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_vport_pool *vport_pool = NULL;
- struct unf_lport *unf_vport = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flag = 0;
- spinlock_t *vport_pool_lock = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(vport, UNF_RETURN_ERROR);
-
- unf_lport = lport;
- vport_pool = unf_lport->vport_pool;
- if (unlikely(!vport_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%x) vport pool is NULL", unf_lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- vport_pool_lock = &vport_pool->vport_pool_lock;
- spin_lock_irqsave(vport_pool_lock, flag);
- list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
- unf_vport = list_entry(node, struct unf_lport, entry_vport);
-
- if (unf_vport == vport && !unf_vport->port_removing) {
- spin_unlock_irqrestore(vport_pool_lock, flag);
-
- return RETURN_OK;
- }
- }
-
- list_for_each_safe(node, next_node, &unf_lport->list_intergrad_vports) {
- unf_vport = list_entry(node, struct unf_lport, entry_vport);
-
- if (unf_vport == vport && !unf_vport->port_removing) {
- spin_unlock_irqrestore(vport_pool_lock, flag);
-
- return RETURN_OK;
- }
- }
- spin_unlock_irqrestore(vport_pool_lock, flag);
-
- return UNF_RETURN_ERROR;
-}
-
-u32 unf_is_lport_valid(struct unf_lport *lport)
-{
- struct unf_lport *unf_lport = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flags = 0;
- spinlock_t *lport_list_lock = NULL;
-
- lport_list_lock = &global_lport_mgr.global_lport_list_lock;
- spin_lock_irqsave(lport_list_lock, flags);
-
- list_for_each_safe(node, next_node, &global_lport_mgr.lport_list_head) {
- unf_lport = list_entry(node, struct unf_lport, entry_lport);
-
- if (unf_lport == lport && !unf_lport->port_removing) {
- spin_unlock_irqrestore(lport_list_lock, flags);
-
- return RETURN_OK;
- }
-
- if (unf_is_vport_valid(unf_lport, lport) == RETURN_OK) {
- spin_unlock_irqrestore(lport_list_lock, flags);
-
- return RETURN_OK;
- }
- }
-
- list_for_each_safe(node, next_node, &global_lport_mgr.intergrad_head) {
- unf_lport = list_entry(node, struct unf_lport, entry_lport);
-
- if (unf_lport == lport && !unf_lport->port_removing) {
- spin_unlock_irqrestore(lport_list_lock, flags);
-
- return RETURN_OK;
- }
-
- if (unf_is_vport_valid(unf_lport, lport) == RETURN_OK) {
- spin_unlock_irqrestore(lport_list_lock, flags);
-
- return RETURN_OK;
- }
- }
-
- list_for_each_safe(node, next_node, &global_lport_mgr.destroy_list_head) {
- unf_lport = list_entry(node, struct unf_lport, entry_lport);
-
- if (unf_lport == lport && !unf_lport->port_removing) {
- spin_unlock_irqrestore(lport_list_lock, flags);
-
- return RETURN_OK;
- }
-
- if (unf_is_vport_valid(unf_lport, lport) == RETURN_OK) {
- spin_unlock_irqrestore(lport_list_lock, flags);
-
- return RETURN_OK;
- }
- }
-
- spin_unlock_irqrestore(lport_list_lock, flags);
-
- return UNF_RETURN_ERROR;
-}
-
-static void unf_clean_linkdown_io(struct unf_lport *lport, bool clean_flag)
-{
- /* Clean L_Port/V_Port Link Down I/O: Set Abort Tag */
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(lport->xchg_mgr_temp.unf_xchg_abort_all_io);
-
- lport->xchg_mgr_temp.unf_xchg_abort_all_io(lport, UNF_XCHG_TYPE_INI, clean_flag);
- lport->xchg_mgr_temp.unf_xchg_abort_all_io(lport, UNF_XCHG_TYPE_SFS, clean_flag);
-}
-
-u32 unf_fc_port_link_event(void *lport, u32 events, void *input)
-{
- struct unf_lport *unf_lport = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u32 index = 0;
-
- if (unlikely(!lport))
- return UNF_RETURN_ERROR;
- unf_lport = (struct unf_lport *)lport;
-
- ret = unf_lport_ref_inc(unf_lport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) is removing and do nothing",
- unf_lport->port_id);
-
- return RETURN_OK;
- }
-
- /* process port event */
- while (index < (sizeof(g_lport_action) / sizeof(struct unf_port_action))) {
- if (g_lport_action[index].action == events) {
- ret = g_lport_action[index].unf_action(unf_lport, input);
-
- unf_lport_ref_dec_to_destroy(unf_lport);
-
- return ret;
- }
- index++;
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) receive unknown event(0x%x)",
- unf_lport->port_id, events);
-
- unf_lport_ref_dec_to_destroy(unf_lport);
-
- return ret;
-}
-
-void unf_port_mgmt_init(void)
-{
- memset(&global_lport_mgr, 0, sizeof(struct unf_global_lport));
-
- INIT_LIST_HEAD(&global_lport_mgr.lport_list_head);
-
- INIT_LIST_HEAD(&global_lport_mgr.intergrad_head);
-
- INIT_LIST_HEAD(&global_lport_mgr.destroy_list_head);
-
- INIT_LIST_HEAD(&global_lport_mgr.dirty_list_head);
-
- spin_lock_init(&global_lport_mgr.global_lport_list_lock);
-
- UNF_SET_NOMAL_MODE(global_lport_mgr.dft_mode);
-
- global_lport_mgr.start_work = true;
-}
-
-void unf_port_mgmt_deinit(void)
-{
- if (global_lport_mgr.lport_sum != 0) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]There are %u port pool memory giveaway",
- global_lport_mgr.lport_sum);
- }
-
- memset(&global_lport_mgr, 0, sizeof(struct unf_global_lport));
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Common port manager exit succeed");
-}
-
-static void unf_port_register(struct unf_lport *lport)
-{
- ulong flags = 0;
-
- FC_CHECK_RETURN_VOID(lport);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "Register LPort(0x%p), port ID(0x%x).", lport, lport->port_id);
-
- /* Add to the global management linked list header */
- spin_lock_irqsave(&global_lport_mgr.global_lport_list_lock, flags);
- list_add_tail(&lport->entry_lport, &global_lport_mgr.lport_list_head);
- global_lport_mgr.lport_sum++;
- spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
-}
-
-static void unf_port_unregister(struct unf_lport *lport)
-{
- ulong flags = 0;
-
- FC_CHECK_RETURN_VOID(lport);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "Unregister LPort(0x%p), port ID(0x%x).", lport, lport->port_id);
-
- /* Remove from the global management linked list header */
- spin_lock_irqsave(&global_lport_mgr.global_lport_list_lock, flags);
- list_del(&lport->entry_lport);
- global_lport_mgr.lport_sum--;
- spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
-}
-
-int unf_port_start_work(struct unf_lport *lport)
-{
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- spin_lock_irqsave(&lport->lport_state_lock, flag);
- if (lport->start_work_state != UNF_START_WORK_STOP) {
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- return RETURN_OK;
- }
- lport->start_work_state = UNF_START_WORK_COMPLETE;
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- /* switch sfp to start work */
- (void)unf_port_switch(lport, true);
-
- return RETURN_OK;
-}
-
-static u32
-unf_lport_init_lw_funop(struct unf_lport *lport,
- struct unf_low_level_functioon_op *low_level_op)
-{
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(low_level_op, UNF_RETURN_ERROR);
-
- lport->port_id = low_level_op->lport_cfg_items.port_id;
- lport->port_name = low_level_op->sys_port_name;
- lport->node_name = low_level_op->sys_node_name;
- lport->options = low_level_op->lport_cfg_items.port_mode;
- lport->act_topo = UNF_ACT_TOP_UNKNOWN;
- lport->max_ssq_num = low_level_op->support_max_ssq_num;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "Port(0x%x) .", lport->port_id);
-
- memcpy(&lport->low_level_func, low_level_op, sizeof(struct unf_low_level_functioon_op));
-
- return RETURN_OK;
-}
-
-void unf_lport_release_lw_funop(struct unf_lport *lport)
-{
- FC_CHECK_RETURN_VOID(lport);
-
- memset(&lport->low_level_func, 0, sizeof(struct unf_low_level_functioon_op));
-
- lport->destroy_step = UNF_LPORT_DESTROY_STEP_13_DESTROY_LW_INTERFACE;
-}
-
-struct unf_lport *unf_find_lport_by_scsi_hostid(u32 scsi_host_id)
-{
- struct list_head *node = NULL, *next_node = NULL;
- struct list_head *vp_node = NULL, *next_vp_node = NULL;
- struct unf_lport *unf_lport = NULL;
- struct unf_lport *unf_vport = NULL;
- ulong flags = 0;
- ulong pool_flags = 0;
- spinlock_t *vp_pool_lock = NULL;
- spinlock_t *lport_list_lock = &global_lport_mgr.global_lport_list_lock;
-
- spin_lock_irqsave(lport_list_lock, flags);
- list_for_each_safe(node, next_node, &global_lport_mgr.lport_list_head) {
- unf_lport = list_entry(node, struct unf_lport, entry_lport);
- vp_pool_lock = &unf_lport->vport_pool->vport_pool_lock;
- if (scsi_host_id == UNF_GET_SCSI_HOST_ID(unf_lport->host_info.host)) {
- spin_unlock_irqrestore(lport_list_lock, flags);
-
- return unf_lport;
- }
-
- /* support NPIV */
- if (unf_lport->vport_pool) {
- spin_lock_irqsave(vp_pool_lock, pool_flags);
- list_for_each_safe(vp_node, next_vp_node, &unf_lport->list_vports_head) {
- unf_vport = list_entry(vp_node, struct unf_lport, entry_vport);
-
- if (scsi_host_id ==
- UNF_GET_SCSI_HOST_ID(unf_vport->host_info.host)) {
- spin_unlock_irqrestore(vp_pool_lock, pool_flags);
- spin_unlock_irqrestore(lport_list_lock, flags);
-
- return unf_vport;
- }
- }
- spin_unlock_irqrestore(vp_pool_lock, pool_flags);
- }
- }
-
- list_for_each_safe(node, next_node, &global_lport_mgr.intergrad_head) {
- unf_lport = list_entry(node, struct unf_lport, entry_lport);
- vp_pool_lock = &unf_lport->vport_pool->vport_pool_lock;
- if (scsi_host_id ==
- UNF_GET_SCSI_HOST_ID(unf_lport->host_info.host)) {
- spin_unlock_irqrestore(lport_list_lock, flags);
-
- return unf_lport;
- }
-
- /* support NPIV */
- if (unf_lport->vport_pool) {
- spin_lock_irqsave(vp_pool_lock, pool_flags);
- list_for_each_safe(vp_node, next_vp_node, &unf_lport->list_vports_head) {
- unf_vport = list_entry(vp_node, struct unf_lport, entry_vport);
-
- if (scsi_host_id ==
- UNF_GET_SCSI_HOST_ID(unf_vport->host_info.host)) {
- spin_unlock_irqrestore(vp_pool_lock, pool_flags);
- spin_unlock_irqrestore(lport_list_lock, flags);
-
- return unf_vport;
- }
- }
- spin_unlock_irqrestore(vp_pool_lock, pool_flags);
- }
- }
- spin_unlock_irqrestore(lport_list_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Can not find port by scsi_host_id(0x%x), may be removing",
- scsi_host_id);
-
- return NULL;
-}
-
-u32 unf_init_scsi_id_table(struct unf_lport *lport)
-{
- struct unf_rport_scsi_id_image *rport_scsi_id_image = NULL;
- struct unf_wwpn_rport_info *wwpn_port_info = NULL;
- u32 idx;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- rport_scsi_id_image = &lport->rport_scsi_table;
- rport_scsi_id_image->max_scsi_id = UNF_MAX_SCSI_ID;
-
-	/* If the maximum number of remote connections supported by the
-	 * L_Port is zero, treat it as an error
-	 */
- if (rport_scsi_id_image->max_scsi_id == 0) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x), supported maximum login is zero.", lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- rport_scsi_id_image->wwn_rport_info_table =
- vmalloc(rport_scsi_id_image->max_scsi_id * sizeof(struct unf_wwpn_rport_info));
- if (!rport_scsi_id_image->wwn_rport_info_table) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) can't allocate SCSI ID Table(0x%x).",
- lport->port_id, rport_scsi_id_image->max_scsi_id);
-
- return UNF_RETURN_ERROR;
- }
- memset(rport_scsi_id_image->wwn_rport_info_table, 0,
- rport_scsi_id_image->max_scsi_id * sizeof(struct unf_wwpn_rport_info));
-
- wwpn_port_info = rport_scsi_id_image->wwn_rport_info_table;
-
- for (idx = 0; idx < rport_scsi_id_image->max_scsi_id; idx++) {
- INIT_DELAYED_WORK(&wwpn_port_info->loss_tmo_work, unf_sesion_loss_timeout);
- INIT_LIST_HEAD(&wwpn_port_info->fc_lun_list);
- wwpn_port_info->lport = lport;
- wwpn_port_info->target_id = INVALID_VALUE32;
- wwpn_port_info++;
- }
-
- spin_lock_init(&rport_scsi_id_image->scsi_image_table_lock);
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[info]Port(0x%x) supported maximum login is %u.",
- lport->port_id, rport_scsi_id_image->max_scsi_id);
-
- return RETURN_OK;
-}
-
-void unf_destroy_scsi_id_table(struct unf_lport *lport)
-{
- struct unf_rport_scsi_id_image *rport_scsi_id_image = NULL;
- struct unf_wwpn_rport_info *wwn_rport_info = NULL;
- u32 i = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VOID(lport);
-
- rport_scsi_id_image = &lport->rport_scsi_table;
- if (rport_scsi_id_image->wwn_rport_info_table) {
- for (i = 0; i < UNF_MAX_SCSI_ID; i++) {
- wwn_rport_info = &rport_scsi_id_image->wwn_rport_info_table[i];
- UNF_DELAYED_WORK_SYNC(ret, (lport->port_id),
- (&wwn_rport_info->loss_tmo_work),
- "loss tmo Timer work");
- if (wwn_rport_info->lun_qos_level) {
- vfree(wwn_rport_info->lun_qos_level);
- wwn_rport_info->lun_qos_level = NULL;
- }
- }
-
- if (ret) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "Port(0x%x) cancel loss tmo work success", lport->port_id);
- }
- vfree(rport_scsi_id_image->wwn_rport_info_table);
- rport_scsi_id_image->wwn_rport_info_table = NULL;
- }
-
- rport_scsi_id_image->max_scsi_id = 0;
- lport->destroy_step = UNF_LPORT_DESTROY_STEP_10_DESTROY_SCSI_TABLE;
-}
-
-static u32 unf_lport_init(struct unf_lport *lport, void *private_data,
- struct unf_low_level_functioon_op *low_level_op)
-{
- u32 ret = RETURN_OK;
- char work_queue_name[13];
-
- unf_init_port_parms(lport);
-
- /* Associating LPort with FCPort */
- lport->fc_port = private_data;
-
-	/* VpIndx=0 is reserved for the L_Port, and root_lport points to itself */
- lport->vp_index = 0;
- lport->root_lport = lport;
- lport->chip_info = NULL;
-
- /* Initialize the units related to L_Port and lw func */
- ret = unf_lport_init_lw_funop(lport, low_level_op);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "LPort(0x%x) initialize lowlevel function unsuccessful.",
- lport->port_id);
-
- return ret;
- }
-
- /* Init Linkevent workqueue */
- snprintf(work_queue_name, sizeof(work_queue_name), "%x_lkq", lport->port_id);
-
- lport->link_event_wq = create_singlethread_workqueue(work_queue_name);
- if (!lport->link_event_wq) {
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
-			     "[err]Port(0x%x) create link event work queue failed", lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
- snprintf(work_queue_name, sizeof(work_queue_name), "%x_xchgwq", lport->port_id);
- lport->xchg_wq = create_workqueue(work_queue_name);
- if (!lport->xchg_wq) {
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
-			     "[err]Port(0x%x) create Exchg work queue failed",
- lport->port_id);
- flush_workqueue(lport->link_event_wq);
- destroy_workqueue(lport->link_event_wq);
- lport->link_event_wq = NULL;
- return UNF_RETURN_ERROR;
- }
-	/* SCSI table (R_Port) required for INI initialization:
-	 * initialize the SCSI ID table that manages the mapping between
-	 * SCSI ID, WWN and R_Port.
-	 */
-
- ret = unf_init_scsi_id_table(lport);
- if (ret != RETURN_OK) {
- flush_workqueue(lport->link_event_wq);
- destroy_workqueue(lport->link_event_wq);
- lport->link_event_wq = NULL;
-
- flush_workqueue(lport->xchg_wq);
- destroy_workqueue(lport->xchg_wq);
- lport->xchg_wq = NULL;
- return ret;
- }
-
- /* Initialize the EXCH resource */
- ret = unf_alloc_xchg_resource(lport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "LPort(0x%x) can't allocate exchange resource.", lport->port_id);
-
- flush_workqueue(lport->link_event_wq);
- destroy_workqueue(lport->link_event_wq);
- lport->link_event_wq = NULL;
-
- flush_workqueue(lport->xchg_wq);
- destroy_workqueue(lport->xchg_wq);
- lport->xchg_wq = NULL;
- unf_destroy_scsi_id_table(lport);
-
- return ret;
- }
-
- /* Initialize the ESGL resource pool used by Lport */
- ret = unf_init_esgl_pool(lport);
- if (ret != RETURN_OK) {
- flush_workqueue(lport->link_event_wq);
- destroy_workqueue(lport->link_event_wq);
- lport->link_event_wq = NULL;
-
- flush_workqueue(lport->xchg_wq);
- destroy_workqueue(lport->xchg_wq);
- lport->xchg_wq = NULL;
- unf_free_all_xchg_mgr(lport);
- unf_destroy_scsi_id_table(lport);
-
- return ret;
- }
- /* Initialize the disc manager under Lport */
- ret = unf_init_disc_mgr(lport);
- if (ret != RETURN_OK) {
- flush_workqueue(lport->link_event_wq);
- destroy_workqueue(lport->link_event_wq);
- lport->link_event_wq = NULL;
-
- flush_workqueue(lport->xchg_wq);
- destroy_workqueue(lport->xchg_wq);
- lport->xchg_wq = NULL;
- unf_free_esgl_pool(lport);
- unf_free_all_xchg_mgr(lport);
- unf_destroy_scsi_id_table(lport);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "LPort(0x%x) initialize discover manager unsuccessful.",
- lport->port_id);
-
- return ret;
- }
-
-	/* Initialize the vport manager template */
- ret = unf_init_vport_mgr_temp(lport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
-			     "LPort(0x%x) initialize vport manager unsuccessful.", lport->port_id);
-
- goto RELEASE_LPORT;
- }
-
- /* Initialize the EXCH manager */
- ret = unf_init_xchg_mgr_temp(lport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "LPort(0x%x) initialize exchange manager unsuccessful.",
- lport->port_id);
- goto RELEASE_LPORT;
- }
- /* Initialize the resources required by the event processing center */
- ret = unf_init_event_center(lport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "LPort(0x%x) initialize event center unsuccessful.", lport->port_id);
- goto RELEASE_LPORT;
- }
-	/* Set the initial state of the L_Port */
- unf_set_lport_state(lport, UNF_LPORT_ST_INITIAL);
-
-	/* Initialize the L_Port route */
- ret = unf_init_lport_route(lport);
- if (ret != RETURN_OK) {
- flush_workqueue(lport->link_event_wq);
- destroy_workqueue(lport->link_event_wq);
- lport->link_event_wq = NULL;
-
- flush_workqueue(lport->xchg_wq);
- destroy_workqueue(lport->xchg_wq);
- lport->xchg_wq = NULL;
- (void)unf_event_center_destroy(lport);
- unf_disc_mgr_destroy(lport);
- unf_free_esgl_pool(lport);
- unf_free_all_xchg_mgr(lport);
- unf_destroy_scsi_id_table(lport);
-
- return ret;
- }
-	/* Initialization step for NPIV support: set up the vport pool */
- ret = unf_init_vport_pool(lport);
- if (ret != RETURN_OK) {
- flush_workqueue(lport->link_event_wq);
- destroy_workqueue(lport->link_event_wq);
- lport->link_event_wq = NULL;
-
- flush_workqueue(lport->xchg_wq);
- destroy_workqueue(lport->xchg_wq);
- lport->xchg_wq = NULL;
-
- unf_destroy_lport_route(lport);
- (void)unf_event_center_destroy(lport);
- unf_disc_mgr_destroy(lport);
- unf_free_esgl_pool(lport);
- unf_free_all_xchg_mgr(lport);
- unf_destroy_scsi_id_table(lport);
-
- return ret;
- }
-
- /* qualifier rport callback */
- lport->unf_qualify_rport = unf_rport_set_qualifier_key_reuse;
- lport->unf_tmf_abnormal_recovery = unf_tmf_timeout_recovery_special;
- return RETURN_OK;
-RELEASE_LPORT:
- flush_workqueue(lport->link_event_wq);
- destroy_workqueue(lport->link_event_wq);
- lport->link_event_wq = NULL;
-
- flush_workqueue(lport->xchg_wq);
- destroy_workqueue(lport->xchg_wq);
- lport->xchg_wq = NULL;
-
- unf_disc_mgr_destroy(lport);
- unf_free_esgl_pool(lport);
- unf_free_all_xchg_mgr(lport);
- unf_destroy_scsi_id_table(lport);
-
- return ret;
-}
-
-void unf_free_qos_info(struct unf_lport *lport)
-{
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- struct unf_qos_info *qos_info = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
-
- spin_lock_irqsave(&lport->qos_mgr_lock, flag);
- list_for_each_safe(node, next_node, &lport->list_qos_head) {
- qos_info = (struct unf_qos_info *)list_entry(node,
- struct unf_qos_info, entry_qos_info);
- list_del_init(&qos_info->entry_qos_info);
- kfree(qos_info);
- }
-
- spin_unlock_irqrestore(&lport->qos_mgr_lock, flag);
-}
-
-u32 unf_lport_deinit(struct unf_lport *lport)
-{
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- unf_free_qos_info(lport);
-
- unf_unregister_scsi_host(lport);
-
-	/* If the card is unloaded normally, the thread has already been
-	 * stopped once; stopping it again causes no problem.
-	 */
- unf_destroy_lport_route(lport);
-
-	/* Decrease the reference count of the card event; the last port
-	 * destroys the card thread
-	 */
- unf_destroy_card_thread(lport);
- flush_workqueue(lport->link_event_wq);
- destroy_workqueue(lport->link_event_wq);
- lport->link_event_wq = NULL;
-
- (void)unf_event_center_destroy(lport);
- unf_free_vport_pool(lport);
- unf_xchg_mgr_destroy(lport);
-
- unf_free_esgl_pool(lport);
-
-	/* Reliability review: disc should be released after xchg.
-	 * Destroy the disc manager
-	 */
- unf_disc_mgr_destroy(lport);
-
- unf_release_xchg_mgr_temp(lport);
-
- unf_release_vport_mgr_temp(lport);
-
- unf_destroy_scsi_id_table(lport);
-
- flush_workqueue(lport->xchg_wq);
- destroy_workqueue(lport->xchg_wq);
- lport->xchg_wq = NULL;
-
- /* Releasing the lw Interface Template */
- unf_lport_release_lw_funop(lport);
- lport->fc_port = NULL;
-
- return RETURN_OK;
-}
-
-static int unf_card_event_process(void *arg)
-{
- struct list_head *node = NULL;
- struct unf_cm_event_report *event_node = NULL;
- ulong flags = 0;
- struct unf_chip_manage_info *chip_info = (struct unf_chip_manage_info *)arg;
-
- set_user_nice(current, UNF_OS_THRD_PRI_LOW);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "Slot(%u) chip(0x%x) enter event thread.",
- chip_info->slot_id, chip_info->chip_id);
-
- while (!kthread_should_stop()) {
- if (chip_info->thread_exit)
- break;
-
- spin_lock_irqsave(&chip_info->chip_event_list_lock, flags);
- if (list_empty(&chip_info->list_head)) {
- spin_unlock_irqrestore(&chip_info->chip_event_list_lock, flags);
-
- set_current_state(TASK_INTERRUPTIBLE);
- schedule_timeout((long)msecs_to_jiffies(UNF_S_TO_MS));
- } else {
- node = UNF_OS_LIST_NEXT(&chip_info->list_head);
- list_del_init(node);
- chip_info->list_num--;
- event_node = list_entry(node, struct unf_cm_event_report, list_entry);
- spin_unlock_irqrestore(&chip_info->chip_event_list_lock, flags);
- unf_handle_event(event_node);
- }
- }
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
- "Slot(%u) chip(0x%x) exit event thread.",
- chip_info->slot_id, chip_info->chip_id);
-
- return RETURN_OK;
-}
-
-static void unf_destroy_card_thread(struct unf_lport *lport)
-{
- struct unf_event_mgr *event_mgr = NULL;
- struct unf_chip_manage_info *chip_info = NULL;
- struct list_head *list = NULL;
- struct list_head *list_tmp = NULL;
- struct unf_cm_event_report *event_node = NULL;
- ulong event_lock_flag = 0;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
-
-	/* If no chip event thread is attached, there is nothing to destroy */
- chip_info = lport->chip_info;
- if (!chip_info) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Port(0x%x) has no event thread.", lport->port_id);
- return;
- }
- event_mgr = &lport->event_mgr;
-
- spin_lock_irqsave(&chip_info->chip_event_list_lock, flag);
- if (!list_empty(&chip_info->list_head)) {
- list_for_each_safe(list, list_tmp, &chip_info->list_head) {
- event_node = list_entry(list, struct unf_cm_event_report, list_entry);
-
-			/* Release the events that belong to this L_Port */
- if (event_node->lport == lport) {
- list_del_init(&event_node->list_entry);
- if (event_node->event_asy_flag == UNF_EVENT_SYN) {
- event_node->result = UNF_RETURN_ERROR;
- complete(&event_node->event_comp);
- }
-
- spin_lock_irqsave(&event_mgr->port_event_lock, event_lock_flag);
- event_mgr->free_event_count++;
- list_add_tail(&event_node->list_entry, &event_mgr->list_free_event);
- spin_unlock_irqrestore(&event_mgr->port_event_lock,
- event_lock_flag);
- }
- }
- }
- spin_unlock_irqrestore(&chip_info->chip_event_list_lock, flag);
-
-	/* If the reference count drops to zero, no port uses the event
-	 * thread any more and its resources must be released
-	 */
- if (atomic_dec_and_test(&chip_info->ref_cnt)) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Port(0x%x) destroy slot(%u) chip(0x%x) event thread succeed.",
- lport->port_id, chip_info->slot_id, chip_info->chip_id);
- chip_info->thread_exit = true;
- wake_up_process(chip_info->thread);
- kthread_stop(chip_info->thread);
- chip_info->thread = NULL;
-
- spin_lock_irqsave(&card_thread_mgr.global_card_list_lock, flag);
- list_del_init(&chip_info->list_chip_thread_entry);
- card_thread_mgr.card_num--;
- spin_unlock_irqrestore(&card_thread_mgr.global_card_list_lock, flag);
-
- vfree(chip_info);
- }
-
- lport->chip_info = NULL;
-}
-
-static u32 unf_creat_card_thread(struct unf_lport *lport)
-{
- ulong flag = 0;
- struct unf_chip_manage_info *chip_manage_info = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- /* If the thread cannot be found, apply for a new thread. */
- chip_manage_info = (struct unf_chip_manage_info *)
- vmalloc(sizeof(struct unf_chip_manage_info));
- if (!chip_manage_info) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "Port(0x%x) cannot allocate thread memory.", lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
- memset(chip_manage_info, 0, sizeof(struct unf_chip_manage_info));
-
- memcpy(&chip_manage_info->chip_info, &lport->low_level_func.chip_info,
- sizeof(struct unf_chip_info));
- chip_manage_info->slot_id = UNF_GET_BOARD_TYPE_AND_SLOT_ID_BY_PORTID(lport->port_id);
- chip_manage_info->chip_id = lport->low_level_func.chip_id;
- chip_manage_info->list_num = 0;
- chip_manage_info->sfp_9545_fault = false;
- chip_manage_info->sfp_power_fault = false;
- atomic_set(&chip_manage_info->ref_cnt, 1);
- atomic_set(&chip_manage_info->card_loop_test_flag, false);
- spin_lock_init(&chip_manage_info->card_loop_back_state_lock);
- INIT_LIST_HEAD(&chip_manage_info->list_head);
- spin_lock_init(&chip_manage_info->chip_event_list_lock);
-
- chip_manage_info->thread_exit = false;
- chip_manage_info->thread = kthread_create(unf_card_event_process,
- chip_manage_info, "%x_et", lport->port_id);
-
- if (IS_ERR(chip_manage_info->thread) || !chip_manage_info->thread) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
-			     "Port(0x%x) create event thread(0x%p) unsuccessful.",
- lport->port_id, chip_manage_info->thread);
-
- vfree(chip_manage_info);
-
- return UNF_RETURN_ERROR;
- }
-
- lport->chip_info = chip_manage_info;
- wake_up_process(chip_manage_info->thread);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
-		     "Port(0x%x) create slot(%u) chip(0x%x) event thread succeed.",
- lport->port_id, chip_manage_info->slot_id,
- chip_manage_info->chip_id);
-
- spin_lock_irqsave(&card_thread_mgr.global_card_list_lock, flag);
- list_add_tail(&chip_manage_info->list_chip_thread_entry, &card_thread_mgr.card_list_head);
- card_thread_mgr.card_num++;
- spin_unlock_irqrestore(&card_thread_mgr.global_card_list_lock, flag);
-
- return RETURN_OK;
-}
-
-static u32 unf_find_card_thread(struct unf_lport *lport)
-{
- ulong flag = 0;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- struct unf_chip_manage_info *chip_info = NULL;
- u32 ret = UNF_RETURN_ERROR;
-
- spin_lock_irqsave(&card_thread_mgr.global_card_list_lock, flag);
- list_for_each_safe(node, next_node, &card_thread_mgr.card_list_head) {
- chip_info = list_entry(node, struct unf_chip_manage_info, list_chip_thread_entry);
-
- if (chip_info->chip_id == lport->low_level_func.chip_id &&
- chip_info->slot_id ==
- UNF_GET_BOARD_TYPE_AND_SLOT_ID_BY_PORTID(lport->port_id)) {
- atomic_inc(&chip_info->ref_cnt);
- lport->chip_info = chip_info;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Port(0x%x) find card(%u) chip(0x%x) event thread succeed.",
- lport->port_id, chip_info->slot_id, chip_info->chip_id);
-
- spin_unlock_irqrestore(&card_thread_mgr.global_card_list_lock, flag);
-
- return RETURN_OK;
- }
- }
- spin_unlock_irqrestore(&card_thread_mgr.global_card_list_lock, flag);
-
- ret = unf_creat_card_thread(lport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
-			     "LPort(0x%x) create event thread unsuccessful. Destroy LPort.",
- lport->port_id);
-
- return UNF_RETURN_ERROR;
- } else {
- return RETURN_OK;
- }
-}
-
-void *unf_lport_create_and_init(void *private_data, struct unf_low_level_functioon_op *low_level_op)
-{
- struct unf_lport *unf_lport = NULL;
- u32 ret = UNF_RETURN_ERROR;
-
- if (!private_data) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Private Data is NULL");
-
- return NULL;
- }
- if (!low_level_op) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "LowLevel port(0x%p) function is NULL", private_data);
-
- return NULL;
- }
-
- /* 1. vmalloc & Memset L_Port */
- unf_lport = vmalloc(sizeof(struct unf_lport));
- if (!unf_lport) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "Alloc LPort memory failed.");
-
- return NULL;
- }
- memset(unf_lport, 0, sizeof(struct unf_lport));
-
- /* 2. L_Port Init */
- if (unf_lport_init(unf_lport, private_data, low_level_op) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "LPort initialize unsuccessful.");
-
- vfree(unf_lport);
-
- return NULL;
- }
-
- /* 4. Get or Create Chip Thread
- * Chip_ID & Slot_ID
- */
- ret = unf_find_card_thread(unf_lport);
- if (ret != RETURN_OK) {
- (void)unf_lport_deinit(unf_lport);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "LPort(0x%x) Find Chip thread unsuccessful. Destroy LPort.",
- unf_lport->port_id);
-
- vfree(unf_lport);
- return NULL;
- }
-
-	/* 5. Register in the global port management linked list */
- unf_port_register(unf_lport);
- /* update WWN */
- if (unf_build_lport_wwn(unf_lport) != RETURN_OK) {
- unf_port_unregister(unf_lport);
- (void)unf_lport_deinit(unf_lport);
- vfree(unf_lport);
- return NULL;
- }
-
- // unf_init_link_lose_tmo(unf_lport);//TO DO
-
- /* initialize Scsi Host */
- if (unf_register_scsi_host(unf_lport) != RETURN_OK) {
- unf_port_unregister(unf_lport);
- (void)unf_lport_deinit(unf_lport);
- vfree(unf_lport);
- return NULL;
- }
- /* 7. Here, start work now */
- if (global_lport_mgr.start_work) {
- if (unf_port_start_work(unf_lport) != RETURN_OK) {
- unf_port_unregister(unf_lport);
-
- (void)unf_lport_deinit(unf_lport);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Port(0x%x) start work failed", unf_lport->port_id);
- vfree(unf_lport);
- return NULL;
- }
- }
-
- return unf_lport;
-}
-
-static int unf_lport_destroy(void *lport, void *arg_out)
-{
- struct unf_lport *unf_lport = NULL;
- ulong flags = 0;
-
- if (!lport) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR, "LPort is NULL.");
-
- return UNF_RETURN_ERROR;
- }
-
- unf_lport = (struct unf_lport *)lport;
-
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
- "Destroy LPort(0x%p), ID(0x%x).", unf_lport, unf_lport->port_id);
-	/* NPIV: ensure that all Vports are deleted */
- unf_destroy_all_vports(unf_lport);
- unf_lport->destroy_step = UNF_LPORT_DESTROY_STEP_1_REPORT_PORT_OUT;
-
- (void)unf_lport_deinit(lport);
-
-	/* Remove the port from the global linked list; the next step is to
-	 * release the memory
-	 */
- spin_lock_irqsave(&global_lport_mgr.global_lport_list_lock, flags);
- list_del(&unf_lport->entry_lport);
-
-	/* If the port still has dirty memory, move it to the linked list
-	 * of dirty ports
-	 */
- if (unf_lport->dirty_flag)
- list_add_tail(&unf_lport->entry_lport, &global_lport_mgr.dirty_list_head);
- spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
-
- if (unf_lport->lport_free_completion) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Complete LPort(0x%p), port ID(0x%x)'s Free Completion.",
- unf_lport, unf_lport->port_id);
- complete(unf_lport->lport_free_completion);
- } else {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "LPort(0x%p), port ID(0x%x)'s Free Completion is NULL.",
- unf_lport, unf_lport->port_id);
- dump_stack();
- }
-
- return RETURN_OK;
-}
-
-static int unf_port_switch(struct unf_lport *lport, bool switch_flag)
-{
- struct unf_lport *unf_lport = lport;
- int ret = UNF_RETURN_ERROR;
- bool flag = false;
-
- FC_CHECK_RETURN_VALUE(unf_lport, UNF_RETURN_ERROR);
-
- if (!unf_lport->low_level_func.port_mgr_op.ll_port_config_set) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_WARN,
- "[warn]Port(0x%x)'s config(switch) function is NULL",
- unf_lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- flag = switch_flag ? true : false;
-
- ret = (int)unf_lport->low_level_func.port_mgr_op.ll_port_config_set(unf_lport->fc_port,
- UNF_PORT_CFG_SET_PORT_SWITCH, (void *)&flag);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
- UNF_WARN, "[warn]Port(0x%x) switch %s failed",
- unf_lport->port_id, switch_flag ? "On" : "Off");
-
- return UNF_RETURN_ERROR;
- }
-
- unf_lport->switch_state = (bool)flag;
-
- return RETURN_OK;
-}
-
-static int unf_send_event(u32 port_id, u32 syn_flag, void *argc_in, void *argc_out,
- int (*func)(void *argc_in, void *argc_out))
-{
- struct unf_lport *lport = NULL;
- struct unf_cm_event_report *event = NULL;
- int ret = 0;
-
- lport = unf_find_lport_by_port_id(port_id);
- if (!lport) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
- UNF_INFO, "Cannot find LPort(0x%x).", port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- if (unf_lport_ref_inc(lport) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "LPort(0x%x) is removing, no need process.",
- lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
- if (unlikely(!lport->event_mgr.unf_get_free_event_func ||
- !lport->event_mgr.unf_post_event_func ||
- !lport->event_mgr.unf_release_event)) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
- UNF_MAJOR, "Event function is NULL.");
-
- unf_lport_ref_dec_to_destroy(lport);
-
- return UNF_RETURN_ERROR;
- }
-
- if (lport->port_removing) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "LPort(0x%x) is removing, no need process.",
- lport->port_id);
-
- unf_lport_ref_dec_to_destroy(lport);
-
- return UNF_RETURN_ERROR;
- }
-
- event = lport->event_mgr.unf_get_free_event_func((void *)lport);
- if (!event) {
- unf_lport_ref_dec_to_destroy(lport);
-
- return UNF_RETURN_ERROR;
- }
-
- init_completion(&event->event_comp);
- event->lport = lport;
- event->event_asy_flag = syn_flag;
- event->unf_event_task = func;
- event->para_in = argc_in;
- event->para_out = argc_out;
- lport->event_mgr.unf_post_event_func(lport, event);
-
- if (event->event_asy_flag) {
-		/* Wait for the handler to complete; otherwise the event
-		 * list may become corrupted.
-		 */
- wait_for_completion(&event->event_comp);
- ret = (int)event->result;
- lport->event_mgr.unf_release_event(lport, event);
- } else {
- ret = RETURN_OK;
- }
-
- unf_lport_ref_dec_to_destroy(lport);
- return ret;
-}
-
-static int unf_reset_port(void *arg_in, void *arg_out)
-{
- struct unf_reset_port_argin *input = (struct unf_reset_port_argin *)arg_in;
- struct unf_lport *lport = NULL;
- u32 ret = UNF_RETURN_ERROR;
- enum unf_port_config_state port_state = UNF_PORT_CONFIG_STATE_RESET;
-
- FC_CHECK_RETURN_VALUE(input, UNF_RETURN_ERROR);
-
- lport = unf_find_lport_by_port_id(input->port_id);
- if (!lport) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
- UNF_MAJOR, "Not find LPort(0x%x).",
- input->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- /* reset port */
- if (!lport->low_level_func.port_mgr_op.ll_port_config_set) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_MAJOR,
- "Port(0x%x)'s corresponding function is NULL.", lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- lport->act_topo = UNF_ACT_TOP_UNKNOWN;
- lport->speed = UNF_PORT_SPEED_UNKNOWN;
- lport->fabric_node_name = 0;
-
- ret = lport->low_level_func.port_mgr_op.ll_port_config_set(lport->fc_port,
- UNF_PORT_CFG_SET_PORT_STATE,
- (void *)&port_state);
-
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
- UNF_MAJOR, "Reset port(0x%x) unsuccessful.",
- lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- return RETURN_OK;
-}
-
-int unf_cm_reset_port(u32 port_id)
-{
- int ret = UNF_RETURN_ERROR;
-
- ret = unf_send_event(port_id, UNF_EVENT_SYN, (void *)&port_id,
- (void *)NULL, unf_reset_port);
- return ret;
-}
-
-int unf_lport_reset_port(struct unf_lport *lport, u32 flag)
-{
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- return unf_send_event(lport->port_id, flag, (void *)&lport->port_id,
- (void *)NULL, unf_reset_port);
-}
-
-static inline u32 unf_get_loop_alpa(struct unf_lport *lport, void *loop_alpa)
-{
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(lport->low_level_func.port_mgr_op.ll_port_config_get,
- UNF_RETURN_ERROR);
-
- ret = lport->low_level_func.port_mgr_op.ll_port_config_get(lport->fc_port,
- UNF_PORT_CFG_GET_LOOP_ALPA, loop_alpa);
-
- return ret;
-}
-
-static u32 unf_lport_enter_private_loop_login(struct unf_lport *lport)
-{
- struct unf_lport *unf_lport = lport;
- ulong flag = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
- unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_READY); /* LPort: LINK_UP --> READY */
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
-
- unf_lport_update_topo(unf_lport, UNF_ACT_TOP_PRIVATE_LOOP);
-
- /* NOP: check L_Port state */
- if (atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_MAJOR, "[info]Port(0x%x) is NOP, do nothing",
- unf_lport->port_id);
-
- return RETURN_OK;
- }
-
- /* INI: check L_Port mode */
- if (UNF_PORT_MODE_INI != (unf_lport->options & UNF_PORT_MODE_INI)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) has no INI feature(0x%x), do nothing",
- unf_lport->port_id, unf_lport->options);
-
- return RETURN_OK;
- }
-
- if (unf_lport->disc.disc_temp.unf_disc_start) {
- ret = unf_lport->disc.disc_temp.unf_disc_start(unf_lport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) with nportid(0x%x) start discovery failed",
- unf_lport->port_id, unf_lport->nport_id);
- }
- }
-
- return ret;
-}
-
-u32 unf_lport_login(struct unf_lport *lport, enum unf_act_topo act_topo)
-{
- u32 loop_alpa = 0;
- u32 ret = RETURN_OK;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
-	/* 1. Update (set) L_Port topology obtained from low level */
- unf_lport_update_topo(lport, act_topo);
-
- spin_lock_irqsave(&lport->lport_state_lock, flag);
-
- /* 2. Link state check */
- if (lport->link_up != UNF_PORT_LINK_UP) {
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) with link_state(0x%x) port_state(0x%x) when login",
- lport->port_id, lport->link_up, lport->states);
-
- return UNF_RETURN_ERROR;
- }
-
- /* 3. Update L_Port state */
- unf_lport_state_ma(lport, UNF_EVENT_LPORT_LINK_UP); /* LPort: INITIAL --> LINK UP */
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]LOGIN: Port(0x%x) start to login with topology(0x%x)",
- lport->port_id, lport->act_topo);
-
-	/* 4. Start login */
- if (act_topo == UNF_TOP_P2P_MASK ||
- act_topo == UNF_ACT_TOP_P2P_FABRIC ||
- act_topo == UNF_ACT_TOP_P2P_DIRECT) {
- /* P2P or Fabric mode */
- ret = unf_lport_enter_flogi(lport);
- } else if (act_topo == UNF_ACT_TOP_PUBLIC_LOOP) {
- /* Public loop */
- (void)unf_get_loop_alpa(lport, &loop_alpa);
-
-		/* Before FLOGI the ALPA is only the low 8 bits; after
-		 * FLOGI ACC the switch assigns the complete address
-		 */
- spin_lock_irqsave(&lport->lport_state_lock, flag);
- lport->nport_id = loop_alpa;
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- ret = unf_lport_enter_flogi(lport);
- } else if (act_topo == UNF_ACT_TOP_PRIVATE_LOOP) {
- /* Private loop */
- (void)unf_get_loop_alpa(lport, &loop_alpa);
-
- spin_lock_irqsave(&lport->lport_state_lock, flag);
- lport->nport_id = loop_alpa;
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- ret = unf_lport_enter_private_loop_login(lport);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]LOGIN: Port(0x%x) login with unknown topology(0x%x)",
- lport->port_id, lport->act_topo);
- }
-
- return ret;
-}
-
-static u32 unf_port_linkup(struct unf_lport *lport, void *input)
-{
- struct unf_lport *unf_lport = lport;
- u32 ret = RETURN_OK;
- enum unf_act_topo act_topo = UNF_ACT_TOP_UNKNOWN;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- /* If NOP state, stop */
- if (atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[warn]Port(0x%x) is NOP and do nothing", unf_lport->port_id);
-
- return RETURN_OK;
- }
-
- /* Update port state */
- spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
- unf_lport->link_up = UNF_PORT_LINK_UP;
- unf_lport->speed = *((u32 *)input);
- unf_set_lport_state(lport, UNF_LPORT_ST_INITIAL); /* INITIAL state */
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
-
- /* set hot pool wait state: so far, do not care */
- unf_set_hot_pool_wait_state(unf_lport, true);
-
- unf_lport->enhanced_features |= UNF_LPORT_ENHANCED_FEATURE_READ_SFP_ONCE;
-
-	/* Get the port's active topology (from low level) */
- if (!unf_lport->low_level_func.port_mgr_op.ll_port_config_get) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[warn]Port(0x%x) get topo function is NULL", unf_lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
- ret = unf_lport->low_level_func.port_mgr_op.ll_port_config_get(unf_lport->fc_port,
- UNF_PORT_CFG_GET_TOPO_ACT, (void *)&act_topo);
-
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[warn]Port(0x%x) get topo from low level failed",
- unf_lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- /* Start Login process */
- ret = unf_lport_login(unf_lport, act_topo);
-
- return ret;
-}
-
-static u32 unf_port_linkdown(struct unf_lport *lport, void *input)
-{
- ulong flag = 0;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- unf_lport = lport;
-
- /* To prevent repeated reporting linkdown */
- spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
- unf_lport->speed = UNF_PORT_SPEED_UNKNOWN;
- unf_lport->act_topo = UNF_ACT_TOP_UNKNOWN;
- if (unf_lport->link_up == UNF_PORT_LINK_DOWN) {
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
-
- return RETURN_OK;
- }
- unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_LINK_DOWN);
- unf_reset_lport_params(unf_lport);
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
-
- unf_set_hot_pool_wait_state(unf_lport, false);
-
- /*
- * clear I/O:
- * 1. INI do ABORT only,
- * 2. TGT need do source clear with Wait_IO
- * *
- * for INI: busy/delay/delay_transfer/wait
- * Clean L_Port/V_Port Link Down I/O: only set ABORT tag
- */
- unf_flush_disc_event(&unf_lport->disc, NULL);
-
- unf_clean_linkdown_io(unf_lport, false);
-
- /* for L_Port's R_Ports */
- unf_clean_linkdown_rport(unf_lport);
- /* for L_Port's all Vports */
- unf_linkdown_all_vports(lport);
- return RETURN_OK;
-}
-
-static u32 unf_port_abnormal_reset(struct unf_lport *lport, void *input)
-{
- u32 ret = UNF_RETURN_ERROR;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- unf_lport = lport;
-
- ret = (u32)unf_lport_reset_port(unf_lport, UNF_EVENT_ASYN);
-
- return ret;
-}
-
-static u32 unf_port_reset_start(struct unf_lport *lport, void *input)
-{
- u32 ret = RETURN_OK;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- spin_lock_irqsave(&lport->lport_state_lock, flag);
- unf_set_lport_state(lport, UNF_LPORT_ST_RESET);
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "Port(0x%x) begin to reset.", lport->port_id);
-
- return ret;
-}
-
-static u32 unf_port_reset_end(struct unf_lport *lport, void *input)
-{
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "Port(0x%x) reset end.", lport->port_id);
-
- /* Task management command returns success and avoid repair measures
- * case offline device
- */
- unf_wake_up_scsi_task_cmnd(lport);
-
- spin_lock_irqsave(&lport->lport_state_lock, flag);
- unf_set_lport_state(lport, UNF_LPORT_ST_INITIAL);
- spin_unlock_irqrestore(&lport->lport_state_lock, flag);
-
- return RETURN_OK;
-}
-
-static u32 unf_port_nop(struct unf_lport *lport, void *input)
-{
- struct unf_lport *unf_lport = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- unf_lport = lport;
-
- atomic_set(&unf_lport->lport_no_operate_flag, UNF_LPORT_NOP);
-
- spin_lock_irqsave(&unf_lport->lport_state_lock, flag);
- unf_lport_state_ma(unf_lport, UNF_EVENT_LPORT_LINK_DOWN);
- unf_reset_lport_params(unf_lport);
- spin_unlock_irqrestore(&unf_lport->lport_state_lock, flag);
-
- /* Set Tag prevent pending I/O to wait_list when close sfp failed */
- unf_set_hot_pool_wait_state(unf_lport, false);
-
- unf_flush_disc_event(&unf_lport->disc, NULL);
-
- /* L_Port/V_Port's I/O(s): Clean Link Down I/O: Set Abort Tag */
- unf_clean_linkdown_io(unf_lport, false);
-
- /* L_Port/V_Port's R_Port(s): report link down event to scsi & clear
- * resource
- */
- unf_clean_linkdown_rport(unf_lport);
- unf_linkdown_all_vports(unf_lport);
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) report NOP event done", unf_lport->nport_id);
-
- return RETURN_OK;
-}
-
-static u32 unf_port_begin_remove(struct unf_lport *lport, void *input)
-{
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- /* Cancel route timer delay work */
- unf_destroy_lport_route(lport);
-
- return RETURN_OK;
-}
-
-static u32 unf_get_pcie_link_state(struct unf_lport *lport)
-{
- struct unf_lport *unf_lport = lport;
- bool linkstate = true;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(unf_lport->low_level_func.port_mgr_op.ll_port_config_get,
- UNF_RETURN_ERROR);
-
- ret = unf_lport->low_level_func.port_mgr_op.ll_port_config_get(unf_lport->fc_port,
- UNF_PORT_CFG_GET_PCIE_LINK_STATE, (void *)&linkstate);
- if (ret != RETURN_OK || linkstate != true) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
- UNF_KEVENT, "[err]Can't Get Pcie Link State");
-
- return UNF_RETURN_ERROR;
- }
-
- return RETURN_OK;
-}
-
-void unf_root_lport_ref_dec(struct unf_lport *lport)
-{
- ulong flags = 0;
- ulong lport_flags = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VOID(lport);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%p) port_id(0x%x) reference count is %d",
- lport, lport->port_id, atomic_read(&lport->port_ref_cnt));
-
- spin_lock_irqsave(&global_lport_mgr.global_lport_list_lock, flags);
- spin_lock_irqsave(&lport->lport_state_lock, lport_flags);
- if (atomic_dec_and_test(&lport->port_ref_cnt)) {
- spin_unlock_irqrestore(&lport->lport_state_lock, lport_flags);
-
- list_del(&lport->entry_lport);
- global_lport_mgr.lport_sum--;
-
- /* Put L_Port to destroy list for debuging */
- list_add_tail(&lport->entry_lport, &global_lport_mgr.destroy_list_head);
- spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
-
- ret = unf_schedule_global_event((void *)lport, UNF_GLOBAL_EVENT_ASYN,
- unf_lport_destroy);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_CRITICAL,
- "[warn]Schedule global event faile. remain nodes(0x%x)",
- global_event_queue.list_number);
- }
- } else {
- spin_unlock_irqrestore(&lport->lport_state_lock, lport_flags);
- spin_unlock_irqrestore(&global_lport_mgr.global_lport_list_lock, flags);
- }
-}
-
-void unf_lport_ref_dec_to_destroy(struct unf_lport *lport)
-{
- if (lport->root_lport != lport)
- unf_vport_ref_dec(lport);
- else
- unf_root_lport_ref_dec(lport);
-}
-
-void unf_lport_route_work(struct work_struct *work)
-{
-#define UNF_MAX_PCIE_LINK_DOWN_TIMES 3
- struct unf_lport *unf_lport = NULL;
- int ret = 0;
-
- FC_CHECK_RETURN_VOID(work);
-
- unf_lport = container_of(work, struct unf_lport, route_timer_work.work);
- if (unlikely(!unf_lport)) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
- UNF_KEVENT, "[err]LPort is NULL");
-
- return;
- }
-
- if (unlikely(unf_lport->port_removing)) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_KEVENT,
- "[warn]LPort(0x%x) route work is closing.", unf_lport->port_id);
-
- unf_lport_ref_dec_to_destroy(unf_lport);
-
- return;
- }
-
- if (unlikely(unf_get_pcie_link_state(unf_lport)))
- unf_lport->pcie_link_down_cnt++;
- else
- unf_lport->pcie_link_down_cnt = 0;
-
- if (unf_lport->pcie_link_down_cnt >= UNF_MAX_PCIE_LINK_DOWN_TIMES) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_KEVENT,
- "[warn]LPort(0x%x) detected pcie linkdown, closing route work",
- unf_lport->port_id);
- unf_lport->pcie_link_down = true;
- unf_free_lport_all_xchg(unf_lport);
- unf_lport_ref_dec_to_destroy(unf_lport);
- return;
- }
-
- if (unlikely(UNF_LPORT_CHIP_ERROR(unf_lport))) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_KEVENT,
- "[warn]LPort(0x%x) reported chip error, closing route work. ",
- unf_lport->port_id);
-
- unf_lport_ref_dec_to_destroy(unf_lport);
-
- return;
- }
-
- if (unf_lport->enhanced_features &
- UNF_LPORT_ENHANCED_FEATURE_CLOSE_FW_ROUTE) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_KEVENT,
- "[warn]User close LPort(0x%x) route work. ", unf_lport->port_id);
-
- unf_lport_ref_dec_to_destroy(unf_lport);
-
- return;
- }
-
- /* Scheduling 1 second */
- ret = queue_delayed_work(unf_wq, &unf_lport->route_timer_work,
- (ulong)msecs_to_jiffies(UNF_LPORT_POLL_TIMER));
- if (ret == 0) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_KEVENT,
- "[warn]LPort(0x%x) schedule work unsuccessful.", unf_lport->port_id);
-
- unf_lport_ref_dec_to_destroy(unf_lport);
- }
-}
-
-static int unf_cm_get_mac_adr(void *argc_in, void *argc_out)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_get_chip_info_argout *chip_info = NULL;
-
- FC_CHECK_RETURN_VALUE(argc_in, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(argc_out, UNF_RETURN_ERROR);
-
- unf_lport = (struct unf_lport *)argc_in;
- chip_info = (struct unf_get_chip_info_argout *)argc_out;
-
- if (!unf_lport) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT,
- UNF_MAJOR, " LPort is null.");
-
- return UNF_RETURN_ERROR;
- }
-
- if (!unf_lport->low_level_func.port_mgr_op.ll_port_config_get) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Port(0x%x)'s corresponding function is NULL.", unf_lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- if (unf_lport->low_level_func.port_mgr_op.ll_port_config_get(unf_lport->fc_port,
- UNF_PORT_CFG_GET_MAC_ADDR,
- chip_info) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Port(0x%x) get .", unf_lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- return RETURN_OK;
-}
-
-int unf_build_sys_wwn(u32 port_id, u64 *sys_port_name, u64 *sys_node_name)
-{
- struct unf_get_chip_info_argout wwn = {0};
- u32 ret = UNF_RETURN_ERROR;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VALUE((sys_port_name), UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE((sys_node_name), UNF_RETURN_ERROR);
-
- unf_lport = unf_find_lport_by_port_id(port_id);
- if (!unf_lport)
- return UNF_RETURN_ERROR;
-
- ret = (u32)unf_send_event(unf_lport->port_id, UNF_EVENT_SYN,
- (void *)unf_lport, (void *)&wwn, unf_cm_get_mac_adr);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "send event(port get mac adr) fail.");
- return UNF_RETURN_ERROR;
- }
-
- /* save card mode: UNF_FC_SERVER_BOARD_32_G(6):32G;
- * UNF_FC_SERVER_BOARD_16_G(7):16G MODE
- */
- unf_lport->card_type = wwn.board_type;
-
- /* update port max speed */
- if (wwn.board_type == UNF_FC_SERVER_BOARD_32_G)
- unf_lport->low_level_func.fc_ser_max_speed = UNF_PORT_SPEED_32_G;
- else if (wwn.board_type == UNF_FC_SERVER_BOARD_16_G)
- unf_lport->low_level_func.fc_ser_max_speed = UNF_PORT_SPEED_16_G;
- else if (wwn.board_type == UNF_FC_SERVER_BOARD_8_G)
- unf_lport->low_level_func.fc_ser_max_speed = UNF_PORT_SPEED_8_G;
- else
- unf_lport->low_level_func.fc_ser_max_speed = UNF_PORT_SPEED_32_G;
-
- *sys_port_name = wwn.wwpn;
- *sys_node_name = wwn.wwnn;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "Port(0x%x) Port Name(0x%llx), Node Name(0x%llx.)",
- port_id, *sys_port_name, *sys_node_name);
-
- return RETURN_OK;
-}
-
-static u32 unf_update_port_wwn(struct unf_lport *lport,
- struct unf_port_wwn *port_wwn)
-{
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(port_wwn, UNF_RETURN_ERROR);
-
- /* Now notice lowlevel to update */
- if (!lport->low_level_func.port_mgr_op.ll_port_config_set) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Port(0x%x)'s corresponding function is NULL.",
- lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- if (lport->low_level_func.port_mgr_op.ll_port_config_set(lport->fc_port,
- UNF_PORT_CFG_UPDATE_WWN,
- port_wwn) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Port(0x%x) update WWN unsuccessful.",
- lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Port(0x%x) update WWN: previous(0x%llx, 0x%llx), now(0x%llx, 0x%llx).",
- lport->port_id, lport->port_name, lport->node_name,
- port_wwn->sys_port_wwn, port_wwn->sys_node_name);
-
- lport->port_name = port_wwn->sys_port_wwn;
- lport->node_name = port_wwn->sys_node_name;
-
- return RETURN_OK;
-}
-
-static u32 unf_build_lport_wwn(struct unf_lport *lport)
-{
- struct unf_port_wwn port_wwn = {0};
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- if (unf_build_sys_wwn(lport->port_id, &port_wwn.sys_port_wwn,
- &port_wwn.sys_node_name) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "Port(0x%x) build WWN unsuccessful.", lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) build WWN succeed", lport->port_id);
-
- if (unf_update_port_wwn(lport, &port_wwn) != RETURN_OK)
- return UNF_RETURN_ERROR;
-
- return RETURN_OK;
-}
-
-u32 unf_port_release_rport_index(struct unf_lport *lport, void *input)
-{
- u32 rport_index = INVALID_VALUE32;
- ulong flag = 0;
- struct unf_rport_pool *rport_pool = NULL;
- struct unf_lport *unf_lport = NULL;
- spinlock_t *rport_pool_lock = NULL;
-
- unf_lport = (struct unf_lport *)lport->root_lport;
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- if (input) {
- rport_index = *(u32 *)input;
- if (rport_index < lport->low_level_func.support_max_rport) {
- rport_pool = &unf_lport->rport_pool;
- rport_pool_lock = &rport_pool->rport_free_pool_lock;
- spin_lock_irqsave(rport_pool_lock, flag);
- if (test_bit((int)rport_index, rport_pool->rpi_bitmap)) {
- clear_bit((int)rport_index, rport_pool->rpi_bitmap);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) try to release a free rport index(0x%x)",
- lport->port_id, rport_index);
- }
- spin_unlock_irqrestore(rport_pool_lock, flag);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) try to release a not exist rport index(0x%x)",
- lport->port_id, rport_index);
- }
- }
-
- return RETURN_OK;
-}
-
-void *unf_lookup_lport_by_nportid(void *lport, u32 nport_id)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_vport_pool *vport_pool = NULL;
- struct unf_lport *unf_vport = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
-
- unf_lport = (struct unf_lport *)lport;
- unf_lport = unf_lport->root_lport;
- vport_pool = unf_lport->vport_pool;
-
- if (unf_lport->nport_id == nport_id)
- return unf_lport;
-
- if (unlikely(!vport_pool)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) vport pool is NULL", unf_lport->port_id);
-
- return NULL;
- }
-
- spin_lock_irqsave(&vport_pool->vport_pool_lock, flag);
- list_for_each_safe(node, next_node, &unf_lport->list_vports_head) {
- unf_vport = list_entry(node, struct unf_lport, entry_vport);
- if (unf_vport->nport_id == nport_id) {
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
- return unf_vport;
- }
- }
-
- list_for_each_safe(node, next_node, &unf_lport->list_intergrad_vports) {
- unf_vport = list_entry(node, struct unf_lport, entry_vport);
- if (unf_vport->nport_id == nport_id) {
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
- return unf_vport;
- }
- }
- spin_unlock_irqrestore(&vport_pool->vport_pool_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "Port(0x%x) has no vport Nport ID(0x%x)",
- unf_lport->port_id, nport_id);
-
- return NULL;
-}
-
-int unf_get_link_lose_tmo(struct unf_lport *lport)
-{
- u32 tmo_value = 0;
-
- if (!lport)
- return UNF_LOSE_TMO;
-
- tmo_value = atomic_read(&lport->link_lose_tmo);
-
- if (!tmo_value)
- tmo_value = UNF_LOSE_TMO;
-
- return (int)tmo_value;
-}
-
-u32 unf_register_scsi_host(struct unf_lport *lport)
-{
- struct unf_host_param host_param = {0};
-
- struct Scsi_Host **scsi_host = NULL;
- struct unf_lport_cfg_item *lport_cfg_items = NULL;
-
- FC_CHECK_RETURN_VALUE((lport), UNF_RETURN_ERROR);
-
- /* Point to -->> L_port->Scsi_host */
- scsi_host = &lport->host_info.host;
-
- lport_cfg_items = &lport->low_level_func.lport_cfg_items;
- host_param.can_queue = (int)lport_cfg_items->max_queue_depth;
-
- /* Performance optimization */
- host_param.cmnd_per_lun = UNF_MAX_CMND_PER_LUN;
-
- host_param.sg_table_size = UNF_MAX_DMA_SEGS;
- host_param.max_id = UNF_MAX_TARGET_NUMBER;
- host_param.max_lun = UNF_DEFAULT_MAX_LUN;
- host_param.max_channel = UNF_MAX_BUS_CHANNEL;
- host_param.max_cmnd_len = UNF_MAX_SCSI_CMND_LEN; /* CDB-16 */
- host_param.dma_boundary = UNF_DMA_BOUNDARY;
- host_param.max_sectors = UNF_MAX_SECTORS;
- host_param.port_id = lport->port_id;
- host_param.lport = lport;
- host_param.pdev = &lport->low_level_func.dev->dev;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[info]Port(0x%x) allocate scsi host: can queue(%u), command performance LUN(%u), max lun(%u)",
- lport->port_id, host_param.can_queue, host_param.cmnd_per_lun,
- host_param.max_lun);
-
- if (unf_alloc_scsi_host(scsi_host, &host_param) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) allocate scsi host failed", lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
- "[event]Port(0x%x) allocate scsi host(0x%x) succeed",
- lport->port_id, UNF_GET_SCSI_HOST_ID(*scsi_host));
-
- return RETURN_OK;
-}
-
-void unf_unregister_scsi_host(struct unf_lport *lport)
-{
- struct Scsi_Host *scsi_host = NULL;
- u32 host_no = 0;
-
- FC_CHECK_RETURN_VOID(lport);
-
- scsi_host = lport->host_info.host;
-
- if (scsi_host) {
- host_no = UNF_GET_SCSI_HOST_ID(scsi_host);
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[event]Port(0x%x) starting unregister scsi host(0x%x)",
- lport->port_id, host_no);
- unf_free_scsi_host(scsi_host);
- /* can`t set scsi_host for NULL, since it does`t alloc by itself */
- } else {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
- "[warn]Port(0x%x) unregister scsi host, invalid scsi_host ",
- lport->port_id);
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[event]Port(0x%x) unregister scsi host(0x%x) succeed",
- lport->port_id, host_no);
-
- lport->destroy_step = UNF_LPORT_DESTROY_STEP_12_UNREG_SCSI_HOST;
-}
diff --git a/drivers/scsi/spfc/common/unf_portman.h b/drivers/scsi/spfc/common/unf_portman.h
deleted file mode 100644
index 4ad93d32bcaa..000000000000
--- a/drivers/scsi/spfc/common/unf_portman.h
+++ /dev/null
@@ -1,96 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_PORTMAN_H
-#define UNF_PORTMAN_H
-
-#include "unf_type.h"
-#include "unf_lport.h"
-
-#define UNF_LPORT_POLL_TIMER ((u32)(1 * 1000))
-#define UNF_TX_CREDIT_REG_32_G 0x2289420
-#define UNF_RX_CREDIT_REG_32_G 0x228950c
-#define UNF_CREDIT_REG_16_G 0x2283418
-#define UNF_PORT_OFFSET_BASE 0x10000
-#define UNF_CREDIT_EMU_VALUE 0x20
-#define UNF_CREDIT_VALUE_32_G 0x8
-#define UNF_CREDIT_VALUE_16_G 0x8000000080008
-
-struct unf_nportid_map {
- u32 sid;
- u32 did;
- void *rport[1024];
- void *lport;
-};
-
-struct unf_global_card_thread {
- struct list_head card_list_head;
- spinlock_t global_card_list_lock;
- u32 card_num;
-};
-
-/* Global L_Port MG,manage all L_Port */
-struct unf_global_lport {
- struct list_head lport_list_head;
-
- /* Temporary list,used in hold list traverse */
- struct list_head intergrad_head;
-
- /* destroy list,used in card remove */
- struct list_head destroy_list_head;
-
- /* Dirty list,abnormal port */
- struct list_head dirty_list_head;
- spinlock_t global_lport_list_lock;
- u32 lport_sum;
- u8 dft_mode;
- bool start_work;
-};
-
-struct unf_port_action {
- u32 action;
- u32 (*unf_action)(struct unf_lport *lport, void *input);
-};
-
-struct unf_reset_port_argin {
- u32 port_id;
-};
-
-extern struct unf_global_lport global_lport_mgr;
-extern struct unf_global_card_thread card_thread_mgr;
-extern struct workqueue_struct *unf_wq;
-
-struct unf_lport *unf_find_lport_by_port_id(u32 port_id);
-struct unf_lport *unf_find_lport_by_scsi_hostid(u32 scsi_host_id);
-void *
-unf_lport_create_and_init(void *private_data,
- struct unf_low_level_functioon_op *low_level_op);
-u32 unf_fc_port_link_event(void *lport, u32 events, void *input);
-u32 unf_release_local_port(void *lport);
-void unf_lport_route_work(struct work_struct *work);
-void unf_lport_update_topo(struct unf_lport *lport,
- enum unf_act_topo active_topo);
-void unf_lport_ref_dec(struct unf_lport *lport);
-u32 unf_lport_ref_inc(struct unf_lport *lport);
-void unf_lport_ref_dec_to_destroy(struct unf_lport *lport);
-void unf_port_mgmt_deinit(void);
-void unf_port_mgmt_init(void);
-void unf_show_dirty_port(bool show_only, u32 *dirty_port_num);
-void *unf_lookup_lport_by_nportid(void *lport, u32 nport_id);
-u32 unf_is_lport_valid(struct unf_lport *lport);
-int unf_lport_reset_port(struct unf_lport *lport, u32 flag);
-int unf_cm_ops_handle(u32 type, void **arg_in);
-u32 unf_register_scsi_host(struct unf_lport *lport);
-void unf_unregister_scsi_host(struct unf_lport *lport);
-void unf_destroy_scsi_id_table(struct unf_lport *lport);
-u32 unf_lport_login(struct unf_lport *lport, enum unf_act_topo act_topo);
-u32 unf_init_scsi_id_table(struct unf_lport *lport);
-void unf_set_lport_removing(struct unf_lport *lport);
-void unf_lport_release_lw_funop(struct unf_lport *lport);
-void unf_show_all_rport(struct unf_lport *lport);
-void unf_disc_state_ma(struct unf_lport *lport, enum unf_disc_event evnet);
-int unf_get_link_lose_tmo(struct unf_lport *lport);
-u32 unf_port_release_rport_index(struct unf_lport *lport, void *input);
-int unf_cm_reset_port(u32 port_id);
-
-#endif
diff --git a/drivers/scsi/spfc/common/unf_rport.c b/drivers/scsi/spfc/common/unf_rport.c
deleted file mode 100644
index 9b06df884524..000000000000
--- a/drivers/scsi/spfc/common/unf_rport.c
+++ /dev/null
@@ -1,2286 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "unf_rport.h"
-#include "unf_log.h"
-#include "unf_exchg.h"
-#include "unf_ls.h"
-#include "unf_service.h"
-#include "unf_portman.h"
-
-/* rport state:ready --->>> link_down --->>> closing --->>> timeout --->>> delete */
-struct unf_rport_feature_pool *port_feature_pool;
-
-void unf_sesion_loss_timeout(struct work_struct *work)
-{
- struct unf_wwpn_rport_info *wwpn_rport_info = NULL;
-
- FC_CHECK_RETURN_VOID(work);
-
- wwpn_rport_info = container_of(work, struct unf_wwpn_rport_info, loss_tmo_work.work);
- if (unlikely(!wwpn_rport_info)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]wwpn_rport_info is NULL");
- return;
- }
-
- atomic_set(&wwpn_rport_info->scsi_state, UNF_SCSI_ST_DEAD);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[info]Port(0x%x) wwpn(0x%llx) set target(0x%x) scsi state to dead",
- ((struct unf_lport *)(wwpn_rport_info->lport))->port_id,
- wwpn_rport_info->wwpn, wwpn_rport_info->target_id);
-}
-
-u32 unf_alloc_scsi_id(struct unf_lport *lport, struct unf_rport *rport)
-{
- struct unf_rport_scsi_id_image *rport_scsi_table = NULL;
- struct unf_wwpn_rport_info *wwn_rport_info = NULL;
- ulong flags = 0;
- u32 index = 0;
- u32 ret = UNF_RETURN_ERROR;
- spinlock_t *rport_scsi_tb_lock = NULL;
-
- rport_scsi_table = &lport->rport_scsi_table;
- rport_scsi_tb_lock = &rport_scsi_table->scsi_image_table_lock;
- spin_lock_irqsave(rport_scsi_tb_lock, flags);
-
- /* 1. At first, existence check */
- for (index = 0; index < rport_scsi_table->max_scsi_id; index++) {
- wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[index];
- if (rport->port_name == wwn_rport_info->wwpn) {
- spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
- UNF_DELAYED_WORK_SYNC(ret, (lport->port_id),
- (&wwn_rport_info->loss_tmo_work),
- "loss tmo Timer work");
-
- /* Plug case: reuse again */
- spin_lock_irqsave(rport_scsi_tb_lock, flags);
- wwn_rport_info->rport = rport;
- wwn_rport_info->las_ten_scsi_state =
- atomic_read(&wwn_rport_info->scsi_state);
- atomic_set(&wwn_rport_info->scsi_state, UNF_SCSI_ST_ONLINE);
- spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) find the same scsi_id(0x%x) by wwpn(0x%llx) RPort(%p) N_Port_ID(0x%x)",
- lport->port_id, index, wwn_rport_info->wwpn, rport,
- rport->nport_id);
-
- atomic_inc(&lport->resume_scsi_id);
- goto find;
- }
- }
-
- /* 2. Alloc new SCSI ID */
- for (index = 0; index < rport_scsi_table->max_scsi_id; index++) {
- wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[index];
- if (wwn_rport_info->wwpn == INVALID_WWPN) {
- spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
- UNF_DELAYED_WORK_SYNC(ret, (lport->port_id),
- (&wwn_rport_info->loss_tmo_work),
- "loss tmo Timer work");
- /* Use the free space */
- spin_lock_irqsave(rport_scsi_tb_lock, flags);
- wwn_rport_info->rport = rport;
- wwn_rport_info->wwpn = rport->port_name;
- wwn_rport_info->las_ten_scsi_state =
- atomic_read(&wwn_rport_info->scsi_state);
- atomic_set(&wwn_rport_info->scsi_state, UNF_SCSI_ST_ONLINE);
- spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) allco new scsi_id(0x%x) by wwpn(0x%llx) RPort(%p) N_Port_ID(0x%x)",
- lport->port_id, index, wwn_rport_info->wwpn, rport,
- rport->nport_id);
-
- atomic_inc(&lport->alloc_scsi_id);
- goto find;
- }
- }
-
- /* 3. Reuse space has been used */
- for (index = 0; index < rport_scsi_table->max_scsi_id; index++) {
- wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[index];
- if (atomic_read(&wwn_rport_info->scsi_state) == UNF_SCSI_ST_DEAD) {
- spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
- UNF_DELAYED_WORK_SYNC(ret, (lport->port_id),
- (&wwn_rport_info->loss_tmo_work),
- "loss tmo Timer work");
-
- spin_lock_irqsave(rport_scsi_tb_lock, flags);
- if (wwn_rport_info->dfx_counter) {
- memset(wwn_rport_info->dfx_counter, 0,
- sizeof(struct unf_wwpn_dfx_counter_info));
- }
- if (wwn_rport_info->lun_qos_level) {
- memset(wwn_rport_info->lun_qos_level, 0,
- sizeof(u8) * UNF_MAX_LUN_PER_TARGET);
- }
- wwn_rport_info->rport = rport;
- wwn_rport_info->wwpn = rport->port_name;
- wwn_rport_info->las_ten_scsi_state =
- atomic_read(&wwn_rport_info->scsi_state);
- atomic_set(&wwn_rport_info->scsi_state, UNF_SCSI_ST_ONLINE);
- spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[info]Port(0x%x) reuse a dead scsi_id(0x%x) by wwpn(0x%llx) RPort(%p) N_Port_ID(0x%x)",
- lport->port_id, index, wwn_rport_info->wwpn, rport,
- rport->nport_id);
-
- atomic_inc(&lport->reuse_scsi_id);
- goto find;
- }
- }
-
- spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) there is not enough scsi_id with max_value(0x%x)",
- lport->port_id, index);
-
- return INVALID_VALUE32;
-
-find:
- if (!wwn_rport_info->dfx_counter) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[info]Port(0x%x) allocate Rport(0x%x) DFX buffer",
- lport->port_id, wwn_rport_info->rport->nport_id);
- wwn_rport_info->dfx_counter = vmalloc(sizeof(struct unf_wwpn_dfx_counter_info));
- if (!wwn_rport_info->dfx_counter) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) allocate DFX buffer fail",
- lport->port_id);
-
- return INVALID_VALUE32;
- }
-
- memset(wwn_rport_info->dfx_counter, 0, sizeof(struct unf_wwpn_dfx_counter_info));
- }
-
- return index;
-}
-
-u32 unf_get_scsi_id_by_wwpn(struct unf_lport *lport, u64 wwpn)
-{
- struct unf_rport_scsi_id_image *rport_scsi_table = NULL;
- struct unf_wwpn_rport_info *wwn_rport_info = NULL;
- ulong flags = 0;
- u32 index = 0;
- spinlock_t *rport_scsi_tb_lock = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, INVALID_VALUE32);
- rport_scsi_table = &lport->rport_scsi_table;
- rport_scsi_tb_lock = &rport_scsi_table->scsi_image_table_lock;
-
- if (wwpn == 0)
- return INVALID_VALUE32;
-
- spin_lock_irqsave(rport_scsi_tb_lock, flags);
-
- for (index = 0; index < rport_scsi_table->max_scsi_id; index++) {
- wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[index];
- if (wwn_rport_info->wwpn == wwpn) {
- spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
- return index;
- }
- }
-
- spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
-
- return INVALID_VALUE32;
-}
-
-void unf_set_device_state(struct unf_lport *lport, u32 scsi_id, int scsi_state)
-{
- struct unf_rport_scsi_id_image *scsi_image_table = NULL;
- struct unf_wwpn_rport_info *wwpn_rport_info = NULL;
-
- if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) RPort scsi_id(0x%x) is max than 0x%x",
- lport->port_id, scsi_id, UNF_MAX_SCSI_ID);
- return;
- }
-
- scsi_image_table = &lport->rport_scsi_table;
- wwpn_rport_info = &scsi_image_table->wwn_rport_info_table[scsi_id];
- atomic_set(&wwpn_rport_info->scsi_state, scsi_state);
-}
-
-void unf_rport_linkdown(struct unf_lport *lport, struct unf_rport *rport)
-{
- /*
- * 1. port_logout
- * 2. rcvd_rscn_port_not_in_disc
- * 3. each_rport_after_rscn
- * 4. rcvd_gpnid_rjt
- * 5. rport_after_logout(rport is fabric port)
- */
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
-
- /* 1. Update R_Port state: Link Down Event --->>> closing state */
- spin_lock_irqsave(&rport->rport_state_lock, flag);
- unf_rport_state_ma(rport, UNF_EVENT_RPORT_LINK_DOWN);
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- /* 3. Port enter closing (then enter to Delete) process */
- unf_rport_enter_closing(rport);
-}
-
-static struct unf_rport *unf_rport_is_changed(struct unf_lport *lport,
- struct unf_rport *rport, u32 sid)
-{
- if (rport) {
- /* S_ID or D_ID has been changed */
- if (rport->nport_id != sid || rport->local_nport_id != lport->nport_id) {
- /* 1. Swap case: (SID or DID changed): Report link down
- * & delete immediately
- */
- unf_rport_immediate_link_down(lport, rport);
- return NULL;
- }
- }
-
- return rport;
-}
-
-struct unf_rport *unf_rport_set_qualifier_key_reuse(struct unf_lport *lport,
- struct unf_rport *rport_by_nport_id,
- struct unf_rport *rport_by_wwpn,
- u64 wwpn, u32 sid)
-{
- /* Used for SPFC Chip */
- struct unf_rport *rport = NULL;
- struct unf_rport *rporta = NULL;
- struct unf_rport *rportb = NULL;
- bool wwpn_flag = false;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
-
- /* About R_Port by N_Port_ID */
- rporta = unf_rport_is_changed(lport, rport_by_nport_id, sid);
-
- /* About R_Port by WWpn */
- rportb = unf_rport_is_changed(lport, rport_by_wwpn, sid);
-
- if (!rporta && !rportb) {
- return NULL;
- } else if (!rporta && rportb) {
- /* 3. Plug case: reuse again */
- rport = rportb;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) RPort(0x%p) WWPN(0x%llx) S_ID(0x%x) D_ID(0x%x) reused by wwpn",
- lport->port_id, rport, rport->port_name,
- rport->nport_id, rport->local_nport_id);
-
- return rport;
- } else if (rporta && !rportb) {
- wwpn_flag = (rporta->port_name != wwpn && rporta->port_name != 0 &&
- rporta->port_name != INVALID_VALUE64);
- if (wwpn_flag) {
- /* 4. WWPN changed: Report link down & delete
- * immediately
- */
- unf_rport_immediate_link_down(lport, rporta);
- return NULL;
- }
-
- /* Updtae WWPN */
- rporta->port_name = wwpn;
- rport = rporta;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) RPort(0x%p) WWPN(0x%llx) S_ID(0x%x) D_ID(0x%x) reused by N_Port_ID",
- lport->port_id, rport, rport->port_name,
- rport->nport_id, rport->local_nport_id);
-
- return rport;
- }
-
- /* 5. Case for A == B && A != NULL && B != NULL */
- if (rportb == rporta) {
- rport = rporta;
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) find the same RPort(0x%p) WWPN(0x%llx) S_ID(0x%x) D_ID(0x%x)",
- lport->port_id, rport, rport->port_name, rport->nport_id,
- rport->local_nport_id);
-
- return rport;
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) find two duplicate login. RPort(A:0x%p, WWPN:0x%llx, S_ID:0x%x, D_ID:0x%x) RPort(B:0x%p, WWPN:0x%llx, S_ID:0x%x, D_ID:0x%x)",
- lport->port_id, rporta, rporta->port_name, rporta->nport_id,
- rporta->local_nport_id, rportb, rportb->port_name, rportb->nport_id,
- rportb->local_nport_id);
-
- /* 6. Case for A != B && A != NULL && B != NULL: Immediate
- * Report && Deletion
- */
- unf_rport_immediate_link_down(lport, rporta);
- unf_rport_immediate_link_down(lport, rportb);
-
- return NULL;
-}
-
-struct unf_rport *unf_find_valid_rport(struct unf_lport *lport, u64 wwpn, u32 sid)
-{
- struct unf_rport *rport = NULL;
- struct unf_rport *rport_by_nport_id = NULL;
- struct unf_rport *rport_by_wwpn = NULL;
- ulong flags = 0;
- spinlock_t *rport_state_lock = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
- FC_CHECK_RETURN_VALUE(lport->unf_qualify_rport, NULL);
-
- /* Get R_Port by WWN & N_Port_ID */
- rport_by_nport_id = unf_get_rport_by_nport_id(lport, sid);
- rport_by_wwpn = unf_get_rport_by_wwn(lport, wwpn);
-
- /* R_Port check: by WWPN */
- if (rport_by_wwpn) {
- rport_state_lock = &rport_by_wwpn->rport_state_lock;
- spin_lock_irqsave(rport_state_lock, flags);
- if (rport_by_wwpn->nport_id == UNF_FC_FID_FLOGI) {
- spin_unlock_irqrestore(rport_state_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) RPort(0x%p) find by WWPN(0x%llx) is invalid",
- lport->port_id, rport_by_wwpn, wwpn);
-
- rport_by_wwpn = NULL;
- } else {
- spin_unlock_irqrestore(rport_state_lock, flags);
- }
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x_0x%x) RPort(0x%p) find by N_Port_ID(0x%x) and RPort(0x%p) by WWPN(0x%llx)",
- lport->port_id, lport->nport_id, rport_by_nport_id, sid, rport_by_wwpn, wwpn);
-
- /* R_Port validity check: get by WWPN & N_Port_ID */
- rport = lport->unf_qualify_rport(lport, rport_by_nport_id,
- rport_by_wwpn, wwpn, sid);
-
- return rport;
-}
-
-void unf_rport_delay_login(struct unf_rport *rport)
-{
- FC_CHECK_RETURN_VOID(rport);
-
- /* Do R_Port recovery: PLOGI or PRLI or LOGO */
- unf_rport_error_recovery(rport);
-}
-
-void unf_rport_enter_logo(struct unf_lport *lport, struct unf_rport *rport)
-{
- /*
- * 1. TMF/ABTS timeout recovery :Y
- * 2. L_Port error recovery --->>> larger than retry_count :Y
- * 3. R_Port error recovery --->>> larger than retry_count :Y
- * 4. Check PLOGI parameter --->>> parameter is error :Y
- * 5. PRLI handler --->>> R_Port state is error :Y
- * 6. PDISC handler --->>> R_Port state is not PRLI_WAIT :Y
- * 7. ADISC handler --->>> R_Port state is not PRLI_WAIT :Y
- * 8. PLOGI wait timeout with R_PORT is INI mode :Y
- * 9. RCVD GFFID_RJT --->>> R_Port state is INIT :Y
- * 10. RCVD GPNID_ACC --->>> R_Port state is error :Y
- * 11. Private Loop mode with LOGO case :Y
- * 12. P2P mode with LOGO case :Y
- * 13. Fabric mode with LOGO case :Y
- * 14. RCVD PRLI_ACC with R_Port is INI :Y
- * 15. TGT RCVD BLS_REQ with session is error :Y
- */
- ulong flags = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
-
- spin_lock_irqsave(&rport->rport_state_lock, flags);
-
- if (rport->rp_state == UNF_RPORT_ST_CLOSING ||
- rport->rp_state == UNF_RPORT_ST_DELETE) {
- /* 1. Already within Closing or Delete: Do nothing */
- spin_unlock_irqrestore(&rport->rport_state_lock, flags);
-
- return;
- } else if (rport->rp_state == UNF_RPORT_ST_LOGO) {
- /* 2. Update R_Port state: Normal Enter Event --->>> closing
- * state
- */
- unf_rport_state_ma(rport, UNF_EVENT_RPORT_NORMAL_ENTER);
- spin_unlock_irqrestore(&rport->rport_state_lock, flags);
-
- /* Send Logo if necessary */
- if (unf_send_logo(lport, rport) != RETURN_OK)
- unf_rport_enter_closing(rport);
- } else {
- /* 3. Update R_Port state: Link Down Event --->>> closing state
- */
- unf_rport_state_ma(rport, UNF_EVENT_RPORT_LINK_DOWN);
- spin_unlock_irqrestore(&rport->rport_state_lock, flags);
-
- unf_rport_enter_closing(rport);
- }
-}
-
-u32 unf_free_scsi_id(struct unf_lport *lport, u32 scsi_id)
-{
- ulong flags = 0;
- struct unf_rport_scsi_id_image *rport_scsi_table = NULL;
- struct unf_wwpn_rport_info *wwn_rport_info = NULL;
- spinlock_t *rport_scsi_tb_lock = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- if (unlikely(lport->port_removing)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) is removing and do nothing",
- lport->port_id, lport->nport_id);
-
- return UNF_RETURN_ERROR;
- }
-
- if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x_0x%x) scsi_id(0x%x) is bigger than %d",
- lport->port_id, lport->nport_id, scsi_id, UNF_MAX_SCSI_ID);
-
- return UNF_RETURN_ERROR;
- }
-
- rport_scsi_table = &lport->rport_scsi_table;
- rport_scsi_tb_lock = &rport_scsi_table->scsi_image_table_lock;
- if (rport_scsi_table->wwn_rport_info_table) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[warn]Port(0x%x_0x%x) RPort(0x%p) free scsi_id(0x%x) wwpn(0x%llx) target_id(0x%x) succeed",
- lport->port_id, lport->nport_id,
- rport_scsi_table->wwn_rport_info_table[scsi_id].rport,
- scsi_id, rport_scsi_table->wwn_rport_info_table[scsi_id].wwpn,
- rport_scsi_table->wwn_rport_info_table[scsi_id].target_id);
-
- spin_lock_irqsave(rport_scsi_tb_lock, flags);
- wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[scsi_id];
- if (wwn_rport_info->rport) {
- wwn_rport_info->rport->rport = NULL;
- wwn_rport_info->rport = NULL;
- }
- wwn_rport_info->target_id = INVALID_VALUE32;
- atomic_set(&wwn_rport_info->scsi_state, UNF_SCSI_ST_DEAD);
-
- /* NOTE: remain WWPN/Port_Name unchanged(un-cleared) */
- spin_unlock_irqrestore(rport_scsi_tb_lock, flags);
-
- return RETURN_OK;
- }
-
- return UNF_RETURN_ERROR;
-}
-
-static void unf_report_ini_linkwown_event(struct unf_lport *lport, struct unf_rport *rport)
-{
- u32 scsi_id = 0;
- struct fc_rport *unf_rport = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
-
- /*
- * 1. Set local device (rport/rport_info_table) state to OFF_LINE.
- * Note: rport->scsi_id is valid from rport link up until link down.
- */
-
- spin_lock_irqsave(&rport->rport_state_lock, flag);
- scsi_id = rport->scsi_id;
- unf_set_device_state(lport, scsi_id, UNF_SCSI_ST_OFFLINE);
-
- /* 2. delete scsi's rport */
- unf_rport = (struct fc_rport *)rport->rport;
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
- if (unf_rport) {
- fc_remote_port_delete(unf_rport);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[event]Port(0x%x_0x%x) delete RPort(0x%x) wwpn(0x%llx) scsi_id(0x%x) succeed",
- lport->port_id, lport->nport_id, rport->nport_id,
- rport->port_name, scsi_id);
-
- atomic_inc(&lport->scsi_session_del_success);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[warn]Port(0x%x_0x%x) delete RPort(0x%x_0x%p) failed",
- lport->port_id, lport->nport_id, rport->nport_id, rport);
- }
-}
-
-static void unf_report_ini_linkup_event(struct unf_lport *lport, struct unf_rport *rport)
-{
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
-
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
- "[event]Port(0x%x) RPort(0x%x_0x%p) put INI link up work(%p) to work_queue",
- lport->port_id, rport->nport_id, rport, &rport->start_work);
-
- if (unlikely(!queue_work(lport->link_event_wq, &rport->start_work))) {
- atomic_inc(&lport->add_start_work_failed);
-
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
- "[err]Port(0x%x) RPort(0x%x_0x%p) put INI link up to work_queue failed",
- lport->port_id, rport->nport_id, rport);
- }
-}
-
-void unf_update_lport_state_by_linkup_event(struct unf_lport *lport,
- struct unf_rport *rport,
- u32 rport_att)
-{
- /* Report R_Port Link Up/Down Event */
- ulong flag = 0;
- enum unf_port_state lport_state = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
-
- spin_lock_irqsave(&rport->rport_state_lock, flag);
-
- /* 1. R_Port does not have TGT mode any more */
- if (((rport_att & UNF_FC4_FRAME_PARM_3_TGT) == 0) &&
- rport->lport_ini_state == UNF_PORT_STATE_LINKUP) {
- rport->last_lport_ini_state = rport->lport_ini_state;
- rport->lport_ini_state = UNF_PORT_STATE_LINKDOWN;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) RPort(0x%x) does not have TGT attribute(0x%x) any more",
- lport->port_id, rport->nport_id, rport_att);
- }
-
- /* 2. R_Port with TGT mode, L_Port with INI mode */
- if ((rport_att & UNF_FC4_FRAME_PARM_3_TGT) &&
- (lport->options & UNF_FC4_FRAME_PARM_3_INI)) {
- rport->last_lport_ini_state = rport->lport_ini_state;
- rport->lport_ini_state = UNF_PORT_STATE_LINKUP;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[warn]Port(0x%x) update INI state with last(0x%x) and now(0x%x)",
- lport->port_id, rport->last_lport_ini_state,
- rport->lport_ini_state);
- }
-
- /* 3. Report L_Port INI/TGT Down/Up event to SCSI */
- if (rport->last_lport_ini_state == rport->lport_ini_state) {
- if (rport->nport_id < UNF_FC_FID_DOM_MGR) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) RPort(0x%x %p) INI state(0x%x) has not been changed",
- lport->port_id, rport->nport_id, rport,
- rport->lport_ini_state);
- }
-
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- return;
- }
-
- lport_state = rport->lport_ini_state;
-
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- switch (lport_state) {
- case UNF_PORT_STATE_LINKDOWN:
- unf_report_ini_linkwown_event(lport, rport);
- break;
- case UNF_PORT_STATE_LINKUP:
- unf_report_ini_linkup_event(lport, rport);
- break;
- default:
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) with unknown link status(0x%x)",
- lport->port_id, rport->lport_ini_state);
- break;
- }
-}
-
-static void unf_rport_callback(void *rport, void *lport, u32 result)
-{
- struct unf_rport *unf_rport = NULL;
- struct unf_lport *unf_lport = NULL;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(rport);
- FC_CHECK_RETURN_VOID(lport);
- unf_rport = (struct unf_rport *)rport;
- unf_lport = (struct unf_lport *)lport;
-
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport->last_lport_ini_state = unf_rport->lport_ini_state;
- unf_rport->lport_ini_state = UNF_PORT_STATE_LINKDOWN;
- unf_rport->last_lport_tgt_state = unf_rport->lport_tgt_state;
- unf_rport->lport_tgt_state = UNF_PORT_STATE_LINKDOWN;
-
- /* Report R_Port Link Down Event to scsi */
- if (unf_rport->last_lport_ini_state == unf_rport->lport_ini_state) {
- if (unf_rport->nport_id < UNF_FC_FID_DOM_MGR) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) RPort(0x%x %p) INI state(0x%x) has not been changed",
- unf_lport->port_id, unf_rport->nport_id,
- unf_rport, unf_rport->lport_ini_state);
- }
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- return;
- }
-
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- unf_report_ini_linkwown_event(unf_lport, unf_rport);
-}
-
-static void unf_rport_recovery_timeout(struct work_struct *work)
-{
- struct unf_lport *lport = NULL;
- struct unf_rport *rport = NULL;
- u32 ret = RETURN_OK;
- ulong flag = 0;
- enum unf_rport_login_state rp_state = UNF_RPORT_ST_INIT;
-
- FC_CHECK_RETURN_VOID(work);
-
- rport = container_of(work, struct unf_rport, recovery_work.work);
- if (unlikely(!rport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]RPort is NULL");
-
- return;
- }
-
- lport = rport->lport;
- if (unlikely(!lport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]RPort(0x%x) Port is NULL", rport->nport_id);
-
- /* for timer */
- unf_rport_ref_dec(rport);
- return;
- }
-
- spin_lock_irqsave(&rport->rport_state_lock, flag);
- rp_state = rport->rp_state;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x_0x%x) RPort(0x%x) state(0x%x) recovery timer timeout",
- lport->port_id, lport->nport_id, rport->nport_id, rp_state);
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- switch (rp_state) {
- case UNF_RPORT_ST_PLOGI_WAIT:
- if ((lport->act_topo == UNF_ACT_TOP_P2P_DIRECT &&
- lport->port_name > rport->port_name) ||
- lport->act_topo != UNF_ACT_TOP_P2P_DIRECT) {
- /* P2P: Name is master with P2P_D
- * or has INI Mode
- */
- ret = unf_send_plogi(rport->lport, rport);
- }
- break;
- case UNF_RPORT_ST_PRLI_WAIT:
- ret = unf_send_prli(rport->lport, rport, ELS_PRLI);
- break;
- default:
- break;
- }
-
- if (ret != RETURN_OK)
- unf_rport_error_recovery(rport);
-
- /* company with timer */
- unf_rport_ref_dec(rport);
-}
-
-void unf_schedule_closing_work(struct unf_lport *lport, struct unf_rport *rport)
-{
- ulong flags = 0;
- struct unf_rport_scsi_id_image *rport_scsi_table = NULL;
- struct unf_wwpn_rport_info *wwn_rport_info = NULL;
- u32 scsi_id = 0;
- u32 ret = 0;
- u32 delay = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
-
- delay = (u32)(unf_get_link_lose_tmo(lport) * 1000);
-
- rport_scsi_table = &lport->rport_scsi_table;
- scsi_id = rport->scsi_id;
- spin_lock_irqsave(&rport->rport_state_lock, flags);
-
- /* 1. Cancel recovery_work */
- if (cancel_delayed_work(&rport->recovery_work)) {
- atomic_dec(&rport->rport_ref_cnt);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x_0x%x) RPort(0x%x_0x%p) cancel recovery work succeed",
- lport->port_id, lport->nport_id, rport->nport_id, rport);
- }
-
- /* 2. Cancel Open_work */
- if (cancel_delayed_work(&rport->open_work)) {
- atomic_dec(&rport->rport_ref_cnt);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x_0x%x) RPort(0x%x_0x%p) cancel open work succeed",
- lport->port_id, lport->nport_id, rport->nport_id, rport);
- }
-
- spin_unlock_irqrestore(&rport->rport_state_lock, flags);
-
- /* 3. Work in-queue (switch to thread context) */
- if (!queue_work(lport->link_event_wq, &rport->closing_work)) {
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
- "[warn]Port(0x%x) RPort(0x%x_0x%p) add link down to work queue failed",
- lport->port_id, rport->nport_id, rport);
-
- atomic_inc(&lport->add_closing_work_failed);
- } else {
- spin_lock_irqsave(&rport->rport_state_lock, flags);
- (void)unf_rport_ref_inc(rport);
- spin_unlock_irqrestore(&rport->rport_state_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
- "[info]Port(0x%x) RPort(0x%x_0x%p) add link down to work(%p) queue succeed",
- lport->port_id, rport->nport_id, rport,
- &rport->closing_work);
- }
-
- if (rport->nport_id > UNF_FC_FID_DOM_MGR)
- return;
-
- if (scsi_id >= UNF_MAX_SCSI_ID) {
- scsi_id = unf_get_scsi_id_by_wwpn(lport, rport->port_name);
- if (scsi_id >= UNF_MAX_SCSI_ID) {
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
- "[warn]Port(0x%x) RPort(0x%p) NPortId(0x%x) wwpn(0x%llx) option(0x%x) scsi_id(0x%x) is max than(0x%x)",
- lport->port_id, rport, rport->nport_id,
- rport->port_name, rport->options, scsi_id,
- UNF_MAX_SCSI_ID);
-
- return;
- }
- }
-
- wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[scsi_id];
- ret = queue_delayed_work(unf_wq, &wwn_rport_info->loss_tmo_work,
- (ulong)msecs_to_jiffies((u32)delay));
- if (!ret) {
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
- "[info] Port(0x%x) add RPort(0x%p) NPortId(0x%x) scsi_id(0x%x) wwpn(0x%llx) loss timeout work failed",
- lport->port_id, rport, rport->nport_id, scsi_id,
- rport->port_name);
- }
-}
-
-static void unf_rport_closing_timeout(struct work_struct *work)
-{
- /* closing --->>>(timeout)--->>> delete */
- struct unf_rport *rport = NULL;
- struct unf_lport *lport = NULL;
- struct unf_disc *disc = NULL;
- ulong rport_flag = 0;
- ulong disc_flag = 0;
- void (*unf_rport_callback)(void *, void *, u32) = NULL;
- enum unf_rport_login_state old_state;
-
- FC_CHECK_RETURN_VOID(work);
-
- /* Get R_Port & L_Port & Disc */
- rport = container_of(work, struct unf_rport, closing_work);
- if (unlikely(!rport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]RPort is NULL");
- return;
- }
-
- lport = rport->lport;
- if (unlikely(!lport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]RPort(0x%x_0x%p) Port is NULL",
- rport->nport_id, rport);
-
- /* Release directly (for timer) */
- unf_rport_ref_dec(rport);
- return;
- }
- disc = &lport->disc;
-
- spin_lock_irqsave(&rport->rport_state_lock, rport_flag);
-
- old_state = rport->rp_state;
- /* 1. Update R_Port state: event_timeout --->>> state_delete */
- unf_rport_state_ma(rport, UNF_EVENT_RPORT_CLS_TIMEOUT);
-
- /* Check R_Port state */
- if (rport->rp_state != UNF_RPORT_ST_DELETE) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x_0x%x) RPort(0x%x_0x%p) closing timeout with error state(0x%x->0x%x)",
- lport->port_id, lport->nport_id, rport->nport_id,
- rport, old_state, rport->rp_state);
-
- spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
-
- /* Dec ref_cnt for timer */
- unf_rport_ref_dec(rport);
- return;
- }
-
- unf_rport_callback = rport->unf_rport_callback;
- spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
-
- /* 2. Put R_Port to delete list */
- spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_flag);
- list_del_init(&rport->entry_rport);
- list_add_tail(&rport->entry_rport, &disc->list_delete_rports);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_flag);
-
- /* 3. Report rport link down event to scsi */
- if (unf_rport_callback) {
- unf_rport_callback((void *)rport, (void *)rport->lport, RETURN_OK);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]RPort(0x%x) callback is NULL",
- rport->nport_id);
- }
-
- /* 4. Remove/delete R_Port */
- unf_rport_ref_dec(rport);
- unf_rport_ref_dec(rport);
-}
-
-static void unf_rport_linkup_to_scsi(struct work_struct *work)
-{
- struct fc_rport_identifiers rport_ids;
- struct fc_rport *rport = NULL;
- ulong flags = 0;
- struct unf_wwpn_rport_info *wwn_rport_info = NULL;
- struct unf_rport_scsi_id_image *rport_scsi_table = NULL;
- u32 scsi_id = 0;
-
- struct unf_lport *lport = NULL;
- struct unf_rport *unf_rport = NULL;
-
- FC_CHECK_RETURN_VOID(work);
-
- unf_rport = container_of(work, struct unf_rport, start_work);
- if (unlikely(!unf_rport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]RPort is NULL for work(%p)", work);
- return;
- }
-
- lport = unf_rport->lport;
- if (unlikely(!lport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]RPort(0x%x_0x%p) Port is NULL",
- unf_rport->nport_id, unf_rport);
- return;
- }
-
- /* 1. Alloc R_Port SCSI_ID (image table) */
- unf_rport->scsi_id = unf_alloc_scsi_id(lport, unf_rport);
- if (unlikely(unf_rport->scsi_id == INVALID_VALUE32)) {
- atomic_inc(&lport->scsi_session_add_failed);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[err]Port(0x%x_0x%x) RPort(0x%x_0x%p) wwpn(0x%llx) scsi_id(0x%x) is invalid",
- lport->port_id, lport->nport_id,
- unf_rport->nport_id, unf_rport,
- unf_rport->port_name, unf_rport->scsi_id);
-
- /* NOTE: return */
- return;
- }
-
- /* 2. Add rport to scsi */
- scsi_id = unf_rport->scsi_id;
- rport_ids.node_name = unf_rport->node_name;
- rport_ids.port_name = unf_rport->port_name;
- rport_ids.port_id = unf_rport->nport_id;
- rport_ids.roles = FC_RPORT_ROLE_UNKNOWN;
- rport = fc_remote_port_add(lport->host_info.host, 0, &rport_ids);
- if (unlikely(!rport)) {
- atomic_inc(&lport->scsi_session_add_failed);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x_0x%x) RPort(0x%x_0x%p) wwpn(0x%llx) report link up to scsi failed",
- lport->port_id, lport->nport_id, unf_rport->nport_id, unf_rport,
- unf_rport->port_name);
-
- unf_free_scsi_id(lport, scsi_id);
- return;
- }
-
- /* 3. Change rport role */
- *((u32 *)rport->dd_data) = scsi_id; /* save local SCSI_ID to scsi rport */
- rport->supported_classes = FC_COS_CLASS3;
- rport_ids.roles |= FC_PORT_ROLE_FCP_TARGET;
- rport->dev_loss_tmo = (u32)unf_get_link_lose_tmo(lport); /* default 30s */
- fc_remote_port_rolechg(rport, rport_ids.roles);
-
- /* 4. Save scsi rport info to local R_Port */
- spin_lock_irqsave(&unf_rport->rport_state_lock, flags);
- unf_rport->rport = rport;
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flags);
-
- rport_scsi_table = &lport->rport_scsi_table;
- spin_lock_irqsave(&rport_scsi_table->scsi_image_table_lock, flags);
- wwn_rport_info = &rport_scsi_table->wwn_rport_info_table[scsi_id];
- wwn_rport_info->target_id = rport->scsi_target_id;
- wwn_rport_info->rport = unf_rport;
- atomic_set(&wwn_rport_info->scsi_state, UNF_SCSI_ST_ONLINE);
- spin_unlock_irqrestore(&rport_scsi_table->scsi_image_table_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[event]Port(0x%x_0x%x) RPort(0x%x) wwpn(0x%llx) scsi_id(0x%x) link up to scsi succeed",
- lport->port_id, lport->nport_id, unf_rport->nport_id,
- unf_rport->port_name, scsi_id);
-
- atomic_inc(&lport->scsi_session_add_success);
-}
-
-static void unf_rport_open_timeout(struct work_struct *work)
-{
- struct unf_rport *rport = NULL;
- struct unf_lport *lport = NULL;
- ulong flags = 0;
-
- FC_CHECK_RETURN_VOID(work);
-
- rport = container_of(work, struct unf_rport, open_work.work);
- if (!rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_WARN, "[warn]RPort is NULL");
-
- return;
- }
-
- spin_lock_irqsave(&rport->rport_state_lock, flags);
- lport = rport->lport;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) RPort(0x%x) open work timeout with state(0x%x)",
- lport->port_id, lport->nport_id, rport->nport_id,
- rport->rp_state);
-
- /* NOTE: R_Port state check */
- if (rport->rp_state != UNF_RPORT_ST_PRLI_WAIT) {
- spin_unlock_irqrestore(&rport->rport_state_lock, flags);
-
- /* Dec ref_cnt for timer case */
- unf_rport_ref_dec(rport);
- return;
- }
-
- /* Report R_Port Link Down event */
- unf_rport_state_ma(rport, UNF_EVENT_RPORT_LINK_DOWN);
- spin_unlock_irqrestore(&rport->rport_state_lock, flags);
-
- unf_rport_enter_closing(rport);
- /* Dec ref_cnt for timer case */
- unf_rport_ref_dec(rport);
-}
-
-static u32 unf_alloc_index_for_rport(struct unf_lport *lport, struct unf_rport *rport)
-{
- ulong rport_flag = 0;
- ulong pool_flag = 0;
- u32 alloc_indx = 0;
- u32 max_rport = 0;
- struct unf_rport_pool *rport_pool = NULL;
- spinlock_t *rport_scsi_tb_lock = NULL;
-
- rport_pool = &lport->rport_pool;
- rport_scsi_tb_lock = &rport_pool->rport_free_pool_lock;
- max_rport = lport->low_level_func.lport_cfg_items.max_login;
-
- max_rport = max_rport > SPFC_DEFAULT_RPORT_INDEX ? SPFC_DEFAULT_RPORT_INDEX : max_rport;
-
- spin_lock_irqsave(rport_scsi_tb_lock, pool_flag);
- while (alloc_indx < max_rport) {
- if (!test_bit((int)alloc_indx, rport_pool->rpi_bitmap)) {
- /* Case for SPFC */
- if (unlikely(atomic_read(&lport->lport_no_operate_flag) == UNF_LPORT_NOP)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) is within NOP", lport->port_id);
-
- spin_unlock_irqrestore(rport_scsi_tb_lock, pool_flag);
- return UNF_RETURN_ERROR;
- }
-
- spin_lock_irqsave(&rport->rport_state_lock, rport_flag);
- rport->rport_index = alloc_indx;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) RPort(0x%x) alloc index(0x%x) succeed",
- lport->port_id, rport->nport_id, alloc_indx);
-
- spin_unlock_irqrestore(&rport->rport_state_lock, rport_flag);
-
- /* Set (index) bit */
- set_bit((int)alloc_indx, rport_pool->rpi_bitmap);
-
- /* Break here */
- break;
- }
- alloc_indx++;
- }
- spin_unlock_irqrestore(rport_scsi_tb_lock, pool_flag);
-
- if (max_rport == alloc_indx)
- return UNF_RETURN_ERROR;
- return RETURN_OK;
-}
-
-static void unf_check_rport_pool_status(struct unf_lport *lport)
-{
- struct unf_lport *unf_lport = lport;
- struct unf_rport_pool *rport_pool = NULL;
- ulong flags = 0;
- u32 max_rport = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- rport_pool = &unf_lport->rport_pool;
-
- spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flags);
- max_rport = unf_lport->low_level_func.lport_cfg_items.max_login;
- if (rport_pool->rport_pool_completion &&
- rport_pool->rport_pool_count == max_rport) {
- complete(rport_pool->rport_pool_completion);
- }
-
- spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flags);
-}
-
-static void unf_init_rport_sq_num(struct unf_rport *rport, struct unf_lport *lport)
-{
- u32 session_order;
- u32 ssq_average_session_num;
-
- ssq_average_session_num = (lport->max_ssq_num - 1) / UNF_SQ_NUM_PER_SESSION;
- session_order = (rport->rport_index) % ssq_average_session_num;
- rport->sqn_base = (session_order * UNF_SQ_NUM_PER_SESSION);
-}
-
-void unf_init_rport_params(struct unf_rport *rport, struct unf_lport *lport)
-{
- struct unf_rport *unf_rport = rport;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(unf_rport);
- FC_CHECK_RETURN_VOID(lport);
-
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_set_rport_state(unf_rport, UNF_RPORT_ST_INIT);
- unf_rport->unf_rport_callback = unf_rport_callback;
- unf_rport->lport = lport;
- unf_rport->fcp_conf_needed = false;
- unf_rport->tape_support_needed = false;
- unf_rport->max_retries = UNF_MAX_RETRY_COUNT;
- unf_rport->logo_retries = 0;
- unf_rport->retries = 0;
- unf_rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
- unf_rport->last_lport_ini_state = UNF_PORT_STATE_LINKDOWN;
- unf_rport->lport_ini_state = UNF_PORT_STATE_LINKDOWN;
- unf_rport->last_lport_tgt_state = UNF_PORT_STATE_LINKDOWN;
- unf_rport->lport_tgt_state = UNF_PORT_STATE_LINKDOWN;
- unf_rport->node_name = 0;
- unf_rport->port_name = INVALID_WWPN;
- unf_rport->disc_done = 0;
- unf_rport->scsi_id = INVALID_VALUE32;
- unf_rport->data_thread = NULL;
- sema_init(&unf_rport->task_sema, 0);
- atomic_set(&unf_rport->rport_ref_cnt, 0);
- atomic_set(&unf_rport->pending_io_cnt, 0);
- unf_rport->rport_alloc_jifs = jiffies;
-
- unf_rport->ed_tov = UNF_DEFAULT_EDTOV + 500;
- unf_rport->ra_tov = UNF_DEFAULT_RATOV;
-
- INIT_WORK(&unf_rport->closing_work, unf_rport_closing_timeout);
- INIT_WORK(&unf_rport->start_work, unf_rport_linkup_to_scsi);
- INIT_DELAYED_WORK(&unf_rport->recovery_work, unf_rport_recovery_timeout);
- INIT_DELAYED_WORK(&unf_rport->open_work, unf_rport_open_timeout);
-
- atomic_inc(&unf_rport->rport_ref_cnt);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-}
-
-static u32 unf_alloc_ll_rport_resource(struct unf_lport *lport,
- struct unf_rport *rport, u32 nport_id)
-{
- u32 ret = RETURN_OK;
- struct unf_port_info rport_info = {0};
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- struct unf_qos_info *qos_info = NULL;
- struct unf_lport *unf_lport = NULL;
- ulong flag = 0;
-
- unf_lport = lport->root_lport;
-
- if (unf_lport->low_level_func.service_op.unf_alloc_rport_res) {
- spin_lock_irqsave(&lport->qos_mgr_lock, flag);
- rport_info.qos_level = lport->qos_level;
- list_for_each_safe(node, next_node, &lport->list_qos_head) {
- qos_info = (struct unf_qos_info *)list_entry(node, struct unf_qos_info,
- entry_qos_info);
-
- if (qos_info && qos_info->nport_id == nport_id) {
- rport_info.qos_level = qos_info->qos_level;
- break;
- }
- }
-
- spin_unlock_irqrestore(&lport->qos_mgr_lock, flag);
-
- unf_init_rport_sq_num(rport, unf_lport);
-
- rport->qos_level = rport_info.qos_level;
- rport_info.nport_id = nport_id;
- rport_info.rport_index = rport->rport_index;
- rport_info.local_nport_id = lport->nport_id;
- rport_info.port_name = 0;
- rport_info.cs_ctrl = UNF_CSCTRL_INVALID;
- rport_info.sqn_base = rport->sqn_base;
-
- if (unf_lport->priority == UNF_PRIORITY_ENABLE) {
- if (rport_info.qos_level == UNF_QOS_LEVEL_DEFAULT)
- rport_info.cs_ctrl = UNF_CSCTRL_LOW;
- else if (rport_info.qos_level == UNF_QOS_LEVEL_MIDDLE)
- rport_info.cs_ctrl = UNF_CSCTRL_MIDDLE;
- else if (rport_info.qos_level == UNF_QOS_LEVEL_HIGH)
- rport_info.cs_ctrl = UNF_CSCTRL_HIGH;
- }
-
- ret = unf_lport->low_level_func.service_op.unf_alloc_rport_res(unf_lport->fc_port,
- &rport_info);
- } else {
- ret = RETURN_OK;
- }
-
- return ret;
-}
-
-static void *unf_add_rport_to_busy_list(struct unf_lport *lport,
- struct unf_rport *new_rport,
- u32 nport_id)
-{
- struct unf_rport_pool *rport_pool = NULL;
- struct unf_lport *unf_lport = NULL;
- struct unf_disc *disc = NULL;
- struct unf_rport *unf_new_rport = new_rport;
- struct unf_rport *old_rport = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flag = 0;
- spinlock_t *rport_free_lock = NULL;
- spinlock_t *rport_busy_lock = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
- FC_CHECK_RETURN_VALUE(new_rport, NULL);
-
- unf_lport = lport->root_lport;
- disc = &lport->disc;
- FC_CHECK_RETURN_VALUE(unf_lport, NULL);
- rport_pool = &unf_lport->rport_pool;
- rport_free_lock = &rport_pool->rport_free_pool_lock;
- rport_busy_lock = &disc->rport_busy_pool_lock;
-
- spin_lock_irqsave(rport_busy_lock, flag);
- list_for_each_safe(node, next_node, &disc->list_busy_rports) {
- /* According to N_Port_ID */
- old_rport = list_entry(node, struct unf_rport, entry_rport);
- if (old_rport->nport_id == nport_id)
- break;
- old_rport = NULL;
- }
-
- if (old_rport) {
- spin_unlock_irqrestore(rport_busy_lock, flag);
-
- /* Use old R_Port & Add new R_Port back to R_Port Pool */
- spin_lock_irqsave(rport_free_lock, flag);
- clear_bit((int)unf_new_rport->rport_index, rport_pool->rpi_bitmap);
- list_add_tail(&unf_new_rport->entry_rport, &rport_pool->list_rports_pool);
- rport_pool->rport_pool_count++;
- spin_unlock_irqrestore(rport_free_lock, flag);
-
- unf_check_rport_pool_status(unf_lport);
- return (void *)old_rport;
- }
- spin_unlock_irqrestore(rport_busy_lock, flag);
- if (nport_id != UNF_FC_FID_FLOGI) {
- if (unf_alloc_ll_rport_resource(lport, unf_new_rport, nport_id) != RETURN_OK) {
- /* Add new R_Port back to R_Port Pool */
- spin_lock_irqsave(rport_free_lock, flag);
- clear_bit((int)unf_new_rport->rport_index, rport_pool->rpi_bitmap);
- list_add_tail(&unf_new_rport->entry_rport, &rport_pool->list_rports_pool);
- rport_pool->rport_pool_count++;
- spin_unlock_irqrestore(rport_free_lock, flag);
- unf_check_rport_pool_status(unf_lport);
-
- return NULL;
- }
- }
-
- spin_lock_irqsave(rport_busy_lock, flag);
- /* Add new R_Port to busy list */
- list_add_tail(&unf_new_rport->entry_rport, &disc->list_busy_rports);
- unf_new_rport->nport_id = nport_id;
- unf_new_rport->local_nport_id = lport->nport_id;
- spin_unlock_irqrestore(rport_busy_lock, flag);
- unf_init_rport_params(unf_new_rport, lport);
-
- return (void *)unf_new_rport;
-}
-
-void *unf_rport_get_free_and_init(void *lport, u32 port_type, u32 nport_id)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_rport_pool *rport_pool = NULL;
- struct unf_disc *disc = NULL;
- struct unf_disc *vport_disc = NULL;
- struct unf_rport *rport = NULL;
- struct list_head *list_head = NULL;
- ulong flag = 0;
- struct unf_disc_rport *disc_rport = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
- unf_lport = ((struct unf_lport *)lport)->root_lport;
- FC_CHECK_RETURN_VALUE(unf_lport, NULL);
-
- /* Check L_Port state: NOP */
- if (unlikely(atomic_read(&unf_lport->lport_no_operate_flag) == UNF_LPORT_NOP))
- return NULL;
-
- rport_pool = &unf_lport->rport_pool;
- disc = &unf_lport->disc;
-
- /* 1. UNF_PORT_TYPE_DISC: Get from disc_rport_pool */
- if (port_type == UNF_PORT_TYPE_DISC) {
- vport_disc = &((struct unf_lport *)lport)->disc;
- /* NOTE: list_disc_rports_pool used with list_disc_rports_busy */
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- if (!list_empty(&disc->disc_rport_mgr.list_disc_rports_pool)) {
- /* Get & delete from Disc R_Port Pool & Add it to Busy list */
- list_head = UNF_OS_LIST_NEXT(&disc->disc_rport_mgr.list_disc_rports_pool);
- list_del_init(list_head);
- disc_rport = list_entry(list_head, struct unf_disc_rport, entry_rport);
- disc_rport->nport_id = nport_id;
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- /* Add to list_disc_rports_busy */
- spin_lock_irqsave(&vport_disc->rport_busy_pool_lock, flag);
- list_add_tail(list_head, &vport_disc->disc_rport_mgr.list_disc_rports_busy);
- spin_unlock_irqrestore(&vport_disc->rport_busy_pool_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "Port(0x%x_0x%x) add nportid:0x%x to rportbusy list",
- unf_lport->port_id, unf_lport->nport_id,
- disc_rport->nport_id);
- } else {
- disc_rport = NULL;
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
- }
-
- /* NOTE: return */
- return disc_rport;
- }
-
- /* 2. UNF_PORT_TYPE_FC (rport_pool): Get from list_rports_pool */
- spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flag);
- if (!list_empty(&rport_pool->list_rports_pool)) {
- /* Get & delete from R_Port free Pool */
- list_head = UNF_OS_LIST_NEXT(&rport_pool->list_rports_pool);
- list_del_init(list_head);
- rport_pool->rport_pool_count--;
- rport = list_entry(list_head, struct unf_rport, entry_rport);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) RPort pool is empty",
- unf_lport->port_id, unf_lport->nport_id);
-
- spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
-
- return NULL;
- }
- spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
-
- /* 3. Alloc (& set bit) R_Port index */
- if (unf_alloc_index_for_rport(unf_lport, rport) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) allocate index for new RPort failed",
- unf_lport->nport_id);
-
- /* Alloc failed: Add R_Port back to R_Port Pool */
- spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flag);
- list_add_tail(&rport->entry_rport, &rport_pool->list_rports_pool);
- rport_pool->rport_pool_count++;
- spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
- unf_check_rport_pool_status(unf_lport);
- return NULL;
- }
-
- /* 4. Add R_Port to busy list */
- rport = unf_add_rport_to_busy_list(lport, rport, nport_id);
-
- return (void *)rport;
-}
-
-u32 unf_release_rport_res(struct unf_lport *lport, struct unf_rport *rport)
-{
- u32 ret = UNF_RETURN_ERROR;
- struct unf_port_info rport_info;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- memset(&rport_info, 0, sizeof(struct unf_port_info));
-
- rport_info.rport_index = rport->rport_index;
- rport_info.nport_id = rport->nport_id;
- rport_info.port_name = rport->port_name;
- rport_info.sqn_base = rport->sqn_base;
-
- /* 2. release R_Port(parent context/Session) resource */
- if (!lport->low_level_func.service_op.unf_release_rport_res) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) release rport resource function can't be NULL",
- lport->port_id);
-
- return ret;
- }
-
- ret = lport->low_level_func.service_op.unf_release_rport_res(lport->fc_port, &rport_info);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) rport_index(0x%x, %p) send release session CMND failed",
- lport->port_id, rport_info.rport_index, rport);
- }
-
- return ret;
-}
-
-static void unf_reset_rport_attribute(struct unf_rport *rport)
-{
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(rport);
-
- spin_lock_irqsave(&rport->rport_state_lock, flag);
- rport->unf_rport_callback = NULL;
- rport->lport = NULL;
- rport->node_name = INVALID_VALUE64;
- rport->port_name = INVALID_WWPN;
- rport->nport_id = INVALID_VALUE32;
- rport->local_nport_id = INVALID_VALUE32;
- rport->max_frame_size = UNF_MAX_FRAME_SIZE;
- rport->ed_tov = UNF_DEFAULT_EDTOV;
- rport->ra_tov = UNF_DEFAULT_RATOV;
- rport->rport_index = INVALID_VALUE32;
- rport->scsi_id = INVALID_VALUE32;
- rport->rport_alloc_jifs = INVALID_VALUE64;
-
- /* ini or tgt */
- rport->options = 0;
-
- /* fcp conf */
- rport->fcp_conf_needed = false;
-
- /* special req retry times */
- rport->retries = 0;
- rport->logo_retries = 0;
-
- /* special req retry times */
- rport->max_retries = UNF_MAX_RETRY_COUNT;
-
- /* for target mode */
- rport->session = NULL;
- rport->last_lport_ini_state = UNF_PORT_STATE_LINKDOWN;
- rport->lport_ini_state = UNF_PORT_STATE_LINKDOWN;
- rport->rp_state = UNF_RPORT_ST_INIT;
- rport->last_lport_tgt_state = UNF_PORT_STATE_LINKDOWN;
- rport->lport_tgt_state = UNF_PORT_STATE_LINKDOWN;
- rport->rscn_position = UNF_RPORT_NOT_NEED_PROCESS;
- rport->disc_done = 0;
- rport->sqn_base = 0;
-
- /* for scsi */
- rport->data_thread = NULL;
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-}
-
-u32 unf_rport_remove(void *rport)
-{
- struct unf_lport *lport = NULL;
- struct unf_rport *unf_rport = NULL;
- struct unf_rport_pool *rport_pool = NULL;
- ulong flag = 0;
- u32 rport_index = 0;
- u32 nport_id = 0;
-
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- unf_rport = (struct unf_rport *)rport;
- lport = unf_rport->lport;
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- rport_pool = &((struct unf_lport *)lport->root_lport)->rport_pool;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Remove RPort(0x%p) with remote_nport_id(0x%x) local_nport_id(0x%x)",
- unf_rport, unf_rport->nport_id, unf_rport->local_nport_id);
-
- /* 1. Terminate open exchange before rport remove: set ABORT tag */
- unf_cm_xchg_mgr_abort_io_by_id(lport, unf_rport, unf_rport->nport_id, lport->nport_id, 0);
-
- /* 2. Abort sfp exchange before rport remove */
- unf_cm_xchg_mgr_abort_sfs_by_id(lport, unf_rport, unf_rport->nport_id, lport->nport_id);
-
- /* 3. Release R_Port resource: session reset/delete */
- if (likely(unf_rport->nport_id != UNF_FC_FID_FLOGI))
- (void)unf_release_rport_res(lport, unf_rport);
-
- nport_id = unf_rport->nport_id;
-
- /* 4.1 Delete R_Port from disc destroy/delete list */
- spin_lock_irqsave(&lport->disc.rport_busy_pool_lock, flag);
- list_del_init(&unf_rport->entry_rport);
- spin_unlock_irqrestore(&lport->disc.rport_busy_pool_lock, flag);
-
- rport_index = unf_rport->rport_index;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[event]Port(0x%x) release RPort(0x%x_%p) with index(0x%x)",
- lport->port_id, unf_rport->nport_id, unf_rport,
- unf_rport->rport_index);
-
- unf_reset_rport_attribute(unf_rport);
-
- /* 4.2 Add rport to --->>> rport_pool (free pool) & clear bitmap */
- spin_lock_irqsave(&rport_pool->rport_free_pool_lock, flag);
- if (unlikely(nport_id == UNF_FC_FID_FLOGI)) {
- if (test_bit((int)rport_index, rport_pool->rpi_bitmap))
- clear_bit((int)rport_index, rport_pool->rpi_bitmap);
- }
-
- list_add_tail(&unf_rport->entry_rport, &rport_pool->list_rports_pool);
- rport_pool->rport_pool_count++;
- spin_unlock_irqrestore(&rport_pool->rport_free_pool_lock, flag);
-
- unf_check_rport_pool_status((struct unf_lport *)lport->root_lport);
- up(&unf_rport->task_sema);
-
- return RETURN_OK;
-}
-
-u32 unf_rport_ref_inc(struct unf_rport *rport)
-{
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- if (atomic_read(&rport->rport_ref_cnt) <= 0) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Rport(0x%x) reference count is wrong %d",
- rport->nport_id,
- atomic_read(&rport->rport_ref_cnt));
- return UNF_RETURN_ERROR;
- }
-
- atomic_inc(&rport->rport_ref_cnt);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Rport(0x%x) reference count is %d", rport->nport_id,
- atomic_read(&rport->rport_ref_cnt));
-
- return RETURN_OK;
-}
-
-void unf_rport_ref_dec(struct unf_rport *rport)
-{
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(rport);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Rport(0x%x) reference count is %d", rport->nport_id,
- atomic_read(&rport->rport_ref_cnt));
-
- spin_lock_irqsave(&rport->rport_state_lock, flag);
- if (atomic_dec_and_test(&rport->rport_ref_cnt)) {
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
- (void)unf_rport_remove(rport);
- } else {
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
- }
-}
-
-void unf_set_rport_state(struct unf_rport *rport,
- enum unf_rport_login_state states)
-{
- FC_CHECK_RETURN_VOID(rport);
-
- if (rport->rp_state != states) {
- /* Reset R_Port retry count */
- rport->retries = 0;
- }
-
- rport->rp_state = states;
-}
-
-static enum unf_rport_login_state
-unf_rport_stat_init(enum unf_rport_login_state old_state,
- enum unf_rport_event event)
-{
- enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
-
- switch (event) {
- case UNF_EVENT_RPORT_LOGO:
- next_state = UNF_RPORT_ST_LOGO;
- break;
-
- case UNF_EVENT_RPORT_ENTER_PLOGI:
- next_state = UNF_RPORT_ST_PLOGI_WAIT;
- break;
-
- case UNF_EVENT_RPORT_LINK_DOWN:
- next_state = UNF_RPORT_ST_CLOSING;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-static enum unf_rport_login_state unf_rport_stat_plogi_wait(enum unf_rport_login_state old_state,
- enum unf_rport_event event)
-{
- enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
-
- switch (event) {
- case UNF_EVENT_RPORT_ENTER_PRLI:
- next_state = UNF_RPORT_ST_PRLI_WAIT;
- break;
-
- case UNF_EVENT_RPORT_LINK_DOWN:
- next_state = UNF_RPORT_ST_CLOSING;
- break;
-
- case UNF_EVENT_RPORT_LOGO:
- next_state = UNF_RPORT_ST_LOGO;
- break;
-
- case UNF_EVENT_RPORT_RECOVERY:
- next_state = UNF_RPORT_ST_READY;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-static enum unf_rport_login_state unf_rport_stat_prli_wait(enum unf_rport_login_state old_state,
- enum unf_rport_event event)
-{
- enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
-
- switch (event) {
- case UNF_EVENT_RPORT_READY:
- next_state = UNF_RPORT_ST_READY;
- break;
-
- case UNF_EVENT_RPORT_LOGO:
- next_state = UNF_RPORT_ST_LOGO;
- break;
-
- case UNF_EVENT_RPORT_LINK_DOWN:
- next_state = UNF_RPORT_ST_CLOSING;
- break;
-
- case UNF_EVENT_RPORT_RECOVERY:
- next_state = UNF_RPORT_ST_READY;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-static enum unf_rport_login_state unf_rport_stat_ready(enum unf_rport_login_state old_state,
- enum unf_rport_event event)
-{
- enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
-
- switch (event) {
- case UNF_EVENT_RPORT_LOGO:
- next_state = UNF_RPORT_ST_LOGO;
- break;
-
- case UNF_EVENT_RPORT_LINK_DOWN:
- next_state = UNF_RPORT_ST_CLOSING;
- break;
-
- case UNF_EVENT_RPORT_ENTER_PLOGI:
- next_state = UNF_RPORT_ST_PLOGI_WAIT;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-static enum unf_rport_login_state unf_rport_stat_closing(enum unf_rport_login_state old_state,
- enum unf_rport_event event)
-{
- enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
-
- switch (event) {
- case UNF_EVENT_RPORT_CLS_TIMEOUT:
- next_state = UNF_RPORT_ST_DELETE;
- break;
-
- case UNF_EVENT_RPORT_RELOGIN:
- next_state = UNF_RPORT_ST_INIT;
- break;
-
- case UNF_EVENT_RPORT_RECOVERY:
- next_state = UNF_RPORT_ST_READY;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-static enum unf_rport_login_state unf_rport_stat_logo(enum unf_rport_login_state old_state,
- enum unf_rport_event event)
-{
- enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
-
- switch (event) {
- case UNF_EVENT_RPORT_NORMAL_ENTER:
- next_state = UNF_RPORT_ST_CLOSING;
- break;
-
- case UNF_EVENT_RPORT_RECOVERY:
- next_state = UNF_RPORT_ST_READY;
- break;
-
- default:
- next_state = old_state;
- break;
- }
-
- return next_state;
-}
-
-void unf_rport_state_ma(struct unf_rport *rport, enum unf_rport_event event)
-{
- enum unf_rport_login_state old_state = UNF_RPORT_ST_INIT;
- enum unf_rport_login_state next_state = UNF_RPORT_ST_INIT;
-
- FC_CHECK_RETURN_VOID(rport);
-
- old_state = rport->rp_state;
-
- switch (rport->rp_state) {
- case UNF_RPORT_ST_INIT:
- next_state = unf_rport_stat_init(old_state, event);
- break;
- case UNF_RPORT_ST_PLOGI_WAIT:
- next_state = unf_rport_stat_plogi_wait(old_state, event);
- break;
- case UNF_RPORT_ST_PRLI_WAIT:
- next_state = unf_rport_stat_prli_wait(old_state, event);
- break;
- case UNF_RPORT_ST_LOGO:
- next_state = unf_rport_stat_logo(old_state, event);
- break;
- case UNF_RPORT_ST_CLOSING:
- next_state = unf_rport_stat_closing(old_state, event);
- break;
- case UNF_RPORT_ST_READY:
- next_state = unf_rport_stat_ready(old_state, event);
- break;
- case UNF_RPORT_ST_DELETE:
- default:
- next_state = UNF_RPORT_ST_INIT;
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_MAJOR, "[info]RPort(0x%x) hold state(0x%x)",
- rport->nport_id, rport->rp_state);
- break;
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MINOR,
- "[info]RPort(0x%x) with oldstate(0x%x) event(0x%x) nextstate(0x%x)",
- rport->nport_id, old_state, event, next_state);
-
- unf_set_rport_state(rport, next_state);
-}
-
-void unf_clean_linkdown_rport(struct unf_lport *lport)
-{
- /* for L_Port's R_Port(s) */
- struct unf_disc *disc = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- struct unf_rport *rport = NULL;
- struct unf_lport *unf_lport = NULL;
- ulong disc_lock_flag = 0;
- ulong rport_lock_flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- disc = &lport->disc;
-
- /* for each busy R_Port */
- spin_lock_irqsave(&disc->rport_busy_pool_lock, disc_lock_flag);
- list_for_each_safe(node, next_node, &disc->list_busy_rports) {
- rport = list_entry(node, struct unf_rport, entry_rport);
-
- /* 1. Prevent process Repeatly: Closing */
- spin_lock_irqsave(&rport->rport_state_lock, rport_lock_flag);
- if (rport->rp_state == UNF_RPORT_ST_CLOSING) {
- spin_unlock_irqrestore(&rport->rport_state_lock, rport_lock_flag);
- continue;
- }
-
- /* 2. Increase ref_cnt to protect R_Port */
- if (unf_rport_ref_inc(rport) != RETURN_OK) {
- spin_unlock_irqrestore(&rport->rport_state_lock, rport_lock_flag);
- continue;
- }
-
- /* 3. Update R_Port state: Link Down Event --->>> closing state
- */
- unf_rport_state_ma(rport, UNF_EVENT_RPORT_LINK_DOWN);
-
- /* 4. Put R_Port from busy to destroy list */
- list_del_init(&rport->entry_rport);
- list_add_tail(&rport->entry_rport, &disc->list_destroy_rports);
-
- unf_lport = rport->lport;
- spin_unlock_irqrestore(&rport->rport_state_lock, rport_lock_flag);
-
- /* 5. Schedule Closing work (Enqueuing workqueue) */
- unf_schedule_closing_work(unf_lport, rport);
-
- /* 6. decrease R_Port ref_cnt (company with 2) */
- unf_rport_ref_dec(rport);
- }
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, disc_lock_flag);
-}
-
-void unf_rport_enter_closing(struct unf_rport *rport)
-{
- /*
- * call by
- * 1. with RSCN processer
- * 2. with LOGOUT processer
- * *
- * from
- * 1. R_Port Link Down
- * 2. R_Port enter LOGO
- */
- ulong rport_lock_flag = 0;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_lport *lport = NULL;
- struct unf_disc *disc = NULL;
-
- FC_CHECK_RETURN_VOID(rport);
-
- /* 1. Increase ref_cnt to protect R_Port */
- spin_lock_irqsave(&rport->rport_state_lock, rport_lock_flag);
- ret = unf_rport_ref_inc(rport);
- if (ret != RETURN_OK) {
- spin_unlock_irqrestore(&rport->rport_state_lock, rport_lock_flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]RPort(0x%x_0x%p) is removing and no need process",
- rport->nport_id, rport);
-
- return;
- }
-
- /* NOTE: R_Port state has been set(with closing) */
-
- lport = rport->lport;
- spin_unlock_irqrestore(&rport->rport_state_lock, rport_lock_flag);
-
- /* 2. Put R_Port from busy to destroy list */
- disc = &lport->disc;
- spin_lock_irqsave(&disc->rport_busy_pool_lock, rport_lock_flag);
- list_del_init(&rport->entry_rport);
- list_add_tail(&rport->entry_rport, &disc->list_destroy_rports);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, rport_lock_flag);
-
- /* 3. Schedule Closing work (Enqueuing workqueue) */
- unf_schedule_closing_work(lport, rport);
-
- /* 4. dec R_Port ref_cnt */
- unf_rport_ref_dec(rport);
-}
-
-void unf_rport_error_recovery(struct unf_rport *rport)
-{
- ulong delay = 0;
- ulong flag = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VOID(rport);
-
- spin_lock_irqsave(&rport->rport_state_lock, flag);
-
- ret = unf_rport_ref_inc(rport);
- if (ret != RETURN_OK) {
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]RPort(0x%x_0x%p) is removing and no need process",
- rport->nport_id, rport);
- return;
- }
-
- /* Check R_Port state */
- if (rport->rp_state == UNF_RPORT_ST_CLOSING ||
- rport->rp_state == UNF_RPORT_ST_DELETE) {
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]RPort(0x%x_0x%p) offline and no need process",
- rport->nport_id, rport);
-
- unf_rport_ref_dec(rport);
- return;
- }
-
- /* Check repeatability with recovery work */
- if (delayed_work_pending(&rport->recovery_work)) {
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]RPort(0x%x_0x%p) recovery work is running and no need process",
- rport->nport_id, rport);
-
- unf_rport_ref_dec(rport);
- return;
- }
-
- /* NOTE: Re-login or Logout directly (recovery work) */
- if (rport->retries < rport->max_retries) {
- rport->retries++;
- delay = UNF_DEFAULT_EDTOV / 4;
-
- if (queue_delayed_work(unf_wq, &rport->recovery_work,
- (ulong)msecs_to_jiffies((u32)delay))) {
- /* Inc ref_cnt: corresponding to this work timer */
- (void)unf_rport_ref_inc(rport);
- }
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]RPort(0x%x_0x%p) state(0x%x) retry login failed",
- rport->nport_id, rport, rport->rp_state);
-
- /* Update R_Port state: LOGO event --->>> ST_LOGO */
- unf_rport_state_ma(rport, UNF_EVENT_RPORT_LOGO);
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- unf_rport_enter_logo(rport->lport, rport);
- }
-
- unf_rport_ref_dec(rport);
-}
-
-static u32 unf_rport_reuse_only(struct unf_rport *rport)
-{
- ulong flag = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- spin_lock_irqsave(&rport->rport_state_lock, flag);
- ret = unf_rport_ref_inc(rport);
- if (ret != RETURN_OK) {
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- /* R_Port with delete state */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]RPort(0x%x_0x%p) is removing and no need process",
- rport->nport_id, rport);
-
- return UNF_RETURN_ERROR;
- }
-
- /* R_Port State check: delete */
- if (rport->rp_state == UNF_RPORT_ST_DELETE ||
- rport->rp_state == UNF_RPORT_ST_CLOSING) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]RPort(0x%x_0x%p) state(0x%x) is delete or closing no need process",
- rport->nport_id, rport, rport->rp_state);
-
- ret = UNF_RETURN_ERROR;
- }
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- unf_rport_ref_dec(rport);
-
- return ret;
-}
-
-static u32 unf_rport_reuse_recover(struct unf_rport *rport)
-{
- ulong flags = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- spin_lock_irqsave(&rport->rport_state_lock, flags);
- ret = unf_rport_ref_inc(rport);
- if (ret != RETURN_OK) {
- spin_unlock_irqrestore(&rport->rport_state_lock, flags);
-
- /* R_Port with delete state */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]RPort(0x%x_0x%p) is removing and no need process",
- rport->nport_id, rport);
-
- return UNF_RETURN_ERROR;
- }
-
- /* R_Port state check: delete */
- if (rport->rp_state == UNF_RPORT_ST_DELETE ||
- rport->rp_state == UNF_RPORT_ST_CLOSING) {
- ret = UNF_RETURN_ERROR;
- }
-
- /* Update R_Port state: recovery --->>> ready */
- unf_rport_state_ma(rport, UNF_EVENT_RPORT_RECOVERY);
- spin_unlock_irqrestore(&rport->rport_state_lock, flags);
-
- unf_rport_ref_dec(rport);
-
- return ret;
-}
-
-static u32 unf_rport_reuse_init(struct unf_rport *rport)
-{
- ulong flag = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- spin_lock_irqsave(&rport->rport_state_lock, flag);
- ret = unf_rport_ref_inc(rport);
- if (ret != RETURN_OK) {
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- /* R_Port with delete state */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]RPort(0x%x_0x%p) is removing and no need process",
- rport->nport_id, rport);
-
- return UNF_RETURN_ERROR;
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]RPort(0x%x)'s state is 0x%x with use_init flag",
- rport->nport_id, rport->rp_state);
-
- /* R_Port State check: delete */
- if (rport->rp_state == UNF_RPORT_ST_DELETE ||
- rport->rp_state == UNF_RPORT_ST_CLOSING) {
- ret = UNF_RETURN_ERROR;
- } else {
- /* Update R_Port state: re-enter Init state */
- unf_set_rport_state(rport, UNF_RPORT_ST_INIT);
- }
- spin_unlock_irqrestore(&rport->rport_state_lock, flag);
-
- unf_rport_ref_dec(rport);
-
- return ret;
-}
-
-struct unf_rport *unf_get_rport_by_nport_id(struct unf_lport *lport,
- u32 nport_id)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_disc *disc = NULL;
- struct unf_rport *rport = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flag = 0;
- struct unf_rport *find_rport = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
- unf_lport = (struct unf_lport *)lport;
- disc = &unf_lport->disc;
-
- /* for each r_port from rport_busy_list: compare N_Port_ID */
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- list_for_each_safe(node, next_node, &disc->list_busy_rports) {
- rport = list_entry(node, struct unf_rport, entry_rport);
- if (rport && rport->nport_id == nport_id) {
- find_rport = rport;
- break;
- }
- }
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- return find_rport;
-}
-
-struct unf_rport *unf_get_rport_by_wwn(struct unf_lport *lport, u64 wwpn)
-{
- struct unf_lport *unf_lport = NULL;
- struct unf_disc *disc = NULL;
- struct unf_rport *rport = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flag = 0;
- struct unf_rport *find_rport = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
- unf_lport = (struct unf_lport *)lport;
- disc = &unf_lport->disc;
-
- /* for each r_port from busy_list: compare wwpn(port name) */
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flag);
- list_for_each_safe(node, next_node, &disc->list_busy_rports) {
- rport = list_entry(node, struct unf_rport, entry_rport);
- if (rport && rport->port_name == wwpn) {
- find_rport = rport;
- break;
- }
- }
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flag);
-
- return find_rport;
-}
-
-struct unf_rport *unf_get_safe_rport(struct unf_lport *lport,
- struct unf_rport *rport,
- enum unf_rport_reuse_flag reuse_flag,
- u32 nport_id)
-{
- /*
- * New add or plug
- * *
- * retry_flogi --->>> reuse_only
- * name_server_register --->>> reuse_only
- * SNS_plogi --->>> reuse_only
- * enter_flogi --->>> reuse_only
- * logout --->>> reuse_only
- * flogi_handler --->>> reuse_only
- * plogi_handler --->>> reuse_only
- * adisc_handler --->>> reuse_recovery
- * logout_handler --->>> reuse_init
- * prlo_handler --->>> reuse_init
- * login_with_loop --->>> reuse_only
- * gffid_callback --->>> reuse_only
- * delay_plogi --->>> reuse_only
- * gffid_rjt --->>> reuse_only
- * gffid_rsp_unknown --->>> reuse_only
- * gpnid_acc --->>> reuse_init
- * fdisc_callback --->>> reuse_only
- * flogi_acc --->>> reuse_only
- * plogi_acc --->>> reuse_only
- * logo_callback --->>> reuse_init
- * rffid_callback --->>> reuse_only
- */
-#define UNF_AVOID_LINK_FLASH_TIME 3000
-
- struct unf_rport *unf_rport = rport;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
-
- /* 1. Alloc New R_Port or Update R_Port Property */
- if (!unf_rport) {
- /* If NULL, get/Alloc new node (R_Port from R_Port pool)
- * directly
- */
- unf_rport = unf_rport_get_free_and_init(lport, UNF_PORT_TYPE_FC, nport_id);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_INFO,
- "[info]Port(0x%x) get exist RPort(0x%x) with state(0x%x) and reuse_flag(0x%x)",
- lport->port_id, unf_rport->nport_id,
- unf_rport->rp_state, reuse_flag);
-
- switch (reuse_flag) {
- case UNF_RPORT_REUSE_ONLY:
- ret = unf_rport_reuse_only(unf_rport);
- if (ret != RETURN_OK) {
- /* R_Port within delete list: need get new */
- unf_rport = unf_rport_get_free_and_init(lport, UNF_PORT_TYPE_FC,
- nport_id);
- }
- break;
-
- case UNF_RPORT_REUSE_INIT:
- ret = unf_rport_reuse_init(unf_rport);
- if (ret != RETURN_OK) {
- /* R_Port within delete list: need get new */
- unf_rport = unf_rport_get_free_and_init(lport, UNF_PORT_TYPE_FC,
- nport_id);
- }
- break;
-
- case UNF_RPORT_REUSE_RECOVER:
- ret = unf_rport_reuse_recover(unf_rport);
- if (ret != RETURN_OK) {
- /* R_Port within delete list,
- * NOTE: do nothing
- */
- unf_rport = NULL;
- }
- break;
-
- default:
- break;
- }
- } // end else: R_Port != NULL
-
- return unf_rport;
-}
-
-u32 unf_get_port_feature(u64 wwpn)
-{
- struct unf_rport_feature_recard *port_fea = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- ulong flags = 0;
- struct list_head list_temp_node;
- struct list_head *list_busy_head = NULL;
- struct list_head *list_free_head = NULL;
- spinlock_t *feature_lock = NULL;
-
- list_busy_head = &port_feature_pool->list_busy_head;
- list_free_head = &port_feature_pool->list_free_head;
- feature_lock = &port_feature_pool->port_fea_pool_lock;
- spin_lock_irqsave(feature_lock, flags);
- list_for_each_safe(node, next_node, list_busy_head) {
- port_fea = list_entry(node, struct unf_rport_feature_recard, entry_feature);
-
- if (port_fea->wwpn == wwpn) {
- list_del(&port_fea->entry_feature);
- list_add(&port_fea->entry_feature, list_busy_head);
- spin_unlock_irqrestore(feature_lock, flags);
-
- return port_fea->port_feature;
- }
- }
-
- list_for_each_safe(node, next_node, list_free_head) {
- port_fea = list_entry(node, struct unf_rport_feature_recard, entry_feature);
-
- if (port_fea->wwpn == wwpn) {
- list_del(&port_fea->entry_feature);
- list_add(&port_fea->entry_feature, list_busy_head);
- spin_unlock_irqrestore(feature_lock, flags);
-
- return port_fea->port_feature;
- }
- }
-
- /* can't find wwpn */
- if (list_empty(list_free_head)) {
- /* free is empty, transport busy to free */
- list_temp_node = port_feature_pool->list_free_head;
- port_feature_pool->list_free_head = port_feature_pool->list_busy_head;
- port_feature_pool->list_busy_head = list_temp_node;
- }
-
- port_fea = list_entry(UNF_OS_LIST_PREV(list_free_head),
- struct unf_rport_feature_recard,
- entry_feature);
- list_del(&port_fea->entry_feature);
- list_add(&port_fea->entry_feature, list_busy_head);
-
- port_fea->wwpn = wwpn;
- port_fea->port_feature = UNF_PORT_MODE_UNKNOWN;
-
- spin_unlock_irqrestore(feature_lock, flags);
- return UNF_PORT_MODE_UNKNOWN;
-}
-
-void unf_update_port_feature(u64 wwpn, u32 port_feature)
-{
- struct unf_rport_feature_recard *port_fea = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- struct list_head *busy_head = NULL;
- struct list_head *free_head = NULL;
- ulong flags = 0;
- spinlock_t *feature_lock = NULL;
-
- feature_lock = &port_feature_pool->port_fea_pool_lock;
- busy_head = &port_feature_pool->list_busy_head;
- free_head = &port_feature_pool->list_free_head;
-
- spin_lock_irqsave(feature_lock, flags);
- list_for_each_safe(node, next_node, busy_head) {
- port_fea = list_entry(node, struct unf_rport_feature_recard, entry_feature);
-
- if (port_fea->wwpn == wwpn) {
- port_fea->port_feature = port_feature;
- list_del(&port_fea->entry_feature);
- list_add(&port_fea->entry_feature, busy_head);
- spin_unlock_irqrestore(feature_lock, flags);
-
- return;
- }
- }
-
- list_for_each_safe(node, next_node, free_head) {
- port_fea = list_entry(node, struct unf_rport_feature_recard, entry_feature);
-
- if (port_fea->wwpn == wwpn) {
- port_fea->port_feature = port_feature;
- list_del(&port_fea->entry_feature);
- list_add(&port_fea->entry_feature, busy_head);
-
- spin_unlock_irqrestore(feature_lock, flags);
-
- return;
- }
- }
-
- spin_unlock_irqrestore(feature_lock, flags);
-}
diff --git a/drivers/scsi/spfc/common/unf_rport.h b/drivers/scsi/spfc/common/unf_rport.h
deleted file mode 100644
index a9d58cb29b8a..000000000000
--- a/drivers/scsi/spfc/common/unf_rport.h
+++ /dev/null
@@ -1,301 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_RPORT_H
-#define UNF_RPORT_H
-
-#include "unf_type.h"
-#include "unf_common.h"
-#include "unf_lport.h"
-
-extern struct unf_rport_feature_pool *port_feature_pool;
-
-#define UNF_MAX_SCSI_ID 2048
-#define UNF_LOSE_TMO 30
-#define UNF_RPORT_INVALID_INDEX 0xffff
-
-/* RSCN compare DISC list with local RPort macro */
-#define UNF_RPORT_NEED_PROCESS 0x1
-#define UNF_RPORT_ONLY_IN_DISC_PROCESS 0x2
-#define UNF_RPORT_ONLY_IN_LOCAL_PROCESS 0x3
-#define UNF_RPORT_IN_DISC_AND_LOCAL_PROCESS 0x4
-#define UNF_RPORT_NOT_NEED_PROCESS 0x5
-
-#define UNF_ECHO_SEND_MAX_TIMES 1
-
-/* csctrl level value */
-#define UNF_CSCTRL_LOW 0x81
-#define UNF_CSCTRL_MIDDLE 0x82
-#define UNF_CSCTRL_HIGH 0x83
-#define UNF_CSCTRL_INVALID 0x0
-
-enum unf_rport_login_state {
- UNF_RPORT_ST_INIT = 0x1000, /* initialized */
- UNF_RPORT_ST_PLOGI_WAIT, /* waiting for PLOGI completion */
- UNF_RPORT_ST_PRLI_WAIT, /* waiting for PRLI completion */
- UNF_RPORT_ST_READY, /* ready for use */
- UNF_RPORT_ST_LOGO, /* port logout sent */
- UNF_RPORT_ST_CLOSING, /* being closed */
- UNF_RPORT_ST_DELETE, /* port being deleted */
- UNF_RPORT_ST_BUTT
-};
-
-enum unf_rport_event {
- UNF_EVENT_RPORT_NORMAL_ENTER = 0x9000,
- UNF_EVENT_RPORT_ENTER_PLOGI = 0x9001,
- UNF_EVENT_RPORT_ENTER_PRLI = 0x9002,
- UNF_EVENT_RPORT_READY = 0x9003,
- UNF_EVENT_RPORT_LOGO = 0x9004,
- UNF_EVENT_RPORT_CLS_TIMEOUT = 0x9005,
- UNF_EVENT_RPORT_RECOVERY = 0x9006,
- UNF_EVENT_RPORT_RELOGIN = 0x9007,
- UNF_EVENT_RPORT_LINK_DOWN = 0x9008,
- UNF_EVENT_RPORT_BUTT
-};
-
-/* RPort local link state */
-enum unf_port_state {
- UNF_PORT_STATE_LINKUP = 0x1001,
- UNF_PORT_STATE_LINKDOWN = 0x1002
-};
-
-enum unf_rport_reuse_flag {
- UNF_RPORT_REUSE_ONLY = 0x1001,
- UNF_RPORT_REUSE_INIT = 0x1002,
- UNF_RPORT_REUSE_RECOVER = 0x1003
-};
-
-struct unf_disc_rport {
- /* RPort entry */
- struct list_head entry_rport;
-
- u32 nport_id; /* Remote port NPortID */
- u32 disc_done; /* 1:Disc done */
-};
-
-struct unf_rport_feature_pool {
- struct list_head list_busy_head;
- struct list_head list_free_head;
- void *port_feature_pool_addr;
- spinlock_t port_fea_pool_lock;
-};
-
-struct unf_rport_feature_recard {
- struct list_head entry_feature;
- u64 wwpn;
- u32 port_feature;
- u32 reserved;
-};
-
-struct unf_os_thread_private_data {
- struct list_head list;
- spinlock_t spin_lock;
- struct task_struct *thread;
- unsigned int in_process;
- unsigned int cpu_id;
- atomic_t user_count;
-};
-
-/* Remote Port struct */
-struct unf_rport {
- u32 max_frame_size;
- u32 supported_classes;
-
- /* Dynamic Attributes */
- /* Remote Port loss timeout in seconds. */
- u32 dev_loss_tmo;
-
- u64 node_name;
- u64 port_name;
- u32 nport_id; /* Remote port NPortID */
- u32 local_nport_id;
-
- u32 roles;
-
- /* Remote port local INI state */
- enum unf_port_state lport_ini_state;
- enum unf_port_state last_lport_ini_state;
-
- /* Remote port local TGT state */
- enum unf_port_state lport_tgt_state;
- enum unf_port_state last_lport_tgt_state;
-
- /* Port Type,fc or fcoe */
- u32 port_type;
-
- /* RPort reference counter */
- atomic_t rport_ref_cnt;
-
- /* Pending IO count */
- atomic_t pending_io_cnt;
-
- /* RPort entry */
- struct list_head entry_rport;
-
- /* Port State,delay reclaim when uiRpState == complete. */
- enum unf_rport_login_state rp_state;
- u32 disc_done; /* 1:Disc done */
-
- struct unf_lport *lport;
- void *rport;
- spinlock_t rport_state_lock;
-
- /* Port attribution */
- u32 ed_tov;
- u32 ra_tov;
- u32 options; /* ini or tgt */
- u32 last_report_link_up_options;
- u32 fcp_conf_needed; /* INI Rport send FCP CONF flag */
- u32 tape_support_needed; /* INI tape support flag */
- u32 retries; /* special req retry times */
- u32 logo_retries; /* logo error recovery retry times */
- u32 max_retries; /* special req retry times */
- u64 rport_alloc_jifs; /* Rport alloc jiffies */
-
- void *session;
-
- /* binding with SCSI */
- u32 scsi_id;
-
- /* disc list compare flag */
- u32 rscn_position;
-
- u32 rport_index;
-
- u32 sqn_base;
- enum unf_rport_qos_level qos_level;
-
- /* RPort timer,closing status */
- struct work_struct closing_work;
-
- /* RPort timer,rport linkup */
- struct work_struct start_work;
-
- /* RPort timer,recovery */
- struct delayed_work recovery_work;
-
- /* RPort timer,TGT mode,PRLI waiting */
- struct delayed_work open_work;
-
- struct semaphore task_sema;
- /* Callback after rport Ready/delete.[with state:ok/fail].Creat/free TGT session here */
- /* input : L_Port,R_Port,state:ready --creat session/delete--free session */
- void (*unf_rport_callback)(void *rport, void *lport, u32 result);
-
- struct unf_os_thread_private_data *data_thread;
-};
-
-#define UNF_IO_RESULT_CNT(scsi_table, scsi_id, io_result) \
- do { \
- if (likely(((io_result) < UNF_MAX_IO_RETURN_VALUE) && \
- ((scsi_id) < UNF_MAX_SCSI_ID) && \
- ((scsi_table)->wwn_rport_info_table) && \
- (((scsi_table)->wwn_rport_info_table[scsi_id].dfx_counter)))) {\
- atomic64_inc(&((scsi_table)->wwn_rport_info_table[scsi_id] \
- .dfx_counter->io_done_cnt[(io_result)])); \
- } else { \
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, \
- UNF_ERR, \
- "[err] io return value(0x%x) or " \
- "scsi id(0x%x) is invalid", \
- io_result, scsi_id); \
- } \
- } while (0)
-
-#define UNF_SCSI_CMD_CNT(scsi_table, scsi_id, io_type) \
- do { \
- if (likely(((io_type) < UNF_MAX_SCSI_CMD) && \
- ((scsi_id) < UNF_MAX_SCSI_ID) && \
- ((scsi_table)->wwn_rport_info_table) && \
- (((scsi_table)->wwn_rport_info_table[scsi_id].dfx_counter)))) { \
- atomic64_inc(&(((scsi_table)->wwn_rport_info_table[scsi_id]) \
- .dfx_counter->scsi_cmd_cnt[io_type])); \
- } else { \
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, \
- UNF_ERR, \
- "[err] scsi_cmd(0x%x) or scsi id(0x%x) " \
- "is invalid", \
- io_type, scsi_id); \
- } \
- } while (0)
-
-#define UNF_SCSI_ERROR_HANDLE_CNT(scsi_table, scsi_id, io_type) \
- do { \
- if (likely(((io_type) < UNF_SCSI_ERROR_HANDLE_BUTT) && \
- ((scsi_id) < UNF_MAX_SCSI_ID) && \
- ((scsi_table)->wwn_rport_info_table) && \
- (((scsi_table)->wwn_rport_info_table[scsi_id] \
- .dfx_counter)))) { \
- atomic_inc(&((scsi_table)->wwn_rport_info_table[scsi_id] \
- .dfx_counter->error_handle[io_type])); \
- } else { \
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, \
- UNF_ERR, \
- "[err] scsi_cmd(0x%x) or scsi id(0x%x) " \
- "is invalid", \
- (io_type), (scsi_id)); \
- } \
- } while (0)
-
-#define UNF_SCSI_ERROR_HANDLE_RESULT_CNT(scsi_table, scsi_id, io_type) \
- do { \
- if (likely(((io_type) < UNF_SCSI_ERROR_HANDLE_BUTT) && \
- ((scsi_id) < UNF_MAX_SCSI_ID) && \
- ((scsi_table)->wwn_rport_info_table) &&\
- (((scsi_table)-> \
- wwn_rport_info_table[scsi_id].dfx_counter)))) { \
- atomic_inc(&( \
- (scsi_table) \
- ->wwn_rport_info_table[scsi_id] \
- .dfx_counter->error_handle_result[io_type])); \
- } else { \
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, \
- UNF_ERR, \
- "[err] scsi_cmd(0x%x) or scsi id(0x%x) " \
- "is invalid", \
- io_type, scsi_id); \
- } \
- } while (0)
-
-void unf_rport_state_ma(struct unf_rport *rport, enum unf_rport_event event);
-void unf_update_lport_state_by_linkup_event(struct unf_lport *lport,
- struct unf_rport *rport,
- u32 rport_att);
-
-void unf_set_rport_state(struct unf_rport *rport, enum unf_rport_login_state states);
-void unf_rport_enter_closing(struct unf_rport *rport);
-u32 unf_release_rport_res(struct unf_lport *lport, struct unf_rport *rport);
-u32 unf_initrport_mgr_temp(struct unf_lport *lport);
-void unf_clean_linkdown_rport(struct unf_lport *lport);
-void unf_rport_error_recovery(struct unf_rport *rport);
-struct unf_rport *unf_get_rport_by_nport_id(struct unf_lport *lport, u32 nport_id);
-struct unf_rport *unf_get_rport_by_wwn(struct unf_lport *lport, u64 wwpn);
-void unf_rport_enter_logo(struct unf_lport *lport, struct unf_rport *rport);
-u32 unf_rport_ref_inc(struct unf_rport *rport);
-void unf_rport_ref_dec(struct unf_rport *rport);
-
-struct unf_rport *unf_rport_set_qualifier_key_reuse(struct unf_lport *lport,
- struct unf_rport *rport_by_nport_id,
- struct unf_rport *rport_by_wwpn,
- u64 wwpn, u32 sid);
-void unf_rport_delay_login(struct unf_rport *rport);
-struct unf_rport *unf_find_valid_rport(struct unf_lport *lport, u64 wwpn,
- u32 sid);
-void unf_rport_linkdown(struct unf_lport *lport, struct unf_rport *rport);
-void unf_apply_for_session(struct unf_lport *lport, struct unf_rport *rport);
-struct unf_rport *unf_get_safe_rport(struct unf_lport *lport,
- struct unf_rport *rport,
- enum unf_rport_reuse_flag reuse_flag,
- u32 nport_id);
-void *unf_rport_get_free_and_init(void *lport, u32 port_type, u32 nport_id);
-
-void unf_set_device_state(struct unf_lport *lport, u32 scsi_id, int scsi_state);
-u32 unf_get_scsi_id_by_wwpn(struct unf_lport *lport, u64 wwpn);
-u32 unf_get_device_state(struct unf_lport *lport, u32 scsi_id);
-u32 unf_free_scsi_id(struct unf_lport *lport, u32 scsi_id);
-void unf_schedule_closing_work(struct unf_lport *lport, struct unf_rport *rport);
-void unf_sesion_loss_timeout(struct work_struct *work);
-u32 unf_get_port_feature(u64 wwpn);
-void unf_update_port_feature(u64 wwpn, u32 port_feature);
-
-#endif
diff --git a/drivers/scsi/spfc/common/unf_scsi.c b/drivers/scsi/spfc/common/unf_scsi.c
deleted file mode 100644
index 3615d95c77e9..000000000000
--- a/drivers/scsi/spfc/common/unf_scsi.c
+++ /dev/null
@@ -1,1462 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "unf_type.h"
-#include "unf_log.h"
-#include "unf_scsi_common.h"
-#include "unf_lport.h"
-#include "unf_rport.h"
-#include "unf_portman.h"
-#include "unf_exchg.h"
-#include "unf_exchg_abort.h"
-#include "unf_npiv.h"
-#include "unf_io.h"
-
-#define UNF_LUN_ID_MASK 0x00000000ffff0000
-#define UNF_CMD_PER_LUN 3
-
-static int unf_scsi_queue_cmd(struct Scsi_Host *phost, struct scsi_cmnd *pcmd);
-static int unf_scsi_abort_scsi_cmnd(struct scsi_cmnd *v_cmnd);
-static int unf_scsi_device_reset_handler(struct scsi_cmnd *v_cmnd);
-static int unf_scsi_bus_reset_handler(struct scsi_cmnd *v_cmnd);
-static int unf_scsi_target_reset_handler(struct scsi_cmnd *v_cmnd);
-static int unf_scsi_slave_alloc(struct scsi_device *sdev);
-static void unf_scsi_destroy_slave(struct scsi_device *sdev);
-static int unf_scsi_slave_configure(struct scsi_device *sdev);
-static int unf_scsi_scan_finished(struct Scsi_Host *shost, unsigned long time);
-static void unf_scsi_scan_start(struct Scsi_Host *shost);
-
-static struct scsi_transport_template *scsi_transport_template;
-static struct scsi_transport_template *scsi_transport_template_v;
-
-struct unf_ini_error_code ini_error_code_table1[] = {
- {UNF_IO_SUCCESS, UNF_SCSI_HOST(DID_OK)},
- {UNF_IO_ABORTED, UNF_SCSI_HOST(DID_ABORT)},
- {UNF_IO_FAILED, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_ABORT_ABTS, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_ABORT_LOGIN, UNF_SCSI_HOST(DID_NO_CONNECT)},
- {UNF_IO_ABORT_REET, UNF_SCSI_HOST(DID_RESET)},
- {UNF_IO_ABORT_FAILED, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_OUTOF_ORDER, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_FTO, UNF_SCSI_HOST(DID_TIME_OUT)},
- {UNF_IO_LINK_FAILURE, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_OVER_FLOW, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_RSP_OVER, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_LOST_FRAME, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_UNDER_FLOW, UNF_SCSI_HOST(DID_OK)},
- {UNF_IO_HOST_PROG_ERROR, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_SEST_PROG_ERROR, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_INVALID_ENTRY, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_ABORT_SEQ_NOT, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_REJECT, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_EDC_IN_ERROR, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_EDC_OUT_ERROR, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_UNINIT_KEK_ERR, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_DEK_OUTOF_RANGE, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_KEY_UNWRAP_ERR, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_KEY_TAG_ERR, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_KEY_ECC_ERR, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_BLOCK_SIZE_ERROR, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_ILLEGAL_CIPHER_MODE, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_CLEAN_UP, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_ABORTED_BY_TARGET, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_TRANSPORT_ERROR, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_LINK_FLASH, UNF_SCSI_HOST(DID_NO_CONNECT)},
- {UNF_IO_TIMEOUT, UNF_SCSI_HOST(DID_TIME_OUT)},
- {UNF_IO_DMA_ERROR, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_NO_LPORT, UNF_SCSI_HOST(DID_NO_CONNECT)},
- {UNF_IO_NO_XCHG, UNF_SCSI_HOST(DID_SOFT_ERROR)},
- {UNF_IO_SOFT_ERR, UNF_SCSI_HOST(DID_SOFT_ERROR)},
- {UNF_IO_PORT_LOGOUT, UNF_SCSI_HOST(DID_NO_CONNECT)},
- {UNF_IO_ERREND, UNF_SCSI_HOST(DID_ERROR)},
- {UNF_IO_DIF_ERROR, (UNF_SCSI_HOST(DID_OK) | UNF_SCSI_STATUS(SCSI_CHECK_CONDITION))},
- {UNF_IO_INCOMPLETE, UNF_SCSI_HOST(DID_IMM_RETRY)},
- {UNF_IO_DIF_REF_ERROR, (UNF_SCSI_HOST(DID_OK) | UNF_SCSI_STATUS(SCSI_CHECK_CONDITION))},
- {UNF_IO_DIF_GEN_ERROR, (UNF_SCSI_HOST(DID_OK) | UNF_SCSI_STATUS(SCSI_CHECK_CONDITION))}
-};
-
-u32 ini_err_code_table_cnt1 = sizeof(ini_error_code_table1) / sizeof(struct unf_ini_error_code);
-
-static void unf_set_rport_loss_tmo(struct fc_rport *rport, u32 timeout)
-{
- if (timeout)
- rport->dev_loss_tmo = timeout;
- else
- rport->dev_loss_tmo = 1;
-}
-
-static void unf_get_host_port_id(struct Scsi_Host *shost)
-{
- struct unf_lport *unf_lport = NULL;
-
- unf_lport = (struct unf_lport *)shost->hostdata[0];
- if (unlikely(!unf_lport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is null");
- return;
- }
-
- fc_host_port_id(shost) = unf_lport->port_id;
-}
-
-static void unf_get_host_speed(struct Scsi_Host *shost)
-{
- struct unf_lport *unf_lport = NULL;
- u32 speed = FC_PORTSPEED_UNKNOWN;
-
- unf_lport = (struct unf_lport *)shost->hostdata[0];
- if (unlikely(!unf_lport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is null");
- return;
- }
-
- switch (unf_lport->speed) {
- case UNF_PORT_SPEED_2_G:
- speed = FC_PORTSPEED_2GBIT;
- break;
- case UNF_PORT_SPEED_4_G:
- speed = FC_PORTSPEED_4GBIT;
- break;
- case UNF_PORT_SPEED_8_G:
- speed = FC_PORTSPEED_8GBIT;
- break;
- case UNF_PORT_SPEED_16_G:
- speed = FC_PORTSPEED_16GBIT;
- break;
- case UNF_PORT_SPEED_32_G:
- speed = FC_PORTSPEED_32GBIT;
- break;
- default:
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) with unknown speed(0x%x) for FC mode",
- unf_lport->port_id, unf_lport->speed);
- break;
- }
-
- fc_host_speed(shost) = speed;
-}
-
-static void unf_get_host_port_type(struct Scsi_Host *shost)
-{
- struct unf_lport *unf_lport = NULL;
- u32 port_type = FC_PORTTYPE_UNKNOWN;
-
- unf_lport = (struct unf_lport *)shost->hostdata[0];
- if (unlikely(!unf_lport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is null");
- return;
- }
-
- switch (unf_lport->act_topo) {
- case UNF_ACT_TOP_PRIVATE_LOOP:
- port_type = FC_PORTTYPE_LPORT;
- break;
- case UNF_ACT_TOP_PUBLIC_LOOP:
- port_type = FC_PORTTYPE_NLPORT;
- break;
- case UNF_ACT_TOP_P2P_DIRECT:
- port_type = FC_PORTTYPE_PTP;
- break;
- case UNF_ACT_TOP_P2P_FABRIC:
- port_type = FC_PORTTYPE_NPORT;
- break;
- default:
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) with unknown topo type(0x%x) for FC mode",
- unf_lport->port_id, unf_lport->act_topo);
- break;
- }
-
- fc_host_port_type(shost) = port_type;
-}
-
-static void unf_get_symbolic_name(struct Scsi_Host *shost)
-{
- u8 *name = NULL;
- struct unf_lport *unf_lport = NULL;
-
- unf_lport = (struct unf_lport *)(uintptr_t)shost->hostdata[0];
- if (unlikely(!unf_lport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Check l_port failed");
- return;
- }
-
- name = fc_host_symbolic_name(shost);
- if (name)
- snprintf(name, FC_SYMBOLIC_NAME_SIZE, "SPFC_FW_RELEASE:%s SPFC_DRV_RELEASE:%s",
- unf_lport->fw_version, SPFC_DRV_VERSION);
-}
-
-static void unf_get_host_fabric_name(struct Scsi_Host *shost)
-{
- struct unf_lport *unf_lport = NULL;
-
- unf_lport = (struct unf_lport *)shost->hostdata[0];
-
- if (unlikely(!unf_lport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is null");
- return;
- }
- fc_host_fabric_name(shost) = unf_lport->fabric_node_name;
-}
-
-static void unf_get_host_port_state(struct Scsi_Host *shost)
-{
- struct unf_lport *unf_lport = NULL;
- enum fc_port_state port_state;
-
- unf_lport = (struct unf_lport *)shost->hostdata[0];
- if (unlikely(!unf_lport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is null");
- return;
- }
-
- switch (unf_lport->link_up) {
- case UNF_PORT_LINK_DOWN:
- port_state = FC_PORTSTATE_OFFLINE;
- break;
- case UNF_PORT_LINK_UP:
- port_state = FC_PORTSTATE_ONLINE;
- break;
- default:
- port_state = FC_PORTSTATE_UNKNOWN;
- break;
- }
-
- fc_host_port_state(shost) = port_state;
-}
-
-static void unf_dev_loss_timeout_callbk(struct fc_rport *rport)
-{
- /*
- * NOTE: about rport->dd_data
- * --->>> local SCSI_ID
- * 1. Assignment during scsi rport link up
- * 2. Released when scsi rport link down & timeout(30s)
- * 3. Used during scsi do callback with slave_alloc function
- */
- struct Scsi_Host *host = NULL;
- struct unf_lport *unf_lport = NULL;
- u32 scsi_id = 0;
-
- if (unlikely(!rport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]SCSI rport is null");
- return;
- }
-
- host = rport_to_shost(rport);
- if (unlikely(!host)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Host is null");
- return;
- }
-
- scsi_id = *(u32 *)(rport->dd_data); /* according to Local SCSI_ID */
- if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]rport(0x%p) scsi_id(0x%x) is max than(0x%x)",
- rport, scsi_id, UNF_MAX_SCSI_ID);
- return;
- }
-
- unf_lport = (struct unf_lport *)host->hostdata[0];
- if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[event]Port(0x%x_0x%x) rport(0x%p) scsi_id(0x%x) target_id(0x%x) loss timeout",
- unf_lport->port_id, unf_lport->nport_id, rport,
- scsi_id, rport->scsi_target_id);
-
- atomic_inc(&unf_lport->session_loss_tmo);
-
- /* Free SCSI ID & set table state with DEAD */
- (void)unf_free_scsi_id(unf_lport, scsi_id);
- unf_xchg_up_abort_io_by_scsi_id(unf_lport, scsi_id);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(%p) is invalid", unf_lport);
- }
-
- *((u32 *)rport->dd_data) = INVALID_VALUE32;
-}
-
-int unf_scsi_create_vport(struct fc_vport *fc_port, bool disabled)
-{
- struct unf_lport *vport = NULL;
- struct unf_lport *unf_lport = NULL;
- struct Scsi_Host *shost = NULL;
- struct vport_config vport_config = {0};
-
- shost = vport_to_shost(fc_port);
-
- unf_lport = (struct unf_lport *)shost->hostdata[0];
- if (unf_is_lport_valid(unf_lport) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(%p) is invalid", unf_lport);
-
- return RETURN_ERROR;
- }
-
- vport_config.port_name = fc_port->port_name;
-
- vport_config.port_mode = fc_port->roles;
-
- vport = unf_creat_vport(unf_lport, &vport_config);
- if (!vport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) Create Vport failed on lldrive",
- unf_lport->port_id);
-
- return RETURN_ERROR;
- }
-
- fc_port->dd_data = vport;
- vport->vport = fc_port;
-
- return RETURN_OK;
-}
-
-int unf_scsi_delete_vport(struct fc_vport *fc_port)
-{
- int ret = RETURN_ERROR;
- struct unf_lport *vport = NULL;
-
- vport = (struct unf_lport *)fc_port->dd_data;
- if (unf_is_lport_valid(vport) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]VPort(%p) is invalid or is removing", vport);
-
- fc_port->dd_data = NULL;
-
- return ret;
- }
-
- ret = (int)unf_destroy_one_vport(vport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]VPort(0x%x) destroy failed on drive", vport->port_id);
-
- return ret;
- }
-
- fc_port->dd_data = NULL;
- return ret;
-}
-
-struct fc_function_template function_template = {
- .show_host_node_name = 1,
- .show_host_port_name = 1,
- .show_host_supported_classes = 1,
- .show_host_supported_speeds = 1,
-
- .get_host_port_id = unf_get_host_port_id,
- .show_host_port_id = 1,
- .get_host_speed = unf_get_host_speed,
- .show_host_speed = 1,
- .get_host_port_type = unf_get_host_port_type,
- .show_host_port_type = 1,
- .get_host_symbolic_name = unf_get_symbolic_name,
- .show_host_symbolic_name = 1,
- .set_host_system_hostname = NULL,
- .show_host_system_hostname = 1,
- .get_host_fabric_name = unf_get_host_fabric_name,
- .show_host_fabric_name = 1,
- .get_host_port_state = unf_get_host_port_state,
- .show_host_port_state = 1,
-
- .dd_fcrport_size = sizeof(void *),
- .show_rport_supported_classes = 1,
-
- .get_starget_node_name = NULL,
- .show_starget_node_name = 1,
- .get_starget_port_name = NULL,
- .show_starget_port_name = 1,
- .get_starget_port_id = NULL,
- .show_starget_port_id = 1,
-
- .set_rport_dev_loss_tmo = unf_set_rport_loss_tmo,
- .show_rport_dev_loss_tmo = 0,
-
- .issue_fc_host_lip = NULL,
- .dev_loss_tmo_callbk = unf_dev_loss_timeout_callbk,
- .terminate_rport_io = NULL,
- .get_fc_host_stats = NULL,
-
- .vport_create = unf_scsi_create_vport,
- .vport_disable = NULL,
- .vport_delete = unf_scsi_delete_vport,
- .bsg_request = NULL,
- .bsg_timeout = NULL,
-};
-
-struct fc_function_template function_template_v = {
- .show_host_node_name = 1,
- .show_host_port_name = 1,
- .show_host_supported_classes = 1,
- .show_host_supported_speeds = 1,
-
- .get_host_port_id = unf_get_host_port_id,
- .show_host_port_id = 1,
- .get_host_speed = unf_get_host_speed,
- .show_host_speed = 1,
- .get_host_port_type = unf_get_host_port_type,
- .show_host_port_type = 1,
- .get_host_symbolic_name = unf_get_symbolic_name,
- .show_host_symbolic_name = 1,
- .set_host_system_hostname = NULL,
- .show_host_system_hostname = 1,
- .get_host_fabric_name = unf_get_host_fabric_name,
- .show_host_fabric_name = 1,
- .get_host_port_state = unf_get_host_port_state,
- .show_host_port_state = 1,
-
- .dd_fcrport_size = sizeof(void *),
- .show_rport_supported_classes = 1,
-
- .get_starget_node_name = NULL,
- .show_starget_node_name = 1,
- .get_starget_port_name = NULL,
- .show_starget_port_name = 1,
- .get_starget_port_id = NULL,
- .show_starget_port_id = 1,
-
- .set_rport_dev_loss_tmo = unf_set_rport_loss_tmo,
- .show_rport_dev_loss_tmo = 0,
-
- .issue_fc_host_lip = NULL,
- .dev_loss_tmo_callbk = unf_dev_loss_timeout_callbk,
- .terminate_rport_io = NULL,
- .get_fc_host_stats = NULL,
-
- .vport_create = NULL,
- .vport_disable = NULL,
- .vport_delete = NULL,
- .bsg_request = NULL,
- .bsg_timeout = NULL,
-};
-
-struct scsi_host_template scsi_host_template = {
- .module = THIS_MODULE,
- .name = "SPFC",
-
- .queuecommand = unf_scsi_queue_cmd,
- .eh_timed_out = fc_eh_timed_out,
- .eh_abort_handler = unf_scsi_abort_scsi_cmnd,
- .eh_device_reset_handler = unf_scsi_device_reset_handler,
-
- .eh_target_reset_handler = unf_scsi_target_reset_handler,
- .eh_bus_reset_handler = unf_scsi_bus_reset_handler,
- .eh_host_reset_handler = NULL,
-
- .slave_configure = unf_scsi_slave_configure,
- .slave_alloc = unf_scsi_slave_alloc,
- .slave_destroy = unf_scsi_destroy_slave,
-
- .scan_finished = unf_scsi_scan_finished,
- .scan_start = unf_scsi_scan_start,
-
- .this_id = -1, /* this_id: -1 */
- .cmd_per_lun = UNF_CMD_PER_LUN,
- .shost_attrs = NULL,
- .sg_tablesize = SG_ALL,
- .max_sectors = UNF_MAX_SECTORS,
- .supported_mode = MODE_INITIATOR,
-};
-
-void unf_unmap_prot_sgl(struct scsi_cmnd *cmnd)
-{
- struct device *dev = NULL;
-
- if ((scsi_get_prot_op(cmnd) != SCSI_PROT_NORMAL) && spfc_dif_enable &&
- (scsi_prot_sg_count(cmnd))) {
- dev = cmnd->device->host->dma_dev;
- dma_unmap_sg(dev, scsi_prot_sglist(cmnd),
- (int)scsi_prot_sg_count(cmnd),
- cmnd->sc_data_direction);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "scsi done cmd:%p op:%u, difsglcount:%u", cmnd,
- scsi_get_prot_op(cmnd), scsi_prot_sg_count(cmnd));
- }
-}
-
-void unf_scsi_done(struct unf_scsi_cmnd *scsi_cmd)
-{
- struct scsi_cmnd *cmd = NULL;
-
- cmd = (struct scsi_cmnd *)scsi_cmd->upper_cmnd;
- FC_CHECK_RETURN_VOID(scsi_cmd);
- FC_CHECK_RETURN_VOID(cmd);
- FC_CHECK_RETURN_VOID(cmd->scsi_done);
- scsi_set_resid(cmd, (int)scsi_cmd->resid);
-
- cmd->result = scsi_cmd->result;
- scsi_dma_unmap(cmd);
- unf_unmap_prot_sgl(cmd);
- return cmd->scsi_done(cmd);
-}
-
-static void unf_get_protect_op(struct scsi_cmnd *cmd,
- struct unf_dif_control_info *dif_control_info)
-{
- switch (scsi_get_prot_op(cmd)) {
- /* OS-HBA: Unprotected, HBA-Target: Protected */
- case SCSI_PROT_READ_STRIP:
- dif_control_info->protect_opcode |= UNF_DIF_ACTION_VERIFY_AND_DELETE;
- break;
- case SCSI_PROT_WRITE_INSERT:
- dif_control_info->protect_opcode |= UNF_DIF_ACTION_INSERT;
- break;
-
- /* OS-HBA: Protected, HBA-Target: Unprotected */
- case SCSI_PROT_READ_INSERT:
- dif_control_info->protect_opcode |= UNF_DIF_ACTION_INSERT;
- break;
- case SCSI_PROT_WRITE_STRIP:
- dif_control_info->protect_opcode |= UNF_DIF_ACTION_VERIFY_AND_DELETE;
- break;
-
- /* OS-HBA: Protected, HBA-Target: Protected */
- case SCSI_PROT_READ_PASS:
- case SCSI_PROT_WRITE_PASS:
- dif_control_info->protect_opcode |= UNF_DIF_ACTION_VERIFY_AND_FORWARD;
- break;
-
- default:
- dif_control_info->protect_opcode |= UNF_DIF_ACTION_VERIFY_AND_FORWARD;
- break;
- }
-}
-
-int unf_get_protect_mode(struct unf_lport *lport, struct scsi_cmnd *scsi_cmd,
- struct unf_scsi_cmnd *unf_scsi_cmd)
-{
- struct scsi_cmnd *cmd = NULL;
- int dif_seg_cnt = 0;
- struct unf_dif_control_info *dif_control_info = NULL;
-
- cmd = scsi_cmd;
- dif_control_info = &unf_scsi_cmd->dif_control;
-
- unf_get_protect_op(cmd, dif_control_info);
-
- if (dif_sgl_mode)
- dif_control_info->flags |= UNF_DIF_DOUBLE_SGL;
- dif_control_info->flags |= ((cmd->device->sector_size) == SECTOR_SIZE_4096)
- ? UNF_DIF_SECTSIZE_4KB : UNF_DIF_SECTSIZE_512;
- dif_control_info->protect_opcode |= UNF_VERIFY_CRC_MASK | UNF_VERIFY_LBA_MASK;
- dif_control_info->dif_sge_count = scsi_prot_sg_count(cmd);
- dif_control_info->dif_sgl = scsi_prot_sglist(cmd);
- dif_control_info->start_lba = cpu_to_le32(((uint32_t)(0xffffffff & scsi_get_lba(cmd))));
-
- if (cmd->device->sector_size == SECTOR_SIZE_4096)
- dif_control_info->start_lba = dif_control_info->start_lba >> UNF_SHIFT_3;
-
- if (scsi_prot_sg_count(cmd)) {
- dif_seg_cnt = dma_map_sg(&lport->low_level_func.dev->dev, scsi_prot_sglist(cmd),
- (int)scsi_prot_sg_count(cmd), cmd->sc_data_direction);
- if (unlikely(!dif_seg_cnt)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) cmd:%p map dif sgl err",
- lport->port_id, cmd);
- return UNF_RETURN_ERROR;
- }
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "build scsi cmd:%p op:%u,difsglcount:%u,difsegcnt:%u", cmd,
- scsi_get_prot_op(cmd), scsi_prot_sg_count(cmd),
- dif_seg_cnt);
- return RETURN_OK;
-}
-
-static u32 unf_get_rport_qos_level(struct scsi_cmnd *cmd, u32 scsi_id,
- struct unf_rport_scsi_id_image *scsi_image_table)
-{
- enum unf_rport_qos_level level = 0;
-
- if (!scsi_image_table->wwn_rport_info_table[scsi_id].lun_qos_level ||
- cmd->device->lun >= UNF_MAX_LUN_PER_TARGET) {
- level = 0;
- } else {
- level = (scsi_image_table->wwn_rport_info_table[scsi_id]
- .lun_qos_level[cmd->device->lun]);
- }
- return level;
-}
-
-u32 unf_get_frame_entry_buf(void *up_cmnd, void *driver_sgl, void **upper_sgl,
- u32 *port_id, u32 *index, char **buf, u32 *buf_len)
-{
-#define SPFC_MAX_DMA_LENGTH (0x20000 - 1)
- struct scatterlist *scsi_sgl = *upper_sgl;
-
- if (unlikely(!scsi_sgl)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Command(0x%p) can not get SGL.", up_cmnd);
- return RETURN_ERROR;
- }
- *buf = (char *)sg_dma_address(scsi_sgl);
- *buf_len = sg_dma_len(scsi_sgl);
- *upper_sgl = (void *)sg_next(scsi_sgl);
- if (unlikely((*buf_len > SPFC_MAX_DMA_LENGTH) || (*buf_len == 0))) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Command(0x%p) dmalen:0x%x is not support.",
- up_cmnd, *buf_len);
- return RETURN_ERROR;
- }
-
- return RETURN_OK;
-}
-
-static void unf_init_scsi_cmnd(struct Scsi_Host *host, struct scsi_cmnd *cmd,
- struct unf_scsi_cmnd *scsi_cmnd,
- struct unf_rport_scsi_id_image *scsi_image_table,
- int datasegcnt)
-{
- static atomic64_t count;
- enum unf_rport_qos_level level = 0;
- u32 scsi_id = 0;
-
- scsi_id = (u32)((u64)cmd->device->hostdata);
- level = unf_get_rport_qos_level(cmd, scsi_id, scsi_image_table);
- scsi_cmnd->scsi_host_id = host->host_no; /* save host_no to scsi_cmnd->scsi_host_id */
- scsi_cmnd->scsi_id = scsi_id;
- scsi_cmnd->raw_lun_id = ((u64)cmd->device->lun << 16) & UNF_LUN_ID_MASK;
- scsi_cmnd->data_direction = cmd->sc_data_direction;
- scsi_cmnd->under_flow = cmd->underflow;
- scsi_cmnd->cmnd_len = cmd->cmd_len;
- scsi_cmnd->pcmnd = cmd->cmnd;
- scsi_cmnd->transfer_len = cpu_to_le32((uint32_t)scsi_bufflen(cmd));
- scsi_cmnd->sense_buflen = UNF_SCSI_SENSE_BUFFERSIZE;
- scsi_cmnd->sense_buf = cmd->sense_buffer;
- scsi_cmnd->time_out = 0;
- scsi_cmnd->upper_cmnd = cmd;
- scsi_cmnd->drv_private = (void *)(*(u64 *)shost_priv(host));
- scsi_cmnd->entry_count = datasegcnt;
- scsi_cmnd->sgl = scsi_sglist(cmd);
- scsi_cmnd->unf_ini_get_sgl_entry = unf_get_frame_entry_buf;
- scsi_cmnd->done = unf_scsi_done;
- scsi_cmnd->lun_id = (u8 *)&scsi_cmnd->raw_lun_id;
- scsi_cmnd->err_code_table_cout = ini_err_code_table_cnt1;
- scsi_cmnd->err_code_table = ini_error_code_table1;
- scsi_cmnd->world_id = INVALID_WORLD_ID;
- scsi_cmnd->cmnd_sn = atomic64_inc_return(&count);
- scsi_cmnd->qos_level = level;
- if (unlikely(scsi_cmnd->cmnd_sn == 0))
- scsi_cmnd->cmnd_sn = atomic64_inc_return(&count);
-}
-
-static void unf_io_error_done(struct scsi_cmnd *cmd,
- struct unf_rport_scsi_id_image *scsi_image_table,
- u32 scsi_id, u32 result)
-{
- cmd->result = (int)(result << UNF_SHIFT_16);
- cmd->scsi_done(cmd);
- if (scsi_image_table)
- UNF_IO_RESULT_CNT(scsi_image_table, scsi_id, result);
-}
-
-static bool unf_scan_device_cmd(struct scsi_cmnd *cmd)
-{
- return ((cmd->cmnd[0] == INQUIRY) || (cmd->cmnd[0] == REPORT_LUNS));
-}
-
-static int unf_scsi_queue_cmd(struct Scsi_Host *phost, struct scsi_cmnd *pcmd)
-{
- struct Scsi_Host *host = NULL;
- struct scsi_cmnd *cmd = NULL;
- struct unf_scsi_cmnd scsi_cmd = {0};
- u32 scsi_id = 0;
- u32 scsi_state = 0;
- int ret = SCSI_MLQUEUE_HOST_BUSY;
- struct unf_lport *unf_lport = NULL;
- struct fc_rport *rport = NULL;
- struct unf_rport_scsi_id_image *scsi_image_table = NULL;
- struct unf_rport *unf_rport = NULL;
- u32 cmnd_result = 0;
- u32 rport_state_err = 0;
- bool scan_device_cmd = false;
- int datasegcnt = 0;
-
- host = phost;
- cmd = pcmd;
- FC_CHECK_RETURN_VALUE(host, RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(cmd, RETURN_ERROR);
-
- /* Get L_Port from scsi_cmd */
- unf_lport = (struct unf_lport *)host->hostdata[0];
- if (unlikely(!unf_lport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Check l_port failed, cmd(%p)", cmd);
- unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
- return 0;
- }
-
- /* Check device/session local state by device_id */
- scsi_id = (u32)((u64)cmd->device->hostdata);
- if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) scsi_id(0x%x) is max than %d",
- unf_lport->port_id, scsi_id, UNF_MAX_SCSI_ID);
- unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
- return 0;
- }
-
- scsi_image_table = &unf_lport->rport_scsi_table;
- UNF_SCSI_CMD_CNT(scsi_image_table, scsi_id, cmd->cmnd[0]);
-
- /* Get scsi r_port */
- rport = starget_to_rport(scsi_target(cmd->device));
- if (unlikely(!rport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) cmd(%p) to get scsi rport failed",
- unf_lport->port_id, cmd);
- unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
- return 0;
- }
-
- if (unlikely(!scsi_image_table->wwn_rport_info_table)) {
- FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_WARN,
- "[warn]LPort porid(0x%x) WwnRportInfoTable NULL",
- unf_lport->port_id);
- unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
- return 0;
- }
-
- if (unlikely(unf_lport->port_removing)) {
- FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_WARN,
- "[warn]Port(0x%x) scsi_id(0x%x) rport(0x%p) target_id(0x%x) cmd(0x%p) unf_lport removing",
- unf_lport->port_id, scsi_id, rport, rport->scsi_target_id, cmd);
- unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
- return 0;
- }
-
- scsi_state = atomic_read(&scsi_image_table->wwn_rport_info_table[scsi_id].scsi_state);
- if (unlikely(scsi_state != UNF_SCSI_ST_ONLINE)) {
- if (scsi_state == UNF_SCSI_ST_OFFLINE) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) scsi_state(0x%x) scsi_id(0x%x) rport(0x%p) target_id(0x%x) cmd(0x%p), target is busy",
- unf_lport->port_id, scsi_state, scsi_id, rport,
- rport->scsi_target_id, cmd);
-
- scan_device_cmd = unf_scan_device_cmd(cmd);
- /* report lun or inquiry cmd, if send failed, do not
- * retry, prevent
- * the scan_mutex in scsi host locked up by eachother
- */
- if (scan_device_cmd) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) host(0x%x) scsi_id(0x%x) lun(0x%llx) cmd(0x%x) DID_NO_CONNECT",
- unf_lport->port_id, host->host_no, scsi_id,
- (u64)cmd->device->lun, cmd->cmnd[0]);
- unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
- return 0;
- }
-
- if (likely(scsi_image_table->wwn_rport_info_table)) {
- if (likely(scsi_image_table->wwn_rport_info_table[scsi_id]
- .dfx_counter)) {
- atomic64_inc(&(scsi_image_table
- ->wwn_rport_info_table[scsi_id]
- .dfx_counter->target_busy));
- }
- }
-
- /* Target busy: need scsi retry */
- return SCSI_MLQUEUE_TARGET_BUSY;
- }
- /* timeout(DEAD): scsi_done & return 0 & I/O error */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) scsi_id(0x%x) rport(0x%p) target_id(0x%x) cmd(0x%p), target is loss timeout",
- unf_lport->port_id, scsi_id, rport,
- rport->scsi_target_id, cmd);
- unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
- return 0;
- }
-
- if (scsi_sg_count(cmd)) {
- datasegcnt = dma_map_sg(&unf_lport->low_level_func.dev->dev, scsi_sglist(cmd),
- (int)scsi_sg_count(cmd), cmd->sc_data_direction);
- if (unlikely(!datasegcnt)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) scsi_id(0x%x) rport(0x%p) target_id(0x%x) cmd(0x%p), dma map sg err",
- unf_lport->port_id, scsi_id, rport,
- rport->scsi_target_id, cmd);
- unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_BUS_BUSY);
- return SCSI_MLQUEUE_HOST_BUSY;
- }
- }
-
- /* Construct local SCSI CMND info */
- unf_init_scsi_cmnd(host, cmd, &scsi_cmd, scsi_image_table, datasegcnt);
-
- if ((scsi_get_prot_op(cmd) != SCSI_PROT_NORMAL) && spfc_dif_enable) {
- ret = unf_get_protect_mode(unf_lport, cmd, &scsi_cmd);
- if (ret != RETURN_OK) {
- unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_BUS_BUSY);
- scsi_dma_unmap(cmd);
- return SCSI_MLQUEUE_HOST_BUSY;
- }
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) host(0x%x) scsi_id(0x%x) lun(0x%llx) transfer length(0x%x) cmd_len(0x%x) direction(0x%x) cmd(0x%x) under_flow(0x%x) protect_opcode is (0x%x) dif_sgl_mode is %d, sector size(%d)",
- unf_lport->port_id, host->host_no, scsi_id, (u64)cmd->device->lun,
- scsi_cmd.transfer_len, scsi_cmd.cmnd_len, cmd->sc_data_direction,
- scsi_cmd.pcmnd[0], scsi_cmd.under_flow,
- scsi_cmd.dif_control.protect_opcode, dif_sgl_mode,
- (cmd->device->sector_size));
-
- /* Bind the Exchange address corresponding to scsi_cmd to
- * scsi_cmd->host_scribble
- */
- cmd->host_scribble = (unsigned char *)scsi_cmd.cmnd_sn;
- ret = unf_cm_queue_command(&scsi_cmd);
- if (ret != RETURN_OK) {
- unf_rport = unf_find_rport_by_scsi_id(unf_lport, ini_error_code_table1,
- ini_err_code_table_cnt1,
- scsi_id, &cmnd_result);
- rport_state_err = (!unf_rport) ||
- (unf_rport->lport_ini_state != UNF_PORT_STATE_LINKUP) ||
- (unf_rport->rp_state == UNF_RPORT_ST_CLOSING);
- scan_device_cmd = unf_scan_device_cmd(cmd);
-
- /* report lun or inquiry cmd if send failed, do not
- * retry,prevent the scan_mutex in scsi host locked up by
- * eachother
- */
- if (rport_state_err && scan_device_cmd) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) host(0x%x) scsi_id(0x%x) lun(0x%llx) cmd(0x%x) cmResult(0x%x) DID_NO_CONNECT",
- unf_lport->port_id, host->host_no, scsi_id,
- (u64)cmd->device->lun, cmd->cmnd[0],
- cmnd_result);
- unf_io_error_done(cmd, scsi_image_table, scsi_id, DID_NO_CONNECT);
- scsi_dma_unmap(cmd);
- unf_unmap_prot_sgl(cmd);
- return 0;
- }
-
- /* Host busy: scsi need to retry */
- ret = SCSI_MLQUEUE_HOST_BUSY;
- if (likely(scsi_image_table->wwn_rport_info_table)) {
- if (likely(scsi_image_table->wwn_rport_info_table[scsi_id].dfx_counter)) {
- atomic64_inc(&(scsi_image_table->wwn_rport_info_table[scsi_id]
- .dfx_counter->host_busy));
- }
- }
- scsi_dma_unmap(cmd);
- unf_unmap_prot_sgl(cmd);
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) return(0x%x) to process INI IO falid",
- unf_lport->port_id, ret);
- }
- return ret;
-}
-
-static void unf_init_abts_tmf_scsi_cmd(struct scsi_cmnd *cmnd,
- struct unf_scsi_cmnd *scsi_cmd,
- bool abort_cmd)
-{
- struct Scsi_Host *scsi_host = NULL;
-
- scsi_host = cmnd->device->host;
- scsi_cmd->scsi_host_id = scsi_host->host_no;
- scsi_cmd->scsi_id = (u32)((u64)cmnd->device->hostdata);
- scsi_cmd->raw_lun_id = (u64)cmnd->device->lun;
- scsi_cmd->upper_cmnd = cmnd;
- scsi_cmd->drv_private = (void *)(*(u64 *)shost_priv(scsi_host));
- scsi_cmd->cmnd_sn = (u64)(cmnd->host_scribble);
- scsi_cmd->lun_id = (u8 *)&scsi_cmd->raw_lun_id;
- if (abort_cmd) {
- scsi_cmd->done = unf_scsi_done;
- scsi_cmd->world_id = INVALID_WORLD_ID;
- }
-}
-
-int unf_scsi_abort_scsi_cmnd(struct scsi_cmnd *cmnd)
-{
- /* SCSI ABORT Command --->>> FC ABTS */
- struct unf_scsi_cmnd scsi_cmd = {0};
- int ret = FAILED;
- struct unf_rport_scsi_id_image *scsi_image_table = NULL;
- struct unf_lport *unf_lport = NULL;
- u32 scsi_id = 0;
- u32 err_handle = 0;
-
- FC_CHECK_RETURN_VALUE(cmnd, FAILED);
-
- unf_lport = (struct unf_lport *)cmnd->device->host->hostdata[0];
- scsi_id = (u32)((u64)cmnd->device->hostdata);
-
- if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
- scsi_image_table = &unf_lport->rport_scsi_table;
- err_handle = UNF_SCSI_ABORT_IO_TYPE;
- UNF_SCSI_ERROR_HANDLE_CNT(scsi_image_table, scsi_id, err_handle);
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[abort]Port(0x%x) scsi_id(0x%x) lun_id(0x%x) cmnd_type(0x%x)",
- unf_lport->port_id, scsi_id,
- (u32)cmnd->device->lun, cmnd->cmnd[0]);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Lport(%p) is moving or null", unf_lport);
- return UNF_SCSI_ABORT_FAIL;
- }
-
- /* Check local SCSI_ID validity */
- if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
-			     "[err]scsi_id(0x%x) exceeds max(0x%x)", scsi_id,
- UNF_MAX_SCSI_ID);
- return UNF_SCSI_ABORT_FAIL;
- }
-
- /* Block scsi (check rport state -> whether offline or not) */
- ret = fc_block_scsi_eh(cmnd);
- if (unlikely(ret != 0)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Block scsi eh failed(0x%x)", ret);
- return ret;
- }
-
- unf_init_abts_tmf_scsi_cmd(cmnd, &scsi_cmd, true);
- /* Process scsi Abort cmnd */
- ret = unf_cm_eh_abort_handler(&scsi_cmd);
- if (ret == UNF_SCSI_ABORT_SUCCESS) {
- if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
- scsi_image_table = &unf_lport->rport_scsi_table;
- err_handle = UNF_SCSI_ABORT_IO_TYPE;
- UNF_SCSI_ERROR_HANDLE_RESULT_CNT(scsi_image_table,
- scsi_id, err_handle);
- }
- }
-
- return ret;
-}
-
-int unf_scsi_device_reset_handler(struct scsi_cmnd *cmnd)
-{
- /* LUN reset */
- struct unf_scsi_cmnd scsi_cmd = {0};
- struct unf_rport_scsi_id_image *scsi_image_table = NULL;
- int ret = FAILED;
- struct unf_lport *unf_lport = NULL;
- u32 scsi_id = 0;
- u32 err_handle = 0;
-
- FC_CHECK_RETURN_VALUE(cmnd, FAILED);
-
- unf_lport = (struct unf_lport *)cmnd->device->host->hostdata[0];
- if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
- scsi_image_table = &unf_lport->rport_scsi_table;
- err_handle = UNF_SCSI_DEVICE_RESET_TYPE;
- UNF_SCSI_ERROR_HANDLE_CNT(scsi_image_table, scsi_id, err_handle);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[device_reset]Port(0x%x) scsi_id(0x%x) lun_id(0x%x) cmnd_type(0x%x)",
- unf_lport->port_id, scsi_id, (u32)cmnd->device->lun, cmnd->cmnd[0]);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is invalid");
-
- return FAILED;
- }
-
- /* Check local SCSI_ID validity */
- scsi_id = (u32)((u64)cmnd->device->hostdata);
- if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
-			     "[err]scsi_id(0x%x) exceeds max(0x%x)", scsi_id,
- UNF_MAX_SCSI_ID);
-
- return FAILED;
- }
-
- /* Block scsi (check rport state -> whether offline or not) */
- ret = fc_block_scsi_eh(cmnd);
- if (unlikely(ret != 0)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Block scsi eh failed(0x%x)", ret);
-
- return ret;
- }
-
- unf_init_abts_tmf_scsi_cmd(cmnd, &scsi_cmd, false);
- /* Process scsi device/LUN reset cmnd */
- ret = unf_cm_eh_device_reset_handler(&scsi_cmd);
- if (ret == UNF_SCSI_ABORT_SUCCESS) {
- if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
- scsi_image_table = &unf_lport->rport_scsi_table;
- err_handle = UNF_SCSI_DEVICE_RESET_TYPE;
- UNF_SCSI_ERROR_HANDLE_RESULT_CNT(scsi_image_table,
- scsi_id, err_handle);
- }
- }
-
- return ret;
-}
-
-int unf_scsi_bus_reset_handler(struct scsi_cmnd *cmnd)
-{
- /* BUS Reset */
- struct unf_scsi_cmnd scsi_cmd = {0};
- struct unf_lport *unf_lport = NULL;
- struct unf_rport_scsi_id_image *scsi_image_table = NULL;
- int ret = FAILED;
- u32 scsi_id = 0;
- u32 err_handle = 0;
-
- FC_CHECK_RETURN_VALUE(cmnd, FAILED);
-
- unf_lport = (struct unf_lport *)cmnd->device->host->hostdata[0];
- if (unlikely(!unf_lport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port is null");
-
- return FAILED;
- }
-
- /* Check local SCSI_ID validity */
- scsi_id = (u32)((u64)cmnd->device->hostdata);
- if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
-			     "[err]scsi_id(0x%x) exceeds max(0x%x)", scsi_id,
- UNF_MAX_SCSI_ID);
-
- return FAILED;
- }
-
- if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
- scsi_image_table = &unf_lport->rport_scsi_table;
- err_handle = UNF_SCSI_BUS_RESET_TYPE;
- UNF_SCSI_ERROR_HANDLE_CNT(scsi_image_table, scsi_id, err_handle);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info][bus_reset]Port(0x%x) scsi_id(0x%x) lun_id(0x%x) cmnd_type(0x%x)",
- unf_lport->port_id, scsi_id, (u32)cmnd->device->lun,
- cmnd->cmnd[0]);
- }
-
- /* Block scsi (check rport state -> whether offline or not) */
- ret = fc_block_scsi_eh(cmnd);
- if (unlikely(ret != 0)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Block scsi eh failed(0x%x)", ret);
-
- return ret;
- }
-
- unf_init_abts_tmf_scsi_cmd(cmnd, &scsi_cmd, false);
- /* Process scsi BUS Reset cmnd */
- ret = unf_cm_bus_reset_handler(&scsi_cmd);
- if (ret == UNF_SCSI_ABORT_SUCCESS) {
- if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
- scsi_image_table = &unf_lport->rport_scsi_table;
- err_handle = UNF_SCSI_BUS_RESET_TYPE;
- UNF_SCSI_ERROR_HANDLE_RESULT_CNT(scsi_image_table, scsi_id, err_handle);
- }
- }
-
- return ret;
-}
-
-int unf_scsi_target_reset_handler(struct scsi_cmnd *cmnd)
-{
- /* Session reset/delete */
- struct unf_scsi_cmnd scsi_cmd = {0};
- struct unf_rport_scsi_id_image *scsi_image_table = NULL;
- int ret = FAILED;
- struct unf_lport *unf_lport = NULL;
- u32 scsi_id = 0;
- u32 err_handle = 0;
-
- FC_CHECK_RETURN_VALUE(cmnd, RETURN_ERROR);
-
- unf_lport = (struct unf_lport *)cmnd->device->host->hostdata[0];
- if (unlikely(!unf_lport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port is null");
-
- return FAILED;
- }
-
- /* Check local SCSI_ID validity */
- scsi_id = (u32)((u64)cmnd->device->hostdata);
- if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
-			     "[err]scsi_id(0x%x) exceeds max(0x%x)", scsi_id, UNF_MAX_SCSI_ID);
-
- return FAILED;
- }
-
- if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
- scsi_image_table = &unf_lport->rport_scsi_table;
- err_handle = UNF_SCSI_TARGET_RESET_TYPE;
- UNF_SCSI_ERROR_HANDLE_CNT(scsi_image_table, scsi_id, err_handle);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[target_reset]Port(0x%x) scsi_id(0x%x) lun_id(0x%x) cmnd_type(0x%x)",
- unf_lport->port_id, scsi_id, (u32)cmnd->device->lun, cmnd->cmnd[0]);
- }
-
- /* Block scsi (check rport state -> whether offline or not) */
- ret = fc_block_scsi_eh(cmnd);
- if (unlikely(ret != 0)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Block scsi eh failed(0x%x)", ret);
-
- return ret;
- }
-
- unf_init_abts_tmf_scsi_cmd(cmnd, &scsi_cmd, false);
- /* Process scsi Target/Session reset/delete cmnd */
- ret = unf_cm_target_reset_handler(&scsi_cmd);
- if (ret == UNF_SCSI_ABORT_SUCCESS) {
- if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
- scsi_image_table = &unf_lport->rport_scsi_table;
- err_handle = UNF_SCSI_TARGET_RESET_TYPE;
- UNF_SCSI_ERROR_HANDLE_RESULT_CNT(scsi_image_table, scsi_id, err_handle);
- }
- }
-
- return ret;
-}
-
-static int unf_scsi_slave_alloc(struct scsi_device *sdev)
-{
- struct fc_rport *rport = NULL;
- u32 scsi_id = 0;
- struct unf_lport *unf_lport = NULL;
- struct Scsi_Host *host = NULL;
- struct unf_rport_scsi_id_image *scsi_image_table = NULL;
-
- /* About device */
- if (unlikely(!sdev)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]SDev is null");
- return -ENXIO;
- }
-
- /* About scsi rport */
- rport = starget_to_rport(scsi_target(sdev));
- if (unlikely(!rport || fc_remote_port_chkready(rport))) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]SCSI rport is null");
-
- if (rport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]SCSI rport is not ready(0x%x)",
- fc_remote_port_chkready(rport));
- }
-
- return -ENXIO;
- }
-
- /* About host */
- host = rport_to_shost(rport);
- if (unlikely(!host)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Host is null");
-
- return -ENXIO;
- }
-
- /* About Local Port */
- unf_lport = (struct unf_lport *)host->hostdata[0];
- if (unf_is_lport_valid(unf_lport) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Port is invalid");
-
- return -ENXIO;
- }
-
- /* About Local SCSI_ID */
- scsi_id =
- *(u32 *)rport->dd_data;
- if (unlikely(scsi_id >= UNF_MAX_SCSI_ID)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
-			     "[err]scsi_id(0x%x) exceeds max(0x%x)", scsi_id, UNF_MAX_SCSI_ID);
-
- return -ENXIO;
- }
-
- scsi_image_table = &unf_lport->rport_scsi_table;
- if (scsi_image_table->wwn_rport_info_table[scsi_id].dfx_counter) {
- atomic_inc(&scsi_image_table->wwn_rport_info_table[scsi_id]
- .dfx_counter->device_alloc);
- }
- atomic_inc(&unf_lport->device_alloc);
- sdev->hostdata = (void *)(u64)scsi_id;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[event]Port(0x%x) use scsi_id(%u) to alloc device[%u:%u:%u:%u]",
- unf_lport->port_id, scsi_id, host->host_no, sdev->channel, sdev->id,
- (u32)sdev->lun);
-
- return 0;
-}
-
-static void unf_scsi_destroy_slave(struct scsi_device *sdev)
-{
- /*
- * NOTE: about sdev->hostdata
- * --->>> pointing to local SCSI_ID
- * 1. Assignment during slave allocation
- * 2. Released when callback for slave destroy
- * 3. Used during: Queue_CMND, Abort CMND, Device Reset, Target Reset &
- * Bus Reset
- */
- struct fc_rport *rport = NULL;
- u32 scsi_id = 0;
- struct unf_lport *unf_lport = NULL;
- struct Scsi_Host *host = NULL;
- struct unf_rport_scsi_id_image *scsi_image_table = NULL;
-
- /* About scsi device */
- if (unlikely(!sdev)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]SDev is null");
-
- return;
- }
-
- /* About scsi rport */
- rport = starget_to_rport(scsi_target(sdev));
- if (unlikely(!rport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]SCSI rport is null or remote port is not ready");
- return;
- }
-
- /* About host */
- host = rport_to_shost(rport);
- if (unlikely(!host)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Host is null");
-
- return;
- }
-
- /* About L_Port */
- unf_lport = (struct unf_lport *)host->hostdata[0];
- if (unf_is_lport_valid(unf_lport) == RETURN_OK) {
- scsi_image_table = &unf_lport->rport_scsi_table;
- atomic_inc(&unf_lport->device_destroy);
-
- scsi_id = (u32)((u64)sdev->hostdata);
- if (scsi_id < UNF_MAX_SCSI_ID && scsi_image_table->wwn_rport_info_table) {
- if (scsi_image_table->wwn_rport_info_table[scsi_id].dfx_counter) {
- atomic_inc(&scsi_image_table->wwn_rport_info_table[scsi_id]
- .dfx_counter->device_destroy);
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[event]Port(0x%x) with scsi_id(%u) to destroy slave device[%u:%u:%u:%u]",
- unf_lport->port_id, scsi_id, host->host_no,
- sdev->channel, sdev->id, (u32)sdev->lun);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
-				     "[err]Port(0x%x) scsi_id(%u) is invalid when destroying device[%u:%u:%u:%u]",
- unf_lport->port_id, scsi_id, host->host_no,
- sdev->channel, sdev->id, (u32)sdev->lun);
- }
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(%p) is invalid", unf_lport);
- }
-
- sdev->hostdata = NULL;
-}
-
-static int unf_scsi_slave_configure(struct scsi_device *sdev)
-{
-#define UNF_SCSI_DEV_DEPTH 32
- blk_queue_update_dma_alignment(sdev->request_queue, 0x7);
-
- scsi_change_queue_depth(sdev, UNF_SCSI_DEV_DEPTH);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[event]Enter slave configure, set depth is %d, sdev->tagged_supported is (%d)",
- UNF_SCSI_DEV_DEPTH, sdev->tagged_supported);
-
- return 0;
-}
-
-static int unf_scsi_scan_finished(struct Scsi_Host *shost, unsigned long time)
-{
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[event]Scan finished");
-
- return 1;
-}
-
-static void unf_scsi_scan_start(struct Scsi_Host *shost)
-{
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[event]Start scsi scan...");
-}
-
-void unf_host_init_attr_setting(struct Scsi_Host *scsi_host)
-{
- struct unf_lport *unf_lport = NULL;
- u32 speed = FC_PORTSPEED_UNKNOWN;
-
- unf_lport = (struct unf_lport *)scsi_host->hostdata[0];
- fc_host_supported_classes(scsi_host) = FC_COS_CLASS3;
- fc_host_dev_loss_tmo(scsi_host) = (u32)unf_get_link_lose_tmo(unf_lport);
- fc_host_node_name(scsi_host) = unf_lport->node_name;
- fc_host_port_name(scsi_host) = unf_lport->port_name;
-
- fc_host_max_npiv_vports(scsi_host) = (u16)((unf_lport == unf_lport->root_lport) ?
- unf_lport->low_level_func.support_max_npiv_num
- : 0);
- fc_host_npiv_vports_inuse(scsi_host) = 0;
- fc_host_next_vport_number(scsi_host) = 0;
-
- /* About speed mode */
- if (unf_lport->low_level_func.fc_ser_max_speed == UNF_PORT_SPEED_32_G &&
- unf_lport->card_type == UNF_FC_SERVER_BOARD_32_G) {
- speed = FC_PORTSPEED_32GBIT | FC_PORTSPEED_16GBIT | FC_PORTSPEED_8GBIT;
- } else if (unf_lport->low_level_func.fc_ser_max_speed == UNF_PORT_SPEED_16_G &&
- unf_lport->card_type == UNF_FC_SERVER_BOARD_16_G) {
- speed = FC_PORTSPEED_16GBIT | FC_PORTSPEED_8GBIT | FC_PORTSPEED_4GBIT;
- } else if (unf_lport->low_level_func.fc_ser_max_speed == UNF_PORT_SPEED_8_G &&
- unf_lport->card_type == UNF_FC_SERVER_BOARD_8_G) {
- speed = FC_PORTSPEED_8GBIT | FC_PORTSPEED_4GBIT | FC_PORTSPEED_2GBIT;
- }
-
- fc_host_supported_speeds(scsi_host) = speed;
-}
-
-int unf_alloc_scsi_host(struct Scsi_Host **unf_scsi_host,
- struct unf_host_param *host_param)
-{
- int ret = RETURN_ERROR;
- struct Scsi_Host *scsi_host = NULL;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VALUE(unf_scsi_host, RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(host_param, RETURN_ERROR);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR, "[event]Alloc scsi host...");
-
- /* Check L_Port validity */
- unf_lport = (struct unf_lport *)(host_param->lport);
- if (unlikely(!unf_lport)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port is NULL and return directly");
-
- return RETURN_ERROR;
- }
-
- scsi_host_template.can_queue = host_param->can_queue;
- scsi_host_template.cmd_per_lun = host_param->cmnd_per_lun;
- scsi_host_template.sg_tablesize = host_param->sg_table_size;
- scsi_host_template.max_sectors = host_param->max_sectors;
-
- /* Alloc scsi host */
- scsi_host = scsi_host_alloc(&scsi_host_template, sizeof(u64));
- if (unlikely(!scsi_host)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR, "[err]Register scsi host failed");
-
- return RETURN_ERROR;
- }
-
- scsi_host->max_channel = host_param->max_channel;
- scsi_host->max_lun = host_param->max_lun;
- scsi_host->max_cmd_len = host_param->max_cmnd_len;
- scsi_host->unchecked_isa_dma = 0;
- scsi_host->hostdata[0] = (unsigned long)(uintptr_t)unf_lport; /* save lport to scsi */
- scsi_host->unique_id = scsi_host->host_no;
- scsi_host->max_id = host_param->max_id;
- scsi_host->transportt = (unf_lport == unf_lport->root_lport)
- ? scsi_transport_template
- : scsi_transport_template_v;
-
- /* register DIF/DIX protection */
- if (spfc_dif_enable) {
- /* Enable DIF and DIX function */
- scsi_host_set_prot(scsi_host, spfc_dif_type);
-
- spfc_guard = SHOST_DIX_GUARD_CRC;
- /* Enable IP checksum algorithm in DIX */
- if (dix_flag)
- spfc_guard |= SHOST_DIX_GUARD_IP;
- scsi_host_set_guard(scsi_host, spfc_guard);
- }
-
- /* Add scsi host */
- ret = scsi_add_host(scsi_host, host_param->pdev);
- if (unlikely(ret)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Add scsi host failed with return value %d", ret);
-
- scsi_host_put(scsi_host);
- return RETURN_ERROR;
- }
-
- /* Set scsi host attribute */
- unf_host_init_attr_setting(scsi_host);
- *unf_scsi_host = scsi_host;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[event]Alloc and add scsi host(0x%llx) succeed",
- (u64)scsi_host);
-
- return RETURN_OK;
-}
-
-void unf_free_scsi_host(struct Scsi_Host *unf_scsi_host)
-{
- struct Scsi_Host *scsi_host = NULL;
-
- scsi_host = unf_scsi_host;
- fc_remove_host(scsi_host);
- scsi_remove_host(scsi_host);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[event]Remove scsi host(%u) succeed", scsi_host->host_no);
-
- scsi_host_put(scsi_host);
-}
-
-u32 unf_register_ini_transport(void)
-{
- /* Register INI Transport */
- scsi_transport_template = fc_attach_transport(&function_template);
-
- if (!scsi_transport_template) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Register FC transport to scsi failed");
-
- return RETURN_ERROR;
- }
-
- scsi_transport_template_v = fc_attach_transport(&function_template_v);
- if (!scsi_transport_template_v) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Register FC vport transport to scsi failed");
-
- fc_release_transport(scsi_transport_template);
-
- return RETURN_ERROR;
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[event]Register FC transport to scsi succeed");
-
- return RETURN_OK;
-}
-
-void unf_unregister_ini_transport(void)
-{
- fc_release_transport(scsi_transport_template);
- fc_release_transport(scsi_transport_template_v);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[event]Unregister FC transport succeed");
-}
-
-void unf_save_sense_data(void *scsi_cmd, const char *sense, int sens_len)
-{
- struct scsi_cmnd *cmd = NULL;
-
- FC_CHECK_RETURN_VOID(scsi_cmd);
- FC_CHECK_RETURN_VOID(sense);
-
- cmd = (struct scsi_cmnd *)scsi_cmd;
- memcpy(cmd->sense_buffer, sense, sens_len);
-}
diff --git a/drivers/scsi/spfc/common/unf_scsi_common.h b/drivers/scsi/spfc/common/unf_scsi_common.h
deleted file mode 100644
index f20cdd7f0479..000000000000
--- a/drivers/scsi/spfc/common/unf_scsi_common.h
+++ /dev/null
@@ -1,570 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_SCSI_COMMON_H
-#define UNF_SCSI_COMMON_H
-
-#include "unf_type.h"
-
-#define SCSI_SENSE_DATA_LEN 96
-#define DRV_SCSI_CDB_LEN 16
-#define DRV_SCSI_LUN_LEN 8
-
-#define DRV_ENTRY_PER_SGL 64 /* Size of an entry array in a hash table */
-
-#define UNF_DIF_AREA_SIZE (8)
-
-struct unf_dif_control_info {
- u16 app_tag;
- u16 flags;
- u32 protect_opcode;
- u32 fcp_dl;
- u32 start_lba;
- u8 actual_dif[UNF_DIF_AREA_SIZE];
- u8 expected_dif[UNF_DIF_AREA_SIZE];
- u32 dif_sge_count;
- void *dif_sgl;
-};
-
-struct dif_result_info {
- unsigned char actual_idf[UNF_DIF_AREA_SIZE];
- unsigned char expect_dif[UNF_DIF_AREA_SIZE];
-};
-
-struct drv_sge {
- char *buf;
- void *page_ctrl;
- u32 Length;
- u32 offset;
-};
-
-struct drv_scsi_cmd_result {
- u32 Status;
- u16 sense_data_length; /* sense data length */
- u8 sense_data[SCSI_SENSE_DATA_LEN]; /* fail sense info */
-};
-
-enum drv_io_direction {
- DRV_IO_BIDIRECTIONAL = 0,
- DRV_IO_DIRECTION_WRITE = 1,
- DRV_IO_DIRECTION_READ = 2,
- DRV_IO_DIRECTION_NONE = 3,
-};
-
-struct drv_sgl {
-	struct drv_sgl *next_sgl; /* point to next SGL in SGL list */
- unsigned short num_sges_in_chain;
- unsigned short num_sges_in_sgl;
- u32 flag;
- u64 serial_num;
- struct drv_sge sge[DRV_ENTRY_PER_SGL];
- struct list_head node;
- u32 cpu_id;
-};
-
-struct dif_info {
-/* Indicates the result returned when the data protection
- * information is inconsistent, added by pangea
- */
- struct dif_result_info dif_result;
-/* Data protection information operation code
- * bit[31-24] other operation code
- * bit[23-16] Data Protection Information Operation
- * bit[15-8] Data protection information
- * verification bit[7-0] Data protection information
- * replace
- */
- u32 protect_opcode;
- unsigned short apptag;
- u64 start_lba; /* IO start LBA */
- struct drv_sgl *protection_sgl;
-};
-
-struct drv_device_address {
- u16 initiator_id; /* ini id */
- u16 bus_id; /* device bus id */
- u16 target_id; /* device target id,for PCIe SSD,device id */
- u16 function_id; /* function id */
-};
-
-struct drv_ini_cmd {
- struct drv_scsi_cmd_result result;
- void *upper; /* product private pointer */
- void *lower; /* driver private pointer */
- u8 cdb[DRV_SCSI_CDB_LEN]; /* CDB edit by product */
- u8 lun[DRV_SCSI_LUN_LEN];
- u16 cmd_len;
-	u16 tag; /* SCSI cmd tag added by driver */
- enum drv_io_direction io_direciton;
- u32 data_length;
- u32 underflow;
- u32 overflow;
- u32 resid;
- u64 port_id;
- u64 cmd_sn;
- struct drv_device_address addr;
- struct drv_sgl *sgl;
- void *device;
- void (*done)(struct drv_ini_cmd *cmd); /* callback pointer */
- struct dif_info dif_info;
-};
-
-typedef void (*uplevel_cmd_done)(struct drv_ini_cmd *scsi_cmnd);
-
-#ifndef SUCCESS
-#define SUCCESS 0x2002
-#endif
-#ifndef FAILED
-#define FAILED 0x2003
-#endif
-
-#ifndef DRIVER_OK
-#define DRIVER_OK 0x00 /* Driver status */
-#endif
-
-#ifndef PCI_FUNC
-#define PCI_FUNC(devfn) ((devfn) & 0x07)
-#endif
-
-#define UNF_SCSI_ABORT_SUCCESS SUCCESS
-#define UNF_SCSI_ABORT_FAIL FAILED
-
-#define UNF_SCSI_STATUS(byte) (byte)
-#define UNF_SCSI_MSG(byte) ((byte) << 8)
-#define UNF_SCSI_HOST(byte) ((byte) << 16)
-#define UNF_SCSI_DRIVER(byte) ((byte) << 24)
-#define UNF_GET_SCSI_HOST_ID(scsi_host) ((scsi_host)->host_no)
-
-struct unf_ini_error_code {
- u32 drv_errcode; /* driver error code */
- u32 ap_errcode; /* up level error code */
-};
-
-typedef u32 (*ini_get_sgl_entry_buf)(void *upper_cmnd, void *driver_sgl,
- void **upper_sgl, u32 *req_index,
- u32 *index, char **buf,
- u32 *buf_len);
-
-#define UNF_SCSI_SENSE_BUFFERSIZE 96
-struct unf_scsi_cmnd {
- u32 scsi_host_id;
- u32 scsi_id; /* cmd->dev->id */
- u64 raw_lun_id;
- u64 port_id;
- u32 under_flow; /* Underflow */
- u32 transfer_len; /* Transfer Length */
- u32 resid; /* Resid */
- u32 sense_buflen;
- int result;
- u32 entry_count; /* IO Buffer counter */
- u32 abort;
- u32 err_code_table_cout; /* error code size */
- u64 cmnd_sn;
- ulong time_out; /* EPL driver add timer */
- u16 cmnd_len; /* Cdb length */
- u8 data_direction; /* data direction */
- u8 *pcmnd; /* SCSI CDB */
- u8 *sense_buf;
-	void *drv_private;	 /* driver host pointer */
-	void *driver_scribble; /* Xchg pointer */
- void *upper_cmnd; /* UpperCmnd pointer by driver */
- u8 *lun_id; /* new lunid */
- u32 world_id;
- struct unf_dif_control_info dif_control; /* DIF control */
- struct unf_ini_error_code *err_code_table; /* error code table */
- void *sgl; /* Sgl pointer */
- ini_get_sgl_entry_buf unf_ini_get_sgl_entry;
- void (*done)(struct unf_scsi_cmnd *cmd);
- uplevel_cmd_done uplevel_done;
- struct dif_info dif_info;
- u32 qos_level;
- void *pinitiator;
-};
-
-#ifndef FC_PORTSPEED_32GBIT
-#define FC_PORTSPEED_32GBIT 0x40
-#endif
-
-#define UNF_GID_PORT_CNT 2048
-#define UNF_RSCN_PAGE_SUM 255
-
-#define UNF_CPU_ENDIAN
-
-#define UNF_NPORTID_MASK 0x00FFFFFF
-#define UNF_DOMAIN_MASK 0x00FF0000
-#define UNF_AREA_MASK 0x0000FF00
-#define UNF_ALPA_MASK 0x000000FF
-
-struct unf_fc_head {
- u32 rctl_did; /* Routing control and Destination address of the seq */
- u32 csctl_sid; /* Class control and Source address of the sequence */
- u32 type_fctl; /* Data type and Initial frame control value of the seq
- */
- u32 seqid_dfctl_seqcnt; /* Seq ID, Data Field and Initial seq count */
- u32 oxid_rxid; /* Originator & Responder exchange IDs for the sequence
- */
- u32 parameter; /* Relative offset of the first frame of the sequence */
-};
-
-#define UNF_FCPRSP_CTL_LEN (24)
-#define UNF_MAX_RSP_INFO_LEN (8)
-#define UNF_RSP_LEN_VLD (1 << 0)
-#define UNF_SENSE_LEN_VLD (1 << 1)
-#define UNF_RESID_OVERRUN (1 << 2)
-#define UNF_RESID_UNDERRUN (1 << 3)
-#define UNF_FCP_CONF_REQ (1 << 4)
-
-/* T10: FCP2r.07 9.4.1 Overview and format of FCP_RSP IU */
-struct unf_fcprsp_iu {
- u32 reserved[2];
- u8 reserved2[2];
- u8 control;
- u8 fcp_status;
- u32 fcp_residual;
- u32 fcp_sense_len; /* Length of sense info field */
- u32 fcp_response_len; /* Length of response info field in bytes 0,4 or 8
- */
- u8 fcp_resp_info[UNF_MAX_RSP_INFO_LEN]; /* Buffer for response info */
- u8 fcp_sense_info[SCSI_SENSE_DATA_LEN]; /* Buffer for sense info */
-} __attribute__((packed));
-
-#define UNF_CMD_REF_MASK 0xFF000000
-#define UNF_TASK_ATTR_MASK 0x00070000
-#define UNF_TASK_MGMT_MASK 0x0000FF00
-#define UNF_FCP_WR_DATA 0x00000001
-#define UNF_FCP_RD_DATA 0x00000002
-#define UNF_CDB_LEN_MASK 0x0000007C
-#define UNF_FCP_CDB_LEN_16 (16)
-#define UNF_FCP_CDB_LEN_32 (32)
-#define UNF_FCP_LUNID_LEN_8 (8)
-
-/* FCP-4 :Table 27 - RSP_CODE field */
-#define UNF_FCP_TM_RSP_COMPLETE (0)
-#define UNF_FCP_TM_INVALID_CMND (0x2)
-#define UNF_FCP_TM_RSP_REJECT (0x4)
-#define UNF_FCP_TM_RSP_FAIL (0x5)
-#define UNF_FCP_TM_RSP_SUCCEED (0x8)
-#define UNF_FCP_TM_RSP_INCRECT_LUN (0x9)
-
-#define UNF_SET_TASK_MGMT_FLAGS(fcp_tm_code) ((fcp_tm_code) << 8)
-#define UNF_GET_TASK_MGMT_FLAGS(control) (((control) & UNF_TASK_MGMT_MASK) >> 8)
-
-enum unf_task_mgmt_cmd {
- UNF_FCP_TM_QUERY_TASK_SET = (1 << 0),
- UNF_FCP_TM_ABORT_TASK_SET = (1 << 1),
- UNF_FCP_TM_CLEAR_TASK_SET = (1 << 2),
- UNF_FCP_TM_QUERY_UNIT_ATTENTION = (1 << 3),
- UNF_FCP_TM_LOGICAL_UNIT_RESET = (1 << 4),
- UNF_FCP_TM_TARGET_RESET = (1 << 5),
- UNF_FCP_TM_CLEAR_ACA = (1 << 6),
- UNF_FCP_TM_TERMINATE_TASK = (1 << 7) /* obsolete */
-};
-
-struct unf_fcp_cmnd {
- u8 lun[UNF_FCP_LUNID_LEN_8]; /* Logical unit number */
- u32 control;
- u8 cdb[UNF_FCP_CDB_LEN_16]; /* Payload data containing cdb info */
- u32 data_length; /* Number of bytes expected to be transferred */
-} __attribute__((packed));
-
-struct unf_fcp_cmd_hdr {
- struct unf_fc_head frame_hdr; /* FCHS structure */
- struct unf_fcp_cmnd fcp_cmnd; /* Fcp Cmnd struct */
-};
-
-/* FC-LS-2 Common Service Parameter applicability */
-struct unf_fabric_coparm {
-#if defined(UNF_CPU_ENDIAN)
- u32 bb_credit : 16; /* 0 [0-15] */
- u32 lowest_version : 8; /* 0 [16-23] */
- u32 highest_version : 8; /* 0 [24-31] */
-#else
- u32 highest_version : 8; /* 0 [24-31] */
- u32 lowest_version : 8; /* 0 [16-23] */
- u32 bb_credit : 16; /* 0 [0-15] */
-#endif
-
-#if defined(UNF_CPU_ENDIAN)
- u32 bb_receive_data_field_size : 12; /* 1 [0-11] */
- u32 bbscn : 4; /* 1 [12-15] */
- u32 payload_length : 1; /* 1 [16] */
- u32 seq_cnt : 1; /* 1 [17] */
- u32 dynamic_half_duplex : 1; /* 1 [18] */
- u32 r_t_tov : 1; /* 1 [19] */
- u32 reserved_co2 : 6; /* 1 [20-25] */
- u32 e_d_tov_resolution : 1; /* 1 [26] */
- u32 alternate_bb_credit_mgmt : 1; /* 1 [27] */
- u32 nport : 1; /* 1 [28] */
- u32 mnid_assignment : 1; /* 1 [29] */
- u32 random_relative_offset : 1; /* 1 [30] */
- u32 clean_address : 1; /* 1 [31] */
-#else
- u32 reserved_co2 : 2; /* 1 [24-25] */
- u32 e_d_tov_resolution : 1; /* 1 [26] */
- u32 alternate_bb_credit_mgmt : 1; /* 1 [27] */
- u32 nport : 1; /* 1 [28] */
- u32 mnid_assignment : 1; /* 1 [29] */
- u32 random_relative_offset : 1; /* 1 [30] */
- u32 clean_address : 1; /* 1 [31] */
-
- u32 payload_length : 1; /* 1 [16] */
- u32 seq_cnt : 1; /* 1 [17] */
- u32 dynamic_half_duplex : 1; /* 1 [18] */
- u32 r_t_tov : 1; /* 1 [19] */
- u32 reserved_co5 : 4; /* 1 [20-23] */
-
- u32 bb_receive_data_field_size : 12; /* 1 [0-11] */
- u32 bbscn : 4; /* 1 [12-15] */
-#endif
- u32 r_a_tov; /* 2 [0-31] */
- u32 e_d_tov; /* 3 [0-31] */
-};
-
-/* FC-LS-2 Common Service Parameter applicability */
-/* Common Service Parameters - PLOGI and PLOGI LS_ACC */
-struct lgn_port_coparm {
-#if defined(UNF_CPU_ENDIAN)
- u32 bb_credit : 16; /* 0 [0-15] */
- u32 lowest_version : 8; /* 0 [16-23] */
- u32 highest_version : 8; /* 0 [24-31] */
-#else
- u32 highest_version : 8; /* 0 [24-31] */
- u32 lowest_version : 8; /* 0 [16-23] */
- u32 bb_credit : 16; /* 0 [0-15] */
-#endif
-
-#if defined(UNF_CPU_ENDIAN)
- u32 bb_receive_data_field_size : 12; /* 1 [0-11] */
- u32 bbscn : 4; /* 1 [12-15] */
- u32 payload_length : 1; /* 1 [16] */
- u32 seq_cnt : 1; /* 1 [17] */
- u32 dynamic_half_duplex : 1; /* 1 [18] */
- u32 reserved_co2 : 7; /* 1 [19-25] */
- u32 e_d_tov_resolution : 1; /* 1 [26] */
- u32 alternate_bb_credit_mgmt : 1; /* 1 [27] */
- u32 nport : 1; /* 1 [28] */
- u32 vendor_version_level : 1; /* 1 [29] */
- u32 random_relative_offset : 1; /* 1 [30] */
- u32 continuously_increasing : 1; /* 1 [31] */
-#else
- u32 reserved_co2 : 2; /* 1 [24-25] */
- u32 e_d_tov_resolution : 1; /* 1 [26] */
- u32 alternate_bb_credit_mgmt : 1; /* 1 [27] */
- u32 nport : 1; /* 1 [28] */
- u32 vendor_version_level : 1; /* 1 [29] */
- u32 random_relative_offset : 1; /* 1 [30] */
- u32 continuously_increasing : 1; /* 1 [31] */
-
- u32 payload_length : 1; /* 1 [16] */
- u32 seq_cnt : 1; /* 1 [17] */
- u32 dynamic_half_duplex : 1; /* 1 [18] */
- u32 reserved_co5 : 5; /* 1 [19-23] */
-
- u32 bb_receive_data_field_size : 12; /* 1 [0-11] */
- u32 reserved_co1 : 4; /* 1 [12-15] */
-#endif
-
-#if defined(UNF_CPU_ENDIAN)
- u32 relative_offset : 16; /* 2 [0-15] */
- u32 nport_total_concurrent_sequences : 16; /* 2 [16-31] */
-#else
- u32 nport_total_concurrent_sequences : 16; /* 2 [16-31] */
- u32 relative_offset : 16; /* 2 [0-15] */
-#endif
-
- u32 e_d_tov;
-};
-
-/* FC-LS-2 Class Service Parameters Applicability */
-struct unf_lgn_port_clparm {
-#if defined(UNF_CPU_ENDIAN)
- u32 reserved_cl1 : 6; /* 0 [0-5] */
- u32 ic_data_compression_history_buffer_size : 2; /* 0 [6-7] */
- u32 ic_data_compression_capable : 1; /* 0 [8] */
-
- u32 ic_ack_generation_assistance : 1; /* 0 [9] */
- u32 ic_ack_n_capable : 1; /* 0 [10] */
- u32 ic_ack_o_capable : 1; /* 0 [11] */
- u32 ic_initial_responder_processes_accociator : 2; /* 0 [12-13] */
- u32 ic_x_id_reassignment : 2; /* 0 [14-15] */
-
- u32 reserved_cl2 : 7; /* 0 [16-22] */
- u32 priority : 1; /* 0 [23] */
- u32 buffered_class : 1; /* 0 [24] */
- u32 camp_on : 1; /* 0 [25] */
- u32 dedicated_simplex : 1; /* 0 [26] */
- u32 sequential_delivery : 1; /* 0 [27] */
- u32 stacked_connect_request : 2; /* 0 [28-29] */
- u32 intermix_mode : 1; /* 0 [30] */
- u32 valid : 1; /* 0 [31] */
-#else
- u32 buffered_class : 1; /* 0 [24] */
- u32 camp_on : 1; /* 0 [25] */
- u32 dedicated_simplex : 1; /* 0 [26] */
- u32 sequential_delivery : 1; /* 0 [27] */
- u32 stacked_connect_request : 2; /* 0 [28-29] */
- u32 intermix_mode : 1; /* 0 [30] */
- u32 valid : 1; /* 0 [31] */
- u32 reserved_cl2 : 7; /* 0 [16-22] */
- u32 priority : 1; /* 0 [23] */
- u32 ic_data_compression_capable : 1; /* 0 [8] */
- u32 ic_ack_generation_assistance : 1; /* 0 [9] */
- u32 ic_ack_n_capable : 1; /* 0 [10] */
- u32 ic_ack_o_capable : 1; /* 0 [11] */
- u32 ic_initial_responder_processes_accociator : 2; /* 0 [12-13] */
- u32 ic_x_id_reassignment : 2; /* 0 [14-15] */
-
- u32 reserved_cl1 : 6; /* 0 [0-5] */
- u32 ic_data_compression_history_buffer_size : 2; /* 0 [6-7] */
-#endif
-
-#if defined(UNF_CPU_ENDIAN)
- u32 received_data_field_size : 16; /* 1 [0-15] */
-
- u32 reserved_cl3 : 5; /* 1 [16-20] */
- u32 rc_data_compression_history_buffer_size : 2; /* 1 [21-22] */
- u32 rc_data_compression_capable : 1; /* 1 [23] */
-
- u32 rc_data_categories_per_sequence : 2; /* 1 [24-25] */
- u32 reserved_cl4 : 1; /* 1 [26] */
- u32 rc_error_policy_supported : 2; /* 1 [27-28] */
- u32 rc_x_id_interlock : 1; /* 1 [29] */
- u32 rc_ack_n_capable : 1; /* 1 [30] */
- u32 rc_ack_o_capable : 1; /* 1 [31] */
-#else
- u32 rc_data_categories_per_sequence : 2; /* 1 [24-25] */
- u32 reserved_cl4 : 1; /* 1 [26] */
- u32 rc_error_policy_supported : 2; /* 1 [27-28] */
- u32 rc_x_id_interlock : 1; /* 1 [29] */
- u32 rc_ack_n_capable : 1; /* 1 [30] */
- u32 rc_ack_o_capable : 1; /* 1 [31] */
-
- u32 reserved_cl3 : 5; /* 1 [16-20] */
- u32 rc_data_compression_history_buffer_size : 2; /* 1 [21-22] */
- u32 rc_data_compression_capable : 1; /* 1 [23] */
-
- u32 received_data_field_size : 16; /* 1 [0-15] */
-#endif
-
-#if defined(UNF_CPU_ENDIAN)
- u32 nport_end_to_end_credit : 15; /* 2 [0-14] */
- u32 reserved_cl5 : 1; /* 2 [15] */
-
- u32 concurrent_sequences : 16; /* 2 [16-31] */
-#else
- u32 concurrent_sequences : 16; /* 2 [16-31] */
-
- u32 nport_end_to_end_credit : 15; /* 2 [0-14] */
- u32 reserved_cl5 : 1; /* 2 [15] */
-#endif
-
-#if defined(UNF_CPU_ENDIAN)
- u32 reserved_cl6 : 16; /* 3 [0-15] */
- u32 open_sequence_per_exchange : 16; /* 3 [16-31] */
-#else
- u32 open_sequence_per_exchange : 16; /* 3 [16-31] */
- u32 reserved_cl6 : 16; /* 3 [0-15] */
-#endif
-};
-
-struct unf_fabric_parm {
- struct unf_fabric_coparm co_parms;
- u32 high_port_name;
- u32 low_port_name;
- u32 high_node_name;
- u32 low_node_name;
- struct unf_lgn_port_clparm cl_parms[3];
- u32 reserved_1[4];
- u32 vendor_version_level[4];
-};
-
-struct unf_lgn_parm {
- struct lgn_port_coparm co_parms;
- u32 high_port_name;
- u32 low_port_name;
- u32 high_node_name;
- u32 low_node_name;
- struct unf_lgn_port_clparm cl_parms[3];
- u32 reserved_1[4];
- u32 vendor_version_level[4];
-};
-
-#define ELS_RJT 0x1
-#define ELS_ACC 0x2
-#define ELS_PLOGI 0x3
-#define ELS_FLOGI 0x4
-#define ELS_LOGO 0x5
-#define ELS_ECHO 0x10
-#define ELS_RRQ 0x12
-#define ELS_REC 0x13
-#define ELS_PRLI 0x20
-#define ELS_PRLO 0x21
-#define ELS_TPRLO 0x24
-#define ELS_PDISC 0x50
-#define ELS_FDISC 0x51
-#define ELS_ADISC 0x52
-#define ELS_RSCN 0x61 /* registered state change notification */
-#define ELS_SCR 0x62 /* state change registration */
-
-#define NS_GIEL 0X0101
-#define NS_GA_NXT 0X0100
-#define NS_GPN_ID 0x0112 /* get port name by ID */
-#define NS_GNN_ID 0x0113 /* get node name by ID */
-#define NS_GFF_ID 0x011f /* get FC-4 features by ID */
-#define NS_GID_PN 0x0121 /* get ID for port name */
-#define NS_GID_NN 0x0131 /* get IDs for node name */
-#define NS_GID_FT 0x0171 /* get IDs by FC4 type */
-#define NS_GPN_FT 0x0172 /* get port names by FC4 type */
-#define NS_GID_PT 0x01a1 /* get IDs by port type */
-#define NS_RFT_ID 0x0217 /* reg FC4 type for ID */
-#define NS_RPN_ID 0x0212 /* reg port name for ID */
-#define NS_RNN_ID 0x0213 /* reg node name for ID */
-#define NS_RSNPN 0x0218 /* reg symbolic port name */
-#define NS_RFF_ID 0x021f /* reg FC4 Features for ID */
-#define NS_RSNN 0x0239 /* reg symbolic node name */
-#define ST_NULL 0xffff /* reg symbolic node name */
-
-#define BLS_ABTS 0xA001 /* ABTS */
-
-#define FCP_SRR 0x14 /* Sequence Retransmission Request */
-#define UNF_FC_FID_DOM_MGR 0xfffc00 /* domain manager base */
-enum unf_fc_well_known_fabric_id {
- UNF_FC_FID_NONE = 0x000000, /* No destination */
- UNF_FC_FID_DOM_CTRL = 0xfffc01, /* domain controller */
- UNF_FC_FID_BCAST = 0xffffff, /* broadcast */
- UNF_FC_FID_FLOGI = 0xfffffe, /* fabric login */
- UNF_FC_FID_FCTRL = 0xfffffd, /* fabric controller */
- UNF_FC_FID_DIR_SERV = 0xfffffc, /* directory server */
- UNF_FC_FID_TIME_SERV = 0xfffffb, /* time server */
- UNF_FC_FID_MGMT_SERV = 0xfffffa, /* management server */
- UNF_FC_FID_QOS = 0xfffff9, /* QoS Facilitator */
- UNF_FC_FID_ALIASES = 0xfffff8, /* alias server (FC-PH2) */
- UNF_FC_FID_SEC_KEY = 0xfffff7, /* Security key dist. server */
- UNF_FC_FID_CLOCK = 0xfffff6, /* clock synch server */
- UNF_FC_FID_MCAST_SERV = 0xfffff5 /* multicast server */
-};
-
-#define INVALID_WORLD_ID 0xfffffffc
-
-struct unf_host_param {
- int can_queue;
- u16 sg_table_size;
- short cmnd_per_lun;
- u32 max_id;
- u32 max_lun;
- u32 max_channel;
- u16 max_cmnd_len;
- u16 max_sectors;
- u64 dma_boundary;
- u32 port_id;
- void *lport;
- struct device *pdev;
-};
-
-int unf_alloc_scsi_host(struct Scsi_Host **unf_scsi_host, struct unf_host_param *host_param);
-void unf_free_scsi_host(struct Scsi_Host *unf_scsi_host);
-u32 unf_register_ini_transport(void);
-void unf_unregister_ini_transport(void);
-void unf_save_sense_data(void *scsi_cmd, const char *sense, int sens_len);
-
-#endif
diff --git a/drivers/scsi/spfc/common/unf_service.c b/drivers/scsi/spfc/common/unf_service.c
deleted file mode 100644
index 9c86c99374c8..000000000000
--- a/drivers/scsi/spfc/common/unf_service.c
+++ /dev/null
@@ -1,1430 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "unf_service.h"
-#include "unf_log.h"
-#include "unf_rport.h"
-#include "unf_ls.h"
-#include "unf_gs.h"
-
-struct unf_els_handle_table els_handle_table[] = {
- {ELS_PLOGI, unf_plogi_handler}, {ELS_FLOGI, unf_flogi_handler},
- {ELS_LOGO, unf_logo_handler}, {ELS_ECHO, unf_echo_handler},
- {ELS_RRQ, unf_rrq_handler}, {ELS_REC, unf_rec_handler},
- {ELS_PRLI, unf_prli_handler}, {ELS_PRLO, unf_prlo_handler},
- {ELS_PDISC, unf_pdisc_handler}, {ELS_ADISC, unf_adisc_handler},
- {ELS_RSCN, unf_rscn_handler} };
-
-u32 max_frame_size = UNF_DEFAULT_FRAME_SIZE;
-
-#define UNF_NEED_BIG_RESPONSE_BUFF(cmnd_code) \
- (((cmnd_code) == ELS_ECHO) || ((cmnd_code) == NS_GID_PT) || \
- ((cmnd_code) == NS_GID_FT))
-
-#define NEED_REFRESH_NPORTID(pkg) \
- ((((pkg)->cmnd == ELS_PLOGI) || ((pkg)->cmnd == ELS_PDISC) || \
- ((pkg)->cmnd == ELS_ADISC)))
-
-void unf_select_sq(struct unf_xchg *xchg, struct unf_frame_pkg *pkg)
-{
- u32 ssq_index = 0;
- struct unf_rport *unf_rport = NULL;
-
- if (likely(xchg)) {
- unf_rport = xchg->rport;
-
- if (unf_rport) {
- ssq_index = (xchg->hotpooltag % UNF_SQ_NUM_PER_SESSION) +
- unf_rport->sqn_base;
- }
- }
-
- pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX] = ssq_index;
-}
-
-u32 unf_ls_gs_cmnd_send(struct unf_lport *lport, struct unf_frame_pkg *pkg,
- struct unf_xchg *xchg)
-{
- u32 ret = UNF_RETURN_ERROR;
- ulong time_out = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- if (unlikely(!lport->low_level_func.service_op.unf_ls_gs_send)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) LS/GS send function is NULL",
- lport->port_id);
-
- return ret;
- }
-
- if (pkg->type == UNF_PKG_GS_REQ)
- time_out = UNF_GET_GS_SFS_XCHG_TIMER(lport);
- else
- time_out = UNF_GET_ELS_SFS_XCHG_TIMER(lport);
-
- if (xchg->cmnd_code == ELS_RRQ) {
- time_out = ((ulong)UNF_GET_ELS_SFS_XCHG_TIMER(lport) > UNF_RRQ_MIN_TIMEOUT_INTERVAL)
- ? (ulong)UNF_GET_ELS_SFS_XCHG_TIMER(lport)
- : UNF_RRQ_MIN_TIMEOUT_INTERVAL;
- } else if (xchg->cmnd_code == ELS_LOGO) {
- time_out = UNF_LOGO_TIMEOUT_INTERVAL;
- }
-
- pkg->private_data[PKG_PRIVATE_XCHG_TIMEER] = (u32)time_out;
- lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg, time_out, UNF_TIMER_TYPE_SFS);
-
- unf_select_sq(xchg, pkg);
-
- ret = lport->low_level_func.service_op.unf_ls_gs_send(lport->fc_port, pkg);
- if (unlikely(ret != RETURN_OK))
- lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
-
- return ret;
-}
-
-static u32 unf_bls_cmnd_send(struct unf_lport *lport, struct unf_frame_pkg *pkg,
- struct unf_xchg *xchg)
-{
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- pkg->private_data[PKG_PRIVATE_XCHG_TIMEER] = (u32)UNF_GET_BLS_SFS_XCHG_TIMER(lport);
- pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
- xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
-
- unf_select_sq(xchg, pkg);
-
- return lport->low_level_func.service_op.unf_bls_send(lport->fc_port, pkg);
-}
-
-void unf_fill_package(struct unf_frame_pkg *pkg, struct unf_xchg *xchg,
- struct unf_rport *rport)
-{
- /* v_pstRport maybe NULL */
- FC_CHECK_RETURN_VOID(pkg);
- FC_CHECK_RETURN_VOID(xchg);
-
- pkg->cmnd = xchg->cmnd_code;
- pkg->fcp_cmnd = &xchg->fcp_cmnd;
- pkg->frame_head.csctl_sid = xchg->sid;
- pkg->frame_head.rctl_did = xchg->did;
- pkg->frame_head.oxid_rxid = ((u32)xchg->oxid << UNF_SHIFT_16 | xchg->rxid);
- pkg->xchg_contex = xchg;
-
- FC_CHECK_RETURN_VOID(xchg->lport);
- pkg->private_data[PKG_PRIVATE_XCHG_VP_INDEX] = xchg->lport->vp_index;
-
- if (!rport) {
- pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = UNF_RPORT_INVALID_INDEX;
- pkg->private_data[PKG_PRIVATE_RPORT_RX_SIZE] = INVALID_VALUE32;
- } else {
- if (likely(rport->nport_id != UNF_FC_FID_FLOGI))
- pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = rport->rport_index;
- else
- pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = SPFC_DEFAULT_RPORT_INDEX;
-
- pkg->private_data[PKG_PRIVATE_RPORT_RX_SIZE] = rport->max_frame_size;
- }
-
- pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag;
- pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
- xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
- pkg->private_data[PKG_PRIVATE_LOWLEVEL_XCHG_ADD] =
- xchg->private_data[PKG_PRIVATE_LOWLEVEL_XCHG_ADD];
- pkg->unf_cmnd_pload_bl.buffer_ptr =
- (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
- pkg->unf_cmnd_pload_bl.buf_dma_addr =
- xchg->fcp_sfs_union.sfs_entry.sfs_buff_phy_addr;
-
- /* Low level need to know payload length if send ECHO response */
- pkg->unf_cmnd_pload_bl.length = xchg->fcp_sfs_union.sfs_entry.cur_offset;
-}
-
-struct unf_xchg *unf_get_sfs_free_xchg_and_init(struct unf_lport *lport, u32 did,
- struct unf_rport *rport,
- union unf_sfs_u **fc_entry)
-{
- struct unf_xchg *xchg = NULL;
- union unf_sfs_u *sfs_fc_entry = NULL;
-
- xchg = unf_cm_get_free_xchg(lport, UNF_XCHG_TYPE_SFS);
- if (!xchg)
- return NULL;
-
- xchg->did = did;
- xchg->sid = lport->nport_id;
- xchg->oid = xchg->sid;
- xchg->lport = lport;
- xchg->rport = rport;
- xchg->disc_rport = NULL;
- xchg->callback = NULL;
- xchg->ob_callback = NULL;
-
- sfs_fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
- if (!sfs_fc_entry) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
- lport->port_id, xchg->hotpooltag);
-
- unf_cm_free_xchg(lport, xchg);
- return NULL;
- }
-
- *fc_entry = sfs_fc_entry;
-
- return xchg;
-}
-
-void *unf_get_one_big_sfs_buf(struct unf_xchg *xchg)
-{
- struct unf_big_sfs *big_sfs = NULL;
- struct list_head *list_head = NULL;
- struct unf_xchg_mgr *xchg_mgr = NULL;
- ulong flag = 0;
- spinlock_t *big_sfs_pool_lock = NULL;
-
- FC_CHECK_RETURN_VALUE(xchg, NULL);
- xchg_mgr = xchg->xchg_mgr;
- FC_CHECK_RETURN_VALUE(xchg_mgr, NULL);
- big_sfs_pool_lock = &xchg_mgr->big_sfs_pool.big_sfs_pool_lock;
-
- spin_lock_irqsave(big_sfs_pool_lock, flag);
- if (!list_empty(&xchg_mgr->big_sfs_pool.list_freepool)) {
- /* from free to busy */
- list_head = UNF_OS_LIST_NEXT(&xchg_mgr->big_sfs_pool.list_freepool);
- list_del(list_head);
- xchg_mgr->big_sfs_pool.free_count--;
- list_add_tail(list_head, &xchg_mgr->big_sfs_pool.list_busypool);
- big_sfs = list_entry(list_head, struct unf_big_sfs, entry_bigsfs);
- } else {
- spin_unlock_irqrestore(big_sfs_pool_lock, flag);
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Allocate big sfs buf failed, count(0x%x) exchange(0x%p) command(0x%x)",
- xchg_mgr->big_sfs_pool.free_count, xchg, xchg->cmnd_code);
-
- return NULL;
- }
- spin_unlock_irqrestore(big_sfs_pool_lock, flag);
-
- xchg->big_sfs_buf = big_sfs;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Allocate one big sfs buffer(0x%p), remaining count(0x%x) exchange(0x%p) command(0x%x)",
- big_sfs->addr, xchg_mgr->big_sfs_pool.free_count, xchg,
- xchg->cmnd_code);
-
- return big_sfs->addr;
-}
-
-static void unf_fill_rjt_pld(struct unf_els_rjt *els_rjt, u32 reason_code,
- u32 reason_explanation)
-{
- FC_CHECK_RETURN_VOID(els_rjt);
-
- els_rjt->cmnd = UNF_ELS_CMND_RJT;
- els_rjt->reason_code = (reason_code | reason_explanation);
-}
-
-u32 unf_send_abts(struct unf_lport *lport, struct unf_xchg *xchg)
-{
- struct unf_rport *unf_rport = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct unf_frame_pkg pkg;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
- unf_rport = xchg->rport;
- FC_CHECK_RETURN_VALUE(unf_rport, UNF_RETURN_ERROR);
-
- /* set pkg info */
- memset(&pkg, 0, sizeof(struct unf_frame_pkg));
- pkg.type = UNF_PKG_BLS_REQ;
- pkg.frame_head.csctl_sid = xchg->sid;
- pkg.frame_head.rctl_did = xchg->did;
- pkg.frame_head.oxid_rxid = ((u32)xchg->oxid << UNF_SHIFT_16 | xchg->rxid);
- pkg.xchg_contex = xchg;
- pkg.unf_cmnd_pload_bl.buffer_ptr = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
-
- pkg.unf_cmnd_pload_bl.buf_dma_addr = xchg->fcp_sfs_union.sfs_entry.sfs_buff_phy_addr;
- pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag;
-
- UNF_SET_XCHG_ALLOC_TIME(&pkg, xchg);
- UNF_SET_ABORT_INFO_IOTYPE(&pkg, xchg);
-
- pkg.private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] =
- xchg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX];
-
- /* Send ABTS frame to target */
- ret = unf_bls_cmnd_send(lport, &pkg, xchg);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]Port(0x%x_0x%x) send ABTS %s. Abort exch(0x%p) Cmdsn:0x%lx, tag(0x%x) iotype(0x%x)",
- lport->port_id, lport->nport_id,
- (ret == UNF_RETURN_ERROR) ? "failed" : "succeed", xchg,
- (ulong)xchg->cmnd_sn, xchg->hotpooltag, xchg->data_direction);
-
- return ret;
-}
-
-u32 unf_send_els_rjt_by_rport(struct unf_lport *lport, struct unf_xchg *xchg,
- struct unf_rport *rport, struct unf_rjt_info *rjt_info)
-{
- struct unf_els_rjt *els_rjt = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_frame_pkg pkg = {0};
- u32 ret = UNF_RETURN_ERROR;
- u16 ox_id = 0;
- u16 rx_id = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport, UNF_RETURN_ERROR);
-
- xchg->cmnd_code = UNF_SET_ELS_RJT_TYPE(rjt_info->els_cmnd_code);
- xchg->did = rport->nport_id;
- xchg->sid = lport->nport_id;
- xchg->oid = xchg->sid;
- xchg->lport = lport;
- xchg->rport = rport;
- xchg->disc_rport = NULL;
-
- xchg->callback = NULL;
- xchg->ob_callback = NULL;
-
- unf_fill_package(&pkg, xchg, rport);
- pkg.class_mode = UNF_FC_PROTOCOL_CLASS_3;
- pkg.type = UNF_PKG_ELS_REPLY;
-
- fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
- if (!fc_entry) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
- lport->port_id, xchg->hotpooltag);
-
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
-
- els_rjt = &fc_entry->els_rjt;
- memset(els_rjt, 0, sizeof(struct unf_els_rjt));
- unf_fill_rjt_pld(els_rjt, rjt_info->reason_code, rjt_info->reason_explanation);
- ox_id = xchg->oxid;
- rx_id = xchg->rxid;
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Send LS_RJT for 0x%x %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
- rjt_info->els_cmnd_code,
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id,
- rport->nport_id, ox_id, rx_id);
-
- return ret;
-}
-
-u32 unf_send_els_rjt_by_did(struct unf_lport *lport, struct unf_xchg *xchg,
- u32 did, struct unf_rjt_info *rjt_info)
-{
- struct unf_els_rjt *els_rjt = NULL;
- union unf_sfs_u *fc_entry = NULL;
- struct unf_frame_pkg pkg = {0};
- u32 ret = UNF_RETURN_ERROR;
- u16 ox_id = 0;
- u16 rx_id = 0;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- xchg->cmnd_code = UNF_SET_ELS_RJT_TYPE(rjt_info->els_cmnd_code);
- xchg->did = did;
- xchg->sid = lport->nport_id;
- xchg->oid = xchg->sid;
- xchg->lport = lport;
- xchg->rport = NULL;
- xchg->disc_rport = NULL;
-
- xchg->callback = NULL;
- xchg->ob_callback = NULL;
-
- unf_fill_package(&pkg, xchg, NULL);
- pkg.class_mode = UNF_FC_PROTOCOL_CLASS_3;
- pkg.type = UNF_PKG_ELS_REPLY;
-
- if (rjt_info->reason_code == UNF_LS_RJT_CLASS_ERROR &&
- rjt_info->class_mode != UNF_FC_PROTOCOL_CLASS_3) {
- pkg.class_mode = rjt_info->class_mode;
- }
-
- fc_entry = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
- if (!fc_entry) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) entry can't be NULL with tag(0x%x)",
- lport->port_id, xchg->hotpooltag);
-
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
-
- els_rjt = &fc_entry->els_rjt;
- memset(els_rjt, 0, sizeof(struct unf_els_rjt));
- unf_fill_rjt_pld(els_rjt, rjt_info->reason_code, rjt_info->reason_explanation);
- ox_id = xchg->oxid;
- rx_id = xchg->rxid;
-
- ret = unf_ls_gs_cmnd_send(lport, &pkg, xchg);
- if (ret != RETURN_OK)
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]LOGIN: Send LS_RJT %s. Port(0x%x)--->RPort(0x%x) with OX_ID(0x%x) RX_ID(0x%x)",
- (ret != RETURN_OK) ? "failed" : "succeed", lport->port_id, did, ox_id, rx_id);
-
- return ret;
-}
-
-static u32 unf_els_cmnd_default_handler(struct unf_lport *lport, struct unf_xchg *xchg, u32 sid,
- u32 els_cmnd_code)
-{
- struct unf_rport *unf_rport = NULL;
- struct unf_rjt_info rjt_info = {0};
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(xchg, UNF_RETURN_ERROR);
-
- FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_KEVENT,
- "[info]Receive Unknown ELS command(0x%x). Port(0x%x)<---RPort(0x%x) with OX_ID(0x%x)",
- els_cmnd_code, lport->port_id, sid, xchg->oxid);
-
- memset(&rjt_info, 0, sizeof(struct unf_rjt_info));
- rjt_info.els_cmnd_code = els_cmnd_code;
- rjt_info.reason_code = UNF_LS_RJT_NOT_SUPPORTED;
-
- unf_rport = unf_get_rport_by_nport_id(lport, sid);
- if (unf_rport)
- ret = unf_send_els_rjt_by_rport(lport, xchg, unf_rport, &rjt_info);
- else
- ret = unf_send_els_rjt_by_did(lport, xchg, sid, &rjt_info);
-
- return ret;
-}
-
-static struct unf_xchg *unf_alloc_xchg_for_rcv_cmnd(struct unf_lport *lport,
- struct unf_frame_pkg *pkg)
-{
- struct unf_xchg *xchg = NULL;
- ulong flags = 0;
- u32 i = 0;
- u32 offset = 0;
- u8 *cmnd_pld = NULL;
- u32 first_dword = 0;
- u32 alloc_time = 0;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
- FC_CHECK_RETURN_VALUE(pkg, NULL);
-
- if (!pkg->xchg_contex) {
- xchg = unf_cm_get_free_xchg(lport, UNF_XCHG_TYPE_SFS);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[warn]Port(0x%x) get new exchange failed",
- lport->port_id);
-
- return NULL;
- }
-
- offset = (xchg->fcp_sfs_union.sfs_entry.cur_offset);
- cmnd_pld = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->rscn.rscn_pld;
- first_dword = xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr
- ->sfs_common.frame_head.rctl_did;
-
- if (cmnd_pld || first_dword != 0 || offset != 0) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) exchange(0x%p) abnormal, maybe data overrun, start(%llu) command(0x%x)",
- lport->port_id, xchg, xchg->alloc_jif, pkg->cmnd);
-
- UNF_PRINT_SFS(UNF_INFO, lport->port_id,
- xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr,
- sizeof(union unf_sfs_u));
- }
-
- memset(xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr, 0, sizeof(union unf_sfs_u));
-
- pkg->xchg_contex = (void *)xchg;
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- xchg->fcp_sfs_union.sfs_entry.cur_offset = 0;
- alloc_time = xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
- for (i = 0; i < PKG_MAX_PRIVATE_DATA_SIZE; i++)
- xchg->private_data[i] = pkg->private_data[i];
-
- xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = alloc_time;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- } else {
- xchg = (struct unf_xchg *)pkg->xchg_contex;
- }
-
- if (!xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) {
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- return NULL;
- }
-
- return xchg;
-}
-
-static u8 *unf_calc_big_cmnd_pld_buffer(struct unf_xchg *xchg, u32 cmnd_code)
-{
- u8 *cmnd_pld = NULL;
- void *buf = NULL;
- u8 *dest = NULL;
-
- FC_CHECK_RETURN_VALUE(xchg, NULL);
-
- if (cmnd_code == ELS_RSCN)
- cmnd_pld = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->rscn.rscn_pld;
- else
- cmnd_pld = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo.echo_pld;
-
- if (!cmnd_pld) {
- buf = unf_get_one_big_sfs_buf(xchg);
- if (!buf)
- return NULL;
-
- if (cmnd_code == ELS_RSCN) {
- memset(buf, 0, sizeof(struct unf_rscn_pld));
- xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->rscn.rscn_pld = buf;
- } else {
- memset(buf, 0, sizeof(struct unf_echo_payload));
- xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo.echo_pld = buf;
- }
-
- dest = (u8 *)buf;
- } else {
- dest = (u8 *)(cmnd_pld + xchg->fcp_sfs_union.sfs_entry.cur_offset);
- }
-
- return dest;
-}
-
-static u8 *unf_calc_other_pld_buffer(struct unf_xchg *xchg)
-{
- u8 *dest = NULL;
- u32 offset = 0;
-
- FC_CHECK_RETURN_VALUE(xchg, NULL);
-
- offset = (sizeof(struct unf_fc_head)) + (xchg->fcp_sfs_union.sfs_entry.cur_offset);
- dest = (u8 *)((u8 *)(xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) + offset);
-
- return dest;
-}
-
-struct unf_xchg *unf_mv_data_2_xchg(struct unf_lport *lport, struct unf_frame_pkg *pkg)
-{
- struct unf_xchg *xchg = NULL;
- u8 *dest = NULL;
- u32 length = 0;
- ulong flags = 0;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
- FC_CHECK_RETURN_VALUE(pkg, NULL);
-
- xchg = unf_alloc_xchg_for_rcv_cmnd(lport, pkg);
- if (!xchg)
- return NULL;
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
-
- memcpy(&xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->sfs_common.frame_head,
- &pkg->frame_head, sizeof(pkg->frame_head));
-
- if (pkg->cmnd == ELS_RSCN || pkg->cmnd == ELS_ECHO)
- dest = unf_calc_big_cmnd_pld_buffer(xchg, pkg->cmnd);
- else
- dest = unf_calc_other_pld_buffer(xchg);
-
- if (!dest) {
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- return NULL;
- }
-
- if (((xchg->fcp_sfs_union.sfs_entry.cur_offset +
- pkg->unf_cmnd_pload_bl.length) > (u32)sizeof(union unf_sfs_u)) &&
- pkg->cmnd != ELS_RSCN && pkg->cmnd != ELS_ECHO) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) excange(0x%p) command(0x%x,0x%x) copy payload overrun(0x%x:0x%x:0x%x)",
- lport->port_id, xchg, pkg->cmnd, xchg->hotpooltag,
- xchg->fcp_sfs_union.sfs_entry.cur_offset,
- pkg->unf_cmnd_pload_bl.length, (u32)sizeof(union unf_sfs_u));
-
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- unf_cm_free_xchg((void *)lport, (void *)xchg);
-
- return NULL;
- }
-
- length = pkg->unf_cmnd_pload_bl.length;
- if (length > 0)
- memcpy(dest, pkg->unf_cmnd_pload_bl.buffer_ptr, length);
-
- xchg->fcp_sfs_union.sfs_entry.cur_offset += length;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- return xchg;
-}
-
-static u32 unf_check_els_cmnd_valid(struct unf_lport *lport, struct unf_frame_pkg *pkg,
- struct unf_xchg *xchg)
-{
- struct unf_rjt_info rjt_info = {0};
- struct unf_lport *vport = NULL;
- u32 sid = 0;
- u32 did = 0;
-
- sid = (pkg->frame_head.csctl_sid) & UNF_NPORTID_MASK;
- did = (pkg->frame_head.rctl_did) & UNF_NPORTID_MASK;
-
- memset(&rjt_info, 0, sizeof(struct unf_rjt_info));
-
- if (pkg->class_mode != UNF_FC_PROTOCOL_CLASS_3) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) unsupport class 0x%x cmd 0x%x and send RJT",
- lport->port_id, pkg->class_mode, pkg->cmnd);
-
- rjt_info.reason_code = UNF_LS_RJT_CLASS_ERROR;
- rjt_info.els_cmnd_code = pkg->cmnd;
- rjt_info.class_mode = pkg->class_mode;
- (void)unf_send_els_rjt_by_did(lport, xchg, sid, &rjt_info);
-
- return UNF_RETURN_ERROR;
- }
-
- rjt_info.reason_code = UNF_LS_RJT_NOT_SUPPORTED;
-
- if (pkg->cmnd == ELS_FLOGI && lport->act_topo == UNF_ACT_TOP_PRIVATE_LOOP) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) receive FLOGI in top (0x%x) and send LS_RJT",
- lport->port_id, lport->act_topo);
-
- rjt_info.els_cmnd_code = ELS_FLOGI;
- (void)unf_send_els_rjt_by_did(lport, xchg, sid, &rjt_info);
-
- return UNF_RETURN_ERROR;
- }
-
- if (pkg->cmnd == ELS_PLOGI && did >= UNF_FC_FID_DOM_MGR) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x)receive PLOGI with wellknown address(0x%x) and Send LS_RJT",
- lport->port_id, did);
-
- rjt_info.els_cmnd_code = ELS_PLOGI;
- (void)unf_send_els_rjt_by_did(lport, xchg, sid, &rjt_info);
-
- return UNF_RETURN_ERROR;
- }
-
- if ((lport->nport_id == 0 || lport->nport_id == INVALID_VALUE32) &&
- (NEED_REFRESH_NPORTID(pkg))) {
- lport->nport_id = did;
- } else if ((lport->nport_id != did) && (pkg->cmnd != ELS_FLOGI)) {
- vport = unf_cm_lookup_vport_by_did(lport, did);
- if (!vport) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) receive ELS cmd(0x%x) with abnormal D_ID(0x%x)",
- lport->nport_id, pkg->cmnd, did);
-
- unf_cm_free_xchg(lport, xchg);
- return UNF_RETURN_ERROR;
- }
- }
-
- return RETURN_OK;
-}
-
-static u32 unf_rcv_els_cmnd_req(struct unf_lport *lport, struct unf_frame_pkg *pkg)
-{
- struct unf_xchg *xchg = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u32 i = 0;
- u32 sid = 0;
- u32 did = 0;
- struct unf_lport *vport = NULL;
- u32 (*els_cmnd_handler)(struct unf_lport *, u32, struct unf_xchg *) = NULL;
-
- sid = (pkg->frame_head.csctl_sid) & UNF_NPORTID_MASK;
- did = (pkg->frame_head.rctl_did) & UNF_NPORTID_MASK;
-
- xchg = unf_mv_data_2_xchg(lport, pkg);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) receive ElsCmnd(0x%x), exchange is NULL",
- lport->port_id, pkg->cmnd);
- return UNF_RETURN_ERROR;
- }
-
- if (!pkg->last_pkg_flag) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Exchange(%u) waiting for last WQE",
- xchg->hotpooltag);
- return RETURN_OK;
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Exchange(%u) get last WQE", xchg->hotpooltag);
-
- xchg->oxid = UNF_GET_OXID(pkg);
- xchg->abort_oxid = xchg->oxid;
- xchg->rxid = UNF_GET_RXID(pkg);
- xchg->cmnd_code = pkg->cmnd;
-
- ret = unf_check_els_cmnd_valid(lport, pkg, xchg);
- if (ret != RETURN_OK)
- return UNF_RETURN_ERROR;
-
- if (lport->nport_id != did && pkg->cmnd != ELS_FLOGI) {
- vport = unf_cm_lookup_vport_by_did(lport, did);
- if (!vport) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) received unknown ELS command with S_ID(0x%x) D_ID(0x%x))",
- lport->port_id, sid, did);
- return UNF_RETURN_ERROR;
- }
- lport = vport;
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "[info]VPort(0x%x) received ELS command with S_ID(0x%x) D_ID(0x%x)",
- lport->port_id, sid, did);
- }
-
- do {
- if (pkg->cmnd == els_handle_table[i].cmnd) {
- els_cmnd_handler = els_handle_table[i].els_cmnd_handler;
- break;
- }
- i++;
- } while (i < (sizeof(els_handle_table) / sizeof(struct unf_els_handle_table)));
-
- if (els_cmnd_handler) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) receive ELS(0x%x) from RPort(0x%x) and process it",
- lport->port_id, pkg->cmnd, sid);
- ret = els_cmnd_handler(lport, sid, xchg);
- } else {
- ret = unf_els_cmnd_default_handler(lport, xchg, sid, pkg->cmnd);
- }
- return ret;
-}
-
-u32 unf_send_els_rsp_succ(struct unf_lport *lport, struct unf_frame_pkg *pkg)
-{
- struct unf_xchg *xchg = NULL;
- u32 ret = RETURN_OK;
- u16 hot_pool_tag = 0;
- ulong flags = 0;
- void (*ob_callback)(struct unf_xchg *) = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
-
- if (!lport->xchg_mgr_temp.unf_look_up_xchg_by_tag) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) lookup exchange by tag function is NULL",
- lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- hot_pool_tag = (u16)(pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]);
- xchg = (struct unf_xchg *)(lport->xchg_mgr_temp.unf_look_up_xchg_by_tag((void *)lport,
- hot_pool_tag));
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) find exhange by tag(0x%x) failed",
- lport->port_id, hot_pool_tag);
-
- return UNF_RETURN_ERROR;
- }
-
- lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- if (xchg->ob_callback &&
- (!(xchg->io_state & TGT_IO_STATE_ABORT))) {
- ob_callback = xchg->ob_callback;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) with exchange(0x%p) tag(0x%x) do callback",
- lport->port_id, xchg, hot_pool_tag);
-
- ob_callback(xchg);
- } else {
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- }
-
- unf_cm_free_xchg((void *)lport, (void *)xchg);
- return ret;
-}
-
-static u8 *unf_calc_big_resp_pld_buffer(struct unf_xchg *xchg, u32 cmnd_code)
-{
- u8 *resp_pld = NULL;
- u8 *dest = NULL;
-
- FC_CHECK_RETURN_VALUE(xchg, NULL);
-
- if (cmnd_code == ELS_ECHO) {
- resp_pld = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr->echo.echo_pld;
- } else {
- resp_pld = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr
- ->get_id.gid_rsp.gid_acc_pld;
- }
-
- if (resp_pld)
- dest = (u8 *)(resp_pld + xchg->fcp_sfs_union.sfs_entry.cur_offset);
-
- return dest;
-}
-
-static u8 *unf_calc_other_resp_pld_buffer(struct unf_xchg *xchg)
-{
- u8 *dest = NULL;
- u32 offset = 0;
-
- FC_CHECK_RETURN_VALUE(xchg, NULL);
-
- offset = xchg->fcp_sfs_union.sfs_entry.cur_offset;
- dest = (u8 *)((u8 *)(xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr) + offset);
-
- return dest;
-}
-
-u32 unf_mv_resp_2_xchg(struct unf_xchg *xchg, struct unf_frame_pkg *pkg)
-{
- u8 *dest = NULL;
- u32 length = 0;
- u32 offset = 0;
- u32 max_frame_len = 0;
- ulong flags = 0;
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
-
- if (UNF_NEED_BIG_RESPONSE_BUFF(xchg->cmnd_code)) {
- dest = unf_calc_big_resp_pld_buffer(xchg, xchg->cmnd_code);
- offset = 0;
- max_frame_len = sizeof(struct unf_gid_acc_pld);
- } else if (NS_GA_NXT == xchg->cmnd_code || NS_GIEL == xchg->cmnd_code) {
- dest = unf_calc_big_resp_pld_buffer(xchg, xchg->cmnd_code);
- offset = 0;
- max_frame_len = xchg->fcp_sfs_union.sfs_entry.sfs_buff_len;
- } else {
- dest = unf_calc_other_resp_pld_buffer(xchg);
- offset = sizeof(struct unf_fc_head);
- max_frame_len = sizeof(union unf_sfs_u);
- }
-
- if (!dest) {
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- return UNF_RETURN_ERROR;
- }
-
- if (xchg->fcp_sfs_union.sfs_entry.cur_offset == 0) {
- xchg->fcp_sfs_union.sfs_entry.cur_offset += offset;
- dest = dest + offset;
- }
-
- length = pkg->unf_cmnd_pload_bl.length;
-
- if ((xchg->fcp_sfs_union.sfs_entry.cur_offset + length) >
- max_frame_len) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Exchange(0x%p) command(0x%x) hotpooltag(0x%x) OX_RX_ID(0x%x) S_ID(0x%x) D_ID(0x%x) copy payload overrun(0x%x:0x%x:0x%x)",
- xchg, xchg->cmnd_code, xchg->hotpooltag, pkg->frame_head.oxid_rxid,
- pkg->frame_head.csctl_sid & UNF_NPORTID_MASK,
- pkg->frame_head.rctl_did & UNF_NPORTID_MASK,
- xchg->fcp_sfs_union.sfs_entry.cur_offset,
- pkg->unf_cmnd_pload_bl.length, max_frame_len);
-
- length = max_frame_len - xchg->fcp_sfs_union.sfs_entry.cur_offset;
- }
-
- if (length > 0)
- memcpy(dest, pkg->unf_cmnd_pload_bl.buffer_ptr, length);
-
- xchg->fcp_sfs_union.sfs_entry.cur_offset += length;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- return RETURN_OK;
-}
-
-static void unf_ls_gs_do_callback(struct unf_xchg *xchg,
- struct unf_frame_pkg *pkg)
-{
- ulong flags = 0;
- void (*callback)(void *, void *, void *) = NULL;
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- if (xchg->callback &&
- (xchg->cmnd_code == ELS_RRQ ||
- xchg->cmnd_code == ELS_LOGO ||
- !(xchg->io_state & TGT_IO_STATE_ABORT))) {
- callback = xchg->callback;
-
- if (xchg->cmnd_code == ELS_FLOGI || xchg->cmnd_code == ELS_FDISC)
- xchg->sid = pkg->frame_head.rctl_did & UNF_NPORTID_MASK;
-
- if (xchg->cmnd_code == ELS_ECHO) {
- xchg->private_data[PKG_PRIVATE_ECHO_CMD_RCV_TIME] =
- pkg->private_data[PKG_PRIVATE_ECHO_CMD_RCV_TIME];
- xchg->private_data[PKG_PRIVATE_ECHO_RSP_SND_TIME] =
- pkg->private_data[PKG_PRIVATE_ECHO_RSP_SND_TIME];
- xchg->private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME] =
- pkg->private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME];
- xchg->private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME] =
- pkg->private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME];
- }
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- callback(xchg->lport, xchg->rport, xchg);
- } else {
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- }
-}
-
-u32 unf_send_ls_gs_cmnd_succ(struct unf_lport *lport, struct unf_frame_pkg *pkg)
-{
- struct unf_xchg *xchg = NULL;
- u32 ret = RETURN_OK;
- u16 hot_pool_tag = 0;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
- unf_lport = lport;
-
- if (!unf_lport->xchg_mgr_temp.unf_look_up_xchg_by_tag) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) lookup exchange by tag function can't be NULL",
- unf_lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- hot_pool_tag = (u16)(pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]);
- xchg = (struct unf_xchg *)(unf_lport->xchg_mgr_temp
- .unf_look_up_xchg_by_tag((void *)unf_lport, hot_pool_tag));
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) find exchange by tag(0x%x) failed",
- unf_lport->port_id, unf_lport->nport_id, hot_pool_tag);
-
- return UNF_RETURN_ERROR;
- }
-
- UNF_CHECK_ALLOCTIME_VALID(unf_lport, hot_pool_tag, xchg,
- pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME],
- xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME]);
-
- if ((pkg->frame_head.csctl_sid & UNF_NPORTID_MASK) != xchg->did) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) find exhange invalid, package S_ID(0x%x) exchange S_ID(0x%x) D_ID(0x%x)",
- unf_lport->port_id, pkg->frame_head.csctl_sid, xchg->sid, xchg->did);
-
- return UNF_RETURN_ERROR;
- }
-
- if (pkg->last_pkg_flag == UNF_PKG_NOT_LAST_RESPONSE) {
- ret = unf_mv_resp_2_xchg(xchg, pkg);
- return ret;
- }
-
- xchg->byte_orders = pkg->byte_orders;
- unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
- unf_ls_gs_do_callback(xchg, pkg);
- unf_cm_free_xchg((void *)unf_lport, (void *)xchg);
- return ret;
-}
-
-u32 unf_send_ls_gs_cmnd_failed(struct unf_lport *lport,
- struct unf_frame_pkg *pkg)
-{
- struct unf_xchg *xchg = NULL;
- u32 ret = RETURN_OK;
- u16 hot_pool_tag = 0;
- ulong flags = 0;
- void (*ob_callback)(struct unf_xchg *) = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
-
- if (!lport->xchg_mgr_temp.unf_look_up_xchg_by_tag) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) lookup exchange by tag function can't be NULL",
- lport->port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- hot_pool_tag = (u16)(pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX]);
- xchg = (struct unf_xchg *)(lport->xchg_mgr_temp.unf_look_up_xchg_by_tag((void *)lport,
- hot_pool_tag));
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) find exhange by tag(0x%x) failed",
- lport->port_id, lport->nport_id, hot_pool_tag);
-
- return UNF_RETURN_ERROR;
- }
-
- UNF_CHECK_ALLOCTIME_VALID(lport, hot_pool_tag, xchg,
- pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME],
- xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME]);
-
- lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- if (xchg->ob_callback &&
- (xchg->cmnd_code == ELS_RRQ || xchg->cmnd_code == ELS_LOGO ||
- (!(xchg->io_state & TGT_IO_STATE_ABORT)))) {
- ob_callback = xchg->ob_callback;
- xchg->ob_callback_sts = pkg->status;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- ob_callback(xchg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) exchange(0x%p) tag(0x%x) do callback",
- lport->port_id, xchg, hot_pool_tag);
- } else {
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- }
-
- unf_cm_free_xchg((void *)lport, (void *)xchg);
- return ret;
-}
-
-static u32 unf_rcv_ls_gs_cmnd_reply(struct unf_lport *lport,
- struct unf_frame_pkg *pkg)
-{
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
-
- if (pkg->status == UNF_IO_SUCCESS || pkg->status == UNF_IO_UNDER_FLOW)
- ret = unf_send_ls_gs_cmnd_succ(lport, pkg);
- else
- ret = unf_send_ls_gs_cmnd_failed(lport, pkg);
-
- return ret;
-}
-
-u32 unf_receive_ls_gs_pkg(void *lport, struct unf_frame_pkg *pkg)
-{
- struct unf_lport *unf_lport = NULL;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
- unf_lport = (struct unf_lport *)lport;
-
- switch (pkg->type) {
- case UNF_PKG_ELS_REQ_DONE:
- case UNF_PKG_GS_REQ_DONE:
- ret = unf_rcv_ls_gs_cmnd_reply(unf_lport, pkg);
- break;
-
- case UNF_PKG_ELS_REQ:
- ret = unf_rcv_els_cmnd_req(unf_lport, pkg);
- break;
-
- default:
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) with exchange type(0x%x) abnormal",
- unf_lport->port_id, unf_lport->nport_id, pkg->type);
- break;
- }
-
- return ret;
-}
-
-u32 unf_send_els_done(void *lport, struct unf_frame_pkg *pkg)
-{
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
-
- if (pkg->type == UNF_PKG_ELS_REPLY_DONE) {
- if (pkg->status == UNF_IO_SUCCESS || pkg->status == UNF_IO_UNDER_FLOW)
- ret = unf_send_els_rsp_succ(lport, pkg);
- else
- ret = unf_send_ls_gs_cmnd_failed(lport, pkg);
- }
-
- return ret;
-}
-
-void unf_rport_immediate_link_down(struct unf_lport *lport, struct unf_rport *rport)
-{
- /* Swap case: Report Link Down immediately & release R_Port */
- ulong flags = 0;
- struct unf_disc *disc = NULL;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
-
- spin_lock_irqsave(&rport->rport_state_lock, flags);
- /* 1. Inc R_Port ref_cnt */
- if (unf_rport_ref_inc(rport) != RETURN_OK) {
- spin_unlock_irqrestore(&rport->rport_state_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) Rport(0x%p,0x%x) is removing and no need process",
- lport->port_id, rport, rport->nport_id);
-
- return;
- }
-
- /* 2. R_PORT state update: Link Down Event --->>> closing state */
- unf_rport_state_ma(rport, UNF_EVENT_RPORT_LINK_DOWN);
- spin_unlock_irqrestore(&rport->rport_state_lock, flags);
-
- /* 3. Put R_Port from busy to destroy list */
- disc = &lport->disc;
- spin_lock_irqsave(&disc->rport_busy_pool_lock, flags);
- list_del_init(&rport->entry_rport);
- list_add_tail(&rport->entry_rport, &disc->list_destroy_rports);
- spin_unlock_irqrestore(&disc->rport_busy_pool_lock, flags);
-
- /* 4. Schedule Closing work (Enqueuing workqueue) */
- unf_schedule_closing_work(lport, rport);
-
- unf_rport_ref_dec(rport);
-}
-
-struct unf_rport *unf_find_rport(struct unf_lport *lport, u32 rport_nport_id,
- u64 lport_name)
-{
- struct unf_lport *unf_lport = lport;
- struct unf_rport *unf_rport = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, NULL);
-
- if (rport_nport_id >= UNF_FC_FID_DOM_MGR) {
- /* R_Port is Fabric: by N_Port_ID */
- unf_rport = unf_get_rport_by_nport_id(unf_lport, rport_nport_id);
- } else {
- /* Others: by WWPN & N_Port_ID */
- unf_rport = unf_find_valid_rport(unf_lport, lport_name, rport_nport_id);
- }
-
- return unf_rport;
-}
-
-void unf_process_logo_in_pri_loop(struct unf_lport *lport, struct unf_rport *rport)
-{
- /* Send PLOGI or LOGO */
- struct unf_rport *unf_rport = rport;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
-
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI); /* PLOGI WAIT */
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- /* Private Loop with INI mode, Avoid COM Mode problem */
- unf_rport_delay_login(unf_rport);
-}
-
-void unf_process_logo_in_n2n(struct unf_lport *lport, struct unf_rport *rport)
-{
- /* Send PLOGI or LOGO */
- struct unf_lport *unf_lport = lport;
- struct unf_rport *unf_rport = rport;
- ulong flag = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
-
- spin_lock_irqsave(&unf_rport->rport_state_lock, flag);
-
- unf_rport_state_ma(unf_rport, UNF_EVENT_RPORT_ENTER_PLOGI);
- spin_unlock_irqrestore(&unf_rport->rport_state_lock, flag);
-
- if (unf_lport->port_name > unf_rport->port_name) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x)'s WWN(0x%llx) is larger than(0x%llx), should be master",
- unf_lport->port_id, unf_lport->port_name, unf_rport->port_name);
-
- ret = unf_send_plogi(unf_lport, unf_rport);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]LOGIN: Port(0x%x) send PLOGI failed, enter recovery",
- lport->port_id);
-
- unf_rport_error_recovery(unf_rport);
- }
- } else {
- unf_rport_enter_logo(unf_lport, unf_rport);
- }
-}
-
-void unf_process_logo_in_fabric(struct unf_lport *lport,
- struct unf_rport *rport)
-{
- /* Send GFF_ID or LOGO */
- struct unf_lport *unf_lport = lport;
- struct unf_rport *unf_rport = rport;
- struct unf_rport *sns_port = NULL;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
-
- /* L_Port with INI Mode: Send GFF_ID */
- sns_port = unf_get_rport_by_nport_id(unf_lport, UNF_FC_FID_DIR_SERV);
- if (!sns_port) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) can't find fabric port",
- unf_lport->port_id);
- return;
- }
-
- ret = unf_get_and_post_disc_event(lport, sns_port, unf_rport->nport_id,
- UNF_DISC_GET_FEATURE);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) add discovery event(0x%x) failed Rport(0x%x)",
- unf_lport->port_id, UNF_DISC_GET_FEATURE,
- unf_rport->nport_id);
-
- unf_rcv_gff_id_rsp_unknown(unf_lport, unf_rport->nport_id);
- }
-}
-
-void unf_process_rport_after_logo(struct unf_lport *lport, struct unf_rport *rport)
-{
- /*
- * 1. LOGO handler
- * 2. RPLO handler
- * 3. LOGO_CALL_BACK (send LOGO ACC) handler
- */
- struct unf_lport *unf_lport = lport;
- struct unf_rport *unf_rport = rport;
-
- FC_CHECK_RETURN_VOID(lport);
- FC_CHECK_RETURN_VOID(rport);
-
- if (unf_rport->nport_id < UNF_FC_FID_DOM_MGR) {
- /* R_Port is not fabric port (retry LOGIN or LOGO) */
- if (unf_lport->act_topo == UNF_ACT_TOP_PRIVATE_LOOP) {
- /* Private Loop: PLOGI or LOGO */
- unf_process_logo_in_pri_loop(unf_lport, unf_rport);
- } else if (unf_lport->act_topo == UNF_ACT_TOP_P2P_DIRECT) {
- /* Point to Point: LOGIN or LOGO */
- unf_process_logo_in_n2n(unf_lport, unf_rport);
- } else {
- /* Fabric or Public Loop: GFF_ID or LOGO */
- unf_process_logo_in_fabric(unf_lport, unf_rport);
- }
- } else {
- /* Rport is fabric port: link down now */
- unf_rport_linkdown(unf_lport, unf_rport);
- }
-}
-
-static u32 unf_rcv_bls_req_done(struct unf_lport *lport, struct unf_frame_pkg *pkg)
-{
- /*
- * About I/O resource:
- * 1. normal: Release I/O resource during RRQ processer
- * 2. exception: Release I/O resource immediately
- */
- struct unf_xchg *xchg = NULL;
- u16 hot_pool_tag = 0;
- ulong flags = 0;
- ulong time_ms = 0;
- u32 ret = RETURN_OK;
- struct unf_lport *unf_lport = NULL;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
- unf_lport = lport;
-
- hot_pool_tag = (u16)pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX];
- xchg = (struct unf_xchg *)unf_cm_lookup_xchg_by_tag((void *)unf_lport, hot_pool_tag);
- if (!xchg) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) can't find exchange by tag(0x%x) when receiving ABTS response",
- unf_lport->port_id, hot_pool_tag);
- return UNF_RETURN_ERROR;
- }
-
- UNF_CHECK_ALLOCTIME_VALID(lport, hot_pool_tag, xchg,
- pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME],
- xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME]);
-
- ret = unf_xchg_ref_inc(xchg, TGT_ABTS_DONE);
- FC_CHECK_RETURN_VALUE((ret == RETURN_OK), UNF_RETURN_ERROR);
-
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- xchg->oxid = UNF_GET_OXID(pkg);
- xchg->rxid = UNF_GET_RXID(pkg);
- xchg->io_state |= INI_IO_STATE_DONE;
- xchg->abts_state |= ABTS_RESPONSE_RECEIVED;
- if (!(INI_IO_STATE_UPABORT & xchg->io_state)) {
- /* NOTE: I/O exchange has been released and used again */
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) SID(0x%x) exch(0x%p) (0x%x:0x%x:0x%x:0x%x) state(0x%x) is abnormal with cnt(0x%x)",
- unf_lport->port_id, unf_lport->nport_id, xchg->sid,
- xchg, xchg->hotpooltag, xchg->oxid, xchg->rxid,
- xchg->oid, xchg->io_state,
- atomic_read(&xchg->ref_cnt));
-
- unf_xchg_ref_dec(xchg, TGT_ABTS_DONE);
- return UNF_RETURN_ERROR;
- }
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- unf_lport->xchg_mgr_temp.unf_xchg_cancel_timer((void *)xchg);
- /*
- * Exchage I/O Status check: Succ-> Add RRQ Timer
- * ***** pkg->status --- to --->>> scsi_cmnd->result *****
- * *
- * FAILED: ERR_Code or X_ID is err, or BA_RSP type is err
- */
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- if (pkg->status == UNF_IO_SUCCESS) {
- /* Succeed: PKG status -->> EXCH status -->> scsi status */
- UNF_SET_SCSI_CMND_RESULT(xchg, UNF_IO_SUCCESS);
- xchg->io_state |= INI_IO_STATE_WAIT_RRQ;
- xchg->rxid = UNF_GET_RXID(pkg);
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- /* Add RRQ timer */
- time_ms = (ulong)(unf_lport->ra_tov);
- unf_lport->xchg_mgr_temp.unf_xchg_add_timer((void *)xchg, time_ms,
- UNF_TIMER_TYPE_INI_RRQ);
- } else {
- /* Failed: PKG status -->> EXCH status -->> scsi status */
- UNF_SET_SCSI_CMND_RESULT(xchg, UNF_IO_FAILED);
- if (MARKER_STS_RECEIVED & xchg->abts_state) {
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
-
- /* NOTE: release I/O resource immediately */
- unf_cm_free_xchg(unf_lport, xchg);
- } else {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) exch(0x%p) OX_RX(0x%x:0x%x) IOstate(0x%x) ABTSstate(0x%x) receive response abnormal ref(0x%x)",
- unf_lport->port_id, xchg, xchg->oxid, xchg->rxid,
- xchg->io_state, xchg->abts_state, atomic_read(&xchg->ref_cnt));
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- }
- }
-
- /*
- * If abts response arrived before
- * marker sts received just wake up abts marker sema
- */
- spin_lock_irqsave(&xchg->xchg_state_lock, flags);
- if (!(MARKER_STS_RECEIVED & xchg->abts_state)) {
- xchg->ucode_abts_state = pkg->status;
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- up(&xchg->task_sema);
- } else {
- spin_unlock_irqrestore(&xchg->xchg_state_lock, flags);
- }
-
- unf_xchg_ref_dec(xchg, TGT_ABTS_DONE);
- return ret;
-}
-
-u32 unf_receive_bls_pkg(void *lport, struct unf_frame_pkg *pkg)
-{
- struct unf_lport *unf_lport = NULL;
- u32 ret = UNF_RETURN_ERROR;
-
- unf_lport = (struct unf_lport *)lport;
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
-
- if (pkg->type == UNF_PKG_BLS_REQ_DONE) {
- /* INI: RCVD BLS Req Done */
- ret = unf_rcv_bls_req_done(lport, pkg);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) received BLS packet type(%xh) is error",
- unf_lport->port_id, pkg->type);
-
- return UNF_RETURN_ERROR;
- }
-
- return ret;
-}
-
-static void unf_fill_free_xid_pkg(struct unf_xchg *xchg, struct unf_frame_pkg *pkg)
-{
- pkg->frame_head.csctl_sid = xchg->sid;
- pkg->frame_head.rctl_did = xchg->did;
- pkg->frame_head.oxid_rxid = (u32)(((u32)xchg->oxid << UNF_SHIFT_16) | xchg->rxid);
- pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag;
- UNF_SET_XCHG_ALLOC_TIME(pkg, xchg);
-
- if (xchg->xchg_type == UNF_XCHG_TYPE_SFS) {
- if (UNF_XCHG_IS_ELS_REPLY(xchg)) {
- pkg->type = UNF_PKG_ELS_REPLY;
- pkg->rx_or_ox_id = UNF_PKG_FREE_RXID;
- pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = INVALID_VALUE32;
- pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = INVALID_VALUE32;
- } else {
- pkg->type = UNF_PKG_ELS_REQ;
- pkg->rx_or_ox_id = UNF_PKG_FREE_OXID;
- }
- } else if (xchg->xchg_type == UNF_XCHG_TYPE_INI) {
- pkg->type = UNF_PKG_INI_IO;
- pkg->rx_or_ox_id = UNF_PKG_FREE_OXID;
- }
-}
-
-void unf_notify_chip_free_xid(struct unf_xchg *xchg)
-{
- struct unf_lport *unf_lport = NULL;
- u32 ret = RETURN_ERROR;
- struct unf_frame_pkg pkg = {0};
-
- FC_CHECK_RETURN_VOID(xchg);
- unf_lport = xchg->lport;
- FC_CHECK_RETURN_VOID(unf_lport);
-
- unf_fill_free_xid_pkg(xchg, &pkg);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Sid_Did(0x%x)(0x%x) Xchg(0x%p) RXorOX(0x%x) tag(0x%x) xid(0x%x) magic(0x%x) Stat(0x%x)type(0x%x) wait timeout.",
- xchg->sid, xchg->did, xchg, pkg.rx_or_ox_id,
- pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX], pkg.frame_head.oxid_rxid,
- pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME], xchg->io_state, pkg.type);
-
- ret = unf_lport->low_level_func.service_op.ll_release_xid(unf_lport->fc_port, &pkg);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Free xid abnormal:Sid_Did(0x%x 0x%x) Xchg(0x%p) RXorOX(0x%x) xid(0x%x) Stat(0x%x) tag(0x%x) magic(0x%x) type(0x%x).",
- xchg->sid, xchg->did, xchg, pkg.rx_or_ox_id,
- pkg.frame_head.oxid_rxid, xchg->io_state,
- pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX],
- pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME],
- pkg.type);
- }
-}
diff --git a/drivers/scsi/spfc/common/unf_service.h b/drivers/scsi/spfc/common/unf_service.h
deleted file mode 100644
index 0dd2975c6a7b..000000000000
--- a/drivers/scsi/spfc/common/unf_service.h
+++ /dev/null
@@ -1,66 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_SERVICE_H
-#define UNF_SERVICE_H
-
-#include "unf_type.h"
-#include "unf_exchg.h"
-#include "unf_rport.h"
-
-#ifdef __cplusplus
-extern "C" {
-#endif /* __cplusplus */
-
-extern u32 max_frame_size;
-#define UNF_INIT_DISC 0x1 /* first time DISC */
-#define UNF_RSCN_DISC 0x2 /* RSCN Port Addr DISC */
-#define UNF_SET_ELS_ACC_TYPE(els_cmd) ((u32)(els_cmd) << 16 | ELS_ACC)
-#define UNF_SET_ELS_RJT_TYPE(els_cmd) ((u32)(els_cmd) << 16 | ELS_RJT)
-#define UNF_XCHG_IS_ELS_REPLY(xchg) \
- ((ELS_ACC == ((xchg)->cmnd_code & 0x0ffff)) || \
- (ELS_RJT == ((xchg)->cmnd_code & 0x0ffff)))
-
-struct unf_els_handle_table {
- u32 cmnd;
- u32 (*els_cmnd_handler)(struct unf_lport *lport, u32 sid, struct unf_xchg *xchg);
-};
-
-void unf_select_sq(struct unf_xchg *xchg, struct unf_frame_pkg *pkg);
-void unf_fill_package(struct unf_frame_pkg *pkg, struct unf_xchg *xchg,
- struct unf_rport *rport);
-struct unf_xchg *unf_get_sfs_free_xchg_and_init(struct unf_lport *lport,
- u32 did,
- struct unf_rport *rport,
- union unf_sfs_u **fc_entry);
-void *unf_get_one_big_sfs_buf(struct unf_xchg *xchg);
-u32 unf_mv_resp_2_xchg(struct unf_xchg *xchg, struct unf_frame_pkg *pkg);
-void unf_rport_immediate_link_down(struct unf_lport *lport,
- struct unf_rport *rport);
-struct unf_rport *unf_find_rport(struct unf_lport *lport, u32 rport_nport_id,
- u64 port_name);
-void unf_process_logo_in_fabric(struct unf_lport *lport,
- struct unf_rport *rport);
-void unf_notify_chip_free_xid(struct unf_xchg *xchg);
-
-u32 unf_ls_gs_cmnd_send(struct unf_lport *lport, struct unf_frame_pkg *pkg,
- struct unf_xchg *xchg);
-u32 unf_receive_ls_gs_pkg(void *lport, struct unf_frame_pkg *pkg);
-struct unf_xchg *unf_mv_data_2_xchg(struct unf_lport *lport,
- struct unf_frame_pkg *pkg);
-u32 unf_receive_bls_pkg(void *lport, struct unf_frame_pkg *pkg);
-u32 unf_send_els_done(void *lport, struct unf_frame_pkg *pkg);
-u32 unf_send_els_rjt_by_did(struct unf_lport *lport, struct unf_xchg *xchg,
- u32 did, struct unf_rjt_info *rjt_info);
-u32 unf_send_els_rjt_by_rport(struct unf_lport *lport, struct unf_xchg *xchg,
- struct unf_rport *rport,
- struct unf_rjt_info *rjt_info);
-u32 unf_send_abts(struct unf_lport *lport, struct unf_xchg *xchg);
-void unf_process_rport_after_logo(struct unf_lport *lport,
- struct unf_rport *rport);
-
-#ifdef __cplusplus
-}
-#endif /* __cplusplus */
-
-#endif /* __UNF_SERVICE_H__ */
diff --git a/drivers/scsi/spfc/common/unf_type.h b/drivers/scsi/spfc/common/unf_type.h
deleted file mode 100644
index 28e163d0543c..000000000000
--- a/drivers/scsi/spfc/common/unf_type.h
+++ /dev/null
@@ -1,216 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef UNF_TYPE_H
-#define UNF_TYPE_H
-
-#include <linux/sched.h>
-#include <linux/kthread.h>
-#include <linux/fs.h>
-#include <linux/vmalloc.h>
-#include <linux/version.h>
-#include <linux/list.h>
-#include <linux/spinlock.h>
-#include <linux/delay.h>
-#include <linux/workqueue.h>
-#include <linux/kref.h>
-#include <linux/scatterlist.h>
-#include <linux/crc-t10dif.h>
-#include <linux/ctype.h>
-#include <linux/types.h>
-#include <linux/compiler.h>
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/pci.h>
-#include <linux/interrupt.h>
-#include <linux/random.h>
-#include <linux/jiffies.h>
-#include <linux/cpufreq.h>
-#include <linux/semaphore.h>
-#include <linux/jiffies.h>
-
-#include <scsi/scsi.h>
-#include <scsi/scsi_host.h>
-#include <scsi/scsi_device.h>
-#include <scsi/scsi_cmnd.h>
-#include <scsi/scsi_transport_fc.h>
-#include <linux/sched/signal.h>
-
-#ifndef SPFC_FT
-#define SPFC_FT
-#endif
-
-#define BUF_LIST_PAGE_SIZE (PAGE_SIZE << 8)
-
-#define UNF_S_TO_NS (1000000000)
-#define UNF_S_TO_MS (1000)
-
-enum UNF_OS_THRD_PRI_E {
- UNF_OS_THRD_PRI_HIGHEST = 0,
- UNF_OS_THRD_PRI_HIGH,
- UNF_OS_THRD_PRI_SUBHIGH,
- UNF_OS_THRD_PRI_MIDDLE,
- UNF_OS_THRD_PRI_LOW,
- UNF_OS_THRD_PRI_BUTT
-};
-
-#define UNF_OS_LIST_NEXT(a) ((a)->next)
-#define UNF_OS_LIST_PREV(a) ((a)->prev)
-
-#define UNF_OS_PER_NS (1000000000)
-#define UNF_OS_MS_TO_NS (1000000)
-
-#ifndef MIN
-#define MIN(X, Y) ((X) < (Y) ? (X) : (Y))
-#endif
-
-#ifndef MAX
-#define MAX(X, Y) ((X) > (Y) ? (X) : (Y))
-#endif
-
-#ifndef INVALID_VALUE64
-#define INVALID_VALUE64 0xFFFFFFFFFFFFFFFFULL
-#endif /* INVALID_VALUE64 */
-
-#ifndef INVALID_VALUE32
-#define INVALID_VALUE32 0xFFFFFFFF
-#endif /* INVALID_VALUE32 */
-
-#ifndef INVALID_VALUE16
-#define INVALID_VALUE16 0xFFFF
-#endif /* INVALID_VALUE16 */
-
-#ifndef INVALID_VALUE8
-#define INVALID_VALUE8 0xFF
-#endif /* INVALID_VALUE8 */
-
-#ifndef RETURN_OK
-#define RETURN_OK 0
-#endif
-
-#ifndef RETURN_ERROR
-#define RETURN_ERROR (~0)
-#endif
-#define UNF_RETURN_ERROR (~0)
-
-/* define shift bits */
-#define UNF_SHIFT_1 1
-#define UNF_SHIFT_2 2
-#define UNF_SHIFT_3 3
-#define UNF_SHIFT_4 4
-#define UNF_SHIFT_6 6
-#define UNF_SHIFT_7 7
-#define UNF_SHIFT_8 8
-#define UNF_SHIFT_11 11
-#define UNF_SHIFT_12 12
-#define UNF_SHIFT_15 15
-#define UNF_SHIFT_16 16
-#define UNF_SHIFT_17 17
-#define UNF_SHIFT_19 19
-#define UNF_SHIFT_20 20
-#define UNF_SHIFT_23 23
-#define UNF_SHIFT_24 24
-#define UNF_SHIFT_25 25
-#define UNF_SHIFT_26 26
-#define UNF_SHIFT_28 28
-#define UNF_SHIFT_29 29
-#define UNF_SHIFT_32 32
-#define UNF_SHIFT_35 35
-#define UNF_SHIFT_37 37
-#define UNF_SHIFT_39 39
-#define UNF_SHIFT_40 40
-#define UNF_SHIFT_43 43
-#define UNF_SHIFT_48 48
-#define UNF_SHIFT_51 51
-#define UNF_SHIFT_56 56
-#define UNF_SHIFT_57 57
-#define UNF_SHIFT_59 59
-#define UNF_SHIFT_60 60
-#define UNF_SHIFT_61 61
-
-/* array index */
-#define ARRAY_INDEX_0 0
-#define ARRAY_INDEX_1 1
-#define ARRAY_INDEX_2 2
-#define ARRAY_INDEX_3 3
-#define ARRAY_INDEX_4 4
-#define ARRAY_INDEX_5 5
-#define ARRAY_INDEX_6 6
-#define ARRAY_INDEX_7 7
-#define ARRAY_INDEX_8 8
-#define ARRAY_INDEX_10 10
-#define ARRAY_INDEX_11 11
-#define ARRAY_INDEX_12 12
-#define ARRAY_INDEX_13 13
-
-/* define mask bits */
-#define UNF_MASK_BIT_7_0 0xff
-#define UNF_MASK_BIT_15_0 0x0000ffff
-#define UNF_MASK_BIT_31_16 0xffff0000
-
-#define UNF_IO_SUCCESS 0x00000000
-#define UNF_IO_ABORTED 0x00000001 /* the host system aborted the command */
-#define UNF_IO_FAILED 0x00000002
-#define UNF_IO_ABORT_ABTS 0x00000003
-#define UNF_IO_ABORT_LOGIN 0x00000004 /* abort login */
-#define UNF_IO_ABORT_REET 0x00000005 /* reset event aborted the transport */
-#define UNF_IO_ABORT_FAILED 0x00000006 /* abort failed */
-/* data out of order ,data reassembly error */
-#define UNF_IO_OUTOF_ORDER 0x00000007
-#define UNF_IO_FTO 0x00000008 /* frame time out */
-#define UNF_IO_LINK_FAILURE 0x00000009
-#define UNF_IO_OVER_FLOW 0x0000000a /* data over run */
-#define UNF_IO_RSP_OVER 0x0000000b
-#define UNF_IO_LOST_FRAME 0x0000000c
-#define UNF_IO_UNDER_FLOW 0x0000000d /* data under run */
-#define UNF_IO_HOST_PROG_ERROR 0x0000000e
-#define UNF_IO_SEST_PROG_ERROR 0x0000000f
-#define UNF_IO_INVALID_ENTRY 0x00000010
-#define UNF_IO_ABORT_SEQ_NOT 0x00000011
-#define UNF_IO_REJECT 0x00000012
-#define UNF_IO_RS_INFO 0x00000013
-#define UNF_IO_EDC_IN_ERROR 0x00000014
-#define UNF_IO_EDC_OUT_ERROR 0x00000015
-#define UNF_IO_UNINIT_KEK_ERR 0x00000016
-#define UNF_IO_DEK_OUTOF_RANGE 0x00000017
-#define UNF_IO_KEY_UNWRAP_ERR 0x00000018
-#define UNF_IO_KEY_TAG_ERR 0x00000019
-#define UNF_IO_KEY_ECC_ERR 0x0000001a
-#define UNF_IO_BLOCK_SIZE_ERROR 0x0000001b
-#define UNF_IO_ILLEGAL_CIPHER_MODE 0x0000001c
-#define UNF_IO_CLEAN_UP 0x0000001d
-#define UNF_SRR_RECEIVE 0x0000001e /* receive srr */
-/* The target device sent an ABTS to abort the I/O.*/
-#define UNF_IO_ABORTED_BY_TARGET 0x0000001f
-#define UNF_IO_TRANSPORT_ERROR 0x00000020
-#define UNF_IO_LINK_FLASH 0x00000021
-#define UNF_IO_TIMEOUT 0x00000022
-#define UNF_IO_PORT_UNAVAILABLE 0x00000023
-#define UNF_IO_PORT_LOGOUT 0x00000024
-#define UNF_IO_PORT_CFG_CHG 0x00000025
-#define UNF_IO_FIRMWARE_RES_UNAVAILABLE 0x00000026
-#define UNF_IO_TASK_MGT_OVERRUN 0x00000027
-#define UNF_IO_DMA_ERROR 0x00000028
-#define UNF_IO_DIF_ERROR 0x00000029
-#define UNF_IO_NO_LPORT 0x0000002a
-#define UNF_IO_NO_XCHG 0x0000002b
-#define UNF_IO_SOFT_ERR 0x0000002c
-#define UNF_IO_XCHG_ADD_ERROR 0x0000002d
-#define UNF_IO_NO_LOGIN 0x0000002e
-#define UNF_IO_NO_BUFFER 0x0000002f
-#define UNF_IO_DID_ERROR 0x00000030
-#define UNF_IO_UNSUPPORT 0x00000031
-#define UNF_IO_NOREADY 0x00000032
-#define UNF_IO_NPORTID_REUSED 0x00000033
-#define UNF_IO_NPORT_HANDLE_REUSED 0x00000034
-#define UNF_IO_NO_NPORT_HANDLE 0x00000035
-#define UNF_IO_ABORT_BY_FW 0x00000036
-#define UNF_IO_ABORT_PORT_REMOVING 0x00000037
-#define UNF_IO_INCOMPLETE 0x00000038
-#define UNF_IO_DIF_REF_ERROR 0x00000039
-#define UNF_IO_DIF_GEN_ERROR 0x0000003a
-
-#define UNF_IO_ERREND 0xFFFFFFFF
-
-#endif
diff --git a/drivers/scsi/spfc/hw/spfc_chipitf.c b/drivers/scsi/spfc/hw/spfc_chipitf.c
deleted file mode 100644
index be6073ff4dc0..000000000000
--- a/drivers/scsi/spfc/hw/spfc_chipitf.c
+++ /dev/null
@@ -1,1105 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "spfc_chipitf.h"
-#include "sphw_hw.h"
-#include "sphw_crm.h"
-
-#define SPFC_MBOX_TIME_SEC_MAX (60)
-
-#define SPFC_LINK_UP_COUNT 1
-#define SPFC_LINK_DOWN_COUNT 2
-#define SPFC_FC_DELETE_CMND_COUNT 3
-
-#define SPFC_MBX_MAX_TIMEOUT 10000
-
-u32 spfc_get_chip_msg(void *hba, void *mac)
-{
- struct spfc_hba_info *spfc_hba = NULL;
- struct unf_get_chip_info_argout *wwn = NULL;
- struct spfc_inmbox_get_chip_info get_chip_info;
- union spfc_outmbox_generic *get_chip_info_sts = NULL;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(mac, UNF_RETURN_ERROR);
-
- spfc_hba = (struct spfc_hba_info *)hba;
- wwn = (struct unf_get_chip_info_argout *)mac;
-
- memset(&get_chip_info, 0, sizeof(struct spfc_inmbox_get_chip_info));
-
- get_chip_info_sts = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
- if (!get_chip_info_sts) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]malloc outmbox memory failed");
- return UNF_RETURN_ERROR;
- }
- memset(get_chip_info_sts, 0, sizeof(union spfc_outmbox_generic));
-
- get_chip_info.header.cmnd_type = SPFC_MBOX_GET_CHIP_INFO;
- get_chip_info.header.length =
- SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_get_chip_info));
-
- if (spfc_mb_send_and_wait_mbox(spfc_hba, &get_chip_info,
- sizeof(struct spfc_inmbox_get_chip_info),
- get_chip_info_sts) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]spfc can't send and wait mailbox, command type: 0x%x.",
- get_chip_info.header.cmnd_type);
-
- goto exit;
- }
-
- if (get_chip_info_sts->get_chip_info_sts.status != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) mailbox status incorrect status(0x%x) .",
- spfc_hba->port_cfg.port_id,
- get_chip_info_sts->get_chip_info_sts.status);
-
- goto exit;
- }
-
- if (get_chip_info_sts->get_chip_info_sts.header.cmnd_type != SPFC_MBOX_GET_CHIP_INFO_STS) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "Port(0x%x) receive mailbox type incorrect type: 0x%x.",
- spfc_hba->port_cfg.port_id,
- get_chip_info_sts->get_chip_info_sts.header.cmnd_type);
-
- goto exit;
- }
-
- wwn->board_type = get_chip_info_sts->get_chip_info_sts.board_type;
- spfc_hba->card_info.card_type = get_chip_info_sts->get_chip_info_sts.board_type;
- wwn->wwnn = get_chip_info_sts->get_chip_info_sts.wwnn;
- wwn->wwpn = get_chip_info_sts->get_chip_info_sts.wwpn;
-
- ret = RETURN_OK;
-exit:
- kfree(get_chip_info_sts);
-
- return ret;
-}
-
-u32 spfc_get_chip_capability(void *hwdev_handle,
- struct spfc_chip_info *chip_info)
-{
- struct spfc_inmbox_get_chip_info get_chip_info;
- union spfc_outmbox_generic *get_chip_info_sts = NULL;
- u16 out_size = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(hwdev_handle, UNF_RETURN_ERROR);
-
- memset(&get_chip_info, 0, sizeof(struct spfc_inmbox_get_chip_info));
-
- get_chip_info_sts = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
- if (!get_chip_info_sts) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "malloc outmbox memory failed");
- return UNF_RETURN_ERROR;
- }
- memset(get_chip_info_sts, 0, sizeof(union spfc_outmbox_generic));
-
- get_chip_info.header.cmnd_type = SPFC_MBOX_GET_CHIP_INFO;
- get_chip_info.header.length =
- SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_get_chip_info));
- get_chip_info.header.port_id = (u8)sphw_global_func_id(hwdev_handle);
- out_size = sizeof(union spfc_outmbox_generic);
-
- if (sphw_msg_to_mgmt_sync(hwdev_handle, COMM_MOD_FC, SPFC_MBOX_GET_CHIP_INFO,
- (void *)&get_chip_info.header,
- sizeof(struct spfc_inmbox_get_chip_info),
- (union spfc_outmbox_generic *)(get_chip_info_sts), &out_size,
- (SPFC_MBX_MAX_TIMEOUT), SPHW_CHANNEL_FC) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "spfc can't send and wait mailbox, command type: 0x%x.",
- SPFC_MBOX_GET_CHIP_INFO);
-
- goto exit;
- }
-
- if (get_chip_info_sts->get_chip_info_sts.status != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port mailbox status incorrect status(0x%x) .",
- get_chip_info_sts->get_chip_info_sts.status);
-
- goto exit;
- }
-
- if (get_chip_info_sts->get_chip_info_sts.header.cmnd_type != SPFC_MBOX_GET_CHIP_INFO_STS) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port receive mailbox type incorrect type: 0x%x.",
- get_chip_info_sts->get_chip_info_sts.header.cmnd_type);
-
- goto exit;
- }
-
- chip_info->wwnn = get_chip_info_sts->get_chip_info_sts.wwnn;
- chip_info->wwpn = get_chip_info_sts->get_chip_info_sts.wwpn;
-
- ret = RETURN_OK;
-exit:
- kfree(get_chip_info_sts);
-
- return ret;
-}
-
-u32 spfc_config_port_table(struct spfc_hba_info *hba)
-{
- struct spfc_inmbox_config_api config_api;
- union spfc_outmbox_generic *out_mbox = NULL;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
-
- memset(&config_api, 0, sizeof(config_api));
- out_mbox = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
- if (!out_mbox) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]malloc outmbox memory failed");
- return UNF_RETURN_ERROR;
- }
- memset(out_mbox, 0, sizeof(union spfc_outmbox_generic));
-
- config_api.header.cmnd_type = SPFC_MBOX_CONFIG_API;
- config_api.header.length = SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_config_api));
-
- config_api.op_code = UNDEFINEOPCODE;
-
- /* change switching top cmd of CM to the cmd that up recognize */
- /* if the cmd equals UNF_TOP_P2P_MASK sending in CM means that it
- * should be changed into P2P top, LL using SPFC_TOP_NON_LOOP_MASK
- */
- if (((u8)(hba->port_topo_cfg)) == UNF_TOP_P2P_MASK) {
- config_api.topy_mode = 0x2;
- /* if the cmd equals UNF_TOP_LOOP_MASK sending in CM means that it
- *should be changed into loop top, LL using SPFC_TOP_LOOP_MASK
- */
- } else if (((u8)(hba->port_topo_cfg)) == UNF_TOP_LOOP_MASK) {
- config_api.topy_mode = 0x1;
- /* if the cmd equals UNF_TOP_AUTO_MASK sending in CM means that it
- *should be changed into loop top, LL using SPFC_TOP_AUTO_MASK
- */
- } else if (((u8)(hba->port_topo_cfg)) == UNF_TOP_AUTO_MASK) {
- config_api.topy_mode = 0x0;
- } else {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) topo cmd is error, command type: 0x%x",
- hba->port_cfg.port_id, (u8)(hba->port_topo_cfg));
-
- goto exit;
- }
-
- /* About speed */
- config_api.sfp_speed = (u8)(hba->port_speed_cfg);
- config_api.max_speed = (u8)(hba->max_support_speed);
-
- config_api.rx_6432g_bb_credit = SPFC_LOWLEVEL_DEFAULT_32G_BB_CREDIT;
- config_api.rx_16g_bb_credit = SPFC_LOWLEVEL_DEFAULT_16G_BB_CREDIT;
- config_api.rx_84g_bb_credit = SPFC_LOWLEVEL_DEFAULT_8G_BB_CREDIT;
- config_api.rdy_cnt_bf_fst_frm = SPFC_LOWLEVEL_DEFAULT_LOOP_BB_CREDIT;
- config_api.esch_32g_value = SPFC_LOWLEVEL_DEFAULT_32G_ESCH_VALUE;
- config_api.esch_16g_value = SPFC_LOWLEVEL_DEFAULT_16G_ESCH_VALUE;
- config_api.esch_8g_value = SPFC_LOWLEVEL_DEFAULT_8G_ESCH_VALUE;
- config_api.esch_4g_value = SPFC_LOWLEVEL_DEFAULT_8G_ESCH_VALUE;
- config_api.esch_64g_value = SPFC_LOWLEVEL_DEFAULT_8G_ESCH_VALUE;
- config_api.esch_bust_size = SPFC_LOWLEVEL_DEFAULT_ESCH_BUST_SIZE;
-
- /* default value:0xFF */
- config_api.hard_alpa = 0xFF;
- memcpy(config_api.port_name, hba->sys_port_name, UNF_WWN_LEN);
-
- /* if only for slave, the value is 1; if participate master choosing,
- * the value is 0
- */
- config_api.slave = hba->port_loop_role;
-
- /* 1:auto negotiate, 0:fixed mode negotiate */
- if (config_api.sfp_speed == 0)
- config_api.auto_sneg = 0x1;
- else
- config_api.auto_sneg = 0x0;
-
- if (spfc_mb_send_and_wait_mbox(hba, &config_api, sizeof(config_api),
- out_mbox) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[warn]Port(0x%x) SPFC can't send and wait mailbox, command type: 0x%x",
- hba->port_cfg.port_id,
- config_api.header.cmnd_type);
-
- goto exit;
- }
-
- if (out_mbox->config_api_sts.status != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
- "[err]Port(0x%x) receive mailbox type(0x%x) with status(0x%x) error",
- hba->port_cfg.port_id,
- out_mbox->config_api_sts.header.cmnd_type,
- out_mbox->config_api_sts.status);
-
- goto exit;
- }
-
- if (out_mbox->config_api_sts.header.cmnd_type != SPFC_MBOX_CONFIG_API_STS) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
- "[err]Port(0x%x) receive mailbox type(0x%x) error",
- hba->port_cfg.port_id,
- out_mbox->config_api_sts.header.cmnd_type);
-
- goto exit;
- }
-
- ret = RETURN_OK;
-exit:
- kfree(out_mbox);
-
- return ret;
-}
-
-u32 spfc_port_switch(struct spfc_hba_info *hba, bool turn_on)
-{
- struct spfc_inmbox_port_switch port_switch;
- union spfc_outmbox_generic *port_switch_sts = NULL;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
-
- memset(&port_switch, 0, sizeof(port_switch));
-
- port_switch_sts = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
- if (!port_switch_sts) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]malloc outmbox memory failed");
- return UNF_RETURN_ERROR;
- }
- memset(port_switch_sts, 0, sizeof(union spfc_outmbox_generic));
-
- port_switch.header.cmnd_type = SPFC_MBOX_PORT_SWITCH;
- port_switch.header.length = SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_port_switch));
- port_switch.op_code = (u8)turn_on;
-
- if (spfc_mb_send_and_wait_mbox(hba, &port_switch, sizeof(port_switch),
- (union spfc_outmbox_generic *)((void *)port_switch_sts)) !=
- RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[warn]Port(0x%x) SPFC can't send and wait mailbox, command type(0x%x) opcode(0x%x)",
- hba->port_cfg.port_id,
- port_switch.header.cmnd_type, port_switch.op_code);
-
- goto exit;
- }
-
- if (port_switch_sts->port_switch_sts.status != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
- "[err]Port(0x%x) receive mailbox type(0x%x) status(0x%x) error",
- hba->port_cfg.port_id,
- port_switch_sts->port_switch_sts.header.cmnd_type,
- port_switch_sts->port_switch_sts.status);
-
- goto exit;
- }
-
- if (port_switch_sts->port_switch_sts.header.cmnd_type != SPFC_MBOX_PORT_SWITCH_STS) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
- "[err]Port(0x%x) receive mailbox type(0x%x) error",
- hba->port_cfg.port_id,
- port_switch_sts->port_switch_sts.header.cmnd_type);
-
- goto exit;
- }
-
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_MAJOR,
- "[event]Port(0x%x) switch succeed, turns to %s",
- hba->port_cfg.port_id, (turn_on) ? "on" : "off");
-
- ret = RETURN_OK;
-exit:
- kfree(port_switch_sts);
-
- return ret;
-}
-
-u32 spfc_config_login_api(struct spfc_hba_info *hba,
- struct unf_port_login_parms *login_parms)
-{
-#define SPFC_LOOP_RDYNUM 8
- int iret = RETURN_OK;
- u32 ret = UNF_RETURN_ERROR;
- struct spfc_inmbox_config_login config_login;
- union spfc_outmbox_generic *cfg_login_sts = NULL;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
-
- memset(&config_login, 0, sizeof(config_login));
- cfg_login_sts = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
- if (!cfg_login_sts) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]malloc outmbox memory failed");
- return UNF_RETURN_ERROR;
- }
- memset(cfg_login_sts, 0, sizeof(union spfc_outmbox_generic));
-
- config_login.header.cmnd_type = SPFC_MBOX_CONFIG_LOGIN_API;
- config_login.header.length = SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_config_login));
- config_login.header.port_id = hba->port_index;
-
- config_login.op_code = UNDEFINEOPCODE;
-
- config_login.tx_bb_credit = hba->remote_bb_credit;
-
- config_login.etov = hba->compared_edtov_val;
- config_login.rtov = hba->compared_ratov_val;
-
- config_login.rt_tov_tag = hba->remote_rttov_tag;
- config_login.ed_tov_tag = hba->remote_edtov_tag;
- config_login.bb_credit = hba->remote_bb_credit;
- config_login.bb_scn = SPFC_LSB(hba->compared_bb_scn);
-
- if (config_login.bb_scn) {
- config_login.lr_flag = (login_parms->els_cmnd_code == ELS_PLOGI) ? 0 : 1;
- ret = spfc_mb_send_and_wait_mbox(hba, &config_login, sizeof(config_login),
- (union spfc_outmbox_generic *)cfg_login_sts);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) SPFC can't send and wait mailbox, command type: 0x%x.",
- hba->port_cfg.port_id, config_login.header.cmnd_type);
-
- goto exit;
- }
-
- if (cfg_login_sts->config_login_sts.header.cmnd_type !=
- SPFC_MBOX_CONFIG_LOGIN_API_STS) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "Port(0x%x) Receive mailbox type incorrect. Type: 0x%x.",
- hba->port_cfg.port_id,
- cfg_login_sts->config_login_sts.header.cmnd_type);
-
- goto exit;
- }
-
- if (cfg_login_sts->config_login_sts.status != STATUS_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "Port(0x%x) Receive mailbox type(0x%x) status incorrect. Status: 0x%x.",
- hba->port_cfg.port_id,
- cfg_login_sts->config_login_sts.header.cmnd_type,
- cfg_login_sts->config_login_sts.status);
-
- goto exit;
- }
- } else {
- iret = sphw_msg_to_mgmt_async(hba->dev_handle, COMM_MOD_FC,
- SPFC_MBOX_CONFIG_LOGIN_API, &config_login,
- sizeof(config_login), SPHW_CHANNEL_FC);
-
- if (iret != 0) {
- SPFC_MAILBOX_STAT(hba, SPFC_SEND_CONFIG_LOGINAPI_FAIL);
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) spfc can't send config login cmd to up,ret:%d.",
- hba->port_cfg.port_id, iret);
-
- goto exit;
- }
-
- SPFC_MAILBOX_STAT(hba, SPFC_SEND_CONFIG_LOGINAPI);
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "Port(0x%x) Topo(0x%x) Config login param to up: txbbcredit(0x%x), BB_SC_N(0x%x).",
- hba->port_cfg.port_id, hba->active_topo,
- config_login.tx_bb_credit, config_login.bb_scn);
-
- ret = RETURN_OK;
-exit:
- kfree(cfg_login_sts);
-
- return ret;
-}
-
-u32 spfc_mb_send_and_wait_mbox(struct spfc_hba_info *hba, const void *in_mbox,
- u16 in_size,
- union spfc_outmbox_generic *out_mbox)
-{
- void *handle = NULL;
- u16 out_size = 0;
- ulong time_out = 0;
- int ret = 0;
- struct spfc_mbox_header *header = NULL;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(in_mbox, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(out_mbox, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(hba->dev_handle, UNF_RETURN_ERROR);
- header = (struct spfc_mbox_header *)in_mbox;
- out_size = sizeof(union spfc_outmbox_generic);
- handle = hba->dev_handle;
- header->port_id = (u8)sphw_global_func_id(handle);
-
- /* Wait for las mailbox completion: */
- time_out = wait_for_completion_timeout(&hba->mbox_complete,
- (ulong)msecs_to_jiffies(SPFC_MBOX_TIME_SEC_MAX *
- UNF_S_TO_MS));
- if (time_out == SPFC_ZERO) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
- "[err]Port(0x%x) wait mailbox(0x%x) completion timeout: %d sec",
- hba->port_cfg.port_id, header->cmnd_type,
- SPFC_MBOX_TIME_SEC_MAX);
-
- return UNF_RETURN_ERROR;
- }
-
- /* Send Msg to uP Sync: timer 10s */
- ret = sphw_msg_to_mgmt_sync(handle, COMM_MOD_FC, header->cmnd_type,
- (void *)in_mbox, in_size,
- (union spfc_outmbox_generic *)out_mbox,
- &out_size, (SPFC_MBX_MAX_TIMEOUT),
- SPHW_CHANNEL_FC);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[warn]Port(0x%x) can not send mailbox(0x%x) with ret:%d",
- hba->port_cfg.port_id, header->cmnd_type, ret);
-
- complete(&hba->mbox_complete);
- return UNF_RETURN_ERROR;
- }
-
- complete(&hba->mbox_complete);
-
- return RETURN_OK;
-}
-
-void spfc_initial_dynamic_info(struct spfc_hba_info *fc_port)
-{
- struct spfc_hba_info *hba = fc_port;
- ulong flag = 0;
-
- FC_CHECK_RETURN_VOID(hba);
-
- spin_lock_irqsave(&hba->hba_lock, flag);
- hba->active_port_speed = UNF_PORT_SPEED_UNKNOWN;
- hba->active_topo = UNF_ACT_TOP_UNKNOWN;
- hba->phy_link = UNF_PORT_LINK_DOWN;
- hba->queue_set_stage = SPFC_QUEUE_SET_STAGE_INIT;
- hba->loop_map_valid = LOOP_MAP_INVALID;
- hba->srq_delay_info.srq_delay_flag = 0;
- hba->srq_delay_info.root_rq_rcvd_flag = 0;
- spin_unlock_irqrestore(&hba->hba_lock, flag);
-}
-
-static u32 spfc_recv_fc_linkup(struct spfc_hba_info *hba, void *buf_in)
-{
-#define SPFC_LOOP_MASK 0x1
-#define SPFC_LOOPMAP_COUNT 128
-
- u32 ret = UNF_RETURN_ERROR;
- struct spfc_link_event *link_event = NULL;
-
- link_event = (struct spfc_link_event *)buf_in;
- hba->phy_link = UNF_PORT_LINK_UP;
- hba->active_port_speed = link_event->speed;
- hba->led_states.green_speed_led = (u8)(link_event->green_speed_led);
- hba->led_states.yellow_speed_led = (u8)(link_event->yellow_speed_led);
- hba->led_states.ac_led = (u8)(link_event->ac_led);
-
- if (link_event->top_type == SPFC_LOOP_MASK &&
- (link_event->loop_map_info[ARRAY_INDEX_1] == UNF_FL_PORT_LOOP_ADDR ||
- link_event->loop_map_info[ARRAY_INDEX_2] == UNF_FL_PORT_LOOP_ADDR)) {
- hba->active_topo = UNF_ACT_TOP_PUBLIC_LOOP; /* Public Loop */
- hba->active_alpa = link_event->alpa_value; /* AL_PA */
- memcpy(hba->loop_map, link_event->loop_map_info, SPFC_LOOPMAP_COUNT);
- hba->loop_map_valid = LOOP_MAP_VALID;
- } else if (link_event->top_type == SPFC_LOOP_MASK) {
- hba->active_topo = UNF_ACT_TOP_PRIVATE_LOOP; /* Private Loop */
- hba->active_alpa = link_event->alpa_value; /* AL_PA */
- memcpy(hba->loop_map, link_event->loop_map_info, SPFC_LOOPMAP_COUNT);
- hba->loop_map_valid = LOOP_MAP_VALID;
- } else {
- hba->active_topo = UNF_TOP_P2P_MASK; /* P2P_D or P2P_F */
- }
-
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_KEVENT,
- "[event]Port(0x%x) receive link up event(0x%x) with speed(0x%x) uP_topo(0x%x) driver_topo(0x%x)",
- hba->port_cfg.port_id, link_event->link_event,
- link_event->speed, link_event->top_type, hba->active_topo);
-
- /* Set clear & flush state */
- spfc_set_hba_clear_state(hba, false);
- spfc_set_hba_flush_state(hba, false);
- spfc_set_rport_flush_state(hba, false);
-
- /* Report link up event to COM */
- UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport, UNF_PORT_LINK_UP,
- &hba->active_port_speed);
-
- SPFC_LINK_EVENT_STAT(hba, SPFC_LINK_UP_COUNT);
-
- return ret;
-}
-
-static u32 spfc_recv_fc_linkdown(struct spfc_hba_info *hba, void *buf_in)
-{
- u32 ret = UNF_RETURN_ERROR;
- struct spfc_link_event *link_event = NULL;
-
- link_event = (struct spfc_link_event *)buf_in;
-
- /* 1. Led state setting */
- hba->led_states.green_speed_led = (u8)(link_event->green_speed_led);
- hba->led_states.yellow_speed_led = (u8)(link_event->yellow_speed_led);
- hba->led_states.ac_led = (u8)(link_event->ac_led);
-
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_KEVENT,
- "[event]Port(0x%x) receive link down event(0x%x) reason(0x%x)",
- hba->port_cfg.port_id, link_event->link_event, link_event->reason);
-
- spfc_initial_dynamic_info(hba);
-
- /* 2. set HBA flush state */
- spfc_set_hba_flush_state(hba, true);
-
- /* 3. set R_Port (parent SQ) flush state */
- spfc_set_rport_flush_state(hba, true);
-
- /* 4. Report link down event to COM */
- UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport, UNF_PORT_LINK_DOWN, 0);
-
- /* DFX setting */
- SPFC_LINK_REASON_STAT(hba, link_event->reason);
- SPFC_LINK_EVENT_STAT(hba, SPFC_LINK_DOWN_COUNT);
-
- return ret;
-}
-
-static u32 spfc_recv_fc_delcmd(struct spfc_hba_info *hba, void *buf_in)
-{
- u32 ret = UNF_RETURN_ERROR;
- struct spfc_link_event *link_event = NULL;
-
- link_event = (struct spfc_link_event *)buf_in;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_KEVENT,
- "[event]Port(0x%x) receive delete cmd event(0x%x)",
- hba->port_cfg.port_id, link_event->link_event);
-
- /* Send buffer clear cmnd */
- ret = spfc_clear_fetched_sq_wqe(hba);
-
- hba->queue_set_stage = SPFC_QUEUE_SET_STAGE_SCANNING;
- SPFC_LINK_EVENT_STAT(hba, SPFC_FC_DELETE_CMND_COUNT);
-
- return ret;
-}
-
-static u32 spfc_recv_fc_error(struct spfc_hba_info *hba, void *buf_in)
-{
-#define FC_ERR_LEVEL_DEAD 0
-#define FC_ERR_LEVEL_HIGH 1
-#define FC_ERR_LEVEL_LOW 2
-
- u32 ret = UNF_RETURN_ERROR;
- struct spfc_up_error_event *up_error_event = NULL;
-
- up_error_event = (struct spfc_up_error_event *)buf_in;
- if (up_error_event->error_type >= SPFC_UP_ERR_BUTT ||
- up_error_event->error_value >= SPFC_ERR_VALUE_BUTT) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "Port(0x%x) receive a unsupported UP Error Event Type(0x%x) Value(0x%x).",
- hba->port_cfg.port_id, up_error_event->error_type,
- up_error_event->error_value);
- return ret;
- }
-
- switch (up_error_event->error_level) {
- case FC_ERR_LEVEL_DEAD:
- ret = RETURN_OK;
- break;
-
- case FC_ERR_LEVEL_HIGH:
- /* port reset */
- UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport,
- UNF_PORT_ABNORMAL_RESET, NULL);
- break;
-
- case FC_ERR_LEVEL_LOW:
- ret = RETURN_OK;
- break;
-
- default:
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "Port(0x%x) receive a unsupported UP Error Event Level(0x%x), Can not Process.",
- hba->port_cfg.port_id,
- up_error_event->error_level);
- return ret;
- }
- if (up_error_event->error_value < SPFC_ERR_VALUE_BUTT)
- SPFC_UP_ERR_EVENT_STAT(hba, up_error_event->error_value);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
- "[event]Port(0x%x) process UP Error Event Level(0x%x) Type(0x%x) Value(0x%x) %s.",
- hba->port_cfg.port_id, up_error_event->error_level,
- up_error_event->error_type, up_error_event->error_value,
- (ret == UNF_RETURN_ERROR) ? "ERROR" : "OK");
-
- return ret;
-}
-
-static struct spfc_up2drv_msg_handle up_msg_handle[] = {
- {SPFC_MBOX_RECV_FC_LINKUP, spfc_recv_fc_linkup},
- {SPFC_MBOX_RECV_FC_LINKDOWN, spfc_recv_fc_linkdown},
- {SPFC_MBOX_RECV_FC_DELCMD, spfc_recv_fc_delcmd},
- {SPFC_MBOX_RECV_FC_ERROR, spfc_recv_fc_error}
-};
-
-void spfc_up_msg2driver_proc(void *hwdev_handle, void *pri_handle, u16 cmd,
- void *buf_in, u16 in_size, void *buf_out,
- u16 *out_size)
-{
- u32 ret = UNF_RETURN_ERROR;
- u32 index = 0;
- struct spfc_hba_info *hba = NULL;
- struct spfc_mbox_header *mbx_header = NULL;
-
- FC_CHECK_RETURN_VOID(hwdev_handle);
- FC_CHECK_RETURN_VOID(pri_handle);
- FC_CHECK_RETURN_VOID(buf_in);
- FC_CHECK_RETURN_VOID(buf_out);
- FC_CHECK_RETURN_VOID(out_size);
-
- hba = (struct spfc_hba_info *)pri_handle;
- if (!hba) {
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_ERR, "[err]Hba is null");
- return;
- }
-
- mbx_header = (struct spfc_mbox_header *)buf_in;
- if (mbx_header->cmnd_type != cmd) {
- *out_size = sizeof(struct spfc_link_event);
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_ERR,
- "[err]Port(0x%x) cmd(0x%x) is not matched with header cmd type(0x%x)",
- hba->port_cfg.port_id, cmd, mbx_header->cmnd_type);
- return;
- }
-
- while (index < (sizeof(up_msg_handle) / sizeof(struct spfc_up2drv_msg_handle))) {
- if (up_msg_handle[index].cmd == cmd &&
- up_msg_handle[index].spfc_msg_up2driver_handler) {
- ret = up_msg_handle[index].spfc_msg_up2driver_handler(hba, buf_in);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_ERR,
- "[warn]Port(0x%x) process up cmd(0x%x) failed",
- hba->port_cfg.port_id, cmd);
- }
- *out_size = sizeof(struct spfc_link_event);
- return;
- }
- index++;
- }
-
- *out_size = sizeof(struct spfc_link_event);
-
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_ERR,
- "[err]Port(0x%x) process up cmd(0x%x) failed",
- hba->port_cfg.port_id, cmd);
-}
-
-u32 spfc_get_topo_act(void *hba, void *topo_act)
-{
- struct spfc_hba_info *spfc_hba = hba;
- enum unf_act_topo *pen_topo_act = topo_act;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(topo_act, UNF_RETURN_ERROR);
-
- /* Get topo from low_level */
- *pen_topo_act = spfc_hba->active_topo;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Get active topology: 0x%x", *pen_topo_act);
-
- return RETURN_OK;
-}
-
-u32 spfc_get_loop_alpa(void *hba, void *alpa)
-{
- ulong flags = 0;
- struct spfc_hba_info *spfc_hba = hba;
- u8 *alpa_temp = alpa;
-
- FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(alpa, UNF_RETURN_ERROR);
-
- spin_lock_irqsave(&spfc_hba->hba_lock, flags);
- *alpa_temp = spfc_hba->active_alpa;
- spin_unlock_irqrestore(&spfc_hba->hba_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[info]Get active AL_PA(0x%x)", *alpa_temp);
-
- return RETURN_OK;
-}
-
-static void spfc_get_fabric_login_params(struct spfc_hba_info *hba,
- struct unf_port_login_parms *params_addr)
-{
- ulong flag = 0;
-
- spin_lock_irqsave(&hba->hba_lock, flag);
- hba->active_topo = params_addr->act_topo;
- hba->compared_ratov_val = params_addr->compared_ratov_val;
- hba->compared_edtov_val = params_addr->compared_edtov_val;
- hba->compared_bb_scn = params_addr->compared_bbscn;
- hba->remote_edtov_tag = params_addr->remote_edtov_tag;
- hba->remote_rttov_tag = params_addr->remote_rttov_tag;
- hba->remote_bb_credit = params_addr->remote_bb_credit;
- spin_unlock_irqrestore(&hba->hba_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) topo(0x%x) get fabric params: R_A_TOV(0x%x) E_D_TOV(%u) BB_CREDIT(0x%x) BB_SC_N(0x%x)",
- hba->port_cfg.port_id, hba->active_topo,
- hba->compared_ratov_val, hba->compared_edtov_val,
- hba->remote_bb_credit, hba->compared_bb_scn);
-}
-
-static void spfc_get_port_login_params(struct spfc_hba_info *hba,
- struct unf_port_login_parms *params_addr)
-{
- ulong flag = 0;
-
- spin_lock_irqsave(&hba->hba_lock, flag);
- hba->compared_ratov_val = params_addr->compared_ratov_val;
- hba->compared_edtov_val = params_addr->compared_edtov_val;
- hba->compared_bb_scn = params_addr->compared_bbscn;
- hba->remote_edtov_tag = params_addr->remote_edtov_tag;
- hba->remote_rttov_tag = params_addr->remote_rttov_tag;
- hba->remote_bb_credit = params_addr->remote_bb_credit;
- spin_unlock_irqrestore(&hba->hba_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "Port(0x%x) Topo(0x%x) Get Port Params: R_A_TOV(0x%x), E_D_TOV(0x%x), BB_CREDIT(0x%x), BB_SC_N(0x%x).",
- hba->port_cfg.port_id, hba->active_topo,
- hba->compared_ratov_val, hba->compared_edtov_val,
- hba->remote_bb_credit, hba->compared_bb_scn);
-}
-
-u32 spfc_update_fabric_param(void *hba, void *para_in)
-{
- u32 ret = RETURN_OK;
- struct spfc_hba_info *spfc_hba = hba;
- struct unf_port_login_parms *login_coparms = para_in;
-
- FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(para_in, UNF_RETURN_ERROR);
-
- spfc_get_fabric_login_params(spfc_hba, login_coparms);
-
- if (spfc_hba->active_topo == UNF_ACT_TOP_P2P_FABRIC ||
- spfc_hba->active_topo == UNF_ACT_TOP_PUBLIC_LOOP) {
- if (spfc_hba->work_mode == SPFC_SMARTIO_WORK_MODE_FC)
- ret = spfc_config_login_api(spfc_hba, login_coparms);
- }
-
- return ret;
-}
-
-u32 spfc_update_port_param(void *hba, void *para_in)
-{
- u32 ret = RETURN_OK;
- struct spfc_hba_info *spfc_hba = hba;
- struct unf_port_login_parms *login_coparms =
- (struct unf_port_login_parms *)para_in;
-
- FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(para_in, UNF_RETURN_ERROR);
-
- if (spfc_hba->active_topo == UNF_ACT_TOP_PRIVATE_LOOP ||
- spfc_hba->active_topo == UNF_ACT_TOP_P2P_DIRECT) {
- spfc_get_port_login_params(spfc_hba, login_coparms);
- ret = spfc_config_login_api(spfc_hba, login_coparms);
- }
-
- spfc_save_login_parms_in_sq_info(spfc_hba, login_coparms);
-
- return ret;
-}
-
-u32 spfc_get_workable_bb_credit(void *hba, void *bb_credit)
-{
- u32 *bb_credit_temp = (u32 *)bb_credit;
- struct spfc_hba_info *spfc_hba = hba;
-
- FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(bb_credit, UNF_RETURN_ERROR);
- if (spfc_hba->active_port_speed == UNF_PORT_SPEED_32_G)
- *bb_credit_temp = SPFC_LOWLEVEL_DEFAULT_32G_BB_CREDIT;
- else if (spfc_hba->active_port_speed == UNF_PORT_SPEED_16_G)
- *bb_credit_temp = SPFC_LOWLEVEL_DEFAULT_16G_BB_CREDIT;
- else
- *bb_credit_temp = SPFC_LOWLEVEL_DEFAULT_8G_BB_CREDIT;
-
- return RETURN_OK;
-}
-
-u32 spfc_get_workable_bb_scn(void *hba, void *bb_scn)
-{
- u32 *bb_scn_temp = (u32 *)bb_scn;
- struct spfc_hba_info *spfc_hba = (struct spfc_hba_info *)hba;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(bb_scn, UNF_RETURN_ERROR);
-
- *bb_scn_temp = spfc_hba->port_bb_scn_cfg;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "Return BBSCN(0x%x) to CM", *bb_scn_temp);
-
- return RETURN_OK;
-}
-
-u32 spfc_get_loop_map(void *hba, void *buf)
-{
- ulong flags = 0;
- struct unf_buf *buf_temp = (struct unf_buf *)buf;
- struct spfc_hba_info *spfc_hba = hba;
-
- FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(buf_temp, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(buf_temp->buf, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(buf_temp->buf_len, UNF_RETURN_ERROR);
-
- if (buf_temp->buf_len > UNF_LOOPMAP_COUNT)
- return UNF_RETURN_ERROR;
-
- spin_lock_irqsave(&spfc_hba->hba_lock, flags);
- if (spfc_hba->loop_map_valid != LOOP_MAP_VALID) {
- spin_unlock_irqrestore(&spfc_hba->hba_lock, flags);
- return UNF_RETURN_ERROR;
- }
- memcpy(buf_temp->buf, spfc_hba->loop_map, buf_temp->buf_len);
- spin_unlock_irqrestore(&spfc_hba->hba_lock, flags);
-
- return RETURN_OK;
-}
-
-u32 spfc_mb_reset_chip(struct spfc_hba_info *hba, u8 sub_type)
-{
- struct spfc_inmbox_port_reset port_reset;
- union spfc_outmbox_generic *port_reset_sts = NULL;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
-
- memset(&port_reset, 0, sizeof(port_reset));
-
- port_reset_sts = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
- if (!port_reset_sts) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "malloc outmbox memory failed");
- return UNF_RETURN_ERROR;
- }
- memset(port_reset_sts, 0, sizeof(union spfc_outmbox_generic));
- port_reset.header.cmnd_type = SPFC_MBOX_PORT_RESET;
- port_reset.header.length = SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_port_reset));
- port_reset.op_code = sub_type;
-
- if (spfc_mb_send_and_wait_mbox(hba, &port_reset, sizeof(port_reset),
- port_reset_sts) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[warn]Port(0x%x) can't send and wait mailbox with command type(0x%x)",
- hba->port_cfg.port_id, port_reset.header.cmnd_type);
-
- goto exit;
- }
-
- if (port_reset_sts->port_reset_sts.status != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
- "[warn]Port(0x%x) receive mailbox type(0x%x) status(0x%x) incorrect",
- hba->port_cfg.port_id,
- port_reset_sts->port_reset_sts.header.cmnd_type,
- port_reset_sts->port_reset_sts.status);
-
- goto exit;
- }
-
- if (port_reset_sts->port_reset_sts.header.cmnd_type != SPFC_MBOX_PORT_RESET_STS) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
- "[warn]Port(0x%x) recv mailbox type(0x%x) incorrect",
- hba->port_cfg.port_id,
- port_reset_sts->port_reset_sts.header.cmnd_type);
-
- goto exit;
- }
-
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_MAJOR,
- "[info]Port(0x%x) reset chip mailbox success",
- hba->port_cfg.port_id);
-
- ret = RETURN_OK;
-exit:
- kfree(port_reset_sts);
-
- return ret;
-}
-
-u32 spfc_clear_sq_wqe_done(struct spfc_hba_info *hba)
-{
- int ret1 = RETURN_OK;
- u32 ret2 = RETURN_OK;
- struct spfc_inmbox_clear_done clear_done;
-
- clear_done.header.cmnd_type = SPFC_MBOX_BUFFER_CLEAR_DONE;
- clear_done.header.length = SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_clear_done));
- clear_done.header.port_id = hba->port_index;
-
- ret1 = sphw_msg_to_mgmt_async(hba->dev_handle, COMM_MOD_FC,
- SPFC_MBOX_BUFFER_CLEAR_DONE, &clear_done,
- sizeof(clear_done), SPHW_CHANNEL_FC);
-
- if (ret1 != 0) {
- SPFC_MAILBOX_STAT(hba, SPFC_SEND_CLEAR_DONE_FAIL);
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]SPFC Port(0x%x) can't send clear done cmd to up, ret:%d",
- hba->port_cfg.port_id, ret1);
-
- return UNF_RETURN_ERROR;
- }
-
- SPFC_MAILBOX_STAT(hba, SPFC_SEND_CLEAR_DONE);
- hba->queue_set_stage = SPFC_QUEUE_SET_STAGE_FLUSHDONE;
- hba->next_clear_sq = 0;
-
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_KEVENT,
- "[info]Port(0x%x) clear done msg(0x%x) sent to up succeed with stage(0x%x)",
- hba->port_cfg.port_id, clear_done.header.cmnd_type,
- hba->queue_set_stage);
-
- return ret2;
-}
-
-u32 spfc_mbx_get_fw_clear_stat(struct spfc_hba_info *hba, u32 *clear_state)
-{
- struct spfc_inmbox_get_clear_state get_clr_state;
- union spfc_outmbox_generic *port_clear_state_sts = NULL;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(clear_state, UNF_RETURN_ERROR);
-
- memset(&get_clr_state, 0, sizeof(get_clr_state));
-
- port_clear_state_sts = kmalloc(sizeof(union spfc_outmbox_generic), GFP_ATOMIC);
- if (!port_clear_state_sts) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "malloc outmbox memory failed");
- return UNF_RETURN_ERROR;
- }
- memset(port_clear_state_sts, 0, sizeof(union spfc_outmbox_generic));
-
- get_clr_state.header.cmnd_type = SPFC_MBOX_GET_CLEAR_STATE;
- get_clr_state.header.length =
- SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_get_clear_state));
-
- if (spfc_mb_send_and_wait_mbox(hba, &get_clr_state, sizeof(get_clr_state),
- port_clear_state_sts) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "spfc can't send and wait mailbox, command type: 0x%x",
- get_clr_state.header.cmnd_type);
-
- goto exit;
- }
-
- if (port_clear_state_sts->get_clr_state_sts.status != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
- "Port(0x%x) Receive mailbox type(0x%x) status incorrect. Status: 0x%x, state 0x%x.",
- hba->port_cfg.port_id,
- port_clear_state_sts->get_clr_state_sts.header.cmnd_type,
- port_clear_state_sts->get_clr_state_sts.status,
- port_clear_state_sts->get_clr_state_sts.state);
-
- goto exit;
- }
-
- if (port_clear_state_sts->get_clr_state_sts.header.cmnd_type !=
- SPFC_MBOX_GET_CLEAR_STATE_STS) {
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_ERR,
- "Port(0x%x) recv mailbox type(0x%x) incorrect.",
- hba->port_cfg.port_id,
- port_clear_state_sts->get_clr_state_sts.header.cmnd_type);
-
- goto exit;
- }
-
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
- "Port(0x%x) get port clear state 0x%x.",
- hba->port_cfg.port_id,
- port_clear_state_sts->get_clr_state_sts.state);
-
- *clear_state = port_clear_state_sts->get_clr_state_sts.state;
-
- ret = RETURN_OK;
-exit:
- kfree(port_clear_state_sts);
-
- return ret;
-}
-
-u32 spfc_mbx_config_default_session(void *hba, u32 flag)
-{
- struct spfc_hba_info *spfc_hba = NULL;
- struct spfc_inmbox_default_sq_info default_sq_info;
- union spfc_outmbox_generic default_sq_info_sts;
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
-
- spfc_hba = (struct spfc_hba_info *)hba;
-
- memset(&default_sq_info, 0, sizeof(struct spfc_inmbox_default_sq_info));
- memset(&default_sq_info_sts, 0, sizeof(union spfc_outmbox_generic));
-
- default_sq_info.header.cmnd_type = SPFC_MBOX_SEND_DEFAULT_SQ_INFO;
- default_sq_info.header.length =
- SPFC_BYTES_TO_DW_NUM(sizeof(struct spfc_inmbox_default_sq_info));
- default_sq_info.func_id = sphw_global_func_id(spfc_hba->dev_handle);
-
- /* When flag is 1, set default SQ info when probe, when 0, clear when
- * remove
- */
- if (flag) {
- default_sq_info.sq_cid = spfc_hba->default_sq_info.sq_cid;
- default_sq_info.sq_xid = spfc_hba->default_sq_info.sq_xid;
- default_sq_info.valid = 1;
- }
-
- ret =
- spfc_mb_send_and_wait_mbox(spfc_hba, &default_sq_info, sizeof(default_sq_info),
- (union spfc_outmbox_generic *)(void *)&default_sq_info_sts);
-
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "spfc can't send and wait mailbox, command type: 0x%x.",
- default_sq_info.header.cmnd_type);
-
- return UNF_RETURN_ERROR;
- }
-
- if (default_sq_info_sts.default_sq_sts.status != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "Port(0x%x) mailbox status incorrect status(0x%x) .",
- spfc_hba->port_cfg.port_id,
- default_sq_info_sts.default_sq_sts.status);
-
- return UNF_RETURN_ERROR;
- }
-
- if (SPFC_MBOX_SEND_DEFAULT_SQ_INFO_STS !=
- default_sq_info_sts.default_sq_sts.header.cmnd_type) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "Port(0x%x) receive mailbox type incorrect type: 0x%x.",
- spfc_hba->port_cfg.port_id,
- default_sq_info_sts.default_sq_sts.header.cmnd_type);
-
- return UNF_RETURN_ERROR;
- }
-
- return RETURN_OK;
-}
diff --git a/drivers/scsi/spfc/hw/spfc_chipitf.h b/drivers/scsi/spfc/hw/spfc_chipitf.h
deleted file mode 100644
index acd770514edf..000000000000
--- a/drivers/scsi/spfc/hw/spfc_chipitf.h
+++ /dev/null
@@ -1,797 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef SPFC_CHIPITF_H
-#define SPFC_CHIPITF_H
-
-#include "unf_type.h"
-#include "unf_log.h"
-#include "spfc_utils.h"
-#include "spfc_module.h"
-
-#include "spfc_service.h"
-
-/* CONF_API_CMND */
-#define SPFC_MBOX_CONFIG_API (0x00)
-#define SPFC_MBOX_CONFIG_API_STS (0xA0)
-
-/* GET_CHIP_INFO_API_CMD */
-#define SPFC_MBOX_GET_CHIP_INFO (0x01)
-#define SPFC_MBOX_GET_CHIP_INFO_STS (0xA1)
-
-/* PORT_RESET */
-#define SPFC_MBOX_PORT_RESET (0x02)
-#define SPFC_MBOX_PORT_RESET_STS (0xA2)
-
-/* SFP_SWITCH_API_CMND */
-#define SPFC_MBOX_PORT_SWITCH (0x03)
-#define SPFC_MBOX_PORT_SWITCH_STS (0xA3)
-
-/* CONF_AF_LOGIN_API_CMND */
-#define SPFC_MBOX_CONFIG_LOGIN_API (0x06)
-#define SPFC_MBOX_CONFIG_LOGIN_API_STS (0xA6)
-
-/* BUFFER_CLEAR_DONE_CMND */
-#define SPFC_MBOX_BUFFER_CLEAR_DONE (0x07)
-#define SPFC_MBOX_BUFFER_CLEAR_DONE_STS (0xA7)
-
-#define SPFC_MBOX_GET_UP_STATE (0x09)
-#define SPFC_MBOX_GET_UP_STATE_STS (0xA9)
-
-/* GET CLEAR DONE STATE */
-#define SPFC_MBOX_GET_CLEAR_STATE (0x0E)
-#define SPFC_MBOX_GET_CLEAR_STATE_STS (0xAE)
-
-/* CONFIG TIMER */
-#define SPFC_MBOX_CONFIG_TIMER (0x10)
-#define SPFC_MBOX_CONFIG_TIMER_STS (0xB0)
-
-/* Led Test */
-#define SPFC_MBOX_LED_TEST (0x12)
-#define SPFC_MBOX_LED_TEST_STS (0xB2)
-
-/* set esch */
-#define SPFC_MBOX_SET_ESCH (0x13)
-#define SPFC_MBOX_SET_ESCH_STS (0xB3)
-
-/* set get tx serdes */
-#define SPFC_MBOX_SET_GET_SERDES_TX (0x14)
-#define SPFC_MBOX_SET_GET_SERDES_TX_STS (0xB4)
-
-/* get rx serdes */
-#define SPFC_MBOX_GET_SERDES_RX (0x15)
-#define SPFC_MBOX_GET_SERDES_RX_STS (0xB5)
-
-/* i2c read write */
-#define SPFC_MBOX_I2C_WR_RD (0x16)
-#define SPFC_MBOX_I2C_WR_RD_STS (0xB6)
-
-/* GET UCODE STATS CMD */
-#define SPFC_MBOX_GET_UCODE_STAT (0x18)
-#define SPFC_MBOX_GET_UCODE_STAT_STS (0xB8)
-
-/* gpio read write */
-#define SPFC_MBOX_GPIO_WR_RD (0x19)
-#define SPFC_MBOX_GPIO_WR_RD_STS (0xB9)
-
-#define SPFC_MBOX_SEND_DEFAULT_SQ_INFO (0x26)
-#define SPFC_MBOX_SEND_DEFAULT_SQ_INFO_STS (0xc6)
-
-/* FC: DRV->UP */
-#define SPFC_MBOX_SEND_ELS_CMD (0x2A)
-#define SPFC_MBOX_SEND_VPORT_INFO (0x2B)
-
-/* FC: UP->DRV */
-#define SPFC_MBOX_RECV_FC_LINKUP (0x40)
-#define SPFC_MBOX_RECV_FC_LINKDOWN (0x41)
-#define SPFC_MBOX_RECV_FC_DELCMD (0x42)
-#define SPFC_MBOX_RECV_FC_ERROR (0x43)
-
-#define LOOP_MAP_VALID (1)
-#define LOOP_MAP_INVALID (0)
-
-#define SPFC_MBOX_SIZE (1024)
-#define SPFC_MBOX_HEADER_SIZE (4)
-
-#define UNDEFINEOPCODE (0)
-
-#define VALUEMASK_L 0x00000000FFFFFFFF
-#define VALUEMASK_H 0xFFFFFFFF00000000
-
-#define STATUS_OK (0)
-#define STATUS_FAIL (1)
-
-enum spfc_drv2up_unblock_msg_cmd_code {
- SPFC_SEND_ELS_CMD,
- SPFC_SEND_ELS_CMD_FAIL,
- SPFC_RCV_ELS_CMD_RSP,
- SPFC_SEND_CONFIG_LOGINAPI,
- SPFC_SEND_CONFIG_LOGINAPI_FAIL,
- SPFC_RCV_CONFIG_LOGIN_API_RSP,
- SPFC_SEND_CLEAR_DONE,
- SPFC_SEND_CLEAR_DONE_FAIL,
- SPFC_RCV_CLEAR_DONE_RSP,
- SPFC_SEND_VPORT_INFO_DONE,
- SPFC_SEND_VPORT_INFO_FAIL,
- SPFC_SEND_VPORT_INFO_RSP,
- SPFC_MBOX_CMD_BUTT
-};
-
-/* up to driver cmd code */
-enum spfc_up2drv_msg_cmd_code {
- SPFC_UP2DRV_MSG_CMD_LINKUP = 0x1,
- SPFC_UP2DRV_MSG_CMD_LINKDOWN = 0x2,
- SPFC_UP2DRV_MSG_CMD_BUTT
-};
-
-/* up to driver handle template */
-struct spfc_up2drv_msg_handle {
- u8 cmd;
- u32 (*spfc_msg_up2driver_handler)(struct spfc_hba_info *hba, void *buf_in);
-};
-
-/* tile to driver cmd code */
-enum spfc_tile2drv_msg_cmd_code {
- SPFC_TILE2DRV_MSG_CMD_SCAN_DONE,
- SPFC_TILE2DRV_MSG_CMD_FLUSH_DONE,
- SPFC_TILE2DRV_MSG_CMD_BUTT
-};
-
-/* tile to driver handle template */
-struct spfc_tile2drv_msg_handle {
- u8 cmd;
- u32 (*spfc_msg_tile2driver_handler)(struct spfc_hba_info *hba, u8 cmd, u64 val);
-};
-
-/* Mbox Common Header */
-struct spfc_mbox_header {
- u8 cmnd_type;
- u8 length;
- u8 port_id;
- u8 reserved;
-};
-
-/* open or close the sfp */
-struct spfc_inmbox_port_switch {
- struct spfc_mbox_header header;
- u32 op_code : 8;
- u32 rsvd0 : 24;
- u32 rsvd1[6];
-};
-
-struct spfc_inmbox_send_vport_info {
- struct spfc_mbox_header header;
-
- u64 sys_port_wwn;
- u64 sys_node_name;
-
- u32 nport_id : 24;
- u32 vpi : 8;
-};
-
-struct spfc_outmbox_port_switch_sts {
- struct spfc_mbox_header header;
-
- u16 reserved1;
- u8 reserved2;
- u8 status;
-};
-
-/* config API */
-struct spfc_inmbox_config_api {
- struct spfc_mbox_header header;
-
- u32 op_code : 8;
- u32 reserved1 : 24;
-
- u8 topy_mode;
- u8 sfp_speed;
- u8 max_speed;
- u8 hard_alpa;
-
- u8 port_name[UNF_WWN_LEN];
-
- u32 slave : 1;
- u32 auto_sneg : 1;
- u32 reserved2 : 30;
-
- u32 rx_6432g_bb_credit : 16; /* 160 */
- u32 rx_16g_bb_credit : 16; /* 80 */
- u32 rx_84g_bb_credit : 16; /* 50 */
- u32 rdy_cnt_bf_fst_frm : 16; /* 8 */
-
- u32 esch_32g_value;
- u32 esch_16g_value;
- u32 esch_8g_value;
- u32 esch_4g_value;
- u32 esch_64g_value;
- u32 esch_bust_size;
-};
-
-struct spfc_outmbox_config_api_sts {
- struct spfc_mbox_header header;
- u16 reserved1;
- u8 reserved2;
- u8 status;
-};
-
-/* Get chip info */
-struct spfc_inmbox_get_chip_info {
- struct spfc_mbox_header header;
-};
-
-struct spfc_outmbox_get_chip_info_sts {
- struct spfc_mbox_header header;
- u8 status;
- u8 board_type;
- u8 rvsd0[2];
- u64 wwpn;
- u64 wwnn;
- u64 rsvd1;
-};
-
-/* Get reg info */
-struct spfc_inmbox_get_reg_info {
- struct spfc_mbox_header header;
- u32 op_code : 1;
- u32 reg_len : 8;
- u32 rsvd1 : 23;
- u32 reg_addr;
- u32 reg_value_l32;
- u32 reg_value_h32;
- u32 rsvd2[27];
-};
-
-/* Get reg info sts */
-struct spfc_outmbox_get_reg_info_sts {
- struct spfc_mbox_header header;
-
- u16 rsvd0;
- u8 rsvd1;
- u8 status;
- u32 reg_value_l32;
- u32 reg_value_h32;
- u32 rsvd2[28];
-};
-
-/* Config login API */
-struct spfc_inmbox_config_login {
- struct spfc_mbox_header header;
-
- u32 op_code : 8;
- u32 reserved1 : 24;
-
- u16 tx_bb_credit;
- u16 reserved2;
-
- u32 rtov;
- u32 etov;
-
- u32 rt_tov_tag : 1;
- u32 ed_tov_tag : 1;
- u32 bb_credit : 6;
- u32 bb_scn : 8;
- u32 lr_flag : 16;
-};
-
-struct spfc_outmbox_config_login_sts {
- struct spfc_mbox_header header;
-
- u16 reserved1;
- u8 reserved2;
- u8 status;
-};
-
-/* port reset */
-#define SPFC_MBOX_SUBTYPE_LIGHT_RESET (0x0)
-#define SPFC_MBOX_SUBTYPE_HEAVY_RESET (0x1)
-
-struct spfc_inmbox_port_reset {
- struct spfc_mbox_header header;
-
- u32 op_code : 8;
- u32 reserved : 24;
-};
-
-struct spfc_outmbox_port_reset_sts {
- struct spfc_mbox_header header;
-
- u16 reserved1;
- u8 reserved2;
- u8 status;
-};
-
-/* led test */
-struct spfc_inmbox_led_test {
- struct spfc_mbox_header header;
-
-	/* 0->act type;1->low speed;2->high speed */
- u8 led_type;
-	/* 0:twinkle;1:light on;2:light off;0xff:default */
- u8 led_mode;
- u8 resvd[ARRAY_INDEX_2];
-};
-
-struct spfc_outmbox_led_test_sts {
- struct spfc_mbox_header header;
-
- u16 rsvd1;
- u8 rsvd2;
- u8 status;
-};
-
-/* set esch */
-struct spfc_inmbox_set_esch {
- struct spfc_mbox_header header;
-
- u32 esch_value;
- u32 esch_bust_size;
-};
-
-struct spfc_outmbox_set_esch_sts {
- struct spfc_mbox_header header;
-
- u16 rsvd1;
- u8 rsvd2;
- u8 status;
-};
-
-struct spfc_inmbox_set_serdes_tx {
- struct spfc_mbox_header header;
-
- u8 swing; /* amplitude setting */
- char serdes_pre1; /* pre1 setting */
- char serdes_pre2; /* pre2 setting */
- char serdes_post; /* post setting */
- u8 serdes_main; /* main setting */
- u8 op_code; /* opcode,0:setting;1:read */
- u8 rsvd[ARRAY_INDEX_2];
-};
-
-struct spfc_outmbox_set_serdes_tx_sts {
- struct spfc_mbox_header header;
- u16 rvsd0;
- u8 rvsd1;
- u8 status;
- u8 swing;
- char serdes_pre1;
- char serdes_pre2;
- char serdes_post;
- u8 serdes_main;
- u8 rsvd2[ARRAY_INDEX_3];
-};
-
-struct spfc_inmbox_i2c_wr_rd {
- struct spfc_mbox_header header;
- u8 op_code; /* 0 write, 1 read */
- u8 rsvd[ARRAY_INDEX_3];
-
- u32 dev_addr;
- u32 offset;
- u32 wr_data;
-};
-
-struct spfc_outmbox_i2c_wr_rd_sts {
- struct spfc_mbox_header header;
- u8 status;
- u8 resvd[ARRAY_INDEX_3];
-
- u32 rd_data;
-};
-
-struct spfc_inmbox_gpio_wr_rd {
- struct spfc_mbox_header header;
- u8 op_code; /* 0 write,1 read */
- u8 rsvd[ARRAY_INDEX_3];
-
- u32 pin;
- u32 wr_data;
-};
-
-struct spfc_outmbox_gpio_wr_rd_sts {
- struct spfc_mbox_header header;
- u8 status;
- u8 resvd[ARRAY_INDEX_3];
-
- u32 rd_data;
-};
-
-struct spfc_inmbox_get_serdes_rx {
- struct spfc_mbox_header header;
-
- u8 op_code;
- u8 h16_macro;
- u8 h16_lane;
- u8 rsvd;
-};
-
-struct spfc_inmbox_get_serdes_rx_sts {
- struct spfc_mbox_header header;
- u16 rvsd0;
- u8 rvsd1;
- u8 status;
- int left_eye;
- int right_eye;
- int low_eye;
- int high_eye;
-};
-
-struct spfc_ser_op_m_l {
- u8 op_code;
- u8 h16_macro;
- u8 h16_lane;
- u8 rsvd;
-};
-
-/* get sfp info */
-#define SPFC_MBOX_GET_SFP_INFO_MB_LENGTH 1
-#define OFFSET_TWO_DWORD 2
-#define OFFSET_ONE_DWORD 1
-
-struct spfc_inmbox_get_sfp_info {
- struct spfc_mbox_header header;
-};
-
-struct spfc_outmbox_get_sfp_info_sts {
- struct spfc_mbox_header header;
-
- u32 rcvd : 8;
- u32 length : 16;
- u32 status : 8;
-};
-
-/* get ucode stats */
-#define SPFC_UCODE_STAT_NUM 64
-
-struct spfc_outmbox_get_ucode_stat {
- struct spfc_mbox_header header;
-};
-
-struct spfc_outmbox_get_ucode_stat_sts {
- struct spfc_mbox_header header;
-
- u16 rsvd;
- u8 rsvd2;
- u8 status;
-
- u32 ucode_stat[SPFC_UCODE_STAT_NUM];
-};
-
-/* uP-->Driver async event API */
-struct spfc_link_event {
- struct spfc_mbox_header header;
-
- u8 link_event;
- u8 reason;
- u8 speed;
- u8 top_type;
-
- u8 alpa_value;
- u8 reserved1;
- u16 paticpate : 1;
- u16 ac_led : 1;
- u16 yellow_speed_led : 1;
- u16 green_speed_led : 1;
- u16 reserved2 : 12;
-
- u8 loop_map_info[128];
-};
-
-enum spfc_up_err_type {
- SPFC_UP_ERR_DRV_PARA = 0,
- SPFC_UP_ERR_SFP = 1,
- SPFC_UP_ERR_32G_PUB = 2,
- SPFC_UP_ERR_32G_UA = 3,
- SPFC_UP_ERR_32G_MAC = 4,
- SPFC_UP_ERR_NON32G_DFX = 5,
- SPFC_UP_ERR_NON32G_MAC = 6,
- SPFC_UP_ERR_BUTT
-
-};
-
-enum spfc_up_err_value {
- /* ERR type 0 */
- SPFC_DRV_2_UP_PARA_ERR = 0,
-
- /* ERR type 1 */
- SPFC_SFP_SPEED_ERR,
-
- /* ERR type 2 */
- SPFC_32GPUB_UA_RXESCH_FIFO_OF,
- SPFC_32GPUB_UA_RXESCH_FIFO_UCERR,
-
- /* ERR type 3 */
- SPFC_32G_UA_UATX_LEN_ABN,
- SPFC_32G_UA_RXAFIFO_OF,
- SPFC_32G_UA_TXAFIFO_OF,
- SPFC_32G_UA_RXAFIFO_UCERR,
- SPFC_32G_UA_TXAFIFO_UCERR,
-
- /* ERR type 4 */
- SPFC_32G_MAC_RX_BBC_FATAL,
- SPFC_32G_MAC_TX_BBC_FATAL,
- SPFC_32G_MAC_TXFIFO_UF,
- SPFC_32G_MAC_PCS_TXFIFO_UF,
- SPFC_32G_MAC_RXBBC_CRDT_TO,
- SPFC_32G_MAC_PCS_RXAFIFO_OF,
- SPFC_32G_MAC_PCS_TXFIFO_OF,
- SPFC_32G_MAC_FC2P_RXFIFO_OF,
- SPFC_32G_MAC_FC2P_TXFIFO_OF,
- SPFC_32G_MAC_FC2P_CAFIFO_OF,
- SPFC_32G_MAC_PCS_RXRSFECM_UCEER,
- SPFC_32G_MAC_PCS_RXAFIFO_UCEER,
- SPFC_32G_MAC_PCS_TXFIFO_UCEER,
- SPFC_32G_MAC_FC2P_RXFIFO_UCEER,
- SPFC_32G_MAC_FC2P_TXFIFO_UCEER,
-
- /* ERR type 5 */
- SPFC_NON32G_DFX_FC1_DFX_BF_FIFO,
- SPFC_NON32G_DFX_FC1_DFX_BP_FIFO,
- SPFC_NON32G_DFX_FC1_DFX_RX_AFIFO_ERR,
- SPFC_NON32G_DFX_FC1_DFX_TX_AFIFO_ERR,
- SPFC_NON32G_DFX_FC1_DFX_DIRQ_RXBUF_FIFO1,
- SPFC_NON32G_DFX_FC1_DFX_DIRQ_RXBBC_TO,
- SPFC_NON32G_DFX_FC1_DFX_DIRQ_TXDAT_FIFO,
- SPFC_NON32G_DFX_FC1_DFX_DIRQ_TXCMD_FIFO,
- SPFC_NON32G_DFX_FC1_ERR_R_RDY,
-
- /* ERR type 6 */
- SPFC_NON32G_MAC_FC1_FAIRNESS_ERROR,
-
- SPFC_ERR_VALUE_BUTT
-
-};
-
-struct spfc_up_error_event {
- struct spfc_mbox_header header;
-
- u8 link_event;
- u8 error_level;
- u8 error_type;
- u8 error_value;
-};
-
-struct spfc_inmbox_clear_done {
- struct spfc_mbox_header header;
-};
-
-/* receive els cmd */
-struct spfc_inmbox_rcv_els {
- struct spfc_mbox_header header;
- u16 pkt_type;
- u16 pkt_len;
- u8 frame[ARRAY_INDEX_0];
-};
-
-/* FCF event type */
-enum spfc_fcf_event_type {
- SPFC_FCF_SELECTED = 0,
- SPFC_FCF_DEAD,
- SPFC_FCF_CLEAR_VLINK,
- SPFC_FCF_CLEAR_VLINK_APPOINTED
-};
-
-struct spfc_nport_id_info {
- u32 nport_id : 24;
- u32 vp_index : 8;
-};
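[Editor's note: `struct spfc_nport_id_info` packs a 24-bit N_Port ID and an 8-bit vport index into one 32-bit word via bitfields. A hedged sketch of the equivalent mask/shift packing, useful when the word must be built without relying on compiler bitfield layout; the assumed layout (nport_id in bits [23:0], vp_index in bits [31:24]) matches little-endian low-bits-first bitfield allocation:]

```c
#include <stdint.h>

/* Illustrative pack/unpack for the 24-bit nport_id + 8-bit vp_index
 * word; bit positions are an assumption, not taken from the ABI docs. */
static uint32_t spfc_pack_nport(uint32_t nport_id, uint8_t vp_index)
{
	return (nport_id & 0xFFFFFFu) | ((uint32_t)vp_index << 24);
}

static uint32_t spfc_unpack_nport_id(uint32_t word)
{
	return word & 0xFFFFFFu;
}

static uint8_t spfc_unpack_vp_index(uint32_t word)
{
	return (uint8_t)(word >> 24);
}
```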
-
-struct spfc_inmbox_fcf_event {
- struct spfc_mbox_header header;
-
- u8 fcf_map[ARRAY_INDEX_3];
- u8 event_type;
-
- u8 fcf_mac_h4[ARRAY_INDEX_4];
-
- u16 vlan_info;
- u8 fcf_mac_l2[ARRAY_INDEX_2];
-
- struct spfc_nport_id_info nport_id_info[UNF_SPFC_MAXNPIV_NUM + 1];
-};
-
-/* send els cmd */
-struct spfc_inmbox_send_els {
- struct spfc_mbox_header header;
-
- u8 oper_code;
- u8 rsvd[ARRAY_INDEX_3];
-
- u8 resvd;
- u8 els_cmd_type;
- u16 pkt_len;
-
- u8 fcf_mac_h4[ARRAY_INDEX_4];
-
- u16 vlan_info;
- u8 fcf_mac_l2[ARRAY_INDEX_2];
-
- u8 fc_frame[SPFC_FC_HEAD_LEN + UNF_FLOGI_PAYLOAD_LEN];
-};
-
-struct spfc_inmbox_send_els_sts {
- struct spfc_mbox_header header;
-
- u16 rx_id;
- u16 err_code;
-
- u16 ox_id;
- u16 rsvd;
-};
-
-struct spfc_inmbox_get_clear_state {
- struct spfc_mbox_header header;
- u32 resvd[31];
-};
-
-struct spfc_outmbox_get_clear_state_sts {
- struct spfc_mbox_header header;
- u16 rsvd1;
-	u8 state; /* 1: clear in progress; 0: clear done */
-	u8 status; /* 0: ok; non-zero: fail */
- u32 rsvd2[30];
-};
-
-#define SPFC_FIP_MODE_VN2VF (0)
-#define SPFC_FIP_MODE_VN2VN (1)
-
-/* get up state */
-struct spfc_inmbox_get_up_state {
- struct spfc_mbox_header header;
-
- u64 cur_jiff_time;
-};
-
-/* get port state */
-struct spfc_inmbox_get_port_info {
- struct spfc_mbox_header header;
-};
-
-struct spfc_outmbox_get_up_state_sts {
- struct spfc_mbox_header header;
-
- u8 status;
- u8 rsv0;
- u16 rsv1;
- struct unf_port_dynamic_info dymic_info;
-};
-
-struct spfc_outmbox_get_port_info_sts {
- struct spfc_mbox_header header;
-
- u32 status : 8;
- u32 fe_16g_cvis_tts : 8;
- u32 bb_scn : 8;
- u32 loop_credit : 8;
-
- u32 non_loop_rx_credit : 8;
- u32 non_loop_tx_credit : 8;
- u32 sfp_speed : 8;
- u32 present : 8;
-};
-
-struct spfc_inmbox_config_timer {
- struct spfc_mbox_header header;
-
- u16 op_code;
- u16 fun_id;
- u32 user_data;
-};
-
-struct spfc_inmbox_config_srqc {
- struct spfc_mbox_header header;
-
- u16 valid;
- u16 fun_id;
- u32 srqc_gpa_hi;
- u32 srqc_gpa_lo;
-};
-
-struct spfc_outmbox_config_timer_sts {
- struct spfc_mbox_header header;
-
- u8 status;
- u8 rsv[ARRAY_INDEX_3];
-};
-
-struct spfc_outmbox_config_srqc_sts {
- struct spfc_mbox_header header;
-
- u8 status;
- u8 rsv[ARRAY_INDEX_3];
-};
-
-struct spfc_inmbox_default_sq_info {
- struct spfc_mbox_header header;
- u32 sq_cid;
- u32 sq_xid;
- u16 func_id;
- u16 valid;
-};
-
-struct spfc_outmbox_default_sq_info_sts {
- struct spfc_mbox_header header;
- u8 status;
- u8 rsv[ARRAY_INDEX_3];
-};
-
-/* Generic Inmailbox and Outmailbox */
-union spfc_inmbox_generic {
- struct {
- struct spfc_mbox_header header;
- u32 rsvd[(SPFC_MBOX_SIZE - SPFC_MBOX_HEADER_SIZE) / sizeof(u32)];
- } generic;
-
- struct spfc_inmbox_port_switch port_switch;
- struct spfc_inmbox_config_api config_api;
- struct spfc_inmbox_get_chip_info get_chip_info;
- struct spfc_inmbox_config_login config_login;
- struct spfc_inmbox_port_reset port_reset;
- struct spfc_inmbox_set_esch esch_set;
- struct spfc_inmbox_led_test led_test;
- struct spfc_inmbox_get_sfp_info get_sfp_info;
- struct spfc_inmbox_clear_done clear_done;
- struct spfc_outmbox_get_ucode_stat get_ucode_stat;
- struct spfc_inmbox_get_clear_state get_clr_state;
- struct spfc_inmbox_send_vport_info send_vport_info;
- struct spfc_inmbox_get_up_state get_up_state;
- struct spfc_inmbox_config_timer timer_config;
- struct spfc_inmbox_config_srqc config_srqc;
- struct spfc_inmbox_get_port_info get_port_info;
-};
-
-union spfc_outmbox_generic {
- struct {
- struct spfc_mbox_header header;
- u32 rsvd[(SPFC_MBOX_SIZE - SPFC_MBOX_HEADER_SIZE) / sizeof(u32)];
- } generic;
-
- struct spfc_outmbox_port_switch_sts port_switch_sts;
- struct spfc_outmbox_config_api_sts config_api_sts;
- struct spfc_outmbox_get_chip_info_sts get_chip_info_sts;
- struct spfc_outmbox_get_reg_info_sts get_reg_info_sts;
- struct spfc_outmbox_config_login_sts config_login_sts;
- struct spfc_outmbox_port_reset_sts port_reset_sts;
- struct spfc_outmbox_led_test_sts led_test_sts;
- struct spfc_outmbox_set_esch_sts esch_set_sts;
- struct spfc_inmbox_get_serdes_rx_sts serdes_rx_get_sts;
- struct spfc_outmbox_set_serdes_tx_sts serdes_tx_set_sts;
- struct spfc_outmbox_i2c_wr_rd_sts i2c_wr_rd_sts;
- struct spfc_outmbox_gpio_wr_rd_sts gpio_wr_rd_sts;
- struct spfc_outmbox_get_sfp_info_sts get_sfp_info_sts;
- struct spfc_outmbox_get_ucode_stat_sts get_ucode_stat_sts;
- struct spfc_outmbox_get_clear_state_sts get_clr_state_sts;
- struct spfc_outmbox_get_up_state_sts get_up_state_sts;
- struct spfc_outmbox_config_timer_sts timer_config_sts;
- struct spfc_outmbox_config_srqc_sts config_srqc_sts;
- struct spfc_outmbox_get_port_info_sts get_port_info_sts;
- struct spfc_outmbox_default_sq_info_sts default_sq_sts;
-};
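[Editor's note: both generic unions pad their payload with `rsvd[(SPFC_MBOX_SIZE - SPFC_MBOX_HEADER_SIZE) / sizeof(u32)]`, which fixes every mailbox at the full 1024-byte envelope regardless of which member is active. A standalone sketch of that sizing arithmetic, with the struct name invented for illustration:]

```c
#include <stdint.h>

#define SPFC_MBOX_SIZE        1024u /* from the definitions above */
#define SPFC_MBOX_HEADER_SIZE 4u

/* Mirror of the generic mailbox envelope: a 4-byte header plus a u32
 * reserved area filling the remainder of the 1024-byte mailbox. */
struct mbox_envelope {
	uint8_t  header[SPFC_MBOX_HEADER_SIZE];
	uint32_t rsvd[(SPFC_MBOX_SIZE - SPFC_MBOX_HEADER_SIZE) / sizeof(uint32_t)];
};
```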
-
-u32 spfc_get_chip_msg(void *hba, void *mac);
-u32 spfc_config_port_table(struct spfc_hba_info *hba);
-u32 spfc_port_switch(struct spfc_hba_info *hba, bool turn_on);
-u32 spfc_get_loop_map(void *hba, void *buf);
-u32 spfc_get_workable_bb_credit(void *hba, void *bb_credit);
-u32 spfc_get_workable_bb_scn(void *hba, void *bb_scn);
-u32 spfc_get_port_current_info(void *hba, void *port_info);
-u32 spfc_get_port_fec(void *hba, void *para_out);
-
-u32 spfc_get_loop_alpa(void *hba, void *alpa);
-u32 spfc_get_topo_act(void *hba, void *topo_act);
-u32 spfc_config_login_api(struct spfc_hba_info *hba, struct unf_port_login_parms *login_parms);
-u32 spfc_mb_send_and_wait_mbox(struct spfc_hba_info *hba, const void *in_mbox, u16 in_size,
- union spfc_outmbox_generic *out_mbox);
-void spfc_up_msg2driver_proc(void *hwdev_handle, void *pri_handle, u16 cmd,
- void *buf_in, u16 in_size, void *buf_out, u16 *out_size);
-
-u32 spfc_mb_reset_chip(struct spfc_hba_info *hba, u8 sub_type);
-u32 spfc_clear_sq_wqe_done(struct spfc_hba_info *hba);
-u32 spfc_update_fabric_param(void *hba, void *para_in);
-u32 spfc_update_port_param(void *hba, void *para_in);
-u32 spfc_update_fdisc_param(void *hba, void *vport_info);
-u32 spfc_mbx_get_fw_clear_stat(struct spfc_hba_info *hba, u32 *clear_state);
-u32 spfc_get_chip_capability(void *hwdev_handle, struct spfc_chip_info *chip_info);
-u32 spfc_mbx_config_default_session(void *hba, u32 flag);
-
-#endif
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c b/drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c
deleted file mode 100644
index 0c1d97d9e3e6..000000000000
--- a/drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c
+++ /dev/null
@@ -1,1611 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include <linux/types.h>
-#include <linux/sched.h>
-#include <linux/pci.h>
-#include <linux/module.h>
-#include <linux/vmalloc.h>
-#include <linux/mm.h>
-#include <linux/device.h>
-#include <linux/gfp.h>
-#include "sphw_crm.h"
-#include "sphw_hw.h"
-#include "sphw_hwdev.h"
-#include "sphw_hwif.h"
-
-#include "spfc_cqm_object.h"
-#include "spfc_cqm_bitmap_table.h"
-#include "spfc_cqm_bat_cla.h"
-#include "spfc_cqm_main.h"
-
-static unsigned char cqm_ver = 8;
-module_param(cqm_ver, byte, 0644);
-MODULE_PARM_DESC(cqm_ver, "for cqm version control (default=8)");
-
-static void
-cqm_bat_fill_cla_common_gpa(struct cqm_handle *cqm_handle,
- struct cqm_cla_table *cla_table,
- struct cqm_bat_entry_standerd *bat_entry_standerd)
-{
- u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
- struct sphw_func_attr *func_attr = NULL;
- struct cqm_bat_entry_vf2pf gpa = {0};
- u32 cla_gpa_h = 0;
- dma_addr_t pa;
-
- if (cla_table->cla_lvl == CQM_CLA_LVL_0)
- pa = cla_table->cla_z_buf.buf_list[0].pa;
- else if (cla_table->cla_lvl == CQM_CLA_LVL_1)
- pa = cla_table->cla_y_buf.buf_list[0].pa;
- else
- pa = cla_table->cla_x_buf.buf_list[0].pa;
-
- gpa.cla_gpa_h = CQM_ADDR_HI(pa) & CQM_CHIP_GPA_HIMASK;
-
-	/* On the SPU, the value of spu_en in the GPA address
-	 * in the BAT is determined by the host ID and func ID.
-	 */
- if (sphw_host_id(cqm_handle->ex_handle) == CQM_SPU_HOST_ID) {
- func_attr = &cqm_handle->func_attribute;
- gpa.acs_spu_en = func_attr->func_global_idx & 0x1;
- } else {
- gpa.acs_spu_en = 0;
- }
-
- memcpy(&cla_gpa_h, &gpa, sizeof(u32));
- bat_entry_standerd->cla_gpa_h = cla_gpa_h;
-
- /* GPA is valid when gpa[0] = 1.
- * CQM_BAT_ENTRY_T_REORDER does not support GPA validity check.
- */
- if (cla_table->type == CQM_BAT_ENTRY_T_REORDER)
- bat_entry_standerd->cla_gpa_l = CQM_ADDR_LW(pa);
- else
- bat_entry_standerd->cla_gpa_l = CQM_ADDR_LW(pa) | gpa_check_enable;
-}
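[Editor's note: the function above splits a 64-bit DMA address into `cla_gpa_h` (high bits masked by CQM_CHIP_GPA_HIMASK) and `cla_gpa_l` (low 32 bits, with bit 0 doubling as the GPA-validity flag via `gpa_check_enable`). A hedged sketch of that split; CQM_ADDR_HI/CQM_ADDR_LW are assumed to be plain 32-bit shift/mask macros and the himask value here is an assumption, not copied from the real headers:]

```c
#include <stdint.h>

#define CQM_ADDR_HI(pa)     ((uint32_t)((uint64_t)(pa) >> 32))
#define CQM_ADDR_LW(pa)     ((uint32_t)((uint64_t)(pa) & 0xFFFFFFFFu))
#define CQM_CHIP_GPA_HIMASK 0x7FFFFFFFu /* assumed width of the high field */

/* Illustrative split of a 64-bit guest-physical address as in
 * cqm_bat_fill_cla_common_gpa(). Bit 0 of the low word carries the
 * validity flag, so the page address must be at least 2-byte aligned. */
static void cqm_split_gpa(uint64_t pa, uint8_t check_enable,
			  uint32_t *hi, uint32_t *lo)
{
	*hi = CQM_ADDR_HI(pa) & CQM_CHIP_GPA_HIMASK;
	*lo = CQM_ADDR_LW(pa) | check_enable;
}
```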
-
-static void cqm_bat_fill_cla_common(struct cqm_handle *cqm_handle,
- struct cqm_cla_table *cla_table,
- u8 *entry_base_addr)
-{
- struct cqm_bat_entry_standerd *bat_entry_standerd = NULL;
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- u32 cache_line = 0;
-
- if (cla_table->type == CQM_BAT_ENTRY_T_TIMER && cqm_ver == 8)
- cache_line = CQM_CHIP_TIMER_CACHELINE;
- else
- cache_line = CQM_CHIP_CACHELINE;
-
- if (cla_table->obj_num == 0) {
- cqm_info(handle->dev_hdl,
- "Cla alloc: cla_type %u, obj_num=0, don't init bat entry\n",
- cla_table->type);
- return;
- }
-
- bat_entry_standerd = (struct cqm_bat_entry_standerd *)entry_base_addr;
-
-	/* The QPC cacheline size is 256/512/1024 B and the timer cacheline
-	 * is 512 B; all other cacheline sizes are 256 B.
-	 * The conversion operation is performed inside the chip.
-	 */
- if (cla_table->obj_size > cache_line) {
- if (cla_table->obj_size == CQM_OBJECT_512)
- bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_512;
- else
- bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_1024;
- bat_entry_standerd->max_number = cla_table->max_buffer_size / cla_table->obj_size;
- } else {
- if (cache_line == CQM_CHIP_CACHELINE) {
- bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_256;
- bat_entry_standerd->max_number = cla_table->max_buffer_size / cache_line;
- } else {
- bat_entry_standerd->entry_size = CQM_BAT_ENTRY_SIZE_512;
- bat_entry_standerd->max_number = cla_table->max_buffer_size / cache_line;
- }
- }
-
- bat_entry_standerd->max_number = bat_entry_standerd->max_number - 1;
-
- bat_entry_standerd->bypass = CQM_BAT_NO_BYPASS_CACHE;
- bat_entry_standerd->z = cla_table->cacheline_z;
- bat_entry_standerd->y = cla_table->cacheline_y;
- bat_entry_standerd->x = cla_table->cacheline_x;
- bat_entry_standerd->cla_level = cla_table->cla_lvl;
-
- cqm_bat_fill_cla_common_gpa(cqm_handle, cla_table, bat_entry_standerd);
-}
-
-static void cqm_bat_fill_cla_cfg(struct cqm_handle *cqm_handle,
- struct cqm_cla_table *cla_table,
- u8 **entry_base_addr)
-{
- struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
- struct cqm_bat_entry_cfg *bat_entry_cfg = NULL;
-
- bat_entry_cfg = (struct cqm_bat_entry_cfg *)(*entry_base_addr);
- bat_entry_cfg->cur_conn_cache = 0;
- bat_entry_cfg->max_conn_cache =
- func_cap->flow_table_based_conn_cache_number;
- bat_entry_cfg->cur_conn_num_h_4 = 0;
- bat_entry_cfg->cur_conn_num_l_16 = 0;
- bat_entry_cfg->max_conn_num = func_cap->flow_table_based_conn_number;
-
-	/* Hash buckets are allocated in units of 64, so the count is shifted
-	 * right by 6 bits. The field is 16 bits wide, supporting at most 4M
-	 * buckets. 1 is subtracted so the value can be ANDed with the hash
-	 * value as a mask.
-	 */
- if ((func_cap->hash_number >> CQM_HASH_NUMBER_UNIT) != 0) {
- bat_entry_cfg->bucket_num = ((func_cap->hash_number >>
- CQM_HASH_NUMBER_UNIT) - 1);
- }
- if (func_cap->bloomfilter_length != 0) {
- bat_entry_cfg->bloom_filter_len = func_cap->bloomfilter_length -
- 1;
- bat_entry_cfg->bloom_filter_addr = func_cap->bloomfilter_addr;
- }
-
- (*entry_base_addr) += sizeof(struct cqm_bat_entry_cfg);
-}
-
-static void cqm_bat_fill_cla_other(struct cqm_handle *cqm_handle,
- struct cqm_cla_table *cla_table,
- u8 **entry_base_addr)
-{
- cqm_bat_fill_cla_common(cqm_handle, cla_table, *entry_base_addr);
-
- (*entry_base_addr) += sizeof(struct cqm_bat_entry_standerd);
-}
-
-static void cqm_bat_fill_cla_taskmap(struct cqm_handle *cqm_handle,
- struct cqm_cla_table *cla_table,
- u8 **entry_base_addr)
-{
- struct cqm_bat_entry_taskmap *bat_entry_taskmap = NULL;
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- int i;
-
- if (cqm_handle->func_capability.taskmap_number != 0) {
- bat_entry_taskmap =
- (struct cqm_bat_entry_taskmap *)(*entry_base_addr);
- for (i = 0; i < CQM_BAT_ENTRY_TASKMAP_NUM; i++) {
- bat_entry_taskmap->addr[i].gpa_h =
- (u32)(cla_table->cla_z_buf.buf_list[i].pa >>
- CQM_CHIP_GPA_HSHIFT);
- bat_entry_taskmap->addr[i].gpa_l =
- (u32)(cla_table->cla_z_buf.buf_list[i].pa &
- CQM_CHIP_GPA_LOMASK);
- cqm_info(handle->dev_hdl,
- "Cla alloc: taskmap bat entry: 0x%x 0x%x\n",
- bat_entry_taskmap->addr[i].gpa_h,
- bat_entry_taskmap->addr[i].gpa_l);
- }
- }
-
- (*entry_base_addr) += sizeof(struct cqm_bat_entry_taskmap);
-}
-
-static void cqm_bat_fill_cla_timer(struct cqm_handle *cqm_handle,
- struct cqm_cla_table *cla_table,
- u8 **entry_base_addr)
-{
- /* Only the PPF allocates timer resources. */
- if (cqm_handle->func_attribute.func_type != CQM_PPF) {
- (*entry_base_addr) += CQM_BAT_ENTRY_SIZE;
- } else {
- cqm_bat_fill_cla_common(cqm_handle, cla_table, *entry_base_addr);
-
- (*entry_base_addr) += sizeof(struct cqm_bat_entry_standerd);
- }
-}
-
-static void cqm_bat_fill_cla_invalid(struct cqm_handle *cqm_handle,
- struct cqm_cla_table *cla_table,
- u8 **entry_base_addr)
-{
- (*entry_base_addr) += CQM_BAT_ENTRY_SIZE;
-}
-
-static void cqm_bat_fill_cla(struct cqm_handle *cqm_handle)
-{
- struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
- struct cqm_cla_table *cla_table = NULL;
- u32 entry_type = CQM_BAT_ENTRY_T_INVALID;
- u8 *entry_base_addr = NULL;
- u32 i = 0;
-
- /* Fills each item in the BAT table according to the BAT format. */
- entry_base_addr = bat_table->bat;
- for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
- entry_type = bat_table->bat_entry_type[i];
- cla_table = &bat_table->entry[i];
-
- if (entry_type == CQM_BAT_ENTRY_T_CFG)
- cqm_bat_fill_cla_cfg(cqm_handle, cla_table, &entry_base_addr);
- else if (entry_type == CQM_BAT_ENTRY_T_TASKMAP)
- cqm_bat_fill_cla_taskmap(cqm_handle, cla_table, &entry_base_addr);
- else if (entry_type == CQM_BAT_ENTRY_T_INVALID)
- cqm_bat_fill_cla_invalid(cqm_handle, cla_table, &entry_base_addr);
- else if (entry_type == CQM_BAT_ENTRY_T_TIMER)
- cqm_bat_fill_cla_timer(cqm_handle, cla_table, &entry_base_addr);
- else
- cqm_bat_fill_cla_other(cqm_handle, cla_table, &entry_base_addr);
-
-		/* Check whether entry_base_addr has run past the end of the BAT array. */
- if (entry_base_addr >= (bat_table->bat + CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE))
- break;
- }
-}
-
-u32 cqm_funcid2smfid(struct cqm_handle *cqm_handle)
-{
- u32 funcid = 0;
- u32 smf_sel = 0;
- u32 smf_id = 0;
- u32 smf_pg_partial = 0;
- /* SMF_Selection is selected based on
- * the lower two bits of the function id
- */
- u32 lbf_smfsel[4] = {0, 2, 1, 3};
- /* SMFID is selected based on SMF_PG[1:0] and SMF_Selection(0-1) */
- u32 smfsel_smfid01[4][2] = { {0, 0}, {0, 0}, {1, 1}, {0, 1} };
- /* SMFID is selected based on SMF_PG[3:2] and SMF_Selection(2-4) */
- u32 smfsel_smfid23[4][2] = { {2, 2}, {2, 2}, {3, 3}, {2, 3} };
-
- /* When the LB mode is disabled, SMF0 is always returned. */
- if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL) {
- smf_id = 0;
- } else {
- funcid = cqm_handle->func_attribute.func_global_idx & 0x3;
- smf_sel = lbf_smfsel[funcid];
-
- if (smf_sel < 2) {
- smf_pg_partial = cqm_handle->func_capability.smf_pg & 0x3;
- smf_id = smfsel_smfid01[smf_pg_partial][smf_sel];
- } else {
- smf_pg_partial = (cqm_handle->func_capability.smf_pg >> 2) & 0x3;
- smf_id = smfsel_smfid23[smf_pg_partial][smf_sel - 2];
- }
- }
-
- return smf_id;
-}
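[Editor's note: the lookup tables in `cqm_funcid2smfid()` fold the low two bits of the function ID and the SMF power-gating bits into one of four SMF engines. A self-contained sketch of the same mapping; the table contents are copied from the function above, while the wrapper name and parameters are illustrative:]

```c
#include <stdint.h>

/* Stand-alone copy of the SMF selection logic from cqm_funcid2smfid().
 * func_idx: global function index; smf_pg: SMF power-gating bits;
 * lb_normal: nonzero when load balancing is disabled (always SMF0). */
static uint32_t smfid_from_funcid(uint32_t func_idx, uint32_t smf_pg, int lb_normal)
{
	static const uint32_t lbf_smfsel[4] = {0, 2, 1, 3};
	static const uint32_t smfsel_smfid01[4][2] = { {0, 0}, {0, 0}, {1, 1}, {0, 1} };
	static const uint32_t smfsel_smfid23[4][2] = { {2, 2}, {2, 2}, {3, 3}, {2, 3} };
	uint32_t smf_sel;

	if (lb_normal)
		return 0;

	smf_sel = lbf_smfsel[func_idx & 0x3];
	if (smf_sel < 2)
		return smfsel_smfid01[smf_pg & 0x3][smf_sel];
	return smfsel_smfid23[(smf_pg >> 2) & 0x3][smf_sel - 2];
}
```

With all four SMFs enabled (smf_pg = 0xF), consecutive function IDs 0..3 map to SMFs 0, 2, 1, 3, spreading functions evenly across engines.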
-
-/* This function is used in LB modes 1/2, where the timer spoker info
- * of an independent space must be configured for all 4 SMFs.
- */
-static void cqm_update_timer_gpa(struct cqm_handle *cqm_handle, u32 smf_id)
-{
- struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
- struct cqm_cla_table *cla_table = NULL;
- u32 entry_type = CQM_BAT_ENTRY_T_INVALID;
- u8 *entry_base_addr = NULL;
- u32 i = 0;
-
- if (cqm_handle->func_attribute.func_type != CQM_PPF)
- return;
-
- if (cqm_handle->func_capability.lb_mode != CQM_LB_MODE_1 &&
- cqm_handle->func_capability.lb_mode != CQM_LB_MODE_2)
- return;
-
- cla_table = &bat_table->timer_entry[smf_id];
- entry_base_addr = bat_table->bat;
- for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
- entry_type = bat_table->bat_entry_type[i];
-
- if (entry_type == CQM_BAT_ENTRY_T_TIMER) {
- cqm_bat_fill_cla_timer(cqm_handle, cla_table, &entry_base_addr);
- break;
- }
-
- if (entry_type == CQM_BAT_ENTRY_T_TASKMAP)
- entry_base_addr += sizeof(struct cqm_bat_entry_taskmap);
- else
- entry_base_addr += CQM_BAT_ENTRY_SIZE;
-
-		/* Check whether entry_base_addr has run past the end of the BAT array. */
- if (entry_base_addr >=
- (bat_table->bat + CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE))
- break;
- }
-}
-
-static s32 cqm_bat_update_cmd(struct cqm_handle *cqm_handle, struct cqm_cmd_buf *buf_in,
- u32 smf_id, u32 func_id)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_cmdq_bat_update *bat_update_cmd = NULL;
- s32 ret = CQM_FAIL;
-
- bat_update_cmd = (struct cqm_cmdq_bat_update *)(buf_in->buf);
- bat_update_cmd->offset = 0;
-
- if (cqm_handle->bat_table.bat_size > CQM_BAT_MAX_SIZE) {
- cqm_err(handle->dev_hdl,
- "bat_size = %u, which is more than %d.\n",
- cqm_handle->bat_table.bat_size, CQM_BAT_MAX_SIZE);
- return CQM_FAIL;
- }
- bat_update_cmd->byte_len = cqm_handle->bat_table.bat_size;
-
- memcpy(bat_update_cmd->data, cqm_handle->bat_table.bat, bat_update_cmd->byte_len);
-
- bat_update_cmd->smf_id = smf_id;
- bat_update_cmd->func_id = func_id;
-
- cqm_info(handle->dev_hdl, "Bat update: smf_id=%u\n", bat_update_cmd->smf_id);
- cqm_info(handle->dev_hdl, "Bat update: func_id=%u\n", bat_update_cmd->func_id);
-
- cqm_swab32((u8 *)bat_update_cmd, sizeof(struct cqm_cmdq_bat_update) >> CQM_DW_SHIFT);
-
- ret = cqm3_send_cmd_box((void *)(cqm_handle->ex_handle), CQM_MOD_CQM,
- CQM_CMD_T_BAT_UPDATE, buf_in, NULL, NULL,
- CQM_CMD_TIMEOUT, SPHW_CHANNEL_DEFAULT);
- if (ret != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm3_send_cmd_box));
- cqm_err(handle->dev_hdl, "%s: send_cmd_box ret=%d\n", __func__,
- ret);
- return CQM_FAIL;
- }
-
- return CQM_SUCCESS;
-}
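[Editor's note: `cqm_bat_update_cmd()` byte-swaps the whole command with `cqm_swab32()` before sending, converting each 32-bit word to the firmware's byte order. A minimal sketch of what such a helper does; the implementation here is an assumption, only the name and per-u32-swap behaviour are taken from the call site above:]

```c
#include <stddef.h>
#include <stdint.h>

/* Reverse the byte order of one 32-bit value. */
static uint32_t swab32_one(uint32_t v)
{
	return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
	       ((v << 8) & 0x00FF0000u) | (v << 24);
}

/* Assumed behaviour of cqm_swab32(): byte-swap `words` consecutive
 * 32-bit values in place, as done to the BAT update command buffer. */
static void cqm_swab32_sketch(uint8_t *buf, size_t words)
{
	size_t i;

	for (i = 0; i < words; i++) {
		uint8_t *w = buf + i * 4;
		uint8_t t;

		t = w[0]; w[0] = w[3]; w[3] = t;
		t = w[1]; w[1] = w[2]; w[2] = t;
	}
}
```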
-
-s32 cqm_bat_update(struct cqm_handle *cqm_handle)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_cmd_buf *buf_in = NULL;
- s32 ret = CQM_FAIL;
- u32 smf_id = 0;
- u32 func_id = 0;
- u32 i = 0;
-
- buf_in = cqm3_cmd_alloc((void *)(cqm_handle->ex_handle));
- CQM_PTR_CHECK_RET(buf_in, CQM_FAIL, CQM_ALLOC_FAIL(buf_in));
- buf_in->size = sizeof(struct cqm_cmdq_bat_update);
-
- /* In non-fake mode, func_id is set to 0xffff */
- func_id = 0xffff;
-
-	/* The LB scenario is supported.
-	 * Normal mode is the traditional mode and is configured on SMF0.
-	 * In mode 0, load is balanced across the four SMFs based on the func
-	 * ID (except the PPF func ID). The PPF in mode 0 must be configured
-	 * on all four SMFs so that the timer resources can be shared by the
-	 * four timer engines.
-	 * Modes 1/2 are load balanced across the four SMFs by flow, so one
-	 * function must be configured on all four SMFs.
-	 */
- if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL ||
- (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
- cqm_handle->func_attribute.func_type != CQM_PPF)) {
- smf_id = cqm_funcid2smfid(cqm_handle);
- ret = cqm_bat_update_cmd(cqm_handle, buf_in, smf_id, func_id);
- } else if ((cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1) ||
- (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2) ||
- ((cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0) &&
- (cqm_handle->func_attribute.func_type == CQM_PPF))) {
- for (i = 0; i < CQM_LB_SMF_MAX; i++) {
- cqm_update_timer_gpa(cqm_handle, i);
-
- /* The smf_pg variable stores the currently enabled SMF. */
- if (cqm_handle->func_capability.smf_pg & (1U << i)) {
- smf_id = i;
- ret = cqm_bat_update_cmd(cqm_handle, buf_in, smf_id, func_id);
- if (ret != CQM_SUCCESS)
- goto out;
- }
- }
- } else {
- cqm_err(handle->dev_hdl, "Bat update: unsupport lb mode=%u\n",
- cqm_handle->func_capability.lb_mode);
- ret = CQM_FAIL;
- }
-
-out:
- cqm3_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
- return ret;
-}
-
-s32 cqm_bat_init_ft(struct cqm_handle *cqm_handle, struct cqm_bat_table *bat_table,
- enum func_type function_type)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- u32 i = 0;
-
- bat_table->bat_entry_type[CQM_BAT_INDEX0] = CQM_BAT_ENTRY_T_CFG;
- bat_table->bat_entry_type[CQM_BAT_INDEX1] = CQM_BAT_ENTRY_T_HASH;
- bat_table->bat_entry_type[CQM_BAT_INDEX2] = CQM_BAT_ENTRY_T_QPC;
- bat_table->bat_entry_type[CQM_BAT_INDEX3] = CQM_BAT_ENTRY_T_SCQC;
- bat_table->bat_entry_type[CQM_BAT_INDEX4] = CQM_BAT_ENTRY_T_LUN;
- bat_table->bat_entry_type[CQM_BAT_INDEX5] = CQM_BAT_ENTRY_T_TASKMAP;
-
- if (function_type == CQM_PF || function_type == CQM_PPF) {
- bat_table->bat_entry_type[CQM_BAT_INDEX6] = CQM_BAT_ENTRY_T_L3I;
- bat_table->bat_entry_type[CQM_BAT_INDEX7] = CQM_BAT_ENTRY_T_CHILDC;
- bat_table->bat_entry_type[CQM_BAT_INDEX8] = CQM_BAT_ENTRY_T_TIMER;
- bat_table->bat_entry_type[CQM_BAT_INDEX9] = CQM_BAT_ENTRY_T_XID2CID;
- bat_table->bat_entry_type[CQM_BAT_INDEX10] = CQM_BAT_ENTRY_T_REORDER;
- bat_table->bat_size = CQM_BAT_SIZE_FT_PF;
- } else if (function_type == CQM_VF) {
- bat_table->bat_size = CQM_BAT_SIZE_FT_VF;
- } else {
- for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
- bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
-
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(function_type));
- return CQM_FAIL;
- }
-
- return CQM_SUCCESS;
-}
-
-s32 cqm_bat_init(struct cqm_handle *cqm_handle)
-{
- struct cqm_func_capability *capability = &cqm_handle->func_capability;
- enum func_type function_type = cqm_handle->func_attribute.func_type;
- struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- u32 i;
-
- memset(bat_table, 0, sizeof(struct cqm_bat_table));
-
- /* Initialize the type of each bat entry. */
- for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
- bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
-
- /* Select BATs based on service types. Currently,
- * feature-related resources of the VF are stored in the BATs of the VF.
- */
- if (capability->ft_enable)
- return cqm_bat_init_ft(cqm_handle, bat_table, function_type);
-
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(capability->ft_enable));
-
- return CQM_FAIL;
-}
-
-void cqm_bat_uninit(struct cqm_handle *cqm_handle)
-{
- struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- u32 i;
-
- for (i = 0; i < CQM_BAT_ENTRY_MAX; i++)
- bat_table->bat_entry_type[i] = CQM_BAT_ENTRY_T_INVALID;
-
- memset(bat_table->bat, 0, CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE);
-
- /* Instruct the chip to update the BAT table. */
- if (cqm_bat_update(cqm_handle) != CQM_SUCCESS)
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bat_update));
-}
-
-s32 cqm_cla_fill_buf(struct cqm_handle *cqm_handle, struct cqm_buf *cla_base_buf,
- struct cqm_buf *cla_sub_buf, u8 gpa_check_enable)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct sphw_func_attr *func_attr = NULL;
- dma_addr_t *base = NULL;
- u64 fake_en = 0;
- u64 spu_en = 0;
- u64 pf_id = 0;
- u32 i = 0;
- u32 addr_num;
- u32 buf_index = 0;
-
- /* Apply for space for base_buf */
- if (!cla_base_buf->buf_list) {
- if (cqm_buf_alloc(cqm_handle, cla_base_buf, false) ==
- CQM_FAIL) {
- cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(cla_base_buf));
- return CQM_FAIL;
- }
- }
-
- /* Apply for space for sub_buf */
- if (!cla_sub_buf->buf_list) {
- if (cqm_buf_alloc(cqm_handle, cla_sub_buf, false) == CQM_FAIL) {
- cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(cla_sub_buf));
- cqm_buf_free(cla_base_buf, cqm_handle->dev);
- return CQM_FAIL;
- }
- }
-
- /* Fill base_buff with the gpa of sub_buf */
- addr_num = cla_base_buf->buf_size / sizeof(dma_addr_t);
- base = (dma_addr_t *)(cla_base_buf->buf_list[0].va);
- for (i = 0; i < cla_sub_buf->buf_number; i++) {
- /* The SPU SMF supports load balancing from the SMF to the CPI,
- * depending on the host ID and func ID.
- */
- if (sphw_host_id(cqm_handle->ex_handle) == CQM_SPU_HOST_ID) {
- func_attr = &cqm_handle->func_attribute;
- spu_en = (u64)(func_attr->func_global_idx & 0x1) << 63;
- } else {
- spu_en = 0;
- }
-
- *base = (((((cla_sub_buf->buf_list[i].pa & CQM_CHIP_GPA_MASK) |
- spu_en) |
- fake_en) |
- pf_id) |
- gpa_check_enable);
-
- cqm_swab64((u8 *)base, 1);
- if ((i + 1) % addr_num == 0) {
- buf_index++;
- if (buf_index < cla_base_buf->buf_number)
- base = cla_base_buf->buf_list[buf_index].va;
- } else {
- base++;
- }
- }
-
- return CQM_SUCCESS;
-}
-
-s32 cqm_cla_xyz_lvl1(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
- u32 trunk_size)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_buf *cla_y_buf = NULL;
- struct cqm_buf *cla_z_buf = NULL;
- s32 shift = 0;
- s32 ret = CQM_FAIL;
- u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
- u32 cache_line = 0;
-
- if (cla_table->type == CQM_BAT_ENTRY_T_TIMER && cqm_ver == 8)
- cache_line = CQM_CHIP_TIMER_CACHELINE;
- else
- cache_line = CQM_CHIP_CACHELINE;
-
- if (cla_table->type == CQM_BAT_ENTRY_T_REORDER)
- gpa_check_enable = 0;
-
- cla_table->cla_lvl = CQM_CLA_LVL_1;
-
- shift = cqm_shift(trunk_size / cla_table->obj_size);
- cla_table->z = shift ? (shift - 1) : (shift);
- cla_table->y = CQM_MAX_INDEX_BIT;
- cla_table->x = 0;
-
- if (cla_table->obj_size >= cache_line) {
- cla_table->cacheline_z = cla_table->z;
- cla_table->cacheline_y = cla_table->y;
- cla_table->cacheline_x = cla_table->x;
- } else {
- shift = cqm_shift(trunk_size / cache_line);
- cla_table->cacheline_z = shift ? (shift - 1) : (shift);
- cla_table->cacheline_y = CQM_MAX_INDEX_BIT;
- cla_table->cacheline_x = 0;
- }
-
-	/* Apply for CLA_Y_BUF space */
- cla_y_buf = &cla_table->cla_y_buf;
- cla_y_buf->buf_size = trunk_size;
- cla_y_buf->buf_number = 1;
- cla_y_buf->page_number = cla_y_buf->buf_number <<
- cla_table->trunk_order;
- ret = cqm_buf_alloc(cqm_handle, cla_y_buf, false);
- CQM_CHECK_EQUAL_RET(handle->dev_hdl, ret, CQM_SUCCESS, CQM_FAIL,
- CQM_ALLOC_FAIL(lvl_1_y_buf));
-
-	/* Apply for CLA_Z_BUF space */
- cla_z_buf = &cla_table->cla_z_buf;
- cla_z_buf->buf_size = trunk_size;
- cla_z_buf->buf_number =
- (ALIGN(cla_table->max_buffer_size, trunk_size)) / trunk_size;
- cla_z_buf->page_number = cla_z_buf->buf_number <<
- cla_table->trunk_order;
- /* All buffer space must be statically allocated. */
- if (cla_table->alloc_static) {
- ret = cqm_cla_fill_buf(cqm_handle, cla_y_buf, cla_z_buf,
- gpa_check_enable);
- CQM_CHECK_EQUAL_RET(handle->dev_hdl, ret, CQM_SUCCESS, CQM_FAIL,
- CQM_FUNCTION_FAIL(cqm_cla_fill_buf));
- } else { /* Only the buffer list space is initialized. The buffer space
- * is dynamically allocated in services.
- */
- cla_z_buf->buf_list = vmalloc(cla_z_buf->buf_number *
- sizeof(struct cqm_buf_list));
- if (!cla_z_buf->buf_list) {
- cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_1_z_buf));
- cqm_buf_free(cla_y_buf, cqm_handle->dev);
- return CQM_FAIL;
- }
- memset(cla_z_buf->buf_list, 0,
- cla_z_buf->buf_number * sizeof(struct cqm_buf_list));
- }
-
- return CQM_SUCCESS;
-}
-
-s32 cqm_cla_xyz_lvl2(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
- u32 trunk_size)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_buf *cla_x_buf = NULL;
- struct cqm_buf *cla_y_buf = NULL;
- struct cqm_buf *cla_z_buf = NULL;
- s32 shift = 0;
- s32 ret = CQM_FAIL;
- u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
- u32 cache_line = 0;
-
- if (cla_table->type == CQM_BAT_ENTRY_T_TIMER && cqm_ver == 8)
- cache_line = CQM_CHIP_TIMER_CACHELINE;
- else
- cache_line = CQM_CHIP_CACHELINE;
-
- if (cla_table->type == CQM_BAT_ENTRY_T_REORDER)
- gpa_check_enable = 0;
-
- cla_table->cla_lvl = CQM_CLA_LVL_2;
-
- shift = cqm_shift(trunk_size / cla_table->obj_size);
- cla_table->z = shift ? (shift - 1) : (shift);
- shift = cqm_shift(trunk_size / sizeof(dma_addr_t));
- cla_table->y = cla_table->z + shift;
- cla_table->x = CQM_MAX_INDEX_BIT;
-
- if (cla_table->obj_size >= cache_line) {
- cla_table->cacheline_z = cla_table->z;
- cla_table->cacheline_y = cla_table->y;
- cla_table->cacheline_x = cla_table->x;
- } else {
- shift = cqm_shift(trunk_size / cache_line);
- cla_table->cacheline_z = shift ? (shift - 1) : (shift);
- shift = cqm_shift(trunk_size / sizeof(dma_addr_t));
- cla_table->cacheline_y = cla_table->cacheline_z + shift;
- cla_table->cacheline_x = CQM_MAX_INDEX_BIT;
- }
-
- /* Apply for CLA_X_BUF Space */
- cla_x_buf = &cla_table->cla_x_buf;
- cla_x_buf->buf_size = trunk_size;
- cla_x_buf->buf_number = 1;
- cla_x_buf->page_number = cla_x_buf->buf_number <<
- cla_table->trunk_order;
- ret = cqm_buf_alloc(cqm_handle, cla_x_buf, false);
- CQM_CHECK_EQUAL_RET(handle->dev_hdl, ret, CQM_SUCCESS, CQM_FAIL,
- CQM_ALLOC_FAIL(lvl_2_x_buf));
-
- /* Apply for CLA_Z_BUF and CLA_Y_BUF Space */
- cla_z_buf = &cla_table->cla_z_buf;
- cla_z_buf->buf_size = trunk_size;
- cla_z_buf->buf_number =
- (ALIGN(cla_table->max_buffer_size, trunk_size)) / trunk_size;
- cla_z_buf->page_number = cla_z_buf->buf_number <<
- cla_table->trunk_order;
-
- cla_y_buf = &cla_table->cla_y_buf;
- cla_y_buf->buf_size = trunk_size;
- cla_y_buf->buf_number =
- (ALIGN(cla_z_buf->buf_number * sizeof(dma_addr_t), trunk_size)) /
- trunk_size;
- cla_y_buf->page_number = cla_y_buf->buf_number <<
- cla_table->trunk_order;
- /* All buffer space must be statically allocated. */
- if (cla_table->alloc_static) {
- /* Apply for y buf and z buf, and fill the gpa of
- * z buf list in y buf
- */
- if (cqm_cla_fill_buf(cqm_handle, cla_y_buf, cla_z_buf,
- gpa_check_enable) == CQM_FAIL) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(cqm_cla_fill_buf));
- cqm_buf_free(cla_x_buf, cqm_handle->dev);
- return CQM_FAIL;
- }
-
- /* Fill the gpa of the y buf list into the x buf.
- * After the x and y bufs are applied for,
- * this function will not fail.
- * Use void to forcibly convert the return of the function.
- */
- (void)cqm_cla_fill_buf(cqm_handle, cla_x_buf, cla_y_buf,
- gpa_check_enable);
- } else { /* Only the buffer list space is initialized. The buffer space
- * is dynamically allocated in services.
- */
- cla_z_buf->buf_list = vmalloc(cla_z_buf->buf_number *
- sizeof(struct cqm_buf_list));
- if (!cla_z_buf->buf_list) {
- cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_2_z_buf));
- cqm_buf_free(cla_x_buf, cqm_handle->dev);
- return CQM_FAIL;
- }
- memset(cla_z_buf->buf_list, 0,
- cla_z_buf->buf_number * sizeof(struct cqm_buf_list));
-
- cla_y_buf->buf_list = vmalloc(cla_y_buf->buf_number *
- sizeof(struct cqm_buf_list));
- if (!cla_y_buf->buf_list) {
- cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(lvl_2_y_buf));
- cqm_buf_free(cla_z_buf, cqm_handle->dev);
- cqm_buf_free(cla_x_buf, cqm_handle->dev);
- return CQM_FAIL;
- }
- memset(cla_y_buf->buf_list, 0,
- cla_y_buf->buf_number * sizeof(struct cqm_buf_list));
- }
-
- return CQM_SUCCESS;
-}
-
-s32 cqm_cla_xyz_check(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
- u32 *size)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- u32 trunk_size = 0;
-
-	/* If the capability (obj_num) is set to 0, the CLA does not need to be
-	 * initialized, so return directly.
-	 */
- if (cla_table->obj_num == 0) {
- cqm_info(handle->dev_hdl,
- "Cla alloc: cla_type %u, obj_num=0, don't alloc buffer\n",
- cla_table->type);
- return CQM_SUCCESS;
- }
-
- cqm_info(handle->dev_hdl,
- "Cla alloc: cla_type %u, obj_num=0x%x, gpa_check_enable=%d\n",
- cla_table->type, cla_table->obj_num,
- cqm_handle->func_capability.gpa_check_enable);
-
- /* Check whether obj_size is 2^n-aligned. An error is reported when
- * obj_size is 0 or 1.
- */
- if (!cqm_check_align(cla_table->obj_size)) {
- cqm_err(handle->dev_hdl,
- "Cla alloc: cla_type %u, obj_size 0x%x is not align on 2^n\n",
- cla_table->type, cla_table->obj_size);
- return CQM_FAIL;
- }
-
- trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
-
- if (trunk_size < cla_table->obj_size) {
- cqm_err(handle->dev_hdl,
- "Cla alloc: cla type %u, obj_size 0x%x is out of trunk size\n",
- cla_table->type, cla_table->obj_size);
- return CQM_FAIL;
- }
-
- *size = trunk_size;
-
- return CQM_CONTINUE;
-}
-
-s32 cqm_cla_xyz(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_buf *cla_z_buf = NULL;
- u32 trunk_size = 0;
- s32 ret = CQM_FAIL;
-
- ret = cqm_cla_xyz_check(cqm_handle, cla_table, &trunk_size);
- if (ret != CQM_CONTINUE)
- return ret;
-
- /* Level-0 CLA occupies a small space.
- * Only CLA_Z_BUF can be allocated during initialization.
- */
- if (cla_table->max_buffer_size <= trunk_size) {
- cla_table->cla_lvl = CQM_CLA_LVL_0;
-
- cla_table->z = CQM_MAX_INDEX_BIT;
- cla_table->y = 0;
- cla_table->x = 0;
-
- cla_table->cacheline_z = cla_table->z;
- cla_table->cacheline_y = cla_table->y;
- cla_table->cacheline_x = cla_table->x;
-
-		/* Apply for CLA_Z_BUF space */
-		cla_z_buf = &cla_table->cla_z_buf;
-		cla_z_buf->buf_size = trunk_size;
- cla_z_buf->buf_number = 1;
- cla_z_buf->page_number = cla_z_buf->buf_number << cla_table->trunk_order;
- ret = cqm_buf_alloc(cqm_handle, cla_z_buf, false);
- CQM_CHECK_EQUAL_RET(handle->dev_hdl, ret, CQM_SUCCESS, CQM_FAIL,
- CQM_ALLOC_FAIL(lvl_0_z_buf));
- }
- /* Level-1 CLA
- * Allocates CLA_Y_BUF and CLA_Z_BUF during initialization.
- */
- else if (cla_table->max_buffer_size <= (trunk_size * (trunk_size / sizeof(dma_addr_t)))) {
- if (cqm_cla_xyz_lvl1(cqm_handle, cla_table, trunk_size) == CQM_FAIL) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_xyz_lvl1));
- return CQM_FAIL;
- }
- }
- /* Level-2 CLA
- * Allocates CLA_X_BUF, CLA_Y_BUF, and CLA_Z_BUF during initialization.
- */
- else if (cla_table->max_buffer_size <=
- (trunk_size * (trunk_size / sizeof(dma_addr_t)) *
- (trunk_size / sizeof(dma_addr_t)))) {
- if (cqm_cla_xyz_lvl2(cqm_handle, cla_table, trunk_size) ==
- CQM_FAIL) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_xyz_lvl2));
- return CQM_FAIL;
- }
- } else { /* The current memory management mode does not support such
- * a large buffer addressing. The order value needs to
- * be increased.
- */
- cqm_err(handle->dev_hdl,
- "Cla alloc: cla max_buffer_size 0x%x exceeds support range\n",
- cla_table->max_buffer_size);
- return CQM_FAIL;
- }
-
- return CQM_SUCCESS;
-}
-
-void cqm_cla_init_entry_normal(struct cqm_handle *cqm_handle,
- struct cqm_cla_table *cla_table,
- struct cqm_func_capability *capability)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
-
- switch (cla_table->type) {
- case CQM_BAT_ENTRY_T_HASH:
- cla_table->trunk_order = capability->pagesize_reorder;
- cla_table->max_buffer_size = capability->hash_number * capability->hash_basic_size;
- cla_table->obj_size = capability->hash_basic_size;
- cla_table->obj_num = capability->hash_number;
- cla_table->alloc_static = true;
- break;
- case CQM_BAT_ENTRY_T_QPC:
- cla_table->trunk_order = capability->pagesize_reorder;
- cla_table->max_buffer_size = capability->qpc_number * capability->qpc_basic_size;
- cla_table->obj_size = capability->qpc_basic_size;
- cla_table->obj_num = capability->qpc_number;
- cla_table->alloc_static = capability->qpc_alloc_static;
- cqm_info(handle->dev_hdl, "Cla alloc: qpc alloc_static=%d\n",
- cla_table->alloc_static);
- break;
- case CQM_BAT_ENTRY_T_MPT:
- cla_table->trunk_order = capability->pagesize_reorder;
- cla_table->max_buffer_size = capability->mpt_number * capability->mpt_basic_size;
- cla_table->obj_size = capability->mpt_basic_size;
- cla_table->obj_num = capability->mpt_number;
-		/* Per CCB decision, MPT is used only in static allocation scenarios. */
- cla_table->alloc_static = true;
- break;
- case CQM_BAT_ENTRY_T_SCQC:
- cla_table->trunk_order = capability->pagesize_reorder;
- cla_table->max_buffer_size = capability->scqc_number * capability->scqc_basic_size;
- cla_table->obj_size = capability->scqc_basic_size;
- cla_table->obj_num = capability->scqc_number;
- cla_table->alloc_static = capability->scqc_alloc_static;
- cqm_info(handle->dev_hdl, "Cla alloc: scqc alloc_static=%d\n",
- cla_table->alloc_static);
- break;
- case CQM_BAT_ENTRY_T_SRQC:
- cla_table->trunk_order = capability->pagesize_reorder;
- cla_table->max_buffer_size = capability->srqc_number * capability->srqc_basic_size;
- cla_table->obj_size = capability->srqc_basic_size;
- cla_table->obj_num = capability->srqc_number;
- cla_table->alloc_static = false;
- break;
- default:
- break;
- }
-}
-
-void cqm_cla_init_entry_extern(struct cqm_handle *cqm_handle,
- struct cqm_cla_table *cla_table,
- struct cqm_func_capability *capability)
-{
- switch (cla_table->type) {
- case CQM_BAT_ENTRY_T_GID:
- /* Level-0 CLA table required */
- cla_table->max_buffer_size = capability->gid_number * capability->gid_basic_size;
- cla_table->trunk_order =
- (u32)cqm_shift(ALIGN(cla_table->max_buffer_size, PAGE_SIZE) / PAGE_SIZE);
- cla_table->obj_size = capability->gid_basic_size;
- cla_table->obj_num = capability->gid_number;
- cla_table->alloc_static = true;
- break;
- case CQM_BAT_ENTRY_T_LUN:
- cla_table->trunk_order = CLA_TABLE_PAGE_ORDER;
- cla_table->max_buffer_size = capability->lun_number * capability->lun_basic_size;
- cla_table->obj_size = capability->lun_basic_size;
- cla_table->obj_num = capability->lun_number;
- cla_table->alloc_static = true;
- break;
- case CQM_BAT_ENTRY_T_TASKMAP:
- cla_table->trunk_order = CQM_4K_PAGE_ORDER;
- cla_table->max_buffer_size = capability->taskmap_number *
- capability->taskmap_basic_size;
- cla_table->obj_size = capability->taskmap_basic_size;
- cla_table->obj_num = capability->taskmap_number;
- cla_table->alloc_static = true;
- break;
- case CQM_BAT_ENTRY_T_L3I:
- cla_table->trunk_order = CLA_TABLE_PAGE_ORDER;
- cla_table->max_buffer_size = capability->l3i_number * capability->l3i_basic_size;
- cla_table->obj_size = capability->l3i_basic_size;
- cla_table->obj_num = capability->l3i_number;
- cla_table->alloc_static = true;
- break;
- case CQM_BAT_ENTRY_T_CHILDC:
- cla_table->trunk_order = capability->pagesize_reorder;
- cla_table->max_buffer_size = capability->childc_number *
- capability->childc_basic_size;
- cla_table->obj_size = capability->childc_basic_size;
- cla_table->obj_num = capability->childc_number;
- cla_table->alloc_static = true;
- break;
- case CQM_BAT_ENTRY_T_TIMER:
- /* Ensure that the basic size of the timer buffer page does not
- * exceed 128 x 4 KB. Otherwise, clearing the timer buffer of
- * the function is complex.
- */
- cla_table->trunk_order = CQM_4K_PAGE_ORDER;
- cla_table->max_buffer_size = capability->timer_number *
- capability->timer_basic_size;
- cla_table->obj_size = capability->timer_basic_size;
- cla_table->obj_num = capability->timer_number;
- cla_table->alloc_static = true;
- break;
- case CQM_BAT_ENTRY_T_XID2CID:
- cla_table->trunk_order = capability->pagesize_reorder;
- cla_table->max_buffer_size = capability->xid2cid_number *
- capability->xid2cid_basic_size;
- cla_table->obj_size = capability->xid2cid_basic_size;
- cla_table->obj_num = capability->xid2cid_number;
- cla_table->alloc_static = true;
- break;
- case CQM_BAT_ENTRY_T_REORDER:
- /* This entry supports only IWARP and does not support GPA validity check. */
- cla_table->trunk_order = capability->pagesize_reorder;
- cla_table->max_buffer_size = capability->reorder_number *
- capability->reorder_basic_size;
- cla_table->obj_size = capability->reorder_basic_size;
- cla_table->obj_num = capability->reorder_number;
- cla_table->alloc_static = true;
- break;
- default:
- break;
- }
-}
-
-s32 cqm_cla_init_entry_condition(struct cqm_handle *cqm_handle, u32 entry_type)
-{
- struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
- struct cqm_cla_table *cla_table = &bat_table->entry[entry_type];
- struct cqm_cla_table *cla_table_timer = NULL;
- u32 i;
-
- /* When the timer is in LB mode 1 or 2, the timer needs to be
- * configured for four SMFs and the address space is independent.
- */
- if (cla_table->type == CQM_BAT_ENTRY_T_TIMER &&
- (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
- cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2)) {
- for (i = 0; i < CQM_LB_SMF_MAX; i++) {
- cla_table_timer = &bat_table->timer_entry[i];
- memcpy(cla_table_timer, cla_table, sizeof(struct cqm_cla_table));
-
- if (cqm_cla_xyz(cqm_handle, cla_table_timer) == CQM_FAIL) {
- cqm_cla_uninit(cqm_handle, entry_type);
- return CQM_FAIL;
- }
- }
- }
-
- if (cqm_cla_xyz(cqm_handle, cla_table) == CQM_FAIL) {
- cqm_cla_uninit(cqm_handle, entry_type);
- return CQM_FAIL;
- }
-
- return CQM_SUCCESS;
-}
-
-s32 cqm_cla_init_entry(struct cqm_handle *cqm_handle,
- struct cqm_func_capability *capability)
-{
- struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
- struct cqm_cla_table *cla_table = NULL;
- s32 ret;
- u32 i = 0;
-
- for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
- cla_table = &bat_table->entry[i];
- cla_table->type = bat_table->bat_entry_type[i];
-
- cqm_cla_init_entry_normal(cqm_handle, cla_table, capability);
- cqm_cla_init_entry_extern(cqm_handle, cla_table, capability);
-
- /* Allocate CLA entry space at each level. */
- if (cla_table->type < CQM_BAT_ENTRY_T_HASH ||
- cla_table->type > CQM_BAT_ENTRY_T_REORDER) {
- mutex_init(&cla_table->lock);
- continue;
- }
-
-		/* For the PPF, timer resources (8 wheels x 2k scales x
-		 * 32B x func_num) need to be allocated, and the timer
-		 * entry in the BAT table needs to be filled. For the PF,
-		 * no timer resource is allocated and the timer entry in
-		 * the BAT table is left unfilled.
-		 */
- if (!(cla_table->type == CQM_BAT_ENTRY_T_TIMER &&
- cqm_handle->func_attribute.func_type != CQM_PPF)) {
- ret = cqm_cla_init_entry_condition(cqm_handle, i);
- if (ret != CQM_SUCCESS)
- return CQM_FAIL;
- }
- mutex_init(&cla_table->lock);
- }
-
- return CQM_SUCCESS;
-}
-
-s32 cqm_cla_init(struct cqm_handle *cqm_handle)
-{
- struct cqm_func_capability *capability = &cqm_handle->func_capability;
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- s32 ret;
-
- /* Applying for CLA Entries */
- ret = cqm_cla_init_entry(cqm_handle, capability);
- if (ret != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_init_entry));
- return ret;
- }
-
- /* After the CLA entry is applied, the address is filled in the BAT table. */
- cqm_bat_fill_cla(cqm_handle);
-
- /* Instruct the chip to update the BAT table. */
- ret = cqm_bat_update(cqm_handle);
- if (ret != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bat_update));
- goto err;
- }
-
- cqm_info(handle->dev_hdl, "Timer start: func_type=%d, timer_enable=%u\n",
- cqm_handle->func_attribute.func_type,
- cqm_handle->func_capability.timer_enable);
-
- if (cqm_handle->func_attribute.func_type == CQM_PPF &&
- cqm_handle->func_capability.timer_enable == CQM_TIMER_ENABLE) {
- /* Enable the timer after the timer resources are applied for */
- cqm_info(handle->dev_hdl, "Timer start: spfc ppf timer start\n");
- ret = sphw_ppf_tmr_start((void *)(cqm_handle->ex_handle));
- if (ret != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl, "Timer start: spfc ppf timer start, ret=%d\n",
- ret);
- goto err;
- }
- }
-
- return CQM_SUCCESS;
-
-err:
- cqm_cla_uninit(cqm_handle, CQM_BAT_ENTRY_MAX);
- return CQM_FAIL;
-}
-
-void cqm_cla_uninit(struct cqm_handle *cqm_handle, u32 entry_numb)
-{
- struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
- struct cqm_cla_table *cla_table = NULL;
- s32 inv_flag = 0;
- u32 i;
-
- for (i = 0; i < entry_numb; i++) {
- cla_table = &bat_table->entry[i];
- if (cla_table->type != CQM_BAT_ENTRY_T_INVALID) {
- cqm_buf_free_cache_inv(cqm_handle, &cla_table->cla_x_buf, &inv_flag);
- cqm_buf_free_cache_inv(cqm_handle, &cla_table->cla_y_buf, &inv_flag);
- cqm_buf_free_cache_inv(cqm_handle, &cla_table->cla_z_buf, &inv_flag);
- }
- }
-
- /* When the lb mode is 1/2, the timer space allocated to the 4 SMFs
- * needs to be released.
- */
- if (cqm_handle->func_attribute.func_type == CQM_PPF &&
- (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
- cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2)) {
- for (i = 0; i < CQM_LB_SMF_MAX; i++) {
- cla_table = &bat_table->timer_entry[i];
- cqm_buf_free_cache_inv(cqm_handle, &cla_table->cla_x_buf, &inv_flag);
- cqm_buf_free_cache_inv(cqm_handle, &cla_table->cla_y_buf, &inv_flag);
- cqm_buf_free_cache_inv(cqm_handle, &cla_table->cla_z_buf, &inv_flag);
- }
- }
-}
-
-s32 cqm_cla_update_cmd(struct cqm_handle *cqm_handle, struct cqm_cmd_buf *buf_in,
- struct cqm_cla_update_cmd *cmd)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_cla_update_cmd *cla_update_cmd = NULL;
- s32 ret = CQM_FAIL;
-
- cla_update_cmd = (struct cqm_cla_update_cmd *)(buf_in->buf);
-
- cla_update_cmd->gpa_h = cmd->gpa_h;
- cla_update_cmd->gpa_l = cmd->gpa_l;
- cla_update_cmd->value_h = cmd->value_h;
- cla_update_cmd->value_l = cmd->value_l;
- cla_update_cmd->smf_id = cmd->smf_id;
- cla_update_cmd->func_id = cmd->func_id;
-
- cqm_swab32((u8 *)cla_update_cmd,
- (sizeof(struct cqm_cla_update_cmd) >> CQM_DW_SHIFT));
-
- ret = cqm3_send_cmd_box((void *)(cqm_handle->ex_handle), CQM_MOD_CQM,
- CQM_CMD_T_CLA_UPDATE, buf_in, NULL, NULL,
- CQM_CMD_TIMEOUT, SPHW_CHANNEL_DEFAULT);
- if (ret != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm3_send_cmd_box));
- cqm_err(handle->dev_hdl, "Cla alloc: cqm_cla_update, cqm3_send_cmd_box_ret=%d\n",
- ret);
- cqm_err(handle->dev_hdl, "Cla alloc: cqm_cla_update, cla_update_cmd: 0x%x 0x%x 0x%x 0x%x\n",
- cmd->gpa_h, cmd->gpa_l, cmd->value_h, cmd->value_l);
- return CQM_FAIL;
- }
-
- return CQM_SUCCESS;
-}
-
-s32 cqm_cla_update(struct cqm_handle *cqm_handle, struct cqm_buf_list *buf_node_parent,
- struct cqm_buf_list *buf_node_child, u32 child_index, u8 cla_update_mode)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_cmd_buf *buf_in = NULL;
- struct cqm_cla_update_cmd cmd;
- dma_addr_t pa = 0;
- s32 ret = CQM_FAIL;
- u8 gpa_check_enable = cqm_handle->func_capability.gpa_check_enable;
- u32 i = 0;
- u64 spu_en;
-
- buf_in = cqm3_cmd_alloc(cqm_handle->ex_handle);
- CQM_PTR_CHECK_RET(buf_in, CQM_FAIL, CQM_ALLOC_FAIL(buf_in));
- buf_in->size = sizeof(struct cqm_cla_update_cmd);
-
- /* Fill command format, convert to big endian. */
- /* SPU function sets bit63: acs_spu_en based on function id. */
- if (sphw_host_id(cqm_handle->ex_handle) == CQM_SPU_HOST_ID)
- spu_en = ((u64)(cqm_handle->func_attribute.func_global_idx &
- 0x1)) << 63;
- else
- spu_en = 0;
-
- pa = ((buf_node_parent->pa + (child_index * sizeof(dma_addr_t))) |
- spu_en);
- cmd.gpa_h = CQM_ADDR_HI(pa);
- cmd.gpa_l = CQM_ADDR_LW(pa);
-
- pa = (buf_node_child->pa | spu_en);
- cmd.value_h = CQM_ADDR_HI(pa);
- cmd.value_l = CQM_ADDR_LW(pa);
-
- /* current CLA GPA CHECK */
- if (gpa_check_enable) {
- switch (cla_update_mode) {
- /* gpa[0]=1 means this GPA is valid */
- case CQM_CLA_RECORD_NEW_GPA:
- cmd.value_l |= 1;
- break;
-		/* gpa[0]=0 means this GPA is invalid */
- case CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID:
- case CQM_CLA_DEL_GPA_WITH_CACHE_INVALID:
- cmd.value_l &= (~1);
- break;
- default:
- cqm_err(handle->dev_hdl,
- "Cla alloc: %s, wrong cla_update_mode=%u\n",
- __func__, cla_update_mode);
- break;
- }
- }
-
- /* In non-fake mode, set func_id to 0xffff. */
- cmd.func_id = 0xffff;
-
- /* Mode 0 is hashed to 4 SMF engines (excluding PPF) by func ID. */
- if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL ||
- (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
- cqm_handle->func_attribute.func_type != CQM_PPF)) {
- cmd.smf_id = cqm_funcid2smfid(cqm_handle);
- ret = cqm_cla_update_cmd(cqm_handle, buf_in, &cmd);
- }
- /* Modes 1/2 are allocated to four SMF engines by flow.
- * Therefore, one function needs to be allocated to four SMF engines.
- */
- /* Mode 0 PPF needs to be configured on 4 engines,
- * and the timer resources need to be shared by the 4 engines.
- */
- else if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
- cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2 ||
- (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
- cqm_handle->func_attribute.func_type == CQM_PPF)) {
- for (i = 0; i < CQM_LB_SMF_MAX; i++) {
- /* The smf_pg variable stores currently enabled SMF. */
- if (cqm_handle->func_capability.smf_pg & (1U << i)) {
- cmd.smf_id = i;
- ret = cqm_cla_update_cmd(cqm_handle, buf_in,
- &cmd);
- if (ret != CQM_SUCCESS)
- goto out;
- }
- }
- } else {
-		cqm_err(handle->dev_hdl, "Cla update: unsupported lb mode=%u\n",
-			cqm_handle->func_capability.lb_mode);
- ret = CQM_FAIL;
- }
-
-out:
- cqm3_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
- return ret;
-}
-
-s32 cqm_cla_alloc(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
- struct cqm_buf_list *buf_node_parent,
- struct cqm_buf_list *buf_node_child, u32 child_index)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- s32 ret = CQM_FAIL;
-
- /* Apply for trunk page */
- buf_node_child->va = (u8 *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
- cla_table->trunk_order);
- CQM_PTR_CHECK_RET(buf_node_child->va, CQM_FAIL, CQM_ALLOC_FAIL(va));
-
- /* PCI mapping */
- buf_node_child->pa = pci_map_single(cqm_handle->dev, buf_node_child->va,
- PAGE_SIZE << cla_table->trunk_order,
- PCI_DMA_BIDIRECTIONAL);
- if (pci_dma_mapping_error(cqm_handle->dev, buf_node_child->pa)) {
- cqm_err(handle->dev_hdl, CQM_MAP_FAIL(buf_node_child->pa));
- goto err1;
- }
-
- /* Notify the chip of trunk_pa so that the chip fills in cla entry */
- ret = cqm_cla_update(cqm_handle, buf_node_parent, buf_node_child,
- child_index, CQM_CLA_RECORD_NEW_GPA);
- if (ret != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_update));
- goto err2;
- }
-
- return CQM_SUCCESS;
-
-err2:
- pci_unmap_single(cqm_handle->dev, buf_node_child->pa,
- PAGE_SIZE << cla_table->trunk_order,
- PCI_DMA_BIDIRECTIONAL);
-err1:
- free_pages((ulong)(buf_node_child->va), cla_table->trunk_order);
- buf_node_child->va = NULL;
- return CQM_FAIL;
-}
-
-void cqm_cla_free(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
- struct cqm_buf_list *buf_node_parent,
- struct cqm_buf_list *buf_node_child, u32 child_index, u8 cla_update_mode)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- u32 trunk_size;
-
- if (cqm_cla_update(cqm_handle, buf_node_parent, buf_node_child,
- child_index, cla_update_mode) != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_update));
- return;
- }
-
- if (cla_update_mode == CQM_CLA_DEL_GPA_WITH_CACHE_INVALID) {
- trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
- if (cqm_cla_cache_invalid(cqm_handle, buf_node_child->pa,
- trunk_size) != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(cqm_cla_cache_invalid));
- return;
- }
- }
-
- /* Remove PCI mapping from the trunk page */
- pci_unmap_single(cqm_handle->dev, buf_node_child->pa,
- PAGE_SIZE << cla_table->trunk_order,
- PCI_DMA_BIDIRECTIONAL);
-
-	/* Release trunk page */
- free_pages((ulong)(buf_node_child->va), cla_table->trunk_order);
- buf_node_child->va = NULL;
-}
-
-u8 *cqm_cla_get_unlock_lvl0(struct cqm_handle *cqm_handle,
- struct cqm_cla_table *cla_table,
- u32 index, u32 count, dma_addr_t *pa)
-{
- struct cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
- u8 *ret_addr = NULL;
- u32 offset = 0;
-
- /* Level 0 CLA pages are statically allocated. */
- offset = index * cla_table->obj_size;
- ret_addr = (u8 *)(cla_z_buf->buf_list->va) + offset;
- *pa = cla_z_buf->buf_list->pa + offset;
-
- return ret_addr;
-}
-
-u8 *cqm_cla_get_unlock_lvl1(struct cqm_handle *cqm_handle,
- struct cqm_cla_table *cla_table,
- u32 index, u32 count, dma_addr_t *pa)
-{
- struct cqm_buf *cla_y_buf = &cla_table->cla_y_buf;
- struct cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_buf_list *buf_node_y = NULL;
- struct cqm_buf_list *buf_node_z = NULL;
- u32 y_index = 0;
- u32 z_index = 0;
- u8 *ret_addr = NULL;
- u32 offset = 0;
-
- z_index = index & ((1U << (cla_table->z + 1)) - 1);
- y_index = index >> (cla_table->z + 1);
-
- if (y_index >= cla_z_buf->buf_number) {
- cqm_err(handle->dev_hdl,
- "Cla get: index exceeds buf_number, y_index %u, z_buf_number %u\n",
- y_index, cla_z_buf->buf_number);
- return NULL;
- }
- buf_node_z = &cla_z_buf->buf_list[y_index];
- buf_node_y = cla_y_buf->buf_list;
-
-	/* If the z buf node does not exist, apply for a page first. */
- if (!buf_node_z->va) {
- if (cqm_cla_alloc(cqm_handle, cla_table, buf_node_y, buf_node_z,
- y_index) == CQM_FAIL) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(cqm_cla_alloc));
- cqm_err(handle->dev_hdl,
- "Cla get: cla_table->type=%u\n",
- cla_table->type);
- return NULL;
- }
- }
-
- buf_node_z->refcount += count;
- offset = z_index * cla_table->obj_size;
- ret_addr = (u8 *)(buf_node_z->va) + offset;
- *pa = buf_node_z->pa + offset;
-
- return ret_addr;
-}
-
-u8 *cqm_cla_get_unlock_lvl2(struct cqm_handle *cqm_handle,
- struct cqm_cla_table *cla_table,
- u32 index, u32 count, dma_addr_t *pa)
-{
- struct cqm_buf *cla_x_buf = &cla_table->cla_x_buf;
- struct cqm_buf *cla_y_buf = &cla_table->cla_y_buf;
- struct cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_buf_list *buf_node_x = NULL;
- struct cqm_buf_list *buf_node_y = NULL;
- struct cqm_buf_list *buf_node_z = NULL;
- u32 x_index = 0;
- u32 y_index = 0;
- u32 z_index = 0;
- u32 trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
- u8 *ret_addr = NULL;
- u32 offset = 0;
- u64 tmp;
-
- z_index = index & ((1U << (cla_table->z + 1)) - 1);
- y_index = (index >> (cla_table->z + 1)) &
- ((1U << (cla_table->y - cla_table->z)) - 1);
- x_index = index >> (cla_table->y + 1);
- tmp = x_index * (trunk_size / sizeof(dma_addr_t)) + y_index;
-
- if (x_index >= cla_y_buf->buf_number || tmp >= cla_z_buf->buf_number) {
- cqm_err(handle->dev_hdl,
- "Cla get: index exceeds buf_number, x_index %u, y_index %u, y_buf_number %u, z_buf_number %u\n",
- x_index, y_index, cla_y_buf->buf_number,
- cla_z_buf->buf_number);
- return NULL;
- }
-
- buf_node_x = cla_x_buf->buf_list;
- buf_node_y = &cla_y_buf->buf_list[x_index];
- buf_node_z = &cla_z_buf->buf_list[tmp];
-
-	/* If the y buf node does not exist, apply for pages for the y node. */
- if (!buf_node_y->va) {
- if (cqm_cla_alloc(cqm_handle, cla_table, buf_node_x, buf_node_y,
- x_index) == CQM_FAIL) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(cqm_cla_alloc));
- return NULL;
- }
- }
-
-	/* If the z buf node does not exist, apply for pages for the z node. */
- if (!buf_node_z->va) {
- if (cqm_cla_alloc(cqm_handle, cla_table, buf_node_y, buf_node_z,
- y_index) == CQM_FAIL) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(cqm_cla_alloc));
- if (buf_node_y->refcount == 0)
- /* To release node Y, cache_invalid is
- * required.
- */
- cqm_cla_free(cqm_handle, cla_table, buf_node_x, buf_node_y, x_index,
- CQM_CLA_DEL_GPA_WITH_CACHE_INVALID);
- return NULL;
- }
-
- /* reference counting of the y buffer node needs to increase
- * by 1.
- */
- buf_node_y->refcount++;
- }
-
- buf_node_z->refcount += count;
- offset = z_index * cla_table->obj_size;
- ret_addr = (u8 *)(buf_node_z->va) + offset;
- *pa = buf_node_z->pa + offset;
-
- return ret_addr;
-}
-
-u8 *cqm_cla_get_unlock(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
- u32 index, u32 count, dma_addr_t *pa)
-{
- u8 *ret_addr = NULL;
-
- if (cla_table->cla_lvl == CQM_CLA_LVL_0)
- ret_addr = cqm_cla_get_unlock_lvl0(cqm_handle, cla_table, index,
- count, pa);
- else if (cla_table->cla_lvl == CQM_CLA_LVL_1)
- ret_addr = cqm_cla_get_unlock_lvl1(cqm_handle, cla_table, index,
- count, pa);
- else
- ret_addr = cqm_cla_get_unlock_lvl2(cqm_handle, cla_table, index,
- count, pa);
-
- return ret_addr;
-}
-
-u8 *cqm_cla_get_lock(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
- u32 index, u32 count, dma_addr_t *pa)
-{
- u8 *ret_addr = NULL;
-
- mutex_lock(&cla_table->lock);
-
- ret_addr = cqm_cla_get_unlock(cqm_handle, cla_table, index, count, pa);
-
- mutex_unlock(&cla_table->lock);
-
- return ret_addr;
-}
-
-void cqm_cla_put(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
- u32 index, u32 count)
-{
- struct cqm_buf *cla_z_buf = &cla_table->cla_z_buf;
- struct cqm_buf *cla_y_buf = &cla_table->cla_y_buf;
- struct cqm_buf *cla_x_buf = &cla_table->cla_x_buf;
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_buf_list *buf_node_z = NULL;
- struct cqm_buf_list *buf_node_y = NULL;
- struct cqm_buf_list *buf_node_x = NULL;
- u32 x_index = 0;
- u32 y_index = 0;
- u32 trunk_size = (u32)(PAGE_SIZE << cla_table->trunk_order);
- u64 tmp;
-
-	/* The buffer was allocated statically, so reference counting
-	 * does not need to be maintained.
-	 */
- if (cla_table->alloc_static)
- return;
-
- mutex_lock(&cla_table->lock);
-
- if (cla_table->cla_lvl == CQM_CLA_LVL_1) {
- y_index = index >> (cla_table->z + 1);
-
- if (y_index >= cla_z_buf->buf_number) {
- cqm_err(handle->dev_hdl,
- "Cla put: index exceeds buf_number, y_index %u, z_buf_number %u\n",
- y_index, cla_z_buf->buf_number);
- cqm_err(handle->dev_hdl,
- "Cla put: cla_table->type=%u\n",
- cla_table->type);
- mutex_unlock(&cla_table->lock);
- return;
- }
-
- buf_node_z = &cla_z_buf->buf_list[y_index];
- buf_node_y = cla_y_buf->buf_list;
-
- /* When the value of reference counting on the z node page is 0,
- * the z node page is released.
- */
- buf_node_z->refcount -= count;
- if (buf_node_z->refcount == 0)
- /* The cache invalid is not required for the Z node. */
- cqm_cla_free(cqm_handle, cla_table, buf_node_y,
- buf_node_z, y_index,
- CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID);
- } else if (cla_table->cla_lvl == CQM_CLA_LVL_2) {
- y_index = (index >> (cla_table->z + 1)) &
- ((1U << (cla_table->y - cla_table->z)) - 1);
- x_index = index >> (cla_table->y + 1);
- tmp = x_index * (trunk_size / sizeof(dma_addr_t)) + y_index;
-
- if (x_index >= cla_y_buf->buf_number || tmp >= cla_z_buf->buf_number) {
- cqm_err(handle->dev_hdl,
- "Cla put: index exceeds buf_number, x_index %u, y_index %u, y_buf_number %u, z_buf_number %u\n",
- x_index, y_index, cla_y_buf->buf_number,
- cla_z_buf->buf_number);
- mutex_unlock(&cla_table->lock);
- return;
- }
-
- buf_node_x = cla_x_buf->buf_list;
- buf_node_y = &cla_y_buf->buf_list[x_index];
- buf_node_z = &cla_z_buf->buf_list[tmp];
-
- /* When the value of reference counting on the z node page is 0,
- * the z node page is released.
- */
- buf_node_z->refcount -= count;
- if (buf_node_z->refcount == 0) {
- cqm_cla_free(cqm_handle, cla_table, buf_node_y,
- buf_node_z, y_index,
- CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID);
-
- /* When the value of reference counting on the y node
- * page is 0, the y node page is released.
- */
- buf_node_y->refcount--;
- if (buf_node_y->refcount == 0)
- /* Node y requires cache to be invalid. */
- cqm_cla_free(cqm_handle, cla_table, buf_node_x, buf_node_y, x_index,
- CQM_CLA_DEL_GPA_WITH_CACHE_INVALID);
- }
- }
-
- mutex_unlock(&cla_table->lock);
-}
-
-struct cqm_cla_table *cqm_cla_table_get(struct cqm_bat_table *bat_table, u32 entry_type)
-{
- struct cqm_cla_table *cla_table = NULL;
- u32 i = 0;
-
- for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
- cla_table = &bat_table->entry[i];
- if (entry_type == cla_table->type)
- return cla_table;
- }
-
- return NULL;
-}
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h b/drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h
deleted file mode 100644
index 85b060e7935c..000000000000
--- a/drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h
+++ /dev/null
@@ -1,215 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef SPFC_CQM_BAT_CLA_H
-#define SPFC_CQM_BAT_CLA_H
-
-/* When the connection check is enabled, the maximum number of connections
- * supported by the chip is 1M - 63, so the full 1M cannot be reached.
- */
-#define CQM_BAT_MAX_CONN_NUM (0x100000 - 63)
-#define CQM_BAT_MAX_CACHE_CONN_NUM (0x100000 - 63)
-
-#define CLA_TABLE_PAGE_ORDER 0
-#define CQM_4K_PAGE_ORDER 0
-#define CQM_4K_PAGE_SIZE 4096
-
-#define CQM_BAT_ENTRY_MAX 16
-#define CQM_BAT_ENTRY_SIZE 16
-
-#define CQM_BAT_SIZE_FT_PF 192
-#define CQM_BAT_SIZE_FT_VF 112
-
-#define CQM_BAT_INDEX0 0
-#define CQM_BAT_INDEX1 1
-#define CQM_BAT_INDEX2 2
-#define CQM_BAT_INDEX3 3
-#define CQM_BAT_INDEX4 4
-#define CQM_BAT_INDEX5 5
-#define CQM_BAT_INDEX6 6
-#define CQM_BAT_INDEX7 7
-#define CQM_BAT_INDEX8 8
-#define CQM_BAT_INDEX9 9
-#define CQM_BAT_INDEX10 10
-#define CQM_BAT_INDEX11 11
-#define CQM_BAT_INDEX12 12
-#define CQM_BAT_INDEX13 13
-#define CQM_BAT_INDEX14 14
-#define CQM_BAT_INDEX15 15
-
-enum cqm_bat_entry_type {
- CQM_BAT_ENTRY_T_CFG = 0,
- CQM_BAT_ENTRY_T_HASH = 1,
- CQM_BAT_ENTRY_T_QPC = 2,
- CQM_BAT_ENTRY_T_SCQC = 3,
- CQM_BAT_ENTRY_T_SRQC = 4,
- CQM_BAT_ENTRY_T_MPT = 5,
- CQM_BAT_ENTRY_T_GID = 6,
- CQM_BAT_ENTRY_T_LUN = 7,
- CQM_BAT_ENTRY_T_TASKMAP = 8,
- CQM_BAT_ENTRY_T_L3I = 9,
- CQM_BAT_ENTRY_T_CHILDC = 10,
- CQM_BAT_ENTRY_T_TIMER = 11,
- CQM_BAT_ENTRY_T_XID2CID = 12,
- CQM_BAT_ENTRY_T_REORDER = 13,
- CQM_BAT_ENTRY_T_INVALID = 14,
- CQM_BAT_ENTRY_T_MAX = 15,
-};
-
-/* CLA update mode */
-#define CQM_CLA_RECORD_NEW_GPA 0
-#define CQM_CLA_DEL_GPA_WITHOUT_CACHE_INVALID 1
-#define CQM_CLA_DEL_GPA_WITH_CACHE_INVALID 2
-
-#define CQM_CLA_LVL_0 0
-#define CQM_CLA_LVL_1 1
-#define CQM_CLA_LVL_2 2
-
-#define CQM_MAX_INDEX_BIT 19
-
-#define CQM_CHIP_CACHELINE 256
-#define CQM_CHIP_TIMER_CACHELINE 512
-#define CQM_OBJECT_256 256
-#define CQM_OBJECT_512 512
-#define CQM_OBJECT_1024 1024
-#define CQM_CHIP_GPA_MASK 0x1ffffffffffffff
-#define CQM_CHIP_GPA_HIMASK 0x1ffffff
-#define CQM_CHIP_GPA_LOMASK 0xffffffff
-#define CQM_CHIP_GPA_HSHIFT 32
-
-/* Aligns with 64 buckets and shifts rightward by 6 bits */
-#define CQM_HASH_NUMBER_UNIT 6
-
-struct cqm_cla_table {
- u32 type;
- u32 max_buffer_size;
- u32 obj_num;
- bool alloc_static; /* Whether the buffer is statically allocated */
- u32 cla_lvl;
- u32 cacheline_x; /* x value calculated based on cacheline, used by the chip */
- u32 cacheline_y; /* y value calculated based on cacheline, used by the chip */
- u32 cacheline_z; /* z value calculated based on cacheline, used by the chip */
- u32 x; /* x value calculated based on obj_size, used by software */
- u32 y; /* y value calculated based on obj_size, used by software */
- u32 z; /* z value calculated based on obj_size, used by software */
- struct cqm_buf cla_x_buf;
- struct cqm_buf cla_y_buf;
- struct cqm_buf cla_z_buf;
-	u32 trunk_order; /* A trunk is a contiguous block of 2^order physical pages */
- u32 obj_size;
- struct mutex lock; /* Lock for cla buffer allocation and free */
-
- struct cqm_bitmap bitmap;
-
- struct cqm_object_table obj_table; /* Mapping table between indexes and objects */
-};
-
-struct cqm_bat_entry_cfg {
- u32 cur_conn_num_h_4 : 4;
- u32 rsv1 : 4;
- u32 max_conn_num : 20;
- u32 rsv2 : 4;
-
- u32 max_conn_cache : 10;
- u32 rsv3 : 6;
- u32 cur_conn_num_l_16 : 16;
-
- u32 bloom_filter_addr : 16;
- u32 cur_conn_cache : 10;
- u32 rsv4 : 6;
-
- u32 bucket_num : 16;
- u32 bloom_filter_len : 16;
-};
-
-#define CQM_BAT_NO_BYPASS_CACHE 0
-#define CQM_BAT_BYPASS_CACHE 1
-
-#define CQM_BAT_ENTRY_SIZE_256 0
-#define CQM_BAT_ENTRY_SIZE_512 1
-#define CQM_BAT_ENTRY_SIZE_1024 2
-
-struct cqm_bat_entry_standerd {
- u32 entry_size : 2;
- u32 rsv1 : 6;
- u32 max_number : 20;
- u32 rsv2 : 4;
-
- u32 cla_gpa_h : 32;
-
- u32 cla_gpa_l : 32;
-
- u32 rsv3 : 8;
- u32 z : 5;
- u32 y : 5;
- u32 x : 5;
- u32 rsv24 : 1;
- u32 bypass : 1;
- u32 cla_level : 2;
- u32 rsv5 : 5;
-};
-
-struct cqm_bat_entry_vf2pf {
- u32 cla_gpa_h : 25;
- u32 pf_id : 5;
- u32 fake_vf_en : 1;
- u32 acs_spu_en : 1;
-};
-
-#define CQM_BAT_ENTRY_TASKMAP_NUM 4
-struct cqm_bat_entry_taskmap_addr {
- u32 gpa_h;
- u32 gpa_l;
-};
-
-struct cqm_bat_entry_taskmap {
- struct cqm_bat_entry_taskmap_addr addr[CQM_BAT_ENTRY_TASKMAP_NUM];
-};
-
-struct cqm_bat_table {
- u32 bat_entry_type[CQM_BAT_ENTRY_MAX];
- u8 bat[CQM_BAT_ENTRY_MAX * CQM_BAT_ENTRY_SIZE];
- struct cqm_cla_table entry[CQM_BAT_ENTRY_MAX];
- /* In LB mode 1, the timer needs to be configured in 4 SMFs,
- * and the GPAs must be different and independent.
- */
- struct cqm_cla_table timer_entry[4];
- u32 bat_size;
-};
-
-#define CQM_BAT_MAX_SIZE 256
-struct cqm_cmdq_bat_update {
- u32 offset;
- u32 byte_len;
- u8 data[CQM_BAT_MAX_SIZE];
- u32 smf_id;
- u32 func_id;
-};
-
-struct cqm_cla_update_cmd {
- /* Gpa address to be updated */
- u32 gpa_h;
- u32 gpa_l;
-
- /* Updated Value */
- u32 value_h;
- u32 value_l;
-
- u32 smf_id;
- u32 func_id;
-};
-
-s32 cqm_bat_init(struct cqm_handle *cqm_handle);
-void cqm_bat_uninit(struct cqm_handle *cqm_handle);
-s32 cqm_cla_init(struct cqm_handle *cqm_handle);
-void cqm_cla_uninit(struct cqm_handle *cqm_handle, u32 entry_numb);
-u8 *cqm_cla_get_unlock(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
- u32 index, u32 count, dma_addr_t *pa);
-u8 *cqm_cla_get_lock(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
- u32 index, u32 count, dma_addr_t *pa);
-void cqm_cla_put(struct cqm_handle *cqm_handle, struct cqm_cla_table *cla_table,
- u32 index, u32 count);
-struct cqm_cla_table *cqm_cla_table_get(struct cqm_bat_table *bat_table, u32 entry_type);
-u32 cqm_funcid2smfid(struct cqm_handle *cqm_handle);
-
-#endif /* SPFC_CQM_BAT_CLA_H */
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c b/drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c
deleted file mode 100644
index 21100e8db8f4..000000000000
--- a/drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c
+++ /dev/null
@@ -1,885 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include <linux/types.h>
-#include <linux/sched.h>
-#include <linux/pci.h>
-#include <linux/module.h>
-#include <linux/vmalloc.h>
-#include <linux/device.h>
-#include <linux/mm.h>
-#include <linux/gfp.h>
-
-#include "sphw_crm.h"
-#include "sphw_hw.h"
-#include "sphw_hwdev.h"
-#include "sphw_hwif.h"
-
-#include "spfc_cqm_object.h"
-#include "spfc_cqm_bitmap_table.h"
-#include "spfc_cqm_bat_cla.h"
-#include "spfc_cqm_main.h"
-
-#define common_section
-
-void cqm_swab64(u8 *addr, u32 cnt)
-{
- u64 *temp = (u64 *)addr;
- u64 value = 0;
- u32 i;
-
- for (i = 0; i < cnt; i++) {
- value = __swab64(*temp);
- *temp = value;
- temp++;
- }
-}
-
-void cqm_swab32(u8 *addr, u32 cnt)
-{
- u32 *temp = (u32 *)addr;
- u32 value = 0;
- u32 i;
-
- for (i = 0; i < cnt; i++) {
- value = __swab32(*temp);
- *temp = value;
- temp++;
- }
-}
-
-s32 cqm_shift(u32 data)
-{
- s32 shift = -1;
-
- do {
- data >>= 1;
- shift++;
- } while (data);
-
- return shift;
-}
-
-bool cqm_check_align(u32 data)
-{
- if (data == 0)
- return false;
-
- do {
- /* When the value can be exactly divided by 2,
- * the value of data is shifted right by one bit, that is,
- * divided by 2.
- */
- if ((data & 0x1) == 0)
- data >>= 1;
- /* If the value cannot be divisible by 2, the value is
- * not 2^n-aligned and false is returned.
- */
- else
- return false;
- } while (data != 1);
-
- return true;
-}
-
-void *cqm_kmalloc_align(size_t size, gfp_t flags, u16 align_order)
-{
- void *orig_addr = NULL;
- void *align_addr = NULL;
- void *index_addr = NULL;
-
- orig_addr = kmalloc(size + ((u64)1 << align_order) + sizeof(void *),
- flags);
- if (!orig_addr)
- return NULL;
-
- index_addr = (void *)((char *)orig_addr + sizeof(void *));
- align_addr =
- (void *)((((u64)index_addr + ((u64)1 << align_order) - 1) >>
- align_order) << align_order);
-
- /* Record the original memory address for memory release. */
- index_addr = (void *)((char *)align_addr - sizeof(void *));
- *(void **)index_addr = orig_addr;
-
- return align_addr;
-}
-
-void cqm_kfree_align(void *addr)
-{
- void *index_addr = NULL;
-
- /* Release the original memory address. */
- index_addr = (void *)((char *)addr - sizeof(void *));
-
- kfree(*(void **)index_addr);
-}
-
-void cqm_write_lock(rwlock_t *lock, bool bh)
-{
- if (bh)
- write_lock_bh(lock);
- else
- write_lock(lock);
-}
-
-void cqm_write_unlock(rwlock_t *lock, bool bh)
-{
- if (bh)
- write_unlock_bh(lock);
- else
- write_unlock(lock);
-}
-
-void cqm_read_lock(rwlock_t *lock, bool bh)
-{
- if (bh)
- read_lock_bh(lock);
- else
- read_lock(lock);
-}
-
-void cqm_read_unlock(rwlock_t *lock, bool bh)
-{
- if (bh)
- read_unlock_bh(lock);
- else
- read_unlock(lock);
-}
-
-s32 cqm_buf_alloc_direct(struct cqm_handle *cqm_handle, struct cqm_buf *buf, bool direct)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct page **pages = NULL;
- u32 i, j, order;
-
- order = get_order(buf->buf_size);
-
- if (!direct) {
- buf->direct.va = NULL;
- return CQM_SUCCESS;
- }
-
- pages = vmalloc(sizeof(struct page *) * buf->page_number);
- if (!pages) {
- cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(pages));
- return CQM_FAIL;
- }
-
- for (i = 0; i < buf->buf_number; i++) {
- for (j = 0; j < ((u32)1 << order); j++)
- pages[(ulong)(unsigned int)((i << order) + j)] =
- (void *)virt_to_page((u8 *)(buf->buf_list[i].va) +
- (PAGE_SIZE * j));
- }
-
- buf->direct.va = vmap(pages, buf->page_number, VM_MAP, PAGE_KERNEL);
- vfree(pages);
- if (!buf->direct.va) {
- cqm_err(handle->dev_hdl, CQM_MAP_FAIL(buf->direct.va));
- return CQM_FAIL;
- }
-
- return CQM_SUCCESS;
-}
-
-s32 cqm_buf_alloc_page(struct cqm_handle *cqm_handle, struct cqm_buf *buf)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct page *newpage = NULL;
- u32 order;
- void *va = NULL;
- s32 i, node;
-
- order = get_order(buf->buf_size);
-	/* Allocate a page for each buffer (non-OVS service mode) */
- if (handle->board_info.service_mode != 0) {
- for (i = 0; i < (s32)buf->buf_number; i++) {
- va = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
- order);
- if (!va) {
- cqm_err(handle->dev_hdl,
- CQM_ALLOC_FAIL(buf_page));
- break;
- }
-			/* Zero the page after allocation. If hash entries
-			 * are involved, the initial value must be 0.
-			 */
- memset(va, 0, buf->buf_size);
- buf->buf_list[i].va = va;
- }
- } else {
- node = dev_to_node(handle->dev_hdl);
- for (i = 0; i < (s32)buf->buf_number; i++) {
- newpage = alloc_pages_node(node,
- GFP_KERNEL | __GFP_ZERO,
- order);
- if (!newpage) {
- cqm_err(handle->dev_hdl,
- CQM_ALLOC_FAIL(buf_page));
- break;
- }
- va = (void *)page_address(newpage);
-			/* Zero the page after allocation. If hash entries
-			 * are involved, the initial value must be 0.
-			 */
- memset(va, 0, buf->buf_size);
- buf->buf_list[i].va = va;
- }
- }
-
- if (i != buf->buf_number) {
- i--;
- for (; i >= 0; i--) {
- free_pages((ulong)(buf->buf_list[i].va), order);
- buf->buf_list[i].va = NULL;
- }
- return CQM_FAIL;
- }
-
- return CQM_SUCCESS;
-}
-
-s32 cqm_buf_alloc_map(struct cqm_handle *cqm_handle, struct cqm_buf *buf)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct pci_dev *dev = cqm_handle->dev;
- void *va = NULL;
- s32 i;
-
- for (i = 0; i < (s32)buf->buf_number; i++) {
- va = buf->buf_list[i].va;
- buf->buf_list[i].pa = pci_map_single(dev, va, buf->buf_size,
- PCI_DMA_BIDIRECTIONAL);
- if (pci_dma_mapping_error(dev, buf->buf_list[i].pa)) {
- cqm_err(handle->dev_hdl, CQM_MAP_FAIL(buf_list));
- break;
- }
- }
-
- if (i != buf->buf_number) {
- i--;
- for (; i >= 0; i--)
- pci_unmap_single(dev, buf->buf_list[i].pa,
- buf->buf_size, PCI_DMA_BIDIRECTIONAL);
- return CQM_FAIL;
- }
-
- return CQM_SUCCESS;
-}
-
-s32 cqm_buf_alloc(struct cqm_handle *cqm_handle, struct cqm_buf *buf, bool direct)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct pci_dev *dev = cqm_handle->dev;
- u32 order;
- s32 i;
-
- order = get_order(buf->buf_size);
-
- /* Applying for the buffer list descriptor space */
- buf->buf_list = vmalloc(buf->buf_number * sizeof(struct cqm_buf_list));
- CQM_PTR_CHECK_RET(buf->buf_list, CQM_FAIL,
- CQM_ALLOC_FAIL(linux_buf_list));
- memset(buf->buf_list, 0, buf->buf_number * sizeof(struct cqm_buf_list));
-
-	/* Allocate pages for each buffer */
- if (cqm_buf_alloc_page(cqm_handle, buf) == CQM_FAIL) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(linux_cqm_buf_alloc_page));
- goto err1;
- }
-
- /* PCI mapping of the buffer */
- if (cqm_buf_alloc_map(cqm_handle, buf) == CQM_FAIL) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(linux_cqm_buf_alloc_map));
- goto err2;
- }
-
- /* direct remapping */
- if (cqm_buf_alloc_direct(cqm_handle, buf, direct) == CQM_FAIL) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(cqm_buf_alloc_direct));
- goto err3;
- }
-
- return CQM_SUCCESS;
-
-err3:
- for (i = 0; i < (s32)buf->buf_number; i++)
- pci_unmap_single(dev, buf->buf_list[i].pa, buf->buf_size,
- PCI_DMA_BIDIRECTIONAL);
-err2:
- for (i = 0; i < (s32)buf->buf_number; i++) {
- free_pages((ulong)(buf->buf_list[i].va), order);
- buf->buf_list[i].va = NULL;
- }
-err1:
- vfree(buf->buf_list);
- buf->buf_list = NULL;
- return CQM_FAIL;
-}
-
-void cqm_buf_free(struct cqm_buf *buf, struct pci_dev *dev)
-{
- u32 order;
- s32 i;
-
- order = get_order(buf->buf_size);
-
- if (buf->direct.va) {
- vunmap(buf->direct.va);
- buf->direct.va = NULL;
- }
-
- if (buf->buf_list) {
- for (i = 0; i < (s32)(buf->buf_number); i++) {
- if (buf->buf_list[i].va) {
- pci_unmap_single(dev, buf->buf_list[i].pa,
- buf->buf_size,
- PCI_DMA_BIDIRECTIONAL);
-
- free_pages((ulong)(buf->buf_list[i].va), order);
- buf->buf_list[i].va = NULL;
- }
- }
-
- vfree(buf->buf_list);
- buf->buf_list = NULL;
- }
-}
-
-s32 cqm_cla_cache_invalid_cmd(struct cqm_handle *cqm_handle, struct cqm_cmd_buf *buf_in,
- struct cqm_cla_cache_invalid_cmd *cmd)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_cla_cache_invalid_cmd *cla_cache_invalid_cmd = NULL;
- s32 ret;
-
- cla_cache_invalid_cmd = (struct cqm_cla_cache_invalid_cmd *)(buf_in->buf);
- cla_cache_invalid_cmd->gpa_h = cmd->gpa_h;
- cla_cache_invalid_cmd->gpa_l = cmd->gpa_l;
- cla_cache_invalid_cmd->cache_size = cmd->cache_size;
- cla_cache_invalid_cmd->smf_id = cmd->smf_id;
- cla_cache_invalid_cmd->func_id = cmd->func_id;
-
- cqm_swab32((u8 *)cla_cache_invalid_cmd,
- /* shift 2 bits by right to get length of dw(4B) */
- (sizeof(struct cqm_cla_cache_invalid_cmd) >> 2));
-
- /* Send the cmdq command. */
- ret = cqm3_send_cmd_box((void *)(cqm_handle->ex_handle), CQM_MOD_CQM,
- CQM_CMD_T_CLA_CACHE_INVALID, buf_in, NULL, NULL,
- CQM_CMD_TIMEOUT, SPHW_CHANNEL_DEFAULT);
- if (ret != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm3_send_cmd_box));
- cqm_err(handle->dev_hdl,
- "Cla cache invalid: cqm3_send_cmd_box_ret=%d\n",
- ret);
- cqm_err(handle->dev_hdl,
- "Cla cache invalid: cla_cache_invalid_cmd: 0x%x 0x%x 0x%x\n",
- cmd->gpa_h, cmd->gpa_l, cmd->cache_size);
- return CQM_FAIL;
- }
-
- return CQM_SUCCESS;
-}
-
-s32 cqm_cla_cache_invalid(struct cqm_handle *cqm_handle, dma_addr_t gpa, u32 cache_size)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_cmd_buf *buf_in = NULL;
- struct cqm_cla_cache_invalid_cmd cmd;
- s32 ret = CQM_FAIL;
- u32 i;
-
- buf_in = cqm3_cmd_alloc((void *)(cqm_handle->ex_handle));
- CQM_PTR_CHECK_RET(buf_in, CQM_FAIL, CQM_ALLOC_FAIL(buf_in));
- buf_in->size = sizeof(struct cqm_cla_cache_invalid_cmd);
-
- /* Fill command and convert it to big endian */
- cmd.cache_size = cache_size;
- cmd.gpa_h = CQM_ADDR_HI(gpa);
- cmd.gpa_l = CQM_ADDR_LW(gpa);
-
- /* In non-fake mode, set func_id to 0xffff. */
- cmd.func_id = 0xffff;
-
- /* Mode 0 is hashed to 4 SMF engines (excluding PPF) by func ID. */
- if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_NORMAL ||
- (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
- cqm_handle->func_attribute.func_type != CQM_PPF)) {
- cmd.smf_id = cqm_funcid2smfid(cqm_handle);
- ret = cqm_cla_cache_invalid_cmd(cqm_handle, buf_in, &cmd);
- }
- /* Mode 1/2 are allocated to 4 SMF engines by flow. Therefore,
- * one function needs to be allocated to 4 SMF engines.
- */
- /* The PPF in mode 0 needs to be configured on 4 engines,
- * and the timer resources need to be shared by the 4 engines.
- */
- else if (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_1 ||
- cqm_handle->func_capability.lb_mode == CQM_LB_MODE_2 ||
- (cqm_handle->func_capability.lb_mode == CQM_LB_MODE_0 &&
- cqm_handle->func_attribute.func_type == CQM_PPF)) {
- for (i = 0; i < CQM_LB_SMF_MAX; i++) {
-			/* smf_pg records the currently enabled SMF engines. */
- if (cqm_handle->func_capability.smf_pg & (1U << i)) {
- cmd.smf_id = i;
- ret = cqm_cla_cache_invalid_cmd(cqm_handle,
- buf_in, &cmd);
- if (ret != CQM_SUCCESS)
- goto out;
- }
- }
- } else {
-		cqm_err(handle->dev_hdl, "Cla cache invalid: unsupported lb mode=%u\n",
- cqm_handle->func_capability.lb_mode);
- ret = CQM_FAIL;
- }
-
-out:
- cqm3_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
- return ret;
-}
-
-static void free_cache_inv(struct cqm_handle *cqm_handle, struct cqm_buf *buf,
- s32 *inv_flag)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- u32 order;
- s32 i;
-
- order = get_order(buf->buf_size);
-
- if (!handle->chip_present_flag)
- return;
-
- if (!buf->buf_list)
- return;
-
- for (i = 0; i < (s32)(buf->buf_number); i++) {
- if (!buf->buf_list[i].va)
- continue;
-
- if (*inv_flag != CQM_SUCCESS)
- continue;
-
- /* In the Pangea environment, if the cmdq times out,
- * no subsequent message is sent.
- */
- *inv_flag = cqm_cla_cache_invalid(cqm_handle, buf->buf_list[i].pa,
- (u32)(PAGE_SIZE << order));
- if (*inv_flag != CQM_SUCCESS)
- cqm_err(handle->dev_hdl,
- "Buffer free: fail to invalid buf_list pa cache, inv_flag=%d\n",
- *inv_flag);
- }
-}
-
-void cqm_buf_free_cache_inv(struct cqm_handle *cqm_handle, struct cqm_buf *buf,
- s32 *inv_flag)
-{
- /* Send a command to the chip to kick out the cache. */
- free_cache_inv(cqm_handle, buf, inv_flag);
-
- /* Clear host resources */
- cqm_buf_free(buf, cqm_handle->dev);
-}
-
-#define bitmap_section
-
-s32 cqm_single_bitmap_init(struct cqm_bitmap *bitmap)
-{
- u32 bit_number;
-
- spin_lock_init(&bitmap->lock);
-
- /* Max_num of the bitmap is 8-aligned and then
- * shifted rightward by 3 bits to obtain the number of bytes required.
- */
- bit_number = (ALIGN(bitmap->max_num, CQM_NUM_BIT_BYTE) >> CQM_BYTE_BIT_SHIFT);
- bitmap->table = vmalloc(bit_number);
- CQM_PTR_CHECK_RET(bitmap->table, CQM_FAIL, CQM_ALLOC_FAIL(bitmap->table));
- memset(bitmap->table, 0, bit_number);
-
- return CQM_SUCCESS;
-}
-
-s32 cqm_bitmap_init(struct cqm_handle *cqm_handle)
-{
- struct cqm_func_capability *capability = &cqm_handle->func_capability;
- struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_cla_table *cla_table = NULL;
- struct cqm_bitmap *bitmap = NULL;
- s32 ret = CQM_SUCCESS;
- u32 i;
-
- for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
- cla_table = &bat_table->entry[i];
- if (cla_table->obj_num == 0) {
- cqm_info(handle->dev_hdl,
- "Cla alloc: cla_type %u, obj_num=0, don't init bitmap\n",
- cla_table->type);
- continue;
- }
-
- bitmap = &cla_table->bitmap;
-
- switch (cla_table->type) {
- case CQM_BAT_ENTRY_T_QPC:
- bitmap->max_num = capability->qpc_number;
- bitmap->reserved_top = capability->qpc_reserved;
- bitmap->last = capability->qpc_reserved;
- cqm_info(handle->dev_hdl,
- "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
- cla_table->type, bitmap->max_num);
- ret = cqm_single_bitmap_init(bitmap);
- break;
- case CQM_BAT_ENTRY_T_MPT:
- bitmap->max_num = capability->mpt_number;
- bitmap->reserved_top = capability->mpt_reserved;
- bitmap->last = capability->mpt_reserved;
- cqm_info(handle->dev_hdl,
- "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
- cla_table->type, bitmap->max_num);
- ret = cqm_single_bitmap_init(bitmap);
- break;
- case CQM_BAT_ENTRY_T_SCQC:
- bitmap->max_num = capability->scqc_number;
- bitmap->reserved_top = capability->scq_reserved;
- bitmap->last = capability->scq_reserved;
- cqm_info(handle->dev_hdl,
- "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
- cla_table->type, bitmap->max_num);
- ret = cqm_single_bitmap_init(bitmap);
- break;
- case CQM_BAT_ENTRY_T_SRQC:
- bitmap->max_num = capability->srqc_number;
- bitmap->reserved_top = capability->srq_reserved;
- bitmap->last = capability->srq_reserved;
- cqm_info(handle->dev_hdl,
- "Bitmap init: cla_table_type=%u, max_num=0x%x\n",
- cla_table->type, bitmap->max_num);
- ret = cqm_single_bitmap_init(bitmap);
- break;
- default:
- break;
- }
-
- if (ret != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl,
- "Bitmap init: failed to init cla_table_type=%u, obj_num=0x%x\n",
- cla_table->type, cla_table->obj_num);
- goto err;
- }
- }
-
- return CQM_SUCCESS;
-
-err:
- cqm_bitmap_uninit(cqm_handle);
- return CQM_FAIL;
-}
-
-void cqm_bitmap_uninit(struct cqm_handle *cqm_handle)
-{
- struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
- struct cqm_cla_table *cla_table = NULL;
- struct cqm_bitmap *bitmap = NULL;
- u32 i;
-
- for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
- cla_table = &bat_table->entry[i];
- bitmap = &cla_table->bitmap;
- if (cla_table->type != CQM_BAT_ENTRY_T_INVALID) {
- if (bitmap->table) {
- vfree(bitmap->table);
- bitmap->table = NULL;
- }
- }
- }
-}
-
-u32 cqm_bitmap_check_range(const ulong *table, u32 step, u32 max_num, u32 begin,
- u32 count)
-{
- u32 end = (begin + (count - 1));
- u32 i;
-
- /* Single-bit check is not performed. */
- if (count == 1)
- return begin;
-
- /* The end value exceeds the threshold. */
- if (end >= max_num)
- return max_num;
-
- /* Bit check, the next bit is returned when a non-zero bit is found. */
- for (i = (begin + 1); i <= end; i++) {
- if (test_bit((s32)i, table))
- return i + 1;
- }
-
- /* Check whether it's in different steps. */
- if ((begin & (~(step - 1))) != (end & (~(step - 1))))
- return (end & (~(step - 1)));
-
- /* If the check succeeds, begin is returned. */
- return begin;
-}
-
-void cqm_bitmap_find(struct cqm_bitmap *bitmap, u32 *index, u32 last, u32 step, u32 count)
-{
- u32 max_num = bitmap->max_num;
- ulong *table = bitmap->table;
-
- do {
- *index = (u32)find_next_zero_bit(table, max_num, last);
- if (*index < max_num)
- last = cqm_bitmap_check_range(table, step, max_num,
- *index, count);
- else
- break;
- } while (last != *index);
-}
-
-u32 cqm_bitmap_alloc(struct cqm_bitmap *bitmap, u32 step, u32 count, bool update_last)
-{
- u32 index = 0;
- u32 max_num = bitmap->max_num;
- u32 last = bitmap->last;
- ulong *table = bitmap->table;
- u32 i;
-
- spin_lock(&bitmap->lock);
-
- /* Search for an idle bit from the last position. */
- cqm_bitmap_find(bitmap, &index, last, step, count);
-
- /* The preceding search fails. Search for an idle bit
- * from the beginning.
- */
- if (index >= max_num) {
- last = bitmap->reserved_top;
- cqm_bitmap_find(bitmap, &index, last, step, count);
- }
-
- /* Set the found bit to 1 and reset last. */
- if (index < max_num) {
- for (i = index; i < (index + count); i++)
- set_bit(i, table);
-
- if (update_last) {
- bitmap->last = (index + count);
- if (bitmap->last >= bitmap->max_num)
- bitmap->last = bitmap->reserved_top;
- }
- }
-
- spin_unlock(&bitmap->lock);
- return index;
-}
-
-u32 cqm_bitmap_alloc_reserved(struct cqm_bitmap *bitmap, u32 count, u32 index)
-{
- ulong *table = bitmap->table;
- u32 ret_index;
-
- if (index >= bitmap->reserved_top || index >= bitmap->max_num || count != 1)
- return CQM_INDEX_INVALID;
-
- spin_lock(&bitmap->lock);
-
- if (test_bit((s32)index, table)) {
- ret_index = CQM_INDEX_INVALID;
- } else {
- set_bit(index, table);
- ret_index = index;
- }
-
- spin_unlock(&bitmap->lock);
- return ret_index;
-}
-
-void cqm_bitmap_free(struct cqm_bitmap *bitmap, u32 index, u32 count)
-{
- u32 i;
-
- spin_lock(&bitmap->lock);
-
- for (i = index; i < (index + count); i++)
- clear_bit((s32)i, bitmap->table);
-
- spin_unlock(&bitmap->lock);
-}
-
-#define obj_table_section
-s32 cqm_single_object_table_init(struct cqm_object_table *obj_table)
-{
- rwlock_init(&obj_table->lock);
-
- obj_table->table = vmalloc(obj_table->max_num * sizeof(void *));
- CQM_PTR_CHECK_RET(obj_table->table, CQM_FAIL, CQM_ALLOC_FAIL(table));
- memset(obj_table->table, 0, obj_table->max_num * sizeof(void *));
- return CQM_SUCCESS;
-}
-
-s32 cqm_object_table_init(struct cqm_handle *cqm_handle)
-{
- struct cqm_func_capability *capability = &cqm_handle->func_capability;
- struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_object_table *obj_table = NULL;
- struct cqm_cla_table *cla_table = NULL;
- s32 ret = CQM_SUCCESS;
- u32 i;
-
- for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
- cla_table = &bat_table->entry[i];
- if (cla_table->obj_num == 0) {
- cqm_info(handle->dev_hdl,
- "Obj table init: cla_table_type %u, obj_num=0, don't init obj table\n",
- cla_table->type);
- continue;
- }
-
- obj_table = &cla_table->obj_table;
-
- switch (cla_table->type) {
- case CQM_BAT_ENTRY_T_QPC:
- obj_table->max_num = capability->qpc_number;
- ret = cqm_single_object_table_init(obj_table);
- break;
- case CQM_BAT_ENTRY_T_MPT:
- obj_table->max_num = capability->mpt_number;
- ret = cqm_single_object_table_init(obj_table);
- break;
- case CQM_BAT_ENTRY_T_SCQC:
- obj_table->max_num = capability->scqc_number;
- ret = cqm_single_object_table_init(obj_table);
- break;
- case CQM_BAT_ENTRY_T_SRQC:
- obj_table->max_num = capability->srqc_number;
- ret = cqm_single_object_table_init(obj_table);
- break;
- default:
- break;
- }
-
- if (ret != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl,
- "Obj table init: failed to init cla_table_type=%u, obj_num=0x%x\n",
- cla_table->type, cla_table->obj_num);
- goto err;
- }
- }
-
- return CQM_SUCCESS;
-
-err:
- cqm_object_table_uninit(cqm_handle);
- return CQM_FAIL;
-}
-
-void cqm_object_table_uninit(struct cqm_handle *cqm_handle)
-{
- struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
- struct cqm_object_table *obj_table = NULL;
- struct cqm_cla_table *cla_table = NULL;
- u32 i;
-
- for (i = 0; i < CQM_BAT_ENTRY_MAX; i++) {
- cla_table = &bat_table->entry[i];
- obj_table = &cla_table->obj_table;
- if (cla_table->type != CQM_BAT_ENTRY_T_INVALID) {
- if (obj_table->table) {
- vfree(obj_table->table);
- obj_table->table = NULL;
- }
- }
- }
-}
-
-s32 cqm_object_table_insert(struct cqm_handle *cqm_handle,
- struct cqm_object_table *object_table,
- u32 index, struct cqm_object *obj, bool bh)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
-
- if (index >= object_table->max_num) {
- cqm_err(handle->dev_hdl,
- "Obj table insert: index 0x%x exceeds max_num 0x%x\n",
- index, object_table->max_num);
- return CQM_FAIL;
- }
-
- cqm_write_lock(&object_table->lock, bh);
-
- if (!object_table->table[index]) {
- object_table->table[index] = obj;
- cqm_write_unlock(&object_table->lock, bh);
- return CQM_SUCCESS;
- }
-
- cqm_write_unlock(&object_table->lock, bh);
- cqm_err(handle->dev_hdl,
- "Obj table insert: object_table->table[0x%x] has been inserted\n",
- index);
-
- return CQM_FAIL;
-}
-
-void cqm_object_table_remove(struct cqm_handle *cqm_handle,
- struct cqm_object_table *object_table,
- u32 index, const struct cqm_object *obj, bool bh)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
-
- if (index >= object_table->max_num) {
- cqm_err(handle->dev_hdl,
- "Obj table remove: index 0x%x exceeds max_num 0x%x\n",
- index, object_table->max_num);
- return;
- }
-
- cqm_write_lock(&object_table->lock, bh);
-
- if (object_table->table[index] && object_table->table[index] == obj)
- object_table->table[index] = NULL;
- else
- cqm_err(handle->dev_hdl,
- "Obj table remove: object_table->table[0x%x] has been removed\n",
- index);
-
- cqm_write_unlock(&object_table->lock, bh);
-}
-
-struct cqm_object *cqm_object_table_get(struct cqm_handle *cqm_handle,
- struct cqm_object_table *object_table,
- u32 index, bool bh)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_object *obj = NULL;
-
- if (index >= object_table->max_num) {
- cqm_err(handle->dev_hdl,
- "Obj table get: index 0x%x exceeds max_num 0x%x\n",
- index, object_table->max_num);
- return NULL;
- }
-
- cqm_read_lock(&object_table->lock, bh);
-
- obj = object_table->table[index];
- if (obj)
- atomic_inc(&obj->refcount);
-
- cqm_read_unlock(&object_table->lock, bh);
-
- return obj;
-}
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h b/drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h
deleted file mode 100644
index 5ae554eac54a..000000000000
--- a/drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h
+++ /dev/null
@@ -1,65 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef SPFC_CQM_BITMAP_TABLE_H
-#define SPFC_CQM_BITMAP_TABLE_H
-
-struct cqm_bitmap {
- ulong *table;
- u32 max_num;
- u32 last;
- u32 reserved_top; /* reserved index */
- spinlock_t lock;
-};
-
-struct cqm_object_table {
-	/* Currently a big array; may later be optimized into a red-black tree. */
- struct cqm_object **table;
- u32 max_num;
- rwlock_t lock;
-};
-
-struct cqm_cla_cache_invalid_cmd {
- u32 gpa_h;
- u32 gpa_l;
-
- u32 cache_size; /* CLA cache size=4096B */
-
- u32 smf_id;
- u32 func_id;
-};
-
-struct cqm_handle;
-
-s32 cqm_bitmap_init(struct cqm_handle *cqm_handle);
-void cqm_bitmap_uninit(struct cqm_handle *cqm_handle);
-u32 cqm_bitmap_alloc(struct cqm_bitmap *bitmap, u32 step, u32 count, bool update_last);
-u32 cqm_bitmap_alloc_reserved(struct cqm_bitmap *bitmap, u32 count, u32 index);
-void cqm_bitmap_free(struct cqm_bitmap *bitmap, u32 index, u32 count);
-s32 cqm_object_table_init(struct cqm_handle *cqm_handle);
-void cqm_object_table_uninit(struct cqm_handle *cqm_handle);
-s32 cqm_object_table_insert(struct cqm_handle *cqm_handle,
- struct cqm_object_table *object_table,
- u32 index, struct cqm_object *obj, bool bh);
-void cqm_object_table_remove(struct cqm_handle *cqm_handle,
- struct cqm_object_table *object_table,
- u32 index, const struct cqm_object *obj, bool bh);
-struct cqm_object *cqm_object_table_get(struct cqm_handle *cqm_handle,
- struct cqm_object_table *object_table,
- u32 index, bool bh);
-
-void cqm_swab64(u8 *addr, u32 cnt);
-void cqm_swab32(u8 *addr, u32 cnt);
-bool cqm_check_align(u32 data);
-s32 cqm_shift(u32 data);
-s32 cqm_buf_alloc(struct cqm_handle *cqm_handle, struct cqm_buf *buf, bool direct);
-s32 cqm_buf_alloc_direct(struct cqm_handle *cqm_handle, struct cqm_buf *buf, bool direct);
-void cqm_buf_free(struct cqm_buf *buf, struct pci_dev *dev);
-void cqm_buf_free_cache_inv(struct cqm_handle *cqm_handle, struct cqm_buf *buf,
- s32 *inv_flag);
-s32 cqm_cla_cache_invalid(struct cqm_handle *cqm_handle, dma_addr_t gpa,
- u32 cache_size);
-void *cqm_kmalloc_align(size_t size, gfp_t flags, u16 align_order);
-void cqm_kfree_align(void *addr);
-
-#endif
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_main.c b/drivers/scsi/spfc/hw/spfc_cqm_main.c
deleted file mode 100644
index 52cc2c7838e9..000000000000
--- a/drivers/scsi/spfc/hw/spfc_cqm_main.c
+++ /dev/null
@@ -1,987 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include <linux/types.h>
-#include <linux/sched.h>
-#include <linux/pci.h>
-#include <linux/module.h>
-#include <linux/delay.h>
-#include <linux/vmalloc.h>
-
-#include "sphw_crm.h"
-#include "sphw_hw.h"
-#include "sphw_hw_cfg.h"
-#include "spfc_cqm_main.h"
-
-s32 cqm3_init(void *ex_handle)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
- struct cqm_handle *cqm_handle = NULL;
- s32 ret;
-
- CQM_PTR_CHECK_RET(ex_handle, CQM_FAIL, CQM_PTR_NULL(ex_handle));
-
- cqm_handle = kmalloc(sizeof(*cqm_handle), GFP_KERNEL | __GFP_ZERO);
- CQM_PTR_CHECK_RET(cqm_handle, CQM_FAIL, CQM_ALLOC_FAIL(cqm_handle));
-
-	/* Clear the memory defensively, in case the
-	 * allocation was not already zeroed.
-	 */
- memset(cqm_handle, 0, sizeof(struct cqm_handle));
-
- cqm_handle->ex_handle = handle;
- cqm_handle->dev = (struct pci_dev *)(handle->pcidev_hdl);
- handle->cqm_hdl = (void *)cqm_handle;
-
- /* Clearing Statistics */
- memset(&handle->hw_stats.cqm_stats, 0, sizeof(struct cqm_stats));
-
- /* Reads VF/PF information. */
- cqm_handle->func_attribute = handle->hwif->attr;
- cqm_info(handle->dev_hdl, "Func init: function[%u] type %d(0:PF,1:VF,2:PPF)\n",
- cqm_handle->func_attribute.func_global_idx,
- cqm_handle->func_attribute.func_type);
-
- /* Read capability from configuration management module */
- ret = cqm_capability_init(ex_handle);
- if (ret == CQM_FAIL) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(cqm_capability_init));
- goto err1;
- }
-
- /* Initialize memory entries such as BAT, CLA, and bitmap. */
- if (cqm_mem_init(ex_handle) != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_mem_init));
- goto err1;
- }
-
- /* Event callback initialization */
- if (cqm_event_init(ex_handle) != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_event_init));
- goto err2;
- }
-
-	/* Doorbell initialization */
- if (cqm_db_init(ex_handle) != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_db_init));
- goto err3;
- }
-
- /* Initialize the bloom filter. */
- if (cqm_bloomfilter_init(ex_handle) != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(cqm_bloomfilter_init));
- goto err4;
- }
-
- /* The timer bitmap is set directly at the beginning of the CQM.
- * The ifconfig up/down command is not used to set or clear the bitmap.
- */
- if (sphw_func_tmr_bitmap_set(ex_handle, true) != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl, "Timer start: enable timer bitmap failed\n");
- goto err5;
- }
-
- return CQM_SUCCESS;
-
-err5:
- cqm_bloomfilter_uninit(ex_handle);
-err4:
- cqm_db_uninit(ex_handle);
-err3:
- cqm_event_uninit(ex_handle);
-err2:
- cqm_mem_uninit(ex_handle);
-err1:
- handle->cqm_hdl = NULL;
- kfree(cqm_handle);
- return CQM_FAIL;
-}
-
-void cqm3_uninit(void *ex_handle)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
- struct cqm_handle *cqm_handle = NULL;
- s32 ret;
-
- CQM_PTR_CHECK_NO_RET(ex_handle, CQM_PTR_NULL(ex_handle));
-
- cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
- CQM_PTR_CHECK_NO_RET(cqm_handle, CQM_PTR_NULL(cqm_handle));
-
- /* The timer bitmap is set directly at the beginning of the CQM.
- * The ifconfig up/down command is not used to set or clear the bitmap.
- */
- cqm_info(handle->dev_hdl, "Timer stop: disable timer\n");
- if (sphw_func_tmr_bitmap_set(ex_handle, false) != CQM_SUCCESS)
- cqm_err(handle->dev_hdl, "Timer stop: disable timer bitmap failed\n");
-
- /* After the TMR timer stops, the system releases resources
- * after a delay of one or two milliseconds.
- */
- if (cqm_handle->func_attribute.func_type == CQM_PPF &&
- cqm_handle->func_capability.timer_enable == CQM_TIMER_ENABLE) {
- cqm_info(handle->dev_hdl, "Timer stop: spfc ppf timer stop\n");
- ret = sphw_ppf_tmr_stop(handle);
- if (ret != CQM_SUCCESS)
- /* The timer fails to be stopped,
- * and the resource release is not affected.
- */
- cqm_info(handle->dev_hdl, "Timer stop: spfc ppf timer stop, ret=%d\n",
- ret);
-		/* A delay of about 1 ms is required; a plain 1 ms sleep is inaccurate. */
- usleep_range(900, 1000);
- }
-
- /* Release Bloom Filter Table */
- cqm_bloomfilter_uninit(ex_handle);
-
- /* Release hardware doorbell */
- cqm_db_uninit(ex_handle);
-
- /* Cancel the callback of the event */
- cqm_event_uninit(ex_handle);
-
- /* Release various memory tables and require the service
- * to release all objects.
- */
- cqm_mem_uninit(ex_handle);
-
- /* Release cqm_handle */
- handle->cqm_hdl = NULL;
- kfree(cqm_handle);
-}
-
-void cqm_test_mode_init(struct cqm_handle *cqm_handle,
- struct service_cap *service_capability)
-{
- struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
-
- if (service_capability->test_mode == 0)
- return;
-
- cqm_info(handle->dev_hdl, "Enter CQM test mode\n");
-
- func_cap->qpc_number = service_capability->test_qpc_num;
- func_cap->qpc_reserved =
- GET_MAX(func_cap->qpc_reserved,
- service_capability->test_qpc_resvd_num);
- func_cap->xid_alloc_mode = service_capability->test_xid_alloc_mode;
- func_cap->gpa_check_enable = service_capability->test_gpa_check_enable;
- func_cap->pagesize_reorder = service_capability->test_page_size_reorder;
- func_cap->qpc_alloc_static =
- (bool)(service_capability->test_qpc_alloc_mode);
- func_cap->scqc_alloc_static =
- (bool)(service_capability->test_scqc_alloc_mode);
- func_cap->flow_table_based_conn_number =
- service_capability->test_max_conn_num;
- func_cap->flow_table_based_conn_cache_number =
- service_capability->test_max_cache_conn_num;
- func_cap->scqc_number = service_capability->test_scqc_num;
- func_cap->mpt_number = service_capability->test_mpt_num;
- func_cap->mpt_reserved = service_capability->test_mpt_recvd_num;
- func_cap->reorder_number = service_capability->test_reorder_num;
- /* 256K buckets, 256K*64B = 16MB */
- func_cap->hash_number = service_capability->test_hash_num;
-}
-
-void cqm_service_capability_update(struct cqm_handle *cqm_handle)
-{
- struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
-
- func_cap->qpc_number = GET_MIN(CQM_MAX_QPC_NUM, func_cap->qpc_number);
- func_cap->scqc_number = GET_MIN(CQM_MAX_SCQC_NUM, func_cap->scqc_number);
- func_cap->srqc_number = GET_MIN(CQM_MAX_SRQC_NUM, func_cap->srqc_number);
- func_cap->childc_number = GET_MIN(CQM_MAX_CHILDC_NUM, func_cap->childc_number);
-}
-
-void cqm_service_valid_init(struct cqm_handle *cqm_handle,
- struct service_cap *service_capability)
-{
- enum cfg_svc_type_en type = service_capability->chip_svc_type;
- struct cqm_service *svc = cqm_handle->service;
-
- svc[CQM_SERVICE_T_FC].valid = ((u32)type & CFG_SVC_FC_BIT5) ? true : false;
-}
-
-void cqm_service_capability_init_fc(struct cqm_handle *cqm_handle, void *pra)
-{
- struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
- struct service_cap *service_capability = (struct service_cap *)pra;
- struct fc_service_cap *fc_cap = &service_capability->fc_cap;
- struct dev_fc_svc_cap *dev_fc_cap = &fc_cap->dev_fc_cap;
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
-
- cqm_info(handle->dev_hdl, "Cap init: fc is valid\n");
- cqm_info(handle->dev_hdl, "Cap init: fc qpc 0x%x, scqc 0x%x, srqc 0x%x\n",
- dev_fc_cap->max_parent_qpc_num, dev_fc_cap->scq_num,
- dev_fc_cap->srq_num);
- func_cap->hash_number += dev_fc_cap->max_parent_qpc_num;
- func_cap->hash_basic_size = CQM_HASH_BUCKET_SIZE_64;
- func_cap->qpc_number += dev_fc_cap->max_parent_qpc_num;
- func_cap->qpc_basic_size = GET_MAX(fc_cap->parent_qpc_size,
- func_cap->qpc_basic_size);
- func_cap->qpc_alloc_static = true;
- func_cap->scqc_number += dev_fc_cap->scq_num;
- func_cap->scqc_basic_size = GET_MAX(fc_cap->scqc_size,
- func_cap->scqc_basic_size);
- func_cap->srqc_number += dev_fc_cap->srq_num;
- func_cap->srqc_basic_size = GET_MAX(fc_cap->srqc_size,
- func_cap->srqc_basic_size);
- func_cap->lun_number = CQM_LUN_FC_NUM;
- func_cap->lun_basic_size = CQM_LUN_SIZE_8;
- func_cap->taskmap_number = CQM_TASKMAP_FC_NUM;
- func_cap->taskmap_basic_size = PAGE_SIZE;
- func_cap->childc_number += dev_fc_cap->max_child_qpc_num;
- func_cap->childc_basic_size = GET_MAX(fc_cap->child_qpc_size,
- func_cap->childc_basic_size);
- func_cap->pagesize_reorder = CQM_FC_PAGESIZE_ORDER;
-}
-
-void cqm_service_capability_init(struct cqm_handle *cqm_handle,
- struct service_cap *service_capability)
-{
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- u32 i;
-
- for (i = 0; i < CQM_SERVICE_T_MAX; i++) {
- cqm_handle->service[i].valid = false;
- cqm_handle->service[i].has_register = false;
- cqm_handle->service[i].buf_order = 0;
- }
-
- cqm_service_valid_init(cqm_handle, service_capability);
-
- cqm_info(handle->dev_hdl, "Cap init: service type %d\n",
- service_capability->chip_svc_type);
-
- if (cqm_handle->service[CQM_SERVICE_T_FC].valid)
- cqm_service_capability_init_fc(cqm_handle, (void *)service_capability);
-}
-
-/* Set func_type in fake_cqm_handle to ppf, pf, or vf. */
-void cqm_set_func_type(struct cqm_handle *cqm_handle)
-{
- u32 idx = cqm_handle->func_attribute.func_global_idx;
-
- if (idx == 0)
- cqm_handle->func_attribute.func_type = CQM_PPF;
- else if (idx < CQM_MAX_PF_NUM)
- cqm_handle->func_attribute.func_type = CQM_PF;
- else
- cqm_handle->func_attribute.func_type = CQM_VF;
-}
-
-void cqm_lb_fake_mode_init(struct cqm_handle *cqm_handle, struct service_cap *svc_cap)
-{
- struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
-
- func_cap->lb_mode = svc_cap->lb_mode;
-
- /* Initializing the LB Mode */
- if (func_cap->lb_mode == CQM_LB_MODE_NORMAL)
- func_cap->smf_pg = 0;
- else
- func_cap->smf_pg = svc_cap->smf_pg;
-
- func_cap->fake_cfg_number = 0;
- func_cap->fake_func_type = CQM_FAKE_FUNC_NORMAL;
-}
-
-s32 cqm_capability_init(void *ex_handle)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
- struct cqm_handle *cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
- struct service_cap *service_capability = &handle->cfg_mgmt->svc_cap;
- struct sphw_func_attr *func_attr = &cqm_handle->func_attribute;
- struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
- u32 total_function_num = 0;
- int err = 0;
-
- /* Initializes the PPF capabilities: include timer, pf, vf. */
- if (func_attr->func_type == CQM_PPF) {
- total_function_num = service_capability->host_total_function;
- func_cap->timer_enable = service_capability->timer_en;
- func_cap->pf_num = service_capability->pf_num;
- func_cap->pf_id_start = service_capability->pf_id_start;
- func_cap->vf_num = service_capability->vf_num;
- func_cap->vf_id_start = service_capability->vf_id_start;
-
- cqm_info(handle->dev_hdl, "Cap init: total function num 0x%x\n",
- total_function_num);
- cqm_info(handle->dev_hdl, "Cap init: pf_num 0x%x, pf_id_start 0x%x, vf_num 0x%x, vf_id_start 0x%x\n",
- func_cap->pf_num, func_cap->pf_id_start,
- func_cap->vf_num, func_cap->vf_id_start);
- cqm_info(handle->dev_hdl, "Cap init: timer_enable %u (1: enable; 0: disable)\n",
- func_cap->timer_enable);
- }
-
- func_cap->flow_table_based_conn_number = service_capability->max_connect_num;
- func_cap->flow_table_based_conn_cache_number = service_capability->max_stick2cache_num;
- cqm_info(handle->dev_hdl, "Cap init: cfg max_conn_num 0x%x, max_cache_conn_num 0x%x\n",
- func_cap->flow_table_based_conn_number,
- func_cap->flow_table_based_conn_cache_number);
-
- func_cap->bloomfilter_enable = service_capability->bloomfilter_en;
- cqm_info(handle->dev_hdl, "Cap init: bloomfilter_enable %u (1: enable; 0: disable)\n",
- func_cap->bloomfilter_enable);
-
- if (func_cap->bloomfilter_enable) {
- func_cap->bloomfilter_length = service_capability->bfilter_len;
- func_cap->bloomfilter_addr =
- service_capability->bfilter_start_addr;
- if (func_cap->bloomfilter_length != 0 &&
- !cqm_check_align(func_cap->bloomfilter_length)) {
- cqm_err(handle->dev_hdl, "Cap init: bloomfilter_length %u is not the power of 2\n",
- func_cap->bloomfilter_length);
-
- err = CQM_FAIL;
- goto out;
- }
- }
-
- cqm_info(handle->dev_hdl, "Cap init: bloomfilter_length 0x%x, bloomfilter_addr 0x%x\n",
- func_cap->bloomfilter_length, func_cap->bloomfilter_addr);
-
- func_cap->qpc_reserved = 0;
- func_cap->mpt_reserved = 0;
- func_cap->scq_reserved = 0;
- func_cap->srq_reserved = 0;
- func_cap->qpc_alloc_static = false;
- func_cap->scqc_alloc_static = false;
-
- func_cap->l3i_number = CQM_L3I_COMM_NUM;
- func_cap->l3i_basic_size = CQM_L3I_SIZE_8;
-
- func_cap->timer_number = CQM_TIMER_ALIGN_SCALE_NUM * total_function_num;
- func_cap->timer_basic_size = CQM_TIMER_SIZE_32;
-
- func_cap->gpa_check_enable = true;
-
- cqm_lb_fake_mode_init(cqm_handle, service_capability);
- cqm_info(handle->dev_hdl, "Cap init: lb_mode=%u\n", func_cap->lb_mode);
- cqm_info(handle->dev_hdl, "Cap init: smf_pg=%u\n", func_cap->smf_pg);
- cqm_info(handle->dev_hdl, "Cap init: fake_func_type=%u\n", func_cap->fake_func_type);
- cqm_info(handle->dev_hdl, "Cap init: fake_cfg_number=%u\n", func_cap->fake_cfg_number);
-
- cqm_service_capability_init(cqm_handle, service_capability);
-
- cqm_test_mode_init(cqm_handle, service_capability);
-
- cqm_service_capability_update(cqm_handle);
-
- func_cap->ft_enable = service_capability->sf_svc_attr.ft_en;
- func_cap->rdma_enable = service_capability->sf_svc_attr.rdma_en;
-
- cqm_info(handle->dev_hdl, "Cap init: pagesize_reorder %u\n", func_cap->pagesize_reorder);
- cqm_info(handle->dev_hdl, "Cap init: xid_alloc_mode %d, gpa_check_enable %d\n",
- func_cap->xid_alloc_mode, func_cap->gpa_check_enable);
- cqm_info(handle->dev_hdl, "Cap init: qpc_alloc_mode %d, scqc_alloc_mode %d\n",
- func_cap->qpc_alloc_static, func_cap->scqc_alloc_static);
- cqm_info(handle->dev_hdl, "Cap init: hash_number 0x%x\n", func_cap->hash_number);
- cqm_info(handle->dev_hdl, "Cap init: qpc_number 0x%x, qpc_reserved 0x%x, qpc_basic_size 0x%x\n",
- func_cap->qpc_number, func_cap->qpc_reserved, func_cap->qpc_basic_size);
- cqm_info(handle->dev_hdl, "Cap init: scqc_number 0x%x scqc_reserved 0x%x, scqc_basic_size 0x%x\n",
- func_cap->scqc_number, func_cap->scq_reserved, func_cap->scqc_basic_size);
- cqm_info(handle->dev_hdl, "Cap init: srqc_number 0x%x, srqc_basic_size 0x%x\n",
- func_cap->srqc_number, func_cap->srqc_basic_size);
- cqm_info(handle->dev_hdl, "Cap init: mpt_number 0x%x, mpt_reserved 0x%x\n",
- func_cap->mpt_number, func_cap->mpt_reserved);
- cqm_info(handle->dev_hdl, "Cap init: gid_number 0x%x, lun_number 0x%x\n",
- func_cap->gid_number, func_cap->lun_number);
- cqm_info(handle->dev_hdl, "Cap init: taskmap_number 0x%x, l3i_number 0x%x\n",
- func_cap->taskmap_number, func_cap->l3i_number);
- cqm_info(handle->dev_hdl, "Cap init: timer_number 0x%x, childc_number 0x%x\n",
- func_cap->timer_number, func_cap->childc_number);
- cqm_info(handle->dev_hdl, "Cap init: childc_basic_size 0x%x\n",
- func_cap->childc_basic_size);
- cqm_info(handle->dev_hdl, "Cap init: xid2cid_number 0x%x, reorder_number 0x%x\n",
- func_cap->xid2cid_number, func_cap->reorder_number);
- cqm_info(handle->dev_hdl, "Cap init: ft_enable %d, rdma_enable %d\n",
- func_cap->ft_enable, func_cap->rdma_enable);
-
- return CQM_SUCCESS;
-
-out:
- if (func_attr->func_type == CQM_PPF)
- func_cap->timer_enable = 0;
-
- return err;
-}
-
-s32 cqm_mem_init(void *ex_handle)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
- struct cqm_handle *cqm_handle = NULL;
-
- cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
-
- if (cqm_bat_init(cqm_handle) != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bat_init));
- return CQM_FAIL;
- }
-
- if (cqm_cla_init(cqm_handle) != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_init));
- goto err1;
- }
-
- if (cqm_bitmap_init(cqm_handle) != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_bitmap_init));
- goto err2;
- }
-
- if (cqm_object_table_init(cqm_handle) != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(cqm_object_table_init));
- goto err3;
- }
-
- return CQM_SUCCESS;
-
-err3:
- cqm_bitmap_uninit(cqm_handle);
-err2:
- cqm_cla_uninit(cqm_handle, CQM_BAT_ENTRY_MAX);
-err1:
- cqm_bat_uninit(cqm_handle);
- return CQM_FAIL;
-}
-
-void cqm_mem_uninit(void *ex_handle)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
- struct cqm_handle *cqm_handle = NULL;
-
- cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
-
- cqm_object_table_uninit(cqm_handle);
- cqm_bitmap_uninit(cqm_handle);
- cqm_cla_uninit(cqm_handle, CQM_BAT_ENTRY_MAX);
- cqm_bat_uninit(cqm_handle);
-}
-
-s32 cqm_event_init(void *ex_handle)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
-
- if (sphw_aeq_register_swe_cb(ex_handle, SPHW_STATEFULL_EVENT,
- cqm_aeq_callback) != CHIPIF_SUCCESS) {
- cqm_err(handle->dev_hdl, "Event: fail to register aeq callback\n");
- return CQM_FAIL;
- }
-
- return CQM_SUCCESS;
-}
-
-void cqm_event_uninit(void *ex_handle)
-{
- sphw_aeq_unregister_swe_cb(ex_handle, SPHW_STATEFULL_EVENT);
-}
-
-u32 cqm_aeq_event2type(u8 event)
-{
- u32 service_type;
-
- /* Distributes events to different service modules
- * based on the event type.
- */
- if (event >= CQM_AEQ_BASE_T_FC && event < CQM_AEQ_MAX_T_FC)
- service_type = CQM_SERVICE_T_FC;
- else
- service_type = CQM_SERVICE_T_MAX;
-
- return service_type;
-}
-
-u8 cqm_aeq_callback(void *ex_handle, u8 event, u8 *data)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
- struct service_register_template *service_template = NULL;
- struct cqm_handle *cqm_handle = NULL;
- struct cqm_service *service = NULL;
- u8 event_level = FAULT_LEVEL_MAX;
- u32 service_type;
-
- CQM_PTR_CHECK_RET(ex_handle, event_level,
- CQM_PTR_NULL(aeq_callback_ex_handle));
-
- atomic_inc(&handle->hw_stats.cqm_stats.cqm_aeq_callback_cnt[event]);
-
- cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
- CQM_PTR_CHECK_RET(cqm_handle, event_level,
- CQM_PTR_NULL(aeq_callback_cqm_handle));
-
- /* Distributes events to different service modules
- * based on the event type.
- */
- service_type = cqm_aeq_event2type(event);
- if (service_type == CQM_SERVICE_T_MAX) {
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(event));
- return event_level;
- }
-
- service = &cqm_handle->service[service_type];
- service_template = &service->service_template;
-
- if (!service_template->aeq_level_callback)
- cqm_err(handle->dev_hdl, "Event: service_type %u aeq_level_callback unregistered\n",
- service_type);
- else
- event_level = service_template->aeq_level_callback(service_template->service_handle,
- event, data);
-
- if (!service_template->aeq_callback)
- cqm_err(handle->dev_hdl, "Event: service_type %u aeq_callback unregistered\n",
- service_type);
- else
- service_template->aeq_callback(service_template->service_handle,
- event, data);
-
- return event_level;
-}
-
-s32 cqm3_service_register(void *ex_handle, struct service_register_template *service_template)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
- struct cqm_handle *cqm_handle = NULL;
- struct cqm_service *service = NULL;
-
- CQM_PTR_CHECK_RET(ex_handle, CQM_FAIL, CQM_PTR_NULL(ex_handle));
-
- cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
- CQM_PTR_CHECK_RET(cqm_handle, CQM_FAIL, CQM_PTR_NULL(cqm_handle));
- CQM_PTR_CHECK_RET(service_template, CQM_FAIL,
- CQM_PTR_NULL(service_template));
-
- if (service_template->service_type >= CQM_SERVICE_T_MAX) {
- cqm_err(handle->dev_hdl,
- CQM_WRONG_VALUE(service_template->service_type));
- return CQM_FAIL;
- }
- service = &cqm_handle->service[service_template->service_type];
- if (!service->valid) {
- cqm_err(handle->dev_hdl, "Service register: service_type %u is invalid\n",
- service_template->service_type);
- return CQM_FAIL;
- }
-
- if (service->has_register) {
- cqm_err(handle->dev_hdl, "Service register: service_type %u has registered\n",
- service_template->service_type);
- return CQM_FAIL;
- }
-
- service->has_register = true;
- (void)memcpy((void *)(&service->service_template),
- (void *)service_template,
- sizeof(struct service_register_template));
-
- return CQM_SUCCESS;
-}
-
-void cqm3_service_unregister(void *ex_handle, u32 service_type)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
- struct cqm_handle *cqm_handle = NULL;
- struct cqm_service *service = NULL;
-
- CQM_PTR_CHECK_NO_RET(ex_handle, CQM_PTR_NULL(ex_handle));
-
- cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
- CQM_PTR_CHECK_NO_RET(cqm_handle, CQM_PTR_NULL(cqm_handle));
-
- if (service_type >= CQM_SERVICE_T_MAX) {
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
- return;
- }
-
- service = &cqm_handle->service[service_type];
- if (!service->valid)
- cqm_err(handle->dev_hdl, "Service unregister: service_type %u is disable\n",
- service_type);
-
- service->has_register = false;
- memset(&service->service_template, 0, sizeof(struct service_register_template));
-}
-
-struct cqm_cmd_buf *cqm3_cmd_alloc(void *ex_handle)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
-
- CQM_PTR_CHECK_RET(ex_handle, NULL, CQM_PTR_NULL(ex_handle));
-
- atomic_inc(&handle->hw_stats.cqm_stats.cqm_cmd_alloc_cnt);
-
- return (struct cqm_cmd_buf *)sphw_alloc_cmd_buf(ex_handle);
-}
-
-void cqm3_cmd_free(void *ex_handle, struct cqm_cmd_buf *cmd_buf)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
-
- CQM_PTR_CHECK_NO_RET(ex_handle, CQM_PTR_NULL(ex_handle));
- CQM_PTR_CHECK_NO_RET(cmd_buf, CQM_PTR_NULL(cmd_buf));
- CQM_PTR_CHECK_NO_RET(cmd_buf->buf, CQM_PTR_NULL(buf));
-
- atomic_inc(&handle->hw_stats.cqm_stats.cqm_cmd_free_cnt);
-
- sphw_free_cmd_buf(ex_handle, (struct sphw_cmd_buf *)cmd_buf);
-}
-
-s32 cqm3_send_cmd_box(void *ex_handle, u8 mod, u8 cmd, struct cqm_cmd_buf *buf_in,
- struct cqm_cmd_buf *buf_out, u64 *out_param, u32 timeout,
- u16 channel)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
-
- CQM_PTR_CHECK_RET(ex_handle, CQM_FAIL, CQM_PTR_NULL(ex_handle));
- CQM_PTR_CHECK_RET(buf_in, CQM_FAIL, CQM_PTR_NULL(buf_in));
- CQM_PTR_CHECK_RET(buf_in->buf, CQM_FAIL, CQM_PTR_NULL(buf));
-
- atomic_inc(&handle->hw_stats.cqm_stats.cqm_send_cmd_box_cnt);
-
- return sphw_cmdq_detail_resp(ex_handle, mod, cmd,
- (struct sphw_cmd_buf *)buf_in,
- (struct sphw_cmd_buf *)buf_out,
- out_param, timeout, channel);
-}
-
-int cqm_alloc_fc_db_addr(void *hwdev, void __iomem **db_base,
- void __iomem **dwqe_base)
-{
- struct sphw_hwif *hwif = NULL;
- u32 idx = 0;
-#define SPFC_DB_ADDR_RSVD 12
-#define SPFC_DB_MASK 128
- u64 db_base_phy_fc;
-
- if (!hwdev || !db_base)
- return -EINVAL;
-
- hwif = ((struct sphw_hwdev *)hwdev)->hwif;
-
- db_base_phy_fc = hwif->db_base_phy >> SPFC_DB_ADDR_RSVD;
-
- if (db_base_phy_fc & (SPFC_DB_MASK - 1))
-		idx = SPFC_DB_MASK - (db_base_phy_fc & (SPFC_DB_MASK - 1));
-
- *db_base = hwif->db_base + idx * SPHW_DB_PAGE_SIZE;
-
- if (!dwqe_base)
- return 0;
-
- *dwqe_base = (u8 *)*db_base + SPHW_DWQE_OFFSET;
-
- return 0;
-}
-
-s32 cqm3_db_addr_alloc(void *ex_handle, void __iomem **db_addr,
- void __iomem **dwqe_addr)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
-
- CQM_PTR_CHECK_RET(ex_handle, CQM_FAIL, CQM_PTR_NULL(ex_handle));
- CQM_PTR_CHECK_RET(db_addr, CQM_FAIL, CQM_PTR_NULL(db_addr));
- CQM_PTR_CHECK_RET(dwqe_addr, CQM_FAIL, CQM_PTR_NULL(dwqe_addr));
-
- atomic_inc(&handle->hw_stats.cqm_stats.cqm_db_addr_alloc_cnt);
-
- return cqm_alloc_fc_db_addr(ex_handle, db_addr, dwqe_addr);
-}
-
-s32 cqm_db_phy_addr_alloc(void *ex_handle, u64 *db_paddr, u64 *dwqe_addr)
-{
- return sphw_alloc_db_phy_addr(ex_handle, db_paddr, dwqe_addr);
-}
-
-void cqm3_db_addr_free(void *ex_handle, const void __iomem *db_addr,
- void __iomem *dwqe_addr)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
-
- CQM_PTR_CHECK_NO_RET(ex_handle, CQM_PTR_NULL(ex_handle));
-
- atomic_inc(&handle->hw_stats.cqm_stats.cqm_db_addr_free_cnt);
-
- sphw_free_db_addr(ex_handle, db_addr, dwqe_addr);
-}
-
-void cqm_db_phy_addr_free(void *ex_handle, u64 *db_paddr, u64 *dwqe_addr)
-{
- sphw_free_db_phy_addr(ex_handle, *db_paddr, *dwqe_addr);
-}
-
-s32 cqm_db_init(void *ex_handle)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
- struct cqm_handle *cqm_handle = NULL;
- struct cqm_service *service = NULL;
- s32 i;
-
- cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
-
- /* Allocate hardware doorbells to services. */
- for (i = 0; i < CQM_SERVICE_T_MAX; i++) {
- service = &cqm_handle->service[i];
- if (!service->valid)
- continue;
-
- if (cqm3_db_addr_alloc(ex_handle, &service->hardware_db_vaddr,
- &service->dwqe_vaddr) != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm3_db_addr_alloc));
- break;
- }
-
- if (cqm_db_phy_addr_alloc(handle, &service->hardware_db_paddr,
- &service->dwqe_paddr) != CQM_SUCCESS) {
- cqm3_db_addr_free(ex_handle, service->hardware_db_vaddr,
- service->dwqe_vaddr);
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_db_phy_addr_alloc));
- break;
- }
- }
-
- if (i != CQM_SERVICE_T_MAX) {
- i--;
- for (; i >= 0; i--) {
- service = &cqm_handle->service[i];
- if (!service->valid)
- continue;
-
- cqm3_db_addr_free(ex_handle, service->hardware_db_vaddr,
- service->dwqe_vaddr);
- cqm_db_phy_addr_free(ex_handle,
- &service->hardware_db_paddr,
- &service->dwqe_paddr);
- }
- return CQM_FAIL;
- }
-
- return CQM_SUCCESS;
-}
-
-void cqm_db_uninit(void *ex_handle)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
- struct cqm_handle *cqm_handle = NULL;
- struct cqm_service *service = NULL;
- s32 i;
-
- cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
-
- /* Release hardware doorbell. */
- for (i = 0; i < CQM_SERVICE_T_MAX; i++) {
- service = &cqm_handle->service[i];
- if (service->valid)
- cqm3_db_addr_free(ex_handle, service->hardware_db_vaddr,
- service->dwqe_vaddr);
- }
-}
-
-s32 cqm3_ring_hardware_db_fc(void *ex_handle, u32 service_type, u8 db_count,
- u8 pagenum, u64 db)
-{
-#define SPFC_DB_FAKE_VF_OFFSET 32
- struct cqm_handle *cqm_handle = NULL;
- struct cqm_service *service = NULL;
- struct sphw_hwdev *handle = NULL;
- void *dbaddr = NULL;
-
- handle = (struct sphw_hwdev *)ex_handle;
- cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
- service = &cqm_handle->service[service_type];
- /* Considering the performance of ringing hardware db,
- * the parameter is not checked.
- */
- wmb();
- dbaddr = (u8 *)service->hardware_db_vaddr +
- ((pagenum + SPFC_DB_FAKE_VF_OFFSET) * SPHW_DB_PAGE_SIZE);
- *((u64 *)dbaddr + db_count) = db;
- return CQM_SUCCESS;
-}
-
-s32 cqm_ring_direct_wqe_db_fc(void *ex_handle, u32 service_type,
- void *direct_wqe)
-{
- struct cqm_handle *cqm_handle = NULL;
- struct cqm_service *service = NULL;
- struct sphw_hwdev *handle = NULL;
- u64 *tmp = (u64 *)direct_wqe;
- int i;
-
- handle = (struct sphw_hwdev *)ex_handle;
- cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
- service = &cqm_handle->service[service_type];
-
- /* Considering the performance of ringing hardware db,
- * the parameter is not checked.
- */
- wmb();
- *((u64 *)service->dwqe_vaddr + 0) = tmp[2];
- *((u64 *)service->dwqe_vaddr + 1) = tmp[3];
- *((u64 *)service->dwqe_vaddr + 2) = tmp[0];
- *((u64 *)service->dwqe_vaddr + 3) = tmp[1];
- tmp += 4;
-
-	/* FC uses 256B WQEs. The direct WQE is written at block 0,
-	 * and its length is 256B.
-	 */
- for (i = 4; i < 32; i++)
- *((u64 *)service->dwqe_vaddr + i) = *tmp++;
-
- return CQM_SUCCESS;
-}
-
-static s32 bloomfilter_init_cmd(void *ex_handle)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
- struct cqm_handle *cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
- struct cqm_func_capability *capability = &cqm_handle->func_capability;
- struct cqm_bloomfilter_init_cmd *cmd = NULL;
- struct cqm_cmd_buf *buf_in = NULL;
- s32 ret;
-
- buf_in = cqm3_cmd_alloc((void *)(cqm_handle->ex_handle));
- CQM_PTR_CHECK_RET(buf_in, CQM_FAIL, CQM_ALLOC_FAIL(buf_in));
-
- /* Fill the command format and convert it to big-endian. */
- buf_in->size = sizeof(struct cqm_bloomfilter_init_cmd);
- cmd = (struct cqm_bloomfilter_init_cmd *)(buf_in->buf);
- cmd->bloom_filter_addr = capability->bloomfilter_addr;
- cmd->bloom_filter_len = capability->bloomfilter_length;
-
- cqm_swab32((u8 *)cmd, (sizeof(struct cqm_bloomfilter_init_cmd) >> CQM_DW_SHIFT));
-
- ret = cqm3_send_cmd_box((void *)(cqm_handle->ex_handle),
- CQM_MOD_CQM, CQM_CMD_T_BLOOMFILTER_INIT, buf_in,
- NULL, NULL, CQM_CMD_TIMEOUT,
- SPHW_CHANNEL_DEFAULT);
- if (ret != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm3_send_cmd_box));
- cqm_err(handle->dev_hdl, "Bloomfilter: %s ret=%d\n", __func__,
- ret);
- cqm_err(handle->dev_hdl, "Bloomfilter: %s: 0x%x 0x%x\n",
- __func__, cmd->bloom_filter_addr,
- cmd->bloom_filter_len);
- cqm3_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
- return CQM_FAIL;
- }
- cqm3_cmd_free((void *)(cqm_handle->ex_handle), buf_in);
- return CQM_SUCCESS;
-}
-
-s32 cqm_bloomfilter_init(void *ex_handle)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
- struct cqm_bloomfilter_table *bloomfilter_table = NULL;
- struct cqm_func_capability *capability = NULL;
- struct cqm_handle *cqm_handle = NULL;
- u32 array_size;
- s32 ret;
-
- cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
- bloomfilter_table = &cqm_handle->bloomfilter_table;
- capability = &cqm_handle->func_capability;
-
- if (capability->bloomfilter_length == 0) {
- cqm_info(handle->dev_hdl,
- "Bloomfilter: bf_length=0, don't need to init bloomfilter\n");
- return CQM_SUCCESS;
- }
-
-	/* The unit of bloomfilter_length is 64B (512 bits). Each bit is a table
-	 * node, so the value must be shifted left by 9 bits.
-	 */
- bloomfilter_table->table_size = capability->bloomfilter_length <<
- CQM_BF_LENGTH_UNIT;
-	/* The unit of bloomfilter_length is 64B; the unit of an array entry
-	 * is 32B.
-	 */
- array_size = capability->bloomfilter_length << 1;
- if (array_size == 0 || array_size > CQM_BF_BITARRAY_MAX) {
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(array_size));
- return CQM_FAIL;
- }
-
- bloomfilter_table->array_mask = array_size - 1;
-	/* This table is not a bitmap; each entry is the counter for the
-	 * corresponding bit.
-	 */
- bloomfilter_table->table = vmalloc(bloomfilter_table->table_size * (sizeof(u32)));
- CQM_PTR_CHECK_RET(bloomfilter_table->table, CQM_FAIL, CQM_ALLOC_FAIL(table));
-
- memset(bloomfilter_table->table, 0,
- (bloomfilter_table->table_size * sizeof(u32)));
-
-	/* The bloomfilter must be initialized to 0 by the ucode,
-	 * because the bloomfilter is in mem mode.
-	 */
- if (cqm_handle->func_capability.bloomfilter_enable) {
- ret = bloomfilter_init_cmd(ex_handle);
- if (ret != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl,
- "Bloomfilter: bloomfilter_init_cmd ret=%d\n",
- ret);
- vfree(bloomfilter_table->table);
- bloomfilter_table->table = NULL;
- return CQM_FAIL;
- }
- }
-
- mutex_init(&bloomfilter_table->lock);
- return CQM_SUCCESS;
-}
-
-void cqm_bloomfilter_uninit(void *ex_handle)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
- struct cqm_bloomfilter_table *bloomfilter_table = NULL;
- struct cqm_handle *cqm_handle = NULL;
-
- cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
- bloomfilter_table = &cqm_handle->bloomfilter_table;
-
- if (bloomfilter_table->table) {
- vfree(bloomfilter_table->table);
- bloomfilter_table->table = NULL;
- }
-}
-
-s32 cqm_bloomfilter_cmd(void *ex_handle, u32 op, u32 k_flag, u64 id)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
- struct cqm_cmd_buf *buf_in = NULL;
- struct cqm_bloomfilter_cmd *cmd = NULL;
- s32 ret;
-
- buf_in = cqm3_cmd_alloc(ex_handle);
- CQM_PTR_CHECK_RET(buf_in, CQM_FAIL, CQM_ALLOC_FAIL(buf_in));
-
- /* Fill the command format and convert it to big-endian. */
- buf_in->size = sizeof(struct cqm_bloomfilter_cmd);
- cmd = (struct cqm_bloomfilter_cmd *)(buf_in->buf);
- memset((void *)cmd, 0, sizeof(struct cqm_bloomfilter_cmd));
- cmd->k_en = k_flag;
- cmd->index_h = (u32)(id >> CQM_DW_OFFSET);
- cmd->index_l = (u32)(id & CQM_DW_MASK);
-
- cqm_swab32((u8 *)cmd, (sizeof(struct cqm_bloomfilter_cmd) >> CQM_DW_SHIFT));
-
- ret = cqm3_send_cmd_box(ex_handle, CQM_MOD_CQM, (u8)op, buf_in, NULL,
- NULL, CQM_CMD_TIMEOUT, SPHW_CHANNEL_DEFAULT);
- if (ret != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm3_send_cmd_box));
- cqm_err(handle->dev_hdl, "Bloomfilter: bloomfilter_cmd ret=%d\n", ret);
- cqm_err(handle->dev_hdl, "Bloomfilter: op=0x%x, cmd: 0x%x 0x%x 0x%x 0x%x\n",
- op, *((u32 *)cmd), *(((u32 *)cmd) + CQM_DW_INDEX1),
- *(((u32 *)cmd) + CQM_DW_INDEX2),
- *(((u32 *)cmd) + CQM_DW_INDEX3));
- cqm3_cmd_free(ex_handle, buf_in);
- return CQM_FAIL;
- }
-
- cqm3_cmd_free(ex_handle, buf_in);
-
- return CQM_SUCCESS;
-}
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_main.h b/drivers/scsi/spfc/hw/spfc_cqm_main.h
deleted file mode 100644
index cf10d7f5c339..000000000000
--- a/drivers/scsi/spfc/hw/spfc_cqm_main.h
+++ /dev/null
@@ -1,411 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef SPFC_CQM_MAIN_H
-#define SPFC_CQM_MAIN_H
-
-#include "sphw_hwdev.h"
-#include "sphw_hwif.h"
-#include "spfc_cqm_object.h"
-#include "spfc_cqm_bitmap_table.h"
-#include "spfc_cqm_bat_cla.h"
-
-#define GET_MAX(a, b) ((a) > (b) ? (a) : (b))
-#define GET_MIN(a, b) ((a) < (b) ? (a) : (b))
-#define CQM_DW_SHIFT 2
-#define CQM_QW_SHIFT 3
-#define CQM_BYTE_BIT_SHIFT 3
-#define CQM_NUM_BIT_BYTE 8
-
-#define CHIPIF_SUCCESS 0
-#define CHIPIF_FAIL (-1)
-
-#define CQM_TIMER_ENABLE 1
-#define CQM_TIMER_DISABLE 0
-
-/* The value must be the same as that of sphw_service_type in sphw_crm.h. */
-#define CQM_SERVICE_T_FC SERVICE_T_FC
-#define CQM_SERVICE_T_MAX SERVICE_T_MAX
-
-struct cqm_service {
- bool valid; /* Whether to enable this service on the function. */
- bool has_register; /* Registered or Not */
- u64 hardware_db_paddr;
- void __iomem *hardware_db_vaddr;
- u64 dwqe_paddr;
- void __iomem *dwqe_vaddr;
- u32 buf_order; /* The size of each buf node is 2^buf_order pages. */
- struct service_register_template service_template;
-};
-
-struct cqm_fake_cfg {
- u32 parent_func; /* The parent func_id of the fake vfs. */
- u32 child_func_start; /* The start func_id of the child fake vfs. */
- u32 child_func_number; /* The number of the child fake vfs. */
-};
-
-#define CQM_MAX_FACKVF_GROUP 4
-
-struct cqm_func_capability {
- /* BAT_PTR table(SMLC) */
-	bool ft_enable; /* BAT for flow table enable: support fc service */
- bool rdma_enable; /* BAT for rdma enable: support RoCE */
- /* VAT table(SMIR) */
-	bool ft_pf_enable; /* Same as ft_enable. BAT entry for fc on pf */
- bool rdma_pf_enable; /* Same as rdma_enable. BAT entry for rdma on pf */
-
- /* Dynamic or static memory allocation during the application of
- * specified QPC/SCQC for each service.
- */
- bool qpc_alloc_static;
- bool scqc_alloc_static;
-
- u8 timer_enable; /* Whether the timer function is enabled */
-	u8 bloomfilter_enable; /* Whether the bloomfilter function is enabled */
-	/* Maximum number of connections for fc, which cannot exceed qpc_number */
- u32 flow_table_based_conn_number;
- u32 flow_table_based_conn_cache_number; /* Maximum number of sticky caches */
- u32 bloomfilter_length; /* Size of the bloomfilter table, 64-byte aligned */
- u32 bloomfilter_addr; /* Start position of the bloomfilter table in the SMF main cache. */
- u32 qpc_reserved; /* Reserved bit in bitmap */
- u32 mpt_reserved; /* The ROCE/IWARP MPT also has a reserved bit. */
-
- /* All basic_size must be 2^n-aligned. */
-	/* The number of hash buckets. The size of the BAT table is aligned to
-	 * 64 buckets; at least 64 buckets are required.
-	 */
- u32 hash_number;
-	/* The basic size of a hash bucket is 64B, including 5 valid entries and one next entry. */
- u32 hash_basic_size;
- u32 qpc_number;
- u32 qpc_basic_size;
-
-	/* Number of PFs/VFs on the current host */
- u32 pf_num;
- u32 pf_id_start;
- u32 vf_num;
- u32 vf_id_start;
-
- u32 lb_mode;
-	/* Only the lower 4 bits are valid, indicating which SMFs are enabled.
-	 * For example, 0101B indicates that SMF0 and SMF2 are enabled.
-	 */
- u32 smf_pg;
-
- u32 fake_mode;
- /* Whether the current function belongs to the fake group (parent or child) */
- u32 fake_func_type;
- u32 fake_cfg_number; /* Number of current configuration groups */
- struct cqm_fake_cfg fake_cfg[CQM_MAX_FACKVF_GROUP];
-
-	/* Note: for cqm special test */
- u32 pagesize_reorder;
- bool xid_alloc_mode;
- bool gpa_check_enable;
- u32 scq_reserved;
- u32 srq_reserved;
-
- u32 mpt_number;
- u32 mpt_basic_size;
- u32 scqc_number;
- u32 scqc_basic_size;
- u32 srqc_number;
- u32 srqc_basic_size;
-
- u32 gid_number;
- u32 gid_basic_size;
- u32 lun_number;
- u32 lun_basic_size;
- u32 taskmap_number;
- u32 taskmap_basic_size;
- u32 l3i_number;
- u32 l3i_basic_size;
- u32 childc_number;
- u32 childc_basic_size;
- u32 child_qpc_id_start; /* FC service Child CTX is global addressing. */
- u32 childc_number_all_function; /* The chip supports a maximum of 8096 child CTXs. */
- u32 timer_number;
- u32 timer_basic_size;
- u32 xid2cid_number;
- u32 xid2cid_basic_size;
- u32 reorder_number;
- u32 reorder_basic_size;
-};
-
-#define CQM_PF TYPE_PF
-#define CQM_VF TYPE_VF
-#define CQM_PPF TYPE_PPF
-#define CQM_UNKNOWN TYPE_UNKNOWN
-#define CQM_MAX_PF_NUM 32
-
-#define CQM_LB_MODE_NORMAL 0xff
-#define CQM_LB_MODE_0 0
-#define CQM_LB_MODE_1 1
-#define CQM_LB_MODE_2 2
-
-#define CQM_LB_SMF_MAX 4
-
-#define CQM_FPGA_MODE 0
-#define CQM_EMU_MODE 1
-#define CQM_FAKE_MODE_DISABLE 0
-#define CQM_FAKE_CFUNC_START 32
-
-#define CQM_FAKE_FUNC_NORMAL 0
-#define CQM_FAKE_FUNC_PARENT 1
-#define CQM_FAKE_FUNC_CHILD 2
-#define CQM_FAKE_FUNC_CHILD_CONFLICT 3
-#define CQM_FAKE_FUNC_MAX 32
-
-#define CQM_SPU_HOST_ID 4
-
-#define CQM_QPC_ROCE_PER_DRCT 12
-#define CQM_QPC_NORMAL_RESERVE_DRC 0
-#define CQM_QPC_ROCEAA_ENABLE 1
-#define CQM_QPC_ROCE_VBS_MODE 2
-#define CQM_QPC_NORMAL_WITHOUT_RSERVER_DRC 3
-
-struct cqm_db_common {
- u32 rsvd1 : 23;
- u32 c : 1;
- u32 cos : 3;
- u32 service_type : 5;
-
- u32 rsvd2;
-};
-
-struct cqm_bloomfilter_table {
- u32 *table;
- u32 table_size; /* The unit is bit */
-	u32 array_mask; /* The unit of an array entry is 32B, used to address entries */
- struct mutex lock;
-};
-
-struct cqm_bloomfilter_init_cmd {
- u32 bloom_filter_len;
- u32 bloom_filter_addr;
-};
-
-struct cqm_bloomfilter_cmd {
- u32 rsv1;
-
- u32 k_en : 4;
- u32 rsv2 : 28;
-
- u32 index_h;
- u32 index_l;
-};
-
-struct cqm_handle {
- struct sphw_hwdev *ex_handle;
- struct pci_dev *dev;
- struct sphw_func_attr func_attribute; /* vf/pf attributes */
- struct cqm_func_capability func_capability; /* function capability set */
- struct cqm_service service[CQM_SERVICE_T_MAX]; /* Service-related structure */
- struct cqm_bat_table bat_table;
- struct cqm_bloomfilter_table bloomfilter_table;
- /* fake-vf-related structure */
- struct cqm_handle *fake_cqm_handle[CQM_FAKE_FUNC_MAX];
- struct cqm_handle *parent_cqm_handle;
-};
-
-enum cqm_cmd_type {
- CQM_CMD_T_INVALID = 0,
- CQM_CMD_T_BAT_UPDATE,
- CQM_CMD_T_CLA_UPDATE,
- CQM_CMD_T_CLA_CACHE_INVALID = 6,
- CQM_CMD_T_BLOOMFILTER_INIT,
- CQM_CMD_T_MAX
-};
-
-#define CQM_CQN_FROM_CEQE(data) ((data) & 0xfffff)
-#define CQM_XID_FROM_CEQE(data) ((data) & 0xfffff)
-#define CQM_QID_FROM_CEQE(data) (((data) >> 20) & 0x7)
-#define CQM_TYPE_FROM_CEQE(data) (((data) >> 23) & 0x7)
-
-#define CQM_HASH_BUCKET_SIZE_64 64
-
-#define CQM_MAX_QPC_NUM 0x100000
-#define CQM_MAX_SCQC_NUM 0x100000
-#define CQM_MAX_SRQC_NUM 0x100000
-#define CQM_MAX_CHILDC_NUM 0x100000
-
-#define CQM_QPC_SIZE_256 256
-#define CQM_QPC_SIZE_512 512
-#define CQM_QPC_SIZE_1024 1024
-
-#define CQM_SCQC_SIZE_32 32
-#define CQM_SCQC_SIZE_64 64
-#define CQM_SCQC_SIZE_128 128
-
-#define CQM_SRQC_SIZE_32 32
-#define CQM_SRQC_SIZE_64 64
-#define CQM_SRQC_SIZE_128 128
-
-#define CQM_MPT_SIZE_64 64
-
-#define CQM_GID_SIZE_32 32
-
-#define CQM_LUN_SIZE_8 8
-
-#define CQM_L3I_SIZE_8 8
-
-#define CQM_TIMER_SIZE_32 32
-
-#define CQM_XID2CID_SIZE_8 8
-
-#define CQM_XID2CID_SIZE_8K 8192
-
-#define CQM_REORDER_SIZE_256 256
-
-#define CQM_CHILDC_SIZE_256 256
-
-#define CQM_XID2CID_VBS_NUM (18 * 1024) /* 16K virtio VQ + 2K nvme Q */
-
-#define CQM_VBS_QPC_NUM 2048 /* 2K VOLQ */
-
-#define CQM_VBS_QPC_SIZE 512
-
-#define CQM_XID2CID_VIRTIO_NUM (16 * 1024)
-
-#define CQM_GID_RDMA_NUM 128
-
-#define CQM_LUN_FC_NUM 64
-
-#define CQM_TASKMAP_FC_NUM 4
-
-#define CQM_L3I_COMM_NUM 64
-
-#define CQM_CHILDC_ROCE_NUM (8 * 1024)
-#define CQM_CHILDC_OVS_VBS_NUM (8 * 1024)
-#define CQM_CHILDC_TOE_NUM 256
-#define CQM_CHILDC_IPSEC_NUM (4 * 1024)
-
-#define CQM_TIMER_SCALE_NUM (2 * 1024)
-#define CQM_TIMER_ALIGN_WHEEL_NUM 8
-#define CQM_TIMER_ALIGN_SCALE_NUM \
- (CQM_TIMER_SCALE_NUM * CQM_TIMER_ALIGN_WHEEL_NUM)
-
-#define CQM_QPC_OVS_RSVD (1024 * 1024)
-#define CQM_QPC_ROCE_RSVD 2
-#define CQM_QPC_ROCEAA_SWITCH_QP_NUM 4
-#define CQM_QPC_ROCEAA_RSVD \
- (4 * 1024 + CQM_QPC_ROCEAA_SWITCH_QP_NUM) /* 4096 Normal QP + 4 Switch QP */
-#define CQM_CQ_ROCEAA_RSVD 64
-#define CQM_SRQ_ROCEAA_RSVD 64
-#define CQM_QPC_ROCE_VBS_RSVD \
- (1024 + CQM_QPC_ROCE_RSVD) /* (204800 + CQM_QPC_ROCE_RSVD) */
-
-#define CQM_OVS_PAGESIZE_ORDER 8
-#define CQM_OVS_MAX_TIMER_FUNC 48
-
-#define CQM_FC_PAGESIZE_ORDER 0
-
-#define CQM_QHEAD_ALIGN_ORDER 6
-
-#define CQM_CMD_TIMEOUT 300000 /* ms */
-
-#define CQM_DW_MASK 0xffffffff
-#define CQM_DW_OFFSET 32
-#define CQM_DW_INDEX0 0
-#define CQM_DW_INDEX1 1
-#define CQM_DW_INDEX2 2
-#define CQM_DW_INDEX3 3
-
-/* The unit of bloomfilter_length is 64B(512bits). */
-#define CQM_BF_LENGTH_UNIT 9
-#define CQM_BF_BITARRAY_MAX BIT(17)
-
-typedef void (*serv_cap_init_cb)(struct cqm_handle *, void *);
-
-/* Only for llt test */
-s32 cqm_capability_init(void *ex_handle);
-/* Can be defined as static */
-s32 cqm_mem_init(void *ex_handle);
-void cqm_mem_uninit(void *ex_handle);
-s32 cqm_event_init(void *ex_handle);
-void cqm_event_uninit(void *ex_handle);
-u8 cqm_aeq_callback(void *ex_handle, u8 event, u8 *data);
-
-s32 cqm3_init(void *ex_handle);
-void cqm3_uninit(void *ex_handle);
-s32 cqm3_service_register(void *ex_handle, struct service_register_template *service_template);
-void cqm3_service_unregister(void *ex_handle, u32 service_type);
-
-struct cqm_cmd_buf *cqm3_cmd_alloc(void *ex_handle);
-void cqm3_cmd_free(void *ex_handle, struct cqm_cmd_buf *cmd_buf);
-s32 cqm3_send_cmd_box(void *ex_handle, u8 mod, u8 cmd, struct cqm_cmd_buf *buf_in,
- struct cqm_cmd_buf *buf_out, u64 *out_param, u32 timeout,
- u16 channel);
-
-s32 cqm3_db_addr_alloc(void *ex_handle, void __iomem **db_addr, void __iomem **dwqe_addr);
-s32 cqm_db_phy_addr_alloc(void *ex_handle, u64 *db_paddr, u64 *dwqe_addr);
-s32 cqm_db_init(void *ex_handle);
-void cqm_db_uninit(void *ex_handle);
-
-s32 cqm_bloomfilter_cmd(void *ex_handle, u32 op, u32 k_flag, u64 id);
-s32 cqm_bloomfilter_init(void *ex_handle);
-void cqm_bloomfilter_uninit(void *ex_handle);
-
-#define CQM_LOG_ID 0
-
-#define CQM_PTR_NULL(x) "%s: " #x " is null\n", __func__
-#define CQM_ALLOC_FAIL(x) "%s: " #x " alloc fail\n", __func__
-#define CQM_MAP_FAIL(x) "%s: " #x " map fail\n", __func__
-#define CQM_FUNCTION_FAIL(x) "%s: " #x " return failure\n", __func__
-#define CQM_WRONG_VALUE(x) "%s: " #x " %u is wrong\n", __func__, (u32)(x)
-
-#define cqm_err(dev, format, ...) dev_err(dev, "[CQM]" format, ##__VA_ARGS__)
-#define cqm_warn(dev, format, ...) dev_warn(dev, "[CQM]" format, ##__VA_ARGS__)
-#define cqm_notice(dev, format, ...) \
- dev_notice(dev, "[CQM]" format, ##__VA_ARGS__)
-#define cqm_info(dev, format, ...) dev_info(dev, "[CQM]" format, ##__VA_ARGS__)
-
-#define CQM_32_ALIGN_CHECK_RET(dev_hdl, x, ret, desc) \
- do { \
- if (unlikely(((x) & 0x1f) != 0)) { \
- cqm_err(dev_hdl, desc); \
- return ret; \
- } \
- } while (0)
-#define CQM_64_ALIGN_CHECK_RET(dev_hdl, x, ret, desc) \
- do { \
- if (unlikely(((x) & 0x3f) != 0)) { \
- cqm_err(dev_hdl, desc); \
- return ret; \
- } \
- } while (0)
-
-#define CQM_PTR_CHECK_RET(ptr, ret, desc) \
- do { \
- if (unlikely((ptr) == NULL)) { \
- pr_err("[CQM]" desc); \
- return ret; \
- } \
- } while (0)
-
-#define CQM_PTR_CHECK_NO_RET(ptr, desc) \
- do { \
- if (unlikely((ptr) == NULL)) { \
- pr_err("[CQM]" desc); \
- return; \
- } \
- } while (0)
-#define CQM_CHECK_EQUAL_RET(dev_hdl, actual, expect, ret, desc) \
- do { \
- if (unlikely((expect) != (actual))) { \
- cqm_err(dev_hdl, desc); \
- return ret; \
- } \
- } while (0)
-#define CQM_CHECK_EQUAL_NO_RET(dev_hdl, actual, expect, desc) \
- do { \
- if (unlikely((expect) != (actual))) { \
- cqm_err(dev_hdl, desc); \
- return; \
- } \
- } while (0)
-
-#endif /* SPFC_CQM_MAIN_H */
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_object.c b/drivers/scsi/spfc/hw/spfc_cqm_object.c
deleted file mode 100644
index 165794e9c7e5..000000000000
--- a/drivers/scsi/spfc/hw/spfc_cqm_object.c
+++ /dev/null
@@ -1,937 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include <linux/types.h>
-#include <linux/sched.h>
-#include <linux/pci.h>
-#include <linux/module.h>
-#include <linux/vmalloc.h>
-#include <linux/device.h>
-#include <linux/gfp.h>
-#include <linux/mm.h>
-
-#include "sphw_crm.h"
-#include "sphw_hw.h"
-#include "sphw_hwdev.h"
-#include "sphw_hwif.h"
-
-#include "spfc_cqm_object.h"
-#include "spfc_cqm_bitmap_table.h"
-#include "spfc_cqm_bat_cla.h"
-#include "spfc_cqm_main.h"
-
-s32 cqm_qpc_mpt_bitmap_alloc(struct cqm_object *object, struct cqm_cla_table *cla_table)
-{
- struct cqm_qpc_mpt *common = container_of(object, struct cqm_qpc_mpt, object);
- struct cqm_qpc_mpt_info *qpc_mpt_info = container_of(common,
- struct cqm_qpc_mpt_info,
- common);
- struct cqm_handle *cqm_handle = (struct cqm_handle *)object->cqm_handle;
- struct cqm_func_capability *func_cap = &cqm_handle->func_capability;
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_bitmap *bitmap = &cla_table->bitmap;
- u32 index, count;
-
- count = (ALIGN(object->object_size, cla_table->obj_size)) / cla_table->obj_size;
- qpc_mpt_info->index_count = count;
-
- if (qpc_mpt_info->common.xid == CQM_INDEX_INVALID) {
- /* apply for an index normally */
- index = cqm_bitmap_alloc(bitmap, 1U << (cla_table->z + 1),
- count, func_cap->xid_alloc_mode);
- if (index < bitmap->max_num) {
- qpc_mpt_info->common.xid = index;
- } else {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(cqm_bitmap_alloc));
- return CQM_FAIL;
- }
- } else {
- /* apply for index to be reserved */
- index = cqm_bitmap_alloc_reserved(bitmap, count,
- qpc_mpt_info->common.xid);
- if (index != qpc_mpt_info->common.xid) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(cqm_bitmap_alloc_reserved));
- return CQM_FAIL;
- }
- }
-
- return CQM_SUCCESS;
-}
-
-s32 cqm_qpc_mpt_create(struct cqm_object *object)
-{
- struct cqm_qpc_mpt *common = container_of(object, struct cqm_qpc_mpt, object);
- struct cqm_qpc_mpt_info *qpc_mpt_info = container_of(common,
- struct cqm_qpc_mpt_info,
- common);
- struct cqm_handle *cqm_handle = (struct cqm_handle *)object->cqm_handle;
- struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_object_table *object_table = NULL;
- struct cqm_cla_table *cla_table = NULL;
- struct cqm_bitmap *bitmap = NULL;
- u32 index, count;
-
- /* find the corresponding cla table */
- if (object->object_type == CQM_OBJECT_SERVICE_CTX) {
- cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_QPC);
- } else {
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
- return CQM_FAIL;
- }
-
- CQM_PTR_CHECK_RET(cla_table, CQM_FAIL,
- CQM_FUNCTION_FAIL(cqm_cla_table_get));
-
- /* Bitmap applies for index. */
- if (cqm_qpc_mpt_bitmap_alloc(object, cla_table) == CQM_FAIL) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(cqm_qpc_mpt_bitmap_alloc));
- return CQM_FAIL;
- }
-
- bitmap = &cla_table->bitmap;
- index = qpc_mpt_info->common.xid;
- count = qpc_mpt_info->index_count;
-
- /* Find the trunk page from the BAT/CLA and allocate the buffer.
- * Ensure that the released buffer has been cleared.
- */
- if (cla_table->alloc_static)
- qpc_mpt_info->common.vaddr = cqm_cla_get_unlock(cqm_handle,
- cla_table,
- index, count,
- &common->paddr);
- else
- qpc_mpt_info->common.vaddr = cqm_cla_get_lock(cqm_handle,
- cla_table, index,
- count,
- &common->paddr);
-
- if (!qpc_mpt_info->common.vaddr) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_get_lock));
- cqm_err(handle->dev_hdl, "Qpc mpt init: qpc mpt vaddr is null, cla_table->alloc_static=%d\n",
- cla_table->alloc_static);
- goto err1;
- }
-
- /* Indexes are associated with objects, and FC is executed
- * in the interrupt context.
- */
- object_table = &cla_table->obj_table;
- if (object->service_type == CQM_SERVICE_T_FC) {
- if (cqm_object_table_insert(cqm_handle, object_table, index,
- object, false) != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(cqm_object_table_insert));
- goto err2;
- }
- } else {
- if (cqm_object_table_insert(cqm_handle, object_table, index,
- object, true) != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(cqm_object_table_insert));
- goto err2;
- }
- }
-
- return CQM_SUCCESS;
-
-err2:
- cqm_cla_put(cqm_handle, cla_table, index, count);
-err1:
- cqm_bitmap_free(bitmap, index, count);
- return CQM_FAIL;
-}
-
-struct cqm_qpc_mpt *cqm3_object_qpc_mpt_create(void *ex_handle, u32 service_type,
- enum cqm_object_type object_type,
- u32 object_size, void *object_priv,
- u32 index)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
- struct cqm_qpc_mpt_info *qpc_mpt_info = NULL;
- struct cqm_handle *cqm_handle = NULL;
- s32 ret = CQM_FAIL;
-
- CQM_PTR_CHECK_RET(ex_handle, NULL, CQM_PTR_NULL(ex_handle));
-
- atomic_inc(&handle->hw_stats.cqm_stats.cqm_qpc_mpt_create_cnt);
-
- cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
- CQM_PTR_CHECK_RET(cqm_handle, NULL, CQM_PTR_NULL(cqm_handle));
-
- if (service_type >= CQM_SERVICE_T_MAX) {
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
- return NULL;
- }
-	/* check that the service has been registered */
- if (!cqm_handle->service[service_type].has_register) {
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
- return NULL;
- }
-
- if (object_type != CQM_OBJECT_SERVICE_CTX) {
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
- return NULL;
- }
-
- qpc_mpt_info = kmalloc(sizeof(*qpc_mpt_info), GFP_ATOMIC | __GFP_ZERO);
- CQM_PTR_CHECK_RET(qpc_mpt_info, NULL, CQM_ALLOC_FAIL(qpc_mpt_info));
-
- qpc_mpt_info->common.object.service_type = service_type;
- qpc_mpt_info->common.object.object_type = object_type;
- qpc_mpt_info->common.object.object_size = object_size;
- atomic_set(&qpc_mpt_info->common.object.refcount, 1);
- init_completion(&qpc_mpt_info->common.object.free);
- qpc_mpt_info->common.object.cqm_handle = cqm_handle;
- qpc_mpt_info->common.xid = index;
-
- qpc_mpt_info->common.priv = object_priv;
-
- ret = cqm_qpc_mpt_create(&qpc_mpt_info->common.object);
- if (ret == CQM_SUCCESS)
- return &qpc_mpt_info->common;
-
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_qpc_mpt_create));
- kfree(qpc_mpt_info);
- return NULL;
-}
-
-void cqm_linkwqe_fill(struct cqm_buf *buf, u32 wqe_per_buf, u32 wqe_size,
- u32 wqe_number, bool tail, u8 link_mode)
-{
- struct cqm_linkwqe_128B *linkwqe = NULL;
- struct cqm_linkwqe *wqe = NULL;
- dma_addr_t addr;
- u8 *tmp = NULL;
- u8 *va = NULL;
- u32 i;
-
-	/* For every buffer except the last one, the linkwqe is
-	 * filled directly at the tail.
-	 */
- for (i = 0; i < buf->buf_number; i++) {
- va = (u8 *)(buf->buf_list[i].va);
-
- if (i != (buf->buf_number - 1)) {
- wqe = (struct cqm_linkwqe *)(va + (u32)(wqe_size * wqe_per_buf));
- wqe->wf = CQM_WQE_WF_LINK;
- wqe->ctrlsl = CQM_LINK_WQE_CTRLSL_VALUE;
- wqe->lp = CQM_LINK_WQE_LP_INVALID;
- /* The valid value of link wqe needs to be set to 1.
- * Each service ensures that o-bit=1 indicates that
- * link wqe is valid and o-bit=0 indicates that
- * link wqe is invalid.
- */
- wqe->o = CQM_LINK_WQE_OWNER_VALID;
- addr = buf->buf_list[(u32)(i + 1)].pa;
- wqe->next_page_gpa_h = CQM_ADDR_HI(addr);
- wqe->next_page_gpa_l = CQM_ADDR_LW(addr);
- } else { /* linkwqe special padding of the last buffer */
- if (tail) {
- /* must be filled at the end of the page */
- tmp = va + (u32)(wqe_size * wqe_per_buf);
- wqe = (struct cqm_linkwqe *)tmp;
- } else {
- /* The last linkwqe is filled
- * following the last wqe.
- */
- tmp = va + (u32)(wqe_size * (wqe_number -
- wqe_per_buf *
- (buf->buf_number -
- 1)));
- wqe = (struct cqm_linkwqe *)tmp;
- }
- wqe->wf = CQM_WQE_WF_LINK;
- wqe->ctrlsl = CQM_LINK_WQE_CTRLSL_VALUE;
-
- /* In link mode, the last link WQE is invalid;
- * In ring mode, the last link wqe is valid, pointing to
- * the home page, and the lp is set.
- */
- if (link_mode == CQM_QUEUE_LINK_MODE) {
- wqe->o = CQM_LINK_WQE_OWNER_INVALID;
- } else {
- /* The lp field of the last link_wqe is set to
- * 1, indicating that the meaning of the o-bit
- * is reversed.
- */
- wqe->lp = CQM_LINK_WQE_LP_VALID;
- wqe->o = CQM_LINK_WQE_OWNER_VALID;
- addr = buf->buf_list[0].pa;
- wqe->next_page_gpa_h = CQM_ADDR_HI(addr);
- wqe->next_page_gpa_l = CQM_ADDR_LW(addr);
- }
- }
-
- if (wqe_size == CQM_LINKWQE_128B) {
-			/* After the B800 version, the WQE obit scheme is
-			 * changed. The 64B before and after the 128B WQE
-			 * need to be assigned a value:
-			 * for ifoe, the 63rd bit from the end of the last
-			 * 64B is the obit;
-			 * for toe, the 157th bit from the end of the last
-			 * 64B is the obit.
-			 */
- linkwqe = (struct cqm_linkwqe_128B *)wqe;
- linkwqe->second64B.forth_16B.bs.ifoe_o = CQM_LINK_WQE_OWNER_VALID;
-
- /* shift 2 bits by right to get length of dw(4B) */
- cqm_swab32((u8 *)wqe, sizeof(struct cqm_linkwqe_128B) >> 2);
- } else {
- /* shift 2 bits by right to get length of dw(4B) */
- cqm_swab32((u8 *)wqe, sizeof(struct cqm_linkwqe) >> 2);
- }
- }
-}
-
-s32 cqm_nonrdma_queue_ctx_create(struct cqm_object *object)
-{
- struct cqm_queue *common = container_of(object, struct cqm_queue, object);
- struct cqm_nonrdma_qinfo *qinfo = container_of(common, struct cqm_nonrdma_qinfo,
- common);
- struct cqm_handle *cqm_handle = (struct cqm_handle *)object->cqm_handle;
- struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_object_table *object_table = NULL;
- struct cqm_cla_table *cla_table = NULL;
- struct cqm_bitmap *bitmap = NULL;
- s32 shift;
-
- if (object->object_type == CQM_OBJECT_NONRDMA_SRQ) {
- shift = cqm_shift(qinfo->q_ctx_size);
- common->q_ctx_vaddr = cqm_kmalloc_align(qinfo->q_ctx_size,
- GFP_KERNEL | __GFP_ZERO,
- (u16)shift);
- if (!common->q_ctx_vaddr) {
- cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(q_ctx_vaddr));
- return CQM_FAIL;
- }
-
- common->q_ctx_paddr = pci_map_single(cqm_handle->dev,
- common->q_ctx_vaddr,
- qinfo->q_ctx_size,
- PCI_DMA_BIDIRECTIONAL);
- if (pci_dma_mapping_error(cqm_handle->dev,
- common->q_ctx_paddr)) {
- cqm_err(handle->dev_hdl, CQM_MAP_FAIL(q_ctx_vaddr));
- cqm_kfree_align(common->q_ctx_vaddr);
- common->q_ctx_vaddr = NULL;
- return CQM_FAIL;
- }
- } else if (object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
- /* find the corresponding cla table */
- cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
- if (!cla_table) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(nonrdma_cqm_cla_table_get));
- return CQM_FAIL;
- }
-
- /* bitmap applies for index */
- bitmap = &cla_table->bitmap;
- qinfo->index_count =
- (ALIGN(qinfo->q_ctx_size, cla_table->obj_size)) /
- cla_table->obj_size;
- qinfo->common.index = cqm_bitmap_alloc(bitmap, 1U << (cla_table->z + 1),
- qinfo->index_count,
- cqm_handle->func_capability.xid_alloc_mode);
- if (qinfo->common.index >= bitmap->max_num) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(nonrdma_cqm_bitmap_alloc));
- return CQM_FAIL;
- }
-
- /* find the trunk page from BAT/CLA and allocate the buffer */
- common->q_ctx_vaddr = cqm_cla_get_lock(cqm_handle, cla_table,
- qinfo->common.index,
- qinfo->index_count,
- &common->q_ctx_paddr);
- if (!common->q_ctx_vaddr) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(nonrdma_cqm_cla_get_lock));
- cqm_bitmap_free(bitmap, qinfo->common.index,
- qinfo->index_count);
- return CQM_FAIL;
- }
-
- /* index and object association */
- object_table = &cla_table->obj_table;
- if (object->service_type == CQM_SERVICE_T_FC) {
- if (cqm_object_table_insert(cqm_handle, object_table,
- qinfo->common.index, object,
- false) != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(nonrdma_cqm_object_table_insert));
- cqm_cla_put(cqm_handle, cla_table,
- qinfo->common.index,
- qinfo->index_count);
- cqm_bitmap_free(bitmap, qinfo->common.index,
- qinfo->index_count);
- return CQM_FAIL;
- }
- } else {
- if (cqm_object_table_insert(cqm_handle, object_table,
- qinfo->common.index, object,
- true) != CQM_SUCCESS) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(nonrdma_cqm_object_table_insert));
- cqm_cla_put(cqm_handle, cla_table,
- qinfo->common.index,
- qinfo->index_count);
- cqm_bitmap_free(bitmap, qinfo->common.index,
- qinfo->index_count);
- return CQM_FAIL;
- }
- }
- }
-
- return CQM_SUCCESS;
-}
-
-s32 cqm_nonrdma_queue_create(struct cqm_object *object)
-{
- struct cqm_queue *common = container_of(object, struct cqm_queue, object);
- struct cqm_nonrdma_qinfo *qinfo = container_of(common, struct cqm_nonrdma_qinfo,
- common);
- struct cqm_handle *cqm_handle = (struct cqm_handle *)object->cqm_handle;
- struct cqm_service *service = cqm_handle->service + object->service_type;
- struct cqm_buf *q_room_buf = &common->q_room_buf_1;
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- u32 wqe_number = qinfo->common.object.object_size;
- u32 wqe_size = qinfo->wqe_size;
- u32 order = service->buf_order;
- u32 buf_number, buf_size;
- bool tail = false; /* determine whether the linkwqe is at the end of the page */
-
-	/* When creating a CQ/SCQ queue, the page size is 4 KB and the
-	 * linkwqe must be at the end of the page.
-	 */
- if (object->object_type == CQM_OBJECT_NONRDMA_EMBEDDED_CQ ||
- object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
- /* depth: 2^n-aligned; depth range: 256-32 K */
- if (wqe_number < CQM_CQ_DEPTH_MIN ||
- wqe_number > CQM_CQ_DEPTH_MAX) {
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_number));
- return CQM_FAIL;
- }
- if (!cqm_check_align(wqe_number)) {
- cqm_err(handle->dev_hdl, "Nonrdma queue alloc: wqe_number is not align on 2^n\n");
- return CQM_FAIL;
- }
-
- order = CQM_4K_PAGE_ORDER; /* wqe page 4k */
- tail = true; /* The linkwqe must be at the end of the page. */
- buf_size = CQM_4K_PAGE_SIZE;
- } else {
- buf_size = (u32)(PAGE_SIZE << order);
- }
-
-	/* Calculate the number of usable wqes per buffer;
-	 * the -1 deducts the link wqe carried in each buffer.
-	 */
- qinfo->wqe_per_buf = (buf_size / wqe_size) - 1;
-	/* the depth transferred by the service already includes
-	 * the linkwqes
-	 */
- buf_number = ALIGN((wqe_size * wqe_number), buf_size) / buf_size;
-
- /* apply for buffer */
- q_room_buf->buf_number = buf_number;
- q_room_buf->buf_size = buf_size;
- q_room_buf->page_number = buf_number << order;
- if (cqm_buf_alloc(cqm_handle, q_room_buf, false) == CQM_FAIL) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_buf_alloc));
- return CQM_FAIL;
- }
-	/* Fill the link wqes; wqe_number - buf_number is the number of
-	 * wqes excluding link wqes.
-	 */
- cqm_linkwqe_fill(q_room_buf, qinfo->wqe_per_buf, wqe_size,
- wqe_number - buf_number, tail,
- common->queue_link_mode);
-
- /* create queue header */
- qinfo->common.q_header_vaddr = cqm_kmalloc_align(sizeof(struct cqm_queue_header),
- GFP_KERNEL | __GFP_ZERO,
- CQM_QHEAD_ALIGN_ORDER);
- if (!qinfo->common.q_header_vaddr) {
- cqm_err(handle->dev_hdl, CQM_ALLOC_FAIL(q_header_vaddr));
- goto err1;
- }
-
- common->q_header_paddr = pci_map_single(cqm_handle->dev,
- qinfo->common.q_header_vaddr,
- sizeof(struct cqm_queue_header),
- PCI_DMA_BIDIRECTIONAL);
- if (pci_dma_mapping_error(cqm_handle->dev, common->q_header_paddr)) {
- cqm_err(handle->dev_hdl, CQM_MAP_FAIL(q_header_vaddr));
- goto err2;
- }
-
- /* create queue ctx */
- if (cqm_nonrdma_queue_ctx_create(object) == CQM_FAIL) {
- cqm_err(handle->dev_hdl,
- CQM_FUNCTION_FAIL(cqm_nonrdma_queue_ctx_create));
- goto err3;
- }
-
- return CQM_SUCCESS;
-
-err3:
- pci_unmap_single(cqm_handle->dev, common->q_header_paddr,
- sizeof(struct cqm_queue_header), PCI_DMA_BIDIRECTIONAL);
-err2:
- cqm_kfree_align(qinfo->common.q_header_vaddr);
- qinfo->common.q_header_vaddr = NULL;
-err1:
- cqm_buf_free(q_room_buf, cqm_handle->dev);
- return CQM_FAIL;
-}
-
-struct cqm_queue *cqm3_object_fc_srq_create(void *ex_handle, u32 service_type,
- enum cqm_object_type object_type,
- u32 wqe_number, u32 wqe_size,
- void *object_priv)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
- struct cqm_nonrdma_qinfo *nonrdma_qinfo = NULL;
- struct cqm_handle *cqm_handle = NULL;
- struct cqm_service *service = NULL;
- u32 valid_wqe_per_buffer;
- u32 wqe_sum; /* include linkwqe, normal wqe */
- u32 buf_size;
- u32 buf_num;
- s32 ret;
-
- CQM_PTR_CHECK_RET(ex_handle, NULL, CQM_PTR_NULL(ex_handle));
-
- atomic_inc(&handle->hw_stats.cqm_stats.cqm_fc_srq_create_cnt);
-
- cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
- CQM_PTR_CHECK_RET(cqm_handle, NULL, CQM_PTR_NULL(cqm_handle));
-
- /* service_type must be fc */
- if (service_type != CQM_SERVICE_T_FC) {
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
- return NULL;
- }
-
- /* fail if the service has not been registered */
- if (!cqm_handle->service[service_type].has_register) {
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
- return NULL;
- }
-
- /* wqe_size cannot exceed PAGE_SIZE and must be 2^n aligned. */
- if (wqe_size >= PAGE_SIZE || (!cqm_check_align(wqe_size))) {
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_size));
- return NULL;
- }
-
- /* FC RQ is SRQ. (Different from the SRQ concept of TOE, FC indicates
- * that packets received by all flows are placed on the same RQ.
- * The SRQ of TOE is similar to the RQ resource pool.)
- */
- if (object_type != CQM_OBJECT_NONRDMA_SRQ) {
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
- return NULL;
- }
-
- service = &cqm_handle->service[service_type];
- buf_size = (u32)(PAGE_SIZE << (service->buf_order));
- /* subtract 1 link wqe */
- valid_wqe_per_buffer = buf_size / wqe_size - 1;
- buf_num = wqe_number / valid_wqe_per_buffer;
- if (wqe_number % valid_wqe_per_buffer != 0)
- buf_num++;
-
- /* calculate the total number of WQEs */
- wqe_sum = buf_num * (valid_wqe_per_buffer + 1);
- nonrdma_qinfo = kmalloc(sizeof(*nonrdma_qinfo), GFP_KERNEL | __GFP_ZERO);
- CQM_PTR_CHECK_RET(nonrdma_qinfo, NULL, CQM_ALLOC_FAIL(nonrdma_qinfo));
-
- /* initialize object member */
- nonrdma_qinfo->common.object.service_type = service_type;
- nonrdma_qinfo->common.object.object_type = object_type;
- /* total number of WQEs */
- nonrdma_qinfo->common.object.object_size = wqe_sum;
- atomic_set(&nonrdma_qinfo->common.object.refcount, 1);
- init_completion(&nonrdma_qinfo->common.object.free);
- nonrdma_qinfo->common.object.cqm_handle = cqm_handle;
-
- /* Initialize the doorbell used by the current queue.
- * The default doorbell is the hardware doorbell.
- */
- nonrdma_qinfo->common.current_q_doorbell = CQM_HARDWARE_DOORBELL;
- /* Currently, the connection mode is fixed. In the future,
- * the service should pass in the connection mode.
- */
- nonrdma_qinfo->common.queue_link_mode = CQM_QUEUE_RING_MODE;
-
- /* initialize public members */
- nonrdma_qinfo->common.priv = object_priv;
- nonrdma_qinfo->common.valid_wqe_num = wqe_sum - buf_num;
-
- /* initialize internal private members */
- nonrdma_qinfo->wqe_size = wqe_size;
- /* For the RQ created by the FC service (also called the FC SRQ),
- * a CTX needs to be created.
- */
- nonrdma_qinfo->q_ctx_size = service->service_template.srq_ctx_size;
-
- ret = cqm_nonrdma_queue_create(&nonrdma_qinfo->common.object);
- if (ret == CQM_SUCCESS)
- return &nonrdma_qinfo->common;
-
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_fc_queue_create));
- kfree(nonrdma_qinfo);
- return NULL;
-}
-
-struct cqm_queue *cqm3_object_nonrdma_queue_create(void *ex_handle, u32 service_type,
- enum cqm_object_type object_type,
- u32 wqe_number, u32 wqe_size,
- void *object_priv)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
- struct cqm_nonrdma_qinfo *nonrdma_qinfo = NULL;
- struct cqm_handle *cqm_handle = NULL;
- struct cqm_service *service = NULL;
- s32 ret;
-
- CQM_PTR_CHECK_RET(ex_handle, NULL, CQM_PTR_NULL(ex_handle));
-
- atomic_inc(&handle->hw_stats.cqm_stats.cqm_nonrdma_queue_create_cnt);
-
- cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
- CQM_PTR_CHECK_RET(cqm_handle, NULL, CQM_PTR_NULL(cqm_handle));
-
- /* fail if the service has not been registered */
- if (!cqm_handle->service[service_type].has_register) {
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(service_type));
- return NULL;
- }
- /* wqe_size must not exceed PAGE_SIZE, must be nonzero, and must be
- * a power of 2; cqm_check_align() checks these constraints.
- */
- if (wqe_size >= PAGE_SIZE || (!cqm_check_align(wqe_size))) {
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(wqe_size));
- return NULL;
- }
-
- /* nonrdma supports: RQ, SQ, SRQ, CQ, SCQ */
- if (object_type < CQM_OBJECT_NONRDMA_EMBEDDED_RQ ||
- object_type > CQM_OBJECT_NONRDMA_SCQ) {
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object_type));
- return NULL;
- }
-
- nonrdma_qinfo = kmalloc(sizeof(*nonrdma_qinfo), GFP_KERNEL | __GFP_ZERO);
- CQM_PTR_CHECK_RET(nonrdma_qinfo, NULL, CQM_ALLOC_FAIL(nonrdma_qinfo));
-
- nonrdma_qinfo->common.object.service_type = service_type;
- nonrdma_qinfo->common.object.object_type = object_type;
- nonrdma_qinfo->common.object.object_size = wqe_number;
- atomic_set(&nonrdma_qinfo->common.object.refcount, 1);
- init_completion(&nonrdma_qinfo->common.object.free);
- nonrdma_qinfo->common.object.cqm_handle = cqm_handle;
-
- /* Initialize the doorbell used by the current queue.
- * The default value is hardware doorbell
- */
- nonrdma_qinfo->common.current_q_doorbell = CQM_HARDWARE_DOORBELL;
- /* Currently, the link mode is hardcoded; in the future it should be
- * passed in by the service side.
- */
- nonrdma_qinfo->common.queue_link_mode = CQM_QUEUE_RING_MODE;
-
- nonrdma_qinfo->common.priv = object_priv;
-
- /* Initialize internal private members */
- nonrdma_qinfo->wqe_size = wqe_size;
- service = &cqm_handle->service[service_type];
- switch (object_type) {
- case CQM_OBJECT_NONRDMA_SCQ:
- nonrdma_qinfo->q_ctx_size =
- service->service_template.scq_ctx_size;
- break;
- case CQM_OBJECT_NONRDMA_SRQ:
- /* Currently, the SRQ of the service is created through a
- * dedicated interface.
- */
- nonrdma_qinfo->q_ctx_size =
- service->service_template.srq_ctx_size;
- break;
- default:
- break;
- }
-
- ret = cqm_nonrdma_queue_create(&nonrdma_qinfo->common.object);
- if (ret == CQM_SUCCESS)
- return &nonrdma_qinfo->common;
-
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_nonrdma_queue_create));
- kfree(nonrdma_qinfo);
- return NULL;
-}
-
-void cqm_qpc_mpt_delete(struct cqm_object *object)
-{
- struct cqm_qpc_mpt *common = container_of(object, struct cqm_qpc_mpt, object);
- struct cqm_qpc_mpt_info *qpc_mpt_info = container_of(common,
- struct cqm_qpc_mpt_info,
- common);
- struct cqm_handle *cqm_handle = (struct cqm_handle *)object->cqm_handle;
- struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_object_table *object_table = NULL;
- struct cqm_cla_table *cla_table = NULL;
- u32 count = qpc_mpt_info->index_count;
- u32 index = qpc_mpt_info->common.xid;
- struct cqm_bitmap *bitmap = NULL;
-
- atomic_inc(&handle->hw_stats.cqm_stats.cqm_qpc_mpt_delete_cnt);
-
- /* find the corresponding cla table */
- if (object->object_type == CQM_OBJECT_SERVICE_CTX) {
- cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_QPC);
- } else {
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
- return;
- }
-
- CQM_PTR_CHECK_NO_RET(cla_table,
- CQM_FUNCTION_FAIL(cqm_cla_table_get_qpc));
-
- /* disassociate index and object */
- object_table = &cla_table->obj_table;
- if (object->service_type == CQM_SERVICE_T_FC)
- cqm_object_table_remove(cqm_handle, object_table, index, object,
- false);
- else
- cqm_object_table_remove(cqm_handle, object_table, index, object,
- true);
-
- /* wait for completion to ensure that all references to
- * the QPC are complete
- */
- if (atomic_dec_and_test(&object->refcount))
- complete(&object->free);
- else
- cqm_err(handle->dev_hdl, "Qpc mpt del: object is referred by others, has to wait for completion\n");
-
- /* Static QPC allocation must be non-blocking.
- * The service must ensure that the QPC is no longer
- * referenced when it is deleted.
- */
- if (!cla_table->alloc_static)
- wait_for_completion(&object->free);
-
- /* release qpc buffer */
- cqm_cla_put(cqm_handle, cla_table, index, count);
-
- /* release the index to the bitmap */
- bitmap = &cla_table->bitmap;
- cqm_bitmap_free(bitmap, index, count);
-}
-
-s32 cqm_qpc_mpt_delete_ret(struct cqm_object *object)
-{
- u32 object_type;
-
- object_type = object->object_type;
- switch (object_type) {
- case CQM_OBJECT_SERVICE_CTX:
- cqm_qpc_mpt_delete(object);
- return CQM_SUCCESS;
- default:
- return CQM_FAIL;
- }
-}
-
-void cqm_nonrdma_queue_delete(struct cqm_object *object)
-{
- struct cqm_queue *common = container_of(object, struct cqm_queue, object);
- struct cqm_nonrdma_qinfo *qinfo = container_of(common, struct cqm_nonrdma_qinfo,
- common);
- struct cqm_handle *cqm_handle = (struct cqm_handle *)object->cqm_handle;
- struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
- struct cqm_buf *q_room_buf = &common->q_room_buf_1;
- struct sphw_hwdev *handle = cqm_handle->ex_handle;
- struct cqm_object_table *object_table = NULL;
- struct cqm_cla_table *cla_table = NULL;
- struct cqm_bitmap *bitmap = NULL;
- u32 index = qinfo->common.index;
- u32 count = qinfo->index_count;
-
- atomic_inc(&handle->hw_stats.cqm_stats.cqm_nonrdma_queue_delete_cnt);
-
- /* The SCQ has an independent SCQN association. */
- if (object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
- cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
- CQM_PTR_CHECK_NO_RET(cla_table, CQM_FUNCTION_FAIL(cqm_cla_table_get_queue));
-
- /* disassociate index and object */
- object_table = &cla_table->obj_table;
- if (object->service_type == CQM_SERVICE_T_FC)
- cqm_object_table_remove(cqm_handle, object_table, index,
- object, false);
- else
- cqm_object_table_remove(cqm_handle, object_table, index,
- object, true);
- }
-
- /* wait for completion to ensure that all references to
- * the queue have been released
- */
- if (atomic_dec_and_test(&object->refcount))
- complete(&object->free);
- else
- cqm_err(handle->dev_hdl, "Nonrdma queue del: object is referred by others, has to wait for completion\n");
-
- wait_for_completion(&object->free);
-
- /* If the q header exists, release. */
- if (qinfo->common.q_header_vaddr) {
- pci_unmap_single(cqm_handle->dev, common->q_header_paddr,
- sizeof(struct cqm_queue_header),
- PCI_DMA_BIDIRECTIONAL);
-
- cqm_kfree_align(qinfo->common.q_header_vaddr);
- qinfo->common.q_header_vaddr = NULL;
- }
-
- cqm_buf_free(q_room_buf, cqm_handle->dev);
- /* SRQ and SCQ have independent CTXs and release. */
- if (object->object_type == CQM_OBJECT_NONRDMA_SRQ) {
- /* The CTX of the nonrdma SRQ is
- * allocated independently.
- */
- if (common->q_ctx_vaddr) {
- pci_unmap_single(cqm_handle->dev, common->q_ctx_paddr,
- qinfo->q_ctx_size,
- PCI_DMA_BIDIRECTIONAL);
-
- cqm_kfree_align(common->q_ctx_vaddr);
- common->q_ctx_vaddr = NULL;
- }
- } else if (object->object_type == CQM_OBJECT_NONRDMA_SCQ) {
- /* The CTX of the nonrdma SCQ is managed by BAT/CLA. */
- cqm_cla_put(cqm_handle, cla_table, index, count);
-
- /* release the index to the bitmap */
- bitmap = &cla_table->bitmap;
- cqm_bitmap_free(bitmap, index, count);
- }
-}
-
-s32 cqm_nonrdma_queue_delete_ret(struct cqm_object *object)
-{
- u32 object_type;
-
- object_type = object->object_type;
- switch (object_type) {
- case CQM_OBJECT_NONRDMA_EMBEDDED_RQ:
- case CQM_OBJECT_NONRDMA_EMBEDDED_SQ:
- case CQM_OBJECT_NONRDMA_EMBEDDED_CQ:
- case CQM_OBJECT_NONRDMA_SCQ:
- cqm_nonrdma_queue_delete(object);
- return CQM_SUCCESS;
- case CQM_OBJECT_NONRDMA_SRQ:
- cqm_nonrdma_queue_delete(object);
- return CQM_SUCCESS;
- default:
- return CQM_FAIL;
- }
-}
-
-void cqm3_object_delete(struct cqm_object *object)
-{
- struct cqm_handle *cqm_handle = NULL;
- struct sphw_hwdev *handle = NULL;
-
- CQM_PTR_CHECK_NO_RET(object, CQM_PTR_NULL(object));
- if (!object->cqm_handle) {
- pr_err("[CQM]object del: cqm_handle is null, service type %u, refcount %d\n",
- object->service_type, (int)object->refcount.counter);
- kfree(object);
- return;
- }
-
- cqm_handle = (struct cqm_handle *)object->cqm_handle;
-
- if (!cqm_handle->ex_handle) {
- pr_err("[CQM]object del: ex_handle is null, service type %u, refcount %d\n",
- object->service_type, (int)object->refcount.counter);
- kfree(object);
- return;
- }
-
- handle = cqm_handle->ex_handle;
-
- if (object->service_type >= CQM_SERVICE_T_MAX) {
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->service_type));
- kfree(object);
- return;
- }
-
- if (cqm_qpc_mpt_delete_ret(object) == CQM_SUCCESS) {
- kfree(object);
- return;
- }
-
- if (cqm_nonrdma_queue_delete_ret(object) == CQM_SUCCESS) {
- kfree(object);
- return;
- }
-
- cqm_err(handle->dev_hdl, CQM_WRONG_VALUE(object->object_type));
- kfree(object);
-}
-
-struct cqm_object *cqm3_object_get(void *ex_handle, enum cqm_object_type object_type,
- u32 index, bool bh)
-{
- struct sphw_hwdev *handle = (struct sphw_hwdev *)ex_handle;
- struct cqm_handle *cqm_handle = (struct cqm_handle *)(handle->cqm_hdl);
- struct cqm_bat_table *bat_table = &cqm_handle->bat_table;
- struct cqm_object_table *object_table = NULL;
- struct cqm_cla_table *cla_table = NULL;
- struct cqm_object *object = NULL;
-
- /* The data flow path takes performance into consideration and
- * does not check input parameters.
- */
- switch (object_type) {
- case CQM_OBJECT_SERVICE_CTX:
- cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_QPC);
- break;
- case CQM_OBJECT_NONRDMA_SCQ:
- cla_table = cqm_cla_table_get(bat_table, CQM_BAT_ENTRY_T_SCQC);
- break;
- default:
- return NULL;
- }
-
- if (!cla_table) {
- cqm_err(handle->dev_hdl, CQM_FUNCTION_FAIL(cqm_cla_table_get));
- return NULL;
- }
-
- object_table = &cla_table->obj_table;
- object = cqm_object_table_get(cqm_handle, object_table, index, bh);
- return object;
-}
-
-void cqm3_object_put(struct cqm_object *object)
-{
- /* The data flow path takes performance into consideration and
- * does not check input parameters.
- */
- if (atomic_dec_and_test(&object->refcount))
- complete(&object->free);
-}
diff --git a/drivers/scsi/spfc/hw/spfc_cqm_object.h b/drivers/scsi/spfc/hw/spfc_cqm_object.h
deleted file mode 100644
index 02a3e9070162..000000000000
--- a/drivers/scsi/spfc/hw/spfc_cqm_object.h
+++ /dev/null
@@ -1,279 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef SPFC_CQM_OBJECT_H
-#define SPFC_CQM_OBJECT_H
-
-#ifdef __cplusplus
-#if __cplusplus
-extern "C" {
-#endif
-#endif /* __cplusplus */
-
-#define CQM_SUCCESS 0
-#define CQM_FAIL (-1)
-/* Ignore the return value and continue */
-#define CQM_CONTINUE 1
-
-/* type of WQE is LINK WQE */
-#define CQM_WQE_WF_LINK 1
-
-/* chain queue mode */
-#define CQM_QUEUE_LINK_MODE 0
-/* RING queue mode */
-#define CQM_QUEUE_RING_MODE 1
-
-#define CQM_CQ_DEPTH_MAX 32768
-#define CQM_CQ_DEPTH_MIN 256
-
-/* linkwqe */
-#define CQM_LINK_WQE_CTRLSL_VALUE 2
-#define CQM_LINK_WQE_LP_VALID 1
-#define CQM_LINK_WQE_LP_INVALID 0
-#define CQM_LINK_WQE_OWNER_VALID 1
-#define CQM_LINK_WQE_OWNER_INVALID 0
-
-#define CQM_ADDR_HI(addr) ((u32)((u64)(addr) >> 32))
-#define CQM_ADDR_LW(addr) ((u32)((u64)(addr) & 0xffffffff))
-
-#define CQM_QPC_LAYOUT_TABLE_SIZE 16
-
-#define CQM_MOD_CQM 8
-
-/* generic linkwqe structure */
-struct cqm_linkwqe {
- u32 rsv1 : 14; /* <reserved field */
- u32 wf : 1; /* <wf */
- u32 rsv2 : 14; /* <reserved field */
- u32 ctrlsl : 2; /* <ctrlsl */
- u32 o : 1; /* <o bit */
-
- u32 rsv3 : 31; /* <reserved field */
- u32 lp : 1; /* The lp field determines whether the o-bit meaning is reversed. */
- u32 next_page_gpa_h;
- u32 next_page_gpa_l;
- u32 next_buffer_addr_h;
- u32 next_buffer_addr_l;
-};
-
-/* SRQ linkwqe structure. The wqe size must not exceed the common RQE size. */
-struct cqm_srq_linkwqe {
- struct cqm_linkwqe linkwqe; /* <generic linkwqe structure */
- u32 current_buffer_gpa_h;
- u32 current_buffer_gpa_l;
- u32 current_buffer_addr_h;
- u32 current_buffer_addr_l;
-
- u32 fast_link_page_addr_h;
- u32 fast_link_page_addr_l;
-
- u32 fixed_next_buffer_addr_h;
- u32 fixed_next_buffer_addr_l;
-};
-
-#define CQM_LINKWQE_128B 128
-
-/* first 64B of standard 128B WQE */
-union cqm_linkwqe_first64B {
- struct cqm_linkwqe basic_linkwqe; /* <generic linkwqe structure */
- u32 value[16]; /* <reserved field */
-};
-
-/* second 64B of standard 128B WQE */
-struct cqm_linkwqe_second64B {
- u32 rsvd0[4]; /* <first 16B reserved field */
- u32 rsvd1[4]; /* <second 16B reserved field */
- u32 rsvd2[4];
-
- union {
- struct {
- u32 rsvd0[2];
- u32 rsvd1 : 31;
- u32 ifoe_o : 1; /* <o bit of ifoe */
- u32 rsvd2;
- } bs;
- u32 value[4];
- } forth_16B; /* <fourth 16B */
-};
-
-/* standard 128B WQE structure */
-struct cqm_linkwqe_128B {
- union cqm_linkwqe_first64B first64B; /* <first 64B of standard 128B WQE */
- struct cqm_linkwqe_second64B second64B; /* <second 64B of standard 128B WQE */
-};
-
-/* AEQ type definition */
-enum cqm_aeq_event_type {
- CQM_AEQ_BASE_T_FC = 48, /* <FC consists of 8 events:48~55 */
- CQM_AEQ_MAX_T_FC = 56
-};
-
-/* service registration template */
-struct service_register_template {
- u32 service_type; /* <service type */
- u32 srq_ctx_size; /* <SRQ context size */
- u32 scq_ctx_size; /* <SCQ context size */
- void *service_handle;
- u8 (*aeq_level_callback)(void *service_handle, u8 event_type, u8 *val);
- void (*aeq_callback)(void *service_handle, u8 event_type, u8 *val);
-};
-
-/* object operation type definition */
-enum cqm_object_type {
- CQM_OBJECT_ROOT_CTX = 0, /* <0:root context, which is compatible with root CTX management */
- CQM_OBJECT_SERVICE_CTX, /* <1:QPC, connection management object */
- CQM_OBJECT_NONRDMA_EMBEDDED_RQ = 10, /* <10:RQ of non-RDMA services, managed by LINKWQE */
- CQM_OBJECT_NONRDMA_EMBEDDED_SQ, /* <11:SQ of non-RDMA services, managed by LINKWQE */
- /* <12:SRQ of non-RDMA services, managed by MTT, but the CQM needs to apply for MTT. */
- CQM_OBJECT_NONRDMA_SRQ,
- /* <13:Embedded CQ for non-RDMA services, managed by LINKWQE */
- CQM_OBJECT_NONRDMA_EMBEDDED_CQ,
- CQM_OBJECT_NONRDMA_SCQ, /* <14:SCQ of non-RDMA services, managed by LINKWQE */
-};
-
-/* return value of the failure to apply for the BITMAP table */
-#define CQM_INDEX_INVALID (~(0U))
-
-/* doorbell mode selected by the current Q, hardware doorbell */
-#define CQM_HARDWARE_DOORBELL 1
-
-/* single-node structure of the CQM buffer */
-struct cqm_buf_list {
- void *va; /* <virtual address */
- dma_addr_t pa; /* <physical address */
- u32 refcount; /* <reference count of the buf, which is used for internal buf management. */
-};
-
-/* common management structure of the CQM buffer */
-struct cqm_buf {
- struct cqm_buf_list *buf_list; /* <buffer list */
- /* <map the discrete buffer list to a group of consecutive addresses */
- struct cqm_buf_list direct;
- u32 page_number; /* <page_number = buf_number * (2^order) */
- u32 buf_number; /* <number of buf_list nodes */
- u32 buf_size; /* <buf_size = PAGE_SIZE * (2^order) */
-};
-
-/* CQM object structure, which can be considered
- * as the base class abstracted from all queues/CTX.
- */
-struct cqm_object {
- u32 service_type; /* <service type */
- u32 object_type; /* <object type, such as context, queue, mpt, and mtt, etc */
- u32 object_size; /* <object size; for queue/CTX/MPT the unit is bytes */
- atomic_t refcount; /* <reference counting */
- struct completion free; /* <release completed quantity */
- void *cqm_handle; /* <cqm_handle */
-};
-
-/* structure of the QPC and MPT objects of the CQM */
-struct cqm_qpc_mpt {
- struct cqm_object object;
- u32 xid;
- dma_addr_t paddr; /* <physical address of the QPC/MTT memory */
- void *priv; /* <private information about the object of the service driver. */
- u8 *vaddr; /* <virtual address of the QPC/MTT memory */
-};
-
-/* queue header structure */
-struct cqm_queue_header {
- u64 doorbell_record; /* <SQ/RQ DB content */
- u64 ci_record; /* <CQ DB content */
- u64 rsv1;
- u64 rsv2;
-};
-
-/* queue management structure: for non-RDMA services, embedded queues are
- * managed by link WQE; SRQ and SCQ are managed by MTT, but the MTT must be
- * allocated by the CQM. RDMA service queues are managed by MTT.
- */
-struct cqm_queue {
- struct cqm_object object; /* <object base class */
- /* <The embedded queue and QP do not have indexes, but the SRQ and SCQ do. */
- u32 index;
- /* <private information about the object of the service driver */
- void *priv;
- /* <doorbell type selected by the current queue. HW/SW are used for the roce QP. */
- u32 current_q_doorbell;
- u32 current_q_room;
- struct cqm_buf q_room_buf_1; /* <nonrdma:only q_room_buf_1 can be set to q_room_buf */
- struct cqm_buf q_room_buf_2; /* <The CQ of RDMA reallocates the size of the queue room. */
- struct cqm_queue_header *q_header_vaddr; /* <queue header virtual address */
- dma_addr_t q_header_paddr; /* <physical address of the queue header */
- u8 *q_ctx_vaddr; /* <CTX virtual addresses of SRQ and SCQ */
- dma_addr_t q_ctx_paddr; /* <CTX physical addresses of SRQ and SCQ */
- u32 valid_wqe_num; /* <number of valid WQEs that are successfully created */
- u8 *tail_container; /* <tail pointer of the SRQ container */
- u8 *head_container; /* <head pointer of SRQ container */
- /* <Determine the connection mode during queue creation, such as link and ring. */
- u8 queue_link_mode;
-};
-
-struct cqm_qpc_layout_table_node {
- u32 type;
- u32 size;
- u32 offset;
- struct cqm_object *object;
-};
-
-struct cqm_qpc_mpt_info {
- struct cqm_qpc_mpt common;
- /* Different services have different QPC sizes.
- * A large QPC/MPT occupies several consecutive indexes in the bitmap.
- */
- u32 index_count;
- struct cqm_qpc_layout_table_node qpc_layout_table[CQM_QPC_LAYOUT_TABLE_SIZE];
-};
-
-struct cqm_nonrdma_qinfo {
- struct cqm_queue common;
- u32 wqe_size;
- /* Number of WQEs in each buffer (excluding link WQEs)
- * For SRQ, the value is the number of WQEs contained in a container.
- */
- u32 wqe_per_buf;
- u32 q_ctx_size;
- /* When different services use CTXs of different sizes,
- * a large CTX occupies multiple consecutive indexes in the bitmap.
- */
- u32 index_count;
- /* add for srq */
- u32 container_size;
-};
-
-/* sending command structure */
-struct cqm_cmd_buf {
- void *buf;
- dma_addr_t dma;
- u16 size;
-};
-
-struct cqm_queue *cqm3_object_fc_srq_create(void *ex_handle, u32 service_type,
- enum cqm_object_type object_type,
- u32 wqe_number, u32 wqe_size,
- void *object_priv);
-struct cqm_qpc_mpt *cqm3_object_qpc_mpt_create(void *ex_handle, u32 service_type,
- enum cqm_object_type object_type,
- u32 object_size, void *object_priv,
- u32 index);
-struct cqm_queue *cqm3_object_nonrdma_queue_create(void *ex_handle, u32 service_type,
- enum cqm_object_type object_type,
- u32 wqe_number, u32 wqe_size,
- void *object_priv);
-void cqm3_object_delete(struct cqm_object *object);
-struct cqm_object *cqm3_object_get(void *ex_handle, enum cqm_object_type object_type,
- u32 index, bool bh);
-void cqm3_object_put(struct cqm_object *object);
-
-s32 cqm3_ring_hardware_db_fc(void *ex_handle, u32 service_type, u8 db_count,
- u8 pagenum, u64 db);
-s32 cqm_ring_direct_wqe_db(void *ex_handle, u32 service_type, u8 db_count, void *direct_wqe);
-s32 cqm_ring_direct_wqe_db_fc(void *ex_handle, u32 service_type, void *direct_wqe);
-
-#ifdef __cplusplus
-#if __cplusplus
-}
-#endif
-#endif /* __cplusplus */
-
-#endif /* SPFC_CQM_OBJECT_H */
diff --git a/drivers/scsi/spfc/hw/spfc_hba.c b/drivers/scsi/spfc/hw/spfc_hba.c
deleted file mode 100644
index b033dcb78bb3..000000000000
--- a/drivers/scsi/spfc/hw/spfc_hba.c
+++ /dev/null
@@ -1,1751 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "spfc_hba.h"
-#include "spfc_module.h"
-#include "spfc_utils.h"
-#include "spfc_chipitf.h"
-#include "spfc_io.h"
-#include "spfc_lld.h"
-#include "sphw_hw.h"
-#include "spfc_cqm_main.h"
-
-struct spfc_hba_info *spfc_hba[SPFC_HBA_PORT_MAX_NUM];
-ulong probe_bit_map[SPFC_MAX_PROBE_PORT_NUM / SPFC_PORT_NUM_PER_TABLE];
-static ulong card_num_bit_map[SPFC_MAX_PROBE_PORT_NUM / SPFC_PORT_NUM_PER_TABLE];
-static struct spfc_card_num_manage card_num_manage[SPFC_MAX_CARD_NUM];
-spinlock_t probe_spin_lock;
-u32 max_parent_qpc_num;
-
-static int spfc_probe(struct spfc_lld_dev *lld_dev, void **uld_dev, char *uld_dev_name);
-static void spfc_remove(struct spfc_lld_dev *lld_dev, void *uld_dev);
-static u32 spfc_initial_chip_access(struct spfc_hba_info *hba);
-static void spfc_release_chip_access(struct spfc_hba_info *hba);
-static u32 spfc_port_config_set(void *hba, enum unf_port_config_set_op opcode, void *var_in);
-static u32 spfc_port_config_get(void *hba, enum unf_port_cfg_get_op opcode, void *para_out);
-static u32 spfc_port_update_wwn(void *hba, void *para_in);
-static u32 spfc_get_chip_info(struct spfc_hba_info *hba);
-static u32 spfc_delete_scqc_via_cmdq_sync(struct spfc_hba_info *hba, u32 scqn);
-static u32 spfc_delete_srqc_via_cmdq_sync(struct spfc_hba_info *hba, u64 sqrc_gpa);
-static u32 spfc_get_hba_pcie_link_state(void *hba, void *link_state);
-static u32 spfc_port_check_fw_ready(struct spfc_hba_info *hba);
-static u32 spfc_set_port_state(void *hba, void *para_in);
-
-struct spfc_uld_info fc_uld_info = {
- .probe = spfc_probe,
- .remove = spfc_remove,
- .resume = NULL,
- .event = NULL,
- .suspend = NULL,
- .ioctl = NULL
-};
-
-struct service_register_template service_cqm_temp = {
- .service_type = SERVICE_T_FC,
- .scq_ctx_size = SPFC_SCQ_CNTX_SIZE,
- .srq_ctx_size = SPFC_SRQ_CNTX_SIZE, /* srq, scq context_size configuration */
- .aeq_callback = spfc_process_aeqe, /* the API of asynchronous event from TILE to driver */
-};
-
-/* default configuration: auto speed, auto topology, INI+TGT */
-static struct unf_cfg_item spfc_port_cfg_parm[] = {
- {"port_id", 0, 0x110000, 0xffffff},
- /* port mode:INI(0x20), TGT(0x10), BOTH(0x30) */
- {"port_mode", 0, 0x20, 0xff},
- /* port topology, 0x3: loop, 0xc: p2p, 0xf: auto, 0x10: vn2vn */
- {"port_topology", 0, 0xf, 0x20},
- {"port_alpa", 0, 0xdead, 0xffff}, /* alpa address of port */
- /* queue depth of originator registered to SCSI midlayer */
- {"max_queue_depth", 0, 512, 512},
- {"sest_num", 0, 2048, 2048},
- {"max_login", 0, 2048, 2048},
- /* bits 32-63 of the node name */
- {"node_name_high", 0, 0x1000286e, 0xffffffff},
- /* bits 0-31 of the node name */
- {"node_name_low", 0, 0xd4bbf12f, 0xffffffff},
- /* bits 32-63 of the port name */
- {"port_name_high", 0, 0x2000286e, 0xffffffff},
- /* bits 0-31 of the port name */
- {"port_name_low", 0, 0xd4bbf12f, 0xffffffff},
- /* port speed 0:auto 1:1Gbps 2:2Gbps 3:4Gbps 4:8Gbps 5:16Gbps */
- {"port_speed", 0, 0, 32},
- {"interrupt_delay", 0, 0, 100}, /* unit: us */
- {"tape_support", 0, 0, 1}, /* tape support */
- {"End", 0, 0, 0}
-};
-
-struct unf_low_level_functioon_op spfc_func_op = {
- .low_level_type = UNF_SPFC_FC,
- .name = "SPFC",
- .xchg_mgr_type = UNF_LOW_LEVEL_MGR_TYPE_PASSTIVE,
- .abts_xchg = UNF_NO_EXTRA_ABTS_XCHG,
- .passthrough_flag = UNF_LOW_LEVEL_PASS_THROUGH_PORT_LOGIN,
- .support_max_npiv_num = UNF_SPFC_MAXNPIV_NUM,
- .support_max_ssq_num = SPFC_MAX_SSQ_NUM - 1,
- .chip_id = 0,
- .support_max_speed = UNF_PORT_SPEED_32_G,
- .support_max_rport = UNF_SPFC_MAXRPORT_NUM,
- .sfp_type = UNF_PORT_TYPE_FC_SFP,
- .rport_release_type = UNF_LOW_LEVEL_RELEASE_RPORT_ASYNC,
- .sirt_page_mode = UNF_LOW_LEVEL_SIRT_PAGE_MODE_XCHG,
-
- /* Link service */
- .service_op = {
- .unf_ls_gs_send = spfc_send_ls_gs_cmnd,
- .unf_bls_send = spfc_send_bls_cmnd,
- .unf_cmnd_send = spfc_send_scsi_cmnd,
- .unf_release_rport_res = spfc_free_parent_resource,
- .unf_flush_ini_resp_que = spfc_flush_ini_resp_queue,
- .unf_alloc_rport_res = spfc_alloc_parent_resource,
- .ll_release_xid = spfc_free_xid,
- },
-
- /* Port Mgr */
- .port_mgr_op = {
- .ll_port_config_set = spfc_port_config_set,
- .ll_port_config_get = spfc_port_config_get,
- }
-};
-
-struct spfc_port_cfg_op {
- enum unf_port_config_set_op opcode;
- u32 (*spfc_operation)(void *hba, void *para);
-};
-
-struct spfc_port_cfg_op spfc_config_set_op[] = {
- {UNF_PORT_CFG_SET_PORT_SWITCH, spfc_sfp_switch},
- {UNF_PORT_CFG_UPDATE_WWN, spfc_port_update_wwn},
- {UNF_PORT_CFG_SET_PORT_STATE, spfc_set_port_state},
- {UNF_PORT_CFG_UPDATE_FABRIC_PARAM, spfc_update_fabric_param},
- {UNF_PORT_CFG_UPDATE_PLOGI_PARAM, spfc_update_port_param},
- {UNF_PORT_CFG_SET_BUTT, NULL}
-};
-
-struct spfc_port_cfg_get_op {
- enum unf_port_cfg_get_op opcode;
- u32 (*spfc_operation)(void *hba, void *para);
-};
-
-struct spfc_port_cfg_get_op spfc_config_get_op[] = {
- {UNF_PORT_CFG_GET_TOPO_ACT, spfc_get_topo_act},
- {UNF_PORT_CFG_GET_LOOP_MAP, spfc_get_loop_map},
- {UNF_PORT_CFG_GET_WORKBALE_BBCREDIT, spfc_get_workable_bb_credit},
- {UNF_PORT_CFG_GET_WORKBALE_BBSCN, spfc_get_workable_bb_scn},
- {UNF_PORT_CFG_GET_LOOP_ALPA, spfc_get_loop_alpa},
- {UNF_PORT_CFG_GET_MAC_ADDR, spfc_get_chip_msg},
- {UNF_PORT_CFG_GET_PCIE_LINK_STATE, spfc_get_hba_pcie_link_state},
- {UNF_PORT_CFG_GET_BUTT, NULL},
-};
-
-static u32 spfc_set_port_state(void *hba, void *para_in)
-{
- u32 ret = UNF_RETURN_ERROR;
- enum unf_port_config_state port_state = UNF_PORT_CONFIG_STATE_START;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(para_in, UNF_RETURN_ERROR);
-
- port_state = *((enum unf_port_config_state *)para_in);
- switch (port_state) {
- case UNF_PORT_CONFIG_STATE_RESET:
- ret = (u32)spfc_port_reset(hba);
- break;
-
- default:
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Cannot set port_state(0x%x)", port_state);
- break;
- }
-
- return ret;
-}
-
-static u32 spfc_port_update_wwn(void *hba, void *para_in)
-{
- struct unf_port_wwn *port_wwn = NULL;
- struct spfc_hba_info *spfc_hba = hba;
-
- FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(para_in, UNF_RETURN_ERROR);
-
- port_wwn = (struct unf_port_wwn *)para_in;
-
- /* Update the WWNs stored in the hba */
- *(u64 *)spfc_hba->sys_node_name = port_wwn->sys_node_name;
- *(u64 *)spfc_hba->sys_port_name = port_wwn->sys_port_wwn;
-
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
- "[info]Port(0x%x) updates WWNN(0x%llx) WWPN(0x%llx)",
- spfc_hba->port_cfg.port_id,
- *(u64 *)spfc_hba->sys_node_name,
- *(u64 *)spfc_hba->sys_port_name);
-
- return RETURN_OK;
-}
-
-static u32 spfc_port_config_set(void *hba, enum unf_port_config_set_op opcode,
- void *var_in)
-{
- u32 op_idx = 0;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
-
- for (op_idx = 0; op_idx < sizeof(spfc_config_set_op) /
- sizeof(struct spfc_port_cfg_op); op_idx++) {
- if (opcode == spfc_config_set_op[op_idx].opcode) {
- if (!spfc_config_set_op[op_idx].spfc_operation) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Null operation for configuration, opcode(0x%x), operation ID(0x%x)",
- opcode, op_idx);
-
- return UNF_RETURN_ERROR;
- }
- return spfc_config_set_op[op_idx].spfc_operation(hba, var_in);
- }
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]No operation code for configuration, opcode(0x%x)",
- opcode);
-
- return UNF_RETURN_ERROR;
-}
-
-static u32 spfc_port_config_get(void *hba, enum unf_port_cfg_get_op opcode,
- void *para_out)
-{
- u32 op_idx = 0;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
-
- for (op_idx = 0; op_idx < sizeof(spfc_config_get_op) /
- sizeof(struct spfc_port_cfg_get_op); op_idx++) {
- if (opcode == spfc_config_get_op[op_idx].opcode) {
- if (!spfc_config_get_op[op_idx].spfc_operation) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Null operation to get configuration, opcode(0x%x), operation ID(0x%x)",
- opcode, op_idx);
- return UNF_RETURN_ERROR;
- }
- return spfc_config_get_op[op_idx].spfc_operation(hba, para_out);
- }
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]No operation to get configuration, opcode(0x%x)",
- opcode);
-
- return UNF_RETURN_ERROR;
-}
-
-static u32 spfc_fc_mode_check(void *hw_dev_handle)
-{
- FC_CHECK_RETURN_VALUE(hw_dev_handle, UNF_RETURN_ERROR);
-
- if (!sphw_support_fc(hw_dev_handle, NULL)) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
-			     "[err]Work mode is incorrect");
- return UNF_RETURN_ERROR;
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Selected work mode is FC");
-
- return RETURN_OK;
-}
-
-static u32 spfc_check_port_cfg(const struct spfc_port_cfg *port_cfg)
-{
- bool topo_condition = false;
- bool speed_condition = false;
- /* About Work Topology */
- topo_condition = ((port_cfg->port_topology != UNF_TOP_LOOP_MASK) &&
- (port_cfg->port_topology != UNF_TOP_P2P_MASK) &&
- (port_cfg->port_topology != UNF_TOP_AUTO_MASK));
- if (topo_condition) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Configured port topology(0x%x) is incorrect",
- port_cfg->port_topology);
-
- return UNF_RETURN_ERROR;
- }
-
- /* About Work Mode */
- if (port_cfg->port_mode != UNF_PORT_MODE_INI) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Configured port mode(0x%x) is incorrect",
- port_cfg->port_mode);
-
- return UNF_RETURN_ERROR;
- }
-
- /* About Work Speed */
- speed_condition = ((port_cfg->port_speed != UNF_PORT_SPEED_AUTO) &&
- (port_cfg->port_speed != UNF_PORT_SPEED_2_G) &&
- (port_cfg->port_speed != UNF_PORT_SPEED_4_G) &&
- (port_cfg->port_speed != UNF_PORT_SPEED_8_G) &&
- (port_cfg->port_speed != UNF_PORT_SPEED_16_G) &&
- (port_cfg->port_speed != UNF_PORT_SPEED_32_G));
- if (speed_condition) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Configured port speed(0x%x) is incorrect",
- port_cfg->port_speed);
-
- return UNF_RETURN_ERROR;
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[info]Check port configuration OK");
-
- return RETURN_OK;
-}
-
-static u32 spfc_get_port_cfg(struct spfc_hba_info *hba,
- struct spfc_chip_info *chip_info, u8 card_num)
-{
-#define UNF_CONFIG_ITEM_LEN 15
- /* Maximum length of a configuration item name, including the end
- * character
- */
-#define UNF_MAX_ITEM_NAME_LEN (32 + 1)
-
- /* Get and check parameters */
- char cfg_item[UNF_MAX_ITEM_NAME_LEN];
- u32 ret = UNF_RETURN_ERROR;
- struct spfc_hba_info *spfc_hba = hba;
-
- FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
- memset((void *)cfg_item, 0, sizeof(cfg_item));
-
- spfc_hba->card_info.func_num = (sphw_global_func_id(hba->dev_handle)) & UNF_FUN_ID_MASK;
- spfc_hba->card_info.card_num = card_num;
-
-	/* FC server PFs range from PF1 to PF2 */
- snprintf(cfg_item, UNF_MAX_ITEM_NAME_LEN, "spfc_cfg_%1u", (spfc_hba->card_info.func_num));
-
- cfg_item[UNF_MAX_ITEM_NAME_LEN - 1] = 0;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[info]Get port configuration: %s", cfg_item);
-
- /* Get configuration parameters from file */
- UNF_LOWLEVEL_GET_CFG_PARMS(ret, cfg_item, &spfc_port_cfg_parm[ARRAY_INDEX_0],
- (u32 *)(void *)(&spfc_hba->port_cfg),
- sizeof(spfc_port_cfg_parm) / sizeof(struct unf_cfg_item));
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) can't get configuration",
- spfc_hba->port_cfg.port_id);
-
- return ret;
- }
-
- if (max_parent_qpc_num <= SPFC_MAX_PARENT_QPC_NUM) {
- spfc_hba->port_cfg.sest_num = UNF_SPFC_MAXRPORT_NUM;
- spfc_hba->port_cfg.max_login = UNF_SPFC_MAXRPORT_NUM;
- }
-
- spfc_hba->port_cfg.port_id &= SPFC_PORT_ID_MASK;
- spfc_hba->port_cfg.port_id |= spfc_hba->card_info.card_num << UNF_SHIFT_8;
- spfc_hba->port_cfg.port_id |= spfc_hba->card_info.func_num;
- spfc_hba->port_cfg.tape_support = (u32)chip_info->tape_support;
-
- /* Parameters check */
- ret = spfc_check_port_cfg(&spfc_hba->port_cfg);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) check configuration incorrect",
- spfc_hba->port_cfg.port_id);
-
- return ret;
- }
-
- /* Set configuration which is got from file */
- spfc_hba->port_speed_cfg = spfc_hba->port_cfg.port_speed;
- spfc_hba->port_topo_cfg = spfc_hba->port_cfg.port_topology;
- spfc_hba->port_mode = (enum unf_port_mode)(spfc_hba->port_cfg.port_mode);
-
- return ret;
-}
-
-void spfc_generate_sys_wwn(struct spfc_hba_info *hba)
-{
- FC_CHECK_RETURN_VOID(hba);
-
- *(u64 *)hba->sys_node_name = (((u64)hba->port_cfg.node_name_hi << UNF_SHIFT_32) |
- (hba->port_cfg.node_name_lo));
- *(u64 *)hba->sys_port_name = (((u64)hba->port_cfg.port_name_hi << UNF_SHIFT_32) |
- (hba->port_cfg.port_name_lo));
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[info]NodeName = 0x%llx, PortName = 0x%llx",
- *(u64 *)hba->sys_node_name, *(u64 *)hba->sys_port_name);
-}
-
-static u32 spfc_create_queues(struct spfc_hba_info *hba)
-{
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
-
- SPFC_FUNCTION_ENTER;
-
- /* Initialize shared resources of SCQ and SRQ in parent queue */
- ret = spfc_create_common_share_queues(hba);
- if (ret != RETURN_OK)
- goto out_create_common_queue_fail;
-
- /* Initialize parent queue manager resources */
- ret = spfc_alloc_parent_queue_mgr(hba);
- if (ret != RETURN_OK)
- goto out_free_share_queue_resource;
-
- /* Initialize shared WQE page pool in parent SQ */
- ret = spfc_alloc_parent_sq_wqe_page_pool(hba);
- if (ret != RETURN_OK)
- goto out_free_parent_queue_resource;
-
- ret = spfc_create_ssq(hba);
- if (ret != RETURN_OK)
- goto out_free_parent_wqe_page_pool;
-
- /*
- * Notice: the configuration of SQ and QID(default_sqid)
- * must be the same in FC
- */
- hba->next_clear_sq = 0;
- hba->default_sqid = SPFC_QID_SQ;
-
- SPFC_FUNCTION_RETURN;
- return RETURN_OK;
-out_free_parent_wqe_page_pool:
- spfc_free_parent_sq_wqe_page_pool(hba);
-
-out_free_parent_queue_resource:
- spfc_free_parent_queue_mgr(hba);
-
-out_free_share_queue_resource:
- spfc_flush_scq_ctx(hba);
- spfc_flush_srq_ctx(hba);
- spfc_destroy_common_share_queues(hba);
-
-out_create_common_queue_fail:
- SPFC_FUNCTION_RETURN;
-
- return ret;
-}
-
-static u32 spfc_alloc_dma_buffers(struct spfc_hba_info *hba)
-{
- struct pci_dev *pci_dev = NULL;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
- pci_dev = hba->pci_dev;
- FC_CHECK_RETURN_VALUE(pci_dev, UNF_RETURN_ERROR);
-
- hba->sfp_buf = dma_alloc_coherent(&hba->pci_dev->dev,
- sizeof(struct unf_sfp_err_rome_info),
- &hba->sfp_dma_addr, GFP_KERNEL);
- if (!hba->sfp_buf) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) can't allocate SFP DMA buffer",
- hba->port_cfg.port_id);
-
- return UNF_RETURN_ERROR;
- }
- memset(hba->sfp_buf, 0, sizeof(struct unf_sfp_err_rome_info));
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) allocate sfp buffer(0x%p 0x%llx)",
- hba->port_cfg.port_id, hba->sfp_buf,
- (u64)hba->sfp_dma_addr);
-
- return RETURN_OK;
-}
-
-static void spfc_free_dma_buffers(struct spfc_hba_info *hba)
-{
- struct pci_dev *pci_dev = NULL;
-
- FC_CHECK_RETURN_VOID(hba);
- pci_dev = hba->pci_dev;
- FC_CHECK_RETURN_VOID(pci_dev);
-
- if (hba->sfp_buf) {
- dma_free_coherent(&pci_dev->dev, sizeof(struct unf_sfp_err_rome_info),
- hba->sfp_buf, hba->sfp_dma_addr);
-
- hba->sfp_buf = NULL;
- hba->sfp_dma_addr = 0;
- }
-}
-
-static void spfc_destroy_queues(struct spfc_hba_info *hba)
-{
- /* Free ssq */
- spfc_free_ssq(hba, SPFC_MAX_SSQ_NUM);
-
- /* Free parent queue resource */
- spfc_free_parent_queues(hba);
-
- /* Free queue manager resource */
- spfc_free_parent_queue_mgr(hba);
-
- /* Free linked List SQ and WQE page pool resource */
- spfc_free_parent_sq_wqe_page_pool(hba);
-
- /* Free shared SRQ and SCQ queue resource */
- spfc_destroy_common_share_queues(hba);
-}
-
-static u32 spfc_alloc_default_session(struct spfc_hba_info *hba)
-{
- struct unf_port_info rport_info = {0};
- u32 wait_sq_cnt = 0;
-
- rport_info.nport_id = 0xffffff;
- rport_info.rport_index = SPFC_DEFAULT_RPORT_INDEX;
- rport_info.local_nport_id = 0xffffff;
- rport_info.port_name = 0;
- rport_info.cs_ctrl = 0x81;
-
- if (spfc_alloc_parent_resource((void *)hba, &rport_info) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Alloc default session resource failed");
- goto failed;
- }
-
- for (;;) {
- if (hba->default_sq_info.default_sq_flag == 1)
- break;
-
- msleep(SPFC_WAIT_SESS_ENABLE_ONE_TIME_MS);
- wait_sq_cnt++;
- if (wait_sq_cnt >= SPFC_MAX_WAIT_LOOP_TIMES) {
- hba->default_sq_info.default_sq_flag = 0xF;
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Wait Default Session enable timeout");
- goto failed;
- }
- }
-
- if (spfc_mbx_config_default_session(hba, 1) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Notify up config default session table fail");
- goto failed;
- }
-
- return RETURN_OK;
-
-failed:
- spfc_sess_resource_free_sync((void *)hba, &rport_info);
- return UNF_RETURN_ERROR;
-}
-
-static u32 spfc_init_host_res(struct spfc_hba_info *hba)
-{
- u32 ret = RETURN_OK;
- struct spfc_hba_info *spfc_hba = hba;
-
- FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
-
- SPFC_FUNCTION_ENTER;
-
- /* Initialize spin lock */
- spin_lock_init(&spfc_hba->hba_lock);
- spin_lock_init(&spfc_hba->flush_state_lock);
- spin_lock_init(&spfc_hba->clear_state_lock);
- spin_lock_init(&spfc_hba->spin_lock);
- spin_lock_init(&spfc_hba->srq_delay_info.srq_lock);
- /* Initialize init_completion */
- init_completion(&spfc_hba->hba_init_complete);
- init_completion(&spfc_hba->mbox_complete);
- init_completion(&spfc_hba->vpf_complete);
- init_completion(&spfc_hba->fcfi_complete);
- init_completion(&spfc_hba->get_sfp_complete);
- /* Step-1: initialize the communication channel between driver and uP */
- ret = spfc_initial_chip_access(spfc_hba);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]SPFC port(0x%x) can't initialize chip access",
- spfc_hba->port_cfg.port_id);
-
- goto out_unmap_memory;
- }
- /* Step-2: get chip configuration information before creating
- * queue resources
- */
- ret = spfc_get_chip_info(spfc_hba);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]SPFC port(0x%x) can't get chip information",
- spfc_hba->port_cfg.port_id);
-
- goto out_unmap_memory;
- }
-
- /* Step-3: create queue resources */
- ret = spfc_create_queues(spfc_hba);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]SPFC port(0x%x) can't create queues",
- spfc_hba->port_cfg.port_id);
-
- goto out_release_chip_access;
- }
- /* Allocate DMA buffer (SFP information) */
- ret = spfc_alloc_dma_buffers(spfc_hba);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]SPFC port(0x%x) can't allocate DMA buffers",
- spfc_hba->port_cfg.port_id);
-
- goto out_destroy_queues;
- }
- /* Initialize status parameters */
- spfc_hba->active_port_speed = UNF_PORT_SPEED_UNKNOWN;
- spfc_hba->active_topo = UNF_ACT_TOP_UNKNOWN;
- spfc_hba->sfp_on = false;
- spfc_hba->port_loop_role = UNF_LOOP_ROLE_MASTER_OR_SLAVE;
- spfc_hba->phy_link = UNF_PORT_LINK_DOWN;
- spfc_hba->queue_set_stage = SPFC_QUEUE_SET_STAGE_INIT;
-
- /* Initialize parameters referring to the lowlevel */
- spfc_hba->remote_rttov_tag = 0;
- spfc_hba->port_bb_scn_cfg = SPFC_LOWLEVEL_DEFAULT_BB_SCN;
-
- /* Initialize timer, and the unit of E_D_TOV is ms */
- spfc_hba->remote_edtov_tag = 0;
- spfc_hba->remote_bb_credit = 0;
- spfc_hba->compared_bb_scn = 0;
- spfc_hba->compared_edtov_val = UNF_DEFAULT_EDTOV;
- spfc_hba->compared_ratov_val = UNF_DEFAULT_RATOV;
- spfc_hba->removing = false;
- spfc_hba->dev_present = true;
-
- /* Initialize parameters about cos */
- spfc_hba->cos_bitmap = cos_bit_map;
- memset(spfc_hba->cos_rport_cnt, 0, SPFC_MAX_COS_NUM * sizeof(atomic_t));
-
- /* Mailbox access completion */
- complete(&spfc_hba->mbox_complete);
-
- ret = spfc_alloc_default_session(spfc_hba);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]SPFC port(0x%x) can't allocate Default Session",
- spfc_hba->port_cfg.port_id);
-
- goto out_destroy_dma_buff;
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]SPFC port(0x%x) initialize host resources succeeded",
- spfc_hba->port_cfg.port_id);
-
- return ret;
-
-out_destroy_dma_buff:
- spfc_free_dma_buffers(spfc_hba);
-out_destroy_queues:
- spfc_flush_scq_ctx(spfc_hba);
- spfc_flush_srq_ctx(spfc_hba);
- spfc_destroy_queues(spfc_hba);
-
-out_release_chip_access:
- spfc_release_chip_access(spfc_hba);
-
-out_unmap_memory:
- return ret;
-}
-
-static u32 spfc_get_chip_info(struct spfc_hba_info *hba)
-{
- u32 ret = RETURN_OK;
- u32 exi_count = 0;
- u32 exi_base = 0;
- u32 exi_stride = 0;
- u32 fun_idx = 0;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
-
- hba->vpid_start = hba->service_cap.dev_fc_cap.vp_id_start;
- hba->vpid_end = hba->service_cap.dev_fc_cap.vp_id_end;
- fun_idx = sphw_global_func_id(hba->dev_handle);
-
- exi_count = (max_parent_qpc_num <= SPFC_MAX_PARENT_QPC_NUM) ?
- exit_count >> UNF_SHIFT_1 : exit_count;
- exi_stride = (max_parent_qpc_num <= SPFC_MAX_PARENT_QPC_NUM) ?
- exit_stride >> UNF_SHIFT_1 : exit_stride;
- exi_base = exit_base;
-
- exi_base += (fun_idx * exi_stride);
- hba->exi_base = SPFC_LSW(exi_base);
- hba->exi_count = SPFC_LSW(exi_count);
- hba->max_support_speed = max_speed;
- hba->port_index = SPFC_LSB(fun_idx);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) base information: PortIndex=0x%x, ExiBase=0x%x, ExiCount=0x%x, VpIdStart=0x%x, VpIdEnd=0x%x, MaxSpeed=0x%x, Speed=0x%x, Topo=0x%x",
- hba->port_cfg.port_id, hba->port_index, hba->exi_base,
- hba->exi_count, hba->vpid_start, hba->vpid_end,
- hba->max_support_speed, hba->port_speed_cfg, hba->port_topo_cfg);
-
- return ret;
-}
-
-static u32 spfc_initial_chip_access(struct spfc_hba_info *hba)
-{
- int ret = RETURN_OK;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
-
-	/* 1. Initialize cqm access related to scq, emb cq, aeq (ucode-->driver) */
- service_cqm_temp.service_handle = hba;
-
- ret = cqm3_service_register(hba->dev_handle, &service_cqm_temp);
- if (ret != CQM_SUCCESS)
- return UNF_RETURN_ERROR;
-
- /* 2. Initialize mailbox(driver-->up), aeq(up--->driver) access */
- ret = sphw_register_mgmt_msg_cb(hba->dev_handle, COMM_MOD_FC, hba,
- spfc_up_msg2driver_proc);
- if (ret != CQM_SUCCESS)
- goto out_unreg_cqm;
-
- return RETURN_OK;
-
-out_unreg_cqm:
- cqm3_service_unregister(hba->dev_handle, SERVICE_T_FC);
-
- return UNF_RETURN_ERROR;
-}
-
-static void spfc_release_chip_access(struct spfc_hba_info *hba)
-{
- FC_CHECK_RETURN_VOID(hba);
- FC_CHECK_RETURN_VOID(hba->dev_handle);
-
- sphw_unregister_mgmt_msg_cb(hba->dev_handle, COMM_MOD_FC);
-
- cqm3_service_unregister(hba->dev_handle, SERVICE_T_FC);
-}
-
-static void spfc_update_lport_config(struct spfc_hba_info *hba,
- struct unf_low_level_functioon_op *lowlevel_func)
-{
-#define SPFC_MULTI_CONF_NONSUPPORT 0
-
- struct unf_lport_cfg_item *lport_cfg = NULL;
-
- lport_cfg = &lowlevel_func->lport_cfg_items;
-
- if (hba->port_cfg.max_login < lowlevel_func->support_max_rport)
- lport_cfg->max_login = hba->port_cfg.max_login;
- else
- lport_cfg->max_login = lowlevel_func->support_max_rport;
-
- if (hba->port_cfg.sest_num >> UNF_SHIFT_1 < UNF_RESERVE_SFS_XCHG)
- lport_cfg->max_io = hba->port_cfg.sest_num;
- else
- lport_cfg->max_io = hba->port_cfg.sest_num - UNF_RESERVE_SFS_XCHG;
-
- lport_cfg->max_sfs_xchg = UNF_MAX_SFS_XCHG;
- lport_cfg->port_id = hba->port_cfg.port_id;
- lport_cfg->port_mode = hba->port_cfg.port_mode;
- lport_cfg->port_topology = hba->port_cfg.port_topology;
- lport_cfg->max_queue_depth = hba->port_cfg.max_queue_depth;
-
- lport_cfg->port_speed = hba->port_cfg.port_speed;
- lport_cfg->tape_support = hba->port_cfg.tape_support;
-
- lowlevel_func->sys_port_name = *(u64 *)hba->sys_port_name;
- lowlevel_func->sys_node_name = *(u64 *)hba->sys_node_name;
-
- /* Update chip information */
- lowlevel_func->dev = hba->pci_dev;
- lowlevel_func->chip_info.chip_work_mode = hba->work_mode;
- lowlevel_func->chip_info.chip_type = hba->chip_type;
- lowlevel_func->chip_info.disable_err_flag = 0;
- lowlevel_func->support_max_speed = hba->max_support_speed;
- lowlevel_func->support_min_speed = hba->min_support_speed;
-
- lowlevel_func->chip_id = 0;
-
- lowlevel_func->sfp_type = UNF_PORT_TYPE_FC_SFP;
-
- lowlevel_func->multi_conf_support = SPFC_MULTI_CONF_NONSUPPORT;
- lowlevel_func->support_max_hot_tag_range = hba->port_cfg.sest_num;
- lowlevel_func->update_fw_reset_active = UNF_PORT_UNGRADE_FW_RESET_INACTIVE;
- lowlevel_func->port_type = 0; /* DRV_PORT_ENTITY_TYPE_PHYSICAL */
-
- if ((lport_cfg->port_id & UNF_FIRST_LPORT_ID_MASK) == lport_cfg->port_id)
- lowlevel_func->support_upgrade_report = UNF_PORT_SUPPORT_UPGRADE_REPORT;
- else
- lowlevel_func->support_upgrade_report = UNF_PORT_UNSUPPORT_UPGRADE_REPORT;
-}
-
-static u32 spfc_create_lport(struct spfc_hba_info *hba)
-{
- void *lport = NULL;
- struct unf_low_level_functioon_op lowlevel_func;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
- spfc_func_op.dev = hba->pci_dev;
- memcpy(&lowlevel_func, &spfc_func_op, sizeof(struct unf_low_level_functioon_op));
-
- /* Update port configuration table */
- spfc_update_lport_config(hba, &lowlevel_func);
-
- /* Apply for lport resources */
- UNF_LOWLEVEL_ALLOC_LPORT(lport, hba, &lowlevel_func);
- if (!lport) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) can't allocate Lport",
- hba->port_cfg.port_id);
-
- return UNF_RETURN_ERROR;
- }
- hba->lport = lport;
-
- return RETURN_OK;
-}
-
-void spfc_release_probe_index(u32 probe_index)
-{
- if (probe_index >= SPFC_MAX_PROBE_PORT_NUM) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Probe index(0x%x) is invalid", probe_index);
-
- return;
- }
-
- spin_lock(&probe_spin_lock);
- if (!test_bit((int)probe_index, (const ulong *)probe_bit_map)) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Probe index(0x%x) is not probed",
- probe_index);
-
- spin_unlock(&probe_spin_lock);
-
- return;
- }
-
- clear_bit((int)probe_index, probe_bit_map);
- spin_unlock(&probe_spin_lock);
-}
-
-static void spfc_delete_default_session(struct spfc_hba_info *hba)
-{
- struct unf_port_info rport_info = {0};
-
- rport_info.nport_id = 0xffffff;
- rport_info.rport_index = SPFC_DEFAULT_RPORT_INDEX;
- rport_info.local_nport_id = 0xffffff;
- rport_info.port_name = 0;
- rport_info.cs_ctrl = 0x81;
-
-	/* Send the config table to the firmware (up) first, then delete the default session */
- (void)spfc_mbx_config_default_session(hba, 0);
- spfc_sess_resource_free_sync((void *)hba, &rport_info);
-}
-
-static void spfc_release_host_res(struct spfc_hba_info *hba)
-{
- spfc_free_dma_buffers(hba);
-
- spfc_destroy_queues(hba);
-
- spfc_release_chip_access(hba);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) release low level resource done",
- hba->port_cfg.port_id);
-}
-
-static struct spfc_hba_info *spfc_init_hba(struct pci_dev *pci_dev,
- void *hw_dev_handle,
- struct spfc_chip_info *chip_info,
- u8 card_num)
-{
- u32 ret = RETURN_OK;
- struct spfc_hba_info *hba = NULL;
-
- FC_CHECK_RETURN_VALUE(pci_dev, NULL);
- FC_CHECK_RETURN_VALUE(hw_dev_handle, NULL);
-
- /* Allocate HBA */
- hba = kmalloc(sizeof(struct spfc_hba_info), GFP_ATOMIC);
- FC_CHECK_RETURN_VALUE(hba, NULL);
- memset(hba, 0, sizeof(struct spfc_hba_info));
-
- /* Heartbeat default */
- hba->heart_status = 1;
- /* Private data in pciDev */
- hba->pci_dev = pci_dev;
- hba->dev_handle = hw_dev_handle;
-
- /* Work mode */
- hba->work_mode = chip_info->work_mode;
- /* Create work queue */
- hba->work_queue = create_singlethread_workqueue("spfc");
- if (!hba->work_queue) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
-			     "[err]Spfc create workqueue failed");
-
- goto out_free_hba;
- }
- /* Init delay work */
- INIT_DELAYED_WORK(&hba->srq_delay_info.del_work, spfc_rcvd_els_from_srq_timeout);
- INIT_WORK(&hba->els_srq_clear_work, spfc_wq_destroy_els_srq);
-
- /* Notice: Only use FC features */
- (void)sphw_support_fc(hw_dev_handle, &hba->service_cap);
- /* Check parent context available */
- if (hba->service_cap.dev_fc_cap.max_parent_qpc_num == 0) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]FC parent context is not allocated in this function");
-
- goto out_destroy_workqueue;
- }
- max_parent_qpc_num = hba->service_cap.dev_fc_cap.max_parent_qpc_num;
-
- /* Get port configuration */
- ret = spfc_get_port_cfg(hba, chip_info, card_num);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Can't get port configuration");
-
- goto out_destroy_workqueue;
- }
- /* Get WWN */
- spfc_generate_sys_wwn(hba);
-
- /* Initialize host resources */
- ret = spfc_init_host_res(hba);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]SPFC port(0x%x) can't initialize host resource",
- hba->port_cfg.port_id);
-
- goto out_destroy_workqueue;
- }
- /* Local Port create */
- ret = spfc_create_lport(hba);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]SPFC port(0x%x) can't create lport",
- hba->port_cfg.port_id);
- goto out_release_host_res;
- }
- complete(&hba->hba_init_complete);
-
- /* Print reference count */
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
- "[info]Port(0x%x) probe succeeded. Memory reference is 0x%x",
- hba->port_cfg.port_id, atomic_read(&fc_mem_ref));
-
- return hba;
-
-out_release_host_res:
- spfc_delete_default_session(hba);
- spfc_flush_scq_ctx(hba);
- spfc_flush_srq_ctx(hba);
- spfc_release_host_res(hba);
-
-out_destroy_workqueue:
- flush_workqueue(hba->work_queue);
- destroy_workqueue(hba->work_queue);
- hba->work_queue = NULL;
-
-out_free_hba:
- kfree(hba);
-
- return NULL;
-}
-
-void spfc_get_total_probed_num(u32 *probe_cnt)
-{
- u32 i = 0;
- u32 cnt = 0;
-
- spin_lock(&probe_spin_lock);
- for (i = 0; i < SPFC_MAX_PROBE_PORT_NUM; i++) {
- if (test_bit((int)i, (const ulong *)probe_bit_map))
- cnt++;
- }
-
- *probe_cnt = cnt;
- spin_unlock(&probe_spin_lock);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[info]Probed port total number is 0x%x", cnt);
-}
-
-u32 spfc_assign_card_num(struct spfc_lld_dev *lld_dev,
- struct spfc_chip_info *chip_info, u8 *card_num)
-{
- u8 i = 0;
- u64 card_index = 0;
-
- card_index = (!pci_is_root_bus(lld_dev->pdev->bus)) ?
- lld_dev->pdev->bus->parent->number : lld_dev->pdev->bus->number;
-
- spin_lock(&probe_spin_lock);
-
- for (i = 0; i < SPFC_MAX_CARD_NUM; i++) {
- if (test_bit((int)i, (const ulong *)card_num_bit_map)) {
- if (card_num_manage[i].card_number ==
- card_index && !card_num_manage[i].is_removing
- ) {
- card_num_manage[i].port_count++;
- *card_num = i;
- spin_unlock(&probe_spin_lock);
- return RETURN_OK;
- }
- }
- }
-
- for (i = 0; i < SPFC_MAX_CARD_NUM; i++) {
- if (!test_bit((int)i, (const ulong *)card_num_bit_map)) {
- card_num_manage[i].card_number = card_index;
- card_num_manage[i].port_count = 1;
- card_num_manage[i].is_removing = false;
-
- *card_num = i;
- set_bit(i, card_num_bit_map);
-
- spin_unlock(&probe_spin_lock);
-
- return RETURN_OK;
- }
- }
-
- spin_unlock(&probe_spin_lock);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
-		     "[err]Probed more than 0x%x ports, probe failed", i);
-
- return UNF_RETURN_ERROR;
-}
-
-static void spfc_dec_and_free_card_num(u8 card_num)
-{
- /* 2 ports per card */
- if (card_num >= SPFC_MAX_CARD_NUM) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Card number(0x%x) is invalid", card_num);
-
- return;
- }
-
- spin_lock(&probe_spin_lock);
-
- if (test_bit((int)card_num, (const ulong *)card_num_bit_map)) {
- card_num_manage[card_num].port_count--;
- card_num_manage[card_num].is_removing = true;
-
- if (card_num_manage[card_num].port_count == 0) {
- card_num_manage[card_num].card_number = 0;
- card_num_manage[card_num].is_removing = false;
- clear_bit((int)card_num, card_num_bit_map);
- }
- } else {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Can not find card number(0x%x)", card_num);
- }
-
- spin_unlock(&probe_spin_lock);
-}
-
-u32 spfc_assign_probe_index(u32 *probe_index)
-{
- u32 i = 0;
-
- spin_lock(&probe_spin_lock);
- for (i = 0; i < SPFC_MAX_PROBE_PORT_NUM; i++) {
- if (!test_bit((int)i, (const ulong *)probe_bit_map)) {
- *probe_index = i;
- set_bit(i, probe_bit_map);
-
- spin_unlock(&probe_spin_lock);
-
- return RETURN_OK;
- }
- }
- spin_unlock(&probe_spin_lock);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
-		     "[err]Probed more than 0x%x ports, probe failed", i);
-
- return UNF_RETURN_ERROR;
-}
-
-u32 spfc_get_probe_index_by_port_id(u32 port_id, u32 *probe_index)
-{
- u32 total_probe_num = 0;
- u32 i = 0;
- u32 probe_cnt = 0;
-
- spfc_get_total_probed_num(&total_probe_num);
-
- for (i = 0; i < SPFC_MAX_PROBE_PORT_NUM; i++) {
- if (!spfc_hba[i])
- continue;
-
- if (total_probe_num == probe_cnt)
- break;
-
- if (port_id == spfc_hba[i]->port_cfg.port_id) {
- *probe_index = spfc_hba[i]->probe_index;
-
- return RETURN_OK;
- }
-
- probe_cnt++;
- }
-
- return UNF_RETURN_ERROR;
-}
-
-static int spfc_probe(struct spfc_lld_dev *lld_dev, void **uld_dev,
- char *uld_dev_name)
-{
- struct pci_dev *pci_dev = NULL;
- struct spfc_hba_info *hba = NULL;
- u32 ret = UNF_RETURN_ERROR;
- const u8 work_mode = SPFC_SMARTIO_WORK_MODE_FC;
- u32 probe_index = 0;
- u32 probe_total_num = 0;
- u8 card_num = INVALID_VALUE8;
- struct spfc_chip_info chip_info;
-
- FC_CHECK_RETURN_VALUE(lld_dev, UNF_RETURN_ERROR_S32);
- FC_CHECK_RETURN_VALUE(lld_dev->hwdev, UNF_RETURN_ERROR_S32);
- FC_CHECK_RETURN_VALUE(lld_dev->pdev, UNF_RETURN_ERROR_S32);
- FC_CHECK_RETURN_VALUE(uld_dev, UNF_RETURN_ERROR_S32);
- FC_CHECK_RETURN_VALUE(uld_dev_name, UNF_RETURN_ERROR_S32);
-
- pci_dev = lld_dev->pdev;
- memset(&chip_info, 0, sizeof(struct spfc_chip_info));
- /* 1. Get & check Total_Probed_number */
- spfc_get_total_probed_num(&probe_total_num);
- if (probe_total_num >= allowed_probe_num) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Total probe num (0x%x) is larger than allowed number(0x%x)",
- probe_total_num, allowed_probe_num);
-
- return UNF_RETURN_ERROR_S32;
- }
- /* 2. Check device work mode */
- ret = spfc_fc_mode_check(lld_dev->hwdev);
- if (ret != RETURN_OK)
- return UNF_RETURN_ERROR_S32;
-
- /* 3. Assign & Get new Probe index */
- ret = spfc_assign_probe_index(&probe_index);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]AssignProbeIndex fail");
-
- return UNF_RETURN_ERROR_S32;
- }
-
- ret = spfc_get_chip_capability((void *)lld_dev->hwdev, &chip_info);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]GetChipCapability fail");
- return UNF_RETURN_ERROR_S32;
- }
- chip_info.work_mode = work_mode;
-
- /* Assign & Get new Card number */
- ret = spfc_assign_card_num(lld_dev, &chip_info, &card_num);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]spfc_assign_card_num fail");
- spfc_release_probe_index(probe_index);
-
- return UNF_RETURN_ERROR_S32;
- }
-
- /* Init HBA resource */
- hba = spfc_init_hba(pci_dev, lld_dev->hwdev, &chip_info, card_num);
- if (!hba) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Probe HBA(0x%x) failed. Memory reference = 0x%x",
- probe_index, atomic_read(&fc_mem_ref));
-
- spfc_release_probe_index(probe_index);
- spfc_dec_and_free_card_num(card_num);
-
- return UNF_RETURN_ERROR_S32;
- }
-
- /* Name by the order of probe */
- *uld_dev = hba;
- snprintf(uld_dev_name, SPFC_PORT_NAME_STR_LEN, "%s%02x%02x",
- SPFC_PORT_NAME_LABEL, hba->card_info.card_num,
- hba->card_info.func_num);
- memcpy(hba->port_name, uld_dev_name, SPFC_PORT_NAME_STR_LEN);
- hba->probe_index = probe_index;
- spfc_hba[probe_index] = hba;
-
- return RETURN_OK;
-}
-
-u32 spfc_sfp_switch(void *hba, void *para_in)
-{
- struct spfc_hba_info *spfc_hba = (struct spfc_hba_info *)hba;
- bool turn_on = false;
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(para_in, UNF_RETURN_ERROR);
-
- /* Redundancy check */
- turn_on = *((bool *)para_in);
- if ((u32)turn_on == (u32)spfc_hba->sfp_on) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[info]Port(0x%x) FC physical port is already %s",
- spfc_hba->port_cfg.port_id, (turn_on) ? "on" : "off");
-
- return ret;
- }
-
- if (turn_on) {
- ret = spfc_port_check_fw_ready(spfc_hba);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Get port(0x%x) clear state failed, turn on fail",
- spfc_hba->port_cfg.port_id);
- return ret;
- }
- /* At first, configure port table info if necessary */
- ret = spfc_config_port_table(spfc_hba);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
-					     "[err]Port(0x%x) can't configure port table",
- spfc_hba->port_cfg.port_id);
-
- return ret;
- }
- }
-
- /* Switch physical port */
- ret = spfc_port_switch(spfc_hba, turn_on);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Port(0x%x) switch failed",
- spfc_hba->port_cfg.port_id);
-
- return ret;
- }
-
- /* Update HBA's sfp state */
- spfc_hba->sfp_on = turn_on;
-
- return ret;
-}
-
-static u32 spfc_destroy_lport(struct spfc_hba_info *hba)
-{
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
-
- UNF_LOWLEVEL_RELEASE_LOCAL_PORT(ret, hba->lport);
- hba->lport = NULL;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) destroy L_Port done",
- hba->port_cfg.port_id);
-
- return ret;
-}
-
-static u32 spfc_port_check_fw_ready(struct spfc_hba_info *hba)
-{
-#define SPFC_PORT_CLEAR_DONE 0
-#define SPFC_PORT_CLEAR_DOING 1
-#define SPFC_WAIT_ONE_TIME_MS 1000
-#define SPFC_LOOP_TIMES 30
-
- u32 clear_state = SPFC_PORT_CLEAR_DOING;
- u32 ret = RETURN_OK;
- u32 wait_timeout = 0;
-
- do {
- msleep(SPFC_WAIT_ONE_TIME_MS);
- wait_timeout += SPFC_WAIT_ONE_TIME_MS;
- ret = spfc_mbx_get_fw_clear_stat(hba, &clear_state);
- if (ret != RETURN_OK)
- return UNF_RETURN_ERROR;
-
-		/* Fail if the state is not cleared after more than 30s (30 one-second retries) */
- if (wait_timeout > SPFC_LOOP_TIMES * SPFC_WAIT_ONE_TIME_MS &&
- clear_state != SPFC_PORT_CLEAR_DONE)
- return UNF_RETURN_ERROR;
- } while (clear_state != SPFC_PORT_CLEAR_DONE);
-
- return RETURN_OK;
-}
-
-u32 spfc_port_reset(struct spfc_hba_info *hba)
-{
- u32 ret = RETURN_OK;
- ulong timeout = 0;
- bool sfp_before_reset = false;
- bool off_para_in = false;
- struct pci_dev *pci_dev = NULL;
- struct spfc_hba_info *spfc_hba = hba;
-
- FC_CHECK_RETURN_VALUE(spfc_hba, UNF_RETURN_ERROR);
- pci_dev = spfc_hba->pci_dev;
- FC_CHECK_RETURN_VALUE(pci_dev, UNF_RETURN_ERROR);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
- "[event]Port(0x%x) reset HBA begin",
- spfc_hba->port_cfg.port_id);
-
- /* Wait for last init/reset completion */
- timeout = wait_for_completion_timeout(&spfc_hba->hba_init_complete,
- (ulong)SPFC_PORT_INIT_TIME_SEC_MAX * HZ);
-
- if (timeout == SPFC_ZERO) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
-			     "[err]Last HBA initialize/reset timeout: %d seconds",
- SPFC_PORT_INIT_TIME_SEC_MAX);
-
- return UNF_RETURN_ERROR;
- }
-
- /* Save current port state */
- sfp_before_reset = spfc_hba->sfp_on;
-
- /* Inform the reset event to CM level before beginning */
- UNF_LOWLEVEL_PORT_EVENT(ret, spfc_hba->lport, UNF_PORT_RESET_START, NULL);
- spfc_hba->reset_time = jiffies;
-
- /* Close SFP */
- ret = spfc_sfp_switch(spfc_hba, &off_para_in);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) can't close SFP",
- spfc_hba->port_cfg.port_id);
- spfc_hba->sfp_on = sfp_before_reset;
-
- complete(&spfc_hba->hba_init_complete);
-
- return ret;
- }
-
- ret = spfc_port_check_fw_ready(spfc_hba);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Get port(0x%x) clear state failed, hang port and report chip error",
- spfc_hba->port_cfg.port_id);
-
- complete(&spfc_hba->hba_init_complete);
-
- return ret;
- }
-
- spfc_queue_pre_process(spfc_hba, false);
-
- ret = spfc_mb_reset_chip(spfc_hba, SPFC_MBOX_SUBTYPE_LIGHT_RESET);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]SPFC port(0x%x) can't reset chip mailbox",
- spfc_hba->port_cfg.port_id);
-
- UNF_LOWLEVEL_PORT_EVENT(ret, spfc_hba->lport, UNF_PORT_GET_FWLOG, NULL);
- UNF_LOWLEVEL_PORT_EVENT(ret, spfc_hba->lport, UNF_PORT_DEBUG_DUMP, NULL);
- }
-
- /* Inform the success to CM level */
- UNF_LOWLEVEL_PORT_EVENT(ret, spfc_hba->lport, UNF_PORT_RESET_END, NULL);
-
- /* Queue open */
- spfc_queue_post_process(spfc_hba);
-
- /* Open SFP */
- (void)spfc_sfp_switch(spfc_hba, &sfp_before_reset);
-
- complete(&spfc_hba->hba_init_complete);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[event]Port(0x%x) reset HBA done",
- spfc_hba->port_cfg.port_id);
-
- return ret;
-#undef SPFC_WAIT_LINKDOWN_EVENT_MS
-}
-
-static u32 spfc_delete_scqc_via_cmdq_sync(struct spfc_hba_info *hba, u32 scqn)
-{
- /* Via CMND Queue */
-#define SPFC_DEL_SCQC_TIMEOUT 3000
-
- int ret;
- struct spfc_cmdqe_delete_scqc del_scqc_cmd;
- struct sphw_cmd_buf *cmd_buf;
-
- /* Alloc cmd buffer */
- cmd_buf = sphw_alloc_cmd_buf(hba->dev_handle);
- if (!cmd_buf) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]cmdq in_cmd_buf alloc failed");
-
- SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_DEL_SCQC);
- return UNF_RETURN_ERROR;
- }
-
- /* Build & Send Cmnd */
- memset(&del_scqc_cmd, 0, sizeof(del_scqc_cmd));
- del_scqc_cmd.wd0.task_type = SPFC_TASK_T_DEL_SCQC;
- del_scqc_cmd.wd1.scqn = SPFC_LSW(scqn);
- spfc_cpu_to_big32(&del_scqc_cmd, sizeof(del_scqc_cmd));
- memcpy(cmd_buf->buf, &del_scqc_cmd, sizeof(del_scqc_cmd));
- cmd_buf->size = sizeof(del_scqc_cmd);
-
- ret = sphw_cmdq_detail_resp(hba->dev_handle, COMM_MOD_FC, 0, cmd_buf,
- NULL, NULL, SPFC_DEL_SCQC_TIMEOUT,
- SPHW_CHANNEL_FC);
-
- /* Free cmnd buffer */
- sphw_free_cmd_buf(hba->dev_handle, cmd_buf);
-
- if (ret) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Send del scqc via cmdq failed, ret=0x%x",
- ret);
-
- SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_DEL_SCQC);
- return UNF_RETURN_ERROR;
- }
-
- SPFC_IO_STAT(hba, SPFC_TASK_T_DEL_SCQC);
-
- return RETURN_OK;
-}
-
-static u32 spfc_delete_srqc_via_cmdq_sync(struct spfc_hba_info *hba, u64 sqrc_gpa)
-{
- /* Via CMND Queue */
-#define SPFC_DEL_SRQC_TIMEOUT 3000
-
- int ret;
- struct spfc_cmdqe_delete_srqc del_srqc_cmd;
- struct sphw_cmd_buf *cmd_buf;
-
- /* Alloc Cmnd buffer */
- cmd_buf = sphw_alloc_cmd_buf(hba->dev_handle);
- if (!cmd_buf) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]cmdq in_cmd_buf allocate failed");
-
- SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_DEL_SRQC);
- return UNF_RETURN_ERROR;
- }
-
- /* Build & Send Cmnd */
- memset(&del_srqc_cmd, 0, sizeof(del_srqc_cmd));
- del_srqc_cmd.wd0.task_type = SPFC_TASK_T_DEL_SRQC;
- del_srqc_cmd.srqc_gpa_h = SPFC_HIGH_32_BITS(sqrc_gpa);
- del_srqc_cmd.srqc_gpa_l = SPFC_LOW_32_BITS(sqrc_gpa);
- spfc_cpu_to_big32(&del_srqc_cmd, sizeof(del_srqc_cmd));
- memcpy(cmd_buf->buf, &del_srqc_cmd, sizeof(del_srqc_cmd));
- cmd_buf->size = sizeof(del_srqc_cmd);
-
- ret = sphw_cmdq_detail_resp(hba->dev_handle, COMM_MOD_FC, 0, cmd_buf,
- NULL, NULL, SPFC_DEL_SRQC_TIMEOUT,
- SPHW_CHANNEL_FC);
-
- /* Free Cmnd Buffer */
- sphw_free_cmd_buf(hba->dev_handle, cmd_buf);
-
- if (ret) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Send del srqc via cmdq failed, ret=0x%x",
- ret);
-
- SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_DEL_SRQC);
- return UNF_RETURN_ERROR;
- }
-
- SPFC_IO_STAT(hba, SPFC_TASK_T_DEL_SRQC);
-
- return RETURN_OK;
-}
-
-void spfc_flush_scq_ctx(struct spfc_hba_info *hba)
-{
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Start destroy total 0x%x SCQC", SPFC_TOTAL_SCQ_NUM);
-
- FC_CHECK_RETURN_VOID(hba);
-
- (void)spfc_delete_scqc_via_cmdq_sync(hba, 0);
-}
-
-void spfc_flush_srq_ctx(struct spfc_hba_info *hba)
-{
- struct spfc_srq_info *srq_info = NULL;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Start destroy ELS&IMMI SRQC");
-
- FC_CHECK_RETURN_VOID(hba);
-
-	/* Check state to avoid flushing SRQC again */
- srq_info = &hba->els_srq_info;
- if (srq_info->srq_type == SPFC_SRQ_ELS && srq_info->enable) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[event]HBA(0x%x) flush ELS SRQC",
- hba->port_index);
-
- (void)spfc_delete_srqc_via_cmdq_sync(hba, srq_info->cqm_srq_info->q_ctx_paddr);
- }
-}
-
-void spfc_set_hba_flush_state(struct spfc_hba_info *hba, bool in_flush)
-{
- ulong flag = 0;
-
- spin_lock_irqsave(&hba->flush_state_lock, flag);
- hba->in_flushing = in_flush;
- spin_unlock_irqrestore(&hba->flush_state_lock, flag);
-}
-
-void spfc_set_hba_clear_state(struct spfc_hba_info *hba, bool clear_flag)
-{
- ulong flag = 0;
-
- spin_lock_irqsave(&hba->clear_state_lock, flag);
- hba->port_is_cleared = clear_flag;
- spin_unlock_irqrestore(&hba->clear_state_lock, flag);
-}
-
-bool spfc_hba_is_present(struct spfc_hba_info *hba)
-{
- int ret_val = RETURN_OK;
- bool present_flag = false;
- u32 vendor_id = 0;
-
- ret_val = pci_read_config_dword(hba->pci_dev, 0, &vendor_id);
- vendor_id &= SPFC_PCI_VENDOR_ID_MASK;
- if (ret_val == RETURN_OK && vendor_id == SPFC_PCI_VENDOR_ID_RAMAXEL) {
- present_flag = true;
- } else {
- present_flag = false;
- hba->dev_present = false;
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
-		     "[info]Port %s remove: vendor_id=0x%x, ret=0x%x",
- present_flag ? "normal" : "surprise", vendor_id, ret_val);
-
- return present_flag;
-}
-
-static void spfc_exit(struct pci_dev *pci_dev, struct spfc_hba_info *hba)
-{
-#define SPFC_WAIT_CLR_RESOURCE_MS 1000
- u32 ret = UNF_RETURN_ERROR;
- bool sfp_switch = false;
- bool present_flag = true;
-
- FC_CHECK_RETURN_VOID(pci_dev);
- FC_CHECK_RETURN_VOID(hba);
-
- hba->removing = true;
-
- /* 1. Check HBA present or not */
- present_flag = spfc_hba_is_present(hba);
- if (present_flag) {
- if (hba->phy_link == UNF_PORT_LINK_DOWN)
- hba->queue_set_stage = SPFC_QUEUE_SET_STAGE_FLUSHDONE;
-
- /* At first, close sfp */
- sfp_switch = false;
- (void)spfc_sfp_switch((void *)hba, (void *)&sfp_switch);
- }
-
- /* 2. Report COM with HBA removing: delete route timer delay work */
- UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport, UNF_PORT_BEGIN_REMOVE, NULL);
-
-	/* 3. Report COM with HBA Nop, COM releases I/O(s) & R_Port(s) forcibly */
- UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport, UNF_PORT_NOP, NULL);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]PCI device(%p) remove port(0x%x) failed",
- pci_dev, hba->port_index);
- }
-
- spfc_delete_default_session(hba);
-
- if (present_flag)
- /* 4.1 Wait for all SQ empty, free SRQ buffer & SRQC */
- spfc_queue_pre_process(hba, true);
-
- /* 5. Destroy L_Port */
- (void)spfc_destroy_lport(hba);
-
-	/* 6. When HBA is present */
- if (present_flag) {
- /* Enable Queues dispatch */
- spfc_queue_post_process(hba);
-
- /* Need reset port if necessary */
- (void)spfc_mb_reset_chip(hba, SPFC_MBOX_SUBTYPE_HEAVY_RESET);
-
- /* Flush SCQ context */
- spfc_flush_scq_ctx(hba);
-
- /* Flush SRQ context */
- spfc_flush_srq_ctx(hba);
-
- sphw_func_rx_tx_flush(hba->dev_handle, SPHW_CHANNEL_FC);
-
-	/* NOTE: while flushing txrx, the hash bucket will be cached out in
-	 * UP. Wait until resources are completely cleared.
-	 */
- msleep(SPFC_WAIT_CLR_RESOURCE_MS);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) flush scq & srq & root context done",
- hba->port_cfg.port_id);
- }
-
- /* 7. Release host resources */
- spfc_release_host_res(hba);
-
- /* 8. Destroy FC work queue */
- if (hba->work_queue) {
- flush_workqueue(hba->work_queue);
- destroy_workqueue(hba->work_queue);
- hba->work_queue = NULL;
- }
-
- /* 9. Release Probe index & Decrease card number */
- spfc_release_probe_index(hba->probe_index);
- spfc_dec_and_free_card_num((u8)hba->card_info.card_num);
-
- /* 10. Free HBA memory */
- kfree(hba);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[event]PCI device(%p) remove succeed, memory reference is 0x%x",
- pci_dev, atomic_read(&fc_mem_ref));
-}
-
-static void spfc_remove(struct spfc_lld_dev *lld_dev, void *uld_dev)
-{
- struct pci_dev *pci_dev = NULL;
- struct spfc_hba_info *hba = (struct spfc_hba_info *)uld_dev;
- u32 probe_total_num = 0;
- u32 probe_index = 0;
-
- FC_CHECK_RETURN_VOID(lld_dev);
- FC_CHECK_RETURN_VOID(uld_dev);
- FC_CHECK_RETURN_VOID(lld_dev->hwdev);
- FC_CHECK_RETURN_VOID(lld_dev->pdev);
-
- pci_dev = hba->pci_dev;
-
- /* Get total probed port number */
- spfc_get_total_probed_num(&probe_total_num);
- if (probe_total_num < 1) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Port manager is empty and no need to remove");
- return;
- }
-
- /* check pci vendor id */
- if (pci_dev->vendor != SPFC_PCI_VENDOR_ID_RAMAXEL) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Wrong vendor id(0x%x) and exit",
- pci_dev->vendor);
- return;
- }
-
- /* Check function ability */
- if (!sphw_support_fc(lld_dev->hwdev, NULL)) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
-			     "[err]FC is not enabled in this function");
- return;
- }
-
- /* Get probe index */
- probe_index = hba->probe_index;
-
- /* Parent context alloc check */
- if (hba->service_cap.dev_fc_cap.max_parent_qpc_num == 0) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
-			     "[err]FC parent context not allocated in this function");
- return;
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]HBA(0x%x) start removing...", hba->port_index);
-
-	/* HBA removing... */
- spfc_exit(pci_dev, hba);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
- "[event]Port(0x%x) pci device removed, vendorid(0x%04x) devid(0x%04x)",
- probe_index, pci_dev->vendor, pci_dev->device);
-
- /* Probe index check */
- if (probe_index < SPFC_HBA_PORT_MAX_NUM) {
- spfc_hba[probe_index] = NULL;
- } else {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Probe index(0x%x) is invalid and remove failed",
- probe_index);
- }
-
- spfc_get_total_probed_num(&probe_total_num);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[event]Removed index=%u, RemainNum=%u, AllowNum=%u",
- probe_index, probe_total_num, allowed_probe_num);
-}
-
-static u32 spfc_get_hba_pcie_link_state(void *hba, void *link_state)
-{
- bool *link_state_info = link_state;
- bool present_flag = true;
- struct spfc_hba_info *spfc_hba = hba;
- int ret;
- bool last_dev_state = true;
- bool cur_dev_state = true;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(link_state, UNF_RETURN_ERROR);
- last_dev_state = spfc_hba->dev_present;
- ret = sphw_get_card_present_state(spfc_hba->dev_handle, (bool *)&present_flag);
- if (ret || !present_flag) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
-			     "[event]port(0x%x) is not present, ret:%d, present_flag:%d",
- spfc_hba->port_cfg.port_id, ret, present_flag);
- cur_dev_state = false;
- } else {
- cur_dev_state = true;
- }
-
- spfc_hba->dev_present = cur_dev_state;
-
- /* To prevent false alarms, the heartbeat is considered lost only
- * when the PCIe link is down for two consecutive times.
- */
- if (!last_dev_state && !cur_dev_state)
- spfc_hba->heart_status = false;
-
- *link_state_info = spfc_hba->dev_present;
-
- return RETURN_OK;
-}
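For context on the removed spfc_get_hba_pcie_link_state() above: it only declares the heartbeat lost when the PCIe device is absent on two consecutive polls, to avoid false alarms from a single bad read. A minimal, self-contained sketch of that two-sample debounce follows; the names (`link_tracker`, `link_tracker_update`) are illustrative, not part of the driver.

```c
#include <stdbool.h>

/* Illustrative re-statement of the debounce in the removed
 * spfc_get_hba_pcie_link_state(); names are hypothetical. */
struct link_tracker {
	bool dev_present;  /* state sampled on the previous poll */
	bool heart_status; /* cleared after two consecutive absent samples */
};

/* Feed one new presence sample; returns the reported link state. */
bool link_tracker_update(struct link_tracker *t, bool present_now)
{
	bool last = t->dev_present;

	t->dev_present = present_now;

	/* Two misses in a row: only now consider the heartbeat lost. */
	if (!last && !present_now)
		t->heart_status = false;

	return t->dev_present;
}
```

A single absent sample leaves `heart_status` untouched; only a second consecutive miss clears it, mirroring the comment in the original function.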
diff --git a/drivers/scsi/spfc/hw/spfc_hba.h b/drivers/scsi/spfc/hw/spfc_hba.h
deleted file mode 100644
index 937f00ea8fc7..000000000000
--- a/drivers/scsi/spfc/hw/spfc_hba.h
+++ /dev/null
@@ -1,341 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef SPFC_HBA_H
-#define SPFC_HBA_H
-
-#include "unf_type.h"
-#include "unf_common.h"
-#include "spfc_queue.h"
-#include "sphw_crm.h"
-#define SPFC_PCI_VENDOR_ID_MASK (0xffff)
-
-#define FW_VER_LEN (32)
-#define HW_VER_LEN (32)
-#define FW_SUB_VER_LEN (24)
-
-#define SPFC_LOWLEVEL_RTTOV_TAG 0
-#define SPFC_LOWLEVEL_EDTOV_TAG 0
-#define SPFC_LOWLEVEL_DEFAULT_LOOP_BB_CREDIT (8)
-#define SPFC_LOWLEVEL_DEFAULT_32G_BB_CREDIT (255)
-#define SPFC_LOWLEVEL_DEFAULT_16G_BB_CREDIT (255)
-#define SPFC_LOWLEVEL_DEFAULT_8G_BB_CREDIT (255)
-#define SPFC_LOWLEVEL_DEFAULT_BB_SCN 0
-#define SPFC_LOWLEVEL_DEFAULT_RA_TOV UNF_DEFAULT_RATOV
-#define SPFC_LOWLEVEL_DEFAULT_ED_TOV UNF_DEFAULT_EDTOV
-
-#define SPFC_LOWLEVEL_DEFAULT_32G_ESCH_VALUE 28081
-#define SPFC_LOWLEVEL_DEFAULT_16G_ESCH_VALUE 14100
-#define SPFC_LOWLEVEL_DEFAULT_8G_ESCH_VALUE 7000
-#define SPFC_LOWLEVEL_DEFAULT_ESCH_BUST_SIZE 0x2000
-
-#define SPFC_PCI_STATUS 0x06
-
-#define SPFC_SMARTIO_WORK_MODE_FC 0x1
-#define SPFC_SMARTIO_WORK_MODE_OTHER 0xF
-#define UNF_FUN_ID_MASK 0x07
-
-#define UNF_SPFC_FC (0x01)
-#define UNF_SPFC_MAXNPIV_NUM 64	   /* If NPIV is not supported, initialized to 0 */
-
-#define SPFC_MAX_COS_NUM (8)
-
-#define SPFC_INTR_ENABLE 0x5
-#define SPFC_INTR_DISABLE 0x0
-#define SPFC_CLEAR_FW_INTR 0x1
-#define SPFC_REG_ENABLE_INTR 0x00000200
-
-#define SPFC_PCI_VENDOR_ID_RAMAXEL 0x1E81
-
-#define SPFC_SCQ_CNTX_SIZE 32
-#define SPFC_SRQ_CNTX_SIZE 64
-
-#define SPFC_PORT_INIT_TIME_SEC_MAX 1
-
-#define SPFC_PORT_NAME_LABEL "spfc"
-#define SPFC_PORT_NAME_STR_LEN (16)
-
-#define SPFC_MAX_PROBE_PORT_NUM (64)
-#define SPFC_PORT_NUM_PER_TABLE (64)
-#define SPFC_MAX_CARD_NUM (32)
-
-#define SPFC_HBA_PORT_MAX_NUM SPFC_MAX_PROBE_PORT_NUM
-#define SPFC_SIRT_MIN_RXID 0
-#define SPFC_SIRT_MAX_RXID 255
-
-#define SPFC_GET_HBA_PORT_ID(hba) ((hba)->port_index)
-
-#define SPFC_MAX_WAIT_LOOP_TIMES 10000
-#define SPFC_WAIT_SESS_ENABLE_ONE_TIME_MS 1
-#define SPFC_WAIT_SESS_FREE_ONE_TIME_MS 1
-
-#define SPFC_PORT_ID_MASK 0xff0000
-
-#define SPFC_MAX_PARENT_QPC_NUM 2048
-struct spfc_port_cfg {
- u32 port_id; /* Port ID */
- u32 port_mode; /* Port mode:INI(0x20), TGT(0x10), BOTH(0x30) */
- u32 port_topology; /* Port topo:0x3:loop,0xc:p2p,0xf:auto */
- u32 port_alpa; /* Port ALPA */
- u32 max_queue_depth; /* Max Queue depth Registration to SCSI */
- u32 sest_num; /* IO burst num:512-4096 */
- u32 max_login; /* Max Login Session. */
- u32 node_name_hi; /* nodename high 32 bits */
- u32 node_name_lo; /* nodename low 32 bits */
- u32 port_name_hi; /* portname high 32 bits */
- u32 port_name_lo; /* portname low 32 bits */
- u32 port_speed; /* Port speed 0:auto 4:4Gbps 8:8Gbps 16:16Gbps */
- u32 interrupt_delay; /* Delay times(ms) in interrupt */
- u32 tape_support; /* tape support */
-};
-
-#define SPFC_VER_INFO_SIZE 128
-struct spfc_drv_version {
- char ver[SPFC_VER_INFO_SIZE];
-};
-
-struct spfc_card_info {
- u32 card_num : 8;
- u32 func_num : 8;
- u32 base_func : 8;
- /* Card type:UNF_FC_SERVER_BOARD_32_G(6) 32G mode,
- * UNF_FC_SERVER_BOARD_16_G(7)16G mode
- */
- u32 card_type : 8;
-};
-
-struct spfc_card_num_manage {
- bool is_removing;
- u32 port_count;
- u64 card_number;
-};
-
-struct spfc_sim_ini_err {
- u32 err_code;
- u32 times;
-};
-
-struct spfc_sim_pcie_err {
- u32 err_code;
- u32 times;
-};
-
-struct spfc_led_state {
- u8 green_speed_led;
- u8 yellow_speed_led;
- u8 ac_led;
- u8 rsvd;
-};
-
-enum spfc_led_activity {
- SPFC_LED_CFG_ACTVE_FRAME = 0,
- SPFC_LED_CFG_ACTVE_FC = 3
-};
-
-enum spfc_queue_set_stage {
- SPFC_QUEUE_SET_STAGE_INIT = 0,
- SPFC_QUEUE_SET_STAGE_SCANNING,
- SPFC_QUEUE_SET_STAGE_FLUSHING,
- SPFC_QUEUE_SET_STAGE_FLUSHDONE,
- SPFC_QUEUE_SET_STAGE_BUTT
-};
-
-struct spfc_vport_info {
- u64 node_name;
- u64 port_name;
- u32 port_mode; /* INI, TGT or both */
- u32 nport_id; /* maybe acquired by lowlevel and update to common */
- void *vport;
- u16 vp_index;
-};
-
-struct spfc_srq_delay_info {
- u8 srq_delay_flag; /* Check whether need to delay */
- u8 root_rq_rcvd_flag;
- u16 rsd;
-
- spinlock_t srq_lock;
- struct unf_frame_pkg frame_pkg;
-
- struct delayed_work del_work;
-};
-
-struct spfc_fw_ver_detail {
- u8 ucode_ver[SPFC_VER_LEN];
- u8 ucode_compile_time[SPFC_COMPILE_TIME_LEN];
-
- u8 up_ver[SPFC_VER_LEN];
- u8 up_compile_time[SPFC_COMPILE_TIME_LEN];
-
- u8 boot_ver[SPFC_VER_LEN];
- u8 boot_compile_time[SPFC_COMPILE_TIME_LEN];
-};
-
-/* get wwpn and wwnn */
-struct spfc_chip_info {
- u8 work_mode;
- u8 tape_support;
- u64 wwpn;
- u64 wwnn;
-};
-
-/* Default SQ info */
-struct spfc_default_sq_info {
- u32 sq_cid;
- u32 sq_xid;
- u32 fun_cid;
- u32 default_sq_flag;
-};
-
-struct spfc_hba_info {
- struct pci_dev *pci_dev;
- void *dev_handle;
-
- struct fc_service_cap service_cap; /* struct fc_service_cap pstFcoeServiceCap; */
-
- struct spfc_scq_info scq_info[SPFC_TOTAL_SCQ_NUM];
- struct spfc_srq_info els_srq_info;
-
- struct spfc_vport_info vport_info[UNF_SPFC_MAXNPIV_NUM + 1];
-
- /* PCI IO Memory */
- void __iomem *bar0;
- u32 bar0_len;
-
- struct spfc_parent_queue_mgr *parent_queue_mgr;
-
- /* Link list Sq WqePage Pool */
- struct spfc_sq_wqepage_pool sq_wpg_pool;
-
- enum spfc_queue_set_stage queue_set_stage;
- u32 next_clear_sq;
- u32 default_sqid;
-
- /* Port parameters, Obtained through firmware */
- u16 queue_set_max_count;
- u8 port_type; /* FC or FCoE Port */
- u8 port_index; /* Phy Port */
- u32 default_scqn;
- char fw_ver[FW_VER_LEN]; /* FW version */
- char hw_ver[HW_VER_LEN]; /* HW version */
- char mst_fw_ver[FW_SUB_VER_LEN];
- char fc_fw_ver[FW_SUB_VER_LEN];
- u8 chip_type; /* chiptype:Smart or fc */
- u8 work_mode;
- struct spfc_card_info card_info;
- char port_name[SPFC_PORT_NAME_STR_LEN];
- u32 probe_index;
-
- u16 exi_base;
- u16 exi_count;
- u16 vpf_count;
- u8 vpid_start;
- u8 vpid_end;
-
- spinlock_t flush_state_lock;
- bool in_flushing;
-
- spinlock_t clear_state_lock;
- bool port_is_cleared;
-
- struct spfc_port_cfg port_cfg; /* Obtained through Config */
-
- void *lport; /* Used in UNF level */
-
- u8 sys_node_name[UNF_WWN_LEN];
- u8 sys_port_name[UNF_WWN_LEN];
-
- struct completion hba_init_complete;
- struct completion mbox_complete;
- struct completion vpf_complete;
- struct completion fcfi_complete;
- struct completion get_sfp_complete;
-
- u16 init_stage;
- u16 removing;
- bool sfp_on;
- bool dev_present;
- bool heart_status;
- spinlock_t hba_lock;
- u32 port_topo_cfg;
- u32 port_bb_scn_cfg;
- u32 port_loop_role;
- u32 port_speed_cfg;
- u32 max_support_speed;
- u32 min_support_speed;
- u32 server_max_speed;
-
- u8 remote_rttov_tag;
- u8 remote_edtov_tag;
- u16 compared_bb_scn;
- u16 remote_bb_credit;
- u32 compared_edtov_val;
- u32 compared_ratov_val;
- enum unf_act_topo active_topo;
- u32 active_port_speed;
- u32 active_rxbb_credit;
- u32 active_bb_scn;
-
- u32 phy_link;
-
- enum unf_port_mode port_mode;
-
- u32 fcp_cfg;
-
- /* loop */
- u8 active_alpa;
- u8 loop_map_valid;
- u8 loop_map[UNF_LOOPMAP_COUNT];
-
- /* sfp info dma */
- void *sfp_buf;
- dma_addr_t sfp_dma_addr;
- u32 sfp_status;
- int chip_temp;
- u32 sfp_posion;
-
- u32 cos_bitmap;
- atomic_t cos_rport_cnt[SPFC_MAX_COS_NUM];
-
- /* fw debug dma buffer */
- void *debug_buf;
- dma_addr_t debug_buf_dma_addr;
- void *log_buf;
- dma_addr_t fw_log_dma_addr;
-
- void *dma_addr;
- dma_addr_t update_dma_addr;
-
- struct spfc_sim_ini_err sim_ini_err;
- struct spfc_sim_pcie_err sim_pcie_err;
-
- struct spfc_led_state led_states;
-
- u32 fec_status;
-
- struct workqueue_struct *work_queue;
- struct work_struct els_srq_clear_work;
- u64 reset_time;
-
- spinlock_t spin_lock;
-
- struct spfc_srq_delay_info srq_delay_info;
- struct spfc_fw_ver_detail hardinfo;
- struct spfc_default_sq_info default_sq_info;
-};
-
-extern struct spfc_hba_info *spfc_hba[SPFC_HBA_PORT_MAX_NUM];
-extern spinlock_t probe_spin_lock;
-extern ulong probe_bit_map[SPFC_MAX_PROBE_PORT_NUM / SPFC_PORT_NUM_PER_TABLE];
-
-u32 spfc_port_reset(struct spfc_hba_info *hba);
-void spfc_flush_scq_ctx(struct spfc_hba_info *hba);
-void spfc_flush_srq_ctx(struct spfc_hba_info *hba);
-void spfc_set_hba_flush_state(struct spfc_hba_info *hba, bool in_flush);
-void spfc_set_hba_clear_state(struct spfc_hba_info *hba, bool clear_flag);
-u32 spfc_get_probe_index_by_port_id(u32 port_id, u32 *probe_index);
-void spfc_get_total_probed_num(u32 *probe_cnt);
-u32 spfc_sfp_switch(void *hba, void *para_in);
-bool spfc_hba_is_present(struct spfc_hba_info *hba);
-
-#endif
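The removed spfc_delete_srqc_via_cmdq_sync() above splits a 64-bit SRQC guest physical address into two 32-bit command fields via SPFC_HIGH_32_BITS()/SPFC_LOW_32_BITS(). A small sketch of that split-and-rejoin pattern, with illustrative macro names re-implemented here (not the driver's own definitions):

```c
#include <stdint.h>

/* Hypothetical stand-ins for SPFC_HIGH_32_BITS()/SPFC_LOW_32_BITS():
 * extract the upper and lower 32-bit halves of a 64-bit address. */
#define DEMO_HIGH_32_BITS(x) ((uint32_t)(((uint64_t)(x)) >> 32))
#define DEMO_LOW_32_BITS(x)  ((uint32_t)((x) & 0xffffffffU))

/* Recombine the two halves, as firmware would on the receiving side. */
uint64_t demo_join(uint32_t hi, uint32_t lo)
{
	return ((uint64_t)hi << 32) | (uint64_t)lo;
}
```

Splitting like this lets a 64-bit DMA address ride in two 32-bit little-endian command words (before the spfc_cpu_to_big32() conversion seen in the removed code).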
diff --git a/drivers/scsi/spfc/hw/spfc_hw_wqe.h b/drivers/scsi/spfc/hw/spfc_hw_wqe.h
deleted file mode 100644
index e03d24a98579..000000000000
--- a/drivers/scsi/spfc/hw/spfc_hw_wqe.h
+++ /dev/null
@@ -1,1645 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef SPFC_HW_WQE_H
-#define SPFC_HW_WQE_H
-
-#define FC_ICQ_EN
-#define FC_SCSI_CMDIU_LEN 48
-#define FC_NVME_CMDIU_LEN 96
-#define FC_LS_GS_USERID_CNT_MAX 10
-#define FC_SENSEDATA_USERID_CNT_MAX 2
-#define FC_INVALID_MAGIC_NUM 0xFFFFFFFF
-#define FC_INVALID_HOTPOOLTAG 0xFFFF
-
-/* TASK TYPE: to stay compatible with EDA, please add new types before BUTT. */
-enum spfc_task_type {
- SPFC_TASK_T_EMPTY = 0, /* SCQE TYPE: means task type not initialize */
-
- SPFC_TASK_T_IWRITE = 1, /* SQE TYPE: ini send FCP Write Command */
- SPFC_TASK_T_IREAD = 2, /* SQE TYPE: ini send FCP Read Command */
- SPFC_TASK_T_IRESP = 3, /* SCQE TYPE: ini recv fcp rsp for IREAD/IWRITE/ITMF */
- SPFC_TASK_T_TCMND = 4, /* NA */
- SPFC_TASK_T_TREAD = 5, /* SQE TYPE: tgt send FCP Read Command */
- SPFC_TASK_T_TWRITE = 6, /* SQE TYPE: tgt send FCP Write Command (XFER_RDY) */
- SPFC_TASK_T_TRESP = 7, /* SQE TYPE: tgt send fcp rsp of Read/Write */
- SPFC_TASK_T_TSTS = 8, /* SCQE TYPE: tgt sts for TREAD/TWRITE/TRESP */
- SPFC_TASK_T_ABTS = 9, /* SQE TYPE: ini send abts request Command */
- SPFC_TASK_T_IELS = 10, /* NA */
- SPFC_TASK_T_ITMF = 11, /* SQE TYPE: ini send tmf request Command */
- SPFC_TASK_T_CLEAN_UP = 12, /* NA */
- SPFC_TASK_T_CLEAN_UP_ALL = 13, /* NA */
- SPFC_TASK_T_UNSOLICITED = 14, /* NA */
- SPFC_TASK_T_ERR_WARN = 15, /* NA */
- SPFC_TASK_T_SESS_EN = 16, /* CMDQ TYPE: enable session */
- SPFC_TASK_T_SESS_DIS = 17, /* NA */
- SPFC_TASK_T_SESS_DEL = 18, /* NA */
- SPFC_TASK_T_RQE_REPLENISH = 19, /* NA */
-
- SPFC_TASK_T_RCV_TCMND = 20, /* SCQE TYPE: tgt recv fcp cmd */
- SPFC_TASK_T_RCV_ELS_CMD = 21, /* SCQE TYPE: tgt recv els cmd */
- SPFC_TASK_T_RCV_ABTS_CMD = 22, /* SCQE TYPE: tgt recv abts cmd */
- SPFC_TASK_T_RCV_IMMEDIATE = 23, /* SCQE TYPE: tgt recv immediate data */
-	/* SQE TYPE: send ELS rsp. PLOGI_ACC, PRLI_ACC will carry the parent
-	 * context parameter indication.
-	 */
- SPFC_TASK_T_ELS_RSP = 24,
- SPFC_TASK_T_ELS_RSP_STS = 25, /* SCQE TYPE: ELS rsp sts */
- SPFC_TASK_T_ABTS_RSP = 26, /* CMDQ TYPE: tgt send abts rsp */
- SPFC_TASK_T_ABTS_RSP_STS = 27, /* SCQE TYPE: tgt abts rsp sts */
-
- SPFC_TASK_T_ABORT = 28, /* CMDQ TYPE: tgt send Abort Command */
- SPFC_TASK_T_ABORT_STS = 29, /* SCQE TYPE: Abort sts */
-
- SPFC_TASK_T_ELS = 30, /* SQE TYPE: send ELS request Command */
- SPFC_TASK_T_RCV_ELS_RSP = 31, /* SCQE TYPE: recv ELS response */
-
- SPFC_TASK_T_GS = 32, /* SQE TYPE: send GS request Command */
- SPFC_TASK_T_RCV_GS_RSP = 33, /* SCQE TYPE: recv GS response */
-
- SPFC_TASK_T_SESS_EN_STS = 34, /* SCQE TYPE: enable session sts */
- SPFC_TASK_T_SESS_DIS_STS = 35, /* NA */
- SPFC_TASK_T_SESS_DEL_STS = 36, /* NA */
-
- SPFC_TASK_T_RCV_ABTS_RSP = 37, /* SCQE TYPE: ini recv abts rsp */
-
- SPFC_TASK_T_BUFFER_CLEAR = 38, /* CMDQ TYPE: Buffer Clear */
- SPFC_TASK_T_BUFFER_CLEAR_STS = 39, /* SCQE TYPE: Buffer Clear sts */
- SPFC_TASK_T_FLUSH_SQ = 40, /* CMDQ TYPE: flush sq */
- SPFC_TASK_T_FLUSH_SQ_STS = 41, /* SCQE TYPE: flush sq sts */
-
- SPFC_TASK_T_SESS_RESET = 42, /* SQE TYPE: Reset session */
- SPFC_TASK_T_SESS_RESET_STS = 43, /* SCQE TYPE: Reset session sts */
- SPFC_TASK_T_RQE_REPLENISH_STS = 44, /* NA */
- SPFC_TASK_T_DUMP_EXCH = 45, /* CMDQ TYPE: dump exch */
- SPFC_TASK_T_INIT_SRQC = 46, /* CMDQ TYPE: init SRQC */
- SPFC_TASK_T_CLEAR_SRQ = 47, /* CMDQ TYPE: clear SRQ */
- SPFC_TASK_T_CLEAR_SRQ_STS = 48, /* SCQE TYPE: clear SRQ sts */
- SPFC_TASK_T_INIT_SCQC = 49, /* CMDQ TYPE: init SCQC */
- SPFC_TASK_T_DEL_SCQC = 50, /* CMDQ TYPE: delete SCQC */
- SPFC_TASK_T_TMF_RESP = 51, /* SQE TYPE: tgt send tmf rsp */
- SPFC_TASK_T_DEL_SRQC = 52, /* CMDQ TYPE: delete SRQC */
- SPFC_TASK_T_RCV_IMMI_CONTINUE = 53, /* SCQE TYPE: tgt recv continue immediate data */
-
- SPFC_TASK_T_ITMF_RESP = 54, /* SCQE TYPE: ini recv tmf rsp */
- SPFC_TASK_T_ITMF_MARKER_STS = 55, /* SCQE TYPE: tmf marker sts */
- SPFC_TASK_T_TACK = 56,
- SPFC_TASK_T_SEND_AEQERR = 57,
- SPFC_TASK_T_ABTS_MARKER_STS = 58, /* SCQE TYPE: abts marker sts */
- SPFC_TASK_T_FLR_CLEAR_IO = 59, /* FLR clear io type */
- SPFC_TASK_T_CREATE_SSQ_CONTEXT = 60,
- SPFC_TASK_T_CLEAR_SSQ_CONTEXT = 61,
- SPFC_TASK_T_EXCH_ID_FREE = 62,
- SPFC_TASK_T_DIFX_RESULT_STS = 63,
- SPFC_TASK_T_EXCH_ID_FREE_ABORT = 64,
- SPFC_TASK_T_EXCH_ID_FREE_ABORT_STS = 65,
- SPFC_TASK_T_PARAM_CHECK_FAIL = 66,
- SPFC_TASK_T_TGT_UNKNOWN = 67,
- SPFC_TASK_T_NVME_LS = 70, /* SQE TYPE: Snd Ls Req */
- SPFC_TASK_T_RCV_NVME_LS_RSP = 71, /* SCQE TYPE: Rcv Ls Rsp */
-
- SPFC_TASK_T_NVME_LS_RSP = 72, /* SQE TYPE: Snd Ls Rsp */
- SPFC_TASK_T_RCV_NVME_LS_RSP_STS = 73, /* SCQE TYPE: Rcv Ls Rsp sts */
-
- SPFC_TASK_T_RCV_NVME_LS_CMD = 74, /* SCQE TYPE: Rcv ls cmd */
-
- SPFC_TASK_T_NVME_IREAD = 75, /* SQE TYPE: Ini Snd Nvme Read Cmd */
- SPFC_TASK_T_NVME_IWRITE = 76, /* SQE TYPE: Ini Snd Nvme write Cmd */
-
- SPFC_TASK_T_NVME_TREAD = 77, /* SQE TYPE: Tgt Snd Nvme Read Cmd */
- SPFC_TASK_T_NVME_TWRITE = 78, /* SQE TYPE: Tgt Snd Nvme write Cmd */
-
- SPFC_TASK_T_NVME_IRESP = 79, /* SCQE TYPE: Ini recv nvme rsp for NVMEIREAD/NVMEIWRITE */
-
- SPFC_TASK_T_INI_IO_ABORT = 80, /* SQE type: INI Abort Cmd */
- SPFC_TASK_T_INI_IO_ABORT_STS = 81, /* SCQE type: INI Abort sts */
-
- SPFC_TASK_T_INI_LS_ABORT = 82, /* SQE type: INI ls abort Cmd */
- SPFC_TASK_T_INI_LS_ABORT_STS = 83, /* SCQE type: INI ls abort sts */
- SPFC_TASK_T_EXCHID_TIMEOUT_STS = 84, /* SCQE TYPE: EXCH_ID TIME OUT */
- SPFC_TASK_T_PARENT_ERR_STS = 85, /* SCQE TYPE: PARENT ERR */
-
- SPFC_TASK_T_NOP = 86,
- SPFC_TASK_T_NOP_STS = 87,
-
- SPFC_TASK_T_DFX_INFO = 126,
- SPFC_TASK_T_BUTT
-};
-
-/* error code for error report */
-
-enum spfc_err_code {
- FC_CQE_COMPLETED = 0, /* Successful */
- FC_SESS_HT_INSERT_FAIL = 1, /* Offload fail: hash insert fail */
- FC_SESS_HT_INSERT_DUPLICATE = 2, /* Offload fail: duplicate offload */
- FC_SESS_HT_BIT_SET_FAIL = 3, /* Offload fail: bloom filter set fail */
- FC_SESS_HT_DELETE_FAIL = 4, /* Offload fail: hash delete fail(duplicate delete) */
- FC_CQE_BUFFER_CLEAR_IO_COMPLETED = 5, /* IO done in buffer clear */
- FC_CQE_SESSION_ONLY_CLEAR_IO_COMPLETED = 6, /* IO done in session rst mode=1 */
- FC_CQE_SESSION_RST_CLEAR_IO_COMPLETED = 7, /* IO done in session rst mode=3 */
- FC_CQE_TMF_RSP_IO_COMPLETED = 8, /* IO done in tgt tmf rsp */
- FC_CQE_TMF_IO_COMPLETED = 9, /* IO done in ini tmf */
- FC_CQE_DRV_ABORT_IO_COMPLETED = 10, /* IO done in tgt abort */
- /*
-	 * IO done in fcp rsp process. Used for the scenario: 1. abort before cmd;
-	 * 2. send fcp rsp directly after recv cmd.
- */
- FC_CQE_DRV_ABORT_IO_IN_RSP_COMPLETED = 11,
- /*
-	 * IO done in fcp cmd process. Used for the scenario: 1. abort before cmd; 2. child setup fail.
- */
- FC_CQE_DRV_ABORT_IO_IN_CMD_COMPLETED = 12,
- FC_CQE_WQE_FLUSH_IO_COMPLETED = 13, /* IO done in FLUSH SQ */
- FC_ERROR_CODE_DATA_DIFX_FAILED = 14, /* fcp data format check: DIFX check error */
- /* fcp data format check: task_type is not read */
- FC_ERROR_CODE_DATA_TASK_TYPE_INCORRECT = 15,
- FC_ERROR_CODE_DATA_OOO_RO = 16, /* fcp data format check: data offset is not continuous */
- FC_ERROR_CODE_DATA_EXCEEDS_DATA2TRNS = 17, /* fcp data format check: data is over run */
- /* fcp rsp format check: payload is too short */
- FC_ERROR_CODE_FCP_RSP_INVALID_LENGTH_FIELD = 18,
- /* fcp rsp format check: fcp_conf need, but exch don't hold seq initiative */
- FC_ERROR_CODE_FCP_RSP_CONF_REQ_NOT_SUPPORTED_YET = 19,
- /* fcp rsp format check: fcp_conf is required, but it's the last seq */
- FC_ERROR_CODE_FCP_RSP_OPENED_SEQ = 20,
- /* xfer rdy format check: payload is too short */
- FC_ERROR_CODE_XFER_INVALID_PAYLOAD_SIZE = 21,
-	/* xfer rdy format check: last data out hasn't finished */
- FC_ERROR_CODE_XFER_PEND_XFER_SET = 22,
- /* xfer rdy format check: data offset is not continuous */
- FC_ERROR_CODE_XFER_OOO_RO = 23,
- FC_ERROR_CODE_XFER_NULL_BURST_LEN = 24, /* xfer rdy format check: burst len is 0 */
- FC_ERROR_CODE_REC_TIMER_EXPIRE = 25, /* Timer expire: REC_TIMER */
- FC_ERROR_CODE_E_D_TIMER_EXPIRE = 26, /* Timer expire: E_D_TIMER */
- FC_ERROR_CODE_ABORT_TIMER_EXPIRE = 27, /* Timer expire: Abort timer */
- FC_ERROR_CODE_ABORT_MAGIC_NUM_NOT_MATCH = 28, /* Abort IO magic number mismatch */
- FC_IMMI_CMDPKT_SETUP_FAIL = 29, /* RX immediate data cmd pkt child setup fail */
- FC_ERROR_CODE_DATA_SEQ_ID_NOT_EQUAL = 30, /* RX fcp data sequence id not equal */
- FC_ELS_GS_RSP_EXCH_CHECK_FAIL = 31, /* ELS/GS exch info check fail */
- FC_CQE_ELS_GS_SRQE_GET_FAIL = 32, /* ELS/GS process get SRQE fail */
- FC_CQE_DATA_DMA_REQ_FAIL = 33, /* SMF soli-childdma rsp error */
- FC_CQE_SESSION_CLOSED = 34, /* Session is closed */
- FC_SCQ_IS_FULL = 35, /* SCQ is full */
- FC_SRQ_IS_FULL = 36, /* SRQ is full */
- FC_ERROR_DUCHILDCTX_SETUP_FAIL = 37, /* dpchild ctx setup fail */
- FC_ERROR_INVALID_TXMFS = 38, /* invalid txmfs */
- FC_ERROR_OFFLOAD_LACKOF_SCQE_FAIL = 39, /* offload fail,lack of SCQE,through AEQ */
-	FC_ERROR_INVALID_TASK_ID = 40,		   /* tx invalid task id */
-	FC_ERROR_INVALID_PKT_LEN = 41,		   /* tx els gs packet len check */
- FC_CQE_ELS_GS_REQ_CLR_IO_COMPLETED = 42, /* IO done in els gs tx */
- FC_CQE_ELS_RSP_CLR_IO_COMPLETED = 43, /* IO done in els rsp tx */
- FC_ERROR_CODE_RESID_UNDER_ERR = 44, /* FCP RSP RESID ERROR */
- FC_ERROR_EXCH_ID_FREE_ERR = 45, /* Abnormal free xid failed */
- FC_ALLOC_EXCH_ID_FAILED = 46, /* ucode alloc EXCH ID failed */
- FC_ERROR_DUPLICATE_IO_RECEIVED = 47, /* Duplicate tcmnd or tmf rsp received */
- FC_ERROR_RXID_MISCOMPARE = 48,
- FC_ERROR_FAILOVER_CLEAR_VALID_HOST = 49, /* Failover cleared valid host io */
- FC_ERROR_EXCH_ID_NOT_MATCH = 50, /* SCQ TYPE: xid not match */
- FC_ERROR_ABORT_FAIL = 51, /* SCQ TYPE: abort fail */
- FC_ERROR_SHARD_TABLE_OP_FAIL = 52, /* SCQ TYPE: shard table OP fail */
- FC_ERROR_E0E1_FAIL = 53,
- FC_INSERT_EXCH_ID_HASH_FAILED = 54, /* ucode INSERT EXCH ID HASH failed */
- FC_ERROR_CODE_FCP_RSP_UPDMA_FAILED = 55, /* up dma req failed,while fcp rsp is rcving */
- FC_ERROR_CODE_SID_DID_NOT_MATCH = 56, /* sid or did not match */
- FC_ERROR_DATA_NOT_REL_OFF = 57, /* data not rel off */
- FC_ERROR_CODE_EXCH_ID_TIMEOUT = 58, /* exch id timeout */
- FC_ERROR_PARENT_CHECK_FAIL = 59,
- FC_ERROR_RECV_REC_REJECT = 60, /* RECV REC RSP REJECT */
- FC_ERROR_RECV_SRR_REJECT = 61, /* RECV REC SRR REJECT */
- FC_ERROR_REC_NOT_FIND_EXID_INVALID = 62,
- FC_ERROR_RECV_REC_NO_ERR = 63,
- FC_ERROR_PARENT_CTX_ERR = 64
-};
-
-/* AEQ EVENT TYPE */
-enum spfc_aeq_evt_type {
-	/* SCQ and SRQ not enough, HOST will initiate an operation to the associated SCQ/SRQ */
- FC_AEQ_EVENT_QUEUE_ERROR = 48,
-	FC_AEQ_EVENT_WQE_FATAL_ERROR = 49, /* WQE MSN check error, HOST will reset port */
- FC_AEQ_EVENT_CTX_FATAL_ERROR = 50, /* serious chip error, HOST will reset chip */
- FC_AEQ_EVENT_OFFLOAD_ERROR = 51,
- FC_FC_AEQ_EVENT_TYPE_LAST
-};
-
-enum spfc_protocol_class {
- FC_PROTOCOL_CLASS_3 = 0x0,
- FC_PROTOCOL_CLASS_2 = 0x1,
- FC_PROTOCOL_CLASS_1 = 0x2,
- FC_PROTOCOL_CLASS_F = 0x3,
- FC_PROTOCOL_CLASS_OTHER = 0x4
-};
-
-enum spfc_aeq_evt_err_code {
- /* detail type of resource lack */
- FC_SCQ_IS_FULL_ERR = 0,
- FC_SRQ_IS_FULL_ERR,
-
- /* detail type of FC_AEQ_EVENT_WQE_FATAL_ERROR */
- FC_SQE_CHILD_SETUP_WQE_MSN_ERR = 2,
- FC_SQE_CHILD_SETUP_WQE_GPA_ERR,
- FC_CMDPKT_CHILD_SETUP_INVALID_WQE_ERR_1,
- FC_CMDPKT_CHILD_SETUP_INVALID_WQE_ERR_2,
- FC_CLEAEQ_WQE_ERR,
- FC_WQEFETCH_WQE_MSN_ERR,
- FC_WQEFETCH_QUINFO_ERR,
-
- /* detail type of FC_AEQ_EVENT_CTX_FATAL_ERROR */
- FC_SCQE_ERR_BIT_ERR = 9,
- FC_UPDMA_ADDR_REQ_SRQ_ERR,
- FC_SOLICHILDDMA_ADDR_REQ_ERR,
- FC_UNSOLICHILDDMA_ADDR_REQ_ERR,
- FC_SQE_CHILD_SETUP_QINFO_ERR_1,
- FC_SQE_CHILD_SETUP_QINFO_ERR_2,
- FC_CMDPKT_CHILD_SETUP_QINFO_ERR_1,
- FC_CMDPKT_CHILD_SETUP_QINFO_ERR_2,
- FC_CMDPKT_CHILD_SETUP_PMSN_ERR,
- FC_CLEAEQ_CTX_ERR,
- FC_WQEFETCH_CTX_ERR,
- FC_FLUSH_QPC_ERR_LQP,
- FC_FLUSH_QPC_ERR_SMF,
- FC_PREFETCH_QPC_ERR_PCM_MHIT_LQP,
- FC_PREFETCH_QPC_ERR_PCM_MHIT_FQG,
- FC_PREFETCH_QPC_ERR_PCM_ABM_FQG,
- FC_PREFETCH_QPC_ERR_MAP_FQG,
- FC_PREFETCH_QPC_ERR_MAP_LQP,
- FC_PREFETCH_QPC_ERR_SMF_RTN,
- FC_PREFETCH_QPC_ERR_CFG,
- FC_PREFETCH_QPC_ERR_FLSH_HIT,
- FC_PREFETCH_QPC_ERR_FLSH_ACT,
- FC_PREFETCH_QPC_ERR_ABM_W_RSC,
- FC_PREFETCH_QPC_ERR_RW_ABM,
- FC_PREFETCH_QPC_ERR_DEFAULT,
- FC_CHILDHASH_INSERT_SW_ERR,
- FC_CHILDHASH_LOOKUP_SW_ERR,
- FC_CHILDHASH_DEL_SW_ERR,
- FC_EXCH_ID_FREE_SW_ERR,
- FC_FLOWHASH_INSERT_SW_ERR,
- FC_FLOWHASH_LOOKUP_SW_ERR,
- FC_FLOWHASH_DEL_SW_ERR,
- FC_FLUSH_QPC_ERR_USED,
- FC_FLUSH_QPC_ERR_OUTER_LOCK,
- FC_SETUP_SESSION_ERR,
-
- FC_AEQ_EVT_ERR_CODE_BUTT
-
-};
-
-/* AEQ data structure */
-struct spfc_aqe_data {
- union {
- struct {
- u32 conn_id : 16;
- u32 rsvd : 8;
- u32 evt_code : 8;
- } wd0;
-
- u32 data0;
- };
-
- union {
- struct {
- u32 xid : 20;
- u32 rsvd : 12;
- } wd1;
-
- u32 data1;
- };
-};
-
-/* Control Section: Common Header */
-struct spfc_wqe_ctrl_ch {
- union {
- struct {
- u32 bdsl : 8;
- u32 drv_sl : 2;
- u32 rsvd0 : 4;
- u32 wf : 1;
- u32 cf : 1;
- u32 tsl : 5;
- u32 va : 1;
- u32 df : 1;
- u32 cr : 1;
- u32 dif_sl : 3;
- u32 csl : 2;
- u32 ctrl_sl : 2;
- u32 owner : 1;
- } wd0;
-
- u32 ctrl_ch_val;
- };
-};
-
-/* Control Section: Queue Specific Field */
-struct spfc_wqe_ctrl_qsf {
- u32 wqe_sn : 16;
- u32 dump_wqe_sn : 16;
-};
-
-/* DIF info definition in WQE */
-struct spfc_fc_dif_info {
- struct {
- u32 app_tag_ctrl : 3; /* DIF/DIX APP TAG Control */
- /* Bit 0: scenario of the reference tag verify mode.
- *Bit 1: scenario of the reference tag insert/replace mode.
- */
- u32 ref_tag_mode : 2;
- /* 0: fixed; 1: increasement; */
- u32 ref_tag_ctrl : 3; /* The DIF/DIX Reference tag control */
- u32 grd_agm_ini_ctrl : 3;
- u32 grd_agm_ctrl : 2; /* Bit 0: DIF/DIX guard verify algorithm control */
- /* Bit 1: DIF/DIX guard replace or insert algorithm control */
- u32 grd_ctrl : 3; /* The DIF/DIX Guard control */
- u32 dif_verify_type : 2; /* verify type */
- u32 difx_ref_esc : 1; /* Check blocks whose reference tag contains 0xFFFF flag */
- u32 difx_app_esc : 1;/* Check blocks whose application tag contains 0xFFFF flag */
- u32 rsvd : 8;
- u32 sct_size : 1; /* Sector size, 1: 4K; 0: 512 */
- u32 smd_tp : 2;
- u32 difx_en : 1;
- } wd0;
-
- struct {
- u32 cmp_app_tag_msk : 16;
- u32 rsvd : 7;
- u32 lun_qos_en : 2;
- u32 vpid : 7;
- } wd1;
-
- u16 cmp_app_tag;
- u16 rep_app_tag;
-
- u32 cmp_ref_tag;
- u32 rep_ref_tag;
-};
-
-/* Task Section: TMF SQE for INI */
-struct spfc_tmf_info {
- union {
- struct {
- u32 reset_exch_end : 16;
- u32 reset_exch_start : 16;
- } bs;
- u32 value;
- } w0;
-
- union {
- struct {
- u32 reset_did : 24;
- u32 reset_type : 2;
- u32 marker_sts : 1;
- u32 rsvd0 : 5;
- } bs;
- u32 value;
- } w1;
-
- union {
- struct {
- u32 reset_sid : 24;
- u32 rsvd0 : 8;
- } bs;
- u32 value;
- } w2;
-
- u8 reset_lun[8];
-};
-
-/* Task Section: CMND SQE for INI */
-struct spfc_sqe_icmnd {
- u8 fcp_cmnd_iu[FC_SCSI_CMDIU_LEN];
- union {
- struct spfc_fc_dif_info dif_info;
- struct spfc_tmf_info tmf;
- } info;
-};
-
-/* Task Section: ABTS SQE */
-struct spfc_sqe_abts {
- u32 fh_parm_abts;
- u32 hotpooltag;
- u32 release_timer;
-};
-
-struct spfc_keys {
- struct {
- u32 smac1 : 8;
- u32 smac0 : 8;
- u32 rsv : 16;
- } wd0;
-
- u8 smac[4];
-
- u8 dmac[6];
- u8 sid[3];
- u8 did[3];
-
- struct {
- u32 port_id : 3;
- u32 host_id : 2;
- u32 rsvd : 27;
- } wd5;
- u32 rsvd;
-};
-
-/* BDSL: Session Enable WQE.keys field only use 26 bytes room */
-struct spfc_cmdqe_sess_en {
- struct {
- u32 rx_id : 16;
- u32 port_id : 8;
- u32 task_type : 8;
- } wd0;
-
- struct {
- u32 cid : 20;
- u32 rsvd1 : 12;
- } wd1;
-
- struct {
- u32 conn_id : 16;
- u32 scqn : 16;
- } wd2;
-
- struct {
- u32 xid_p : 20;
- u32 rsvd3 : 12;
- } wd3;
-
- u32 context_gpa_hi;
- u32 context_gpa_lo;
- struct spfc_keys keys;
- u32 context[64];
-};
-
-/* Control Section */
-struct spfc_wqe_ctrl {
- struct spfc_wqe_ctrl_ch ch;
- struct spfc_wqe_ctrl_qsf qsf;
-};
-
-struct spfc_sqe_els_rsp {
- struct {
- u32 echo_flag : 16;
- u32 data_len : 16;
- } wd0;
-
- struct {
- u32 rsvd1 : 27;
- u32 offload_flag : 1;
- u32 lp_bflag : 1;
- u32 clr_io : 1;
- u32 para_update : 2;
- } wd1;
-
- struct {
- u32 seq_cnt : 1;
- u32 e_d_tov : 1;
- u32 rsvd2 : 6;
- u32 class_mode : 8; /* 0:class3, 1:class2*/
- u32 tx_mfs : 16;
- } wd2;
-
- u32 e_d_tov_timer_val;
-
- struct {
- u32 conf : 1;
- u32 rec : 1;
- u32 xfer_dis : 1;
- u32 immi_taskid_cnt : 13;
- u32 immi_taskid_start : 16;
- } wd4;
-
- u32 first_burst_len;
-
- struct {
- u32 reset_exch_end : 16;
- u32 reset_exch_start : 16;
- } wd6;
-
- struct {
- u32 scqn : 16;
- u32 hotpooltag : 16;
- } wd7;
-
- u32 magic_local;
- u32 magic_remote;
- u32 ts_rcv_echo_req;
- u32 sid;
- u32 did;
- u32 context_gpa_hi;
- u32 context_gpa_lo;
-};
-
-struct spfc_sqe_reset_session {
- struct {
- u32 reset_exch_end : 16;
- u32 reset_exch_start : 16;
- } wd0;
-
- struct {
- u32 reset_did : 24;
- u32 mode : 2;
- u32 rsvd : 6;
- } wd1;
-
- struct {
- u32 reset_sid : 24;
- u32 rsvd : 8;
- } wd2;
-
- struct {
- u32 scqn : 16;
- u32 rsvd : 16;
- } wd3;
-};
-
-struct spfc_sqe_nop_sq {
- struct {
- u32 scqn : 16;
- u32 rsvd : 16;
- } wd0;
- u32 magic_num;
-};
-
-struct spfc_sqe_t_els_gs {
- u16 echo_flag;
- u16 data_len;
-
- struct {
- u32 rsvd1 : 9;
- u32 offload_flag : 1;
- u32 origin_hottag : 16;
- u32 rec_flag : 1;
- u32 rec_support : 1;
- u32 lp_bflag : 1;
- u32 clr_io : 1;
- u32 para_update : 2;
- } wd4;
-
- struct {
- u32 seq_cnt : 1;
- u32 e_d_tov : 1;
- u32 rsvd2 : 14;
- u32 tx_mfs : 16;
- } wd5;
-
- u32 e_d_tov_timer_val;
-
- struct {
- u32 reset_exch_end : 16;
- u32 reset_exch_start : 16;
- } wd6;
-
- struct {
- u32 scqn : 16;
- u32 hotpooltag : 16; /* used for send ELS rsp */
- } wd7;
-
- u32 sid;
- u32 did;
- u32 context_gpa_hi;
- u32 context_gpa_lo;
- u32 origin_magicnum;
-};
-
-struct spfc_sqe_els_gs_elsrsp_comm {
- u16 rsvd;
- u16 data_len;
-};
-
-struct spfc_sqe_lpb_msg {
- struct {
- u32 reset_exch_end : 16;
- u32 reset_exch_start : 16;
- } w0;
-
- struct {
- u32 reset_did : 24;
- u32 reset_type : 2;
- u32 rsvd0 : 6;
- } w1;
-
- struct {
- u32 reset_sid : 24;
- u32 rsvd0 : 8;
- } w2;
-
- u16 tmf_exch_id;
- u16 rsvd1;
-
- u8 reset_lun[8];
-};
-
-/* SQE Task Section's Contents except Common Header */
-union spfc_sqe_ts_cont {
- struct spfc_sqe_icmnd icmnd;
- struct spfc_sqe_abts abts;
- struct spfc_sqe_els_rsp els_rsp;
- struct spfc_sqe_t_els_gs t_els_gs;
- struct spfc_sqe_els_gs_elsrsp_comm els_gs_elsrsp_comm;
- struct spfc_sqe_reset_session reset_session;
- struct spfc_sqe_lpb_msg lpb_msg;
- struct spfc_sqe_nop_sq nop_sq;
- u32 value[17];
-};
-
-struct spfc_sqe_nvme_icmnd_part2 {
- u8 nvme_cmnd_iu_part2_data[FC_NVME_CMDIU_LEN - FC_SCSI_CMDIU_LEN];
-};
-
-union spfc_sqe_ts_ex {
- struct spfc_sqe_nvme_icmnd_part2 nvme_icmnd_part2;
- u32 value[12];
-};
-
-struct spfc_sqe_ts {
- /* SQE Task Section's Common Header */
- u32 local_xid : 16; /* local exch_id, icmnd/els send used for hotpooltag */
- u32 crc_inj : 1;
- u32 immi_std : 1;
- u32 cdb_type : 1; /* cdb_type = 0:CDB_LEN = 16B, cdb_type = 1:CDB_LEN = 32B */
- u32 rsvd : 5; /* used for loopback saving bdsl's num */
- u32 task_type : 8;
-
- struct {
- u16 conn_id;
- u16 remote_xid;
- } wd0;
-
- u32 xid : 20;
- u32 sqn : 12;
- u32 cid;
- u32 magic_num;
- union spfc_sqe_ts_cont cont;
-};
-
-struct spfc_constant_sge {
- u32 buf_addr_hi;
- u32 buf_addr_lo;
-};
-
-struct spfc_variable_sge {
- u32 buf_addr_hi;
- u32 buf_addr_lo;
-
- struct {
- u32 buf_len : 31;
- u32 r_flag : 1;
- } wd0;
-
- struct {
- u32 buf_addr_gpa : 16;
- u32 xid : 14;
- u32 extension_flag : 1;
- u32 last_flag : 1;
- } wd1;
-};
-
-#define FC_WQE_SIZE 256
-/* SQE, should not be over 256B */
-struct spfc_sqe {
- struct spfc_wqe_ctrl ctrl_sl;
- u32 sid;
- u32 did;
- u64 wqe_gpa; /* gpa shift 6 bit to right*/
- u64 db_val;
- union spfc_sqe_ts_ex ts_ex;
- struct spfc_variable_sge esge[3];
- struct spfc_wqe_ctrl ectrl_sl;
- struct spfc_sqe_ts ts_sl;
- struct spfc_variable_sge sge[2];
-};
-
-struct spfc_rqe_ctrl {
- struct spfc_wqe_ctrl_ch ch;
-
- struct {
- u16 wqe_msn;
- u16 dump_wqe_msn;
- } wd0;
-};
-
-struct spfc_rqe_drv {
- struct {
- u32 rsvd0 : 16;
- u32 user_id : 16;
- } wd0;
-
- u32 rsvd1;
-};
-
-/* RQE,should not be over 32B */
-struct spfc_rqe {
- struct spfc_rqe_ctrl ctrl_sl;
- u32 cqe_gpa_h;
- u32 cqe_gpa_l;
- struct spfc_constant_sge bds_sl;
- struct spfc_rqe_drv drv_sl;
-};
-
-struct spfc_cmdqe_abort {
- struct {
- u32 rx_id : 16;
- u32 rsvd0 : 8;
- u32 task_type : 8;
- } wd0;
-
- struct {
- u32 ox_id : 16;
- u32 rsvd1 : 12;
- u32 trsp_send : 1;
- u32 tcmd_send : 1;
- u32 immi : 1;
- u32 reply_sts : 1;
- } wd1;
-
- struct {
- u32 conn_id : 16;
- u32 scqn : 16;
- } wd2;
-
- struct {
- u32 xid : 20;
- u32 rsvd : 12;
- } wd3;
-
- struct {
- u32 cid : 20;
- u32 rsvd : 12;
- } wd4;
- struct {
- u32 hotpooltag : 16;
- u32 rsvd : 16;
- } wd5; /* v6 new define */
- /* abort time out. Used for abort and io cmd reach ucode in different path
- * and io cmd will not arrive.
- */
- u32 time_out;
- u32 magic_num;
-};
-
-struct spfc_cmdqe_abts_rsp {
- struct {
- u32 rx_id : 16;
- u32 rsvd0 : 8;
- u32 task_type : 8;
- } wd0;
-
- struct {
- u32 ox_id : 16;
- u32 rsvd1 : 4;
- u32 port_id : 4;
- u32 payload_len : 7;
- u32 rsp_type : 1;
- } wd1;
-
- struct {
- u32 conn_id : 16;
- u32 scqn : 16;
- } wd2;
-
- struct {
- u32 xid : 20;
- u32 rsvd : 12;
- } wd3;
-
- struct {
- u32 cid : 20;
- u32 rsvd : 12;
- } wd4;
-
- struct {
- u32 req_rx_id : 16;
- u32 hotpooltag : 16;
- } wd5;
-
- /* payload length is according to rsp_type:1DWORD or 3DWORD */
- u32 payload[3];
-};
-
-struct spfc_cmdqe_buffer_clear {
- struct {
- u32 rsvd1 : 16;
- u32 rsvd0 : 8;
- u32 wqe_type : 8;
- } wd0;
-
- struct {
- u32 rx_id_end : 16;
- u32 rx_id_start : 16;
- } wd1;
-
- u32 scqn;
- u32 wd3;
-};
-
-struct spfc_cmdqe_flush_sq {
- struct {
- u32 entry_count : 16;
- u32 rsvd : 8;
- u32 wqe_type : 8;
- } wd0;
-
- struct {
- u32 scqn : 16;
- u32 port_id : 4;
- u32 pos : 11;
- u32 last_wqe : 1;
- } wd1;
-
- struct {
- u32 rsvd : 4;
- u32 clr_pos : 12;
- u32 pkt_ptr : 16;
- } wd2;
-
- struct {
- u32 first_sq_xid : 24;
- u32 sqqid_start_per_session : 4;
- u32 sqcnt_per_session : 4;
- } wd3;
-};
-
-struct spfc_cmdqe_dump_exch {
- struct {
- u32 rsvd1 : 16;
- u32 rsvd0 : 8;
- u32 task_type : 8;
- } wd0;
-
- u16 oqid_wr;
- u16 oqid_rd;
-
- u32 host_id;
- u32 func_id;
- u32 cache_id;
- u32 exch_id;
-};
-
-struct spfc_cmdqe_creat_srqc {
- struct {
- u32 rsvd1 : 16;
- u32 rsvd0 : 8;
- u32 task_type : 8;
- } wd0;
-
- u32 srqc_gpa_h;
- u32 srqc_gpa_l;
-
- u32 srqc[16]; /* srqc_size=64B */
-};
-
-struct spfc_cmdqe_delete_srqc {
- struct {
- u32 rsvd1 : 16;
- u32 rsvd0 : 8;
- u32 task_type : 8;
- } wd0;
-
- u32 srqc_gpa_h;
- u32 srqc_gpa_l;
-};
-
-struct spfc_cmdqe_clr_srq {
- struct {
- u32 rsvd1 : 16;
- u32 rsvd0 : 8;
- u32 task_type : 8;
- } wd0;
-
- struct {
- u32 scqn : 16;
- u32 srq_type : 16;
- } wd1;
-
- u32 srqc_gpa_h;
- u32 srqc_gpa_l;
-};
-
-struct spfc_cmdqe_creat_scqc {
- struct {
- u32 rsvd1 : 16;
- u32 rsvd0 : 8;
- u32 task_type : 8;
- } wd0;
-
- struct {
- u32 scqn : 16;
- u32 rsvd2 : 16;
- } wd1;
-
- u32 scqc[16]; /* scqc_size=64B */
-};
-
-struct spfc_cmdqe_delete_scqc {
- struct {
- u32 rsvd1 : 16;
- u32 rsvd0 : 8;
- u32 task_type : 8;
- } wd0;
-
- struct {
- u32 scqn : 16;
- u32 rsvd2 : 16;
- } wd1;
-};
-
-struct spfc_cmdqe_creat_ssqc {
- struct {
- u32 rsvd1 : 4;
- u32 xid : 20;
- u32 task_type : 8;
- } wd0;
-
- struct {
- u32 scqn : 16;
- u32 rsvd2 : 16;
- } wd1;
- u32 context_gpa_hi;
- u32 context_gpa_lo;
-
- u32 ssqc[64]; /* ssqc_size=256B */
-};
-
-struct spfc_cmdqe_delete_ssqc {
- struct {
- u32 entry_count : 4;
- u32 xid : 20;
- u32 task_type : 8;
- } wd0;
-
- struct {
- u32 scqn : 16;
- u32 rsvd2 : 16;
- } wd1;
- u32 context_gpa_hi;
- u32 context_gpa_lo;
-};
-
-/* add xid free via cmdq */
-struct spfc_cmdqe_exch_id_free {
- struct {
- u32 task_id : 16;
- u32 port_id : 8;
- u32 rsvd0 : 8;
- } wd0;
-
- u32 magic_num;
-
- struct {
- u32 scqn : 16;
- u32 hotpool_tag : 16;
- } wd2;
- struct {
- u32 rsvd1 : 31;
- u32 clear_abort_flag : 1;
- } wd3;
- u32 sid;
- u32 did;
- u32 type; /* ELS/ELS RSP/IO */
-};
-
-struct spfc_cmdqe_cmdqe_dfx {
- struct {
- u32 rsvd1 : 4;
- u32 xid : 20;
- u32 task_type : 8;
- } wd0;
-
- struct {
- u32 qid_crclen : 12;
- u32 cid : 20;
- } wd1;
- u32 context_gpa_hi;
- u32 context_gpa_lo;
- u32 dfx_type;
-
- u32 rsv[16];
-};
-
-struct spfc_sqe_t_rsp {
- struct {
- u32 rsvd1 : 16;
- u32 fcp_rsp_len : 8;
- u32 busy_rsp : 3;
- u32 immi : 1;
- u32 mode : 1;
- u32 conf : 1;
- u32 fill : 2;
- } wd0;
-
- u32 hotpooltag;
-
- union {
- struct {
- u32 addr_h;
- u32 addr_l;
- } gpa;
-
- struct {
- u32 data[23]; /* FCP_RESP payload buf, 92B rsvd */
- } buf;
- } payload;
-};
-
-struct spfc_sqe_tmf_t_rsp {
- struct {
- u32 scqn : 16;
- u32 fcp_rsp_len : 8;
- u32 pkt_nosnd_flag : 3; /* tmf rsp snd flag, 0:snd, 1: not snd, Driver ignore */
- u32 reset_type : 2;
- u32 conf : 1;
- u32 fill : 2;
- } wd0;
-
- struct {
- u32 reset_exch_end : 16;
- u32 reset_exch_start : 16;
- } wd1;
-
- struct {
- u16 hotpooltag; /*tmf rsp hotpooltag, Driver ignore */
- u16 rsvd;
- } wd2;
-
- u8 lun[8]; /* Lun ID */
- u32 data[20]; /* FCP_RESP payload buf, 80B rsvd */
-};
-
-struct spfc_sqe_tresp_ts {
- /* SQE Task Section's Common Header */
- u16 local_xid;
- u8 rsvd0;
- u8 task_type;
-
- struct {
- u16 conn_id;
- u16 remote_xid;
- } wd0;
-
- u32 xid : 20;
- u32 sqn : 12;
- u32 cid;
- u32 magic_num;
- struct spfc_sqe_t_rsp t_rsp;
-};
-
-struct spfc_sqe_tmf_resp_ts {
- /* SQE Task Section's Common Header */
- u16 local_xid;
- u8 rsvd0;
- u8 task_type;
-
- struct {
- u16 conn_id;
- u16 remote_xid;
- } wd0;
-
- u32 xid : 20;
- u32 sqn : 12;
- u32 cid;
- u32 magic_num; /* magic num */
- struct spfc_sqe_tmf_t_rsp tmf_rsp;
-};
-
-/* SQE for fcp response, max TSL is 120B */
-struct spfc_sqe_tresp {
- struct spfc_wqe_ctrl ctrl_sl;
- u64 taskrsvd;
- u64 wqe_gpa;
- u64 db_val;
- union spfc_sqe_ts_ex ts_ex;
- struct spfc_variable_sge esge[3];
- struct spfc_wqe_ctrl ectrl_sl;
- struct spfc_sqe_tresp_ts ts_sl;
-};
-
-/* SQE for tmf response, max TSL is 120B */
-struct spfc_sqe_tmf_rsp {
- struct spfc_wqe_ctrl ctrl_sl;
- u64 taskrsvd;
- u64 wqe_gpa;
- u64 db_val;
- union spfc_sqe_ts_ex ts_ex;
- struct spfc_variable_sge esge[3];
- struct spfc_wqe_ctrl ectrl_sl;
- struct spfc_sqe_tmf_resp_ts ts_sl;
-};
-
-/* SCQE Common Header */
-struct spfc_scqe_ch {
- struct {
- u32 task_type : 8;
- u32 sqn : 13;
- u32 cqe_remain_cnt : 3;
- u32 err_code : 7;
- u32 owner : 1;
- } wd0;
-};
-
-struct spfc_scqe_type {
- struct spfc_scqe_ch ch;
-
- u32 rsvd0;
-
- u16 conn_id;
- u16 rsvd4;
-
- u32 rsvd1[12];
-
- struct {
- u32 done : 1;
- u32 rsvd : 23;
- u32 dif_vry_rst : 8;
- } wd0;
-};
-
-struct spfc_scqe_sess_sts {
- struct spfc_scqe_ch ch;
-
- struct {
- u32 xid_qpn : 20;
- u32 rsvd1 : 12;
- } wd0;
-
- struct {
- u32 conn_id : 16;
- u32 rsvd3 : 16;
- } wd1;
-
- struct {
- u32 cid : 20;
- u32 rsvd2 : 12;
- } wd2;
-
- u64 rsvd3;
-};
-
-struct spfc_scqe_comm_rsp_sts {
- struct spfc_scqe_ch ch;
-
- struct {
- u32 rx_id : 16;
- u32 ox_id : 16;
- } wd0;
-
- struct {
- u32 conn_id : 16;
- u32 hotpooltag : 16; /* ucode return hotpooltag to drv */
- } wd1;
-
- u32 magic_num;
-};
-
-struct spfc_scqe_iresp {
- struct spfc_scqe_ch ch;
-
- struct {
- u32 rx_id : 16;
- u32 ox_id : 16;
- } wd0;
-
- struct {
- u32 conn_id : 16;
- u32 rsvd0 : 3;
- u32 user_id_num : 8;
- u32 dif_info : 5;
- } wd1;
-
- struct {
- u32 scsi_status : 8;
- u32 fcp_flag : 8;
- u32 hotpooltag : 16; /* ucode return hotpooltag to drv */
- } wd2;
-
- u32 fcp_resid;
- u32 fcp_sns_len;
- u32 fcp_rsp_len;
- u32 magic_num;
- u16 user_id[FC_SENSEDATA_USERID_CNT_MAX];
- u32 rsv1;
-};
-
-struct spfc_scqe_nvme_iresp {
- struct spfc_scqe_ch ch;
-
- struct {
- u32 rx_id : 16;
- u32 ox_id : 16;
- } wd0;
-
- struct {
- u32 conn_id : 16;
- u32 eresp_flag : 8;
- u32 user_id_num : 8;
- } wd1;
-
- struct {
- u32 scsi_status : 8;
- u32 fcp_flag : 8;
- u32 hotpooltag : 16; /* ucode return hotpooltag to drv */
- } wd2;
- u32 magic_num;
- u32 eresp[8];
-};
-
-#pragma pack(1)
-struct spfc_dif_result {
- u8 vrd_rpt;
- u16 pad;
- u8 rcv_pi_vb;
- u32 rcv_pi_h;
- u32 rcv_pi_l;
- u16 vrf_agm_imm;
- u16 ri_agm_imm;
-};
-
-#pragma pack()
-
-struct spfc_scqe_dif_result {
- struct spfc_scqe_ch ch;
-
- struct {
- u32 rx_id : 16;
- u32 ox_id : 16;
- } wd0;
-
- struct {
- u32 conn_id : 16;
- u32 rsvd0 : 11;
- u32 dif_info : 5;
- } wd1;
-
- struct {
- u32 scsi_status : 8;
- u32 fcp_flag : 8;
- u32 hotpooltag : 16; /* ucode return hotpooltag to drv */
- } wd2;
-
- u32 fcp_resid;
- u32 fcp_sns_len;
- u32 fcp_rsp_len;
- u32 magic_num;
-
- u32 rsv1[3];
- struct spfc_dif_result difinfo;
-};
-
-struct spfc_scqe_rcv_abts_rsp {
- struct spfc_scqe_ch ch;
-
- struct {
- u32 rx_id : 16;
- u32 ox_id : 16;
- } wd0;
-
- struct {
- u32 conn_id : 16;
- u32 hotpooltag : 16;
- } wd1;
-
- struct {
- u32 fh_rctrl : 8;
- u32 rsvd0 : 24;
- } wd2;
-
- struct {
- u32 did : 24;
- u32 rsvd1 : 8;
- } wd3;
-
- struct {
- u32 sid : 24;
- u32 rsvd2 : 8;
- } wd4;
-
- /* payload length is according to fh_rctrl:1DWORD or 3DWORD */
- u32 payload[3];
- u32 magic_num;
-};
-
-struct spfc_scqe_fcp_rsp_sts {
- struct spfc_scqe_ch ch;
-
- struct {
- u32 rx_id : 16;
- u32 ox_id : 16;
- } wd0;
-
- struct {
- u32 conn_id : 16;
- u32 rsvd0 : 10;
- u32 immi : 1;
- u32 dif_info : 5;
- } wd1;
-
- u32 magic_num;
- u32 hotpooltag;
- u32 xfer_rsp;
- u32 rsvd[5];
-
- u32 dif_tmp[4]; /* HW will overwrite it */
-};
-
-struct spfc_scqe_rcv_els_cmd {
- struct spfc_scqe_ch ch;
-
- struct {
- u32 did : 24;
- u32 class_mode : 8; /* 0:class3, 1:class2 */
- } wd0;
-
- struct {
- u32 sid : 24;
- u32 rsvd1 : 8;
- } wd1;
-
- struct {
- u32 rx_id : 16;
- u32 ox_id : 16;
- } wd2;
-
- struct {
- u32 user_id_num : 16;
- u32 data_len : 16;
- } wd3;
- /* User ID of SRQ SGE, used for drvier buffer release */
- u16 user_id[FC_LS_GS_USERID_CNT_MAX];
- u32 ts;
-};
-
-struct spfc_scqe_param_check_scq {
- struct spfc_scqe_ch ch;
-
- u8 rsvd0[3];
- u8 port_id;
-
- u16 scqn;
- u16 check_item;
-
- u16 exch_id_load;
- u16 exch_id;
-
- u16 historty_type;
- u16 entry_count;
-
- u32 xid;
-
- u32 gpa_h;
- u32 gpa_l;
-
- u32 magic_num;
- u32 hotpool_tag;
-
- u32 payload_len;
- u32 sub_err;
-
- u32 rsvd2[3];
-};
-
-struct spfc_scqe_rcv_abts_cmd {
- struct spfc_scqe_ch ch;
-
- struct {
- u32 did : 24;
- u32 rsvd0 : 8;
- } wd0;
-
- struct {
- u32 sid : 24;
- u32 rsvd1 : 8;
- } wd1;
-
- struct {
- u32 rx_id : 16;
- u32 ox_id : 16;
- } wd2;
-};
-
-struct spfc_scqe_rcv_els_gs_rsp {
- struct spfc_scqe_ch ch;
-
- struct {
- u32 rx_id : 16;
- u32 ox_id : 16;
- } wd1;
-
- struct {
- u32 conn_id : 16;
- u32 data_len : 16; /* ELS/GS RSP Payload length */
- } wd2;
-
- struct {
- u32 did : 24;
- u32 rsvd : 6;
- u32 echo_rsp : 1;
- u32 end_rsp : 1;
- } wd3;
-
- struct {
- u32 sid : 24;
- u32 user_id_num : 8;
- } wd4;
-
- struct {
- u32 rsvd : 16;
- u32 hotpooltag : 16;
- } wd5;
-
- u32 magic_num;
- u16 user_id[FC_LS_GS_USERID_CNT_MAX];
-};
-
-struct spfc_scqe_rcv_flush_sts {
- struct spfc_scqe_ch ch;
-
- struct {
- u32 rsvd0 : 4;
- u32 clr_pos : 12;
- u32 port_id : 8;
- u32 last_flush : 8;
- } wd0;
-};
-
-struct spfc_scqe_rcv_clear_buf_sts {
- struct spfc_scqe_ch ch;
-
- struct {
- u32 rsvd0 : 24;
- u32 port_id : 8;
- } wd0;
-};
-
-struct spfc_scqe_clr_srq_rsp {
- struct spfc_scqe_ch ch;
-
- struct {
- u32 srq_type : 16;
- u32 cur_wqe_msn : 16;
- } wd0;
-};
-
-struct spfc_scqe_itmf_marker_sts {
- struct spfc_scqe_ch ch;
-
- struct {
- u32 rx_id : 16;
- u32 ox_id : 16;
- } wd1;
-
- struct {
- u32 did : 24;
- u32 end_rsp : 8;
- } wd2;
-
- struct {
- u32 sid : 24;
- u32 rsvd1 : 8;
- } wd3;
-
- struct {
- u32 hotpooltag : 16;
- u32 rsvd : 16;
- } wd4;
-
- u32 magic_num;
-};
-
-struct spfc_scqe_abts_marker_sts {
- struct spfc_scqe_ch ch;
-
- struct {
- u32 rx_id : 16;
- u32 ox_id : 16;
- } wd1;
-
- struct {
- u32 did : 24;
- u32 end_rsp : 8;
- } wd2;
-
- struct {
- u32 sid : 24;
- u32 io_state : 8;
- } wd3;
-
- struct {
- u32 hotpooltag : 16;
- u32 rsvd : 16;
- } wd4;
-
- u32 magic_num;
-};
-
-struct spfc_scqe_ini_abort_sts {
- struct spfc_scqe_ch ch;
-
- struct {
- u32 rx_id : 16;
- u32 ox_id : 16;
- } wd1;
-
- struct {
- u32 did : 24;
- u32 rsvd : 8;
- } wd2;
-
- struct {
- u32 sid : 24;
- u32 io_state : 8;
- } wd3;
-
- struct {
- u32 hotpooltag : 16;
- u32 rsvd : 16;
- } wd4;
-
- u32 magic_num;
-};
-
-struct spfc_scqe_sq_nop_sts {
- struct spfc_scqe_ch ch;
- struct {
- u32 rsvd : 16;
- u32 sqn : 16;
- } wd0;
- struct {
- u32 rsvd : 16;
- u32 conn_id : 16;
- } wd1;
- u32 magic_num;
-};
-
-/* SCQE, should not be over 64B */
-#define FC_SCQE_SIZE 64
-union spfc_scqe {
- struct spfc_scqe_type common;
- struct spfc_scqe_sess_sts sess_sts; /* session enable/disable/delete sts */
- struct spfc_scqe_comm_rsp_sts comm_sts; /* aborts/abts_rsp/els rsp sts */
- struct spfc_scqe_rcv_clear_buf_sts clear_sts; /* clear buffer sts */
- struct spfc_scqe_rcv_flush_sts flush_sts; /* flush sq sts */
- struct spfc_scqe_iresp iresp;
- struct spfc_scqe_rcv_abts_rsp rcv_abts_rsp; /* recv abts rsp */
- struct spfc_scqe_fcp_rsp_sts fcp_rsp_sts; /* Read/Write/Rsp sts */
- struct spfc_scqe_rcv_els_cmd rcv_els_cmd; /* recv els cmd */
- struct spfc_scqe_rcv_abts_cmd rcv_abts_cmd; /* recv abts cmd */
- struct spfc_scqe_rcv_els_gs_rsp rcv_els_gs_rsp; /* recv els/gs rsp */
- struct spfc_scqe_clr_srq_rsp clr_srq_sts;
- struct spfc_scqe_itmf_marker_sts itmf_marker_sts; /* tmf marker */
- struct spfc_scqe_abts_marker_sts abts_marker_sts; /* abts marker */
- struct spfc_scqe_dif_result dif_result;
- struct spfc_scqe_param_check_scq param_check_sts;
- struct spfc_scqe_nvme_iresp nvme_iresp;
- struct spfc_scqe_ini_abort_sts ini_abort_sts;
- struct spfc_scqe_sq_nop_sts sq_nop_sts;
-};
-
-struct spfc_cmdqe_type {
- struct {
- u32 rx_id : 16;
- u32 rsvd0 : 8;
- u32 task_type : 8;
- } wd0;
-};
-
-struct spfc_cmdqe_send_ack {
- struct {
- u32 rx_id : 16;
- u32 immi_stand : 1;
- u32 rsvd0 : 7;
- u32 task_type : 8;
- } wd0;
-
- u32 xid;
- u32 cid;
-};
-
-struct spfc_cmdqe_send_aeq_err {
- struct {
- u32 errorevent : 8;
- u32 errortype : 8;
- u32 portid : 8;
- u32 task_type : 8;
- } wd0;
-};
-
-/* CMDQE, variable length */
-union spfc_cmdqe {
- struct spfc_cmdqe_type common;
- struct spfc_cmdqe_sess_en session_enable;
- struct spfc_cmdqe_abts_rsp snd_abts_rsp;
- struct spfc_cmdqe_abort snd_abort;
- struct spfc_cmdqe_buffer_clear buffer_clear;
- struct spfc_cmdqe_flush_sq flush_sq;
- struct spfc_cmdqe_dump_exch dump_exch;
- struct spfc_cmdqe_creat_srqc create_srqc;
- struct spfc_cmdqe_delete_srqc delete_srqc;
- struct spfc_cmdqe_clr_srq clear_srq;
- struct spfc_cmdqe_creat_scqc create_scqc;
- struct spfc_cmdqe_delete_scqc delete_scqc;
- struct spfc_cmdqe_send_ack send_ack;
- struct spfc_cmdqe_send_aeq_err send_aeqerr;
- struct spfc_cmdqe_creat_ssqc createssqc;
- struct spfc_cmdqe_delete_ssqc deletessqc;
- struct spfc_cmdqe_cmdqe_dfx dfx_info;
- struct spfc_cmdqe_exch_id_free xid_free;
-};
-
-#endif
diff --git a/drivers/scsi/spfc/hw/spfc_io.c b/drivers/scsi/spfc/hw/spfc_io.c
deleted file mode 100644
index 7184eb6a10af..000000000000
--- a/drivers/scsi/spfc/hw/spfc_io.c
+++ /dev/null
@@ -1,1193 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "spfc_io.h"
-#include "spfc_module.h"
-#include "spfc_service.h"
-
-#define SPFC_SGE_WD1_XID_MASK 0x3fff
-
-u32 dif_protect_opcode = INVALID_VALUE32;
-u32 dif_app_esc_check = SPFC_DIF_APP_REF_ESC_CHECK;
-u32 dif_ref_esc_check = SPFC_DIF_APP_REF_ESC_CHECK;
-u32 grd_agm_ini_ctrl = SPFC_DIF_CRC_CS_INITIAL_CONFIG_BY_BIT0_1;
-u32 ref_tag_no_increase;
-u32 dix_flag;
-u32 grd_ctrl;
-u32 grd_agm_ctrl = SPFC_DIF_GUARD_VERIFY_ALGORITHM_CTL_T10_CRC16;
-u32 cmp_app_tag_mask = 0xffff;
-u32 app_tag_ctrl;
-u32 ref_tag_ctrl;
-u32 ref_tag_mod = INVALID_VALUE32;
-u32 rep_ref_tag;
-u32 rx_rep_ref_tag;
-u16 cmp_app_tag;
-u16 rep_app_tag;
-
-static void spfc_dif_err_count(struct spfc_hba_info *hba, u8 info)
-{
- u8 dif_info = info;
-
- if (dif_info & SPFC_TX_DIF_ERROR_FLAG) {
- SPFC_DIF_ERR_STAT(hba, SPFC_DIF_SEND_DIFERR_ALL);
- if (dif_info & SPFC_DIF_ERROR_CODE_CRC)
- SPFC_DIF_ERR_STAT(hba, SPFC_DIF_SEND_DIFERR_CRC);
-
- if (dif_info & SPFC_DIF_ERROR_CODE_APP)
- SPFC_DIF_ERR_STAT(hba, SPFC_DIF_SEND_DIFERR_APP);
-
- if (dif_info & SPFC_DIF_ERROR_CODE_REF)
- SPFC_DIF_ERR_STAT(hba, SPFC_DIF_SEND_DIFERR_REF);
- } else {
- SPFC_DIF_ERR_STAT(hba, SPFC_DIF_RECV_DIFERR_ALL);
- if (dif_info & SPFC_DIF_ERROR_CODE_CRC)
- SPFC_DIF_ERR_STAT(hba, SPFC_DIF_RECV_DIFERR_CRC);
-
- if (dif_info & SPFC_DIF_ERROR_CODE_APP)
- SPFC_DIF_ERR_STAT(hba, SPFC_DIF_RECV_DIFERR_APP);
-
- if (dif_info & SPFC_DIF_ERROR_CODE_REF)
- SPFC_DIF_ERR_STAT(hba, SPFC_DIF_RECV_DIFERR_REF);
- }
-}
-
-void spfc_build_no_dif_control(struct unf_frame_pkg *pkg,
- struct spfc_fc_dif_info *info)
-{
- struct spfc_fc_dif_info *dif_info = info;
-
- /* dif enable or disable */
- dif_info->wd0.difx_en = SPFC_DIF_DISABLE;
-
- dif_info->wd1.vpid = pkg->qos_level;
- dif_info->wd1.lun_qos_en = 1;
-}
-
-void spfc_dif_action_forward(struct spfc_fc_dif_info *dif_info_l1,
- struct unf_dif_control_info *dif_ctrl_u1)
-{
- dif_info_l1->wd0.grd_ctrl |=
- (dif_ctrl_u1->protect_opcode & UNF_VERIFY_CRC_MASK)
- ? SPFC_DIF_GARD_REF_APP_CTRL_VERIFY
- : SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
- dif_info_l1->wd0.grd_ctrl |=
- (dif_ctrl_u1->protect_opcode & UNF_REPLACE_CRC_MASK)
- ? SPFC_DIF_GARD_REF_APP_CTRL_REPLACE
- : SPFC_DIF_GARD_REF_APP_CTRL_FORWARD;
-
- dif_info_l1->wd0.ref_tag_ctrl |=
- (dif_ctrl_u1->protect_opcode & UNF_VERIFY_LBA_MASK)
- ? SPFC_DIF_GARD_REF_APP_CTRL_VERIFY
- : SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
- dif_info_l1->wd0.ref_tag_ctrl |=
- (dif_ctrl_u1->protect_opcode & UNF_REPLACE_LBA_MASK)
- ? SPFC_DIF_GARD_REF_APP_CTRL_REPLACE
- : SPFC_DIF_GARD_REF_APP_CTRL_FORWARD;
-
- dif_info_l1->wd0.app_tag_ctrl |=
- (dif_ctrl_u1->protect_opcode & UNF_VERIFY_APP_MASK)
- ? SPFC_DIF_GARD_REF_APP_CTRL_VERIFY
- : SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
- dif_info_l1->wd0.app_tag_ctrl |=
- (dif_ctrl_u1->protect_opcode & UNF_REPLACE_APP_MASK)
- ? SPFC_DIF_GARD_REF_APP_CTRL_REPLACE
- : SPFC_DIF_GARD_REF_APP_CTRL_FORWARD;
-}
-
-void spfc_dif_action_delete(struct spfc_fc_dif_info *dif_info_l1,
- struct unf_dif_control_info *dif_ctrl_u1)
-{
- dif_info_l1->wd0.grd_ctrl |=
- (dif_ctrl_u1->protect_opcode & UNF_VERIFY_CRC_MASK)
- ? SPFC_DIF_GARD_REF_APP_CTRL_VERIFY
- : SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
- dif_info_l1->wd0.grd_ctrl |= SPFC_DIF_GARD_REF_APP_CTRL_DELETE;
-
- dif_info_l1->wd0.ref_tag_ctrl |=
- (dif_ctrl_u1->protect_opcode & UNF_VERIFY_LBA_MASK)
- ? SPFC_DIF_GARD_REF_APP_CTRL_VERIFY
- : SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
- dif_info_l1->wd0.ref_tag_ctrl |= SPFC_DIF_GARD_REF_APP_CTRL_DELETE;
-
- dif_info_l1->wd0.app_tag_ctrl |=
- (dif_ctrl_u1->protect_opcode & UNF_VERIFY_APP_MASK)
- ? SPFC_DIF_GARD_REF_APP_CTRL_VERIFY
- : SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
- dif_info_l1->wd0.app_tag_ctrl |= SPFC_DIF_GARD_REF_APP_CTRL_DELETE;
-}
-
-static void spfc_convert_dif_action(struct unf_dif_control_info *dif_ctrl,
- struct spfc_fc_dif_info *dif_info)
-{
- struct spfc_fc_dif_info *dif_info_l1 = NULL;
- struct unf_dif_control_info *dif_ctrl_u1 = NULL;
-
- dif_info_l1 = dif_info;
- dif_ctrl_u1 = dif_ctrl;
-
- switch (UNF_DIF_ACTION_MASK & dif_ctrl_u1->protect_opcode) {
- case UNF_DIF_ACTION_VERIFY_AND_REPLACE:
- case UNF_DIF_ACTION_VERIFY_AND_FORWARD:
- spfc_dif_action_forward(dif_info_l1, dif_ctrl_u1);
- break;
-
- case UNF_DIF_ACTION_INSERT:
- dif_info_l1->wd0.grd_ctrl |=
- SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
- dif_info_l1->wd0.grd_ctrl |= SPFC_DIF_GARD_REF_APP_CTRL_INSERT;
- dif_info_l1->wd0.ref_tag_ctrl |=
- SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
- dif_info_l1->wd0.ref_tag_ctrl |=
- SPFC_DIF_GARD_REF_APP_CTRL_INSERT;
- dif_info_l1->wd0.app_tag_ctrl |=
- SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY;
- dif_info_l1->wd0.app_tag_ctrl |=
- SPFC_DIF_GARD_REF_APP_CTRL_INSERT;
- break;
-
- case UNF_DIF_ACTION_VERIFY_AND_DELETE:
- spfc_dif_action_delete(dif_info_l1, dif_ctrl_u1);
- break;
-
- default:
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "Unknown dif protect opcode 0x%x",
- dif_ctrl_u1->protect_opcode);
- break;
- }
-}
-
-void spfc_get_dif_info_l1(struct spfc_fc_dif_info *dif_info_l1,
- struct unf_dif_control_info *dif_ctrl_u1)
-{
- dif_info_l1->wd1.cmp_app_tag_msk = cmp_app_tag_mask;
-
- dif_info_l1->rep_app_tag = dif_ctrl_u1->app_tag;
- dif_info_l1->rep_ref_tag = dif_ctrl_u1->start_lba;
-
- dif_info_l1->cmp_app_tag = dif_ctrl_u1->app_tag;
- dif_info_l1->cmp_ref_tag = dif_ctrl_u1->start_lba;
-
- if (cmp_app_tag != 0)
- dif_info_l1->cmp_app_tag = cmp_app_tag;
-
- if (rep_app_tag != 0)
- dif_info_l1->rep_app_tag = rep_app_tag;
-
- if (rep_ref_tag != 0)
- dif_info_l1->rep_ref_tag = rep_ref_tag;
-}
-
-void spfc_build_dif_control(struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg,
- struct spfc_fc_dif_info *dif_info)
-{
- struct spfc_fc_dif_info *dif_info_l1 = NULL;
- struct unf_dif_control_info *dif_ctrl_u1 = NULL;
-
- dif_info_l1 = dif_info;
- dif_ctrl_u1 = &pkg->dif_control;
-
- /* dif enable or disable */
- dif_info_l1->wd0.difx_en = SPFC_DIF_ENABLE;
-
- dif_info_l1->wd1.vpid = pkg->qos_level;
- dif_info_l1->wd1.lun_qos_en = 1;
-
- /* 512B + 8 size mode */
- dif_info_l1->wd0.sct_size = (dif_ctrl_u1->flags & UNF_DIF_SECTSIZE_4KB)
- ? SPFC_DIF_SECTOR_4KB_MODE
- : SPFC_DIF_SECTOR_512B_MODE;
-
- /* dif type 1 */
- dif_info_l1->wd0.dif_verify_type = dif_type;
-
- /* Check whether the 0xffff app or ref domain is isolated */
- /* If all ff messages are displayed in type1 app, checkcheck sector
- * dif_info_l1->wd0.difx_app_esc = SPFC_DIF_APP_REF_ESC_CHECK
- */
-
- dif_info_l1->wd0.difx_app_esc = dif_app_esc_check;
-
- /* type1 ref tag If all ff is displayed, check sector is required */
- dif_info_l1->wd0.difx_ref_esc = dif_ref_esc_check;
-
- /* Currently, only t10 crc is supported */
- dif_info_l1->wd0.grd_agm_ctrl = 0;
-
- /* Set this parameter based on the values of bit zero and bit one.
- * The initial value is 0, and the value is UNF_DEFAULT_CRC_GUARD_SEED
- */
- dif_info_l1->wd0.grd_agm_ini_ctrl = grd_agm_ini_ctrl;
- dif_info_l1->wd0.app_tag_ctrl = 0;
- dif_info_l1->wd0.grd_ctrl = 0;
- dif_info_l1->wd0.ref_tag_ctrl = 0;
-
- /* Convert the verify operation, replace, forward, insert,
- * and delete operations based on the actual operation code of the upper
- * layer
- */
- if (dif_protect_opcode != INVALID_VALUE32) {
- dif_ctrl_u1->protect_opcode =
- dif_protect_opcode |
- (dif_ctrl_u1->protect_opcode & UNF_DIF_ACTION_MASK);
- }
-
- spfc_convert_dif_action(dif_ctrl_u1, dif_info_l1);
- dif_info_l1->wd0.app_tag_ctrl |= app_tag_ctrl;
-
- /* Address self-increase mode */
- dif_info_l1->wd0.ref_tag_mode =
- (dif_ctrl_u1->protect_opcode & UNF_DIF_ACTION_NO_INCREASE_REFTAG)
- ? (BOTH_NONE)
- : (BOTH_INCREASE);
-
- if (ref_tag_mod != INVALID_VALUE32)
- dif_info_l1->wd0.ref_tag_mode = ref_tag_mod;
-
- /* This parameter is used only when type 3 is set to 0xffff. */
- spfc_get_dif_info_l1(dif_info_l1, dif_ctrl_u1);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "Port(0x%x) sid_did(0x%x_0x%x) package type(0x%x) apptag(0x%x) flag(0x%x) opcode(0x%x) fcpdl(0x%x) statlba(0x%x)",
- hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
- pkg->frame_head.rctl_did, pkg->type, pkg->dif_control.app_tag,
- pkg->dif_control.flags, pkg->dif_control.protect_opcode,
- pkg->dif_control.fcp_dl, pkg->dif_control.start_lba);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "Port(0x%x) cover dif control info, app:cmp_tag(0x%x) cmp_tag_mask(0x%x) rep_tag(0x%x), ref:tag_mode(0x%x) cmp_tag(0x%x) rep_tag(0x%x).",
- hba->port_cfg.port_id, dif_info_l1->cmp_app_tag,
- dif_info_l1->wd1.cmp_app_tag_msk, dif_info_l1->rep_app_tag,
- dif_info_l1->wd0.ref_tag_mode, dif_info_l1->cmp_ref_tag,
- dif_info_l1->rep_ref_tag);
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "Port(0x%x) cover dif control info, ctrl:grd(0x%x) ref(0x%x) app(0x%x).",
- hba->port_cfg.port_id, dif_info_l1->wd0.grd_ctrl,
- dif_info_l1->wd0.ref_tag_ctrl,
- dif_info_l1->wd0.app_tag_ctrl);
-}
-
-static u32 spfc_fill_external_sgl_page(struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg,
- struct unf_esgl_page *esgl_page,
- u32 sge_num, int direction,
- u32 context_id, u32 dif_flag)
-{
- u32 ret = UNF_RETURN_ERROR;
- u32 index = 0;
- u32 sge_num_per_page = 0;
- u32 buffer_addr = 0;
- u32 buf_len = 0;
- char *buf = NULL;
- ulong phys = 0;
- struct unf_esgl_page *unf_esgl_page = NULL;
- struct spfc_variable_sge *sge = NULL;
-
- unf_esgl_page = esgl_page;
- while (sge_num > 0) {
- /* Obtains the initial address of the sge page */
- sge = (struct spfc_variable_sge *)unf_esgl_page->page_address;
-
- /* Calculate the number of sge on each page */
- sge_num_per_page = (unf_esgl_page->page_size) / sizeof(struct spfc_variable_sge);
-
- /* Fill in sgl page. The last sge of each page is link sge by
- * default
- */
- for (index = 0; index < (sge_num_per_page - 1); index++) {
- UNF_GET_SGL_ENTRY(ret, (void *)pkg, &buf, &buf_len, dif_flag);
- if (ret != RETURN_OK)
- return UNF_RETURN_ERROR;
- phys = (ulong)buf;
- sge[index].buf_addr_hi = UNF_DMA_HI32(phys);
- sge[index].buf_addr_lo = UNF_DMA_LO32(phys);
- sge[index].wd0.buf_len = buf_len;
- sge[index].wd0.r_flag = 0;
- sge[index].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
- sge[index].wd1.last_flag = SPFC_WQE_SGE_NOT_LAST_FLAG;
-
- /* Parity bit */
- sge[index].wd1.buf_addr_gpa = (sge[index].buf_addr_lo >> UNF_SHIFT_16);
- sge[index].wd1.xid = (context_id & SPFC_SGE_WD1_XID_MASK);
-
- spfc_cpu_to_big32(&sge[index], sizeof(struct spfc_variable_sge));
-
- sge_num--;
- if (sge_num == 0)
- break;
- }
-
- /* sge Set the end flag on the last sge of the page if all the
- * pages have been filled.
- */
- if (sge_num == 0) {
- sge[index].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
- sge[index].wd1.last_flag = SPFC_WQE_SGE_LAST_FLAG;
-
- /* Parity bit */
- buffer_addr = be32_to_cpu(sge[index].buf_addr_lo);
- sge[index].wd1.buf_addr_gpa = (buffer_addr >> UNF_SHIFT_16);
- sge[index].wd1.xid = (context_id & SPFC_SGE_WD1_XID_MASK);
-
- spfc_cpu_to_big32(&sge[index].wd1, SPFC_DWORD_BYTE);
- }
- /* If only one sge is left empty, the sge reserved on the page
- * is used for filling.
- */
- else if (sge_num == 1) {
- UNF_GET_SGL_ENTRY(ret, (void *)pkg, &buf, &buf_len,
- dif_flag);
- if (ret != RETURN_OK)
- return UNF_RETURN_ERROR;
- phys = (ulong)buf;
- sge[index].buf_addr_hi = UNF_DMA_HI32(phys);
- sge[index].buf_addr_lo = UNF_DMA_LO32(phys);
- sge[index].wd0.buf_len = buf_len;
- sge[index].wd0.r_flag = 0;
- sge[index].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
- sge[index].wd1.last_flag = SPFC_WQE_SGE_LAST_FLAG;
-
- /* Parity bit */
- sge[index].wd1.buf_addr_gpa = (sge[index].buf_addr_lo >> UNF_SHIFT_16);
- sge[index].wd1.xid = (context_id & SPFC_SGE_WD1_XID_MASK);
-
- spfc_cpu_to_big32(&sge[index], sizeof(struct spfc_variable_sge));
-
- sge_num--;
- } else {
- /* Apply for a new sgl page and fill in link sge */
- UNF_GET_FREE_ESGL_PAGE(unf_esgl_page, hba->lport, pkg);
- if (!unf_esgl_page) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Get free esgl page failed.");
- return UNF_RETURN_ERROR;
- }
- phys = unf_esgl_page->esgl_phy_addr;
- sge[index].buf_addr_hi = UNF_DMA_HI32(phys);
- sge[index].buf_addr_lo = UNF_DMA_LO32(phys);
-
- /* For the cascaded wqe, you only need to enter the
- * cascading buffer address and extension flag, and do
- * not need to fill in other fields
- */
- sge[index].wd0.buf_len = 0;
- sge[index].wd0.r_flag = 0;
- sge[index].wd1.extension_flag = SPFC_WQE_SGE_EXTEND_FLAG;
- sge[index].wd1.last_flag = SPFC_WQE_SGE_NOT_LAST_FLAG;
-
- /* parity bit */
- sge[index].wd1.buf_addr_gpa = (sge[index].buf_addr_lo >> UNF_SHIFT_16);
- sge[index].wd1.xid = (context_id & SPFC_SGE_WD1_XID_MASK);
-
- spfc_cpu_to_big32(&sge[index], sizeof(struct spfc_variable_sge));
- }
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "[info]Port(0x%x) SID(0x%x) DID(0x%x) RXID(0x%x) build esgl left sge num: %u.",
- hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
- pkg->frame_head.rctl_did,
- pkg->frame_head.oxid_rxid, sge_num);
- }
-
- return RETURN_OK;
-}
-
-static u32 spfc_build_local_dif_sgl(struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg, struct spfc_sqe *sqe,
- int direction, u32 bd_sge_num)
-{
- u32 ret = UNF_RETURN_ERROR;
- char *buf = NULL;
- u32 buf_len = 0;
- ulong phys = 0;
- u32 dif_sge_place = 0;
-
- /* DIF SGE must be followed by BD SGE */
- dif_sge_place = ((bd_sge_num <= pkg->entry_count) ? bd_sge_num : pkg->entry_count);
-
- /* The entry_count= 0 needs to be specially processed and does not need
- * to be mounted. As long as len is set to zero, Last-bit is set to one,
- * and E-bit is set to 0.
- */
- if (pkg->dif_control.dif_sge_count == 0) {
- sqe->sge[dif_sge_place].buf_addr_hi = 0;
- sqe->sge[dif_sge_place].buf_addr_lo = 0;
- sqe->sge[dif_sge_place].wd0.buf_len = 0;
- } else {
- UNF_CM_GET_DIF_SGL_ENTRY(ret, (void *)pkg, &buf, &buf_len);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "DOUBLE DIF Get Dif Buf Fail.");
- return UNF_RETURN_ERROR;
- }
- phys = (ulong)buf;
- sqe->sge[dif_sge_place].buf_addr_hi = UNF_DMA_HI32(phys);
- sqe->sge[dif_sge_place].buf_addr_lo = UNF_DMA_LO32(phys);
- sqe->sge[dif_sge_place].wd0.buf_len = buf_len;
- }
-
- /* rdma flag. If the fc is not used, enter 0. */
- sqe->sge[dif_sge_place].wd0.r_flag = 0;
-
- /* parity bit */
- sqe->sge[dif_sge_place].wd1.buf_addr_gpa = 0;
- sqe->sge[dif_sge_place].wd1.xid = 0;
-
- /* The local sgl does not use the cascading SGE. Therefore, the value of
- * this field is always 0.
- */
- sqe->sge[dif_sge_place].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
- sqe->sge[dif_sge_place].wd1.last_flag = SPFC_WQE_SGE_LAST_FLAG;
-
- spfc_cpu_to_big32(&sqe->sge[dif_sge_place], sizeof(struct spfc_variable_sge));
-
- return RETURN_OK;
-}
-
-static u32 spfc_build_external_dif_sgl(struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg,
- struct spfc_sqe *sqe, int direction,
- u32 bd_sge_num)
-{
- u32 ret = UNF_RETURN_ERROR;
- struct unf_esgl_page *esgl_page = NULL;
- ulong phys = 0;
- u32 left_sge_num = 0;
- u32 dif_sge_place = 0;
- struct spfc_parent_ssq_info *ssq = NULL;
- u32 ssqn = 0;
-
- ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
- ssq = &hba->parent_queue_mgr->shared_queue[ssqn].parent_ssq_info;
-
- /* DIF SGE must be followed by BD SGE */
- dif_sge_place = ((bd_sge_num <= pkg->entry_count) ? bd_sge_num : pkg->entry_count);
-
- /* Allocate the first page first */
- UNF_GET_FREE_ESGL_PAGE(esgl_page, hba->lport, pkg);
- if (!esgl_page) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "DOUBLE DIF Get External Page Fail.");
- return UNF_RETURN_ERROR;
- }
-
- phys = esgl_page->esgl_phy_addr;
-
- /* Configuring the Address of the Cascading Page */
- sqe->sge[dif_sge_place].buf_addr_hi = UNF_DMA_HI32(phys);
- sqe->sge[dif_sge_place].buf_addr_lo = UNF_DMA_LO32(phys);
-
- /* Configuring Control Information About the Cascading Page */
- sqe->sge[dif_sge_place].wd0.buf_len = 0;
- sqe->sge[dif_sge_place].wd0.r_flag = 0;
- sqe->sge[dif_sge_place].wd1.extension_flag = SPFC_WQE_SGE_EXTEND_FLAG;
- sqe->sge[dif_sge_place].wd1.last_flag = SPFC_WQE_SGE_NOT_LAST_FLAG;
-
- /* parity bit */
- sqe->sge[dif_sge_place].wd1.buf_addr_gpa = 0;
- sqe->sge[dif_sge_place].wd1.xid = 0;
-
- spfc_cpu_to_big32(&sqe->sge[dif_sge_place], sizeof(struct spfc_variable_sge));
-
- /* Fill in the sge information on the cascading page */
- left_sge_num = pkg->dif_control.dif_sge_count;
- ret = spfc_fill_external_sgl_page(hba, pkg, esgl_page, left_sge_num,
- direction, ssq->context_id, true);
- if (ret != RETURN_OK)
- return UNF_RETURN_ERROR;
-
- return RETURN_OK;
-}
-
-static u32 spfc_build_local_sgl(struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg, struct spfc_sqe *sqe,
- int direction)
-{
- u32 ret = UNF_RETURN_ERROR;
- char *buf = NULL;
- u32 buf_len = 0;
- u32 index = 0;
- ulong phys = 0;
-
- for (index = 0; index < pkg->entry_count; index++) {
- UNF_CM_GET_SGL_ENTRY(ret, (void *)pkg, &buf, &buf_len);
- if (ret != RETURN_OK)
- return UNF_RETURN_ERROR;
-
- phys = (ulong)buf;
- sqe->sge[index].buf_addr_hi = UNF_DMA_HI32(phys);
- sqe->sge[index].buf_addr_lo = UNF_DMA_LO32(phys);
- sqe->sge[index].wd0.buf_len = buf_len;
-
- /* rdma flag. If the fc is not used, enter 0. */
- sqe->sge[index].wd0.r_flag = 0;
-
- /* parity bit */
- sqe->sge[index].wd1.buf_addr_gpa = SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE;
- sqe->sge[index].wd1.xid = 0;
-
- /* The local sgl does not use the cascading SGE. Therefore, the
- * value of this field is always 0.
- */
- sqe->sge[index].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
- sqe->sge[index].wd1.last_flag = SPFC_WQE_SGE_NOT_LAST_FLAG;
-
- if (index == (pkg->entry_count - 1)) {
- /* Sets the last WQE end flag 1 */
- sqe->sge[index].wd1.last_flag = SPFC_WQE_SGE_LAST_FLAG;
- }
-
- spfc_cpu_to_big32(&sqe->sge[index], sizeof(struct spfc_variable_sge));
- }
-
- /* Adjust the length of the BDSL field in the CTRL domain. */
- SPFC_ADJUST_DATA(sqe->ctrl_sl.ch.wd0.bdsl,
- SPFC_BYTES_TO_QW_NUM((pkg->entry_count *
- sizeof(struct spfc_variable_sge))));
-
- /* The entry_count= 0 needs to be specially processed and does not need
- * to be mounted. As long as len is set to zero, Last-bit is set to one,
- * and E-bit is set to 0.
- */
- if (pkg->entry_count == 0) {
- sqe->sge[ARRAY_INDEX_0].buf_addr_hi = 0;
- sqe->sge[ARRAY_INDEX_0].buf_addr_lo = 0;
- sqe->sge[ARRAY_INDEX_0].wd0.buf_len = 0;
-
- /* rdma flag. This field is not used in fc. Set it to 0. */
- sqe->sge[ARRAY_INDEX_0].wd0.r_flag = 0;
-
- /* parity bit */
- sqe->sge[ARRAY_INDEX_0].wd1.buf_addr_gpa = SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE;
- sqe->sge[ARRAY_INDEX_0].wd1.xid = 0;
-
- /* The local sgl does not use the cascading SGE. Therefore, the
- * value of this field is always 0.
- */
- sqe->sge[ARRAY_INDEX_0].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
- sqe->sge[ARRAY_INDEX_0].wd1.last_flag = SPFC_WQE_SGE_LAST_FLAG;
-
- spfc_cpu_to_big32(&sqe->sge[ARRAY_INDEX_0], sizeof(struct spfc_variable_sge));
-
- /* Adjust the length of the BDSL field in the CTRL domain. */
- SPFC_ADJUST_DATA(sqe->ctrl_sl.ch.wd0.bdsl,
- SPFC_BYTES_TO_QW_NUM(sizeof(struct spfc_variable_sge)));
- }
-
- return RETURN_OK;
-}
-
-static u32 spfc_build_external_sgl(struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg, struct spfc_sqe *sqe,
- int direction, u32 bd_sge_num)
-{
- u32 ret = UNF_RETURN_ERROR;
- char *buf = NULL;
- struct unf_esgl_page *esgl_page = NULL;
- ulong phys = 0;
- u32 buf_len = 0;
- u32 index = 0;
- u32 left_sge_num = 0;
- u32 local_sge_num = 0;
- struct spfc_parent_ssq_info *ssq = NULL;
- u16 ssqn = 0;
-
- ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
- ssq = &hba->parent_queue_mgr->shared_queue[ssqn].parent_ssq_info;
-
- /* Ensure that the value of bd_sge_num is greater than or equal to one
- */
- local_sge_num = bd_sge_num - 1;
-
- for (index = 0; index < local_sge_num; index++) {
- UNF_CM_GET_SGL_ENTRY(ret, (void *)pkg, &buf, &buf_len);
- if (unlikely(ret != RETURN_OK))
- return UNF_RETURN_ERROR;
-
- phys = (ulong)buf;
-
- sqe->sge[index].buf_addr_hi = UNF_DMA_HI32(phys);
- sqe->sge[index].buf_addr_lo = UNF_DMA_LO32(phys);
- sqe->sge[index].wd0.buf_len = buf_len;
-
- /* RDMA flag, which is not used by FC. */
- sqe->sge[index].wd0.r_flag = 0;
- sqe->sge[index].wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
- sqe->sge[index].wd1.last_flag = SPFC_WQE_SGE_NOT_LAST_FLAG;
-
- /* parity bit */
- sqe->sge[index].wd1.buf_addr_gpa = SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE;
- sqe->sge[index].wd1.xid = 0;
-
- spfc_cpu_to_big32(&sqe->sge[index], sizeof(struct spfc_variable_sge));
- }
-
- /* Calculate the number of remaining sge. */
- left_sge_num = pkg->entry_count - local_sge_num;
- /* Adjust the length of the BDSL field in the CTRL domain. */
- SPFC_ADJUST_DATA(sqe->ctrl_sl.ch.wd0.bdsl,
- SPFC_BYTES_TO_QW_NUM((bd_sge_num * sizeof(struct spfc_variable_sge))));
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "alloc extended sgl page,leftsge:%d", left_sge_num);
- /* Allocating the first cascading page */
- UNF_GET_FREE_ESGL_PAGE(esgl_page, hba->lport, pkg);
- if (unlikely(!esgl_page))
- return UNF_RETURN_ERROR;
-
- phys = esgl_page->esgl_phy_addr;
-
- /* Configuring the Address of the Cascading Page */
- sqe->sge[index].buf_addr_hi = (u32)UNF_DMA_HI32(phys);
- sqe->sge[index].buf_addr_lo = (u32)UNF_DMA_LO32(phys);
-
- /* Configuring Control Information About the Cascading Page */
- sqe->sge[index].wd0.buf_len = 0;
- sqe->sge[index].wd0.r_flag = 0;
- sqe->sge[index].wd1.extension_flag = SPFC_WQE_SGE_EXTEND_FLAG;
- sqe->sge[index].wd1.last_flag = SPFC_WQE_SGE_NOT_LAST_FLAG;
-
- /* parity bit */
- sqe->sge[index].wd1.buf_addr_gpa = SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE;
- sqe->sge[index].wd1.xid = 0;
-
- spfc_cpu_to_big32(&sqe->sge[index], sizeof(struct spfc_variable_sge));
-
- /* Fill in the sge information on the cascading page. */
- ret = spfc_fill_external_sgl_page(hba, pkg, esgl_page, left_sge_num,
- direction, ssq->context_id, false);
- if (ret != RETURN_OK)
- return UNF_RETURN_ERROR;
- /* Copy the extended data sge to the extended sge of the extended wqe.*/
- if (left_sge_num > 0) {
- memcpy(sqe->esge, (void *)esgl_page->page_address,
- SPFC_WQE_MAX_ESGE_NUM * sizeof(struct spfc_variable_sge));
- }
-
- return RETURN_OK;
-}
-
-u32 spfc_build_sgl_by_local_sge_num(struct unf_frame_pkg *pkg,
- struct spfc_hba_info *hba, struct spfc_sqe *sqe,
- int direction, u32 bd_sge_num)
-{
- u32 ret = RETURN_OK;
-
- if (pkg->entry_count <= bd_sge_num)
- ret = spfc_build_local_sgl(hba, pkg, sqe, direction);
- else
- ret = spfc_build_external_sgl(hba, pkg, sqe, direction, bd_sge_num);
-
- return ret;
-}
-
-u32 spfc_conf_dual_sgl_info(struct unf_frame_pkg *pkg,
- struct spfc_hba_info *hba, struct spfc_sqe *sqe,
- int direction, u32 bd_sge_num, bool double_sgl)
-{
- u32 ret = RETURN_OK;
-
- if (double_sgl) {
- /* Adjust the length of the DIF_SL field in the CTRL domain */
- SPFC_ADJUST_DATA(sqe->ctrl_sl.ch.wd0.dif_sl,
- SPFC_BYTES_TO_QW_NUM(sizeof(struct spfc_variable_sge)));
-
- if (pkg->dif_control.dif_sge_count <= SPFC_WQE_SGE_DIF_ENTRY_NUM)
- ret = spfc_build_local_dif_sgl(hba, pkg, sqe, direction, bd_sge_num);
- else
- ret = spfc_build_external_dif_sgl(hba, pkg, sqe, direction, bd_sge_num);
- }
-
- return ret;
-}
-
-u32 spfc_build_sgl(struct spfc_hba_info *hba, struct unf_frame_pkg *pkg,
- struct spfc_sqe *sqe, int direction, u32 dif_flag)
-{
-#define SPFC_ESGE_CNT 3
- u32 ret = RETURN_OK;
- u32 bd_sge_num = SPFC_WQE_SGE_ENTRY_NUM;
- bool double_sgl = false;
-
- if (dif_flag != 0 && (pkg->dif_control.flags & UNF_DIF_DOUBLE_SGL)) {
- bd_sge_num = SPFC_WQE_SGE_ENTRY_NUM - SPFC_WQE_SGE_DIF_ENTRY_NUM;
- double_sgl = true;
- }
-
- /* Only one wqe local sge can be loaded. If more than one wqe local sge
- * is used, use the esgl
- */
- ret = spfc_build_sgl_by_local_sge_num(pkg, hba, sqe, direction, bd_sge_num);
-
- if (unlikely(ret != RETURN_OK))
- return ret;
-
- /* Configuring Dual SGL Information for DIF */
- ret = spfc_conf_dual_sgl_info(pkg, hba, sqe, direction, bd_sge_num, double_sgl);
-
- return ret;
-}
-
-void spfc_adjust_dix(struct unf_frame_pkg *pkg, struct spfc_fc_dif_info *dif_info,
- u8 task_type)
-{
- u8 tasktype = task_type;
- struct spfc_fc_dif_info *dif_info_l1 = NULL;
-
- dif_info_l1 = dif_info;
-
- if (dix_flag == 1) {
- if (tasktype == SPFC_SQE_FCP_IWRITE ||
- tasktype == SPFC_SQE_FCP_TRD) {
- if ((UNF_DIF_ACTION_MASK & pkg->dif_control.protect_opcode) ==
- UNF_DIF_ACTION_VERIFY_AND_FORWARD) {
- dif_info_l1->wd0.grd_ctrl |=
- SPFC_DIF_GARD_REF_APP_CTRL_REPLACE;
- dif_info_l1->wd0.grd_agm_ctrl =
- SPFC_DIF_GUARD_VERIFY_IP_CHECKSUM_REPLACE_CRC16;
- }
-
- if ((UNF_DIF_ACTION_MASK & pkg->dif_control.protect_opcode) ==
- UNF_DIF_ACTION_VERIFY_AND_DELETE) {
- dif_info_l1->wd0.grd_agm_ctrl =
- SPFC_DIF_GUARD_VERIFY_IP_CHECKSUM_REPLACE_CRC16;
- }
- }
-
- if (tasktype == SPFC_SQE_FCP_IREAD ||
- tasktype == SPFC_SQE_FCP_TWR) {
- if ((UNF_DIF_ACTION_MASK &
- pkg->dif_control.protect_opcode) ==
- UNF_DIF_ACTION_VERIFY_AND_FORWARD) {
- dif_info_l1->wd0.grd_ctrl |=
- SPFC_DIF_GARD_REF_APP_CTRL_REPLACE;
- dif_info_l1->wd0.grd_agm_ctrl =
- SPFC_DIF_GUARD_VERIFY_CRC16_REPLACE_IP_CHECKSUM;
- }
-
- if ((UNF_DIF_ACTION_MASK &
- pkg->dif_control.protect_opcode) ==
- UNF_DIF_ACTION_INSERT) {
- dif_info_l1->wd0.grd_agm_ctrl =
- SPFC_DIF_GUARD_VERIFY_CRC16_REPLACE_IP_CHECKSUM;
- }
- }
- }
-
- if (grd_agm_ctrl != 0)
- dif_info_l1->wd0.grd_agm_ctrl = grd_agm_ctrl;
-
- if (grd_ctrl != 0)
- dif_info_l1->wd0.grd_ctrl = grd_ctrl;
-}
-
-void spfc_get_dma_direction_by_fcp_cmnd(const struct unf_fcp_cmnd *fcp_cmnd,
- int *dma_direction, u8 *task_type)
-{
- if (UNF_FCP_WR_DATA & fcp_cmnd->control) {
- *task_type = SPFC_SQE_FCP_IWRITE;
- *dma_direction = DMA_TO_DEVICE;
- } else if (UNF_GET_TASK_MGMT_FLAGS(fcp_cmnd->control) != 0) {
- *task_type = SPFC_SQE_FCP_ITMF;
- *dma_direction = DMA_FROM_DEVICE;
- } else {
- *task_type = SPFC_SQE_FCP_IREAD;
- *dma_direction = DMA_FROM_DEVICE;
- }
-}
-
-static inline u32 spfc_build_icmnd_wqe(struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg,
- struct spfc_sqe *sge)
-{
- u32 ret = RETURN_OK;
- int direction = 0;
- u8 tasktype = 0;
- struct unf_fcp_cmnd *fcp_cmnd = NULL;
- struct spfc_sqe *sqe = sge;
- u32 dif_flag = 0;
-
- fcp_cmnd = pkg->fcp_cmnd;
- if (unlikely(!fcp_cmnd)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Package's FCP commond pointer is NULL.");
-
- return UNF_RETURN_ERROR;
- }
-
- spfc_get_dma_direction_by_fcp_cmnd(fcp_cmnd, &direction, &tasktype);
-
- spfc_build_icmnd_wqe_ts_header(pkg, sqe, tasktype, hba->exi_base, hba->port_index);
-
- spfc_build_icmnd_wqe_ctrls(pkg, sqe);
-
- spfc_build_icmnd_wqe_ts(hba, pkg, &sqe->ts_sl, &sqe->ts_ex);
-
- if (sqe->ts_sl.task_type != SPFC_SQE_FCP_ITMF) {
- if (pkg->dif_control.protect_opcode == UNF_DIF_ACTION_NONE) {
- dif_flag = 0;
- spfc_build_no_dif_control(pkg, &sqe->ts_sl.cont.icmnd.info.dif_info);
- } else {
- dif_flag = 1;
- spfc_build_dif_control(hba, pkg, &sqe->ts_sl.cont.icmnd.info.dif_info);
- spfc_adjust_dix(pkg,
- &sqe->ts_sl.cont.icmnd.info.dif_info,
- tasktype);
- }
- }
-
- ret = spfc_build_sgl(hba, pkg, sqe, direction, dif_flag);
-
- sqe->sid = UNF_GET_SID(pkg);
- sqe->did = UNF_GET_DID(pkg);
-
- return ret;
-}
-
-u32 spfc_send_scsi_cmnd(void *hba, struct unf_frame_pkg *pkg)
-{
- struct spfc_hba_info *spfc_hba = NULL;
- struct spfc_parent_sq_info *parent_sq = NULL;
- u32 ret = UNF_RETURN_ERROR;
- struct spfc_sqe sqe;
- u16 ssqn;
- struct spfc_parent_queue_info *parent_queue = NULL;
-
- /* input param check */
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
-
- SPFC_CHECK_PKG_ALLOCTIME(pkg);
- memset(&sqe, 0, sizeof(struct spfc_sqe));
- spfc_hba = hba;
-
- /* 1. find parent sq for scsi_cmnd(pkg) */
- parent_sq = spfc_find_parent_sq_by_pkg(spfc_hba, pkg);
- if (unlikely(!parent_sq)) {
- /* Do not need to print info */
- return UNF_RETURN_ERROR;
- }
-
- pkg->qos_level += spfc_hba->vpid_start;
-
- /* 2. build cmnd wqe (to sqe) for scsi_cmnd(pkg) */
- ret = spfc_build_icmnd_wqe(spfc_hba, pkg, &sqe);
- if (unlikely(ret != RETURN_OK)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[fail]Port(0x%x) Build WQE failed, SID(0x%x) DID(0x%x) pkg type(0x%x) hottag(0x%x).",
- spfc_hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
- pkg->frame_head.rctl_did, pkg->type, UNF_GET_XCHG_TAG(pkg));
-
- return ret;
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "Port(0x%x) RPort(0x%x) send FCP_CMND TYPE(0x%x) Local_Xid(0x%x) hottag(0x%x) LBA(0x%llx)",
- spfc_hba->port_cfg.port_id, parent_sq->rport_index,
- sqe.ts_sl.task_type, sqe.ts_sl.local_xid,
- pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX],
- *((u64 *)pkg->fcp_cmnd->cdb));
-
- ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
- if (sqe.ts_sl.task_type == SPFC_SQE_FCP_ITMF) {
- parent_queue = container_of(parent_sq, struct spfc_parent_queue_info,
- parent_sq_info);
- ret = spfc_suspend_sqe_and_send_nop(spfc_hba, parent_queue, &sqe, pkg);
- return ret;
- }
- /* 3. En-Queue Parent SQ for scsi_cmnd(pkg) sqe */
- ret = spfc_parent_sq_enqueue(parent_sq, &sqe, ssqn);
-
- return ret;
-}
-
-static void spfc_ini_status_default_handler(struct spfc_scqe_iresp *iresp,
- struct unf_frame_pkg *pkg)
-{
- u8 control = 0;
- u16 com_err_code = 0;
-
- control = iresp->wd2.fcp_flag & SPFC_CTRL_MASK;
-
- if (iresp->fcp_resid != 0) {
- com_err_code = UNF_IO_FAILED;
- pkg->residus_len = iresp->fcp_resid;
- } else {
- com_err_code = UNF_IO_SUCCESS;
- pkg->residus_len = 0;
- }
-
- pkg->status = spfc_fill_pkg_status(com_err_code, control, iresp->wd2.scsi_status);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "[info]Fill package with status: 0x%x, residus len: 0x%x",
- pkg->status, pkg->residus_len);
-}
-
-static void spfc_check_fcp_rsp_iu(struct spfc_scqe_iresp *iresp,
- struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg)
-{
- u8 scsi_status = 0;
- u8 control = 0;
-
- control = (u8)iresp->wd2.fcp_flag;
- scsi_status = (u8)iresp->wd2.scsi_status;
-
- /* FcpRspIU with Little End from IOB WQE to COM's pkg also */
- if (control & FCP_RESID_UNDER_MASK) {
- /* under flow: usually occurs in inquiry */
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "[info]I_STS IOB posts under flow with residus len: %u, FCP residue: %u.",
- pkg->residus_len, iresp->fcp_resid);
-
- if (pkg->residus_len != iresp->fcp_resid)
- pkg->status = spfc_fill_pkg_status(UNF_IO_FAILED, control, scsi_status);
- else
- pkg->status = spfc_fill_pkg_status(UNF_IO_UNDER_FLOW, control, scsi_status);
- }
-
- if (control & FCP_RESID_OVER_MASK) {
- /* over flow: error happened */
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]I_STS IOB posts over flow with residus len: %u, FCP residue: %u.",
- pkg->residus_len, iresp->fcp_resid);
-
- if (pkg->residus_len != iresp->fcp_resid)
- pkg->status = spfc_fill_pkg_status(UNF_IO_FAILED, control, scsi_status);
- else
- pkg->status = spfc_fill_pkg_status(UNF_IO_OVER_FLOW, control, scsi_status);
- }
-
- pkg->unf_rsp_pload_bl.length = 0;
- pkg->unf_sense_pload_bl.length = 0;
-
- if (control & FCP_RSP_LEN_VALID_MASK) {
- /* dma by chip */
- pkg->unf_rsp_pload_bl.buffer_ptr = NULL;
-
- pkg->unf_rsp_pload_bl.length = iresp->fcp_rsp_len;
- pkg->byte_orders |= UNF_BIT_3;
- }
-
- if (control & FCP_SNS_LEN_VALID_MASK) {
- /* dma by chip */
- pkg->unf_sense_pload_bl.buffer_ptr = NULL;
-
- pkg->unf_sense_pload_bl.length = iresp->fcp_sns_len;
- pkg->byte_orders |= UNF_BIT_4;
- }
-
- if (iresp->wd1.user_id_num == 1 &&
- (pkg->unf_sense_pload_bl.length + pkg->unf_rsp_pload_bl.length > 0)) {
- pkg->unf_rsp_pload_bl.buffer_ptr =
- (u8 *)spfc_get_els_buf_by_user_id(hba, (u16)iresp->user_id[ARRAY_INDEX_0]);
- } else if (iresp->wd1.user_id_num > 1) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]receive buff num 0x%x > 1 0x%x",
- iresp->wd1.user_id_num, control);
- }
-}
-
-u16 spfc_get_com_err_code(struct unf_frame_pkg *pkg)
-{
- u16 com_err_code = UNF_IO_FAILED;
- u32 status_subcode = 0;
-
- status_subcode = pkg->status_sub_code;
-
- if (likely(status_subcode == 0))
- com_err_code = 0;
- else if (status_subcode == UNF_DIF_CRC_ERR)
- com_err_code = UNF_IO_DIF_ERROR;
- else if (status_subcode == UNF_DIF_LBA_ERR)
- com_err_code = UNF_IO_DIF_REF_ERROR;
- else if (status_subcode == UNF_DIF_APP_ERR)
- com_err_code = UNF_IO_DIF_GEN_ERROR;
-
- return com_err_code;
-}
-
-void spfc_process_ini_fail_io(struct spfc_hba_info *hba, union spfc_scqe *iresp,
- struct unf_frame_pkg *pkg)
-{
- u16 com_err_code = UNF_IO_FAILED;
-
- /* 1. error stats process */
- if (SPFC_GET_SCQE_STATUS((union spfc_scqe *)(void *)iresp) != 0) {
- switch (SPFC_GET_SCQE_STATUS((union spfc_scqe *)(void *)iresp)) {
- /* I/O not complete: 1.session reset; 2.clear buffer */
- case FC_CQE_BUFFER_CLEAR_IO_COMPLETED:
- case FC_CQE_SESSION_RST_CLEAR_IO_COMPLETED:
- case FC_CQE_SESSION_ONLY_CLEAR_IO_COMPLETED:
- case FC_CQE_WQE_FLUSH_IO_COMPLETED:
- com_err_code = UNF_IO_CLEAN_UP;
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[warn]Port(0x%x) INI IO not complete, OX_ID(0x%x) RX_ID(0x%x) status(0x%x)",
- hba->port_cfg.port_id,
- ((struct spfc_scqe_iresp *)iresp)->wd0.ox_id,
- ((struct spfc_scqe_iresp *)iresp)->wd0.rx_id,
- com_err_code);
-
- break;
- /* Allocate task id(oxid) fail */
- case FC_ERROR_INVALID_TASK_ID:
- com_err_code = UNF_IO_NO_XCHG;
- break;
- case FC_ALLOC_EXCH_ID_FAILED:
- com_err_code = UNF_IO_NO_XCHG;
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[warn]Port(0x%x) INI IO, tag 0x%x alloc oxid fail.",
- hba->port_cfg.port_id,
- ((struct spfc_scqe_iresp *)iresp)->wd2.hotpooltag);
- break;
- case FC_ERROR_CODE_DATA_DIFX_FAILED:
- com_err_code = pkg->status >> UNF_SHIFT_16;
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[warn]Port(0x%x) INI IO, tag 0x%x tx dif error.",
- hba->port_cfg.port_id,
- ((struct spfc_scqe_iresp *)iresp)->wd2.hotpooltag);
- break;
- /* any other: I/O failed --->>> DID error */
- default:
- com_err_code = UNF_IO_FAILED;
- break;
- }
-
- /* fill pkg status & return directly */
- pkg->status =
- spfc_fill_pkg_status(com_err_code,
- ((struct spfc_scqe_iresp *)iresp)->wd2.fcp_flag,
- ((struct spfc_scqe_iresp *)iresp)->wd2.scsi_status);
-
- return;
- }
-
- /* 2. default stats process */
- spfc_ini_status_default_handler((struct spfc_scqe_iresp *)iresp, pkg);
-
- /* 3. FCP RSP IU check */
- spfc_check_fcp_rsp_iu((struct spfc_scqe_iresp *)iresp, hba, pkg);
-}
-
-void spfc_process_dif_result(struct spfc_hba_info *hba, union spfc_scqe *wqe,
- struct unf_frame_pkg *pkg)
-{
- u16 com_err_code = UNF_IO_FAILED;
- u8 dif_info = 0;
-
- dif_info = wqe->common.wd0.dif_vry_rst;
- if (dif_info == SPFC_TX_DIF_ERROR_FLAG) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[error]Port(0x%x) TGT recv tx dif result abnormal.",
- hba->port_cfg.port_id);
- }
-
- pkg->status_sub_code =
- (dif_info & SPFC_DIF_ERROR_CODE_CRC)
- ? UNF_DIF_CRC_ERR
- : ((dif_info & SPFC_DIF_ERROR_CODE_REF)
- ? UNF_DIF_LBA_ERR
- : ((dif_info & SPFC_DIF_ERROR_CODE_APP) ? UNF_DIF_APP_ERR : 0));
- com_err_code = spfc_get_com_err_code(pkg);
- pkg->status = (u32)(com_err_code) << UNF_SHIFT_16;
-
- if (unlikely(com_err_code != 0)) {
- spfc_dif_err_count(hba, dif_info);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "[error]Port(0x%x) INI io status with dif result(0x%x),subcode(0x%x) pkg->status(0x%x)",
- hba->port_cfg.port_id, dif_info,
- pkg->status_sub_code, pkg->status);
- }
-}
-
-u32 spfc_scq_recv_iresp(struct spfc_hba_info *hba, union spfc_scqe *wqe)
-{
-#define SPFC_IRSP_USERID_LEN ((FC_SENSEDATA_USERID_CNT_MAX + 1) / 2)
- struct spfc_scqe_iresp *iresp = NULL;
- struct unf_frame_pkg pkg;
- u32 ret = RETURN_OK;
- u16 hot_tag;
-
- FC_CHECK_RETURN_VALUE((hba), UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE((wqe), UNF_RETURN_ERROR);
-
- iresp = (struct spfc_scqe_iresp *)(void *)wqe;
-
- /* 1. Constraints: I_STS remain cnt must be zero */
- if (unlikely(SPFC_GET_SCQE_REMAIN_CNT(wqe) != 0)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%x) ini_wqe(OX_ID:0x%x RX_ID:0x%x) HotTag(0x%x) remain_cnt(0x%x) abnormal, status(0x%x)",
- hba->port_cfg.port_id, iresp->wd0.ox_id,
- iresp->wd0.rx_id, iresp->wd2.hotpooltag,
- SPFC_GET_SCQE_REMAIN_CNT(wqe),
- SPFC_GET_SCQE_STATUS(wqe));
-
- UNF_PRINT_SFS_LIMIT(UNF_MAJOR, hba->port_cfg.port_id, wqe, sizeof(union spfc_scqe));
-
- /* return directly */
- return UNF_RETURN_ERROR;
- }
-
- spfc_swap_16_in_32((u32 *)iresp->user_id, SPFC_IRSP_USERID_LEN);
-
- memset(&pkg, 0, sizeof(struct unf_frame_pkg));
- pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = iresp->magic_num;
- pkg.frame_head.oxid_rxid = (((iresp->wd0.ox_id) << UNF_SHIFT_16) | (iresp->wd0.rx_id));
-
- hot_tag = (u16)iresp->wd2.hotpooltag;
- /* 2. HotTag validity check */
- if (likely(hot_tag >= hba->exi_base && (hot_tag < hba->exi_base + hba->exi_count))) {
- pkg.status = UNF_IO_SUCCESS;
- pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] =
- hot_tag - hba->exi_base;
- } else {
- /* OX_ID error: return by COM */
- pkg.status = UNF_IO_FAILED;
- pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = INVALID_VALUE16;
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%x) ini_cmnd_wqe(OX_ID:0x%x RX_ID:0x%x) ox_id invalid, status(0x%x)",
- hba->port_cfg.port_id, iresp->wd0.ox_id, iresp->wd0.rx_id,
- SPFC_GET_SCQE_STATUS(wqe));
-
- UNF_PRINT_SFS_LIMIT(UNF_MAJOR, hba->port_cfg.port_id, wqe,
- sizeof(union spfc_scqe));
- }
-
- /* process dif result */
- spfc_process_dif_result(hba, wqe, &pkg);
-
- /* 3. status check */
- if (unlikely(SPFC_GET_SCQE_STATUS(wqe) ||
- iresp->wd2.scsi_status != 0 || iresp->fcp_resid != 0 ||
- ((iresp->wd2.fcp_flag & SPFC_CTRL_MASK) != 0))) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "[warn]Port(0x%x) scq_status(0x%x) scsi_status(0x%x) fcp_resid(0x%x) fcp_flag(0x%x)",
- hba->port_cfg.port_id, SPFC_GET_SCQE_STATUS(wqe),
- iresp->wd2.scsi_status, iresp->fcp_resid,
- iresp->wd2.fcp_flag);
-
- /* set pkg status & check fcp_rsp IU */
- spfc_process_ini_fail_io(hba, (union spfc_scqe *)iresp, &pkg);
- }
-
- /* 4. LL_Driver ---to--->>> COM_Driver */
- UNF_LOWLEVEL_SCSI_COMPLETED(ret, hba->lport, &pkg);
- if (iresp->wd1.user_id_num == 1 &&
- (pkg.unf_sense_pload_bl.length + pkg.unf_rsp_pload_bl.length > 0)) {
- spfc_post_els_srq_wqe(&hba->els_srq_info, (u16)iresp->user_id[ARRAY_INDEX_0]);
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[info]Port(0x%x) rport(0x%x) recv(%s) hottag(0x%x) OX_ID(0x%x) RX_ID(0x%x) return(%s)",
- hba->port_cfg.port_id, iresp->wd1.conn_id,
- (SPFC_SCQE_FCP_IRSP == (SPFC_GET_SCQE_TYPE(wqe)) ? "IRESP" : "ITMF_RSP"),
- pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX], iresp->wd0.ox_id,
- iresp->wd0.rx_id, (ret == RETURN_OK) ? "OK" : "ERROR");
-
- return ret;
-}
diff --git a/drivers/scsi/spfc/hw/spfc_io.h b/drivers/scsi/spfc/hw/spfc_io.h
deleted file mode 100644
index 26d10a51bbe4..000000000000
--- a/drivers/scsi/spfc/hw/spfc_io.h
+++ /dev/null
@@ -1,138 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef SPFC_IO_H
-#define SPFC_IO_H
-
-#include "unf_type.h"
-#include "unf_common.h"
-#include "spfc_hba.h"
-
-#ifdef __cplusplus
-#if __cplusplus
-extern "C" {
-#endif
-#endif /* __cplusplus */
-
-#define BYTE_PER_DWORD 4
-#define SPFC_TRESP_DIRECT_CARRY_LEN (23 * 4)
-#define FCP_RESP_IU_LEN_BYTE_GOOD_STATUS 24
-#define SPFC_TRSP_IU_CONTROL_OFFSET 2
-#define SPFC_TRSP_IU_FCP_CONF_REP (1 << 12)
-
-struct spfc_dif_io_param {
- u32 all_len;
- u32 buf_len;
- char **buf;
- char *in_buf;
- int drect;
-};
-
-enum dif_mode_type {
- DIF_MODE_NONE = 0x0,
- DIF_MODE_INSERT = 0x1,
- DIF_MODE_REMOVE = 0x2,
- DIF_MODE_FORWARD_OR_REPLACE = 0x3
-};
-
-enum ref_tag_mode_type {
- BOTH_NONE = 0x0,
- RECEIVE_INCREASE = 0x1,
- REPLACE_INCREASE = 0x2,
- BOTH_INCREASE = 0x3
-};
-
-#define SPFC_DIF_DISABLE 0
-#define SPFC_DIF_ENABLE 1
-#define SPFC_DIF_SINGLE_SGL 0
-#define SPFC_DIF_DOUBLE_SGL 1
-#define SPFC_DIF_SECTOR_512B_MODE 0
-#define SPFC_DIF_SECTOR_4KB_MODE 1
-#define SPFC_DIF_TYPE1 0x01
-#define SPFC_DIF_TYPE3 0x03
-#define SPFC_DIF_GUARD_VERIFY_ALGORITHM_CTL_T10_CRC16 0x0
-#define SPFC_DIF_GUARD_VERIFY_CRC16_REPLACE_IP_CHECKSUM 0x1
-#define SPFC_DIF_GUARD_VERIFY_IP_CHECKSUM_REPLACE_CRC16 0x2
-#define SPFC_DIF_GUARD_VERIFY_ALGORITHM_CTL_IP_CHECKSUM 0x3
-#define SPFC_DIF_CRC16_INITIAL_SELECTOR_DEFAUL 0
-#define SPFC_DIF_CRC_CS_INITIAL_CONFIG_BY_REGISTER 0
-#define SPFC_DIF_CRC_CS_INITIAL_CONFIG_BY_BIT0_1 0x4
-
-#define SPFC_DIF_GARD_REF_APP_CTRL_VERIFY 0x4
-#define SPFC_DIF_GARD_REF_APP_CTRL_NOT_VERIFY 0x0
-#define SPFC_DIF_GARD_REF_APP_CTRL_INSERT 0x0
-#define SPFC_DIF_GARD_REF_APP_CTRL_DELETE 0x1
-#define SPFC_DIF_GARD_REF_APP_CTRL_FORWARD 0x2
-#define SPFC_DIF_GARD_REF_APP_CTRL_REPLACE 0x3
-
-#define SPFC_BUILD_RESPONSE_INFO_NON_GAP_MODE0 0
-#define SPFC_BUILD_RESPONSE_INFO_GPA_MODE1 1
-#define SPFC_CONF_SUPPORT 1
-#define SPFC_CONF_NOT_SUPPORT 0
-#define SPFC_XID_INTERVAL 2048
-
-#define SPFC_DIF_ERROR_CODE_MASK 0xe
-#define SPFC_DIF_ERROR_CODE_CRC 0x2
-#define SPFC_DIF_ERROR_CODE_REF 0x4
-#define SPFC_DIF_ERROR_CODE_APP 0x8
-#define SPFC_TX_DIF_ERROR_FLAG (1 << 7)
-
-#define SPFC_DIF_PAYLOAD_TYPE (1 << 0)
-#define SPFC_DIF_CRC_TYPE (1 << 1)
-#define SPFC_DIF_APP_TYPE (1 << 2)
-#define SPFC_DIF_REF_TYPE (1 << 3)
-
-#define SPFC_DIF_SEND_DIFERR_ALL (0)
-#define SPFC_DIF_SEND_DIFERR_CRC (1)
-#define SPFC_DIF_SEND_DIFERR_APP (2)
-#define SPFC_DIF_SEND_DIFERR_REF (3)
-#define SPFC_DIF_RECV_DIFERR_ALL (4)
-#define SPFC_DIF_RECV_DIFERR_CRC (5)
-#define SPFC_DIF_RECV_DIFERR_APP (6)
-#define SPFC_DIF_RECV_DIFERR_REF (7)
-#define SPFC_DIF_ERR_ENABLE (382855)
-#define SPFC_DIF_ERR_DISABLE (0)
-
-#define SPFC_DIF_LENGTH (8)
-#define SPFC_SECT_SIZE_512 (512)
-#define SPFC_SECT_SIZE_4096 (4096)
-#define SPFC_SECT_SIZE_512_8 (520)
-#define SPFC_SECT_SIZE_4096_8 (4104)
-#define SPFC_DIF_SECT_SIZE_APP_OFFSET (2)
-#define SPFC_DIF_SECT_SIZE_LBA_OFFSET (4)
-
-#define SPFC_MAX_IO_TAG (2048)
-#define SPFC_PRINT_WORD (8)
-
-extern u32 dif_protect_opcode;
-extern u32 dif_sect_size;
-extern u32 no_dif_sect_size;
-extern u32 grd_agm_ini_ctrl;
-extern u32 ref_tag_mod;
-extern u32 grd_ctrl;
-extern u32 grd_agm_ctrl;
-extern u32 cmp_app_tag_mask;
-extern u32 app_tag_ctrl;
-extern u32 ref_tag_ctrl;
-extern u32 rep_ref_tag;
-extern u32 rx_rep_ref_tag;
-extern u16 cmp_app_tag;
-extern u16 rep_app_tag;
-
-#define spfc_fill_pkg_status(com_err_code, control, scsi_status) \
- (((u32)(com_err_code) << 16) | ((u32)(control) << 8) | \
- (u32)(scsi_status))
-#define SPFC_CTRL_MASK 0x1f
-
-u32 spfc_send_scsi_cmnd(void *hba, struct unf_frame_pkg *pkg);
-u32 spfc_scq_recv_iresp(struct spfc_hba_info *hba, union spfc_scqe *wqe);
-void spfc_process_dif_result(struct spfc_hba_info *hba, union spfc_scqe *wqe,
- struct unf_frame_pkg *pkg);
-
-#ifdef __cplusplus
-#if __cplusplus
-}
-#endif
-#endif /* __cplusplus */
-
-#endif /* __SPFC_IO_H__ */
diff --git a/drivers/scsi/spfc/hw/spfc_lld.c b/drivers/scsi/spfc/hw/spfc_lld.c
deleted file mode 100644
index a35484f1c917..000000000000
--- a/drivers/scsi/spfc/hw/spfc_lld.c
+++ /dev/null
@@ -1,997 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/pci.h>
-#include <linux/device.h>
-#include <linux/io-mapping.h>
-#include <linux/interrupt.h>
-#include <linux/inetdevice.h>
-#include <net/addrconf.h>
-#include <linux/time.h>
-#include <linux/timex.h>
-#include <linux/rtc.h>
-#include <linux/aer.h>
-#include <linux/debugfs.h>
-
-#include "spfc_lld.h"
-#include "sphw_hw.h"
-#include "sphw_mt.h"
-#include "sphw_hw_cfg.h"
-#include "sphw_hw_comm.h"
-#include "sphw_common.h"
-#include "spfc_cqm_main.h"
-#include "spfc_module.h"
-
-#define SPFC_DRV_NAME "spfc"
-#define SPFC_CHIP_NAME "spfc"
-
-#define PCI_VENDOR_ID_RAMAXEL 0x1E81
-#define SPFC_DEV_ID_PF_STD 0x9010
-#define SPFC_DEV_ID_VF 0x9008
-
-#define SPFC_VF_PCI_CFG_REG_BAR 0
-#define SPFC_PF_PCI_CFG_REG_BAR 1
-
-#define SPFC_PCI_INTR_REG_BAR 2
-#define SPFC_PCI_MGMT_REG_BAR 3
-#define SPFC_PCI_DB_BAR 4
-
-#define SPFC_SECOND_BASE (1000)
-#define SPFC_SYNC_YEAR_OFFSET (1900)
-#define SPFC_SYNC_MONTH_OFFSET (1)
-#define SPFC_MINUTE_BASE (60)
-#define SPFC_WAIT_TOOL_CNT_TIMEOUT 10000
-
-#define SPFC_MIN_TIME_IN_USECS 900
-#define SPFC_MAX_TIME_IN_USECS 1000
-#define SPFC_MAX_LOOP_TIMES 10000
-
-#define SPFC_TOOL_MIN_TIME_IN_USECS 9900
-#define SPFC_TOOL_MAX_TIME_IN_USECS 10000
-
-#define SPFC_EVENT_PROCESS_TIMEOUT 10000
-
-#define FIND_BIT(num, n) (((num) & (1UL << (n))) ? 1 : 0)
-#define SET_BIT(num, n) ((num) | (1UL << (n)))
-#define CLEAR_BIT(num, n) ((num) & (~(1UL << (n))))
-
-#define MAX_CARD_ID 64
-static unsigned long card_bit_map;
-LIST_HEAD(g_spfc_chip_list);
-struct spfc_uld_info g_uld_info[SERVICE_T_MAX] = { {0} };
-
-struct unf_cm_handle_op spfc_cm_op_handle = {0};
-
-u32 allowed_probe_num = SPFC_MAX_PORT_NUM;
-u32 dif_sgl_mode;
-u32 max_speed = SPFC_SPEED_32G;
-u32 accum_db_num = 1;
-u32 dif_type = 0x1;
-u32 wqe_page_size = 4096;
-u32 wqe_pre_load = 6;
-u32 combo_length = 128;
-u32 cos_bit_map = 0x1f;
-u32 spfc_dif_type;
-u32 spfc_dif_enable;
-u8 spfc_guard;
-int link_lose_tmo = 30;
-
-u32 exit_count = 4096;
-u32 exit_stride = 4096;
-u32 exit_base;
-
-/* dfx counter */
-atomic64_t rx_tx_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
-atomic64_t rx_tx_err[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
-atomic64_t scq_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
-atomic64_t aeq_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
-atomic64_t dif_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
-atomic64_t mail_box_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
-atomic64_t up_err_event_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
-u64 link_event_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_LINK_EVENT_CNT];
-u64 link_reason_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_LINK_REASON_CNT];
-u64 hba_stat[SPFC_MAX_PORT_NUM][SPFC_HBA_STAT_BUTT];
-atomic64_t com_up_event_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
-
-#ifndef MAX_SIZE
-#define MAX_SIZE (16)
-#endif
-
-struct spfc_lld_lock g_lld_lock;
-
-/* g_device_mutex */
-struct mutex g_device_mutex;
-
-/* pci device initialize lock */
-struct mutex g_pci_init_mutex;
-
-#define WAIT_LLD_DEV_HOLD_TIMEOUT (10 * 60 * 1000) /* 10minutes */
-#define WAIT_LLD_DEV_NODE_CHANGED (10 * 60 * 1000) /* 10minutes */
-#define WAIT_LLD_DEV_REF_CNT_EMPTY (2 * 60 * 1000) /* 2minutes */
-
-void lld_dev_cnt_init(struct spfc_pcidev *pci_adapter)
-{
- atomic_set(&pci_adapter->ref_cnt, 0);
-}
-
-void lld_dev_hold(struct spfc_lld_dev *dev)
-{
- struct spfc_pcidev *pci_adapter = pci_get_drvdata(dev->pdev);
-
- atomic_inc(&pci_adapter->ref_cnt);
-}
-
-void lld_dev_put(struct spfc_lld_dev *dev)
-{
- struct spfc_pcidev *pci_adapter = pci_get_drvdata(dev->pdev);
-
- atomic_dec(&pci_adapter->ref_cnt);
-}
-
-static void spfc_sync_time_to_fmw(struct spfc_pcidev *pdev_pri)
-{
- struct tm tm = {0};
- u64 tv_msec;
- int err;
-
- tv_msec = ktime_to_ms(ktime_get_real());
- err = sphw_sync_time(pdev_pri->hwdev, tv_msec);
- if (err) {
- sdk_err(&pdev_pri->pcidev->dev, "Synchronize UTC time to firmware failed, errno:%d.\n",
- err);
- } else {
- time64_to_tm(tv_msec / MSEC_PER_SEC, 0, &tm);
- sdk_info(&pdev_pri->pcidev->dev, "Synchronize UTC time to firmware succeed. UTC time %ld-%02d-%02d %02d:%02d:%02d.\n",
- tm.tm_year + 1900, tm.tm_mon + 1,
- tm.tm_mday, tm.tm_hour,
- tm.tm_min, tm.tm_sec);
- }
-}
-
-void wait_lld_dev_unused(struct spfc_pcidev *pci_adapter)
-{
- u32 loop_cnt = 0;
-
- while (loop_cnt < SPFC_WAIT_TOOL_CNT_TIMEOUT) {
- if (!atomic_read(&pci_adapter->ref_cnt))
- return;
-
- usleep_range(SPFC_TOOL_MIN_TIME_IN_USECS, SPFC_TOOL_MAX_TIME_IN_USECS);
- loop_cnt++;
- }
-}
-
-static void lld_lock_chip_node(void)
-{
- u32 loop_cnt;
-
- mutex_lock(&g_lld_lock.lld_mutex);
-
- loop_cnt = 0;
- while (loop_cnt < WAIT_LLD_DEV_NODE_CHANGED) {
- if (!test_and_set_bit(SPFC_NODE_CHANGE, &g_lld_lock.status))
- break;
-
- loop_cnt++;
-
- if (loop_cnt % SPFC_MAX_LOOP_TIMES == 0)
- pr_warn("[warn]Wait for lld node change complete for %us",
- loop_cnt / UNF_S_TO_MS);
-
- usleep_range(SPFC_MIN_TIME_IN_USECS, SPFC_MAX_TIME_IN_USECS);
- }
-
- if (loop_cnt == WAIT_LLD_DEV_NODE_CHANGED)
- pr_warn("[warn]Wait for lld node change complete timeout when try to get lld lock");
-
- loop_cnt = 0;
- while (loop_cnt < WAIT_LLD_DEV_REF_CNT_EMPTY) {
- if (!atomic_read(&g_lld_lock.dev_ref_cnt))
- break;
-
- loop_cnt++;
-
- if (loop_cnt % SPFC_MAX_LOOP_TIMES == 0)
- pr_warn("[warn]Wait for lld dev unused for %us, reference count: %d",
- loop_cnt / UNF_S_TO_MS, atomic_read(&g_lld_lock.dev_ref_cnt));
-
- usleep_range(SPFC_MIN_TIME_IN_USECS, SPFC_MAX_TIME_IN_USECS);
- }
-
- if (loop_cnt == WAIT_LLD_DEV_REF_CNT_EMPTY)
- pr_warn("[warn]Wait for lld dev unused timeout");
-
- mutex_unlock(&g_lld_lock.lld_mutex);
-}
-
-static void lld_unlock_chip_node(void)
-{
- clear_bit(SPFC_NODE_CHANGE, &g_lld_lock.status);
-}
-
-void lld_hold(void)
-{
- u32 loop_cnt = 0;
-
- /* ensure there have not any chip node in changing */
- mutex_lock(&g_lld_lock.lld_mutex);
-
- while (loop_cnt < WAIT_LLD_DEV_HOLD_TIMEOUT) {
- if (!test_bit(SPFC_NODE_CHANGE, &g_lld_lock.status))
- break;
-
- loop_cnt++;
-
- if (loop_cnt % SPFC_MAX_LOOP_TIMES == 0)
- pr_warn("[warn]Wait lld node change complete for %u",
- loop_cnt / UNF_S_TO_MS);
-
- usleep_range(SPFC_MIN_TIME_IN_USECS, SPFC_MAX_TIME_IN_USECS);
- }
-
- if (loop_cnt == WAIT_LLD_DEV_HOLD_TIMEOUT)
- pr_warn("[warn]Wait lld node change complete timeout when try to hode lld dev %u",
- loop_cnt / UNF_S_TO_MS);
-
- atomic_inc(&g_lld_lock.dev_ref_cnt);
-
- mutex_unlock(&g_lld_lock.lld_mutex);
-}
-
-void lld_put(void)
-{
- atomic_dec(&g_lld_lock.dev_ref_cnt);
-}
-
-static void spfc_lld_lock_init(void)
-{
- mutex_init(&g_lld_lock.lld_mutex);
- atomic_set(&g_lld_lock.dev_ref_cnt, 0);
-}
-
-static void spfc_realease_cmo_op_handle(void)
-{
- memset(&spfc_cm_op_handle, 0, sizeof(struct unf_cm_handle_op));
-}
-
-static void spfc_check_module_para(void)
-{
- if (spfc_dif_enable) {
- dif_sgl_mode = true;
- spfc_dif_type = SHOST_DIF_TYPE1_PROTECTION | SHOST_DIX_TYPE1_PROTECTION;
- dix_flag = 1;
- }
-
- if (dif_sgl_mode != 0)
- dif_sgl_mode = 1;
-}
-
-void spfc_event_process(void *adapter, struct sphw_event_info *event)
-{
- struct spfc_pcidev *dev = adapter;
-
- if (test_and_set_bit(SERVICE_T_FC, &dev->state)) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[WARN]Event: fc is in detach");
- return;
- }
-
- if (g_uld_info[SERVICE_T_FC].event)
- g_uld_info[SERVICE_T_FC].event(&dev->lld_dev, dev->uld_dev[SERVICE_T_FC], event);
-
- clear_bit(SERVICE_T_FC, &dev->state);
-}
-
-int spfc_stateful_init(void *hwdev)
-{
- struct sphw_hwdev *dev = hwdev;
- int stateful_en;
- int err;
-
- if (!dev)
- return -EINVAL;
-
- if (dev->statufull_ref_cnt++)
- return 0;
-
- stateful_en = IS_FT_TYPE(dev) | IS_RDMA_TYPE(dev);
- if (stateful_en && SPHW_IS_PPF(dev)) {
- err = sphw_ppf_ext_db_init(dev);
- if (err)
- goto out;
- }
-
- err = cqm3_init(dev);
- if (err) {
- sdk_err(dev->dev_hdl, "Failed to init cqm, err: %d\n", err);
- goto init_cqm_err;
- }
-
- sdk_info(dev->dev_hdl, "Initialize statefull resource success\n");
-
- return 0;
-
-init_cqm_err:
- if (stateful_en)
- sphw_ppf_ext_db_deinit(dev);
-
-out:
- dev->statufull_ref_cnt--;
-
- return err;
-}
-
-void spfc_stateful_deinit(void *hwdev)
-{
- struct sphw_hwdev *dev = hwdev;
- u32 stateful_en;
-
- if (!dev || !dev->statufull_ref_cnt)
- return;
-
- if (--dev->statufull_ref_cnt)
- return;
-
- cqm3_uninit(hwdev);
-
- stateful_en = IS_FT_TYPE(dev) | IS_RDMA_TYPE(dev);
- if (stateful_en)
- sphw_ppf_ext_db_deinit(hwdev);
-
- sdk_info(dev->dev_hdl, "Clear statefull resource success\n");
-}
-
-static int attach_uld(struct spfc_pcidev *dev, struct spfc_uld_info *uld_info)
-{
- void *uld_dev = NULL;
- int err;
-
- mutex_lock(&dev->pdev_mutex);
- if (dev->uld_dev[SERVICE_T_FC]) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]fc driver has attached to pcie device");
- err = 0;
- goto out_unlock;
- }
-
- err = spfc_stateful_init(dev->hwdev);
- if (err) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Failed to initialize statefull resources");
- goto out_unlock;
- }
-
- err = uld_info->probe(&dev->lld_dev, &uld_dev,
- dev->uld_dev_name[SERVICE_T_FC]);
- if (err || !uld_dev) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Failed to add object for fc driver to pcie device");
- goto probe_failed;
- }
-
- dev->uld_dev[SERVICE_T_FC] = uld_dev;
- mutex_unlock(&dev->pdev_mutex);
-
- return RETURN_OK;
-
-probe_failed:
- spfc_stateful_deinit(dev->hwdev);
-
-out_unlock:
- mutex_unlock(&dev->pdev_mutex);
-
- return err;
-}
-
-static void detach_uld(struct spfc_pcidev *dev)
-{
- struct spfc_uld_info *uld_info = &g_uld_info[SERVICE_T_FC];
- u32 cnt = 0;
-
- mutex_lock(&dev->pdev_mutex);
- if (!dev->uld_dev[SERVICE_T_FC]) {
- mutex_unlock(&dev->pdev_mutex);
- return;
- }
-
- while (cnt < SPFC_EVENT_PROCESS_TIMEOUT) {
- if (!test_and_set_bit(SERVICE_T_FC, &dev->state))
- break;
- usleep_range(900, 1000);
- cnt++;
- }
-
- uld_info->remove(&dev->lld_dev, dev->uld_dev[SERVICE_T_FC]);
- dev->uld_dev[SERVICE_T_FC] = NULL;
- spfc_stateful_deinit(dev->hwdev);
- if (cnt < SPFC_EVENT_PROCESS_TIMEOUT)
- clear_bit(SERVICE_T_FC, &dev->state);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
- "Detach fc driver from pcie device succeed");
- mutex_unlock(&dev->pdev_mutex);
-}
-
-int spfc_register_uld(struct spfc_uld_info *uld_info)
-{
- memset(g_uld_info, 0, sizeof(g_uld_info));
- spfc_lld_lock_init();
- mutex_init(&g_device_mutex);
- mutex_init(&g_pci_init_mutex);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
- "[event]Module Init Success, wait for pci init and probe");
-
- if (!uld_info || !uld_info->probe || !uld_info->remove) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Invalid information of fc driver to register");
- return -EINVAL;
- }
-
- lld_hold();
-
- if (g_uld_info[SERVICE_T_FC].probe) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]fc driver has registered");
- lld_put();
- return -EINVAL;
- }
-
- memcpy(&g_uld_info[SERVICE_T_FC], uld_info, sizeof(*uld_info));
-
- lld_put();
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
- "[KEVENT]Register spfc driver succeed");
- return RETURN_OK;
-}
-
-void spfc_unregister_uld(void)
-{
- struct spfc_uld_info *uld_info = NULL;
-
- lld_hold();
- uld_info = &g_uld_info[SERVICE_T_FC];
- memset(uld_info, 0, sizeof(*uld_info));
- lld_put();
-}
-
-static int spfc_pci_init(struct pci_dev *pdev)
-{
- struct spfc_pcidev *pci_adapter = NULL;
- int err = 0;
-
- pci_adapter = kzalloc(sizeof(*pci_adapter), GFP_KERNEL);
- if (!pci_adapter) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Failed to alloc pci device adapter");
- return -ENOMEM;
- }
- pci_adapter->pcidev = pdev;
- mutex_init(&pci_adapter->pdev_mutex);
-
- pci_set_drvdata(pdev, pci_adapter);
-
- err = pci_enable_device(pdev);
- if (err) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Failed to enable PCI device");
- goto pci_enable_err;
- }
-
- err = pci_request_regions(pdev, SPFC_DRV_NAME);
- if (err) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Failed to request regions");
- goto pci_regions_err;
- }
-
- pci_enable_pcie_error_reporting(pdev);
-
- pci_set_master(pdev);
-
- err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
- if (err) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Couldn't set 64-bit DMA mask");
- err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
- if (err) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT,
- UNF_ERR, "[err]Failed to set DMA mask");
- goto dma_mask_err;
- }
- }
-
- err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
- if (err) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Couldn't set 64-bit coherent DMA mask");
- err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
- if (err) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT,
- UNF_ERR,
- "[err]Failed to set coherent DMA mask");
- goto dma_consistnet_mask_err;
- }
- }
-
- return 0;
-
-dma_consistnet_mask_err:
-dma_mask_err:
- pci_clear_master(pdev);
- pci_release_regions(pdev);
-
-pci_regions_err:
- pci_disable_device(pdev);
-
-pci_enable_err:
- pci_set_drvdata(pdev, NULL);
- kfree(pci_adapter);
-
- return err;
-}
-
-static void spfc_pci_deinit(struct pci_dev *pdev)
-{
- struct spfc_pcidev *pci_adapter = pci_get_drvdata(pdev);
-
- pci_clear_master(pdev);
- pci_release_regions(pdev);
- pci_disable_pcie_error_reporting(pdev);
- pci_disable_device(pdev);
- pci_set_drvdata(pdev, NULL);
- kfree(pci_adapter);
-}
-
-static int alloc_chip_node(struct spfc_pcidev *pci_adapter)
-{
- struct card_node *chip_node = NULL;
- unsigned char i;
- unsigned char bus_number = 0;
-
- if (!pci_is_root_bus(pci_adapter->pcidev->bus))
- bus_number = pci_adapter->pcidev->bus->number;
-
- if (bus_number != 0) {
- list_for_each_entry(chip_node, &g_spfc_chip_list, node) {
- if (chip_node->bus_num == bus_number) {
- pci_adapter->chip_node = chip_node;
- return 0;
- }
- }
- } else if (pci_adapter->pcidev->device == SPFC_DEV_ID_VF) {
- list_for_each_entry(chip_node, &g_spfc_chip_list, node) {
- if (chip_node) {
- pci_adapter->chip_node = chip_node;
- return 0;
- }
- }
- }
-
- for (i = 0; i < MAX_CARD_ID; i++) {
- if (!test_and_set_bit(i, &card_bit_map))
- break;
- }
-
- if (i == MAX_CARD_ID) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Failed to alloc card id");
- return -EFAULT;
- }
-
- chip_node = kzalloc(sizeof(*chip_node), GFP_KERNEL);
- if (!chip_node) {
- clear_bit(i, &card_bit_map);
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Failed to alloc chip node");
- return -ENOMEM;
- }
-
- /* bus number */
- chip_node->bus_num = bus_number;
-
- snprintf(chip_node->chip_name, IFNAMSIZ, "%s%d", SPFC_CHIP_NAME, i);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[INFO]Add new chip %s to global list succeed",
- chip_node->chip_name);
-
- list_add_tail(&chip_node->node, &g_spfc_chip_list);
-
- INIT_LIST_HEAD(&chip_node->func_list);
- pci_adapter->chip_node = chip_node;
-
- return 0;
-}
-
-#ifdef CONFIG_X86
-void cfg_order_reg(struct spfc_pcidev *pci_adapter)
-{
- u8 cpu_model[] = {0x3c, 0x3f, 0x45, 0x46, 0x3d, 0x47, 0x4f, 0x56};
- struct cpuinfo_x86 *cpuinfo = NULL;
- u32 i;
-
- if (sphw_func_type(pci_adapter->hwdev) == TYPE_VF)
- return;
-
- cpuinfo = &cpu_data(0);
-
- for (i = 0; i < sizeof(cpu_model); i++) {
- if (cpu_model[i] == cpuinfo->x86_model)
- sphw_set_pcie_order_cfg(pci_adapter->hwdev);
- }
-}
-#endif
-
-static int mapping_bar(struct pci_dev *pdev, struct spfc_pcidev *pci_adapter)
-{
- int cfg_bar;
-
- cfg_bar = pdev->is_virtfn ? SPFC_VF_PCI_CFG_REG_BAR : SPFC_PF_PCI_CFG_REG_BAR;
-
- pci_adapter->cfg_reg_base = pci_ioremap_bar(pdev, cfg_bar);
- if (!pci_adapter->cfg_reg_base) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "Failed to map configuration regs");
- return -ENOMEM;
- }
-
- pci_adapter->intr_reg_base = pci_ioremap_bar(pdev, SPFC_PCI_INTR_REG_BAR);
- if (!pci_adapter->intr_reg_base) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "Failed to map interrupt regs");
- goto map_intr_bar_err;
- }
-
- if (!pdev->is_virtfn) {
- pci_adapter->mgmt_reg_base = pci_ioremap_bar(pdev, SPFC_PCI_MGMT_REG_BAR);
- if (!pci_adapter->mgmt_reg_base) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT,
- UNF_ERR, "Failed to map mgmt regs");
- goto map_mgmt_bar_err;
- }
- }
-
- pci_adapter->db_base_phy = pci_resource_start(pdev, SPFC_PCI_DB_BAR);
- pci_adapter->db_dwqe_len = pci_resource_len(pdev, SPFC_PCI_DB_BAR);
- pci_adapter->db_base = pci_ioremap_bar(pdev, SPFC_PCI_DB_BAR);
- if (!pci_adapter->db_base) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "Failed to map doorbell regs");
- goto map_db_err;
- }
-
- return 0;
-
-map_db_err:
- if (!pdev->is_virtfn)
- iounmap(pci_adapter->mgmt_reg_base);
-
-map_mgmt_bar_err:
- iounmap(pci_adapter->intr_reg_base);
-
-map_intr_bar_err:
- iounmap(pci_adapter->cfg_reg_base);
-
- return -ENOMEM;
-}
-
-static void unmapping_bar(struct spfc_pcidev *pci_adapter)
-{
- iounmap(pci_adapter->db_base);
-
- if (!pci_adapter->pcidev->is_virtfn)
- iounmap(pci_adapter->mgmt_reg_base);
-
- iounmap(pci_adapter->intr_reg_base);
- iounmap(pci_adapter->cfg_reg_base);
-}
-
-static int spfc_func_init(struct pci_dev *pdev, struct spfc_pcidev *pci_adapter)
-{
- struct sphw_init_para init_para = {0};
- int err;
-
- init_para.adapter_hdl = pci_adapter;
- init_para.pcidev_hdl = pdev;
- init_para.dev_hdl = &pdev->dev;
- init_para.cfg_reg_base = pci_adapter->cfg_reg_base;
- init_para.intr_reg_base = pci_adapter->intr_reg_base;
- init_para.mgmt_reg_base = pci_adapter->mgmt_reg_base;
- init_para.db_base = pci_adapter->db_base;
- init_para.db_base_phy = pci_adapter->db_base_phy;
- init_para.db_dwqe_len = pci_adapter->db_dwqe_len;
- init_para.hwdev = &pci_adapter->hwdev;
- init_para.chip_node = pci_adapter->chip_node;
- err = sphw_init_hwdev(&init_para);
- if (err) {
- pci_adapter->hwdev = NULL;
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Failed to initialize hardware device");
- return -EFAULT;
- }
-
- pci_adapter->lld_dev.pdev = pdev;
- pci_adapter->lld_dev.hwdev = pci_adapter->hwdev;
-
- sphw_event_register(pci_adapter->hwdev, pci_adapter, spfc_event_process);
-
- if (sphw_func_type(pci_adapter->hwdev) != TYPE_VF)
- spfc_sync_time_to_fmw(pci_adapter);
- lld_lock_chip_node();
- list_add_tail(&pci_adapter->node, &pci_adapter->chip_node->func_list);
- lld_unlock_chip_node();
- err = attach_uld(pci_adapter, &g_uld_info[SERVICE_T_FC]);
-
- if (err) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Spfc3 attach uld fail");
- goto attach_fc_err;
- }
-
-#ifdef CONFIG_X86
- cfg_order_reg(pci_adapter);
-#endif
-
- return 0;
-
-attach_fc_err:
- lld_lock_chip_node();
- list_del(&pci_adapter->node);
- lld_unlock_chip_node();
- wait_lld_dev_unused(pci_adapter);
-
- return err;
-}
-
-static void spfc_func_deinit(struct pci_dev *pdev)
-{
- struct spfc_pcidev *pci_adapter = pci_get_drvdata(pdev);
-
- lld_lock_chip_node();
- list_del(&pci_adapter->node);
- lld_unlock_chip_node();
- wait_lld_dev_unused(pci_adapter);
-
- detach_uld(pci_adapter);
- sphw_disable_mgmt_msg_report(pci_adapter->hwdev);
- sphw_flush_mgmt_workq(pci_adapter->hwdev);
- sphw_event_unregister(pci_adapter->hwdev);
- sphw_free_hwdev(pci_adapter->hwdev);
-}
-
-static void free_chip_node(struct spfc_pcidev *pci_adapter)
-{
- struct card_node *chip_node = pci_adapter->chip_node;
- int id, err;
-
- if (list_empty(&chip_node->func_list)) {
- list_del(&chip_node->node);
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[INFO]Delete chip %s from global list succeed",
- chip_node->chip_name);
- err = sscanf(chip_node->chip_name, SPFC_CHIP_NAME "%d", &id);
- if (err < 0) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT,
- UNF_ERR, "[err]Failed to get spfc id");
- }
-
- clear_bit(id, &card_bit_map);
-
- kfree(chip_node);
- }
-}
-
-static int spfc_probe(struct pci_dev *pdev, const struct pci_device_id *id)
-{
- struct spfc_pcidev *pci_adapter = NULL;
- int err;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
- "[event]Spfc3 Pcie device probe begin");
-
- mutex_lock(&g_pci_init_mutex);
- err = spfc_pci_init(pdev);
- if (err) {
- mutex_unlock(&g_pci_init_mutex);
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]pci init fail, return %d", err);
- return err;
- }
- pci_adapter = pci_get_drvdata(pdev);
- err = mapping_bar(pdev, pci_adapter);
- if (err) {
- mutex_unlock(&g_pci_init_mutex);
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Failed to map bar");
- goto map_bar_failed;
- }
- mutex_unlock(&g_pci_init_mutex);
- pci_adapter->id = *id;
- lld_dev_cnt_init(pci_adapter);
-
- /* if chip information of pcie function exist, add the function into chip */
- lld_lock_chip_node();
- err = alloc_chip_node(pci_adapter);
- if (err) {
- lld_unlock_chip_node();
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Failed to add new chip node to global list");
- goto alloc_chip_node_fail;
- }
-
- lld_unlock_chip_node();
- err = spfc_func_init(pdev, pci_adapter);
- if (err) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]spfc func init fail");
- goto func_init_err;
- }
-
- return 0;
-
-func_init_err:
- lld_lock_chip_node();
- free_chip_node(pci_adapter);
- lld_unlock_chip_node();
-
-alloc_chip_node_fail:
- unmapping_bar(pci_adapter);
-
-map_bar_failed:
- spfc_pci_deinit(pdev);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Pcie device probe failed");
- return err;
-}
-
-static void spfc_remove(struct pci_dev *pdev)
-{
- struct spfc_pcidev *pci_adapter = pci_get_drvdata(pdev);
-
- if (!pci_adapter)
- return;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[INFO]Pcie device remove begin");
- sphw_detect_hw_present(pci_adapter->hwdev);
- spfc_func_deinit(pdev);
- lld_lock_chip_node();
- free_chip_node(pci_adapter);
- lld_unlock_chip_node();
- unmapping_bar(pci_adapter);
- mutex_lock(&g_pci_init_mutex);
- spfc_pci_deinit(pdev);
- mutex_unlock(&g_pci_init_mutex);
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[INFO]Pcie device removed");
-}
-
-static void spfc_shutdown(struct pci_dev *pdev)
-{
- struct spfc_pcidev *pci_adapter = pci_get_drvdata(pdev);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Shutdown device");
-
- if (pci_adapter)
- sphw_shutdown_hwdev(pci_adapter->hwdev);
-
- pci_disable_device(pdev);
-}
-
-static pci_ers_result_t spfc_io_error_detected(struct pci_dev *pdev,
- pci_channel_state_t state)
-{
- struct spfc_pcidev *pci_adapter = NULL;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Uncorrectable error detected, log and cleanup error status: 0x%08x",
- state);
-
- pci_aer_clear_nonfatal_status(pdev);
- pci_adapter = pci_get_drvdata(pdev);
-
- if (pci_adapter)
- sphw_record_pcie_error(pci_adapter->hwdev);
-
- return PCI_ERS_RESULT_CAN_RECOVER;
-}
-
-static int unf_global_value_init(void)
-{
- memset(rx_tx_stat, 0, sizeof(rx_tx_stat));
- memset(rx_tx_err, 0, sizeof(rx_tx_err));
- memset(scq_err_stat, 0, sizeof(scq_err_stat));
- memset(aeq_err_stat, 0, sizeof(aeq_err_stat));
- memset(dif_err_stat, 0, sizeof(dif_err_stat));
- memset(link_event_stat, 0, sizeof(link_event_stat));
- memset(link_reason_stat, 0, sizeof(link_reason_stat));
- memset(hba_stat, 0, sizeof(hba_stat));
- memset(&spfc_cm_op_handle, 0, sizeof(struct unf_cm_handle_op));
- memset(up_err_event_stat, 0, sizeof(up_err_event_stat));
- memset(mail_box_stat, 0, sizeof(mail_box_stat));
- memset(spfc_hba, 0, sizeof(spfc_hba));
-
- spin_lock_init(&probe_spin_lock);
-
- /* 4. Get COM Handlers used for low_level */
- if (unf_get_cm_handle_ops(&spfc_cm_op_handle) != RETURN_OK) {
- spfc_realease_cmo_op_handle();
- return RETURN_ERROR_S32;
- }
-
- return RETURN_OK;
-}
-
-static const struct pci_device_id spfc_pci_table[] = {
- {PCI_VDEVICE(RAMAXEL, SPFC_DEV_ID_PF_STD), 0},
- {0, 0}
-};
-
-MODULE_DEVICE_TABLE(pci, spfc_pci_table);
-
-static struct pci_error_handlers spfc_err_handler = {
- .error_detected = spfc_io_error_detected,
-};
-
-static struct pci_driver spfc_driver = {.name = SPFC_DRV_NAME,
- .id_table = spfc_pci_table,
- .probe = spfc_probe,
- .remove = spfc_remove,
- .shutdown = spfc_shutdown,
- .err_handler = &spfc_err_handler};
-
-static __init int spfc_lld_init(void)
-{
- if (unf_common_init() != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]UNF_Common_init failed");
- return RETURN_ERROR_S32;
- }
-
- spfc_check_module_para();
-
- if (unf_global_value_init() != RETURN_OK)
- return RETURN_ERROR_S32;
-
- spfc_register_uld(&fc_uld_info);
- return pci_register_driver(&spfc_driver);
-}
-
-static __exit void spfc_lld_exit(void)
-{
- pci_unregister_driver(&spfc_driver);
- spfc_unregister_uld();
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[event]SPFC module removing...");
-
- spfc_realease_cmo_op_handle();
-
- /* 2. Unregister FC COM module(level) */
- unf_common_exit();
-}
-
-module_init(spfc_lld_init);
-module_exit(spfc_lld_exit);
-
-MODULE_AUTHOR("Ramaxel Memory Technology, Ltd");
-MODULE_DESCRIPTION(SPFC_DRV_DESC);
-MODULE_VERSION(SPFC_DRV_VERSION);
-MODULE_LICENSE("GPL");
-
-module_param(allowed_probe_num, uint, 0444);
-module_param(dif_sgl_mode, uint, 0444);
-module_param(max_speed, uint, 0444);
-module_param(wqe_page_size, uint, 0444);
-module_param(combo_length, uint, 0444);
-module_param(cos_bit_map, uint, 0444);
-module_param(spfc_dif_enable, uint, 0444);
-MODULE_PARM_DESC(spfc_dif_enable, "set dif enable/disable(1/0), default is 0(disable).");
-module_param(link_lose_tmo, uint, 0444);
-MODULE_PARM_DESC(link_lose_tmo, "set link time out, default is 30s.");
diff --git a/drivers/scsi/spfc/hw/spfc_lld.h b/drivers/scsi/spfc/hw/spfc_lld.h
deleted file mode 100644
index f7b4a5e5ce07..000000000000
--- a/drivers/scsi/spfc/hw/spfc_lld.h
+++ /dev/null
@@ -1,76 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef SPFC_LLD_H
-#define SPFC_LLD_H
-
-#include "sphw_crm.h"
-
-struct spfc_lld_dev {
- struct pci_dev *pdev;
- void *hwdev;
-};
-
-struct spfc_uld_info {
- /* uld_dev: should not return null even the function capability
- * is not support the up layer driver
- * uld_dev_name: NIC driver should copy net device name.
- * FC driver could copy fc device name.
- * other up layer driver don`t need copy anything
- */
- int (*probe)(struct spfc_lld_dev *lld_dev, void **uld_dev,
- char *uld_dev_name);
- void (*remove)(struct spfc_lld_dev *lld_dev, void *uld_dev);
- int (*suspend)(struct spfc_lld_dev *lld_dev, void *uld_dev,
- pm_message_t state);
- int (*resume)(struct spfc_lld_dev *lld_dev, void *uld_dev);
- void (*event)(struct spfc_lld_dev *lld_dev, void *uld_dev,
- struct sphw_event_info *event);
- int (*ioctl)(void *uld_dev, u32 cmd, const void *buf_in, u32 in_size,
- void *buf_out, u32 *out_size);
-};
-
-/* Structure pcidev private */
-struct spfc_pcidev {
- struct pci_dev *pcidev;
- void *hwdev;
- struct card_node *chip_node;
- struct spfc_lld_dev lld_dev;
- /* such as fc_dev */
- void *uld_dev[SERVICE_T_MAX];
- /* Record the service object name */
- char uld_dev_name[SERVICE_T_MAX][IFNAMSIZ];
- /* It is a the global variable for driver to manage
- * all function device linked list
- */
- struct list_head node;
- void __iomem *cfg_reg_base;
- void __iomem *intr_reg_base;
- void __iomem *mgmt_reg_base;
- u64 db_dwqe_len;
- u64 db_base_phy;
- void __iomem *db_base;
- /* lock for attach/detach uld */
- struct mutex pdev_mutex;
- /* setted when uld driver processing event */
- unsigned long state;
- struct pci_device_id id;
- atomic_t ref_cnt;
-};
-
-enum spfc_lld_status {
- SPFC_NODE_CHANGE = BIT(0),
-};
-
-struct spfc_lld_lock {
- /* lock for chip list */
- struct mutex lld_mutex;
- unsigned long status;
- atomic_t dev_ref_cnt;
-};
-
-#ifndef MAX_SIZE
-#define MAX_SIZE (16)
-#endif
-
-#endif
diff --git a/drivers/scsi/spfc/hw/spfc_module.h b/drivers/scsi/spfc/hw/spfc_module.h
deleted file mode 100644
index 153d59955339..000000000000
--- a/drivers/scsi/spfc/hw/spfc_module.h
+++ /dev/null
@@ -1,297 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef SPFC_MODULE_H
-#define SPFC_MODULE_H
-#include "unf_type.h"
-#include "unf_log.h"
-#include "unf_common.h"
-#include "spfc_utils.h"
-#include "spfc_hba.h"
-
-#define SPFC_FT_ENABLE (1)
-#define SPFC_FC_DISABLE (0)
-
-#define SPFC_P2P_DIRECT (0)
-#define SPFC_P2P_FABRIC (1)
-#define SPFC_LOOP (2)
-#define SPFC_ATUOSPEED (1)
-#define SPFC_FIXEDSPEED (0)
-#define SPFC_AUTOTOPO (0)
-#define SPFC_P2PTOPO (0x2)
-#define SPFC_LOOPTOPO (0x1)
-#define SPFC_SPEED_2G (0x2)
-#define SPFC_SPEED_4G (0x4)
-#define SPFC_SPEED_8G (0x8)
-#define SPFC_SPEED_16G (0x10)
-#define SPFC_SPEED_32G (0x20)
-
-#define SPFC_MAX_PORT_NUM SPFC_MAX_PROBE_PORT_NUM
-#define SPFC_MAX_PORT_TASK_TYPE_STAT_NUM (128)
-#define SPFC_MAX_LINK_EVENT_CNT (4)
-#define SPFC_MAX_LINK_REASON_CNT (256)
-
-#define SPFC_MML_LOGCTRO_NUM (14)
-
-#define WWN_SIZE 8 /* Size of WWPN, WWN & WWNN */
-
-/*
- * Define the data type
- */
-struct spfc_log_ctrl {
- char *log_option;
- u32 state;
-};
-
-/*
- * Declare the global function.
- */
-extern struct unf_cm_handle_op spfc_cm_op_handle;
-extern struct spfc_uld_info fc_uld_info;
-extern u32 allowed_probe_num;
-extern u32 max_speed;
-extern u32 accum_db_num;
-extern u32 wqe_page_size;
-extern u32 dif_type;
-extern u32 wqe_pre_load;
-extern u32 combo_length;
-extern u32 cos_bit_map;
-extern u32 exit_count;
-extern u32 exit_stride;
-extern u32 exit_base;
-
-extern atomic64_t rx_tx_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
-extern atomic64_t rx_tx_err[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
-extern atomic64_t scq_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
-extern atomic64_t aeq_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
-extern atomic64_t dif_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
-extern atomic64_t mail_box_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
-extern atomic64_t com_up_event_err_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
-extern u64 link_event_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_LINK_EVENT_CNT];
-extern u64 link_reason_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_LINK_REASON_CNT];
-extern atomic64_t up_err_event_stat[SPFC_MAX_PORT_NUM][SPFC_MAX_PORT_TASK_TYPE_STAT_NUM];
-extern u64 hba_stat[SPFC_MAX_PORT_NUM][SPFC_HBA_STAT_BUTT];
-#define SPFC_LINK_EVENT_STAT(hba, link_ent) \
- (link_event_stat[(hba)->probe_index][link_ent]++)
-#define SPFC_LINK_REASON_STAT(hba, link_rsn) \
- (link_reason_stat[(hba)->probe_index][link_rsn]++)
-#define SPFC_HBA_STAT(hba, hba_stat_type) \
- (hba_stat[(hba)->probe_index][hba_stat_type]++)
-
-#define SPFC_UP_ERR_EVENT_STAT(hba, err_type) \
- (atomic64_inc(&up_err_event_stat[(hba)->probe_index][err_type]))
-#define SPFC_UP_ERR_EVENT_STAT_READ(probe_index, io_type) \
- (atomic64_read(&up_err_event_stat[probe_index][io_type]))
-#define SPFC_DIF_ERR_STAT(hba, dif_err) \
- (atomic64_inc(&dif_err_stat[(hba)->probe_index][dif_err]))
-#define SPFC_DIF_ERR_STAT_READ(probe_index, dif_err) \
- (atomic64_read(&dif_err_stat[probe_index][dif_err]))
-
-#define SPFC_IO_STAT(hba, io_type) \
- (atomic64_inc(&rx_tx_stat[(hba)->probe_index][io_type]))
-#define SPFC_IO_STAT_READ(probe_index, io_type) \
- (atomic64_read(&rx_tx_stat[probe_index][io_type]))
-
-#define SPFC_ERR_IO_STAT(hba, io_type) \
- (atomic64_inc(&rx_tx_err[(hba)->probe_index][io_type]))
-#define SPFC_ERR_IO_STAT_READ(probe_index, io_type) \
- (atomic64_read(&rx_tx_err[probe_index][io_type]))
-
-#define SPFC_SCQ_ERR_TYPE_STAT(hba, err_type) \
- (atomic64_inc(&scq_err_stat[(hba)->probe_index][err_type]))
-#define SPFC_SCQ_ERR_TYPE_STAT_READ(probe_index, io_type) \
- (atomic64_read(&scq_err_stat[probe_index][io_type]))
-#define SPFC_AEQ_ERR_TYPE_STAT(hba, err_type) \
- (atomic64_inc(&aeq_err_stat[(hba)->probe_index][err_type]))
-#define SPFC_AEQ_ERR_TYPE_STAT_READ(probe_index, io_type) \
- (atomic64_read(&aeq_err_stat[probe_index][io_type]))
-
-#define SPFC_MAILBOX_STAT(hba, io_type) \
- (atomic64_inc(&mail_box_stat[(hba)->probe_index][io_type]))
-#define SPFC_MAILBOX_STAT_READ(probe_index, io_type) \
- (atomic64_read(&mail_box_stat[probe_index][io_type]))
-
-#define SPFC_COM_UP_ERR_EVENT_STAT(hba, err_type) \
- (atomic64_inc( \
- &com_up_event_err_stat[(hba)->probe_index][err_type]))
-#define SPFC_COM_UP_ERR_EVENT_STAT_READ(probe_index, err_type) \
- (atomic64_read(&com_up_event_err_stat[probe_index][err_type]))
-
-#define UNF_LOWLEVEL_ALLOC_LPORT(lport, fc_port, low_level) \
- do { \
- if (spfc_cm_op_handle.unf_alloc_local_port) { \
- lport = spfc_cm_op_handle.unf_alloc_local_port( \
- (fc_port), (low_level)); \
- } else { \
- lport = NULL; \
- } \
- } while (0)
-
-#define UNF_LOWLEVEL_RECEIVE_LS_GS_PKG(ret, fc_port, pkg) \
- do { \
- if (spfc_cm_op_handle.unf_receive_ls_gs_pkg) { \
- ret = spfc_cm_op_handle.unf_receive_ls_gs_pkg( \
- (fc_port), (pkg)); \
- } else { \
- ret = UNF_RETURN_ERROR; \
- } \
- } while (0)
-
-#define UNF_LOWLEVEL_SEND_ELS_DONE(ret, fc_port, pkg) \
- do { \
- if (spfc_cm_op_handle.unf_send_els_done) { \
- ret = spfc_cm_op_handle.unf_send_els_done((fc_port), \
- (pkg)); \
- } else { \
- ret = UNF_RETURN_ERROR; \
- } \
- } while (0)
-
-#define UNF_LOWLEVEL_GET_CFG_PARMS(ret, section_name, cfg_parm, cfg_value, \
- item_num) \
- do { \
- if (spfc_cm_op_handle.unf_get_cfg_parms) { \
- ret = (u32)spfc_cm_op_handle.unf_get_cfg_parms( \
- (section_name), (cfg_parm), (cfg_value), \
- (item_num)); \
- } else { \
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN, \
- "Get config parameter function is NULL."); \
- ret = UNF_RETURN_ERROR; \
- } \
- } while (0)
-
-#define UNF_LOWLEVEL_RELEASE_LOCAL_PORT(ret, lport) \
- do { \
- if (unlikely(!spfc_cm_op_handle.unf_release_local_port)) { \
- ret = UNF_RETURN_ERROR; \
- } else { \
- ret = \
- spfc_cm_op_handle.unf_release_local_port(lport); \
- } \
- } while (0)
-
-#define UNF_CM_GET_SGL_ENTRY(ret, pkg, buf, buf_len) \
- do { \
- if (unlikely(!spfc_cm_op_handle.unf_cm_get_sgl_entry)) { \
- ret = UNF_RETURN_ERROR; \
- } else { \
- ret = spfc_cm_op_handle.unf_cm_get_sgl_entry( \
- pkg, buf, buf_len); \
- } \
- } while (0)
-
-#define UNF_CM_GET_DIF_SGL_ENTRY(ret, pkg, buf, buf_len) \
- do { \
- if (unlikely(!spfc_cm_op_handle.unf_cm_get_dif_sgl_entry)) { \
- ret = UNF_RETURN_ERROR; \
- } else { \
- ret = spfc_cm_op_handle.unf_cm_get_dif_sgl_entry( \
- pkg, buf, buf_len); \
- } \
- } while (0)
-
-#define UNF_GET_SGL_ENTRY(ret, pkg, buf, buf_len, dif_flag) \
- do { \
- if (dif_flag) { \
- UNF_CM_GET_DIF_SGL_ENTRY(ret, pkg, buf, buf_len); \
- } else { \
- UNF_CM_GET_SGL_ENTRY(ret, pkg, buf, buf_len); \
- } \
- } while (0)
-
-#define UNF_GET_FREE_ESGL_PAGE(ret, lport, pkg) \
- do { \
- if (unlikely( \
- !spfc_cm_op_handle.unf_get_one_free_esgl_page)) { \
- ret = NULL; \
- } else { \
- ret = \
- spfc_cm_op_handle.unf_get_one_free_esgl_page( \
- lport, pkg); \
- } \
- } while (0)
-
-#define UNF_LOWLEVEL_FCP_CMND_RECEIVED(ret, lport, pkg) \
- do { \
- if (unlikely(!spfc_cm_op_handle.unf_process_fcp_cmnd)) { \
- ret = UNF_RETURN_ERROR; \
- } else { \
- ret = spfc_cm_op_handle.unf_process_fcp_cmnd(lport, \
- pkg); \
- } \
- } while (0)
-
-#define UNF_LOWLEVEL_SCSI_COMPLETED(ret, lport, pkg) \
- do { \
- if (unlikely(NULL == \
- spfc_cm_op_handle.unf_receive_ini_response)) { \
- ret = UNF_RETURN_ERROR; \
- } else { \
- ret = spfc_cm_op_handle.unf_receive_ini_response( \
- lport, pkg); \
- } \
- } while (0)
-
-#define UNF_LOWLEVEL_PORT_EVENT(ret, lport, event, input) \
- do { \
- if (unlikely(!spfc_cm_op_handle.unf_fc_port_event)) { \
- ret = UNF_RETURN_ERROR; \
- } else { \
- ret = spfc_cm_op_handle.unf_fc_port_event( \
- lport, event, input); \
- } \
- } while (0)
-
-#define UNF_LOWLEVEL_RECEIVE_FC4LS_PKG(ret, fc_port, pkg) \
- do { \
- if (spfc_cm_op_handle.unf_receive_fc4ls_pkg) { \
- ret = spfc_cm_op_handle.unf_receive_fc4ls_pkg( \
- (fc_port), (pkg)); \
- } else { \
- ret = UNF_RETURN_ERROR; \
- } \
- } while (0)
-
-#define UNF_LOWLEVEL_SEND_FC4LS_DONE(ret, lport, pkg) \
- do { \
- if (spfc_cm_op_handle.unf_send_fc4ls_done) { \
- ret = spfc_cm_op_handle.unf_send_fc4ls_done((lport), \
- (pkg)); \
- } else { \
- ret = UNF_RETURN_ERROR; \
- } \
- } while (0)
-
-#define UNF_LOWLEVEL_RECEIVE_BLS_PKG(ret, lport, pkg) \
- do { \
- if (spfc_cm_op_handle.unf_receive_bls_pkg) { \
- ret = spfc_cm_op_handle.unf_receive_bls_pkg((lport), \
- (pkg)); \
- } else { \
- ret = UNF_RETURN_ERROR; \
- } \
- } while (0)
-
-#define UNF_LOWLEVEL_RECEIVE_MARKER_STS(ret, lport, pkg) \
- do { \
- if (spfc_cm_op_handle.unf_receive_marker_status) { \
- ret = spfc_cm_op_handle.unf_receive_marker_status( \
- (lport), (pkg)); \
- } else { \
- ret = UNF_RETURN_ERROR; \
- } \
- } while (0)
-
-#define UNF_LOWLEVEL_RECEIVE_ABTS_MARKER_STS(ret, lport, pkg) \
- do { \
- if (spfc_cm_op_handle.unf_receive_abts_marker_status) { \
- ret = \
- spfc_cm_op_handle.unf_receive_abts_marker_status( \
- (lport), (pkg)); \
- } else { \
- ret = UNF_RETURN_ERROR; \
- } \
- } while (0)
-
-#endif
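The `UNF_LOWLEVEL_*` and `UNF_CM_*` wrappers removed above all follow one pattern: look up a handler in the shared `spfc_cm_op_handle` ops table, call it if registered, otherwise fall back to an error code. A standalone sketch of that pattern — the ops table, handler, and error value here are illustrative stand-ins, not the driver's real definitions:

```c
#include <assert.h>

#define MY_RETURN_ERROR 0xFFFFFFFFu /* stands in for UNF_RETURN_ERROR */

/* Hypothetical ops table mirroring the spfc_cm_op_handle shape: the
 * upper layer registers callbacks, the low level dispatches through them. */
struct cm_ops {
	unsigned int (*receive_pkg)(void *port, void *pkg);
};

static struct cm_ops ops; /* all-NULL until handlers are registered */

/* NULL-checked dispatch, same shape as UNF_LOWLEVEL_RECEIVE_LS_GS_PKG */
#define CM_RECEIVE_PKG(ret, port, pkg)                          \
	do {                                                    \
		if (ops.receive_pkg)                            \
			(ret) = ops.receive_pkg((port), (pkg)); \
		else                                            \
			(ret) = MY_RETURN_ERROR;                \
	} while (0)

static unsigned int demo_handler(void *port, void *pkg)
{
	(void)port;
	(void)pkg;
	return 0; /* RETURN_OK */
}
```

The `do { } while (0)` body lets the macro act as a single statement after an unbraced `if`/`else`, which is why the kernel macros above use the same construction.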
diff --git a/drivers/scsi/spfc/hw/spfc_parent_context.h b/drivers/scsi/spfc/hw/spfc_parent_context.h
deleted file mode 100644
index dc4baffe5c44..000000000000
--- a/drivers/scsi/spfc/hw/spfc_parent_context.h
+++ /dev/null
@@ -1,269 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef SPFC_PARENT_CONTEXT_H
-#define SPFC_PARENT_CONTEXT_H
-
-enum fc_parent_status {
- FC_PARENT_STATUS_INVALID = 0,
- FC_PARENT_STATUS_NORMAL,
- FC_PARENT_STATUS_CLOSING
-};
-
-#define SPFC_PARENT_CONTEXT_KEY_ALIGN_SIZE (48)
-
-#define SPFC_PARENT_CONTEXT_TIMER_SIZE (32) /* 24+2*N,N=timer count */
-
-#define FC_CALC_CID(_xid) \
- (((((_xid) >> 5) & 0x1ff) << 11) | ((((_xid) >> 5) & 0x1ff) << 2) | \
- (((_xid) >> 3) & 0x3))
-
-#define MAX_PKT_SIZE_PER_DISPATCH (fc_child_ctx_ex->per_xmit_data_size)
-
-/* immediate data DIF info definition in parent context */
-struct immi_dif_info {
- union {
- u32 value;
- struct {
- u32 app_tag_ctrl : 3; /* DIF/DIX APP TAG Control */
- u32 ref_tag_mode : 2; /* Bit 0: scenario of the reference tag verify mode */
- /* Bit 1: scenario of the reference tag insert/replace
-							 * mode 0: fixed; 1: incrementing;
- */
- u32 ref_tag_ctrl : 3; /* The DIF/DIX Reference tag control */
- u32 grd_agm_ini_ctrl : 3;
- u32 grd_agm_ctrl : 2; /* Bit 0: DIF/DIX guard verify algorithm control */
- /* Bit 1: DIF/DIX guard replace or insert algorithm control */
- u32 grd_ctrl : 3; /* The DIF/DIX Guard control */
- u32 dif_verify_type : 2; /* verify type */
- /* Check blocks whose reference tag contains 0xFFFF flag */
- u32 difx_ref_esc : 1;
- /* Check blocks whose application tag contains 0xFFFF flag */
- u32 difx_app_esc : 1;
- u32 rsvd : 8;
- u32 sct_size : 1; /* Sector size, 1: 4K; 0: 512 */
- u32 smd_tp : 2;
- u32 difx_en : 1;
- } info;
- } dif_dw3;
-
- u32 cmp_app_tag : 16;
- u32 rep_app_tag : 16;
- /* The ref tag value for verify compare, do not support replace or insert ref tag */
- u32 cmp_ref_tag;
- u32 rep_ref_tag;
-
- u32 rsv1 : 16;
- u32 cmp_app_tag_msk : 16;
-};
-
-/* parent context SW section definition: SW(80B) */
-struct spfc_sw_section {
- u16 scq_num_rcv_cmd;
- u16 scq_num_max_scqn;
-
- struct {
- u32 xid : 13;
- u32 vport : 7;
- u32 csctrl : 8;
- u32 rsvd0 : 4;
- } sw_ctxt_vport_xid;
-
- u32 scq_num_scqn_mask : 12;
- u32 cid : 20; /* ucode init */
-
- u16 conn_id;
- u16 immi_rq_page_size;
-
- u16 immi_taskid_min;
- u16 immi_taskid_max;
-
- union {
- u32 pctxt_val0;
- struct {
- u32 srv_type : 5; /* driver init */
-			u32 srr_support : 2; /* sequence retransmission support flag */
- u32 rsvd1 : 5;
- u32 port_id : 4; /* driver init */
- u32 vlan_id : 16; /* driver init */
- } dw;
- } sw_ctxt_misc;
-
- u32 rsvd2;
- u32 per_xmit_data_size;
-
- /* RW fields */
- u32 cmd_scq_gpa_h;
- u32 cmd_scq_gpa_l;
- u32 e_d_tov_timer_val; /* E_D_TOV timer value: value should be set on ms by driver */
-	u16 mfs_unaligned_bytes; /* unaligned mfs bytes per 64KB dispatch */
- u16 tx_mfs; /* remote port max receive fc payload length */
- u32 xfer_rdy_dis_max_len_remote; /* max data len allowed in xfer_rdy dis scenario */
- u32 xfer_rdy_dis_max_len_local;
-
- union {
- struct {
- u32 priority : 3; /* vlan priority */
- u32 rsvd4 : 2;
- u32 status : 8; /* status of flow */
- u32 cos : 3; /* doorbell cos value */
- u32 oq_cos_data : 3; /* esch oq cos for data */
- u32 oq_cos_cmd : 3; /* esch oq cos for cmd/xferrdy/rsp */
-			/* used for parent context cache consistency judgment, 1: done */
- u32 flush_done : 1;
- u32 work_mode : 2; /* 0:Target, 1:Initiator, 2:Target&Initiator */
- u32 seq_cnt : 1; /* seq_cnt */
- u32 e_d_tov : 1; /* E_D_TOV resolution */
- u32 vlan_enable : 1; /* Vlan enable flag */
- u32 conf_support : 1; /* Response confirm support flag */
- u32 rec_support : 1; /* REC support flag */
- u32 write_xfer_rdy : 1; /* WRITE Xfer_Rdy disable or enable */
- u32 sgl_num : 1; /* Double or single SGL, 1: double; 0: single */
- } dw;
- u32 pctxt_val1;
- } sw_ctxt_config;
- struct immi_dif_info immi_dif_info; /* immediate data dif control info(20B) */
-};
-
-struct spfc_hw_rsvd_queue {
- /* bitmap[0]:255-192 */
- /* bitmap[1]:191-128 */
- /* bitmap[2]:127-64 */
- /* bitmap[3]:63-0 */
- u64 seq_id_bitmap[4];
- struct {
- u64 last_req_seq_id : 8;
- u64 xid : 20;
- u64 rsvd0 : 36;
- } wd0;
-};
-
-struct spfc_sq_qinfo {
- u64 rsvd_0 : 10;
- u64 pmsn_type : 1; /* 0: get pmsn from queue header; 1: get pmsn from ucode */
- u64 rsvd_1 : 4;
- u64 cur_wqe_o : 1; /* should be opposite from loop_o */
- u64 rsvd_2 : 48;
-
- u64 cur_sqe_gpa;
- u64 pmsn_gpa; /* sq's queue header gpa */
-
- u64 sqe_dmaattr_idx : 6;
- u64 sq_so_ro : 2;
- u64 rsvd_3 : 2;
- u64 ring : 1; /* 0: link; 1: ring */
- u64 loop_o : 1; /* init to be the first round o-bit */
- u64 rsvd_4 : 4;
- u64 zerocopy_dmaattr_idx : 6;
- u64 zerocopy_so_ro : 2;
- u64 parity : 8;
- u64 r : 1;
- u64 s : 1;
- u64 enable_256 : 1;
- u64 rsvd_5 : 23;
- u64 pcie_template : 6;
-};
-
-struct spfc_cq_qinfo {
- u64 pcie_template_hi : 3;
- u64 parity_2 : 1;
- u64 cur_cqe_gpa : 60;
-
- u64 pi : 15;
- u64 pi_o : 1;
- u64 ci : 15;
- u64 ci_o : 1;
-	u64 c_eqn_msi_x : 10; /* if init_mode = 2, this is msi/msi-x; otherwise the low 5 bits are c_eqn */
- u64 parity_1 : 1;
- u64 ci_type : 1; /* 0: get ci from queue header; 1: get ci from ucode */
- u64 cq_depth : 3; /* valid when ring = 1 */
- u64 armq : 1; /* 0: IDLE state; 1: NEXT state */
- u64 cur_cqe_cnt : 8;
- u64 cqe_max_cnt : 8;
-
- u64 cqe_dmaattr_idx : 6;
- u64 cq_so_ro : 2;
- u64 init_mode : 2; /* 1: armQ; 2: msi/msi-x; others: rsvd */
-	u64 next_o : 1; /* next page valid o-bit */
- u64 loop_o : 1; /* init to be the first round o-bit */
- u64 next_cq_wqe_page_gpa : 52;
-
- u64 pcie_template_lo : 3;
- u64 parity_0 : 1;
- u64 ci_gpa : 60; /* cq's queue header gpa */
-};
-
-struct spfc_scq_qinfo {
- union {
- struct {
- u64 scq_n : 20; /* scq number */
- u64 sq_min_preld_cache_num : 4;
- u64 sq_th0_preld_cache_num : 5;
- u64 sq_th1_preld_cache_num : 5;
- u64 sq_th2_preld_cache_num : 5;
- u64 rq_min_preld_cache_num : 4;
- u64 rq_th0_preld_cache_num : 5;
- u64 rq_th1_preld_cache_num : 5;
- u64 rq_th2_preld_cache_num : 5;
- u64 parity : 6;
- } info;
-
- u64 pctxt_val1;
- } hw_scqc_config;
-};
-
-struct spfc_srq_qinfo {
- u64 parity : 4;
- u64 srqc_gpa : 60;
-};
-
-/* here is the layout of service type 12/13 */
-struct spfc_parent_context {
- u8 key[SPFC_PARENT_CONTEXT_KEY_ALIGN_SIZE];
- struct spfc_scq_qinfo resp_scq_qinfo;
- struct spfc_srq_qinfo imm_srq_info;
- struct spfc_sq_qinfo sq_qinfo;
- u8 timer_section[SPFC_PARENT_CONTEXT_TIMER_SIZE];
- struct spfc_hw_rsvd_queue hw_rsvdq;
- struct spfc_srq_qinfo els_srq_info;
- struct spfc_sw_section sw_section;
-};
-
-/* here is the layout of service type 13 */
-struct spfc_ssq_parent_context {
- u8 rsvd0[64];
- struct spfc_sq_qinfo sq1_qinfo;
- u8 rsvd1[32];
- struct spfc_sq_qinfo sq2_qinfo;
- u8 rsvd2[32];
- struct spfc_sq_qinfo sq3_qinfo;
- struct spfc_scq_qinfo sq_pretchinfo;
- u8 rsvd3[24];
-};
-
-/* FC Key Section */
-struct spfc_fc_key_section {
- u32 xid_h : 4;
- u32 key_size : 2;
- u32 rsvd1 : 1;
- u32 srv_type : 5;
- u32 csize : 2;
- u32 rsvd0 : 17;
- u32 v : 1;
-
- u32 tag_fp_h : 4;
- u32 rsvd2 : 12;
- u32 xid_l : 16;
-
- u16 tag_fp_l;
- u8 smac[6]; /* Source MAC */
- u8 dmac[6]; /* Dest MAC */
- u8 sid[3]; /* Source FC ID */
- u8 did[3]; /* Dest FC ID */
- u8 svlan[4]; /* Svlan */
- u8 cvlan[4]; /* Cvlan */
-
- u32 next_ptr_h;
-};
-
-#endif
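The parent-context structures removed above overlay named bitfields on raw words (e.g. `sw_ctxt_misc` exposes both `pctxt_val0` and a `dw` bitfield view of the same 32 bits). A minimal sketch of that union pattern, with illustrative field widths; note that bitfield allocation order is compiler- and endianness-dependent, which is one reason the driver byte-swaps whole context words before handing them to hardware:

```c
#include <assert.h>
#include <stdint.h>

/* One word viewed two ways: raw for byte-swapping/DMA, bitfields for
 * readable access. Widths mimic sw_ctxt_misc but are illustrative only. */
union ctxt_misc {
	uint32_t pctxt_val0; /* raw view */
	struct {
		uint32_t srv_type : 5;
		uint32_t srr_support : 2;
		uint32_t rsvd : 5;
		uint32_t port_id : 4;
		uint32_t vlan_id : 16;
	} dw; /* field view; widths sum to exactly 32 bits */
};
```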
diff --git a/drivers/scsi/spfc/hw/spfc_queue.c b/drivers/scsi/spfc/hw/spfc_queue.c
deleted file mode 100644
index fa4295832da7..000000000000
--- a/drivers/scsi/spfc/hw/spfc_queue.c
+++ /dev/null
@@ -1,4852 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "spfc_queue.h"
-#include "unf_log.h"
-#include "unf_lport.h"
-#include "spfc_module.h"
-#include "spfc_utils.h"
-#include "spfc_service.h"
-#include "spfc_chipitf.h"
-#include "spfc_parent_context.h"
-#include "sphw_hw.h"
-#include "sphw_crm.h"
-
-#define SPFC_UCODE_CMD_MODIFY_QUEUE_CONTEXT 0
-
-#define SPFC_DONE_MASK (0x00000001)
-#define SPFC_OWNER_MASK (0x80000000)
-
-#define SPFC_SQ_LINK_PRE (1 << 2)
-
-#define SPFC_SQ_HEADER_ADDR_ALIGN_SIZE (64)
-#define SPFC_SQ_HEADER_ADDR_ALIGN_SIZE_MASK (SPFC_SQ_HEADER_ADDR_ALIGN_SIZE - 1)
-
-#define SPFC_ADDR_64_ALIGN(addr) \
- (((addr) + (SPFC_SQ_HEADER_ADDR_ALIGN_SIZE_MASK)) & \
- ~(SPFC_SQ_HEADER_ADDR_ALIGN_SIZE_MASK))
-
-u32 spfc_get_parity_value(u64 *src_data, u32 row, u32 col)
-{
- u32 i = 0;
- u32 j = 0;
- u32 offset = 0;
- u32 group = 0;
- u32 bit_offset = 0;
- u32 bit_val = 0;
- u32 tmp_val = 0;
- u32 dest_data = 0;
-
- for (i = 0; i < row; i++) {
- for (j = 0; j < col; j++) {
- offset = (row * j + i);
- group = offset / (sizeof(src_data[ARRAY_INDEX_0]) * UNF_BITS_PER_BYTE);
- bit_offset = offset % (sizeof(src_data[ARRAY_INDEX_0]) * UNF_BITS_PER_BYTE);
- tmp_val = (src_data[group] >> bit_offset) & SPFC_PARITY_MASK;
-
- if (j == 0) {
- bit_val = tmp_val;
- continue;
- }
-
- bit_val ^= tmp_val;
- }
-
- bit_val = (~bit_val) & SPFC_PARITY_MASK;
-
- dest_data |= (bit_val << i);
- }
-
- return dest_data;
-}
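`spfc_get_parity_value()` above treats the source bits as a `row` x `col` matrix stored column-major (`offset = row * j + i`), XORs each row's bits across its columns, and inverts the result so the stored parity is odd. A compact standalone equivalent of the same scheme (the function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Odd column parity: result bit i = NOT (XOR of the bits at offsets
 * row*j + i for j in [0, col)), bits read LSB-first from src[]. */
static uint32_t parity_value(const uint64_t *src, uint32_t row, uint32_t col)
{
	uint32_t dest = 0;

	for (uint32_t i = 0; i < row; i++) {
		uint32_t bit = 0;

		for (uint32_t j = 0; j < col; j++) {
			uint32_t off = row * j + i;

			bit ^= (uint32_t)(src[off / 64] >> (off % 64)) & 1u;
		}
		dest |= ((~bit) & 1u) << i;
	}
	return dest;
}
```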
-
-static void spfc_update_producer_info(u16 q_depth, u16 *pus_pi, u16 *pus_owner)
-{
- u16 current_pi = 0;
- u16 next_pi = 0;
- u16 owner = 0;
-
- current_pi = *pus_pi;
- next_pi = current_pi + 1;
-
- if (next_pi < q_depth) {
- *pus_pi = next_pi;
- } else {
- /* PI reversal */
- *pus_pi = 0;
-
- /* obit reversal */
- owner = *pus_owner;
- *pus_owner = !owner;
- }
-}
-
-static void spfc_update_consumer_info(u16 q_depth, u16 *pus_ci, u16 *pus_owner)
-{
- u16 current_ci = 0;
- u16 next_ci = 0;
- u16 owner = 0;
-
- current_ci = *pus_ci;
- next_ci = current_ci + 1;
-
- if (next_ci < q_depth) {
- *pus_ci = next_ci;
- } else {
- /* CI reversal */
- *pus_ci = 0;
-
- /* obit reversal */
- owner = *pus_owner;
- *pus_owner = !owner;
- }
-}
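The producer and consumer helpers above share one idiom: advance the ring index, and on wraparound flip the owner (O) bit, so hardware and driver can distinguish entries written this lap from stale entries left over from the previous lap without any explicit count. A minimal sketch of the same idiom:

```c
#include <assert.h>
#include <stdint.h>

/* Advance a ring index of the given depth; toggle the owner bit on wrap. */
static void ring_advance(uint16_t depth, uint16_t *idx, uint16_t *owner)
{
	uint16_t next = *idx + 1;

	if (next < depth) {
		*idx = next;
	} else {
		*idx = 0;           /* index wraps to the start */
		*owner = !(*owner); /* O-bit flips once per lap */
	}
}
```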
-
-static inline void spfc_update_cq_header(struct ci_record *ci_record, u16 ci,
- u16 owner)
-{
- u32 size = 0;
- struct ci_record record = {0};
-
- size = sizeof(struct ci_record);
- memcpy(&record, ci_record, size);
- spfc_big_to_cpu64(&record, size);
- record.cmsn = ci + (u16)(owner << SPFC_CQ_HEADER_OWNER_SHIFT);
- record.dump_cmsn = record.cmsn;
- spfc_cpu_to_big64(&record, size);
-
- wmb();
- memcpy(ci_record, &record, size);
-}
-
-static void spfc_update_srq_header(struct db_record *pmsn_record, u16 pmsn)
-{
- u32 size = 0;
- struct db_record record = {0};
-
- size = sizeof(struct db_record);
- memcpy(&record, pmsn_record, size);
- spfc_big_to_cpu64(&record, size);
- record.pmsn = pmsn;
- record.dump_pmsn = record.pmsn;
- spfc_cpu_to_big64(&record, sizeof(struct db_record));
-
- wmb();
- memcpy(pmsn_record, &record, size);
-}
-
-static void spfc_set_srq_wqe_owner_be(struct spfc_wqe_ctrl *sqe_ctrl_in_wp,
- u32 owner)
-{
- struct spfc_wqe_ctrl_ch wqe_ctrl_ch;
-
- mb();
-
- wqe_ctrl_ch.ctrl_ch_val = be32_to_cpu(sqe_ctrl_in_wp->ch.ctrl_ch_val);
- wqe_ctrl_ch.wd0.owner = owner;
- sqe_ctrl_in_wp->ch.ctrl_ch_val = cpu_to_be32(wqe_ctrl_ch.ctrl_ch_val);
-
- mb();
-}
-
-static inline void spfc_set_sq_wqe_owner_be(void *sqe)
-{
- u32 *sqe_dw = (u32 *)sqe;
- u32 *e_sqe_dw = (u32 *)((u8 *)sqe + SPFC_EXTEND_WQE_OFFSET);
-
- /* Ensure that the write of WQE is complete */
- mb();
- e_sqe_dw[SPFC_SQE_SECOND_OBIT_DW_POS] |= SPFC_SQE_OBIT_SET_MASK_BE;
- e_sqe_dw[SPFC_SQE_FIRST_OBIT_DW_POS] |= SPFC_SQE_OBIT_SET_MASK_BE;
- sqe_dw[SPFC_SQE_SECOND_OBIT_DW_POS] |= SPFC_SQE_OBIT_SET_MASK_BE;
- sqe_dw[SPFC_SQE_FIRST_OBIT_DW_POS] |= SPFC_SQE_OBIT_SET_MASK_BE;
- mb();
-}
-
-void spfc_clear_sq_wqe_owner_be(struct spfc_sqe *sqe)
-{
- u32 *sqe_dw = (u32 *)sqe;
- u32 *e_sqe_dw = (u32 *)((u8 *)sqe + SPFC_EXTEND_WQE_OFFSET);
-
- mb();
- sqe_dw[SPFC_SQE_SECOND_OBIT_DW_POS] &= SPFC_SQE_OBIT_CLEAR_MASK_BE;
- mb();
- sqe_dw[SPFC_SQE_FIRST_OBIT_DW_POS] &= SPFC_SQE_OBIT_CLEAR_MASK_BE;
- e_sqe_dw[SPFC_SQE_SECOND_OBIT_DW_POS] &= SPFC_SQE_OBIT_CLEAR_MASK_BE;
- e_sqe_dw[SPFC_SQE_FIRST_OBIT_DW_POS] &= SPFC_SQE_OBIT_CLEAR_MASK_BE;
-}
-
-static void spfc_set_direct_wqe_owner_be(void *sqe, u16 owner)
-{
- if (owner)
- spfc_set_sq_wqe_owner_be(sqe);
- else
- spfc_clear_sq_wqe_owner_be(sqe);
-}
-
-static void spfc_set_srq_link_wqe_owner_be(struct spfc_linkwqe *link_wqe,
- u32 owner, u16 pmsn)
-{
- struct spfc_linkwqe local_lw;
-
- mb();
- local_lw.val_wd1 = be32_to_cpu(link_wqe->val_wd1);
- local_lw.wd1.msn = pmsn;
- local_lw.wd1.dump_msn = (local_lw.wd1.msn & SPFC_LOCAL_LW_WD1_DUMP_MSN_MASK);
- link_wqe->val_wd1 = cpu_to_be32(local_lw.val_wd1);
-
- local_lw.val_wd0 = be32_to_cpu(link_wqe->val_wd0);
- local_lw.wd0.o = owner;
- link_wqe->val_wd0 = cpu_to_be32(local_lw.val_wd0);
- mb();
-}
-
-static inline bool spfc_is_scq_link_wqe(struct spfc_scq_info *scq_info)
-{
- u16 custom_scqe_num = 0;
-
- custom_scqe_num = scq_info->ci + 1;
-
- if ((custom_scqe_num % scq_info->wqe_num_per_buf == 0) ||
- scq_info->valid_wqe_num == custom_scqe_num)
- return true;
- else
- return false;
-}
-
-static struct spfc_wqe_page *
-spfc_add_tail_wqe_page(struct spfc_parent_ssq_info *ssq)
-{
- struct spfc_hba_info *hba = NULL;
- struct spfc_wqe_page *esgl = NULL;
- struct list_head *free_list_head = NULL;
- ulong flag = 0;
-
- hba = (struct spfc_hba_info *)ssq->hba;
-
- spin_lock_irqsave(&hba->sq_wpg_pool.wpg_pool_lock, flag);
-
- /* Get a WqePage from hba->sq_wpg_pool.list_free_wpg_pool, and add to
- * sq.list_SqTailWqePage
- */
- if (!list_empty(&hba->sq_wpg_pool.list_free_wpg_pool)) {
- free_list_head = UNF_OS_LIST_NEXT(&hba->sq_wpg_pool.list_free_wpg_pool);
- list_del(free_list_head);
- list_add_tail(free_list_head, &ssq->list_linked_list_sq);
- esgl = list_entry(free_list_head, struct spfc_wqe_page, entry_wpg);
-
- /* WqePage Pool counter */
- atomic_inc(&hba->sq_wpg_pool.wpg_in_use);
- } else {
- esgl = NULL;
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]SQ pool is empty when SQ(0x%x) try to get wqe page",
- ssq->sqn);
- SPFC_HBA_STAT(hba, SPFC_STAT_SQ_POOL_EMPTY);
- }
-
- spin_unlock_irqrestore(&hba->sq_wpg_pool.wpg_pool_lock, flag);
-
- return esgl;
-}
-
-static inline struct spfc_sqe *spfc_get_wqe_page_entry(struct spfc_wqe_page *wpg,
- u32 wqe_offset)
-{
- struct spfc_sqe *sqe_wpg = NULL;
-
- sqe_wpg = (struct spfc_sqe *)(wpg->wpg_addr);
- sqe_wpg += wqe_offset;
-
- return sqe_wpg;
-}
-
-static void spfc_free_head_wqe_page(struct spfc_parent_ssq_info *ssq)
-{
- struct spfc_hba_info *hba = NULL;
- struct spfc_wqe_page *sq_wpg = NULL;
- struct list_head *entry_head_wqe_page = NULL;
- ulong flag = 0;
-
- atomic_dec(&ssq->wqe_page_cnt);
-
- hba = (struct spfc_hba_info *)ssq->hba;
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "Port(0x%x) free wqe page nowpagecnt:%d",
- hba->port_cfg.port_id,
- atomic_read(&ssq->wqe_page_cnt));
- sq_wpg = SPFC_GET_SQ_HEAD(ssq);
-
- memset((void *)sq_wpg->wpg_addr, WQE_MARKER_0, hba->sq_wpg_pool.wpg_size);
-
- spin_lock_irqsave(&hba->sq_wpg_pool.wpg_pool_lock, flag);
- entry_head_wqe_page = &sq_wpg->entry_wpg;
- list_del(entry_head_wqe_page);
- list_add_tail(entry_head_wqe_page, &hba->sq_wpg_pool.list_free_wpg_pool);
-
- /* WqePage Pool counter */
- atomic_dec(&hba->sq_wpg_pool.wpg_in_use);
- spin_unlock_irqrestore(&hba->sq_wpg_pool.wpg_pool_lock, flag);
-}
-
-static void spfc_free_link_list_wpg(struct spfc_parent_ssq_info *ssq)
-{
- ulong flag = 0;
- struct spfc_hba_info *hba = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- struct list_head *entry_head_wqe_page = NULL;
- struct spfc_wqe_page *sq_wpg = NULL;
-
- hba = (struct spfc_hba_info *)ssq->hba;
-
- list_for_each_safe(node, next_node, &ssq->list_linked_list_sq) {
- sq_wpg = list_entry(node, struct spfc_wqe_page, entry_wpg);
- memset((void *)sq_wpg->wpg_addr, WQE_MARKER_0, hba->sq_wpg_pool.wpg_size);
-
- spin_lock_irqsave(&hba->sq_wpg_pool.wpg_pool_lock, flag);
- entry_head_wqe_page = &sq_wpg->entry_wpg;
- list_del(entry_head_wqe_page);
- list_add_tail(entry_head_wqe_page, &hba->sq_wpg_pool.list_free_wpg_pool);
-
- /* WqePage Pool counter */
- atomic_dec(&ssq->wqe_page_cnt);
- atomic_dec(&hba->sq_wpg_pool.wpg_in_use);
-
- spin_unlock_irqrestore(&hba->sq_wpg_pool.wpg_pool_lock, flag);
- }
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "[info]Port(0x%x) RPort(0x%x) Sq(0x%x) link list destroyed, Sq.WqePageCnt=0x%x, SqWpgPool.wpg_in_use=0x%x",
- hba->port_cfg.port_id, ssq->sqn, ssq->context_id,
- atomic_read(&ssq->wqe_page_cnt), atomic_read(&hba->sq_wpg_pool.wpg_in_use));
-}
-
-struct spfc_wqe_page *
-spfc_add_one_wqe_page(struct spfc_parent_ssq_info *ssq)
-{
- u32 wqe_inx = 0;
- struct spfc_wqe_page *wqe_page = NULL;
- struct spfc_sqe *sqe_in_wp = NULL;
- struct spfc_linkwqe *link_wqe_in_wpg = NULL;
- struct spfc_linkwqe link_wqe;
-
- /* Add a new Wqe Page */
- wqe_page = spfc_add_tail_wqe_page(ssq);
-
- if (!wqe_page)
- return NULL;
-
- for (wqe_inx = 0; wqe_inx <= ssq->wqe_num_per_buf; wqe_inx++) {
- sqe_in_wp = spfc_get_wqe_page_entry(wqe_page, wqe_inx);
- sqe_in_wp->ctrl_sl.ch.ctrl_ch_val = 0;
- sqe_in_wp->ectrl_sl.ch.ctrl_ch_val = 0;
- }
-
- /* Set last WqePage as linkwqe */
- link_wqe_in_wpg = (struct spfc_linkwqe *)spfc_get_wqe_page_entry(wqe_page,
- ssq->wqe_num_per_buf);
- link_wqe.val_wd0 = 0;
- link_wqe.val_wd1 = 0;
- link_wqe.next_page_addr_hi = (ssq->queue_style == SPFC_QUEUE_RING_STYLE)
- ? SPFC_MSD(wqe_page->wpg_phy_addr)
- : 0;
- link_wqe.next_page_addr_lo = (ssq->queue_style == SPFC_QUEUE_RING_STYLE)
- ? SPFC_LSD(wqe_page->wpg_phy_addr)
- : 0;
- link_wqe.wd0.wf = CQM_WQE_WF_LINK;
- link_wqe.wd0.ctrlsl = CQM_LINK_WQE_CTRLSL_VALUE;
- link_wqe.wd0.o = !(ssq->last_pi_owner);
- link_wqe.wd1.lp = (ssq->queue_style == SPFC_QUEUE_RING_STYLE)
- ? CQM_LINK_WQE_LP_VALID
- : CQM_LINK_WQE_LP_INVALID;
- spfc_cpu_to_big32(&link_wqe, sizeof(struct spfc_linkwqe));
- memcpy(link_wqe_in_wpg, &link_wqe, sizeof(struct spfc_linkwqe));
- memcpy((u8 *)link_wqe_in_wpg + SPFC_EXTEND_WQE_OFFSET,
- &link_wqe, sizeof(struct spfc_linkwqe));
-
- return wqe_page;
-}
-
-static inline struct spfc_scqe_type *
-spfc_get_scq_entry(struct spfc_scq_info *scq_info)
-{
- u32 buf_id = 0;
- u16 buf_offset = 0;
- u16 ci = 0;
- struct cqm_buf_list *buf = NULL;
-
- FC_CHECK_RETURN_VALUE(scq_info, NULL);
-
- ci = scq_info->ci;
- buf_id = ci / scq_info->wqe_num_per_buf;
- buf = &scq_info->cqm_scq_info->q_room_buf_1.buf_list[buf_id];
- buf_offset = (u16)(ci % scq_info->wqe_num_per_buf);
-
- return (struct spfc_scqe_type *)(buf->va) + buf_offset;
-}
-
-static inline bool spfc_is_cqe_done(u32 *done, u32 *owner, u16 driver_owner)
-{
- return ((((u16)(!!(*done & SPFC_DONE_MASK)) == driver_owner) &&
- ((u16)(!!(*owner & SPFC_OWNER_MASK)) == driver_owner)) ? true : false);
-}
-
-u32 spfc_process_scq_cqe_entity(ulong info, u32 proc_cnt)
-{
- u32 ret = UNF_RETURN_ERROR;
- u32 index = 0;
- struct wq_header *queue_header = NULL;
- struct spfc_scqe_type *scqe = NULL;
- struct spfc_scqe_type tmp_scqe;
- struct spfc_scq_info *scq_info = (struct spfc_scq_info *)info;
-
- FC_CHECK_RETURN_VALUE(scq_info, ret);
- SPFC_FUNCTION_ENTER;
-
- queue_header = (struct wq_header *)(void *)(scq_info->cqm_scq_info->q_header_vaddr);
-
- for (index = 0; index < proc_cnt;) {
- /* If linked wqe, then update CI */
- if (spfc_is_scq_link_wqe(scq_info)) {
- spfc_update_consumer_info(scq_info->valid_wqe_num,
- &scq_info->ci,
- &scq_info->ci_owner);
- spfc_update_cq_header(&queue_header->ci_record,
- scq_info->ci, scq_info->ci_owner);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT,
- UNF_INFO,
- "[info]Current wqe is a linked wqe");
- continue;
- }
-
- /* Get SCQE and then check obit & donebit whether been set */
- scqe = spfc_get_scq_entry(scq_info);
- if (unlikely(!scqe)) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Scqe is NULL");
- break;
- }
-
- if (!spfc_is_cqe_done((u32 *)(void *)&scqe->wd0,
- (u32 *)(void *)&scqe->ch.wd0,
- scq_info->ci_owner)) {
- atomic_set(&scq_info->flush_stat, SPFC_QUEUE_FLUSH_DONE);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT,
- UNF_INFO, "[info]Now has no valid scqe");
- break;
- }
-
- /* rmb & do memory copy */
- rmb();
- memcpy(&tmp_scqe, scqe, sizeof(struct spfc_scqe_type));
- /* process SCQ entry */
- ret = spfc_rcv_scq_entry_from_scq(scq_info->hba, (void *)&tmp_scqe,
- scq_info->queue_id);
- if (unlikely(ret != RETURN_OK)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]QueueId(0x%x) scqn(0x%x) scqe process error at CI(0x%x)",
- scq_info->queue_id, scq_info->scqn, scq_info->ci);
- }
-
- /* Update Driver's CI & Obit */
- spfc_update_consumer_info(scq_info->valid_wqe_num,
- &scq_info->ci, &scq_info->ci_owner);
- spfc_update_cq_header(&queue_header->ci_record, scq_info->ci,
- scq_info->ci_owner);
- index++;
- }
-
- /* Re-schedule again if necessary */
- if (index == proc_cnt)
- tasklet_schedule(&scq_info->tasklet);
-
- SPFC_FUNCTION_RETURN;
-
- return index;
-}
-
-void spfc_set_scq_irg_cfg(struct spfc_hba_info *hba, u32 mode, u16 msix_index)
-{
-#define SPFC_POLLING_MODE_ITERRUPT_PENDING_CNT 5
-#define SPFC_POLLING_MODE_ITERRUPT_COALESC_TIMER_CFG 10
- u8 pending_limt = 0;
- u8 coalesc_timer_cfg = 0;
-
- struct interrupt_info info = {0};
-
- if (mode != SPFC_SCQ_INTR_LOW_LATENCY_MODE) {
- pending_limt = SPFC_POLLING_MODE_ITERRUPT_PENDING_CNT;
- coalesc_timer_cfg =
- SPFC_POLLING_MODE_ITERRUPT_COALESC_TIMER_CFG;
- }
-
- memset(&info, 0, sizeof(info));
- info.interrupt_coalesc_set = 1;
- info.lli_set = 0;
- info.pending_limt = pending_limt;
- info.coalesc_timer_cfg = coalesc_timer_cfg;
- info.resend_timer_cfg = 0;
- info.msix_index = msix_index;
-
- sphw_set_interrupt_cfg(hba->dev_handle, info, SPHW_CHANNEL_FC);
-}
-
-void spfc_process_scq_cqe(ulong info)
-{
- struct spfc_scq_info *scq_info = (struct spfc_scq_info *)info;
-
- FC_CHECK_RETURN_VOID(scq_info);
-
- spfc_process_scq_cqe_entity(info, SPFC_CQE_MAX_PROCESS_NUM_PER_INTR);
-}
-
-irqreturn_t spfc_scq_irq(int irq, void *scq_info)
-{
- SPFC_FUNCTION_ENTER;
-
- FC_CHECK_RETURN_VALUE(scq_info, IRQ_NONE);
-
- tasklet_schedule(&((struct spfc_scq_info *)scq_info)->tasklet);
-
- SPFC_FUNCTION_RETURN;
-
- return IRQ_HANDLED;
-}
-
-static u32 spfc_alloc_scq_int(struct spfc_scq_info *scq_info)
-{
- int ret = UNF_RETURN_ERROR_S32;
- u16 act_num = 0;
- struct irq_info irq_info;
- struct spfc_hba_info *hba = NULL;
-
- FC_CHECK_RETURN_VALUE(scq_info, UNF_RETURN_ERROR);
-
- /* 1. Alloc & check SCQ IRQ */
- hba = (struct spfc_hba_info *)(scq_info->hba);
- ret = sphw_alloc_irqs(hba->dev_handle, SERVICE_T_FC, SPFC_INT_NUM_PER_QUEUE,
- &irq_info, &act_num);
- if (ret != RETURN_OK || act_num != SPFC_INT_NUM_PER_QUEUE) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Allocate scq irq failed, return %d", ret);
- return UNF_RETURN_ERROR;
- }
-
- if (irq_info.msix_entry_idx >= SPFC_SCQ_INT_ID_MAX) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]SCQ irq id exceed %d, msix_entry_idx %d",
- SPFC_SCQ_INT_ID_MAX, irq_info.msix_entry_idx);
- sphw_free_irq(hba->dev_handle, SERVICE_T_FC, irq_info.irq_id);
- return UNF_RETURN_ERROR;
- }
-
- scq_info->irq_id = (u32)(irq_info.irq_id);
- scq_info->msix_entry_idx = (u16)(irq_info.msix_entry_idx);
-
- snprintf(scq_info->irq_name, SPFC_IRQ_NAME_MAX, "fc_scq%u_%x_msix%u",
- scq_info->queue_id, hba->port_cfg.port_id, scq_info->msix_entry_idx);
-
- /* 2. SCQ IRQ tasklet init */
- tasklet_init(&scq_info->tasklet, spfc_process_scq_cqe, (ulong)(uintptr_t)scq_info);
-
- /* 3. Request IRQ for SCQ */
- ret = request_irq(scq_info->irq_id, spfc_scq_irq, 0, scq_info->irq_name, scq_info);
-
- sphw_set_msix_state(hba->dev_handle, scq_info->msix_entry_idx, SPHW_MSIX_ENABLE);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Request SCQ irq failed, SCQ Index = %u, return %d",
- scq_info->queue_id, ret);
- sphw_free_irq(hba->dev_handle, SERVICE_T_FC, scq_info->irq_id);
- memset(scq_info->irq_name, 0, SPFC_IRQ_NAME_MAX);
- scq_info->irq_id = 0;
- scq_info->msix_entry_idx = 0;
- return UNF_RETURN_ERROR;
- }
-
- return RETURN_OK;
-}
-
-static void spfc_free_scq_int(struct spfc_scq_info *scq_info)
-{
- struct spfc_hba_info *hba = NULL;
-
- FC_CHECK_RETURN_VOID(scq_info);
-
- hba = (struct spfc_hba_info *)(scq_info->hba);
- sphw_set_msix_state(hba->dev_handle, scq_info->msix_entry_idx, SPHW_MSIX_DISABLE);
- free_irq(scq_info->irq_id, scq_info);
- tasklet_kill(&scq_info->tasklet);
- sphw_free_irq(hba->dev_handle, SERVICE_T_FC, scq_info->irq_id);
- memset(scq_info->irq_name, 0, SPFC_IRQ_NAME_MAX);
- scq_info->irq_id = 0;
- scq_info->msix_entry_idx = 0;
-}
-
-static void spfc_init_scq_info(struct spfc_hba_info *hba, struct cqm_queue *cqm_scq,
- u32 queue_id, struct spfc_scq_info **scq_info)
-{
- FC_CHECK_RETURN_VOID(hba);
- FC_CHECK_RETURN_VOID(cqm_scq);
- FC_CHECK_RETURN_VOID(scq_info);
-
- *scq_info = &hba->scq_info[queue_id];
- (*scq_info)->queue_id = queue_id;
- (*scq_info)->scqn = cqm_scq->index;
- (*scq_info)->hba = (void *)hba;
-
- (*scq_info)->cqm_scq_info = cqm_scq;
- (*scq_info)->wqe_num_per_buf =
- cqm_scq->q_room_buf_1.buf_size / SPFC_SCQE_SIZE;
- (*scq_info)->wqe_size = SPFC_SCQE_SIZE;
- (*scq_info)->valid_wqe_num = (SPFC_SCQ_IS_STS(queue_id) ? SPFC_STS_SCQ_DEPTH
- : SPFC_CMD_SCQ_DEPTH);
- (*scq_info)->scqc_cq_depth = (SPFC_SCQ_IS_STS(queue_id) ? SPFC_STS_SCQC_CQ_DEPTH
- : SPFC_CMD_SCQC_CQ_DEPTH);
- (*scq_info)->scqc_ci_type = SPFC_STS_SCQ_CI_TYPE;
- (*scq_info)->ci = 0;
- (*scq_info)->ci_owner = 1;
-}
-
-static void spfc_init_scq_header(struct wq_header *queue_header)
-{
- FC_CHECK_RETURN_VOID(queue_header);
-
- memset(queue_header, 0, sizeof(struct wq_header));
-
- /* Obit default is 1 */
- queue_header->db_record.pmsn = 1 << UNF_SHIFT_15;
- queue_header->db_record.dump_pmsn = queue_header->db_record.pmsn;
- queue_header->ci_record.cmsn = 1 << UNF_SHIFT_15;
- queue_header->ci_record.dump_cmsn = queue_header->ci_record.cmsn;
-
- /* Big endian convert */
- spfc_cpu_to_big64((void *)queue_header, sizeof(struct wq_header));
-}
-
-static void spfc_cfg_scq_ctx(struct spfc_scq_info *scq_info,
- struct spfc_cq_qinfo *scq_ctx)
-{
- struct cqm_queue *cqm_scq_info = NULL;
- struct spfc_queue_info_bus queue_bus;
- u64 parity = 0;
-
- FC_CHECK_RETURN_VOID(scq_info);
-
- cqm_scq_info = scq_info->cqm_scq_info;
-
- scq_ctx->pcie_template_hi = 0;
- scq_ctx->cur_cqe_gpa = cqm_scq_info->q_room_buf_1.buf_list->pa >> SPFC_CQE_GPA_SHIFT;
- scq_ctx->pi = 0;
- scq_ctx->pi_o = 1;
- scq_ctx->ci = scq_info->ci;
- scq_ctx->ci_o = scq_info->ci_owner;
- scq_ctx->c_eqn_msi_x = scq_info->msix_entry_idx;
- scq_ctx->ci_type = scq_info->scqc_ci_type;
- scq_ctx->cq_depth = scq_info->scqc_cq_depth;
- scq_ctx->armq = SPFC_ARMQ_IDLE;
- scq_ctx->cur_cqe_cnt = 0;
- scq_ctx->cqe_max_cnt = 0;
- scq_ctx->cqe_dmaattr_idx = 0;
- scq_ctx->cq_so_ro = 0;
- scq_ctx->init_mode = SPFC_CQ_INT_MODE;
- scq_ctx->next_o = 1;
- scq_ctx->loop_o = 1;
- scq_ctx->next_cq_wqe_page_gpa = cqm_scq_info->q_room_buf_1.buf_list[ARRAY_INDEX_1].pa >>
- SPFC_NEXT_CQE_GPA_SHIFT;
- scq_ctx->pcie_template_lo = 0;
-
- scq_ctx->ci_gpa = (cqm_scq_info->q_header_paddr + offsetof(struct wq_header, ci_record)) >>
- SPFC_CQE_GPA_SHIFT;
-
- memset(&queue_bus, 0, sizeof(struct spfc_queue_info_bus));
- queue_bus.bus[ARRAY_INDEX_0] |= ((u64)(scq_info->scqn & SPFC_SCQN_MASK)); /* bits 20 */
- queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(scq_ctx->pcie_template_lo)) << UNF_SHIFT_20);
- queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(scq_ctx->ci_gpa & SPFC_SCQ_CTX_CI_GPA_MASK)) <<
- UNF_SHIFT_23); /* bits 28 */
- queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(scq_ctx->cqe_dmaattr_idx)) << UNF_SHIFT_51);
- queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(scq_ctx->cq_so_ro)) << UNF_SHIFT_57); /* bits 2 */
- queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(scq_ctx->init_mode)) << UNF_SHIFT_59); /* bits 2 */
- queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(scq_ctx->c_eqn_msi_x &
- SPFC_SCQ_CTX_C_EQN_MSI_X_MASK)) << UNF_SHIFT_61);
- queue_bus.bus[ARRAY_INDEX_1] |= ((u64)(scq_ctx->c_eqn_msi_x >> UNF_SHIFT_3)); /* bits 7 */
- queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(scq_ctx->ci_type)) << UNF_SHIFT_7); /* bits 1 */
- queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(scq_ctx->cq_depth)) << UNF_SHIFT_8); /* bits 3 */
- queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(scq_ctx->cqe_max_cnt)) << UNF_SHIFT_11);
- queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(scq_ctx->pcie_template_hi)) << UNF_SHIFT_19);
-
- parity = spfc_get_parity_value(queue_bus.bus, SPFC_SCQC_BUS_ROW, SPFC_SCQC_BUS_COL);
- scq_ctx->parity_0 = parity & SPFC_PARITY_MASK;
- scq_ctx->parity_1 = (parity >> UNF_SHIFT_1) & SPFC_PARITY_MASK;
- scq_ctx->parity_2 = (parity >> UNF_SHIFT_2) & SPFC_PARITY_MASK;
-
- spfc_cpu_to_big64((void *)scq_ctx, sizeof(struct spfc_cq_qinfo));
-}
-
-static u32 spfc_creat_scqc_via_cmdq_sync(struct spfc_hba_info *hba,
- struct spfc_cq_qinfo *scqc, u32 scqn)
-{
-#define SPFC_INIT_SCQC_TIMEOUT 3000
- int ret;
- u32 covrt_size;
- struct spfc_cmdqe_creat_scqc init_scqc_cmd;
- struct sphw_cmd_buf *cmdq_in_buf;
-
- cmdq_in_buf = sphw_alloc_cmd_buf(hba->dev_handle);
- if (!cmdq_in_buf) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]cmdq in_cmd_buf alloc failed");
-
- SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_INIT_SCQC);
- return UNF_RETURN_ERROR;
- }
-
- memset(&init_scqc_cmd, 0, sizeof(init_scqc_cmd));
- init_scqc_cmd.wd0.task_type = SPFC_TASK_T_INIT_SCQC;
- init_scqc_cmd.wd1.scqn = SPFC_LSW(scqn);
- covrt_size = sizeof(init_scqc_cmd) - sizeof(init_scqc_cmd.scqc);
- spfc_cpu_to_big32(&init_scqc_cmd, covrt_size);
-
- /* scqc is already big endian */
- memcpy(init_scqc_cmd.scqc, scqc, sizeof(*scqc));
- memcpy(cmdq_in_buf->buf, &init_scqc_cmd, sizeof(init_scqc_cmd));
- cmdq_in_buf->size = sizeof(init_scqc_cmd);
-
- ret = sphw_cmdq_detail_resp(hba->dev_handle, COMM_MOD_FC, 0,
- cmdq_in_buf, NULL, NULL,
- SPFC_INIT_SCQC_TIMEOUT, SPHW_CHANNEL_FC);
- sphw_free_cmd_buf(hba->dev_handle, cmdq_in_buf);
- if (ret) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Send creat scqc via cmdq failed, ret=%d",
- ret);
-
- SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_INIT_SCQC);
- return UNF_RETURN_ERROR;
- }
-
- SPFC_IO_STAT(hba, SPFC_TASK_T_INIT_SCQC);
-
- return RETURN_OK;
-}
-
-static u32 spfc_delete_ssqc_via_cmdq_sync(struct spfc_hba_info *hba, u32 xid,
- u64 context_gpa, u32 entry_count)
-{
-#define SPFC_DELETE_SSQC_TIMEOUT 3000
- int ret = RETURN_OK;
- struct spfc_cmdqe_delete_ssqc delete_ssqc_cmd;
- struct sphw_cmd_buf *cmdq_in_buf = NULL;
-
- cmdq_in_buf = sphw_alloc_cmd_buf(hba->dev_handle);
- if (!cmdq_in_buf) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]cmdq in_cmd_buf alloc failed");
- return UNF_RETURN_ERROR;
- }
-
- memset(&delete_ssqc_cmd, 0, sizeof(delete_ssqc_cmd));
- delete_ssqc_cmd.wd0.task_type = SPFC_TASK_T_CLEAR_SSQ_CONTEXT;
- delete_ssqc_cmd.wd0.xid = xid;
- delete_ssqc_cmd.wd0.entry_count = entry_count;
- delete_ssqc_cmd.wd1.scqn = SPFC_LSW(0);
- delete_ssqc_cmd.context_gpa_hi = SPFC_HIGH_32_BITS(context_gpa);
- delete_ssqc_cmd.context_gpa_lo = SPFC_LOW_32_BITS(context_gpa);
- spfc_cpu_to_big32(&delete_ssqc_cmd, sizeof(delete_ssqc_cmd));
- memcpy(cmdq_in_buf->buf, &delete_ssqc_cmd, sizeof(delete_ssqc_cmd));
- cmdq_in_buf->size = sizeof(delete_ssqc_cmd);
-
- ret = sphw_cmdq_detail_resp(hba->dev_handle, COMM_MOD_FC, 0,
- cmdq_in_buf, NULL, NULL,
- SPFC_DELETE_SSQC_TIMEOUT,
- SPHW_CHANNEL_FC);
-
- sphw_free_cmd_buf(hba->dev_handle, cmdq_in_buf);
-
- return ret;
-}
-
-static void spfc_free_ssq_qpc(struct spfc_hba_info *hba, u32 free_sq_num)
-{
- u32 global_sq_index = 0;
- u32 qid = 0;
- struct spfc_parent_shared_queue_info *ssq_info = NULL;
-
- SPFC_FUNCTION_ENTER;
- for (global_sq_index = 0; global_sq_index < free_sq_num;) {
- for (qid = 1; qid <= SPFC_SQ_NUM_PER_QPC; qid++) {
- ssq_info = &hba->parent_queue_mgr->shared_queue[global_sq_index];
- if (qid == SPFC_SQ_NUM_PER_QPC ||
- global_sq_index == free_sq_num - 1) {
- if (ssq_info->parent_ctx.cqm_parent_ctx_obj) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "[INFO]qid 0x%x, global_sq_index 0x%x, free_sq_num 0x%x",
- qid, global_sq_index, free_sq_num);
- cqm3_object_delete(&ssq_info->parent_ctx
- .cqm_parent_ctx_obj->object);
- ssq_info->parent_ctx.cqm_parent_ctx_obj = NULL;
- }
- }
- global_sq_index++;
- if (global_sq_index >= free_sq_num)
- break;
- }
- }
-}
-
-void spfc_free_ssq(void *handle, u32 free_sq_num)
-{
-#define SPFC_FREE_SSQ_WAIT_MS 1000
- u32 global_sq_index = 0;
- u32 qid = 0;
- struct spfc_parent_shared_queue_info *ssq_info = NULL;
- struct spfc_parent_ssq_info *sq_ctrl = NULL;
- struct cqm_qpc_mpt *prnt_ctx = NULL;
- u32 ret = UNF_RETURN_ERROR;
- u32 entry_count = 0;
- struct spfc_hba_info *hba = NULL;
-
- SPFC_FUNCTION_ENTER;
-
- hba = (struct spfc_hba_info *)handle;
- for (global_sq_index = 0; global_sq_index < free_sq_num;) {
- for (qid = 1; qid <= SPFC_SQ_NUM_PER_QPC; qid++) {
- ssq_info = &hba->parent_queue_mgr->shared_queue[global_sq_index];
- sq_ctrl = &ssq_info->parent_ssq_info;
- /* Free data cos */
- spfc_free_link_list_wpg(sq_ctrl);
- if (sq_ctrl->queue_head_original) {
- pci_unmap_single(hba->pci_dev,
- sq_ctrl->queue_hdr_phy_addr_original,
- sizeof(struct spfc_queue_header) +
- SPFC_SQ_HEADER_ADDR_ALIGN_SIZE,
- DMA_BIDIRECTIONAL);
- kfree(sq_ctrl->queue_head_original);
- sq_ctrl->queue_head_original = NULL;
- }
- if (qid == SPFC_SQ_NUM_PER_QPC || global_sq_index == free_sq_num - 1) {
- if (ssq_info->parent_ctx.cqm_parent_ctx_obj) {
- prnt_ctx = ssq_info->parent_ctx.cqm_parent_ctx_obj;
- entry_count = (qid == SPFC_SQ_NUM_PER_QPC ?
- SPFC_SQ_NUM_PER_QPC :
- free_sq_num - global_sq_index);
- ret = spfc_delete_ssqc_via_cmdq_sync(hba, prnt_ctx->xid,
- prnt_ctx->paddr,
- entry_count);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]ucode delete ssq fail, glbindex 0x%x, qid 0x%x, glsqindex 0x%x",
- global_sq_index, qid, free_sq_num);
- }
- }
- }
- global_sq_index++;
- if (global_sq_index >= free_sq_num)
- break;
- }
- }
-
- msleep(SPFC_FREE_SSQ_WAIT_MS);
-
- spfc_free_ssq_qpc(hba, free_sq_num);
-}
-
-u32 spfc_creat_ssqc_via_cmdq_sync(struct spfc_hba_info *hba,
- struct spfc_ssq_parent_context *ssqc,
- u32 xid, u64 context_gpa)
-{
-#define SPFC_INIT_SSQC_TIMEOUT 3000
- int ret;
- u32 covrt_size;
- struct spfc_cmdqe_creat_ssqc create_ssqc_cmd;
- struct sphw_cmd_buf *cmdq_in_buf = NULL;
-
- cmdq_in_buf = sphw_alloc_cmd_buf(hba->dev_handle);
- if (!cmdq_in_buf) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]cmdq in_cmd_buf alloc failed");
- return UNF_RETURN_ERROR;
- }
-
- memset(&create_ssqc_cmd, 0, sizeof(create_ssqc_cmd));
- create_ssqc_cmd.wd0.task_type = SPFC_TASK_T_CREATE_SSQ_CONTEXT;
- create_ssqc_cmd.wd0.xid = xid;
- create_ssqc_cmd.wd1.scqn = SPFC_LSW(0);
- create_ssqc_cmd.context_gpa_hi = SPFC_HIGH_32_BITS(context_gpa);
- create_ssqc_cmd.context_gpa_lo = SPFC_LOW_32_BITS(context_gpa);
- covrt_size = sizeof(create_ssqc_cmd) - sizeof(create_ssqc_cmd.ssqc);
- spfc_cpu_to_big32(&create_ssqc_cmd, covrt_size);
-
- /* ssqc is already big endian */
- memcpy(create_ssqc_cmd.ssqc, ssqc, sizeof(*ssqc));
- memcpy(cmdq_in_buf->buf, &create_ssqc_cmd, sizeof(create_ssqc_cmd));
- cmdq_in_buf->size = sizeof(create_ssqc_cmd);
- ret = sphw_cmdq_detail_resp(hba->dev_handle, COMM_MOD_FC, 0,
- cmdq_in_buf, NULL, NULL,
- SPFC_INIT_SSQC_TIMEOUT, SPHW_CHANNEL_FC);
- sphw_free_cmd_buf(hba->dev_handle, cmdq_in_buf);
- if (ret)
- return UNF_RETURN_ERROR;
- return RETURN_OK;
-}
-
-void spfc_init_sq_prnt_ctxt_sq_qinfo(struct spfc_sq_qinfo *sq_info,
- struct spfc_parent_ssq_info *ssq)
-{
- struct spfc_wqe_page *head_wqe_page = NULL;
- struct spfc_sq_qinfo *prnt_sq_ctx = NULL;
- struct spfc_queue_info_bus queue_bus;
-
- SPFC_FUNCTION_ENTER;
-
- /* Obtains the Parent Context address */
- head_wqe_page = SPFC_GET_SQ_HEAD(ssq);
-
- prnt_sq_ctx = sq_info;
-
- /* The PMSN is updated by the host driver */
- prnt_sq_ctx->pmsn_type = SPFC_PMSN_CI_TYPE_FROM_HOST;
-
- /* Indicates the value of the O bit of the valid SQE in the current
-  * round of SQ. The value for a Linked List SQ is always one; the
-  * value 0 is invalid.
-  */
- prnt_sq_ctx->loop_o =
- SPFC_OWNER_DRIVER_PRODUCT; /* current valid o-bit */
-
- /* should be opposite from loop_o */
- prnt_sq_ctx->cur_wqe_o = ~(prnt_sq_ctx->loop_o);
-
- /* the first sqe's gpa */
- prnt_sq_ctx->cur_sqe_gpa = head_wqe_page->wpg_phy_addr;
-
- /* Indicates the GPA of the Queue header that is initialized to the SQ
-  * in the Host memory. The value must be 16-byte aligned.
-  */
- prnt_sq_ctx->pmsn_gpa = ssq->queue_hdr_phy_addr;
- if (wqe_pre_load != 0)
- prnt_sq_ctx->pmsn_gpa |= SPFC_SQ_LINK_PRE;
-
- /* This field is used to fill in the dmaattr_idx field of the ComboDMA.
- * The default value is 0
- */
- prnt_sq_ctx->sqe_dmaattr_idx = SPFC_DMA_ATTR_OFST;
-
- /* This field is filled using the value of RO_SO in the SGL0 of the
- * ComboDMA
- */
- prnt_sq_ctx->sq_so_ro = SPFC_PCIE_RELAXED_ORDERING;
-
- prnt_sq_ctx->ring = ssq->queue_style;
-
- /* This field is used to set the SGL0 field of the Child solicDMA */
- prnt_sq_ctx->zerocopy_dmaattr_idx = SPFC_DMA_ATTR_OFST;
-
- prnt_sq_ctx->zerocopy_so_ro = SPFC_PCIE_RELAXED_ORDERING;
- prnt_sq_ctx->enable_256 = SPFC_256BWQE_ENABLE;
-
- /* PCIe attribute information */
- prnt_sq_ctx->pcie_template = SPFC_PCIE_TEMPLATE;
-
- memset(&queue_bus, 0, sizeof(struct spfc_queue_info_bus));
- queue_bus.bus[ARRAY_INDEX_0] |= ((u64)(ssq->context_id & SPFC_SSQ_CTX_MASK)); /* bits 20 */
- queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->sqe_dmaattr_idx)) << UNF_SHIFT_20);
- queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->sq_so_ro)) << UNF_SHIFT_26);
- queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->ring)) << UNF_SHIFT_28); /* bits 1 */
- queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->zerocopy_dmaattr_idx))
- << UNF_SHIFT_29); /* bits 6 */
- queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->zerocopy_so_ro)) << UNF_SHIFT_35);
- queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->pcie_template)) << UNF_SHIFT_37);
- queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(prnt_sq_ctx->pmsn_gpa >> UNF_SHIFT_4))
- << UNF_SHIFT_43); /* bits 21 */
- queue_bus.bus[ARRAY_INDEX_1] |= ((u64)(prnt_sq_ctx->pmsn_gpa >> UNF_SHIFT_25));
- queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(prnt_sq_ctx->pmsn_type)) << UNF_SHIFT_39);
- prnt_sq_ctx->parity = spfc_get_parity_value(queue_bus.bus, SPFC_SQC_BUS_ROW,
- SPFC_SQC_BUS_COL);
- spfc_cpu_to_big64(prnt_sq_ctx, sizeof(struct spfc_sq_qinfo));
-
- SPFC_FUNCTION_RETURN;
-}
-
-u32 spfc_create_ssq(void *handle)
-{
- u32 ret = RETURN_OK;
- u32 global_sq_index = 0;
- u32 qid = 0;
- struct cqm_qpc_mpt *prnt_ctx = NULL;
- struct spfc_parent_shared_queue_info *ssq_info = NULL;
- struct spfc_parent_ssq_info *sq_ctrl = NULL;
- u32 queue_header_alloc_size = 0;
- struct spfc_wqe_page *head_wpg = NULL;
- struct spfc_ssq_parent_context prnt_ctx_info;
- struct spfc_sq_qinfo *sq_info = NULL;
- struct spfc_scq_qinfo *psq_pretchinfo = NULL;
- struct spfc_queue_info_bus queue_bus;
- struct spfc_fc_key_section *keysection = NULL;
- struct spfc_hba_info *hba = NULL;
- dma_addr_t origin_addr;
-
- FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
- hba = (struct spfc_hba_info *)handle;
- for (global_sq_index = 0; global_sq_index < SPFC_MAX_SSQ_NUM;) {
- qid = 0;
- prnt_ctx = cqm3_object_qpc_mpt_create(hba->dev_handle, SERVICE_T_FC,
- CQM_OBJECT_SERVICE_CTX,
- SPFC_CNTX_SIZE_256B, NULL,
- CQM_INDEX_INVALID);
- if (!prnt_ctx) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Create ssq context failed, CQM_INDEX is 0x%x",
- CQM_INDEX_INVALID);
- goto ssq_ctx_create_fail;
- }
- memset(&prnt_ctx_info, 0, sizeof(prnt_ctx_info));
- keysection = (struct spfc_fc_key_section *)&prnt_ctx_info;
- keysection->xid_h = (prnt_ctx->xid >> UNF_SHIFT_16) & SPFC_KEYSECTION_XID_H_MASK;
- keysection->xid_l = prnt_ctx->xid & SPFC_KEYSECTION_XID_L_MASK;
- spfc_cpu_to_big32(keysection, sizeof(struct spfc_fc_key_section));
- for (qid = 0; qid < SPFC_SQ_NUM_PER_QPC; qid++) {
- sq_info = (struct spfc_sq_qinfo *)((u8 *)(&prnt_ctx_info) + ((qid + 1) *
- SPFC_SQ_SPACE_OFFSET));
- ssq_info = &hba->parent_queue_mgr->shared_queue[global_sq_index];
- ssq_info->parent_ctx.cqm_parent_ctx_obj = prnt_ctx;
- /* Initialize struct spfc_parent_sq_info */
- sq_ctrl = &ssq_info->parent_ssq_info;
- sq_ctrl->hba = (void *)hba;
- sq_ctrl->context_id = prnt_ctx->xid;
- sq_ctrl->sq_queue_id = qid + SPFC_SQ_QID_START_PER_QPC;
- sq_ctrl->cache_id = FC_CALC_CID(prnt_ctx->xid);
- sq_ctrl->sqn = global_sq_index;
- sq_ctrl->max_sqe_num = hba->exi_count;
- /* Reduce one Link Wqe */
- sq_ctrl->wqe_num_per_buf = hba->sq_wpg_pool.wqe_per_wpg - 1;
- sq_ctrl->wqe_size = SPFC_SQE_SIZE;
- sq_ctrl->wqe_offset = 0;
- sq_ctrl->head_start_cmsn = 0;
- sq_ctrl->head_end_cmsn = SPFC_GET_WP_END_CMSN(0, sq_ctrl->wqe_num_per_buf);
- sq_ctrl->last_pmsn = 0;
- /* Linked List SQ Owner Bit 1 valid,0 invalid */
- sq_ctrl->last_pi_owner = 1;
- atomic_set(&sq_ctrl->sq_valid, true);
- sq_ctrl->accum_wqe_cnt = 0;
- sq_ctrl->service_type = SPFC_SERVICE_TYPE_FC_SQ;
- sq_ctrl->queue_style = (global_sq_index == SPFC_DIRECTWQE_SQ_INDEX) ?
- SPFC_QUEUE_RING_STYLE : SPFC_QUEUE_LINK_STYLE;
- INIT_LIST_HEAD(&sq_ctrl->list_linked_list_sq);
- atomic_set(&sq_ctrl->wqe_page_cnt, 0);
- atomic_set(&sq_ctrl->sq_db_cnt, 0);
- atomic_set(&sq_ctrl->sqe_minus_cqe_cnt, 1);
- atomic_set(&sq_ctrl->sq_wqe_cnt, 0);
- atomic_set(&sq_ctrl->sq_cqe_cnt, 0);
- spin_lock_init(&sq_ctrl->parent_sq_enqueue_lock);
- memset(sq_ctrl->io_stat, 0, sizeof(sq_ctrl->io_stat));
-
- /* Allocate and initialize the Queue Header space. 64B
-  * alignment is required; an additional 64B is applied
-  * for alignment.
-  */
- queue_header_alloc_size = sizeof(struct spfc_queue_header) +
- SPFC_SQ_HEADER_ADDR_ALIGN_SIZE;
- sq_ctrl->queue_head_original = kmalloc(queue_header_alloc_size, GFP_ATOMIC);
- if (!sq_ctrl->queue_head_original) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]SQ(0x%x) create SQ queue header failed",
- global_sq_index);
- goto ssq_qheader_create_fail;
- }
-
- memset((u8 *)sq_ctrl->queue_head_original, 0, queue_header_alloc_size);
-
- sq_ctrl->queue_hdr_phy_addr_original =
- pci_map_single(hba->pci_dev, sq_ctrl->queue_head_original,
- queue_header_alloc_size, DMA_BIDIRECTIONAL);
- origin_addr = sq_ctrl->queue_hdr_phy_addr_original;
- if (pci_dma_mapping_error(hba->pci_dev, origin_addr)) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]SQ(0x%x) SQ queue header DMA mapping failed",
- global_sq_index);
- goto ssq_qheader_dma_map_fail;
- }
-
- /* Obtains the 64B alignment address */
- sq_ctrl->queue_header = (struct spfc_queue_header *)(uintptr_t)
- SPFC_ADDR_64_ALIGN((u64)((uintptr_t)(sq_ctrl->queue_head_original)));
- sq_ctrl->queue_hdr_phy_addr = SPFC_ADDR_64_ALIGN(origin_addr);
-
- /* Each SQ is allocated with a Wqe Page by default. The
- * WqePageCnt is incremented by one
- */
- head_wpg = spfc_add_one_wqe_page(sq_ctrl);
- if (!head_wpg) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]SQ(0x%x) create SQ first wqe page failed",
- global_sq_index);
- goto ssq_headwpg_create_fail;
- }
-
- atomic_inc(&sq_ctrl->wqe_page_cnt);
- spfc_init_sq_prnt_ctxt_sq_qinfo(sq_info, sq_ctrl);
- global_sq_index++;
- if (global_sq_index == SPFC_MAX_SSQ_NUM)
- break;
- }
- psq_pretchinfo = &prnt_ctx_info.sq_pretchinfo;
- psq_pretchinfo->hw_scqc_config.info.rq_th2_preld_cache_num = wqe_pre_load;
- psq_pretchinfo->hw_scqc_config.info.rq_th1_preld_cache_num = wqe_pre_load;
- psq_pretchinfo->hw_scqc_config.info.rq_th0_preld_cache_num = wqe_pre_load;
- psq_pretchinfo->hw_scqc_config.info.rq_min_preld_cache_num = wqe_pre_load;
- psq_pretchinfo->hw_scqc_config.info.sq_th2_preld_cache_num = wqe_pre_load;
- psq_pretchinfo->hw_scqc_config.info.sq_th1_preld_cache_num = wqe_pre_load;
- psq_pretchinfo->hw_scqc_config.info.sq_th0_preld_cache_num = wqe_pre_load;
- psq_pretchinfo->hw_scqc_config.info.sq_min_preld_cache_num = wqe_pre_load;
- psq_pretchinfo->hw_scqc_config.info.scq_n = (u64)0;
- psq_pretchinfo->hw_scqc_config.info.parity = 0;
-
- memset(&queue_bus, 0, sizeof(struct spfc_queue_info_bus));
- queue_bus.bus[ARRAY_INDEX_0] = psq_pretchinfo->hw_scqc_config.pctxt_val1;
- psq_pretchinfo->hw_scqc_config.info.parity =
- spfc_get_parity_value(queue_bus.bus, SPFC_HW_SCQC_BUS_ROW,
- SPFC_HW_SCQC_BUS_COL);
- spfc_cpu_to_big64(psq_pretchinfo, sizeof(struct spfc_scq_qinfo));
- ret = spfc_creat_ssqc_via_cmdq_sync(hba, &prnt_ctx_info,
- prnt_ctx->xid, prnt_ctx->paddr);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]SQ(0x%x) create ssqc failed.",
- global_sq_index);
- goto ssq_cmdqsync_fail;
- }
- }
-
- return RETURN_OK;
-
-ssq_headwpg_create_fail:
- pci_unmap_single(hba->pci_dev, sq_ctrl->queue_hdr_phy_addr_original,
- queue_header_alloc_size, DMA_BIDIRECTIONAL);
-
-ssq_qheader_dma_map_fail:
- kfree(sq_ctrl->queue_head_original);
- sq_ctrl->queue_head_original = NULL;
-
-ssq_qheader_create_fail:
- cqm3_object_delete(&prnt_ctx->object);
- ssq_info->parent_ctx.cqm_parent_ctx_obj = NULL;
- if (qid > 0) {
- while (qid--) {
- ssq_info = &hba->parent_queue_mgr->shared_queue[global_sq_index - qid];
- ssq_info->parent_ctx.cqm_parent_ctx_obj = NULL;
- }
- }
-
-ssq_ctx_create_fail:
-ssq_cmdqsync_fail:
- if (global_sq_index > 0)
- spfc_free_ssq(hba, global_sq_index);
-
- return UNF_RETURN_ERROR;
-}
-
-static u32 spfc_create_scq(struct spfc_hba_info *hba)
-{
- u32 ret = UNF_RETURN_ERROR;
- u32 scq_index = 0;
- u32 scq_cfg_num = 0;
- struct cqm_queue *cqm_scq = NULL;
- void *handle = NULL;
- struct spfc_scq_info *scq_info = NULL;
- struct spfc_cq_qinfo cq_qinfo;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
- handle = hba->dev_handle;
- /* Create SCQ by CQM interface */
- for (scq_index = 0; scq_index < SPFC_TOTAL_SCQ_NUM; scq_index++) {
- /*
-  * 1. Create/Allocate SCQ
-  *
-  * Notice: SCQ[0, 2, 4 ...]--->CMD SCQ,
-  * SCQ[1, 3, 5 ...]--->STS SCQ,
-  * SCQ[SPFC_TOTAL_SCQ_NUM-1]--->Default SCQ
-  */
- cqm_scq = cqm3_object_nonrdma_queue_create(handle, SERVICE_T_FC,
- CQM_OBJECT_NONRDMA_SCQ,
- SPFC_SCQ_IS_STS(scq_index) ?
- SPFC_STS_SCQ_DEPTH :
- SPFC_CMD_SCQ_DEPTH,
- SPFC_SCQE_SIZE, hba);
- if (!cqm_scq) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT,
- UNF_WARN, "[err]Create scq failed");
-
- goto free_scq;
- }
-
- /* 2. Initialize SCQ (info) */
- spfc_init_scq_info(hba, cqm_scq, scq_index, &scq_info);
-
- /* 3. Allocate & Initialize SCQ interrupt */
- ret = spfc_alloc_scq_int(scq_info);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Allocate scq interrupt failed");
-
- cqm3_object_delete(&cqm_scq->object);
- memset(scq_info, 0, sizeof(struct spfc_scq_info));
- goto free_scq;
- }
-
- /* 4. Initialize SCQ queue header */
- spfc_init_scq_header((struct wq_header *)(void *)cqm_scq->q_header_vaddr);
-
- /* 5. Initialize & Create SCQ CTX */
- memset(&cq_qinfo, 0, sizeof(cq_qinfo));
- spfc_cfg_scq_ctx(scq_info, &cq_qinfo);
- ret = spfc_creat_scqc_via_cmdq_sync(hba, &cq_qinfo, scq_info->scqn);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Create scq context failed");
-
- cqm3_object_delete(&cqm_scq->object);
- memset(scq_info, 0, sizeof(struct spfc_scq_info));
- goto free_scq;
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[info]Create SCQ[%u] Scqn=%u WqeNum=%u WqeSize=%u WqePerBuf=%u CqDepth=%u CiType=%u irq=%u msix=%u",
- scq_info->queue_id, scq_info->scqn,
- scq_info->valid_wqe_num, scq_info->wqe_size,
- scq_info->wqe_num_per_buf, scq_info->scqc_cq_depth,
- scq_info->scqc_ci_type, scq_info->irq_id,
- scq_info->msix_entry_idx);
- }
-
- /* Last SCQ is used to handle SCQE delivery access when clearing buffer
- */
- hba->default_scqn = scq_info->scqn;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Default Scqn=%u CqmScqIndex=%u", hba->default_scqn,
- cqm_scq->index);
-
- return RETURN_OK;
-
-free_scq:
- spfc_flush_scq_ctx(hba);
-
- scq_cfg_num = scq_index;
- for (scq_index = 0; scq_index < scq_cfg_num; scq_index++) {
- scq_info = &hba->scq_info[scq_index];
- spfc_free_scq_int(scq_info);
- cqm_scq = scq_info->cqm_scq_info;
- cqm3_object_delete(&cqm_scq->object);
- memset(scq_info, 0, sizeof(struct spfc_scq_info));
- }
-
- return UNF_RETURN_ERROR;
-}
-
-static void spfc_destroy_scq(struct spfc_hba_info *hba)
-{
- u32 scq_index = 0;
- struct cqm_queue *cqm_scq = NULL;
- struct spfc_scq_info *scq_info = NULL;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Start destroy total %d SCQ", SPFC_TOTAL_SCQ_NUM);
-
- FC_CHECK_RETURN_VOID(hba);
-
- /* Use CQM to delete SCQ */
- for (scq_index = 0; scq_index < SPFC_TOTAL_SCQ_NUM; scq_index++) {
- scq_info = &hba->scq_info[scq_index];
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ALL,
- "[info]Destroy SCQ%u, Scqn=%u, Irq=%u, msix=%u, name=%s",
- scq_index, scq_info->scqn, scq_info->irq_id,
- scq_info->msix_entry_idx, scq_info->irq_name);
-
- spfc_free_scq_int(scq_info);
- cqm_scq = scq_info->cqm_scq_info;
- cqm3_object_delete(&cqm_scq->object);
- memset(scq_info, 0, sizeof(struct spfc_scq_info));
- }
-}
-
-static void spfc_init_srq_info(struct spfc_hba_info *hba, struct cqm_queue *cqm_srq,
- struct spfc_srq_info *srq_info)
-{
- FC_CHECK_RETURN_VOID(hba);
- FC_CHECK_RETURN_VOID(cqm_srq);
- FC_CHECK_RETURN_VOID(srq_info);
-
- srq_info->hba = (void *)hba;
-
- srq_info->cqm_srq_info = cqm_srq;
- srq_info->wqe_num_per_buf = cqm_srq->q_room_buf_1.buf_size / SPFC_SRQE_SIZE - 1;
- srq_info->wqe_size = SPFC_SRQE_SIZE;
- srq_info->valid_wqe_num = cqm_srq->valid_wqe_num;
- srq_info->pi = 0;
- srq_info->pi_owner = SPFC_SRQ_INIT_LOOP_O;
- srq_info->pmsn = 0;
- srq_info->srqn = cqm_srq->index;
- srq_info->first_rqe_recv_dma = 0;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Init srq info(srq index 0x%x) valid wqe num 0x%x, buffer size 0x%x, wqe num per buf 0x%x",
- cqm_srq->index, srq_info->valid_wqe_num,
- cqm_srq->q_room_buf_1.buf_size, srq_info->wqe_num_per_buf);
-}
-
-static void spfc_init_srq_header(struct wq_header *queue_header)
-{
- FC_CHECK_RETURN_VOID(queue_header);
-
- memset(queue_header, 0, sizeof(struct wq_header));
-}
-
-/*
- *Function Name : spfc_get_srq_entry
- *Function Description: Obtain RQE in SRQ via PI.
- *Input Parameters : *srq_info,
- * **linked_rqe,
- * position
- *Output Parameters : N/A
- *Return Type : struct spfc_rqe*
- */
-static struct spfc_rqe *spfc_get_srq_entry(struct spfc_srq_info *srq_info,
- struct spfc_rqe **linked_rqe, u16 position)
-{
- u32 buf_id = 0;
- u32 wqe_num_per_buf = 0;
- u16 buf_offset = 0;
- struct cqm_buf_list *buf = NULL;
-
- FC_CHECK_RETURN_VALUE(srq_info, NULL);
-
- wqe_num_per_buf = srq_info->wqe_num_per_buf;
-
- buf_id = position / wqe_num_per_buf;
- buf = &srq_info->cqm_srq_info->q_room_buf_1.buf_list[buf_id];
- buf_offset = position % ((u16)wqe_num_per_buf);
-
- if (buf_offset + 1 == wqe_num_per_buf)
- *linked_rqe = (struct spfc_rqe *)(buf->va) + wqe_num_per_buf;
- else
- *linked_rqe = NULL;
-
- return (struct spfc_rqe *)(buf->va) + buf_offset;
-}
-
-void spfc_post_els_srq_wqe(struct spfc_srq_info *srq_info, u16 buf_id)
-{
- struct spfc_rqe *rqe = NULL;
- struct spfc_rqe tmp_rqe;
- struct spfc_rqe *linked_rqe = NULL;
- struct wq_header *wq_header = NULL;
- struct spfc_drq_buff_entry *buff_entry = NULL;
-
- FC_CHECK_RETURN_VOID(srq_info);
- FC_CHECK_RETURN_VOID(buf_id < srq_info->valid_wqe_num);
-
- buff_entry = srq_info->els_buff_entry_head + buf_id;
-
- spin_lock(&srq_info->srq_spin_lock);
-
- /* Obtain RQE, not include link wqe */
- rqe = spfc_get_srq_entry(srq_info, &linked_rqe, srq_info->pi);
- if (!rqe) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]post els srq,get srqe failed, valid wqe num 0x%x, pi 0x%x, pmsn 0x%x",
- srq_info->valid_wqe_num, srq_info->pi,
- srq_info->pmsn);
-
- spin_unlock(&srq_info->srq_spin_lock);
- return;
- }
-
- /* Initialize RQE */
- /* cs section is not used */
- memset(&tmp_rqe, 0, sizeof(struct spfc_rqe));
-
- /* The O bit defaults to invalid and is set valid at the end */
- spfc_build_srq_wqe_ctrls(&tmp_rqe, !srq_info->pi_owner, srq_info->pmsn + 1);
-
- tmp_rqe.bds_sl.buf_addr_hi = SPFC_HIGH_32_BITS(buff_entry->buff_dma);
- tmp_rqe.bds_sl.buf_addr_lo = SPFC_LOW_32_BITS(buff_entry->buff_dma);
- tmp_rqe.drv_sl.wd0.user_id = buf_id;
-
- /* convert to big endian */
- spfc_cpu_to_big32(&tmp_rqe, sizeof(struct spfc_rqe));
-
- memcpy(rqe, &tmp_rqe, sizeof(struct spfc_rqe));
-
- /* reset Obit */
- spfc_set_srq_wqe_owner_be((struct spfc_wqe_ctrl *)(void *)(&rqe->ctrl_sl),
- srq_info->pi_owner);
-
- if (linked_rqe) {
- /* Update Obit in linked WQE */
- spfc_set_srq_link_wqe_owner_be((struct spfc_linkwqe *)(void *)linked_rqe,
- srq_info->pi_owner, srq_info->pmsn + 1);
- }
-
- /* Update PI and PMSN */
- spfc_update_producer_info((u16)(srq_info->valid_wqe_num),
- &srq_info->pi, &srq_info->pi_owner);
-
- /* pmsn is 16-bit; it increments up to the maximum value and then
-  * wraps around automatically
-  */
- srq_info->pmsn++;
-
- /* Update pmsn in queue header */
- wq_header = (struct wq_header *)(void *)srq_info->cqm_srq_info->q_header_vaddr;
- spfc_update_srq_header(&wq_header->db_record, srq_info->pmsn);
-
- spin_unlock(&srq_info->srq_spin_lock);
-}
-
-/*
- *Function Name : spfc_cfg_srq_ctx
- *Function Description: Initialize the CTX of the SRQ that receives the
- *                      immediate data. The RQE of the SRQ needs to be
- *                      initialized when the RQE is filled.
- *Input Parameters    : *srq_info, *srq_ctx,
- * sge_size,
- * rqe_gpa
- *Output Parameters : N/A
- *Return Type : void
- */
-static void spfc_cfg_srq_ctx(struct spfc_srq_info *srq_info,
- struct spfc_srq_ctx *ctx, u32 sge_size,
- u64 rqe_gpa)
-{
- struct spfc_srq_ctx *srq_ctx = NULL;
- struct cqm_queue *cqm_srq_info = NULL;
- struct spfc_queue_info_bus queue_bus;
-
- FC_CHECK_RETURN_VOID(srq_info);
- FC_CHECK_RETURN_VOID(ctx);
-
- cqm_srq_info = srq_info->cqm_srq_info;
- srq_ctx = ctx;
- srq_ctx->last_rq_pmsn = 0;
- srq_ctx->cur_rqe_msn = 0;
- srq_ctx->pcie_template = 0;
- /* The value of CTX needs to be updated
-  * when RQE is configured
-  */
- srq_ctx->cur_rqe_gpa = rqe_gpa;
- srq_ctx->cur_sge_v = 0;
- srq_ctx->cur_sge_l = 0;
- /* The information received by the SRQ is reported through the
-  * SCQ. The interrupt and ArmCQ are disabled.
-  */
- srq_ctx->int_mode = 0;
- srq_ctx->ceqn_msix = 0;
- srq_ctx->cur_sge_remain_len = 0;
- srq_ctx->cur_sge_id = 0;
- srq_ctx->consant_sge_len = sge_size;
- srq_ctx->cur_wqe_o = 0;
- srq_ctx->pmsn_type = SPFC_PMSN_CI_TYPE_FROM_HOST;
- srq_ctx->bdsl = 0;
- srq_ctx->cr = 0;
- srq_ctx->csl = 0;
- srq_ctx->cf = 0;
- srq_ctx->ctrl_sl = 0;
- srq_ctx->cur_sge_gpa = 0;
- srq_ctx->cur_pmsn_gpa = cqm_srq_info->q_header_paddr;
- srq_ctx->prefetch_max_masn = 0;
- srq_ctx->cqe_max_cnt = 0;
- srq_ctx->cur_cqe_cnt = 0;
- srq_ctx->arm_q = 0;
- srq_ctx->cq_so_ro = 0;
- srq_ctx->cqe_dma_attr_idx = 0;
- srq_ctx->rq_so_ro = 0;
- srq_ctx->rqe_dma_attr_idx = 0;
- srq_ctx->loop_o = SPFC_SRQ_INIT_LOOP_O;
- srq_ctx->ring = SPFC_QUEUE_RING;
-
- memset(&queue_bus, 0, sizeof(struct spfc_queue_info_bus));
- queue_bus.bus[ARRAY_INDEX_0] |= ((u64)(cqm_srq_info->q_ctx_paddr >> UNF_SHIFT_4));
- queue_bus.bus[ARRAY_INDEX_0] |= (((u64)(srq_ctx->rqe_dma_attr_idx &
- SPFC_SRQ_CTX_rqe_dma_attr_idx_MASK))
- << UNF_SHIFT_60); /* bits 4 */
-
- queue_bus.bus[ARRAY_INDEX_1] |= ((u64)(srq_ctx->rqe_dma_attr_idx >> UNF_SHIFT_4));
- queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(srq_ctx->rq_so_ro)) << UNF_SHIFT_2); /* bits 2 */
- queue_bus.bus[ARRAY_INDEX_1] |= (((u64)(srq_ctx->cur_pmsn_gpa >> UNF_SHIFT_4))
- << UNF_SHIFT_4); /* bits 60 */
-
- queue_bus.bus[ARRAY_INDEX_2] |= ((u64)(srq_ctx->consant_sge_len)); /* bits 17 */
- queue_bus.bus[ARRAY_INDEX_2] |= (((u64)(srq_ctx->pcie_template)) << UNF_SHIFT_17);
-
- srq_ctx->parity = spfc_get_parity_value((void *)queue_bus.bus, SPFC_SRQC_BUS_ROW,
- SPFC_SRQC_BUS_COL);
-
- spfc_cpu_to_big64((void *)srq_ctx, sizeof(struct spfc_srq_ctx));
-}
-
-static u32 spfc_creat_srqc_via_cmdq_sync(struct spfc_hba_info *hba,
- struct spfc_srq_ctx *srqc,
- u64 ctx_gpa)
-{
-#define SPFC_INIT_SRQC_TIMEOUT 3000
-
- int ret;
- u32 covrt_size;
- struct spfc_cmdqe_creat_srqc init_srq_cmd;
- struct sphw_cmd_buf *cmdq_in_buf;
-
- cmdq_in_buf = sphw_alloc_cmd_buf(hba->dev_handle);
- if (!cmdq_in_buf) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]cmdq in_cmd_buf alloc failed");
-
- SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_INIT_SRQC);
- return UNF_RETURN_ERROR;
- }
-
- memset(&init_srq_cmd, 0, sizeof(init_srq_cmd));
- init_srq_cmd.wd0.task_type = SPFC_TASK_T_INIT_SRQC;
- init_srq_cmd.srqc_gpa_h = SPFC_HIGH_32_BITS(ctx_gpa);
- init_srq_cmd.srqc_gpa_l = SPFC_LOW_32_BITS(ctx_gpa);
- covrt_size = sizeof(init_srq_cmd) - sizeof(init_srq_cmd.srqc);
- spfc_cpu_to_big32(&init_srq_cmd, covrt_size);
-
- /* srqc is already big-endian */
- memcpy(init_srq_cmd.srqc, srqc, sizeof(*srqc));
- memcpy(cmdq_in_buf->buf, &init_srq_cmd, sizeof(init_srq_cmd));
- cmdq_in_buf->size = sizeof(init_srq_cmd);
-
- ret = sphw_cmdq_detail_resp(hba->dev_handle, COMM_MOD_FC, 0,
- cmdq_in_buf, NULL, NULL,
- SPFC_INIT_SRQC_TIMEOUT, SPHW_CHANNEL_FC);
-
- sphw_free_cmd_buf(hba->dev_handle, cmdq_in_buf);
-
- if (ret) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Send creat srqc via cmdq failed, ret=%d",
- ret);
-
- SPFC_ERR_IO_STAT(hba, SPFC_TASK_T_INIT_SRQC);
- return UNF_RETURN_ERROR;
- }
-
- SPFC_IO_STAT(hba, SPFC_TASK_T_INIT_SRQC);
-
- return RETURN_OK;
-}
-
-static void spfc_init_els_srq_wqe(struct spfc_srq_info *srq_info)
-{
- u32 rqe_index = 0;
- struct spfc_drq_buff_entry *buf_entry = NULL;
-
- FC_CHECK_RETURN_VOID(srq_info);
-
- for (rqe_index = 0; rqe_index < srq_info->valid_wqe_num - 1; rqe_index++) {
- buf_entry = srq_info->els_buff_entry_head + rqe_index;
- spfc_post_els_srq_wqe(srq_info, buf_entry->buff_id);
- }
-}
-
-static void spfc_free_els_srq_buff(struct spfc_hba_info *hba, u32 srq_valid_wqe)
-{
- u32 buff_index = 0;
- struct spfc_srq_info *srq_info = NULL;
- struct spfc_drq_buff_entry *buff_entry = NULL;
-
- FC_CHECK_RETURN_VOID(hba);
-
- srq_info = &hba->els_srq_info;
-
- if (!srq_info->els_buff_entry_head)
- return;
-
- for (buff_index = 0; buff_index < srq_valid_wqe; buff_index++) {
- buff_entry = &srq_info->els_buff_entry_head[buff_index];
- buff_entry->buff_addr = NULL;
- }
-
- if (srq_info->buf_list.buflist) {
- for (buff_index = 0; buff_index < srq_info->buf_list.buf_num;
- buff_index++) {
- if (srq_info->buf_list.buflist[buff_index].paddr != 0) {
- pci_unmap_single(hba->pci_dev,
- srq_info->buf_list.buflist[buff_index].paddr,
- srq_info->buf_list.buf_size,
- DMA_FROM_DEVICE);
- srq_info->buf_list.buflist[buff_index].paddr = 0;
- }
- kfree(srq_info->buf_list.buflist[buff_index].vaddr);
- srq_info->buf_list.buflist[buff_index].vaddr = NULL;
- }
-
- kfree(srq_info->buf_list.buflist);
- srq_info->buf_list.buflist = NULL;
- }
-
- kfree(srq_info->els_buff_entry_head);
- srq_info->els_buff_entry_head = NULL;
-}
-
-static u32 spfc_alloc_els_srq_buff(struct spfc_hba_info *hba, u32 srq_valid_wqe)
-{
- u32 req_buff_size = 0;
- u32 buff_index = 0;
- struct spfc_srq_info *srq_info = NULL;
- struct spfc_drq_buff_entry *buff_entry = NULL;
- u32 buf_total_size;
- u32 buf_num;
- u32 alloc_idx;
- u32 cur_buf_idx = 0;
- u32 cur_buf_offset = 0;
- u32 buf_cnt_perhugebuf;
-
- srq_info = &hba->els_srq_info;
-
- /* Apply for entry buffer */
- req_buff_size = (u32)(srq_valid_wqe * sizeof(struct spfc_drq_buff_entry));
- srq_info->els_buff_entry_head = (struct spfc_drq_buff_entry *)kmalloc(req_buff_size,
- GFP_KERNEL);
- if (!srq_info->els_buff_entry_head) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Allocate ELS Srq receive buffer entries failed");
-
- return UNF_RETURN_ERROR;
- }
- memset(srq_info->els_buff_entry_head, 0, req_buff_size);
-
- buf_total_size = SPFC_SRQ_ELS_SGE_LEN * srq_valid_wqe;
-
- srq_info->buf_list.buf_size = buf_total_size > BUF_LIST_PAGE_SIZE
- ? BUF_LIST_PAGE_SIZE
- : buf_total_size;
- buf_cnt_perhugebuf = srq_info->buf_list.buf_size / SPFC_SRQ_ELS_SGE_LEN;
- buf_num = srq_valid_wqe % buf_cnt_perhugebuf ?
- srq_valid_wqe / buf_cnt_perhugebuf + 1 :
- srq_valid_wqe / buf_cnt_perhugebuf;
- srq_info->buf_list.buflist = (struct buff_list *)kmalloc(buf_num * sizeof(struct buff_list),
- GFP_KERNEL);
- srq_info->buf_list.buf_num = buf_num;
-
- if (!srq_info->buf_list.buflist) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Allocate ELS buf list failed out of memory");
- goto free_buff;
- }
- memset(srq_info->buf_list.buflist, 0, buf_num * sizeof(struct buff_list));
-
- for (alloc_idx = 0; alloc_idx < buf_num; alloc_idx++) {
- srq_info->buf_list.buflist[alloc_idx].vaddr = kmalloc(srq_info->buf_list.buf_size,
- GFP_KERNEL);
- if (!srq_info->buf_list.buflist[alloc_idx].vaddr)
- goto free_buff;
-
- memset(srq_info->buf_list.buflist[alloc_idx].vaddr, 0, srq_info->buf_list.buf_size);
-
- srq_info->buf_list.buflist[alloc_idx].paddr =
- pci_map_single(hba->pci_dev, srq_info->buf_list.buflist[alloc_idx].vaddr,
- srq_info->buf_list.buf_size, DMA_FROM_DEVICE);
- if (pci_dma_mapping_error(hba->pci_dev,
- srq_info->buf_list.buflist[alloc_idx].paddr)) {
- srq_info->buf_list.buflist[alloc_idx].paddr = 0;
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Map els srq buffer failed");
-
- goto free_buff;
- }
- }
-
- /* Apply for receiving buffer and attach it to the free linked list */
- for (buff_index = 0; buff_index < srq_valid_wqe; buff_index++) {
- buff_entry = &srq_info->els_buff_entry_head[buff_index];
- cur_buf_idx = buff_index / buf_cnt_perhugebuf;
- cur_buf_offset = SPFC_SRQ_ELS_SGE_LEN * (buff_index % buf_cnt_perhugebuf);
- buff_entry->buff_addr = srq_info->buf_list.buflist[cur_buf_idx].vaddr +
- cur_buf_offset;
- buff_entry->buff_dma = srq_info->buf_list.buflist[cur_buf_idx].paddr +
- cur_buf_offset;
- buff_entry->buff_id = (u16)buff_index;
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[EVENT]Allocate bufnum:%u,buf_total_size:%u", buf_num,
- buf_total_size);
-
- return RETURN_OK;
-
-free_buff:
- spfc_free_els_srq_buff(hba, srq_valid_wqe);
- return UNF_RETURN_ERROR;
-}
-
-void spfc_send_clear_srq_cmd(struct spfc_hba_info *hba,
- struct spfc_srq_info *srq_info)
-{
- union spfc_cmdqe cmdqe;
- struct cqm_queue *cqm_fcp_srq = NULL;
- ulong flag = 0;
-
- memset(&cmdqe, 0, sizeof(union spfc_cmdqe));
-
- spin_lock_irqsave(&srq_info->srq_spin_lock, flag);
- cqm_fcp_srq = srq_info->cqm_srq_info;
- if (!cqm_fcp_srq) {
- srq_info->state = SPFC_CLEAN_DONE;
- spin_unlock_irqrestore(&srq_info->srq_spin_lock, flag);
- return;
- }
-
- cmdqe.clear_srq.wd0.task_type = SPFC_TASK_T_CLEAR_SRQ;
- cmdqe.clear_srq.wd1.scqn = SPFC_LSW(hba->default_scqn);
- cmdqe.clear_srq.wd1.srq_type = srq_info->srq_type;
- cmdqe.clear_srq.srqc_gpa_h = SPFC_HIGH_32_BITS(cqm_fcp_srq->q_ctx_paddr);
- cmdqe.clear_srq.srqc_gpa_l = SPFC_LOW_32_BITS(cqm_fcp_srq->q_ctx_paddr);
-
- (void)queue_delayed_work(hba->work_queue, &srq_info->del_work,
- (ulong)msecs_to_jiffies(SPFC_SRQ_DEL_STAGE_TIMEOUT_MS));
- spin_unlock_irqrestore(&srq_info->srq_spin_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port 0x%x begin to clear srq 0x%x(0x%x,0x%llx)",
- hba->port_cfg.port_id, srq_info->srq_type,
- SPFC_LSW(hba->default_scqn),
- (u64)cqm_fcp_srq->q_ctx_paddr);
-
- /* Run the ROOT CMDQ command to issue the clear srq command. If the
- * command fails to be delivered, retry upon timeout.
- */
- (void)spfc_root_cmdq_enqueue(hba, &cmdqe, sizeof(cmdqe.clear_srq));
-}
-
-/*
- *Function Name : spfc_srq_clr_timeout
- *Function Description: Delete srq when timeout.
- *Input Parameters : *work
- *Output Parameters : N/A
- *Return Type : void
- */
-static void spfc_srq_clr_timeout(struct work_struct *work)
-{
-#define SPFC_MAX_DEL_SRQ_RETRY_TIMES 2
- struct spfc_srq_info *srq = NULL;
- struct spfc_hba_info *hba = NULL;
- struct cqm_queue *cqm_fcp_imm_srq = NULL;
- ulong flag = 0;
-
- srq = container_of(work, struct spfc_srq_info, del_work.work);
-
- spin_lock_irqsave(&srq->srq_spin_lock, flag);
- hba = srq->hba;
- cqm_fcp_imm_srq = srq->cqm_srq_info;
- spin_unlock_irqrestore(&srq->srq_spin_lock, flag);
-
- if (hba && cqm_fcp_imm_srq) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Port 0x%x clear srq 0x%x stat 0x%x timeout",
- hba->port_cfg.port_id, srq->srq_type, srq->state);
-
- /* If the delivery fails or the execution times out after the
- * delivery, try again once
- */
- srq->del_retry_time++;
- if (srq->del_retry_time < SPFC_MAX_DEL_SRQ_RETRY_TIMES)
- spfc_send_clear_srq_cmd(hba, srq);
- else
- srq->del_retry_time = 0;
- }
-}
-
-static u32 spfc_create_els_srq(struct spfc_hba_info *hba)
-{
- u32 ret = UNF_RETURN_ERROR;
- struct cqm_queue *cqm_srq = NULL;
- struct wq_header *wq_header = NULL;
- struct spfc_srq_info *srq_info = NULL;
- struct spfc_srq_ctx srq_ctx = {0};
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
-
- cqm_srq = cqm3_object_fc_srq_create(hba->dev_handle, SERVICE_T_FC,
- CQM_OBJECT_NONRDMA_SRQ, SPFC_SRQ_ELS_DATA_DEPTH,
- SPFC_SRQE_SIZE, hba);
- if (!cqm_srq) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Create Els Srq failed");
-
- return UNF_RETURN_ERROR;
- }
-
- /* Initialize SRQ */
- srq_info = &hba->els_srq_info;
- spfc_init_srq_info(hba, cqm_srq, srq_info);
- srq_info->srq_type = SPFC_SRQ_ELS;
- srq_info->enable = true;
- srq_info->state = SPFC_CLEAN_DONE;
- srq_info->del_retry_time = 0;
-
- /* The srq lock is initialized and can be created repeatedly */
- spin_lock_init(&srq_info->srq_spin_lock);
- srq_info->spin_lock_init = true;
-
- /* Initialize queue header */
- wq_header = (struct wq_header *)(void *)cqm_srq->q_header_vaddr;
- spfc_init_srq_header(wq_header);
- INIT_DELAYED_WORK(&srq_info->del_work, spfc_srq_clr_timeout);
-
- /* Apply for RQ buffer */
- ret = spfc_alloc_els_srq_buff(hba, srq_info->valid_wqe_num);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Allocate Els Srq buffer failed");
-
- cqm3_object_delete(&cqm_srq->object);
- memset(srq_info, 0, sizeof(struct spfc_srq_info));
- return UNF_RETURN_ERROR;
- }
-
- /* Fill RQE, update queue header */
- spfc_init_els_srq_wqe(srq_info);
-
- /* Fill SRQ CTX */
- memset(&srq_ctx, 0, sizeof(srq_ctx));
- spfc_cfg_srq_ctx(srq_info, &srq_ctx, SPFC_SRQ_ELS_SGE_LEN,
- srq_info->cqm_srq_info->q_room_buf_1.buf_list->pa);
-
- ret = spfc_creat_srqc_via_cmdq_sync(hba, &srq_ctx, srq_info->cqm_srq_info->q_ctx_paddr);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Creat Els Srqc failed");
-
- spfc_free_els_srq_buff(hba, srq_info->valid_wqe_num);
- cqm3_object_delete(&cqm_srq->object);
- memset(srq_info, 0, sizeof(struct spfc_srq_info));
-
- return UNF_RETURN_ERROR;
- }
-
- return RETURN_OK;
-}
-
-void spfc_wq_destroy_els_srq(struct work_struct *work)
-{
- struct spfc_hba_info *hba = NULL;
-
- FC_CHECK_RETURN_VOID(work);
- hba =
- container_of(work, struct spfc_hba_info, els_srq_clear_work);
- spfc_destroy_els_srq(hba);
-}
-
-void spfc_destroy_els_srq(void *handle)
-{
- /*
- * Receive clear els srq sts
- * ---then--->>> destroy els srq
- */
- struct spfc_srq_info *srq_info = NULL;
- struct spfc_hba_info *hba = NULL;
-
- FC_CHECK_RETURN_VOID(handle);
-
- hba = (struct spfc_hba_info *)handle;
- srq_info = &hba->els_srq_info;
-
- /* release receive buffer */
- spfc_free_els_srq_buff(hba, srq_info->valid_wqe_num);
-
- /* release srq info */
- if (srq_info->cqm_srq_info) {
- cqm3_object_delete(&srq_info->cqm_srq_info->object);
- srq_info->cqm_srq_info = NULL;
- }
- if (srq_info->spin_lock_init)
- srq_info->spin_lock_init = false;
- srq_info->hba = NULL;
- srq_info->enable = false;
- srq_info->state = SPFC_CLEAN_DONE;
-}
-
-/*
- *Function Name : spfc_create_srq
- *Function Description: Create SRQ, which contains four SRQ for receiving
- * instant data and a SRQ for receiving
- * ELS data.
- *Input Parameters : *hba Output Parameters : N/A Return Type :u32
- */
-static u32 spfc_create_srq(struct spfc_hba_info *hba)
-{
- u32 ret = UNF_RETURN_ERROR;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
-
- /* Create ELS SRQ */
- ret = spfc_create_els_srq(hba);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Create Els Srq failed");
- return UNF_RETURN_ERROR;
- }
-
- return RETURN_OK;
-}
-
-/*
- *Function Name : spfc_destroy_srq
- *Function Description: Release the SRQ resource, including the SRQ for
- * receiving the immediate data and the
- * SRQ forreceiving the ELS data.
- *Input Parameters : *hba Output Parameters : N/A
- *Return Type : void
- */
-static void spfc_destroy_srq(struct spfc_hba_info *hba)
-{
- FC_CHECK_RETURN_VOID(hba);
-
- spfc_destroy_els_srq(hba);
-}
-
-u32 spfc_create_common_share_queues(void *handle)
-{
- u32 ret = UNF_RETURN_ERROR;
- struct spfc_hba_info *hba = NULL;
-
- FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
- hba = (struct spfc_hba_info *)handle;
- /* Create & Init 8 pairs SCQ */
- ret = spfc_create_scq(hba);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Create scq failed");
-
- return UNF_RETURN_ERROR;
- }
-
- /* Alloc SRQ resource for SIRT & ELS */
- ret = spfc_create_srq(hba);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Create srq failed");
-
- spfc_flush_scq_ctx(hba);
- spfc_destroy_scq(hba);
-
- return UNF_RETURN_ERROR;
- }
-
- return RETURN_OK;
-}
-
-void spfc_destroy_common_share_queues(void *hba)
-{
- FC_CHECK_RETURN_VOID(hba);
-
- spfc_destroy_scq((struct spfc_hba_info *)hba);
- spfc_destroy_srq((struct spfc_hba_info *)hba);
-}
-
-static u8 spfc_map_fcp_data_cos(struct spfc_hba_info *hba)
-{
- u8 i = 0;
- u8 min_cnt_index = SPFC_PACKET_COS_FC_DATA;
- bool get_init_index = false;
-
- for (i = 0; i < SPFC_MAX_COS_NUM; i++) {
- /* Check whether the CoS is valid for the FC and cannot be
- * occupied by the CMD
- */
- if ((!(hba->cos_bitmap & ((u32)1 << i))) || i == SPFC_PACKET_COS_FC_CMD)
- continue;
-
- if (!get_init_index) {
- min_cnt_index = i;
- get_init_index = true;
- continue;
- }
-
- if (atomic_read(&hba->cos_rport_cnt[i]) <
- atomic_read(&hba->cos_rport_cnt[min_cnt_index]))
- min_cnt_index = i;
- }
-
- atomic_inc(&hba->cos_rport_cnt[min_cnt_index]);
-
- return min_cnt_index;
-}
-
-static void spfc_update_cos_rport_cnt(struct spfc_hba_info *hba, u8 cos_index)
-{
- if (cos_index >= SPFC_MAX_COS_NUM ||
- cos_index == SPFC_PACKET_COS_FC_CMD ||
- (!(hba->cos_bitmap & ((u32)1 << cos_index))) ||
- (atomic_read(&hba->cos_rport_cnt[cos_index]) == 0))
- return;
-
- atomic_dec(&hba->cos_rport_cnt[cos_index]);
-}
-
-void spfc_invalid_parent_sq(struct spfc_parent_sq_info *sq_info)
-{
- sq_info->rport_index = INVALID_VALUE32;
- sq_info->context_id = INVALID_VALUE32;
- sq_info->sq_queue_id = INVALID_VALUE32;
- sq_info->cache_id = INVALID_VALUE32;
- sq_info->local_port_id = INVALID_VALUE32;
- sq_info->remote_port_id = INVALID_VALUE32;
- sq_info->hba = NULL;
- sq_info->del_start_jiff = INVALID_VALUE64;
- sq_info->port_in_flush = false;
- sq_info->sq_in_sess_rst = false;
- sq_info->oqid_rd = INVALID_VALUE16;
- sq_info->oqid_wr = INVALID_VALUE16;
- sq_info->srq_ctx_addr = 0;
- sq_info->sqn_base = 0;
- atomic_set(&sq_info->sq_cached, false);
- sq_info->vport_id = 0;
- sq_info->sirt_dif_control.protect_opcode = UNF_DIF_ACTION_NONE;
- sq_info->need_offloaded = INVALID_VALUE8;
- atomic_set(&sq_info->sq_valid, false);
- atomic_set(&sq_info->flush_done_wait_cnt, 0);
- memset(&sq_info->delay_sqe, 0, sizeof(struct spfc_delay_sqe_ctrl_info));
- memset(sq_info->io_stat, 0, sizeof(sq_info->io_stat));
-}
-
-static void spfc_parent_sq_opreate_timeout(struct work_struct *work)
-{
- ulong flag = 0;
- struct spfc_parent_sq_info *parent_sq = NULL;
- struct spfc_parent_queue_info *parent_queue = NULL;
- struct spfc_hba_info *hba = NULL;
-
- FC_CHECK_RETURN_VOID(work);
-
- parent_sq = container_of(work, struct spfc_parent_sq_info, del_work.work);
- parent_queue = container_of(parent_sq, struct spfc_parent_queue_info, parent_sq_info);
- hba = (struct spfc_hba_info *)parent_sq->hba;
- FC_CHECK_RETURN_VOID(hba);
-
- spin_lock_irqsave(&parent_queue->parent_queue_state_lock, flag);
- if (parent_queue->offload_state == SPFC_QUEUE_STATE_DESTROYING) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "Port(0x%x) sq rport index(0x%x) local nportid(0x%x),remote nportid(0x%x) reset timeout.",
- hba->port_cfg.port_id, parent_sq->rport_index,
- parent_sq->local_port_id,
- parent_sq->remote_port_id);
- }
- spin_unlock_irqrestore(&parent_queue->parent_queue_state_lock, flag);
-}
-
-static void spfc_parent_sq_wait_flush_done_timeout(struct work_struct *work)
-{
- ulong flag = 0;
- struct spfc_parent_sq_info *parent_sq = NULL;
- struct spfc_parent_queue_info *parent_queue = NULL;
- struct spfc_hba_info *hba = NULL;
- u32 ctx_flush_done;
- u32 *ctx_dw = NULL;
- int ret;
- int sq_state = SPFC_STAT_PARENT_SQ_QUEUE_DELAYED_WORK;
- spinlock_t *prtq_state_lock = NULL;
-
- FC_CHECK_RETURN_VOID(work);
-
- parent_sq = container_of(work, struct spfc_parent_sq_info, flush_done_timeout_work.work);
-
- FC_CHECK_RETURN_VOID(parent_sq);
-
- parent_queue = container_of(parent_sq, struct spfc_parent_queue_info, parent_sq_info);
- prtq_state_lock = &parent_queue->parent_queue_state_lock;
- hba = (struct spfc_hba_info *)parent_sq->hba;
- FC_CHECK_RETURN_VOID(hba);
- FC_CHECK_RETURN_VOID(parent_queue);
-
- spin_lock_irqsave(prtq_state_lock, flag);
- if (parent_queue->offload_state != SPFC_QUEUE_STATE_DESTROYING) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) sq rport index(0x%x) is not destroying status,offloadsts is %d",
- hba->port_cfg.port_id, parent_sq->rport_index,
- parent_queue->offload_state);
- spin_unlock_irqrestore(prtq_state_lock, flag);
- return;
- }
-
- if (parent_queue->parent_ctx.cqm_parent_ctx_obj) {
- ctx_dw = (u32 *)((void *)(parent_queue->parent_ctx.cqm_parent_ctx_obj->vaddr));
- ctx_flush_done = ctx_dw[SPFC_CTXT_FLUSH_DONE_DW_POS] & SPFC_CTXT_FLUSH_DONE_MASK_BE;
- if (ctx_flush_done == 0) {
- spin_unlock_irqrestore(prtq_state_lock, flag);
-
- if (atomic_read(&parent_queue->parent_sq_info.flush_done_wait_cnt) <
- SPFC_SQ_WAIT_FLUSH_DONE_TIMEOUT_CNT) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[info]Port(0x%x) sq rport index(0x%x) wait flush done timeout %d times",
- hba->port_cfg.port_id, parent_sq->rport_index,
- atomic_read(&(parent_queue->parent_sq_info
- .flush_done_wait_cnt)));
-
- atomic_inc(&parent_queue->parent_sq_info.flush_done_wait_cnt);
-
- /* Delay Free Sq info */
- ret = queue_delayed_work(hba->work_queue,
- &(parent_queue->parent_sq_info
- .flush_done_timeout_work),
- (ulong)msecs_to_jiffies((u32)
- SPFC_SQ_WAIT_FLUSH_DONE_TIMEOUT_MS));
- if (!ret) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) rport(0x%x) queue delayed work failed ret:%d",
- hba->port_cfg.port_id,
- parent_sq->rport_index, ret);
- SPFC_HBA_STAT(hba, sq_state);
- }
-
- return;
- }
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) sq rport index(0x%x) has wait flush done %d times,do not free sq",
- hba->port_cfg.port_id,
- parent_sq->rport_index,
- atomic_read(&(parent_queue->parent_sq_info
- .flush_done_wait_cnt)));
-
- SPFC_HBA_STAT(hba, SPFC_STAT_CTXT_FLUSH_DONE);
- return;
- }
- }
-
- spin_unlock_irqrestore(prtq_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) sq rport index(0x%x) flush done bit is ok,free sq now",
- hba->port_cfg.port_id, parent_sq->rport_index);
-
- spfc_free_parent_queue_info(hba, parent_queue);
-}
-
-static void spfc_free_parent_sq(struct spfc_hba_info *hba,
- struct spfc_parent_queue_info *parq_info)
-{
-#define SPFC_WAIT_PRT_CTX_FUSH_DONE_LOOP_TIMES 100
- u32 ctx_flush_done = 0;
- u32 *ctx_dw = NULL;
- struct spfc_parent_sq_info *sq_info = NULL;
- u32 uidelaycnt = 0;
- struct list_head *list = NULL;
- struct spfc_suspend_sqe_info *suspend_sqe = NULL;
-
- sq_info = &parq_info->parent_sq_info;
-
- while (!list_empty(&sq_info->suspend_sqe_list)) {
- list = UNF_OS_LIST_NEXT(&sq_info->suspend_sqe_list);
- list_del(list);
- suspend_sqe = list_entry(list, struct spfc_suspend_sqe_info, list_sqe_entry);
- if (suspend_sqe) {
- if (!cancel_delayed_work(&suspend_sqe->timeout_work)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[warn]reset worker timer maybe timeout");
- }
-
- kfree(suspend_sqe);
- }
- }
-
- /* Free data cos */
- spfc_update_cos_rport_cnt(hba, parq_info->queue_data_cos);
-
- if (parq_info->parent_ctx.cqm_parent_ctx_obj) {
- ctx_dw = (u32 *)((void *)(parq_info->parent_ctx.cqm_parent_ctx_obj->vaddr));
- ctx_flush_done = ctx_dw[SPFC_CTXT_FLUSH_DONE_DW_POS] & SPFC_CTXT_FLUSH_DONE_MASK_BE;
- mb();
- if (parq_info->offload_state == SPFC_QUEUE_STATE_DESTROYING &&
- ctx_flush_done == 0) {
- do {
- ctx_flush_done = ctx_dw[SPFC_CTXT_FLUSH_DONE_DW_POS] &
- SPFC_CTXT_FLUSH_DONE_MASK_BE;
- mb();
- if (ctx_flush_done != 0)
- break;
- uidelaycnt++;
- } while (uidelaycnt < SPFC_WAIT_PRT_CTX_FUSH_DONE_LOOP_TIMES);
-
- if (ctx_flush_done == 0) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Port(0x%x) Rport(0x%x) flush done is not set",
- hba->port_cfg.port_id,
- sq_info->rport_index);
- }
- }
-
- cqm3_object_delete(&parq_info->parent_ctx.cqm_parent_ctx_obj->object);
- parq_info->parent_ctx.cqm_parent_ctx_obj = NULL;
- }
-
- spfc_invalid_parent_sq(sq_info);
-}
-
-u32 spfc_alloc_parent_sq(struct spfc_hba_info *hba,
- struct spfc_parent_queue_info *parq_info,
- struct unf_port_info *rport_info)
-{
- struct spfc_parent_sq_info *sq_ctrl = NULL;
- struct cqm_qpc_mpt *prnt_ctx = NULL;
- ulong flag = 0;
-
- /* Craete parent context via CQM */
- prnt_ctx = cqm3_object_qpc_mpt_create(hba->dev_handle, SERVICE_T_FC,
- CQM_OBJECT_SERVICE_CTX, SPFC_CNTX_SIZE_256B,
- parq_info, CQM_INDEX_INVALID);
- if (!prnt_ctx) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Create parent context failed, CQM_INDEX is 0x%x",
- CQM_INDEX_INVALID);
- goto parent_create_fail;
- }
-
- parq_info->parent_ctx.cqm_parent_ctx_obj = prnt_ctx;
- /* Initialize struct spfc_parent_sq_info */
- sq_ctrl = &parq_info->parent_sq_info;
- sq_ctrl->hba = (void *)hba;
- sq_ctrl->rport_index = rport_info->rport_index;
- sq_ctrl->sqn_base = rport_info->sqn_base;
- sq_ctrl->context_id = prnt_ctx->xid;
- sq_ctrl->sq_queue_id = SPFC_QID_SQ;
- sq_ctrl->cache_id = INVALID_VALUE32;
- sq_ctrl->local_port_id = INVALID_VALUE32;
- sq_ctrl->remote_port_id = INVALID_VALUE32;
- sq_ctrl->sq_in_sess_rst = false;
- atomic_set(&sq_ctrl->sq_valid, true);
- sq_ctrl->del_start_jiff = INVALID_VALUE64;
- sq_ctrl->service_type = SPFC_SERVICE_TYPE_FC;
- sq_ctrl->vport_id = (u8)rport_info->qos_level;
- sq_ctrl->cs_ctrl = (u8)rport_info->cs_ctrl;
- sq_ctrl->sirt_dif_control.protect_opcode = UNF_DIF_ACTION_NONE;
- sq_ctrl->need_offloaded = INVALID_VALUE8;
- atomic_set(&sq_ctrl->flush_done_wait_cnt, 0);
-
- /* Check whether the HBA is in the Linkdown state. Note that
- * offload_state must be in the non-FREE state.
- */
- spin_lock_irqsave(&hba->flush_state_lock, flag);
- sq_ctrl->port_in_flush = hba->in_flushing;
- spin_unlock_irqrestore(&hba->flush_state_lock, flag);
- memset(sq_ctrl->io_stat, 0, sizeof(sq_ctrl->io_stat));
-
- INIT_DELAYED_WORK(&sq_ctrl->del_work, spfc_parent_sq_opreate_timeout);
- INIT_DELAYED_WORK(&sq_ctrl->flush_done_timeout_work,
- spfc_parent_sq_wait_flush_done_timeout);
- INIT_LIST_HEAD(&sq_ctrl->suspend_sqe_list);
-
- memset(&sq_ctrl->delay_sqe, 0, sizeof(struct spfc_delay_sqe_ctrl_info));
-
- return RETURN_OK;
-
-parent_create_fail:
- parq_info->parent_ctx.cqm_parent_ctx_obj = NULL;
-
- return UNF_RETURN_ERROR;
-}
-
-static void
-spfc_init_prnt_ctxt_scq_qinfo(void *hba,
- struct spfc_parent_queue_info *prnt_qinfo)
-{
- u32 resp_scqn = 0;
- struct spfc_parent_context *ctx = NULL;
- struct spfc_scq_qinfo *resp_prnt_scq_ctxt = NULL;
- struct spfc_queue_info_bus queue_bus;
-
- /* Obtains the queue id of the scq returned by the CQM when the SCQ is
- * created
- */
- resp_scqn = prnt_qinfo->parent_sts_scq_info.cqm_queue_id;
-
- /* Obtains the Parent Context address */
- ctx = (struct spfc_parent_context *)(prnt_qinfo->parent_ctx.parent_ctx);
-
- resp_prnt_scq_ctxt = &ctx->resp_scq_qinfo;
- resp_prnt_scq_ctxt->hw_scqc_config.info.rq_th2_preld_cache_num = wqe_pre_load;
- resp_prnt_scq_ctxt->hw_scqc_config.info.rq_th1_preld_cache_num = wqe_pre_load;
- resp_prnt_scq_ctxt->hw_scqc_config.info.rq_th0_preld_cache_num = wqe_pre_load;
- resp_prnt_scq_ctxt->hw_scqc_config.info.rq_min_preld_cache_num = wqe_pre_load;
- resp_prnt_scq_ctxt->hw_scqc_config.info.sq_th2_preld_cache_num = wqe_pre_load;
- resp_prnt_scq_ctxt->hw_scqc_config.info.sq_th1_preld_cache_num = wqe_pre_load;
- resp_prnt_scq_ctxt->hw_scqc_config.info.sq_th0_preld_cache_num = wqe_pre_load;
- resp_prnt_scq_ctxt->hw_scqc_config.info.sq_min_preld_cache_num = wqe_pre_load;
- resp_prnt_scq_ctxt->hw_scqc_config.info.scq_n = (u64)resp_scqn;
- resp_prnt_scq_ctxt->hw_scqc_config.info.parity = 0;
-
- memset(&queue_bus, 0, sizeof(struct spfc_queue_info_bus));
- queue_bus.bus[ARRAY_INDEX_0] = resp_prnt_scq_ctxt->hw_scqc_config.pctxt_val1;
- resp_prnt_scq_ctxt->hw_scqc_config.info.parity = spfc_get_parity_value(queue_bus.bus,
- SPFC_HW_SCQC_BUS_ROW,
- SPFC_HW_SCQC_BUS_COL
- );
- spfc_cpu_to_big64(resp_prnt_scq_ctxt, sizeof(struct spfc_scq_qinfo));
-}
-
-static void
-spfc_init_prnt_ctxt_srq_qinfo(void *handle, struct spfc_parent_queue_info *prnt_qinfo)
-{
- struct spfc_parent_context *ctx = NULL;
- struct cqm_queue *cqm_els_srq = NULL;
- struct spfc_parent_sq_info *sq = NULL;
- struct spfc_queue_info_bus queue_bus;
- struct spfc_hba_info *hba = NULL;
-
- hba = (struct spfc_hba_info *)handle;
- /* Obtains the SQ address */
- sq = &prnt_qinfo->parent_sq_info;
-
- /* Obtains the Parent Context address */
- ctx = (struct spfc_parent_context *)(prnt_qinfo->parent_ctx.parent_ctx);
-
- cqm_els_srq = hba->els_srq_info.cqm_srq_info;
-
- /* Initialize the Parent SRQ INFO used when the ELS is received */
- ctx->els_srq_info.srqc_gpa = cqm_els_srq->q_ctx_paddr >> UNF_SHIFT_4;
-
- memset(&queue_bus, 0, sizeof(struct spfc_queue_info_bus));
- queue_bus.bus[ARRAY_INDEX_0] = ctx->els_srq_info.srqc_gpa;
- ctx->els_srq_info.parity = spfc_get_parity_value(queue_bus.bus, SPFC_HW_SRQC_BUS_ROW,
- SPFC_HW_SRQC_BUS_COL);
- spfc_cpu_to_big64(&ctx->els_srq_info, sizeof(struct spfc_srq_qinfo));
-
- ctx->imm_srq_info.srqc_gpa = 0;
- sq->srq_ctx_addr = 0;
-}
-
-static u16 spfc_get_max_sequence_id(void)
-{
- return SPFC_HRQI_SEQ_ID_MAX;
-}
-
-static void spfc_init_prnt_rsvd_qinfo(struct spfc_parent_queue_info *prnt_qinfo)
-{
- struct spfc_parent_context *ctx = NULL;
- struct spfc_hw_rsvd_queue *hw_rsvd_qinfo = NULL;
- u16 max_seq = 0;
- u32 each = 0, seq_index = 0;
-
- /* Obtains the Parent Context address */
- ctx = (struct spfc_parent_context *)(prnt_qinfo->parent_ctx.parent_ctx);
- hw_rsvd_qinfo = (struct spfc_hw_rsvd_queue *)&ctx->hw_rsvdq;
- memset(hw_rsvd_qinfo->seq_id_bitmap, 0, sizeof(hw_rsvd_qinfo->seq_id_bitmap));
-
- max_seq = spfc_get_max_sequence_id();
-
- /* special set for sequence id 0, which is always kept by ucode for
- * sending fcp-cmd
- */
- hw_rsvd_qinfo->seq_id_bitmap[SPFC_HRQI_SEQ_SEPCIAL_ID] = 1;
- seq_index = SPFC_HRQI_SEQ_SEPCIAL_ID - (max_seq >> SPFC_HRQI_SEQ_INDEX_SHIFT);
-
- /* Set the unavailable mask to start from max + 1 */
- for (each = (max_seq % SPFC_HRQI_SEQ_INDEX_MAX) + 1;
- each < SPFC_HRQI_SEQ_INDEX_MAX; each++) {
- hw_rsvd_qinfo->seq_id_bitmap[seq_index] |= ((u64)0x1) << each;
- }
-
- hw_rsvd_qinfo->seq_id_bitmap[seq_index] =
- cpu_to_be64(hw_rsvd_qinfo->seq_id_bitmap[seq_index]);
-
- /* sepcial set for sequence id 0 */
- if (seq_index != SPFC_HRQI_SEQ_SEPCIAL_ID)
- hw_rsvd_qinfo->seq_id_bitmap[SPFC_HRQI_SEQ_SEPCIAL_ID] =
- cpu_to_be64(hw_rsvd_qinfo->seq_id_bitmap[SPFC_HRQI_SEQ_SEPCIAL_ID]);
-
- for (each = 0; each < seq_index; each++)
- hw_rsvd_qinfo->seq_id_bitmap[each] = SPFC_HRQI_SEQ_INVALID_ID;
-
- /* no matter what the range of seq id, last_req_seq_id is fixed value
- * 0xff
- */
- hw_rsvd_qinfo->wd0.last_req_seq_id = SPFC_HRQI_SEQ_ID_MAX;
- hw_rsvd_qinfo->wd0.xid = prnt_qinfo->parent_sq_info.context_id;
-
- *(u64 *)&hw_rsvd_qinfo->wd0 =
- cpu_to_be64(*(u64 *)&hw_rsvd_qinfo->wd0);
-}
-
-/*
- *Function Name : spfc_init_prnt_sw_section_info
- *Function Description: Initialize the SW Section area that can be accessed by
- * the Parent Context uCode.
- *Input Parameters : *hba,
- * *prnt_qinfo
- *Output Parameters : N/A
- *Return Type : void
- */
-static void spfc_init_prnt_sw_section_info(struct spfc_hba_info *hba,
- struct spfc_parent_queue_info *prnt_qinfo)
-{
-#define SPFC_VLAN_ENABLE (1)
-#define SPFC_MB_PER_KB 1024
- u16 rport_index;
- struct spfc_parent_context *ctx = NULL;
- struct spfc_sw_section *sw_setion = NULL;
- u16 total_scq_num = SPFC_TOTAL_SCQ_NUM;
- u32 queue_id;
- dma_addr_t queue_hdr_paddr;
-
- /* Obtains the Parent Context address */
- ctx = (struct spfc_parent_context *)(prnt_qinfo->parent_ctx.parent_ctx);
- sw_setion = &ctx->sw_section;
-
- /* xid+vPortId */
- sw_setion->sw_ctxt_vport_xid.xid = prnt_qinfo->parent_sq_info.context_id;
- spfc_cpu_to_big32(&sw_setion->sw_ctxt_vport_xid, sizeof(sw_setion->sw_ctxt_vport_xid));
-
- /* conn_id */
- rport_index = SPFC_LSW(prnt_qinfo->parent_sq_info.rport_index);
- sw_setion->conn_id = cpu_to_be16(rport_index);
-
- /* Immediate parameters */
- sw_setion->immi_rq_page_size = 0;
-
- /* Parent SCQ INFO used for sending packets to the Cmnd */
- sw_setion->scq_num_rcv_cmd = cpu_to_be16((u16)prnt_qinfo->parent_cmd_scq_info.cqm_queue_id);
- sw_setion->scq_num_max_scqn = cpu_to_be16(total_scq_num);
-
- /* sw_ctxt_misc */
- sw_setion->sw_ctxt_misc.dw.srv_type = prnt_qinfo->parent_sq_info.service_type;
- sw_setion->sw_ctxt_misc.dw.port_id = hba->port_index;
-
- /* only the VN2VF mode is supported */
- sw_setion->sw_ctxt_misc.dw.vlan_id = 0;
- spfc_cpu_to_big32(&sw_setion->sw_ctxt_misc.pctxt_val0,
- sizeof(sw_setion->sw_ctxt_misc.pctxt_val0));
-
- /* Configuring the combo length */
- sw_setion->per_xmit_data_size = cpu_to_be32(combo_length * SPFC_MB_PER_KB);
- sw_setion->sw_ctxt_config.dw.work_mode = SPFC_PORT_MODE_INI;
- sw_setion->sw_ctxt_config.dw.status = FC_PARENT_STATUS_INVALID;
- sw_setion->sw_ctxt_config.dw.cos = 0;
- sw_setion->sw_ctxt_config.dw.oq_cos_cmd = SPFC_PACKET_COS_FC_CMD;
- sw_setion->sw_ctxt_config.dw.oq_cos_data = prnt_qinfo->queue_data_cos;
- sw_setion->sw_ctxt_config.dw.priority = 0;
- sw_setion->sw_ctxt_config.dw.vlan_enable = SPFC_VLAN_ENABLE;
- sw_setion->sw_ctxt_config.dw.sgl_num = dif_sgl_mode;
- spfc_cpu_to_big32(&sw_setion->sw_ctxt_config.pctxt_val1,
- sizeof(sw_setion->sw_ctxt_config.pctxt_val1));
- spfc_cpu_to_big32(&sw_setion->immi_dif_info, sizeof(sw_setion->immi_dif_info));
-
- queue_id = prnt_qinfo->parent_cmd_scq_info.local_queue_id;
- queue_hdr_paddr = hba->scq_info[queue_id].cqm_scq_info->q_header_paddr;
- sw_setion->cmd_scq_gpa_h = SPFC_HIGH_32_BITS(queue_hdr_paddr);
- sw_setion->cmd_scq_gpa_l = SPFC_LOW_32_BITS(queue_hdr_paddr);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[info]Port(0x%x) RPort(0x%x) CmdLocalScqn(0x%x) QheaderGpaH(0x%x) QheaderGpaL(0x%x)",
- hba->port_cfg.port_id, prnt_qinfo->parent_sq_info.rport_index, queue_id,
- sw_setion->cmd_scq_gpa_h, sw_setion->cmd_scq_gpa_l);
-
- spfc_cpu_to_big32(&sw_setion->cmd_scq_gpa_h, sizeof(sw_setion->cmd_scq_gpa_h));
- spfc_cpu_to_big32(&sw_setion->cmd_scq_gpa_l, sizeof(sw_setion->cmd_scq_gpa_l));
-}
-
-static void spfc_init_parent_context(void *hba, struct spfc_parent_queue_info *prnt_qinfo)
-{
- struct spfc_parent_context *ctx = NULL;
-
- ctx = (struct spfc_parent_context *)(prnt_qinfo->parent_ctx.parent_ctx);
-
- /* Initialize Parent Context */
- memset(ctx, 0, SPFC_CNTX_SIZE_256B);
-
- /* Initialize the Queue Info hardware area */
- spfc_init_prnt_ctxt_scq_qinfo(hba, prnt_qinfo);
- spfc_init_prnt_ctxt_srq_qinfo(hba, prnt_qinfo);
- spfc_init_prnt_rsvd_qinfo(prnt_qinfo);
-
- /* Initialize Software Section */
- spfc_init_prnt_sw_section_info(hba, prnt_qinfo);
-}
-
-void spfc_map_shared_queue_qid(struct spfc_hba_info *hba,
- struct spfc_parent_queue_info *parent_queue_info,
- u32 rport_index)
-{
- u32 cmd_scqn_local = 0;
- u32 sts_scqn_local = 0;
-
- /* The SCQ is used for each connection based on the balanced *
- * distribution of commands and responses
- */
- cmd_scqn_local = SPFC_RPORTID_TO_CMD_SCQN(rport_index);
- sts_scqn_local = SPFC_RPORTID_TO_STS_SCQN(rport_index);
- parent_queue_info->parent_cmd_scq_info.local_queue_id = cmd_scqn_local;
- parent_queue_info->parent_sts_scq_info.local_queue_id = sts_scqn_local;
- parent_queue_info->parent_cmd_scq_info.cqm_queue_id =
- hba->scq_info[cmd_scqn_local].scqn;
- parent_queue_info->parent_sts_scq_info.cqm_queue_id =
- hba->scq_info[sts_scqn_local].scqn;
-
- /* Each session share with immediate SRQ and ElsSRQ */
- parent_queue_info->parent_els_srq_info.local_queue_id = 0;
- parent_queue_info->parent_els_srq_info.cqm_queue_id = hba->els_srq_info.srqn;
-
- /* Allocate fcp data cos value */
- parent_queue_info->queue_data_cos = spfc_map_fcp_data_cos(hba);
-
- /* Allocate Parent SQ vPort */
- parent_queue_info->parent_sq_info.vport_id += parent_queue_info->queue_vport_id;
-}
-
-u32 spfc_send_session_enable(struct spfc_hba_info *hba, struct unf_port_info *rport_info)
-{
- struct spfc_parent_queue_info *parent_queue_info = NULL;
- dma_addr_t ctx_phy_addr = 0;
- void *ctx_addr = NULL;
- union spfc_cmdqe session_enable;
- u32 ret = UNF_RETURN_ERROR;
- struct spfc_parent_context *ctx = NULL;
- struct spfc_sw_section *sw_setion = NULL;
- struct spfc_host_keys key;
- u32 tx_mfs = 2048;
- u32 edtov_timer = 2000;
- ulong flag = 0;
- spinlock_t *prtq_state_lock = NULL;
- u32 index;
-
- memset(&session_enable, 0, sizeof(union spfc_cmdqe));
- memset(&key, 0, sizeof(struct spfc_host_keys));
- index = rport_info->rport_index;
- parent_queue_info = &hba->parent_queue_mgr->parent_queue[index];
- prtq_state_lock = &parent_queue_info->parent_queue_state_lock;
- spin_lock_irqsave(prtq_state_lock, flag);
-
- ctx = (struct spfc_parent_context *)(parent_queue_info->parent_ctx.parent_ctx);
- sw_setion = &ctx->sw_section;
-
- sw_setion->tx_mfs = cpu_to_be16((u16)(tx_mfs));
- sw_setion->e_d_tov_timer_val = cpu_to_be32(edtov_timer);
-
- spfc_big_to_cpu32(&sw_setion->sw_ctxt_misc.pctxt_val0,
- sizeof(sw_setion->sw_ctxt_misc.pctxt_val0));
- sw_setion->sw_ctxt_misc.dw.port_id = SPFC_GET_NETWORK_PORT_ID(hba);
- spfc_cpu_to_big32(&sw_setion->sw_ctxt_misc.pctxt_val0,
- sizeof(sw_setion->sw_ctxt_misc.pctxt_val0));
-
- spfc_big_to_cpu32(&sw_setion->sw_ctxt_config.pctxt_val1,
- sizeof(sw_setion->sw_ctxt_config.pctxt_val1));
- spfc_cpu_to_big32(&sw_setion->sw_ctxt_config.pctxt_val1,
- sizeof(sw_setion->sw_ctxt_config.pctxt_val1));
-
- parent_queue_info->parent_sq_info.rport_index = rport_info->rport_index;
- parent_queue_info->parent_sq_info.local_port_id = rport_info->local_nport_id;
- parent_queue_info->parent_sq_info.remote_port_id = rport_info->nport_id;
- parent_queue_info->parent_sq_info.context_id =
- parent_queue_info->parent_ctx.cqm_parent_ctx_obj->xid;
-
- /* Fill in contex to the chip */
- ctx_phy_addr = parent_queue_info->parent_ctx.cqm_parent_ctx_obj->paddr;
- ctx_addr = parent_queue_info->parent_ctx.cqm_parent_ctx_obj->vaddr;
- memcpy(ctx_addr, parent_queue_info->parent_ctx.parent_ctx,
- sizeof(struct spfc_parent_context));
- session_enable.session_enable.wd0.task_type = SPFC_TASK_T_SESS_EN;
- session_enable.session_enable.wd2.conn_id = rport_info->rport_index;
- session_enable.session_enable.wd2.scqn = hba->default_scqn;
- session_enable.session_enable.wd3.xid_p =
- parent_queue_info->parent_ctx.cqm_parent_ctx_obj->xid;
- session_enable.session_enable.context_gpa_hi = SPFC_HIGH_32_BITS(ctx_phy_addr);
- session_enable.session_enable.context_gpa_lo = SPFC_LOW_32_BITS(ctx_phy_addr);
-
- spin_unlock_irqrestore(prtq_state_lock, flag);
-
- key.wd3.sid_2 = (rport_info->local_nport_id & SPFC_KEY_WD3_SID_2_MASK) >> UNF_SHIFT_16;
- key.wd3.sid_1 = (rport_info->local_nport_id & SPFC_KEY_WD3_SID_1_MASK) >> UNF_SHIFT_8;
- key.wd4.sid_0 = rport_info->local_nport_id & SPFC_KEY_WD3_SID_0_MASK;
- key.wd4.did_0 = rport_info->nport_id & SPFC_KEY_WD4_DID_0_MASK;
- key.wd4.did_1 = (rport_info->nport_id & SPFC_KEY_WD4_DID_1_MASK) >> UNF_SHIFT_8;
- key.wd4.did_2 = (rport_info->nport_id & SPFC_KEY_WD4_DID_2_MASK) >> UNF_SHIFT_16;
- key.wd5.host_id = 0;
- key.wd5.port_id = hba->port_index;
-
- memcpy(&session_enable.session_enable.keys, &key, sizeof(struct spfc_host_keys));
-
- memcpy((void *)(uintptr_t)session_enable.session_enable.context,
- parent_queue_info->parent_ctx.parent_ctx,
- sizeof(struct spfc_parent_context));
- spfc_big_to_cpu32((void *)(uintptr_t)session_enable.session_enable.context,
- sizeof(struct spfc_parent_context));
-
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_MAJOR,
- "[info] xid:0x%x, sid:0x%x,did:0x%x parentcontext:",
- parent_queue_info->parent_ctx.cqm_parent_ctx_obj->xid,
- rport_info->local_nport_id, rport_info->nport_id);
-
- ret = spfc_root_cmdq_enqueue(hba, &session_enable, sizeof(session_enable.session_enable));
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
- "[err]RootCMDQEnqueue Error, free default session parent resource");
- return UNF_RETURN_ERROR;
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) send default session enable success,rport index(0x%x),context id(0x%x) SID=(0x%x), DID=(0x%x)",
- hba->port_cfg.port_id, rport_info->rport_index,
- parent_queue_info->parent_sq_info.context_id,
- rport_info->local_nport_id, rport_info->nport_id);
-
- return RETURN_OK;
-}
-
-u32 spfc_alloc_parent_resource(void *handle, struct unf_port_info *rport_info)
-{
- u32 ret = UNF_RETURN_ERROR;
- struct spfc_hba_info *hba = NULL;
- struct spfc_parent_queue_info *parent_queue_info = NULL;
- ulong flag = 0;
- spinlock_t *prtq_state_lock = NULL;
- u32 index;
-
- FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport_info, UNF_RETURN_ERROR);
-
- hba = (struct spfc_hba_info *)handle;
- if (!hba->parent_queue_mgr) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) cannot find parent queue pool",
- hba->port_cfg.port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- index = rport_info->rport_index;
- if (index >= UNF_SPFC_MAXRPORT_NUM) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) allocate parent resource failed, invlaid rport index(0x%x),rport nportid(0x%x)",
- hba->port_cfg.port_id, index,
- rport_info->nport_id);
-
- return UNF_RETURN_ERROR;
- }
-
- parent_queue_info = &hba->parent_queue_mgr->parent_queue[index];
- prtq_state_lock = &parent_queue_info->parent_queue_state_lock;
- spin_lock_irqsave(prtq_state_lock, flag);
-
- if (parent_queue_info->offload_state != SPFC_QUEUE_STATE_FREE) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) allocate parent resource failed, invlaid rport index(0x%x),rport nportid(0x%x), offload state(0x%x)",
- hba->port_cfg.port_id, index, rport_info->nport_id,
- parent_queue_info->offload_state);
-
- spin_unlock_irqrestore(prtq_state_lock, flag);
- return UNF_RETURN_ERROR;
- }
-
- parent_queue_info->offload_state = SPFC_QUEUE_STATE_INITIALIZED;
- /* Create Parent Context and Link List SQ */
- ret = spfc_alloc_parent_sq(hba, parent_queue_info, rport_info);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "Port(0x%x) alloc session resoure failed.rport index(0x%x),rport nportid(0x%x).",
- hba->port_cfg.port_id, index,
- rport_info->nport_id);
-
- parent_queue_info->offload_state = SPFC_QUEUE_STATE_FREE;
- spfc_invalid_parent_sq(&parent_queue_info->parent_sq_info);
- spin_unlock_irqrestore(prtq_state_lock, flag);
-
- return UNF_RETURN_ERROR;
- }
-
- /* Allocate the corresponding queue xid to each parent */
- spfc_map_shared_queue_qid(hba, parent_queue_info, rport_info->rport_index);
-
- /* Initialize Parent Context, including hardware area and ucode area */
- spfc_init_parent_context(hba, parent_queue_info);
-
- spin_unlock_irqrestore(prtq_state_lock, flag);
-
- /* Only default enable session obviously, other will enable secertly */
- if (unlikely(rport_info->rport_index == SPFC_DEFAULT_RPORT_INDEX))
- return spfc_send_session_enable(handle, rport_info);
-
- parent_queue_info->parent_sq_info.local_port_id = rport_info->local_nport_id;
- parent_queue_info->parent_sq_info.remote_port_id = rport_info->nport_id;
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) allocate parent sq success,rport index(0x%x),rport nportid(0x%x),context id(0x%x)",
- hba->port_cfg.port_id, rport_info->rport_index,
- rport_info->nport_id,
- parent_queue_info->parent_sq_info.context_id);
-
- return ret;
-}
-
-u32 spfc_free_parent_resource(void *handle, struct unf_port_info *rport_info)
-{
- struct spfc_parent_queue_info *parent_queue_info = NULL;
- ulong flag = 0;
- ulong rst_flag = 0;
- u32 ret = UNF_RETURN_ERROR;
- enum spfc_session_reset_mode mode = SPFC_SESS_RST_DELETE_IO_CONN_BOTH;
- struct spfc_hba_info *hba = NULL;
- spinlock_t *prtq_state_lock = NULL;
- spinlock_t *sq_enq_lock = NULL;
- u32 index;
-
- FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(rport_info, UNF_RETURN_ERROR);
-
- hba = (struct spfc_hba_info *)handle;
- if (!hba->parent_queue_mgr) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[warn]Port(0x%x) cannot find parent queue pool",
- hba->port_cfg.port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- /* get parent queue info (by rport index) */
- if (rport_info->rport_index >= UNF_SPFC_MAXRPORT_NUM) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[warn]Port(0x%x) free parent resource failed, invlaid rport_index(%u) rport_nport_id(0x%x)",
- hba->port_cfg.port_id, rport_info->rport_index, rport_info->nport_id);
-
- return UNF_RETURN_ERROR;
- }
-
- index = rport_info->rport_index;
- parent_queue_info = &hba->parent_queue_mgr->parent_queue[index];
- prtq_state_lock = &parent_queue_info->parent_queue_state_lock;
- sq_enq_lock = &parent_queue_info->parent_sq_info.parent_sq_enqueue_lock;
-
- spin_lock_irqsave(prtq_state_lock, flag);
- /* 1. for has been offload */
- if (parent_queue_info->offload_state == SPFC_QUEUE_STATE_OFFLOADED) {
- parent_queue_info->offload_state = SPFC_QUEUE_STATE_DESTROYING;
- spin_unlock_irqrestore(prtq_state_lock, flag);
-
- /* set reset state, in order to prevent I/O in_SQ */
- spin_lock_irqsave(sq_enq_lock, rst_flag);
- parent_queue_info->parent_sq_info.sq_in_sess_rst = true;
- spin_unlock_irqrestore(sq_enq_lock, rst_flag);
-
- /* check pcie device state */
- if (!hba->dev_present) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) hba is not present, free directly. rport_index(0x%x:0x%x) local_nportid(0x%x) remote_nportid(0x%x:0x%x)",
- hba->port_cfg.port_id, rport_info->rport_index,
- parent_queue_info->parent_sq_info.rport_index,
- parent_queue_info->parent_sq_info.local_port_id,
- rport_info->nport_id,
- parent_queue_info->parent_sq_info.remote_port_id);
-
- spfc_free_parent_queue_info(hba, parent_queue_info);
- return RETURN_OK;
- }
-
- parent_queue_info->parent_sq_info.del_start_jiff = jiffies;
- (void)queue_delayed_work(hba->work_queue,
- &parent_queue_info->parent_sq_info.del_work,
- (ulong)msecs_to_jiffies((u32)
- SPFC_SQ_DEL_STAGE_TIMEOUT_MS));
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) begin to reset parent session, rport_index(0x%x:0x%x) local_nportid(0x%x) remote_nportid(0x%x:0x%x)",
- hba->port_cfg.port_id, rport_info->rport_index,
- parent_queue_info->parent_sq_info.rport_index,
- parent_queue_info->parent_sq_info.local_port_id,
- rport_info->nport_id,
- parent_queue_info->parent_sq_info.remote_port_id);
- /* Forcibly set both mode */
- mode = SPFC_SESS_RST_DELETE_IO_CONN_BOTH;
- ret = spfc_send_session_rst_cmd(hba, parent_queue_info, mode);
-
- return ret;
- } else if (parent_queue_info->offload_state == SPFC_QUEUE_STATE_INITIALIZED) {
- /* 2. for resource has been alloc, but not offload */
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) parent sq is not offloaded, free directly. rport_index(0x%x:0x%x) local_nportid(0x%x) remote_nportid(0x%x:0x%x)",
- hba->port_cfg.port_id, rport_info->rport_index,
- parent_queue_info->parent_sq_info.rport_index,
- parent_queue_info->parent_sq_info.local_port_id,
- rport_info->nport_id,
- parent_queue_info->parent_sq_info.remote_port_id);
-
- spin_unlock_irqrestore(prtq_state_lock, flag);
- spfc_free_parent_queue_info(hba, parent_queue_info);
-
- return RETURN_OK;
- } else if (parent_queue_info->offload_state ==
- SPFC_QUEUE_STATE_OFFLOADING) {
- /* 3. for driver has offloading CMND to uCode */
- spfc_push_destroy_parent_queue_sqe(hba, parent_queue_info, rport_info);
- spin_unlock_irqrestore(prtq_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) parent sq is offloading, push to delay free. rport_index(0x%x:0x%x) local_nportid(0x%x) remote_nportid(0x%x:0x%x)",
- hba->port_cfg.port_id, rport_info->rport_index,
- parent_queue_info->parent_sq_info.rport_index,
- parent_queue_info->parent_sq_info.local_port_id,
- rport_info->nport_id,
- parent_queue_info->parent_sq_info.remote_port_id);
-
- return RETURN_OK;
- }
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) parent sq is not created, do not need free state(0x%x) rport_index(0x%x:0x%x) local_nportid(0x%x) remote_nportid(0x%x:0x%x)",
- hba->port_cfg.port_id, parent_queue_info->offload_state,
- rport_info->rport_index,
- parent_queue_info->parent_sq_info.rport_index,
- parent_queue_info->parent_sq_info.local_port_id,
- rport_info->nport_id,
- parent_queue_info->parent_sq_info.remote_port_id);
-
- spin_unlock_irqrestore(prtq_state_lock, flag);
-
- return RETURN_OK;
-}
-
-void spfc_free_parent_queue_mgr(void *handle)
-{
- u32 index = 0;
- struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
- struct spfc_hba_info *hba = NULL;
-
- FC_CHECK_RETURN_VOID(handle);
-
- hba = (struct spfc_hba_info *)handle;
- if (!hba->parent_queue_mgr)
- return;
- parent_queue_mgr = hba->parent_queue_mgr;
-
- for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
- if (parent_queue_mgr->parent_queue[index].parent_ctx.parent_ctx)
- parent_queue_mgr->parent_queue[index].parent_ctx.parent_ctx = NULL;
- }
-
- if (parent_queue_mgr->parent_sq_buf_list.buflist) {
- for (index = 0; index < parent_queue_mgr->parent_sq_buf_list.buf_num; index++) {
- if (parent_queue_mgr->parent_sq_buf_list.buflist[index].paddr != 0) {
- pci_unmap_single(hba->pci_dev,
- parent_queue_mgr->parent_sq_buf_list
- .buflist[index].paddr,
- parent_queue_mgr->parent_sq_buf_list.buf_size,
- DMA_BIDIRECTIONAL);
- parent_queue_mgr->parent_sq_buf_list.buflist[index].paddr = 0;
- }
- kfree(parent_queue_mgr->parent_sq_buf_list.buflist[index].vaddr);
- parent_queue_mgr->parent_sq_buf_list.buflist[index].vaddr = NULL;
- }
-
- kfree(parent_queue_mgr->parent_sq_buf_list.buflist);
- parent_queue_mgr->parent_sq_buf_list.buflist = NULL;
- }
-
- vfree(parent_queue_mgr);
- hba->parent_queue_mgr = NULL;
-}
-
-void spfc_free_parent_queues(void *handle)
-{
- u32 index = 0;
- ulong flag = 0;
- struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
- struct spfc_hba_info *hba = NULL;
- spinlock_t *prtq_state_lock = NULL;
-
- FC_CHECK_RETURN_VOID(handle);
-
- hba = (struct spfc_hba_info *)handle;
- parent_queue_mgr = hba->parent_queue_mgr;
-
- for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
- prtq_state_lock = &parent_queue_mgr->parent_queue[index].parent_queue_state_lock;
- spin_lock_irqsave(prtq_state_lock, flag);
-
- if (SPFC_QUEUE_STATE_DESTROYING ==
- parent_queue_mgr->parent_queue[index].offload_state) {
- spin_unlock_irqrestore(prtq_state_lock, flag);
-
- (void)cancel_delayed_work_sync(&parent_queue_mgr->parent_queue[index]
- .parent_sq_info.del_work);
- (void)cancel_delayed_work_sync(&parent_queue_mgr->parent_queue[index]
- .parent_sq_info.flush_done_timeout_work);
-
- /* free parent queue */
- spfc_free_parent_queue_info(hba, &parent_queue_mgr->parent_queue[index]);
- continue;
- }
-
- spin_unlock_irqrestore(prtq_state_lock, flag);
- }
-}
-
-/*
- *Function Name : spfc_alloc_parent_queue_mgr
- *Function Description: Allocate and initialize parent queue manager.
- *Input Parameters : *handle
- *Output Parameters : N/A
- *Return Type : void
- */
-u32 spfc_alloc_parent_queue_mgr(void *handle)
-{
- u32 index = 0;
- struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
- u32 buf_total_size;
- u32 buf_num;
- u32 alloc_idx;
- u32 cur_buf_idx = 0;
- u32 cur_buf_offset = 0;
- u32 prt_ctx_size = sizeof(struct spfc_parent_context);
- u32 buf_cnt_perhugebuf;
- struct spfc_hba_info *hba = NULL;
- u32 init_val = INVALID_VALUE32;
- dma_addr_t paddr;
-
- FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
-
- hba = (struct spfc_hba_info *)handle;
- parent_queue_mgr = (struct spfc_parent_queue_mgr *)vmalloc(sizeof
- (struct spfc_parent_queue_mgr));
- if (!parent_queue_mgr) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) cannot allocate queue manager",
- hba->port_cfg.port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- hba->parent_queue_mgr = parent_queue_mgr;
- memset(parent_queue_mgr, 0, sizeof(struct spfc_parent_queue_mgr));
-
- for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
- spin_lock_init(&parent_queue_mgr->parent_queue[index].parent_queue_state_lock);
- parent_queue_mgr->parent_queue[index].offload_state = SPFC_QUEUE_STATE_FREE;
- spin_lock_init(&(parent_queue_mgr->parent_queue[index]
- .parent_sq_info.parent_sq_enqueue_lock));
- parent_queue_mgr->parent_queue[index].parent_cmd_scq_info.cqm_queue_id = init_val;
- parent_queue_mgr->parent_queue[index].parent_sts_scq_info.cqm_queue_id = init_val;
- parent_queue_mgr->parent_queue[index].parent_els_srq_info.cqm_queue_id = init_val;
- parent_queue_mgr->parent_queue[index].parent_sq_info.del_start_jiff = init_val;
- parent_queue_mgr->parent_queue[index].queue_vport_id = hba->vpid_start;
- }
-
- buf_total_size = prt_ctx_size * UNF_SPFC_MAXRPORT_NUM;
- parent_queue_mgr->parent_sq_buf_list.buf_size = buf_total_size > BUF_LIST_PAGE_SIZE ?
- BUF_LIST_PAGE_SIZE : buf_total_size;
- buf_cnt_perhugebuf = parent_queue_mgr->parent_sq_buf_list.buf_size / prt_ctx_size;
- buf_num = UNF_SPFC_MAXRPORT_NUM % buf_cnt_perhugebuf ?
- UNF_SPFC_MAXRPORT_NUM / buf_cnt_perhugebuf + 1 :
- UNF_SPFC_MAXRPORT_NUM / buf_cnt_perhugebuf;
- parent_queue_mgr->parent_sq_buf_list.buflist =
- (struct buff_list *)kmalloc(buf_num * sizeof(struct buff_list), GFP_KERNEL);
- parent_queue_mgr->parent_sq_buf_list.buf_num = buf_num;
-
- if (!parent_queue_mgr->parent_sq_buf_list.buflist) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Allocate QueuMgr buf list failed out of memory");
- goto free_parent_queue;
- }
- memset(parent_queue_mgr->parent_sq_buf_list.buflist, 0, buf_num * sizeof(struct buff_list));
-
- for (alloc_idx = 0; alloc_idx < buf_num; alloc_idx++) {
- parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].vaddr =
- kmalloc(parent_queue_mgr->parent_sq_buf_list.buf_size, GFP_KERNEL);
- if (!parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].vaddr)
- goto free_parent_queue;
-
- memset(parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].vaddr, 0,
- parent_queue_mgr->parent_sq_buf_list.buf_size);
-
- parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].paddr =
- pci_map_single(hba->pci_dev,
- parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].vaddr,
- parent_queue_mgr->parent_sq_buf_list.buf_size,
- DMA_BIDIRECTIONAL);
- paddr = parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].paddr;
- if (pci_dma_mapping_error(hba->pci_dev, paddr)) {
- parent_queue_mgr->parent_sq_buf_list.buflist[alloc_idx].paddr = 0;
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[err]Map QueuMgr address failed");
-
- goto free_parent_queue;
- }
- }
-
- for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
- cur_buf_idx = index / buf_cnt_perhugebuf;
- cur_buf_offset = prt_ctx_size * (index % buf_cnt_perhugebuf);
-
- parent_queue_mgr->parent_queue[index].parent_ctx.parent_ctx =
- parent_queue_mgr->parent_sq_buf_list.buflist[cur_buf_idx].vaddr +
- cur_buf_offset;
- parent_queue_mgr->parent_queue[index].parent_ctx.parent_ctx_addr =
- parent_queue_mgr->parent_sq_buf_list.buflist[cur_buf_idx].paddr +
- cur_buf_offset;
- }
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_INFO,
- "[EVENT]Allocate bufnum:%u,buf_total_size:%u", buf_num, buf_total_size);
-
- return RETURN_OK;
-
-free_parent_queue:
- spfc_free_parent_queue_mgr(hba);
- return UNF_RETURN_ERROR;
-}
-
-static void spfc_rlease_all_wqe_pages(struct spfc_hba_info *hba)
-{
- u32 index;
- struct spfc_wqe_page *wpg = NULL;
-
- FC_CHECK_RETURN_VOID((hba));
-
- wpg = hba->sq_wpg_pool.wpg_pool_addr;
-
- for (index = 0; index < hba->sq_wpg_pool.wpg_cnt; index++) {
- if (wpg->wpg_addr) {
- dma_pool_free(hba->sq_wpg_pool.wpg_dma_pool,
- wpg->wpg_addr, wpg->wpg_phy_addr);
- wpg->wpg_addr = NULL;
- wpg->wpg_phy_addr = 0;
- }
-
- wpg++;
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port[%u] free total %u wqepages", hba->port_index,
- index);
-}
-
-u32 spfc_alloc_parent_sq_wqe_page_pool(void *handle)
-{
- u32 index = 0;
- struct spfc_sq_wqepage_pool *wpg_pool = NULL;
- struct spfc_wqe_page *wpg = NULL;
- struct spfc_hba_info *hba = NULL;
-
- hba = (struct spfc_hba_info *)handle;
- wpg_pool = &hba->sq_wpg_pool;
-
- INIT_LIST_HEAD(&wpg_pool->list_free_wpg_pool);
- spin_lock_init(&wpg_pool->wpg_pool_lock);
- atomic_set(&wpg_pool->wpg_in_use, 0);
-
- /* Calculate the number of Wqe Page required in the pool */
- wpg_pool->wpg_size = wqe_page_size;
- wpg_pool->wpg_cnt = SPFC_MIN_WP_NUM * SPFC_MAX_SSQ_NUM +
- ((hba->exi_count * SPFC_SQE_SIZE) / wpg_pool->wpg_size);
- wpg_pool->wqe_per_wpg = wpg_pool->wpg_size / SPFC_SQE_SIZE;
-
- /* Craete DMA POOL */
- wpg_pool->wpg_dma_pool = dma_pool_create("spfc_wpg_pool",
- &hba->pci_dev->dev,
- wpg_pool->wpg_size,
- SPFC_SQE_SIZE, 0);
- if (!wpg_pool->wpg_dma_pool) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Cannot allocate SQ WqePage DMA pool");
-
- goto out_create_dma_pool_err;
- }
-
- /* Allocate arrays to record all WqePage addresses */
- wpg_pool->wpg_pool_addr = (struct spfc_wqe_page *)vmalloc(wpg_pool->wpg_cnt *
- sizeof(struct spfc_wqe_page));
- if (!wpg_pool->wpg_pool_addr) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Allocate SQ WqePageAddr array failed");
-
- goto out_alloc_wpg_array_err;
- }
- wpg = wpg_pool->wpg_pool_addr;
- memset(wpg, 0, wpg_pool->wpg_cnt * sizeof(struct spfc_wqe_page));
-
- for (index = 0; index < wpg_pool->wpg_cnt; index++) {
- wpg->wpg_addr = dma_pool_alloc(wpg_pool->wpg_dma_pool, GFP_KERNEL,
- (u64 *)&wpg->wpg_phy_addr);
- if (!wpg->wpg_addr) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT,
- UNF_ERR, "[err]Dma pool allocated failed");
- break;
- }
-
- /* To ensure security, clear the memory */
- memset(wpg->wpg_addr, 0, wpg_pool->wpg_size);
-
- /* Add to the idle linked list */
- INIT_LIST_HEAD(&wpg->entry_wpg);
- list_add_tail(&wpg->entry_wpg, &wpg_pool->list_free_wpg_pool);
-
- wpg++;
- }
- /* ALL allocated successfully */
- if (wpg_pool->wpg_cnt == index)
- return RETURN_OK;
-
- spfc_rlease_all_wqe_pages(hba);
- vfree(wpg_pool->wpg_pool_addr);
- wpg_pool->wpg_pool_addr = NULL;
-
-out_alloc_wpg_array_err:
- dma_pool_destroy(wpg_pool->wpg_dma_pool);
- wpg_pool->wpg_dma_pool = NULL;
-
-out_create_dma_pool_err:
- return UNF_RETURN_ERROR;
-}
-
-void spfc_free_parent_sq_wqe_page_pool(void *handle)
-{
- struct spfc_hba_info *hba = NULL;
-
- FC_CHECK_RETURN_VOID((handle));
- hba = (struct spfc_hba_info *)handle;
- spfc_rlease_all_wqe_pages(hba);
- hba->sq_wpg_pool.wpg_cnt = 0;
-
- if (hba->sq_wpg_pool.wpg_pool_addr) {
- vfree(hba->sq_wpg_pool.wpg_pool_addr);
- hba->sq_wpg_pool.wpg_pool_addr = NULL;
- }
-
- dma_pool_destroy(hba->sq_wpg_pool.wpg_dma_pool);
- hba->sq_wpg_pool.wpg_dma_pool = NULL;
-}
-
-static u32 spfc_parent_sq_ring_direct_wqe_doorbell(struct spfc_parent_ssq_info *sq, u8 *direct_wqe)
-{
- u32 ret = RETURN_OK;
- int ravl;
- u16 pmsn;
- u64 queue_hdr_db_val;
- struct spfc_hba_info *hba;
-
- hba = (struct spfc_hba_info *)sq->hba;
- pmsn = sq->last_pmsn;
-
- if (sq->cache_id == INVALID_VALUE32) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]SQ(0x%x) invalid cid", sq->context_id);
- return RETURN_ERROR;
- }
- /* Fill Doorbell Record */
- queue_hdr_db_val = sq->queue_header->door_bell_record;
- queue_hdr_db_val &= (u64)(~(0xFFFFFFFF));
- queue_hdr_db_val |= (u64)((u64)pmsn << UNF_SHIFT_16 | pmsn);
- sq->queue_header->door_bell_record =
- cpu_to_be64(queue_hdr_db_val);
-
- ravl = cqm_ring_direct_wqe_db_fc(hba->dev_handle, SERVICE_T_FC, direct_wqe);
- if (unlikely(ravl != CQM_SUCCESS)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]SQ(0x%x) send DB failed", sq->context_id);
-
- ret = RETURN_ERROR;
- }
-
- atomic_inc(&sq->sq_db_cnt);
-
- return ret;
-}
-
-u32 spfc_parent_sq_ring_doorbell(struct spfc_parent_ssq_info *sq, u8 qos_level, u32 c)
-{
- u32 ret = RETURN_OK;
- int ravl;
- u16 pmsn;
- u8 pmsn_lo;
- u8 pmsn_hi;
- u64 db_val_qw;
- struct spfc_hba_info *hba;
- struct spfc_parent_sq_db door_bell;
-
- hba = (struct spfc_hba_info *)sq->hba;
- pmsn = sq->last_pmsn;
- /* Obtain the low 8 Bit of PMSN */
- pmsn_lo = (u8)(pmsn & SPFC_PMSN_MASK);
- /* Obtain the high 8 Bit of PMSN */
- pmsn_hi = (u8)((pmsn >> UNF_SHIFT_8) & SPFC_PMSN_MASK);
- door_bell.wd0.service_type = SPFC_LSW(sq->service_type);
- door_bell.wd0.cos = 0;
- /* c = 0 data type, c = 1 control type, two type are different in mqm */
- door_bell.wd0.c = c;
- door_bell.wd0.arm = SPFC_DB_ARM_DISABLE;
- door_bell.wd0.cntx_size = SPFC_CNTX_SIZE_T_256B;
- door_bell.wd0.xid = sq->context_id;
- door_bell.wd1.sm_data = sq->cache_id;
- door_bell.wd1.qid = sq->sq_queue_id;
- door_bell.wd1.pi_hi = (u32)pmsn_hi;
-
- if (sq->cache_id == INVALID_VALUE32) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]SQ(0x%x) invalid cid", sq->context_id);
- return UNF_RETURN_ERROR;
- }
- /* Fill Doorbell Record */
- db_val_qw = sq->queue_header->door_bell_record;
- db_val_qw &= (u64)(~(SPFC_DB_VAL_MASK));
- db_val_qw |= (u64)((u64)pmsn << UNF_SHIFT_16 | pmsn);
- sq->queue_header->door_bell_record = cpu_to_be64(db_val_qw);
-
- /* ring doorbell */
- db_val_qw = *(u64 *)&door_bell;
- ravl = cqm3_ring_hardware_db_fc(hba->dev_handle, SERVICE_T_FC, pmsn_lo,
- (qos_level & SPFC_QOS_LEVEL_MASK),
- db_val_qw);
- if (unlikely(ravl != CQM_SUCCESS)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]SQ(0x%x) send DB(0x%llx) failed",
- sq->context_id, db_val_qw);
-
- ret = UNF_RETURN_ERROR;
- }
-
- /* Doorbell success counter */
- atomic_inc(&sq->sq_db_cnt);
-
- return ret;
-}
-
-u32 spfc_direct_sq_enqueue(struct spfc_parent_ssq_info *ssq, struct spfc_sqe *io_sqe, u8 wqe_type)
-{
- u32 ret = RETURN_OK;
- u32 msn_wd = INVALID_VALUE32;
- u16 link_wqe_msn = 0;
- ulong flag = 0;
- struct spfc_wqe_page *tail_wpg = NULL;
- struct spfc_sqe *sqe_in_wp = NULL;
- struct spfc_linkwqe *link_wqe = NULL;
- struct spfc_linkwqe *link_wqe_last_part = NULL;
- u64 wqe_gpa;
- struct spfc_direct_wqe_db dre_door_bell;
-
- spin_lock_irqsave(&ssq->parent_sq_enqueue_lock, flag);
- tail_wpg = SPFC_GET_SQ_TAIL(ssq);
- if (ssq->wqe_offset == ssq->wqe_num_per_buf) {
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_INFO,
- "[info]Ssq(0x%x), xid(0x%x) qid(0x%x) add wqepage at Pmsn(0x%x), sqe_minus_cqe_cnt(0x%x)",
- ssq->sqn, ssq->context_id, ssq->sq_queue_id,
- ssq->last_pmsn,
- atomic_read(&ssq->sqe_minus_cqe_cnt));
-
- link_wqe_msn = SPFC_MSN_DEC(ssq->last_pmsn);
- link_wqe = (struct spfc_linkwqe *)spfc_get_wqe_page_entry(tail_wpg,
- ssq->wqe_offset);
- msn_wd = be32_to_cpu(link_wqe->val_wd1);
- msn_wd |= ((u32)(link_wqe_msn & SPFC_MSNWD_L_MASK));
- msn_wd |= (((u32)(link_wqe_msn & SPFC_MSNWD_H_MASK)) << UNF_SHIFT_16);
- link_wqe->val_wd1 = cpu_to_be32(msn_wd);
- link_wqe_last_part = (struct spfc_linkwqe *)((u8 *)link_wqe +
- SPFC_EXTEND_WQE_OFFSET);
- link_wqe_last_part->val_wd1 = link_wqe->val_wd1;
- spfc_set_direct_wqe_owner_be(link_wqe, ssq->last_pi_owner);
- ssq->wqe_offset = 0;
- ssq->last_pi_owner = !ssq->last_pi_owner;
- }
- sqe_in_wp =
- (struct spfc_sqe *)spfc_get_wqe_page_entry(tail_wpg, ssq->wqe_offset);
- spfc_build_wqe_owner_pmsn(io_sqe, (ssq->last_pi_owner), ssq->last_pmsn);
- SPFC_IO_STAT((struct spfc_hba_info *)ssq->hba, wqe_type);
-
- wqe_gpa = tail_wpg->wpg_phy_addr + (ssq->wqe_offset * sizeof(struct spfc_sqe));
- io_sqe->wqe_gpa = (wqe_gpa >> UNF_SHIFT_6);
-
- dre_door_bell.wd0.ddb = IWARP_FC_DDB_TYPE;
- dre_door_bell.wd0.cos = 0;
- dre_door_bell.wd0.c = 0;
- dre_door_bell.wd0.pi_hi =
- (u32)(ssq->last_pmsn >> UNF_SHIFT_12) & SPFC_DB_WD0_PI_H_MASK;
- dre_door_bell.wd0.cntx_size = SPFC_CNTX_SIZE_T_256B;
- dre_door_bell.wd0.xid = ssq->context_id;
- dre_door_bell.wd1.sm_data = ssq->cache_id;
- dre_door_bell.wd1.pi_lo = (u32)(ssq->last_pmsn & SPFC_DB_WD0_PI_L_MASK);
- io_sqe->db_val = *(u64 *)&dre_door_bell;
-
- spfc_convert_parent_wqe_to_big_endian(io_sqe);
- memcpy(sqe_in_wp, io_sqe, sizeof(struct spfc_sqe));
- spfc_set_direct_wqe_owner_be(sqe_in_wp, ssq->last_pi_owner);
-
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_INFO,
- "[INFO]Ssq(0x%x) xid:0x%x,qid:0x%x wqegpa:0x%llx,o:0x%x,outstandind:0x%x,pmsn:0x%x,cmsn:0x%x",
- ssq->sqn, ssq->context_id, ssq->sq_queue_id, wqe_gpa,
- ssq->last_pi_owner, atomic_read(&ssq->sqe_minus_cqe_cnt),
- ssq->last_pmsn, SPFC_GET_QUEUE_CMSN(ssq));
-
- ssq->accum_wqe_cnt++;
- if (ssq->accum_wqe_cnt == accum_db_num) {
- ret = spfc_parent_sq_ring_direct_wqe_doorbell(ssq, (void *)sqe_in_wp);
- if (unlikely(ret != RETURN_OK))
- SPFC_ERR_IO_STAT((struct spfc_hba_info *)ssq->hba, wqe_type);
- ssq->accum_wqe_cnt = 0;
- }
-
- ssq->wqe_offset += 1;
- ssq->last_pmsn = SPFC_MSN_INC(ssq->last_pmsn);
- atomic_inc(&ssq->sq_wqe_cnt);
- atomic_inc(&ssq->sqe_minus_cqe_cnt);
- SPFC_SQ_IO_STAT(ssq, wqe_type);
- spin_unlock_irqrestore(&ssq->parent_sq_enqueue_lock, flag);
- return ret;
-}
-
-u32 spfc_parent_ssq_enqueue(struct spfc_parent_ssq_info *ssq, struct spfc_sqe *io_sqe, u8 wqe_type)
-{
- u32 ret = RETURN_OK;
- u32 addr_wd = INVALID_VALUE32;
- u32 msn_wd = INVALID_VALUE32;
- u16 link_wqe_msn = 0;
- ulong flag = 0;
- struct spfc_wqe_page *new_wqe_page = NULL;
- struct spfc_wqe_page *tail_wpg = NULL;
- struct spfc_sqe *sqe_in_wp = NULL;
- struct spfc_linkwqe *link_wqe = NULL;
- struct spfc_linkwqe *link_wqe_last_part = NULL;
- u32 cur_cmsn = 0;
- u8 qos_level = (u8)io_sqe->ts_sl.cont.icmnd.info.dif_info.wd1.vpid;
- u32 c = SPFC_DB_C_BIT_CONTROL_TYPE;
-
- if (ssq->queue_style == SPFC_QUEUE_RING_STYLE)
- return spfc_direct_sq_enqueue(ssq, io_sqe, wqe_type);
-
- spin_lock_irqsave(&ssq->parent_sq_enqueue_lock, flag);
- tail_wpg = SPFC_GET_SQ_TAIL(ssq);
- if (ssq->wqe_offset == ssq->wqe_num_per_buf) {
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_INFO,
- "[info]Ssq(0x%x), xid(0x%x) qid(0x%x) add wqepage at Pmsn(0x%x), WpgCnt(0x%x)",
- ssq->sqn, ssq->context_id, ssq->sq_queue_id,
- ssq->last_pmsn,
- atomic_read(&ssq->wqe_page_cnt));
- cur_cmsn = SPFC_GET_QUEUE_CMSN(ssq);
- spfc_free_sq_wqe_page(ssq, cur_cmsn);
- new_wqe_page = spfc_add_one_wqe_page(ssq);
- if (unlikely(!new_wqe_page)) {
- SPFC_ERR_IO_STAT((struct spfc_hba_info *)ssq->hba, wqe_type);
- spin_unlock_irqrestore(&ssq->parent_sq_enqueue_lock, flag);
- return UNF_RETURN_ERROR;
- }
- link_wqe = (struct spfc_linkwqe *)spfc_get_wqe_page_entry(tail_wpg,
- ssq->wqe_offset);
- addr_wd = SPFC_MSD(new_wqe_page->wpg_phy_addr);
- link_wqe->next_page_addr_hi = cpu_to_be32(addr_wd);
- addr_wd = SPFC_LSD(new_wqe_page->wpg_phy_addr);
- link_wqe->next_page_addr_lo = cpu_to_be32(addr_wd);
- link_wqe_msn = SPFC_MSN_DEC(ssq->last_pmsn);
- msn_wd = be32_to_cpu(link_wqe->val_wd1);
- msn_wd |= ((u32)(link_wqe_msn & SPFC_MSNWD_L_MASK));
- msn_wd |= (((u32)(link_wqe_msn & SPFC_MSNWD_H_MASK)) << UNF_SHIFT_16);
- link_wqe->val_wd1 = cpu_to_be32(msn_wd);
- link_wqe_last_part = (struct spfc_linkwqe *)((u8 *)link_wqe +
- SPFC_EXTEND_WQE_OFFSET);
- link_wqe_last_part->next_page_addr_hi = link_wqe->next_page_addr_hi;
- link_wqe_last_part->next_page_addr_lo = link_wqe->next_page_addr_lo;
- link_wqe_last_part->val_wd1 = link_wqe->val_wd1;
- spfc_set_sq_wqe_owner_be(link_wqe);
- ssq->wqe_offset = 0;
- tail_wpg = SPFC_GET_SQ_TAIL(ssq);
- atomic_inc(&ssq->wqe_page_cnt);
- }
-
- spfc_build_wqe_owner_pmsn(io_sqe, !(ssq->last_pi_owner), ssq->last_pmsn);
- SPFC_IO_STAT((struct spfc_hba_info *)ssq->hba, wqe_type);
- spfc_convert_parent_wqe_to_big_endian(io_sqe);
- sqe_in_wp = (struct spfc_sqe *)spfc_get_wqe_page_entry(tail_wpg, ssq->wqe_offset);
- memcpy(sqe_in_wp, io_sqe, sizeof(struct spfc_sqe));
- spfc_set_sq_wqe_owner_be(sqe_in_wp);
-
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_INFO,
- "[INFO]Ssq(0x%x) xid:0x%x,qid:0x%x wqegpa:0x%llx, qos_level:0x%x, c:0x%x",
- ssq->sqn, ssq->context_id, ssq->sq_queue_id,
- virt_to_phys(sqe_in_wp), qos_level, c);
-
- ssq->accum_wqe_cnt++;
- if (ssq->accum_wqe_cnt == accum_db_num) {
- ret = spfc_parent_sq_ring_doorbell(ssq, qos_level, c);
- if (unlikely(ret != RETURN_OK))
- SPFC_ERR_IO_STAT((struct spfc_hba_info *)ssq->hba, wqe_type);
- ssq->accum_wqe_cnt = 0;
- }
- ssq->wqe_offset += 1;
- ssq->last_pmsn = SPFC_MSN_INC(ssq->last_pmsn);
- atomic_inc(&ssq->sq_wqe_cnt);
- atomic_inc(&ssq->sqe_minus_cqe_cnt);
- SPFC_SQ_IO_STAT(ssq, wqe_type);
- spin_unlock_irqrestore(&ssq->parent_sq_enqueue_lock, flag);
- return ret;
-}
-
-u32 spfc_parent_sq_enqueue(struct spfc_parent_sq_info *sq, struct spfc_sqe *io_sqe, u16 ssqn)
-{
- u8 wqe_type = 0;
- struct spfc_hba_info *hba = (struct spfc_hba_info *)sq->hba;
- struct spfc_parent_ssq_info *ssq = NULL;
-
- if (unlikely(ssqn >= SPFC_MAX_SSQ_NUM)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Ssqn 0x%x is invalid.", ssqn);
-
- return UNF_RETURN_ERROR;
- }
-
- wqe_type = (u8)SPFC_GET_WQE_TYPE(io_sqe);
-
- /* Serial enqueue */
- io_sqe->ts_sl.xid = sq->context_id;
- io_sqe->ts_sl.cid = sq->cache_id;
- io_sqe->ts_sl.sqn = ssqn;
-
- /* Choose SSQ */
- ssq = &hba->parent_queue_mgr->shared_queue[ssqn].parent_ssq_info;
-
- /* If the SQ is invalid, the wqe is discarded */
- if (unlikely(!atomic_read(&sq->sq_valid))) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]SQ is invalid, reject wqe(0x%x)", wqe_type);
-
- return UNF_RETURN_ERROR;
- }
-
- /* The heartbeat detection status is 0, which allows control sessions
- * enqueuing
- */
- if (unlikely(!hba->heart_status && SPFC_WQE_IS_IO(io_sqe))) {
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
- "[err]Heart status is false");
-
- return UNF_RETURN_ERROR;
- }
-
- if (sq->need_offloaded != SPFC_NEED_DO_OFFLOAD) {
- /* Ensure to be offloaded */
- if (unlikely(!atomic_read(&sq->sq_cached))) {
- SPFC_ERR_IO_STAT((struct spfc_hba_info *)sq->hba, wqe_type);
- SPFC_HBA_STAT((struct spfc_hba_info *)sq->hba,
- SPFC_STAT_PARENT_SQ_NOT_OFFLOADED);
-
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
- "[err]RPort(0x%x) Session(0x%x) is not offloaded, reject wqe(0x%x)",
- sq->rport_index, sq->context_id, wqe_type);
-
- return UNF_RETURN_ERROR;
- }
- }
-
- /* If the SQ is in the flush state, reject I/O WQEs; control sessions
-  * are temporarily still allowed to enqueue
-  */
- if (unlikely(sq->port_in_flush && SPFC_WQE_IS_IO(io_sqe))) {
- SPFC_ERR_IO_STAT((struct spfc_hba_info *)sq->hba, wqe_type);
- SPFC_HBA_STAT((struct spfc_hba_info *)sq->hba, SPFC_STAT_PARENT_IO_FLUSHED);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Session(0x%x) in flush, Sqn(0x%x) cmsn(0x%x), reject wqe(0x%x)",
- sq->context_id, ssqn, SPFC_GET_QUEUE_CMSN(ssq),
- wqe_type);
-
- return UNF_RETURN_ERROR;
- }
-
- /* If the SQ is in the session deletion state and this is an I/O-path
-  * WQE, return an I/O failure directly
-  */
- if (unlikely(sq->sq_in_sess_rst && SPFC_WQE_IS_IO(io_sqe))) {
- SPFC_ERR_IO_STAT((struct spfc_hba_info *)sq->hba, wqe_type);
- SPFC_HBA_STAT((struct spfc_hba_info *)sq->hba, SPFC_STAT_PARENT_IO_FLUSHED);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Session(0x%x) in session reset, reject wqe(0x%x)",
- sq->context_id, wqe_type);
-
- return UNF_RETURN_ERROR;
- }
-
- return spfc_parent_ssq_enqueue(ssq, io_sqe, wqe_type);
-}
-
-static bool spfc_msn_in_wqe_page(u32 start_msn, u32 end_msn, u32 cur_cmsn)
-{
- bool ret = true;
-
- if (end_msn >= start_msn) {
- if (cur_cmsn < start_msn || cur_cmsn > end_msn)
- ret = false;
- else
- ret = true;
- } else {
- if (cur_cmsn > end_msn && cur_cmsn < start_msn)
- ret = false;
- else
- ret = true;
- }
-
- return ret;
-}
-
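spfc_msn_in_wqe_page() above is a wrap-aware range test on a circular message sequence number. A condensed standalone sketch of the same logic (names are illustrative, not driver API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Inclusive circular-range test: returns true when cur lies inside
 * [start, end] on a wrapping MSN counter. When end < start the window
 * wraps past the counter maximum, i.e. it covers [start, MAX] plus
 * [0, end]. */
bool msn_in_window(uint32_t start, uint32_t end, uint32_t cur)
{
	if (end >= start)
		return cur >= start && cur <= end;   /* window does not wrap */
	return cur >= start || cur <= end;           /* wrapped window */
}
```

The wrapped branch is just the complement of the driver's rejection condition `cur > end && cur < start`, folded into a single expression.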
-void spfc_free_sq_wqe_page(struct spfc_parent_ssq_info *ssq, u32 cur_cmsn)
-{
- u16 wpg_start_cmsn = 0;
- u16 wpg_end_cmsn = 0;
- bool wqe_page_in_use = false;
-
- /* If there is only zero or one Wqe Page, no release is required */
- if (atomic_read(&ssq->wqe_page_cnt) <= SPFC_MIN_WP_NUM)
- return;
-
- /* Check whether the current MSN is within the MSN range covered by the
- * WqePage
- */
- wpg_start_cmsn = ssq->head_start_cmsn;
- wpg_end_cmsn = ssq->head_end_cmsn;
- wqe_page_in_use = spfc_msn_in_wqe_page(wpg_start_cmsn, wpg_end_cmsn, cur_cmsn);
-
- /* If the value of CMSN is within the current Wqe Page, no release is
- * required
- */
- if (wqe_page_in_use)
- return;
-
- /* If the next WqePage is available and the CMSN is not in the current
-  * WqePage, the current WqePage is released
-  */
- while (!wqe_page_in_use &&
- (atomic_read(&ssq->wqe_page_cnt) > SPFC_MIN_WP_NUM)) {
- /* Free WqePage */
- spfc_free_head_wqe_page(ssq);
-
- /* Obtain the start MSN of the next WqePage */
- wpg_start_cmsn = SPFC_MSN_INC(wpg_end_cmsn);
-
- /* obtain the end MSN of the next WqePage */
- wpg_end_cmsn =
- SPFC_GET_WP_END_CMSN(wpg_start_cmsn, ssq->wqe_num_per_buf);
-
- /* Set new MSN range */
- ssq->head_start_cmsn = wpg_start_cmsn;
- ssq->head_end_cmsn = wpg_end_cmsn;
- cur_cmsn = SPFC_GET_QUEUE_CMSN(ssq);
- /* Check whether the current MSN is within the MSN range covered
- * by the WqePage
- */
- wqe_page_in_use = spfc_msn_in_wqe_page(wpg_start_cmsn, wpg_end_cmsn, cur_cmsn);
- }
-}
-
-/*
- *Function Name : spfc_update_sq_wqe_completion_stat
- *Function Description: Update the completion statistics for the CQE
- *corresponding to a WQE on the connection SQ.
- *Input Parameters : *ssq, *scqe
- *Output Parameters : N/A
- *Return Type : void
- */
-static void spfc_update_sq_wqe_completion_stat(struct spfc_parent_ssq_info *ssq,
- union spfc_scqe *scqe)
-{
- struct spfc_scqe_rcv_els_gs_rsp *els_gs_rsp = NULL;
-
- els_gs_rsp = (struct spfc_scqe_rcv_els_gs_rsp *)scqe;
-
- /* For ELS/GS RSP intermediate frames, and for error CQEs that are not
-  * reported to the CM, no statistics are updated
-  */
- if (unlikely(SPFC_GET_SCQE_TYPE(scqe) == SPFC_SCQE_ELS_RSP) ||
- (SPFC_GET_SCQE_TYPE(scqe) == SPFC_SCQE_GS_RSP)) {
- if (!els_gs_rsp->wd3.end_rsp || !SPFC_SCQE_ERR_TO_CM(scqe))
- return;
- }
-
- /* When SQ statistics are updated, the PlogiAcc or PlogiAccSts of an
-  * implicitly unloaded session also enters here, adding one extra CQE
-  * count
-  */
- atomic_inc(&ssq->sq_cqe_cnt);
- atomic_dec(&ssq->sqe_minus_cqe_cnt);
- SPFC_SQ_IO_STAT(ssq, SPFC_GET_SCQE_TYPE(scqe));
-}
-
-/*
- *Function Name : spfc_reclaim_sq_wqe_page
- *Function Description: Reclaim the Wqe Pages that have been used up in the
- * Linked List SQ.
- *Input Parameters : *handle,
- * *scqe
- *Output Parameters : N/A
- *Return Type : u32
- */
-u32 spfc_reclaim_sq_wqe_page(void *handle, union spfc_scqe *scqe)
-{
- u32 ret = RETURN_OK;
- u32 cur_cmsn = 0;
- u32 sqn = INVALID_VALUE32;
- struct spfc_parent_ssq_info *ssq = NULL;
- struct spfc_parent_shared_queue_info *parent_queue_info = NULL;
- struct spfc_hba_info *hba = NULL;
- ulong flag = 0;
-
- hba = (struct spfc_hba_info *)handle;
- sqn = SPFC_GET_SCQE_SQN(scqe);
- if (sqn >= SPFC_MAX_SSQ_NUM) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%x) do not have sqn: 0x%x",
- hba->port_cfg.port_id, sqn);
-
- return UNF_RETURN_ERROR;
- }
-
- parent_queue_info = &hba->parent_queue_mgr->shared_queue[sqn];
- ssq = &parent_queue_info->parent_ssq_info;
- /* If there is only zero or one Wqe Page, no release is required */
- if (atomic_read(&ssq->wqe_page_cnt) <= SPFC_MIN_WP_NUM) {
- spfc_update_sq_wqe_completion_stat(ssq, scqe);
- return RETURN_OK;
- }
-
- spin_lock_irqsave(&ssq->parent_sq_enqueue_lock, flag);
- cur_cmsn = SPFC_GET_QUEUE_CMSN(ssq);
- spfc_free_sq_wqe_page(ssq, cur_cmsn);
- spin_unlock_irqrestore(&ssq->parent_sq_enqueue_lock, flag);
-
- spfc_update_sq_wqe_completion_stat(ssq, scqe);
-
- return ret;
-}
-
-u32 spfc_root_cmdq_enqueue(void *handle, union spfc_cmdqe *cmdqe, u16 cmd_len)
-{
-#define SPFC_ROOTCMDQ_TIMEOUT_MS 3000
- u8 wqe_type = 0;
- int cmq_ret = 0;
- struct sphw_cmd_buf *cmd_buf = NULL;
- struct spfc_hba_info *hba = NULL;
-
- hba = (struct spfc_hba_info *)handle;
- wqe_type = (u8)cmdqe->common.wd0.task_type;
- SPFC_IO_STAT(hba, wqe_type);
-
- cmd_buf = sphw_alloc_cmd_buf(hba->dev_handle);
- if (!cmd_buf) {
- SPFC_ERR_IO_STAT(hba, wqe_type);
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) CqmHandle(0x%p) allocate cmdq buffer failed",
- hba->port_cfg.port_id, hba->dev_handle);
-
- return UNF_RETURN_ERROR;
- }
-
- memcpy(cmd_buf->buf, cmdqe, cmd_len);
- spfc_cpu_to_big32(cmd_buf->buf, cmd_len);
- cmd_buf->size = cmd_len;
-
- cmq_ret = sphw_cmdq_async(hba->dev_handle, COMM_MOD_FC, 0, cmd_buf, SPHW_CHANNEL_FC);
-
- if (cmq_ret != RETURN_OK) {
- sphw_free_cmd_buf(hba->dev_handle, cmd_buf);
- SPFC_ERR_IO_STAT(hba, wqe_type);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) CqmHandle(0x%p) send buff clear cmnd failed(0x%x)",
- hba->port_cfg.port_id, hba->dev_handle, cmq_ret);
- return UNF_RETURN_ERROR;
- }
-
- return RETURN_OK;
-}
-
-struct spfc_parent_queue_info *
-spfc_find_parent_queue_info_by_pkg(void *handle, struct unf_frame_pkg *pkg)
-{
- u32 rport_index = 0;
- struct spfc_parent_queue_info *parent_queue_info = NULL;
- struct spfc_hba_info *hba = NULL;
-
- hba = (struct spfc_hba_info *)handle;
- rport_index = pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX];
-
- if (unlikely(rport_index >= UNF_SPFC_MAXRPORT_NUM)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[warn]Port(0x%x) send pkg sid_did(0x%x_0x%x), but uplevel allocate invalid rport index: 0x%x",
- hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
- pkg->frame_head.rctl_did, rport_index);
-
- return NULL;
- }
-
- /* parent -->> session */
- parent_queue_info = &hba->parent_queue_mgr->parent_queue[rport_index];
-
- return parent_queue_info;
-}
-
-struct spfc_parent_queue_info *spfc_find_parent_queue_info_by_id(struct spfc_hba_info *hba,
- u32 local_id, u32 remote_id)
-{
- u32 index = 0;
- ulong flag = 0;
- struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
- struct spfc_parent_queue_info *parent_queue_info = NULL;
- spinlock_t *prtq_state_lock = NULL;
- u32 lport_id;
- u32 rport_id;
-
- parent_queue_mgr = hba->parent_queue_mgr;
- if (!parent_queue_mgr)
- return NULL;
-
- /* rport_number -->> parent_number -->> session_number */
- for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
- prtq_state_lock = &parent_queue_mgr->parent_queue[index].parent_queue_state_lock;
- lport_id = parent_queue_mgr->parent_queue[index].parent_sq_info.local_port_id;
- rport_id = parent_queue_mgr->parent_queue[index].parent_sq_info.remote_port_id;
- spin_lock_irqsave(prtq_state_lock, flag);
-
- /* local_id & remote_id & offload */
- if (local_id == lport_id && remote_id == rport_id &&
- parent_queue_mgr->parent_queue[index].offload_state ==
- SPFC_QUEUE_STATE_OFFLOADED) {
- parent_queue_info = &parent_queue_mgr->parent_queue[index];
- spin_unlock_irqrestore(prtq_state_lock, flag);
-
- return parent_queue_info;
- }
-
- spin_unlock_irqrestore(prtq_state_lock, flag);
- }
-
- return NULL;
-}
-
-struct spfc_parent_queue_info *spfc_find_offload_parent_queue(void *handle, u32 local_id,
- u32 remote_id, u32 rport_index)
-{
- u32 index = 0;
- ulong flag = 0;
- struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
- struct spfc_parent_queue_info *parent_queue_info = NULL;
- struct spfc_hba_info *hba = NULL;
- spinlock_t *prtq_state_lock = NULL;
-
- hba = (struct spfc_hba_info *)handle;
- parent_queue_mgr = hba->parent_queue_mgr;
- if (!parent_queue_mgr)
- return NULL;
-
- for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
- if (rport_index == index)
- continue;
- prtq_state_lock = &parent_queue_mgr->parent_queue[index].parent_queue_state_lock;
- spin_lock_irqsave(prtq_state_lock, flag);
-
- if (local_id == parent_queue_mgr->parent_queue[index]
- .parent_sq_info.local_port_id &&
- remote_id == parent_queue_mgr->parent_queue[index]
- .parent_sq_info.remote_port_id &&
- parent_queue_mgr->parent_queue[index].offload_state !=
- SPFC_QUEUE_STATE_FREE &&
- parent_queue_mgr->parent_queue[index].offload_state !=
- SPFC_QUEUE_STATE_INITIALIZED) {
- parent_queue_info = &parent_queue_mgr->parent_queue[index];
- spin_unlock_irqrestore(prtq_state_lock, flag);
-
- return parent_queue_info;
- }
-
- spin_unlock_irqrestore(prtq_state_lock, flag);
- }
-
- return NULL;
-}
-
-struct spfc_parent_sq_info *spfc_find_parent_sq_by_pkg(void *handle, struct unf_frame_pkg *pkg)
-{
- struct spfc_parent_queue_info *parent_queue_info = NULL;
- struct cqm_qpc_mpt *cqm_parent_ctxt_obj = NULL;
- struct spfc_hba_info *hba = NULL;
-
- hba = (struct spfc_hba_info *)handle;
- parent_queue_info = spfc_find_parent_queue_info_by_pkg(hba, pkg);
- if (unlikely(!parent_queue_info)) {
- parent_queue_info = spfc_find_parent_queue_info_by_id(hba,
- pkg->frame_head.csctl_sid &
- UNF_NPORTID_MASK,
- pkg->frame_head.rctl_did &
- UNF_NPORTID_MASK);
- if (!parent_queue_info) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[err]Port(0x%x) send pkg sid_did(0x%x_0x%x), get a null parent queue information",
- hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
- pkg->frame_head.rctl_did);
-
- return NULL;
- }
- }
-
- cqm_parent_ctxt_obj = (parent_queue_info->parent_ctx.cqm_parent_ctx_obj);
- if (unlikely(!cqm_parent_ctxt_obj)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[err]Port(0x%x) send pkg sid_did(0x%x_0x%x) with this rport has not alloc parent sq information",
- hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
- pkg->frame_head.rctl_did);
-
- return NULL;
- }
-
- return &parent_queue_info->parent_sq_info;
-}
-
-u32 spfc_check_all_parent_queue_free(struct spfc_hba_info *hba)
-{
- u32 index = 0;
- ulong flag = 0;
- struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
- spinlock_t *prtq_state_lock = NULL;
-
- parent_queue_mgr = hba->parent_queue_mgr;
- if (!parent_queue_mgr) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[err]Port(0x%x) get a null parent queue mgr",
- hba->port_cfg.port_id);
-
- return UNF_RETURN_ERROR;
- }
-
- for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
- prtq_state_lock = &parent_queue_mgr->parent_queue[index].parent_queue_state_lock;
- spin_lock_irqsave(prtq_state_lock, flag);
-
- if (parent_queue_mgr->parent_queue[index].offload_state != SPFC_QUEUE_STATE_FREE) {
- spin_unlock_irqrestore(prtq_state_lock, flag);
- return UNF_RETURN_ERROR;
- }
-
- spin_unlock_irqrestore(prtq_state_lock, flag);
- }
-
- return RETURN_OK;
-}
-
-void spfc_flush_specific_scq(struct spfc_hba_info *hba, u32 index)
-{
- /* Schedule the soft interrupt (tasklet) and give it up to the timeout
-  * period to finish processing
-  */
- struct spfc_scq_info *scq_info = NULL;
- u32 flush_done_time = 0;
-
- scq_info = &hba->scq_info[index];
- atomic_set(&scq_info->flush_stat, SPFC_QUEUE_FLUSH_DOING);
- tasklet_schedule(&scq_info->tasklet);
-
- /* Wait for a maximum of 2 seconds. If the SCQ soft interrupt has not
-  * completed within 2 seconds, give up and report a timeout
-  */
- while ((atomic_read(&scq_info->flush_stat) != SPFC_QUEUE_FLUSH_DONE) &&
- (flush_done_time < SPFC_QUEUE_FLUSH_WAIT_TIMEOUT_MS)) {
- msleep(SPFC_QUEUE_FLUSH_WAIT_MS);
- flush_done_time += SPFC_QUEUE_FLUSH_WAIT_MS;
- tasklet_schedule(&scq_info->tasklet);
- }
-
- if (atomic_read(&scq_info->flush_stat) != SPFC_QUEUE_FLUSH_DONE) {
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
- "[warn]Port(0x%x) special scq(0x%x) flush timeout",
- hba->port_cfg.port_id, index);
- }
-}
-
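spfc_flush_specific_scq() above follows a kick-and-poll pattern: schedule the tasklet, then repeatedly sleep, re-kick, and check a completion flag until done or a timeout budget is spent. A minimal sketch of the same control flow, with the hardware and sleep simulated so it runs standalone (struct and function names are illustrative, not driver API):

```c
#include <stdbool.h>

/* Simulated SCQ: each "kick" retires one unit of pending work. */
struct fake_scq {
	int kicks_needed;   /* kicks remaining before flush is done */
};

/* Kick once up front, then keep sleeping (simulated by advancing the
 * waited counter) and re-kicking until either the work drains or the
 * timeout budget is exhausted. Returns true if flush completed. */
bool wait_flush_done(struct fake_scq *scq, unsigned step_ms, unsigned timeout_ms)
{
	unsigned waited = 0;

	if (scq->kicks_needed > 0)
		scq->kicks_needed--;          /* initial tasklet_schedule() */

	while (scq->kicks_needed > 0 && waited < timeout_ms) {
		waited += step_ms;            /* stands in for msleep() */
		scq->kicks_needed--;          /* re-kick the tasklet */
	}
	return scq->kicks_needed <= 0;
}
```

The re-kick inside the loop matters: if the first tasklet run raced with new completions, each retry gives the softirq another chance to drain the queue before the caller gives up.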
-static void spfc_flush_cmd_scq(struct spfc_hba_info *hba)
-{
- u32 index = 0;
-
- for (index = SPFC_CMD_SCQN_START; index < SPFC_SESSION_SCQ_NUM;
- index += SPFC_SCQS_PER_SESSION) {
- spfc_flush_specific_scq(hba, index);
- }
-}
-
-static void spfc_flush_sts_scq(struct spfc_hba_info *hba)
-{
- u32 index = 0;
-
- /* for each STS SCQ */
- for (index = SPFC_STS_SCQN_START; index < SPFC_SESSION_SCQ_NUM;
- index += SPFC_SCQS_PER_SESSION) {
- spfc_flush_specific_scq(hba, index);
- }
-}
-
-static void spfc_flush_all_scq(struct spfc_hba_info *hba)
-{
- spfc_flush_cmd_scq(hba);
- spfc_flush_sts_scq(hba);
- /* Flush Default SCQ */
- spfc_flush_specific_scq(hba, SPFC_SESSION_SCQ_NUM);
-}
-
-void spfc_wait_all_queues_empty(struct spfc_hba_info *hba)
-{
- spfc_flush_all_scq(hba);
-}
-
-void spfc_set_rport_flush_state(void *handle, bool in_flush)
-{
- u32 index = 0;
- ulong flag = 0;
- struct spfc_parent_queue_mgr *parent_queue_mgr = NULL;
- struct spfc_hba_info *hba = NULL;
-
- hba = (struct spfc_hba_info *)handle;
- parent_queue_mgr = hba->parent_queue_mgr;
- if (!parent_queue_mgr) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) parent queue manager is empty",
- hba->port_cfg.port_id);
- return;
- }
-
- /*
-  * for each HBA's R_Port (SQ), mark the state as flushing or
-  * flush done
-  */
- for (index = 0; index < UNF_SPFC_MAXRPORT_NUM; index++) {
- spin_lock_irqsave(&parent_queue_mgr->parent_queue[index]
- .parent_sq_info.parent_sq_enqueue_lock, flag);
- if (parent_queue_mgr->parent_queue[index].offload_state != SPFC_QUEUE_STATE_FREE) {
- parent_queue_mgr->parent_queue[index]
- .parent_sq_info.port_in_flush = in_flush;
- }
- spin_unlock_irqrestore(&parent_queue_mgr->parent_queue[index]
- .parent_sq_info.parent_sq_enqueue_lock, flag);
- }
-}
-
-u32 spfc_clear_fetched_sq_wqe(void *handle)
-{
- u32 ret = UNF_RETURN_ERROR;
- union spfc_cmdqe cmdqe;
- struct spfc_hba_info *hba = NULL;
-
- FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
-
- hba = (struct spfc_hba_info *)handle;
- /*
-  * WQEs that the hardware has already fetched from the SQ cannot be
-  * reclaimed by the driver once link down is reported. Send a
-  * BUFFER_CLEAR command so that the hardware discards the fetched but
-  * not yet completed WQEs.
-  */
- memset(&cmdqe, 0, sizeof(union spfc_cmdqe));
- spfc_build_cmdqe_common(&cmdqe, SPFC_TASK_T_BUFFER_CLEAR, 0);
- cmdqe.buffer_clear.wd1.rx_id_start = hba->exi_base;
- cmdqe.buffer_clear.wd1.rx_id_end = hba->exi_base + hba->exi_count - 1;
- cmdqe.buffer_clear.scqn = hba->default_scqn;
-
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
- "[info]Port(0x%x) start clear all fetched wqe in start(0x%x) - end(0x%x) scqn(0x%x) stage(0x%x)",
- hba->port_cfg.port_id, cmdqe.buffer_clear.wd1.rx_id_start,
- cmdqe.buffer_clear.wd1.rx_id_end, cmdqe.buffer_clear.scqn,
- hba->queue_set_stage);
-
- /* Send BUFFER_CLEAR command via ROOT CMDQ */
- ret = spfc_root_cmdq_enqueue(hba, &cmdqe, sizeof(cmdqe.buffer_clear));
-
- return ret;
-}
-
-u32 spfc_clear_pending_sq_wqe(void *handle)
-{
- u32 ret = UNF_RETURN_ERROR;
- u32 cmdqe_len = 0;
- ulong flag = 0;
- struct spfc_parent_ssq_info *ssq_info = NULL;
- union spfc_cmdqe cmdqe;
- struct spfc_hba_info *hba = NULL;
-
- hba = (struct spfc_hba_info *)handle;
- memset(&cmdqe, 0, sizeof(union spfc_cmdqe));
- spfc_build_cmdqe_common(&cmdqe, SPFC_TASK_T_FLUSH_SQ, 0);
- cmdqe.flush_sq.wd0.wqe_type = SPFC_TASK_T_FLUSH_SQ;
- cmdqe.flush_sq.wd1.scqn = SPFC_LSW(hba->default_scqn);
- cmdqe.flush_sq.wd1.port_id = hba->port_index;
-
- ssq_info = &hba->parent_queue_mgr->shared_queue[ARRAY_INDEX_0].parent_ssq_info;
-
- spin_lock_irqsave(&ssq_info->parent_sq_enqueue_lock, flag);
- cmdqe.flush_sq.wd3.first_sq_xid = ssq_info->context_id;
- spin_unlock_irqrestore(&ssq_info->parent_sq_enqueue_lock, flag);
- cmdqe.flush_sq.wd0.entry_count = SPFC_MAX_SSQ_NUM;
- cmdqe.flush_sq.wd3.sqqid_start_per_session = SPFC_SQ_QID_START_PER_QPC;
- cmdqe.flush_sq.wd3.sqcnt_per_session = SPFC_SQ_NUM_PER_QPC;
- cmdqe.flush_sq.wd1.last_wqe = 1;
-
- /* Clear pending Queue */
- cmdqe_len = (u32)(sizeof(cmdqe.flush_sq));
- ret = spfc_root_cmdq_enqueue(hba, &cmdqe, (u16)cmdqe_len);
-
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
- "[info]Port(0x%x) clear total 0x%x SQ in this CMDQE(last=%u), stage (0x%x)",
- hba->port_cfg.port_id, SPFC_MAX_SSQ_NUM,
- cmdqe.flush_sq.wd1.last_wqe, hba->queue_set_stage);
-
- return ret;
-}
-
-u32 spfc_wait_queue_set_flush_done(struct spfc_hba_info *hba)
-{
- u32 flush_done_time = 0;
- u32 ret = RETURN_OK;
-
- while ((hba->queue_set_stage != SPFC_QUEUE_SET_STAGE_FLUSHDONE) &&
- (flush_done_time < SPFC_QUEUE_FLUSH_WAIT_TIMEOUT_MS)) {
- msleep(SPFC_QUEUE_FLUSH_WAIT_MS);
- flush_done_time += SPFC_QUEUE_FLUSH_WAIT_MS;
- }
-
- if (hba->queue_set_stage != SPFC_QUEUE_SET_STAGE_FLUSHDONE) {
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
- "[warn]Port(0x%x) queue sets flush timeout with stage(0x%x)",
- hba->port_cfg.port_id, hba->queue_set_stage);
-
- ret = UNF_RETURN_ERROR;
- }
-
- return ret;
-}
-
-void spfc_disable_all_scq_schedule(struct spfc_hba_info *hba)
-{
- struct spfc_scq_info *scq_info = NULL;
- u32 index = 0;
-
- for (index = 0; index < SPFC_TOTAL_SCQ_NUM; index++) {
- scq_info = &hba->scq_info[index];
- tasklet_disable(&scq_info->tasklet);
- }
-}
-
-void spfc_disable_queues_dispatch(struct spfc_hba_info *hba)
-{
- spfc_disable_all_scq_schedule(hba);
-}
-
-void spfc_enable_all_scq_schedule(struct spfc_hba_info *hba)
-{
- struct spfc_scq_info *scq_info = NULL;
- u32 index = 0;
-
- for (index = 0; index < SPFC_TOTAL_SCQ_NUM; index++) {
- scq_info = &hba->scq_info[index];
- tasklet_enable(&scq_info->tasklet);
- }
-}
-
-void spfc_enalbe_queues_dispatch(void *handle)
-{
- spfc_enable_all_scq_schedule((struct spfc_hba_info *)handle);
-}
-
-/*
- *Function Name : spfc_clear_els_srq
- *Function Description: When the port is being removed, release the
- *resources related to the ELS SRQ.
- *Input Parameters : *hba
- *Output Parameters : N/A
- *Return Type : void
- */
-void spfc_clear_els_srq(struct spfc_hba_info *hba)
-{
-#define SPFC_WAIT_CLR_SRQ_CTX_MS 500
-#define SPFC_WAIT_CLR_SRQ_CTX_LOOP_TIMES 60
-
- u32 index = 0;
- ulong flag = 0;
- struct spfc_srq_info *srq_info = NULL;
-
- srq_info = &hba->els_srq_info;
-
- spin_lock_irqsave(&srq_info->srq_spin_lock, flag);
- if (!srq_info->enable || srq_info->state == SPFC_CLEAN_DOING) {
- spin_unlock_irqrestore(&srq_info->srq_spin_lock, flag);
-
- return;
- }
- srq_info->enable = false;
- srq_info->state = SPFC_CLEAN_DOING;
- spin_unlock_irqrestore(&srq_info->srq_spin_lock, flag);
-
- spfc_send_clear_srq_cmd(hba, &hba->els_srq_info);
-
- /* wait for uCode to clear the SRQ context; total timeout is 30s */
- while ((srq_info->state != SPFC_CLEAN_DONE) &&
- (index < SPFC_WAIT_CLR_SRQ_CTX_LOOP_TIMES)) {
- msleep(SPFC_WAIT_CLR_SRQ_CTX_MS);
- index++;
- }
-
- if (srq_info->state != SPFC_CLEAN_DONE) {
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_WARN,
- "[warn]SPFC Port(0x%x) clear els srq timeout",
- hba->port_cfg.port_id);
- }
-}
-
-u32 spfc_wait_all_parent_queue_free(struct spfc_hba_info *hba)
-{
-#define SPFC_MAX_LOOP_TIMES 6000
-#define SPFC_WAIT_ONE_TIME_MS 5
- u32 index = 0;
- u32 ret = UNF_RETURN_ERROR;
-
- do {
- ret = spfc_check_all_parent_queue_free(hba);
- if (ret == RETURN_OK)
- break;
-
- index++;
- msleep(SPFC_WAIT_ONE_TIME_MS);
- } while (index < SPFC_MAX_LOOP_TIMES);
-
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ERR,
- "[warn]Port(0x%x) wait all parent queue state free timeout",
- hba->port_cfg.port_id);
- }
-
- return ret;
-}
-
-/*
- *Function Name : spfc_queue_pre_process
- *Function Description: When the port is being removed, preprocess the
- * queues.
- *Input Parameters : *handle,
- * clean
- *Output Parameters : N/A
- *Return Type : void
- */
-void spfc_queue_pre_process(void *handle, bool clean)
-{
-#define SPFC_WAIT_LINKDOWN_EVENT_MS 500
- struct spfc_hba_info *hba = NULL;
-
- hba = (struct spfc_hba_info *)handle;
- /* From port reset & port remove */
- /* 1. Wait for 2s and wait for QUEUE to be FLUSH Done. */
- if (spfc_wait_queue_set_flush_done(hba) != RETURN_OK) {
- /*
- * During the process of removing the card, if the port is
- * disabled and the flush done is not available, the chip is
- * powered off or the pcie link is disconnected. In this case,
- * you can proceed with the next step.
- */
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]SPFC Port(0x%x) clean queue sets timeout",
- hba->port_cfg.port_id);
- }
-
- /*
- * 2. Port remove:
- * 2.1 free parent queue
- * 2.2 clear & destroy ELS/SIRT SRQ
- */
- if (clean) {
- if (spfc_wait_all_parent_queue_free(hba) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT,
- UNF_WARN,
- "[warn]SPFC Port(0x%x) free all parent queue timeout",
- hba->port_cfg.port_id);
- }
-
- /* clear & than destroy ELS/SIRT SRQ */
- spfc_clear_els_srq(hba);
- }
-
- msleep(SPFC_WAIT_LINKDOWN_EVENT_MS);
-
- /*
- * 3. The internal resources of the port chip are flush done. However,
- * there may be residual scqe or rq in the queue. The scheduling is
- * forcibly refreshed once.
- */
- spfc_wait_all_queues_empty(hba);
-
- /* 4. Disable tasklet scheduling for upstream queues on the software
- * layer
- */
- spfc_disable_queues_dispatch(hba);
-}
-
-void spfc_queue_post_process(void *hba)
-{
- spfc_enalbe_queues_dispatch((struct spfc_hba_info *)hba);
-}
-
-/*
- *Function Name : spfc_push_delay_sqe
- *Function Description: Check whether the target sq is being deleted.
- * If so, cache the sqe so it can be sent after the
- * deletion completes.
- *Input Parameters : *hba,
- * *offload_parent_queue,
- * *sqe,
- * *pkg
- *Output Parameters : N/A
- *Return Type : u32
- */
-u32 spfc_push_delay_sqe(void *hba,
- struct spfc_parent_queue_info *offload_parent_queue,
- struct spfc_sqe *sqe, struct unf_frame_pkg *pkg)
-{
- ulong flag = 0;
- spinlock_t *prtq_state_lock = NULL;
-
- prtq_state_lock = &offload_parent_queue->parent_queue_state_lock;
- spin_lock_irqsave(prtq_state_lock, flag);
-
- if (offload_parent_queue->offload_state != SPFC_QUEUE_STATE_INITIALIZED &&
- offload_parent_queue->offload_state != SPFC_QUEUE_STATE_FREE) {
- memcpy(&offload_parent_queue->parent_sq_info.delay_sqe.sqe,
- sqe, sizeof(struct spfc_sqe));
- offload_parent_queue->parent_sq_info.delay_sqe.start_jiff = jiffies;
- offload_parent_queue->parent_sq_info.delay_sqe.time_out =
- pkg->private_data[PKG_PRIVATE_XCHG_TIMEER];
- offload_parent_queue->parent_sq_info.delay_sqe.valid = true;
- offload_parent_queue->parent_sq_info.delay_sqe.rport_index =
- pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX];
- offload_parent_queue->parent_sq_info.delay_sqe.sid =
- pkg->frame_head.csctl_sid & UNF_NPORTID_MASK;
- offload_parent_queue->parent_sq_info.delay_sqe.did =
- pkg->frame_head.rctl_did & UNF_NPORTID_MASK;
- offload_parent_queue->parent_sq_info.delay_sqe.xid =
- sqe->ts_sl.xid;
- offload_parent_queue->parent_sq_info.delay_sqe.ssqn =
- (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
-
- spin_unlock_irqrestore(prtq_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) RPort(0x%x) delay send ELS, OXID(0x%x), RXID(0x%x)",
- ((struct spfc_hba_info *)hba)->port_cfg.port_id,
- pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX],
- UNF_GET_OXID(pkg), UNF_GET_RXID(pkg));
-
- return RETURN_OK;
- }
-
- spin_unlock_irqrestore(prtq_state_lock, flag);
-
- return UNF_RETURN_ERROR;
-}
-
-static u32 spfc_pop_session_valid_check(struct spfc_hba_info *hba,
- struct spfc_delay_sqe_ctrl_info *sqe_info, u32 rport_index)
-{
- if (!sqe_info->valid)
- return UNF_RETURN_ERROR;
-
- if (jiffies_to_msecs(jiffies - sqe_info->start_jiff) >= sqe_info->time_out) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) pop delay enable session failed, start time 0x%llx, timeout value 0x%x",
- hba->port_cfg.port_id, sqe_info->start_jiff,
- sqe_info->time_out);
-
- return UNF_RETURN_ERROR;
- }
-
- if (rport_index >= UNF_SPFC_MAXRPORT_NUM) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) pop delay enable session failed, rport index(0x%x) is invalid",
- hba->port_cfg.port_id, rport_index);
-
- return UNF_RETURN_ERROR;
- }
-
- return RETURN_OK;
-}
-
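spfc_pop_session_valid_check() above rejects a delayed entry once the milliseconds elapsed since it was queued reach its timeout. A standalone sketch of that expiry test (the function name is illustrative, not driver API); unsigned subtraction keeps the comparison valid across counter wrap, the same property jiffies arithmetic relies on:

```c
#include <stdbool.h>
#include <stdint.h>

/* An entry queued at start_ms is stale at now_ms once at least
 * timeout_ms have elapsed. Modular uint32_t subtraction yields the
 * correct elapsed time even when the timestamp counter has wrapped. */
bool delay_entry_expired(uint32_t now_ms, uint32_t start_ms, uint32_t timeout_ms)
{
	return (uint32_t)(now_ms - start_ms) >= timeout_ms;
}
```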
-/*
- *Function Name : spfc_pop_delay_sqe
- *Function Description: Send the sqe that was delayed by deletion of the
- * old connection to the parent sq for processing.
- *Input Parameters : *hba, *sqe_info
- *Output Parameters : N/A
- *Return Type : void
- */
-static void spfc_pop_delay_sqe(struct spfc_hba_info *hba,
- struct spfc_delay_sqe_ctrl_info *sqe_info)
-{
- ulong flag;
- u32 delay_rport_index = INVALID_VALUE32;
- struct spfc_parent_queue_info *parent_queue = NULL;
- enum spfc_parent_queue_state offload_state =
- SPFC_QUEUE_STATE_DESTROYING;
- struct spfc_delay_destroy_ctrl_info destroy_sqe_info;
- u32 ret = UNF_RETURN_ERROR;
- struct spfc_parent_sq_info *sq_info = NULL;
- spinlock_t *prtq_state_lock = NULL;
-
- memset(&destroy_sqe_info, 0, sizeof(struct spfc_delay_destroy_ctrl_info));
- delay_rport_index = sqe_info->rport_index;
-
- /* According to the sequence, the rport index id is reported and then
- * the sqe of the new link setup request is delivered.
- */
- ret = spfc_pop_session_valid_check(hba, sqe_info, delay_rport_index);
-
- if (ret != RETURN_OK)
- return;
-
- parent_queue = &hba->parent_queue_mgr->parent_queue[delay_rport_index];
- sq_info = &parent_queue->parent_sq_info;
- prtq_state_lock = &parent_queue->parent_queue_state_lock;
- /* Before delivering to the sq, check the state again to ensure it is
-  * still initialized. Other states are not processed and the sqe is
-  * discarded directly.
-  */
- spin_lock_irqsave(prtq_state_lock, flag);
- offload_state = parent_queue->offload_state;
-
- /* Before re-enqueuing the rootsq, check whether the offload status and
- * connection information is consistent to prevent the old request from
- * being sent after the connection status is changed.
- */
- if (offload_state == SPFC_QUEUE_STATE_INITIALIZED &&
- parent_queue->parent_sq_info.local_port_id == sqe_info->sid &&
- parent_queue->parent_sq_info.remote_port_id == sqe_info->did &&
- SPFC_CHECK_XID_MATCHED(parent_queue->parent_sq_info.context_id,
- sqe_info->sqe.ts_sl.xid)) {
- parent_queue->offload_state = SPFC_QUEUE_STATE_OFFLOADING;
- spin_unlock_irqrestore(prtq_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) pop up delay session enable, sqe start time 0x%llx, timeout value 0x%x, rport index 0x%x, offload state 0x%x",
- hba->port_cfg.port_id, sqe_info->start_jiff,
- sqe_info->time_out, delay_rport_index, offload_state);
-
- if (spfc_parent_sq_enqueue(sq_info, &sqe_info->sqe, sqe_info->ssqn) != RETURN_OK) {
- spin_lock_irqsave(prtq_state_lock, flag);
-
- if (parent_queue->offload_state == SPFC_QUEUE_STATE_OFFLOADING)
- parent_queue->offload_state = offload_state;
-
- if (parent_queue->parent_sq_info.destroy_sqe.valid) {
- memcpy(&destroy_sqe_info,
- &parent_queue->parent_sq_info.destroy_sqe,
- sizeof(struct spfc_delay_destroy_ctrl_info));
-
- parent_queue->parent_sq_info.destroy_sqe.valid = false;
- }
-
- spin_unlock_irqrestore(prtq_state_lock, flag);
-
- spfc_pop_destroy_parent_queue_sqe((void *)hba, &destroy_sqe_info);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) pop up delay session enable fail, recover offload state 0x%x",
- hba->port_cfg.port_id, parent_queue->offload_state);
- return;
- }
- } else {
- spin_unlock_irqrestore(prtq_state_lock, flag);
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port 0x%x pop delay session enable failed, sqe start time 0x%llx, timeout value 0x%x, rport index 0x%x, offload state 0x%x",
- hba->port_cfg.port_id, sqe_info->start_jiff,
- sqe_info->time_out, delay_rport_index,
- offload_state);
- }
-}
-
-void spfc_push_destroy_parent_queue_sqe(void *hba,
- struct spfc_parent_queue_info *offloading_parent_queue,
- struct unf_port_info *rport_info)
-{
- offloading_parent_queue->parent_sq_info.destroy_sqe.valid = true;
- offloading_parent_queue->parent_sq_info.destroy_sqe.rport_index = rport_info->rport_index;
- offloading_parent_queue->parent_sq_info.destroy_sqe.time_out =
- SPFC_SQ_DEL_STAGE_TIMEOUT_MS;
- offloading_parent_queue->parent_sq_info.destroy_sqe.start_jiff = jiffies;
- offloading_parent_queue->parent_sq_info.destroy_sqe.rport_info.nport_id =
- rport_info->nport_id;
- offloading_parent_queue->parent_sq_info.destroy_sqe.rport_info.rport_index =
- rport_info->rport_index;
- offloading_parent_queue->parent_sq_info.destroy_sqe.rport_info.port_name =
- rport_info->port_name;
-}
-
-/*
- *Function Name : spfc_pop_destroy_parent_queue_sqe
- *Function Description: Send the destroy-connection sqe that was delayed
- * due to connection uninstallation to the parent sq
- * for processing.
- *Input Parameters : *handle, *destroy_sqe_info
- *Output Parameters : N/A
- *Return Type : void
- */
-void spfc_pop_destroy_parent_queue_sqe(void *handle,
- struct spfc_delay_destroy_ctrl_info *destroy_sqe_info)
-{
- u32 ret = UNF_RETURN_ERROR;
- ulong flag;
- u32 index = INVALID_VALUE32;
- struct spfc_parent_queue_info *parent_queue = NULL;
- enum spfc_parent_queue_state offload_state =
- SPFC_QUEUE_STATE_DESTROYING;
- struct spfc_hba_info *hba = NULL;
- spinlock_t *prtq_state_lock = NULL;
-
- hba = (struct spfc_hba_info *)handle;
- if (!destroy_sqe_info->valid)
- return;
-
- if (jiffies_to_msecs(jiffies - destroy_sqe_info->start_jiff) < destroy_sqe_info->time_out) {
- index = destroy_sqe_info->rport_index;
- parent_queue = &hba->parent_queue_mgr->parent_queue[index];
- prtq_state_lock = &parent_queue->parent_queue_state_lock;
- /* Before delivery, check the state again to ensure it is still
-  * initialized or offloaded. Other states are not processed and are
-  * discarded directly.
-  */
- spin_lock_irqsave(prtq_state_lock, flag);
-
- offload_state = parent_queue->offload_state;
- if (offload_state == SPFC_QUEUE_STATE_OFFLOADED ||
- offload_state == SPFC_QUEUE_STATE_INITIALIZED) {
- spin_unlock_irqrestore(prtq_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port 0x%x pop up delay destroy parent sq, sqe start time 0x%llx, timeout value 0x%x, rport index 0x%x, offload state 0x%x",
- hba->port_cfg.port_id,
- destroy_sqe_info->start_jiff,
- destroy_sqe_info->time_out,
- index, offload_state);
- ret = spfc_free_parent_resource(hba, &destroy_sqe_info->rport_info);
- } else {
- ret = UNF_RETURN_ERROR;
- spin_unlock_irqrestore(prtq_state_lock, flag);
- }
- }
-
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port 0x%x pop delay destroy parent sq failed, sqe start time 0x%llx, timeout value 0x%x, rport index 0x%x, rport nport id 0x%x,offload state 0x%x",
- hba->port_cfg.port_id, destroy_sqe_info->start_jiff,
- destroy_sqe_info->time_out, index,
- destroy_sqe_info->rport_info.nport_id, offload_state);
- }
-}
-
-void spfc_free_parent_queue_info(void *handle, struct spfc_parent_queue_info *parent_queue_info)
-{
- ulong flag = 0;
- u32 ret = UNF_RETURN_ERROR;
- u32 rport_index = INVALID_VALUE32;
- struct spfc_hba_info *hba = NULL;
- struct spfc_delay_sqe_ctrl_info sqe_info;
- spinlock_t *prtq_state_lock = NULL;
-
- memset(&sqe_info, 0, sizeof(struct spfc_delay_sqe_ctrl_info));
- hba = (struct spfc_hba_info *)handle;
- prtq_state_lock = &parent_queue_info->parent_queue_state_lock;
- spin_lock_irqsave(prtq_state_lock, flag);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) begin to free parent sq, rport_index(0x%x)",
- hba->port_cfg.port_id, parent_queue_info->parent_sq_info.rport_index);
-
- if (parent_queue_info->offload_state == SPFC_QUEUE_STATE_FREE) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[info]Port(0x%x) duplicate free parent sq, rport_index(0x%x)",
- hba->port_cfg.port_id,
- parent_queue_info->parent_sq_info.rport_index);
-
- spin_unlock_irqrestore(prtq_state_lock, flag);
- return;
- }
-
- if (parent_queue_info->parent_sq_info.delay_sqe.valid) {
- memcpy(&sqe_info, &parent_queue_info->parent_sq_info.delay_sqe,
- sizeof(struct spfc_delay_sqe_ctrl_info));
- }
-
- rport_index = parent_queue_info->parent_sq_info.rport_index;
-
- /* The Parent Context and SQ information is released. After
- * initialization, the Parent Context and SQ information is associated
- * with the sq in the queue of the parent
- */
-
- spfc_free_parent_sq(hba, parent_queue_info);
-
- /* Reset all queue ids to the invalid value */
- parent_queue_info->parent_cmd_scq_info.cqm_queue_id = INVALID_VALUE32;
- parent_queue_info->parent_sts_scq_info.cqm_queue_id = INVALID_VALUE32;
- parent_queue_info->parent_els_srq_info.cqm_queue_id = INVALID_VALUE32;
- parent_queue_info->offload_state = SPFC_QUEUE_STATE_FREE;
-
- spin_unlock_irqrestore(prtq_state_lock, flag);
-
- UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport, UNF_PORT_RELEASE_RPORT_INDEX,
- (void *)&rport_index);
-
- spfc_pop_delay_sqe(hba, &sqe_info);
-
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[warn]Port(0x%x) free parent sq with rport_index(0x%x) failed",
- hba->port_cfg.port_id, rport_index);
- }
-}
-
-static void spfc_do_port_reset(struct work_struct *work)
-{
- struct spfc_suspend_sqe_info *suspend_sqe = NULL;
- struct spfc_hba_info *hba = NULL;
-
- FC_CHECK_RETURN_VOID(work);
-
- suspend_sqe = container_of(work, struct spfc_suspend_sqe_info,
- timeout_work.work);
- hba = (struct spfc_hba_info *)suspend_sqe->hba;
- FC_CHECK_RETURN_VOID(hba);
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) magic num (0x%x)do port reset.",
- hba->port_cfg.port_id, suspend_sqe->magic_num);
-
- spfc_port_reset(hba);
-}
-
-static void
-spfc_push_sqe_suspend(void *hba, struct spfc_parent_queue_info *parent_queue,
- struct spfc_sqe *sqe, struct unf_frame_pkg *pkg, u32 magic_num)
-{
-#define SPFC_SQ_NOP_TIMEOUT_MS 1000
- ulong flag = 0;
- u32 sqn_base;
- struct spfc_parent_sq_info *sq = NULL;
- struct spfc_suspend_sqe_info *suspend_sqe = NULL;
-
- sq = &parent_queue->parent_sq_info;
- suspend_sqe =
- kmalloc(sizeof(struct spfc_suspend_sqe_info), GFP_ATOMIC);
- if (!suspend_sqe) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[err]alloc suspend sqe memory failed");
- return;
- }
- memset(suspend_sqe, 0, sizeof(struct spfc_suspend_sqe_info));
- memcpy(&suspend_sqe->sqe, sqe, sizeof(struct spfc_sqe));
- suspend_sqe->magic_num = magic_num;
- suspend_sqe->old_offload_sts = sq->need_offloaded;
- suspend_sqe->hba = sq->hba;
-
- if (pkg) {
- memcpy(&suspend_sqe->pkg, pkg, sizeof(struct unf_frame_pkg));
- } else {
- sqn_base = sq->sqn_base;
- suspend_sqe->pkg.private_data[PKG_PRIVATE_XCHG_SSQ_INDEX] =
- sqn_base;
- }
-
- INIT_DELAYED_WORK(&suspend_sqe->timeout_work, spfc_do_port_reset);
- INIT_LIST_HEAD(&suspend_sqe->list_sqe_entry);
-
- spin_lock_irqsave(&parent_queue->parent_queue_state_lock, flag);
- list_add_tail(&suspend_sqe->list_sqe_entry, &sq->suspend_sqe_list);
- spin_unlock_irqrestore(&parent_queue->parent_queue_state_lock, flag);
-
- (void)queue_delayed_work(((struct spfc_hba_info *)hba)->work_queue,
- &suspend_sqe->timeout_work,
- (ulong)msecs_to_jiffies((u32)SPFC_SQ_NOP_TIMEOUT_MS));
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) magic num(0x%x)suspend sqe",
- ((struct spfc_hba_info *)hba)->port_cfg.port_id, magic_num);
-}
-
-u32 spfc_pop_suspend_sqe(void *handle, struct spfc_parent_queue_info *parent_queue,
- struct spfc_suspend_sqe_info *suspen_sqe)
-{
- ulong flag;
- u32 ret = UNF_RETURN_ERROR;
- struct spfc_parent_sq_info *sq = NULL;
- u16 ssqn;
- struct unf_frame_pkg *pkg = NULL;
- struct spfc_hba_info *hba = (struct spfc_hba_info *)handle;
- u8 task_type;
- spinlock_t *prtq_state_lock = NULL;
-
- sq = &parent_queue->parent_sq_info;
- task_type = suspen_sqe->sqe.ts_sl.task_type;
- pkg = &suspen_sqe->pkg;
- if (!pkg) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_MAJOR, "[error]pkt is null.");
- return UNF_RETURN_ERROR;
- }
-
- ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) pop up suspend wqe sqn (0x%x) TaskType(0x%x)",
- hba->port_cfg.port_id, ssqn, task_type);
-
- prtq_state_lock = &parent_queue->parent_queue_state_lock;
- spin_lock_irqsave(prtq_state_lock, flag);
- if (SPFC_RPORT_NOT_OFFLOADED(parent_queue) &&
- (task_type == SPFC_SQE_ELS_RSP ||
- task_type == SPFC_TASK_T_ELS)) {
- spin_unlock_irqrestore(prtq_state_lock, flag);
- /* Send PLOGI, PLOGI ACC or SCR if the session is not offloaded */
- ret = spfc_send_els_via_default_session(hba, &suspen_sqe->sqe, pkg, parent_queue);
- } else {
- spin_unlock_irqrestore(prtq_state_lock, flag);
- ret = spfc_parent_sq_enqueue(sq, &suspen_sqe->sqe, ssqn);
- }
- return ret;
-}
-
-static void spfc_build_nop_sqe(struct spfc_hba_info *hba, struct spfc_parent_sq_info *sq,
- struct spfc_sqe *sqe, u32 magic_num, u32 scqn)
-{
- sqe->ts_sl.task_type = SPFC_SQE_NOP;
- sqe->ts_sl.wd0.conn_id = (u16)(sq->rport_index);
- sqe->ts_sl.cont.nop_sq.wd0.scqn = scqn;
- sqe->ts_sl.cont.nop_sq.magic_num = magic_num;
- spfc_build_common_wqe_ctrls(&sqe->ctrl_sl,
- sizeof(struct spfc_sqe_ts) / SPFC_WQE_SECTION_CHUNK_SIZE);
-}
-
-u32 spfc_send_nop_cmd(void *handle, struct spfc_parent_sq_info *parent_sq_info,
- u32 magic_num, u16 sqn)
-{
- struct spfc_sqe empty_sq_sqe;
- struct spfc_hba_info *hba = (struct spfc_hba_info *)handle;
- u32 ret;
-
- memset(&empty_sq_sqe, 0, sizeof(struct spfc_sqe));
-
- spfc_build_nop_sqe(hba, parent_sq_info, &empty_sq_sqe, magic_num, hba->default_scqn);
- ret = spfc_parent_sq_enqueue(parent_sq_info, &empty_sq_sqe, sqn);
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]send nop cmd scqn(0x%x) sq(0x%x).",
- hba->default_scqn, sqn);
- return ret;
-}
-
-u32 spfc_suspend_sqe_and_send_nop(void *handle,
- struct spfc_parent_queue_info *parent_queue,
- struct spfc_sqe *sqe, struct unf_frame_pkg *pkg)
-{
- u32 ret = UNF_RETURN_ERROR;
- u32 magic_num;
- struct spfc_hba_info *hba = (struct spfc_hba_info *)handle;
- struct spfc_parent_sq_info *parent_sq = &parent_queue->parent_sq_info;
- struct unf_lport *lport = (struct unf_lport *)hba->lport;
-
- FC_CHECK_RETURN_VALUE(lport, UNF_RETURN_ERROR);
-
- if (pkg) {
- magic_num = pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
- } else {
- magic_num = (u32)atomic64_inc_return(&((struct unf_lport *)
- lport->root_lport)->exchg_index);
- }
-
- spfc_push_sqe_suspend(hba, parent_queue, sqe, pkg, magic_num);
- if (SPFC_RPORT_NOT_OFFLOADED(parent_queue))
- parent_sq->need_offloaded = SPFC_NEED_DO_OFFLOAD;
-
- ret = spfc_send_nop_cmd(hba, parent_sq, magic_num,
- (u16)parent_sq->sqn_base);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[err]Port(0x%x) rport_index(0x%x)send sq empty failed.",
- hba->port_cfg.port_id, parent_sq->rport_index);
- }
- return ret;
-}
-
-void spfc_build_session_rst_wqe(void *handle, struct spfc_parent_sq_info *sq,
- struct spfc_sqe *sqe, enum spfc_session_reset_mode mode, u32 scqn)
-{
- struct spfc_hba_info *hba = NULL;
-
- hba = (struct spfc_hba_info *)handle;
- /* The reset session command does not occupy xid. Therefore,
- * 0xffff can be used to align with the microcode.
- */
- sqe->ts_sl.task_type = SPFC_SQE_SESS_RST;
- sqe->ts_sl.local_xid = 0xffff;
- sqe->ts_sl.wd0.conn_id = (u16)(sq->rport_index);
- sqe->ts_sl.wd0.remote_xid = 0xffff;
- sqe->ts_sl.cont.reset_session.wd0.reset_exch_start = hba->exi_base;
- sqe->ts_sl.cont.reset_session.wd0.reset_exch_end = hba->exi_base + (hba->exi_count - 1);
- sqe->ts_sl.cont.reset_session.wd1.reset_did = sq->remote_port_id;
- sqe->ts_sl.cont.reset_session.wd1.mode = mode;
- sqe->ts_sl.cont.reset_session.wd2.reset_sid = sq->local_port_id;
- sqe->ts_sl.cont.reset_session.wd3.scqn = scqn;
-
- spfc_build_common_wqe_ctrls(&sqe->ctrl_sl,
- sizeof(struct spfc_sqe_ts) / SPFC_WQE_SECTION_CHUNK_SIZE);
-}
-
-u32 spfc_send_session_rst_cmd(void *handle,
- struct spfc_parent_queue_info *parent_queue_info,
- enum spfc_session_reset_mode mode)
-{
- struct spfc_parent_sq_info *sq = NULL;
- struct spfc_sqe rst_sess_sqe;
- u32 ret = UNF_RETURN_ERROR;
- u32 sts_scqn = 0;
- struct spfc_hba_info *hba = NULL;
-
- hba = (struct spfc_hba_info *)handle;
- memset(&rst_sess_sqe, 0, sizeof(struct spfc_sqe));
- sq = &parent_queue_info->parent_sq_info;
- sts_scqn = hba->default_scqn;
-
- spfc_build_session_rst_wqe(hba, sq, &rst_sess_sqe, mode, sts_scqn);
- ret = spfc_suspend_sqe_and_send_nop(hba, parent_queue_info, &rst_sess_sqe, NULL);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]RPort(0x%x) send SESS_RST(%d) start_exch_id(0x%x) end_exch_id(0x%x), scqn(0x%x) ctx_id(0x%x) cid(0x%x)",
- sq->rport_index, mode,
- rst_sess_sqe.ts_sl.cont.reset_session.wd0.reset_exch_start,
- rst_sess_sqe.ts_sl.cont.reset_session.wd0.reset_exch_end,
- rst_sess_sqe.ts_sl.cont.reset_session.wd3.scqn,
- sq->context_id, sq->cache_id);
- return ret;
-}
-
-void spfc_rcvd_els_from_srq_timeout(struct work_struct *work)
-{
- struct spfc_hba_info *hba = NULL;
-
- hba = container_of(work, struct spfc_hba_info, srq_delay_info.del_work.work);
-
- /* If the frame has not been processed yet, push it to the CM layer:
- * it may already have been handled when the root rq received data.
- */
- if (hba->srq_delay_info.srq_delay_flag) {
- spfc_recv_els_cmnd(hba, &hba->srq_delay_info.frame_pkg,
- hba->srq_delay_info.frame_pkg.unf_cmnd_pload_bl.buffer_ptr,
- 0, false);
- hba->srq_delay_info.srq_delay_flag = 0;
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) srq delay work timeout, send saved plogi to CM",
- hba->port_cfg.port_id);
- }
-}
-
-u32 spfc_flush_ini_resp_queue(void *handle)
-{
- struct spfc_hba_info *hba = NULL;
-
- FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
- hba = (struct spfc_hba_info *)handle;
-
- spfc_flush_sts_scq(hba);
-
- return RETURN_OK;
-}
-
-static void spfc_handle_aeq_queue_error(struct spfc_hba_info *hba,
- struct spfc_aqe_data *aeq_msg)
-{
- u32 sts_scqn_local = 0;
- u32 full_ci = INVALID_VALUE32;
- u32 full_ci_owner = INVALID_VALUE32;
- struct spfc_scq_info *scq_info = NULL;
-
- sts_scqn_local = SPFC_RPORTID_TO_STS_SCQN(aeq_msg->wd0.conn_id);
- scq_info = &hba->scq_info[sts_scqn_local];
- full_ci = scq_info->ci;
- full_ci_owner = scq_info->ci_owner;
-
- /* Currently, a flush is forcibly scheduled on the sts SCQ. The AEQE is
- * acknowledged regardless of whether the SCQ has been processed.
- */
- tasklet_schedule(&scq_info->tasklet);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) RPort(0x%x) LocalScqn(0x%x) CqmScqn(0x%x) is full, force flush CI from (%u|0x%x) to (%u|0x%x)",
- hba->port_cfg.port_id, aeq_msg->wd0.conn_id,
- sts_scqn_local, scq_info->scqn, full_ci_owner, full_ci,
- scq_info->ci_owner, scq_info->ci);
-}
-
-void spfc_process_aeqe(void *handle, u8 event_type, u8 *val)
-{
- u32 ret = RETURN_OK;
- struct spfc_hba_info *hba = (struct spfc_hba_info *)handle;
- struct spfc_aqe_data aeq_msg;
- u8 event_code = INVALID_VALUE8;
- u64 event_val = *((u64 *)val);
-
- FC_CHECK_RETURN_VOID(hba);
-
- memcpy(&aeq_msg, (struct spfc_aqe_data *)&event_val, sizeof(struct spfc_aqe_data));
- event_code = (u8)aeq_msg.wd0.evt_code;
-
- switch (event_type) {
- case FC_AEQ_EVENT_QUEUE_ERROR:
- spfc_handle_aeq_queue_error(hba, &aeq_msg);
- break;
-
- case FC_AEQ_EVENT_WQE_FATAL_ERROR:
- UNF_LOWLEVEL_PORT_EVENT(ret, hba->lport,
- UNF_PORT_ABNORMAL_RESET, NULL);
- break;
-
- case FC_AEQ_EVENT_CTX_FATAL_ERROR:
- break;
-
- case FC_AEQ_EVENT_OFFLOAD_ERROR:
- ret = spfc_handle_aeq_off_load_err(hba, &aeq_msg);
- break;
-
- default:
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[warn]Port(0x%x) receive an unsupported AEQ EventType(0x%x) EventVal(0x%llx).",
- hba->port_cfg.port_id, event_type, (u64)event_val);
- return;
- }
-
- if (event_code < FC_AEQ_EVT_ERR_CODE_BUTT)
- SPFC_AEQ_ERR_TYPE_STAT(hba, aeq_msg.wd0.evt_code);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_KEVENT,
- "[info]Port(0x%x) receive AEQ EventType(0x%x) EventVal(0x%llx) EvtCode(0x%x) Conn_id(0x%x) Xid(0x%x) %s",
- hba->port_cfg.port_id, event_type, (u64)event_val, event_code,
- aeq_msg.wd0.conn_id, aeq_msg.wd1.xid,
- (ret == UNF_RETURN_ERROR) ? "ERROR" : "OK");
-}
-
-void spfc_sess_resource_free_sync(void *handle,
- struct unf_port_info *rport_info)
-{
- struct spfc_parent_queue_info *parent_queue_info = NULL;
- ulong flag = 0;
- u32 wait_sq_cnt = 0;
- struct spfc_hba_info *hba = NULL;
- spinlock_t *prtq_state_lock = NULL;
- u32 index = SPFC_DEFAULT_RPORT_INDEX;
-
- FC_CHECK_RETURN_VOID(handle);
- FC_CHECK_RETURN_VOID(rport_info);
-
- hba = (struct spfc_hba_info *)handle;
- parent_queue_info = &hba->parent_queue_mgr->parent_queue[index];
- prtq_state_lock = &parent_queue_info->parent_queue_state_lock;
- (void)spfc_free_parent_resource((void *)hba, rport_info);
-
- for (;;) {
- spin_lock_irqsave(prtq_state_lock, flag);
- if (parent_queue_info->offload_state == SPFC_QUEUE_STATE_FREE) {
- spin_unlock_irqrestore(prtq_state_lock, flag);
- break;
- }
- spin_unlock_irqrestore(prtq_state_lock, flag);
- msleep(SPFC_WAIT_SESS_FREE_ONE_TIME_MS);
- wait_sq_cnt++;
- if (wait_sq_cnt >= SPFC_MAX_WAIT_LOOP_TIMES)
- break;
- }
-}
diff --git a/drivers/scsi/spfc/hw/spfc_queue.h b/drivers/scsi/spfc/hw/spfc_queue.h
deleted file mode 100644
index c09f098e7324..000000000000
--- a/drivers/scsi/spfc/hw/spfc_queue.h
+++ /dev/null
@@ -1,711 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef SPFC_QUEUE_H
-#define SPFC_QUEUE_H
-
-#include "unf_type.h"
-#include "spfc_wqe.h"
-#include "spfc_cqm_main.h"
-#define SPFC_MIN_WP_NUM (2)
-#define SPFC_EXTEND_WQE_OFFSET (128)
-#define SPFC_SQE_SIZE (256)
-#define WQE_MARKER_0 (0x0)
-#define WQE_MARKER_6B (0x6b)
-
-/* PARENT SQ & Context defines */
-#define SPFC_MAX_MSN (65535)
-#define SPFC_MSN_MASK (0xffff000000000000LL)
-#define SPFC_SQE_TS_SIZE (72)
-#define SPFC_SQE_FIRST_OBIT_DW_POS (0)
-#define SPFC_SQE_SECOND_OBIT_DW_POS (30)
-#define SPFC_SQE_OBIT_SET_MASK_BE (0x80)
-#define SPFC_SQE_OBIT_CLEAR_MASK_BE (0xffffff7f)
-#define SPFC_MAX_SQ_TASK_TYPE_CNT (128)
-#define SPFC_SQ_NUM_PER_QPC (3)
-#define SPFC_SQ_QID_START_PER_QPC 0
-#define SPFC_SQ_SPACE_OFFSET (64)
-#define SPFC_MAX_SSQ_NUM (SPFC_SQ_NUM_PER_QPC * 63 + 1) /* must be a multiple of 3 */
-#define SPFC_DIRECTWQE_SQ_INDEX (SPFC_MAX_SSQ_NUM - 1)
-
-/* Note: if the location of the flush done bit changes, the definition must be
- * modified accordingly
- */
-#define SPFC_CTXT_FLUSH_DONE_DW_POS (58)
-#define SPFC_CTXT_FLUSH_DONE_MASK_BE (0x4000)
-#define SPFC_CTXT_FLUSH_DONE_MASK_LE (0x400000)
-
-#define SPFC_PCIE_TEMPLATE (0)
-#define SPFC_DMA_ATTR_OFST (0)
-
-/*
- * When the driver assembles a WQE SGE, the GPA parity bit is multiplexed as:
- * {rsvd'2,zerocopysoro'2,zerocopy_dmaattr_idx'6,pcie_template'6}
- */
-#define SPFC_PCIE_TEMPLATE_OFFSET 0
-#define SPFC_PCIE_ZEROCOPY_DMAATTR_IDX_OFFSET 6
-#define SPFC_PCIE_ZEROCOPY_SO_RO_OFFSET 12
-#define SPFC_PCIE_RELAXED_ORDERING (1)
-#define SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE \
- (SPFC_PCIE_RELAXED_ORDERING << SPFC_PCIE_ZEROCOPY_SO_RO_OFFSET | \
- SPFC_DMA_ATTR_OFST << SPFC_PCIE_ZEROCOPY_DMAATTR_IDX_OFFSET | \
- SPFC_PCIE_TEMPLATE)
-
-#define SPFC_GET_SQ_HEAD(sq) \
- list_entry(UNF_OS_LIST_NEXT(&(sq)->list_linked_list_sq), \
- struct spfc_wqe_page, entry_wpg)
-#define SPFC_GET_SQ_TAIL(sq) \
- list_entry(UNF_OS_LIST_PREV(&(sq)->list_linked_list_sq), \
- struct spfc_wqe_page, entry_wpg)
-#define SPFC_SQ_IO_STAT(ssq, io_type) \
- (atomic_inc(&(ssq)->io_stat[io_type]))
-#define SPFC_SQ_IO_STAT_READ(ssq, io_type) \
- (atomic_read(&(ssq)->io_stat[io_type]))
-#define SPFC_GET_QUEUE_CMSN(ssq) \
- ((u32)(be64_to_cpu(((((ssq)->queue_header)->ci_record) & SPFC_MSN_MASK))))
-#define SPFC_GET_WP_END_CMSN(head_start_cmsn, wqe_num_per_buf) \
- ((u16)(((u32)(head_start_cmsn) + (u32)(wqe_num_per_buf) - 1) % (SPFC_MAX_MSN + 1)))
-#define SPFC_MSN_INC(msn) (((SPFC_MAX_MSN) == (msn)) ? 0 : ((msn) + 1))
-#define SPFC_MSN_DEC(msn) (((msn) == 0) ? (SPFC_MAX_MSN) : ((msn) - 1))
-#define SPFC_QUEUE_MSN_OFFSET(start_cmsn, end_cmsn) \
- ((u32)((((u32)(end_cmsn) + (SPFC_MAX_MSN)) - (u32)(start_cmsn)) % (SPFC_MAX_MSN + 1)))
-#define SPFC_MSN32_ADD(msn, inc) (((msn) + (inc)) % (SPFC_MAX_MSN + 1))
-
-/*
- *SCQ defines
- */
-#define SPFC_INT_NUM_PER_QUEUE (1)
-#define SPFC_SCQ_INT_ID_MAX (2048) /* 11BIT */
-#define SPFC_SCQE_SIZE (64)
-#define SPFC_CQE_GPA_SHIFT (4)
-#define SPFC_NEXT_CQE_GPA_SHIFT (12)
-/* 1-Update Ci by Tile, 0-Update Ci by Hardware */
-#define SPFC_PMSN_CI_TYPE_FROM_HOST (0)
-#define SPFC_PMSN_CI_TYPE_FROM_UCODE (1)
-#define SPFC_ARMQ_IDLE (0)
-#define SPFC_CQ_INT_MODE (2)
-#define SPFC_CQ_HEADER_OWNER_SHIFT (15)
-
-/* SCQC_CQ_DEPTH 0-256, 1-512, 2-1k, 3-2k, 4-4k, 5-8k, 6-16k, 7-32k.
- * include LinkWqe
- */
-#define SPFC_CMD_SCQ_DEPTH (4096)
-#define SPFC_STS_SCQ_DEPTH (8192)
-
-#define SPFC_CMD_SCQC_CQ_DEPTH (spfc_log2n(SPFC_CMD_SCQ_DEPTH >> 8))
-#define SPFC_STS_SCQC_CQ_DEPTH (spfc_log2n(SPFC_STS_SCQ_DEPTH >> 8))
-#define SPFC_STS_SCQ_CI_TYPE SPFC_PMSN_CI_TYPE_FROM_HOST
-
-#define SPFC_CMD_SCQ_CI_TYPE SPFC_PMSN_CI_TYPE_FROM_UCODE
-
-#define SPFC_SCQ_INTR_LOW_LATENCY_MODE 0
-#define SPFC_SCQ_INTR_POLLING_MODE 1
-#define SPFC_SCQ_PROC_CNT_PER_SECOND_THRESHOLD (30000)
-
-#define SPFC_CQE_MAX_PROCESS_NUM_PER_INTR (128)
-#define SPFC_SESSION_SCQ_NUM (16)
-
-/* SCQ[0, 2, 4 ...] CMD SCQ, SCQ[1, 3, 5 ...] STS SCQ,
- * SCQ[SPFC_TOTAL_SCQ_NUM-1] Default SCQ
- */
-#define SPFC_CMD_SCQN_START (0)
-#define SPFC_STS_SCQN_START (1)
-#define SPFC_SCQS_PER_SESSION (2)
-
-#define SPFC_TOTAL_SCQ_NUM (SPFC_SESSION_SCQ_NUM + 1)
-
-#define SPFC_SCQ_IS_STS(scq_index) \
- (((scq_index) % SPFC_SCQS_PER_SESSION) || ((scq_index) == SPFC_SESSION_SCQ_NUM))
-#define SPFC_SCQ_IS_CMD(scq_index) (!SPFC_SCQ_IS_STS(scq_index))
-#define SPFC_RPORTID_TO_CMD_SCQN(rport_index) \
- (((rport_index) * SPFC_SCQS_PER_SESSION) % SPFC_SESSION_SCQ_NUM)
-#define SPFC_RPORTID_TO_STS_SCQN(rport_index) \
- ((((rport_index) * SPFC_SCQS_PER_SESSION) + 1) % SPFC_SESSION_SCQ_NUM)
-
-/*
- *SRQ defines
- */
-#define SPFC_SRQE_SIZE (32)
-#define SPFC_SRQ_INIT_LOOP_O (1)
-#define SPFC_QUEUE_RING (1)
-#define SPFC_SRQ_ELS_DATA_NUM (1)
-#define SPFC_SRQ_ELS_SGE_LEN (256)
-#define SPFC_SRQ_ELS_DATA_DEPTH (31750) /* depth must be a multiple of 127 */
-
-#define SPFC_IRQ_NAME_MAX (30)
-
-/* Support 2048 sessions(xid) */
-#define SPFC_CQM_XID_MASK (0x7ff)
-
-#define SPFC_QUEUE_FLUSH_DOING (0)
-#define SPFC_QUEUE_FLUSH_DONE (1)
-#define SPFC_QUEUE_FLUSH_WAIT_TIMEOUT_MS (2000)
-#define SPFC_QUEUE_FLUSH_WAIT_MS (2)
-
-/*
- *RPort defines
- */
-#define SPFC_RPORT_OFFLOADED(prnt_qinfo) \
- ((prnt_qinfo)->offload_state == SPFC_QUEUE_STATE_OFFLOADED)
-#define SPFC_RPORT_NOT_OFFLOADED(prnt_qinfo) \
- ((prnt_qinfo)->offload_state != SPFC_QUEUE_STATE_OFFLOADED)
-#define SPFC_RPORT_FLUSH_NOT_NEEDED(prnt_qinfo) \
- (((prnt_qinfo)->offload_state == SPFC_QUEUE_STATE_INITIALIZED) || \
- ((prnt_qinfo)->offload_state == SPFC_QUEUE_STATE_OFFLOADING) || \
- ((prnt_qinfo)->offload_state == SPFC_QUEUE_STATE_FREE))
-#define SPFC_CHECK_XID_MATCHED(sq_xid, sqe_xid) \
- (((sq_xid) & SPFC_CQM_XID_MASK) == ((sqe_xid) & SPFC_CQM_XID_MASK))
-#define SPFC_PORT_MODE_TGT (0) /* Port mode */
-#define SPFC_PORT_MODE_INI (1)
-#define SPFC_PORT_MODE_BOTH (2)
-
-/*
- *Hardware Reserved Queue Info defines
- */
-#define SPFC_HRQI_SEQ_ID_MAX (255)
-#define SPFC_HRQI_SEQ_INDEX_MAX (64)
-#define SPFC_HRQI_SEQ_INDEX_SHIFT (6)
-#define SPFC_HRQI_SEQ_SEPCIAL_ID (3)
-#define SPFC_HRQI_SEQ_INVALID_ID (~0LL)
-
-enum spfc_session_reset_mode {
- SPFC_SESS_RST_DELETE_IO_ONLY = 1,
- SPFC_SESS_RST_DELETE_CONN_ONLY = 2,
- SPFC_SESS_RST_DELETE_IO_CONN_BOTH = 3,
- SPFC_SESS_RST_MODE_BUTT
-};
-
-/* linkwqe */
-#define CQM_LINK_WQE_CTRLSL_VALUE 2
-#define CQM_LINK_WQE_LP_VALID 1
-#define CQM_LINK_WQE_LP_INVALID 0
-
-/* bit mask */
-#define SPFC_SCQN_MASK 0xfffff
-#define SPFC_SCQ_CTX_CI_GPA_MASK 0xfffffff
-#define SPFC_SCQ_CTX_C_EQN_MSI_X_MASK 0x7
-#define SPFC_PARITY_MASK 0x1
-#define SPFC_KEYSECTION_XID_H_MASK 0xf
-#define SPFC_KEYSECTION_XID_L_MASK 0xffff
-#define SPFC_SRQ_CTX_rqe_dma_attr_idx_MASK 0xf
-#define SPFC_SSQ_CTX_MASK 0xfffff
-#define SPFC_KEY_WD3_SID_2_MASK 0x00ff0000
-#define SPFC_KEY_WD3_SID_1_MASK 0x00ff00
-#define SPFC_KEY_WD3_SID_0_MASK 0x0000ff
-#define SPFC_KEY_WD4_DID_2_MASK 0x00ff0000
-#define SPFC_KEY_WD4_DID_1_MASK 0x00ff00
-#define SPFC_KEY_WD4_DID_0_MASK 0x0000ff
-#define SPFC_LOCAL_LW_WD1_DUMP_MSN_MASK 0x7fff
-#define SPFC_PMSN_MASK 0xff
-#define SPFC_QOS_LEVEL_MASK 0x3
-#define SPFC_DB_VAL_MASK 0xFFFFFFFF
-#define SPFC_MSNWD_L_MASK 0xffff
-#define SPFC_MSNWD_H_MASK 0x7fff
-#define SPFC_DB_WD0_PI_H_MASK 0xf
-#define SPFC_DB_WD0_PI_L_MASK 0xfff
-
-#define SPFC_DB_C_BIT_DATA_TYPE 0
-#define SPFC_DB_C_BIT_CONTROL_TYPE 1
-
-#define SPFC_OWNER_DRIVER_PRODUCT (1)
-
-#define SPFC_256BWQE_ENABLE (1)
-#define SPFC_DB_ARM_DISABLE (0)
-
-#define SPFC_CNTX_SIZE_T_256B (0)
-#define SPFC_CNTX_SIZE_256B (256)
-
-#define SPFC_SERVICE_TYPE_FC (12)
-#define SPFC_SERVICE_TYPE_FC_SQ (13)
-
-#define SPFC_PACKET_COS_FC_CMD (0)
-#define SPFC_PACKET_COS_FC_DATA (1)
-
-#define SPFC_QUEUE_LINK_STYLE (0)
-#define SPFC_QUEUE_RING_STYLE (1)
-
-#define SPFC_NEED_DO_OFFLOAD (1)
-#define SPFC_QID_SQ (0)
-
-/*
- *SCQ defines
- */
-struct spfc_scq_info {
- struct cqm_queue *cqm_scq_info;
- u32 wqe_num_per_buf;
- u32 wqe_size;
- u32 scqc_cq_depth; /* 0-256, 1-512, 2-1k, 3-2k, 4-4k, 5-8k, 6-16k, 7-32k */
- u16 scqc_ci_type;
- u16 valid_wqe_num; /* ScQ depth include link wqe */
- u16 ci;
- u16 ci_owner;
- u32 queue_id;
- u32 scqn;
- char irq_name[SPFC_IRQ_NAME_MAX];
- u16 msix_entry_idx;
- u32 irq_id;
- struct tasklet_struct tasklet;
- atomic_t flush_stat;
- void *hba;
- u32 reserved;
- struct task_struct *delay_task;
- bool task_exit;
- u32 intr_mode;
-};
-
-struct spfc_srq_ctx {
- /* DW0 */
- u64 pcie_template : 6;
- u64 rsvd0 : 2;
- u64 parity : 8;
- u64 cur_rqe_usr_id : 16;
- u64 cur_rqe_msn : 16;
- u64 last_rq_pmsn : 16;
-
- /* DW1 */
- u64 cur_rqe_gpa;
-
- /* DW2 */
- u64 ctrl_sl : 1;
- u64 cf : 1;
- u64 csl : 2;
- u64 cr : 1;
- u64 bdsl : 4;
- u64 pmsn_type : 1;
- u64 cur_wqe_o : 1;
- u64 consant_sge_len : 17;
- u64 cur_sge_id : 4;
- u64 cur_sge_remain_len : 17;
- u64 ceqn_msix : 11;
- u64 int_mode : 2;
- u64 cur_sge_l : 1;
- u64 cur_sge_v : 1;
-
- /* DW3 */
- u64 cur_sge_gpa;
-
- /* DW4 */
- u64 cur_pmsn_gpa;
-
- /* DW5 */
- u64 rsvd3 : 5;
- u64 ring : 1;
- u64 loop_o : 1;
- u64 rsvd2 : 1;
- u64 rqe_dma_attr_idx : 6;
- u64 rq_so_ro : 2;
- u64 cqe_dma_attr_idx : 6;
- u64 cq_so_ro : 2;
- u64 rsvd1 : 7;
- u64 arm_q : 1;
- u64 cur_cqe_cnt : 8;
- u64 cqe_max_cnt : 8;
- u64 prefetch_max_masn : 16;
-
- /* DW6~DW7 */
- u64 rsvd4;
- u64 rsvd5;
-};
-
-struct spfc_drq_buff_entry {
- u16 buff_id;
- void *buff_addr;
- dma_addr_t buff_dma;
-};
-
-enum spfc_clean_state { SPFC_CLEAN_DONE, SPFC_CLEAN_DOING, SPFC_CLEAN_BUTT };
-enum spfc_srq_type { SPFC_SRQ_ELS = 1, SPFC_SRQ_IMMI, SPFC_SRQ_BUTT };
-
-struct spfc_srq_info {
- enum spfc_srq_type srq_type;
-
- struct cqm_queue *cqm_srq_info;
- u32 wqe_num_per_buf; /* Wqe number per buf, doesn't include link wqe */
- u32 wqe_size;
- u32 valid_wqe_num; /* valid wqe number, doesn't include link wqe */
- u16 pi;
- u16 pi_owner;
- u16 pmsn;
- u16 ci;
- u16 cmsn;
- u32 srqn;
-
- dma_addr_t first_rqe_recv_dma;
-
- struct spfc_drq_buff_entry *els_buff_entry_head;
- struct buf_describe buf_list;
- spinlock_t srq_spin_lock;
- bool spin_lock_init;
- bool enable;
- enum spfc_clean_state state;
-
- atomic_t ref;
-
- struct delayed_work del_work;
- u32 del_retry_time;
- void *hba;
-};
-
-/*
- * The doorbell record keeps the PI of the WQE that will be produced next.
- * The PI is 15 bits wide with 1 o-bit
- */
-struct db_record {
- u64 pmsn : 16;
- u64 dump_pmsn : 16;
- u64 rsvd0 : 32;
-};
-
-/*
- * The ci record keeps the CI of the WQE that will be consumed next.
- * The ci is 15 bits wide with 1 o-bit
- */
-struct ci_record {
- u64 cmsn : 16;
- u64 dump_cmsn : 16;
- u64 rsvd0 : 32;
-};
-
-/* The accumulate data in WQ header */
-struct accumulate {
- u64 data_2_uc;
- u64 data_2_drv;
-};
-
-/* The WQ header structure */
-struct wq_header {
- struct db_record db_record;
- struct ci_record ci_record;
- struct accumulate soft_data;
-};
-
-/* Link list Sq WqePage Pool */
-/* queue header struct */
-struct spfc_queue_header {
- u64 door_bell_record;
- u64 ci_record;
- u64 rsv1;
- u64 rsv2;
-};
-
-/* WPG-WQEPAGE, LLSQ-LINKED LIST SQ */
-struct spfc_wqe_page {
- struct list_head entry_wpg;
-
- /* Wqe Page virtual addr */
- void *wpg_addr;
-
- /* Wqe Page physical addr */
- u64 wpg_phy_addr;
-};
-
-struct spfc_sq_wqepage_pool {
- u32 wpg_cnt;
- u32 wpg_size;
- u32 wqe_per_wpg;
-
- /* PCI DMA Pool */
- struct dma_pool *wpg_dma_pool;
- struct spfc_wqe_page *wpg_pool_addr;
- struct list_head list_free_wpg_pool;
- spinlock_t wpg_pool_lock;
- atomic_t wpg_in_use;
-};
-
-#define SPFC_SQ_DEL_STAGE_TIMEOUT_MS (3 * 1000)
-#define SPFC_SRQ_DEL_STAGE_TIMEOUT_MS (10 * 1000)
-#define SPFC_SQ_WAIT_FLUSH_DONE_TIMEOUT_MS (10 * 1000)
-#define SPFC_SQ_WAIT_FLUSH_DONE_TIMEOUT_CNT (3)
-
-#define SPFC_SRQ_PROCESS_DELAY_MS (20)
-
-/* PLOGI parameters */
-struct spfc_plogi_copram {
- u32 seq_cnt : 1;
- u32 ed_tov : 1;
- u32 rsvd : 14;
- u32 tx_mfs : 16;
- u32 ed_tov_time;
-};
-
-struct spfc_delay_sqe_ctrl_info {
- bool valid;
- u32 rport_index;
- u32 time_out;
- u64 start_jiff;
- u32 sid;
- u32 did;
- u32 xid;
- u16 ssqn;
- struct spfc_sqe sqe;
-};
-
-struct spfc_suspend_sqe_info {
- void *hba;
- u32 magic_num;
- u8 old_offload_sts;
- struct unf_frame_pkg pkg;
- struct spfc_sqe sqe;
- struct delayed_work timeout_work;
- struct list_head list_sqe_entry;
-};
-
-struct spfc_delay_destroy_ctrl_info {
- bool valid;
- u32 rport_index;
- u32 time_out;
- u64 start_jiff;
- struct unf_port_info rport_info;
-};
-
-/* PARENT SQ Info */
-struct spfc_parent_sq_info {
- void *hba;
- spinlock_t parent_sq_enqueue_lock;
- u32 rport_index;
- u32 context_id;
- /* Fixed value,used for Doorbell */
- u32 sq_queue_id;
- /* When a session is offloaded, tile will return the CacheId to the
- * driver, which is used for Doorbell
- */
- u32 cache_id;
- /* service type, fc or fc */
- u32 service_type;
- /* OQID */
- u16 oqid_rd;
- u16 oqid_wr;
- u32 local_port_id;
- u32 remote_port_id;
- u32 sqn_base;
- bool port_in_flush;
- bool sq_in_sess_rst;
- atomic_t sq_valid;
- /* Used by NPIV QoS */
- u8 vport_id;
- /* Used by NPIV QoS */
- u8 cs_ctrl;
- struct delayed_work del_work;
- struct delayed_work flush_done_timeout_work;
- u64 del_start_jiff;
- dma_addr_t srq_ctx_addr;
- atomic_t sq_cached;
- atomic_t flush_done_wait_cnt;
- struct spfc_plogi_copram plogi_co_parms;
- /* dif control info for immi */
- struct unf_dif_control_info sirt_dif_control;
- struct spfc_delay_sqe_ctrl_info delay_sqe;
- struct spfc_delay_destroy_ctrl_info destroy_sqe;
- struct list_head suspend_sqe_list;
- atomic_t io_stat[SPFC_MAX_SQ_TASK_TYPE_CNT];
- u8 need_offloaded;
-};
-
-/* parent context doorbell */
-struct spfc_parent_sq_db {
- struct {
- u32 xid : 20;
- u32 cntx_size : 2;
- u32 arm : 1;
- u32 c : 1;
- u32 cos : 3;
- u32 service_type : 5;
- } wd0;
-
- struct {
- u32 pi_hi : 8;
- u32 sm_data : 20;
- u32 qid : 4;
- } wd1;
-};
-
-#define IWARP_FC_DDB_TYPE 3
-
-/* direct wqe doorbell */
-struct spfc_direct_wqe_db {
- struct {
- u32 xid : 20;
- u32 cntx_size : 2;
- u32 pi_hi : 4;
- u32 c : 1;
- u32 cos : 3;
- u32 ddb : 2;
- } wd0;
-
- struct {
- u32 pi_lo : 12;
- u32 sm_data : 20;
- } wd1;
-};
-
-struct spfc_parent_cmd_scq_info {
- u32 cqm_queue_id;
- u32 local_queue_id;
-};
-
-struct spfc_parent_st_scq_info {
- u32 cqm_queue_id;
- u32 local_queue_id;
-};
-
-struct spfc_parent_els_srq_info {
- u32 cqm_queue_id;
- u32 local_queue_id;
-};
-
-enum spfc_parent_queue_state {
- SPFC_QUEUE_STATE_INITIALIZED = 0,
- SPFC_QUEUE_STATE_OFFLOADING = 1,
- SPFC_QUEUE_STATE_OFFLOADED = 2,
- SPFC_QUEUE_STATE_DESTROYING = 3,
- SPFC_QUEUE_STATE_FREE = 4,
- SPFC_QUEUE_STATE_BUTT
-};
-
-struct spfc_parent_ctx {
- dma_addr_t parent_ctx_addr;
- void *parent_ctx;
- struct cqm_qpc_mpt *cqm_parent_ctx_obj;
-};
-
-struct spfc_parent_queue_info {
- spinlock_t parent_queue_state_lock;
- struct spfc_parent_ctx parent_ctx;
- enum spfc_parent_queue_state offload_state;
- struct spfc_parent_sq_info parent_sq_info;
- struct spfc_parent_cmd_scq_info parent_cmd_scq_info;
- struct spfc_parent_st_scq_info
- parent_sts_scq_info;
- struct spfc_parent_els_srq_info parent_els_srq_info;
- u8 queue_vport_id;
- u8 queue_data_cos;
-};
-
-struct spfc_parent_ssq_info {
- void *hba;
- spinlock_t parent_sq_enqueue_lock;
- atomic_t wqe_page_cnt;
- u32 context_id;
- u32 cache_id;
- u32 sq_queue_id;
- u32 sqn;
- u32 service_type;
- u32 max_sqe_num; /* SQ depth */
- u32 wqe_num_per_buf;
- u32 wqe_size;
- u32 accum_wqe_cnt;
- u32 wqe_offset;
- u16 head_start_cmsn;
- u16 head_end_cmsn;
- u16 last_pmsn;
- u16 last_pi_owner;
- u32 queue_style;
- atomic_t sq_valid;
- void *queue_head_original;
- struct spfc_queue_header *queue_header;
- dma_addr_t queue_hdr_phy_addr_original;
- dma_addr_t queue_hdr_phy_addr;
- struct list_head list_linked_list_sq;
- atomic_t sq_db_cnt;
- atomic_t sq_wqe_cnt;
- atomic_t sq_cqe_cnt;
- atomic_t sqe_minus_cqe_cnt;
- atomic_t io_stat[SPFC_MAX_SQ_TASK_TYPE_CNT];
-};
-
-struct spfc_parent_shared_queue_info {
- struct spfc_parent_ctx parent_ctx;
- struct spfc_parent_ssq_info parent_ssq_info;
-};
-
-struct spfc_parent_queue_mgr {
- struct spfc_parent_queue_info parent_queue[UNF_SPFC_MAXRPORT_NUM];
- struct spfc_parent_shared_queue_info shared_queue[SPFC_MAX_SSQ_NUM];
- struct buf_describe parent_sq_buf_list;
-};
-
-#define SPFC_SRQC_BUS_ROW 8
-#define SPFC_SRQC_BUS_COL 19
-#define SPFC_SQC_BUS_ROW 8
-#define SPFC_SQC_BUS_COL 13
-#define SPFC_HW_SCQC_BUS_ROW 6
-#define SPFC_HW_SCQC_BUS_COL 10
-#define SPFC_HW_SRQC_BUS_ROW 4
-#define SPFC_HW_SRQC_BUS_COL 15
-#define SPFC_SCQC_BUS_ROW 3
-#define SPFC_SCQC_BUS_COL 29
-
-#define SPFC_QUEUE_INFO_BUS_NUM 4
-struct spfc_queue_info_bus {
- u64 bus[SPFC_QUEUE_INFO_BUS_NUM];
-};
-
-u32 spfc_free_parent_resource(void *handle, struct unf_port_info *rport_info);
-u32 spfc_alloc_parent_resource(void *handle, struct unf_port_info *rport_info);
-u32 spfc_alloc_parent_queue_mgr(void *handle);
-void spfc_free_parent_queue_mgr(void *handle);
-u32 spfc_create_common_share_queues(void *handle);
-u32 spfc_create_ssq(void *handle);
-void spfc_destroy_common_share_queues(void *v_pstHba);
-u32 spfc_alloc_parent_sq_wqe_page_pool(void *handle);
-void spfc_free_parent_sq_wqe_page_pool(void *handle);
-struct spfc_parent_queue_info *
-spfc_find_parent_queue_info_by_pkg(void *handle, struct unf_frame_pkg *pkg);
-struct spfc_parent_sq_info *
-spfc_find_parent_sq_by_pkg(void *handle, struct unf_frame_pkg *pkg);
-u32 spfc_root_cmdq_enqueue(void *handle, union spfc_cmdqe *cmdqe, u16 cmd_len);
-void spfc_process_scq_cqe(ulong scq_info);
-u32 spfc_process_scq_cqe_entity(ulong scq_info, u32 proc_cnt);
-void spfc_post_els_srq_wqe(struct spfc_srq_info *srq_info, u16 buf_id);
-void spfc_process_aeqe(void *handle, u8 event_type, u8 *event_val);
-u32 spfc_parent_sq_enqueue(struct spfc_parent_sq_info *sq, struct spfc_sqe *io_sqe,
- u16 ssqn);
-u32 spfc_parent_ssq_enqueue(struct spfc_parent_ssq_info *ssq,
- struct spfc_sqe *io_sqe, u8 wqe_type);
-void spfc_free_sq_wqe_page(struct spfc_parent_ssq_info *ssq, u32 cur_cmsn);
-u32 spfc_reclaim_sq_wqe_page(void *handle, union spfc_scqe *scqe);
-void spfc_set_rport_flush_state(void *handle, bool in_flush);
-u32 spfc_clear_fetched_sq_wqe(void *handle);
-u32 spfc_clear_pending_sq_wqe(void *handle);
-void spfc_free_parent_queues(void *handle);
-void spfc_free_ssq(void *handle, u32 free_sq_num);
-void spfc_enalbe_queues_dispatch(void *handle);
-void spfc_queue_pre_process(void *handle, bool clean);
-void spfc_queue_post_process(void *handle);
-void spfc_free_parent_queue_info(void *handle, struct spfc_parent_queue_info *parent_queue_info);
-u32 spfc_send_session_rst_cmd(void *handle,
- struct spfc_parent_queue_info *parent_queue_info,
- enum spfc_session_reset_mode mode);
-u32 spfc_send_nop_cmd(void *handle, struct spfc_parent_sq_info *parent_sq_info,
- u32 magic_num, u16 sqn);
-void spfc_build_session_rst_wqe(void *handle, struct spfc_parent_sq_info *sq,
- struct spfc_sqe *sqe,
- enum spfc_session_reset_mode mode, u32 scqn);
-void spfc_wq_destroy_els_srq(struct work_struct *work);
-void spfc_destroy_els_srq(void *handle);
-u32 spfc_push_delay_sqe(void *hba,
- struct spfc_parent_queue_info *offload_parent_queue,
- struct spfc_sqe *sqe, struct unf_frame_pkg *pkg);
-void spfc_push_destroy_parent_queue_sqe(void *hba,
- struct spfc_parent_queue_info *offloading_parent_queue,
- struct unf_port_info *rport_info);
-void spfc_pop_destroy_parent_queue_sqe(void *handle,
- struct spfc_delay_destroy_ctrl_info *destroy_sqe_info);
-struct spfc_parent_queue_info *spfc_find_offload_parent_queue(void *handle,
- u32 local_id,
- u32 remote_id,
- u32 rport_index);
-u32 spfc_flush_ini_resp_queue(void *handle);
-void spfc_rcvd_els_from_srq_timeout(struct work_struct *work);
-u32 spfc_send_aeq_info_via_cmdq(void *hba, u32 aeq_error_type);
-u32 spfc_parent_sq_ring_doorbell(struct spfc_parent_ssq_info *sq, u8 qos_level,
- u32 c);
-void spfc_sess_resource_free_sync(void *handle,
- struct unf_port_info *rport_info);
-u32 spfc_suspend_sqe_and_send_nop(void *handle,
- struct spfc_parent_queue_info *parent_queue,
- struct spfc_sqe *sqe, struct unf_frame_pkg *pkg);
-u32 spfc_pop_suspend_sqe(void *handle,
- struct spfc_parent_queue_info *parent_queue,
- struct spfc_suspend_sqe_info *suspen_sqe);
-#endif
diff --git a/drivers/scsi/spfc/hw/spfc_service.c b/drivers/scsi/spfc/hw/spfc_service.c
deleted file mode 100644
index 1da58e3f9fbe..000000000000
--- a/drivers/scsi/spfc/hw/spfc_service.c
+++ /dev/null
@@ -1,2170 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "spfc_service.h"
-#include "unf_log.h"
-#include "spfc_io.h"
-#include "spfc_chipitf.h"
-
-#define SPFC_ELS_SRQ_BUF_NUM (0x9)
-#define SPFC_LS_GS_USERID_LEN ((FC_LS_GS_USERID_CNT_MAX + 1) / 2)
-
-struct unf_scqe_handle_table {
- u32 scqe_type; /* ELS type */
- bool reclaim_sq_wpg;
- u32 (*scqe_handle_func)(struct spfc_hba_info *hba, union spfc_scqe *scqe);
-};
-
-static u32 spfc_get_els_rsp_pld_len(u16 els_type, u16 els_cmnd,
- u32 *els_acc_pld_len)
-{
- u32 ret = RETURN_OK;
-
- FC_CHECK_RETURN_VALUE(els_acc_pld_len, UNF_RETURN_ERROR);
-
- /* RJT */
- if (els_type == ELS_RJT) {
- *els_acc_pld_len = UNF_ELS_ACC_RJT_LEN;
- return RETURN_OK;
- }
-
- /* ACC */
- switch (els_cmnd) {
- /* uses the same PAYLOAD length as PLOGI. */
- case ELS_FLOGI:
- case ELS_PDISC:
- case ELS_PLOGI:
- *els_acc_pld_len = UNF_PLOGI_ACC_PAYLOAD_LEN;
- break;
-
- case ELS_PRLI:
- /* If sirt is enabled, The PRLI ACC payload extends 12 bytes */
- *els_acc_pld_len = (UNF_PRLI_ACC_PAYLOAD_LEN - UNF_PRLI_SIRT_EXTRA_SIZE);
-
- break;
-
- case ELS_LOGO:
- *els_acc_pld_len = UNF_LOGO_ACC_PAYLOAD_LEN;
- break;
-
- case ELS_PRLO:
- *els_acc_pld_len = UNF_PRLO_ACC_PAYLOAD_LEN;
- break;
-
- case ELS_RSCN:
- *els_acc_pld_len = UNF_RSCN_ACC_PAYLOAD_LEN;
- break;
-
- case ELS_ADISC:
- *els_acc_pld_len = UNF_ADISC_ACC_PAYLOAD_LEN;
- break;
-
- case ELS_RRQ:
- *els_acc_pld_len = UNF_RRQ_ACC_PAYLOAD_LEN;
- break;
-
- case ELS_SCR:
- *els_acc_pld_len = UNF_SCR_RSP_PAYLOAD_LEN;
- break;
-
- case ELS_ECHO:
- *els_acc_pld_len = UNF_ECHO_ACC_PAYLOAD_LEN;
- break;
-
- case ELS_REC:
- *els_acc_pld_len = UNF_REC_ACC_PAYLOAD_LEN;
- break;
-
- default:
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_WARN, "[warn]Unknown ELS command(0x%x)",
- els_cmnd);
- ret = UNF_RETURN_ERROR;
- break;
- }
-
- return ret;
-}
-
-struct unf_els_cmd_paylod_table {
- u16 els_cmnd; /* ELS type */
- u32 els_req_pld_len;
- u32 els_rsp_pld_len;
-};
-
-static const struct unf_els_cmd_paylod_table els_pld_table_map[] = {
- {ELS_FDISC, UNF_FDISC_PAYLOAD_LEN, UNF_FDISC_ACC_PAYLOAD_LEN},
- {ELS_FLOGI, UNF_FLOGI_PAYLOAD_LEN, UNF_FLOGI_ACC_PAYLOAD_LEN},
- {ELS_PLOGI, UNF_PLOGI_PAYLOAD_LEN, UNF_PLOGI_ACC_PAYLOAD_LEN},
- {ELS_SCR, UNF_SCR_PAYLOAD_LEN, UNF_SCR_RSP_PAYLOAD_LEN},
- {ELS_PDISC, UNF_PDISC_PAYLOAD_LEN, UNF_PDISC_ACC_PAYLOAD_LEN},
- {ELS_LOGO, UNF_LOGO_PAYLOAD_LEN, UNF_LOGO_ACC_PAYLOAD_LEN},
- {ELS_PRLO, UNF_PRLO_PAYLOAD_LEN, UNF_PRLO_ACC_PAYLOAD_LEN},
- {ELS_ADISC, UNF_ADISC_PAYLOAD_LEN, UNF_ADISC_ACC_PAYLOAD_LEN},
- {ELS_RRQ, UNF_RRQ_PAYLOAD_LEN, UNF_RRQ_ACC_PAYLOAD_LEN},
- {ELS_RSCN, 0, UNF_RSCN_ACC_PAYLOAD_LEN},
- {ELS_ECHO, UNF_ECHO_PAYLOAD_LEN, UNF_ECHO_ACC_PAYLOAD_LEN},
- {ELS_REC, UNF_REC_PAYLOAD_LEN, UNF_REC_ACC_PAYLOAD_LEN}
-};
-
-static u32 spfc_get_els_req_acc_pld_len(u16 els_cmnd, u32 *req_pld_len, u32 *rsp_pld_len)
-{
- u32 ret = RETURN_OK;
- u32 i;
-
- FC_CHECK_RETURN_VALUE(req_pld_len, UNF_RETURN_ERROR);
-
- for (i = 0; i < (sizeof(els_pld_table_map) /
- sizeof(struct unf_els_cmd_paylod_table));
- i++) {
- if (els_pld_table_map[i].els_cmnd == els_cmnd) {
- *req_pld_len = els_pld_table_map[i].els_req_pld_len;
- *rsp_pld_len = els_pld_table_map[i].els_rsp_pld_len;
- return ret;
- }
- }
-
- switch (els_cmnd) {
- case ELS_PRLI:
- /* If sirt is enabled, The PRLI ACC payload extends 12 bytes */
- *req_pld_len = SPFC_GET_PRLI_PAYLOAD_LEN;
- *rsp_pld_len = SPFC_GET_PRLI_PAYLOAD_LEN;
-
- break;
-
- default:
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Unknown ELS_CMD(0x%x)", els_cmnd);
- ret = UNF_RETURN_ERROR;
- break;
- }
-
- return ret;
-}
-
-static u32 spfc_check_parent_qinfo_valid(struct spfc_hba_info *hba, struct unf_frame_pkg *pkg,
- struct spfc_parent_queue_info **prt_qinfo)
-{
- if (!*prt_qinfo) {
- if (pkg->type == UNF_PKG_ELS_REQ || pkg->type == UNF_PKG_ELS_REPLY) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) send LS SID(0x%x) DID(0x%x) with null prtqinfo",
- hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
- pkg->frame_head.rctl_did);
- pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = SPFC_DEFAULT_RPORT_INDEX;
- *prt_qinfo = spfc_find_parent_queue_info_by_pkg(hba, pkg);
- if (!*prt_qinfo)
- return UNF_RETURN_ERROR;
- } else {
- return UNF_RETURN_ERROR;
- }
- }
-
- if (pkg->type == UNF_PKG_GS_REQ && SPFC_RPORT_NOT_OFFLOADED(*prt_qinfo)) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[info]Port(0x%x) send GS SID(0x%x) DID(0x%x), send GS Request before PLOGI",
- hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
- pkg->frame_head.rctl_did);
- return UNF_RETURN_ERROR;
- }
- return RETURN_OK;
-}
-
-static void spfc_get_pkt_cmnd_type_code(struct unf_frame_pkg *pkg,
- u16 *ls_gs_cmnd_code,
- u16 *ls_gs_cmnd_type)
-{
- *ls_gs_cmnd_type = SPFC_GET_LS_GS_CMND_CODE(pkg->cmnd);
- if (SPFC_PKG_IS_ELS_RSP(*ls_gs_cmnd_type)) {
- *ls_gs_cmnd_code = SPFC_GET_ELS_RSP_CODE(pkg->cmnd);
- } else if (pkg->type == UNF_PKG_GS_REQ) {
- *ls_gs_cmnd_code = *ls_gs_cmnd_type;
- } else {
- *ls_gs_cmnd_code = *ls_gs_cmnd_type;
- *ls_gs_cmnd_type = ELS_CMND;
- }
-}
-
-static u32 spfc_get_gs_req_rsp_pld_len(u16 cmnd_code, u32 *gs_pld_len, u32 *gs_rsp_pld_len)
-{
- FC_CHECK_RETURN_VALUE(gs_pld_len, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(gs_rsp_pld_len, UNF_RETURN_ERROR);
-
- switch (cmnd_code) {
- case NS_GPN_ID:
- *gs_pld_len = UNF_GPNID_PAYLOAD_LEN;
- *gs_rsp_pld_len = UNF_GPNID_RSP_PAYLOAD_LEN;
- break;
-
- case NS_GNN_ID:
- *gs_pld_len = UNF_GNNID_PAYLOAD_LEN;
- *gs_rsp_pld_len = UNF_GNNID_RSP_PAYLOAD_LEN;
- break;
-
- case NS_GFF_ID:
- *gs_pld_len = UNF_GFFID_PAYLOAD_LEN;
- *gs_rsp_pld_len = UNF_GFFID_RSP_PAYLOAD_LEN;
- break;
-
- case NS_GID_FT:
- case NS_GID_PT:
- *gs_pld_len = UNF_GID_PAYLOAD_LEN;
- *gs_rsp_pld_len = UNF_GID_ACC_PAYLOAD_LEN;
- break;
-
- case NS_RFT_ID:
- *gs_pld_len = UNF_RFTID_PAYLOAD_LEN;
- *gs_rsp_pld_len = UNF_RFTID_RSP_PAYLOAD_LEN;
- break;
-
- case NS_RFF_ID:
- *gs_pld_len = UNF_RFFID_PAYLOAD_LEN;
- *gs_rsp_pld_len = UNF_RFFID_RSP_PAYLOAD_LEN;
- break;
- case NS_GA_NXT:
- *gs_pld_len = UNF_GID_PAYLOAD_LEN;
- *gs_rsp_pld_len = UNF_GID_ACC_PAYLOAD_LEN;
- break;
-
- case NS_GIEL:
- *gs_pld_len = UNF_RFTID_RSP_PAYLOAD_LEN;
- *gs_rsp_pld_len = UNF_GID_ACC_PAYLOAD_LEN;
- break;
-
- default:
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Unknown GS commond type(0x%x)", cmnd_code);
- return UNF_RETURN_ERROR;
- }
-
- return RETURN_OK;
-}
-
-static void *spfc_get_els_frame_addr(struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg,
- u16 els_cmnd_code, u16 els_cmnd_type,
- u64 *phy_addr)
-{
- void *frame_pld_addr = NULL;
- dma_addr_t els_frame_addr = 0;
-
- if (els_cmnd_code == ELS_ECHO) {
- frame_pld_addr = (void *)UNF_GET_ECHO_PAYLOAD(pkg);
- els_frame_addr = UNF_GET_ECHO_PAYLOAD_PHYADDR(pkg);
- } else if (els_cmnd_code == ELS_RSCN) {
- if (els_cmnd_type == ELS_CMND) {
- /* Not Support */
- frame_pld_addr = NULL;
- els_frame_addr = 0;
- } else {
- frame_pld_addr = (void *)UNF_GET_RSCN_ACC_PAYLOAD(pkg);
- els_frame_addr = pkg->unf_cmnd_pload_bl.buf_dma_addr +
- sizeof(struct unf_fc_head);
- }
- } else {
- frame_pld_addr = (void *)SPFC_GET_CMND_PAYLOAD_ADDR(pkg);
- els_frame_addr = pkg->unf_cmnd_pload_bl.buf_dma_addr +
- sizeof(struct unf_fc_head);
- }
- *phy_addr = els_frame_addr;
- return frame_pld_addr;
-}
-
-static u32 spfc_get_frame_info(struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg, void **frame_pld_addr,
- u32 *frame_pld_len, u64 *frame_phy_addr,
- u32 *acc_pld_len)
-{
- u32 ret = RETURN_OK;
- u16 ls_gs_cmnd_code = SPFC_ZERO;
- u16 ls_gs_cmnd_type = SPFC_ZERO;
-
- spfc_get_pkt_cmnd_type_code(pkg, &ls_gs_cmnd_code, &ls_gs_cmnd_type);
-
- if (pkg->type == UNF_PKG_GS_REQ) {
- ret = spfc_get_gs_req_rsp_pld_len(ls_gs_cmnd_code,
- frame_pld_len, acc_pld_len);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%x) send GS SID(0x%x) DID(0x%x), get error GS request and response payload length",
- hba->port_cfg.port_id,
- pkg->frame_head.csctl_sid,
- pkg->frame_head.rctl_did);
-
- return ret;
- }
- *frame_pld_addr = (void *)(SPFC_GET_CMND_PAYLOAD_ADDR(pkg));
- *frame_phy_addr = pkg->unf_cmnd_pload_bl.buf_dma_addr + sizeof(struct unf_fc_head);
- if (ls_gs_cmnd_code == NS_GID_FT || ls_gs_cmnd_code == NS_GID_PT)
- *frame_pld_addr = (void *)(UNF_GET_GID_PAYLOAD(pkg));
- } else {
- *frame_pld_addr = spfc_get_els_frame_addr(hba, pkg, ls_gs_cmnd_code,
- ls_gs_cmnd_type, frame_phy_addr);
- if (SPFC_PKG_IS_ELS_RSP(ls_gs_cmnd_type)) {
- ret = spfc_get_els_rsp_pld_len(ls_gs_cmnd_type, ls_gs_cmnd_code,
- frame_pld_len);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) get els cmd (0x%x) rsp len failed.",
- hba->port_cfg.port_id,
- ls_gs_cmnd_code);
- return ret;
- }
- } else {
- ret = spfc_get_els_req_acc_pld_len(ls_gs_cmnd_code, frame_pld_len,
- acc_pld_len);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) get els cmd (0x%x) req and acc len failed.",
- hba->port_cfg.port_id,
- ls_gs_cmnd_code);
- return ret;
- }
- }
- }
- return ret;
-}
-
-static u32
-spfc_send_ls_gs_via_parent(struct spfc_hba_info *hba, struct unf_frame_pkg *pkg,
- struct spfc_parent_queue_info *prt_queue_info)
-{
- u32 ret = UNF_RETURN_ERROR;
- u16 ls_gs_cmnd_code = SPFC_ZERO;
- u16 ls_gs_cmnd_type = SPFC_ZERO;
- u16 remote_exid = 0;
- u16 hot_tag = 0;
- struct spfc_parent_sq_info *parent_sq_info = NULL;
- struct spfc_sqe tmp_sqe;
- struct spfc_sqe *sqe = NULL;
- void *frame_pld_addr = NULL;
- u32 frame_pld_len = 0;
- u32 acc_pld_len = 0;
- u64 frame_pa = 0;
- ulong flags = 0;
- u16 ssqn = 0;
- spinlock_t *prtq_state_lock = NULL;
-
- ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
- sqe = &tmp_sqe;
- memset(sqe, 0, sizeof(struct spfc_sqe));
-
- parent_sq_info = &prt_queue_info->parent_sq_info;
- hot_tag = (u16)UNF_GET_HOTPOOL_TAG(pkg) + hba->exi_base;
-
- spfc_get_pkt_cmnd_type_code(pkg, &ls_gs_cmnd_code, &ls_gs_cmnd_type);
-
- ret = spfc_get_frame_info(hba, pkg, &frame_pld_addr, &frame_pld_len,
- &frame_pa, &acc_pld_len);
- if (ret != RETURN_OK)
- return ret;
-
- if (SPFC_PKG_IS_ELS_RSP(ls_gs_cmnd_type)) {
- remote_exid = UNF_GET_OXID(pkg);
- spfc_build_els_wqe_ts_rsp(sqe, prt_queue_info, pkg,
- frame_pld_addr, ls_gs_cmnd_type,
- ls_gs_cmnd_code);
-
- /* Assemble the SQE Task Section Els Common part */
- spfc_build_service_wqe_ts_common(&sqe->ts_sl, parent_sq_info->rport_index,
- UNF_GET_RXID(pkg), remote_exid,
- SPFC_LSW(frame_pld_len));
- } else {
- remote_exid = UNF_GET_RXID(pkg);
- /* send els req ,only use local_xid for hotpooltag */
- spfc_build_els_wqe_ts_req(sqe, parent_sq_info,
- prt_queue_info->parent_sts_scq_info.cqm_queue_id,
- frame_pld_addr, pkg);
- spfc_build_service_wqe_ts_common(&sqe->ts_sl, parent_sq_info->rport_index, hot_tag,
- remote_exid, SPFC_LSW(frame_pld_len));
- }
- /* Assemble the SQE Control Section part */
- spfc_build_service_wqe_ctrl_section(&sqe->ctrl_sl, SPFC_BYTES_TO_QW_NUM(SPFC_SQE_TS_SIZE),
- SPFC_BYTES_TO_QW_NUM(sizeof(struct spfc_variable_sge)));
-
- /* Build SGE */
- spfc_build_els_gs_wqe_sge(sqe, frame_pld_addr, frame_pa, frame_pld_len,
- parent_sq_info->context_id, hba);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) RPort(0x%x) send ELS/GS Type(0x%x) Code(0x%x) HotTag(0x%x)",
- hba->port_cfg.port_id, parent_sq_info->rport_index, ls_gs_cmnd_type,
- ls_gs_cmnd_code, hot_tag);
- if (ls_gs_cmnd_code == ELS_PLOGI || ls_gs_cmnd_code == ELS_LOGO) {
- ret = spfc_suspend_sqe_and_send_nop(hba, prt_queue_info, sqe, pkg);
- return ret;
- }
- prtq_state_lock = &prt_queue_info->parent_queue_state_lock;
- spin_lock_irqsave(prtq_state_lock, flags);
- if (SPFC_RPORT_NOT_OFFLOADED(prt_queue_info)) {
- spin_unlock_irqrestore(prtq_state_lock, flags);
- /* Send PLOGI or PLOGI ACC or SCR if session not offload */
- ret = spfc_send_els_via_default_session(hba, sqe, pkg, prt_queue_info);
- } else {
- spin_unlock_irqrestore(prtq_state_lock, flags);
- ret = spfc_parent_sq_enqueue(parent_sq_info, sqe, ssqn);
- }
-
- return ret;
-}
-
-u32 spfc_send_ls_gs_cmnd(void *handle, struct unf_frame_pkg *pkg)
-{
- u32 ret = UNF_RETURN_ERROR;
- struct spfc_hba_info *hba = NULL;
- struct spfc_parent_queue_info *prt_qinfo = NULL;
- u16 ls_gs_cmnd_code = SPFC_ZERO;
- union unf_sfs_u *sfs_entry = NULL;
- struct unf_rrq *rrq_pld = NULL;
- u16 ox_id = 0;
- u16 rx_id = 0;
-
- /* Check Parameters */
- FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(UNF_GET_SFS_ENTRY(pkg), UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(SPFC_GET_CMND_PAYLOAD_ADDR(pkg), UNF_RETURN_ERROR);
-
- SPFC_CHECK_PKG_ALLOCTIME(pkg);
- hba = (struct spfc_hba_info *)handle;
- ls_gs_cmnd_code = SPFC_GET_LS_GS_CMND_CODE(pkg->cmnd);
-
- /* If RRQ Req, Special processing */
- if (ls_gs_cmnd_code == ELS_RRQ) {
- sfs_entry = UNF_GET_SFS_ENTRY(pkg);
- rrq_pld = &sfs_entry->rrq;
- ox_id = (u16)(rrq_pld->oxid_rxid >> UNF_SHIFT_16);
- rx_id = (u16)(rrq_pld->oxid_rxid & SPFC_RXID_MASK);
- rrq_pld->oxid_rxid = (u32)ox_id << UNF_SHIFT_16 | rx_id;
- }
-
- prt_qinfo = spfc_find_parent_queue_info_by_pkg(hba, pkg);
- ret = spfc_check_parent_qinfo_valid(hba, pkg, &prt_qinfo);
-
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_MAJOR,
- "[error]Port(0x%x) send ELS/GS SID(0x%x) DID(0x%x) check qinfo invalid",
- hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
- pkg->frame_head.rctl_did);
- return UNF_RETURN_ERROR;
- }
-
- ret = spfc_send_ls_gs_via_parent(hba, pkg, prt_qinfo);
-
- return ret;
-}
-
-void spfc_save_login_parms_in_sq_info(struct spfc_hba_info *hba,
- struct unf_port_login_parms *login_params)
-{
- u32 rport_index = login_params->rport_index;
- struct spfc_parent_sq_info *parent_sq_info = NULL;
-
- if (rport_index >= UNF_SPFC_MAXRPORT_NUM) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[err]Port(0x%x) save login parms,but uplevel alloc invalid rport index: 0x%x",
- hba->port_cfg.port_id, rport_index);
-
- return;
- }
-
- parent_sq_info = &hba->parent_queue_mgr->parent_queue[rport_index].parent_sq_info;
-
- parent_sq_info->plogi_co_parms.seq_cnt = login_params->seq_cnt;
- parent_sq_info->plogi_co_parms.ed_tov = login_params->ed_tov;
- parent_sq_info->plogi_co_parms.tx_mfs = (login_params->tx_mfs <
- SPFC_DEFAULT_TX_MAX_FREAM_SIZE) ?
- SPFC_DEFAULT_TX_MAX_FREAM_SIZE :
- login_params->tx_mfs;
- parent_sq_info->plogi_co_parms.ed_tov_time = login_params->ed_tov_timer_val;
-}
-
-static void
-spfc_recover_offloading_state(struct spfc_parent_queue_info *prt_queue_info,
- enum spfc_parent_queue_state offload_state)
-{
- ulong flags = 0;
-
- spin_lock_irqsave(&prt_queue_info->parent_queue_state_lock, flags);
-
- if (prt_queue_info->offload_state == SPFC_QUEUE_STATE_OFFLOADING)
- prt_queue_info->offload_state = offload_state;
-
- spin_unlock_irqrestore(&prt_queue_info->parent_queue_state_lock, flags);
-}
-
-static bool spfc_check_need_delay_offload(void *hba, struct unf_frame_pkg *pkg, u32 rport_index,
- struct spfc_parent_queue_info *cur_prt_queue_info,
- struct spfc_parent_queue_info **offload_prt_queue_info)
-{
- ulong flags = 0;
- struct spfc_parent_queue_info *prt_queue_info = NULL;
- spinlock_t *prtq_state_lock = NULL;
-
- prtq_state_lock = &cur_prt_queue_info->parent_queue_state_lock;
- spin_lock_irqsave(prtq_state_lock, flags);
-
- if (cur_prt_queue_info->offload_state == SPFC_QUEUE_STATE_OFFLOADING) {
- spin_unlock_irqrestore(prtq_state_lock, flags);
-
- prt_queue_info = spfc_find_offload_parent_queue(hba, pkg->frame_head.csctl_sid &
- UNF_NPORTID_MASK,
- pkg->frame_head.rctl_did &
- UNF_NPORTID_MASK, rport_index);
- if (prt_queue_info) {
- *offload_prt_queue_info = prt_queue_info;
- return true;
- }
- } else {
- spin_unlock_irqrestore(prtq_state_lock, flags);
- }
-
- return false;
-}
-
-static u16 spfc_build_wqe_with_offload(struct spfc_hba_info *hba, struct spfc_sqe *sqe,
- struct spfc_parent_queue_info *prt_queue_info,
- struct unf_frame_pkg *pkg,
- enum spfc_parent_queue_state last_offload_state)
-{
- u32 tx_mfs = 2048;
- u32 edtov_timer = 2000;
- dma_addr_t ctx_pa = 0;
- u16 els_cmnd_type = SPFC_ZERO;
- u16 els_cmnd_code = SPFC_ZERO;
- void *ctx_va = NULL;
- struct spfc_parent_context *parent_ctx_info = NULL;
- struct spfc_sw_section *sw_setction = NULL;
- struct spfc_parent_sq_info *parent_sq_info = &prt_queue_info->parent_sq_info;
- u16 offload_flag = 0;
-
- els_cmnd_type = SPFC_GET_ELS_RSP_TYPE(pkg->cmnd);
- if (SPFC_PKG_IS_ELS_RSP(els_cmnd_type)) {
- els_cmnd_code = SPFC_GET_ELS_RSP_CODE(pkg->cmnd);
- } else {
- els_cmnd_code = els_cmnd_type;
- els_cmnd_type = ELS_CMND;
- }
-
- offload_flag = SPFC_CHECK_NEED_OFFLOAD(els_cmnd_code, els_cmnd_type, last_offload_state);
-
- parent_ctx_info = (struct spfc_parent_context *)(prt_queue_info->parent_ctx.parent_ctx);
- sw_setction = &parent_ctx_info->sw_section;
-
- sw_setction->tx_mfs = cpu_to_be16((u16)(tx_mfs));
- sw_setction->e_d_tov_timer_val = cpu_to_be32(edtov_timer);
-
- spfc_big_to_cpu32(&sw_setction->sw_ctxt_misc.pctxt_val0,
- sizeof(sw_setction->sw_ctxt_misc.pctxt_val0));
- sw_setction->sw_ctxt_misc.dw.port_id = SPFC_GET_NETWORK_PORT_ID(hba);
- spfc_cpu_to_big32(&sw_setction->sw_ctxt_misc.pctxt_val0,
- sizeof(sw_setction->sw_ctxt_misc.pctxt_val0));
-
- spfc_big_to_cpu32(&sw_setction->sw_ctxt_config.pctxt_val1,
- sizeof(sw_setction->sw_ctxt_config.pctxt_val1));
- spfc_cpu_to_big32(&sw_setction->sw_ctxt_config.pctxt_val1,
- sizeof(sw_setction->sw_ctxt_config.pctxt_val1));
-
- /* Fill in contex to the chip */
- ctx_pa = prt_queue_info->parent_ctx.cqm_parent_ctx_obj->paddr;
- ctx_va = prt_queue_info->parent_ctx.cqm_parent_ctx_obj->vaddr;
-
- /* No need write key and no need do BIG TO CPU32 */
- memcpy(ctx_va, prt_queue_info->parent_ctx.parent_ctx, sizeof(struct spfc_parent_context));
-
- if (SPFC_PKG_IS_ELS_RSP(els_cmnd_type)) {
- sqe->ts_sl.cont.els_rsp.context_gpa_hi = SPFC_HIGH_32_BITS(ctx_pa);
- sqe->ts_sl.cont.els_rsp.context_gpa_lo = SPFC_LOW_32_BITS(ctx_pa);
- sqe->ts_sl.cont.els_rsp.wd1.offload_flag = offload_flag;
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "[info]sid 0x%x, did 0x%x, GPA HIGH 0x%x,GPA LOW 0x%x, scq 0x%x,offload flag 0x%x",
- parent_sq_info->local_port_id,
- parent_sq_info->remote_port_id,
- sqe->ts_sl.cont.els_rsp.context_gpa_hi,
- sqe->ts_sl.cont.els_rsp.context_gpa_lo,
- prt_queue_info->parent_sts_scq_info.cqm_queue_id,
- offload_flag);
- } else {
- sqe->ts_sl.cont.t_els_gs.context_gpa_hi = SPFC_HIGH_32_BITS(ctx_pa);
- sqe->ts_sl.cont.t_els_gs.context_gpa_lo = SPFC_LOW_32_BITS(ctx_pa);
- sqe->ts_sl.cont.t_els_gs.wd4.offload_flag = offload_flag;
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "[info]sid 0x%x, did 0x%x, GPA HIGH 0x%x,GPA LOW 0x%x, scq 0x%x,offload flag 0x%x",
- parent_sq_info->local_port_id,
- parent_sq_info->remote_port_id,
- sqe->ts_sl.cont.t_els_gs.context_gpa_hi,
- sqe->ts_sl.cont.t_els_gs.context_gpa_lo,
- prt_queue_info->parent_sts_scq_info.cqm_queue_id,
- offload_flag);
- }
-
- if (offload_flag) {
- prt_queue_info->offload_state = SPFC_QUEUE_STATE_OFFLOADING;
- parent_sq_info->need_offloaded = SPFC_NEED_DO_OFFLOAD;
- }
-
- return offload_flag;
-}
-
-u32 spfc_send_els_via_default_session(struct spfc_hba_info *hba, struct spfc_sqe *io_sqe,
- struct unf_frame_pkg *pkg,
- struct spfc_parent_queue_info *prt_queue_info)
-{
- ulong flags = 0;
- bool sqe_delay = false;
- u32 ret = UNF_RETURN_ERROR;
- u16 els_cmnd_code = SPFC_ZERO;
- u16 els_cmnd_type = SPFC_ZERO;
- u16 ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
- u32 rport_index = pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX];
- struct spfc_sqe *sqe = io_sqe;
- struct spfc_parent_queue_info *default_prt_queue_info = NULL;
- struct spfc_parent_sq_info *parent_sq_info = &prt_queue_info->parent_sq_info;
- struct spfc_parent_queue_info *offload_queue_info = NULL;
- enum spfc_parent_queue_state last_offload_state = SPFC_QUEUE_STATE_INITIALIZED;
- struct spfc_delay_destroy_ctrl_info delay_ctl_info;
- u16 offload_flag = 0;
- u32 default_index = SPFC_DEFAULT_RPORT_INDEX;
-
- memset(&delay_ctl_info, 0, sizeof(struct spfc_delay_destroy_ctrl_info));
- /* Determine the ELS type in pkg */
- els_cmnd_type = SPFC_GET_LS_GS_CMND_CODE(pkg->cmnd);
-
- if (SPFC_PKG_IS_ELS_RSP(els_cmnd_type)) {
- els_cmnd_code = SPFC_GET_ELS_RSP_CODE(pkg->cmnd);
- } else {
- els_cmnd_code = els_cmnd_type;
- els_cmnd_type = ELS_CMND;
- }
-
- spin_lock_irqsave(&prt_queue_info->parent_queue_state_lock, flags);
-
- last_offload_state = prt_queue_info->offload_state;
-
- offload_flag = spfc_build_wqe_with_offload(hba, sqe, prt_queue_info,
- pkg, last_offload_state);
-
- spin_unlock_irqrestore(&prt_queue_info->parent_queue_state_lock, flags);
-
- if (!offload_flag) {
- default_prt_queue_info = &hba->parent_queue_mgr->parent_queue[default_index];
- if (!default_prt_queue_info) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_ERR,
- "[ERR]cmd(0x%x), type(0x%x) send fail, default session null",
- els_cmnd_code, els_cmnd_type);
- return UNF_RETURN_ERROR;
- }
- parent_sq_info = &default_prt_queue_info->parent_sq_info;
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "[info]cmd(0x%x), type(0x%x) send via default session",
- els_cmnd_code, els_cmnd_type);
- } else {
- /* Need this xid to judge delay offload, when Sqe Enqueue will
- * write again
- */
- sqe->ts_sl.xid = parent_sq_info->context_id;
- sqe_delay = spfc_check_need_delay_offload(hba, pkg, rport_index, prt_queue_info,
- &offload_queue_info);
-
- if (sqe_delay) {
- ret = spfc_push_delay_sqe(hba, offload_queue_info, sqe, pkg);
- if (ret == RETURN_OK) {
- spfc_recover_offloading_state(prt_queue_info, last_offload_state);
- return ret;
- }
- }
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_INFO,
- "[info]cmd(0x%x), type(0x%x) do secretly offload",
- els_cmnd_code, els_cmnd_type);
- }
-
- ret = spfc_parent_sq_enqueue(parent_sq_info, sqe, ssqn);
-
- if (ret != RETURN_OK) {
- spfc_recover_offloading_state(prt_queue_info, last_offload_state);
-
- spin_lock_irqsave(&prt_queue_info->parent_queue_state_lock,
- flags);
-
- if (prt_queue_info->parent_sq_info.destroy_sqe.valid) {
- memcpy(&delay_ctl_info, &prt_queue_info->parent_sq_info.destroy_sqe,
- sizeof(struct spfc_delay_destroy_ctrl_info));
-
- prt_queue_info->parent_sq_info.destroy_sqe.valid = false;
- }
-
- spin_unlock_irqrestore(&prt_queue_info->parent_queue_state_lock, flags);
-
- spfc_pop_destroy_parent_queue_sqe((void *)hba, &delay_ctl_info);
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_WARN,
- "[warn]Port(0x%x) RPort(0x%x) send ELS Type(0x%x) Code(0x%x) fail,recover offloadstatus(%u)",
- hba->port_cfg.port_id, rport_index, els_cmnd_type,
- els_cmnd_code, prt_queue_info->offload_state);
- }
-
- return ret;
-}
-
-static u32 spfc_rcv_ls_gs_rsp_payload(struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg, u32 hot_tag,
- u8 *els_pld_buf, u32 pld_len)
-{
- u32 ret = UNF_RETURN_ERROR;
-
- pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
- if (pkg->type == UNF_PKG_GS_REQ_DONE)
- spfc_big_to_cpu32(els_pld_buf, pld_len);
- else
- pkg->byte_orders |= SPFC_BIT_2;
-
- pkg->unf_cmnd_pload_bl.buffer_ptr = els_pld_buf;
- pkg->unf_cmnd_pload_bl.length = pld_len;
-
- pkg->last_pkg_flag = UNF_PKG_NOT_LAST_RESPONSE;
-
- UNF_LOWLEVEL_RECEIVE_LS_GS_PKG(ret, hba->lport, pkg);
-
- return ret;
-}
-
-u32 spfc_scq_recv_abts_rsp(struct spfc_hba_info *hba, union spfc_scqe *scqe)
-{
- /* Default path, which is sent from SCQ to the driver */
- u8 status = 0;
- u32 ret = UNF_RETURN_ERROR;
- u32 ox_id = INVALID_VALUE32;
- u32 hot_tag = INVALID_VALUE32;
- struct unf_frame_pkg pkg = {0};
- struct spfc_scqe_rcv_abts_rsp *abts_rsp = NULL;
-
- abts_rsp = &scqe->rcv_abts_rsp;
- pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = abts_rsp->magic_num;
-
- ox_id = (u32)(abts_rsp->wd0.ox_id);
-
- hot_tag = abts_rsp->wd1.hotpooltag;
- if (unlikely(hot_tag < (u32)hba->exi_base ||
- hot_tag >= (u32)(hba->exi_base + hba->exi_count))) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) has bad HotTag(0x%x) for bls_rsp",
- hba->port_cfg.port_id, hot_tag);
-
- status = UNF_IO_FAILED;
- hot_tag = INVALID_VALUE32;
- } else {
- hot_tag -= hba->exi_base;
- if (unlikely(SPFC_SCQE_HAS_ERRCODE(scqe))) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) BLS response has error code(0x%x) tag(0x%x)",
- hba->port_cfg.port_id,
- SPFC_GET_SCQE_STATUS(scqe), (u32)hot_tag);
-
- status = UNF_IO_FAILED;
- } else {
- pkg.frame_head.rctl_did = abts_rsp->wd3.did;
- pkg.frame_head.csctl_sid = abts_rsp->wd4.sid;
- pkg.frame_head.oxid_rxid = (u32)(abts_rsp->wd0.rx_id) | ox_id <<
- UNF_SHIFT_16;
-
- /* BLS_ACC/BLS_RJT: IO_succeed */
- if (abts_rsp->wd2.fh_rctrl == SPFC_RCTL_BLS_ACC) {
- status = UNF_IO_SUCCESS;
- } else if (abts_rsp->wd2.fh_rctrl == SPFC_RCTL_BLS_RJT) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) ABTS RJT: %08x-%08x-%08x",
- hba->port_cfg.port_id,
- abts_rsp->payload[ARRAY_INDEX_0],
- abts_rsp->payload[ARRAY_INDEX_1],
- abts_rsp->payload[ARRAY_INDEX_2]);
-
- status = UNF_IO_SUCCESS;
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) BLS response RCTL is error",
- hba->port_cfg.port_id);
- SPFC_ERR_IO_STAT(hba, SPFC_SCQE_ABTS_RSP);
- status = UNF_IO_FAILED;
- }
- }
- }
-
- /* Set PKG/exchange status & Process BLS_RSP */
- pkg.status = status;
- ret = spfc_rcv_bls_rsp(hba, &pkg, hot_tag);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) recv ABTS rsp OX_ID(0x%x) RX_ID(0x%x) HotTag(0x%x) SID(0x%x) DID(0x%x) %s",
- hba->port_cfg.port_id, ox_id, abts_rsp->wd0.rx_id, hot_tag,
- abts_rsp->wd4.sid, abts_rsp->wd3.did,
- (ret == RETURN_OK) ? "OK" : "ERROR");
-
- return ret;
-}
-
-u32 spfc_recv_els_cmnd(const struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg, u8 *els_pld, u32 pld_len,
- bool first)
-{
- u32 ret = UNF_RETURN_ERROR;
-
- /* Convert Payload to small endian */
- spfc_big_to_cpu32(els_pld, pld_len);
-
- pkg->type = UNF_PKG_ELS_REQ;
-
- pkg->unf_cmnd_pload_bl.buffer_ptr = els_pld;
-
- /* Payload length */
- pkg->unf_cmnd_pload_bl.length = pld_len;
-
- /* Obtain the Cmnd type from the Paylaod. The Cmnd is in small endian */
- if (first)
- pkg->cmnd = UNF_GET_FC_PAYLOAD_ELS_CMND(pkg->unf_cmnd_pload_bl.buffer_ptr);
-
- /* Errors have been processed in SPFC_RecvElsError */
- pkg->status = UNF_IO_SUCCESS;
-
- /* Send PKG to the CM layer */
- UNF_LOWLEVEL_RECEIVE_LS_GS_PKG(ret, hba->lport, pkg);
-
- if (ret != RETURN_OK) {
- pkg->rx_or_ox_id = UNF_PKG_FREE_RXID;
- pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = INVALID_VALUE32;
- pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = INVALID_VALUE32;
- ret = spfc_free_xid((void *)hba, pkg);
-
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) recv %s ox_id(0x%x) RXID(0x%x) PldLen(0x%x) failed, Free xid %s",
- hba->port_cfg.port_id,
- UNF_GET_FC_HEADER_RCTL(&pkg->frame_head) == SPFC_FC_RCTL_ELS_REQ ?
- "ELS REQ" : "ELS RSP",
- UNF_GET_OXID(pkg), UNF_GET_RXID(pkg), pld_len,
- (ret == RETURN_OK) ? "OK" : "ERROR");
- }
-
- return ret;
-}
-
-u32 spfc_rcv_ls_gs_rsp(const struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg, u32 hot_tag)
-{
- u32 ret = UNF_RETURN_ERROR;
-
- pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
- if (pkg->type == UNF_PKG_ELS_REQ_DONE)
- pkg->byte_orders |= SPFC_BIT_2;
-
- pkg->last_pkg_flag = UNF_PKG_LAST_RESPONSE;
-
- UNF_LOWLEVEL_RECEIVE_LS_GS_PKG(ret, hba->lport, pkg);
-
- return ret;
-}
-
-u32 spfc_rcv_els_rsp_sts(const struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg, u32 hot_tag)
-{
- u32 ret = UNF_RETURN_ERROR;
-
- pkg->type = UNF_PKG_ELS_REPLY_DONE;
- pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
-
- UNF_LOWLEVEL_SEND_ELS_DONE(ret, hba->lport, pkg);
-
- return ret;
-}
-
-u32 spfc_rcv_bls_rsp(const struct spfc_hba_info *hba, struct unf_frame_pkg *pkg,
- u32 hot_tag)
-{
- /*
- * 1. SCQ (normal)
- * 2. from Root RQ (parent no existence)
- * *
- * single frame, single sequence
- */
- u32 ret = UNF_RETURN_ERROR;
-
- pkg->type = UNF_PKG_BLS_REQ_DONE;
- pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
- pkg->last_pkg_flag = UNF_PKG_LAST_RESPONSE;
-
- UNF_LOWLEVEL_RECEIVE_BLS_PKG(ret, hba->lport, pkg);
-
- return ret;
-}
-
-u32 spfc_rsv_bls_rsp_sts(const struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg, u32 rx_id)
-{
- u32 ret = UNF_RETURN_ERROR;
-
- pkg->type = UNF_PKG_BLS_REPLY_DONE;
- pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = rx_id;
-
- UNF_LOWLEVEL_RECEIVE_BLS_PKG(ret, hba->lport, pkg);
-
- return ret;
-}
-
-u32 spfc_rcv_tmf_marker_sts(const struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg, u32 hot_tag)
-{
- u32 ret = UNF_RETURN_ERROR;
-
- pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
-
- /* Send PKG info to COM */
- UNF_LOWLEVEL_RECEIVE_MARKER_STS(ret, hba->lport, pkg);
-
- return ret;
-}
-
-u32 spfc_rcv_abts_marker_sts(const struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg, u32 hot_tag)
-{
- u32 ret = UNF_RETURN_ERROR;
-
- pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
-
- UNF_LOWLEVEL_RECEIVE_ABTS_MARKER_STS(ret, hba->lport, pkg);
-
- return ret;
-}
-
-static void spfc_scqe_error_pre_proc(struct spfc_hba_info *hba, union spfc_scqe *scqe)
-{
- /* Currently, only printing and statistics collection are performed */
- SPFC_ERR_IO_STAT(hba, SPFC_GET_SCQE_TYPE(scqe));
- SPFC_SCQ_ERR_TYPE_STAT(hba, SPFC_GET_SCQE_STATUS(scqe));
-
- FC_DRV_PRINT(UNF_LOG_ABNORMAL, UNF_WARN,
- "[warn]Port(0x%x)-Task_type(%u) SCQE contain error code(%u),additional info(0x%x)",
- hba->port_cfg.port_id, scqe->common.ch.wd0.task_type,
- scqe->common.ch.wd0.err_code, scqe->common.conn_id);
-}
-
-void *spfc_get_els_buf_by_user_id(struct spfc_hba_info *hba, u16 user_id)
-{
- struct spfc_drq_buff_entry *srq_buf_entry = NULL;
- struct spfc_srq_info *srq_info = NULL;
-
- FC_CHECK_RETURN_VALUE(hba, NULL);
-
- srq_info = &hba->els_srq_info;
- FC_CHECK_RETURN_VALUE(user_id < srq_info->valid_wqe_num, NULL);
-
- srq_buf_entry = &srq_info->els_buff_entry_head[user_id];
-
- return srq_buf_entry->buff_addr;
-}
-
-static u32 spfc_check_srq_buf_valid(struct spfc_hba_info *hba,
- u16 *buf_id_array, u32 buf_num)
-{
- u32 index = 0;
- u32 buf_id = 0;
- void *srq_buf = NULL;
-
- for (index = 0; index < buf_num; index++) {
- buf_id = buf_id_array[index];
-
- if (buf_id < hba->els_srq_info.valid_wqe_num)
- srq_buf = spfc_get_els_buf_by_user_id(hba, (u16)buf_id);
- else
- srq_buf = NULL;
-
- if (!srq_buf) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) get srq buffer user id(0x%x) is null",
- hba->port_cfg.port_id, buf_id);
-
- return UNF_RETURN_ERROR;
- }
- }
-
- return RETURN_OK;
-}
-
-static void spfc_reclaim_srq_buf(struct spfc_hba_info *hba, u16 *buf_id_array,
- u32 buf_num)
-{
- u32 index = 0;
- u32 buf_id = 0;
- void *srq_buf = NULL;
-
- for (index = 0; index < buf_num; index++) {
- buf_id = buf_id_array[index];
- if (buf_id < hba->els_srq_info.valid_wqe_num)
- srq_buf = spfc_get_els_buf_by_user_id(hba, (u16)buf_id);
- else
- srq_buf = NULL;
-
- /* A NULL buffer indicates an invalid buffer ID. In this case, exit
- * directly.
- */
- if (!srq_buf)
- break;
-
- spfc_post_els_srq_wqe(&hba->els_srq_info, (u16)buf_id);
- }
-}
-
-static u32 spfc_check_ls_gs_valid(struct spfc_hba_info *hba, union spfc_scqe *scqe,
- struct unf_frame_pkg *pkg, u16 *buf_id_array,
- u32 buf_num, u32 frame_len)
-{
- u32 hot_tag;
-
- hot_tag = UNF_GET_HOTPOOL_TAG(pkg);
-
- /* If the ELS CMD carries an error code, discard it directly */
- if ((sizeof(struct spfc_fc_frame_header) > frame_len) ||
- (SPFC_SCQE_HAS_ERRCODE(scqe)) || buf_num > SPFC_ELS_SRQ_BUF_NUM) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) get scqe type(0x%x) payload len(0x%x),scq status(0x%x),user id num(0x%x) abnormal",
- hba->port_cfg.port_id, SPFC_GET_SCQE_TYPE(scqe), frame_len,
- SPFC_GET_SCQE_STATUS(scqe), buf_num);
-
- /* ELS RSP Special Processing */
- if (SPFC_GET_SCQE_TYPE(scqe) == SPFC_SCQE_ELS_RSP ||
- SPFC_GET_SCQE_TYPE(scqe) == SPFC_SCQE_GS_RSP) {
- if (SPFC_SCQE_ERR_TO_CM(scqe)) {
- pkg->status = UNF_IO_FAILED;
- (void)spfc_rcv_ls_gs_rsp(hba, pkg, hot_tag);
- } else {
- if (SPFC_GET_SCQE_TYPE(scqe) == SPFC_SCQE_ELS_RSP)
- SPFC_HBA_STAT(hba, SPFC_STAT_ELS_RSP_EXCH_REUSE);
- else
- SPFC_HBA_STAT(hba, SPFC_STAT_GS_RSP_EXCH_REUSE);
- }
- }
-
- /* Reclaim srq */
- if (buf_num <= SPFC_ELS_SRQ_BUF_NUM)
- spfc_reclaim_srq_buf(hba, buf_id_array, buf_num);
-
- return UNF_RETURN_ERROR;
- }
-
- /* For ELS CMD, check the validity of the buffers sent by the ucode */
- if (SPFC_GET_SCQE_TYPE(scqe) == SPFC_SCQE_ELS_CMND) {
- if (spfc_check_srq_buf_valid(hba, buf_id_array, buf_num) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) get els cmnd scqe user id num(0x%x) abnormal, as some srq buff is null",
- hba->port_cfg.port_id, buf_num);
-
- spfc_reclaim_srq_buf(hba, buf_id_array, buf_num);
-
- return UNF_RETURN_ERROR;
- }
- }
-
- return RETURN_OK;
-}
-
-u32 spfc_scq_recv_els_cmnd(struct spfc_hba_info *hba, union spfc_scqe *scqe)
-{
- u32 ret = RETURN_OK;
- u32 pld_len = 0;
- u32 header_len = 0;
- u32 frame_len = 0;
- u32 rcv_data_len = 0;
- u32 max_buf_num = 0;
- u16 buf_id = 0;
- u32 index = 0;
- u8 *pld_addr = NULL;
- struct unf_frame_pkg pkg = {0};
- struct spfc_scqe_rcv_els_cmd *els_cmd = NULL;
- struct spfc_fc_frame_header *els_frame = NULL;
- struct spfc_fc_frame_header tmp_frame = {0};
- void *els_buf = NULL;
- bool first = false;
-
- els_cmd = &scqe->rcv_els_cmd;
- frame_len = els_cmd->wd3.data_len;
- max_buf_num = els_cmd->wd3.user_id_num;
- spfc_swap_16_in_32((u32 *)els_cmd->user_id, SPFC_LS_GS_USERID_LEN);
-
- pkg.xchg_contex = NULL;
- pkg.status = UNF_IO_SUCCESS;
-
- /* Check the error code and buffers for validity. If an exception
- * occurs, discard the frame
- */
- ret = spfc_check_ls_gs_valid(hba, scqe, &pkg, els_cmd->user_id,
- max_buf_num, frame_len);
- if (ret != RETURN_OK) {
- pkg.rx_or_ox_id = UNF_PKG_FREE_RXID;
- pkg.frame_head.oxid_rxid =
- (u32)(els_cmd->wd2.rx_id) | (u32)(els_cmd->wd2.ox_id) << UNF_SHIFT_16;
- pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = INVALID_VALUE32;
- pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = INVALID_VALUE32;
- pkg.frame_head.csctl_sid = els_cmd->wd1.sid;
- pkg.frame_head.rctl_did = els_cmd->wd0.did;
- spfc_free_xid((void *)hba, &pkg);
- return RETURN_OK;
- }
-
- /* Send data to COM cyclically */
- for (index = 0; index < max_buf_num; index++) {
- /* Exception record, which is not processed currently */
- if (rcv_data_len >= frame_len) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) get els cmd data len(0x%x) is bigger than frame len(0x%x)",
- hba->port_cfg.port_id, rcv_data_len, frame_len);
- }
-
- buf_id = (u16)els_cmd->user_id[index];
- els_buf = spfc_get_els_buf_by_user_id(hba, buf_id);
-
- /* Obtain payload address */
- pld_addr = (u8 *)(els_buf);
- header_len = 0;
- first = false;
- if (index == 0) {
- els_frame = (struct spfc_fc_frame_header *)els_buf;
- pld_addr = (u8 *)(els_frame + 1);
-
- header_len = sizeof(struct spfc_fc_frame_header);
- first = true;
-
- memcpy(&tmp_frame, els_frame, sizeof(struct spfc_fc_frame_header));
- spfc_big_to_cpu32(&tmp_frame, sizeof(struct spfc_fc_frame_header));
- memcpy(&pkg.frame_head, &tmp_frame, sizeof(pkg.frame_head));
- pkg.frame_head.oxid_rxid = (u32)((pkg.frame_head.oxid_rxid &
- SPFC_OXID_MASK) | (els_cmd->wd2.rx_id));
- }
-
- /* Calculate the payload length */
- pkg.last_pkg_flag = 0;
- pld_len = SPFC_SRQ_ELS_SGE_LEN;
-
- if ((rcv_data_len + SPFC_SRQ_ELS_SGE_LEN) >= frame_len) {
- pkg.last_pkg_flag = 1;
- pld_len = frame_len - rcv_data_len;
- }
-
- pkg.class_mode = els_cmd->wd0.class_mode;
-
- /* Push data to COM */
- if (ret == RETURN_OK) {
- ret = spfc_recv_els_cmnd(hba, &pkg, pld_addr,
- (pld_len - header_len), first);
- }
-
- /* Reclaim srq buffer */
- spfc_post_els_srq_wqe(&hba->els_srq_info, buf_id);
-
- rcv_data_len += pld_len;
- }
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) recv ELS Type(0x%x) Cmnd(0x%x) ox_id(0x%x) RXID(0x%x) SID(0x%x) DID(0x%x) %u",
- hba->port_cfg.port_id, pkg.type, pkg.cmnd, els_cmd->wd2.ox_id,
- els_cmd->wd2.rx_id, els_cmd->wd1.sid, els_cmd->wd0.did, ret);
-
- return ret;
-}
-
-static u32 spfc_get_ls_gs_pld_len(struct spfc_hba_info *hba, u32 rcv_data_len, u32 frame_len)
-{
- u32 pld_len;
-
- /* Exception record, which is not processed currently */
- if (rcv_data_len >= frame_len) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) get els rsp data len(0x%x) is bigger than frame len(0x%x)",
- hba->port_cfg.port_id, rcv_data_len, frame_len);
- }
-
- pld_len = SPFC_SRQ_ELS_SGE_LEN;
- if ((rcv_data_len + SPFC_SRQ_ELS_SGE_LEN) >= frame_len)
- pld_len = frame_len - rcv_data_len;
-
- return pld_len;
-}
-
-u32 spfc_scq_recv_ls_gs_rsp(struct spfc_hba_info *hba, union spfc_scqe *scqe)
-{
- u32 ret = RETURN_OK;
- u32 pld_len = 0;
- u32 header_len = 0;
- u32 frame_len = 0;
- u32 rcv_data_len = 0;
- u32 max_buf_num = 0;
- u16 buf_id = 0;
- u32 hot_tag = INVALID_VALUE32;
- u32 index = 0;
- u32 ox_id = (~0);
- struct unf_frame_pkg pkg = {0};
- struct spfc_scqe_rcv_els_gs_rsp *ls_gs_rsp_scqe = NULL;
- struct spfc_fc_frame_header *els_frame = NULL;
- void *ls_gs_buf = NULL;
- u8 *pld_addr = NULL;
- u8 task_type;
-
- ls_gs_rsp_scqe = &scqe->rcv_els_gs_rsp;
- frame_len = ls_gs_rsp_scqe->wd2.data_len;
- max_buf_num = ls_gs_rsp_scqe->wd4.user_id_num;
- spfc_swap_16_in_32((u32 *)ls_gs_rsp_scqe->user_id, SPFC_LS_GS_USERID_LEN);
-
- ox_id = ls_gs_rsp_scqe->wd1.ox_id;
- hot_tag = ((u16)ls_gs_rsp_scqe->wd5.hotpooltag) - hba->exi_base;
- pkg.frame_head.oxid_rxid = (u32)(ls_gs_rsp_scqe->wd1.rx_id) | ox_id << UNF_SHIFT_16;
- pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = ls_gs_rsp_scqe->magic_num;
- pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
- pkg.frame_head.csctl_sid = ls_gs_rsp_scqe->wd4.sid;
- pkg.frame_head.rctl_did = ls_gs_rsp_scqe->wd3.did;
- pkg.status = UNF_IO_SUCCESS;
- pkg.type = UNF_PKG_ELS_REQ_DONE;
-
- task_type = SPFC_GET_SCQE_TYPE(scqe);
- if (task_type == SPFC_SCQE_GS_RSP) {
- if (ls_gs_rsp_scqe->wd3.end_rsp)
- SPFC_HBA_STAT(hba, SPFC_STAT_LAST_GS_SCQE);
- pkg.type = UNF_PKG_GS_REQ_DONE;
- }
-
- /* Handle the exception first: when the LS/GS RSP carries an error
- * code, only the ox_id is needed to report the error to the CM layer.
- */
- ret = spfc_check_ls_gs_valid(hba, scqe, &pkg, ls_gs_rsp_scqe->user_id,
- max_buf_num, frame_len);
- if (ret != RETURN_OK)
- return RETURN_OK;
-
- if (ls_gs_rsp_scqe->wd3.echo_rsp) {
- pkg.private_data[PKG_PRIVATE_ECHO_CMD_RCV_TIME] =
- ls_gs_rsp_scqe->user_id[ARRAY_INDEX_5];
- pkg.private_data[PKG_PRIVATE_ECHO_RSP_SND_TIME] =
- ls_gs_rsp_scqe->user_id[ARRAY_INDEX_6];
- pkg.private_data[PKG_PRIVATE_ECHO_CMD_SND_TIME] =
- ls_gs_rsp_scqe->user_id[ARRAY_INDEX_7];
- pkg.private_data[PKG_PRIVATE_ECHO_ACC_RCV_TIME] =
- ls_gs_rsp_scqe->user_id[ARRAY_INDEX_8];
- }
-
- /* Send data to COM cyclically */
- for (index = 0; index < max_buf_num; index++) {
- /* Obtain buffer address */
- ls_gs_buf = NULL;
- buf_id = (u16)ls_gs_rsp_scqe->user_id[index];
- ls_gs_buf = spfc_get_els_buf_by_user_id(hba, buf_id);
-
- if (unlikely(!ls_gs_buf)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) ox_id(0x%x) RXID(0x%x) SID(0x%x) DID(0x%x) Index(0x%x) get els rsp buff user id(0x%x) abnormal",
- hba->port_cfg.port_id, ox_id,
- ls_gs_rsp_scqe->wd1.rx_id, ls_gs_rsp_scqe->wd4.sid,
- ls_gs_rsp_scqe->wd3.did, index, buf_id);
-
- if (index == 0) {
- pkg.status = UNF_IO_FAILED;
- ret = spfc_rcv_ls_gs_rsp(hba, &pkg, hot_tag);
- }
-
- return ret;
- }
-
- header_len = 0;
- pld_addr = (u8 *)(ls_gs_buf);
- if (index == 0) {
- header_len = sizeof(struct spfc_fc_frame_header);
- els_frame = (struct spfc_fc_frame_header *)ls_gs_buf;
- pld_addr = (u8 *)(els_frame + 1);
- }
-
- /* Calculate the payload length */
- pld_len = spfc_get_ls_gs_pld_len(hba, rcv_data_len, frame_len);
-
- /* Push data to COM */
- if (ret == RETURN_OK) {
- ret = spfc_rcv_ls_gs_rsp_payload(hba, &pkg, hot_tag, pld_addr,
- (pld_len - header_len));
- }
-
- /* Reclaim srq buffer */
- spfc_post_els_srq_wqe(&hba->els_srq_info, buf_id);
-
- rcv_data_len += pld_len;
- }
-
- if (ls_gs_rsp_scqe->wd3.end_rsp && ret == RETURN_OK)
- ret = spfc_rcv_ls_gs_rsp(hba, &pkg, hot_tag);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) receive LS/GS RSP ox_id(0x%x) RXID(0x%x) SID(0x%x) DID(0x%x) end_rsp(0x%x) user_num(0x%x)",
- hba->port_cfg.port_id, ox_id, ls_gs_rsp_scqe->wd1.rx_id,
- ls_gs_rsp_scqe->wd4.sid, ls_gs_rsp_scqe->wd3.did,
- ls_gs_rsp_scqe->wd3.end_rsp,
- ls_gs_rsp_scqe->wd4.user_id_num);
-
- return ret;
-}
-
-u32 spfc_scq_recv_els_rsp_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
-{
- u32 ret = UNF_RETURN_ERROR;
- u32 rx_id = INVALID_VALUE32;
- u32 hot_tag = INVALID_VALUE32;
- struct unf_frame_pkg pkg = {0};
- struct spfc_scqe_comm_rsp_sts *els_rsp_sts_scqe = NULL;
-
- els_rsp_sts_scqe = &scqe->comm_sts;
- rx_id = (u32)els_rsp_sts_scqe->wd0.rx_id;
-
- pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
- els_rsp_sts_scqe->magic_num;
- pkg.frame_head.oxid_rxid = rx_id | (u32)(els_rsp_sts_scqe->wd0.ox_id) << UNF_SHIFT_16;
- hot_tag = (u32)(els_rsp_sts_scqe->wd1.hotpooltag - hba->exi_base);
-
- if (unlikely(SPFC_SCQE_HAS_ERRCODE(scqe)))
- pkg.status = UNF_IO_FAILED;
- else
- pkg.status = UNF_IO_SUCCESS;
-
- ret = spfc_rcv_els_rsp_sts(hba, &pkg, hot_tag);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) recv ELS RSP STS ox_id(0x%x) RXID(0x%x) HotTag(0x%x) %s",
- hba->port_cfg.port_id, els_rsp_sts_scqe->wd0.ox_id, rx_id,
- hot_tag, (ret == RETURN_OK) ? "OK" : "ERROR");
-
- return ret;
-}
-
-static u32 spfc_check_rport_valid(const struct spfc_parent_queue_info *prt_queue_info, u32 scqe_xid)
-{
- if (prt_queue_info->parent_ctx.cqm_parent_ctx_obj) {
- if ((prt_queue_info->parent_sq_info.context_id & SPFC_CQM_XID_MASK) ==
- (scqe_xid & SPFC_CQM_XID_MASK)) {
- return RETURN_OK;
- }
- }
-
- return UNF_RETURN_ERROR;
-}
-
-u32 spfc_scq_recv_offload_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
-{
- u32 valid = UNF_RETURN_ERROR;
- u32 rport_index = 0;
- u32 cid = 0;
- u32 xid = 0;
- ulong flags = 0;
- struct spfc_parent_queue_info *prt_qinfo = NULL;
- struct spfc_parent_sq_info *parent_sq_info = NULL;
- struct spfc_scqe_sess_sts *offload_sts_scqe = NULL;
- struct spfc_delay_destroy_ctrl_info delay_ctl_info;
-
- memset(&delay_ctl_info, 0, sizeof(struct spfc_delay_destroy_ctrl_info));
- offload_sts_scqe = &scqe->sess_sts;
- rport_index = offload_sts_scqe->wd1.conn_id;
- cid = offload_sts_scqe->wd2.cid;
- xid = offload_sts_scqe->wd0.xid_qpn;
-
- if (rport_index >= UNF_SPFC_MAXRPORT_NUM) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) receive an error offload status: rport(0x%x) is invalid, cacheid(0x%x)",
- hba->port_cfg.port_id, rport_index, cid);
-
- return UNF_RETURN_ERROR;
- }
-
- if (rport_index == SPFC_DEFAULT_RPORT_INDEX &&
- hba->default_sq_info.default_sq_flag == 0xF) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) default session timeout: rport(0x%x) cacheid(0x%x)",
- hba->port_cfg.port_id, rport_index, cid);
- return UNF_RETURN_ERROR;
- }
-
- prt_qinfo = &hba->parent_queue_mgr->parent_queue[rport_index];
- parent_sq_info = &prt_qinfo->parent_sq_info;
-
- valid = spfc_check_rport_valid(prt_qinfo, xid);
- if (valid != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) receive an error offload status: rport(0x%x), context id(0x%x) is invalid",
- hba->port_cfg.port_id, rport_index, xid);
-
- return UNF_RETURN_ERROR;
- }
-
- /* Offload failed */
- if (SPFC_GET_SCQE_STATUS(scqe) != SPFC_COMPLETION_STATUS_SUCCESS) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x), rport(0x%x), context id(0x%x), cache id(0x%x), offload failed",
- hba->port_cfg.port_id, rport_index, xid, cid);
-
- spin_lock_irqsave(&prt_qinfo->parent_queue_state_lock, flags);
- if (prt_qinfo->offload_state != SPFC_QUEUE_STATE_OFFLOADED) {
- prt_qinfo->offload_state = SPFC_QUEUE_STATE_INITIALIZED;
- parent_sq_info->need_offloaded = INVALID_VALUE8;
- }
- spin_unlock_irqrestore(&prt_qinfo->parent_queue_state_lock,
- flags);
-
- return UNF_RETURN_ERROR;
- }
-
- spin_lock_irqsave(&prt_qinfo->parent_queue_state_lock, flags);
- prt_qinfo->parent_sq_info.cache_id = cid;
- prt_qinfo->offload_state = SPFC_QUEUE_STATE_OFFLOADED;
- parent_sq_info->need_offloaded = SPFC_HAVE_OFFLOAD;
- atomic_set(&prt_qinfo->parent_sq_info.sq_cached, true);
-
- if (prt_qinfo->parent_sq_info.destroy_sqe.valid) {
- delay_ctl_info.valid = prt_qinfo->parent_sq_info.destroy_sqe.valid;
- delay_ctl_info.rport_index = prt_qinfo->parent_sq_info.destroy_sqe.rport_index;
- delay_ctl_info.time_out = prt_qinfo->parent_sq_info.destroy_sqe.time_out;
- delay_ctl_info.start_jiff = prt_qinfo->parent_sq_info.destroy_sqe.start_jiff;
- delay_ctl_info.rport_info.nport_id =
- prt_qinfo->parent_sq_info.destroy_sqe.rport_info.nport_id;
- delay_ctl_info.rport_info.rport_index =
- prt_qinfo->parent_sq_info.destroy_sqe.rport_info.rport_index;
- delay_ctl_info.rport_info.port_name =
- prt_qinfo->parent_sq_info.destroy_sqe.rport_info.port_name;
- prt_qinfo->parent_sq_info.destroy_sqe.valid = false;
- }
- spin_unlock_irqrestore(&prt_qinfo->parent_queue_state_lock, flags);
-
- if (rport_index == SPFC_DEFAULT_RPORT_INDEX) {
- hba->default_sq_info.sq_cid = cid;
- hba->default_sq_info.sq_xid = xid;
- hba->default_sq_info.default_sq_flag = 1;
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_MAJOR, "[info]Receive default Session info");
- }
-
- spfc_pop_destroy_parent_queue_sqe((void *)hba, &delay_ctl_info);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) offload success: rport index(0x%x),rport nportid(0x%x),context id(0x%x),cache id(0x%x).",
- hba->port_cfg.port_id, rport_index,
- prt_qinfo->parent_sq_info.remote_port_id, xid, cid);
-
- return RETURN_OK;
-}
-
-static u32 spfc_send_bls_via_parent(struct spfc_hba_info *hba, struct unf_frame_pkg *pkg)
-{
- u32 ret = UNF_RETURN_ERROR;
- u16 ox_id = INVALID_VALUE16;
- u16 rx_id = INVALID_VALUE16;
- struct spfc_sqe tmp_sqe;
- struct spfc_sqe *sqe = NULL;
- struct spfc_parent_sq_info *parent_sq_info = NULL;
- struct spfc_parent_queue_info *prt_qinfo = NULL;
- u16 ssqn;
-
- FC_CHECK_RETURN_VALUE((pkg->type == UNF_PKG_BLS_REQ), UNF_RETURN_ERROR);
-
- sqe = &tmp_sqe;
- memset(sqe, 0, sizeof(struct spfc_sqe));
-
- prt_qinfo = spfc_find_parent_queue_info_by_pkg(hba, pkg);
- if (!prt_qinfo) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) send BLS SID_DID(0x%x_0x%x) with null parent queue information",
- hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
- pkg->frame_head.rctl_did);
-
- return ret;
- }
-
- parent_sq_info = spfc_find_parent_sq_by_pkg(hba, pkg);
- if (!parent_sq_info) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) send BLS SID_DID(0x%x_0x%x) with null parent SQ information",
- hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
- pkg->frame_head.rctl_did);
-
- return ret;
- }
-
- rx_id = UNF_GET_RXID(pkg);
- ox_id = UNF_GET_OXID(pkg);
-
- /* Assemble the SQE Control Section part. The ABTS does not have
- * Payload. bdsl=0
- */
- spfc_build_service_wqe_ctrl_section(&sqe->ctrl_sl, SPFC_BYTES_TO_QW_NUM(SPFC_SQE_TS_SIZE),
- 0);
-
- /* Assemble the SQE Task Section BLS Common part. DW2 of the BLS WQE
- * is reserved and set to 0
- */
- spfc_build_service_wqe_ts_common(&sqe->ts_sl, parent_sq_info->rport_index, ox_id, rx_id, 0);
-
- /* Assemble the special part of the ABTS */
- spfc_build_bls_wqe_ts_req(sqe, pkg, hba);
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) RPort(0x%x) send ABTS_REQ ox_id(0x%x) RXID(0x%x), HotTag(0x%x)",
- hba->port_cfg.port_id, parent_sq_info->rport_index, ox_id,
- rx_id, (u16)(UNF_GET_HOTPOOL_TAG(pkg) + hba->exi_base));
-
- ssqn = (u16)pkg->private_data[PKG_PRIVATE_XCHG_SSQ_INDEX];
- ret = spfc_parent_sq_enqueue(parent_sq_info, sqe, ssqn);
-
- return ret;
-}
-
-u32 spfc_send_bls_cmnd(void *handle, struct unf_frame_pkg *pkg)
-{
- u32 ret = UNF_RETURN_ERROR;
- struct spfc_hba_info *hba = NULL;
- ulong flags = 0;
- struct spfc_parent_queue_info *prt_qinfo = NULL;
-
- FC_CHECK_RETURN_VALUE(handle, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg->type == UNF_PKG_BLS_REQ || pkg->type == UNF_PKG_BLS_REPLY,
- UNF_RETURN_ERROR);
-
- SPFC_CHECK_PKG_ALLOCTIME(pkg);
- hba = (struct spfc_hba_info *)handle;
-
- prt_qinfo = spfc_find_parent_queue_info_by_pkg(hba, pkg);
- if (!prt_qinfo) {
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[warn]Port(0x%x) send BLS SID_DID(0x%x_0x%x) with null parent queue information",
- hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
- pkg->frame_head.rctl_did);
-
- return ret;
- }
-
- spin_lock_irqsave(&prt_qinfo->parent_queue_state_lock, flags);
-
- if (SPFC_RPORT_OFFLOADED(prt_qinfo)) {
- spin_unlock_irqrestore(&prt_qinfo->parent_queue_state_lock, flags);
- ret = spfc_send_bls_via_parent(hba, pkg);
- } else {
- spin_unlock_irqrestore(&prt_qinfo->parent_queue_state_lock, flags);
- FC_DRV_PRINT(UNF_LOG_IO_ATT, UNF_WARN,
- "[error]Port(0x%x) send BLS SID_DID(0x%x_0x%x) while not offloaded, do nothing",
- hba->port_cfg.port_id, pkg->frame_head.csctl_sid,
- pkg->frame_head.rctl_did);
- }
-
- return ret;
-}
-
-static u32 spfc_scq_rcv_flush_sq_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
-{
- /*
- * RCVD sq flush sts
- * --->>> continue flush or clear done
- */
- u32 ret = UNF_RETURN_ERROR;
-
- if (scqe->flush_sts.wd0.port_id != hba->port_index) {
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_CRITICAL,
- "[err]Port(0x%x) clear_sts_port_idx(0x%x) not match hba_port_idx(0x%x), stage(0x%x)",
- hba->port_cfg.port_id, scqe->clear_sts.wd0.port_id,
- hba->port_index, hba->queue_set_stage);
-
- return UNF_RETURN_ERROR;
- }
-
- if (scqe->flush_sts.wd0.last_flush) {
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_INFO,
- "[info]Port(0x%x) flush sq(0x%x) done, stage(0x%x)",
- hba->port_cfg.port_id, hba->next_clear_sq, hba->queue_set_stage);
-
- /* If the Flush STS is last one, send cmd done */
- ret = spfc_clear_sq_wqe_done(hba);
- } else {
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_MAJOR,
- "[info]Port(0x%x) continue flush sq(0x%x), stage(0x%x)",
- hba->port_cfg.port_id, hba->next_clear_sq, hba->queue_set_stage);
-
- ret = spfc_clear_pending_sq_wqe(hba);
- }
-
- return ret;
-}
-
-static u32 spfc_scq_rcv_buf_clear_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
-{
- /*
- * clear: fetched sq wqe
- * ---to--->>> pending sq wqe
- */
- u32 ret = UNF_RETURN_ERROR;
-
- if (scqe->clear_sts.wd0.port_id != hba->port_index) {
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_CRITICAL,
- "[err]Port(0x%x) clear_sts_port_idx(0x%x) not match hba_port_idx(0x%x), stage(0x%x)",
- hba->port_cfg.port_id, scqe->clear_sts.wd0.port_id,
- hba->port_index, hba->queue_set_stage);
-
- return UNF_RETURN_ERROR;
- }
-
- /* set port with I/O cleared state */
- spfc_set_hba_clear_state(hba, true);
-
- FC_DRV_PRINT(UNF_LOG_EVENT, UNF_KEVENT,
- "[info]Port(0x%x) cleared all fetched wqe, start clear sq pending wqe, stage (0x%x)",
- hba->port_cfg.port_id, hba->queue_set_stage);
-
- hba->queue_set_stage = SPFC_QUEUE_SET_STAGE_FLUSHING;
- ret = spfc_clear_pending_sq_wqe(hba);
-
- return ret;
-}
-
-u32 spfc_scq_recv_sess_rst_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
-{
- u32 rport_index = INVALID_VALUE32;
- ulong flags = 0;
- struct spfc_parent_queue_info *parent_queue_info = NULL;
- struct spfc_scqe_sess_sts *sess_sts_scqe = (struct spfc_scqe_sess_sts *)(void *)scqe;
- u32 flush_done;
- u32 *ctx_array = NULL;
- int ret;
- spinlock_t *prtq_state_lock = NULL;
-
- rport_index = sess_sts_scqe->wd1.conn_id;
- if (rport_index >= UNF_SPFC_MAXRPORT_NUM) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) receive reset session cmd sts failed, invalid rport(0x%x) status_code(0x%x) remain_cnt(0x%x)",
- hba->port_cfg.port_id, rport_index,
- sess_sts_scqe->ch.wd0.err_code,
- sess_sts_scqe->ch.wd0.cqe_remain_cnt);
-
- return UNF_RETURN_ERROR;
- }
-
- parent_queue_info = &hba->parent_queue_mgr->parent_queue[rport_index];
- prtq_state_lock = &parent_queue_info->parent_queue_state_lock;
- /*
- * If only session reset is used, the offload status of sq remains
- * unchanged. If a link is deleted, the offload status is set to
- * destroying and is irreversible.
- */
- spin_lock_irqsave(prtq_state_lock, flags);
-
- /*
- * Fault-tolerance note: even if the connection deletion timed out and
- * the delete-connection sts arrives late, a nonzero return here means
- * cancelling the timer succeeded, while 0 means the timer is still
- * being processed.
- */
- if (!cancel_delayed_work(&parent_queue_info->parent_sq_info.del_work)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) rport_index(0x%x) delete rport timer maybe timeout",
- hba->port_cfg.port_id, rport_index);
- }
-
- /*
- * If the SessRstSts is returned too late and the Parent Queue Info
- * resource is released, OK is returned.
- */
- if (parent_queue_info->offload_state != SPFC_QUEUE_STATE_DESTROYING) {
- spin_unlock_irqrestore(prtq_state_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[info]Port(0x%x) reset session cmd complete, no need to free parent qinfo, rport(0x%x) status_code(0x%x) remain_cnt(0x%x)",
- hba->port_cfg.port_id, rport_index,
- sess_sts_scqe->ch.wd0.err_code,
- sess_sts_scqe->ch.wd0.cqe_remain_cnt);
-
- return RETURN_OK;
- }
-
- if (parent_queue_info->parent_ctx.cqm_parent_ctx_obj) {
- ctx_array = (u32 *)((void *)(parent_queue_info->parent_ctx
- .cqm_parent_ctx_obj->vaddr));
- flush_done = ctx_array[SPFC_CTXT_FLUSH_DONE_DW_POS] & SPFC_CTXT_FLUSH_DONE_MASK_BE;
- mb();
- if (flush_done == 0) {
- spin_unlock_irqrestore(prtq_state_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) rport(0x%x) flushdone is not set, delay to free parent session",
- hba->port_cfg.port_id, rport_index);
-
- /* If flushdone bit is not set, delay freeing the SQ info */
- ret = queue_delayed_work(hba->work_queue,
- &(parent_queue_info->parent_sq_info
- .flush_done_timeout_work),
- (ulong)msecs_to_jiffies((u32)
- SPFC_SQ_WAIT_FLUSH_DONE_TIMEOUT_MS));
- if (!ret) {
- SPFC_HBA_STAT(hba, SPFC_STAT_PARENT_SQ_QUEUE_DELAYED_WORK);
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) rport(0x%x) queue delayed work failed ret:%d",
- hba->port_cfg.port_id, rport_index,
- ret);
- }
-
- return RETURN_OK;
- }
- }
-
- spin_unlock_irqrestore(prtq_state_lock, flags);
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) begin to free parent session with rport(0x%x)",
- hba->port_cfg.port_id, rport_index);
-
- spfc_free_parent_queue_info(hba, parent_queue_info);
-
- return RETURN_OK;
-}
-
-static u32 spfc_scq_rcv_clear_srq_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
-{
- /*
- * clear ELS/Immi SRQ
- * ---then--->>> Destroy SRQ
- */
- struct spfc_srq_info *srq_info = NULL;
-
- if (SPFC_GET_SCQE_STATUS(scqe) != 0) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) clear srq failed, status(0x%x)",
- hba->port_cfg.port_id, SPFC_GET_SCQE_STATUS(scqe));
-
- return RETURN_OK;
- }
-
- srq_info = &hba->els_srq_info;
-
- /*
- * 1: cancel timer succeed
- * 0: the timer is being processed, the SQ is released when the timer
- * times out
- */
- if (cancel_delayed_work(&srq_info->del_work))
- queue_work(hba->work_queue, &hba->els_srq_clear_work);
-
- return RETURN_OK;
-}
-
-u32 spfc_scq_recv_marker_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
-{
- u32 ret = UNF_RETURN_ERROR;
- u32 ox_id = INVALID_VALUE32;
- u32 rx_id = INVALID_VALUE32;
- u32 hot_tag = INVALID_VALUE32;
- struct unf_frame_pkg pkg = {0};
- struct spfc_scqe_itmf_marker_sts *tmf_marker_sts_scqe = NULL;
-
- tmf_marker_sts_scqe = &scqe->itmf_marker_sts;
- ox_id = (u32)tmf_marker_sts_scqe->wd1.ox_id;
- rx_id = (u32)tmf_marker_sts_scqe->wd1.rx_id;
- hot_tag = tmf_marker_sts_scqe->wd4.hotpooltag - hba->exi_base;
- pkg.frame_head.oxid_rxid = rx_id | (u32)(ox_id) << UNF_SHIFT_16;
- pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = tmf_marker_sts_scqe->magic_num;
- pkg.frame_head.csctl_sid = tmf_marker_sts_scqe->wd3.sid;
- pkg.frame_head.rctl_did = tmf_marker_sts_scqe->wd2.did;
-
- /* 1. set pkg status */
- if (unlikely(SPFC_SCQE_HAS_ERRCODE(scqe)))
- pkg.status = UNF_IO_FAILED;
- else
- pkg.status = UNF_IO_SUCCESS;
-
- /* 2 .process rcvd marker STS: set exchange state */
- ret = spfc_rcv_tmf_marker_sts(hba, &pkg, hot_tag);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[event]Port(0x%x) recv marker STS OX_ID(0x%x) RX_ID(0x%x) HotTag(0x%x) result %s",
- hba->port_cfg.port_id, ox_id, rx_id, hot_tag,
- (ret == RETURN_OK) ? "succeed" : "failed");
-
- return ret;
-}
-
-u32 spfc_scq_recv_abts_marker_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
-{
- u32 ret = UNF_RETURN_ERROR;
- u32 ox_id = INVALID_VALUE32;
- u32 rx_id = INVALID_VALUE32;
- u32 hot_tag = INVALID_VALUE32;
- struct unf_frame_pkg pkg = {0};
- struct spfc_scqe_abts_marker_sts *abts_marker_sts_scqe = NULL;
-
- abts_marker_sts_scqe = &scqe->abts_marker_sts;
- if (!abts_marker_sts_scqe) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]ABTS marker STS is NULL");
- return ret;
- }
-
- ox_id = (u32)abts_marker_sts_scqe->wd1.ox_id;
- rx_id = (u32)abts_marker_sts_scqe->wd1.rx_id;
- hot_tag = abts_marker_sts_scqe->wd4.hotpooltag - hba->exi_base;
- pkg.frame_head.oxid_rxid = rx_id | (u32)(ox_id) << UNF_SHIFT_16;
- pkg.frame_head.csctl_sid = abts_marker_sts_scqe->wd3.sid;
- pkg.frame_head.rctl_did = abts_marker_sts_scqe->wd2.did;
- pkg.abts_maker_status = (u32)abts_marker_sts_scqe->wd3.io_state;
- pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = abts_marker_sts_scqe->magic_num;
-
- if (unlikely(SPFC_SCQE_HAS_ERRCODE(scqe)))
- pkg.status = UNF_IO_FAILED;
- else
- pkg.status = UNF_IO_SUCCESS;
-
- ret = spfc_rcv_abts_marker_sts(hba, &pkg, hot_tag);
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_MAJOR,
- "[info]Port(0x%x) recv abts marker STS ox_id(0x%x) RXID(0x%x) HotTag(0x%x) %s",
- hba->port_cfg.port_id, ox_id, rx_id, hot_tag,
- (ret == RETURN_OK) ? "SUCCEED" : "FAILED");
-
- return ret;
-}
-
-u32 spfc_handle_aeq_off_load_err(struct spfc_hba_info *hba, struct spfc_aqe_data *aeq_msg)
-{
- u32 ret = RETURN_OK;
- u32 rport_index = 0;
- u32 xid = 0;
- struct spfc_parent_queue_info *prt_qinfo = NULL;
- struct spfc_delay_destroy_ctrl_info delay_ctl_info;
- ulong flags = 0;
-
- memset(&delay_ctl_info, 0, sizeof(struct spfc_delay_destroy_ctrl_info));
-
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) receive Offload Err Event, EvtCode(0x%x) Conn_id(0x%x) Xid(0x%x)",
- hba->port_cfg.port_id, aeq_msg->wd0.evt_code,
- aeq_msg->wd0.conn_id, aeq_msg->wd1.xid);
-
- /* Currently, only the offload failure caused by insufficient scqe is
- * processed. Other errors are not processed temporarily.
- */
- if (unlikely(aeq_msg->wd0.evt_code != FC_ERROR_OFFLOAD_LACKOF_SCQE_FAIL)) {
- FC_DRV_PRINT(UNF_LOG_REG_ATT, UNF_ERR,
- "[err]Port(0x%x) receive an unsupported error code of AEQ Event,EvtCode(0x%x) Conn_id(0x%x)",
- hba->port_cfg.port_id, aeq_msg->wd0.evt_code,
- aeq_msg->wd0.conn_id);
-
- return UNF_RETURN_ERROR;
- }
- SPFC_SCQ_ERR_TYPE_STAT(hba, FC_ERROR_OFFLOAD_LACKOF_SCQE_FAIL);
-
- rport_index = aeq_msg->wd0.conn_id;
- xid = aeq_msg->wd1.xid;
-
- if (rport_index >= UNF_SPFC_MAXRPORT_NUM) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) receive an error offload status: rport(0x%x) is invalid, Xid(0x%x)",
- hba->port_cfg.port_id, rport_index, aeq_msg->wd1.xid);
-
- return UNF_RETURN_ERROR;
- }
-
- prt_qinfo = &hba->parent_queue_mgr->parent_queue[rport_index];
- if (spfc_check_rport_valid(prt_qinfo, xid) != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) receive an error offload status: rport(0x%x), context id(0x%x) is invalid",
- hba->port_cfg.port_id, rport_index, xid);
-
- return UNF_RETURN_ERROR;
- }
-
- /* The offload status is restored only when the offload status is offloading */
- spin_lock_irqsave(&prt_qinfo->parent_queue_state_lock, flags);
- if (prt_qinfo->offload_state == SPFC_QUEUE_STATE_OFFLOADING)
- prt_qinfo->offload_state = SPFC_QUEUE_STATE_INITIALIZED;
- spin_unlock_irqrestore(&prt_qinfo->parent_queue_state_lock, flags);
-
- if (prt_qinfo->parent_sq_info.destroy_sqe.valid) {
- delay_ctl_info.valid = prt_qinfo->parent_sq_info.destroy_sqe.valid;
- delay_ctl_info.rport_index = prt_qinfo->parent_sq_info.destroy_sqe.rport_index;
- delay_ctl_info.time_out = prt_qinfo->parent_sq_info.destroy_sqe.time_out;
- delay_ctl_info.start_jiff = prt_qinfo->parent_sq_info.destroy_sqe.start_jiff;
- delay_ctl_info.rport_info.nport_id =
- prt_qinfo->parent_sq_info.destroy_sqe.rport_info.nport_id;
- delay_ctl_info.rport_info.rport_index =
- prt_qinfo->parent_sq_info.destroy_sqe.rport_info.rport_index;
- delay_ctl_info.rport_info.port_name =
- prt_qinfo->parent_sq_info.destroy_sqe.rport_info.port_name;
- prt_qinfo->parent_sq_info.destroy_sqe.valid = false;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[info]Port(0x%x) pop up delay sqe, start:0x%llx, timeout:0x%x, rport:0x%x, offload state:0x%x",
- hba->port_cfg.port_id, delay_ctl_info.start_jiff,
- delay_ctl_info.time_out,
- prt_qinfo->parent_sq_info.destroy_sqe.rport_info.rport_index,
- SPFC_QUEUE_STATE_INITIALIZED);
-
- ret = spfc_free_parent_resource(hba, &delay_ctl_info.rport_info);
- if (ret != RETURN_OK) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[err]Port(0x%x) pop delay destroy parent sq failed, rport(0x%x), rport nport id 0x%x",
- hba->port_cfg.port_id,
- delay_ctl_info.rport_info.rport_index,
- delay_ctl_info.rport_info.nport_id);
- }
- }
-
- return ret;
-}
-
-u32 spfc_free_xid(void *handle, struct unf_frame_pkg *pkg)
-{
- u32 ret = RETURN_ERROR;
- u16 rx_id = INVALID_VALUE16;
- u16 ox_id = INVALID_VALUE16;
- u16 hot_tag = INVALID_VALUE16;
- struct spfc_hba_info *hba = (struct spfc_hba_info *)handle;
- union spfc_cmdqe tmp_cmd_wqe;
- union spfc_cmdqe *cmd_wqe = NULL;
-
- FC_CHECK_RETURN_VALUE(hba, RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(pkg, RETURN_ERROR);
- SPFC_CHECK_PKG_ALLOCTIME(pkg);
-
- cmd_wqe = &tmp_cmd_wqe;
- memset(cmd_wqe, 0, sizeof(union spfc_cmdqe));
-
- rx_id = UNF_GET_RXID(pkg);
- ox_id = UNF_GET_OXID(pkg);
- if (UNF_GET_HOTPOOL_TAG(pkg) != INVALID_VALUE32)
- hot_tag = (u16)UNF_GET_HOTPOOL_TAG(pkg) + hba->exi_base;
-
- spfc_build_cmdqe_common(cmd_wqe, SPFC_TASK_T_EXCH_ID_FREE, rx_id);
- cmd_wqe->xid_free.wd2.hotpool_tag = hot_tag;
- cmd_wqe->xid_free.magic_num = UNF_GETXCHGALLOCTIME(pkg);
- cmd_wqe->xid_free.sid = pkg->frame_head.csctl_sid;
- cmd_wqe->xid_free.did = pkg->frame_head.rctl_did;
- cmd_wqe->xid_free.type = pkg->type;
-
- if (pkg->rx_or_ox_id == UNF_PKG_FREE_OXID)
- cmd_wqe->xid_free.wd0.task_id = ox_id;
- else
- cmd_wqe->xid_free.wd0.task_id = rx_id;
-
- cmd_wqe->xid_free.wd0.port_id = hba->port_index;
- cmd_wqe->xid_free.wd2.scqn = hba->default_scqn;
- ret = spfc_root_cmdq_enqueue(hba, cmd_wqe, sizeof(cmd_wqe->xid_free));
-
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
- "[info]Port(0x%x) ox_id(0x%x) RXID(0x%x) hottag(0x%x) magic_num(0x%x) Sid(0x%x) Did(0x%x), send free xid %s",
- hba->port_cfg.port_id, ox_id, rx_id, hot_tag,
- cmd_wqe->xid_free.magic_num, cmd_wqe->xid_free.sid,
- cmd_wqe->xid_free.did,
- (ret == RETURN_OK) ? "OK" : "ERROR");
-
- return ret;
-}
-
-u32 spfc_scq_free_xid_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
-{
- u32 hot_tag = INVALID_VALUE32;
- u32 magic_num = INVALID_VALUE32;
- u32 ox_id = INVALID_VALUE32;
- u32 rx_id = INVALID_VALUE32;
- struct spfc_scqe_comm_rsp_sts *free_xid_sts_scqe = NULL;
-
- free_xid_sts_scqe = &scqe->comm_sts;
- magic_num = free_xid_sts_scqe->magic_num;
- ox_id = (u32)free_xid_sts_scqe->wd0.ox_id;
- rx_id = (u32)free_xid_sts_scqe->wd0.rx_id;
-
- if (free_xid_sts_scqe->wd1.hotpooltag != INVALID_VALUE16) {
- hot_tag = free_xid_sts_scqe->wd1.hotpooltag - hba->exi_base;
- }
-
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
- "Port(0x%x) hottag(0x%x) magicnum(0x%x) ox_id(0x%x) rxid(0x%x) sts(%d)",
- hba->port_cfg.port_id, hot_tag, magic_num, ox_id, rx_id,
- SPFC_GET_SCQE_STATUS(scqe));
-
- return RETURN_OK;
-}
-
-u32 spfc_scq_exchg_timeout_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
-{
- u32 hot_tag = INVALID_VALUE32;
- u32 magic_num = INVALID_VALUE32;
- u32 ox_id = INVALID_VALUE32;
- u32 rx_id = INVALID_VALUE32;
- struct spfc_scqe_comm_rsp_sts *time_out_scqe = NULL;
-
- time_out_scqe = &scqe->comm_sts;
- magic_num = time_out_scqe->magic_num;
- ox_id = (u32)time_out_scqe->wd0.ox_id;
- rx_id = (u32)time_out_scqe->wd0.rx_id;
-
- if (time_out_scqe->wd1.hotpooltag != INVALID_VALUE16)
- hot_tag = time_out_scqe->wd1.hotpooltag - hba->exi_base;
-
- FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
- "Port(0x%x) recv timer time out sts hotpooltag(0x%x) magicnum(0x%x) ox_id(0x%x) rxid(0x%x) sts(%d)",
- hba->port_cfg.port_id, hot_tag, magic_num, ox_id, rx_id,
- SPFC_GET_SCQE_STATUS(scqe));
-
- return RETURN_OK;
-}
-
-u32 spfc_scq_rcv_sq_nop_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
-{
- struct spfc_scqe_sq_nop_sts *sq_nop_scqe = NULL;
- struct spfc_parent_queue_info *prt_qinfo = NULL;
- struct spfc_parent_sq_info *parent_sq_info = NULL;
- struct list_head *node = NULL;
- struct list_head *next_node = NULL;
- struct spfc_suspend_sqe_info *suspend_sqe = NULL;
- struct spfc_suspend_sqe_info *sqe = NULL;
- u32 rport_index = 0;
- u32 magic_num;
- u16 sqn;
- u32 sqn_base;
- u32 sqn_max;
- u32 ret = RETURN_OK;
- ulong flags = 0;
-
- sq_nop_scqe = &scqe->sq_nop_sts;
- rport_index = sq_nop_scqe->wd1.conn_id;
- magic_num = sq_nop_scqe->magic_num;
- sqn = sq_nop_scqe->wd0.sqn;
- prt_qinfo = &hba->parent_queue_mgr->parent_queue[rport_index];
- parent_sq_info = &prt_qinfo->parent_sq_info;
- sqn_base = parent_sq_info->sqn_base;
- sqn_max = sqn_base + UNF_SQ_NUM_PER_SESSION - 1;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) rport(0x%x), magic_num(0x%x) receive nop sq sts form sq(0x%x)",
- hba->port_cfg.port_id, rport_index, magic_num, sqn);
-
- spin_lock_irqsave(&prt_qinfo->parent_queue_state_lock, flags);
- list_for_each_safe(node, next_node, &parent_sq_info->suspend_sqe_list) {
- sqe = list_entry(node, struct spfc_suspend_sqe_info, list_sqe_entry);
- if (sqe->magic_num != magic_num)
- continue;
- suspend_sqe = sqe;
- if (sqn == sqn_max)
- list_del(node);
- break;
- }
- spin_unlock_irqrestore(&prt_qinfo->parent_queue_state_lock, flags);
-
- if (suspend_sqe) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) rport_index(0x%x) find suspend sqe.",
- hba->port_cfg.port_id, rport_index);
- if ((sqn < sqn_max) && (sqn >= sqn_base)) {
- ret = spfc_send_nop_cmd(hba, parent_sq_info, magic_num, sqn + 1);
- } else if (sqn == sqn_max) {
- if (!cancel_delayed_work(&suspend_sqe->timeout_work)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "[warn]Port(0x%x) rport(0x%x) reset worker timer maybe timeout",
- hba->port_cfg.port_id, rport_index);
- }
- parent_sq_info->need_offloaded = suspend_sqe->old_offload_sts;
- ret = spfc_pop_suspend_sqe(hba, prt_qinfo, suspend_sqe);
- kfree(suspend_sqe);
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) rport(0x%x) rcv error sqn(0x%x)",
- hba->port_cfg.port_id, rport_index, sqn);
- }
- } else {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x) rport(0x%x) magicnum(0x%x)can't find suspend sqe",
- hba->port_cfg.port_id, rport_index, magic_num);
- }
- return ret;
-}
-
-static const struct unf_scqe_handle_table scqe_handle_table[] = {
- {/* INI rcvd FCP RSP */
- SPFC_SCQE_FCP_IRSP, true, spfc_scq_recv_iresp},
- {/* INI/TGT rcvd ELS_CMND */
- SPFC_SCQE_ELS_CMND, false, spfc_scq_recv_els_cmnd},
- {/* INI/TGT rcvd ELS_RSP */
- SPFC_SCQE_ELS_RSP, true, spfc_scq_recv_ls_gs_rsp},
- {/* INI/TGT rcvd GS_RSP */
- SPFC_SCQE_GS_RSP, true, spfc_scq_recv_ls_gs_rsp},
- {/* INI rcvd BLS_RSP */
- SPFC_SCQE_ABTS_RSP, true, spfc_scq_recv_abts_rsp},
- {/* INI/TGT rcvd ELS_RSP STS(Done) */
- SPFC_SCQE_ELS_RSP_STS, true, spfc_scq_recv_els_rsp_sts},
- {/* INI or TGT rcvd Session enable STS */
- SPFC_SCQE_SESS_EN_STS, false, spfc_scq_recv_offload_sts},
- {/* INI or TGT rcvd flush (pending) SQ STS */
- SPFC_SCQE_FLUSH_SQ_STS, false, spfc_scq_rcv_flush_sq_sts},
- {/* INI or TGT rcvd Buffer clear STS */
- SPFC_SCQE_BUF_CLEAR_STS, false, spfc_scq_rcv_buf_clear_sts},
- {/* INI or TGT rcvd session reset STS */
- SPFC_SCQE_SESS_RST_STS, false, spfc_scq_recv_sess_rst_sts},
- {/* ELS/IMMI SRQ */
- SPFC_SCQE_CLEAR_SRQ_STS, false, spfc_scq_rcv_clear_srq_sts},
- {/* INI rcvd TMF RSP */
- SPFC_SCQE_FCP_ITMF_RSP, true, spfc_scq_recv_iresp},
- {/* INI rcvd TMF Marker STS */
- SPFC_SCQE_ITMF_MARKER_STS, false, spfc_scq_recv_marker_sts},
- {/* INI rcvd ABTS Marker STS */
- SPFC_SCQE_ABTS_MARKER_STS, false, spfc_scq_recv_abts_marker_sts},
- {SPFC_SCQE_XID_FREE_ABORT_STS, false, spfc_scq_free_xid_sts},
- {SPFC_SCQE_EXCHID_TIMEOUT_STS, false, spfc_scq_exchg_timeout_sts},
- {SPFC_SQE_NOP_STS, true, spfc_scq_rcv_sq_nop_sts},
-
-};
-
-u32 spfc_rcv_scq_entry_from_scq(struct spfc_hba_info *hba, union spfc_scqe *scqe, u32 scqn)
-{
- u32 ret = UNF_RETURN_ERROR;
- bool reclaim = false;
- u32 index = 0;
- u32 total = 0;
-
- FC_CHECK_RETURN_VALUE(hba, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(scqe, UNF_RETURN_ERROR);
- FC_CHECK_RETURN_VALUE(scqn < SPFC_TOTAL_SCQ_NUM, UNF_RETURN_ERROR);
-
- SPFC_IO_STAT(hba, SPFC_GET_SCQE_TYPE(scqe));
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[info]Port(0x%x) receive scqe type %d from SCQ[%u]",
- hba->port_cfg.port_id, SPFC_GET_SCQE_TYPE(scqe), scqn);
-
- /* 1. error code cheking */
- if (unlikely(SPFC_SCQE_HAS_ERRCODE(scqe))) {
- /* So far, just print & counter */
- spfc_scqe_error_pre_proc(hba, scqe);
- }
-
- /* 2. Process SCQE by corresponding processer */
- total = sizeof(scqe_handle_table) / sizeof(struct unf_scqe_handle_table);
- while (index < total) {
- if (SPFC_GET_SCQE_TYPE(scqe) == scqe_handle_table[index].scqe_type) {
- ret = scqe_handle_table[index].scqe_handle_func(hba, scqe);
- reclaim = scqe_handle_table[index].reclaim_sq_wpg;
-
- break;
- }
-
- index++;
- }
-
- /* 3. SCQE type check */
- if (unlikely(total == index)) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
- "[warn]Unknown SCQE type %d",
- SPFC_GET_SCQE_TYPE(scqe));
-
- UNF_PRINT_SFS_LIMIT(UNF_ERR, hba->port_cfg.port_id, scqe, sizeof(union spfc_scqe));
- }
-
- /* 4. If SCQE is for SQ-WQE then recovery Link List SQ free page */
- if (reclaim) {
- if (SPFC_GET_SCQE_SQN(scqe) < SPFC_MAX_SSQ_NUM) {
- ret = spfc_reclaim_sq_wqe_page(hba, scqe);
- } else {
- /* NOTE: for buffer clear, the SCQE conn_id is 0xFFFF,count with HBA */
- SPFC_HBA_STAT((struct spfc_hba_info *)hba, SPFC_STAT_SQ_IO_BUFFER_CLEARED);
- }
- }
-
- return ret;
-}
diff --git a/drivers/scsi/spfc/hw/spfc_service.h b/drivers/scsi/spfc/hw/spfc_service.h
deleted file mode 100644
index e2555c55f4d1..000000000000
--- a/drivers/scsi/spfc/hw/spfc_service.h
+++ /dev/null
@@ -1,282 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef SPFC_SERVICE_H
-#define SPFC_SERVICE_H
-
-#include "unf_type.h"
-#include "unf_common.h"
-#include "unf_scsi_common.h"
-#include "spfc_hba.h"
-
-#define SPFC_HAVE_OFFLOAD (0)
-
-/* FC txmfs */
-#define SPFC_DEFAULT_TX_MAX_FREAM_SIZE (256)
-
-#define SPFC_GET_NETWORK_PORT_ID(hba) \
- (((hba)->port_index > 1) ? ((hba)->port_index + 2) : (hba)->port_index)
-
-#define SPFC_GET_PRLI_PAYLOAD_LEN \
- (UNF_PRLI_PAYLOAD_LEN - UNF_PRLI_SIRT_EXTRA_SIZE)
-/* Start addr of the header/payloed of the cmnd buffer in the pkg */
-#define SPFC_FC_HEAD_LEN (sizeof(struct unf_fc_head))
-#define SPFC_PAYLOAD_OFFSET (sizeof(struct unf_fc_head))
-#define SPFC_GET_CMND_PAYLOAD_ADDR(pkg) UNF_GET_FLOGI_PAYLOAD(pkg)
-#define SPFC_GET_CMND_HEADER_ADDR(pkg) \
- ((pkg)->unf_cmnd_pload_bl.buffer_ptr)
-#define SPFC_GET_RSP_HEADER_ADDR(pkg) \
- ((pkg)->unf_rsp_pload_bl.buffer_ptr)
-#define SPFC_GET_RSP_PAYLOAD_ADDR(pkg) \
- ((pkg)->unf_rsp_pload_bl.buffer_ptr + SPFC_PAYLOAD_OFFSET)
-#define SPFC_GET_CMND_FC_HEADER(pkg) \
- (&(UNF_GET_SFS_ENTRY(pkg)->sfs_common.frame_head))
-#define SPFC_PKG_IS_ELS_RSP(cmd_type) \
- (((cmd_type) == ELS_ACC) || ((cmd_type) == ELS_RJT))
-#define SPFC_XID_IS_VALID(exid, base, exi_count) \
- (((exid) >= (base)) && ((exid) < ((base) + (exi_count))))
-#define SPFC_CHECK_NEED_OFFLOAD(cmd_code, cmd_type, offload_state) \
- (((cmd_code) == ELS_PLOGI) && ((cmd_type) != ELS_RJT) && \
- ((offload_state) == SPFC_QUEUE_STATE_INITIALIZED))
-
-#define UNF_FC_PAYLOAD_ELS_MASK (0xFF000000)
-#define UNF_FC_PAYLOAD_ELS_SHIFT (24)
-#define UNF_FC_PAYLOAD_ELS_DWORD (0)
-
-/* Note: this pfcpayload is little endian */
-#define UNF_GET_FC_PAYLOAD_ELS_CMND(pfcpayload) \
- UNF_GET_SHIFTMASK(((u32 *)(void *)(pfcpayload))[UNF_FC_PAYLOAD_ELS_DWORD], \
- UNF_FC_PAYLOAD_ELS_SHIFT, UNF_FC_PAYLOAD_ELS_MASK)
-
-/* Note: this pfcpayload is big endian */
-#define SPFC_GET_FC_PAYLOAD_ELS_CMND(pfcpayload) \
- UNF_GET_SHIFTMASK(be32_to_cpu(((u32 *)(void *)(pfcpayload))[UNF_FC_PAYLOAD_ELS_DWORD]), \
- UNF_FC_PAYLOAD_ELS_SHIFT, UNF_FC_PAYLOAD_ELS_MASK)
-
-#define UNF_FC_PAYLOAD_RX_SZ_MASK (0x00000FFF)
-#define UNF_FC_PAYLOAD_RX_SZ_SHIFT (16)
-#define UNF_FC_PAYLOAD_RX_SZ_DWORD (2)
-
-/* Note: this pfcpayload is little endian */
-#define UNF_GET_FC_PAYLOAD_RX_SZ(pfcpayload) \
- ((u16)(((u32 *)(void *)(pfcpayload))[UNF_FC_PAYLOAD_RX_SZ_DWORD] & \
- UNF_FC_PAYLOAD_RX_SZ_MASK))
-
-/* Note: this pfcpayload is big endian */
-#define SPFC_GET_FC_PAYLOAD_RX_SZ(pfcpayload) \
- (be32_to_cpu((u16)(((u32 *)(void *)(pfcpayload))[UNF_FC_PAYLOAD_RX_SZ_DWORD]) & \
- UNF_FC_PAYLOAD_RX_SZ_MASK))
-
-#define SPFC_GET_RA_TOV_FROM_PAYLOAD(pfcpayload) \
- (((struct unf_flogi_fdisc_payload *)(pfcpayload))->fabric_parms.co_parms.r_a_tov)
-#define SPFC_GET_RT_TOV_FROM_PAYLOAD(pfcpayload) \
- (((struct unf_flogi_fdisc_payload *)(pfcpayload))->fabric_parms.co_parms.r_t_tov)
-#define SPFC_GET_E_D_TOV_FROM_PAYLOAD(pfcpayload) \
- (((struct unf_flogi_fdisc_payload *)(pfcpayload))->fabric_parms.co_parms.e_d_tov)
-#define SPFC_GET_E_D_TOV_RESOLUTION_FROM_PAYLOAD(pfcpayload) \
- (((struct unf_flogi_fdisc_payload *)(pfcpayload))->fabric_parms.co_parms.e_d_tov_resolution)
-#define SPFC_GET_BB_SC_N_FROM_PAYLOAD(pfcpayload) \
- (((struct unf_flogi_fdisc_payload *)(pfcpayload))->fabric_parms.co_parms.bbscn)
-#define SPFC_GET_BB_CREDIT_FROM_PAYLOAD(pfcpayload) \
- (((struct unf_flogi_fdisc_payload *)(pfcpayload))->fabric_parms.co_parms.bb_credit)
-
-#define SPFC_GET_RA_TOV_FROM_PARAMS(pfcparams) \
- (((struct unf_fabric_parm *)(pfcparams))->co_parms.r_a_tov)
-#define SPFC_GET_RT_TOV_FROM_PARAMS(pfcparams) \
- (((struct unf_fabric_parm *)(pfcparams))->co_parms.r_t_tov)
-#define SPFC_GET_E_D_TOV_FROM_PARAMS(pfcparams) \
- (((struct unf_fabric_parm *)(pfcparams))->co_parms.e_d_tov)
-#define SPFC_GET_E_D_TOV_RESOLUTION_FROM_PARAMS(pfcparams) \
- (((struct unf_fabric_parm *)(pfcparams))->co_parms.e_d_tov_resolution)
-#define SPFC_GET_BB_SC_N_FROM_PARAMS(pfcparams) \
- (((struct unf_fabric_parm *)(pfcparams))->co_parms.bbscn)
-#define SPFC_GET_BB_CREDIT_FROM_PARAMS(pfcparams) \
- (((struct unf_fabric_parm *)(pfcparams))->co_parms.bb_credit)
-#define SPFC_CHECK_NPORT_FPORT_BIT(pfcparams) \
- (((struct unf_fabric_parm *)(pfcparams))->co_parms.nport)
-
-#define UNF_FC_RCTL_BLS_MASK (0x80)
-#define SPFC_UNSOLICITED_FRAME_IS_BLS(hdr) (UNF_GET_FC_HEADER_RCTL(hdr) & UNF_FC_RCTL_BLS_MASK)
-
-#define SPFC_LOW_SEQ_CNT (0)
-#define SPFC_HIGH_SEQ_CNT (0xFFFF)
-
-/* struct unf_frame_pkg.cmnd meaning:
- * The least significant 16 bits indicate whether to send ELS CMND or ELS RSP
- * (ACC or RJT). The most significant 16 bits indicate the corresponding ELS
- * CMND when the lower 16 bits are ELS RSP.
- */
-#define SPFC_ELS_CMND_MASK (0xffff)
-#define SPFC_ELS_CMND__RELEVANT_SHIFT (16UL)
-#define SPFC_GET_LS_GS_CMND_CODE(cmnd) ((u16)((cmnd) & SPFC_ELS_CMND_MASK))
-#define SPFC_GET_ELS_RSP_TYPE(cmnd) ((u16)((cmnd) & SPFC_ELS_CMND_MASK))
-#define SPFC_GET_ELS_RSP_CODE(cmnd) \
- ((u16)((cmnd) >> SPFC_ELS_CMND__RELEVANT_SHIFT & SPFC_ELS_CMND_MASK))
-
-/* ELS CMND Request */
-#define ELS_CMND (0)
-
-/* fh_f_ctl - Frame control flags. */
-#define SPFC_FC_EX_CTX BIT(23) /* sent by responder to exchange */
-#define SPFC_FC_SEQ_CTX BIT(22) /* sent by responder to sequence */
-#define SPFC_FC_FIRST_SEQ BIT(21) /* first sequence of this exchange */
-#define SPFC_FC_LAST_SEQ BIT(20) /* last sequence of this exchange */
-#define SPFC_FC_END_SEQ BIT(19) /* last frame of sequence */
-#define SPFC_FC_END_CONN BIT(18) /* end of class 1 connection pending */
-#define SPFC_FC_RES_B17 BIT(17) /* reserved */
-#define SPFC_FC_SEQ_INIT BIT(16) /* transfer of sequence initiative */
-#define SPFC_FC_X_ID_REASS BIT(15) /* exchange ID has been changed */
-#define SPFC_FC_X_ID_INVAL BIT(14) /* exchange ID invalidated */
-#define SPFC_FC_ACK_1 BIT(12) /* 13:12 = 1: ACK_1 expected */
-#define SPFC_FC_ACK_N (2 << 12) /* 13:12 = 2: ACK_N expected */
-#define SPFC_FC_ACK_0 (3 << 12) /* 13:12 = 3: ACK_0 expected */
-#define SPFC_FC_RES_B11 BIT(11) /* reserved */
-#define SPFC_FC_RES_B10 BIT(10) /* reserved */
-#define SPFC_FC_RETX_SEQ BIT(9) /* retransmitted sequence */
-#define SPFC_FC_UNI_TX BIT(8) /* unidirectional transmit (class 1) */
-#define SPFC_FC_CONT_SEQ(i) ((i) << 6)
-#define SPFC_FC_ABT_SEQ(i) ((i) << 4)
-#define SPFC_FC_REL_OFF BIT(3) /* parameter is relative offset */
-#define SPFC_FC_RES2 BIT(2) /* reserved */
-#define SPFC_FC_FILL(i) ((i) & 3) /* 1:0: bytes of trailing fill */
-
-#define SPFC_FCTL_REQ (SPFC_FC_FIRST_SEQ | SPFC_FC_END_SEQ | SPFC_FC_SEQ_INIT)
-#define SPFC_FCTL_RESP \
- (SPFC_FC_EX_CTX | SPFC_FC_LAST_SEQ | SPFC_FC_END_SEQ | SPFC_FC_SEQ_INIT)
-#define SPFC_RCTL_BLS_REQ (0x81)
-#define SPFC_RCTL_BLS_ACC (0x84)
-#define SPFC_RCTL_BLS_RJT (0x85)
-
-#define PHY_PORT_TYPE_FC 0x1 /* Physical port type of FC */
-#define PHY_PORT_TYPE_FCOE 0x2 /* Physical port type of FCoE */
-#define SPFC_FC_COS_VALUE (0X4)
-
-#define SPFC_CDB16_LBA_MASK 0xffff
-#define SPFC_CDB16_TRANSFERLEN_MASK 0xff
-#define SPFC_RXID_MASK 0xffff
-#define SPFC_OXID_MASK 0xffff0000
-
-enum spfc_fc_fh_type {
- SPFC_FC_TYPE_BLS = 0x00, /* basic link service */
- SPFC_FC_TYPE_ELS = 0x01, /* extended link service */
- SPFC_FC_TYPE_IP = 0x05, /* IP over FC, RFC 4338 */
- SPFC_FC_TYPE_FCP = 0x08, /* SCSI FCP */
- SPFC_FC_TYPE_CT = 0x20, /* Fibre Channel Services (FC-CT) */
- SPFC_FC_TYPE_ILS = 0x22 /* internal link service */
-};
-
-enum spfc_fc_fh_rctl {
- SPFC_FC_RCTL_DD_UNCAT = 0x00, /* uncategorized information */
- SPFC_FC_RCTL_DD_SOL_DATA = 0x01, /* solicited data */
- SPFC_FC_RCTL_DD_UNSOL_CTL = 0x02, /* unsolicited control */
- SPFC_FC_RCTL_DD_SOL_CTL = 0x03, /* solicited control or reply */
- SPFC_FC_RCTL_DD_UNSOL_DATA = 0x04, /* unsolicited data */
- SPFC_FC_RCTL_DD_DATA_DESC = 0x05, /* data descriptor */
- SPFC_FC_RCTL_DD_UNSOL_CMD = 0x06, /* unsolicited command */
- SPFC_FC_RCTL_DD_CMD_STATUS = 0x07, /* command status */
-
-#define SPFC_FC_RCTL_ILS_REQ SPFC_FC_RCTL_DD_UNSOL_CTL /* ILS request */
-#define SPFC_FC_RCTL_ILS_REP SPFC_FC_RCTL_DD_SOL_CTL /* ILS reply */
-
- /*
- * Extended Link_Data
- */
- SPFC_FC_RCTL_ELS_REQ = 0x22, /* extended link services request */
- SPFC_FC_RCTL_ELS_RSP = 0x23, /* extended link services reply */
- SPFC_FC_RCTL_ELS4_REQ = 0x32, /* FC-4 ELS request */
- SPFC_FC_RCTL_ELS4_RSP = 0x33, /* FC-4 ELS reply */
- /*
- * Optional Extended Headers
- */
- SPFC_FC_RCTL_VFTH = 0x50, /* virtual fabric tagging header */
- SPFC_FC_RCTL_IFRH = 0x51, /* inter-fabric routing header */
- SPFC_FC_RCTL_ENCH = 0x52, /* encapsulation header */
- /*
- * Basic Link Services fh_r_ctl values.
- */
- SPFC_FC_RCTL_BA_NOP = 0x80, /* basic link service NOP */
- SPFC_FC_RCTL_BA_ABTS = 0x81, /* basic link service abort */
- SPFC_FC_RCTL_BA_RMC = 0x82, /* remove connection */
- SPFC_FC_RCTL_BA_ACC = 0x84, /* basic accept */
- SPFC_FC_RCTL_BA_RJT = 0x85, /* basic reject */
- SPFC_FC_RCTL_BA_PRMT = 0x86, /* dedicated connection preempted */
- /*
- * Link Control Information.
- */
- SPFC_FC_RCTL_ACK_1 = 0xc0, /* acknowledge_1 */
- SPFC_FC_RCTL_ACK_0 = 0xc1, /* acknowledge_0 */
- SPFC_FC_RCTL_P_RJT = 0xc2, /* port reject */
- SPFC_FC_RCTL_F_RJT = 0xc3, /* fabric reject */
- SPFC_FC_RCTL_P_BSY = 0xc4, /* port busy */
- SPFC_FC_RCTL_F_BSY = 0xc5, /* fabric busy to data frame */
- SPFC_FC_RCTL_F_BSYL = 0xc6, /* fabric busy to link control frame */
- SPFC_FC_RCTL_LCR = 0xc7, /* link credit reset */
- SPFC_FC_RCTL_END = 0xc9 /* end */
-};
-
-struct spfc_fc_frame_header {
- u8 rctl; /* routing control */
- u8 did[ARRAY_INDEX_3]; /* Destination ID */
-
- u8 cs_ctrl; /* class of service control / pri */
- u8 sid[ARRAY_INDEX_3]; /* Source ID */
-
- u8 type; /* see enum fc_fh_type below */
- u8 frame_ctrl[ARRAY_INDEX_3]; /* frame control */
-
- u8 seq_id; /* sequence ID */
- u8 df_ctrl; /* data field control */
- u16 seq_cnt; /* sequence count */
-
- u16 oxid; /* originator exchange ID */
- u16 rxid; /* responder exchange ID */
- u32 param_offset; /* parameter or relative offset */
-};
-
-u32 spfc_recv_els_cmnd(const struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg, u8 *els_pld, u32 pld_len,
- bool first);
-u32 spfc_rcv_ls_gs_rsp(const struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg, u32 hot_tag);
-u32 spfc_rcv_els_rsp_sts(const struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg, u32 hot_tag);
-u32 spfc_rcv_bls_rsp(const struct spfc_hba_info *hba, struct unf_frame_pkg *pkg,
- u32 hot_tag);
-u32 spfc_rsv_bls_rsp_sts(const struct spfc_hba_info *hba,
- struct unf_frame_pkg *pkg, u32 rx_id);
-void spfc_save_login_parms_in_sq_info(struct spfc_hba_info *hba,
- struct unf_port_login_parms *login_params);
-u32 spfc_handle_aeq_off_load_err(struct spfc_hba_info *hba,
- struct spfc_aqe_data *aeq_msg);
-u32 spfc_free_xid(void *handle, struct unf_frame_pkg *pkg);
-u32 spfc_scq_free_xid_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe);
-u32 spfc_scq_exchg_timeout_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe);
-u32 spfc_scq_rcv_sq_nop_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe);
-u32 spfc_send_els_via_default_session(struct spfc_hba_info *hba, struct spfc_sqe *io_sqe,
- struct unf_frame_pkg *pkg,
- struct spfc_parent_queue_info *prt_queue_info);
-u32 spfc_send_ls_gs_cmnd(void *handle, struct unf_frame_pkg *pkg);
-u32 spfc_send_bls_cmnd(void *handle, struct unf_frame_pkg *pkg);
-
-/* Receive Frame from SCQ */
-u32 spfc_rcv_scq_entry_from_scq(struct spfc_hba_info *hba,
- union spfc_scqe *scqe, u32 scqn);
-void *spfc_get_els_buf_by_user_id(struct spfc_hba_info *hba, u16 user_id);
-
-#define SPFC_CHECK_PKG_ALLOCTIME(pkg) \
- do { \
- if (unlikely(UNF_GETXCHGALLOCTIME(pkg) == 0)) { \
- FC_DRV_PRINT(UNF_LOG_NORMAL, \
- UNF_WARN, \
- "[warn]Invalid MagicNum,S_ID(0x%x) " \
- "D_ID(0x%x) OXID(0x%x) " \
- "RX_ID(0x%x) Pkg type(0x%x) hot " \
- "pooltag(0x%x)", \
- UNF_GET_SID(pkg), UNF_GET_DID(pkg), \
- UNF_GET_OXID(pkg), UNF_GET_RXID(pkg), \
- ((struct unf_frame_pkg *)(pkg))->type, \
- UNF_GET_XCHG_TAG(pkg)); \
- } \
- } while (0)
-
-#endif
diff --git a/drivers/scsi/spfc/hw/spfc_utils.c b/drivers/scsi/spfc/hw/spfc_utils.c
deleted file mode 100644
index 328c388c95fe..000000000000
--- a/drivers/scsi/spfc/hw/spfc_utils.c
+++ /dev/null
@@ -1,102 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "spfc_utils.h"
-#include "unf_log.h"
-#include "unf_common.h"
-
-void spfc_cpu_to_big64(void *addr, u32 size)
-{
- u32 index = 0;
- u32 cnt = 0;
- u64 *temp = NULL;
-
- FC_CHECK_VALID(addr, dump_stack(); return);
- FC_CHECK_VALID((size % SPFC_QWORD_BYTE) == 0, dump_stack(); return);
-
- temp = (u64 *)addr;
- cnt = SPFC_SHIFT_TO_U64(size);
-
- for (index = 0; index < cnt; index++) {
- *temp = cpu_to_be64(*temp);
- temp++;
- }
-}
-
-void spfc_big_to_cpu64(void *addr, u32 size)
-{
- u32 index = 0;
- u32 cnt = 0;
- u64 *temp = NULL;
-
- FC_CHECK_VALID(addr, dump_stack(); return);
- FC_CHECK_VALID((size % SPFC_QWORD_BYTE) == 0, dump_stack(); return);
-
- temp = (u64 *)addr;
- cnt = SPFC_SHIFT_TO_U64(size);
-
- for (index = 0; index < cnt; index++) {
- *temp = be64_to_cpu(*temp);
- temp++;
- }
-}
-
-void spfc_cpu_to_big32(void *addr, u32 size)
-{
- unf_cpu_to_big_end(addr, size);
-}
-
-void spfc_big_to_cpu32(void *addr, u32 size)
-{
- if (size % UNF_BYTES_OF_DWORD)
- dump_stack();
-
- unf_big_end_to_cpu(addr, size);
-}
-
-void spfc_cpu_to_be24(u8 *data, u32 value)
-{
- data[ARRAY_INDEX_0] = (value >> UNF_SHIFT_16) & UNF_MASK_BIT_7_0;
- data[ARRAY_INDEX_1] = (value >> UNF_SHIFT_8) & UNF_MASK_BIT_7_0;
- data[ARRAY_INDEX_2] = value & UNF_MASK_BIT_7_0;
-}
-
-u32 spfc_big_to_cpu24(u8 *data)
-{
- return (data[ARRAY_INDEX_0] << UNF_SHIFT_16) |
- (data[ARRAY_INDEX_1] << UNF_SHIFT_8) | data[ARRAY_INDEX_2];
-}
-
-void spfc_print_buff(u32 dbg_level, void *buff, u32 size)
-{
- u32 *spfc_buff = NULL;
- u32 loop = 0;
- u32 index = 0;
-
- FC_CHECK_VALID(buff, dump_stack(); return);
- FC_CHECK_VALID(0 == (size % SPFC_DWORD_BYTE), dump_stack(); return);
-
- if ((dbg_level) <= unf_dgb_level) {
- spfc_buff = (u32 *)buff;
- loop = size / SPFC_DWORD_BYTE;
-
- for (index = 0; index < loop; index++) {
- spfc_buff = (u32 *)buff + index;
- FC_DRV_PRINT(UNF_LOG_NORMAL,
- UNF_MAJOR, "Buff DW%u 0x%08x.", index, *spfc_buff);
- }
- }
-}
-
-u32 spfc_log2n(u32 val)
-{
- u32 result = 0;
- u32 logn = (val >> UNF_SHIFT_1);
-
- while (logn) {
- logn >>= UNF_SHIFT_1;
- result++;
- }
-
- return result;
-}
diff --git a/drivers/scsi/spfc/hw/spfc_utils.h b/drivers/scsi/spfc/hw/spfc_utils.h
deleted file mode 100644
index 6b4330da3f1d..000000000000
--- a/drivers/scsi/spfc/hw/spfc_utils.h
+++ /dev/null
@@ -1,202 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef SPFC_UTILS_H
-#define SPFC_UTILS_H
-
-#include "unf_type.h"
-#include "unf_log.h"
-
-#define SPFC_ZERO (0)
-
-#define SPFC_BIT(n) (0x1UL << (n))
-#define SPFC_BIT_0 SPFC_BIT(0)
-#define SPFC_BIT_1 SPFC_BIT(1)
-#define SPFC_BIT_2 SPFC_BIT(2)
-#define SPFC_BIT_3 SPFC_BIT(3)
-#define SPFC_BIT_4 SPFC_BIT(4)
-#define SPFC_BIT_5 SPFC_BIT(5)
-#define SPFC_BIT_6 SPFC_BIT(6)
-#define SPFC_BIT_7 SPFC_BIT(7)
-#define SPFC_BIT_8 SPFC_BIT(8)
-#define SPFC_BIT_9 SPFC_BIT(9)
-#define SPFC_BIT_10 SPFC_BIT(10)
-#define SPFC_BIT_11 SPFC_BIT(11)
-#define SPFC_BIT_12 SPFC_BIT(12)
-#define SPFC_BIT_13 SPFC_BIT(13)
-#define SPFC_BIT_14 SPFC_BIT(14)
-#define SPFC_BIT_15 SPFC_BIT(15)
-#define SPFC_BIT_16 SPFC_BIT(16)
-#define SPFC_BIT_17 SPFC_BIT(17)
-#define SPFC_BIT_18 SPFC_BIT(18)
-#define SPFC_BIT_19 SPFC_BIT(19)
-#define SPFC_BIT_20 SPFC_BIT(20)
-#define SPFC_BIT_21 SPFC_BIT(21)
-#define SPFC_BIT_22 SPFC_BIT(22)
-#define SPFC_BIT_23 SPFC_BIT(23)
-#define SPFC_BIT_24 SPFC_BIT(24)
-#define SPFC_BIT_25 SPFC_BIT(25)
-#define SPFC_BIT_26 SPFC_BIT(26)
-#define SPFC_BIT_27 SPFC_BIT(27)
-#define SPFC_BIT_28 SPFC_BIT(28)
-#define SPFC_BIT_29 SPFC_BIT(29)
-#define SPFC_BIT_30 SPFC_BIT(30)
-#define SPFC_BIT_31 SPFC_BIT(31)
-
-#define SPFC_GET_BITS(data, mask) ((data) & (mask)) /* Obtains the bit */
-#define SPFC_SET_BITS(data, mask) ((data) |= (mask)) /* set the bit */
-#define SPFC_CLR_BITS(data, mask) ((data) &= ~(mask)) /* clear the bit */
-
-#define SPFC_LSB(x) ((u8)(x))
-#define SPFC_MSB(x) ((u8)((u16)(x) >> 8))
-
-#define SPFC_LSW(x) ((u16)(x))
-#define SPFC_MSW(x) ((u16)((u32)(x) >> 16))
-
-#define SPFC_LSD(x) ((u32)((u64)(x)))
-#define SPFC_MSD(x) ((u32)((((u64)(x)) >> 16) >> 16))
-
-#define SPFC_BYTES_TO_QW_NUM(x) ((x) >> 3)
-#define SPFC_BYTES_TO_DW_NUM(x) ((x) >> 2)
-
-#define UNF_GET_SHIFTMASK(__src, __shift, __mask) (((__src) & (__mask)) >> (__shift))
-#define UNF_FC_SET_SHIFTMASK(__des, __val, __shift, __mask) \
- ((__des) = (((__des) & ~(__mask)) | (((__val) << (__shift)) & (__mask))))
-
-/* R_CTL */
-#define UNF_FC_HEADER_RCTL_MASK (0xFF000000)
-#define UNF_FC_HEADER_RCTL_SHIFT (24)
-#define UNF_FC_HEADER_RCTL_DWORD (0)
-#define UNF_GET_FC_HEADER_RCTL(__pfcheader) \
- UNF_GET_SHIFTMASK(((u32 *)(void *)(__pfcheader))[UNF_FC_HEADER_RCTL_DWORD], \
- UNF_FC_HEADER_RCTL_SHIFT, UNF_FC_HEADER_RCTL_MASK)
-
-#define UNF_SET_FC_HEADER_RCTL(__pfcheader, __val) \
- do { \
- UNF_FC_SET_SHIFTMASK(((u32 *)(void *)(__pfcheader)[UNF_FC_HEADER_RCTL_DWORD], \
- __val, UNF_FC_HEADER_RCTL_SHIFT, UNF_FC_HEADER_RCTL_MASK) \
- } while (0)
-
-/* PRLI PARAM 3 */
-#define SPFC_PRLI_PARAM_WXFER_ENABLE_MASK (0x00000001)
-#define SPFC_PRLI_PARAM_WXFER_ENABLE_SHIFT (0)
-#define SPFC_PRLI_PARAM_WXFER_DWORD (3)
-#define SPFC_GET_PRLI_PARAM_WXFER(__pfcheader) \
- UNF_GET_SHIFTMASK(((u32 *)(void *)(__pfcheader))[SPFC_PRLI_PARAM_WXFER_DWORD], \
- SPFC_PRLI_PARAM_WXFER_ENABLE_SHIFT, \
- SPFC_PRLI_PARAM_WXFER_ENABLE_MASK)
-
-#define SPFC_PRLI_PARAM_CONF_ENABLE_MASK (0x00000080)
-#define SPFC_PRLI_PARAM_CONF_ENABLE_SHIFT (7)
-#define SPFC_PRLI_PARAM_CONF_DWORD (3)
-#define SPFC_GET_PRLI_PARAM_CONF(__pfcheader) \
- UNF_GET_SHIFTMASK(((u32 *)(void *)(__pfcheader))[SPFC_PRLI_PARAM_CONF_DWORD], \
- SPFC_PRLI_PARAM_CONF_ENABLE_SHIFT, \
- SPFC_PRLI_PARAM_CONF_ENABLE_MASK)
-
-#define SPFC_PRLI_PARAM_REC_ENABLE_MASK (0x00000400)
-#define SPFC_PRLI_PARAM_REC_ENABLE_SHIFT (10)
-#define SPFC_PRLI_PARAM_CONF_REC (3)
-#define SPFC_GET_PRLI_PARAM_REC(__pfcheader) \
- UNF_GET_SHIFTMASK(((u32 *)(void *)(__pfcheader))[SPFC_PRLI_PARAM_CONF_REC], \
- SPFC_PRLI_PARAM_REC_ENABLE_SHIFT, SPFC_PRLI_PARAM_REC_ENABLE_MASK)
-
-#define SPFC_FUNCTION_ENTER \
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ALL, \
- "%s Enter.", __func__)
-#define SPFC_FUNCTION_RETURN \
- FC_DRV_PRINT(UNF_LOG_NORMAL, UNF_ALL, \
- "%s Return.", __func__)
-
-#define SPFC_SPIN_LOCK_IRQSAVE(interrupt, hw_adapt_lock, flags) \
- do { \
- if ((interrupt) == false) { \
- spin_lock_irqsave(&(hw_adapt_lock), flags); \
- } \
- } while (0)
-
-#define SPFC_SPIN_UNLOCK_IRQRESTORE(interrupt, hw_adapt_lock, flags) \
- do { \
- if ((interrupt) == false) { \
- spin_unlock_irqrestore(&(hw_adapt_lock), flags); \
- } \
- } while (0)
-
-#define FC_CHECK_VALID(condition, fail_do) \
- do { \
- if (unlikely(!(condition))) { \
- FC_DRV_PRINT(UNF_LOG_REG_ATT, \
- UNF_ERR, "Para check(%s) invalid", \
- #condition); \
- fail_do; \
- } \
- } while (0)
-
-#define RETURN_ERROR_S32 (-1)
-#define UNF_RETURN_ERROR_S32 (-1)
-
-enum SPFC_LOG_CTRL_E {
- SPFC_LOG_ALL = 0,
- SPFC_LOG_SCQE_RX,
- SPFC_LOG_ELS_TX,
- SPFC_LOG_ELS_RX,
- SPFC_LOG_GS_TX,
- SPFC_LOG_GS_RX,
- SPFC_LOG_BLS_TX,
- SPFC_LOG_BLS_RX,
- SPFC_LOG_FCP_TX,
- SPFC_LOG_FCP_RX,
- SPFC_LOG_SESS_TX,
- SPFC_LOG_SESS_RX,
- SPFC_LOG_DIF_TX,
- SPFC_LOG_DIF_RX
-};
-
-extern u32 spfc_log_en;
-#define SPFC_LOG_EN(hba, log_ctrl) (spfc_log_en + (log_ctrl))
-
-enum SPFC_HBA_ERR_STAT_E {
- SPFC_STAT_CTXT_FLUSH_DONE = 0,
- SPFC_STAT_SQ_WAIT_EMPTY,
- SPFC_STAT_LAST_GS_SCQE,
- SPFC_STAT_SQ_POOL_EMPTY,
- SPFC_STAT_PARENT_IO_FLUSHED,
- SPFC_STAT_ROOT_IO_FLUSHED, /* 5 */
- SPFC_STAT_ROOT_SQ_FULL,
- SPFC_STAT_ELS_RSP_EXCH_REUSE,
- SPFC_STAT_GS_RSP_EXCH_REUSE,
- SPFC_STAT_SQ_IO_BUFFER_CLEARED,
- SPFC_STAT_PARENT_SQ_NOT_OFFLOADED, /* 10 */
- SPFC_STAT_PARENT_SQ_QUEUE_DELAYED_WORK,
- SPFC_STAT_PARENT_SQ_INVALID_CACHED_ID,
- SPFC_HBA_STAT_BUTT
-};
-
-#define SPFC_DWORD_BYTE (4)
-#define SPFC_QWORD_BYTE (8)
-#define SPFC_SHIFT_TO_U64(x) ((x) >> 3)
-#define SPFC_SHIFT_TO_U32(x) ((x) >> 2)
-
-void spfc_cpu_to_big64(void *addr, u32 size);
-void spfc_big_to_cpu64(void *addr, u32 size);
-void spfc_cpu_to_big32(void *addr, u32 size);
-void spfc_big_to_cpu32(void *addr, u32 size);
-void spfc_cpu_to_be24(u8 *data, u32 value);
-u32 spfc_big_to_cpu24(u8 *data);
-
-void spfc_print_buff(u32 dbg_level, void *buff, u32 size);
-
-u32 spfc_log2n(u32 val);
-
-static inline void spfc_swap_16_in_32(u32 *paddr, u32 length)
-{
- u32 i;
-
- for (i = 0; i < length; i++) {
- paddr[i] =
- ((((paddr[i]) & UNF_MASK_BIT_31_16) >> UNF_SHIFT_16) |
- (((paddr[i]) & UNF_MASK_BIT_15_0) << UNF_SHIFT_16));
- }
-}
-
-#endif /* __SPFC_UTILS_H__ */
diff --git a/drivers/scsi/spfc/hw/spfc_wqe.c b/drivers/scsi/spfc/hw/spfc_wqe.c
deleted file mode 100644
index 61909c51bc8c..000000000000
--- a/drivers/scsi/spfc/hw/spfc_wqe.c
+++ /dev/null
@@ -1,646 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#include "spfc_wqe.h"
-#include "spfc_module.h"
-#include "spfc_service.h"
-
-void spfc_build_tmf_rsp_wqe_ts_header(struct unf_frame_pkg *pkg,
- struct spfc_sqe_tmf_rsp *sqe, u16 exi_base,
- u32 scqn)
-{
- sqe->ts_sl.task_type = SPFC_SQE_FCP_TMF_TRSP;
- sqe->ts_sl.wd0.conn_id =
- (u16)(pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX]);
-
- if (UNF_GET_RXID(pkg) == INVALID_VALUE16)
- sqe->ts_sl.local_xid = INVALID_VALUE16;
- else
- sqe->ts_sl.local_xid = UNF_GET_RXID(pkg) + exi_base;
-
- sqe->ts_sl.tmf_rsp.wd0.scqn = scqn;
- sqe->ts_sl.magic_num = UNF_GETXCHGALLOCTIME(pkg);
-}
-
-void spfc_build_common_wqe_ctrls(struct spfc_wqe_ctrl *ctrl_sl, u8 task_len)
-{
- /* "BDSL" field of CtrlS - defines the size of BDS, which varies from 0
- * to 2040 bytes (8 bits of 8 bytes' chunk)
- */
- ctrl_sl->ch.wd0.bdsl = 0;
-
- /* "DrvSL" field of CtrlS - defines the size of DrvS, which varies from
- * 0 to 24 bytes
- */
- ctrl_sl->ch.wd0.drv_sl = 0;
-
- /* a.
- * b1 - linking WQE, which will be only used in linked page architecture
- * instead of ring, it's a special control WQE which does not contain
- * any buffer or inline data information, and will only be consumed by
- * hardware. The size is aligned to WQEBB/WQE b0 - normal WQE, either
- * normal SEG WQE or inline data WQE
- */
- ctrl_sl->ch.wd0.wf = 0;
-
- /*
- * "CF" field of CtrlS - Completion Format - defines the format of CS.
- * a. b0 - Status information is embedded inside of Completion Section
- * b. b1 - Completion Section keeps SGL, where Status information
- * should be written. (For the definition of SGLs see
- * section 4.1.)
- */
- ctrl_sl->ch.wd0.cf = 0;
-
- /* "TSL" field of CtrlS - defines the size of TS, which varies from 0 to
- * 248 bytes
- */
- ctrl_sl->ch.wd0.tsl = task_len;
-
- /*
- * Variable length SGE (vSGE). The size of SGE is 16 bytes. The vSGE
- * format is of two types, which are defined by "VA " field of CtrlS.
- * "VA" stands for Virtual Address: o b0. SGE comprises 64-bits
- * buffer's pointer and 31-bits Length, each SGE can only support up to
- * 2G-1B, it can guarantee each single SGE length can not exceed 2GB by
- * nature, A byte count value of zero means a 0byte data transfer. o b1.
- * SGE comprises 64-bits buffer's pointer, 31-bits Length and 30-bits
- * Key of the Translation table, each SGE can only support up to 2G-1B,
- * it can guarantee each single SGE length can not exceed 2GB by nature,
- * A byte count value of zero means a 0byte data transfer
- */
- ctrl_sl->ch.wd0.va = 0;
-
- /*
- * "DF" field of CtrlS - Data Format - defines the format of BDS
- * a. b0 - BDS carries the list of SGEs (SGL)
- * b. b1 - BDS carries the inline data
- */
- ctrl_sl->ch.wd0.df = 0;
-
- /* "CR" - Completion is Required - marks CQE generation request per WQE
- */
- ctrl_sl->ch.wd0.cr = 1;
-
- /* "DIFSL" field of CtrlS - defines the size of DIFS, which varies from
- * 0 to 56 bytes
- */
- ctrl_sl->ch.wd0.dif_sl = 0;
-
- /* "CSL" field of CtrlS - defines the size of CS, which varies from 0 to
- * 24 bytes
- */
- ctrl_sl->ch.wd0.csl = 0;
-
- /* CtrlSL describes the size of CtrlS in 8 bytes chunks. The
- * value Zero is not valid
- */
- ctrl_sl->ch.wd0.ctrl_sl = 1;
-
- /* "O" - Owner - marks ownership of WQE */
- ctrl_sl->ch.wd0.owner = 0;
-}
-
-void spfc_build_trd_twr_wqe_ctrls(struct unf_frame_pkg *pkg, struct spfc_sqe *sqe)
-{
- /* "BDSL" field of CtrlS - defines the size of BDS, which varies from 0
- * to 2040 bytes (8 bits of 8 bytes' chunk)
- */
- /* A TrdWqe carries 2 SGEs by default, 4 DW per SGE; the value is 4
- * because the unit is 2 DW; in double SGL mode, bdsl is 2
- */
- sqe->ctrl_sl.ch.wd0.bdsl = SPFC_T_RD_WR_WQE_CTR_BDSL_SIZE;
-
- /* "DrvSL" field of CtrlS - defines the size of DrvS, which varies from
- * 0 to 24 bytes
- */
- /* DrvSL = 0 */
- sqe->ctrl_sl.ch.wd0.drv_sl = 0;
-
- /* a.
- * b1 - linking WQE, which will be only used in linked page architecture
- * instead of ring, it's a special control WQE which does not contain
- * any buffer or inline data information, and will only be consumed by
- * hardware. The size is aligned to WQEBB/WQE b0 - normal WQE, either
- * normal SEG WQE or inline data WQE
- */
- /* normal wqe */
- sqe->ctrl_sl.ch.wd0.wf = 0;
-
- /*
- * "CF" field of CtrlS - Completion Format - defines the format of CS.
- * a. b0 - Status information is embedded inside of Completion Section
- * b. b1 - Completion Section keeps SGL, where Status information
- * should be written. (For the definition of SGLs see section 4.1)
- */
- /* by SCQE mode, the value is ignored */
- sqe->ctrl_sl.ch.wd0.cf = 0;
-
- /* "TSL" field of CtrlS - defines the size of TS, which varies from 0 to
- * 248 bytes
- */
- /* TSL is configured by 56 bytes */
- sqe->ctrl_sl.ch.wd0.tsl =
- sizeof(struct spfc_sqe_ts) / SPFC_WQE_SECTION_CHUNK_SIZE;
-
- /*
- * Variable length SGE (vSGE). The size of SGE is 16 bytes. The vSGE
- * format is of two types, which are defined by "VA " field of CtrlS.
- * "VA" stands for Virtual Address: o b0. SGE comprises 64-bits buffer's
- * pointer and 31-bits Length, each SGE can only support up to 2G-1B, it
- * can guarantee each single SGE length can not exceed 2GB by nature, A
- * byte count value of zero means a 0byte data transfer. o b1. SGE
- * comprises 64-bits buffer's pointer, 31-bits Length and 30-bits Key of
- * the Translation table, each SGE can only support up to 2G-1B, it can
- * guarantee each single SGE length can not exceed 2GB by nature, A byte
- * count value of zero means a 0byte data transfer
- */
- sqe->ctrl_sl.ch.wd0.va = 0;
-
- /*
- * "DF" field of CtrlS - Data Format - defines the format of BDS
- * a. b0 - BDS carries the list of SGEs (SGL)
- * b. b1 - BDS carries the inline data
- */
- sqe->ctrl_sl.ch.wd0.df = 0;
-
- /* "CR" - Completion is Required - marks CQE generation request per WQE
- */
- /* by SCQE mode, this value is ignored */
- sqe->ctrl_sl.ch.wd0.cr = 1;
-
- /* "DIFSL" field of CtrlS - defines the size of DIFS, which varies from
- * 0 to 56 bytes.
- */
- sqe->ctrl_sl.ch.wd0.dif_sl = 0;
-
- /* "CSL" field of CtrlS - defines the size of CS, which varies from 0 to
- * 24 bytes
- */
- sqe->ctrl_sl.ch.wd0.csl = 0;
-
- /* CtrlSL describes the size of CtrlS in 8 bytes chunks. The
- * value Zero is not valid.
- */
- sqe->ctrl_sl.ch.wd0.ctrl_sl = SPFC_T_RD_WR_WQE_CTR_CTRLSL_SIZE;
-
- /* "O" - Owner - marks ownership of WQE */
- sqe->ctrl_sl.ch.wd0.owner = 0;
-}
-
-/* ****************************************************************************
- * Function Name : spfc_build_service_wqe_ts_common
- * Function Description : Construct the DW1~DW3 field in the Parent SQ WQE
- * request of the ELS and ELS_RSP requests.
- * Input Parameters : struct spfc_sqe_ts *sqe_ts u32 rport_index u16 local_xid
- * u16 remote_xid u16 data_len
- * Output Parameters : N/A
- * Return Type : void
- ****************************************************************************
- */
-void spfc_build_service_wqe_ts_common(struct spfc_sqe_ts *sqe_ts, u32 rport_index,
- u16 local_xid, u16 remote_xid, u16 data_len)
-{
- sqe_ts->local_xid = local_xid;
-
- sqe_ts->wd0.conn_id = (u16)rport_index;
- sqe_ts->wd0.remote_xid = remote_xid;
-
- sqe_ts->cont.els_gs_elsrsp_comm.data_len = data_len;
-}
-
-/* ****************************************************************************
- * Function Name : spfc_build_els_gs_wqe_sge
- * Function Description : Construct the SGE field of the ELS and ELS_RSP WQE.
- * The SGE and frame content have been converted to large ends in this
- * function.
- * Input Parameters: struct spfc_sqe *sqe void *buf_addr u32 buf_len u32 xid
- * Output Parameters : N/A
- * Return Type : void
- ****************************************************************************
- */
-void spfc_build_els_gs_wqe_sge(struct spfc_sqe *sqe, void *buf_addr, u64 phy_addr,
- u32 buf_len, u32 xid, void *handle)
-{
- u64 els_rsp_phy_addr;
- struct spfc_variable_sge *sge = NULL;
-
- /* Fill in SGE and convert it to big-endian. */
- sge = &sqe->sge[ARRAY_INDEX_0];
- els_rsp_phy_addr = phy_addr;
- sge->buf_addr_hi = SPFC_HIGH_32_BITS(els_rsp_phy_addr);
- sge->buf_addr_lo = SPFC_LOW_32_BITS(els_rsp_phy_addr);
- sge->wd0.buf_len = buf_len;
- sge->wd0.r_flag = 0;
- sge->wd1.extension_flag = SPFC_WQE_SGE_NOT_EXTEND_FLAG;
- sge->wd1.buf_addr_gpa = SPFC_ZEROCOPY_PCIE_TEMPLATE_VALUE;
- sge->wd1.xid = 0;
- sge->wd1.last_flag = SPFC_WQE_SGE_LAST_FLAG;
- spfc_cpu_to_big32(sge, sizeof(*sge));
-
- /* Convert the payload of the FC frame to big-endian. */
- if (buf_addr)
- spfc_cpu_to_big32(buf_addr, buf_len);
-}
-
-/* ****************************************************************************
- * Function Name : spfc_build_els_wqe_ts_rsp
- * Function Description : Construct the DW2~DW6 field in the Parent SQ WQE
- * of the ELS_RSP request.
- * Input Parameters : struct spfc_sqe *sqe void *sq_info void *frame_pld
- * u16 type u16 cmnd u32 scqn
- * Output Parameters: N/A
- * Return Type : void
- ****************************************************************************
- */
-void spfc_build_els_wqe_ts_rsp(struct spfc_sqe *sqe, void *info,
- struct unf_frame_pkg *pkg, void *frame_pld,
- u16 type, u16 cmnd)
-{
- struct unf_prli_payload *prli_acc_pld = NULL;
- struct spfc_sqe_els_rsp *els_rsp = NULL;
- struct spfc_sqe_ts *sqe_ts = NULL;
- struct spfc_parent_sq_info *sq_info = NULL;
- struct spfc_hba_info *hba = NULL;
- struct unf_fc_head *pkg_fc_hdr_info = NULL;
- struct spfc_parent_queue_info *prnt_q_info = (struct spfc_parent_queue_info *)info;
-
- FC_CHECK_RETURN_VOID(sqe);
- FC_CHECK_RETURN_VOID(frame_pld);
-
- sqe_ts = &sqe->ts_sl;
- els_rsp = &sqe_ts->cont.els_rsp;
- sqe_ts->task_type = SPFC_SQE_ELS_RSP;
-
- /* The default chip does not need to update parameters. */
- els_rsp->wd1.para_update = 0x0;
-
- sq_info = &prnt_q_info->parent_sq_info;
- hba = (struct spfc_hba_info *)sq_info->hba;
-
- pkg_fc_hdr_info = &pkg->frame_head;
- els_rsp->sid = pkg_fc_hdr_info->csctl_sid;
- els_rsp->did = pkg_fc_hdr_info->rctl_did;
- els_rsp->wd7.hotpooltag = UNF_GET_HOTPOOL_TAG(pkg) + hba->exi_base;
- els_rsp->wd2.class_mode = FC_PROTOCOL_CLASS_3;
-
- if (type == ELS_RJT)
- els_rsp->wd2.class_mode = pkg->class_mode;
-
- /* When the PLOGI request is sent, the microcode needs to be instructed
- * to clear the I/O related to the link to avoid data inconsistency
- * caused by out-of-order I/O.
- */
- if ((cmnd == ELS_LOGO || cmnd == ELS_PLOGI)) {
- els_rsp->wd1.clr_io = 1;
- els_rsp->wd6.reset_exch_start = hba->exi_base;
- els_rsp->wd6.reset_exch_end =
- hba->exi_base + (hba->exi_count - 1);
- els_rsp->wd7.scqn =
- prnt_q_info->parent_sts_scq_info.cqm_queue_id;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "Port(0x%x) send cmd(0x%x) to RPort(0x%x),rport index(0x%x), notify clean io start 0x%x, end 0x%x, scqn 0x%x.",
- sq_info->local_port_id, cmnd, sq_info->remote_port_id,
- sq_info->rport_index, els_rsp->wd6.reset_exch_start,
- els_rsp->wd6.reset_exch_end, els_rsp->wd7.scqn);
-
- return;
- }
-
- if (type == ELS_RJT)
- return;
-
- /* Enter WQE in the PrliAcc negotiation parameter, and fill in the
- * Update flag in WQE.
- */
- if (cmnd == ELS_PRLI) {
- /* The chip updates the PLOGI ACC negotiation parameters. */
- els_rsp->wd2.seq_cnt = sq_info->plogi_co_parms.seq_cnt;
- els_rsp->wd2.e_d_tov = sq_info->plogi_co_parms.ed_tov;
- els_rsp->wd2.tx_mfs = sq_info->plogi_co_parms.tx_mfs;
- els_rsp->e_d_tov_timer_val = sq_info->plogi_co_parms.ed_tov_time;
-
- /* The chip updates the PRLI ACC parameter. */
- prli_acc_pld = (struct unf_prli_payload *)frame_pld;
- els_rsp->wd4.xfer_dis = SPFC_GET_PRLI_PARAM_WXFER(prli_acc_pld->parms);
- els_rsp->wd4.conf = SPFC_GET_PRLI_PARAM_CONF(prli_acc_pld->parms);
- els_rsp->wd4.rec = SPFC_GET_PRLI_PARAM_REC(prli_acc_pld->parms);
-
- els_rsp->wd1.para_update = 0x03;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "Port(0x%x) save rport index(0x%x) login parms,seqcnt:0x%x,e_d_tov:0x%x,txmfs:0x%x,e_d_tovtimerval:0x%x, xfer_dis:0x%x,conf:0x%x,rec:0x%x.",
- sq_info->local_port_id, sq_info->rport_index,
- els_rsp->wd2.seq_cnt, els_rsp->wd2.e_d_tov,
- els_rsp->wd2.tx_mfs, els_rsp->e_d_tov_timer_val,
- els_rsp->wd4.xfer_dis, els_rsp->wd4.conf, els_rsp->wd4.rec);
- }
-}
-
-/* ****************************************************************************
- * Function Name : spfc_build_els_wqe_ts_req
- * Function Description: Construct the DW2~DW4 field in the Parent SQ WQE
- * of the ELS request.
- * Input Parameters: struct spfc_sqe *sqe void *sq_info u16 cmnd u32 scqn
- * Output Parameters: N/A
- * Return Type: void
- ****************************************************************************
- */
-void spfc_build_els_wqe_ts_req(struct spfc_sqe *sqe, void *info, u32 scqn,
- void *frame_pld, struct unf_frame_pkg *pkg)
-{
- struct spfc_sqe_ts *sqe_ts = NULL;
- struct spfc_sqe_t_els_gs *els_req = NULL;
- struct spfc_parent_sq_info *sq_info = NULL;
- struct spfc_hba_info *hba = NULL;
- struct unf_fc_head *pkg_fc_hdr_info = NULL;
- u16 cmnd;
-
- cmnd = SPFC_GET_LS_GS_CMND_CODE(pkg->cmnd);
-
- sqe_ts = &sqe->ts_sl;
- if (pkg->type == UNF_PKG_GS_REQ)
- sqe_ts->task_type = SPFC_SQE_GS_CMND;
- else
- sqe_ts->task_type = SPFC_SQE_ELS_CMND;
-
- sqe_ts->magic_num = UNF_GETXCHGALLOCTIME(pkg);
-
- els_req = &sqe_ts->cont.t_els_gs;
- pkg_fc_hdr_info = &pkg->frame_head;
-
- sq_info = (struct spfc_parent_sq_info *)info;
- hba = (struct spfc_hba_info *)sq_info->hba;
- els_req->sid = pkg_fc_hdr_info->csctl_sid;
- els_req->did = pkg_fc_hdr_info->rctl_did;
-
- /* When the PLOGI request is sent, the microcode needs to be instructed
- * to clear the I/O related to the link to avoid data inconsistency
- * caused by out-of-order I/O.
- */
- if ((cmnd == ELS_LOGO || cmnd == ELS_PLOGI) && hba) {
- els_req->wd4.clr_io = 1;
- els_req->wd6.reset_exch_start = hba->exi_base;
- els_req->wd6.reset_exch_end = hba->exi_base + (hba->exi_count - 1);
- els_req->wd7.scqn = scqn;
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "Port(0x%x) Rport(0x%x) SID(0x%x) send %s to DID(0x%x), notify clean io start 0x%x, end 0x%x, scqn 0x%x.",
- hba->port_cfg.port_id, sq_info->rport_index,
- sq_info->local_port_id, (cmnd == ELS_PLOGI) ? "PLOGI" : "LOGO",
- sq_info->remote_port_id, els_req->wd6.reset_exch_start,
- els_req->wd6.reset_exch_end, scqn);
-
- return;
- }
-
- /* The chip updates the PLOGI ACC negotiation parameters. */
- if (cmnd == ELS_PRLI) {
- els_req->wd5.seq_cnt = sq_info->plogi_co_parms.seq_cnt;
- els_req->wd5.e_d_tov = sq_info->plogi_co_parms.ed_tov;
- els_req->wd5.tx_mfs = sq_info->plogi_co_parms.tx_mfs;
- els_req->e_d_tov_timer_val = sq_info->plogi_co_parms.ed_tov_time;
-
- els_req->wd4.rec_support = hba->port_cfg.tape_support ? 1 : 0;
- els_req->wd4.para_update = 0x01;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT,
- UNF_INFO,
- "Port(0x%x) save rport index(0x%x) login parms,seqcnt:0x%x,e_d_tov:0x%x,txmfs:0x%x,e_d_tovtimerval:0x%x.",
- sq_info->local_port_id, sq_info->rport_index,
- els_req->wd5.seq_cnt, els_req->wd5.e_d_tov,
- els_req->wd5.tx_mfs, els_req->e_d_tov_timer_val);
- }
-
- if (cmnd == ELS_ECHO)
- els_req->echo_flag = true;
-
- if (cmnd == ELS_REC) {
- els_req->wd4.rec_flag = 1;
- els_req->wd4.origin_hottag = pkg->origin_hottag + hba->exi_base;
- els_req->origin_magicnum = pkg->origin_magicnum;
-
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_MAJOR,
- "Port(0x%x) Rport(0x%x) SID(0x%x) send Rec to DID(0x%x), origin_hottag 0x%x",
- hba->port_cfg.port_id, sq_info->rport_index,
- sq_info->local_port_id, sq_info->remote_port_id,
- els_req->wd4.origin_hottag);
- }
-}
-
-/* ****************************************************************************
- * Function Name : spfc_build_bls_wqe_ts_req
- * Function Description: Construct the DW2 field in the Parent SQ WQE of
- * the ELS request, that is, ABTS parameter.
- * Input Parameters:struct unf_frame_pkg *pkg void *hba
- * Output Parameters: N/A
- * Return Type: void
- ****************************************************************************
- */
-void spfc_build_bls_wqe_ts_req(struct spfc_sqe *sqe, struct unf_frame_pkg *pkg, void *handle)
-{
- struct spfc_sqe_abts *abts;
-
- sqe->ts_sl.task_type = SPFC_SQE_BLS_CMND;
- sqe->ts_sl.magic_num = UNF_GETXCHGALLOCTIME(pkg);
-
- abts = &sqe->ts_sl.cont.abts;
- abts->fh_parm_abts = pkg->frame_head.parameter;
- abts->hotpooltag = UNF_GET_HOTPOOL_TAG(pkg) +
- ((struct spfc_hba_info *)handle)->exi_base;
- abts->release_timer = UNF_GET_XID_RELEASE_TIMER(pkg);
-}
-
-/* ****************************************************************************
- * Function Name : spfc_build_service_wqe_ctrl_section
- * Function Description: fill Parent SQ WQE and Root SQ WQE's Control Section
- * Input Parameters : struct spfc_wqe_ctrl *wqe_cs u32 ts_size u32 bdsl
- * Output Parameters : N/A
- * Return Type : void
- ****************************************************************************
- */
-void spfc_build_service_wqe_ctrl_section(struct spfc_wqe_ctrl *wqe_cs, u32 ts_size,
- u32 bdsl)
-{
- wqe_cs->ch.wd0.bdsl = bdsl;
- wqe_cs->ch.wd0.drv_sl = 0;
- wqe_cs->ch.wd0.rsvd0 = 0;
- wqe_cs->ch.wd0.wf = 0;
- wqe_cs->ch.wd0.cf = 0;
- wqe_cs->ch.wd0.tsl = ts_size;
- wqe_cs->ch.wd0.va = 0;
- wqe_cs->ch.wd0.df = 0;
- wqe_cs->ch.wd0.cr = 1;
- wqe_cs->ch.wd0.dif_sl = 0;
- wqe_cs->ch.wd0.csl = 0;
- wqe_cs->ch.wd0.ctrl_sl = SPFC_BYTES_TO_QW_NUM(sizeof(*wqe_cs)); /* divided by 8 */
- wqe_cs->ch.wd0.owner = 0;
-}
-
-/* ****************************************************************************
- * Function Name : spfc_build_wqe_owner_pmsn
- * Function Description: This field is filled using the value of Control
- * Section of Parent SQ WQE.
- * Input Parameters: struct spfc_wqe_ctrl *wqe_cs u16 owner u16 pmsn
- * Output Parameters : N/A
- * Return Type: void
- ****************************************************************************
- */
-void spfc_build_wqe_owner_pmsn(struct spfc_sqe *io_sqe, u16 owner, u16 pmsn)
-{
- struct spfc_wqe_ctrl *wqe_cs = &io_sqe->ctrl_sl;
- struct spfc_wqe_ctrl *wqee_cs = &io_sqe->ectrl_sl;
-
- wqe_cs->qsf.wqe_sn = pmsn;
- wqe_cs->qsf.dump_wqe_sn = wqe_cs->qsf.wqe_sn;
- wqe_cs->ch.wd0.owner = (u32)owner;
- wqee_cs->ch.ctrl_ch_val = wqe_cs->ch.ctrl_ch_val;
- wqee_cs->qsf.wqe_sn = wqe_cs->qsf.wqe_sn;
- wqee_cs->qsf.dump_wqe_sn = wqe_cs->qsf.dump_wqe_sn;
-}
-
-/* ****************************************************************************
- * Function Name : spfc_convert_parent_wqe_to_big_endian
- * Function Description: Set the Done field of Parent SQ WQE and convert
- * Control Section and Task Section to big-endian.
- * Input Parameters:struct spfc_sqe *sqe
- * Output Parameters : N/A
- * Return Type : void
- ****************************************************************************
- */
-void spfc_convert_parent_wqe_to_big_endian(struct spfc_sqe *sqe)
-{
- if (likely(sqe->ts_sl.task_type != SPFC_TASK_T_TRESP &&
- sqe->ts_sl.task_type != SPFC_TASK_T_TMF_RESP)) {
- /* Convert Control Section and Task Section to big-endian. Before
- * the SGE enters the queue, the upper-layer driver converts the
- * SGE and Task Section to the big-endian mode.
- */
- spfc_cpu_to_big32(&sqe->ctrl_sl, sizeof(sqe->ctrl_sl));
- spfc_cpu_to_big32(&sqe->ts_sl, sizeof(sqe->ts_sl));
- spfc_cpu_to_big32(&sqe->ectrl_sl, sizeof(sqe->ectrl_sl));
- spfc_cpu_to_big32(&sqe->sid, sizeof(sqe->sid));
- spfc_cpu_to_big32(&sqe->did, sizeof(sqe->did));
- spfc_cpu_to_big32(&sqe->wqe_gpa, sizeof(sqe->wqe_gpa));
- spfc_cpu_to_big32(&sqe->db_val, sizeof(sqe->db_val));
- } else {
- /* SPFC_TASK_T_TRESP may use the SGE as the Task Section, so
- * convert the entire SQE to big-endian.
- */
- spfc_cpu_to_big32(sqe, sizeof(struct spfc_sqe_tresp));
- }
-}
-
-/* ****************************************************************************
- * Function Name : spfc_build_cmdqe_common
- * Function Description : Assemble the Cmdqe Common part.
- * Input Parameters: union spfc_cmdqe *cmd_qe enum spfc_task_type task_type u16 rxid
- * Output Parameters : N/A
- * Return Type: void
- ****************************************************************************
- */
-void spfc_build_cmdqe_common(union spfc_cmdqe *cmd_qe, enum spfc_task_type task_type,
- u16 rxid)
-{
- cmd_qe->common.wd0.task_type = task_type;
- cmd_qe->common.wd0.rx_id = rxid;
- cmd_qe->common.wd0.rsvd0 = 0;
-}
-
-#define SPFC_STANDARD_SIRT_ENABLE (1)
-#define SPFC_STANDARD_SIRT_DISABLE (0)
-#define SPFC_UNKNOWN_ID (0xFFFF)
-
-void spfc_build_icmnd_wqe_ts_header(struct unf_frame_pkg *pkg, struct spfc_sqe *sqe,
- u8 task_type, u16 exi_base, u8 port_idx)
-{
- sqe->ts_sl.local_xid = (u16)UNF_GET_HOTPOOL_TAG(pkg) + exi_base;
- sqe->ts_sl.task_type = task_type;
- sqe->ts_sl.wd0.conn_id =
- (u16)(pkg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX]);
-
- sqe->ts_sl.wd0.remote_xid = SPFC_UNKNOWN_ID;
- sqe->ts_sl.magic_num = UNF_GETXCHGALLOCTIME(pkg);
-}
-
-/* ****************************************************************************
- * Function Name : spfc_build_icmnd_wqe_ts
- * Function Description : Constructing the TS Domain of the ICmnd
- * Input Parameters: void *hba struct unf_frame_pkg *pkg
- * struct spfc_sqe_ts *sqe_ts
- * Output Parameters :N/A
- * Return Type : void
- ****************************************************************************
- */
-void spfc_build_icmnd_wqe_ts(void *handle, struct unf_frame_pkg *pkg,
- struct spfc_sqe_ts *sqe_ts, union spfc_sqe_ts_ex *sqe_tsex)
-{
- struct spfc_sqe_icmnd *icmnd = &sqe_ts->cont.icmnd;
- struct spfc_hba_info *hba = NULL;
-
- hba = (struct spfc_hba_info *)handle;
-
- sqe_ts->cdb_type = 0;
- memcpy(icmnd->fcp_cmnd_iu, pkg->fcp_cmnd, sizeof(struct unf_fcp_cmnd));
-
- if (sqe_ts->task_type == SPFC_SQE_FCP_ITMF) {
- icmnd->info.tmf.w0.bs.reset_exch_start = hba->exi_base;
- icmnd->info.tmf.w0.bs.reset_exch_end = hba->exi_base + hba->exi_count - 1;
-
- icmnd->info.tmf.w1.bs.reset_did = UNF_GET_DID(pkg);
- /* delivers the marker status flag to the microcode. */
- icmnd->info.tmf.w1.bs.marker_sts = 1;
- SPFC_GET_RESET_TYPE(UNF_GET_TASK_MGMT_FLAGS(pkg->fcp_cmnd->control),
- icmnd->info.tmf.w1.bs.reset_type);
-
- icmnd->info.tmf.w2.bs.reset_sid = UNF_GET_SID(pkg);
-
- memcpy(icmnd->info.tmf.reset_lun, pkg->fcp_cmnd->lun,
- sizeof(icmnd->info.tmf.reset_lun));
- }
-}
-
-/* ****************************************************************************
- * Function Name : spfc_build_icmnd_wqe_ctrls
- * Function Description : The CtrlS domain of the ICmnd is constructed. The
- * analysis result is the same as that of the TWTR.
- * Input Parameters: struct unf_frame_pkg *pkg struct spfc_sqe *sqe
- * Output Parameters: N/A
- * Return Type: void
- ****************************************************************************
- */
-void spfc_build_icmnd_wqe_ctrls(struct unf_frame_pkg *pkg, struct spfc_sqe *sqe)
-{
- spfc_build_trd_twr_wqe_ctrls(pkg, sqe);
-}
-
-/* ****************************************************************************
- * Function Name : spfc_build_srq_wqe_ctrls
- * Function Description : Construct the CtrlS domain of the ICmnd. The analysis
- * result is the same as that of the TWTR.
- * Input Parameters : struct spfc_rqe *rqe u16 owner u16 pmsn
- * Output Parameters : N/A
- * Return Type : void
- ****************************************************************************
- */
-void spfc_build_srq_wqe_ctrls(struct spfc_rqe *rqe, u16 owner, u16 pmsn)
-{
- struct spfc_wqe_ctrl_ch *wqe_ctrls = NULL;
-
- wqe_ctrls = &rqe->ctrl_sl.ch;
- wqe_ctrls->wd0.owner = owner;
- wqe_ctrls->wd0.ctrl_sl = sizeof(struct spfc_wqe_ctrl) >> UNF_SHIFT_3;
- wqe_ctrls->wd0.csl = 1;
- wqe_ctrls->wd0.dif_sl = 0;
- wqe_ctrls->wd0.cr = 1;
- wqe_ctrls->wd0.df = 0;
- wqe_ctrls->wd0.va = 0;
- wqe_ctrls->wd0.tsl = 0;
- wqe_ctrls->wd0.cf = 0;
- wqe_ctrls->wd0.wf = 0;
- wqe_ctrls->wd0.drv_sl = sizeof(struct spfc_rqe_drv) >> UNF_SHIFT_3;
- wqe_ctrls->wd0.bdsl = sizeof(struct spfc_constant_sge) >> UNF_SHIFT_3;
-
- rqe->ctrl_sl.wd0.wqe_msn = pmsn;
- rqe->ctrl_sl.wd0.dump_wqe_msn = rqe->ctrl_sl.wd0.wqe_msn;
-}
diff --git a/drivers/scsi/spfc/hw/spfc_wqe.h b/drivers/scsi/spfc/hw/spfc_wqe.h
deleted file mode 100644
index ec6d7bbdf8f9..000000000000
--- a/drivers/scsi/spfc/hw/spfc_wqe.h
+++ /dev/null
@@ -1,239 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2021 Ramaxel Memory Technology, Ltd */
-
-#ifndef SPFC_WQE_H
-#define SPFC_WQE_H
-
-#include "unf_type.h"
-#include "unf_common.h"
-#include "spfc_hw_wqe.h"
-#include "spfc_parent_context.h"
-
-/* TGT WQE type */
-/* DRV->uCode via Parent SQ */
-#define SPFC_SQE_FCP_TRD SPFC_TASK_T_TREAD
-#define SPFC_SQE_FCP_TWR SPFC_TASK_T_TWRITE
-#define SPFC_SQE_FCP_TRSP SPFC_TASK_T_TRESP
-#define SPFC_SQE_FCP_TACK SPFC_TASK_T_TACK
-#define SPFC_SQE_ELS_CMND SPFC_TASK_T_ELS
-#define SPFC_SQE_ELS_RSP SPFC_TASK_T_ELS_RSP
-#define SPFC_SQE_GS_CMND SPFC_TASK_T_GS
-#define SPFC_SQE_BLS_CMND SPFC_TASK_T_ABTS
-#define SPFC_SQE_FCP_IREAD SPFC_TASK_T_IREAD
-#define SPFC_SQE_FCP_IWRITE SPFC_TASK_T_IWRITE
-#define SPFC_SQE_FCP_ITMF SPFC_TASK_T_ITMF
-#define SPFC_SQE_SESS_RST SPFC_TASK_T_SESS_RESET
-#define SPFC_SQE_FCP_TMF_TRSP SPFC_TASK_T_TMF_RESP
-#define SPFC_SQE_NOP SPFC_TASK_T_NOP
-/* DRV->uCode Via CMDQ */
-#define SPFC_CMDQE_ABTS_RSP SPFC_TASK_T_ABTS_RSP
-#define SPFC_CMDQE_ABORT SPFC_TASK_T_ABORT
-#define SPFC_CMDQE_SESS_DIS SPFC_TASK_T_SESS_DIS
-#define SPFC_CMDQE_SESS_DEL SPFC_TASK_T_SESS_DEL
-
-/* uCode->Drv Via CMD SCQ */
-#define SPFC_SCQE_FCP_TCMND SPFC_TASK_T_RCV_TCMND
-#define SPFC_SCQE_ELS_CMND SPFC_TASK_T_RCV_ELS_CMD
-#define SPFC_SCQE_ABTS_CMD SPFC_TASK_T_RCV_ABTS_CMD
-#define SPFC_SCQE_FCP_IRSP SPFC_TASK_T_IRESP
-#define SPFC_SCQE_FCP_ITMF_RSP SPFC_TASK_T_ITMF_RESP
-
-/* uCode->Drv Via STS SCQ */
-#define SPFC_SCQE_FCP_TSTS SPFC_TASK_T_TSTS
-#define SPFC_SCQE_GS_RSP SPFC_TASK_T_RCV_GS_RSP
-#define SPFC_SCQE_ELS_RSP SPFC_TASK_T_RCV_ELS_RSP
-#define SPFC_SCQE_ABTS_RSP SPFC_TASK_T_RCV_ABTS_RSP
-#define SPFC_SCQE_ELS_RSP_STS SPFC_TASK_T_ELS_RSP_STS
-#define SPFC_SCQE_ABORT_STS SPFC_TASK_T_ABORT_STS
-#define SPFC_SCQE_SESS_EN_STS SPFC_TASK_T_SESS_EN_STS
-#define SPFC_SCQE_SESS_DIS_STS SPFC_TASK_T_SESS_DIS_STS
-#define SPFC_SCQE_SESS_DEL_STS SPFC_TASK_T_SESS_DEL_STS
-#define SPFC_SCQE_SESS_RST_STS SPFC_TASK_T_SESS_RESET_STS
-#define SPFC_SCQE_ITMF_MARKER_STS SPFC_TASK_T_ITMF_MARKER_STS
-#define SPFC_SCQE_ABTS_MARKER_STS SPFC_TASK_T_ABTS_MARKER_STS
-#define SPFC_SCQE_FLUSH_SQ_STS SPFC_TASK_T_FLUSH_SQ_STS
-#define SPFC_SCQE_BUF_CLEAR_STS SPFC_TASK_T_BUFFER_CLEAR_STS
-#define SPFC_SCQE_CLEAR_SRQ_STS SPFC_TASK_T_CLEAR_SRQ_STS
-#define SPFC_SCQE_DIFX_RESULT_STS SPFC_TASK_T_DIFX_RESULT_STS
-#define SPFC_SCQE_XID_FREE_ABORT_STS SPFC_TASK_T_EXCH_ID_FREE_ABORT_STS
-#define SPFC_SCQE_EXCHID_TIMEOUT_STS SPFC_TASK_T_EXCHID_TIMEOUT_STS
-#define SPFC_SQE_NOP_STS SPFC_TASK_T_NOP_STS
-
-#define SPFC_LOW_32_BITS(__addr) ((u32)((u64)(__addr) & 0xffffffff))
-#define SPFC_HIGH_32_BITS(__addr) ((u32)(((u64)(__addr) >> 32) & 0xffffffff))
-
-/* Error Code from SCQ */
-#define SPFC_COMPLETION_STATUS_SUCCESS FC_CQE_COMPLETED
-#define SPFC_COMPLETION_STATUS_ABORTED_SETUP_FAIL FC_IMMI_CMDPKT_SETUP_FAIL
-
-#define SPFC_COMPLETION_STATUS_TIMEOUT FC_ERROR_CODE_E_D_TIMER_EXPIRE
-#define SPFC_COMPLETION_STATUS_DIF_ERROR FC_ERROR_CODE_DATA_DIFX_FAILED
-#define SPFC_COMPLETION_STATUS_DATA_OOO FC_ERROR_CODE_DATA_OOO_RO
-#define SPFC_COMPLETION_STATUS_DATA_OVERFLOW \
- FC_ERROR_CODE_DATA_EXCEEDS_DATA2TRNS
-
-#define SPFC_SCQE_INVALID_CONN_ID (0xffff)
-#define SPFC_GET_SCQE_TYPE(scqe) ((scqe)->common.ch.wd0.task_type)
-#define SPFC_GET_SCQE_STATUS(scqe) ((scqe)->common.ch.wd0.err_code)
-#define SPFC_GET_SCQE_REMAIN_CNT(scqe) ((scqe)->common.ch.wd0.cqe_remain_cnt)
-#define SPFC_GET_SCQE_CONN_ID(scqe) ((scqe)->common.conn_id)
-#define SPFC_GET_SCQE_SQN(scqe) ((scqe)->common.ch.wd0.sqn)
-#define SPFC_GET_WQE_TYPE(wqe) ((wqe)->ts_sl.task_type)
-
-#define SPFC_WQE_IS_IO(wqe) \
- ((SPFC_GET_WQE_TYPE(wqe) != SPFC_SQE_SESS_RST) && \
- (SPFC_GET_WQE_TYPE(wqe) != SPFC_SQE_NOP))
-#define SPFC_SCQE_HAS_ERRCODE(scqe) \
- (SPFC_GET_SCQE_STATUS(scqe) != SPFC_COMPLETION_STATUS_SUCCESS)
-#define SPFC_SCQE_ERR_TO_CM(scqe) \
- (SPFC_GET_SCQE_STATUS(scqe) != FC_ELS_GS_RSP_EXCH_CHECK_FAIL)
-#define SPFC_SCQE_EXCH_ABORTED(scqe) \
- ((SPFC_GET_SCQE_STATUS(scqe) >= \
- FC_CQE_BUFFER_CLEAR_IO_COMPLETED) && \
- (SPFC_GET_SCQE_STATUS(scqe) <= FC_CQE_WQE_FLUSH_IO_COMPLETED))
-#define SPFC_SCQE_CONN_ID_VALID(scqe) \
- (SPFC_GET_SCQE_CONN_ID(scqe) != SPFC_SCQE_INVALID_CONN_ID)
-
-/*
- * checksum error bitmap define
- */
-#define NIC_RX_CSUM_HW_BYPASS_ERR (1)
-#define NIC_RX_CSUM_IP_CSUM_ERR (1 << 1)
-#define NIC_RX_CSUM_TCP_CSUM_ERR (1 << 2)
-#define NIC_RX_CSUM_UDP_CSUM_ERR (1 << 3)
-#define NIC_RX_CSUM_SCTP_CRC_ERR (1 << 4)
-
-#define SPFC_WQE_SECTION_CHUNK_SIZE 8 /* 8 bytes' chunk */
-#define SPFC_T_RESP_WQE_CTR_TSL_SIZE 15 /* 8 bytes' chunk */
-#define SPFC_T_RD_WR_WQE_CTR_TSL_SIZE 9 /* 8 bytes' chunk */
-#define SPFC_T_RD_WR_WQE_CTR_BDSL_SIZE 4 /* 8 bytes' chunk */
-#define SPFC_T_RD_WR_WQE_CTR_CTRLSL_SIZE 1 /* 8 bytes' chunk */
-
-#define SPFC_WQE_MAX_ESGE_NUM 3 /* 3 ESGE In Extended wqe */
-#define SPFC_WQE_SGE_ENTRY_NUM 2 /* BD SGE and DIF SGE count */
-#define SPFC_WQE_SGE_DIF_ENTRY_NUM 1 /* DIF SGE count */
-#define SPFC_WQE_SGE_LAST_FLAG 1
-#define SPFC_WQE_SGE_NOT_LAST_FLAG 0
-#define SPFC_WQE_SGE_EXTEND_FLAG 1
-#define SPFC_WQE_SGE_NOT_EXTEND_FLAG 0
-
-#define SPFC_FCP_TMF_PORT_RESET (0)
-#define SPFC_FCP_TMF_LUN_RESET (1)
-#define SPFC_FCP_TMF_TGT_RESET (2)
-#define SPFC_FCP_TMF_RSVD (3)
-
-#define SPFC_ADJUST_DATA(old_va, new_va) \
- { \
- (old_va) = new_va; \
- }
-
-#define SPFC_GET_RESET_TYPE(tmf_flag, reset_flag) \
- { \
- switch (tmf_flag) { \
- case UNF_FCP_TM_ABORT_TASK_SET: \
- case UNF_FCP_TM_LOGICAL_UNIT_RESET: \
- (reset_flag) = SPFC_FCP_TMF_LUN_RESET; \
- break; \
- case UNF_FCP_TM_TARGET_RESET: \
- (reset_flag) = SPFC_FCP_TMF_TGT_RESET; \
- break; \
- case UNF_FCP_TM_CLEAR_TASK_SET: \
- (reset_flag) = SPFC_FCP_TMF_PORT_RESET; \
- break; \
- default: \
- (reset_flag) = SPFC_FCP_TMF_RSVD; \
- } \
- }
-
-/* Link WQE structure */
-struct spfc_linkwqe {
- union {
- struct {
- u32 rsv1 : 14;
- u32 wf : 1;
- u32 rsv2 : 14;
- u32 ctrlsl : 2;
- u32 o : 1;
- } wd0;
-
- u32 val_wd0;
- };
-
- union {
- struct {
- u32 msn : 16;
- u32 dump_msn : 15;
- u32 lp : 1; /* lp means whether O bit is overturn */
- } wd1;
-
- u32 val_wd1;
- };
-
- u32 next_page_addr_hi;
- u32 next_page_addr_lo;
-};
-
-/* Session Enable */
-struct spfc_host_keys {
- struct {
- u32 smac1 : 8;
- u32 smac0 : 8;
- u32 rsv : 16;
- } wd0;
-
- u8 smac[ARRAY_INDEX_4];
-
- u8 dmac[ARRAY_INDEX_4];
- struct {
- u8 sid_1;
- u8 sid_2;
- u8 dmac_rvd[ARRAY_INDEX_2];
- } wd3;
- struct {
- u8 did_0;
- u8 did_1;
- u8 did_2;
- u8 sid_0;
- } wd4;
-
- struct {
- u32 port_id : 3;
- u32 host_id : 2;
- u32 rsvd : 27;
- } wd5;
- u32 rsvd;
-};
-
-/* Parent SQ WQE Related function */
-void spfc_build_service_wqe_ctrl_section(struct spfc_wqe_ctrl *wqe_cs, u32 ts_size,
- u32 bdsl);
-void spfc_build_service_wqe_ts_common(struct spfc_sqe_ts *sqe_ts, u32 rport_index,
- u16 local_xid, u16 remote_xid,
- u16 data_len);
-void spfc_build_els_gs_wqe_sge(struct spfc_sqe *sqe, void *buf_addr, u64 phy_addr,
- u32 buf_len, u32 xid, void *handle);
-void spfc_build_els_wqe_ts_req(struct spfc_sqe *sqe, void *info, u32 scqn,
- void *frame_pld, struct unf_frame_pkg *pkg);
-void spfc_build_els_wqe_ts_rsp(struct spfc_sqe *sqe, void *info,
- struct unf_frame_pkg *pkg, void *frame_pld,
- u16 type, u16 cmnd);
-void spfc_build_bls_wqe_ts_req(struct spfc_sqe *sqe, struct unf_frame_pkg *pkg,
- void *handle);
-void spfc_build_trd_twr_wqe_ctrls(struct unf_frame_pkg *pkg, struct spfc_sqe *sqe);
-void spfc_build_wqe_owner_pmsn(struct spfc_sqe *io_sqe, u16 owner, u16 pmsn);
-void spfc_convert_parent_wqe_to_big_endian(struct spfc_sqe *sqe);
-void spfc_build_icmnd_wqe_ctrls(struct unf_frame_pkg *pkg, struct spfc_sqe *sqe);
-void spfc_build_icmnd_wqe_ts(void *handle, struct unf_frame_pkg *pkg,
- struct spfc_sqe_ts *sqe_ts, union spfc_sqe_ts_ex *sqe_tsex);
-void spfc_build_icmnd_wqe_ts_header(struct unf_frame_pkg *pkg, struct spfc_sqe *sqe,
- u8 task_type, u16 exi_base, u8 port_idx);
-
-void spfc_build_cmdqe_common(union spfc_cmdqe *cmd_qe, enum spfc_task_type task_type,
- u16 rxid);
-void spfc_build_srq_wqe_ctrls(struct spfc_rqe *rqe, u16 owner, u16 pmsn);
-void spfc_build_common_wqe_ctrls(struct spfc_wqe_ctrl *ctrl_sl, u8 task_len);
-void spfc_build_tmf_rsp_wqe_ts_header(struct unf_frame_pkg *pkg,
- struct spfc_sqe_tmf_rsp *sqe, u16 exi_base,
- u32 scqn);
-
-#endif
diff --git a/drivers/scsi/spfc/sphw_api_cmd.c b/drivers/scsi/spfc/sphw_api_cmd.c
deleted file mode 120000
index 27c7c0770fa3..000000000000
--- a/drivers/scsi/spfc/sphw_api_cmd.c
+++ /dev/null
@@ -1 +0,0 @@
-../../net/ethernet/ramaxel/spnic/hw/sphw_api_cmd.c
\ No newline at end of file
diff --git a/drivers/scsi/spfc/sphw_cmdq.c b/drivers/scsi/spfc/sphw_cmdq.c
deleted file mode 120000
index 5ac779ba274b..000000000000
--- a/drivers/scsi/spfc/sphw_cmdq.c
+++ /dev/null
@@ -1 +0,0 @@
-../../net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c
\ No newline at end of file
diff --git a/drivers/scsi/spfc/sphw_common.c b/drivers/scsi/spfc/sphw_common.c
deleted file mode 120000
index a1a30a4840e1..000000000000
--- a/drivers/scsi/spfc/sphw_common.c
+++ /dev/null
@@ -1 +0,0 @@
-../../net/ethernet/ramaxel/spnic/hw/sphw_common.c
\ No newline at end of file
diff --git a/drivers/scsi/spfc/sphw_eqs.c b/drivers/scsi/spfc/sphw_eqs.c
deleted file mode 120000
index 74430dcb9dc5..000000000000
--- a/drivers/scsi/spfc/sphw_eqs.c
+++ /dev/null
@@ -1 +0,0 @@
-../../net/ethernet/ramaxel/spnic/hw/sphw_eqs.c
\ No newline at end of file
diff --git a/drivers/scsi/spfc/sphw_hw_cfg.c b/drivers/scsi/spfc/sphw_hw_cfg.c
deleted file mode 120000
index 4f43d68624c1..000000000000
--- a/drivers/scsi/spfc/sphw_hw_cfg.c
+++ /dev/null
@@ -1 +0,0 @@
-../../net/ethernet/ramaxel/spnic/hw/sphw_hw_cfg.c
\ No newline at end of file
diff --git a/drivers/scsi/spfc/sphw_hw_comm.c b/drivers/scsi/spfc/sphw_hw_comm.c
deleted file mode 120000
index c943b3b2933a..000000000000
--- a/drivers/scsi/spfc/sphw_hw_comm.c
+++ /dev/null
@@ -1 +0,0 @@
-../../net/ethernet/ramaxel/spnic/hw/sphw_hw_comm.c
\ No newline at end of file
diff --git a/drivers/scsi/spfc/sphw_hwdev.c b/drivers/scsi/spfc/sphw_hwdev.c
deleted file mode 120000
index b7279f17eaa2..000000000000
--- a/drivers/scsi/spfc/sphw_hwdev.c
+++ /dev/null
@@ -1 +0,0 @@
-../../net/ethernet/ramaxel/spnic/hw/sphw_hwdev.c
\ No newline at end of file
diff --git a/drivers/scsi/spfc/sphw_hwif.c b/drivers/scsi/spfc/sphw_hwif.c
deleted file mode 120000
index d40ef71f9033..000000000000
--- a/drivers/scsi/spfc/sphw_hwif.c
+++ /dev/null
@@ -1 +0,0 @@
-../../net/ethernet/ramaxel/spnic/hw/sphw_hwif.c
\ No newline at end of file
diff --git a/drivers/scsi/spfc/sphw_mbox.c b/drivers/scsi/spfc/sphw_mbox.c
deleted file mode 120000
index 1b00fe7289cc..000000000000
--- a/drivers/scsi/spfc/sphw_mbox.c
+++ /dev/null
@@ -1 +0,0 @@
-../../net/ethernet/ramaxel/spnic/hw/sphw_mbox.c
\ No newline at end of file
diff --git a/drivers/scsi/spfc/sphw_mgmt.c b/drivers/scsi/spfc/sphw_mgmt.c
deleted file mode 120000
index fd18a73e9d3a..000000000000
--- a/drivers/scsi/spfc/sphw_mgmt.c
+++ /dev/null
@@ -1 +0,0 @@
-../../net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c
\ No newline at end of file
diff --git a/drivers/scsi/spfc/sphw_prof_adap.c b/drivers/scsi/spfc/sphw_prof_adap.c
deleted file mode 120000
index fbc7db05dd27..000000000000
--- a/drivers/scsi/spfc/sphw_prof_adap.c
+++ /dev/null
@@ -1 +0,0 @@
-../../net/ethernet/ramaxel/spnic/hw/sphw_prof_adap.c
\ No newline at end of file
diff --git a/drivers/scsi/spfc/sphw_wq.c b/drivers/scsi/spfc/sphw_wq.c
deleted file mode 120000
index cdfcb3a610c0..000000000000
--- a/drivers/scsi/spfc/sphw_wq.c
+++ /dev/null
@@ -1 +0,0 @@
-../../net/ethernet/ramaxel/spnic/hw/sphw_wq.c
\ No newline at end of file
--
2.20.1

[PATCH OLK-5.10 0/2] apply to delete some files of spfc and spnic drivers in architecture
by Yanling Song 26 Mar '22
There are some driver issues that cannot be fixed at present. The drivers do not meet the LTS quality requirements of openEuler, so we request that they be removed.
Yanling Song (1):
net/spnic: Remove spnic driver.
Yun Xu (1):
SCSI: spfc: remove SPFC driver
arch/arm64/configs/openeuler_defconfig | 3 -
arch/x86/configs/openeuler_defconfig | 3 -
drivers/net/ethernet/Kconfig | 1 -
drivers/net/ethernet/Makefile | 1 -
drivers/net/ethernet/ramaxel/Kconfig | 20 -
drivers/net/ethernet/ramaxel/Makefile | 6 -
drivers/net/ethernet/ramaxel/spnic/Kconfig | 14 -
drivers/net/ethernet/ramaxel/spnic/Makefile | 39 -
.../ethernet/ramaxel/spnic/hw/sphw_api_cmd.c | 1165 ----
.../ethernet/ramaxel/spnic/hw/sphw_api_cmd.h | 277 -
.../ethernet/ramaxel/spnic/hw/sphw_cfg_cmd.h | 127 -
.../net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c | 1573 ------
.../net/ethernet/ramaxel/spnic/hw/sphw_cmdq.h | 195 -
.../ethernet/ramaxel/spnic/hw/sphw_comm_cmd.h | 60 -
.../ramaxel/spnic/hw/sphw_comm_msg_intf.h | 273 -
.../ethernet/ramaxel/spnic/hw/sphw_common.c | 88 -
.../ethernet/ramaxel/spnic/hw/sphw_common.h | 106 -
.../net/ethernet/ramaxel/spnic/hw/sphw_crm.h | 982 ----
.../net/ethernet/ramaxel/spnic/hw/sphw_csr.h | 158 -
.../net/ethernet/ramaxel/spnic/hw/sphw_eqs.c | 1272 -----
.../net/ethernet/ramaxel/spnic/hw/sphw_eqs.h | 157 -
.../net/ethernet/ramaxel/spnic/hw/sphw_hw.h | 643 ---
.../ethernet/ramaxel/spnic/hw/sphw_hw_cfg.c | 1341 -----
.../ethernet/ramaxel/spnic/hw/sphw_hw_cfg.h | 329 --
.../ethernet/ramaxel/spnic/hw/sphw_hw_comm.c | 1280 -----
.../ethernet/ramaxel/spnic/hw/sphw_hw_comm.h | 45 -
.../ethernet/ramaxel/spnic/hw/sphw_hwdev.c | 1324 -----
.../ethernet/ramaxel/spnic/hw/sphw_hwdev.h | 93 -
.../net/ethernet/ramaxel/spnic/hw/sphw_hwif.c | 886 ---
.../net/ethernet/ramaxel/spnic/hw/sphw_hwif.h | 102 -
.../net/ethernet/ramaxel/spnic/hw/sphw_mbox.c | 1792 ------
.../net/ethernet/ramaxel/spnic/hw/sphw_mbox.h | 271 -
.../net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c | 895 ---
.../net/ethernet/ramaxel/spnic/hw/sphw_mgmt.h | 106 -
.../ramaxel/spnic/hw/sphw_mgmt_msg_base.h | 19 -
.../net/ethernet/ramaxel/spnic/hw/sphw_mt.h | 533 --
.../ramaxel/spnic/hw/sphw_prof_adap.c | 94 -
.../ramaxel/spnic/hw/sphw_prof_adap.h | 49 -
.../ethernet/ramaxel/spnic/hw/sphw_profile.h | 36 -
.../net/ethernet/ramaxel/spnic/hw/sphw_wq.c | 152 -
.../net/ethernet/ramaxel/spnic/hw/sphw_wq.h | 119 -
.../net/ethernet/ramaxel/spnic/spnic_dbg.c | 752 ---
.../net/ethernet/ramaxel/spnic/spnic_dcb.c | 965 ----
.../net/ethernet/ramaxel/spnic/spnic_dcb.h | 56 -
.../ethernet/ramaxel/spnic/spnic_dev_mgmt.c | 811 ---
.../ethernet/ramaxel/spnic/spnic_dev_mgmt.h | 78 -
.../ethernet/ramaxel/spnic/spnic_ethtool.c | 994 ----
.../ramaxel/spnic/spnic_ethtool_stats.c | 1035 ----
.../net/ethernet/ramaxel/spnic/spnic_filter.c | 411 --
.../net/ethernet/ramaxel/spnic/spnic_irq.c | 178 -
.../net/ethernet/ramaxel/spnic/spnic_lld.c | 937 ----
.../net/ethernet/ramaxel/spnic/spnic_lld.h | 75 -
.../ethernet/ramaxel/spnic/spnic_mag_cfg.c | 778 ---
.../ethernet/ramaxel/spnic/spnic_mag_cmd.h | 643 ---
.../net/ethernet/ramaxel/spnic/spnic_main.c | 924 ----
.../ramaxel/spnic/spnic_mgmt_interface.h | 605 --
.../ethernet/ramaxel/spnic/spnic_netdev_ops.c | 1526 ------
.../net/ethernet/ramaxel/spnic/spnic_nic.h | 148 -
.../ethernet/ramaxel/spnic/spnic_nic_cfg.c | 1334 -----
.../ethernet/ramaxel/spnic/spnic_nic_cfg.h | 709 ---
.../ethernet/ramaxel/spnic/spnic_nic_cfg_vf.c | 658 ---
.../ethernet/ramaxel/spnic/spnic_nic_cmd.h | 105 -
.../ethernet/ramaxel/spnic/spnic_nic_dbg.c | 151 -
.../ethernet/ramaxel/spnic/spnic_nic_dbg.h | 16 -
.../ethernet/ramaxel/spnic/spnic_nic_dev.h | 354 --
.../ethernet/ramaxel/spnic/spnic_nic_event.c | 506 --
.../net/ethernet/ramaxel/spnic/spnic_nic_io.c | 1123 ----
.../net/ethernet/ramaxel/spnic/spnic_nic_io.h | 305 -
.../net/ethernet/ramaxel/spnic/spnic_nic_qp.h | 416 --
.../net/ethernet/ramaxel/spnic/spnic_ntuple.c | 841 ---
.../ethernet/ramaxel/spnic/spnic_pci_id_tbl.h | 12 -
.../net/ethernet/ramaxel/spnic/spnic_rss.c | 741 ---
.../net/ethernet/ramaxel/spnic/spnic_rss.h | 50 -
.../ethernet/ramaxel/spnic/spnic_rss_cfg.c | 333 --
drivers/net/ethernet/ramaxel/spnic/spnic_rx.c | 1238 -----
drivers/net/ethernet/ramaxel/spnic/spnic_rx.h | 118 -
.../net/ethernet/ramaxel/spnic/spnic_sriov.c | 200 -
.../net/ethernet/ramaxel/spnic/spnic_sriov.h | 24 -
drivers/net/ethernet/ramaxel/spnic/spnic_tx.c | 877 ---
drivers/net/ethernet/ramaxel/spnic/spnic_tx.h | 129 -
drivers/scsi/Kconfig | 1 -
drivers/scsi/Makefile | 1 -
drivers/scsi/spfc/Kconfig | 16 -
drivers/scsi/spfc/Makefile | 47 -
drivers/scsi/spfc/common/unf_common.h | 1753 ------
drivers/scsi/spfc/common/unf_disc.c | 1276 -----
drivers/scsi/spfc/common/unf_disc.h | 51 -
drivers/scsi/spfc/common/unf_event.c | 517 --
drivers/scsi/spfc/common/unf_event.h | 83 -
drivers/scsi/spfc/common/unf_exchg.c | 2317 --------
drivers/scsi/spfc/common/unf_exchg.h | 436 --
drivers/scsi/spfc/common/unf_exchg_abort.c | 825 ---
drivers/scsi/spfc/common/unf_exchg_abort.h | 23 -
drivers/scsi/spfc/common/unf_fcstruct.h | 459 --
drivers/scsi/spfc/common/unf_gs.c | 2521 ---------
drivers/scsi/spfc/common/unf_gs.h | 58 -
drivers/scsi/spfc/common/unf_init.c | 353 --
drivers/scsi/spfc/common/unf_io.c | 1219 ----
drivers/scsi/spfc/common/unf_io.h | 96 -
drivers/scsi/spfc/common/unf_io_abnormal.c | 986 ----
drivers/scsi/spfc/common/unf_io_abnormal.h | 19 -
drivers/scsi/spfc/common/unf_log.h | 178 -
drivers/scsi/spfc/common/unf_lport.c | 1008 ----
drivers/scsi/spfc/common/unf_lport.h | 519 --
drivers/scsi/spfc/common/unf_ls.c | 4883 -----------------
drivers/scsi/spfc/common/unf_ls.h | 61 -
drivers/scsi/spfc/common/unf_npiv.c | 1005 ----
drivers/scsi/spfc/common/unf_npiv.h | 47 -
drivers/scsi/spfc/common/unf_npiv_portman.c | 360 --
drivers/scsi/spfc/common/unf_npiv_portman.h | 17 -
drivers/scsi/spfc/common/unf_portman.c | 2431 --------
drivers/scsi/spfc/common/unf_portman.h | 96 -
drivers/scsi/spfc/common/unf_rport.c | 2286 --------
drivers/scsi/spfc/common/unf_rport.h | 301 -
drivers/scsi/spfc/common/unf_scsi.c | 1462 -----
drivers/scsi/spfc/common/unf_scsi_common.h | 570 --
drivers/scsi/spfc/common/unf_service.c | 1430 -----
drivers/scsi/spfc/common/unf_service.h | 66 -
drivers/scsi/spfc/common/unf_type.h | 216 -
drivers/scsi/spfc/hw/spfc_chipitf.c | 1105 ----
drivers/scsi/spfc/hw/spfc_chipitf.h | 797 ---
drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c | 1611 ------
drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h | 215 -
drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c | 885 ---
drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h | 65 -
drivers/scsi/spfc/hw/spfc_cqm_main.c | 987 ----
drivers/scsi/spfc/hw/spfc_cqm_main.h | 411 --
drivers/scsi/spfc/hw/spfc_cqm_object.c | 937 ----
drivers/scsi/spfc/hw/spfc_cqm_object.h | 279 -
drivers/scsi/spfc/hw/spfc_hba.c | 1751 ------
drivers/scsi/spfc/hw/spfc_hba.h | 341 --
drivers/scsi/spfc/hw/spfc_hw_wqe.h | 1645 ------
drivers/scsi/spfc/hw/spfc_io.c | 1193 ----
drivers/scsi/spfc/hw/spfc_io.h | 138 -
drivers/scsi/spfc/hw/spfc_lld.c | 997 ----
drivers/scsi/spfc/hw/spfc_lld.h | 76 -
drivers/scsi/spfc/hw/spfc_module.h | 297 -
drivers/scsi/spfc/hw/spfc_parent_context.h | 269 -
drivers/scsi/spfc/hw/spfc_queue.c | 4852 ----------------
drivers/scsi/spfc/hw/spfc_queue.h | 711 ---
drivers/scsi/spfc/hw/spfc_service.c | 2170 --------
drivers/scsi/spfc/hw/spfc_service.h | 282 -
drivers/scsi/spfc/hw/spfc_utils.c | 102 -
drivers/scsi/spfc/hw/spfc_utils.h | 202 -
drivers/scsi/spfc/hw/spfc_wqe.c | 646 ---
drivers/scsi/spfc/hw/spfc_wqe.h | 239 -
drivers/scsi/spfc/sphw_api_cmd.c | 1 -
drivers/scsi/spfc/sphw_cmdq.c | 1 -
drivers/scsi/spfc/sphw_common.c | 1 -
drivers/scsi/spfc/sphw_eqs.c | 1 -
drivers/scsi/spfc/sphw_hw_cfg.c | 1 -
drivers/scsi/spfc/sphw_hw_comm.c | 1 -
drivers/scsi/spfc/sphw_hwdev.c | 1 -
drivers/scsi/spfc/sphw_hwif.c | 1 -
drivers/scsi/spfc/sphw_mbox.c | 1 -
drivers/scsi/spfc/sphw_mgmt.c | 1 -
drivers/scsi/spfc/sphw_prof_adap.c | 1 -
drivers/scsi/spfc/sphw_wq.c | 1 -
158 files changed, 90993 deletions(-)
delete mode 100644 drivers/net/ethernet/ramaxel/Kconfig
delete mode 100644 drivers/net/ethernet/ramaxel/Makefile
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/Kconfig
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/Makefile
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_api_cmd.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_api_cmd.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_cfg_cmd.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_cmdq.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_comm_cmd.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_comm_msg_intf.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_common.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_common.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_crm.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_csr.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_eqs.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_eqs.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_cfg.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_cfg.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_comm.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hw_comm.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwdev.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwdev.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwif.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_hwif.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt_msg_base.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mt.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_prof_adap.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_prof_adap.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_profile.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_wq.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/hw/sphw_wq.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dbg.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dcb.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dcb.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dev_mgmt.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_dev_mgmt.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_ethtool.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_ethtool_stats.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_filter.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_irq.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_lld.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_lld.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_mag_cfg.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_mag_cmd.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_main.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_mgmt_interface.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_netdev_ops.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg_vf.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_cmd.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_dbg.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_dbg.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_dev.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_event.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_io.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_io.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_nic_qp.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_ntuple.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_pci_id_tbl.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rss.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rss.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rss_cfg.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rx.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_rx.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_sriov.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_sriov.h
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_tx.c
delete mode 100644 drivers/net/ethernet/ramaxel/spnic/spnic_tx.h
delete mode 100644 drivers/scsi/spfc/Kconfig
delete mode 100644 drivers/scsi/spfc/Makefile
delete mode 100644 drivers/scsi/spfc/common/unf_common.h
delete mode 100644 drivers/scsi/spfc/common/unf_disc.c
delete mode 100644 drivers/scsi/spfc/common/unf_disc.h
delete mode 100644 drivers/scsi/spfc/common/unf_event.c
delete mode 100644 drivers/scsi/spfc/common/unf_event.h
delete mode 100644 drivers/scsi/spfc/common/unf_exchg.c
delete mode 100644 drivers/scsi/spfc/common/unf_exchg.h
delete mode 100644 drivers/scsi/spfc/common/unf_exchg_abort.c
delete mode 100644 drivers/scsi/spfc/common/unf_exchg_abort.h
delete mode 100644 drivers/scsi/spfc/common/unf_fcstruct.h
delete mode 100644 drivers/scsi/spfc/common/unf_gs.c
delete mode 100644 drivers/scsi/spfc/common/unf_gs.h
delete mode 100644 drivers/scsi/spfc/common/unf_init.c
delete mode 100644 drivers/scsi/spfc/common/unf_io.c
delete mode 100644 drivers/scsi/spfc/common/unf_io.h
delete mode 100644 drivers/scsi/spfc/common/unf_io_abnormal.c
delete mode 100644 drivers/scsi/spfc/common/unf_io_abnormal.h
delete mode 100644 drivers/scsi/spfc/common/unf_log.h
delete mode 100644 drivers/scsi/spfc/common/unf_lport.c
delete mode 100644 drivers/scsi/spfc/common/unf_lport.h
delete mode 100644 drivers/scsi/spfc/common/unf_ls.c
delete mode 100644 drivers/scsi/spfc/common/unf_ls.h
delete mode 100644 drivers/scsi/spfc/common/unf_npiv.c
delete mode 100644 drivers/scsi/spfc/common/unf_npiv.h
delete mode 100644 drivers/scsi/spfc/common/unf_npiv_portman.c
delete mode 100644 drivers/scsi/spfc/common/unf_npiv_portman.h
delete mode 100644 drivers/scsi/spfc/common/unf_portman.c
delete mode 100644 drivers/scsi/spfc/common/unf_portman.h
delete mode 100644 drivers/scsi/spfc/common/unf_rport.c
delete mode 100644 drivers/scsi/spfc/common/unf_rport.h
delete mode 100644 drivers/scsi/spfc/common/unf_scsi.c
delete mode 100644 drivers/scsi/spfc/common/unf_scsi_common.h
delete mode 100644 drivers/scsi/spfc/common/unf_service.c
delete mode 100644 drivers/scsi/spfc/common/unf_service.h
delete mode 100644 drivers/scsi/spfc/common/unf_type.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_chipitf.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_chipitf.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bat_cla.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bat_cla.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_cqm_bitmap_table.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_cqm_main.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_cqm_main.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_cqm_object.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_cqm_object.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_hba.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_hba.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_hw_wqe.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_io.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_io.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_lld.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_lld.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_module.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_parent_context.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_queue.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_queue.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_service.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_service.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_utils.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_utils.h
delete mode 100644 drivers/scsi/spfc/hw/spfc_wqe.c
delete mode 100644 drivers/scsi/spfc/hw/spfc_wqe.h
delete mode 120000 drivers/scsi/spfc/sphw_api_cmd.c
delete mode 120000 drivers/scsi/spfc/sphw_cmdq.c
delete mode 120000 drivers/scsi/spfc/sphw_common.c
delete mode 120000 drivers/scsi/spfc/sphw_eqs.c
delete mode 120000 drivers/scsi/spfc/sphw_hw_cfg.c
delete mode 120000 drivers/scsi/spfc/sphw_hw_comm.c
delete mode 120000 drivers/scsi/spfc/sphw_hwdev.c
delete mode 120000 drivers/scsi/spfc/sphw_hwif.c
delete mode 120000 drivers/scsi/spfc/sphw_mbox.c
delete mode 120000 drivers/scsi/spfc/sphw_mgmt.c
delete mode 120000 drivers/scsi/spfc/sphw_prof_adap.c
delete mode 120000 drivers/scsi/spfc/sphw_wq.c
--
2.32.0

[PATCH openEuler-1.0-LTS 1/4] arm64/mpam: fix __mpam_device_create() section mismatch error
by Laibin Qiu 26 Mar '22
From: Xingang Wang <wangxingang5(a)huawei.com>
ascend inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I49RB2
CVE: NA
---------------------------------------------------
Fix modpost section mismatch errors in __mpam_device_create() and others.
These warnings occur with newer gcc versions, for example 10.1.0.
[...]
WARNING: vmlinux.o(.text+0x2ed88): Section mismatch in reference from the
function __mpam_device_create() to the function .init.text:mpam_device_alloc()
The function __mpam_device_create() references
the function __init mpam_device_alloc().
This is often because __mpam_device_create lacks a __init
annotation or the annotation of mpam_device_alloc is wrong.
WARNING: vmlinux.o(.text.unlikely+0xa5c): Section mismatch in reference from
the function mpam_resctrl_init() to the function .init.text:mpam_init_padding()
The function mpam_resctrl_init() references
the function __init mpam_init_padding().
This is often because mpam_resctrl_init lacks a __init
annotation or the annotation of mpam_init_padding is wrong.
WARNING: vmlinux.o(.text.unlikely+0x5a9c): Section mismatch in reference from
the function resctrl_group_init() to the function .init.text:resctrl_group_setup_root()
The function resctrl_group_init() references
the function __init resctrl_group_setup_root().
This is often because resctrl_group_init lacks a __init
annotation or the annotation of resctrl_group_setup_root is wrong.
[...]
Fixes: c5e27c395a70 ("arm64/mpam: remove __init macro to support driver probe")
Signed-off-by: Xingang Wang <wangxingang5(a)huawei.com>
Signed-off-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
---
arch/arm64/kernel/mpam/mpam_device.c | 8 ++++----
arch/arm64/kernel/mpam/mpam_resctrl.c | 2 +-
fs/resctrlfs.c | 2 +-
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/kernel/mpam/mpam_device.c b/arch/arm64/kernel/mpam/mpam_device.c
index 4e882e81cf41..5d34751f8c3e 100644
--- a/arch/arm64/kernel/mpam/mpam_device.c
+++ b/arch/arm64/kernel/mpam/mpam_device.c
@@ -621,7 +621,7 @@ static void mpam_failed(struct work_struct *work)
mutex_unlock(&mpam_cpuhp_lock);
}
-static struct mpam_device * __init
+static struct mpam_device *
mpam_device_alloc(struct mpam_component *comp)
{
struct mpam_device *dev;
@@ -656,7 +656,7 @@ static void mpam_devices_destroy(struct mpam_component *comp)
}
}
-static struct mpam_component * __init mpam_component_alloc(int id)
+static struct mpam_component *mpam_component_alloc(int id)
{
struct mpam_component *comp;
@@ -694,7 +694,7 @@ struct mpam_component *mpam_component_get(struct mpam_class *class, int id,
return comp;
}
-static struct mpam_class * __init mpam_class_alloc(u8 level_idx,
+static struct mpam_class *mpam_class_alloc(u8 level_idx,
enum mpam_class_types type)
{
struct mpam_class *class;
@@ -733,7 +733,7 @@ static void mpam_class_destroy(struct mpam_class *class)
}
}
-static struct mpam_class * __init mpam_class_get(u8 level_idx,
+static struct mpam_class *mpam_class_get(u8 level_idx,
enum mpam_class_types type,
bool alloc)
{
diff --git a/arch/arm64/kernel/mpam/mpam_resctrl.c b/arch/arm64/kernel/mpam/mpam_resctrl.c
index fe2dcf92100f..183c83d4274f 100644
--- a/arch/arm64/kernel/mpam/mpam_resctrl.c
+++ b/arch/arm64/kernel/mpam/mpam_resctrl.c
@@ -1135,7 +1135,7 @@ void closid_free(int closid)
* Choose a width for the resource name and resource data based on the
* resource that has widest name and cbm.
*/
-static __init void mpam_init_padding(void)
+static void mpam_init_padding(void)
{
int cl;
struct mpam_resctrl_res *res;
diff --git a/fs/resctrlfs.c b/fs/resctrlfs.c
index ea9df7d77b95..0e6753012140 100644
--- a/fs/resctrlfs.c
+++ b/fs/resctrlfs.c
@@ -989,7 +989,7 @@ static void resctrl_group_default_init(struct resctrl_group *r)
r->type = RDTCTRL_GROUP;
}
-static int __init resctrl_group_setup_root(void)
+static int resctrl_group_setup_root(void)
{
int ret;
--
2.22.0

[PATCH openEuler-5.10 1/2] block-map: add __GFP_ZERO flag for alloc_page in function bio_copy_kern
by Zheng Zengkai 23 Mar '22
From: Haimin Zhang <tcs.kernel(a)gmail.com>
mainline inclusion
from mainline-v5.17-rc5
commit cc8f7fe1f5eab010191aa4570f27641876fa1267
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4Z2IA
CVE: CVE-2022-0494
--------------------------------
Add __GFP_ZERO flag for alloc_page in function bio_copy_kern to initialize
the buffer of a bio.
Signed-off-by: Haimin Zhang <tcs.kernel(a)gmail.com>
Reviewed-by: Chaitanya Kulkarni <kch(a)nvidia.com>
Reviewed-by: Christoph Hellwig <hch(a)lst.de>
Link: https://lore.kernel.org/r/20220216084038.15635-1-tcs.kernel@gmail.com
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Conflict: commit ce288e053568 ("block: remove BLK_BOUNCE_ISA support")
is not backported.
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
block/blk-map.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-map.c b/block/blk-map.c
index 21630dccac62..ede73f4f7014 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -488,7 +488,7 @@ static struct bio *bio_copy_kern(struct request_queue *q, void *data,
if (bytes > len)
bytes = len;
- page = alloc_page(q->bounce_gfp | gfp_mask);
+ page = alloc_page(q->bounce_gfp | __GFP_ZERO | gfp_mask);
if (!page)
goto cleanup;
--
2.20.1

22 Mar '22
From: eillon <yezhenyu2(a)huawei.com>
euleros inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4YW86
--------------------------------
When building with defconfig on arm32, we got a compile error:
./include/linux/page-flags-layout.h:95:2: error: #error "Not enough bits in page flags"
95 | #error "Not enough bits in page flags"
| ^~~~~
Limit PG_reserve_pgflag_0 and PG_reserve_pgflag_1 to compile only on
X86_64 and ARM64 to resolve this issue.
Fixes: afdf2a6cdee7 ("kabi: Add reserved page and gfp flags for future extension")
Signed-off-by: eillon <yezhenyu2(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
include/linux/page-flags.h | 2 ++
include/trace/events/mmflags.h | 10 +++++++---
2 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index e1af0a6c8165..65e1cbe1d1ce 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -150,8 +150,10 @@ enum pageflags {
* flags which backported from kernel upstream, please place them
* behind the reserved page flags.
*/
+#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
PG_reserve_pgflag_0,
PG_reserve_pgflag_1,
+#endif
__NR_PAGEFLAGS,
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index b50012bea1ef..366d972ce735 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -89,8 +89,12 @@
#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
#define IF_HAVE_PG_POOL(flag,string) ,{1UL << flag, string}
+#define IF_HAVE_PG_RESERVE0(flag,string) ,{1UL << flag, string}
+#define IF_HAVE_PG_RESERVE1(flag,string) ,{1UL << flag, string}
#else
#define IF_HAVE_PG_POOL(flag,string)
+#define IF_HAVE_PG_RESERVE0(flag,string)
+#define IF_HAVE_PG_RESERVE1(flag,string)
#endif
#ifdef CONFIG_PIN_MEMORY
@@ -128,9 +132,9 @@ IF_HAVE_PG_IDLE(PG_young, "young" ) \
IF_HAVE_PG_IDLE(PG_idle, "idle" ) \
IF_HAVE_PG_ARCH_2(PG_arch_2, "arch_2" ) \
IF_HAVE_PG_POOL(PG_pool, "pool" ) \
-IF_HAVE_PG_HOTREPLACE(PG_hotreplace, "hotreplace" ), \
- {1UL << PG_reserve_pgflag_0, "reserve_pgflag_0"}, \
- {1UL << PG_reserve_pgflag_1, "reserve_pgflag_1"}
+IF_HAVE_PG_HOTREPLACE(PG_hotreplace, "hotreplace" ) \
+IF_HAVE_PG_RESERVE0(PG_reserve_pgflag_0,"reserve_pgflag_0") \
+IF_HAVE_PG_RESERVE1(PG_reserve_pgflag_1,"reserve_pgflag_1")
#define show_page_flags(flags) \
(flags) ? __print_flags(flags, "|", \
--
2.20.1

[PATCH openEuler-5.10] mm/dynamic_hugetlb: only compile PG_pool on X86_64 and ARM64
by Zheng Zengkai 22 Mar '22
From: Liu Shixin <liushixin2(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 46904, https://gitee.com/openeuler/kernel/issues/I4YXOA
--------------------------------
When building with defconfig on arm32, we got a compile error:
./include/linux/page-flags-layout.h:95:2: error: #error "Not enough bits in page flags"
95 | #error "Not enough bits in page flags"
| ^~~~~
Limit PG_pool to compile only on X86_64 and ARM64 to resolve this issue.
Signed-off-by: Liu Shixin <liushixin2(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
fs/proc/page.c | 2 ++
include/linux/page-flags.h | 6 ++++++
include/trace/events/mmflags.h | 10 ++++++++--
3 files changed, 16 insertions(+), 2 deletions(-)
diff --git a/fs/proc/page.c b/fs/proc/page.c
index d00c23d543fe..4c5bef99ec10 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -220,7 +220,9 @@ u64 stable_page_flags(struct page *page)
#ifdef CONFIG_64BIT
u |= kpf_copy_bit(k, KPF_ARCH_2, PG_arch_2);
#endif
+#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
u |= kpf_copy_bit(k, KPF_POOL, PG_pool);
+#endif
return u;
};
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index eb2fe22bc0e9..e1af0a6c8165 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -139,7 +139,9 @@ enum pageflags {
#ifdef CONFIG_64BIT
PG_arch_2,
#endif
+#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
PG_pool, /* Used to track page allocated from dynamic hugetlb pool */
+#endif
#ifdef CONFIG_PIN_MEMORY
PG_hotreplace,
#endif
@@ -474,7 +476,11 @@ __PAGEFLAG(Reported, reported, PF_NO_COMPOUND)
/*
* PagePool() is used to track page allocated from hpool.
*/
+#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
PAGEFLAG(Pool, pool, PF_NO_TAIL)
+#else
+PAGEFLAG_FALSE(Pool)
+#endif
/*
* On an anonymous page mapped into a user virtual memory area,
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index dc1805fbf893..b50012bea1ef 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -87,6 +87,12 @@
#define IF_HAVE_PG_ARCH_2(flag,string)
#endif
+#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
+#define IF_HAVE_PG_POOL(flag,string) ,{1UL << flag, string}
+#else
+#define IF_HAVE_PG_POOL(flag,string)
+#endif
+
#ifdef CONFIG_PIN_MEMORY
#define IF_HAVE_PG_HOTREPLACE(flag, string) ,{1UL << flag, string}
#else
@@ -114,14 +120,14 @@
{1UL << PG_mappedtodisk, "mappedtodisk" }, \
{1UL << PG_reclaim, "reclaim" }, \
{1UL << PG_swapbacked, "swapbacked" }, \
- {1UL << PG_unevictable, "unevictable" }, \
- {1UL << PG_pool, "pool" } \
+ {1UL << PG_unevictable, "unevictable" } \
IF_HAVE_PG_MLOCK(PG_mlocked, "mlocked" ) \
IF_HAVE_PG_UNCACHED(PG_uncached, "uncached" ) \
IF_HAVE_PG_HWPOISON(PG_hwpoison, "hwpoison" ) \
IF_HAVE_PG_IDLE(PG_young, "young" ) \
IF_HAVE_PG_IDLE(PG_idle, "idle" ) \
IF_HAVE_PG_ARCH_2(PG_arch_2, "arch_2" ) \
+IF_HAVE_PG_POOL(PG_pool, "pool" ) \
IF_HAVE_PG_HOTREPLACE(PG_hotreplace, "hotreplace" ), \
{1UL << PG_reserve_pgflag_0, "reserve_pgflag_0"}, \
{1UL << PG_reserve_pgflag_1, "reserve_pgflag_1"}
--
2.20.1

21 Mar '22
From: Miklos Szeredi <mszeredi(a)redhat.com>
mainline inclusion
from mainline-v5.17-rc8
commit 0c4bcfdecb1ac0967619ee7ff44871d93c08c909
category: bugfix
bugzilla: 186448, https://gitee.com/openeuler/kernel/issues/I4YS7O
CVE: CVE-2022-1011
--------------------------------
In FOPEN_DIRECT_IO mode, fuse_file_write_iter() calls
fuse_direct_write_iter(), which normally calls fuse_direct_io(), which then
imports the write buffer with fuse_get_user_pages(), which uses
iov_iter_get_pages() to grab references to userspace pages instead of
actually copying memory.
On the filesystem device side, these pages can then either be read to
userspace (via fuse_dev_read()), or splice()d over into a pipe using
fuse_dev_splice_read() as pipe buffers with &nosteal_pipe_buf_ops.
This is wrong because after fuse_dev_do_read() unlocks the FUSE request,
the userspace filesystem can mark the request as completed, causing write()
to return. At that point, the userspace filesystem should no longer have
access to the pipe buffer.
Fix by copying pages coming from the user address space to new pipe
buffers.
Reported-by: Jann Horn <jannh(a)google.com>
Fixes: c3021629a0d8 ("fuse: support splice() reading from fuse device")
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Miklos Szeredi <mszeredi(a)redhat.com>
Signed-off-by: Zhang Wensheng <zhangwensheng5(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
fs/fuse/dev.c | 12 +++++++++++-
fs/fuse/file.c | 1 +
fs/fuse/fuse_i.h | 1 +
3 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
index d100b5dfedbd..8ac91ba05d6d 100644
--- a/fs/fuse/dev.c
+++ b/fs/fuse/dev.c
@@ -945,7 +945,17 @@ static int fuse_copy_page(struct fuse_copy_state *cs, struct page **pagep,
while (count) {
if (cs->write && cs->pipebufs && page) {
- return fuse_ref_page(cs, page, offset, count);
+ /*
+ * Can't control lifetime of pipe buffers, so always
+ * copy user pages.
+ */
+ if (cs->req->args->user_pages) {
+ err = fuse_copy_fill(cs);
+ if (err)
+ return err;
+ } else {
+ return fuse_ref_page(cs, page, offset, count);
+ }
} else if (!cs->len) {
if (cs->move_pages && page &&
offset == 0 && count == PAGE_SIZE) {
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index e63ce8443c96..a869c3a527a8 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -1433,6 +1433,7 @@ static int fuse_get_user_pages(struct fuse_args_pages *ap, struct iov_iter *ii,
(PAGE_SIZE - ret) & (PAGE_SIZE - 1);
}
+ ap->args.user_pages = true;
if (write)
ap->args.in_pages = true;
else
diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
index d31fc48c6afa..0686788e4283 100644
--- a/fs/fuse/fuse_i.h
+++ b/fs/fuse/fuse_i.h
@@ -266,6 +266,7 @@ struct fuse_args {
bool nocreds:1;
bool in_pages:1;
bool out_pages:1;
+ bool user_pages:1;
bool out_argvar:1;
bool page_zeroing:1;
bool page_replace:1;
--
2.20.1

[PATCH openEuler-5.10 1/4] blk-mq: add exception handling when srcu->sda alloc failed
by Zheng Zengkai 21 Mar '22
From: Laibin Qiu <qiulaibin(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 186352, https://gitee.com/openeuler/kernel/issues/I4YADX
CVE: NA
--------------------------------
In case of BLK_MQ_F_BLOCKING, a per-hctx srcu is used to protect the dispatch
critical section. But the current code does not notice when the srcu memory
allocation fails in blk_mq_alloc_hctx(), which leads to an illegal-address
BUG. Add return value validation to avoid this problem.
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
block/blk-mq.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c3beaca1f4fb..9ae1663348ac 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2852,12 +2852,16 @@ blk_mq_alloc_hctx(struct request_queue *q, struct blk_mq_tag_set *set,
if (!hctx->fq)
goto free_bitmap;
- if (hctx->flags & BLK_MQ_F_BLOCKING)
- init_srcu_struct(hctx->srcu);
+ if (hctx->flags & BLK_MQ_F_BLOCKING) {
+ if (init_srcu_struct(hctx->srcu) != 0)
+ goto free_flush_queue;
+ }
blk_mq_hctx_kobj_init(hctx);
return hctx;
+ free_flush_queue:
+ blk_free_flush_queue(hctx->fq);
free_bitmap:
sbitmap_free(&hctx->ctx_map);
free_ctxs:
--
2.20.1
Update openEuler-22.03-LTS KABI whitelists.

20 Mar '22
Patch 1 fixes a bug found in an earlier version.
Patches 2, 3 and 4 fix potential vulnerabilities discovered during code analysis.
Patch 5 is an optimization.
Patch 7 is a feature limitation.
Patches 6, 8 and 9 are three bugfixes discovered during code analysis.
Liu Shixin (9):
mm/dynamic_hugetlb: check free_pages_prepares when split pages
mm/dynamic_hugetlb: improve the initialization of huge pages
mm/dynamic_hugetlb: use pfn to traverse subpages
mm/dynamic_hugetlb: check page using check_new_page
mm/dynamic_hugetlb: use mem_cgroup_force_empty to reclaim pages
mm/dynamic_hugetlb: hold the lock until pages back to hugetlb
mm/dynamic_hugetlb: only support to merge 2M dynamicly
mm/dynamic_hugetlb: set/clear HPageFreed
mm/dynamic_hugetlb: initialize subpages before merging
include/linux/memcontrol.h | 2 +
mm/dynamic_hugetlb.c | 156 ++++++++++++++++++++++++-------------
mm/internal.h | 1 +
mm/memcontrol.c | 2 +-
mm/page_alloc.c | 2 +-
5 files changed, 109 insertions(+), 54 deletions(-)
--
2.20.1

[PATCH openEuler-1.0-LTS] blk-mq: add exception handling when srcu->sda alloc failed
by Laibin Qiu 18 Mar '22
hulk inclusion
category: bugfix
bugzilla: 186352, https://gitee.com/openeuler/kernel/issues/I4YADX
DTS: DTS2022031707143
CVE: NA
--------------------------------
In case of BLK_MQ_F_BLOCKING, a per-hctx srcu is used to protect the dispatch
critical section. But the current code does not notice when the srcu memory
allocation fails in blk_mq_alloc_hctx(), which leads to an illegal-address
BUG. Add return value validation to avoid this problem.
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
---
block/blk-mq.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 0732bcc65f88..9604d2b39745 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2459,12 +2459,16 @@ blk_mq_alloc_hctx(struct request_queue *q, struct blk_mq_tag_set *set,
if (!hctx->fq)
goto free_bitmap;
- if (hctx->flags & BLK_MQ_F_BLOCKING)
- init_srcu_struct(hctx->srcu);
+ if (hctx->flags & BLK_MQ_F_BLOCKING) {
+ if (init_srcu_struct(hctx->srcu) != 0)
+ goto free_flush_queue;
+ }
blk_mq_hctx_kobj_init(hctx);
return hctx;
+ free_flush_queue:
+ blk_free_flush_queue(hctx->fq);
free_bitmap:
sbitmap_free(&hctx->ctx_map);
free_ctxs:
--
2.22.0

[PATCH openEuler-1.0-LTS] irqchip/gic-phytium-2500: Fix issue that interrupts are concentrated in one cpu
by Laibin Qiu 18 Mar '22
From: Mao HongBo <maohongbo(a)phytium.com.cn>
Phytium inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I41AUQ
CVE: NA
-------------------------------------------------
Fix the issue that interrupts are concentrated in one cpu
for Phytium S2500 server.
Signed-off-by: Mao HongBo <maohongbo(a)phytium.com.cn>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
---
drivers/irqchip/irq-gic-phytium-2500-its.c | 3 +--
drivers/irqchip/irq-gic-phytium-2500.c | 3 +--
2 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/irqchip/irq-gic-phytium-2500-its.c b/drivers/irqchip/irq-gic-phytium-2500-its.c
index fff1c8546d23..dd24af3793ca 100644
--- a/drivers/irqchip/irq-gic-phytium-2500-its.c
+++ b/drivers/irqchip/irq-gic-phytium-2500-its.c
@@ -1181,8 +1181,7 @@ static int its_cpumask_select(struct its_device *its_dev,
}
cpu = cpumask_any_and(mask_val, cpu_mask);
- if ((cpu > cpus) && (cpu < (cpus + skt_cpu_cnt[skt_id])))
- cpus = cpu;
+ cpus = cpus + cpu % skt_cpu_cnt[skt_id];
if (is_kdump_kernel()) {
skt = (cpu_logical_map(cpu) >> 16) & 0xff;
diff --git a/drivers/irqchip/irq-gic-phytium-2500.c b/drivers/irqchip/irq-gic-phytium-2500.c
index 8674463a08c6..103e97f5855e 100644
--- a/drivers/irqchip/irq-gic-phytium-2500.c
+++ b/drivers/irqchip/irq-gic-phytium-2500.c
@@ -1123,8 +1123,7 @@ static int gic_cpumask_select(struct irq_data *d, const struct cpumask *mask_val
}
cpu = cpumask_any_and(mask_val, cpu_online_mask);
- if ((cpu > cpus) && (cpu < (cpus + skt_cpu_cnt[irq_skt])))
- cpus = cpu;
+ cpus = cpus + cpu % skt_cpu_cnt[irq_skt];
if (is_kdump_kernel()) {
skt = (cpu_logical_map(cpu) >> 16) & 0xff;
--
2.22.0

[PATCH openEuler-1.0-LTS 1/3] veth: Do not record rx queue hint in veth_xmit
by Laibin Qiu 17 Mar '22
From: Daniel Borkmann <daniel(a)iogearbox.net>
stable inclusion
from linux-4.19.226
commit bd6e97e2b6f59a19894c7032a83f03ad38ede28e
--------------------------------
commit 710ad98c363a66a0cd8526465426c5c5f8377ee0 upstream.
Laurent reported that they have seen a significant amount of TCP retransmissions
at high throughput from applications residing in network namespaces talking to
the outside world via veths. The drops were seen on the qdisc layer (fq_codel,
as per systemd default) of the phys device such as ena or virtio_net due to all
traffic hitting a _single_ TX queue _despite_ multi-queue device. (Note that the
setup was _not_ using XDP on veths as the issue is generic.)
More specifically, after edbea9220251 ("veth: Store queue_mapping independently
of XDP prog presence") which made it all the way back to v4.19.184+,
skb_record_rx_queue() would set skb->queue_mapping to 1 (given 1 RX and 1 TX
queue by default for veths) instead of leaving at 0.
This is eventually retained and callbacks like ena_select_queue() will also pick
single queue via netdev_core_pick_tx()'s ndo_select_queue() once all the traffic
is forwarded to that device via upper stack or other means. Similarly, for others
not implementing ndo_select_queue() if XPS is disabled, netdev_pick_tx() might
call into the skb_tx_hash() and check for prior skb_rx_queue_recorded() as well.
In general, it is a _bad_ idea for virtual devices like veth to mess around with
queue selection [by default]. Given dev->real_num_tx_queues is by default 1,
the skb->queue_mapping was left untouched, and so prior to edbea9220251 the
netdev_core_pick_tx() could do its job upon __dev_queue_xmit() on the phys device.
Unbreak this and restore prior behavior by removing the skb_record_rx_queue()
from veth_xmit() altogether.
If the veth peer has an XDP program attached, then it would return the first RX
queue index in xdp_md->rx_queue_index (unless configured in non-default manner).
However, this is still better than breaking the generic case.
Fixes: edbea9220251 ("veth: Store queue_mapping independently of XDP prog presence")
Fixes: 638264dc9022 ("veth: Support per queue XDP ring")
Reported-by: Laurent Bernaille <laurent.bernaille(a)datadoghq.com>
Signed-off-by: Daniel Borkmann <daniel(a)iogearbox.net>
Cc: Maciej Fijalkowski <maciej.fijalkowski(a)intel.com>
Cc: Toshiaki Makita <toshiaki.makita1(a)gmail.com>
Cc: Eric Dumazet <eric.dumazet(a)gmail.com>
Cc: Paolo Abeni <pabeni(a)redhat.com>
Cc: John Fastabend <john.fastabend(a)gmail.com>
Cc: Willem de Bruijn <willemb(a)google.com>
Acked-by: John Fastabend <john.fastabend(a)gmail.com>
Reviewed-by: Eric Dumazet <edumazet(a)google.com>
Acked-by: Toshiaki Makita <toshiaki.makita1(a)gmail.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Conflicts:
drivers/net/veth.c
Signed-off-by: Ziyang Xuan <william.xuanziyang(a)huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1(a)huawei.com>
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
---
drivers/net/veth.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 41a00cd76955..749faa6fcd82 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -197,8 +197,6 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
if (rxq < rcv->real_num_rx_queues) {
rq = &rcv_priv->rq[rxq];
rcv_xdp = rcu_access_pointer(rq->xdp_prog);
- if (rcv_xdp)
- skb_record_rx_queue(skb, rxq);
}
if (likely(veth_forward_skb(rcv, skb, rq, rcv_xdp) == NET_RX_SUCCESS)) {
--
2.22.0

[PATCH openEuler-5.10] irqchip/gic-phytium-2500: Fix issue that interrupts are concentrated in one cpu
by Zheng Zengkai 17 Mar '22
From: Mao HongBo <maohongbo(a)phytium.com.cn>
Phytium inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I41AUQ
CVE: NA
-------------------------------------------------
Fix the issue that interrupts are concentrated in one cpu
for Phytium S2500 server.
Signed-off-by: Mao HongBo <maohongbo(a)phytium.com.cn>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
drivers/irqchip/irq-gic-phytium-2500-its.c | 4 +---
drivers/irqchip/irq-gic-phytium-2500.c | 4 +---
2 files changed, 2 insertions(+), 6 deletions(-)
diff --git a/drivers/irqchip/irq-gic-phytium-2500-its.c b/drivers/irqchip/irq-gic-phytium-2500-its.c
index 4d2758fbad22..cb9962c4debb 100644
--- a/drivers/irqchip/irq-gic-phytium-2500-its.c
+++ b/drivers/irqchip/irq-gic-phytium-2500-its.c
@@ -1675,9 +1675,7 @@ static int its_cpumask_select(struct its_device *its_dev,
}
cpu = cpumask_any_and(mask_val, cpu_mask);
- if ((cpu > cpus) && (cpu < (cpus + skt_cpu_cnt[skt_id]))) {
- cpus = cpu;
- }
+ cpus = cpus + cpu % skt_cpu_cnt[skt_id];
if (is_kdump_kernel()) {
skt = (cpu_logical_map(cpu) >> 16) & 0xff;
diff --git a/drivers/irqchip/irq-gic-phytium-2500.c b/drivers/irqchip/irq-gic-phytium-2500.c
index dbdb778b5b4b..a0c622fb2039 100644
--- a/drivers/irqchip/irq-gic-phytium-2500.c
+++ b/drivers/irqchip/irq-gic-phytium-2500.c
@@ -1345,9 +1345,7 @@ static int gic_cpumask_select(struct irq_data *d, const struct cpumask *mask_val
}
cpu = cpumask_any_and(mask_val, cpu_online_mask);
- if ((cpu > cpus) && (cpu < (cpus + skt_cpu_cnt[irq_skt]))) {
- cpus = cpu;
- }
+ cpus = cpus + cpu % skt_cpu_cnt[irq_skt];
if (is_kdump_kernel()) {
skt = (cpu_logical_map(cpu) >> 16) & 0xff;
--
2.20.1
Additional agenda item:
14:10 - 14:30 [openEuler-22.03-LTS] Proposal: compress kernel modules in the spec to reduce disk usage - liuchao
-----Original Message-----
From: openEuler conference [mailto:public@openeuler.org]
Sent: 2022-03-14 10:53
To: kernel(a)openeuler.org; kernel-discuss(a)openeuler.org; tc(a)openeuler.org; dev(a)openeuler.org
Subject: openEuler kernel biweekly meeting & technical sharing
Hello!
The Kernel SIG invites you to a Zoom meeting (auto-recorded) to be held at 2022-03-18 14:00.
Subject: openEuler kernel biweekly meeting & technical sharing
Agenda:
openEuler kernel technical sharing: an introduction to the kernel interrupt subsystem
Meeting link: https://us06web.zoom.us/j/82640076508?pwd=MktwWWxWR3ZqNFF4SlgwdGhwNlNDZz09
Note: you are advised to change your participant name after joining the meeting, or use your ID from gitee.com.
More information: https://openeuler.org/zh/ (Chinese), https://openeuler.org/en/ (English)
_______________________________________________
Kernel-discuss mailing list -- kernel-discuss(a)openeuler.org
To unsubscribe send an email to kernel-discuss-leave(a)openeuler.org
Compressing kernel modules to the .xz format in kernel.spec reduces the disk usage of kernel modules by about 72%.
For the openEuler operating system, this saves about 175 MB of disk space.
Tools such as modprobe in openEuler already support .xz-compressed kernel modules, so the change has no impact on normal use.
This feature is already enabled by default in CentOS and openSUSE.
Compression command:
find $RPM_BUILD_ROOT/lib/modules/ -type f -name '*.ko' | xargs -n1 -P`nproc --all` xz

[openEuler-5.10 1/9] Revert "audit: bugfix for infinite loop when flush the hold queue"
by Zheng Zengkai 17 Mar '22
From: Cui GaoSheng <cuigaosheng1(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 186383 https://gitee.com/openeuler/kernel/issues/I4X1AI?from=project-issue
CVE: NA
--------------------------------
This reverts commit fcfdde9cfc503cfd191b7286f9c48077fb5bf420.
Signed-off-by: Cui GaoSheng <cuigaosheng1(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
kernel/audit.c | 5 -----
1 file changed, 5 deletions(-)
diff --git a/kernel/audit.c b/kernel/audit.c
index 21be62bc8205..2a38cbaf3ddb 100644
--- a/kernel/audit.c
+++ b/kernel/audit.c
@@ -732,8 +732,6 @@ static int kauditd_send_queue(struct sock *sk, u32 portid,
if (!sk) {
if (err_hook)
(*err_hook)(skb);
- if (queue == &audit_hold_queue)
- goto out;
continue;
}
@@ -750,8 +748,6 @@ static int kauditd_send_queue(struct sock *sk, u32 portid,
(*err_hook)(skb);
if (rc == -EAGAIN)
rc = 0;
- if (queue == &audit_hold_queue)
- goto out;
/* continue to drain the queue */
continue;
} else
@@ -763,7 +759,6 @@ static int kauditd_send_queue(struct sock *sk, u32 portid,
}
}
-out:
return (rc >= 0 ? 0 : rc);
}
--
2.20.1

[PATCH openEuler-5.10] arm/arm64: paravirt: Remove GPL from pv_ops export
by Zheng Zengkai 15 Mar '22
From: Zengruan Ye <yezengruan(a)huawei.com>
virt inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4VZPC
CVE: NA
--------------------------------
Commit 63042c58affc ("KVM: arm64: Add interface to support vCPU
preempted check") introduced paravirt spinlock operations: pv_lock_ops
used to be exported via EXPORT_SYMBOL(), while the pv_ops structure that
now contains the pv lock operations is exported via EXPORT_SYMBOL_GPL().
Change that by using EXPORT_SYMBOL(pv_ops) for arm/arm64, in line with
the x86 change in:
https://lore.kernel.org/all/20181029150116.25372-1-jgross@suse.com/T/#u
Fixes: 63042c58affc ("KVM: arm64: Add interface to support vCPU preempted check")
Signed-off-by: yezengruan <yezengruan(a)huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1(a)huawei.com>
Acked-by: Xie Xiuqi <xiexiuqi(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
arch/arm/kernel/paravirt.c | 2 +-
arch/arm64/kernel/paravirt.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm/kernel/paravirt.c b/arch/arm/kernel/paravirt.c
index 4cfed91fe256..3c34f456e400 100644
--- a/arch/arm/kernel/paravirt.c
+++ b/arch/arm/kernel/paravirt.c
@@ -15,4 +15,4 @@ struct static_key paravirt_steal_enabled;
struct static_key paravirt_steal_rq_enabled;
struct paravirt_patch_template pv_ops;
-EXPORT_SYMBOL_GPL(pv_ops);
+EXPORT_SYMBOL(pv_ops);
diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c
index 847b3c8b1218..ecbd2bf2e6fb 100644
--- a/arch/arm64/kernel/paravirt.c
+++ b/arch/arm64/kernel/paravirt.c
@@ -38,7 +38,7 @@ struct paravirt_patch_template pv_ops = {
#endif
.lock.vcpu_is_preempted = __native_vcpu_is_preempted,
};
-EXPORT_SYMBOL_GPL(pv_ops);
+EXPORT_SYMBOL(pv_ops);
struct pv_time_stolen_time_region {
struct pvclock_vcpu_stolen_time *kaddr;
--
2.20.1

[PATCH openEuler-1.0-LTS] crypto: pcrypt - Fix use-after-free on module unload
by Laibin Qiu 15 Mar '22
From: Herbert Xu <herbert(a)gondor.apana.org.au>
stable inclusion
from linux-4.19.102
commit 47ef5cb878817127bd3d54c3578bbbd3f7c2bf2c
CVE: NA
-------------------------------
[ Upstream commit 07bfd9bdf568a38d9440c607b72342036011f727 ]
On module unload of pcrypt we must unregister the crypto algorithms
first and then tear down the padata structure. As otherwise the
crypto algorithms are still alive and can be used while the padata
structure is being freed.
Fixes: 5068c7a883d1 ("crypto: pcrypt - Add pcrypt crypto...")
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Lu Jialin <lujialin4(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Reviewed-by: Wang Weiyang <wangweiyang2(a)huawei.com>
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
---
crypto/pcrypt.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
index 00c7230acfed..4bf4bb5b1a8b 100644
--- a/crypto/pcrypt.c
+++ b/crypto/pcrypt.c
@@ -509,11 +509,12 @@ static int __init pcrypt_init(void)
static void __exit pcrypt_exit(void)
{
+ crypto_unregister_template(&pcrypt_tmpl);
+
pcrypt_fini_padata(&pencrypt);
pcrypt_fini_padata(&pdecrypt);
kset_unregister(pcrypt_kset);
- crypto_unregister_template(&pcrypt_tmpl);
}
module_init(pcrypt_init);
--
2.22.0

[PATCH openEuler-1.0-LTS] lib/iov_iter: initialize "flags" in new pipe_buffer
by Laibin Qiu 14 Mar '22
From: Max Kellermann <max.kellermann(a)ionos.com>
mainline inclusion
from mainline-v5.17-rc6
commit 9d2231c5d74e13b2a0546fee6737ee4446017903
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4X1GI?from=project-issue
CVE: NA
--------------------------------
The functions copy_page_to_iter_pipe() and push_pipe() can both
allocate a new pipe_buffer, but the "flags" member initializer is
missing.
Fixes: 241699cd72a8 ("new iov_iter flavour: pipe-backed")
To: Alexander Viro <viro(a)zeniv.linux.org.uk>
To: linux-fsdevel(a)vger.kernel.org
To: linux-kernel(a)vger.kernel.org
Cc: stable(a)vger.kernel.org
Signed-off-by: Max Kellermann <max.kellermann(a)ionos.com>
Signed-off-by: Al Viro <viro(a)zeniv.linux.org.uk>
Signed-off-by: Zhihao Cheng <chengzhihao1(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
---
lib/iov_iter.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 25668139fc1f..b80320956caf 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -506,6 +506,7 @@ static size_t copy_page_to_iter_pipe(struct page *page, size_t offset, size_t by
return 0;
pipe->nrbufs++;
buf->ops = &page_cache_pipe_buf_ops;
+ buf->flags = 0;
get_page(buf->page = page);
buf->offset = offset;
buf->len = bytes;
@@ -630,6 +631,7 @@ static size_t push_pipe(struct iov_iter *i, size_t size,
break;
pipe->nrbufs++;
pipe->bufs[idx].ops = &default_pipe_buf_ops;
+ pipe->bufs[idx].flags = 0;
pipe->bufs[idx].page = page;
pipe->bufs[idx].offset = 0;
if (left <= PAGE_SIZE) {
--
2.22.0
Fix the zoneref mapping problem and disable memory reliable while kdump is in
progress.
Reliable memory used by shmem is counted accurately when swap is enabled.
Changelog since v1:
- update ac->preferred_zoneref
- add patch on shmem in reliable_fb_find_zone
Ma Wupeng (3):
mm: disable memory reliable when kdump is in progress
mm: fix zoneref mapping problem in memory reliable
mm: Count reliable shmem used based on NR_SHMEM
mm/filemap.c | 5 ++++-
mm/khugepaged.c | 2 ++
mm/mem_reliable.c | 6 ++++++
mm/migrate.c | 5 +++++
mm/page_alloc.c | 25 ++++++++++++-------------
mm/shmem.c | 7 ++-----
6 files changed, 31 insertions(+), 19 deletions(-)
--
2.22.0

[PATCH openEuler-1.0-LTS 1/2] mm: fix unable to use reliable memory in page cache
by Laibin Qiu 14 Mar '22
From: Chen Wandun <chenwandun(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SK3S
CVE: NA
----------------------------------------------
It is inaccurate when accumulate percpu variable without lock,
it will result in pagecache_reliable_pages to be negative in
sometime, and will prevent pagecache using reliable memory.
For more accurate statistic, replace percpu variable by percpu_counter.
The additional percpu_counter will be access in alloc_pages, the init
of these two percpu_conter is too late in late_initcall, allocations
that use alloc_pages should have to check if the two counter has been
inited, that will introduce latency, in order to sovle this, init the
two percpu_counter in advance.
Signed-off-by: Chen Wandun <chenwandun(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
---
include/linux/mem_reliable.h | 17 +++++-----
mm/mem_reliable.c | 62 ++++++++++++++++++------------------
mm/page_alloc.c | 5 +++
3 files changed, 45 insertions(+), 39 deletions(-)
diff --git a/include/linux/mem_reliable.h b/include/linux/mem_reliable.h
index ba5d41edbc44..c4f954340ea7 100644
--- a/include/linux/mem_reliable.h
+++ b/include/linux/mem_reliable.h
@@ -21,8 +21,9 @@ extern bool shmem_reliable;
extern struct percpu_counter reliable_shmem_used_nr_page;
extern bool pagecache_use_reliable_mem;
DECLARE_PER_CPU(long, nr_reliable_buddy_pages);
-DECLARE_PER_CPU(long, pagecache_reliable_pages);
-DECLARE_PER_CPU(long, anon_reliable_pages);
+
+extern struct percpu_counter pagecache_reliable_pages;
+extern struct percpu_counter anon_reliable_pages;
extern unsigned long nr_reliable_reserve_pages __read_mostly;
extern long shmem_reliable_nr_page __read_mostly;
@@ -43,6 +44,7 @@ extern void reliable_lru_add(enum lru_list lru, struct page *page,
extern void page_cache_prepare_alloc(gfp_t *gfp);
extern void reliable_lru_add_batch(int zid, enum lru_list lru,
int val);
+extern bool mem_reliable_counter_initialized(void);
static inline bool mem_reliable_is_enabled(void)
{
@@ -81,13 +83,10 @@ static inline void reliable_page_counter(struct page *page,
static inline bool reliable_mem_limit_check(unsigned long nr_page)
{
- int cpu;
- long num = 0;
+ s64 num;
- for_each_possible_cpu(cpu) {
- num += per_cpu(pagecache_reliable_pages, cpu);
- num += per_cpu(anon_reliable_pages, cpu);
- }
+ num = percpu_counter_read_positive(&pagecache_reliable_pages);
+ num += percpu_counter_read_positive(&anon_reliable_pages);
return num + nr_page <= task_reliable_limit / PAGE_SIZE;
}
@@ -184,6 +183,8 @@ static inline void reliable_lru_add(enum lru_list lru,
static inline void page_cache_prepare_alloc(gfp_t *gfp) {}
static inline void reliable_lru_add_batch(int zid, enum lru_list lru,
int val) {}
+
+static inline bool mem_reliable_counter_initialized(void) { return false; }
#endif
#endif
diff --git a/mm/mem_reliable.c b/mm/mem_reliable.c
index 62df78657ff9..033af716610f 100644
--- a/mm/mem_reliable.c
+++ b/mm/mem_reliable.c
@@ -34,12 +34,18 @@ unsigned long nr_reliable_reserve_pages = MEM_RELIABLE_RESERVE_MIN / PAGE_SIZE;
long shmem_reliable_nr_page = LONG_MAX;
bool pagecache_use_reliable_mem __read_mostly = true;
-DEFINE_PER_CPU(long, pagecache_reliable_pages);
-DEFINE_PER_CPU(long, anon_reliable_pages);
+struct percpu_counter pagecache_reliable_pages;
+struct percpu_counter anon_reliable_pages;
static unsigned long zero;
static unsigned long reliable_pagecache_max_bytes = ULONG_MAX;
+bool mem_reliable_counter_initialized(void)
+{
+ return likely(percpu_counter_initialized(&pagecache_reliable_pages)) &&
+ likely((percpu_counter_initialized(&anon_reliable_pages)));
+}
+
bool mem_reliable_status(void)
{
return mem_reliable_is_enabled();
@@ -66,9 +72,9 @@ void reliable_lru_add_batch(int zid, enum lru_list lru, int val)
if (zid < ZONE_MOVABLE && zid >= 0) {
if (is_file_lru(lru))
- this_cpu_add(pagecache_reliable_pages, val);
+ percpu_counter_add(&pagecache_reliable_pages, val);
else if (is_anon_lru(lru))
- this_cpu_add(anon_reliable_pages, val);
+ percpu_counter_add(&anon_reliable_pages, val);
}
}
@@ -78,14 +84,14 @@ void reliable_lru_add(enum lru_list lru, struct page *page, int val)
return;
if (is_file_lru(lru))
- this_cpu_add(pagecache_reliable_pages, val);
+ percpu_counter_add(&pagecache_reliable_pages, val);
else if (is_anon_lru(lru))
- this_cpu_add(anon_reliable_pages, val);
+ percpu_counter_add(&anon_reliable_pages, val);
else if (lru == LRU_UNEVICTABLE) {
if (PageAnon(page))
- this_cpu_add(anon_reliable_pages, val);
+ percpu_counter_add(&anon_reliable_pages, val);
else
- this_cpu_add(pagecache_reliable_pages, val);
+ percpu_counter_add(&pagecache_reliable_pages, val);
}
}
@@ -188,21 +194,20 @@ static void show_val_kb(struct seq_file *m, const char *s, unsigned long num)
void reliable_report_meminfo(struct seq_file *m)
{
bool pagecache_enabled = pagecache_reliable_is_enabled();
- long nr_pagecache_pages = 0;
- long nr_anon_pages = 0;
+ s64 nr_pagecache_pages = 0;
+ s64 nr_anon_pages = 0;
long nr_buddy_pages = 0;
int cpu;
if (!mem_reliable_is_enabled())
return;
- for_each_possible_cpu(cpu) {
+ for_each_possible_cpu(cpu)
nr_buddy_pages += per_cpu(nr_reliable_buddy_pages, cpu);
- nr_anon_pages += per_cpu(anon_reliable_pages, cpu);
- if (pagecache_enabled)
- nr_pagecache_pages +=
- per_cpu(pagecache_reliable_pages, cpu);
- }
+
+ nr_anon_pages = percpu_counter_sum_positive(&anon_reliable_pages);
+ if (pagecache_enabled)
+ nr_pagecache_pages = percpu_counter_sum_positive(&pagecache_reliable_pages);
show_val_kb(m, "ReliableTotal: ",
total_reliable_mem_sz() >> PAGE_SHIFT);
@@ -445,8 +450,7 @@ static struct ctl_table reliable_dir_table[] = {
void page_cache_prepare_alloc(gfp_t *gfp)
{
- long nr_reliable = 0;
- int cpu;
+ s64 nr_reliable = 0;
if (!mem_reliable_is_enabled())
return;
@@ -454,11 +458,7 @@ void page_cache_prepare_alloc(gfp_t *gfp)
if (!pagecache_reliable_is_enabled())
goto no_reliable;
- for_each_possible_cpu(cpu)
- nr_reliable += this_cpu_read(pagecache_reliable_pages);
-
- if (nr_reliable < 0)
- goto no_reliable;
+ nr_reliable = percpu_counter_read_positive(&pagecache_reliable_pages);
if (nr_reliable > reliable_pagecache_max_bytes >> PAGE_SHIFT)
goto no_reliable;
@@ -480,9 +480,12 @@ static int __init reliable_sysctl_init(void)
return -1;
}
+ percpu_counter_init(&pagecache_reliable_pages, 0, GFP_KERNEL);
+ percpu_counter_init(&anon_reliable_pages, 0, GFP_KERNEL);
+
return 0;
}
-late_initcall(reliable_sysctl_init);
+arch_initcall(reliable_sysctl_init);
#else
static void mem_reliable_ctrl_bit_disabled(int idx) {}
#endif
@@ -515,21 +518,18 @@ static void mem_reliable_feature_disable(int idx)
void reliable_show_mem_info(void)
{
- int cpu;
- long num = 0;
+ s64 num = 0;
if (!mem_reliable_is_enabled())
return;
- for_each_possible_cpu(cpu) {
- num += per_cpu(anon_reliable_pages, cpu);
- num += per_cpu(pagecache_reliable_pages, cpu);
- }
+ num += percpu_counter_sum_positive(&anon_reliable_pages);
+ num += percpu_counter_sum_positive(&pagecache_reliable_pages);
pr_info("ReliableTotal: %lu kB", total_reliable_mem_sz() >> 10);
pr_info("ReliableUsed: %lu kB", used_reliable_mem_sz() >> 10);
pr_info("task_reliable_limit: %lu kB", task_reliable_limit >> 10);
- pr_info("reliable_user_used: %ld kB", num << (PAGE_SHIFT - 10));
+ pr_info("reliable_user_used: %lld kB", num << (PAGE_SHIFT - 10));
}
void mem_reliable_out_of_memory(gfp_t gfp_mask, unsigned int order,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fd4354d9ebad..dab62d1d3a6e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4674,6 +4674,11 @@ static inline bool prepare_before_alloc(gfp_t *gfp_mask, unsigned int order)
* allocation trigger task_reliable_limit
*/
if (is_global_init(current)) {
+ if (!mem_reliable_counter_initialized()) {
+ *gfp_mask |= ___GFP_RELIABILITY;
+ return true;
+ }
+
if (reliable_mem_limit_check(1 << order) &&
mem_reliable_watermark_ok(1 << order))
*gfp_mask |= ___GFP_RELIABILITY;
--
2.22.0
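The conversion above swaps open-coded per-CPU variables (which had to be summed with for_each_possible_cpu() and could go transiently negative) for the kernel's percpu_counter, which batches per-CPU deltas into a central count. A minimal user-space sketch of that idea — hypothetical names, not the kernel API — looks like:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of a percpu counter: each "CPU" accumulates a local
 * delta; deltas are folded into the shared count once they exceed a batch. */
#define NCPUS 4
#define BATCH 8

struct pcp_counter {
    long count;          /* central, possibly stale, total */
    long delta[NCPUS];   /* per-CPU pending contributions  */
};

static void pcp_add(struct pcp_counter *c, int cpu, long val)
{
    c->delta[cpu] += val;
    if (c->delta[cpu] >= BATCH || c->delta[cpu] <= -BATCH) {
        c->count += c->delta[cpu];   /* fold into the central count */
        c->delta[cpu] = 0;
    }
}

/* Fast read: central count only, clamped at zero (may lag behind). */
static long pcp_read_positive(const struct pcp_counter *c)
{
    return c->count > 0 ? c->count : 0;
}

/* Slow read: fold in every per-CPU delta, clamped at zero. */
static long pcp_sum_positive(const struct pcp_counter *c)
{
    long sum = c->count;
    for (int cpu = 0; cpu < NCPUS; cpu++)
        sum += c->delta[cpu];
    return sum > 0 ? sum : 0;
}
```

In this model, percpu_counter_read_positive() in page_cache_prepare_alloc() corresponds to the cheap pcp_read_positive() (tolerable staleness on a hot path), while percpu_counter_sum_positive() in the reporting paths corresponds to the exact pcp_sum_positive().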
Hello!
The openEuler Kernel SIG invites you to a Zoom meeting (auto-recorded) to be held at 2022-03-18 14:00.
Subject: openEuler kernel biweekly meeting & technical sharing
Agenda: openEuler kernel technical sharing: an introduction to the kernel interrupt subsystem
Meeting link: https://us06web.zoom.us/j/82640076508?pwd=MktwWWxWR3ZqNFF4SlgwdGhwNlNDZz09
Note: You are advised to change your participant name after joining the meeting, or to use your gitee.com ID.
More information: https://openeuler.org/zh/ (Chinese) or https://openeuler.org/en/ (English)
[PATCH openEuler-1.0-LTS] nfc: st21nfca: Fix potential buffer overflows in EVT_TRANSACTION
by Laibin Qiu 14 Mar '22
From: Jordy Zomer <jordy(a)pwning.systems>
mainline inclusion
from mainline-v5.17-rc1
commit 4fbcc1a4cb20fe26ad0225679c536c80f1648221
category: bugfix
bugzilla: 186393
CVE: CVE-2022-26490
-------------------------------------------------
It appears that there are some buffer overflows in EVT_TRANSACTION.
This happens because the length parameters that are passed to memcpy
come directly from skb->data and are not guarded in any way.
Signed-off-by: Jordy Zomer <jordy(a)pwning.systems>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski(a)canonical.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Huang Guobin <huangguobin4(a)huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
---
drivers/nfc/st21nfca/se.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/drivers/nfc/st21nfca/se.c b/drivers/nfc/st21nfca/se.c
index fd967a38a94a..ced3c20d6453 100644
--- a/drivers/nfc/st21nfca/se.c
+++ b/drivers/nfc/st21nfca/se.c
@@ -332,6 +332,11 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host,
return -ENOMEM;
transaction->aid_len = skb->data[1];
+
+ /* Checking if the length of the AID is valid */
+ if (transaction->aid_len > sizeof(transaction->aid))
+ return -EINVAL;
+
memcpy(transaction->aid, &skb->data[2],
transaction->aid_len);
@@ -341,6 +346,11 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host,
return -EPROTO;
transaction->params_len = skb->data[transaction->aid_len + 3];
+
+ /* Total size is allocated (skb->len - 2) minus fixed array members */
+ if (transaction->params_len > ((skb->len - 2) - sizeof(struct nfc_evt_transaction)))
+ return -EINVAL;
+
memcpy(transaction->params, skb->data +
transaction->aid_len + 4, transaction->params_len);
--
2.22.0
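The fix above follows the standard pattern for length fields taken straight from the wire: validate the claimed length against both the destination's capacity and the remaining source bytes before the memcpy(). A self-contained sketch of that pattern, with hypothetical structure names standing in for the driver's:

```c
#include <assert.h>
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical stand-ins for the driver's structures: a fixed-size
 * destination and a variable-length source buffer (like skb->data). */
#define AID_MAX 16

struct transaction {
    uint8_t aid_len;
    uint8_t aid[AID_MAX];
};

/* Copy an AID whose length byte comes straight from the wire.
 * Returns 0 on success, -1 if the claimed length cannot be trusted. */
static int copy_aid(struct transaction *t, const uint8_t *data, size_t data_len)
{
    if (data_len < 2)
        return -1;                    /* no room for tag + length byte */

    t->aid_len = data[1];

    /* The length field is attacker-controlled: reject anything that
     * would overflow the fixed-size destination or read past the end
     * of the source buffer. */
    if (t->aid_len > sizeof(t->aid) || (size_t)t->aid_len > data_len - 2)
        return -1;

    memcpy(t->aid, &data[2], t->aid_len);
    return 0;
}
```

The same two checks — destination capacity and remaining source length — are what the patch adds for both aid_len and params_len.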
From Linux 4.19.222 to Linux 4.19.227.
Some patches with conflicts have not been included yet:
8a8908cb82568 fuse: fix live lock in fuse_iget()
1e1bb4933f1fa fuse: fix bad inode
bd6e97e2b6f59 veth: Do not record rx queue hint in veth_xmit
1c3564fca0e7b xfs: map unwritten blocks in XFS_IOC_{ALLOC,FREE}SP just
like fallocate
f9dfa44be0fb5 f2fs: fix to do sanity check on last xattr entry in
__f2fs_setxattr()
9c6159ee8fc9d net: accept UFOv6 packages in virtio_net_hdr_to_skb
57f93eaff49df block, bfq: fix use after free in bfq_bfqq_expire
99ada24490c34 block, bfq: fix queue removal from weights tree
7d0efcc69c75a block, bfq: fix decrement of num_active_groups
e867d620470af block, bfq: fix asymmetric scenarios detection
e4cd53c650bef block, bfq: improve asymmetric scenarios detection
They will be sent, with the conflicts fixed, by the analyser.
Andrew Cooper (1):
x86/pkey: Fix undefined behaviour with PKRU_WD_BIT
Andrey Ryabinin (1):
cputime, cpuacct: Include guest time in user time in cpuacct.stat
Antoine Tenart (1):
net-sysfs: update the queue counts in the unregistration path
Antony Antony (2):
xfrm: interface with if_id 0 should return error
xfrm: state and policy should fail if XFRMA_IF_ID 0
Arnd Bergmann (1):
dmaengine: pxa/mmp: stop referencing config->slave_id
Bart Van Assche (1):
scsi: ufs: Fix race conditions related to driver data
Chen Jun (1):
tpm: add request_locality before write TPM_INT_ENABLE
Chengfeng Ye (1):
crypto: qce - fix uaf on qce_ahash_register_one
Christoph Hellwig (1):
scsi: sr: Don't use GFP_DMA
Coco Li (1):
udp: using datalen to cap ipv6 udp max gso segments
David Ahern (4):
ipv6: Check attribute length for RTA_GATEWAY in multipath route
ipv6: Check attribute length for RTA_GATEWAY when deleting multipath
route
ipv6: Continue processing multipath route even if gateway attribute is
invalid
ipv6: Do cleanup if attribute validation fails in multipath route
Doyle, Patrick (1):
mtd: nand: bbt: Fix corner case in bad block table handling
Eric Dumazet (3):
xfrm: fix a small bug in xfrm_sa_len()
af_unix: annote lockless accesses to unix_tot_inflight &
gc_in_progress
netns: add schedule point in ops_exit_list()
Fernando Fernandez Mancera (1):
bonding: fix ad_actor_system option setting to default
Florian Westphal (1):
netfilter: bridge: add support for pppoe filtering
Gang Li (1):
shmem: fix a race between shmem_unused_huge_shrink and
shmem_evict_inode
Hector Martin (1):
iommu/io-pgtable-arm: Fix table descriptor paddr formatting
Jan Kara (4):
ext4: avoid trim error on fs with small groups
ext4: make sure to reset inode lockdep class when quota enabling fails
ext4: make sure quota gets properly shutdown on error
select: Fix indefinitely sleeping task in poll_schedule_timeout()
Joe Thornber (2):
dm btree: add a defensive bounds check to insert_at()
dm space map common: add bounds check to sm_ll_lookup_bitmap()
Kyeong Yoo (1):
jffs2: GC deadlock reading a page that is used in jffs2_write_begin()
Li Hua (1):
sched/rt: Try to restart rt period timer when rt runtime exceeded
Lino Sanfilippo (1):
serial: amba-pl011: do not request memory region twice
Lixiaokeng (1):
scsi: libiscsi: Fix UAF in
iscsi_conn_get_param()/iscsi_conn_teardown()
Lizhi Hou (1):
tty: serial: uartlite: allow 64 bit address
Lukas Wunner (1):
serial: Fix incorrect rs485 polarity on uart open
Luís Henriques (1):
ext4: set csum seed in tmp inode while migrating to extents
Marek Vasut (1):
crypto: stm32/crc32 - Fix kernel BUG triggered in probe()
Muchun Song (1):
net: fix use-after-free in tw_timer_handler
Naveen N. Rao (2):
tracing: Fix check for trace_percpu_buffer validity in get_trace_buf()
tracing: Tag trace_percpu_buffer as a percpu pointer
Nicolas Toromanoff (1):
crypto: stm32/cryp - fix double pm exit
Paolo Abeni (1):
bpf: Do not WARN in bpf_warn_invalid_xdp_action()
Pavel Skripkin (1):
net: mcs7830: handle usb read errors properly
Rafael J. Wysocki (2):
ACPICA: Utilities: Avoid deleting the same object twice in a row
ACPICA: Executer: Fix the REFCLASS_REFOF case in
acpi_ex_opcode_1A_0T_1R()
Suresh Kumar (1):
net: bonding: debug: avoid printing debug logs when bond is not
notifying peers
Thadeu Lima de Souza Cascardo (2):
ipmi: bail out if init_srcu_struct fails
ipmi: fix initialization when workqueue allocation fails
Theodore Ts'o (1):
ext4: don't use the orphan list when migrating an inode
Thomas Gleixner (1):
can: bcm: switch timer to HRTIMER_MODE_SOFT and remove hrtimer_tasklet
Tom Rix (1):
selinux: initialize proto variable in selinux_ip_postroute_compat()
Willem de Bruijn (1):
net: skip virtio_net_hdr_set_proto if protocol already set
William Zhao (1):
ip6_vti: initialize __ip6_tnl_parm struct in vti6_siocdevprivate
Wu Bo (1):
ipmi: Fix UAF when uninstall ipmi_si and ipmi_msghandler module
Xin Xiong (1):
netfilter: ipt_CLUSTERIP: fix refcount leak in clusterip_tg_check()
Documentation/networking/bonding.txt | 11 +-
arch/x86/include/asm/pgtable.h | 4 +-
drivers/acpi/acpica/exoparg1.c | 3 +-
drivers/acpi/acpica/utdelete.c | 1 +
drivers/char/ipmi/ipmi_msghandler.c | 21 ++-
drivers/char/tpm/tpm_tis_core.c | 8 +
drivers/crypto/qce/sha.c | 2 +-
drivers/crypto/stm32/stm32-cryp.c | 2 -
drivers/crypto/stm32/stm32_crc32.c | 4 +-
drivers/dma/mmp_pdma.c | 6 -
drivers/dma/pxa_dma.c | 7 -
drivers/iommu/io-pgtable-arm.c | 9 +-
drivers/md/persistent-data/dm-btree.c | 8 +-
.../md/persistent-data/dm-space-map-common.c | 5 +
drivers/mtd/nand/bbt.c | 2 +-
drivers/net/bonding/bond_main.c | 6 +-
drivers/net/bonding/bond_options.c | 2 +-
drivers/net/usb/mcs7830.c | 12 +-
drivers/scsi/libiscsi.c | 6 +-
drivers/scsi/sr.c | 2 +-
drivers/scsi/sr_vendor.c | 4 +-
drivers/scsi/ufs/tc-dwc-g210-pci.c | 1 -
drivers/scsi/ufs/ufshcd-pltfrm.c | 2 -
drivers/scsi/ufs/ufshcd.c | 7 +
drivers/tty/serial/amba-pl011.c | 27 +--
drivers/tty/serial/serial_core.c | 4 +-
drivers/tty/serial/uartlite.c | 2 +-
fs/ext4/ioctl.c | 2 -
fs/ext4/mballoc.c | 8 +
fs/ext4/migrate.c | 23 ++-
fs/ext4/super.c | 23 ++-
fs/jffs2/file.c | 40 +++--
fs/select.c | 63 +++----
include/linux/virtio_net.h | 3 +
kernel/sched/cputime.c | 4 +-
kernel/sched/rt.c | 23 ++-
kernel/trace/trace.c | 6 +-
mm/shmem.c | 37 +++--
net/bridge/br_netfilter_hooks.c | 7 +-
net/can/bcm.c | 156 ++++++------------
net/core/filter.c | 6 +-
net/core/net-sysfs.c | 3 +
net/core/net_namespace.c | 4 +-
net/ipv4/af_inet.c | 10 +-
net/ipv4/netfilter/ipt_CLUSTERIP.c | 5 +-
net/ipv6/ip6_vti.c | 2 +
net/ipv6/route.c | 28 +++-
net/ipv6/udp.c | 2 +-
net/unix/garbage.c | 14 +-
net/unix/scm.c | 6 +-
net/xfrm/xfrm_interface.c | 14 +-
net/xfrm/xfrm_user.c | 23 ++-
security/selinux/hooks.c | 2 +-
53 files changed, 375 insertions(+), 307 deletions(-)
--
2.22.0
[PATCH openEuler-5.10 1/3] net/hinic: Fix null pointer dereference in hinic_physical_port_id
by Zheng Zengkai 11 Mar '22
From: Chiqijun <chiqijun(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I4XF98
CVE: NA
-----------------------------------------------------------------------
The hinic driver currently triggers a NULL pointer dereference
when the hinicadm tool command is run during device probe.
This happens because the hinicadm process accesses the NULL hwif
pointer in the hwdev, which has not yet been allocated during probe.
Fix this by checking the initialization state of the device before
accessing it.
Signed-off-by: Chiqijun <chiqijun(a)huawei.com>
Reviewed-by: Wangxiaoyun <cloud.wangxiaoyun(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
drivers/net/ethernet/huawei/hinic/hinic_lld.c | 30 ++++++++++++++-----
1 file changed, 22 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_lld.c b/drivers/net/ethernet/huawei/hinic/hinic_lld.c
index bea0c7ef51e8..9d39da0c76d4 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_lld.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_lld.c
@@ -801,8 +801,7 @@ static bool __is_pcidev_match_chip_name(const char *ifname,
if (dev->init_state < HINIC_INIT_STATE_HW_PART_INITED)
return false;
} else {
- if (dev->init_state >=
- HINIC_INIT_STATE_HW_PART_INITED &&
+ if (dev->init_state < HINIC_INIT_STATE_HW_PART_INITED ||
hinic_func_type(dev->hwdev) != type)
return false;
}
@@ -1153,6 +1152,10 @@ void *hinic_get_ppf_hwdev_by_pdev(struct pci_dev *pdev)
chip_node = pci_adapter->chip_node;
lld_dev_hold();
list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (test_bit(HINIC_FUNC_IN_REMOVE, &dev->flag) ||
+ dev->init_state < HINIC_INIT_STATE_HW_IF_INITED)
+ continue;
+
if (dev->hwdev && hinic_func_type(dev->hwdev) == TYPE_PPF) {
lld_dev_put();
return dev->hwdev;
@@ -1365,6 +1368,10 @@ int hinic_get_pf_id(void *hwdev, u32 port_id, u32 *pf_id, u32 *isvalid)
lld_dev_hold();
list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (test_bit(HINIC_FUNC_IN_REMOVE, &dev->flag) ||
+ dev->init_state < HINIC_INIT_STATE_HWDEV_INITED)
+ continue;
+
if (hinic_physical_port_id(dev->hwdev) == port_id) {
*pf_id = hinic_global_func_id(dev->hwdev);
*isvalid = 1;
@@ -1852,7 +1859,8 @@ static void send_event_to_all_pf(struct hinic_pcidev *dev,
lld_dev_hold();
list_for_each_entry(des_dev, &dev->chip_node->func_list, node) {
- if (test_bit(HINIC_FUNC_IN_REMOVE, &des_dev->flag))
+ if (test_bit(HINIC_FUNC_IN_REMOVE, &des_dev->flag) ||
+ des_dev->init_state < HINIC_INIT_STATE_HW_IF_INITED)
continue;
if (hinic_func_type(des_dev->hwdev) == TYPE_VF)
@@ -1870,7 +1878,8 @@ static void send_event_to_dst_pf(struct hinic_pcidev *dev, u16 func_id,
lld_dev_hold();
list_for_each_entry(des_dev, &dev->chip_node->func_list, node) {
- if (test_bit(HINIC_FUNC_IN_REMOVE, &des_dev->flag))
+ if (test_bit(HINIC_FUNC_IN_REMOVE, &des_dev->flag) ||
+ des_dev->init_state < HINIC_INIT_STATE_HW_IF_INITED)
continue;
if (hinic_func_type(des_dev->hwdev) == TYPE_VF)
@@ -2637,8 +2646,11 @@ static void slave_host_init_delay_work(struct work_struct *work)
/* Make sure the PPF must be the first one */
lld_dev_hold();
list_for_each_entry(ppf_pcidev, &chip_node->func_list, node) {
- if (ppf_pcidev &&
- hinic_func_type(ppf_pcidev->hwdev) == TYPE_PPF) {
+ if (test_bit(HINIC_FUNC_IN_REMOVE, &ppf_pcidev->flag) ||
+ ppf_pcidev->init_state < HINIC_INIT_STATE_HW_IF_INITED)
+ continue;
+
+ if (hinic_func_type(ppf_pcidev->hwdev) == TYPE_PPF) {
found = 1;
break;
}
@@ -2872,7 +2884,8 @@ int hinic_register_micro_log(struct hinic_micro_log_info *micro_log_info)
lld_dev_hold();
list_for_each_entry(chip_node, &g_hinic_chip_list, node) {
list_for_each_entry(dev, &chip_node->func_list, node) {
- if (test_bit(HINIC_FUNC_IN_REMOVE, &dev->flag))
+ if (test_bit(HINIC_FUNC_IN_REMOVE, &dev->flag) ||
+ dev->init_state < HINIC_INIT_STATE_HW_IF_INITED)
continue;
if (hinic_func_type(dev->hwdev) == TYPE_PPF) {
@@ -2902,7 +2915,8 @@ void hinic_unregister_micro_log(struct hinic_micro_log_info *micro_log_info)
lld_dev_hold();
list_for_each_entry(chip_node, &g_hinic_chip_list, node) {
list_for_each_entry(dev, &chip_node->func_list, node) {
- if (test_bit(HINIC_FUNC_IN_REMOVE, &dev->flag))
+ if (test_bit(HINIC_FUNC_IN_REMOVE, &dev->flag) ||
+ dev->init_state < HINIC_INIT_STATE_HW_IF_INITED)
continue;
if (hinic_func_type(dev->hwdev) == TYPE_PPF)
--
2.20.1
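Each list walk in the patch gets the same guard: skip any function that is mid-removal or whose init_state has not yet reached the level its fields require. A simplified user-space sketch of that guard, with hypothetical state names modeled on the driver's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model of the driver's per-function init state machine. */
enum init_state {
    STATE_NONE,
    STATE_HW_IF_INITED,
    STATE_HWDEV_INITED,
};

struct func_dev {
    enum init_state init_state;
    bool in_remove;
    int port_id;
};

/* Find the first function on the given port that is safe to touch:
 * fully initialized and not mid-removal.  Returns its index or -1. */
static int find_usable_func(const struct func_dev *devs, size_t n, int port_id)
{
    for (size_t i = 0; i < n; i++) {
        if (devs[i].in_remove ||
            devs[i].init_state < STATE_HWDEV_INITED)
            continue;                 /* not ready: skip, don't dereference */
        if (devs[i].port_id == port_id)
            return (int)i;
    }
    return -1;
}
```

The point of the ordering check is that fields such as hwdev->hwif only become valid once the state machine passes the corresponding threshold, so any concurrent reader must test the state before dereferencing.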
[PATCH openEuler-1.0-LTS 1/3] net: hns3: fix pf vlan filter out of work after self test
by Laibin Qiu 11 Mar '22
From: Yonglong Liu <liuyonglong(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4W90Z
CVE: NA
----------------------------
After a self test, the PF's VLAN filter is wrongly disabled,
leaving the VLAN filter out of work. The second parameter of
enable_vlan_filter() must be true; enable_vlan_filter() itself
then decides whether to enable the VLAN filter or keep its
current state.
Fixes: 79549957e66f ("Revert: net: hns3: adds support for extended VLAN mode and 'QOS' in vlan 802.1Q protocol.")
Signed-off-by: Yonglong Liu <liuyonglong(a)huawei.com>
Reviewed-by: li yongxin <liyongxin1(a)huawei.com>
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
---
drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
index ef870da7a8a4..be771c75e37e 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
@@ -395,8 +395,7 @@ static void hns3_self_test(struct net_device *ndev,
#if IS_ENABLED(CONFIG_VLAN_8021Q)
if (dis_vlan_filter)
- h->ae_algo->ops->enable_vlan_filter(h,
- ndev->flags & IFF_PROMISC);
+ h->ae_algo->ops->enable_vlan_filter(h, true);
#endif
if (if_running)
--
2.22.0
From: Nick Desaulniers <ndesaulniers(a)google.com>
mainline inclusion
from mainline-v5.8-rc1
commit a194c33f45f83068ef13bf1d16e26d4ca3ecc098
category: bugfix
bugzilla: 89150
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
Will reported a UBSAN warning:
UBSAN: null-ptr-deref in arch/arm64/kernel/smp.c:596:6
member access within null pointer of type 'struct acpi_madt_generic_interrupt'
CPU: 0 PID: 0 Comm: swapper Not tainted 5.7.0-rc6-00124-g96bc42ff0a82 #1
Call trace:
dump_backtrace+0x0/0x384
show_stack+0x28/0x38
dump_stack+0xec/0x174
handle_null_ptr_deref+0x134/0x174
__ubsan_handle_type_mismatch_v1+0x84/0xa4
acpi_parse_gic_cpu_interface+0x60/0xe8
acpi_parse_entries_array+0x288/0x498
acpi_table_parse_entries_array+0x178/0x1b4
acpi_table_parse_madt+0xa4/0x110
acpi_parse_and_init_cpus+0x38/0x100
smp_init_cpus+0x74/0x258
setup_arch+0x350/0x3ec
start_kernel+0x98/0x6f4
This is from the use of ACPI_OFFSET in
arch/arm64/include/asm/acpi.h. Replace its use with offsetof from
include/linux/stddef.h, which should implement the same logic using
__builtin_offsetof, so that UBSAN won't warn.
Reported-by: Will Deacon <will(a)kernel.org>
Suggested-by: Ard Biesheuvel <ardb(a)kernel.org>
Signed-off-by: Nick Desaulniers <ndesaulniers(a)google.com>
Reviewed-by: Jeremy Linton <jeremy.linton(a)arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi(a)arm.com>
Cc: stable(a)vger.kernel.org
Link: https://lore.kernel.org/lkml/20200521100952.GA5360@willie-the-truck/
Link: https://lore.kernel.org/r/20200608203818.189423-1-ndesaulniers@google.com
Signed-off-by: Will Deacon <will(a)kernel.org>
Signed-off-by: linyujun <linyujun809(a)huawei.com>
Reviewed-by: chenlifu <chenlifu(a)huawei.com>
Reviewed-by: He Ying <heying24(a)huawei.com>
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
---
arch/arm64/include/asm/acpi.h | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
index f364115ac2ef..348f12447ecd 100644
--- a/arch/arm64/include/asm/acpi.h
+++ b/arch/arm64/include/asm/acpi.h
@@ -15,6 +15,7 @@
#include <linux/efi.h>
#include <linux/memblock.h>
#include <linux/psci.h>
+#include <linux/stddef.h>
#include <asm/cputype.h>
#include <asm/io.h>
@@ -33,14 +34,14 @@
* is therefore used to delimit the MADT GICC structure minimum length
* appropriately.
*/
-#define ACPI_MADT_GICC_MIN_LENGTH ACPI_OFFSET( \
+#define ACPI_MADT_GICC_MIN_LENGTH offsetof( \
struct acpi_madt_generic_interrupt, efficiency_class)
#define BAD_MADT_GICC_ENTRY(entry, end) \
(!(entry) || (entry)->header.length < ACPI_MADT_GICC_MIN_LENGTH || \
(unsigned long)(entry) + (entry)->header.length > (end))
-#define ACPI_MADT_GICC_SPE (ACPI_OFFSET(struct acpi_madt_generic_interrupt, \
+#define ACPI_MADT_GICC_SPE (offsetof(struct acpi_madt_generic_interrupt, \
spe_interrupt) + sizeof(u16))
/* Basic configuration for ACPI */
--
2.22.0
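ACPI_OFFSET computes member offsets by casting a null pointer and taking the member's address, which is the exact null-pointer member access UBSAN instruments; offsetof from <stddef.h> yields the same value via __builtin_offsetof with no null dereference. A small illustration with a toy struct (not the real MADT layout):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Toy struct standing in for acpi_madt_generic_interrupt. */
struct gicc {
    uint16_t header;
    uint32_t flags;
    uint8_t  efficiency_class;
};

/* The old ACPI_OFFSET-style macro dereferences a (T *)0 pointer.  It
 * usually computes the right value, but it is undefined behaviour and
 * is exactly the pattern UBSAN flags — shown here, deliberately unused:
 *
 *   #define UB_OFFSET(T, f) ((size_t) &((T *) 0)->f)
 */

/* The fix: offsetof, defined in terms of __builtin_offsetof on modern
 * compilers, gives the same offset without the null dereference. */
#define MIN_LENGTH offsetof(struct gicc, efficiency_class)

/* Mirrors the length check done by BAD_MADT_GICC_ENTRY(). */
static bool gicc_entry_ok(size_t entry_len)
{
    return entry_len >= MIN_LENGTH;
}
```

Because __builtin_offsetof is a compiler primitive rather than pointer arithmetic on address 0, the sanitizer has nothing to instrument and the generated code is identical.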
[PATCH openEuler-1.0-LTS] sched: Fix sleeping in atomic context at cpu_qos_write()
by Laibin Qiu 11 Mar '22
From: Zhang Qiao <zhangqiao22(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4WOPM
CVE: NA
--------------------------------
cfs_bandwidth_usage_inc() needs to hold jump_label_mutex and
might sleep, so we cannot call it in atomic context.
Fix this by moving cfs_bandwidth_usage_{inc,dec}() out of the
RCU read-side critical section.
Fixes: f7b390cd929 ("sched: Change cgroup task scheduler policy")
Signed-off-by: Zhang Qiao <zhangqiao22(a)huawei.com>
Reviewed-by: Chen Hui <judy.chenhui(a)huawei.com>
Reviewed-by: Wang Weiyang <wangweiyang2(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
---
kernel/sched/core.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b36a3b4c60e9..496ce71f93a7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6981,13 +6981,10 @@ static int tg_change_scheduler(struct task_group *tg, void *data)
struct cgroup_subsys_state *css = &tg->css;
tg->qos_level = qos_level;
- if (qos_level == -1) {
+ if (qos_level == -1)
policy = SCHED_IDLE;
- cfs_bandwidth_usage_inc();
- } else {
+ else
policy = SCHED_NORMAL;
- cfs_bandwidth_usage_dec();
- }
param.sched_priority = 0;
css_task_iter_start(css, 0, &it);
@@ -7015,6 +7012,13 @@ static int cpu_qos_write(struct cgroup_subsys_state *css,
if (tg->qos_level == -1 && qos_level == 0)
return -EINVAL;
+ cpus_read_lock();
+ if (qos_level == -1)
+ cfs_bandwidth_usage_inc();
+ else
+ cfs_bandwidth_usage_dec();
+ cpus_read_unlock();
+
rcu_read_lock();
walk_tg_tree_from(tg, tg_change_scheduler, tg_nop, (void *)(&qos_level));
rcu_read_unlock();
--
2.22.0
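The shape of the fix is generic: any call that may sleep (here cfs_bandwidth_usage_{inc,dec}(), which take jump_label_mutex) must happen before entering the atomic region, not inside it. A toy user-space model of that rule, with stand-in functions tracking "atomic context":

```c
#include <assert.h>
#include <stdbool.h>

/* Tiny model of the kernel's atomic-context tracking: might_sleep()
 * must never be reached while in_atomic is set. */
static bool in_atomic;
static int sleeps_in_atomic;   /* counts "sleeping while atomic" bugs */

static void might_sleep(void)     { if (in_atomic) sleeps_in_atomic++; }
static void rcu_read_lock(void)   { in_atomic = true;  }
static void rcu_read_unlock(void) { in_atomic = false; }

/* Stand-in for cfs_bandwidth_usage_inc(): takes a mutex, may sleep. */
static void bandwidth_usage_inc(void) { might_sleep(); }

/* Buggy shape: the sleeping call sits inside the RCU section. */
static void write_buggy(void)
{
    rcu_read_lock();
    bandwidth_usage_inc();    /* sleeping while atomic */
    rcu_read_unlock();
}

/* Fixed shape (as in the patch): do the sleeping work first. */
static void write_fixed(void)
{
    bandwidth_usage_inc();    /* safe: not yet in atomic context */
    rcu_read_lock();
    /* walk_tg_tree_from(...) would go here */
    rcu_read_unlock();
}
```

In the real patch the sleeping pair is additionally bracketed by cpus_read_lock()/cpus_read_unlock(), which may also sleep and therefore likewise belongs outside the RCU section.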
[PATCH openEuler-1.0-LTS 0/5] io_uring: fix request endless wait because of resources
by Laibin Qiu 11 Mar '22
bugzilla: 186136, https://gitee.com/openeuler/kernel/issues/I4RM1D
Colin Ian King (1):
io_uring: remove redundant initialization of variable ret
Jens Axboe (3):
io_uring: re-issue block requests that failed because of resources
io_uring: don't double complete failed reissue request
io_uring: don't re-setup vecs/iter in io_resumit_prep() is already
there
Pavel Begunkov (1):
block: don't ignore REQ_NOWAIT for direct IO
fs/block_dev.c | 5 +++
fs/io_uring.c | 100 ++++++++++++++++++++++++++++++++++++++++++++++++-
2 files changed, 104 insertions(+), 1 deletion(-)
--
2.22.0
[PATCH openEuler-5.10 01/48] Revert "blk-mq, elevator: Count requests per hctx to improve performance"
by Zheng Zengkai 11 Mar '22
From: Jan Kara <jack(a)suse.cz>
mainline inclusion
from mainline-5.12-rc1
commit 5ac83c644f5fb924f0b2c09102ab82fc788f8411
category: perf
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SW26
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
-------------------------------------------------
This reverts commit b445547ec1bbd3e7bf4b1c142550942f70527d95.
Since both mq-deadline and BFQ completely ignore the hctx that is
passed to their dispatch function and dispatch whatever request they
deem fit, checking whether any request for a particular hctx is queued
is just pointless: we'll very likely get a request from a different
hctx anyway. In the following commit we'll deal with lock contention
in these IO schedulers in the presence of multiple HW queues in a
different way.
Signed-off-by: Jan Kara <jack(a)suse.cz>
Reviewed-by: Ming Lei <ming.lei(a)redhat.com>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Conflicts:
block/bfq-iosched.c
Signed-off-by: Baokun Li <libaokun1(a)huawei.com>
Reviewed-by: Hou Tao <houtao1(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
block/bfq-iosched.c | 5 -----
block/blk-mq.c | 1 -
block/mq-deadline.c | 6 ------
include/linux/blk-mq.h | 4 ----
4 files changed, 16 deletions(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 70f2aeadd21c..2247db842985 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -4651,9 +4651,6 @@ static bool bfq_has_work(struct blk_mq_hw_ctx *hctx)
{
struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
- if (!atomic_read(&hctx->elevator_queued))
- return false;
-
/*
* Avoiding lock: a race on bfqd->busy_queues should cause at
* most a call to dispatch for nothing
@@ -5570,7 +5567,6 @@ static void bfq_insert_requests(struct blk_mq_hw_ctx *hctx,
rq = list_first_entry(list, struct request, queuelist);
list_del_init(&rq->queuelist);
bfq_insert_request(hctx, rq, at_head);
- atomic_inc(&hctx->elevator_queued);
}
}
@@ -5935,7 +5931,6 @@ static void bfq_finish_requeue_request(struct request *rq)
bfq_update_inject_limit(bfqd, bfqq);
bfq_completed_request(bfqq, bfqd);
- atomic_dec(&rq->mq_hctx->elevator_queued);
}
bfq_finish_requeue_request_body(bfqq);
spin_unlock_irqrestore(&bfqd->lock, flags);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index b8cf684030dc..8428624ac42f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2748,7 +2748,6 @@ blk_mq_alloc_hctx(struct request_queue *q, struct blk_mq_tag_set *set,
goto free_hctx;
atomic_set(&hctx->nr_active, 0);
- atomic_set(&hctx->elevator_queued, 0);
if (node == NUMA_NO_NODE)
node = set->numa_node;
hctx->numa_node = node;
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 43994cce1eb2..42b6e9dbe7c7 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -386,8 +386,6 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
spin_lock(&dd->lock);
rq = __dd_dispatch_request(dd);
spin_unlock(&dd->lock);
- if (rq)
- atomic_dec(&rq->mq_hctx->elevator_queued);
return rq;
}
@@ -539,7 +537,6 @@ static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
rq = list_first_entry(list, struct request, queuelist);
list_del_init(&rq->queuelist);
dd_insert_request(hctx, rq, at_head);
- atomic_inc(&hctx->elevator_queued);
}
spin_unlock(&dd->lock);
}
@@ -586,9 +583,6 @@ static bool dd_has_work(struct blk_mq_hw_ctx *hctx)
{
struct deadline_data *dd = hctx->queue->elevator->elevator_data;
- if (!atomic_read(&hctx->elevator_queued))
- return false;
-
return !list_empty_careful(&dd->dispatch) ||
!list_empty_careful(&dd->fifo_list[0]) ||
!list_empty_careful(&dd->fifo_list[1]);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 3134aaf9032a..adcbef9705ca 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -141,10 +141,6 @@ struct blk_mq_hw_ctx {
* shared across request queues.
*/
atomic_t nr_active;
- /**
- * @elevator_queued: Number of queued requests on hctx.
- */
- atomic_t elevator_queued;
/** @cpuhp_online: List to store request if CPU is going to die */
struct hlist_node cpuhp_online;
--
2.20.1
From: Chiqijun <chiqijun(a)huawei.com>
driver inclusion
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I4WWH4
CVE: NA
-----------------------------------------------------------------------
When hinic_remove is executed concurrently, chip_node is double freed.
Signed-off-by: Chiqijun <chiqijun(a)huawei.com>
Reviewed-by: Wangxiaoyun <cloud.wangxiaoyun(a)huawei.com>
Acked-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
drivers/net/ethernet/huawei/hinic/hinic_lld.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_lld.c b/drivers/net/ethernet/huawei/hinic/hinic_lld.c
index 6c960cecf101..bea0c7ef51e8 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_lld.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_lld.c
@@ -119,6 +119,7 @@ struct hinic_pcidev {
bool nic_des_enable;
struct timer_list syncfw_time_timer;
+ int card_id;
};
#define HINIC_EVENT_PROCESS_TIMEOUT 10000
@@ -2099,6 +2100,9 @@ static void free_chip_node(struct hinic_pcidev *pci_adapter)
u32 id;
int err;
+ if (!(card_bit_map & BIT(pci_adapter->card_id)))
+ return;
+
if (list_empty(&chip_node->func_list)) {
list_del(&chip_node->node);
sdk_info(&pci_adapter->pcidev->dev,
@@ -2701,6 +2705,9 @@ static int hinic_probe(struct pci_dev *pdev, const struct pci_device_id *id)
goto alloc_chip_node_fail;
}
+ sscanf(pci_adapter->chip_node->chip_name, HINIC_CHIP_NAME "%d",
+ &pci_adapter->card_id);
+
err = nictool_k_init();
if (err) {
sdk_warn(&pdev->dev, "Failed to init nictool");
--
2.20.1
[PATCH openEuler-1.0-LTS] sched: Fix sleeping in atomic context at cpu_qos_write()
by Yang Yingliang 10 Mar '22
From: Zhang Qiao <zhangqiao22(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4WOPM
CVE: NA
--------------------------------
cfs_bandwidth_usage_inc() needs to hold jump_label_mutex and
might sleep, so we cannot call it in atomic context.
Fix this by moving cfs_bandwidth_usage_{inc,dec}() out of the
RCU read-side critical section.
Fixes: f7b390cd929 ("sched: Change cgroup task scheduler policy")
Signed-off-by: Zhang Qiao <zhangqiao22(a)huawei.com>
Reviewed-by: Chen Hui <judy.chenhui(a)huawei.com>
Reviewed-by: Wang Weiyang <wangweiyang2(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/sched/core.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b36a3b4c60e9c..496ce71f93a7a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6981,13 +6981,10 @@ static int tg_change_scheduler(struct task_group *tg, void *data)
struct cgroup_subsys_state *css = &tg->css;
tg->qos_level = qos_level;
- if (qos_level == -1) {
+ if (qos_level == -1)
policy = SCHED_IDLE;
- cfs_bandwidth_usage_inc();
- } else {
+ else
policy = SCHED_NORMAL;
- cfs_bandwidth_usage_dec();
- }
param.sched_priority = 0;
css_task_iter_start(css, 0, &it);
@@ -7015,6 +7012,13 @@ static int cpu_qos_write(struct cgroup_subsys_state *css,
if (tg->qos_level == -1 && qos_level == 0)
return -EINVAL;
+ cpus_read_lock();
+ if (qos_level == -1)
+ cfs_bandwidth_usage_inc();
+ else
+ cfs_bandwidth_usage_dec();
+ cpus_read_unlock();
+
rcu_read_lock();
walk_tg_tree_from(tg, tg_change_scheduler, tg_nop, (void *)(&qos_level));
rcu_read_unlock();
--
2.25.1
09 Mar '22
From: "Darrick J. Wong" <darrick.wong(a)oracle.com>
mainline inclusion
from mainline-v5.11-rc4
commit 6da1b4b1ab36d80a3994fd4811c8381de10af604
category: bugfix
bugzilla: 185867 https://gitee.com/openeuler/kernel/issues/I4KIAO
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
When overlayfs is running on top of xfs and the user unlinks a file in
the overlay, overlayfs will create a whiteout inode and ask xfs to
"rename" the whiteout file atop the one being unlinked. If the file
being unlinked loses its one nlink, we then have to put the inode on the
unlinked list.
This requires us to grab the AGI buffer of the whiteout inode to take it
off the unlinked list (which is where whiteouts are created) and to grab
the AGI buffer of the file being deleted. If the whiteout was created
in a higher numbered AG than the file being deleted, we'll lock the AGIs
in the wrong order and deadlock.
Therefore, grab all the AGI locks we think we'll need ahead of time, and
in order of increasing AG number per the locking rules.
Reported-by: wenli xie <wlxie7296(a)gmail.com>
Fixes: 93597ae8dac0 ("xfs: Fix deadlock between AGI and AGF when target_ip exists in xfs_rename()")
Signed-off-by: Darrick J. Wong <darrick.wong(a)oracle.com>
Reviewed-by: Brian Foster <bfoster(a)redhat.com>
Signed-off-by: Guo Xuenan <guoxuenan(a)huawei.com>
Reviewed-by: Lihong Kou <koulihong(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
fs/xfs/libxfs/xfs_dir2.h | 2 --
fs/xfs/libxfs/xfs_dir2_sf.c | 2 +-
fs/xfs/xfs_inode.c | 42 ++++++++++++++++++++++---------------
3 files changed, 26 insertions(+), 20 deletions(-)
diff --git a/fs/xfs/libxfs/xfs_dir2.h b/fs/xfs/libxfs/xfs_dir2.h
index e55378640b05..d03e6098ded9 100644
--- a/fs/xfs/libxfs/xfs_dir2.h
+++ b/fs/xfs/libxfs/xfs_dir2.h
@@ -47,8 +47,6 @@ extern int xfs_dir_lookup(struct xfs_trans *tp, struct xfs_inode *dp,
extern int xfs_dir_removename(struct xfs_trans *tp, struct xfs_inode *dp,
struct xfs_name *name, xfs_ino_t ino,
xfs_extlen_t tot);
-extern bool xfs_dir2_sf_replace_needblock(struct xfs_inode *dp,
- xfs_ino_t inum);
extern int xfs_dir_replace(struct xfs_trans *tp, struct xfs_inode *dp,
struct xfs_name *name, xfs_ino_t inum,
xfs_extlen_t tot);
diff --git a/fs/xfs/libxfs/xfs_dir2_sf.c b/fs/xfs/libxfs/xfs_dir2_sf.c
index 2463b5d73447..8c4f76bba88b 100644
--- a/fs/xfs/libxfs/xfs_dir2_sf.c
+++ b/fs/xfs/libxfs/xfs_dir2_sf.c
@@ -1018,7 +1018,7 @@ xfs_dir2_sf_removename(
/*
* Check whether the sf dir replace operation need more blocks.
*/
-bool
+static bool
xfs_dir2_sf_replace_needblock(
struct xfs_inode *dp,
xfs_ino_t inum)
diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
index af54224ebbe7..b72dd3f67ca7 100644
--- a/fs/xfs/xfs_inode.c
+++ b/fs/xfs/xfs_inode.c
@@ -3212,7 +3212,7 @@ xfs_rename(
struct xfs_trans *tp;
struct xfs_inode *wip = NULL; /* whiteout inode */
struct xfs_inode *inodes[__XFS_SORT_INODES];
- struct xfs_buf *agibp;
+ int i;
int num_inodes = __XFS_SORT_INODES;
bool new_parent = (src_dp != target_dp);
bool src_is_directory = S_ISDIR(VFS_I(src_ip)->i_mode);
@@ -3325,6 +3325,30 @@ xfs_rename(
}
}
+ /*
+ * Lock the AGI buffers we need to handle bumping the nlink of the
+ * whiteout inode off the unlinked list and to handle dropping the
+ * nlink of the target inode. Per locking order rules, do this in
+ * increasing AG order and before directory block allocation tries to
+ * grab AGFs because we grab AGIs before AGFs.
+ *
+ * The (vfs) caller must ensure that if src is a directory then
+ * target_ip is either null or an empty directory.
+ */
+ for (i = 0; i < num_inodes && inodes[i] != NULL; i++) {
+ if (inodes[i] == wip ||
+ (inodes[i] == target_ip &&
+ (VFS_I(target_ip)->i_nlink == 1 || src_is_directory))) {
+ struct xfs_buf *bp;
+ xfs_agnumber_t agno;
+
+ agno = XFS_INO_TO_AGNO(mp, inodes[i]->i_ino);
+ error = xfs_read_agi(mp, tp, agno, &bp);
+ if (error)
+ goto out_trans_cancel;
+ }
+ }
+
/*
* Directory entry creation below may acquire the AGF. Remove
* the whiteout from the unlinked list first to preserve correct
@@ -3377,22 +3401,6 @@ xfs_rename(
* In case there is already an entry with the same
* name at the destination directory, remove it first.
*/
-
- /*
- * Check whether the replace operation will need to allocate
- * blocks. This happens when the shortform directory lacks
- * space and we have to convert it to a block format directory.
- * When more blocks are necessary, we must lock the AGI first
- * to preserve locking order (AGI -> AGF).
- */
- if (xfs_dir2_sf_replace_needblock(target_dp, src_ip->i_ino)) {
- error = xfs_read_agi(mp, tp,
- XFS_INO_TO_AGNO(mp, target_ip->i_ino),
- &agibp);
- if (error)
- goto out_trans_cancel;
- }
-
error = xfs_dir_replace(tp, target_dp, target_name,
src_ip->i_ino, spaceres);
if (error)
--
2.20.1

[PATCH openEuler-1.0-LTS 1/3] block: add a switch for precise iostat accounting
by Yang Yingliang 09 Mar '22
From: Zhang Wensheng <zhangwensheng5(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 39265, https://gitee.com/openeuler/kernel/issues/I4WC06
CVE: NA
-----------------------------------------------
When inflight IOs are slow and no new IOs are issued, we expect
iostat to manifest the IO hang problem. However, after
commit 9c6dea45e6f7 ("block: delete part_round_stats and switch to less
precise counting"), io_tick and time_in_queue are not updated until
the end of an IO, so the avgqu-sz and %util columns of iostat read zero.
To fix it, we could fall back to the implementation before commit
9c6dea45e6f7, but that may cause a performance regression on NVMe
or bio-based devices (due to the overhead of the inflight calculation),
so add a switch to control whether precise iostat accounting is used.
It can be enabled by adding "precise_iostat=1" to the kernel
boot cmdline. When precise accounting is enabled, io_tick and
time_in_queue are updated when /proc/diskstats or
/sys/block/sdX/sdXN/stat is read.
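The "precise_iostat=" hook relies on the kernel's strtobool() to parse the value. A minimal user-space sketch of the semantics the setup hook depends on follows; it covers only the '1'/'y'/'n'/'0' forms shown in the patch (the kernel helper accepts a few more spellings), and all names are illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Minimal sketch of strtobool(): '1'/'y'/'Y' enable, '0'/'n'/'N'
 * disable, anything else is rejected and leaves the result untouched.
 */
static int demo_strtobool(const char *s, bool *res)
{
	if (s == NULL)
		return -1;
	switch (s[0]) {
	case 'y': case 'Y': case '1':
		*res = true;
		return 0;
	case 'n': case 'N': case '0':
		*res = false;
		return 0;
	default:
		return -1;
	}
}

static bool demo_precise_iostat;

/* Mirrors precise_iostat_setup(): flip the global only on valid input. */
static void demo_setup(const char *str)
{
	bool precise;

	if (!demo_strtobool(str, &precise))
		demo_precise_iostat = precise;
}
```

An unparseable value such as "precise_iostat=bogus" simply leaves the flag at its previous (default off) state, matching the early-return shape of the real setup function.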
Fixes: 9c6dea45e6f7 ("block: delete part_round_stats and switch to less precise counting")
Signed-off-by: Zhang Wensheng <zhangwensheng5(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
block/bio.c | 8 ++++++--
block/blk-core.c | 30 +++++++++++++++++++++++++++---
block/blk-merge.c | 2 ++
block/genhd.c | 7 +++++++
block/partition-generic.c | 8 ++++++++
include/linux/blkdev.h | 1 +
6 files changed, 51 insertions(+), 5 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index d94243411ef30..b50d3b59c79b4 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1706,9 +1706,13 @@ void generic_end_io_acct(struct request_queue *q, int req_op,
const int sgrp = op_stat_group(req_op);
int cpu = part_stat_lock();
- update_io_ticks(cpu, part, now);
+ if (precise_iostat) {
+ part_round_stats(q, cpu, part);
+ } else {
+ update_io_ticks(cpu, part, now);
+ part_stat_add(cpu, part, time_in_queue, duration);
+ }
part_stat_add(cpu, part, nsecs[sgrp], jiffies_to_nsecs(duration));
- part_stat_add(cpu, part, time_in_queue, duration);
part_dec_in_flight(q, part, op_is_write(req_op));
part_stat_unlock();
diff --git a/block/blk-core.c b/block/blk-core.c
index 41d0b09e9a673..df733e8caa6a1 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -56,6 +56,20 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(block_unplug);
DEFINE_IDA(blk_queue_ida);
+bool precise_iostat;
+static int __init precise_iostat_setup(char *str)
+{
+ bool precise;
+
+ if (!strtobool(str, &precise)) {
+ precise_iostat = precise;
+ pr_info("precise iostat %d\n", precise_iostat);
+ }
+
+ return 1;
+}
+__setup("precise_iostat=", precise_iostat_setup);
+
/*
* For the allocated request tables
*/
@@ -1700,8 +1714,13 @@ static void part_round_stats_single(struct request_queue *q, int cpu,
struct hd_struct *part, unsigned long now,
unsigned int inflight)
{
- if (inflight)
+ if (inflight) {
+ if (precise_iostat) {
+ __part_stat_add(cpu, part, time_in_queue,
+ inflight * (now - part->stamp));
+ }
__part_stat_add(cpu, part, io_ticks, (now - part->stamp));
+ }
part->stamp = now;
}
@@ -2771,10 +2790,15 @@ void blk_account_io_done(struct request *req, u64 now)
cpu = part_stat_lock();
part = req->part;
- update_io_ticks(cpu, part, jiffies);
+ if (!precise_iostat) {
+ update_io_ticks(cpu, part, jiffies);
+ part_stat_add(cpu, part, time_in_queue,
+ nsecs_to_jiffies64(now - req->start_time_ns));
+ } else {
+ part_round_stats(req->q, cpu, part);
+ }
part_stat_inc(cpu, part, ios[sgrp]);
part_stat_add(cpu, part, nsecs[sgrp], now - req->start_time_ns);
- part_stat_add(cpu, part, time_in_queue, nsecs_to_jiffies64(now - req->start_time_ns));
part_dec_in_flight(req->q, part, rq_data_dir(req));
hd_struct_put(part);
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 4c17c1031e34f..d2fabe1fdf326 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -669,6 +669,8 @@ static void blk_account_io_merge(struct request *req)
cpu = part_stat_lock();
part = req->part;
+ if (precise_iostat)
+ part_round_stats(req->q, cpu, part);
part_dec_in_flight(req->q, part, rq_data_dir(req));
hd_struct_put(part);
diff --git a/block/genhd.c b/block/genhd.c
index 183612cbbd6b7..e7b97fdb41731 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1352,6 +1352,7 @@ static int diskstats_show(struct seq_file *seqf, void *v)
struct hd_struct *hd;
char buf[BDEVNAME_SIZE];
unsigned int inflight[2];
+ int cpu;
/*
if (&disk_to_dev(gp)->kobj.entry == block_class.devices.next)
@@ -1363,6 +1364,12 @@ static int diskstats_show(struct seq_file *seqf, void *v)
disk_part_iter_init(&piter, gp, DISK_PITER_INCL_EMPTY_PART0);
while ((hd = disk_part_iter_next(&piter))) {
+ if (precise_iostat) {
+ cpu = part_stat_lock();
+ part_round_stats(gp->queue, cpu, hd);
+ part_stat_unlock();
+ }
+
part_in_flight(gp->queue, hd, inflight);
seq_printf(seqf, "%4d %7d %s "
"%lu %lu %lu %u "
diff --git a/block/partition-generic.c b/block/partition-generic.c
index 739c0cc5fd222..c4ac7a8c77dc5 100644
--- a/block/partition-generic.c
+++ b/block/partition-generic.c
@@ -18,6 +18,7 @@
#include <linux/ctype.h>
#include <linux/genhd.h>
#include <linux/blktrace_api.h>
+#include <linux/blkdev.h>
#include "partitions/check.h"
@@ -121,6 +122,13 @@ ssize_t part_stat_show(struct device *dev,
struct hd_struct *p = dev_to_part(dev);
struct request_queue *q = part_to_disk(p)->queue;
unsigned int inflight[2];
+ int cpu;
+
+ if (precise_iostat) {
+ cpu = part_stat_lock();
+ part_round_stats(q, cpu, p);
+ part_stat_unlock();
+ }
part_in_flight(q, p, inflight);
return sprintf(buf,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 28ea02865ecc1..a86659e78d987 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -28,6 +28,7 @@
#include <linux/scatterlist.h>
#include <linux/blkzoned.h>
+extern bool precise_iostat;
struct module;
struct scsi_ioctl_command;
--
2.25.1

[PATCH openEuler-5.10] Revert "efi/libstub: arm64: Relax 2M alignment again for relocatable kernels"
by Zheng Zengkai 09 Mar '22
From: Yang Yingliang <yangyingliang(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4VSGH
CVE: NA
--------------------------------
This reverts commit c6d2a109d90440e9e5e927a740128a35acf9d0b5.
I got the following messages when booting the kernel:
EFI stub: Booting Linux Kernel...
EFI stub: EFI_RNG_PROTOCOL unavailable, KASLR will be disabled
EFI stub: Using DTB from configuration table
EFI stub: Exiting boot services and installing virtual address map...
...
[ 0.000000] CPU features: kernel page table isolation forced ON by KASLR
[ 0.000000] CPU features: detected: Kernel page table isolation (KPTI)
[ 3.393380] KASLR disabled due to lack of seed
KPTI is forced on by KASLR even though KASLR is not actually enabled,
because kaslr_offset() returns non-zero in kaslr_requires_kpti().
To avoid this, when EFI KASLR is disabled, align the image to
MIN_KIMG_ALIGN, the alignment primary_entry() uses to derive the
KASLR offset, so that kaslr_offset() returns 0.
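The effect of the alignment choice can be pictured with plain arithmetic. The constants below are illustrative placeholders (the real values come from the arm64 headers), and demo_kaslr_offset() is only a toy model of "displacement inside a MIN_KIMG_ALIGN-sized slot":

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_MIN_KIMG_ALIGN 0x00200000ull /* 2 MiB, illustrative */
#define DEMO_EFI_KIMG_ALIGN 0x08000000ull /* larger KASLR alignment, illustrative */

/*
 * When EFI KASLR is off, fall back to MIN_KIMG_ALIGN so the offset the
 * early boot code derives from the load address comes out as zero.
 */
static uint64_t demo_min_kimg_align(int efi_nokaslr)
{
	return efi_nokaslr ? DEMO_MIN_KIMG_ALIGN : DEMO_EFI_KIMG_ALIGN;
}

/* Toy model of kaslr_offset(): the image's displacement inside its
 * MIN_KIMG_ALIGN-sized slot. */
static uint64_t demo_kaslr_offset(uint64_t load_addr)
{
	return load_addr & (DEMO_MIN_KIMG_ALIGN - 1);
}
```

With efi_nokaslr set, every load address the stub can pick is a multiple of MIN_KIMG_ALIGN, so the derived offset is zero and KPTI is no longer forced on spuriously.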
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
drivers/firmware/efi/libstub/arm64-stub.c | 28 ++++++++++++-----------
1 file changed, 15 insertions(+), 13 deletions(-)
diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
index c1b57dfb1277..aa796324fd62 100644
--- a/drivers/firmware/efi/libstub/arm64-stub.c
+++ b/drivers/firmware/efi/libstub/arm64-stub.c
@@ -79,6 +79,18 @@ static bool check_image_region(u64 base, u64 size)
return ret;
}
+/*
+ * Although relocatable kernels can fix up the misalignment with respect to
+ * MIN_KIMG_ALIGN, the resulting virtual text addresses are subtly out of
+ * sync with those recorded in the vmlinux when kaslr is disabled but the
+ * image required relocation anyway. Therefore retain 2M alignment unless
+ * KASLR is in use.
+ */
+static u64 min_kimg_align(void)
+{
+ return efi_nokaslr ? MIN_KIMG_ALIGN : EFI_KIMG_ALIGN;
+}
+
efi_status_t handle_kernel_image(unsigned long *image_addr,
unsigned long *image_size,
unsigned long *reserve_addr,
@@ -89,16 +101,6 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
unsigned long kernel_size, kernel_memsize = 0;
u32 phys_seed = 0;
- /*
- * Although relocatable kernels can fix up the misalignment with
- * respect to MIN_KIMG_ALIGN, the resulting virtual text addresses are
- * subtly out of sync with those recorded in the vmlinux when kaslr is
- * disabled but the image required relocation anyway. Therefore retain
- * 2M alignment if KASLR was explicitly disabled, even if it was not
- * going to be activated to begin with.
- */
- u64 min_kimg_align = efi_nokaslr ? MIN_KIMG_ALIGN : EFI_KIMG_ALIGN;
-
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
if (!efi_nokaslr) {
status = efi_get_random_bytes(sizeof(phys_seed),
@@ -132,7 +134,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
* If KASLR is enabled, and we have some randomness available,
* locate the kernel at a randomized offset in physical memory.
*/
- status = efi_random_alloc(*reserve_size, min_kimg_align,
+ status = efi_random_alloc(*reserve_size, min_kimg_align(),
reserve_addr, phys_seed);
} else {
status = EFI_OUT_OF_RESOURCES;
@@ -141,7 +143,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
if (status != EFI_SUCCESS) {
if (!check_image_region((u64)_text, kernel_memsize)) {
efi_err("FIRMWARE BUG: Image BSS overlaps adjacent EFI memory region\n");
- } else if (IS_ALIGNED((u64)_text, min_kimg_align)) {
+ } else if (IS_ALIGNED((u64)_text, min_kimg_align())) {
/*
* Just execute from wherever we were loaded by the
* UEFI PE/COFF loader if the alignment is suitable.
@@ -152,7 +154,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
}
status = efi_allocate_pages_aligned(*reserve_size, reserve_addr,
- ULONG_MAX, min_kimg_align);
+ ULONG_MAX, min_kimg_align());
if (status != EFI_SUCCESS) {
efi_err("Failed to relocate kernel\n");
--
2.20.1

[PATCH openEuler-5.10 01/12] crypto: hisilicon/sec - fix the aead software fallback for engine
by Zheng Zengkai 09 Mar '22
From: Kai Ye <yekai13(a)huawei.com>
mainline inclusion
from mainline-crypto-master
commit 0a2a464f863187f97e96ebc6384c052cafd4a54c
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4W4WU
CVE: NA
--------------------------------
The subreq pointer misused the private context memory, so the aead
software fallback occasionally caused a kernel panic when 64K pages
are configured. Fix it by allocating a dedicated request for the
fallback tfm instead of reusing the hardware request's context.
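The bug pattern is generic: the inline request context is sized for the driver's own state, so treating it as the fallback transform's request overruns private memory as soon as the fallback needs more room. A hedged sketch of the fix's shape, with simplified types rather than the real crypto API:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Simplified request: a header plus a context area whose size depends
 * on which transform will process it.
 */
struct demo_req {
	size_t ctx_size;
	unsigned char ctx[];
};

/*
 * Allocate a request sized for the fallback transform instead of
 * reusing the caller's (smaller) inline context -- the essence of
 * switching to aead_request_alloc() in the patch.
 */
static struct demo_req *demo_request_alloc(size_t fallback_ctx_size)
{
	struct demo_req *r = malloc(sizeof(*r) + fallback_ctx_size);

	if (r == NULL)
		return NULL;
	r->ctx_size = fallback_ctx_size;
	memset(r->ctx, 0, fallback_ctx_size);
	return r;
}
```

As in the patch, the separately allocated request must also be freed once the fallback operation completes, which is why the diff adds aead_request_free() on both the encrypt and decrypt paths.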
Fixes: 6c46a3297bea ("crypto: hisilicon/sec - add fallback tfm...")
Signed-off-by: Kai Ye <yekai13(a)huawei.com>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Yang Shen <shenyang39(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
drivers/crypto/hisilicon/sec2/sec_crypto.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
index f77be0e6cf65..bf5668ce2a80 100644
--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
+++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
@@ -2295,9 +2295,10 @@ static int sec_aead_soft_crypto(struct sec_ctx *ctx,
struct aead_request *aead_req,
bool encrypt)
{
- struct aead_request *subreq = aead_request_ctx(aead_req);
struct sec_auth_ctx *a_ctx = &ctx->a_ctx;
struct device *dev = ctx->dev;
+ struct aead_request *subreq;
+ int ret;
/* Kunpeng920 aead mode not support input 0 size */
if (!a_ctx->fallback_aead_tfm) {
@@ -2305,6 +2306,10 @@ static int sec_aead_soft_crypto(struct sec_ctx *ctx,
return -EINVAL;
}
+ subreq = aead_request_alloc(a_ctx->fallback_aead_tfm, GFP_KERNEL);
+ if (!subreq)
+ return -ENOMEM;
+
aead_request_set_tfm(subreq, a_ctx->fallback_aead_tfm);
aead_request_set_callback(subreq, aead_req->base.flags,
aead_req->base.complete, aead_req->base.data);
@@ -2312,8 +2317,13 @@ static int sec_aead_soft_crypto(struct sec_ctx *ctx,
aead_req->cryptlen, aead_req->iv);
aead_request_set_ad(subreq, aead_req->assoclen);
- return encrypt ? crypto_aead_encrypt(subreq) :
- crypto_aead_decrypt(subreq);
+ if (encrypt)
+ ret = crypto_aead_encrypt(subreq);
+ else
+ ret = crypto_aead_decrypt(subreq);
+ aead_request_free(subreq);
+
+ return ret;
}
static int sec_aead_crypto(struct aead_request *a_req, bool encrypt)
--
2.20.1

[PATCH openEuler-1.0-LTS 1/2] bfq: fix use-after-free in bfq_dispatch_request
by Yang Yingliang 08 Mar '22
From: Zhang Wensheng <zhangwensheng5(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 185755, https://gitee.com/openeuler/kernel/issues/I4WLYZ
CVE: NA
Reference: https://lore.kernel.org/lkml/8b6032c7-c971-d79b-4a41-271d3cc0efdd@huawei.co…
--------------------------------
KASAN reports a use-after-free report when doing normal scsi-mq test
[69832.239032] ==================================================================
[69832.241810] BUG: KASAN: use-after-free in bfq_dispatch_request+0x1045/0x44b0
[69832.243267] Read of size 8 at addr ffff88802622ba88 by task kworker/3:1H/155
[69832.244656]
[69832.245007] CPU: 3 PID: 155 Comm: kworker/3:1H Not tainted 5.10.0-10295-g576c6382529e #8
[69832.246626] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[69832.249069] Workqueue: kblockd blk_mq_run_work_fn
[69832.250022] Call Trace:
[69832.250541] dump_stack+0x9b/0xce
[69832.251232] ? bfq_dispatch_request+0x1045/0x44b0
[69832.252243] print_address_description.constprop.6+0x3e/0x60
[69832.253381] ? __cpuidle_text_end+0x5/0x5
[69832.254211] ? vprintk_func+0x6b/0x120
[69832.254994] ? bfq_dispatch_request+0x1045/0x44b0
[69832.255952] ? bfq_dispatch_request+0x1045/0x44b0
[69832.256914] kasan_report.cold.9+0x22/0x3a
[69832.257753] ? bfq_dispatch_request+0x1045/0x44b0
[69832.258755] check_memory_region+0x1c1/0x1e0
[69832.260248] bfq_dispatch_request+0x1045/0x44b0
[69832.261181] ? bfq_bfqq_expire+0x2440/0x2440
[69832.262032] ? blk_mq_delay_run_hw_queues+0xf9/0x170
[69832.263022] __blk_mq_do_dispatch_sched+0x52f/0x830
[69832.264011] ? blk_mq_sched_request_inserted+0x100/0x100
[69832.265101] __blk_mq_sched_dispatch_requests+0x398/0x4f0
[69832.266206] ? blk_mq_do_dispatch_ctx+0x570/0x570
[69832.267147] ? __switch_to+0x5f4/0xee0
[69832.267898] blk_mq_sched_dispatch_requests+0xdf/0x140
[69832.268946] __blk_mq_run_hw_queue+0xc0/0x270
[69832.269840] blk_mq_run_work_fn+0x51/0x60
[69832.278170] process_one_work+0x6d4/0xfe0
[69832.278984] worker_thread+0x91/0xc80
[69832.279726] ? __kthread_parkme+0xb0/0x110
[69832.280554] ? process_one_work+0xfe0/0xfe0
[69832.281414] kthread+0x32d/0x3f0
[69832.282082] ? kthread_park+0x170/0x170
[69832.282849] ret_from_fork+0x1f/0x30
[69832.283573]
[69832.283886] Allocated by task 7725:
[69832.284599] kasan_save_stack+0x19/0x40
[69832.285385] __kasan_kmalloc.constprop.2+0xc1/0xd0
[69832.286350] kmem_cache_alloc_node+0x13f/0x460
[69832.287237] bfq_get_queue+0x3d4/0x1140
[69832.287993] bfq_get_bfqq_handle_split+0x103/0x510
[69832.289015] bfq_init_rq+0x337/0x2d50
[69832.289749] bfq_insert_requests+0x304/0x4e10
[69832.290634] blk_mq_sched_insert_requests+0x13e/0x390
[69832.291629] blk_mq_flush_plug_list+0x4b4/0x760
[69832.292538] blk_flush_plug_list+0x2c5/0x480
[69832.293392] io_schedule_prepare+0xb2/0xd0
[69832.294209] io_schedule_timeout+0x13/0x80
[69832.295014] wait_for_common_io.constprop.1+0x13c/0x270
[69832.296137] submit_bio_wait+0x103/0x1a0
[69832.296932] blkdev_issue_discard+0xe6/0x160
[69832.297794] blk_ioctl_discard+0x219/0x290
[69832.298614] blkdev_common_ioctl+0x50a/0x1750
[69832.304715] blkdev_ioctl+0x470/0x600
[69832.305474] block_ioctl+0xde/0x120
[69832.306232] vfs_ioctl+0x6c/0xc0
[69832.306877] __se_sys_ioctl+0x90/0xa0
[69832.307629] do_syscall_64+0x2d/0x40
[69832.308362] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[69832.309382]
[69832.309701] Freed by task 155:
[69832.310328] kasan_save_stack+0x19/0x40
[69832.311121] kasan_set_track+0x1c/0x30
[69832.311868] kasan_set_free_info+0x1b/0x30
[69832.312699] __kasan_slab_free+0x111/0x160
[69832.313524] kmem_cache_free+0x94/0x460
[69832.314367] bfq_put_queue+0x582/0x940
[69832.315112] __bfq_bfqd_reset_in_service+0x166/0x1d0
[69832.317275] bfq_bfqq_expire+0xb27/0x2440
[69832.318084] bfq_dispatch_request+0x697/0x44b0
[69832.318991] __blk_mq_do_dispatch_sched+0x52f/0x830
[69832.319984] __blk_mq_sched_dispatch_requests+0x398/0x4f0
[69832.321087] blk_mq_sched_dispatch_requests+0xdf/0x140
[69832.322225] __blk_mq_run_hw_queue+0xc0/0x270
[69832.323114] blk_mq_run_work_fn+0x51/0x60
[69832.323942] process_one_work+0x6d4/0xfe0
[69832.324772] worker_thread+0x91/0xc80
[69832.325518] kthread+0x32d/0x3f0
[69832.326205] ret_from_fork+0x1f/0x30
[69832.326932]
[69832.338297] The buggy address belongs to the object at ffff88802622b968
[69832.338297] which belongs to the cache bfq_queue of size 512
[69832.340766] The buggy address is located 288 bytes inside of
[69832.340766] 512-byte region [ffff88802622b968, ffff88802622bb68)
[69832.343091] The buggy address belongs to the page:
[69832.344097] page:ffffea0000988a00 refcount:1 mapcount:0 mapping:0000000000000000 index:0xffff88802622a528 pfn:0x26228
[69832.346214] head:ffffea0000988a00 order:2 compound_mapcount:0 compound_pincount:0
[69832.347719] flags: 0x1fffff80010200(slab|head)
[69832.348625] raw: 001fffff80010200 ffffea0000dbac08 ffff888017a57650 ffff8880179fe840
[69832.354972] raw: ffff88802622a528 0000000000120008 00000001ffffffff 0000000000000000
[69832.356547] page dumped because: kasan: bad access detected
[69832.357652]
[69832.357970] Memory state around the buggy address:
[69832.358926] ffff88802622b980: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[69832.360358] ffff88802622ba00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[69832.361810] >ffff88802622ba80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[69832.363273] ^
[69832.363975] ffff88802622bb00: fb fb fb fb fb fb fb fb fb fb fb fb fb fc fc fc
[69832.375960] ffff88802622bb80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[69832.377405] ==================================================================
The bfq_dispatch_request function may go through the following call chain:
bfq_dispatch_request
__bfq_dispatch_request
bfq_select_queue
bfq_bfqq_expire
__bfq_bfqd_reset_in_service
bfq_put_queue
kmem_cache_free
Along this call chain, in_serv_queue has been expired and meets the
conditions to be freed, so by the time bfq_dispatch_request reads
idle_timer_disabled, the memory that in_serv_queue points to has
already been released. Reading the flags through that pointer is a
use-after-free.
Fix the problem by checking in_serv_queue == bfqd->in_service_queue,
and read idle_timer_disabled only when they are equal. If the memory
in_serv_queue points to has been released, this check avoids the
use-after-free. If in_serv_queue has been expired or finished,
idle_timer_disabled stays false, which has no effect on
bfq_update_dispatch_stats.
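Stripped of the scheduler details, the guard looks like this. The types are simplified stand-ins for the bfq structures, for illustration only:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct demo_bfqq {
	bool wait_request;
};

struct demo_bfqd {
	struct demo_bfqq *in_service_queue;
};

/*
 * Only dereference the queue that was in service before dispatch if it
 * is still the in-service queue afterwards; otherwise it may have been
 * expired and freed, and the stats update simply sees "false".
 */
static bool idle_timer_disabled_safe(const struct demo_bfqd *bfqd,
				     const struct demo_bfqq *in_serv_queue,
				     bool waiting_rq)
{
	if (in_serv_queue != bfqd->in_service_queue)
		return false;
	return waiting_rq && !in_serv_queue->wait_request;
}
```

The pointer comparison itself is always safe; only the dereference behind it had to be made conditional, which is exactly what the patch does under bfqd->lock.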
Reported-by: Hulk Robot <hulkci(a)huawei.com>
Signed-off-by: Zhang Wensheng <zhangwensheng5(a)huawei.com>
Reviewed-by: Jens Axboe <axboe(a)kernel.dk>
Reviewed-by: Laibin Qiu <qiulaibin(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
block/bfq-iosched.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index fe4255698d810..8f4275d1b11a3 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -4140,7 +4140,7 @@ static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
struct request *rq;
struct bfq_queue *in_serv_queue;
- bool waiting_rq, idle_timer_disabled;
+ bool waiting_rq, idle_timer_disabled = false;
spin_lock_irq(&bfqd->lock);
@@ -4148,14 +4148,15 @@ static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
waiting_rq = in_serv_queue && bfq_bfqq_wait_request(in_serv_queue);
rq = __bfq_dispatch_request(hctx);
-
- idle_timer_disabled =
- waiting_rq && !bfq_bfqq_wait_request(in_serv_queue);
+ if (in_serv_queue == bfqd->in_service_queue) {
+ idle_timer_disabled =
+ waiting_rq && !bfq_bfqq_wait_request(in_serv_queue);
+ }
spin_unlock_irq(&bfqd->lock);
-
- bfq_update_dispatch_stats(hctx->queue, rq, in_serv_queue,
- idle_timer_disabled);
+ bfq_update_dispatch_stats(hctx->queue, rq,
+ idle_timer_disabled ? in_serv_queue : NULL,
+ idle_timer_disabled);
return rq;
}
--
2.25.1

[PATCH openEuler-5.10] lib/iov_iter: initialize "flags" in new pipe_buffer
by Zheng Zengkai 08 Mar '22
From: Max Kellermann <max.kellermann(a)ionos.com>
mainline inclusion
from mainline-v5.17-rc6
commit 9d2231c5d74e13b2a0546fee6737ee4446017903
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4WUKP?from=project-issue
CVE: CVE-2022-0847
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/l…
--------------------------------
The functions copy_page_to_iter_pipe() and push_pipe() can both
allocate a new pipe_buffer, but the "flags" member initializer is
missing.
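The fix is a single assignment, but the mechanism is worth spelling out: pipe ring slots are recycled without being zeroed, so any field the setup path does not write keeps the previous occupant's value (in CVE-2022-0847 the stale bit was PIPE_BUF_FLAG_CAN_MERGE). A simplified sketch, with a stand-in for struct pipe_buffer:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for struct pipe_buffer. */
struct demo_pipe_buffer {
	void *page;
	unsigned int offset;
	unsigned int len;
	unsigned int flags;
};

/*
 * Mirrors the fixed setup path: every field, including flags, is
 * written explicitly so nothing is inherited from the slot's
 * previous user.
 */
static void demo_fill_buf(struct demo_pipe_buffer *buf, void *page,
			  unsigned int offset, unsigned int len)
{
	buf->page = page;
	buf->offset = offset;
	buf->len = len;
	buf->flags = 0;	/* the one-line fix */
}
```

Without the final assignment, a buffer whose previous user set a merge-permitting flag would let later writes splice into a page cache page the writer should not be able to modify.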
Fixes: 241699cd72a8 ("new iov_iter flavour: pipe-backed")
To: Alexander Viro <viro(a)zeniv.linux.org.uk>
To: linux-fsdevel(a)vger.kernel.org
To: linux-kernel(a)vger.kernel.org
Cc: stable(a)vger.kernel.org
Signed-off-by: Max Kellermann <max.kellermann(a)ionos.com>
Signed-off-by: Al Viro <viro(a)zeniv.linux.org.uk>
Signed-off-by: Zhihao Cheng <chengzhihao1(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
lib/iov_iter.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index b364231b5fc8..1b0a349fbcd9 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -407,6 +407,7 @@ static size_t copy_page_to_iter_pipe(struct page *page, size_t offset, size_t by
return 0;
buf->ops = &page_cache_pipe_buf_ops;
+ buf->flags = 0;
get_page(page);
buf->page = page;
buf->offset = offset;
@@ -543,6 +544,7 @@ static size_t push_pipe(struct iov_iter *i, size_t size,
break;
buf->ops = &default_pipe_buf_ops;
+ buf->flags = 0;
buf->page = page;
buf->offset = 0;
buf->len = min_t(ssize_t, left, PAGE_SIZE);
--
2.20.1
From: Yu Kuai <yukuai3(a)huawei.com>
From: Christoph Hellwig <hch(a)lst.de>
mainline inclusion
from mainline-v5.11-rc1
commit 6b3ba9762f9f9f651873af34481ca20e4a6791e7
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SA9G
---------------------------
Merge three hidden gendisk checks into one.
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Reviewed-by: Hannes Reinecke <hare(a)suse.de>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
---
block/genhd.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/block/genhd.c b/block/genhd.c
index 6566eacc807d..2a61dcd98d73 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -904,6 +904,9 @@ void del_gendisk(struct gendisk *disk)
might_sleep();
+ if (WARN_ON_ONCE(!disk->queue))
+ return;
+
blk_integrity_del(disk);
disk_del_events(disk);
@@ -926,20 +929,18 @@ void del_gendisk(struct gendisk *disk)
disk->flags &= ~GENHD_FL_UP;
up_write(&disk->lookup_sem);
- if (!(disk->flags & GENHD_FL_HIDDEN))
+ if (!(disk->flags & GENHD_FL_HIDDEN)) {
sysfs_remove_link(&disk_to_dev(disk)->kobj, "bdi");
- if (disk->queue) {
+
/*
* Unregister bdi before releasing device numbers (as they can
* get reused and we'd get clashes in sysfs).
*/
- if (!(disk->flags & GENHD_FL_HIDDEN))
- bdi_unregister(disk->queue->backing_dev_info);
- blk_unregister_queue(disk);
- } else {
- WARN_ON(1);
+ bdi_unregister(disk->queue->backing_dev_info);
}
+ blk_unregister_queue(disk);
+
if (!(disk->flags & GENHD_FL_HIDDEN))
blk_unregister_region(disk_devt(disk), disk->minors);
/*
From patchwork Wed Jan 26 08:35:24 2022
From: Yu Kuai <yukuai3(a)huawei.com>
Subject: [PATCH OLK-5.10 02/12] block: fold register_disk into device_add_disk
Date: Wed, 26 Jan 2022 16:35:24 +0800
Message-ID: <20220126083534.4016012-3-yukuai3(a)huawei.com>
From: Christoph Hellwig <hch(a)lst.de>
mainline inclusion
from mainline-v5.15-rc1
commit 52b85909f85d06efa69aaf4210e72467f1f58d2b
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SA9G
---------------------------
There is no real reason these should be separate. Also simplify the
groups assignment a bit.
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
---
block/genhd.c | 129 +++++++++++++++++++++++---------------------------
1 file changed, 58 insertions(+), 71 deletions(-)
diff --git a/block/genhd.c b/block/genhd.c
index 2a61dcd98d73..c0e1639131d9 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -700,69 +700,6 @@ static void disk_scan_partitions(struct gendisk *disk)
blkdev_put(bdev, FMODE_READ);
}
-static void register_disk(struct device *parent, struct gendisk *disk,
- const struct attribute_group **groups)
-{
- struct device *ddev = disk_to_dev(disk);
- struct disk_part_iter piter;
- struct hd_struct *part;
- int err;
-
- ddev->parent = parent;
-
- dev_set_name(ddev, "%s", disk->disk_name);
-
- /* delay uevents, until we scanned partition table */
- dev_set_uevent_suppress(ddev, 1);
-
- if (groups) {
- WARN_ON(ddev->groups);
- ddev->groups = groups;
- }
- if (device_add(ddev))
- return;
- if (!sysfs_deprecated) {
- err = sysfs_create_link(block_depr, &ddev->kobj,
- kobject_name(&ddev->kobj));
- if (err) {
- device_del(ddev);
- return;
- }
- }
-
- /*
- * avoid probable deadlock caused by allocating memory with
- * GFP_KERNEL in runtime_resume callback of its all ancestor
- * devices
- */
- pm_runtime_set_memalloc_noio(ddev, true);
-
- disk->part0.holder_dir = kobject_create_and_add("holders", &ddev->kobj);
- disk->slave_dir = kobject_create_and_add("slaves", &ddev->kobj);
-
- if (disk->flags & GENHD_FL_HIDDEN)
- return;
-
- disk_scan_partitions(disk);
-
- /* announce disk after possible partitions are created */
- dev_set_uevent_suppress(ddev, 0);
- kobject_uevent(&ddev->kobj, KOBJ_ADD);
-
- /* announce possible partitions */
- disk_part_iter_init(&piter, disk, 0);
- while ((part = disk_part_iter_next(&piter)))
- kobject_uevent(&part_to_dev(part)->kobj, KOBJ_ADD);
- disk_part_iter_exit(&piter);
-
- if (disk->queue->backing_dev_info->dev) {
- err = sysfs_create_link(&ddev->kobj,
- &disk->queue->backing_dev_info->dev->kobj,
- "bdi");
- WARN_ON(err);
- }
-}
-
/**
* __device_add_disk - add disk information to kernel list
* @parent: parent device for the disk
@@ -779,8 +716,11 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
const struct attribute_group **groups,
bool register_queue)
{
+ struct device *ddev = disk_to_dev(disk);
+ struct disk_part_iter piter;
+ struct hd_struct *part;
dev_t devt;
- int retval;
+ int ret;
/*
* The disk queue should now be all set with enough information about
@@ -801,8 +741,8 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
disk->flags |= GENHD_FL_UP;
- retval = blk_alloc_devt(&disk->part0, &devt);
- if (retval) {
+ ret = blk_alloc_devt(&disk->part0, &devt);
+ if (ret) {
WARN_ON(1);
return;
}
@@ -820,18 +760,65 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
disk->flags |= GENHD_FL_NO_PART_SCAN;
} else {
struct backing_dev_info *bdi = disk->queue->backing_dev_info;
- struct device *dev = disk_to_dev(disk);
- int ret;
/* Register BDI before referencing it from bdev */
- dev->devt = devt;
+ ddev->devt = devt;
ret = bdi_register(bdi, "%u:%u", MAJOR(devt), MINOR(devt));
WARN_ON(ret);
- bdi_set_owner(bdi, dev);
+ bdi_set_owner(bdi, ddev);
blk_register_region(disk_devt(disk), disk->minors, NULL,
exact_match, exact_lock, disk);
}
- register_disk(parent, disk, groups);
+
+ /* delay uevents, until we scanned partition table */
+ dev_set_uevent_suppress(ddev, 1);
+
+ ddev->parent = parent;
+ ddev->groups = groups;
+ dev_set_name(ddev, "%s", disk->disk_name);
+
+ if (device_add(ddev))
+ return;
+ if (!sysfs_deprecated) {
+ ret = sysfs_create_link(block_depr, &ddev->kobj,
+ kobject_name(&ddev->kobj));
+ if (ret) {
+ device_del(ddev);
+ return;
+ }
+ }
+
+ /*
+ * avoid probable deadlock caused by allocating memory with
+ * GFP_KERNEL in runtime_resume callback of its all ancestor
+ * devices
+ */
+ pm_runtime_set_memalloc_noio(ddev, true);
+
+ disk->part0.holder_dir = kobject_create_and_add("holders", &ddev->kobj);
+ disk->slave_dir = kobject_create_and_add("slaves", &ddev->kobj);
+
+ if (!(disk->flags & GENHD_FL_HIDDEN)) {
+ disk_scan_partitions(disk);
+
+ /* announce disk after possible partitions are created */
+ dev_set_uevent_suppress(ddev, 0);
+ kobject_uevent(&ddev->kobj, KOBJ_ADD);
+
+ /* announce possible partitions */
+ disk_part_iter_init(&piter, disk, 0);
+ while ((part = disk_part_iter_next(&piter)))
+ kobject_uevent(&part_to_dev(part)->kobj, KOBJ_ADD);
+ disk_part_iter_exit(&piter);
+
+ if (disk->queue->backing_dev_info->dev) {
+ ret = sysfs_create_link(&ddev->kobj,
+ &disk->queue->backing_dev_info->dev->kobj,
+ "bdi");
+ WARN_ON(ret);
+ }
+ }
+
if (register_queue)
blk_register_queue(disk);
From patchwork Wed Jan 26 08:35:25 2022
From: Yu Kuai <yukuai3(a)huawei.com>
To: <zhengzengkai(a)huawei.com>, <xiexiuqi(a)huawei.com>, <patchwork(a)huawei.com>
CC: <yukuai3(a)huawei.com>, <yi.zhang(a)huawei.com>, <houtao1(a)huawei.com>,
<chenxiaosong2(a)huawei.com>, <chengzhihao1(a)huawei.com>,
<libaokun1(a)huawei.com>, <luomeng12(a)huawei.com>, <yanaijie(a)huawei.com>,
<yangerkun(a)huawei.com>, <yebin10(a)huawei.com>, <yuyufen(a)huawei.com>,
<zhangxiaoxu5(a)huawei.com>, <zhengbin13(a)huawei.com>, <koulihong(a)huawei.com>,
<qiulaibin(a)huawei.com>, <zhengliang6(a)huawei.com>
Subject: [PATCH OLK-5.10 03/12] block: set GENHD_FL_UP last in
__device_add_disk()
Date: Wed, 26 Jan 2022 16:35:25 +0800
Message-ID: <20220126083534.4016012-4-yukuai3(a)huawei.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220126083534.4016012-1-yukuai3(a)huawei.com>
References: <20220126083534.4016012-1-yukuai3(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SA9G
---------------------------
The flag is checked before operations on the block device, so don't
set it if an error occurred during setup.
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
---
block/genhd.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/block/genhd.c b/block/genhd.c
index c0e1639131d9..59bdb43ccf17 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -739,8 +739,6 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
WARN_ON(!disk->minors &&
!(disk->flags & (GENHD_FL_EXT_DEVT | GENHD_FL_HIDDEN)));
- disk->flags |= GENHD_FL_UP;
-
ret = blk_alloc_devt(&disk->part0, &devt);
if (ret) {
WARN_ON(1);
@@ -830,6 +828,8 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
disk_add_events(disk);
blk_integrity_add(disk);
+
+ disk->flags |= GENHD_FL_UP;
}
void device_add_disk(struct device *parent, struct gendisk *disk,
From patchwork Wed Jan 26 08:35:26 2022
From: Yu Kuai <yukuai3(a)huawei.com>
To: <zhengzengkai(a)huawei.com>, <xiexiuqi(a)huawei.com>, <patchwork(a)huawei.com>
CC: <yukuai3(a)huawei.com>, <yi.zhang(a)huawei.com>, <houtao1(a)huawei.com>,
<chenxiaosong2(a)huawei.com>, <chengzhihao1(a)huawei.com>,
<libaokun1(a)huawei.com>, <luomeng12(a)huawei.com>, <yanaijie(a)huawei.com>,
<yangerkun(a)huawei.com>, <yebin10(a)huawei.com>, <yuyufen(a)huawei.com>,
<zhangxiaoxu5(a)huawei.com>, <zhengbin13(a)huawei.com>, <koulihong(a)huawei.com>,
<qiulaibin(a)huawei.com>, <zhengliang6(a)huawei.com>
Subject: [PATCH OLK-5.10 04/12] block: call bdi_register() later in
__device_add_disk()
Date: Wed, 26 Jan 2022 16:35:26 +0800
Message-ID: <20220126083534.4016012-5-yukuai3(a)huawei.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220126083534.4016012-1-yukuai3(a)huawei.com>
References: <20220126083534.4016012-1-yukuai3(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SA9G
---------------------------
This will simplify error handling going forward.
Conflict: "ddev->devt" must still be set before device_add().
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
---
block/genhd.c | 35 +++++++++++++++++------------------
1 file changed, 17 insertions(+), 18 deletions(-)
diff --git a/block/genhd.c b/block/genhd.c
index 59bdb43ccf17..b0a6214fdbe1 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -749,24 +749,8 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
disk_alloc_events(disk);
- if (disk->flags & GENHD_FL_HIDDEN) {
- /*
- * Don't let hidden disks show up in /proc/partitions,
- * and don't bother scanning for partitions either.
- */
- disk->flags |= GENHD_FL_SUPPRESS_PARTITION_INFO;
- disk->flags |= GENHD_FL_NO_PART_SCAN;
- } else {
- struct backing_dev_info *bdi = disk->queue->backing_dev_info;
-
- /* Register BDI before referencing it from bdev */
+ if (!(disk->flags & GENHD_FL_HIDDEN))
ddev->devt = devt;
- ret = bdi_register(bdi, "%u:%u", MAJOR(devt), MINOR(devt));
- WARN_ON(ret);
- bdi_set_owner(bdi, ddev);
- blk_register_region(disk_devt(disk), disk->minors, NULL,
- exact_match, exact_lock, disk);
- }
/* delay uevents, until we scanned partition table */
dev_set_uevent_suppress(ddev, 1);
@@ -796,7 +780,22 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
disk->part0.holder_dir = kobject_create_and_add("holders", &ddev->kobj);
disk->slave_dir = kobject_create_and_add("slaves", &ddev->kobj);
- if (!(disk->flags & GENHD_FL_HIDDEN)) {
+ if (disk->flags & GENHD_FL_HIDDEN) {
+ /*
+ * Don't let hidden disks show up in /proc/partitions,
+ * and don't bother scanning for partitions either.
+ */
+ disk->flags |= GENHD_FL_SUPPRESS_PARTITION_INFO;
+ disk->flags |= GENHD_FL_NO_PART_SCAN;
+ } else {
+ struct backing_dev_info *bdi = disk->queue->backing_dev_info;
+
+ ret = bdi_register(bdi, "%u:%u", MAJOR(devt), MINOR(devt));
+ WARN_ON(ret);
+ bdi_set_owner(bdi, ddev);
+ blk_register_region(disk_devt(disk), disk->minors, NULL,
+ exact_match, exact_lock, disk);
+
disk_scan_partitions(disk);
/* announce disk after possible partitions are created */
From patchwork Wed Jan 26 08:35:27 2022
From: Yu Kuai <yukuai3(a)huawei.com>
To: <zhengzengkai(a)huawei.com>, <xiexiuqi(a)huawei.com>, <patchwork(a)huawei.com>
CC: <yukuai3(a)huawei.com>, <yi.zhang(a)huawei.com>, <houtao1(a)huawei.com>,
<chenxiaosong2(a)huawei.com>, <chengzhihao1(a)huawei.com>,
<libaokun1(a)huawei.com>, <luomeng12(a)huawei.com>, <yanaijie(a)huawei.com>,
<yangerkun(a)huawei.com>, <yebin10(a)huawei.com>, <yuyufen(a)huawei.com>,
<zhangxiaoxu5(a)huawei.com>, <zhengbin13(a)huawei.com>, <koulihong(a)huawei.com>,
<qiulaibin(a)huawei.com>, <zhengliang6(a)huawei.com>
Subject: [PATCH OLK-5.10 05/12] block: create the bdi link earlier in
device_add_disk
Date: Wed, 26 Jan 2022 16:35:27 +0800
Message-ID: <20220126083534.4016012-6-yukuai3(a)huawei.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220126083534.4016012-1-yukuai3(a)huawei.com>
References: <20220126083534.4016012-1-yukuai3(a)huawei.com>
From: Christoph Hellwig <hch(a)lst.de>
mainline inclusion
from mainline-v5.15-rc1
commit 9d5ee6767c85762205b788ed1245f21fafd6c504
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SA9G
---------------------------
This will simplify error handling going forward.
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
---
block/genhd.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/block/genhd.c b/block/genhd.c
index b0a6214fdbe1..b875de09dd09 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -796,6 +796,13 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
blk_register_region(disk_devt(disk), disk->minors, NULL,
exact_match, exact_lock, disk);
+ if (disk->queue->backing_dev_info->dev) {
+ ret = sysfs_create_link(&ddev->kobj,
+ &disk->queue->backing_dev_info->dev->kobj,
+ "bdi");
+ WARN_ON(ret);
+ }
+
disk_scan_partitions(disk);
/* announce disk after possible partitions are created */
@@ -807,13 +814,6 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
while ((part = disk_part_iter_next(&piter)))
kobject_uevent(&part_to_dev(part)->kobj, KOBJ_ADD);
disk_part_iter_exit(&piter);
-
- if (disk->queue->backing_dev_info->dev) {
- ret = sysfs_create_link(&ddev->kobj,
- &disk->queue->backing_dev_info->dev->kobj,
- "bdi");
- WARN_ON(ret);
- }
}
if (register_queue)
From patchwork Wed Jan 26 08:35:28 2022
From: Yu Kuai <yukuai3(a)huawei.com>
To: <zhengzengkai(a)huawei.com>, <xiexiuqi(a)huawei.com>, <patchwork(a)huawei.com>
CC: <yukuai3(a)huawei.com>, <yi.zhang(a)huawei.com>, <houtao1(a)huawei.com>,
<chenxiaosong2(a)huawei.com>, <chengzhihao1(a)huawei.com>,
<libaokun1(a)huawei.com>, <luomeng12(a)huawei.com>, <yanaijie(a)huawei.com>,
<yangerkun(a)huawei.com>, <yebin10(a)huawei.com>, <yuyufen(a)huawei.com>,
<zhangxiaoxu5(a)huawei.com>, <zhengbin13(a)huawei.com>, <koulihong(a)huawei.com>,
<qiulaibin(a)huawei.com>, <zhengliang6(a)huawei.com>
Subject: [PATCH OLK-5.10 06/12] block: call blk_integrity_add earlier in
device_add_disk
Date: Wed, 26 Jan 2022 16:35:28 +0800
Message-ID: <20220126083534.4016012-7-yukuai3(a)huawei.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220126083534.4016012-1-yukuai3(a)huawei.com>
References: <20220126083534.4016012-1-yukuai3(a)huawei.com>
From: Christoph Hellwig <hch(a)lst.de>
mainline inclusion
from mainline-v5.15-rc1
commit bab53f6b617d9f530978d6e3693f88e586d81a8a
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SA9G
---------------------------
Doing all the sysfs file creation before adding the bdev and thus
allowing it to be opened will simplify the about to be added error
handling.
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
---
block/genhd.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/block/genhd.c b/block/genhd.c
index b875de09dd09..aa109986a6b0 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -777,6 +777,8 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
*/
pm_runtime_set_memalloc_noio(ddev, true);
+ blk_integrity_add(disk);
+
disk->part0.holder_dir = kobject_create_and_add("holders", &ddev->kobj);
disk->slave_dir = kobject_create_and_add("slaves", &ddev->kobj);
@@ -826,7 +828,6 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
WARN_ON_ONCE(!blk_get_queue(disk->queue));
disk_add_events(disk);
- blk_integrity_add(disk);
disk->flags |= GENHD_FL_UP;
}
From patchwork Wed Jan 26 08:35:29 2022
From: Yu Kuai <yukuai3(a)huawei.com>
To: <zhengzengkai(a)huawei.com>, <xiexiuqi(a)huawei.com>, <patchwork(a)huawei.com>
CC: <yukuai3(a)huawei.com>, <yi.zhang(a)huawei.com>, <houtao1(a)huawei.com>,
<chenxiaosong2(a)huawei.com>, <chengzhihao1(a)huawei.com>,
<libaokun1(a)huawei.com>, <luomeng12(a)huawei.com>, <yanaijie(a)huawei.com>,
<yangerkun(a)huawei.com>, <yebin10(a)huawei.com>, <yuyufen(a)huawei.com>,
<zhangxiaoxu5(a)huawei.com>, <zhengbin13(a)huawei.com>, <koulihong(a)huawei.com>,
<qiulaibin(a)huawei.com>, <zhengliang6(a)huawei.com>
Subject: [PATCH OLK-5.10 07/12] block: return errors from blk_integrity_add
Date: Wed, 26 Jan 2022 16:35:29 +0800
Message-ID: <20220126083534.4016012-8-yukuai3(a)huawei.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220126083534.4016012-1-yukuai3(a)huawei.com>
References: <20220126083534.4016012-1-yukuai3(a)huawei.com>
From: Luis Chamberlain <mcgrof(a)kernel.org>
mainline inclusion
from mainline-v5.15-rc1
commit 614310c9c8ca15359f4e71a5bbd9165897b4d54e
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SA9G
---------------------------
Prepare for proper error handling in add_disk.
Signed-off-by: Luis Chamberlain <mcgrof(a)kernel.org>
[hch: split from a larger patch]
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Reviewed-by: Hannes Reinecke <hare(a)suse.de>
Link: https://lore.kernel.org/r/20210818144542.19305-8-hch@lst.de
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
---
block/blk-integrity.c | 12 +++++++-----
block/blk.h | 5 +++--
2 files changed, 10 insertions(+), 7 deletions(-)
diff --git a/block/blk-integrity.c b/block/blk-integrity.c
index 9e83159f5a52..16d5d5338392 100644
--- a/block/blk-integrity.c
+++ b/block/blk-integrity.c
@@ -438,13 +438,15 @@ void blk_integrity_unregister(struct gendisk *disk)
}
EXPORT_SYMBOL(blk_integrity_unregister);
-void blk_integrity_add(struct gendisk *disk)
+int blk_integrity_add(struct gendisk *disk)
{
- if (kobject_init_and_add(&disk->integrity_kobj, &integrity_ktype,
- &disk_to_dev(disk)->kobj, "%s", "integrity"))
- return;
+ int ret;
- kobject_uevent(&disk->integrity_kobj, KOBJ_ADD);
+ ret = kobject_init_and_add(&disk->integrity_kobj, &integrity_ktype,
+ &disk_to_dev(disk)->kobj, "%s", "integrity");
+ if (!ret)
+ kobject_uevent(&disk->integrity_kobj, KOBJ_ADD);
+ return ret;
}
void blk_integrity_del(struct gendisk *disk)
diff --git a/block/blk.h b/block/blk.h
index cd39fd0c93f1..80bb41a43120 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -134,7 +134,7 @@ static inline bool integrity_req_gap_front_merge(struct request *req,
bip_next->bip_vec[0].bv_offset);
}
-void blk_integrity_add(struct gendisk *);
+int blk_integrity_add(struct gendisk *disk);
void blk_integrity_del(struct gendisk *);
#else /* CONFIG_BLK_DEV_INTEGRITY */
static inline bool blk_integrity_merge_rq(struct request_queue *rq,
@@ -168,8 +168,9 @@ static inline bool bio_integrity_endio(struct bio *bio)
static inline void bio_integrity_free(struct bio *bio)
{
}
-static inline void blk_integrity_add(struct gendisk *disk)
+static inline int blk_integrity_add(struct gendisk *disk)
{
+ return 0;
}
static inline void blk_integrity_del(struct gendisk *disk)
{
From patchwork Wed Jan 26 08:35:30 2022
From: Yu Kuai <yukuai3(a)huawei.com>
To: <zhengzengkai(a)huawei.com>, <xiexiuqi(a)huawei.com>, <patchwork(a)huawei.com>
CC: <yukuai3(a)huawei.com>, <yi.zhang(a)huawei.com>, <houtao1(a)huawei.com>,
<chenxiaosong2(a)huawei.com>, <chengzhihao1(a)huawei.com>,
<libaokun1(a)huawei.com>, <luomeng12(a)huawei.com>, <yanaijie(a)huawei.com>,
<yangerkun(a)huawei.com>, <yebin10(a)huawei.com>, <yuyufen(a)huawei.com>,
<zhangxiaoxu5(a)huawei.com>, <zhengbin13(a)huawei.com>, <koulihong(a)huawei.com>,
<qiulaibin(a)huawei.com>, <zhengliang6(a)huawei.com>
Subject: [PATCH OLK-5.10 08/12] block: return errors from disk_alloc_events
Date: Wed, 26 Jan 2022 16:35:30 +0800
Message-ID: <20220126083534.4016012-9-yukuai3(a)huawei.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220126083534.4016012-1-yukuai3(a)huawei.com>
References: <20220126083534.4016012-1-yukuai3(a)huawei.com>
From: Luis Chamberlain <mcgrof(a)kernel.org>
mainline inclusion
from mainline-v5.15-rc1
commit 92e7755ebc69233e25a2d1b760aeff536dc4016b
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SA9G
---------------------------
Prepare for proper error handling in add_disk.
Signed-off-by: Luis Chamberlain <mcgrof(a)kernel.org>
[hch: split from a larger patch]
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Reviewed-by: Hannes Reinecke <hare(a)suse.de>
Link: https://lore.kernel.org/r/20210818144542.19305-9-hch@lst.de
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
---
block/genhd.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/block/genhd.c b/block/genhd.c
index aa109986a6b0..ec1e2fe27249 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -40,7 +40,7 @@ static DEFINE_IDR(ext_devt_idr);
static void disk_check_events(struct disk_events *ev,
unsigned int *clearing_ptr);
-static void disk_alloc_events(struct gendisk *disk);
+static int disk_alloc_events(struct gendisk *disk);
static void disk_add_events(struct gendisk *disk);
static void disk_del_events(struct gendisk *disk);
static void disk_release_events(struct gendisk *disk);
@@ -2311,17 +2311,17 @@ module_param_cb(events_dfl_poll_msecs, &disk_events_dfl_poll_msecs_param_ops,
/*
* disk_{alloc|add|del|release}_events - initialize and destroy disk_events.
*/
-static void disk_alloc_events(struct gendisk *disk)
+static int disk_alloc_events(struct gendisk *disk)
{
struct disk_events *ev;
if (!disk->fops->check_events || !disk->events)
- return;
+ return 0;
ev = kzalloc(sizeof(*ev), GFP_KERNEL);
if (!ev) {
pr_warn("%s: failed to initialize events\n", disk->disk_name);
- return;
+ return -ENOMEM;
}
INIT_LIST_HEAD(&ev->node);
@@ -2333,6 +2333,7 @@ static void disk_alloc_events(struct gendisk *disk)
INIT_DELAYED_WORK(&ev->dwork, disk_events_workfn);
disk->ev = ev;
+ return 0;
}
static void disk_add_events(struct gendisk *disk)
From patchwork Wed Jan 26 08:35:31 2022
From: Yu Kuai <yukuai3(a)huawei.com>
To: <zhengzengkai(a)huawei.com>, <xiexiuqi(a)huawei.com>, <patchwork(a)huawei.com>
CC: <yukuai3(a)huawei.com>, <yi.zhang(a)huawei.com>, <houtao1(a)huawei.com>,
<chenxiaosong2(a)huawei.com>, <chengzhihao1(a)huawei.com>,
<libaokun1(a)huawei.com>, <luomeng12(a)huawei.com>, <yanaijie(a)huawei.com>,
<yangerkun(a)huawei.com>, <yebin10(a)huawei.com>, <yuyufen(a)huawei.com>,
<zhangxiaoxu5(a)huawei.com>, <zhengbin13(a)huawei.com>, <koulihong(a)huawei.com>,
<qiulaibin(a)huawei.com>, <zhengliang6(a)huawei.com>
Subject: [PATCH OLK-5.10 09/12] block: refactor device number setup in
__device_add_disk
Date: Wed, 26 Jan 2022 16:35:31 +0800
Message-ID: <20220126083534.4016012-10-yukuai3(a)huawei.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220126083534.4016012-1-yukuai3(a)huawei.com>
References: <20220126083534.4016012-1-yukuai3(a)huawei.com>
MIME-Version: 1.0
mainline inclusion
from mainline-v5.14-rc1
commit 7c3f828b522b07adb341b08fde1660685c5ba3eb
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SA9G
---------------------------
Untangle the mess around blk_alloc_devt by moving the check for
the used allocation scheme into the callers.
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Reviewed-by: Hannes Reinecke <hare(a)suse.de>
Reviewed-by: Luis Chamberlain <mcgrof(a)kernel.org>
Reviewed-by: Ulf Hansson <ulf.hansson(a)linaro.org>
Link: https://lore.kernel.org/r/20210521055116.1053587-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
---
block/blk.h | 4 +-
block/genhd.c | 89 +++++++++++++----------------------------
block/partitions/core.c | 16 ++++++--
3 files changed, 42 insertions(+), 67 deletions(-)
diff --git a/block/blk.h b/block/blk.h
index 80bb41a43120..88b00aa6e1c6 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -354,8 +354,8 @@ static inline void blk_queue_free_zone_bitmaps(struct request_queue *q) {}
struct hd_struct *disk_map_sector_rcu(struct gendisk *disk, sector_t sector);
-int blk_alloc_devt(struct hd_struct *part, dev_t *devt);
-void blk_free_devt(dev_t devt);
+int blk_alloc_ext_minor(struct hd_struct *part);
+void blk_free_ext_minor(unsigned int minor);
void blk_invalidate_devt(dev_t devt);
char *disk_name(struct gendisk *hd, int partno, char *buf);
#define ADDPART_FLAG_NONE 0
diff --git a/block/genhd.c b/block/genhd.c
index ec1e2fe27249..019641280df3 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -557,31 +557,10 @@ static int blk_mangle_minor(int minor)
return minor;
}
-/**
- * blk_alloc_devt - allocate a dev_t for a partition
- * @part: partition to allocate dev_t for
- * @devt: out parameter for resulting dev_t
- *
- * Allocate a dev_t for block device.
- *
- * RETURNS:
- * 0 on success, allocated dev_t is returned in *@devt. -errno on
- * failure.
- *
- * CONTEXT:
- * Might sleep.
- */
-int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
+int blk_alloc_ext_minor(struct hd_struct *part)
{
- struct gendisk *disk = part_to_disk(part);
int idx;
- /* in consecutive minor range? */
- if (part->partno < disk->minors) {
- *devt = MKDEV(disk->major, disk->first_minor + part->partno);
- return 0;
- }
-
/* allocate ext devt */
idr_preload(GFP_KERNEL);
@@ -590,32 +569,20 @@ int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
spin_unlock_bh(&ext_devt_lock);
idr_preload_end();
- if (idx < 0)
- return idx == -ENOSPC ? -EBUSY : idx;
+ if (idx < 0) {
+ if (idx == -ENOSPC)
+ return -EBUSY;
+ return idx;
+ }
- *devt = MKDEV(BLOCK_EXT_MAJOR, blk_mangle_minor(idx));
- return 0;
+ return blk_mangle_minor(idx);
}
-/**
- * blk_free_devt - free a dev_t
- * @devt: dev_t to free
- *
- * Free @devt which was allocated using blk_alloc_devt().
- *
- * CONTEXT:
- * Might sleep.
- */
-void blk_free_devt(dev_t devt)
+void blk_free_ext_minor(unsigned int minor)
{
- if (devt == MKDEV(0, 0))
- return;
-
- if (MAJOR(devt) == BLOCK_EXT_MAJOR) {
- spin_lock_bh(&ext_devt_lock);
- idr_remove(&ext_devt_idr, blk_mangle_minor(MINOR(devt)));
- spin_unlock_bh(&ext_devt_lock);
- }
+ spin_lock_bh(&ext_devt_lock);
+ idr_remove(&ext_devt_idr, blk_mangle_minor(minor));
+ spin_unlock_bh(&ext_devt_lock);
}
/*
@@ -719,7 +686,6 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
struct device *ddev = disk_to_dev(disk);
struct disk_part_iter piter;
struct hd_struct *part;
- dev_t devt;
int ret;
/*
@@ -731,26 +697,25 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
if (register_queue)
elevator_init_mq(disk->queue);
- /* minors == 0 indicates to use ext devt from part0 and should
- * be accompanied with EXT_DEVT flag. Make sure all
- * parameters make sense.
- */
- WARN_ON(disk->minors && !(disk->major || disk->first_minor));
- WARN_ON(!disk->minors &&
- !(disk->flags & (GENHD_FL_EXT_DEVT | GENHD_FL_HIDDEN)));
+ if (disk->major) {
+ WARN_ON(!disk->minors);
+ } else {
+ WARN_ON(disk->minors);
+ WARN_ON(!(disk->flags & (GENHD_FL_EXT_DEVT | GENHD_FL_HIDDEN)));
- ret = blk_alloc_devt(&disk->part0, &devt);
- if (ret) {
- WARN_ON(1);
- return;
+ ret = blk_alloc_ext_minor(&disk->part0);
+ if (ret < 0) {
+ WARN_ON(1);
+ return;
+ }
+ disk->major = BLOCK_EXT_MAJOR;
+ disk->first_minor = MINOR(ret);
}
- disk->major = MAJOR(devt);
- disk->first_minor = MINOR(devt);
disk_alloc_events(disk);
if (!(disk->flags & GENHD_FL_HIDDEN))
- ddev->devt = devt;
+ ddev->devt = MKDEV(disk->major, disk->first_minor);
/* delay uevents, until we scanned partition table */
dev_set_uevent_suppress(ddev, 1);
@@ -792,7 +757,8 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
} else {
struct backing_dev_info *bdi = disk->queue->backing_dev_info;
- ret = bdi_register(bdi, "%u:%u", MAJOR(devt), MINOR(devt));
+ ret = bdi_register(bdi, "%u:%u",
+ disk->major, disk->first_minor);
WARN_ON(ret);
bdi_set_owner(bdi, ddev);
blk_register_region(disk_devt(disk), disk->minors, NULL,
@@ -1554,7 +1520,8 @@ static void disk_release(struct device *dev)
might_sleep();
- blk_free_devt(dev->devt);
+ if (MAJOR(dev->devt) == BLOCK_EXT_MAJOR)
+ blk_free_ext_minor(MINOR(dev->devt));
disk_release_events(disk);
kfree(disk->random);
disk_replace_part_tbl(disk, NULL);
diff --git a/block/partitions/core.c b/block/partitions/core.c
index 569b0ca9f6e1..68d75acc269a 100644
--- a/block/partitions/core.c
+++ b/block/partitions/core.c
@@ -258,7 +258,9 @@ static const struct attribute_group *part_attr_groups[] = {
static void part_release(struct device *dev)
{
struct hd_struct *p = dev_to_part(dev);
- blk_free_devt(dev->devt);
+
+ if (MAJOR(dev->devt) == BLOCK_EXT_MAJOR)
+ blk_free_ext_minor(MINOR(dev->devt));
hd_free_part(p);
kfree(p);
}
@@ -439,9 +441,15 @@ static struct hd_struct *add_partition(struct gendisk *disk, int partno,
pdev->type = &part_type;
pdev->parent = ddev;
- err = blk_alloc_devt(p, &devt);
- if (err)
- goto out_free_info;
+ /* in consecutive minor range? */
+ if (partno < disk->minors) {
+ devt = MKDEV(disk->major, disk->first_minor + partno);
+ } else {
+ err = blk_alloc_ext_minor(p);
+ if (err < 0)
+ goto out_free_info;
+ devt = MKDEV(BLOCK_EXT_MAJOR, err);
+ }
pdev->devt = devt;
/* delay uevent until 'holders' subdir is created */
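For readers following the refactor, the device-number split this patch moves into the callers can be sketched in plain C. This is a simplified model, not the kernel code: `struct disk`, `pick_devt`, and the macro values are illustrative stand-ins for `struct gendisk`, the caller-side logic in `add_partition`, and the kernel's dev_t encoding.

```c
#include <assert.h>

#define BLOCK_EXT_MAJOR 259
#define MKDEV(ma, mi) (((ma) << 20) | (mi))
#define MAJOR(dev)    ((dev) >> 20)
#define MINOR(dev)    ((dev) & 0xfffff)

/* Hypothetical stand-in for the gendisk fields the patch touches. */
struct disk {
    int major;
    int first_minor;
    int minors;      /* size of the consecutive minor range */
};

/* After the refactor the caller decides which scheme applies: a
 * partition inside the consecutive range gets MKDEV(major,
 * first_minor + partno); anything else takes an extended minor
 * (which blk_alloc_ext_minor() would pull from an IDR). */
static int pick_devt(struct disk *d, int partno, int next_ext_minor)
{
    if (partno < d->minors)
        return MKDEV(d->major, d->first_minor + partno);
    return MKDEV(BLOCK_EXT_MAJOR, next_ext_minor);
}
```

The point of the refactor is that `blk_alloc_ext_minor()` no longer needs to know about the consecutive range at all; it only hands out extended minors.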
From patchwork Wed Jan 26 08:35:32 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yu Kuai <yukuai3(a)huawei.com>
X-Patchwork-Id: 152665
Return-Path: <yukuai3(a)huawei.com>
From: Yu Kuai <yukuai3(a)huawei.com>
To: <zhengzengkai(a)huawei.com>, <xiexiuqi(a)huawei.com>, <patchwork(a)huawei.com>
CC: <yukuai3(a)huawei.com>, <yi.zhang(a)huawei.com>, <houtao1(a)huawei.com>,
<chenxiaosong2(a)huawei.com>, <chengzhihao1(a)huawei.com>,
<libaokun1(a)huawei.com>, <luomeng12(a)huawei.com>, <yanaijie(a)huawei.com>,
<yangerkun(a)huawei.com>, <yebin10(a)huawei.com>, <yuyufen(a)huawei.com>,
<zhangxiaoxu5(a)huawei.com>, <zhengbin13(a)huawei.com>, <koulihong(a)huawei.com>,
<qiulaibin(a)huawei.com>, <zhengliang6(a)huawei.com>
Subject: [PATCH OLK-5.10 10/12] block: automatically enable GENHD_FL_EXT_DEVT
Date: Wed, 26 Jan 2022 16:35:32 +0800
Message-ID: <20220126083534.4016012-11-yukuai3(a)huawei.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220126083534.4016012-1-yukuai3(a)huawei.com>
References: <20220126083534.4016012-1-yukuai3(a)huawei.com>
Content-Type: text/plain
From: Christoph Hellwig <hch(a)lst.de>
mainline inclusion
from mainline-v5.14-rc1
commit 0d1feb72ffd8578f6f167ca15b2096c276c1f6df
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SA9G
---------------------------
Automatically set the GENHD_FL_EXT_DEVT flag for all disks allocated
without an explicit number of minors. This is what all new block
drivers should do, so make sure it is the default without boilerplate
code.
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Reviewed-by: Hannes Reinecke <hare(a)suse.de>
Reviewed-by: Luis Chamberlain <mcgrof(a)kernel.org>
Reviewed-by: Ulf Hansson <ulf.hansson(a)linaro.org>
Link: https://lore.kernel.org/r/20210521055116.1053587-4-hch@lst.de
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
---
block/genhd.c | 2 +-
drivers/block/brd.c | 1 -
drivers/block/loop.c | 1 -
drivers/block/null_blk_main.c | 2 +-
drivers/block/rbd.c | 2 --
drivers/block/virtio_blk.c | 1 -
drivers/ide/ide-gd.c | 1 -
drivers/lightnvm/core.c | 1 -
drivers/md/md.c | 1 -
drivers/memstick/core/ms_block.c | 1 -
drivers/mmc/core/block.c | 1 -
drivers/nvdimm/blk.c | 1 -
drivers/nvdimm/btt.c | 1 -
drivers/nvdimm/pmem.c | 1 -
drivers/nvme/host/core.c | 2 +-
drivers/nvme/host/multipath.c | 1 -
drivers/scsi/sd.c | 1 -
17 files changed, 3 insertions(+), 18 deletions(-)
diff --git a/block/genhd.c b/block/genhd.c
index 019641280df3..c35bceed6870 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -701,7 +701,6 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
WARN_ON(!disk->minors);
} else {
WARN_ON(disk->minors);
- WARN_ON(!(disk->flags & (GENHD_FL_EXT_DEVT | GENHD_FL_HIDDEN)));
ret = blk_alloc_ext_minor(&disk->part0);
if (ret < 0) {
@@ -710,6 +709,7 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
}
disk->major = BLOCK_EXT_MAJOR;
disk->first_minor = MINOR(ret);
+ disk->flags |= GENHD_FL_EXT_DEVT;
}
disk_alloc_events(disk);
diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index 085d1ff2bc03..395195edd5d7 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -411,7 +411,6 @@ static struct brd_device *brd_alloc(int i)
disk->first_minor = i * max_part;
disk->fops = &brd_fops;
disk->private_data = brd;
- disk->flags = GENHD_FL_EXT_DEVT;
strlcpy(disk->disk_name, buf, DISK_NAME_LEN);
set_capacity(disk, rd_size * 2);
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index e354faf7c9e6..b761afd294a2 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -2154,7 +2154,6 @@ static int loop_add(struct loop_device **l, int i)
*/
if (!part_shift)
disk->flags |= GENHD_FL_NO_PART_SCAN;
- disk->flags |= GENHD_FL_EXT_DEVT;
atomic_set(&lo->lo_refcnt, 0);
lo->lo_number = i;
spin_lock_init(&lo->lo_lock);
diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c
index bb3686c3869d..4279db5bef59 100644
--- a/drivers/block/null_blk_main.c
+++ b/drivers/block/null_blk_main.c
@@ -1692,7 +1692,7 @@ static int null_gendisk_register(struct nullb *nullb)
return -ENOMEM;
set_capacity(disk, size);
- disk->flags |= GENHD_FL_EXT_DEVT | GENHD_FL_SUPPRESS_PARTITION_INFO;
+ disk->flags |= GENHD_FL_SUPPRESS_PARTITION_INFO;
disk->major = null_major;
disk->first_minor = nullb->index;
if (queue_is_mq(nullb->q))
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 340b1df365f7..7249448b2a51 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -4973,8 +4973,6 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
rbd_dev->dev_id);
disk->major = rbd_dev->major;
disk->first_minor = rbd_dev->minor;
- if (single_major)
- disk->flags |= GENHD_FL_EXT_DEVT;
disk->fops = &rbd_bd_ops;
disk->private_data = rbd_dev;
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 42acf9587ef3..eba1d6e06ca2 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -794,7 +794,6 @@ static int virtblk_probe(struct virtio_device *vdev)
vblk->disk->first_minor = index_to_minor(index);
vblk->disk->private_data = vblk;
vblk->disk->fops = &virtblk_fops;
- vblk->disk->flags |= GENHD_FL_EXT_DEVT;
vblk->index = index;
/* configure queue flush support */
diff --git a/drivers/ide/ide-gd.c b/drivers/ide/ide-gd.c
index e2b6c82586ce..1c19f6830704 100644
--- a/drivers/ide/ide-gd.c
+++ b/drivers/ide/ide-gd.c
@@ -395,7 +395,6 @@ static int ide_gd_probe(ide_drive_t *drive)
set_capacity(g, ide_gd_capacity(drive));
g->minors = IDE_DISK_MINORS;
- g->flags |= GENHD_FL_EXT_DEVT;
if (drive->dev_flags & IDE_DFLAG_REMOVABLE)
g->flags = GENHD_FL_REMOVABLE;
g->fops = &ide_gd_ops;
diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 28ddcaa5358b..8903b7b57e61 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -383,7 +383,6 @@ static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create)
}
strlcpy(tdisk->disk_name, create->tgtname, sizeof(tdisk->disk_name));
- tdisk->flags = GENHD_FL_EXT_DEVT;
tdisk->major = 0;
tdisk->first_minor = 0;
tdisk->fops = tt->bops;
diff --git a/drivers/md/md.c b/drivers/md/md.c
index a299fda5b0e9..1cc26fab10b1 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5721,7 +5721,6 @@ static int md_alloc(dev_t dev, char *name)
* 'mdp' device redundant, but we can't really
* remove it now.
*/
- disk->flags |= GENHD_FL_EXT_DEVT;
disk->events |= DISK_EVENT_MEDIA_CHANGE;
mddev->gendisk = disk;
add_disk(disk);
diff --git a/drivers/memstick/core/ms_block.c b/drivers/memstick/core/ms_block.c
index bc1f484f50f1..0ca990f7ca3e 100644
--- a/drivers/memstick/core/ms_block.c
+++ b/drivers/memstick/core/ms_block.c
@@ -2136,7 +2136,6 @@ static int msb_init_disk(struct memstick_dev *card)
msb->disk->fops = &msb_bdops;
msb->disk->private_data = msb;
msb->disk->queue = msb->queue;
- msb->disk->flags |= GENHD_FL_EXT_DEVT;
capacity = msb->pages_in_block * msb->logical_block_count;
capacity *= (msb->page_size / 512);
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 94caee49da99..78a491ddcbf6 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2347,7 +2347,6 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
md->disk->queue = md->queue.queue;
md->parent = parent;
set_disk_ro(md->disk, md->read_only || default_ro);
- md->disk->flags = GENHD_FL_EXT_DEVT;
if (area_type & (MMC_BLK_DATA_AREA_RPMB | MMC_BLK_DATA_AREA_BOOT))
md->disk->flags |= GENHD_FL_NO_PART_SCAN
| GENHD_FL_SUPPRESS_PARTITION_INFO;
diff --git a/drivers/nvdimm/blk.c b/drivers/nvdimm/blk.c
index 22e5617b2cea..f1295f6896c9 100644
--- a/drivers/nvdimm/blk.c
+++ b/drivers/nvdimm/blk.c
@@ -267,7 +267,6 @@ static int nsblk_attach_disk(struct nd_namespace_blk *nsblk)
disk->first_minor = 0;
disk->fops = &nd_blk_fops;
disk->queue = q;
- disk->flags = GENHD_FL_EXT_DEVT;
disk->private_data = nsblk;
nvdimm_namespace_disk_name(&nsblk->common, disk->disk_name);
diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index 12ff6f8784ac..42d79a9a404f 100644
--- a/drivers/nvdimm/btt.c
+++ b/drivers/nvdimm/btt.c
@@ -1536,7 +1536,6 @@ static int btt_blk_init(struct btt *btt)
btt->btt_disk->fops = &btt_fops;
btt->btt_disk->private_data = btt;
btt->btt_disk->queue = btt->btt_queue;
- btt->btt_disk->flags = GENHD_FL_EXT_DEVT;
blk_queue_logical_block_size(btt->btt_queue, btt->sector_size);
blk_queue_max_hw_sectors(btt->btt_queue, UINT_MAX);
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index d5dd79b59b16..ca2a1967c070 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -476,7 +476,6 @@ static int pmem_attach_disk(struct device *dev,
disk->fops = &pmem_fops;
disk->queue = q;
- disk->flags = GENHD_FL_EXT_DEVT;
disk->private_data = pmem;
nvdimm_namespace_disk_name(ndns, disk->disk_name);
set_capacity(disk, (pmem->size - pmem->pfn_pad - pmem->data_offset)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 9ccf44592fe4..24d59b73ac3e 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3825,7 +3825,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
struct gendisk *disk;
struct nvme_id_ns *id;
char disk_name[DISK_NAME_LEN];
- int node = ctrl->numa_node, flags = GENHD_FL_EXT_DEVT, ret;
+ int node = ctrl->numa_node, flags = 0, ret;
if (nvme_identify_ns(ctrl, nsid, ids, &id))
return;
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 18a756444d5a..95a5222b3642 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -392,7 +392,6 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
head->disk->fops = &nvme_ns_head_ops;
head->disk->private_data = head;
head->disk->queue = q;
- head->disk->flags = GENHD_FL_EXT_DEVT;
sprintf(head->disk->disk_name, "nvme%dn%d",
ctrl->subsys->instance, head->instance);
return 0;
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 3fc184e9702f..db7166fd05e1 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3479,7 +3479,6 @@ static int sd_probe(struct device *dev)
sd_revalidate_disk(gd);
- gd->flags = GENHD_FL_EXT_DEVT;
if (sdp->removable) {
gd->flags |= GENHD_FL_REMOVABLE;
gd->events |= DISK_EVENT_MEDIA_CHANGE;
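The effect of this patch can be modeled in a few lines: a disk registered with no major becomes an extended-devt disk automatically, so the per-driver `GENHD_FL_EXT_DEVT` assignments removed above are redundant. The sketch below is illustrative; `register_flags` is a hypothetical helper, though the flag value matches the 5.10 definition.

```c
#include <assert.h>

#define GENHD_FL_EXT_DEVT 0x0040  /* value as defined in 5.10 genhd.h */

/* Simplified model of the registration-time logic after this patch:
 * a driver that sets no major/minors gets GENHD_FL_EXT_DEVT set for
 * it, so the boilerplate in each driver can be dropped. */
static unsigned int register_flags(int major, int minors, unsigned int flags)
{
    if (!major && !minors)
        flags |= GENHD_FL_EXT_DEVT;
    return flags;
}
```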
From patchwork Wed Jan 26 08:35:33 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yu Kuai <yukuai3(a)huawei.com>
X-Patchwork-Id: 152666
Return-Path: <yukuai3(a)huawei.com>
From: Yu Kuai <yukuai3(a)huawei.com>
To: <zhengzengkai(a)huawei.com>, <xiexiuqi(a)huawei.com>, <patchwork(a)huawei.com>
CC: <yukuai3(a)huawei.com>, <yi.zhang(a)huawei.com>, <houtao1(a)huawei.com>,
<chenxiaosong2(a)huawei.com>, <chengzhihao1(a)huawei.com>,
<libaokun1(a)huawei.com>, <luomeng12(a)huawei.com>, <yanaijie(a)huawei.com>,
<yangerkun(a)huawei.com>, <yebin10(a)huawei.com>, <yuyufen(a)huawei.com>,
<zhangxiaoxu5(a)huawei.com>, <zhengbin13(a)huawei.com>, <koulihong(a)huawei.com>,
<qiulaibin(a)huawei.com>, <zhengliang6(a)huawei.com>
Subject: [PATCH OLK-5.10 11/12] block: add error handling for device_add_disk
/ add_disk
Date: Wed, 26 Jan 2022 16:35:33 +0800
Message-ID: <20220126083534.4016012-12-yukuai3(a)huawei.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220126083534.4016012-1-yukuai3(a)huawei.com>
References: <20220126083534.4016012-1-yukuai3(a)huawei.com>
Content-Type: text/plain
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SA9G
---------------------------
Properly unwind on errors in device_add_disk. This is only the initial
work; drivers are not converted yet, and that will follow in separate patches.
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
---
block/genhd.c | 91 +++++++++++++++++++++++++++++++++++----------------
1 file changed, 63 insertions(+), 28 deletions(-)
diff --git a/block/genhd.c b/block/genhd.c
index c35bceed6870..371fdd2e0786 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -676,10 +676,8 @@ static void disk_scan_partitions(struct gendisk *disk)
*
* This function registers the partitioning information in @disk
* with the kernel.
- *
- * FIXME: error handling
*/
-static void __device_add_disk(struct device *parent, struct gendisk *disk,
+static int __device_add_disk(struct device *parent, struct gendisk *disk,
const struct attribute_group **groups,
bool register_queue)
{
@@ -698,22 +696,20 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
elevator_init_mq(disk->queue);
if (disk->major) {
- WARN_ON(!disk->minors);
+ if (WARN_ON(!disk->minors))
+ return -EINVAL;
} else {
- WARN_ON(disk->minors);
+ if (WARN_ON(disk->minors))
+ return -EINVAL;
ret = blk_alloc_ext_minor(&disk->part0);
- if (ret < 0) {
- WARN_ON(1);
- return;
- }
+ if (ret < 0)
+ return ret;
disk->major = BLOCK_EXT_MAJOR;
disk->first_minor = MINOR(ret);
disk->flags |= GENHD_FL_EXT_DEVT;
}
- disk_alloc_events(disk);
-
if (!(disk->flags & GENHD_FL_HIDDEN))
ddev->devt = MKDEV(disk->major, disk->first_minor);
@@ -724,15 +720,19 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
ddev->groups = groups;
dev_set_name(ddev, "%s", disk->disk_name);
- if (device_add(ddev))
- return;
+ ret = device_add(ddev);
+ if (ret)
+ goto out_free_ext_minor;
+
+ ret = disk_alloc_events(disk);
+ if (ret)
+ goto out_device_del;
+
if (!sysfs_deprecated) {
ret = sysfs_create_link(block_depr, &ddev->kobj,
kobject_name(&ddev->kobj));
- if (ret) {
- device_del(ddev);
- return;
- }
+ if (ret)
+ goto out_device_del;
}
/*
@@ -742,10 +742,20 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
*/
pm_runtime_set_memalloc_noio(ddev, true);
- blk_integrity_add(disk);
+ ret = blk_integrity_add(disk);
+ if (ret)
+ goto out_del_block_link;
disk->part0.holder_dir = kobject_create_and_add("holders", &ddev->kobj);
+ if (!disk->part0.holder_dir) {
+ ret = -ENOMEM;
+ goto out_del_integrity;
+ }
disk->slave_dir = kobject_create_and_add("slaves", &ddev->kobj);
+ if (!disk->slave_dir) {
+ ret = -ENOMEM;
+ goto out_put_holder_dir;
+ }
if (disk->flags & GENHD_FL_HIDDEN) {
/*
@@ -759,17 +769,17 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
ret = bdi_register(bdi, "%u:%u",
disk->major, disk->first_minor);
- WARN_ON(ret);
+ if (ret)
+ goto out_put_slave_dir;
bdi_set_owner(bdi, ddev);
blk_register_region(disk_devt(disk), disk->minors, NULL,
exact_match, exact_lock, disk);
- if (disk->queue->backing_dev_info->dev) {
- ret = sysfs_create_link(&ddev->kobj,
- &disk->queue->backing_dev_info->dev->kobj,
- "bdi");
- WARN_ON(ret);
- }
+ ret = sysfs_create_link(&ddev->kobj,
+ &disk->queue->backing_dev_info->dev->kobj,
+ "bdi");
+ if (ret)
+ goto out_unregister_bdi;
disk_scan_partitions(disk);
@@ -784,8 +794,11 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
disk_part_iter_exit(&piter);
}
- if (register_queue)
- blk_register_queue(disk);
+ if (register_queue) {
+ ret = blk_register_queue(disk);
+ if (ret)
+ goto out_del_bdi_link;
+ }
/*
* Take an extra ref on queue which will be put on disk_release()
@@ -794,8 +807,30 @@ static void __device_add_disk(struct device *parent, struct gendisk *disk,
WARN_ON_ONCE(!blk_get_queue(disk->queue));
disk_add_events(disk);
-
disk->flags |= GENHD_FL_UP;
+ return 0;
+
+out_del_bdi_link:
+ if (!(disk->flags & GENHD_FL_HIDDEN))
+ sysfs_remove_link(&ddev->kobj, "bdi");
+out_unregister_bdi:
+ if (!(disk->flags & GENHD_FL_HIDDEN))
+ bdi_unregister(disk->queue->backing_dev_info);
+out_put_slave_dir:
+ kobject_put(disk->slave_dir);
+out_put_holder_dir:
+ kobject_put(disk->part0.holder_dir);
+out_del_integrity:
+ blk_integrity_del(disk);
+out_del_block_link:
+ if (!sysfs_deprecated)
+ sysfs_remove_link(block_depr, dev_name(ddev));
+out_device_del:
+ device_del(ddev);
+out_free_ext_minor:
+ if (disk->major == BLOCK_EXT_MAJOR)
+ blk_free_ext_minor(disk->first_minor);
+ return WARN_ON_ONCE(ret); /* keep until all callers handle errors */
}
void device_add_disk(struct device *parent, struct gendisk *disk,
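The unwind ladder added to `__device_add_disk` follows the classic kernel goto pattern: each failure jumps to a label that releases everything acquired so far, in reverse order. A minimal standalone sketch (hypothetical `setup` function with heap allocations standing in for the kernel objects):

```c
#include <assert.h>
#include <stdlib.h>

/* fail_at simulates a failure after the Nth acquisition; each label
 * undoes exactly the resources acquired before the jump. */
static int setup(int fail_at)
{
    void *a, *b;

    a = malloc(8);
    if (!a)
        return -1;
    if (fail_at == 1)
        goto out_free_a;
    b = malloc(8);
    if (!b)
        goto out_free_a;
    if (fail_at == 2)
        goto out_free_b;
    /* success: the caller would own a and b; freed here only to keep
     * the demo leak-free */
    free(b);
    free(a);
    return 0;

out_free_b:
    free(b);
out_free_a:
    free(a);
    return -1;
}
```

The labels read bottom-up as the mirror image of the acquisition order, which is why a newly added resource only needs one new label and one new `goto`.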
From patchwork Wed Jan 26 08:35:34 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yu Kuai <yukuai3(a)huawei.com>
X-Patchwork-Id: 152667
Return-Path: <yukuai3(a)huawei.com>
From: Yu Kuai <yukuai3(a)huawei.com>
To: <zhengzengkai(a)huawei.com>, <xiexiuqi(a)huawei.com>, <patchwork(a)huawei.com>
CC: <yukuai3(a)huawei.com>, <yi.zhang(a)huawei.com>, <houtao1(a)huawei.com>,
<chenxiaosong2(a)huawei.com>, <chengzhihao1(a)huawei.com>,
<libaokun1(a)huawei.com>, <luomeng12(a)huawei.com>, <yanaijie(a)huawei.com>,
<yangerkun(a)huawei.com>, <yebin10(a)huawei.com>, <yuyufen(a)huawei.com>,
<zhangxiaoxu5(a)huawei.com>, <zhengbin13(a)huawei.com>, <koulihong(a)huawei.com>,
<qiulaibin(a)huawei.com>, <zhengliang6(a)huawei.com>
Subject: [PATCH OLK-5.10 12/12] sd: don't mess with SD_MINORS for
CONFIG_DEBUG_BLOCK_EXT_DEVT
Date: Wed, 26 Jan 2022 16:35:34 +0800
Message-ID: <20220126083534.4016012-13-yukuai3(a)huawei.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220126083534.4016012-1-yukuai3(a)huawei.com>
References: <20220126083534.4016012-1-yukuai3(a)huawei.com>
Content-Type: text/plain
From: Christoph Hellwig <hch(a)lst.de>
mainline inclusion
from mainline-v5.14-rc2
commit 7fef2edf7cc753b51f7ccc74993971b0a9c81eca
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SA9G
---------------------------
No need to give up the original sd minor even with this option,
and if we did we'd also need to fix the number of minors for
this configuration to actually work.
Fixes: 7c3f828b522b0 ("block: refactor device number setup in __device_add_disk")
Reported-by: Guenter Roeck <linux(a)roeck-us.net>
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Tested-by: Guenter Roeck <linux(a)roeck-us.net>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
---
drivers/scsi/sd.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index db7166fd05e1..2dfa5a34b048 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -98,11 +98,7 @@ MODULE_ALIAS_SCSI_DEVICE(TYPE_MOD);
MODULE_ALIAS_SCSI_DEVICE(TYPE_RBC);
MODULE_ALIAS_SCSI_DEVICE(TYPE_ZBC);
-#if !defined(CONFIG_DEBUG_BLOCK_EXT_DEVT)
#define SD_MINORS 16
-#else
-#define SD_MINORS 0
-#endif
static void sd_config_discard(struct scsi_disk *, unsigned int);
static void sd_config_write_same(struct scsi_disk *);
[PATCH openEuler-5.10] lib/iov_iter: initialize "flags" in new pipe_buffer
by Zheng Zengkai 08 Mar '22
From: Max Kellermann <max.kellermann(a)ionos.com>
mainline inclusion
from mainline-v5.17-rc6
commit 9d2231c5d74e13b2a0546fee6737ee4446017903
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4WUKP?from=project-issue
CVE: CVE-2022-0847
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/l…
--------------------------------
The functions copy_page_to_iter_pipe() and push_pipe() can both
allocate a new pipe_buffer, but the "flags" member initializer is
missing.
Fixes: 241699cd72a8 ("new iov_iter flavour: pipe-backed")
To: Alexander Viro <viro(a)zeniv.linux.org.uk>
To: linux-fsdevel(a)vger.kernel.org
To: linux-kernel(a)vger.kernel.org
Cc: stable(a)vger.kernel.org
Signed-off-by: Max Kellermann <max.kellermann(a)ionos.com>
Signed-off-by: Al Viro <viro(a)zeniv.linux.org.uk>
Signed-off-by: Zhihao Cheng <chengzhihao1(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
lib/iov_iter.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index b364231b5fc8..1b0a349fbcd9 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -407,6 +407,7 @@ static size_t copy_page_to_iter_pipe(struct page *page, size_t offset, size_t by
return 0;
buf->ops = &page_cache_pipe_buf_ops;
+ buf->flags = 0;
get_page(page);
buf->page = page;
buf->offset = offset;
@@ -543,6 +544,7 @@ static size_t push_pipe(struct iov_iter *i, size_t size,
break;
buf->ops = &default_pipe_buf_ops;
+ buf->flags = 0;
buf->page = page;
buf->offset = 0;
buf->len = min_t(ssize_t, left, PAGE_SIZE);
--
2.20.1
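The bug class fixed here (CVE-2022-0847, "Dirty Pipe") is a recycled structure keeping a stale field. The sketch below models it outside the kernel: `struct buf` and the flag value are illustrative stand-ins for `struct pipe_buffer` and `PIPE_BUF_FLAG_CAN_MERGE`, which is the flag that made the real bug exploitable.

```c
#include <assert.h>

#define PIPE_BUF_FLAG_CAN_MERGE 0x10  /* illustrative value */

/* Simplified model: a buffer slot is recycled from a ring without
 * clearing "flags", so a stale flag from the previous user survives
 * into the new buffer. */
struct buf { const char *page; unsigned int flags; };

static void init_buf_buggy(struct buf *b, const char *page)
{
    b->page = page;          /* "flags" left as whatever was there */
}

static void init_buf_fixed(struct buf *b, const char *page)
{
    b->page = page;
    b->flags = 0;            /* the one-line fix from the patch */
}
```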
[PATCH openEuler-1.0-LTS] hugetlbfs: fix a truncation issue in hugepages parameter
by Yang Yingliang 08 Mar '22
From: Liu Yuntao <liuyuntao10(a)huawei.com>
mainline inclusion
from mainline-v5.17-rc6
commit e79ce9832316e09529b212a21278d68240ccbf1f
category: bugfix
bugzilla: 186043
CVE: NA
-------------------------------------------------
When we specify a large node number in the hugepages parameter, it may
be parsed to a different number due to truncation in this statement:
node = tmp;
For example, add following parameter in command line:
hugepagesz=1G hugepages=4294967297:5
and kernel will allocate 5 hugepages for node 1 instead of ignoring it.
I moved the validation check earlier to fix this issue, which also
slightly simplifies the condition here.
Link: https://lkml.kernel.org/r/20220209134018.8242-1-liuyuntao10@huawei.com
Fixes: b5389086ad7be0 ("hugetlbfs: extend the definition of hugepages parameter to support node allocation")
Signed-off-by: Liu Yuntao <liuyuntao10(a)huawei.com>
Reviewed-by: Mike Kravetz <mike.kravetz(a)oracle.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Liu Shixin <liushixin2(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
mm/hugetlb.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f7e41390f3d8b..68cec97bbd066 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3151,10 +3151,10 @@ static int __init hugetlb_nrpages_setup(char *s)
pr_warn("HugeTLB: architecture can't support node specific alloc, ignoring!\n");
return 0;
}
+ if (tmp >= nr_online_nodes)
+ goto invalid;
node = tmp;
p += count + 1;
- if (node < 0 || node >= nr_online_nodes)
- goto invalid;
/* Parse hugepages */
if (sscanf(p, "%lu%n", &tmp, &count) != 1)
goto invalid;
--
2.25.1
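The truncation the patch guards against is easy to reproduce: `hugetlb_nrpages_setup()` parses the node into an `unsigned long tmp` and then assigns it to an `int node`, and 4294967297 (2^32 + 1) narrows to 1 on common LP64 platforms. Checking the range while the value is still an `unsigned long`, as the patch does, rejects it before the narrowing assignment. `parse_node` below is a hypothetical model of that logic, not the kernel function.

```c
#include <assert.h>

/* Range-check the parsed node while it is still an unsigned long,
 * mirroring the ordering the patch establishes. */
static int parse_node(unsigned long tmp, int nr_online_nodes)
{
    if (tmp >= (unsigned long)nr_online_nodes)
        return -1;           /* reject before the narrowing assignment */
    return (int)tmp;
}
```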
[PATCH OLK-5.10 v4 0/2]arm64: ras: copy_from_user scenario support uce kernel recovery
by Tong Tiangen 07 Mar '22
v3->v4:
1. remove uce_kernel_recovery.h
2. optimize do_sea() and arm64_process_kernel_sea().
3. use pr_fmt instead of pr_uce.
v2->v3:
1. put uce kernel recovery related processing into a separate file.
2. fix the RAS error processing flow of do_sea().
3. add CONFIG_ARM64_UCE_KERNEL_RECOVERY=y to openeuler_defconfig.
4. update commit msg.
v1->v2:
1. update commit msg.
2. change copy_from_user return value.
3. change copy_from_user proc control bit.
Tong Tiangen (2):
arm64: ras: copy_from_user scenario support uce kernel recovery
arm64: config: enable CONFIG_ARM64_UCE_KERNEL_RECOVERY
Documentation/admin-guide/sysctl/kernel.rst | 17 ++
arch/arm64/Kconfig | 9 +
arch/arm64/configs/openeuler_defconfig | 1 +
arch/arm64/include/asm/exception.h | 13 ++
arch/arm64/lib/copy_from_user.S | 11 ++
arch/arm64/mm/Makefile | 2 +
arch/arm64/mm/fault.c | 4 +
arch/arm64/mm/uce_kernel_recovery.c | 198 ++++++++++++++++++++
8 files changed, 255 insertions(+)
create mode 100644 arch/arm64/mm/uce_kernel_recovery.c
--
2.18.0.huawei.25

[PATCH OLK-5.10 v3 0/3] arm64: ras: copy_from_user scenario support uce kernel recovery
by Tong Tiangen 07 Mar '22
v2->v3:
1. put uce kernel recovery related processing into a separate file.
2. fix the RAS error processing flow of do_sea().
3. add CONFIG_ARM64_UCE_KERNEL_RECOVERY=y to openeuler_defconfig.
4. update commit msg.
v1->v2:
1. update commit msg.
2. change copy_from_user return value.
3. change copy_from_user proc control bit.
Tong Tiangen (3):
arm64: ras: add missing call apei_claim_sea in kernel mode
arm64: ras: copy_from_user scenario support uce kernel recovery
arm64: ras: enable CONFIG_ARM64_UCE_KERNEL_RECOVERY on arm64
Documentation/admin-guide/sysctl/kernel.rst | 17 ++
arch/arm64/Kconfig | 9 +
arch/arm64/configs/openeuler_defconfig | 1 +
arch/arm64/include/asm/uce_kernel_recovery.h | 33 ++++
arch/arm64/lib/copy_from_user.S | 11 ++
arch/arm64/mm/Makefile | 2 +
arch/arm64/mm/fault.c | 7 +-
arch/arm64/mm/uce_kernel_recovery.c | 193 +++++++++++++++++++
8 files changed, 272 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/include/asm/uce_kernel_recovery.h
create mode 100644 arch/arm64/mm/uce_kernel_recovery.c
--
2.18.0.huawei.25

[PATCH openEuler-1.0-LTS] xhci: Fix a logic issue when displaying Zhaoxin XHCI root hub speed
by LeoLiuoc 07 Mar '22
Fix a logic issue when displaying the Zhaoxin XHCI root hub speed.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/usb/host/xhci.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index 8fabb8757..fe042cf13 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -5132,10 +5132,10 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
if (XHCI_EXT_PORT_PSIV(xhci->port_caps[j].psi[i]) >= 5)
minor_rev = 1;
}
- if (minor_rev != 1) {
- hcd->speed = HCD_USB3;
- hcd->self.root_hub->speed = USB_SPEED_SUPER;
- }
+ }
+ if (minor_rev != 1) {
+ hcd->speed = HCD_USB3;
+ hcd->self.root_hub->speed = USB_SPEED_SUPER;
}
}
--
2.20.1
Hi all,
As discussed earlier, both the OSV and driver-compatibility SIGs have
raised requirements for kernel interface compatibility. Taking the
openEuler-20.03-LTS driver list and compatibility KABI whitelist as
the baseline, and using the following inputs for removals and
additions, we have produced a draft openEuler-22.03-LTS compatibility
KABI whitelist. Please review it.
ARM64 change statistics:
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| List name                      | Interfaces | Description                                                                                     |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| 4.19kabi_whitelist_aarch64     | 2463       | openEuler-20.03-LTS whitelist (baseline)                                                        |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| whitelist_remove_aarch64       | -164       | interfaces absent from or replaced in openEuler-22.03-LTS                                       |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| whitelist_add_aarch64          | 133        | interfaces added as openEuler-20.03-LTS evolved, plus replacements added in openEuler-22.03-LTS |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| nvidia.ko, nvidia_vgpu_vfio.ko | 18         | interfaces added for the nvidia.ko / nvidia_vgpu_vfio.ko compatibility requirement              |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| 4.19rpm_add_aarch64            | 97         | interfaces added by openEuler-20.03-LTS third-party driver rpms                                 |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| 5.10ko_add_aarch64             | 373        | interfaces added by openEuler-22.03-LTS in-tree driver kos                                      |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| redhat-4.18.0-358.el8          | 4          | interfaces added from the redhat-4.18.0-358.el8 whitelist                                       |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| summary_add_aarch64            | 515        | union of the additions above, deduplicated                                                      |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| 5.10kabi_whitelist_aarch64     | 2814       | openEuler-22.03-LTS draft whitelist (2463-164+515)                                              |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
X86 change statistics:
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| List name                      | Interfaces | Description                                                                                     |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| 4.19kabi_whitelist_x86_64      | 2777       | openEuler-20.03-LTS whitelist (baseline)                                                        |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| whitelist_remove_x86_64        | -168       | interfaces absent from or replaced in openEuler-22.03-LTS                                       |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| whitelist_add_x86_64           | 130        | interfaces added as openEuler-20.03-LTS evolved, plus replacements added in openEuler-22.03-LTS |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| nvidia.ko, nvidia_vgpu_vfio.ko | 17         | interfaces added for the nvidia.ko / nvidia_vgpu_vfio.ko compatibility requirement              |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| 4.19rpm_add_x86_64             | 83         | interfaces added by openEuler-20.03-LTS third-party driver rpms                                 |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| 5.10ko_add_x86_64              | 422        | interfaces added by openEuler-22.03-LTS in-tree driver kos                                      |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| redhat-4.18.0-358.el8          | 4          | interfaces added from the redhat-4.18.0-358.el8 whitelist                                       |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| summary_add_x86_64             | 562        | union of the additions above, deduplicated                                                      |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
| 5.10kabi_whitelist_x86_64      | 3171       | openEuler-22.03-LTS draft whitelist (2777-168+562)                                              |
+--------------------------------+------------+-------------------------------------------------------------------------------------------------+
KABI (Kernel Application Binary Interface) compatibility means binary
compatibility between the kernel and its drivers: a driver can be
installed and used on a new kernel without being recompiled. As long as
every interface a driver uses stays compatible, the driver works on a
new release without a rebuild. To keep development easy and to prevent
architectural decay, the upstream community does not preserve KABI
compatibility; in the industry it is the Linux distributions that
provide it, and the more interfaces are kept compatible and the longer
they are maintained, the higher the maintenance cost.
After several rounds of discussion with downstream OSVs and driver
teams, and based on their feedback, openEuler-22.03-LTS will provide
KABI compatibility within a limited scope, with the goal of keeping
commonly used board and card drivers compatible on openEuler-22.03-LTS.
The main input to the KABI whitelist is the set of drivers (ko files).
Based on downstream feedback, we added nvidia.ko and nvidia_vgpu_vfio.ko
to the openEuler-20.03-LTS driver list as compatibility targets, giving
the driver lists below.
Drivers (ko) targeted for compatibility on ARM64:
amdgpu.ko
bnx2.ko
bnx2x.ko
bnxt_en.ko
bnxt_re.ko
hclge.ko
hclgevf.ko
hifc.ko
hinic.ko
hnae3.ko
hns3.ko
i40e.ko
ice.ko
igb.ko
ixgbe.ko
ixgbevf.ko
lpfc.ko
megaraid_sas.ko
mlx4_core.ko
mlx4_ib.ko
mlx5_core.ko
mlx5_ib.ko
mpt3sas.ko
nvidia.ko
nvidia_vgpu_vfio.ko
nouveau.ko
nvme.ko
qed.ko
qede.ko
qla2xxx.ko
smartpqi.ko
tg3.ko
txgbe.ko
Drivers (ko) targeted for compatibility on X86:
amdgpu.ko
bnx2.ko
bnx2x.ko
bnxt_en.ko
bnxt_re.ko
hifc.ko
hinic.ko
i40e.ko
ice.ko
igb.ko
ixgbe.ko
ixgbevf.ko
lpfc.ko
megaraid_sas.ko
mlx4_core.ko
mlx4_ib.ko
mlx5_core.ko
mlx5_ib.ko
mpt3sas.ko
nvidia.ko
nvidia_vgpu_vfio.ko
nouveau.ko
nvme.ko
qed.ko
qede.ko
qla2xxx.ko
smartpqi.ko
tg3.ko
txgbe.ko
Important notes:
1. Most of the drivers listed above do not yet have official binaries
built for openEuler-22.03-LTS, so we collected their KABI lists from
open-source or similar versions; these may differ slightly from the
vendors' final releases. If downstream OSVs or driver teams find an
interface missing from the lists, please raise it during the review.
2. For any additional KABI compatibility requirement, please provide the
names of the KABI interfaces to be kept compatible and of the drivers
that use them, so the request can be evaluated.
3. Review feedback can be filed as comments on the issue
https://gitee.com/openeuler/kernel/issues/I4U6NZ
or sent by replying to this mail.
4. Feedback will be collected for one week, until 17:00 next Friday (March 11).
---
openEuler kernel SIG, 2022-3-4
---
Appendix 1: draft ARM64 KABI whitelist (2814 interfaces)
acpi_bus_get_device
acpi_check_dsm
acpi_dev_found
acpi_disabled
acpi_dma_configure_id
acpi_evaluate_dsm
acpi_evaluate_object
acpi_format_exception
acpi_gbl_FADT
acpi_get_devices
acpi_get_handle
acpi_get_name
acpi_get_table
acpi_gsi_to_irq
acpi_handle_printk
acpi_has_method
acpi_lid_open
acpi_match_device
__acpi_node_get_property_reference
acpi_os_map_memory
acpi_os_unmap_generic_address
acpi_os_unmap_memory
acpi_register_gsi
acpi_unregister_gsi
add_timer
add_uevent_var
add_wait_queue
add_wait_queue_exclusive
admin_timeout
alloc_chrdev_region
alloc_cpu_rmap
__alloc_disk_node
alloc_etherdev_mqs
alloc_netdev_mqs
__alloc_pages
alloc_pages
__alloc_percpu
__alloc_percpu_gfp
__alloc_skb
alloc_workqueue
anon_inode_getfd
anon_inode_getfile
apei_hest_parse
apei_map_generic_address
apei_read
__arch_clear_user
__arch_copy_from_user
__arch_copy_in_user
__arch_copy_to_user
arch_timer_read_counter
arch_wb_cache_pmem
arm64_const_caps_ready
arm64_use_ng_mappings
arp_tbl
async_schedule_node
ata_link_next
ata_tf_to_fis
_atomic_dec_and_lock
atomic_notifier_call_chain
atomic_notifier_chain_register
atomic_notifier_chain_unregister
attribute_container_find_class_device
autoremove_wake_function
backlight_device_register
backlight_device_set_brightness
backlight_device_unregister
backlight_force_update
bdevname
bdev_read_only
bdget_disk
_bin2bcd
bin2hex
bio_add_page
bio_alloc_bioset
bio_associate_blkg
bio_chain
bio_clone_fast
bio_devname
bio_endio
bio_free_pages
bio_init
bio_integrity_add_page
bio_integrity_alloc
bio_put
bioset_exit
bioset_init
__bitmap_and
__bitmap_andnot
__bitmap_clear
__bitmap_complement
__bitmap_equal
bitmap_find_free_region
bitmap_find_next_zero_area_off
bitmap_free
bitmap_from_arr32
__bitmap_intersects
__bitmap_or
bitmap_parse
bitmap_parselist
bitmap_print_to_pagebuf
bitmap_release_region
__bitmap_set
__bitmap_subset
__bitmap_weight
__bitmap_xor
bitmap_zalloc
bit_wait
blk_alloc_queue
blk_check_plugged
blk_cleanup_queue
blkdev_get_by_path
__blkdev_issue_discard
blkdev_issue_discard
blkdev_issue_flush
blkdev_issue_write_same
__blkdev_issue_zeroout
blkdev_issue_zeroout
blkdev_put
blk_execute_rq
blk_execute_rq_nowait
blk_finish_plug
blk_freeze_queue_start
blk_get_queue
blk_get_request
blk_integrity_register
blk_integrity_unregister
blk_mq_alloc_request
blk_mq_alloc_request_hctx
blk_mq_alloc_tag_set
blk_mq_complete_request
blk_mq_complete_request_remote
blk_mq_end_request
blk_mq_free_request
blk_mq_free_tag_set
blk_mq_freeze_queue
blk_mq_freeze_queue_wait
blk_mq_freeze_queue_wait_timeout
blk_mq_init_queue
blk_mq_map_queues
blk_mq_pci_map_queues
blk_mq_quiesce_queue
blk_mq_rdma_map_queues
blk_mq_requeue_request
blk_mq_run_hw_queues
blk_mq_start_request
blk_mq_tagset_busy_iter
blk_mq_tagset_wait_completed_request
blk_mq_tag_to_rq
blk_mq_unfreeze_queue
blk_mq_unique_tag
blk_mq_unquiesce_queue
blk_mq_update_nr_hw_queues
blk_poll
blk_put_queue
blk_put_request
blk_queue_bounce_limit
blk_queue_chunk_sectors
blk_queue_dma_alignment
blk_queue_flag_clear
blk_queue_flag_set
blk_queue_flag_test_and_set
blk_queue_io_min
blk_queue_io_opt
blk_queue_logical_block_size
blk_queue_max_discard_sectors
blk_queue_max_discard_segments
blk_queue_max_hw_sectors
blk_queue_max_segments
blk_queue_max_segment_size
blk_queue_max_write_same_sectors
blk_queue_max_write_zeroes_sectors
blk_queue_physical_block_size
blk_queue_rq_timeout
blk_queue_segment_boundary
blk_queue_split
blk_queue_update_dma_alignment
blk_queue_virt_boundary
blk_queue_write_cache
blk_rq_append_bio
blk_rq_count_integrity_sg
blk_rq_map_integrity_sg
blk_rq_map_kern
__blk_rq_map_sg
blk_rq_map_user
blk_rq_map_user_iov
blk_rq_unmap_user
blk_set_queue_dying
blk_set_stacking_limits
blk_stack_limits
blk_start_plug
blk_status_to_errno
blk_verify_command
blocking_notifier_call_chain
blocking_notifier_chain_register
blocking_notifier_chain_unregister
bpf_dispatcher_xdp_func
bpf_prog_add
bpf_prog_inc
bpf_prog_put
bpf_prog_sub
bpf_stats_enabled_key
bpf_trace_run1
bpf_trace_run2
bpf_trace_run3
bpf_trace_run4
bpf_trace_run5
bpf_trace_run6
bpf_trace_run9
bpf_warn_invalid_xdp_action
bsg_job_done
btree_destroy
btree_geo32
btree_geo64
btree_get_prev
btree_init
btree_insert
btree_last
btree_lookup
btree_remove
btree_update
build_skb
bus_find_device
bus_register
bus_unregister
cache_line_size
call_netdevice_notifiers
call_rcu
call_srcu
call_usermodehelper
cancel_delayed_work
cancel_delayed_work_sync
cancel_work_sync
can_do_mlock
capable
cdev_add
cdev_alloc
cdev_del
cdev_device_add
cdev_device_del
cdev_init
cgroup_attach_task_all
__check_object_size
class_compat_create_link
class_compat_register
class_compat_remove_link
class_compat_unregister
__class_create
class_create_file_ns
class_destroy
class_find_device
class_for_each_device
__class_register
class_unregister
cleanup_srcu_struct
clk_disable
clk_enable
clk_get_rate
clk_prepare
clk_unprepare
clk_unregister
__close_fd
commit_creds
compat_alloc_user_space
complete
complete_all
complete_and_exit
completion_done
component_add
component_del
__cond_resched
configfs_register_subsystem
configfs_remove_default_groups
configfs_unregister_subsystem
config_group_init
config_group_init_type_name
config_item_put
console_lock
console_unlock
__const_udelay
consume_skb
_copy_from_iter
_copy_to_iter
cper_estatus_check
cper_estatus_check_header
cper_estatus_print
__cpu_active_mask
cpu_all_bits
cpu_bit_bitmap
cpufreq_quick_get
__cpuhp_remove_state
__cpuhp_setup_state
__cpuhp_state_add_instance
__cpuhp_state_remove_instance
cpu_hwcap_keys
cpu_hwcaps
cpumask_local_spread
cpumask_next
cpumask_next_and
cpumask_next_wrap
cpu_number
__cpu_online_mask
__cpu_possible_mask
__cpu_present_mask
cpus_read_lock
cpus_read_unlock
crc32c
__crc32c_le_shift
crc32_le
crc8
crc8_populate_msb
crc_t10dif
crypto_ahash_digest
crypto_ahash_final
crypto_ahash_setkey
crypto_alloc_ahash
crypto_alloc_akcipher
crypto_alloc_shash
crypto_destroy_tfm
crypto_inc
__crypto_memneq
crypto_register_akcipher
crypto_register_alg
crypto_register_kpp
crypto_register_shash
crypto_register_skciphers
crypto_shash_final
crypto_shash_update
crypto_unregister_akcipher
crypto_unregister_alg
crypto_unregister_kpp
crypto_unregister_shash
crypto_unregister_skciphers
csum_ipv6_magic
csum_partial
csum_tcpudp_nofold
_ctype
dcb_getapp
dcb_ieee_delapp
dcb_ieee_getapp_mask
dcb_ieee_setapp
dcbnl_cee_notify
dcbnl_ieee_notify
dcb_setapp
debugfs_attr_read
debugfs_attr_write
debugfs_create_atomic_t
debugfs_create_bool
debugfs_create_dir
debugfs_create_file
debugfs_create_file_unsafe
debugfs_create_regset32
debugfs_create_symlink
debugfs_create_u16
debugfs_create_u32
debugfs_create_u64
debugfs_create_u8
debugfs_initialized
debugfs_lookup
debugfs_remove
default_llseek
default_wake_function
__delay
delayed_work_timer_fn
del_gendisk
del_timer
del_timer_sync
destroy_workqueue
dev_add_pack
dev_addr_add
dev_addr_del
dev_alloc_name
dev_attr_phy_event_threshold
dev_base_lock
dev_change_flags
dev_close
_dev_crit
dev_disable_lro
dev_driver_string
_dev_emerg
_dev_err
__dev_get_by_index
dev_get_by_index
dev_get_by_index_rcu
__dev_get_by_name
dev_get_by_name
dev_get_flags
dev_get_iflink
dev_get_stats
device_add
device_add_disk
device_create
device_create_file
device_create_with_groups
device_del
device_destroy
device_for_each_child
device_get_mac_address
device_get_next_child_node
device_initialize
device_link_add
device_property_present
device_property_read_string
device_property_read_string_array
device_property_read_u32_array
device_property_read_u8_array
device_register
device_release_driver
device_remove_file
device_remove_file_self
device_rename
device_reprobe
device_set_wakeup_capable
device_set_wakeup_enable
device_unregister
device_wakeup_disable
_dev_info
__dev_kfree_skb_any
__dev_kfree_skb_irq
devlink_alloc
devlink_flash_update_begin_notify
devlink_flash_update_end_notify
devlink_flash_update_status_notify
devlink_fmsg_arr_pair_nest_end
devlink_fmsg_arr_pair_nest_start
devlink_fmsg_binary_pair_nest_end
devlink_fmsg_binary_pair_nest_start
devlink_fmsg_binary_pair_put
devlink_fmsg_binary_put
devlink_fmsg_bool_pair_put
devlink_fmsg_obj_nest_end
devlink_fmsg_obj_nest_start
devlink_fmsg_pair_nest_end
devlink_fmsg_pair_nest_start
devlink_fmsg_string_pair_put
devlink_fmsg_u32_pair_put
devlink_fmsg_u32_put
devlink_fmsg_u64_pair_put
devlink_fmsg_u8_pair_put
devlink_free
devlink_health_report
devlink_health_reporter_create
devlink_health_reporter_destroy
devlink_health_reporter_priv
devlink_health_reporter_recovery_done
devlink_health_reporter_state_update
devlink_info_board_serial_number_put
devlink_info_driver_name_put
devlink_info_serial_number_put
devlink_info_version_fixed_put
devlink_info_version_running_put
devlink_info_version_stored_put
devlink_net
devlink_param_driverinit_value_get
devlink_param_driverinit_value_set
devlink_params_publish
devlink_params_register
devlink_params_unpublish
devlink_params_unregister
devlink_param_value_changed
devlink_port_attrs_set
devlink_port_health_reporter_create
devlink_port_health_reporter_destroy
devlink_port_params_register
devlink_port_params_unregister
devlink_port_register
devlink_port_type_clear
devlink_port_type_eth_set
devlink_port_type_ib_set
devlink_port_unregister
devlink_region_create
devlink_region_destroy
devlink_region_snapshot_create
devlink_region_snapshot_id_get
devlink_region_snapshot_id_put
devlink_register
devlink_reload_disable
devlink_reload_enable
devlink_remote_reload_actions_performed
devlink_unregister
devm_add_action
dev_mc_add
dev_mc_add_excl
dev_mc_del
devm_clk_get
__devm_drm_dev_alloc
devm_free_irq
devm_hwmon_device_register_with_groups
devm_ioremap
devm_ioremap_resource
devm_ioremap_wc
devm_iounmap
devm_kfree
devm_kmalloc
devm_kmemdup
devm_mdiobus_alloc_size
devm_request_threaded_irq
_dev_notice
dev_open
dev_pm_qos_expose_latency_tolerance
dev_pm_qos_hide_latency_tolerance
dev_pm_qos_update_user_latency_tolerance
dev_printk
dev_printk_emit
dev_queue_xmit
dev_remove_pack
dev_set_mac_address
dev_set_mtu
dev_set_name
dev_set_promiscuity
dev_trans_start
dev_uc_add
dev_uc_add_excl
dev_uc_del
_dev_warn
d_find_alias
disable_irq
disable_irq_nosync
disk_end_io_acct
disk_start_io_acct
dma_alloc_attrs
dma_buf_dynamic_attach
dma_buf_get
dma_buf_map_attachment
dma_buf_move_notify
dma_buf_pin
dma_buf_put
dma_buf_unmap_attachment
dma_buf_unpin
dma_fence_add_callback
dma_fence_array_create
dma_fence_context_alloc
dma_fence_free
dma_fence_get_status
dma_fence_get_stub
dma_fence_init
dma_fence_release
dma_fence_signal
dma_fence_signal_locked
dma_fence_wait_any_timeout
dma_fence_wait_timeout
dma_free_attrs
dma_get_required_mask
dma_get_sgtable_attrs
dmam_alloc_attrs
dma_map_page_attrs
dma_map_resource
dma_map_sg_attrs
dma_max_mapping_size
dmam_free_coherent
dma_mmap_attrs
dmam_pool_create
dma_pool_alloc
dma_pool_create
dma_pool_destroy
dma_pool_free
dma_resv_add_excl_fence
dma_resv_add_shared_fence
dma_resv_get_fences_rcu
dma_resv_reserve_shared
dma_resv_test_signaled_rcu
dma_resv_wait_timeout_rcu
dma_set_coherent_mask
dma_set_mask
dma_sync_single_for_cpu
dma_sync_single_for_device
dma_unmap_page_attrs
dma_unmap_resource
dma_unmap_sg_attrs
dmi_check_system
dmi_get_system_info
dmi_match
__do_once_done
__do_once_start
do_wait_intr
down
downgrade_write
down_interruptible
down_read
down_read_killable
down_read_trylock
down_timeout
down_trylock
down_write
down_write_killable
down_write_trylock
d_path
dput
dql_completed
dql_reset
drain_workqueue
driver_create_file
driver_for_each_device
driver_register
driver_remove_file
driver_unregister
drm_add_edid_modes
drm_add_modes_noedid
drm_atomic_add_affected_connectors
drm_atomic_add_affected_planes
drm_atomic_commit
drm_atomic_get_connector_state
drm_atomic_get_crtc_state
drm_atomic_get_plane_state
drm_atomic_get_private_obj_state
drm_atomic_helper_async_check
drm_atomic_helper_calc_timestamping_constants
drm_atomic_helper_check
drm_atomic_helper_check_modeset
drm_atomic_helper_check_planes
drm_atomic_helper_check_plane_state
drm_atomic_helper_cleanup_planes
drm_atomic_helper_commit
drm_atomic_helper_commit_cleanup_done
drm_atomic_helper_commit_hw_done
__drm_atomic_helper_connector_destroy_state
drm_atomic_helper_connector_destroy_state
__drm_atomic_helper_connector_duplicate_state
drm_atomic_helper_connector_duplicate_state
__drm_atomic_helper_connector_reset
drm_atomic_helper_connector_reset
__drm_atomic_helper_crtc_destroy_state
drm_atomic_helper_crtc_destroy_state
__drm_atomic_helper_crtc_duplicate_state
drm_atomic_helper_crtc_duplicate_state
__drm_atomic_helper_crtc_reset
drm_atomic_helper_crtc_reset
drm_atomic_helper_disable_plane
drm_atomic_helper_legacy_gamma_set
drm_atomic_helper_page_flip
__drm_atomic_helper_plane_destroy_state
drm_atomic_helper_plane_destroy_state
__drm_atomic_helper_plane_duplicate_state
drm_atomic_helper_plane_duplicate_state
__drm_atomic_helper_plane_reset
drm_atomic_helper_plane_reset
drm_atomic_helper_prepare_planes
__drm_atomic_helper_private_obj_duplicate_state
drm_atomic_helper_resume
drm_atomic_helper_set_config
drm_atomic_helper_setup_commit
drm_atomic_helper_shutdown
drm_atomic_helper_suspend
drm_atomic_helper_swap_state
drm_atomic_helper_update_legacy_modeset_state
drm_atomic_helper_update_plane
drm_atomic_helper_wait_for_dependencies
drm_atomic_helper_wait_for_fences
drm_atomic_helper_wait_for_flip_done
drm_atomic_private_obj_fini
drm_atomic_private_obj_init
drm_atomic_state_alloc
drm_atomic_state_clear
drm_atomic_state_default_clear
drm_atomic_state_default_release
__drm_atomic_state_free
drm_atomic_state_init
drm_compat_ioctl
drm_connector_attach_dp_subconnector_property
drm_connector_attach_encoder
drm_connector_attach_max_bpc_property
drm_connector_attach_vrr_capable_property
drm_connector_cleanup
drm_connector_init
drm_connector_init_with_ddc
drm_connector_list_iter_begin
drm_connector_list_iter_end
drm_connector_list_iter_next
drm_connector_register
drm_connector_set_path_property
drm_connector_set_vrr_capable_property
drm_connector_unregister
drm_connector_update_edid_property
drm_crtc_accurate_vblank_count
drm_crtc_add_crc_entry
drm_crtc_arm_vblank_event
drm_crtc_cleanup
__drm_crtc_commit_free
drm_crtc_enable_color_mgmt
drm_crtc_from_index
drm_crtc_handle_vblank
drm_crtc_helper_set_config
drm_crtc_helper_set_mode
drm_crtc_init
drm_crtc_init_with_planes
drm_crtc_send_vblank_event
drm_crtc_vblank_count
drm_crtc_vblank_get
drm_crtc_vblank_helper_get_vblank_timestamp
drm_crtc_vblank_helper_get_vblank_timestamp_internal
drm_crtc_vblank_off
drm_crtc_vblank_on
drm_crtc_vblank_put
drm_cvt_mode
__drm_dbg
__drm_debug
drm_debugfs_create_files
drm_detect_hdmi_monitor
drm_detect_monitor_audio
drm_dev_alloc
drm_dev_dbg
drm_dev_enter
drm_dev_exit
drm_dev_printk
drm_dev_put
drm_dev_register
drm_dev_unplug
drm_dev_unregister
drm_dp_atomic_find_vcpi_slots
drm_dp_atomic_release_vcpi_slots
drm_dp_aux_init
drm_dp_aux_register
drm_dp_aux_unregister
drm_dp_bw_code_to_link_rate
drm_dp_calc_pbn_mode
drm_dp_channel_eq_ok
drm_dp_check_act_status
drm_dp_clock_recovery_ok
drm_dp_downstream_max_dotclock
drm_dp_dpcd_read
drm_dp_dpcd_read_link_status
drm_dp_dpcd_write
drm_dp_find_vcpi_slots
drm_dp_get_adjust_request_pre_emphasis
drm_dp_get_adjust_request_voltage
drm_dp_link_rate_to_bw_code
drm_dp_link_train_channel_eq_delay
drm_dp_link_train_clock_recovery_delay
drm_dp_mst_allocate_vcpi
drm_dp_mst_atomic_check
drm_dp_mst_connector_early_unregister
drm_dp_mst_connector_late_register
drm_dp_mst_deallocate_vcpi
drm_dp_mst_detect_port
drm_dp_mst_dump_topology
drm_dp_mst_get_edid
drm_dp_mst_get_port_malloc
drm_dp_mst_hpd_irq
drm_dp_mst_put_port_malloc
drm_dp_mst_reset_vcpi_slots
drm_dp_mst_topology_mgr_destroy
drm_dp_mst_topology_mgr_init
drm_dp_mst_topology_mgr_resume
drm_dp_mst_topology_mgr_set_mst
drm_dp_mst_topology_mgr_suspend
drm_dp_read_desc
drm_dp_read_downstream_info
drm_dp_read_dpcd_caps
drm_dp_read_mst_cap
drm_dp_read_sink_count
drm_dp_read_sink_count_cap
drm_dp_send_real_edid_checksum
drm_dp_set_subconnector_property
drm_dp_start_crc
drm_dp_stop_crc
drm_dp_update_payload_part1
drm_dp_update_payload_part2
drm_edid_header_is_valid
drm_edid_is_valid
drm_edid_to_sad
drm_edid_to_speaker_allocation
drm_encoder_cleanup
drm_encoder_init
__drm_err
drm_fb_helper_alloc_fbi
drm_fb_helper_blank
drm_fb_helper_cfb_copyarea
drm_fb_helper_cfb_fillrect
drm_fb_helper_cfb_imageblit
drm_fb_helper_check_var
drm_fb_helper_debug_enter
drm_fb_helper_debug_leave
drm_fb_helper_fill_info
drm_fb_helper_fini
drm_fb_helper_hotplug_event
drm_fb_helper_init
drm_fb_helper_initial_config
drm_fb_helper_ioctl
drm_fb_helper_lastclose
drm_fb_helper_output_poll_changed
drm_fb_helper_pan_display
drm_fb_helper_prepare
drm_fb_helper_setcmap
drm_fb_helper_set_par
drm_fb_helper_set_suspend
drm_fb_helper_set_suspend_unlocked
drm_fb_helper_sys_copyarea
drm_fb_helper_sys_fillrect
drm_fb_helper_sys_imageblit
drm_fb_helper_unregister_fbi
drm_format_info
drm_framebuffer_cleanup
drm_framebuffer_init
drm_framebuffer_unregister_private
drm_gem_dmabuf_mmap
drm_gem_dmabuf_release
drm_gem_dmabuf_vmap
drm_gem_dmabuf_vunmap
drm_gem_fb_create_handle
drm_gem_fb_destroy
drm_gem_handle_create
drm_gem_handle_delete
drm_gem_map_attach
drm_gem_map_detach
drm_gem_map_dma_buf
drm_gem_object_free
drm_gem_object_init
drm_gem_object_lookup
drm_gem_object_release
drm_gem_prime_export
drm_gem_prime_fd_to_handle
drm_gem_prime_handle_to_fd
drm_gem_prime_import
drm_gem_private_object_init
drm_gem_unmap_dma_buf
drm_get_connector_status_name
drm_get_edid
drm_get_edid_switcheroo
drm_get_format_info
drm_get_format_name
drm_handle_vblank
drm_hdmi_avi_infoframe_from_display_mode
drm_hdmi_infoframe_set_hdr_metadata
drm_hdmi_vendor_infoframe_from_display_mode
drm_helper_connector_dpms
drm_helper_disable_unused_functions
drm_helper_force_disable_all
drm_helper_hpd_irq_event
drm_helper_mode_fill_fb_struct
drm_helper_probe_detect
drm_helper_probe_single_connector_modes
drm_helper_resume_force_mode
drm_i2c_encoder_detect
drm_i2c_encoder_init
drm_i2c_encoder_mode_fixup
drm_i2c_encoder_restore
drm_i2c_encoder_save
drm_invalid_op
drm_ioctl
drm_irq_install
drm_irq_uninstall
drm_is_current_master
drm_kms_helper_hotplug_event
drm_kms_helper_is_poll_worker
drm_kms_helper_poll_disable
drm_kms_helper_poll_enable
drm_kms_helper_poll_fini
drm_kms_helper_poll_init
drm_match_cea_mode
drm_mm_init
drm_mm_insert_node_in_range
drmm_mode_config_init
drm_mm_print
drm_mm_remove_node
drm_mm_takedown
drm_mode_config_cleanup
drm_mode_config_reset
drm_mode_copy
drm_mode_create_dvi_i_properties
drm_mode_create_scaling_mode_property
drm_mode_create_tv_properties
drm_mode_crtc_set_gamma_size
drm_mode_debug_printmodeline
drm_mode_destroy
drm_mode_duplicate
drm_mode_equal
drm_mode_get_hv_timing
drm_mode_is_420_also
drm_mode_is_420_only
drm_mode_legacy_fb_format
drm_mode_object_find
drm_mode_object_put
drm_mode_probed_add
drm_modeset_acquire_fini
drm_modeset_acquire_init
drm_modeset_backoff
drm_mode_set_crtcinfo
drm_modeset_drop_locks
drm_modeset_lock
drm_modeset_lock_all
drm_modeset_lock_all_ctx
drm_modeset_lock_single_interruptible
drm_mode_set_name
drm_modeset_unlock
drm_modeset_unlock_all
drm_mode_sort
drm_mode_vrefresh
drm_need_swiotlb
drm_object_attach_property
drm_object_property_set_value
drm_open
drm_plane_cleanup
drm_plane_create_alpha_property
drm_plane_create_blend_mode_property
drm_plane_create_color_properties
drm_plane_create_rotation_property
drm_plane_create_zpos_immutable_property
drm_plane_create_zpos_property
drm_plane_force_disable
drm_plane_init
drm_poll
drm_primary_helper_destroy
drm_primary_helper_funcs
drm_prime_gem_destroy
drm_prime_pages_to_sg
drm_prime_sg_to_page_addr_arrays
drm_printf
__drm_printfn_seq_file
drm_property_add_enum
drm_property_create
drm_property_create_enum
drm_property_create_range
__drm_puts_seq_file
drm_read
drm_release
drm_scdc_read
drm_scdc_write
drm_sched_dependency_optimized
drm_sched_entity_destroy
drm_sched_entity_fini
drm_sched_entity_flush
drm_sched_entity_init
drm_sched_entity_modify_sched
drm_sched_entity_push_job
drm_sched_entity_set_priority
drm_sched_fault
drm_sched_fini
drm_sched_increase_karma
drm_sched_init
drm_sched_job_cleanup
drm_sched_job_init
drm_sched_pick_best
drm_sched_resubmit_jobs
drm_sched_start
drm_sched_stop
drm_sched_suspend_timeout
drm_send_event_locked
drm_syncobj_add_point
drm_syncobj_create
drm_syncobj_find
drm_syncobj_find_fence
drm_syncobj_free
drm_syncobj_get_fd
drm_syncobj_get_handle
drm_syncobj_replace_fence
drm_universal_plane_init
drm_vblank_init
drm_vblank_work_cancel_sync
drm_vblank_work_init
drm_vblank_work_schedule
drm_vma_node_allow
drm_vma_node_is_allowed
drm_vma_node_revoke
dst_init
dst_release
dump_stack
__dynamic_dev_dbg
__dynamic_ibdev_dbg
__dynamic_netdev_dbg
__dynamic_pr_debug
elfcorehdr_addr
emergency_restart
empty_zero_page
enable_irq
errno_to_blk_status
ether_setup
eth_get_headlen
eth_mac_addr
eth_platform_get_mac_address
ethtool_convert_legacy_u32_to_link_mode
ethtool_convert_link_mode_to_legacy_u32
__ethtool_get_link_ksettings
ethtool_intersect_link_masks
ethtool_op_get_link
ethtool_op_get_ts_info
ethtool_rx_flow_rule_create
ethtool_rx_flow_rule_destroy
eth_type_trans
eth_validate_addr
eventfd_ctx_fdget
eventfd_ctx_fileget
eventfd_ctx_put
eventfd_fget
eventfd_signal
event_triggers_call
fasync_helper
fc_attach_transport
fc_block_scsi_eh
fc_eh_timed_out
fc_get_event_number
fc_host_fpin_rcv
fc_host_post_event
fc_host_post_vendor_event
fc_release_transport
fc_remote_port_add
fc_remote_port_delete
fc_remote_port_rolechg
fc_remove_host
fc_vport_create
fc_vport_terminate
__fdget
fd_install
fget
__fib_lookup
fib_table_lookup
filemap_fault
filp_close
filp_open
find_get_pid
find_last_bit
find_next_bit
find_next_zero_bit
find_pid_ns
find_vma
finish_wait
firmware_request_nowarn
fixed_size_llseek
flow_block_cb_alloc
flow_block_cb_lookup
flow_block_cb_setup_simple
flow_indr_block_cb_alloc
flow_indr_dev_register
flow_indr_dev_unregister
flow_keys_basic_dissector
flow_keys_dissector
flow_resources_add
flow_resources_alloc
flow_rule_match_basic
flow_rule_match_control
flow_rule_match_enc_ipv4_addrs
flow_rule_match_enc_keyid
flow_rule_match_enc_ports
flow_rule_match_eth_addrs
flow_rule_match_icmp
flow_rule_match_ipv4_addrs
flow_rule_match_ipv6_addrs
flow_rule_match_ports
flow_rule_match_vlan
flush_delayed_work
flush_signals
flush_work
flush_workqueue
force_sig
fortify_panic
fput
free_fib_info
free_irq
free_irq_cpu_rmap
free_netdev
__free_pages
free_pages
free_percpu
from_kgid
from_kgid_munged
from_kuid
from_kuid_munged
fs_bio_set
__f_setown
full_name_hash
fwnode_property_read_string
fwnode_property_read_u32_array
fwnode_property_read_u8_array
gcd
generate_random_uuid
generic_file_llseek
generic_handle_irq
genlmsg_put
genl_notify
genl_register_family
genl_unregister_family
genphy_read_status
genphy_restart_aneg
gen_pool_add_owner
gen_pool_alloc_algo_owner
gen_pool_create
gen_pool_destroy
gen_pool_free_owner
gen_pool_virt_to_phys
get_cpu_idle_time
get_cpu_idle_time_us
get_cpu_iowait_time_us
get_device
__get_free_pages
get_net_ns_by_fd
get_phy_device
get_pid_task
get_random_bytes
__get_task_comm
get_task_mm
get_task_pid
get_unused_fd_flags
get_user_pages
get_user_pages_fast
get_user_pages_longterm
get_user_pages_remote
get_zeroed_page
gic_nonsecure_priorities
gic_pmr_sync
gre_add_protocol
gre_del_protocol
groups_alloc
groups_free
guid_parse
handle_simple_irq
hdmi_avi_infoframe_pack
hdmi_drm_infoframe_pack_only
hdmi_infoframe_pack
hest_disable
hex_to_bin
hmm_range_fault
hrtimer_cancel
hrtimer_forward
hrtimer_init
hrtimer_start_range_ns
hrtimer_try_to_cancel
__hw_addr_sync_dev
__hw_addr_unsync_dev
hwmon_device_register
hwmon_device_register_with_groups
hwmon_device_register_with_info
hwmon_device_unregister
i2c_add_adapter
i2c_add_numbered_adapter
i2c_bit_add_bus
i2c_bit_algo
i2c_del_adapter
i2c_generic_scl_recovery
i2c_new_client_device
i2c_recover_bus
i2c_smbus_read_byte_data
i2c_smbus_write_byte_data
i2c_transfer
i2c_unregister_device
__ib_alloc_cq
_ib_alloc_device
__ib_alloc_pd
ib_alloc_xrcd_user
__ib_create_cq
ib_create_qp
ib_create_send_mad
ib_create_srq_user
ib_dealloc_device
ib_dealloc_pd_user
ib_dereg_mr_user
ib_destroy_cq_user
ib_destroy_qp_user
ib_destroy_srq_user
ibdev_err
ib_device_get_by_netdev
ib_device_put
ib_device_set_netdev
ibdev_info
ibdev_warn
ib_dispatch_event
ib_drain_qp
ib_event_msg
ib_find_cached_pkey
ib_free_cq
ib_free_send_mad
ib_get_cached_pkey
ib_get_cached_port_state
ib_get_eth_speed
ib_get_gids_from_rdma_hdr
ib_get_rdma_header_version
ib_map_mr_sg
ib_modify_qp
ib_modify_qp_is_ok
ib_mr_pool_destroy
ib_mr_pool_get
ib_mr_pool_init
ib_mr_pool_put
ib_post_send_mad
ib_process_cq_direct
ib_query_pkey
ib_query_port
ib_query_qp
ib_register_client
ib_register_device
ib_register_mad_agent
ib_sa_cancel_query
ib_sa_guid_info_rec_query
ib_sa_register_client
ib_sa_unregister_client
ib_set_device_ops
ib_sg_to_pages
ib_ud_header_init
ib_ud_header_pack
ib_ud_ip4_csum
ib_umem_copy_from
ib_umem_find_best_pgsz
ib_umem_get
ib_umem_odp_alloc_child
ib_umem_odp_alloc_implicit
ib_umem_odp_get
ib_umem_odp_map_dma_and_lock
ib_umem_odp_release
ib_umem_odp_unmap_dma_pages
ib_umem_release
ib_unregister_client
ib_unregister_device
ib_unregister_device_queued
ib_unregister_driver
ib_unregister_mad_agent
ib_uverbs_flow_resources_free
ib_uverbs_get_ucontext_file
ib_wc_status_msg
__icmp_send
icmpv6_send
ida_alloc_range
ida_destroy
ida_free
idr_alloc
idr_alloc_cyclic
idr_alloc_u32
idr_destroy
idr_find
idr_for_each
idr_get_next
idr_get_next_ul
idr_preload
idr_remove
idr_replace
igrab
in4_pton
in6_dev_finish_destroy
in6_pton
in_aton
in_dev_finish_destroy
in_egroup_p
__inet6_lookup_established
inet_addr_is_any
inet_confirm_addr
inet_get_local_port_range
__inet_lookup_established
inet_proto_csum_replace16
inet_proto_csum_replace4
inet_pton_with_scope
in_group_p
init_net
__init_rwsem
init_srcu_struct
__init_swait_queue_head
init_task
init_timer_key
init_uts_ns
init_wait_entry
__init_waitqueue_head
input_close_device
input_open_device
input_register_handle
input_register_handler
input_unregister_handle
input_unregister_handler
interval_tree_insert
interval_tree_iter_first
interval_tree_iter_next
interval_tree_remove
int_to_scsilun
iomem_resource
iommu_get_domain_for_dev
iommu_group_add_device
iommu_group_alloc
iommu_group_get
iommu_group_id
iommu_group_put
iommu_group_remove_device
iommu_iova_to_phys
iommu_map
iommu_unmap
__ioremap
ioremap_cache
io_schedule
io_schedule_timeout
iounmap
iov_iter_advance
iov_iter_bvec
iov_iter_init
iov_iter_npages
__iowrite32_copy
__iowrite64_copy
ip6_dst_hoplimit
ip6_local_out
ip6_route_output_flags
ip_compute_csum
ip_defrag
__ip_dev_find
ip_do_fragment
ip_local_out
ip_mc_dec_group
ip_mc_inc_group
ipmi_add_smi
ipmi_create_user
ipmi_destroy_user
ipmi_free_recv_msg
ipmi_poll_interface
ipmi_request_settime
ipmi_set_gets_events
ipmi_set_my_address
ipmi_smi_msg_received
ipmi_unregister_smi
ipmi_validate_addr
ip_route_output_flow
__ip_select_ident
ip_send_check
ip_set_get_byname
ip_set_put_byindex
ip_tos2prio
ip_tunnel_get_stats64
iput
__ipv6_addr_type
ipv6_chk_addr
ipv6_ext_hdr
ipv6_find_hdr
ipv6_mod_enabled
ipv6_skip_exthdr
ipv6_stub
ip_vs_proto_name
irq_cpu_rmap_add
irq_create_mapping_affinity
__irq_domain_add
irq_domain_remove
irq_find_mapping
irq_get_irq_data
irq_modify_status
irq_poll_complete
irq_poll_disable
irq_poll_enable
irq_poll_init
irq_poll_sched
irq_set_affinity_hint
irq_set_affinity_notifier
irq_set_chip_and_handler_name
irq_to_desc
is_acpi_data_node
is_acpi_device_node
iscsi_block_scsi_eh
iscsi_block_session
iscsi_boot_create_ethernet
iscsi_boot_create_host_kset
iscsi_boot_create_initiator
iscsi_boot_create_target
iscsi_boot_destroy_kset
iscsi_complete_pdu
iscsi_conn_bind
iscsi_conn_get_addr_param
iscsi_conn_get_param
iscsi_conn_login_event
iscsi_conn_send_pdu
iscsi_conn_setup
iscsi_conn_start
iscsi_conn_stop
iscsi_create_endpoint
iscsi_create_flashnode_conn
iscsi_create_flashnode_sess
iscsi_create_iface
iscsi_destroy_all_flashnode
iscsi_destroy_endpoint
iscsi_destroy_flashnode_sess
iscsi_destroy_iface
iscsi_find_flashnode_conn
iscsi_find_flashnode_sess
iscsi_flashnode_bus_match
iscsi_get_discovery_parent_name
iscsi_get_ipaddress_state_name
iscsi_get_port_speed_name
iscsi_get_port_state_name
iscsi_get_router_state_name
iscsi_host_alloc
iscsi_host_for_each_session
iscsi_is_session_dev
iscsi_is_session_online
iscsi_itt_to_task
iscsi_lookup_endpoint
iscsi_ping_comp_event
iscsi_post_host_event
iscsi_register_transport
iscsi_session_chkready
iscsi_session_failure
iscsi_session_get_param
iscsi_session_setup
iscsi_session_teardown
iscsi_set_param
iscsi_switch_str_param
iscsi_unblock_session
iscsi_unregister_transport
is_vmalloc_addr
iterate_fd
jiffies
jiffies_64
jiffies64_to_nsecs
jiffies_to_msecs
jiffies_to_timespec64
jiffies_to_usecs
kasprintf
kernel_bind
kernel_connect
kernel_cpustat
kernel_recvmsg
kernel_sendmsg
kernel_sock_shutdown
kernel_write
kexec_crash_loaded
__kfifo_alloc
__kfifo_free
kfree
kfree_const
kfree_sensitive
kfree_skb
kfree_skb_list
kfree_skb_partial
kgdb_active
kgdb_breakpoint
kill_fasync
kimage_voffset
__kmalloc
kmalloc_caches
__kmalloc_node
kmalloc_order_trace
kmem_cache_alloc
kmem_cache_alloc_node
kmem_cache_alloc_node_trace
kmem_cache_alloc_trace
kmem_cache_create
kmem_cache_create_usercopy
kmem_cache_destroy
kmem_cache_free
kmem_cache_shrink
kmemdup
kobject_add
kobject_create_and_add
kobject_del
kobject_get
kobject_init
kobject_init_and_add
kobject_put
kobject_set_name
kobject_uevent
kobject_uevent_env
krealloc
kset_create_and_add
kset_find_obj
kset_register
kset_unregister
ksize
kstrdup
kstrdup_const
kstrndup
kstrtobool
kstrtobool_from_user
kstrtoint
kstrtoint_from_user
kstrtoll
kstrtoll_from_user
kstrtou16
kstrtou8
kstrtouint
kstrtouint_from_user
kstrtoul_from_user
kstrtoull
kstrtoull_from_user
ksys_sync_helper
kthread_bind
kthread_create_on_node
kthread_create_worker
kthread_destroy_worker
kthread_park
kthread_queue_work
kthread_should_stop
kthread_stop
kthread_unpark
kthread_unuse_mm
kthread_use_mm
ktime_get
ktime_get_coarse_real_ts64
ktime_get_mono_fast_ns
ktime_get_raw
ktime_get_raw_ts64
ktime_get_real_seconds
ktime_get_real_ts64
ktime_get_seconds
ktime_get_ts64
ktime_get_with_offset
kvasprintf
kvfree
kvfree_call_rcu
kvmalloc_node
lcm
led_classdev_register_ext
led_classdev_resume
led_classdev_suspend
led_classdev_unregister
linkmode_set_pause
__list_add_valid
__list_del_entry_valid
list_sort
llist_add_batch
__ll_sc_atomic_fetch_add
__local_bh_enable_ip
__lock_page
lock_page_memcg
lockref_get
lock_sock_nested
logic_inw
logic_outw
make_kgid
make_kuid
map_destroy
mark_page_accessed
match_hex
match_int
match_strdup
match_string
match_token
match_u64
_mcount
mdev_dev
mdev_from_dev
mdev_get_drvdata
mdev_parent_dev
mdev_register_device
mdev_register_driver
mdev_set_drvdata
mdev_unregister_device
mdev_unregister_driver
mdev_uuid
mdio45_probe
mdiobus_alloc_size
mdiobus_free
mdiobus_get_phy
mdiobus_read
__mdiobus_register
mdiobus_unregister
mdiobus_write
mdio_mii_ioctl
memchr
memchr_inv
memcmp
memcpy
__memcpy_fromio
__memcpy_toio
memdup_user
memdup_user_nul
memmove
memory_read_from_buffer
memparse
mempool_alloc
mempool_alloc_slab
mempool_create
mempool_create_node
mempool_destroy
mempool_free
mempool_free_slab
mempool_kfree
mempool_kmalloc
memscan
mem_section
memset
__memset_io
memstart_addr
metadata_dst_alloc
mfd_add_devices
mfd_remove_devices
misc_deregister
misc_register
__mmdrop
mm_kobj
mmput
mmu_interval_notifier_insert
mmu_interval_notifier_remove
mmu_interval_read_begin
mmu_notifier_call_srcu
mmu_notifier_put
__mmu_notifier_register
mmu_notifier_register
mmu_notifier_synchronize
mmu_notifier_unregister
mod_delayed_work_on
mod_timer
mod_timer_pending
__module_get
module_layout
module_put
module_refcount
mpi_alloc
mpi_free
mpi_get_buffer
mpi_powm
mpi_read_raw_data
__msecs_to_jiffies
msleep
msleep_interruptible
mtd_device_parse_register
mtd_device_unregister
__mutex_init
mutex_is_locked
mutex_lock
mutex_lock_interruptible
mutex_lock_killable
mutex_trylock
mutex_unlock
napi_alloc_frag
__napi_alloc_skb
napi_complete_done
napi_consume_skb
napi_disable
napi_get_frags
napi_gro_flush
napi_gro_frags
napi_gro_receive
__napi_schedule
__napi_schedule_irqoff
napi_schedule_prep
native_queued_spin_lock_slowpath
__ndelay
ndo_dflt_bridge_getlink
ndo_dflt_fdb_add
__neigh_create
neigh_destroy
__neigh_event_send
neigh_lookup
netdev_alloc_frag
__netdev_alloc_skb
netdev_bind_sb_channel_queue
netdev_crit
netdev_err
netdev_features_change
netdev_has_upper_dev_all_rcu
netdev_info
netdev_lower_get_next
netdev_master_upper_dev_get
netdev_master_upper_dev_get_rcu
netdev_master_upper_dev_link
netdev_notice
netdev_pick_tx
netdev_port_same_parent_id
netdev_printk
netdev_reset_tc
netdev_rss_key_fill
netdev_rx_handler_register
netdev_rx_handler_unregister
netdev_set_num_tc
netdev_set_sb_channel
netdev_set_tc_queue
netdev_state_change
netdev_stats_to_stats64
netdev_unbind_sb_channel
netdev_update_features
netdev_upper_dev_unlink
netdev_walk_all_lower_dev_rcu
netdev_walk_all_upper_dev_rcu
netdev_warn
net_dim
net_dim_get_def_rx_moderation
net_dim_get_def_tx_moderation
net_dim_get_rx_moderation
net_dim_get_tx_moderation
netif_carrier_off
netif_carrier_on
netif_device_attach
netif_device_detach
netif_get_num_default_rss_queues
netif_napi_add
__netif_napi_del
netif_receive_skb
netif_rx
netif_rx_ni
netif_schedule_queue
netif_set_real_num_rx_queues
netif_set_real_num_tx_queues
netif_set_xps_queue
netif_tx_stop_all_queues
netif_tx_wake_queue
netlink_ack
netlink_broadcast
netlink_capable
__netlink_dump_start
netlink_has_listeners
__netlink_kernel_create
netlink_kernel_release
netlink_ns_capable
netlink_set_err
netlink_unicast
net_namespace_list
net_ns_type_operations
net_ratelimit
net_rwsem
nf_connlabels_get
nf_connlabels_put
nf_connlabels_replace
nf_conntrack_alloc
__nf_conntrack_confirm
nf_conntrack_destroy
nf_conntrack_eventmask_report
nf_conntrack_expect_lock
nf_conntrack_find_get
nf_conntrack_free
nf_conntrack_hash
nf_conntrack_hash_check_insert
__nf_conntrack_helper_find
nf_conntrack_helper_put
nf_conntrack_helper_try_module_get
nf_conntrack_htable_size
nf_conntrack_in
nf_conntrack_locks
nf_ct_delete
nf_ct_deliver_cached_events
nf_ct_expect_alloc
__nf_ct_expect_find
nf_ct_expect_find_get
nf_ct_expect_hash
nf_ct_expect_hsize
nf_ct_expect_iterate_net
nf_ct_expect_put
nf_ct_expect_register_notifier
nf_ct_expect_related_report
nf_ct_expect_unregister_notifier
nf_ct_ext_add
nf_ct_frag6_gather
nf_ct_get_tuplepr
nf_ct_helper_expectfn_find_by_name
nf_ct_helper_expectfn_find_by_symbol
nf_ct_helper_ext_add
nf_ct_invert_tuple
nf_ct_iterate_cleanup_net
nf_ct_l4proto_find
nf_ct_nat_ext_add
nf_ct_remove_expectations
nf_ct_seq_adjust
nf_ct_tmpl_alloc
nf_ct_tmpl_free
__nf_ct_try_assign_helper
nf_ct_unlink_expect_report
nf_ct_zone_dflt
nf_ipv6_ops
nf_nat_alloc_null_binding
nf_nat_hook
nf_nat_icmp_reply_translation
nf_nat_icmpv6_reply_translation
nf_nat_packet
nf_nat_setup_info
nfnetlink_has_listeners
nfnetlink_send
nfnetlink_set_err
nfnetlink_subsys_register
nfnetlink_subsys_unregister
nfnl_lock
nfnl_unlock
nf_register_net_hook
nf_register_net_hooks
nf_unregister_net_hook
nf_unregister_net_hooks
nla_find
nla_memcpy
__nla_parse
nla_policy_len
__nla_put
nla_put
nla_put_64bit
__nla_reserve
nla_reserve
nla_strlcpy
__nla_validate
__nlmsg_put
node_data
__node_distance
node_states
node_to_cpumask_map
no_llseek
nonseekable_open
noop_llseek
nr_cpu_ids
nr_irqs
nr_node_ids
ns_capable
nsecs_to_jiffies
ns_to_kernel_old_timeval
ns_to_timespec64
numa_node
__num_online_cpus
num_registered_fb
nvidia_gpu_vfio
nvme_alloc_request
nvme_cancel_request
nvme_change_ctrl_state
nvme_cleanup_cmd
nvme_complete_async_event
nvme_complete_rq
nvme_disable_ctrl
nvme_enable_ctrl
nvme_fc_rcv_ls_req
nvme_fc_register_localport
nvme_fc_register_remoteport
nvme_fc_rescan_remoteport
nvme_fc_set_remoteport_devloss
nvme_fc_unregister_localport
nvme_fc_unregister_remoteport
nvme_get_features
nvme_init_ctrl
nvme_init_identify
nvme_io_timeout
nvme_kill_queues
nvme_remove_namespaces
nvme_reset_ctrl
nvme_reset_ctrl_sync
nvme_set_features
nvme_set_queue_count
nvme_setup_cmd
nvme_shutdown_ctrl
nvme_start_admin_queue
nvme_start_ctrl
nvme_start_freeze
nvme_start_queues
nvme_stop_admin_queue
nvme_stop_ctrl
nvme_stop_queues
nvme_submit_sync_cmd
nvme_sync_queues
nvmet_fc_invalidate_host
nvmet_fc_rcv_fcp_abort
nvmet_fc_rcv_fcp_req
nvmet_fc_rcv_ls_req
nvmet_fc_register_targetport
nvmet_fc_unregister_targetport
nvme_try_sched_reset
nvme_unfreeze
nvme_uninit_ctrl
nvme_wait_freeze
nvme_wait_freeze_timeout
nvme_wait_reset
nvme_wq
of_device_is_compatible
of_find_device_by_node
of_fwnode_ops
of_match_node
of_mdiobus_register
of_node_put
of_parse_phandle
of_parse_phandle_with_fixed_args
of_phy_find_device
on_each_cpu_cond_mask
orderly_poweroff
out_of_line_wait_on_bit
override_creds
__page_file_index
__page_frag_cache_drain
page_frag_free
__page_mapcount
page_mapped
page_pool_alloc_frag
page_pool_alloc_pages
page_pool_create
page_pool_destroy
page_pool_put_page
page_pool_release_page
page_pool_update_nid
pagevec_lookup_range
pagevec_lookup_range_tag
__pagevec_release
panic
panic_notifier_list
param_array_ops
param_get_int
param_get_uint
param_ops_bint
param_ops_bool
param_ops_byte
param_ops_charp
param_ops_hexint
param_ops_int
param_ops_long
param_ops_short
param_ops_string
param_ops_uint
param_ops_ullong
param_ops_ulong
param_ops_ushort
param_set_bool
param_set_int
param_set_uint
path_get
path_put
pci_aer_clear_nonfatal_status
pci_alloc_irq_vectors_affinity
pci_assign_unassigned_bus_resources
pcibios_resource_to_bus
pci_bus_read_config_dword
pci_bus_resource_n
pci_bus_type
pci_cfg_access_lock
pci_cfg_access_unlock
pci_check_and_mask_intx
pci_choose_state
pci_clear_master
pci_clear_mwi
pci_d3cold_disable
pci_dev_driver
pci_dev_get
pci_device_is_present
pci_dev_present
pci_dev_put
pci_disable_device
pci_disable_link_state
pci_disable_msi
pci_disable_msix
pci_disable_pcie_error_reporting
pci_disable_rom
pci_disable_sriov
pcie_aspm_enabled
pcie_bandwidth_available
pcie_capability_clear_and_set_word
pcie_capability_read_dword
pcie_capability_read_word
pcie_capability_write_word
pcie_flr
pcie_get_mps
pcie_get_speed_cap
pcie_get_width_cap
pci_enable_atomic_ops_to_root
pci_enable_device
pci_enable_device_mem
pci_enable_msi
pci_enable_msix_range
pci_enable_pcie_error_reporting
pci_enable_rom
pci_enable_sriov
pci_enable_wake
pcie_print_link_status
pcie_relaxed_ordering_enabled
pcie_set_readrq
pci_find_capability
pci_find_ext_capability
pci_free_irq
pci_free_irq_vectors
pci_get_class
pci_get_device
pci_get_domain_bus_and_slot
pci_get_dsn
pci_get_slot
pci_ignore_hotplug
pci_intx
pci_iomap
pci_ioremap_bar
pci_irq_get_affinity
pci_irq_vector
pci_load_saved_state
pci_map_rom
pci_match_id
pcim_enable_device
pcim_iomap
pcim_iomap_regions
pcim_iomap_table
pcim_iounmap
pci_msi_mask_irq
pci_msi_unmask_irq
pci_msix_vec_count
pci_num_vf
pci_prepare_to_sleep
pci_read_config_byte
pci_read_config_dword
pci_read_config_word
pci_read_vpd
__pci_register_driver
pci_release_regions
pci_release_resource
pci_release_selected_regions
pci_request_irq
pci_request_regions
pci_request_selected_regions
pci_rescan_bus
pci_reset_bus
pci_resize_resource
pci_restore_msi_state
pci_restore_state
pci_save_state
pci_select_bars
pci_set_master
pci_set_mwi
pci_set_power_state
pci_sriov_configure_simple
pci_sriov_get_totalvfs
pci_sriov_set_totalvfs
pci_stop_and_remove_bus_device
pci_stop_and_remove_bus_device_locked
pci_store_saved_state
pci_try_set_mwi
pci_unmap_rom
pci_unregister_driver
pci_vfs_assigned
pci_vpd_find_info_keyword
pci_vpd_find_tag
pci_wait_for_pending_transaction
pci_wake_from_d3
pci_write_config_byte
pci_write_config_dword
pci_write_config_word
pcix_set_mmrbc
PDE_DATA
__per_cpu_offset
percpu_ref_exit
percpu_ref_init
percpu_ref_kill_and_confirm
perf_event_update_userpage
perf_pmu_register
perf_pmu_unregister
perf_trace_buf_alloc
perf_trace_run_bpf_submit
pfn_valid
phy_attach_direct
phy_attached_info
phy_connect
phy_connect_direct
phy_device_free
phy_device_register
phy_device_remove
phy_disconnect
phy_ethtool_ksettings_get
phy_ethtool_ksettings_set
phy_loopback
phy_mii_ioctl
phy_resume
phy_set_asym_pause
phy_set_max_speed
phy_start
phy_start_aneg
phy_stop
phy_support_asym_pause
phy_suspend
phy_validate_pause
pid_task
pid_vnr
platform_bus_type
platform_device_register
platform_device_register_full
platform_device_unregister
__platform_driver_register
platform_driver_unregister
platform_get_irq
platform_get_resource
platform_get_resource_byname
pldmfw_flash_image
pldmfw_op_pci_match_record
pm_power_off
pm_runtime_allow
pm_runtime_autosuspend_expiration
__pm_runtime_disable
pm_runtime_enable
pm_runtime_forbid
__pm_runtime_idle
__pm_runtime_resume
pm_runtime_set_autosuspend_delay
__pm_runtime_set_status
__pm_runtime_suspend
__pm_runtime_use_autosuspend
pm_schedule_suspend
pm_suspend_global_flags
power_supply_is_system_supplied
prandom_bytes
prandom_u32
prepare_creds
prepare_to_wait
prepare_to_wait_event
prepare_to_wait_exclusive
print_hex_dump
printk
__printk_ratelimit
printk_timed_ratelimit
proc_create
proc_create_data
proc_dointvec
proc_dointvec_minmax
proc_mkdir
proc_remove
proc_set_size
__pskb_copy_fclone
pskb_expand_head
__pskb_pull_tail
___pskb_trim
ptp_clock_event
ptp_clock_index
ptp_clock_register
ptp_clock_unregister
ptp_find_pin
__put_cred
put_device
put_disk
__put_net
__put_page
put_pid
__put_task_struct
put_unused_fd
qdisc_reset
qed_get_eth_ops
qed_put_eth_ops
queue_delayed_work_on
queued_read_lock_slowpath
queued_write_lock_slowpath
queue_work_on
radix_tree_delete
radix_tree_gang_lookup
radix_tree_gang_lookup_tag
radix_tree_insert
radix_tree_iter_delete
radix_tree_lookup
radix_tree_lookup_slot
radix_tree_next_chunk
radix_tree_preload
radix_tree_tagged
radix_tree_tag_set
raid_class_attach
raid_class_release
___ratelimit
raw_notifier_call_chain
raw_notifier_chain_register
raw_notifier_chain_unregister
rb_erase
__rb_erase_color
rb_first
rb_first_postorder
__rb_insert_augmented
rb_insert_color
rb_next
rb_next_postorder
rb_replace_node
rcu_barrier
rcu_read_unlock_strict
rdma_accept
rdma_bind_addr
__rdma_block_iter_next
__rdma_block_iter_start
rdmacg_register_device
rdmacg_try_charge
rdmacg_uncharge
rdmacg_unregister_device
rdma_connect
rdma_consumer_reject_data
rdma_copy_ah_attr
rdma_create_ah
__rdma_create_kernel_id
rdma_create_qp
rdma_destroy_ah_attr
rdma_destroy_ah_user
rdma_destroy_id
rdma_destroy_qp
rdma_disconnect
rdma_event_msg
rdma_is_zero_gid
rdma_listen
rdma_nl_put_driver_string
rdma_nl_put_driver_u32
rdma_nl_put_driver_u64
rdma_nl_stat_hwcounter_entry
rdma_notify
rdma_port_get_link_layer
rdma_query_ah
rdma_query_gid
rdma_read_gid_hw_context
rdma_read_gid_l2_fields
rdma_reject
rdma_reject_msg
rdma_resolve_addr
rdma_resolve_route
rdma_restrack_get
rdma_restrack_put
rdma_roce_rescan_device
rdma_rw_ctx_destroy
rdma_rw_ctx_init
rdma_rw_ctx_post
rdma_rw_ctx_wrs
rdma_set_afonly
rdma_user_mmap_entry_get_pgoff
rdma_user_mmap_entry_insert_range
rdma_user_mmap_entry_put
rdma_user_mmap_entry_remove
rdma_user_mmap_io
read_cache_pages
recalc_sigpending
refcount_dec_and_mutex_lock
refcount_dec_if_one
refcount_warn_saturate
register_acpi_hed_notifier
register_acpi_notifier
register_blkdev
register_blocking_lsm_notifier
__register_chrdev
register_chrdev_region
register_console
register_die_notifier
registered_fb
register_fib_notifier
register_inet6addr_notifier
register_inetaddr_notifier
register_ip_vs_scheduler
register_kprobe
register_lsm_notifier
register_module_notifier
register_netdev
register_netdevice
register_netdevice_notifier
register_netdevice_notifier_net
register_netevent_notifier
register_net_sysctl
register_oom_notifier
register_pernet_device
register_pernet_subsys
register_reboot_notifier
register_sysctl_table
regmap_read
regmap_write
regulator_get_voltage
regulator_set_voltage
release_firmware
release_pages
__release_region
release_sock
remap_pfn_range
remap_vmalloc_range
remove_conflicting_framebuffers
remove_conflicting_pci_framebuffers
remove_proc_entry
remove_wait_queue
request_firmware
request_firmware_direct
request_firmware_nowait
__request_module
__request_region
request_threaded_irq
reservation_ww_class
reset_devices
revalidate_disk_size
revert_creds
rhashtable_destroy
rhashtable_free_and_destroy
rhashtable_init
rhashtable_insert_slow
rhashtable_walk_enter
rhashtable_walk_exit
rhashtable_walk_next
rhashtable_walk_start_check
rhashtable_walk_stop
rhltable_init
__rht_bucket_nested
rht_bucket_nested
rht_bucket_nested_insert
round_jiffies
round_jiffies_relative
round_jiffies_up
rps_cpu_mask
rps_may_expire_flow
rps_sock_flow_table
rsa_parse_priv_key
rsa_parse_pub_key
rt6_lookup
rtc_time64_to_tm
rtnl_configure_link
rtnl_create_link
rtnl_is_locked
rtnl_link_get_net
rtnl_link_register
rtnl_link_unregister
rtnl_lock
rtnl_nla_parse_ifla
rtnl_trylock
rtnl_unlock
sas_alloc_slow_task
sas_attach_transport
sas_bios_param
sas_change_queue_depth
sas_disable_tlr
sas_domain_attach_transport
sas_drain_work
sas_eh_device_reset_handler
sas_eh_target_reset_handler
sas_enable_tlr
sas_end_device_alloc
sas_expander_alloc
sas_free_task
sas_get_local_phy
sas_ioctl
sas_is_tlr_enabled
sas_phy_add
sas_phy_alloc
sas_phy_delete
sas_phy_free
sas_phy_reset
sas_port_add
sas_port_add_phy
sas_port_alloc_num
sas_port_delete
sas_port_delete_phy
sas_port_free
sas_prep_resume_ha
sas_queuecommand
sas_read_port_mode_page
sas_register_ha
sas_release_transport
sas_remove_host
sas_resume_ha
sas_rphy_add
sas_slave_configure
sas_ssp_task_response
sas_suspend_ha
sas_target_alloc
sas_target_destroy
sas_unregister_ha
sbitmap_queue_clear
__sbitmap_queue_get
scatterwalk_map_and_copy
sched_clock
sched_set_fifo
sched_set_fifo_low
sched_set_normal
schedule
schedule_hrtimeout
schedule_hrtimeout_range
schedule_timeout
schedule_timeout_interruptible
schedule_timeout_uninterruptible
scmd_printk
scnprintf
scsi_add_device
scsi_add_host_with_dma
scsi_block_requests
scsi_build_sense_buffer
scsi_change_queue_depth
scsi_command_normalize_sense
scsi_device_get
scsi_device_lookup
scsi_device_put
scsi_device_set_state
scsi_device_type
scsi_dma_map
scsi_dma_unmap
__scsi_execute
scsi_get_vpd_page
scsi_host_alloc
scsi_host_busy
scsi_host_get
scsi_host_lookup
scsi_host_put
scsi_internal_device_block_nowait
scsi_internal_device_unblock_nowait
scsi_is_fc_rport
scsi_is_host_device
scsi_is_sdev_device
__scsi_iterate_devices
scsilun_to_int
scsi_normalize_sense
scsi_print_command
scsi_queue_work
scsi_register_driver
scsi_remove_device
scsi_remove_host
scsi_remove_target
scsi_rescan_device
scsi_sanitize_inquiry_string
scsi_scan_host
scsi_sense_key_string
scsi_unblock_requests
sdev_prefix_printk
secpath_set
secure_tcp_seq
secure_tcpv6_seq
security_d_instantiate
security_ib_alloc_security
security_ib_endport_manage_subnet
security_ib_free_security
security_ib_pkey_access
security_release_secctx
security_secid_to_secctx
security_tun_dev_alloc_security
security_tun_dev_attach
security_tun_dev_attach_queue
security_tun_dev_create
security_tun_dev_free_security
security_tun_dev_open
send_sig
send_sig_info
seq_list_next
seq_list_start
seq_lseek
seq_open
seq_printf
seq_putc
seq_put_decimal_ull
seq_puts
seq_read
seq_release
seq_write
set_cpus_allowed_ptr
set_current_groups
set_device_ro
set_disk_ro
set_freezable
set_normalized_timespec64
set_page_dirty
set_page_dirty_lock
set_user_nice
sg_alloc_table
sg_alloc_table_chained
sg_alloc_table_from_pages
sg_copy_from_buffer
sg_copy_to_buffer
sg_free_table
sg_free_table_chained
sg_init_table
sgl_alloc
sgl_free
sg_miter_next
sg_miter_start
sg_miter_stop
sg_nents
sg_next
__sg_page_iter_next
__sg_page_iter_start
sg_pcopy_from_buffer
sg_pcopy_to_buffer
sg_zero_buffer
show_class_attr_string
sigprocmask
si_meminfo
simple_attr_open
simple_attr_read
simple_attr_release
simple_attr_write
simple_open
simple_read_from_buffer
simple_strtol
simple_strtoul
simple_strtoull
simple_write_to_buffer
single_open
single_release
sk_alloc
sk_attach_filter
skb_add_rx_frag
__skb_checksum
skb_checksum
__skb_checksum_complete
skb_checksum_help
skb_clone
skb_clone_tx_timestamp
skb_copy
skb_copy_bits
skb_copy_datagram_from_iter
skb_copy_datagram_iter
skb_copy_expand
skb_copy_ubufs
skb_dequeue
skb_ensure_writable
__skb_ext_del
__skb_ext_put
__skb_flow_dissect
__skb_get_hash
__skb_gso_segment
skb_gso_validate_mac_len
__skb_pad
skb_partial_csum_set
skb_pull
skb_pull_rcsum
skb_push
skb_put
skb_queue_purge
skb_queue_tail
skb_realloc_headroom
__skb_recv_datagram
skb_scrub_packet
skb_set_owner_w
skb_store_bits
skb_to_sgvec
skb_trim
skb_try_coalesce
skb_tstamp_tx
skb_tx_error
skb_vlan_pop
skb_vlan_push
__skb_warn_lro_forwarding
skb_zerocopy
skb_zerocopy_headlen
sk_detach_filter
sk_filter_trim_cap
sk_free
skip_spaces
smp_call_function_many
smp_call_function_single
snprintf
sock_alloc_send_pskb
sock_create
sock_create_kern
sock_edemux
sockfd_lookup
sock_init_data
sock_queue_err_skb
sock_recv_errqueue
sock_release
sock_zerocopy_callback
softnet_data
sort
sprintf
sprint_symbol
srcu_barrier
__srcu_read_lock
__srcu_read_unlock
sscanf
__stack_chk_fail
__stack_chk_guard
stack_trace_print
stack_trace_save
starget_for_each_device
strcasecmp
strcat
strchr
strcmp
strcpy
strcspn
stream_open
strim
strlcat
strlcpy
strlen
strncasecmp
strncat
strncmp
strncpy
strncpy_from_user
strnlen
strnstr
strpbrk
strrchr
strscpy
strscpy_pad
strsep
strspn
strstr
submit_bio
submit_bio_noacct
__sw_hweight32
__sw_hweight64
__sw_hweight8
swiotlb_nr_tbl
__symbol_put
sync_file_create
synchronize_irq
synchronize_net
synchronize_rcu
synchronize_srcu
syscon_node_to_regmap
syscon_regmap_lookup_by_phandle
sysfs_add_file_to_group
sysfs_create_bin_file
sysfs_create_file_ns
sysfs_create_files
sysfs_create_group
sysfs_create_groups
sysfs_create_link
sysfs_format_mac
sysfs_remove_bin_file
sysfs_remove_file_from_group
sysfs_remove_file_ns
sysfs_remove_files
sysfs_remove_group
sysfs_remove_groups
sysfs_remove_link
sysfs_streq
system_highpri_wq
system_state
system_unbound_wq
system_wq
sys_tz
t10_pi_type1_crc
t10_pi_type1_ip
t10_pi_type3_crc
t10_pi_type3_ip
tap_get_socket
task_active_pid_ns
tasklet_init
tasklet_kill
__tasklet_schedule
tasklet_setup
__task_pid_nr_ns
tcp_gro_complete
tcp_hashinfo
time64_to_tm
timecounter_cyc2time
timecounter_init
timecounter_read
tls_get_record
tls_validate_xmit_skb
to_drm_sched_fence
_totalram_pages
trace_define_field
trace_event_buffer_commit
trace_event_buffer_reserve
trace_event_ignore_this_pid
trace_event_raw_init
trace_event_reg
trace_handle_return
__traceiter_dma_fence_emit
__traceiter_nvme_sq
__traceiter_xdp_exception
__tracepoint_dma_fence_emit
__tracepoint_nvme_sq
__tracepoint_xdp_exception
trace_print_array_seq
trace_print_flags_seq
trace_print_symbols_seq
trace_raw_output_prep
trace_seq_printf
trace_seq_putc
try_module_get
try_wait_for_completion
ttm_bo_bulk_move_lru_tail
ttm_bo_device_init
ttm_bo_device_release
ttm_bo_dma_acc_size
ttm_bo_eviction_valuable
ttm_bo_evict_mm
ttm_bo_glob
ttm_bo_init
ttm_bo_init_reserved
ttm_bo_kmap
ttm_bo_kunmap
ttm_bo_lock_delayed_workqueue
ttm_bo_mem_space
ttm_bo_mmap
ttm_bo_mmap_obj
ttm_bo_move_accel_cleanup
ttm_bo_move_memcpy
ttm_bo_move_to_lru_tail
ttm_bo_move_ttm
ttm_bo_put
ttm_bo_unlock_delayed_workqueue
ttm_bo_validate
ttm_bo_vm_access
ttm_bo_vm_close
ttm_bo_vm_fault_reserved
ttm_bo_vm_open
ttm_bo_vm_reserve
ttm_bo_wait
ttm_dma_page_alloc_debugfs
ttm_dma_populate
ttm_dma_tt_fini
ttm_dma_tt_init
ttm_dma_unpopulate
ttm_eu_backoff_reservation
ttm_eu_fence_buffer_objects
ttm_eu_reserve_buffers
ttm_page_alloc_debugfs
ttm_pool_populate
ttm_pool_unpopulate
ttm_populate_and_map_pages
ttm_range_man_fini
ttm_range_man_init
ttm_resource_free
ttm_resource_manager_force_list_clean
ttm_resource_manager_init
ttm_sg_tt_init
ttm_tt_destroy_common
ttm_tt_fini
ttm_tt_init
ttm_tt_populate
ttm_tt_set_placement_caching
ttm_unmap_and_unpopulate_pages
__udelay
udp4_hwcsum
udp_encap_enable
udp_tunnel_nic_ops
uio_event_notify
__uio_register_device
uio_unregister_device
unlock_page
unlock_page_memcg
unmap_mapping_range
unregister_acpi_hed_notifier
unregister_acpi_notifier
unregister_blkdev
unregister_blocking_lsm_notifier
__unregister_chrdev
unregister_chrdev_region
unregister_console
unregister_die_notifier
unregister_fib_notifier
unregister_inet6addr_notifier
unregister_inetaddr_notifier
unregister_ip_vs_scheduler
unregister_kprobe
unregister_lsm_notifier
unregister_module_notifier
unregister_netdev
unregister_netdevice_many
unregister_netdevice_notifier
unregister_netdevice_notifier_net
unregister_netdevice_queue
unregister_netevent_notifier
unregister_net_sysctl_table
unregister_oom_notifier
unregister_pernet_device
unregister_pernet_subsys
unregister_reboot_notifier
unregister_sysctl_table
up
up_read
up_write
__usecs_to_jiffies
usleep_range
uuid_gen
uuid_null
uuid_parse
_uverbs_alloc
uverbs_copy_to
uverbs_copy_to_struct_or_zero
uverbs_destroy_def_handler
uverbs_fd_class
uverbs_finalize_uobj_create
_uverbs_get_const
uverbs_get_flags32
uverbs_get_flags64
uverbs_idr_class
uverbs_uobject_fd_release
uverbs_uobject_put
vabits_actual
vfio_add_group_dev
vfio_del_group_dev
vfio_info_add_capability
vfio_info_cap_shift
vfio_pin_pages
vfio_register_iommu_driver
vfio_register_notifier
vfio_set_irqs_validate_and_prepare
vfio_unpin_pages
vfio_unregister_iommu_driver
vfio_unregister_notifier
vfree
vfs_fallocate
vfs_fsync
vfs_getattr
vfs_statfs
vga_client_register
vga_remove_vgacon
vlan_dev_real_dev
vlan_dev_vlan_id
vlan_dev_vlan_proto
__vlan_find_dev_deep_rcu
__vmalloc
vmalloc
vmalloc_node
vmalloc_to_page
vmalloc_user
vmap
vm_get_page_prot
vm_insert_page
vm_insert_pfn_prot
vm_mmap
vm_munmap
vm_zone_stat
vprintk
vscnprintf
vsnprintf
vsprintf
vunmap
vzalloc
vzalloc_node
wait_for_completion
wait_for_completion_interruptible
wait_for_completion_interruptible_timeout
wait_for_completion_io_timeout
wait_for_completion_killable
wait_for_completion_timeout
wait_on_page_bit
__wake_up
wake_up_bit
__wake_up_locked
wake_up_process
__wake_up_sync_key
__warn_printk
work_busy
write_cache_pages
ww_mutex_lock
ww_mutex_lock_interruptible
ww_mutex_unlock
__xa_alloc_cyclic
__xa_cmpxchg
xa_destroy
__xa_erase
xa_erase
xa_find
xa_find_after
__xa_insert
xa_load
__xa_store
xa_store
xdp_convert_zc_to_xdp_frame
xdp_do_flush
xdp_do_redirect
xdp_return_frame
xdp_return_frame_rx_napi
xdp_rxq_info_is_reg
xdp_rxq_info_reg
xdp_rxq_info_reg_mem_model
xdp_rxq_info_unreg
xdp_rxq_info_unreg_mem_model
xdp_rxq_info_unused
xdp_warn
xfrm_aead_get_byname
xfrm_replay_seqhi
xz_dec_end
xz_dec_init
xz_dec_run
yield
zap_vma_ptes
zerocopy_sg_from_iter
zgid
zlib_inflate
zlib_inflateEnd
zlib_inflateInit2
zlib_inflate_workspacesize
Appendix 2: Initial draft of the x86 platform KABI whitelist (3171 symbols)
acpi_bus_get_device
acpi_bus_register_driver
acpi_bus_unregister_driver
acpi_check_dsm
acpi_dev_found
acpi_disabled
acpi_dma_configure_id
acpi_evaluate_dsm
acpi_evaluate_integer
acpi_evaluate_object
acpi_format_exception
acpi_gbl_FADT
acpi_get_devices
acpi_get_handle
acpi_get_name
acpi_get_table
acpi_gsi_to_irq
acpi_handle_printk
acpi_has_method
acpi_install_notify_handler
acpi_lid_open
acpi_match_device
__acpi_node_get_property_reference
acpi_os_map_memory
acpi_os_unmap_generic_address
acpi_os_unmap_memory
acpi_register_gsi
acpi_remove_notify_handler
acpi_unregister_gsi
acpi_video_get_edid
acpi_walk_namespace
address_space_init_once
add_timer
add_uevent_var
add_wait_queue
add_wait_queue_exclusive
admin_timeout
alloc_chrdev_region
alloc_cpumask_var
alloc_cpu_rmap
__alloc_disk_node
alloc_etherdev_mqs
alloc_netdev_mqs
__alloc_pages
alloc_pages
__alloc_percpu
__alloc_percpu_gfp
__alloc_skb
alloc_workqueue
anon_inode_getfd
anon_inode_getfile
apei_hest_parse
apei_map_generic_address
apei_read
apic
arch_io_free_memtype_wc
arch_io_reserve_memtype_wc
arch_phys_wc_add
arch_phys_wc_del
arch_wb_cache_pmem
argv_free
argv_split
arp_tbl
async_schedule_node
ata_link_next
ata_tf_to_fis
_atomic_dec_and_lock
atomic_notifier_call_chain
atomic_notifier_chain_register
atomic_notifier_chain_unregister
attribute_container_find_class_device
autoremove_wake_function
backlight_device_register
backlight_device_set_brightness
backlight_device_unregister
backlight_force_update
bdevname
bdev_read_only
bdget_disk
_bin2bcd
bin2hex
bio_add_page
bio_alloc_bioset
bio_associate_blkg
bio_chain
bio_clone_fast
bio_devname
bio_endio
bio_free_pages
bio_init
bio_integrity_add_page
bio_integrity_alloc
bio_put
bioset_exit
bioset_init
__bitmap_and
__bitmap_andnot
__bitmap_clear
__bitmap_complement
__bitmap_equal
bitmap_find_free_region
bitmap_find_next_zero_area_off
bitmap_free
bitmap_from_arr32
__bitmap_intersects
__bitmap_or
bitmap_parse
bitmap_parselist
bitmap_print_to_pagebuf
bitmap_release_region
__bitmap_set
__bitmap_shift_left
__bitmap_shift_right
__bitmap_subset
__bitmap_weight
__bitmap_xor
bitmap_zalloc
bit_wait
blk_alloc_queue
blk_check_plugged
blk_cleanup_queue
blkdev_get_by_path
__blkdev_issue_discard
blkdev_issue_discard
blkdev_issue_flush
blkdev_issue_write_same
__blkdev_issue_zeroout
blkdev_issue_zeroout
blkdev_put
blk_execute_rq
blk_execute_rq_nowait
blk_finish_plug
blk_freeze_queue_start
blk_get_queue
blk_get_request
blk_integrity_register
blk_integrity_unregister
blk_mq_alloc_request
blk_mq_alloc_request_hctx
blk_mq_alloc_tag_set
blk_mq_complete_request
blk_mq_complete_request_remote
blk_mq_end_request
blk_mq_free_request
blk_mq_free_tag_set
blk_mq_freeze_queue
blk_mq_freeze_queue_wait
blk_mq_freeze_queue_wait_timeout
blk_mq_init_queue
blk_mq_map_queues
blk_mq_pci_map_queues
blk_mq_quiesce_queue
blk_mq_rdma_map_queues
blk_mq_requeue_request
blk_mq_run_hw_queues
blk_mq_start_request
blk_mq_tagset_busy_iter
blk_mq_tagset_wait_completed_request
blk_mq_tag_to_rq
blk_mq_unfreeze_queue
blk_mq_unique_tag
blk_mq_unquiesce_queue
blk_mq_update_nr_hw_queues
blk_poll
blk_put_queue
blk_put_request
blk_queue_bounce_limit
blk_queue_chunk_sectors
blk_queue_dma_alignment
blk_queue_flag_clear
blk_queue_flag_set
blk_queue_flag_test_and_set
blk_queue_io_min
blk_queue_io_opt
blk_queue_logical_block_size
blk_queue_max_discard_sectors
blk_queue_max_discard_segments
blk_queue_max_hw_sectors
blk_queue_max_segments
blk_queue_max_segment_size
blk_queue_max_write_same_sectors
blk_queue_max_write_zeroes_sectors
blk_queue_physical_block_size
blk_queue_rq_timeout
blk_queue_segment_boundary
blk_queue_split
blk_queue_update_dma_alignment
blk_queue_virt_boundary
blk_queue_write_cache
blk_rq_append_bio
blk_rq_count_integrity_sg
blk_rq_map_integrity_sg
blk_rq_map_kern
__blk_rq_map_sg
blk_rq_map_user
blk_rq_map_user_iov
blk_rq_unmap_user
blk_set_queue_dying
blk_set_stacking_limits
blk_stack_limits
blk_start_plug
blk_status_to_errno
blk_verify_command
blocking_notifier_call_chain
blocking_notifier_chain_register
blocking_notifier_chain_unregister
boot_cpu_data
bpf_dispatcher_xdp_func
bpf_prog_add
bpf_prog_inc
bpf_prog_put
bpf_prog_sub
bpf_stats_enabled_key
bpf_trace_run1
bpf_trace_run2
bpf_trace_run3
bpf_trace_run4
bpf_trace_run5
bpf_trace_run6
bpf_trace_run9
bpf_warn_invalid_xdp_action
bsg_job_done
btree_destroy
btree_geo32
btree_geo64
btree_get_prev
btree_init
btree_insert
btree_last
btree_lookup
btree_remove
btree_update
build_skb
bus_find_device
bus_register
bus_unregister
cachemode2protval
call_netdevice_notifiers
call_rcu
call_srcu
call_usermodehelper
cancel_delayed_work
cancel_delayed_work_sync
cancel_work_sync
can_do_mlock
capable
cdev_add
cdev_alloc
cdev_del
cdev_device_add
cdev_device_del
cdev_init
cdev_set_parent
cgroup_attach_task_all
__check_object_size
class_compat_create_link
class_compat_register
class_compat_remove_link
class_compat_unregister
__class_create
class_create_file_ns
class_destroy
class_find_device
class_for_each_device
__class_register
class_remove_file_ns
class_unregister
cleanup_srcu_struct
clear_user
clflush_cache_range
clk_disable
clk_enable
clk_get_rate
clk_prepare
clk_unprepare
clk_unregister
__close_fd
commit_creds
compat_alloc_user_space
complete
complete_all
complete_and_exit
completion_done
component_add
component_del
__cond_resched
configfs_register_subsystem
configfs_remove_default_groups
configfs_unregister_subsystem
config_group_init
config_group_init_type_name
config_item_put
console_lock
console_unlock
__const_udelay
consume_skb
convert_art_to_tsc
_copy_from_iter
_copy_from_user
_copy_to_iter
_copy_to_user
copy_user_enhanced_fast_string
copy_user_generic_string
copy_user_generic_unrolled
cper_estatus_check
cper_estatus_check_header
cper_estatus_print
__cpu_active_mask
cpu_bit_bitmap
cpu_core_map
cpufreq_get
cpufreq_quick_get
__cpuhp_remove_state
__cpuhp_setup_state
__cpuhp_state_add_instance
__cpuhp_state_remove_instance
cpu_info
cpu_khz
cpumask_local_spread
cpumask_next
cpumask_next_and
cpumask_next_wrap
cpu_number
__cpu_online_mask
__cpu_possible_mask
__cpu_present_mask
cpu_sibling_map
cpus_read_lock
cpus_read_unlock
crc16
crc32c
__crc32c_le_shift
crc32_le
crc8
crc8_populate_msb
crc_t10dif
crypto_ahash_digest
crypto_ahash_final
crypto_ahash_setkey
crypto_alloc_ahash
crypto_alloc_akcipher
crypto_alloc_shash
crypto_destroy_tfm
crypto_inc
__crypto_memneq
crypto_register_akcipher
crypto_register_alg
crypto_register_kpp
crypto_register_shash
crypto_register_skciphers
crypto_shash_final
crypto_shash_update
crypto_unregister_akcipher
crypto_unregister_alg
crypto_unregister_kpp
crypto_unregister_shash
crypto_unregister_skciphers
csum_ipv6_magic
csum_partial
_ctype
current_task
dca3_get_tag
dca_add_requester
dca_register_notify
dca_remove_requester
dca_unregister_notify
dcb_getapp
dcb_ieee_delapp
dcb_ieee_getapp_mask
dcb_ieee_setapp
dcbnl_cee_notify
dcbnl_ieee_notify
dcb_setapp
debugfs_attr_read
debugfs_attr_write
debugfs_create_atomic_t
debugfs_create_bool
debugfs_create_dir
debugfs_create_file
debugfs_create_file_unsafe
debugfs_create_regset32
debugfs_create_u32
debugfs_create_u64
debugfs_create_u8
debugfs_initialized
debugfs_lookup
debugfs_remove
__default_kernel_pte_mask
default_llseek
default_wake_function
__delay
delayed_work_timer_fn
del_gendisk
del_timer
del_timer_sync
destroy_workqueue
dev_add_pack
dev_addr_add
dev_addr_del
dev_alloc_name
dev_attr_phy_event_threshold
dev_base_lock
dev_change_flags
dev_close
_dev_crit
dev_disable_lro
dev_driver_string
_dev_emerg
_dev_err
__dev_get_by_index
dev_get_by_index
dev_get_by_index_rcu
__dev_get_by_name
dev_get_by_name
dev_get_flags
dev_get_iflink
dev_get_stats
device_add
device_add_disk
device_create
device_create_file
device_create_with_groups
device_del
device_destroy
device_for_each_child
device_get_mac_address
device_get_next_child_node
device_initialize
device_link_add
device_match_name
device_property_present
device_property_read_string
device_property_read_string_array
device_property_read_u32_array
device_property_read_u8_array
device_register
device_release_driver
device_remove_file
device_remove_file_self
device_rename
device_reprobe
device_set_wakeup_capable
device_set_wakeup_enable
device_unregister
device_wakeup_disable
_dev_info
__dev_kfree_skb_any
__dev_kfree_skb_irq
devlink_alloc
devlink_flash_update_begin_notify
devlink_flash_update_end_notify
devlink_flash_update_status_notify
devlink_fmsg_arr_pair_nest_end
devlink_fmsg_arr_pair_nest_start
devlink_fmsg_binary_pair_nest_end
devlink_fmsg_binary_pair_nest_start
devlink_fmsg_binary_pair_put
devlink_fmsg_binary_put
devlink_fmsg_bool_pair_put
devlink_fmsg_obj_nest_end
devlink_fmsg_obj_nest_start
devlink_fmsg_pair_nest_end
devlink_fmsg_pair_nest_start
devlink_fmsg_string_pair_put
devlink_fmsg_u32_pair_put
devlink_fmsg_u32_put
devlink_fmsg_u64_pair_put
devlink_fmsg_u8_pair_put
devlink_free
devlink_health_report
devlink_health_reporter_create
devlink_health_reporter_destroy
devlink_health_reporter_priv
devlink_health_reporter_recovery_done
devlink_health_reporter_state_update
devlink_info_board_serial_number_put
devlink_info_driver_name_put
devlink_info_serial_number_put
devlink_info_version_fixed_put
devlink_info_version_running_put
devlink_info_version_stored_put
devlink_net
devlink_param_driverinit_value_get
devlink_param_driverinit_value_set
devlink_params_publish
devlink_params_register
devlink_params_unpublish
devlink_params_unregister
devlink_param_value_changed
devlink_port_attrs_pci_pf_set
devlink_port_attrs_pci_vf_set
devlink_port_attrs_set
devlink_port_health_reporter_create
devlink_port_health_reporter_destroy
devlink_port_params_register
devlink_port_params_unregister
devlink_port_register
devlink_port_type_clear
devlink_port_type_eth_set
devlink_port_type_ib_set
devlink_port_unregister
devlink_region_create
devlink_region_destroy
devlink_region_snapshot_create
devlink_region_snapshot_id_get
devlink_region_snapshot_id_put
devlink_register
devlink_reload_disable
devlink_reload_enable
devlink_remote_reload_actions_performed
devlink_unregister
devm_add_action
devmap_managed_key
dev_mc_add
dev_mc_add_excl
dev_mc_del
devm_clk_get
__devm_drm_dev_alloc
devm_free_irq
devm_hwmon_device_register_with_groups
devm_ioremap
devm_ioremap_resource
devm_iounmap
devm_kfree
devm_kmalloc
devm_kmemdup
devm_mdiobus_alloc_size
devm_request_threaded_irq
_dev_notice
dev_open
dev_pm_qos_expose_latency_tolerance
dev_pm_qos_hide_latency_tolerance
dev_pm_qos_update_user_latency_tolerance
dev_printk
dev_printk_emit
dev_queue_xmit
__dev_remove_pack
dev_remove_pack
dev_set_mac_address
dev_set_mtu
dev_set_name
dev_set_promiscuity
dev_trans_start
dev_uc_add
dev_uc_add_excl
dev_uc_del
_dev_warn
d_find_alias
disable_irq
disable_irq_nosync
disk_end_io_acct
disk_start_io_acct
dma_alloc_attrs
dma_buf_dynamic_attach
dma_buf_get
dma_buf_map_attachment
dma_buf_move_notify
dma_buf_pin
dma_buf_put
dma_buf_unmap_attachment
dma_buf_unpin
dma_common_get_sgtable
dma_fence_add_callback
dma_fence_array_create
dma_fence_context_alloc
dma_fence_free
dma_fence_get_status
dma_fence_get_stub
dma_fence_init
dma_fence_release
dma_fence_signal
dma_fence_signal_locked
dma_fence_wait_any_timeout
dma_fence_wait_timeout
dma_free_attrs
dma_get_required_mask
dmam_alloc_attrs
dma_map_page_attrs
dma_map_resource
dma_map_sg_attrs
dma_max_mapping_size
dmam_free_coherent
dma_mmap_attrs
dmam_pool_create
dma_ops
dma_pool_alloc
dma_pool_create
dma_pool_destroy
dma_pool_free
dma_resv_add_excl_fence
dma_resv_add_shared_fence
dma_resv_get_fences_rcu
dma_resv_reserve_shared
dma_resv_test_signaled_rcu
dma_resv_wait_timeout_rcu
dma_set_coherent_mask
dma_set_mask
dma_sync_single_for_cpu
dma_sync_single_for_device
dma_unmap_page_attrs
dma_unmap_resource
dma_unmap_sg_attrs
dmi_check_system
dmi_get_system_info
dmi_match
__do_once_done
__do_once_start
do_wait_intr
down
downgrade_write
down_interruptible
down_read
down_read_killable
down_read_trylock
down_timeout
down_trylock
down_write
down_write_killable
down_write_trylock
d_path
dput
dql_completed
dql_reset
drain_workqueue
driver_create_file
driver_find_device
driver_for_each_device
driver_register
driver_remove_file
driver_unregister
drm_add_edid_modes
drm_add_modes_noedid
drm_atomic_add_affected_connectors
drm_atomic_add_affected_planes
drm_atomic_commit
drm_atomic_get_connector_state
drm_atomic_get_crtc_state
drm_atomic_get_plane_state
drm_atomic_get_private_obj_state
drm_atomic_helper_async_check
drm_atomic_helper_calc_timestamping_constants
drm_atomic_helper_check
drm_atomic_helper_check_modeset
drm_atomic_helper_check_planes
drm_atomic_helper_check_plane_state
drm_atomic_helper_cleanup_planes
drm_atomic_helper_commit
drm_atomic_helper_commit_cleanup_done
drm_atomic_helper_commit_hw_done
__drm_atomic_helper_connector_destroy_state
drm_atomic_helper_connector_destroy_state
__drm_atomic_helper_connector_duplicate_state
drm_atomic_helper_connector_duplicate_state
__drm_atomic_helper_connector_reset
drm_atomic_helper_connector_reset
__drm_atomic_helper_crtc_destroy_state
drm_atomic_helper_crtc_destroy_state
__drm_atomic_helper_crtc_duplicate_state
drm_atomic_helper_crtc_duplicate_state
__drm_atomic_helper_crtc_reset
drm_atomic_helper_crtc_reset
drm_atomic_helper_disable_plane
drm_atomic_helper_legacy_gamma_set
drm_atomic_helper_page_flip
__drm_atomic_helper_plane_destroy_state
drm_atomic_helper_plane_destroy_state
__drm_atomic_helper_plane_duplicate_state
drm_atomic_helper_plane_duplicate_state
__drm_atomic_helper_plane_reset
drm_atomic_helper_plane_reset
drm_atomic_helper_prepare_planes
__drm_atomic_helper_private_obj_duplicate_state
drm_atomic_helper_resume
drm_atomic_helper_set_config
drm_atomic_helper_setup_commit
drm_atomic_helper_shutdown
drm_atomic_helper_suspend
drm_atomic_helper_swap_state
drm_atomic_helper_update_legacy_modeset_state
drm_atomic_helper_update_plane
drm_atomic_helper_wait_for_dependencies
drm_atomic_helper_wait_for_fences
drm_atomic_helper_wait_for_flip_done
drm_atomic_private_obj_fini
drm_atomic_private_obj_init
drm_atomic_state_alloc
drm_atomic_state_clear
drm_atomic_state_default_clear
drm_atomic_state_default_release
__drm_atomic_state_free
drm_atomic_state_init
drm_compat_ioctl
drm_connector_attach_dp_subconnector_property
drm_connector_attach_encoder
drm_connector_attach_max_bpc_property
drm_connector_attach_vrr_capable_property
drm_connector_cleanup
drm_connector_init
drm_connector_init_with_ddc
drm_connector_list_iter_begin
drm_connector_list_iter_end
drm_connector_list_iter_next
drm_connector_register
drm_connector_set_path_property
drm_connector_set_vrr_capable_property
drm_connector_unregister
drm_connector_update_edid_property
drm_crtc_accurate_vblank_count
drm_crtc_add_crc_entry
drm_crtc_arm_vblank_event
drm_crtc_cleanup
__drm_crtc_commit_free
drm_crtc_enable_color_mgmt
drm_crtc_from_index
drm_crtc_handle_vblank
drm_crtc_helper_set_config
drm_crtc_helper_set_mode
drm_crtc_init
drm_crtc_init_with_planes
drm_crtc_send_vblank_event
drm_crtc_vblank_count
drm_crtc_vblank_get
drm_crtc_vblank_helper_get_vblank_timestamp
drm_crtc_vblank_helper_get_vblank_timestamp_internal
drm_crtc_vblank_off
drm_crtc_vblank_on
drm_crtc_vblank_put
drm_cvt_mode
__drm_dbg
__drm_debug
drm_debugfs_create_files
drm_detect_hdmi_monitor
drm_detect_monitor_audio
drm_dev_alloc
drm_dev_dbg
drm_dev_enter
drm_dev_exit
drm_dev_printk
drm_dev_put
drm_dev_register
drm_dev_unplug
drm_dev_unregister
drm_dp_atomic_find_vcpi_slots
drm_dp_atomic_release_vcpi_slots
drm_dp_aux_init
drm_dp_aux_register
drm_dp_aux_unregister
drm_dp_bw_code_to_link_rate
drm_dp_calc_pbn_mode
drm_dp_cec_irq
drm_dp_cec_register_connector
drm_dp_cec_set_edid
drm_dp_cec_unregister_connector
drm_dp_cec_unset_edid
drm_dp_channel_eq_ok
drm_dp_check_act_status
drm_dp_clock_recovery_ok
drm_dp_downstream_max_dotclock
drm_dp_dpcd_read
drm_dp_dpcd_read_link_status
drm_dp_dpcd_write
drm_dp_find_vcpi_slots
drm_dp_get_adjust_request_pre_emphasis
drm_dp_get_adjust_request_voltage
drm_dp_link_rate_to_bw_code
drm_dp_link_train_channel_eq_delay
drm_dp_link_train_clock_recovery_delay
drm_dp_mst_add_affected_dsc_crtcs
drm_dp_mst_allocate_vcpi
drm_dp_mst_atomic_check
drm_dp_mst_atomic_enable_dsc
drm_dp_mst_connector_early_unregister
drm_dp_mst_connector_late_register
drm_dp_mst_deallocate_vcpi
drm_dp_mst_detect_port
drm_dp_mst_dsc_aux_for_port
drm_dp_mst_dump_topology
drm_dp_mst_get_edid
drm_dp_mst_get_port_malloc
drm_dp_mst_hpd_irq
drm_dp_mst_put_port_malloc
drm_dp_mst_reset_vcpi_slots
drm_dp_mst_topology_mgr_destroy
drm_dp_mst_topology_mgr_init
drm_dp_mst_topology_mgr_resume
drm_dp_mst_topology_mgr_set_mst
drm_dp_mst_topology_mgr_suspend
drm_dp_read_desc
drm_dp_read_downstream_info
drm_dp_read_dpcd_caps
drm_dp_read_mst_cap
drm_dp_read_sink_count
drm_dp_read_sink_count_cap
drm_dp_send_real_edid_checksum
drm_dp_set_subconnector_property
drm_dp_start_crc
drm_dp_stop_crc
drm_dp_update_payload_part1
drm_dp_update_payload_part2
drm_dsc_compute_rc_parameters
drm_dsc_pps_payload_pack
drm_edid_header_is_valid
drm_edid_is_valid
drm_edid_to_sad
drm_edid_to_speaker_allocation
drm_encoder_cleanup
drm_encoder_init
__drm_err
drm_fb_helper_alloc_fbi
drm_fb_helper_blank
drm_fb_helper_cfb_copyarea
drm_fb_helper_cfb_fillrect
drm_fb_helper_cfb_imageblit
drm_fb_helper_check_var
drm_fb_helper_debug_enter
drm_fb_helper_debug_leave
drm_fb_helper_fill_info
drm_fb_helper_fini
drm_fb_helper_hotplug_event
drm_fb_helper_init
drm_fb_helper_initial_config
drm_fb_helper_ioctl
drm_fb_helper_lastclose
drm_fb_helper_output_poll_changed
drm_fb_helper_pan_display
drm_fb_helper_prepare
drm_fb_helper_setcmap
drm_fb_helper_set_par
drm_fb_helper_set_suspend
drm_fb_helper_set_suspend_unlocked
drm_fb_helper_sys_copyarea
drm_fb_helper_sys_fillrect
drm_fb_helper_sys_imageblit
drm_fb_helper_unregister_fbi
drm_format_info
drm_framebuffer_cleanup
drm_framebuffer_init
drm_framebuffer_unregister_private
drm_gem_dmabuf_mmap
drm_gem_dmabuf_release
drm_gem_dmabuf_vmap
drm_gem_dmabuf_vunmap
drm_gem_fb_create_handle
drm_gem_fb_destroy
drm_gem_handle_create
drm_gem_handle_delete
drm_gem_map_attach
drm_gem_map_detach
drm_gem_map_dma_buf
drm_gem_object_free
drm_gem_object_init
drm_gem_object_lookup
drm_gem_object_release
drm_gem_prime_export
drm_gem_prime_fd_to_handle
drm_gem_prime_handle_to_fd
drm_gem_prime_import
drm_gem_private_object_init
drm_gem_unmap_dma_buf
drm_get_connector_status_name
drm_get_edid
drm_get_edid_switcheroo
drm_get_format_info
drm_get_format_name
drm_handle_vblank
drm_hdmi_avi_infoframe_from_display_mode
drm_hdmi_infoframe_set_hdr_metadata
drm_hdmi_vendor_infoframe_from_display_mode
drm_helper_connector_dpms
drm_helper_disable_unused_functions
drm_helper_force_disable_all
drm_helper_hpd_irq_event
drm_helper_mode_fill_fb_struct
drm_helper_probe_detect
drm_helper_probe_single_connector_modes
drm_helper_resume_force_mode
drm_i2c_encoder_detect
drm_i2c_encoder_init
drm_i2c_encoder_mode_fixup
drm_i2c_encoder_restore
drm_i2c_encoder_save
drm_invalid_op
drm_ioctl
drm_irq_install
drm_irq_uninstall
drm_is_current_master
drm_kms_helper_hotplug_event
drm_kms_helper_is_poll_worker
drm_kms_helper_poll_disable
drm_kms_helper_poll_enable
drm_kms_helper_poll_fini
drm_kms_helper_poll_init
drm_match_cea_mode
drm_mm_init
drm_mm_insert_node_in_range
drmm_mode_config_init
drm_mm_print
drm_mm_remove_node
drm_mm_takedown
drm_mode_config_cleanup
drm_mode_config_reset
drm_mode_copy
drm_mode_create_dvi_i_properties
drm_mode_create_scaling_mode_property
drm_mode_create_tv_properties
drm_mode_crtc_set_gamma_size
drm_mode_debug_printmodeline
drm_mode_destroy
drm_mode_duplicate
drm_mode_equal
drm_mode_get_hv_timing
drm_mode_is_420_also
drm_mode_is_420_only
drm_mode_legacy_fb_format
drm_mode_object_find
drm_mode_object_put
drm_mode_probed_add
drm_modeset_acquire_fini
drm_modeset_acquire_init
drm_modeset_backoff
drm_mode_set_crtcinfo
drm_modeset_drop_locks
drm_modeset_lock
drm_modeset_lock_all
drm_modeset_lock_all_ctx
drm_modeset_lock_single_interruptible
drm_mode_set_name
drm_modeset_unlock
drm_modeset_unlock_all
drm_mode_sort
drm_mode_vrefresh
drm_need_swiotlb
drm_object_attach_property
drm_object_property_set_value
drm_open
drm_plane_cleanup
drm_plane_create_alpha_property
drm_plane_create_blend_mode_property
drm_plane_create_color_properties
drm_plane_create_rotation_property
drm_plane_create_zpos_immutable_property
drm_plane_create_zpos_property
drm_plane_force_disable
drm_plane_init
drm_poll
drm_primary_helper_destroy
drm_primary_helper_funcs
drm_prime_gem_destroy
drm_prime_pages_to_sg
drm_prime_sg_to_page_addr_arrays
drm_printf
__drm_printfn_seq_file
drm_property_add_enum
drm_property_create
drm_property_create_enum
drm_property_create_range
__drm_puts_seq_file
drm_read
drm_release
drm_scdc_read
drm_scdc_write
drm_sched_dependency_optimized
drm_sched_entity_destroy
drm_sched_entity_fini
drm_sched_entity_flush
drm_sched_entity_init
drm_sched_entity_modify_sched
drm_sched_entity_push_job
drm_sched_entity_set_priority
drm_sched_fault
drm_sched_fini
drm_sched_increase_karma
drm_sched_init
drm_sched_job_cleanup
drm_sched_job_init
drm_sched_pick_best
drm_sched_resubmit_jobs
drm_sched_start
drm_sched_stop
drm_sched_suspend_timeout
drm_send_event_locked
drm_syncobj_add_point
drm_syncobj_create
drm_syncobj_find
drm_syncobj_find_fence
drm_syncobj_free
drm_syncobj_get_fd
drm_syncobj_get_handle
drm_syncobj_replace_fence
drm_universal_plane_init
drm_vblank_init
drm_vblank_work_cancel_sync
drm_vblank_work_init
drm_vblank_work_schedule
drm_vma_node_allow
drm_vma_node_is_allowed
drm_vma_node_revoke
dst_init
dst_release
dump_stack
__dynamic_dev_dbg
__dynamic_ibdev_dbg
__dynamic_netdev_dbg
__dynamic_pr_debug
efi
elfcorehdr_addr
emergency_restart
empty_zero_page
enable_irq
errno_to_blk_status
ether_setup
eth_get_headlen
eth_mac_addr
eth_platform_get_mac_address
ethtool_convert_legacy_u32_to_link_mode
ethtool_convert_link_mode_to_legacy_u32
__ethtool_get_link_ksettings
ethtool_intersect_link_masks
ethtool_op_get_link
ethtool_op_get_ts_info
ethtool_rx_flow_rule_create
ethtool_rx_flow_rule_destroy
eth_type_trans
eth_validate_addr
eventfd_ctx_fdget
eventfd_ctx_fileget
eventfd_ctx_put
eventfd_fget
eventfd_signal
event_triggers_call
ex_handler_default
fasync_helper
fc_attach_transport
fc_block_scsi_eh
fc_disc_config
fc_disc_init
fc_eh_host_reset
fc_eh_timed_out
fc_elsct_init
fc_elsct_send
fc_exch_init
fc_exch_mgr_alloc
fc_exch_mgr_free
fc_exch_mgr_list_clone
fc_exch_recv
fc_fabric_login
fc_fabric_logoff
_fc_frame_alloc
fc_frame_alloc_fill
fc_get_event_number
fc_get_host_port_state
fc_get_host_speed
fc_get_host_stats
fc_host_fpin_rcv
fc_host_post_event
fc_host_post_vendor_event
fc_lport_bsg_request
fc_lport_config
fc_lport_destroy
fc_lport_flogi_resp
fc_lport_init
fc_lport_logo_resp
fc_lport_reset
fcoe_check_wait_queue
fcoe_clean_pending_queue
fcoe_ctlr_destroy
fcoe_ctlr_device_add
fcoe_ctlr_device_delete
fcoe_ctlr_els_send
fcoe_ctlr_get_lesb
fcoe_ctlr_init
fcoe_ctlr_link_down
fcoe_ctlr_link_up
fcoe_ctlr_recv
fcoe_ctlr_recv_flogi
fcoe_fc_crc
fcoe_fcf_get_selected
__fcoe_get_lesb
fcoe_get_lesb
fcoe_get_paged_crc_eof
fcoe_get_wwn
fcoe_libfc_config
fcoe_link_speed_update
fcoe_queue_timer
fcoe_start_io
fcoe_transport_attach
fcoe_transport_detach
fcoe_validate_vport_create
fcoe_wwn_from_mac
fcoe_wwn_to_str
fc_release_transport
fc_remote_port_add
fc_remote_port_delete
fc_remote_port_rolechg
fc_remove_host
fc_rport_create
fc_rport_destroy
fc_rport_login
fc_rport_logoff
fc_rport_lookup
fc_rport_terminate_io
fc_set_mfs
fc_set_rport_loss_tmo
fc_slave_alloc
fc_vport_create
fc_vport_id_lookup
fc_vport_setlink
fc_vport_terminate
__fdget
fd_install
__fentry__
fget
__fib_lookup
fib_table_lookup
filemap_fault
filp_close
filp_open
find_first_bit
find_first_zero_bit
find_get_pid
find_last_bit
find_next_bit
find_next_zero_bit
find_pid_ns
find_vma
finish_wait
firmware_request_nowarn
fixed_size_llseek
flow_block_cb_alloc
flow_block_cb_lookup
flow_block_cb_setup_simple
flow_indr_block_cb_alloc
flow_indr_dev_register
flow_indr_dev_unregister
flow_keys_basic_dissector
flow_keys_dissector
flow_resources_add
flow_resources_alloc
flow_rule_match_basic
flow_rule_match_control
flow_rule_match_cvlan
flow_rule_match_enc_control
flow_rule_match_enc_ip
flow_rule_match_enc_ipv4_addrs
flow_rule_match_enc_ipv6_addrs
flow_rule_match_enc_keyid
flow_rule_match_enc_opts
flow_rule_match_enc_ports
flow_rule_match_eth_addrs
flow_rule_match_icmp
flow_rule_match_ip
flow_rule_match_ipv4_addrs
flow_rule_match_ipv6_addrs
flow_rule_match_meta
flow_rule_match_mpls
flow_rule_match_ports
flow_rule_match_tcp
flow_rule_match_vlan
flush_delayed_work
flush_signals
flush_work
flush_workqueue
follow_pfn
force_sig
fortify_panic
fput
free_cpumask_var
free_fib_info
free_irq
free_irq_cpu_rmap
free_netdev
__free_pages
free_pages
free_percpu
from_kgid
from_kgid_munged
from_kuid
from_kuid_munged
fs_bio_set
__f_setown
full_name_hash
fwnode_property_read_string
fwnode_property_read_u32_array
fwnode_property_read_u8_array
gcd
generate_random_uuid
generic_file_llseek
generic_handle_irq
genlmsg_put
genl_notify
genl_register_family
genl_unregister_family
genphy_read_status
genphy_restart_aneg
gen_pool_add_owner
gen_pool_alloc_algo_owner
gen_pool_create
gen_pool_destroy
gen_pool_free_owner
gen_pool_virt_to_phys
get_cpu_idle_time
get_cpu_idle_time_us
get_cpu_iowait_time_us
get_device
get_device_system_crosststamp
__get_free_pages
get_net_ns_by_fd
get_net_ns_by_pid
get_phy_device
get_pid_task
get_random_bytes
__get_task_comm
get_task_mm
get_task_pid
get_unused_fd_flags
__get_user_2
__get_user_4
__get_user_8
get_user_pages
get_user_pages_fast
get_user_pages_longterm
get_user_pages_remote
get_zeroed_page
gre_add_protocol
gre_del_protocol
groups_alloc
groups_free
guid_parse
handle_simple_irq
hdmi_avi_infoframe_pack
hdmi_drm_infoframe_pack_only
hdmi_infoframe_pack
hest_disable
hex_to_bin
hrtimer_cancel
hrtimer_forward
hrtimer_init
hrtimer_start_range_ns
hrtimer_try_to_cancel
__hw_addr_sync_dev
__hw_addr_unsync_dev
hwmon_device_register
hwmon_device_register_with_groups
hwmon_device_register_with_info
hwmon_device_unregister
hyperv_read_cfg_blk
hyperv_reg_block_invalidate
hyperv_write_cfg_blk
i2c_add_adapter
i2c_add_numbered_adapter
i2c_bit_add_bus
i2c_bit_algo
i2c_del_adapter
i2c_generic_scl_recovery
i2c_new_client_device
i2c_recover_bus
i2c_smbus_read_byte_data
i2c_smbus_write_byte_data
i2c_transfer
i2c_unregister_device
__ib_alloc_cq
_ib_alloc_device
__ib_alloc_pd
ib_attach_mcast
ib_cache_gid_parse_type_str
ib_cache_gid_type_str
ib_cancel_mad
ib_cm_init_qp_attr
ib_cm_insert_listen
ib_cm_listen
ib_cm_notify
ibcm_reject_msg
ib_copy_ah_attr_to_user
ib_copy_path_rec_from_user
ib_copy_path_rec_to_user
ib_copy_qp_attr_to_user
ib_create_ah_from_wc
ib_create_cm_id
__ib_create_cq
ib_create_qp
ib_create_qp_security
ib_create_send_mad
ib_create_srq_user
ib_dealloc_device
ib_dealloc_pd_user
ib_dealloc_xrcd_user
ib_dereg_mr_user
ib_destroy_cm_id
ib_destroy_cq_user
ib_destroy_qp_user
ib_destroy_srq_user
ib_detach_mcast
ibdev_err
ib_device_get_by_netdev
ib_device_put
ib_device_set_netdev
ibdev_info
ibdev_warn
ib_dispatch_event
ib_drain_qp
ib_event_msg
ib_find_cached_pkey
ib_free_cq
ib_free_recv_mad
ib_free_send_mad
ib_get_cached_pkey
ib_get_cached_port_state
ib_get_eth_speed
ib_get_gids_from_rdma_hdr
ib_get_mad_data_offset
ib_get_net_dev_by_params
ib_get_rdma_header_version
ib_get_rmpp_segment
ib_init_ah_attr_from_path
ib_init_ah_attr_from_wc
ib_init_ah_from_mcmember
ib_is_mad_class_rmpp
ib_mad_kernel_rmpp_agent
ib_map_mr_sg
ib_modify_mad
ib_modify_port
ib_modify_qp
ib_modify_qp_is_ok
ib_modify_qp_with_udata
ib_mr_pool_destroy
ib_mr_pool_get
ib_mr_pool_init
ib_mr_pool_put
ibnl_put_attr
ibnl_put_msg
ib_open_qp
ib_post_send_mad
ib_process_cq_direct
ib_query_pkey
ib_query_port
ib_query_qp
ib_query_srq
ib_rdmacg_try_charge
ib_rdmacg_uncharge
ib_register_client
ib_register_device
ib_register_event_handler
ib_register_mad_agent
ib_response_mad
ib_sa_cancel_query
ib_sa_free_multicast
ib_sa_get_mcmember_rec
ib_sa_guid_info_rec_query
ib_sa_join_multicast
ib_sa_pack_path
ib_sa_path_rec_get
ib_sa_register_client
ib_sa_sendonly_fullmem_support
ib_sa_unpack_path
ib_sa_unregister_client
ib_send_cm_drep
ib_send_cm_dreq
ib_send_cm_mra
ib_send_cm_rej
ib_send_cm_rep
ib_send_cm_req
ib_send_cm_rtu
ib_send_cm_sidr_rep
ib_send_cm_sidr_req
ib_set_client_data
ib_set_device_ops
ib_sg_to_pages
ib_ud_header_init
ib_ud_header_pack
ib_ud_ip4_csum
ib_umem_copy_from
ib_umem_find_best_pgsz
ib_umem_get
ib_umem_odp_alloc_child
ib_umem_odp_alloc_implicit
ib_umem_odp_get
ib_umem_odp_map_dma_and_lock
ib_umem_odp_release
ib_umem_odp_unmap_dma_pages
ib_umem_release
ib_unregister_client
ib_unregister_device
ib_unregister_device_queued
ib_unregister_driver
ib_unregister_event_handler
ib_unregister_mad_agent
ib_uverbs_flow_resources_free
ib_uverbs_get_ucontext_file
ib_wc_status_msg
__icmp_send
icmpv6_send
ida_alloc_range
ida_destroy
ida_free
idr_alloc
idr_alloc_cyclic
idr_alloc_u32
idr_destroy
idr_find
idr_for_each
idr_get_next
idr_get_next_ul
idr_preload
idr_remove
idr_replace
igrab
in4_pton
in6_dev_finish_destroy
in6_pton
in_aton
in_dev_finish_destroy
in_egroup_p
__inet6_lookup_established
inet_addr_is_any
inet_confirm_addr
inet_get_local_port_range
__inet_lookup_established
inet_proto_csum_replace16
inet_proto_csum_replace4
inet_pton_with_scope
in_group_p
init_net
__init_rwsem
init_srcu_struct
__init_swait_queue_head
init_task
init_timer_key
init_uts_ns
init_wait_entry
__init_waitqueue_head
input_close_device
input_open_device
input_register_handle
input_register_handler
input_unregister_handle
input_unregister_handler
interval_tree_insert
interval_tree_iter_first
interval_tree_iter_next
interval_tree_remove
int_to_scsilun
iomem_resource
iommu_get_domain_for_dev
iommu_group_add_device
iommu_group_alloc
iommu_group_get
iommu_group_id
iommu_group_put
iommu_group_remove_device
iommu_iova_to_phys
iommu_map
iommu_unmap
ioread16
ioread16be
ioread32
ioread32be
ioread8
ioremap
ioremap_cache
ioremap_wc
io_schedule
io_schedule_timeout
iounmap
iov_iter_advance
iov_iter_bvec
iov_iter_init
iov_iter_npages
iowrite16
iowrite32
iowrite32be
__iowrite32_copy
__iowrite64_copy
iowrite8
ip6_dst_hoplimit
ip6_local_out
ip6_route_output_flags
ip_compute_csum
ip_defrag
__ip_dev_find
ip_do_fragment
ip_local_out
__ip_mc_dec_group
ip_mc_inc_group
ipmi_add_smi
ipmi_create_user
ipmi_destroy_user
ipmi_free_recv_msg
ipmi_poll_interface
ipmi_request_settime
ipmi_set_gets_events
ipmi_set_my_address
ipmi_smi_msg_received
ipmi_unregister_smi
ipmi_validate_addr
ip_route_output_flow
__ip_select_ident
ip_send_check
ip_set_get_byname
ip_set_put_byindex
ip_tos2prio
ip_tunnel_get_stats64
iput
__ipv6_addr_type
ipv6_chk_addr
ipv6_ext_hdr
ipv6_find_hdr
ipv6_mod_enabled
ipv6_skip_exthdr
ipv6_stub
ip_vs_proto_name
irq_cpu_rmap_add
irq_create_mapping_affinity
__irq_domain_add
irq_domain_remove
irq_find_mapping
irq_get_irq_data
irq_modify_status
irq_poll_complete
irq_poll_disable
irq_poll_enable
irq_poll_init
irq_poll_sched
irq_set_affinity_hint
irq_set_affinity_notifier
irq_set_chip_and_handler_name
irq_to_desc
is_acpi_data_node
is_acpi_device_node
iscsi_block_scsi_eh
iscsi_block_session
iscsi_boot_create_ethernet
iscsi_boot_create_host_kset
iscsi_boot_create_initiator
iscsi_boot_create_target
iscsi_boot_destroy_kset
__iscsi_complete_pdu
iscsi_complete_pdu
iscsi_complete_scsi_task
iscsi_conn_bind
iscsi_conn_error_event
iscsi_conn_failure
iscsi_conn_get_addr_param
iscsi_conn_get_param
iscsi_conn_login_event
iscsi_conn_send_pdu
iscsi_conn_setup
iscsi_conn_start
iscsi_conn_stop
iscsi_conn_teardown
iscsi_create_endpoint
iscsi_create_flashnode_conn
iscsi_create_flashnode_sess
iscsi_create_iface
iscsi_destroy_all_flashnode
iscsi_destroy_endpoint
iscsi_destroy_flashnode_sess
iscsi_destroy_iface
iscsi_eh_abort
iscsi_eh_cmd_timed_out
iscsi_eh_device_reset
iscsi_eh_recover_target
iscsi_eh_session_reset
iscsi_find_flashnode_conn
iscsi_find_flashnode_sess
iscsi_flashnode_bus_match
iscsi_get_discovery_parent_name
iscsi_get_ipaddress_state_name
iscsi_get_port_speed_name
iscsi_get_port_state_name
iscsi_get_router_state_name
__iscsi_get_task
iscsi_host_add
iscsi_host_alloc
iscsi_host_for_each_session
iscsi_host_free
iscsi_host_get_param
iscsi_host_remove
iscsi_is_session_dev
iscsi_is_session_online
iscsi_itt_to_task
iscsi_lookup_endpoint
iscsi_offload_mesg
iscsi_ping_comp_event
iscsi_post_host_event
__iscsi_put_task
iscsi_put_task
iscsi_queuecommand
iscsi_register_transport
iscsi_session_chkready
iscsi_session_failure
iscsi_session_get_param
iscsi_session_recovery_timedout
iscsi_session_setup
iscsi_session_teardown
iscsi_set_param
iscsi_suspend_queue
iscsi_switch_str_param
iscsi_target_alloc
iscsi_unblock_session
iscsi_unregister_transport
is_uv_system
is_vmalloc_addr
iterate_fd
iw_cm_accept
iw_cm_connect
iw_cm_disconnect
iw_cm_init_qp_attr
iw_cm_listen
iw_cm_reject
iwcm_reject_msg
iw_create_cm_id
iw_destroy_cm_id
jiffies
jiffies_64
jiffies64_to_nsecs
jiffies_to_msecs
jiffies_to_timespec64
jiffies_to_usecs
kasprintf
kernel_bind
kernel_connect
kernel_cpustat
kernel_fpu_begin_mask
kernel_fpu_end
kernel_recvmsg
kernel_sendmsg
kernel_sock_shutdown
kernel_write
kexec_crash_loaded
__kfifo_alloc
__kfifo_free
kfree
kfree_const
kfree_sensitive
kfree_skb
kfree_skb_list
kfree_skb_partial
kgdb_active
kgdb_breakpoint
kill_fasync
__kmalloc
kmalloc_caches
__kmalloc_node
kmalloc_order_trace
kmem_cache_alloc
kmem_cache_alloc_node
kmem_cache_alloc_node_trace
kmem_cache_alloc_trace
kmem_cache_create
kmem_cache_create_usercopy
kmem_cache_destroy
kmem_cache_free
kmem_cache_shrink
kmemdup
kobject_add
kobject_create_and_add
kobject_del
kobject_get
kobject_init
kobject_init_and_add
kobject_put
kobject_set_name
kobject_uevent
kobject_uevent_env
krealloc
kset_create_and_add
kset_find_obj
kset_register
kset_unregister
ksize
kstrdup
kstrdup_const
kstrndup
kstrtobool
kstrtobool_from_user
kstrtoint
kstrtoint_from_user
kstrtoll
kstrtoll_from_user
kstrtou16
kstrtou8
kstrtouint
kstrtouint_from_user
kstrtoul_from_user
kstrtoull
kstrtoull_from_user
ksys_sync_helper
kthread_bind
kthread_create_on_node
kthread_create_worker
kthread_destroy_worker
kthread_park
kthread_queue_work
kthread_should_stop
kthread_stop
kthread_unpark
kthread_unuse_mm
kthread_use_mm
ktime_get
ktime_get_coarse_real_ts64
ktime_get_mono_fast_ns
ktime_get_raw
ktime_get_raw_ts64
ktime_get_real_seconds
ktime_get_real_ts64
ktime_get_seconds
ktime_get_ts64
ktime_get_with_offset
kvasprintf
kvfree
kvfree_call_rcu
kvmalloc_node
lcm
led_classdev_register_ext
led_classdev_resume
led_classdev_suspend
led_classdev_unregister
libfc_vport_create
__list_add_valid
__list_del_entry_valid
list_sort
llist_add_batch
__local_bh_enable_ip
__lock_page
lock_page_memcg
lockref_get
lock_sock_nested
make_kgid
make_kuid
map_destroy
mark_page_accessed
match_hex
match_int
match_strdup
match_string
match_token
match_u64
mdev_dev
mdev_from_dev
mdev_get_drvdata
mdev_parent_dev
mdev_register_device
mdev_set_drvdata
mdev_unregister_device
mdev_uuid
mdio45_probe
mdiobus_alloc_size
mdiobus_free
mdiobus_get_phy
mdiobus_read
__mdiobus_register
mdiobus_unregister
mdiobus_write
mdio_mii_ioctl
memchr
memchr_inv
memcmp
memcpy
memcpy_fromio
memcpy_toio
memdup_user
memdup_user_nul
memmove
memory_read_from_buffer
memparse
mempool_alloc
mempool_alloc_slab
mempool_create
mempool_create_node
mempool_destroy
mempool_free
mempool_free_slab
mempool_kfree
mempool_kmalloc
memscan
mem_section
memset
memset_io
metadata_dst_alloc
mfd_add_devices
mfd_remove_devices
misc_deregister
misc_register
__mmdrop
mm_kobj
mmput
mmu_interval_notifier_insert
mmu_interval_notifier_remove
mmu_notifier_call_srcu
mmu_notifier_put
__mmu_notifier_register
mmu_notifier_register
mmu_notifier_synchronize
mmu_notifier_unregister
mod_delayed_work_on
mod_timer
mod_timer_pending
__module_get
module_layout
module_put
module_refcount
mpi_alloc
mpi_free
mpi_get_buffer
mpi_powm
mpi_read_raw_data
__msecs_to_jiffies
msleep
msleep_interruptible
mtd_device_parse_register
mtd_device_unregister
__mutex_init
mutex_is_locked
mutex_lock
mutex_lock_interruptible
mutex_lock_killable
mutex_trylock
mutex_unlock
mxm_wmi_call_mxds
mxm_wmi_call_mxmx
mxm_wmi_supported
napi_alloc_frag
__napi_alloc_skb
napi_complete_done
napi_consume_skb
napi_disable
napi_get_frags
napi_gro_flush
napi_gro_frags
napi_gro_receive
__napi_schedule
__napi_schedule_irqoff
napi_schedule_prep
__ndelay
ndo_dflt_bridge_getlink
ndo_dflt_fdb_add
nd_tbl
__neigh_create
neigh_destroy
__neigh_event_send
neigh_lookup
netdev_alloc_frag
__netdev_alloc_skb
netdev_bind_sb_channel_queue
netdev_crit
netdev_err
netdev_features_change
netdev_has_upper_dev_all_rcu
netdev_info
netdev_is_rx_handler_busy
netdev_lower_get_next
netdev_lower_get_next_private
netdev_master_upper_dev_get
netdev_master_upper_dev_get_rcu
netdev_master_upper_dev_link
netdev_notice
netdev_pick_tx
netdev_port_same_parent_id
netdev_printk
netdev_reset_tc
netdev_rss_key_fill
netdev_rx_handler_register
netdev_rx_handler_unregister
netdev_set_num_tc
netdev_set_sb_channel
netdev_set_tc_queue
netdev_state_change
netdev_stats_to_stats64
netdev_unbind_sb_channel
netdev_update_features
netdev_upper_dev_unlink
netdev_walk_all_lower_dev_rcu
netdev_walk_all_upper_dev_rcu
netdev_warn
net_dim
net_dim_get_def_rx_moderation
net_dim_get_def_tx_moderation
net_dim_get_rx_moderation
net_dim_get_tx_moderation
netif_carrier_off
netif_carrier_on
netif_device_attach
netif_device_detach
netif_get_num_default_rss_queues
netif_napi_add
__netif_napi_del
netif_receive_skb
netif_rx
netif_rx_ni
netif_schedule_queue
netif_set_real_num_rx_queues
netif_set_real_num_tx_queues
netif_set_xps_queue
netif_tx_stop_all_queues
netif_tx_wake_queue
netlink_ack
netlink_broadcast
netlink_capable
__netlink_dump_start
netlink_has_listeners
__netlink_kernel_create
netlink_kernel_release
netlink_ns_capable
netlink_set_err
netlink_unicast
net_namespace_list
net_ns_type_operations
net_ratelimit
net_rwsem
nf_connlabels_get
nf_connlabels_put
nf_connlabels_replace
nf_conntrack_alloc
__nf_conntrack_confirm
nf_conntrack_destroy
nf_conntrack_eventmask_report
nf_conntrack_expect_lock
nf_conntrack_find_get
nf_conntrack_free
nf_conntrack_hash
nf_conntrack_hash_check_insert
__nf_conntrack_helper_find
nf_conntrack_helper_put
nf_conntrack_helper_try_module_get
nf_conntrack_htable_size
nf_conntrack_in
nf_conntrack_locks
nf_ct_delete
nf_ct_deliver_cached_events
nf_ct_expect_alloc
__nf_ct_expect_find
nf_ct_expect_find_get
nf_ct_expect_hash
nf_ct_expect_hsize
nf_ct_expect_iterate_net
nf_ct_expect_put
nf_ct_expect_register_notifier
nf_ct_expect_related_report
nf_ct_expect_unregister_notifier
nf_ct_ext_add
nf_ct_frag6_gather
nf_ct_get_tuplepr
nf_ct_helper_expectfn_find_by_name
nf_ct_helper_expectfn_find_by_symbol
nf_ct_helper_ext_add
nf_ct_invert_tuple
nf_ct_iterate_cleanup_net
nf_ct_l4proto_find
nf_ct_nat_ext_add
nf_ct_remove_expectations
nf_ct_seq_adjust
nf_ct_tmpl_alloc
nf_ct_tmpl_free
__nf_ct_try_assign_helper
nf_ct_unlink_expect_report
nf_ct_zone_dflt
nf_ipv6_ops
nf_nat_alloc_null_binding
nf_nat_hook
nf_nat_icmp_reply_translation
nf_nat_icmpv6_reply_translation
nf_nat_packet
nf_nat_setup_info
nfnetlink_has_listeners
nfnetlink_send
nfnetlink_set_err
nfnetlink_subsys_register
nfnetlink_subsys_unregister
nfnl_lock
nfnl_unlock
nf_register_net_hook
nf_register_net_hooks
nf_unregister_net_hook
nf_unregister_net_hooks
nla_find
nla_memcpy
__nla_parse
nla_policy_len
__nla_put
nla_put
nla_put_64bit
__nla_reserve
nla_reserve
nla_strlcpy
__nla_validate
__nlmsg_put
node_data
__node_distance
node_states
node_to_cpumask_map
no_llseek
nonseekable_open
noop_llseek
nr_cpu_ids
nr_irqs
nr_node_ids
ns_capable
nsecs_to_jiffies
ns_to_kernel_old_timeval
ns_to_timespec64
numa_node
__num_online_cpus
num_registered_fb
nvidia_gpu_vfio
nvme_alloc_request
nvme_cancel_request
nvme_change_ctrl_state
nvme_cleanup_cmd
nvme_complete_async_event
nvme_complete_rq
nvme_disable_ctrl
nvme_enable_ctrl
nvme_fc_rcv_ls_req
nvme_fc_register_localport
nvme_fc_register_remoteport
nvme_fc_rescan_remoteport
nvme_fc_set_remoteport_devloss
nvme_fc_unregister_localport
nvme_fc_unregister_remoteport
nvme_get_features
nvme_init_ctrl
nvme_init_identify
nvme_io_timeout
nvme_kill_queues
nvme_remove_namespaces
nvme_reset_ctrl
nvme_reset_ctrl_sync
nvme_set_features
nvme_set_queue_count
nvme_setup_cmd
nvme_shutdown_ctrl
nvme_start_admin_queue
nvme_start_ctrl
nvme_start_freeze
nvme_start_queues
nvme_stop_admin_queue
nvme_stop_ctrl
nvme_stop_queues
nvme_submit_sync_cmd
nvme_sync_queues
nvmet_fc_invalidate_host
nvmet_fc_rcv_fcp_abort
nvmet_fc_rcv_fcp_req
nvmet_fc_rcv_ls_req
nvmet_fc_register_targetport
nvmet_fc_unregister_targetport
nvme_try_sched_reset
nvme_unfreeze
nvme_uninit_ctrl
nvme_wait_freeze
nvme_wait_freeze_timeout
nvme_wait_reset
nvme_wq
on_each_cpu_cond_mask
orderly_poweroff
out_of_line_wait_on_bit
out_of_line_wait_on_bit_lock
override_creds
__page_file_index
__page_frag_cache_drain
page_frag_free
__page_mapcount
page_mapped
page_offset_base
page_pool_alloc_pages
page_pool_create
page_pool_destroy
page_pool_put_page
page_pool_release_page
page_pool_update_nid
pagevec_lookup_range
pagevec_lookup_range_tag
__pagevec_release
panic
panic_notifier_list
param_array_ops
param_get_int
param_get_uint
param_ops_bint
param_ops_bool
param_ops_byte
param_ops_charp
param_ops_hexint
param_ops_int
param_ops_long
param_ops_short
param_ops_string
param_ops_uint
param_ops_ullong
param_ops_ulong
param_ops_ushort
param_set_bool
param_set_int
param_set_uint
pat_enabled
path_get
path_put
pci_aer_clear_nonfatal_status
pci_alloc_irq_vectors_affinity
pci_assign_unassigned_bus_resources
pcibios_resource_to_bus
pci_bus_read_config_dword
pci_bus_resource_n
pci_bus_type
pci_cfg_access_lock
pci_cfg_access_unlock
pci_choose_state
pci_clear_master
pci_clear_mwi
pci_d3cold_disable
pci_dev_driver
pci_dev_get
pci_device_is_present
pci_dev_present
pci_dev_put
pci_disable_device
pci_disable_link_state
pci_disable_msi
pci_disable_msix
pci_disable_pcie_error_reporting
pci_disable_rom
pci_disable_sriov
pcie_aspm_enabled
pcie_bandwidth_available
pcie_capability_clear_and_set_word
pcie_capability_read_dword
pcie_capability_read_word
pcie_capability_write_word
pcie_flr
pcie_get_mps
pcie_get_speed_cap
pcie_get_width_cap
pci_enable_atomic_ops_to_root
pci_enable_device
pci_enable_device_mem
pci_enable_msi
pci_enable_msix_range
pci_enable_pcie_error_reporting
pci_enable_rom
pci_enable_sriov
pci_enable_wake
pcie_print_link_status
pcie_relaxed_ordering_enabled
pcie_set_readrq
pci_find_capability
pci_find_ext_capability
pci_free_irq
pci_free_irq_vectors
pci_get_class
pci_get_device
pci_get_domain_bus_and_slot
pci_get_dsn
pci_get_slot
pci_ignore_hotplug
pci_intx
pci_iomap
pci_ioremap_bar
pci_iounmap
pci_irq_get_affinity
pci_irq_vector
pci_load_saved_state
pci_map_rom
pci_match_id
pcim_enable_device
pcim_iomap
pcim_iomap_regions
pcim_iomap_table
pcim_iounmap
pci_msix_vec_count
pci_num_vf
pci_pr3_present
pci_prepare_to_sleep
pci_read_config_byte
pci_read_config_dword
pci_read_config_word
pci_read_vpd
__pci_register_driver
pci_release_regions
pci_release_resource
pci_release_selected_regions
pci_request_irq
pci_request_regions
pci_request_selected_regions
pci_rescan_bus
pci_reset_bus
pci_resize_resource
pci_restore_msi_state
pci_restore_state
pci_save_state
pci_select_bars
pci_set_master
pci_set_mwi
pci_set_power_state
pci_sriov_configure_simple
pci_sriov_get_totalvfs
pci_sriov_set_totalvfs
pci_stop_and_remove_bus_device
pci_stop_and_remove_bus_device_locked
pci_store_saved_state
pci_try_set_mwi
pci_unmap_rom
pci_unregister_driver
pci_vfs_assigned
pci_vpd_find_info_keyword
pci_vpd_find_tag
pci_wait_for_pending_transaction
pci_wake_from_d3
pci_walk_bus
pci_write_config_byte
pci_write_config_dword
pci_write_config_word
pcix_set_mmrbc
PDE_DATA
__per_cpu_offset
percpu_ref_exit
percpu_ref_init
percpu_ref_kill_and_confirm
perf_event_update_userpage
perf_pmu_register
perf_pmu_unregister
perf_tp_event
perf_trace_buf_alloc
perf_trace_run_bpf_submit
pgprot_writecombine
phy_attach_direct
phy_attached_info
phy_connect
phy_connect_direct
phy_device_free
phy_device_register
phy_device_remove
phy_disconnect
phy_ethtool_ksettings_get
phy_ethtool_ksettings_set
phy_loopback
phy_mii_ioctl
phy_resume
phys_base
phy_set_asym_pause
phy_set_max_speed
physical_mask
phy_start
phy_start_aneg
phy_stop
phy_support_asym_pause
phy_suspend
phy_validate_pause
pid_task
pid_vnr
platform_bus_type
platform_device_register
platform_device_register_full
platform_device_unregister
__platform_driver_register
platform_driver_unregister
platform_get_irq
platform_get_resource
platform_get_resource_byname
pldmfw_flash_image
pldmfw_op_pci_match_record
pm_genpd_add_device
pm_genpd_init
pm_genpd_remove_device
pm_power_off
pm_runtime_allow
pm_runtime_autosuspend_expiration
__pm_runtime_disable
pm_runtime_enable
pm_runtime_forbid
__pm_runtime_idle
__pm_runtime_resume
pm_runtime_set_autosuspend_delay
__pm_runtime_set_status
__pm_runtime_suspend
__pm_runtime_use_autosuspend
pm_schedule_suspend
pm_suspend_global_flags
pm_vt_switch_required
pm_vt_switch_unregister
power_supply_is_system_supplied
prandom_bytes
prandom_seed
prandom_u32
__preempt_count
prepare_creds
prepare_to_wait
prepare_to_wait_event
prepare_to_wait_exclusive
print_hex_dump
printk
__printk_ratelimit
printk_timed_ratelimit
proc_create
proc_create_data
proc_dointvec
proc_dointvec_minmax
proc_mkdir
proc_mkdir_mode
proc_remove
proc_set_size
proc_symlink
__pskb_copy_fclone
pskb_expand_head
__pskb_pull_tail
___pskb_trim
ptp_clock_event
ptp_clock_index
ptp_clock_register
ptp_clock_unregister
ptp_find_pin
__put_cred
put_device
put_devmap_managed_page
put_disk
__put_net
__put_page
put_pid
__put_task_struct
put_unused_fd
__put_user_1
__put_user_2
__put_user_4
__put_user_8
pv_ops
qdisc_reset
qed_get_eth_ops
qed_put_eth_ops
queue_delayed_work_on
queued_read_lock_slowpath
queued_write_lock_slowpath
queue_work_on
radix_tree_delete
radix_tree_gang_lookup
radix_tree_gang_lookup_tag
radix_tree_insert
radix_tree_iter_delete
radix_tree_lookup
radix_tree_lookup_slot
radix_tree_next_chunk
radix_tree_preload
radix_tree_preloads
radix_tree_tagged
radix_tree_tag_set
raid_class_attach
raid_class_release
___ratelimit
raw_notifier_call_chain
raw_notifier_chain_register
raw_notifier_chain_unregister
_raw_read_lock
_raw_read_lock_bh
_raw_read_lock_irq
_raw_read_lock_irqsave
_raw_read_unlock_bh
_raw_read_unlock_irqrestore
_raw_spin_lock
_raw_spin_lock_bh
_raw_spin_lock_irq
_raw_spin_lock_irqsave
_raw_spin_trylock
_raw_spin_unlock_bh
_raw_spin_unlock_irqrestore
_raw_write_lock
_raw_write_lock_bh
_raw_write_lock_irq
_raw_write_lock_irqsave
_raw_write_unlock_bh
_raw_write_unlock_irqrestore
rb_erase
__rb_erase_color
rb_first
rb_first_postorder
__rb_insert_augmented
rb_insert_color
rb_next
rb_next_postorder
rb_replace_node
rcu_barrier
rcu_read_unlock_strict
rdma_accept
rdma_addr_cancel
rdma_addr_size
rdma_addr_size_in6
rdma_addr_size_kss
rdma_bind_addr
__rdma_block_iter_next
__rdma_block_iter_start
rdmacg_register_device
rdmacg_try_charge
rdmacg_uncharge
rdmacg_unregister_device
rdma_connect
rdma_consumer_reject_data
rdma_copy_ah_attr
rdma_copy_src_l2_addr
rdma_create_ah
__rdma_create_kernel_id
rdma_create_qp
rdma_create_user_ah
rdma_destroy_ah_attr
rdma_destroy_ah_user
rdma_destroy_id
rdma_destroy_qp
rdma_disconnect
rdma_event_msg
rdma_find_gid
rdma_find_gid_by_port
rdma_get_gid_attr
rdma_get_service_id
rdma_init_qp_attr
rdma_is_zero_gid
rdma_join_multicast
rdma_leave_multicast
rdma_listen
rdma_move_ah_attr
rdma_nl_multicast
rdma_nl_put_driver_string
rdma_nl_put_driver_u32
rdma_nl_put_driver_u64
rdma_nl_register
rdma_nl_stat_hwcounter_entry
rdma_nl_unicast
rdma_nl_unicast_wait
rdma_nl_unregister
rdma_node_get_transport
rdma_notify
rdma_port_get_link_layer
rdma_put_gid_attr
rdma_query_ah
rdma_query_gid
rdma_read_gid_hw_context
rdma_read_gid_l2_fields
rdma_read_gids
rdma_reject
rdma_reject_msg
rdma_resolve_addr
rdma_resolve_ip
rdma_resolve_route
rdma_restrack_add
rdma_restrack_del
rdma_roce_rescan_device
rdma_rw_ctx_destroy
rdma_rw_ctx_init
rdma_rw_ctx_post
rdma_rw_ctx_wrs
rdma_set_afonly
rdma_set_cq_moderation
rdma_set_ib_path
rdma_set_reuseaddr
rdma_set_service_type
rdma_translate_ip
rdma_user_mmap_entry_get_pgoff
rdma_user_mmap_entry_insert_range
rdma_user_mmap_entry_put
rdma_user_mmap_entry_remove
rdma_user_mmap_io
read_cache_pages
recalc_sigpending
refcount_dec_and_mutex_lock
refcount_dec_if_one
refcount_warn_saturate
register_acpi_hed_notifier
register_acpi_notifier
register_blkdev
__register_chrdev
register_chrdev_region
register_console
register_die_notifier
registered_fb
register_fib_notifier
register_inet6addr_notifier
register_inetaddr_notifier
register_ip_vs_scheduler
register_kprobe
register_lsm_notifier
register_module_notifier
register_netdev
register_netdevice
register_netdevice_notifier
register_netdevice_notifier_dev_net
register_netdevice_notifier_net
register_netevent_notifier
register_net_sysctl
__register_nmi_handler
register_oom_notifier
register_pernet_device
register_pernet_subsys
register_reboot_notifier
register_sysctl_table
regmap_read
regmap_write
release_firmware
release_pages
__release_region
release_sock
remap_pfn_range
remap_vmalloc_range
remove_conflicting_framebuffers
remove_conflicting_pci_framebuffers
remove_proc_entry
remove_wait_queue
request_firmware
request_firmware_direct
request_firmware_nowait
__request_module
__request_region
request_threaded_irq
reservation_ww_class
reset_devices
revalidate_disk_size
revert_creds
rhashtable_destroy
rhashtable_free_and_destroy
rhashtable_init
rhashtable_insert_slow
rhashtable_walk_enter
rhashtable_walk_exit
rhashtable_walk_next
rhashtable_walk_start_check
rhashtable_walk_stop
rhltable_init
__rht_bucket_nested
rht_bucket_nested
rht_bucket_nested_insert
ring_buffer_event_data
roce_gid_type_mask_support
round_jiffies
round_jiffies_relative
round_jiffies_up
rps_cpu_mask
rps_may_expire_flow
rps_sock_flow_table
rsa_parse_priv_key
rsa_parse_pub_key
rt6_lookup
rtc_time64_to_tm
rtnl_configure_link
rtnl_create_link
rtnl_is_locked
rtnl_link_get_net
rtnl_link_register
rtnl_link_unregister
rtnl_lock
rtnl_nla_parse_ifla
rtnl_trylock
rtnl_unlock
sas_alloc_slow_task
sas_attach_transport
sas_bios_param
sas_change_queue_depth
sas_disable_tlr
sas_domain_attach_transport
sas_drain_work
sas_eh_device_reset_handler
sas_eh_target_reset_handler
sas_enable_tlr
sas_end_device_alloc
sas_expander_alloc
sas_free_task
sas_get_local_phy
sas_ioctl
sas_is_tlr_enabled
sas_phy_add
sas_phy_alloc
sas_phy_delete
sas_phy_free
sas_phy_reset
sas_port_add
sas_port_add_phy
sas_port_alloc_num
sas_port_delete
sas_port_delete_phy
sas_port_free
sas_prep_resume_ha
sas_queuecommand
sas_read_port_mode_page
sas_register_ha
sas_release_transport
sas_remove_host
sas_resume_ha
sas_rphy_add
sas_slave_configure
sas_ssp_task_response
sas_suspend_ha
sas_target_alloc
sas_target_destroy
sas_unregister_ha
sbitmap_queue_clear
__sbitmap_queue_get
scatterwalk_map_and_copy
sched_clock
sched_clock_cpu
sched_set_fifo
sched_set_fifo_low
sched_set_normal
schedule
schedule_hrtimeout
schedule_hrtimeout_range
schedule_timeout
schedule_timeout_interruptible
schedule_timeout_uninterruptible
__SCK__tp_func_dma_fence_emit
__SCK__tp_func_nvme_sq
__SCK__tp_func_xdp_exception
scmd_printk
scnprintf
screen_info
scsi_add_device
scsi_add_host_with_dma
scsi_block_requests
scsi_build_sense_buffer
scsi_change_queue_depth
scsi_command_normalize_sense
scsi_device_get
scsi_device_lookup
scsi_device_put
scsi_device_set_state
scsi_device_type
scsi_dma_map
scsi_dma_unmap
__scsi_execute
scsi_get_vpd_page
scsi_host_alloc
scsi_host_busy
scsi_host_get
scsi_host_lookup
scsi_host_put
scsi_internal_device_block_nowait
scsi_internal_device_unblock_nowait
scsi_is_fc_rport
scsi_is_host_device
scsi_is_sdev_device
__scsi_iterate_devices
scsilun_to_int
scsi_normalize_sense
scsi_print_command
scsi_queue_work
scsi_register_driver
scsi_remove_device
scsi_remove_host
scsi_remove_target
scsi_rescan_device
scsi_sanitize_inquiry_string
scsi_scan_host
scsi_sense_key_string
scsi_track_queue_full
scsi_unblock_requests
__SCT__tp_func_dma_fence_emit
__SCT__tp_func_nvme_sq
__SCT__tp_func_xdp_exception
sdev_prefix_printk
secpath_set
secure_tcp_seq
secure_tcpv6_seq
security_d_instantiate
security_ib_alloc_security
security_ib_endport_manage_subnet
security_ib_free_security
security_ib_pkey_access
security_release_secctx
security_secid_to_secctx
security_tun_dev_alloc_security
security_tun_dev_attach
security_tun_dev_attach_queue
security_tun_dev_create
security_tun_dev_free_security
security_tun_dev_open
send_sig
send_sig_info
seq_list_next
seq_list_start
seq_lseek
seq_open
seq_printf
seq_putc
seq_put_decimal_ull
seq_puts
seq_read
seq_release
seq_write
set_cpus_allowed_ptr
set_current_groups
set_device_ro
set_disk_ro
set_freezable
set_memory_uc
set_memory_wb
set_memory_wc
set_normalized_timespec64
set_page_dirty
set_page_dirty_lock
set_user_nice
sg_alloc_table
sg_alloc_table_chained
sg_alloc_table_from_pages
sg_copy_from_buffer
sg_copy_to_buffer
sg_free_table
sg_free_table_chained
sg_init_table
sgl_alloc
sgl_free
sg_miter_next
sg_miter_start
sg_miter_stop
sg_nents
sg_next
__sg_page_iter_next
__sg_page_iter_start
sg_pcopy_from_buffer
sg_pcopy_to_buffer
sg_zero_buffer
show_class_attr_string
sigprocmask
si_meminfo
simple_attr_open
simple_attr_read
simple_attr_release
simple_attr_write
simple_open
simple_read_from_buffer
simple_strtol
simple_strtoul
simple_strtoull
simple_write_to_buffer
single_open
single_release
sk_alloc
sk_attach_filter
skb_add_rx_frag
__skb_checksum
skb_checksum
__skb_checksum_complete
skb_checksum_help
skb_clone
skb_clone_tx_timestamp
skb_copy
skb_copy_bits
skb_copy_datagram_from_iter
skb_copy_datagram_iter
skb_copy_expand
skb_copy_ubufs
skb_dequeue
skb_ensure_writable
__skb_ext_del
__skb_ext_put
__skb_flow_dissect
__skb_get_hash
__skb_gso_segment
skb_gso_validate_mac_len
__skb_pad
skb_partial_csum_set
skb_pull
skb_pull_rcsum
skb_push
skb_put
skb_queue_purge
skb_queue_tail
skb_realloc_headroom
__skb_recv_datagram
skb_scrub_packet
skb_set_owner_w
skb_store_bits
skb_trim
skb_try_coalesce
skb_tstamp_tx
skb_tx_error
skb_vlan_pop
skb_vlan_push
__skb_warn_lro_forwarding
skb_zerocopy
skb_zerocopy_headlen
sk_detach_filter
sk_filter_trim_cap
sk_free
skip_spaces
sme_me_mask
smp_call_function_many
smp_call_function_single
snprintf
sn_rtc_cycles_per_second
sock_alloc_send_pskb
sock_create
sock_create_kern
sock_edemux
sockfd_lookup
sock_init_data
sock_recv_errqueue
sock_release
sock_zerocopy_callback
softnet_data
sort
sprintf
sprint_symbol
__srcu_read_lock
__srcu_read_unlock
sscanf
__stack_chk_fail
stack_trace_print
stack_trace_save
starget_for_each_device
strcasecmp
strcat
strchr
strcmp
strcpy
strcspn
stream_open
strim
strlcat
strlcpy
strlen
strncasecmp
strncat
strncmp
strncpy
strncpy_from_user
strnlen
strnlen_user
strnstr
strpbrk
strrchr
strscpy
strscpy_pad
strsep
strspn
strstr
submit_bio
submit_bio_noacct
__sw_hweight32
__sw_hweight64
swiotlb_nr_tbl
__symbol_get
__symbol_put
sync_file_create
synchronize_irq
synchronize_net
synchronize_rcu
synchronize_srcu
sysfs_add_file_to_group
sysfs_create_bin_file
sysfs_create_file_ns
sysfs_create_files
sysfs_create_group
sysfs_create_groups
sysfs_create_link
sysfs_format_mac
sysfs_remove_bin_file
sysfs_remove_file_from_group
sysfs_remove_file_ns
sysfs_remove_files
sysfs_remove_group
sysfs_remove_groups
sysfs_remove_link
sysfs_streq
system_highpri_wq
system_state
system_unbound_wq
system_wq
sys_tz
t10_pi_type1_crc
t10_pi_type1_ip
t10_pi_type3_crc
t10_pi_type3_ip
tap_get_socket
task_active_pid_ns
tasklet_init
tasklet_kill
__tasklet_schedule
tasklet_setup
__task_pid_nr_ns
tcp_gro_complete
tcp_hashinfo
this_cpu_off
time64_to_tm
timecounter_cyc2time
timecounter_init
timecounter_read
tls_get_record
tls_validate_xmit_skb
to_drm_sched_fence
_totalram_pages
trace_define_field
trace_event_buffer_commit
trace_event_buffer_lock_reserve
trace_event_buffer_reserve
trace_event_ignore_this_pid
trace_event_raw_init
trace_event_reg
trace_handle_return
__tracepoint_dma_fence_emit
__tracepoint_nvme_sq
__tracepoint_xdp_exception
trace_print_array_seq
trace_print_flags_seq
trace_print_symbols_seq
trace_raw_output_prep
trace_seq_printf
trace_seq_putc
try_module_get
try_wait_for_completion
tsc_khz
ttm_bo_bulk_move_lru_tail
ttm_bo_device_init
ttm_bo_device_release
ttm_bo_dma_acc_size
ttm_bo_eviction_valuable
ttm_bo_evict_mm
ttm_bo_glob
ttm_bo_init
ttm_bo_init_reserved
ttm_bo_kmap
ttm_bo_kunmap
ttm_bo_lock_delayed_workqueue
ttm_bo_mem_space
ttm_bo_mmap
ttm_bo_mmap_obj
ttm_bo_move_accel_cleanup
ttm_bo_move_memcpy
ttm_bo_move_to_lru_tail
ttm_bo_move_ttm
ttm_bo_put
ttm_bo_unlock_delayed_workqueue
ttm_bo_validate
ttm_bo_vm_access
ttm_bo_vm_close
ttm_bo_vm_fault_reserved
ttm_bo_vm_open
ttm_bo_vm_reserve
ttm_bo_wait
ttm_dma_page_alloc_debugfs
ttm_dma_populate
ttm_dma_tt_fini
ttm_dma_tt_init
ttm_dma_unpopulate
ttm_eu_backoff_reservation
ttm_eu_fence_buffer_objects
ttm_eu_reserve_buffers
ttm_page_alloc_debugfs
ttm_pool_populate
ttm_pool_unpopulate
ttm_populate_and_map_pages
ttm_range_man_fini
ttm_range_man_init
ttm_resource_free
ttm_resource_manager_force_list_clean
ttm_resource_manager_init
ttm_sg_tt_init
ttm_tt_destroy_common
ttm_tt_fini
ttm_tt_init
ttm_tt_populate
ttm_tt_set_placement_caching
ttm_unmap_and_unpopulate_pages
__udelay
udp4_hwcsum
udp4_lib_lookup_skb
udp6_lib_lookup_skb
udp_encap_enable
udp_gro_complete
udp_tunnel_nic_ops
uio_event_notify
__uio_register_device
uio_unregister_device
unlock_page
unlock_page_memcg
unmap_mapping_range
unregister_acpi_hed_notifier
unregister_acpi_notifier
unregister_blkdev
__unregister_chrdev
unregister_chrdev_region
unregister_console
unregister_die_notifier
unregister_fib_notifier
unregister_inet6addr_notifier
unregister_inetaddr_notifier
unregister_ip_vs_scheduler
unregister_kprobe
unregister_lsm_notifier
unregister_module_notifier
unregister_netdev
unregister_netdevice_many
unregister_netdevice_notifier
unregister_netdevice_notifier_dev_net
unregister_netdevice_notifier_net
unregister_netdevice_queue
unregister_netevent_notifier
unregister_net_sysctl_table
unregister_nmi_handler
unregister_oom_notifier
unregister_pernet_device
unregister_pernet_subsys
unregister_reboot_notifier
unregister_sysctl_table
up
up_read
up_write
__usecs_to_jiffies
usleep_range
uuid_gen
uuid_null
uuid_parse
__uv_cpu_info
_uverbs_alloc
uverbs_copy_to
uverbs_copy_to_struct_or_zero
uverbs_destroy_def_handler
uverbs_fd_class
uverbs_finalize_uobj_create
_uverbs_get_const
uverbs_get_flags32
uverbs_get_flags64
uverbs_idr_class
uverbs_uobject_fd_release
uverbs_uobject_put
__uv_hub_info_list
uv_possible_blades
uv_setup_irq
uv_teardown_irq
vfio_add_group_dev
vfio_del_group_dev
vfio_info_add_capability
vfio_info_cap_shift
vfio_pin_pages
vfio_register_iommu_driver
vfio_register_notifier
vfio_set_irqs_validate_and_prepare
vfio_unpin_pages
vfio_unregister_iommu_driver
vfio_unregister_notifier
vfree
vfs_fallocate
vfs_fsync
vfs_getattr
vfs_statfs
vga_client_register
vgacon_text_force
vga_remove_vgacon
vga_set_legacy_decoding
vga_switcheroo_client_fb_set
vga_switcheroo_client_probe_defer
vga_switcheroo_fini_domain_pm_ops
vga_switcheroo_handler_flags
vga_switcheroo_init_domain_pm_ops
vga_switcheroo_lock_ddc
vga_switcheroo_process_delayed_switch
vga_switcheroo_register_client
vga_switcheroo_register_handler
vga_switcheroo_unlock_ddc
vga_switcheroo_unregister_client
vga_switcheroo_unregister_handler
__virt_addr_valid
vlan_dev_real_dev
vlan_dev_vlan_id
vlan_dev_vlan_proto
__vlan_find_dev_deep_rcu
__vmalloc
vmalloc
vmalloc_base
vmalloc_node
vmalloc_to_page
vmalloc_user
vmap
vmemmap_base
vm_get_page_prot
vm_insert_page
vm_insert_pfn_prot
vm_mmap
vm_munmap
vm_zone_stat
vprintk
vscnprintf
vsnprintf
vsprintf
vunmap
vzalloc
vzalloc_node
wait_for_completion
wait_for_completion_interruptible
wait_for_completion_interruptible_timeout
wait_for_completion_io_timeout
wait_for_completion_killable
wait_for_completion_timeout
wait_on_page_bit
__wake_up
wake_up_bit
__wake_up_locked
wake_up_process
__wake_up_sync_key
__warn_printk
wmi_evaluate_method
wmi_has_guid
work_busy
write_cache_pages
ww_mutex_lock
ww_mutex_lock_interruptible
ww_mutex_unlock
x86_cpu_to_apicid
__x86_indirect_thunk_r10
__x86_indirect_thunk_r11
__x86_indirect_thunk_r12
__x86_indirect_thunk_r13
__x86_indirect_thunk_r14
__x86_indirect_thunk_r15
__x86_indirect_thunk_r8
__x86_indirect_thunk_r9
__x86_indirect_thunk_rax
__x86_indirect_thunk_rbp
__x86_indirect_thunk_rbx
__x86_indirect_thunk_rcx
__x86_indirect_thunk_rdi
__x86_indirect_thunk_rdx
__x86_indirect_thunk_rsi
__xa_alloc
__xa_alloc_cyclic
__xa_cmpxchg
xa_destroy
__xa_erase
xa_erase
xa_find
xa_find_after
__xa_insert
xa_load
__xa_store
xa_store
xdp_convert_zc_to_xdp_frame
xdp_do_flush
xdp_do_redirect
xdp_return_frame
xdp_return_frame_rx_napi
xdp_rxq_info_is_reg
xdp_rxq_info_reg
xdp_rxq_info_reg_mem_model
xdp_rxq_info_unreg
xdp_rxq_info_unreg_mem_model
xdp_rxq_info_unused
xdp_warn
xfrm_aead_get_byname
xfrm_replay_seqhi
xp_alloc
xp_can_alloc
xp_dma_map
xp_dma_sync_for_cpu_slow
xp_dma_sync_for_device_slow
xp_dma_unmap
xp_free
xp_raw_get_data
xp_raw_get_dma
xp_set_rxq_info
xsk_clear_rx_need_wakeup
xsk_clear_tx_need_wakeup
xsk_get_pool_from_qid
xsk_set_rx_need_wakeup
xsk_set_tx_need_wakeup
xsk_tx_completed
xsk_tx_peek_desc
xsk_tx_release
xsk_uses_need_wakeup
xz_dec_end
xz_dec_init
xz_dec_run
yield
zalloc_cpumask_var
zap_vma_ptes
zerocopy_sg_from_iter
zgid
zlib_inflate
zlib_inflateEnd
zlib_inflateInit2
zlib_inflate_workspacesize

[PATCH openEuler-1.0-LTS] mm: fix missing reclaim of low-reliable page cache
by Yang Yingliang 04 Mar '22
From: Chen Wandun <chenwandun(a)huawei.com>
hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SK3S
CVE: NA
--------------------------------
Low-reliable memory is located in ZONE_MOVABLE, so the gfp
mask should contain __GFP_HIGHMEM and __GFP_MOVABLE when
reclaiming memory.
Signed-off-by: Chen Wandun <chenwandun(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
mm/vmscan.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 994c116306aa2..b6afafdef5075 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4042,8 +4042,8 @@ unsigned long shrink_page_cache(gfp_t mask)
{
unsigned long nr_pages;
- /* We reclaim the highmem zone too, it is useful for 32bit arch */
- nr_pages = __shrink_page_cache(mask | __GFP_HIGHMEM);
+ /* reclaim from movable zone */
+ nr_pages = __shrink_page_cache(mask | __GFP_HIGHMEM | __GFP_MOVABLE);
return nr_pages;
}
--
2.25.1

[PATCH openEuler-1.0-LTS 1/5] mm: Fix reliable_debug in proc not consistent with boot parameter problem
by Yang Yingliang 04 Mar '22
From: Ma Wupeng <mawupeng1(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SK3S
CVE: NA
--------------------------------
Now reliable_debug will be consistent with the boot parameter. If
reliable_debug in the boot parameter is set to "F", reliable_debug in proc
will read 13, which means only fallback is disabled.
Fixes: 851a3ff0b4de ("mm: Introduce proc interface to control memory reliable features")
Signed-off-by: Ma Wupeng <mawupeng1(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
mm/mem_reliable.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/mm/mem_reliable.c b/mm/mem_reliable.c
index ae4e9609f43cf..7ce97184e08ec 100644
--- a/mm/mem_reliable.c
+++ b/mm/mem_reliable.c
@@ -275,10 +275,8 @@ static void mem_reliable_parse_ctrl_bits(unsigned long ctrl_bits)
for (i = MEM_RELIABLE_FALLBACK; i < MEM_RELIABLE_MAX; i++) {
status = !!test_bit(i, &ctrl_bits);
- if (mem_reliable_ctrl_bit_is_enabled(i) ^ status) {
- mem_reliable_ctrl_bit_set(i, status);
+ if (mem_reliable_ctrl_bit_is_enabled(i) ^ status)
mem_reliable_feature_set(i, status);
- }
}
}
@@ -484,6 +482,8 @@ static int __init reliable_sysctl_init(void)
return 0;
}
late_initcall(reliable_sysctl_init);
+#else
+static void mem_reliable_ctrl_bit_set(int idx, bool enable) {}
#endif
static void mem_reliable_feature_set(int idx, bool enable)
@@ -508,6 +508,7 @@ static void mem_reliable_feature_set(int idx, bool enable)
return;
}
+ mem_reliable_ctrl_bit_set(idx, enable);
pr_info("%s is %s\n", str, enable ? "enabled" : "disabled");
}
--
2.25.1

[PATCH OLK-5.10] xhci: Fix a logic issue when displaying Zhaoxin xHCI root hub speed
by LeoLiuoc 03 Mar '22
Fix a logic issue when displaying the Zhaoxin xHCI root hub speed.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
drivers/usb/host/xhci.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index 7f1e5296d..71dcc1ba7 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -5261,10 +5261,10 @@ int xhci_gen_setup(struct usb_hcd *hcd,
xhci_get_quirks_t get_quirks)
if (XHCI_EXT_PORT_PSIV(xhci->port_caps[j].psi[i]) >= 5)
minor_rev = 1;
}
- if (minor_rev != 1) {
- hcd->speed = HCD_USB3;
- hcd->self.root_hub->speed = USB_SPEED_SUPER;
- }
+ }
+ if (minor_rev != 1) {
+ hcd->speed = HCD_USB3;
+ hcd->self.root_hub->speed = USB_SPEED_SUPER;
}
}
--
2.20.1
[PATCH openEuler-5.10 1/4] rseq, ptrace: Add PTRACE_GET_RSEQ_CONFIGURATION request
by Zheng Zengkai 03 Mar '22
From: Piotr Figiel <figiel(a)google.com>
mainline inclusion
from mainline-5.13-rc1
commit 90f093fa8ea48e5d991332cee160b761423d55c1
category: feature
feature: Userspace percpu
bugzilla: https://gitee.com/openeuler/kernel/issues/I4W2BQ
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
For userspace checkpoint and restore (C/R), a way of getting the process
state containing the RSEQ configuration is needed.
There are two ways this information is going to be used:
- to re-enable RSEQ for threads which had it enabled before C/R
- to detect if a thread was in a critical section during C/R
Since C/R preserves TLS memory and addresses, the RSEQ ABI will be restored
using the address registered before C/R.
Detecting whether a thread is in a critical section during C/R is needed
to enforce the RSEQ abort behavior during C/R; attaching with ptrace()
before the registers are dumped does not itself cause an RSEQ abort.
Restoring the instruction pointer within the critical section is
problematic because rseq_cs may get cleared before control is passed
to the migrated application code, leading to RSEQ invariants not being
preserved. C/R code will use the RSEQ ABI address to find the abort
handler to which the instruction pointer needs to be set.
To achieve the above goals, expose the RSEQ ABI address and the signature
value with the new ptrace request PTRACE_GET_RSEQ_CONFIGURATION.
This new ptrace request can also be used by debuggers so they are aware
of stops within restartable sequences in progress.
Signed-off-by: Piotr Figiel <figiel(a)google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Reviewed-by: Michal Miroslaw <emmir(a)google.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers(a)efficios.com>
Acked-by: Oleg Nesterov <oleg(a)redhat.com>
Link: https://lkml.kernel.org/r/20210226135156.1081606-1-figiel@google.com
Signed-off-by: Yunfeng Ye <yeyunfeng(a)huawei.com>
Reviewed-by: Chao Liu <liuchao173(a)huawei.com>
Reviewed-by: Kuohai Xu <xukuohai(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
include/uapi/linux/ptrace.h | 10 ++++++++++
kernel/ptrace.c | 25 +++++++++++++++++++++++++
2 files changed, 35 insertions(+)
diff --git a/include/uapi/linux/ptrace.h b/include/uapi/linux/ptrace.h
index 83ee45fa634b..3747bf816f9a 100644
--- a/include/uapi/linux/ptrace.h
+++ b/include/uapi/linux/ptrace.h
@@ -102,6 +102,16 @@ struct ptrace_syscall_info {
};
};
+#define PTRACE_GET_RSEQ_CONFIGURATION 0x420f
+
+struct ptrace_rseq_configuration {
+ __u64 rseq_abi_pointer;
+ __u32 rseq_abi_size;
+ __u32 signature;
+ __u32 flags;
+ __u32 pad;
+};
+
/*
* These values are stored in task->ptrace_message
* by tracehook_report_syscall_* to describe the current syscall-stop.
diff --git a/kernel/ptrace.c b/kernel/ptrace.c
index 0087ce50d99e..e3210358bcd2 100644
--- a/kernel/ptrace.c
+++ b/kernel/ptrace.c
@@ -31,6 +31,7 @@
#include <linux/cn_proc.h>
#include <linux/compat.h>
#include <linux/sched/signal.h>
+#include <linux/minmax.h>
#include <asm/syscall.h> /* for syscall_get_* */
@@ -795,6 +796,24 @@ static int ptrace_peek_siginfo(struct task_struct *child,
return ret;
}
+#ifdef CONFIG_RSEQ
+static long ptrace_get_rseq_configuration(struct task_struct *task,
+ unsigned long size, void __user *data)
+{
+ struct ptrace_rseq_configuration conf = {
+ .rseq_abi_pointer = (u64)(uintptr_t)task->rseq,
+ .rseq_abi_size = sizeof(*task->rseq),
+ .signature = task->rseq_sig,
+ .flags = 0,
+ };
+
+ size = min_t(unsigned long, size, sizeof(conf));
+ if (copy_to_user(data, &conf, size))
+ return -EFAULT;
+ return sizeof(conf);
+}
+#endif
+
#ifdef PTRACE_SINGLESTEP
#define is_singlestep(request) ((request) == PTRACE_SINGLESTEP)
#else
@@ -1243,6 +1262,12 @@ int ptrace_request(struct task_struct *child, long request,
ret = seccomp_get_metadata(child, addr, datavp);
break;
+#ifdef CONFIG_RSEQ
+ case PTRACE_GET_RSEQ_CONFIGURATION:
+ ret = ptrace_get_rseq_configuration(child, addr, datavp);
+ break;
+#endif
+
default:
break;
}
--
2.20.1
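Note the `min_t()` in ptrace_get_rseq_configuration(): the request copies at most sizeof(struct ptrace_rseq_configuration) bytes but always returns the full structure size, so a tracer can probe with a short buffer and detect truncation. The struct layout below is copied from the uapi hunk above; the helper is a userspace mimic of the kernel function, with a plain buffer standing in for the user pointer:

```c
#include <stdint.h>
#include <string.h>

struct ptrace_rseq_configuration {
	uint64_t rseq_abi_pointer;
	uint32_t rseq_abi_size;
	uint32_t signature;
	uint32_t flags;
	uint32_t pad;
};

/* Mimics the kernel helper: copy min(size, sizeof(conf)) bytes and
 * report the full structure size so callers can detect truncation. */
static long get_rseq_configuration(const struct ptrace_rseq_configuration *conf,
				   unsigned long size, void *data)
{
	if (size > sizeof(*conf))
		size = sizeof(*conf);
	memcpy(data, conf, size);
	return sizeof(*conf);
}
```

A caller passing an 8-byte buffer still gets back 24, the full size, and only the first 8 bytes are written.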
03 Mar '22
euler inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4VPIB
CVE: NA
-------------------------------------------------
If this config is enabled, block mappings are not used: the linear address
range is mapped with 4 KB pages. As a result, the TLB miss rate is high,
affecting performance.
For example, tested with the libMicro benchmark:
enable disable Improve
memsetP2_10m 3540.37760 2129.715200 66.2%
memset_4k 0.38400 0.204800 87.5%
mprot_twz8k 7.16800 3.072000 133.3%
unmap_ra8k 7.93600 4.096000 93.8%
unmap_wa128k 68.86400 33.024000 108.5%
Even if this option is set to 'n', the enhancement can still be turned on
at boot with rodata=full.
Signed-off-by: Chao Liu <liuchao173(a)huawei.com>
---
arch/arm64/configs/openeuler_defconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index b476b105ee10..8ded00ca3364 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -433,7 +433,7 @@ CONFIG_ARM64_CPU_PARK=y
# CONFIG_XEN is not set
CONFIG_FORCE_MAX_ZONEORDER=11
CONFIG_UNMAP_KERNEL_AT_EL0=y
-CONFIG_RODATA_FULL_DEFAULT_ENABLED=y
+# CONFIG_RODATA_FULL_DEFAULT_ENABLED is not set
CONFIG_ARM64_PMEM_RESERVE=y
CONFIG_ARM64_PMEM_LEGACY=m
# CONFIG_ARM64_SW_TTBR0_PAN is not set
--
2.23.0
[PATCH openEuler-5.10 1/9] kfence: Add a module parameter to adjust kfence objects
by Zheng Zengkai 02 Mar '22
From: Peng Liu <liupeng256(a)huawei.com>
hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4V388
CVE: NA
--------------------------------
KFENCE is designed to be enabled in production kernels, but it can
also be useful in some debug situations. On machines with limited
memory and CPU resources, KASAN is really hard to run; fortunately,
KFENCE can be a suitable candidate. For KFENCE running on a single
machine, the probability of discovering existing bugs increases with
the number of KFENCE objects, but more objects cost more memory.
To balance bug-discovery probability against memory cost, the number
of KFENCE objects needs to be adjustable per machine for a single
compiled kernel Image. Adding a module parameter to adjust the number
of KFENCE objects lets one kernel Image serve machines of different
sizes.
In short, the following reasons motivate us to add this parameter:
1) In some debug situations, this makes KFENCE flexible.
2) For production machines with different memory and CPU sizes,
this reduces the kernel-Image-version burden.
The main change is simply using kfence_num_objects in place of
CONFIG_KFENCE_NUM_OBJECTS so the value can be configured dynamically.
For compatibility, kfence_metadata and alloc_covered are allocated
with memblock_alloc. Since "cat /sys/kernel/debug/kfence/objects"
reads kfence_metadata, the initialization of this debugfs file must
check whether kfence_metadata was successfully allocated.
Unfortunately, dynamic allocation requires the KFENCE pool size to
be a configurable variable, which adds extra instructions (e.g. loads)
to the fast path of memory allocation. As a result, performance
degrades. To avoid this on production machines, an ugly macro is
used to isolate the changes.
Signed-off-by: Peng Liu <liupeng256(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
Documentation/dev-tools/kfence.rst | 8 +-
include/linux/kfence.h | 9 +-
lib/Kconfig.kfence | 10 +++
mm/kfence/core.c | 138 ++++++++++++++++++++++++++---
mm/kfence/kfence.h | 6 +-
mm/kfence/kfence_test.c | 2 +-
6 files changed, 155 insertions(+), 18 deletions(-)
diff --git a/Documentation/dev-tools/kfence.rst b/Documentation/dev-tools/kfence.rst
index ac6b89d1a8c3..5d194615aed0 100644
--- a/Documentation/dev-tools/kfence.rst
+++ b/Documentation/dev-tools/kfence.rst
@@ -41,13 +41,19 @@ guarded by KFENCE. The default is configurable via the Kconfig option
``CONFIG_KFENCE_SAMPLE_INTERVAL``. Setting ``kfence.sample_interval=0``
disables KFENCE.
-The KFENCE memory pool is of fixed size, and if the pool is exhausted, no
+If ``CONFIG_KFENCE_DYNAMIC_OBJECTS`` is disabled,
+the KFENCE memory pool is of fixed size, and if the pool is exhausted, no
further KFENCE allocations occur. With ``CONFIG_KFENCE_NUM_OBJECTS`` (default
255), the number of available guarded objects can be controlled. Each object
requires 2 pages, one for the object itself and the other one used as a guard
page; object pages are interleaved with guard pages, and every object page is
therefore surrounded by two guard pages.
+If ``CONFIG_KFENCE_DYNAMIC_OBJECTS`` is enabled,
+the KFENCE memory pool size could be set via the kernel boot parameter
+``kfence.num_objects``. Note, the performance will degrade due to additional
+instructions(eg, load) added to the fast path of the memory allocation.
+
The total memory dedicated to the KFENCE memory pool can be computed as::
( #objects + 1 ) * 2 * PAGE_SIZE
diff --git a/include/linux/kfence.h b/include/linux/kfence.h
index 4b5e3679a72c..3ea58c70d9c7 100644
--- a/include/linux/kfence.h
+++ b/include/linux/kfence.h
@@ -17,12 +17,19 @@
#include <linux/atomic.h>
#include <linux/static_key.h>
+#ifdef CONFIG_KFENCE_DYNAMIC_OBJECTS
+extern unsigned long kfence_num_objects;
+#define KFENCE_NR_OBJECTS kfence_num_objects
+#else
+#define KFENCE_NR_OBJECTS CONFIG_KFENCE_NUM_OBJECTS
+#endif
+
/*
* We allocate an even number of pages, as it simplifies calculations to map
* address to metadata indices; effectively, the very first page serves as an
* extended guard page, but otherwise has no special purpose.
*/
-#define KFENCE_POOL_SIZE ((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 * PAGE_SIZE)
+#define KFENCE_POOL_SIZE ((KFENCE_NR_OBJECTS + 1) * 2 * PAGE_SIZE)
extern char *__kfence_pool;
DECLARE_STATIC_KEY_FALSE(kfence_allocation_key);
diff --git a/lib/Kconfig.kfence b/lib/Kconfig.kfence
index 912f252a41fc..f7adceb4a4cf 100644
--- a/lib/Kconfig.kfence
+++ b/lib/Kconfig.kfence
@@ -45,6 +45,16 @@ config KFENCE_NUM_OBJECTS
pages are required; with one containing the object and two adjacent
ones used as guard pages.
+config KFENCE_DYNAMIC_OBJECTS
+ bool "Support dynamic configuration of the number of guarded objects"
+ default n
+ help
+ Enable dynamic configuration of the number of KFENCE guarded objects.
+ If this config is enabled, the number of KFENCE guarded objects could
+ be overridden via boot parameter "kfence.num_objects". Note that the
+ performance will degrade due to additional instructions(eg, load)
+ added to the fast path of the memory allocation.
+
config KFENCE_STATIC_KEYS
bool "Use static keys to set up allocations" if EXPERT
depends on JUMP_LABEL
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index a19154a8d196..0249af5f8244 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -93,13 +93,73 @@ module_param_named(skip_covered_thresh, kfence_skip_covered_thresh, ulong, 0644)
char *__kfence_pool __ro_after_init;
EXPORT_SYMBOL(__kfence_pool); /* Export for test modules. */
+#ifdef CONFIG_KFENCE_DYNAMIC_OBJECTS
+/*
+ * The number of kfence objects will affect performance and bug detection
+ * accuracy. The initial value of this global parameter is determined by
+ * compiling settings.
+ */
+unsigned long kfence_num_objects = CONFIG_KFENCE_NUM_OBJECTS;
+EXPORT_SYMBOL(kfence_num_objects); /* Export for test modules. */
+
+#define MIN_KFENCE_OBJECTS 1
+#define MAX_KFENCE_OBJECTS 65535
+
+static int param_set_num_objects(const char *val, const struct kernel_param *kp)
+{
+ unsigned long num;
+
+ if (system_state != SYSTEM_BOOTING)
+ return -EINVAL; /* Cannot adjust KFENCE objects number on-the-fly. */
+
+ if (kstrtoul(val, 0, &num) < 0)
+ return -EINVAL;
+
+ if (num < MIN_KFENCE_OBJECTS || num > MAX_KFENCE_OBJECTS) {
+ pr_warn("kfence_num_objects = %lu is not in valid range [%d, %d]\n",
+ num, MIN_KFENCE_OBJECTS, MAX_KFENCE_OBJECTS);
+ return -EINVAL;
+ }
+
+ *((unsigned long *)kp->arg) = num;
+ return 0;
+}
+
+static int param_get_num_objects(char *buffer, const struct kernel_param *kp)
+{
+ if (!READ_ONCE(kfence_enabled))
+ return sprintf(buffer, "0\n");
+
+ return param_get_ulong(buffer, kp);
+}
+
+static const struct kernel_param_ops num_objects_param_ops = {
+ .set = param_set_num_objects,
+ .get = param_get_num_objects,
+};
+module_param_cb(num_objects, &num_objects_param_ops, &kfence_num_objects, 0600);
+#endif
+
/*
* Per-object metadata, with one-to-one mapping of object metadata to
* backing pages (in __kfence_pool).
*/
+#ifdef CONFIG_KFENCE_DYNAMIC_OBJECTS
+struct kfence_metadata *kfence_metadata;
+static phys_addr_t metadata_size;
+
+static inline bool kfence_metadata_valid(void)
+{
+ return !!kfence_metadata;
+}
+
+#else
static_assert(CONFIG_KFENCE_NUM_OBJECTS > 0);
struct kfence_metadata kfence_metadata[CONFIG_KFENCE_NUM_OBJECTS];
+static inline bool kfence_metadata_valid(void) { return true; }
+#endif
+
/* Freelist with available objects. */
static struct list_head kfence_freelist = LIST_HEAD_INIT(kfence_freelist);
static DEFINE_RAW_SPINLOCK(kfence_freelist_lock); /* Lock protecting freelist. */
@@ -124,11 +184,16 @@ atomic_t kfence_allocation_gate = ATOMIC_INIT(1);
* P(alloc_traces) = (1 - e^(-HNUM * (alloc_traces / SIZE)) ^ HNUM
*/
#define ALLOC_COVERED_HNUM 2
-#define ALLOC_COVERED_ORDER (const_ilog2(CONFIG_KFENCE_NUM_OBJECTS) + 2)
+#define ALLOC_COVERED_ORDER (const_ilog2(KFENCE_NR_OBJECTS) + 2)
#define ALLOC_COVERED_SIZE (1 << ALLOC_COVERED_ORDER)
#define ALLOC_COVERED_HNEXT(h) hash_32(h, ALLOC_COVERED_ORDER)
#define ALLOC_COVERED_MASK (ALLOC_COVERED_SIZE - 1)
+#ifdef CONFIG_KFENCE_DYNAMIC_OBJECTS
+static atomic_t *alloc_covered;
+static phys_addr_t covered_size;
+#else
static atomic_t alloc_covered[ALLOC_COVERED_SIZE];
+#endif
/* Stack depth used to determine uniqueness of an allocation. */
#define UNIQUE_ALLOC_STACK_DEPTH ((size_t)8)
@@ -168,7 +233,7 @@ static_assert(ARRAY_SIZE(counter_names) == KFENCE_COUNTER_COUNT);
static inline bool should_skip_covered(void)
{
- unsigned long thresh = (CONFIG_KFENCE_NUM_OBJECTS * kfence_skip_covered_thresh) / 100;
+ unsigned long thresh = (KFENCE_NR_OBJECTS * kfence_skip_covered_thresh) / 100;
return atomic_long_read(&counters[KFENCE_COUNTER_ALLOCATED]) > thresh;
}
@@ -236,7 +301,7 @@ static inline struct kfence_metadata *addr_to_metadata(unsigned long addr)
* error.
*/
index = (addr - (unsigned long)__kfence_pool) / (PAGE_SIZE * 2) - 1;
- if (index < 0 || index >= CONFIG_KFENCE_NUM_OBJECTS)
+ if (index < 0 || index >= KFENCE_NR_OBJECTS)
return NULL;
return &kfence_metadata[index];
@@ -251,7 +316,7 @@ static inline unsigned long metadata_to_pageaddr(const struct kfence_metadata *m
/* Only call with a pointer into kfence_metadata. */
if (KFENCE_WARN_ON(meta < kfence_metadata ||
- meta >= kfence_metadata + CONFIG_KFENCE_NUM_OBJECTS))
+ meta >= kfence_metadata + KFENCE_NR_OBJECTS))
return 0;
/*
@@ -576,7 +641,7 @@ static bool __init kfence_init_pool(void)
addr += PAGE_SIZE;
}
- for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
+ for (i = 0; i < KFENCE_NR_OBJECTS; i++) {
struct kfence_metadata *meta = &kfence_metadata[i];
/* Initialize metadata. */
@@ -637,7 +702,7 @@ DEFINE_SHOW_ATTRIBUTE(stats);
*/
static void *start_object(struct seq_file *seq, loff_t *pos)
{
- if (*pos < CONFIG_KFENCE_NUM_OBJECTS)
+ if (*pos < KFENCE_NR_OBJECTS)
return (void *)((long)*pos + 1);
return NULL;
}
@@ -649,7 +714,7 @@ static void stop_object(struct seq_file *seq, void *v)
static void *next_object(struct seq_file *seq, void *v, loff_t *pos)
{
++*pos;
- if (*pos < CONFIG_KFENCE_NUM_OBJECTS)
+ if (*pos < KFENCE_NR_OBJECTS)
return (void *)((long)*pos + 1);
return NULL;
}
@@ -691,7 +756,11 @@ static int __init kfence_debugfs_init(void)
struct dentry *kfence_dir = debugfs_create_dir("kfence", NULL);
debugfs_create_file("stats", 0444, kfence_dir, NULL, &stats_fops);
- debugfs_create_file("objects", 0400, kfence_dir, NULL, &objects_fops);
+
+ /* Variable kfence_metadata may fail to allocate. */
+ if (kfence_metadata_valid())
+ debugfs_create_file("objects", 0400, kfence_dir, NULL, &objects_fops);
+
return 0;
}
@@ -751,6 +820,40 @@ static void toggle_allocation_gate(struct work_struct *work)
}
static DECLARE_DELAYED_WORK(kfence_timer, toggle_allocation_gate);
+#ifdef CONFIG_KFENCE_DYNAMIC_OBJECTS
+static int __init kfence_dynamic_init(void)
+{
+ metadata_size = sizeof(struct kfence_metadata) * KFENCE_NR_OBJECTS;
+ kfence_metadata = memblock_alloc(metadata_size, PAGE_SIZE);
+ if (!kfence_metadata) {
+ pr_err("failed to allocate metadata\n");
+ return -ENOMEM;
+ }
+
+ covered_size = sizeof(atomic_t) * KFENCE_NR_OBJECTS;
+ alloc_covered = memblock_alloc(covered_size, PAGE_SIZE);
+ if (!alloc_covered) {
+ memblock_free((phys_addr_t)kfence_metadata, metadata_size);
+ kfence_metadata = NULL;
+ pr_err("failed to allocate covered\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void __init kfence_dynamic_destroy(void)
+{
+ memblock_free((phys_addr_t)alloc_covered, covered_size);
+ alloc_covered = NULL;
+ memblock_free((phys_addr_t)kfence_metadata, metadata_size);
+ kfence_metadata = NULL;
+}
+#else
+static int __init kfence_dynamic_init(void) { return 0; }
+static void __init kfence_dynamic_destroy(void) { }
+#endif
+
/* === Public interface ===================================================== */
void __init kfence_alloc_pool(void)
@@ -758,10 +861,14 @@ void __init kfence_alloc_pool(void)
if (!kfence_sample_interval)
return;
- __kfence_pool = memblock_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
+ if (kfence_dynamic_init())
+ return;
- if (!__kfence_pool)
+ __kfence_pool = memblock_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
+ if (!__kfence_pool) {
pr_err("failed to allocate pool\n");
+ kfence_dynamic_destroy();
+ }
}
void __init kfence_init(void)
@@ -780,8 +887,8 @@ void __init kfence_init(void)
static_branch_enable(&kfence_allocation_key);
WRITE_ONCE(kfence_enabled, true);
queue_delayed_work(system_unbound_wq, &kfence_timer, 0);
- pr_info("initialized - using %lu bytes for %d objects at 0x%p-0x%p\n", KFENCE_POOL_SIZE,
- CONFIG_KFENCE_NUM_OBJECTS, (void *)__kfence_pool,
+ pr_info("initialized - using %lu bytes for %lu objects at 0x%p-0x%p\n", KFENCE_POOL_SIZE,
+ (unsigned long)KFENCE_NR_OBJECTS, (void *)__kfence_pool,
(void *)(__kfence_pool + KFENCE_POOL_SIZE));
}
@@ -791,7 +898,10 @@ void kfence_shutdown_cache(struct kmem_cache *s)
struct kfence_metadata *meta;
int i;
- for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
+ if (!kfence_metadata_valid())
+ return;
+
+ for (i = 0; i < KFENCE_NR_OBJECTS; i++) {
bool in_use;
meta = &kfence_metadata[i];
@@ -830,7 +940,7 @@ void kfence_shutdown_cache(struct kmem_cache *s)
}
}
- for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
+ for (i = 0; i < KFENCE_NR_OBJECTS; i++) {
meta = &kfence_metadata[i];
/* See above. */
diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index 2a2d5de9d379..e5f8f8577911 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -91,7 +91,11 @@ struct kfence_metadata {
u32 alloc_stack_hash;
};
-extern struct kfence_metadata kfence_metadata[CONFIG_KFENCE_NUM_OBJECTS];
+#ifdef CONFIG_KFENCE_DYNAMIC_OBJECTS
+extern struct kfence_metadata *kfence_metadata;
+#else
+extern struct kfence_metadata kfence_metadata[KFENCE_NR_OBJECTS];
+#endif
/* KFENCE error types for report generation. */
enum kfence_error_type {
diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
index f1690cf54199..213a49e0c742 100644
--- a/mm/kfence/kfence_test.c
+++ b/mm/kfence/kfence_test.c
@@ -621,7 +621,7 @@ static void test_gfpzero(struct kunit *test)
break;
test_free(buf2);
- if (i == CONFIG_KFENCE_NUM_OBJECTS) {
+ if (i == KFENCE_NR_OBJECTS) {
kunit_warn(test, "giving up ... cannot get same object back\n");
return;
}
--
2.20.1
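The documentation hunk keeps the pool-size formula `( #objects + 1 ) * 2 * PAGE_SIZE`: each object needs an object page plus an interleaved guard page, and one extra leading page serves as the extended guard page mentioned in the kfence.h comment. A quick sanity-check sketch, assuming 4 KiB pages:

```c
/* KFENCE_POOL_SIZE from the patch, with PAGE_SIZE fixed at 4 KiB for
 * illustration; the real value is architecture-dependent. */
#define PAGE_SIZE_BYTES 4096UL

static unsigned long kfence_pool_size(unsigned long nr_objects)
{
	/* object page + guard page per object, plus one leading guard pair */
	return (nr_objects + 1) * 2 * PAGE_SIZE_BYTES;
}
```

With the default of 255 objects this comes to exactly 2 MiB, which is why a boot-time `kfence.num_objects` knob directly trades memory for detection probability.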
[PATCH openEuler-5.10 1/6] ext4: convert DIV_ROUND_UP to DIV_ROUND_UP_ULL
by Zheng Zengkai 02 Mar '22
From: Zhang Yi <yi.zhang(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4VAQC?from=project-issue
CVE: NA
--------------------------------
Fix a compile error on arm32 architecture.
build failed: arm, allmodconfig
ERROR: modpost: "__aeabi_ldivmod" [fs/ext4/ext4.ko] undefined!
make[1]: *** [modules-only.symvers] Error 1
make[1]: *** Deleting file 'modules-only.symvers'
make: *** [modules] Error 2
Fixes: 356efe60eb78 ("ext4: fix underflow in ext4_max_bitmap_size()")
Signed-off-by: Zhang Yi <yi.zhang(a)huawei.com>
Reviewed-by: Yang Erkun <yangerkun(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
fs/ext4/super.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 19539be45aec..f1a089ebe848 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -3274,15 +3274,15 @@ static loff_t ext4_max_bitmap_size(int bits, int has_huge_files)
upper_limit -= ppb;
/* double indirect blocks */
if (upper_limit < ppb * ppb) {
- meta_blocks += 1 + DIV_ROUND_UP(upper_limit, ppb);
+ meta_blocks += 1 + DIV_ROUND_UP_ULL(upper_limit, ppb);
res -= meta_blocks;
goto check_lfs;
}
meta_blocks += 1 + ppb;
upper_limit -= ppb * ppb;
/* tripple indirect blocks for the rest */
- meta_blocks += 1 + DIV_ROUND_UP(upper_limit, ppb) +
- DIV_ROUND_UP(upper_limit, ppb*ppb);
+ meta_blocks += 1 + DIV_ROUND_UP_ULL(upper_limit, ppb) +
+ DIV_ROUND_UP_ULL(upper_limit, ppb*ppb);
res -= meta_blocks;
check_lfs:
res <<= bits;
--
2.20.1
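Background on the error: DIV_ROUND_UP expands to a plain `/`, and a 64-bit dividend on 32-bit arm makes the compiler emit a call to libgcc's `__aeabi_ldivmod`, a symbol the kernel does not provide; hence the modpost failure. DIV_ROUND_UP_ULL instead routes the division through the kernel's do_div()/div_u64() helpers. A userspace sketch of the two macros (plain `/` stands in for div_u64 here, which is fine outside the kernel):

```c
/* Plain DIV_ROUND_UP: fine for native-width operands, but a 64-bit
 * dividend on 32-bit arm pulls in __aeabi_ldivmod. */
#define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

/* DIV_ROUND_UP_ULL: the kernel version does the 64-bit division via
 * div_u64(), avoiding the libgcc helper on 32-bit targets. */
#define DIV_ROUND_UP_ULL(ll, d) \
	((unsigned long long)(((unsigned long long)(ll) + (d) - 1) / (d)))
```

The results are identical; only the code generation on 32-bit targets differs.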
[PATCH openEuler-1.0-LTS] f2fs: fix to do sanity check on inode type during garbage collection
by Yang Yingliang 02 Mar '22
From: Chao Yu <chao(a)kernel.org>
mainline inclusion
from mainline-v5.16-rc1
commit 9056d6489f5a41cfbb67f719d2c0ce61ead72d9f
category: bugfix
bugzilla: 186264
CVE: CVE-2021-44879
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
As reported by Wenqing Liu in bugzilla:
https://bugzilla.kernel.org/show_bug.cgi?id=215231
- Overview
kernel NULL pointer dereference triggered in folio_mark_dirty() when mount and operate on a crafted f2fs image
- Reproduce
tested on kernel 5.16-rc3, 5.15.X under root
1. mkdir mnt
2. mount -t f2fs tmp1.img mnt
3. touch tmp
4. cp tmp mnt
F2FS-fs (loop0): sanity_check_inode: inode (ino=49) extent info [5942, 4294180864, 4] is incorrect, run fsck to fix
F2FS-fs (loop0): f2fs_check_nid_range: out-of-range nid=31340049, run fsck to fix.
BUG: kernel NULL pointer dereference, address: 0000000000000000
folio_mark_dirty+0x33/0x50
move_data_page+0x2dd/0x460 [f2fs]
do_garbage_collect+0xc18/0x16a0 [f2fs]
f2fs_gc+0x1d3/0xd90 [f2fs]
f2fs_balance_fs+0x13a/0x570 [f2fs]
f2fs_create+0x285/0x840 [f2fs]
path_openat+0xe6d/0x1040
do_filp_open+0xc5/0x140
do_sys_openat2+0x23a/0x310
do_sys_open+0x57/0x80
The root cause is that for special files (e.g. character, block, fifo or
socket files) f2fs doesn't assign an address space operations pointer array
to the mapping->a_ops field. So, in a fuzzed image where the SSA table
indicates a data block belongs to a special file, f2fs causes a NULL pointer
access when it tries to migrate that block and move_data_page() calls
a_ops->set_page_dirty().
Cc: stable(a)vger.kernel.org
Reported-by: Wenqing Liu <wenqingliu0120(a)gmail.com>
Signed-off-by: Chao Yu <chao(a)kernel.org>
Signed-off-by: Jaegeuk Kim <jaegeuk(a)kernel.org>
Signed-off-by: Guo Xuenan <guoxuenan(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Reviewed-by: fang wei <fangwei1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/f2fs/gc.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
index 700c39ec99f5a..f4f7b8feebefc 100644
--- a/fs/f2fs/gc.c
+++ b/fs/f2fs/gc.c
@@ -953,7 +953,8 @@ static void gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
if (phase == 3) {
inode = f2fs_iget(sb, dni.ino);
- if (IS_ERR(inode) || is_bad_inode(inode))
+ if (IS_ERR(inode) || is_bad_inode(inode) ||
+ special_file(inode->i_mode))
continue;
if (!down_write_trylock(
--
2.25.1
[PATCH openEuler-1.0-LTS 01/18] mm: Ratelimited mirrored memory related warning messages
by Yang Yingliang 01 Mar '22
From: Ma Wupeng <mawupeng1(a)huawei.com>
hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SK3S
CVE: NA
--------------------------------
If the system has mirrored memory, memblock will try to allocate mirrored
memory first and fall back to non-mirrored memory when that fails. But with
limited mirrored memory, or when some NUMA node has no mirrored memory,
lots of warning messages about memblock allocation will occur.
This patch ratelimits the warning messages to avoid a very long print
during bootup.
Signed-off-by: Ma Wupeng <mawupeng1(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
mm/memblock.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/memblock.c b/mm/memblock.c
index 13be610a381f4..80c6975ace6f2 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -291,7 +291,7 @@ phys_addr_t __init_memblock memblock_find_in_range(phys_addr_t start,
NUMA_NO_NODE, flags);
if (!ret && (flags & MEMBLOCK_MIRROR)) {
- pr_warn("Could not allocate %pap bytes of mirrored memory\n",
+ pr_warn_ratelimited("Could not allocate %pap bytes of mirrored memory\n",
&size);
flags &= ~MEMBLOCK_MIRROR;
goto again;
@@ -1425,7 +1425,7 @@ static void * __init memblock_virt_alloc_internal(
if (flags & MEMBLOCK_MIRROR) {
flags &= ~MEMBLOCK_MIRROR;
- pr_warn("Could not allocate %pap bytes of mirrored memory\n",
+ pr_warn_ratelimited("Could not allocate %pap bytes of mirrored memory\n",
&size);
goto again;
}
--
2.25.1
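The warnings come from memblock's mirrored-first policy: attempt the allocation with MEMBLOCK_MIRROR, warn on failure, clear the flag and retry. The control flow can be sketched with a stub allocator standing in for memblock_find_in_range_node() (the flag value and addresses here are illustrative):

```c
#define MEMBLOCK_MIRROR 0x2 /* illustrative flag value */

/* Stub: pretend mirrored memory is exhausted, so the mirrored attempt
 * always fails and the caller must fall back. */
static unsigned long find_range(unsigned long size, unsigned long flags)
{
	if (flags & MEMBLOCK_MIRROR)
		return 0;      /* no mirrored memory available */
	return 0x1000;         /* any non-mirrored address */
}

static int warnings;

static unsigned long alloc_with_fallback(unsigned long size)
{
	unsigned long flags = MEMBLOCK_MIRROR;
	unsigned long ret;

again:
	ret = find_range(size, flags);
	if (!ret && (flags & MEMBLOCK_MIRROR)) {
		warnings++; /* the patch ratelimits this pr_warn */
		flags &= ~MEMBLOCK_MIRROR;
		goto again;
	}
	return ret;
}
```

Every allocation that cannot be satisfied from mirrored memory takes the warn-and-retry path once, which is why an early boot with many allocations floods the log without the ratelimit.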
hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4VPL5
CVE: NA
-------------------------------------------------
Enable configs to support 9P.
Signed-off-by: Chao Liu <liuchao173(a)huawei.com>
---
arch/arm64/configs/openeuler_defconfig | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index 8ded00ca3364..d19da343bf1e 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -1876,7 +1876,10 @@ CONFIG_RFKILL=m
CONFIG_RFKILL_LEDS=y
CONFIG_RFKILL_INPUT=y
CONFIG_RFKILL_GPIO=m
-# CONFIG_NET_9P is not set
+CONFIG_NET_9P=m
+CONFIG_NET_9P_VIRTIO=m
+# CONFIG_NET_9P_RDMA is not set
+# CONFIG_NET_9P_DEBUG is not set
# CONFIG_CAIF is not set
CONFIG_CEPH_LIB=m
# CONFIG_CEPH_LIB_PRETTYDEBUG is not set
@@ -6397,6 +6400,10 @@ CONFIG_CIFS_DFS_UPCALL=y
# CONFIG_CIFS_FSCACHE is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
+CONFIG_9P_FS=m
+CONFIG_9P_FSCACHE=y
+CONFIG_9P_FS_POSIX_ACL=y
+CONFIG_9P_FS_SECURITY=y
CONFIG_EULER_FS=m
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
--
2.23.0
01 Mar '22
If this config is enabled, block mappings are not used: the linear address
range is mapped with 4 KB pages. As a result, the TLB miss rate is high,
affecting performance.
Even if this option is set to 'n', the enhancement can still be turned on
at boot with rodata=full.
Signed-off-by: Chao Liu <liuchao173(a)huawei.com>
---
arch/arm64/configs/openeuler_defconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index b476b105ee10..8ded00ca3364 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -433,7 +433,7 @@ CONFIG_ARM64_CPU_PARK=y
# CONFIG_XEN is not set
CONFIG_FORCE_MAX_ZONEORDER=11
CONFIG_UNMAP_KERNEL_AT_EL0=y
-CONFIG_RODATA_FULL_DEFAULT_ENABLED=y
+# CONFIG_RODATA_FULL_DEFAULT_ENABLED is not set
CONFIG_ARM64_PMEM_RESERVE=y
CONFIG_ARM64_PMEM_LEGACY=m
# CONFIG_ARM64_SW_TTBR0_PAN is not set
--
2.23.0
01 Mar '22
euler inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4VPIB
CVE: NA
-------------------------------------------------
If this config is enabled, block mappings are not used: the linear address
range is mapped with 4 KB pages. As a result, the TLB miss rate is high,
affecting performance.
Even if this option is set to 'n', the enhancement can still be turned on
at boot with rodata=full.
Signed-off-by: Chao Liu <liuchao173(a)huawei.com>
---
arch/arm64/configs/openeuler_defconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index b476b105ee10..8ded00ca3364 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -433,7 +433,7 @@ CONFIG_ARM64_CPU_PARK=y
# CONFIG_XEN is not set
CONFIG_FORCE_MAX_ZONEORDER=11
CONFIG_UNMAP_KERNEL_AT_EL0=y
-CONFIG_RODATA_FULL_DEFAULT_ENABLED=y
+# CONFIG_RODATA_FULL_DEFAULT_ENABLED is not set
CONFIG_ARM64_PMEM_RESERVE=y
CONFIG_ARM64_PMEM_LEGACY=m
# CONFIG_ARM64_SW_TTBR0_PAN is not set
--
2.23.0
[PATCH openEuler-1.0-LTS 1/2] USB: gadget: validate interface OS descriptor requests
by Yang Yingliang 01 Mar '22
From: Szymon Heidrich <szymon.heidrich(a)gmail.com>
stable inclusion
from linux-4.19.230
commit e5eb8d19aee115d8fb354d1eff1b8df700467164
CVE: CVE-2022-25258
--------------------------------
commit 75e5b4849b81e19e9efe1654b30d7f3151c33c2c upstream.
Stall the control endpoint in case provided index exceeds array size of
MAX_CONFIG_INTERFACES or when the retrieved function pointer is null.
Signed-off-by: Szymon Heidrich <szymon.heidrich(a)gmail.com>
Cc: stable(a)kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Wang Weiyang <wangweiyang2(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/usb/gadget/composite.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
index d39ebf5bdbdd5..0f2af1e1e8f4b 100644
--- a/drivers/usb/gadget/composite.c
+++ b/drivers/usb/gadget/composite.c
@@ -1865,6 +1865,9 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
if (w_index != 0x5 || (w_value >> 8))
break;
interface = w_value & 0xFF;
+ if (interface >= MAX_CONFIG_INTERFACES ||
+ !os_desc_cfg->interface[interface])
+ break;
buf[6] = w_index;
count = count_ext_prop(os_desc_cfg,
interface);
--
2.25.1
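The fix is a classic untrusted-index guard: w_value arrives from the host, so it must be range-checked before indexing the interface array, and an empty slot must also be rejected (in composite_setup(), "reject" means falling through to stall the control endpoint). The shape of the check, outside the driver — the bound of 16 matches the gadget core's MAX_CONFIG_INTERFACES, and the table here is a plain array for illustration:

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_CONFIG_INTERFACES 16 /* matches the gadget core's limit */

/* Reject out-of-range indices and NULL slots before dereferencing. */
static bool interface_request_valid(void *table[], unsigned int interface)
{
	return interface < MAX_CONFIG_INTERFACES && table[interface] != NULL;
}
```

Without the bound check, a host-supplied index up to 255 (w_value & 0xFF) could read past the 16-entry array; without the NULL check, an unpopulated slot would be dereferenced.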
[PATCH openEuler-1.0-LTS] mm/hwpoison: clear MF_COUNT_INCREASED before retrying get_any_page()
by Yang Yingliang 28 Feb '22
From: Liu Shixin <liushixin2(a)huawei.com>
mainline inclusion
from linux-v5.16-rc7
commit 2a57d83c78f889bf3f54eede908d0643c40d5418
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4LE22
CVE: NA
--------------------------------
Hulk Robot reported a panic in put_page_testzero() when testing
madvise() with MADV_SOFT_OFFLINE. The BUG() is triggered when retrying
get_any_page(). This is because we keep the MF_COUNT_INCREASED flag on
the second try, but the refcount has not been increased.
page dumped because: VM_BUG_ON_PAGE(page_ref_count(page) == 0)
------------[ cut here ]------------
kernel BUG at include/linux/mm.h:737!
invalid opcode: 0000 [#1] PREEMPT SMP
CPU: 5 PID: 2135 Comm: sshd Tainted: G B 5.16.0-rc6-dirty #373
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
RIP: release_pages+0x53f/0x840
Call Trace:
free_pages_and_swap_cache+0x64/0x80
tlb_flush_mmu+0x6f/0x220
unmap_page_range+0xe6c/0x12c0
unmap_single_vma+0x90/0x170
unmap_vmas+0xc4/0x180
exit_mmap+0xde/0x3a0
mmput+0xa3/0x250
do_exit+0x564/0x1470
do_group_exit+0x3b/0x100
__do_sys_exit_group+0x13/0x20
__x64_sys_exit_group+0x16/0x20
do_syscall_64+0x34/0x80
entry_SYSCALL_64_after_hwframe+0x44/0xae
Modules linked in:
---[ end trace e99579b570fe0649 ]---
RIP: 0010:release_pages+0x53f/0x840
Link: https://lkml.kernel.org/r/20211221074908.3910286-1-liushixin2@huawei.com
Fixes: b94e02822deb ("mm,hwpoison: try to narrow window race for free pages")
Signed-off-by: Liu Shixin <liushixin2(a)huawei.com>
Reported-by: Hulk Robot <hulkci(a)huawei.com>
Reviewed-by: Oscar Salvador <osalvador(a)suse.de>
Acked-by: Naoya Horiguchi <naoya.horiguchi(a)nec.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Ma Wupeng <mawupeng1(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
mm/memory-failure.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 578859c94866f..c743ad409908c 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1955,6 +1955,7 @@ int soft_offline_page(struct page *page, int flags)
else if (ret == 0)
if (soft_offline_free_page(page) && try_again) {
try_again = false;
+ flags &= ~MF_COUNT_INCREASED;
goto retry;
}
--
2.25.1
Hello all,

The second round of test builds for openEuler 22.03 LTS has been
released. To support the release and its maintenance, we have created
the openEuler-22.03-LTS branch based on OLK-5.10.

This branch is used for maintenance of the 22.03-LTS release and no
longer accepts new features that were not planned in advance; only
bugfixes, CVE fixes, and performance optimizations are accepted. Patches
targeting this branch must carry the branch name as a prefix in the
patch subject.

- Patches to be pushed to this branch must first be merged into
OLK-5.10; as a rule, the branch does not accept patches that have not
been merged into OLK-5.10.
- After 22.03-LTS is released, low-impact features backported from
OLK-5.10 that do not affect compatibility, such as optimizations and
hardware enablement, may be accepted.

At the same time, the openEuler-22.09 branch has been created based on
OLK-5.10 to support development of the 22.09 innovation release.
Innovation releases accept new features, preview features, and
validation features, to help new technologies and features mature and
stabilize.

The OLK-5.10 branch continues to accept patch merge requests.

Newly created branch paths:
https://gitee.com/openeuler/kernel/tree/openEuler-22.03-LTS/
https://gitee.com/openeuler/kernel/tree/openEuler-22.09/
euler inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4URME
CVE: NA
--------------------------------
Enable CONFIG_INTEL_IDXD in openeuler_defconfig for x86.
Support Intel Data Accelerators on Xeon hardware.
Signed-off-by: fuyufan <fuyufan(a)huawei.com>
---
arch/x86/configs/openeuler_defconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index 316c4122a859..350a02583978 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -6353,7 +6353,7 @@ CONFIG_DMA_VIRTUAL_CHANNELS=y
CONFIG_DMA_ACPI=y
# CONFIG_ALTERA_MSGDMA is not set
CONFIG_INTEL_IDMA64=m
-# CONFIG_INTEL_IDXD is not set
+CONFIG_INTEL_IDXD=m
CONFIG_INTEL_IOATDMA=m
# CONFIG_PLX_DMA is not set
# CONFIG_QCOM_HIDMA_MGMT is not set
--
2.27.0
euler inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4URME
CVE: NA
--------------------------------
Enable CONFIG_INTEL_IDXD in openeuler_defconfig for x86.
Support Intel Data Accelerators on Xeon hardware.
Signed-off-by: fuyufan <fuyufan(a)huawei.com>
---
arch/x86/configs/openeuler_defconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index 316c4122a859..350a02583978 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -6353,7 +6353,7 @@ CONFIG_DMA_VIRTUAL_CHANNELS=y
CONFIG_DMA_ACPI=y
# CONFIG_ALTERA_MSGDMA is not set
CONFIG_INTEL_IDMA64=m
-# CONFIG_INTEL_IDXD is not set
+CONFIG_INTEL_IDXD=y
CONFIG_INTEL_IOATDMA=m
# CONFIG_PLX_DMA is not set
# CONFIG_QCOM_HIDMA_MGMT is not set
--
2.27.0

[PATCH openEuler-1.0-LTS 1/2] udf: Fix NULL ptr deref when converting from inline format
by Yang Yingliang 24 Feb '22
From: Jan Kara <jack(a)suse.cz>
mainline inclusion
from mainline-v5.17-rc2
commit 7fc3b7c2981bbd1047916ade327beccb90994eee
category: bugfix
bugzilla: 186269
CVE: CVE-2022-0617
-----------------------------------------------
udf_expand_file_adinicb() calls ->writepage directly to write out data
expanded into a page. However, this fails to set up the inode for
writeback properly, so we can crash on an inode->i_wb dereference when
submitting the page for IO, like:
BUG: kernel NULL pointer dereference, address: 0000000000000158
#PF: supervisor read access in kernel mode
...
<TASK>
__folio_start_writeback+0x2ac/0x350
__block_write_full_page+0x37d/0x490
udf_expand_file_adinicb+0x255/0x400 [udf]
udf_file_write_iter+0xbe/0x1b0 [udf]
new_sync_write+0x125/0x1c0
vfs_write+0x28e/0x400
Fix the problem by marking the page dirty and going through the standard
writeback path to write the page. Strictly speaking we would not even
have to write the page but we want to catch e.g. ENOSPC errors early.
Reported-by: butt3rflyh4ck <butterflyhuangxx(a)gmail.com>
CC: stable(a)vger.kernel.org
Fixes: 52ebea749aae ("writeback: make backing_dev_info host cgroup-specific bdi_writebacks")
Reviewed-by: Christoph Hellwig <hch(a)lst.de>
Signed-off-by: Jan Kara <jack(a)suse.cz>
Signed-off-by: Zhang Wensheng <zhangwensheng5(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/udf/inode.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/fs/udf/inode.c b/fs/udf/inode.c
index 4c46ebf0e773b..d1fab052ac759 100644
--- a/fs/udf/inode.c
+++ b/fs/udf/inode.c
@@ -248,10 +248,6 @@ int udf_expand_file_adinicb(struct inode *inode)
char *kaddr;
struct udf_inode_info *iinfo = UDF_I(inode);
int err;
- struct writeback_control udf_wbc = {
- .sync_mode = WB_SYNC_NONE,
- .nr_to_write = 1,
- };
WARN_ON_ONCE(!inode_is_locked(inode));
if (!iinfo->i_lenAlloc) {
@@ -295,8 +291,10 @@ int udf_expand_file_adinicb(struct inode *inode)
iinfo->i_alloc_type = ICBTAG_FLAG_AD_LONG;
/* from now on we have normal address_space methods */
inode->i_data.a_ops = &udf_aops;
+ set_page_dirty(page);
+ unlock_page(page);
up_write(&iinfo->i_data_sem);
- err = inode->i_data.a_ops->writepage(page, &udf_wbc);
+ err = filemap_fdatawrite(inode->i_mapping);
if (err) {
/* Restore everything back so that we don't lose data... */
lock_page(page);
--
2.25.1

23 Feb '22
From: Ma Wupeng <mawupeng1(a)huawei.com>
hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4PM01
CVE: NA
--------------------------------
Make efi_print_memmap() public in preparation for adding fake memory
support for architectures with EFI support, e.g. arm64.
Co-developed-by: Jing Xiangfeng <jingxiangfeng(a)huawei.com>
Signed-off-by: Ma Wupeng <mawupeng1(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
arch/x86/include/asm/efi.h | 1 -
arch/x86/platform/efi/efi.c | 16 ----------------
drivers/firmware/efi/memmap.c | 16 ++++++++++++++++
include/linux/efi.h | 1 +
4 files changed, 17 insertions(+), 17 deletions(-)
diff --git a/arch/x86/include/asm/efi.h b/arch/x86/include/asm/efi.h
index bc9758ef292e..3be8754408d5 100644
--- a/arch/x86/include/asm/efi.h
+++ b/arch/x86/include/asm/efi.h
@@ -138,7 +138,6 @@ struct efi_scratch {
extern struct efi_scratch efi_scratch;
extern int __init efi_memblock_x86_reserve_range(void);
-extern void __init efi_print_memmap(void);
extern void __init efi_map_region(efi_memory_desc_t *md);
extern void __init efi_map_region_fixed(efi_memory_desc_t *md);
extern void efi_sync_low_kernel_mappings(void);
diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
index 8a26e705cb06..ef6f4cbffe28 100644
--- a/arch/x86/platform/efi/efi.c
+++ b/arch/x86/platform/efi/efi.c
@@ -323,22 +323,6 @@ static void __init efi_clean_memmap(void)
}
}
-void __init efi_print_memmap(void)
-{
- efi_memory_desc_t *md;
- int i = 0;
-
- for_each_efi_memory_desc(md) {
- char buf[64];
-
- pr_info("mem%02u: %s range=[0x%016llx-0x%016llx] (%lluMB)\n",
- i++, efi_md_typeattr_format(buf, sizeof(buf), md),
- md->phys_addr,
- md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT) - 1,
- (md->num_pages >> (20 - EFI_PAGE_SHIFT)));
- }
-}
-
static int __init efi_systab_init(unsigned long phys)
{
int size = efi_enabled(EFI_64BIT) ? sizeof(efi_system_table_64_t)
diff --git a/drivers/firmware/efi/memmap.c b/drivers/firmware/efi/memmap.c
index 2ff1883dc788..0155bf066ba5 100644
--- a/drivers/firmware/efi/memmap.c
+++ b/drivers/firmware/efi/memmap.c
@@ -376,3 +376,19 @@ void __init efi_memmap_insert(struct efi_memory_map *old_memmap, void *buf,
}
}
}
+
+void __init efi_print_memmap(void)
+{
+ efi_memory_desc_t *md;
+ int i = 0;
+
+ for_each_efi_memory_desc(md) {
+ char buf[64];
+
+ pr_info("mem%02u: %s range=[0x%016llx-0x%016llx] (%lluMB)\n",
+ i++, efi_md_typeattr_format(buf, sizeof(buf), md),
+ md->phys_addr,
+ md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT) - 1,
+ (md->num_pages >> (20 - EFI_PAGE_SHIFT)));
+ }
+}
diff --git a/include/linux/efi.h b/include/linux/efi.h
index e17cd4c44f93..280f36cb7c14 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -643,6 +643,7 @@ extern int __init efi_memmap_split_count(efi_memory_desc_t *md,
struct range *range);
extern void __init efi_memmap_insert(struct efi_memory_map *old_memmap,
void *buf, struct efi_mem_range *mem);
+extern void __init efi_print_memmap(void);
#ifdef CONFIG_EFI_ESRT
extern void __init efi_esrt_init(void);
--
2.20.1

[PATCH openEuler-5.10 04/25] ahci: Fix some bugs like plugin support and sata link stability when user enable ahci RTD3
by Zheng Zengkai 23 Feb '22
From: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
zhaoxin inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I40QDN
CVE: NA
----------------------------------------------------------------
The kernel driver currently disables the port interrupt when a port is
suspended, so hot-plug does not work. Keeping the plug-in interrupt
enabled while the port is in a power-managed state makes hot-plug work
again. pm_request_resume() is added to resume the port when a plug-in
event or a PME signal wakes up a controller that is in D3.
Since the AHCI controller frequently enters and leaves D3, the IDENTIFY
command may time out when the controller resumes and establishes a
connection with the device. A 10 ms delay between controller resume and
port resume allows a smooth link transition.
With non-power-management requests and power-management requests
competing with each other in the queue, block IO often hangs for 120 s
while the system disk is suspending or resuming. It is now guaranteed
that PM requests enter the queue regardless of whether other non-PM
requests are waiting: the pm_only counter is increased before checking
whether any non-PM blk_queue_enter() calls are in progress.
Meanwhile, the new blk_pm_request_resume() call must occur while a
request is being assigned to a queue when the device is suspended.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
drivers/ata/ahci.c | 13 +++++++++++++
drivers/ata/libahci.c | 15 +++++++++++++++
2 files changed, 28 insertions(+)
diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
index ff2add0101fe..8a7140ee88b5 100644
--- a/drivers/ata/ahci.c
+++ b/drivers/ata/ahci.c
@@ -869,6 +869,19 @@ static int ahci_pci_device_runtime_resume(struct device *dev)
if (rc)
return rc;
ahci_pci_init_controller(host);
+
+ /* Port resume for Zhaoxin platform */
+ if (pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) {
+ if (pdev->revision == 0x01)
+ ata_msleep(NULL, 10);
+
+ for (rc = 0; rc < host->n_ports; rc++) {
+ struct ata_port *ap = host->ports[rc];
+
+ pm_request_resume(&ap->tdev);
+ }
+ }
+
return 0;
}
diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
index fec2e9754aed..4514f3f28b7c 100644
--- a/drivers/ata/libahci.c
+++ b/drivers/ata/libahci.c
@@ -823,9 +823,15 @@ static int ahci_set_lpm(struct ata_link *link, enum ata_lpm_policy policy,
static void ahci_power_down(struct ata_port *ap)
{
struct ahci_host_priv *hpriv = ap->host->private_data;
+ struct pci_dev *pdev;
void __iomem *port_mmio = ahci_port_base(ap);
u32 cmd, scontrol;
+ /* port suspended enable Plugin intr for Zhaoxin platform */
+ pdev = (ap->dev && dev_is_pci(ap->dev)) ? to_pci_dev(ap->dev) : NULL;
+ if ((pdev && pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) && !ap->link.device->sdev)
+ writel(PORT_IRQ_CONNECT, port_mmio + PORT_IRQ_MASK);
+
if (!(hpriv->cap & HOST_CAP_SSS))
return;
@@ -1701,6 +1707,7 @@ static void ahci_error_intr(struct ata_port *ap, u32 irq_stat)
struct ata_link *link = NULL;
struct ata_queued_cmd *active_qc;
struct ata_eh_info *active_ehi;
+ struct pci_dev *pdev;
bool fbs_need_dec = false;
u32 serror;
@@ -1791,6 +1798,14 @@ static void ahci_error_intr(struct ata_port *ap, u32 irq_stat)
ata_ehi_push_desc(host_ehi, "%s",
irq_stat & PORT_IRQ_CONNECT ?
"connection status changed" : "PHY RDY changed");
+
+ /* When plugin intr happen, now resume suspended port for Zhaoxin platform */
+ pdev = (ap->dev && dev_is_pci(ap->dev)) ? to_pci_dev(ap->dev) : NULL;
+ if ((pdev && pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) &&
+ (ap->pflags & ATA_PFLAG_SUSPENDED)) {
+ pm_request_resume(&ap->tdev);
+ return;
+ }
}
/* okay, let's hand over to EH */
--
2.20.1

[PATCH openEuler-5.10 01/25] rtc: Fix set RTC time delay 500ms on some Zhaoxin SOCs
by Zheng Zengkai 23 Feb '22
From: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
zhaoxin inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I40QDN
CVE: NA
----------------------------------------------------------------
When the RTC divider is changed from reset to an operating time base,
the first update cycle should occur 500 ms later. But on some Zhaoxin
SOCs, this first update cycle occurs one second later, so setting the
RTC time on these SOCs causes a 500 ms delay.
Skip setting up the RTC divider on these SOCs in mc146818_set_time() to
fix it.
Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
---
drivers/rtc/rtc-mc146818-lib.c | 26 +++++++++++++++++++++++---
1 file changed, 23 insertions(+), 3 deletions(-)
diff --git a/drivers/rtc/rtc-mc146818-lib.c b/drivers/rtc/rtc-mc146818-lib.c
index 2ecd8752b088..033dcc0645a5 100644
--- a/drivers/rtc/rtc-mc146818-lib.c
+++ b/drivers/rtc/rtc-mc146818-lib.c
@@ -8,6 +8,22 @@
#include <linux/acpi.h>
#endif
+#ifdef CONFIG_X86
+static inline bool follow_mc146818_divider_reset(void)
+{
+ if ((boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR ||
+ boot_cpu_data.x86_vendor == X86_VENDOR_ZHAOXIN) &&
+ (boot_cpu_data.x86 <= 7 && boot_cpu_data.x86_model <= 59))
+ return false;
+ return true;
+}
+#else
+static inline bool follow_mc146818_divider_reset(void)
+{
+ return true;
+}
+#endif
+
/*
* Returns true if a clock update is in progress
*/
@@ -171,8 +187,11 @@ int mc146818_set_time(struct rtc_time *time)
save_control = CMOS_READ(RTC_CONTROL);
CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL);
- save_freq_select = CMOS_READ(RTC_FREQ_SELECT);
- CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
+ if (follow_mc146818_divider_reset()) {
+ save_freq_select = CMOS_READ(RTC_FREQ_SELECT);
+ CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
+ }
+
#ifdef CONFIG_MACH_DECSTATION
CMOS_WRITE(real_yrs, RTC_DEC_YEAR);
@@ -190,7 +209,8 @@ int mc146818_set_time(struct rtc_time *time)
#endif
CMOS_WRITE(save_control, RTC_CONTROL);
- CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
+ if (follow_mc146818_divider_reset())
+ CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
spin_unlock_irqrestore(&rtc_lock, flags);
--
2.20.1

[PATCH openEuler-1.0-LTS] ext4: fix underflow in ext4_max_bitmap_size()
by Yang Yingliang 23 Feb '22
From: Zhang Yi <yi.zhang(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 186216, https://gitee.com/openeuler/kernel/issues/I4PW7R
CVE: NA
--------------------------------
As with commit 1c2d14212b15 ("ext2: Fix underflow in ext2_max_size()")
in the ext2 filesystem, the ext4 driver has the same underflow with a
64K block size and ^huge_file; fix this issue the same way as ext2.
This patch also reverts commit 75ca6ad408f4 ("ext4: fix loff_t overflow
in ext4_max_bitmap_size()") because it is no longer needed.
Signed-off-by: Zhang Yi <yi.zhang(a)huawei.com>
Conflicts:
fs/ext4/super.c
Signed-off-by: Baokun Li <libaokun1(a)huawei.com>
Reviewed-by: Yang Erkun <yangerkun(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/ext4/super.c | 46 +++++++++++++++++++++++++++++++---------------
1 file changed, 31 insertions(+), 15 deletions(-)
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 59637e70f9e32..8f9d60ec607ea 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -2958,9 +2958,10 @@ static loff_t ext4_max_size(int blkbits, int has_huge_files)
*/
static loff_t ext4_max_bitmap_size(int bits, int has_huge_files)
{
- loff_t res = EXT4_NDIR_BLOCKS;
+ loff_t upper_limit, res = EXT4_NDIR_BLOCKS;
int meta_blocks;
- loff_t upper_limit;
+ unsigned int ppb = 1 << (bits - 2);
+
/* This is calculated to be the largest file size for a dense, block
* mapped file such that the file's total number of 512-byte sectors,
* including data and all indirect blocks, does not exceed (2^48 - 1).
@@ -2991,23 +2992,38 @@ static loff_t ext4_max_bitmap_size(int bits, int has_huge_files)
}
- /* indirect blocks */
- meta_blocks = 1;
- /* double indirect blocks */
- meta_blocks += 1 + (1LL << (bits-2));
- /* tripple indirect blocks */
- meta_blocks += 1 + (1LL << (bits-2)) + (1LL << (2*(bits-2)));
-
- upper_limit -= meta_blocks;
- upper_limit <<= bits;
-
+ /* Compute how many blocks we can address by block tree */
res += 1LL << (bits-2);
res += 1LL << (2*(bits-2));
res += 1LL << (3*(bits-2));
+ /* Compute how many metadata blocks are needed */
+ meta_blocks = 1;
+ meta_blocks += 1 + ppb;
+ meta_blocks += 1 + ppb + ppb * ppb;
+ /* Does block tree limit file size? */
+ if (res + meta_blocks <= upper_limit)
+ goto check_lfs;
+
+ res = upper_limit;
+ /* How many metadata blocks are needed for addressing upper_limit? */
+ upper_limit -= EXT4_NDIR_BLOCKS;
+ /* indirect blocks */
+ meta_blocks = 1;
+ upper_limit -= ppb;
+ /* double indirect blocks */
+ if (upper_limit < ppb * ppb) {
+ meta_blocks += 1 + DIV_ROUND_UP(upper_limit, ppb);
+ res -= meta_blocks;
+ goto check_lfs;
+ }
+ meta_blocks += 1 + ppb;
+ upper_limit -= ppb * ppb;
+ /* tripple indirect blocks for the rest */
+ meta_blocks += 1 + DIV_ROUND_UP(upper_limit, ppb) +
+ DIV_ROUND_UP(upper_limit, ppb*ppb);
+ res -= meta_blocks;
+check_lfs:
res <<= bits;
- if (res > upper_limit)
- res = upper_limit;
-
if (res > MAX_LFS_FILESIZE)
res = MAX_LFS_FILESIZE;
--
2.25.1

23 Feb '22
v3->v4:
Refactor patch 0001.
v2->v3:
Fix compile warning "ahci_error_intr ISO C90 forbids mixed declarations
and code".
Backport Zhaoxin enhancement and bugfix patches.
LeoLiu-oc (7):
rtc: Fix set RTC time delay 500ms on some Zhaoxin SOCs
XHCI:Fix some device identify fail when enable xHCI runtime suspend
EHCI:Clear wakeup signal locked in S0 state when device plug in
Fix some bugs like plugin support and sata link stability when user
enable ahci RTD3
Add support for disabling PhyRdy Change Interrupt based on actual LPM
capability
Add support for PxSCT.LPM set based on actual LPM capability
x86/tsc: Make cur->adjusted values in package#1 to be the same
arch/x86/kernel/tsc_sync.c | 5 +++++
drivers/ata/ahci.c | 24 ++++++++++++++++++++++++
drivers/ata/libahci.c | 15 +++++++++++++++
drivers/ata/libata-eh.c | 7 +++++++
drivers/ata/libata-sata.c | 20 ++++++++++++++++++--
drivers/pci/pci-driver.c | 6 +++++-
drivers/rtc/rtc-mc146818-lib.c | 28 +++++++++++++++++++++++++---
drivers/usb/host/ehci-hcd.c | 21 +++++++++++++++++++++
drivers/usb/host/ehci-pci.c | 4 ++++
drivers/usb/host/ehci.h | 1 +
drivers/usb/host/xhci-pci.c | 2 ++
include/linux/libata.h | 4 ++++
12 files changed, 131 insertions(+), 6 deletions(-)
--
2.20.1

[PATCH openEuler-5.10 01/90] ACPICA: Interpreter: fix memory leak by using existing buffer
by Zheng Zengkai 23 Feb '22
From: Erik Kaneda <erik.kaneda(a)intel.com>
mainline inclusion
from mainline-v5.11-rc1
commit 32cf1a12cad43358e47dac8014379c2f33dfbed4
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4T7OX
CVE: NA
--------------------------------
ACPICA commit 52d1da5dcbd79a722b70f02a1a83f04088f51ff6
There was a memory leak that occurred when a _CID object is defined as
a package containing string objects. When _CID is checked for any
possible repairs, it calls a helper function to repair _HID (because
_CID basically contains multiple _HID entries).
The _HID repair function assumes that string objects are standalone
objects that are not contained inside of any packages. The _HID
repair function replaces the string object with a brand new object
and attempts to delete the old object by decrementing the reference
count of the old object. Strings inside of packages have a reference
count of 2 so the _HID repair function leaves this object in a
dangling state and causes a memory leak.
Instead of allocating a brand new object and removing the old object,
use the existing object when repairing the _HID object.
Link: https://github.com/acpica/acpica/commit/52d1da5d
Signed-off-by: Erik Kaneda <erik.kaneda(a)intel.com>
Signed-off-by: Bob Moore <robert.moore(a)intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
Conflicts:
drivers/acpi/acpica/nsrepair2.c
[wangxiongfeng: fix conflicts caused by context mismatch]
Signed-off-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
drivers/acpi/acpica/nsrepair2.c | 17 ++++-------------
1 file changed, 4 insertions(+), 13 deletions(-)
diff --git a/drivers/acpi/acpica/nsrepair2.c b/drivers/acpi/acpica/nsrepair2.c
index 125143c41bb8..fb4f8f6209bd 100644
--- a/drivers/acpi/acpica/nsrepair2.c
+++ b/drivers/acpi/acpica/nsrepair2.c
@@ -491,9 +491,8 @@ acpi_ns_repair_HID(struct acpi_evaluate_info *info,
union acpi_operand_object **return_object_ptr)
{
union acpi_operand_object *return_object = *return_object_ptr;
- union acpi_operand_object *new_string;
- char *source;
char *dest;
+ char *source;
ACPI_FUNCTION_NAME(ns_repair_HID);
@@ -514,13 +513,6 @@ acpi_ns_repair_HID(struct acpi_evaluate_info *info,
return (AE_OK);
}
- /* It is simplest to always create a new string object */
-
- new_string = acpi_ut_create_string_object(return_object->string.length);
- if (!new_string) {
- return (AE_NO_MEMORY);
- }
-
/*
* Remove a leading asterisk if present. For some unknown reason, there
* are many machines in the field that contains IDs like this.
@@ -530,7 +522,7 @@ acpi_ns_repair_HID(struct acpi_evaluate_info *info,
source = return_object->string.pointer;
if (*source == '*') {
source++;
- new_string->string.length--;
+ return_object->string.length--;
ACPI_DEBUG_PRINT((ACPI_DB_REPAIR,
"%s: Removed invalid leading asterisk\n",
@@ -545,12 +537,11 @@ acpi_ns_repair_HID(struct acpi_evaluate_info *info,
* "NNNN####" where N is an uppercase letter or decimal digit, and
* # is a hex digit.
*/
- for (dest = new_string->string.pointer; *source; dest++, source++) {
+ for (dest = return_object->string.pointer; *source; dest++, source++) {
*dest = (char)toupper((int)*source);
}
+ return_object->string.pointer[return_object->string.length] = 0;
- acpi_ut_remove_reference(return_object);
- *return_object_ptr = new_string;
return (AE_OK);
}
--
2.20.1

Re: [PATCH OLK-5.10 v3 3/7] EHCI: Clear wakeup signal locked in S0 state when device plug in
by Zheng Zengkai 23 Feb '22
Loop Leo
On 2022/2/22 17:49, Jackie Liu wrote:
>
>
> On 2022/2/22 4:16 PM, Zheng Zengkai wrote:
>> From: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
>>
>> zhaoxin inclusion
>> category: feature
>> bugzilla: https://gitee.com/openeuler/kernel/issues/I40QDN
>> CVE: NA
>>
>> ----------------------------------------------------------------
>>
>> If we plug a LS/FS device into a USB2 port of EHCI, it latches a wakeup
>> signal inside the EHCI controller. This is an EHCI bug on some Zhaoxin
>> projects. If EHCI runtime suspend is enabled and no device is attached,
>> the PM core will let EHCI go to D3 to save power. However, once EHCI
>> enters D3, it releases the wakeup signal that was latched when a device
>> connected to the port during S0, which generates an SCI interrupt and
>> brings EHCI back to D0. But with no device connected, EHCI goes to D3
>> again, so there is a suspend-resume loop that generates SCI interrupts
>> continuously.
>>
>> In order to fix this issue, we need to clear the wakeup signal latched
>> in EHCI when the EHCI suspend function is called.
>>
>> Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
>> Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
>> ---
>> drivers/pci/pci-driver.c | 6 +++++-
>> drivers/usb/host/ehci-hcd.c | 21 +++++++++++++++++++++
>> drivers/usb/host/ehci-pci.c | 4 ++++
>> drivers/usb/host/ehci.h | 1 +
>> 4 files changed, 31 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
>> index 8b587fc97f7b..c33d7e0a60b5 100644
>> --- a/drivers/pci/pci-driver.c
>> +++ b/drivers/pci/pci-driver.c
>> @@ -518,7 +518,11 @@ static int pci_restore_standard_config(struct pci_dev *pci_dev)
>> }
>>
>> pci_restore_state(pci_dev);
>> - pci_pme_restore(pci_dev);
>> + if (!((pci_dev->vendor == PCI_VENDOR_ID_ZHAOXIN) &&
>> + (pci_dev->device == 0x3104) &&
>> + ((pci_dev->revision & 0xf0) == 0x90)) ||
>
>> + !(pci_dev->class == PCI_CLASS_SERIAL_USB_EHCI))
> Doesn't the statement above change the generic logic? It adds a
> condition that the restore only happens when pci_dev->class !=
> PCI_CLASS_SERIAL_USB_EHCI?
>
>> + pci_pme_restore(pci_dev);
>> return 0;
>> }
>>
>> diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
>> index 8aff19ff8e8f..586e2735d6e0 100644
>> --- a/drivers/usb/host/ehci-hcd.c
>> +++ b/drivers/usb/host/ehci-hcd.c
>> @@ -1142,6 +1142,27 @@ int ehci_suspend(struct usb_hcd *hcd, bool do_wakeup)
>> return -EBUSY;
>> }
>>
>> + /*clear wakeup signal locked in S0 state when device plug in*/
>> + if (ehci->zx_wakeup_clear == 1) {
>> + u32 __iomem *reg = &ehci->regs->port_status[4];
>> + u32 t1 = ehci_readl(ehci, reg);
>> +
>> + t1 &= (u32)~0xf0000;
>> + t1 |= PORT_TEST_FORCE;
>> + ehci_writel(ehci, t1, reg);
>> + t1 = ehci_readl(ehci, reg);
>> + usleep_range(1000, 2000);
>> + t1 &= (u32)~0xf0000;
>> + ehci_writel(ehci, t1, reg);
>> + usleep_range(1000, 2000);
>> + t1 = ehci_readl(ehci, reg);
>> + ehci_writel(ehci, t1 | PORT_CSC, reg);
>> + udelay(500);
>> + t1 = ehci_readl(ehci, &ehci->regs->status);
>> + ehci_writel(ehci, t1 & STS_PCD, &ehci->regs->status);
>> + ehci_readl(ehci, &ehci->regs->status);
>> + }
>> +
>> return 0;
>> }
>> EXPORT_SYMBOL_GPL(ehci_suspend);
>> diff --git a/drivers/usb/host/ehci-pci.c b/drivers/usb/host/ehci-pci.c
>> index e87cf3a00fa4..a5e27deda83a 100644
>> --- a/drivers/usb/host/ehci-pci.c
>> +++ b/drivers/usb/host/ehci-pci.c
>> @@ -222,6 +222,10 @@ static int ehci_pci_setup(struct usb_hcd *hcd)
>> ehci->has_synopsys_hc_bug = 1;
>> }
>> break;
>> + case PCI_VENDOR_ID_ZHAOXIN:
>> + if (pdev->device == 0x3104 && (pdev->revision & 0xf0) == 0x90)
>> + ehci->zx_wakeup_clear = 1;
>> + break;
>> }
>>
>> /* optional debug port, normally in the first BAR */
>> diff --git a/drivers/usb/host/ehci.h b/drivers/usb/host/ehci.h
>> index 59fd523c55f3..8da080a54668 100644
>> --- a/drivers/usb/host/ehci.h
>> +++ b/drivers/usb/host/ehci.h
>> @@ -219,6 +219,7 @@ struct ehci_hcd { /* one per controller */
>> unsigned need_oc_pp_cycle:1; /* MPC834X port power */
>> unsigned imx28_write_fix:1; /* For Freescale i.MX28 */
>> unsigned is_aspeed:1;
>> + unsigned zx_wakeup_clear:1;
>>
>> /* required for usb32 quirk */
>> #define OHCI_CTRL_HCFS (3 << 6)
>>
>>

Re: [PATCH OLK-5.10 v3 1/7] rtc: Fix set RTC time delay 500ms on some Zhaoxin SOCs
by Zheng Zengkai 23 Feb '22
Hi Jackie, Leo
On 2022/2/22 17:46, Jackie Liu wrote:
>
>
> On 2022/2/22 4:16 PM, Zheng Zengkai wrote:
>> From: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
>>
>> zhaoxin inclusion
>> category: bugfix
>> bugzilla: https://gitee.com/openeuler/kernel/issues/I40QDN
>> CVE: NA
>>
>> ----------------------------------------------------------------
>>
>> When the RTC divider is changed from reset to an operating time base,
>> the first update cycle should be 500ms later. But on some Zhaoxin SOCs,
>> this first update cycle is one second later.
>> So setting the RTC time on these Zhaoxin SOCs causes a 500 ms delay.
>> Skip setting up the RTC divider on these SOCs in mc146818_set_time to
>> fix it.
>>
>> Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
>> Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
>> ---
>> drivers/rtc/rtc-mc146818-lib.c | 16 ++++++++++++++++
>> 1 file changed, 16 insertions(+)
>>
>> diff --git a/drivers/rtc/rtc-mc146818-lib.c b/drivers/rtc/rtc-mc146818-lib.c
>> index 2ecd8752b088..96d9d0219394 100644
>> --- a/drivers/rtc/rtc-mc146818-lib.c
>> +++ b/drivers/rtc/rtc-mc146818-lib.c
>> @@ -171,8 +171,17 @@ int mc146818_set_time(struct rtc_time *time)
>>
>> save_control = CMOS_READ(RTC_CONTROL);
>> CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL);
>> +#ifdef CONFIG_X86
>> + if (!((boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR ||
>> + boot_cpu_data.x86_vendor == X86_VENDOR_ZHAOXIN) &&
>> + (boot_cpu_data.x86 <= 7 && boot_cpu_data.x86_model <= 59))) {
>> + save_freq_select = CMOS_READ(RTC_FREQ_SELECT);
>> + CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
>> + }
>> +#else
>> save_freq_select = CMOS_READ(RTC_FREQ_SELECT);
>> CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
>> +#endif
>>
>> #ifdef CONFIG_MACH_DECSTATION
>> CMOS_WRITE(real_yrs, RTC_DEC_YEAR);
>> @@ -190,7 +199,14 @@ int mc146818_set_time(struct rtc_time *time)
>> #endif
>>
>> CMOS_WRITE(save_control, RTC_CONTROL);
>> +#ifdef CONFIG_X86
>> + if (!((boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR ||
>> + boot_cpu_data.x86_vendor == X86_VENDOR_ZHAOXIN) &&
>> + (boot_cpu_data.x86 <= 7 && boot_cpu_data.x86_model <= 59)))
> If you just write #endif here, the following else and endif are no longer needed.
>
>> + CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
>> +#else
>> CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
>> +#endif
Here, following the code logic our Zhaoxin colleague explained last time:
On x86, the affected Zhaoxin platforms do not execute
CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT), i.e. the commit
message's "Skip setup RTC divider on these SOCs in mc146818_set_time to
fix it";
other architectures (the #else branch) still need to execute
CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT).
@Jackie @Leo,
could you both check whether this understanding is OK?
>>
>> spin_unlock_irqrestore(&rtc_lock, flags);
>>
>>
>>

Re: [PATCH OLK-5.10 v3 3/7] EHCI: Clear wakeup signal locked in S0 state when device plug in
by Zheng Zengkai 23 Feb '22
Hi Jackie,
On 2022/2/22 17:49, Jackie Liu wrote:
>
>
> On 2022/2/22 4:16 PM, Zheng Zengkai wrote:
>> From: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
>>
>> zhaoxin inclusion
>> category: feature
>> bugzilla: https://gitee.com/openeuler/kernel/issues/I40QDN
>> CVE: NA
>>
>> ----------------------------------------------------------------
>>
>> If we plug a LS/FS device into a USB2 port of EHCI, it latches a wakeup
>> signal inside the EHCI controller. This is an EHCI bug on some Zhaoxin
>> projects. If EHCI runtime suspend is enabled and no device is attached,
>> the PM core will let EHCI go to D3 to save power. However, once EHCI
>> enters D3, it releases the wakeup signal that was latched when a device
>> connected to the port during S0, which generates an SCI interrupt and
>> brings EHCI back to D0. But with no device connected, EHCI goes to D3
>> again, so there is a suspend-resume loop that generates SCI interrupts
>> continuously.
>>
>> In order to fix this issue, we need to clear the wakeup signal latched
>> in EHCI when the EHCI suspend function is called.
>>
>> Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
>> Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
>> ---
>> drivers/pci/pci-driver.c | 6 +++++-
>> drivers/usb/host/ehci-hcd.c | 21 +++++++++++++++++++++
>> drivers/usb/host/ehci-pci.c | 4 ++++
>> drivers/usb/host/ehci.h | 1 +
>> 4 files changed, 31 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
>> index 8b587fc97f7b..c33d7e0a60b5 100644
>> --- a/drivers/pci/pci-driver.c
>> +++ b/drivers/pci/pci-driver.c
>> @@ -518,7 +518,11 @@ static int pci_restore_standard_config(struct pci_dev *pci_dev)
>> }
>>
>> pci_restore_state(pci_dev);
>> - pci_pme_restore(pci_dev);
>> + if (!((pci_dev->vendor == PCI_VENDOR_ID_ZHAOXIN) &&
>> + (pci_dev->device == 0x3104) &&
>> + ((pci_dev->revision & 0xf0) == 0x90)) ||
>
>> + !(pci_dev->class == PCI_CLASS_SERIAL_USB_EHCI))
> Doesn't the line above change the generic logic? It adds a
> pci_dev->class != PCI_CLASS_SERIAL_USB_EHCI condition before the
> restore is performed?
>
>> + pci_pme_restore(pci_dev);
Logically it should be equivalent to
!((pci_dev->vendor == PCI_VENDOR_ID_ZHAOXIN) && (pci_dev->device ==
0x3104) && ((pci_dev->revision & 0xf0) == 0x90) && (pci_dev->class ==
PCI_CLASS_SERIAL_USB_EHCI)),
i.e. only Zhaoxin's PCI_CLASS_SERIAL_USB_EHCI device with device ID
0x3104 skips the pci_pme_restore(pci_dev) operation.
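The equivalence claimed in this reply is just De Morgan's law; a tiny self-contained check (plain booleans standing in for the vendor/device/revision/class tests, so purely illustrative) confirms the two forms agree on all inputs:

```c
#include <assert.h>
#include <stdbool.h>

/* a = vendor is Zhaoxin, b = device == 0x3104,
 * c = (revision & 0xf0) == 0x90, d = class is EHCI.
 * pci_pme_restore() runs when the guard is true. */

/* The guard as written in the patch: !(a && b && c) || !d */
static bool restore_as_posted(bool a, bool b, bool c, bool d)
{
	return !(a && b && c) || !d;
}

/* The reading given in the reply: !(a && b && c && d) */
static bool restore_as_explained(bool a, bool b, bool c, bool d)
{
	return !(a && b && c && d);
}
```

Both are false only when all four sub-conditions hold, i.e. only the Zhaoxin 0x3104 EHCI device skips pci_pme_restore().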
>> return 0;
>> }
>>
>> diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
>> index 8aff19ff8e8f..586e2735d6e0 100644
>> --- a/drivers/usb/host/ehci-hcd.c
>> +++ b/drivers/usb/host/ehci-hcd.c
>> @@ -1142,6 +1142,27 @@ int ehci_suspend(struct usb_hcd *hcd, bool do_wakeup)
>> return -EBUSY;
>> }
>>
>> + /*clear wakeup signal locked in S0 state when device plug in*/
>> + if (ehci->zx_wakeup_clear == 1) {
>> + u32 __iomem *reg = &ehci->regs->port_status[4];
>> + u32 t1 = ehci_readl(ehci, reg);
>> +
>> + t1 &= (u32)~0xf0000;
>> + t1 |= PORT_TEST_FORCE;
>> + ehci_writel(ehci, t1, reg);
>> + t1 = ehci_readl(ehci, reg);
>> + usleep_range(1000, 2000);
>> + t1 &= (u32)~0xf0000;
>> + ehci_writel(ehci, t1, reg);
>> + usleep_range(1000, 2000);
>> + t1 = ehci_readl(ehci, reg);
>> + ehci_writel(ehci, t1 | PORT_CSC, reg);
>> + udelay(500);
>> + t1 = ehci_readl(ehci, &ehci->regs->status);
>> + ehci_writel(ehci, t1 & STS_PCD, &ehci->regs->status);
>> + ehci_readl(ehci, &ehci->regs->status);
>> + }
>> +
>> return 0;
>> }
>> EXPORT_SYMBOL_GPL(ehci_suspend);
>> diff --git a/drivers/usb/host/ehci-pci.c b/drivers/usb/host/ehci-pci.c
>> index e87cf3a00fa4..a5e27deda83a 100644
>> --- a/drivers/usb/host/ehci-pci.c
>> +++ b/drivers/usb/host/ehci-pci.c
>> @@ -222,6 +222,10 @@ static int ehci_pci_setup(struct usb_hcd *hcd)
>> ehci->has_synopsys_hc_bug = 1;
>> }
>> break;
>> + case PCI_VENDOR_ID_ZHAOXIN:
>> + if (pdev->device == 0x3104 && (pdev->revision & 0xf0) == 0x90)
>> + ehci->zx_wakeup_clear = 1;
>> + break;
>> }
>>
>> /* optional debug port, normally in the first BAR */
>> diff --git a/drivers/usb/host/ehci.h b/drivers/usb/host/ehci.h
>> index 59fd523c55f3..8da080a54668 100644
>> --- a/drivers/usb/host/ehci.h
>> +++ b/drivers/usb/host/ehci.h
>> @@ -219,6 +219,7 @@ struct ehci_hcd { /* one per controller */
>> unsigned need_oc_pp_cycle:1; /* MPC834X port power */
>> unsigned imx28_write_fix:1; /* For Freescale i.MX28 */
>> unsigned is_aspeed:1;
>> + unsigned zx_wakeup_clear:1;
>>
>> /* required for usb32 quirk */
>> #define OHCI_CTRL_HCFS (3 << 6)
>>
>>
[PATCH OLK-5.10,v4,0/3] Add support for kunpeng920 Ultrasoc System Memory Buffer driver
by Qi Liu 23 Feb '22
Add support for kunpeng920 Ultrasoc System Memory Buffer driver
v3->v4:
Resend this patchset as the openEuler maillist hasn't received patch [1/3] of the v3 series.
Qi Liu (3):
coresight: etm4x: Modify core-commit to avoid HiSilicon ETM overflow
drivers/coresight: Add Ultrasoc System Memory Buffer driver
arm64: openeuler_defconfig: Enable config for ultrasoc driver
arch/arm64/configs/openeuler_defconfig | 13 +-
drivers/hwtracing/coresight/Kconfig | 20 +
drivers/hwtracing/coresight/Makefile | 1 +
.../coresight/coresight-etm4x-core.c | 98 +++
drivers/hwtracing/coresight/coresight-etm4x.h | 8 +
drivers/hwtracing/coresight/ultrasoc-smb.c | 602 ++++++++++++++++++
drivers/hwtracing/coresight/ultrasoc-smb.h | 100 +++
7 files changed, 841 insertions(+), 1 deletion(-)
create mode 100644 drivers/hwtracing/coresight/ultrasoc-smb.c
create mode 100644 drivers/hwtracing/coresight/ultrasoc-smb.h
--
2.33.0
22 Feb '22
hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4PM10
CVE: NA
--------------------------------
During kernel copy_from_user processing, the kernel may trigger a RAS
exception when reading pages. In this solution, we identify this
scenario in the kernel's do_sea handling path, send a SIGBUS signal
to the process that triggered copy_from_user, and isolate the affected
memory pages, preventing a kernel panic.
This feature can be switched on/off both via the cmdline
(uce_kernel_recovery) and via proc (/proc/sys/kernel/uce_kernel_recovery).
Signed-off-by: Tong Tiangen <tongtiangen(a)huawei.com>
---
v1->v2:
1. update commit msg.
2. change copy_from_user return value.
3. change copy_from_user proc control bit.
arch/arm64/Kconfig | 9 ++
arch/arm64/include/asm/exception.h | 12 +++
arch/arm64/lib/copy_from_user.S | 5 +
arch/arm64/mm/fault.c | 154 +++++++++++++++++++++++++++++
include/linux/kernel.h | 4 +
kernel/sysctl.c | 11 +++
6 files changed, 195 insertions(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index c0f6a275f798..5858cb9a3dac 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2155,6 +2155,15 @@ config ARCH_HIBERNATION_HEADER
config ARCH_SUSPEND_POSSIBLE
def_bool y
+config UCE_KERNEL_RECOVERY
+ bool "uce kernel recovery from special scenario"
+ def_bool n
+ help
+ With ARM v8.2 RAS Extension, SEA are usually triggered when memory errors
+ are consumed. In some cases, if the error address is in a user page there
+ is a chance to recover. Such as error occurs in COW and pagecache reading
+ scenario, we can isolate this page and killing process instead of die.
+
endmenu
menu "CPU Power Management"
diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index 0756191f44f6..bad92fe763fb 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -53,4 +53,16 @@ void do_cp15instr(unsigned int esr, struct pt_regs *regs);
void do_el0_svc(struct pt_regs *regs);
void do_el0_svc_compat(struct pt_regs *regs);
void do_ptrauth_fault(struct pt_regs *regs, unsigned int esr);
+
+#ifdef CONFIG_UCE_KERNEL_RECOVERY
+struct uce_kernel_recovery_info {
+ int (*fn)(void);
+ const char *name;
+ unsigned long addr;
+ unsigned long size;
+};
+
+extern int copy_from_user_sea_fallback(void);
+#endif
+
#endif /* __ASM_EXCEPTION_H */
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 2cf999e41d30..b03ea0faa118 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -60,6 +60,11 @@ SYM_FUNC_START(__arch_copy_from_user)
#include "copy_template.S"
mov x0, #0 // Nothing to copy
ret
+
+ .global copy_from_user_sea_fallback
+copy_from_user_sea_fallback:
+ sub x0, end, dst // bytes not copied
+ ret
SYM_FUNC_END(__arch_copy_from_user)
EXPORT_SYMBOL(__arch_copy_from_user)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 3fc5aceb72eb..7f76e277daa3 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -634,6 +634,127 @@ static int do_bad(unsigned long addr, unsigned int esr, struct pt_regs *regs)
return 1; /* "fault" */
}
+#ifdef CONFIG_UCE_KERNEL_RECOVERY
+int kernel_access_sea_recovery;
+
+#define UCE_KER_REC_NUM ARRAY_SIZE(reco_info)
+static struct uce_kernel_recovery_info reco_info[] = {
+ {NULL, NULL, 0, 0},
+ {NULL, NULL, 0, 0},
+ {copy_from_user_sea_fallback, "__arch_copy_from_user", (unsigned long)__arch_copy_from_user, 0},
+};
+
+static int __init kernel_access_sea_recovery_init(void)
+{
+ unsigned long addr, size, offset;
+ unsigned int i;
+
+ for (i = 0; i < UCE_KER_REC_NUM; i++) {
+ addr = reco_info[i].addr;
+ if (!kallsyms_lookup_size_offset(addr, &size, &offset)) {
+ pr_info("UCE: symbol %s lookup addr fail.\n",
+ reco_info[i].name);
+ size = 0;
+ }
+
+ reco_info[i].size = size;
+ }
+
+ return 1;
+}
+fs_initcall(kernel_access_sea_recovery_init);
+
+static int __init enable_kernel_access_sea_recovery(char *str)
+{
+ int max = (1 << UCE_KER_REC_NUM) - 1;
+ int val;
+
+ if (kstrtoint(str, 0, &val))
+ return -EINVAL;
+
+ if (val < 0 || val > max) {
+ pr_info("UCE: invalid uce_kernel_recovery value %d", val);
+ return -EINVAL;
+ }
+
+ kernel_access_sea_recovery = val;
+
+ return 1;
+}
+__setup("uce_kernel_recovery=", enable_kernel_access_sea_recovery);
+
+/*
+ * what is kernel recovery?
+ * If the process's private data is accessed in the kernel mode to trigger
+ * special sea fault, it can controlled by killing the process and isolating
+ * the failure pages instead of die.
+ */
+static int is_in_kernel_recovery(unsigned int esr, struct pt_regs *regs)
+{
+ /*
+ * target insn: ldp-pre, ldp-post, ldp-offset,
+ * ldr-64bit-pre/pose, ldr-32bit-pre/post, ldrb-pre/post, ldrh-pre/post
+ */
+ u32 target_insn[] = {0xa8c, 0xa9c, 0xa94, 0xf84, 0x784, 0x384, 0xb84};
+ void *pc = (void *)instruction_pointer(regs);
+ struct uce_kernel_recovery_info *info;
+ bool insn_match = false;
+ u32 insn;
+ int i;
+
+ pr_emerg("UCE: %s-%d, kernel recovery: 0x%x, esr: 0x%08x -- %s, %pS\n",
+ current->comm, current->pid, kernel_access_sea_recovery, esr,
+ esr_get_class_string(esr), pc);
+
+ if (aarch64_insn_read((void *)pc, &insn)) {
+ pr_emerg("UCE: insn read fail.\n");
+ return -EFAULT;
+ }
+
+ /*
+ * We process special ESR:
+ * EC : 0b100101 Data Abort taken without a change in Exception level.
+ * DFSC : 0b010000 Synchronous External abort, not on translation table
+ * walk or hardware update of translation table.
+ * eg: 0x96000610
+ */
+ if (ESR_ELx_EC(esr) != ESR_ELx_EC_DABT_CUR ||
+ (esr & ESR_ELx_FSC) != ESR_ELx_FSC_EXTABT) {
+ pr_emerg("UCE: esr not match.\n");
+ return -EINVAL;
+ }
+
+ insn = (insn >> 20) & 0xffc;
+ for (i = 0; i < ARRAY_SIZE(target_insn); i++) {
+ if (insn == target_insn[i]) {
+ insn_match = true;
+ break;
+ }
+ }
+
+ if (!insn_match) {
+ pr_emerg("UCE: insn 0x%x is not match.\n", insn);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < UCE_KER_REC_NUM; i++) {
+ if (!((kernel_access_sea_recovery >> i) & 0x1))
+ continue;
+
+ info = &reco_info[i];
+ if (info->fn && regs->pc >= info->addr &&
+ regs->pc < (info->addr + info->size)) {
+ pr_emerg("UCE: total match %s success.\n", info->name);
+ return i;
+ }
+ }
+
+ pr_emerg("UCE: symbol is not match or switch if off, kernel recovery %d.\n",
+ kernel_access_sea_recovery);
+ return -EINVAL;
+}
+#endif
+
static int do_sea(unsigned long addr, unsigned int esr, struct pt_regs *regs)
{
const struct fault_info *inf;
@@ -649,6 +770,39 @@ static int do_sea(unsigned long addr, unsigned int esr, struct pt_regs *regs)
return 0;
}
+#ifdef CONFIG_UCE_KERNEL_RECOVERY
+ if (!user_mode(regs)) {
+ int idx;
+
+ if (!current->mm || !kernel_access_sea_recovery) {
+ pr_emerg("UCE: kernel recovery %d, %s-%d is %s-thread.\n",
+ kernel_access_sea_recovery,
+ current->comm, current->pid,
+ (current->mm) ? "user" : "kernel");
+ die("Uncorrected hardware memory error in kernel-access\n",
+ regs, esr);
+ }
+
+ idx = is_in_kernel_recovery(esr, regs);
+ if (idx >= 0 && idx < UCE_KER_REC_NUM) {
+ current->thread.fault_address = 0;
+ current->thread.fault_code = esr;
+ regs->pc = (unsigned long)reco_info[idx].fn;
+
+ if (esr & ESR_ELx_FnV)
+ siaddr = NULL;
+ else
+ siaddr = (void __user *)addr;
+
+ arm64_force_sig_fault(inf->sig, inf->code, siaddr,
+ "Uncorrected hardware memory use with kernel recovery in kernel-access\n");
+ } else {
+ die("Uncorrected hardware memory error (not match idx or sence switch is off) in kernel-access\n",
+ regs, esr);
+ }
+ }
+#endif
+
if (esr & ESR_ELx_FnV)
siaddr = NULL;
else
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index 78a0907f0b04..b634fb1cce38 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -558,6 +558,10 @@ extern int sysctl_panic_on_stackoverflow;
extern bool crash_kexec_post_notifiers;
+#ifdef CONFIG_UCE_KERNEL_RECOVERY
+extern int kernel_access_sea_recovery;
+#endif
+
/*
* panic_cpu is used for synchronizing panic() and crash_kexec() execution. It
* holds a CPU number which is executing panic() currently. A value of
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 89ef0c1a1642..e38fff657683 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -2711,6 +2711,17 @@ static struct ctl_table kern_table[] = {
.extra1 = &one_hundred,
.extra2 = &one_thousand,
},
+#endif
+#if defined(CONFIG_UCE_KERNEL_RECOVERY)
+ {
+ .procname = "uce_kernel_recovery",
+ .data = &kernel_access_sea_recovery,
+ .maxlen = sizeof(kernel_access_sea_recovery),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = &four,
+ .extra2 = &four,
+ },
#endif
{ }
};
--
2.18.0.huawei.25
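As a user-space sketch of the on/off control described in this patch (the bit assignment is taken from the patch's reco_info[] table, where index 2 is __arch_copy_from_user; the names below are illustrative, not kernel API):

```c
#include <assert.h>

/* Number of recoverable scenarios in the patch's reco_info[] table. */
#define UCE_KER_REC_NUM 3

/* Bit 2 (value 4) selects copy_from_user recovery, matching the third
 * reco_info[] entry. */
#define UCE_COPY_FROM_USER 4

/* Mirrors enable_kernel_access_sea_recovery(): the value is a bitmask
 * with one bit per scenario; anything outside [0, (1 << NUM) - 1] is
 * rejected. Returns the accepted value, or -1 for invalid input. */
static int uce_validate(int val)
{
	int max = (1 << UCE_KER_REC_NUM) - 1;

	if (val < 0 || val > max)
		return -1;
	return val;
}
```

So `uce_kernel_recovery=4` on the cmdline enables only the copy_from_user scenario; note that the sysctl entry in this version sets both extra1 and extra2 to &four, so the proc interface accepts exactly 4.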
[PATCH openEuler-5.10 12/90] x86/cpu/intel: Add a nosgx kernel parameter
by Zheng Zengkai 22 Feb '22
From: Jarkko Sakkinen <jarkko(a)kernel.org>
mainline inclusion
from mainline-v5.11-rc1
commit 38853a303982e3be3eccb1a1132399a5c5e2d806
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SIGI
CVE: NA
--------------------------------
Add a kernel parameter to disable SGX kernel support and document it.
[ bp: Massage. ]
Intel-SIG: commit 38853a303982 x86/cpu/intel: Add a nosgx kernel parameter
Backport for SGX Foundations support
Signed-off-by: Jarkko Sakkinen <jarkko(a)kernel.org>
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Reviewed-by: Sean Christopherson <sean.j.christopherson(a)intel.com>
Acked-by: Jethro Beekman <jethro(a)fortanix.com>
Tested-by: Sean Christopherson <sean.j.christopherson(a)intel.com>
Link: https://lkml.kernel.org/r/20201112220135.165028-9-jarkko@kernel.org
Signed-off-by: Fan Du <fan.du(a)intel.com> #openEuler_contributor
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
Reviewed-by: Bamvor Zhang <bamvor.zhang(a)suse.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
Documentation/admin-guide/kernel-parameters.txt | 2 ++
arch/x86/kernel/cpu/feat_ctl.c | 9 +++++++++
2 files changed, 11 insertions(+)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index b73108a82f18..a4e5614bee12 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3498,6 +3498,8 @@
nosep [BUGS=X86-32] Disables x86 SYSENTER/SYSEXIT support.
+ nosgx [X86-64,SGX] Disables Intel SGX kernel support.
+
nosmp [SMP] Tells an SMP kernel to act as a UP kernel,
and disable the IO APIC. legacy for "maxcpus=0".
diff --git a/arch/x86/kernel/cpu/feat_ctl.c b/arch/x86/kernel/cpu/feat_ctl.c
index d38e97325018..3b1b01f2b248 100644
--- a/arch/x86/kernel/cpu/feat_ctl.c
+++ b/arch/x86/kernel/cpu/feat_ctl.c
@@ -99,6 +99,15 @@ static void clear_sgx_caps(void)
setup_clear_cpu_cap(X86_FEATURE_SGX_LC);
}
+static int __init nosgx(char *str)
+{
+ clear_sgx_caps();
+
+ return 0;
+}
+
+early_param("nosgx", nosgx);
+
void init_ia32_feat_ctl(struct cpuinfo_x86 *c)
{
bool tboot = tboot_enabled();
--
2.20.1
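The patch above wires `nosgx` up through `early_param()`; as a stand-alone illustration of the idea (this scanner is a hypothetical user-space mock, not the kernel's parser), recognizing the bare token on a boot line looks like:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Illustrative stand-in for the kernel's early_param("nosgx", ...):
 * scan a space-separated command line for the bare token "nosgx". */
static bool cmdline_has_nosgx(const char *cmdline)
{
	const char *p = cmdline;

	while ((p = strstr(p, "nosgx")) != NULL) {
		bool at_start = (p == cmdline) || (p[-1] == ' ');
		bool at_end = (p[5] == '\0') || (p[5] == ' ');

		if (at_start && at_end)
			return true;
		p += 5;
	}
	return false;
}
```

In the real kernel, hitting the parameter simply calls clear_sgx_caps(), so all SGX feature bits are cleared before feature detection runs.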
Fix 5 bugs:
1. Fix array bounds error in ethtool get_link_ksettings
2. Fix ethtool loopback command failure
3. Fix xor checksum error when sending a non 4B-aligned
message to firmware
4. Fix an error when netdev failed to link up
5. Reduce the timeout of the channel between driver and
firmware
Yanling Song (5):
net/spnic: Fix array bounds error in ethtool get_link_ksettings
net/spnic: Fix ethtool loopback command failure
net/spnic: Fix xor checksum error when sending a non 4B-aligned
message to firmware
net/spnic: Fix an error when netdev failed to link up
net/spnic: Reduce the timeout of the channel between driver and
firmware
.../net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c | 2 +-
.../net/ethernet/ramaxel/spnic/hw/sphw_mbox.c | 10 ++--
.../net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c | 2 +-
.../ramaxel/spnic/spnic_ethtool_stats.c | 4 +-
.../ethernet/ramaxel/spnic/spnic_mag_cfg.c | 2 +-
.../ethernet/ramaxel/spnic/spnic_nic_cfg_vf.c | 49 ++++++++++++-------
6 files changed, 41 insertions(+), 28 deletions(-)
--
2.32.0
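Bug 3 in the list above (xor checksum error on non-4B-aligned messages) is a classic word-wise-checksum pitfall. The sketch below is a generic, hypothetical illustration of correct tail handling, not the spnic driver's actual algorithm:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical illustration only: XOR a buffer as 32-bit host-order
 * words, zero-padding the final partial word so lengths that are not a
 * multiple of 4 bytes are checksummed correctly. */
static uint32_t xor32_checksum(const uint8_t *buf, size_t len)
{
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i + 4 <= len; i += 4) {
		uint32_t w;

		memcpy(&w, buf + i, 4); /* avoid unaligned loads */
		sum ^= w;
	}
	if (i < len) { /* zero-padded tail word */
		uint32_t w = 0;

		memcpy(&w, buf + i, len - i);
		sum ^= w;
	}
	return sum;
}
```

The key property is that trailing zero bytes do not change the checksum, so sender and receiver agree even when the message length is not 4B-aligned.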
[PATCH OLK-5.10,v3,0/3] Add support for kunpeng920 Ultrasoc System Memory Buffer driver
by Qi Liu 22 Feb '22
Add support for kunpeng920 Ultrasoc System Memory Buffer driver
Qi Liu (3):
coresight: etm4x: Modify core-commit to avoid HiSilicon ETM overflow
drivers/coresight: Add Ultrasoc System Memory Buffer driver
arm64: openeuler_defconfig: Enable config for ultrasoc driver
arch/arm64/configs/openeuler_defconfig | 13 +-
drivers/hwtracing/coresight/Kconfig | 20 +
drivers/hwtracing/coresight/Makefile | 1 +
.../coresight/coresight-etm4x-core.c | 98 +++
drivers/hwtracing/coresight/coresight-etm4x.h | 8 +
drivers/hwtracing/coresight/ultrasoc-smb.c | 602 ++++++++++++++++++
drivers/hwtracing/coresight/ultrasoc-smb.h | 100 +++
7 files changed, 841 insertions(+), 1 deletion(-)
create mode 100644 drivers/hwtracing/coresight/ultrasoc-smb.c
create mode 100644 drivers/hwtracing/coresight/ultrasoc-smb.h
--
2.33.0
Re: [PATCH OLK-5.10 v3 1/7] rtc: Fix set RTC time delay 500ms on some Zhaoxin SOCs
by Zheng Zengkai 22 Feb '22
Looping in Leo.
On 2022/2/22 17:46, Jackie Liu wrote:
>
>
> On 2022/2/22 4:16 PM, Zheng Zengkai wrote:
>> From: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
>>
>> zhaoxin inclusion
>> category: bugfix
>> bugzilla: https://gitee.com/openeuler/kernel/issues/I40QDN
>> CVE: NA
>>
>> ----------------------------------------------------------------
>>
>> When the RTC divider is changed from reset to an operating time base,
>> the first update cycle should be 500ms later. But on some Zhaoxin SOCs,
>> this first update cycle is one second later.
>> So setting the RTC time on these Zhaoxin SOCs causes a 500ms delay.
>> Skip setting up the RTC divider on these SOCs in mc146818_set_time to fix it.
>>
>> Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
>> Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
>> ---
>> drivers/rtc/rtc-mc146818-lib.c | 16 ++++++++++++++++
>> 1 file changed, 16 insertions(+)
>>
>> diff --git a/drivers/rtc/rtc-mc146818-lib.c b/drivers/rtc/rtc-mc146818-lib.c
>> index 2ecd8752b088..96d9d0219394 100644
>> --- a/drivers/rtc/rtc-mc146818-lib.c
>> +++ b/drivers/rtc/rtc-mc146818-lib.c
>> @@ -171,8 +171,17 @@ int mc146818_set_time(struct rtc_time *time)
>>
>> save_control = CMOS_READ(RTC_CONTROL);
>> CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL);
>> +#ifdef CONFIG_X86
>> + if (!((boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR ||
>> + boot_cpu_data.x86_vendor == X86_VENDOR_ZHAOXIN) &&
>> + (boot_cpu_data.x86 <= 7 && boot_cpu_data.x86_model <= 59))) {
>> + save_freq_select = CMOS_READ(RTC_FREQ_SELECT);
>> + CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
>> + }
>> +#else
>> save_freq_select = CMOS_READ(RTC_FREQ_SELECT);
>> CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
>> +#endif
>>
>> #ifdef CONFIG_MACH_DECSTATION
>> CMOS_WRITE(real_yrs, RTC_DEC_YEAR);
>> @@ -190,7 +199,14 @@ int mc146818_set_time(struct rtc_time *time)
>> #endif
>>
>> CMOS_WRITE(save_control, RTC_CONTROL);
>> +#ifdef CONFIG_X86
>> + if (!((boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR ||
>> + boot_cpu_data.x86_vendor == X86_VENDOR_ZHAOXIN) &&
>> + (boot_cpu_data.x86 <= 7 && boot_cpu_data.x86_model <= 59)))
> Just put an #endif right here; then the #else and #endif that follow are unnecessary.
>
>> + CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
>> +#else
>> CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
>> +#endif
>>
>> spin_unlock_irqrestore(&rtc_lock, flags);
>>
>>
>>
[PATCH OLK-5.10,v2,0/3] Add support for kunpeng920 Ultrasoc System Memory Buffer driver
by Qi Liu 22 Feb '22
Add support for kunpeng920 Ultrasoc System Memory Buffer driver
Qi Liu (3):
coresight: etm4x: Modify core-commit to avoid HiSilicon ETM overflow
drivers/coresight: Add Ultrasoc System Memory Buffer driver
arm64: openeuler_defconfig: Enable config for ultrasoc driver
arch/arm64/configs/openeuler_defconfig | 7 +-
drivers/hwtracing/coresight/Kconfig | 20 +
drivers/hwtracing/coresight/Makefile | 1 +
.../coresight/coresight-etm4x-core.c | 98 +++
drivers/hwtracing/coresight/coresight-etm4x.h | 8 +
drivers/hwtracing/coresight/ultrasoc-smb.c | 602 ++++++++++++++++++
drivers/hwtracing/coresight/ultrasoc-smb.h | 100 +++
7 files changed, 835 insertions(+), 1 deletion(-)
create mode 100644 drivers/hwtracing/coresight/ultrasoc-smb.c
create mode 100644 drivers/hwtracing/coresight/ultrasoc-smb.h
--
2.33.0
V3: add missing config enable.
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SIGI
Borislav Petkov (1):
x86/sgx: Fix sgx_ioc_enclave_provision() kernel-doc
Daniel Vetter (1):
x86/sgx: Drop racy follow_pfn() check
Dave Hansen (3):
x86/sgx: Clarify 'laundry_list' locking
selftests/sgx: Improve error detection and messages
selftests/sgx: remove checks for file execute permissions
Ira Weiny (1):
x86/sgx: Remove unnecessary kmap() from sgx_ioc_enclave_init()
Jarkko Sakkinen (25):
x86/sgx: Add SGX architectural data structures
x86/sgx: Add wrappers for ENCLS functions
x86/cpu/intel: Add a nosgx kernel parameter
x86/sgx: Add SGX page allocator functions
x86/sgx: Add an SGX misc driver interface
x86/sgx: Add SGX_IOC_ENCLAVE_CREATE
x86/sgx: Add SGX_IOC_ENCLAVE_ADD_PAGES
x86/sgx: Add SGX_IOC_ENCLAVE_INIT
x86/sgx: Add SGX_IOC_ENCLAVE_PROVISION
selftests/x86: Add a selftest for SGX
x86/sgx: Add a page reclaimer
x86/sgx: Add ptrace() support for the SGX driver
Documentation/x86: Document SGX kernel architecture
x86/sgx: Update MAINTAINERS
x86/sgx: Return -ERESTARTSYS in sgx_ioc_enclave_add_pages()
x86/sgx: Return -EINVAL on a zero length buffer in
sgx_ioc_enclave_add_pages()
x86/sgx: Maintain encl->refcount for each encl->mm_list entry
x86/sgx: Replace section->init_laundry_list with sgx_dirty_page_list
x86/sgx: Add a basic NUMA allocation scheme to sgx_alloc_epc_page()
selftests/sgx: Use a statically generated 3072-bit RSA key
selftests/sgx: Rename 'eenter' and 'sgx_call_vdso'
selftests/sgx: Migrate to kselftest harness
selftests/sgx: Dump enclave memory map
selftests/sgx: Add EXPECT_EEXIT() macro
selftests/sgx: Refine the test enclave to have storage
Laibin Qiu (1):
deconfig: intel ice-lake missing config enable
Sami Tolvanen (1):
x86/sgx: Fix the return type of sgx_init()
Sean Christopherson (10):
x86/cpufeatures: Add Intel SGX hardware bits
x86/{cpufeatures,msr}: Add Intel SGX Launch Control hardware bits
x86/sgx: Initialize metadata for Enclave Page Cache (EPC) sections
x86/cpu/intel: Detect SGX support
mm: Add 'mprotect' hook to struct vm_operations_struct
x86/vdso: Add support for exception fixup in vDSO functions
x86/fault: Add a helper function to sanitize error code
x86/traps: Attempt to fixup exceptions in vDSO before signaling
x86/vdso: Implement a vDSO for Intel SGX enclave call
x86/sgx: Expose SGX architectural definitions to the kernel
Tianjia Zhang (2):
selftests/sgx: Use getauxval() to simplify test code
selftests/sgx: Fix Q1 and Q2 calculation in sigstruct.c
.../admin-guide/kernel-parameters.txt | 2 +
.../userspace-api/ioctl/ioctl-number.rst | 1 +
Documentation/x86/index.rst | 1 +
Documentation/x86/sgx.rst | 211 +++++
MAINTAINERS | 14 +
arch/x86/Kconfig | 18 +
arch/x86/configs/openeuler_defconfig | 3 +-
arch/x86/entry/vdso/Makefile | 8 +-
arch/x86/entry/vdso/extable.c | 46 ++
arch/x86/entry/vdso/extable.h | 28 +
arch/x86/entry/vdso/vdso-layout.lds.S | 9 +-
arch/x86/entry/vdso/vdso.lds.S | 1 +
arch/x86/entry/vdso/vdso2c.h | 50 +-
arch/x86/entry/vdso/vsgx.S | 151 ++++
arch/x86/include/asm/cpufeatures.h | 2 +
arch/x86/include/asm/disabled-features.h | 8 +-
arch/x86/include/asm/enclu.h | 9 +
arch/x86/include/asm/msr-index.h | 8 +
arch/x86/include/asm/sgx.h | 348 ++++++++
arch/x86/include/asm/vdso.h | 5 +
arch/x86/include/uapi/asm/sgx.h | 168 ++++
arch/x86/kernel/cpu/Makefile | 1 +
arch/x86/kernel/cpu/feat_ctl.c | 38 +-
arch/x86/kernel/cpu/sgx/Makefile | 5 +
arch/x86/kernel/cpu/sgx/driver.c | 197 +++++
arch/x86/kernel/cpu/sgx/driver.h | 29 +
arch/x86/kernel/cpu/sgx/encl.c | 737 +++++++++++++++++
arch/x86/kernel/cpu/sgx/encl.h | 119 +++
arch/x86/kernel/cpu/sgx/encls.h | 231 ++++++
arch/x86/kernel/cpu/sgx/ioctl.c | 718 ++++++++++++++++
arch/x86/kernel/cpu/sgx/main.c | 768 ++++++++++++++++++
arch/x86/kernel/cpu/sgx/sgx.h | 83 ++
arch/x86/kernel/traps.c | 10 +
arch/x86/mm/fault.c | 33 +-
include/linux/mm.h | 7 +
mm/mprotect.c | 7 +
tools/testing/selftests/Makefile | 1 +
tools/testing/selftests/sgx/.gitignore | 2 +
tools/testing/selftests/sgx/Makefile | 57 ++
tools/testing/selftests/sgx/call.S | 44 +
tools/testing/selftests/sgx/defines.h | 31 +
tools/testing/selftests/sgx/load.c | 305 +++++++
tools/testing/selftests/sgx/main.c | 293 +++++++
tools/testing/selftests/sgx/main.h | 41 +
tools/testing/selftests/sgx/sign_key.S | 12 +
tools/testing/selftests/sgx/sign_key.pem | 39 +
tools/testing/selftests/sgx/sigstruct.c | 382 +++++++++
tools/testing/selftests/sgx/test_encl.c | 35 +
tools/testing/selftests/sgx/test_encl.lds | 41 +
.../selftests/sgx/test_encl_bootstrap.S | 89 ++
50 files changed, 5426 insertions(+), 20 deletions(-)
create mode 100644 Documentation/x86/sgx.rst
create mode 100644 arch/x86/entry/vdso/extable.c
create mode 100644 arch/x86/entry/vdso/extable.h
create mode 100644 arch/x86/entry/vdso/vsgx.S
create mode 100644 arch/x86/include/asm/enclu.h
create mode 100644 arch/x86/include/asm/sgx.h
create mode 100644 arch/x86/include/uapi/asm/sgx.h
create mode 100644 arch/x86/kernel/cpu/sgx/Makefile
create mode 100644 arch/x86/kernel/cpu/sgx/driver.c
create mode 100644 arch/x86/kernel/cpu/sgx/driver.h
create mode 100644 arch/x86/kernel/cpu/sgx/encl.c
create mode 100644 arch/x86/kernel/cpu/sgx/encl.h
create mode 100644 arch/x86/kernel/cpu/sgx/encls.h
create mode 100644 arch/x86/kernel/cpu/sgx/ioctl.c
create mode 100644 arch/x86/kernel/cpu/sgx/main.c
create mode 100644 arch/x86/kernel/cpu/sgx/sgx.h
create mode 100644 tools/testing/selftests/sgx/.gitignore
create mode 100644 tools/testing/selftests/sgx/Makefile
create mode 100644 tools/testing/selftests/sgx/call.S
create mode 100644 tools/testing/selftests/sgx/defines.h
create mode 100644 tools/testing/selftests/sgx/load.c
create mode 100644 tools/testing/selftests/sgx/main.c
create mode 100644 tools/testing/selftests/sgx/main.h
create mode 100644 tools/testing/selftests/sgx/sign_key.S
create mode 100644 tools/testing/selftests/sgx/sign_key.pem
create mode 100644 tools/testing/selftests/sgx/sigstruct.c
create mode 100644 tools/testing/selftests/sgx/test_encl.c
create mode 100644 tools/testing/selftests/sgx/test_encl.lds
create mode 100644 tools/testing/selftests/sgx/test_encl_bootstrap.S
--
2.22.0
LGTM.
Thanks.
Reviewed-by: Kai Liu <kai.liu(a)suse.com>
On 2022/02/22 Tue 09:09, fuyufan wrote:
>euler inclusion
>category: feature
>bugzilla: https://gitee.com/openeuler/kernel/issues/I4URME
>CVE: NA
>
>--------------------------------
>
>Enable CONFIG_INTEL_IDXD in openeuler_defconfig for x86.
>Support Intel Data Accelerators on Xeon hardware.
>
>Signed-off-by: fuyufan <fuyufan(a)huawei.com>
>---
> arch/x86/configs/openeuler_defconfig | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
>index 316c4122a859..350a02583978 100644
>--- a/arch/x86/configs/openeuler_defconfig
>+++ b/arch/x86/configs/openeuler_defconfig
>@@ -6353,7 +6353,7 @@ CONFIG_DMA_VIRTUAL_CHANNELS=y
> CONFIG_DMA_ACPI=y
> # CONFIG_ALTERA_MSGDMA is not set
> CONFIG_INTEL_IDMA64=m
>-# CONFIG_INTEL_IDXD is not set
>+CONFIG_INTEL_IDXD=m
> CONFIG_INTEL_IOATDMA=m
> # CONFIG_PLX_DMA is not set
> # CONFIG_QCOM_HIDMA_MGMT is not set
>--
>2.27.0
>
On 2022/02/22 Tue 08:27, fuyufan wrote:
>euler inclusion
>category: feature
>bugzilla: https://gitee.com/openeuler/kernel/issues/I4URME
>CVE: NA
>
>--------------------------------
>
>Enable CONFIG_INTEL_IDXD in openeuler_defconfig for x86.
>Support Intel Data Accelerators on Xeon hardware.
>
>Signed-off-by: fuyufan <fuyufan(a)huawei.com>
>---
> arch/x86/configs/openeuler_defconfig | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
>index 316c4122a859..350a02583978 100644
>--- a/arch/x86/configs/openeuler_defconfig
>+++ b/arch/x86/configs/openeuler_defconfig
>@@ -6353,7 +6353,7 @@ CONFIG_DMA_VIRTUAL_CHANNELS=y
> CONFIG_DMA_ACPI=y
> # CONFIG_ALTERA_MSGDMA is not set
> CONFIG_INTEL_IDMA64=m
>-# CONFIG_INTEL_IDXD is not set
>+CONFIG_INTEL_IDXD=y
Why not =m?
There have been several similar cases already. I think it is worth
setting a convention: anything that can be built as a module should be
set to =m unless there is a specific reason otherwise.
Regards,
Kai
22 Feb '22
Ramaxel inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4UA67
CVE: NA
Changes from v1:
Update description.
Fix:
1. Remove UNF_ORIGIN_HOTTAG_MASK and UNF_HOTTAG_FLAG
2. Update some output strings
3. Remove the spinlock protection in free_parent_sq() because the
caller function free_parent_queue_info() already holds the spinlock
[PATCH openEuler-1.0-LTS] bpf: Verifer, adjust_scalar_min_max_vals to always call update_reg_bounds()
by Yang Yingliang 22 Feb '22
From: John Fastabend <john.fastabend(a)gmail.com>
mainline inclusion
from mainline-v5.7-rc1
commit 294f2fc6da27620a506e6c050241655459ccd6bd
category: bugfix
bugzilla: NA
CVE: CVE-2021-4159
-----------------------------------------------
Currently, for all op verification we call __reg_deduce_bounds() and
__reg_bound_offset(), but we only call __update_reg_bounds() in bitwise
ops. However, we could benefit from calling __update_reg_bounds() in
BPF_ADD, BPF_SUB, and BPF_MUL cases as well.
For example, a register with state 'R1_w=invP0' when we subtract from
it,
w1 -= 2
Before coerce we will now have an smin_value=S64_MIN, smax_value=U64_MAX
and unsigned bounds umin_value=0, umax_value=U64_MAX. These will then
be clamped to S32_MIN, U32_MAX values by coerce in the case of alu32 op
as done in above example. However tnum will be a constant because the
ALU op is done on a constant.
Without __update_reg_bounds() we have a scenario where the tnum is a
constant but our unsigned bounds do not reflect this. By calling
__update_reg_bounds() after the coerce to 32 bits we further refine the
umin_value to U64_MAX in the alu64 case or U32_MAX in the alu32 case above.
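The refinement described above can be sketched in a few lines of
user-space C (simplified fields, not the kernel's actual struct
bpf_reg_state or tnum API): once the tnum is a constant, an
__update_reg_bounds()-style clamp collapses the unsigned range to that
constant.

```c
#include <stdint.h>

/* Simplified register state: unsigned bounds plus a tnum
 * (value = bits known to be set, mask = bits whose value is unknown). */
struct reg_state {
    uint64_t umin_value;
    uint64_t umax_value;
    uint64_t var_off_value;
    uint64_t var_off_mask;
};

/* Sketch of the bounds update: the tnum gives a hard floor and ceiling
 * for the unsigned range. For a constant tnum (mask == 0), umin and
 * umax both collapse to the constant. */
static void update_reg_bounds(struct reg_state *reg)
{
    uint64_t tnum_min = reg->var_off_value;
    uint64_t tnum_max = reg->var_off_value | reg->var_off_mask;

    if (reg->umin_value < tnum_min)
        reg->umin_value = tnum_min;
    if (reg->umax_value > tnum_max)
        reg->umax_value = tnum_max;
}
```

For the `w1 -= 2` example with a constant tnum of 0xfffffffffffffffe,
both unsigned bounds tighten from [0, U64_MAX] to that constant.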
Signed-off-by: John Fastabend <john.fastabend(a)gmail.com>
Signed-off-by: Alexei Starovoitov <ast(a)kernel.org>
Link: https://lore.kernel.org/bpf/158507151689.15666.566796274289413203.stgit@joh…
Signed-off-by: He Fengqing <hefengqing(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Reviewed-by: Kuohai Xu <xukuohai(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/bpf/verifier.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 446655caa2625..c3c592654cf10 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3553,6 +3553,7 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
coerce_reg_to_size(dst_reg, 4);
}
+ __update_reg_bounds(dst_reg);
__reg_deduce_bounds(dst_reg);
__reg_bound_offset(dst_reg);
return 0;
--
2.25.1
22 Feb '22
v2->v3:
Fix the compile warning "ISO C90 forbids mixed declarations and code" in
ahci_error_intr().
Backport Zhaoxin enhancement and bugfix patches.
LeoLiu-oc (7):
rtc: Fix set RTC time delay 500ms on some Zhaoxin SOCs
XHCI: Fix some device identify fail when enable xHCI runtime suspend
EHCI: Clear wakeup signal locked in S0 state when device plug in
Fix some bugs like plugin support and sata link stability when user
enable ahci RTD3
Add support for disabling PhyRdy Change Interrupt based on actual LPM
capability
Add support for PxSCT.LPM set based on actual LPM capability
x86/tsc: Make cur->adjusted values in package#1 to be the same
arch/x86/kernel/tsc_sync.c | 5 +++++
drivers/ata/ahci.c | 24 ++++++++++++++++++++++++
drivers/ata/libahci.c | 15 +++++++++++++++
drivers/ata/libata-eh.c | 7 +++++++
drivers/ata/libata-sata.c | 20 ++++++++++++++++++--
drivers/pci/pci-driver.c | 6 +++++-
drivers/rtc/rtc-mc146818-lib.c | 16 ++++++++++++++++
drivers/usb/host/ehci-hcd.c | 21 +++++++++++++++++++++
drivers/usb/host/ehci-pci.c | 4 ++++
drivers/usb/host/ehci.h | 1 +
drivers/usb/host/xhci-pci.c | 2 ++
include/linux/libata.h | 4 ++++
12 files changed, 122 insertions(+), 3 deletions(-)
--
2.20.1
[PATCH openEuler-1.0-LTS] livepatch/x86: Fix incorrect use of 'strncpy'
by Yang Yingliang 22 Feb '22
From: Zheng Yejian <zhengyejian1(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 186253, https://gitee.com/openeuler/kernel/issues/I4TYA9
CVE: NA
-----------------------------------------------
As the following code shows, 'strncpy' stops copying as soon as it
encounters a NUL character. For example, when 'code' is "53 be 00 0a 05",
'old_code' ends up as "53 be 00 00 00".
> 276 static unsigned char *klp_old_code(unsigned char *code)
> 277 {
> 278 static union klp_code_union old_code;
> 279
> 280 strncpy(old_code.code, code, JMP_E9_INSN_SIZE);
> 281 return old_code.code;
> 282 }
As a result, the instructions cannot be restored completely, and the
system malfunctions.
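The truncation is easy to demonstrate in user space (hypothetical
buffers, not the kernel's klp_code_union): with an embedded 0x00 byte,
strncpy() stops copying and zero-pads the remainder, while memcpy()
copies all five instruction bytes verbatim.

```c
#include <string.h>

#define JMP_E9_INSN_SIZE 5

/* Buggy variant: strncpy() treats the instruction bytes as a string,
 * so it stops at the first 0x00 and zero-fills the rest of dst. */
static void copy_insn_strncpy(unsigned char *dst, const unsigned char *src)
{
    strncpy((char *)dst, (const char *)src, JMP_E9_INSN_SIZE);
}

/* Fixed variant: memcpy() copies all JMP_E9_INSN_SIZE bytes verbatim,
 * NUL bytes included. */
static void copy_insn_memcpy(unsigned char *dst, const unsigned char *src)
{
    memcpy(dst, src, JMP_E9_INSN_SIZE);
}
```

With src = {0x53, 0xbe, 0x00, 0x0a, 0x05}, the strncpy variant produces
{0x53, 0xbe, 0x00, 0x00, 0x00} while the memcpy variant preserves the
trailing 0x0a 0x05.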
Fixes: 7e2ab91ea076 ("livepatch/x86: support livepatch without ftrace")
Signed-off-by: Zheng Yejian <zhengyejian1(a)huawei.com>
Reviewed-by: Kuohai Xu <xukuohai(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/x86/kernel/livepatch.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/livepatch.c b/arch/x86/kernel/livepatch.c
index 0e118adf14087..785bba03b77fa 100644
--- a/arch/x86/kernel/livepatch.c
+++ b/arch/x86/kernel/livepatch.c
@@ -277,7 +277,7 @@ static unsigned char *klp_old_code(unsigned char *code)
{
static union klp_code_union old_code;
- strncpy(old_code.code, code, JMP_E9_INSN_SIZE);
+ memcpy(old_code.code, code, JMP_E9_INSN_SIZE);
return old_code.code;
}
--
2.25.1
22 Feb '22
Ramaxel inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4UA67
CVE: NA
Changes from v1:
Update the description.
Fix:
1. Remove UNF_ORIGIN_HOTTAG_MASK and UNF_HOTTAG_FLAG
2. Update some output strings
3. Remove the spinlock in spfc_free_parent_sq(), because the caller,
spfc_free_parent_queue_info(), already provides spinlock protection
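The locking part of fix 3 can be modeled with a tiny mock (hypothetical
names, not the driver's spin_lock_irqsave() API): once the caller keeps
parent_queue_state_lock held across the call, a second acquire inside
the callee would self-deadlock on a non-recursive lock, so the inner
lock/unlock pair must go.

```c
#include <stdbool.h>

/* Mock non-recursive lock: a real spinlock would spin forever on a
 * double acquire; the mock reports the double acquire instead. */
struct mock_lock { bool held; };

static bool mock_acquire(struct mock_lock *l)
{
    if (l->held)
        return false;   /* double acquire: would self-deadlock */
    l->held = true;
    return true;
}

/* Pre-fix callee: takes the lock itself. */
static bool free_parent_sq_old(struct mock_lock *state_lock)
{
    return mock_acquire(state_lock);
}

/* Post-fix callee: relies on the caller already holding the lock. */
static bool free_parent_sq_new(struct mock_lock *state_lock)
{
    return state_lock->held;
}
```

With the lock held by the caller, the old callee fails (i.e. would
deadlock), while the new callee proceeds under the caller's protection.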
Signed-off-by: Yanling Song <songyl(a)ramaxel.com>
Reviewed-by: Yun Xu <xuyun(a)ramaxel.com>
---
drivers/scsi/spfc/common/unf_common.h | 2 --
drivers/scsi/spfc/common/unf_io.c | 3 +--
drivers/scsi/spfc/common/unf_io_abnormal.c | 2 +-
drivers/scsi/spfc/common/unf_rport.c | 2 +-
drivers/scsi/spfc/common/unf_service.c | 19 +++++--------------
drivers/scsi/spfc/hw/spfc_hba.c | 2 +-
drivers/scsi/spfc/hw/spfc_io.c | 2 +-
drivers/scsi/spfc/hw/spfc_queue.c | 5 -----
drivers/scsi/spfc/hw/spfc_service.c | 22 ++++++++++++----------
9 files changed, 22 insertions(+), 37 deletions(-)
diff --git a/drivers/scsi/spfc/common/unf_common.h b/drivers/scsi/spfc/common/unf_common.h
index bf9d156e07ce..9613649308bf 100644
--- a/drivers/scsi/spfc/common/unf_common.h
+++ b/drivers/scsi/spfc/common/unf_common.h
@@ -12,8 +12,6 @@
#define SPFC_DRV_DESC "Ramaxel Memory Technology Fibre Channel Driver"
#define UNF_MAX_SECTORS 0xffff
-#define UNF_ORIGIN_HOTTAG_MASK 0x7fff
-#define UNF_HOTTAG_FLAG (1 << 15)
#define UNF_PKG_FREE_OXID 0x0
#define UNF_PKG_FREE_RXID 0x1
diff --git a/drivers/scsi/spfc/common/unf_io.c b/drivers/scsi/spfc/common/unf_io.c
index b1255ecba88c..5de69f8ddc6d 100644
--- a/drivers/scsi/spfc/common/unf_io.c
+++ b/drivers/scsi/spfc/common/unf_io.c
@@ -890,8 +890,7 @@ static int unf_send_fcpcmnd(struct unf_lport *lport, struct unf_rport *rport,
unf_xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
pkg.private_data[PKG_PRIVATE_XCHG_VP_INDEX] = unf_lport->vp_index;
pkg.private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = unf_rport->rport_index;
- pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] =
- unf_xchg->hotpooltag | UNF_HOTTAG_FLAG;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = unf_xchg->hotpooltag;
unf_select_sq(unf_xchg, &pkg);
pkg.fcp_cmnd = &unf_xchg->fcp_cmnd;
diff --git a/drivers/scsi/spfc/common/unf_io_abnormal.c b/drivers/scsi/spfc/common/unf_io_abnormal.c
index fece7aa5f441..4e268ac026ca 100644
--- a/drivers/scsi/spfc/common/unf_io_abnormal.c
+++ b/drivers/scsi/spfc/common/unf_io_abnormal.c
@@ -763,7 +763,7 @@ int unf_send_scsi_mgmt_cmnd(struct unf_xchg *xchg, struct unf_lport *lport,
pkg.xchg_contex = unf_xchg;
pkg.private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = rport->rport_index;
pkg.fcp_cmnd = &unf_xchg->fcp_cmnd;
- pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = unf_xchg->hotpooltag | UNF_HOTTAG_FLAG;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = unf_xchg->hotpooltag;
pkg.frame_head.csctl_sid = lport->nport_id;
pkg.frame_head.rctl_did = rport->nport_id;
diff --git a/drivers/scsi/spfc/common/unf_rport.c b/drivers/scsi/spfc/common/unf_rport.c
index aa4967fc0ab6..9b06df884524 100644
--- a/drivers/scsi/spfc/common/unf_rport.c
+++ b/drivers/scsi/spfc/common/unf_rport.c
@@ -352,7 +352,7 @@ struct unf_rport *unf_find_valid_rport(struct unf_lport *lport, u64 wwpn, u32 si
spin_unlock_irqrestore(rport_state_lock, flags);
FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[err]Port(0x%x) RPort(0x%p) find by WWPN(0x%llx) is invalid",
+ "[info]Port(0x%x) RPort(0x%p) find by WWPN(0x%llx) is invalid",
lport->port_id, rport_by_wwpn, wwpn);
rport_by_wwpn = NULL;
diff --git a/drivers/scsi/spfc/common/unf_service.c b/drivers/scsi/spfc/common/unf_service.c
index 8f72f6470647..9c86c99374c8 100644
--- a/drivers/scsi/spfc/common/unf_service.c
+++ b/drivers/scsi/spfc/common/unf_service.c
@@ -130,7 +130,7 @@ void unf_fill_package(struct unf_frame_pkg *pkg, struct unf_xchg *xchg,
pkg->private_data[PKG_PRIVATE_RPORT_RX_SIZE] = rport->max_frame_size;
}
- pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag | UNF_HOTTAG_FLAG;
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag;
pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
pkg->private_data[PKG_PRIVATE_LOWLEVEL_XCHG_ADD] =
@@ -250,7 +250,7 @@ u32 unf_send_abts(struct unf_lport *lport, struct unf_xchg *xchg)
pkg.unf_cmnd_pload_bl.buffer_ptr = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
pkg.unf_cmnd_pload_bl.buf_dma_addr = xchg->fcp_sfs_union.sfs_entry.sfs_buff_phy_addr;
- pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag | UNF_HOTTAG_FLAG;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag;
UNF_SET_XCHG_ALLOC_TIME(&pkg, xchg);
UNF_SET_ABORT_INFO_IOTYPE(&pkg, xchg);
@@ -407,19 +407,10 @@ static u32 unf_els_cmnd_default_handler(struct unf_lport *lport, struct unf_xchg
rjt_info.reason_code = UNF_LS_RJT_NOT_SUPPORTED;
unf_rport = unf_get_rport_by_nport_id(lport, sid);
- if (unf_rport) {
- if (unf_rport->rport_index !=
- xchg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX]) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) NPort handle(0x%x) from low level is not equal to RPort index(0x%x)",
- lport->port_id, lport->nport_id,
- xchg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX],
- unf_rport->rport_index);
- }
+ if (unf_rport)
ret = unf_send_els_rjt_by_rport(lport, xchg, unf_rport, &rjt_info);
- } else {
+ else
ret = unf_send_els_rjt_by_did(lport, xchg, sid, &rjt_info);
- }
return ret;
}
@@ -1389,7 +1380,7 @@ static void unf_fill_free_xid_pkg(struct unf_xchg *xchg, struct unf_frame_pkg *p
pkg->frame_head.csctl_sid = xchg->sid;
pkg->frame_head.rctl_did = xchg->did;
pkg->frame_head.oxid_rxid = (u32)(((u32)xchg->oxid << UNF_SHIFT_16) | xchg->rxid);
- pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag | UNF_HOTTAG_FLAG;
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag;
UNF_SET_XCHG_ALLOC_TIME(pkg, xchg);
if (xchg->xchg_type == UNF_XCHG_TYPE_SFS) {
diff --git a/drivers/scsi/spfc/hw/spfc_hba.c b/drivers/scsi/spfc/hw/spfc_hba.c
index e12299c9e2c9..b033dcb78bb3 100644
--- a/drivers/scsi/spfc/hw/spfc_hba.c
+++ b/drivers/scsi/spfc/hw/spfc_hba.c
@@ -56,7 +56,7 @@ static struct unf_cfg_item spfc_port_cfg_parm[] = {
{"port_topology", 0, 0xf, 0x20},
{"port_alpa", 0, 0xdead, 0xffff}, /* alpa address of port */
/* queue depth of originator registered to SCSI midlayer */
- {"max_queue_depth", 0, 128, 128},
+ {"max_queue_depth", 0, 512, 512},
{"sest_num", 0, 2048, 2048},
{"max_login", 0, 2048, 2048},
/* nodename from 32 bit to 64 bit */
diff --git a/drivers/scsi/spfc/hw/spfc_io.c b/drivers/scsi/spfc/hw/spfc_io.c
index 2b1d1c607b13..7184eb6a10af 100644
--- a/drivers/scsi/spfc/hw/spfc_io.c
+++ b/drivers/scsi/spfc/hw/spfc_io.c
@@ -1138,7 +1138,7 @@ u32 spfc_scq_recv_iresp(struct spfc_hba_info *hba, union spfc_scqe *wqe)
pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = iresp->magic_num;
pkg.frame_head.oxid_rxid = (((iresp->wd0.ox_id) << UNF_SHIFT_16) | (iresp->wd0.rx_id));
- hot_tag = (u16)iresp->wd2.hotpooltag & UNF_ORIGIN_HOTTAG_MASK;
+ hot_tag = (u16)iresp->wd2.hotpooltag;
/* 2. HotTag validity check */
if (likely(hot_tag >= hba->exi_base && (hot_tag < hba->exi_base + hba->exi_count))) {
pkg.status = UNF_IO_SUCCESS;
diff --git a/drivers/scsi/spfc/hw/spfc_queue.c b/drivers/scsi/spfc/hw/spfc_queue.c
index abcf1ff3f49f..fa4295832da7 100644
--- a/drivers/scsi/spfc/hw/spfc_queue.c
+++ b/drivers/scsi/spfc/hw/spfc_queue.c
@@ -2138,11 +2138,9 @@ static void spfc_free_parent_sq(struct spfc_hba_info *hba,
u32 uidelaycnt = 0;
struct list_head *list = NULL;
struct spfc_suspend_sqe_info *suspend_sqe = NULL;
- ulong flag = 0;
sq_info = &parq_info->parent_sq_info;
- spin_lock_irqsave(&parq_info->parent_queue_state_lock, flag);
while (!list_empty(&sq_info->suspend_sqe_list)) {
list = UNF_OS_LIST_NEXT(&sq_info->suspend_sqe_list);
list_del(list);
@@ -2156,7 +2154,6 @@ static void spfc_free_parent_sq(struct spfc_hba_info *hba,
kfree(suspend_sqe);
}
}
- spin_unlock_irqrestore(&parq_info->parent_queue_state_lock, flag);
/* Free data cos */
spfc_update_cos_rport_cnt(hba, parq_info->queue_data_cos);
@@ -4475,9 +4472,7 @@ void spfc_free_parent_queue_info(void *handle, struct spfc_parent_queue_info *pa
* with the sq in the queue of the parent
*/
- spin_unlock_irqrestore(prtq_state_lock, flag);
spfc_free_parent_sq(hba, parent_queue_info);
- spin_lock_irqsave(prtq_state_lock, flag);
/* The initialization of all queue id is invalid */
parent_queue_info->parent_cmd_scq_info.cqm_queue_id = INVALID_VALUE32;
diff --git a/drivers/scsi/spfc/hw/spfc_service.c b/drivers/scsi/spfc/hw/spfc_service.c
index e99802df50a2..1da58e3f9fbe 100644
--- a/drivers/scsi/spfc/hw/spfc_service.c
+++ b/drivers/scsi/spfc/hw/spfc_service.c
@@ -742,7 +742,7 @@ u32 spfc_scq_recv_abts_rsp(struct spfc_hba_info *hba, union spfc_scqe *scqe)
ox_id = (u32)(abts_rsp->wd0.ox_id);
- hot_tag = abts_rsp->wd1.hotpooltag & UNF_ORIGIN_HOTTAG_MASK;
+ hot_tag = abts_rsp->wd1.hotpooltag;
if (unlikely(hot_tag < (u32)hba->exi_base ||
hot_tag >= (u32)(hba->exi_base + hba->exi_count))) {
FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
@@ -1210,7 +1210,7 @@ u32 spfc_scq_recv_ls_gs_rsp(struct spfc_hba_info *hba, union spfc_scqe *scqe)
spfc_swap_16_in_32((u32 *)ls_gs_rsp_scqe->user_id, SPFC_LS_GS_USERID_LEN);
ox_id = ls_gs_rsp_scqe->wd1.ox_id;
- hot_tag = ((u16)(ls_gs_rsp_scqe->wd5.hotpooltag) & UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+ hot_tag = ((u16)ls_gs_rsp_scqe->wd5.hotpooltag) - hba->exi_base;
pkg.frame_head.oxid_rxid = (u32)(ls_gs_rsp_scqe->wd1.rx_id) | ox_id << UNF_SHIFT_16;
pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = ls_gs_rsp_scqe->magic_num;
pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
@@ -1317,8 +1317,7 @@ u32 spfc_scq_recv_els_rsp_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
els_rsp_sts_scqe->magic_num;
pkg.frame_head.oxid_rxid = rx_id | (u32)(els_rsp_sts_scqe->wd0.ox_id) << UNF_SHIFT_16;
- hot_tag = (u32)((els_rsp_sts_scqe->wd1.hotpooltag & UNF_ORIGIN_HOTTAG_MASK) -
- hba->exi_base);
+ hot_tag = (u32)(els_rsp_sts_scqe->wd1.hotpooltag - hba->exi_base);
if (unlikely(SPFC_SCQE_HAS_ERRCODE(scqe)))
pkg.status = UNF_IO_FAILED;
@@ -1759,7 +1758,7 @@ u32 spfc_scq_recv_marker_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
tmf_marker_sts_scqe = &scqe->itmf_marker_sts;
ox_id = (u32)tmf_marker_sts_scqe->wd1.ox_id;
rx_id = (u32)tmf_marker_sts_scqe->wd1.rx_id;
- hot_tag = (tmf_marker_sts_scqe->wd4.hotpooltag & UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+ hot_tag = tmf_marker_sts_scqe->wd4.hotpooltag - hba->exi_base;
pkg.frame_head.oxid_rxid = rx_id | (u32)(ox_id) << UNF_SHIFT_16;
pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = tmf_marker_sts_scqe->magic_num;
pkg.frame_head.csctl_sid = tmf_marker_sts_scqe->wd3.sid;
@@ -1800,7 +1799,7 @@ u32 spfc_scq_recv_abts_marker_sts(struct spfc_hba_info *hba, union spfc_scqe *sc
ox_id = (u32)abts_marker_sts_scqe->wd1.ox_id;
rx_id = (u32)abts_marker_sts_scqe->wd1.rx_id;
- hot_tag = (abts_marker_sts_scqe->wd4.hotpooltag & UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+ hot_tag = abts_marker_sts_scqe->wd4.hotpooltag - hba->exi_base;
pkg.frame_head.oxid_rxid = rx_id | (u32)(ox_id) << UNF_SHIFT_16;
pkg.frame_head.csctl_sid = abts_marker_sts_scqe->wd3.sid;
pkg.frame_head.rctl_did = abts_marker_sts_scqe->wd2.did;
@@ -1972,8 +1971,7 @@ u32 spfc_scq_free_xid_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
rx_id = (u32)free_xid_sts_scqe->wd0.rx_id;
if (free_xid_sts_scqe->wd1.hotpooltag != INVALID_VALUE16) {
- hot_tag = (free_xid_sts_scqe->wd1.hotpooltag &
- UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+ hot_tag = free_xid_sts_scqe->wd1.hotpooltag - hba->exi_base;
}
FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
@@ -1998,7 +1996,7 @@ u32 spfc_scq_exchg_timeout_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
rx_id = (u32)time_out_scqe->wd0.rx_id;
if (time_out_scqe->wd1.hotpooltag != INVALID_VALUE16)
- hot_tag = (time_out_scqe->wd1.hotpooltag & UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+ hot_tag = time_out_scqe->wd1.hotpooltag - hba->exi_base;
FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
"Port(0x%x) recv timer time out sts hotpooltag(0x%x) magicnum(0x%x) ox_id(0x%x) rxid(0x%x) sts(%d)",
@@ -2054,7 +2052,7 @@ u32 spfc_scq_rcv_sq_nop_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
"[info]Port(0x%x) rport_index(0x%x) find suspend sqe.",
hba->port_cfg.port_id, rport_index);
- if (sqn < sqn_max) {
+ if ((sqn < sqn_max) && (sqn >= sqn_base)) {
ret = spfc_send_nop_cmd(hba, parent_sq_info, magic_num, sqn + 1);
} else if (sqn == sqn_max) {
if (!cancel_delayed_work(&suspend_sqe->timeout_work)) {
@@ -2065,6 +2063,10 @@ u32 spfc_scq_rcv_sq_nop_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
parent_sq_info->need_offloaded = suspend_sqe->old_offload_sts;
ret = spfc_pop_suspend_sqe(hba, prt_qinfo, suspend_sqe);
kfree(suspend_sqe);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) rport(0x%x) rcv error sqn(0x%x)",
+ hba->port_cfg.port_id, rport_index, sqn);
}
} else {
FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
--
2.32.0
Fix 5 bugs:
1. Fix array bounds error in ethtool get_link_ksettings
2. Fix ethtool loopback command failure
3. Fix xor checksum error when sending a non 4B-aligned
message to firmware
4. Fix an error when netdev failed to link up
5. Reduce the timeout of the channel between driver and
firmware
Yanling Song (5):
net/spnic: Fix array bounds error in ethtool get_link_ksettings
net/spnic: Fix ethtool loopback command failure
net/spnic: Fix xor checksum error when sending a non 4B-aligned
message to firmware
net/spnic: Fix an error when netdev failed to link up
net/spnic: Reduce the timeout of the channel between driver and
firmware
.../net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c | 2 +-
.../net/ethernet/ramaxel/spnic/hw/sphw_mbox.c | 10 ++--
.../net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c | 2 +-
.../ramaxel/spnic/spnic_ethtool_stats.c | 4 +-
.../ethernet/ramaxel/spnic/spnic_mag_cfg.c | 2 +-
.../ethernet/ramaxel/spnic/spnic_nic_cfg_vf.c | 49 ++++++++++++-------
6 files changed, 41 insertions(+), 28 deletions(-)
--
2.32.0
euler inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4UP2Q
CVE: NA
--------------------------------
Enable CONFIG_NTB_INTEL in openeuler_defconfig for x86.
Support Intel NTB on capable Xeon and Atom hardware.
Signed-off-by: Chao Liu <liuchao173(a)huawei.com>
---
arch/x86/configs/openeuler_defconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index 316c4122a859..7cda312e96a3 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -7121,7 +7121,7 @@ CONFIG_NTB=m
# CONFIG_NTB_MSI is not set
# CONFIG_NTB_AMD is not set
# CONFIG_NTB_IDT is not set
-# CONFIG_NTB_INTEL is not set
+CONFIG_NTB_INTEL=m
# CONFIG_NTB_SWITCHTEC is not set
# CONFIG_NTB_PINGPONG is not set
# CONFIG_NTB_TOOL is not set
--
2.23.0
22 Feb '22
hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4PM10
CVE: NA
--------------------------------
This patch adds UCE kernel recovery path support in copy_from_user().
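A rough user-space model of the recovery decision (hypothetical names,
mirroring the reco_info table in the patch below): on an SEA, check
whether the faulting PC falls inside a registered function's
[addr, addr + size) range and, if so, hand back that function's
fallback handler instead of calling die().

```c
typedef int (*fallback_fn)(void);

struct recovery_entry {
    unsigned long addr;
    unsigned long size;
    fallback_fn fallback;
};

/* Stand-in for copy_from_user_sea_fallback: report the copy failed. */
static int copy_from_user_fallback(void)
{
    return -1;
}

/* Return the fallback for a faulting PC, or 0 (NULL) if the fault is
 * not inside any registered recovery region, i.e. not recoverable. */
static fallback_fn find_recovery_fallback(const struct recovery_entry *tab,
                                          unsigned int n, unsigned long pc)
{
    unsigned int i;

    for (i = 0; i < n; i++)
        if (pc >= tab[i].addr && pc - tab[i].addr < tab[i].size)
            return tab[i].fallback;
    return 0;
}
```

In the real patch the addr/size pairs are filled in at boot via
kallsyms_lookup_size_offset(), as done in
kernel_access_sea_recovery_init() below.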
Signed-off-by: Tong Tiangen <tongtiangen(a)huawei.com>
---
arch/arm64/Kconfig | 9 ++
arch/arm64/include/asm/exception.h | 12 +++
arch/arm64/lib/copy_from_user.S | 5 +
arch/arm64/mm/fault.c | 154 +++++++++++++++++++++++++++++
include/linux/kernel.h | 4 +
kernel/sysctl.c | 11 +++
6 files changed, 195 insertions(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index c0f6a275f798..f0198d7f8333 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2155,6 +2155,15 @@ config ARCH_HIBERNATION_HEADER
config ARCH_SUSPEND_POSSIBLE
def_bool y
+config UCE_KERNEL_RECOVERY
+ bool "uce kernel recovery from special scenario"
+ def_bool y
+ help
+ With the ARM v8.2 RAS Extension, an SEA is usually triggered when a
+ memory error is consumed. In some cases, if the error address is in a
+ user page, there is a chance to recover: when the error occurs in the
+ COW or pagecache-read scenario, we can isolate the page and kill the
+ process instead of calling die().
+
endmenu
menu "CPU Power Management"
diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index 0756191f44f6..bad92fe763fb 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -53,4 +53,16 @@ void do_cp15instr(unsigned int esr, struct pt_regs *regs);
void do_el0_svc(struct pt_regs *regs);
void do_el0_svc_compat(struct pt_regs *regs);
void do_ptrauth_fault(struct pt_regs *regs, unsigned int esr);
+
+#ifdef CONFIG_UCE_KERNEL_RECOVERY
+struct uce_kernel_recovery_info {
+ int (*fn)(void);
+ const char *name;
+ unsigned long addr;
+ unsigned long size;
+};
+
+extern int copy_from_user_sea_fallback(void);
+#endif
+
#endif /* __ASM_EXCEPTION_H */
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 2cf999e41d30..ffd07e9822f2 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -60,6 +60,11 @@ SYM_FUNC_START(__arch_copy_from_user)
#include "copy_template.S"
mov x0, #0 // Nothing to copy
ret
+
+ .global copy_from_user_sea_fallback
+copy_from_user_sea_fallback:
+ mov x0, #-1
+ ret
SYM_FUNC_END(__arch_copy_from_user)
EXPORT_SYMBOL(__arch_copy_from_user)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 3fc5aceb72eb..7f76e277daa3 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -634,6 +634,127 @@ static int do_bad(unsigned long addr, unsigned int esr, struct pt_regs *regs)
return 1; /* "fault" */
}
+#ifdef CONFIG_UCE_KERNEL_RECOVERY
+int kernel_access_sea_recovery;
+
+#define UCE_KER_REC_NUM ARRAY_SIZE(reco_info)
+static struct uce_kernel_recovery_info reco_info[] = {
+ {NULL, NULL, 0, 0},
+ {NULL, NULL, 0, 0},
+ {copy_from_user_sea_fallback, "__arch_copy_from_user", (unsigned long)__arch_copy_from_user, 0},
+};
+
+static int __init kernel_access_sea_recovery_init(void)
+{
+ unsigned long addr, size, offset;
+ unsigned int i;
+
+ for (i = 0; i < UCE_KER_REC_NUM; i++) {
+ addr = reco_info[i].addr;
+ if (!kallsyms_lookup_size_offset(addr, &size, &offset)) {
+ pr_info("UCE: symbol %s lookup addr fail.\n",
+ reco_info[i].name);
+ size = 0;
+ }
+
+ reco_info[i].size = size;
+ }
+
+ return 1;
+}
+fs_initcall(kernel_access_sea_recovery_init);
+
+static int __init enable_kernel_access_sea_recovery(char *str)
+{
+ int max = (1 << UCE_KER_REC_NUM) - 1;
+ int val;
+
+ if (kstrtoint(str, 0, &val))
+ return -EINVAL;
+
+ if (val < 0 || val > max) {
+ pr_info("UCE: invalid uce_kernel_recovery value %d", val);
+ return -EINVAL;
+ }
+
+ kernel_access_sea_recovery = val;
+
+ return 1;
+}
+__setup("uce_kernel_recovery=", enable_kernel_access_sea_recovery);
+
+/*
+ * What is kernel recovery?
+ * If accessing a process's private data in kernel mode triggers a special
+ * SEA fault, it can be handled by killing the process and isolating the
+ * faulty pages instead of dying.
+ */
+static int is_in_kernel_recovery(unsigned int esr, struct pt_regs *regs)
+{
+ /*
+ * target insn: ldp-pre, ldp-post, ldp-offset,
+ * ldr-64bit-pre/post, ldr-32bit-pre/post, ldrb-pre/post, ldrh-pre/post
+ */
+ u32 target_insn[] = {0xa8c, 0xa9c, 0xa94, 0xf84, 0x784, 0x384, 0xb84};
+ void *pc = (void *)instruction_pointer(regs);
+ struct uce_kernel_recovery_info *info;
+ bool insn_match = false;
+ u32 insn;
+ int i;
+
+ pr_emerg("UCE: %s-%d, kernel recovery: 0x%x, esr: 0x%08x -- %s, %pS\n",
+ current->comm, current->pid, kernel_access_sea_recovery, esr,
+ esr_get_class_string(esr), pc);
+
+ if (aarch64_insn_read((void *)pc, &insn)) {
+ pr_emerg("UCE: insn read fail.\n");
+ return -EFAULT;
+ }
+
+ /*
+ * We process special ESR:
+ * EC : 0b100101 Data Abort taken without a change in Exception level.
+ * DFSC : 0b010000 Synchronous External abort, not on translation table
+ * walk or hardware update of translation table.
+ * eg: 0x96000610
+ */
+ if (ESR_ELx_EC(esr) != ESR_ELx_EC_DABT_CUR ||
+ (esr & ESR_ELx_FSC) != ESR_ELx_FSC_EXTABT) {
+ pr_emerg("UCE: esr not match.\n");
+ return -EINVAL;
+ }
+
+ insn = (insn >> 20) & 0xffc;
+ for (i = 0; i < ARRAY_SIZE(target_insn); i++) {
+ if (insn == target_insn[i]) {
+ insn_match = true;
+ break;
+ }
+ }
+
+ if (!insn_match) {
+ pr_emerg("UCE: insn 0x%x does not match.\n", insn);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < UCE_KER_REC_NUM; i++) {
+ if (!((kernel_access_sea_recovery >> i) & 0x1))
+ continue;
+
+ info = &reco_info[i];
+ if (info->fn && regs->pc >= info->addr &&
+ regs->pc < (info->addr + info->size)) {
+ pr_emerg("UCE: total match %s success.\n", info->name);
+ return i;
+ }
+ }
+
+ pr_emerg("UCE: symbol does not match or switch is off, kernel recovery %d.\n",
+ kernel_access_sea_recovery);
+ return -EINVAL;
+}
+#endif
+
static int do_sea(unsigned long addr, unsigned int esr, struct pt_regs *regs)
{
const struct fault_info *inf;
@@ -649,6 +770,39 @@ static int do_sea(unsigned long addr, unsigned int esr, struct pt_regs *regs)
return 0;
}
+#ifdef CONFIG_UCE_KERNEL_RECOVERY
+ if (!user_mode(regs)) {
+ int idx;
+
+ if (!current->mm || !kernel_access_sea_recovery) {
+ pr_emerg("UCE: kernel recovery %d, %s-%d is %s-thread.\n",
+ kernel_access_sea_recovery,
+ current->comm, current->pid,
+ (current->mm) ? "user" : "kernel");
+ die("Uncorrected hardware memory error in kernel-access\n",
+ regs, esr);
+ }
+
+ idx = is_in_kernel_recovery(esr, regs);
+ if (idx >= 0 && idx < UCE_KER_REC_NUM) {
+ current->thread.fault_address = 0;
+ current->thread.fault_code = esr;
+ regs->pc = (unsigned long)reco_info[idx].fn;
+
+ if (esr & ESR_ELx_FnV)
+ siaddr = NULL;
+ else
+ siaddr = (void __user *)addr;
+
+ arm64_force_sig_fault(inf->sig, inf->code, siaddr,
+ "Uncorrected hardware memory use with kernel recovery in kernel-access\n");
+ } else {
+ die("Uncorrected hardware memory error (not match idx or sence switch is off) in kernel-access\n",
+ regs, esr);
+ }
+ }
+#endif
+
if (esr & ESR_ELx_FnV)
siaddr = NULL;
else
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index 78a0907f0b04..b634fb1cce38 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -558,6 +558,10 @@ extern int sysctl_panic_on_stackoverflow;
extern bool crash_kexec_post_notifiers;
+#ifdef CONFIG_UCE_KERNEL_RECOVERY
+extern int kernel_access_sea_recovery;
+#endif
+
/*
* panic_cpu is used for synchronizing panic() and crash_kexec() execution. It
* holds a CPU number which is executing panic() currently. A value of
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 89ef0c1a1642..e38fff657683 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -2711,6 +2711,17 @@ static struct ctl_table kern_table[] = {
.extra1 = &one_hundred,
.extra2 = &one_thousand,
},
+#endif
+#if defined(CONFIG_UCE_KERNEL_RECOVERY)
+ {
+ .procname = "uce_kernel_recovery",
+ .data = &kernel_access_sea_recovery,
+ .maxlen = sizeof(kernel_access_sea_recovery),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = &four,
+ .extra2 = &four,
+ },
#endif
{ }
};
--
2.18.0.huawei.25

[PATCH openEuler-1.0-LTS 1/2] yam: fix a memory leak in yam_siocdevprivate()
by Yang Yingliang 22 Feb '22
From: Hangyu Hua <hbh25y(a)gmail.com>
mainline inclusion
from mainline-v5.17
commit 29eb31542787e1019208a2e1047bb7c76c069536
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4U4NY
CVE: CVE-2022-24959
-----------------------------------------------
ym needs to be freed when ym->cmd != SIOCYAMSMCS.
Fixes: 0781168e23a2 ("yam: fix a missing-check bug")
Signed-off-by: Hangyu Hua <hbh25y(a)gmail.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
conflict:
In mainline the bug is in yam_siocdevprivate(), but here it is in
yam_ioctl(), because the function was renamed by mainline commit
25ec92fbdd ("hamradio: use ndo_siocdevprivate").
Signed-off-by: Lu Wei <luwei32(a)huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/hamradio/yam.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/drivers/net/hamradio/yam.c b/drivers/net/hamradio/yam.c
index ba9df430fca6e..f124ceab5f5e1 100644
--- a/drivers/net/hamradio/yam.c
+++ b/drivers/net/hamradio/yam.c
@@ -966,9 +966,7 @@ static int yam_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
sizeof(struct yamdrv_ioctl_mcs));
if (IS_ERR(ym))
return PTR_ERR(ym);
- if (ym->cmd != SIOCYAMSMCS)
- return -EINVAL;
- if (ym->bitrate > YAM_MAXBITRATE) {
+ if (ym->cmd != SIOCYAMSMCS || ym->bitrate > YAM_MAXBITRATE) {
kfree(ym);
return -EINVAL;
}
--
2.25.1
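The fix above illustrates a general rule: once a function owns an allocation, every early-return error path must free it. A hypothetical userspace sketch of the corrected shape (the struct, command constant, and limit are illustrative, not the driver's real definitions):

```c
#include <stdlib.h>
#include <errno.h>

#define DEMO_CMD_EXPECTED  1
#define DEMO_MAX_BITRATE   57600

struct demo_ioctl {
	int cmd;
	unsigned int bitrate;
};

/* Takes ownership of ym and frees it on every exit path. */
static int demo_handle(struct demo_ioctl *ym)
{
	/* Combined validity check, mirroring the patch: any invalid
	 * field falls into the single branch that frees ym. */
	if (ym->cmd != DEMO_CMD_EXPECTED || ym->bitrate > DEMO_MAX_BITRATE) {
		free(ym);
		return -EINVAL;
	}

	/* ... use ym ... */

	free(ym);		/* normal path: done with the copy */
	return 0;
}
```

The pre-fix code returned -EINVAL for the `cmd` check before reaching any `free()`, which is exactly the leak the one-branch form removes.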

[PATCH openEuler-1.0-LTS] yam: fix a memory leak in yam_siocdevprivate()
by Yang Yingliang 21 Feb '22
From: Hangyu Hua <hbh25y(a)gmail.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4U4NY
CVE: CVE-2022-24959
-------------------------------------------------
ym needs to be freed when ym->cmd != SIOCYAMSMCS.
Fixes: 0781168e23a2 ("yam: fix a missing-check bug")
Signed-off-by: Hangyu Hua <hbh25y(a)gmail.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
conflict:
In mainline the bug is in yam_siocdevprivate(), but here it is in
yam_ioctl(), because the function was renamed by mainline commit
25ec92fbdd ("hamradio: use ndo_siocdevprivate").
Signed-off-by: Lu Wei <luwei32(a)huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/net/hamradio/yam.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/drivers/net/hamradio/yam.c b/drivers/net/hamradio/yam.c
index ba9df430fca6e..f124ceab5f5e1 100644
--- a/drivers/net/hamradio/yam.c
+++ b/drivers/net/hamradio/yam.c
@@ -966,9 +966,7 @@ static int yam_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
sizeof(struct yamdrv_ioctl_mcs));
if (IS_ERR(ym))
return PTR_ERR(ym);
- if (ym->cmd != SIOCYAMSMCS)
- return -EINVAL;
- if (ym->bitrate > YAM_MAXBITRATE) {
+ if (ym->cmd != SIOCYAMSMCS || ym->bitrate > YAM_MAXBITRATE) {
kfree(ym);
return -EINVAL;
}
--
2.25.1

[PATCH openEuler-1.0-LTS] ipmi_si: Phytium S2500 missing timeout counter reset in intf_mem_inw
by Yang Yingliang 21 Feb '22
From: Laibin Qiu <qiulaibin(a)huawei.com>
phytium inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4RK58
CVE: NA
--------------------------------
The system would hang when the Phytium S2500 communicates with
some BMCs after several rounds of transactions, unless the
controller timeout counter is reset manually by calling into
firmware through SMC.
The reset was previously missing in intf_mem_inw().
Fixes: 88f04d8bd2d27 ("ipmi_si: Phytium S2500 workaround for MMIO-based IPMI")
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
Reviewed-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/char/ipmi/ipmi_si_mem_io.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/char/ipmi/ipmi_si_mem_io.c b/drivers/char/ipmi/ipmi_si_mem_io.c
index 9403e35a19934..bb4ed90a11535 100644
--- a/drivers/char/ipmi/ipmi_si_mem_io.c
+++ b/drivers/char/ipmi/ipmi_si_mem_io.c
@@ -84,6 +84,8 @@ static void intf_mem_outb(const struct si_sm_io *io, unsigned int offset,
static unsigned char intf_mem_inw(const struct si_sm_io *io,
unsigned int offset)
{
+ ipmi_phytium_workaround();
+
return (readw((io->addr)+(offset * io->regspacing)) >> io->regshift)
& 0xff;
}
--
2.25.1
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SIGI
Borislav Petkov (1):
x86/sgx: Fix sgx_ioc_enclave_provision() kernel-doc
Daniel Vetter (1):
x86/sgx: Drop racy follow_pfn() check
Dave Hansen (3):
x86/sgx: Clarify 'laundry_list' locking
selftests/sgx: Improve error detection and messages
selftests/sgx: remove checks for file execute permissions
Ira Weiny (1):
x86/sgx: Remove unnecessary kmap() from sgx_ioc_enclave_init()
Jarkko Sakkinen (25):
x86/sgx: Add SGX architectural data structures
x86/sgx: Add wrappers for ENCLS functions
x86/cpu/intel: Add a nosgx kernel parameter
x86/sgx: Add SGX page allocator functions
x86/sgx: Add an SGX misc driver interface
x86/sgx: Add SGX_IOC_ENCLAVE_CREATE
x86/sgx: Add SGX_IOC_ENCLAVE_ADD_PAGES
x86/sgx: Add SGX_IOC_ENCLAVE_INIT
x86/sgx: Add SGX_IOC_ENCLAVE_PROVISION
selftests/x86: Add a selftest for SGX
x86/sgx: Add a page reclaimer
x86/sgx: Add ptrace() support for the SGX driver
Documentation/x86: Document SGX kernel architecture
x86/sgx: Update MAINTAINERS
x86/sgx: Return -ERESTARTSYS in sgx_ioc_enclave_add_pages()
x86/sgx: Return -EINVAL on a zero length buffer in
sgx_ioc_enclave_add_pages()
x86/sgx: Maintain encl->refcount for each encl->mm_list entry
x86/sgx: Replace section->init_laundry_list with sgx_dirty_page_list
x86/sgx: Add a basic NUMA allocation scheme to sgx_alloc_epc_page()
selftests/sgx: Use a statically generated 3072-bit RSA key
selftests/sgx: Rename 'eenter' and 'sgx_call_vdso'
selftests/sgx: Migrate to kselftest harness
selftests/sgx: Dump enclave memory map
selftests/sgx: Add EXPECT_EEXIT() macro
selftests/sgx: Refine the test enclave to have storage
Sami Tolvanen (1):
x86/sgx: Fix the return type of sgx_init()
Sean Christopherson (9):
x86/cpufeatures: Add Intel SGX hardware bits
x86/{cpufeatures,msr}: Add Intel SGX Launch Control hardware bits
x86/sgx: Initialize metadata for Enclave Page Cache (EPC) sections
x86/cpu/intel: Detect SGX support
mm: Add 'mprotect' hook to struct vm_operations_struct
x86/vdso: Add support for exception fixup in vDSO functions
x86/fault: Add a helper function to sanitize error code
x86/vdso: Implement a vDSO for Intel SGX enclave call
x86/sgx: Expose SGX architectural definitions to the kernel
Tianjia Zhang (2):
selftests/sgx: Use getauxval() to simplify test code
selftests/sgx: Fix Q1 and Q2 calculation in sigstruct.c
root (1):
x86/traps: Attempt to fixup exceptions in vDSO before signaling
.../admin-guide/kernel-parameters.txt | 2 +
.../userspace-api/ioctl/ioctl-number.rst | 1 +
Documentation/x86/index.rst | 1 +
Documentation/x86/sgx.rst | 211 +++++
MAINTAINERS | 14 +
arch/x86/Kconfig | 18 +
arch/x86/entry/vdso/Makefile | 8 +-
arch/x86/entry/vdso/extable.c | 46 ++
arch/x86/entry/vdso/extable.h | 28 +
arch/x86/entry/vdso/vdso-layout.lds.S | 9 +-
arch/x86/entry/vdso/vdso.lds.S | 1 +
arch/x86/entry/vdso/vdso2c.h | 50 +-
arch/x86/entry/vdso/vsgx.S | 151 ++++
arch/x86/include/asm/cpufeatures.h | 2 +
arch/x86/include/asm/disabled-features.h | 8 +-
arch/x86/include/asm/enclu.h | 9 +
arch/x86/include/asm/msr-index.h | 8 +
arch/x86/include/asm/sgx.h | 348 ++++++++
arch/x86/include/asm/vdso.h | 5 +
arch/x86/include/uapi/asm/sgx.h | 168 ++++
arch/x86/kernel/cpu/Makefile | 1 +
arch/x86/kernel/cpu/feat_ctl.c | 38 +-
arch/x86/kernel/cpu/sgx/Makefile | 5 +
arch/x86/kernel/cpu/sgx/driver.c | 197 +++++
arch/x86/kernel/cpu/sgx/driver.h | 29 +
arch/x86/kernel/cpu/sgx/encl.c | 737 +++++++++++++++++
arch/x86/kernel/cpu/sgx/encl.h | 119 +++
arch/x86/kernel/cpu/sgx/encls.h | 231 ++++++
arch/x86/kernel/cpu/sgx/ioctl.c | 718 ++++++++++++++++
arch/x86/kernel/cpu/sgx/main.c | 768 ++++++++++++++++++
arch/x86/kernel/cpu/sgx/sgx.h | 83 ++
arch/x86/kernel/traps.c | 10 +
arch/x86/mm/fault.c | 32 +-
include/linux/mm.h | 7 +
mm/mprotect.c | 7 +
tools/testing/selftests/Makefile | 1 +
tools/testing/selftests/sgx/.gitignore | 2 +
tools/testing/selftests/sgx/Makefile | 57 ++
tools/testing/selftests/sgx/call.S | 44 +
tools/testing/selftests/sgx/defines.h | 31 +
tools/testing/selftests/sgx/load.c | 305 +++++++
tools/testing/selftests/sgx/main.c | 293 +++++++
tools/testing/selftests/sgx/main.h | 41 +
tools/testing/selftests/sgx/sign_key.S | 12 +
tools/testing/selftests/sgx/sign_key.pem | 39 +
tools/testing/selftests/sgx/sigstruct.c | 382 +++++++++
tools/testing/selftests/sgx/test_encl.c | 35 +
tools/testing/selftests/sgx/test_encl.lds | 41 +
.../selftests/sgx/test_encl_bootstrap.S | 89 ++
49 files changed, 5423 insertions(+), 19 deletions(-)
create mode 100644 Documentation/x86/sgx.rst
create mode 100644 arch/x86/entry/vdso/extable.c
create mode 100644 arch/x86/entry/vdso/extable.h
create mode 100644 arch/x86/entry/vdso/vsgx.S
create mode 100644 arch/x86/include/asm/enclu.h
create mode 100644 arch/x86/include/asm/sgx.h
create mode 100644 arch/x86/include/uapi/asm/sgx.h
create mode 100644 arch/x86/kernel/cpu/sgx/Makefile
create mode 100644 arch/x86/kernel/cpu/sgx/driver.c
create mode 100644 arch/x86/kernel/cpu/sgx/driver.h
create mode 100644 arch/x86/kernel/cpu/sgx/encl.c
create mode 100644 arch/x86/kernel/cpu/sgx/encl.h
create mode 100644 arch/x86/kernel/cpu/sgx/encls.h
create mode 100644 arch/x86/kernel/cpu/sgx/ioctl.c
create mode 100644 arch/x86/kernel/cpu/sgx/main.c
create mode 100644 arch/x86/kernel/cpu/sgx/sgx.h
create mode 100644 tools/testing/selftests/sgx/.gitignore
create mode 100644 tools/testing/selftests/sgx/Makefile
create mode 100644 tools/testing/selftests/sgx/call.S
create mode 100644 tools/testing/selftests/sgx/defines.h
create mode 100644 tools/testing/selftests/sgx/load.c
create mode 100644 tools/testing/selftests/sgx/main.c
create mode 100644 tools/testing/selftests/sgx/main.h
create mode 100644 tools/testing/selftests/sgx/sign_key.S
create mode 100644 tools/testing/selftests/sgx/sign_key.pem
create mode 100644 tools/testing/selftests/sgx/sigstruct.c
create mode 100644 tools/testing/selftests/sgx/test_encl.c
create mode 100644 tools/testing/selftests/sgx/test_encl.lds
create mode 100644 tools/testing/selftests/sgx/test_encl_bootstrap.S
--
2.22.0

[PATCH RESEND OLK-5.10 0/3] Add support for kunpeng920 Ultrasoc System Memory Buffer driver
by Qi Liu 21 Feb '22
Add support for kunpeng920 Ultrasoc System Memory Buffer driver
Qi Liu (3):
coresight: etm4x: Modify core-commit to avoid HiSilicon ETM overflow
drivers/coresight: Add Ultrasoc System Memory Buffer driver
arm64: openeuler_defconfig: Enable config for ultrasoc driver
arch/arm64/configs/openeuler_defconfig | 7 +-
drivers/hwtracing/coresight/Kconfig | 20 +
drivers/hwtracing/coresight/Makefile | 1 +
.../coresight/coresight-etm4x-core.c | 98 +++
drivers/hwtracing/coresight/coresight-etm4x.h | 8 +
drivers/hwtracing/coresight/ultrasoc-smb.c | 602 ++++++++++++++++++
drivers/hwtracing/coresight/ultrasoc-smb.h | 100 +++
7 files changed, 835 insertions(+), 1 deletion(-)
create mode 100644 drivers/hwtracing/coresight/ultrasoc-smb.c
create mode 100644 drivers/hwtracing/coresight/ultrasoc-smb.h
--
2.33.0

[PATCH openEuler-1.0-LTS] mm,hwpoison: Fix use-after-free in memory_failure()
by Yang Yingliang 21 Feb '22
From: Ma Wupeng <mawupeng1(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4LE22
CVE: NA
--------------------------------
BUG: KASAN: use-after-free in __mutex_lock.isra.1+0x77c/0x860
Read of size 4 at addr ffff8000d8382040 by task syz-executor658/454
CPU: 3 PID: 454 Comm: syz-executor658 Not tainted 4.19.90+ #8
Hardware name: linux,dummy-virt (DT)
Call trace:
dump_backtrace+0x0/0x3f0
show_stack+0x28/0x38
dump_stack+0x170/0x1dc
print_address_description+0x68/0x2c8
kasan_report+0x130/0x2e8
__asan_report_load4_noabort+0x30/0x40
__mutex_lock.isra.1+0x77c/0x860
__mutex_lock_slowpath+0x24/0x30
mutex_lock+0x4c/0x58
memory_failure+0x1a8/0xf00
do_madvise+0x8bc/0x12b0
__arm64_sys_madvise+0x74/0x218
el0_svc_common+0x134/0x570
el0_svc_handler+0x190/0x260
el0_svc+0x10/0x218
Allocated by task 423:
kasan_kmalloc+0xdc/0x190
kasan_slab_alloc+0x14/0x20
kmem_cache_alloc_node+0xec/0x2a0
copy_process.isra.7.part.8+0x117c/0x58f0
_do_fork+0x188/0x8f0
__arm64_sys_clone+0xb0/0x108
el0_svc_common+0x134/0x570
el0_svc_handler+0x190/0x260
el0_svc+0x10/0x218
Freed by task 19:
__kasan_slab_free+0x120/0x228
kasan_slab_free+0x10/0x18
kmem_cache_free+0x1b8/0x270
free_task+0xb8/0xe0
__put_task_struct+0x248/0x318
delayed_put_task_struct+0x58/0x210
rcu_nocb_kthread+0x2b0/0x508
kthread+0x2c8/0x348
ret_from_fork+0x10/0x18
After commit 02d80b17ba49 ("mm/memory-failure: use a mutex to avoid
memory_failure() races"), all the error paths in memory_failure() need
to unlock mf_mutex, or the above use-after-free occurs. Fix the missing
unlock in the path where try_to_split_thp_page() fails.
Fixes: a668355ac487 ("mm,hwpoison: unify THP handling for hard and soft offline")
Signed-off-by: Ma Wupeng <mawupeng1(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
mm/memory-failure.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index cd3394dd70e16..578859c94866f 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1408,7 +1408,8 @@ int memory_failure(unsigned long pfn, int flags)
if (PageTransHuge(hpage)) {
if (try_to_split_thp_page(p, "Memory Failure") < 0) {
action_result(pfn, MF_MSG_UNSPLIT_THP, MF_IGNORED);
- return -EBUSY;
+ res = -EBUSY;
+ goto unlock_mutex;
}
VM_BUG_ON_PAGE(!page_count(p), p);
}
--
2.25.1
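The one-line fix above is the standard "single exit that drops the lock" pattern: every error path jumps to the unlock label instead of returning directly. A hedged userspace sketch using pthreads (the function and label names are invented for illustration):

```c
#include <pthread.h>
#include <errno.h>
#include <stdbool.h>

static pthread_mutex_t demo_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Mirrors the fixed shape of memory_failure(): errors set a result
 * code and jump to the common unlock label. */
static int demo_failure(bool split_fails)
{
	int res = 0;

	pthread_mutex_lock(&demo_mutex);

	if (split_fails) {
		res = -EBUSY;
		goto unlock_mutex;	/* was: return -EBUSY, leaking the lock */
	}

	/* ... normal handling under the lock ... */

unlock_mutex:
	pthread_mutex_unlock(&demo_mutex);
	return res;
}
```

The buggy form returned with the mutex still held, so the next caller would block forever (or, as in the report, race with a freed task).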

21 Feb '22
From: Luo Meng <luomeng12(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 186184
CVE: NA
--------------------------------
If dm_get_device() creates a dd in multipath_message(), and
table_deps() is called after dm_put_table_device(), this can
lead to concurrent use-after-free bugs.
One of the concurrency UAF can be shown as below:
(USE) | (FREE)
| target_message
| multipath_message
| dm_put_device
| dm_put_table_device #
| kfree(td) # table_device *td
ioctl # DM_TABLE_DEPS_CMD | ...
table_deps | ...
dm_get_live_or_inactive_table | ...
retrieve_dep | ...
list_for_each_entry | ...
deps->dev[count++] = | ...
huge_encode_dev | ...
(dd->dm_dev->bdev->bd_dev) | list_del(&dd->list)
| kfree(dd) # dm_dev_internal
The root cause of the UAF is that when find_device() fails in
dm_get_device(), a new dd is created with its refcount set to 1,
and the kfree() in dm_put_table_device() is not protected. When td
is released while pointers to it still exist, the concurrent
use-after-free occurs.
This patch adds a flag to control whether a new dd is created.
Signed-off-by: Luo Meng <luomeng12(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/md/dm-mpath.c | 2 +-
drivers/md/dm-table.c | 43 +++++++++++++++++++++--------------
include/linux/device-mapper.h | 2 ++
3 files changed, 29 insertions(+), 18 deletions(-)
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index bced42f082b02..e0bfa16aab379 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -1962,7 +1962,7 @@ static int multipath_message(struct dm_target *ti, unsigned argc, char **argv,
goto out;
}
- r = dm_get_device(ti, argv[1], dm_table_get_mode(ti->table), &dev);
+ r = __dm_get_device(ti, argv[1], dm_table_get_mode(ti->table), &dev, false);
if (r) {
DMWARN("message: error getting device %s",
argv[1]);
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 5c590895c14c3..f01c639f83875 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -361,12 +361,8 @@ dev_t dm_get_dev_t(const char *path)
}
EXPORT_SYMBOL_GPL(dm_get_dev_t);
-/*
- * Add a device to the list, or just increment the usage count if
- * it's already present.
- */
-int dm_get_device(struct dm_target *ti, const char *path, fmode_t mode,
- struct dm_dev **result)
+int __dm_get_device(struct dm_target *ti, const char *path, fmode_t mode,
+ struct dm_dev **result, bool create_dd)
{
int r;
dev_t dev;
@@ -390,19 +386,21 @@ int dm_get_device(struct dm_target *ti, const char *path, fmode_t mode,
dd = find_device(&t->devices, dev);
if (!dd) {
- dd = kmalloc(sizeof(*dd), GFP_KERNEL);
- if (!dd)
- return -ENOMEM;
-
- if ((r = dm_get_table_device(t->md, dev, mode, &dd->dm_dev))) {
- kfree(dd);
- return r;
- }
+ if (create_dd) {
+ dd = kmalloc(sizeof(*dd), GFP_KERNEL);
+ if (!dd)
+ return -ENOMEM;
- refcount_set(&dd->count, 1);
- list_add(&dd->list, &t->devices);
- goto out;
+ if ((r = dm_get_table_device(t->md, dev, mode, &dd->dm_dev))) {
+ kfree(dd);
+ return r;
+ }
+ refcount_set(&dd->count, 1);
+ list_add(&dd->list, &t->devices);
+ goto out;
+ } else
+ return -ENODEV;
} else if (dd->dm_dev->mode != (mode | dd->dm_dev->mode)) {
r = upgrade_mode(dd, mode, t->md);
if (r)
@@ -413,6 +411,17 @@ int dm_get_device(struct dm_target *ti, const char *path, fmode_t mode,
*result = dd->dm_dev;
return 0;
}
+EXPORT_SYMBOL(__dm_get_device);
+
+/*
+ * Add a device to the list, or just increment the usage count if
+ * it's already present.
+ */
+int dm_get_device(struct dm_target *ti, const char *path, fmode_t mode,
+ struct dm_dev **result)
+{
+ return __dm_get_device(ti, path, mode, result, true);
+}
EXPORT_SYMBOL(dm_get_device);
static int dm_set_device_limits(struct dm_target *ti, struct dm_dev *dev,
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 50cc070cb1f7c..47db4a14c9258 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -162,6 +162,8 @@ dev_t dm_get_dev_t(const char *path);
int dm_get_device(struct dm_target *ti, const char *path, fmode_t mode,
struct dm_dev **result);
void dm_put_device(struct dm_target *ti, struct dm_dev *d);
+int __dm_get_device(struct dm_target *ti, const char *path, fmode_t mode,
+ struct dm_dev **result, bool create_dd);
/*
* Information about a target type
--
2.25.1
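The new __dm_get_device() parameter turns "get or create" into "get only" for callers like multipath_message() that must never mint a fresh device entry. A simplified userspace sketch of that split (the list type and names are illustrative, not the device-mapper structures):

```c
#include <stdlib.h>
#include <errno.h>

struct demo_dev {
	int key;
	int refcount;
	struct demo_dev *next;
};

/* Look up key in *list; optionally create it when absent.
 * Mirrors the create_dd flag added to __dm_get_device(). */
static int demo_get_device(struct demo_dev **list, int key,
			   struct demo_dev **result, int create)
{
	struct demo_dev *d;

	for (d = *list; d; d = d->next) {
		if (d->key == key) {
			d->refcount++;
			*result = d;
			return 0;
		}
	}

	if (!create)
		return -ENODEV;	/* message path: fail instead of creating */

	d = malloc(sizeof(*d));
	if (!d)
		return -ENOMEM;
	d->key = key;
	d->refcount = 1;
	d->next = *list;
	*list = d;
	*result = d;
	return 0;
}
```

With `create == 0`, the message path can no longer leave behind a half-owned entry whose kfree() races with a concurrent table_deps() walker.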

Re: [PATCH OLK-5.10 1/3] coresight: etm4x: Modify core-commit to avoid HiSilicon ETM overflow
by Xie XiuQi 19 Feb '22
You need to subscribe to the kernel(a)openeuler.org mailing list; otherwise nobody else will receive this mail.
On 2022/2/19 15:19, Qi Liu wrote:
>
> driver inclusion
> category: feature
> bugzilla: https://gitee.com/openeuler/kernel/issues/I4UA33
Where does this patch come from? If it is from the upstream community, the patch author should be the original author,
and the upstream commit should be indicated.
If it has not landed upstream yet, a link to the upstream discussion is also required.
>
> -----------------------------------------
>
> The ETM device can't keep up with the core pipeline when cpu core
> is at full speed. This may cause overflow within core and its ETM.
> This is a common phenomenon on ETM devices.
>
> On HiSilicon Hip08 platform, a specific feature is added to set
> core pipeline. So commit rate can be reduced manually to avoid ETM
> overflow.
>
> Reviewed-by: Suzuki K Poulose <suzuki.poulose(a)arm.com>
> Signed-off-by: Qi Liu <liuqi115(a)huawei.com>
> [Modified changelog title and Kconfig description]
> Signed-off-by: Mathieu Poirier <mathieu.poirier(a)linaro.org>
> Link: https://lore.kernel.org/r/20201208182651.1597945-4-mathieu.poirier@linaro.o…
> Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Judging from the sign-offs on this patch, the community will likely need to discuss it.
If you want to push it to the openEuler community, you need your own Signed-off-by,
to show that you take responsibility for this patch.
> ---
> drivers/hwtracing/coresight/Kconfig | 8 ++
> .../coresight/coresight-etm4x-core.c | 98 +++++++++++++++++++
> drivers/hwtracing/coresight/coresight-etm4x.h | 8 ++
> 3 files changed, 114 insertions(+)
>
> diff --git a/drivers/hwtracing/coresight/Kconfig b/drivers/hwtracing/coresight/Kconfig
> index c1198245461d..7b44ba22cbe1 100644
> --- a/drivers/hwtracing/coresight/Kconfig
> +++ b/drivers/hwtracing/coresight/Kconfig
> @@ -110,6 +110,14 @@ config CORESIGHT_SOURCE_ETM4X
> To compile this driver as a module, choose M here: the
> module will be called coresight-etm4x.
>
> +config ETM4X_IMPDEF_FEATURE
> + bool "Control implementation defined overflow support in ETM 4.x driver"
> + depends on CORESIGHT_SOURCE_ETM4X
> + help
> + This control provides implementation define control for CoreSight
> + ETM 4.x tracer module that can't reduce commit rate automatically.
> + This avoids overflow between the ETM tracer module and the cpu core.
> +
> config CORESIGHT_STM
> tristate "CoreSight System Trace Macrocell driver"
> depends on (ARM && !(CPU_32v3 || CPU_32v4 || CPU_32v4T)) || ARM64
> diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
> index 74d3e2fe43d4..02d0b92cf510 100644
> --- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
> +++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
> @@ -3,6 +3,7 @@
> * Copyright (c) 2014, The Linux Foundation. All rights reserved.
> */
>
> +#include <linux/bitops.h>
> #include <linux/kernel.h>
> #include <linux/moduleparam.h>
> #include <linux/init.h>
> @@ -28,7 +29,9 @@
> #include <linux/perf_event.h>
> #include <linux/pm_runtime.h>
> #include <linux/property.h>
> +
> #include <asm/sections.h>
> +#include <asm/sysreg.h>
> #include <asm/local.h>
> #include <asm/virt.h>
>
> @@ -103,6 +106,97 @@ struct etm4_enable_arg {
> int rc;
> };
>
> +#ifdef CONFIG_ETM4X_IMPDEF_FEATURE
> +
> +#define HISI_HIP08_AMBA_ID 0x000b6d01
> +#define ETM4_AMBA_MASK 0xfffff
> +#define HISI_HIP08_CORE_COMMIT_MASK 0x3000
> +#define HISI_HIP08_CORE_COMMIT_SHIFT 12
> +#define HISI_HIP08_CORE_COMMIT_FULL 0b00
> +#define HISI_HIP08_CORE_COMMIT_LVL_1 0b01
> +#define HISI_HIP08_CORE_COMMIT_REG sys_reg(3, 1, 15, 2, 5)
> +
> +struct etm4_arch_features {
> + void (*arch_callback)(bool enable);
> +};
> +
> +static bool etm4_hisi_match_pid(unsigned int id)
> +{
> + return (id & ETM4_AMBA_MASK) == HISI_HIP08_AMBA_ID;
> +}
> +
> +static void etm4_hisi_config_core_commit(bool enable)
> +{
> + u8 commit = enable ? HISI_HIP08_CORE_COMMIT_LVL_1 :
> + HISI_HIP08_CORE_COMMIT_FULL;
> + u64 val;
> +
> + /*
> + * bit 12 and 13 of HISI_HIP08_CORE_COMMIT_REG are used together
> + * to set core-commit, 2'b00 means cpu is at full speed, 2'b01,
> + * 2'b10, 2'b11 mean reduce pipeline speed, and 2'b01 means level-1
> + * speed (minimum value). So bit 12 and 13 should be cleared together.
> + */
> + val = read_sysreg_s(HISI_HIP08_CORE_COMMIT_REG);
> + val &= ~HISI_HIP08_CORE_COMMIT_MASK;
> + val |= commit << HISI_HIP08_CORE_COMMIT_SHIFT;
> + write_sysreg_s(val, HISI_HIP08_CORE_COMMIT_REG);
> +}
> +
> +static struct etm4_arch_features etm4_features[] = {
> + [ETM4_IMPDEF_HISI_CORE_COMMIT] = {
> + .arch_callback = etm4_hisi_config_core_commit,
> + },
> + {},
> +};
> +
> +static void etm4_enable_arch_specific(struct etmv4_drvdata *drvdata)
> +{
> + struct etm4_arch_features *ftr;
> + int bit;
> +
> + for_each_set_bit(bit, drvdata->arch_features, ETM4_IMPDEF_FEATURE_MAX) {
> + ftr = &etm4_features[bit];
> +
> + if (ftr->arch_callback)
> + ftr->arch_callback(true);
> + }
> +}
> +
> +static void etm4_disable_arch_specific(struct etmv4_drvdata *drvdata)
> +{
> + struct etm4_arch_features *ftr;
> + int bit;
> +
> + for_each_set_bit(bit, drvdata->arch_features, ETM4_IMPDEF_FEATURE_MAX) {
> + ftr = &etm4_features[bit];
> +
> + if (ftr->arch_callback)
> + ftr->arch_callback(false);
> + }
> +}
> +
> +static void etm4_check_arch_features(struct etmv4_drvdata *drvdata,
> + unsigned int id)
> +{
> + if (etm4_hisi_match_pid(id))
> + set_bit(ETM4_IMPDEF_HISI_CORE_COMMIT, drvdata->arch_features);
> +}
> +#else
> +static void etm4_enable_arch_specific(struct etmv4_drvdata *drvdata)
> +{
> +}
> +
> +static void etm4_disable_arch_specific(struct etmv4_drvdata *drvdata)
> +{
> +}
> +
> +static void etm4_check_arch_features(struct etmv4_drvdata *drvdata,
> + unsigned int id)
> +{
> +}
> +#endif /* CONFIG_ETM4X_IMPDEF_FEATURE */
> +
> static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
> {
> int i, rc;
> @@ -110,6 +204,7 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
> struct device *etm_dev = &drvdata->csdev->dev;
>
> CS_UNLOCK(drvdata->base);
> + etm4_enable_arch_specific(drvdata);
>
> etm4_os_unlock(drvdata);
>
> @@ -480,6 +575,7 @@ static void etm4_disable_hw(void *info)
> int i;
>
> CS_UNLOCK(drvdata->base);
> + etm4_disable_arch_specific(drvdata);
>
> if (!drvdata->skip_power_up) {
> /* power can be removed from the trace unit now */
> @@ -1563,6 +1659,8 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
> drvdata->boot_enable = true;
> }
>
> + etm4_check_arch_features(drvdata, id->id);
> +
> return 0;
> }
>
> diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
> index eefc7371c6c4..3dd3e0633328 100644
> --- a/drivers/hwtracing/coresight/coresight-etm4x.h
> +++ b/drivers/hwtracing/coresight/coresight-etm4x.h
> @@ -8,6 +8,7 @@
>
> #include <asm/local.h>
> #include <linux/spinlock.h>
> +#include <linux/types.h>
> #include "coresight-priv.h"
>
> /*
> @@ -203,6 +204,11 @@
> /* Interpretation of resource numbers change at ETM v4.3 architecture */
> #define ETM4X_ARCH_4V3 0x43
>
> +enum etm_impdef_type {
> + ETM4_IMPDEF_HISI_CORE_COMMIT,
> + ETM4_IMPDEF_FEATURE_MAX,
> +};
> +
> /**
> * struct etmv4_config - configuration information related to an ETMv4
> * @mode: Controls various modes supported by this ETM.
> @@ -415,6 +421,7 @@ struct etmv4_save_state {
> * @state_needs_restore: True when there is context to restore after PM exit
> * @skip_power_up: Indicates if an implementation can skip powering up
> * the trace unit.
> + * @arch_features: Bitmap of arch features of etmv4 devices.
> */
> struct etmv4_drvdata {
> void __iomem *base;
> @@ -463,6 +470,7 @@ struct etmv4_drvdata {
> struct etmv4_save_state *save_state;
> bool state_needs_restore;
> bool skip_power_up;
> + DECLARE_BITMAP(arch_features, ETM4_IMPDEF_FEATURE_MAX);
> };
>
> /* Address comparator access types */
>
18 Feb '22
Ramaxel inclusion
from OLK-5.10
commit dff67aa564e7 ("scsi: spfc: initial commit the spfc module")
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4UA67
CVE: NA
Fix:
1. Remove UNF_ORIGIN_HOTTAG_MASK and UNF_HOTTAG_FLAG
2. Update some output strings
3. Remove spinlock protection in free_parent_sq() because the
caller free_parent_queue_info() already holds the spinlock
Signed-off-by: Yanling Song <songyl(a)ramaxel.com>
Reviewed-by: Yun Xu <xuyun(a)ramaxel.com>
---
drivers/scsi/spfc/common/unf_common.h | 2 --
drivers/scsi/spfc/common/unf_io.c | 3 +--
drivers/scsi/spfc/common/unf_io_abnormal.c | 2 +-
drivers/scsi/spfc/common/unf_rport.c | 2 +-
drivers/scsi/spfc/common/unf_service.c | 19 +++++--------------
drivers/scsi/spfc/hw/spfc_hba.c | 2 +-
drivers/scsi/spfc/hw/spfc_io.c | 2 +-
drivers/scsi/spfc/hw/spfc_queue.c | 5 -----
drivers/scsi/spfc/hw/spfc_service.c | 22 ++++++++++++----------
9 files changed, 22 insertions(+), 37 deletions(-)
diff --git a/drivers/scsi/spfc/common/unf_common.h b/drivers/scsi/spfc/common/unf_common.h
index bf9d156e07ce..9613649308bf 100644
--- a/drivers/scsi/spfc/common/unf_common.h
+++ b/drivers/scsi/spfc/common/unf_common.h
@@ -12,8 +12,6 @@
#define SPFC_DRV_DESC "Ramaxel Memory Technology Fibre Channel Driver"
#define UNF_MAX_SECTORS 0xffff
-#define UNF_ORIGIN_HOTTAG_MASK 0x7fff
-#define UNF_HOTTAG_FLAG (1 << 15)
#define UNF_PKG_FREE_OXID 0x0
#define UNF_PKG_FREE_RXID 0x1
diff --git a/drivers/scsi/spfc/common/unf_io.c b/drivers/scsi/spfc/common/unf_io.c
index b1255ecba88c..5de69f8ddc6d 100644
--- a/drivers/scsi/spfc/common/unf_io.c
+++ b/drivers/scsi/spfc/common/unf_io.c
@@ -890,8 +890,7 @@ static int unf_send_fcpcmnd(struct unf_lport *lport, struct unf_rport *rport,
unf_xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
pkg.private_data[PKG_PRIVATE_XCHG_VP_INDEX] = unf_lport->vp_index;
pkg.private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = unf_rport->rport_index;
- pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] =
- unf_xchg->hotpooltag | UNF_HOTTAG_FLAG;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = unf_xchg->hotpooltag;
unf_select_sq(unf_xchg, &pkg);
pkg.fcp_cmnd = &unf_xchg->fcp_cmnd;
diff --git a/drivers/scsi/spfc/common/unf_io_abnormal.c b/drivers/scsi/spfc/common/unf_io_abnormal.c
index fece7aa5f441..4e268ac026ca 100644
--- a/drivers/scsi/spfc/common/unf_io_abnormal.c
+++ b/drivers/scsi/spfc/common/unf_io_abnormal.c
@@ -763,7 +763,7 @@ int unf_send_scsi_mgmt_cmnd(struct unf_xchg *xchg, struct unf_lport *lport,
pkg.xchg_contex = unf_xchg;
pkg.private_data[PKG_PRIVATE_XCHG_RPORT_INDEX] = rport->rport_index;
pkg.fcp_cmnd = &unf_xchg->fcp_cmnd;
- pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = unf_xchg->hotpooltag | UNF_HOTTAG_FLAG;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = unf_xchg->hotpooltag;
pkg.frame_head.csctl_sid = lport->nport_id;
pkg.frame_head.rctl_did = rport->nport_id;
diff --git a/drivers/scsi/spfc/common/unf_rport.c b/drivers/scsi/spfc/common/unf_rport.c
index aa4967fc0ab6..9b06df884524 100644
--- a/drivers/scsi/spfc/common/unf_rport.c
+++ b/drivers/scsi/spfc/common/unf_rport.c
@@ -352,7 +352,7 @@ struct unf_rport *unf_find_valid_rport(struct unf_lport *lport, u64 wwpn, u32 si
spin_unlock_irqrestore(rport_state_lock, flags);
FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
- "[err]Port(0x%x) RPort(0x%p) find by WWPN(0x%llx) is invalid",
+ "[info]Port(0x%x) RPort(0x%p) find by WWPN(0x%llx) is invalid",
lport->port_id, rport_by_wwpn, wwpn);
rport_by_wwpn = NULL;
diff --git a/drivers/scsi/spfc/common/unf_service.c b/drivers/scsi/spfc/common/unf_service.c
index 8f72f6470647..9c86c99374c8 100644
--- a/drivers/scsi/spfc/common/unf_service.c
+++ b/drivers/scsi/spfc/common/unf_service.c
@@ -130,7 +130,7 @@ void unf_fill_package(struct unf_frame_pkg *pkg, struct unf_xchg *xchg,
pkg->private_data[PKG_PRIVATE_RPORT_RX_SIZE] = rport->max_frame_size;
}
- pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag | UNF_HOTTAG_FLAG;
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag;
pkg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
xchg->private_data[PKG_PRIVATE_XCHG_ALLOC_TIME];
pkg->private_data[PKG_PRIVATE_LOWLEVEL_XCHG_ADD] =
@@ -250,7 +250,7 @@ u32 unf_send_abts(struct unf_lport *lport, struct unf_xchg *xchg)
pkg.unf_cmnd_pload_bl.buffer_ptr = (u8 *)xchg->fcp_sfs_union.sfs_entry.fc_sfs_entry_ptr;
pkg.unf_cmnd_pload_bl.buf_dma_addr = xchg->fcp_sfs_union.sfs_entry.sfs_buff_phy_addr;
- pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag | UNF_HOTTAG_FLAG;
+ pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag;
UNF_SET_XCHG_ALLOC_TIME(&pkg, xchg);
UNF_SET_ABORT_INFO_IOTYPE(&pkg, xchg);
@@ -407,19 +407,10 @@ static u32 unf_els_cmnd_default_handler(struct unf_lport *lport, struct unf_xchg
rjt_info.reason_code = UNF_LS_RJT_NOT_SUPPORTED;
unf_rport = unf_get_rport_by_nport_id(lport, sid);
- if (unf_rport) {
- if (unf_rport->rport_index !=
- xchg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX]) {
- FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
- "[warn]Port(0x%x_0x%x) NPort handle(0x%x) from low level is not equal to RPort index(0x%x)",
- lport->port_id, lport->nport_id,
- xchg->private_data[PKG_PRIVATE_XCHG_RPORT_INDEX],
- unf_rport->rport_index);
- }
+ if (unf_rport)
ret = unf_send_els_rjt_by_rport(lport, xchg, unf_rport, &rjt_info);
- } else {
+ else
ret = unf_send_els_rjt_by_did(lport, xchg, sid, &rjt_info);
- }
return ret;
}
@@ -1389,7 +1380,7 @@ static void unf_fill_free_xid_pkg(struct unf_xchg *xchg, struct unf_frame_pkg *p
pkg->frame_head.csctl_sid = xchg->sid;
pkg->frame_head.rctl_did = xchg->did;
pkg->frame_head.oxid_rxid = (u32)(((u32)xchg->oxid << UNF_SHIFT_16) | xchg->rxid);
- pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag | UNF_HOTTAG_FLAG;
+ pkg->private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = xchg->hotpooltag;
UNF_SET_XCHG_ALLOC_TIME(pkg, xchg);
if (xchg->xchg_type == UNF_XCHG_TYPE_SFS) {
diff --git a/drivers/scsi/spfc/hw/spfc_hba.c b/drivers/scsi/spfc/hw/spfc_hba.c
index e12299c9e2c9..b033dcb78bb3 100644
--- a/drivers/scsi/spfc/hw/spfc_hba.c
+++ b/drivers/scsi/spfc/hw/spfc_hba.c
@@ -56,7 +56,7 @@ static struct unf_cfg_item spfc_port_cfg_parm[] = {
{"port_topology", 0, 0xf, 0x20},
{"port_alpa", 0, 0xdead, 0xffff}, /* alpa address of port */
/* queue depth of originator registered to SCSI midlayer */
- {"max_queue_depth", 0, 128, 128},
+ {"max_queue_depth", 0, 512, 512},
{"sest_num", 0, 2048, 2048},
{"max_login", 0, 2048, 2048},
/* nodename from 32 bit to 64 bit */
diff --git a/drivers/scsi/spfc/hw/spfc_io.c b/drivers/scsi/spfc/hw/spfc_io.c
index 2b1d1c607b13..7184eb6a10af 100644
--- a/drivers/scsi/spfc/hw/spfc_io.c
+++ b/drivers/scsi/spfc/hw/spfc_io.c
@@ -1138,7 +1138,7 @@ u32 spfc_scq_recv_iresp(struct spfc_hba_info *hba, union spfc_scqe *wqe)
pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = iresp->magic_num;
pkg.frame_head.oxid_rxid = (((iresp->wd0.ox_id) << UNF_SHIFT_16) | (iresp->wd0.rx_id));
- hot_tag = (u16)iresp->wd2.hotpooltag & UNF_ORIGIN_HOTTAG_MASK;
+ hot_tag = (u16)iresp->wd2.hotpooltag;
/* 2. HotTag validity check */
if (likely(hot_tag >= hba->exi_base && (hot_tag < hba->exi_base + hba->exi_count))) {
pkg.status = UNF_IO_SUCCESS;
diff --git a/drivers/scsi/spfc/hw/spfc_queue.c b/drivers/scsi/spfc/hw/spfc_queue.c
index abcf1ff3f49f..fa4295832da7 100644
--- a/drivers/scsi/spfc/hw/spfc_queue.c
+++ b/drivers/scsi/spfc/hw/spfc_queue.c
@@ -2138,11 +2138,9 @@ static void spfc_free_parent_sq(struct spfc_hba_info *hba,
u32 uidelaycnt = 0;
struct list_head *list = NULL;
struct spfc_suspend_sqe_info *suspend_sqe = NULL;
- ulong flag = 0;
sq_info = &parq_info->parent_sq_info;
- spin_lock_irqsave(&parq_info->parent_queue_state_lock, flag);
while (!list_empty(&sq_info->suspend_sqe_list)) {
list = UNF_OS_LIST_NEXT(&sq_info->suspend_sqe_list);
list_del(list);
@@ -2156,7 +2154,6 @@ static void spfc_free_parent_sq(struct spfc_hba_info *hba,
kfree(suspend_sqe);
}
}
- spin_unlock_irqrestore(&parq_info->parent_queue_state_lock, flag);
/* Free data cos */
spfc_update_cos_rport_cnt(hba, parq_info->queue_data_cos);
@@ -4475,9 +4472,7 @@ void spfc_free_parent_queue_info(void *handle, struct spfc_parent_queue_info *pa
* with the sq in the queue of the parent
*/
- spin_unlock_irqrestore(prtq_state_lock, flag);
spfc_free_parent_sq(hba, parent_queue_info);
- spin_lock_irqsave(prtq_state_lock, flag);
/* The initialization of all queue id is invalid */
parent_queue_info->parent_cmd_scq_info.cqm_queue_id = INVALID_VALUE32;
diff --git a/drivers/scsi/spfc/hw/spfc_service.c b/drivers/scsi/spfc/hw/spfc_service.c
index e99802df50a2..1da58e3f9fbe 100644
--- a/drivers/scsi/spfc/hw/spfc_service.c
+++ b/drivers/scsi/spfc/hw/spfc_service.c
@@ -742,7 +742,7 @@ u32 spfc_scq_recv_abts_rsp(struct spfc_hba_info *hba, union spfc_scqe *scqe)
ox_id = (u32)(abts_rsp->wd0.ox_id);
- hot_tag = abts_rsp->wd1.hotpooltag & UNF_ORIGIN_HOTTAG_MASK;
+ hot_tag = abts_rsp->wd1.hotpooltag;
if (unlikely(hot_tag < (u32)hba->exi_base ||
hot_tag >= (u32)(hba->exi_base + hba->exi_count))) {
FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_ERR,
@@ -1210,7 +1210,7 @@ u32 spfc_scq_recv_ls_gs_rsp(struct spfc_hba_info *hba, union spfc_scqe *scqe)
spfc_swap_16_in_32((u32 *)ls_gs_rsp_scqe->user_id, SPFC_LS_GS_USERID_LEN);
ox_id = ls_gs_rsp_scqe->wd1.ox_id;
- hot_tag = ((u16)(ls_gs_rsp_scqe->wd5.hotpooltag) & UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+ hot_tag = ((u16)ls_gs_rsp_scqe->wd5.hotpooltag) - hba->exi_base;
pkg.frame_head.oxid_rxid = (u32)(ls_gs_rsp_scqe->wd1.rx_id) | ox_id << UNF_SHIFT_16;
pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = ls_gs_rsp_scqe->magic_num;
pkg.private_data[PKG_PRIVATE_XCHG_HOT_POOL_INDEX] = hot_tag;
@@ -1317,8 +1317,7 @@ u32 spfc_scq_recv_els_rsp_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] =
els_rsp_sts_scqe->magic_num;
pkg.frame_head.oxid_rxid = rx_id | (u32)(els_rsp_sts_scqe->wd0.ox_id) << UNF_SHIFT_16;
- hot_tag = (u32)((els_rsp_sts_scqe->wd1.hotpooltag & UNF_ORIGIN_HOTTAG_MASK) -
- hba->exi_base);
+ hot_tag = (u32)(els_rsp_sts_scqe->wd1.hotpooltag - hba->exi_base);
if (unlikely(SPFC_SCQE_HAS_ERRCODE(scqe)))
pkg.status = UNF_IO_FAILED;
@@ -1759,7 +1758,7 @@ u32 spfc_scq_recv_marker_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
tmf_marker_sts_scqe = &scqe->itmf_marker_sts;
ox_id = (u32)tmf_marker_sts_scqe->wd1.ox_id;
rx_id = (u32)tmf_marker_sts_scqe->wd1.rx_id;
- hot_tag = (tmf_marker_sts_scqe->wd4.hotpooltag & UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+ hot_tag = tmf_marker_sts_scqe->wd4.hotpooltag - hba->exi_base;
pkg.frame_head.oxid_rxid = rx_id | (u32)(ox_id) << UNF_SHIFT_16;
pkg.private_data[PKG_PRIVATE_XCHG_ALLOC_TIME] = tmf_marker_sts_scqe->magic_num;
pkg.frame_head.csctl_sid = tmf_marker_sts_scqe->wd3.sid;
@@ -1800,7 +1799,7 @@ u32 spfc_scq_recv_abts_marker_sts(struct spfc_hba_info *hba, union spfc_scqe *sc
ox_id = (u32)abts_marker_sts_scqe->wd1.ox_id;
rx_id = (u32)abts_marker_sts_scqe->wd1.rx_id;
- hot_tag = (abts_marker_sts_scqe->wd4.hotpooltag & UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+ hot_tag = abts_marker_sts_scqe->wd4.hotpooltag - hba->exi_base;
pkg.frame_head.oxid_rxid = rx_id | (u32)(ox_id) << UNF_SHIFT_16;
pkg.frame_head.csctl_sid = abts_marker_sts_scqe->wd3.sid;
pkg.frame_head.rctl_did = abts_marker_sts_scqe->wd2.did;
@@ -1972,8 +1971,7 @@ u32 spfc_scq_free_xid_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
rx_id = (u32)free_xid_sts_scqe->wd0.rx_id;
if (free_xid_sts_scqe->wd1.hotpooltag != INVALID_VALUE16) {
- hot_tag = (free_xid_sts_scqe->wd1.hotpooltag &
- UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+ hot_tag = free_xid_sts_scqe->wd1.hotpooltag - hba->exi_base;
}
FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
@@ -1998,7 +1996,7 @@ u32 spfc_scq_exchg_timeout_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
rx_id = (u32)time_out_scqe->wd0.rx_id;
if (time_out_scqe->wd1.hotpooltag != INVALID_VALUE16)
- hot_tag = (time_out_scqe->wd1.hotpooltag & UNF_ORIGIN_HOTTAG_MASK) - hba->exi_base;
+ hot_tag = time_out_scqe->wd1.hotpooltag - hba->exi_base;
FC_DRV_PRINT(UNF_LOG_EQUIP_ATT, UNF_INFO,
"Port(0x%x) recv timer time out sts hotpooltag(0x%x) magicnum(0x%x) ox_id(0x%x) rxid(0x%x) sts(%d)",
@@ -2054,7 +2052,7 @@ u32 spfc_scq_rcv_sq_nop_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_INFO,
"[info]Port(0x%x) rport_index(0x%x) find suspend sqe.",
hba->port_cfg.port_id, rport_index);
- if (sqn < sqn_max) {
+ if ((sqn < sqn_max) && (sqn >= sqn_base)) {
ret = spfc_send_nop_cmd(hba, parent_sq_info, magic_num, sqn + 1);
} else if (sqn == sqn_max) {
if (!cancel_delayed_work(&suspend_sqe->timeout_work)) {
@@ -2065,6 +2063,10 @@ u32 spfc_scq_rcv_sq_nop_sts(struct spfc_hba_info *hba, union spfc_scqe *scqe)
parent_sq_info->need_offloaded = suspend_sqe->old_offload_sts;
ret = spfc_pop_suspend_sqe(hba, prt_qinfo, suspend_sqe);
kfree(suspend_sqe);
+ } else {
+ FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
+ "[warn]Port(0x%x) rport(0x%x) rcv error sqn(0x%x)",
+ hba->port_cfg.port_id, rport_index, sqn);
}
} else {
FC_DRV_PRINT(UNF_LOG_LOGIN_ATT, UNF_WARN,
--
2.32.0
[PATCH openEuler-1.0-LTS 1/2] usb: gadget: don't release an existing dev->buf
by Yang Yingliang 18 Feb '22
From: Hangyu Hua <hbh25y(a)gmail.com>
mainline inclusion
from mainline-v5.17-rc1
commit 89f3594d0de58e8a57d92d497dea9fee3d4b9cda
category: bugfix
bugzilla: NA
CVE: CVE-2022-24958
--------------------------------
An existing dev->buf must not be released when dev_config()
finds the device already configured; only the newly allocated
buffer should be freed in that case.
Acked-by: Alan Stern <stern(a)rowland.harvard.edu>
Signed-off-by: Hangyu Hua <hbh25y(a)gmail.com>
Link: https://lore.kernel.org/r/20211231172138.7993-2-hbh25y@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/usb/gadget/legacy/inode.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/usb/gadget/legacy/inode.c b/drivers/usb/gadget/legacy/inode.c
index f91d403da3141..a1488de12d450 100644
--- a/drivers/usb/gadget/legacy/inode.c
+++ b/drivers/usb/gadget/legacy/inode.c
@@ -1829,8 +1829,9 @@ dev_config (struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
spin_lock_irq (&dev->lock);
value = -EINVAL;
if (dev->buf) {
+ spin_unlock_irq(&dev->lock);
kfree(kbuf);
- goto fail;
+ return value;
}
dev->buf = kbuf;
--
2.25.1
[PATCH OLK-5.10 0/5] net/spnic: fix bugs:Remove unused functions about ceq
by Yanling Song 18 Feb '22
Fix 5 bugs:
1. Fix array bounds error in ethtool get_link_ksettings
2. Fix ethtool loopback command failure
3. Fix xor checksum error when sending a non 4B-aligned
message to firmware
4. Fix an error when netdev failed to link up
5. Reduce the timeout of the channel between driver and
firmware
Yanling Song (5):
net/spnic: Fix array bounds error in ethtool get_link_ksettings
net/spnic: Fix ethtool loopback command failure
net/spnic: Fix xor checksum error when sending a non 4B-aligned
message to firmware
net/spnic: Fix an error when netdev failed to link up
net/spnic: Reduce the timeout of the channel between driver and
firmware
.../net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c | 2 +-
.../net/ethernet/ramaxel/spnic/hw/sphw_mbox.c | 10 ++--
.../net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c | 2 +-
.../ramaxel/spnic/spnic_ethtool_stats.c | 4 +-
.../ethernet/ramaxel/spnic/spnic_mag_cfg.c | 2 +-
.../ethernet/ramaxel/spnic/spnic_nic_cfg_vf.c | 49 ++++++++++++-------
6 files changed, 41 insertions(+), 28 deletions(-)
--
2.32.0
[PATCH openEuler-1.0-LTS 1/2] cgroup-v1: Require capabilities to set release_agent
by Yang Yingliang 17 Feb '22
From: "Eric W. Biederman" <ebiederm(a)xmission.com>
stable inclusion
from stable-v4.19.229
commit 939f8b491887c27585933ea7dc5ad4123de58ff3
CVE: CVE-2022-0492
-------------------------------
commit 24f6008564183aa120d07c03d9289519c2fe02af upstream.
The cgroup release_agent is called with call_usermodehelper. The function
call_usermodehelper starts the release_agent with a full set of capabilities.
Therefore require capabilities when setting the release_agent.
Reported-by: Tabitha Sable <tabitha.c.sable(a)gmail.com>
Tested-by: Tabitha Sable <tabitha.c.sable(a)gmail.com>
Fixes: 81a6a5cdd2c5 ("Task Control Groups: automatic userspace notification of idle cgroups")
Cc: stable(a)vger.kernel.org # v2.6.24+
Signed-off-by: "Eric W. Biederman" <ebiederm(a)xmission.com>
Signed-off-by: Tejun Heo <tj(a)kernel.org>
[mkoutny: Adjust for pre-fs_context, duplicate mount/remount check, drop log messages.]
Acked-by: Michal Koutný <mkoutny(a)suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Lu Jialin <lujialin4(a)huawei.com>
Reviewed-by: Wang Weiyang <wangweiyang2(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/cgroup/cgroup-v1.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c
index f833fe71fa5f6..c4cc6c1ddacde 100644
--- a/kernel/cgroup/cgroup-v1.c
+++ b/kernel/cgroup/cgroup-v1.c
@@ -580,6 +580,14 @@ static ssize_t cgroup_release_agent_write(struct kernfs_open_file *of,
BUILD_BUG_ON(sizeof(cgrp->root->release_agent_path) < PATH_MAX);
+ /*
+ * Release agent gets called with all capabilities,
+ * require capabilities to set release agent.
+ */
+ if ((of->file->f_cred->user_ns != &init_user_ns) ||
+ !capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
cgrp = cgroup_kn_lock_live(of->kn, false);
if (!cgrp)
return -ENODEV;
@@ -1054,6 +1062,7 @@ static int cgroup1_remount(struct kernfs_root *kf_root, int *flags, char *data)
{
int ret = 0;
struct cgroup_root *root = cgroup_root_from_kf(kf_root);
+ struct cgroup_namespace *ns = current->nsproxy->cgroup_ns;
struct cgroup_sb_opts opts;
u16 added_mask, removed_mask;
@@ -1067,6 +1076,12 @@ static int cgroup1_remount(struct kernfs_root *kf_root, int *flags, char *data)
if (opts.subsys_mask != root->subsys_mask || opts.release_agent)
pr_warn("option changes via remount are deprecated (pid=%d comm=%s)\n",
task_tgid_nr(current), current->comm);
+ /* See cgroup1_mount release_agent handling */
+ if (opts.release_agent &&
+ ((ns->user_ns != &init_user_ns) || !capable(CAP_SYS_ADMIN))) {
+ ret = -EINVAL;
+ goto out_unlock;
+ }
added_mask = opts.subsys_mask & ~root->subsys_mask;
removed_mask = root->subsys_mask & ~opts.subsys_mask;
@@ -1205,6 +1220,15 @@ struct dentry *cgroup1_mount(struct file_system_type *fs_type, int flags,
ret = -EPERM;
goto out_unlock;
}
+ /*
+ * Release agent gets called with all capabilities,
+ * require capabilities to set release agent.
+ */
+ if (opts.release_agent &&
+ ((ns->user_ns != &init_user_ns) || !capable(CAP_SYS_ADMIN))) {
+ ret = -EINVAL;
+ goto out_unlock;
+ }
root = kzalloc(sizeof(*root), GFP_KERNEL);
if (!root) {
--
2.25.1
[PATCH openEuler-5.10 1/9] livepatch: Fix kobject refcount bug on klp_init_patch_early failure path
by Zheng Zengkai 17 Feb '22
From: David Vernet <void(a)manifault.com>
mainline inclusion
from mainline-v5.17-rc1
commit 5ef3dd20555e8e878ac390a71e658db5fd02845c
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4TF7T
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
When enabling a klp patch with klp_enable_patch(), klp_init_patch_early()
is invoked to initialize the kobjects for the patch itself, as well as the
'struct klp_object' and 'struct klp_func' objects that comprise it.
However, there are some error paths in klp_enable_patch() where some
kobjects may have been initialized with kobject_init(), but an error code
is still returned due to e.g. a 'struct klp_object' having a NULL funcs
pointer.
In these paths, the initial reference of the kobject of the 'struct
klp_patch' may never be released, along with one or more of its objects and
their functions, as kobject_put() is not invoked on the cleanup path if
klp_init_patch_early() returns an error code.
For example, if an object entry such as the following were added to the
sample livepatch module's klp patch, it would cause the vmlinux klp_object,
and its klp_func which updates 'cmdline_proc_show', to never be released:
static struct klp_object objs[] = {
{
/* name being NULL means vmlinux */
.funcs = funcs,
},
{
/* NULL funcs -- would cause reference leak */
.name = "kvm",
}, { }
};
Without this change, if CONFIG_DEBUG_KOBJECT is enabled, and the sample klp
patch is loaded, the kobjects (the patch, the vmlinux 'struct klp_object',
and its func) are observed as initialized, but never released, in the dmesg
log output. With the change, these kobject references no longer fail to be
released as the error case is properly handled before they are initialized.
Since 81fd525cedd9 ("[Huawei] livepatch: Add klp_{register,unregister}_patch
for stop_machine model"), klp_register_patch() has shared this issue, having
been derived from klp_enable_patch(), so we also fix it in this patch.
Signed-off-by: David Vernet <void(a)manifault.com>
Reviewed-by: Petr Mladek <pmladek(a)suse.com>
Acked-by: Miroslav Benes <mbenes(a)suse.cz>
Acked-by: Josh Poimboeuf <jpoimboe(a)redhat.com>
Signed-off-by: Petr Mladek <pmladek(a)suse.com>
Conflicts:
kernel/livepatch/core.c
Fixes: 0430f78bf38f ("livepatch: Consolidate klp_free functions")
Fixes: c33e42836a74 ("livepatch/core: Allow implementation without ftrace")
Signed-off-by: Zheng Yejian <zhengyejian1(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
kernel/livepatch/core.c | 50 +++++++++++++++++------------------------
1 file changed, 20 insertions(+), 30 deletions(-)
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index b46ef236424d..b0f54d4c663b 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -1146,14 +1146,11 @@ static void klp_init_object_early(struct klp_patch *patch,
#endif
}
-static int klp_init_patch_early(struct klp_patch *patch)
+static void klp_init_patch_early(struct klp_patch *patch)
{
struct klp_object *obj;
struct klp_func *func;
- if (!patch->objs)
- return -EINVAL;
-
INIT_LIST_HEAD(&patch->list);
INIT_LIST_HEAD(&patch->obj_list);
kobject_init(&patch->kobj, &klp_ktype_patch);
@@ -1163,26 +1160,12 @@ static int klp_init_patch_early(struct klp_patch *patch)
init_completion(&patch->finish);
klp_for_each_object_static(patch, obj) {
- if (!obj->funcs)
- return -EINVAL;
-
klp_init_object_early(patch, obj);
klp_for_each_func_static(obj, func) {
klp_init_func_early(obj, func);
}
}
-
- /*
- * For stop_machine model, we only need to module_get and module_put once when
- * enable_patch and disable_patch respectively.
- */
-#ifdef CONFIG_LIVEPATCH_PER_TASK_CONSISTENCY
- if (!try_module_get(patch->mod))
- return -ENODEV;
-#endif
-
- return 0;
}
static int klp_init_patch(struct klp_patch *patch)
@@ -1431,10 +1414,16 @@ static int __klp_enable_patch(struct klp_patch *patch)
int klp_enable_patch(struct klp_patch *patch)
{
int ret;
+ struct klp_object *obj;
- if (!patch || !patch->mod)
+ if (!patch || !patch->mod || !patch->objs)
return -EINVAL;
+ klp_for_each_object_static(patch, obj) {
+ if (!obj->funcs)
+ return -EINVAL;
+ }
+
if (!is_livepatch_module(patch->mod)) {
pr_err("module %s is not marked as a livepatch module\n",
patch->mod->name);
@@ -1458,11 +1447,10 @@ int klp_enable_patch(struct klp_patch *patch)
return -EINVAL;
}
- ret = klp_init_patch_early(patch);
- if (ret) {
- mutex_unlock(&klp_mutex);
- return ret;
- }
+ if (!try_module_get(patch->mod))
+ return -ENODEV;
+
+ klp_init_patch_early(patch);
ret = klp_init_patch(patch);
if (ret)
@@ -1609,10 +1597,16 @@ static int __klp_enable_patch(struct klp_patch *patch)
int klp_register_patch(struct klp_patch *patch)
{
int ret;
+ struct klp_object *obj;
- if (!patch || !patch->mod)
+ if (!patch || !patch->mod || !patch->objs)
return -EINVAL;
+ klp_for_each_object_static(patch, obj) {
+ if (!obj->funcs)
+ return -EINVAL;
+ }
+
if (!is_livepatch_module(patch->mod)) {
pr_err("module %s is not marked as a livepatch module\n",
patch->mod->name);
@@ -1629,11 +1623,7 @@ int klp_register_patch(struct klp_patch *patch)
return -EINVAL;
}
- ret = klp_init_patch_early(patch);
- if (ret) {
- mutex_unlock(&klp_mutex);
- return ret;
- }
+ klp_init_patch_early(patch);
ret = klp_init_patch(patch);
if (ret)
--
2.20.1
[PATCH openEuler-1.0-LTS 1/2] NFSv4: Handle case where the lookup of a directory fails
by Yang Yingliang 16 Feb '22
From: Trond Myklebust <trond.myklebust(a)hammerspace.com>
mainline inclusion
from mainline-v5.16
commit ac795161c93699d600db16c1a8cc23a65a1eceaf
category: bugfix
bugzilla: 186205
CVE: CVE-2022-24448
-----------------------------------------------
If the application sets the O_DIRECTORY flag, and tries to open a
regular file, nfs_atomic_open() will punt to doing a regular lookup.
If the server then returns a regular file, we will happily return a
file descriptor with uninitialised open state.
The fix is to return the expected ENOTDIR error in these cases.
Reported-by: Lyu Tao <tao.lyu(a)epfl.ch>
Fixes: 0dd2b474d0b6 ("nfs: implement i_op->atomic_open()")
Signed-off-by: Trond Myklebust <trond.myklebust(a)hammerspace.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker(a)Netapp.com>
Signed-off-by: Zhang Xiaoxu <zhangxiaoxu5(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/nfs/dir.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index ff9129c0572d9..757a83556b003 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -1637,6 +1637,19 @@ int nfs_atomic_open(struct inode *dir, struct dentry *dentry,
no_open:
res = nfs_lookup(dir, dentry, lookup_flags);
+ if (!res) {
+ inode = d_inode(dentry);
+ if ((lookup_flags & LOOKUP_DIRECTORY) && inode &&
+ !S_ISDIR(inode->i_mode))
+ res = ERR_PTR(-ENOTDIR);
+ } else if (!IS_ERR(res)) {
+ inode = d_inode(res);
+ if ((lookup_flags & LOOKUP_DIRECTORY) && inode &&
+ !S_ISDIR(inode->i_mode)) {
+ dput(res);
+ res = ERR_PTR(-ENOTDIR);
+ }
+ }
if (switched) {
d_lookup_done(dentry);
if (!res)
--
2.25.1
Hello!
The Kernel SIG invites you to a ZOOM conference (auto-recorded) to be held at 2022-02-18 14:00
Subject: openEuler kernel biweekly meeting
Agenda:
Topic 1: Introduction and merging of the Intel SGX feature
Topic 2: Request to adapt the toa open-source plugin
Topic 3: Implement digest lists for the IMA namespace
Meeting link: https://us06web.zoom.us/j/81827052778?pwd=c2MxN2RzMXR4cEh2RVFieFlUNDk4Zz09
Reminder: You are advised to change your participant name after joining the conference; you may also use your gitee.com ID
More information: https://openeuler.org/zh/
Hello!
openEuler Kernel SIG invites you to attend the ZOOM conference (auto-recorded) that will be held at 2022-02-18 14:00.
The subject of the conference is the openEuler kernel biweekly meeting.
Summary:
Topic 1: Introduction and merging of the Intel SGX feature
Topic 2: Request to adapt the toa open-source plugin
Topic 3: Implement digest lists for the IMA namespace
You can join the meeting at https://us06web.zoom.us/j/81827052778?pwd=c2MxN2RzMXR4cEh2RVFieFlUNDk4Zz09.
Note: You are advised to change the participant name after joining the conference or use your ID at gitee.com.
More information: https://openeuler.org/en/
[PATCH openEuler-1.0-LTS] configfs: fix a race in configfs_{,un}register_subsystem()
by Yang Yingliang 15 Feb '22
From: ChenXiaoSong <chenxiaosong2(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 186233, https://gitee.com/src-openeuler/kernel/issues/I4TLA2?from=project-issue
CVE: NA
-----------------------------------------------
When configfs_register_subsystem() or configfs_unregister_subsystem()
is executing link_group() or unlink_group(), two processes may add or
delete list entries concurrently. Some unfortunate interleavings of
these operations can cause a kernel panic. One such case is:
A --> B --> C --> D
A <-- B <-- C <-- D
delete list_head *B | delete list_head *C
--------------------------------|-----------------------------------
configfs_unregister_subsystem | configfs_unregister_subsystem
unlink_group | unlink_group
unlink_obj | unlink_obj
list_del_init | list_del_init
__list_del_entry | __list_del_entry
__list_del | __list_del
// next == C |
next->prev = prev |
| next->prev = prev
prev->next = next |
| // prev == B
| prev->next = next
Fix this by taking a mutex around link_group() and unlink_group().
The parent configfs_subsystem is NULL when the config_item is the root,
so a dedicated configfs_subsystem_mutex is introduced for that case.
Fixes: 7063fbf22611 ("[PATCH] configfs: User-driven configuration filesystem")
Signed-off-by: ChenXiaoSong <chenxiaosong2(a)huawei.com>
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
Reviewed-by: yangerkun <yangerkun(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/configfs/dir.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
index 2cc6b1c49d348..9b43611e1bb35 100644
--- a/fs/configfs/dir.c
+++ b/fs/configfs/dir.c
@@ -50,6 +50,14 @@ DECLARE_RWSEM(configfs_rename_sem);
*/
DEFINE_SPINLOCK(configfs_dirent_lock);
+/*
+ * All of link_obj/unlink_obj/link_group/unlink_group require that
+ * subsys->su_mutex is held.
+ * But parent configfs_subsystem is NULL when config_item is root.
+ * Use this mutex when config_item is root.
+ */
+static DEFINE_MUTEX(configfs_subsystem_mutex);
+
static void configfs_d_iput(struct dentry * dentry,
struct inode * inode)
{
@@ -1936,7 +1944,9 @@ int configfs_register_subsystem(struct configfs_subsystem *subsys)
group->cg_item.ci_name = group->cg_item.ci_namebuf;
sd = root->d_fsdata;
+ mutex_lock(&configfs_subsystem_mutex);
link_group(to_config_group(sd->s_element), group);
+ mutex_unlock(&configfs_subsystem_mutex);
inode_lock_nested(d_inode(root), I_MUTEX_PARENT);
@@ -1961,7 +1971,9 @@ int configfs_register_subsystem(struct configfs_subsystem *subsys)
inode_unlock(d_inode(root));
if (err) {
+ mutex_lock(&configfs_subsystem_mutex);
unlink_group(group);
+ mutex_unlock(&configfs_subsystem_mutex);
configfs_release_fs();
}
put_fragment(frag);
@@ -2007,7 +2019,9 @@ void configfs_unregister_subsystem(struct configfs_subsystem *subsys)
dput(dentry);
+ mutex_lock(&configfs_subsystem_mutex);
unlink_group(group);
+ mutex_unlock(&configfs_subsystem_mutex);
configfs_release_fs();
}
--
2.25.1
Let's discuss this topic in the kernel SIG first.
You can file an issue at https://gitee.com/openeuler/kernel/issues.
I will bring this proposal to the kernel SIG.
On 2022/2/14 16:39, Jianmin Wang wrote:
> Hi Wang Shu,
>
> Thanks for submitting the topic. Since it is closely related to the kernel, could the Kernel SIG's opinion and the corresponding PR / issue be sent out in advance?
>
> -- Jianmin Wang
>
>> On 14 Feb 2022, at 4:18 PM, 王书 <wangshu(a)uniontech.com> wrote:
>>
>> Topic submission:
>> "Request to push the TOA module to the kernel SIG" --- Liu Donghua, UnionTech
>>
>>
>> ------------------ Original ------------------
>> From: "叶青龙"<yeqinglong(a)uniontech.com>;
>> Date: Mon, Feb 14, 2022 03:22 PM
>> To: "王书"<wangshu(a)uniontech.com>;
>> Subject: Fw:[Dev] Fwd: openEuler Technical Committee regular meeting
>>
>> Please submit topics by replying to the regular meeting notice email.
>>
>>
>>
>>
>>
>>> Server OS and Cloud Computing Product Line, Ye Qinglong
>>>
>>> Phone: 18991378194
>>>
>>> Address: Phase 2, Xi'an Software New Town Software R&D Base, Tiangu 8th Road, Yanta District, Xi'an
>>
>>
>>
>>
>> ------------------ Original ------------------
>> From: "王建民"<jianmin(a)iscas.ac.cn>;
>> Date: Tue, Feb 8, 2022 04:43 PM
>> To: "tc"<tc(a)openeuler.org>; "dev"<dev(a)openeuler.org>;
>> Subject: [Dev] Fwd: openEuler Technical Committee regular meeting
>>
>> Happy New Year, everyone,
>>
>> The first TC regular meeting after the Spring Festival will be held on February 16, 2022. You are welcome to submit topics by replying to this email.
>>
>> Thanks.
>>
>> - Jianmin Wang
>>
>> Hello, Everyone,
>>
>> Happy New Year!
>>
>> The first bi-weekly meeting of openEuler TC will be held on 2022/02/16 Wednesday.
>>
>> Welcome to submit topics by replying to the email.
>>
>>
>> Best Regards,
>> Jianmin Wang
>>
>>
>> Begin forwarded message:
>>
>> From: openEuler conference <public(a)openeuler.org>
>> Date: 8 February 2022 at 2:28:47 PM GMT+8
>> Subject: [Dev] openEuler Technical Committee regular meeting
>>
>> Hello!
>>
>> The TC SIG invites you to a Zoom meeting to be held at 2022-02-16 10:00.
>>
>> Meeting subject: openEuler Technical Committee regular meeting
>>
>> Meeting link: https://us06web.zoom.us/j/82775432385?pwd=Y2hkOGZkeklrMzJVaUVnM2dONTBxdz09
>>
>> Friendly reminder: after joining the meeting, please update your participant name; you may also use your gitee.com ID.
>>
>> More information: https://openeuler.org/zh/
>>
>>
>>
>>
>> Hello!
>>
>> openEuler TC SIG invites you to attend the Zoom conference to be held at 2022-02-16 10:00,
>>
>> The subject of the conference is the openEuler Technical Committee regular meeting,
>>
>> You can join the meeting at https://us06web.zoom.us/j/82775432385?pwd=Y2hkOGZkeklrMzJVaUVnM2dONTBxdz09.
>>
>> Note: You are advised to change the participant name after joining the conference or use your ID at gitee.com.
>>
>> More information: https://openeuler.org/en/
>>
>> _______________________________________________
>> Dev mailing list -- dev(a)openeuler.org
>> To unsubscribe send an email to dev-leave(a)openeuler.org
>
> _______________________________________________
> Tc mailing list -- tc(a)openeuler.org
> To unsubscribe send an email to tc-leave(a)openeuler.org
>
[PATCH openEuler-1.0-LTS] fs/filesystems.c: downgrade user-reachable WARN_ONCE() to pr_warn_once()
by Yang Yingliang 14 Feb '22
From: Eric Biggers <ebiggers(a)google.com>
mainline inclusion
from mainline-5.7-rc1
commit 26c5d78c976ca298e59a56f6101a97b618ba3539
category: bugfix
bugzilla: 34247
CVE: NA
---------------------------
After request_module(), nothing is stopping the module from being
unloaded until someone takes a reference to it via try_get_module().
The WARN_ONCE() in get_fs_type() is thus user-reachable, via userspace
running 'rmmod' concurrently.
Since WARN_ONCE() is for kernel bugs only, not for user-reachable
situations, downgrade this warning to pr_warn_once().
Keep it printed once only, since the intent of this warning is to detect
a bug in modprobe at boot time. Printing the warning more than once
wouldn't really provide any useful extra information.
Fixes: 41124db869b7 ("fs: warn in case userspace lied about modprobe return")
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Reviewed-by: Jessica Yu <jeyu(a)kernel.org>
Cc: Alexei Starovoitov <ast(a)kernel.org>
Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Cc: Jeff Vander Stoep <jeffv(a)google.com>
Cc: Jessica Yu <jeyu(a)kernel.org>
Cc: Kees Cook <keescook(a)chromium.org>
Cc: Luis Chamberlain <mcgrof(a)kernel.org>
Cc: NeilBrown <neilb(a)suse.com>
Cc: <stable(a)vger.kernel.org> [4.13+]
Link: http://lkml.kernel.org/r/20200312202552.241885-3-ebiggers@kernel.org
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Ye Bin <yebin10(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/filesystems.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/fs/filesystems.c b/fs/filesystems.c
index b03f57b1105b3..181200daeeba7 100644
--- a/fs/filesystems.c
+++ b/fs/filesystems.c
@@ -267,7 +267,9 @@ struct file_system_type *get_fs_type(const char *name)
fs = __get_fs_type(name, len);
if (!fs && (request_module("fs-%.*s", len, name) == 0)) {
fs = __get_fs_type(name, len);
- WARN_ONCE(!fs, "request_module fs-%.*s succeeded, but still no fs?\n", len, name);
+ if (!fs)
+ pr_warn_once("request_module fs-%.*s succeeded, but still no fs?\n",
+ len, name);
}
if (dot && fs && !(fs->fs_flags & FS_HAS_SUBTYPE)) {
--
2.25.1
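The patch above replaces WARN_ONCE() with pr_warn_once() because the condition is user-reachable, not a kernel bug. A small user-space sketch of the print-once pattern it switches to (the function and counter names are illustrative, not the kernel API):

```c
#include <stdio.h>

/* Sketch of the warn-once pattern: unlike WARN_ONCE(), which implies
 * a kernel bug and dumps a stack trace, a plain print-once merely logs
 * the first occurrence of a user-reachable condition and suppresses
 * all later ones. */
static int warned;

/* Returns 1 if the message was printed, 0 if it was suppressed. */
static int pr_warn_once_demo(const char *msg)
{
    if (warned)
        return 0;                   /* already warned: stay silent */
    warned = 1;
    fprintf(stderr, "%s", msg);     /* first occurrence only */
    return 1;
}
```

Here a concurrent 'rmmod' can make the "module loaded but fs still missing" condition fire repeatedly; the one-shot warning preserves the boot-time diagnostic without flooding the log or implying a bug.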
11 Feb '22
From: Bharata B Rao <bharata(a)amd.com>
mainline inclusion
from mainline-v5.16-rc1
commit 6cf253925df72e522c06dac09ede7e81a6e38121
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4T0ML
CVE: NA
-------------------------------------------------
Patch series "Fix NUMA nodes fallback list ordering".
For a NUMA system that has multiple nodes at the same distance from other
nodes, the fallback list generation prefers the same node order for them
instead of round-robin, thereby penalizing one node over the others. This
series fixes that.
More description of the problem and the fix is present in the patch
description.
This patch (of 2):
Print information message about the allocation fallback order for each
NUMA node during boot.
No functional changes here. This makes it easier to illustrate the
problem in the node fallback list generation, which the next patch
fixes.
Link: https://lkml.kernel.org/r/20210830121603.1081-1-bharata@amd.com
Link: https://lkml.kernel.org/r/20210830121603.1081-2-bharata@amd.com
Signed-off-by: Bharata B Rao <bharata(a)amd.com>
Acked-by: Mel Gorman <mgorman(a)suse.de>
Reviewed-by: Anshuman Khandual <anshuman.khandual(a)arm.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu(a)jp.fujitsu.com>
Cc: Lee Schermerhorn <lee.schermerhorn(a)hp.com>
Cc: Krupa Ramakrishnan <krupa.ramakrishnan(a)amd.com>
Cc: Sadagopan Srinivasan <Sadagopan.Srinivasan(a)amd.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Peng Liu <liupeng256(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
mm/page_alloc.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3791bdc958bd..4e67b4506238 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6067,6 +6067,10 @@ static void build_zonelists(pg_data_t *pgdat)
build_zonelists_in_node_order(pgdat, node_order, nr_nodes);
build_thisnode_zonelists(pgdat);
+ pr_info("Fallback order for Node %d: ", local_node);
+ for (node = 0; node < nr_nodes; node++)
+ pr_cont("%d ", node_order[node]);
+ pr_cont("\n");
}
#ifdef CONFIG_HAVE_MEMORYLESS_NODES
--
2.20.1
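The added pr_info()/pr_cont() calls print each node's fallback order on one line. A user-space sketch of formatting that same message into a buffer, for a hypothetical 3-node system (names and node counts are illustrative):

```c
#include <stdio.h>
#include <string.h>

/* Illustrative sketch of the boot message the patch adds: the
 * zonelist fallback order for a node, printed as a single line.
 * In the kernel this goes through pr_info()/pr_cont(); here it is
 * formatted into a caller-supplied buffer instead. */
static void format_fallback_order(char *buf, size_t len,
                                  int local_node,
                                  const int *node_order, int nr_nodes)
{
    size_t off = (size_t)snprintf(buf, len, "Fallback order for Node %d:",
                                  local_node);

    for (int i = 0; i < nr_nodes && off < len; i++)
        off += (size_t)snprintf(buf + off, len - off, " %d", node_order[i]);
}
```

With the round-robin fix from the second patch in the series, two equidistant nodes would show rotated orders (e.g. Node 1 starting its fallback at a different peer than Node 0), which this message makes easy to verify at boot.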
[PATCH openEuler-5.10] arm64: openeuler_defconfig: Enable Kunpeng related configs
by Zheng Zengkai 11 Feb '22
driver inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SJLU
CVE: NA
-----------------------------------------
Enable following configs in arm64 openeuler_defconfig for Kunpeng platform:
CONFIG_PCIE_EDR=y
CONFIG_HISI_PCIE_PMU=m
CONFIG_MLX5_ESWITCH=y
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
Reviewed-by: Chao Liu <liuchao173(a)huawei.com>
Acked-by: Xinwei Kong<kong.kongxinwei(a)hisilicon.com>
Reviewed-by: Yicong Yang <yangyicong(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
arch/arm64/configs/openeuler_defconfig | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index a86e97dd015e..a93616556d12 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -1915,7 +1915,7 @@ CONFIG_PCIEASPM_DEFAULT=y
CONFIG_PCIE_PME=y
CONFIG_PCIE_DPC=y
# CONFIG_PCIE_PTM is not set
-# CONFIG_PCIE_EDR is not set
+CONFIG_PCIE_EDR=y
CONFIG_PCI_MSI=y
CONFIG_PCI_MSI_IRQ_DOMAIN=y
CONFIG_PCI_QUIRKS=y
@@ -2757,7 +2757,7 @@ CONFIG_MLX5_CORE_EN=y
CONFIG_MLX5_EN_ARFS=y
CONFIG_MLX5_EN_RXNFC=y
CONFIG_MLX5_MPFS=y
-# CONFIG_MLX5_ESWITCH is not set
+CONFIG_MLX5_ESWITCH=y
CONFIG_MLX5_CORE_EN_DCB=y
CONFIG_MLX5_CORE_IPOIB=y
# CONFIG_MLX5_IPSEC is not set
@@ -6070,6 +6070,7 @@ CONFIG_THUNDERX2_PMU=m
CONFIG_XGENE_PMU=y
CONFIG_ARM_SPE_PMU=y
CONFIG_HISI_PMU=m
+CONFIG_HISI_PCIE_PMU=m
# end of Performance monitor support
CONFIG_RAS=y
--
2.20.1
[PATCH] configs: enable CONFIG_INTEGRITY_PLATFORM_KEYRING and CONFIG_LOAD_UEFI_KEYS
by Chao Liu 11 Feb '22
euler inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4T7MX
CVE: NA
--------------------------------
Provide a separate, distinct keyring for platform trusted keys which is
used in secure boot.
Signed-off-by: Chao Liu <liuchao173(a)huawei.com>
---
arch/arm64/configs/openeuler_defconfig | 3 ++-
arch/x86/configs/openeuler_defconfig | 3 ++-
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index a86e97dd015e..08cf4f419553 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -6502,7 +6502,8 @@ CONFIG_INTEGRITY=y
CONFIG_INTEGRITY_SIGNATURE=y
CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y
CONFIG_INTEGRITY_TRUSTED_KEYRING=y
-# CONFIG_INTEGRITY_PLATFORM_KEYRING is not set
+CONFIG_INTEGRITY_PLATFORM_KEYRING=y
+CONFIG_LOAD_UEFI_KEYS=y
CONFIG_INTEGRITY_AUDIT=y
CONFIG_IMA=y
# CONFIG_IMA_KEXEC is not set
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index febdd7627f8b..316c4122a859 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -7603,7 +7603,8 @@ CONFIG_INTEGRITY=y
CONFIG_INTEGRITY_SIGNATURE=y
CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y
CONFIG_INTEGRITY_TRUSTED_KEYRING=y
-# CONFIG_INTEGRITY_PLATFORM_KEYRING is not set
+CONFIG_INTEGRITY_PLATFORM_KEYRING=y
+CONFIG_LOAD_UEFI_KEYS=y
CONFIG_INTEGRITY_AUDIT=y
CONFIG_IMA=y
CONFIG_IMA_MEASURE_PCR_IDX=10
--
2.23.0
11 Feb '22
On 2022/2/11 16:12, Zhang Yi wrote:
> Similar to commit 1c2d14212b15 ("ext2: Fix underflow in ext2_max_size()")
> in the ext2 filesystem, the ext4 driver has the same issue with 64K block size
> and ^huge_file; fix it the same way as ext2. This patch also reverts
> commit 75ca6ad408f4 ("ext4: fix loff_t overflow in ext4_max_bitmap_size()")
> because it is no longer needed.
>
> Signed-off-by: Zhang Yi <yi.zhang(a)huawei.com>
> ---
> fs/ext4/super.c | 46 +++++++++++++++++++++++++++++++---------------
> 1 file changed, 31 insertions(+), 15 deletions(-)
>
Looks good to me
On 2022/2/11 16:12, Zhang Yi wrote:
> We need to calculate the max file size accurately if the total blocks
> that the block tree can address exceed the upper_limit. But this check is
> not correct now: it only computes the total data blocks and misses the
> metadata blocks that are also needed. So in the case of "data blocks < upper_limit
> && total blocks > upper_limit", we will get a wrong result. Fortunately,
> this case cannot happen in reality, but it is confusing and better to
> correct the computation.
>
> bits data blocks metadata blocks upper_limit
> 10 16843020 66051 2147483647
> 11 134480396 263171 1073741823
> 12 1074791436 1050627 536870911 (*)
> 13 8594130956 4198403 268435455 (*)
> 14 68736258060 16785411 134217727 (*)
> 15 549822930956 67125251 67108863 (*)
> 16 4398314962956 268468227 33554431 (*)
>
> [*] Need to calculate in depth.
>
> Fixes: 1c2d14212b15 ("ext2: Fix underflow in ext2_max_size()")
> Signed-off-by: Zhang Yi <yi.zhang(a)huawei.com>
Looks good to me
[PATCH openEuler-1.0-LTS] drm/i915: Flush TLBs before releasing backing store
by Yang Yingliang 11 Feb '22
From: Tvrtko Ursulin <tvrtko.ursulin(a)intel.com>
stable inclusion
from linux-4.19.227
commit b188780649081782e341e52223db47c49f172712
CVE: CVE-2022-0330
--------------------------------
commit 7938d61591d33394a21bdd7797a245b65428f44c upstream.
We need to flush TLBs before releasing backing store otherwise userspace
is able to encounter stale entries if a) it is not declaring access to
certain buffers and b) it races with the backing store release from
such an undeclared execution already executing on the GPU in parallel.
The approach taken is to mark any buffer objects which were ever bound
to the GPU and to trigger a serialized TLB flush when their backing
store is released.
Alternatively the flushing could be done on VMA unbind, at which point
we would be able to ascertain whether there is potentially a parallel GPU
execution (which could race), but essentially it boils down to paying
the cost of TLB flushes potentially needlessly at VMA unbind time (when
the backing store is not known to be going away so not needed for
safety), versus potentially needlessly at backing store release time
(since we at that point cannot tell whether there is anything executing
on the GPU which uses that object).
Therefore simplicity of implementation has been chosen for now with
scope to benchmark and refine later as required.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin(a)intel.com>
Reported-by: Sushma Venkatesh Reddy <sushma.venkatesh.reddy(a)intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter(a)ffwll.ch>
Acked-by: Dave Airlie <airlied(a)redhat.com>
Cc: Daniel Vetter <daniel.vetter(a)ffwll.ch>
Cc: Jon Bloomfield <jon.bloomfield(a)intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen(a)linux.intel.com>
Cc: Jani Nikula <jani.nikula(a)intel.com>
Cc: stable(a)vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/gpu/drm/i915/i915_drv.h | 2 +
drivers/gpu/drm/i915/i915_gem.c | 83 ++++++++++++++++++++++++++
drivers/gpu/drm/i915/i915_gem_object.h | 1 +
drivers/gpu/drm/i915/i915_reg.h | 6 ++
drivers/gpu/drm/i915/i915_vma.c | 4 ++
5 files changed, 96 insertions(+)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index db2e9af49ae6f..b65f3e38208a3 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1593,6 +1593,8 @@ struct drm_i915_private {
struct intel_uncore uncore;
+ struct mutex tlb_invalidate_lock;
+
struct i915_virtual_gpu vgpu;
struct intel_gvt *gvt;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 3139047f72465..b1b207747f39f 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2443,6 +2443,78 @@ static void __i915_gem_object_reset_page_iter(struct drm_i915_gem_object *obj)
rcu_read_unlock();
}
+struct reg_and_bit {
+ i915_reg_t reg;
+ u32 bit;
+};
+
+static struct reg_and_bit
+get_reg_and_bit(const struct intel_engine_cs *engine,
+ const i915_reg_t *regs, const unsigned int num)
+{
+ const unsigned int class = engine->class;
+ struct reg_and_bit rb = { .bit = 1 };
+
+ if (WARN_ON_ONCE(class >= num || !regs[class].reg))
+ return rb;
+
+ rb.reg = regs[class];
+ if (class == VIDEO_DECODE_CLASS)
+ rb.reg.reg += 4 * engine->instance; /* GEN8_M2TCR */
+
+ return rb;
+}
+
+static void invalidate_tlbs(struct drm_i915_private *dev_priv)
+{
+ static const i915_reg_t gen8_regs[] = {
+ [RENDER_CLASS] = GEN8_RTCR,
+ [VIDEO_DECODE_CLASS] = GEN8_M1TCR, /* , GEN8_M2TCR */
+ [VIDEO_ENHANCEMENT_CLASS] = GEN8_VTCR,
+ [COPY_ENGINE_CLASS] = GEN8_BTCR,
+ };
+ const unsigned int num = ARRAY_SIZE(gen8_regs);
+ const i915_reg_t *regs = gen8_regs;
+ struct intel_engine_cs *engine;
+ enum intel_engine_id id;
+
+ if (INTEL_GEN(dev_priv) < 8)
+ return;
+
+ GEM_TRACE("\n");
+
+ assert_rpm_wakelock_held(dev_priv);
+
+ mutex_lock(&dev_priv->tlb_invalidate_lock);
+ intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
+
+ for_each_engine(engine, dev_priv, id) {
+ /*
+ * HW architecture suggest typical invalidation time at 40us,
+ * with pessimistic cases up to 100us and a recommendation to
+ * cap at 1ms. We go a bit higher just in case.
+ */
+ const unsigned int timeout_us = 100;
+ const unsigned int timeout_ms = 4;
+ struct reg_and_bit rb;
+
+ rb = get_reg_and_bit(engine, regs, num);
+ if (!i915_mmio_reg_offset(rb.reg))
+ continue;
+
+ I915_WRITE_FW(rb.reg, rb.bit);
+ if (__intel_wait_for_register_fw(dev_priv,
+ rb.reg, rb.bit, 0,
+ timeout_us, timeout_ms,
+ NULL))
+ DRM_ERROR_RATELIMITED("%s TLB invalidation did not complete in %ums!\n",
+ engine->name, timeout_ms);
+ }
+
+ intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
+ mutex_unlock(&dev_priv->tlb_invalidate_lock);
+}
+
static struct sg_table *
__i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
{
@@ -2472,6 +2544,15 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
__i915_gem_object_reset_page_iter(obj);
obj->mm.page_sizes.phys = obj->mm.page_sizes.sg = 0;
+ if (test_and_clear_bit(I915_BO_WAS_BOUND_BIT, &obj->flags)) {
+ struct drm_i915_private *i915 = to_i915(obj->base.dev);
+
+ if (intel_runtime_pm_get_if_in_use(i915)) {
+ invalidate_tlbs(i915);
+ intel_runtime_pm_put(i915);
+ }
+ }
+
return pages;
}
@@ -5789,6 +5870,8 @@ int i915_gem_init_early(struct drm_i915_private *dev_priv)
spin_lock_init(&dev_priv->fb_tracking.lock);
+ mutex_init(&dev_priv->tlb_invalidate_lock);
+
err = i915_gemfs_init(dev_priv);
if (err)
DRM_NOTE("Unable to create a private tmpfs mount, hugepage support will be disabled(%d).\n", err);
diff --git a/drivers/gpu/drm/i915/i915_gem_object.h b/drivers/gpu/drm/i915/i915_gem_object.h
index 83e5e01fa9eaa..2e3a713e9bcd8 100644
--- a/drivers/gpu/drm/i915/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/i915_gem_object.h
@@ -136,6 +136,7 @@ struct drm_i915_gem_object {
* activity?
*/
#define I915_BO_ACTIVE_REF 0
+#define I915_BO_WAS_BOUND_BIT 1
/*
* Is the object to be mapped as read-only to the GPU
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index a6f4f32dd71ce..830049985e56d 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -2431,6 +2431,12 @@ enum i915_power_well_id {
#define GAMT_CHKN_DISABLE_DYNAMIC_CREDIT_SHARING (1 << 28)
#define GAMT_CHKN_DISABLE_I2M_CYCLE_ON_WR_PORT (1 << 24)
+#define GEN8_RTCR _MMIO(0x4260)
+#define GEN8_M1TCR _MMIO(0x4264)
+#define GEN8_M2TCR _MMIO(0x4268)
+#define GEN8_BTCR _MMIO(0x426c)
+#define GEN8_VTCR _MMIO(0x4270)
+
#if 0
#define PRB0_TAIL _MMIO(0x2030)
#define PRB0_HEAD _MMIO(0x2034)
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 98358b4b36dea..9aceacc43f4b7 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -335,6 +335,10 @@ int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
return ret;
vma->flags |= bind_flags;
+
+ if (vma->obj)
+ set_bit(I915_BO_WAS_BOUND_BIT, &vma->obj->flags);
+
return 0;
}
--
2.25.1
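The core of invalidate_tlbs() above is a write-then-poll handshake: write the trigger bit to a per-engine control register, then wait (with a bounded timeout) for the hardware to clear it. A simplified user-space sketch of that pattern, with a plain variable standing in for the MMIO register and a countdown simulating hardware completion (all names here are illustrative):

```c
#include <stdint.h>

/* Simulated "register": the hardware clears the trigger bit after a
 * few reads, standing in for the 40-100us invalidation latency the
 * commit message mentions. */
static volatile uint32_t fake_tcr;
static int reads_until_clear;

static uint32_t mmio_read(void)
{
    if (reads_until_clear > 0 && --reads_until_clear == 0)
        fake_tcr = 0;                 /* hardware finished */
    return fake_tcr;
}

/* Write-then-poll with a bounded number of attempts, analogous to
 * I915_WRITE_FW() followed by __intel_wait_for_register_fw().
 * Returns 0 on completion, -1 on timeout. */
static int invalidate_and_wait(uint32_t bit, int max_polls)
{
    fake_tcr = bit;                   /* trigger the invalidation */
    for (int i = 0; i < max_polls; i++) {
        if ((mmio_read() & bit) == 0)
            return 0;                 /* bit cleared: done */
    }
    return -1;                        /* rate-limited error in the real code */
}
```

The kernel version additionally holds tlb_invalidate_lock and forcewake around the loop so that concurrent releases of backing store serialize their flushes; the sketch shows only the per-engine handshake.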
On 2022/02/11 Fri 11:38, Zheng Zhenpeng wrote:
>OK, please share an example; I will follow that format for future submissions.
Right before yours there are two kconfig patches from Zheng Zengkai and Chao Liu that you can use as references.
Regards,
Kai
11 Feb '22
From: Tong Zhang <ztong0001(a)gmail.com>
mainline inclusion
from mainline-v5.14-rc1
commit 42933c8aa14be1caa9eda41f65cde8a3a95d3e39
category: bugfix
bugzilla: NA
CVE: CVE-2022-0487
--------------------------------
This patch fixes the following issues:
1. memstick_free_host() will free the host, so the use of ms_dev(host) after
it will be a problem. To fix this, move memstick_free_host() to after we
are done with ms_dev(host).
2. In rtsx_usb_ms_drv_remove(), pm need to be disabled before we remove
and free host otherwise memstick_check will be called and UAF will
happen.
[ 11.351173] BUG: KASAN: use-after-free in rtsx_usb_ms_drv_remove+0x94/0x140 [rtsx_usb_ms]
[ 11.357077] rtsx_usb_ms_drv_remove+0x94/0x140 [rtsx_usb_ms]
[ 11.357376] platform_remove+0x2a/0x50
[ 11.367531] Freed by task 298:
[ 11.368537] kfree+0xa4/0x2a0
[ 11.368711] device_release+0x51/0xe0
[ 11.368905] kobject_put+0xa2/0x120
[ 11.369090] rtsx_usb_ms_drv_remove+0x8c/0x140 [rtsx_usb_ms]
[ 11.369386] platform_remove+0x2a/0x50
[ 12.038408] BUG: KASAN: use-after-free in __mutex_lock.isra.0+0x3ec/0x7c0
[ 12.045432] mutex_lock+0xc9/0xd0
[ 12.046080] memstick_check+0x6a/0x578 [memstick]
[ 12.046509] process_one_work+0x46d/0x750
[ 12.052107] Freed by task 297:
[ 12.053115] kfree+0xa4/0x2a0
[ 12.053272] device_release+0x51/0xe0
[ 12.053463] kobject_put+0xa2/0x120
[ 12.053647] rtsx_usb_ms_drv_remove+0xc4/0x140 [rtsx_usb_ms]
[ 12.053939] platform_remove+0x2a/0x50
Signed-off-by: Tong Zhang <ztong0001(a)gmail.com>
Co-developed-by: Ulf Hansson <ulf.hansson(a)linaro.org>
Link: https://lore.kernel.org/r/20210511163944.1233295-1-ztong0001@gmail.com
Signed-off-by: Ulf Hansson <ulf.hansson(a)linaro.org>
Conflicts:
drivers/memstick/host/rtsx_usb_ms.c
[yyl: rtsx_usb_ms_drv_probe() don't need to fix]
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Wang Weiyang <wangweiyang2(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/memstick/host/rtsx_usb_ms.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/drivers/memstick/host/rtsx_usb_ms.c b/drivers/memstick/host/rtsx_usb_ms.c
index 4f64563df7ded..aee62f0618c70 100644
--- a/drivers/memstick/host/rtsx_usb_ms.c
+++ b/drivers/memstick/host/rtsx_usb_ms.c
@@ -798,9 +798,6 @@ static int rtsx_usb_ms_drv_remove(struct platform_device *pdev)
mutex_unlock(&host->host_mutex);
wait_for_completion(&host->detect_ms_exit);
- memstick_remove_host(msh);
- memstick_free_host(msh);
-
/* Balance possible unbalanced usage count
* e.g. unconditional module removal
*/
@@ -808,10 +805,11 @@ static int rtsx_usb_ms_drv_remove(struct platform_device *pdev)
pm_runtime_put(ms_dev(host));
pm_runtime_disable(&pdev->dev);
- platform_set_drvdata(pdev, NULL);
-
+ memstick_remove_host(msh);
dev_dbg(&(pdev->dev),
": Realtek USB Memstick controller has been removed\n");
+ memstick_free_host(msh);
+ platform_set_drvdata(pdev, NULL);
return 0;
}
--
2.25.1
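The fix above is purely an ordering change: every teardown step that still dereferences the host must run before the host is freed. A user-space sketch contrasting the two orderings, with an explicit `freed` flag so the wrong order is detectable without an allocator (types and names are illustrative, not the memstick API):

```c
/* Sketch of the UAF the patch fixes: freeing the host while later
 * teardown steps still use it. "freed" is tracked explicitly instead
 * of relying on the allocator. */
struct host {
    int freed;
    int pm_disabled;
};

static void free_host(struct host *h)
{
    h->freed = 1;                 /* stand-in for memstick_free_host() */
}

/* Returns 0 on success, -1 if it touched freed memory. */
static int disable_pm(struct host *h)
{
    if (h->freed)
        return -1;                /* use-after-free detected */
    h->pm_disabled = 1;           /* stand-in for pm_runtime_disable() */
    return 0;
}

static int remove_wrong_order(struct host *h)
{
    free_host(h);                 /* old code: free first... */
    return disable_pm(h);         /* ...then use: UAF */
}

static int remove_fixed_order(struct host *h)
{
    int ret = disable_pm(h);      /* finish every use of the host */

    free_host(h);                 /* only then release it */
    return ret;
}
```

The same reasoning covers the second issue in the commit message: pm must be disabled before memstick_remove_host(), or the still-armed runtime-pm machinery can call back into freed state.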
euler inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4T7MX
CVE: NA
--------------------------------
Ensure the netswift 10G NIC driver ko can be distributed in ISO on arm64
and provide a separate, distinct keyring for platform trusted keys which
is used in secure boot.
Signed-off-by: Chao Liu <liuchao173(a)huawei.com>
---
arch/arm64/configs/openeuler_defconfig | 6 ++++--
arch/x86/configs/openeuler_defconfig | 3 ++-
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index a86e97dd015e..66853efd6431 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -2742,7 +2742,8 @@ CONFIG_I40EVF=m
CONFIG_ICE=m
CONFIG_FM10K=m
# CONFIG_IGC is not set
-# CONFIG_NET_VENDOR_NETSWIFT is not set
+CONFIG_NET_VENDOR_NETSWIFT=y
+CONFIG_TXGBE=m
# CONFIG_JME is not set
# CONFIG_NET_VENDOR_MARVELL is not set
CONFIG_NET_VENDOR_MELLANOX=y
@@ -6502,7 +6503,8 @@ CONFIG_INTEGRITY=y
CONFIG_INTEGRITY_SIGNATURE=y
CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y
CONFIG_INTEGRITY_TRUSTED_KEYRING=y
-# CONFIG_INTEGRITY_PLATFORM_KEYRING is not set
+CONFIG_INTEGRITY_PLATFORM_KEYRING=y
+CONFIG_LOAD_UEFI_KEYS=y
CONFIG_INTEGRITY_AUDIT=y
CONFIG_IMA=y
# CONFIG_IMA_KEXEC is not set
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index febdd7627f8b..316c4122a859 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -7603,7 +7603,8 @@ CONFIG_INTEGRITY=y
CONFIG_INTEGRITY_SIGNATURE=y
CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y
CONFIG_INTEGRITY_TRUSTED_KEYRING=y
-# CONFIG_INTEGRITY_PLATFORM_KEYRING is not set
+CONFIG_INTEGRITY_PLATFORM_KEYRING=y
+CONFIG_LOAD_UEFI_KEYS=y
CONFIG_INTEGRITY_AUDIT=y
CONFIG_IMA=y
CONFIG_IMA_MEASURE_PCR_IDX=10
--
2.23.0
[PATCH openEuler-5.10 1/6] sysctl: returns -EINVAL when a negative value is passed to proc_doulongvec_minmax
by Zheng Zengkai 10 Feb '22
From: Baokun Li <libaokun1(a)huawei.com>
mainline inclusion
from mainline-5.17-rc1
commit 1622ed7d0743201293094162c26019d2573ecacb
category: bugfix
bugzilla: 185873, https://gitee.com/openeuler/kernel/issues/I4MTTR
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
-------------------------------------------------
When we pass a negative value to the proc_doulongvec_minmax() function,
the function returns 0, but the corresponding interface value does not
change.
We can easily reproduce this problem with the following commands:
cd /proc/sys/fs/epoll
echo -1 > max_user_watches; echo $?; cat max_user_watches
This function requires a non-negative number to be passed in, so when a
negative number is passed in, -EINVAL is returned.
Link: https://lkml.kernel.org/r/20211220092627.3744624-1-libaokun1@huawei.com
Signed-off-by: Baokun Li <libaokun1(a)huawei.com>
Reported-by: Hulk Robot <hulkci(a)huawei.com>
Acked-by: Luis Chamberlain <mcgrof(a)kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Baokun Li <libaokun1(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
kernel/sysctl.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index d7473cd5e72b..89ef0c1a1642 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1186,10 +1186,11 @@ static int __do_proc_doulongvec_minmax(void *data, struct ctl_table *table,
err = proc_get_long(&p, &left, &val, &neg,
proc_wspace_sep,
sizeof(proc_wspace_sep), NULL);
- if (err)
+ if (err || neg) {
+ err = -EINVAL;
break;
- if (neg)
- continue;
+ }
+
val = convmul * val / convdiv;
if ((min && val < *min) || (max && val > *max)) {
err = -EINVAL;
--
2.20.1
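The behavioral change above is small but easy to get wrong: instead of silently skipping a negative value (returning 0 while the sysctl stays unchanged), the parser now fails with -EINVAL. A user-space sketch of that semantic, using strtoul in place of proc_get_long (the function name is illustrative):

```c
#include <errno.h>
#include <stdlib.h>

/* Sketch of the fixed behavior: a parser for an unsigned sysctl-style
 * value rejects negative input with -EINVAL instead of silently
 * ignoring it, so `echo -1 > max_user_watches` reports an error. */
static int parse_nonneg_ulong(const char *s, unsigned long *out)
{
    char *end;
    int neg = (*s == '-');
    unsigned long val;

    if (neg)
        s++;                      /* proc_get_long() reports the sign separately */
    errno = 0;
    val = strtoul(s, &end, 10);
    if (errno || end == s)
        return -EINVAL;           /* not a number at all */
    if (neg)
        return -EINVAL;           /* before the fix: silently skipped */
    *out = val;
    return 0;
}
```

The key point visible in the diff: the old `if (neg) continue;` consumed the token and reported success, which is why `echo -1` returned exit status 0 while the file kept its old value.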
10 Feb '22
From: Mark Rutland <mark.rutland(a)arm.com>
mainline inclusion
from mainline-v5.11-rc1
commit f80d034086d5bfcfd3bf4ab6f52b2df78c3ad2fa
category: performance
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA
-------------------------------------------------------------------------
For consistency, all tasks have a pt_regs reserved at the highest
portion of their task stack. Among other things, this ensures that a
task's SP is always pointing within its stack rather than pointing
immediately past the end.
While it is never legitimate to ERET from a kthread, we take pains to
initialize pt_regs for kthreads as if this were legitimate. As this is
never legitimate, the effects of an erroneous return are rarely tested.
Let's simplify things by initializing a kthread's pt_regs such that an
ERET is caught as an illegal exception return, and removing the explicit
initialization of other exception context. Note that as
spectre_v4_enable_task_mitigation() only manipulates the PSTATE within
the unused regs this is safe to remove.
As user tasks will have their exception context initialized via
start_thread() or start_compat_thread(), this should only impact cases
where something has gone very wrong and we'd like that to be clearly
indicated.
Signed-off-by: Mark Rutland <mark.rutland(a)arm.com>
Cc: Christoph Hellwig <hch(a)lst.de>
Cc: James Morse <james.morse(a)arm.com>
Cc: Will Deacon <will(a)kernel.org>
Link: https://lore.kernel.org/r/20201113124937.20574-2-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas(a)arm.com>
Signed-off-by: Zhen Lei <thunder.leizhen(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
arch/arm64/kernel/process.c | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 1e5202080895..a55d518ee868 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -430,16 +430,15 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
if (clone_flags & CLONE_SETTLS)
p->thread.uw.tp_value = tls;
} else {
+ /*
+ * A kthread has no context to ERET to, so ensure any buggy
+ * ERET is treated as an illegal exception return.
+ *
+ * When a user task is created from a kthread, childregs will
+ * be initialized by start_thread() or start_compat_thread().
+ */
memset(childregs, 0, sizeof(struct pt_regs));
- childregs->pstate = PSR_MODE_EL1h;
- if (IS_ENABLED(CONFIG_ARM64_UAO) &&
- cpus_have_const_cap(ARM64_HAS_UAO))
- childregs->pstate |= PSR_UAO_BIT;
-
- spectre_v4_enable_task_mitigation(p);
-
- if (system_uses_irq_prio_masking())
- childregs->pmr_save = GIC_PRIO_IRQON;
+ childregs->pstate = PSR_MODE_EL1h | PSR_IL_BIT;
p->thread.cpu_context.x19 = stack_start;
p->thread.cpu_context.x20 = stk_sz;
--
2.20.1
[PATCH openEuler-1.0-LTS] ext4: fix file system corruption when rmdir'ing a non-empty directory with IO error
by Yang Yingliang 10 Feb '22
From: Ye Bin <yebin10(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: 186220, https://gitee.com/openeuler/kernel/issues/I4T50S
-----------------------------------------------
We inject an IO error when rmdir'ing a non-empty directory, and get the following issue:
step1: mkfs.ext4 -F /dev/sda
step2: mount /dev/sda test
step3: cd test
step4: mkdir -p 1/2
step5: rmdir 1
[ 110.920551] ext4_empty_dir: inject fault
[ 110.921926] EXT4-fs warning (device sda): ext4_rmdir:3113: inode #12:
comm rmdir: empty directory '1' has too many links (3)
step6: cd ..
step7: umount test
step8: fsck.ext4 -f /dev/sda
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Entry '..' in .../??? (13) has deleted/unused inode 12. Clear<y>? yes
Pass 3: Checking directory connectivity
Unconnected directory inode 13 (...)
Connect to /lost+found<y>? yes
Pass 4: Checking reference counts
Inode 13 ref count is 3, should be 2. Fix<y>? yes
Pass 5: Checking group summary information
/dev/sda: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sda: 12/131072 files (0.0% non-contiguous), 26157/524288 blocks
ext4_rmdir
if (!ext4_empty_dir(inode))
goto end_rmdir;
ext4_empty_dir
bh = ext4_read_dirblock(inode, 0, DIRENT_HTREE);
if (IS_ERR(bh))
return true;
Now if reading the directory block fails, 'ext4_empty_dir' returns true,
assuming the directory is empty. Obviously, this leads to the above issue.
To solve it, make 'ext4_empty_dir' return false when reading the directory
block fails.
Signed-off-by: Ye Bin <yebin10(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/ext4/namei.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
index 58dd74b804805..84f68e15e853b 100644
--- a/fs/ext4/namei.c
+++ b/fs/ext4/namei.c
@@ -2750,7 +2750,7 @@ bool ext4_empty_dir(struct inode *inode)
*/
bh = ext4_read_dirblock(inode, 0, DIRENT_HTREE);
if (IS_ERR(bh))
- return true;
+ return false;
de = (struct ext4_dir_entry_2 *) bh->b_data;
if (ext4_check_dir_entry(inode, NULL, de, bh, bh->b_data, bh->b_size,
@@ -2781,7 +2781,7 @@ bool ext4_empty_dir(struct inode *inode)
continue;
}
if (IS_ERR(bh))
- return true;
+ return false;
}
de = (struct ext4_dir_entry_2 *) (bh->b_data +
(offset & (sb->s_blocksize - 1)));
--
2.25.1
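The fix encodes a fail-closed rule: when an "is this directory empty?" check cannot read its data, it must report non-empty so rmdir refuses to proceed rather than corrupt the tree. A minimal sketch of that rule (the struct and helpers are illustrative stand-ins, not the ext4 types):

```c
#include <stddef.h>

/* Illustrative stand-ins for an on-disk directory block and its
 * reader; read_dirblock() returning NULL models an I/O error like a
 * failed ext4_read_dirblock(). */
struct dirblock {
    int nr_entries;               /* entries besides "." and ".." */
};

static const struct dirblock *read_dirblock(const struct dirblock *b)
{
    return b;                     /* NULL in => simulated read failure */
}

/* Returns 1 if provably empty, 0 otherwise. */
static int dir_is_empty(const struct dirblock *b)
{
    const struct dirblock *bh = read_dirblock(b);

    if (!bh)
        return 0;                 /* fixed: read failure => NOT empty */
    return bh->nr_entries == 0;
}
```

Before the fix the error path returned "empty", so ext4_rmdir() went on to unlink a directory that still had children, producing exactly the orphaned inode 13 that fsck reconnects in the reproduction above.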
[PATCH openEuler-1.0-LTS 1/3] bpf: Use dedicated bpf_trace_printk event instead of trace_printk()
by Yang Yingliang 10 Feb '22
From: Alan Maguire <alan.maguire(a)oracle.com>
mainline inclusion
from mainline-v5.9-rc1
commit ac5a72ea5c8989871e61f6bb0852e0f91de51ebe
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4PKDW
CVE: NA
--------------------------------
The bpf helper bpf_trace_printk() uses trace_printk() under the hood.
This leads to an alarming warning message originating from trace
buffer allocation which occurs the first time a program using
bpf_trace_printk() is loaded.
We can instead create a trace event for bpf_trace_printk() and enable
it in-kernel when/if we encounter a program using the
bpf_trace_printk() helper. With this approach, trace_printk()
is not used directly and no warning message appears.
This work was started by Steven (see Link) and finished by Alan; added
Steven's Signed-off-by with his permission.
Signed-off-by: Steven Rostedt (VMware) <rostedt(a)goodmis.org>
Signed-off-by: Alan Maguire <alan.maguire(a)oracle.com>
Signed-off-by: Alexei Starovoitov <ast(a)kernel.org>
Acked-by: Andrii Nakryiko <andriin(a)fb.com>
Link: https://lore.kernel.org/r/20200628194334.6238b933@oasis.local.home
Link: https://lore.kernel.org/bpf/1594641154-18897-2-git-send-email-alan.maguire@…
Signed-off-by: Liu Xinpeng <liuxp11(a)chinatelecom.cn> # openEuler_contributor
Signed-off-by: Ctyun Kernel <ctyuncommiter01(a)chinatelecom.cn> # openEuler_contributor
Signed-off-by: Pu Lehui <pulehui(a)huawei.com>
Reviewed-by: Yang Jihong <yangjihong1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/trace/Makefile | 2 ++
kernel/trace/bpf_trace.c | 42 +++++++++++++++++++++++++++++++++++-----
kernel/trace/bpf_trace.h | 34 ++++++++++++++++++++++++++++++++
3 files changed, 73 insertions(+), 5 deletions(-)
create mode 100644 kernel/trace/bpf_trace.h
diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
index f81dadbc7c4ac..846b6b0d025e9 100644
--- a/kernel/trace/Makefile
+++ b/kernel/trace/Makefile
@@ -28,6 +28,8 @@ ifdef CONFIG_GCOV_PROFILE_FTRACE
GCOV_PROFILE := y
endif
+CFLAGS_bpf_trace.o := -I$(src)
+
CFLAGS_trace_benchmark.o := -I$(src)
CFLAGS_trace_events_filter.o := -I$(src)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index e0b0e921f1ab7..f359f79d9b690 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -11,12 +11,16 @@
#include <linux/uaccess.h>
#include <linux/ctype.h>
#include <linux/kprobes.h>
+#include <linux/spinlock.h>
#include <linux/syscalls.h>
#include <linux/error-injection.h>
#include "trace_probe.h"
#include "trace.h"
+#define CREATE_TRACE_POINTS
+#include "bpf_trace.h"
+
#ifdef CONFIG_MODULES
struct bpf_trace_module {
struct module *module;
@@ -318,6 +322,30 @@ static const struct bpf_func_proto *bpf_get_probe_write_proto(void)
return &bpf_probe_write_user_proto;
}
+static DEFINE_RAW_SPINLOCK(trace_printk_lock);
+
+#define BPF_TRACE_PRINTK_SIZE 1024
+
+static inline __printf(1, 0) int bpf_do_trace_printk(const char *fmt, ...)
+{
+ static char buf[BPF_TRACE_PRINTK_SIZE];
+ unsigned long flags;
+ va_list ap;
+ int ret;
+
+ raw_spin_lock_irqsave(&trace_printk_lock, flags);
+ va_start(ap, fmt);
+ ret = vsnprintf(buf, sizeof(buf), fmt, ap);
+ va_end(ap);
+ /* vsnprintf() will not append null for zero-length strings */
+ if (ret == 0)
+ buf[0] = '\0';
+ trace_bpf_trace_printk(buf);
+ raw_spin_unlock_irqrestore(&trace_printk_lock, flags);
+
+ return ret;
+}
+
/*
* Only limited trace_printk() conversion specifiers allowed:
* %d %i %u %x %ld %li %lu %lx %lld %lli %llu %llx %p %s
@@ -408,8 +436,7 @@ BPF_CALL_5(bpf_trace_printk, char *, fmt, u32, fmt_size, u64, arg1,
*/
#define __BPF_TP_EMIT() __BPF_ARG3_TP()
#define __BPF_TP(...) \
- __trace_printk(0 /* Fake ip */, \
- fmt, ##__VA_ARGS__)
+ bpf_do_trace_printk(fmt, ##__VA_ARGS__)
#define __BPF_ARG1_TP(...) \
((mod[0] == 2 || (mod[0] == 1 && __BITS_PER_LONG == 64)) \
@@ -446,10 +473,15 @@ static const struct bpf_func_proto bpf_trace_printk_proto = {
const struct bpf_func_proto *bpf_get_trace_printk_proto(void)
{
/*
- * this program might be calling bpf_trace_printk,
- * so allocate per-cpu printk buffers
+ * This program might be calling bpf_trace_printk,
+ * so enable the associated bpf_trace/bpf_trace_printk event.
+ * Repeat this each time as it is possible a user has
+ * disabled bpf_trace_printk events. By loading a program
+ * calling bpf_trace_printk() however the user has expressed
+ * the intent to see such events.
*/
- trace_printk_init_buffers();
+ if (trace_set_clr_event("bpf_trace", "bpf_trace_printk", 1))
+ pr_warn_ratelimited("could not enable bpf_trace_printk events");
return &bpf_trace_printk_proto;
}
diff --git a/kernel/trace/bpf_trace.h b/kernel/trace/bpf_trace.h
new file mode 100644
index 0000000000000..9acbc11ac7bbc
--- /dev/null
+++ b/kernel/trace/bpf_trace.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM bpf_trace
+
+#if !defined(_TRACE_BPF_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
+
+#define _TRACE_BPF_TRACE_H
+
+#include <linux/tracepoint.h>
+
+TRACE_EVENT(bpf_trace_printk,
+
+ TP_PROTO(const char *bpf_string),
+
+ TP_ARGS(bpf_string),
+
+ TP_STRUCT__entry(
+ __string(bpf_string, bpf_string)
+ ),
+
+ TP_fast_assign(
+ __assign_str(bpf_string, bpf_string);
+ ),
+
+ TP_printk("%s", __get_str(bpf_string))
+);
+
+#endif /* _TRACE_BPF_TRACE_H */
+
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH .
+#define TRACE_INCLUDE_FILE bpf_trace
+
+#include <trace/define_trace.h>
--
2.25.1
[PATCH openEuler-1.0-LTS] net: cipso: fix warnings in netlbl_cipsov4_add_std
by Yang Yingliang 10 Feb '22
From: Pavel Skripkin <paskripkin(a)gmail.com>
mainline inclusion
from mainline-v5.15-rc1
commit 8ca34a13f7f9b3fa2c464160ffe8cc1a72088204
category: bugfix
bugzilla: 186065
CVE: NA
-------------------------------------------------
Syzbot reported a warning in netlbl_cipsov4_add(). The
problem was a too-big doi_def->map.std->lvl.local_size
passed to kcalloc(). Since this value comes from userspace, there is
no need to warn if the value is not correct.
The same problem may occur with the other kcalloc() calls in
this function, so I've added the __GFP_NOWARN flag to all
kcalloc() calls there.
Reported-and-tested-by: syzbot+cdd51ee2e6b0b2e18c0d(a)syzkaller.appspotmail.com
Fixes: 96cb8e3313c7 ("[NetLabel]: CIPSOv4 and Unlabeled packet integration")
Acked-by: Paul Moore <paul(a)paul-moore.com>
Signed-off-by: Pavel Skripkin <paskripkin(a)gmail.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Zhang Changzhong <zhangchangzhong(a)huawei.com>
Reviewed-by: Yue Haibing <yuehaibing(a)huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
net/netlabel/netlabel_cipso_v4.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/net/netlabel/netlabel_cipso_v4.c b/net/netlabel/netlabel_cipso_v4.c
index 3e3494c8d42f8..0559d442ad807 100644
--- a/net/netlabel/netlabel_cipso_v4.c
+++ b/net/netlabel/netlabel_cipso_v4.c
@@ -198,14 +198,14 @@ static int netlbl_cipsov4_add_std(struct genl_info *info,
}
doi_def->map.std->lvl.local = kcalloc(doi_def->map.std->lvl.local_size,
sizeof(u32),
- GFP_KERNEL);
+ GFP_KERNEL | __GFP_NOWARN);
if (doi_def->map.std->lvl.local == NULL) {
ret_val = -ENOMEM;
goto add_std_failure;
}
doi_def->map.std->lvl.cipso = kcalloc(doi_def->map.std->lvl.cipso_size,
sizeof(u32),
- GFP_KERNEL);
+ GFP_KERNEL | __GFP_NOWARN);
if (doi_def->map.std->lvl.cipso == NULL) {
ret_val = -ENOMEM;
goto add_std_failure;
@@ -273,7 +273,7 @@ static int netlbl_cipsov4_add_std(struct genl_info *info,
doi_def->map.std->cat.local = kcalloc(
doi_def->map.std->cat.local_size,
sizeof(u32),
- GFP_KERNEL);
+ GFP_KERNEL | __GFP_NOWARN);
if (doi_def->map.std->cat.local == NULL) {
ret_val = -ENOMEM;
goto add_std_failure;
@@ -281,7 +281,7 @@ static int netlbl_cipsov4_add_std(struct genl_info *info,
doi_def->map.std->cat.cipso = kcalloc(
doi_def->map.std->cat.cipso_size,
sizeof(u32),
- GFP_KERNEL);
+ GFP_KERNEL | __GFP_NOWARN);
if (doi_def->map.std->cat.cipso == NULL) {
ret_val = -ENOMEM;
goto add_std_failure;
--
2.25.1
From: Magnus Karlsson <magnus.karlsson(a)intel.com>
mainline inclusion
from mainline-v5.6-rc1
commit 1d9cb1f381860b529edec57cf7a08133f40366eb
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SGA9?from=project-issue
CVE: NA
--------------------------------
Improve readability and maintainability by using the struct_size()
helper when allocating the AF_XDP rings.
Signed-off-by: Magnus Karlsson <magnus.karlsson(a)intel.com>
Signed-off-by: Alexei Starovoitov <ast(a)kernel.org>
Link: https://lore.kernel.org/bpf/1576759171-28550-13-git-send-email-magnus.karls…
Signed-off-by: Huang Guobin <huangguobin4(a)huawei.com>
Reviewed-by: Yue Haibing <yuehaibing(a)huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
net/xdp/xsk_queue.c | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/net/xdp/xsk_queue.c b/net/xdp/xsk_queue.c
index 6c32e92e98fcb..de10bd9401f84 100644
--- a/net/xdp/xsk_queue.c
+++ b/net/xdp/xsk_queue.c
@@ -15,14 +15,14 @@ void xskq_set_umem(struct xsk_queue *q, struct xdp_umem_props *umem_props)
q->umem_props = *umem_props;
}
-static u32 xskq_umem_get_ring_size(struct xsk_queue *q)
+static size_t xskq_get_ring_size(struct xsk_queue *q, bool umem_queue)
{
- return sizeof(struct xdp_umem_ring) + q->nentries * sizeof(u64);
-}
+ struct xdp_umem_ring *umem_ring;
+ struct xdp_rxtx_ring *rxtx_ring;
-static u32 xskq_rxtx_get_ring_size(struct xsk_queue *q)
-{
- return sizeof(struct xdp_ring) + q->nentries * sizeof(struct xdp_desc);
+ if (umem_queue)
+ return struct_size(umem_ring, desc, q->nentries);
+ return struct_size(rxtx_ring, desc, q->nentries);
}
struct xsk_queue *xskq_create(u32 nentries, bool umem_queue)
@@ -40,8 +40,7 @@ struct xsk_queue *xskq_create(u32 nentries, bool umem_queue)
gfp_flags = GFP_KERNEL | __GFP_ZERO | __GFP_NOWARN |
__GFP_COMP | __GFP_NORETRY;
- size = umem_queue ? xskq_umem_get_ring_size(q) :
- xskq_rxtx_get_ring_size(q);
+ size = xskq_get_ring_size(q, umem_queue);
q->ring = (struct xdp_ring *)__get_free_pages(gfp_flags,
get_order(size));
--
2.25.1
[PATCH openEuler-1.0-LTS 01/16] mm,hwpoison: cleanup unused PageHuge() check
by Yang Yingliang 09 Feb '22
From: Naoya Horiguchi <naoya.horiguchi(a)nec.com>
mainline inclusion
from linux-v5.10-rc1
commit 7d9d46ac87f91b8dedad5241d64382b650e26487
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4LE22
CVE: NA
--------------------------------
Drop the PageHuge check, which is dead code since memory_failure() forks
into memory_failure_hugetlb() for hugetlb pages.
memory_failure() and memory_failure_hugetlb() share some functions, like
hwpoison_user_mappings() and identify_page_state(), so those should
properly handle 4kB pages, thp, and hugetlb.
Signed-off-by: Naoya Horiguchi <naoya.horiguchi(a)nec.com>
Signed-off-by: Oscar Salvador <osalvador(a)suse.de>
Reviewed-by: Mike Kravetz <mike.kravetz(a)oracle.com>
Signed-off-by: Ma Wupeng <mawupeng1(a)huawei.com>
Reviewed-by: tong tiangen <tongtiangen(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
mm/memory-failure.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 72e1746e5386f..a432ba37e132a 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1373,10 +1373,7 @@ int memory_failure(unsigned long pfn, int flags)
* page_remove_rmap() in try_to_unmap_one(). So to determine page status
* correctly, we save a copy of the page flags at this time.
*/
- if (PageHuge(p))
- page_flags = hpage->flags;
- else
- page_flags = p->flags;
+ page_flags = p->flags;
/*
* unpoison always clear PG_hwpoison inside page lock
--
2.25.1
[PATCH openEuler-1.0-LTS] scsi: Revert "target: iscsi: Wait for all commands to finish before freeing a session"
by Yang Yingliang 09 Feb '22
From: Bart Van Assche <bvanassche(a)acm.org>
mainline inclusion
from mainline-v5.6-rc3
commit 807b9515b7d044cf77df31f1af9d842a76ecd5cb
category: bugfix
bugzilla: 186215
CVE: NA
--------------------------------
Since commit e9d3009cb936 introduced a regression and since the fix for
that regression was not perfect, revert this commit.
Link: https://marc.info/?l=target-devel&m=158157054906195
Cc: Rahul Kundu <rahul.kundu(a)chelsio.com>
Cc: Mike Marciniszyn <mike.marciniszyn(a)intel.com>
Cc: Sagi Grimberg <sagi(a)grimberg.me>
Reported-by: Dakshaja Uppalapati <dakshaja(a)chelsio.com>
Fixes: e9d3009cb936 ("scsi: target: iscsi: Wait for all commands to finish before freeing a session")
Signed-off-by: Bart Van Assche <bvanassche(a)acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen(a)oracle.com>
Signed-off-by: Ye Bin <yebin10(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/target/iscsi/iscsi_target.c | 10 ++--------
include/scsi/iscsi_proto.h | 1 -
2 files changed, 2 insertions(+), 9 deletions(-)
diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index 92843e8db711b..58ccded1be857 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -1156,9 +1156,7 @@ int iscsit_setup_scsi_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
hdr->cmdsn, be32_to_cpu(hdr->data_length), payload_length,
conn->cid);
- if (target_get_sess_cmd(&cmd->se_cmd, true) < 0)
- return iscsit_add_reject_cmd(cmd,
- ISCSI_REASON_WAITING_FOR_LOGOUT, buf);
+ target_get_sess_cmd(&cmd->se_cmd, true);
cmd->sense_reason = transport_lookup_cmd_lun(&cmd->se_cmd,
scsilun_to_int(&hdr->lun));
@@ -2012,9 +2010,7 @@ iscsit_handle_task_mgt_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
conn->sess->se_sess, 0, DMA_NONE,
TCM_SIMPLE_TAG, cmd->sense_buffer + 2);
- if (target_get_sess_cmd(&cmd->se_cmd, true) < 0)
- return iscsit_add_reject_cmd(cmd,
- ISCSI_REASON_WAITING_FOR_LOGOUT, buf);
+ target_get_sess_cmd(&cmd->se_cmd, true);
/*
* TASK_REASSIGN for ERL=2 / connection stays inside of
@@ -4230,8 +4226,6 @@ int iscsit_close_connection(
* must wait until they have completed.
*/
iscsit_check_conn_usage_count(conn);
- target_sess_cmd_list_set_waiting(sess->se_sess);
- target_wait_for_sess_cmds(sess->se_sess);
ahash_request_free(conn->conn_tx_hash);
if (conn->conn_rx_hash) {
diff --git a/include/scsi/iscsi_proto.h b/include/scsi/iscsi_proto.h
index f0a01a54bd153..df156f1d50b2d 100644
--- a/include/scsi/iscsi_proto.h
+++ b/include/scsi/iscsi_proto.h
@@ -638,7 +638,6 @@ struct iscsi_reject {
#define ISCSI_REASON_BOOKMARK_INVALID 9
#define ISCSI_REASON_BOOKMARK_NO_RESOURCES 10
#define ISCSI_REASON_NEGOTIATION_RESET 11
-#define ISCSI_REASON_WAITING_FOR_LOGOUT 12
/* Max. number of Key=Value pairs in a text message */
#define MAX_KEY_VALUE_PAIRS 8192
--
2.25.1
09 Feb '22
From: Guo Mengqi <guomengqi3(a)huawei.com>
ascend inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SPNL
CVE: NA
-----------------------------------
In sp_mmap(), if we use offset = va - MMAP_BASE/DVPP_BASE, a normal
sp_alloc pgoff may have the same value as a DVPP pgoff, causing DVPP
and sp_alloc to be mapped to overlapping parts of the file unexpectedly.
To fix the problem, pass the VA value as the mmap offset, for in this
scenario VA values within one task address space will never be the same.
Signed-off-by: Guo Mengqi <guomengqi3(a)huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
mm/share_pool.c | 21 +++++----------------
1 file changed, 5 insertions(+), 16 deletions(-)
diff --git a/mm/share_pool.c b/mm/share_pool.c
index ef74de39053b2..90733b807f12e 100644
--- a/mm/share_pool.c
+++ b/mm/share_pool.c
@@ -57,6 +57,11 @@
#define spg_valid(spg) ((spg)->is_alive == true)
+/* Use spa va address as mmap offset. This can work because spa_file
+ * is setup with 64-bit address space. So va shall be well covered.
+ */
+#define addr_offset(spa) ((spa)->va_start)
+
#define byte2kb(size) ((size) >> 10)
#define byte2mb(size) ((size) >> 20)
#define page2kb(page_num) ((page_num) << (PAGE_SHIFT - 10))
@@ -950,22 +955,6 @@ static bool is_device_addr(unsigned long addr)
return false;
}
-static loff_t addr_offset(struct sp_area *spa)
-{
- unsigned long addr;
-
- if (unlikely(!spa)) {
- WARN(1, "invalid spa when calculate addr offset\n");
- return 0;
- }
- addr = spa->va_start;
-
- if (!is_device_addr(addr))
- return (loff_t)(addr - MMAP_SHARE_POOL_START);
-
- return (loff_t)(addr - sp_dev_va_start[spa->device_id]);
-}
-
static struct sp_group *create_spg(int spg_id)
{
int ret;
--
2.25.1
[PATCH openEuler-1.0-LTS 1/2] uce: copy_from_user scenario support kernel recovery
by Yang Yingliang 09 Feb '22
From: Tong Tiangen <tongtiangen(a)huawei.com>
hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SK3S
CVE: NA
--------------------------------
This patch add uce kernel recovery path support in copy_from_user.
Signed-off-by: Tong Tiangen <tongtiangen(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/include/asm/exception.h | 1 +
arch/arm64/lib/copy_from_user.S | 6 ++++++
arch/arm64/mm/fault.c | 17 +++++++++++++++--
kernel/sysctl.c | 3 ++-
4 files changed, 24 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h
index 559d86ad9e5d5..d0c8a1fda453a 100644
--- a/arch/arm64/include/asm/exception.h
+++ b/arch/arm64/include/asm/exception.h
@@ -51,6 +51,7 @@ struct uce_kernel_recovery_info {
extern int copy_page_cow_sea_fallback(void);
extern int copy_generic_read_sea_fallback(void);
+extern int copy_from_user_sea_fallback(void);
#endif
#endif /* __ASM_EXCEPTION_H */
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 7cd6eeaa216cf..d1afb61df158b 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -72,6 +72,12 @@ ENTRY(__arch_copy_from_user)
uaccess_disable_not_uao x3, x4
mov x0, #0 // Nothing to copy
ret
+
+ .global copy_from_user_sea_fallback
+copy_from_user_sea_fallback:
+ uaccess_disable_not_uao x3, x4
+ mov x0, #-1
+ ret
ENDPROC(__arch_copy_from_user)
.section .fixup,"ax"
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 08040fe73199a..50e37f4097ccc 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -660,9 +660,18 @@ static int do_bad(unsigned long addr, unsigned int esr, struct pt_regs *regs)
int kernel_access_sea_recovery;
#define UCE_KER_REC_NUM ARRAY_SIZE(reco_info)
+/*
+ * One entry corresponds to one scene, and the scene switch is controlled by the
+ * corresponding bit of kernel_access_sea_recovery
+ * (the first entry corresponds to bit0, the second entry corresponds to bit1...),
+ * and the switch is visible to the user, so the order of each entry here cannot
+ * be easily changed. The maximum number of entries is limited by the type of the variable
+ * kernel_access_sea_recovery.
+ */
static struct uce_kernel_recovery_info reco_info[] = {
{copy_page_cow_sea_fallback, "copy_page_cow", (unsigned long)copy_page_cow, 0},
{copy_generic_read_sea_fallback, "__arch_copy_to_user_generic_read", (unsigned long)__arch_copy_to_user_generic_read, 0},
+ {copy_from_user_sea_fallback, "__arch_copy_from_user", (unsigned long)__arch_copy_from_user, 0},
};
static int __init kernel_access_sea_recovery_init(void)
@@ -769,6 +778,9 @@ static int is_in_kernel_recovery(unsigned int esr, struct pt_regs *regs)
}
for (i = 0; i < UCE_KER_REC_NUM; i++) {
+ if (!((kernel_access_sea_recovery >> i) & 0x1))
+ continue;
+
info = &reco_info[i];
if (info->fn && regs->pc >= info->addr &&
regs->pc < (info->addr + info->size)) {
@@ -777,7 +789,8 @@ static int is_in_kernel_recovery(unsigned int esr, struct pt_regs *regs)
}
}
- pr_emerg("UCE: symbol is not match.\n");
+ pr_emerg("UCE: symbol does not match or switch is off, kernel recovery %d.\n",
+ kernel_access_sea_recovery);
return -EINVAL;
}
#endif
@@ -847,7 +860,7 @@ static int do_sea(unsigned long addr, unsigned int esr, struct pt_regs *regs)
"Uncorrected hardware memory use with kernel recovery in kernel-access\n",
current);
} else {
- die("Uncorrected hardware memory error (kernel recovery on but not match idx) in kernel-access\n",
+ die("Uncorrected hardware memory error (idx not matched or scene switch is off) in kernel-access\n",
regs, esr);
}
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 542611081c610..302cc4c7de4df 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -130,6 +130,7 @@ static int __maybe_unused two = 2;
static int __maybe_unused three = 3;
static int __maybe_unused four = 4;
static int __maybe_unused five = 5;
+static int __maybe_unused seven = 7;
static unsigned long zero_ul;
static unsigned long one_ul = 1;
static unsigned long long_max = LONG_MAX;
@@ -1280,7 +1281,7 @@ static struct ctl_table kern_table[] = {
.mode = 0644,
.proc_handler = proc_dointvec_minmax,
.extra1 = &zero,
- .extra2 = &three,
+ .extra2 = &seven,
},
#endif
--
2.25.1
09 Feb '22
From: Wang Wensheng <wangwensheng4(a)huawei.com>
ascend inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SON8
CVE: NA
-------------------------------------------------
MAP_SHARE_POOL and MAP_FIXED_NOREPLACE have the same value.
Redefine MAP_SHARE_POOL to fix it.
Signed-off-by: Wang Wensheng <wangwensheng4(a)huawei.com>
Reviewed-by: Weilong Chen <chenweilong(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
include/linux/share_pool.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/share_pool.h b/include/linux/share_pool.h
index cd4c305449dd9..88ef96ac0bfb3 100644
--- a/include/linux/share_pool.h
+++ b/include/linux/share_pool.h
@@ -170,7 +170,7 @@ struct sp_walk_data {
pmd_t *pmd;
};
-#define MAP_SHARE_POOL 0x100000
+#define MAP_SHARE_POOL 0x200000
#define MMAP_TOP_4G_SIZE 0x100000000UL
--
2.25.1
[PATCH openEuler-1.0-LTS 01/36] mm/memory_hotplug: drop "online" parameter from add_memory_resource()
by Yang Yingliang 09 Feb '22
From: David Hildenbrand <david(a)redhat.com>
mainline inclusion
from linux-5.0-rc1
commit f29d8e9c0191a2a02500945db505e5c89159c3f4
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SK3S
CVE: NA
--------------------------------
Userspace should always be in charge of how to online memory and if memory
should be onlined automatically in the kernel. Let's drop the parameter
to overwrite this - XEN passes memhp_auto_online, just like add_memory(),
so we can directly use that instead internally.
Link: http://lkml.kernel.org/r/20181123123740.27652-1-david@redhat.com
Signed-off-by: David Hildenbrand <david(a)redhat.com>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Reviewed-by: Oscar Salvador <osalvador(a)suse.de>
Acked-by: Juergen Gross <jgross(a)suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky(a)oracle.com>
Cc: Stefano Stabellini <sstabellini(a)kernel.org>
Cc: Dan Williams <dan.j.williams(a)intel.com>
Cc: Pavel Tatashin <pasha.tatashin(a)oracle.com>
Cc: David Hildenbrand <david(a)redhat.com>
Cc: Joonsoo Kim <iamjoonsoo.kim(a)lge.com>
Cc: Arun KS <arunks(a)codeaurora.org>
Cc: Mathieu Malaterre <malat(a)debian.org>
Cc: Stephen Rothwell <sfr(a)canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Ma Wupeng <mawupeng1(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/xen/balloon.c | 2 +-
include/linux/memory_hotplug.h | 2 +-
mm/memory_hotplug.c | 6 +++---
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 7703ac47062fa..e692dcf576d66 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -352,7 +352,7 @@ static enum bp_state reserve_additional_memory(void)
mutex_unlock(&balloon_mutex);
/* add_memory_resource() requires the device_hotplug lock */
lock_device_hotplug();
- rc = add_memory_resource(nid, resource, memhp_auto_online);
+ rc = add_memory_resource(nid, resource);
unlock_device_hotplug();
mutex_lock(&balloon_mutex);
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 8782f0e993704..9d28fca5bbde8 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -326,7 +326,7 @@ extern int walk_memory_range(unsigned long start_pfn, unsigned long end_pfn,
void *arg, int (*func)(struct memory_block *, void *));
extern int __add_memory(int nid, u64 start, u64 size);
extern int add_memory(int nid, u64 start, u64 size);
-extern int add_memory_resource(int nid, struct resource *resource, bool online);
+extern int add_memory_resource(int nid, struct resource *resource);
extern int arch_add_memory(int nid, u64 start, u64 size,
struct vmem_altmap *altmap, bool want_memblock);
extern void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 1e20ad038496d..f13144a65b127 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1051,7 +1051,7 @@ static int online_memory_block(struct memory_block *mem, void *arg)
*
* we are OK calling __meminit stuff here - we have CONFIG_MEMORY_HOTPLUG
*/
-int __ref add_memory_resource(int nid, struct resource *res, bool online)
+int __ref add_memory_resource(int nid, struct resource *res)
{
u64 start, size;
bool new_node = false;
@@ -1114,7 +1114,7 @@ int __ref add_memory_resource(int nid, struct resource *res, bool online)
mem_hotplug_done();
/* online pages if requested */
- if (online)
+ if (memhp_auto_online)
walk_memory_range(PFN_DOWN(start), PFN_UP(start + size - 1),
NULL, online_memory_block);
@@ -1138,7 +1138,7 @@ int __ref __add_memory(int nid, u64 start, u64 size)
if (IS_ERR(res))
return PTR_ERR(res);
- ret = add_memory_resource(nid, res, memhp_auto_online);
+ ret = add_memory_resource(nid, res);
if (ret < 0)
release_memory_resource(res);
return ret;
--
2.25.1
Re: [PATCH -next] dm: make sure dm_table is binded before queue request
by weiyongjun (A) 09 Feb '22
在 2022/2/9 10:55, Zhang Yi 写道:
> We found a NULL pointer dereference problem when using dm-mpath target.
> The problem is if we submit IO between loading and binding the table,
> we could neither get a valid dm_target nor a valid dm table when
> submitting request in dm_mq_queue_rq(). BIO based dm target could
> handle this case in dm_submit_bio(). This patch fixes it by checking
> for a valid live table before submitting the request.
>
> Signed-off-by: Zhang Yi <yi.zhang(a)huawei.com>
> ---
> drivers/md/dm-rq.c | 11 +++++++++--
> 1 file changed, 9 insertions(+), 2 deletions(-)
>
Looks good to me
[PATCH openEuler-1.0-LTS v3 0/2]Add copy_from_user and get_user support uce kernel recovery
by Tong Tiangen 08 Feb '22
v3 -> v2:
make is_get_user_kernel_recovery_enable() inline.
v2 -> v1:
update commit message.
Tong Tiangen (2):
uce: copy_from_user scenario support kernel recovery
uce: get_user scenario support kernel recovery
arch/arm64/include/asm/exception.h | 6 +++
arch/arm64/include/asm/uaccess.h | 67 ++++++++++++++++++++++++++++++
arch/arm64/lib/Makefile | 2 +
arch/arm64/lib/copy_from_user.S | 6 +++
arch/arm64/lib/get_user.c | 20 +++++++++
arch/arm64/mm/fault.c | 29 +++++++++++--
include/linux/sched.h | 1 +
kernel/sysctl.c | 3 +-
mm/internal.h | 1 +
9 files changed, 130 insertions(+), 5 deletions(-)
create mode 100644 arch/arm64/lib/get_user.c
--
2.18.0.huawei.25
[PATCH openEuler-1.0-LTS] ext4: fix e2fsprogs checksum failure for mounted filesystem
by Yang Yingliang 08 Feb '22
From: Jan Kara <jack(a)suse.cz>
mainline inclusion
from mainline-v5.15-rc1
commit b2bbb92f7042e8075fb036bf97043339576330c3
category: bugfix
bugzilla: 186203
CVE: NA
--------------------------------
Commit 81414b4dd48 ("ext4: remove redundant sb checksum
recomputation") removed checksum recalculation after updating
superblock free space / inode counters in ext4_fill_super() based on
the fact that we will recalculate the checksum on superblock
writeout.
That is a correct assumption, but until the writeout happens (which can
take a long time) the checksum is incorrect in the buffer cache, and
programs such as tune2fs or resize2fs called shortly after a file
system is mounted can fail. So bring back the checksum recalculation
and add a comment explaining why.
Fixes: 81414b4dd48f ("ext4: remove redundant sb checksum recomputation")
Cc: stable(a)kernel.org
Reported-by: Boyang Xue <bxue(a)redhat.com>
Signed-off-by: Jan Kara <jack(a)suse.cz>
Signed-off-by: Theodore Ts'o <tytso(a)mit.edu>
Link: https://lore.kernel.org/r/20210812124737.21981-1-jack@suse.cz
Signed-off-by: Zheng Liang <zhengliang6(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/ext4/super.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index b89f431ec78d3..59637e70f9e32 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -4693,6 +4693,14 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
err = percpu_counter_init(&sbi->s_freeinodes_counter, freei,
GFP_KERNEL);
}
+ /*
+ * Update the checksum after updating free space/inode
+ * counters. Otherwise the superblock can have an incorrect
+ * checksum in the buffer cache until it is written out and
+ * e2fsprogs programs trying to open a file system immediately
+ * after it is mounted can fail.
+ */
+ ext4_superblock_csum_set(sb);
if (!err)
err = percpu_counter_init(&sbi->s_dirs_counter,
ext4_count_dirs(sb), GFP_KERNEL);
--
2.25.1
[PATCH openEuler-1.0-LTS 1/2] mm/memcg_memfs_info: show files that having pages charged in mem_cgroup
by Yang Yingliang 08 Feb '22
From: Liu Shixin <liushixin2(a)huawei.com>
hulk inclusion
category: feature
bugzilla: 186182, https://gitee.com/openeuler/kernel/issues/I4SBQX
CVE: NA
--------------------------------
Support printing the rootfs files and tmpfs files that have pages charged
in a given memory cgroup. The file information can be printed through the
interface "memory.memfs_files_info" or printed when OOM is triggered.
In order not to flood the kernel log, we limit the maximum number of files
printed on OOM through the interface "max_print_files_in_oom". And in
order to filter out small files, we limit the minimum size of files that
can be printed through the interface "size_threshold".
Signed-off-by: Liu Shixin <liushixin2(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
Documentation/vm/memcg_memfs_info.rst | 40 ++++
include/linux/memcg_memfs_info.h | 21 ++
init/Kconfig | 10 +
mm/Makefile | 1 +
mm/memcg_memfs_info.c | 321 ++++++++++++++++++++++++++
mm/memcontrol.c | 11 +
6 files changed, 404 insertions(+)
create mode 100644 Documentation/vm/memcg_memfs_info.rst
create mode 100644 include/linux/memcg_memfs_info.h
create mode 100644 mm/memcg_memfs_info.c
diff --git a/Documentation/vm/memcg_memfs_info.rst b/Documentation/vm/memcg_memfs_info.rst
new file mode 100644
index 0000000000000..aff432d125e52
--- /dev/null
+++ b/Documentation/vm/memcg_memfs_info.rst
@@ -0,0 +1,40 @@
+.. SPDX-License-Identifier: GPL-2.0+
+
+================
+Memcg Memfs Info
+================
+
+Overview
+========
+
+Support printing the rootfs files and tmpfs files that have pages charged
+in a given memory cgroup. The file information can be printed through the
+interface "memory.memfs_files_info" or printed when OOM is triggered.
+
+User control
+============
+
+1. /sys/kernel/mm/memcg_memfs_info/enable
+-----------------------------------------
+
+Boolean type. The default value is 0, set it to 1 to enable the feature.
+
+2. /sys/kernel/mm/memcg_memfs_info/max_print_files_in_oom
+---------------------------------------------------------
+
+Unsigned long type. The default value is 500, indicating the maximum number of
+files that can be printed to the console when OOM is triggered.
+
+3. /sys/kernel/mm/memcg_memfs_info/size_threshold
+-------------------------------------------------
+
+Unsigned long type. The default value is 0, the minimum size of
+files that will be printed.
+
+4. /sys/fs/cgroup/memory/<memory>/memory.memfs_files_info
+---------------------------------------------------------
+
+Outputs the files that use memory in this memory cgroup.
+
+---
+Liu Shixin, Jan 2022
diff --git a/include/linux/memcg_memfs_info.h b/include/linux/memcg_memfs_info.h
new file mode 100644
index 0000000000000..658a91e22bd7e
--- /dev/null
+++ b/include/linux/memcg_memfs_info.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+#ifndef _LINUX_MEMCG_MEMFS_INFO_H
+#define _LINUX_MEMCG_MEMFS_INFO_H
+
+#include <linux/memcontrol.h>
+#include <linux/seq_file.h>
+
+#ifdef CONFIG_MEMCG_MEMFS_INFO
+void mem_cgroup_print_memfs_info(struct mem_cgroup *memcg, struct seq_file *m);
+int mem_cgroup_memfs_files_show(struct seq_file *m, void *v);
+void mem_cgroup_memfs_info_init(void);
+#else
+static inline void mem_cgroup_print_memfs_info(struct mem_cgroup *memcg,
+ struct seq_file *m)
+{
+}
+static inline void mem_cgroup_memfs_info_init(void)
+{
+}
+#endif
+#endif
diff --git a/init/Kconfig b/init/Kconfig
index a338519692d54..1a0b15c5a82b9 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -733,6 +733,16 @@ config MEMCG_KMEM
depends on MEMCG && !SLOB
default y
+config MEMCG_MEMFS_INFO
+ bool "Show memfs files that have pages charged in given memory cgroup"
+ depends on MEMCG
+ default n
+ help
+ Support for printing the rootfs files and tmpfs files that have
+ pages charged in a given memory cgroup. The file information can be
+ printed through the "memory.memfs_files_info" interface or when OOM
+ is triggered.
+
config BLK_CGROUP
bool "IO controller"
depends on BLOCK
diff --git a/mm/Makefile b/mm/Makefile
index deee05d22a853..8fba091be3868 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -108,3 +108,4 @@ obj-$(CONFIG_MEMFD_CREATE) += memfd.o
obj-$(CONFIG_ASCEND_AUTO_TUNING_HUGEPAGE) += hugepage_tuning.o
obj-$(CONFIG_PIN_MEMORY) += pin_mem.o
obj-$(CONFIG_ASCEND_SHARE_POOL) += share_pool.o
+obj-$(CONFIG_MEMCG_MEMFS_INFO) += memcg_memfs_info.o
diff --git a/mm/memcg_memfs_info.c b/mm/memcg_memfs_info.c
new file mode 100644
index 0000000000000..346175026cae6
--- /dev/null
+++ b/mm/memcg_memfs_info.c
@@ -0,0 +1,321 @@
+// SPDX-License-Identifier: GPL-2.0+
+
+#include <linux/memcg_memfs_info.h>
+#include <linux/fs.h>
+#include <linux/sysfs.h>
+#include <linux/kobject.h>
+#include <linux/slab.h>
+#include "../fs/mount.h"
+
+#define SEQ_printf(m, x...) \
+do { \
+ if (m) \
+ seq_printf(m, x); \
+ else \
+ pr_info(x); \
+} while (0)
+
+struct print_files_control {
+ struct mem_cgroup *memcg;
+ struct seq_file *m;
+ unsigned long size_threshold;
+ unsigned long max_print_files;
+
+ char *pathbuf;
+ unsigned long pathbuf_size;
+
+ const char *fs_type_name;
+ struct vfsmount *vfsmnt;
+ unsigned long total_print_files;
+ unsigned long total_files_size;
+};
+
+static bool memfs_enable;
+static unsigned long memfs_size_threshold;
+static unsigned long memfs_max_print_files = 500;
+
+static const char *const fs_type_names[] = {
+ "rootfs",
+ "tmpfs",
+};
+
+static struct vfsmount *memfs_get_vfsmount(struct super_block *sb)
+{
+ struct mount *mnt;
+ struct vfsmount *vfsmnt;
+
+ lock_mount_hash();
+ list_for_each_entry(mnt, &sb->s_mounts, mnt_instance) {
+ /*
+ * There may be multiple mount points for a super_block,
+ * just need to print one of these mount points to determine
+ * the file path.
+ */
+ vfsmnt = mntget(&mnt->mnt);
+ unlock_mount_hash();
+ return vfsmnt;
+ }
+ unlock_mount_hash();
+
+ return NULL;
+}
+
+static unsigned long memfs_count_in_mem_cgroup(struct mem_cgroup *memcg,
+ struct address_space *mapping)
+{
+ struct radix_tree_iter iter;
+ unsigned long size = 0;
+ struct page *page, *head;
+ void __rcu **slot;
+
+ rcu_read_lock();
+ radix_tree_for_each_slot(slot, &mapping->i_pages, &iter, 0) {
+ page = radix_tree_deref_slot(slot);
+
+ if (unlikely(!page))
+ continue;
+ if (radix_tree_exception(page)) {
+ if (radix_tree_deref_retry(page))
+ slot = radix_tree_iter_retry(&iter);
+ continue;
+ }
+
+ head = compound_head(page);
+ if (memcg == head->mem_cgroup)
+ size += PAGE_SIZE;
+ }
+ rcu_read_unlock();
+ return size;
+}
+
+static void memfs_show_file_in_mem_cgroup(void *data, struct inode *inode)
+{
+ struct print_files_control *pfc = data;
+ struct dentry *dentry;
+ unsigned long size;
+ struct path path;
+ char *filepath;
+
+ size = memfs_count_in_mem_cgroup(pfc->memcg, inode->i_mapping);
+ if (!size || size < pfc->size_threshold)
+ return;
+
+ dentry = d_find_alias(inode);
+ if (!dentry)
+ return;
+ path.mnt = pfc->vfsmnt;
+ path.dentry = dentry;
+ filepath = d_absolute_path(&path, pfc->pathbuf, pfc->pathbuf_size);
+ if (!filepath || IS_ERR(filepath))
+ filepath = "(too long)";
+ pfc->total_print_files++;
+ pfc->total_files_size += size;
+ dput(dentry);
+
+ /*
+ * To prevent excessive logs, limit the amount of data
+ * that can be output to logs.
+ */
+ if (!pfc->m && pfc->total_print_files > pfc->max_print_files)
+ return;
+
+ SEQ_printf(pfc->m, "%lukB %llukB %s\n",
+ size >> 10, inode->i_size >> 10, filepath);
+}
+
+static void memfs_show_files_in_mem_cgroup(struct super_block *sb, void *data)
+{
+ struct print_files_control *pfc = data;
+ struct inode *inode, *toput_inode = NULL;
+
+ if (strncmp(sb->s_type->name,
+ pfc->fs_type_name, strlen(pfc->fs_type_name)))
+ return;
+
+ pfc->vfsmnt = memfs_get_vfsmount(sb);
+ if (!pfc->vfsmnt)
+ return;
+
+ spin_lock(&sb->s_inode_list_lock);
+ list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
+ spin_lock(&inode->i_lock);
+
+ if ((inode->i_state & (I_FREEING|I_WILL_FREE|I_NEW)) ||
+ (inode->i_mapping->nrpages == 0 && !need_resched())) {
+ spin_unlock(&inode->i_lock);
+ continue;
+ }
+ __iget(inode);
+ spin_unlock(&inode->i_lock);
+ spin_unlock(&sb->s_inode_list_lock);
+
+ memfs_show_file_in_mem_cgroup(pfc, inode);
+
+ iput(toput_inode);
+ toput_inode = inode;
+
+ cond_resched();
+ spin_lock(&sb->s_inode_list_lock);
+ }
+ spin_unlock(&sb->s_inode_list_lock);
+ iput(toput_inode);
+ mntput(pfc->vfsmnt);
+}
+
+void mem_cgroup_print_memfs_info(struct mem_cgroup *memcg, struct seq_file *m)
+{
+ struct print_files_control pfc = {
+ .memcg = memcg,
+ .m = m,
+ .max_print_files = memfs_max_print_files,
+ .size_threshold = memfs_size_threshold,
+ };
+ char *pathbuf;
+ int i;
+
+ if (!memfs_enable || !memcg)
+ return;
+
+ pathbuf = kmalloc(PATH_MAX, GFP_KERNEL);
+ if (!pathbuf) {
+ SEQ_printf(m, "Show memfs failed due to OOM\n");
+ return;
+ }
+ pfc.pathbuf = pathbuf;
+ pfc.pathbuf_size = PATH_MAX;
+
+ for (i = 0; i < ARRAY_SIZE(fs_type_names); i++) {
+ pfc.fs_type_name = fs_type_names[i];
+ pfc.total_print_files = 0;
+ pfc.total_files_size = 0;
+
+ SEQ_printf(m, "Show %s files (memory-size > %lukB):\n",
+ pfc.fs_type_name, pfc.size_threshold >> 10);
+ SEQ_printf(m, "<memory-size> <file-size> <path>\n");
+ iterate_supers(memfs_show_files_in_mem_cgroup, &pfc);
+
+ SEQ_printf(m, "total files: %lu, total memory-size: %lukB\n",
+ pfc.total_print_files, pfc.total_files_size >> 10);
+ }
+
+ kfree(pfc.pathbuf);
+}
+
+int mem_cgroup_memfs_files_show(struct seq_file *m, void *v)
+{
+ struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
+
+ mem_cgroup_print_memfs_info(memcg, m);
+ return 0;
+}
+
+static ssize_t memfs_size_threshold_show(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ char *buf)
+{
+ return sprintf(buf, "%lu\n", memfs_size_threshold);
+}
+
+static ssize_t memfs_size_threshold_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t len)
+{
+ unsigned long count;
+ int err;
+
+ err = kstrtoul(buf, 10, &count);
+ if (err)
+ return err;
+ memfs_size_threshold = count;
+ return len;
+}
+
+static struct kobj_attribute memfs_size_threshold_attr = {
+ .attr = {"size_threshold", 0644},
+ .show = &memfs_size_threshold_show,
+ .store = &memfs_size_threshold_store,
+};
+
+static ssize_t memfs_max_print_files_show(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ char *buf)
+{
+ return sprintf(buf, "%lu\n", memfs_max_print_files);
+}
+
+static ssize_t memfs_max_print_files_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t len)
+{
+ unsigned long count;
+ int err;
+
+ err = kstrtoul(buf, 10, &count);
+ if (err)
+ return err;
+ memfs_max_print_files = count;
+ return len;
+}
+
+static struct kobj_attribute memfs_max_print_files_attr = {
+ .attr = {"max_print_files_in_oom", 0644},
+ .show = &memfs_max_print_files_show,
+ .store = &memfs_max_print_files_store,
+};
+
+static ssize_t memfs_enable_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%u\n", memfs_enable);
+}
+
+static ssize_t memfs_enable_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t len)
+{
+ bool enable;
+ int err;
+
+ err = kstrtobool(buf, &enable);
+ if (err)
+ return err;
+
+ memfs_enable = enable;
+ return len;
+}
+
+static struct kobj_attribute memfs_enable_attr = {
+ .attr = {"enable", 0644},
+ .show = &memfs_enable_show,
+ .store = &memfs_enable_store,
+};
+
+static struct attribute *memfs_attr[] = {
+ &memfs_size_threshold_attr.attr,
+ &memfs_max_print_files_attr.attr,
+ &memfs_enable_attr.attr,
+ NULL,
+};
+
+static struct attribute_group memfs_attr_group = {
+ .attrs = memfs_attr,
+};
+
+void mem_cgroup_memfs_info_init(void)
+{
+ struct kobject *memcg_memfs_kobj;
+
+ if (mem_cgroup_disabled())
+ return;
+
+ memcg_memfs_kobj = kobject_create_and_add("memcg_memfs_info", mm_kobj);
+ if (unlikely(!memcg_memfs_kobj)) {
+ pr_err("failed to create memcg_memfs kobject\n");
+ return;
+ }
+
+ if (sysfs_create_group(memcg_memfs_kobj, &memfs_attr_group)) {
+ pr_err("failed to register memcg_memfs group\n");
+ kobject_put(memcg_memfs_kobj);
+ }
+}
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 974fc5dc6dc81..18b5660dc2459 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -66,6 +66,7 @@
#include <linux/lockdep.h>
#include <linux/file.h>
#include <linux/tracehook.h>
+#include <linux/memcg_memfs_info.h>
#include "internal.h"
#include <net/sock.h>
#include <net/ip.h>
@@ -1484,6 +1485,8 @@ void mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg)
pr_cont("\n");
}
+
+ mem_cgroup_print_memfs_info(memcg, NULL);
}
/*
@@ -4705,6 +4708,12 @@ static struct cftype mem_cgroup_legacy_files[] = {
.write_s64 = memcg_qos_write,
},
#endif
+#ifdef CONFIG_MEMCG_MEMFS_INFO
+ {
+ .name = "memfs_files_info",
+ .seq_show = mem_cgroup_memfs_files_show,
+ },
+#endif
#ifdef CONFIG_NUMA
{
.name = "numa_stat",
@@ -6916,6 +6925,8 @@ static int __init mem_cgroup_init(void)
soft_limit_tree.rb_tree_per_node[node] = rtpn;
}
+ mem_cgroup_memfs_info_init();
+
return 0;
}
subsys_initcall(mem_cgroup_init);
--
2.25.1

[PATCH openEuler-1.0-LTS] drm/vmwgfx: Fix stale file descriptors on failed usercopy
by Yang Yingliang 07 Feb '22
From: Mathias Krause <minipli(a)grsecurity.net>
stable inclusion
from linux-4.19.227
commit 0008a0c78fc33a84e2212a7c04e6b21a36ca6f4d
CVE: CVE-2022-22942
--------------------------------
commit a0f90c8815706981c483a652a6aefca51a5e191c upstream.
A failing usercopy of the fence_rep object will lead to a stale entry in
the file descriptor table as put_unused_fd() won't release it. This
enables userland to refer to a dangling 'file' object through that still
valid file descriptor, leading to all kinds of use-after-free
exploitation scenarios.
Fix this by deferring the call to fd_install() until after the usercopy
has succeeded.
Fixes: c906965dee22 ("drm/vmwgfx: Add export fence to file descriptor support")
Signed-off-by: Mathias Krause <minipli(a)grsecurity.net>
Signed-off-by: Zack Rusin <zackr(a)vmware.com>
Signed-off-by: Dave Airlie <airlied(a)redhat.com>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Reviewed-by: Jason Yan <yanaijie(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h | 5 ++--
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c | 34 ++++++++++++-------------
drivers/gpu/drm/vmwgfx/vmwgfx_fence.c | 2 +-
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c | 2 +-
4 files changed, 21 insertions(+), 22 deletions(-)
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
index 1abe21758b0d7..bca0b8980c0e7 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
@@ -855,15 +855,14 @@ extern int vmw_execbuf_fence_commands(struct drm_file *file_priv,
struct vmw_private *dev_priv,
struct vmw_fence_obj **p_fence,
uint32_t *p_handle);
-extern void vmw_execbuf_copy_fence_user(struct vmw_private *dev_priv,
+extern int vmw_execbuf_copy_fence_user(struct vmw_private *dev_priv,
struct vmw_fpriv *vmw_fp,
int ret,
struct drm_vmw_fence_rep __user
*user_fence_rep,
struct vmw_fence_obj *fence,
uint32_t fence_handle,
- int32_t out_fence_fd,
- struct sync_file *sync_file);
+ int32_t out_fence_fd);
extern int vmw_validate_single_buffer(struct vmw_private *dev_priv,
struct ttm_buffer_object *bo,
bool interruptible,
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
index 3834aa71c9c4c..e65554f5a89d5 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
@@ -3873,20 +3873,19 @@ int vmw_execbuf_fence_commands(struct drm_file *file_priv,
* object so we wait for it immediately, and then unreference the
* user-space reference.
*/
-void
+int
vmw_execbuf_copy_fence_user(struct vmw_private *dev_priv,
struct vmw_fpriv *vmw_fp,
int ret,
struct drm_vmw_fence_rep __user *user_fence_rep,
struct vmw_fence_obj *fence,
uint32_t fence_handle,
- int32_t out_fence_fd,
- struct sync_file *sync_file)
+ int32_t out_fence_fd)
{
struct drm_vmw_fence_rep fence_rep;
if (user_fence_rep == NULL)
- return;
+ return 0;
memset(&fence_rep, 0, sizeof(fence_rep));
@@ -3914,20 +3913,14 @@ vmw_execbuf_copy_fence_user(struct vmw_private *dev_priv,
* and unreference the handle.
*/
if (unlikely(ret != 0) && (fence_rep.error == 0)) {
- if (sync_file)
- fput(sync_file->file);
-
- if (fence_rep.fd != -1) {
- put_unused_fd(fence_rep.fd);
- fence_rep.fd = -1;
- }
-
ttm_ref_object_base_unref(vmw_fp->tfile,
fence_handle, TTM_REF_USAGE);
DRM_ERROR("Fence copy error. Syncing.\n");
(void) vmw_fence_obj_wait(fence, false, false,
VMW_FENCE_WAIT_TIMEOUT);
}
+
+ return ret ? -EFAULT : 0;
}
/**
@@ -4287,16 +4280,23 @@ int vmw_execbuf_process(struct drm_file *file_priv,
(void) vmw_fence_obj_wait(fence, false, false,
VMW_FENCE_WAIT_TIMEOUT);
+ }
+ }
+
+ ret = vmw_execbuf_copy_fence_user(dev_priv, vmw_fpriv(file_priv), ret,
+ user_fence_rep, fence, handle, out_fence_fd);
+
+ if (sync_file) {
+ if (ret) {
+ /* usercopy of fence failed, put the file object */
+ fput(sync_file->file);
+ put_unused_fd(out_fence_fd);
} else {
/* Link the fence with the FD created earlier */
fd_install(out_fence_fd, sync_file->file);
}
}
- vmw_execbuf_copy_fence_user(dev_priv, vmw_fpriv(file_priv), ret,
- user_fence_rep, fence, handle,
- out_fence_fd, sync_file);
-
/* Don't unreference when handing fence out */
if (unlikely(out_fence != NULL)) {
*out_fence = fence;
@@ -4315,7 +4315,7 @@ int vmw_execbuf_process(struct drm_file *file_priv,
*/
vmw_resource_list_unreference(sw_context, &resource_list);
- return 0;
+ return ret;
out_unlock_binding:
mutex_unlock(&dev_priv->binding_mutex);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
index 3d546d4093341..72a75316d472b 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
@@ -1169,7 +1169,7 @@ int vmw_fence_event_ioctl(struct drm_device *dev, void *data,
}
vmw_execbuf_copy_fence_user(dev_priv, vmw_fp, 0, user_fence_rep, fence,
- handle, -1, NULL);
+ handle, -1);
vmw_fence_obj_unreference(&fence);
return 0;
out_no_create:
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
index 6a712a8d59e93..248d92c85cf69 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
@@ -2662,7 +2662,7 @@ void vmw_kms_helper_buffer_finish(struct vmw_private *dev_priv,
if (file_priv)
vmw_execbuf_copy_fence_user(dev_priv, vmw_fpriv(file_priv),
ret, user_fence_rep, fence,
- handle, -1, NULL);
+ handle, -1);
if (out_fence)
*out_fence = fence;
else
--
2.25.1

30 Jan '22
From: Sebastian Andrzej Siewior <bigeasy(a)linutronix.de>
mainline inclusion
from mainline-v5.2-rc1
commit c806e88734b9e9aea260bf2261c129aa23968fca
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Dave Hansen asked for __read_pkru() and __write_pkru() to be
symmetrical.
As part of the series __write_pkru() will read back the value and only
write it if it is different.
In order to make both functions symmetrical, move the function
containing only the opcode asm into a function called like the
instruction itself.
__write_pkru() will just invoke wrpkru() but in a follow-up patch will
also read back the value.
[ bp: Convert asm opcode wrapper names to rd/wrpkru(). ]
Suggested-by: Dave Hansen <dave.hansen(a)intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy(a)linutronix.de>
Signed-off-by: Borislav Petkov <bp(a)suse.de>
Reviewed-by: Dave Hansen <dave.hansen(a)intel.com>
Reviewed-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Andi Kleen <ak(a)linux.intel.com>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: "H. Peter Anvin" <hpa(a)zytor.com>
Cc: Ingo Molnar <mingo(a)redhat.com>
Cc: "Jason A. Donenfeld" <Jason(a)zx2c4.com>
Cc: Joerg Roedel <jroedel(a)suse.de>
Cc: Juergen Gross <jgross(a)suse.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov(a)linux.intel.com>
Cc: kvm ML <kvm(a)vger.kernel.org>
Cc: Michal Hocko <mhocko(a)suse.cz>
Cc: Paolo Bonzini <pbonzini(a)redhat.com>
Cc: "Radim Krčmář" <rkrcmar(a)redhat.com>
Cc: Rik van Riel <riel(a)surriel.com>
Cc: x86-ml <x86(a)kernel.org>
Link: https://lkml.kernel.org/r/20190403164156.19645-13-bigeasy@linutronix.de
Signed-off-by: Jackie Liu <liuyun01(a)kylinos.cn> #openEuler_contributor
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
Reviewed-by: Wei Li <liwei391(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/x86/include/asm/pgtable.h | 2 +-
arch/x86/include/asm/special_insns.h | 12 +++++++++---
arch/x86/kvm/vmx.c | 2 +-
3 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 6f4bc1d890d0f..fe6f9e7f15bb5 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -128,7 +128,7 @@ static inline int pte_dirty(pte_t pte)
static inline u32 read_pkru(void)
{
if (boot_cpu_has(X86_FEATURE_OSPKE))
- return __read_pkru();
+ return rdpkru();
return 0;
}
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 317fc59b512c5..cddf923d405ce 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -92,7 +92,7 @@ static inline void native_write_cr8(unsigned long val)
#endif
#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
-static inline u32 __read_pkru(void)
+static inline u32 rdpkru(void)
{
u32 ecx = 0;
u32 edx, pkru;
@@ -107,7 +107,7 @@ static inline u32 __read_pkru(void)
return pkru;
}
-static inline void __write_pkru(u32 pkru)
+static inline void wrpkru(u32 pkru)
{
u32 ecx = 0, edx = 0;
@@ -118,8 +118,14 @@ static inline void __write_pkru(u32 pkru)
asm volatile(".byte 0x0f,0x01,0xef\n\t"
: : "a" (pkru), "c"(ecx), "d"(edx));
}
+
+static inline void __write_pkru(u32 pkru)
+{
+ wrpkru(pkru);
+}
+
#else
-static inline u32 __read_pkru(void)
+static inline u32 rdpkru(void)
{
return 0;
}
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index b8fceac285f67..aa0045ce56eae 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -10989,7 +10989,7 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
*/
if (static_cpu_has(X86_FEATURE_PKU) &&
kvm_read_cr4_bits(vcpu, X86_CR4_PKE)) {
- vcpu->arch.pkru = __read_pkru();
+ vcpu->arch.pkru = rdpkru();
if (vcpu->arch.pkru != vmx->host_pkru)
__write_pkru(vmx->host_pkru);
}
--
2.25.1

[PATCH openEuler-1.0-LTS v2 0/2]Add copy_from_user and get_user support uce kernel recovery
by Tong Tiangen 30 Jan '22
v2 -> v1:
update commit message.
Tong Tiangen (2):
uce: copy_from_user scenario support kernel recovery
uce: get_user scenario support kernel recovery
arch/arm64/include/asm/exception.h | 6 +++
arch/arm64/include/asm/uaccess.h | 67 ++++++++++++++++++++++++++++++
arch/arm64/lib/Makefile | 2 +
arch/arm64/lib/copy_from_user.S | 6 +++
arch/arm64/lib/get_user.c | 20 +++++++++
arch/arm64/mm/fault.c | 29 +++++++++++--
include/linux/sched.h | 1 +
kernel/sysctl.c | 3 +-
mm/internal.h | 1 +
9 files changed, 130 insertions(+), 5 deletions(-)
create mode 100644 arch/arm64/lib/get_user.c
--
2.18.0.huawei.25

[PATCH openEuler-1.0-LTS] arm64: move jump_label_init() before parse_early_param()
by Yang Yingliang 30 Jan '22
From: Kees Cook <keescook(a)chromium.org>
mainline inclusion
from linux-v5.3-rc1
commit ba5c5e4a5da443e80a3722e67515de5e37375b18
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4RIZO
CVE: NA
--------------------------------
While jump_label_init() was moved earlier in the boot process in
efd9e03facd0 ("arm64: Use static keys for CPU features"), it wasn't early
enough for early params to use it. The old state of things was as
described here...
init/main.c calls out to arch-specific things before general jump label
and early param handling:
asmlinkage __visible void __init start_kernel(void)
{
...
setup_arch(&command_line);
...
smp_prepare_boot_cpu();
...
/* parameters may set static keys */
jump_label_init();
parse_early_param();
...
}
x86 setup_arch() wants those earlier, so it handles jump label and
early param:
void __init setup_arch(char **cmdline_p)
{
...
jump_label_init();
...
parse_early_param();
...
}
arm64 setup_arch() only had early param:
void __init setup_arch(char **cmdline_p)
{
...
parse_early_param();
...
}
with jump label later in smp_prepare_boot_cpu():
void __init smp_prepare_boot_cpu(void)
{
...
jump_label_init();
...
}
This moves arm64 jump_label_init() from smp_prepare_boot_cpu() to
setup_arch(), as done already on x86, in preparation from early param
usage in the init_on_alloc/free() series:
https://lkml.kernel.org/r/1561572949.5154.81.camel@lca.pw
Link: http://lkml.kernel.org/r/201906271003.005303B52@keescook
Signed-off-by: Kees Cook <keescook(a)chromium.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
Acked-by: Catalin Marinas <catalin.marinas(a)arm.com>
Cc: Alexander Potapenko <glider(a)google.com>
Cc: Qian Cai <cai(a)lca.pw>
Cc: Will Deacon <will(a)kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Ma Wupeng <mawupeng1(a)huawei.com>
Reviewed-by: Kefeng Wang<wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
arch/arm64/kernel/setup.c | 5 +++++
arch/arm64/kernel/smp.c | 5 -----
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index e23b804773874..6d7bb45717e0a 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -341,6 +341,11 @@ void __init setup_arch(char **cmdline_p)
setup_machine_fdt(__fdt_pointer);
+ /*
+ * Initialise the static keys early as they may be enabled by the
+ * cpufeature code and early parameters.
+ */
+ jump_label_init();
parse_early_param();
/*
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index f09c10863867b..4ef5bdb65b9db 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -614,11 +614,6 @@ void __init smp_cpus_done(unsigned int max_cpus)
void __init smp_prepare_boot_cpu(void)
{
set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
- /*
- * Initialise the static keys early as they may be enabled by the
- * cpufeature code.
- */
- jump_label_init();
cpuinfo_store_boot_cpu();
/*
--
2.25.1

[PATCH openEuler-1.0-LTS] sysctl: returns -EINVAL when a negative value is passed to proc_doulongvec_minmax
by Yang Yingliang 30 Jan '22
From: Baokun Li <libaokun1(a)huawei.com>
mainline inclusion
from mainline-5.17-rc1
commit 1622ed7d0743201293094162c26019d2573ecacb
category: bugfix
bugzilla: 185873, https://gitee.com/openeuler/kernel/issues/I4MTTR
CVE: NA
-------------------------------------------------
When we pass a negative value to the proc_doulongvec_minmax() function,
the function returns 0, but the corresponding interface value does not
change.
We can easily reproduce this problem with the following commands:
cd /proc/sys/fs/epoll
echo -1 > max_user_watches; echo $?; cat max_user_watches
This function requires a non-negative number to be passed in, so when a
negative number is passed in, -EINVAL is returned.
Link: https://lkml.kernel.org/r/20211220092627.3744624-1-libaokun1@huawei.com
Signed-off-by: Baokun Li <libaokun1(a)huawei.com>
Reported-by: Hulk Robot <hulkci(a)huawei.com>
Acked-by: Luis Chamberlain <mcgrof(a)kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Baokun Li <libaokun1(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
kernel/sysctl.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 35512e2ea8a34..31c35a65fbb78 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -3017,10 +3017,11 @@ static int __do_proc_doulongvec_minmax(void *data, struct ctl_table *table, int
err = proc_get_long(&p, &left, &val, &neg,
proc_wspace_sep,
sizeof(proc_wspace_sep), NULL);
- if (err)
+ if (err || neg) {
+ err = -EINVAL;
break;
- if (neg)
- continue;
+ }
+
val = convmul * val / convdiv;
if ((min && val < *min) || (max && val > *max)) {
err = -EINVAL;
--
2.25.1

[PATCH openEuler-5.10 1/3] perf tools: Update powerpc's syscall.tbl copy from the kernel sources
by Zheng Zengkai 29 Jan '22
From: Tiezhu Yang <yangtiezhu(a)loongson.cn>
mainline inclusion
from mainline-v5.11-rc1
commit c5ef52944a2d
category: bugfix
bugzilla: 186175, https://gitee.com/openeuler/kernel/issues/I4S77Z
CVE: NA
-------------------------------------------------
This silences the following tools/perf/ build warning:
Warning: Kernel ABI header at 'tools/perf/arch/powerpc/entry/syscalls/syscall.tbl' differs from latest version at 'arch/powerpc/kernel/syscalls/syscall.tbl'
Just make them the same:
cp arch/powerpc/kernel/syscalls/syscall.tbl tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
Signed-off-by: Tiezhu Yang <yangtiezhu(a)loongson.cn>
Reviewed-by: Naveen N. Rao <naveen.n.rao(a)linux.vnet.ibm.com>
Cc: Alexander Shishkin <alexander.shishkin(a)linux.intel.com>
Cc: Ingo Molnar <mingo(a)redhat.com>
Cc: Jiri Olsa <jolsa(a)redhat.com>
Cc: Mark Rutland <mark.rutland(a)arm.com>
Cc: Namhyung Kim <namhyung(a)kernel.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Xuefeng Li <lixuefeng(a)loongson.cn>
Link: http://lore.kernel.org/lkml/1608278364-6733-4-git-send-email-yangtiezhu@loo…
[ There were updates after Tiezhu's post, so I just updated the copy ]
Signed-off-by: Arnaldo Carvalho de Melo <acme(a)redhat.com>
[liwei391(a)huawei.com: remove 441]
Signed-off-by: Wei Li <liwei391(a)huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
.../arch/powerpc/entry/syscalls/syscall.tbl | 25 +++++++++++++------
1 file changed, 18 insertions(+), 7 deletions(-)
diff --git a/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl b/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
index b168364ac050..1275daec7fec 100644
--- a/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
+++ b/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
@@ -9,7 +9,9 @@
#
0 nospu restart_syscall sys_restart_syscall
1 nospu exit sys_exit
-2 nospu fork ppc_fork
+2 32 fork ppc_fork sys_fork
+2 64 fork sys_fork
+2 spu fork sys_ni_syscall
3 common read sys_read
4 common write sys_write
5 common open sys_open compat_sys_open
@@ -158,7 +160,9 @@
119 32 sigreturn sys_sigreturn compat_sys_sigreturn
119 64 sigreturn sys_ni_syscall
119 spu sigreturn sys_ni_syscall
-120 nospu clone ppc_clone
+120 32 clone ppc_clone sys_clone
+120 64 clone sys_clone
+120 spu clone sys_ni_syscall
121 common setdomainname sys_setdomainname
122 common uname sys_newuname
123 common modify_ldt sys_ni_syscall
@@ -240,7 +244,9 @@
186 spu sendfile sys_sendfile64
187 common getpmsg sys_ni_syscall
188 common putpmsg sys_ni_syscall
-189 nospu vfork ppc_vfork
+189 32 vfork ppc_vfork sys_vfork
+189 64 vfork sys_vfork
+189 spu vfork sys_ni_syscall
190 common ugetrlimit sys_getrlimit compat_sys_getrlimit
191 common readahead sys_readahead compat_sys_readahead
192 32 mmap2 sys_mmap2 compat_sys_mmap2
@@ -316,8 +322,8 @@
248 32 clock_nanosleep sys_clock_nanosleep_time32
248 64 clock_nanosleep sys_clock_nanosleep
248 spu clock_nanosleep sys_clock_nanosleep
-249 32 swapcontext ppc_swapcontext ppc32_swapcontext
-249 64 swapcontext ppc64_swapcontext
+249 32 swapcontext ppc_swapcontext compat_sys_swapcontext
+249 64 swapcontext sys_swapcontext
249 spu swapcontext sys_ni_syscall
250 common tgkill sys_tgkill
251 32 utimes sys_utimes_time32
@@ -456,7 +462,7 @@
361 common bpf sys_bpf
362 nospu execveat sys_execveat compat_sys_execveat
363 32 switch_endian sys_ni_syscall
-363 64 switch_endian ppc_switch_endian
+363 64 switch_endian sys_switch_endian
363 spu switch_endian sys_ni_syscall
364 common userfaultfd sys_userfaultfd
365 common membarrier sys_membarrier
@@ -516,6 +522,11 @@
432 common fsmount sys_fsmount
433 common fspick sys_fspick
434 common pidfd_open sys_pidfd_open
-435 nospu clone3 ppc_clone3
+435 32 clone3 ppc_clone3 sys_clone3
+435 64 clone3 sys_clone3
+435 spu clone3 sys_ni_syscall
+436 common close_range sys_close_range
437 common openat2 sys_openat2
438 common pidfd_getfd sys_pidfd_getfd
+439 common faccessat2 sys_faccessat2
+440 common process_madvise sys_process_madvise
--
2.20.1

[PATCH openEuler-1.0-LTS] tcp: fix memleak when tcp internal pacing is used
by Yang Yingliang 29 Jan '22
From: Zhang Changzhong <zhangchangzhong(a)huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4S8PN
CVE: NA
--------------------------------
The sock_hold() in tcp_internal_pacing() is expected to pair with
sock_put() in tcp_pace_kick(). But in some path tcp_internal_pacing()
is called without checking if pacing timer is already armed, causing
sock_hold() to be called one more time and tcp sock can't be released.
As Neal pointed out, this could happen from some of the retransmission
code paths that don't use tcp_xmit_retransmit_queue(), such as
tcp_retransmit_timer() and tcp_send_loss_probe().
The fix is provided by Eric, it extends the timer to cover all these
points that Neal mentioned.
Following is the reproduce procedure provided by Jason:
0) cat /proc/slabinfo | grep TCP
1) switch net.ipv4.tcp_congestion_control to bbr
2) using wrk tool something like that to send packages
3) using tc to increase the delay and loss to simulate the RTO case.
4) cat /proc/slabinfo | grep TCP
5) kill the wrk command and observe the number of objects and slabs in
TCP.
6) at last, you could notice that the number would not decrease.
Link: https://lore.kernel.org/all/CANn89i+7-wE4xr5D9DpH+N-xkL1SB8oVghCKgz+CT5eG1O…
Signed-off-by: Zhang Changzhong <zhangchangzhong(a)huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
net/ipv4/tcp_output.c | 31 +++++++++++++++++++++++--------
1 file changed, 23 insertions(+), 8 deletions(-)
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 6710056fd1b23..44250df02ee18 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1021,6 +1021,8 @@ enum hrtimer_restart tcp_pace_kick(struct hrtimer *timer)
static void tcp_internal_pacing(struct sock *sk, const struct sk_buff *skb)
{
+ struct tcp_sock *tp = tcp_sk(sk);
+ ktime_t expire, now;
u64 len_ns;
u32 rate;
@@ -1032,12 +1034,28 @@ static void tcp_internal_pacing(struct sock *sk, const struct sk_buff *skb)
len_ns = (u64)skb->len * NSEC_PER_SEC;
do_div(len_ns, rate);
- hrtimer_start(&tcp_sk(sk)->pacing_timer,
- ktime_add_ns(ktime_get(), len_ns),
+ now = ktime_get();
+ /* If hrtimer is already armed, then our caller has not
+ * used tcp_pacing_check().
+ */
+ if (unlikely(hrtimer_is_queued(&tp->pacing_timer))) {
+ expire = hrtimer_get_softexpires(&tp->pacing_timer);
+ if (ktime_after(expire, now))
+ now = expire;
+ if (hrtimer_try_to_cancel(&tp->pacing_timer) == 1)
+ __sock_put(sk);
+ }
+ hrtimer_start(&tp->pacing_timer, ktime_add_ns(now, len_ns),
HRTIMER_MODE_ABS_PINNED_SOFT);
sock_hold(sk);
}
+static bool tcp_pacing_check(const struct sock *sk)
+{
+ return tcp_needs_internal_pacing(sk) &&
+ hrtimer_is_queued(&tcp_sk(sk)->pacing_timer);
+}
+
static void tcp_update_skb_after_send(struct tcp_sock *tp, struct sk_buff *skb)
{
skb->skb_mstamp = tp->tcp_mstamp;
@@ -2174,6 +2192,9 @@ static int tcp_mtu_probe(struct sock *sk)
if (!tcp_can_coalesce_send_queue_head(sk, probe_size))
return -1;
+ if (tcp_pacing_check(sk))
+ return -1;
+
/* We're allowed to probe. Build it now. */
nskb = sk_stream_alloc_skb(sk, probe_size, GFP_ATOMIC, false);
if (!nskb)
@@ -2247,12 +2268,6 @@ static int tcp_mtu_probe(struct sock *sk)
return -1;
}
-static bool tcp_pacing_check(const struct sock *sk)
-{
- return tcp_needs_internal_pacing(sk) &&
- hrtimer_is_queued(&tcp_sk(sk)->pacing_timer);
-}
-
/* TCP Small Queues :
* Control number of packets in qdisc/devices to two packets / or ~1 ms.
* (These limits are doubled for retransmits)
--
2.25.1

[PATCH openEuler-5.10 1/9] fget: clarify and improve __fget_files() implementation
by Zheng Zengkai 29 Jan '22
From: Linus Torvalds <torvalds(a)linux-foundation.org>
mainline inclusion
from mainline-5.16-rc6
commit e386dfc56f837da66d00a078e5314bc8382fab83
category: perf
bugzilla: https://gitee.com/openeuler/kernel/issues/I4S0SZ
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
-------------------------------------------------
Commit 054aa8d439b9 ("fget: check that the fd still exists after getting
a ref to it") fixed a race with getting a reference to a file just as it
was being closed. It was a fairly minimal patch, and I didn't think
re-checking the file pointer lookup would be a measurable overhead,
since it was all right there and cached.
But I was wrong, as pointed out by the kernel test robot.
The 'poll2' case of the will-it-scale.per_thread_ops benchmark regressed
quite noticeably. Admittedly it seems to be a very artificial test:
doing "poll()" system calls on regular files in a very tight loop in
multiple threads.
That means that basically all the time is spent just looking up file
descriptors without ever doing anything useful with them (not that doing
'poll()' on a regular file is useful to begin with). And as a result it
shows the extra "re-check fd" cost as a sore thumb.
Happily, the regression is fixable by just writing the code to look up
the fd to be better and clearer. There's still a cost to verify the
file pointer, but now it's basically in the noise even for that
benchmark that does nothing else - and the code is more understandable
and has better comments too.
[ Side note: this patch is also a classic case of one that looks very
messy with the default greedy Myers diff - it's much more legible with
either the patience or histogram diff algorithm ]
Link: https://lore.kernel.org/lkml/20211210053743.GA36420@xsang-OptiPlex-9020/
Link: https://lore.kernel.org/lkml/20211213083154.GA20853@linux.intel.com/
Reported-by: kernel test robot <oliver.sang(a)intel.com>
Tested-by: Carel Si <beibei.si(a)intel.com>
Cc: Jann Horn <jannh(a)google.com>
Cc: Miklos Szeredi <mszeredi(a)redhat.com>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Conflicts:
fs/file.c
Signed-off-by: Baokun Li <libaokun1(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
fs/file.c | 72 ++++++++++++++++++++++++++++++++++++++++++-------------
1 file changed, 56 insertions(+), 16 deletions(-)
diff --git a/fs/file.c b/fs/file.c
index e4c168ddd8e7..0aa251ca02a6 100644
--- a/fs/file.c
+++ b/fs/file.c
@@ -866,28 +866,68 @@ void do_close_on_exec(struct files_struct *files)
spin_unlock(&files->file_lock);
}
-static struct file *__fget_files(struct files_struct *files, unsigned int fd,
- fmode_t mask, unsigned int refs)
+static inline struct file *__fget_files_rcu(struct files_struct *files,
+ unsigned int fd, fmode_t mask, unsigned int refs)
{
- struct file *file;
+ for (;;) {
+ struct file *file;
+ struct fdtable *fdt = rcu_dereference_raw(files->fdt);
+ struct file __rcu **fdentry;
- rcu_read_lock();
-loop:
- file = fcheck_files(files, fd);
- if (file) {
- /* File object ref couldn't be taken.
- * dup2() atomicity guarantee is the reason
- * we loop to catch the new file (or NULL pointer)
+ if (unlikely(fd >= fdt->max_fds))
+ return NULL;
+
+ fdentry = fdt->fd + array_index_nospec(fd, fdt->max_fds);
+ file = rcu_dereference_raw(*fdentry);
+ if (unlikely(!file))
+ return NULL;
+
+ if (unlikely(file->f_mode & mask))
+ return NULL;
+
+ /*
+ * Ok, we have a file pointer. However, because we do
+ * this all locklessly under RCU, we may be racing with
+ * that file being closed.
+ *
+ * Such a race can take two forms:
+ *
+ * (a) the file ref already went down to zero,
+ * and get_file_rcu_many() fails. Just try
+ * again:
*/
- if (file->f_mode & mask)
- file = NULL;
- else if (!get_file_rcu_many(file, refs))
- goto loop;
- else if (__fcheck_files(files, fd) != file) {
+ if (unlikely(!get_file_rcu_many(file, refs)))
+ continue;
+
+ /*
+ * (b) the file table entry has changed under us.
+ * Note that we don't need to re-check the 'fdt->fd'
+ * pointer having changed, because it always goes
+ * hand-in-hand with 'fdt'.
+ *
+ * If so, we need to put our refs and try again.
+ */
+ if (unlikely(rcu_dereference_raw(files->fdt) != fdt) ||
+ unlikely(rcu_dereference_raw(*fdentry) != file)) {
fput_many(file, refs);
- goto loop;
+ continue;
}
+
+ /*
+ * Ok, we have a ref to the file, and checked that it
+ * still exists.
+ */
+ return file;
}
+}
+
+static struct file *__fget_files(struct files_struct *files, unsigned int fd,
+ fmode_t mask, unsigned int refs)
+{
+ struct file *file;
+
+ rcu_read_lock();
+ file = __fget_files_rcu(files, fd, mask, refs);
rcu_read_unlock();
return file;
--
2.20.1

[PATCH openEuler-1.0-LTS 0/2]Add copy_from_user and get_user support uce kernel recovery
by Tong Tiangen 29 Jan '22
Tong Tiangen (2):
uce: copy_from_user scenario support kernel recovery
uce: get_user scenario support kernel recovery
arch/arm64/include/asm/exception.h | 6 +++
arch/arm64/include/asm/uaccess.h | 67 ++++++++++++++++++++++++++++++
arch/arm64/lib/Makefile | 2 +
arch/arm64/lib/copy_from_user.S | 6 +++
arch/arm64/lib/get_user.c | 20 +++++++++
arch/arm64/mm/fault.c | 29 +++++++++++--
include/linux/sched.h | 1 +
kernel/sysctl.c | 3 +-
mm/internal.h | 1 +
9 files changed, 130 insertions(+), 5 deletions(-)
create mode 100644 arch/arm64/lib/get_user.c
--
2.18.0.huawei.25
Best Regards,
hcni

29 Jan '22
Adding some more KABI reserve padding to base structures before the kernel freeze.
Ard Biesheuvel (7):
riscv: rely on core code to keep thread_info::cpu updated
powerpc: smp: remove hack to obtain offset of task_struct::cpu
arm64: add CPU field to struct thread_info
x86: add CPU field to struct thread_info
s390: add CPU field to struct thread_info
powerpc: add CPU field to struct thread_info
sched: move CPU field back into thread_info if THREAD_INFO_IN_TASK=y
Chen Jiahao (2):
KABI: reserve space for several i2c structures
KABI: reserve space for struct input_dev
Guan Jing (1):
KABI: add reserve space for sched structures
Guo Zihua (1):
KABI: KABI reservation for IMA namespace
Wang Hai (1):
kabi: net: reserve space for some net subsystems related structure
Wang ShaoBo (1):
kabi: reserve space for arm64 SME in thread_struct
Wenchao Hao (1):
kabi:fuse: reserve space for future expansion
Yu Liao (1):
kabi: Reserve space for struct acpi_device_power
arch/arm64/include/asm/processor.h | 9 +++++++++
arch/arm64/include/asm/thread_info.h | 1 +
arch/arm64/kernel/asm-offsets.c | 1 +
arch/powerpc/Makefile | 8 --------
arch/powerpc/include/asm/smp.h | 17 +---------------
arch/powerpc/include/asm/thread_info.h | 3 +++
arch/powerpc/kernel/asm-offsets.c | 4 +---
arch/powerpc/kernel/smp.c | 2 +-
arch/riscv/kernel/asm-offsets.c | 1 -
arch/riscv/kernel/entry.S | 5 -----
arch/riscv/kernel/head.S | 1 -
arch/s390/include/asm/thread_info.h | 1 +
arch/x86/include/asm/thread_info.h | 3 +++
fs/fuse/fuse_i.h | 10 +++++++++
include/acpi/acpi_bus.h | 3 +++
include/linux/fs.h | 2 ++
include/linux/i2c.h | 12 +++++++++++
include/linux/inetdevice.h | 4 ++++
include/linux/input.h | 4 ++++
include/linux/key-type.h | 2 ++
include/linux/key.h | 2 ++
include/linux/nsproxy.h | 9 +++++++++
include/linux/proc_ns.h | 2 +-
include/linux/sched.h | 28 +++++++++++++++-----------
include/net/devlink.h | 4 ++++
include/net/flow_dissector.h | 6 ++++++
include/net/sch_generic.h | 1 +
include/net/tls.h | 10 +++++++++
include/net/xsk_buff_pool.h | 3 +++
kernel/sched/sched.h | 17 ++++++++++++----
30 files changed, 123 insertions(+), 52 deletions(-)
--
2.20.1
Backport 5.10.91 LTS patches from upstream.
Arthur Kiyanovski (2):
net: ena: Fix undefined state when tx request id is out of bounds
net: ena: Fix error handling when calculating max IO queues number
Chao Yu (1):
f2fs: quota: fix potential deadlock
Christoph Hellwig (1):
netrom: fix copying in user data in nr_setsockopt
Chunfeng Yun (1):
usb: mtu3: fix interval value for intr and isoc
David Ahern (7):
ipv4: Check attribute length for RTA_GATEWAY in multipath route
ipv4: Check attribute length for RTA_FLOW in multipath route
ipv6: Check attribute length for RTA_GATEWAY in multipath route
ipv6: Check attribute length for RTA_GATEWAY when deleting multipath
route
lwtunnel: Validate RTA_ENCAP_TYPE attribute length
ipv6: Continue processing multipath route even if gateway attribute is
invalid
ipv6: Do cleanup if attribute validation fails in multipath route
Di Zhu (1):
i40e: fix use-after-free in i40e_sync_filters_subtask()
Eric Dumazet (1):
sch_qfq: prevent shift-out-of-bounds in qfq_init_qdisc
Jedrzej Jagielski (1):
i40e: Fix incorrect netdev's real number of RX/TX queues
Jiasheng Jiang (1):
RDMA/uverbs: Check for null return of kmalloc_array
Jiri Olsa (1):
ftrace/samples: Add missing prototypes direct functions
Karen Sornek (1):
iavf: Fix limit of total number of queues to active queues of VF
Lai, Derek (1):
drm/amd/display: Added power down for DCN10
Leon Romanovsky (1):
RDMA/core: Don't infoleak GRH fields
Linus Lüssing (1):
batman-adv: mcast: don't send link-local multicast to mcast routers
Linus Walleij (1):
power: supply: core: Break capacity loop
Lixiaokeng (1):
scsi: libiscsi: Fix UAF in
iscsi_conn_get_param()/iscsi_conn_teardown()
Martin Habets (1):
sfc: The RX page_ring is optional
Mateusz Palczewski (2):
i40e: Fix to not show opcode msg on unsuccessful VF MAC change
i40e: Fix for displaying message regarding NVM version
Nathan Chancellor (1):
power: reset: ltc2952: Fix use of floating point literals
Naveen N. Rao (2):
tracing: Fix check for trace_percpu_buffer validity in get_trace_buf()
tracing: Tag trace_percpu_buffer as a percpu pointer
Nikita Travkin (1):
Input: zinitix - make sure the IRQ is allocated before it gets enabled
Pavel Skripkin (1):
ieee802154: atusb: fix uninit value in atusb_set_extended_addr
Phil Elwell (1):
ARM: dts: gpio-ranges property is now required
Shuah Khan (1):
selftests: x86: fix [-Wstringop-overread] warn in
test_process_vm_readv()
Tamir Duberstein (1):
ipv6: raw: check passed optlen before reading
Thomas Toye (1):
rndis_host: support Hytera digital radios
Tom Rix (1):
mac80211: initialize variable have_higher_than_11mbit
William Zhao (1):
ip6_vti: initialize __ip6_tnl_parm struct in vti6_siocdevprivate
Yauhen Kharuzhy (1):
power: bq25890: Enable continuous conversion for ADC at charging
Zekun Shen (1):
atlantic: Fix buff_ring OOB in aq_ring_rx_clean
wolfgang huang (1):
mISDN: change function names to avoid conflicts
yangxingwu (1):
net: udp: fix alignment problem in udp4_seq_show()
arch/arm/boot/dts/bcm2711.dtsi | 2 +
arch/arm/boot/dts/bcm283x.dtsi | 2 +
.../gpu/drm/amd/display/dc/dcn10/dcn10_init.c | 1 +
drivers/infiniband/core/uverbs_marshall.c | 2 +-
drivers/infiniband/core/uverbs_uapi.c | 3 +
drivers/input/touchscreen/zinitix.c | 16 ++---
drivers/isdn/mISDN/core.c | 6 +-
drivers/isdn/mISDN/core.h | 4 +-
drivers/isdn/mISDN/layer1.c | 4 +-
drivers/net/ethernet/amazon/ena/ena_netdev.c | 38 ++++++------
.../net/ethernet/aquantia/atlantic/aq_ring.c | 8 +++
drivers/net/ethernet/intel/i40e/i40e_main.c | 60 ++++++++++++++++---
.../ethernet/intel/i40e/i40e_virtchnl_pf.c | 40 ++++++++++---
drivers/net/ethernet/intel/iavf/iavf_main.c | 5 +-
drivers/net/ethernet/sfc/falcon/rx.c | 5 ++
drivers/net/ethernet/sfc/rx_common.c | 5 ++
drivers/net/ieee802154/atusb.c | 10 ++--
drivers/net/usb/rndis_host.c | 5 ++
drivers/power/reset/ltc2952-poweroff.c | 4 +-
drivers/power/supply/bq25890_charger.c | 4 +-
drivers/power/supply/power_supply_core.c | 4 ++
drivers/scsi/libiscsi.c | 6 +-
drivers/usb/mtu3/mtu3_gadget.c | 4 +-
fs/f2fs/checkpoint.c | 3 +-
kernel/trace/trace.c | 6 +-
net/batman-adv/multicast.c | 15 +++--
net/batman-adv/multicast.h | 10 ++--
net/batman-adv/soft-interface.c | 7 ++-
net/core/lwtunnel.c | 4 ++
net/ipv4/fib_semantics.c | 49 +++++++++++++--
net/ipv4/udp.c | 2 +-
net/ipv6/ip6_vti.c | 2 +
net/ipv6/raw.c | 3 +
net/ipv6/route.c | 32 +++++++++-
net/mac80211/mlme.c | 2 +-
net/netrom/af_netrom.c | 2 +-
net/sched/sch_qfq.c | 6 +-
samples/ftrace/ftrace-direct-modify.c | 3 +
samples/ftrace/ftrace-direct-too.c | 3 +
samples/ftrace/ftrace-direct.c | 2 +
tools/testing/selftests/x86/test_vsyscall.c | 2 +-
41 files changed, 297 insertions(+), 94 deletions(-)
--
2.20.1
Backport 5.10.90 LTS patches from upstream.
Adrian Hunter (1):
perf script: Fix CPU filtering of a script's switch events
Aleksander Jan Bajkowski (1):
net: lantiq_xrx200: fix statistics of received bytes
Alex Deucher (1):
drm/amdgpu: add support for IP discovery gc_info table v2
Alexey Makhalov (1):
scsi: vmw_pvscsi: Set residual data length conditionally
Amir Tzin (1):
net/mlx5e: Wrap the tx reporter dump callback to extract the sq
Christophe JAILLET (2):
net: ag71xx: Fix a potential double free in error handling paths
ionic: Initialize the 'lif->dbid_inuse' bitmap
Chunfeng Yun (3):
usb: mtu3: add memory barrier before set GPD's HWO
usb: mtu3: fix list_head check warning
usb: mtu3: set interval of FS intr and isoc endpoint
Coco Li (2):
udp: using datalen to cap ipv6 udp max gso segments
selftests: Calculate udpgso segment count without header adjustment
Dan Carpenter (1):
scsi: lpfc: Terminate string in lpfc_debugfs_nvmeio_trc_write()
Daniel Borkmann (1):
bpf: Add kconfig knob for disabling unpriv bpf by default
Dmitry V. Levin (1):
uapi: fix linux/nfc.h userspace compilation errors
Dmitry Vyukov (1):
tomoyo: Check exceeded quota early in tomoyo_domain_quota_is_ok().
Dust Li (2):
net/smc: don't send CDC/LLC message if link not ready
net/smc: fix kernel panic caused by race of smc_sock
Gal Pressman (1):
net/mlx5e: Fix wrong features assignment in case of error
Heiko Carstens (1):
recordmcount.pl: fix typo in s390 mcount regex
Helge Deller (1):
parisc: Clear stale IIR value on instruction access rights trap
Jackie Liu (1):
memblock: fix memblock_phys_alloc() section mismatch error
James McLaughlin (1):
igc: Fix TX timestamp support for non-MSI-X platforms
Jiasheng Jiang (1):
net/ncsi: check for error return from call to nla_put_u32
Karsten Graul (2):
net/smc: fix using of uninitialized completions
net/smc: improved fix wait on already cleared link
Krzysztof Kozlowski (1):
nfc: uapi: use kernel size_t to fix user-space builds
Leo L. Schwab (1):
Input: spaceball - fix parsing of movement data packets
Mathias Nyman (1):
xhci: Fresco FL1100 controller should not have BROKEN_MSI quirk set.
Matthias-Christian Ott (1):
net: usb: pegasus: Do not drop long Ethernet frames
Maxim Mikityanskiy (1):
net/mlx5e: Fix ICOSQ recovery flow for XSK
Miaoqian Lin (2):
net/mlx5: DR, Fix NULL vs IS_ERR checking in dr_domain_init_resources
fsl/fman: Fix missing put_device() call in fman_port_probe
Muchun Song (1):
net: fix use-after-free in tw_timer_handler
Pavel Skripkin (2):
i2c: validate user data in compat ioctl
Input: appletouch - initialize work before device registration
Samuel Čavoj (1):
Input: i8042 - enable deferred probe quirk for ASUS UM325UA
Takashi Iwai (1):
Input: i8042 - add deferred probe support
Tetsuo Handa (1):
tomoyo: use hwight16() in tomoyo_domain_quota_is_ok()
Todd Kjos (1):
binder: fix async_free_space accounting for empty parcels
Tom Rix (1):
selinux: initialize proto variable in selinux_ip_postroute_compat()
Vincent Pelletier (1):
usb: gadget: f_fs: Clear ffs_eventfd in ffs_data_clear.
Wang Qing (1):
platform/x86: apple-gmux: use resource_size() with res
Wei Yongjun (1):
NFC: st21nfca: Fix memory leak in device probe and remove
Xin Long (1):
sctp: use call_rcu to free endpoint
chen gong (1):
drm/amdgpu: When the VCN(1.0) block is suspended, powergating is
explicitly enabled
wujianguo (1):
selftests/net: udpgso_bench_tx: fix dst ip argument
.../admin-guide/kernel-parameters.txt | 2 +
Documentation/admin-guide/sysctl/kernel.rst | 17 ++++-
arch/parisc/kernel/traps.c | 2 +
drivers/android/binder_alloc.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 76 +++++++++++++------
drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c | 7 ++
drivers/gpu/drm/amd/include/discovery.h | 49 ++++++++++++
drivers/i2c/i2c-dev.c | 3 +
drivers/input/joystick/spaceball.c | 11 ++-
drivers/input/mouse/appletouch.c | 4 +-
drivers/input/serio/i8042-x86ia64io.h | 21 +++++
drivers/input/serio/i8042.c | 54 ++++++++-----
drivers/net/ethernet/atheros/ag71xx.c | 23 ++----
.../net/ethernet/freescale/fman/fman_port.c | 12 +--
drivers/net/ethernet/intel/igc/igc_main.c | 6 ++
drivers/net/ethernet/lantiq_xrx200.c | 2 +-
drivers/net/ethernet/mellanox/mlx5/core/en.h | 3 -
.../mellanox/mlx5/core/en/reporter_tx.c | 10 ++-
.../net/ethernet/mellanox/mlx5/core/en_main.c | 41 ++++++----
.../mellanox/mlx5/core/steering/dr_domain.c | 5 +-
.../net/ethernet/pensando/ionic/ionic_lif.c | 2 +-
drivers/net/usb/pegasus.c | 4 +-
drivers/nfc/st21nfca/i2c.c | 29 ++++---
drivers/platform/x86/apple-gmux.c | 2 +-
drivers/scsi/lpfc/lpfc_debugfs.c | 4 +-
drivers/scsi/vmw_pvscsi.c | 7 +-
drivers/usb/gadget/function/f_fs.c | 9 ++-
drivers/usb/host/xhci-pci.c | 5 +-
drivers/usb/mtu3/mtu3_gadget.c | 8 ++
drivers/usb/mtu3/mtu3_qmu.c | 7 +-
include/linux/memblock.h | 4 +-
include/net/sctp/sctp.h | 6 +-
include/net/sctp/structs.h | 3 +-
include/uapi/linux/nfc.h | 6 +-
init/Kconfig | 10 +++
kernel/bpf/syscall.c | 3 +-
kernel/sysctl.c | 29 +++++--
net/ipv4/af_inet.c | 10 +--
net/ipv6/udp.c | 2 +-
net/ncsi/ncsi-netlink.c | 6 +-
net/sctp/diag.c | 12 +--
net/sctp/endpointola.c | 23 ++++--
net/sctp/socket.c | 23 ++++--
net/smc/smc.h | 5 ++
net/smc/smc_cdc.c | 59 +++++++-------
net/smc/smc_cdc.h | 2 +-
net/smc/smc_core.c | 47 ++++++++----
net/smc/smc_core.h | 6 ++
net/smc/smc_ib.c | 4 +-
net/smc/smc_ib.h | 1 +
net/smc/smc_llc.c | 65 ++++++++++++----
net/smc/smc_tx.c | 22 ++----
net/smc/smc_wr.c | 51 +++----------
net/smc/smc_wr.h | 17 ++++-
scripts/recordmcount.pl | 2 +-
security/selinux/hooks.c | 2 +-
security/tomoyo/util.c | 31 ++++----
tools/perf/builtin-script.c | 2 +-
tools/testing/selftests/net/udpgso.c | 12 +--
tools/testing/selftests/net/udpgso_bench_tx.c | 8 +-
60 files changed, 593 insertions(+), 307 deletions(-)
--
2.20.1
This patch set adds an NTFS Read-Write driver to fs/ntfs3.
Having decades of expertise in commercial file system development and huge
test coverage, we at Paragon Software GmbH want to make our contribution to
the Open Source Community by providing an implementation of an NTFS
Read-Write driver for the Linux kernel.
This is a fully functional NTFS Read-Write driver. The current version works
with NTFS (including v3.1) and normal/compressed/sparse files, and supports
journal replaying.
We plan to support this version once the codebase is merged, and to add new
features and fix bugs. For example, full journaling support over JBD will be
added in later updates.
v2:
- patch split into chunks (file-wise)
- build issues fixed
- sparse and checkpatch.pl errors fixed
- NULL pointer dereference on mkfs.ntfs-formatted volume mount fixed
- cosmetics + code cleanup
v3:
- added acl, noatime, no_acs_rules, prealloc mount options
- added fiemap support
- fixed encodings support
- removed typedefs
- adapted Kernel-way logging mechanisms
- fixed typos and corner-case issues
v4:
- atomic_open() refactored
- code style updated
- bugfixes
v5:
- nls/nls_alt mount options added
- Unicode conversion fixes
- Improved very fragmented files operations
- logging cosmetics
v6:
- Security Descriptors processing changed;
added system.ntfs_security xattr to set SD
- atomic_open() optimized
- cosmetics
Christophe JAILLET (2):
fs/ntfs3: Remove a useless test in 'indx_find()'
fs/ntfs3: Remove a useless shadowing variable
Colin Ian King (4):
fs/ntfs3: Fix various spelling mistakes
fs/ntfs3: Fix integer overflow in multiplication
fs/ntfs3: Remove redundant initialization of variable err
fs/ntfs3: Fix a memory leak on object opts
Dan Carpenter (5):
fs/ntfs3: add checks for allocation failure
fs/ntfs3: fix an error code in ntfs_get_acl_ex()
fs/ntfs3: Fix error code in indx_add_allocate()
fs/ntfs3: Potential NULL dereference in hdr_find_split()
fs/ntfs3: Fix error handling in indx_insert_into_root()
Gustavo A. R. Silva (1):
fs/ntfs3: Fix fall-through warnings for Clang
Jiapeng Chong (1):
fs/ntfs3: Remove unused including <linux/version.h>
Kari Argillander (54):
fs/ntfs3: Use linux/log2 is_power_of_2 function
fs/ntfs3: Add ifndef + define to all header files
fs/ntfs3: Fix one none utf8 char in source file
fs/ntfs3: Restyle comment block in ni_parse_reparse()
fs/ntfs3: Use kernel ALIGN macros over driver specific
fs/ntfs3: Do not use driver own alloc wrappers
fs/ntfs3: Use kcalloc/kmalloc_array over kzalloc/kmalloc
fs/ntfs3: Restyle comments to better align with kernel-doc
fs/ntfs3: Remove fat ioctl's from ntfs3 driver for now
fs/ntfs3: Fix integer overflow in ni_fiemap with fiemap_prep()
fs/ntfs3: Remove unnecessary condition checking from
ntfs_file_read_iter
fs/ntfs3: Remove GPL boilerplates from decompress lib files
fs/ntfs3: Change how module init/info messages are displayed
fs/ntfs3: Remove unnecesarry mount option noatime
fs/ntfs3: Remove unnecesarry remount flag handling
fs/ntfs3: Convert mount options to pointer in sbi
fs/ntfs3: Use new api for mounting
fs/ntfs3: Init spi more in init_fs_context than fill_super
fs/ntfs3: Make mount option nohidden more universal
fs/ntfs3: Add iocharset= mount option as alias for nls=
fs/ntfs3: Rename mount option no_acs_rules > (no)acsrules
fs/ntfs3: Show uid/gid always in show_options()
fs/ntfs3. Add forward declarations for structs to debug.h
fs/ntfs3: Add missing header files to ntfs.h
fs/ntfs3: Add missing headers and forward declarations to ntfs_fs.h
fs/ntfs3: Add missing header and guards to lib/ headers
fs/ntfs3: Change right headers to bitfunc.c
fs/ntfs3: Change right headers to upcase.c
fs/ntfs3: Change right headers to lznt.c
fs/ntfs3: Remove unneeded header files from c files
fs/ntfs3: Limit binary search table size
fs/ntfs3: Make binary search to search smaller chunks in beginning
fs/ntfs3: Always use binary search with entry search
fs/ntfs3: Remove '+' before constant in ni_insert_resident()
fs/ntfs3: Place Comparisons constant right side of the test
fs/ntfs3: Remove braces from single statment block
fs/ntfs3: Remove tabs before spaces from comment
fs/ntfs3: Fix ntfs_look_for_free_space() does only report -ENOSPC
fs/ntfs3: Remove always false condition check
fs/ntfs3: Use clamp/max macros instead of comparisons
fs/ntfs3: Use min/max macros instated of ternary operators
fs/ntfs3: Fix wrong error message $Logfile -> $UpCase
fs/ntfs3: Change EINVAL to ENOMEM when d_make_root fails
fs/ntfs3: Remove impossible fault condition in fill_super
fs/ntfs3: Return straight without goto in fill_super
fs/ntfs3: Remove unnecessary variable loading in fill_super
fs/ntfs3: Use sb instead of sbi->sb in fill_super
fs/ntfs3: Remove tmp var is_ro in ntfs_fill_super
fs/ntfs3: Remove tmp pointer bd_inode in fill_super
fs/ntfs3: Remove tmp pointer upcase in fill_super
fs/ntfs3: Initialize pointer before use place in fill_super
fs/ntfs3: Initiliaze sb blocksize only in one place + refactor
Doc/fs/ntfs3: Fix rst format and make it cleaner
fs/ntfs3: Remove deprecated mount options nls
Konstantin Komarov (37):
fs/ntfs3: Add headers and misc files
fs/ntfs3: Add initialization of super block
fs/ntfs3: Add bitmap
fs/ntfs3: Add file operations and implementation
fs/ntfs3: Add attrib operations
fs/ntfs3: Add compression
fs/ntfs3: Add NTFS journal
fs/ntfs3: Add Kconfig, Makefile and doc
fs/ntfs3: Rework file operations
fs/ntfs3: Restyle comments to better align with kernel-doc
fs/ntfs3: Fix insertion of attr in ni_ins_attr_ext
fs/ntfs3: Change max hardlinks limit to 4000
fs/ntfs3: Add sync flag to ntfs_sb_write_run and al_update
fs/ntfs3: Fix logical error in ntfs_create_inode
fs/ntfs3: Move ni_lock_dir and ni_unlock into ntfs_create_inode
fs/ntfs3: Refactor ntfs_get_acl_ex for better readability
fs/ntfs3: Pass flags to ntfs_set_ea in ntfs_set_acl_ex
fs/ntfs3: Change posix_acl_equiv_mode to posix_acl_update_mode
fs/ntfs3: Refactoring lock in ntfs_init_acl
fs/ntfs3: Reject mount if boot's cluster size < media sector size
fs/ntfs3: Refactoring of ntfs_init_from_boot
fs/ntfs3: Check for NULL if ATTR_EA_INFO is incorrect
fs/ntfs3: Use available posix_acl_release instead of
ntfs_posix_acl_release
fs/ntfs3: Remove locked argument in ntfs_set_ea
fs/ntfs3: Refactoring of ntfs_set_ea
fs/ntfs3: Forbid FALLOC_FL_PUNCH_HOLE for normal files
fs/ntfs3: Remove unnecessary functions
fs/ntfs3: Keep prealloc for all types of files
fs/ntfs3: Fix memory leak if fill_super failed
fs/ntfs3: Rework ntfs_utf16_to_nls
fs/ntfs3: Refactor ntfs_readlink_hlp
fs/ntfs3: Refactor ntfs_create_inode
fs/ntfs3: Refactor ni_parse_reparse
fs/ntfs3: Refactor ntfs_read_mft
fs/ntfs3: Check for NULL pointers in ni_try_remove_attr_list
fs/ntfs3: Add MAINTAINERS
fs/ntfs3: Add NTFS3 in fs/Kconfig and fs/Makefile
Nathan Chancellor (1):
fs/ntfs3: Remove unused variable cnt in ntfs_security_init()
Yin Xiujiang (2):
fs/ntfs3: Fix the issue from backport 5.15 to 5.10
fs/ntfs3: Add ntfs3 module in openeuler_defconfig
Documentation/filesystems/index.rst | 1 +
Documentation/filesystems/ntfs3.rst | 115 +
MAINTAINERS | 9 +
arch/arm64/configs/openeuler_defconfig | 4 +
arch/x86/configs/openeuler_defconfig | 4 +
fs/Kconfig | 1 +
fs/Makefile | 1 +
fs/ntfs3/Kconfig | 46 +
fs/ntfs3/Makefile | 36 +
fs/ntfs3/attrib.c | 2083 ++++++++++
fs/ntfs3/attrlist.c | 457 +++
fs/ntfs3/bitfunc.c | 128 +
fs/ntfs3/bitmap.c | 1491 +++++++
fs/ntfs3/debug.h | 55 +
fs/ntfs3/dir.c | 593 +++
fs/ntfs3/file.c | 1254 ++++++
fs/ntfs3/frecord.c | 3284 +++++++++++++++
fs/ntfs3/fslog.c | 5213 ++++++++++++++++++++++++
fs/ntfs3/fsntfs.c | 2506 ++++++++++++
fs/ntfs3/index.c | 2584 ++++++++++++
fs/ntfs3/inode.c | 1960 +++++++++
fs/ntfs3/lib/decompress_common.c | 319 ++
fs/ntfs3/lib/decompress_common.h | 343 ++
fs/ntfs3/lib/lib.h | 32 +
fs/ntfs3/lib/lzx_decompress.c | 670 +++
fs/ntfs3/lib/xpress_decompress.c | 142 +
fs/ntfs3/lznt.c | 453 ++
fs/ntfs3/namei.c | 387 ++
fs/ntfs3/ntfs.h | 1224 ++++++
fs/ntfs3/ntfs_fs.h | 1138 ++++++
fs/ntfs3/record.c | 602 +++
fs/ntfs3/run.c | 1111 +++++
fs/ntfs3/super.c | 1507 +++++++
fs/ntfs3/upcase.c | 104 +
fs/ntfs3/xattr.c | 991 +++++
35 files changed, 30848 insertions(+)
create mode 100644 Documentation/filesystems/ntfs3.rst
create mode 100644 fs/ntfs3/Kconfig
create mode 100644 fs/ntfs3/Makefile
create mode 100644 fs/ntfs3/attrib.c
create mode 100644 fs/ntfs3/attrlist.c
create mode 100644 fs/ntfs3/bitfunc.c
create mode 100644 fs/ntfs3/bitmap.c
create mode 100644 fs/ntfs3/debug.h
create mode 100644 fs/ntfs3/dir.c
create mode 100644 fs/ntfs3/file.c
create mode 100644 fs/ntfs3/frecord.c
create mode 100644 fs/ntfs3/fslog.c
create mode 100644 fs/ntfs3/fsntfs.c
create mode 100644 fs/ntfs3/index.c
create mode 100644 fs/ntfs3/inode.c
create mode 100644 fs/ntfs3/lib/decompress_common.c
create mode 100644 fs/ntfs3/lib/decompress_common.h
create mode 100644 fs/ntfs3/lib/lib.h
create mode 100644 fs/ntfs3/lib/lzx_decompress.c
create mode 100644 fs/ntfs3/lib/xpress_decompress.c
create mode 100644 fs/ntfs3/lznt.c
create mode 100644 fs/ntfs3/namei.c
create mode 100644 fs/ntfs3/ntfs.h
create mode 100644 fs/ntfs3/ntfs_fs.h
create mode 100644 fs/ntfs3/record.c
create mode 100644 fs/ntfs3/run.c
create mode 100644 fs/ntfs3/super.c
create mode 100644 fs/ntfs3/upcase.c
create mode 100644 fs/ntfs3/xattr.c
--
2.20.1

fs/ntfs3: Remove always false condition check
fs/ntfs3: Use clamp/max macros instead of comparisons
fs/ntfs3: Use min/max macros instated of ternary operators
fs/ntfs3: Fix wrong error message $Logfile -> $UpCase
fs/ntfs3: Change EINVAL to ENOMEM when d_make_root fails
fs/ntfs3: Remove impossible fault condition in fill_super
fs/ntfs3: Return straight without goto in fill_super
fs/ntfs3: Remove unnecessary variable loading in fill_super
fs/ntfs3: Use sb instead of sbi->sb in fill_super
fs/ntfs3: Remove tmp var is_ro in ntfs_fill_super
fs/ntfs3: Remove tmp pointer bd_inode in fill_super
fs/ntfs3: Remove tmp pointer upcase in fill_super
fs/ntfs3: Initialize pointer before use place in fill_super
fs/ntfs3: Initiliaze sb blocksize only in one place + refactor
Doc/fs/ntfs3: Fix rst format and make it cleaner
fs/ntfs3: Remove deprecated mount options nls
Konstantin Komarov (37):
fs/ntfs3: Add headers and misc files
fs/ntfs3: Add initialization of super block
fs/ntfs3: Add bitmap
fs/ntfs3: Add file operations and implementation
fs/ntfs3: Add attrib operations
fs/ntfs3: Add compression
fs/ntfs3: Add NTFS journal
fs/ntfs3: Add Kconfig, Makefile and doc
fs/ntfs3: Rework file operations
fs/ntfs3: Restyle comments to better align with kernel-doc
fs/ntfs3: Fix insertion of attr in ni_ins_attr_ext
fs/ntfs3: Change max hardlinks limit to 4000
fs/ntfs3: Add sync flag to ntfs_sb_write_run and al_update
fs/ntfs3: Fix logical error in ntfs_create_inode
fs/ntfs3: Move ni_lock_dir and ni_unlock into ntfs_create_inode
fs/ntfs3: Refactor ntfs_get_acl_ex for better readability
fs/ntfs3: Pass flags to ntfs_set_ea in ntfs_set_acl_ex
fs/ntfs3: Change posix_acl_equiv_mode to posix_acl_update_mode
fs/ntfs3: Refactoring lock in ntfs_init_acl
fs/ntfs3: Reject mount if boot's cluster size < media sector size
fs/ntfs3: Refactoring of ntfs_init_from_boot
fs/ntfs3: Check for NULL if ATTR_EA_INFO is incorrect
fs/ntfs3: Use available posix_acl_release instead of
ntfs_posix_acl_release
fs/ntfs3: Remove locked argument in ntfs_set_ea
fs/ntfs3: Refactoring of ntfs_set_ea
fs/ntfs3: Forbid FALLOC_FL_PUNCH_HOLE for normal files
fs/ntfs3: Remove unnecessary functions
fs/ntfs3: Keep prealloc for all types of files
fs/ntfs3: Fix memory leak if fill_super failed
fs/ntfs3: Rework ntfs_utf16_to_nls
fs/ntfs3: Refactor ntfs_readlink_hlp
fs/ntfs3: Refactor ntfs_create_inode
fs/ntfs3: Refactor ni_parse_reparse
fs/ntfs3: Refactor ntfs_read_mft
fs/ntfs3: Check for NULL pointers in ni_try_remove_attr_list
fs/ntfs3: Add MAINTAINERS
fs/ntfs3: Add NTFS3 in fs/Kconfig and fs/Makefile
Nathan Chancellor (1):
fs/ntfs3: Remove unused variable cnt in ntfs_security_init()
Yin Xiujiang (2):
fs/ntfs3: Fix the issue from backport 5.15 to 5.10
fs/ntfs3: Add ntfs3 module in openeuler_defconfig
Documentation/filesystems/index.rst | 1 +
Documentation/filesystems/ntfs3.rst | 115 +
MAINTAINERS | 9 +
arch/arm64/configs/openeuler_defconfig | 4 +
arch/x86/configs/openeuler_defconfig | 4 +
fs/Kconfig | 1 +
fs/Makefile | 1 +
fs/ntfs3/Kconfig | 46 +
fs/ntfs3/Makefile | 36 +
fs/ntfs3/attrib.c | 2083 ++++++++++
fs/ntfs3/attrlist.c | 457 +++
fs/ntfs3/bitfunc.c | 128 +
fs/ntfs3/bitmap.c | 1491 +++++++
fs/ntfs3/debug.h | 55 +
fs/ntfs3/dir.c | 593 +++
fs/ntfs3/file.c | 1254 ++++++
fs/ntfs3/frecord.c | 3284 +++++++++++++++
fs/ntfs3/fslog.c | 5213 ++++++++++++++++++++++++
fs/ntfs3/fsntfs.c | 2506 ++++++++++++
fs/ntfs3/index.c | 2584 ++++++++++++
fs/ntfs3/inode.c | 1960 +++++++++
fs/ntfs3/lib/decompress_common.c | 319 ++
fs/ntfs3/lib/decompress_common.h | 343 ++
fs/ntfs3/lib/lib.h | 32 +
fs/ntfs3/lib/lzx_decompress.c | 670 +++
fs/ntfs3/lib/xpress_decompress.c | 142 +
fs/ntfs3/lznt.c | 453 ++
fs/ntfs3/namei.c | 387 ++
fs/ntfs3/ntfs.h | 1224 ++++++
fs/ntfs3/ntfs_fs.h | 1138 ++++++
fs/ntfs3/record.c | 602 +++
fs/ntfs3/run.c | 1111 +++++
fs/ntfs3/super.c | 1507 +++++++
fs/ntfs3/upcase.c | 104 +
fs/ntfs3/xattr.c | 991 +++++
35 files changed, 30848 insertions(+)
create mode 100644 Documentation/filesystems/ntfs3.rst
create mode 100644 fs/ntfs3/Kconfig
create mode 100644 fs/ntfs3/Makefile
create mode 100644 fs/ntfs3/attrib.c
create mode 100644 fs/ntfs3/attrlist.c
create mode 100644 fs/ntfs3/bitfunc.c
create mode 100644 fs/ntfs3/bitmap.c
create mode 100644 fs/ntfs3/debug.h
create mode 100644 fs/ntfs3/dir.c
create mode 100644 fs/ntfs3/file.c
create mode 100644 fs/ntfs3/frecord.c
create mode 100644 fs/ntfs3/fslog.c
create mode 100644 fs/ntfs3/fsntfs.c
create mode 100644 fs/ntfs3/index.c
create mode 100644 fs/ntfs3/inode.c
create mode 100644 fs/ntfs3/lib/decompress_common.c
create mode 100644 fs/ntfs3/lib/decompress_common.h
create mode 100644 fs/ntfs3/lib/lib.h
create mode 100644 fs/ntfs3/lib/lzx_decompress.c
create mode 100644 fs/ntfs3/lib/xpress_decompress.c
create mode 100644 fs/ntfs3/lznt.c
create mode 100644 fs/ntfs3/namei.c
create mode 100644 fs/ntfs3/ntfs.h
create mode 100644 fs/ntfs3/ntfs_fs.h
create mode 100644 fs/ntfs3/record.c
create mode 100644 fs/ntfs3/run.c
create mode 100644 fs/ntfs3/super.c
create mode 100644 fs/ntfs3/upcase.c
create mode 100644 fs/ntfs3/xattr.c
--
2.20.1
From: Miklos Szeredi <mszeredi(a)redhat.com>
stable inclusion
from linux-4.19.219
commit 65f1f3eb09a3907c5f5edaff6c473b99e49e6020
--------------------------------
commit 712a951025c0667ff00b25afc360f74e639dfabe upstream.
It is possible to trigger a crash by splicing anon pipe bufs to the fuse
device.
The reason for this is that anon_pipe_buf_release() will reuse buf->page if
the refcount is 1, but that page might have already been stolen and its
flags modified (e.g. PG_lru added).
This happens in the unlikely case of fuse_dev_splice_write() getting around
to calling pipe_buf_release() after a page has been stolen, added to the
page cache and removed from the page cache.
Fix by calling pipe_buf_release() right after the page was inserted into
the page cache. In this case the page has an elevated refcount so any
release function will know that the page isn't reusable.
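The refcount rule the fix relies on can be modeled outside the kernel: an anonymous pipe buffer only recycles its page when the releasing side holds the last reference, so pinning the page with an extra reference before `pipe_buf_release()` makes reuse impossible. Below is a simplified userspace sketch of that invariant; the `page_ref` struct and function names are illustrative stand-ins, not kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of a pipe buffer's backing page refcount. */
struct page_ref {
	int refcount;
};

/* Mirrors the reuse rule in anon_pipe_buf_release(): the page is only
 * recycled when the releasing pipe buffer held the last reference. */
static bool try_reuse_on_release(struct page_ref *p)
{
	p->refcount--;
	return p->refcount == 0;	/* refcount was 1: page looks reusable */
}

/* The fix's ordering: the stolen page gains an extra reference (the
 * page cache's get_page()) before the pipe buffer is released, so the
 * release path can never consider the page reusable. */
static bool release_after_steal(struct page_ref *p)
{
	p->refcount++;			/* new owner pins the page */
	return try_reuse_on_release(p);
}
```

The second path is exactly what moving the `pipe_buf_release()` call earlier achieves: release happens while the extra reference is held.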
Reported-by: Frank Dinoff <fdinoff(a)google.com>
Link: https://lore.kernel.org/r/CAAmZXrsGg2xsP1CK+cbuEMumtrqdvD-NKnWzhNcvn71RV3c1…
Fixes: dd3bb14f44a6 ("fuse: support splice() writing to fuse device")
Cc: <stable(a)vger.kernel.org> # v2.6.35
Signed-off-by: Miklos Szeredi <mszeredi(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
fs/fuse/dev.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
index 498a4fab40801..3fee28ee63745 100644
--- a/fs/fuse/dev.c
+++ b/fs/fuse/dev.c
@@ -905,6 +905,12 @@ static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep)
goto out_put_old;
}
+ /*
+ * Release while we have extra ref on stolen page. Otherwise
+ * anon_pipe_buf_release() might think the page can be reused.
+ */
+ pipe_buf_release(cs->pipe, buf);
+
get_page(newpage);
if (!(buf->flags & PIPE_BUF_FLAG_LRU))
@@ -2054,8 +2060,12 @@ static ssize_t fuse_dev_splice_write(struct pipe_inode_info *pipe,
pipe_lock(pipe);
out_free:
- for (idx = 0; idx < nbuf; idx++)
- pipe_buf_release(pipe, &bufs[idx]);
+ for (idx = 0; idx < nbuf; idx++) {
+ struct pipe_buffer *buf = &bufs[idx];
+
+ if (buf->ops)
+ pipe_buf_release(pipe, buf);
+ }
pipe_unlock(pipe);
kvfree(bufs);
--
2.25.1
[PATCH openEuler-5.10 01/47] net: add and use skb_unclone_keeptruesize() helper
by Zheng Zengkai 27 Jan '22
From: Eric Dumazet <edumazet(a)google.com>
mainline inclusion
from mainline-v5.16-rc1
commit c4777efa751d293e369aec464ce6875e957be255
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4RYXI
CVE: NA
--------------------------------
While commit 097b9146c0e2 ("net: fix up truesize of cloned
skb in skb_prepare_for_shift()") fixed immediate issues found
when KFENCE was enabled/tested, there are still similar issues,
when tcp_trim_head() hits KFENCE while the master skb
is cloned.
This happens under heavy networking TX workloads,
when the TX completion might be delayed after incoming ACK.
This patch fixes the WARNING in sk_stream_kill_queues
when sk->sk_mem_queued/sk->sk_forward_alloc are not zero.
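The contract of the new `skb_unclone_keeptruesize()` helper is to expand the head of a cloned skb while leaving `truesize` untouched, because callers have already accounted socket memory with the old value. That contract can be sketched as a simplified userspace model; the `mock_skb` struct and `expand_head()` stub stand in for `struct sk_buff` and `pskb_expand_head()` and are illustrative only:

```c
#include <assert.h>
#include <stdbool.h>

struct mock_skb {
	bool cloned;
	unsigned int truesize;
};

/* Stand-in for pskb_expand_head(): the reallocated head may land in a
 * differently sized allocator bucket, which would normally change
 * truesize (the scenario KFENCE made easy to hit). */
static int expand_head(struct mock_skb *skb)
{
	skb->cloned = false;
	skb->truesize += 64;	/* e.g. the new head was rounded up */
	return 0;
}

/* Mirrors skb_unclone_keeptruesize(): save truesize across the
 * expansion and restore it afterwards. */
static int unclone_keeptruesize(struct mock_skb *skb)
{
	if (skb->cloned) {
		unsigned int save = skb->truesize;
		int res = expand_head(skb);

		skb->truesize = save;
		return res;
	}
	return 0;
}
```

Restoring the saved value is what keeps `sk->sk_mem_queued` / `sk->sk_forward_alloc` consistent when the skb is later freed.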
Fixes: d3fb45f370d9 ("mm, kfence: insert KFENCE hooks for SLAB")
Signed-off-by: Eric Dumazet <edumazet(a)google.com>
Acked-by: Marco Elver <elver(a)google.com>
Link: https://lore.kernel.org/r/20211102004555.1359210-1-eric.dumazet@gmail.com
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
Signed-off-by: YueHaibing <yuehaibing(a)huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1(a)huawei.com>
Reviewed-by: Liu Jian <liujian56(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
include/linux/skbuff.h | 16 ++++++++++++++++
net/core/skbuff.c | 14 +-------------
net/ipv4/tcp_output.c | 6 +++---
3 files changed, 20 insertions(+), 16 deletions(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index d485f17ff33a..e5f61bdd42a8 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -1652,6 +1652,22 @@ static inline int skb_unclone(struct sk_buff *skb, gfp_t pri)
return 0;
}
+/* This variant of skb_unclone() makes sure skb->truesize is not changed */
+static inline int skb_unclone_keeptruesize(struct sk_buff *skb, gfp_t pri)
+{
+ might_sleep_if(gfpflags_allow_blocking(pri));
+
+ if (skb_cloned(skb)) {
+ unsigned int save = skb->truesize;
+ int res;
+
+ res = pskb_expand_head(skb, 0, 0, pri);
+ skb->truesize = save;
+ return res;
+ }
+ return 0;
+}
+
/**
* skb_header_cloned - is the header a clone
* @skb: buffer to check
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 8d73503c8b32..16c74a81b7bf 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -3311,19 +3311,7 @@ EXPORT_SYMBOL(skb_split);
*/
static int skb_prepare_for_shift(struct sk_buff *skb)
{
- int ret = 0;
-
- if (skb_cloned(skb)) {
- /* Save and restore truesize: pskb_expand_head() may reallocate
- * memory where ksize(kmalloc(S)) != ksize(kmalloc(S)), but we
- * cannot change truesize at this point.
- */
- unsigned int save_truesize = skb->truesize;
-
- ret = pskb_expand_head(skb, 0, 0, GFP_ATOMIC);
- skb->truesize = save_truesize;
- }
- return ret;
+ return skb_unclone_keeptruesize(skb, GFP_ATOMIC);
}
/**
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index c8420b48de36..da8bcdd90715 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1613,7 +1613,7 @@ int tcp_fragment(struct sock *sk, enum tcp_queue tcp_queue,
return -ENOMEM;
}
- if (skb_unclone(skb, gfp))
+ if (skb_unclone_keeptruesize(skb, gfp))
return -ENOMEM;
/* Get a new skb... force flag on. */
@@ -1722,7 +1722,7 @@ int tcp_trim_head(struct sock *sk, struct sk_buff *skb, u32 len)
{
u32 delta_truesize;
- if (skb_unclone(skb, GFP_ATOMIC))
+ if (skb_unclone_keeptruesize(skb, GFP_ATOMIC))
return -ENOMEM;
delta_truesize = __pskb_trim_head(skb, len);
@@ -3236,7 +3236,7 @@ int __tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb, int segs)
cur_mss, GFP_ATOMIC))
return -ENOMEM; /* We'll try again later. */
} else {
- if (skb_unclone(skb, GFP_ATOMIC))
+ if (skb_unclone_keeptruesize(skb, GFP_ATOMIC))
return -ENOMEM;
diff = tcp_skb_pcount(skb);
--
2.20.1
[PATCH openEuler-5.10 1/9] crypto: Add PMULL judgment during initialization to prevent oops
by Zheng Zengkai 26 Jan '22
From: wangshouping <wangshouping(a)huawei.com>
openEuler inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4OKIE?from=project-issue
CVE: NA
----------------------------------------
On servers whose CPU does not support PMULL,
executing "modprobe crct10dif-ce" triggers an oops.
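The fix gates module registration on the CPU feature: when PMULL is absent, `module_init` should return `-ENODEV` instead of registering a shash whose first invocation would fault. A simplified userspace sketch of the gating pattern follows; `fake_cpu_has_pmull` and `register_alg()` are illustrative stubs for `cpu_have_named_feature(PMULL)` and `crypto_register_shash()`:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

static bool fake_cpu_has_pmull;	/* stub for cpu_have_named_feature(PMULL) */
static int registered;

/* Stub for crypto_register_shash(): records that the algorithm was
 * made visible to the crypto subsystem. */
static int register_alg(void)
{
	registered = 1;
	return 0;
}

/* Mirrors crct10dif_arm64_mod_init(): refuse to load on CPUs that
 * lack the instructions the algorithm depends on, so the oops can
 * never be reached. */
static int mod_init(void)
{
	if (!fake_cpu_has_pmull)
		return -ENODEV;
	return register_alg();
}
```

Returning `-ENODEV` from `module_init` makes modprobe fail cleanly instead of leaving a registered algorithm that crashes on first use.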
Signed-off-by: wangshouping <wangshouping(a)huawei.com>
Reviewed-by: Yue Haibing <yuehaibing(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
---
arch/arm64/crypto/crct10dif-neon_glue.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/crypto/crct10dif-neon_glue.c b/arch/arm64/crypto/crct10dif-neon_glue.c
index e0c4a9acee27..af731b3ec30e 100644
--- a/arch/arm64/crypto/crct10dif-neon_glue.c
+++ b/arch/arm64/crypto/crct10dif-neon_glue.c
@@ -97,7 +97,11 @@ static struct shash_alg alg = {
static int __init crct10dif_arm64_mod_init(void)
{
- return crypto_register_shash(&alg);
+ if (cpu_have_named_feature(PMULL)) {
+ return crypto_register_shash(&alg);
+ } else {
+ return -ENODEV;
+ }
}
static void __exit crct10dif_arm64_mod_fini(void)
--
2.20.1
Re: [PATCH openEuler-1.0-LTS V3 34/92] KVM: mmu: Fix SPTE encoding of MMIO generation upper half
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 32/92] KVM: x86: fix overlap between SPTE_MMIO_MASK and generation
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
[PATCH openEuler-1.0-LTS] ipmi_si: Phytium S2500 workaround for MMIO-based IPMI
by Yang Yingliang 26 Jan '22
From: Laibin Qiu <qiulaibin(a)huawei.com>
phytium inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4RK58
CVE: NA
--------------------------------
The system hangs when the Phytium S2500 communicates with some BMCs
after several rounds of transactions, unless the controller timeout
counter is manually reset by calling into firmware through an SMC.
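The workaround is applied only on affected platforms, detected by matching the ACPI DSDT OEM ID against a small table with a fixed-width compare (the OEM ID field in an ACPI table header is not NUL-terminated). A simplified userspace sketch of that matching step follows; `check_workaround()` is an illustrative wrapper around the same logic the patch adds:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define OEM_ID_SIZE 6	/* ACPI_OEM_ID_SIZE */

struct workaround_oem_info {
	char oem_id[OEM_ID_SIZE + 1];
};

/* Same table contents as the patch: platforms needing the reset. */
static const struct workaround_oem_info wa_info[] = {
	{ .oem_id = "KPSVVJ" },
};

/* Mirrors ipmi_check_phytium_workaround(): compare the firmware's
 * OEM ID against each table entry with a bounded strncmp(), since
 * the field in the ACPI header has no terminating NUL. */
static bool check_workaround(const char *acpi_oem_id)
{
	size_t i;

	for (i = 0; i < sizeof(wa_info) / sizeof(wa_info[0]); i++) {
		if (strncmp(wa_info[i].oem_id, acpi_oem_id, OEM_ID_SIZE) == 0)
			return true;
	}
	return false;
}
```

Doing the detection once at setup time (and caching the result in a flag) keeps the per-register-access hook cheap on unaffected hardware.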
Signed-off-by: Wang Yinfeng <wangyinfeng(a)phytium.com.cn>
Signed-off-by: Chen Baozi <chenbaozi(a)phytium.com.cn> #openEuler_contributor
Signed-off-by: Laibin Qiu <qiulaibin(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
drivers/char/ipmi/ipmi_si_mem_io.c | 72 ++++++++++++++++++++++++++++++
1 file changed, 72 insertions(+)
diff --git a/drivers/char/ipmi/ipmi_si_mem_io.c b/drivers/char/ipmi/ipmi_si_mem_io.c
index 75583612ab105..9403e35a19934 100644
--- a/drivers/char/ipmi/ipmi_si_mem_io.c
+++ b/drivers/char/ipmi/ipmi_si_mem_io.c
@@ -3,9 +3,75 @@
#include <linux/io.h>
#include "ipmi_si.h"
+#ifdef CONFIG_ARM_GIC_PHYTIUM_2500
+#include <linux/arm-smccc.h>
+
+#define CTL_RST_FUNC_ID 0xC2000011
+
+static bool apply_phytium2500_workaround;
+
+struct ipmi_workaround_oem_info {
+ char oem_id[ACPI_OEM_ID_SIZE + 1];
+};
+
+static struct ipmi_workaround_oem_info wa_info[] = {
+ {
+ .oem_id = "KPSVVJ",
+ }
+};
+
+static void ipmi_check_phytium_workaround(void)
+{
+#ifdef CONFIG_ACPI
+ struct acpi_table_header tbl;
+ int i;
+
+ if (ACPI_FAILURE(acpi_get_table_header(ACPI_SIG_DSDT, 0, &tbl)))
+ return;
+
+ for (i = 0; i < ARRAY_SIZE(wa_info); i++) {
+ if (strncmp(wa_info[i].oem_id, tbl.oem_id, ACPI_OEM_ID_SIZE))
+ continue;
+
+ apply_phytium2500_workaround = true;
+ break;
+ }
+#endif
+}
+
+static void ctl_smc(unsigned long arg0, unsigned long arg1,
+ unsigned long arg2, unsigned long arg3)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_smc(arg0, arg1, arg2, arg3, 0, 0, 0, 0, &res);
+ if (res.a0 != 0)
+ pr_err("Error: Firmware call SMC reset Failed: %d, addr: 0x%lx\n",
+ (int)res.a0, arg2);
+}
+
+static void ctl_timeout_reset(void)
+{
+ ctl_smc(CTL_RST_FUNC_ID, 0x1, 0x28100208, 0x1);
+ ctl_smc(CTL_RST_FUNC_ID, 0x1, 0x2810020C, 0x1);
+}
+
+static inline void ipmi_phytium_workaround(void)
+{
+ if (apply_phytium2500_workaround)
+ ctl_timeout_reset();
+}
+
+#else
+static inline void ipmi_check_phytium_workaround(void) {}
+static inline void ipmi_phytium_workaround(void) {}
+#endif
+
static unsigned char intf_mem_inb(const struct si_sm_io *io,
unsigned int offset)
{
+ ipmi_phytium_workaround();
+
return readb((io->addr)+(offset * io->regspacing));
}
@@ -31,6 +97,8 @@ static void intf_mem_outw(const struct si_sm_io *io, unsigned int offset,
static unsigned char intf_mem_inl(const struct si_sm_io *io,
unsigned int offset)
{
+ ipmi_phytium_workaround();
+
return (readl((io->addr)+(offset * io->regspacing)) >> io->regshift)
& 0xff;
}
@@ -44,6 +112,8 @@ static void intf_mem_outl(const struct si_sm_io *io, unsigned int offset,
#ifdef readq
static unsigned char mem_inq(const struct si_sm_io *io, unsigned int offset)
{
+ ipmi_phytium_workaround();
+
return (readq((io->addr)+(offset * io->regspacing)) >> io->regshift)
& 0xff;
}
@@ -81,6 +151,8 @@ int ipmi_si_mem_setup(struct si_sm_io *io)
if (!addr)
return -ENODEV;
+ ipmi_check_phytium_workaround();
+
/*
* Figure out the actual readb/readw/readl/etc routine to use based
* upon the register size.
--
2.25.1
Re: [PATCH openEuler-1.0-LTS V3 31/92] KVM: x86: assign two bits to track SPTE kinds
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 30/92] KVM: Move the memslot update in-progress flag to bit 63
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 29/92] KVM: Remove the hack to trigger memslot generation wraparound
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 13/92] KVM: x86: Refactor the MMIO SPTE generation handling
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 11/92] KVM: x86: Use a u64 when passing the MMIO gen around
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 12/92] KVM: Explicitly define the "memslot update in-progress" bit
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 27/92] KVM: SVM: Clear the CR4 register on reset
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 28/92] KVM: x86: clflushopt should be treated as a no-op by emulation
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 26/92] KVM: SVM: Replace hard-coded value with #define
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 25/92] KVM: x86/mmu: Set mmio_value to '0' if reserved #PF can't be generated
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 24/92] KVM: x86/mmu: Apply max PA check for MMIO sptes to 32-bit KVM
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 23/92] KVM: x86: only do L1TF workaround on affected processors
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 22/92] kvm: x86: Fix L1TF mitigation for shadow MMU
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 20/92] KVM: SVM: Override default MMIO mask if memory encryption is enabled
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 21/92] KVM: x86/mmu: Consolidate "is MMIO SPTE" code
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 19/92] KVM: x86/mmu: Add explicit access mask for MMIO SPTEs
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 17/92] KVM: x86: Rename access permissions cache member in struct kvm_vcpu_arch
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Backport 5.10.89 LTS patches from upstream.
Andrea Righi (1):
Input: elantech - fix stack out of bound access in
elantech_change_report_id()
Andrew Cooper (1):
x86/pkey: Fix undefined behaviour with PKRU_WD_BIT
Andrey Ryabinin (1):
mm: mempolicy: fix THP allocations escaping mempolicy restrictions
Ard Biesheuvel (1):
ARM: 9169/1: entry: fix Thumb2 bug in iWMMXt exception handling
Benjamin Tissoires (1):
HID: holtek: fix mouse probing
Bradley Scott (2):
ALSA: hda/realtek: Amp init fixup for HP ZBook 15 G6
ALSA: hda/realtek: Add new alc285-hp-amp-init model
Christian Brauner (1):
ceph: fix up non-directory creation in SGID directories
Colin Ian King (1):
ALSA: drivers: opl3: Fix incorrect use of vp->state
Derek Fang (1):
ASoC: rt5682: fix the wrong jack type detected
Dongliang Mu (1):
spi: change clk_disable_unprepare to clk_unprepare
Fabien Dessenne (1):
pinctrl: stm32: consider the GPIO offset to expose all the GPIO lines
Fernando Fernandez Mancera (1):
bonding: fix ad_actor_system option setting to default
Greg Jesionowski (1):
net: usb: lan78xx: add Allied Telesis AT29M2-AF
Guenter Roeck (6):
hwmon: (lm90) Fix usage of CONFIG2 register in detect function
hwmon: (lm90) Introduce flag indicating extended temperature support
hwmon: (lm90) Add basic support for TI TMP461
hwmon: (lm90) Drop critical attribute support for MAX6654
hwmom: (lm90) Fix citical alarm status for MAX6680/MAX6681
hwmon: (lm90) Do not report 'busy' status bit as alarm
Guodong Liu (1):
pinctrl: mediatek: fix global-out-of-bounds issue
Hans de Goede (1):
Input: goodix - add id->model mapping for the "9111" model
Heiner Kallweit (1):
igb: fix deadlock caused by taking RTNL in RPM resume path
Jiacheng Shi (1):
RDMA/hns: Replace kfree() with kvfree()
Jiasheng Jiang (7):
HID: potential dereference of null pointer
qlcnic: potential dereference null pointer of rx_queue->page_ring
fjes: Check for error irq
drivers: net: smc911x: Check for error irq
net: ks8851: Check for error irq
sfc: Check null pointer of rx_queue->page_ring
sfc: falcon: Check null pointer of rx_queue->page_ring
Johan Hovold (1):
platform/x86: intel_pmc_core: fix memleak on registration failure
Johannes Berg (1):
mac80211: fix locking in ieee80211_start_ap error path
John David Anglin (2):
parisc: Correct completer in lws start
parisc: Fix mask used to select futex spinlock
Johnny Chuang (1):
Input: elants_i2c - do not check Remark ID on eKTH3900/eKTH5312
José Expósito (2):
IB/qib: Fix memory leak in qib_user_sdma_queue_pkts()
Input: atmel_mxt_ts - fix double free in mxt_read_info_block
Lin Ma (3):
ax25: NPD bug when detaching AX25 device
hamradio: defer ax25 kfree after unregister_netdev
hamradio: improve the incomplete fix to avoid NPD
Marian Postevca (1):
usb: gadget: u_ether: fix race in setting MAC address in setup phase
Martin Blumenstingl (3):
ASoC: meson: aiu: fifo: Add missing dma_coerce_mask_and_coherent()
ASoC: meson: aiu: Move AIU_I2S_MISC hold setting to aiu-fifo-i2s
mmc: meson-mx-sdhc: Set MANUAL_STOP for multi-block SDIO commands
Martin Haaß (1):
ARM: dts: imx6qdl-wandboard: Fix Ethernet support
Martin Povišer (1):
ASoC: tas2770: Fix setting of high sample rates
Mian Yousaf Kaukab (1):
ipmi: ssif: initialize ssif_info->client early
Nick Desaulniers (2):
arm64: vdso32: drop -no-integrated-as flag
arm64: vdso32: require CROSS_COMPILE_COMPAT for gcc+bfd
Noralf Trønnes (1):
gpio: dln2: Fix interrupts when replugging the device
Phil Elwell (1):
pinctrl: bcm2835: Change init order for gpio hogs
Prathamesh Shete (1):
mmc: sdhci-tegra: Fix switch to HS400ES mode
Robert Marko (1):
arm64: dts: allwinner: orangepi-zero-plus: fix PHY mode
Rémi Denis-Courmont (1):
phonet/pep: refuse to enable an unbound pipe
Sean Christopherson (2):
KVM: VMX: Wake vCPU when delivering posted IRQ even if vCPU == this
vCPU
KVM: VMX: Fix stale docs for kvm-intel.emulate_invalid_guest_state
Sumit Garg (1):
tee: optee: Fix incorrect page free bug
Thadeu Lima de Souza Cascardo (2):
ipmi: bail out if init_srcu_struct fails
ipmi: fix initialization when workqueue allocation fails
Ulf Hansson (1):
mmc: core: Disable card detect during shutdown
Werner Sembach (1):
ALSA: hda/realtek: Fix quirk for Clevo NJ51CU
Willem de Bruijn (2):
net: accept UFOv6 packages in virtio_net_hdr_to_skb
net: skip virtio_net_hdr_set_proto if protocol already set
Wu Bo (1):
ipmi: Fix UAF when uninstall ipmi_si and ipmi_msghandler module
Xiaoke Wang (1):
ALSA: jack: Check the return value of kstrdup()
Yann Gautier (1):
mmc: mmci: stm32: clear DLYB_CR after sending tuning command
Yevhen Orlov (1):
net: marvell: prestera: fix incorrect return of port_find
.../admin-guide/kernel-parameters.txt | 8 +-
Documentation/hwmon/lm90.rst | 10 +
Documentation/networking/bonding.rst | 11 +-
Documentation/sound/hd-audio/models.rst | 2 +
arch/arm/boot/dts/imx6qdl-wandboard.dtsi | 1 +
arch/arm/kernel/entry-armv.S | 8 +-
arch/arm64/Kconfig | 3 +-
.../sun50i-h5-orangepi-zero-plus.dts | 2 +-
arch/arm64/kernel/vdso32/Makefile | 25 +--
arch/parisc/include/asm/futex.h | 4 +-
arch/parisc/kernel/syscall.S | 2 +-
arch/x86/include/asm/pgtable.h | 4 +-
arch/x86/kvm/vmx/vmx.c | 3 +-
drivers/char/ipmi/ipmi_msghandler.c | 21 ++-
drivers/char/ipmi/ipmi_ssif.c | 7 +-
drivers/gpio/gpio-dln2.c | 19 +-
drivers/hid/hid-holtek-mouse.c | 15 ++
drivers/hid/hid-vivaldi.c | 3 +
drivers/hwmon/Kconfig | 2 +-
drivers/hwmon/lm90.c | 171 +++++++++++-------
drivers/infiniband/hw/hns/hns_roce_srq.c | 2 +-
drivers/infiniband/hw/qib/qib_user_sdma.c | 2 +-
drivers/input/mouse/elantech.c | 8 +-
drivers/input/touchscreen/atmel_mxt_ts.c | 2 +-
drivers/input/touchscreen/elants_i2c.c | 46 ++++-
drivers/input/touchscreen/goodix.c | 1 +
drivers/mmc/core/core.c | 7 +-
drivers/mmc/core/core.h | 1 +
drivers/mmc/core/host.c | 9 +
drivers/mmc/host/meson-mx-sdhc-mmc.c | 16 ++
drivers/mmc/host/mmci_stm32_sdmmc.c | 2 +
drivers/mmc/host/sdhci-tegra.c | 43 +++--
drivers/net/bonding/bond_options.c | 2 +-
drivers/net/ethernet/intel/igb/igb_main.c | 19 +-
.../ethernet/marvell/prestera/prestera_main.c | 16 +-
drivers/net/ethernet/micrel/ks8851_par.c | 2 +
.../net/ethernet/qlogic/qlcnic/qlcnic_sriov.h | 2 +-
.../qlogic/qlcnic/qlcnic_sriov_common.c | 12 +-
.../ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c | 4 +-
drivers/net/ethernet/sfc/falcon/rx.c | 5 +-
drivers/net/ethernet/sfc/rx_common.c | 5 +-
drivers/net/ethernet/smsc/smc911x.c | 5 +
drivers/net/fjes/fjes_main.c | 5 +
drivers/net/hamradio/mkiss.c | 5 +-
drivers/net/usb/lan78xx.c | 6 +
drivers/pinctrl/bcm/pinctrl-bcm2835.c | 29 +--
.../pinctrl/mediatek/pinctrl-mtk-common-v2.c | 8 +-
drivers/pinctrl/stm32/pinctrl-stm32.c | 8 +-
drivers/platform/x86/intel_pmc_core_pltdrv.c | 2 +-
drivers/spi/spi-armada-3700.c | 2 +-
drivers/tee/optee/shm_pool.c | 6 +-
drivers/usb/gadget/function/u_ether.c | 15 +-
fs/ceph/file.c | 18 +-
include/linux/virtio_net.h | 25 ++-
mm/mempolicy.c | 3 +-
net/ax25/af_ax25.c | 4 +-
net/mac80211/cfg.c | 3 +
net/phonet/pep.c | 2 +
sound/core/jack.c | 4 +
sound/drivers/opl3/opl3_midi.c | 2 +-
sound/pci/hda/patch_realtek.c | 28 ++-
sound/soc/codecs/rt5682.c | 4 +
sound/soc/codecs/tas2770.c | 4 +-
sound/soc/meson/aiu-encoder-i2s.c | 33 ----
sound/soc/meson/aiu-fifo-i2s.c | 19 ++
sound/soc/meson/aiu-fifo.c | 6 +
66 files changed, 521 insertions(+), 252 deletions(-)
--
2.20.1
Re: [PATCH openEuler-1.0-LTS V3 18/92] kvm: x86: Fix reserved bits related calculation errors caused by MKTME
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 16/92] kvm: x86: Move kvm_set_mmio_spte_mask() from x86.c to mmu.c
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 15/92] kvm/svm: PKU not currently supported
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 14/92] kvm: x86: Expose RDPID in KVM_GET_SUPPORTED_CPUID
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>
Re: [PATCH openEuler-1.0-LTS V3 10/92] KVM: x86: expose MOVDIR64B CPU feature into VM.
by Zenghui Yu 26 Jan '22
Reviewed-by: Zenghui Yu <yuzenghui(a)huawei.com>