Kernel mailing list, May 2023
From: xiabing <xiabing12(a)h-partners.com>
bugfix
Xingui Yang (2):
scsi: hisi_sas: Modify v3 HW SATA disk error state completion
processing
scsi: hisi_sas: Change DMA setup lock timeout to 2.5s
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
--
2.30.0

[PATCH OLK-5.10 0/2] crypto: hisilicon/qm - support dumping stop queue status
by Weili Qian 19 May '23
From: JiangShui Yang <yangjiangshui(a)h-partners.com>
Add a debugfs file 'dev_state' to query the status of the stop queue.
The root user can also set 'dev_timeout': if the task flow fails to
stop, the driver waits dev_timeout * 20ms before releasing the queue.
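For illustration only, the wait described above could be structured
like the sketch below. The names qm_wait_queue_stop and
qm_queue_is_stopped are assumptions made for this sketch, not the
actual API added by the series; the real implementation lives in the
qm.c changes.

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/hisi_acc_qm.h>

#define QM_WAIT_STEP_MS 20	/* per the cover letter: dev_timeout * 20ms */

/* Hypothetical helper; the real check is driver-internal. */
static bool qm_queue_is_stopped(struct hisi_qm *qm);

/* Poll up to 'dev_timeout' times, 20ms apart, before giving up. */
static int qm_wait_queue_stop(struct hisi_qm *qm, u32 dev_timeout)
{
	u32 i;

	for (i = 0; i < dev_timeout; i++) {
		if (qm_queue_is_stopped(qm))
			return 0;
		msleep(QM_WAIT_STEP_MS);
	}

	/* Caller releases the queue only after dev_timeout * 20ms. */
	return -ETIMEDOUT;
}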
Weili Qian (2):
crypto: hisilicon/qm - add debugfs to query the status of the stop
queue
crypto: hisilicon/qm - support dumping stop queue status
Documentation/ABI/testing/debugfs-hisi-hpre | 15 ++++
Documentation/ABI/testing/debugfs-hisi-sec | 15 ++++
Documentation/ABI/testing/debugfs-hisi-zip | 15 ++++
drivers/crypto/hisilicon/debugfs.c | 5 ++
drivers/crypto/hisilicon/qm.c | 98 +++++++++++++++------
include/linux/hisi_acc_qm.h | 14 +++
6 files changed, 135 insertions(+), 27 deletions(-)
--
2.30.0

[PATCH openEuler-1.0-LTS] x86/msr-index: make SPEC_CTRL_IBRS assembler-portable
by Yongqiang Liu 18 May '23
From: Nick Desaulniers <ndesaulniers(a)google.com>
maillist inclusion
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I6V709
CVE: NA
Reference: https://lore.kernel.org/lkml/20221103210748.1343090-1-ndesaulniers@google.c…
--------------------------------
GNU binutils' assembler (GAS) didn't support L suffixes on immediates
until the binutils 2.28 release. Building arch/x86/entry/entry_64.S with
GAS v2.27 will produce the following assembler errors:
arch/x86/entry/entry_64.S: Assembler messages:
arch/x86/entry/entry_64.S:308: Error: found 'L', expected: ')'
arch/x86/entry/entry_64.S:308: Error: found 'L', expected: ')'
arch/x86/entry/entry_64.S:308: Error: junk `L<<(0)))' after expression
arch/x86/entry/entry_64.S:596: Error: found 'L', expected: ')'
arch/x86/entry/entry_64.S:596: Error: found 'L', expected: ')'
arch/x86/entry/entry_64.S:596: Error: junk `L<<(0)))' after expression
These come from the use of the preprocessor defined SPEC_CTRL_IBRS in
the IBRS_ENTER and IBRS_EXIT assembler macros. SPEC_CTRL_IBRS was using
the BIT macros from include/linux/bits.h which are only portable between
C and assembler for assemblers such as GAS v2.28 (or newer) or clang
because they use the L suffixes for immediate operands, which older GAS
releases cannot parse. The kernel still supports GAS v2.23 and newer
(and older for branches of stable). Let's expand the value of
SPEC_CTRL_IBRS in place so that assemblers don't have issues parsing the
value.
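For illustration, the change amounts to roughly the following (a
sketch of what the preprocessor hands to the assembler, not the
literal kernel source; the exact BIT() definition varies by tree):

/* Before: BIT(0) expands with a UL suffix, roughly: */
#define SPEC_CTRL_IBRS (1UL << (0))
/*
 * entry_64.S then feeds "$(1UL << (0))" to GAS, and GAS 2.27 stops
 * at the 'L' suffix: "Error: found 'L', expected: ')'".
 */

/* After: the value is written out in place, so no suffix reaches GAS. */
#define SPEC_CTRL_IBRS 1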
Fixes: 2dbb887e875b ("x86/entry: Add kernel IBRS implementation")
Reported-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Nick Desaulniers <ndesaulniers(a)google.com>
Signed-off-by: Lin Yujun <linyujun809(a)huawei.com>
Reviewed-by: Liao Chang <liaochang1(a)huawei.com>
Reviewed-by: Zhang Jianhua <chris.zjh(a)huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13(a)huawei.com>
---
arch/x86/include/asm/msr-index.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 1336c900e723..779b653f6546 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -42,7 +42,7 @@
/* Intel MSRs. Some also available on other CPUs */
#define MSR_IA32_SPEC_CTRL 0x00000048 /* Speculation Control */
-#define SPEC_CTRL_IBRS BIT(0) /* Indirect Branch Restricted Speculation */
+#define SPEC_CTRL_IBRS 1 /* Indirect Branch Restricted Speculation */
#define SPEC_CTRL_STIBP_SHIFT 1 /* Single Thread Indirect Branch Predictor (STIBP) bit */
#define SPEC_CTRL_STIBP BIT(SPEC_CTRL_STIBP_SHIFT) /* STIBP mask */
#define SPEC_CTRL_SSBD_SHIFT 2 /* Speculative Store Bypass Disable bit */
--
2.25.1

[PATCH openEuler-1.0-LTS] xfs: verify buffer contents when we skip log replay
by Yongqiang Liu 18 May '23
From: "Darrick J. Wong" <djwong(a)kernel.org>
mainline inclusion
from mainline-v6.3-rc6
commit 22ed903eee23a5b174e240f1cdfa9acf393a5210
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I6X4UN
CVE: CVE-2023-2124
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
syzbot detected a crash during log recovery:
XFS (loop0): Mounting V5 Filesystem bfdc47fc-10d8-4eed-a562-11a831b3f791
XFS (loop0): Torn write (CRC failure) detected at log block 0x180. Truncating head block from 0x200.
XFS (loop0): Starting recovery (logdev: internal)
==================================================================
BUG: KASAN: slab-out-of-bounds in xfs_btree_lookup_get_block+0x15c/0x6d0 fs/xfs/libxfs/xfs_btree.c:1813
Read of size 8 at addr ffff88807e89f258 by task syz-executor132/5074
CPU: 0 PID: 5074 Comm: syz-executor132 Not tainted 6.2.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1b1/0x290 lib/dump_stack.c:106
print_address_description+0x74/0x340 mm/kasan/report.c:306
print_report+0x107/0x1f0 mm/kasan/report.c:417
kasan_report+0xcd/0x100 mm/kasan/report.c:517
xfs_btree_lookup_get_block+0x15c/0x6d0 fs/xfs/libxfs/xfs_btree.c:1813
xfs_btree_lookup+0x346/0x12c0 fs/xfs/libxfs/xfs_btree.c:1913
xfs_btree_simple_query_range+0xde/0x6a0 fs/xfs/libxfs/xfs_btree.c:4713
xfs_btree_query_range+0x2db/0x380 fs/xfs/libxfs/xfs_btree.c:4953
xfs_refcount_recover_cow_leftovers+0x2d1/0xa60 fs/xfs/libxfs/xfs_refcount.c:1946
xfs_reflink_recover_cow+0xab/0x1b0 fs/xfs/xfs_reflink.c:930
xlog_recover_finish+0x824/0x920 fs/xfs/xfs_log_recover.c:3493
xfs_log_mount_finish+0x1ec/0x3d0 fs/xfs/xfs_log.c:829
xfs_mountfs+0x146a/0x1ef0 fs/xfs/xfs_mount.c:933
xfs_fs_fill_super+0xf95/0x11f0 fs/xfs/xfs_super.c:1666
get_tree_bdev+0x400/0x620 fs/super.c:1282
vfs_get_tree+0x88/0x270 fs/super.c:1489
do_new_mount+0x289/0xad0 fs/namespace.c:3145
do_mount fs/namespace.c:3488 [inline]
__do_sys_mount fs/namespace.c:3697 [inline]
__se_sys_mount+0x2d3/0x3c0 fs/namespace.c:3674
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f89fa3f4aca
Code: 83 c4 08 5b 5d c3 66 2e 0f 1f 84 00 00 00 00 00 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 c0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fffd5fb5ef8 EFLAGS: 00000206 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00646975756f6e2c RCX: 00007f89fa3f4aca
RDX: 0000000020000100 RSI: 0000000020009640 RDI: 00007fffd5fb5f10
RBP: 00007fffd5fb5f10 R08: 00007fffd5fb5f50 R09: 000000000000970d
R10: 0000000000200800 R11: 0000000000000206 R12: 0000000000000004
R13: 0000555556c6b2c0 R14: 0000000000200800 R15: 00007fffd5fb5f50
</TASK>
The fuzzed image contains an AGF with an obviously garbage
agf_refcount_level value of 32, and a dirty log with a buffer log item
for that AGF. The ondisk AGF has a higher LSN than the recovered log
item. xlog_recover_buf_commit_pass2 reads the buffer, compares the
LSNs, and decides to skip replay because the ondisk buffer appears to be
newer.
Unfortunately, the ondisk buffer is corrupt, but recovery just read the
buffer with no buffer ops specified:
error = xfs_buf_read(mp->m_ddev_targp, buf_f->blf_blkno,
buf_f->blf_len, buf_flags, &bp, NULL);
Skipping the buffer leaves its contents in memory unverified. This sets
us up for a kernel crash because xfs_refcount_recover_cow_leftovers
reads the buffer (which is still around in XBF_DONE state, so no read
verification) and creates a refcountbt cursor of height 32. This is
impossible so we run off the end of the cursor object and crash.
Fix this by invoking the verifier on all skipped buffers and aborting
log recovery if the ondisk buffer is corrupt. It might be smarter to
force replay the log item atop the buffer and then see if it'll pass the
write verifier (like ext4 does) but for now let's go with the
conservative option where we stop immediately.
Link: https://syzkaller.appspot.com/bug?extid=7e9494b8b399902e994e
Signed-off-by: Darrick J. Wong <djwong(a)kernel.org>
Reviewed-by: Dave Chinner <dchinner(a)redhat.com>
Signed-off-by: Dave Chinner <david(a)fromorbit.com>
Conflicts:
fs/xfs/xfs_log_recover.c
Signed-off-by: Long Li <leo.lilong(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13(a)huawei.com>
---
fs/xfs/xfs_log_recover.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c
index ca8075894bea..a6e8fadae007 100644
--- a/fs/xfs/xfs_log_recover.c
+++ b/fs/xfs/xfs_log_recover.c
@@ -2855,6 +2855,16 @@ xlog_recover_buffer_pass2(
if (lsn && lsn != -1 && XFS_LSN_CMP(lsn, current_lsn) >= 0) {
trace_xfs_log_recover_buf_skip(log, buf_f);
xlog_recover_validate_buf_type(mp, bp, buf_f, NULLCOMMITLSN);
+
+ /*
+ * We're skipping replay of this buffer log item due to the log
+ * item LSN being behind the ondisk buffer. Verify the buffer
+ * contents since we aren't going to run the write verifier.
+ */
+ if (bp->b_ops) {
+ bp->b_ops->verify_read(bp);
+ error = bp->b_error;
+ }
goto out_release;
}
--
2.25.1

driver inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I6X8PA
CVE: NA
Reference: NA
---------------------------------
When the kernel is configured with allyesconfig, link errors may
occur during the build.
Signed-off-by: zhoujiadong <zhoujiadong5(a)huawei.com>
Reviewed-by: Wulike (Collin) <wulike1(a)huawei.com>
---
drivers/net/ethernet/huawei/Kconfig | 1 +
drivers/net/ethernet/huawei/Makefile | 1 +
drivers/net/ethernet/huawei/hinic3/Kconfig | 13 +
drivers/net/ethernet/huawei/hinic3/Makefile | 45 +
.../ethernet/huawei/hinic3/cfg_mgt_comm_pub.h | 212 ++
.../ethernet/huawei/hinic3/comm_cmdq_intf.h | 239 ++
.../net/ethernet/huawei/hinic3/comm_defs.h | 105 +
.../ethernet/huawei/hinic3/comm_msg_intf.h | 664 +++++
.../ethernet/huawei/hinic3/hinic3_comm_cmd.h | 185 ++
.../ethernet/huawei/hinic3/hinic3_common.h | 119 +
.../net/ethernet/huawei/hinic3/hinic3_crm.h | 1162 +++++++++
.../net/ethernet/huawei/hinic3/hinic3_dbg.c | 983 ++++++++
.../net/ethernet/huawei/hinic3/hinic3_dcb.c | 405 ++++
.../net/ethernet/huawei/hinic3/hinic3_dcb.h | 78 +
.../ethernet/huawei/hinic3/hinic3_ethtool.c | 1331 ++++++++++
.../huawei/hinic3/hinic3_ethtool_stats.c | 1233 ++++++++++
.../ethernet/huawei/hinic3/hinic3_filter.c | 483 ++++
.../net/ethernet/huawei/hinic3/hinic3_hw.h | 828 +++++++
.../net/ethernet/huawei/hinic3/hinic3_irq.c | 189 ++
.../net/ethernet/huawei/hinic3/hinic3_lld.h | 204 ++
.../ethernet/huawei/hinic3/hinic3_mag_cfg.c | 953 ++++++++
.../net/ethernet/huawei/hinic3/hinic3_main.c | 1125 +++++++++
.../huawei/hinic3/hinic3_mgmt_interface.h | 1252 ++++++++++
.../net/ethernet/huawei/hinic3/hinic3_mt.h | 681 ++++++
.../huawei/hinic3/hinic3_netdev_ops.c | 1975 +++++++++++++++
.../net/ethernet/huawei/hinic3/hinic3_nic.h | 183 ++
.../ethernet/huawei/hinic3/hinic3_nic_cfg.c | 1608 +++++++++++++
.../ethernet/huawei/hinic3/hinic3_nic_cfg.h | 620 +++++
.../huawei/hinic3/hinic3_nic_cfg_vf.c | 637 +++++
.../ethernet/huawei/hinic3/hinic3_nic_cmd.h | 159 ++
.../ethernet/huawei/hinic3/hinic3_nic_dbg.c | 146 ++
.../ethernet/huawei/hinic3/hinic3_nic_dbg.h | 21 +
.../ethernet/huawei/hinic3/hinic3_nic_dev.h | 387 +++
.../ethernet/huawei/hinic3/hinic3_nic_event.c | 580 +++++
.../ethernet/huawei/hinic3/hinic3_nic_io.c | 1122 +++++++++
.../ethernet/huawei/hinic3/hinic3_nic_io.h | 325 +++
.../ethernet/huawei/hinic3/hinic3_nic_prof.c | 47 +
.../ethernet/huawei/hinic3/hinic3_nic_prof.h | 59 +
.../ethernet/huawei/hinic3/hinic3_nic_qp.h | 384 +++
.../ethernet/huawei/hinic3/hinic3_ntuple.c | 907 +++++++
.../ethernet/huawei/hinic3/hinic3_profile.h | 146 ++
.../net/ethernet/huawei/hinic3/hinic3_rss.c | 978 ++++++++
.../net/ethernet/huawei/hinic3/hinic3_rss.h | 100 +
.../ethernet/huawei/hinic3/hinic3_rss_cfg.c | 384 +++
.../net/ethernet/huawei/hinic3/hinic3_rx.c | 1344 +++++++++++
.../net/ethernet/huawei/hinic3/hinic3_rx.h | 155 ++
.../ethernet/huawei/hinic3/hinic3_srv_nic.h | 213 ++
.../net/ethernet/huawei/hinic3/hinic3_tx.c | 1016 ++++++++
.../net/ethernet/huawei/hinic3/hinic3_tx.h | 157 ++
.../net/ethernet/huawei/hinic3/hinic3_wq.h | 130 +
.../huawei/hinic3/hw/hinic3_api_cmd.c | 1211 ++++++++++
.../huawei/hinic3/hw/hinic3_api_cmd.h | 286 +++
.../ethernet/huawei/hinic3/hw/hinic3_cmdq.c | 1543 ++++++++++++
.../ethernet/huawei/hinic3/hw/hinic3_cmdq.h | 204 ++
.../ethernet/huawei/hinic3/hw/hinic3_common.c | 93 +
.../ethernet/huawei/hinic3/hw/hinic3_csr.h | 187 ++
.../huawei/hinic3/hw/hinic3_dev_mgmt.c | 803 +++++++
.../huawei/hinic3/hw/hinic3_dev_mgmt.h | 105 +
.../huawei/hinic3/hw/hinic3_devlink.c | 431 ++++
.../huawei/hinic3/hw/hinic3_devlink.h | 149 ++
.../ethernet/huawei/hinic3/hw/hinic3_eqs.c | 1381 +++++++++++
.../ethernet/huawei/hinic3/hw/hinic3_eqs.h | 164 ++
.../ethernet/huawei/hinic3/hw/hinic3_hw_api.c | 453 ++++
.../ethernet/huawei/hinic3/hw/hinic3_hw_api.h | 141 ++
.../ethernet/huawei/hinic3/hw/hinic3_hw_cfg.c | 1480 ++++++++++++
.../ethernet/huawei/hinic3/hw/hinic3_hw_cfg.h | 332 +++
.../huawei/hinic3/hw/hinic3_hw_comm.c | 1540 ++++++++++++
.../huawei/hinic3/hw/hinic3_hw_comm.h | 51 +
.../ethernet/huawei/hinic3/hw/hinic3_hw_mt.c | 599 +++++
.../ethernet/huawei/hinic3/hw/hinic3_hw_mt.h | 49 +
.../ethernet/huawei/hinic3/hw/hinic3_hwdev.c | 2141 +++++++++++++++++
.../ethernet/huawei/hinic3/hw/hinic3_hwdev.h | 175 ++
.../ethernet/huawei/hinic3/hw/hinic3_hwif.c | 994 ++++++++
.../ethernet/huawei/hinic3/hw/hinic3_hwif.h | 113 +
.../ethernet/huawei/hinic3/hw/hinic3_lld.c | 1410 +++++++++++
.../ethernet/huawei/hinic3/hw/hinic3_mbox.c | 1841 ++++++++++++++
.../ethernet/huawei/hinic3/hw/hinic3_mbox.h | 267 ++
.../ethernet/huawei/hinic3/hw/hinic3_mgmt.c | 1515 ++++++++++++
.../ethernet/huawei/hinic3/hw/hinic3_mgmt.h | 179 ++
.../huawei/hinic3/hw/hinic3_nictool.c | 974 ++++++++
.../huawei/hinic3/hw/hinic3_nictool.h | 35 +
.../huawei/hinic3/hw/hinic3_pci_id_tbl.h | 15 +
.../huawei/hinic3/hw/hinic3_prof_adap.c | 44 +
.../huawei/hinic3/hw/hinic3_prof_adap.h | 109 +
.../ethernet/huawei/hinic3/hw/hinic3_sm_lt.h | 160 ++
.../ethernet/huawei/hinic3/hw/hinic3_sml_lt.c | 160 ++
.../ethernet/huawei/hinic3/hw/hinic3_sriov.c | 267 ++
.../ethernet/huawei/hinic3/hw/hinic3_sriov.h | 35 +
.../net/ethernet/huawei/hinic3/hw/hinic3_wq.c | 159 ++
.../huawei/hinic3/hw/ossl_knl_linux.c | 121 +
drivers/net/ethernet/huawei/hinic3/mag_cmd.h | 886 +++++++
.../ethernet/huawei/hinic3/mgmt_msg_base.h | 27 +
.../net/ethernet/huawei/hinic3/nic_cfg_comm.h | 63 +
drivers/net/ethernet/huawei/hinic3/ossl_knl.h | 36 +
.../ethernet/huawei/hinic3/ossl_knl_linux.h | 284 +++
95 files changed, 49486 insertions(+)
create mode 100644 drivers/net/ethernet/huawei/hinic3/Kconfig
create mode 100644 drivers/net/ethernet/huawei/hinic3/Makefile
create mode 100644 drivers/net/ethernet/huawei/hinic3/cfg_mgt_comm_pub.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/comm_cmdq_intf.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/comm_defs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/comm_msg_intf.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_comm_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_common.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_crm.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_dbg.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_dcb.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_dcb.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_ethtool_stats.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_filter.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_hw.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_irq.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_lld.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_mag_cfg.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_main.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_mt.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg_vf.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_event.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_nic_qp.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_profile.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_rss.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_rss.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_rss_cfg.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_rx.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_rx.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_srv_nic.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_tx.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_tx.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hinic3_wq.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_common.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_csr.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_lld.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_pci_id_tbl.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_sm_lt.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_sml_lt.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/hinic3_wq.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/hw/ossl_knl_linux.c
create mode 100644 drivers/net/ethernet/huawei/hinic3/mag_cmd.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/mgmt_msg_base.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/nic_cfg_comm.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/ossl_knl.h
create mode 100644 drivers/net/ethernet/huawei/hinic3/ossl_knl_linux.h
diff --git a/drivers/net/ethernet/huawei/Kconfig b/drivers/net/ethernet/huawei/Kconfig
index 0afeb5021f17..0df9544dcf74 100644
--- a/drivers/net/ethernet/huawei/Kconfig
+++ b/drivers/net/ethernet/huawei/Kconfig
@@ -16,6 +16,7 @@ config NET_VENDOR_HUAWEI
if NET_VENDOR_HUAWEI
source "drivers/net/ethernet/huawei/hinic/Kconfig"
+source "drivers/net/ethernet/huawei/hinic3/Kconfig"
source "drivers/net/ethernet/huawei/bma/Kconfig"
endif # NET_VENDOR_HUAWEI
diff --git a/drivers/net/ethernet/huawei/Makefile b/drivers/net/ethernet/huawei/Makefile
index f5bf4ae195a3..d88e8fd772e3 100644
--- a/drivers/net/ethernet/huawei/Makefile
+++ b/drivers/net/ethernet/huawei/Makefile
@@ -4,4 +4,5 @@
#
obj-$(CONFIG_HINIC) += hinic/
+obj-$(CONFIG_HINIC3) += hinic3/
obj-$(CONFIG_BMA) += bma/
diff --git a/drivers/net/ethernet/huawei/hinic3/Kconfig b/drivers/net/ethernet/huawei/hinic3/Kconfig
new file mode 100644
index 000000000000..72088646a9bf
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/Kconfig
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Huawei driver configuration
+#
+
+config HINIC3
+ tristate "Huawei Intelligent Network Interface Card 3rd"
+ depends on PCI_MSI && NUMA && PCI_IOV && DCB && (X86 || ARM64)
+ help
+ This driver supports HiNIC PCIE Ethernet cards.
+ To compile this driver as part of the kernel, choose Y here.
+ If unsure, choose N.
+ The default is N.
diff --git a/drivers/net/ethernet/huawei/hinic3/Makefile b/drivers/net/ethernet/huawei/hinic3/Makefile
new file mode 100644
index 000000000000..b17f80ff19b8
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/Makefile
@@ -0,0 +1,45 @@
+# SPDX-License-Identifier: GPL-2.0-only
+ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/
+
+obj-$(CONFIG_HINIC3) += hinic3.o
+hinic3-objs := hw/hinic3_hwdev.o \
+ hw/hinic3_hw_cfg.o \
+ hw/hinic3_hw_comm.o \
+ hw/hinic3_prof_adap.o \
+ hw/hinic3_sriov.o \
+ hw/hinic3_lld.o \
+ hw/hinic3_dev_mgmt.o \
+ hw/hinic3_common.o \
+ hw/hinic3_hwif.o \
+ hw/hinic3_wq.o \
+ hw/hinic3_cmdq.o \
+ hw/hinic3_eqs.o \
+ hw/hinic3_mbox.o \
+ hw/hinic3_mgmt.o \
+ hw/hinic3_api_cmd.o \
+ hw/hinic3_hw_api.o \
+ hw/hinic3_sml_lt.o \
+ hw/hinic3_hw_mt.o \
+ hw/hinic3_nictool.o \
+ hw/hinic3_devlink.o \
+ hw/ossl_knl_linux.o \
+ hinic3_main.o \
+ hinic3_tx.o \
+ hinic3_rx.o \
+ hinic3_rss.o \
+ hinic3_ntuple.o \
+ hinic3_dcb.o \
+ hinic3_ethtool.o \
+ hinic3_ethtool_stats.o \
+ hinic3_dbg.o \
+ hinic3_irq.o \
+ hinic3_filter.o \
+ hinic3_netdev_ops.o \
+ hinic3_nic_prof.o \
+ hinic3_nic_cfg.o \
+ hinic3_mag_cfg.o \
+ hinic3_nic_cfg_vf.o \
+ hinic3_rss_cfg.o \
+ hinic3_nic_event.o \
+ hinic3_nic_io.o \
+ hinic3_nic_dbg.o
\ No newline at end of file
diff --git a/drivers/net/ethernet/huawei/hinic3/cfg_mgt_comm_pub.h b/drivers/net/ethernet/huawei/hinic3/cfg_mgt_comm_pub.h
new file mode 100644
index 000000000000..6d391d0423a9
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/cfg_mgt_comm_pub.h
@@ -0,0 +1,212 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2016-2022. All rights reserved.
+ * File name: Cfg_mgt_comm_pub.h
+ * Version No.: Draft
+ * Generation date: 2016-05-07
+ * Latest modification:
+ * Function description: Header file for communication between the host and FW
+ * Function list:
+ * Modification history:
+ * 1. Date: 2016 May 07
+ * Modify content: Create a file.
+ */
+#ifndef CFG_MGT_COMM_PUB_H
+#define CFG_MGT_COMM_PUB_H
+
+#include "mgmt_msg_base.h"
+
+typedef enum {
+ SERVICE_BIT_NIC = 0,
+ SERVICE_BIT_ROCE = 1,
+ SERVICE_BIT_VBS = 2,
+ SERVICE_BIT_TOE = 3,
+ SERVICE_BIT_IPSEC = 4,
+ SERVICE_BIT_FC = 5,
+ SERVICE_BIT_VIRTIO = 6,
+ SERVICE_BIT_OVS = 7,
+ SERVICE_BIT_NVME = 8,
+ SERVICE_BIT_ROCEAA = 9,
+ SERVICE_BIT_CURRENET = 10,
+ SERVICE_BIT_PPA = 11,
+ SERVICE_BIT_MIGRATE = 12,
+ SERVICE_BIT_MAX
+} servic_bit_define_e;
+
+#define CFG_SERVICE_MASK_NIC (0x1 << SERVICE_BIT_NIC)
+#define CFG_SERVICE_MASK_ROCE (0x1 << SERVICE_BIT_ROCE)
+#define CFG_SERVICE_MASK_VBS (0x1 << SERVICE_BIT_VBS)
+#define CFG_SERVICE_MASK_TOE (0x1 << SERVICE_BIT_TOE)
+#define CFG_SERVICE_MASK_IPSEC (0x1 << SERVICE_BIT_IPSEC)
+#define CFG_SERVICE_MASK_FC (0x1 << SERVICE_BIT_FC)
+#define CFG_SERVICE_MASK_VIRTIO (0x1 << SERVICE_BIT_VIRTIO)
+#define CFG_SERVICE_MASK_OVS (0x1 << SERVICE_BIT_OVS)
+#define CFG_SERVICE_MASK_NVME (0x1 << SERVICE_BIT_NVME)
+#define CFG_SERVICE_MASK_ROCEAA (0x1 << SERVICE_BIT_ROCEAA)
+#define CFG_SERVICE_MASK_CURRENET (0x1 << SERVICE_BIT_CURRENET)
+#define CFG_SERVICE_MASK_PPA (0x1 << SERVICE_BIT_PPA)
+#define CFG_SERVICE_MASK_MIGRATE (0x1 << SERVICE_BIT_MIGRATE)
+
+/* Definition of the scenario ID in the cfg_data, which is used for SML memory allocation. */
+typedef enum {
+ SCENES_ID_FPGA_ETH = 0,
+ SCENES_ID_FPGA_TIOE = 1, /* Discarded */
+ SCENES_ID_STORAGE_ROCEAA_2x100 = 2,
+ SCENES_ID_STORAGE_ROCEAA_4x25 = 3,
+ SCENES_ID_CLOUD = 4,
+ SCENES_ID_FC = 5,
+ SCENES_ID_STORAGE_ROCE = 6,
+ SCENES_ID_COMPUTE_ROCE = 7,
+ SCENES_ID_STORAGE_TOE = 8,
+ SCENES_ID_MAX
+} scenes_id_define_e;
+
+/* struct cfg_cmd_dev_cap.sf_svc_attr */
+enum {
+ SF_SVC_FT_BIT = (1 << 0),
+ SF_SVC_RDMA_BIT = (1 << 1),
+};
+
+enum cfg_cmd {
+ CFG_CMD_GET_DEV_CAP = 0,
+ CFG_CMD_GET_HOST_TIMER = 1,
+};
+
+struct cfg_cmd_host_timer {
+ struct mgmt_msg_head head;
+
+ u8 host_id;
+ u8 rsvd1;
+
+ u8 timer_pf_num;
+ u8 timer_pf_id_start;
+ u16 timer_vf_num;
+ u16 timer_vf_id_start;
+ u32 rsvd2[8];
+};
+
+struct cfg_cmd_dev_cap {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u16 rsvd1;
+
+ /* Public resources */
+ u8 host_id;
+ u8 ep_id;
+ u8 er_id;
+ u8 port_id;
+
+ u16 host_total_func;
+ u8 host_pf_num;
+ u8 pf_id_start;
+ u16 host_vf_num;
+ u16 vf_id_start;
+ u8 host_oq_id_mask_val;
+ u8 timer_en;
+ u8 host_valid_bitmap;
+ u8 rsvd_host;
+
+ u16 svc_cap_en;
+ u16 max_vf;
+ u8 flexq_en;
+ u8 valid_cos_bitmap;
+ /* Reserved for func_valid_cos_bitmap */
+ u8 port_cos_valid_bitmap;
+ u8 rsvd_func1;
+ u32 rsvd_func2;
+
+ u8 sf_svc_attr;
+ u8 func_sf_en;
+ u8 lb_mode;
+ u8 smf_pg;
+
+ u32 max_conn_num;
+ u16 max_stick2cache_num;
+ u16 max_bfilter_start_addr;
+ u16 bfilter_len;
+ u16 hash_bucket_num;
+
+ /* shared resource */
+ u8 host_sf_en;
+ u8 master_host_id;
+ u8 srv_multi_host_mode;
+ u8 virtio_vq_size;
+
+ u32 rsvd_func3[5];
+
+ /* l2nic */
+ u16 nic_max_sq_id;
+ u16 nic_max_rq_id;
+ u16 nic_default_num_queues;
+ u16 rsvd1_nic;
+ u32 rsvd2_nic[2];
+
+ /* RoCE */
+ u32 roce_max_qp;
+ u32 roce_max_cq;
+ u32 roce_max_srq;
+ u32 roce_max_mpt;
+ u32 roce_max_drc_qp;
+
+ u32 roce_cmtt_cl_start;
+ u32 roce_cmtt_cl_end;
+ u32 roce_cmtt_cl_size;
+
+ u32 roce_dmtt_cl_start;
+ u32 roce_dmtt_cl_end;
+ u32 roce_dmtt_cl_size;
+
+ u32 roce_wqe_cl_start;
+ u32 roce_wqe_cl_end;
+ u32 roce_wqe_cl_size;
+ u8 roce_srq_container_mode;
+ u8 rsvd_roce1[3];
+ u32 rsvd_roce2[5];
+
+ /* IPsec */
+ u32 ipsec_max_sactx;
+ u16 ipsec_max_cq;
+ u16 rsvd_ipsec1;
+ u32 rsvd_ipsec[2];
+
+ /* OVS */
+ u32 ovs_max_qpc;
+ u32 rsvd_ovs1[3];
+
+ /* ToE */
+ u32 toe_max_pctx;
+ u32 toe_max_cq;
+ u16 toe_max_srq;
+ u16 toe_srq_id_start;
+ u16 toe_max_mpt;
+ u16 toe_max_cctxt;
+ u32 rsvd_toe[2];
+
+ /* FC */
+ u32 fc_max_pctx;
+ u32 fc_max_scq;
+ u32 fc_max_srq;
+
+ u32 fc_max_cctx;
+ u32 fc_cctx_id_start;
+
+ u8 fc_vp_id_start;
+ u8 fc_vp_id_end;
+ u8 rsvd_fc1[2];
+ u32 rsvd_fc2[5];
+
+ /* VBS */
+ u16 vbs_max_volq;
+ u16 rsvd0_vbs;
+ u32 rsvd1_vbs[3];
+
+ u16 fake_vf_start_id;
+ u16 fake_vf_num;
+ u32 fake_vf_max_pctx;
+ u16 fake_vf_bfilter_start_addr;
+ u16 fake_vf_bfilter_len;
+ u32 rsvd_glb[8];
+};
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/comm_cmdq_intf.h b/drivers/net/ethernet/huawei/hinic3/comm_cmdq_intf.h
new file mode 100644
index 000000000000..6f5f87bc19b7
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/comm_cmdq_intf.h
@@ -0,0 +1,239 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/******************************************************************************
+ * Copyright (c) Huawei Technologies Co., Ltd. 2022. All rights reserved.
+ ******************************************************************************
+ File Name : comm_cmdq_intf.h
+ Version : Initial Draft
+ Description : common command queue interface
+ Function List :
+ History :
+ Modification: Created file
+
+******************************************************************************/
+
+#ifndef COMM_CMDQ_INTF_H
+#define COMM_CMDQ_INTF_H
+
+/* Cmdq ack type */
+enum hinic3_ack_type {
+ HINIC3_ACK_TYPE_CMDQ,
+ HINIC3_ACK_TYPE_SHARE_CQN,
+ HINIC3_ACK_TYPE_APP_CQN,
+
+ HINIC3_MOD_ACK_MAX = 15,
+};
+
+/* Defines the queue type of the set arm bit. */
+enum {
+ SET_ARM_BIT_FOR_CMDQ = 0,
+ SET_ARM_BIT_FOR_L2NIC_SQ,
+ SET_ARM_BIT_FOR_L2NIC_RQ,
+ SET_ARM_BIT_TYPE_NUM
+};
+
+/* Defines the type. Each function supports a maximum of eight CMDQ types. */
+enum {
+ CMDQ_0 = 0,
+ CMDQ_1 = 1, /* dedicated and non-blocking queues */
+ CMDQ_NUM
+};
+
+/* ******************* cmd common command data structure ******************* */
+// Func->ucode data used to set the arm bit; the microcode needs to
+// perform big-endian conversion.
+struct comm_info_ucode_set_arm_bit {
+ u32 q_type;
+ u32 q_id;
+};
+
+/* *******************WQE data structure ************************ */
+union cmdq_wqe_cs_dw0 {
+ struct {
+ u32 err_status : 29;
+ u32 error_code : 2;
+ u32 rsvd : 1;
+ } bs;
+ u32 val;
+};
+
+union cmdq_wqe_cs_dw1 {
+ // This structure is used when the driver writes the wqe.
+ struct {
+ u32 token : 16; // [15:0]
+ u32 cmd : 8; // [23:16]
+ u32 mod : 5; // [28:24]
+ u32 ack_type : 2; // [30:29]
+ u32 obit : 1; // [31]
+ } drv_wr;
+
+ /* The uCode writes back the structure of the CS_DW1.
+ * The driver reads and uses the structure. */
+ struct {
+ u32 mod : 5; // [4:0]
+ u32 ack_type : 3; // [7:5]
+ u32 cmd : 8; // [15:8]
+ u32 arm : 1; // [16]
+ u32 rsvd : 14; // [30:17]
+ u32 obit : 1; // [31]
+ } wb;
+ u32 val;
+};
+
+/* CmdQ BD information or write back buffer information */
+struct cmdq_sge {
+ u32 pa_h; // Upper 32 bits of the physical address
+ u32 pa_l; // Lower 32 bits of the physical address
+ u32 len; // Invalid bit[31].
+ u32 resv;
+};
+
+/* Ctrls section definition of WQE */
+struct cmdq_wqe_ctrls {
+ union {
+ struct {
+ u32 bdsl : 8; // [7:0]
+ u32 drvsl : 2; // [9:8]
+ u32 rsv : 4; // [13:10]
+ u32 wf : 1; // [14]
+ u32 cf : 1; // [15]
+ u32 tsl : 5; // [20:16]
+ u32 va : 1; // [21]
+ u32 df : 1; // [22]
+ u32 cr : 1; // [23]
+ u32 difsl : 3; // [26:24]
+ u32 csl : 2; // [28:27]
+ u32 ctrlsl : 2; // [30:29]
+ u32 obit : 1; // [31]
+ } bs;
+ u32 val;
+ } header;
+ u32 qsf;
+};
+
+/* Complete section definition of WQE */
+struct cmdq_wqe_cs {
+ union cmdq_wqe_cs_dw0 dw0;
+ union cmdq_wqe_cs_dw1 dw1;
+ union {
+ struct cmdq_sge sge;
+ u32 dw2_5[4];
+ } ack;
+};
+
+/* Inline header in WQE inline, describing the length of inline data */
+union cmdq_wqe_inline_header {
+ struct {
+ u32 buf_len : 11; // [10:0] inline data len
+ u32 rsv : 21; // [31:11]
+ } bs;
+ u32 val;
+};
+
+/* Definition of buffer descriptor section in WQE */
+union cmdq_wqe_bds {
+ struct {
+ struct cmdq_sge bds_sge;
+ u32 rsvd[4]; /* used to transfer the virtual address of the buffer */
+ } lcmd; /* Long command, non-inline; the SGE describes the buffer. */
+};
+
+/* Definition of CMDQ WQE */
+/* (long cmd, 64B)
+ * +----------------------------------------+
+ * | ctrl section(8B) |
+ * +----------------------------------------+
+ * | |
+ * | complete section(24B) |
+ * | |
+ * +----------------------------------------+
+ * | |
+ * | buffer descriptor section(16B) |
+ * | |
+ * +----------------------------------------+
+ * | driver section(16B) |
+ * +----------------------------------------+
+ *
+ *
+ * (middle cmd, 128B)
+ * +----------------------------------------+
+ * | ctrl section(8B) |
+ * +----------------------------------------+
+ * | |
+ * | complete section(24B) |
+ * | |
+ * +----------------------------------------+
+ * | |
+ * | buffer descriptor section(88B) |
+ * | |
+ * +----------------------------------------+
+ * | driver section(8B) |
+ * +----------------------------------------+
+ *
+ *
+ * (short cmd, 64B)
+ * +----------------------------------------+
+ * | ctrl section(8B) |
+ * +----------------------------------------+
+ * | |
+ * | complete section(24B) |
+ * | |
+ * +----------------------------------------+
+ * | |
+ * | buffer descriptor section(24B) |
+ * | |
+ * +----------------------------------------+
+ * | driver section(8B) |
+ * +----------------------------------------+
+ */
+struct cmdq_wqe {
+ struct cmdq_wqe_ctrls ctrls;
+ struct cmdq_wqe_cs cs;
+ union cmdq_wqe_bds bds;
+};
+
+/* Definition of ctrls section in inline WQE */
+struct cmdq_wqe_ctrls_inline {
+ union {
+ struct {
+ u32 bdsl : 8; // [7:0]
+ u32 drvsl : 2; // [9:8]
+ u32 rsv : 4; // [13:10]
+ u32 wf : 1; // [14]
+ u32 cf : 1; // [15]
+ u32 tsl : 5; // [20:16]
+ u32 va : 1; // [21]
+ u32 df : 1; // [22]
+ u32 cr : 1; // [23]
+ u32 difsl : 3; // [26:24]
+ u32 csl : 2; // [28:27]
+ u32 ctrlsl : 2; // [30:29]
+ u32 obit : 1; // [31]
+ } bs;
+ u32 val;
+ } header;
+ u32 qsf;
+ u64 db;
+};
+
+/* Buffer descriptor section definition of WQE */
+union cmdq_wqe_bds_inline {
+ struct {
+ union cmdq_wqe_inline_header header;
+ u32 rsvd;
+ u8 data_inline[80];
+ } mcmd; /* Middle command, inline mode */
+
+ struct {
+ union cmdq_wqe_inline_header header;
+ u32 rsvd;
+ u8 data_inline[16];
+ } scmd; /* Short command, inline mode */
+};
+
+struct cmdq_wqe_inline {
+ struct cmdq_wqe_ctrls_inline ctrls;
+ struct cmdq_wqe_cs cs;
+ union cmdq_wqe_bds_inline bds;
+};
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/comm_defs.h b/drivers/net/ethernet/huawei/hinic3/comm_defs.h
new file mode 100644
index 000000000000..70697a64b44e
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/comm_defs.h
@@ -0,0 +1,105 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2021-2022. All rights reserved.
+ * File Name : comm_defs.h
+ * Version : Initial Draft
+ * Description : common definitions
+ * Function List :
+ * History :
+ * Modification: Created file
+ */
+
+#ifndef COMM_DEFS_H
+#define COMM_DEFS_H
+
+/* CMDQ MODULE_TYPE */
+typedef enum hinic3_mod_type {
+ HINIC3_MOD_COMM = 0, /* HW communication module */
+ HINIC3_MOD_L2NIC = 1, /* L2NIC module */
+ HINIC3_MOD_ROCE = 2,
+ HINIC3_MOD_PLOG = 3,
+ HINIC3_MOD_TOE = 4,
+ HINIC3_MOD_FLR = 5,
+ HINIC3_MOD_RSVD1 = 6,
+ HINIC3_MOD_CFGM = 7, /* Configuration module */
+ HINIC3_MOD_CQM = 8,
+ HINIC3_MOD_RSVD2 = 9,
+ COMM_MOD_FC = 10,
+ HINIC3_MOD_OVS = 11,
+ HINIC3_MOD_DSW = 12,
+ HINIC3_MOD_MIGRATE = 13,
+ HINIC3_MOD_HILINK = 14,
+ HINIC3_MOD_CRYPT = 15, /* secure crypto module */
+ HINIC3_MOD_VIO = 16,
+ HINIC3_MOD_IMU = 17,
+ HINIC3_MOD_DFT = 18, /* DFT */
+ HINIC3_MOD_HW_MAX = 19, /* hardware max module id */
+ /* Software module id, for PF/VF and multi-host */
+ HINIC3_MOD_SW_FUNC = 20,
+ HINIC3_MOD_MAX,
+} hinic3_mod_type_e;
+
+/* func reset flags, indicating which resources to clean up */
+typedef enum {
+ RES_TYPE_FLUSH_BIT = 0,
+ RES_TYPE_MQM,
+ RES_TYPE_SMF,
+ RES_TYPE_PF_BW_CFG,
+
+ RES_TYPE_COMM = 10,
+ RES_TYPE_COMM_MGMT_CH, /* clear mbox and aeq, The RES_TYPE_COMM bit must be set */
+ RES_TYPE_COMM_CMD_CH, /* clear cmdq and ceq, The RES_TYPE_COMM bit must be set */
+ RES_TYPE_NIC,
+ RES_TYPE_OVS,
+ RES_TYPE_VBS,
+ RES_TYPE_ROCE,
+ RES_TYPE_FC,
+ RES_TYPE_TOE,
+ RES_TYPE_IPSEC,
+ RES_TYPE_MAX,
+} func_reset_flag_e;
+
+#define HINIC3_COMM_RES \
+ ((1 << RES_TYPE_COMM) | (1 << RES_TYPE_COMM_CMD_CH) | \
+ (1 << RES_TYPE_FLUSH_BIT) | (1 << RES_TYPE_MQM) | \
+ (1 << RES_TYPE_SMF) | (1 << RES_TYPE_PF_BW_CFG))
+
+#define HINIC3_NIC_RES (1 << RES_TYPE_NIC)
+#define HINIC3_OVS_RES (1 << RES_TYPE_OVS)
+#define HINIC3_VBS_RES (1 << RES_TYPE_VBS)
+#define HINIC3_ROCE_RES (1 << RES_TYPE_ROCE)
+#define HINIC3_FC_RES (1 << RES_TYPE_FC)
+#define HINIC3_TOE_RES (1 << RES_TYPE_TOE)
+#define HINIC3_IPSEC_RES (1 << RES_TYPE_IPSEC)
+
+/* work modes: OVS, NIC, UNKNOWN */
+#define HINIC3_WORK_MODE_OVS 0
+#define HINIC3_WORK_MODE_UNKNOWN 1
+#define HINIC3_WORK_MODE_NIC 2
+
+#define DEVICE_TYPE_L2NIC 0
+#define DEVICE_TYPE_NVME 1
+#define DEVICE_TYPE_VIRTIO_NET 2
+#define DEVICE_TYPE_VIRTIO_BLK 3
+#define DEVICE_TYPE_VIRTIO_VSOCK 4
+#define DEVICE_TYPE_VIRTIO_NET_TRANSITION 5
+#define DEVICE_TYPE_VIRTIO_BLK_TRANSITION 6
+#define DEVICE_TYPE_VIRTIO_SCSI_TRANSITION 7
+#define DEVICE_TYPE_VIRTIO_HPC 8
+
+#define IS_STORAGE_DEVICE_TYPE(dev_type) \
+ ((dev_type) == DEVICE_TYPE_VIRTIO_BLK || \
+ (dev_type) == DEVICE_TYPE_VIRTIO_BLK_TRANSITION || \
+ (dev_type) == DEVICE_TYPE_VIRTIO_SCSI_TRANSITION)
+
+/* Common header control information of the COMM message
+ * interaction command word between the driver and PF
+ */
+struct comm_info_head {
+ u8 status;
+ u8 version;
+ u8 rep_aeq_num;
+ u8 rsvd[5];
+};
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/comm_msg_intf.h b/drivers/net/ethernet/huawei/hinic3/comm_msg_intf.h
new file mode 100644
index 000000000000..eb11d39ba66c
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/comm_msg_intf.h
@@ -0,0 +1,664 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2021-2022. All rights reserved.
+ * File Name : comm_msg_intf.h
+ * Version : Initial Draft
+ * Created : 2021/6/28
+ * Last Modified :
+ * Description : COMM Command interfaces between Driver and MPU
+ * Function List :
+ */
+
+#ifndef COMM_MSG_INTF_H
+#define COMM_MSG_INTF_H
+
+#include "comm_defs.h"
+#include "mgmt_msg_base.h"
+
+/* upper bound of func_reset_flag */
+#define FUNC_RESET_FLAG_MAX_VALUE ((1U << (RES_TYPE_MAX + 1)) - 1)
+struct comm_cmd_func_reset {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u16 rsvd1[3];
+ u64 reset_flag;
+};
+
+struct comm_cmd_ppf_flr_type_set {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 rsvd1[2];
+ u32 ppf_flr_type;
+};
+
+enum {
+ COMM_F_API_CHAIN = 1U << 0,
+ COMM_F_CLP = 1U << 1,
+ COMM_F_CHANNEL_DETECT = 1U << 2,
+ COMM_F_MBOX_SEGMENT = 1U << 3,
+ COMM_F_CMDQ_NUM = 1U << 4,
+ COMM_F_VIRTIO_VQ_SIZE = 1U << 5,
+};
+
+#define COMM_MAX_FEATURE_QWORD 4
+struct comm_cmd_feature_nego {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 opcode; /* 1: set, 0: get */
+ u8 rsvd;
+ u64 s_feature[COMM_MAX_FEATURE_QWORD];
+};
+
+struct comm_cmd_clear_doorbell {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u16 rsvd1[3];
+};
+
+struct comm_cmd_clear_resource {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u16 rsvd1[3];
+};
+
+struct comm_global_attr {
+ u8 max_host_num;
+ u8 max_pf_num;
+ u16 vf_id_start;
+
+ u8 mgmt_host_node_id; /* for api cmd to mgmt cpu */
+ u8 cmdq_num;
+ u8 rsvd1[2];
+
+ u32 rsvd2[8];
+};
+
+typedef struct {
+ struct comm_info_head head;
+
+ u8 op_code; /* 0: get 1: set 2: check */
+ u8 rsvd[3];
+ u32 freq;
+} spu_cmd_freq_operation;
+
+typedef struct {
+ struct comm_info_head head;
+
+ u8 op_code; /* 0: get 1: set 2: init */
+ u8 slave_addr;
+ u8 cmd_id;
+ u8 size;
+ u32 value;
+} spu_cmd_power_operation;
+
+typedef struct {
+ struct comm_info_head head;
+
+ u8 op_code;
+ u8 rsvd[3];
+ s16 fabric_tsensor_temp_avg;
+ s16 fabric_tsensor_temp;
+ s16 sys_tsensor_temp_avg;
+ s16 sys_tsensor_temp;
+} spu_cmd_tsensor_operation;
+
+struct comm_cmd_heart_event {
+ struct mgmt_msg_head head;
+
+ u8 init_sta; /* 0: mpu init ok, 1: mpu init error. */
+ u8 rsvd1[3];
+ u32 heart; /* incremented by one each time */
+ u32 heart_handshake; /* should always be: 0x5A5A5A5A */
+};
+
+struct comm_cmd_channel_detect {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u16 rsvd1[3];
+ u32 rsvd2[2];
+};
+
+enum hinic3_svc_type {
+ SVC_T_COMM = 0,
+ SVC_T_NIC,
+ SVC_T_OVS,
+ SVC_T_ROCE,
+ SVC_T_TOE,
+ SVC_T_IOE,
+ SVC_T_FC,
+ SVC_T_VBS,
+ SVC_T_IPSEC,
+ SVC_T_VIRTIO,
+ SVC_T_MIGRATE,
+ SVC_T_PPA,
+ SVC_T_MAX,
+};
+
+struct comm_cmd_func_svc_used_state {
+ struct mgmt_msg_head head;
+ u16 func_id;
+ u16 svc_type;
+ u8 used_state;
+ u8 rsvd[35];
+};
+
+#define TABLE_INDEX_MAX 129
+
+struct sml_table_id_info {
+ u8 node_id;
+ u8 instance_id;
+};
+
+struct comm_cmd_get_sml_tbl_data {
+ struct comm_info_head head; /* 8B */
+ u8 tbl_data[512];
+};
+
+struct comm_cmd_get_glb_attr {
+ struct mgmt_msg_head head;
+
+ struct comm_global_attr attr;
+};
+
+enum hinic3_fw_ver_type {
+ HINIC3_FW_VER_TYPE_BOOT,
+ HINIC3_FW_VER_TYPE_MPU,
+ HINIC3_FW_VER_TYPE_NPU,
+ HINIC3_FW_VER_TYPE_SMU_L0,
+ HINIC3_FW_VER_TYPE_SMU_L1,
+ HINIC3_FW_VER_TYPE_CFG,
+};
+
+#define HINIC3_FW_VERSION_LEN 16
+#define HINIC3_FW_COMPILE_TIME_LEN 20
+struct comm_cmd_get_fw_version {
+ struct mgmt_msg_head head;
+
+ u16 fw_type;
+ u16 rsvd1;
+ u8 ver[HINIC3_FW_VERSION_LEN];
+ u8 time[HINIC3_FW_COMPILE_TIME_LEN];
+};
+
+/* hardware define: cmdq context */
+struct cmdq_ctxt_info {
+ u64 curr_wqe_page_pfn;
+ u64 wq_block_pfn;
+};
+
+struct comm_cmd_cmdq_ctxt {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 cmdq_id;
+ u8 rsvd1[5];
+
+ struct cmdq_ctxt_info ctxt;
+};
+
+struct comm_cmd_root_ctxt {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 set_cmdq_depth;
+ u8 cmdq_depth;
+ u16 rx_buf_sz;
+ u8 lro_en;
+ u8 rsvd1;
+ u16 sq_depth;
+ u16 rq_depth;
+ u64 rsvd2;
+};
+
+struct comm_cmd_wq_page_size {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 opcode;
+ /* real_size=4KB*2^page_size, range(0~20) must be checked by driver */
+ u8 page_size;
+
+ u32 rsvd1;
+};
+
+struct comm_cmd_msix_config {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u16 msix_index;
+ u8 pending_cnt;
+ u8 coalesce_timer_cnt;
+ u8 resend_timer_cnt;
+ u8 lli_timer_cnt;
+ u8 lli_credit_cnt;
+ u8 rsvd2[5];
+};
+
+enum cfg_msix_operation {
+ CFG_MSIX_OPERATION_FREE = 0,
+ CFG_MSIX_OPERATION_ALLOC = 1,
+};
+
+struct comm_cmd_cfg_msix_num {
+ struct comm_info_head head; /* 8B */
+
+ u16 func_id;
+ u8 op_code; /* 1: alloc 0: free */
+ u8 rsvd0;
+
+ u16 msix_num;
+ u16 rsvd1;
+};
+
+struct comm_cmd_dma_attr_config {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 entry_idx;
+ u8 st;
+ u8 at;
+ u8 ph;
+ u8 no_snooping;
+ u8 tph_en;
+ u32 resv1;
+};
+
+struct comm_cmd_ceq_ctrl_reg {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u16 q_id;
+ u32 ctrl0;
+ u32 ctrl1;
+ u32 rsvd1;
+};
+
+struct comm_cmd_func_tmr_bitmap_op {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 opcode; /* 1: start, 0: stop */
+ u8 rsvd1[5];
+};
+
+struct comm_cmd_ppf_tmr_op {
+ struct mgmt_msg_head head;
+
+ u8 ppf_id;
+ u8 opcode; /* 1: start, 0: stop */
+ u8 rsvd1[6];
+};
+
+struct comm_cmd_ht_gpa {
+ struct mgmt_msg_head head;
+
+ u8 host_id;
+ u8 rsvd0[3];
+ u32 rsvd1[7];
+ u64 page_pa0;
+ u64 page_pa1;
+};
+
+struct comm_cmd_get_eqm_num {
+ struct mgmt_msg_head head;
+
+ u8 host_id;
+ u8 rsvd1[3];
+ u32 chunk_num;
+ u32 search_gpa_num;
+};
+
+struct comm_cmd_eqm_cfg {
+ struct mgmt_msg_head head;
+
+ u8 host_id;
+ u8 valid;
+ u16 rsvd1;
+ u32 page_size;
+ u32 rsvd2;
+};
+
+struct comm_cmd_eqm_search_gpa {
+ struct mgmt_msg_head head;
+
+ u8 host_id;
+ u8 rsvd1[3];
+ u32 start_idx;
+ u32 num;
+ u32 rsvd2;
+ u64 gpa_hi52[0]; /*lint !e1501*/
+};
+
+struct comm_cmd_ffm_info {
+ struct mgmt_msg_head head;
+
+ u8 node_id;
+ /* error level of the interrupt source */
+ u8 err_level;
+ /* Classification by interrupt source properties */
+ u16 err_type;
+ u32 err_csr_addr;
+ u32 err_csr_value;
+ u32 rsvd1;
+};
+
+#define HARDWARE_ID_1XX3V100_TAG 31 /* 1xx3v100 tag */
+
+struct hinic3_board_info {
+ u8 board_type;
+ u8 port_num;
+ u8 port_speed;
+ u8 pcie_width;
+ u8 host_num;
+ u8 pf_num;
+ u16 vf_total_num;
+ u8 tile_num;
+ u8 qcm_num;
+ u8 core_num;
+ u8 work_mode;
+ u8 service_mode;
+ u8 pcie_mode;
+ u8 boot_sel;
+ u8 board_id;
+ u32 cfg_addr;
+ u32 service_en_bitmap;
+ u8 scenes_id;
+ u8 cfg_template_id;
+ u8 hardware_id;
+ u8 spu_en;
+ u16 pf_vendor_id;
+ u8 tile_bitmap;
+ u8 sm_bitmap;
+};
+
+struct comm_cmd_board_info {
+ struct mgmt_msg_head head;
+
+ struct hinic3_board_info info;
+ u32 rsvd[22];
+};
+
+struct comm_cmd_sync_time {
+ struct mgmt_msg_head head;
+
+ u64 mstime;
+ u64 rsvd1;
+};
+
+struct comm_cmd_sdi_info {
+ struct mgmt_msg_head head;
+ u32 cfg_sdi_mode;
+};
+
+/* func flr set */
+struct comm_cmd_func_flr_set {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 type; /* 1: close, set flush */
+ u8 isall; /* whether to operate on all VFs under this PF; 1: all VFs */
+ u32 rsvd;
+};
+
+struct comm_cmd_bdf_info {
+ struct mgmt_msg_head head;
+
+ u16 function_idx;
+ u8 rsvd1[2];
+ u8 bus;
+ u8 device;
+ u8 function;
+ u8 rsvd2[5];
+};
+
+struct hw_pf_info {
+ u16 glb_func_idx;
+ u16 glb_pf_vf_offset;
+ u8 p2p_idx;
+ u8 itf_idx;
+ u16 max_vfs;
+ u16 max_queue_num;
+ u16 vf_max_queue_num;
+ u16 port_id;
+ u16 rsvd0;
+ u32 pf_service_en_bitmap;
+ u32 vf_service_en_bitmap;
+ u16 rsvd1[2];
+
+ u8 device_type;
+ u8 bus_num; /* tl_cfg_bus_num */
+ u16 vf_stride; /* VF_RID_SETTING.vf_stride */
+ u16 vf_offset; /* VF_RID_SETTING.vf_offset */
+ u8 rsvd[2];
+};
+
+#define CMD_MAX_MAX_PF_NUM 32
+struct hinic3_hw_pf_infos {
+ u8 num_pfs;
+ u8 rsvd1[3];
+
+ struct hw_pf_info infos[CMD_MAX_MAX_PF_NUM];
+};
+
+struct comm_cmd_hw_pf_infos {
+ struct mgmt_msg_head head;
+
+ struct hinic3_hw_pf_infos infos;
+};
+
+#define DD_CFG_TEMPLATE_MAX_IDX 12
+#define DD_CFG_TEMPLATE_MAX_TXT_LEN 64
+#define CFG_TEMPLATE_OP_QUERY 0
+#define CFG_TEMPLATE_OP_SET 1
+#define CFG_TEMPLATE_SET_MODE_BY_IDX 0
+#define CFG_TEMPLATE_SET_MODE_BY_NAME 1
+
+struct comm_cmd_cfg_template {
+ struct mgmt_msg_head head;
+ u8 opt_type; /* 0: query 1: set */
+ u8 set_mode; /* 0-index mode. 1-name mode. */
+ u8 tp_err;
+ u8 rsvd0;
+
+ u8 cur_index; /* Current cfg template index. */
+ u8 cur_max_index; /* Maximum supported cfg template index. */
+ u8 rsvd1[2];
+ u8 cur_name[DD_CFG_TEMPLATE_MAX_TXT_LEN];
+ u8 cur_cfg_temp_info[DD_CFG_TEMPLATE_MAX_IDX][DD_CFG_TEMPLATE_MAX_TXT_LEN];
+
+ u8 next_index; /* Next reset cfg template index. */
+ u8 next_max_index; /* Maximum supported cfg template index. */
+ u8 rsvd2[2];
+ u8 next_name[DD_CFG_TEMPLATE_MAX_TXT_LEN];
+ u8 next_cfg_temp_info[DD_CFG_TEMPLATE_MAX_IDX][DD_CFG_TEMPLATE_MAX_TXT_LEN];
+};
+
+#define MQM_SUPPORT_COS_NUM 8
+#define MQM_INVALID_WEIGHT 256
+#define MQM_LIMIT_SET_FLAG_READ 0
+#define MQM_LIMIT_SET_FLAG_WRITE 1
+struct comm_cmd_set_mqm_limit {
+ struct mgmt_msg_head head;
+
+ u16 set_flag; /* setting this flag means a set operation */
+ u16 func_id;
+ /* weight of the corresponding cos_id, 0-255; 0 means SP scheduling. */
+ u16 cos_weight[MQM_SUPPORT_COS_NUM];
+ u32 host_min_rate; /* minimum rate limit supported by this host */
+ u32 func_min_rate; /* minimum rate limit of this function, in Mbps */
+ u32 func_max_rate; /* maximum rate limit of this function, in Mbps */
+ u8 rsvd[64]; /* Reserved */
+};
+
+#define DUMP_16B_PER_LINE 16
+#define DUMP_8_VAR_PER_LINE 8
+#define DUMP_4_VAR_PER_LINE 4
+
+#define DATA_LEN_1K 1024
+/* software watchdog timeout report interface */
+struct comm_info_sw_watchdog {
+ struct comm_info_head head;
+
+ /* global information */
+ u32 curr_time_h; /* time when the dead loop occurred, cycles (high 32 bits) */
+ u32 curr_time_l; /* time when the dead loop occurred, cycles (low 32 bits) */
+ u32 task_id; /* task in which the dead loop occurred */
+ u32 rsv; /* reserved field, for extension */
+
+ /* register info, TSK_CONTEXT_S */
+ u64 pc;
+
+ u64 elr;
+ u64 spsr;
+ u64 far;
+ u64 esr;
+ u64 xzr;
+ u64 x30;
+ u64 x29;
+ u64 x28;
+ u64 x27;
+ u64 x26;
+ u64 x25;
+ u64 x24;
+ u64 x23;
+ u64 x22;
+ u64 x21;
+ u64 x20;
+ u64 x19;
+ u64 x18;
+ u64 x17;
+ u64 x16;
+ u64 x15;
+ u64 x14;
+ u64 x13;
+ u64 x12;
+ u64 x11;
+ u64 x10;
+ u64 x09;
+ u64 x08;
+ u64 x07;
+ u64 x06;
+ u64 x05;
+ u64 x04;
+ u64 x03;
+ u64 x02;
+ u64 x01;
+ u64 x00;
+
+ /* stack control info, STACK_INFO_S */
+ u64 stack_top; /* stack top */
+ u64 stack_bottom; /* stack bottom */
+ u64 sp; /* current SP value of the stack */
+ u32 curr_used; /* current stack usage */
+ u32 peak_used; /* peak stack usage */
+ u32 is_overflow; /* whether the stack overflowed */
+
+ /* stack contents */
+ u32 stack_actlen; /* actual stack length (<= 1024) */
+ u8 stack_data[DATA_LEN_1K]; /* data beyond 1024 bytes is truncated */
+};
+
+/* last-word (dying message) info */
+#define XREGS_NUM 31
+typedef struct tag_cpu_tick {
+ u32 cnt_hi; /* *< high 32 bits of the cycle count */
+ u32 cnt_lo; /* *< low 32 bits of the cycle count */
+} CPU_TICK;
+
+typedef struct tag_ax_exc_reg_info {
+ u64 ttbr0;
+ u64 ttbr1;
+ u64 tcr;
+ u64 mair;
+ u64 sctlr;
+ u64 vbar;
+ u64 current_el;
+ u64 sp;
+ /* the memory layout of the following fields matches TskContext */
+ u64 elr; /* return address */
+ u64 spsr;
+ u64 far_r;
+ u64 esr;
+ u64 xzr;
+ u64 xregs[XREGS_NUM]; /* 0~30: x30~x0 */
+} EXC_REGS_S;
+
+typedef struct tag_exc_info {
+ char os_ver[48]; /* *< OS version */
+ char app_ver[64]; /* *< product version */
+ u32 exc_cause; /* *< exception cause */
+ u32 thread_type; /* *< thread type before the exception */
+ u32 thread_id; /* *< thread PID before the exception */
+ u16 byte_order; /* *< byte order */
+ u16 cpu_type; /* *< CPU type */
+ u32 cpu_id; /* *< CPU ID */
+ CPU_TICK cpu_tick; /* *< CPU tick */
+ u32 nest_cnt; /* *< exception nesting count */
+ u32 fatal_errno; /* *< fatal error code, valid on a fatal error */
+ u64 uw_sp; /* *< stack pointer before the exception */
+ u64 stack_bottom; /* *< stack bottom before the exception */
+ /* in-core register context when the exception occurred; 82\57 must
+ * sit at byte offset 152. If this changes, update the
+ * OS_EXC_REGINFO_OFFSET macro in sre_platform.eh
+ */
+ EXC_REGS_S reg_info;
+} EXC_INFO_S;
+
+/* up lastword module interface reported to the driver */
+#define MPU_LASTWORD_SIZE 1024
+typedef struct tag_comm_info_up_lastword {
+ struct comm_info_head head;
+
+ EXC_INFO_S stack_info;
+
+ /* stack contents */
+ u32 stack_actlen; /* actual stack length (<= 1024) */
+ u8 stack_data[MPU_LASTWORD_SIZE]; /* data beyond 1024 bytes is truncated */
+} comm_info_up_lastword_s;
+
+#define FW_UPDATE_MGMT_TIMEOUT 3000000U
+
+struct hinic3_cmd_update_firmware {
+ struct mgmt_msg_head msg_head;
+
+ struct {
+ u32 sl : 1;
+ u32 sf : 1;
+ u32 flag : 1;
+ u32 bit_signed : 1;
+ u32 reserved : 12;
+ u32 fragment_len : 16;
+ } ctl_info;
+
+ struct {
+ u32 section_crc;
+ u32 section_type;
+ } section_info;
+
+ u32 total_len;
+ u32 section_len;
+ u32 section_version;
+ u32 section_offset;
+ u32 data[384];
+};
+
+struct hinic3_cmd_activate_firmware {
+ struct mgmt_msg_head msg_head;
+ u8 index; /* 0 ~ 7 */
+ u8 data[7];
+};
+
+struct hinic3_cmd_switch_config {
+ struct mgmt_msg_head msg_head;
+ u8 index; /* 0 ~ 7 */
+ u8 data[7];
+};
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_comm_cmd.h b/drivers/net/ethernet/huawei/hinic3/hinic3_comm_cmd.h
new file mode 100644
index 000000000000..ad732c337520
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_comm_cmd.h
@@ -0,0 +1,185 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2019-2022. All rights reserved.
+ * File Name : hinic3_comm_cmd.h
+ * Version : Initial Draft
+ * Created : 2019/4/25
+ * Last Modified :
+ * Description : COMM Commands between Driver and MPU
+ * Function List :
+ */
+
+#ifndef HINIC3_COMMON_CMD_H
+#define HINIC3_COMMON_CMD_H
+
+/* COMM commands between driver and MPU */
+enum hinic3_mgmt_cmd {
+ /* FLR and resource cleanup commands */
+ COMM_MGMT_CMD_FUNC_RESET = 0,
+ COMM_MGMT_CMD_FEATURE_NEGO,
+ COMM_MGMT_CMD_FLUSH_DOORBELL,
+ COMM_MGMT_CMD_START_FLUSH,
+ COMM_MGMT_CMD_SET_FUNC_FLR,
+ COMM_MGMT_CMD_GET_GLOBAL_ATTR,
+ COMM_MGMT_CMD_SET_PPF_FLR_TYPE,
+ COMM_MGMT_CMD_SET_FUNC_SVC_USED_STATE,
+
+ /* allocate MSI-X interrupt resources */
+ COMM_MGMT_CMD_CFG_MSIX_NUM = 10,
+
+ /* driver-related configuration commands */
+ COMM_MGMT_CMD_SET_CMDQ_CTXT = 20,
+ COMM_MGMT_CMD_SET_VAT,
+ COMM_MGMT_CMD_CFG_PAGESIZE,
+ COMM_MGMT_CMD_CFG_MSIX_CTRL_REG,
+ COMM_MGMT_CMD_SET_CEQ_CTRL_REG,
+ COMM_MGMT_CMD_SET_DMA_ATTR,
+
+ /* INFRA configuration commands */
+ COMM_MGMT_CMD_GET_MQM_FIX_INFO = 40,
+ COMM_MGMT_CMD_SET_MQM_CFG_INFO,
+ COMM_MGMT_CMD_SET_MQM_SRCH_GPA,
+ COMM_MGMT_CMD_SET_PPF_TMR,
+ COMM_MGMT_CMD_SET_PPF_HT_GPA,
+ COMM_MGMT_CMD_SET_FUNC_TMR_BITMAT,
+ COMM_MGMT_CMD_SET_MBX_CRDT,
+ COMM_MGMT_CMD_CFG_TEMPLATE,
+ COMM_MGMT_CMD_SET_MQM_LIMIT,
+
+ /* information query commands */
+ COMM_MGMT_CMD_GET_FW_VERSION = 60,
+ COMM_MGMT_CMD_GET_BOARD_INFO,
+ COMM_MGMT_CMD_SYNC_TIME,
+ COMM_MGMT_CMD_GET_HW_PF_INFOS,
+ COMM_MGMT_CMD_SEND_BDF_INFO,
+ COMM_MGMT_CMD_GET_VIRTIO_BDF_INFO,
+ COMM_MGMT_CMD_GET_SML_TABLE_INFO,
+ COMM_MGMT_CMD_GET_SDI_INFO,
+
+ /* firmware upgrade commands */
+ COMM_MGMT_CMD_UPDATE_FW = 80,
+ COMM_MGMT_CMD_ACTIVE_FW,
+ COMM_MGMT_CMD_HOT_ACTIVE_FW,
+ COMM_MGMT_CMD_HOT_ACTIVE_DONE_NOTICE,
+ COMM_MGMT_CMD_SWITCH_CFG,
+ COMM_MGMT_CMD_CHECK_FLASH,
+ COMM_MGMT_CMD_CHECK_FLASH_RW,
+ COMM_MGMT_CMD_RESOURCE_CFG,
+ COMM_MGMT_CMD_UPDATE_BIOS, /* TODO: merge to COMM_MGMT_CMD_UPDATE_FW */
+ COMM_MGMT_CMD_MPU_GIT_CODE,
+
+ /* chip reset related */
+ COMM_MGMT_CMD_FAULT_REPORT = 100,
+ COMM_MGMT_CMD_WATCHDOG_INFO,
+ COMM_MGMT_CMD_MGMT_RESET,
+ COMM_MGMT_CMD_FFM_SET, /* TODO: check if needed */
+
+ /* chip info/log related */
+ COMM_MGMT_CMD_GET_LOG = 120,
+ COMM_MGMT_CMD_TEMP_OP,
+ COMM_MGMT_CMD_EN_AUTO_RST_CHIP,
+ COMM_MGMT_CMD_CFG_REG,
+ COMM_MGMT_CMD_GET_CHIP_ID,
+ COMM_MGMT_CMD_SYSINFO_DFX,
+ COMM_MGMT_CMD_PCIE_DFX_NTC,
+ COMM_MGMT_CMD_DICT_LOG_STATUS, /* LOG STATUS 127 */
+ COMM_MGMT_CMD_MSIX_INFO,
+ COMM_MGMT_CMD_CHANNEL_DETECT,
+ COMM_MGMT_CMD_DICT_COUNTER_STATUS,
+
+ /* switch workmode related */
+ COMM_MGMT_CMD_CHECK_IF_SWITCH_WORKMODE = 140,
+ COMM_MGMT_CMD_SWITCH_WORKMODE,
+
+ /* MPU related */
+ COMM_MGMT_CMD_MIGRATE_DFX_HPA = 150,
+ COMM_MGMT_CMD_BDF_INFO,
+ COMM_MGMT_CMD_NCSI_CFG_INFO_GET_PROC,
+
+ /* rsvd0 section */
+ COMM_MGMT_CMD_SECTION_RSVD_0 = 160,
+
+ /* rsvd1 section */
+ COMM_MGMT_CMD_SECTION_RSVD_1 = 170,
+
+ /* rsvd2 section */
+ COMM_MGMT_CMD_SECTION_RSVD_2 = 180,
+
+ /* rsvd3 section */
+ COMM_MGMT_CMD_SECTION_RSVD_3 = 190,
+
+ /* TODO: move to DFT mode */
+ COMM_MGMT_CMD_GET_DIE_ID = 200,
+ COMM_MGMT_CMD_GET_EFUSE_TEST,
+ COMM_MGMT_CMD_EFUSE_INFO_CFG,
+ COMM_MGMT_CMD_GPIO_CTL,
+ COMM_MGMT_CMD_HI30_SERLOOP_START, /* TODO: DFT or hilink */
+ COMM_MGMT_CMD_HI30_SERLOOP_STOP, /* TODO: DFT or hilink */
+ COMM_MGMT_CMD_HI30_MBIST_SET_FLAG, /* TODO: DFT or hilink */
+ COMM_MGMT_CMD_HI30_MBIST_GET_RESULT, /* TODO: DFT or hilink */
+ COMM_MGMT_CMD_ECC_TEST,
+ COMM_MGMT_CMD_FUNC_BIST_TEST, /* 209 */
+
+ COMM_MGMT_CMD_VPD_SET = 210,
+ COMM_MGMT_CMD_VPD_GET,
+
+ COMM_MGMT_CMD_ERASE_FLASH,
+ COMM_MGMT_CMD_QUERY_FW_INFO,
+ COMM_MGMT_CMD_GET_CFG_INFO,
+ COMM_MGMT_CMD_GET_UART_LOG,
+ COMM_MGMT_CMD_SET_UART_CMD,
+ COMM_MGMT_CMD_SPI_TEST,
+
+ /* TODO: ALL reg read/write merge to COMM_MGMT_CMD_CFG_REG */
+ COMM_MGMT_CMD_UP_REG_GET,
+ COMM_MGMT_CMD_UP_REG_SET, /* 219 */
+
+ COMM_MGMT_CMD_REG_READ = 220,
+ COMM_MGMT_CMD_REG_WRITE,
+ COMM_MGMT_CMD_MAG_REG_WRITE,
+ COMM_MGMT_CMD_ANLT_REG_WRITE,
+
+ COMM_MGMT_CMD_HEART_EVENT, /* TODO: delete */
+ COMM_MGMT_CMD_NCSI_OEM_GET_DRV_INFO, /* TODO: delete */
+ COMM_MGMT_CMD_LASTWORD_GET,
+ COMM_MGMT_CMD_READ_BIN_DATA, /* TODO: delete */
+ /* COMM_MGMT_CMD_WWPN_GET, TODO: move to FC? */
+ /* COMM_MGMT_CMD_WWPN_SET, TODO: move to FC? */ /* 229 */
+
+ /* TODO: check if needed */
+ COMM_MGMT_CMD_SET_VIRTIO_DEV = 230,
+ COMM_MGMT_CMD_SET_MAC,
+ /* MPU patch cmd */
+ COMM_MGMT_CMD_LOAD_PATCH,
+ COMM_MGMT_CMD_REMOVE_PATCH,
+ COMM_MGMT_CMD_PATCH_ACTIVE,
+ COMM_MGMT_CMD_PATCH_DEACTIVE,
+ COMM_MGMT_CMD_PATCH_SRAM_OPTIMIZE,
+ /* container host process */
+ COMM_MGMT_CMD_CONTAINER_HOST_PROC,
+	/* ncsi counter */
+ COMM_MGMT_CMD_NCSI_COUNTER_PROC,
+ COMM_MGMT_CMD_CHANNEL_STATUS_CHECK, /* 239 */
+
+ /* hot patch rsvd cmd */
+ COMM_MGMT_CMD_RSVD_0 = 240,
+ COMM_MGMT_CMD_RSVD_1,
+ COMM_MGMT_CMD_RSVD_2,
+ COMM_MGMT_CMD_RSVD_3,
+ COMM_MGMT_CMD_RSVD_4,
+	/* invalid field, to be removed when versions are consolidated; kept for compilation */
+ COMM_MGMT_CMD_SEND_API_ACK_BY_UP,
+
+	/* Note: when adding a cmd, do not change the value of any existing
+	 * command; add it in one of the reserved sections above. In
+	 * principle, the cmd tables of all branches must be identical.
+	 */
+ COMM_MGMT_CMD_MAX = 255,
+};
+
+/* CmdQ Common subtype */
+enum comm_cmdq_cmd {
+ COMM_CMD_UCODE_ARM_BIT_SET = 2,
+ COMM_CMD_SEND_NPU_DFT_CMD,
+};
+
+#endif /* HINIC3_COMMON_CMD_H */
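
For illustration, new commands are expected to take values from the
reserved sections above rather than renumbering existing entries, since
the command values are shared with firmware. A minimal sketch, where
COMM_MGMT_CMD_GET_SN is a hypothetical name invented for this example:

	/* hypothetical example only: claim a slot in the rsvd0 section
	 * (values 160..169) instead of shifting any existing command
	 */
	COMM_MGMT_CMD_SECTION_RSVD_0 = 160,
	COMM_MGMT_CMD_GET_SN,	/* takes value 161 */
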
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_common.h b/drivers/net/ethernet/huawei/hinic3/hinic3_common.h
new file mode 100644
index 000000000000..3010083e5200
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_common.h
@@ -0,0 +1,119 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_COMMON_H
+#define HINIC3_COMMON_H
+
+#include <linux/types.h>
+
+struct hinic3_dma_addr_align {
+ u32 real_size;
+
+ void *ori_vaddr;
+ dma_addr_t ori_paddr;
+
+ void *align_vaddr;
+ dma_addr_t align_paddr;
+};
+
+enum hinic3_wait_return {
+ WAIT_PROCESS_CPL = 0,
+ WAIT_PROCESS_WAITING = 1,
+ WAIT_PROCESS_ERR = 2,
+};
+
+struct hinic3_sge {
+ u32 hi_addr;
+ u32 lo_addr;
+ u32 len;
+};
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+/* *
+ * hinic3_cpu_to_be32 - convert data to big endian 32 bit format
+ * @data: the data to convert
+ * @len: length of data to convert, must be a multiple of 4 bytes
+ */
+static inline void hinic3_cpu_to_be32(void *data, int len)
+{
+ int i, chunk_sz = sizeof(u32);
+ int data_len = len;
+ u32 *mem = data;
+
+ if (!data)
+ return;
+
+ data_len = data_len / chunk_sz;
+
+ for (i = 0; i < data_len; i++) {
+ *mem = cpu_to_be32(*mem);
+ mem++;
+ }
+}
+
+/* *
+ * hinic3_be32_to_cpu - convert data from big endian 32 bit format
+ * @data: the data to convert
+ * @len: length of data to convert, must be a multiple of 4 bytes
+ */
+static inline void hinic3_be32_to_cpu(void *data, int len)
+{
+ int i, chunk_sz = sizeof(u32);
+ int data_len = len;
+ u32 *mem = data;
+
+ if (!data)
+ return;
+
+ data_len = data_len / chunk_sz;
+
+ for (i = 0; i < data_len; i++) {
+ *mem = be32_to_cpu(*mem);
+ mem++;
+ }
+}
+
+/* *
+ * hinic3_set_sge - set dma area in scatter gather entry
+ * @sge: scatter gather entry
+ * @addr: dma address
+ * @len: length of relevant data in the dma address
+ */
+static inline void hinic3_set_sge(struct hinic3_sge *sge, dma_addr_t addr,
+ int len)
+{
+ sge->hi_addr = upper_32_bits(addr);
+ sge->lo_addr = lower_32_bits(addr);
+ sge->len = len;
+}
+
+#define hinic3_hw_be32(val) (val)
+#define hinic3_hw_cpu32(val) (val)
+#define hinic3_hw_cpu16(val) (val)
+
+static inline void hinic3_hw_be32_len(void *data, int len)
+{
+}
+
+static inline void hinic3_hw_cpu32_len(void *data, int len)
+{
+}
+
+int hinic3_dma_zalloc_coherent_align(void *dev_hdl, u64 size, u64 align,
+ unsigned int flag,
+ struct hinic3_dma_addr_align *mem_align);
+
+void hinic3_dma_free_coherent_align(void *dev_hdl,
+ struct hinic3_dma_addr_align *mem_align);
+
+
+typedef enum hinic3_wait_return (*wait_cpl_handler)(void *priv_data);
+
+int hinic3_wait_for_timeout(void *priv_data, wait_cpl_handler handler,
+ u32 wait_total_ms, u32 wait_once_us);
+
+#endif
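
To make the helpers above concrete, here is a minimal usage sketch;
hwdev, cmd_paddr and the fw_ready() poll callback are assumptions
invented for illustration, not part of this patch:

	/* hypothetical readiness probe polled by hinic3_wait_for_timeout() */
	static enum hinic3_wait_return fw_ready(void *priv_data)
	{
		return WAIT_PROCESS_CPL;
	}

	struct hinic3_sge sge;
	u32 cmd[4] = { 1, 2, 3, 4 };
	int err;

	/* convert the 16B (multiple of 4B) command to big endian in place */
	hinic3_cpu_to_be32(cmd, sizeof(cmd));

	/* point the SGE at the DMA buffer holding the command */
	hinic3_set_sge(&sge, cmd_paddr, sizeof(cmd));

	/* poll fw_ready() every 100us, for at most 1000ms in total */
	err = hinic3_wait_for_timeout(hwdev, fw_ready, 1000, 100);
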
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_crm.h b/drivers/net/ethernet/huawei/hinic3/hinic3_crm.h
new file mode 100644
index 000000000000..98adaf057b47
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_crm.h
@@ -0,0 +1,1162 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_CRM_H
+#define HINIC3_CRM_H
+
+#define HINIC3_DBG
+
+#define HINIC3_DRV_VERSION ""
+#define HINIC3_DRV_DESC "Intelligent Network Interface Card Driver"
+#define HIUDK_DRV_DESC "Intelligent Network Unified Driver"
+
+#define ARRAY_LEN(arr) ((int)((int)sizeof(arr) / (int)sizeof((arr)[0])))
+
+#define HINIC3_MGMT_VERSION_MAX_LEN 32
+
+#define HINIC3_FW_VERSION_NAME 16
+#define HINIC3_FW_VERSION_SECTION_CNT 4
+#define HINIC3_FW_VERSION_SECTION_BORDER 0xFF
+struct hinic3_fw_version {
+ u8 mgmt_ver[HINIC3_FW_VERSION_NAME];
+ u8 microcode_ver[HINIC3_FW_VERSION_NAME];
+ u8 boot_ver[HINIC3_FW_VERSION_NAME];
+};
+
+#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF
+
+/* show each drivers only such as nic_service_cap,
+ * toe_service_cap structure, but not show service_cap
+ */
+enum hinic3_service_type {
+ SERVICE_T_NIC = 0,
+ SERVICE_T_OVS,
+ SERVICE_T_ROCE,
+ SERVICE_T_TOE,
+ SERVICE_T_IOE,
+ SERVICE_T_FC,
+ SERVICE_T_VBS,
+ SERVICE_T_IPSEC,
+ SERVICE_T_VIRTIO,
+ SERVICE_T_MIGRATE,
+ SERVICE_T_PPA,
+ SERVICE_T_CUSTOM,
+ SERVICE_T_VROCE,
+ SERVICE_T_MAX,
+
+	/* Only used for interrupt resource management,
+	 * to mark the requesting module
+	 */
+ SERVICE_T_INTF = (1 << 15),
+ SERVICE_T_CQM = (1 << 16),
+};
+
+enum hinic3_ppf_flr_type {
+ STATELESS_FLR_TYPE,
+ STATEFUL_FLR_TYPE,
+};
+
+struct nic_service_cap {
+ u16 max_sqs;
+ u16 max_rqs;
+ u16 default_num_queues;
+};
+
+struct ppa_service_cap {
+ u16 qpc_fake_vf_start;
+ u16 qpc_fake_vf_num;
+ u32 qpc_fake_vf_ctx_num;
+ u32 pctx_sz; /* 512B */
+ u32 bloomfilter_length;
+ u8 bloomfilter_en;
+ u8 rsvd;
+ u16 rsvd1;
+};
+
+struct vbs_service_cap {
+ u16 vbs_max_volq;
+ u16 rsvd1;
+};
+
+struct migr_service_cap {
+ u8 master_host_id;
+ u8 rsvd[3];
+};
+
+/* PF/VF ToE service resource structure */
+struct dev_toe_svc_cap {
+ /* PF resources */
+	u32 max_pctxs; /* Parent Context: max specification is 1M */
+ u32 max_cctxt;
+ u32 max_cqs;
+ u16 max_srqs;
+ u32 srq_id_start;
+ u32 max_mpts;
+};
+
+/* ToE services */
+struct toe_service_cap {
+ struct dev_toe_svc_cap dev_toe_cap;
+
+ bool alloc_flag;
+ u32 pctx_sz; /* 1KB */
+ u32 scqc_sz; /* 64B */
+};
+
+/* PF FC service resource structure defined */
+struct dev_fc_svc_cap {
+ /* PF Parent QPC */
+ u32 max_parent_qpc_num; /* max number is 2048 */
+
+ /* PF Child QPC */
+ u32 max_child_qpc_num; /* max number is 2048 */
+ u32 child_qpc_id_start;
+
+ /* PF SCQ */
+ u32 scq_num; /* 16 */
+
+ /* PF supports SRQ */
+ u32 srq_num; /* Number of SRQ is 2 */
+
+ u8 vp_id_start;
+ u8 vp_id_end;
+};
+
+/* FC services */
+struct fc_service_cap {
+ struct dev_fc_svc_cap dev_fc_cap;
+
+ /* Parent QPC */
+ u32 parent_qpc_size; /* 256B */
+
+ /* Child QPC */
+ u32 child_qpc_size; /* 256B */
+
+ /* SQ */
+ u32 sqe_size; /* 128B(in linked list mode) */
+
+ /* SCQ */
+ u32 scqc_size; /* Size of the Context 32B */
+ u32 scqe_size; /* 64B */
+
+ /* SRQ */
+ u32 srqc_size; /* Size of SRQ Context (64B) */
+ u32 srqe_size; /* 32B */
+};
+
+struct dev_roce_svc_own_cap {
+ u32 max_qps;
+ u32 max_cqs;
+ u32 max_srqs;
+ u32 max_mpts;
+ u32 max_drc_qps;
+
+ u32 cmtt_cl_start;
+ u32 cmtt_cl_end;
+ u32 cmtt_cl_sz;
+
+ u32 dmtt_cl_start;
+ u32 dmtt_cl_end;
+ u32 dmtt_cl_sz;
+
+ u32 wqe_cl_start;
+ u32 wqe_cl_end;
+ u32 wqe_cl_sz;
+
+ u32 qpc_entry_sz;
+ u32 max_wqes;
+ u32 max_rq_sg;
+ u32 max_sq_inline_data_sz;
+ u32 max_rq_desc_sz;
+
+ u32 rdmarc_entry_sz;
+ u32 max_qp_init_rdma;
+ u32 max_qp_dest_rdma;
+
+ u32 max_srq_wqes;
+ u32 reserved_srqs;
+ u32 max_srq_sge;
+ u32 srqc_entry_sz;
+
+ u32 max_msg_sz; /* Message size 2GB */
+};
+
+/* RDMA service capability structure */
+struct dev_rdma_svc_cap {
+ /* ROCE service unique parameter structure */
+ struct dev_roce_svc_own_cap roce_own_cap;
+};
+
+/* Defines the RDMA service capability flag */
+enum {
+ RDMA_BMME_FLAG_LOCAL_INV = (1 << 0),
+ RDMA_BMME_FLAG_REMOTE_INV = (1 << 1),
+ RDMA_BMME_FLAG_FAST_REG_WR = (1 << 2),
+ RDMA_BMME_FLAG_RESERVED_LKEY = (1 << 3),
+ RDMA_BMME_FLAG_TYPE_2_WIN = (1 << 4),
+ RDMA_BMME_FLAG_WIN_TYPE_2B = (1 << 5),
+
+ RDMA_DEV_CAP_FLAG_XRC = (1 << 6),
+ RDMA_DEV_CAP_FLAG_MEM_WINDOW = (1 << 7),
+ RDMA_DEV_CAP_FLAG_ATOMIC = (1 << 8),
+ RDMA_DEV_CAP_FLAG_APM = (1 << 9),
+};
+
+/* RDMA services */
+struct rdma_service_cap {
+ struct dev_rdma_svc_cap dev_rdma_cap;
+
+	u8 log_mtt; /* 1. the number of MTT PAs must be an integer power of 2
+		     * 2. represented as a logarithm; each MTT table can
+		     * contain 1, 2, 4, 8, or 16 PAs
+ */
+ /* todo: need to check whether related to max_mtt_seg */
+ u32 num_mtts; /* Number of MTT table (4M),
+ * is actually MTT seg number
+ */
+ u32 log_mtt_seg;
+ u32 mtt_entry_sz; /* MTT table size 8B, including 1 PA(64bits) */
+ u32 mpt_entry_sz; /* MPT table size (64B) */
+
+ u32 dmtt_cl_start;
+ u32 dmtt_cl_end;
+ u32 dmtt_cl_sz;
+
+	u8 log_rdmarc; /* 1. the number of RDMArc PAs must be an integer power of 2
+		 * 2. represented as a logarithm; each RDMArc table can
+		 * contain 1, 2, 4, 8, or 16 PAs
+ */
+
+ u32 reserved_qps; /* Number of reserved QP */
+ u32 max_sq_sg; /* Maximum SGE number of SQ (8) */
+	u32 max_sq_desc_sz; /* WQE maximum size of SQ (1024B); inline maximum
+			     * size is 960B (944B aligned up to 960B),
+			     * 960B => wqebb alignment => 1024B
+ */
+	u32 wqebb_size; /* Currently, 64B and 128B are supported,
+			 * defined as 64 bytes
+ */
+
+ u32 max_cqes; /* Size of the depth of the CQ (64K-1) */
+ u32 reserved_cqs; /* Number of reserved CQ */
+ u32 cqc_entry_sz; /* Size of the CQC (64B/128B) */
+ u32 cqe_size; /* Size of CQE (32B) */
+
+ u32 reserved_mrws; /* Number of reserved MR/MR Window */
+
+ u32 max_fmr_maps; /* max MAP of FMR,
+ * (1 << (32-ilog2(num_mpt)))-1;
+ */
+
+ /* todo: max value needs to be confirmed */
+	/* MTT table number of each MTT seg (3) */
+
+ u32 log_rdmarc_seg; /* table number of each RDMArc seg(3) */
+
+	/* Timeout time. Formula: Tr = 4.096us * 2^local_ca_ack_delay, [Tr, 4Tr] */
+ u32 local_ca_ack_delay;
+ u32 num_ports; /* Physical port number */
+
+ u32 db_page_size; /* Size of the DB (4KB) */
+ u32 direct_wqe_size; /* Size of the DWQE (256B) */
+
+ u32 num_pds; /* Maximum number of PD (128K) */
+ u32 reserved_pds; /* Number of reserved PD */
+ u32 max_xrcds; /* Maximum number of xrcd (64K) */
+ u32 reserved_xrcds; /* Number of reserved xrcd */
+
+ u32 max_gid_per_port; /* gid number (16) of each port */
+	u32 gid_entry_sz; /* RoCE v2 GID table entry is 32B,
+			   * compatible with RoCE v1 by expansion
+ */
+
+ u32 reserved_lkey; /* local_dma_lkey */
+ u32 num_comp_vectors; /* Number of complete vector (32) */
+ u32 page_size_cap; /* Supports 4K,8K,64K,256K,1M and 4M page_size */
+
+ u32 flags; /* RDMA some identity */
+ u32 max_frpl_len; /* Maximum number of pages frmr registration */
+ u32 max_pkeys; /* Number of supported pkey group */
+};
+
+/* PF OVS service resource structure defined */
+struct dev_ovs_svc_cap {
+	u32 max_pctxs; /* Parent Context: max specification is 1M */
+ u32 fake_vf_max_pctx;
+ u16 fake_vf_num;
+ u16 fake_vf_start_id;
+ u8 dynamic_qp_en;
+};
+
+/* OVS services */
+struct ovs_service_cap {
+ struct dev_ovs_svc_cap dev_ovs_cap;
+
+ u32 pctx_sz; /* 512B */
+};
+
+/* PF IPsec service resource structure defined */
+struct dev_ipsec_svc_cap {
+ u32 max_sactxs; /* max IPsec SA context num */
+ u16 max_cqs; /* max IPsec SCQC num */
+ u16 rsvd0;
+};
+
+/* IPsec services */
+struct ipsec_service_cap {
+ struct dev_ipsec_svc_cap dev_ipsec_cap;
+ u32 sactx_sz; /* 512B */
+};
+
+/* Defines the IRQ information structure */
+struct irq_info {
+ u16 msix_entry_idx; /* IRQ corresponding index number */
+ u32 irq_id; /* the IRQ number from OS */
+};
+
+struct interrupt_info {
+ u32 lli_set;
+ u32 interrupt_coalesc_set;
+ u16 msix_index;
+ u8 lli_credit_limit;
+ u8 lli_timer_cfg;
+ u8 pending_limt;
+ u8 coalesc_timer_cfg;
+ u8 resend_timer_cfg;
+};
+
+enum hinic3_msix_state {
+ HINIC3_MSIX_ENABLE,
+ HINIC3_MSIX_DISABLE,
+};
+
+enum hinic3_msix_auto_mask {
+ HINIC3_CLR_MSIX_AUTO_MASK,
+ HINIC3_SET_MSIX_AUTO_MASK,
+};
+
+enum func_type {
+ TYPE_PF,
+ TYPE_VF,
+ TYPE_PPF,
+ TYPE_UNKNOWN,
+};
+
+struct hinic3_init_para {
+ /* Record hinic_pcidev or NDIS_Adapter pointer address */
+ void *adapter_hdl;
+ /* Record pcidev or Handler pointer address
+ * for example: ioremap interface input parameter
+ */
+ void *pcidev_hdl;
+ /* Record pcidev->dev or Handler pointer address which used to
+ * dma address application or dev_err print the parameter
+ */
+ void *dev_hdl;
+
+ /* Configure virtual address, PF is bar1, VF is bar0/1 */
+ void *cfg_reg_base;
+ /* interrupt configuration register address, PF is bar2, VF is bar2/3
+ */
+ void *intr_reg_base;
+ /* for PF bar3 virtual address, if function is VF should set to NULL */
+ void *mgmt_reg_base;
+
+ u64 db_dwqe_len;
+ u64 db_base_phy;
+ /* the doorbell address, bar4/5 higher 4M space */
+ void *db_base;
+ /* direct wqe 4M, follow the doorbell address space */
+ void *dwqe_mapping;
+ void **hwdev;
+ void *chip_node;
+ /* if use polling mode, set it true */
+ bool poll;
+
+ u16 probe_fault_level;
+};
+
+/* B200 config BAR45 4MB, DB & DWQE both 2MB */
+#define HINIC3_DB_DWQE_SIZE 0x00400000
+
+/* db/dwqe page size: 4K */
+#define HINIC3_DB_PAGE_SIZE 0x00001000ULL
+#define HINIC3_DWQE_OFFSET 0x00000800ULL
+
+#define HINIC3_DB_MAX_AREAS (HINIC3_DB_DWQE_SIZE / HINIC3_DB_PAGE_SIZE)
+
+#ifndef IFNAMSIZ
+#define IFNAMSIZ 16
+#endif
+#define MAX_FUNCTION_NUM 4096
+
+struct card_node {
+ struct list_head node;
+ struct list_head func_list;
+ char chip_name[IFNAMSIZ];
+ void *log_info;
+ void *dbgtool_info;
+ void *func_handle_array[MAX_FUNCTION_NUM];
+ unsigned char bus_num;
+ u16 func_num;
+ u32 rsvd1;
+ atomic_t channel_busy_cnt;
+ void *priv_data;
+ u64 rsvd2;
+};
+
+#define HINIC3_SYNFW_TIME_PERIOD (60 * 60 * 1000)
+#define HINIC3_SYNC_YEAR_OFFSET 1900
+#define HINIC3_SYNC_MONTH_OFFSET 1
+
+#define FAULT_SHOW_STR_LEN 16
+
+enum hinic3_fault_source_type {
+ /* same as FAULT_TYPE_CHIP */
+ HINIC3_FAULT_SRC_HW_MGMT_CHIP = 0,
+ /* same as FAULT_TYPE_UCODE */
+ HINIC3_FAULT_SRC_HW_MGMT_UCODE,
+ /* same as FAULT_TYPE_MEM_RD_TIMEOUT */
+ HINIC3_FAULT_SRC_HW_MGMT_MEM_RD_TIMEOUT,
+ /* same as FAULT_TYPE_MEM_WR_TIMEOUT */
+ HINIC3_FAULT_SRC_HW_MGMT_MEM_WR_TIMEOUT,
+ /* same as FAULT_TYPE_REG_RD_TIMEOUT */
+ HINIC3_FAULT_SRC_HW_MGMT_REG_RD_TIMEOUT,
+ /* same as FAULT_TYPE_REG_WR_TIMEOUT */
+ HINIC3_FAULT_SRC_HW_MGMT_REG_WR_TIMEOUT,
+ HINIC3_FAULT_SRC_SW_MGMT_UCODE,
+ HINIC3_FAULT_SRC_MGMT_WATCHDOG,
+ HINIC3_FAULT_SRC_MGMT_RESET = 8,
+ HINIC3_FAULT_SRC_HW_PHY_FAULT,
+ HINIC3_FAULT_SRC_TX_PAUSE_EXCP,
+ HINIC3_FAULT_SRC_PCIE_LINK_DOWN = 20,
+ HINIC3_FAULT_SRC_HOST_HEARTBEAT_LOST = 21,
+ HINIC3_FAULT_SRC_TX_TIMEOUT,
+ HINIC3_FAULT_SRC_TYPE_MAX,
+};
+
+union hinic3_fault_hw_mgmt {
+ u32 val[4];
+ /* valid only type == FAULT_TYPE_CHIP */
+ struct {
+ u8 node_id;
+ /* enum hinic_fault_err_level */
+ u8 err_level;
+ u16 err_type;
+ u32 err_csr_addr;
+ u32 err_csr_value;
+ /* func_id valid only if err_level == FAULT_LEVEL_SERIOUS_FLR */
+ u8 rsvd1;
+ u8 host_id;
+ u16 func_id;
+ } chip;
+
+ /* valid only if type == FAULT_TYPE_UCODE */
+ struct {
+ u8 cause_id;
+ u8 core_id;
+ u8 c_id;
+ u8 rsvd3;
+ u32 epc;
+ u32 rsvd4;
+ u32 rsvd5;
+ } ucode;
+
+ /* valid only if type == FAULT_TYPE_MEM_RD_TIMEOUT ||
+ * FAULT_TYPE_MEM_WR_TIMEOUT
+ */
+ struct {
+ u32 err_csr_ctrl;
+ u32 err_csr_data;
+ u32 ctrl_tab;
+ u32 mem_index;
+ } mem_timeout;
+
+ /* valid only if type == FAULT_TYPE_REG_RD_TIMEOUT ||
+ * FAULT_TYPE_REG_WR_TIMEOUT
+ */
+ struct {
+ u32 err_csr;
+ u32 rsvd6;
+ u32 rsvd7;
+ u32 rsvd8;
+ } reg_timeout;
+
+ struct {
+ /* 0: read; 1: write */
+ u8 op_type;
+ u8 port_id;
+ u8 dev_ad;
+ u8 rsvd9;
+ u32 csr_addr;
+ u32 op_data;
+ u32 rsvd10;
+ } phy_fault;
+};
+
+/* defined by chip */
+struct hinic3_fault_event {
+ /* enum hinic_fault_type */
+ u8 type;
+ u8 fault_level; /* sdk write fault level for uld event */
+ u8 rsvd0[2];
+ union hinic3_fault_hw_mgmt event;
+};
+
+struct hinic3_cmd_fault_event {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+ struct hinic3_fault_event event;
+};
+
+struct hinic3_sriov_state_info {
+ u8 enable;
+ u16 num_vfs;
+};
+
+enum hinic3_comm_event_type {
+ EVENT_COMM_PCIE_LINK_DOWN,
+ EVENT_COMM_HEART_LOST,
+ EVENT_COMM_FAULT,
+ EVENT_COMM_SRIOV_STATE_CHANGE,
+ EVENT_COMM_CARD_REMOVE,
+ EVENT_COMM_MGMT_WATCHDOG,
+};
+
+enum hinic3_event_service_type {
+ EVENT_SRV_COMM = 0,
+#define SERVICE_EVENT_BASE (EVENT_SRV_COMM + 1)
+ EVENT_SRV_NIC = SERVICE_EVENT_BASE + SERVICE_T_NIC,
+ EVENT_SRV_MIGRATE = SERVICE_EVENT_BASE + SERVICE_T_MIGRATE,
+};
+
+#define HINIC3_SRV_EVENT_TYPE(svc, type) ((((u32)(svc)) << 16) | (type))
+struct hinic3_event_info {
+ u16 service; /* enum hinic3_event_service_type */
+ u16 type;
+ u8 event_data[104];
+};
+
+typedef void (*hinic3_event_handler)(void *handle, struct hinic3_event_info *event);
+
+/* *
+ * @brief hinic3_event_register - register hardware event
+ * @param dev: device pointer to hwdev
+ * @param pri_handle: private data will be used by the callback
+ * @param callback: callback function
+ */
+void hinic3_event_register(void *dev, void *pri_handle,
+ hinic3_event_handler callback);
+
+/* *
+ * @brief hinic3_event_unregister - unregister hardware event
+ * @param dev: device pointer to hwdev
+ */
+void hinic3_event_unregister(void *dev);
+
+/* *
+ * @brief hinic3_set_msix_auto_mask_state - set msix auto mask state
+ * @param hwdev: device pointer to hwdev
+ * @param msix_idx: msix id
+ * @param flag: msix auto_mask flag, 0-clear, 1-set
+ */
+void hinic3_set_msix_auto_mask_state(void *hwdev, u16 msix_idx,
+ enum hinic3_msix_auto_mask flag);
+
+/* *
+ * @brief hinic3_set_msix_state - set msix state
+ * @param hwdev: device pointer to hwdev
+ * @param msix_idx: msix id
+ * @param flag: msix state flag, 0-enable, 1-disable
+ */
+void hinic3_set_msix_state(void *hwdev, u16 msix_idx,
+ enum hinic3_msix_state flag);
+
+/* *
+ * @brief hinic3_misx_intr_clear_resend_bit - clear msix resend bit
+ * @param hwdev: device pointer to hwdev
+ * @param msix_idx: msix id
+ * @param clear_resend_en: 1-clear
+ */
+void hinic3_misx_intr_clear_resend_bit(void *hwdev, u16 msix_idx,
+ u8 clear_resend_en);
+
+/* *
+ * @brief hinic3_set_interrupt_cfg_direct - set interrupt cfg
+ * @param hwdev: device pointer to hwdev
+ * @param info: interrupt info
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_interrupt_cfg_direct(void *hwdev,
+ struct interrupt_info *info,
+ u16 channel);
+
+int hinic3_set_interrupt_cfg(void *dev, struct interrupt_info info,
+ u16 channel);
+
+/* *
+ * @brief hinic3_get_interrupt_cfg - get interrupt cfg
+ * @param dev: device pointer to hwdev
+ * @param info: interrupt info
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_interrupt_cfg(void *dev, struct interrupt_info *info,
+ u16 channel);
+
+/* *
+ * @brief hinic3_alloc_irqs - alloc irq
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ * @param num: alloc number
+ * @param irq_info_array: alloc irq info
+ * @param act_num: alloc actual number
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_alloc_irqs(void *hwdev, enum hinic3_service_type type, u16 num,
+ struct irq_info *irq_info_array, u16 *act_num);
+
+/* *
+ * @brief hinic3_free_irq - free irq
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ * @param irq_id: irq id
+ */
+void hinic3_free_irq(void *hwdev, enum hinic3_service_type type, u32 irq_id);
+
+/* *
+ * @brief hinic3_alloc_ceqs - alloc ceqs
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ * @param num: alloc ceq number
+ * @param ceq_id_array: alloc ceq_id_array
+ * @param act_num: alloc actual number
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_alloc_ceqs(void *hwdev, enum hinic3_service_type type, int num,
+ int *ceq_id_array, int *act_num);
+
+/* *
+ * @brief hinic3_free_ceq - free ceq
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ * @param ceq_id: ceq id
+ */
+void hinic3_free_ceq(void *hwdev, enum hinic3_service_type type, int ceq_id);
+
+/* *
+ * @brief hinic3_get_pcidev_hdl - get pcidev_hdl
+ * @param hwdev: device pointer to hwdev
+ * @retval non-null: success
+ * @retval null: failure
+ */
+void *hinic3_get_pcidev_hdl(void *hwdev);
+
+/* *
+ * @brief hinic3_ppf_idx - get ppf id
+ * @param hwdev: device pointer to hwdev
+ * @retval ppf id
+ */
+u8 hinic3_ppf_idx(void *hwdev);
+
+/* *
+ * @brief hinic3_get_chip_present_flag - get chip present flag
+ * @param hwdev: device pointer to hwdev
+ * @retval 1: chip is present
+ * @retval 0: chip is absent
+ */
+int hinic3_get_chip_present_flag(const void *hwdev);
+
+/* *
+ * @brief hinic3_get_heartbeat_status - get heartbeat status
+ * @param hwdev: device pointer to hwdev
+ * @retval heartbeat status
+ */
+u32 hinic3_get_heartbeat_status(void *hwdev);
+
+/* *
+ * @brief hinic3_support_nic - function support nic
+ * @param hwdev: device pointer to hwdev
+ * @param cap: nic service capability
+ * @retval true: function support nic
+ * @retval false: function not support nic
+ */
+bool hinic3_support_nic(void *hwdev, struct nic_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_ipsec - function support ipsec
+ * @param hwdev: device pointer to hwdev
+ * @param cap: ipsec service capability
+ * @retval true: function support ipsec
+ * @retval false: function not support ipsec
+ */
+bool hinic3_support_ipsec(void *hwdev, struct ipsec_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_roce - function support roce
+ * @param hwdev: device pointer to hwdev
+ * @param cap: roce service capability
+ * @retval true: function support roce
+ * @retval false: function not support roce
+ */
+bool hinic3_support_roce(void *hwdev, struct rdma_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_fc - function support fc
+ * @param hwdev: device pointer to hwdev
+ * @param cap: fc service capability
+ * @retval true: function support fc
+ * @retval false: function not support fc
+ */
+bool hinic3_support_fc(void *hwdev, struct fc_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_rdma - function support rdma
+ * @param hwdev: device pointer to hwdev
+ * @param cap: rdma service capability
+ * @retval true: function support rdma
+ * @retval false: function not support rdma
+ */
+bool hinic3_support_rdma(void *hwdev, struct rdma_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_ovs - function support ovs
+ * @param hwdev: device pointer to hwdev
+ * @param cap: ovs service capability
+ * @retval true: function support ovs
+ * @retval false: function not support ovs
+ */
+bool hinic3_support_ovs(void *hwdev, struct ovs_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_vbs - function support vbs
+ * @param hwdev: device pointer to hwdev
+ * @param cap: vbs service capability
+ * @retval true: function support vbs
+ * @retval false: function not support vbs
+ */
+bool hinic3_support_vbs(void *hwdev, struct vbs_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_toe - function support toe
+ * @param hwdev: device pointer to hwdev
+ * @param cap: toe service capability
+ * @retval true: function support toe
+ * @retval false: function not support toe
+ */
+bool hinic3_support_toe(void *hwdev, struct toe_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_ppa - function support ppa
+ * @param hwdev: device pointer to hwdev
+ * @param cap: ppa service capability
+ * @retval true: function support ppa
+ * @retval false: function not support ppa
+ */
+bool hinic3_support_ppa(void *hwdev, struct ppa_service_cap *cap);
+
+/* *
+ * @brief hinic3_support_migr - function support migrate
+ * @param hwdev: device pointer to hwdev
+ * @param cap: migrate service capability
+ * @retval true: function support migrate
+ * @retval false: function not support migrate
+ */
+bool hinic3_support_migr(void *hwdev, struct migr_service_cap *cap);
+
+/* *
+ * @brief hinic3_sync_time - sync time to hardware
+ * @param hwdev: device pointer to hwdev
+ * @param time: time to sync
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_sync_time(void *hwdev, u64 time);
+
+/* *
+ * @brief hinic3_disable_mgmt_msg_report - disable mgmt report msg
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_disable_mgmt_msg_report(void *hwdev);
+
+/* *
+ * @brief hinic3_func_for_mgmt - get function service type
+ * @param hwdev: device pointer to hwdev
+ * @retval true: function for mgmt
+ * @retval false: function is not for mgmt
+ */
+bool hinic3_func_for_mgmt(void *hwdev);
+
+/* *
+ * @brief hinic3_set_pcie_order_cfg - set pcie order cfg
+ * @param handle: device pointer to hwdev
+ */
+void hinic3_set_pcie_order_cfg(void *handle);
+
+/* *
+ * @brief hinic3_init_hwdev - call to init hwdev
+ * @param para: device pointer to para
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_init_hwdev(struct hinic3_init_para *para);
+
+/* *
+ * @brief hinic3_free_hwdev - free hwdev
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_free_hwdev(void *hwdev);
+
+/* *
+ * @brief hinic3_detect_hw_present - detect hardware present
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_detect_hw_present(void *hwdev);
+
+/* *
+ * @brief hinic3_record_pcie_error - record pcie error
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_record_pcie_error(void *hwdev);
+
+/* *
+ * @brief hinic3_shutdown_hwdev - shutdown hwdev
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_shutdown_hwdev(void *hwdev);
+
+/* *
+ * @brief hinic3_set_ppf_flr_type - set ppf flr type
+ * @param hwdev: device pointer to hwdev
+ * @param ppf_flr_type: ppf flr type
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_ppf_flr_type(void *hwdev, enum hinic3_ppf_flr_type flr_type);
+
+/* *
+ * @brief hinic3_get_mgmt_version - get management cpu version
+ * @param hwdev: device pointer to hwdev
+ * @param mgmt_ver: output management version
+ * @param version_size: size of the mgmt_ver buffer
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_mgmt_version(void *hwdev, u8 *mgmt_ver, u8 version_size,
+ u16 channel);
+
+/* *
+ * @brief hinic3_get_fw_version - get firmware version
+ * @param hwdev: device pointer to hwdev
+ * @param fw_ver: firmware version
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_fw_version(void *hwdev, struct hinic3_fw_version *fw_ver,
+ u16 channel);
+
+/* *
+ * @brief hinic3_global_func_id - get global function id
+ * @param hwdev: device pointer to hwdev
+ * @retval global function id
+ */
+u16 hinic3_global_func_id(void *hwdev);
+
+/* *
+ * @brief hinic3_vector_to_eqn - vector to eq id
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ * @param vector: vector
+ * @retval eq id
+ */
+int hinic3_vector_to_eqn(void *hwdev, enum hinic3_service_type type,
+ int vector);
+
+/* *
+ * @brief hinic3_glb_pf_vf_offset - get vf offset id of pf
+ * @param hwdev: device pointer to hwdev
+ * @retval vf offset id
+ */
+u16 hinic3_glb_pf_vf_offset(void *hwdev);
+
+/* *
+ * @brief hinic3_pf_id_of_vf - get pf id of vf
+ * @param hwdev: device pointer to hwdev
+ * @retval pf id
+ */
+u8 hinic3_pf_id_of_vf(void *hwdev);
+
+/* *
+ * @brief hinic3_func_type - get function type
+ * @param hwdev: device pointer to hwdev
+ * @retval function type
+ */
+enum func_type hinic3_func_type(void *hwdev);
+
+/* *
+ * @brief hinic3_get_stateful_enable - get stateful status
+ * @param hwdev: device pointer to hwdev
+ * @retval stateful enable status
+ */
+bool hinic3_get_stateful_enable(void *hwdev);
+
+/* *
+ * @brief hinic3_host_oq_id_mask - get oq id
+ * @param hwdev: device pointer to hwdev
+ * @retval oq id
+ */
+u8 hinic3_host_oq_id_mask(void *hwdev);
+
+/* *
+ * @brief hinic3_host_id - get host id
+ * @param hwdev: device pointer to hwdev
+ * @retval host id
+ */
+u8 hinic3_host_id(void *hwdev);
+
+/* *
+ * @brief hinic3_host_total_func - get host total function number
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: host total function number
+ * @retval zero: failure
+ */
+u16 hinic3_host_total_func(void *hwdev);
+
+/* *
+ * @brief hinic3_func_max_nic_qnum - get max nic queue number
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: max nic queue number
+ * @retval zero: failure
+ */
+u16 hinic3_func_max_nic_qnum(void *hwdev);
+
+/* *
+ * @brief hinic3_func_max_qnum - get max queue number
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: max queue number
+ * @retval zero: failure
+ */
+u16 hinic3_func_max_qnum(void *hwdev);
+
+/* *
+ * @brief hinic3_ep_id - get ep id
+ * @param hwdev: device pointer to hwdev
+ * @retval ep id
+ */
+u8 hinic3_ep_id(void *hwdev); /* Obtain service_cap.ep_id */
+
+/* *
+ * @brief hinic3_er_id - get er id
+ * @param hwdev: device pointer to hwdev
+ * @retval er id
+ */
+u8 hinic3_er_id(void *hwdev); /* Obtain service_cap.er_id */
+
+/* *
+ * @brief hinic3_physical_port_id - get physical port id
+ * @param hwdev: device pointer to hwdev
+ * @retval physical port id
+ */
+u8 hinic3_physical_port_id(void *hwdev); /* Obtain service_cap.port_id */
+
+/* *
+ * @brief hinic3_func_max_vf - get vf number
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: vf number
+ * @retval zero: failure
+ */
+u16 hinic3_func_max_vf(void *hwdev); /* Obtain service_cap.max_vf */
+
+/* *
+ * @brief hinic3_max_pf_num - get global max pf number
+ */
+u8 hinic3_max_pf_num(void *hwdev);
+
+/* *
+ * @brief hinic3_host_pf_num - get current host pf number
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: pf number
+ * @retval zero: failure
+ */
+u32 hinic3_host_pf_num(void *hwdev); /* Obtain service_cap.pf_num */
+
+/* *
+ * @brief hinic3_host_pf_id_start - get current host pf id start
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: pf id start
+ * @retval zero: failure
+ */
+u32 hinic3_host_pf_id_start(void *hwdev); /* Obtain service_cap.pf_id_start */
+
+/* *
+ * @brief hinic3_pcie_itf_id - get pcie port id
+ * @param hwdev: device pointer to hwdev
+ * @retval pcie port id
+ */
+u8 hinic3_pcie_itf_id(void *hwdev);
+
+/* *
+ * @brief hinic3_vf_in_pf - get vf offset in pf
+ * @param hwdev: device pointer to hwdev
+ * @retval vf offset in pf
+ */
+u8 hinic3_vf_in_pf(void *hwdev);
+
+/* *
+ * @brief hinic3_cos_valid_bitmap - get cos valid bitmap
+ * @param hwdev: device pointer to hwdev
+ * @param func_dft_cos: output function default cos bitmap
+ * @param port_cos_bitmap: output port cos bitmap
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cos_valid_bitmap(void *hwdev, u8 *func_dft_cos, u8 *port_cos_bitmap);
+
+/* *
+ * @brief hinic3_stateful_init - init stateful resource
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_stateful_init(void *hwdev);
+
+/* *
+ * @brief hinic3_stateful_deinit - deinit stateful resource
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_stateful_deinit(void *hwdev);
+
+/* *
+ * @brief hinic3_free_stateful - sdk remove free stateful resource
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_free_stateful(void *hwdev);
+
+/* *
+ * @brief hinic3_need_init_stateful_default - get need init stateful default
+ * @param hwdev: device pointer to hwdev
+ */
+bool hinic3_need_init_stateful_default(void *hwdev);
+
+/* *
+ * @brief hinic3_get_card_present_state - get card present state
+ * @param hwdev: device pointer to hwdev
+ * @param card_present_state: return card present state
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_card_present_state(void *hwdev, bool *card_present_state);
+
+/* *
+ * @brief hinic3_func_rx_tx_flush - function flush
+ * @param hwdev: device pointer to hwdev
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_func_rx_tx_flush(void *hwdev, u16 channel);
+
+/* *
+ * @brief hinic3_flush_mgmt_workq - flush mgmt work queue when removing the function
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_flush_mgmt_workq(void *hwdev);
+
+/* *
+ * @brief hinic3_ceq_num - get toe ceq num
+ */
+u8 hinic3_ceq_num(void *hwdev);
+
+/* *
+ * @brief hinic3_intr_num - get intr num
+ */
+u16 hinic3_intr_num(void *hwdev);
+
+/* *
+ * @brief hinic3_flexq_en - get flexq en
+ */
+u8 hinic3_flexq_en(void *hwdev);
+
+/* *
+ * @brief hinic3_fault_event_report - report fault event
+ * @param hwdev: device pointer to hwdev
+ * @param src: fault event source, reference to enum hinic3_fault_source_type
+ * @param level: fault level, reference to enum hinic3_fault_err_level
+ */
+void hinic3_fault_event_report(void *hwdev, u16 src, u16 level);
+
+/* *
+ * @brief hinic3_probe_success - notify device probe successful
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_probe_success(void *hwdev);
+
+/* *
+ * @brief hinic3_set_func_svc_used_state - set function service used state
+ * @param hwdev: device pointer to hwdev
+ * @param svc_type: service type
+ * @param state: function used state
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_func_svc_used_state(void *hwdev, u16 svc_type, u8 state,
+ u16 channel);
+
+/* *
+ * @brief hinic3_get_self_test_result - get self test result
+ * @param hwdev: device pointer to hwdev
+ * @retval self test result
+ */
+u32 hinic3_get_self_test_result(void *hwdev);
+
+/* *
+ * @brief set_slave_host_enable - set slave host enable state
+ * @param hwdev: device pointer to hwdev
+ * @param host_id: host id to set
+ * @param enable: slave enable state
+ */
+void set_slave_host_enable(void *hwdev, u8 host_id, bool enable);
+
+/* *
+ * @brief hinic3_get_slave_bitmap - get slave host bitmap
+ * @param hwdev: device pointer to hwdev
+ * @param slave_host_bitmap: output slave host bitmap
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_slave_bitmap(void *hwdev, u8 *slave_host_bitmap);
+
+/* *
+ * @brief hinic3_get_slave_host_enable - get slave host enable
+ * @param hwdev: device pointer to hwdev
+ * @param host_id: host id to query
+ * @param slave_en: output slave enable state
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_slave_host_enable(void *hwdev, u8 host_id, u8 *slave_en);
+
+/* *
+ * @brief hinic3_set_host_migrate_enable - set migrate host enable
+ * @param hwdev: device pointer to hwdev
+ * @param host_id: host id to set
+ * @param enable: migrate enable state
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_host_migrate_enable(void *hwdev, u8 host_id, bool enable);
+
+/* *
+ * @brief hinic3_get_host_migrate_enable - get migrate host enable
+ * @param hwdev: device pointer to hwdev
+ * @param host_id: host id to query
+ * @param migrate_en: output migrate enable state
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_host_migrate_enable(void *hwdev, u8 host_id, u8 *migrate_en);
+
+#endif
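
As a usage sketch of the event interface declared above: a service
driver registers one callback and demultiplexes on the packed
service/type value. uld_dev and the pr_warn() body are assumptions
invented for illustration:

	static void uld_event_handler(void *handle,
				      struct hinic3_event_info *event)
	{
		if (HINIC3_SRV_EVENT_TYPE(event->service, event->type) ==
		    HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_FAULT))
			pr_warn("hinic3: fault event received\n");
	}

	hinic3_event_register(hwdev, uld_dev, uld_event_handler);
	...
	hinic3_event_unregister(hwdev);
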
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_dbg.c b/drivers/net/ethernet/huawei/hinic3/hinic3_dbg.c
new file mode 100644
index 000000000000..4a688f190864
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_dbg.c
@@ -0,0 +1,983 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+#include <linux/semaphore.h>
+
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_mt.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_nic_dbg.h"
+#include "hinic3_nic_qp.h"
+#include "hinic3_rx.h"
+#include "hinic3_tx.h"
+#include "hinic3_dcb.h"
+#include "hinic3_nic.h"
+#include "hinic3_mgmt_interface.h"
+
+typedef int (*nic_driv_module)(struct hinic3_nic_dev *nic_dev,
+ const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+struct nic_drv_module_handle {
+ enum driver_cmd_type driv_cmd_name;
+ nic_driv_module driv_func;
+};
+
+static int get_nic_drv_version(void *buf_out, const u32 *out_size)
+{
+ struct drv_version_info *ver_info = buf_out;
+ int err;
+
+ if (!buf_out) {
+ pr_err("Buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (*out_size != sizeof(*ver_info)) {
+		pr_err("Unexpected out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(*ver_info));
+ return -EINVAL;
+ }
+
+ err = snprintf(ver_info->ver, sizeof(ver_info->ver), "%s %s",
+ HINIC3_NIC_DRV_VERSION, "2023-05-17_19:56:38");
+ if (err < 0)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int get_tx_info(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ u16 q_id;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't get tx info\n");
+ return -EFAULT;
+ }
+
+ if (!buf_in || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Buf_in or buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (!out_size || in_size != sizeof(u32)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected in buf size from user: %u, expect: %lu\n",
+ in_size, sizeof(u32));
+ return -EINVAL;
+ }
+
+ q_id = (u16)(*((u32 *)buf_in));
+
+ return hinic3_dbg_get_sq_info(nic_dev->hwdev, q_id, buf_out, *out_size);
+}
+
+static int get_q_num(struct hinic3_nic_dev *nic_dev,
+ const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't get queue number\n");
+ return -EFAULT;
+ }
+
+ if (!buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Get queue number para buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (!out_size || *out_size != sizeof(u16)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(u16));
+ return -EINVAL;
+ }
+
+ *((u16 *)buf_out) = nic_dev->q_params.num_qps;
+
+ return 0;
+}
+
+static int get_tx_wqe_info(struct hinic3_nic_dev *nic_dev,
+ const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ const struct wqe_info *info = buf_in;
+ u16 wqebb_cnt = 1;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't get tx wqe info\n");
+ return -EFAULT;
+ }
+
+ if (!buf_in || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Buf_in or buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (!out_size || in_size != sizeof(struct wqe_info)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected buf size from user, in_size: %u, expect: %lu\n",
+ in_size, sizeof(struct wqe_info));
+ return -EINVAL;
+ }
+
+ return hinic3_dbg_get_wqe_info(nic_dev->hwdev, (u16)info->q_id,
+ (u16)info->wqe_id, wqebb_cnt,
+ buf_out, (u16 *)out_size, HINIC3_SQ);
+}
+
+static int get_rx_info(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct nic_rq_info *rq_info = buf_out;
+ u16 q_id;
+ int err;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't get rx info\n");
+ return -EFAULT;
+ }
+
+ if (!buf_in || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Buf_in or buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (!out_size || in_size != sizeof(u32)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected buf size from user, in_size: %u, expect: %lu\n",
+ in_size, sizeof(u32));
+ return -EINVAL;
+ }
+
+ q_id = (u16)(*((u32 *)buf_in));
+
+ err = hinic3_dbg_get_rq_info(nic_dev->hwdev, q_id, buf_out, *out_size);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Get rq info failed, ret is %d.\n", err);
+ return err;
+ }
+
+ rq_info->delta = (u16)nic_dev->rxqs[q_id].delta;
+ rq_info->ci = (u16)(nic_dev->rxqs[q_id].cons_idx & nic_dev->rxqs[q_id].q_mask);
+ rq_info->sw_pi = nic_dev->rxqs[q_id].next_to_update;
+ rq_info->msix_vector = nic_dev->rxqs[q_id].irq_id;
+
+ rq_info->coalesc_timer_cfg = nic_dev->rxqs[q_id].last_coalesc_timer_cfg;
+ rq_info->pending_limt = nic_dev->rxqs[q_id].last_pending_limt;
+
+ return 0;
+}
+
+static int get_rx_wqe_info(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ const struct wqe_info *info = buf_in;
+ u16 wqebb_cnt = 1;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't get rx wqe info\n");
+ return -EFAULT;
+ }
+
+ if (!buf_in || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Buf_in or buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (!out_size || in_size != sizeof(struct wqe_info)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected buf size from user, in_size: %u, expect: %lu\n",
+ in_size, sizeof(struct wqe_info));
+ return -EINVAL;
+ }
+
+ return hinic3_dbg_get_wqe_info(nic_dev->hwdev, (u16)info->q_id,
+ (u16)info->wqe_id, wqebb_cnt,
+ buf_out, (u16 *)out_size, HINIC3_RQ);
+}
+
+static int get_rx_cqe_info(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ const struct wqe_info *info = buf_in;
+ u16 q_id = 0;
+ u16 idx = 0;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't get rx cqe info\n");
+ return -EFAULT;
+ }
+
+ if (!buf_in || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Buf_in or buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (in_size != sizeof(struct wqe_info)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected buf size from user, in_size: %u, expect: %lu\n",
+ in_size, sizeof(struct wqe_info));
+ return -EINVAL;
+ }
+
+ if (!out_size || *out_size != sizeof(struct hinic3_rq_cqe)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(struct hinic3_rq_cqe));
+ return -EINVAL;
+ }
+ q_id = (u16)info->q_id;
+ idx = (u16)info->wqe_id;
+
+ if (q_id >= nic_dev->q_params.num_qps || idx >= nic_dev->rxqs[q_id].q_depth) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid q_id[%u] >= %u, or wqe idx[%u] >= %u.\n",
+ q_id, nic_dev->q_params.num_qps, idx, nic_dev->rxqs[q_id].q_depth);
+ return -EFAULT;
+ }
+
+ memcpy(buf_out, nic_dev->rxqs[q_id].rx_info[idx].cqe,
+ sizeof(struct hinic3_rq_cqe));
+
+ return 0;
+}
+
+static void clean_nicdev_stats(struct hinic3_nic_dev *nic_dev)
+{
+ u64_stats_update_begin(&nic_dev->stats.syncp);
+ nic_dev->stats.netdev_tx_timeout = 0;
+ nic_dev->stats.tx_carrier_off_drop = 0;
+ nic_dev->stats.tx_invalid_qid = 0;
+ nic_dev->stats.rsvd1 = 0;
+ nic_dev->stats.rsvd2 = 0;
+ u64_stats_update_end(&nic_dev->stats.syncp);
+}
+
+static int clear_func_static(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ int i;
+
+ *out_size = 0;
+#ifndef HAVE_NETDEV_STATS_IN_NETDEV
+ memset(&nic_dev->net_stats, 0, sizeof(nic_dev->net_stats));
+#endif
+ clean_nicdev_stats(nic_dev);
+ for (i = 0; i < nic_dev->max_qps; i++) {
+ hinic3_rxq_clean_stats(&nic_dev->rxqs[i].rxq_stats);
+ hinic3_txq_clean_stats(&nic_dev->txqs[i].txq_stats);
+ }
+
+ return 0;
+}
+
+static int get_loopback_mode(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct hinic3_nic_loop_mode *mode = buf_out;
+
+ if (!out_size || !mode)
+ return -EINVAL;
+
+ if (*out_size != sizeof(*mode)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(*mode));
+ return -EINVAL;
+ }
+
+ return hinic3_get_loopback_mode(nic_dev->hwdev, (u8 *)&mode->loop_mode,
+ (u8 *)&mode->loop_ctrl);
+}
+
+static int set_loopback_mode(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ const struct hinic3_nic_loop_mode *mode = buf_in;
+ int err;
+
+ if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't set loopback mode\n");
+ return -EFAULT;
+ }
+
+ if (!mode || !out_size || in_size != sizeof(*mode))
+ return -EINVAL;
+
+ if (*out_size != sizeof(*mode)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(*mode));
+ return -EINVAL;
+ }
+
+ err = hinic3_set_loopback_mode(nic_dev->hwdev, (u8)mode->loop_mode,
+ (u8)mode->loop_ctrl);
+ if (err == 0)
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Set loopback mode %u en %u succeed\n",
+ mode->loop_mode, mode->loop_ctrl);
+
+ return err;
+}
+
+enum hinic3_nic_link_mode {
+ HINIC3_LINK_MODE_AUTO = 0,
+ HINIC3_LINK_MODE_UP,
+ HINIC3_LINK_MODE_DOWN,
+ HINIC3_LINK_MODE_MAX,
+};
+
+static int set_link_mode_param_valid(struct hinic3_nic_dev *nic_dev,
+ const void *buf_in, u32 in_size,
+ const u32 *out_size)
+{
+ if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Netdev is down, can't set link mode\n");
+ return -EFAULT;
+ }
+
+ if (!buf_in || !out_size ||
+ in_size != sizeof(enum hinic3_nic_link_mode))
+ return -EINVAL;
+
+ if (*out_size != sizeof(enum hinic3_nic_link_mode)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(enum hinic3_nic_link_mode));
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int set_link_mode(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ const enum hinic3_nic_link_mode *link = buf_in;
+ u8 link_status;
+
+ if (set_link_mode_param_valid(nic_dev, buf_in, in_size, out_size))
+ return -EFAULT;
+
+ switch (*link) {
+ case HINIC3_LINK_MODE_AUTO:
+ if (hinic3_get_link_state(nic_dev->hwdev, &link_status))
+ link_status = false;
+ hinic3_link_status_change(nic_dev, (bool)link_status);
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Set link mode: auto succeed, now is link %s\n",
+ (link_status ? "up" : "down"));
+ break;
+ case HINIC3_LINK_MODE_UP:
+ hinic3_link_status_change(nic_dev, true);
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Set link mode: up succeed\n");
+ break;
+ case HINIC3_LINK_MODE_DOWN:
+ hinic3_link_status_change(nic_dev, false);
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Set link mode: down succeed\n");
+ break;
+ default:
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid link mode %d to set\n", *link);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int set_pf_bw_limit(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ u32 pf_bw_limit;
+ int err;
+
+ if (HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "To set VF bandwidth rate, please use ip link cmd\n");
+ return -EINVAL;
+ }
+
+ if (!buf_in || !buf_out || in_size != sizeof(u32) || !out_size || *out_size != sizeof(u8))
+ return -EINVAL;
+
+ pf_bw_limit = *((u32 *)buf_in);
+
+ err = hinic3_set_pf_bw_limit(nic_dev->hwdev, pf_bw_limit);
+ if (err) {
+		nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to set pf bandwidth limit to %u%%\n",
+ pf_bw_limit);
+ if (err < 0)
+ return err;
+ }
+
+ *((u8 *)buf_out) = (u8)err;
+
+ return 0;
+}
+
+static int get_pf_bw_limit(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "To get VF bandwidth rate, please use ip link cmd\n");
+ return -EINVAL;
+ }
+
+ if (!buf_out || !out_size)
+ return -EINVAL;
+
+ if (*out_size != sizeof(u32)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(u32));
+ return -EFAULT;
+ }
+
+ nic_io = hinic3_get_service_adapter(nic_dev->hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ *((u32 *)buf_out) = nic_io->nic_cfg.pf_bw_limit;
+
+ return 0;
+}
+
+static int get_sset_count(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ u32 count;
+
+ if (!buf_in || in_size != sizeof(u32) || !out_size ||
+ *out_size != sizeof(u32) || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Invalid parameters, in_size: %u\n",
+ in_size);
+ return -EINVAL;
+ }
+
+ switch (*((u32 *)buf_in)) {
+ case HINIC3_SHOW_SSET_IO_STATS:
+ count = hinic3_get_io_stats_size(nic_dev);
+ break;
+ default:
+ count = 0;
+ break;
+ }
+
+ *((u32 *)buf_out) = count;
+
+ return 0;
+}
+
+static int get_sset_stats(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct hinic3_show_item *items = buf_out;
+ u32 sset, count, size;
+ int err;
+
+ if (!buf_in || in_size != sizeof(u32) || !out_size || !buf_out) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Invalid parameters, in_size: %u\n",
+ in_size);
+ return -EINVAL;
+ }
+
+ size = sizeof(u32);
+ err = get_sset_count(nic_dev, buf_in, in_size, &count, &size);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Get sset count failed, ret=%d\n",
+ err);
+ return -EINVAL;
+ }
+ if (count * sizeof(*items) != *out_size) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected out buf size from user: %u, expect: %lu\n",
+ *out_size, count * sizeof(*items));
+ return -EINVAL;
+ }
+
+ sset = *((u32 *)buf_in);
+
+ switch (sset) {
+ case HINIC3_SHOW_SSET_IO_STATS:
+ hinic3_get_io_stats(nic_dev, items);
+ break;
+
+ default:
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Unknown %u to get stats\n",
+ sset);
+ err = -EINVAL;
+ break;
+ }
+
+ return err;
+}
+
+static int update_pcp_dscp_cfg(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dcb_config *wanted_dcb_cfg,
+ const struct hinic3_mt_qos_dev_cfg *qos_in)
+{
+ int i;
+ u8 cos_num = 0, valid_cos_bitmap = 0;
+
+ if (qos_in->cfg_bitmap & CMD_QOS_DEV_PCP2COS) {
+ for (i = 0; i < NIC_DCB_UP_MAX; i++) {
+ if (!(nic_dev->func_dft_cos_bitmap & BIT(qos_in->pcp2cos[i]))) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid cos=%u, func cos valid map is %u",
+ qos_in->pcp2cos[i], nic_dev->func_dft_cos_bitmap);
+ return -EINVAL;
+ }
+
+ if ((BIT(qos_in->pcp2cos[i]) & valid_cos_bitmap) == 0) {
+ valid_cos_bitmap |= (u8)BIT(qos_in->pcp2cos[i]);
+ cos_num++;
+ }
+ }
+
+ memcpy(wanted_dcb_cfg->pcp2cos, qos_in->pcp2cos, sizeof(qos_in->pcp2cos));
+ wanted_dcb_cfg->pcp_user_cos_num = cos_num;
+ wanted_dcb_cfg->pcp_valid_cos_map = valid_cos_bitmap;
+ }
+
+ if (qos_in->cfg_bitmap & CMD_QOS_DEV_DSCP2COS) {
+ cos_num = 0;
+ valid_cos_bitmap = 0;
+ for (i = 0; i < NIC_DCB_IP_PRI_MAX; i++) {
+ u8 cos = qos_in->dscp2cos[i] == DBG_DFLT_DSCP_VAL ?
+ nic_dev->wanted_dcb_cfg.dscp2cos[i] : qos_in->dscp2cos[i];
+
+ if (cos >= NIC_DCB_UP_MAX || !(nic_dev->func_dft_cos_bitmap & BIT(cos))) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid cos=%u, func cos valid map is %u",
+ cos, nic_dev->func_dft_cos_bitmap);
+ return -EINVAL;
+ }
+
+ if ((BIT(cos) & valid_cos_bitmap) == 0) {
+ valid_cos_bitmap |= (u8)BIT(cos);
+ cos_num++;
+ }
+ }
+
+ for (i = 0; i < NIC_DCB_IP_PRI_MAX; i++)
+ wanted_dcb_cfg->dscp2cos[i] = qos_in->dscp2cos[i] == DBG_DFLT_DSCP_VAL ?
+ nic_dev->hw_dcb_cfg.dscp2cos[i] : qos_in->dscp2cos[i];
+ wanted_dcb_cfg->dscp_user_cos_num = cos_num;
+ wanted_dcb_cfg->dscp_valid_cos_map = valid_cos_bitmap;
+ }
+
+ return 0;
+}
+
+static int update_wanted_qos_cfg(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dcb_config *wanted_dcb_cfg,
+ const struct hinic3_mt_qos_dev_cfg *qos_in)
+{
+ int ret;
+ u8 cos_num, valid_cos_bitmap;
+
+ if (qos_in->cfg_bitmap & CMD_QOS_DEV_TRUST) {
+ if (qos_in->trust > DCB_DSCP) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid trust=%u\n", qos_in->trust);
+ return -EINVAL;
+ }
+
+ wanted_dcb_cfg->trust = qos_in->trust;
+ }
+
+ if (qos_in->cfg_bitmap & CMD_QOS_DEV_DFT_COS) {
+ if (!(BIT(qos_in->dft_cos) & nic_dev->func_dft_cos_bitmap)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid dft_cos=%u\n", qos_in->dft_cos);
+ return -EINVAL;
+ }
+
+ wanted_dcb_cfg->default_cos = qos_in->dft_cos;
+ }
+
+ ret = update_pcp_dscp_cfg(nic_dev, wanted_dcb_cfg, qos_in);
+ if (ret)
+ return ret;
+
+ if (wanted_dcb_cfg->trust == DCB_PCP) {
+ cos_num = wanted_dcb_cfg->pcp_user_cos_num;
+ valid_cos_bitmap = wanted_dcb_cfg->pcp_valid_cos_map;
+ } else {
+ cos_num = wanted_dcb_cfg->dscp_user_cos_num;
+ valid_cos_bitmap = wanted_dcb_cfg->dscp_valid_cos_map;
+ }
+
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)) {
+ if (cos_num > nic_dev->q_params.num_qps) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+				  "DCB is on, cos num should not be more than channel num: %u\n",
+ nic_dev->q_params.num_qps);
+ return -EOPNOTSUPP;
+ }
+ }
+
+ if (!(BIT(wanted_dcb_cfg->default_cos) & valid_cos_bitmap)) {
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Current default_cos=%u, change to %u\n",
+ wanted_dcb_cfg->default_cos, (u8)fls(valid_cos_bitmap) - 1);
+ wanted_dcb_cfg->default_cos = (u8)fls(valid_cos_bitmap) - 1;
+ }
+
+ return 0;
+}
+
+static int dcb_mt_qos_map(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ const struct hinic3_mt_qos_dev_cfg *qos_in = buf_in;
+ struct hinic3_mt_qos_dev_cfg *qos_out = buf_out;
+ u8 i;
+ int err;
+
+ if (!buf_out || !out_size || !buf_in)
+ return -EINVAL;
+
+ if (*out_size != sizeof(*qos_out) || in_size != sizeof(*qos_in)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected buf size from user, in_size: %u, out_size: %u, expect: %lu\n",
+ in_size, *out_size, sizeof(*qos_in));
+ return -EINVAL;
+ }
+
+ memcpy(qos_out, qos_in, sizeof(*qos_in));
+ qos_out->head.status = 0;
+ if (qos_in->op_code & MT_DCB_OPCODE_WR) {
+ memcpy(&nic_dev->wanted_dcb_cfg, &nic_dev->hw_dcb_cfg,
+ sizeof(struct hinic3_dcb_config));
+ err = update_wanted_qos_cfg(nic_dev, &nic_dev->wanted_dcb_cfg, qos_in);
+ if (err) {
+ qos_out->head.status = MT_EINVAL;
+ return 0;
+ }
+
+ err = hinic3_dcbcfg_set_up_bitmap(nic_dev);
+ if (err)
+ qos_out->head.status = MT_EIO;
+ } else {
+ qos_out->dft_cos = nic_dev->hw_dcb_cfg.default_cos;
+ qos_out->trust = nic_dev->hw_dcb_cfg.trust;
+ for (i = 0; i < NIC_DCB_UP_MAX; i++)
+ qos_out->pcp2cos[i] = nic_dev->hw_dcb_cfg.pcp2cos[i];
+ for (i = 0; i < NIC_DCB_IP_PRI_MAX; i++)
+ qos_out->dscp2cos[i] = nic_dev->hw_dcb_cfg.dscp2cos[i];
+ }
+
+ return 0;
+}
+
+static int dcb_mt_dcb_state(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ const struct hinic3_mt_dcb_state *dcb_in = buf_in;
+ struct hinic3_mt_dcb_state *dcb_out = buf_out;
+ int err;
+ u8 user_cos_num;
+ u8 netif_run = 0;
+
+ if (!buf_in || !buf_out || !out_size)
+ return -EINVAL;
+
+ if (*out_size != sizeof(*dcb_out) || in_size != sizeof(*dcb_in)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected buf size from user, in_size: %u, out_size: %u, expect: %lu\n",
+ in_size, *out_size, sizeof(*dcb_in));
+ return -EINVAL;
+ }
+
+ user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+ memcpy(dcb_out, dcb_in, sizeof(*dcb_in));
+ dcb_out->head.status = 0;
+ if (dcb_in->op_code & MT_DCB_OPCODE_WR) {
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) == dcb_in->state)
+ return 0;
+
+ if (dcb_in->state) {
+ if (user_cos_num > nic_dev->q_params.num_qps) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+					  "cos num %u should not be more than channel num %u\n",
+ user_cos_num,
+ nic_dev->q_params.num_qps);
+
+ return -EOPNOTSUPP;
+ }
+ }
+
+ rtnl_lock();
+ if (netif_running(nic_dev->netdev)) {
+ netif_run = 1;
+ hinic3_vport_down(nic_dev);
+ }
+
+ err = hinic3_setup_cos(nic_dev->netdev, dcb_in->state ? user_cos_num : 0,
+ netif_run);
+ if (err)
+ goto setup_cos_fail;
+
+ if (netif_run) {
+ err = hinic3_vport_up(nic_dev);
+ if (err)
+ goto vport_up_fail;
+ }
+ rtnl_unlock();
+ } else {
+ dcb_out->state = !!test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags);
+ }
+
+ return 0;
+
+vport_up_fail:
+ hinic3_setup_cos(nic_dev->netdev, dcb_in->state ? 0 : user_cos_num, netif_run);
+
+setup_cos_fail:
+ if (netif_run)
+ hinic3_vport_up(nic_dev);
+ rtnl_unlock();
+
+ return err;
+}
+
+static int dcb_mt_hw_qos_get(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ const struct hinic3_mt_qos_cos_cfg *cos_cfg_in = buf_in;
+ struct hinic3_mt_qos_cos_cfg *cos_cfg_out = buf_out;
+
+ if (!buf_in || !buf_out || !out_size)
+ return -EINVAL;
+
+ if (*out_size != sizeof(*cos_cfg_out) || in_size != sizeof(*cos_cfg_in)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected buf size from user, in_size: %u, out_size: %u, expect: %lu\n",
+ in_size, *out_size, sizeof(*cos_cfg_in));
+ return -EINVAL;
+ }
+
+ memcpy(cos_cfg_out, cos_cfg_in, sizeof(*cos_cfg_in));
+ cos_cfg_out->head.status = 0;
+
+ cos_cfg_out->port_id = hinic3_physical_port_id(nic_dev->hwdev);
+ cos_cfg_out->func_cos_bitmap = (u8)nic_dev->func_dft_cos_bitmap;
+ cos_cfg_out->port_cos_bitmap = (u8)nic_dev->port_dft_cos_bitmap;
+ cos_cfg_out->func_max_cos_num = nic_dev->cos_config_num_max;
+
+ return 0;
+}
+
+static int get_inter_num(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ u16 intr_num;
+
+ intr_num = hinic3_intr_num(nic_dev->hwdev);
+
+ if (!buf_out || !out_size || *out_size != sizeof(u16)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(u16));
+ return -EFAULT;
+ }
+ *(u16 *)buf_out = intr_num;
+
+ return 0;
+}
+
+static int get_netdev_name(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+	if (!buf_out || !out_size)
+		return -EINVAL;
+
+	if (*out_size != IFNAMSIZ) {
+		nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected out buf size from user: %u, expect: %u\n",
+			  *out_size, IFNAMSIZ);
+		return -EFAULT;
+	}
+
+ strlcpy(buf_out, nic_dev->netdev->name, IFNAMSIZ);
+
+ return 0;
+}
+
+static int get_netdev_tx_timeout(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct net_device *net_dev = nic_dev->netdev;
+ int *tx_timeout = buf_out;
+
+ if (!buf_out || !out_size)
+ return -EINVAL;
+
+ if (*out_size != sizeof(int)) {
+		nicif_err(nic_dev, drv, net_dev, "Unexpected buf size from user, out_size: %u, expect: %lu\n",
+ *out_size, sizeof(int));
+ return -EINVAL;
+ }
+
+ *tx_timeout = net_dev->watchdog_timeo;
+
+ return 0;
+}
+
+static int set_netdev_tx_timeout(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct net_device *net_dev = nic_dev->netdev;
+ const int *tx_timeout = buf_in;
+
+ if (!buf_in)
+ return -EINVAL;
+
+ if (in_size != sizeof(int)) {
+		nicif_err(nic_dev, drv, net_dev, "Unexpected buf size from user, in_size: %u, expect: %lu\n",
+ in_size, sizeof(int));
+ return -EINVAL;
+ }
+
+ net_dev->watchdog_timeo = *tx_timeout * HZ;
+ nicif_info(nic_dev, drv, net_dev, "Set tx timeout check period to %ds\n", *tx_timeout);
+
+ return 0;
+}
+
+static int get_xsfp_present(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct mag_cmd_get_xsfp_present *sfp_abs = buf_out;
+
+ if (!buf_in || !buf_out || !out_size)
+ return -EINVAL;
+
+ if (*out_size != sizeof(*sfp_abs) || in_size != sizeof(*sfp_abs)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected buf size from user, in_size: %u, out_size: %u, expect: %lu\n",
+ in_size, *out_size, sizeof(*sfp_abs));
+ return -EINVAL;
+ }
+
+ sfp_abs->head.status = 0;
+ sfp_abs->abs_status = hinic3_if_sfp_absent(nic_dev->hwdev);
+
+ return 0;
+}
+
+static int get_xsfp_info(struct hinic3_nic_dev *nic_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct mag_cmd_get_xsfp_info *sfp_info = buf_out;
+ int err;
+
+ if (!buf_in || !buf_out || !out_size)
+ return -EINVAL;
+
+ if (*out_size != sizeof(*sfp_info) || in_size != sizeof(*sfp_info)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Unexpected buf size from user, in_size: %u, out_size: %u, expect: %lu\n",
+ in_size, *out_size, sizeof(*sfp_info));
+ return -EINVAL;
+ }
+
+	err = hinic3_get_sfp_info(nic_dev->hwdev, sfp_info);
+	if (err)
+		sfp_info->head.status = MT_EIO;
+
+	return 0;
+}
+
+static const struct nic_drv_module_handle nic_driv_module_cmd_handle[] = {
+ {TX_INFO, get_tx_info},
+ {Q_NUM, get_q_num},
+ {TX_WQE_INFO, get_tx_wqe_info},
+ {RX_INFO, get_rx_info},
+ {RX_WQE_INFO, get_rx_wqe_info},
+ {RX_CQE_INFO, get_rx_cqe_info},
+ {GET_INTER_NUM, get_inter_num},
+ {CLEAR_FUNC_STASTIC, clear_func_static},
+ {GET_LOOPBACK_MODE, get_loopback_mode},
+ {SET_LOOPBACK_MODE, set_loopback_mode},
+ {SET_LINK_MODE, set_link_mode},
+ {SET_PF_BW_LIMIT, set_pf_bw_limit},
+ {GET_PF_BW_LIMIT, get_pf_bw_limit},
+ {GET_SSET_COUNT, get_sset_count},
+ {GET_SSET_ITEMS, get_sset_stats},
+ {DCB_STATE, dcb_mt_dcb_state},
+ {QOS_DEV, dcb_mt_qos_map},
+ {GET_QOS_COS, dcb_mt_hw_qos_get},
+ {GET_ULD_DEV_NAME, get_netdev_name},
+ {GET_TX_TIMEOUT, get_netdev_tx_timeout},
+ {SET_TX_TIMEOUT, set_netdev_tx_timeout},
+ {GET_XSFP_PRESENT, get_xsfp_present},
+ {GET_XSFP_INFO, get_xsfp_info},
+};
+
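+/* dispatch an ioctl command to its handler via linear lookup in the
+ * table above; nic_mutex serializes handler execution
+ */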
+static int send_to_nic_driver(struct hinic3_nic_dev *nic_dev,
+ u32 cmd, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+	int index, num_cmds = ARRAY_SIZE(nic_driv_module_cmd_handle);
+ enum driver_cmd_type cmd_type = (enum driver_cmd_type)cmd;
+ int err = 0;
+
+ mutex_lock(&nic_dev->nic_mutex);
+ for (index = 0; index < num_cmds; index++) {
+ if (cmd_type ==
+ nic_driv_module_cmd_handle[index].driv_cmd_name) {
+ err = nic_driv_module_cmd_handle[index].driv_func
+ (nic_dev, buf_in,
+ in_size, buf_out, out_size);
+ break;
+ }
+ }
+ mutex_unlock(&nic_dev->nic_mutex);
+
+ if (index == num_cmds) {
+ pr_err("Can't find callback for %d\n", cmd_type);
+ return -EINVAL;
+ }
+
+ return err;
+}
+
+int nic_ioctl(void *uld_dev, u32 cmd, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
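+	/* GET_DRV_VERSION is the only command handled without a
+	 * device context
+	 */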
+ if (cmd == GET_DRV_VERSION)
+ return get_nic_drv_version(buf_out, out_size);
+ else if (!uld_dev)
+ return -EINVAL;
+
+ return send_to_nic_driver(uld_dev, cmd, buf_in,
+ in_size, buf_out, out_size);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_dcb.c b/drivers/net/ethernet/huawei/hinic3/hinic3_dcb.c
new file mode 100644
index 000000000000..a1fb4afb323e
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_dcb.c
@@ -0,0 +1,405 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+
+#include "hinic3_crm.h"
+#include "hinic3_lld.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_dcb.h"
+
+#define MAX_BW_PERCENT 100
+
+u8 hinic3_get_dev_user_cos_num(struct hinic3_nic_dev *nic_dev)
+{
+	if (nic_dev->hw_dcb_cfg.trust == DCB_PCP)
+		return nic_dev->hw_dcb_cfg.pcp_user_cos_num;
+	if (nic_dev->hw_dcb_cfg.trust == DCB_DSCP)
+		return nic_dev->hw_dcb_cfg.dscp_user_cos_num;
+ return 0;
+}
+
+u8 hinic3_get_dev_valid_cos_map(struct hinic3_nic_dev *nic_dev)
+{
+	if (nic_dev->hw_dcb_cfg.trust == DCB_PCP)
+		return nic_dev->hw_dcb_cfg.pcp_valid_cos_map;
+	if (nic_dev->hw_dcb_cfg.trust == DCB_DSCP)
+		return nic_dev->hw_dcb_cfg.dscp_valid_cos_map;
+ return 0;
+}
+
+void hinic3_update_qp_cos_cfg(struct hinic3_nic_dev *nic_dev, u8 num_cos)
+{
+ struct hinic3_dcb_config *dcb_cfg = &nic_dev->hw_dcb_cfg;
+ u8 i, remainder, num_sq_per_cos, cur_cos_num = 0;
+ u8 valid_cos_map = hinic3_get_dev_valid_cos_map(nic_dev);
+
+ if (num_cos == 0)
+ return;
+
+ num_sq_per_cos = (u8)(nic_dev->q_params.num_qps / num_cos);
+ if (num_sq_per_cos == 0)
+ return;
+
+	remainder = nic_dev->q_params.num_qps % num_cos;
+
+ memset(dcb_cfg->cos_qp_offset, 0, sizeof(dcb_cfg->cos_qp_offset));
+ memset(dcb_cfg->cos_qp_num, 0, sizeof(dcb_cfg->cos_qp_num));
+
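+	/* distribute qps evenly across the valid coses: every cos gets
+	 * num_sq_per_cos queues, and the first 'remainder' coses get one
+	 * extra queue so that all qps are assigned
+	 */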
+ for (i = 0; i < PCP_MAX_UP; i++) {
+ if (BIT(i) & valid_cos_map) {
+ u8 cos_qp_num = num_sq_per_cos;
+ u8 cos_qp_offset = (u8)(cur_cos_num * num_sq_per_cos);
+
+ if (cur_cos_num < remainder) {
+ cos_qp_num++;
+ cos_qp_offset += cur_cos_num;
+ } else {
+ cos_qp_offset += remainder;
+ }
+
+ cur_cos_num++;
+ valid_cos_map -= (u8)BIT(i);
+
+ dcb_cfg->cos_qp_offset[i] = cos_qp_offset;
+ dcb_cfg->cos_qp_num[i] = cos_qp_num;
+ hinic3_info(nic_dev, drv, "cos %u, cos_qp_offset=%u cos_qp_num=%u\n",
+ i, cos_qp_offset, cos_qp_num);
+ }
+ }
+
+ memcpy(nic_dev->wanted_dcb_cfg.cos_qp_offset, dcb_cfg->cos_qp_offset,
+ sizeof(dcb_cfg->cos_qp_offset));
+ memcpy(nic_dev->wanted_dcb_cfg.cos_qp_num, dcb_cfg->cos_qp_num,
+ sizeof(dcb_cfg->cos_qp_num));
+}
+
+void hinic3_update_tx_db_cos(struct hinic3_nic_dev *nic_dev, u8 dcb_en)
+{
+ u8 i;
+ u16 start_qid, q_num;
+
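+	/* first map every txq to the default cos; with DCB enabled,
+	 * remap each cos's qp range to its own cos
+	 */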
+ hinic3_set_txq_cos(nic_dev, 0, nic_dev->q_params.num_qps,
+ nic_dev->hw_dcb_cfg.default_cos);
+ if (!dcb_en)
+ return;
+
+ for (i = 0; i < NIC_DCB_COS_MAX; i++) {
+ q_num = (u16)nic_dev->hw_dcb_cfg.cos_qp_num[i];
+ if (q_num) {
+ start_qid = (u16)nic_dev->hw_dcb_cfg.cos_qp_offset[i];
+
+ hinic3_set_txq_cos(nic_dev, start_qid, q_num, i);
+ hinic3_info(nic_dev, drv, "update tx db cos, start_qid %u, q_num=%u cos=%u\n",
+ start_qid, q_num, i);
+ }
+ }
+}
+
+static int hinic3_set_tx_cos_state(struct hinic3_nic_dev *nic_dev, u8 dcb_en)
+{
+ struct hinic3_dcb_config *dcb_cfg = &nic_dev->hw_dcb_cfg;
+ struct hinic3_dcb_state dcb_state = {0};
+ u8 i;
+ int err;
+
+ if (HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ /* VF does not support DCB, use the default cos */
+ dcb_cfg->default_cos = (u8)fls(nic_dev->func_dft_cos_bitmap) - 1;
+
+ return 0;
+ }
+
+ dcb_state.dcb_on = dcb_en;
+ dcb_state.default_cos = dcb_cfg->default_cos;
+ dcb_state.trust = dcb_cfg->trust;
+
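+	/* mirror the configured priority-to-cos tables when DCB is enabled;
+	 * otherwise map every PCP/DSCP priority to the default cos
+	 */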
+ if (dcb_en) {
+ for (i = 0; i < NIC_DCB_COS_MAX; i++)
+ dcb_state.pcp2cos[i] = dcb_cfg->pcp2cos[i];
+ for (i = 0; i < NIC_DCB_IP_PRI_MAX; i++)
+ dcb_state.dscp2cos[i] = dcb_cfg->dscp2cos[i];
+ } else {
+ memset(dcb_state.pcp2cos, dcb_cfg->default_cos, sizeof(dcb_state.pcp2cos));
+ memset(dcb_state.dscp2cos, dcb_cfg->default_cos, sizeof(dcb_state.dscp2cos));
+ }
+
+ err = hinic3_set_dcb_state(nic_dev->hwdev, &dcb_state);
+ if (err)
+ hinic3_err(nic_dev, drv, "Failed to set dcb state\n");
+
+ return err;
+}
+
+static int hinic3_configure_dcb_hw(struct hinic3_nic_dev *nic_dev, u8 dcb_en)
+{
+ int err;
+ u8 user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+
+ err = hinic3_sync_dcb_state(nic_dev->hwdev, 1, dcb_en);
+ if (err) {
+ hinic3_err(nic_dev, drv, "Set dcb state failed\n");
+ return err;
+ }
+
+ hinic3_update_qp_cos_cfg(nic_dev, user_cos_num);
+ hinic3_update_tx_db_cos(nic_dev, dcb_en);
+
+ err = hinic3_set_tx_cos_state(nic_dev, dcb_en);
+ if (err) {
+ hinic3_err(nic_dev, drv, "Set tx cos state failed\n");
+ goto set_tx_cos_fail;
+ }
+
+ err = hinic3_rx_configure(nic_dev->netdev, dcb_en);
+ if (err) {
+ hinic3_err(nic_dev, drv, "rx configure failed\n");
+ goto rx_configure_fail;
+ }
+
+ if (dcb_en)
+ set_bit(HINIC3_DCB_ENABLE, &nic_dev->flags);
+ else
+ clear_bit(HINIC3_DCB_ENABLE, &nic_dev->flags);
+
+ return 0;
+rx_configure_fail:
+ hinic3_set_tx_cos_state(nic_dev, dcb_en ? 0 : 1);
+
+set_tx_cos_fail:
+ hinic3_update_tx_db_cos(nic_dev, dcb_en ? 0 : 1);
+ hinic3_sync_dcb_state(nic_dev->hwdev, 1, dcb_en ? 0 : 1);
+
+ return err;
+}
+
+int hinic3_setup_cos(struct net_device *netdev, u8 cos, u8 netif_run)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err;
+
+ if (cos && test_bit(HINIC3_SAME_RXTX, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, netdev, "Failed to enable DCB while Symmetric RSS is enabled\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (cos > nic_dev->cos_config_num_max) {
+ nicif_err(nic_dev, drv, netdev, "Invalid num_tc: %u, max cos: %u\n",
+ cos, nic_dev->cos_config_num_max);
+ return -EINVAL;
+ }
+
+ err = hinic3_configure_dcb_hw(nic_dev, cos ? 1 : 0);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static u8 get_cos_num(u8 hw_valid_cos_bitmap)
+{
+	/* number of set bits in the bitmap = number of supported coses */
+	return (u8)hweight8(hw_valid_cos_bitmap);
+}
+
+static void hinic3_sync_dcb_cfg(struct hinic3_nic_dev *nic_dev,
+ const struct hinic3_dcb_config *dcb_cfg)
+{
+ struct hinic3_dcb_config *hw_cfg = &nic_dev->hw_dcb_cfg;
+
+ memcpy(hw_cfg, dcb_cfg, sizeof(struct hinic3_dcb_config));
+}
+
+static int init_default_dcb_cfg(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dcb_config *dcb_cfg)
+{
+ u8 i, hw_dft_cos_map, port_cos_bitmap, dscp_ind;
+ int err;
+
+ err = hinic3_cos_valid_bitmap(nic_dev->hwdev, &hw_dft_cos_map, &port_cos_bitmap);
+ if (err) {
+		hinic3_err(nic_dev, drv, "No cos is supported\n");
+ return -EFAULT;
+ }
+ nic_dev->func_dft_cos_bitmap = hw_dft_cos_map;
+ nic_dev->port_dft_cos_bitmap = port_cos_bitmap;
+
+ nic_dev->cos_config_num_max = get_cos_num(hw_dft_cos_map);
+
+ dcb_cfg->trust = DCB_PCP;
+ dcb_cfg->pcp_user_cos_num = nic_dev->cos_config_num_max;
+ dcb_cfg->dscp_user_cos_num = nic_dev->cos_config_num_max;
+ dcb_cfg->default_cos = (u8)fls(nic_dev->func_dft_cos_bitmap) - 1;
+ dcb_cfg->pcp_valid_cos_map = hw_dft_cos_map;
+ dcb_cfg->dscp_valid_cos_map = hw_dft_cos_map;
+
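+	/* map each PCP priority to its own cos when valid in hw, otherwise
+	 * to the default cos; each block of NIC_DCB_DSCP_NUM DSCP values
+	 * inherits the mapping of its PCP priority
+	 */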
+ for (i = 0; i < NIC_DCB_COS_MAX; i++) {
+ dcb_cfg->pcp2cos[i] = hw_dft_cos_map & BIT(i) ? i : dcb_cfg->default_cos;
+ for (dscp_ind = 0; dscp_ind < NIC_DCB_COS_MAX; dscp_ind++)
+ dcb_cfg->dscp2cos[i * NIC_DCB_DSCP_NUM + dscp_ind] = dcb_cfg->pcp2cos[i];
+ }
+
+ return 0;
+}
+
+void hinic3_dcb_reset_hw_config(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_dcb_config dft_cfg = {0};
+
+ init_default_dcb_cfg(nic_dev, &dft_cfg);
+ hinic3_sync_dcb_cfg(nic_dev, &dft_cfg);
+
+ hinic3_info(nic_dev, drv, "Reset DCB configuration done\n");
+}
+
+int hinic3_configure_dcb(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err;
+
+ err = hinic3_sync_dcb_state(nic_dev->hwdev, 1,
+ test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) ? 1 : 0);
+ if (err) {
+ hinic3_err(nic_dev, drv, "Set dcb state failed\n");
+ return err;
+ }
+
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags))
+ hinic3_sync_dcb_cfg(nic_dev, &nic_dev->wanted_dcb_cfg);
+ else
+ hinic3_dcb_reset_hw_config(nic_dev);
+
+ return 0;
+}
+
+int hinic3_dcb_init(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_dcb_config *dcb_cfg = &nic_dev->hw_dcb_cfg;
+ int err;
+ u8 dcb_en = test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) ? 1 : 0;
+
+ if (HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ return hinic3_set_tx_cos_state(nic_dev, dcb_en);
+
+ err = init_default_dcb_cfg(nic_dev, dcb_cfg);
+ if (err) {
+ hinic3_err(nic_dev, drv, "Initialize dcb configuration failed\n");
+ return err;
+ }
+
+ memcpy(&nic_dev->wanted_dcb_cfg, &nic_dev->hw_dcb_cfg, sizeof(struct hinic3_dcb_config));
+
+ hinic3_info(nic_dev, drv, "Support num cos %u, default cos %u\n",
+ nic_dev->cos_config_num_max, dcb_cfg->default_cos);
+
+ err = hinic3_set_tx_cos_state(nic_dev, dcb_en);
+ if (err) {
+ hinic3_err(nic_dev, drv, "Set tx cos state failed\n");
+ return err;
+ }
+
+ sema_init(&nic_dev->dcb_sem, 1);
+
+ return 0;
+}
+
+static int change_qos_cfg(struct hinic3_nic_dev *nic_dev, const struct hinic3_dcb_config *dcb_cfg)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ int err = 0;
+ u8 user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+
+ if (test_and_set_bit(HINIC3_DCB_UP_COS_SETTING, &nic_dev->dcb_flags)) {
+ nicif_warn(nic_dev, drv, netdev,
+			   "Cos_up map setting is in process, please try again later\n");
+ return -EFAULT;
+ }
+
+ hinic3_sync_dcb_cfg(nic_dev, dcb_cfg);
+
+ hinic3_update_qp_cos_cfg(nic_dev, user_cos_num);
+
+ clear_bit(HINIC3_DCB_UP_COS_SETTING, &nic_dev->dcb_flags);
+
+ return err;
+}
+
+int hinic3_dcbcfg_set_up_bitmap(struct hinic3_nic_dev *nic_dev)
+{
+ int err, rollback_err;
+ u8 netif_run = 0;
+ struct hinic3_dcb_config old_dcb_cfg;
+ u8 user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+
+ memcpy(&old_dcb_cfg, &nic_dev->hw_dcb_cfg, sizeof(struct hinic3_dcb_config));
+
+ if (!memcmp(&nic_dev->wanted_dcb_cfg, &old_dcb_cfg, sizeof(struct hinic3_dcb_config))) {
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Same valid up bitmap, don't need to change anything\n");
+ return 0;
+ }
+
+ rtnl_lock();
+ if (netif_running(nic_dev->netdev)) {
+ netif_run = 1;
+ hinic3_vport_down(nic_dev);
+ }
+
+ err = change_qos_cfg(nic_dev, &nic_dev->wanted_dcb_cfg);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Set cos_up map to hw failed\n");
+ goto change_qos_cfg_fail;
+ }
+
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)) {
+ err = hinic3_setup_cos(nic_dev->netdev, user_cos_num, netif_run);
+ if (err)
+ goto set_err;
+ }
+
+ if (netif_run) {
+ err = hinic3_vport_up(nic_dev);
+ if (err)
+ goto vport_up_fail;
+ }
+
+ rtnl_unlock();
+
+ return 0;
+
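+	/* error unwind: revert to the previous qos configuration and
+	 * restore the vport state before dropping the rtnl lock
+	 */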
+vport_up_fail:
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags))
+		hinic3_setup_cos(nic_dev->netdev, 0, netif_run);
+
+set_err:
+ rollback_err = change_qos_cfg(nic_dev, &old_dcb_cfg);
+ if (rollback_err)
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to rollback qos configure\n");
+
+change_qos_cfg_fail:
+ if (netif_run)
+ hinic3_vport_up(nic_dev);
+
+ rtnl_unlock();
+
+ return err;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_dcb.h b/drivers/net/ethernet/huawei/hinic3/hinic3_dcb.h
new file mode 100644
index 000000000000..7987f563cfff
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_dcb.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_DCB_H
+#define HINIC3_DCB_H
+
+#include "ossl_knl.h"
+
+enum HINIC3_DCB_FLAGS {
+ HINIC3_DCB_UP_COS_SETTING,
+ HINIC3_DCB_TRAFFIC_STOPPED,
+};
+
+struct hinic3_cos_cfg {
+ u8 up;
+ u8 bw_pct;
+ u8 tc_id;
+ u8 prio_sp; /* 0 - DWRR, 1 - SP */
+};
+
+struct hinic3_tc_cfg {
+ u8 bw_pct;
+ u8 prio_sp; /* 0 - DWRR, 1 - SP */
+ u16 rsvd;
+};
+
+enum HINIC3_DCB_TRUST {
+ DCB_PCP,
+ DCB_DSCP,
+};
+
+#define PCP_MAX_UP 8
+#define DSCP_MAC_UP 64
+#define DBG_DFLT_DSCP_VAL 0xFF
+
+struct hinic3_dcb_config {
+ u8 trust; /* pcp, dscp */
+ u8 default_cos;
+ u8 pcp_user_cos_num;
+ u8 pcp_valid_cos_map;
+ u8 dscp_user_cos_num;
+ u8 dscp_valid_cos_map;
+ u8 pcp2cos[PCP_MAX_UP];
+ u8 dscp2cos[DSCP_MAC_UP];
+
+ u8 cos_qp_offset[NIC_DCB_COS_MAX];
+ u8 cos_qp_num[NIC_DCB_COS_MAX];
+};
+
+u8 hinic3_get_dev_user_cos_num(struct hinic3_nic_dev *nic_dev);
+u8 hinic3_get_dev_valid_cos_map(struct hinic3_nic_dev *nic_dev);
+int hinic3_dcb_init(struct hinic3_nic_dev *nic_dev);
+void hinic3_dcb_reset_hw_config(struct hinic3_nic_dev *nic_dev);
+int hinic3_configure_dcb(struct net_device *netdev);
+int hinic3_setup_cos(struct net_device *netdev, u8 cos, u8 netif_run);
+void hinic3_dcbcfg_set_pfc_state(struct hinic3_nic_dev *nic_dev, u8 pfc_state);
+u8 hinic3_dcbcfg_get_pfc_state(struct hinic3_nic_dev *nic_dev);
+void hinic3_dcbcfg_set_pfc_pri_en(struct hinic3_nic_dev *nic_dev,
+ u8 pfc_en_bitmap);
+u8 hinic3_dcbcfg_get_pfc_pri_en(struct hinic3_nic_dev *nic_dev);
+int hinic3_dcbcfg_set_ets_up_tc_map(struct hinic3_nic_dev *nic_dev,
+ const u8 *up_tc_map);
+void hinic3_dcbcfg_get_ets_up_tc_map(struct hinic3_nic_dev *nic_dev,
+ u8 *up_tc_map);
+int hinic3_dcbcfg_set_ets_tc_bw(struct hinic3_nic_dev *nic_dev,
+ const u8 *tc_bw);
+void hinic3_dcbcfg_get_ets_tc_bw(struct hinic3_nic_dev *nic_dev, u8 *tc_bw);
+void hinic3_dcbcfg_set_ets_tc_prio_type(struct hinic3_nic_dev *nic_dev,
+ u8 tc_prio_bitmap);
+void hinic3_dcbcfg_get_ets_tc_prio_type(struct hinic3_nic_dev *nic_dev,
+ u8 *tc_prio_bitmap);
+int hinic3_dcbcfg_set_up_bitmap(struct hinic3_nic_dev *nic_dev);
+void hinic3_update_tx_db_cos(struct hinic3_nic_dev *nic_dev, u8 dcb_en);
+
+void hinic3_update_qp_cos_cfg(struct hinic3_nic_dev *nic_dev, u8 num_cos);
+void hinic3_vport_down(struct hinic3_nic_dev *nic_dev);
+int hinic3_vport_up(struct hinic3_nic_dev *nic_dev);
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
new file mode 100644
index 000000000000..2b3561e5bca1
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool.c
@@ -0,0 +1,1331 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
+#include "hinic3_rss.h"
+
+#define COALESCE_ALL_QUEUE 0xFFFF
+#define COALESCE_PENDING_LIMIT_UNIT 8
+#define COALESCE_TIMER_CFG_UNIT 5
+#define COALESCE_MAX_PENDING_LIMIT (255 * COALESCE_PENDING_LIMIT_UNIT)
+#define COALESCE_MAX_TIMER_CFG (255 * COALESCE_TIMER_CFG_UNIT)
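+/* the hardware stores the coalesce timer in units of 5 us and the
+ * pending limit in units of 8 frames; ethtool values are converted
+ * using the unit defines above
+ */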
+#define HINIC3_WAIT_PKTS_TO_RX_BUFFER 200
+#define HINIC3_WAIT_CLEAR_LP_TEST 100
+
+#ifndef SET_ETHTOOL_OPS
+#define SET_ETHTOOL_OPS(netdev, ops) \
+ ((netdev)->ethtool_ops = (ops))
+#endif
+
+static void hinic3_get_drvinfo(struct net_device *netdev,
+ struct ethtool_drvinfo *info)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct pci_dev *pdev = nic_dev->pdev;
+ u8 mgmt_ver[HINIC3_MGMT_VERSION_MAX_LEN] = {0};
+ int err;
+
+ strlcpy(info->driver, HINIC3_NIC_DRV_NAME, sizeof(info->driver));
+ strlcpy(info->version, HINIC3_NIC_DRV_VERSION, sizeof(info->version));
+ strlcpy(info->bus_info, pci_name(pdev), sizeof(info->bus_info));
+
+ err = hinic3_get_mgmt_version(nic_dev->hwdev, mgmt_ver,
+ HINIC3_MGMT_VERSION_MAX_LEN,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to get fw version\n");
+ return;
+ }
+
+ err = snprintf(info->fw_version, sizeof(info->fw_version), "%s", mgmt_ver);
+ if (err < 0)
+ nicif_err(nic_dev, drv, netdev, "Failed to snprintf fw version\n");
+}
+
+static u32 hinic3_get_msglevel(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ return nic_dev->msg_enable;
+}
+
+static void hinic3_set_msglevel(struct net_device *netdev, u32 data)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ nic_dev->msg_enable = data;
+
+ nicif_info(nic_dev, drv, netdev, "Set message level: 0x%x\n", data);
+}
+
+static int hinic3_nway_reset(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct nic_port_info port_info = {0};
+ int err;
+
+ while (test_and_set_bit(HINIC3_AUTONEG_RESET, &nic_dev->flags))
+		msleep(100); /* wait 100 ms for a concurrent autoneg restart to finish */
+
+ err = hinic3_get_port_info(nic_dev->hwdev, &port_info, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Get port info failed\n");
+ err = -EFAULT;
+ goto reset_err;
+ }
+
+ if (port_info.autoneg_state != PORT_CFG_AN_ON) {
+		nicif_err(nic_dev, drv, netdev, "Autonegotiation is off, restarting it is not supported\n");
+ err = -EOPNOTSUPP;
+ goto reset_err;
+ }
+
+ err = hinic3_set_autoneg(nic_dev->hwdev, false);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Set autonegotiation off failed\n");
+ err = -EFAULT;
+ goto reset_err;
+ }
+
+	msleep(200); /* wait 200 ms for status polling to finish */
+
+ err = hinic3_set_autoneg(nic_dev->hwdev, true);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Set autonegotiation on failed\n");
+ err = -EFAULT;
+ goto reset_err;
+ }
+
+	msleep(200); /* wait 200 ms for status polling to finish */
+ nicif_info(nic_dev, drv, netdev, "Restart autonegotiation successfully\n");
+
+reset_err:
+ clear_bit(HINIC3_AUTONEG_RESET, &nic_dev->flags);
+ return err;
+}
+
+static void hinic3_get_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring,
+ struct kernel_ethtool_ringparam *kernel_ring,
+ struct netlink_ext_ack *extack)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ ring->rx_max_pending = HINIC3_MAX_RX_QUEUE_DEPTH;
+ ring->tx_max_pending = HINIC3_MAX_TX_QUEUE_DEPTH;
+ ring->rx_pending = nic_dev->rxqs[0].q_depth;
+ ring->tx_pending = nic_dev->txqs[0].q_depth;
+}
+
+static void hinic3_update_qp_depth(struct hinic3_nic_dev *nic_dev,
+ u32 sq_depth, u32 rq_depth)
+{
+ u16 i;
+
+ nic_dev->q_params.sq_depth = sq_depth;
+ nic_dev->q_params.rq_depth = rq_depth;
+ for (i = 0; i < nic_dev->max_qps; i++) {
+ nic_dev->txqs[i].q_depth = sq_depth;
+ nic_dev->txqs[i].q_mask = sq_depth - 1;
+ nic_dev->rxqs[i].q_depth = rq_depth;
+ nic_dev->rxqs[i].q_mask = rq_depth - 1;
+ }
+}
+
+static int check_ringparam_valid(struct net_device *netdev,
+ const struct ethtool_ringparam *ring)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ if (ring->rx_jumbo_pending || ring->rx_mini_pending) {
+ nicif_err(nic_dev, drv, netdev,
+ "Unsupported rx_jumbo_pending/rx_mini_pending\n");
+ return -EINVAL;
+ }
+
+ if (ring->tx_pending > HINIC3_MAX_TX_QUEUE_DEPTH ||
+ ring->tx_pending < HINIC3_MIN_QUEUE_DEPTH ||
+ ring->rx_pending > HINIC3_MAX_RX_QUEUE_DEPTH ||
+ ring->rx_pending < HINIC3_MIN_QUEUE_DEPTH) {
+ nicif_err(nic_dev, drv, netdev,
+			  "Queue depth out of range, tx[%d-%d] rx[%d-%d]\n",
+ HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_TX_QUEUE_DEPTH,
+ HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_RX_QUEUE_DEPTH);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hinic3_set_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring,
+ struct kernel_ethtool_ringparam *kernel_ring,
+ struct netlink_ext_ack *extack)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_dyna_txrxq_params q_params = {0};
+ u32 new_sq_depth, new_rq_depth;
+ int err;
+
+ err = check_ringparam_valid(netdev, ring);
+ if (err)
+ return err;
+
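+	/* queue depths must be powers of two; round the requested
+	 * values down to the nearest power of two
+	 */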
+ new_sq_depth = (u32)(1U << (u16)ilog2(ring->tx_pending));
+ new_rq_depth = (u32)(1U << (u16)ilog2(ring->rx_pending));
+ if (new_sq_depth == nic_dev->q_params.sq_depth &&
+ new_rq_depth == nic_dev->q_params.rq_depth)
+ return 0; /* nothing to do */
+
+ nicif_info(nic_dev, drv, netdev,
+ "Change Tx/Rx ring depth from %u/%u to %u/%u\n",
+ nic_dev->q_params.sq_depth, nic_dev->q_params.rq_depth,
+ new_sq_depth, new_rq_depth);
+
+ if (!netif_running(netdev)) {
+ hinic3_update_qp_depth(nic_dev, new_sq_depth, new_rq_depth);
+ } else {
+ q_params = nic_dev->q_params;
+ q_params.sq_depth = new_sq_depth;
+ q_params.rq_depth = new_rq_depth;
+ q_params.txqs_res = NULL;
+ q_params.rxqs_res = NULL;
+ q_params.irq_cfg = NULL;
+
+ nicif_info(nic_dev, drv, netdev, "Restarting channel\n");
+ err = hinic3_change_channel_settings(nic_dev, &q_params,
+ NULL, NULL);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to change channel settings\n");
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+static int get_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *coal, u16 queue)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_intr_coal_info *interrupt_info = NULL;
+
+ if (queue == COALESCE_ALL_QUEUE) {
+ /* get tx/rx irq0 as default parameters */
+ interrupt_info = &nic_dev->intr_coalesce[0];
+ } else {
+ if (queue >= nic_dev->q_params.num_qps) {
+ nicif_err(nic_dev, drv, netdev,
+ "Invalid queue_id: %u\n", queue);
+ return -EINVAL;
+ }
+ interrupt_info = &nic_dev->intr_coalesce[queue];
+ }
+
+	/* the coalesce timer is in units of 5 us */
+ coal->rx_coalesce_usecs = interrupt_info->coalesce_timer_cfg *
+ COALESCE_TIMER_CFG_UNIT;
+	/* the pending limit is in units of 8 frames */
+ coal->rx_max_coalesced_frames = interrupt_info->pending_limt *
+ COALESCE_PENDING_LIMIT_UNIT;
+
+ /* tx/rx use the same interrupt */
+ coal->tx_coalesce_usecs = coal->rx_coalesce_usecs;
+ coal->tx_max_coalesced_frames = coal->rx_max_coalesced_frames;
+ coal->use_adaptive_rx_coalesce = nic_dev->adaptive_rx_coal;
+
+ coal->pkt_rate_high = (u32)interrupt_info->pkt_rate_high;
+ coal->rx_coalesce_usecs_high = interrupt_info->rx_usecs_high *
+ COALESCE_TIMER_CFG_UNIT;
+ coal->rx_max_coalesced_frames_high =
+ interrupt_info->rx_pending_limt_high *
+ COALESCE_PENDING_LIMIT_UNIT;
+
+ coal->pkt_rate_low = (u32)interrupt_info->pkt_rate_low;
+ coal->rx_coalesce_usecs_low = interrupt_info->rx_usecs_low *
+ COALESCE_TIMER_CFG_UNIT;
+ coal->rx_max_coalesced_frames_low =
+ interrupt_info->rx_pending_limt_low *
+ COALESCE_PENDING_LIMIT_UNIT;
+
+ return 0;
+}
+
+static int set_queue_coalesce(struct hinic3_nic_dev *nic_dev, u16 q_id,
+ struct hinic3_intr_coal_info *coal)
+{
+ struct hinic3_intr_coal_info *intr_coal;
+ struct interrupt_info info = {0};
+ struct net_device *netdev = nic_dev->netdev;
+ int err;
+
+ intr_coal = &nic_dev->intr_coalesce[q_id];
+ if (intr_coal->coalesce_timer_cfg != coal->coalesce_timer_cfg ||
+ intr_coal->pending_limt != coal->pending_limt)
+ intr_coal->user_set_intr_coal_flag = 1;
+
+ intr_coal->coalesce_timer_cfg = coal->coalesce_timer_cfg;
+ intr_coal->pending_limt = coal->pending_limt;
+ intr_coal->pkt_rate_low = coal->pkt_rate_low;
+ intr_coal->rx_usecs_low = coal->rx_usecs_low;
+ intr_coal->rx_pending_limt_low = coal->rx_pending_limt_low;
+ intr_coal->pkt_rate_high = coal->pkt_rate_high;
+ intr_coal->rx_usecs_high = coal->rx_usecs_high;
+ intr_coal->rx_pending_limt_high = coal->rx_pending_limt_high;
+
+	/* if the netdev is not running or the qp is not in use,
+	 * there is no need to write the coalesce settings to hw
+	 */
+ if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags) ||
+ q_id >= nic_dev->q_params.num_qps || nic_dev->adaptive_rx_coal)
+ return 0;
+
+ info.msix_index = nic_dev->q_params.irq_cfg[q_id].msix_entry_idx;
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.coalesc_timer_cfg = intr_coal->coalesce_timer_cfg;
+ info.pending_limt = intr_coal->pending_limt;
+ info.resend_timer_cfg = intr_coal->resend_timer_cfg;
+ nic_dev->rxqs[q_id].last_coalesc_timer_cfg =
+ intr_coal->coalesce_timer_cfg;
+ nic_dev->rxqs[q_id].last_pending_limt = intr_coal->pending_limt;
+ err = hinic3_set_interrupt_cfg(nic_dev->hwdev, info,
+ HINIC3_CHANNEL_NIC);
+ if (err)
+ nicif_warn(nic_dev, drv, netdev,
+			   "Failed to set queue %u coalesce\n", q_id);
+
+ return err;
+}
+
+static int is_coalesce_exceed_limit(struct net_device *netdev,
+ const struct ethtool_coalesce *coal)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ if (coal->rx_coalesce_usecs > COALESCE_MAX_TIMER_CFG) {
+ nicif_err(nic_dev, drv, netdev,
+			  "rx_coalesce_usecs out of range [%d-%d]\n", 0,
+ COALESCE_MAX_TIMER_CFG);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->rx_max_coalesced_frames > COALESCE_MAX_PENDING_LIMIT) {
+ nicif_err(nic_dev, drv, netdev,
+			  "rx_max_coalesced_frames out of range [%d-%d]\n", 0,
+ COALESCE_MAX_PENDING_LIMIT);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->rx_coalesce_usecs_low > COALESCE_MAX_TIMER_CFG) {
+ nicif_err(nic_dev, drv, netdev,
+			  "rx_coalesce_usecs_low out of range [%d-%d]\n", 0,
+ COALESCE_MAX_TIMER_CFG);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->rx_max_coalesced_frames_low > COALESCE_MAX_PENDING_LIMIT) {
+ nicif_err(nic_dev, drv, netdev,
+			  "rx_max_coalesced_frames_low out of range [%d-%d]\n",
+ 0, COALESCE_MAX_PENDING_LIMIT);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->rx_coalesce_usecs_high > COALESCE_MAX_TIMER_CFG) {
+ nicif_err(nic_dev, drv, netdev,
+			  "rx_coalesce_usecs_high out of range [%d-%d]\n", 0,
+ COALESCE_MAX_TIMER_CFG);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->rx_max_coalesced_frames_high > COALESCE_MAX_PENDING_LIMIT) {
+ nicif_err(nic_dev, drv, netdev,
+			  "rx_max_coalesced_frames_high out of range [%d-%d]\n",
+ 0, COALESCE_MAX_PENDING_LIMIT);
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+static int is_coalesce_legal(struct net_device *netdev,
+ const struct ethtool_coalesce *coal)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct ethtool_coalesce tmp_coal = {0};
+ int err;
+
+ if (coal->rx_coalesce_usecs != coal->tx_coalesce_usecs) {
+ nicif_err(nic_dev, drv, netdev,
+ "tx-usecs must be equal to rx-usecs\n");
+ return -EINVAL;
+ }
+
+ if (coal->rx_max_coalesced_frames != coal->tx_max_coalesced_frames) {
+ nicif_err(nic_dev, drv, netdev,
+ "tx-frames must be equal to rx-frames\n");
+ return -EINVAL;
+ }
+
+ tmp_coal.cmd = coal->cmd;
+ tmp_coal.rx_coalesce_usecs = coal->rx_coalesce_usecs;
+ tmp_coal.rx_max_coalesced_frames = coal->rx_max_coalesced_frames;
+ tmp_coal.tx_coalesce_usecs = coal->tx_coalesce_usecs;
+ tmp_coal.tx_max_coalesced_frames = coal->tx_max_coalesced_frames;
+ tmp_coal.use_adaptive_rx_coalesce = coal->use_adaptive_rx_coalesce;
+
+ tmp_coal.pkt_rate_low = coal->pkt_rate_low;
+ tmp_coal.rx_coalesce_usecs_low = coal->rx_coalesce_usecs_low;
+ tmp_coal.rx_max_coalesced_frames_low =
+ coal->rx_max_coalesced_frames_low;
+
+ tmp_coal.pkt_rate_high = coal->pkt_rate_high;
+ tmp_coal.rx_coalesce_usecs_high = coal->rx_coalesce_usecs_high;
+ tmp_coal.rx_max_coalesced_frames_high =
+ coal->rx_max_coalesced_frames_high;
+
+ if (memcmp(coal, &tmp_coal, sizeof(struct ethtool_coalesce))) {
+ nicif_err(nic_dev, drv, netdev,
+ "Only support to change rx/tx-usecs and rx/tx-frames\n");
+ return -EOPNOTSUPP;
+ }
+
+ err = is_coalesce_exceed_limit(netdev, coal);
+ if (err)
+ return err;
+
+ if (coal->rx_coalesce_usecs_low / COALESCE_TIMER_CFG_UNIT >=
+ coal->rx_coalesce_usecs_high / COALESCE_TIMER_CFG_UNIT) {
+ nicif_err(nic_dev, drv, netdev,
+			  "coalesce_usecs_high (%u) must be greater than coalesce_usecs_low (%u) after dividing by the %d usecs unit\n",
+ coal->rx_coalesce_usecs_high,
+ coal->rx_coalesce_usecs_low,
+ COALESCE_TIMER_CFG_UNIT);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->rx_max_coalesced_frames_low / COALESCE_PENDING_LIMIT_UNIT >=
+ coal->rx_max_coalesced_frames_high / COALESCE_PENDING_LIMIT_UNIT) {
+ nicif_err(nic_dev, drv, netdev,
+			  "coalesced_frames_high (%u) must be greater than coalesced_frames_low (%u) after dividing by the %d frames unit\n",
+ coal->rx_max_coalesced_frames_high,
+ coal->rx_max_coalesced_frames_low,
+ COALESCE_PENDING_LIMIT_UNIT);
+ return -EOPNOTSUPP;
+ }
+
+ if (coal->pkt_rate_low >= coal->pkt_rate_high) {
+ nicif_err(nic_dev, drv, netdev,
+			  "pkt_rate_high (%u) must be greater than pkt_rate_low (%u)\n",
+ coal->pkt_rate_high,
+ coal->pkt_rate_low);
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
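+/* warn when a coalesce value is not a multiple of its hw unit (it is
+ * rounded down) and log every value that actually changes
+ */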
+#define CHECK_COALESCE_ALIGN(coal, item, unit) \
+do { \
+ if ((coal)->item % (unit)) \
+ nicif_warn(nic_dev, drv, netdev, \
+			   "%s is not a multiple of %d, rounding down to %u\n", \
+ #item, (unit), ((coal)->item - \
+ (coal)->item % (unit))); \
+} while (0)
+
+#define CHECK_COALESCE_CHANGED(coal, item, unit, ori_val, obj_str) \
+do { \
+ if (((coal)->item / (unit)) != (ori_val)) \
+ nicif_info(nic_dev, drv, netdev, \
+ "Change %s from %d to %u %s\n", \
+ #item, (ori_val) * (unit), \
+ ((coal)->item - (coal)->item % (unit)), \
+ (obj_str)); \
+} while (0)
+
+#define CHECK_PKT_RATE_CHANGED(coal, item, ori_val, obj_str) \
+do { \
+ if ((coal)->item != (ori_val)) \
+ nicif_info(nic_dev, drv, netdev, \
+ "Change %s from %llu to %u %s\n", \
+ #item, (ori_val), (coal)->item, (obj_str)); \
+} while (0)
+
+static int set_hw_coal_param(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_intr_coal_info *intr_coal, u16 queue)
+{
+ u16 i;
+
+ if (queue == COALESCE_ALL_QUEUE) {
+ for (i = 0; i < nic_dev->max_qps; i++)
+ set_queue_coalesce(nic_dev, i, intr_coal);
+ } else {
+ if (queue >= nic_dev->q_params.num_qps) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Invalid queue_id: %u\n", queue);
+ return -EINVAL;
+ }
+ set_queue_coalesce(nic_dev, queue, intr_coal);
+ }
+
+ return 0;
+}
+
+static int set_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *coal, u16 queue)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_intr_coal_info intr_coal = {0};
+ struct hinic3_intr_coal_info *ori_intr_coal = NULL;
+ u32 last_adaptive_rx;
+ char obj_str[32] = {0};
+ int err = 0;
+
+ err = is_coalesce_legal(netdev, coal);
+ if (err)
+ return err;
+
+ CHECK_COALESCE_ALIGN(coal, rx_coalesce_usecs, COALESCE_TIMER_CFG_UNIT);
+ CHECK_COALESCE_ALIGN(coal, rx_max_coalesced_frames,
+ COALESCE_PENDING_LIMIT_UNIT);
+ CHECK_COALESCE_ALIGN(coal, rx_coalesce_usecs_high,
+ COALESCE_TIMER_CFG_UNIT);
+ CHECK_COALESCE_ALIGN(coal, rx_max_coalesced_frames_high,
+ COALESCE_PENDING_LIMIT_UNIT);
+ CHECK_COALESCE_ALIGN(coal, rx_coalesce_usecs_low,
+ COALESCE_TIMER_CFG_UNIT);
+ CHECK_COALESCE_ALIGN(coal, rx_max_coalesced_frames_low,
+ COALESCE_PENDING_LIMIT_UNIT);
+
+ if (queue == COALESCE_ALL_QUEUE) {
+ ori_intr_coal = &nic_dev->intr_coalesce[0];
+ snprintf(obj_str, sizeof(obj_str), "for netdev");
+ } else {
+ ori_intr_coal = &nic_dev->intr_coalesce[queue];
+ snprintf(obj_str, sizeof(obj_str), "for queue %u", queue);
+ }
+ CHECK_COALESCE_CHANGED(coal, rx_coalesce_usecs, COALESCE_TIMER_CFG_UNIT,
+ ori_intr_coal->coalesce_timer_cfg, obj_str);
+ CHECK_COALESCE_CHANGED(coal, rx_max_coalesced_frames,
+ COALESCE_PENDING_LIMIT_UNIT,
+ ori_intr_coal->pending_limt, obj_str);
+ CHECK_PKT_RATE_CHANGED(coal, pkt_rate_high,
+ ori_intr_coal->pkt_rate_high, obj_str);
+ CHECK_COALESCE_CHANGED(coal, rx_coalesce_usecs_high,
+ COALESCE_TIMER_CFG_UNIT,
+ ori_intr_coal->rx_usecs_high, obj_str);
+ CHECK_COALESCE_CHANGED(coal, rx_max_coalesced_frames_high,
+ COALESCE_PENDING_LIMIT_UNIT,
+ ori_intr_coal->rx_pending_limt_high, obj_str);
+ CHECK_PKT_RATE_CHANGED(coal, pkt_rate_low,
+ ori_intr_coal->pkt_rate_low, obj_str);
+ CHECK_COALESCE_CHANGED(coal, rx_coalesce_usecs_low,
+ COALESCE_TIMER_CFG_UNIT,
+ ori_intr_coal->rx_usecs_low, obj_str);
+ CHECK_COALESCE_CHANGED(coal, rx_max_coalesced_frames_low,
+ COALESCE_PENDING_LIMIT_UNIT,
+ ori_intr_coal->rx_pending_limt_low, obj_str);
+
+ intr_coal.coalesce_timer_cfg =
+ (u8)(coal->rx_coalesce_usecs / COALESCE_TIMER_CFG_UNIT);
+ intr_coal.pending_limt = (u8)(coal->rx_max_coalesced_frames /
+ COALESCE_PENDING_LIMIT_UNIT);
+
+ last_adaptive_rx = nic_dev->adaptive_rx_coal;
+ nic_dev->adaptive_rx_coal = coal->use_adaptive_rx_coalesce;
+
+ intr_coal.pkt_rate_high = coal->pkt_rate_high;
+ intr_coal.rx_usecs_high =
+ (u8)(coal->rx_coalesce_usecs_high / COALESCE_TIMER_CFG_UNIT);
+ intr_coal.rx_pending_limt_high =
+ (u8)(coal->rx_max_coalesced_frames_high /
+ COALESCE_PENDING_LIMIT_UNIT);
+
+ intr_coal.pkt_rate_low = coal->pkt_rate_low;
+ intr_coal.rx_usecs_low =
+ (u8)(coal->rx_coalesce_usecs_low / COALESCE_TIMER_CFG_UNIT);
+ intr_coal.rx_pending_limt_low =
+ (u8)(coal->rx_max_coalesced_frames_low /
+ COALESCE_PENDING_LIMIT_UNIT);
+
+	/* a coalesce timer or pending limit of zero disables coalescing */
+ if (!nic_dev->adaptive_rx_coal &&
+ (!intr_coal.coalesce_timer_cfg || !intr_coal.pending_limt))
+ nicif_warn(nic_dev, drv, netdev, "Coalesce will be disabled\n");
+
+	/* ensure the coalesce parameters are not changed by the auto
+	 * moderation work
+	 */
+ if (HINIC3_CHANNEL_RES_VALID(nic_dev)) {
+ if (!nic_dev->adaptive_rx_coal)
+ cancel_delayed_work_sync(&nic_dev->moderation_task);
+ else if (!last_adaptive_rx)
+ queue_delayed_work(nic_dev->workq,
+ &nic_dev->moderation_task,
+ HINIC3_MODERATONE_DELAY);
+ }
+
+ return set_hw_coal_param(nic_dev, &intr_coal, queue);
+}
+
+static int hinic3_get_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *coal,
+ struct kernel_ethtool_coalesce *kernel_coal,
+ struct netlink_ext_ack *extack)
+{
+ return get_coalesce(netdev, coal, COALESCE_ALL_QUEUE);
+}
+
+static int hinic3_set_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *coal,
+ struct kernel_ethtool_coalesce *kernel_coal,
+ struct netlink_ext_ack *extack)
+{
+ return set_coalesce(netdev, coal, COALESCE_ALL_QUEUE);
+}
+
+#if defined(ETHTOOL_PERQUEUE) && defined(ETHTOOL_GCOALESCE)
+static int hinic3_get_per_queue_coalesce(struct net_device *netdev, u32 queue,
+ struct ethtool_coalesce *coal)
+{
+ return get_coalesce(netdev, coal, (u16)queue);
+}
+
+static int hinic3_set_per_queue_coalesce(struct net_device *netdev, u32 queue,
+ struct ethtool_coalesce *coal)
+{
+ return set_coalesce(netdev, coal, (u16)queue);
+}
+#endif
+
+#ifdef HAVE_ETHTOOL_SET_PHYS_ID
+static int hinic3_set_phys_id(struct net_device *netdev,
+ enum ethtool_phys_id_state state)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err;
+
+ switch (state) {
+ case ETHTOOL_ID_ACTIVE:
+ err = hinic3_set_led_status(nic_dev->hwdev,
+ MAG_CMD_LED_TYPE_ALARM,
+ MAG_CMD_LED_MODE_FORCE_BLINK_2HZ);
+ if (err)
+ nicif_err(nic_dev, drv, netdev,
+				  "Set LED blinking at 2Hz failed\n");
+ else
+ nicif_info(nic_dev, drv, netdev,
+				   "Set LED blinking at 2Hz succeeded\n");
+ break;
+
+ case ETHTOOL_ID_INACTIVE:
+ err = hinic3_set_led_status(nic_dev->hwdev,
+ MAG_CMD_LED_TYPE_ALARM,
+ MAG_CMD_LED_MODE_DEFAULT);
+ if (err)
+ nicif_err(nic_dev, drv, netdev,
+ "Reset LED to original status failed\n");
+ else
+ nicif_info(nic_dev, drv, netdev,
+				   "Reset LED to original status succeeded\n");
+ break;
+
+ default:
+ return -EOPNOTSUPP;
+ }
+
+ return err;
+}
+#else
+static int hinic3_phys_id(struct net_device *netdev, u32 data)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+	nicif_err(nic_dev, drv, netdev, "Setting phys id is not supported\n");
+
+ return -EOPNOTSUPP;
+}
+#endif
+
+static void hinic3_get_pauseparam(struct net_device *netdev,
+ struct ethtool_pauseparam *pause)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct nic_pause_config nic_pause = {0};
+ int err;
+
+ err = hinic3_get_pause_info(nic_dev->hwdev, &nic_pause);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to get pauseparam from hw\n");
+ } else {
+ pause->autoneg = nic_pause.auto_neg == PORT_CFG_AN_ON ?
+ AUTONEG_ENABLE : AUTONEG_DISABLE;
+ pause->rx_pause = nic_pause.rx_pause;
+ pause->tx_pause = nic_pause.tx_pause;
+ }
+}
+
+static int hinic3_set_pauseparam(struct net_device *netdev,
+ struct ethtool_pauseparam *pause)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct nic_pause_config nic_pause = {0};
+ struct nic_port_info port_info = {0};
+ u32 auto_neg;
+ int err;
+
+ err = hinic3_get_port_info(nic_dev->hwdev, &port_info,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to get auto-negotiation state\n");
+ return -EFAULT;
+ }
+
+ auto_neg = port_info.autoneg_state == PORT_CFG_AN_ON ? AUTONEG_ENABLE : AUTONEG_DISABLE;
+ if (pause->autoneg != auto_neg) {
+ nicif_err(nic_dev, drv, netdev,
+ "To change autoneg please use: ethtool -s <dev> autoneg <on|off>\n");
+ return -EOPNOTSUPP;
+ }
+
+ nic_pause.auto_neg = pause->autoneg == AUTONEG_ENABLE ? PORT_CFG_AN_ON : PORT_CFG_AN_OFF;
+ nic_pause.rx_pause = (u8)pause->rx_pause;
+ nic_pause.tx_pause = (u8)pause->tx_pause;
+
+ err = hinic3_set_pause_info(nic_dev->hwdev, nic_pause);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to set pauseparam\n");
+ return err;
+ }
+
+ nicif_info(nic_dev, drv, netdev, "Set pause options, tx: %s, rx: %s\n",
+ pause->tx_pause ? "on" : "off",
+ pause->rx_pause ? "on" : "off");
+
+ return 0;
+}
+
+#ifdef ETHTOOL_GMODULEEEPROM
+static int hinic3_get_module_info(struct net_device *netdev,
+ struct ethtool_modinfo *modinfo)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u8 sfp_type = 0;
+ u8 sfp_type_ext = 0;
+ int err;
+
+ err = hinic3_get_sfp_type(nic_dev->hwdev, &sfp_type, &sfp_type_ext);
+ if (err)
+ return err;
+
+ switch (sfp_type) {
+ case MODULE_TYPE_SFP:
+ modinfo->type = ETH_MODULE_SFF_8472;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
+ break;
+ case MODULE_TYPE_QSFP:
+ modinfo->type = ETH_MODULE_SFF_8436;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8436_MAX_LEN;
+ break;
+ case MODULE_TYPE_QSFP_PLUS:
+ if (sfp_type_ext >= 0x3) {
+ modinfo->type = ETH_MODULE_SFF_8636;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8636_MAX_LEN;
+ } else {
+ modinfo->type = ETH_MODULE_SFF_8436;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8436_MAX_LEN;
+ }
+ break;
+ case MODULE_TYPE_QSFP28:
+ modinfo->type = ETH_MODULE_SFF_8636;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8636_MAX_LEN;
+ break;
+ default:
+ nicif_warn(nic_dev, drv, netdev,
+ "Optical module unknown: 0x%x\n", sfp_type);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hinic3_get_module_eeprom(struct net_device *netdev,
+ struct ethtool_eeprom *ee, u8 *data)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u8 sfp_data[STD_SFP_INFO_MAX_SIZE];
+ int err;
+
+ if (!ee->len || ((ee->len + ee->offset) > STD_SFP_INFO_MAX_SIZE))
+ return -EINVAL;
+
+ memset(data, 0, ee->len);
+
+ err = hinic3_get_sfp_eeprom(nic_dev->hwdev, (u8 *)sfp_data, ee->len);
+ if (err)
+ return err;
+
+ memcpy(data, sfp_data + ee->offset, ee->len);
+
+ return 0;
+}
+#endif /* ETHTOOL_GMODULEEEPROM */
+
+#define HINIC3_PRIV_FLAGS_SYMM_RSS BIT(0)
+#define HINIC3_PRIV_FLAGS_LINK_UP BIT(1)
+#define HINIC3_PRIV_FLAGS_RXQ_RECOVERY BIT(2)
+
+static u32 hinic3_get_priv_flags(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u32 priv_flags = 0;
+
+ if (test_bit(HINIC3_SAME_RXTX, &nic_dev->flags))
+ priv_flags |= HINIC3_PRIV_FLAGS_SYMM_RSS;
+
+ if (test_bit(HINIC3_FORCE_LINK_UP, &nic_dev->flags))
+ priv_flags |= HINIC3_PRIV_FLAGS_LINK_UP;
+
+ if (test_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags))
+ priv_flags |= HINIC3_PRIV_FLAGS_RXQ_RECOVERY;
+
+ return priv_flags;
+}
+
+int hinic3_set_rxq_recovery_flag(struct net_device *netdev, u32 priv_flags)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ if (priv_flags & HINIC3_PRIV_FLAGS_RXQ_RECOVERY) {
+ if (!HINIC3_SUPPORT_RXQ_RECOVERY(nic_dev->hwdev)) {
+			nicif_info(nic_dev, drv, netdev, "Rxq recovery is not supported\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (test_and_set_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags))
+ return 0;
+ queue_delayed_work(nic_dev->workq, &nic_dev->rxq_check_work, HZ);
+ nicif_info(nic_dev, drv, netdev, "open rxq recovery\n");
+ } else {
+ if (!test_and_clear_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags))
+ return 0;
+ cancel_delayed_work_sync(&nic_dev->rxq_check_work);
+ nicif_info(nic_dev, drv, netdev, "close rxq recovery\n");
+ }
+
+ return 0;
+}
+
+static int hinic3_set_symm_rss_flag(struct net_device *netdev, u32 priv_flags)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ if (priv_flags & HINIC3_PRIV_FLAGS_SYMM_RSS) {
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, netdev, "Failed to open Symmetric RSS while DCB is enabled\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, netdev, "Failed to open Symmetric RSS while RSS is disabled\n");
+ return -EOPNOTSUPP;
+ }
+
+ set_bit(HINIC3_SAME_RXTX, &nic_dev->flags);
+ } else {
+ clear_bit(HINIC3_SAME_RXTX, &nic_dev->flags);
+ }
+
+ return 0;
+}
+
+static int hinic3_set_force_link_flag(struct net_device *netdev, u32 priv_flags)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u8 link_status = 0;
+ int err;
+
+ if (priv_flags & HINIC3_PRIV_FLAGS_LINK_UP) {
+ if (test_and_set_bit(HINIC3_FORCE_LINK_UP, &nic_dev->flags))
+ return 0;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev))
+ return 0;
+
+ if (netif_carrier_ok(netdev))
+ return 0;
+
+ nic_dev->link_status = true;
+ netif_carrier_on(netdev);
+ nicif_info(nic_dev, link, netdev, "Set link up\n");
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ hinic3_notify_all_vfs_link_changed(nic_dev->hwdev, nic_dev->link_status);
+ } else {
+ if (!test_and_clear_bit(HINIC3_FORCE_LINK_UP, &nic_dev->flags))
+ return 0;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev))
+ return 0;
+
+ err = hinic3_get_link_state(nic_dev->hwdev, &link_status);
+ if (err) {
+ nicif_err(nic_dev, link, netdev, "Get link state err: %d\n", err);
+ return err;
+ }
+
+ nic_dev->link_status = link_status;
+
+ if (link_status) {
+ if (netif_carrier_ok(netdev))
+ return 0;
+
+ netif_carrier_on(netdev);
+ nicif_info(nic_dev, link, netdev, "Link state is up\n");
+ } else {
+ if (!netif_carrier_ok(netdev))
+ return 0;
+
+ netif_carrier_off(netdev);
+ nicif_info(nic_dev, link, netdev, "Link state is down\n");
+ }
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ hinic3_notify_all_vfs_link_changed(nic_dev->hwdev, nic_dev->link_status);
+ }
+
+ return 0;
+}
+
+static int hinic3_set_priv_flags(struct net_device *netdev, u32 priv_flags)
+{
+ int err;
+
+ err = hinic3_set_symm_rss_flag(netdev, priv_flags);
+ if (err)
+ return err;
+
+ err = hinic3_set_rxq_recovery_flag(netdev, priv_flags);
+ if (err)
+ return err;
+
+ return hinic3_set_force_link_flag(netdev, priv_flags);
+}
+
+#define PORT_DOWN_ERR_IDX 0
+#define LP_DEFAULT_TIME 5 /* seconds */
+#define LP_PKT_LEN 60
+
+#define TEST_TIME_MULTIPLE 5
+static int hinic3_run_lp_test(struct hinic3_nic_dev *nic_dev, u32 test_time)
+{
+ u8 *lb_test_rx_buf = nic_dev->lb_test_rx_buf;
+ struct net_device *netdev = nic_dev->netdev;
+ u32 cnt = test_time * TEST_TIME_MULTIPLE;
+ struct sk_buff *skb_tmp = NULL;
+ struct ethhdr *eth_hdr = NULL;
+ struct sk_buff *skb = NULL;
+ u8 *test_data = NULL;
+ u32 i;
+ u8 j;
+
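+	/* build one template ARP-type frame with a known payload pattern,
+	 * send LP_PKT_CNT copies per round (each tagged with its index in
+	 * the last byte), then check that every frame looped back into the
+	 * RX buffer matches the template
+	 */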
+ skb_tmp = alloc_skb(LP_PKT_LEN, GFP_ATOMIC);
+ if (!skb_tmp) {
+ nicif_err(nic_dev, drv, netdev,
+ "Alloc xmit skb template failed for loopback test\n");
+ return -ENOMEM;
+ }
+
+ eth_hdr = __skb_put(skb_tmp, ETH_HLEN);
+ eth_hdr->h_proto = htons(ETH_P_ARP);
+ ether_addr_copy(eth_hdr->h_dest, nic_dev->netdev->dev_addr);
+ eth_zero_addr(eth_hdr->h_source);
+ skb_reset_mac_header(skb_tmp);
+
+ test_data = __skb_put(skb_tmp, LP_PKT_LEN - ETH_HLEN);
+	for (i = ETH_HLEN; i < LP_PKT_LEN; i++)
+		test_data[i - ETH_HLEN] = i & 0xFF;
+
+ skb_tmp->queue_mapping = 0;
+ skb_tmp->dev = netdev;
+ skb_tmp->protocol = htons(ETH_P_ARP);
+
+ for (i = 0; i < cnt; i++) {
+ nic_dev->lb_test_rx_idx = 0;
+ memset(lb_test_rx_buf, 0, LP_PKT_CNT * LP_PKT_LEN);
+
+ for (j = 0; j < LP_PKT_CNT; j++) {
+ skb = pskb_copy(skb_tmp, GFP_ATOMIC);
+ if (!skb) {
+ dev_kfree_skb_any(skb_tmp);
+ nicif_err(nic_dev, drv, netdev,
+ "Copy skb failed for loopback test\n");
+ return -ENOMEM;
+ }
+
+ /* mark index for every pkt */
+ skb->data[LP_PKT_LEN - 1] = j;
+
+ if (hinic3_lb_xmit_frame(skb, netdev)) {
+ dev_kfree_skb_any(skb);
+ dev_kfree_skb_any(skb_tmp);
+ nicif_err(nic_dev, drv, netdev,
+ "Xmit pkt failed for loopback test\n");
+ return -EBUSY;
+ }
+ }
+
+ /* wait till all pkts received to RX buffer */
+		/* wait until all packets have arrived in the RX buffer */
+
+ for (j = 0; j < LP_PKT_CNT; j++) {
+ if (memcmp((lb_test_rx_buf + (j * LP_PKT_LEN)),
+ skb_tmp->data, (LP_PKT_LEN - 1)) ||
+ (*(lb_test_rx_buf + ((j * LP_PKT_LEN) +
+ (LP_PKT_LEN - 1))) != j)) {
+ dev_kfree_skb_any(skb_tmp);
+ nicif_err(nic_dev, drv, netdev,
+ "Compare pkt failed in loopback test(index=0x%02x, data[%d]=0x%02x)\n",
+ (j + (i * LP_PKT_CNT)),
+ (LP_PKT_LEN - 1),
+ *(lb_test_rx_buf +
+ (((j * LP_PKT_LEN) +
+ (LP_PKT_LEN - 1)))));
+ return -EIO;
+ }
+ }
+ }
+
+ dev_kfree_skb_any(skb_tmp);
+	nicif_info(nic_dev, drv, netdev, "Loopback test succeeded\n");
+ return 0;
+}
+
+enum diag_test_index {
+ INTERNAL_LP_TEST = 0,
+ EXTERNAL_LP_TEST = 1,
+ DIAG_TEST_MAX = 2,
+};
+
+#define HINIC3_INTERNAL_LP_MODE 5
+static int do_lp_test(struct hinic3_nic_dev *nic_dev, u32 *flags, u32 test_time,
+ enum diag_test_index *test_index)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u8 *lb_test_rx_buf = NULL;
+ int err = 0;
+
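+	/* the internal test puts the port into loopback mode itself; the
+	 * external test relies on a loopback plug fitted by the user
+	 */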
+ if (!(*flags & ETH_TEST_FL_EXTERNAL_LB)) {
+ *test_index = INTERNAL_LP_TEST;
+ if (hinic3_set_loopback_mode(nic_dev->hwdev,
+ HINIC3_INTERNAL_LP_MODE, true)) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to set port loopback mode before loopback test\n");
+ return -EFAULT;
+ }
+
+		/* wait 5000 ms for the port to stop receiving frames */
+ msleep(5000);
+ } else {
+ *test_index = EXTERNAL_LP_TEST;
+ }
+
+ lb_test_rx_buf = vmalloc(LP_PKT_CNT * LP_PKT_LEN);
+ if (!lb_test_rx_buf) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to alloc RX buffer for loopback test\n");
+ err = -ENOMEM;
+ } else {
+ nic_dev->lb_test_rx_buf = lb_test_rx_buf;
+ nic_dev->lb_pkt_len = LP_PKT_LEN;
+ set_bit(HINIC3_LP_TEST, &nic_dev->flags);
+
+ if (hinic3_run_lp_test(nic_dev, test_time))
+ err = -EFAULT;
+
+ clear_bit(HINIC3_LP_TEST, &nic_dev->flags);
+ msleep(HINIC3_WAIT_CLEAR_LP_TEST);
+ vfree(lb_test_rx_buf);
+ nic_dev->lb_test_rx_buf = NULL;
+ }
+
+ if (!(*flags & ETH_TEST_FL_EXTERNAL_LB)) {
+ if (hinic3_set_loopback_mode(nic_dev->hwdev,
+ HINIC3_INTERNAL_LP_MODE, false)) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to cancel port loopback mode after loopback test\n");
+ err = -EFAULT;
+ }
+ } else {
+ *flags |= ETH_TEST_FL_EXTERNAL_LB_DONE;
+ }
+
+ return err;
+}
+
+static void hinic3_lp_test(struct net_device *netdev, struct ethtool_test *eth_test,
+ u64 *data, u32 test_time)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ enum diag_test_index test_index = 0;
+ u8 link_status = 0;
+ int err;
+ u32 test_time_real = test_time;
+
+	/* loopback test is not supported when the netdev is closed */
+	if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags)) {
+		nicif_err(nic_dev, drv, netdev,
+			  "Loopback test is not supported when the netdev is closed\n");
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+ data[PORT_DOWN_ERR_IDX] = 1;
+ return;
+ }
+ if (test_time_real == 0)
+ test_time_real = LP_DEFAULT_TIME;
+
+ netif_carrier_off(netdev);
+ netif_tx_disable(netdev);
+
+	err = do_lp_test(nic_dev, &eth_test->flags, test_time_real, &test_index);
+ if (err) {
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+ data[test_index] = 1;
+ }
+
+ netif_tx_wake_all_queues(netdev);
+
+ err = hinic3_get_link_state(nic_dev->hwdev, &link_status);
+ if (!err && link_status)
+ netif_carrier_on(netdev);
+}
+
+static void hinic3_diag_test(struct net_device *netdev,
+ struct ethtool_test *eth_test, u64 *data)
+{
+ memset(data, 0, DIAG_TEST_MAX * sizeof(u64));
+
+ hinic3_lp_test(netdev, eth_test, data, 0);
+}
+
+static const struct ethtool_ops hinic3_ethtool_ops = {
+#ifdef SUPPORTED_COALESCE_PARAMS
+ .supported_coalesce_params = ETHTOOL_COALESCE_USECS |
+ ETHTOOL_COALESCE_PKT_RATE_RX_USECS,
+#endif
+#ifdef ETHTOOL_GLINKSETTINGS
+#ifndef XENSERVER_HAVE_NEW_ETHTOOL_OPS
+ .get_link_ksettings = hinic3_get_link_ksettings,
+ .set_link_ksettings = hinic3_set_link_ksettings,
+#endif
+#endif
+#ifndef HAVE_NEW_ETHTOOL_LINK_SETTINGS_ONLY
+ .get_settings = hinic3_get_settings,
+ .set_settings = hinic3_set_settings,
+#endif
+
+ .get_drvinfo = hinic3_get_drvinfo,
+ .get_msglevel = hinic3_get_msglevel,
+ .set_msglevel = hinic3_set_msglevel,
+ .nway_reset = hinic3_nway_reset,
+#ifdef CONFIG_MODULE_PROF
+ .get_link = hinic3_get_link,
+#else
+ .get_link = ethtool_op_get_link,
+#endif
+ .get_ringparam = hinic3_get_ringparam,
+ .set_ringparam = hinic3_set_ringparam,
+ .get_pauseparam = hinic3_get_pauseparam,
+ .set_pauseparam = hinic3_set_pauseparam,
+ .get_sset_count = hinic3_get_sset_count,
+ .get_ethtool_stats = hinic3_get_ethtool_stats,
+ .get_strings = hinic3_get_strings,
+
+ .self_test = hinic3_diag_test,
+
+#ifndef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+#ifdef HAVE_ETHTOOL_SET_PHYS_ID
+ .set_phys_id = hinic3_set_phys_id,
+#else
+ .phys_id = hinic3_phys_id,
+#endif
+#endif
+
+ .get_coalesce = hinic3_get_coalesce,
+ .set_coalesce = hinic3_set_coalesce,
+#if defined(ETHTOOL_PERQUEUE) && defined(ETHTOOL_GCOALESCE)
+ .get_per_queue_coalesce = hinic3_get_per_queue_coalesce,
+ .set_per_queue_coalesce = hinic3_set_per_queue_coalesce,
+#endif
+
+ .get_rxnfc = hinic3_get_rxnfc,
+ .set_rxnfc = hinic3_set_rxnfc,
+ .get_priv_flags = hinic3_get_priv_flags,
+ .set_priv_flags = hinic3_set_priv_flags,
+
+#ifndef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+ .get_channels = hinic3_get_channels,
+ .set_channels = hinic3_set_channels,
+
+#ifdef ETHTOOL_GMODULEEEPROM
+ .get_module_info = hinic3_get_module_info,
+ .get_module_eeprom = hinic3_get_module_eeprom,
+#endif
+
+#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
+ .get_rxfh_indir_size = hinic3_get_rxfh_indir_size,
+#endif
+
+#if defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)
+ .get_rxfh_key_size = hinic3_get_rxfh_key_size,
+ .get_rxfh = hinic3_get_rxfh,
+ .set_rxfh = hinic3_set_rxfh,
+#else
+ .get_rxfh_indir = hinic3_get_rxfh_indir,
+ .set_rxfh_indir = hinic3_set_rxfh_indir,
+#endif
+
+#endif /* HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT */
+};
+
+#ifdef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+static const struct ethtool_ops_ext hinic3_ethtool_ops_ext = {
+ .size = sizeof(struct ethtool_ops_ext),
+ .set_phys_id = hinic3_set_phys_id,
+ .get_channels = hinic3_get_channels,
+ .set_channels = hinic3_set_channels,
+#ifdef ETHTOOL_GMODULEEEPROM
+ .get_module_info = hinic3_get_module_info,
+ .get_module_eeprom = hinic3_get_module_eeprom,
+#endif
+
+#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
+ .get_rxfh_indir_size = hinic3_get_rxfh_indir_size,
+#endif
+
+#if defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)
+ .get_rxfh_key_size = hinic3_get_rxfh_key_size,
+ .get_rxfh = hinic3_get_rxfh,
+ .set_rxfh = hinic3_set_rxfh,
+#else
+ .get_rxfh_indir = hinic3_get_rxfh_indir,
+ .set_rxfh_indir = hinic3_set_rxfh_indir,
+#endif
+
+};
+#endif /* HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT */
+
+static const struct ethtool_ops hinic3vf_ethtool_ops = {
+#ifdef SUPPORTED_COALESCE_PARAMS
+ .supported_coalesce_params = ETHTOOL_COALESCE_USECS |
+ ETHTOOL_COALESCE_PKT_RATE_RX_USECS,
+#endif
+#ifdef ETHTOOL_GLINKSETTINGS
+#ifndef XENSERVER_HAVE_NEW_ETHTOOL_OPS
+ .get_link_ksettings = hinic3_get_link_ksettings,
+#endif
+#else
+ .get_settings = hinic3_get_settings,
+#endif
+ .get_drvinfo = hinic3_get_drvinfo,
+ .get_msglevel = hinic3_get_msglevel,
+ .set_msglevel = hinic3_set_msglevel,
+ .get_link = ethtool_op_get_link,
+ .get_ringparam = hinic3_get_ringparam,
+
+ .set_ringparam = hinic3_set_ringparam,
+ .get_sset_count = hinic3_get_sset_count,
+ .get_ethtool_stats = hinic3_get_ethtool_stats,
+ .get_strings = hinic3_get_strings,
+
+ .get_coalesce = hinic3_get_coalesce,
+ .set_coalesce = hinic3_set_coalesce,
+#if defined(ETHTOOL_PERQUEUE) && defined(ETHTOOL_GCOALESCE)
+ .get_per_queue_coalesce = hinic3_get_per_queue_coalesce,
+ .set_per_queue_coalesce = hinic3_set_per_queue_coalesce,
+#endif
+
+ .get_rxnfc = hinic3_get_rxnfc,
+ .set_rxnfc = hinic3_set_rxnfc,
+ .get_priv_flags = hinic3_get_priv_flags,
+ .set_priv_flags = hinic3_set_priv_flags,
+
+#ifndef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+ .get_channels = hinic3_get_channels,
+ .set_channels = hinic3_set_channels,
+
+#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
+ .get_rxfh_indir_size = hinic3_get_rxfh_indir_size,
+#endif
+
+#if defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)
+ .get_rxfh_key_size = hinic3_get_rxfh_key_size,
+ .get_rxfh = hinic3_get_rxfh,
+ .set_rxfh = hinic3_set_rxfh,
+#else
+ .get_rxfh_indir = hinic3_get_rxfh_indir,
+ .set_rxfh_indir = hinic3_set_rxfh_indir,
+#endif
+
+#endif /* HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT */
+};
+
+#ifdef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+static const struct ethtool_ops_ext hinic3vf_ethtool_ops_ext = {
+ .size = sizeof(struct ethtool_ops_ext),
+ .get_channels = hinic3_get_channels,
+ .set_channels = hinic3_set_channels,
+
+#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
+ .get_rxfh_indir_size = hinic3_get_rxfh_indir_size,
+#endif
+
+#if defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)
+ .get_rxfh_key_size = hinic3_get_rxfh_key_size,
+ .get_rxfh = hinic3_get_rxfh,
+ .set_rxfh = hinic3_set_rxfh,
+#else
+ .get_rxfh_indir = hinic3_get_rxfh_indir,
+ .set_rxfh_indir = hinic3_set_rxfh_indir,
+#endif
+
+};
+#endif /* HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT */
+
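+/* PF netdevs use hinic3_ethtool_ops; VF netdevs use the reduced
+ * hinic3vf_ethtool_ops, which drops PF-only operations such as link
+ * settings changes, module EEPROM reads and LED identify
+ */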
+void hinic3_set_ethtool_ops(struct net_device *netdev)
+{
+ SET_ETHTOOL_OPS(netdev, &hinic3_ethtool_ops);
+#ifdef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+ set_ethtool_ops_ext(netdev, &hinic3_ethtool_ops_ext);
+#endif /* HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT */
+}
+
+void hinic3vf_set_ethtool_ops(struct net_device *netdev)
+{
+ SET_ETHTOOL_OPS(netdev, &hinic3vf_ethtool_ops);
+#ifdef HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT
+ set_ethtool_ops_ext(netdev, &hinic3vf_ethtool_ops_ext);
+#endif /* HAVE_RHEL6_ETHTOOL_OPS_EXT_STRUCT */
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool_stats.c b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool_stats.c
new file mode 100644
index 000000000000..de59b7668254
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_ethtool_stats.c
@@ -0,0 +1,1233 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_crm.h"
+#include "hinic3_mt.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
+
+#define FPGA_PORT_COUNTER 0
+#define EVB_PORT_COUNTER 1
+u16 mag_support_mode = EVB_PORT_COUNTER;
+module_param(mag_support_mode, ushort, 0444);
+MODULE_PARM_DESC(mag_support_mode, "Set mag port counter support mode, 0:FPGA 1:EVB, default is 1");
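+/* e.g. "modprobe hinic3 mag_support_mode=0" (assuming the module is named
+ * hinic3) selects the FPGA counter set at load time
+ */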
+
+struct hinic3_stats {
+ char name[ETH_GSTRING_LEN];
+ u32 size;
+ int offset;
+};
+
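+/* Each descriptor macro below records a counter's printable name, width and
+ * byte offset within its source struct, so the dump helpers can copy values
+ * generically; e.g. HINIC3_NETDEV_STAT(rx_packets) describes
+ * rtnl_link_stats64.rx_packets
+ */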
+#define HINIC3_NETDEV_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct rtnl_link_stats64, _stat_item), \
+ .offset = offsetof(struct rtnl_link_stats64, _stat_item) \
+}
+
+static struct hinic3_stats hinic3_netdev_stats[] = {
+ HINIC3_NETDEV_STAT(rx_packets),
+ HINIC3_NETDEV_STAT(tx_packets),
+ HINIC3_NETDEV_STAT(rx_bytes),
+ HINIC3_NETDEV_STAT(tx_bytes),
+ HINIC3_NETDEV_STAT(rx_errors),
+ HINIC3_NETDEV_STAT(tx_errors),
+ HINIC3_NETDEV_STAT(rx_dropped),
+ HINIC3_NETDEV_STAT(tx_dropped),
+ HINIC3_NETDEV_STAT(multicast),
+ HINIC3_NETDEV_STAT(collisions),
+ HINIC3_NETDEV_STAT(rx_length_errors),
+ HINIC3_NETDEV_STAT(rx_over_errors),
+ HINIC3_NETDEV_STAT(rx_crc_errors),
+ HINIC3_NETDEV_STAT(rx_frame_errors),
+ HINIC3_NETDEV_STAT(rx_fifo_errors),
+ HINIC3_NETDEV_STAT(rx_missed_errors),
+ HINIC3_NETDEV_STAT(tx_aborted_errors),
+ HINIC3_NETDEV_STAT(tx_carrier_errors),
+ HINIC3_NETDEV_STAT(tx_fifo_errors),
+ HINIC3_NETDEV_STAT(tx_heartbeat_errors),
+};
+
+#define HINIC3_NIC_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic3_nic_stats, _stat_item), \
+ .offset = offsetof(struct hinic3_nic_stats, _stat_item) \
+}
+
+static struct hinic3_stats hinic3_nic_dev_stats[] = {
+ HINIC3_NIC_STAT(netdev_tx_timeout),
+};
+
+static struct hinic3_stats hinic3_nic_dev_stats_extern[] = {
+ HINIC3_NIC_STAT(tx_carrier_off_drop),
+ HINIC3_NIC_STAT(tx_invalid_qid),
+ HINIC3_NIC_STAT(rsvd1),
+ HINIC3_NIC_STAT(rsvd2),
+};
+
+#define HINIC3_RXQ_STAT(_stat_item) { \
+ .name = "rxq%d_"#_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic3_rxq_stats, _stat_item), \
+ .offset = offsetof(struct hinic3_rxq_stats, _stat_item) \
+}
+
+#define HINIC3_TXQ_STAT(_stat_item) { \
+ .name = "txq%d_"#_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic3_txq_stats, _stat_item), \
+ .offset = offsetof(struct hinic3_txq_stats, _stat_item) \
+}
+
+/*lint -save -e786*/
+static struct hinic3_stats hinic3_rx_queue_stats[] = {
+ HINIC3_RXQ_STAT(packets),
+ HINIC3_RXQ_STAT(bytes),
+ HINIC3_RXQ_STAT(errors),
+ HINIC3_RXQ_STAT(csum_errors),
+ HINIC3_RXQ_STAT(other_errors),
+ HINIC3_RXQ_STAT(dropped),
+#ifdef HAVE_XDP_SUPPORT
+ HINIC3_RXQ_STAT(xdp_dropped),
+#endif
+ HINIC3_RXQ_STAT(rx_buf_empty),
+};
+
+static struct hinic3_stats hinic3_rx_queue_stats_extern[] = {
+ HINIC3_RXQ_STAT(alloc_skb_err),
+ HINIC3_RXQ_STAT(alloc_rx_buf_err),
+ HINIC3_RXQ_STAT(xdp_large_pkt),
+ HINIC3_RXQ_STAT(restore_drop_sge),
+ HINIC3_RXQ_STAT(rsvd2),
+};
+
+static struct hinic3_stats hinic3_tx_queue_stats[] = {
+ HINIC3_TXQ_STAT(packets),
+ HINIC3_TXQ_STAT(bytes),
+ HINIC3_TXQ_STAT(busy),
+ HINIC3_TXQ_STAT(wake),
+ HINIC3_TXQ_STAT(dropped),
+};
+
+static struct hinic3_stats hinic3_tx_queue_stats_extern[] = {
+ HINIC3_TXQ_STAT(skb_pad_err),
+ HINIC3_TXQ_STAT(frag_len_overflow),
+ HINIC3_TXQ_STAT(offload_cow_skb_err),
+ HINIC3_TXQ_STAT(map_frag_err),
+ HINIC3_TXQ_STAT(unknown_tunnel_pkt),
+ HINIC3_TXQ_STAT(frag_size_err),
+ HINIC3_TXQ_STAT(rsvd1),
+ HINIC3_TXQ_STAT(rsvd2),
+};
+
+/*lint -restore*/
+
+#define HINIC3_FUNC_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct hinic3_vport_stats, _stat_item), \
+ .offset = offsetof(struct hinic3_vport_stats, _stat_item) \
+}
+
+static struct hinic3_stats hinic3_function_stats[] = {
+ HINIC3_FUNC_STAT(tx_unicast_pkts_vport),
+ HINIC3_FUNC_STAT(tx_unicast_bytes_vport),
+ HINIC3_FUNC_STAT(tx_multicast_pkts_vport),
+ HINIC3_FUNC_STAT(tx_multicast_bytes_vport),
+ HINIC3_FUNC_STAT(tx_broadcast_pkts_vport),
+ HINIC3_FUNC_STAT(tx_broadcast_bytes_vport),
+
+ HINIC3_FUNC_STAT(rx_unicast_pkts_vport),
+ HINIC3_FUNC_STAT(rx_unicast_bytes_vport),
+ HINIC3_FUNC_STAT(rx_multicast_pkts_vport),
+ HINIC3_FUNC_STAT(rx_multicast_bytes_vport),
+ HINIC3_FUNC_STAT(rx_broadcast_pkts_vport),
+ HINIC3_FUNC_STAT(rx_broadcast_bytes_vport),
+
+ HINIC3_FUNC_STAT(tx_discard_vport),
+ HINIC3_FUNC_STAT(rx_discard_vport),
+ HINIC3_FUNC_STAT(tx_err_vport),
+ HINIC3_FUNC_STAT(rx_err_vport),
+};
+
+#define HINIC3_PORT_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .size = FIELD_SIZEOF(struct mag_cmd_port_stats, _stat_item), \
+ .offset = offsetof(struct mag_cmd_port_stats, _stat_item) \
+}
+
+static struct hinic3_stats hinic3_port_stats[] = {
+ HINIC3_PORT_STAT(mac_tx_fragment_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_undersize_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_undermin_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_64_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_65_127_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_128_255_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_256_511_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_512_1023_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_1024_1518_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_1519_2047_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_2048_4095_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_4096_8191_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_8192_9216_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_9217_12287_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_12288_16383_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_1519_max_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_1519_max_good_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_oversize_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_jabber_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_bad_oct_num),
+ HINIC3_PORT_STAT(mac_tx_good_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_good_oct_num),
+ HINIC3_PORT_STAT(mac_tx_total_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_total_oct_num),
+ HINIC3_PORT_STAT(mac_tx_uni_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_multi_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_broad_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pause_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri0_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri1_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri2_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri3_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri4_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri5_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri6_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri7_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_control_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_err_all_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_from_app_good_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_from_app_bad_pkt_num),
+
+ HINIC3_PORT_STAT(mac_rx_fragment_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_undersize_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_undermin_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_64_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_65_127_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_128_255_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_256_511_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_512_1023_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_1024_1518_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_1519_2047_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_2048_4095_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_4096_8191_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_8192_9216_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_9217_12287_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_12288_16383_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_1519_max_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_1519_max_good_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_oversize_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_jabber_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_bad_oct_num),
+ HINIC3_PORT_STAT(mac_rx_good_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_good_oct_num),
+ HINIC3_PORT_STAT(mac_rx_total_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_total_oct_num),
+ HINIC3_PORT_STAT(mac_rx_uni_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_multi_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_broad_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pause_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri0_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri1_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri2_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri3_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri4_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri5_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri6_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri7_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_control_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_sym_err_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_fcs_err_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_send_app_good_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_send_app_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_unfilter_pkt_num),
+};
+
+#define HINIC3_FPGA_PORT_STAT(_stat_item) {	\
+	.name = #_stat_item, \
+	.size = FIELD_SIZEOF(struct hinic3_phy_fpga_port_stats, _stat_item), \
+	.offset = offsetof(struct hinic3_phy_fpga_port_stats, _stat_item) \
+}
+
+static struct hinic3_stats g_hinic3_fpga_port_stats[] = {
+	HINIC3_FPGA_PORT_STAT(mac_rx_total_octs_port),
+	HINIC3_FPGA_PORT_STAT(mac_tx_total_octs_port),
+	HINIC3_FPGA_PORT_STAT(mac_rx_under_frame_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_rx_frag_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_rx_64_oct_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_rx_127_oct_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_rx_255_oct_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_rx_511_oct_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_rx_1023_oct_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_rx_max_oct_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_rx_over_oct_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_tx_64_oct_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_tx_127_oct_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_tx_255_oct_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_tx_511_oct_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_tx_1023_oct_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_tx_max_oct_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_tx_over_oct_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_rx_good_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_rx_crc_error_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_rx_broadcast_ok_port),
+	HINIC3_FPGA_PORT_STAT(mac_rx_multicast_ok_port),
+	HINIC3_FPGA_PORT_STAT(mac_rx_mac_frame_ok_port),
+	HINIC3_FPGA_PORT_STAT(mac_rx_length_err_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_rx_vlan_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_rx_pause_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_rx_unknown_mac_frame_port),
+	HINIC3_FPGA_PORT_STAT(mac_tx_good_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_tx_broadcast_ok_port),
+	HINIC3_FPGA_PORT_STAT(mac_tx_multicast_ok_port),
+	HINIC3_FPGA_PORT_STAT(mac_tx_underrun_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_tx_mac_frame_ok_port),
+	HINIC3_FPGA_PORT_STAT(mac_tx_vlan_pkts_port),
+	HINIC3_FPGA_PORT_STAT(mac_tx_pause_pkts_port),
+};
+
+static char g_hinic3_priv_flags_strings[][ETH_GSTRING_LEN] = {
+ "Symmetric-RSS",
+ "Force-Link-up",
+ "Rxq_Recovery",
+};
+
+u32 hinic3_get_io_stats_size(const struct hinic3_nic_dev *nic_dev)
+{
+ u32 count;
+
+ count = ARRAY_LEN(hinic3_nic_dev_stats) +
+ ARRAY_LEN(hinic3_nic_dev_stats_extern) +
+ (ARRAY_LEN(hinic3_tx_queue_stats) +
+ ARRAY_LEN(hinic3_tx_queue_stats_extern) +
+ ARRAY_LEN(hinic3_rx_queue_stats) +
+ ARRAY_LEN(hinic3_rx_queue_stats_extern)) * nic_dev->max_qps;
+
+ return count;
+}
+
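+/* Read a counter of the recorded width through an untyped pointer; any
+ * width other than u64/u32/u16 is treated as u8
+ */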
+#define GET_VALUE_OF_PTR(size, ptr) ( \
+ (size) == sizeof(u64) ? *(u64 *)(ptr) : \
+ (size) == sizeof(u32) ? *(u32 *)(ptr) : \
+ (size) == sizeof(u16) ? *(u16 *)(ptr) : *(u8 *)(ptr) \
+)
+
+#define DEV_STATS_PACK(items, item_idx, array, stats_ptr) do { \
+ int j; \
+ for (j = 0; j < ARRAY_LEN(array); j++) { \
+ memcpy((items)[item_idx].name, (array)[j].name, \
+ HINIC3_SHOW_ITEM_LEN); \
+ (items)[item_idx].hexadecimal = 0; \
+ (items)[item_idx].value = \
+ GET_VALUE_OF_PTR((array)[j].size, \
+ (char *)(stats_ptr) + (array)[j].offset); \
+ (item_idx)++; \
+ } \
+} while (0)
+
+#define QUEUE_STATS_PACK(items, item_idx, array, stats_ptr, qid) do { \
+ int j; \
+ for (j = 0; j < ARRAY_LEN(array); j++) { \
+ memcpy((items)[item_idx].name, (array)[j].name, \
+ HINIC3_SHOW_ITEM_LEN); \
+ snprintf((items)[item_idx].name, HINIC3_SHOW_ITEM_LEN, \
+ (array)[j].name, (qid)); \
+ (items)[item_idx].hexadecimal = 0; \
+ (items)[item_idx].value = \
+ GET_VALUE_OF_PTR((array)[j].size, \
+ (char *)(stats_ptr) + (array)[j].offset); \
+ (item_idx)++; \
+ } \
+} while (0)
+
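+/* The 'stats' buffer must have room for hinic3_get_io_stats_size() items */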
+void hinic3_get_io_stats(const struct hinic3_nic_dev *nic_dev, void *stats)
+{
+ struct hinic3_show_item *items = stats;
+ int item_idx = 0;
+ u16 qid;
+
+ DEV_STATS_PACK(items, item_idx, hinic3_nic_dev_stats, &nic_dev->stats);
+ DEV_STATS_PACK(items, item_idx, hinic3_nic_dev_stats_extern,
+ &nic_dev->stats);
+
+ for (qid = 0; qid < nic_dev->max_qps; qid++) {
+ QUEUE_STATS_PACK(items, item_idx, hinic3_tx_queue_stats,
+ &nic_dev->txqs[qid].txq_stats, qid);
+ QUEUE_STATS_PACK(items, item_idx, hinic3_tx_queue_stats_extern,
+ &nic_dev->txqs[qid].txq_stats, qid);
+ }
+
+ for (qid = 0; qid < nic_dev->max_qps; qid++) {
+ QUEUE_STATS_PACK(items, item_idx, hinic3_rx_queue_stats,
+ &nic_dev->rxqs[qid].rxq_stats, qid);
+ QUEUE_STATS_PACK(items, item_idx, hinic3_rx_queue_stats_extern,
+ &nic_dev->rxqs[qid].rxq_stats, qid);
+ }
+}
+
+static char g_hinic3_test_strings[][ETH_GSTRING_LEN] = {
+ "Internal lb test (on/offline)",
+ "External lb test (external_lb)",
+};
+
+int hinic3_get_sset_count(struct net_device *netdev, int sset)
+{
+ int count = 0, q_num = 0;
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ switch (sset) {
+ case ETH_SS_TEST:
+ return ARRAY_LEN(g_hinic3_test_strings);
+ case ETH_SS_STATS:
+ q_num = nic_dev->q_params.num_qps;
+ count = ARRAY_LEN(hinic3_netdev_stats) +
+ ARRAY_LEN(hinic3_nic_dev_stats) +
+ ARRAY_LEN(hinic3_function_stats) +
+ (ARRAY_LEN(hinic3_tx_queue_stats) +
+ ARRAY_LEN(hinic3_rx_queue_stats)) * q_num;
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ if (mag_support_mode == FPGA_PORT_COUNTER)
+ count += ARRAY_LEN(g_hinic3_fpga_port_stats);
+ else
+ count += ARRAY_LEN(hinic3_port_stats);
+ }
+
+ return count;
+ case ETH_SS_PRIV_FLAGS:
+		return ARRAY_LEN(g_hinic3_priv_flags_strings);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static void get_drv_queue_stats(struct hinic3_nic_dev *nic_dev, u64 *data)
+{
+ struct hinic3_txq_stats txq_stats;
+ struct hinic3_rxq_stats rxq_stats;
+ u16 i = 0, j = 0, qid = 0;
+ char *p = NULL;
+
+ for (qid = 0; qid < nic_dev->q_params.num_qps; qid++) {
+ if (!nic_dev->txqs)
+ break;
+
+ hinic3_txq_get_stats(&nic_dev->txqs[qid], &txq_stats);
+ for (j = 0; j < ARRAY_LEN(hinic3_tx_queue_stats); j++, i++) {
+ p = (char *)(&txq_stats) +
+ hinic3_tx_queue_stats[j].offset;
+ data[i] = (hinic3_tx_queue_stats[j].size ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+ }
+
+ for (qid = 0; qid < nic_dev->q_params.num_qps; qid++) {
+ if (!nic_dev->rxqs)
+ break;
+
+ hinic3_rxq_get_stats(&nic_dev->rxqs[qid], &rxq_stats);
+ for (j = 0; j < ARRAY_LEN(hinic3_rx_queue_stats); j++, i++) {
+ p = (char *)(&rxq_stats) +
+ hinic3_rx_queue_stats[j].offset;
+ data[i] = (hinic3_rx_queue_stats[j].size ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+ }
+}
+
+static u16 get_fpga_port_stats(struct hinic3_nic_dev *nic_dev, u64 *data)
+{
+ struct hinic3_phy_fpga_port_stats *port_stats = NULL;
+ char *p = NULL;
+ u16 i = 0, j = 0;
+ int err;
+
+ port_stats = kzalloc(sizeof(*port_stats), GFP_KERNEL);
+ if (!port_stats) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Failed to allocate port stats\n");
+ memset(&data[i], 0,
+ ARRAY_LEN(g_hinic3_fpga_port_stats) * sizeof(*data));
+ i += ARRAY_LEN(g_hinic3_fpga_port_stats);
+ return i;
+ }
+
+ err = hinic3_get_fpga_phy_port_stats(nic_dev->hwdev, port_stats);
+ if (err)
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to get port stats from fw\n");
+
+ for (j = 0; j < ARRAY_LEN(g_hinic3_fpga_port_stats); j++, i++) {
+ p = (char *)(port_stats) + g_hinic3_fpga_port_stats[j].offset;
+ data[i] = (g_hinic3_fpga_port_stats[j].size ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+
+ kfree(port_stats);
+
+ return i;
+}
+
+static u16 get_ethtool_port_stats(struct hinic3_nic_dev *nic_dev, u64 *data)
+{
+ struct mag_cmd_port_stats *port_stats = NULL;
+ char *p = NULL;
+ u16 i = 0, j = 0;
+ int err;
+
+ if (mag_support_mode == FPGA_PORT_COUNTER)
+ return get_fpga_port_stats(nic_dev, data);
+
+ port_stats = kzalloc(sizeof(*port_stats), GFP_KERNEL);
+ if (!port_stats) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+			  "Failed to allocate port stats\n");
+ memset(&data[i], 0,
+ ARRAY_LEN(hinic3_port_stats) * sizeof(*data));
+ i += ARRAY_LEN(hinic3_port_stats);
+ return i;
+ }
+
+ err = hinic3_get_phy_port_stats(nic_dev->hwdev, port_stats);
+ if (err)
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to get port stats from fw\n");
+
+ for (j = 0; j < ARRAY_LEN(hinic3_port_stats); j++, i++) {
+ p = (char *)(port_stats) + hinic3_port_stats[j].offset;
+ data[i] = (hinic3_port_stats[j].size ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+
+ kfree(port_stats);
+
+ return i;
+}
+
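+/* The value order below must match the string order from hinic3_get_strings()
+ * and the count from hinic3_get_sset_count(): netdev stats, driver stats,
+ * vport stats, port stats (PF only), then per-queue stats
+ */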
+void hinic3_get_ethtool_stats(struct net_device *netdev,
+ struct ethtool_stats *stats, u64 *data)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+#ifdef HAVE_NDO_GET_STATS64
+ struct rtnl_link_stats64 temp;
+ const struct rtnl_link_stats64 *net_stats = NULL;
+#else
+ const struct net_device_stats *net_stats = NULL;
+#endif
+ struct hinic3_nic_stats *nic_stats = NULL;
+
+ struct hinic3_vport_stats vport_stats = {0};
+ u16 i = 0, j = 0;
+ char *p = NULL;
+ int err;
+
+#ifdef HAVE_NDO_GET_STATS64
+ net_stats = dev_get_stats(netdev, &temp);
+#else
+ net_stats = dev_get_stats(netdev);
+#endif
+ for (j = 0; j < ARRAY_LEN(hinic3_netdev_stats); j++, i++) {
+ p = (char *)(net_stats) + hinic3_netdev_stats[j].offset;
+ data[i] = GET_VALUE_OF_PTR(hinic3_netdev_stats[j].size, p);
+ }
+
+ nic_stats = &nic_dev->stats;
+ for (j = 0; j < ARRAY_LEN(hinic3_nic_dev_stats); j++, i++) {
+ p = (char *)(nic_stats) + hinic3_nic_dev_stats[j].offset;
+ data[i] = GET_VALUE_OF_PTR(hinic3_nic_dev_stats[j].size, p);
+ }
+
+ err = hinic3_get_vport_stats(nic_dev->hwdev, hinic3_global_func_id(nic_dev->hwdev),
+ &vport_stats);
+ if (err)
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to get function stats from fw\n");
+
+ for (j = 0; j < ARRAY_LEN(hinic3_function_stats); j++, i++) {
+ p = (char *)(&vport_stats) + hinic3_function_stats[j].offset;
+ data[i] = GET_VALUE_OF_PTR(hinic3_function_stats[j].size, p);
+ }
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ i += get_ethtool_port_stats(nic_dev, data + i);
+
+ get_drv_queue_stats(nic_dev, data + i);
+}
+
+static u16 get_drv_dev_strings(struct hinic3_nic_dev *nic_dev, char *p)
+{
+ u16 i, cnt = 0;
+
+ for (i = 0; i < ARRAY_LEN(hinic3_netdev_stats); i++) {
+ memcpy(p, hinic3_netdev_stats[i].name,
+ ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+
+ for (i = 0; i < ARRAY_LEN(hinic3_nic_dev_stats); i++) {
+ memcpy(p, hinic3_nic_dev_stats[i].name, ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+
+ return cnt;
+}
+
+static u16 get_hw_stats_strings(struct hinic3_nic_dev *nic_dev, char *p)
+{
+ u16 i, cnt = 0;
+
+ for (i = 0; i < ARRAY_LEN(hinic3_function_stats); i++) {
+ memcpy(p, hinic3_function_stats[i].name,
+ ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ if (mag_support_mode == FPGA_PORT_COUNTER) {
+ for (i = 0; i < ARRAY_LEN(g_hinic3_fpga_port_stats); i++) {
+ memcpy(p, g_hinic3_fpga_port_stats[i].name, ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+ } else {
+ for (i = 0; i < ARRAY_LEN(hinic3_port_stats); i++) {
+ memcpy(p, hinic3_port_stats[i].name, ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+ }
+ }
+
+ return cnt;
+}
+
+static u16 get_qp_stats_strings(const struct hinic3_nic_dev *nic_dev, char *p)
+{
+ u16 i = 0, j = 0, cnt = 0;
+ int err;
+
+ for (i = 0; i < nic_dev->q_params.num_qps; i++) {
+ for (j = 0; j < ARRAY_LEN(hinic3_tx_queue_stats); j++) {
+ err = sprintf(p, hinic3_tx_queue_stats[j].name, i);
+ if (err < 0)
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to sprintf tx queue stats name, idx_qps: %u, idx_stats: %u\n",
+ i, j);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+ }
+
+ for (i = 0; i < nic_dev->q_params.num_qps; i++) {
+ for (j = 0; j < ARRAY_LEN(hinic3_rx_queue_stats); j++) {
+ err = sprintf(p, hinic3_rx_queue_stats[j].name, i);
+ if (err < 0)
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to sprintf rx queue stats name, idx_qps: %u, idx_stats: %u\n",
+ i, j);
+ p += ETH_GSTRING_LEN;
+ cnt++;
+ }
+ }
+
+ return cnt;
+}
+
+void hinic3_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ char *p = (char *)data;
+ u16 offset = 0;
+
+ switch (stringset) {
+ case ETH_SS_TEST:
+ memcpy(data, *g_hinic3_test_strings, sizeof(g_hinic3_test_strings));
+ return;
+ case ETH_SS_STATS:
+ offset = get_drv_dev_strings(nic_dev, p);
+ offset += get_hw_stats_strings(nic_dev,
+ p + offset * ETH_GSTRING_LEN);
+ get_qp_stats_strings(nic_dev, p + offset * ETH_GSTRING_LEN);
+
+ return;
+ case ETH_SS_PRIV_FLAGS:
+		memcpy(data, g_hinic3_priv_flags_strings,
+		       sizeof(g_hinic3_priv_flags_strings));
+ return;
+ default:
+ nicif_err(nic_dev, drv, netdev,
+			  "Invalid string set %u\n", stringset);
+ return;
+ }
+}
+
+static const u32 hinic3_mag_link_mode_ge[] = {
+ ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
+ ETHTOOL_LINK_MODE_1000baseKX_Full_BIT,
+ ETHTOOL_LINK_MODE_1000baseX_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_10ge_base_r[] = {
+ ETHTOOL_LINK_MODE_10000baseKR_Full_BIT,
+ ETHTOOL_LINK_MODE_10000baseR_FEC_BIT,
+ ETHTOOL_LINK_MODE_10000baseCR_Full_BIT,
+ ETHTOOL_LINK_MODE_10000baseSR_Full_BIT,
+ ETHTOOL_LINK_MODE_10000baseLR_Full_BIT,
+ ETHTOOL_LINK_MODE_10000baseLRM_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_25ge_base_r[] = {
+ ETHTOOL_LINK_MODE_25000baseCR_Full_BIT,
+ ETHTOOL_LINK_MODE_25000baseKR_Full_BIT,
+ ETHTOOL_LINK_MODE_25000baseSR_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_40ge_base_r4[] = {
+ ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT,
+ ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT,
+ ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT,
+ ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_50ge_base_r[] = {
+ ETHTOOL_LINK_MODE_50000baseKR_Full_BIT,
+ ETHTOOL_LINK_MODE_50000baseSR_Full_BIT,
+ ETHTOOL_LINK_MODE_50000baseCR_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_50ge_base_r2[] = {
+ ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT,
+ ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT,
+ ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_100ge_base_r[] = {
+ ETHTOOL_LINK_MODE_100000baseKR_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseSR_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseCR_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_100ge_base_r2[] = {
+ ETHTOOL_LINK_MODE_100000baseKR2_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseSR2_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseCR2_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_100ge_base_r4[] = {
+ ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_200ge_base_r2[] = {
+ ETHTOOL_LINK_MODE_200000baseKR2_Full_BIT,
+ ETHTOOL_LINK_MODE_200000baseSR2_Full_BIT,
+ ETHTOOL_LINK_MODE_200000baseCR2_Full_BIT,
+};
+
+static const u32 hinic3_mag_link_mode_200ge_base_r4[] = {
+ ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT,
+ ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT,
+ ETHTOOL_LINK_MODE_200000baseCR4_Full_BIT,
+};
+
+struct hw2ethtool_link_mode {
+ const u32 *link_mode_bit_arr;
+ u32 arr_size;
+ u32 speed;
+};
+
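+/* Indexed by the firmware LINK_MODE_* bit number; maps each hw link mode
+ * group to its ethtool link-mode bits and nominal speed
+ */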
+/*lint -save -e26 */
+static const struct hw2ethtool_link_mode
+ hw2ethtool_link_mode_table[LINK_MODE_MAX_NUMBERS] = {
+ [LINK_MODE_GE] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_ge,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_ge),
+ .speed = SPEED_1000,
+ },
+ [LINK_MODE_10GE_BASE_R] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_10ge_base_r,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_10ge_base_r),
+ .speed = SPEED_10000,
+ },
+ [LINK_MODE_25GE_BASE_R] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_25ge_base_r,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_25ge_base_r),
+ .speed = SPEED_25000,
+ },
+ [LINK_MODE_40GE_BASE_R4] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_40ge_base_r4,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_40ge_base_r4),
+ .speed = SPEED_40000,
+ },
+ [LINK_MODE_50GE_BASE_R] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_50ge_base_r,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_50ge_base_r),
+ .speed = SPEED_50000,
+ },
+ [LINK_MODE_50GE_BASE_R2] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_50ge_base_r2,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_50ge_base_r2),
+ .speed = SPEED_50000,
+ },
+ [LINK_MODE_100GE_BASE_R] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_100ge_base_r,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_100ge_base_r),
+ .speed = SPEED_100000,
+ },
+ [LINK_MODE_100GE_BASE_R2] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_100ge_base_r2,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_100ge_base_r2),
+ .speed = SPEED_100000,
+ },
+ [LINK_MODE_100GE_BASE_R4] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_100ge_base_r4,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_100ge_base_r4),
+ .speed = SPEED_100000,
+ },
+ [LINK_MODE_200GE_BASE_R2] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_200ge_base_r2,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_200ge_base_r2),
+ .speed = SPEED_200000,
+ },
+ [LINK_MODE_200GE_BASE_R4] = {
+ .link_mode_bit_arr = hinic3_mag_link_mode_200ge_base_r4,
+ .arr_size = ARRAY_LEN(hinic3_mag_link_mode_200ge_base_r4),
+ .speed = SPEED_200000,
+ },
+};
+
+/*lint -restore */
+
+#define GET_SUPPORTED_MODE 0
+#define GET_ADVERTISED_MODE 1
+
+struct cmd_link_settings {
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(supported);
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(advertising);
+
+ u32 speed;
+ u8 duplex;
+ u8 port;
+ u8 autoneg;
+};
+
+#define ETHTOOL_ADD_SUPPORTED_LINK_MODE(ecmd, mode) \
+ set_bit(ETHTOOL_LINK_MODE_##mode##_BIT, (ecmd)->supported)
+#define ETHTOOL_ADD_ADVERTISED_LINK_MODE(ecmd, mode) \
+ set_bit(ETHTOOL_LINK_MODE_##mode##_BIT, (ecmd)->advertising)
+
+#define ETHTOOL_ADD_SUPPORTED_SPEED_LINK_MODE(ecmd, mode) \
+do { \
+ u32 i; \
+ for (i = 0; i < hw2ethtool_link_mode_table[mode].arr_size; i++) { \
+ if (hw2ethtool_link_mode_table[mode].link_mode_bit_arr[i] >= \
+ __ETHTOOL_LINK_MODE_MASK_NBITS) \
+ continue; \
+ set_bit(hw2ethtool_link_mode_table[mode].link_mode_bit_arr[i], \
+ (ecmd)->supported); \
+ } \
+} while (0)
+
+#define ETHTOOL_ADD_ADVERTISED_SPEED_LINK_MODE(ecmd, mode) \
+do { \
+ u32 i; \
+ for (i = 0; i < hw2ethtool_link_mode_table[mode].arr_size; i++) { \
+ if (hw2ethtool_link_mode_table[mode].link_mode_bit_arr[i] >= \
+ __ETHTOOL_LINK_MODE_MASK_NBITS) \
+ continue; \
+ set_bit(hw2ethtool_link_mode_table[mode].link_mode_bit_arr[i], \
+ (ecmd)->advertising); \
+ } \
+} while (0)
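+
+/* Both helpers above skip ethtool bit numbers beyond this kernel's
+ * __ETHTOOL_LINK_MODE_MASK_NBITS, so hw link modes unknown to the running
+ * kernel are ignored instead of corrupting the bitmap
+ */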
+
+/* Related to enum mag_cmd_port_speed */
+static u32 hw_to_ethtool_speed[] = {
+ (u32)SPEED_UNKNOWN, SPEED_10, SPEED_100, SPEED_1000, SPEED_10000,
+ SPEED_25000, SPEED_40000, SPEED_50000, SPEED_100000, SPEED_200000
+};
+
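+/* Returns the index into hw_to_ethtool_speed, which doubles as the hw speed
+ * level; a return value of ARRAY_LEN(hw_to_ethtool_speed) means the ethtool
+ * speed has no hw equivalent
+ */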
+static int hinic3_ethtool_to_hw_speed_level(u32 speed)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_LEN(hw_to_ethtool_speed); i++) {
+ if (hw_to_ethtool_speed[i] == speed)
+ break;
+ }
+
+ return i;
+}
+
+static void hinic3_add_ethtool_link_mode(struct cmd_link_settings *link_settings,
+ u32 hw_link_mode, u32 name)
+{
+ u32 link_mode;
+
+ for (link_mode = 0; link_mode < LINK_MODE_MAX_NUMBERS; link_mode++) {
+ if (hw_link_mode & BIT(link_mode)) {
+ if (name == GET_SUPPORTED_MODE)
+ ETHTOOL_ADD_SUPPORTED_SPEED_LINK_MODE
+ (link_settings, link_mode);
+ else
+ ETHTOOL_ADD_ADVERTISED_SPEED_LINK_MODE
+ (link_settings, link_mode);
+ }
+ }
+}
+
+static int hinic3_link_speed_set(struct hinic3_nic_dev *nic_dev,
+ struct cmd_link_settings *link_settings,
+ struct nic_port_info *port_info)
+{
+ u8 link_state = 0;
+ int err;
+
+ if (port_info->supported_mode != LINK_MODE_UNKNOWN)
+ hinic3_add_ethtool_link_mode(link_settings,
+ port_info->supported_mode,
+ GET_SUPPORTED_MODE);
+ if (port_info->advertised_mode != LINK_MODE_UNKNOWN)
+ hinic3_add_ethtool_link_mode(link_settings,
+ port_info->advertised_mode,
+ GET_ADVERTISED_MODE);
+
+ err = hinic3_get_link_state(nic_dev->hwdev, &link_state);
+ if (!err && link_state) {
+ link_settings->speed =
+ port_info->speed < ARRAY_LEN(hw_to_ethtool_speed) ?
+ hw_to_ethtool_speed[port_info->speed] :
+ (u32)SPEED_UNKNOWN;
+
+ link_settings->duplex = port_info->duplex;
+ } else {
+ link_settings->speed = (u32)SPEED_UNKNOWN;
+ link_settings->duplex = DUPLEX_UNKNOWN;
+ }
+
+ return 0;
+}
+
+static void hinic3_link_port_type(struct cmd_link_settings *link_settings,
+ u8 port_type)
+{
+ switch (port_type) {
+ case MAG_CMD_WIRE_TYPE_ELECTRIC:
+ ETHTOOL_ADD_SUPPORTED_LINK_MODE(link_settings, TP);
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, TP);
+ link_settings->port = PORT_TP;
+ break;
+
+ case MAG_CMD_WIRE_TYPE_AOC:
+ case MAG_CMD_WIRE_TYPE_MM:
+ case MAG_CMD_WIRE_TYPE_SM:
+ ETHTOOL_ADD_SUPPORTED_LINK_MODE(link_settings, FIBRE);
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, FIBRE);
+ link_settings->port = PORT_FIBRE;
+ break;
+
+ case MAG_CMD_WIRE_TYPE_COPPER:
+ ETHTOOL_ADD_SUPPORTED_LINK_MODE(link_settings, FIBRE);
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, FIBRE);
+ link_settings->port = PORT_DA;
+ break;
+
+ case MAG_CMD_WIRE_TYPE_BACKPLANE:
+ ETHTOOL_ADD_SUPPORTED_LINK_MODE(link_settings, Backplane);
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, Backplane);
+ link_settings->port = PORT_NONE;
+ break;
+
+ default:
+ link_settings->port = PORT_OTHER;
+ break;
+ }
+}
+
+static int get_link_pause_settings(struct hinic3_nic_dev *nic_dev,
+ struct cmd_link_settings *link_settings)
+{
+ struct nic_pause_config nic_pause = {0};
+ int err;
+
+ err = hinic3_get_pause_info(nic_dev->hwdev, &nic_pause);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to get pauseparam from hw\n");
+ return err;
+ }
+
+ ETHTOOL_ADD_SUPPORTED_LINK_MODE(link_settings, Pause);
+ if (nic_pause.rx_pause && nic_pause.tx_pause) {
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, Pause);
+ } else if (nic_pause.tx_pause) {
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings,
+ Asym_Pause);
+ } else if (nic_pause.rx_pause) {
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, Pause);
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings,
+ Asym_Pause);
+ }
+
+ return 0;
+}
+
+static int get_link_settings(struct net_device *netdev,
+ struct cmd_link_settings *link_settings)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct nic_port_info port_info = {0};
+ int err;
+
+ err = hinic3_get_port_info(nic_dev->hwdev, &port_info,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to get port info\n");
+ return err;
+ }
+
+ err = hinic3_link_speed_set(nic_dev, link_settings, &port_info);
+ if (err)
+ return err;
+
+ hinic3_link_port_type(link_settings, port_info.port_type);
+
+ link_settings->autoneg = port_info.autoneg_state == PORT_CFG_AN_ON ?
+ AUTONEG_ENABLE : AUTONEG_DISABLE;
+ if (port_info.autoneg_cap)
+ ETHTOOL_ADD_SUPPORTED_LINK_MODE(link_settings, Autoneg);
+ if (port_info.autoneg_state == PORT_CFG_AN_ON)
+ ETHTOOL_ADD_ADVERTISED_LINK_MODE(link_settings, Autoneg);
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ err = get_link_pause_settings(nic_dev, link_settings);
+
+ return err;
+}
+
+#ifdef ETHTOOL_GLINKSETTINGS
+#ifndef XENSERVER_HAVE_NEW_ETHTOOL_OPS
+int hinic3_get_link_ksettings(struct net_device *netdev,
+ struct ethtool_link_ksettings *link_settings)
+{
+ struct cmd_link_settings settings = { { 0 } };
+ struct ethtool_link_settings *base = &link_settings->base;
+ int err;
+
+ ethtool_link_ksettings_zero_link_mode(link_settings, supported);
+ ethtool_link_ksettings_zero_link_mode(link_settings, advertising);
+
+ err = get_link_settings(netdev, &settings);
+ if (err)
+ return err;
+
+ bitmap_copy(link_settings->link_modes.supported, settings.supported,
+ __ETHTOOL_LINK_MODE_MASK_NBITS);
+ bitmap_copy(link_settings->link_modes.advertising, settings.advertising,
+ __ETHTOOL_LINK_MODE_MASK_NBITS);
+
+ base->autoneg = settings.autoneg;
+ base->speed = settings.speed;
+ base->duplex = settings.duplex;
+ base->port = settings.port;
+
+ return 0;
+}
+#endif
+#endif
+
+static bool hinic3_is_support_speed(u32 supported_link, u32 speed)
+{
+ u32 link_mode;
+
+ for (link_mode = 0; link_mode < LINK_MODE_MAX_NUMBERS; link_mode++) {
+ if (!(supported_link & BIT(link_mode)))
+ continue;
+
+ if (hw2ethtool_link_mode_table[link_mode].speed == speed)
+ return true;
+ }
+
+ return false;
+}
+
+static int hinic3_is_speed_legal(struct hinic3_nic_dev *nic_dev,
+ struct nic_port_info *port_info, u32 speed)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ int speed_level = 0;
+
+ if (port_info->supported_mode == LINK_MODE_UNKNOWN ||
+ port_info->advertised_mode == LINK_MODE_UNKNOWN) {
+ nicif_err(nic_dev, drv, netdev, "Unknown supported link modes\n");
+ return -EAGAIN;
+ }
+
+ speed_level = hinic3_ethtool_to_hw_speed_level(speed);
+ if (speed_level >= PORT_SPEED_UNKNOWN ||
+ !hinic3_is_support_speed(port_info->supported_mode, speed)) {
+ nicif_err(nic_dev, drv, netdev,
+ "Not supported speed: %u\n", speed);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int get_link_settings_type(struct hinic3_nic_dev *nic_dev,
+ u8 autoneg, u32 speed, u32 *set_settings)
+{
+ struct nic_port_info port_info = {0};
+ int err;
+
+ err = hinic3_get_port_info(nic_dev->hwdev, &port_info,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to get current settings\n");
+ return -EAGAIN;
+ }
+
+	/* Always set autonegotiation */
+ if (port_info.autoneg_cap)
+ *set_settings |= HILINK_LINK_SET_AUTONEG;
+
+ if (autoneg == AUTONEG_ENABLE) {
+ if (!port_info.autoneg_cap) {
+			nicif_err(nic_dev, drv, nic_dev->netdev, "Autoneg not supported\n");
+ return -EOPNOTSUPP;
+ }
+ } else if (speed != (u32)SPEED_UNKNOWN) {
+		/* Set speed only when autoneg is disabled */
+ err = hinic3_is_speed_legal(nic_dev, &port_info, speed);
+ if (err)
+ return err;
+
+ *set_settings |= HILINK_LINK_SET_SPEED;
+ } else {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Need to set speed when autoneg is off\n");
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+static int hinic3_set_settings_to_hw(struct hinic3_nic_dev *nic_dev,
+ u32 set_settings, u8 autoneg, u32 speed)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ struct hinic3_link_ksettings settings = {0};
+ int speed_level = 0;
+ char set_link_str[128] = {0};
+ int err = 0;
+
+ err = snprintf(set_link_str, sizeof(set_link_str) - 1, "%s",
+ (bool)(set_settings & HILINK_LINK_SET_AUTONEG) ?
+		       ((bool)autoneg ? "autoneg enable " : "autoneg disable ") : "");
+ if (err < 0)
+ return -EINVAL;
+
+ if (set_settings & HILINK_LINK_SET_SPEED) {
+ speed_level = hinic3_ethtool_to_hw_speed_level(speed);
+		/* append to what is already in set_link_str: snprintf must not
+		 * be given its own destination buffer as a source argument
+		 */
+		err = snprintf(set_link_str + strlen(set_link_str),
+			       sizeof(set_link_str) - strlen(set_link_str),
+			       "speed %u ", speed);
+ if (err < 0)
+ return -EINVAL;
+ }
+
+ settings.valid_bitmap = set_settings;
+ settings.autoneg = (bool)autoneg ? PORT_CFG_AN_ON : PORT_CFG_AN_OFF;
+ settings.speed = (u8)speed_level;
+
+ err = hinic3_set_link_settings(nic_dev->hwdev, &settings);
+ if (err)
+ nicif_err(nic_dev, drv, netdev, "Set %sfailed\n",
+ set_link_str);
+ else
+ nicif_info(nic_dev, drv, netdev, "Set %ssuccess\n",
+ set_link_str);
+
+ return err;
+}
+
+static int set_link_settings(struct net_device *netdev, u8 autoneg, u32 speed)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u32 set_settings = 0;
+ int err = 0;
+
+ err = get_link_settings_type(nic_dev, autoneg, speed, &set_settings);
+ if (err)
+ return err;
+
+ if (set_settings)
+ err = hinic3_set_settings_to_hw(nic_dev, set_settings,
+ autoneg, speed);
+ else
+ nicif_info(nic_dev, drv, netdev, "Nothing changed, exiting without setting anything\n");
+
+ return err;
+}
+
+#ifdef ETHTOOL_GLINKSETTINGS
+#ifndef XENSERVER_HAVE_NEW_ETHTOOL_OPS
+int hinic3_set_link_ksettings(struct net_device *netdev,
+ const struct ethtool_link_ksettings *link_settings)
+{
+ /* Only support to set autoneg and speed */
+ return set_link_settings(netdev, link_settings->base.autoneg,
+ link_settings->base.speed);
+}
+#endif
+#endif
+
+#ifndef HAVE_NEW_ETHTOOL_LINK_SETTINGS_ONLY
+int hinic3_get_settings(struct net_device *netdev, struct ethtool_cmd *ep)
+{
+ struct cmd_link_settings settings = { { 0 } };
+ int err;
+
+ err = get_link_settings(netdev, &settings);
+ if (err)
+ return err;
+
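+	/* the legacy ethtool_cmd carries only the first 32 link mode bits */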
+ ep->supported = settings.supported[0] & ((u32)~0);
+ ep->advertising = settings.advertising[0] & ((u32)~0);
+
+ ep->autoneg = settings.autoneg;
+ ethtool_cmd_speed_set(ep, settings.speed);
+ ep->duplex = settings.duplex;
+ ep->port = settings.port;
+ ep->transceiver = XCVR_INTERNAL;
+
+ return 0;
+}
+
+int hinic3_set_settings(struct net_device *netdev,
+ struct ethtool_cmd *link_settings)
+{
+ /* Only support to set autoneg and speed */
+ return set_link_settings(netdev, link_settings->autoneg,
+ ethtool_cmd_speed(link_settings));
+}
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_filter.c b/drivers/net/ethernet/huawei/hinic3/hinic3_filter.c
new file mode 100644
index 000000000000..70346d6393de
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_filter.c
@@ -0,0 +1,483 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/debugfs.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_srv_nic.h"
+
+static unsigned char set_filter_state = 1;
+module_param(set_filter_state, byte, 0444);
+MODULE_PARM_DESC(set_filter_state, "Set mac filter config state: 0 - disable, 1 - enable (default=1)");
+
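+/* Each MAC filter entry moves through a small state machine: new entries
+ * start in HINIC3_MAC_WAIT_HW_SYNC; a successful add to hw moves them to
+ * HINIC3_MAC_HW_SYNCED; a deletion request moves synced entries to
+ * HINIC3_MAC_WAIT_HW_UNSYNC, while entries hw never saw are freed directly
+ */
+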
+static int hinic3_uc_sync(struct net_device *netdev, u8 *addr)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ return hinic3_set_mac(nic_dev->hwdev, addr, 0,
+ hinic3_global_func_id(nic_dev->hwdev),
+ HINIC3_CHANNEL_NIC);
+}
+
+static int hinic3_uc_unsync(struct net_device *netdev, u8 *addr)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+	/* The addr is still the netdev's primary MAC, keep it */
+ if (ether_addr_equal(addr, netdev->dev_addr))
+ return 0;
+
+ return hinic3_del_mac(nic_dev->hwdev, addr, 0,
+ hinic3_global_func_id(nic_dev->hwdev),
+ HINIC3_CHANNEL_NIC);
+}
+
+void hinic3_clean_mac_list_filter(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+
+ list_for_each_entry_safe(f, ftmp, &nic_dev->uc_filter_list, list) {
+ if (f->state == HINIC3_MAC_HW_SYNCED)
+ hinic3_uc_unsync(netdev, f->addr);
+ list_del(&f->list);
+ kfree(f);
+ }
+
+ list_for_each_entry_safe(f, ftmp, &nic_dev->mc_filter_list, list) {
+ if (f->state == HINIC3_MAC_HW_SYNCED)
+ hinic3_uc_unsync(netdev, f->addr);
+ list_del(&f->list);
+ kfree(f);
+ }
+}
+
+static struct hinic3_mac_filter *hinic3_find_mac(const struct list_head *filter_list,
+ u8 *addr)
+{
+ struct hinic3_mac_filter *f = NULL;
+
+ list_for_each_entry(f, filter_list, list) {
+ if (ether_addr_equal(addr, f->addr))
+ return f;
+ }
+ return NULL;
+}
+
+static struct hinic3_mac_filter *hinic3_add_filter(struct hinic3_nic_dev *nic_dev,
+ struct list_head *mac_filter_list,
+ u8 *addr)
+{
+ struct hinic3_mac_filter *f;
+
+ f = kzalloc(sizeof(*f), GFP_ATOMIC);
+ if (!f)
+ goto out;
+
+ ether_addr_copy(f->addr, addr);
+
+ INIT_LIST_HEAD(&f->list);
+ list_add_tail(&f->list, mac_filter_list);
+
+ f->state = HINIC3_MAC_WAIT_HW_SYNC;
+ set_bit(HINIC3_MAC_FILTER_CHANGED, &nic_dev->flags);
+
+out:
+ return f;
+}
+
+static void hinic3_del_filter(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_mac_filter *f)
+{
+ set_bit(HINIC3_MAC_FILTER_CHANGED, &nic_dev->flags);
+
+ if (f->state == HINIC3_MAC_WAIT_HW_SYNC) {
+ /* have not added to hw, delete it directly */
+ list_del(&f->list);
+ kfree(f);
+ return;
+ }
+
+ f->state = HINIC3_MAC_WAIT_HW_UNSYNC;
+}
+
+static struct hinic3_mac_filter *hinic3_mac_filter_entry_clone(const struct hinic3_mac_filter *src)
+{
+ struct hinic3_mac_filter *f;
+
+ f = kzalloc(sizeof(*f), GFP_ATOMIC);
+ if (!f)
+ return NULL;
+
+ *f = *src;
+ INIT_LIST_HEAD(&f->list);
+
+ return f;
+}
+
+static void hinic3_undo_del_filter_entries(struct list_head *filter_list,
+ const struct list_head *from)
+{
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+
+ list_for_each_entry_safe(f, ftmp, from, list) {
+ if (hinic3_find_mac(filter_list, f->addr))
+ continue;
+
+ if (f->state == HINIC3_MAC_HW_SYNCED)
+ f->state = HINIC3_MAC_WAIT_HW_UNSYNC;
+
+ list_move_tail(&f->list, filter_list);
+ }
+}
+
+static void hinic3_undo_add_filter_entries(struct list_head *filter_list,
+ const struct list_head *from)
+{
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *tmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+
+ list_for_each_entry_safe(f, ftmp, from, list) {
+ tmp = hinic3_find_mac(filter_list, f->addr);
+ if (tmp && tmp->state == HINIC3_MAC_HW_SYNCED)
+ tmp->state = HINIC3_MAC_WAIT_HW_SYNC;
+ }
+}
+
+static void hinic3_cleanup_filter_list(const struct list_head *head)
+{
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+
+ list_for_each_entry_safe(f, ftmp, head, list) {
+ list_del(&f->list);
+ kfree(f);
+ }
+}
+
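+/* Flush the temporary lists to hw: delete failures are logged and ignored,
+ * the first add failure aborts and returns the error; otherwise the number
+ * of addresses added is returned
+ */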
+static int hinic3_mac_filter_sync_hw(struct hinic3_nic_dev *nic_dev,
+ struct list_head *del_list,
+ struct list_head *add_list)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+ int err = 0, add_count = 0;
+
+ if (!list_empty(del_list)) {
+ list_for_each_entry_safe(f, ftmp, del_list, list) {
+ err = hinic3_uc_unsync(netdev, f->addr);
+			if (err) { /* ignore errors when deleting mac */
+ nic_err(&nic_dev->pdev->dev, "Failed to delete mac\n");
+ }
+
+ list_del(&f->list);
+ kfree(f);
+ }
+ }
+
+ if (!list_empty(add_list)) {
+ list_for_each_entry_safe(f, ftmp, add_list, list) {
+ err = hinic3_uc_sync(netdev, f->addr);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to add mac\n");
+ return err;
+ }
+
+ add_count++;
+ list_del(&f->list);
+ kfree(f);
+ }
+ }
+
+ return add_count;
+}
+
+static int hinic3_mac_filter_sync(struct hinic3_nic_dev *nic_dev,
+ struct list_head *mac_filter_list, bool uc)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ struct list_head tmp_del_list, tmp_add_list;
+ struct hinic3_mac_filter *fclone = NULL;
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+ int err = 0, add_count = 0;
+
+ INIT_LIST_HEAD(&tmp_del_list);
+ INIT_LIST_HEAD(&tmp_add_list);
+
+ list_for_each_entry_safe(f, ftmp, mac_filter_list, list) {
+ if (f->state != HINIC3_MAC_WAIT_HW_UNSYNC)
+ continue;
+
+ f->state = HINIC3_MAC_HW_UNSYNCED;
+ list_move_tail(&f->list, &tmp_del_list);
+ }
+
+ list_for_each_entry_safe(f, ftmp, mac_filter_list, list) {
+ if (f->state != HINIC3_MAC_WAIT_HW_SYNC)
+ continue;
+
+ fclone = hinic3_mac_filter_entry_clone(f);
+ if (!fclone) {
+ err = -ENOMEM;
+ break;
+ }
+
+ f->state = HINIC3_MAC_HW_SYNCED;
+ list_add_tail(&fclone->list, &tmp_add_list);
+ }
+
+ if (err) {
+ hinic3_undo_del_filter_entries(mac_filter_list, &tmp_del_list);
+ hinic3_undo_add_filter_entries(mac_filter_list, &tmp_add_list);
+ nicif_err(nic_dev, drv, netdev, "Failed to clone mac_filter_entry\n");
+
+ hinic3_cleanup_filter_list(&tmp_del_list);
+ hinic3_cleanup_filter_list(&tmp_add_list);
+ return -ENOMEM;
+ }
+
+ add_count = hinic3_mac_filter_sync_hw(nic_dev, &tmp_del_list,
+ &tmp_add_list);
+ if (list_empty(&tmp_add_list))
+ return add_count;
+
+	/* adding macs to hw failed: roll back and delete all synced macs */
+ hinic3_undo_add_filter_entries(mac_filter_list, &tmp_add_list);
+	/* VFs can't enter promisc mode, so we must not delete
+	 * any other uc mac addresses
+	 */
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev) || !uc) {
+ list_for_each_entry_safe(f, ftmp, mac_filter_list, list) {
+ if (f->state != HINIC3_MAC_HW_SYNCED)
+ continue;
+
+ fclone = hinic3_mac_filter_entry_clone(f);
+ if (!fclone)
+ break;
+
+ f->state = HINIC3_MAC_WAIT_HW_SYNC;
+ list_add_tail(&fclone->list, &tmp_del_list);
+ }
+ }
+
+ hinic3_cleanup_filter_list(&tmp_add_list);
+ hinic3_mac_filter_sync_hw(nic_dev, &tmp_del_list, &tmp_add_list);
+
+ /* need to enter promisc/allmulti mode */
+ return -ENOMEM;
+}
+
+static void hinic3_mac_filter_sync_all(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ int add_count;
+
+ if (test_bit(HINIC3_MAC_FILTER_CHANGED, &nic_dev->flags)) {
+ clear_bit(HINIC3_MAC_FILTER_CHANGED, &nic_dev->flags);
+ add_count = hinic3_mac_filter_sync(nic_dev,
+ &nic_dev->uc_filter_list,
+ true);
+ if (add_count < 0 && HINIC3_SUPPORT_PROMISC(nic_dev->hwdev)) {
+ set_bit(HINIC3_PROMISC_FORCE_ON,
+ &nic_dev->rx_mod_state);
+ nicif_info(nic_dev, drv, netdev, "Promisc mode forced on\n");
+ } else if (add_count) {
+ clear_bit(HINIC3_PROMISC_FORCE_ON,
+ &nic_dev->rx_mod_state);
+ }
+
+ add_count = hinic3_mac_filter_sync(nic_dev,
+ &nic_dev->mc_filter_list,
+ false);
+ if (add_count < 0 && HINIC3_SUPPORT_ALLMULTI(nic_dev->hwdev)) {
+ set_bit(HINIC3_ALLMULTI_FORCE_ON,
+ &nic_dev->rx_mod_state);
+ nicif_info(nic_dev, drv, netdev, "All multicast mode forced on\n");
+ } else if (add_count) {
+ clear_bit(HINIC3_ALLMULTI_FORCE_ON,
+ &nic_dev->rx_mod_state);
+ }
+ }
+}
+
+#define HINIC3_DEFAULT_RX_MODE (NIC_RX_MODE_UC | NIC_RX_MODE_MC | \
+ NIC_RX_MODE_BC)
+
+static void hinic3_update_mac_filter(struct hinic3_nic_dev *nic_dev,
+ const struct netdev_hw_addr_list *src_list,
+ struct list_head *filter_list)
+{
+ struct hinic3_mac_filter *filter = NULL;
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+ struct netdev_hw_addr *ha = NULL;
+
+ /* add addr if not already in the filter list */
+ netif_addr_lock_bh(nic_dev->netdev);
+ netdev_hw_addr_list_for_each(ha, src_list) {
+ filter = hinic3_find_mac(filter_list, ha->addr);
+ if (!filter)
+ hinic3_add_filter(nic_dev, filter_list, ha->addr);
+ else if (filter->state == HINIC3_MAC_WAIT_HW_UNSYNC)
+ filter->state = HINIC3_MAC_HW_SYNCED;
+ }
+ netif_addr_unlock_bh(nic_dev->netdev);
+
+ /* delete addr if not in netdev list */
+ list_for_each_entry_safe(f, ftmp, filter_list, list) {
+ bool found = false;
+
+ netif_addr_lock_bh(nic_dev->netdev);
+ netdev_hw_addr_list_for_each(ha, src_list)
+ if (ether_addr_equal(ha->addr, f->addr)) {
+ found = true;
+ break;
+ }
+ netif_addr_unlock_bh(nic_dev->netdev);
+
+ if (found)
+ continue;
+
+ hinic3_del_filter(nic_dev, f);
+ }
+}
+
+#ifndef NETDEV_HW_ADDR_T_MULTICAST
+static void hinic3_update_mc_filter(struct hinic3_nic_dev *nic_dev,
+ struct list_head *filter_list)
+{
+ struct hinic3_mac_filter *filter = NULL;
+ struct hinic3_mac_filter *ftmp = NULL;
+ struct hinic3_mac_filter *f = NULL;
+ struct dev_mc_list *ha = NULL;
+
+ /* add addr if not already in the filter list */
+ netif_addr_lock_bh(nic_dev->netdev);
+ netdev_for_each_mc_addr(ha, nic_dev->netdev) {
+ filter = hinic3_find_mac(filter_list, ha->da_addr);
+ if (!filter)
+ hinic3_add_filter(nic_dev, filter_list, ha->da_addr);
+ else if (filter->state == HINIC3_MAC_WAIT_HW_UNSYNC)
+ filter->state = HINIC3_MAC_HW_SYNCED;
+ }
+ netif_addr_unlock_bh(nic_dev->netdev);
+ /* delete addr if not in netdev list */
+ list_for_each_entry_safe(f, ftmp, filter_list, list) {
+ bool found = false;
+
+ netif_addr_lock_bh(nic_dev->netdev);
+ netdev_for_each_mc_addr(ha, nic_dev->netdev)
+ if (ether_addr_equal(ha->da_addr, f->addr)) {
+ found = true;
+ break;
+ }
+ netif_addr_unlock_bh(nic_dev->netdev);
+
+ if (found)
+ continue;
+
+ hinic3_del_filter(nic_dev, f);
+ }
+}
+#endif
+
+static void update_mac_filter(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+
+ if (test_and_clear_bit(HINIC3_UPDATE_MAC_FILTER, &nic_dev->flags)) {
+ hinic3_update_mac_filter(nic_dev, &netdev->uc,
+ &nic_dev->uc_filter_list);
+		/* The FPGA mc table has only 12 entries; mc sync can be
+		 * disabled via the set_filter_state module parameter
+		 */
+ if (set_filter_state) {
+#ifdef NETDEV_HW_ADDR_T_MULTICAST
+ hinic3_update_mac_filter(nic_dev, &netdev->mc,
+ &nic_dev->mc_filter_list);
+#else
+ hinic3_update_mc_filter(nic_dev,
+ &nic_dev->mc_filter_list);
+#endif
+ }
+ }
+}
+
+static void sync_rx_mode_to_hw(struct hinic3_nic_dev *nic_dev, int promisc_en,
+ int allmulti_en)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u32 rx_mod = HINIC3_DEFAULT_RX_MODE;
+ int err;
+
+ rx_mod |= (promisc_en ? NIC_RX_MODE_PROMISC : 0);
+ rx_mod |= (allmulti_en ? NIC_RX_MODE_MC_ALL : 0);
+
+ if (promisc_en != test_bit(HINIC3_HW_PROMISC_ON,
+ &nic_dev->rx_mod_state))
+ nicif_info(nic_dev, drv, netdev,
+ "%s promisc mode\n",
+			   promisc_en ? "Enter" : "Leave");
+ if (allmulti_en !=
+ test_bit(HINIC3_HW_ALLMULTI_ON, &nic_dev->rx_mod_state))
+ nicif_info(nic_dev, drv, netdev,
+ "%s all_multi mode\n",
+			   allmulti_en ? "Enter" : "Leave");
+
+ err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mod);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to set rx_mode\n");
+ return;
+ }
+
+ promisc_en ? set_bit(HINIC3_HW_PROMISC_ON, &nic_dev->rx_mod_state) :
+ clear_bit(HINIC3_HW_PROMISC_ON, &nic_dev->rx_mod_state);
+
+ allmulti_en ? set_bit(HINIC3_HW_ALLMULTI_ON, &nic_dev->rx_mod_state) :
+ clear_bit(HINIC3_HW_ALLMULTI_ON, &nic_dev->rx_mod_state);
+}
+
+void hinic3_set_rx_mode_work(struct work_struct *work)
+{
+ struct hinic3_nic_dev *nic_dev =
+ container_of(work, struct hinic3_nic_dev, rx_mode_work);
+ struct net_device *netdev = nic_dev->netdev;
+ int promisc_en = 0, allmulti_en = 0;
+
+ update_mac_filter(nic_dev);
+
+ hinic3_mac_filter_sync_all(nic_dev);
+
+ if (HINIC3_SUPPORT_PROMISC(nic_dev->hwdev))
+ promisc_en = !!(netdev->flags & IFF_PROMISC) ||
+ test_bit(HINIC3_PROMISC_FORCE_ON,
+ &nic_dev->rx_mod_state);
+
+ if (HINIC3_SUPPORT_ALLMULTI(nic_dev->hwdev))
+ allmulti_en = !!(netdev->flags & IFF_ALLMULTI) ||
+ test_bit(HINIC3_ALLMULTI_FORCE_ON,
+ &nic_dev->rx_mod_state);
+
+ if (promisc_en !=
+ test_bit(HINIC3_HW_PROMISC_ON, &nic_dev->rx_mod_state) ||
+ allmulti_en !=
+ test_bit(HINIC3_HW_ALLMULTI_ON, &nic_dev->rx_mod_state))
+ sync_rx_mode_to_hw(nic_dev, promisc_en, allmulti_en);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_hw.h b/drivers/net/ethernet/huawei/hinic3/hinic3_hw.h
new file mode 100644
index 000000000000..34888e3d3535
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_hw.h
@@ -0,0 +1,828 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_HW_H
+#define HINIC3_HW_H
+
+#include "hinic3_comm_cmd.h"
+#include "comm_msg_intf.h"
+#include "comm_cmdq_intf.h"
+
+#include "hinic3_crm.h"
+
+#ifndef BIG_ENDIAN
+#define BIG_ENDIAN 0x4321
+#endif
+
+#ifndef LITTLE_ENDIAN
+#define LITTLE_ENDIAN 0x1234
+#endif
+
+#ifdef BYTE_ORDER
+#undef BYTE_ORDER
+#endif
+/* X86 */
+#define BYTE_ORDER LITTLE_ENDIAN
+
+/* to use 0-level CLA, page size must be: SQ 16B(wqe) * 64k(max_q_depth) */
+#define HINIC3_DEFAULT_WQ_PAGE_SIZE 0x100000
+#define HINIC3_HW_WQ_PAGE_SIZE 0x1000
+#define HINIC3_MAX_WQ_PAGE_SIZE_ORDER 8
+#define SPU_HOST_ID 4
+
+enum hinic3_channel_id {
+ HINIC3_CHANNEL_DEFAULT,
+ HINIC3_CHANNEL_COMM,
+ HINIC3_CHANNEL_NIC,
+ HINIC3_CHANNEL_ROCE,
+ HINIC3_CHANNEL_TOE,
+ HINIC3_CHANNEL_FC,
+ HINIC3_CHANNEL_OVS,
+ HINIC3_CHANNEL_DSW,
+ HINIC3_CHANNEL_MIG,
+ HINIC3_CHANNEL_CRYPT,
+
+ HINIC3_CHANNEL_MAX = 32,
+};
+
+struct hinic3_cmd_buf {
+ void *buf;
+ dma_addr_t dma_addr;
+ u16 size;
+ /* Usage count, USERS DO NOT USE */
+ atomic_t ref_cnt;
+};
+
+enum hinic3_aeq_type {
+ HINIC3_HW_INTER_INT = 0,
+ HINIC3_MBX_FROM_FUNC = 1,
+ HINIC3_MSG_FROM_MGMT_CPU = 2,
+ HINIC3_API_RSP = 3,
+ HINIC3_API_CHAIN_STS = 4,
+ HINIC3_MBX_SEND_RSLT = 5,
+ HINIC3_MAX_AEQ_EVENTS
+};
+
+enum hinic3_aeq_sw_type {
+ HINIC3_STATELESS_EVENT = 0,
+ HINIC3_STATEFUL_EVENT = 1,
+ HINIC3_MAX_AEQ_SW_EVENTS
+};
+
+enum hinic3_hwdev_init_state {
+ HINIC3_HWDEV_NONE_INITED = 0,
+ HINIC3_HWDEV_MGMT_INITED,
+ HINIC3_HWDEV_MBOX_INITED,
+ HINIC3_HWDEV_CMDQ_INITED,
+};
+
+enum hinic3_ceq_event {
+ HINIC3_NON_L2NIC_SCQ,
+ HINIC3_NON_L2NIC_ECQ,
+ HINIC3_NON_L2NIC_NO_CQ_EQ,
+ HINIC3_CMDQ,
+ HINIC3_L2NIC_SQ,
+ HINIC3_L2NIC_RQ,
+ HINIC3_MAX_CEQ_EVENTS,
+};
+
+enum hinic3_mbox_seg_errcode {
+ MBOX_ERRCODE_NO_ERRORS = 0,
+ /* VF send the mailbox data to the wrong destination functions */
+ MBOX_ERRCODE_VF_TO_WRONG_FUNC = 0x100,
+ /* PPF send the mailbox data to the wrong destination functions */
+ MBOX_ERRCODE_PPF_TO_WRONG_FUNC = 0x200,
+ /* PF send the mailbox data to the wrong destination functions */
+ MBOX_ERRCODE_PF_TO_WRONG_FUNC = 0x300,
+ /* The mailbox data size is set to all zero */
+ MBOX_ERRCODE_ZERO_DATA_SIZE = 0x400,
+ /* The sender function attribute has not been learned by hardware */
+ MBOX_ERRCODE_UNKNOWN_SRC_FUNC = 0x500,
+ /* The receiver function attr has not been learned by hardware */
+ MBOX_ERRCODE_UNKNOWN_DES_FUNC = 0x600,
+};
+
+struct hinic3_ceq_info {
+ u32 q_len;
+ u32 page_size;
+ u16 elem_size;
+ u16 num_pages;
+ u32 num_elem_in_pg;
+};
+
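+/* Event and mailbox callback prototypes; pri_handle is the opaque pointer
+ * the service driver supplied at registration time
+ */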
+typedef void (*hinic3_aeq_hwe_cb)(void *pri_handle, u8 *data, u8 size);
+typedef u8 (*hinic3_aeq_swe_cb)(void *pri_handle, u8 event, u8 *data);
+typedef void (*hinic3_ceq_event_cb)(void *pri_handle, u32 ceqe_data);
+
+typedef int (*hinic3_vf_mbox_cb)(void *pri_handle,
+ u16 cmd, void *buf_in, u16 in_size, void *buf_out, u16 *out_size);
+
+typedef int (*hinic3_pf_mbox_cb)(void *pri_handle,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size, void *buf_out, u16 *out_size);
+
+typedef int (*hinic3_ppf_mbox_cb)(void *pri_handle, u16 pf_idx,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size, void *buf_out, u16 *out_size);
+
+typedef int (*hinic3_pf_recv_from_ppf_mbox_cb)(void *pri_handle,
+ u16 cmd, void *buf_in, u16 in_size, void *buf_out, u16 *out_size);
+
+/**
+ * @brief hinic3_aeq_register_hw_cb - register aeq hardware callback
+ * @param hwdev: device pointer to hwdev
+ * @param pri_handle: private data will be used by the callback
+ * @param event: event type
+ * @param hwe_cb: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_aeq_register_hw_cb(void *hwdev, void *pri_handle,
+ enum hinic3_aeq_type event, hinic3_aeq_hwe_cb hwe_cb);
+
+/**
+ * @brief hinic3_aeq_unregister_hw_cb - unregister aeq hardware callback
+ * @param hwdev: device pointer to hwdev
+ * @param event: event type
+ **/
+void hinic3_aeq_unregister_hw_cb(void *hwdev, enum hinic3_aeq_type event);
+
+/**
+ * @brief hinic3_aeq_register_swe_cb - register aeq soft event callback
+ * @param hwdev: device pointer to hwdev
+ * @param pri_handle: private data will be used by the callback
+ * @param event: event type
+ * @param aeq_swe_cb: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_aeq_register_swe_cb(void *hwdev, void *pri_handle, enum hinic3_aeq_sw_type event,
+ hinic3_aeq_swe_cb aeq_swe_cb);
+
+/**
+ * @brief hinic3_aeq_unregister_swe_cb - unregister aeq soft event callback
+ * @param hwdev: device pointer to hwdev
+ * @param event: event type
+ **/
+void hinic3_aeq_unregister_swe_cb(void *hwdev, enum hinic3_aeq_sw_type event);
+
+/**
+ * @brief hinic3_ceq_register_cb - register ceq callback
+ * @param hwdev: device pointer to hwdev
+ * @param pri_handle: private data that will be passed to the callback
+ * @param event: event type
+ * @param callback: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_ceq_register_cb(void *hwdev, void *pri_handle, enum hinic3_ceq_event event,
+ hinic3_ceq_event_cb callback);
+/**
+ * @brief hinic3_ceq_unregister_cb - unregister ceq callback
+ * @param hwdev: device pointer to hwdev
+ * @param event: event type
+ **/
+void hinic3_ceq_unregister_cb(void *hwdev, enum hinic3_ceq_event event);
+
+/**
+ * @brief hinic3_register_ppf_mbox_cb - ppf register mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param pri_handle: private data will be used by the callback
+ * @param callback: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_register_ppf_mbox_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_ppf_mbox_cb callback);
+
+/**
+ * @brief hinic3_register_pf_mbox_cb - pf register mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param pri_handle: private data will be used by the callback
+ * @param callback: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_register_pf_mbox_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_pf_mbox_cb callback);
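+
+/*
+ * Example (illustrative sketch; the demo_* and priv names are hypothetical):
+ * a PF-side handler matching the hinic3_pf_mbox_cb signature, registered
+ * for the hilink module.
+ *
+ *	static int demo_pf_mbox_handler(void *pri_handle, u16 vf_id, u16 cmd,
+ *					void *buf_in, u16 in_size,
+ *					void *buf_out, u16 *out_size)
+ *	{
+ *		// handle the vf request, fill buf_out and *out_size
+ *		return 0;
+ *	}
+ *
+ *	err = hinic3_register_pf_mbox_cb(hwdev, HINIC3_MOD_HILINK, priv,
+ *					 demo_pf_mbox_handler);
+ */
+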
+/**
+ * @brief hinic3_register_vf_mbox_cb - vf register mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param pri_handle: private data will be used by the callback
+ * @param callback: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_register_vf_mbox_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_vf_mbox_cb callback);
+
+/**
+ * @brief hinic3_unregister_ppf_mbox_cb - ppf unregister mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ **/
+void hinic3_unregister_ppf_mbox_cb(void *hwdev, u8 mod);
+
+/**
+ * @brief hinic3_unregister_pf_mbox_cb - pf unregister mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ **/
+void hinic3_unregister_pf_mbox_cb(void *hwdev, u8 mod);
+
+/**
+ * @brief hinic3_unregister_vf_mbox_cb - vf unregister mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ **/
+void hinic3_unregister_vf_mbox_cb(void *hwdev, u8 mod);
+
+/**
+ * @brief hinic3_unregister_ppf_to_pf_mbox_cb - unregister mbox msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ **/
+void hinic3_unregister_ppf_to_pf_mbox_cb(void *hwdev, u8 mod);
+
+typedef void (*hinic3_mgmt_msg_cb)(void *pri_handle,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+/**
+ * @brief hinic3_register_mgmt_msg_cb - register mgmt msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param pri_handle: private data will be used by the callback
+ * @param callback: callback function
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_register_mgmt_msg_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_mgmt_msg_cb callback);
+
+/**
+ * @brief hinic3_unregister_mgmt_msg_cb - unregister mgmt msg callback
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ **/
+void hinic3_unregister_mgmt_msg_cb(void *hwdev, u8 mod);
+
+/**
+ * @brief hinic3_register_service_adapter - register service adapter
+ * @param hwdev: device pointer to hwdev
+ * @param service_adapter: service adapter
+ * @param type: service type
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_register_service_adapter(void *hwdev, void *service_adapter,
+ enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_unregister_service_adapter - unregister service adapter
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ **/
+void hinic3_unregister_service_adapter(void *hwdev,
+ enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_get_service_adapter - get service adapter
+ * @param hwdev: device pointer to hwdev
+ * @param type: service type
+ * @retval non-zero: success
+ * @retval null: failure
+ **/
+void *hinic3_get_service_adapter(void *hwdev, enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_alloc_db_phy_addr - alloc doorbell & direct wqe physical addr
+ * @param hwdev: device pointer to hwdev
+ * @param db_base: pointer to alloc doorbell base address
+ * @param dwqe_base: pointer to alloc direct base address
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_alloc_db_phy_addr(void *hwdev, u64 *db_base, u64 *dwqe_base);
+
+/**
+ * @brief hinic3_free_db_phy_addr - free doorbell & direct wqe physical address
+ * @param hwdev: device pointer to hwdev
+ * @param db_base: pointer to free doorbell base address
+ * @param dwqe_base: pointer to free direct base address
+ **/
+void hinic3_free_db_phy_addr(void *hwdev, u64 db_base, u64 dwqe_base);
+
+/**
+ * @brief hinic3_alloc_db_addr - alloc doorbell & direct wqe
+ * @param hwdev: device pointer to hwdev
+ * @param db_base: pointer to alloc doorbell base address
+ * @param dwqe_base: pointer to alloc direct base address
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_alloc_db_addr(void *hwdev, void __iomem **db_base,
+ void __iomem **dwqe_base);
+
+/**
+ * @brief hinic3_free_db_addr - free doorbell & direct wqe
+ * @param hwdev: device pointer to hwdev
+ * @param db_base: pointer to free doorbell base address
+ * @param dwqe_base: pointer to free direct base address
+ **/
+void hinic3_free_db_addr(void *hwdev, const void __iomem *db_base,
+ void __iomem *dwqe_base);
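+
+/*
+ * Example (illustrative sketch): a doorbell/direct-wqe pair is mapped with
+ * the alloc call above and released with the matching free call.
+ *
+ *	void __iomem *db_base = NULL;
+ *	void __iomem *dwqe_base = NULL;
+ *
+ *	err = hinic3_alloc_db_addr(hwdev, &db_base, &dwqe_base);
+ *	if (err)
+ *		return err;
+ *	...
+ *	hinic3_free_db_addr(hwdev, db_base, dwqe_base);
+ */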
+
+/**
+ * @brief hinic3_set_root_ctxt - set root context
+ * @param hwdev: device pointer to hwdev
+ * @param rq_depth: rq depth
+ * @param sq_depth: sq depth
+ * @param rx_buf_sz: rx buffer size
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_set_root_ctxt(void *hwdev, u32 rq_depth, u32 sq_depth,
+ int rx_buf_sz, u16 channel);
+
+/**
+ * @brief hinic3_clean_root_ctxt - clean root context
+ * @param hwdev: device pointer to hwdev
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_clean_root_ctxt(void *hwdev, u16 channel);
+
+/**
+ * @brief hinic3_alloc_cmd_buf - alloc cmd buffer
+ * @param hwdev: device pointer to hwdev
+ * @retval non-zero: success
+ * @retval null: failure
+ **/
+struct hinic3_cmd_buf *hinic3_alloc_cmd_buf(void *hwdev);
+
+/**
+ * @brief hinic3_free_cmd_buf - free cmd buffer
+ * @param hwdev: device pointer to hwdev
+ * @param cmd_buf: cmd buffer to free
+ **/
+void hinic3_free_cmd_buf(void *hwdev, struct hinic3_cmd_buf *cmd_buf);
+
+/**
+ * @brief hinic3_sm_ctr_rd16 - small single 16-bit counter read
+ * @param hwdev: device pointer to hwdev
+ * @param node: the node id
+ * @param instance: instance id
+ * @param ctr_id: counter id
+ * @param value: read counter value ptr
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_sm_ctr_rd16(void *hwdev, u8 node, u8 instance, u32 ctr_id, u16 *value);
+
+/**
+ * @brief hinic3_sm_ctr_rd32 - small single 32-bit counter read
+ * @param hwdev: device pointer to hwdev
+ * @param node: the node id
+ * @param instance: instance id
+ * @param ctr_id: counter id
+ * @param value: read counter value ptr
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_sm_ctr_rd32(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u32 *value);
+/**
+ * @brief hinic3_sm_ctr_rd32_clear - small single 32-bit counter read and clear
+ * @param hwdev: device pointer to hwdev
+ * @param node: the node id
+ * @param instance: instance id
+ * @param ctr_id: counter id
+ * @param value: read counter value ptr
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_sm_ctr_rd32_clear(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u32 *value);
+
+/**
+ * @brief hinic3_sm_ctr_rd64_pair - big pair 128-bit counter read
+ * @param hwdev: device pointer to hwdev
+ * @param node: the node id
+ * @param instance: instance id
+ * @param ctr_id: counter id
+ * @param value1: read counter value ptr
+ * @param value2: read counter value ptr
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_sm_ctr_rd64_pair(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u64 *value1, u64 *value2);
+
+/**
+ * @brief hinic3_sm_ctr_rd64_pair_clear - big pair 128-bit counter read and clear
+ * @param hwdev: device pointer to hwdev
+ * @param node: the node id
+ * @param instance: instance id
+ * @param ctr_id: counter id
+ * @param value1: read counter value ptr
+ * @param value2: read counter value ptr
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_sm_ctr_rd64_pair_clear(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u64 *value1, u64 *value2);
+
+/**
+ * @brief hinic3_sm_ctr_rd64 - big single 64-bit counter read
+ * @param hwdev: device pointer to hwdev
+ * @param node: the node id
+ * @param instance: instance id
+ * @param ctr_id: counter id
+ * @param value: read counter value ptr
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_sm_ctr_rd64(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u64 *value);
+
+/**
+ * @brief hinic3_sm_ctr_rd64_clear - big single 64-bit counter read and clear
+ * @param hwdev: device pointer to hwdev
+ * @param node: the node id
+ * @param instance: instance id
+ * @param ctr_id: counter id
+ * @param value: read counter value ptr
+ * @retval zero: success
+ * @retval non-zero: failure
+ **/
+int hinic3_sm_ctr_rd64_clear(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u64 *value);
+
+/**
+ * @brief hinic3_api_csr_rd32 - read 32-bit csr
+ * @param hwdev: device pointer to hwdev
+ * @param dest: hardware node id
+ * @param addr: reg address
+ * @param val: reg value
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_api_csr_rd32(void *hwdev, u8 dest, u32 addr, u32 *val);
+
+/**
+ * @brief hinic3_api_csr_wr32 - write 32-bit csr
+ * @param hwdev: device pointer to hwdev
+ * @param dest: hardware node id
+ * @param addr: reg address
+ * @param val: reg value
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_api_csr_wr32(void *hwdev, u8 dest, u32 addr, u32 val);
+
+/**
+ * @brief hinic3_api_csr_rd64 - read 64-bit csr
+ * @param hwdev: device pointer to hwdev
+ * @param dest: hardware node id
+ * @param addr: reg address
+ * @param val: reg value
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_api_csr_rd64(void *hwdev, u8 dest, u32 addr, u64 *val);
+
+/**
+ * @brief hinic3_dbg_get_hw_stats - get hardware stats
+ * @param hwdev: device pointer to hwdev
+ * @param hw_stats: stats buffer allocated by the caller
+ * @param out_size: out size
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_dbg_get_hw_stats(const void *hwdev, u8 *hw_stats, const u16 *out_size);
+
+/**
+ * @brief hinic3_dbg_clear_hw_stats - clear hardware stats
+ * @param hwdev: device pointer to hwdev
+ * @retval clear hardware size
+ */
+u16 hinic3_dbg_clear_hw_stats(void *hwdev);
+
+/**
+ * @brief hinic3_get_chip_fault_stats - get chip fault stats
+ * @param hwdev: device pointer to hwdev
+ * @param chip_fault_stats: fault stats buffer allocated by the caller
+ * @param offset: offset
+ */
+void hinic3_get_chip_fault_stats(const void *hwdev, u8 *chip_fault_stats,
+ u32 offset);
+
+/**
+ * @brief hinic3_msg_to_mgmt_sync - msg to management cpu
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param in_size: in buffer size
+ * @param buf_out: message buffer out
+ * @param out_size: out buffer size
+ * @param timeout: timeout
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_msg_to_mgmt_sync(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout, u16 channel);
+
+/**
+ * @brief hinic3_msg_to_mgmt_async - msg to management cpu async
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param in_size: in buffer size
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ *
+ * The function does not sleep inside, allowing use in irq context
+ */
+int hinic3_msg_to_mgmt_async(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, u16 channel);
+
+/**
+ * @brief hinic3_msg_to_mgmt_no_ack - msg to management cpu without waiting for ack
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param in_size: in buffer size
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ *
+ * The function will sleep inside, and it is not allowed to be used in
+ * interrupt context
+ */
+int hinic3_msg_to_mgmt_no_ack(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, u16 channel);
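+
+/*
+ * Example (illustrative sketch; mod, cmd, req, rsp, out_size and timeout
+ * are hypothetical): choosing the variant by calling context, per the
+ * sleep notes above.
+ *
+ *	// irq context: only the non-sleeping async variant may be used
+ *	err = hinic3_msg_to_mgmt_async(hwdev, mod, cmd, &req, sizeof(req),
+ *				       HINIC3_CHANNEL_COMM);
+ *
+ *	// process context: the sync variant may sleep awaiting the reply
+ *	err = hinic3_msg_to_mgmt_sync(hwdev, mod, cmd, &req, sizeof(req),
+ *				      &rsp, &out_size, timeout,
+ *				      HINIC3_CHANNEL_COMM);
+ */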
+
+int hinic3_msg_to_mgmt_api_chain_async(void *hwdev, u8 mod, u16 cmd,
+ const void *buf_in, u16 in_size);
+
+int hinic3_msg_to_mgmt_api_chain_sync(void *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout);
+
+/**
+ * @brief hinic3_mbox_to_pf - vf mbox message to pf
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param in_size: in buffer size
+ * @param buf_out: message buffer out
+ * @param out_size: out buffer size
+ * @param timeout: timeout
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_mbox_to_pf(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout, u16 channel);
+
+/**
+ * @brief hinic3_mbox_to_vf - mbox message to vf
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf index
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param in_size: in buffer size
+ * @param buf_out: message buffer out
+ * @param out_size: out buffer size
+ * @param timeout: timeout
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_mbox_to_vf(void *hwdev, u16 vf_id, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size, u32 timeout,
+ u16 channel);
+
+int hinic3_clp_to_mgmt(void *hwdev, u8 mod, u16 cmd, const void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size);
+/**
+ * @brief hinic3_cmdq_async - cmdq asynchronous message
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cmdq_async(void *hwdev, u8 mod, u8 cmd, struct hinic3_cmd_buf *buf_in, u16 channel);
+
+/**
+ * @brief hinic3_cmdq_direct_resp - cmdq direct message response
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param out_param: message out
+ * @param timeout: timeout
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cmdq_direct_resp(void *hwdev, u8 mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in,
+ u64 *out_param, u32 timeout, u16 channel);
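+
+/*
+ * Example (illustrative sketch; mod, cmd, req, err and timeout are
+ * hypothetical): a command buffer is allocated, filled, posted through the
+ * cmdq and then returned to the pool.
+ *
+ *	struct hinic3_cmd_buf *cmd_buf;
+ *	u64 out_param = 0;
+ *
+ *	cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ *	if (!cmd_buf)
+ *		return -ENOMEM;
+ *
+ *	memcpy(cmd_buf->buf, &req, sizeof(req));
+ *	cmd_buf->size = sizeof(req);
+ *
+ *	err = hinic3_cmdq_direct_resp(hwdev, mod, cmd, cmd_buf, &out_param,
+ *				      timeout, HINIC3_CHANNEL_NIC);
+ *	hinic3_free_cmd_buf(hwdev, cmd_buf);
+ */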
+
+/**
+ * @brief hinic3_cmdq_detail_resp - cmdq detail message response
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param buf_in: message buffer in
+ * @param buf_out: message buffer out
+ * @param out_param: inline output data
+ * @param timeout: timeout
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cmdq_detail_resp(void *hwdev, u8 mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel);
+
+/**
+ * @brief hinic3_cos_id_detail_resp - cmdq detail message response with cos id
+ * @param hwdev: device pointer to hwdev
+ * @param mod: mod type
+ * @param cmd: cmd
+ * @param cos_id: cos id
+ * @param buf_in: message buffer in
+ * @param buf_out: message buffer out
+ * @param out_param: inline output data
+ * @param timeout: timeout
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cos_id_detail_resp(void *hwdev, u8 mod, u8 cmd, u8 cos_id,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel);
+
+/**
+ * @brief hinic3_ppf_tmr_start - start ppf timer
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_ppf_tmr_start(void *hwdev);
+
+/**
+ * @brief hinic3_ppf_tmr_stop - stop ppf timer
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_ppf_tmr_stop(void *hwdev);
+
+/**
+ * @brief hinic3_func_tmr_bitmap_set - set timer bitmap status
+ * @param hwdev: device pointer to hwdev
+ * @param func_id: global function index
+ * @param en: false-disable, true-enable
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_func_tmr_bitmap_set(void *hwdev, u16 func_id, bool en);
+
+/**
+ * @brief hinic3_get_board_info - get board info
+ * @param hwdev: device pointer to hwdev
+ * @param info: board info
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_board_info(void *hwdev, struct hinic3_board_info *info,
+ u16 channel);
+
+/**
+ * @brief hinic3_set_wq_page_size - set work queue page size
+ * @param hwdev: device pointer to hwdev
+ * @param func_idx: function id
+ * @param page_size: page size
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_wq_page_size(void *hwdev, u16 func_idx, u32 page_size,
+ u16 channel);
+
+/**
+ * @brief hinic3_event_callback - event callback to notify service driver
+ * @param hwdev: device pointer to hwdev
+ * @param event: event info to service driver
+ */
+void hinic3_event_callback(void *hwdev, struct hinic3_event_info *event);
+
+/**
+ * @brief hinic3_dbg_lt_rd_16byte - linear table read
+ * @param hwdev: device pointer to hwdev
+ * @param dest: destination id
+ * @param instance: instance id
+ * @param lt_index: linear table index id
+ * @param data: data
+ */
+int hinic3_dbg_lt_rd_16byte(void *hwdev, u8 dest, u8 instance,
+ u32 lt_index, u8 *data);
+
+/**
+ * @brief hinic3_dbg_lt_wr_16byte_mask - linear table write
+ * @param hwdev: device pointer to hwdev
+ * @param dest: destination id
+ * @param instance: instance id
+ * @param lt_index: linear table index id
+ * @param data: data
+ * @param mask: mask
+ */
+int hinic3_dbg_lt_wr_16byte_mask(void *hwdev, u8 dest, u8 instance,
+ u32 lt_index, u8 *data, u16 mask);
+
+/**
+ * @brief hinic3_link_event_stats - link event stats
+ * @param dev: device pointer to hwdev
+ * @param link: link status
+ */
+void hinic3_link_event_stats(void *dev, u8 link);
+
+/**
+ * @brief hinic3_get_hw_pf_infos - get pf infos
+ * @param hwdev: device pointer to hwdev
+ * @param infos: pf infos
+ * @param channel: channel id
+ */
+int hinic3_get_hw_pf_infos(void *hwdev, struct hinic3_hw_pf_infos *infos,
+ u16 channel);
+
+/**
+ * @brief hinic3_func_reset - reset func
+ * @param dev: device pointer to hwdev
+ * @param func_id: global function index
+ * @param reset_flag: reset flag
+ * @param channel: channel id
+ */
+int hinic3_func_reset(void *dev, u16 func_id, u64 reset_flag, u16 channel);
+
+int hinic3_get_ppf_timer_cfg(void *hwdev);
+
+int hinic3_set_bdf_ctxt(void *hwdev, u8 bus, u8 device, u8 function);
+
+int hinic3_init_func_mbox_msg_channel(void *hwdev, u16 num_func);
+
+int hinic3_ppf_ht_gpa_init(void *dev);
+
+void hinic3_ppf_ht_gpa_deinit(void *dev);
+
+int hinic3_get_sml_table_info(void *hwdev, u32 tbl_id, u8 *node_id, u8 *instance_id);
+
+int hinic3_mbox_ppf_to_host(void *hwdev, u8 mod, u16 cmd, u8 host_id,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout, u16 channel);
+
+void hinic3_force_complete_all(void *dev);
+int hinic3_get_ceq_page_phy_addr(void *hwdev, u16 q_id,
+ u16 page_idx, u64 *page_phy_addr);
+int hinic3_set_ceq_irq_disable(void *hwdev, u16 q_id);
+int hinic3_get_ceq_info(void *hwdev, u16 q_id, struct hinic3_ceq_info *ceq_info);
+
+void hinic3_set_api_stop(void *hwdev);
+
+int hinic3_activate_firmware(void *hwdev, u8 cfg_index);
+int hinic3_switch_config(void *hwdev, u8 cfg_index);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_irq.c b/drivers/net/ethernet/huawei/hinic3/hinic3_irq.c
new file mode 100644
index 000000000000..3c835ff95e89
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_irq.c
@@ -0,0 +1,189 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/debugfs.h>
+
+#include "hinic3_hw.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
+
+int hinic3_poll(struct napi_struct *napi, int budget)
+{
+ int tx_pkts, rx_pkts;
+ struct hinic3_irq *irq_cfg =
+ container_of(napi, struct hinic3_irq, napi);
+ struct hinic3_nic_dev *nic_dev = netdev_priv(irq_cfg->netdev);
+
+ rx_pkts = hinic3_rx_poll(irq_cfg->rxq, budget);
+
+ tx_pkts = hinic3_tx_poll(irq_cfg->txq, budget);
+ if (tx_pkts >= budget || rx_pkts >= budget)
+ return budget;
+
+ napi_complete(napi);
+
+ hinic3_set_msix_state(nic_dev->hwdev, irq_cfg->msix_entry_idx,
+ HINIC3_MSIX_ENABLE);
+
+ return max(tx_pkts, rx_pkts);
+}
+
+static void qp_add_napi(struct hinic3_irq *irq_cfg)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(irq_cfg->netdev);
+
+ netif_napi_add(nic_dev->netdev, &irq_cfg->napi,
+ hinic3_poll, nic_dev->poll_weight);
+ napi_enable(&irq_cfg->napi);
+}
+
+static void qp_del_napi(struct hinic3_irq *irq_cfg)
+{
+ napi_disable(&irq_cfg->napi);
+ netif_napi_del(&irq_cfg->napi);
+}
+
+static irqreturn_t qp_irq(int irq, void *data)
+{
+ struct hinic3_irq *irq_cfg = (struct hinic3_irq *)data;
+ struct hinic3_nic_dev *nic_dev = netdev_priv(irq_cfg->netdev);
+
+ hinic3_misx_intr_clear_resend_bit(nic_dev->hwdev, irq_cfg->msix_entry_idx, 1);
+
+ napi_schedule(&irq_cfg->napi);
+
+ return IRQ_HANDLED;
+}
+
+static int hinic3_request_irq(struct hinic3_irq *irq_cfg, u16 q_id)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(irq_cfg->netdev);
+ struct interrupt_info info = {0};
+ int err;
+
+ qp_add_napi(irq_cfg);
+
+ info.msix_index = irq_cfg->msix_entry_idx;
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.pending_limt = nic_dev->intr_coalesce[q_id].pending_limt;
+ info.coalesc_timer_cfg =
+ nic_dev->intr_coalesce[q_id].coalesce_timer_cfg;
+ info.resend_timer_cfg = nic_dev->intr_coalesce[q_id].resend_timer_cfg;
+ nic_dev->rxqs[q_id].last_coalesc_timer_cfg =
+ nic_dev->intr_coalesce[q_id].coalesce_timer_cfg;
+ nic_dev->rxqs[q_id].last_pending_limt =
+ nic_dev->intr_coalesce[q_id].pending_limt;
+ err = hinic3_set_interrupt_cfg(nic_dev->hwdev, info,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, irq_cfg->netdev,
+ "Failed to set RX interrupt coalescing attribute.\n");
+ qp_del_napi(irq_cfg);
+ return err;
+ }
+
+ err = request_irq(irq_cfg->irq_id, &qp_irq, 0, irq_cfg->irq_name, irq_cfg);
+ if (err) {
+ nicif_err(nic_dev, drv, irq_cfg->netdev, "Failed to request Rx irq\n");
+ qp_del_napi(irq_cfg);
+ return err;
+ }
+
+ irq_set_affinity_hint(irq_cfg->irq_id, &irq_cfg->affinity_mask);
+
+ return 0;
+}
+
+static void hinic3_release_irq(struct hinic3_irq *irq_cfg)
+{
+ irq_set_affinity_hint(irq_cfg->irq_id, NULL);
+ synchronize_irq(irq_cfg->irq_id);
+ free_irq(irq_cfg->irq_id, irq_cfg);
+ qp_del_napi(irq_cfg);
+}
+
+int hinic3_qps_irq_init(struct hinic3_nic_dev *nic_dev)
+{
+ struct pci_dev *pdev = nic_dev->pdev;
+ struct irq_info *qp_irq_info = NULL;
+ struct hinic3_irq *irq_cfg = NULL;
+ u16 q_id, i;
+ u32 local_cpu;
+ int err;
+
+ for (q_id = 0; q_id < nic_dev->q_params.num_qps; q_id++) {
+ qp_irq_info = &nic_dev->qps_irq_info[q_id];
+ irq_cfg = &nic_dev->q_params.irq_cfg[q_id];
+
+ irq_cfg->irq_id = qp_irq_info->irq_id;
+ irq_cfg->msix_entry_idx = qp_irq_info->msix_entry_idx;
+ irq_cfg->netdev = nic_dev->netdev;
+ irq_cfg->txq = &nic_dev->txqs[q_id];
+ irq_cfg->rxq = &nic_dev->rxqs[q_id];
+ nic_dev->rxqs[q_id].irq_cfg = irq_cfg;
+
+ local_cpu = cpumask_local_spread(q_id, dev_to_node(&pdev->dev));
+ cpumask_set_cpu(local_cpu, &irq_cfg->affinity_mask);
+
+ err = snprintf(irq_cfg->irq_name, sizeof(irq_cfg->irq_name),
+ "%s_qp%u", nic_dev->netdev->name, q_id);
+ if (err < 0) {
+ err = -EINVAL;
+ goto req_tx_irq_err;
+ }
+
+ err = hinic3_request_irq(irq_cfg, q_id);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to request Rx irq\n");
+ goto req_tx_irq_err;
+ }
+
+ hinic3_set_msix_auto_mask_state(nic_dev->hwdev, irq_cfg->msix_entry_idx,
+ HINIC3_SET_MSIX_AUTO_MASK);
+ hinic3_set_msix_state(nic_dev->hwdev, irq_cfg->msix_entry_idx, HINIC3_MSIX_ENABLE);
+ }
+
+ INIT_DELAYED_WORK(&nic_dev->moderation_task, hinic3_auto_moderation_work);
+
+ return 0;
+
+req_tx_irq_err:
+ for (i = 0; i < q_id; i++) {
+ irq_cfg = &nic_dev->q_params.irq_cfg[i];
+ hinic3_set_msix_state(nic_dev->hwdev, irq_cfg->msix_entry_idx, HINIC3_MSIX_DISABLE);
+ hinic3_set_msix_auto_mask_state(nic_dev->hwdev, irq_cfg->msix_entry_idx,
+ HINIC3_CLR_MSIX_AUTO_MASK);
+ hinic3_release_irq(irq_cfg);
+ }
+
+ return err;
+}
+
+void hinic3_qps_irq_deinit(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_irq *irq_cfg = NULL;
+ u16 q_id;
+
+ for (q_id = 0; q_id < nic_dev->q_params.num_qps; q_id++) {
+ irq_cfg = &nic_dev->q_params.irq_cfg[q_id];
+ hinic3_set_msix_state(nic_dev->hwdev, irq_cfg->msix_entry_idx,
+ HINIC3_MSIX_DISABLE);
+ hinic3_set_msix_auto_mask_state(nic_dev->hwdev,
+ irq_cfg->msix_entry_idx,
+ HINIC3_CLR_MSIX_AUTO_MASK);
+ hinic3_release_irq(irq_cfg);
+ }
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_lld.h b/drivers/net/ethernet/huawei/hinic3/hinic3_lld.h
new file mode 100644
index 000000000000..656b49f8ad6c
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_lld.h
@@ -0,0 +1,229 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_LLD_H
+#define HINIC3_LLD_H
+
+#include "hinic3_crm.h"
+
+struct hinic3_lld_dev {
+ struct pci_dev *pdev;
+ void *hwdev;
+};
+struct hinic3_uld_info {
+ /* When the function does not need to initialize the corresponding uld,
+ * @probe needs to return 0 and uld_dev is set to NULL;
+ * if uld_dev is NULL, @remove will not be called when uninstalling
+ */
+ int (*probe)(struct hinic3_lld_dev *lld_dev, void **uld_dev, char *uld_dev_name);
+ void (*remove)(struct hinic3_lld_dev *lld_dev, void *uld_dev);
+ int (*suspend)(struct hinic3_lld_dev *lld_dev, void *uld_dev, pm_message_t state);
+ int (*resume)(struct hinic3_lld_dev *lld_dev, void *uld_dev);
+ void (*event)(struct hinic3_lld_dev *lld_dev, void *uld_dev,
+ struct hinic3_event_info *event);
+ int (*ioctl)(void *uld_dev, u32 cmd, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+};
+
+/* hinic3_register_uld - register an upper-layer driver
+ * @type: uld service type
+ * @uld_info: uld callback
+ *
+ * Registers an upper-layer driver.
+ * Traverse existing devices and call @probe to initialize the uld device.
+ */
+int hinic3_register_uld(enum hinic3_service_type type, struct hinic3_uld_info *uld_info);
+
+/**
+ * hinic3_unregister_uld - unregister an upper-layer driver
+ * @type: uld service type
+ *
+ * Traverse existing devices and call @remove to uninstall the uld device.
+ * Unregisters an existing upper-layer driver.
+ */
+void hinic3_unregister_uld(enum hinic3_service_type type);
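+
+/*
+ * Example (illustrative sketch; the demo_* names are hypothetical): a
+ * minimal ULD that registers probe/remove callbacks for the nic service.
+ *
+ *	static int demo_probe(struct hinic3_lld_dev *lld_dev, void **uld_dev,
+ *			      char *uld_dev_name)
+ *	{
+ *		*uld_dev = NULL;	// nothing to init: remove is skipped
+ *		return 0;
+ *	}
+ *
+ *	static void demo_remove(struct hinic3_lld_dev *lld_dev, void *uld_dev)
+ *	{
+ *	}
+ *
+ *	static struct hinic3_uld_info demo_uld_info = {
+ *		.probe = demo_probe,
+ *		.remove = demo_remove,
+ *	};
+ *
+ *	err = hinic3_register_uld(SERVICE_T_NIC, &demo_uld_info);
+ *	...
+ *	hinic3_unregister_uld(SERVICE_T_NIC);
+ */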
+
+void lld_hold(void);
+void lld_put(void);
+
+/**
+ * @brief hinic3_get_lld_dev_by_chip_name - get lld device by chip name
+ * @param chip_name: chip name
+ *
+ * The reference count of lld_dev is increased when it is obtained. The caller
+ * needs to release the reference by calling lld_dev_put.
+ **/
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_chip_name(const char *chip_name);
+
+/**
+ * @brief lld_dev_hold - get reference to lld_dev
+ * @param dev: lld device
+ *
+ * Hold reference to device to keep it from being freed
+ **/
+void lld_dev_hold(struct hinic3_lld_dev *dev);
+
+/**
+ * @brief lld_dev_put - release reference to lld_dev
+ * @param dev: lld device
+ *
+ * Release reference to device to allow it to be freed
+ **/
+void lld_dev_put(struct hinic3_lld_dev *dev);
+
+/**
+ * @brief hinic3_get_lld_dev_by_dev_name - get lld device by uld device name
+ * @param dev_name: uld device name
+ * @param type: uld service type. When the type is SERVICE_T_MAX, try to match
+ * all ULD names to get uld_dev
+ *
+ * The reference count of lld_dev is increased when it is obtained. The caller
+ * needs to release the reference by calling lld_dev_put.
+ **/
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_dev_name(const char *dev_name,
+ enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_get_lld_dev_by_dev_name_unsafe - get lld device by uld device name
+ * @param dev_name: uld device name
+ * @param type: uld service type. When the type is SERVICE_T_MAX, try to match
+ * all ULD names to get uld_dev
+ *
+ * hinic3_get_lld_dev_by_dev_name_unsafe() is completely analogous to
+ * hinic3_get_lld_dev_by_dev_name(). The only difference is that the reference
+ * count of lld_dev is not increased when lld_dev is obtained.
+ *
+ * The caller must ensure that lld_dev will not be freed during the remove process
+ * when using lld_dev.
+ **/
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_dev_name_unsafe(const char *dev_name,
+ enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_get_lld_dev_by_chip_and_port - get lld device by chip name and port id
+ * @param chip_name: chip name
+ * @param port_id: port id
+ **/
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_chip_and_port(const char *chip_name, u8 port_id);
+
+/**
+ * @brief hinic3_get_ppf_lld_dev - get ppf lld device by current function's lld device
+ * @param lld_dev: current function's lld device
+ *
+ * The reference count of lld_dev is increased when it is obtained. The caller
+ * needs to release the reference by calling lld_dev_put.
+ **/
+struct hinic3_lld_dev *hinic3_get_ppf_lld_dev(struct hinic3_lld_dev *lld_dev);
+
+/**
+ * @brief hinic3_get_ppf_lld_dev_unsafe - get ppf lld device by current function's lld device
+ * @param lld_dev: current function's lld device
+ *
+ * hinic3_get_ppf_lld_dev_unsafe() is completely analogous to hinic3_get_ppf_lld_dev().
+ * The only difference is that the reference count of lld_dev is not increased when lld_dev is obtained.
+ *
+ * The caller must ensure that ppf's lld_dev will not be freed during the remove process
+ * when using ppf lld_dev.
+ **/
+struct hinic3_lld_dev *hinic3_get_ppf_lld_dev_unsafe(struct hinic3_lld_dev *lld_dev);
+
+/**
+ * @brief uld_dev_hold - get reference to uld_dev
+ * @param lld_dev: lld device
+ * @param type: uld service type
+ *
+ * Hold reference to uld device to keep it from being freed
+ **/
+void uld_dev_hold(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type);
+
+/**
+ * @brief uld_dev_put - release reference to uld_dev
+ * @param lld_dev: lld device
+ * @param type: uld service type
+ *
+ * Release reference to uld device to allow it to be freed
+ **/
+void uld_dev_put(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_get_uld_dev - get uld device by lld device
+ * @param lld_dev: lld device
+ * @param type: uld service type
+ *
+ * The reference count of uld_dev is increased when it is obtained. The caller
+ * needs to release the reference by calling uld_dev_put.
+ **/
+void *hinic3_get_uld_dev(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_get_uld_dev_unsafe - get uld device by lld device
+ * @param lld_dev: lld device
+ * @param type: uld service type
+ *
+ * hinic3_get_uld_dev_unsafe() is completely analogous to hinic3_get_uld_dev().
+ * The only difference is that the reference count of uld_dev is not increased when uld_dev is obtained.
+ *
+ * The caller must ensure that uld_dev will not be freed during the remove process
+ * when using uld_dev.
+ **/
+void *hinic3_get_uld_dev_unsafe(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type);
+
+/**
+ * @brief hinic3_get_chip_name - get chip name by lld device
+ * @param lld_dev: lld device
+ * @param chip_name: String for storing the chip name
+ * @param max_len: Maximum number of characters to be copied for chip_name
+ **/
+int hinic3_get_chip_name(struct hinic3_lld_dev *lld_dev, char *chip_name, u16 max_len);
+
+struct card_node *hinic3_get_chip_node_by_lld(struct hinic3_lld_dev *lld_dev);
+
+struct hinic3_hwdev *hinic3_get_sdk_hwdev_by_lld(struct hinic3_lld_dev *lld_dev);
+
+bool hinic3_get_vf_service_load(struct pci_dev *pdev, u16 service);
+
+int hinic3_set_vf_service_load(struct pci_dev *pdev, u16 service,
+ bool vf_srv_load);
+
+int hinic3_set_vf_service_state(struct pci_dev *pdev, u16 vf_func_id,
+ u16 service, bool en);
+
+bool hinic3_get_vf_load_state(struct pci_dev *pdev);
+
+int hinic3_set_vf_load_state(struct pci_dev *pdev, bool vf_load_state);
+
+int hinic3_attach_nic(struct hinic3_lld_dev *lld_dev);
+
+void hinic3_detach_nic(const struct hinic3_lld_dev *lld_dev);
+
+int hinic3_attach_service(const struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type);
+void hinic3_detach_service(const struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type);
+const char **hinic3_get_uld_names(void);
+int hinic3_lld_init(void);
+void hinic3_lld_exit(void);
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_mag_cfg.c b/drivers/net/ethernet/huawei/hinic3/hinic3_mag_cfg.c
new file mode 100644
index 000000000000..4049e81ce034
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_mag_cfg.c
@@ -0,0 +1,953 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "mag_cmd.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic.h"
+#include "hinic3_common.h"
+
+static int mag_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+static int mag_msg_to_mgmt_sync_ch(void *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u16 channel);
+
+int hinic3_set_port_enable(void *hwdev, bool enable, u16 channel)
+{
+ struct mag_cmd_set_port_enable en_state;
+ u16 out_size = sizeof(en_state);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return 0;
+
+ memset(&en_state, 0, sizeof(en_state));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ en_state.function_id = hinic3_global_func_id(hwdev);
+ en_state.state = enable ? MAG_CMD_TX_ENABLE | MAG_CMD_RX_ENABLE :
+ MAG_CMD_PORT_DISABLE;
+
+ err = mag_msg_to_mgmt_sync_ch(hwdev, MAG_CMD_SET_PORT_ENABLE, &en_state,
+ sizeof(en_state), &en_state, &out_size,
+ channel);
+ if (err || !out_size || en_state.head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set port state, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, en_state.head.status, out_size, channel);
+ return -EIO;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_port_enable);
+
+int hinic3_get_phy_port_stats(void *hwdev, struct mag_cmd_port_stats *stats)
+{
+ struct mag_cmd_get_port_stat *port_stats = NULL;
+ struct mag_cmd_port_stats_info stats_info;
+ u16 out_size = sizeof(*port_stats);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+	nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+	if (!nic_io)
+		return -EINVAL;
+
+	port_stats = kzalloc(sizeof(*port_stats), GFP_KERNEL);
+	if (!port_stats)
+		return -ENOMEM;
+
+ memset(&stats_info, 0, sizeof(stats_info));
+ stats_info.port_id = hinic3_physical_port_id(hwdev);
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_PORT_STAT,
+ &stats_info, sizeof(stats_info),
+ port_stats, &out_size);
+ if (err || !out_size || port_stats->head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get port statistics, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, port_stats->head.status, out_size);
+ err = -EIO;
+ goto out;
+ }
+
+ memcpy(stats, &port_stats->counter, sizeof(*stats));
+
+out:
+ kfree(port_stats);
+
+ return err;
+}
+EXPORT_SYMBOL(hinic3_get_phy_port_stats);
+
+int hinic3_set_port_funcs_state(void *hwdev, bool enable)
+{
+ return 0;
+}
+
+int hinic3_reset_port_link_cfg(void *hwdev)
+{
+ return 0;
+}
+
+int hinic3_force_port_relink(void *hwdev)
+{
+ return 0;
+}
+
+int hinic3_set_autoneg(void *hwdev, bool enable)
+{
+ struct hinic3_link_ksettings settings = {0};
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 set_settings = 0;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ set_settings |= HILINK_LINK_SET_AUTONEG;
+ settings.valid_bitmap = set_settings;
+ settings.autoneg = enable ? PORT_CFG_AN_ON : PORT_CFG_AN_OFF;
+
+ return hinic3_set_link_settings(hwdev, &settings);
+}
+
+static int hinic3_cfg_loopback_mode(struct hinic3_nic_io *nic_io, u8 opcode,
+ u8 *mode, u8 *enable)
+{
+ struct mag_cmd_cfg_loopback_mode lp;
+ u16 out_size = sizeof(lp);
+ int err;
+
+ memset(&lp, 0, sizeof(lp));
+ lp.port_id = hinic3_physical_port_id(nic_io->hwdev);
+ lp.opcode = opcode;
+ if (opcode == MGMT_MSG_CMD_OP_SET) {
+ lp.lp_mode = *mode;
+ lp.lp_en = *enable;
+ }
+
+ err = mag_msg_to_mgmt_sync(nic_io->hwdev, MAG_CMD_CFG_LOOPBACK_MODE,
+ &lp, sizeof(lp), &lp, &out_size);
+ if (err || !out_size || lp.head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to %s loopback mode, err: %d, status: 0x%x, out size: 0x%x\n",
+ opcode == MGMT_MSG_CMD_OP_SET ? "set" : "get",
+ err, lp.head.status, out_size);
+ return -EIO;
+ }
+
+ if (opcode == MGMT_MSG_CMD_OP_GET) {
+ *mode = lp.lp_mode;
+ *enable = lp.lp_en;
+ }
+
+ return 0;
+}
+
+int hinic3_get_loopback_mode(void *hwdev, u8 *mode, u8 *enable)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !mode || !enable)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ return hinic3_cfg_loopback_mode(nic_io, MGMT_MSG_CMD_OP_GET, mode,
+ enable);
+}
+
+#define LOOP_MODE_MIN 1
+#define LOOP_MODE_MAX 6
+int hinic3_set_loopback_mode(void *hwdev, u8 mode, u8 enable)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ if (mode < LOOP_MODE_MIN || mode > LOOP_MODE_MAX) {
+ nic_err(nic_io->dev_hdl, "Invalid loopback mode %u to set\n",
+ mode);
+ return -EINVAL;
+ }
+
+ return hinic3_cfg_loopback_mode(nic_io, MGMT_MSG_CMD_OP_SET, &mode,
+ &enable);
+}
+
+int hinic3_set_led_status(void *hwdev, enum mag_led_type type,
+ enum mag_led_mode mode)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct mag_cmd_set_led_cfg led_info;
+ u16 out_size = sizeof(led_info);
+ int err;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ memset(&led_info, 0, sizeof(led_info));
+
+ led_info.function_id = hinic3_global_func_id(hwdev);
+ led_info.type = type;
+ led_info.mode = mode;
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_SET_LED_CFG, &led_info,
+ sizeof(led_info), &led_info, &out_size);
+ if (err || led_info.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl, "Failed to set led status, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, led_info.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_get_port_info(void *hwdev, struct nic_port_info *port_info,
+ u16 channel)
+{
+ struct mag_cmd_get_port_info port_msg;
+ u16 out_size = sizeof(port_msg);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !port_info)
+ return -EINVAL;
+
+ memset(&port_msg, 0, sizeof(port_msg));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ port_msg.port_id = hinic3_physical_port_id(hwdev);
+
+ err = mag_msg_to_mgmt_sync_ch(hwdev, MAG_CMD_GET_PORT_INFO, &port_msg,
+ sizeof(port_msg), &port_msg, &out_size,
+ channel);
+ if (err || !out_size || port_msg.head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get port info, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, port_msg.head.status, out_size, channel);
+ return -EIO;
+ }
+
+ port_info->autoneg_cap = port_msg.an_support;
+ port_info->autoneg_state = port_msg.an_en;
+ port_info->duplex = port_msg.duplex;
+ port_info->port_type = port_msg.wire_type;
+ port_info->speed = port_msg.speed;
+ port_info->fec = port_msg.fec;
+ port_info->supported_mode = port_msg.supported_mode;
+ port_info->advertised_mode = port_msg.advertised_mode;
+
+ return 0;
+}
+
+int hinic3_get_speed(void *hwdev, enum mag_cmd_port_speed *speed, u16 channel)
+{
+ struct nic_port_info port_info = {0};
+ int err;
+
+ if (!hwdev || !speed)
+ return -EINVAL;
+
+ err = hinic3_get_port_info(hwdev, &port_info, channel);
+ if (err)
+ return err;
+
+ *speed = port_info.speed;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_speed);
+
+int hinic3_set_link_settings(void *hwdev,
+ struct hinic3_link_ksettings *settings)
+{
+ struct mag_cmd_set_port_cfg info;
+ u16 out_size = sizeof(info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !settings)
+ return -EINVAL;
+
+ memset(&info, 0, sizeof(info));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ info.port_id = hinic3_physical_port_id(hwdev);
+ info.config_bitmap = settings->valid_bitmap;
+ info.autoneg = settings->autoneg;
+ info.speed = settings->speed;
+ info.fec = settings->fec;
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_SET_PORT_CFG, &info,
+ sizeof(info), &info, &out_size);
+ if (err || !out_size || info.head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set link settings, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, info.head.status, out_size);
+ return -EIO;
+ }
+
+ return info.head.status;
+}
+
+int hinic3_get_link_state(void *hwdev, u8 *link_state)
+{
+ struct mag_cmd_get_link_status get_link;
+ u16 out_size = sizeof(get_link);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !link_state)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ memset(&get_link, 0, sizeof(get_link));
+ get_link.port_id = hinic3_physical_port_id(hwdev);
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_LINK_STATUS, &get_link,
+ sizeof(get_link), &get_link, &out_size);
+ if (err || !out_size || get_link.head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to get link state, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, get_link.head.status, out_size);
+ return -EIO;
+ }
+
+ *link_state = get_link.status;
+
+ return 0;
+}
+
+void hinic3_notify_vf_link_status(struct hinic3_nic_io *nic_io,
+ u16 vf_id, u8 link_status)
+{
+ struct mag_cmd_get_link_status link;
+ struct vf_data_storage *vf_infos = nic_io->vf_infos;
+ u16 out_size = sizeof(link);
+ int err;
+
+ memset(&link, 0, sizeof(link));
+ if (vf_infos[HW_VF_ID_TO_OS(vf_id)].registered) {
+ link.status = link_status;
+ link.port_id = hinic3_physical_port_id(nic_io->hwdev);
+ err = hinic3_mbox_to_vf(nic_io->hwdev, vf_id, HINIC3_MOD_HILINK,
+ MAG_CMD_GET_LINK_STATUS, &link,
+ sizeof(link), &link, &out_size, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err == MBOX_ERRCODE_UNKNOWN_DES_FUNC) {
+ nic_warn(nic_io->dev_hdl, "VF%d not initialized, disconnect it\n",
+ HW_VF_ID_TO_OS(vf_id));
+ hinic3_unregister_vf(nic_io, vf_id);
+ return;
+ }
+ if (err || !out_size || link.head.status)
+ nic_err(nic_io->dev_hdl,
+ "Send link change event to VF %d failed, err: %d, status: 0x%x, out_size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf_id), err, link.head.status, out_size);
+ }
+}
+
+void hinic3_notify_all_vfs_link_changed(void *hwdev, u8 link_status)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 i;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ nic_io->link_status = link_status;
+ for (i = 1; i <= nic_io->max_vfs; i++) {
+ if (!nic_io->vf_infos[HW_VF_ID_TO_OS(i)].link_forced)
+ hinic3_notify_vf_link_status(nic_io, i, link_status);
+ }
+}
+
+static int hinic3_get_vf_link_status_msg_handler(struct hinic3_nic_io *nic_io,
+ u16 vf_id, void *buf_in,
+ u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct vf_data_storage *vf_infos = nic_io->vf_infos;
+ struct mag_cmd_get_link_status *get_link = buf_out;
+ bool link_forced, link_up;
+
+ link_forced = vf_infos[HW_VF_ID_TO_OS(vf_id)].link_forced;
+ link_up = vf_infos[HW_VF_ID_TO_OS(vf_id)].link_up;
+
+ if (link_forced)
+ get_link->status = link_up ?
+ HINIC3_LINK_UP : HINIC3_LINK_DOWN;
+ else
+ get_link->status = nic_io->link_status;
+
+ get_link->head.status = 0;
+ *out_size = sizeof(*get_link);
+
+ return 0;
+}
+
+int hinic3_refresh_nic_cfg(void *hwdev, struct nic_port_info *port_info)
+{
+	/* TODO */
+ return 0;
+}
+
+static void get_port_info(void *hwdev,
+ const struct mag_cmd_get_link_status *link_status,
+ struct hinic3_event_link_info *link_info)
+{
+ struct nic_port_info port_info = {0};
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (hinic3_func_type(hwdev) != TYPE_VF && link_status->status) {
+ err = hinic3_get_port_info(hwdev, &port_info, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_warn(nic_io->dev_hdl, "Failed to get port info\n");
+ } else {
+ link_info->valid = 1;
+ link_info->port_type = port_info.port_type;
+ link_info->autoneg_cap = port_info.autoneg_cap;
+ link_info->autoneg_state = port_info.autoneg_state;
+ link_info->duplex = port_info.duplex;
+ link_info->speed = port_info.speed;
+ hinic3_refresh_nic_cfg(hwdev, &port_info);
+ }
+ }
+}
+
+static void link_status_event_handler(void *hwdev, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct mag_cmd_get_link_status *link_status = NULL;
+ struct mag_cmd_get_link_status *ret_link_status = NULL;
+ struct hinic3_event_info event_info = {0};
+ struct hinic3_event_link_info *link_info = (void *)event_info.event_data;
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ link_status = buf_in;
+ sdk_info(nic_io->dev_hdl, "Link status report received, func_id: %u, status: %u\n",
+ hinic3_global_func_id(hwdev), link_status->status);
+
+ hinic3_link_event_stats(hwdev, link_status->status);
+
+ /* link event reported only after set vport enable */
+ get_port_info(hwdev, link_status, link_info);
+
+ event_info.service = EVENT_SRV_NIC;
+ event_info.type = link_status->status ?
+ EVENT_NIC_LINK_UP : EVENT_NIC_LINK_DOWN;
+
+ hinic3_event_callback(hwdev, &event_info);
+
+ if (hinic3_func_type(hwdev) != TYPE_VF) {
+ hinic3_notify_all_vfs_link_changed(hwdev, link_status->status);
+ ret_link_status = buf_out;
+ ret_link_status->head.status = 0;
+ *out_size = sizeof(*ret_link_status);
+ }
+}
+
+static void cable_plug_event(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct mag_cmd_wire_event *plug_event = buf_in;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_event_info event_info;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ rt_cmd->mpu_send_sfp_abs = false;
+ rt_cmd->mpu_send_sfp_info = false;
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+
+ memset(&event_info, 0, sizeof(event_info));
+ event_info.service = EVENT_SRV_NIC;
+ event_info.type = EVENT_NIC_PORT_MODULE_EVENT;
+ ((struct hinic3_port_module_event *)(void *)event_info.event_data)->type =
+ plug_event->status ? HINIC3_PORT_MODULE_CABLE_PLUGGED :
+ HINIC3_PORT_MODULE_CABLE_UNPLUGGED;
+
+ *out_size = sizeof(*plug_event);
+ plug_event = buf_out;
+ plug_event->head.status = 0;
+
+ hinic3_event_callback(hwdev, &event_info);
+}
+
+static void port_sfp_info_event(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct mag_cmd_get_xsfp_info *sfp_info = buf_in;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (in_size != sizeof(*sfp_info)) {
+		sdk_err(nic_io->dev_hdl, "Invalid sfp info cmd, length: %u, should be %zu\n",
+ in_size, sizeof(*sfp_info));
+ return;
+ }
+
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ memcpy(&rt_cmd->std_sfp_info, sfp_info,
+ sizeof(struct mag_cmd_get_xsfp_info));
+ rt_cmd->mpu_send_sfp_info = true;
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+}
+
+static void port_sfp_abs_event(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct mag_cmd_get_xsfp_present *sfp_abs = buf_in;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (in_size != sizeof(*sfp_abs)) {
+		sdk_err(nic_io->dev_hdl, "Invalid sfp absent cmd, length: %u, should be %zu\n",
+ in_size, sizeof(*sfp_abs));
+ return;
+ }
+
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ memcpy(&rt_cmd->abs, sfp_abs,
+ sizeof(struct mag_cmd_get_xsfp_present));
+ rt_cmd->mpu_send_sfp_abs = true;
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+}
+
+bool hinic3_if_sfp_absent(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ struct mag_cmd_get_xsfp_present sfp_abs;
+ u8 port_id = hinic3_physical_port_id(hwdev);
+ u16 out_size = sizeof(sfp_abs);
+ int err;
+ bool sfp_abs_status;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ memset(&sfp_abs, 0, sizeof(sfp_abs));
+
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ if (rt_cmd->mpu_send_sfp_abs) {
+ if (rt_cmd->abs.head.status) {
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return true;
+ }
+
+ sfp_abs_status = (bool)rt_cmd->abs.abs_status;
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return sfp_abs_status;
+ }
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+
+ sfp_abs.port_id = port_id;
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_XSFP_PRESENT,
+ &sfp_abs, sizeof(sfp_abs), &sfp_abs,
+ &out_size);
+ if (sfp_abs.head.status || err || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get port%u sfp absent status, err: %d, status: 0x%x, out size: 0x%x\n",
+ port_id, err, sfp_abs.head.status, out_size);
+ return true;
+ }
+
+	return sfp_abs.abs_status != 0;
+}
+
+int hinic3_get_sfp_info(void *hwdev, struct mag_cmd_get_xsfp_info *sfp_info)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ u16 out_size = sizeof(*sfp_info);
+ int err;
+
+ if (!hwdev || !sfp_info)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ if (rt_cmd->mpu_send_sfp_info) {
+ if (rt_cmd->std_sfp_info.head.status) {
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return -EIO;
+ }
+
+ memcpy(sfp_info, &rt_cmd->std_sfp_info, sizeof(*sfp_info));
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return 0;
+ }
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+
+ sfp_info->port_id = hinic3_physical_port_id(hwdev);
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_XSFP_INFO, sfp_info,
+ sizeof(*sfp_info), sfp_info, &out_size);
+ if (sfp_info->head.status || err || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get port%u sfp eeprom information, err: %d, status: 0x%x, out size: 0x%x\n",
+ hinic3_physical_port_id(hwdev), err,
+ sfp_info->head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_get_sfp_eeprom(void *hwdev, u8 *data, u32 len)
+{
+ struct mag_cmd_get_xsfp_info sfp_info;
+ int err;
+
+ if (!hwdev || !data)
+ return -EINVAL;
+
+ if (hinic3_if_sfp_absent(hwdev))
+ return -ENXIO;
+
+ memset(&sfp_info, 0, sizeof(sfp_info));
+
+ err = hinic3_get_sfp_info(hwdev, &sfp_info);
+ if (err)
+ return err;
+
+ memcpy(data, sfp_info.sfp_info, len);
+
+ return 0;
+}
+
+int hinic3_get_sfp_type(void *hwdev, u8 *sfp_type, u8 *sfp_type_ext)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_port_routine_cmd *rt_cmd = NULL;
+ u8 sfp_data[STD_SFP_INFO_MAX_SIZE];
+ int err;
+
+ if (!hwdev || !sfp_type || !sfp_type_ext)
+ return -EINVAL;
+
+ if (hinic3_if_sfp_absent(hwdev))
+ return -ENXIO;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ rt_cmd = &nic_io->nic_cfg.rt_cmd;
+
+ mutex_lock(&nic_io->nic_cfg.sfp_mutex);
+ if (rt_cmd->mpu_send_sfp_info) {
+ if (rt_cmd->std_sfp_info.head.status) {
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return -EIO;
+ }
+
+ *sfp_type = rt_cmd->std_sfp_info.sfp_info[0];
+ *sfp_type_ext = rt_cmd->std_sfp_info.sfp_info[1];
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+ return 0;
+ }
+ mutex_unlock(&nic_io->nic_cfg.sfp_mutex);
+
+ err = hinic3_get_sfp_eeprom(hwdev, (u8 *)sfp_data,
+ STD_SFP_INFO_MAX_SIZE);
+ if (err)
+ return err;
+
+ *sfp_type = sfp_data[0];
+ *sfp_type_ext = sfp_data[1];
+
+ return 0;
+}
+
+int hinic3_set_link_status_follow(void *hwdev, enum hinic3_link_follow_status status)
+{
+ struct mag_cmd_set_link_follow follow;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 out_size = sizeof(follow);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (status >= HINIC3_LINK_FOLLOW_STATUS_MAX) {
+ nic_err(nic_io->dev_hdl, "Invalid link follow status: %d\n", status);
+ return -EINVAL;
+ }
+
+ memset(&follow, 0, sizeof(follow));
+ follow.function_id = hinic3_global_func_id(hwdev);
+ follow.follow = status;
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_SET_LINK_FOLLOW, &follow,
+ sizeof(follow), &follow, &out_size);
+ if ((follow.head.status != HINIC3_MGMT_CMD_UNSUPPORTED && follow.head.status) ||
+ err || !out_size) {
+ nic_err(nic_io->dev_hdl, "Failed to set link status follow port status, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, follow.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return follow.head.status;
+}
+
+int hinic3_update_pf_bw(void *hwdev)
+{
+ struct nic_port_info port_info = {0};
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF || !HINIC3_SUPPORT_RATE_LIMIT(hwdev))
+ return 0;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ err = hinic3_get_port_info(hwdev, &port_info, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to get port info\n");
+ return -EIO;
+ }
+
+ err = hinic3_set_pf_rate(hwdev, port_info.speed);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to set pf bandwidth\n");
+ return err;
+ }
+
+ return 0;
+}
+
+int hinic3_set_pf_bw_limit(void *hwdev, u32 bw_limit)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 old_bw_limit;
+ u8 link_state = 0;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return 0;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (bw_limit > MAX_LIMIT_BW) {
+ nic_err(nic_io->dev_hdl, "Invalid bandwidth: %u\n", bw_limit);
+ return -EINVAL;
+ }
+
+ err = hinic3_get_link_state(hwdev, &link_state);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to get link state\n");
+ return -EIO;
+ }
+
+ if (!link_state) {
+ nic_err(nic_io->dev_hdl, "Link status must be up when setting pf tx rate\n");
+ return -EINVAL;
+ }
+
+ old_bw_limit = nic_io->nic_cfg.pf_bw_limit;
+ nic_io->nic_cfg.pf_bw_limit = bw_limit;
+
+ err = hinic3_update_pf_bw(hwdev);
+ if (err) {
+ nic_io->nic_cfg.pf_bw_limit = old_bw_limit;
+ return err;
+ }
+
+ return 0;
+}
+
+static const struct vf_msg_handler vf_mag_cmd_handler[] = {
+ {
+ .cmd = MAG_CMD_GET_LINK_STATUS,
+ .handler = hinic3_get_vf_link_status_msg_handler,
+ },
+};
+
+/* pf/ppf handler mbox msg from vf */
+int hinic3_pf_mag_mbox_handler(void *hwdev, u16 vf_id,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ u32 index, cmd_size = ARRAY_LEN(vf_mag_cmd_handler);
+ struct hinic3_nic_io *nic_io = NULL;
+ const struct vf_msg_handler *handler = NULL;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ for (index = 0; index < cmd_size; index++) {
+ handler = &vf_mag_cmd_handler[index];
+ if (cmd == handler->cmd)
+ return handler->handler(nic_io, vf_id, buf_in, in_size,
+ buf_out, out_size);
+ }
+
+ nic_warn(nic_io->dev_hdl, "No handler for mag cmd: %u received from vf id: %u\n",
+ cmd, vf_id);
+
+ return -EINVAL;
+}
+
+static struct nic_event_handler mag_cmd_handler[] = {
+ {
+ .cmd = MAG_CMD_GET_LINK_STATUS,
+ .handler = link_status_event_handler,
+ },
+
+ {
+ .cmd = MAG_CMD_WIRE_EVENT,
+ .handler = cable_plug_event,
+ },
+
+ {
+ .cmd = MAG_CMD_GET_XSFP_INFO,
+ .handler = port_sfp_info_event,
+ },
+
+ {
+ .cmd = MAG_CMD_GET_XSFP_PRESENT,
+ .handler = port_sfp_abs_event,
+ },
+};
+
+static int hinic3_mag_event_handler(void *hwdev, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 size = ARRAY_LEN(mag_cmd_handler);
+ u32 i;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ *out_size = 0;
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ for (i = 0; i < size; i++) {
+ if (cmd == mag_cmd_handler[i].cmd) {
+ mag_cmd_handler[i].handler(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ return 0;
+ }
+ }
+
+ /* can't find this event cmd */
+ sdk_warn(nic_io->dev_hdl, "Unsupported mag event, cmd: %u\n", cmd);
+ *out_size = sizeof(struct mgmt_msg_head);
+ ((struct mgmt_msg_head *)buf_out)->status = HINIC3_MGMT_CMD_UNSUPPORTED;
+
+ return 0;
+}
+
+int hinic3_vf_mag_event_handler(void *hwdev, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ return hinic3_mag_event_handler(hwdev, cmd, buf_in, in_size,
+ buf_out, out_size);
+}
+
+/* pf/ppf handler mgmt cpu report hilink event */
+void hinic3_pf_mag_event_handler(void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ hinic3_mag_event_handler(pri_handle, cmd, buf_in, in_size,
+ buf_out, out_size);
+}
+
+static int _mag_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u16 channel)
+{
+ u32 i, cmd_cnt = ARRAY_LEN(vf_mag_cmd_handler);
+ bool cmd_to_pf = false;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF) {
+ for (i = 0; i < cmd_cnt; i++) {
+ if (cmd == vf_mag_cmd_handler[i].cmd) {
+ cmd_to_pf = true;
+ break;
+ }
+ }
+ }
+
+ if (cmd_to_pf)
+ return hinic3_mbox_to_pf(hwdev, HINIC3_MOD_HILINK, cmd, buf_in,
+ in_size, buf_out, out_size, 0,
+ channel);
+
+ return hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_HILINK, cmd, buf_in,
+ in_size, buf_out, out_size, 0, channel);
+}
+
+static int mag_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return _mag_msg_to_mgmt_sync(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size, HINIC3_CHANNEL_NIC);
+}
+
+static int mag_msg_to_mgmt_sync_ch(void *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u16 channel)
+{
+ return _mag_msg_to_mgmt_sync(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size, channel);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_main.c b/drivers/net/ethernet/huawei/hinic3/hinic3_main.c
new file mode 100644
index 000000000000..87f6f5417e9e
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_main.c
@@ -0,0 +1,1125 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+#include <linux/dcbnl.h>
+#include <linux/tcp.h>
+#include <linux/ip.h>
+#include <linux/debugfs.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_crm.h"
+#include "hinic3_mt.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
+#include "hinic3_lld.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_rss.h"
+#include "hinic3_dcb.h"
+#include "hinic3_nic_prof.h"
+#include "hinic3_profile.h"
+
+/*lint -e806*/
+#define DEFAULT_POLL_WEIGHT 64
+static unsigned int poll_weight = DEFAULT_POLL_WEIGHT;
+module_param(poll_weight, uint, 0444);
+MODULE_PARM_DESC(poll_weight, "Number of packets for NAPI budget (default=64)");
+
+#define HINIC3_DEFAULT_TXRX_MSIX_PENDING_LIMIT 2
+#define HINIC3_DEFAULT_TXRX_MSIX_COALESC_TIMER_CFG 25
+#define HINIC3_DEFAULT_TXRX_MSIX_RESEND_TIMER_CFG 7
+
+static unsigned char qp_pending_limit = HINIC3_DEFAULT_TXRX_MSIX_PENDING_LIMIT;
+module_param(qp_pending_limit, byte, 0444);
+MODULE_PARM_DESC(qp_pending_limit, "QP MSI-X Interrupt coalescing parameter pending_limit (default=2)");
+
+static unsigned char qp_coalesc_timer_cfg =
+ HINIC3_DEFAULT_TXRX_MSIX_COALESC_TIMER_CFG;
+module_param(qp_coalesc_timer_cfg, byte, 0444);
+MODULE_PARM_DESC(qp_coalesc_timer_cfg, "QP MSI-X Interrupt coalescing parameter coalesc_timer_cfg (default=25)");
+
+#define DEFAULT_RX_BUFF_LEN 2
+u16 rx_buff = DEFAULT_RX_BUFF_LEN;
+module_param(rx_buff, ushort, 0444);
+MODULE_PARM_DESC(rx_buff, "Set rx_buff size in KB, must be a power of 2 in range 2 - 16 (default is 2KB)");
+
+static unsigned int lro_replenish_thld = 256;
+module_param(lro_replenish_thld, uint, 0444);
+MODULE_PARM_DESC(lro_replenish_thld, "Number of WQEs for LRO replenish buffer (default=256)");
+
+static unsigned char set_link_status_follow = HINIC3_LINK_FOLLOW_STATUS_MAX;
+module_param(set_link_status_follow, byte, 0444);
+MODULE_PARM_DESC(set_link_status_follow, "Set link status follow port status (0=default,1=follow,2=separate,3=unset)");
+
+/*lint +e806*/
+
+#define HINIC3_NIC_DEV_WQ_NAME "hinic3_nic_dev_wq"
+
+#define DEFAULT_MSG_ENABLE (NETIF_MSG_DRV | NETIF_MSG_LINK)
+
+#define QID_MASKED(q_id, nic_dev) ((q_id) & ((nic_dev)->num_qps - 1))
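+/* QID_MASKED relies on num_qps being a power of two, so the AND acts as a
+ * cheap modulo: e.g. with num_qps = 8, QID_MASKED(10, ...) = 10 & 7 = 2,
+ * i.e. 10 % 8.
+ */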
+#define WATCHDOG_TIMEOUT 5
+
+#define HINIC3_SQ_DEPTH 1024
+#define HINIC3_RQ_DEPTH 1024
+
+enum hinic3_rx_buff_len {
+ RX_BUFF_VALID_2KB = 2,
+ RX_BUFF_VALID_4KB = 4,
+ RX_BUFF_VALID_8KB = 8,
+ RX_BUFF_VALID_16KB = 16,
+};
+
+#define CONVERT_UNIT 1024
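+/* The rx_buff module parameter is expressed in KB; setup_nic_dev() below
+ * computes rx_buff_len = rx_buff * CONVERT_UNIT, so e.g. rx_buff = 2 gives a
+ * 2048-byte receive buffer, from which dma_rx_buff_size and page_order are
+ * then derived.
+ */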
+
+#ifdef HAVE_MULTI_VLAN_OFFLOAD_EN
+static int hinic3_netdev_event(struct notifier_block *notifier, unsigned long event, void *ptr);
+
+/* used for netdev notifier register/unregister */
+static DEFINE_MUTEX(hinic3_netdev_notifiers_mutex);
+static int hinic3_netdev_notifiers_ref_cnt;
+static struct notifier_block hinic3_netdev_notifier = {
+ .notifier_call = hinic3_netdev_event,
+};
+
+static void hinic3_register_notifier(struct hinic3_nic_dev *nic_dev)
+{
+ int err;
+
+ mutex_lock(&hinic3_netdev_notifiers_mutex);
+ hinic3_netdev_notifiers_ref_cnt++;
+ if (hinic3_netdev_notifiers_ref_cnt == 1) {
+ err = register_netdevice_notifier(&hinic3_netdev_notifier);
+ if (err) {
+ nic_info(&nic_dev->pdev->dev, "Register netdevice notifier failed, err: %d\n",
+ err);
+ hinic3_netdev_notifiers_ref_cnt--;
+ }
+ }
+ mutex_unlock(&hinic3_netdev_notifiers_mutex);
+}
+
+static void hinic3_unregister_notifier(struct hinic3_nic_dev *nic_dev)
+{
+ mutex_lock(&hinic3_netdev_notifiers_mutex);
+ if (hinic3_netdev_notifiers_ref_cnt == 1)
+ unregister_netdevice_notifier(&hinic3_netdev_notifier);
+
+ if (hinic3_netdev_notifiers_ref_cnt)
+ hinic3_netdev_notifiers_ref_cnt--;
+ mutex_unlock(&hinic3_netdev_notifiers_mutex);
+}
+
+#define HINIC3_MAX_VLAN_DEPTH_OFFLOAD_SUPPORT 1
+#define HINIC3_VLAN_CLEAR_OFFLOAD (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | \
+ NETIF_F_SCTP_CRC | NETIF_F_RXCSUM | \
+ NETIF_F_ALL_TSO)
+
+static int hinic3_netdev_event(struct notifier_block *notifier, unsigned long event, void *ptr)
+{
+ struct net_device *ndev = netdev_notifier_info_to_dev(ptr);
+ struct net_device *real_dev = NULL;
+ struct net_device *ret = NULL;
+ u16 vlan_depth;
+
+ if (!is_vlan_dev(ndev))
+ return NOTIFY_DONE;
+
+ dev_hold(ndev);
+
+ switch (event) {
+ case NETDEV_REGISTER:
+ real_dev = vlan_dev_real_dev(ndev);
+ if (!hinic3_is_netdev_ops_match(real_dev))
+ goto out;
+
+ vlan_depth = 1;
+ ret = vlan_dev_priv(ndev)->real_dev;
+ while (is_vlan_dev(ret)) {
+ ret = vlan_dev_priv(ret)->real_dev;
+ vlan_depth++;
+ }
+
+ if (vlan_depth == HINIC3_MAX_VLAN_DEPTH_OFFLOAD_SUPPORT) {
+ ndev->vlan_features &= (~HINIC3_VLAN_CLEAR_OFFLOAD);
+ } else if (vlan_depth > HINIC3_MAX_VLAN_DEPTH_OFFLOAD_SUPPORT) {
+#ifdef HAVE_NDO_SET_FEATURES
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+ set_netdev_hw_features(ndev,
+ get_netdev_hw_features(ndev) &
+ (~HINIC3_VLAN_CLEAR_OFFLOAD));
+#else
+ ndev->hw_features &= (~HINIC3_VLAN_CLEAR_OFFLOAD);
+#endif
+#endif
+ ndev->features &= (~HINIC3_VLAN_CLEAR_OFFLOAD);
+ }
+
+ break;
+
+ default:
+ break;
+ }
+
+out:
+ dev_put(ndev);
+
+ return NOTIFY_DONE;
+}
+#endif
+
+void hinic3_link_status_change(struct hinic3_nic_dev *nic_dev, bool status)
+{
+ struct net_device *netdev = nic_dev->netdev;
+
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev) ||
+ test_bit(HINIC3_LP_TEST, &nic_dev->flags) ||
+ test_bit(HINIC3_FORCE_LINK_UP, &nic_dev->flags))
+ return;
+
+ if (status) {
+ if (netif_carrier_ok(netdev))
+ return;
+
+ nic_dev->link_status = status;
+ netif_carrier_on(netdev);
+ nicif_info(nic_dev, link, netdev, "Link is up\n");
+ } else {
+ if (!netif_carrier_ok(netdev))
+ return;
+
+ nic_dev->link_status = status;
+ netif_carrier_off(netdev);
+ nicif_info(nic_dev, link, netdev, "Link is down\n");
+ }
+}
+
+static void netdev_feature_init(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ netdev_features_t dft_fts = 0;
+ netdev_features_t cso_fts = 0;
+ netdev_features_t vlan_fts = 0;
+ netdev_features_t tso_fts = 0;
+ netdev_features_t hw_features = 0;
+
+ dft_fts |= NETIF_F_SG | NETIF_F_HIGHDMA;
+
+ if (HINIC3_SUPPORT_CSUM(nic_dev->hwdev))
+ cso_fts |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | NETIF_F_RXCSUM;
+ if (HINIC3_SUPPORT_SCTP_CRC(nic_dev->hwdev))
+ cso_fts |= NETIF_F_SCTP_CRC;
+
+ if (HINIC3_SUPPORT_TSO(nic_dev->hwdev))
+ tso_fts |= NETIF_F_TSO | NETIF_F_TSO6;
+
+ if (HINIC3_SUPPORT_VLAN_OFFLOAD(nic_dev->hwdev)) {
+#if defined(NETIF_F_HW_VLAN_CTAG_TX)
+ vlan_fts |= NETIF_F_HW_VLAN_CTAG_TX;
+#elif defined(NETIF_F_HW_VLAN_TX)
+ vlan_fts |= NETIF_F_HW_VLAN_TX;
+#endif
+
+#if defined(NETIF_F_HW_VLAN_CTAG_RX)
+ vlan_fts |= NETIF_F_HW_VLAN_CTAG_RX;
+#elif defined(NETIF_F_HW_VLAN_RX)
+ vlan_fts |= NETIF_F_HW_VLAN_RX;
+#endif
+ }
+
+ if (HINIC3_SUPPORT_RXVLAN_FILTER(nic_dev->hwdev)) {
+#if defined(NETIF_F_HW_VLAN_CTAG_FILTER)
+ vlan_fts |= NETIF_F_HW_VLAN_CTAG_FILTER;
+#elif defined(NETIF_F_HW_VLAN_FILTER)
+ vlan_fts |= NETIF_F_HW_VLAN_FILTER;
+#endif
+ }
+
+#ifdef HAVE_ENCAPSULATION_TSO
+ if (HINIC3_SUPPORT_VXLAN_OFFLOAD(nic_dev->hwdev))
+ tso_fts |= NETIF_F_GSO_UDP_TUNNEL | NETIF_F_GSO_UDP_TUNNEL_CSUM;
+#endif /* HAVE_ENCAPSULATION_TSO */
+
+ /* LRO is disabled by default; only set hw features */
+ if (HINIC3_SUPPORT_LRO(nic_dev->hwdev))
+ hw_features |= NETIF_F_LRO;
+
+#if (KERNEL_VERSION(4, 11, 0) > LINUX_VERSION_CODE)
+ if (HINIC3_SUPPORT_UFO(nic_dev->hwdev)) {
+ /* UFO is disabled by default */
+ hw_features |= NETIF_F_UFO;
+ netdev->vlan_features |= NETIF_F_UFO;
+ }
+#endif
+
+ netdev->features |= dft_fts | cso_fts | tso_fts | vlan_fts;
+ netdev->vlan_features |= dft_fts | cso_fts | tso_fts;
+
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+ hw_features |= get_netdev_hw_features(netdev);
+#else
+ hw_features |= netdev->hw_features;
+#endif
+
+ hw_features |= netdev->features;
+
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+ set_netdev_hw_features(netdev, hw_features);
+#else
+ netdev->hw_features = hw_features;
+#endif
+
+#ifdef IFF_UNICAST_FLT
+ netdev->priv_flags |= IFF_UNICAST_FLT;
+#endif
+
+#ifdef HAVE_ENCAPSULATION_CSUM
+ netdev->hw_enc_features |= dft_fts;
+ if (HINIC3_SUPPORT_VXLAN_OFFLOAD(nic_dev->hwdev)) {
+ netdev->hw_enc_features |= cso_fts;
+#ifdef HAVE_ENCAPSULATION_TSO
+ netdev->hw_enc_features |= tso_fts | NETIF_F_TSO_ECN;
+#endif /* HAVE_ENCAPSULATION_TSO */
+ }
+#endif /* HAVE_ENCAPSULATION_CSUM */
+}
+
+static void init_intr_coal_param(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_intr_coal_info *info = NULL;
+ u16 i;
+
+ for (i = 0; i < nic_dev->max_qps; i++) {
+ info = &nic_dev->intr_coalesce[i];
+
+ info->pending_limt = qp_pending_limit;
+ info->coalesce_timer_cfg = qp_coalesc_timer_cfg;
+
+ info->resend_timer_cfg = HINIC3_DEFAULT_TXRX_MSIX_RESEND_TIMER_CFG;
+
+ info->pkt_rate_high = HINIC3_RX_RATE_HIGH;
+ info->rx_usecs_high = HINIC3_RX_COAL_TIME_HIGH;
+ info->rx_pending_limt_high = HINIC3_RX_PENDING_LIMIT_HIGH;
+
+ info->pkt_rate_low = HINIC3_RX_RATE_LOW;
+ info->rx_usecs_low = HINIC3_RX_COAL_TIME_LOW;
+ info->rx_pending_limt_low = HINIC3_RX_PENDING_LIMIT_LOW;
+ }
+}
+
+static int hinic3_init_intr_coalesce(struct hinic3_nic_dev *nic_dev)
+{
+ u64 size;
+
+ if (qp_pending_limit != HINIC3_DEFAULT_TXRX_MSIX_PENDING_LIMIT ||
+ qp_coalesc_timer_cfg != HINIC3_DEFAULT_TXRX_MSIX_COALESC_TIMER_CFG)
+ nic_dev->intr_coal_set_flag = 1;
+ else
+ nic_dev->intr_coal_set_flag = 0;
+
+ size = sizeof(*nic_dev->intr_coalesce) * nic_dev->max_qps;
+ if (!size) {
+ nic_err(&nic_dev->pdev->dev, "Cannot allocate zero size intr coalesce\n");
+ return -EINVAL;
+ }
+ nic_dev->intr_coalesce = kzalloc(size, GFP_KERNEL);
+ if (!nic_dev->intr_coalesce) {
+ nic_err(&nic_dev->pdev->dev, "Failed to alloc intr coalesce\n");
+ return -ENOMEM;
+ }
+
+ init_intr_coal_param(nic_dev);
+
+ if (test_bit(HINIC3_INTR_ADAPT, &nic_dev->flags))
+ nic_dev->adaptive_rx_coal = 1;
+ else
+ nic_dev->adaptive_rx_coal = 0;
+
+ return 0;
+}
+
+static void hinic3_free_intr_coalesce(struct hinic3_nic_dev *nic_dev)
+{
+ kfree(nic_dev->intr_coalesce);
+}
+
+static int hinic3_alloc_txrxqs(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ int err;
+
+ err = hinic3_alloc_txqs(netdev);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to alloc txqs\n");
+ return err;
+ }
+
+ err = hinic3_alloc_rxqs(netdev);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to alloc rxqs\n");
+ goto alloc_rxqs_err;
+ }
+
+ err = hinic3_init_intr_coalesce(nic_dev);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to init_intr_coalesce\n");
+ goto init_intr_err;
+ }
+
+ return 0;
+
+init_intr_err:
+ hinic3_free_rxqs(netdev);
+
+alloc_rxqs_err:
+ hinic3_free_txqs(netdev);
+
+ return err;
+}
+
+static void hinic3_free_txrxqs(struct hinic3_nic_dev *nic_dev)
+{
+ hinic3_free_intr_coalesce(nic_dev);
+ hinic3_free_rxqs(nic_dev->netdev);
+ hinic3_free_txqs(nic_dev->netdev);
+}
+
+static void hinic3_sw_deinit(struct hinic3_nic_dev *nic_dev)
+{
+ hinic3_free_txrxqs(nic_dev);
+
+ hinic3_clean_mac_list_filter(nic_dev);
+
+ hinic3_del_mac(nic_dev->hwdev, nic_dev->netdev->dev_addr, 0,
+ hinic3_global_func_id(nic_dev->hwdev),
+ HINIC3_CHANNEL_NIC);
+
+ hinic3_clear_rss_config(nic_dev);
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags))
+ hinic3_sync_dcb_state(nic_dev->hwdev, 1, 0);
+}
+
+static int hinic3_sw_init(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u64 nic_features;
+ int err = 0;
+
+ nic_features = hinic3_get_feature_cap(nic_dev->hwdev);
+ /* The features supported by the driver can be adjusted here
+ * according to the usage scenario.
+ */
+ nic_features &= NIC_DRV_DEFAULT_FEATURE;
+ hinic3_update_nic_feature(nic_dev->hwdev, nic_features);
+
+ sema_init(&nic_dev->port_state_sem, 1);
+
+ err = hinic3_dcb_init(nic_dev);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to init dcb\n");
+ return -EFAULT;
+ }
+
+ nic_dev->q_params.sq_depth = HINIC3_SQ_DEPTH;
+ nic_dev->q_params.rq_depth = HINIC3_RQ_DEPTH;
+
+ hinic3_try_to_enable_rss(nic_dev);
+
+ err = hinic3_get_default_mac(nic_dev->hwdev, netdev->dev_addr);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to get MAC address\n");
+ goto get_mac_err;
+ }
+
+ if (!is_valid_ether_addr(netdev->dev_addr)) {
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ nic_err(&nic_dev->pdev->dev, "Invalid MAC address %pM\n",
+ netdev->dev_addr);
+ err = -EIO;
+ goto err_mac;
+ }
+
+ nic_info(&nic_dev->pdev->dev, "Invalid MAC address %pM, using random\n",
+ netdev->dev_addr);
+ eth_hw_addr_random(netdev);
+ }
+
+ err = hinic3_set_mac(nic_dev->hwdev, netdev->dev_addr, 0,
+ hinic3_global_func_id(nic_dev->hwdev),
+ HINIC3_CHANNEL_NIC);
+ /* For a VF, the PF may have already set the VF MAC address, so this
+ * condition must not be treated as an error during the driver probe
+ * procedure.
+ */
+ if (err && err != HINIC3_PF_SET_VF_ALREADY) {
+ nic_err(&nic_dev->pdev->dev, "Failed to set default MAC\n");
+ goto set_mac_err;
+ }
+
+ /* MTU range: 384 - 9600 */
+#ifdef HAVE_NETDEVICE_MIN_MAX_MTU
+ netdev->min_mtu = HINIC3_MIN_MTU_SIZE;
+ netdev->max_mtu = HINIC3_MAX_JUMBO_FRAME_SIZE;
+#endif
+
+#ifdef HAVE_NETDEVICE_EXTENDED_MIN_MAX_MTU
+ netdev->extended->min_mtu = HINIC3_MIN_MTU_SIZE;
+ netdev->extended->max_mtu = HINIC3_MAX_JUMBO_FRAME_SIZE;
+#endif
+
+ err = hinic3_alloc_txrxqs(nic_dev);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to alloc qps\n");
+ goto alloc_qps_err;
+ }
+
+ return 0;
+
+alloc_qps_err:
+ hinic3_del_mac(nic_dev->hwdev, netdev->dev_addr, 0,
+ hinic3_global_func_id(nic_dev->hwdev),
+ HINIC3_CHANNEL_NIC);
+
+set_mac_err:
+err_mac:
+get_mac_err:
+ hinic3_clear_rss_config(nic_dev);
+
+ return err;
+}
+
+static void hinic3_assign_netdev_ops(struct hinic3_nic_dev *adapter)
+{
+ hinic3_set_netdev_ops(adapter);
+ if (!HINIC3_FUNC_IS_VF(adapter->hwdev))
+ hinic3_set_ethtool_ops(adapter->netdev);
+ else
+ hinic3vf_set_ethtool_ops(adapter->netdev);
+
+ adapter->netdev->watchdog_timeo = WATCHDOG_TIMEOUT * HZ;
+}
+
+static int hinic3_validate_parameters(struct hinic3_lld_dev *lld_dev)
+{
+ struct pci_dev *pdev = lld_dev->pdev;
+
+ /* If weight exceeds the queue depth, the queue resources will be
+ * exhausted, and increasing it has no effect.
+ */
+ if (!poll_weight || poll_weight > HINIC3_MAX_RX_QUEUE_DEPTH) {
+ nic_warn(&pdev->dev, "Module Parameter poll_weight is out of range: [1, %d], resetting to %d\n",
+ HINIC3_MAX_RX_QUEUE_DEPTH, DEFAULT_POLL_WEIGHT);
+ poll_weight = DEFAULT_POLL_WEIGHT;
+ }
+
+ /* Check the rx_buff value; the default is 2KB.
+ * Valid values are 2KB/4KB/8KB/16KB.
+ */
+ if (rx_buff != RX_BUFF_VALID_2KB && rx_buff != RX_BUFF_VALID_4KB &&
+ rx_buff != RX_BUFF_VALID_8KB && rx_buff != RX_BUFF_VALID_16KB) {
+ nic_warn(&pdev->dev, "Module Parameter rx_buff value %u is out of range, must be 2^n. Valid range is 2 - 16, resetting to %dKB",
+ rx_buff, DEFAULT_RX_BUFF_LEN);
+ rx_buff = DEFAULT_RX_BUFF_LEN;
+ }
+
+ return 0;
+}
+
+static void decide_intr_cfg(struct hinic3_nic_dev *nic_dev)
+{
+ set_bit(HINIC3_INTR_ADAPT, &nic_dev->flags);
+}
+
+static void adaptive_configuration_init(struct hinic3_nic_dev *nic_dev)
+{
+ decide_intr_cfg(nic_dev);
+}
+
+static int set_interrupt_moder(struct hinic3_nic_dev *nic_dev, u16 q_id,
+ u8 coalesc_timer_cfg, u8 pending_limt)
+{
+ struct interrupt_info info;
+ int err;
+
+ memset(&info, 0, sizeof(info));
+
+ if (coalesc_timer_cfg == nic_dev->rxqs[q_id].last_coalesc_timer_cfg &&
+ pending_limt == nic_dev->rxqs[q_id].last_pending_limt)
+ return 0;
+
+ /* If the netdev is not running or the qp is not in use,
+ * there is no need to write the coalesce settings to hw.
+ */
+ if (!HINIC3_CHANNEL_RES_VALID(nic_dev) ||
+ q_id >= nic_dev->q_params.num_qps)
+ return 0;
+
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.coalesc_timer_cfg = coalesc_timer_cfg;
+ info.pending_limt = pending_limt;
+ info.msix_index = nic_dev->q_params.irq_cfg[q_id].msix_entry_idx;
+ info.resend_timer_cfg =
+ nic_dev->intr_coalesce[q_id].resend_timer_cfg;
+
+ err = hinic3_set_interrupt_cfg(nic_dev->hwdev, info,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to modify moderation for Queue: %u\n", q_id);
+ } else {
+ nic_dev->rxqs[q_id].last_coalesc_timer_cfg = coalesc_timer_cfg;
+ nic_dev->rxqs[q_id].last_pending_limt = pending_limt;
+ }
+
+ return err;
+}
+
+static void calc_coal_para(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_intr_coal_info *q_coal, u64 rx_rate,
+ u8 *coalesc_timer_cfg, u8 *pending_limt)
+{
+ if (rx_rate < q_coal->pkt_rate_low) {
+ *coalesc_timer_cfg = q_coal->rx_usecs_low;
+ *pending_limt = q_coal->rx_pending_limt_low;
+ } else if (rx_rate > q_coal->pkt_rate_high) {
+ *coalesc_timer_cfg = q_coal->rx_usecs_high;
+ *pending_limt = q_coal->rx_pending_limt_high;
+ } else {
+ *coalesc_timer_cfg =
+ (u8)((rx_rate - q_coal->pkt_rate_low) *
+ (q_coal->rx_usecs_high - q_coal->rx_usecs_low) /
+ (q_coal->pkt_rate_high - q_coal->pkt_rate_low) +
+ q_coal->rx_usecs_low);
+
+ *pending_limt =
+ (u8)((rx_rate - q_coal->pkt_rate_low) *
+ (q_coal->rx_pending_limt_high - q_coal->rx_pending_limt_low) /
+ (q_coal->pkt_rate_high - q_coal->pkt_rate_low) +
+ q_coal->rx_pending_limt_low);
+ }
+}
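+/* Worked example of the interpolation above, with illustrative numbers only
+ * (the real bounds come from the HINIC3_RX_* defaults): for
+ * pkt_rate_low = 100, pkt_rate_high = 300, rx_usecs_low = 16,
+ * rx_usecs_high = 32 and rx_rate = 200, the timer becomes
+ * (200 - 100) * (32 - 16) / (300 - 100) + 16 = 24, i.e. linearly scaled
+ * between the low and high settings; pending_limt scales the same way.
+ */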
+
+static void update_queue_coal(struct hinic3_nic_dev *nic_dev, u16 qid,
+ u64 rx_rate, u64 avg_pkt_size, u64 tx_rate)
+{
+ struct hinic3_intr_coal_info *q_coal = NULL;
+ u8 coalesc_timer_cfg, pending_limt;
+
+ q_coal = &nic_dev->intr_coalesce[qid];
+
+ if (rx_rate > HINIC3_RX_RATE_THRESH && avg_pkt_size > HINIC3_AVG_PKT_SMALL) {
+ calc_coal_para(nic_dev, q_coal, rx_rate, &coalesc_timer_cfg, &pending_limt);
+ } else {
+ coalesc_timer_cfg = HINIC3_LOWEST_LATENCY;
+ pending_limt = q_coal->rx_pending_limt_low;
+ }
+
+ set_interrupt_moder(nic_dev, qid, coalesc_timer_cfg, pending_limt);
+}
+
+void hinic3_auto_moderation_work(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct hinic3_nic_dev *nic_dev = container_of(delay,
+ struct hinic3_nic_dev,
+ moderation_task);
+ unsigned long period = (unsigned long)(jiffies -
+ nic_dev->last_moder_jiffies);
+ u64 rx_packets, rx_bytes, rx_pkt_diff, rx_rate, avg_pkt_size;
+ u64 tx_packets, tx_bytes, tx_pkt_diff, tx_rate;
+ u16 qid;
+
+ if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags))
+ return;
+
+ queue_delayed_work(nic_dev->workq, &nic_dev->moderation_task,
+ HINIC3_MODERATONE_DELAY);
+
+ if (!nic_dev->adaptive_rx_coal || !period)
+ return;
+
+ for (qid = 0; qid < nic_dev->q_params.num_qps; qid++) {
+ rx_packets = nic_dev->rxqs[qid].rxq_stats.packets;
+ rx_bytes = nic_dev->rxqs[qid].rxq_stats.bytes;
+ tx_packets = nic_dev->txqs[qid].txq_stats.packets;
+ tx_bytes = nic_dev->txqs[qid].txq_stats.bytes;
+
+ rx_pkt_diff =
+ rx_packets - nic_dev->rxqs[qid].last_moder_packets;
+ avg_pkt_size = rx_pkt_diff ?
+ ((unsigned long)(rx_bytes -
+ nic_dev->rxqs[qid].last_moder_bytes)) /
+ rx_pkt_diff : 0;
+
+ rx_rate = rx_pkt_diff * HZ / period;
+ tx_pkt_diff =
+ tx_packets - nic_dev->txqs[qid].last_moder_packets;
+ tx_rate = tx_pkt_diff * HZ / period;
+
+ update_queue_coal(nic_dev, qid, rx_rate, avg_pkt_size,
+ tx_rate);
+
+ nic_dev->rxqs[qid].last_moder_packets = rx_packets;
+ nic_dev->rxqs[qid].last_moder_bytes = rx_bytes;
+ nic_dev->txqs[qid].last_moder_packets = tx_packets;
+ nic_dev->txqs[qid].last_moder_bytes = tx_bytes;
+ }
+
+ nic_dev->last_moder_jiffies = jiffies;
+}
+
+static void hinic3_periodic_work_handler(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct hinic3_nic_dev *nic_dev = container_of(delay, struct hinic3_nic_dev, periodic_work);
+
+ if (test_and_clear_bit(EVENT_WORK_TX_TIMEOUT, &nic_dev->event_flag))
+ hinic3_fault_event_report(nic_dev->hwdev, HINIC3_FAULT_SRC_TX_TIMEOUT,
+ FAULT_LEVEL_SERIOUS_FLR);
+
+ queue_delayed_work(nic_dev->workq, &nic_dev->periodic_work, HZ);
+}
+
+static void free_nic_dev(struct hinic3_nic_dev *nic_dev)
+{
+ hinic3_deinit_nic_prof_adapter(nic_dev);
+ destroy_workqueue(nic_dev->workq);
+ kfree(nic_dev->vlan_bitmap);
+}
+
+static int setup_nic_dev(struct net_device *netdev,
+ struct hinic3_lld_dev *lld_dev)
+{
+ struct pci_dev *pdev = lld_dev->pdev;
+ struct hinic3_nic_dev *nic_dev;
+ char *netdev_name_fmt;
+ u32 page_num;
+
+ nic_dev = (struct hinic3_nic_dev *)netdev_priv(netdev);
+ nic_dev->netdev = netdev;
+ SET_NETDEV_DEV(netdev, &pdev->dev);
+ nic_dev->lld_dev = lld_dev;
+ nic_dev->hwdev = lld_dev->hwdev;
+ nic_dev->pdev = pdev;
+ nic_dev->poll_weight = (int)poll_weight;
+ nic_dev->msg_enable = DEFAULT_MSG_ENABLE;
+ nic_dev->lro_replenish_thld = lro_replenish_thld;
+ nic_dev->rx_buff_len = (u16)(rx_buff * CONVERT_UNIT);
+ nic_dev->dma_rx_buff_size = RX_BUFF_NUM_PER_PAGE * nic_dev->rx_buff_len;
+ page_num = nic_dev->dma_rx_buff_size / PAGE_SIZE;
+ nic_dev->page_order = page_num > 0 ? ilog2(page_num) : 0;
+
+ mutex_init(&nic_dev->nic_mutex);
+
+ nic_dev->vlan_bitmap = kzalloc(VLAN_BITMAP_SIZE(nic_dev), GFP_KERNEL);
+ if (!nic_dev->vlan_bitmap) {
+ nic_err(&pdev->dev, "Failed to allocate vlan bitmap\n");
+ return -ENOMEM;
+ }
+
+ nic_dev->workq = create_singlethread_workqueue(HINIC3_NIC_DEV_WQ_NAME);
+ if (!nic_dev->workq) {
+ nic_err(&pdev->dev, "Failed to initialize nic workqueue\n");
+ kfree(nic_dev->vlan_bitmap);
+ return -ENOMEM;
+ }
+
+ INIT_DELAYED_WORK(&nic_dev->periodic_work, hinic3_periodic_work_handler);
+ INIT_DELAYED_WORK(&nic_dev->rxq_check_work, hinic3_rxq_check_work_handler);
+
+ INIT_LIST_HEAD(&nic_dev->uc_filter_list);
+ INIT_LIST_HEAD(&nic_dev->mc_filter_list);
+ INIT_WORK(&nic_dev->rx_mode_work, hinic3_set_rx_mode_work);
+
+ INIT_LIST_HEAD(&nic_dev->rx_flow_rule.rules);
+ INIT_LIST_HEAD(&nic_dev->tcam.tcam_list);
+ INIT_LIST_HEAD(&nic_dev->tcam.tcam_dynamic_info.tcam_dynamic_list);
+
+ hinic3_init_nic_prof_adapter(nic_dev);
+
+ netdev_name_fmt = hinic3_get_dft_netdev_name_fmt(nic_dev);
+ if (netdev_name_fmt)
+ strncpy(netdev->name, netdev_name_fmt, IFNAMSIZ);
+
+ return 0;
+}
+
+static int hinic3_set_default_hw_feature(struct hinic3_nic_dev *nic_dev)
+{
+ int err;
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ hinic3_dcb_reset_hw_config(nic_dev);
+
+ if (set_link_status_follow < HINIC3_LINK_FOLLOW_STATUS_MAX) {
+ err = hinic3_set_link_status_follow(nic_dev->hwdev,
+ set_link_status_follow);
+ if (err == HINIC3_MGMT_CMD_UNSUPPORTED)
+ nic_warn(&nic_dev->pdev->dev,
+ "Current version of firmware doesn't support to set link status follow port status\n");
+ }
+ }
+
+ err = hinic3_set_nic_feature_to_hw(nic_dev->hwdev);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to set nic features\n");
+ return err;
+ }
+
+ /* enable all hw features in netdev->features */
+ err = hinic3_set_hw_features(nic_dev);
+ if (err) {
+ hinic3_update_nic_feature(nic_dev->hwdev, 0);
+ hinic3_set_nic_feature_to_hw(nic_dev->hwdev);
+ return err;
+ }
+
+ if (HINIC3_SUPPORT_RXQ_RECOVERY(nic_dev->hwdev))
+ set_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags);
+
+ return 0;
+}
+
+static int nic_probe(struct hinic3_lld_dev *lld_dev, void **uld_dev,
+ char *uld_dev_name)
+{
+ struct pci_dev *pdev = lld_dev->pdev;
+ struct hinic3_nic_dev *nic_dev = NULL;
+ struct net_device *netdev = NULL;
+ u16 max_qps, glb_func_id;
+ int err;
+
+ if (!hinic3_support_nic(lld_dev->hwdev, NULL)) {
+ nic_info(&pdev->dev, "Hw don't support nic\n");
+ return 0;
+ }
+
+ nic_info(&pdev->dev, "NIC service probe begin\n");
+
+ err = hinic3_validate_parameters(lld_dev);
+ if (err) {
+ err = -EINVAL;
+ goto err_out;
+ }
+
+ glb_func_id = hinic3_global_func_id(lld_dev->hwdev);
+ err = hinic3_func_reset(lld_dev->hwdev, glb_func_id, HINIC3_NIC_RES,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(&pdev->dev, "Failed to reset function\n");
+ goto err_out;
+ }
+
+ max_qps = hinic3_func_max_nic_qnum(lld_dev->hwdev);
+ netdev = alloc_etherdev_mq(sizeof(*nic_dev), max_qps);
+ if (!netdev) {
+ nic_err(&pdev->dev, "Failed to allocate ETH device\n");
+ err = -ENOMEM;
+ goto err_out;
+ }
+
+ nic_dev = (struct hinic3_nic_dev *)netdev_priv(netdev);
+ err = setup_nic_dev(netdev, lld_dev);
+ if (err)
+ goto setup_dev_err;
+
+ adaptive_configuration_init(nic_dev);
+
+ /* get nic cap from hw */
+ hinic3_support_nic(lld_dev->hwdev, &nic_dev->nic_cap);
+
+ err = hinic3_init_nic_hwdev(nic_dev->hwdev, pdev, &pdev->dev,
+ nic_dev->rx_buff_len);
+ if (err) {
+ nic_err(&pdev->dev, "Failed to init nic hwdev\n");
+ goto init_nic_hwdev_err;
+ }
+
+ err = hinic3_sw_init(nic_dev);
+ if (err)
+ goto sw_init_err;
+
+ hinic3_assign_netdev_ops(nic_dev);
+ netdev_feature_init(netdev);
+
+ err = hinic3_set_default_hw_feature(nic_dev);
+ if (err)
+ goto set_features_err;
+
+#ifdef HAVE_MULTI_VLAN_OFFLOAD_EN
+ hinic3_register_notifier(nic_dev);
+#endif
+
+ err = register_netdev(netdev);
+ if (err) {
+ nic_err(&pdev->dev, "Failed to register netdev\n");
+ goto netdev_err;
+ }
+
+ queue_delayed_work(nic_dev->workq, &nic_dev->periodic_work, HZ);
+ netif_carrier_off(netdev);
+
+ *uld_dev = nic_dev;
+ nicif_info(nic_dev, probe, netdev, "Register netdev succeeded\n");
+ nic_info(&pdev->dev, "NIC service probed\n");
+
+ return 0;
+
+netdev_err:
+#ifdef HAVE_MULTI_VLAN_OFFLOAD_EN
+ hinic3_unregister_notifier(nic_dev);
+#endif
+ hinic3_update_nic_feature(nic_dev->hwdev, 0);
+ hinic3_set_nic_feature_to_hw(nic_dev->hwdev);
+
+set_features_err:
+ hinic3_sw_deinit(nic_dev);
+
+sw_init_err:
+ hinic3_free_nic_hwdev(nic_dev->hwdev);
+
+init_nic_hwdev_err:
+ free_nic_dev(nic_dev);
+setup_dev_err:
+ free_netdev(netdev);
+
+err_out:
+ nic_err(&pdev->dev, "NIC service probe failed\n");
+
+ return err;
+}
+
+static void nic_remove(struct hinic3_lld_dev *lld_dev, void *adapter)
+{
+ struct hinic3_nic_dev *nic_dev = adapter;
+ struct net_device *netdev = NULL;
+
+ if (!nic_dev || !hinic3_support_nic(lld_dev->hwdev, NULL))
+ return;
+
+ nic_info(&lld_dev->pdev->dev, "NIC service remove begin\n");
+
+ netdev = nic_dev->netdev;
+
+ unregister_netdev(netdev);
+#ifdef HAVE_MULTI_VLAN_OFFLOAD_EN
+ hinic3_unregister_notifier(nic_dev);
+#endif
+
+ cancel_delayed_work_sync(&nic_dev->periodic_work);
+ cancel_delayed_work_sync(&nic_dev->rxq_check_work);
+ cancel_work_sync(&nic_dev->rx_mode_work);
+ destroy_workqueue(nic_dev->workq);
+
+ hinic3_flush_rx_flow_rule(nic_dev);
+
+ hinic3_update_nic_feature(nic_dev->hwdev, 0);
+ hinic3_set_nic_feature_to_hw(nic_dev->hwdev);
+
+ hinic3_sw_deinit(nic_dev);
+
+ hinic3_free_nic_hwdev(nic_dev->hwdev);
+
+ hinic3_deinit_nic_prof_adapter(nic_dev);
+ kfree(nic_dev->vlan_bitmap);
+
+ free_netdev(netdev);
+
+ nic_info(&lld_dev->pdev->dev, "NIC service removed\n");
+}
+
+static void sriov_state_change(struct hinic3_nic_dev *nic_dev,
+ const struct hinic3_sriov_state_info *info)
+{
+ if (!info->enable)
+ hinic3_clear_vfs_info(nic_dev->hwdev);
+}
+
+static void hinic3_port_module_event_handler(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_event_info *event)
+{
+ const char *g_hinic3_module_link_err[LINK_ERR_NUM] = { "Unrecognized module" };
+ struct hinic3_port_module_event *module_event = (void *)event->event_data;
+ enum port_module_event_type type = module_event->type;
+ enum link_err_type err_type = module_event->err_type;
+
+ switch (type) {
+ case HINIC3_PORT_MODULE_CABLE_PLUGGED:
+ case HINIC3_PORT_MODULE_CABLE_UNPLUGGED:
+ nicif_info(nic_dev, link, nic_dev->netdev,
+ "Port module event: Cable %s\n",
+ type == HINIC3_PORT_MODULE_CABLE_PLUGGED ?
+ "plugged" : "unplugged");
+ break;
+ case HINIC3_PORT_MODULE_LINK_ERR:
+ if (err_type >= LINK_ERR_NUM) {
+ nicif_info(nic_dev, link, nic_dev->netdev,
+ "Link failed, Unknown error type: 0x%x\n",
+ err_type);
+ } else {
+ nicif_info(nic_dev, link, nic_dev->netdev,
+ "Link failed, error type: 0x%x: %s\n",
+ err_type,
+ g_hinic3_module_link_err[err_type]);
+ }
+ break;
+ default:
+ nicif_err(nic_dev, link, nic_dev->netdev,
+ "Unknown port module type %d\n", type);
+ break;
+ }
+}
+
+static void nic_event(struct hinic3_lld_dev *lld_dev, void *adapter,
+ struct hinic3_event_info *event)
+{
+ struct hinic3_nic_dev *nic_dev = adapter;
+ struct hinic3_fault_event *fault = NULL;
+
+ if (!nic_dev || !event || !hinic3_support_nic(lld_dev->hwdev, NULL))
+ return;
+
+ switch (HINIC3_SRV_EVENT_TYPE(event->service, event->type)) {
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_LINK_DOWN):
+ hinic3_link_status_change(nic_dev, false);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_LINK_UP):
+ hinic3_link_status_change(nic_dev, true);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_NIC, EVENT_NIC_PORT_MODULE_EVENT):
+ hinic3_port_module_event_handler(nic_dev, event);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_SRIOV_STATE_CHANGE):
+ sriov_state_change(nic_dev, (void *)event->event_data);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_FAULT):
+ fault = (void *)event->event_data;
+ if (fault->fault_level == FAULT_LEVEL_SERIOUS_FLR &&
+ fault->event.chip.func_id == hinic3_global_func_id(lld_dev->hwdev))
+ hinic3_link_status_change(nic_dev, false);
+ break;
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_PCIE_LINK_DOWN):
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_HEART_LOST):
+ case HINIC3_SRV_EVENT_TYPE(EVENT_SRV_COMM, EVENT_COMM_MGMT_WATCHDOG):
+ hinic3_link_status_change(nic_dev, false);
+ break;
+ default:
+ break;
+ }
+}
+
+struct net_device *hinic3_get_netdev_by_lld(struct hinic3_lld_dev *lld_dev)
+{
+ struct hinic3_nic_dev *nic_dev = NULL;
+
+ if (!lld_dev || !hinic3_support_nic(lld_dev->hwdev, NULL))
+ return NULL;
+
+ nic_dev = hinic3_get_uld_dev_unsafe(lld_dev, SERVICE_T_NIC);
+ if (!nic_dev) {
+ nic_err(&lld_dev->pdev->dev,
+ "There's no net device attached to the pci device\n");
+ return NULL;
+ }
+
+ return nic_dev->netdev;
+}
+EXPORT_SYMBOL(hinic3_get_netdev_by_lld);
+
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_netdev(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = NULL;
+
+ if (!netdev || !hinic3_is_netdev_ops_match(netdev))
+ return NULL;
+
+ nic_dev = netdev_priv(netdev);
+ if (!nic_dev)
+ return NULL;
+
+ return nic_dev->lld_dev;
+}
+EXPORT_SYMBOL(hinic3_get_lld_dev_by_netdev);
+
+struct hinic3_uld_info g_nic_uld_info = {
+ .probe = nic_probe,
+ .remove = nic_remove,
+ .suspend = NULL,
+ .resume = NULL,
+ .event = nic_event,
+ .ioctl = nic_ioctl,
+}; /*lint -e766*/
+
+struct hinic3_uld_info *get_nic_uld_info(void)
+{
+ return &g_nic_uld_info;
+}
+
+#define HINIC3_NIC_DRV_DESC "Intelligent Network Interface Card Driver"
+
+static __init int hinic3_nic_lld_init(void)
+{
+ int err;
+
+ pr_info("%s - version %s\n", HINIC3_NIC_DRV_DESC,
+ HINIC3_NIC_DRV_VERSION);
+
+ err = hinic3_lld_init();
+ if (err) {
+ pr_err("SDK init failed.\n");
+ return err;
+ }
+
+ err = hinic3_register_uld(SERVICE_T_NIC, &g_nic_uld_info);
+ if (err) {
+ pr_err("Register hinic3 uld failed\n");
+ hinic3_lld_exit();
+ return err;
+ }
+
+ err = hinic3_module_pre_init();
+ if (err) {
+ pr_err("Init custom failed\n");
+ hinic3_unregister_uld(SERVICE_T_NIC);
+ hinic3_lld_exit();
+ return err;
+ }
+
+ return 0;
+}
+
+static __exit void hinic3_nic_lld_exit(void)
+{
+ hinic3_unregister_uld(SERVICE_T_NIC);
+
+ hinic3_module_post_exit();
+
+ hinic3_lld_exit();
+}
+
+module_init(hinic3_nic_lld_init);
+module_exit(hinic3_nic_lld_exit);
+
+MODULE_AUTHOR("Huawei Technologies CO., Ltd");
+MODULE_DESCRIPTION(HINIC3_NIC_DRV_DESC);
+MODULE_VERSION(HINIC3_NIC_DRV_VERSION);
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h b/drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h
new file mode 100644
index 000000000000..c4524d703c7d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_mgmt_interface.h
@@ -0,0 +1,1252 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Huawei HiNIC PCI Express Linux driver
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ *
+ */
+
+#ifndef HINIC_MGMT_INTERFACE_H
+#define HINIC_MGMT_INTERFACE_H
+
+#include "nic_cfg_comm.h"
+#include "mgmt_msg_base.h"
+
+#ifndef ETH_ALEN
+#define ETH_ALEN 6
+#endif
+
+#define HINIC3_CMD_OP_SET 1
+#define HINIC3_CMD_OP_GET 0
+
+#define HINIC3_CMD_OP_ADD 1
+#define HINIC3_CMD_OP_DEL 0
+
+#ifndef BIT
+#define BIT(n) (1UL << (n))
+#endif
+
+enum nic_feature_cap {
+ NIC_F_CSUM = BIT(0),
+ NIC_F_SCTP_CRC = BIT(1),
+ NIC_F_TSO = BIT(2),
+ NIC_F_LRO = BIT(3),
+ NIC_F_UFO = BIT(4),
+ NIC_F_RSS = BIT(5),
+ NIC_F_RX_VLAN_FILTER = BIT(6),
+ NIC_F_RX_VLAN_STRIP = BIT(7),
+ NIC_F_TX_VLAN_INSERT = BIT(8),
+ NIC_F_VXLAN_OFFLOAD = BIT(9),
+ NIC_F_IPSEC_OFFLOAD = BIT(10),
+ NIC_F_FDIR = BIT(11),
+ NIC_F_PROMISC = BIT(12),
+ NIC_F_ALLMULTI = BIT(13),
+ NIC_F_XSFP_REPORT = BIT(14),
+ NIC_F_VF_MAC = BIT(15),
+ NIC_F_RATE_LIMIT = BIT(16),
+ NIC_F_RXQ_RECOVERY = BIT(17),
+};
+
+#define NIC_F_ALL_MASK 0x3FFFF /* enable all features */
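+/* NIC_F_ALL_MASK = 0x3FFFF covers bits 0..17, i.e. every capability defined
+ * in enum nic_feature_cap above (BIT(0) through BIT(17)).
+ */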
+
+struct hinic3_mgmt_msg_head {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+};
+
+#define NIC_MAX_FEATURE_QWORD 4
+struct hinic3_cmd_feature_nego {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /* 1: set, 0: get */
+ u8 rsvd;
+ u64 s_feature[NIC_MAX_FEATURE_QWORD];
+};
+
+struct hinic3_port_mac_set {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 vlan_id;
+ u16 rsvd1;
+ u8 mac[ETH_ALEN];
+};
+
+struct hinic3_port_mac_update {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 vlan_id;
+ u16 rsvd1;
+ u8 old_mac[ETH_ALEN];
+ u16 rsvd2;
+ u8 new_mac[ETH_ALEN];
+};
+
+struct hinic3_vport_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 state; /* 0--disable, 1--enable */
+ u8 rsvd2[3];
+};
+
+struct hinic3_port_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 state; /* 0--disable, 1--enable */
+ u8 rsvd2[3];
+};
+
+#define HINIC3_SET_PORT_CAR_PROFILE 0
+#define HINIC3_SET_PORT_CAR_STATE 1
+
+struct hinic3_port_car_info {
+ u32 cir; /* unit: kbps, range:[1,400*1000*1000], i.e. 1Kbps~400Gbps(400M*kbps) */
+ u32 xir; /* unit: kbps, range:[1,400*1000*1000], i.e. 1Kbps~400Gbps(400M*kbps) */
+ u32 cbs; /* unit: Byte, range:[1,320*1000*1000], i.e. 1byte~2560Mbit */
+ u32 xbs; /* unit: Byte, range:[1,320*1000*1000], i.e. 1byte~2560Mbit */
+};
+
+struct hinic3_cmd_set_port_car {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 opcode; /* 0--set car profile, 1--set car state */
+ u8 state; /* 0--disable, 1--enable */
+ u8 rsvd;
+
+ struct hinic3_port_car_info car;
+};
+
+struct hinic3_cmd_clear_qp_resource {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_cmd_cache_out_qp_resource {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_port_stats_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_vport_stats {
+ u64 tx_unicast_pkts_vport;
+ u64 tx_unicast_bytes_vport;
+ u64 tx_multicast_pkts_vport;
+ u64 tx_multicast_bytes_vport;
+ u64 tx_broadcast_pkts_vport;
+ u64 tx_broadcast_bytes_vport;
+
+ u64 rx_unicast_pkts_vport;
+ u64 rx_unicast_bytes_vport;
+ u64 rx_multicast_pkts_vport;
+ u64 rx_multicast_bytes_vport;
+ u64 rx_broadcast_pkts_vport;
+ u64 rx_broadcast_bytes_vport;
+
+ u64 tx_discard_vport;
+ u64 rx_discard_vport;
+ u64 tx_err_vport;
+ u64 rx_err_vport;
+};
+
+struct hinic3_phy_fpga_port_stats {
+ u64 mac_rx_total_octs_port;
+ u64 mac_tx_total_octs_port;
+ u64 mac_rx_under_frame_pkts_port;
+ u64 mac_rx_frag_pkts_port;
+ u64 mac_rx_64_oct_pkts_port;
+ u64 mac_rx_127_oct_pkts_port;
+ u64 mac_rx_255_oct_pkts_port;
+ u64 mac_rx_511_oct_pkts_port;
+ u64 mac_rx_1023_oct_pkts_port;
+ u64 mac_rx_max_oct_pkts_port;
+ u64 mac_rx_over_oct_pkts_port;
+ u64 mac_tx_64_oct_pkts_port;
+ u64 mac_tx_127_oct_pkts_port;
+ u64 mac_tx_255_oct_pkts_port;
+ u64 mac_tx_511_oct_pkts_port;
+ u64 mac_tx_1023_oct_pkts_port;
+ u64 mac_tx_max_oct_pkts_port;
+ u64 mac_tx_over_oct_pkts_port;
+ u64 mac_rx_good_pkts_port;
+ u64 mac_rx_crc_error_pkts_port;
+ u64 mac_rx_broadcast_ok_port;
+ u64 mac_rx_multicast_ok_port;
+ u64 mac_rx_mac_frame_ok_port;
+ u64 mac_rx_length_err_pkts_port;
+ u64 mac_rx_vlan_pkts_port;
+ u64 mac_rx_pause_pkts_port;
+ u64 mac_rx_unknown_mac_frame_port;
+ u64 mac_tx_good_pkts_port;
+ u64 mac_tx_broadcast_ok_port;
+ u64 mac_tx_multicast_ok_port;
+ u64 mac_tx_underrun_pkts_port;
+ u64 mac_tx_mac_frame_ok_port;
+ u64 mac_tx_vlan_pkts_port;
+ u64 mac_tx_pause_pkts_port;
+};
+
+struct hinic3_port_stats {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ struct hinic3_phy_fpga_port_stats stats;
+};
+
+struct hinic3_cmd_vport_stats {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u32 stats_size;
+ u32 rsvd1;
+ struct hinic3_vport_stats stats;
+ u64 rsvd2[6];
+};
+
+struct hinic3_cmd_qpn {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 base_qpn;
+};
+
+enum hinic3_func_tbl_cfg_bitmap {
+ FUNC_CFG_INIT,
+ FUNC_CFG_RX_BUF_SIZE,
+ FUNC_CFG_MTU,
+};
+
+struct hinic3_func_tbl_cfg {
+ u16 rx_wqe_buf_size;
+ u16 mtu;
+ u32 rsvd[9];
+};
+
+struct hinic3_cmd_set_func_tbl {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd;
+
+ u32 cfg_bitmap;
+ struct hinic3_func_tbl_cfg tbl_cfg;
+};
+
+struct hinic3_cmd_cons_idx_attr {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_idx;
+ u8 dma_attr_off;
+ u8 pending_limit;
+ u8 coalescing_time;
+ u8 intr_en;
+ u16 intr_idx;
+ u32 l2nic_sqn;
+ u32 rsvd;
+ u64 ci_addr;
+};
+
+typedef union {
+ struct {
+ u32 tbl_index;
+ u32 cnt;
+ u32 total_cnt;
+ } mac_table_arg;
+ struct {
+ u32 er_id;
+ u32 vlan_id;
+ } vlan_elb_table_arg;
+ struct {
+ u32 func_id;
+ } vlan_filter_arg;
+ struct {
+ u32 mc_id;
+ } mc_elb_arg;
+ struct {
+ u32 func_id;
+ } func_tbl_arg;
+ struct {
+ u32 port_id;
+ } port_tbl_arg;
+ struct {
+ u32 tbl_index;
+ u32 cnt;
+ u32 total_cnt;
+ } fdir_io_table_arg;
+ struct {
+ u32 tbl_index;
+ u32 cnt;
+ u32 total_cnt;
+ } flexq_table_arg;
+ u32 args[4];
+} sm_tbl_args;
+
+#define DFX_SM_TBL_BUF_MAX (768)
+
+struct nic_cmd_dfx_sm_table {
+ struct hinic3_mgmt_msg_head msg_head;
+ u32 tbl_type;
+ sm_tbl_args args;
+ u8 tbl_buf[DFX_SM_TBL_BUF_MAX];
+};
+
+struct hinic3_cmd_vlan_offload {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 vlan_offload;
+ u8 rsvd1[5];
+};
+
+/* ucode capture cfg info */
+struct nic_cmd_capture_info {
+ struct hinic3_mgmt_msg_head msg_head;
+ u32 op_type;
+ u32 func_port;
+ u32 is_en_trx; /* also used as tx_rx */
+ u32 offset_cos; /* also used as cos */
+ u32 data_vlan; /* also used as vlan */
+};
+
+struct hinic3_cmd_lro_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u8 lro_ipv4_en;
+ u8 lro_ipv6_en;
+ u8 lro_max_pkt_len; /* unit is 1K */
+ u8 resv2[13];
+};
+
+struct hinic3_cmd_lro_timer {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 opcode; /* 1: set timer value, 0: get timer value */
+ u8 rsvd1;
+ u16 rsvd2;
+ u32 timer;
+};
+
+struct hinic3_cmd_local_lro_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /* 0: get state, 1: set state */
+ u8 state; /* 0: disable, 1: enable */
+};
+
+struct hinic3_cmd_vf_vlan_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u16 vlan_id;
+ u8 qos;
+ u8 rsvd2[5];
+};
+
+struct hinic3_cmd_spoofchk_set {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 state;
+ u8 rsvd1;
+};
+
+struct hinic3_cmd_tx_rate_cfg {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u32 min_rate;
+ u32 max_rate;
+ u8 rsvd2[8];
+};
+
+struct hinic3_cmd_port_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 rsvd1[3];
+ u8 port_type;
+ u8 autoneg_cap;
+ u8 autoneg_state;
+ u8 duplex;
+ u8 speed;
+ u8 fec;
+ u16 rsvd2;
+ u32 rsvd3[4];
+};
+
+struct hinic3_cmd_register_vf {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 op_register; /* 0 - unregister, 1 - register */
+ u8 rsvd1[3];
+ u32 support_extra_feature;
+ u8 rsvd2[32];
+};
+
+struct hinic3_cmd_link_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 state;
+ u16 rsvd1;
+};
+
+struct hinic3_cmd_vlan_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u16 vlan_id;
+ u16 rsvd2;
+};
+
+/* set vlan filter */
+struct hinic3_cmd_set_vlan_filter {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 resvd[2];
+ u32 vlan_filter_ctrl; /* bit0:vlan filter en; bit1:broadcast_filter_en */
+};
+
+struct hinic3_cmd_link_ksettings_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 rsvd1[3];
+
+ u32 valid_bitmap;
+ u8 speed; /* enum nic_speed_level */
+ u8 autoneg; /* 0 - off, 1 - on */
+ u8 fec; /* 0 - RSFEC, 1 - BASEFEC, 2 - NOFEC */
+ u8 rsvd2[21]; /* reserved for duplex, port, etc. */
+};
+
+struct mpu_lt_info {
+ u8 node;
+ u8 inst;
+ u8 entry_size;
+ u8 rsvd;
+ u32 lt_index;
+ u32 offset;
+ u32 len;
+};
+
+struct nic_mpu_lt_opera {
+ struct hinic3_mgmt_msg_head msg_head;
+ struct mpu_lt_info net_lt_cmd;
+ u8 data[100];
+};
+
+struct hinic3_force_pkt_drop {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port;
+ u8 rsvd1[3];
+};
+struct hinic3_rx_mode_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u32 rx_mode;
+};
+
+/* rss */
+struct hinic3_rss_context_table {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u32 context;
+};
+
+struct hinic3_cmd_rss_engine_type {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 hash_engine;
+ u8 rsvd1[4];
+};
+
+struct hinic3_cmd_rss_hash_key {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u8 key[NIC_RSS_KEY_SIZE];
+};
+
+struct hinic3_rss_indir_table {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 indir[NIC_RSS_INDIR_SIZE];
+};
+
+#define NIC_RSS_CMD_TEMP_ALLOC 0x01
+#define NIC_RSS_CMD_TEMP_FREE 0x02
+
+struct hinic3_rss_template_mgmt {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 cmd;
+ u8 template_id;
+ u8 rsvd1[4];
+};
+
+struct hinic3_cmd_rss_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 rss_en;
+ u8 rq_priority_number;
+ u8 prio_tc[NIC_DCB_COS_MAX];
+ u16 num_qps;
+ u16 rsvd1;
+};
+
+struct hinic3_dcb_state {
+ u8 dcb_on;
+ u8 default_cos;
+ u8 trust;
+ u8 rsvd1;
+ u8 pcp2cos[NIC_DCB_UP_MAX];
+ u8 dscp2cos[64];
+ u32 rsvd2[7];
+};
+
+struct hinic3_cmd_vf_dcb_state {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ struct hinic3_dcb_state state;
+};
+
+struct hinic3_up_ets_cfg { /* to be deleted */
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 rsvd1[3];
+
+ u8 cos_tc[NIC_DCB_COS_MAX];
+ u8 tc_bw[NIC_DCB_TC_MAX];
+ u8 cos_prio[NIC_DCB_COS_MAX];
+ u8 cos_bw[NIC_DCB_COS_MAX];
+ u8 tc_prio[NIC_DCB_TC_MAX];
+};
+
+#define CMD_QOS_ETS_COS_TC BIT(0)
+#define CMD_QOS_ETS_TC_BW BIT(1)
+#define CMD_QOS_ETS_COS_PRIO BIT(2)
+#define CMD_QOS_ETS_COS_BW BIT(3)
+#define CMD_QOS_ETS_TC_PRIO BIT(4)
+struct hinic3_cmd_ets_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 port_id;
+ u8 op_code; /* 1 - set, 0 - get */
+ /* bit0 - cos_tc, bit1 - tc_bw, bit2 - cos_prio, bit3 - cos_bw, bit4 - tc_prio */
+ u8 cfg_bitmap;
+ u8 rsvd;
+
+ u8 cos_tc[NIC_DCB_COS_MAX];
+ u8 tc_bw[NIC_DCB_TC_MAX];
+ u8 cos_prio[NIC_DCB_COS_MAX]; /* 0 - DWRR, 1 - STRICT */
+ u8 cos_bw[NIC_DCB_COS_MAX];
+ u8 tc_prio[NIC_DCB_TC_MAX]; /* 0 - DWRR, 1 - STRICT */
+};
+
+struct hinic3_cmd_set_dcb_state {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 op_code; /* 0 - get dcb state, 1 - set dcb state */
+ u8 state; /* 0 - disable, 1 - enable dcb */
+ u8 port_state; /* 0 - disable, 1 - enable dcb */
+ u8 rsvd[7];
+};
+
+#define PFC_BIT_MAP_NUM 8
+struct hinic3_cmd_set_pfc {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 port_id;
+ u8 op_code; /* 0:get 1: set pfc_en 2: set pfc_bitmap 3: set all */
+ u8 pfc_en; /* pfc_en and pfc_bitmap must be set together */
+ u8 pfc_bitmap;
+ u8 rsvd[4];
+};
+
+#define CMD_QOS_PORT_TRUST BIT(0)
+#define CMD_QOS_PORT_DFT_COS BIT(1)
+struct hinic3_cmd_qos_port_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 port_id;
+ u8 op_code; /* 0 - get, 1 - set */
+ u8 cfg_bitmap; /* bit0 - trust, bit1 - dft_cos */
+ u8 rsvd0;
+
+ u8 trust;
+ u8 dft_cos;
+ u8 rsvd1[18];
+};
+
+#define MAP_COS_MAX_NUM 8
+#define CMD_QOS_MAP_PCP2COS BIT(0)
+#define CMD_QOS_MAP_DSCP2COS BIT(1)
+struct hinic3_cmd_qos_map_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 op_code;
+ u8 cfg_bitmap; /* bit0 - pcp2cos, bit1 - dscp2cos */
+ u16 rsvd0;
+
+ u8 pcp2cos[8]; /* all 8 entries must be configured together */
+ /* When configuring dscp2cos, if a cos value is set to 0xFF, the MPU
+ * ignores the configuration for that dscp priority; multiple
+ * dscp-to-cos mappings may be configured at once.
+ */
+ u8 dscp2cos[64];
+ u32 rsvd1[4];
+};
+
+struct hinic3_cos_up_map {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 cos_valid_mask; /* each bit indicates whether the map entry at that index is valid (1) or not (0) */
+ u16 rsvd1;
+
+ /* user priority per cos (index: cos, value: up priority) */
+ u8 map[NIC_DCB_UP_MAX];
+};
+
+struct hinic3_cmd_pause_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 opcode;
+ u16 rsvd1;
+ u8 auto_neg;
+ u8 rx_pause;
+ u8 tx_pause;
+ u8 rsvd2[5];
+};
+
+/* PFC storm detection configuration */
+struct nic_cmd_pause_inquiry_cfg {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 valid;
+
+ u32 type; /* 1: set, 2: get */
+
+ u32 rx_inquiry_pause_drop_pkts_en; /* rx packet drop enable */
+ u32 rx_inquiry_pause_period_ms; /* rx pause detection period, default 200ms */
+ u32 rx_inquiry_pause_times; /* rx pause detection times, default 1 */
+ /* rx pause detection threshold, default PAUSE_FRAME_THD_10G/25G/40G/100 */
+ u32 rx_inquiry_pause_frame_thd;
+ u32 rx_inquiry_tx_total_pkts; /* total tx packets during rx pause detection */
+
+ u32 tx_inquiry_pause_en; /* tx pause detection enable */
+ u32 tx_inquiry_pause_period_ms; /* tx pause detection period, default 200ms */
+ u32 tx_inquiry_pause_times; /* tx pause detection times, default 5 */
+ u32 tx_inquiry_pause_frame_thd; /* tx pause detection threshold */
+ u32 tx_inquiry_rx_total_pkts; /* total rx packets during tx pause detection */
+
+ u32 rsvd[4];
+};
+
+/* PFC/pause storm tx exception report */
+struct nic_cmd_tx_pause_notice {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 tx_pause_except; /* 1: abnormal, 0: normal */
+ u32 except_level;
+ u32 rsvd;
+};
+
+#define HINIC3_CMD_OP_FREE 0
+#define HINIC3_CMD_OP_ALLOC 1
+
+struct hinic3_cmd_cfg_qps {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /* 1: alloc qp, 0: free qp */
+ u8 rsvd1;
+ u16 num_qps;
+ u16 rsvd2;
+};
+
+struct hinic3_cmd_led_config {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port;
+ u8 type;
+ u8 mode;
+ u8 rsvd1;
+};
+
+struct hinic3_cmd_port_loopback {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 opcode;
+ u8 mode;
+ u8 en;
+ u32 rsvd1[2];
+};
+
+struct hinic3_cmd_get_light_module_abs {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 abs_status; /* 0:present, 1:absent */
+ u8 rsv[2];
+};
+
+#define STD_SFP_INFO_MAX_SIZE 640
+struct hinic3_cmd_get_std_sfp_info {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 wire_type;
+ u16 eeprom_len;
+ u32 rsvd;
+ u8 sfp_info[STD_SFP_INFO_MAX_SIZE];
+};
+
+struct hinic3_cable_plug_event {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 plugged; /* 0: unplugged, 1: plugged */
+ u8 port_id;
+};
+
+/* MAC module interface */
+struct nic_cmd_mac_info {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 valid_bitmap;
+ u16 rsvd;
+
+ u8 host_id[32];
+ u8 port_id[32];
+ u8 mac_addr[192];
+};
+
+struct nic_cmd_set_tcam_enable {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 tcam_enable;
+ u8 rsvd1;
+ u32 rsvd2;
+};
+
+struct nic_cmd_set_fdir_status {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 pkt_type_en;
+ u8 pkt_type;
+ u8 qid;
+ u8 rsvd2;
+};
+
+#define HINIC3_TCAM_BLOCK_ENABLE 1
+#define HINIC3_TCAM_BLOCK_DISABLE 0
+#define HINIC3_MAX_TCAM_RULES_NUM 4096
+
+/* tcam block type, according to tcam block size */
+enum {
+ NIC_TCAM_BLOCK_TYPE_LARGE = 0, /* block_size: 16 */
+ NIC_TCAM_BLOCK_TYPE_SMALL, /* block_size: 0 */
+ NIC_TCAM_BLOCK_TYPE_MAX
+};
+
+/* alloc tcam block input struct */
+struct nic_cmd_ctrl_tcam_block_in {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id; /* func_id */
+ u8 alloc_en; /* 0: free the allocated tcam block, 1: allocate a new tcam block */
+ /* 0: allocate a 16-size tcam block, 1: allocate a 0-size tcam block, others reserved */
+ u8 tcam_type;
+ u16 tcam_block_index;
+ /* From the driver to uP: the block size the driver wants to allocate.
+ * Returned from uP to the driver: the tcam block size that uP supports.
+ */
+ u16 alloc_block_num;
+};
+
+/* alloc tcam block output struct */
+struct nic_cmd_ctrl_tcam_block_out {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id; /* func_id */
+ u8 alloc_en; /* 0: free the allocated tcam block, 1: allocate a new tcam block */
+ /* 0: allocate a 16-size tcam block, 1: allocate a 0-size tcam block, others reserved */
+ u8 tcam_type;
+ u16 tcam_block_index;
+ /* From the driver to uP: the block size the driver wants to allocate.
+ * Returned from uP to the driver: the tcam block size that uP supports.
+ */
+ u16 mpu_alloc_block_size;
+};
+
+struct nic_cmd_flush_tcam_rules {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id; /* func_id */
+ u16 rsvd;
+};
+
+struct nic_cmd_dfx_fdir_tcam_block_table {
+ struct hinic3_mgmt_msg_head head;
+ u8 tcam_type;
+ u8 valid;
+ u16 tcam_block_index;
+ u16 use_function_id;
+ u16 rsvd;
+};
+
+struct tcam_result {
+ u32 qid;
+ u32 rsvd;
+};
+
+#define TCAM_FLOW_KEY_SIZE (44)
+
+struct tcam_key_x_y {
+ u8 x[TCAM_FLOW_KEY_SIZE];
+ u8 y[TCAM_FLOW_KEY_SIZE];
+};
+
+struct nic_tcam_cfg_rule {
+ u32 index;
+ struct tcam_result data;
+ struct tcam_key_x_y key;
+};
+
+#define TCAM_RULE_FDIR_TYPE 0
+#define TCAM_RULE_PPA_TYPE 1
+
+struct nic_cmd_fdir_add_rule {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 type;
+ u8 rsvd;
+ struct nic_tcam_cfg_rule rule;
+};
+
+struct nic_cmd_fdir_del_rules {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 type;
+ u8 rsvd;
+ u32 index_start;
+ u32 index_num;
+};
+
+struct nic_cmd_fdir_get_rule {
+ struct hinic3_mgmt_msg_head head;
+
+ u32 index;
+ u8 valid;
+ u8 type;
+ u16 rsvd;
+ struct tcam_key_x_y key;
+ struct tcam_result data;
+ u64 packet_count;
+ u64 byte_count;
+};
+
+struct hinic3_tcam_key_ipv4_mem {
+ u32 rsvd1 : 4;
+ u32 tunnel_type : 4;
+ u32 ip_proto : 8;
+ u32 rsvd0 : 16;
+ u32 sipv4_h : 16;
+ u32 ip_type : 1;
+ u32 function_id : 15;
+ u32 dipv4_h : 16;
+ u32 sipv4_l : 16;
+ u32 rsvd2 : 16;
+ u32 dipv4_l : 16;
+ u32 rsvd3;
+ u32 dport : 16;
+ u32 rsvd4 : 16;
+ u32 rsvd5 : 16;
+ u32 sport : 16;
+ u32 outer_sipv4_h : 16;
+ u32 rsvd6 : 16;
+ u32 outer_dipv4_h : 16;
+ u32 outer_sipv4_l : 16;
+ u32 vni_h : 16;
+ u32 outer_dipv4_l : 16;
+ u32 rsvd7 : 16;
+ u32 vni_l : 16;
+};
+
+struct hinic3_tcam_key_ipv6_mem {
+ u32 rsvd1 : 4;
+ u32 tunnel_type : 4;
+ u32 ip_proto : 8;
+ u32 rsvd0 : 16;
+ u32 sipv6_key0 : 16;
+ u32 ip_type : 1;
+ u32 function_id : 15;
+ u32 sipv6_key2 : 16;
+ u32 sipv6_key1 : 16;
+ u32 sipv6_key4 : 16;
+ u32 sipv6_key3 : 16;
+ u32 sipv6_key6 : 16;
+ u32 sipv6_key5 : 16;
+ u32 dport : 16;
+ u32 sipv6_key7 : 16;
+ u32 dipv6_key0 : 16;
+ u32 sport : 16;
+ u32 dipv6_key2 : 16;
+ u32 dipv6_key1 : 16;
+ u32 dipv6_key4 : 16;
+ u32 dipv6_key3 : 16;
+ u32 dipv6_key6 : 16;
+ u32 dipv6_key5 : 16;
+ u32 rsvd2 : 16;
+ u32 dipv6_key7 : 16;
+};
+
+struct hinic3_tcam_key_vxlan_ipv6_mem {
+ u32 rsvd1 : 4;
+ u32 tunnel_type : 4;
+ u32 ip_proto : 8;
+ u32 rsvd0 : 16;
+
+ u32 dipv6_key0 : 16;
+ u32 ip_type : 1;
+ u32 function_id : 15;
+
+ u32 dipv6_key2 : 16;
+ u32 dipv6_key1 : 16;
+
+ u32 dipv6_key4 : 16;
+ u32 dipv6_key3 : 16;
+
+ u32 dipv6_key6 : 16;
+ u32 dipv6_key5 : 16;
+
+ u32 dport : 16;
+ u32 dipv6_key7 : 16;
+
+ u32 rsvd2 : 16;
+ u32 sport : 16;
+
+ u32 outer_sipv4_h : 16;
+ u32 rsvd3 : 16;
+
+ u32 outer_dipv4_h : 16;
+ u32 outer_sipv4_l : 16;
+
+ u32 vni_h : 16;
+ u32 outer_dipv4_l : 16;
+
+ u32 rsvd4 : 16;
+ u32 vni_l : 16;
+};
+
+struct tag_tcam_key {
+ union {
+ struct hinic3_tcam_key_ipv4_mem key_info;
+ struct hinic3_tcam_key_ipv6_mem key_info_ipv6;
+ struct hinic3_tcam_key_vxlan_ipv6_mem key_info_vxlan_ipv6;
+ };
+
+ union {
+ struct hinic3_tcam_key_ipv4_mem key_mask;
+ struct hinic3_tcam_key_ipv6_mem key_mask_ipv6;
+ struct hinic3_tcam_key_vxlan_ipv6_mem key_mask_vxlan_ipv6;
+ };
+};
+
+enum {
+ PPA_TABLE_ID_CLEAN_CMD = 0,
+ PPA_TABLE_ID_ADD_CMD,
+ PPA_TABLE_ID_DEL_CMD,
+ FDIR_TABLE_ID_ADD_CMD,
+ FDIR_TABLE_ID_DEL_CMD,
+ PPA_TABEL_ID_MAX
+};
+
+struct hinic3_ppa_cfg_table_id_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 rsvd0;
+ u16 cmd;
+ u16 table_id;
+ u16 rsvd1;
+};
+
+struct hinic3_ppa_cfg_ppa_en_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 ppa_en;
+ u8 rsvd;
+};
+
+struct hinic3_ppa_cfg_mode_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 rsvd0;
+ u8 ppa_mode;
+ u8 qpc_func_nums;
+ u16 base_qpc_func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_ppa_flush_en_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u16 rsvd0;
+ u8 flush_en; /* 0 flush done, 1 in flush operation */
+ u8 rsvd1;
+};
+
+struct hinic3_ppa_fdir_query_cmd {
+ struct hinic3_mgmt_msg_head msg_head;
+
+ u32 index;
+ u32 rsvd;
+ u64 pkt_nums;
+ u64 pkt_bytes;
+};
+
+/* BIOS CONF */
+enum {
+ NIC_NVM_DATA_SET = BIT(0), /* 1-save, 0-read */
+ NIC_NVM_DATA_PXE = BIT(1),
+ NIC_NVM_DATA_VLAN = BIT(2),
+ NIC_NVM_DATA_VLAN_PRI = BIT(3),
+ NIC_NVM_DATA_VLAN_ID = BIT(4),
+ NIC_NVM_DATA_WORK_MODE = BIT(5),
+ NIC_NVM_DATA_PF_SPEED_LIMIT = BIT(6),
+ NIC_NVM_DATA_GE_MODE = BIT(7),
+ NIC_NVM_DATA_AUTO_NEG = BIT(8),
+ NIC_NVM_DATA_LINK_FEC = BIT(9),
+ NIC_NVM_DATA_PF_ADAPTIVE_LINK = BIT(10),
+ NIC_NVM_DATA_SRIOV_CONTROL = BIT(11),
+ NIC_NVM_DATA_EXTEND_MODE = BIT(12),
+ NIC_NVM_DATA_RESET = BIT(31),
+};
+
+#define BIOS_CFG_SIGNATURE 0x1923E518
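+/* Each BIOS_OP_CFG_* helper below tests the matching NIC_NVM_DATA_* bit of
+ * the op_code word; BIOS_OP_CFG_ALL shifts off the write bit (bit 0) to
+ * expose the remaining config mask.
+ */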
+#define BIOS_OP_CFG_ALL(op_code_val) (((op_code_val) >> 1) & (0xFFFFFFFF))
+#define BIOS_OP_CFG_WRITE(op_code_val) ((op_code_val) & NIC_NVM_DATA_SET)
+#define BIOS_OP_CFG_PXE_EN(op_code_val) ((op_code_val) & NIC_NVM_DATA_PXE)
+#define BIOS_OP_CFG_VLAN_EN(op_code_val) ((op_code_val) & NIC_NVM_DATA_VLAN)
+#define BIOS_OP_CFG_VLAN_PRI(op_code_val) ((op_code_val) & NIC_NVM_DATA_VLAN_PRI)
+#define BIOS_OP_CFG_VLAN_ID(op_code_val) ((op_code_val) & NIC_NVM_DATA_VLAN_ID)
+#define BIOS_OP_CFG_WORK_MODE(op_code_val) ((op_code_val) & NIC_NVM_DATA_WORK_MODE)
+#define BIOS_OP_CFG_PF_BW(op_code_val) ((op_code_val) & NIC_NVM_DATA_PF_SPEED_LIMIT)
+#define BIOS_OP_CFG_GE_SPEED(op_code_val) ((op_code_val) & NIC_NVM_DATA_GE_MODE)
+#define BIOS_OP_CFG_AUTO_NEG(op_code_val) ((op_code_val) & NIC_NVM_DATA_AUTO_NEG)
+#define BIOS_OP_CFG_LINK_FEC(op_code_val) ((op_code_val) & NIC_NVM_DATA_LINK_FEC)
+#define BIOS_OP_CFG_AUTO_ADPAT(op_code_val) ((op_code_val) & NIC_NVM_DATA_PF_ADAPTIVE_LINK)
+#define BIOS_OP_CFG_SRIOV_ENABLE(op_code_val) ((op_code_val) & NIC_NVM_DATA_SRIOV_CONTROL)
+#define BIOS_OP_CFG_EXTEND_MODE(op_code_val) ((op_code_val) & NIC_NVM_DATA_EXTEND_MODE)
+#define BIOS_OP_CFG_RST_DEF_SET(op_code_val) ((op_code_val) & (u32)NIC_NVM_DATA_RESET)
+
+#define NIC_BIOS_CFG_MAX_PF_BW 100
+/* Note: this structure must be kept 4-byte aligned */
+struct nic_bios_cfg {
+ u32 signature; /* signature, used to validate the FLASH contents */
+ u8 pxe_en; /* PXE enable: 0 - disable 1 - enable */
+ u8 extend_mode;
+ u8 rsvd0[2];
+ u8 pxe_vlan_en; /* PXE VLAN enable: 0 - disable 1 - enable */
+ u8 pxe_vlan_pri; /* PXE VLAN priority: 0-7 */
+ u16 pxe_vlan_id; /* PXE VLAN ID 1-4094 */
+ u32 service_mode; /* see the CHIPIF_SERVICE_MODE_x macros */
+ u32 pf_bw; /* PF rate limit, percentage 0-100 */
+ u8 speed; /* enum of port speed */
+ u8 auto_neg; /* auto-negotiation: 0 - field invalid, 1 - on, 2 - off */
+ u8 lanes; /* lane num */
+ u8 fec; /* FEC mode, see enum mag_cmd_port_fec */
+ u8 auto_adapt; /* adaptive mode: 0 - invalid, 1 - enable, 2 - disable */
+ u8 func_valid; /* whether func_id is valid; 0 - invalid, other - valid */
+ u8 func_id; /* meaningful only when func_valid is non-zero */
+ u8 sriov_en; /* SR-IOV enable: 0 - invalid, 1 - enable, 2 - disable */
+};
+
+struct nic_cmd_bios_cfg {
+ struct hinic3_mgmt_msg_head head;
+ u32 op_code; /* Operation Code: Bit0 - 0: read, 1: write; Bit1-6: cfg_mask */
+ struct nic_bios_cfg bios_cfg;
+};
+
+struct nic_cmd_vhd_config {
+ struct hinic3_mgmt_msg_head head;
+
+ u16 func_id;
+ u8 vhd_type;
+ u8 virtio_small_enable; /* 0: mergeable mode, 1: small mode */
+};
+
+/* BOND */
+struct hinic3_create_bond_info {
+ u32 bond_id; /* bond device id; valid as output, filled in when the MPU operation succeeds */
+ u32 master_slave_port_id;
+ u32 slave_bitmap; /* bond port id bitmap */
+ u32 poll_timeout; /* bond device link check interval */
+ u32 up_delay; /* reserved for now */
+ u32 down_delay; /* reserved for now */
+ u32 bond_mode; /* reserved for now */
+ u32 active_pf; /* active pf id used by the bond */
+ u32 active_port_max_num; /* maximum number of active bond member ports */
+ u32 active_port_min_num; /* minimum number of active bond member ports */
+ u32 xmit_hash_policy; /* hash policy, used by the microcode routing logic */
+ u32 rsvd[2];
+};
+
+/* message interface for creating a bond device */
+struct hinic3_cmd_create_bond {
+ struct hinic3_mgmt_msg_head head;
+ struct hinic3_create_bond_info create_bond_info;
+};
+
+struct hinic3_cmd_delete_bond {
+ struct hinic3_mgmt_msg_head head;
+ u32 bond_id;
+ u32 rsvd[2];
+};
+
+struct hinic3_open_close_bond_info {
+ u32 bond_id; /* bond device id */
+ u32 open_close_flag; /* bond open/close flag: 1 - open, 0 - close */
+ u32 rsvd[2];
+};
+
+/* message interface for opening/closing a bond on the MPU */
+struct hinic3_cmd_open_close_bond {
+ struct hinic3_mgmt_msg_head head;
+ struct hinic3_open_close_bond_info open_close_bond_info;
+};
+
+/* port-related fields of the LACPDU */
+struct lacp_port_params {
+ u16 port_number;
+ u16 port_priority;
+ u16 key;
+ u16 system_priority;
+ u8 system[ETH_ALEN];
+ u8 port_state;
+ u8 rsvd;
+};
+
+struct lacp_port_info {
+ u32 selected;
+ u32 aggregator_port_id; /* aggregator port ID in use */
+
+ struct lacp_port_params actor; /* actor port parameters */
+ struct lacp_port_params partner; /* partner port parameters */
+
+ u64 tx_lacp_pkts;
+ u64 rx_lacp_pkts;
+ u64 rx_8023ad_drop;
+ u64 tx_8023ad_drop;
+ u64 unknown_pkt_drop;
+ u64 rx_marker_pkts;
+ u64 tx_marker_pkts;
+};
+
+/* lacp status information */
+struct hinic3_bond_status_info {
+ struct hinic3_mgmt_msg_head head;
+ u32 bond_id;
+ u32 bon_mmi_status; /* link status of this bond device */
+ u32 active_bitmap; /* slave port status of this bond device */
+ u32 port_count; /* number of slave ports in this bond device */
+
+ struct lacp_port_info port_info[4];
+
+ u64 success_report_cnt[4]; /* times each host successfully reported lacp negotiation results */
+ u64 fail_report_cnt[4]; /* times each host failed to report lacp negotiation results */
+
+ u64 poll_timeout;
+ u64 fast_periodic_timeout;
+ u64 slow_periodic_timeout;
+ u64 short_timeout;
+ u64 long_timeout;
+ u64 aggregate_wait_timeout;
+ u64 tx_period_timeout;
+ u64 rx_marker_timer;
+};
+
+/* async notification sent to the host side after the lacp negotiation result is updated */
+struct hinic3_bond_active_report_info {
+ struct hinic3_mgmt_msg_head head;
+ u32 bond_id;
+ u32 bon_mmi_status; /* link status of this bond device */
+ u32 active_bitmap; /* slave port status of this bond device */
+
+ u8 rsvd[16];
+};
+
+/* IP checksum error packets, enable rss quadruple hash. */
+struct hinic3_ipcs_err_rss_enable_operation_s {
+ struct hinic3_mgmt_msg_head head;
+
+ u8 en_tag;
+ u8 type; /* 1: set 0: get */
+ u8 rsvd[2];
+};
+
+struct hinic3_smac_check_state {
+ struct hinic3_mgmt_msg_head head;
+ u8 smac_check_en; /* 1: enable 0: disable */
+ u8 op_code; /* 1: set 0: get */
+ u8 rsvd[2];
+};
+
+#endif /* HINIC_MGMT_INTERFACE_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_mt.h b/drivers/net/ethernet/huawei/hinic3/hinic3_mt.h
new file mode 100644
index 000000000000..4e9f38d1ed6a
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_mt.h
@@ -0,0 +1,681 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_MT_H
+#define HINIC3_MT_H
+
+#define HINIC3_DRV_NAME "hisdk3"
+#define HINIC3_CHIP_NAME "hinic"
+/* Interrupts are recorded in the FFM, up to a maximum number of records */
+
+#define NICTOOL_CMD_TYPE (0x18)
+
+struct api_cmd_rd {
+ u32 pf_id;
+ u8 dest;
+ u8 *cmd;
+ u16 size;
+ void *ack;
+ u16 ack_size;
+};
+
+struct api_cmd_wr {
+ u32 pf_id;
+ u8 dest;
+ u8 *cmd;
+ u16 size;
+};
+
+#define PF_DEV_INFO_NUM 32
+
+struct pf_dev_info {
+ u64 bar0_size;
+ u8 bus;
+ u8 slot;
+ u8 func;
+ u64 phy_addr;
+};
+
+/* Indicates the maximum number of interrupts that can be recorded.
+ * Subsequent interrupts are not recorded in FFM.
+ */
+#define FFM_RECORD_NUM_MAX 64
+
+struct ffm_intr_info {
+ u8 node_id;
+ /* error level of the interrupt source */
+ u8 err_level;
+ /* Classification by interrupt source properties */
+ u16 err_type;
+ u32 err_csr_addr;
+ u32 err_csr_value;
+};
+
+struct ffm_intr_tm_info {
+ struct ffm_intr_info intr_info;
+ u8 times;
+ u8 sec;
+ u8 min;
+ u8 hour;
+ u8 mday;
+ u8 mon;
+ u16 year;
+};
+
+struct ffm_record_info {
+ u32 ffm_num;
+ u32 last_err_csr_addr;
+ u32 last_err_csr_value;
+ struct ffm_intr_tm_info ffm[FFM_RECORD_NUM_MAX];
+};
+
+struct dbgtool_k_glb_info {
+ struct semaphore dbgtool_sem;
+ struct ffm_record_info *ffm;
+};
+
+struct msg_2_up {
+ u8 pf_id;
+ u8 mod;
+ u8 cmd;
+ void *buf_in;
+ u16 in_size;
+ void *buf_out;
+ u16 *out_size;
+};
+
+struct dbgtool_param {
+ union {
+ struct api_cmd_rd api_rd;
+ struct api_cmd_wr api_wr;
+ struct pf_dev_info *dev_info;
+ struct ffm_record_info *ffm_rd;
+ struct msg_2_up msg2up;
+ } param;
+ char chip_name[16];
+};
+
+/* dbgtool command type */
+/* You can add commands as required. The dbgtool command can be
+ * used to invoke all interfaces of the kernel-mode x86 driver.
+ */
+typedef enum {
+ DBGTOOL_CMD_API_RD = 0,
+ DBGTOOL_CMD_API_WR,
+ DBGTOOL_CMD_FFM_RD,
+ DBGTOOL_CMD_FFM_CLR,
+ DBGTOOL_CMD_PF_DEV_INFO_GET,
+ DBGTOOL_CMD_MSG_2_UP,
+ DBGTOOL_CMD_FREE_MEM,
+ DBGTOOL_CMD_NUM
+} dbgtool_cmd;
+
+#define PF_MAX_SIZE (16)
+#define BUSINFO_LEN (32)
+
+enum module_name {
+ SEND_TO_NPU = 1,
+ SEND_TO_MPU,
+ SEND_TO_SM,
+
+ SEND_TO_HW_DRIVER,
+#define SEND_TO_SRV_DRV_BASE (SEND_TO_HW_DRIVER + 1)
+ SEND_TO_NIC_DRIVER = SEND_TO_SRV_DRV_BASE,
+ SEND_TO_OVS_DRIVER,
+ SEND_TO_ROCE_DRIVER,
+ SEND_TO_TOE_DRIVER,
+ SEND_TO_IOE_DRIVER,
+ SEND_TO_FC_DRIVER,
+ SEND_TO_VBS_DRIVER,
+ SEND_TO_IPSEC_DRIVER,
+ SEND_TO_VIRTIO_DRIVER,
+ SEND_TO_MIGRATE_DRIVER,
+ SEND_TO_PPA_DRIVER,
+ SEND_TO_CUSTOM_DRIVER = SEND_TO_SRV_DRV_BASE + 11,
+ SEND_TO_DRIVER_MAX = SEND_TO_SRV_DRV_BASE + 15, /* reserved */
+};
+
+enum driver_cmd_type {
+ TX_INFO = 1,
+ Q_NUM,
+ TX_WQE_INFO,
+ TX_MAPPING,
+ RX_INFO,
+ RX_WQE_INFO,
+ RX_CQE_INFO,
+ UPRINT_FUNC_EN,
+ UPRINT_FUNC_RESET,
+ UPRINT_SET_PATH,
+ UPRINT_GET_STATISTICS,
+ FUNC_TYPE,
+ GET_FUNC_IDX,
+ GET_INTER_NUM,
+ CLOSE_TX_STREAM,
+ GET_DRV_VERSION,
+ CLEAR_FUNC_STASTIC,
+ GET_HW_STATS,
+ CLEAR_HW_STATS,
+ GET_SELF_TEST_RES,
+ GET_CHIP_FAULT_STATS,
+ NIC_RSVD1,
+ NIC_RSVD2,
+ NIC_RSVD3,
+ GET_CHIP_ID,
+ GET_SINGLE_CARD_INFO,
+ GET_FIRMWARE_ACTIVE_STATUS,
+ ROCE_DFX_FUNC,
+ GET_DEVICE_ID,
+ GET_PF_DEV_INFO,
+ CMD_FREE_MEM,
+ GET_LOOPBACK_MODE = 32,
+ SET_LOOPBACK_MODE,
+ SET_LINK_MODE,
+ SET_PF_BW_LIMIT,
+ GET_PF_BW_LIMIT,
+ ROCE_CMD,
+ GET_POLL_WEIGHT,
+ SET_POLL_WEIGHT,
+ GET_HOMOLOGUE,
+ SET_HOMOLOGUE,
+ GET_SSET_COUNT,
+ GET_SSET_ITEMS,
+ IS_DRV_IN_VM,
+ LRO_ADPT_MGMT,
+ SET_INTER_COAL_PARAM,
+ GET_INTER_COAL_PARAM,
+ GET_CHIP_INFO,
+ GET_NIC_STATS_LEN,
+ GET_NIC_STATS_STRING,
+ GET_NIC_STATS_INFO,
+ GET_PF_ID,
+ NIC_RSVD4,
+ NIC_RSVD5,
+ DCB_QOS_INFO,
+ DCB_PFC_STATE,
+ DCB_ETS_STATE,
+ DCB_STATE,
+ QOS_DEV,
+ GET_QOS_COS,
+ GET_ULD_DEV_NAME,
+ GET_TX_TIMEOUT,
+ SET_TX_TIMEOUT,
+
+ RSS_CFG = 0x40,
+ RSS_INDIR,
+ PORT_ID,
+
+ GET_FUNC_CAP = 0x50,
+ GET_XSFP_PRESENT = 0x51,
+ GET_XSFP_INFO = 0x52,
+ DEV_NAME_TEST = 0x53,
+
+ GET_WIN_STAT = 0x60,
+ WIN_CSR_READ = 0x61,
+ WIN_CSR_WRITE = 0x62,
+ WIN_API_CMD_RD = 0x63,
+
+ VM_COMPAT_TEST = 0xFF
+};
+
+enum api_chain_cmd_type {
+ API_CSR_READ,
+ API_CSR_WRITE
+};
+
+enum sm_cmd_type {
+ SM_CTR_RD16 = 1,
+ SM_CTR_RD32,
+ SM_CTR_RD64_PAIR,
+ SM_CTR_RD64,
+ SM_CTR_RD32_CLEAR,
+ SM_CTR_RD64_PAIR_CLEAR,
+ SM_CTR_RD64_CLEAR
+};
+
+struct cqm_stats {
+ atomic_t cqm_cmd_alloc_cnt;
+ atomic_t cqm_cmd_free_cnt;
+ atomic_t cqm_send_cmd_box_cnt;
+ atomic_t cqm_send_cmd_imm_cnt;
+ atomic_t cqm_db_addr_alloc_cnt;
+ atomic_t cqm_db_addr_free_cnt;
+ atomic_t cqm_fc_srq_create_cnt;
+ atomic_t cqm_srq_create_cnt;
+ atomic_t cqm_rq_create_cnt;
+ atomic_t cqm_qpc_mpt_create_cnt;
+ atomic_t cqm_nonrdma_queue_create_cnt;
+ atomic_t cqm_rdma_queue_create_cnt;
+ atomic_t cqm_rdma_table_create_cnt;
+ atomic_t cqm_qpc_mpt_delete_cnt;
+ atomic_t cqm_nonrdma_queue_delete_cnt;
+ atomic_t cqm_rdma_queue_delete_cnt;
+ atomic_t cqm_rdma_table_delete_cnt;
+ atomic_t cqm_func_timer_clear_cnt;
+ atomic_t cqm_func_hash_buf_clear_cnt;
+ atomic_t cqm_scq_callback_cnt;
+ atomic_t cqm_ecq_callback_cnt;
+ atomic_t cqm_nocq_callback_cnt;
+ atomic_t cqm_aeq_callback_cnt[112];
+};
+
+struct link_event_stats {
+ atomic_t link_down_stats;
+ atomic_t link_up_stats;
+};
+
+enum hinic3_fault_err_level {
+ FAULT_LEVEL_FATAL,
+ FAULT_LEVEL_SERIOUS_RESET,
+ FAULT_LEVEL_HOST,
+ FAULT_LEVEL_SERIOUS_FLR,
+ FAULT_LEVEL_GENERAL,
+ FAULT_LEVEL_SUGGESTION,
+ FAULT_LEVEL_MAX,
+};
+
+enum hinic3_fault_type {
+ FAULT_TYPE_CHIP,
+ FAULT_TYPE_UCODE,
+ FAULT_TYPE_MEM_RD_TIMEOUT,
+ FAULT_TYPE_MEM_WR_TIMEOUT,
+ FAULT_TYPE_REG_RD_TIMEOUT,
+ FAULT_TYPE_REG_WR_TIMEOUT,
+ FAULT_TYPE_PHY_FAULT,
+ FAULT_TYPE_TSENSOR_FAULT,
+ FAULT_TYPE_MAX,
+};
+
+struct fault_event_stats {
+ /* TODO: HINIC_NODE_ID_MAX: temporarily use the 1822 value (22) */
+ atomic_t chip_fault_stats[22][FAULT_LEVEL_MAX];
+ atomic_t fault_type_stat[FAULT_TYPE_MAX];
+ atomic_t pcie_fault_stats;
+};
+
+enum hinic3_ucode_event_type {
+ HINIC3_INTERNAL_OTHER_FATAL_ERROR = 0x0,
+ HINIC3_CHANNEL_BUSY = 0x7,
+ HINIC3_NIC_FATAL_ERROR_MAX = 0x8,
+};
+
+struct hinic3_hw_stats {
+ atomic_t heart_lost_stats;
+ struct cqm_stats cqm_stats;
+ struct link_event_stats link_event_stats;
+ struct fault_event_stats fault_event_stats;
+ atomic_t nic_ucode_event_stats[HINIC3_NIC_FATAL_ERROR_MAX];
+};
+
+#ifndef IFNAMSIZ
+#define IFNAMSIZ 16
+#endif
+
+struct pf_info {
+ char name[IFNAMSIZ];
+ char bus_info[BUSINFO_LEN];
+ u32 pf_type;
+};
+
+struct card_info {
+ struct pf_info pf[PF_MAX_SIZE];
+ u32 pf_num;
+};
+
+struct hinic3_nic_loop_mode {
+ u32 loop_mode;
+ u32 loop_ctrl;
+};
+
+struct hinic3_pf_info {
+ u32 isvalid;
+ u32 pf_id;
+};
+
+enum hinic3_show_set {
+ HINIC3_SHOW_SSET_IO_STATS = 1,
+};
+
+#define HINIC3_SHOW_ITEM_LEN 32
+struct hinic3_show_item {
+ char name[HINIC3_SHOW_ITEM_LEN];
+ u8 hexadecimal; /* 0: decimal, 1: hexadecimal */
+ u8 rsvd[7];
+ u64 value;
+};
+
+#define HINIC3_CHIP_FAULT_SIZE (110 * 1024)
+#define MAX_DRV_BUF_SIZE 4096
+
+struct nic_cmd_chip_fault_stats {
+ u32 offset;
+ u8 chip_fault_stats[MAX_DRV_BUF_SIZE];
+};
+
+#define NIC_TOOL_MAGIC 'x'
+
+#define CARD_MAX_SIZE (64)
+
+struct nic_card_id {
+ u32 id[CARD_MAX_SIZE];
+ u32 num;
+};
+
+struct func_pdev_info {
+ u64 bar0_phy_addr;
+ u64 bar0_size;
+ u64 bar1_phy_addr;
+ u64 bar1_size;
+ u64 bar3_phy_addr;
+ u64 bar3_size;
+ u64 rsvd1[4];
+};
+
+struct hinic3_card_func_info {
+ u32 num_pf;
+ u32 rsvd0;
+ u64 usr_api_phy_addr;
+ struct func_pdev_info pdev_info[CARD_MAX_SIZE];
+};
+
+struct wqe_info {
+ int q_id;
+ void *slq_handle;
+ unsigned int wqe_id;
+};
+
+#define MAX_VER_INFO_LEN 128
+struct drv_version_info {
+ char ver[MAX_VER_INFO_LEN];
+};
+
+struct hinic3_tx_hw_page {
+ u64 phy_addr;
+ u64 *map_addr;
+};
+
+struct nic_sq_info {
+ u16 q_id;
+ u16 pi;
+ u16 ci; /* sw_ci */
+ u16 fi; /* hw_ci */
+ u32 q_depth;
+ u16 pi_reverse; /* TODO: what is this? */
+ u16 wqebb_size;
+ u8 priority;
+ u16 *ci_addr;
+ u64 cla_addr;
+ void *slq_handle;
+ /* TODO: NIC doesn't use direct wqe */
+ struct hinic3_tx_hw_page direct_wqe;
+ struct hinic3_tx_hw_page doorbell;
+ u32 page_idx;
+ u32 glb_sq_id;
+};
+
+struct nic_rq_info {
+ u16 q_id;
+ u16 delta;
+ u16 hw_pi;
+ u16 ci; /* sw_ci */
+ u16 sw_pi;
+ u16 wqebb_size;
+ u16 q_depth;
+ u16 buf_len;
+
+ void *slq_handle;
+ u64 ci_wqe_page_addr;
+ u64 ci_cla_tbl_addr;
+
+ u8 coalesc_timer_cfg;
+ u8 pending_limt;
+ u16 msix_idx;
+ u32 msix_vector;
+};
+
+#define MT_EPERM 1 /* Operation not permitted */
+#define MT_EIO 2 /* I/O error */
+#define MT_EINVAL 3 /* Invalid argument */
+#define MT_EBUSY 4 /* Device or resource busy */
+#define MT_EOPNOTSUPP 0xFF /* Operation not supported */
+
+struct mt_msg_head {
+ u8 status;
+ u8 rsvd1[3];
+};
+
+#define MT_DCB_OPCODE_WR BIT(0) /* 1 - write, 0 - read */
+struct hinic3_mt_qos_info { /* delete */
+ struct mt_msg_head head;
+
+ u16 op_code;
+ u8 valid_cos_bitmap;
+ u8 valid_up_bitmap;
+ u32 rsvd1;
+};
+
+struct hinic3_mt_dcb_state {
+ struct mt_msg_head head;
+
+ u16 op_code; /* 0 - get dcb state, 1 - set dcb state */
+ u8 state; /* 0 - disable, 1 - enable dcb */
+ u8 rsvd;
+};
+
+#define MT_DCB_ETS_UP_TC BIT(1)
+#define MT_DCB_ETS_UP_BW BIT(2)
+#define MT_DCB_ETS_UP_PRIO BIT(3)
+#define MT_DCB_ETS_TC_BW BIT(4)
+#define MT_DCB_ETS_TC_PRIO BIT(5)
+
+#define DCB_UP_TC_NUM 0x8
+struct hinic3_mt_ets_state { /* delete */
+ struct mt_msg_head head;
+
+ u16 op_code;
+ u8 up_tc[DCB_UP_TC_NUM];
+ u8 up_bw[DCB_UP_TC_NUM];
+ u8 tc_bw[DCB_UP_TC_NUM];
+ u8 up_prio_bitmap;
+ u8 tc_prio_bitmap;
+ u32 rsvd;
+};
+
+#define MT_DCB_PFC_PFC_STATE BIT(1)
+#define MT_DCB_PFC_PFC_PRI_EN BIT(2)
+
+struct hinic3_mt_pfc_state { /* delete */
+ struct mt_msg_head head;
+
+ u16 op_code;
+ u8 state;
+ u8 pfc_en_bitpamp;
+ u32 rsvd;
+};
+
+#define CMD_QOS_DEV_TRUST BIT(0)
+#define CMD_QOS_DEV_DFT_COS BIT(1)
+#define CMD_QOS_DEV_PCP2COS BIT(2)
+#define CMD_QOS_DEV_DSCP2COS BIT(3)
+
+struct hinic3_mt_qos_dev_cfg {
+ struct mt_msg_head head;
+
+ u8 op_code; /* 0:get 1: set */
+ u8 rsvd0;
+ /* bit0 - trust, bit1 - dft_cos, bit2 - pcp2cos, bit3 - dscp2cos */
+ u16 cfg_bitmap;
+
+ u8 trust; /* 0 - pcp, 1 - dscp */
+ u8 dft_cos;
+ u16 rsvd1;
+ u8 pcp2cos[8]; /* all 8 entries must be configured together */
+ /* When configuring dscp2cos, a cos value of 0xFF makes the driver ignore
+ * that dscp priority; multiple dscp-to-cos mappings may be configured in
+ * one call.
+ */
+ u8 dscp2cos[64];
+ u32 rsvd2[4];
+};
+
+enum mt_api_type {
+ API_TYPE_MBOX = 1,
+ API_TYPE_API_CHAIN_BYPASS,
+ API_TYPE_API_CHAIN_TO_MPU,
+ API_TYPE_CLP,
+};
+
+struct npu_cmd_st {
+ u32 mod : 8;
+ u32 cmd : 8;
+ u32 ack_type : 3;
+ u32 direct_resp : 1;
+ u32 len : 12;
+};
+
+struct mpu_cmd_st {
+ u32 api_type : 8;
+ u32 mod : 8;
+ u32 cmd : 16;
+};
+
+struct msg_module {
+ char device_name[IFNAMSIZ];
+ u32 module;
+ union {
+ u32 msg_formate; /* for driver */
+ struct npu_cmd_st npu_cmd;
+ struct mpu_cmd_st mpu_cmd;
+ };
+ u32 timeout; /* for mpu/npu cmd */
+ u32 func_idx;
+ u32 buf_in_size;
+ u32 buf_out_size;
+ void *in_buf;
+ void *out_buf;
+ int bus_num;
+ u8 port_id;
+ u8 rsvd1[3];
+ u32 rsvd2[4];
+};
+
+struct hinic3_mt_qos_cos_cfg {
+ struct mt_msg_head head;
+
+ u8 port_id;
+ u8 func_cos_bitmap;
+ u8 port_cos_bitmap;
+ u8 func_max_cos_num;
+ u32 rsvd2[4];
+};
+
+#define MAX_NETDEV_NUM 4
+
+enum hinic3_bond_cmd_to_custom_e {
+ CMD_CUSTOM_BOND_DEV_CREATE = 1,
+ CMD_CUSTOM_BOND_DEV_DELETE,
+ CMD_CUSTOM_BOND_GET_CHIP_NAME,
+ CMD_CUSTOM_BOND_GET_CARD_INFO
+};
+
+typedef enum {
+ HASH_POLICY_L2 = 0, /* SMAC_DMAC */
+ HASH_POLICY_L23 = 1, /* SMAC_DMAC_SIP_DIP */
+ HASH_POLICY_L34 = 2, /* SIP_DIP_SPORT_DPORT */
+ HASH_POLICY_MAX = 3 /* MAX */
+} xmit_hash_policy_e;
+
+/* bond mode */
+typedef enum tag_bond_mode {
+ BOND_MODE_NONE = 0, /**< bond disable */
+ BOND_MODE_BACKUP = 1, /**< 1 for active-backup */
+ BOND_MODE_BALANCE = 2, /**< 2 for balance-xor */
+ BOND_MODE_LACP = 4, /**< 4 for 802.3ad */
+ BOND_MODE_MAX
+} bond_mode_e;
+
+struct add_bond_dev_s {
+ struct mt_msg_head head;
+ /* input may be empty, indicating that the value
+ * is assigned by the driver
+ */
+ char bond_name[IFNAMSIZ];
+ u8 slave_cnt;
+ u8 rsvd[3];
+ char slave_name[MAX_NETDEV_NUM][IFNAMSIZ];
+ u32 poll_timeout; /* unit: ms, default value = 100 */
+ u32 up_delay; /* default value = 0 */
+ u32 down_delay; /* default value = 0 */
+ u32 bond_mode; /* default value = BOND_MODE_LACP */
+
+ /* maximum number of active bond member interfaces,
+ * default value = 0
+ */
+ u32 active_port_max_num;
+ /* minimum number of active bond member interfaces,
+ * default value = 0
+ */
+ u32 active_port_min_num;
+ /* hash policy, which is used for microcode routing logic,
+ * default value = 0
+ */
+ xmit_hash_policy_e xmit_hash_policy;
+};
+
+struct del_bond_dev_s {
+ struct mt_msg_head head;
+ char bond_name[IFNAMSIZ];
+};
+
+struct get_bond_chip_name_s {
+ char bond_name[IFNAMSIZ];
+ char chip_name[IFNAMSIZ];
+};
+
+struct bond_drv_msg_s {
+ u32 bond_id;
+ u32 slave_cnt;
+ u32 master_slave_index;
+ char bond_name[IFNAMSIZ];
+ char slave_name[MAX_NETDEV_NUM][IFNAMSIZ];
+};
+
+#define MAX_BONDING_CNT_PER_CARD (2)
+
+struct bond_negotiate_status {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+ u32 bond_id;
+ u32 bond_mmi_status; /* link status of this bond device */
+ u32 active_bitmap; /* slave port status of this bond device */
+
+ u8 rsvd[16];
+};
+
+struct bond_all_msg_s {
+ struct bond_drv_msg_s drv_msg;
+ struct bond_negotiate_status active_info;
+};
+
+struct get_card_bond_msg_s {
+ u32 bond_cnt;
+ struct bond_all_msg_s all_msg[MAX_BONDING_CNT_PER_CARD];
+};
+
+int alloc_buff_in(void *hwdev, struct msg_module *nt_msg, u32 in_size, void **buf_in);
+
+int alloc_buff_out(void *hwdev, struct msg_module *nt_msg, u32 out_size, void **buf_out);
+
+void free_buff_in(void *hwdev, const struct msg_module *nt_msg, void *buf_in);
+
+void free_buff_out(void *hwdev, struct msg_module *nt_msg, void *buf_out);
+
+int copy_buf_out_to_user(struct msg_module *nt_msg, u32 out_size, void *buf_out);
+
+int send_to_mpu(void *hwdev, struct msg_module *nt_msg, void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+int send_to_npu(void *hwdev, struct msg_module *nt_msg, void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size);
+int send_to_sm(void *hwdev, struct msg_module *nt_msg, void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+#endif /* _HINIC3_MT_H_ */
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c b/drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c
new file mode 100644
index 000000000000..67553270f710
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_netdev_ops.c
@@ -0,0 +1,1975 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+#include <net/dsfield.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/netlink.h>
+#include <linux/debugfs.h>
+#include <linux/ip.h>
+
+#include "ossl_knl.h"
+#ifdef HAVE_XDP_SUPPORT
+#include <linux/bpf.h>
+#endif
+#include "hinic3_hw.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
+#include "hinic3_dcb.h"
+#include "hinic3_nic_prof.h"
+
+#define HINIC3_DEFAULT_RX_CSUM_OFFLOAD 0xFFF
+
+#define HINIC3_LRO_DEFAULT_COAL_PKT_SIZE 32
+#define HINIC3_LRO_DEFAULT_TIME_LIMIT 16
+#define HINIC3_WAIT_FLUSH_QP_RESOURCE_TIMEOUT 100
+static void hinic3_nic_set_rx_mode(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ if (netdev_uc_count(netdev) != nic_dev->netdev_uc_cnt ||
+ netdev_mc_count(netdev) != nic_dev->netdev_mc_cnt) {
+ set_bit(HINIC3_UPDATE_MAC_FILTER, &nic_dev->flags);
+ nic_dev->netdev_uc_cnt = netdev_uc_count(netdev);
+ nic_dev->netdev_mc_cnt = netdev_mc_count(netdev);
+ }
+
+ queue_work(nic_dev->workq, &nic_dev->rx_mode_work);
+}
+
+static int hinic3_alloc_txrxq_resources(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *q_params)
+{
+ u32 size;
+ int err;
+
+ size = sizeof(*q_params->txqs_res) * q_params->num_qps;
+ q_params->txqs_res = kzalloc(size, GFP_KERNEL);
+ if (!q_params->txqs_res) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc txqs resources array\n");
+ return -ENOMEM;
+ }
+
+ size = sizeof(*q_params->rxqs_res) * q_params->num_qps;
+ q_params->rxqs_res = kzalloc(size, GFP_KERNEL);
+ if (!q_params->rxqs_res) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc rxqs resource array\n");
+ err = -ENOMEM;
+ goto alloc_rxqs_res_arr_err;
+ }
+
+ size = sizeof(*q_params->irq_cfg) * q_params->num_qps;
+ q_params->irq_cfg = kzalloc(size, GFP_KERNEL);
+ if (!q_params->irq_cfg) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc irq resource array\n");
+ err = -ENOMEM;
+ goto alloc_irq_cfg_err;
+ }
+
+ err = hinic3_alloc_txqs_res(nic_dev, q_params->num_qps,
+ q_params->sq_depth, q_params->txqs_res);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc txqs resource\n");
+ goto alloc_txqs_res_err;
+ }
+
+ err = hinic3_alloc_rxqs_res(nic_dev, q_params->num_qps,
+ q_params->rq_depth, q_params->rxqs_res);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc rxqs resource\n");
+ goto alloc_rxqs_res_err;
+ }
+
+ return 0;
+
+alloc_rxqs_res_err:
+ hinic3_free_txqs_res(nic_dev, q_params->num_qps, q_params->sq_depth,
+ q_params->txqs_res);
+
+alloc_txqs_res_err:
+ kfree(q_params->irq_cfg);
+ q_params->irq_cfg = NULL;
+
+alloc_irq_cfg_err:
+ kfree(q_params->rxqs_res);
+ q_params->rxqs_res = NULL;
+
+alloc_rxqs_res_arr_err:
+ kfree(q_params->txqs_res);
+ q_params->txqs_res = NULL;
+
+ return err;
+}
+
+static void hinic3_free_txrxq_resources(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *q_params)
+{
+ hinic3_free_rxqs_res(nic_dev, q_params->num_qps, q_params->rq_depth,
+ q_params->rxqs_res);
+ hinic3_free_txqs_res(nic_dev, q_params->num_qps, q_params->sq_depth,
+ q_params->txqs_res);
+
+ kfree(q_params->irq_cfg);
+ q_params->irq_cfg = NULL;
+
+ kfree(q_params->rxqs_res);
+ q_params->rxqs_res = NULL;
+
+ kfree(q_params->txqs_res);
+ q_params->txqs_res = NULL;
+}
+
+static int hinic3_configure_txrxqs(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *q_params)
+{
+ int err;
+
+ err = hinic3_configure_txqs(nic_dev, q_params->num_qps,
+ q_params->sq_depth, q_params->txqs_res);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to configure txqs\n");
+ return err;
+ }
+
+ err = hinic3_configure_rxqs(nic_dev, q_params->num_qps,
+ q_params->rq_depth, q_params->rxqs_res);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to configure rxqs\n");
+ return err;
+ }
+
+ return 0;
+}
+
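+/* Map qps to CoS for DCB: with DCB disabled the tx doorbell cos is 0; with
+ * it enabled, the user CoS count is checked against cos_config_num_max and
+ * num_qps, and DCB is turned off again (with defaults synced to hw) when
+ * those constraints cannot be met.
+ */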
+static void config_dcb_qps_map(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u8 num_cos;
+
+ if (!test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)) {
+ hinic3_update_tx_db_cos(nic_dev, 0);
+ return;
+ }
+
+ num_cos = hinic3_get_dev_user_cos_num(nic_dev);
+ hinic3_update_qp_cos_cfg(nic_dev, num_cos);
+ /* For now, we don't support to change num_cos */
+ if (num_cos > nic_dev->cos_config_num_max ||
+ nic_dev->q_params.num_qps < num_cos) {
+ nicif_err(nic_dev, drv, netdev, "Invalid num_cos: %u or num_qps: %u, disable DCB\n",
+ num_cos, nic_dev->q_params.num_qps);
+ nic_dev->q_params.num_cos = 0;
+ clear_bit(HINIC3_DCB_ENABLE, &nic_dev->flags);
+ /* if we can't enable rss or get enough num_qps,
+ * we need to sync the default configuration to hw
+ */
+ hinic3_configure_dcb(netdev);
+ }
+
+ hinic3_update_tx_db_cos(nic_dev, 1);
+}
+
+static int hinic3_configure(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ int err;
+
+ err = hinic3_set_port_mtu(nic_dev->hwdev, (u16)netdev->mtu);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to set mtu\n");
+ return err;
+ }
+
+ config_dcb_qps_map(nic_dev);
+
+ /* rx rss init */
+ err = hinic3_rx_configure(netdev, test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) ? 1 : 0);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to configure rx\n");
+ return err;
+ }
+
+ return 0;
+}
+
+static void hinic3_remove_configure(struct hinic3_nic_dev *nic_dev)
+{
+ hinic3_rx_remove_configure(nic_dev->netdev);
+}
+
+/* Try to change the number of irqs to the target number and
+ * return the actual number of irqs obtained.
+ */
+static u16 hinic3_qp_irq_change(struct hinic3_nic_dev *nic_dev,
+ u16 dst_num_qp_irq)
+{
+ struct irq_info *qps_irq_info = nic_dev->qps_irq_info;
+ u16 resp_irq_num, irq_num_gap, i;
+ u16 idx;
+ int err;
+
+ if (dst_num_qp_irq > nic_dev->num_qp_irq) {
+ irq_num_gap = dst_num_qp_irq - nic_dev->num_qp_irq;
+ err = hinic3_alloc_irqs(nic_dev->hwdev, SERVICE_T_NIC,
+ irq_num_gap,
+ &qps_irq_info[nic_dev->num_qp_irq],
+ &resp_irq_num);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to alloc irqs\n");
+ return nic_dev->num_qp_irq;
+ }
+
+ nic_dev->num_qp_irq += resp_irq_num;
+ } else if (dst_num_qp_irq < nic_dev->num_qp_irq) {
+ irq_num_gap = nic_dev->num_qp_irq - dst_num_qp_irq;
+ for (i = 0; i < irq_num_gap; i++) {
+ idx = (nic_dev->num_qp_irq - i) - 1;
+ hinic3_free_irq(nic_dev->hwdev, SERVICE_T_NIC,
+ qps_irq_info[idx].irq_id);
+ qps_irq_info[idx].irq_id = 0;
+ qps_irq_info[idx].msix_entry_idx = 0;
+ }
+ nic_dev->num_qp_irq = dst_num_qp_irq;
+ }
+
+ return nic_dev->num_qp_irq;
+}
+
+static void config_dcb_num_qps(struct hinic3_nic_dev *nic_dev,
+ const struct hinic3_dyna_txrxq_params *q_params,
+ u16 max_qps)
+{
+ u8 num_cos = q_params->num_cos;
+ u8 user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+
+ if (!num_cos || num_cos > nic_dev->cos_config_num_max || num_cos > max_qps)
+ return; /* will disable DCB in config_dcb_qps_map() */
+
+ hinic3_update_qp_cos_cfg(nic_dev, user_cos_num);
+}
+
+static void hinic3_config_num_qps(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *q_params)
+{
+ u16 alloc_num_irq, cur_num_irq;
+ u16 dst_num_irq;
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags))
+ q_params->num_qps = 1;
+
+ config_dcb_num_qps(nic_dev, q_params, q_params->num_qps);
+
+ if (nic_dev->num_qp_irq >= q_params->num_qps)
+ goto out;
+
+ cur_num_irq = nic_dev->num_qp_irq;
+
+ alloc_num_irq = hinic3_qp_irq_change(nic_dev, q_params->num_qps);
+ if (alloc_num_irq < q_params->num_qps) {
+ q_params->num_qps = alloc_num_irq;
+ config_dcb_num_qps(nic_dev, q_params, q_params->num_qps);
+ nicif_warn(nic_dev, drv, nic_dev->netdev,
+ "Can not get enough irqs, adjust num_qps to %u\n",
+ q_params->num_qps);
+
+ /* The current irq may be in use, we must keep it */
+ dst_num_irq = (u16)max_t(u16, cur_num_irq, q_params->num_qps);
+ hinic3_qp_irq_change(nic_dev, dst_num_irq);
+ }
+
+out:
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Finally num_qps: %u\n",
+ q_params->num_qps);
+}
+
+/* determine num_qps from rss_tmpl_id/irq_num/dcb_en */
+static int hinic3_setup_num_qps(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u32 irq_size;
+
+ nic_dev->num_qp_irq = 0;
+
+ irq_size = sizeof(*nic_dev->qps_irq_info) * nic_dev->max_qps;
+ if (!irq_size) {
+ nicif_err(nic_dev, drv, netdev, "Cannot allocate zero size entries\n");
+ return -EINVAL;
+ }
+ nic_dev->qps_irq_info = kzalloc(irq_size, GFP_KERNEL);
+ if (!nic_dev->qps_irq_info) {
+ nicif_err(nic_dev, drv, netdev, "Failed to alloc qps_irq_info\n");
+ return -ENOMEM;
+ }
+
+ hinic3_config_num_qps(nic_dev, &nic_dev->q_params);
+
+ return 0;
+}
+
+static void hinic3_destroy_num_qps(struct hinic3_nic_dev *nic_dev)
+{
+ u16 i;
+
+ for (i = 0; i < nic_dev->num_qp_irq; i++)
+ hinic3_free_irq(nic_dev->hwdev, SERVICE_T_NIC,
+ nic_dev->qps_irq_info[i].irq_id);
+
+ kfree(nic_dev->qps_irq_info);
+}
+
+int hinic3_force_port_disable(struct hinic3_nic_dev *nic_dev)
+{
+ int err;
+
+ down(&nic_dev->port_state_sem);
+
+ err = hinic3_set_port_enable(nic_dev->hwdev, false, HINIC3_CHANNEL_NIC);
+ if (!err)
+ nic_dev->force_port_disable = true;
+
+ up(&nic_dev->port_state_sem);
+
+ return err;
+}
+
+int hinic3_force_set_port_state(struct hinic3_nic_dev *nic_dev, bool enable)
+{
+ int err = 0;
+
+ down(&nic_dev->port_state_sem);
+
+ nic_dev->force_port_disable = false;
+ err = hinic3_set_port_enable(nic_dev->hwdev, enable,
+ HINIC3_CHANNEL_NIC);
+
+ up(&nic_dev->port_state_sem);
+
+ return err;
+}
+
+int hinic3_maybe_set_port_state(struct hinic3_nic_dev *nic_dev, bool enable)
+{
+ int err;
+
+ down(&nic_dev->port_state_sem);
+
+ /* Do nothing when force disable is set.
+ * The port is disabled by the force port disable call
+ * and must not be enabled while in force mode.
+ */
+ if (nic_dev->force_port_disable) {
+ up(&nic_dev->port_state_sem);
+ return 0;
+ }
+
+ err = hinic3_set_port_enable(nic_dev->hwdev, enable,
+ HINIC3_CHANNEL_NIC);
+
+ up(&nic_dev->port_state_sem);
+
+ return err;
+}
+
+static void hinic3_print_link_message(struct hinic3_nic_dev *nic_dev,
+ u8 link_status)
+{
+ if (nic_dev->link_status == link_status)
+ return;
+
+ nic_dev->link_status = link_status;
+
+ nicif_info(nic_dev, link, nic_dev->netdev, "Link is %s\n",
+ (link_status ? "up" : "down"));
+}
+
+static int hinic3_alloc_channel_resources(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_qp_params *qp_params,
+ struct hinic3_dyna_txrxq_params *trxq_params)
+{
+ int err;
+
+ qp_params->num_qps = trxq_params->num_qps;
+ qp_params->sq_depth = trxq_params->sq_depth;
+ qp_params->rq_depth = trxq_params->rq_depth;
+
+ err = hinic3_alloc_qps(nic_dev->hwdev, nic_dev->qps_irq_info,
+ qp_params);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to alloc qps\n");
+ return err;
+ }
+
+ err = hinic3_alloc_txrxq_resources(nic_dev, trxq_params);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to alloc txrxq resources\n");
+ hinic3_free_qps(nic_dev->hwdev, qp_params);
+ return err;
+ }
+
+ return 0;
+}
+
+static void hinic3_free_channel_resources(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_qp_params *qp_params,
+ struct hinic3_dyna_txrxq_params *trxq_params)
+{
+ mutex_lock(&nic_dev->nic_mutex);
+ hinic3_free_txrxq_resources(nic_dev, trxq_params);
+ hinic3_free_qps(nic_dev->hwdev, qp_params);
+ mutex_unlock(&nic_dev->nic_mutex);
+}
+
+static int hinic3_open_channel(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_qp_params *qp_params,
+ struct hinic3_dyna_txrxq_params *trxq_params)
+{
+ int err;
+
+ err = hinic3_init_qps(nic_dev->hwdev, qp_params);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to init qps\n");
+ return err;
+ }
+
+ err = hinic3_configure_txrxqs(nic_dev, trxq_params);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to configure txrxqs\n");
+ goto cfg_txrxqs_err;
+ }
+
+ err = hinic3_qps_irq_init(nic_dev);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to init txrxq irq\n");
+ goto init_qp_irq_err;
+ }
+
+ err = hinic3_configure(nic_dev);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to configure nic resources\n");
+ goto configure_err;
+ }
+
+ return 0;
+
+configure_err:
+ hinic3_qps_irq_deinit(nic_dev);
+
+init_qp_irq_err:
+cfg_txrxqs_err:
+ hinic3_deinit_qps(nic_dev->hwdev, qp_params);
+
+ return err;
+}
+
+static void hinic3_close_channel(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_qp_params *qp_params)
+{
+ hinic3_remove_configure(nic_dev);
+ hinic3_qps_irq_deinit(nic_dev);
+ hinic3_deinit_qps(nic_dev->hwdev, qp_params);
+}
+
+int hinic3_vport_up(struct hinic3_nic_dev *nic_dev)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u8 link_status = 0;
+ u16 glb_func_id;
+ int err;
+
+ glb_func_id = hinic3_global_func_id(nic_dev->hwdev);
+ err = hinic3_set_vport_enable(nic_dev->hwdev, glb_func_id, true,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to enable vport\n");
+ goto vport_enable_err;
+ }
+
+ err = hinic3_maybe_set_port_state(nic_dev, true);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to enable port\n");
+ goto port_enable_err;
+ }
+
+ netif_set_real_num_tx_queues(netdev, nic_dev->q_params.num_qps);
+ netif_set_real_num_rx_queues(netdev, nic_dev->q_params.num_qps);
+ netif_tx_wake_all_queues(netdev);
+
+ if (test_bit(HINIC3_FORCE_LINK_UP, &nic_dev->flags)) {
+ link_status = true;
+ netif_carrier_on(netdev);
+ } else {
+ err = hinic3_get_link_state(nic_dev->hwdev, &link_status);
+ if (!err && link_status)
+ netif_carrier_on(netdev);
+ }
+
+ queue_delayed_work(nic_dev->workq, &nic_dev->moderation_task,
+ HINIC3_MODERATONE_DELAY);
+ if (test_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags))
+ queue_delayed_work(nic_dev->workq, &nic_dev->rxq_check_work, HZ);
+
+ hinic3_print_link_message(nic_dev, link_status);
+
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ hinic3_notify_all_vfs_link_changed(nic_dev->hwdev, link_status);
+
+ return 0;
+
+port_enable_err:
+ hinic3_set_vport_enable(nic_dev->hwdev, glb_func_id, false,
+ HINIC3_CHANNEL_NIC);
+
+vport_enable_err:
+ hinic3_flush_qps_res(nic_dev->hwdev);
+ /* 100ms after the vport is disabled, no more packets are sent to the host */
+ msleep(100);
+
+ return err;
+}
+
+void hinic3_vport_down(struct hinic3_nic_dev *nic_dev)
+{
+ u16 glb_func_id;
+
+ netif_carrier_off(nic_dev->netdev);
+ netif_tx_disable(nic_dev->netdev);
+
+ cancel_delayed_work_sync(&nic_dev->rxq_check_work);
+
+ cancel_delayed_work_sync(&nic_dev->moderation_task);
+
+ if (hinic3_get_chip_present_flag(nic_dev->hwdev)) {
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev))
+ hinic3_notify_all_vfs_link_changed(nic_dev->hwdev, 0);
+
+ hinic3_maybe_set_port_state(nic_dev, false);
+
+ glb_func_id = hinic3_global_func_id(nic_dev->hwdev);
+ hinic3_set_vport_enable(nic_dev->hwdev, glb_func_id, false,
+ HINIC3_CHANNEL_NIC);
+
+ hinic3_flush_txqs(nic_dev->netdev);
+ /* 100ms after the vport is disabled, no more packets
+ * are sent to the host; on FPGA this takes 2000ms.
+ */
+ msleep(HINIC3_WAIT_FLUSH_QP_RESOURCE_TIMEOUT);
+ hinic3_flush_qps_res(nic_dev->hwdev);
+ }
+}
+
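+/* Channel reconfiguration uses an allocate-then-swap pattern: the new queue
+ * resources are allocated before the running channel is torn down, so an
+ * allocation failure leaves the current configuration untouched.
+ */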
+int hinic3_change_channel_settings(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *trxq_params,
+ hinic3_reopen_handler reopen_handler,
+ const void *priv_data)
+{
+ struct hinic3_dyna_qp_params new_qp_params = {0};
+ struct hinic3_dyna_qp_params cur_qp_params = {0};
+ int err;
+
+ hinic3_config_num_qps(nic_dev, trxq_params);
+
+ err = hinic3_alloc_channel_resources(nic_dev, &new_qp_params,
+ trxq_params);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc channel resources\n");
+ return err;
+ }
+
+ if (!test_and_set_bit(HINIC3_CHANGE_RES_INVALID, &nic_dev->flags)) {
+ hinic3_vport_down(nic_dev);
+ hinic3_close_channel(nic_dev, &cur_qp_params);
+ hinic3_free_channel_resources(nic_dev, &cur_qp_params,
+ &nic_dev->q_params);
+ }
+
+ if (nic_dev->num_qp_irq > trxq_params->num_qps)
+ hinic3_qp_irq_change(nic_dev, trxq_params->num_qps);
+ nic_dev->q_params = *trxq_params;
+
+ if (reopen_handler)
+ reopen_handler(nic_dev, priv_data);
+
+ err = hinic3_open_channel(nic_dev, &new_qp_params, trxq_params);
+ if (err)
+ goto open_channel_err;
+
+ err = hinic3_vport_up(nic_dev);
+ if (err)
+ goto vport_up_err;
+
+ clear_bit(HINIC3_CHANGE_RES_INVALID, &nic_dev->flags);
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Change channel settings success\n");
+
+ return 0;
+
+vport_up_err:
+ hinic3_close_channel(nic_dev, &new_qp_params);
+
+open_channel_err:
+ hinic3_free_channel_resources(nic_dev, &new_qp_params, trxq_params);
+
+ return err;
+}
+
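+/* ndo_open bring-up order: nicio resources -> num_qps/irqs -> queue
+ * resources -> channel (qps, txrxqs, irqs, rss/dcb) -> vport and port
+ * enable; the error labels below unwind in exact reverse order.
+ */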
+int hinic3_open(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_dyna_qp_params qp_params = {0};
+ int err;
+
+ if (test_bit(HINIC3_INTF_UP, &nic_dev->flags)) {
+ nicif_info(nic_dev, drv, netdev, "Netdev already open, do nothing\n");
+ return 0;
+ }
+
+ err = hinic3_init_nicio_res(nic_dev->hwdev);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to init nicio resources\n");
+ return err;
+ }
+
+ err = hinic3_setup_num_qps(nic_dev);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to setup num_qps\n");
+ goto setup_qps_err;
+ }
+
+ err = hinic3_alloc_channel_resources(nic_dev, &qp_params,
+ &nic_dev->q_params);
+ if (err)
+ goto alloc_channel_res_err;
+
+ err = hinic3_open_channel(nic_dev, &qp_params, &nic_dev->q_params);
+ if (err)
+ goto open_channel_err;
+
+ err = hinic3_vport_up(nic_dev);
+ if (err)
+ goto vport_up_err;
+
+ err = hinic3_set_master_dev_state(nic_dev, true);
+ if (err)
+ goto set_master_dev_err;
+
+ set_bit(HINIC3_INTF_UP, &nic_dev->flags);
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Netdev is up\n");
+
+ return 0;
+
+set_master_dev_err:
+ hinic3_vport_down(nic_dev);
+
+vport_up_err:
+ hinic3_close_channel(nic_dev, &qp_params);
+
+open_channel_err:
+ hinic3_free_channel_resources(nic_dev, &qp_params, &nic_dev->q_params);
+
+alloc_channel_res_err:
+ hinic3_destroy_num_qps(nic_dev);
+
+setup_qps_err:
+ hinic3_deinit_nicio_res(nic_dev->hwdev);
+
+ return err;
+}
+
+int hinic3_close(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_dyna_qp_params qp_params = {0};
+
+ if (!test_and_clear_bit(HINIC3_INTF_UP, &nic_dev->flags)) {
+ nicif_info(nic_dev, drv, netdev, "Netdev already close, do nothing\n");
+ return 0;
+ }
+
+ if (test_and_clear_bit(HINIC3_CHANGE_RES_INVALID, &nic_dev->flags))
+ goto out;
+
+ hinic3_set_master_dev_state(nic_dev, false);
+
+ hinic3_vport_down(nic_dev);
+ hinic3_close_channel(nic_dev, &qp_params);
+ hinic3_free_channel_resources(nic_dev, &qp_params, &nic_dev->q_params);
+
+out:
+ hinic3_deinit_nicio_res(nic_dev->hwdev);
+ hinic3_destroy_num_qps(nic_dev);
+
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Netdev is down\n");
+
+ return 0;
+}
+
+#define IPV6_ADDR_LEN 4
+#define PKT_INFO_LEN 9
+#define BITS_PER_TUPLE 32
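+/* XOR hash: fold all tuple bytes into one value by successive XOR. */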
+static u32 calc_xor_rss(u8 *rss_tunple, u32 len)
+{
+ u32 hash_value;
+ u32 i;
+
+ hash_value = rss_tunple[0];
+ for (i = 1; i < len; i++)
+ hash_value = hash_value ^ rss_tunple[i];
+
+ return hash_value;
+}
+
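+/* Toeplitz hash: for every set bit j of each 32-bit tuple word, XOR in the
+ * 32-bit window of the hash key starting at that bit position. The window
+ * is built from two adjacent key words, which is why rss_key must hold
+ * len + 1 entries.
+ */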
+static u32 calc_toep_rss(const u32 *rss_tunple, u32 len, const u32 *rss_key)
+{
+ u32 rss = 0;
+ u32 i, j;
+
+ for (i = 1; i <= len; i++) {
+ for (j = 0; j < BITS_PER_TUPLE; j++)
+ if (rss_tunple[i - 1] & ((u32)1 <<
+ (u32)((BITS_PER_TUPLE - 1) - j)))
+ rss ^= (rss_key[i - 1] << j) |
+ (u32)((u64)rss_key[i] >>
+ (BITS_PER_TUPLE - j));
+ }
+
+ return rss;
+}
+
+#define RSS_VAL(val, type) \
+ (((type) == HINIC3_RSS_HASH_ENGINE_TYPE_TOEP) ? ntohl(val) : (val))
+
+static u8 parse_ipv6_info(struct sk_buff *skb, u32 *rss_tunple,
+ u8 hash_engine, u32 *len)
+{
+ struct ipv6hdr *ipv6hdr = ipv6_hdr(skb);
+ u32 *saddr = (u32 *)&ipv6hdr->saddr;
+ u32 *daddr = (u32 *)&ipv6hdr->daddr;
+ u8 i;
+
+ for (i = 0; i < IPV6_ADDR_LEN; i++) {
+ rss_tunple[i] = RSS_VAL(daddr[i], hash_engine);
+ /* saddr is stored IPV6_ADDR_LEN (4) words after daddr in the tuple */
+ rss_tunple[(u32)(i + IPV6_ADDR_LEN)] =
+ RSS_VAL(saddr[i], hash_engine);
+ }
+ *len = IPV6_ADDR_LEN + IPV6_ADDR_LEN;
+
+ if (skb_network_header(skb) + sizeof(*ipv6hdr) ==
+ skb_transport_header(skb))
+ return ipv6hdr->nexthdr;
+ return 0;
+}
+
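+/* Software mirror of the NIC's RSS hash, used when HINIC3_SAME_RXTX is set
+ * so tx lands on the same queue as rx: reuse the recorded rx queue when
+ * present, otherwise hash the IP addresses (plus L4 ports when rss_type
+ * enables them for the protocol) and index the RSS indirection table.
+ */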
+static u16 select_queue_by_hash_func(struct net_device *dev, struct sk_buff *skb,
+ unsigned int num_tx_queues)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(dev);
+ struct nic_rss_type rss_type = nic_dev->rss_type;
+ struct iphdr *iphdr = NULL;
+ u32 rss_tunple[PKT_INFO_LEN] = {0};
+ u32 len = 0;
+ u32 hash = 0;
+ u8 hash_engine = nic_dev->rss_hash_engine;
+ u8 l4_proto;
+ unsigned char *l4_hdr = NULL;
+
+ if (skb_rx_queue_recorded(skb)) {
+ hash = skb_get_rx_queue(skb);
+ if (unlikely(hash >= num_tx_queues))
+ hash %= num_tx_queues;
+
+ return (u16)hash;
+ }
+
+ iphdr = ip_hdr(skb);
+ if (iphdr->version == IPV4_VERSION) {
+ rss_tunple[len++] = RSS_VAL(iphdr->daddr, hash_engine);
+ rss_tunple[len++] = RSS_VAL(iphdr->saddr, hash_engine);
+ l4_proto = iphdr->protocol;
+ } else if (iphdr->version == IPV6_VERSION) {
+ l4_proto = parse_ipv6_info(skb, (u32 *)rss_tunple,
+ hash_engine, &len);
+ } else {
+ return (u16)hash;
+ }
+
+ if ((iphdr->version == IPV4_VERSION &&
+ ((l4_proto == IPPROTO_UDP && rss_type.udp_ipv4) ||
+ (l4_proto == IPPROTO_TCP && rss_type.tcp_ipv4))) ||
+ (iphdr->version == IPV6_VERSION &&
+ ((l4_proto == IPPROTO_UDP && rss_type.udp_ipv6) ||
+ (l4_proto == IPPROTO_TCP && rss_type.tcp_ipv6)))) {
+ l4_hdr = skb_transport_header(skb);
+ /* High 16 bits are dport, low 16 bits are sport. */
+ rss_tunple[len++] = ((u32)ntohs(*((u16 *)l4_hdr + 1U)) << 16) |
+ ntohs(*(u16 *)l4_hdr);
+ } /* rss_type.ipv4 and rss_type.ipv6 are on by default. */
+
+ if (hash_engine == HINIC3_RSS_HASH_ENGINE_TYPE_TOEP)
+ hash = calc_toep_rss((u32 *)rss_tunple, len,
+ nic_dev->rss_hkey_be);
+ else
+ hash = calc_xor_rss((u8 *)rss_tunple, len * (u32)sizeof(u32));
+
+ return (u16)nic_dev->rss_indir[hash & 0xFF];
+}
+
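+/* The DSCP codepoint is the upper six bits of the traffic-class byte that
+ * ipv4_get_dsfield()/ipv6_get_dsfield() return, hence the shift by 2.
+ */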
+#define GET_DSCP_PRI_OFFSET 2
+static u8 hinic3_get_dscp_up(struct hinic3_nic_dev *nic_dev, struct sk_buff *skb)
+{
+ int dscp_cp;
+
+ if (skb->protocol == htons(ETH_P_IP))
+ dscp_cp = ipv4_get_dsfield(ip_hdr(skb)) >> GET_DSCP_PRI_OFFSET;
+ else if (skb->protocol == htons(ETH_P_IPV6))
+ dscp_cp = ipv6_get_dsfield(ipv6_hdr(skb)) >> GET_DSCP_PRI_OFFSET;
+ else
+ return nic_dev->hw_dcb_cfg.default_cos;
+ return nic_dev->hw_dcb_cfg.dscp2cos[dscp_cp];
+}
+
+#if defined(HAVE_NDO_SELECT_QUEUE_SB_DEV_ONLY)
+static u16 hinic3_select_queue(struct net_device *netdev, struct sk_buff *skb,
+ struct net_device *sb_dev)
+#elif defined(HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK)
+#if defined(HAVE_NDO_SELECT_QUEUE_SB_DEV)
+static u16 hinic3_select_queue(struct net_device *netdev, struct sk_buff *skb,
+ struct net_device *sb_dev,
+ select_queue_fallback_t fallback)
+#else
+static u16 hinic3_select_queue(struct net_device *netdev, struct sk_buff *skb,
+ __always_unused void *accel,
+ select_queue_fallback_t fallback)
+#endif
+
+#elif defined(HAVE_NDO_SELECT_QUEUE_ACCEL)
+static u16 hinic3_select_queue(struct net_device *netdev, struct sk_buff *skb,
+ __always_unused void *accel)
+
+#else
+static u16 hinic3_select_queue(struct net_device *netdev, struct sk_buff *skb)
+#endif /* end of HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK */
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u16 txq;
+ u8 cos, qp_num;
+
+ if (test_bit(HINIC3_SAME_RXTX, &nic_dev->flags))
+ return select_queue_by_hash_func(netdev, skb, netdev->real_num_tx_queues);
+
+ txq =
+#if defined(HAVE_NDO_SELECT_QUEUE_SB_DEV_ONLY)
+ netdev_pick_tx(netdev, skb, NULL);
+#elif defined(HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK)
+#ifdef HAVE_NDO_SELECT_QUEUE_SB_DEV
+ fallback(netdev, skb, sb_dev);
+#else
+ fallback(netdev, skb);
+#endif
+#else
+ skb_tx_hash(netdev, skb);
+#endif
+
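+ /* With DCB enabled, remap the chosen txq into the qp range reserved
+ * for the skb's CoS, derived from the VLAN PCP bits or the IP DSCP
+ * field depending on the configured trust mode.
+ */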
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)) {
+ if (nic_dev->hw_dcb_cfg.trust == DCB_PCP) {
+ if (skb->vlan_tci)
+ cos = nic_dev->hw_dcb_cfg.pcp2cos[skb->vlan_tci >> VLAN_PRIO_SHIFT];
+ else
+ cos = nic_dev->hw_dcb_cfg.default_cos;
+ } else {
+ cos = hinic3_get_dscp_up(nic_dev, skb);
+ }
+
+ qp_num = nic_dev->hw_dcb_cfg.cos_qp_num[cos] ?
+ txq % nic_dev->hw_dcb_cfg.cos_qp_num[cos] : 0;
+ txq = nic_dev->hw_dcb_cfg.cos_qp_offset[cos] + qp_num;
+ }
+
+ return txq;
+}
+
+#ifdef HAVE_NDO_GET_STATS64
+#ifdef HAVE_VOID_NDO_GET_STATS64
+static void hinic3_get_stats64(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats)
+#else
+static struct rtnl_link_stats64
+ *hinic3_get_stats64(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats)
+#endif
+
+#else /* !HAVE_NDO_GET_STATS64 */
+static struct net_device_stats *hinic3_get_stats(struct net_device *netdev)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+#ifndef HAVE_NDO_GET_STATS64
+#ifdef HAVE_NETDEV_STATS_IN_NETDEV
+ struct net_device_stats *stats = &netdev->stats;
+#else
+ struct net_device_stats *stats = &nic_dev->net_stats;
+#endif /* HAVE_NETDEV_STATS_IN_NETDEV */
+#endif /* HAVE_NDO_GET_STATS64 */
+ struct hinic3_txq_stats *txq_stats = NULL;
+ struct hinic3_rxq_stats *rxq_stats = NULL;
+ struct hinic3_txq *txq = NULL;
+ struct hinic3_rxq *rxq = NULL;
+ u64 bytes, packets, dropped, errors;
+ unsigned int start;
+ int i;
+
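+ /* Sum the per-queue counters; the u64_stats_fetch_begin/retry pair
+ * yields a consistent 64-bit snapshot without locking, which matters
+ * on 32-bit hosts.
+ */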
+ bytes = 0;
+ packets = 0;
+ dropped = 0;
+ for (i = 0; i < nic_dev->max_qps; i++) {
+ if (!nic_dev->txqs)
+ break;
+
+ txq = &nic_dev->txqs[i];
+ txq_stats = &txq->txq_stats;
+ do {
+ start = u64_stats_fetch_begin(&txq_stats->syncp);
+ bytes += txq_stats->bytes;
+ packets += txq_stats->packets;
+ dropped += txq_stats->dropped;
+ } while (u64_stats_fetch_retry(&txq_stats->syncp, start));
+ }
+ stats->tx_packets = packets;
+ stats->tx_bytes = bytes;
+ stats->tx_dropped = dropped;
+
+ bytes = 0;
+ packets = 0;
+ errors = 0;
+ dropped = 0;
+ for (i = 0; i < nic_dev->max_qps; i++) {
+ if (!nic_dev->rxqs)
+ break;
+
+ rxq = &nic_dev->rxqs[i];
+ rxq_stats = &rxq->rxq_stats;
+ do {
+ start = u64_stats_fetch_begin(&rxq_stats->syncp);
+ bytes += rxq_stats->bytes;
+ packets += rxq_stats->packets;
+ errors += rxq_stats->csum_errors +
+ rxq_stats->other_errors;
+ dropped += rxq_stats->dropped;
+ } while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
+ }
+ stats->rx_packets = packets;
+ stats->rx_bytes = bytes;
+ stats->rx_errors = errors;
+ stats->rx_dropped = dropped;
+
+#ifndef HAVE_VOID_NDO_GET_STATS64
+ return stats;
+#endif
+}
+
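+/* Tx watchdog: dump producer/consumer indexes of every stopped queue; a
+ * sw_pi/hw_ci mismatch means hardware stopped consuming descriptors, so
+ * recovery work is flagged via EVENT_WORK_TX_TIMEOUT.
+ */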
+#ifdef HAVE_NDO_TX_TIMEOUT_TXQ
+static void hinic3_tx_timeout(struct net_device *netdev, unsigned int txqueue)
+#else
+static void hinic3_tx_timeout(struct net_device *netdev)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_io_queue *sq = NULL;
+ bool hw_err = false;
+ u32 sw_pi, hw_ci;
+ u8 q_id;
+
+ HINIC3_NIC_STATS_INC(nic_dev, netdev_tx_timeout);
+ nicif_err(nic_dev, drv, netdev, "Tx timeout\n");
+
+ for (q_id = 0; q_id < nic_dev->q_params.num_qps; q_id++) {
+ if (!netif_xmit_stopped(netdev_get_tx_queue(netdev, q_id)))
+ continue;
+
+ sq = nic_dev->txqs[q_id].sq;
+ sw_pi = hinic3_get_sq_local_pi(sq);
+ hw_ci = hinic3_get_sq_hw_ci(sq);
+ nicif_info(nic_dev, drv, netdev,
+ "txq%u: sw_pi: %hu, hw_ci: %u, sw_ci: %u, napi->state: 0x%lx.\n",
+ q_id, sw_pi, hw_ci, hinic3_get_sq_local_ci(sq),
+ nic_dev->q_params.irq_cfg[q_id].napi.state);
+
+ if (sw_pi != hw_ci)
+ hw_err = true;
+ }
+
+ if (hw_err)
+ set_bit(EVENT_WORK_TX_TIMEOUT, &nic_dev->event_flag);
+}
+
+static int hinic3_change_mtu(struct net_device *netdev, int new_mtu)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u32 mtu = (u32)new_mtu;
+ int err = 0;
+
+#ifdef HAVE_XDP_SUPPORT
+ u32 xdp_max_mtu;
+
+ if (hinic3_is_xdp_enable(nic_dev)) {
+ xdp_max_mtu = hinic3_xdp_max_mtu(nic_dev);
+ if (mtu > xdp_max_mtu) {
+ nicif_err(nic_dev, drv, netdev,
+ "Max MTU for xdp usage is %d\n", xdp_max_mtu);
+ return -EINVAL;
+ }
+ }
+#endif
+
+ err = hinic3_config_port_mtu(nic_dev, mtu);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to change port mtu to %d\n",
+ new_mtu);
+ } else {
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Change mtu from %u to %d\n",
+ netdev->mtu, new_mtu);
+ netdev->mtu = mtu;
+ }
+
+ return err;
+}
+
+static int hinic3_set_mac_addr(struct net_device *netdev, void *addr)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct sockaddr *saddr = addr;
+ int err;
+
+ if (!is_valid_ether_addr(saddr->sa_data))
+ return -EADDRNOTAVAIL;
+
+ if (ether_addr_equal(netdev->dev_addr, saddr->sa_data)) {
+ nicif_info(nic_dev, drv, netdev,
+ "Already using mac address %pM\n",
+ saddr->sa_data);
+ return 0;
+ }
+
+ err = hinic3_config_port_mac(nic_dev, saddr);
+ if (err)
+ return err;
+
+ ether_addr_copy(netdev->dev_addr, saddr->sa_data);
+
+ nicif_info(nic_dev, drv, netdev, "Set new mac address %pM\n",
+ saddr->sa_data);
+
+ return 0;
+}
+
+#if (KERNEL_VERSION(3, 3, 0) > LINUX_VERSION_CODE)
+static void
+#else
+static int
+#endif
+hinic3_vlan_rx_add_vid(struct net_device *netdev,
+ #if (KERNEL_VERSION(3, 10, 0) <= LINUX_VERSION_CODE)
+ __always_unused __be16 proto,
+ #endif
+ u16 vid)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ unsigned long *vlan_bitmap = nic_dev->vlan_bitmap;
+ u16 func_id;
+ u32 col, line;
+ int err = 0;
+
+ /* VLAN 0 must not be added; it is treated the same as VLAN 0 deleted. */
+ if (vid == 0)
+ goto end;
+
+ col = VID_COL(nic_dev, vid);
+ line = VID_LINE(nic_dev, vid);
+
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+
+ err = hinic3_add_vlan(nic_dev->hwdev, vid, func_id);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to add vlan %u\n", vid);
+ goto end;
+ }
+
+ set_bit(col, &vlan_bitmap[line]);
+
+ nicif_info(nic_dev, drv, netdev, "Add vlan %u\n", vid);
+
+end:
+#if (KERNEL_VERSION(3, 3, 0) <= LINUX_VERSION_CODE)
+ return err;
+#else
+ return;
+#endif
+}
+
+#if (KERNEL_VERSION(3, 3, 0) > LINUX_VERSION_CODE)
+static void
+#else
+static int
+#endif
+hinic3_vlan_rx_kill_vid(struct net_device *netdev,
+ #if (KERNEL_VERSION(3, 10, 0) <= LINUX_VERSION_CODE)
+ __always_unused __be16 proto,
+ #endif
+ u16 vid)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ unsigned long *vlan_bitmap = nic_dev->vlan_bitmap;
+ u16 func_id;
+ int col, line;
+ int err = 0;
+
+ col = VID_COL(nic_dev, vid);
+ line = VID_LINE(nic_dev, vid);
+
+ /* In the broadcast scenario, ucode finds the corresponding function
+ * based on VLAN 0 of vlan table. If we delete VLAN 0, the VLAN function
+ * is affected.
+ */
+ if (vid == 0)
+ goto end;
+
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+ err = hinic3_del_vlan(nic_dev->hwdev, vid, func_id);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to delete vlan\n");
+ goto end;
+ }
+
+ clear_bit(col, &vlan_bitmap[line]);
+
+ nicif_info(nic_dev, drv, netdev, "Remove vlan %u\n", vid);
+
+end:
+#if (KERNEL_VERSION(3, 3, 0) <= LINUX_VERSION_CODE)
+ return err;
+#else
+ return;
+#endif
+}
+
+#ifdef NEED_VLAN_RESTORE
+static int hinic3_vlan_restore(struct net_device *netdev)
+{
+ int err = 0;
+#if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)
+ struct net_device *vlandev = NULL;
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ unsigned long *vlan_bitmap = nic_dev->vlan_bitmap;
+ u32 col, line;
+ u16 i;
+
+ if (!netdev->netdev_ops->ndo_vlan_rx_add_vid)
+ return -EFAULT;
+ rcu_read_lock();
+ for (i = 0; i < VLAN_N_VID; i++) {
+/* lint -e778 */
+#ifdef HAVE_VLAN_FIND_DEV_DEEP_RCU
+ vlandev =
+ __vlan_find_dev_deep_rcu(netdev, htons(ETH_P_8021Q), i);
+#else
+ vlandev = __vlan_find_dev_deep(netdev, htons(ETH_P_8021Q), i);
+#endif
+/* lint +e778 */
+ col = VID_COL(nic_dev, i);
+ line = VID_LINE(nic_dev, i);
+ if (!vlandev && (vlan_bitmap[line] & (1UL << col)) != 0) {
+#if (KERNEL_VERSION(3, 10, 0) <= LINUX_VERSION_CODE)
+ err = netdev->netdev_ops->ndo_vlan_rx_kill_vid(netdev,
+ htons(ETH_P_8021Q), i);
+ if (err) {
+ hinic3_err(nic_dev, drv, "delete vlan %u failed, err code %d\n",
+ i, err);
+ break;
+ }
+#else
+ netdev->netdev_ops->ndo_vlan_rx_kill_vid(netdev, i);
+#endif
+ } else if (vlandev && (vlan_bitmap[line] & (1UL << col)) == 0) {
+#if (KERNEL_VERSION(3, 10, 0) <= LINUX_VERSION_CODE)
+ err = netdev->netdev_ops->ndo_vlan_rx_add_vid(netdev,
+ htons(ETH_P_8021Q), i);
+ if (err) {
+ hinic3_err(nic_dev, drv, "restore vlan %u failed, err code %d\n",
+ i, err);
+ break;
+ }
+#else
+ netdev->netdev_ops->ndo_vlan_rx_add_vid(netdev, i);
+#endif
+ }
+ }
+ rcu_read_unlock();
+#endif
+
+ return err;
+}
+#endif
+
+#define SET_FEATURES_OP_STR(op) ((op) ? "Enable" : "Disable")
+
+static int set_feature_rx_csum(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t wanted_features,
+ netdev_features_t features,
+ netdev_features_t *failed_features)
+{
+ netdev_features_t changed = wanted_features ^ features;
+
+ if (changed & NETIF_F_RXCSUM)
+ hinic3_info(nic_dev, drv, "%s rx csum success\n",
+ SET_FEATURES_OP_STR(wanted_features &
+ NETIF_F_RXCSUM));
+
+ return 0;
+}
+
+static int set_feature_tso(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t wanted_features,
+ netdev_features_t features,
+ netdev_features_t *failed_features)
+{
+ netdev_features_t changed = wanted_features ^ features;
+
+ if (changed & NETIF_F_TSO)
+ hinic3_info(nic_dev, drv, "%s tso success\n",
+ SET_FEATURES_OP_STR(wanted_features & NETIF_F_TSO));
+
+ return 0;
+}
+
+#ifdef NETIF_F_UFO
+static int set_feature_ufo(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t wanted_features,
+ netdev_features_t features,
+ netdev_features_t *failed_features)
+{
+ netdev_features_t changed = wanted_features ^ features;
+
+ if (changed & NETIF_F_UFO)
+ hinic3_info(nic_dev, drv, "%s ufo success\n",
+ SET_FEATURES_OP_STR(wanted_features & NETIF_F_UFO));
+
+ return 0;
+}
+#endif
+
+static int set_feature_lro(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t wanted_features,
+ netdev_features_t features,
+ netdev_features_t *failed_features)
+{
+ netdev_features_t changed = wanted_features ^ features;
+ bool en = !!(wanted_features & NETIF_F_LRO);
+ int err;
+
+ if (!(changed & NETIF_F_LRO))
+ return 0;
+
+#ifdef HAVE_XDP_SUPPORT
+ if (en && hinic3_is_xdp_enable(nic_dev)) {
+ hinic3_err(nic_dev, drv, "Can not enable LRO when xdp is enable\n");
+ *failed_features |= NETIF_F_LRO;
+ return -EINVAL;
+ }
+#endif
+
+ err = hinic3_set_rx_lro_state(nic_dev->hwdev, en,
+ HINIC3_LRO_DEFAULT_TIME_LIMIT,
+ HINIC3_LRO_DEFAULT_COAL_PKT_SIZE);
+ if (err) {
+ hinic3_err(nic_dev, drv, "%s lro failed\n",
+ SET_FEATURES_OP_STR(en));
+ *failed_features |= NETIF_F_LRO;
+ } else {
+ hinic3_info(nic_dev, drv, "%s lro success\n",
+ SET_FEATURES_OP_STR(en));
+ }
+
+ return err;
+}
+
+static int set_feature_rx_cvlan(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t wanted_features,
+ netdev_features_t features,
+ netdev_features_t *failed_features)
+{
+ netdev_features_t changed = wanted_features ^ features;
+#ifdef NETIF_F_HW_VLAN_CTAG_RX
+ netdev_features_t vlan_feature = NETIF_F_HW_VLAN_CTAG_RX;
+#else
+ netdev_features_t vlan_feature = NETIF_F_HW_VLAN_RX;
+#endif
+ bool en = !!(wanted_features & vlan_feature);
+ int err;
+
+ if (!(changed & vlan_feature))
+ return 0;
+
+ err = hinic3_set_rx_vlan_offload(nic_dev->hwdev, en);
+ if (err) {
+ hinic3_err(nic_dev, drv, "%s rxvlan failed\n",
+ SET_FEATURES_OP_STR(en));
+ *failed_features |= vlan_feature;
+ } else {
+ hinic3_info(nic_dev, drv, "%s rxvlan success\n",
+ SET_FEATURES_OP_STR(en));
+ }
+
+ return err;
+}
+
+static int set_feature_vlan_filter(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t wanted_features,
+ netdev_features_t features,
+ netdev_features_t *failed_features)
+{
+ netdev_features_t changed = wanted_features ^ features;
+#if defined(NETIF_F_HW_VLAN_CTAG_FILTER)
+ netdev_features_t vlan_filter_feature = NETIF_F_HW_VLAN_CTAG_FILTER;
+#elif defined(NETIF_F_HW_VLAN_FILTER)
+ netdev_features_t vlan_filter_feature = NETIF_F_HW_VLAN_FILTER;
+#endif
+ bool en = !!(wanted_features & vlan_filter_feature);
+ int err = 0;
+
+ if (!(changed & vlan_filter_feature))
+ return 0;
+
+#ifdef NEED_VLAN_RESTORE
+ if (en)
+ err = hinic3_vlan_restore(nic_dev->netdev);
+#endif
+
+ if (err == 0)
+ err = hinic3_set_vlan_fliter(nic_dev->hwdev, en);
+ if (err) {
+ hinic3_err(nic_dev, drv, "%s rx vlan filter failed\n",
+ SET_FEATURES_OP_STR(en));
+ *failed_features |= vlan_filter_feature;
+ } else {
+ hinic3_info(nic_dev, drv, "%s rx vlan filter success\n",
+ SET_FEATURES_OP_STR(en));
+ }
+
+ return err;
+}
+
+static int set_features(struct hinic3_nic_dev *nic_dev,
+ netdev_features_t pre_features,
+ netdev_features_t features)
+{
+ netdev_features_t failed_features = 0;
+ u32 err = 0;
+
+ err |= (u32)set_feature_rx_csum(nic_dev, features, pre_features,
+ &failed_features);
+ err |= (u32)set_feature_tso(nic_dev, features, pre_features,
+ &failed_features);
+ err |= (u32)set_feature_lro(nic_dev, features, pre_features,
+ &failed_features);
+#ifdef NETIF_F_UFO
+ err |= (u32)set_feature_ufo(nic_dev, features, pre_features,
+ &failed_features);
+#endif
+ err |= (u32)set_feature_rx_cvlan(nic_dev, features, pre_features,
+ &failed_features);
+ err |= (u32)set_feature_vlan_filter(nic_dev, features, pre_features,
+ &failed_features);
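+	/* each handler ORs the bits it could not apply into failed_features,
+	 * so on failure the XOR below toggles exactly those bits back to
+	 * their previous state
+	 */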
+ if (err) {
+ nic_dev->netdev->features = features ^ failed_features;
+ return -EIO;
+ }
+
+ return 0;
+}
+
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+static int hinic3_set_features(struct net_device *netdev, u32 features)
+#else
+static int hinic3_set_features(struct net_device *netdev,
+ netdev_features_t features)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ return set_features(nic_dev, nic_dev->netdev->features,
+ features);
+}
+
+int hinic3_set_hw_features(struct hinic3_nic_dev *nic_dev)
+{
+ /* enable all hw features in netdev->features */
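+	/* the complement makes every feature bit look changed, which forces
+	 * every handler to run and program the hardware
+	 */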
+ return set_features(nic_dev, ~nic_dev->netdev->features,
+ nic_dev->netdev->features);
+}
+
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+static u32 hinic3_fix_features(struct net_device *netdev, u32 features)
+#else
+static netdev_features_t hinic3_fix_features(struct net_device *netdev,
+ netdev_features_t features)
+#endif
+{
+ netdev_features_t features_tmp = features;
+
+ /* If Rx checksum is disabled, then LRO should also be disabled */
+ if (!(features_tmp & NETIF_F_RXCSUM))
+ features_tmp &= ~NETIF_F_LRO;
+
+ return features_tmp;
+}
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+static void hinic3_netpoll(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u16 i;
+
+ for (i = 0; i < nic_dev->q_params.num_qps; i++)
+ napi_schedule(&nic_dev->q_params.irq_cfg[i].napi);
+}
+#endif /* CONFIG_NET_POLL_CONTROLLER */
+
+static int hinic3_ndo_set_vf_mac(struct net_device *netdev, int vf, u8 *mac)
+{
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+ int err;
+
+ if (is_multicast_ether_addr(mac) || /*lint !e574*/
+ vf >= pci_num_vf(adapter->pdev)) /*lint !e574*/
+ return -EINVAL;
+
+ err = hinic3_set_vf_mac(adapter->hwdev, OS_VF_ID_TO_HW(vf), mac);
+ if (err)
+ return err;
+
+ if (!is_zero_ether_addr(mac))
+ nic_info(&adapter->pdev->dev, "Setting MAC %pM on VF %d\n",
+ mac, vf);
+ else
+ nic_info(&adapter->pdev->dev, "Deleting MAC on VF %d\n", vf);
+
+	nic_info(&adapter->pdev->dev, "Please reload the VF driver to make this change effective.\n");
+
+ return 0;
+}
+
+/*lint -save -e574 -e734*/
+#ifdef IFLA_VF_MAX
+static int set_hw_vf_vlan(void *hwdev, u16 cur_vlanprio, int vf,
+ u16 vlan, u8 qos)
+{
+ int err = 0;
+ u16 old_vlan = cur_vlanprio & VLAN_VID_MASK;
+
+ if (vlan || qos) {
+ if (cur_vlanprio) {
+ err = hinic3_kill_vf_vlan(hwdev, OS_VF_ID_TO_HW(vf));
+ if (err)
+ return err;
+ }
+ err = hinic3_add_vf_vlan(hwdev, OS_VF_ID_TO_HW(vf), vlan, qos);
+	} else {
+		err = hinic3_kill_vf_vlan(hwdev, OS_VF_ID_TO_HW(vf));
+	}
+
+	if (err)
+		return err;
+
+	return hinic3_update_mac_vlan(hwdev, old_vlan, vlan, OS_VF_ID_TO_HW(vf));
+}
+
+#define HINIC3_MAX_VLAN_ID 4094
+#define HINIC3_MAX_QOS_NUM 7
+
+#ifdef IFLA_VF_VLAN_INFO_MAX
+static int hinic3_ndo_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan,
+ u8 qos, __be16 vlan_proto)
+#else
+static int hinic3_ndo_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan,
+ u8 qos)
+#endif
+{
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+ u16 vlanprio, cur_vlanprio;
+
+ if (vf >= pci_num_vf(adapter->pdev) ||
+ vlan > HINIC3_MAX_VLAN_ID || qos > HINIC3_MAX_QOS_NUM)
+ return -EINVAL;
+#ifdef IFLA_VF_VLAN_INFO_MAX
+ if (vlan_proto != htons(ETH_P_8021Q))
+ return -EPROTONOSUPPORT;
+#endif
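+	/* pack vlan id and priority into one word, e.g. vlan 100 with qos 3
+	 * gives 0x6064 assuming the standard 802.1p shift of 13 for
+	 * HINIC3_VLAN_PRIORITY_SHIFT
+	 */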
+ vlanprio = vlan | (qos << HINIC3_VLAN_PRIORITY_SHIFT);
+ cur_vlanprio = hinic3_vf_info_vlanprio(adapter->hwdev,
+ OS_VF_ID_TO_HW(vf));
+ /* duplicate request, so just return success */
+ if (vlanprio == cur_vlanprio)
+ return 0;
+
+ return set_hw_vf_vlan(adapter->hwdev, cur_vlanprio, vf, vlan, qos);
+}
+#endif
+
+#ifdef HAVE_VF_SPOOFCHK_CONFIGURE
+static int hinic3_ndo_set_vf_spoofchk(struct net_device *netdev, int vf,
+ bool setting)
+{
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+ int err = 0;
+ bool cur_spoofchk = false;
+
+ if (vf >= pci_num_vf(adapter->pdev))
+ return -EINVAL;
+
+ cur_spoofchk = hinic3_vf_info_spoofchk(adapter->hwdev,
+ OS_VF_ID_TO_HW(vf));
+ /* same request, so just return success */
+ if ((setting && cur_spoofchk) || (!setting && !cur_spoofchk))
+ return 0;
+
+ err = hinic3_set_vf_spoofchk(adapter->hwdev,
+ (u16)OS_VF_ID_TO_HW(vf), setting);
+ if (!err)
+ nicif_info(adapter, drv, netdev, "Set VF %d spoofchk %s\n",
+ vf, setting ? "on" : "off");
+
+ return err;
+}
+#endif
+
+#ifdef HAVE_NDO_SET_VF_TRUST
+static int hinic3_ndo_set_vf_trust(struct net_device *netdev, int vf, bool setting)
+{
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+ int err;
+ bool cur_trust;
+
+ if (vf >= pci_num_vf(adapter->pdev))
+ return -EINVAL;
+
+ cur_trust = hinic3_get_vf_trust(adapter->hwdev,
+ OS_VF_ID_TO_HW(vf));
+ /* same request, so just return success */
+ if ((setting && cur_trust) || (!setting && !cur_trust))
+ return 0;
+
+ err = hinic3_set_vf_trust(adapter->hwdev,
+ (u16)OS_VF_ID_TO_HW(vf), setting);
+ if (!err)
+ nicif_info(adapter, drv, netdev, "Set VF %d trusted %s successfully\n",
+ vf, setting ? "on" : "off");
+ else
+		nicif_err(adapter, drv, netdev, "Failed to set VF %d trusted %s\n",
+ vf, setting ? "on" : "off");
+
+ return err;
+}
+#endif
+
+static int hinic3_ndo_get_vf_config(struct net_device *netdev,
+ int vf, struct ifla_vf_info *ivi)
+{
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+
+ if (vf >= pci_num_vf(adapter->pdev))
+ return -EINVAL;
+
+ hinic3_get_vf_config(adapter->hwdev, (u16)OS_VF_ID_TO_HW(vf), ivi);
+
+ return 0;
+}
+
+/**
+ * hinic3_ndo_set_vf_link_state
+ * @netdev: network interface device structure
+ * @vf_id: VF identifier
+ * @link: required link state
+ *
+ * Set the link state of a specified VF, regardless of physical link state
+ **/
+int hinic3_ndo_set_vf_link_state(struct net_device *netdev, int vf_id, int link)
+{
+ static const char * const vf_link[] = {"auto", "enable", "disable"};
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+ int err;
+
+ /* validate the request */
+ if (vf_id >= pci_num_vf(adapter->pdev)) {
+ nicif_err(adapter, drv, netdev,
+ "Invalid VF Identifier %d\n", vf_id);
+ return -EINVAL;
+ }
+
+ err = hinic3_set_vf_link_state(adapter->hwdev,
+ (u16)OS_VF_ID_TO_HW(vf_id), link);
+ if (!err)
+ nicif_info(adapter, drv, netdev, "Set VF %d link state: %s\n",
+ vf_id, vf_link[link]);
+
+ return err;
+}
+
+static int is_set_vf_bw_param_valid(const struct hinic3_nic_dev *adapter,
+ int vf, int min_tx_rate, int max_tx_rate)
+{
+ if (!HINIC3_SUPPORT_RATE_LIMIT(adapter->hwdev)) {
+		nicif_err(adapter, drv, adapter->netdev, "Current function doesn't support setting VF rate limit\n");
+ return -EOPNOTSUPP;
+ }
+
+ /* verify VF is active */
+ if (vf >= pci_num_vf(adapter->pdev)) {
+ nicif_err(adapter, drv, adapter->netdev, "VF number must be less than %d\n",
+ pci_num_vf(adapter->pdev));
+ return -EINVAL;
+ }
+
+ if (max_tx_rate < min_tx_rate) {
+		nicif_err(adapter, drv, adapter->netdev, "Invalid rate, max rate %d must not be less than min rate %d\n",
+ max_tx_rate, min_tx_rate);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+#define HINIC3_TX_RATE_TABLE_FULL 12
+
+#ifdef HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
+static int hinic3_ndo_set_vf_bw(struct net_device *netdev,
+ int vf, int min_tx_rate, int max_tx_rate)
+#else
+static int hinic3_ndo_set_vf_bw(struct net_device *netdev, int vf,
+ int max_tx_rate)
+#endif /* HAVE_NDO_SET_VF_MIN_MAX_TX_RATE */
+{
+ struct hinic3_nic_dev *adapter = netdev_priv(netdev);
+ struct nic_port_info port_info = {0};
+#ifndef HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
+ int min_tx_rate = 0;
+#endif
+ u8 link_status = 0;
+ u32 speeds[] = {0, SPEED_10, SPEED_100, SPEED_1000, SPEED_10000,
+ SPEED_25000, SPEED_40000, SPEED_50000, SPEED_100000,
+ SPEED_200000};
+ int err = 0;
+
+ err = is_set_vf_bw_param_valid(adapter, vf, min_tx_rate, max_tx_rate);
+ if (err)
+ return err;
+
+ err = hinic3_get_link_state(adapter->hwdev, &link_status);
+ if (err) {
+ nicif_err(adapter, drv, netdev,
+ "Get link status failed when set vf tx rate\n");
+ return -EIO;
+ }
+
+ if (!link_status) {
+ nicif_err(adapter, drv, netdev,
+ "Link status must be up when set vf tx rate\n");
+ return -EINVAL;
+ }
+
+ err = hinic3_get_port_info(adapter->hwdev, &port_info,
+ HINIC3_CHANNEL_NIC);
+ if (err || port_info.speed >= PORT_SPEED_UNKNOWN)
+ return -EIO;
+
+	/* rate limit cannot be less than 0 or greater than link speed */
+ if (max_tx_rate < 0 || max_tx_rate > speeds[port_info.speed]) {
+ nicif_err(adapter, drv, netdev, "Set vf max tx rate must be in [0 - %u]\n",
+ speeds[port_info.speed]);
+ return -EINVAL;
+ }
+
+ err = hinic3_set_vf_tx_rate(adapter->hwdev, (u16)OS_VF_ID_TO_HW(vf),
+ (u32)max_tx_rate, (u32)min_tx_rate);
+ if (err) {
+ nicif_err(adapter, drv, netdev,
+ "Unable to set VF %d max rate %d min rate %d%s\n",
+ vf, max_tx_rate, min_tx_rate,
+ err == HINIC3_TX_RATE_TABLE_FULL ?
+ ", tx rate profile is full" : "");
+ return -EIO;
+ }
+
+#ifdef HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
+ nicif_info(adapter, drv, netdev,
+ "Set VF %d max tx rate %d min tx rate %d successfully\n",
+ vf, max_tx_rate, min_tx_rate);
+#else
+ nicif_info(adapter, drv, netdev,
+ "Set VF %d tx rate %d successfully\n",
+ vf, max_tx_rate);
+#endif
+
+ return 0;
+}
+
+#ifdef HAVE_XDP_SUPPORT
+bool hinic3_is_xdp_enable(struct hinic3_nic_dev *nic_dev)
+{
+ return !!nic_dev->xdp_prog;
+}
+
+int hinic3_xdp_max_mtu(struct hinic3_nic_dev *nic_dev)
+{
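+	/* an XDP frame must fit in a single RX buffer, so reserve room for
+	 * the Ethernet header, FCS and one VLAN tag
+	 */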
+ return nic_dev->rx_buff_len - (ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN);
+}
+
+static int hinic3_xdp_setup(struct hinic3_nic_dev *nic_dev,
+ struct bpf_prog *prog,
+ struct netlink_ext_ack *extack)
+{
+ struct bpf_prog *old_prog = NULL;
+ int max_mtu = hinic3_xdp_max_mtu(nic_dev);
+ int q_id;
+
+ if (nic_dev->netdev->mtu > max_mtu) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to setup xdp program, the current MTU %d is larger than max allowed MTU %d\n",
+ nic_dev->netdev->mtu, max_mtu);
+ NL_SET_ERR_MSG_MOD(extack,
+ "MTU too large for loading xdp program");
+ return -EINVAL;
+ }
+
+ if (nic_dev->netdev->features & NETIF_F_LRO) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to setup xdp program while LRO is on\n");
+ NL_SET_ERR_MSG_MOD(extack,
+				   "Failed to setup xdp program while LRO is on");
+ return -EINVAL;
+ }
+
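+	/* atomically publish the new program to the device and to every RX
+	 * queue before dropping the reference on the old one
+	 */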
+ old_prog = xchg(&nic_dev->xdp_prog, prog);
+ for (q_id = 0; q_id < nic_dev->max_qps; q_id++)
+ xchg(&nic_dev->rxqs[q_id].xdp_prog, nic_dev->xdp_prog);
+
+ if (old_prog)
+ bpf_prog_put(old_prog);
+
+ return 0;
+}
+
+#ifdef HAVE_NDO_BPF_NETDEV_BPF
+static int hinic3_xdp(struct net_device *netdev, struct netdev_bpf *xdp)
+#else
+static int hinic3_xdp(struct net_device *netdev, struct netdev_xdp *xdp)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ switch (xdp->command) {
+ case XDP_SETUP_PROG:
+ return hinic3_xdp_setup(nic_dev, xdp->prog, xdp->extack);
+#ifdef HAVE_XDP_QUERY_PROG
+ case XDP_QUERY_PROG:
+ xdp->prog_id = nic_dev->xdp_prog ?
+ nic_dev->xdp_prog->aux->id : 0;
+ return 0;
+#endif
+ default:
+ return -EINVAL;
+ }
+}
+#endif
+
+static const struct net_device_ops hinic3_netdev_ops = {
+ .ndo_open = hinic3_open,
+ .ndo_stop = hinic3_close,
+ .ndo_start_xmit = hinic3_xmit_frame,
+
+#ifdef HAVE_NDO_GET_STATS64
+ .ndo_get_stats64 = hinic3_get_stats64,
+#else
+ .ndo_get_stats = hinic3_get_stats,
+#endif /* HAVE_NDO_GET_STATS64 */
+
+ .ndo_tx_timeout = hinic3_tx_timeout,
+ .ndo_select_queue = hinic3_select_queue,
+#ifdef HAVE_RHEL7_NETDEV_OPS_EXT_NDO_CHANGE_MTU
+ .extended.ndo_change_mtu = hinic3_change_mtu,
+#else
+ .ndo_change_mtu = hinic3_change_mtu,
+#endif
+ .ndo_set_mac_address = hinic3_set_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+
+#if defined(NETIF_F_HW_VLAN_TX) || defined(NETIF_F_HW_VLAN_CTAG_TX)
+ .ndo_vlan_rx_add_vid = hinic3_vlan_rx_add_vid,
+ .ndo_vlan_rx_kill_vid = hinic3_vlan_rx_kill_vid,
+#endif
+
+#ifdef HAVE_RHEL7_NET_DEVICE_OPS_EXT
+ /* RHEL7 requires this to be defined to enable extended ops. RHEL7
+ * uses the function get_ndo_ext to retrieve offsets for extended
+	 * fields from within the net_device_ops struct and ndo_size is checked
+ * to determine whether or not the offset is valid.
+ */
+ .ndo_size = sizeof(const struct net_device_ops),
+#endif
+
+#ifdef IFLA_VF_MAX
+ .ndo_set_vf_mac = hinic3_ndo_set_vf_mac,
+#ifdef HAVE_RHEL7_NETDEV_OPS_EXT_NDO_SET_VF_VLAN
+ .extended.ndo_set_vf_vlan = hinic3_ndo_set_vf_vlan,
+#else
+ .ndo_set_vf_vlan = hinic3_ndo_set_vf_vlan,
+#endif
+#ifdef HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
+ .ndo_set_vf_rate = hinic3_ndo_set_vf_bw,
+#else
+ .ndo_set_vf_tx_rate = hinic3_ndo_set_vf_bw,
+#endif /* HAVE_NDO_SET_VF_MIN_MAX_TX_RATE */
+#ifdef HAVE_VF_SPOOFCHK_CONFIGURE
+ .ndo_set_vf_spoofchk = hinic3_ndo_set_vf_spoofchk,
+#endif
+
+#ifdef HAVE_NDO_SET_VF_TRUST
+#ifdef HAVE_RHEL7_NET_DEVICE_OPS_EXT
+ .extended.ndo_set_vf_trust = hinic3_ndo_set_vf_trust,
+#else
+ .ndo_set_vf_trust = hinic3_ndo_set_vf_trust,
+#endif /* HAVE_RHEL7_NET_DEVICE_OPS_EXT */
+#endif /* HAVE_NDO_SET_VF_TRUST */
+
+ .ndo_get_vf_config = hinic3_ndo_get_vf_config,
+#endif
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = hinic3_netpoll,
+#endif /* CONFIG_NET_POLL_CONTROLLER */
+
+ .ndo_set_rx_mode = hinic3_nic_set_rx_mode,
+
+#ifdef HAVE_XDP_SUPPORT
+#ifdef HAVE_NDO_BPF_NETDEV_BPF
+ .ndo_bpf = hinic3_xdp,
+#else
+ .ndo_xdp = hinic3_xdp,
+#endif
+#endif
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+};
+
+/* RHEL6 keeps these operations in a separate structure */
+static const struct net_device_ops_ext hinic3_netdev_ops_ext = {
+ .size = sizeof(struct net_device_ops_ext),
+#endif /* HAVE_RHEL6_NET_DEVICE_OPS_EXT */
+
+#ifdef HAVE_NDO_SET_VF_LINK_STATE
+ .ndo_set_vf_link_state = hinic3_ndo_set_vf_link_state,
+#endif
+
+#ifdef HAVE_NDO_SET_FEATURES
+ .ndo_fix_features = hinic3_fix_features,
+ .ndo_set_features = hinic3_set_features,
+#endif /* HAVE_NDO_SET_FEATURES */
+};
+
+static const struct net_device_ops hinic3vf_netdev_ops = {
+ .ndo_open = hinic3_open,
+ .ndo_stop = hinic3_close,
+ .ndo_start_xmit = hinic3_xmit_frame,
+
+#ifdef HAVE_NDO_GET_STATS64
+ .ndo_get_stats64 = hinic3_get_stats64,
+#else
+ .ndo_get_stats = hinic3_get_stats,
+#endif /* HAVE_NDO_GET_STATS64 */
+
+ .ndo_tx_timeout = hinic3_tx_timeout,
+ .ndo_select_queue = hinic3_select_queue,
+
+#ifdef HAVE_RHEL7_NET_DEVICE_OPS_EXT
+ /* RHEL7 requires this to be defined to enable extended ops. RHEL7
+ * uses the function get_ndo_ext to retrieve offsets for extended
+	 * fields from within the net_device_ops struct and ndo_size is checked
+ * to determine whether or not the offset is valid.
+ */
+ .ndo_size = sizeof(const struct net_device_ops),
+#endif
+
+#ifdef HAVE_RHEL7_NETDEV_OPS_EXT_NDO_CHANGE_MTU
+ .extended.ndo_change_mtu = hinic3_change_mtu,
+#else
+ .ndo_change_mtu = hinic3_change_mtu,
+#endif
+ .ndo_set_mac_address = hinic3_set_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+
+#if defined(NETIF_F_HW_VLAN_TX) || defined(NETIF_F_HW_VLAN_CTAG_TX)
+ .ndo_vlan_rx_add_vid = hinic3_vlan_rx_add_vid,
+ .ndo_vlan_rx_kill_vid = hinic3_vlan_rx_kill_vid,
+#endif
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = hinic3_netpoll,
+#endif /* CONFIG_NET_POLL_CONTROLLER */
+
+ .ndo_set_rx_mode = hinic3_nic_set_rx_mode,
+
+#ifdef HAVE_XDP_SUPPORT
+#ifdef HAVE_NDO_BPF_NETDEV_BPF
+ .ndo_bpf = hinic3_xdp,
+#else
+ .ndo_xdp = hinic3_xdp,
+#endif
+#endif
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+};
+
+/* RHEL6 keeps these operations in a separate structure */
+static const struct net_device_ops_ext hinic3vf_netdev_ops_ext = {
+ .size = sizeof(struct net_device_ops_ext),
+#endif /* HAVE_RHEL6_NET_DEVICE_OPS_EXT */
+
+#ifdef HAVE_NDO_SET_FEATURES
+ .ndo_fix_features = hinic3_fix_features,
+ .ndo_set_features = hinic3_set_features,
+#endif /* HAVE_NDO_SET_FEATURES */
+};
+
+void hinic3_set_netdev_ops(struct hinic3_nic_dev *nic_dev)
+{
+ if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
+ nic_dev->netdev->netdev_ops = &hinic3_netdev_ops;
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+ set_netdev_ops_ext(nic_dev->netdev, &hinic3_netdev_ops_ext);
+#endif /* HAVE_RHEL6_NET_DEVICE_OPS_EXT */
+ } else {
+ nic_dev->netdev->netdev_ops = &hinic3vf_netdev_ops;
+#ifdef HAVE_RHEL6_NET_DEVICE_OPS_EXT
+ set_netdev_ops_ext(nic_dev->netdev, &hinic3vf_netdev_ops_ext);
+#endif /* HAVE_RHEL6_NET_DEVICE_OPS_EXT */
+ }
+}
+
+bool hinic3_is_netdev_ops_match(const struct net_device *netdev)
+{
+ return netdev->netdev_ops == &hinic3_netdev_ops ||
+ netdev->netdev_ops == &hinic3vf_netdev_ops;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic.h
new file mode 100644
index 000000000000..69cacbae3b57
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic.h
@@ -0,0 +1,183 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_H
+#define HINIC3_NIC_H
+
+#include <linux/types.h>
+#include <linux/semaphore.h>
+
+#include "hinic3_common.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_cfg.h"
+#include "mag_cmd.h"
+
+/* ************************ array index define ********************* */
+#define ARRAY_INDEX_0 0
+#define ARRAY_INDEX_1 1
+#define ARRAY_INDEX_2 2
+#define ARRAY_INDEX_3 3
+#define ARRAY_INDEX_4 4
+#define ARRAY_INDEX_5 5
+#define ARRAY_INDEX_6 6
+#define ARRAY_INDEX_7 7
+
+struct hinic3_sq_attr {
+ u8 dma_attr_off;
+ u8 pending_limit;
+ u8 coalescing_time;
+ u8 intr_en;
+ u16 intr_idx;
+ u32 l2nic_sqn;
+ u64 ci_dma_base;
+};
+
+struct vf_data_storage {
+ u8 drv_mac_addr[ETH_ALEN];
+ u8 user_mac_addr[ETH_ALEN];
+ bool registered;
+ bool use_specified_mac;
+ u16 pf_vlan;
+ u8 pf_qos;
+ u8 rsvd2;
+ u32 max_rate;
+ u32 min_rate;
+
+ bool link_forced;
+ bool link_up; /* only valid if VF link is forced */
+ bool spoofchk;
+ bool trust;
+ u16 num_qps;
+ u32 support_extra_feature;
+};
+
+struct hinic3_port_routine_cmd {
+ bool mpu_send_sfp_info;
+ bool mpu_send_sfp_abs;
+
+ struct mag_cmd_get_xsfp_info std_sfp_info;
+ struct mag_cmd_get_xsfp_present abs;
+};
+
+struct hinic3_nic_cfg {
+ struct semaphore cfg_lock;
+
+	/* Valid when pfc is disabled */
+ bool pause_set;
+ struct nic_pause_config nic_pause;
+
+ u8 pfc_en;
+ u8 pfc_bitmap;
+
+ struct nic_port_info port_info;
+
+ /* percentage of pf link bandwidth */
+ u32 pf_bw_limit;
+ u32 rsvd2;
+
+ struct hinic3_port_routine_cmd rt_cmd;
+	struct mutex sfp_mutex; /* mutex used for copying sfp info */
+};
+
+struct hinic3_nic_io {
+ void *hwdev;
+ void *pcidev_hdl;
+ void *dev_hdl;
+
+ u8 link_status;
+ u8 rsvd1;
+ u32 rsvd2;
+
+ struct hinic3_io_queue *sq;
+ struct hinic3_io_queue *rq;
+
+ u16 num_qps;
+ u16 max_qps;
+
+ void *ci_vaddr_base;
+ dma_addr_t ci_dma_base;
+
+ u8 __iomem *sqs_db_addr;
+ u8 __iomem *rqs_db_addr;
+
+ u16 max_vfs;
+ u16 rsvd3;
+ u32 rsvd4;
+
+ struct vf_data_storage *vf_infos;
+ struct hinic3_dcb_state dcb_state;
+ struct hinic3_nic_cfg nic_cfg;
+
+ u16 rx_buff_len;
+ u16 rsvd5;
+ u32 rsvd6;
+ u64 feature_cap;
+ u64 rsvd7;
+};
+
+struct vf_msg_handler {
+ u16 cmd;
+ int (*handler)(struct hinic3_nic_io *nic_io, u16 vf,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+};
+
+struct nic_event_handler {
+ u16 cmd;
+ void (*handler)(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+};
+
+int hinic3_set_ci_table(void *hwdev, struct hinic3_sq_attr *attr);
+
+int l2nic_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+int l2nic_msg_to_mgmt_sync_ch(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u16 channel);
+
+int hinic3_cfg_vf_vlan(struct hinic3_nic_io *nic_io, u8 opcode, u16 vid,
+ u8 qos, int vf_id);
+
+int hinic3_vf_event_handler(void *hwdev,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+void hinic3_pf_event_handler(void *hwdev, u16 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+int hinic3_pf_mbox_handler(void *hwdev,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+u8 hinic3_nic_sw_aeqe_handler(void *hwdev, u8 event, u8 *data);
+
+int hinic3_vf_func_init(struct hinic3_nic_io *nic_io);
+
+void hinic3_vf_func_free(struct hinic3_nic_io *nic_io);
+
+void hinic3_notify_dcb_state_event(struct hinic3_nic_io *nic_io,
+ struct hinic3_dcb_state *dcb_state);
+
+int hinic3_save_dcb_state(struct hinic3_nic_io *nic_io,
+ struct hinic3_dcb_state *dcb_state);
+
+void hinic3_notify_vf_link_status(struct hinic3_nic_io *nic_io,
+ u16 vf_id, u8 link_status);
+
+int hinic3_vf_mag_event_handler(void *hwdev, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size);
+
+void hinic3_pf_mag_event_handler(void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size);
+
+int hinic3_pf_mag_mbox_handler(void *hwdev, u16 vf_id,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+void hinic3_unregister_vf(struct hinic3_nic_io *nic_io, u16 vf_id);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.c b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.c
new file mode 100644
index 000000000000..2c1b5658b458
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.c
@@ -0,0 +1,1608 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic.h"
+#include "hinic3_nic_cmd.h"
+#include "hinic3_common.h"
+#include "hinic3_nic_cfg.h"
+
+int hinic3_set_ci_table(void *hwdev, struct hinic3_sq_attr *attr)
+{
+ struct hinic3_cmd_cons_idx_attr cons_idx_attr;
+ u16 out_size = sizeof(cons_idx_attr);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !attr)
+ return -EINVAL;
+
+ memset(&cons_idx_attr, 0, sizeof(cons_idx_attr));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ cons_idx_attr.func_idx = hinic3_global_func_id(hwdev);
+
+ cons_idx_attr.dma_attr_off = attr->dma_attr_off;
+ cons_idx_attr.pending_limit = attr->pending_limit;
+ cons_idx_attr.coalescing_time = attr->coalescing_time;
+
+ if (attr->intr_en) {
+ cons_idx_attr.intr_en = attr->intr_en;
+ cons_idx_attr.intr_idx = attr->intr_idx;
+ }
+
+ cons_idx_attr.l2nic_sqn = attr->l2nic_sqn;
+ cons_idx_attr.ci_addr = attr->ci_dma_base;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SQ_CI_ATTR_SET,
+ &cons_idx_attr, sizeof(cons_idx_attr),
+ &cons_idx_attr, &out_size);
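+	/* the command fails if the mailbox call errors, the response is
+	 * truncated, or the management CPU reports a non-zero status
+	 */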
+ if (err || !out_size || cons_idx_attr.msg_head.status) {
+ sdk_err(nic_io->dev_hdl,
+ "Failed to set ci attribute table, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, cons_idx_attr.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+#define PF_SET_VF_MAC(hwdev, status) \
+ (hinic3_func_type(hwdev) == TYPE_VF && \
+ (status) == HINIC3_PF_SET_VF_ALREADY)
+
+static int hinic3_check_mac_info(void *hwdev, u8 status, u16 vlan_id)
+{
+ if ((status && status != HINIC3_MGMT_STATUS_EXIST) ||
+ ((vlan_id & CHECK_IPSU_15BIT) &&
+ status == HINIC3_MGMT_STATUS_EXIST)) {
+ if (PF_SET_VF_MAC(hwdev, status))
+ return 0;
+
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+#define HINIC_VLAN_ID_MASK 0x7FFF
+
+int hinic3_set_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id,
+ u16 channel)
+{
+ struct hinic3_port_mac_set mac_info;
+ u16 out_size = sizeof(mac_info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !mac_addr)
+ return -EINVAL;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if ((vlan_id & HINIC_VLAN_ID_MASK) >= VLAN_N_VID) {
+ nic_err(nic_io->dev_hdl, "Invalid VLAN number: %d\n",
+ (vlan_id & HINIC_VLAN_ID_MASK));
+ return -EINVAL;
+ }
+
+ mac_info.func_id = func_id;
+ mac_info.vlan_id = vlan_id;
+ ether_addr_copy(mac_info.mac, mac_addr);
+
+ err = l2nic_msg_to_mgmt_sync_ch(hwdev, HINIC3_NIC_CMD_SET_MAC,
+ &mac_info, sizeof(mac_info),
+ &mac_info, &out_size, channel);
+ if (err || !out_size ||
+ hinic3_check_mac_info(hwdev, mac_info.msg_head.status,
+ mac_info.vlan_id)) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to update MAC, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, mac_info.msg_head.status, out_size, channel);
+ return -EIO;
+ }
+
+ if (PF_SET_VF_MAC(hwdev, mac_info.msg_head.status)) {
+		nic_warn(nic_io->dev_hdl, "PF has already set VF MAC. Ignore set operation\n");
+ return HINIC3_PF_SET_VF_ALREADY;
+ }
+
+ if (mac_info.msg_head.status == HINIC3_MGMT_STATUS_EXIST) {
+ nic_warn(nic_io->dev_hdl, "MAC is repeated. Ignore update operation\n");
+ return 0;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_mac);
+
+int hinic3_del_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id,
+ u16 channel)
+{
+ struct hinic3_port_mac_set mac_info;
+ u16 out_size = sizeof(mac_info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !mac_addr)
+ return -EINVAL;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ if ((vlan_id & HINIC_VLAN_ID_MASK) >= VLAN_N_VID) {
+ nic_err(nic_io->dev_hdl, "Invalid VLAN number: %d\n",
+ (vlan_id & HINIC_VLAN_ID_MASK));
+ return -EINVAL;
+ }
+
+ mac_info.func_id = func_id;
+ mac_info.vlan_id = vlan_id;
+ ether_addr_copy(mac_info.mac, mac_addr);
+
+ err = l2nic_msg_to_mgmt_sync_ch(hwdev, HINIC3_NIC_CMD_DEL_MAC,
+ &mac_info, sizeof(mac_info), &mac_info,
+ &out_size, channel);
+ if (err || !out_size ||
+ (mac_info.msg_head.status && !PF_SET_VF_MAC(hwdev, mac_info.msg_head.status))) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to delete MAC, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, mac_info.msg_head.status, out_size, channel);
+ return -EIO;
+ }
+
+ if (PF_SET_VF_MAC(hwdev, mac_info.msg_head.status)) {
+		nic_warn(nic_io->dev_hdl, "PF has already set VF MAC. Ignore delete operation\n");
+ return HINIC3_PF_SET_VF_ALREADY;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_del_mac);
+
+int hinic3_update_mac(void *hwdev, u8 *old_mac, u8 *new_mac, u16 vlan_id,
+ u16 func_id)
+{
+ struct hinic3_port_mac_update mac_info;
+ u16 out_size = sizeof(mac_info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !old_mac || !new_mac)
+ return -EINVAL;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ if ((vlan_id & HINIC_VLAN_ID_MASK) >= VLAN_N_VID) {
+ nic_err(nic_io->dev_hdl, "Invalid VLAN number: %d\n",
+ (vlan_id & HINIC_VLAN_ID_MASK));
+ return -EINVAL;
+ }
+
+ mac_info.func_id = func_id;
+ mac_info.vlan_id = vlan_id;
+ ether_addr_copy(mac_info.old_mac, old_mac);
+ ether_addr_copy(mac_info.new_mac, new_mac);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_UPDATE_MAC,
+ &mac_info, sizeof(mac_info),
+ &mac_info, &out_size);
+ if (err || !out_size ||
+ hinic3_check_mac_info(hwdev, mac_info.msg_head.status,
+ mac_info.vlan_id)) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to update MAC, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, mac_info.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ if (PF_SET_VF_MAC(hwdev, mac_info.msg_head.status)) {
+ nic_warn(nic_io->dev_hdl, "PF has already set VF MAC. Ignore update operation\n");
+ return HINIC3_PF_SET_VF_ALREADY;
+ }
+
+ if (mac_info.msg_head.status == HINIC3_MGMT_STATUS_EXIST) {
+ nic_warn(nic_io->dev_hdl, "MAC is repeated. Ignore update operation\n");
+ return 0;
+ }
+
+ return 0;
+}
+
+int hinic3_get_default_mac(void *hwdev, u8 *mac_addr)
+{
+ struct hinic3_port_mac_set mac_info;
+ u16 out_size = sizeof(mac_info);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !mac_addr)
+ return -EINVAL;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ mac_info.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_GET_MAC,
+ &mac_info, sizeof(mac_info),
+ &mac_info, &out_size);
+ if (err || !out_size || mac_info.msg_head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get mac, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, mac_info.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ ether_addr_copy(mac_addr, mac_info.mac);
+
+ return 0;
+}
+
+static int hinic3_config_vlan(struct hinic3_nic_io *nic_io, u8 opcode,
+ u16 vlan_id, u16 func_id)
+{
+ struct hinic3_cmd_vlan_config vlan_info;
+ u16 out_size = sizeof(vlan_info);
+ int err;
+
+ memset(&vlan_info, 0, sizeof(vlan_info));
+ vlan_info.opcode = opcode;
+ vlan_info.func_id = func_id;
+ vlan_info.vlan_id = vlan_id;
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_CFG_FUNC_VLAN,
+ &vlan_info, sizeof(vlan_info),
+ &vlan_info, &out_size);
+ if (err || !out_size || vlan_info.msg_head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to %s vlan, err: %d, status: 0x%x, out size: 0x%x\n",
+ opcode == HINIC3_CMD_OP_ADD ? "add" : "delete",
+ err, vlan_info.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_add_vlan(void *hwdev, u16 vlan_id, u16 func_id)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ return hinic3_config_vlan(nic_io, HINIC3_CMD_OP_ADD, vlan_id, func_id);
+}
+
+int hinic3_del_vlan(void *hwdev, u16 vlan_id, u16 func_id)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ return hinic3_config_vlan(nic_io, HINIC3_CMD_OP_DEL, vlan_id, func_id);
+}
+
+int hinic3_set_vport_enable(void *hwdev, u16 func_id, bool enable, u16 channel)
+{
+ struct hinic3_vport_state en_state;
+ u16 out_size = sizeof(en_state);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&en_state, 0, sizeof(en_state));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ en_state.func_id = func_id;
+ en_state.state = enable ? 1 : 0;
+
+ err = l2nic_msg_to_mgmt_sync_ch(hwdev, HINIC3_NIC_CMD_SET_VPORT_ENABLE,
+ &en_state, sizeof(en_state),
+ &en_state, &out_size, channel);
+ if (err || !out_size || en_state.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set vport state, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, en_state.msg_head.status, out_size, channel);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(hinic3_set_vport_enable);
+
+int hinic3_set_dcb_state(void *hwdev, struct hinic3_dcb_state *dcb_state)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !dcb_state)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!memcmp(&nic_io->dcb_state, dcb_state, sizeof(nic_io->dcb_state)))
+ return 0;
+
+ /* save in sdk, vf will get dcb state when probing */
+ hinic3_save_dcb_state(nic_io, dcb_state);
+
+	/* notify stateful module in pf, then notify all vfs */
+ hinic3_notify_dcb_state_event(nic_io, dcb_state);
+
+ return 0;
+}
+
+int hinic3_get_dcb_state(void *hwdev, struct hinic3_dcb_state *dcb_state)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !dcb_state)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memcpy(dcb_state, &nic_io->dcb_state, sizeof(*dcb_state));
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_dcb_state);
+
+int hinic3_get_cos_by_pri(void *hwdev, u8 pri, u8 *cos)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !cos)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (pri >= NIC_DCB_UP_MAX && nic_io->dcb_state.trust == HINIC3_DCB_PCP)
+ return -EINVAL;
+
+ if (pri >= NIC_DCB_IP_PRI_MAX && nic_io->dcb_state.trust == HINIC3_DCB_DSCP)
+ return -EINVAL;
+
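+	/* PCP trust indexes pcp2cos, DSCP trust indexes dscp2cos; both fall
+	 * back to default_cos when DCB is off
+	 */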
+/*lint -e662*/
+/*lint -e661*/
+ if (nic_io->dcb_state.dcb_on) {
+ if (nic_io->dcb_state.trust == HINIC3_DCB_PCP)
+ *cos = nic_io->dcb_state.pcp2cos[pri];
+ else
+ *cos = nic_io->dcb_state.dscp2cos[pri];
+ } else {
+ *cos = nic_io->dcb_state.default_cos;
+ }
+/*lint +e662*/
+/*lint +e661*/
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_cos_by_pri);
+
+int hinic3_save_dcb_state(struct hinic3_nic_io *nic_io,
+ struct hinic3_dcb_state *dcb_state)
+{
+ memcpy(&nic_io->dcb_state, dcb_state, sizeof(*dcb_state));
+
+ return 0;
+}
+
+int hinic3_get_pf_dcb_state(void *hwdev, struct hinic3_dcb_state *dcb_state)
+{
+ struct hinic3_cmd_vf_dcb_state vf_dcb;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 out_size = sizeof(vf_dcb);
+ int err;
+
+ if (!hwdev || !dcb_state)
+ return -EINVAL;
+
+ memset(&vf_dcb, 0, sizeof(vf_dcb));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) != TYPE_VF) {
+		nic_err(nic_io->dev_hdl, "Only VF needs to get PF dcb state\n");
+ return -EINVAL;
+ }
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_VF_COS, &vf_dcb,
+ sizeof(vf_dcb), &vf_dcb, &out_size);
+ if (err || !out_size || vf_dcb.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to get vf default cos, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, vf_dcb.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ memcpy(dcb_state, &vf_dcb.state, sizeof(*dcb_state));
+ /* Save dcb_state in hw for stateful module */
+ hinic3_save_dcb_state(nic_io, dcb_state);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_pf_dcb_state);
+
+#define UNSUPPORT_SET_PAUSE 0x10
+static int hinic3_cfg_hw_pause(struct hinic3_nic_io *nic_io, u8 opcode,
+ struct nic_pause_config *nic_pause)
+{
+ struct hinic3_cmd_pause_config pause_info;
+ u16 out_size = sizeof(pause_info);
+ int err;
+
+ memset(&pause_info, 0, sizeof(pause_info));
+
+ pause_info.port_id = hinic3_physical_port_id(nic_io->hwdev);
+ pause_info.opcode = opcode;
+ if (opcode == HINIC3_CMD_OP_SET) {
+ pause_info.auto_neg = nic_pause->auto_neg;
+ pause_info.rx_pause = nic_pause->rx_pause;
+ pause_info.tx_pause = nic_pause->tx_pause;
+ }
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_CFG_PAUSE_INFO,
+ &pause_info, sizeof(pause_info),
+ &pause_info, &out_size);
+ if (err || !out_size || pause_info.msg_head.status) {
+ if (pause_info.msg_head.status == UNSUPPORT_SET_PAUSE) {
+ err = -EOPNOTSUPP;
+			nic_err(nic_io->dev_hdl, "Cannot set pause when pfc is enabled\n");
+ } else {
+ err = -EFAULT;
+ nic_err(nic_io->dev_hdl, "Failed to %s pause info, err: %d, status: 0x%x, out size: 0x%x\n",
+ opcode == HINIC3_CMD_OP_SET ? "set" : "get",
+ err, pause_info.msg_head.status, out_size);
+ }
+ return err;
+ }
+
+ if (opcode == HINIC3_CMD_OP_GET) {
+ nic_pause->auto_neg = pause_info.auto_neg;
+ nic_pause->rx_pause = pause_info.rx_pause;
+ nic_pause->tx_pause = pause_info.tx_pause;
+ }
+
+ return 0;
+}
+
+int hinic3_set_pause_info(void *hwdev, struct nic_pause_config nic_pause)
+{
+ struct hinic3_nic_cfg *nic_cfg = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ nic_cfg = &nic_io->nic_cfg;
+
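+	/* cfg_lock serializes pause/pfc updates so the cached state stays
+	 * consistent with what is written to hardware
+	 */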
+ down(&nic_cfg->cfg_lock);
+
+ err = hinic3_cfg_hw_pause(nic_io, HINIC3_CMD_OP_SET, &nic_pause);
+ if (err) {
+ up(&nic_cfg->cfg_lock);
+ return err;
+ }
+
+ nic_cfg->pfc_en = 0;
+ nic_cfg->pfc_bitmap = 0;
+ nic_cfg->pause_set = true;
+ nic_cfg->nic_pause.auto_neg = nic_pause.auto_neg;
+ nic_cfg->nic_pause.rx_pause = nic_pause.rx_pause;
+ nic_cfg->nic_pause.tx_pause = nic_pause.tx_pause;
+
+ up(&nic_cfg->cfg_lock);
+
+ return 0;
+}
+
+int hinic3_get_pause_info(void *hwdev, struct nic_pause_config *nic_pause)
+{
+ struct hinic3_nic_cfg *nic_cfg = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ int err = 0;
+
+ if (!hwdev || !nic_pause)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ nic_cfg = &nic_io->nic_cfg;
+
+ err = hinic3_cfg_hw_pause(nic_io, HINIC3_CMD_OP_GET, nic_pause);
+ if (err)
+ return err;
+
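+	/* when pause was set manually or autoneg is off, report the cached
+	 * values rather than what the firmware returned
+	 */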
+ if (nic_cfg->pause_set || !nic_pause->auto_neg) {
+ nic_pause->rx_pause = nic_cfg->nic_pause.rx_pause;
+ nic_pause->tx_pause = nic_cfg->nic_pause.tx_pause;
+ }
+
+ return 0;
+}
+
+int hinic3_sync_dcb_state(void *hwdev, u8 op_code, u8 state)
+{
+ struct hinic3_cmd_set_dcb_state dcb_state;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 out_size = sizeof(dcb_state);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ memset(&dcb_state, 0, sizeof(dcb_state));
+
+ dcb_state.op_code = op_code;
+ dcb_state.state = state;
+ dcb_state.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_QOS_DCB_STATE,
+ &dcb_state, sizeof(dcb_state), &dcb_state, &out_size);
+ if (err || dcb_state.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to set dcb state, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, dcb_state.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_dcb_set_rq_iq_mapping(void *hwdev, u32 num_rqs, u8 *map,
+ u32 max_map_num)
+{
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_dcb_set_rq_iq_mapping);
+
+int hinic3_flush_qps_res(void *hwdev)
+{
+ struct hinic3_cmd_clear_qp_resource sq_res;
+ u16 out_size = sizeof(sq_res);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&sq_res, 0, sizeof(sq_res));
+
+ sq_res.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CLEAR_QP_RESOURCE,
+ &sq_res, sizeof(sq_res), &sq_res,
+ &out_size);
+ if (err || !out_size || sq_res.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to clear sq resources, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, sq_res.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_flush_qps_res);
+
+int hinic3_cache_out_qps_res(void *hwdev)
+{
+ struct hinic3_cmd_cache_out_qp_resource qp_res;
+ u16 out_size = sizeof(qp_res);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ memset(&qp_res, 0, sizeof(qp_res));
+
+ qp_res.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CACHE_OUT_QP_RES,
+ &qp_res, sizeof(qp_res), &qp_res, &out_size);
+ if (err || !out_size || qp_res.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to cache out qp resources, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, qp_res.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_get_fpga_phy_port_stats(void *hwdev, struct hinic3_phy_fpga_port_stats *stats)
+{
+ struct hinic3_port_stats *port_stats = NULL;
+ struct hinic3_port_stats_info stats_info;
+ u16 out_size = sizeof(*port_stats);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+	nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+	if (!nic_io)
+		return -EINVAL;
+
+	port_stats = kzalloc(sizeof(*port_stats), GFP_KERNEL);
+	if (!port_stats)
+		return -ENOMEM;
+
+ memset(&stats_info, 0, sizeof(stats_info));
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_GET_PORT_STAT,
+ &stats_info, sizeof(stats_info),
+ port_stats, &out_size);
+ if (err || !out_size || port_stats->msg_head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get port statistics, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, port_stats->msg_head.status, out_size);
+ err = -EIO;
+ goto out;
+ }
+
+ memcpy(stats, &port_stats->stats, sizeof(*stats));
+
+out:
+ kfree(port_stats);
+
+ return err;
+}
+EXPORT_SYMBOL(hinic3_get_fpga_phy_port_stats);
+
+int hinic3_get_vport_stats(void *hwdev, u16 func_id, struct hinic3_vport_stats *stats)
+{
+ struct hinic3_port_stats_info stats_info;
+ struct hinic3_cmd_vport_stats vport_stats;
+ u16 out_size = sizeof(vport_stats);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !stats)
+ return -EINVAL;
+
+ memset(&stats_info, 0, sizeof(stats_info));
+ memset(&vport_stats, 0, sizeof(vport_stats));
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ stats_info.func_id = func_id;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_GET_VPORT_STAT,
+ &stats_info, sizeof(stats_info),
+ &vport_stats, &out_size);
+ if (err || !out_size || vport_stats.msg_head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get function statistics, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, vport_stats.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ memcpy(stats, &vport_stats.stats, sizeof(*stats));
+
+ return 0;
+}
+
+static int hinic3_set_function_table(struct hinic3_nic_io *nic_io, u32 cfg_bitmap,
+ const struct hinic3_func_tbl_cfg *cfg)
+{
+ struct hinic3_cmd_set_func_tbl cmd_func_tbl;
+ u16 out_size = sizeof(cmd_func_tbl);
+ int err;
+
+ memset(&cmd_func_tbl, 0, sizeof(cmd_func_tbl));
+ cmd_func_tbl.func_id = hinic3_global_func_id(nic_io->hwdev);
+ cmd_func_tbl.cfg_bitmap = cfg_bitmap;
+ cmd_func_tbl.tbl_cfg = *cfg;
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_SET_FUNC_TBL,
+ &cmd_func_tbl, sizeof(cmd_func_tbl),
+ &cmd_func_tbl, &out_size);
+ if (err || cmd_func_tbl.msg_head.status || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to set func table, bitmap: 0x%x, err: %d, status: 0x%x, out size: 0x%x\n",
+ cfg_bitmap, err, cmd_func_tbl.msg_head.status,
+ out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int hinic3_init_function_table(struct hinic3_nic_io *nic_io)
+{
+ struct hinic3_func_tbl_cfg func_tbl_cfg = {0};
+ u32 cfg_bitmap = BIT(FUNC_CFG_INIT) | BIT(FUNC_CFG_MTU) |
+ BIT(FUNC_CFG_RX_BUF_SIZE);
+
+ func_tbl_cfg.mtu = 0x3FFF; /* default, max mtu */
+ func_tbl_cfg.rx_wqe_buf_size = nic_io->rx_buff_len;
+
+ return hinic3_set_function_table(nic_io, cfg_bitmap, &func_tbl_cfg);
+}
+
+int hinic3_set_port_mtu(void *hwdev, u16 new_mtu)
+{
+ struct hinic3_func_tbl_cfg func_tbl_cfg = {0};
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ if (new_mtu < HINIC3_MIN_MTU_SIZE) {
+ nic_err(nic_io->dev_hdl,
+ "Invalid mtu size: %ubytes, mtu size < %ubytes",
+ new_mtu, HINIC3_MIN_MTU_SIZE);
+ return -EINVAL;
+ }
+
+ if (new_mtu > HINIC3_MAX_JUMBO_FRAME_SIZE) {
+ nic_err(nic_io->dev_hdl, "Invalid mtu size: %ubytes, mtu size > %ubytes",
+ new_mtu, HINIC3_MAX_JUMBO_FRAME_SIZE);
+ return -EINVAL;
+ }
+
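+	/* only the MTU bit is set in the bitmap, so the firmware ignores the
+	 * other fields of func_tbl_cfg
+	 */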
+ func_tbl_cfg.mtu = new_mtu;
+ return hinic3_set_function_table(nic_io, BIT(FUNC_CFG_MTU),
+ &func_tbl_cfg);
+}
+
+static int nic_feature_nego(void *hwdev, u8 opcode, u64 *s_feature, u16 size)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_cmd_feature_nego feature_nego;
+ u16 out_size = sizeof(feature_nego);
+ int err;
+
+ if (!hwdev || !s_feature || size > NIC_MAX_FEATURE_QWORD)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ memset(&feature_nego, 0, sizeof(feature_nego));
+ feature_nego.func_id = hinic3_global_func_id(hwdev);
+ feature_nego.opcode = opcode;
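+	/* SET pushes the driver's feature words to the firmware; GET copies
+	 * the negotiated words back out after the call
+	 */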
+ if (opcode == HINIC3_CMD_OP_SET)
+ memcpy(feature_nego.s_feature, s_feature, size * sizeof(u64));
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_FEATURE_NEGO,
+ &feature_nego, sizeof(feature_nego),
+ &feature_nego, &out_size);
+ if (err || !out_size || feature_nego.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to negotiate nic feature, err:%d, status: 0x%x, out_size: 0x%x\n",
+ err, feature_nego.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ if (opcode == HINIC3_CMD_OP_GET)
+ memcpy(s_feature, feature_nego.s_feature, size * sizeof(u64));
+
+ return 0;
+}
+
+static int hinic3_get_bios_pf_bw_limit(void *hwdev, u32 *pf_bw_limit)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct nic_cmd_bios_cfg cfg = {{0}};
+ u16 out_size = sizeof(cfg);
+ int err;
+
+ if (!hwdev || !pf_bw_limit)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF || !HINIC3_SUPPORT_RATE_LIMIT(hwdev))
+ return 0;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ cfg.bios_cfg.func_id = (u8)hinic3_global_func_id(hwdev);
+ cfg.bios_cfg.func_valid = 1;
+	cfg.op_code = NIC_NVM_DATA_PF_SPEED_LIMIT;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_BIOS_CFG, &cfg, sizeof(cfg),
+ &cfg, &out_size);
+ if (err || !out_size || cfg.head.status) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to get bios pf bandwidth limit, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, cfg.head.status, out_size);
+ return -EIO;
+ }
+
+	/* check whether the data is valid */
+ if (cfg.bios_cfg.signature != BIOS_CFG_SIGNATURE)
+ nic_warn(nic_io->dev_hdl, "Invalid bios configuration data, signature: 0x%x\n",
+ cfg.bios_cfg.signature);
+
+ if (cfg.bios_cfg.pf_bw > MAX_LIMIT_BW) {
+ nic_err(nic_io->dev_hdl, "Invalid bios cfg pf bandwidth limit: %u\n",
+ cfg.bios_cfg.pf_bw);
+ return -EINVAL;
+ }
+
+ *pf_bw_limit = cfg.bios_cfg.pf_bw;
+
+ return 0;
+}
+
+int hinic3_set_pf_rate(void *hwdev, u8 speed_level)
+{
+ struct hinic3_cmd_tx_rate_cfg rate_cfg = {{0}};
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 out_size = sizeof(rate_cfg);
+ u32 pf_rate;
+ int err;
+ u32 speed_convert[PORT_SPEED_UNKNOWN] = {
+ 0, 10, 100, 1000, 10000, 25000, 40000, 50000, 100000, 200000
+ };
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EINVAL;
+
+ if (speed_level >= PORT_SPEED_UNKNOWN) {
+ nic_err(nic_io->dev_hdl, "Invalid speed level: %hhu\n", speed_level);
+ return -EINVAL;
+ }
+
+ if (nic_io->nic_cfg.pf_bw_limit == MAX_LIMIT_BW) {
+ pf_rate = 0;
+ } else {
+		/* pf_bw_limit is a percentage, so scale speed / 100 by it */
+ pf_rate = (speed_convert[speed_level] / 100) * nic_io->nic_cfg.pf_bw_limit;
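+		/* e.g. a 25G link with pf_bw_limit 40 gives (25000 / 100) * 40 = 10000 */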
+ /* bandwidth limit is very small but not unlimit in this case */
+		/* bandwidth limit is very small but not unlimited in this case */
+ pf_rate = 1;
+ }
+
+ rate_cfg.func_id = hinic3_global_func_id(hwdev);
+ rate_cfg.min_rate = 0;
+ rate_cfg.max_rate = pf_rate;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_MAX_MIN_RATE, &rate_cfg,
+ sizeof(rate_cfg), &rate_cfg, &out_size);
+ if (err || !out_size || rate_cfg.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set rate(%u), err: %d, status: 0x%x, out size: 0x%x\n",
+ pf_rate, err, rate_cfg.msg_head.status, out_size);
+ return rate_cfg.msg_head.status ? rate_cfg.msg_head.status : -EIO;
+ }
+
+ return 0;
+}
+
+static int hinic3_get_nic_feature_from_hw(void *hwdev, u64 *s_feature, u16 size)
+{
+ return nic_feature_nego(hwdev, HINIC3_CMD_OP_GET, s_feature, size);
+}
+
+int hinic3_set_nic_feature_to_hw(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ return nic_feature_nego(hwdev, HINIC3_CMD_OP_SET, &nic_io->feature_cap, 1);
+}
+
+u64 hinic3_get_feature_cap(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ return nic_io->feature_cap;
+}
+
+void hinic3_update_nic_feature(void *hwdev, u64 s_feature)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ nic_io->feature_cap = s_feature;
+
+ nic_info(nic_io->dev_hdl, "Update nic feature to 0x%llx\n", nic_io->feature_cap);
+}
+
+static inline int init_nic_hwdev_param_valid(const void *hwdev, const void *pcidev_hdl,
+ const void *dev_hdl)
+{
+ if (!hwdev || !pcidev_hdl || !dev_hdl)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int hinic3_init_nic_io(void *hwdev, void *pcidev_hdl, void *dev_hdl,
+ struct hinic3_nic_io **nic_io)
+{
+ if (init_nic_hwdev_param_valid(hwdev, pcidev_hdl, dev_hdl))
+ return -EINVAL;
+
+ *nic_io = kzalloc(sizeof(**nic_io), GFP_KERNEL);
+ if (!(*nic_io))
+ return -ENOMEM;
+
+ (*nic_io)->dev_hdl = dev_hdl;
+ (*nic_io)->pcidev_hdl = pcidev_hdl;
+ (*nic_io)->hwdev = hwdev;
+
+ sema_init(&((*nic_io)->nic_cfg.cfg_lock), 1);
+ mutex_init(&((*nic_io)->nic_cfg.sfp_mutex));
+
+ (*nic_io)->nic_cfg.rt_cmd.mpu_send_sfp_abs = false;
+ (*nic_io)->nic_cfg.rt_cmd.mpu_send_sfp_info = false;
+
+ return 0;
+}
+
+/**
+ * hinic3_init_nic_hwdev - init nic hwdev
+ * @hwdev: pointer to hwdev
+ * @pcidev_hdl: pointer to pcidev or handler
+ * @dev_hdl: pointer to pcidev->dev or handler, for sdk_err() or dma_alloc()
+ * @rx_buff_len: receive buffer length
+ */
+int hinic3_init_nic_hwdev(void *hwdev, void *pcidev_hdl, void *dev_hdl,
+ u16 rx_buff_len)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ err = hinic3_init_nic_io(hwdev, pcidev_hdl, dev_hdl, &nic_io);
+ if (err)
+ return err;
+
+ err = hinic3_register_service_adapter(hwdev, nic_io, SERVICE_T_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to register service adapter\n");
+ goto register_sa_err;
+ }
+
+ err = hinic3_set_func_svc_used_state(hwdev, SVC_T_NIC, 1, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to set function svc used state\n");
+ goto set_used_state_err;
+ }
+
+	/* function table init consumes rx_buff_len, so set it first */
+	nic_io->rx_buff_len = rx_buff_len;
+
+	err = hinic3_init_function_table(nic_io);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to init function table\n");
+ goto err_out;
+ }
+
+ err = hinic3_get_nic_feature_from_hw(hwdev, &nic_io->feature_cap, 1);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to get nic features\n");
+ goto err_out;
+ }
+
+ sdk_info(dev_hdl, "nic features: 0x%llx\n", nic_io->feature_cap);
+
+ err = hinic3_get_bios_pf_bw_limit(hwdev, &nic_io->nic_cfg.pf_bw_limit);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to get pf bandwidth limit\n");
+ goto err_out;
+ }
+
+ err = hinic3_vf_func_init(nic_io);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to init vf info\n");
+ goto err_out;
+ }
+
+ return 0;
+
+err_out:
+ hinic3_set_func_svc_used_state(hwdev, SVC_T_NIC, 0, HINIC3_CHANNEL_NIC);
+
+set_used_state_err:
+ hinic3_unregister_service_adapter(hwdev, SERVICE_T_NIC);
+
+register_sa_err:
+ mutex_deinit(&nic_io->nic_cfg.sfp_mutex);
+ sema_deinit(&nic_io->nic_cfg.cfg_lock);
+
+ kfree(nic_io);
+
+ return err;
+}
+EXPORT_SYMBOL(hinic3_init_nic_hwdev);
+
+void hinic3_free_nic_hwdev(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return;
+
+ hinic3_vf_func_free(nic_io);
+
+ hinic3_set_func_svc_used_state(hwdev, SVC_T_NIC, 0, HINIC3_CHANNEL_NIC);
+
+ hinic3_unregister_service_adapter(hwdev, SERVICE_T_NIC);
+
+ mutex_deinit(&nic_io->nic_cfg.sfp_mutex);
+ sema_deinit(&nic_io->nic_cfg.cfg_lock);
+
+ kfree(nic_io);
+}
+EXPORT_SYMBOL(hinic3_free_nic_hwdev);
+
+int hinic3_force_drop_tx_pkt(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_force_pkt_drop pkt_drop;
+ u16 out_size = sizeof(pkt_drop);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ memset(&pkt_drop, 0, sizeof(pkt_drop));
+ pkt_drop.port = hinic3_physical_port_id(hwdev);
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_FORCE_PKT_DROP,
+ &pkt_drop, sizeof(pkt_drop),
+ &pkt_drop, &out_size);
+ if ((pkt_drop.msg_head.status != HINIC3_MGMT_CMD_UNSUPPORTED &&
+ pkt_drop.msg_head.status) || err || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Failed to set force tx packets drop, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, pkt_drop.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return pkt_drop.msg_head.status;
+}
+
+int hinic3_set_rx_mode(void *hwdev, u32 enable)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_rx_mode_config rx_mode_cfg;
+ u16 out_size = sizeof(rx_mode_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ memset(&rx_mode_cfg, 0, sizeof(rx_mode_cfg));
+ rx_mode_cfg.func_id = hinic3_global_func_id(hwdev);
+ rx_mode_cfg.rx_mode = enable;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_RX_MODE,
+ &rx_mode_cfg, sizeof(rx_mode_cfg),
+ &rx_mode_cfg, &out_size);
+ if (err || !out_size || rx_mode_cfg.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set rx mode, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, rx_mode_cfg.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_set_rx_vlan_offload(void *hwdev, u8 en)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_cmd_vlan_offload vlan_cfg;
+ u16 out_size = sizeof(vlan_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ memset(&vlan_cfg, 0, sizeof(vlan_cfg));
+ vlan_cfg.func_id = hinic3_global_func_id(hwdev);
+ vlan_cfg.vlan_offload = en;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_RX_VLAN_OFFLOAD,
+ &vlan_cfg, sizeof(vlan_cfg),
+ &vlan_cfg, &out_size);
+ if (err || !out_size || vlan_cfg.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set rx vlan offload, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, vlan_cfg.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_update_mac_vlan(void *hwdev, u16 old_vlan, u16 new_vlan, int vf_id)
+{
+ struct vf_data_storage *vf_info = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 func_id;
+ int err;
+
+ if (!hwdev || old_vlan >= VLAN_N_VID || new_vlan >= VLAN_N_VID)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+ if (!nic_io->vf_infos || is_zero_ether_addr(vf_info->drv_mac_addr))
+ return 0;
+
+ func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + (u16)vf_id;
+
+ err = hinic3_del_mac(nic_io->hwdev, vf_info->drv_mac_addr,
+ old_vlan, func_id, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to delete VF %d MAC %pM vlan %u\n",
+ HW_VF_ID_TO_OS(vf_id), vf_info->drv_mac_addr, old_vlan);
+ return err;
+ }
+
+ err = hinic3_set_mac(nic_io->hwdev, vf_info->drv_mac_addr,
+ new_vlan, func_id, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to add VF %d MAC %pM vlan %u\n",
+ HW_VF_ID_TO_OS(vf_id), vf_info->drv_mac_addr, new_vlan);
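+		/* best-effort rollback: re-add the MAC on the old vlan */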
+ hinic3_set_mac(nic_io->hwdev, vf_info->drv_mac_addr,
+ old_vlan, func_id, HINIC3_CHANNEL_NIC);
+ return err;
+ }
+
+ return 0;
+}
+
+static int hinic3_set_rx_lro(void *hwdev, u8 ipv4_en, u8 ipv6_en,
+ u8 lro_max_pkt_len)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_cmd_lro_config lro_cfg;
+ u16 out_size = sizeof(lro_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ memset(&lro_cfg, 0, sizeof(lro_cfg));
+ lro_cfg.func_id = hinic3_global_func_id(hwdev);
+ lro_cfg.opcode = HINIC3_CMD_OP_SET;
+ lro_cfg.lro_ipv4_en = ipv4_en;
+ lro_cfg.lro_ipv6_en = ipv6_en;
+ lro_cfg.lro_max_pkt_len = lro_max_pkt_len;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CFG_RX_LRO,
+ &lro_cfg, sizeof(lro_cfg),
+ &lro_cfg, &out_size);
+ if (err || !out_size || lro_cfg.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set lro offload, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, lro_cfg.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hinic3_set_rx_lro_timer(void *hwdev, u32 timer_value)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_cmd_lro_timer lro_timer;
+ u16 out_size = sizeof(lro_timer);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ memset(&lro_timer, 0, sizeof(lro_timer));
+ lro_timer.opcode = HINIC3_CMD_OP_SET;
+ lro_timer.timer = timer_value;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CFG_LRO_TIMER,
+ &lro_timer, sizeof(lro_timer),
+ &lro_timer, &out_size);
+ if (err || !out_size || lro_timer.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set lro timer, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, lro_timer.msg_head.status, out_size);
+
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_set_rx_lro_state(void *hwdev, u8 lro_en, u32 lro_timer,
+ u32 lro_max_pkt_len)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u8 ipv4_en = 0, ipv6_en = 0;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ ipv4_en = lro_en ? 1 : 0;
+ ipv6_en = lro_en ? 1 : 0;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ nic_info(nic_io->dev_hdl, "Set LRO max coalesce packet size to %uK\n",
+ lro_max_pkt_len);
+
+ err = hinic3_set_rx_lro(hwdev, ipv4_en, ipv6_en, (u8)lro_max_pkt_len);
+ if (err)
+ return err;
+
+ /* we don't set LRO timer for VF */
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return 0;
+
+ nic_info(nic_io->dev_hdl, "Set LRO timer to %u\n", lro_timer);
+
+ return hinic3_set_rx_lro_timer(hwdev, lro_timer);
+}
+
+int hinic3_set_vlan_fliter(void *hwdev, u32 vlan_filter_ctrl)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_cmd_set_vlan_filter vlan_filter;
+ u16 out_size = sizeof(vlan_filter);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ memset(&vlan_filter, 0, sizeof(vlan_filter));
+ vlan_filter.func_id = hinic3_global_func_id(hwdev);
+ vlan_filter.vlan_filter_ctrl = vlan_filter_ctrl;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_VLAN_FILTER_EN,
+ &vlan_filter, sizeof(vlan_filter),
+ &vlan_filter, &out_size);
+ if (err || !out_size || vlan_filter.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set vlan filter, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, vlan_filter.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_set_func_capture_en(void *hwdev, u16 func_id, bool cap_en)
+{
+ struct nic_cmd_capture_info cap_info = {{0}};
+ u16 out_size = sizeof(cap_info);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ /* 2 function capture types */
+ cap_info.is_en_trx = cap_en;
+ cap_info.func_port = func_id;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_UCAPTURE_OPT,
+ &cap_info, sizeof(cap_info),
+ &cap_info, &out_size);
+ if (err || !out_size || cap_info.msg_head.status)
+ return -EINVAL;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_func_capture_en);
+
+int hinic3_add_tcam_rule(void *hwdev, struct nic_tcam_cfg_rule *tcam_rule)
+{
+ u16 out_size = sizeof(struct nic_cmd_fdir_add_rule);
+ struct nic_cmd_fdir_add_rule tcam_cmd;
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !tcam_rule)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (tcam_rule->index >= HINIC3_MAX_TCAM_RULES_NUM) {
+		nic_err(nic_io->dev_hdl, "Tcam rule index to add is invalid\n");
+ return -EINVAL;
+ }
+
+ memset(&tcam_cmd, 0, sizeof(struct nic_cmd_fdir_add_rule));
+ memcpy((void *)&tcam_cmd.rule, (void *)tcam_rule,
+ sizeof(struct nic_tcam_cfg_rule));
+ tcam_cmd.func_id = hinic3_global_func_id(hwdev);
+ tcam_cmd.type = TCAM_RULE_FDIR_TYPE;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_ADD_TC_FLOW,
+ &tcam_cmd, sizeof(tcam_cmd),
+ &tcam_cmd, &out_size);
+ if (err || tcam_cmd.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Add tcam rule failed, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, tcam_cmd.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_del_tcam_rule(void *hwdev, u32 index)
+{
+ u16 out_size = sizeof(struct nic_cmd_fdir_del_rules);
+ struct nic_cmd_fdir_del_rules tcam_cmd;
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (index >= HINIC3_MAX_TCAM_RULES_NUM) {
+		nic_err(nic_io->dev_hdl, "Tcam rule index to del is invalid\n");
+ return -EINVAL;
+ }
+
+ memset(&tcam_cmd, 0, sizeof(struct nic_cmd_fdir_del_rules));
+ tcam_cmd.index_start = index;
+ tcam_cmd.index_num = 1;
+ tcam_cmd.func_id = hinic3_global_func_id(hwdev);
+ tcam_cmd.type = TCAM_RULE_FDIR_TYPE;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_DEL_TC_FLOW,
+ &tcam_cmd, sizeof(tcam_cmd),
+ &tcam_cmd, &out_size);
+ if (err || tcam_cmd.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Del tcam rule failed, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, tcam_cmd.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+/**
+ * hinic3_mgmt_tcam_block - allocate or free a tcam block for packet filtering.
+ *
+ * @param hwdev
+ * The hardware interface of a nic device.
+ * @param alloc_en
+ * 1 alloc block.
+ * 0 free block.
+ * @param index
+ * block index from firmware.
+ * @return
+ * 0 on success,
+ * negative error value otherwise.
+ */
+static int hinic3_mgmt_tcam_block(void *hwdev, u8 alloc_en, u16 *index)
+{
+ struct nic_cmd_ctrl_tcam_block_out tcam_block_info;
+ u16 out_size = sizeof(struct nic_cmd_ctrl_tcam_block_out);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !index)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ memset(&tcam_block_info, 0,
+ sizeof(struct nic_cmd_ctrl_tcam_block_out));
+
+ tcam_block_info.func_id = hinic3_global_func_id(hwdev);
+ tcam_block_info.alloc_en = alloc_en;
+ tcam_block_info.tcam_type = NIC_TCAM_BLOCK_TYPE_LARGE;
+ tcam_block_info.tcam_block_index = *index;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CFG_TCAM_BLOCK,
+ &tcam_block_info, sizeof(tcam_block_info),
+ &tcam_block_info, &out_size);
+ if (err || tcam_block_info.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Set tcam block failed, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, tcam_block_info.head.status, out_size);
+ return -EIO;
+ }
+
+ if (alloc_en)
+ *index = tcam_block_info.tcam_block_index;
+
+ return 0;
+}
+
+int hinic3_alloc_tcam_block(void *hwdev, u16 *index)
+{
+ return hinic3_mgmt_tcam_block(hwdev, HINIC3_TCAM_BLOCK_ENABLE, index);
+}
+
+int hinic3_free_tcam_block(void *hwdev, u16 *index)
+{
+ return hinic3_mgmt_tcam_block(hwdev, HINIC3_TCAM_BLOCK_DISABLE, index);
+}
+
+int hinic3_set_fdir_tcam_rule_filter(void *hwdev, bool enable)
+{
+ struct nic_cmd_set_tcam_enable port_tcam_cmd;
+ u16 out_size = sizeof(port_tcam_cmd);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ memset(&port_tcam_cmd, 0, sizeof(port_tcam_cmd));
+ port_tcam_cmd.func_id = hinic3_global_func_id(hwdev);
+ port_tcam_cmd.tcam_enable = (u8)enable;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_ENABLE_TCAM,
+ &port_tcam_cmd, sizeof(port_tcam_cmd),
+ &port_tcam_cmd, &out_size);
+ if (err || port_tcam_cmd.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl, "Set fdir tcam filter failed, err: %d, status: 0x%x, out size: 0x%x, enable: 0x%x\n",
+ err, port_tcam_cmd.head.status, out_size,
+ enable);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_flush_tcam_rule(void *hwdev)
+{
+ struct nic_cmd_flush_tcam_rules tcam_flush;
+ u16 out_size = sizeof(struct nic_cmd_flush_tcam_rules);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ memset(&tcam_flush, 0, sizeof(struct nic_cmd_flush_tcam_rules));
+ tcam_flush.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_FLUSH_TCAM,
+ &tcam_flush,
+ sizeof(struct nic_cmd_flush_tcam_rules),
+ &tcam_flush, &out_size);
+ if (err || tcam_flush.head.status || !out_size) {
+ nic_err(nic_io->dev_hdl,
+ "Flush tcam fdir rules failed, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, tcam_flush.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_get_rxq_hw_info(void *hwdev, struct rxq_check_info *rxq_info, u16 num_qps, u16 wqe_type)
+{
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_rxq_hw *rxq_hw = NULL;
+ struct rxq_check_info *rxq_info_out = NULL;
+ int err;
+ u16 i;
+
+ if (!hwdev || !rxq_info)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd_buf.\n");
+ return -ENOMEM;
+ }
+
+ rxq_hw = cmd_buf->buf;
+ rxq_hw->func_id = hinic3_global_func_id(hwdev);
+ rxq_hw->num_queues = num_qps;
+
+ hinic3_cpu_to_be32(rxq_hw, sizeof(struct hinic3_rxq_hw));
+
+ cmd_buf->size = sizeof(struct hinic3_rxq_hw);
+
+ err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC, HINIC3_UCODE_CMD_RXQ_INFO_GET,
+ cmd_buf, cmd_buf, NULL, 0, HINIC3_CHANNEL_NIC);
+ if (err)
+ goto get_rxq_info_failed;
+
+ rxq_info_out = cmd_buf->buf;
+ for (i = 0; i < num_qps; i++) {
+ rxq_info[i].hw_pi = rxq_info_out[i].hw_pi >> wqe_type;
+ rxq_info[i].hw_ci = rxq_info_out[i].hw_ci >> wqe_type;
+ }
+
+get_rxq_info_failed:
+ hinic3_free_cmd_buf(hwdev, cmd_buf);
+
+ return err;
+}
+
+int hinic3_pf_set_vf_link_state(void *hwdev, bool vf_link_forced, bool link_state)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct vf_data_storage *vf_infos = NULL;
+ int vf_id;
+
+ if (!hwdev) {
+ pr_err("hwdev is null.\n");
+ return -EINVAL;
+ }
+
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("nic_io is null.\n");
+ return -EINVAL;
+ }
+
+ vf_infos = nic_io->vf_infos;
+ for (vf_id = 0; vf_id < nic_io->max_vfs; vf_id++) {
+ vf_infos[vf_id].link_up = link_state;
+ vf_infos[vf_id].link_forced = vf_link_forced;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_pf_set_vf_link_state);
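
For reference, the TCAM helpers above are meant to be used as a block-then-rule
life cycle: a dynamic block is reserved from firmware, rules are placed at
indexes inside that block, and fdir filtering is switched on once rules exist.
A minimal caller-side sketch, assuming the rule index is derived from the
allocated block via HINIC3_PKT_TCAM_DYNAMIC_INDEX_START (defined later in
hinic3_nic_dev.h) and that `rule` arrives with its match fields already set:

static int tcam_flow_sketch(void *hwdev, struct nic_tcam_cfg_rule *rule)
{
	u16 block_idx = 0;
	int err;

	/* reserve a dynamic block; firmware returns its index */
	err = hinic3_alloc_tcam_block(hwdev, &block_idx);
	if (err)
		return err;

	/* place the rule at the first index of the allocated block */
	rule->index = HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_idx);
	err = hinic3_add_tcam_rule(hwdev, rule);
	if (err)
		goto free_block;

	/* enable TCAM-based flow director filtering for this function */
	err = hinic3_set_fdir_tcam_rule_filter(hwdev, true);
	if (err)
		goto del_rule;

	return 0;

del_rule:
	hinic3_del_tcam_rule(hwdev, rule->index);
free_block:
	hinic3_free_tcam_block(hwdev, &block_idx);
	return err;
}
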
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.h
new file mode 100644
index 000000000000..dc0a8eb6e2df
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg.h
@@ -0,0 +1,620 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_CFG_H
+#define HINIC3_NIC_CFG_H
+
+#include <linux/types.h>
+#include <linux/netdevice.h>
+
+#include "hinic3_mgmt_interface.h"
+#include "mag_cmd.h"
+
+#define OS_VF_ID_TO_HW(os_vf_id) ((os_vf_id) + 1)
+#define HW_VF_ID_TO_OS(hw_vf_id) ((hw_vf_id) - 1)
+
+#define HINIC3_VLAN_PRIORITY_SHIFT 13
+
+#define HINIC3_RSS_INDIR_4B_UNIT 3
+#define HINIC3_RSS_INDIR_NUM 2
+
+#define HINIC3_RSS_KEY_RSV_NUM 2
+#define HINIC3_MAX_NUM_RQ 256
+
+#define HINIC3_MIN_MTU_SIZE 256
+#define HINIC3_MAX_JUMBO_FRAME_SIZE 9600
+
+#define HINIC3_PF_SET_VF_ALREADY 0x4
+#define HINIC3_MGMT_STATUS_EXIST 0x6
+#define CHECK_IPSU_15BIT 0x8000
+
+#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB /* Table empty */
+#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC /* Table full */
+
+#define HINIC3_LOWEST_LATENCY 3
+#define HINIC3_MULTI_VM_LATENCY 32
+#define HINIC3_MULTI_VM_PENDING_LIMIT 4
+
+#define HINIC3_RX_RATE_LOW 200000
+#define HINIC3_RX_COAL_TIME_LOW 25
+#define HINIC3_RX_PENDING_LIMIT_LOW 2
+
+#define HINIC3_RX_RATE_HIGH 700000
+#define HINIC3_RX_COAL_TIME_HIGH 225
+#define HINIC3_RX_PENDING_LIMIT_HIGH 8
+
+#define HINIC3_RX_RATE_THRESH 50000
+#define HINIC3_TX_RATE_THRESH 50000
+#define HINIC3_RX_RATE_LOW_VM 100000
+#define HINIC3_RX_PENDING_LIMIT_HIGH_VM 87
+
+#define HINIC3_DCB_PCP 0
+#define HINIC3_DCB_DSCP 1
+
+#define MAX_LIMIT_BW 100
+
+enum hinic3_valid_link_settings {
+ HILINK_LINK_SET_SPEED = 0x1,
+ HILINK_LINK_SET_AUTONEG = 0x2,
+ HILINK_LINK_SET_FEC = 0x4,
+};
+
+enum hinic3_link_follow_status {
+ HINIC3_LINK_FOLLOW_DEFAULT,
+ HINIC3_LINK_FOLLOW_PORT,
+ HINIC3_LINK_FOLLOW_SEPARATE,
+ HINIC3_LINK_FOLLOW_STATUS_MAX,
+};
+
+struct hinic3_link_ksettings {
+ u32 valid_bitmap;
+ u8 speed; /* enum nic_speed_level */
+ u8 autoneg; /* 0 - off; 1 - on */
+ u8 fec; /* 0 - RSFEC; 1 - BASEFEC; 2 - NOFEC */
+};
+
+u64 hinic3_get_feature_cap(void *hwdev);
+
+#define HINIC3_SUPPORT_FEATURE(hwdev, feature) \
+ (hinic3_get_feature_cap(hwdev) & NIC_F_##feature)
+#define HINIC3_SUPPORT_CSUM(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, CSUM)
+#define HINIC3_SUPPORT_SCTP_CRC(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, SCTP_CRC)
+#define HINIC3_SUPPORT_TSO(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, TSO)
+#define HINIC3_SUPPORT_UFO(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, UFO)
+#define HINIC3_SUPPORT_LRO(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, LRO)
+#define HINIC3_SUPPORT_RSS(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, RSS)
+#define HINIC3_SUPPORT_RXVLAN_FILTER(hwdev) \
+ HINIC3_SUPPORT_FEATURE(hwdev, RX_VLAN_FILTER)
+#define HINIC3_SUPPORT_VLAN_OFFLOAD(hwdev) \
+ (HINIC3_SUPPORT_FEATURE(hwdev, RX_VLAN_STRIP) && \
+ HINIC3_SUPPORT_FEATURE(hwdev, TX_VLAN_INSERT))
+#define HINIC3_SUPPORT_VXLAN_OFFLOAD(hwdev) \
+ HINIC3_SUPPORT_FEATURE(hwdev, VXLAN_OFFLOAD)
+#define HINIC3_SUPPORT_IPSEC_OFFLOAD(hwdev) \
+ HINIC3_SUPPORT_FEATURE(hwdev, IPSEC_OFFLOAD)
+#define HINIC3_SUPPORT_FDIR(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, FDIR)
+#define HINIC3_SUPPORT_PROMISC(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, PROMISC)
+#define HINIC3_SUPPORT_ALLMULTI(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, ALLMULTI)
+#define HINIC3_SUPPORT_VF_MAC(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, VF_MAC)
+#define HINIC3_SUPPORT_RATE_LIMIT(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, RATE_LIMIT)
+
+#define HINIC3_SUPPORT_RXQ_RECOVERY(hwdev) HINIC3_SUPPORT_FEATURE(hwdev, RXQ_RECOVERY)
+
+struct nic_rss_type {
+ u8 tcp_ipv6_ext;
+ u8 ipv6_ext;
+ u8 tcp_ipv6;
+ u8 ipv6;
+ u8 tcp_ipv4;
+ u8 ipv4;
+ u8 udp_ipv6;
+ u8 udp_ipv4;
+};
+
+enum hinic3_rss_hash_type {
+ HINIC3_RSS_HASH_ENGINE_TYPE_XOR = 0,
+ HINIC3_RSS_HASH_ENGINE_TYPE_TOEP,
+ HINIC3_RSS_HASH_ENGINE_TYPE_MAX,
+};
+
+/* rss */
+struct nic_rss_indirect_tbl {
+	u32 rsvd[4]; /* 16B reserved ahead of entry[] */
+ u16 entry[NIC_RSS_INDIR_SIZE];
+};
+
+struct nic_rss_context_tbl {
+ u32 rsvd[4];
+ u32 ctx;
+};
+
+#define NIC_CONFIG_ALL_QUEUE_VLAN_CTX 0xFFFF
+struct nic_vlan_ctx {
+ u32 func_id;
+	u32 qid; /* if qid == 0xFFFF, configure all queues of this function */
+ u32 vlan_tag;
+ u32 vlan_mode;
+ u32 vlan_sel;
+};
+
+enum hinic3_link_status {
+ HINIC3_LINK_DOWN = 0,
+ HINIC3_LINK_UP
+};
+
+struct nic_port_info {
+ u8 port_type;
+ u8 autoneg_cap;
+ u8 autoneg_state;
+ u8 duplex;
+ u8 speed;
+ u8 fec;
+ u32 supported_mode;
+ u32 advertised_mode;
+};
+
+struct nic_pause_config {
+ u8 auto_neg;
+ u8 rx_pause;
+ u8 tx_pause;
+};
+
+struct rxq_check_info {
+ u16 hw_pi;
+ u16 hw_ci;
+};
+
+struct hinic3_rxq_hw {
+ u32 func_id;
+ u32 num_queues;
+
+ u32 rsvd[14];
+};
+
+#define MODULE_TYPE_SFP 0x3
+#define MODULE_TYPE_QSFP28 0x11
+#define MODULE_TYPE_QSFP 0x0C
+#define MODULE_TYPE_QSFP_PLUS 0x0D
+
+#define TCAM_IP_TYPE_MASK 0x1
+#define TCAM_TUNNEL_TYPE_MASK 0xF
+#define TCAM_FUNC_ID_MASK 0x7FFF
+
+int hinic3_add_tcam_rule(void *hwdev, struct nic_tcam_cfg_rule *tcam_rule);
+int hinic3_del_tcam_rule(void *hwdev, u32 index);
+
+int hinic3_alloc_tcam_block(void *hwdev, u16 *index);
+int hinic3_free_tcam_block(void *hwdev, u16 *index);
+
+int hinic3_set_fdir_tcam_rule_filter(void *hwdev, bool enable);
+
+int hinic3_flush_tcam_rule(void *hwdev);
+
+/* *
+ * @brief hinic3_update_mac - update mac address to hardware
+ * @param hwdev: device pointer to hwdev
+ * @param old_mac: old mac to delete
+ * @param new_mac: new mac to update
+ * @param vlan_id: vlan id
+ * @param func_id: function index
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_update_mac(void *hwdev, u8 *old_mac, u8 *new_mac, u16 vlan_id,
+ u16 func_id);
+
+/* *
+ * @brief hinic3_get_default_mac - get default mac address
+ * @param hwdev: device pointer to hwdev
+ * @param mac_addr: mac address from hardware
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_default_mac(void *hwdev, u8 *mac_addr);
+
+/* *
+ * @brief hinic3_set_port_mtu - set function mtu
+ * @param hwdev: device pointer to hwdev
+ * @param new_mtu: mtu
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_port_mtu(void *hwdev, u16 new_mtu);
+
+/* *
+ * @brief hinic3_get_link_state - get link state
+ * @param hwdev: device pointer to hwdev
+ * @param link_state: link state, 0-link down, 1-link up
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_link_state(void *hwdev, u8 *link_state);
+
+/* *
+ * @brief hinic3_get_vport_stats - get function stats
+ * @param hwdev: device pointer to hwdev
+ * @param func_id: function index
+ * @param stats: function stats
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_vport_stats(void *hwdev, u16 func_id, struct hinic3_vport_stats *stats);
+
+/* *
+ * @brief hinic3_notify_all_vfs_link_changed - notify all vfs of a link state change
+ * @param hwdev: device pointer to hwdev
+ * @param link_status: link state, 0-link down, 1-link up
+ */
+void hinic3_notify_all_vfs_link_changed(void *hwdev, u8 link_status);
+
+/* *
+ * @brief hinic3_force_drop_tx_pkt - force drop tx packet
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_force_drop_tx_pkt(void *hwdev);
+
+/* *
+ * @brief hinic3_set_rx_mode - set function rx mode
+ * @param hwdev: device pointer to hwdev
+ * @param enable: rx mode state
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_rx_mode(void *hwdev, u32 enable);
+
+/* *
+ * @brief hinic3_set_rx_vlan_offload - set function vlan offload valid state
+ * @param hwdev: device pointer to hwdev
+ * @param en: 0-disable, 1-enable
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_rx_vlan_offload(void *hwdev, u8 en);
+
+/* *
+ * @brief hinic3_set_rx_lro_state - set rx LRO configuration
+ * @param hwdev: device pointer to hwdev
+ * @param lro_en: 0-disable, 1-enable
+ * @param lro_timer: LRO aggregation timeout
+ * @param lro_max_pkt_len: LRO max coalesced packet size (unit: 1KB)
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_rx_lro_state(void *hwdev, u8 lro_en, u32 lro_timer,
+ u32 lro_max_pkt_len);
+
+/* *
+ * @brief hinic3_set_vf_spoofchk - set vf spoofchk
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @param spoofchk: spoofchk
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_vf_spoofchk(void *hwdev, u16 vf_id, bool spoofchk);
+
+/* *
+ * @brief hinic3_vf_info_spoofchk - get vf spoofchk info
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @retval spoofchk state
+ */
+bool hinic3_vf_info_spoofchk(void *hwdev, int vf_id);
+
+/* *
+ * @brief hinic3_add_vf_vlan - add vf vlan id
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @param vlan: vlan id
+ * @param qos: qos
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_add_vf_vlan(void *hwdev, int vf_id, u16 vlan, u8 qos);
+
+/* *
+ * @brief hinic3_kill_vf_vlan - kill vf vlan
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_kill_vf_vlan(void *hwdev, int vf_id);
+
+/* *
+ * @brief hinic3_set_vf_mac - set vf mac
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @param mac_addr: vf mac address
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_vf_mac(void *hwdev, int vf_id, unsigned char *mac_addr);
+
+/* *
+ * @brief hinic3_vf_info_vlanprio - get vf vlan priority
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @retval vf vlan priority
+ */
+u16 hinic3_vf_info_vlanprio(void *hwdev, int vf_id);
+
+/* *
+ * @brief hinic3_set_vf_tx_rate - set vf tx rate
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @param max_rate: max rate
+ * @param min_rate: min rate
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_vf_tx_rate(void *hwdev, u16 vf_id, u32 max_rate, u32 min_rate);
+
+/* *
+ * @brief hinic3_get_vf_config - get vf configuration
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @param ivi: vf info
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+void hinic3_get_vf_config(void *hwdev, u16 vf_id, struct ifla_vf_info *ivi);
+
+/* *
+ * @brief hinic3_set_vf_link_state - set vf link state
+ * @param hwdev: device pointer to hwdev
+ * @param vf_id: vf id
+ * @param link: link state
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_vf_link_state(void *hwdev, u16 vf_id, int link);
+
+/* *
+ * @brief hinic3_get_port_info - get port info
+ * @param hwdev: device pointer to hwdev
+ * @param port_info: port info
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_port_info(void *hwdev, struct nic_port_info *port_info,
+ u16 channel);
+
+/* *
+ * @brief hinic3_set_rss_type - set rss type
+ * @param hwdev: device pointer to hwdev
+ * @param rss_type: rss type
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_rss_type(void *hwdev, struct nic_rss_type rss_type);
+
+/* *
+ * @brief hinic3_get_rss_type - get rss type
+ * @param hwdev: device pointer to hwdev
+ * @param rss_type: rss type
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_rss_type(void *hwdev, struct nic_rss_type *rss_type);
+
+/* *
+ * @brief hinic3_rss_get_hash_engine - get rss hash engine
+ * @param hwdev: device pointer to hwdev
+ * @param type: hash engine
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_get_hash_engine(void *hwdev, u8 *type);
+
+/* *
+ * @brief hinic3_rss_set_hash_engine - set rss hash engine
+ * @param hwdev: device pointer to hwdev
+ * @param type: hash engine
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_set_hash_engine(void *hwdev, u8 type);
+
+/* *
+ * @brief hinic3_rss_cfg - set rss configuration
+ * @param hwdev: device pointer to hwdev
+ * @param rss_en: enable rss flag
+ * @param cos_num: number of cos
+ * @param prio_tc: priority to tc mapping
+ * @param num_qps: number of queues
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_cfg(void *hwdev, u8 rss_en, u8 cos_num, u8 *prio_tc,
+ u16 num_qps);
+
+/* *
+ * @brief hinic3_rss_set_hash_key - set rss hash key
+ * @param hwdev: device pointer to hwdev
+ * @param key: rss key
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_set_hash_key(void *hwdev, const u8 *key);
+
+/* *
+ * @brief hinic3_rss_get_hash_key - get rss hash key
+ * @param hwdev: device pointer to hwdev
+ * @param key: rss key
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_get_hash_key(void *hwdev, u8 *key);
+
+/* *
+ * @brief hinic3_refresh_nic_cfg - refresh port cfg
+ * @param hwdev: device pointer to hwdev
+ * @param port_info: port information
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_refresh_nic_cfg(void *hwdev, struct nic_port_info *port_info);
+
+/* *
+ * @brief hinic3_add_vlan - add vlan
+ * @param hwdev: device pointer to hwdev
+ * @param vlan_id: vlan id
+ * @param func_id: function id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_add_vlan(void *hwdev, u16 vlan_id, u16 func_id);
+
+/* *
+ * @brief hinic3_del_vlan - delete vlan
+ * @param hwdev: device pointer to hwdev
+ * @param vlan_id: vlan id
+ * @param func_id: function id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_del_vlan(void *hwdev, u16 vlan_id, u16 func_id);
+
+/* *
+ * @brief hinic3_rss_set_indir_tbl - set rss indirect table
+ * @param hwdev: device pointer to hwdev
+ * @param indir_table: rss indirect table
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_set_indir_tbl(void *hwdev, const u32 *indir_table);
+
+/* *
+ * @brief hinic3_rss_get_indir_tbl - get rss indirect table
+ * @param hwdev: device pointer to hwdev
+ * @param indir_table: rss indirect table
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_rss_get_indir_tbl(void *hwdev, u32 *indir_table);
+
+/* *
+ * @brief hinic3_get_phy_port_stats - get port stats
+ * @param hwdev: device pointer to hwdev
+ * @param stats: port stats
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_phy_port_stats(void *hwdev, struct mag_cmd_port_stats *stats);
+
+int hinic3_get_fpga_phy_port_stats(void *hwdev, struct hinic3_phy_fpga_port_stats *stats);
+
+int hinic3_set_port_funcs_state(void *hwdev, bool enable);
+
+int hinic3_reset_port_link_cfg(void *hwdev);
+
+int hinic3_force_port_relink(void *hwdev);
+
+int hinic3_set_dcb_state(void *hwdev, struct hinic3_dcb_state *dcb_state);
+
+int hinic3_dcb_set_pfc(void *hwdev, u8 pfc_en, u8 pfc_bitmap);
+
+int hinic3_dcb_get_pfc(void *hwdev, u8 *pfc_en_bitmap);
+
+int hinic3_dcb_set_ets(void *hwdev, u8 *cos_tc, u8 *cos_bw, u8 *cos_prio,
+ u8 *tc_bw, u8 *tc_prio);
+
+int hinic3_dcb_set_cos_up_map(void *hwdev, u8 cos_valid_bitmap, u8 *cos_up,
+ u8 max_cos_num);
+
+int hinic3_dcb_set_rq_iq_mapping(void *hwdev, u32 num_rqs, u8 *map,
+ u32 max_map_num);
+
+int hinic3_sync_dcb_state(void *hwdev, u8 op_code, u8 state);
+
+int hinic3_get_pause_info(void *hwdev, struct nic_pause_config *nic_pause);
+
+int hinic3_set_pause_info(void *hwdev, struct nic_pause_config nic_pause);
+
+int hinic3_set_link_settings(void *hwdev,
+ struct hinic3_link_ksettings *settings);
+
+int hinic3_set_vlan_fliter(void *hwdev, u32 vlan_filter_ctrl);
+
+void hinic3_clear_vfs_info(void *hwdev);
+
+int hinic3_update_mac_vlan(void *hwdev, u16 old_vlan, u16 new_vlan, int vf_id);
+
+int hinic3_set_led_status(void *hwdev, enum mag_led_type type,
+ enum mag_led_mode mode);
+
+int hinic3_set_func_capture_en(void *hwdev, u16 func_id, bool cap_en);
+
+int hinic3_set_loopback_mode(void *hwdev, u8 mode, u8 enable);
+int hinic3_get_loopback_mode(void *hwdev, u8 *mode, u8 *enable);
+
+#ifdef HAVE_NDO_SET_VF_TRUST
+bool hinic3_get_vf_trust(void *hwdev, int vf_id);
+int hinic3_set_vf_trust(void *hwdev, u16 vf_id, bool trust);
+#endif
+
+int hinic3_set_autoneg(void *hwdev, bool enable);
+
+int hinic3_get_sfp_type(void *hwdev, u8 *sfp_type, u8 *sfp_type_ext);
+int hinic3_get_sfp_eeprom(void *hwdev, u8 *data, u32 len);
+
+bool hinic3_if_sfp_absent(void *hwdev);
+int hinic3_get_sfp_info(void *hwdev, struct mag_cmd_get_xsfp_info *sfp_info);
+
+/* *
+ * @brief hinic3_set_nic_feature_to_hw - sync nic feature to hardware
+ * @param hwdev: device pointer to hwdev
+ */
+int hinic3_set_nic_feature_to_hw(void *hwdev);
+
+/* *
+ * @brief hinic3_update_nic_feature - update nic feature
+ * @param hwdev: device pointer to hwdev
+ * @param s_feature: nic features
+ */
+void hinic3_update_nic_feature(void *hwdev, u64 s_feature);
+
+/* *
+ * @brief hinic3_set_link_status_follow - set link follow status
+ * @param hwdev: device pointer to hwdev
+ * @param status: link follow status
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_link_status_follow(void *hwdev, enum hinic3_link_follow_status status);
+
+/* *
+ * @brief hinic3_update_pf_bw - update pf bandwidth
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_update_pf_bw(void *hwdev);
+
+/* *
+ * @brief hinic3_set_pf_bw_limit - set pf bandwidth limit
+ * @param hwdev: device pointer to hwdev
+ * @param bw_limit: pf bandwidth limit
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_pf_bw_limit(void *hwdev, u32 bw_limit);
+
+/* *
+ * @brief hinic3_set_pf_rate - set pf rate
+ * @param hwdev: device pointer to hwdev
+ * @param speed_level: speed level
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_pf_rate(void *hwdev, u8 speed_level);
+
+int hinic3_get_rxq_hw_info(void *hwdev, struct rxq_check_info *rxq_info, u16 num_qps, u16 wqe_type);
+
+#endif
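
The HINIC3_SUPPORT_* macros above are the intended capability gate in front of
the corresponding configuration calls. A hedged sketch of that pattern for LRO
(the timer and length values below are placeholders, not recommended settings):

static int lro_enable_sketch(void *hwdev)
{
	/* placeholder values: 16 timer units, 32 * 1KB max coalesced size */
	u32 lro_timer = 16;
	u32 lro_max_pkt_len = 32;

	if (!HINIC3_SUPPORT_LRO(hwdev))
		return -EOPNOTSUPP;

	return hinic3_set_rx_lro_state(hwdev, 1, lro_timer, lro_max_pkt_len);
}
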
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg_vf.c b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg_vf.c
new file mode 100644
index 000000000000..b46cf78ce9e3
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cfg_vf.c
@@ -0,0 +1,637 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic.h"
+#include "hinic3_nic_cmd.h"
+
+/*lint -e806*/
+static unsigned char set_vf_link_state;
+module_param(set_vf_link_state, byte, 0444);
+MODULE_PARM_DESC(set_vf_link_state, "Set VF link state: 0 - auto (follow PF link), 1 - always up, 2 - always down. Default is 0.");
+/*lint +e806*/
+
+/* To adapt to different linux versions */
+enum {
+ HINIC3_IFLA_VF_LINK_STATE_AUTO, /* link state of the uplink */
+ HINIC3_IFLA_VF_LINK_STATE_ENABLE, /* link always up */
+ HINIC3_IFLA_VF_LINK_STATE_DISABLE, /* link always down */
+};
+
+#define NIC_CVLAN_INSERT_ENABLE 0x1
+#define NIC_QINQ_INSERT_ENABLE 0X3
+static int hinic3_set_vlan_ctx(struct hinic3_nic_io *nic_io, u16 func_id,
+ u16 vlan_tag, u16 q_id, bool add)
+{
+ struct nic_vlan_ctx *vlan_ctx = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ u64 out_param = 0;
+ int err;
+
+ cmd_buf = hinic3_alloc_cmd_buf(nic_io->hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
+ cmd_buf->size = sizeof(struct nic_vlan_ctx);
+ vlan_ctx = (struct nic_vlan_ctx *)cmd_buf->buf;
+
+ vlan_ctx->func_id = func_id;
+ vlan_ctx->qid = q_id;
+ vlan_ctx->vlan_tag = vlan_tag;
+ vlan_ctx->vlan_sel = 0; /* TPID0 in IPSU */
+ vlan_ctx->vlan_mode = add ?
+ NIC_QINQ_INSERT_ENABLE : NIC_CVLAN_INSERT_ENABLE;
+
+ hinic3_cpu_to_be32(vlan_ctx, sizeof(struct nic_vlan_ctx));
+
+ err = hinic3_cmdq_direct_resp(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_MODIFY_VLAN_CTX,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+
+ hinic3_free_cmd_buf(nic_io->hwdev, cmd_buf);
+
+ if (err || out_param != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set vlan context, err: %d, out_param: 0x%llx\n",
+ err, out_param);
+ return -EFAULT;
+ }
+
+ return err;
+}
+
+int hinic3_cfg_vf_vlan(struct hinic3_nic_io *nic_io, u8 opcode, u16 vid,
+ u8 qos, int vf_id)
+{
+ struct hinic3_cmd_vf_vlan_config vf_vlan;
+ u16 out_size = sizeof(vf_vlan);
+ u16 glb_func_id;
+ int err;
+ u16 vlan_tag;
+
+ /* VLAN 0 is a special case, don't allow it to be removed */
+ if (!vid && opcode == HINIC3_CMD_OP_DEL)
+ return 0;
+
+ memset(&vf_vlan, 0, sizeof(vf_vlan));
+
+ vf_vlan.opcode = opcode;
+ vf_vlan.func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + (u16)vf_id;
+ vf_vlan.vlan_id = vid;
+ vf_vlan.qos = qos;
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev, HINIC3_NIC_CMD_CFG_VF_VLAN,
+ &vf_vlan, sizeof(vf_vlan),
+ &vf_vlan, &out_size);
+ if (err || !out_size || vf_vlan.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set VF %d vlan, err: %d, status: 0x%x,out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf_id), err, vf_vlan.msg_head.status,
+ out_size);
+ return -EFAULT;
+ }
+
+ vlan_tag = vid + (u16)(qos << VLAN_PRIO_SHIFT);
+
+ glb_func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + (u16)vf_id;
+ err = hinic3_set_vlan_ctx(nic_io, glb_func_id, vlan_tag,
+ NIC_CONFIG_ALL_QUEUE_VLAN_CTX,
+ opcode == HINIC3_CMD_OP_ADD);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to set VF %d vlan ctx, err: %d\n",
+ HW_VF_ID_TO_OS(vf_id), err);
+
+ /* rollback vlan config */
+ if (opcode == HINIC3_CMD_OP_DEL)
+ vf_vlan.opcode = HINIC3_CMD_OP_ADD;
+ else
+ vf_vlan.opcode = HINIC3_CMD_OP_DEL;
+ l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_CFG_VF_VLAN, &vf_vlan,
+ sizeof(vf_vlan), &vf_vlan, &out_size);
+ return err;
+ }
+
+ return 0;
+}
+
+/* This function may only be called by hinic3_ndo_set_vf_mac;
+ * other callers are not permitted.
+ */
+int hinic3_set_vf_mac(void *hwdev, int vf_id, unsigned char *mac_addr)
+{
+ struct vf_data_storage *vf_info;
+ struct hinic3_nic_io *nic_io;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+	/* duplicate request, so just return success */
+	if (ether_addr_equal(vf_info->user_mac_addr, mac_addr))
+		return 0;
+
+ ether_addr_copy(vf_info->user_mac_addr, mac_addr);
+
+ return 0;
+}
+
+int hinic3_add_vf_vlan(void *hwdev, int vf_id, u16 vlan, u8 qos)
+{
+ struct hinic3_nic_io *nic_io;
+ int err;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ err = hinic3_cfg_vf_vlan(nic_io, HINIC3_CMD_OP_ADD, vlan, qos, vf_id);
+ if (err)
+ return err;
+
+ nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_vlan = vlan;
+ nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_qos = qos;
+
+ nic_info(nic_io->dev_hdl, "Setting VLAN %u, QOS 0x%x on VF %d\n",
+ vlan, qos, HW_VF_ID_TO_OS(vf_id));
+
+ return 0;
+}
+
+int hinic3_kill_vf_vlan(void *hwdev, int vf_id)
+{
+ struct vf_data_storage *vf_infos;
+ struct hinic3_nic_io *nic_io;
+ int err;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ vf_infos = nic_io->vf_infos;
+
+ err = hinic3_cfg_vf_vlan(nic_io, HINIC3_CMD_OP_DEL,
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_vlan,
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_qos, vf_id);
+ if (err)
+ return err;
+
+ nic_info(nic_io->dev_hdl, "Remove VLAN %u on VF %d\n",
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_vlan,
+ HW_VF_ID_TO_OS(vf_id));
+
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_vlan = 0;
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_qos = 0;
+
+ return 0;
+}
+
+u16 hinic3_vf_info_vlanprio(void *hwdev, int vf_id)
+{
+ struct hinic3_nic_io *nic_io;
+ u16 pf_vlan, vlanprio;
+ u8 pf_qos;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ pf_vlan = nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_vlan;
+ pf_qos = nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].pf_qos;
+ vlanprio = (u16)(pf_vlan | (pf_qos << HINIC3_VLAN_PRIORITY_SHIFT));
+
+ return vlanprio;
+}
+
+int hinic3_set_vf_link_state(void *hwdev, u16 vf_id, int link)
+{
+ struct hinic3_nic_io *nic_io =
+ hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ struct vf_data_storage *vf_infos = nic_io->vf_infos;
+ u8 link_status = 0;
+
+ switch (link) {
+ case HINIC3_IFLA_VF_LINK_STATE_AUTO:
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].link_forced = false;
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].link_up = nic_io->link_status ?
+ true : false;
+ link_status = nic_io->link_status;
+ break;
+ case HINIC3_IFLA_VF_LINK_STATE_ENABLE:
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].link_forced = true;
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].link_up = true;
+ link_status = HINIC3_LINK_UP;
+ break;
+ case HINIC3_IFLA_VF_LINK_STATE_DISABLE:
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].link_forced = true;
+ vf_infos[HW_VF_ID_TO_OS(vf_id)].link_up = false;
+ link_status = HINIC3_LINK_DOWN;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ /* Notify the VF of its new link state */
+ hinic3_notify_vf_link_status(nic_io, vf_id, link_status);
+
+ return 0;
+}
+
+int hinic3_set_vf_spoofchk(void *hwdev, u16 vf_id, bool spoofchk)
+{
+ struct hinic3_cmd_spoofchk_set spoofchk_cfg;
+ struct vf_data_storage *vf_infos = NULL;
+ u16 out_size = sizeof(spoofchk_cfg);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ vf_infos = nic_io->vf_infos;
+
+ memset(&spoofchk_cfg, 0, sizeof(spoofchk_cfg));
+
+ spoofchk_cfg.func_id = hinic3_glb_pf_vf_offset(hwdev) + vf_id;
+ spoofchk_cfg.state = spoofchk ? 1 : 0;
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_SPOOPCHK_STATE,
+ &spoofchk_cfg,
+ sizeof(spoofchk_cfg), &spoofchk_cfg,
+ &out_size);
+	if (err || !out_size || spoofchk_cfg.msg_head.status) {
+		nic_err(nic_io->dev_hdl, "Failed to set VF(%d) spoofchk, err: %d, status: 0x%x, out size: 0x%x\n",
+			HW_VF_ID_TO_OS(vf_id), err,
+			spoofchk_cfg.msg_head.status, out_size);
+		return -EINVAL;
+	}
+
+	/* cache the new state only after the firmware accepts it */
+	vf_infos[HW_VF_ID_TO_OS(vf_id)].spoofchk = spoofchk;
+
+	return 0;
+}
+
+bool hinic3_vf_info_spoofchk(void *hwdev, int vf_id)
+{
+ struct hinic3_nic_io *nic_io;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ return nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].spoofchk;
+}
+
+#ifdef HAVE_NDO_SET_VF_TRUST
+int hinic3_set_vf_trust(void *hwdev, u16 vf_id, bool trust)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (vf_id > nic_io->max_vfs)
+ return -EINVAL;
+
+ nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].trust = trust;
+
+ return 0;
+}
+
+bool hinic3_get_vf_trust(void *hwdev, int vf_id)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+	/* this function returns bool, so error paths must return false */
+	if (!hwdev)
+		return false;
+
+	nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+	if (vf_id > nic_io->max_vfs)
+		return false;
+
+ return nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].trust;
+}
+#endif
+
+static int hinic3_set_vf_tx_rate_max_min(struct hinic3_nic_io *nic_io,
+ u16 vf_id, u32 max_rate, u32 min_rate)
+{
+ struct hinic3_cmd_tx_rate_cfg rate_cfg;
+ u16 out_size = sizeof(rate_cfg);
+ int err;
+
+ memset(&rate_cfg, 0, sizeof(rate_cfg));
+
+ rate_cfg.func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + vf_id;
+ rate_cfg.max_rate = max_rate;
+ rate_cfg.min_rate = min_rate;
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_SET_MAX_MIN_RATE,
+ &rate_cfg, sizeof(rate_cfg), &rate_cfg,
+ &out_size);
+ if (rate_cfg.msg_head.status || err || !out_size) {
+ nic_err(nic_io->dev_hdl, "Failed to set VF %d max rate %u, min rate %u, err: %d, status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf_id), max_rate, min_rate, err,
+ rate_cfg.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_set_vf_tx_rate(void *hwdev, u16 vf_id, u32 max_rate, u32 min_rate)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!HINIC3_SUPPORT_RATE_LIMIT(hwdev)) {
+		nic_err(nic_io->dev_hdl, "Current function doesn't support setting the vf rate limit\n");
+ return -EOPNOTSUPP;
+ }
+
+ err = hinic3_set_vf_tx_rate_max_min(nic_io, vf_id, max_rate, min_rate);
+ if (err)
+ return err;
+
+ nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].max_rate = max_rate;
+ nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].min_rate = min_rate;
+
+ return 0;
+}
+
+void hinic3_get_vf_config(void *hwdev, u16 vf_id, struct ifla_vf_info *ivi)
+{
+ struct vf_data_storage *vfinfo;
+ struct hinic3_nic_io *nic_io;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ vfinfo = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+
+ ivi->vf = HW_VF_ID_TO_OS(vf_id);
+ ether_addr_copy(ivi->mac, vfinfo->user_mac_addr);
+ ivi->vlan = vfinfo->pf_vlan;
+ ivi->qos = vfinfo->pf_qos;
+
+#ifdef HAVE_VF_SPOOFCHK_CONFIGURE
+ ivi->spoofchk = vfinfo->spoofchk;
+#endif
+
+#ifdef HAVE_NDO_SET_VF_TRUST
+ ivi->trusted = vfinfo->trust;
+#endif
+
+#ifdef HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
+ ivi->max_tx_rate = vfinfo->max_rate;
+ ivi->min_tx_rate = vfinfo->min_rate;
+#else
+ ivi->tx_rate = vfinfo->max_rate;
+#endif /* HAVE_NDO_SET_VF_MIN_MAX_TX_RATE */
+
+#ifdef HAVE_NDO_SET_VF_LINK_STATE
+ if (!vfinfo->link_forced)
+ ivi->linkstate = IFLA_VF_LINK_STATE_AUTO;
+ else if (vfinfo->link_up)
+ ivi->linkstate = IFLA_VF_LINK_STATE_ENABLE;
+ else
+ ivi->linkstate = IFLA_VF_LINK_STATE_DISABLE;
+#endif
+}
+
+static int hinic3_init_vf_infos(struct hinic3_nic_io *nic_io, u16 vf_id)
+{
+ struct vf_data_storage *vf_infos = nic_io->vf_infos;
+ u8 vf_link_state;
+
+ if (set_vf_link_state > HINIC3_IFLA_VF_LINK_STATE_DISABLE) {
+ nic_warn(nic_io->dev_hdl, "Module Parameter set_vf_link_state value %u is out of range, resetting to %d\n",
+ set_vf_link_state, HINIC3_IFLA_VF_LINK_STATE_AUTO);
+ set_vf_link_state = HINIC3_IFLA_VF_LINK_STATE_AUTO;
+ }
+
+ vf_link_state = set_vf_link_state;
+
+ switch (vf_link_state) {
+ case HINIC3_IFLA_VF_LINK_STATE_AUTO:
+ vf_infos[vf_id].link_forced = false;
+ break;
+ case HINIC3_IFLA_VF_LINK_STATE_ENABLE:
+ vf_infos[vf_id].link_forced = true;
+ vf_infos[vf_id].link_up = true;
+ break;
+ case HINIC3_IFLA_VF_LINK_STATE_DISABLE:
+ vf_infos[vf_id].link_forced = true;
+ vf_infos[vf_id].link_up = false;
+ break;
+ default:
+ nic_err(nic_io->dev_hdl, "Input parameter set_vf_link_state error: %u\n",
+ vf_link_state);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int vf_func_register(struct hinic3_nic_io *nic_io)
+{
+ struct hinic3_cmd_register_vf register_info;
+ u16 out_size = sizeof(register_info);
+ int err;
+
+ err = hinic3_register_vf_mbox_cb(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ nic_io->hwdev, hinic3_vf_event_handler);
+ if (err)
+ return err;
+
+ err = hinic3_register_vf_mbox_cb(nic_io->hwdev, HINIC3_MOD_HILINK,
+ nic_io->hwdev, hinic3_vf_mag_event_handler);
+ if (err)
+ goto reg_hilink_err;
+
+ memset(®ister_info, 0, sizeof(register_info));
+ register_info.op_register = 1;
+ register_info.support_extra_feature = 0;
+ err = hinic3_mbox_to_pf(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_NIC_CMD_VF_REGISTER,
+ ®ister_info, sizeof(register_info),
+ ®ister_info, &out_size, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err || !out_size || register_info.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to register VF, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, register_info.msg_head.status, out_size);
+ err = -EIO;
+ goto register_err;
+ }
+
+ return 0;
+
+register_err:
+ hinic3_unregister_vf_mbox_cb(nic_io->hwdev, HINIC3_MOD_HILINK);
+
+reg_hilink_err:
+ hinic3_unregister_vf_mbox_cb(nic_io->hwdev, HINIC3_MOD_L2NIC);
+
+ return err;
+}
+
+static int pf_init_vf_infos(struct hinic3_nic_io *nic_io)
+{
+ u32 size;
+ int err;
+ u16 i;
+
+ nic_io->max_vfs = hinic3_func_max_vf(nic_io->hwdev);
+ size = sizeof(*nic_io->vf_infos) * nic_io->max_vfs;
+ if (!size)
+ return 0;
+
+ nic_io->vf_infos = kzalloc(size, GFP_KERNEL);
+ if (!nic_io->vf_infos)
+ return -ENOMEM;
+
+ for (i = 0; i < nic_io->max_vfs; i++) {
+ err = hinic3_init_vf_infos(nic_io, i);
+ if (err)
+ goto init_vf_infos_err;
+ }
+
+ err = hinic3_register_pf_mbox_cb(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ nic_io->hwdev, hinic3_pf_mbox_handler);
+ if (err)
+ goto register_pf_mbox_cb_err;
+
+ err = hinic3_register_pf_mbox_cb(nic_io->hwdev, HINIC3_MOD_HILINK,
+ nic_io->hwdev, hinic3_pf_mag_mbox_handler);
+ if (err)
+ goto register_pf_mag_mbox_cb_err;
+
+ return 0;
+
+register_pf_mag_mbox_cb_err:
+ hinic3_unregister_pf_mbox_cb(nic_io->hwdev, HINIC3_MOD_L2NIC);
+register_pf_mbox_cb_err:
+init_vf_infos_err:
+ kfree(nic_io->vf_infos);
+
+ return err;
+}
+
+int hinic3_vf_func_init(struct hinic3_nic_io *nic_io)
+{
+ int err;
+
+ if (hinic3_func_type(nic_io->hwdev) == TYPE_VF)
+ return vf_func_register(nic_io);
+
+ err = hinic3_register_mgmt_msg_cb(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ nic_io->hwdev, hinic3_pf_event_handler);
+ if (err)
+ return err;
+
+ err = hinic3_register_mgmt_msg_cb(nic_io->hwdev, HINIC3_MOD_HILINK,
+ nic_io->hwdev, hinic3_pf_mag_event_handler);
+ if (err)
+ goto register_mgmt_msg_cb_err;
+
+ err = pf_init_vf_infos(nic_io);
+ if (err)
+ goto pf_init_vf_infos_err;
+
+ return 0;
+
+pf_init_vf_infos_err:
+ hinic3_unregister_mgmt_msg_cb(nic_io->hwdev, HINIC3_MOD_HILINK);
+register_mgmt_msg_cb_err:
+ hinic3_unregister_mgmt_msg_cb(nic_io->hwdev, HINIC3_MOD_L2NIC);
+
+ return err;
+}
+
+void hinic3_vf_func_free(struct hinic3_nic_io *nic_io)
+{
+ struct hinic3_cmd_register_vf unregister;
+ u16 out_size = sizeof(unregister);
+ int err;
+
+ memset(&unregister, 0, sizeof(unregister));
+ unregister.op_register = 0;
+ if (hinic3_func_type(nic_io->hwdev) == TYPE_VF) {
+ err = hinic3_mbox_to_pf(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_NIC_CMD_VF_REGISTER,
+ &unregister, sizeof(unregister),
+ &unregister, &out_size, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err || !out_size || unregister.msg_head.status)
+ nic_err(nic_io->dev_hdl, "Failed to unregister VF, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, unregister.msg_head.status, out_size);
+
+ hinic3_unregister_vf_mbox_cb(nic_io->hwdev, HINIC3_MOD_L2NIC);
+ } else {
+ if (nic_io->vf_infos) {
+ hinic3_unregister_pf_mbox_cb(nic_io->hwdev, HINIC3_MOD_HILINK);
+ hinic3_unregister_pf_mbox_cb(nic_io->hwdev, HINIC3_MOD_L2NIC);
+ hinic3_clear_vfs_info(nic_io->hwdev);
+ kfree(nic_io->vf_infos);
+ }
+ hinic3_unregister_mgmt_msg_cb(nic_io->hwdev, HINIC3_MOD_HILINK);
+ hinic3_unregister_mgmt_msg_cb(nic_io->hwdev, HINIC3_MOD_L2NIC);
+ }
+}
+
+static void clear_vf_infos(void *hwdev, u16 vf_id)
+{
+ struct vf_data_storage *vf_infos;
+ struct hinic3_nic_io *nic_io;
+ u16 func_id;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ func_id = hinic3_glb_pf_vf_offset(hwdev) + vf_id;
+ vf_infos = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+ if (vf_infos->use_specified_mac)
+ hinic3_del_mac(hwdev, vf_infos->drv_mac_addr,
+ vf_infos->pf_vlan, func_id, HINIC3_CHANNEL_NIC);
+
+ if (hinic3_vf_info_vlanprio(hwdev, vf_id))
+ hinic3_kill_vf_vlan(hwdev, vf_id);
+
+ if (vf_infos->max_rate)
+ hinic3_set_vf_tx_rate(hwdev, vf_id, 0, 0);
+
+ if (vf_infos->spoofchk)
+ hinic3_set_vf_spoofchk(hwdev, vf_id, false);
+
+#ifdef HAVE_NDO_SET_VF_TRUST
+ if (vf_infos->trust)
+ hinic3_set_vf_trust(hwdev, vf_id, false);
+#endif
+
+ memset(vf_infos, 0, sizeof(*vf_infos));
+ /* set vf_infos to default */
+ hinic3_init_vf_infos(nic_io, HW_VF_ID_TO_OS(vf_id));
+}
+
+void hinic3_clear_vfs_info(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io =
+ hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ u16 i;
+
+ for (i = 0; i < nic_io->max_vfs; i++)
+ clear_vf_infos(hwdev, OS_VF_ID_TO_HW(i));
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cmd.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cmd.h
new file mode 100644
index 000000000000..31e224ab1095
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_cmd.h
@@ -0,0 +1,159 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C), 2001-2011, Huawei Tech. Co., Ltd.
+ * File Name : hinic3_nic_cmd.h
+ * Version : Initial Draft
+ * Created : 2019/4/25
+ * Last Modified :
+ * Description : NIC Commands between Driver and MPU
+ * Function List :
+ */
+
+#ifndef HINIC3_NIC_CMD_H
+#define HINIC3_NIC_CMD_H
+
+/* Commands between NIC to MPU
+ */
+enum hinic3_nic_cmd {
+	HINIC3_NIC_CMD_VF_REGISTER = 0, /* only for PF and VF drivers */
+
+ /* FUNC CFG */
+ HINIC3_NIC_CMD_SET_FUNC_TBL = 5,
+ HINIC3_NIC_CMD_SET_VPORT_ENABLE,
+ HINIC3_NIC_CMD_SET_RX_MODE,
+ HINIC3_NIC_CMD_SQ_CI_ATTR_SET,
+ HINIC3_NIC_CMD_GET_VPORT_STAT,
+ HINIC3_NIC_CMD_CLEAN_VPORT_STAT,
+ HINIC3_NIC_CMD_CLEAR_QP_RESOURCE,
+ HINIC3_NIC_CMD_CFG_FLEX_QUEUE,
+ /* LRO CFG */
+ HINIC3_NIC_CMD_CFG_RX_LRO,
+ HINIC3_NIC_CMD_CFG_LRO_TIMER,
+ HINIC3_NIC_CMD_FEATURE_NEGO,
+ HINIC3_NIC_CMD_CFG_LOCAL_LRO_STATE,
+
+ HINIC3_NIC_CMD_CACHE_OUT_QP_RES,
+
+ /* MAC & VLAN CFG */
+ HINIC3_NIC_CMD_GET_MAC = 20,
+ HINIC3_NIC_CMD_SET_MAC,
+ HINIC3_NIC_CMD_DEL_MAC,
+ HINIC3_NIC_CMD_UPDATE_MAC,
+ HINIC3_NIC_CMD_GET_ALL_DEFAULT_MAC,
+
+ HINIC3_NIC_CMD_CFG_FUNC_VLAN,
+ HINIC3_NIC_CMD_SET_VLAN_FILTER_EN,
+ HINIC3_NIC_CMD_SET_RX_VLAN_OFFLOAD,
+ HINIC3_NIC_CMD_SMAC_CHECK_STATE,
+
+ /* SR-IOV */
+ HINIC3_NIC_CMD_CFG_VF_VLAN = 40,
+ HINIC3_NIC_CMD_SET_SPOOPCHK_STATE,
+ /* RATE LIMIT */
+ HINIC3_NIC_CMD_SET_MAX_MIN_RATE,
+
+ /* RSS CFG */
+ HINIC3_NIC_CMD_RSS_CFG = 60,
+ HINIC3_NIC_CMD_RSS_TEMP_MGR, /* TODO: delete after implement nego cmd */
+ HINIC3_NIC_CMD_GET_RSS_CTX_TBL, /* TODO: delete: move to ucode cmd */
+ HINIC3_NIC_CMD_CFG_RSS_HASH_KEY,
+ HINIC3_NIC_CMD_CFG_RSS_HASH_ENGINE,
+ HINIC3_NIC_CMD_SET_RSS_CTX_TBL_INTO_FUNC,
+ /* IP checksum error packets, enable rss quadruple hash */
+ HINIC3_NIC_CMD_IPCS_ERR_RSS_ENABLE_OP = 66,
+
+ /* PPA/FDIR */
+ HINIC3_NIC_CMD_ADD_TC_FLOW = 80,
+ HINIC3_NIC_CMD_DEL_TC_FLOW,
+ HINIC3_NIC_CMD_GET_TC_FLOW,
+ HINIC3_NIC_CMD_FLUSH_TCAM,
+ HINIC3_NIC_CMD_CFG_TCAM_BLOCK,
+ HINIC3_NIC_CMD_ENABLE_TCAM,
+ HINIC3_NIC_CMD_GET_TCAM_BLOCK,
+ HINIC3_NIC_CMD_CFG_PPA_TABLE_ID,
+ HINIC3_NIC_CMD_SET_PPA_EN = 88,
+ HINIC3_NIC_CMD_CFG_PPA_MODE,
+ HINIC3_NIC_CMD_CFG_PPA_FLUSH,
+ HINIC3_NIC_CMD_SET_FDIR_STATUS,
+ HINIC3_NIC_CMD_GET_PPA_COUNTER,
+
+ /* PORT CFG */
+ HINIC3_NIC_CMD_SET_PORT_ENABLE = 100,
+ HINIC3_NIC_CMD_CFG_PAUSE_INFO,
+
+ HINIC3_NIC_CMD_SET_PORT_CAR,
+ HINIC3_NIC_CMD_SET_ER_DROP_PKT,
+
+ HINIC3_NIC_CMD_VF_COS,
+ HINIC3_NIC_CMD_SETUP_COS_MAPPING,
+ HINIC3_NIC_CMD_SET_ETS,
+ HINIC3_NIC_CMD_SET_PFC,
+ HINIC3_NIC_CMD_QOS_ETS,
+ HINIC3_NIC_CMD_QOS_PFC,
+ HINIC3_NIC_CMD_QOS_DCB_STATE,
+ HINIC3_NIC_CMD_QOS_PORT_CFG,
+ HINIC3_NIC_CMD_QOS_MAP_CFG,
+ HINIC3_NIC_CMD_FORCE_PKT_DROP,
+ HINIC3_NIC_CMD_TX_PAUSE_EXCP_NOTICE = 118,
+ HINIC3_NIC_CMD_INQUIRT_PAUSE_CFG = 119,
+
+ /* MISC */
+ HINIC3_NIC_CMD_BIOS_CFG = 120,
+ HINIC3_NIC_CMD_SET_FIRMWARE_CUSTOM_PACKETS_MSG,
+
+ /* BOND */
+ HINIC3_NIC_CMD_BOND_DEV_CREATE = 134,
+ HINIC3_NIC_CMD_BOND_DEV_DELETE,
+ HINIC3_NIC_CMD_BOND_DEV_OPEN_CLOSE,
+ HINIC3_NIC_CMD_BOND_INFO_GET,
+ HINIC3_NIC_CMD_BOND_ACTIVE_INFO_GET,
+ HINIC3_NIC_CMD_BOND_ACTIVE_NOTICE,
+
+ /* DFX */
+ HINIC3_NIC_CMD_GET_SM_TABLE = 140,
+ HINIC3_NIC_CMD_RD_LINE_TBL,
+
+ HINIC3_NIC_CMD_SET_UCAPTURE_OPT = 160, /* TODO: move to roce */
+ HINIC3_NIC_CMD_SET_VHD_CFG,
+
+ /* TODO: move to HILINK */
+ HINIC3_NIC_CMD_GET_PORT_STAT = 200,
+ HINIC3_NIC_CMD_CLEAN_PORT_STAT,
+ HINIC3_NIC_CMD_CFG_LOOPBACK_MODE,
+ HINIC3_NIC_CMD_GET_SFP_QSFP_INFO,
+ HINIC3_NIC_CMD_SET_SFP_STATUS,
+ HINIC3_NIC_CMD_GET_LIGHT_MODULE_ABS,
+ HINIC3_NIC_CMD_GET_LINK_INFO,
+ HINIC3_NIC_CMD_CFG_AN_TYPE,
+ HINIC3_NIC_CMD_GET_PORT_INFO,
+ HINIC3_NIC_CMD_SET_LINK_SETTINGS,
+ HINIC3_NIC_CMD_ACTIVATE_BIOS_LINK_CFG,
+ HINIC3_NIC_CMD_RESTORE_LINK_CFG,
+ HINIC3_NIC_CMD_SET_LINK_FOLLOW,
+ HINIC3_NIC_CMD_GET_LINK_STATE,
+ HINIC3_NIC_CMD_LINK_STATUS_REPORT,
+ HINIC3_NIC_CMD_CABLE_PLUG_EVENT,
+ HINIC3_NIC_CMD_LINK_ERR_EVENT,
+ HINIC3_NIC_CMD_SET_LED_STATUS,
+
+ HINIC3_NIC_CMD_MAX = 256,
+};
+
+/* NIC CMDQ MODE */
+enum hinic3_ucode_cmd {
+ HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0,
+ HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT,
+ HINIC3_UCODE_CMD_ARM_SQ,
+ HINIC3_UCODE_CMD_ARM_RQ,
+ HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE,
+ HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE,
+ HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE,
+ HINIC3_UCODE_CMD_GET_RSS_CONTEXT_TABLE,
+ HINIC3_UCODE_CMD_SET_IQ_ENABLE,
+ HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10,
+ HINIC3_UCODE_CMD_MODIFY_VLAN_CTX,
+ HINIC3_UCODE_CMD_PPA_HASH_TABLE,
+ HINIC3_UCODE_CMD_RXQ_INFO_GET = 13,
+};
+
+#endif /* HINIC3_NIC_CMD_H */
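
Every command in the enum above travels through the same synchronous
request/reply pattern seen throughout hinic3_nic_cfg.c: zero the command
struct, fill in the function id, send with one buffer serving as both input
and output, then check the transport error, the reply size, and the firmware
status together. A condensed sketch — both structs below are hypothetical
stand-ins, since each real command has its own layout beginning with a
msg_head:

struct sketch_msg_head {	/* stand-in for the real msg_head layout */
	u8 status;
	u8 version;
	u8 rsvd0[6];
};

struct sketch_pkt_drop_cmd {	/* hypothetical command body */
	struct sketch_msg_head msg_head;
	u8 port;
	u8 rsvd1[3];
};

static int sketch_force_pkt_drop(void *hwdev)
{
	struct sketch_pkt_drop_cmd cmd;
	u16 out_size = sizeof(cmd);
	int err;

	memset(&cmd, 0, sizeof(cmd));
	cmd.port = 0;	/* placeholder */

	/* in and out share one buffer; out_size is updated in place */
	err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_FORCE_PKT_DROP,
				     &cmd, sizeof(cmd), &cmd, &out_size);

	/* all three failure signals must be checked together */
	if (err || !out_size || cmd.msg_head.status)
		return -EINVAL;

	return 0;
}
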
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.c b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.c
new file mode 100644
index 000000000000..17d48c4d6e51
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.c
@@ -0,0 +1,146 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_mt.h"
+#include "hinic3_nic_qp.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic.h"
+
+int hinic3_dbg_get_wqe_info(void *hwdev, u16 q_id, u16 idx, u16 wqebb_cnt,
+ u8 *wqe, const u16 *wqe_size, enum hinic3_queue_type q_type)
+{
+ struct hinic3_io_queue *queue = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ void *src_wqebb = NULL;
+ u32 i, offset;
+
+ if (!hwdev) {
+ pr_err("hwdev is NULL.\n");
+ return -EINVAL;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (q_id >= nic_io->num_qps) {
+		pr_err("q_id[%u] >= num_qps[%u].\n", q_id, nic_io->num_qps);
+ return -EINVAL;
+ }
+
+ queue = (q_type == HINIC3_SQ) ? &nic_io->sq[q_id] : &nic_io->rq[q_id];
+
+ if ((idx + wqebb_cnt) > queue->wq.q_depth) {
+		pr_err("(idx[%u] + wqebb_cnt[%u]) > q_depth[%u].\n", idx, wqebb_cnt, queue->wq.q_depth);
+ return -EINVAL;
+ }
+
+ if (*wqe_size != (queue->wq.wqebb_size * wqebb_cnt)) {
+		pr_err("Unexpected out buf size from user: %u, expect: %d\n",
+ *wqe_size, (queue->wq.wqebb_size * wqebb_cnt));
+ return -EINVAL;
+ }
+
+ for (i = 0; i < wqebb_cnt; i++) {
+ src_wqebb = hinic3_wq_wqebb_addr(&queue->wq, (u16)WQ_MASK_IDX(&queue->wq, idx + i));
+ offset = queue->wq.wqebb_size * i;
+ memcpy(wqe + offset, src_wqebb, queue->wq.wqebb_size);
+ }
+
+ return 0;
+}
+
+int hinic3_dbg_get_sq_info(void *hwdev, u16 q_id, struct nic_sq_info *sq_info,
+ u32 msg_size)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_io_queue *sq = NULL;
+
+ if (!hwdev || !sq_info) {
+ pr_err("hwdev or sq_info is NULL.\n");
+ return -EINVAL;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (q_id >= nic_io->num_qps) {
+ nic_err(nic_io->dev_hdl, "Input queue id(%u) is larger than the actual queue number\n",
+ q_id);
+ return -EINVAL;
+ }
+
+ if (msg_size != sizeof(*sq_info)) {
+		nic_err(nic_io->dev_hdl, "Unexpected out buf size from user: %u, expect: %lu\n",
+ msg_size, sizeof(*sq_info));
+ return -EINVAL;
+ }
+
+ sq = &nic_io->sq[q_id];
+
+ sq_info->q_id = q_id;
+ sq_info->pi = hinic3_get_sq_local_pi(sq);
+ sq_info->ci = hinic3_get_sq_local_ci(sq);
+ sq_info->fi = hinic3_get_sq_hw_ci(sq);
+ sq_info->q_depth = sq->wq.q_depth;
+ sq_info->wqebb_size = sq->wq.wqebb_size;
+
+ sq_info->ci_addr = sq->tx.cons_idx_addr;
+
+ sq_info->cla_addr = sq->wq.wq_block_paddr;
+ sq_info->slq_handle = sq;
+
+ sq_info->doorbell.map_addr = (u64 *)sq->db_addr;
+
+ return 0;
+}
+
+int hinic3_dbg_get_rq_info(void *hwdev, u16 q_id, struct nic_rq_info *rq_info,
+ u32 msg_size)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_io_queue *rq = NULL;
+
+ if (!hwdev || !rq_info) {
+ pr_err("hwdev or rq_info is NULL.\n");
+ return -EINVAL;
+ }
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (q_id >= nic_io->num_qps) {
+ nic_err(nic_io->dev_hdl, "Input queue id(%u) is larger than the actual queue number\n",
+ q_id);
+ return -EINVAL;
+ }
+
+ if (msg_size != sizeof(*rq_info)) {
+		nic_err(nic_io->dev_hdl, "Unexpected out buf size from user: %u, expect: %lu\n",
+ msg_size, sizeof(*rq_info));
+ return -EINVAL;
+ }
+
+ rq = &nic_io->rq[q_id];
+
+ rq_info->q_id = q_id;
+
+ rq_info->hw_pi = cpu_to_be16(*rq->rx.pi_virt_addr);
+
+ rq_info->wqebb_size = rq->wq.wqebb_size;
+ rq_info->q_depth = (u16)rq->wq.q_depth;
+
+ rq_info->buf_len = nic_io->rx_buff_len;
+
+ rq_info->slq_handle = rq;
+
+ rq_info->ci_wqe_page_addr = hinic3_wq_get_first_wqe_page_addr(&rq->wq);
+ rq_info->ci_cla_tbl_addr = rq->wq.wq_block_paddr;
+
+ rq_info->msix_idx = rq->msix_entry_idx;
+
+ return 0;
+}
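
The debug getters above insist that msg_size match the expected struct size
exactly, so the natural calling pattern is sizeof() on the output struct. A
small sketch of a diagnostic dump built on hinic3_dbg_get_sq_info():

static int sketch_dump_sq(void *hwdev, u16 q_id)
{
	struct nic_sq_info sq_info = {0};
	int err;

	err = hinic3_dbg_get_sq_info(hwdev, q_id, &sq_info, sizeof(sq_info));
	if (err)
		return err;

	pr_info("sq%u: pi=%u ci=%u hw_ci=%u depth=%u wqebb=%u\n",
		sq_info.q_id, sq_info.pi, sq_info.ci, sq_info.fi,
		sq_info.q_depth, sq_info.wqebb_size);

	return 0;
}
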
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.h
new file mode 100644
index 000000000000..4ba96d5fbb32
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_DBG_H
+#define HINIC3_NIC_DBG_H
+
+#include "hinic3_mt.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_srv_nic.h"
+
+int hinic3_dbg_get_sq_info(void *hwdev, u16 q_id, struct nic_sq_info *sq_info,
+ u32 msg_size);
+
+int hinic3_dbg_get_rq_info(void *hwdev, u16 q_id, struct nic_rq_info *rq_info,
+ u32 msg_size);
+
+int hinic3_dbg_get_wqe_info(void *hwdev, u16 q_id, u16 idx, u16 wqebb_cnt,
+ u8 *wqe, const u16 *wqe_size,
+ enum hinic3_queue_type q_type);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h
new file mode 100644
index 000000000000..2967311aab76
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_dev.h
@@ -0,0 +1,387 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_DEV_H
+#define HINIC3_NIC_DEV_H
+
+#include <linux/netdevice.h>
+#include <linux/semaphore.h>
+#include <linux/types.h>
+#include <linux/bitops.h>
+
+#include "ossl_knl.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
+#include "hinic3_dcb.h"
+
+#define HINIC3_NIC_DRV_NAME "hinic3"
+#define HINIC3_NIC_DRV_VERSION ""
+
+#define HINIC3_FUNC_IS_VF(hwdev) (hinic3_func_type(hwdev) == TYPE_VF)
+
+#define HINIC3_AVG_PKT_SMALL 256U
+#define HINIC3_MODERATONE_DELAY HZ
+
+#define LP_PKT_CNT 64
+
+enum hinic3_flags {
+ HINIC3_INTF_UP,
+ HINIC3_MAC_FILTER_CHANGED,
+ HINIC3_LP_TEST,
+ HINIC3_RSS_ENABLE,
+ HINIC3_DCB_ENABLE,
+ HINIC3_SAME_RXTX,
+ HINIC3_INTR_ADAPT,
+ HINIC3_UPDATE_MAC_FILTER,
+ HINIC3_CHANGE_RES_INVALID,
+ HINIC3_RSS_DEFAULT_INDIR,
+ HINIC3_FORCE_LINK_UP,
+ HINIC3_BONDING_MASTER,
+ HINIC3_AUTONEG_RESET,
+ HINIC3_RXQ_RECOVERY,
+};
+
+#define HINIC3_CHANNEL_RES_VALID(nic_dev) \
+ (test_bit(HINIC3_INTF_UP, &(nic_dev)->flags) && \
+ !test_bit(HINIC3_CHANGE_RES_INVALID, &(nic_dev)->flags))
+
+#define RX_BUFF_NUM_PER_PAGE 2
+
+#define VLAN_BITMAP_BYTE_SIZE(nic_dev) (sizeof(*(nic_dev)->vlan_bitmap))
+#define VLAN_BITMAP_BITS_SIZE(nic_dev) (VLAN_BITMAP_BYTE_SIZE(nic_dev) * 8)
+#define VLAN_NUM_BITMAPS(nic_dev) (VLAN_N_VID / \
+ VLAN_BITMAP_BITS_SIZE(nic_dev))
+#define VLAN_BITMAP_SIZE(nic_dev) (VLAN_N_VID / \
+ VLAN_BITMAP_BYTE_SIZE(nic_dev))
+#define VID_LINE(nic_dev, vid) ((vid) / VLAN_BITMAP_BITS_SIZE(nic_dev))
+#define VID_COL(nic_dev, vid) ((vid) & (VLAN_BITMAP_BITS_SIZE(nic_dev) - 1))
+
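
On a 64-bit kernel each bitmap word covers 64 VLAN IDs, so VID_LINE()/VID_COL() split a vid into a word index and a bit offset. A minimal sketch of marking vid 100 active, assuming vlan_bitmap was sized with VLAN_BITMAP_SIZE():

	u16 vid = 100;

	/* vid 100 -> word 1 (100 / 64), bit 36 (100 & 63) on 64-bit */
	set_bit(VID_COL(nic_dev, vid),
		&nic_dev->vlan_bitmap[VID_LINE(nic_dev, vid)]);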
+#define NIC_DRV_DEFAULT_FEATURE NIC_F_ALL_MASK
+
+enum hinic3_event_work_flags {
+ EVENT_WORK_TX_TIMEOUT,
+};
+
+enum hinic3_rx_mode_state {
+ HINIC3_HW_PROMISC_ON,
+ HINIC3_HW_ALLMULTI_ON,
+ HINIC3_PROMISC_FORCE_ON,
+ HINIC3_ALLMULTI_FORCE_ON,
+};
+
+enum mac_filter_state {
+ HINIC3_MAC_WAIT_HW_SYNC,
+ HINIC3_MAC_HW_SYNCED,
+ HINIC3_MAC_WAIT_HW_UNSYNC,
+ HINIC3_MAC_HW_UNSYNCED,
+};
+
+struct hinic3_mac_filter {
+ struct list_head list;
+ u8 addr[ETH_ALEN];
+ unsigned long state;
+};
+
+struct hinic3_irq {
+ struct net_device *netdev;
+ /* IRQ corresponding index number */
+ u16 msix_entry_idx;
+ u16 rsvd1;
+ u32 irq_id; /* The IRQ number from OS */
+
+ char irq_name[IFNAMSIZ + 16];
+ struct napi_struct napi;
+ cpumask_t affinity_mask;
+ struct hinic3_txq *txq;
+ struct hinic3_rxq *rxq;
+};
+
+struct hinic3_intr_coal_info {
+ u8 pending_limt;
+ u8 coalesce_timer_cfg;
+ u8 resend_timer_cfg;
+
+ u64 pkt_rate_low;
+ u8 rx_usecs_low;
+ u8 rx_pending_limt_low;
+ u64 pkt_rate_high;
+ u8 rx_usecs_high;
+ u8 rx_pending_limt_high;
+
+ u8 user_set_intr_coal_flag;
+};
+
+struct hinic3_dyna_txrxq_params {
+ u16 num_qps;
+ u8 num_cos;
+ u8 rsvd1;
+ u32 sq_depth;
+ u32 rq_depth;
+
+ struct hinic3_dyna_txq_res *txqs_res;
+ struct hinic3_dyna_rxq_res *rxqs_res;
+ struct hinic3_irq *irq_cfg;
+};
+
+#define HINIC3_NIC_STATS_INC(nic_dev, field) \
+do { \
+ u64_stats_update_begin(&(nic_dev)->stats.syncp); \
+ (nic_dev)->stats.field++; \
+ u64_stats_update_end(&(nic_dev)->stats.syncp); \
+} while (0)
+
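
The HINIC3_NIC_STATS_INC() writer pairs with the standard u64_stats reader loop; a minimal reader sketch, assuming HAVE_NDO_GET_STATS64 (the helper name is illustrative):

	static u64 snapshot_tx_timeouts(struct hinic3_nic_dev *nic_dev)
	{
		unsigned int start;
		u64 val;

		do {
			start = u64_stats_fetch_begin(&nic_dev->stats.syncp);
			val = nic_dev->stats.netdev_tx_timeout;
		} while (u64_stats_fetch_retry(&nic_dev->stats.syncp, start));

		return val;
	}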
+struct hinic3_nic_stats {
+ u64 netdev_tx_timeout;
+
+ /* Subdivision statistics shown in the private tool */
+ u64 tx_carrier_off_drop;
+ u64 tx_invalid_qid;
+ u64 rsvd1;
+ u64 rsvd2;
+#ifdef HAVE_NDO_GET_STATS64
+ struct u64_stats_sync syncp;
+#else
+ struct u64_stats_sync_empty syncp;
+#endif
+};
+
+#define HINIC3_TCAM_DYNAMIC_BLOCK_SIZE 16
+#define HINIC3_MAX_TCAM_FILTERS 512
+
+#define HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_index) \
+ (HINIC3_TCAM_DYNAMIC_BLOCK_SIZE * (block_index))
+
+struct hinic3_rx_flow_rule {
+ struct list_head rules;
+ int tot_num_rules;
+};
+
+struct hinic3_tcam_dynamic_block {
+ struct list_head block_list;
+ u16 dynamic_block_id;
+ u16 dynamic_index_cnt;
+ u8 dynamic_index_used[HINIC3_TCAM_DYNAMIC_BLOCK_SIZE];
+};
+
+struct hinic3_tcam_dynamic_block_info {
+ struct list_head tcam_dynamic_list;
+ u16 dynamic_block_cnt;
+};
+
+struct hinic3_tcam_filter {
+ struct list_head tcam_filter_list;
+ u16 dynamic_block_id;
+ u16 index;
+ struct tag_tcam_key tcam_key;
+ u16 queue;
+};
+
+/* function level struct info */
+struct hinic3_tcam_info {
+ u16 tcam_rule_nums;
+ struct list_head tcam_list;
+ struct hinic3_tcam_dynamic_block_info tcam_dynamic_info;
+};
+
+struct hinic3_nic_dev {
+ struct pci_dev *pdev;
+ struct net_device *netdev;
+ struct hinic3_lld_dev *lld_dev;
+ void *hwdev;
+
+ int poll_weight;
+ u32 rsvd1;
+ unsigned long *vlan_bitmap;
+
+ u16 max_qps;
+
+ u32 msg_enable;
+ unsigned long flags;
+
+ u32 lro_replenish_thld;
+ u32 dma_rx_buff_size;
+ u16 rx_buff_len;
+ u32 page_order;
+
+ /* RSS related variables */
+ u8 rss_hash_engine;
+ struct nic_rss_type rss_type;
+ u8 *rss_hkey;
+ /* hkey in big endian */
+ u32 *rss_hkey_be;
+ u32 *rss_indir;
+
+ u8 cos_config_num_max;
+ u8 func_dft_cos_bitmap;
+ u16 port_dft_cos_bitmap; /* used to tool validity check */
+
+ struct hinic3_dcb_config hw_dcb_cfg;
+ struct hinic3_dcb_config wanted_dcb_cfg;
+ struct hinic3_dcb_config dcb_cfg;
+ unsigned long dcb_flags;
+ int disable_port_cnt;
+ /* lock for disable or enable traffic flow */
+ struct semaphore dcb_sem;
+
+ struct hinic3_intr_coal_info *intr_coalesce;
+ unsigned long last_moder_jiffies;
+ u32 adaptive_rx_coal;
+ u8 intr_coal_set_flag;
+
+#ifndef HAVE_NETDEV_STATS_IN_NETDEV
+ struct net_device_stats net_stats;
+#endif
+
+ struct hinic3_nic_stats stats;
+
+ /* lock for nic resource */
+ struct mutex nic_mutex;
+ bool force_port_disable;
+ struct semaphore port_state_sem;
+ u8 link_status;
+
+ struct nic_service_cap nic_cap;
+
+ struct hinic3_txq *txqs;
+ struct hinic3_rxq *rxqs;
+ struct hinic3_dyna_txrxq_params q_params;
+
+ u16 num_qp_irq;
+ struct irq_info *qps_irq_info;
+
+ struct workqueue_struct *workq;
+
+ struct work_struct rx_mode_work;
+ struct delayed_work moderation_task;
+
+ struct list_head uc_filter_list;
+ struct list_head mc_filter_list;
+ unsigned long rx_mod_state;
+ int netdev_uc_cnt;
+ int netdev_mc_cnt;
+
+ int lb_test_rx_idx;
+ int lb_pkt_len;
+ u8 *lb_test_rx_buf;
+
+ struct hinic3_tcam_info tcam;
+ struct hinic3_rx_flow_rule rx_flow_rule;
+
+#ifdef HAVE_XDP_SUPPORT
+ struct bpf_prog *xdp_prog;
+#endif
+
+ struct delayed_work periodic_work;
+ /* reference to enum hinic3_event_work_flags */
+ unsigned long event_flag;
+
+ struct hinic3_nic_prof_attr *prof_attr;
+ struct hinic3_prof_adapter *prof_adap;
+ u64 rsvd8[7];
+ u32 rsvd9;
+ u32 rxq_get_err_times;
+ struct delayed_work rxq_check_work;
+};
+
+#define hinic_msg(level, nic_dev, msglvl, format, arg...) \
+do { \
+ if ((nic_dev)->netdev && (nic_dev)->netdev->reg_state \
+ == NETREG_REGISTERED) \
+ nicif_##level((nic_dev), msglvl, (nic_dev)->netdev, \
+ format, ## arg); \
+ else \
+ nic_##level(&(nic_dev)->pdev->dev, \
+ format, ## arg); \
+} while (0)
+
+#define hinic3_info(nic_dev, msglvl, format, arg...) \
+ hinic_msg(info, nic_dev, msglvl, format, ## arg)
+
+#define hinic3_warn(nic_dev, msglvl, format, arg...) \
+ hinic_msg(warn, nic_dev, msglvl, format, ## arg)
+
+#define hinic3_err(nic_dev, msglvl, format, arg...) \
+ hinic_msg(err, nic_dev, msglvl, format, ## arg)
+
+struct hinic3_uld_info *get_nic_uld_info(void);
+
+u32 hinic3_get_io_stats_size(const struct hinic3_nic_dev *nic_dev);
+
+void hinic3_get_io_stats(const struct hinic3_nic_dev *nic_dev, void *stats);
+
+int hinic3_open(struct net_device *netdev);
+
+int hinic3_close(struct net_device *netdev);
+
+void hinic3_set_ethtool_ops(struct net_device *netdev);
+
+void hinic3vf_set_ethtool_ops(struct net_device *netdev);
+
+int nic_ioctl(void *uld_dev, u32 cmd, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size);
+
+void hinic3_update_num_qps(struct net_device *netdev);
+
+int hinic3_qps_irq_init(struct hinic3_nic_dev *nic_dev);
+
+void hinic3_qps_irq_deinit(struct hinic3_nic_dev *nic_dev);
+
+void hinic3_set_netdev_ops(struct hinic3_nic_dev *nic_dev);
+
+bool hinic3_is_netdev_ops_match(const struct net_device *netdev);
+
+int hinic3_set_hw_features(struct hinic3_nic_dev *nic_dev);
+
+void hinic3_set_rx_mode_work(struct work_struct *work);
+
+void hinic3_clean_mac_list_filter(struct hinic3_nic_dev *nic_dev);
+
+void hinic3_get_strings(struct net_device *netdev, u32 stringset, u8 *data);
+
+void hinic3_get_ethtool_stats(struct net_device *netdev,
+ struct ethtool_stats *stats, u64 *data);
+
+int hinic3_get_sset_count(struct net_device *netdev, int sset);
+
+int hinic3_force_port_disable(struct hinic3_nic_dev *nic_dev);
+
+int hinic3_force_set_port_state(struct hinic3_nic_dev *nic_dev, bool enable);
+
+int hinic3_maybe_set_port_state(struct hinic3_nic_dev *nic_dev, bool enable);
+
+#ifdef ETHTOOL_GLINKSETTINGS
+#ifndef XENSERVER_HAVE_NEW_ETHTOOL_OPS
+int hinic3_get_link_ksettings(struct net_device *netdev,
+ struct ethtool_link_ksettings *link_settings);
+int hinic3_set_link_ksettings(struct net_device *netdev,
+ const struct ethtool_link_ksettings
+ *link_settings);
+#endif
+#endif
+
+#ifndef HAVE_NEW_ETHTOOL_LINK_SETTINGS_ONLY
+int hinic3_get_settings(struct net_device *netdev, struct ethtool_cmd *ep);
+int hinic3_set_settings(struct net_device *netdev,
+ struct ethtool_cmd *link_settings);
+#endif
+
+void hinic3_auto_moderation_work(struct work_struct *work);
+
+typedef void (*hinic3_reopen_handler)(struct hinic3_nic_dev *nic_dev,
+ const void *priv_data);
+int hinic3_change_channel_settings(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_dyna_txrxq_params *trxq_params,
+ hinic3_reopen_handler reopen_handler,
+ const void *priv_data);
+
+void hinic3_link_status_change(struct hinic3_nic_dev *nic_dev, bool status);
+
+#ifdef HAVE_XDP_SUPPORT
+bool hinic3_is_xdp_enable(struct hinic3_nic_dev *nic_dev);
+int hinic3_xdp_max_mtu(struct hinic3_nic_dev *nic_dev);
+#endif
+
+#endif
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_event.c b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_event.c
new file mode 100644
index 000000000000..57cf07cee554
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_event.c
@@ -0,0 +1,580 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic.h"
+#include "hinic3_nic_cmd.h"
+
+static int hinic3_init_vf_config(struct hinic3_nic_io *nic_io, u16 vf_id)
+{
+ struct vf_data_storage *vf_info;
+ u16 func_id;
+ int err = 0;
+
+ vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+ ether_addr_copy(vf_info->drv_mac_addr, vf_info->user_mac_addr);
+ if (!is_zero_ether_addr(vf_info->drv_mac_addr)) {
+ vf_info->use_specified_mac = true;
+ func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + vf_id;
+
+ err = hinic3_set_mac(nic_io->hwdev, vf_info->drv_mac_addr,
+ vf_info->pf_vlan, func_id,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to set VF %d MAC\n",
+ HW_VF_ID_TO_OS(vf_id));
+ return err;
+ }
+ } else {
+ vf_info->use_specified_mac = false;
+ }
+
+ if (hinic3_vf_info_vlanprio(nic_io->hwdev, vf_id)) {
+ err = hinic3_cfg_vf_vlan(nic_io, HINIC3_CMD_OP_ADD,
+ vf_info->pf_vlan, vf_info->pf_qos,
+ vf_id);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to add VF %d VLAN_QOS\n",
+ HW_VF_ID_TO_OS(vf_id));
+ return err;
+ }
+ }
+
+ if (vf_info->max_rate) {
+ err = hinic3_set_vf_tx_rate(nic_io->hwdev, vf_id,
+ vf_info->max_rate,
+ vf_info->min_rate);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to set VF %d max rate %u, min rate %u\n",
+ HW_VF_ID_TO_OS(vf_id), vf_info->max_rate,
+ vf_info->min_rate);
+ return err;
+ }
+ }
+
+ return 0;
+}
+
+static int register_vf_msg_handler(struct hinic3_nic_io *nic_io, u16 vf_id)
+{
+ int err;
+
+ if (vf_id > nic_io->max_vfs) {
+ nic_err(nic_io->dev_hdl, "Register VF id %d exceed limit[0-%d]\n",
+ HW_VF_ID_TO_OS(vf_id), HW_VF_ID_TO_OS(nic_io->max_vfs));
+ return -EFAULT;
+ }
+
+ err = hinic3_init_vf_config(nic_io, vf_id);
+ if (err)
+ return err;
+
+ nic_io->vf_infos[HW_VF_ID_TO_OS(vf_id)].registered = true;
+
+ return 0;
+}
+
+static int unregister_vf_msg_handler(struct hinic3_nic_io *nic_io, u16 vf_id)
+{
+ struct vf_data_storage *vf_info =
+ nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+ struct hinic3_port_mac_set mac_info;
+ u16 out_size = sizeof(mac_info);
+ int err;
+
+ if (vf_id > nic_io->max_vfs)
+ return -EFAULT;
+
+ vf_info->registered = false;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+ mac_info.func_id = hinic3_glb_pf_vf_offset(nic_io->hwdev) + (u16)vf_id;
+ mac_info.vlan_id = vf_info->pf_vlan;
+ ether_addr_copy(mac_info.mac, vf_info->drv_mac_addr);
+
+ if (vf_info->use_specified_mac || vf_info->pf_vlan) {
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_DEL_MAC,
+ &mac_info, sizeof(mac_info),
+ &mac_info, &out_size);
+ if (err || mac_info.msg_head.status || !out_size) {
+ nic_err(nic_io->dev_hdl, "Failed to delete VF %d MAC, err: %d, status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf_id), err,
+ mac_info.msg_head.status, out_size);
+ return -EFAULT;
+ }
+ }
+
+ memset(vf_info->drv_mac_addr, 0, ETH_ALEN);
+
+ return 0;
+}
+
+static int hinic3_register_vf_msg_handler(struct hinic3_nic_io *nic_io,
+ u16 vf_id, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_cmd_register_vf *register_vf = buf_in;
+ struct hinic3_cmd_register_vf *register_info = buf_out;
+ struct vf_data_storage *vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+ int err;
+
+ if (register_vf->op_register) {
+ vf_info->support_extra_feature = register_vf->support_extra_feature;
+ err = register_vf_msg_handler(nic_io, vf_id);
+ } else {
+ err = unregister_vf_msg_handler(nic_io, vf_id);
+ vf_info->support_extra_feature = 0;
+ }
+
+ if (err)
+ register_info->msg_head.status = EFAULT;
+
+ *out_size = sizeof(*register_info);
+
+ return 0;
+}
+
+void hinic3_unregister_vf(struct hinic3_nic_io *nic_io, u16 vf_id)
+{
+ struct vf_data_storage *vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf_id);
+
+ unregister_vf_msg_handler(nic_io, vf_id);
+ vf_info->support_extra_feature = 0;
+}
+
+static int hinic3_get_vf_cos_msg_handler(struct hinic3_nic_io *nic_io,
+ u16 vf_id, void *buf_in,
+ u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct hinic3_cmd_vf_dcb_state *dcb_state = buf_out;
+
+ memcpy(&dcb_state->state, &nic_io->dcb_state,
+ sizeof(nic_io->dcb_state));
+
+ dcb_state->msg_head.status = 0;
+ *out_size = sizeof(*dcb_state);
+ return 0;
+}
+
+static int hinic3_get_vf_mac_msg_handler(struct hinic3_nic_io *nic_io, u16 vf,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct vf_data_storage *vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf);
+ struct hinic3_port_mac_set *mac_info = buf_out;
+
+ int err;
+
+ if (HINIC3_SUPPORT_VF_MAC(nic_io->hwdev)) {
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev, HINIC3_NIC_CMD_GET_MAC, buf_in,
+ in_size, buf_out, out_size);
+ if (!err) {
+ if (is_zero_ether_addr(mac_info->mac))
+ ether_addr_copy(mac_info->mac, vf_info->drv_mac_addr);
+ }
+ return err;
+ }
+
+ ether_addr_copy(mac_info->mac, vf_info->drv_mac_addr);
+ mac_info->msg_head.status = 0;
+ *out_size = sizeof(*mac_info);
+
+ return 0;
+}
+
+static int hinic3_set_vf_mac_msg_handler(struct hinic3_nic_io *nic_io, u16 vf,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct vf_data_storage *vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf);
+ struct hinic3_port_mac_set *mac_in = buf_in;
+ struct hinic3_port_mac_set *mac_out = buf_out;
+ int err;
+
+ if (vf_info->use_specified_mac && !vf_info->trust &&
+ is_valid_ether_addr(mac_in->mac)) {
+ nic_warn(nic_io->dev_hdl, "PF has already set VF %d MAC address, and vf trust is off.\n",
+ HW_VF_ID_TO_OS(vf));
+ mac_out->msg_head.status = HINIC3_PF_SET_VF_ALREADY;
+ *out_size = sizeof(*mac_out);
+ return 0;
+ }
+
+ if (is_valid_ether_addr(mac_in->mac))
+ mac_in->vlan_id = vf_info->pf_vlan;
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev, HINIC3_NIC_CMD_SET_MAC,
+ buf_in, in_size, buf_out, out_size);
+ if (err || !(*out_size)) {
+ nic_err(nic_io->dev_hdl, "Failed to set VF %d MAC address, err: %d,status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf), err, mac_out->msg_head.status,
+ *out_size);
+ return -EFAULT;
+ }
+
+ if (is_valid_ether_addr(mac_in->mac) && !mac_out->msg_head.status)
+ ether_addr_copy(vf_info->drv_mac_addr, mac_in->mac);
+
+ return err;
+}
+
+static int hinic3_del_vf_mac_msg_handler(struct hinic3_nic_io *nic_io, u16 vf,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct vf_data_storage *vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf);
+ struct hinic3_port_mac_set *mac_in = buf_in;
+ struct hinic3_port_mac_set *mac_out = buf_out;
+ int err;
+
+ if (vf_info->use_specified_mac && !vf_info->trust &&
+ is_valid_ether_addr(mac_in->mac)) {
+ nic_warn(nic_io->dev_hdl, "PF has already set VF %d MAC address, and vf trust is off.\n",
+ HW_VF_ID_TO_OS(vf));
+ mac_out->msg_head.status = HINIC3_PF_SET_VF_ALREADY;
+ *out_size = sizeof(*mac_out);
+ return 0;
+ }
+
+ if (is_valid_ether_addr(mac_in->mac))
+ mac_in->vlan_id = vf_info->pf_vlan;
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev, HINIC3_NIC_CMD_DEL_MAC,
+ buf_in, in_size, buf_out, out_size);
+ if (err || !(*out_size)) {
+ nic_err(nic_io->dev_hdl, "Failed to delete VF %d MAC, err: %d, status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf), err, mac_out->msg_head.status,
+ *out_size);
+ return -EFAULT;
+ }
+
+ if (is_valid_ether_addr(mac_in->mac) && !mac_out->msg_head.status)
+ eth_zero_addr(vf_info->drv_mac_addr);
+
+ return err;
+}
+
+static int hinic3_update_vf_mac_msg_handler(struct hinic3_nic_io *nic_io,
+ u16 vf, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct vf_data_storage *vf_info = nic_io->vf_infos + HW_VF_ID_TO_OS(vf);
+ struct hinic3_port_mac_update *mac_in = buf_in;
+ struct hinic3_port_mac_update *mac_out = buf_out;
+ int err;
+
+ if (!is_valid_ether_addr(mac_in->new_mac)) {
+ nic_err(nic_io->dev_hdl, "Update VF MAC is invalid.\n");
+ return -EINVAL;
+ }
+
+#ifndef __VMWARE__
+ if (vf_info->use_specified_mac && !vf_info->trust) {
+ nic_warn(nic_io->dev_hdl, "PF has already set VF %d MAC address, and vf trust is off.\n",
+ HW_VF_ID_TO_OS(vf));
+ mac_out->msg_head.status = HINIC3_PF_SET_VF_ALREADY;
+ *out_size = sizeof(*mac_out);
+ return 0;
+ }
+#else
+ err = hinic_config_vf_request(nic_io->hwdev->pcidev_hdl,
+ HW_VF_ID_TO_OS(vf),
+ HINIC_CFG_VF_MAC_CHANGED,
+ (void *)mac_in->new_mac);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to config VF %d MAC request, err: %d\n",
+ HW_VF_ID_TO_OS(vf), err);
+ return err;
+ }
+#endif
+ mac_in->vlan_id = vf_info->pf_vlan;
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev, HINIC3_NIC_CMD_UPDATE_MAC,
+ buf_in, in_size, buf_out, out_size);
+ if (err || !(*out_size)) {
+ nic_warn(nic_io->dev_hdl, "Failed to update VF %d MAC, err: %d,status: 0x%x, out size: 0x%x\n",
+ HW_VF_ID_TO_OS(vf), err, mac_out->msg_head.status,
+ *out_size);
+ return -EFAULT;
+ }
+
+ if (!mac_out->msg_head.status)
+ ether_addr_copy(vf_info->drv_mac_addr, mac_in->new_mac);
+
+ return err;
+}
+
+const struct vf_msg_handler vf_cmd_handler[] = {
+ {
+ .cmd = HINIC3_NIC_CMD_VF_REGISTER,
+ .handler = hinic3_register_vf_msg_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_GET_MAC,
+ .handler = hinic3_get_vf_mac_msg_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_SET_MAC,
+ .handler = hinic3_set_vf_mac_msg_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_DEL_MAC,
+ .handler = hinic3_del_vf_mac_msg_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_UPDATE_MAC,
+ .handler = hinic3_update_vf_mac_msg_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_VF_COS,
+ .handler = hinic3_get_vf_cos_msg_handler
+ },
+};
+
+static int _l2nic_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u16 channel)
+{
+ u32 i, cmd_cnt = ARRAY_LEN(vf_cmd_handler);
+ bool cmd_to_pf = false;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF) {
+ for (i = 0; i < cmd_cnt; i++) {
+ if (cmd == vf_cmd_handler[i].cmd)
+ cmd_to_pf = true;
+ }
+ }
+
+ if (cmd_to_pf)
+ return hinic3_mbox_to_pf(hwdev, HINIC3_MOD_L2NIC, cmd, buf_in,
+ in_size, buf_out, out_size, 0,
+ channel);
+
+ return hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, cmd, buf_in,
+ in_size, buf_out, out_size, 0, channel);
+}
+
+int l2nic_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return _l2nic_msg_to_mgmt_sync(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size, HINIC3_CHANNEL_NIC);
+}
+
+int l2nic_msg_to_mgmt_sync_ch(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u16 channel)
+{
+ return _l2nic_msg_to_mgmt_sync(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size, channel);
+}
+
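
A minimal caller sketch for the wrapper, reusing the command and request layout from the MAC handlers above; func_id and mac are assumed to be known to the caller:

	struct hinic3_port_mac_set mac_info = {0};
	u16 out_size = sizeof(mac_info);
	int err;

	mac_info.func_id = func_id;
	ether_addr_copy(mac_info.mac, mac);

	err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_MAC,
				     &mac_info, sizeof(mac_info),
				     &mac_info, &out_size);
	/* check transport error, embedded status and response size */
	if (err || mac_info.msg_head.status || !out_size)
		err = -EFAULT;

On a VF the wrapper routes this command to the PF mailbox; on a PF it goes straight to the management CPU.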
+/* pf/ppf handler mbox msg from vf */
+int hinic3_pf_mbox_handler(void *hwdev,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ u32 index, cmd_size = ARRAY_LEN(vf_cmd_handler);
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ for (index = 0; index < cmd_size; index++) {
+ if (cmd == vf_cmd_handler[index].cmd)
+ return vf_cmd_handler[index].handler(nic_io, vf_id,
+ buf_in, in_size,
+ buf_out, out_size);
+ }
+
+ nic_warn(nic_io->dev_hdl, "NO handler for nic cmd(%u) received from vf id: %u\n",
+ cmd, vf_id);
+
+ return -EINVAL;
+}
+
+void hinic3_notify_dcb_state_event(struct hinic3_nic_io *nic_io,
+ struct hinic3_dcb_state *dcb_state)
+{
+ struct hinic3_event_info event_info = {0};
+ int i;
+/*lint -e679*/
+ if (dcb_state->trust == HINIC3_DCB_PCP)
+ /* Log the 8 user-priority to CoS mapping relationships */
+ sdk_info(nic_io->dev_hdl, "DCB %s, default cos %u, pcp2cos %u%u%u%u%u%u%u%u\n",
+ dcb_state->dcb_on ? "on" : "off", dcb_state->default_cos,
+ dcb_state->pcp2cos[ARRAY_INDEX_0], dcb_state->pcp2cos[ARRAY_INDEX_1],
+ dcb_state->pcp2cos[ARRAY_INDEX_2], dcb_state->pcp2cos[ARRAY_INDEX_3],
+ dcb_state->pcp2cos[ARRAY_INDEX_4], dcb_state->pcp2cos[ARRAY_INDEX_5],
+ dcb_state->pcp2cos[ARRAY_INDEX_6], dcb_state->pcp2cos[ARRAY_INDEX_7]);
+ else
+ for (i = 0; i < NIC_DCB_DSCP_NUM; i++) {
+ sdk_info(nic_io->dev_hdl,
+ "DCB %s, default cos %u, dscp2cos %u%u%u%u%u%u%u%u\n",
+ dcb_state->dcb_on ? "on" : "off", dcb_state->default_cos,
+ dcb_state->dscp2cos[ARRAY_INDEX_0 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_1 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_2 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_3 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_4 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_5 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_6 + i * NIC_DCB_DSCP_NUM],
+ dcb_state->dscp2cos[ARRAY_INDEX_7 + i * NIC_DCB_DSCP_NUM]);
+ }
+/*lint +e679*/
+ /* Saved in sdk for stateful module */
+ hinic3_save_dcb_state(nic_io, dcb_state);
+
+ event_info.service = EVENT_SRV_NIC;
+ event_info.type = EVENT_NIC_DCB_STATE_CHANGE;
+ memcpy((void *)event_info.event_data, dcb_state, sizeof(*dcb_state));
+
+ hinic3_event_callback(nic_io->hwdev, &event_info);
+}
+
+static void dcb_state_event(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_cmd_vf_dcb_state *vf_dcb;
+ struct hinic3_nic_io *nic_io;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ vf_dcb = buf_in;
+ if (!vf_dcb)
+ return;
+
+ hinic3_notify_dcb_state_event(nic_io, &vf_dcb->state);
+}
+
+static void tx_pause_excp_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct nic_cmd_tx_pause_notice *excp_info = buf_in;
+ struct hinic3_nic_io *nic_io = NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ if (in_size != sizeof(*excp_info)) {
+ nic_err(nic_io->dev_hdl, "Invalid in_size: %u, should be %ld\n",
+ in_size, sizeof(*excp_info));
+ return;
+ }
+
+ nic_warn(nic_io->dev_hdl, "Receive tx pause exception event, excp: %u, level: %u\n",
+ excp_info->tx_pause_except, excp_info->except_level);
+
+ hinic3_fault_event_report(hwdev, HINIC3_FAULT_SRC_TX_PAUSE_EXCP,
+ (u16)excp_info->except_level);
+}
+
+static void bond_active_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_bond_active_report_info *active_info = buf_in;
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_event_info event_info = {0};
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ if (in_size != sizeof(*active_info)) {
+ nic_err(nic_io->dev_hdl, "Invalid in_size: %u, should be %ld\n",
+ in_size, sizeof(*active_info));
+ return;
+ }
+
+ event_info.service = EVENT_SRV_NIC;
+ event_info.type = HINIC3_NIC_CMD_BOND_ACTIVE_NOTICE;
+ memcpy((void *)event_info.event_data, active_info, sizeof(*active_info));
+
+ hinic3_event_callback(nic_io->hwdev, &event_info);
+}
+
+static const struct nic_event_handler nic_cmd_handler[] = {
+ {
+ .cmd = HINIC3_NIC_CMD_VF_COS,
+ .handler = dcb_state_event,
+ },
+ {
+ .cmd = HINIC3_NIC_CMD_TX_PAUSE_EXCP_NOTICE,
+ .handler = tx_pause_excp_event_handler,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_BOND_ACTIVE_NOTICE,
+ .handler = bond_active_event_handler,
+ },
+};
+
+static int _event_handler(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 size = sizeof(nic_cmd_handler) / sizeof(struct nic_event_handler);
+ u32 i;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ *out_size = 0;
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ for (i = 0; i < size; i++) {
+ if (cmd == nic_cmd_handler[i].cmd) {
+ nic_cmd_handler[i].handler(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ return 0;
+ }
+ }
+
+ /* can't find this event cmd */
+ sdk_warn(nic_io->dev_hdl, "Unsupported nic event, cmd: %u\n", cmd);
+ *out_size = sizeof(struct mgmt_msg_head);
+ ((struct mgmt_msg_head *)buf_out)->status = HINIC3_MGMT_CMD_UNSUPPORTED;
+
+ return 0;
+}
+
+/* vf handler mbox msg from ppf/pf */
+/* vf link change event
+ * vf fault report event, TBD
+ */
+int hinic3_vf_event_handler(void *hwdev,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return _event_handler(hwdev, cmd, buf_in, in_size, buf_out, out_size);
+}
+
+/* pf/ppf handler mgmt cpu report nic event */
+void hinic3_pf_event_handler(void *hwdev, u16 cmd,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ _event_handler(hwdev, cmd, buf_in, in_size, buf_out, out_size);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.c b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.c
new file mode 100644
index 000000000000..22670ffe7ebf
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.c
@@ -0,0 +1,1122 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_nic_qp.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic.h"
+#include "hinic3_nic_cmd.h"
+#include "hinic3_nic_io.h"
+
+#define HINIC3_DEFAULT_TX_CI_PENDING_LIMIT 1
+#define HINIC3_DEFAULT_TX_CI_COALESCING_TIME 1
+#define HINIC3_DEFAULT_DROP_THD_ON (0xFFFF)
+#define HINIC3_DEFAULT_DROP_THD_OFF 0
+/*lint -e806*/
+static unsigned char tx_pending_limit = HINIC3_DEFAULT_TX_CI_PENDING_LIMIT;
+module_param(tx_pending_limit, byte, 0444);
+MODULE_PARM_DESC(tx_pending_limit, "TX CI coalescing parameter pending_limit (default=1)");
+
+static unsigned char tx_coalescing_time = HINIC3_DEFAULT_TX_CI_COALESCING_TIME;
+module_param(tx_coalescing_time, byte, 0444);
+MODULE_PARM_DESC(tx_coalescing_time, "TX CI coalescing parameter coalescing_time (default=1)");
+
+static unsigned char rq_wqe_type = HINIC3_NORMAL_RQ_WQE;
+module_param(rq_wqe_type, byte, 0444);
+MODULE_PARM_DESC(rq_wqe_type, "RQ WQE type 0-8Bytes, 1-16Bytes, 2-32Bytes (default=1)");
+
+/*lint +e806*/
+static u32 tx_drop_thd_on = HINIC3_DEFAULT_DROP_THD_ON;
+module_param(tx_drop_thd_on, uint, 0644);
+MODULE_PARM_DESC(tx_drop_thd_on, "TX parameter drop_thd_on (default=0xffff)");
+
+static u32 tx_drop_thd_off = HINIC3_DEFAULT_DROP_THD_OFF;
+module_param(tx_drop_thd_off, uint, 0644);
+MODULE_PARM_DESC(tx_drop_thd_off, "TX parameter drop_thd_off (default=0)");
+/* performance: ci addr RTE_CACHE_SIZE(64B) alignment */
+#define HINIC3_CI_Q_ADDR_SIZE (64)
+
+#define CI_TABLE_SIZE(num_qps, pg_sz) \
+ (ALIGN((num_qps) * HINIC3_CI_Q_ADDR_SIZE, pg_sz))
+
+#define HINIC3_CI_VADDR(base_addr, q_id) ((u8 *)(base_addr) + \
+ (q_id) * HINIC3_CI_Q_ADDR_SIZE)
+
+#define HINIC3_CI_PADDR(base_paddr, q_id) ((base_paddr) + \
+ (q_id) * HINIC3_CI_Q_ADDR_SIZE)
+
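
Each queue's consumer-index slot is one 64-byte cacheline in a single DMA-coherent table, so both addresses are plain offsets from the table base; a minimal sketch with an assumed queue id:

	/* queue 3 -> base + 3 * 64 */
	void *ci_va = HINIC3_CI_VADDR(nic_io->ci_vaddr_base, 3);
	u64 ci_pa = HINIC3_CI_PADDR(nic_io->ci_dma_base, 3);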
+#define WQ_PREFETCH_MAX 4
+#define WQ_PREFETCH_MIN 1
+#define WQ_PREFETCH_THRESHOLD 256
+
+#define HINIC3_Q_CTXT_MAX 31 /* (2048 - 8) / 64 */
+
+enum hinic3_qp_ctxt_type {
+ HINIC3_QP_CTXT_TYPE_SQ,
+ HINIC3_QP_CTXT_TYPE_RQ,
+};
+
+struct hinic3_qp_ctxt_header {
+ u16 num_queues;
+ u16 queue_type;
+ u16 start_qid;
+ u16 rsvd;
+};
+
+struct hinic3_sq_ctxt {
+ u32 ci_pi;
+ u32 drop_mode_sp;
+ u32 wq_pfn_hi_owner;
+ u32 wq_pfn_lo;
+
+ u32 rsvd0;
+ u32 pkt_drop_thd;
+ u32 global_sq_id;
+ u32 vlan_ceq_attr;
+
+ u32 pref_cache;
+ u32 pref_ci_owner;
+ u32 pref_wq_pfn_hi_ci;
+ u32 pref_wq_pfn_lo;
+
+ u32 rsvd8;
+ u32 rsvd9;
+ u32 wq_block_pfn_hi;
+ u32 wq_block_pfn_lo;
+};
+
+struct hinic3_rq_ctxt {
+ u32 ci_pi;
+ u32 ceq_attr;
+ u32 wq_pfn_hi_type_owner;
+ u32 wq_pfn_lo;
+
+ u32 rsvd[3];
+ u32 cqe_sge_len;
+
+ u32 pref_cache;
+ u32 pref_ci_owner;
+ u32 pref_wq_pfn_hi_ci;
+ u32 pref_wq_pfn_lo;
+
+ u32 pi_paddr_hi;
+ u32 pi_paddr_lo;
+ u32 wq_block_pfn_hi;
+ u32 wq_block_pfn_lo;
+};
+
+struct hinic3_sq_ctxt_block {
+ struct hinic3_qp_ctxt_header cmdq_hdr;
+ struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX];
+};
+
+struct hinic3_rq_ctxt_block {
+ struct hinic3_qp_ctxt_header cmdq_hdr;
+ struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX];
+};
+
+struct hinic3_clean_queue_ctxt {
+ struct hinic3_qp_ctxt_header cmdq_hdr;
+ u32 rsvd;
+};
+
+#define SQ_CTXT_SIZE(num_sqs) ((u16)(sizeof(struct hinic3_qp_ctxt_header) \
+ + (num_sqs) * sizeof(struct hinic3_sq_ctxt)))
+
+#define RQ_CTXT_SIZE(num_rqs) ((u16)(sizeof(struct hinic3_qp_ctxt_header) \
+ + (num_rqs) * sizeof(struct hinic3_rq_ctxt)))
+
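
HINIC3_Q_CTXT_MAX ((2048 - 8) / 64) and the two size macros above assume an 8-byte command header and 64-byte per-queue contexts; a compile-time sketch of those assumptions, using the kernel's static_assert from <linux/build_bug.h>:

	/* layout assumptions behind the (2048 - 8) / 64 batching limit */
	static_assert(sizeof(struct hinic3_qp_ctxt_header) == 8);
	static_assert(sizeof(struct hinic3_sq_ctxt) == 64);
	static_assert(sizeof(struct hinic3_rq_ctxt) == 64);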
+#define CI_IDX_HIGH_SHIFH 12
+
+#define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH)
+
+#define SQ_CTXT_PI_IDX_SHIFT 0
+#define SQ_CTXT_CI_IDX_SHIFT 16
+
+#define SQ_CTXT_PI_IDX_MASK 0xFFFFU
+#define SQ_CTXT_CI_IDX_MASK 0xFFFFU
+
+#define SQ_CTXT_CI_PI_SET(val, member) (((val) & \
+ SQ_CTXT_##member##_MASK) \
+ << SQ_CTXT_##member##_SHIFT)
+
+#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0
+#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1
+
+#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U
+#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U
+
+#define SQ_CTXT_MODE_SET(val, member) (((val) & \
+ SQ_CTXT_MODE_##member##_MASK) \
+ << SQ_CTXT_MODE_##member##_SHIFT)
+
+#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0
+#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23
+
+#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU
+#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U
+
+#define SQ_CTXT_WQ_PAGE_SET(val, member) (((val) & \
+ SQ_CTXT_WQ_PAGE_##member##_MASK) \
+ << SQ_CTXT_WQ_PAGE_##member##_SHIFT)
+
+#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0
+#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16
+
+#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU
+#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU
+
+#define SQ_CTXT_PKT_DROP_THD_SET(val, member) (((val) & \
+ SQ_CTXT_PKT_DROP_##member##_MASK) \
+ << SQ_CTXT_PKT_DROP_##member##_SHIFT)
+
+#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0
+
+#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU
+
+#define SQ_CTXT_GLOBAL_QUEUE_ID_SET(val, member) (((val) & \
+ SQ_CTXT_##member##_MASK) \
+ << SQ_CTXT_##member##_SHIFT)
+
+#define SQ_CTXT_VLAN_TAG_SHIFT 0
+#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16
+#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19
+#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23
+
+#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU
+#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U
+#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U
+#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U
+
+#define SQ_CTXT_VLAN_CEQ_SET(val, member) (((val) & \
+ SQ_CTXT_VLAN_##member##_MASK) \
+ << SQ_CTXT_VLAN_##member##_SHIFT)
+
+#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0
+#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14
+#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25
+
+#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU
+#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU
+#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU
+
+#define SQ_CTXT_PREF_CI_HI_SHIFT 0
+#define SQ_CTXT_PREF_OWNER_SHIFT 4
+
+#define SQ_CTXT_PREF_CI_HI_MASK 0xFU
+#define SQ_CTXT_PREF_OWNER_MASK 0x1U
+
+#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0
+#define SQ_CTXT_PREF_CI_LOW_SHIFT 20
+
+#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU
+#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU
+
+#define SQ_CTXT_PREF_SET(val, member) (((val) & \
+ SQ_CTXT_PREF_##member##_MASK) \
+ << SQ_CTXT_PREF_##member##_SHIFT)
+
+#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0
+
+#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU
+
+#define SQ_CTXT_WQ_BLOCK_SET(val, member) (((val) & \
+ SQ_CTXT_WQ_BLOCK_##member##_MASK) \
+ << SQ_CTXT_WQ_BLOCK_##member##_SHIFT)
+
+#define RQ_CTXT_PI_IDX_SHIFT 0
+#define RQ_CTXT_CI_IDX_SHIFT 16
+
+#define RQ_CTXT_PI_IDX_MASK 0xFFFFU
+#define RQ_CTXT_CI_IDX_MASK 0xFFFFU
+
+#define RQ_CTXT_CI_PI_SET(val, member) (((val) & \
+ RQ_CTXT_##member##_MASK) \
+ << RQ_CTXT_##member##_SHIFT)
+
+#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21
+#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31
+
+#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU
+#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U
+
+#define RQ_CTXT_CEQ_ATTR_SET(val, member) (((val) & \
+ RQ_CTXT_CEQ_ATTR_##member##_MASK) \
+ << RQ_CTXT_CEQ_ATTR_##member##_SHIFT)
+
+#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0
+#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28
+#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31
+
+#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU
+#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U
+#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U
+
+#define RQ_CTXT_WQ_PAGE_SET(val, member) (((val) & \
+ RQ_CTXT_WQ_PAGE_##member##_MASK) << \
+ RQ_CTXT_WQ_PAGE_##member##_SHIFT)
+
+#define RQ_CTXT_CQE_LEN_SHIFT 28
+
+#define RQ_CTXT_CQE_LEN_MASK 0x3U
+
+#define RQ_CTXT_CQE_LEN_SET(val, member) (((val) & \
+ RQ_CTXT_##member##_MASK) << \
+ RQ_CTXT_##member##_SHIFT)
+
+#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0
+#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14
+#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25
+
+#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU
+#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU
+#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU
+
+#define RQ_CTXT_PREF_CI_HI_SHIFT 0
+#define RQ_CTXT_PREF_OWNER_SHIFT 4
+
+#define RQ_CTXT_PREF_CI_HI_MASK 0xFU
+#define RQ_CTXT_PREF_OWNER_MASK 0x1U
+
+#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0
+#define RQ_CTXT_PREF_CI_LOW_SHIFT 20
+
+#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU
+#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU
+
+#define RQ_CTXT_PREF_SET(val, member) (((val) & \
+ RQ_CTXT_PREF_##member##_MASK) << \
+ RQ_CTXT_PREF_##member##_SHIFT)
+
+#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0
+
+#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU
+
+#define RQ_CTXT_WQ_BLOCK_SET(val, member) (((val) & \
+ RQ_CTXT_WQ_BLOCK_##member##_MASK) << \
+ RQ_CTXT_WQ_BLOCK_##member##_SHIFT)
+
+#define SIZE_16BYTES(size) (ALIGN((size), 16) >> 4)
+
+#define WQ_PAGE_PFN_SHIFT 12
+#define WQ_BLOCK_PFN_SHIFT 9
+
+#define WQ_PAGE_PFN(page_addr) ((page_addr) >> WQ_PAGE_PFN_SHIFT)
+#define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT)
+
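
WQ_PAGE_PFN() and WQ_BLOCK_PFN() re-scale a physical address into the 4 KiB and 512 B granules the queue contexts expect; a worked example with an assumed address:

	/* wq page at physical 0x123456000 (assumed):
	 *   WQ_PAGE_PFN(0x123456000)  == 0x123456  (>> 12, 4 KiB units)
	 *   WQ_BLOCK_PFN(0x123456000) == 0x91a2b0  (>> 9, 512 B units)
	 * The context hi/lo words are then upper_32_bits()/lower_32_bits()
	 * of the resulting PFN.
	 */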
+/* sq and rq */
+#define TOTAL_DB_NUM(num_qps) ((u16)(2 * (num_qps)))
+
+static int hinic3_create_sq(struct hinic3_nic_io *nic_io, struct hinic3_io_queue *sq,
+ u16 q_id, u32 sq_depth, u16 sq_msix_idx)
+{
+ int err;
+
+ /* sq used & hardware request init 1 */
+ sq->owner = 1;
+
+ sq->q_id = q_id;
+ sq->msix_entry_idx = sq_msix_idx;
+
+ err = hinic3_wq_create(nic_io->hwdev, &sq->wq, sq_depth,
+ (u16)BIT(HINIC3_SQ_WQEBB_SHIFT));
+ if (err) {
+ sdk_err(nic_io->dev_hdl, "Failed to create tx queue(%u) wq\n",
+ q_id);
+ return err;
+ }
+
+ return 0;
+}
+
+static void hinic3_destroy_sq(struct hinic3_nic_io *nic_io, struct hinic3_io_queue *sq)
+{
+ hinic3_wq_destroy(&sq->wq);
+}
+
+static int hinic3_create_rq(struct hinic3_nic_io *nic_io, struct hinic3_io_queue *rq,
+ u16 q_id, u32 rq_depth, u16 rq_msix_idx)
+{
+ int err;
+
+ rq->wqe_type = rq_wqe_type;
+ rq->q_id = q_id;
+ rq->msix_entry_idx = rq_msix_idx;
+
+ err = hinic3_wq_create(nic_io->hwdev, &rq->wq, rq_depth,
+ (u16)BIT(HINIC3_RQ_WQEBB_SHIFT + rq_wqe_type));
+ if (err) {
+ sdk_err(nic_io->dev_hdl, "Failed to create rx queue(%u) wq\n",
+ q_id);
+ return err;
+ }
+
+ rq->rx.pi_virt_addr = dma_zalloc_coherent(nic_io->dev_hdl, PAGE_SIZE,
+ &rq->rx.pi_dma_addr,
+ GFP_KERNEL);
+ if (!rq->rx.pi_virt_addr) {
+ hinic3_wq_destroy(&rq->wq);
+ nic_err(nic_io->dev_hdl, "Failed to allocate rq pi virt addr\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void hinic3_destroy_rq(struct hinic3_nic_io *nic_io, struct hinic3_io_queue *rq)
+{
+ dma_free_coherent(nic_io->dev_hdl, PAGE_SIZE, rq->rx.pi_virt_addr,
+ rq->rx.pi_dma_addr);
+
+ hinic3_wq_destroy(&rq->wq);
+}
+
+static int create_qp(struct hinic3_nic_io *nic_io, struct hinic3_io_queue *sq,
+ struct hinic3_io_queue *rq, u16 q_id, u32 sq_depth,
+ u32 rq_depth, u16 qp_msix_idx)
+{
+ int err;
+
+ err = hinic3_create_sq(nic_io, sq, q_id, sq_depth, qp_msix_idx);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to create sq, qid: %u\n",
+ q_id);
+ return err;
+ }
+
+ err = hinic3_create_rq(nic_io, rq, q_id, rq_depth, qp_msix_idx);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to create rq, qid: %u\n",
+ q_id);
+ goto create_rq_err;
+ }
+
+ return 0;
+
+create_rq_err:
+ hinic3_destroy_sq(nic_io, sq);
+
+ return err;
+}
+
+static void destroy_qp(struct hinic3_nic_io *nic_io, struct hinic3_io_queue *sq,
+ struct hinic3_io_queue *rq)
+{
+ hinic3_destroy_sq(nic_io, sq);
+ hinic3_destroy_rq(nic_io, rq);
+}
+
+int hinic3_init_nicio_res(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ void __iomem *db_base = NULL;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return -EFAULT;
+ }
+
+ nic_io->max_qps = hinic3_func_max_qnum(hwdev);
+
+ err = hinic3_alloc_db_addr(hwdev, &db_base, NULL);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate doorbell for sqs\n");
+ return -ENOMEM;
+ }
+ nic_io->sqs_db_addr = (u8 *)db_base;
+
+ err = hinic3_alloc_db_addr(hwdev, &db_base, NULL);
+ if (err) {
+ hinic3_free_db_addr(hwdev, nic_io->sqs_db_addr, NULL);
+ nic_err(nic_io->dev_hdl, "Failed to allocate doorbell for rqs\n");
+ return -ENOMEM;
+ }
+ nic_io->rqs_db_addr = (u8 *)db_base;
+
+ nic_io->ci_vaddr_base =
+ dma_zalloc_coherent(nic_io->dev_hdl,
+ CI_TABLE_SIZE(nic_io->max_qps, PAGE_SIZE),
+ &nic_io->ci_dma_base, GFP_KERNEL);
+ if (!nic_io->ci_vaddr_base) {
+ hinic3_free_db_addr(hwdev, nic_io->sqs_db_addr, NULL);
+ hinic3_free_db_addr(hwdev, nic_io->rqs_db_addr, NULL);
+ nic_err(nic_io->dev_hdl, "Failed to allocate ci area\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+void hinic3_deinit_nicio_res(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return;
+ }
+
+ dma_free_coherent(nic_io->dev_hdl,
+ CI_TABLE_SIZE(nic_io->max_qps, PAGE_SIZE),
+ nic_io->ci_vaddr_base, nic_io->ci_dma_base);
+ /* free all doorbell */
+ hinic3_free_db_addr(hwdev, nic_io->sqs_db_addr, NULL);
+ hinic3_free_db_addr(hwdev, nic_io->rqs_db_addr, NULL);
+}
+
+int hinic3_alloc_qps(void *hwdev, struct irq_info *qps_msix_arry,
+ struct hinic3_dyna_qp_params *qp_params)
+{
+ struct hinic3_io_queue *sqs = NULL;
+ struct hinic3_io_queue *rqs = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 q_id, i, num_qps;
+ int err;
+
+ if (!hwdev || !qps_msix_arry || !qp_params)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return -EFAULT;
+ }
+
+ if (qp_params->num_qps > nic_io->max_qps || !qp_params->num_qps)
+ return -EINVAL;
+
+ num_qps = qp_params->num_qps;
+ sqs = kcalloc(num_qps, sizeof(*sqs), GFP_KERNEL);
+ if (!sqs) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate sq\n");
+ err = -ENOMEM;
+ goto alloc_sqs_err;
+ }
+
+ rqs = kcalloc(num_qps, sizeof(*rqs), GFP_KERNEL);
+ if (!rqs) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate rq\n");
+ err = -ENOMEM;
+ goto alloc_rqs_err;
+ }
+
+ for (q_id = 0; q_id < num_qps; q_id++) {
+ err = create_qp(nic_io, &sqs[q_id], &rqs[q_id], q_id, qp_params->sq_depth,
+ qp_params->rq_depth, qps_msix_arry[q_id].msix_entry_idx);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate qp %u, err: %d\n", q_id, err);
+ goto create_qp_err;
+ }
+ }
+
+ qp_params->sqs = sqs;
+ qp_params->rqs = rqs;
+
+ return 0;
+
+create_qp_err:
+ for (i = 0; i < q_id; i++)
+ destroy_qp(nic_io, &sqs[i], &rqs[i]);
+
+ kfree(rqs);
+
+alloc_rqs_err:
+ kfree(sqs);
+
+alloc_sqs_err:
+
+ return err;
+}
+
+void hinic3_free_qps(void *hwdev, struct hinic3_dyna_qp_params *qp_params)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 q_id;
+
+ if (!hwdev || !qp_params)
+ return;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return;
+ }
+
+ for (q_id = 0; q_id < qp_params->num_qps; q_id++)
+ destroy_qp(nic_io, &qp_params->sqs[q_id],
+ &qp_params->rqs[q_id]);
+
+ kfree(qp_params->sqs);
+ kfree(qp_params->rqs);
+}
+
+static void init_qps_info(struct hinic3_nic_io *nic_io,
+ struct hinic3_dyna_qp_params *qp_params)
+{
+ struct hinic3_io_queue *sqs = qp_params->sqs;
+ struct hinic3_io_queue *rqs = qp_params->rqs;
+ u16 q_id;
+
+ nic_io->num_qps = qp_params->num_qps;
+ nic_io->sq = qp_params->sqs;
+ nic_io->rq = qp_params->rqs;
+ for (q_id = 0; q_id < nic_io->num_qps; q_id++) {
+ sqs[q_id].tx.cons_idx_addr =
+ HINIC3_CI_VADDR(nic_io->ci_vaddr_base, q_id);
+ /* clear ci value */
+ *(u16 *)sqs[q_id].tx.cons_idx_addr = 0;
+ sqs[q_id].db_addr = nic_io->sqs_db_addr;
+
+ /* The first num_qps doorbells are used by the sqs */
+ rqs[q_id].db_addr = nic_io->rqs_db_addr;
+ }
+}
+
+int hinic3_init_qps(void *hwdev, struct hinic3_dyna_qp_params *qp_params)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !qp_params)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return -EFAULT;
+ }
+
+ init_qps_info(nic_io, qp_params);
+
+ return hinic3_init_qp_ctxts(hwdev);
+}
+
+void hinic3_deinit_qps(void *hwdev, struct hinic3_dyna_qp_params *qp_params)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !qp_params)
+ return;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return;
+ }
+
+ qp_params->sqs = nic_io->sq;
+ qp_params->rqs = nic_io->rq;
+ qp_params->num_qps = nic_io->num_qps;
+
+ hinic3_free_qp_ctxts(hwdev);
+}
+
+int hinic3_create_qps(void *hwdev, u16 num_qp, u32 sq_depth, u32 rq_depth,
+ struct irq_info *qps_msix_arry)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_dyna_qp_params qp_params = {0};
+ int err;
+
+ if (!hwdev || !qps_msix_arry)
+ return -EFAULT;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io) {
+ pr_err("Failed to get nic service adapter\n");
+ return -EFAULT;
+ }
+
+ err = hinic3_init_nicio_res(hwdev);
+ if (err)
+ return err;
+
+ qp_params.num_qps = num_qp;
+ qp_params.sq_depth = sq_depth;
+ qp_params.rq_depth = rq_depth;
+ err = hinic3_alloc_qps(hwdev, qps_msix_arry, &qp_params);
+ if (err) {
+ hinic3_deinit_nicio_res(hwdev);
+ nic_err(nic_io->dev_hdl,
+ "Failed to allocate qps, err: %d\n", err);
+ return err;
+ }
+
+ init_qps_info(nic_io, &qp_params);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_create_qps);
+
+void hinic3_destroy_qps(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_dyna_qp_params qp_params = {0};
+
+ if (!hwdev)
+ return;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return;
+
+ hinic3_deinit_qps(hwdev, &qp_params);
+ hinic3_free_qps(hwdev, &qp_params);
+ hinic3_deinit_nicio_res(hwdev);
+}
+EXPORT_SYMBOL(hinic3_destroy_qps);
+
+void *hinic3_get_nic_queue(void *hwdev, u16 q_id, enum hinic3_queue_type q_type)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || q_type >= HINIC3_MAX_QUEUE_TYPE)
+ return NULL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return NULL;
+
+ return ((q_type == HINIC3_SQ) ? &nic_io->sq[q_id] : &nic_io->rq[q_id]);
+}
+EXPORT_SYMBOL(hinic3_get_nic_queue);
+
+static void hinic3_qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr,
+ enum hinic3_qp_ctxt_type ctxt_type,
+ u16 num_queues, u16 q_id)
+{
+ qp_ctxt_hdr->queue_type = ctxt_type;
+ qp_ctxt_hdr->num_queues = num_queues;
+ qp_ctxt_hdr->start_qid = q_id;
+ qp_ctxt_hdr->rsvd = 0;
+
+ hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr));
+}
+
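
The microcode reads these blocks big-endian, so each prepare helper byte-swaps the whole structure exactly once, after every host-order field is final; a minimal sketch of the pattern:

	struct hinic3_qp_ctxt_header hdr = {
		.queue_type = HINIC3_QP_CTXT_TYPE_SQ,
		.num_queues = 4,
		.start_qid = 0,
	};

	/* After this call the struct holds big-endian values, so no field
	 * may be read or modified in host order anymore.
	 */
	hinic3_cpu_to_be32(&hdr, sizeof(hdr));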
+static void hinic3_sq_prepare_ctxt(struct hinic3_io_queue *sq, u16 sq_id,
+ struct hinic3_sq_ctxt *sq_ctxt)
+{
+ u64 wq_page_addr;
+ u64 wq_page_pfn, wq_block_pfn;
+ u32 wq_page_pfn_hi, wq_page_pfn_lo;
+ u32 wq_block_pfn_hi, wq_block_pfn_lo;
+ u16 pi_start, ci_start;
+
+ ci_start = hinic3_get_sq_local_ci(sq);
+ pi_start = hinic3_get_sq_local_pi(sq);
+
+ wq_page_addr = hinic3_wq_get_first_wqe_page_addr(&sq->wq);
+
+ wq_page_pfn = WQ_PAGE_PFN(wq_page_addr);
+ wq_page_pfn_hi = upper_32_bits(wq_page_pfn);
+ wq_page_pfn_lo = lower_32_bits(wq_page_pfn);
+
+ wq_block_pfn = WQ_BLOCK_PFN(sq->wq.wq_block_paddr);
+ wq_block_pfn_hi = upper_32_bits(wq_block_pfn);
+ wq_block_pfn_lo = lower_32_bits(wq_block_pfn);
+
+ sq_ctxt->ci_pi =
+ SQ_CTXT_CI_PI_SET(ci_start, CI_IDX) |
+ SQ_CTXT_CI_PI_SET(pi_start, PI_IDX);
+
+ sq_ctxt->drop_mode_sp =
+ SQ_CTXT_MODE_SET(0, SP_FLAG) |
+ SQ_CTXT_MODE_SET(0, PKT_DROP);
+
+ sq_ctxt->wq_pfn_hi_owner =
+ SQ_CTXT_WQ_PAGE_SET(wq_page_pfn_hi, HI_PFN) |
+ SQ_CTXT_WQ_PAGE_SET(1, OWNER);
+
+ sq_ctxt->wq_pfn_lo = wq_page_pfn_lo;
+
+ /* TODO */
+ sq_ctxt->pkt_drop_thd =
+ SQ_CTXT_PKT_DROP_THD_SET(tx_drop_thd_on, THD_ON) |
+ SQ_CTXT_PKT_DROP_THD_SET(tx_drop_thd_off, THD_OFF);
+
+ sq_ctxt->global_sq_id =
+ SQ_CTXT_GLOBAL_QUEUE_ID_SET(sq_id, GLOBAL_SQ_ID);
+
+ /* enable c-vlan insertion by default */
+ sq_ctxt->vlan_ceq_attr =
+ SQ_CTXT_VLAN_CEQ_SET(0, CEQ_EN) |
+ SQ_CTXT_VLAN_CEQ_SET(1, INSERT_MODE);
+
+ sq_ctxt->rsvd0 = 0;
+
+ sq_ctxt->pref_cache =
+ SQ_CTXT_PREF_SET(WQ_PREFETCH_MIN, CACHE_MIN) |
+ SQ_CTXT_PREF_SET(WQ_PREFETCH_MAX, CACHE_MAX) |
+ SQ_CTXT_PREF_SET(WQ_PREFETCH_THRESHOLD, CACHE_THRESHOLD);
+
+ sq_ctxt->pref_ci_owner =
+ SQ_CTXT_PREF_SET(CI_HIGN_IDX(ci_start), CI_HI) |
+ SQ_CTXT_PREF_SET(1, OWNER);
+
+ sq_ctxt->pref_wq_pfn_hi_ci =
+ SQ_CTXT_PREF_SET(ci_start, CI_LOW) |
+ SQ_CTXT_PREF_SET(wq_page_pfn_hi, WQ_PFN_HI);
+
+ sq_ctxt->pref_wq_pfn_lo = wq_page_pfn_lo;
+
+ sq_ctxt->wq_block_pfn_hi =
+ SQ_CTXT_WQ_BLOCK_SET(wq_block_pfn_hi, PFN_HI);
+
+ sq_ctxt->wq_block_pfn_lo = wq_block_pfn_lo;
+
+ hinic3_cpu_to_be32(sq_ctxt, sizeof(*sq_ctxt));
+}
+
+static void hinic3_rq_prepare_ctxt_get_wq_info(struct hinic3_io_queue *rq,
+ u32 *wq_page_pfn_hi, u32 *wq_page_pfn_lo,
+ u32 *wq_block_pfn_hi, u32 *wq_block_pfn_lo)
+{
+ u64 wq_page_addr;
+ u64 wq_page_pfn, wq_block_pfn;
+
+ wq_page_addr = hinic3_wq_get_first_wqe_page_addr(&rq->wq);
+
+ wq_page_pfn = WQ_PAGE_PFN(wq_page_addr);
+ *wq_page_pfn_hi = upper_32_bits(wq_page_pfn);
+ *wq_page_pfn_lo = lower_32_bits(wq_page_pfn);
+
+ wq_block_pfn = WQ_BLOCK_PFN(rq->wq.wq_block_paddr);
+ *wq_block_pfn_hi = upper_32_bits(wq_block_pfn);
+ *wq_block_pfn_lo = lower_32_bits(wq_block_pfn);
+}
+
+static void hinic3_rq_prepare_ctxt(struct hinic3_io_queue *rq, struct hinic3_rq_ctxt *rq_ctxt)
+{
+ u32 wq_page_pfn_hi, wq_page_pfn_lo;
+ u32 wq_block_pfn_hi, wq_block_pfn_lo;
+ u16 pi_start, ci_start;
+ u16 wqe_type = rq->wqe_type;
+
+ /* RQ depth is in unit of 8Bytes */
+ ci_start = (u16)((u32)hinic3_get_rq_local_ci(rq) << wqe_type);
+ pi_start = (u16)((u32)hinic3_get_rq_local_pi(rq) << wqe_type);
+
+ hinic3_rq_prepare_ctxt_get_wq_info(rq, &wq_page_pfn_hi, &wq_page_pfn_lo,
+ &wq_block_pfn_hi, &wq_block_pfn_lo);
+
+ rq_ctxt->ci_pi =
+ RQ_CTXT_CI_PI_SET(ci_start, CI_IDX) |
+ RQ_CTXT_CI_PI_SET(pi_start, PI_IDX);
+
+ rq_ctxt->ceq_attr = RQ_CTXT_CEQ_ATTR_SET(0, EN) |
+ RQ_CTXT_CEQ_ATTR_SET(rq->msix_entry_idx, INTR);
+
+ rq_ctxt->wq_pfn_hi_type_owner =
+ RQ_CTXT_WQ_PAGE_SET(wq_page_pfn_hi, HI_PFN) |
+ RQ_CTXT_WQ_PAGE_SET(1, OWNER);
+
+ switch (wqe_type) {
+ case HINIC3_EXTEND_RQ_WQE:
+ /* use 32Byte WQE with SGE for CQE */
+ rq_ctxt->wq_pfn_hi_type_owner |=
+ RQ_CTXT_WQ_PAGE_SET(0, WQE_TYPE);
+ break;
+ case HINIC3_NORMAL_RQ_WQE:
+ /* use 16Byte WQE with 32Bytes SGE for CQE */
+ rq_ctxt->wq_pfn_hi_type_owner |=
+ RQ_CTXT_WQ_PAGE_SET(2, WQE_TYPE);
+ rq_ctxt->cqe_sge_len = RQ_CTXT_CQE_LEN_SET(1, CQE_LEN);
+ break;
+ default:
+ pr_err("Invalid rq wqe type: %u", wqe_type);
+ }
+
+ rq_ctxt->wq_pfn_lo = wq_page_pfn_lo;
+
+ rq_ctxt->pref_cache =
+ RQ_CTXT_PREF_SET(WQ_PREFETCH_MIN, CACHE_MIN) |
+ RQ_CTXT_PREF_SET(WQ_PREFETCH_MAX, CACHE_MAX) |
+ RQ_CTXT_PREF_SET(WQ_PREFETCH_THRESHOLD, CACHE_THRESHOLD);
+
+ rq_ctxt->pref_ci_owner =
+ RQ_CTXT_PREF_SET(CI_HIGN_IDX(ci_start), CI_HI) |
+ RQ_CTXT_PREF_SET(1, OWNER);
+
+ rq_ctxt->pref_wq_pfn_hi_ci =
+ RQ_CTXT_PREF_SET(wq_page_pfn_hi, WQ_PFN_HI) |
+ RQ_CTXT_PREF_SET(ci_start, CI_LOW);
+
+ rq_ctxt->pref_wq_pfn_lo = wq_page_pfn_lo;
+
+ rq_ctxt->pi_paddr_hi = upper_32_bits(rq->rx.pi_dma_addr);
+ rq_ctxt->pi_paddr_lo = lower_32_bits(rq->rx.pi_dma_addr);
+
+ rq_ctxt->wq_block_pfn_hi =
+ RQ_CTXT_WQ_BLOCK_SET(wq_block_pfn_hi, PFN_HI);
+
+ rq_ctxt->wq_block_pfn_lo = wq_block_pfn_lo;
+
+ hinic3_cpu_to_be32(rq_ctxt, sizeof(*rq_ctxt));
+}
+
+static int init_sq_ctxts(struct hinic3_nic_io *nic_io)
+{
+ struct hinic3_sq_ctxt_block *sq_ctxt_block = NULL;
+ struct hinic3_sq_ctxt *sq_ctxt = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_io_queue *sq = NULL;
+ u64 out_param = 0;
+ u16 q_id, curr_id, max_ctxts, i;
+ int err = 0;
+
+ cmd_buf = hinic3_alloc_cmd_buf(nic_io->hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
+ q_id = 0;
+ while (q_id < nic_io->num_qps) {
+ sq_ctxt_block = cmd_buf->buf;
+ sq_ctxt = sq_ctxt_block->sq_ctxt;
+
+ max_ctxts = (nic_io->num_qps - q_id) > HINIC3_Q_CTXT_MAX ?
+ HINIC3_Q_CTXT_MAX : (nic_io->num_qps - q_id);
+
+ hinic3_qp_prepare_cmdq_header(&sq_ctxt_block->cmdq_hdr,
+ HINIC3_QP_CTXT_TYPE_SQ, max_ctxts,
+ q_id);
+
+ for (i = 0; i < max_ctxts; i++) {
+ curr_id = q_id + i;
+ sq = &nic_io->sq[curr_id];
+
+ hinic3_sq_prepare_ctxt(sq, curr_id, &sq_ctxt[i]);
+ }
+
+ cmd_buf->size = SQ_CTXT_SIZE(max_ctxts);
+
+ err = hinic3_cmdq_direct_resp(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err || out_param != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set SQ ctxts, err: %d, out_param: 0x%llx\n",
+ err, out_param);
+
+ err = -EFAULT;
+ break;
+ }
+
+ q_id += max_ctxts;
+ }
+
+ hinic3_free_cmd_buf(nic_io->hwdev, cmd_buf);
+
+ return err;
+}
+
+static int init_rq_ctxts(struct hinic3_nic_io *nic_io)
+{
+ struct hinic3_rq_ctxt_block *rq_ctxt_block = NULL;
+ struct hinic3_rq_ctxt *rq_ctxt = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_io_queue *rq = NULL;
+ u64 out_param = 0;
+ u16 q_id, curr_id, max_ctxts, i;
+ int err = 0;
+
+ cmd_buf = hinic3_alloc_cmd_buf(nic_io->hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
+ q_id = 0;
+ while (q_id < nic_io->num_qps) {
+ rq_ctxt_block = cmd_buf->buf;
+ rq_ctxt = rq_ctxt_block->rq_ctxt;
+
+ max_ctxts = (nic_io->num_qps - q_id) > HINIC3_Q_CTXT_MAX ?
+ HINIC3_Q_CTXT_MAX : (nic_io->num_qps - q_id);
+
+ hinic3_qp_prepare_cmdq_header(&rq_ctxt_block->cmdq_hdr,
+ HINIC3_QP_CTXT_TYPE_RQ, max_ctxts,
+ q_id);
+
+ for (i = 0; i < max_ctxts; i++) {
+ curr_id = q_id + i;
+ rq = &nic_io->rq[curr_id];
+
+ hinic3_rq_prepare_ctxt(rq, &rq_ctxt[i]);
+ }
+
+ cmd_buf->size = RQ_CTXT_SIZE(max_ctxts);
+
+ err = hinic3_cmdq_direct_resp(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err || out_param != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set RQ ctxts, err: %d, out_param: 0x%llx\n",
+ err, out_param);
+
+ err = -EFAULT;
+ break;
+ }
+
+ q_id += max_ctxts;
+ }
+
+ hinic3_free_cmd_buf(nic_io->hwdev, cmd_buf);
+
+ return err;
+}
+
+static int init_qp_ctxts(struct hinic3_nic_io *nic_io)
+{
+ int err;
+
+ err = init_sq_ctxts(nic_io);
+ if (err)
+ return err;
+
+ err = init_rq_ctxts(nic_io);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static int clean_queue_offload_ctxt(struct hinic3_nic_io *nic_io,
+ enum hinic3_qp_ctxt_type ctxt_type)
+{
+ struct hinic3_clean_queue_ctxt *ctxt_block = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ u64 out_param = 0;
+ int err;
+
+ cmd_buf = hinic3_alloc_cmd_buf(nic_io->hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
+ ctxt_block = cmd_buf->buf;
+ ctxt_block->cmdq_hdr.num_queues = nic_io->max_qps;
+ ctxt_block->cmdq_hdr.queue_type = ctxt_type;
+ ctxt_block->cmdq_hdr.start_qid = 0;
+
+ hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block));
+
+ cmd_buf->size = sizeof(*ctxt_block);
+
+ err = hinic3_cmdq_direct_resp(nic_io->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err || out_param) {
+ nic_err(nic_io->dev_hdl, "Failed to clean queue offload ctxts, err: %d, out_param: 0x%llx\n",
+ err, out_param);
+
+ err = -EFAULT;
+ }
+
+ hinic3_free_cmd_buf(nic_io->hwdev, cmd_buf);
+
+ return err;
+}
+
+static int clean_qp_offload_ctxt(struct hinic3_nic_io *nic_io)
+{
+ /* clean LRO/TSO context space */
+ return (clean_queue_offload_ctxt(nic_io, HINIC3_QP_CTXT_TYPE_SQ) ||
+ clean_queue_offload_ctxt(nic_io, HINIC3_QP_CTXT_TYPE_RQ));
+}
+
+/* init qps ctxt and set sq ci attr and arm all sq */
+int hinic3_init_qp_ctxts(void *hwdev)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_sq_attr sq_attr;
+ u32 rq_depth;
+ u16 q_id;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ if (!nic_io)
+ return -EFAULT;
+
+ err = init_qp_ctxts(nic_io);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to init QP ctxts\n");
+ return err;
+ }
+
+ /* clean LRO/TSO context space */
+ err = clean_qp_offload_ctxt(nic_io);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to clean qp offload ctxts\n");
+ return err;
+ }
+
+ rq_depth = nic_io->rq[0].wq.q_depth << nic_io->rq[0].wqe_type;
+
+ err = hinic3_set_root_ctxt(hwdev, rq_depth, nic_io->sq[0].wq.q_depth,
+ nic_io->rx_buff_len, HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to set root context\n");
+ return err;
+ }
+
+ for (q_id = 0; q_id < nic_io->num_qps; q_id++) {
+ sq_attr.ci_dma_base =
+ HINIC3_CI_PADDR(nic_io->ci_dma_base, q_id) >> 0x2;
+ sq_attr.pending_limit = tx_pending_limit;
+ sq_attr.coalescing_time = tx_coalescing_time;
+ sq_attr.intr_en = 1;
+ sq_attr.intr_idx = nic_io->sq[q_id].msix_entry_idx;
+ sq_attr.l2nic_sqn = q_id;
+ sq_attr.dma_attr_off = 0;
+ err = hinic3_set_ci_table(hwdev, &sq_attr);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to set ci table\n");
+ goto set_cons_idx_table_err;
+ }
+ }
+
+ return 0;
+
+set_cons_idx_table_err:
+ hinic3_clean_root_ctxt(hwdev, HINIC3_CHANNEL_NIC);
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(hinic3_init_qp_ctxts);
+
+void hinic3_free_qp_ctxts(void *hwdev)
+{
+ if (!hwdev)
+ return;
+
+ hinic3_clean_root_ctxt(hwdev, HINIC3_CHANNEL_NIC);
+}
+EXPORT_SYMBOL_GPL(hinic3_free_qp_ctxts);
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h
new file mode 100644
index 000000000000..5c5585a7fd74
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_io.h
@@ -0,0 +1,325 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_IO_H
+#define HINIC3_NIC_IO_H
+
+#include "hinic3_crm.h"
+#include "hinic3_common.h"
+#include "hinic3_wq.h"
+
+#define HINIC3_MAX_TX_QUEUE_DEPTH 65536
+#define HINIC3_MAX_RX_QUEUE_DEPTH 16384
+
+#define HINIC3_MIN_QUEUE_DEPTH 128
+
+#define HINIC3_SQ_WQEBB_SHIFT 4
+#define HINIC3_RQ_WQEBB_SHIFT 3
+
+#define HINIC3_SQ_WQEBB_SIZE BIT(HINIC3_SQ_WQEBB_SHIFT)
+#define HINIC3_CQE_SIZE_SHIFT 4
+
+enum hinic3_rq_wqe_type {
+ HINIC3_COMPACT_RQ_WQE,
+ HINIC3_NORMAL_RQ_WQE,
+ HINIC3_EXTEND_RQ_WQE,
+};
+
+struct hinic3_io_queue {
+ struct hinic3_wq wq;
+ union {
+ u8 wqe_type; /* for rq */
+ u8 owner; /* for sq */
+ };
+ u8 rsvd1;
+ u16 rsvd2;
+
+ u16 q_id;
+ u16 msix_entry_idx;
+
+ u8 __iomem *db_addr;
+
+ union {
+ struct {
+ void *cons_idx_addr;
+ } tx;
+
+ struct {
+ u16 *pi_virt_addr;
+ dma_addr_t pi_dma_addr;
+ } rx;
+ };
+} ____cacheline_aligned;
+
+struct hinic3_nic_db {
+ u32 db_info;
+ u32 pi_hi;
+};
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+/* *
+ * @brief hinic3_get_sq_free_wqebbs - get send queue free wqebbs
+ * @param sq: send queue
+ * @retval : number of free wqebbs
+ */
+static inline u16 hinic3_get_sq_free_wqebbs(struct hinic3_io_queue *sq)
+{
+ return hinic3_wq_free_wqebbs(&sq->wq);
+}
+
+/* *
+ * @brief hinic3_update_sq_local_ci - update send queue local consumer index
+ * @param sq: send queue
+ * @param wqebb_cnt: number of wqebbs
+ */
+static inline void hinic3_update_sq_local_ci(struct hinic3_io_queue *sq,
+ u16 wqebb_cnt)
+{
+ hinic3_wq_put_wqebbs(&sq->wq, wqebb_cnt);
+}
+
+/* *
+ * @brief hinic3_get_sq_local_ci - get send queue local consumer index
+ * @param sq: send queue
+ * @retval : local consumer index
+ */
+static inline u16 hinic3_get_sq_local_ci(const struct hinic3_io_queue *sq)
+{
+ return WQ_MASK_IDX(&sq->wq, sq->wq.cons_idx);
+}
+
+/* *
+ * @brief hinic3_get_sq_local_pi - get send queue local producer index
+ * @param sq: send queue
+ * @retval : local producer index
+ */
+static inline u16 hinic3_get_sq_local_pi(const struct hinic3_io_queue *sq)
+{
+ return WQ_MASK_IDX(&sq->wq, sq->wq.prod_idx);
+}
+
+/* *
+ * @brief hinic3_get_sq_hw_ci - get send queue hardware consumer index
+ * @param sq: send queue
+ * @retval : hardware consumer index
+ */
+static inline u16 hinic3_get_sq_hw_ci(const struct hinic3_io_queue *sq)
+{
+ return WQ_MASK_IDX(&sq->wq,
+ hinic3_hw_cpu16(*(u16 *)sq->tx.cons_idx_addr));
+}
+
+/* *
+ * @brief hinic3_get_sq_one_wqebb - get send queue wqe with single wqebb
+ * @param sq: send queue
+ * @param pi: return current pi
+ * @retval : wqe base address
+ */
+static inline void *hinic3_get_sq_one_wqebb(struct hinic3_io_queue *sq, u16 *pi)
+{
+ return hinic3_wq_get_one_wqebb(&sq->wq, pi);
+}
+
+/* *
+ * @brief hinic3_get_sq_multi_wqebb - get send queue wqe with multiple wqebbs
+ * @param sq: send queue
+ * @param wqebb_cnt: wqebb counter
+ * @param pi: return current pi
+ * @param second_part_wqebbs_addr: second part wqebbs base address
+ * @param first_part_wqebbs_num: number of wqebbs in the first part
+ * @retval : first part wqebbs base address
+ */
+static inline void *hinic3_get_sq_multi_wqebbs(struct hinic3_io_queue *sq,
+ u16 wqebb_cnt, u16 *pi,
+ void **second_part_wqebbs_addr,
+ u16 *first_part_wqebbs_num)
+{
+ return hinic3_wq_get_multi_wqebbs(&sq->wq, wqebb_cnt, pi,
+ second_part_wqebbs_addr,
+ first_part_wqebbs_num);
+}
+
+/* *
+ * @brief hinic3_get_and_update_sq_owner - get and update send queue owner bit
+ * @param sq: send queue
+ * @param curr_pi: current pi
+ * @param wqebb_cnt: wqebb counter
+ * @retval : owner bit
+ */
+static inline u16 hinic3_get_and_update_sq_owner(struct hinic3_io_queue *sq,
+ u16 curr_pi, u16 wqebb_cnt)
+{
+ u16 owner = sq->owner;
+
+ if (unlikely(curr_pi + wqebb_cnt >= sq->wq.q_depth))
+ sq->owner = !sq->owner;
+
+ return owner;
+}
+
+/* *
+ * @brief hinic3_get_sq_wqe_with_owner - get send queue wqe with owner
+ * @param sq: send queue
+ * @param wqebb_cnt: wqebb counter
+ * @param pi: return current pi
+ * @param owner: return owner bit
+ * @param second_part_wqebbs_addr: second part wqebbs base address
+ * @param first_part_wqebbs_num: number of wqebbs in the first part
+ * @retval : first part wqebbs base address
+ */
+static inline void *hinic3_get_sq_wqe_with_owner(struct hinic3_io_queue *sq,
+ u16 wqebb_cnt, u16 *pi,
+ u16 *owner,
+ void **second_part_wqebbs_addr,
+ u16 *first_part_wqebbs_num)
+{
+ void *wqe = hinic3_wq_get_multi_wqebbs(&sq->wq, wqebb_cnt, pi,
+ second_part_wqebbs_addr,
+ first_part_wqebbs_num);
+
+ *owner = sq->owner;
+ if (unlikely(*pi + wqebb_cnt >= sq->wq.q_depth))
+ sq->owner = !sq->owner;
+
+ return wqe;
+}
+
+/* *
+ * @brief hinic3_rollback_sq_wqebbs - rollback send queue wqe
+ * @param sq: send queue
+ * @param wqebb_cnt: wqebb counter
+ * @param owner: owner bit
+ */
+static inline void hinic3_rollback_sq_wqebbs(struct hinic3_io_queue *sq,
+ u16 wqebb_cnt, u16 owner)
+{
+ if (owner != sq->owner)
+ sq->owner = (u8)owner;
+ sq->wq.prod_idx -= wqebb_cnt;
+}
+
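The owner bit managed above is the ring's valid flag: hardware treats a WQE as
posted only when its owner bit matches the current pass over the queue, and the
driver flips sq->owner each time the producer index wraps. A hedged sketch of
the post/rollback pairing (fill_tx_descs() is hypothetical, standing in for
descriptor setup and DMA mapping):

    static int example_post_wqe(struct hinic3_io_queue *sq, u16 wqebb_cnt)
    {
    	void *second_part = NULL;
    	u16 first_part_num = 0;
    	u16 pi = 0, owner = 0;
    	void *wqe;

    	wqe = hinic3_get_sq_wqe_with_owner(sq, wqebb_cnt, &pi, &owner,
    					   &second_part, &first_part_num);
    	if (fill_tx_descs(wqe, second_part, first_part_num) != 0) {
    		/* undo both the producer index and the owner toggle */
    		hinic3_rollback_sq_wqebbs(sq, wqebb_cnt, owner);
    		return -EIO;
    	}
    	return 0;
    }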
+/* *
+ * @brief hinic3_rq_wqe_addr - get receive queue wqe address by queue index
+ * @param rq: receive queue
+ * @param idx: wq index
+ * @retval: wqe base address
+ */
+static inline void *hinic3_rq_wqe_addr(struct hinic3_io_queue *rq, u16 idx)
+{
+ return hinic3_wq_wqebb_addr(&rq->wq, idx);
+}
+
+/* *
+ * @brief hinic3_update_rq_hw_pi - update receive queue hardware pi
+ * @param rq: receive queue
+ * @param pi: pi
+ */
+static inline void hinic3_update_rq_hw_pi(struct hinic3_io_queue *rq, u16 pi)
+{
+ *rq->rx.pi_virt_addr = cpu_to_be16((pi & rq->wq.idx_mask) <<
+ rq->wqe_type);
+}
+
+/* *
+ * @brief hinic3_update_rq_local_ci - update receive queue local consumer index
+ * @param rq: receive queue
+ * @param wqebb_cnt: number of wqebbs
+ */
+static inline void hinic3_update_rq_local_ci(struct hinic3_io_queue *rq,
+ u16 wqebb_cnt)
+{
+ hinic3_wq_put_wqebbs(&rq->wq, wqebb_cnt);
+}
+
+/* *
+ * @brief hinic3_get_rq_local_ci - get receive queue local ci
+ * @param rq: receive queue
+ * @retval: receive queue local ci
+ */
+static inline u16 hinic3_get_rq_local_ci(const struct hinic3_io_queue *rq)
+{
+ return WQ_MASK_IDX(&rq->wq, rq->wq.cons_idx);
+}
+
+/* *
+ * @brief hinic3_get_rq_local_pi - get receive queue local pi
+ * @param rq: receive queue
+ * @retval: receive queue local pi
+ */
+static inline u16 hinic3_get_rq_local_pi(const struct hinic3_io_queue *rq)
+{
+ return WQ_MASK_IDX(&rq->wq, rq->wq.prod_idx);
+}
+
+/* ******************** DB INFO ******************** */
+#define DB_INFO_QID_SHIFT 0
+#define DB_INFO_NON_FILTER_SHIFT 22
+#define DB_INFO_CFLAG_SHIFT 23
+#define DB_INFO_COS_SHIFT 24
+#define DB_INFO_TYPE_SHIFT 27
+
+#define DB_INFO_QID_MASK 0x1FFFU
+#define DB_INFO_NON_FILTER_MASK 0x1U
+#define DB_INFO_CFLAG_MASK 0x1U
+#define DB_INFO_COS_MASK 0x7U
+#define DB_INFO_TYPE_MASK 0x1FU
+#define DB_INFO_SET(val, member) \
+ (((u32)(val) & DB_INFO_##member##_MASK) << \
+ DB_INFO_##member##_SHIFT)
+
+#define DB_PI_LOW_MASK 0xFFU
+#define DB_PI_HIGH_MASK 0xFFU
+#define DB_PI_LOW(pi) ((pi) & DB_PI_LOW_MASK)
+#define DB_PI_HI_SHIFT 8
+#define DB_PI_HIGH(pi) (((pi) >> DB_PI_HI_SHIFT) & DB_PI_HIGH_MASK)
+#define DB_ADDR(queue, pi) ((u64 *)((queue)->db_addr) + DB_PI_LOW(pi))
+#define SRC_TYPE 1
+
+/* CFLAG_DATA_PATH */
+#define SQ_CFLAG_DP 0
+#define RQ_CFLAG_DP 1
+/* *
+ * @brief hinic3_write_db - write doorbell
+ * @param queue: nic io queue
+ * @param cos: cos index
+ * @param cflag: 0--sq, 1--rq
+ * @param pi: producer index
+ */
+static inline void hinic3_write_db(struct hinic3_io_queue *queue, int cos,
+ u8 cflag, u16 pi)
+{
+ struct hinic3_nic_db db;
+
+ db.db_info = DB_INFO_SET(SRC_TYPE, TYPE) | DB_INFO_SET(cflag, CFLAG) |
+ DB_INFO_SET(cos, COS) | DB_INFO_SET(queue->q_id, QID);
+ db.pi_hi = DB_PI_HIGH(pi);
+ /* Data should be written to HW in Big Endian Format */
+ db.db_info = hinic3_hw_be32(db.db_info);
+ db.pi_hi = hinic3_hw_be32(db.pi_hi);
+
+ wmb(); /* Write all before the doorbell */
+
+ writeq(*((u64 *)&db), DB_ADDR(queue, pi));
+}
+
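A short usage sketch for the doorbell path above (the wrapper name is
illustrative). Note how the producer index is split: the low 8 bits select the
doorbell address via DB_ADDR(), while the high 8 bits travel in the pi_hi
payload; ordering of the WQE stores against the doorbell write is already
handled by the wmb() inside hinic3_write_db():

    static void example_tx_kick(struct hinic3_io_queue *sq, int cos)
    {
    	/* publish all pending WQEBBs up to the local producer index */
    	hinic3_write_db(sq, cos, SQ_CFLAG_DP, hinic3_get_sq_local_pi(sq));
    }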
+struct hinic3_dyna_qp_params {
+ u16 num_qps;
+ u32 sq_depth;
+ u32 rq_depth;
+
+ struct hinic3_io_queue *sqs;
+ struct hinic3_io_queue *rqs;
+};
+
+int hinic3_alloc_qps(void *hwdev, struct irq_info *qps_msix_arry,
+ struct hinic3_dyna_qp_params *qp_params);
+void hinic3_free_qps(void *hwdev, struct hinic3_dyna_qp_params *qp_params);
+int hinic3_init_qps(void *hwdev, struct hinic3_dyna_qp_params *qp_params);
+void hinic3_deinit_qps(void *hwdev, struct hinic3_dyna_qp_params *qp_params);
+int hinic3_init_nicio_res(void *hwdev);
+void hinic3_deinit_nicio_res(void *hwdev);
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.c b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.c
new file mode 100644
index 000000000000..78d943d2dab5
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.c
@@ -0,0 +1,47 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/netdevice.h>
+#include <linux/device.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+
+#include "ossl_knl.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_profile.h"
+#include "hinic3_nic_prof.h"
+
+static bool is_match_nic_prof_default_adapter(void *device)
+{
+ /* always match default profile adapter in standard scene */
+ return true;
+}
+
+struct hinic3_prof_adapter nic_prof_adap_objs[] = {
+ /* Add prof adapter before default profile */
+ {
+ .type = PROF_ADAP_TYPE_DEFAULT,
+ .match = is_match_nic_prof_default_adapter,
+ .init = NULL,
+ .deinit = NULL,
+ },
+};
+
+void hinic3_init_nic_prof_adapter(struct hinic3_nic_dev *nic_dev)
+{
+ u16 num_adap = ARRAY_SIZE(nic_prof_adap_objs);
+
+ nic_dev->prof_adap = hinic3_prof_init(nic_dev, nic_prof_adap_objs, num_adap,
+ (void *)&nic_dev->prof_attr);
+ if (nic_dev->prof_adap)
+ nic_info(&nic_dev->pdev->dev, "Find profile adapter type: %d\n",
+ nic_dev->prof_adap->type);
+}
+
+void hinic3_deinit_nic_prof_adapter(struct hinic3_nic_dev *nic_dev)
+{
+ hinic3_prof_deinit(nic_dev->prof_adap, nic_dev->prof_attr);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.h
new file mode 100644
index 000000000000..3c279e715b0a
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_prof.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_PROF_H
+#define HINIC3_NIC_PROF_H
+#include <linux/socket.h>
+
+#include <linux/types.h>
+
+#include "hinic3_nic_cfg.h"
+
+struct hinic3_nic_prof_attr {
+ void *priv_data;
+ char netdev_name[IFNAMSIZ];
+};
+
+struct hinic3_nic_dev;
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+static inline char *hinic3_get_dft_netdev_name_fmt(struct hinic3_nic_dev *nic_dev)
+{
+ if (nic_dev->prof_attr)
+ return nic_dev->prof_attr->netdev_name;
+
+ return NULL;
+}
+
+#ifdef CONFIG_MODULE_PROF
+int hinic3_set_master_dev_state(struct hinic3_nic_dev *nic_dev, u32 flag);
+u32 hinic3_get_link(struct net_device *dev);
+int hinic3_config_port_mtu(struct hinic3_nic_dev *nic_dev, u32 mtu);
+int hinic3_config_port_mac(struct hinic3_nic_dev *nic_dev, struct sockaddr *saddr);
+#else
+static inline int hinic3_set_master_dev_state(struct hinic3_nic_dev *nic_dev, u32 flag)
+{
+ return 0;
+}
+
+static inline int hinic3_config_port_mtu(struct hinic3_nic_dev *nic_dev, u32 mtu)
+{
+ return hinic3_set_port_mtu(nic_dev->hwdev, (u16)mtu);
+}
+
+static inline int hinic3_config_port_mac(struct hinic3_nic_dev *nic_dev, struct sockaddr *saddr)
+{
+ return hinic3_update_mac(nic_dev->hwdev, nic_dev->netdev->dev_addr, saddr->sa_data, 0,
+ hinic3_global_func_id(nic_dev->hwdev));
+}
+
+#endif
+
+void hinic3_init_nic_prof_adapter(struct hinic3_nic_dev *nic_dev);
+void hinic3_deinit_nic_prof_adapter(struct hinic3_nic_dev *nic_dev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_nic_qp.h b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_qp.h
new file mode 100644
index 000000000000..f492c5d8ad08
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_nic_qp.h
@@ -0,0 +1,384 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NIC_QP_H
+#define HINIC3_NIC_QP_H
+
+#include "hinic3_common.h"
+
+#define TX_MSS_DEFAULT 0x3E00
+#define TX_MSS_MIN 0x50
+
+#define HINIC3_MAX_SQ_SGE 18
+
+#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_SHIFT 0
+#define RQ_CQE_OFFOLAD_TYPE_IP_TYPE_SHIFT 5
+#define RQ_CQE_OFFOLAD_TYPE_ENC_L3_TYPE_SHIFT 7
+#define RQ_CQE_OFFOLAD_TYPE_TUNNEL_PKT_FORMAT_SHIFT 8
+#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_SHIFT 19
+#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21
+#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24
+
+#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK 0x1FU
+#define RQ_CQE_OFFOLAD_TYPE_IP_TYPE_MASK 0x3U
+#define RQ_CQE_OFFOLAD_TYPE_ENC_L3_TYPE_MASK 0x1U
+#define RQ_CQE_OFFOLAD_TYPE_TUNNEL_PKT_FORMAT_MASK 0xFU
+#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_MASK 0x3U
+#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U
+#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU
+
+#define RQ_CQE_OFFOLAD_TYPE_GET(val, member) \
+ (((val) >> RQ_CQE_OFFOLAD_TYPE_##member##_SHIFT) & \
+ RQ_CQE_OFFOLAD_TYPE_##member##_MASK)
+
+#define HINIC3_GET_RX_PKT_TYPE(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE)
+#define HINIC3_GET_RX_IP_TYPE(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, IP_TYPE)
+#define HINIC3_GET_RX_ENC_L3_TYPE(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, ENC_L3_TYPE)
+#define HINIC3_GET_RX_TUNNEL_PKT_FORMAT(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, TUNNEL_PKT_FORMAT)
+
+#define HINIC3_GET_RX_PKT_UMBCAST(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_UMBCAST)
+
+#define HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, VLAN_EN)
+
+#define HINIC3_GET_RSS_TYPES(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, RSS_TYPE)
+
+#define RQ_CQE_SGE_VLAN_SHIFT 0
+#define RQ_CQE_SGE_LEN_SHIFT 16
+
+#define RQ_CQE_SGE_VLAN_MASK 0xFFFFU
+#define RQ_CQE_SGE_LEN_MASK 0xFFFFU
+
+#define RQ_CQE_SGE_GET(val, member) \
+ (((val) >> RQ_CQE_SGE_##member##_SHIFT) & RQ_CQE_SGE_##member##_MASK)
+
+#define HINIC3_GET_RX_VLAN_TAG(vlan_len) RQ_CQE_SGE_GET(vlan_len, VLAN)
+
+#define HINIC3_GET_RX_PKT_LEN(vlan_len) RQ_CQE_SGE_GET(vlan_len, LEN)
+
+#define RQ_CQE_STATUS_CSUM_ERR_SHIFT 0
+#define RQ_CQE_STATUS_NUM_LRO_SHIFT 16
+#define RQ_CQE_STATUS_LRO_PUSH_SHIFT 25
+#define RQ_CQE_STATUS_LRO_ENTER_SHIFT 26
+#define RQ_CQE_STATUS_LRO_INTR_SHIFT 27
+
+#define RQ_CQE_STATUS_BP_EN_SHIFT 30
+#define RQ_CQE_STATUS_RXDONE_SHIFT 31
+#define RQ_CQE_STATUS_DECRY_PKT_SHIFT 29
+#define RQ_CQE_STATUS_FLUSH_SHIFT 28
+
+#define RQ_CQE_STATUS_CSUM_ERR_MASK 0xFFFFU
+#define RQ_CQE_STATUS_NUM_LRO_MASK 0xFFU
+#define RQ_CQE_STATUS_LRO_PUSH_MASK 0X1U
+#define RQ_CQE_STATUS_LRO_ENTER_MASK 0X1U
+#define RQ_CQE_STATUS_LRO_INTR_MASK 0X1U
+#define RQ_CQE_STATUS_BP_EN_MASK 0X1U
+#define RQ_CQE_STATUS_RXDONE_MASK 0x1U
+#define RQ_CQE_STATUS_FLUSH_MASK 0x1U
+#define RQ_CQE_STATUS_DECRY_PKT_MASK 0x1U
+
+#define RQ_CQE_STATUS_GET(val, member) \
+ (((val) >> RQ_CQE_STATUS_##member##_SHIFT) & \
+ RQ_CQE_STATUS_##member##_MASK)
+
+#define HINIC3_GET_RX_CSUM_ERR(status) RQ_CQE_STATUS_GET(status, CSUM_ERR)
+
+#define HINIC3_GET_RX_DONE(status) RQ_CQE_STATUS_GET(status, RXDONE)
+
+#define HINIC3_GET_RX_FLUSH(status) RQ_CQE_STATUS_GET(status, FLUSH)
+
+#define HINIC3_GET_RX_BP_EN(status) RQ_CQE_STATUS_GET(status, BP_EN)
+
+#define HINIC3_GET_RX_NUM_LRO(status) RQ_CQE_STATUS_GET(status, NUM_LRO)
+
+#define HINIC3_RX_IS_DECRY_PKT(status) RQ_CQE_STATUS_GET(status, DECRY_PKT)
+
+#define RQ_CQE_SUPER_CQE_EN_SHIFT 0
+#define RQ_CQE_PKT_NUM_SHIFT 1
+#define RQ_CQE_PKT_LAST_LEN_SHIFT 6
+#define RQ_CQE_PKT_FIRST_LEN_SHIFT 19
+
+#define RQ_CQE_SUPER_CQE_EN_MASK 0x1
+#define RQ_CQE_PKT_NUM_MASK 0x1FU
+#define RQ_CQE_PKT_FIRST_LEN_MASK 0x1FFFU
+#define RQ_CQE_PKT_LAST_LEN_MASK 0x1FFFU
+
+#define RQ_CQE_PKT_NUM_GET(val, member) \
+ (((val) >> RQ_CQE_PKT_##member##_SHIFT) & RQ_CQE_PKT_##member##_MASK)
+#define HINIC3_GET_RQ_CQE_PKT_NUM(pkt_info) RQ_CQE_PKT_NUM_GET(pkt_info, NUM)
+
+#define RQ_CQE_SUPER_CQE_EN_GET(val, member) \
+ (((val) >> RQ_CQE_##member##_SHIFT) & RQ_CQE_##member##_MASK)
+#define HINIC3_GET_SUPER_CQE_EN(pkt_info) \
+ RQ_CQE_SUPER_CQE_EN_GET(pkt_info, SUPER_CQE_EN)
+
+#define RQ_CQE_PKT_LEN_GET(val, member) \
+ (((val) >> RQ_CQE_PKT_##member##_SHIFT) & RQ_CQE_PKT_##member##_MASK)
+
+#define RQ_CQE_DECRY_INFO_DECRY_STATUS_SHIFT 8
+#define RQ_CQE_DECRY_INFO_ESP_NEXT_HEAD_SHIFT 0
+
+#define RQ_CQE_DECRY_INFO_DECRY_STATUS_MASK 0xFFU
+#define RQ_CQE_DECRY_INFO_ESP_NEXT_HEAD_MASK 0xFFU
+
+#define RQ_CQE_DECRY_INFO_GET(val, member) \
+ (((val) >> RQ_CQE_DECRY_INFO_##member##_SHIFT) & \
+ RQ_CQE_DECRY_INFO_##member##_MASK)
+
+#define HINIC3_GET_DECRYPT_STATUS(decry_info) \
+ RQ_CQE_DECRY_INFO_GET(decry_info, DECRY_STATUS)
+
+#define HINIC3_GET_ESP_NEXT_HEAD(decry_info) \
+ RQ_CQE_DECRY_INFO_GET(decry_info, ESP_NEXT_HEAD)
+
+struct hinic3_rq_cqe {
+ u32 status;
+ u32 vlan_len;
+
+ u32 offload_type;
+ u32 hash_val;
+ u32 xid;
+ u32 decrypt_info;
+ u32 rsvd6;
+ u32 pkt_info;
+};
+
+struct hinic3_sge_sect {
+ struct hinic3_sge sge;
+ u32 rsvd;
+};
+
+struct hinic3_rq_extend_wqe {
+ struct hinic3_sge_sect buf_desc;
+ struct hinic3_sge_sect cqe_sect;
+};
+
+struct hinic3_rq_normal_wqe {
+ u32 buf_hi_addr;
+ u32 buf_lo_addr;
+ u32 cqe_hi_addr;
+ u32 cqe_lo_addr;
+};
+
+struct hinic3_rq_wqe {
+ union {
+ struct hinic3_rq_normal_wqe normal_wqe;
+ struct hinic3_rq_extend_wqe extend_wqe;
+ };
+};
+
+struct hinic3_sq_wqe_desc {
+ u32 ctrl_len;
+ u32 queue_info;
+ u32 hi_addr;
+ u32 lo_addr;
+};
+
+/* The engine passes only the first 12B of the TS field directly to the uCode
+ * through metadata; vlan_offload is used by hardware when inserting a VLAN
+ * tag on TX
+ */
+struct hinic3_sq_task {
+ u32 pkt_info0;
+ u32 ip_identify;
+ u32 pkt_info2; /* ipsec used as spi */
+ u32 vlan_offload;
+};
+
+struct hinic3_sq_bufdesc {
+ u32 len; /* 31-bits Length, L2NIC only use length[17:0] */
+ u32 rsvd;
+ u32 hi_addr;
+ u32 lo_addr;
+};
+
+struct hinic3_sq_compact_wqe {
+ struct hinic3_sq_wqe_desc wqe_desc;
+};
+
+struct hinic3_sq_extend_wqe {
+ struct hinic3_sq_wqe_desc wqe_desc;
+ struct hinic3_sq_task task;
+	struct hinic3_sq_bufdesc buf_desc[];
+};
+
+struct hinic3_sq_wqe {
+ union {
+ struct hinic3_sq_compact_wqe compact_wqe;
+ struct hinic3_sq_extend_wqe extend_wqe;
+ };
+};
+
+/* Use section pointers to support non-contiguous WQEs */
+struct hinic3_sq_wqe_combo {
+ struct hinic3_sq_wqe_desc *ctrl_bd0;
+ struct hinic3_sq_task *task;
+ struct hinic3_sq_bufdesc *bds_head;
+ struct hinic3_sq_bufdesc *bds_sec2;
+ u16 first_bds_num;
+ u32 wqe_type;
+ u32 task_type;
+};
+
+/* ************* SQ_CTRL ************** */
+enum sq_wqe_data_format {
+ SQ_NORMAL_WQE = 0,
+};
+
+enum sq_wqe_ec_type {
+ SQ_WQE_COMPACT_TYPE = 0,
+ SQ_WQE_EXTENDED_TYPE = 1,
+};
+
+enum sq_wqe_tasksect_len_type {
+ SQ_WQE_TASKSECT_46BITS = 0,
+ SQ_WQE_TASKSECT_16BYTES = 1,
+};
+
+#define SQ_CTRL_BD0_LEN_SHIFT 0
+#define SQ_CTRL_RSVD_SHIFT 18
+#define SQ_CTRL_BUFDESC_NUM_SHIFT 19
+#define SQ_CTRL_TASKSECT_LEN_SHIFT 27
+#define SQ_CTRL_DATA_FORMAT_SHIFT 28
+#define SQ_CTRL_DIRECT_SHIFT 29
+#define SQ_CTRL_EXTENDED_SHIFT 30
+#define SQ_CTRL_OWNER_SHIFT 31
+
+#define SQ_CTRL_BD0_LEN_MASK 0x3FFFFU
+#define SQ_CTRL_RSVD_MASK 0x1U
+#define SQ_CTRL_BUFDESC_NUM_MASK 0xFFU
+#define SQ_CTRL_TASKSECT_LEN_MASK 0x1U
+#define SQ_CTRL_DATA_FORMAT_MASK 0x1U
+#define SQ_CTRL_DIRECT_MASK 0x1U
+#define SQ_CTRL_EXTENDED_MASK 0x1U
+#define SQ_CTRL_OWNER_MASK 0x1U
+
+#define SQ_CTRL_SET(val, member) \
+ (((u32)(val) & SQ_CTRL_##member##_MASK) << SQ_CTRL_##member##_SHIFT)
+
+#define SQ_CTRL_GET(val, member) \
+ (((val) >> SQ_CTRL_##member##_SHIFT) & SQ_CTRL_##member##_MASK)
+
+#define SQ_CTRL_CLEAR(val, member) \
+ ((val) & (~(SQ_CTRL_##member##_MASK << SQ_CTRL_##member##_SHIFT)))
+
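As a worked example of the SQ_CTRL_* accessors above, the first control dword
of an extended WQE could be composed as follows (the particular field values
are illustrative, not the driver's canonical TX setup):

    static u32 example_sq_ctrl_dword(u32 bd0_len, u8 bufdesc_num, u16 owner)
    {
    	return SQ_CTRL_SET(bd0_len, BD0_LEN) |
    	       SQ_CTRL_SET(bufdesc_num, BUFDESC_NUM) |
    	       SQ_CTRL_SET(SQ_WQE_TASKSECT_16BYTES, TASKSECT_LEN) |
    	       SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) |
    	       SQ_CTRL_SET(SQ_WQE_EXTENDED_TYPE, EXTENDED) |
    	       SQ_CTRL_SET(owner, OWNER);
    }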
+#define SQ_CTRL_QUEUE_INFO_PKT_TYPE_SHIFT 0
+#define SQ_CTRL_QUEUE_INFO_PLDOFF_SHIFT 2
+#define SQ_CTRL_QUEUE_INFO_UFO_SHIFT 10
+#define SQ_CTRL_QUEUE_INFO_TSO_SHIFT 11
+#define SQ_CTRL_QUEUE_INFO_TCPUDP_CS_SHIFT 12
+#define SQ_CTRL_QUEUE_INFO_MSS_SHIFT 13
+#define SQ_CTRL_QUEUE_INFO_SCTP_SHIFT 27
+#define SQ_CTRL_QUEUE_INFO_UC_SHIFT 28
+#define SQ_CTRL_QUEUE_INFO_PRI_SHIFT 29
+
+#define SQ_CTRL_QUEUE_INFO_PKT_TYPE_MASK 0x3U
+#define SQ_CTRL_QUEUE_INFO_PLDOFF_MASK 0xFFU
+#define SQ_CTRL_QUEUE_INFO_UFO_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_TSO_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_TCPUDP_CS_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_MSS_MASK 0x3FFFU
+#define SQ_CTRL_QUEUE_INFO_SCTP_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_UC_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_PRI_MASK 0x7U
+
+#define SQ_CTRL_QUEUE_INFO_SET(val, member) \
+ (((u32)(val) & SQ_CTRL_QUEUE_INFO_##member##_MASK) << \
+ SQ_CTRL_QUEUE_INFO_##member##_SHIFT)
+
+#define SQ_CTRL_QUEUE_INFO_GET(val, member) \
+ (((val) >> SQ_CTRL_QUEUE_INFO_##member##_SHIFT) & \
+ SQ_CTRL_QUEUE_INFO_##member##_MASK)
+
+#define SQ_CTRL_QUEUE_INFO_CLEAR(val, member) \
+ ((val) & (~(SQ_CTRL_QUEUE_INFO_##member##_MASK << \
+ SQ_CTRL_QUEUE_INFO_##member##_SHIFT)))
+
+#define SQ_TASK_INFO0_TUNNEL_FLAG_SHIFT 19
+#define SQ_TASK_INFO0_ESP_NEXT_PROTO_SHIFT 22
+#define SQ_TASK_INFO0_INNER_L4_EN_SHIFT 24
+#define SQ_TASK_INFO0_INNER_L3_EN_SHIFT 25
+#define SQ_TASK_INFO0_INNER_L4_PSEUDO_SHIFT 26
+#define SQ_TASK_INFO0_OUT_L4_EN_SHIFT 27
+#define SQ_TASK_INFO0_OUT_L3_EN_SHIFT 28
+#define SQ_TASK_INFO0_OUT_L4_PSEUDO_SHIFT 29
+#define SQ_TASK_INFO0_ESP_OFFLOAD_SHIFT 30
+#define SQ_TASK_INFO0_IPSEC_PROTO_SHIFT 31
+
+#define SQ_TASK_INFO0_TUNNEL_FLAG_MASK 0x1U
+#define SQ_TASK_INFO0_ESP_NEXT_PROTO_MASK 0x3U
+#define SQ_TASK_INFO0_INNER_L4_EN_MASK 0x1U
+#define SQ_TASK_INFO0_INNER_L3_EN_MASK 0x1U
+#define SQ_TASK_INFO0_INNER_L4_PSEUDO_MASK 0x1U
+#define SQ_TASK_INFO0_OUT_L4_EN_MASK 0x1U
+#define SQ_TASK_INFO0_OUT_L3_EN_MASK 0x1U
+#define SQ_TASK_INFO0_OUT_L4_PSEUDO_MASK 0x1U
+#define SQ_TASK_INFO0_ESP_OFFLOAD_MASK 0x1U
+#define SQ_TASK_INFO0_IPSEC_PROTO_MASK 0x1U
+
+#define SQ_TASK_INFO0_SET(val, member) \
+ (((u32)(val) & SQ_TASK_INFO0_##member##_MASK) << \
+ SQ_TASK_INFO0_##member##_SHIFT)
+#define SQ_TASK_INFO0_GET(val, member) \
+ (((val) >> SQ_TASK_INFO0_##member##_SHIFT) & \
+ SQ_TASK_INFO0_##member##_MASK)
+
+#define SQ_TASK_INFO1_SET(val, member) \
+ (((val) & SQ_TASK_INFO1_##member##_MASK) << \
+ SQ_TASK_INFO1_##member##_SHIFT)
+#define SQ_TASK_INFO1_GET(val, member) \
+ (((val) >> SQ_TASK_INFO1_##member##_SHIFT) & \
+ SQ_TASK_INFO1_##member##_MASK)
+
+#define SQ_TASK_INFO3_VLAN_TAG_SHIFT 0
+#define SQ_TASK_INFO3_VLAN_TYPE_SHIFT 16
+#define SQ_TASK_INFO3_VLAN_TAG_VALID_SHIFT 19
+
+#define SQ_TASK_INFO3_VLAN_TAG_MASK 0xFFFFU
+#define SQ_TASK_INFO3_VLAN_TYPE_MASK 0x7U
+#define SQ_TASK_INFO3_VLAN_TAG_VALID_MASK 0x1U
+
+#define SQ_TASK_INFO3_SET(val, member) \
+ (((val) & SQ_TASK_INFO3_##member##_MASK) << \
+ SQ_TASK_INFO3_##member##_SHIFT)
+#define SQ_TASK_INFO3_GET(val, member) \
+ (((val) >> SQ_TASK_INFO3_##member##_SHIFT) & \
+ SQ_TASK_INFO3_##member##_MASK)
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+static inline u32 hinic3_get_pkt_len_for_super_cqe(const struct hinic3_rq_cqe *cqe,
+ bool last)
+{
+ u32 pkt_len = hinic3_hw_cpu32(cqe->pkt_info);
+
+ if (!last)
+ return RQ_CQE_PKT_LEN_GET(pkt_len, FIRST_LEN);
+ else
+ return RQ_CQE_PKT_LEN_GET(pkt_len, LAST_LEN);
+}
+
+/* *
+ * hinic3_set_vlan_tx_offload - set vlan offload info
+ * @task: wqe task section
+ * @vlan_tag: vlan tag
+ * @vlan_type: 0--select TPID0 in IPSU, 1--select TPID0 in IPSU
+ * 2--select TPID2 in IPSU, 3--select TPID3 in IPSU, 4--select TPID4 in IPSU
+ */
+static inline void hinic3_set_vlan_tx_offload(struct hinic3_sq_task *task,
+ u16 vlan_tag, u8 vlan_type)
+{
+ task->vlan_offload = SQ_TASK_INFO3_SET(vlan_tag, VLAN_TAG) |
+ SQ_TASK_INFO3_SET(vlan_type, VLAN_TYPE) |
+ SQ_TASK_INFO3_SET(1U, VLAN_TAG_VALID);
+}
+
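A usage sketch for the helper above, assuming the task section is consumed
big-endian like the doorbell path (the hinic3_hw_be32() conversion mirrors its
use elsewhere in this patch and is an assumption here, not a documented
requirement):

    static void example_fill_vlan(struct hinic3_sq_task *task, u16 tci)
    {
    	/* vlan_type 0 selects TPID0 */
    	hinic3_set_vlan_tx_offload(task, tci, 0);
    	task->vlan_offload = hinic3_hw_be32(task->vlan_offload);
    }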
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c b/drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c
new file mode 100644
index 000000000000..b992defdea6d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c
@@ -0,0 +1,907 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/device.h>
+#include <linux/ethtool.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_nic_dev.h"
+
+#define MAX_NUM_OF_ETHTOOL_NTUPLE_RULES BIT(9)
+struct hinic3_ethtool_rx_flow_rule {
+ struct list_head list;
+ struct ethtool_rx_flow_spec flow_spec;
+};
+
+static void tcam_translate_key_y(u8 *key_y, const u8 *src_input, const u8 *mask, u8 len)
+{
+ u8 idx;
+
+ for (idx = 0; idx < len; idx++)
+ key_y[idx] = src_input[idx] & mask[idx];
+}
+
+static void tcam_translate_key_x(u8 *key_x, const u8 *key_y, const u8 *mask, u8 len)
+{
+ u8 idx;
+
+ for (idx = 0; idx < len; idx++)
+ key_x[idx] = key_y[idx] ^ mask[idx];
+}
+
+static void tcam_key_calculate(struct tag_tcam_key *tcam_key,
+ struct nic_tcam_cfg_rule *fdir_tcam_rule)
+{
+ tcam_translate_key_y(fdir_tcam_rule->key.y,
+ (u8 *)(&tcam_key->key_info),
+ (u8 *)(&tcam_key->key_mask), TCAM_FLOW_KEY_SIZE);
+ tcam_translate_key_x(fdir_tcam_rule->key.x, fdir_tcam_rule->key.y,
+ (u8 *)(&tcam_key->key_mask), TCAM_FLOW_KEY_SIZE);
+}
+
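The x/y split computed above is the usual TCAM don't-care encoding. A worked
example over one byte:

    value = 0xA1, mask = 0xF0
    y = value & mask = 0xA0   /* bits that must match 1 */
    x = y ^ mask     = 0x50   /* bits that must match 0 */

Bits where x and y are both 0 are don't-care, so only the masked (high) nibble
participates in the lookup.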
+#define TCAM_IPV4_TYPE 0
+#define TCAM_IPV6_TYPE 1
+
+static int hinic3_base_ipv4_parse(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key)
+{
+ struct ethtool_tcpip4_spec *mask = &fs->m_u.tcp_ip4_spec;
+ struct ethtool_tcpip4_spec *val = &fs->h_u.tcp_ip4_spec;
+ u32 temp;
+
+ switch (mask->ip4src) {
+ case U32_MAX:
+ temp = ntohl(val->ip4src);
+ tcam_key->key_info.sipv4_h = high_16_bits(temp);
+ tcam_key->key_info.sipv4_l = low_16_bits(temp);
+
+ tcam_key->key_mask.sipv4_h = U16_MAX;
+ tcam_key->key_mask.sipv4_l = U16_MAX;
+ break;
+ case 0:
+ break;
+
+ default:
+ nicif_err(nic_dev, drv, nic_dev->netdev, "invalid src_ip mask\n");
+ return -EINVAL;
+ }
+
+ switch (mask->ip4dst) {
+ case U32_MAX:
+ temp = ntohl(val->ip4dst);
+ tcam_key->key_info.dipv4_h = high_16_bits(temp);
+ tcam_key->key_info.dipv4_l = low_16_bits(temp);
+
+ tcam_key->key_mask.dipv4_h = U16_MAX;
+ tcam_key->key_mask.dipv4_l = U16_MAX;
+ break;
+ case 0:
+ break;
+
+ default:
+		nicif_err(nic_dev, drv, nic_dev->netdev, "invalid dst_ip mask\n");
+ return -EINVAL;
+ }
+
+ tcam_key->key_info.ip_type = TCAM_IPV4_TYPE;
+ tcam_key->key_mask.ip_type = TCAM_IP_TYPE_MASK;
+
+ tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev);
+ tcam_key->key_mask.function_id = TCAM_FUNC_ID_MASK;
+
+ return 0;
+}
+
+static int hinic3_fdir_tcam_ipv4_l4_init(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key)
+{
+ struct ethtool_tcpip4_spec *l4_mask = &fs->m_u.tcp_ip4_spec;
+ struct ethtool_tcpip4_spec *l4_val = &fs->h_u.tcp_ip4_spec;
+ int err;
+
+ err = hinic3_base_ipv4_parse(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+
+ tcam_key->key_info.dport = ntohs(l4_val->pdst);
+ tcam_key->key_mask.dport = l4_mask->pdst;
+
+ tcam_key->key_info.sport = ntohs(l4_val->psrc);
+ tcam_key->key_mask.sport = l4_mask->psrc;
+
+ if (fs->flow_type == TCP_V4_FLOW)
+ tcam_key->key_info.ip_proto = IPPROTO_TCP;
+ else
+ tcam_key->key_info.ip_proto = IPPROTO_UDP;
+ tcam_key->key_mask.ip_proto = U8_MAX;
+
+ return 0;
+}
+
+static int hinic3_fdir_tcam_ipv4_init(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key)
+{
+ struct ethtool_usrip4_spec *l3_mask = &fs->m_u.usr_ip4_spec;
+ struct ethtool_usrip4_spec *l3_val = &fs->h_u.usr_ip4_spec;
+ int err;
+
+ err = hinic3_base_ipv4_parse(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+
+ tcam_key->key_info.ip_proto = l3_val->proto;
+ tcam_key->key_mask.ip_proto = l3_mask->proto;
+
+ return 0;
+}
+
+#ifndef UNSUPPORT_NTUPLE_IPV6
+enum ipv6_parse_res {
+ IPV6_MASK_INVALID,
+ IPV6_MASK_ALL_MASK,
+ IPV6_MASK_ALL_ZERO,
+};
+
+enum ipv6_index {
+ IPV6_IDX0,
+ IPV6_IDX1,
+ IPV6_IDX2,
+ IPV6_IDX3,
+};
+
+static int ipv6_mask_parse(const u32 *ipv6_mask)
+{
+ if (ipv6_mask[IPV6_IDX0] == 0 && ipv6_mask[IPV6_IDX1] == 0 &&
+ ipv6_mask[IPV6_IDX2] == 0 && ipv6_mask[IPV6_IDX3] == 0)
+ return IPV6_MASK_ALL_ZERO;
+
+ if (ipv6_mask[IPV6_IDX0] == U32_MAX &&
+ ipv6_mask[IPV6_IDX1] == U32_MAX &&
+ ipv6_mask[IPV6_IDX2] == U32_MAX && ipv6_mask[IPV6_IDX3] == U32_MAX)
+ return IPV6_MASK_ALL_MASK;
+
+ return IPV6_MASK_INVALID;
+}
+
+static int hinic3_base_ipv6_parse(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key)
+{
+ struct ethtool_tcpip6_spec *mask = &fs->m_u.tcp_ip6_spec;
+ struct ethtool_tcpip6_spec *val = &fs->h_u.tcp_ip6_spec;
+ int parse_res;
+ u32 temp;
+
+ parse_res = ipv6_mask_parse((u32 *)mask->ip6src);
+ if (parse_res == IPV6_MASK_ALL_MASK) {
+ temp = ntohl(val->ip6src[IPV6_IDX0]);
+ tcam_key->key_info_ipv6.sipv6_key0 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.sipv6_key1 = low_16_bits(temp);
+ temp = ntohl(val->ip6src[IPV6_IDX1]);
+ tcam_key->key_info_ipv6.sipv6_key2 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.sipv6_key3 = low_16_bits(temp);
+ temp = ntohl(val->ip6src[IPV6_IDX2]);
+ tcam_key->key_info_ipv6.sipv6_key4 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.sipv6_key5 = low_16_bits(temp);
+ temp = ntohl(val->ip6src[IPV6_IDX3]);
+ tcam_key->key_info_ipv6.sipv6_key6 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.sipv6_key7 = low_16_bits(temp);
+
+ tcam_key->key_mask_ipv6.sipv6_key0 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key1 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key2 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key3 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key4 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key5 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key6 = U16_MAX;
+ tcam_key->key_mask_ipv6.sipv6_key7 = U16_MAX;
+ } else if (parse_res == IPV6_MASK_INVALID) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "invalid src_ipv6 mask\n");
+ return -EINVAL;
+ }
+
+ parse_res = ipv6_mask_parse((u32 *)mask->ip6dst);
+ if (parse_res == IPV6_MASK_ALL_MASK) {
+ temp = ntohl(val->ip6dst[IPV6_IDX0]);
+ tcam_key->key_info_ipv6.dipv6_key0 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.dipv6_key1 = low_16_bits(temp);
+ temp = ntohl(val->ip6dst[IPV6_IDX1]);
+ tcam_key->key_info_ipv6.dipv6_key2 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.dipv6_key3 = low_16_bits(temp);
+ temp = ntohl(val->ip6dst[IPV6_IDX2]);
+ tcam_key->key_info_ipv6.dipv6_key4 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.dipv6_key5 = low_16_bits(temp);
+ temp = ntohl(val->ip6dst[IPV6_IDX3]);
+ tcam_key->key_info_ipv6.dipv6_key6 = high_16_bits(temp);
+ tcam_key->key_info_ipv6.dipv6_key7 = low_16_bits(temp);
+
+ tcam_key->key_mask_ipv6.dipv6_key0 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key1 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key2 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key3 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key4 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key5 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key6 = U16_MAX;
+ tcam_key->key_mask_ipv6.dipv6_key7 = U16_MAX;
+ } else if (parse_res == IPV6_MASK_INVALID) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "invalid dst_ipv6 mask\n");
+ return -EINVAL;
+ }
+
+ tcam_key->key_info_ipv6.ip_type = TCAM_IPV6_TYPE;
+ tcam_key->key_mask_ipv6.ip_type = TCAM_IP_TYPE_MASK;
+
+ tcam_key->key_info_ipv6.function_id =
+ hinic3_global_func_id(nic_dev->hwdev);
+ tcam_key->key_mask_ipv6.function_id = TCAM_FUNC_ID_MASK;
+
+ return 0;
+}
+
+static int hinic3_fdir_tcam_ipv6_l4_init(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key)
+{
+ struct ethtool_tcpip6_spec *l4_mask = &fs->m_u.tcp_ip6_spec;
+ struct ethtool_tcpip6_spec *l4_val = &fs->h_u.tcp_ip6_spec;
+ int err;
+
+ err = hinic3_base_ipv6_parse(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+
+ tcam_key->key_info_ipv6.dport = ntohs(l4_val->pdst);
+ tcam_key->key_mask_ipv6.dport = l4_mask->pdst;
+
+ tcam_key->key_info_ipv6.sport = ntohs(l4_val->psrc);
+ tcam_key->key_mask_ipv6.sport = l4_mask->psrc;
+
+ if (fs->flow_type == TCP_V6_FLOW)
+ tcam_key->key_info_ipv6.ip_proto = NEXTHDR_TCP;
+ else
+ tcam_key->key_info_ipv6.ip_proto = NEXTHDR_UDP;
+ tcam_key->key_mask_ipv6.ip_proto = U8_MAX;
+
+ return 0;
+}
+
+static int hinic3_fdir_tcam_ipv6_init(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key)
+{
+ struct ethtool_usrip6_spec *l3_mask = &fs->m_u.usr_ip6_spec;
+ struct ethtool_usrip6_spec *l3_val = &fs->h_u.usr_ip6_spec;
+ int err;
+
+ err = hinic3_base_ipv6_parse(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+
+ tcam_key->key_info_ipv6.ip_proto = l3_val->l4_proto;
+ tcam_key->key_mask_ipv6.ip_proto = l3_mask->l4_proto;
+
+ return 0;
+}
+#endif
+
+static int hinic3_fdir_tcam_info_init(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs,
+ struct tag_tcam_key *tcam_key,
+ struct nic_tcam_cfg_rule *fdir_tcam_rule)
+{
+ int err;
+
+ switch (fs->flow_type) {
+ case TCP_V4_FLOW:
+ case UDP_V4_FLOW:
+ err = hinic3_fdir_tcam_ipv4_l4_init(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+ break;
+ case IP_USER_FLOW:
+ err = hinic3_fdir_tcam_ipv4_init(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+ break;
+#ifndef UNSUPPORT_NTUPLE_IPV6
+ case TCP_V6_FLOW:
+ case UDP_V6_FLOW:
+ err = hinic3_fdir_tcam_ipv6_l4_init(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+ break;
+ case IPV6_USER_FLOW:
+ err = hinic3_fdir_tcam_ipv6_init(nic_dev, fs, tcam_key);
+ if (err)
+ return err;
+ break;
+#endif
+ default:
+		return -EOPNOTSUPP;
+ }
+
+ tcam_key->key_info.tunnel_type = 0;
+ tcam_key->key_mask.tunnel_type = TCAM_TUNNEL_TYPE_MASK;
+
+ fdir_tcam_rule->data.qid = (u32)fs->ring_cookie;
+ tcam_key_calculate(tcam_key, fdir_tcam_rule);
+
+ return 0;
+}
+
+void hinic3_flush_rx_flow_rule(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_tcam_info *tcam_info = &nic_dev->tcam;
+ struct hinic3_ethtool_rx_flow_rule *eth_rule = NULL;
+ struct hinic3_ethtool_rx_flow_rule *eth_rule_tmp = NULL;
+ struct hinic3_tcam_filter *tcam_iter = NULL;
+ struct hinic3_tcam_filter *tcam_iter_tmp = NULL;
+ struct hinic3_tcam_dynamic_block *block = NULL;
+ struct hinic3_tcam_dynamic_block *block_tmp = NULL;
+ struct list_head *dynamic_list =
+ &tcam_info->tcam_dynamic_info.tcam_dynamic_list;
+
+ if (!list_empty(&tcam_info->tcam_list)) {
+ list_for_each_entry_safe(tcam_iter, tcam_iter_tmp,
+ &tcam_info->tcam_list,
+ tcam_filter_list) {
+ list_del(&tcam_iter->tcam_filter_list);
+ kfree(tcam_iter);
+ }
+ }
+ if (!list_empty(dynamic_list)) {
+ list_for_each_entry_safe(block, block_tmp, dynamic_list,
+ block_list) {
+ list_del(&block->block_list);
+ kfree(block);
+ }
+ }
+
+ if (!list_empty(&nic_dev->rx_flow_rule.rules)) {
+ list_for_each_entry_safe(eth_rule, eth_rule_tmp,
+ &nic_dev->rx_flow_rule.rules, list) {
+ list_del(ð_rule->list);
+ kfree(eth_rule);
+ }
+ }
+
+ if (HINIC3_SUPPORT_FDIR(nic_dev->hwdev)) {
+ hinic3_flush_tcam_rule(nic_dev->hwdev);
+ hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev, false);
+ }
+}
+
+static struct hinic3_tcam_dynamic_block *
+hinic3_alloc_dynamic_block_resource(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_tcam_info *tcam_info,
+ u16 dynamic_block_id)
+{
+ struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL;
+
+ dynamic_block_ptr = kzalloc(sizeof(*dynamic_block_ptr), GFP_KERNEL);
+ if (!dynamic_block_ptr) {
+		nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to alloc memory for fdir filter dynamic block %u\n",
+			  dynamic_block_id);
+ return NULL;
+ }
+
+ dynamic_block_ptr->dynamic_block_id = dynamic_block_id;
+ list_add_tail(&dynamic_block_ptr->block_list,
+ &tcam_info->tcam_dynamic_info.tcam_dynamic_list);
+
+ tcam_info->tcam_dynamic_info.dynamic_block_cnt++;
+
+ return dynamic_block_ptr;
+}
+
+static void hinic3_free_dynamic_block_resource(struct hinic3_tcam_info *tcam_info,
+ struct hinic3_tcam_dynamic_block *block_ptr)
+{
+ if (!block_ptr)
+ return;
+
+ list_del(&block_ptr->block_list);
+ kfree(block_ptr);
+
+ tcam_info->tcam_dynamic_info.dynamic_block_cnt--;
+}
+
+static struct hinic3_tcam_dynamic_block *
+hinic3_dynamic_lookup_tcam_filter(struct hinic3_nic_dev *nic_dev,
+ struct nic_tcam_cfg_rule *fdir_tcam_rule,
+ const struct hinic3_tcam_info *tcam_info,
+ struct hinic3_tcam_filter *tcam_filter,
+ u16 *tcam_index)
+{
+ struct hinic3_tcam_dynamic_block *tmp = NULL;
+ u16 index;
+
+ list_for_each_entry(tmp,
+ &tcam_info->tcam_dynamic_info.tcam_dynamic_list,
+ block_list)
+ if (tmp->dynamic_index_cnt < HINIC3_TCAM_DYNAMIC_BLOCK_SIZE)
+ break;
+
+	if (&tmp->block_list == &tcam_info->tcam_dynamic_info.tcam_dynamic_list) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Fdir filter dynamic lookup for index failed\n");
+ return NULL;
+ }
+
+ for (index = 0; index < HINIC3_TCAM_DYNAMIC_BLOCK_SIZE; index++)
+ if (tmp->dynamic_index_used[index] == 0)
+ break;
+
+ if (index == HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) {
+		nicif_err(nic_dev, drv, nic_dev->netdev, "tcam block 0x%x has no free filter rule entries\n",
+ tmp->dynamic_block_id);
+ return NULL;
+ }
+
+ tcam_filter->dynamic_block_id = tmp->dynamic_block_id;
+ tcam_filter->index = index;
+ *tcam_index = index;
+
+ fdir_tcam_rule->index = index +
+ HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tmp->dynamic_block_id);
+
+ return tmp;
+}
+
+static int hinic3_add_tcam_filter(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_tcam_filter *tcam_filter,
+ struct nic_tcam_cfg_rule *fdir_tcam_rule)
+{
+ struct hinic3_tcam_info *tcam_info = &nic_dev->tcam;
+ struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL;
+ struct hinic3_tcam_dynamic_block *tmp = NULL;
+ u16 block_cnt = tcam_info->tcam_dynamic_info.dynamic_block_cnt;
+ u16 tcam_block_index = 0;
+ int block_alloc_flag = 0;
+ u16 index = 0;
+ int err;
+
+ if (tcam_info->tcam_rule_nums >=
+ block_cnt * HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) {
+ if (block_cnt >= (HINIC3_MAX_TCAM_FILTERS /
+ HINIC3_TCAM_DYNAMIC_BLOCK_SIZE)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Dynamic tcam block is full, alloc failed\n");
+ goto failed;
+ }
+
+ err = hinic3_alloc_tcam_block(nic_dev->hwdev,
+ &tcam_block_index);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Fdir filter dynamic tcam alloc block failed\n");
+ goto failed;
+ }
+
+ block_alloc_flag = 1;
+
+ dynamic_block_ptr =
+ hinic3_alloc_dynamic_block_resource(nic_dev, tcam_info,
+ tcam_block_index);
+ if (!dynamic_block_ptr) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Fdir filter dynamic alloc block memory failed\n");
+ goto block_alloc_failed;
+ }
+ }
+
+ tmp = hinic3_dynamic_lookup_tcam_filter(nic_dev,
+ fdir_tcam_rule, tcam_info,
+ tcam_filter, &index);
+ if (!tmp) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Dynamic lookup tcam filter failed\n");
+ goto lookup_tcam_index_failed;
+ }
+
+ err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Fdir_tcam_rule add failed\n");
+ goto add_tcam_rules_failed;
+ }
+
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Add fdir tcam rule, function_id: 0x%x, tcam_block_id: %d, local_index: %d, global_index: %d, queue: %d, tcam_rule_nums: %d succeed\n",
+ hinic3_global_func_id(nic_dev->hwdev),
+ tcam_filter->dynamic_block_id, index, fdir_tcam_rule->index,
+ fdir_tcam_rule->data.qid, tcam_info->tcam_rule_nums + 1);
+
+ if (tcam_info->tcam_rule_nums == 0) {
+ err = hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev, true);
+ if (err)
+ goto enable_failed;
+ }
+
+ list_add_tail(&tcam_filter->tcam_filter_list, &tcam_info->tcam_list);
+
+ tmp->dynamic_index_used[index] = 1;
+ tmp->dynamic_index_cnt++;
+
+ tcam_info->tcam_rule_nums++;
+
+ return 0;
+
+enable_failed:
+ hinic3_del_tcam_rule(nic_dev->hwdev, fdir_tcam_rule->index);
+
+add_tcam_rules_failed:
+lookup_tcam_index_failed:
+ if (block_alloc_flag == 1)
+ hinic3_free_dynamic_block_resource(tcam_info,
+ dynamic_block_ptr);
+
+block_alloc_failed:
+ if (block_alloc_flag == 1)
+ hinic3_free_tcam_block(nic_dev->hwdev, &tcam_block_index);
+
+failed:
+ return -EFAULT;
+}
+
+static int hinic3_del_tcam_filter(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_tcam_filter *tcam_filter)
+{
+ struct hinic3_tcam_info *tcam_info = &nic_dev->tcam;
+ u16 dynamic_block_id = tcam_filter->dynamic_block_id;
+ struct hinic3_tcam_dynamic_block *tmp = NULL;
+ u32 index = 0;
+ int err;
+
+ list_for_each_entry(tmp,
+ &tcam_info->tcam_dynamic_info.tcam_dynamic_list,
+ block_list) {
+ if (tmp->dynamic_block_id == dynamic_block_id)
+ break;
+ }
+	if (&tmp->block_list == &tcam_info->tcam_dynamic_info.tcam_dynamic_list) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Fdir filter del dynamic lookup for block failed\n");
+ return -EFAULT;
+ }
+
+ index = HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tmp->dynamic_block_id) +
+ tcam_filter->index;
+
+ err = hinic3_del_tcam_rule(nic_dev->hwdev, index);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "fdir_tcam_rule del failed\n");
+ return -EFAULT;
+ }
+
+ nicif_info(nic_dev, drv, nic_dev->netdev,
+ "Del fdir_tcam_dynamic_rule function_id: 0x%x, tcam_block_id: %d, local_index: %d, global_index: %d, local_rules_nums: %d, global_rule_nums: %d succeed\n",
+ hinic3_global_func_id(nic_dev->hwdev), dynamic_block_id,
+ tcam_filter->index, index, tmp->dynamic_index_cnt - 1,
+ tcam_info->tcam_rule_nums - 1);
+
+ tmp->dynamic_index_used[tcam_filter->index] = 0;
+ tmp->dynamic_index_cnt--;
+ tcam_info->tcam_rule_nums--;
+ if (tmp->dynamic_index_cnt == 0) {
+ hinic3_free_tcam_block(nic_dev->hwdev, &dynamic_block_id);
+ hinic3_free_dynamic_block_resource(tcam_info, tmp);
+ }
+
+ if (tcam_info->tcam_rule_nums == 0)
+ hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev, false);
+
+ list_del(&tcam_filter->tcam_filter_list);
+ kfree(tcam_filter);
+
+ return 0;
+}
+
+static inline struct hinic3_tcam_filter *
+hinic3_tcam_filter_lookup(const struct list_head *filter_list,
+ struct tag_tcam_key *key)
+{
+ struct hinic3_tcam_filter *iter;
+
+ list_for_each_entry(iter, filter_list, tcam_filter_list) {
+ if (memcmp(key, &iter->tcam_key,
+ sizeof(struct tag_tcam_key)) == 0) {
+ return iter;
+ }
+ }
+
+ return NULL;
+}
+
+static void del_ethtool_rule(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_ethtool_rx_flow_rule *eth_rule)
+{
+ list_del(ð_rule->list);
+ nic_dev->rx_flow_rule.tot_num_rules--;
+
+ kfree(eth_rule);
+}
+
+static int hinic3_remove_one_rule(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_ethtool_rx_flow_rule *eth_rule)
+{
+ struct hinic3_tcam_info *tcam_info = &nic_dev->tcam;
+ struct hinic3_tcam_filter *tcam_filter;
+ struct nic_tcam_cfg_rule fdir_tcam_rule;
+ struct tag_tcam_key tcam_key;
+ int err;
+
+ memset(&fdir_tcam_rule, 0, sizeof(fdir_tcam_rule));
+ memset(&tcam_key, 0, sizeof(tcam_key));
+
+ err = hinic3_fdir_tcam_info_init(nic_dev, ð_rule->flow_spec,
+ &tcam_key, &fdir_tcam_rule);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Init fdir info failed\n");
+ return err;
+ }
+
+ tcam_filter = hinic3_tcam_filter_lookup(&tcam_info->tcam_list,
+ &tcam_key);
+ if (!tcam_filter) {
+		nicif_err(nic_dev, drv, nic_dev->netdev, "Filter does not exist\n");
+		return -ENOENT;
+ }
+
+ err = hinic3_del_tcam_filter(nic_dev, tcam_filter);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Delete tcam filter failed\n");
+ return err;
+ }
+
+ del_ethtool_rule(nic_dev, eth_rule);
+
+ return 0;
+}
+
+static void add_rule_to_list(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_ethtool_rx_flow_rule *rule)
+{
+ struct hinic3_ethtool_rx_flow_rule *iter = NULL;
+ struct list_head *head = &nic_dev->rx_flow_rule.rules;
+
+ list_for_each_entry(iter, &nic_dev->rx_flow_rule.rules, list) {
+ if (iter->flow_spec.location > rule->flow_spec.location)
+ break;
+ head = &iter->list;
+ }
+ nic_dev->rx_flow_rule.tot_num_rules++;
+ list_add(&rule->list, head);
+}
+
+static int hinic3_add_one_rule(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs)
+{
+ struct nic_tcam_cfg_rule fdir_tcam_rule;
+ struct tag_tcam_key tcam_key;
+ struct hinic3_ethtool_rx_flow_rule *eth_rule = NULL;
+ struct hinic3_tcam_filter *tcam_filter = NULL;
+ struct hinic3_tcam_info *tcam_info = &nic_dev->tcam;
+ int err;
+
+ memset(&fdir_tcam_rule, 0, sizeof(fdir_tcam_rule));
+ memset(&tcam_key, 0, sizeof(tcam_key));
+ err = hinic3_fdir_tcam_info_init(nic_dev, fs, &tcam_key,
+ &fdir_tcam_rule);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Init fdir info failed\n");
+ return err;
+ }
+
+ tcam_filter = hinic3_tcam_filter_lookup(&tcam_info->tcam_list,
+ &tcam_key);
+ if (tcam_filter) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Filter exists\n");
+ return -EEXIST;
+ }
+
+ tcam_filter = kzalloc(sizeof(*tcam_filter), GFP_KERNEL);
+ if (!tcam_filter)
+ return -ENOMEM;
+ memcpy(&tcam_filter->tcam_key,
+ &tcam_key, sizeof(struct tag_tcam_key));
+ tcam_filter->queue = (u16)fdir_tcam_rule.data.qid;
+
+ err = hinic3_add_tcam_filter(nic_dev, tcam_filter, &fdir_tcam_rule);
+ if (err)
+ goto add_tcam_filter_fail;
+
+ /* driver save new rule filter */
+ eth_rule = kzalloc(sizeof(*eth_rule), GFP_KERNEL);
+ if (!eth_rule) {
+ err = -ENOMEM;
+ goto alloc_eth_rule_fail;
+ }
+
+ eth_rule->flow_spec = *fs;
+ add_rule_to_list(nic_dev, eth_rule);
+
+ return 0;
+
+alloc_eth_rule_fail:
+ hinic3_del_tcam_filter(nic_dev, tcam_filter);
+add_tcam_filter_fail:
+ kfree(tcam_filter);
+ return err;
+}
+
+static struct hinic3_ethtool_rx_flow_rule *
+find_ethtool_rule(const struct hinic3_nic_dev *nic_dev, u32 location)
+{
+ struct hinic3_ethtool_rx_flow_rule *iter = NULL;
+
+ list_for_each_entry(iter, &nic_dev->rx_flow_rule.rules, list) {
+ if (iter->flow_spec.location == location)
+ return iter;
+ }
+ return NULL;
+}
+
+static int validate_flow(struct hinic3_nic_dev *nic_dev,
+ const struct ethtool_rx_flow_spec *fs)
+{
+ if (fs->location >= MAX_NUM_OF_ETHTOOL_NTUPLE_RULES) {
+		nicif_err(nic_dev, drv, nic_dev->netdev, "location exceeds limit [0, %lu)\n",
+ MAX_NUM_OF_ETHTOOL_NTUPLE_RULES);
+ return -EINVAL;
+ }
+
+ if (fs->ring_cookie >= nic_dev->q_params.num_qps) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "action is larger than queue number %u\n",
+ nic_dev->q_params.num_qps);
+ return -EINVAL;
+ }
+
+ switch (fs->flow_type) {
+ case TCP_V4_FLOW:
+ case UDP_V4_FLOW:
+ case IP_USER_FLOW:
+#ifndef UNSUPPORT_NTUPLE_IPV6
+ case TCP_V6_FLOW:
+ case UDP_V6_FLOW:
+ case IPV6_USER_FLOW:
+#endif
+ break;
+ default:
+ nicif_err(nic_dev, drv, nic_dev->netdev, "flow type is not supported\n");
+		return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+int hinic3_ethtool_flow_replace(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs)
+{
+ struct hinic3_ethtool_rx_flow_rule *eth_rule = NULL;
+ struct ethtool_rx_flow_spec flow_spec_temp;
+ int loc_exit_flag = 0;
+ int err;
+
+ if (!HINIC3_SUPPORT_FDIR(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Unsupported ntuple function\n");
+ return -EOPNOTSUPP;
+ }
+
+ err = validate_flow(nic_dev, fs);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "flow is not valid %d\n", err);
+ return err;
+ }
+
+ eth_rule = find_ethtool_rule(nic_dev, fs->location);
+ /* when location is same, delete old location rule. */
+ if (eth_rule) {
+ memcpy(&flow_spec_temp, ð_rule->flow_spec,
+ sizeof(struct ethtool_rx_flow_spec));
+ err = hinic3_remove_one_rule(nic_dev, eth_rule);
+ if (err)
+ return err;
+
+ loc_exit_flag = 1;
+ }
+
+ /* add new rule filter */
+ err = hinic3_add_one_rule(nic_dev, fs);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Add new rule filter failed\n");
+ if (loc_exit_flag)
+ hinic3_add_one_rule(nic_dev, &flow_spec_temp);
+
+ return -ENOENT;
+ }
+
+ return 0;
+}
+
+int hinic3_ethtool_flow_remove(struct hinic3_nic_dev *nic_dev, u32 location)
+{
+ struct hinic3_ethtool_rx_flow_rule *eth_rule = NULL;
+ int err;
+
+ if (!HINIC3_SUPPORT_FDIR(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Unsupported ntuple function\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (location >= MAX_NUM_OF_ETHTOOL_NTUPLE_RULES)
+ return -ENOSPC;
+
+ eth_rule = find_ethtool_rule(nic_dev, location);
+ if (!eth_rule)
+ return -ENOENT;
+
+ err = hinic3_remove_one_rule(nic_dev, eth_rule);
+
+ return err;
+}
+
+int hinic3_ethtool_get_flow(const struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rxnfc *info, u32 location)
+{
+ struct hinic3_ethtool_rx_flow_rule *eth_rule = NULL;
+
+ if (!HINIC3_SUPPORT_FDIR(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Unsupported ntuple function\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (location >= MAX_NUM_OF_ETHTOOL_NTUPLE_RULES)
+ return -EINVAL;
+
+ list_for_each_entry(eth_rule, &nic_dev->rx_flow_rule.rules, list) {
+ if (eth_rule->flow_spec.location == location) {
+ info->fs = eth_rule->flow_spec;
+ return 0;
+ }
+ }
+
+ return -ENOENT;
+}
+
+int hinic3_ethtool_get_all_flows(const struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rxnfc *info, u32 *rule_locs)
+{
+ int idx = 0;
+ struct hinic3_ethtool_rx_flow_rule *eth_rule = NULL;
+
+ if (!HINIC3_SUPPORT_FDIR(nic_dev->hwdev)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Unsupported ntuple function\n");
+ return -EOPNOTSUPP;
+ }
+
+ info->data = MAX_NUM_OF_ETHTOOL_NTUPLE_RULES;
+ list_for_each_entry(eth_rule, &nic_dev->rx_flow_rule.rules, list)
+ rule_locs[idx++] = eth_rule->flow_spec.location;
+
+ return info->rule_cnt == idx ? 0 : -ENOENT;
+}
+
+bool hinic3_validate_channel_setting_in_ntuple(const struct hinic3_nic_dev *nic_dev, u32 q_num)
+{
+ struct hinic3_ethtool_rx_flow_rule *iter = NULL;
+
+ list_for_each_entry(iter, &nic_dev->rx_flow_rule.rules, list) {
+ if (iter->flow_spec.ring_cookie >= q_num) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "User defined filter %u assigns flow to queue %llu. Queue number %u is invalid\n",
+ iter->flow_spec.location, iter->flow_spec.ring_cookie, q_num);
+ return false;
+ }
+ }
+
+ return true;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_profile.h b/drivers/net/ethernet/huawei/hinic3/hinic3_profile.h
new file mode 100644
index 000000000000..a93f3b60e709
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_profile.h
@@ -0,0 +1,146 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_PROFILE_H
+#define HINIC3_PROFILE_H
+
+typedef bool (*hinic3_is_match_prof)(void *device);
+typedef void *(*hinic3_init_prof_attr)(void *device);
+typedef void (*hinic3_deinit_prof_attr)(void *prof_attr);
+
+enum prof_adapter_type {
+ PROF_ADAP_TYPE_INVALID,
+ PROF_ADAP_TYPE_PANGEA = 1,
+
+ /* Add prof adapter type before default */
+ PROF_ADAP_TYPE_DEFAULT,
+};
+
+/**
+ * struct hinic3_prof_adapter - custom scene's profile adapter
+ * @type: adapter type
+ * @match: Check whether the current function is used in the custom scene.
+ * Implemented in the current source file
+ * @init: When @match returns true, the initialization function called at
+ *	  probe time. Implemented in the source file of the custom scene
+ * @deinit: When @match returns true, the deinitialization function called at
+ *	    remove time. Implemented in the source file of the custom scene
+ */
+struct hinic3_prof_adapter {
+ enum prof_adapter_type type;
+ hinic3_is_match_prof match;
+ hinic3_init_prof_attr init;
+ hinic3_deinit_prof_attr deinit;
+};
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+/*lint -save -e661 */
+static inline struct hinic3_prof_adapter *
+hinic3_prof_init(void *device, struct hinic3_prof_adapter *adap_objs, int num_adap,
+ void **prof_attr)
+{
+ struct hinic3_prof_adapter *prof_obj = NULL;
+ u16 i;
+
+ for (i = 0; i < num_adap; i++) {
+ prof_obj = &adap_objs[i];
+ if (!(prof_obj->match && prof_obj->match(device)))
+ continue;
+
+ *prof_attr = prof_obj->init ? prof_obj->init(device) : NULL;
+
+ return prof_obj;
+ }
+
+ return NULL;
+}
+
+static inline void hinic3_prof_deinit(struct hinic3_prof_adapter *prof_obj, void *prof_attr)
+{
+ if (!prof_obj)
+ return;
+
+ if (prof_obj->deinit)
+ prof_obj->deinit(prof_attr);
+}
+
+/*lint -restore*/
+
+/* module-level interface */
+#ifdef CONFIG_MODULE_PROF
+struct hinic3_module_ops {
+ int (*module_prof_init)(void);
+ void (*module_prof_exit)(void);
+ void (*probe_fault_process)(void *pdev, u16 level);
+ int (*probe_pre_process)(void *pdev);
+ void (*probe_pre_unprocess)(void *pdev);
+};
+
+struct hinic3_module_ops *hinic3_get_module_prof_ops(void);
+
+static inline void hinic3_probe_fault_process(void *pdev, u16 level)
+{
+ struct hinic3_module_ops *ops = hinic3_get_module_prof_ops();
+
+ if (ops && ops->probe_fault_process)
+ ops->probe_fault_process(pdev, level);
+}
+
+static inline int hinic3_module_pre_init(void)
+{
+ struct hinic3_module_ops *ops = hinic3_get_module_prof_ops();
+
+ if (!ops || !ops->module_prof_init)
+ return -EINVAL;
+
+ return ops->module_prof_init();
+}
+
+static inline void hinic3_module_post_exit(void)
+{
+ struct hinic3_module_ops *ops = hinic3_get_module_prof_ops();
+
+ if (ops && ops->module_prof_exit)
+ ops->module_prof_exit();
+}
+
+static inline int hinic3_probe_pre_process(void *pdev)
+{
+ struct hinic3_module_ops *ops = hinic3_get_module_prof_ops();
+
+ if (!ops || !ops->probe_pre_process)
+ return -EINVAL;
+
+ return ops->probe_pre_process(pdev);
+}
+
+static inline void hinic3_probe_pre_unprocess(void *pdev)
+{
+ struct hinic3_module_ops *ops = hinic3_get_module_prof_ops();
+
+ if (ops && ops->probe_pre_unprocess)
+ ops->probe_pre_unprocess(pdev);
+}
+#else
+static inline void hinic3_probe_fault_process(void *pdev, u16 level) { }
+
+static inline int hinic3_module_pre_init(void)
+{
+ return 0;
+}
+
+static inline void hinic3_module_post_exit(void) { }
+
+static inline int hinic3_probe_pre_process(void *pdev)
+{
+ return 0;
+}
+
+static inline void hinic3_probe_pre_unprocess(void *pdev) { }
+#endif
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rss.c b/drivers/net/ethernet/huawei/hinic3/hinic3_rss.c
new file mode 100644
index 000000000000..9b31d89e26ce
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rss.c
@@ -0,0 +1,978 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/device.h>
+#include <linux/ethtool.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/dcbnl.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_hw.h"
+#include "hinic3_rss.h"
+
+/*lint -e806*/
+static u16 num_qps;
+module_param(num_qps, ushort, 0444);
+MODULE_PARM_DESC(num_qps, "Number of Queue Pairs (default=0)");
+
+#define MOD_PARA_VALIDATE_NUM_QPS(nic_dev, num_qps, out_qps) do { \
+ if ((num_qps) > (nic_dev)->max_qps) \
+ nic_warn(&(nic_dev)->pdev->dev, \
+ "Module Parameter %s value %u is out of range, " \
+ "Maximum value for the device: %u, using %u\n", \
+ #num_qps, num_qps, (nic_dev)->max_qps, \
+ (nic_dev)->max_qps); \
+ if ((num_qps) > (nic_dev)->max_qps) \
+ (out_qps) = (nic_dev)->max_qps; \
+ else if ((num_qps) > 0) \
+ (out_qps) = (num_qps); \
+} while (0)
+
+/* In rx, iq means cos */
+static u8 hinic3_get_iqmap_by_tc(const u8 *prio_tc, u8 num_iq, u8 tc)
+{
+ u8 i, map = 0;
+
+ for (i = 0; i < num_iq; i++) {
+ if (prio_tc[i] == tc)
+ map |= (u8)(1U << ((num_iq - 1) - i));
+ }
+
+ return map;
+}
+
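Note the reversed bit order in the returned map: priority i lands in bit
(num_iq - 1 - i). A worked example, with illustrative priority-to-TC
assignments:

    prio_tc = {0, 0, 1, 1, 2, 2, 3, 3}, num_iq = 8, tc = 1
    -> priorities 2 and 3 match, setting bits 5 and 4
    -> returned map = 0x30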
+static u8 hinic3_get_tcid_by_rq(const u32 *indir_tbl, u8 num_tcs, u16 rq_id)
+{
+ u16 tc_group_size;
+ int i;
+ u8 temp_num_tcs = num_tcs;
+
+ if (!num_tcs)
+ temp_num_tcs = 1;
+
+ tc_group_size = NIC_RSS_INDIR_SIZE / temp_num_tcs;
+ for (i = 0; i < NIC_RSS_INDIR_SIZE; i++) {
+ if (indir_tbl[i] == rq_id)
+ return (u8)(i / tc_group_size);
+ }
+
+ return 0xFF; /* Invalid TC */
+}
+
+static int hinic3_get_rq2iq_map(struct hinic3_nic_dev *nic_dev,
+ u16 num_rq, u8 num_tcs, u8 *prio_tc, u8 cos_num,
+ u32 *indir_tbl, u8 *map, u32 map_size)
+{
+ u16 qid;
+ u8 tc_id;
+ u8 temp_num_tcs = num_tcs;
+
+ if (!num_tcs)
+ temp_num_tcs = 1;
+
+ if (num_rq > map_size) {
+		nicif_err(nic_dev, drv, nic_dev->netdev, "Rq number(%u) exceeds max map qid(%u)\n",
+ num_rq, map_size);
+ return -EINVAL;
+ }
+
+ if (cos_num < HINIC_NUM_IQ_PER_FUNC) {
+		nicif_err(nic_dev, drv, nic_dev->netdev, "Cos number(%u) less than map qid(%d)\n",
+ cos_num, HINIC_NUM_IQ_PER_FUNC);
+ return -EINVAL;
+ }
+
+ for (qid = 0; qid < num_rq; qid++) {
+ tc_id = hinic3_get_tcid_by_rq(indir_tbl, temp_num_tcs, qid);
+ map[qid] = hinic3_get_iqmap_by_tc(prio_tc,
+ HINIC_NUM_IQ_PER_FUNC, tc_id);
+ }
+
+ return 0;
+}
+
+static void hinic3_fillout_indir_tbl(struct hinic3_nic_dev *nic_dev, u8 num_cos, u32 *indir)
+{
+ u16 k, group_size, start_qid = 0, qp_num = 0;
+ int i = 0;
+ u8 j, cur_cos = 0, default_cos;
+ u8 valid_cos_map = hinic3_get_dev_valid_cos_map(nic_dev);
+
+ if (num_cos == 0) {
+ for (i = 0; i < NIC_RSS_INDIR_SIZE; i++)
+ indir[i] = i % nic_dev->q_params.num_qps;
+ } else {
+ group_size = NIC_RSS_INDIR_SIZE / num_cos;
+
+ for (j = 0; j < num_cos; j++) {
+ while (cur_cos < NIC_DCB_COS_MAX &&
+ nic_dev->hw_dcb_cfg.cos_qp_num[cur_cos] == 0)
+ cur_cos++;
+
+ if (cur_cos >= NIC_DCB_COS_MAX) {
+ if (BIT(nic_dev->hw_dcb_cfg.default_cos) & valid_cos_map)
+ default_cos = nic_dev->hw_dcb_cfg.default_cos;
+ else
+ default_cos = (u8)fls(valid_cos_map) - 1;
+
+ start_qid = nic_dev->hw_dcb_cfg.cos_qp_offset[default_cos];
+ qp_num = nic_dev->hw_dcb_cfg.cos_qp_num[default_cos];
+ } else {
+ start_qid = nic_dev->hw_dcb_cfg.cos_qp_offset[cur_cos];
+ qp_num = nic_dev->hw_dcb_cfg.cos_qp_num[cur_cos];
+ }
+
+ for (k = 0; k < group_size; k++)
+ indir[i++] = start_qid + k % qp_num;
+
+ cur_cos++;
+ }
+ }
+}
+
+/*lint -e528*/
+int hinic3_rss_init(struct hinic3_nic_dev *nic_dev, u8 *rq2iq_map, u32 map_size, u8 dcb_en)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u8 i, cos_num;
+ u8 cos_map[NIC_DCB_UP_MAX] = {0};
+ u8 cfg_map[NIC_DCB_UP_MAX] = {0};
+ int err;
+
+ if (dcb_en) {
+ cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+
+ if (nic_dev->hw_dcb_cfg.trust == 0) {
+ memcpy(cfg_map, nic_dev->hw_dcb_cfg.pcp2cos, sizeof(cfg_map));
+ } else if (nic_dev->hw_dcb_cfg.trust == 1) {
+ for (i = 0; i < NIC_DCB_UP_MAX; i++)
+ cfg_map[i] = nic_dev->hw_dcb_cfg.dscp2cos[i * NIC_DCB_DSCP_NUM];
+ }
+#define COS_CHANGE_OFFSET 4
+ for (i = 0; i < COS_CHANGE_OFFSET; i++)
+ cos_map[COS_CHANGE_OFFSET + i] = cfg_map[i];
+
+ for (i = 0; i < COS_CHANGE_OFFSET; i++)
+ cos_map[i] = cfg_map[NIC_DCB_UP_MAX - (i + 1)];
+
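+ /* the microcode requires a power-of-2 cos num: round up */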
+ while (cos_num & (cos_num - 1))
+ cos_num++;
+ } else {
+ cos_num = 0;
+ }
+
+ err = hinic3_set_hw_rss_parameters(netdev, 1, cos_num, cos_map, dcb_en);
+ if (err)
+ return err;
+
+ err = hinic3_get_rq2iq_map(nic_dev, nic_dev->q_params.num_qps, cos_num, cos_map,
+ NIC_DCB_UP_MAX, nic_dev->rss_indir, rq2iq_map, map_size);
+ if (err)
+ nicif_err(nic_dev, drv, netdev, "Failed to get rq map\n");
+ return err;
+}
+
+/*lint -e528*/
+void hinic3_rss_deinit(struct hinic3_nic_dev *nic_dev)
+{
+ u8 cos_map[NIC_DCB_UP_MAX] = {0};
+
+ hinic3_rss_cfg(nic_dev->hwdev, 0, 0, cos_map, 1);
+}
+
+void hinic3_init_rss_parameters(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ nic_dev->rss_hash_engine = HINIC3_RSS_HASH_ENGINE_TYPE_XOR;
+ nic_dev->rss_type.tcp_ipv6_ext = 1;
+ nic_dev->rss_type.ipv6_ext = 1;
+ nic_dev->rss_type.tcp_ipv6 = 1;
+ nic_dev->rss_type.ipv6 = 1;
+ nic_dev->rss_type.tcp_ipv4 = 1;
+ nic_dev->rss_type.ipv4 = 1;
+ nic_dev->rss_type.udp_ipv6 = 1;
+ nic_dev->rss_type.udp_ipv4 = 1;
+}
+
+void hinic3_clear_rss_config(struct hinic3_nic_dev *nic_dev)
+{
+ kfree(nic_dev->rss_hkey);
+ nic_dev->rss_hkey = NULL;
+
+ kfree(nic_dev->rss_indir);
+ nic_dev->rss_indir = NULL;
+}
+
+void hinic3_set_default_rss_indir(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ set_bit(HINIC3_RSS_DEFAULT_INDIR, &nic_dev->flags);
+}
+
+static void hinic3_maybe_reconfig_rss_indir(struct net_device *netdev, u8 dcb_en)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int i;
+
+ /* if dcb is enabled, the user cannot configure the rss indir table */
+ if (dcb_en) {
+ nicif_info(nic_dev, drv, netdev, "DCB is enabled, set default rss indir\n");
+ goto discard_user_rss_indir;
+ }
+
+ for (i = 0; i < NIC_RSS_INDIR_SIZE; i++) {
+ if (nic_dev->rss_indir[i] >= nic_dev->q_params.num_qps)
+ goto discard_user_rss_indir;
+ }
+
+ return;
+
+discard_user_rss_indir:
+ hinic3_set_default_rss_indir(netdev);
+}
+
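+/* Choose the number of queue pairs: start from the device default (or
+ * the num_qps module parameter), then limit it to the number of online
+ * CPUs on the NIC's NUMA node (all online CPUs if none are local).
+ */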
+static void decide_num_qps(struct hinic3_nic_dev *nic_dev)
+{
+ u16 tmp_num_qps = nic_dev->max_qps;
+ u16 num_cpus = 0;
+ int i, node;
+
+ if (nic_dev->nic_cap.default_num_queues != 0 &&
+ nic_dev->nic_cap.default_num_queues < nic_dev->max_qps)
+ tmp_num_qps = nic_dev->nic_cap.default_num_queues;
+
+ MOD_PARA_VALIDATE_NUM_QPS(nic_dev, num_qps, tmp_num_qps);
+
+ for (i = 0; i < (int)num_online_cpus(); i++) {
+ node = (int)cpu_to_node(i);
+ if (node == dev_to_node(&nic_dev->pdev->dev))
+ num_cpus++;
+ }
+
+ if (!num_cpus)
+ num_cpus = (u16)num_online_cpus();
+
+ nic_dev->q_params.num_qps = (u16)min_t(u16, tmp_num_qps, num_cpus);
+}
+
+static void copy_value_to_rss_hkey(struct hinic3_nic_dev *nic_dev,
+ const u8 *hkey)
+{
+ u32 i;
+ u32 *rss_hkey = (u32 *)nic_dev->rss_hkey;
+
+ memcpy(nic_dev->rss_hkey, hkey, NIC_RSS_KEY_SIZE);
+
+ /* make a copy of the key, and convert it to Big Endian */
+ for (i = 0; i < NIC_RSS_KEY_SIZE / sizeof(u32); i++)
+ nic_dev->rss_hkey_be[i] = cpu_to_be32(rss_hkey[i]);
+}
+
+static int alloc_rss_resource(struct hinic3_nic_dev *nic_dev)
+{
+ u8 default_rss_key[NIC_RSS_KEY_SIZE] = {
+ 0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
+ 0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0,
+ 0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4,
+ 0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c,
+ 0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa};
+
+ /* We request double space for the hash key:
+ * the second copy holds the key in big-endian
+ * format.
+ */
+ nic_dev->rss_hkey =
+ kzalloc(NIC_RSS_KEY_SIZE *
+ HINIC3_RSS_KEY_RSV_NUM, GFP_KERNEL);
+ if (!nic_dev->rss_hkey) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc memory for rss_hkey\n");
+ return -ENOMEM;
+ }
+
+ /* The second space is for the big-endian hash key */
+ nic_dev->rss_hkey_be = (u32 *)(nic_dev->rss_hkey +
+ NIC_RSS_KEY_SIZE);
+ copy_value_to_rss_hkey(nic_dev, (u8 *)default_rss_key);
+
+ nic_dev->rss_indir = kzalloc(sizeof(u32) * NIC_RSS_INDIR_SIZE, GFP_KERNEL);
+ if (!nic_dev->rss_indir) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc memory for rss_indir\n");
+ kfree(nic_dev->rss_hkey);
+ nic_dev->rss_hkey = NULL;
+ return -ENOMEM;
+ }
+
+ set_bit(HINIC3_RSS_DEFAULT_INDIR, &nic_dev->flags);
+
+ return 0;
+}
+
+/*lint -e528*/
+void hinic3_try_to_enable_rss(struct hinic3_nic_dev *nic_dev)
+{
+ u8 cos_map[NIC_DCB_UP_MAX] = {0};
+ int err = 0;
+
+ if (!nic_dev)
+ return;
+
+ nic_dev->max_qps = hinic3_func_max_nic_qnum(nic_dev->hwdev);
+ if (nic_dev->max_qps <= 1 || !HINIC3_SUPPORT_RSS(nic_dev->hwdev))
+ goto set_q_params;
+
+ err = alloc_rss_resource(nic_dev);
+ if (err) {
+ nic_dev->max_qps = 1;
+ goto set_q_params;
+ }
+
+ set_bit(HINIC3_RSS_ENABLE, &nic_dev->flags);
+ nic_dev->max_qps = hinic3_func_max_nic_qnum(nic_dev->hwdev);
+
+ decide_num_qps(nic_dev);
+
+ hinic3_init_rss_parameters(nic_dev->netdev);
+ err = hinic3_set_hw_rss_parameters(nic_dev->netdev, 0, 0, cos_map,
+ test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) ? 1 : 0);
+ if (err) {
+ nic_err(&nic_dev->pdev->dev, "Failed to set hardware rss parameters\n");
+
+ hinic3_clear_rss_config(nic_dev);
+ nic_dev->max_qps = 1;
+ goto set_q_params;
+ }
+ return;
+
+set_q_params:
+ clear_bit(HINIC3_RSS_ENABLE, &nic_dev->flags);
+ nic_dev->q_params.num_qps = nic_dev->max_qps;
+}
+
+static int hinic3_config_rss_hw_resource(struct hinic3_nic_dev *nic_dev,
+ u32 *indir_tbl)
+{
+ int err;
+
+ err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl);
+ if (err)
+ return err;
+
+ err = hinic3_set_rss_type(nic_dev->hwdev, nic_dev->rss_type);
+ if (err)
+ return err;
+
+ return hinic3_rss_set_hash_engine(nic_dev->hwdev,
+ nic_dev->rss_hash_engine);
+}
+
+int hinic3_set_hw_rss_parameters(struct net_device *netdev, u8 rss_en,
+ u8 cos_num, u8 *cos_map, u8 dcb_en)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err;
+
+ /* RSS key */
+ err = hinic3_rss_set_hash_key(nic_dev->hwdev, nic_dev->rss_hkey);
+ if (err)
+ return err;
+
+ hinic3_maybe_reconfig_rss_indir(netdev, dcb_en);
+
+ if (test_bit(HINIC3_RSS_DEFAULT_INDIR, &nic_dev->flags))
+ hinic3_fillout_indir_tbl(nic_dev, cos_num, nic_dev->rss_indir);
+
+ err = hinic3_config_rss_hw_resource(nic_dev, nic_dev->rss_indir);
+ if (err)
+ return err;
+
+ err = hinic3_rss_cfg(nic_dev->hwdev, rss_en, cos_num, cos_map,
+ nic_dev->q_params.num_qps);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+/* for ethtool */
+static int set_l4_rss_hash_ops(const struct ethtool_rxnfc *cmd,
+ struct nic_rss_type *rss_type)
+{
+ u8 rss_l4_en = 0;
+
+ switch (cmd->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
+ case 0:
+ rss_l4_en = 0;
+ break;
+ case (RXH_L4_B_0_1 | RXH_L4_B_2_3):
+ rss_l4_en = 1;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ switch (cmd->flow_type) {
+ case TCP_V4_FLOW:
+ rss_type->tcp_ipv4 = rss_l4_en;
+ break;
+ case TCP_V6_FLOW:
+ rss_type->tcp_ipv6 = rss_l4_en;
+ break;
+ case UDP_V4_FLOW:
+ rss_type->udp_ipv4 = rss_l4_en;
+ break;
+ case UDP_V6_FLOW:
+ rss_type->udp_ipv6 = rss_l4_en;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int update_rss_hash_opts(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rxnfc *cmd,
+ struct nic_rss_type *rss_type)
+{
+ int err;
+
+ switch (cmd->flow_type) {
+ case TCP_V4_FLOW:
+ case TCP_V6_FLOW:
+ case UDP_V4_FLOW:
+ case UDP_V6_FLOW:
+ err = set_l4_rss_hash_ops(cmd, rss_type);
+ if (err)
+ return err;
+
+ break;
+ case IPV4_FLOW:
+ rss_type->ipv4 = 1;
+ break;
+ case IPV6_FLOW:
+ rss_type->ipv6 = 1;
+ break;
+ default:
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Unsupported flow type\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hinic3_set_rss_hash_opts(struct hinic3_nic_dev *nic_dev, struct ethtool_rxnfc *cmd)
+{
+ struct nic_rss_type *rss_type = &nic_dev->rss_type;
+ int err;
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ cmd->data = 0;
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "RSS is disable, not support to set flow-hash\n");
+ return -EOPNOTSUPP;
+ }
+
+ /* RSS does not support anything other than hashing
+ * to queues on src and dst IPs and ports
+ */
+ if (cmd->data & ~(RXH_IP_SRC | RXH_IP_DST | RXH_L4_B_0_1 |
+ RXH_L4_B_2_3))
+ return -EINVAL;
+
+ /* We need at least the IP SRC and DEST fields for hashing */
+ if (!(cmd->data & RXH_IP_SRC) || !(cmd->data & RXH_IP_DST))
+ return -EINVAL;
+
+ err = hinic3_get_rss_type(nic_dev->hwdev, rss_type);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to get rss type\n");
+ return -EFAULT;
+ }
+
+ err = update_rss_hash_opts(nic_dev, cmd, rss_type);
+ if (err)
+ return err;
+
+ err = hinic3_set_rss_type(nic_dev->hwdev, *rss_type);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to set rss type\n");
+ return -EFAULT;
+ }
+
+ nicif_info(nic_dev, drv, nic_dev->netdev, "Set rss hash options success\n");
+
+ return 0;
+}
+
+static void convert_rss_type(u8 rss_opt, struct ethtool_rxnfc *cmd)
+{
+ if (rss_opt)
+ cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+}
+
+static int hinic3_convert_rss_type(struct hinic3_nic_dev *nic_dev,
+ struct nic_rss_type *rss_type,
+ struct ethtool_rxnfc *cmd)
+{
+ cmd->data = RXH_IP_SRC | RXH_IP_DST;
+ switch (cmd->flow_type) {
+ case TCP_V4_FLOW:
+ convert_rss_type(rss_type->tcp_ipv4, cmd);
+ break;
+ case TCP_V6_FLOW:
+ convert_rss_type(rss_type->tcp_ipv6, cmd);
+ break;
+ case UDP_V4_FLOW:
+ convert_rss_type(rss_type->udp_ipv4, cmd);
+ break;
+ case UDP_V6_FLOW:
+ convert_rss_type(rss_type->udp_ipv6, cmd);
+ break;
+ case IPV4_FLOW:
+ case IPV6_FLOW:
+ break;
+ default:
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Unsupported flow type\n");
+ cmd->data = 0;
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hinic3_get_rss_hash_opts(struct hinic3_nic_dev *nic_dev, struct ethtool_rxnfc *cmd)
+{
+ struct nic_rss_type rss_type = {0};
+ int err;
+
+ cmd->data = 0;
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags))
+ return 0;
+
+ err = hinic3_get_rss_type(nic_dev->hwdev, &rss_type);
+ if (err) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to get rss type\n");
+ return err;
+ }
+
+ return hinic3_convert_rss_type(nic_dev, &rss_type, cmd);
+}
+
+#if (KERNEL_VERSION(3, 4, 24) > LINUX_VERSION_CODE)
+int hinic3_get_rxnfc(struct net_device *netdev,
+ struct ethtool_rxnfc *cmd, void *rule_locs)
+#else
+int hinic3_get_rxnfc(struct net_device *netdev,
+ struct ethtool_rxnfc *cmd, u32 *rule_locs)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err = 0;
+
+ switch (cmd->cmd) {
+ case ETHTOOL_GRXRINGS:
+ cmd->data = nic_dev->q_params.num_qps;
+ break;
+ case ETHTOOL_GRXCLSRLCNT:
+ cmd->rule_cnt = (u32)nic_dev->rx_flow_rule.tot_num_rules;
+ break;
+ case ETHTOOL_GRXCLSRULE:
+ err = hinic3_ethtool_get_flow(nic_dev, cmd, cmd->fs.location);
+ break;
+ case ETHTOOL_GRXCLSRLALL:
+ err = hinic3_ethtool_get_all_flows(nic_dev, cmd, rule_locs);
+ break;
+ case ETHTOOL_GRXFH:
+ err = hinic3_get_rss_hash_opts(nic_dev, cmd);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ break;
+ }
+
+ return err;
+}
+
+int hinic3_set_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err = 0;
+
+ switch (cmd->cmd) {
+ case ETHTOOL_SRXFH:
+ err = hinic3_set_rss_hash_opts(nic_dev, cmd);
+ break;
+ case ETHTOOL_SRXCLSRLINS:
+ err = hinic3_ethtool_flow_replace(nic_dev, &cmd->fs);
+ break;
+ case ETHTOOL_SRXCLSRLDEL:
+ err = hinic3_ethtool_flow_remove(nic_dev, cmd->fs.location);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ break;
+ }
+
+ return err;
+}
+
+static u16 hinic3_max_channels(struct hinic3_nic_dev *nic_dev)
+{
+ u8 tcs = (u8)netdev_get_num_tc(nic_dev->netdev);
+
+ return tcs ? nic_dev->max_qps / tcs : nic_dev->max_qps;
+}
+
+static u16 hinic3_curr_channels(struct hinic3_nic_dev *nic_dev)
+{
+ if (netif_running(nic_dev->netdev))
+ return nic_dev->q_params.num_qps ?
+ nic_dev->q_params.num_qps : 1;
+ else
+ return (u16)min_t(u16, hinic3_max_channels(nic_dev),
+ nic_dev->q_params.num_qps);
+}
+
+void hinic3_get_channels(struct net_device *netdev,
+ struct ethtool_channels *channels)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ channels->max_rx = 0;
+ channels->max_tx = 0;
+ channels->max_other = 0;
+ /* report maximum channels */
+ channels->max_combined = hinic3_max_channels(nic_dev);
+ channels->rx_count = 0;
+ channels->tx_count = 0;
+ channels->other_count = 0;
+ /* report flow director queues as maximum channels */
+ channels->combined_count = hinic3_curr_channels(nic_dev);
+}
+
+static int hinic3_validate_channel_parameter(struct net_device *netdev,
+ const struct ethtool_channels *channels)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u16 max_channel = hinic3_max_channels(nic_dev);
+ unsigned int count = channels->combined_count;
+
+ if (!count) {
+ nicif_err(nic_dev, drv, netdev,
+ "Unsupported combined_count=0\n");
+ return -EINVAL;
+ }
+
+ if (channels->tx_count || channels->rx_count || channels->other_count) {
+ nicif_err(nic_dev, drv, netdev,
+ "Setting rx/tx/other count not supported\n");
+ return -EINVAL;
+ }
+
+ if (count > max_channel) {
+ nicif_err(nic_dev, drv, netdev,
+ "Combined count %u exceed limit %u\n", count,
+ max_channel);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static void change_num_channel_reopen_handler(struct hinic3_nic_dev *nic_dev,
+ const void *priv_data)
+{
+ hinic3_set_default_rss_indir(nic_dev->netdev);
+}
+
+int hinic3_set_channels(struct net_device *netdev,
+ struct ethtool_channels *channels)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_dyna_txrxq_params q_params = {0};
+ unsigned int count = channels->combined_count;
+ int err;
+ u8 user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
+
+ if (hinic3_validate_channel_parameter(netdev, channels))
+ return -EINVAL;
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, netdev,
+ "This function don't support RSS, only support 1 queue pair\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)) {
+ if (count < user_cos_num) {
+ nicif_err(nic_dev, drv, netdev,
+ "DCB is on, channels num should more than valid cos num:%u\n",
+ user_cos_num);
+
+ return -EOPNOTSUPP;
+ }
+ }
+
+ if (HINIC3_SUPPORT_FDIR(nic_dev->hwdev) &&
+ !hinic3_validate_channel_setting_in_ntuple(nic_dev, count))
+ return -EOPNOTSUPP;
+
+ nicif_info(nic_dev, drv, netdev, "Set max combined queue number from %u to %u\n",
+ nic_dev->q_params.num_qps, count);
+
+ if (netif_running(netdev)) {
+ q_params = nic_dev->q_params;
+ q_params.num_qps = (u16)count;
+ q_params.txqs_res = NULL;
+ q_params.rxqs_res = NULL;
+ q_params.irq_cfg = NULL;
+
+ nicif_info(nic_dev, drv, netdev, "Restarting channel\n");
+ err = hinic3_change_channel_settings(nic_dev, &q_params,
+ change_num_channel_reopen_handler, NULL);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to change channel settings\n");
+ return -EFAULT;
+ }
+ } else {
+ /* Discard user configured rss */
+ hinic3_set_default_rss_indir(netdev);
+ nic_dev->q_params.num_qps = (u16)count;
+ }
+
+ return 0;
+}
+
+#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
+u32 hinic3_get_rxfh_indir_size(struct net_device *netdev)
+{
+ return NIC_RSS_INDIR_SIZE;
+}
+#endif
+
+static int set_rss_rxfh(struct net_device *netdev, const u32 *indir,
+ const u8 *key)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err;
+
+ if (indir) {
+ err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to set rss indir table\n");
+ return -EFAULT;
+ }
+ clear_bit(HINIC3_RSS_DEFAULT_INDIR, &nic_dev->flags);
+
+ memcpy(nic_dev->rss_indir, indir,
+ sizeof(u32) * NIC_RSS_INDIR_SIZE);
+ nicif_info(nic_dev, drv, netdev, "Change rss indir success\n");
+ }
+
+ if (key) {
+ err = hinic3_rss_set_hash_key(nic_dev->hwdev, key);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to set rss key\n");
+ return -EFAULT;
+ }
+
+ copy_value_to_rss_hkey(nic_dev, key);
+ nicif_info(nic_dev, drv, netdev, "Change rss key success\n");
+ }
+
+ return 0;
+}
+
+#if defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)
+u32 hinic3_get_rxfh_key_size(struct net_device *netdev)
+{
+ return NIC_RSS_KEY_SIZE;
+}
+
+#ifdef HAVE_RXFH_HASHFUNC
+int hinic3_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key, u8 *hfunc)
+#else
+int hinic3_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err = 0;
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Rss is disable\n");
+ return -EOPNOTSUPP;
+ }
+
+#ifdef HAVE_RXFH_HASHFUNC
+ if (hfunc)
+ *hfunc = nic_dev->rss_hash_engine ?
+ ETH_RSS_HASH_TOP : ETH_RSS_HASH_XOR;
+#endif
+
+ if (indir) {
+ err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indir);
+ if (err)
+ return -EFAULT;
+ }
+
+ if (key)
+ memcpy(key, nic_dev->rss_hkey, NIC_RSS_KEY_SIZE);
+
+ return err;
+}
+
+#ifdef HAVE_RXFH_HASHFUNC
+int hinic3_set_rxfh(struct net_device *netdev, const u32 *indir, const u8 *key,
+ const u8 hfunc)
+#else
+#ifdef HAVE_RXFH_NONCONST
+int hinic3_set_rxfh(struct net_device *netdev, u32 *indir, u8 *key)
+#else
+int hinic3_set_rxfh(struct net_device *netdev, const u32 *indir, const u8 *key)
+#endif
+#endif /* HAVE_RXFH_HASHFUNC */
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err = 0;
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Not support to set rss parameters when rss is disable\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) && indir) {
+ nicif_err(nic_dev, drv, netdev,
+ "Not support to set indir when DCB is enabled\n");
+ return -EOPNOTSUPP;
+ }
+
+#ifdef HAVE_RXFH_HASHFUNC
+ if (hfunc != ETH_RSS_HASH_NO_CHANGE) {
+ if (hfunc != ETH_RSS_HASH_TOP && hfunc != ETH_RSS_HASH_XOR) {
+ nicif_err(nic_dev, drv, netdev,
+ "Not support to set hfunc type except TOP and XOR\n");
+ return -EOPNOTSUPP;
+ }
+
+ nic_dev->rss_hash_engine = (hfunc == ETH_RSS_HASH_XOR) ?
+ HINIC3_RSS_HASH_ENGINE_TYPE_XOR :
+ HINIC3_RSS_HASH_ENGINE_TYPE_TOEP;
+ err = hinic3_rss_set_hash_engine(nic_dev->hwdev,
+ nic_dev->rss_hash_engine);
+ if (err)
+ return -EFAULT;
+
+ nicif_info(nic_dev, drv, netdev,
+ "Change hfunc to RSS_HASH_%s success\n",
+ (hfunc == ETH_RSS_HASH_XOR) ? "XOR" : "TOP");
+ }
+#endif
+ err = set_rss_rxfh(netdev, indir, key);
+
+ return err;
+}
+
+#else /* !(defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)) */
+
+#ifdef NOT_HAVE_GET_RXFH_INDIR_SIZE
+int hinic3_get_rxfh_indir(struct net_device *netdev,
+ struct ethtool_rxfh_indir *indir1)
+#else
+int hinic3_get_rxfh_indir(struct net_device *netdev, u32 *indir)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int err = 0;
+#ifdef NOT_HAVE_GET_RXFH_INDIR_SIZE
+ u32 *indir = NULL;
+
+ /* On older kernels (e.g. SUSE 11.2) this interface is called twice:
+ * the first call queries the table size, and the second fetches
+ * the rxfh indir table using that size.
+ */
+ if (indir1->size == 0) {
+ indir1->size = NIC_RSS_INDIR_SIZE;
+ return 0;
+ }
+
+ if (indir1->size < NIC_RSS_INDIR_SIZE) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to get rss indir, rss size(%d) is more than system rss size(%u).\n",
+ NIC_RSS_INDIR_SIZE, indir1->size);
+ return -EINVAL;
+ }
+
+ indir = indir1->ring_index;
+#endif
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Rss is disable\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (indir)
+ err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indir);
+
+ return err;
+}
+
+#ifdef NOT_HAVE_GET_RXFH_INDIR_SIZE
+int hinic3_set_rxfh_indir(struct net_device *netdev,
+ const struct ethtool_rxfh_indir *indir1)
+#else
+int hinic3_set_rxfh_indir(struct net_device *netdev, const u32 *indir)
+#endif
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+#ifdef NOT_HAVE_GET_RXFH_INDIR_SIZE
+ const u32 *indir = NULL;
+
+ if (indir1->size != NIC_RSS_INDIR_SIZE) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to set rss indir, rss size(%d) is more than system rss size(%u).\n",
+ NIC_RSS_INDIR_SIZE, indir1->size);
+ return -EINVAL;
+ }
+
+ indir = indir1->ring_index;
+#endif
+
+ if (!test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Not support to set rss indir when rss is disable\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) && indir) {
+ nicif_err(nic_dev, drv, netdev,
+ "Not support to set indir when DCB is enabled\n");
+ return -EOPNOTSUPP;
+ }
+
+ return set_rss_rxfh(netdev, indir, NULL);
+}
+
+#endif /* defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH) */
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rss.h b/drivers/net/ethernet/huawei/hinic3/hinic3_rss.h
new file mode 100644
index 000000000000..8961cdd095a1
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rss.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_RSS_H
+#define HINIC3_RSS_H
+
+#include "hinic3_nic_dev.h"
+
+#define HINIC_NUM_IQ_PER_FUNC 8
+
+int hinic3_rss_init(struct hinic3_nic_dev *nic_dev, u8 *rq2iq_map,
+ u32 map_size, u8 dcb_en);
+
+void hinic3_rss_deinit(struct hinic3_nic_dev *nic_dev);
+
+int hinic3_set_hw_rss_parameters(struct net_device *netdev, u8 rss_en,
+ u8 cos_num, u8 *cos_map, u8 dcb_en);
+
+void hinic3_init_rss_parameters(struct net_device *netdev);
+
+void hinic3_set_default_rss_indir(struct net_device *netdev);
+
+void hinic3_try_to_enable_rss(struct hinic3_nic_dev *nic_dev);
+
+void hinic3_clear_rss_config(struct hinic3_nic_dev *nic_dev);
+
+void hinic3_flush_rx_flow_rule(struct hinic3_nic_dev *nic_dev);
+int hinic3_ethtool_get_flow(const struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rxnfc *info, u32 location);
+
+int hinic3_ethtool_get_all_flows(const struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rxnfc *info, u32 *rule_locs);
+
+int hinic3_ethtool_flow_remove(struct hinic3_nic_dev *nic_dev, u32 location);
+
+int hinic3_ethtool_flow_replace(struct hinic3_nic_dev *nic_dev,
+ struct ethtool_rx_flow_spec *fs);
+
+bool hinic3_validate_channel_setting_in_ntuple(const struct hinic3_nic_dev *nic_dev, u32 q_num);
+
+/* for ethtool */
+#if (KERNEL_VERSION(3, 4, 24) > LINUX_VERSION_CODE)
+int hinic3_get_rxnfc(struct net_device *netdev,
+ struct ethtool_rxnfc *cmd, void *rule_locs);
+#else
+int hinic3_get_rxnfc(struct net_device *netdev,
+ struct ethtool_rxnfc *cmd, u32 *rule_locs);
+#endif
+
+int hinic3_set_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd);
+
+void hinic3_get_channels(struct net_device *netdev,
+ struct ethtool_channels *channels);
+
+int hinic3_set_channels(struct net_device *netdev,
+ struct ethtool_channels *channels);
+
+#ifndef NOT_HAVE_GET_RXFH_INDIR_SIZE
+u32 hinic3_get_rxfh_indir_size(struct net_device *netdev);
+#endif /* NOT_HAVE_GET_RXFH_INDIR_SIZE */
+
+#if defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)
+u32 hinic3_get_rxfh_key_size(struct net_device *netdev);
+
+#ifdef HAVE_RXFH_HASHFUNC
+int hinic3_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key, u8 *hfunc);
+#else /* HAVE_RXFH_HASHFUNC */
+int hinic3_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key);
+#endif /* HAVE_RXFH_HASHFUNC */
+
+#ifdef HAVE_RXFH_HASHFUNC
+int hinic3_set_rxfh(struct net_device *netdev, const u32 *indir, const u8 *key,
+ const u8 hfunc);
+#else
+#ifdef HAVE_RXFH_NONCONST
+int hinic3_set_rxfh(struct net_device *netdev, u32 *indir, u8 *key);
+#else
+int hinic3_set_rxfh(struct net_device *netdev, const u32 *indir, const u8 *key);
+#endif /* HAVE_RXFH_NONCONST */
+#endif /* HAVE_RXFH_HASHFUNC */
+
+#else /* !(defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)) */
+
+#ifdef NOT_HAVE_GET_RXFH_INDIR_SIZE
+int hinic3_get_rxfh_indir(struct net_device *netdev,
+ struct ethtool_rxfh_indir *indir1);
+#else
+int hinic3_get_rxfh_indir(struct net_device *netdev, u32 *indir);
+#endif
+
+#ifdef NOT_HAVE_GET_RXFH_INDIR_SIZE
+int hinic3_set_rxfh_indir(struct net_device *netdev,
+ const struct ethtool_rxfh_indir *indir1);
+#else
+int hinic3_set_rxfh_indir(struct net_device *netdev, const u32 *indir);
+#endif /* NOT_HAVE_GET_RXFH_INDIR_SIZE */
+
+#endif /* (defined(ETHTOOL_GRSSH) && defined(ETHTOOL_SRSSH)) */
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rss_cfg.c b/drivers/net/ethernet/huawei/hinic3/hinic3_rss_cfg.c
new file mode 100644
index 000000000000..175c4d68b795
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rss_cfg.c
@@ -0,0 +1,384 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/kernel.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/dcbnl.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_nic_cmd.h"
+#include "hinic3_hw.h"
+#include "hinic3_nic.h"
+#include "hinic3_common.h"
+
+static int hinic3_rss_cfg_hash_key(struct hinic3_nic_io *nic_io, u8 opcode,
+ u8 *key)
+{
+ struct hinic3_cmd_rss_hash_key hash_key;
+ u16 out_size = sizeof(hash_key);
+ int err;
+
+ memset(&hash_key, 0, sizeof(struct hinic3_cmd_rss_hash_key));
+ hash_key.func_id = hinic3_global_func_id(nic_io->hwdev);
+ hash_key.opcode = opcode;
+
+ if (opcode == HINIC3_CMD_OP_SET)
+ memcpy(hash_key.key, key, NIC_RSS_KEY_SIZE);
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_CFG_RSS_HASH_KEY,
+ &hash_key, sizeof(hash_key),
+ &hash_key, &out_size);
+ if (err || !out_size || hash_key.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to %s hash key, err: %d, status: 0x%x, out size: 0x%x\n",
+ opcode == HINIC3_CMD_OP_SET ? "set" : "get",
+ err, hash_key.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ if (opcode == HINIC3_CMD_OP_GET)
+ memcpy(key, hash_key.key, NIC_RSS_KEY_SIZE);
+
+ return 0;
+}
+
+int hinic3_rss_set_hash_key(void *hwdev, const u8 *key)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ u8 hash_key[NIC_RSS_KEY_SIZE];
+
+ if (!hwdev || !key)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ memcpy(hash_key, key, NIC_RSS_KEY_SIZE);
+ return hinic3_rss_cfg_hash_key(nic_io, HINIC3_CMD_OP_SET, hash_key);
+}
+
+int hinic3_rss_get_hash_key(void *hwdev, u8 *key)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !key)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ return hinic3_rss_cfg_hash_key(nic_io, HINIC3_CMD_OP_GET, key);
+}
+
+int hinic3_rss_get_indir_tbl(void *hwdev, u32 *indir_table)
+{
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ u16 *indir_tbl = NULL;
+ int err, i;
+
+ if (!hwdev || !indir_table)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd_buf.\n");
+ return -ENOMEM;
+ }
+
+ cmd_buf->size = sizeof(struct nic_rss_indirect_tbl);
+ err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE,
+ cmd_buf, cmd_buf, NULL, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err) {
+ nic_err(nic_io->dev_hdl, "Failed to get rss indir table\n");
+ goto get_indir_tbl_failed;
+ }
+
+ indir_tbl = (u16 *)cmd_buf->buf;
+ for (i = 0; i < NIC_RSS_INDIR_SIZE; i++)
+ indir_table[i] = *(indir_tbl + i);
+
+get_indir_tbl_failed:
+ hinic3_free_cmd_buf(hwdev, cmd_buf);
+
+ return err;
+}
+
+int hinic3_rss_set_indir_tbl(void *hwdev, const u32 *indir_table)
+{
+ struct nic_rss_indirect_tbl *indir_tbl = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 *temp = NULL;
+ u32 i, size;
+ u64 out_param = 0;
+ int err;
+
+ if (!hwdev || !indir_table)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
+ cmd_buf->size = sizeof(struct nic_rss_indirect_tbl);
+ indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf;
+ memset(indir_tbl, 0, sizeof(*indir_tbl));
+
+ for (i = 0; i < NIC_RSS_INDIR_SIZE; i++)
+ indir_tbl->entry[i] = (u16)(*(indir_table + i));
+
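+ /* convert the table to big-endian for the hardware, one u32 at a time */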
+ size = sizeof(indir_tbl->entry) / sizeof(u32);
+ temp = (u32 *)indir_tbl->entry;
+ for (i = 0; i < size; i++)
+ temp[i] = cpu_to_be32(temp[i]);
+
+ err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+ if (err || out_param != 0) {
+ nic_err(nic_io->dev_hdl, "Failed to set rss indir table\n");
+ err = -EFAULT;
+ }
+
+ hinic3_free_cmd_buf(hwdev, cmd_buf);
+ return err;
+}
+
+static int hinic3_cmdq_set_rss_type(void *hwdev, struct nic_rss_type rss_type)
+{
+ struct nic_rss_context_tbl *ctx_tbl = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_nic_io *nic_io = NULL;
+ u32 ctx = 0;
+ u64 out_param = 0;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ nic_err(nic_io->dev_hdl, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
+ ctx |= HINIC3_RSS_TYPE_SET(1, VALID) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv4, IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv6, IPV6) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv6_ext, IPV6_EXT) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv4, TCP_IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv6, TCP_IPV6) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv6_ext, TCP_IPV6_EXT) |
+ HINIC3_RSS_TYPE_SET(rss_type.udp_ipv4, UDP_IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.udp_ipv6, UDP_IPV6);
+
+ cmd_buf->size = sizeof(struct nic_rss_context_tbl);
+ ctx_tbl = (struct nic_rss_context_tbl *)cmd_buf->buf;
+ memset(ctx_tbl, 0, sizeof(*ctx_tbl));
+ ctx_tbl->ctx = cpu_to_be32(ctx);
+
+ /* cfg the rss context table by command queue */
+ err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE,
+ cmd_buf, &out_param, 0,
+ HINIC3_CHANNEL_NIC);
+
+ hinic3_free_cmd_buf(hwdev, cmd_buf);
+
+ if (err || out_param != 0) {
+ nic_err(nic_io->dev_hdl, "cmdq set set rss context table failed, err: %d\n",
+ err);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int hinic3_mgmt_set_rss_type(void *hwdev, struct nic_rss_type rss_type)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+ struct hinic3_rss_context_table ctx_tbl;
+ u32 ctx = 0;
+ u16 out_size = sizeof(ctx_tbl);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ memset(&ctx_tbl, 0, sizeof(ctx_tbl));
+ ctx_tbl.func_id = hinic3_global_func_id(hwdev);
+ ctx |= HINIC3_RSS_TYPE_SET(1, VALID) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv4, IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv6, IPV6) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv6_ext, IPV6_EXT) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv4, TCP_IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv6, TCP_IPV6) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv6_ext, TCP_IPV6_EXT) |
+ HINIC3_RSS_TYPE_SET(rss_type.udp_ipv4, UDP_IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.udp_ipv6, UDP_IPV6);
+ ctx_tbl.context = ctx;
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_RSS_CTX_TBL_INTO_FUNC,
+ &ctx_tbl, sizeof(ctx_tbl),
+ &ctx_tbl, &out_size);
+
+ if (ctx_tbl.msg_head.status == HINIC3_MGMT_CMD_UNSUPPORTED) {
+ return HINIC3_MGMT_CMD_UNSUPPORTED;
+ } else if (err || !out_size || ctx_tbl.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "mgmt Failed to set rss context offload, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, ctx_tbl.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_set_rss_type(void *hwdev, struct nic_rss_type rss_type)
+{
+ int err;
+
+ err = hinic3_mgmt_set_rss_type(hwdev, rss_type);
+ if (err == HINIC3_MGMT_CMD_UNSUPPORTED)
+ err = hinic3_cmdq_set_rss_type(hwdev, rss_type);
+
+ return err;
+}
+
+int hinic3_get_rss_type(void *hwdev, struct nic_rss_type *rss_type)
+{
+ struct hinic3_rss_context_table ctx_tbl;
+ u16 out_size = sizeof(ctx_tbl);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ if (!hwdev || !rss_type)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+
+ memset(&ctx_tbl, 0, sizeof(struct hinic3_rss_context_table));
+ ctx_tbl.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_GET_RSS_CTX_TBL,
+ &ctx_tbl, sizeof(ctx_tbl),
+ &ctx_tbl, &out_size);
+ if (err || !out_size || ctx_tbl.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to get hash type, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, ctx_tbl.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ rss_type->ipv4 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, IPV4);
+ rss_type->ipv6 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, IPV6);
+ rss_type->ipv6_ext = HINIC3_RSS_TYPE_GET(ctx_tbl.context, IPV6_EXT);
+ rss_type->tcp_ipv4 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, TCP_IPV4);
+ rss_type->tcp_ipv6 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, TCP_IPV6);
+ rss_type->tcp_ipv6_ext = HINIC3_RSS_TYPE_GET(ctx_tbl.context,
+ TCP_IPV6_EXT);
+ rss_type->udp_ipv4 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, UDP_IPV4);
+ rss_type->udp_ipv6 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, UDP_IPV6);
+
+ return 0;
+}
+
+static int hinic3_rss_cfg_hash_engine(struct hinic3_nic_io *nic_io, u8 opcode,
+ u8 *type)
+{
+ struct hinic3_cmd_rss_engine_type hash_type;
+ u16 out_size = sizeof(hash_type);
+ int err;
+
+ memset(&hash_type, 0, sizeof(struct hinic3_cmd_rss_engine_type));
+
+ hash_type.func_id = hinic3_global_func_id(nic_io->hwdev);
+ hash_type.opcode = opcode;
+
+ if (opcode == HINIC3_CMD_OP_SET)
+ hash_type.hash_engine = *type;
+
+ err = l2nic_msg_to_mgmt_sync(nic_io->hwdev,
+ HINIC3_NIC_CMD_CFG_RSS_HASH_ENGINE,
+ &hash_type, sizeof(hash_type),
+ &hash_type, &out_size);
+ if (err || !out_size || hash_type.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to %s hash engine, err: %d, status: 0x%x, out size: 0x%x\n",
+ opcode == HINIC3_CMD_OP_SET ? "set" : "get",
+ err, hash_type.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ if (opcode == HINIC3_CMD_OP_GET)
+ *type = hash_type.hash_engine;
+
+ return 0;
+}
+
+int hinic3_rss_set_hash_engine(void *hwdev, u8 type)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ return hinic3_rss_cfg_hash_engine(nic_io, HINIC3_CMD_OP_SET, &type);
+}
+
+int hinic3_rss_get_hash_engine(void *hwdev, u8 *type)
+{
+ struct hinic3_nic_io *nic_io = NULL;
+
+ if (!hwdev || !type)
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ return hinic3_rss_cfg_hash_engine(nic_io, HINIC3_CMD_OP_GET, type);
+}
+
+int hinic3_rss_cfg(void *hwdev, u8 rss_en, u8 cos_num, u8 *prio_tc, u16 num_qps)
+{
+ struct hinic3_cmd_rss_config rss_cfg;
+ u16 out_size = sizeof(rss_cfg);
+ struct hinic3_nic_io *nic_io = NULL;
+ int err;
+
+ /* microcode requires the number of TCs to be a power of 2 */
+ if (!hwdev || !prio_tc || (cos_num & (cos_num - 1)))
+ return -EINVAL;
+
+ nic_io = hinic3_get_service_adapter(hwdev, SERVICE_T_NIC);
+ memset(&rss_cfg, 0, sizeof(struct hinic3_cmd_rss_config));
+ rss_cfg.func_id = hinic3_global_func_id(hwdev);
+ rss_cfg.rss_en = rss_en;
+ rss_cfg.rq_priority_number = cos_num ? (u8)ilog2(cos_num) : 0;
+ rss_cfg.num_qps = num_qps;
+
+ memcpy(rss_cfg.prio_tc, prio_tc, NIC_DCB_UP_MAX);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_RSS_CFG,
+ &rss_cfg, sizeof(rss_cfg),
+ &rss_cfg, &out_size);
+ if (err || !out_size || rss_cfg.msg_head.status) {
+ nic_err(nic_io->dev_hdl, "Failed to set rss cfg, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, rss_cfg.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rx.c b/drivers/net/ethernet/huawei/hinic3/hinic3_rx.c
new file mode 100644
index 000000000000..a4085334806d
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rx.c
@@ -0,0 +1,1344 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/skbuff.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/u64_stats_sync.h>
+#include <linux/ip.h>
+#include <linux/tcp.h>
+#include <linux/sctp.h>
+#include <linux/pkt_sched.h>
+#include <linux/ipv6.h>
+#include <linux/module.h>
+#include <linux/compiler.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_common.h"
+#include "hinic3_nic_qp.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_rss.h"
+#include "hinic3_rx.h"
+
+static u32 rq_pi_rd_en;
+module_param(rq_pi_rd_en, uint, 0644);
+MODULE_PARM_DESC(rq_pi_rd_en, "Enable rq to read pi from host memory; by default pi is updated via doorbell (default=0)");
+
+/* performance: the ci address is cache-line (64B) aligned */
+#define HINIC3_RX_HDR_SIZE 256
+#define HINIC3_RX_BUFFER_WRITE 16
+
+#define HINIC3_RX_TCP_PKT 0x3
+#define HINIC3_RX_UDP_PKT 0x4
+#define HINIC3_RX_SCTP_PKT 0x7
+
+#define HINIC3_RX_IPV4_PKT 0
+#define HINIC3_RX_IPV6_PKT 1
+#define HINIC3_RX_INVALID_IP_TYPE 2
+
+#define HINIC3_RX_PKT_FORMAT_NON_TUNNEL 0
+#define HINIC3_RX_PKT_FORMAT_VXLAN 1
+
+#define RXQ_STATS_INC(rxq, field) \
+do { \
+ u64_stats_update_begin(&(rxq)->rxq_stats.syncp); \
+ (rxq)->rxq_stats.field++; \
+ u64_stats_update_end(&(rxq)->rxq_stats.syncp); \
+} while (0)
+
+static bool rx_alloc_mapped_page(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_rx_info *rx_info)
+{
+ struct pci_dev *pdev = nic_dev->pdev;
+ struct page *page = rx_info->page;
+ dma_addr_t dma = rx_info->buf_dma_addr;
+
+ if (likely(dma))
+ return true;
+
+ /* alloc new page for storage */
+ page = alloc_pages_node(NUMA_NO_NODE, GFP_ATOMIC | __GFP_COLD |
+ __GFP_COMP, nic_dev->page_order);
+ if (unlikely(!page))
+ return false;
+
+ /* map page for use */
+ dma = dma_map_page(&pdev->dev, page, 0, nic_dev->dma_rx_buff_size,
+ DMA_FROM_DEVICE);
+ /* if mapping failed free memory back to system since
+ * there isn't much point in holding memory we can't use
+ */
+ if (unlikely(dma_mapping_error(&pdev->dev, dma))) {
+ __free_pages(page, nic_dev->page_order);
+ return false;
+ }
+
+ rx_info->page = page;
+ rx_info->buf_dma_addr = dma;
+ rx_info->page_offset = 0;
+
+ return true;
+}
+
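+/* One-time initialization of every RQ WQE: point it at its completion
+ * queue entry and, for extended WQEs, set the fixed buffer length.
+ * Buffer addresses are filled per refill in hinic3_rx_fill_buffers().
+ */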
+static u32 hinic3_rx_fill_wqe(struct hinic3_rxq *rxq)
+{
+ struct net_device *netdev = rxq->netdev;
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ int rq_wqe_len = rxq->rq->wq.wqebb_size;
+ struct hinic3_rq_wqe *rq_wqe = NULL;
+ struct hinic3_rx_info *rx_info = NULL;
+ u32 i;
+
+ for (i = 0; i < rxq->q_depth; i++) {
+ rx_info = &rxq->rx_info[i];
+ rq_wqe = hinic3_rq_wqe_addr(rxq->rq, (u16)i);
+
+ if (rxq->rq->wqe_type == HINIC3_EXTEND_RQ_WQE) {
+ /* unit of cqe length is 16B */
+ hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge,
+ rx_info->cqe_dma,
+ (sizeof(struct hinic3_rq_cqe) >>
+ HINIC3_CQE_SIZE_SHIFT));
+ /* use fixed len */
+ rq_wqe->extend_wqe.buf_desc.sge.len =
+ nic_dev->rx_buff_len;
+ } else {
+ rq_wqe->normal_wqe.cqe_hi_addr =
+ upper_32_bits(rx_info->cqe_dma);
+ rq_wqe->normal_wqe.cqe_lo_addr =
+ lower_32_bits(rx_info->cqe_dma);
+ }
+
+ hinic3_hw_be32_len(rq_wqe, rq_wqe_len);
+ rx_info->rq_wqe = rq_wqe;
+ }
+
+ return i;
+}
+
+static u32 hinic3_rx_fill_buffers(struct hinic3_rxq *rxq)
+{
+ struct net_device *netdev = rxq->netdev;
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_rq_wqe *rq_wqe = NULL;
+ struct hinic3_rx_info *rx_info = NULL;
+ dma_addr_t dma_addr;
+ u32 i, free_wqebbs = rxq->delta - 1;
+
+ for (i = 0; i < free_wqebbs; i++) {
+ rx_info = &rxq->rx_info[rxq->next_to_update];
+
+ if (unlikely(!rx_alloc_mapped_page(nic_dev, rx_info))) {
+ RXQ_STATS_INC(rxq, alloc_rx_buf_err);
+ break;
+ }
+
+ dma_addr = rx_info->buf_dma_addr + rx_info->page_offset;
+
+ rq_wqe = rx_info->rq_wqe;
+
+ if (rxq->rq->wqe_type == HINIC3_EXTEND_RQ_WQE) {
+ rq_wqe->extend_wqe.buf_desc.sge.hi_addr =
+ hinic3_hw_be32(upper_32_bits(dma_addr));
+ rq_wqe->extend_wqe.buf_desc.sge.lo_addr =
+ hinic3_hw_be32(lower_32_bits(dma_addr));
+ } else {
+ rq_wqe->normal_wqe.buf_hi_addr =
+ hinic3_hw_be32(upper_32_bits(dma_addr));
+ rq_wqe->normal_wqe.buf_lo_addr =
+ hinic3_hw_be32(lower_32_bits(dma_addr));
+ }
+ rxq->next_to_update = (u16)((rxq->next_to_update + 1) & rxq->q_mask);
+ }
+
+ if (likely(i)) {
+ if (!rq_pi_rd_en) {
+ hinic3_write_db(rxq->rq,
+ rxq->q_id & (NIC_DCB_COS_MAX - 1),
+ RQ_CFLAG_DP,
+ (u16)((u32)rxq->next_to_update <<
+ rxq->rq->wqe_type));
+ } else {
+ /* Write all the wqes before pi update */
+ wmb();
+
+ hinic3_update_rq_hw_pi(rxq->rq, rxq->next_to_update);
+ }
+ rxq->delta -= i;
+ rxq->next_to_alloc = rxq->next_to_update;
+ } else if (free_wqebbs == rxq->q_depth - 1) {
+ RXQ_STATS_INC(rxq, rx_buf_empty);
+ }
+
+ return i;
+}
+
+static u32 hinic3_rx_alloc_buffers(struct hinic3_nic_dev *nic_dev, u32 rq_depth,
+ struct hinic3_rx_info *rx_info_arr)
+{
+ u32 free_wqebbs = rq_depth - 1;
+ u32 idx;
+
+ for (idx = 0; idx < free_wqebbs; idx++) {
+ if (!rx_alloc_mapped_page(nic_dev, &rx_info_arr[idx]))
+ break;
+ }
+
+ return idx;
+}
+
+static void hinic3_rx_free_buffers(struct hinic3_nic_dev *nic_dev, u32 q_depth,
+ struct hinic3_rx_info *rx_info_arr)
+{
+ struct hinic3_rx_info *rx_info = NULL;
+ u32 i;
+
+ /* Free all the Rx ring sk_buffs */
+ for (i = 0; i < q_depth; i++) {
+ rx_info = &rx_info_arr[i];
+
+ if (rx_info->buf_dma_addr) {
+ dma_unmap_page(&nic_dev->pdev->dev,
+ rx_info->buf_dma_addr,
+ nic_dev->dma_rx_buff_size,
+ DMA_FROM_DEVICE);
+ rx_info->buf_dma_addr = 0;
+ }
+
+ if (rx_info->page) {
+ __free_pages(rx_info->page, nic_dev->page_order);
+ rx_info->page = NULL;
+ }
+ }
+}
+
+static void hinic3_reuse_rx_page(struct hinic3_rxq *rxq,
+ struct hinic3_rx_info *old_rx_info)
+{
+ struct hinic3_rx_info *new_rx_info;
+ u16 nta = rxq->next_to_alloc;
+
+ new_rx_info = &rxq->rx_info[nta];
+
+ /* update, and store next to alloc */
+ nta++;
+ rxq->next_to_alloc = (nta < rxq->q_depth) ? nta : 0;
+
+ new_rx_info->page = old_rx_info->page;
+ new_rx_info->page_offset = old_rx_info->page_offset;
+ new_rx_info->buf_dma_addr = old_rx_info->buf_dma_addr;
+
+ /* sync the buffer for use by the device */
+ dma_sync_single_range_for_device(rxq->dev, new_rx_info->buf_dma_addr,
+ new_rx_info->page_offset,
+ rxq->buf_len,
+ DMA_FROM_DEVICE);
+}
+
+static bool hinic3_add_rx_frag(struct hinic3_rxq *rxq,
+ struct hinic3_rx_info *rx_info,
+ struct sk_buff *skb, u32 size)
+{
+ struct page *page;
+ u8 *va;
+
+ page = rx_info->page;
+ va = (u8 *)page_address(page) + rx_info->page_offset;
+ prefetch(va);
+#if L1_CACHE_BYTES < 128
+ prefetch(va + L1_CACHE_BYTES);
+#endif
+
+ dma_sync_single_range_for_cpu(rxq->dev,
+ rx_info->buf_dma_addr,
+ rx_info->page_offset,
+ rxq->buf_len,
+ DMA_FROM_DEVICE);
+
+ if (size <= HINIC3_RX_HDR_SIZE && !skb_is_nonlinear(skb)) {
+ memcpy(__skb_put(skb, size), va,
+ ALIGN(size, sizeof(long))); /*lint !e666*/
+
+ /* page is not reserved, we can reuse buffer as-is */
+ if (likely(page_to_nid(page) == numa_node_id()))
+ return true;
+
+ /* this page cannot be reused so discard it */
+ put_page(page);
+ return false;
+ }
+
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
+ (int)rx_info->page_offset, (int)size, rxq->buf_len);
+
+ /* avoid re-using remote pages */
+ if (unlikely(page_to_nid(page) != numa_node_id()))
+ return false;
+
+ /* if we are only owner of page we can reuse it */
+ if (unlikely(page_count(page) != 1))
+ return false;
+
+ /* flip page offset to other buffer */
+ rx_info->page_offset ^= rxq->buf_len;
+ get_page(page);
+
+ return true;
+}
+
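+/* Gather a multi-buffer packet into head_skb: the first MAX_SKB_FRAGS
+ * buffers become page frags of head_skb, and any further buffers go
+ * into the skbs chained on its frag_list by hinic3_fetch_rx_buffer().
+ */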
+static void packaging_skb(struct hinic3_rxq *rxq, struct sk_buff *head_skb,
+ u8 sge_num, u32 pkt_len)
+{
+ struct hinic3_rx_info *rx_info = NULL;
+ struct sk_buff *skb = NULL;
+ u8 frag_num = 0;
+ u32 size;
+ u32 sw_ci;
+ u32 temp_pkt_len = pkt_len;
+ u8 temp_sge_num = sge_num;
+
+ sw_ci = rxq->cons_idx & rxq->q_mask;
+ skb = head_skb;
+ while (temp_sge_num) {
+ rx_info = &rxq->rx_info[sw_ci];
+ sw_ci = (sw_ci + 1) & rxq->q_mask;
+ if (unlikely(temp_pkt_len > rxq->buf_len)) {
+ size = rxq->buf_len;
+ temp_pkt_len -= rxq->buf_len;
+ } else {
+ size = temp_pkt_len;
+ }
+
+ if (unlikely(frag_num == MAX_SKB_FRAGS)) {
+ frag_num = 0;
+ if (skb == head_skb)
+ skb = skb_shinfo(skb)->frag_list;
+ else
+ skb = skb->next;
+ }
+
+ if (unlikely(skb != head_skb)) {
+ head_skb->len += size;
+ head_skb->data_len += size;
+ head_skb->truesize += rxq->buf_len;
+ }
+
+ if (likely(hinic3_add_rx_frag(rxq, rx_info, skb, size))) {
+ hinic3_reuse_rx_page(rxq, rx_info);
+ } else {
+ /* we are not reusing the buffer so unmap it */
+ dma_unmap_page(rxq->dev, rx_info->buf_dma_addr,
+ rxq->dma_rx_buff_size, DMA_FROM_DEVICE);
+ }
+ /* clear contents of buffer_info */
+ rx_info->buf_dma_addr = 0;
+ rx_info->page = NULL;
+ temp_sge_num--;
+ frag_num++;
+ }
+}
+
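+/* number of receive buffers (SGEs) a packet occupies: pkt_len divided
+ * by the buffer size, rounded up
+ */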
+#define HINIC3_GET_SGE_NUM(pkt_len, rxq) \
+ ((u8)(((pkt_len) >> (rxq)->rx_buff_shift) + \
+ (((pkt_len) & ((rxq)->buf_len - 1)) ? 1 : 0)))
+
+static struct sk_buff *hinic3_fetch_rx_buffer(struct hinic3_rxq *rxq,
+ u32 pkt_len)
+{
+ struct sk_buff *head_skb = NULL;
+ struct sk_buff *cur_skb = NULL;
+ struct sk_buff *skb = NULL;
+ struct net_device *netdev = rxq->netdev;
+ u8 sge_num, skb_num;
+ u16 wqebb_cnt = 0;
+
+ head_skb = netdev_alloc_skb_ip_align(netdev, HINIC3_RX_HDR_SIZE);
+ if (unlikely(!head_skb))
+ return NULL;
+
+ sge_num = HINIC3_GET_SGE_NUM(pkt_len, rxq);
+ if (likely(sge_num <= MAX_SKB_FRAGS))
+ skb_num = 1;
+ else
+ skb_num = (sge_num / MAX_SKB_FRAGS) +
+ ((sge_num % MAX_SKB_FRAGS) ? 1 : 0);
+
+ while (unlikely(skb_num > 1)) {
+ cur_skb = netdev_alloc_skb_ip_align(netdev, HINIC3_RX_HDR_SIZE);
+ if (unlikely(!cur_skb))
+ goto alloc_skb_fail;
+
+ if (!skb) {
+ skb_shinfo(head_skb)->frag_list = cur_skb;
+ skb = cur_skb;
+ } else {
+ skb->next = cur_skb;
+ skb = cur_skb;
+ }
+
+ skb_num--;
+ }
+
+ prefetchw(head_skb->data);
+ wqebb_cnt = sge_num;
+
+ packaging_skb(rxq, head_skb, sge_num, pkt_len);
+
+ rxq->cons_idx += wqebb_cnt;
+ rxq->delta += wqebb_cnt;
+
+ return head_skb;
+
+alloc_skb_fail:
+ dev_kfree_skb_any(head_skb);
+ return NULL;
+}
+
+void hinic3_rxq_get_stats(struct hinic3_rxq *rxq,
+ struct hinic3_rxq_stats *stats)
+{
+ struct hinic3_rxq_stats *rxq_stats = &rxq->rxq_stats;
+ unsigned int start;
+
+ u64_stats_update_begin(&stats->syncp);
+ do {
+ start = u64_stats_fetch_begin(&rxq_stats->syncp);
+ stats->bytes = rxq_stats->bytes;
+ stats->packets = rxq_stats->packets;
+ stats->errors = rxq_stats->csum_errors +
+ rxq_stats->other_errors;
+ stats->csum_errors = rxq_stats->csum_errors;
+ stats->other_errors = rxq_stats->other_errors;
+ stats->dropped = rxq_stats->dropped;
+ stats->xdp_dropped = rxq_stats->xdp_dropped;
+ stats->rx_buf_empty = rxq_stats->rx_buf_empty;
+ } while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
+ u64_stats_update_end(&stats->syncp);
+}
+
+void hinic3_rxq_clean_stats(struct hinic3_rxq_stats *rxq_stats)
+{
+ u64_stats_update_begin(&rxq_stats->syncp);
+ rxq_stats->bytes = 0;
+ rxq_stats->packets = 0;
+ rxq_stats->errors = 0;
+ rxq_stats->csum_errors = 0;
+ rxq_stats->other_errors = 0;
+ rxq_stats->dropped = 0;
+ rxq_stats->xdp_dropped = 0;
+ rxq_stats->rx_buf_empty = 0;
+
+ rxq_stats->alloc_skb_err = 0;
+ rxq_stats->alloc_rx_buf_err = 0;
+ rxq_stats->xdp_large_pkt = 0;
+ rxq_stats->restore_drop_sge = 0;
+ rxq_stats->rsvd2 = 0;
+ u64_stats_update_end(&rxq_stats->syncp);
+}
+
+static void rxq_stats_init(struct hinic3_rxq *rxq)
+{
+ struct hinic3_rxq_stats *rxq_stats = &rxq->rxq_stats;
+
+ u64_stats_init(&rxq_stats->syncp);
+ hinic3_rxq_clean_stats(rxq_stats);
+}
+
+#ifndef HAVE_ETH_GET_HEADLEN_FUNC
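+/* Fallback for kernels without eth_get_headlen(): walk the L2/L3/L4
+ * headers and return the header length to pull into the skb linear area.
+ */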
+static unsigned int hinic3_eth_get_headlen(unsigned char *data, unsigned int max_len)
+{
+#define IP_FRAG_OFFSET 0x1FFF
+#define FCOE_HLEN 38
+#define ETH_P_8021_AD 0x88A8
+#define ETH_P_8021_Q 0x8100
+#define TCP_HEAD_OFFSET 12
+ union {
+ unsigned char *data;
+ struct ethhdr *eth;
+ struct vlan_ethhdr *vlan;
+ struct iphdr *ipv4;
+ struct ipv6hdr *ipv6;
+ } hdr;
+ u16 protocol;
+ u8 nexthdr = 0;
+ u8 hlen;
+
+ if (unlikely(max_len < ETH_HLEN))
+ return max_len;
+
+ hdr.data = data;
+ protocol = hdr.eth->h_proto;
+
+ /* L2 header */
+ /*lint -save -e778*/
+ if (protocol == __constant_htons(ETH_P_8021_AD) ||
+ protocol == __constant_htons(ETH_P_8021_Q)) { /*lint -restore*/
+ if (unlikely(max_len < ETH_HLEN + VLAN_HLEN))
+ return max_len;
+
+ /* L3 protocol */
+ protocol = hdr.vlan->h_vlan_encapsulated_proto;
+ hdr.data += sizeof(struct vlan_ethhdr);
+ } else {
+ hdr.data += ETH_HLEN;
+ }
+
+ /* L3 header */
+ /*lint -save -e778*/
+ switch (protocol) {
+ case __constant_htons(ETH_P_IP): /*lint -restore*/
+ if ((int)(hdr.data - data) >
+ (int)(max_len - sizeof(struct iphdr)))
+ return max_len;
+
+ /* L3 header length = (1st byte & 0x0F) << 2 */
+ hlen = (hdr.data[0] & 0x0F) << 2;
+
+ if (hlen < sizeof(struct iphdr))
+ return (unsigned int)(hdr.data - data);
+
+ if (!(hdr.ipv4->frag_off & htons(IP_FRAG_OFFSET)))
+ nexthdr = hdr.ipv4->protocol;
+
+ hdr.data += hlen;
+ break;
+
+ case __constant_htons(ETH_P_IPV6):
+ if ((int)(hdr.data - data) >
+ (int)(max_len - sizeof(struct ipv6hdr)))
+ return max_len;
+ /* L4 protocol */
+ nexthdr = hdr.ipv6->nexthdr;
+ hdr.data += sizeof(struct ipv6hdr);
+ break;
+
+ case __constant_htons(ETH_P_FCOE):
+ hdr.data += FCOE_HLEN;
+ break;
+
+ default:
+ return (unsigned int)(hdr.data - data);
+ }
+
+ /* L4 header */
+ switch (nexthdr) {
+ case IPPROTO_TCP:
+ if ((int)(hdr.data - data) >
+ (int)(max_len - sizeof(struct tcphdr)))
+ return max_len;
+
+ /* L4 header length = (13th byte & 0xF0) >> 2 */
+ if (((hdr.data[TCP_HEAD_OFFSET] & 0xF0) >>
+ HINIC3_HEADER_DATA_UNIT) > sizeof(struct tcphdr))
+ hdr.data += ((hdr.data[TCP_HEAD_OFFSET] & 0xF0) >>
+ HINIC3_HEADER_DATA_UNIT);
+ else
+ hdr.data += sizeof(struct tcphdr);
+ break;
+ case IPPROTO_UDP:
+ case IPPROTO_UDPLITE:
+ hdr.data += sizeof(struct udphdr);
+ break;
+
+ case IPPROTO_SCTP:
+ hdr.data += sizeof(struct sctphdr);
+ break;
+ default:
+ break;
+ }
+
+ if ((hdr.data - data) > max_len)
+ return max_len;
+ else
+ return (unsigned int)(hdr.data - data);
+}
+#endif
+
+static void hinic3_pull_tail(struct sk_buff *skb)
+{
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[0];
+ unsigned char *va = NULL;
+ unsigned int pull_len;
+
+ /* it is valid to use page_address instead of kmap since we are
+ * working with pages allocated out of the lomem pool per
+ * alloc_page(GFP_ATOMIC)
+ */
+ va = skb_frag_address(frag);
+
+#ifdef HAVE_ETH_GET_HEADLEN_FUNC
+ /* we need the header to contain the greater of either ETH_HLEN or
+ * 60 bytes if the skb->len is less than 60 for skb_pad.
+ */
+#ifdef ETH_GET_HEADLEN_NEED_DEV
+ pull_len = eth_get_headlen(skb->dev, va, HINIC3_RX_HDR_SIZE);
+#else
+ pull_len = eth_get_headlen(va, HINIC3_RX_HDR_SIZE);
+#endif
+
+#else
+ pull_len = hinic3_eth_get_headlen(va, HINIC3_RX_HDR_SIZE);
+#endif
+
+ /* align pull length to size of long to optimize memcpy performance */
+ skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long)));
+
+ /* update all of the pointers */
+ skb_frag_size_sub(frag, (int)pull_len);
+ skb_frag_off_add(frag, (int)pull_len);
+
+ skb->data_len -= pull_len;
+ skb->tail += pull_len;
+}
+
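+/* Map the hardware checksum result onto the skb: count errors, and
+ * report CHECKSUM_UNNECESSARY only for TCP/UDP/SCTP packets in formats
+ * the hardware fully parses (non-tunnel or VXLAN).
+ */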
+static void hinic3_rx_csum(struct hinic3_rxq *rxq, u32 offload_type,
+ u32 status, struct sk_buff *skb)
+{
+ struct net_device *netdev = rxq->netdev;
+ u32 pkt_type = HINIC3_GET_RX_PKT_TYPE(offload_type);
+ u32 ip_type = HINIC3_GET_RX_IP_TYPE(offload_type);
+ u32 pkt_fmt = HINIC3_GET_RX_TUNNEL_PKT_FORMAT(offload_type);
+
+ u32 csum_err;
+
+ csum_err = HINIC3_GET_RX_CSUM_ERR(status);
+ if (unlikely(csum_err == HINIC3_RX_CSUM_IPSU_OTHER_ERR))
+ rxq->rxq_stats.other_errors++;
+
+ if (!(netdev->features & NETIF_F_RXCSUM))
+ return;
+
+ if (unlikely(csum_err)) {
+ /* pkt type is recognized by HW, and csum is wrong */
+ if (!(csum_err & (HINIC3_RX_CSUM_HW_CHECK_NONE |
+ HINIC3_RX_CSUM_IPSU_OTHER_ERR)))
+ rxq->rxq_stats.csum_errors++;
+ skb->ip_summed = CHECKSUM_NONE;
+ return;
+ }
+
+ if (ip_type == HINIC3_RX_INVALID_IP_TYPE ||
+ !(pkt_fmt == HINIC3_RX_PKT_FORMAT_NON_TUNNEL ||
+ pkt_fmt == HINIC3_RX_PKT_FORMAT_VXLAN)) {
+ skb->ip_summed = CHECKSUM_NONE;
+ return;
+ }
+
+ switch (pkt_type) {
+ case HINIC3_RX_TCP_PKT:
+ case HINIC3_RX_UDP_PKT:
+ case HINIC3_RX_SCTP_PKT:
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ break;
+ default:
+ skb->ip_summed = CHECKSUM_NONE;
+ break;
+ }
+}
+
+#ifdef HAVE_SKBUFF_CSUM_LEVEL
+static void hinic3_rx_gro(struct hinic3_rxq *rxq, u32 offload_type,
+ struct sk_buff *skb)
+{
+ struct net_device *netdev = rxq->netdev;
+ bool l2_tunnel = false;
+
+ if (!(netdev->features & NETIF_F_GRO))
+ return;
+
+ l2_tunnel =
+ HINIC3_GET_RX_TUNNEL_PKT_FORMAT(offload_type) ==
+ HINIC3_RX_PKT_FORMAT_VXLAN ? 1 : 0;
+ if (l2_tunnel && skb->ip_summed == CHECKSUM_UNNECESSARY)
+ /* If we checked the outer header let the stack know */
+ skb->csum_level = 1;
+}
+#endif /* HAVE_SKBUFF_CSUM_LEVEL */
+
+static void hinic3_copy_lp_data(struct hinic3_nic_dev *nic_dev,
+ struct sk_buff *skb)
+{
+ struct net_device *netdev = nic_dev->netdev;
+ u8 *lb_buf = nic_dev->lb_test_rx_buf;
+ void *frag_data = NULL;
+ int lb_len = nic_dev->lb_pkt_len;
+ int pkt_offset, frag_len, i;
+
+ if (nic_dev->lb_test_rx_idx == LP_PKT_CNT) {
+ nic_dev->lb_test_rx_idx = 0;
+ nicif_warn(nic_dev, rx_err, netdev, "Loopback test warning, receive too many test pkts\n");
+ }
+
+ if (skb->len != nic_dev->lb_pkt_len) {
+ nicif_warn(nic_dev, rx_err, netdev, "Wrong packet length\n");
+ nic_dev->lb_test_rx_idx++;
+ return;
+ }
+
+ pkt_offset = nic_dev->lb_test_rx_idx * lb_len;
+ frag_len = (int)skb_headlen(skb);
+ memcpy(lb_buf + pkt_offset, skb->data, (size_t)(u32)frag_len);
+
+ pkt_offset += frag_len;
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ frag_data = skb_frag_address(&skb_shinfo(skb)->frags[i]);
+ frag_len = (int)skb_frag_size(&skb_shinfo(skb)->frags[i]);
+ memcpy(lb_buf + pkt_offset, frag_data, (size_t)(u32)frag_len);
+
+ pkt_offset += frag_len;
+ }
+ nic_dev->lb_test_rx_idx++;
+}
+
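+/* For an LRO-coalesced packet, rebuild the GSO metadata so the stack
+ * can resegment it: gso_size is the payload split evenly across the
+ * aggregated segments.
+ */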
+static inline void hinic3_lro_set_gso_params(struct sk_buff *skb, u16 num_lro)
+{
+ struct ethhdr *eth = (struct ethhdr *)(skb->data);
+ __be16 proto;
+
+ proto = __vlan_get_protocol(skb, eth->h_proto, NULL);
+
+ skb_shinfo(skb)->gso_size = (u16)DIV_ROUND_UP((skb->len - skb_headlen(skb)), num_lro);
+ skb_shinfo(skb)->gso_type = (proto == htons(ETH_P_IP)) ? SKB_GSO_TCPV4 : SKB_GSO_TCPV6;
+ skb_shinfo(skb)->gso_segs = num_lro;
+}
+
+#ifdef HAVE_XDP_SUPPORT
+enum hinic3_xdp_pkt {
+ HINIC3_XDP_PKT_PASS,
+ HINIC3_XDP_PKT_DROP,
+};
+
+static void update_drop_rx_info(struct hinic3_rxq *rxq, u16 weqbb_num)
+{
+ struct hinic3_rx_info *rx_info = NULL;
+
+ while (weqbb_num) {
+ rx_info = &rxq->rx_info[rxq->cons_idx & rxq->q_mask];
+ if (likely(page_to_nid(rx_info->page) == numa_node_id()))
+ hinic3_reuse_rx_page(rxq, rx_info);
+
+ rx_info->buf_dma_addr = 0;
+ rx_info->page = NULL;
+ rxq->cons_idx++;
+ rxq->delta++;
+
+ weqbb_num--;
+ }
+}
+
+int hinic3_run_xdp(struct hinic3_rxq *rxq, u32 pkt_len)
+{
+ struct bpf_prog *xdp_prog = NULL;
+ struct hinic3_rx_info *rx_info = NULL;
+ struct xdp_buff xdp;
+ int result = HINIC3_XDP_PKT_PASS;
+ u16 weqbb_num = 1; /* xdp can only use one rx_buff */
+ u8 *va = NULL;
+ u32 act;
+
+ rcu_read_lock();
+ xdp_prog = READ_ONCE(rxq->xdp_prog);
+ if (!xdp_prog)
+ goto unlock_rcu;
+
+ if (unlikely(pkt_len > rxq->buf_len)) {
+ RXQ_STATS_INC(rxq, xdp_large_pkt);
+ weqbb_num = (u16)(pkt_len >> rxq->rx_buff_shift) +
+ ((pkt_len & (rxq->buf_len - 1)) ? 1 : 0);
+ result = HINIC3_XDP_PKT_DROP;
+ goto xdp_out;
+ }
+
+ rx_info = &rxq->rx_info[rxq->cons_idx & rxq->q_mask];
+ va = (u8 *)page_address(rx_info->page) + rx_info->page_offset;
+ prefetch(va);
+ dma_sync_single_range_for_cpu(rxq->dev, rx_info->buf_dma_addr,
+ rx_info->page_offset,
+ rxq->buf_len, DMA_FROM_DEVICE);
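+	/*
+	 * Build a single-buffer xdp_buff over the RX page fragment; no
+	 * headroom is reserved here, so data_hard_start equals data.
+	 */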
+ xdp.data = va;
+ xdp.data_hard_start = xdp.data;
+ xdp.data_end = xdp.data + pkt_len;
+#ifdef HAVE_XDP_FRAME_SZ
+ xdp.frame_sz = rxq->buf_len;
+#endif
+#ifdef HAVE_XDP_DATA_META
+ xdp_set_data_meta_invalid(&xdp);
+#endif
+ prefetchw(xdp.data_hard_start);
+ act = bpf_prog_run_xdp(xdp_prog, &xdp);
+ switch (act) {
+ case XDP_PASS:
+ break;
+ case XDP_DROP:
+ result = HINIC3_XDP_PKT_DROP;
+ break;
+ default:
+ result = HINIC3_XDP_PKT_DROP;
+ bpf_warn_invalid_xdp_action(act);
+ }
+
+xdp_out:
+ if (result == HINIC3_XDP_PKT_DROP) {
+ RXQ_STATS_INC(rxq, xdp_dropped);
+ update_drop_rx_info(rxq, weqbb_num);
+ }
+
+unlock_rcu:
+ rcu_read_unlock();
+
+ return result;
+}
+#endif
+
+static int recv_one_pkt(struct hinic3_rxq *rxq, struct hinic3_rq_cqe *rx_cqe,
+ u32 pkt_len, u32 vlan_len, u32 status)
+{
+ struct sk_buff *skb;
+ struct net_device *netdev = rxq->netdev;
+ u32 offload_type;
+ u16 num_lro;
+ struct hinic3_nic_dev *nic_dev = netdev_priv(rxq->netdev);
+
+#ifdef HAVE_XDP_SUPPORT
+ u32 xdp_status;
+
+ xdp_status = hinic3_run_xdp(rxq, pkt_len);
+ if (xdp_status == HINIC3_XDP_PKT_DROP)
+ return 0;
+#endif
+
+ skb = hinic3_fetch_rx_buffer(rxq, pkt_len);
+ if (unlikely(!skb)) {
+ RXQ_STATS_INC(rxq, alloc_skb_err);
+ return -ENOMEM;
+ }
+
+ /* place header in linear portion of buffer */
+ if (skb_is_nonlinear(skb))
+ hinic3_pull_tail(skb);
+
+ offload_type = hinic3_hw_cpu32(rx_cqe->offload_type);
+ hinic3_rx_csum(rxq, offload_type, status, skb);
+
+#ifdef HAVE_SKBUFF_CSUM_LEVEL
+ hinic3_rx_gro(rxq, offload_type, skb);
+#endif
+
+#if defined(NETIF_F_HW_VLAN_CTAG_RX)
+ if ((netdev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
+ HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type)) {
+#else
+ if ((netdev->features & NETIF_F_HW_VLAN_RX) &&
+ HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type)) {
+#endif
+ u16 vid = HINIC3_GET_RX_VLAN_TAG(vlan_len);
+
+ /* if the packet is a vlan pkt, the vid may be 0 */
+ __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vid);
+ }
+
+ if (unlikely(test_bit(HINIC3_LP_TEST, &nic_dev->flags)))
+ hinic3_copy_lp_data(nic_dev, skb);
+
+ num_lro = HINIC3_GET_RX_NUM_LRO(status);
+ if (num_lro)
+ hinic3_lro_set_gso_params(skb, num_lro);
+
+ skb_record_rx_queue(skb, rxq->q_id);
+ skb->protocol = eth_type_trans(skb, netdev);
+
+ if (skb_has_frag_list(skb)) {
+#ifdef HAVE_NAPI_GRO_FLUSH_OLD
+ napi_gro_flush(&rxq->irq_cfg->napi, false);
+#else
+ napi_gro_flush(&rxq->irq_cfg->napi);
+#endif
+ netif_receive_skb(skb);
+ } else {
+ napi_gro_receive(&rxq->irq_cfg->napi, skb);
+ }
+
+ return 0;
+}
+
+#define LRO_PKT_HDR_LEN_IPV4 66
+#define LRO_PKT_HDR_LEN_IPV6 86
+#define LRO_PKT_HDR_LEN(cqe) \
+ (HINIC3_GET_RX_IP_TYPE(hinic3_hw_cpu32((cqe)->offload_type)) == \
+ HINIC3_RX_IPV6_PKT ? LRO_PKT_HDR_LEN_IPV6 : LRO_PKT_HDR_LEN_IPV4)
+
+int hinic3_rx_poll(struct hinic3_rxq *rxq, int budget)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(rxq->netdev);
+ u32 sw_ci, status, pkt_len, vlan_len, dropped = 0;
+ struct hinic3_rq_cqe *rx_cqe = NULL;
+ u64 rx_bytes = 0;
+ u16 num_lro;
+ int pkts = 0, nr_pkts = 0;
+ u16 num_wqe = 0;
+
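+	/*
+	 * Poll completed CQEs up to the NAPI budget. For LRO aggregates,
+	 * the byte count is adjusted to include the headers of the merged
+	 * segments, and the consumed wqe count is tracked so the queue can
+	 * be replenished before it runs dry.
+	 */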
+ while (likely(pkts < budget)) {
+ sw_ci = rxq->cons_idx & rxq->q_mask;
+ rx_cqe = rxq->rx_info[sw_ci].cqe;
+ status = hinic3_hw_cpu32(rx_cqe->status);
+ if (!HINIC3_GET_RX_DONE(status))
+ break;
+
+ /* make sure we read rx_done before packet length */
+ rmb();
+
+ vlan_len = hinic3_hw_cpu32(rx_cqe->vlan_len);
+ pkt_len = HINIC3_GET_RX_PKT_LEN(vlan_len);
+ if (recv_one_pkt(rxq, rx_cqe, pkt_len, vlan_len, status))
+ break;
+
+ rx_bytes += pkt_len;
+ pkts++;
+ nr_pkts++;
+
+ num_lro = HINIC3_GET_RX_NUM_LRO(status);
+ if (num_lro) {
+ rx_bytes += ((num_lro - 1) * LRO_PKT_HDR_LEN(rx_cqe));
+
+ num_wqe += HINIC3_GET_SGE_NUM(pkt_len, rxq);
+ }
+
+ rx_cqe->status = 0;
+
+ if (num_wqe >= nic_dev->lro_replenish_thld)
+ break;
+ }
+
+ if (rxq->delta >= HINIC3_RX_BUFFER_WRITE)
+ hinic3_rx_fill_buffers(rxq);
+
+ u64_stats_update_begin(&rxq->rxq_stats.syncp);
+ rxq->rxq_stats.packets += (u64)(u32)nr_pkts;
+ rxq->rxq_stats.bytes += rx_bytes;
+ rxq->rxq_stats.dropped += (u64)dropped;
+ u64_stats_update_end(&rxq->rxq_stats.syncp);
+ return pkts;
+}
+
+int hinic3_alloc_rxqs_res(struct hinic3_nic_dev *nic_dev, u16 num_rq,
+ u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res)
+{
+ struct hinic3_dyna_rxq_res *rqres = NULL;
+ u64 cqe_mem_size = sizeof(struct hinic3_rq_cqe) * rq_depth;
+ int idx, i;
+ u32 pkts;
+ u64 size;
+
+ for (idx = 0; idx < num_rq; idx++) {
+ rqres = &rxqs_res[idx];
+ size = sizeof(*rqres->rx_info) * rq_depth;
+ rqres->rx_info = kzalloc(size, GFP_KERNEL);
+ if (!rqres->rx_info) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc rxq%d rx info\n", idx);
+ goto err_out;
+ }
+
+ rqres->cqe_start_vaddr =
+ dma_zalloc_coherent(&nic_dev->pdev->dev, cqe_mem_size,
+ &rqres->cqe_start_paddr,
+ GFP_KERNEL);
+ if (!rqres->cqe_start_vaddr) {
+ kfree(rqres->rx_info);
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc rxq%d cqe\n", idx);
+ goto err_out;
+ }
+
+ pkts = hinic3_rx_alloc_buffers(nic_dev, rq_depth,
+ rqres->rx_info);
+ if (!pkts) {
+ dma_free_coherent(&nic_dev->pdev->dev, cqe_mem_size,
+ rqres->cqe_start_vaddr,
+ rqres->cqe_start_paddr);
+ kfree(rqres->rx_info);
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc rxq%d rx buffers\n", idx);
+ goto err_out;
+ }
+ rqres->next_to_alloc = (u16)pkts;
+ }
+ return 0;
+
+err_out:
+ for (i = 0; i < idx; i++) {
+ rqres = &rxqs_res[i];
+
+ hinic3_rx_free_buffers(nic_dev, rq_depth, rqres->rx_info);
+ dma_free_coherent(&nic_dev->pdev->dev, cqe_mem_size,
+ rqres->cqe_start_vaddr,
+ rqres->cqe_start_paddr);
+ kfree(rqres->rx_info);
+ }
+
+ return -ENOMEM;
+}
+
+void hinic3_free_rxqs_res(struct hinic3_nic_dev *nic_dev, u16 num_rq,
+ u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res)
+{
+ struct hinic3_dyna_rxq_res *rqres = NULL;
+ u64 cqe_mem_size = sizeof(struct hinic3_rq_cqe) * rq_depth;
+ int idx;
+
+ for (idx = 0; idx < num_rq; idx++) {
+ rqres = &rxqs_res[idx];
+
+ hinic3_rx_free_buffers(nic_dev, rq_depth, rqres->rx_info);
+ dma_free_coherent(&nic_dev->pdev->dev, cqe_mem_size,
+ rqres->cqe_start_vaddr,
+ rqres->cqe_start_paddr);
+ kfree(rqres->rx_info);
+ }
+}
+
+int hinic3_configure_rxqs(struct hinic3_nic_dev *nic_dev, u16 num_rq,
+ u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res)
+{
+ struct hinic3_dyna_rxq_res *rqres = NULL;
+ struct irq_info *msix_entry = NULL;
+ struct hinic3_rxq *rxq = NULL;
+ struct hinic3_rq_cqe *cqe_va = NULL;
+ dma_addr_t cqe_pa;
+ u16 q_id;
+ u32 idx;
+ u32 pkts;
+
+ nic_dev->rxq_get_err_times = 0;
+ for (q_id = 0; q_id < num_rq; q_id++) {
+ rxq = &nic_dev->rxqs[q_id];
+ rqres = &rxqs_res[q_id];
+ msix_entry = &nic_dev->qps_irq_info[q_id];
+
+ rxq->irq_id = msix_entry->irq_id;
+ rxq->msix_entry_idx = msix_entry->msix_entry_idx;
+ rxq->next_to_update = 0;
+ rxq->next_to_alloc = rqres->next_to_alloc;
+ rxq->q_depth = rq_depth;
+ rxq->delta = rxq->q_depth;
+ rxq->q_mask = rxq->q_depth - 1;
+ rxq->cons_idx = 0;
+
+ rxq->last_sw_pi = rxq->q_depth - 1;
+ rxq->last_sw_ci = 0;
+ rxq->last_hw_ci = 0;
+ rxq->rx_check_err_cnt = 0;
+ rxq->rxq_print_times = 0;
+ rxq->last_packets = 0;
+ rxq->restore_buf_num = 0;
+
+ rxq->rx_info = rqres->rx_info;
+
+ /* fill cqe */
+ cqe_va = (struct hinic3_rq_cqe *)rqres->cqe_start_vaddr;
+ cqe_pa = rqres->cqe_start_paddr;
+ for (idx = 0; idx < rq_depth; idx++) {
+ rxq->rx_info[idx].cqe = cqe_va;
+ rxq->rx_info[idx].cqe_dma = cqe_pa;
+ cqe_va++;
+ cqe_pa += sizeof(*rxq->rx_info->cqe);
+ }
+
+ rxq->rq = hinic3_get_nic_queue(nic_dev->hwdev, rxq->q_id,
+ HINIC3_RQ);
+ if (!rxq->rq) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to get rq\n");
+ return -EINVAL;
+ }
+
+ pkts = hinic3_rx_fill_wqe(rxq);
+ if (pkts != rxq->q_depth) {
+ nicif_err(nic_dev, drv, nic_dev->netdev, "Failed to fill rx wqe\n");
+ return -EFAULT;
+ }
+
+ pkts = hinic3_rx_fill_buffers(rxq);
+ if (!pkts) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to fill Rx buffer\n");
+ return -ENOMEM;
+ }
+ }
+
+ return 0;
+}
+
+void hinic3_free_rxqs(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ kfree(nic_dev->rxqs);
+}
+
+int hinic3_alloc_rxqs(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct pci_dev *pdev = nic_dev->pdev;
+ struct hinic3_rxq *rxq = NULL;
+ u16 num_rxqs = nic_dev->max_qps;
+ u16 q_id;
+ u64 rxq_size;
+
+ rxq_size = num_rxqs * sizeof(*nic_dev->rxqs);
+ if (!rxq_size) {
+ nic_err(&pdev->dev, "Cannot allocate zero size rxqs\n");
+ return -EINVAL;
+ }
+
+ nic_dev->rxqs = kzalloc(rxq_size, GFP_KERNEL);
+ if (!nic_dev->rxqs) {
+ nic_err(&pdev->dev, "Failed to allocate rxqs\n");
+ return -ENOMEM;
+ }
+
+ for (q_id = 0; q_id < num_rxqs; q_id++) {
+ rxq = &nic_dev->rxqs[q_id];
+ rxq->netdev = netdev;
+ rxq->dev = &pdev->dev;
+ rxq->q_id = q_id;
+ rxq->buf_len = nic_dev->rx_buff_len;
+ rxq->rx_buff_shift = (u32)ilog2(nic_dev->rx_buff_len);
+ rxq->dma_rx_buff_size = nic_dev->dma_rx_buff_size;
+ rxq->q_depth = nic_dev->q_params.rq_depth;
+ rxq->q_mask = nic_dev->q_params.rq_depth - 1;
+
+ rxq_stats_init(rxq);
+ }
+
+ return 0;
+}
+
+int hinic3_rx_configure(struct net_device *netdev, u8 dcb_en)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u8 rq2iq_map[HINIC3_MAX_NUM_RQ];
+ int err;
+
+	/* Map all RQs to all IQs by default */
+	memset(rq2iq_map, 0xFF, sizeof(rq2iq_map));
+
+ if (test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags)) {
+ err = hinic3_rss_init(nic_dev, rq2iq_map, sizeof(rq2iq_map), dcb_en);
+ if (err) {
+ nicif_err(nic_dev, drv, netdev, "Failed to init rss\n");
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+void hinic3_rx_remove_configure(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ if (test_bit(HINIC3_RSS_ENABLE, &nic_dev->flags))
+ hinic3_rss_deinit(nic_dev);
+}
+
+int rxq_restore(struct hinic3_nic_dev *nic_dev, u16 q_id, u16 hw_ci)
+{
+ struct hinic3_rxq *rxq = &nic_dev->rxqs[q_id];
+ struct hinic3_rq_wqe *rq_wqe = NULL;
+ struct hinic3_rx_info *rx_info = NULL;
+ dma_addr_t dma_addr;
+ u32 free_wqebbs = rxq->delta - rxq->restore_buf_num;
+ u32 buff_pi;
+ u32 i;
+ int err;
+
+ if (rxq->delta < rxq->restore_buf_num)
+ return -EINVAL;
+
+ if (rxq->restore_buf_num == 0) /* start restore process */
+ rxq->restore_pi = rxq->next_to_update;
+
+ buff_pi = rxq->restore_pi;
+
+ if ((((rxq->cons_idx & rxq->q_mask) + rxq->q_depth -
+ rxq->next_to_update) % rxq->q_depth) != rxq->delta)
+ return -EINVAL;
+
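+	/*
+	 * Refill every free wqebb with a freshly mapped page and rewrite the
+	 * buffer address in the matching RQ wqe; restore_buf_num tracks
+	 * progress so a failed page allocation can be resumed on the next
+	 * restore attempt.
+	 */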
+ for (i = 0; i < free_wqebbs; i++) {
+ rx_info = &rxq->rx_info[buff_pi];
+
+ if (unlikely(!rx_alloc_mapped_page(nic_dev, rx_info))) {
+ RXQ_STATS_INC(rxq, alloc_rx_buf_err);
+ rxq->restore_pi = (u16)((rxq->restore_pi + i) & rxq->q_mask);
+ return -ENOMEM;
+ }
+
+ dma_addr = rx_info->buf_dma_addr + rx_info->page_offset;
+
+ rq_wqe = rx_info->rq_wqe;
+
+ if (rxq->rq->wqe_type == HINIC3_EXTEND_RQ_WQE) {
+ rq_wqe->extend_wqe.buf_desc.sge.hi_addr =
+ hinic3_hw_be32(upper_32_bits(dma_addr));
+ rq_wqe->extend_wqe.buf_desc.sge.lo_addr =
+ hinic3_hw_be32(lower_32_bits(dma_addr));
+ } else {
+ rq_wqe->normal_wqe.buf_hi_addr =
+ hinic3_hw_be32(upper_32_bits(dma_addr));
+ rq_wqe->normal_wqe.buf_lo_addr =
+ hinic3_hw_be32(lower_32_bits(dma_addr));
+ }
+ buff_pi = (u16)((buff_pi + 1) & rxq->q_mask);
+ rxq->restore_buf_num++;
+ }
+
+ nic_info(&nic_dev->pdev->dev, "rxq %u restore_buf_num:%u\n", q_id, rxq->restore_buf_num);
+
+ rx_info = &rxq->rx_info[(hw_ci + rxq->q_depth - 1) & rxq->q_mask];
+ if (rx_info->buf_dma_addr) {
+ dma_unmap_page(&nic_dev->pdev->dev, rx_info->buf_dma_addr,
+ nic_dev->dma_rx_buff_size, DMA_FROM_DEVICE);
+ rx_info->buf_dma_addr = 0;
+ }
+
+ if (rx_info->page) {
+ __free_pages(rx_info->page, nic_dev->page_order);
+ rx_info->page = NULL;
+ }
+
+ rxq->delta = 1;
+ rxq->next_to_update = (u16)((hw_ci + rxq->q_depth - 1) & rxq->q_mask);
+ rxq->cons_idx = (u16)((rxq->next_to_update + 1) & rxq->q_mask);
+ rxq->restore_buf_num = 0;
+ rxq->next_to_alloc = rxq->next_to_update;
+
+ for (i = 0; i < rxq->q_depth; i++) {
+ if (!HINIC3_GET_RX_DONE(hinic3_hw_cpu32(rxq->rx_info[i].cqe->status)))
+ continue;
+
+ RXQ_STATS_INC(rxq, restore_drop_sge);
+ rxq->rx_info[i].cqe->status = 0;
+ }
+
+ err = hinic3_cache_out_qps_res(nic_dev->hwdev);
+ if (err) {
+ clear_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags);
+ return err;
+ }
+
+ if (!rq_pi_rd_en) {
+ hinic3_write_db(rxq->rq, rxq->q_id & (NIC_DCB_COS_MAX - 1),
+ RQ_CFLAG_DP, (u16)((u32)rxq->next_to_update << rxq->rq->wqe_type));
+ } else {
+ /* Write all the wqes before pi update */
+ wmb();
+
+ hinic3_update_rq_hw_pi(rxq->rq, rxq->next_to_update);
+ }
+
+ return 0;
+}
+
+bool rxq_is_normal(struct hinic3_rxq *rxq, struct rxq_check_info rxq_info)
+{
+ u32 status;
+
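+	/*
+	 * Any sign of forward progress (packets counted, HW/SW indexes
+	 * moved, or a completed CQE still pending) means the queue is
+	 * healthy.
+	 */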
+ if (rxq->rxq_stats.packets != rxq->last_packets || rxq_info.hw_pi != rxq_info.hw_ci ||
+ rxq_info.hw_ci != rxq->last_hw_ci || rxq->next_to_update != rxq->last_sw_pi)
+ return true;
+
+	/* HW has no posted RX wqe and the driver has not received any packet */
+ status = rxq->rx_info[rxq->cons_idx & rxq->q_mask].cqe->status;
+ if (HINIC3_GET_RX_DONE(hinic3_hw_cpu32(status)))
+ return true;
+
+ if ((rxq->cons_idx & rxq->q_mask) != rxq->last_sw_ci ||
+ rxq->rxq_stats.packets != rxq->last_packets ||
+ rxq->next_to_update != rxq_info.hw_pi)
+ return true;
+
+ return false;
+}
+
+#define RXQ_CHECK_ERR_TIMES 2
+#define RXQ_PRINT_MAX_TIMES 3
+#define RXQ_GET_ERR_MAX_TIMES 3
+void hinic3_rxq_check_work_handler(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct hinic3_nic_dev *nic_dev = container_of(delay, struct hinic3_nic_dev,
+ rxq_check_work);
+ struct rxq_check_info *rxq_info = NULL;
+ struct hinic3_rxq *rxq = NULL;
+ u64 size;
+ u16 qid;
+ int err;
+
+ if (!test_bit(HINIC3_INTF_UP, &nic_dev->flags))
+ return;
+
+ if (test_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags))
+ queue_delayed_work(nic_dev->workq, &nic_dev->rxq_check_work, HZ);
+
+ size = sizeof(*rxq_info) * nic_dev->q_params.num_qps;
+ if (!size)
+ return;
+
+ rxq_info = kzalloc(size, GFP_KERNEL);
+ if (!rxq_info)
+ return;
+
+ err = hinic3_get_rxq_hw_info(nic_dev->hwdev, rxq_info, nic_dev->q_params.num_qps,
+ nic_dev->rxqs[0].rq->wqe_type);
+ if (err) {
+ nic_dev->rxq_get_err_times++;
+ if (nic_dev->rxq_get_err_times >= RXQ_GET_ERR_MAX_TIMES)
+ clear_bit(HINIC3_RXQ_RECOVERY, &nic_dev->flags);
+ goto free_rxq_info;
+ }
+
+ for (qid = 0; qid < nic_dev->q_params.num_qps; qid++) {
+ rxq = &nic_dev->rxqs[qid];
+ if (!rxq_is_normal(rxq, rxq_info[qid])) {
+ rxq->rx_check_err_cnt++;
+ if (rxq->rx_check_err_cnt < RXQ_CHECK_ERR_TIMES)
+ continue;
+
+ if (rxq->rxq_print_times <= RXQ_PRINT_MAX_TIMES) {
+ nic_warn(&nic_dev->pdev->dev, "rxq %u wqe abnormal, hw_pi:%u, hw_ci:%u, sw_pi:%u, sw_ci:%u delta:%u\n",
+ qid, rxq_info[qid].hw_pi, rxq_info[qid].hw_ci,
+ rxq->next_to_update,
+ rxq->cons_idx & rxq->q_mask, rxq->delta);
+ rxq->rxq_print_times++;
+ }
+
+ err = rxq_restore(nic_dev, qid, rxq_info[qid].hw_ci);
+ if (err)
+ continue;
+ }
+
+ rxq->rxq_print_times = 0;
+ rxq->rx_check_err_cnt = 0;
+ rxq->last_sw_pi = rxq->next_to_update;
+ rxq->last_sw_ci = rxq->cons_idx & rxq->q_mask;
+ rxq->last_hw_ci = rxq_info[qid].hw_ci;
+ rxq->last_packets = rxq->rxq_stats.packets;
+ }
+
+ nic_dev->rxq_get_err_times = 0;
+
+free_rxq_info:
+ kfree(rxq_info);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_rx.h b/drivers/net/ethernet/huawei/hinic3/hinic3_rx.h
new file mode 100644
index 000000000000..f4d6f4fdb13e
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_rx.h
@@ -0,0 +1,155 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_RX_H
+#define HINIC3_RX_H
+
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/mm_types.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/u64_stats_sync.h>
+
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_qp.h"
+#include "hinic3_nic_dev.h"
+
+/* rx cqe checksum err */
+#define HINIC3_RX_CSUM_IP_CSUM_ERR BIT(0)
+#define HINIC3_RX_CSUM_TCP_CSUM_ERR BIT(1)
+#define HINIC3_RX_CSUM_UDP_CSUM_ERR BIT(2)
+#define HINIC3_RX_CSUM_IGMP_CSUM_ERR BIT(3)
+#define HINIC3_RX_CSUM_ICMPV4_CSUM_ERR BIT(4)
+#define HINIC3_RX_CSUM_ICMPV6_CSUM_ERR BIT(5)
+#define HINIC3_RX_CSUM_SCTP_CRC_ERR BIT(6)
+#define HINIC3_RX_CSUM_HW_CHECK_NONE BIT(7)
+#define HINIC3_RX_CSUM_IPSU_OTHER_ERR BIT(8)
+
+#define HINIC3_HEADER_DATA_UNIT 2
+
+struct hinic3_rxq_stats {
+ u64 packets;
+ u64 bytes;
+ u64 errors;
+ u64 csum_errors;
+ u64 other_errors;
+ u64 dropped;
+ u64 xdp_dropped;
+ u64 rx_buf_empty;
+
+ u64 alloc_skb_err;
+ u64 alloc_rx_buf_err;
+ u64 xdp_large_pkt;
+ u64 restore_drop_sge;
+ u64 rsvd2;
+#ifdef HAVE_NDO_GET_STATS64
+ struct u64_stats_sync syncp;
+#else
+ struct u64_stats_sync_empty syncp;
+#endif
+};
+
+struct hinic3_rx_info {
+ dma_addr_t buf_dma_addr;
+
+ struct hinic3_rq_cqe *cqe;
+ dma_addr_t cqe_dma;
+ struct page *page;
+ u32 page_offset;
+ u32 rsvd1;
+ struct hinic3_rq_wqe *rq_wqe;
+ struct sk_buff *saved_skb;
+ u32 skb_len;
+ u32 rsvd2;
+};
+
+struct hinic3_rxq {
+ struct net_device *netdev;
+
+ u16 q_id;
+ u16 rsvd1;
+ u32 q_depth;
+ u32 q_mask;
+
+ u16 buf_len;
+ u16 rsvd2;
+ u32 rx_buff_shift;
+ u32 dma_rx_buff_size;
+
+ struct hinic3_rxq_stats rxq_stats;
+ u32 cons_idx;
+ u32 delta;
+
+ u32 irq_id;
+ u16 msix_entry_idx;
+ u16 rsvd3;
+
+ struct hinic3_rx_info *rx_info;
+ struct hinic3_io_queue *rq;
+#ifdef HAVE_XDP_SUPPORT
+ struct bpf_prog *xdp_prog;
+#endif
+
+ struct hinic3_irq *irq_cfg;
+ u16 next_to_alloc;
+ u16 next_to_update;
+ struct device *dev; /* device for DMA mapping */
+
+ unsigned long status;
+ dma_addr_t cqe_start_paddr;
+ void *cqe_start_vaddr;
+
+ u64 last_moder_packets;
+ u64 last_moder_bytes;
+ u8 last_coalesc_timer_cfg;
+ u8 last_pending_limt;
+ u16 restore_buf_num;
+ u32 rsvd5;
+ u64 rsvd6;
+
+ u32 last_sw_pi;
+ u32 last_sw_ci;
+
+ u32 last_hw_ci;
+ u8 rx_check_err_cnt;
+ u8 rxq_print_times;
+ u16 restore_pi;
+
+ u64 last_packets;
+} ____cacheline_aligned;
+
+struct hinic3_dyna_rxq_res {
+ u16 next_to_alloc;
+ struct hinic3_rx_info *rx_info;
+ dma_addr_t cqe_start_paddr;
+ void *cqe_start_vaddr;
+};
+
+int hinic3_alloc_rxqs(struct net_device *netdev);
+
+void hinic3_free_rxqs(struct net_device *netdev);
+
+int hinic3_alloc_rxqs_res(struct hinic3_nic_dev *nic_dev, u16 num_rq,
+ u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res);
+
+void hinic3_free_rxqs_res(struct hinic3_nic_dev *nic_dev, u16 num_rq,
+ u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res);
+
+int hinic3_configure_rxqs(struct hinic3_nic_dev *nic_dev, u16 num_rq,
+ u32 rq_depth, struct hinic3_dyna_rxq_res *rxqs_res);
+
+int hinic3_rx_configure(struct net_device *netdev, u8 dcb_en);
+
+void hinic3_rx_remove_configure(struct net_device *netdev);
+
+int hinic3_rx_poll(struct hinic3_rxq *rxq, int budget);
+
+void hinic3_rxq_get_stats(struct hinic3_rxq *rxq,
+ struct hinic3_rxq_stats *stats);
+
+void hinic3_rxq_clean_stats(struct hinic3_rxq_stats *rxq_stats);
+
+void hinic3_rxq_check_work_handler(struct work_struct *work);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_srv_nic.h b/drivers/net/ethernet/huawei/hinic3/hinic3_srv_nic.h
new file mode 100644
index 000000000000..fee4cfca1e4c
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_srv_nic.h
@@ -0,0 +1,213 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2018-2022. All rights reserved.
+ * @file hinic3_srv_nic.h
+ * @details nic service interface
+ * History :
+ * 1.Date : 2018/3/8
+ * Modification: Created file
+ */
+
+#ifndef HINIC3_SRV_NIC_H
+#define HINIC3_SRV_NIC_H
+
+#include "hinic3_mgmt_interface.h"
+#include "mag_cmd.h"
+#include "hinic3_lld.h"
+
+enum hinic3_queue_type {
+ HINIC3_SQ,
+ HINIC3_RQ,
+ HINIC3_MAX_QUEUE_TYPE
+};
+
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_netdev(struct net_device *netdev);
+struct net_device *hinic3_get_netdev_by_lld(struct hinic3_lld_dev *lld_dev);
+
+struct hinic3_event_link_info {
+ u8 valid;
+ u8 port_type;
+ u8 autoneg_cap;
+ u8 autoneg_state;
+ u8 duplex;
+ u8 speed;
+};
+
+enum link_err_type {
+ LINK_ERR_MODULE_UNRECOGENIZED,
+ LINK_ERR_NUM,
+};
+
+enum port_module_event_type {
+ HINIC3_PORT_MODULE_CABLE_PLUGGED,
+ HINIC3_PORT_MODULE_CABLE_UNPLUGGED,
+ HINIC3_PORT_MODULE_LINK_ERR,
+ HINIC3_PORT_MODULE_MAX_EVENT,
+};
+
+struct hinic3_port_module_event {
+ enum port_module_event_type type;
+ enum link_err_type err_type;
+};
+
+struct hinic3_dcb_info {
+ u8 dcb_on;
+ u8 default_cos;
+ u8 up_cos[NIC_DCB_COS_MAX];
+};
+
+enum hinic3_nic_event_type {
+ EVENT_NIC_LINK_DOWN,
+ EVENT_NIC_LINK_UP,
+ EVENT_NIC_PORT_MODULE_EVENT,
+ EVENT_NIC_DCB_STATE_CHANGE,
+};
+
+/* *
+ * @brief hinic3_set_mac - set mac address
+ * @param hwdev: device pointer to hwdev
+ * @param mac_addr: mac address to set
+ * @param vlan_id: vlan id
+ * @param func_id: function index
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id, u16 channel);
+
+/* *
+ * @brief hinic3_del_mac - delete mac address
+ * @param hwdev: device pointer to hwdev
+ * @param mac_addr: mac address to delete
+ * @param vlan_id: vlan id
+ * @param func_id: function index
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_del_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id, u16 channel);
+
+/* *
+ * @brief hinic3_set_vport_enable - set function valid status
+ * @param hwdev: device pointer to hwdev
+ * @param func_id: global function index
+ * @param enable: 0-disable, 1-enable
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_vport_enable(void *hwdev, u16 func_id, bool enable, u16 channel);
+
+/* *
+ * @brief hinic3_set_port_enable - set port status
+ * @param hwdev: device pointer to hwdev
+ * @param enable: 0-disable, 1-enable
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_set_port_enable(void *hwdev, bool enable, u16 channel);
+
+/* *
+ * @brief hinic3_flush_qps_res - flush queue pairs resource in hardware
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_flush_qps_res(void *hwdev);
+
+/* *
+ * @brief hinic3_cache_out_qps_res - cache out queue pairs wqe resource in hardware
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_cache_out_qps_res(void *hwdev);
+
+/* *
+ * @brief hinic3_init_nic_hwdev - init nic hwdev
+ * @param hwdev: device pointer to hwdev
+ * @param pcidev_hdl: pointer to pcidev or handler
+ * @param dev_hdl: pointer to pcidev->dev or handler, for sdk_err() or
+ * dma_alloc()
+ * @param rx_buff_len: receive buffer length
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_init_nic_hwdev(void *hwdev, void *pcidev_hdl, void *dev_hdl, u16 rx_buff_len);
+
+/* *
+ * @brief hinic3_free_nic_hwdev - free nic hwdev
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_free_nic_hwdev(void *hwdev);
+
+/* *
+ * @brief hinic3_get_speed - get link speed
+ * @param hwdev: device pointer to hwdev
+ * @param speed: pointer to the returned link speed
+ * @param channel: channel id
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_get_speed(void *hwdev, enum mag_cmd_port_speed *speed, u16 channel);
+
+int hinic3_get_dcb_state(void *hwdev, struct hinic3_dcb_state *dcb_state);
+
+int hinic3_get_pf_dcb_state(void *hwdev, struct hinic3_dcb_state *dcb_state);
+
+int hinic3_get_cos_by_pri(void *hwdev, u8 pri, u8 *cos);
+
+/* *
+ * @brief hinic3_create_qps - create queue pairs
+ * @param hwdev: device pointer to hwdev
+ * @param num_qp: number of queue pairs
+ * @param sq_depth: sq depth
+ * @param rq_depth: rq depth
+ * @param qps_msix_arry: msix info
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_create_qps(void *hwdev, u16 num_qp, u32 sq_depth, u32 rq_depth,
+ struct irq_info *qps_msix_arry);
+
+/* *
+ * @brief hinic3_destroy_qps - destroy queue pairs
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_destroy_qps(void *hwdev);
+
+/* *
+ * @brief hinic3_get_nic_queue - get nic queue
+ * @param hwdev: device pointer to hwdev
+ * @param q_id: queue index
+ * @param q_type: queue type
+ * @retval queue address
+ */
+void *hinic3_get_nic_queue(void *hwdev, u16 q_id, enum hinic3_queue_type q_type);
+
+/* *
+ * @brief hinic3_init_qp_ctxts - init queue pair context
+ * @param hwdev: device pointer to hwdev
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_init_qp_ctxts(void *hwdev);
+
+/* *
+ * @brief hinic3_free_qp_ctxts - free queue pairs
+ * @param hwdev: device pointer to hwdev
+ */
+void hinic3_free_qp_ctxts(void *hwdev);
+
+/* *
+ * @brief hinic3_pf_set_vf_link_state - pf sets vf link state
+ * @param hwdev: device pointer to hwdev
+ * @param vf_link_forced: whether to force the link state
+ * @param link_state: link state to set; valid only when vf_link_forced is true
+ * @retval zero: success
+ * @retval non-zero: failure
+ */
+int hinic3_pf_set_vf_link_state(void *hwdev, bool vf_link_forced, bool link_state);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_tx.c b/drivers/net/ethernet/huawei/hinic3/hinic3_tx.c
new file mode 100644
index 000000000000..3029cff7f00b
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_tx.c
@@ -0,0 +1,1016 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <net/xfrm.h>
+#include <linux/netdevice.h>
+#include <linux/kernel.h>
+#include <linux/skbuff.h>
+#include <linux/interrupt.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/tcp.h>
+#include <linux/sctp.h>
+#include <linux/dma-mapping.h>
+#include <linux/types.h>
+#include <linux/u64_stats_sync.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_nic_qp.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_srv_nic.h"
+#include "hinic3_nic_dev.h"
+#include "hinic3_tx.h"
+
+#define MIN_SKB_LEN 32
+
+#define MAX_PAYLOAD_OFFSET 221
+
+#define NIC_QID(q_id, nic_dev) ((q_id) & ((nic_dev)->num_qps - 1))
+
+#define HINIC3_TX_TASK_WRAPPED 1
+#define HINIC3_TX_BD_DESC_WRAPPED 2
+
+#define TXQ_STATS_INC(txq, field) \
+do { \
+ u64_stats_update_begin(&(txq)->txq_stats.syncp); \
+ (txq)->txq_stats.field++; \
+ u64_stats_update_end(&(txq)->txq_stats.syncp); \
+} while (0)
+
+void hinic3_txq_get_stats(struct hinic3_txq *txq,
+ struct hinic3_txq_stats *stats)
+{
+ struct hinic3_txq_stats *txq_stats = &txq->txq_stats;
+ unsigned int start;
+
+ u64_stats_update_begin(&stats->syncp);
+ do {
+ start = u64_stats_fetch_begin(&txq_stats->syncp);
+ stats->bytes = txq_stats->bytes;
+ stats->packets = txq_stats->packets;
+ stats->busy = txq_stats->busy;
+ stats->wake = txq_stats->wake;
+ stats->dropped = txq_stats->dropped;
+ } while (u64_stats_fetch_retry(&txq_stats->syncp, start));
+ u64_stats_update_end(&stats->syncp);
+}
+
+void hinic3_txq_clean_stats(struct hinic3_txq_stats *txq_stats)
+{
+ u64_stats_update_begin(&txq_stats->syncp);
+ txq_stats->bytes = 0;
+ txq_stats->packets = 0;
+ txq_stats->busy = 0;
+ txq_stats->wake = 0;
+ txq_stats->dropped = 0;
+
+ txq_stats->skb_pad_err = 0;
+ txq_stats->frag_len_overflow = 0;
+ txq_stats->offload_cow_skb_err = 0;
+ txq_stats->map_frag_err = 0;
+ txq_stats->unknown_tunnel_pkt = 0;
+ txq_stats->frag_size_err = 0;
+ txq_stats->rsvd1 = 0;
+ txq_stats->rsvd2 = 0;
+ u64_stats_update_end(&txq_stats->syncp);
+}
+
+static void txq_stats_init(struct hinic3_txq *txq)
+{
+ struct hinic3_txq_stats *txq_stats = &txq->txq_stats;
+
+ u64_stats_init(&txq_stats->syncp);
+ hinic3_txq_clean_stats(txq_stats);
+}
+
+static inline void hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs,
+ dma_addr_t addr, u32 len)
+{
+ buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr));
+ buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr));
+ buf_descs->len = hinic3_hw_be32(len);
+}
+
+static int tx_map_skb(struct hinic3_nic_dev *nic_dev, struct sk_buff *skb,
+ u16 valid_nr_frags, struct hinic3_txq *txq,
+ struct hinic3_tx_info *tx_info,
+ struct hinic3_sq_wqe_combo *wqe_combo)
+{
+ struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->ctrl_bd0;
+ struct hinic3_sq_bufdesc *buf_desc = wqe_combo->bds_head;
+ struct hinic3_dma_info *dma_info = tx_info->dma_info;
+ struct pci_dev *pdev = nic_dev->pdev;
+ skb_frag_t *frag = NULL;
+ u32 j, i;
+ int err;
+
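+	/*
+	 * dma_info[0] always holds the linear header mapping; fragment
+	 * mappings start at index 1, which is why the loops below increment
+	 * the index before touching dma_info[i].
+	 */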
+ dma_info[0].dma = dma_map_single(&pdev->dev, skb->data, skb_headlen(skb), DMA_TO_DEVICE);
+ if (dma_mapping_error(&pdev->dev, dma_info[0].dma)) {
+ TXQ_STATS_INC(txq, map_frag_err);
+ return -EFAULT;
+ }
+
+ dma_info[0].len = skb_headlen(skb);
+
+ wqe_desc->hi_addr = hinic3_hw_be32(upper_32_bits(dma_info[0].dma));
+ wqe_desc->lo_addr = hinic3_hw_be32(lower_32_bits(dma_info[0].dma));
+
+ wqe_desc->ctrl_len = dma_info[0].len;
+
+ for (i = 0; i < valid_nr_frags;) {
+ frag = &(skb_shinfo(skb)->frags[i]);
+ if (unlikely(i == wqe_combo->first_bds_num))
+ buf_desc = wqe_combo->bds_sec2;
+
+ i++;
+ dma_info[i].dma = skb_frag_dma_map(&pdev->dev, frag, 0,
+ skb_frag_size(frag),
+ DMA_TO_DEVICE);
+ if (dma_mapping_error(&pdev->dev, dma_info[i].dma)) {
+ TXQ_STATS_INC(txq, map_frag_err);
+ i--;
+ err = -EFAULT;
+ goto frag_map_err;
+ }
+ dma_info[i].len = skb_frag_size(frag);
+
+ hinic3_set_buf_desc(buf_desc, dma_info[i].dma,
+ dma_info[i].len);
+ buf_desc++;
+ }
+
+ return 0;
+
+frag_map_err:
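+	/* unwind the already-mapped frags (slots 1..i), then the header */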
+ for (j = 0; j < i;) {
+ j++;
+ dma_unmap_page(&pdev->dev, dma_info[j].dma,
+ dma_info[j].len, DMA_TO_DEVICE);
+ }
+ dma_unmap_single(&pdev->dev, dma_info[0].dma, dma_info[0].len,
+ DMA_TO_DEVICE);
+ return err;
+}
+
+static inline void tx_unmap_skb(struct hinic3_nic_dev *nic_dev,
+ struct sk_buff *skb, u16 valid_nr_frags,
+ struct hinic3_dma_info *dma_info)
+{
+ struct pci_dev *pdev = nic_dev->pdev;
+ int i;
+
+ for (i = 0; i < valid_nr_frags;) {
+ i++;
+ dma_unmap_page(&pdev->dev,
+ dma_info[i].dma,
+ dma_info[i].len, DMA_TO_DEVICE);
+ }
+
+ dma_unmap_single(&pdev->dev, dma_info[0].dma,
+ dma_info[0].len, DMA_TO_DEVICE);
+}
+
+union hinic3_l4 {
+ struct tcphdr *tcp;
+ struct udphdr *udp;
+ unsigned char *hdr;
+};
+
+enum sq_l3_type {
+ UNKNOWN_L3TYPE = 0,
+ IPV6_PKT = 1,
+ IPV4_PKT_NO_CHKSUM_OFFLOAD = 2,
+ IPV4_PKT_WITH_CHKSUM_OFFLOAD = 3,
+};
+
+enum sq_l4offload_type {
+ OFFLOAD_DISABLE = 0,
+ TCP_OFFLOAD_ENABLE = 1,
+ SCTP_OFFLOAD_ENABLE = 2,
+ UDP_OFFLOAD_ENABLE = 3,
+};
+
+/* initialize the l4 offload type and payload offset */
+static void get_inner_l4_info(struct sk_buff *skb, union hinic3_l4 *l4,
+ u8 l4_proto, u32 *offset,
+ enum sq_l4offload_type *l4_offload)
+{
+ switch (l4_proto) {
+ case IPPROTO_TCP:
+ *l4_offload = TCP_OFFLOAD_ENABLE;
+		/* To keep consistent with TSO, the payload offset begins at the payload */
+ *offset = (l4->tcp->doff << TCP_HDR_DATA_OFF_UNIT_SHIFT) +
+ TRANSPORT_OFFSET(l4->hdr, skb);
+ break;
+
+ case IPPROTO_UDP:
+ *l4_offload = UDP_OFFLOAD_ENABLE;
+ *offset = TRANSPORT_OFFSET(l4->hdr, skb);
+ break;
+ default:
+ break;
+ }
+}
+
+static int hinic3_tx_csum(struct hinic3_txq *txq, struct hinic3_sq_task *task,
+ struct sk_buff *skb)
+{
+ if (skb->ip_summed != CHECKSUM_PARTIAL)
+ return 0;
+
+#if (KERNEL_VERSION(3, 8, 0) <= LINUX_VERSION_CODE)
+ if (skb->encapsulation) {
+ union hinic3_ip ip;
+ u8 l4_proto;
+
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, TUNNEL_FLAG);
+
+ ip.hdr = skb_network_header(skb);
+ if (ip.v4->version == IPV4_VERSION) {
+ l4_proto = ip.v4->protocol;
+ } else if (ip.v4->version == IPV6_VERSION) {
+ union hinic3_l4 l4;
+ unsigned char *exthdr;
+ __be16 frag_off;
+
+#ifdef HAVE_OUTER_IPV6_TUNNEL_OFFLOAD
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L4_EN);
+#endif
+ exthdr = ip.hdr + sizeof(*ip.v6);
+ l4_proto = ip.v6->nexthdr;
+ l4.hdr = skb_transport_header(skb);
+ if (l4.hdr != exthdr)
+ ipv6_skip_exthdr(skb, exthdr - skb->data,
+ &l4_proto, &frag_off);
+ } else {
+ l4_proto = IPPROTO_RAW;
+ }
+
+ if (l4_proto != IPPROTO_UDP ||
+ ((struct udphdr *)skb_transport_header(skb))->dest != VXLAN_OFFLOAD_PORT_LE) {
+ TXQ_STATS_INC(txq, unknown_tunnel_pkt);
+			/* Unsupported tunnel packet, disable csum offload */
+ skb_checksum_help(skb);
+ return 0;
+ }
+ }
+
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN);
+#else
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN);
+#endif
+ return 1;
+}
+
+static void get_inner_l3_l4_type(struct sk_buff *skb, union hinic3_ip *ip,
+ union hinic3_l4 *l4,
+ enum sq_l3_type *l3_type, u8 *l4_proto)
+{
+ unsigned char *exthdr = NULL;
+
+ if (ip->v4->version == IP4_VERSION) {
+ *l3_type = IPV4_PKT_WITH_CHKSUM_OFFLOAD;
+ *l4_proto = ip->v4->protocol;
+
+#ifdef HAVE_OUTER_IPV6_TUNNEL_OFFLOAD
+ /* inner_transport_header is wrong in centos7.0 and suse12.1 */
+ l4->hdr = ip->hdr + ((u8)ip->v4->ihl << IP_HDR_IHL_UNIT_SHIFT);
+#endif
+ } else if (ip->v4->version == IP6_VERSION) {
+ *l3_type = IPV6_PKT;
+ exthdr = ip->hdr + sizeof(*ip->v6);
+ *l4_proto = ip->v6->nexthdr;
+ if (exthdr != l4->hdr) {
+ __be16 frag_off = 0;
+#ifndef HAVE_OUTER_IPV6_TUNNEL_OFFLOAD
+ ipv6_skip_exthdr(skb, (int)(exthdr - skb->data),
+ l4_proto, &frag_off);
+#else
+ int pld_off = 0;
+
+ pld_off = ipv6_skip_exthdr(skb,
+ (int)(exthdr - skb->data),
+ l4_proto, &frag_off);
+ l4->hdr = skb->data + pld_off;
+#endif
+ }
+ } else {
+ *l3_type = UNKNOWN_L3TYPE;
+ *l4_proto = 0;
+ }
+}
+
+static void hinic3_set_tso_info(struct hinic3_sq_task *task, u32 *queue_info,
+ enum sq_l4offload_type l4_offload,
+ u32 offset, u32 mss)
+{
+ if (l4_offload == TCP_OFFLOAD_ENABLE) {
+ *queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, TSO);
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN);
+ } else if (l4_offload == UDP_OFFLOAD_ENABLE) {
+ *queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, UFO);
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN);
+ }
+
+	/* Enable L3 checksum calculation by default */
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN);
+
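+	/* PLDOFF is carried in 2-byte units, hence offset >> 1 */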
+ *queue_info |= SQ_CTRL_QUEUE_INFO_SET(offset >> 1, PLDOFF);
+
+ /* set MSS value */
+ *queue_info = SQ_CTRL_QUEUE_INFO_CLEAR(*queue_info, MSS);
+ *queue_info |= SQ_CTRL_QUEUE_INFO_SET(mss, MSS);
+}
+
+static int hinic3_tso(struct hinic3_sq_task *task, u32 *queue_info,
+ struct sk_buff *skb)
+{
+ enum sq_l4offload_type l4_offload = OFFLOAD_DISABLE;
+ enum sq_l3_type l3_type;
+ union hinic3_ip ip;
+ union hinic3_l4 l4;
+ u32 offset = 0;
+ u8 l4_proto;
+ int err;
+
+ if (!skb_is_gso(skb))
+ return 0;
+
+ err = skb_cow_head(skb, 0);
+ if (err < 0)
+ return err;
+
+#if (KERNEL_VERSION(3, 8, 0) <= LINUX_VERSION_CODE)
+ if (skb->encapsulation) {
+		u32 gso_type = skb_shinfo(skb)->gso_type;
+
+		/* L3 checksum is always enabled */
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L3_EN);
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, TUNNEL_FLAG);
+
+ l4.hdr = skb_transport_header(skb);
+ ip.hdr = skb_network_header(skb);
+
+ if (gso_type & SKB_GSO_UDP_TUNNEL_CSUM) {
+ l4.udp->check = ~csum_magic(&ip, IPPROTO_UDP);
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L4_EN);
+ } else if (gso_type & SKB_GSO_UDP_TUNNEL) {
+#ifdef HAVE_OUTER_IPV6_TUNNEL_OFFLOAD
+			if (ip.v4->version == IPV6_VERSION) {
+ l4.udp->check = ~csum_magic(&ip, IPPROTO_UDP);
+ task->pkt_info0 |=
+ SQ_TASK_INFO0_SET(1U, OUT_L4_EN);
+ }
+#endif
+ }
+
+ ip.hdr = skb_inner_network_header(skb);
+ l4.hdr = skb_inner_transport_header(skb);
+ } else {
+ ip.hdr = skb_network_header(skb);
+ l4.hdr = skb_transport_header(skb);
+ }
+#else
+ ip.hdr = skb_network_header(skb);
+ l4.hdr = skb_transport_header(skb);
+#endif
+
+ get_inner_l3_l4_type(skb, &ip, &l4, &l3_type, &l4_proto);
+
+ if (l4_proto == IPPROTO_TCP)
+ l4.tcp->check = ~csum_magic(&ip, IPPROTO_TCP);
+#ifdef HAVE_IP6_FRAG_ID_ENABLE_UFO
+	else if (l4_proto == IPPROTO_UDP && ip.v4->version == IPV6_VERSION)
+ task->ip_identify =
+ be32_to_cpu(skb_shinfo(skb)->ip6_frag_id);
+#endif
+
+ get_inner_l4_info(skb, &l4, l4_proto, &offset, &l4_offload);
+
+#ifdef HAVE_OUTER_IPV6_TUNNEL_OFFLOAD
+ u32 network_hdr_len;
+
+ if (unlikely(l3_type == UNKNOWN_L3TYPE))
+ network_hdr_len = 0;
+ else
+ network_hdr_len = l4.hdr - ip.hdr;
+
+ if (unlikely(!offset)) {
+ if (l3_type == UNKNOWN_L3TYPE)
+ offset = ip.hdr - skb->data;
+ else if (l4_offload == OFFLOAD_DISABLE)
+ offset = ip.hdr - skb->data + network_hdr_len;
+ }
+#endif
+
+ hinic3_set_tso_info(task, queue_info, l4_offload, offset,
+ skb_shinfo(skb)->gso_size);
+
+ return 1;
+}
+
+static u32 hinic3_tx_offload(struct sk_buff *skb, struct hinic3_sq_task *task,
+ u32 *queue_info, struct hinic3_txq *txq)
+{
+ u32 offload = 0;
+ int tso_cs_en;
+
+ task->pkt_info0 = 0;
+ task->ip_identify = 0;
+ task->pkt_info2 = 0;
+ task->vlan_offload = 0;
+
+ tso_cs_en = hinic3_tso(task, queue_info, skb);
+ if (tso_cs_en < 0) {
+ offload = TX_OFFLOAD_INVALID;
+ return offload;
+ } else if (tso_cs_en) {
+ offload |= TX_OFFLOAD_TSO;
+ } else {
+ tso_cs_en = hinic3_tx_csum(txq, task, skb);
+ if (tso_cs_en)
+ offload |= TX_OFFLOAD_CSUM;
+ }
+
+#define VLAN_INSERT_MODE_MAX 5
+ if (unlikely(skb_vlan_tag_present(skb))) {
+		/* select the vlan insert mode by qid; default is the 802.1Q tag type */
+ hinic3_set_vlan_tx_offload(task, skb_vlan_tag_get(skb),
+ txq->q_id % VLAN_INSERT_MODE_MAX);
+ offload |= TX_OFFLOAD_VLAN;
+ }
+
+ if (unlikely(SQ_CTRL_QUEUE_INFO_GET(*queue_info, PLDOFF) >
+ MAX_PAYLOAD_OFFSET)) {
+ offload = TX_OFFLOAD_INVALID;
+ return offload;
+ }
+
+ return offload;
+}
+
+static void get_pkt_stats(struct hinic3_tx_info *tx_info, struct sk_buff *skb)
+{
+ u32 ihs, hdr_len;
+
+ if (skb_is_gso(skb)) {
+#if (KERNEL_VERSION(3, 8, 0) <= LINUX_VERSION_CODE)
+#if (defined(HAVE_SKB_INNER_TRANSPORT_HEADER) && \
+ defined(HAVE_SK_BUFF_ENCAPSULATION))
+ if (skb->encapsulation) {
+#ifdef HAVE_SKB_INNER_TRANSPORT_OFFSET
+ ihs = skb_inner_transport_offset(skb) +
+ inner_tcp_hdrlen(skb);
+#else
+ ihs = (skb_inner_transport_header(skb) - skb->data) +
+ inner_tcp_hdrlen(skb);
+#endif
+ } else {
+#endif
+#endif
+ ihs = skb_transport_offset(skb) + tcp_hdrlen(skb);
+#if (KERNEL_VERSION(3, 8, 0) <= LINUX_VERSION_CODE)
+#if (defined(HAVE_SKB_INNER_TRANSPORT_HEADER) && \
+ defined(HAVE_SK_BUFF_ENCAPSULATION))
+ }
+#endif
+#endif
+ hdr_len = (skb_shinfo(skb)->gso_segs - 1) * ihs;
+ tx_info->num_bytes = skb->len + (u64)hdr_len;
+ } else {
+ tx_info->num_bytes = skb->len > ETH_ZLEN ? skb->len : ETH_ZLEN;
+ }
+
+ tx_info->num_pkts = 1;
+}
+
+static inline int hinic3_maybe_stop_tx(struct hinic3_txq *txq, u16 wqebb_cnt)
+{
+ if (likely(hinic3_get_sq_free_wqebbs(txq->sq) >= wqebb_cnt))
+ return 0;
+
+ /* We need to check again in a case another CPU has just
+ * made room available.
+ */
+ netif_stop_subqueue(txq->netdev, txq->q_id);
+
+ if (likely(hinic3_get_sq_free_wqebbs(txq->sq) < wqebb_cnt))
+ return -EBUSY;
+
+	/* there are enough wqebbs after the queue is woken up */
+ netif_start_subqueue(txq->netdev, txq->q_id);
+
+ return 0;
+}
+
+static u16 hinic3_set_wqe_combo(struct hinic3_txq *txq,
+ struct hinic3_sq_wqe_combo *wqe_combo,
+ u32 offload, u16 num_sge, u16 *curr_pi)
+{
+ void *second_part_wqebbs_addr = NULL;
+ void *wqe = NULL;
+ u16 first_part_wqebbs_num, tmp_pi;
+
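+	/*
+	 * A packet with no offload and a single SGE fits in a compact wqe
+	 * (one wqebb); otherwise build an extended wqe with an optional
+	 * task section plus one buffer descriptor per additional SGE.
+	 */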
+ wqe_combo->ctrl_bd0 = hinic3_get_sq_one_wqebb(txq->sq, curr_pi);
+ if (!offload && num_sge == 1) {
+ wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE;
+ return hinic3_get_and_update_sq_owner(txq->sq, *curr_pi, 1);
+ }
+
+ wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE;
+
+ if (offload) {
+ wqe_combo->task = hinic3_get_sq_one_wqebb(txq->sq, &tmp_pi);
+ wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES;
+ } else {
+ wqe_combo->task_type = SQ_WQE_TASKSECT_46BITS;
+ }
+
+ if (num_sge > 1) {
+		/* the first wqebb contains bd0, and the bd size equals the sq
+		 * wqebb size, so we use (num_sge - 1) as the wanted wqebb_cnt
+		 */
+ wqe = hinic3_get_sq_multi_wqebbs(txq->sq, num_sge - 1, &tmp_pi,
+ &second_part_wqebbs_addr,
+ &first_part_wqebbs_num);
+ wqe_combo->bds_head = wqe;
+ wqe_combo->bds_sec2 = second_part_wqebbs_addr;
+ wqe_combo->first_bds_num = first_part_wqebbs_num;
+ }
+
+ return hinic3_get_and_update_sq_owner(txq->sq, *curr_pi,
+ num_sge + (u16)!!offload);
+}
+
+/* *
+ * hinic3_prepare_sq_ctrl - init the sq wqe control section
+ * @nr_descs: total sge num, including bd0 in the control section
+ */
+static void hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo,
+ u32 queue_info, int nr_descs, u16 owner)
+{
+ struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->ctrl_bd0;
+
+ if (wqe_combo->wqe_type == SQ_WQE_COMPACT_TYPE) {
+ wqe_desc->ctrl_len |=
+ SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) |
+ SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) |
+ SQ_CTRL_SET(owner, OWNER);
+
+ wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len);
+		/* for a compact wqe, queue_info is passed to the ucode, so clear it */
+ wqe_desc->queue_info = 0;
+ return;
+ }
+
+ wqe_desc->ctrl_len |= SQ_CTRL_SET(nr_descs, BUFDESC_NUM) |
+ SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) |
+ SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) |
+ SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) |
+ SQ_CTRL_SET(owner, OWNER);
+
+ wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len);
+
+ wqe_desc->queue_info = queue_info;
+ wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, UC);
+
+ if (!SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS)) {
+ wqe_desc->queue_info |=
+ SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS);
+ } else if (SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS) <
+ TX_MSS_MIN) {
+		/* MSS must not be less than TX_MSS_MIN (80) */
+ wqe_desc->queue_info =
+ SQ_CTRL_QUEUE_INFO_CLEAR(wqe_desc->queue_info, MSS);
+ wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS);
+ }
+
+ wqe_desc->queue_info = hinic3_hw_be32(wqe_desc->queue_info);
+}
+
+static netdev_tx_t hinic3_send_one_skb(struct sk_buff *skb,
+ struct net_device *netdev,
+ struct hinic3_txq *txq)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_sq_wqe_combo wqe_combo = {0};
+ struct hinic3_tx_info *tx_info = NULL;
+ struct hinic3_sq_task task;
+ u32 offload, queue_info = 0;
+ u16 owner = 0, pi = 0;
+ u16 wqebb_cnt, num_sge, valid_nr_frags;
+ bool find_zero_sge_len = false;
+ int err, i;
+
+ if (unlikely(skb->len < MIN_SKB_LEN)) {
+ if (skb_pad(skb, (int)(MIN_SKB_LEN - skb->len))) {
+ TXQ_STATS_INC(txq, skb_pad_err);
+ goto tx_skb_pad_err;
+ }
+
+ skb->len = MIN_SKB_LEN;
+ }
+
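+	/*
+	 * Count only non-empty frags: zero-length frags are tolerated at
+	 * the tail, but a non-empty frag after an empty one is rejected,
+	 * presumably because the HW cannot handle zero-length SGEs in the
+	 * middle of a wqe.
+	 */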
+ valid_nr_frags = 0;
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ if (!skb_frag_size(&skb_shinfo(skb)->frags[i])) {
+ find_zero_sge_len = true;
+ continue;
+ } else if (find_zero_sge_len) {
+ TXQ_STATS_INC(txq, frag_size_err);
+ goto tx_drop_pkts;
+ }
+
+ valid_nr_frags++;
+ }
+
+ num_sge = valid_nr_frags + 1;
+
+	/* assume a normal TS format wqe is needed; the task section takes 1 wqebb */
+ wqebb_cnt = num_sge + 1;
+ if (unlikely(hinic3_maybe_stop_tx(txq, wqebb_cnt))) {
+ TXQ_STATS_INC(txq, busy);
+ return NETDEV_TX_BUSY;
+ }
+
+ offload = hinic3_tx_offload(skb, &task, &queue_info, txq);
+ if (unlikely(offload == TX_OFFLOAD_INVALID)) {
+ TXQ_STATS_INC(txq, offload_cow_skb_err);
+ goto tx_drop_pkts;
+ } else if (!offload) {
+ /* no TS in current wqe */
+ wqebb_cnt -= 1;
+ if (unlikely(num_sge == 1 && skb->len > COMPACET_WQ_SKB_MAX_LEN))
+ goto tx_drop_pkts;
+ }
+
+ owner = hinic3_set_wqe_combo(txq, &wqe_combo, offload, num_sge, &pi);
+ if (offload) {
+		/* ip6_frag_id is big endian, no need to transform */
+ wqe_combo.task->ip_identify = hinic3_hw_be32(task.ip_identify);
+ wqe_combo.task->pkt_info0 = hinic3_hw_be32(task.pkt_info0);
+ wqe_combo.task->pkt_info2 = hinic3_hw_be32(task.pkt_info2);
+ wqe_combo.task->vlan_offload =
+ hinic3_hw_be32(task.vlan_offload);
+ }
+
+ tx_info = &txq->tx_info[pi];
+ tx_info->skb = skb;
+ tx_info->wqebb_cnt = wqebb_cnt;
+ tx_info->valid_nr_frags = valid_nr_frags;
+
+ err = tx_map_skb(nic_dev, skb, valid_nr_frags, txq, tx_info,
+ &wqe_combo);
+ if (err) {
+ hinic3_rollback_sq_wqebbs(txq->sq, wqebb_cnt, owner);
+ goto tx_drop_pkts;
+ }
+
+ get_pkt_stats(tx_info, skb);
+
+ hinic3_prepare_sq_ctrl(&wqe_combo, queue_info, num_sge, owner);
+
+ hinic3_write_db(txq->sq, txq->cos, SQ_CFLAG_DP,
+ hinic3_get_sq_local_pi(txq->sq));
+
+ return NETDEV_TX_OK;
+
+tx_drop_pkts:
+ dev_kfree_skb_any(skb);
+
+tx_skb_pad_err:
+ TXQ_STATS_INC(txq, dropped);
+
+ return NETDEV_TX_OK;
+}
+
+netdev_tx_t hinic3_lb_xmit_frame(struct sk_buff *skb,
+ struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u16 q_id = skb_get_queue_mapping(skb);
+ struct hinic3_txq *txq = &nic_dev->txqs[q_id];
+
+ return hinic3_send_one_skb(skb, netdev, txq);
+}
+
+netdev_tx_t hinic3_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct hinic3_txq *txq = NULL;
+ u16 q_id = skb_get_queue_mapping(skb);
+
+ if (unlikely(!netif_carrier_ok(netdev))) {
+ dev_kfree_skb_any(skb);
+ HINIC3_NIC_STATS_INC(nic_dev, tx_carrier_off_drop);
+ return NETDEV_TX_OK;
+ }
+
+ if (unlikely(q_id >= nic_dev->q_params.num_qps)) {
+ txq = &nic_dev->txqs[0];
+ HINIC3_NIC_STATS_INC(nic_dev, tx_invalid_qid);
+ goto tx_drop_pkts;
+ }
+ txq = &nic_dev->txqs[q_id];
+
+ return hinic3_send_one_skb(skb, netdev, txq);
+
+tx_drop_pkts:
+ dev_kfree_skb_any(skb);
+ u64_stats_update_begin(&txq->txq_stats.syncp);
+ txq->txq_stats.dropped++;
+ u64_stats_update_end(&txq->txq_stats.syncp);
+
+ return NETDEV_TX_OK;
+}
+
+static inline void tx_free_skb(struct hinic3_nic_dev *nic_dev,
+ struct hinic3_tx_info *tx_info)
+{
+ tx_unmap_skb(nic_dev, tx_info->skb, tx_info->valid_nr_frags,
+ tx_info->dma_info);
+ dev_kfree_skb_any(tx_info->skb);
+ tx_info->skb = NULL;
+}
+
+static void free_all_tx_skbs(struct hinic3_nic_dev *nic_dev, u32 sq_depth,
+ struct hinic3_tx_info *tx_info_arr)
+{
+ struct hinic3_tx_info *tx_info = NULL;
+ u32 idx;
+
+ for (idx = 0; idx < sq_depth; idx++) {
+ tx_info = &tx_info_arr[idx];
+ if (tx_info->skb)
+ tx_free_skb(nic_dev, tx_info);
+ }
+}
+
+int hinic3_tx_poll(struct hinic3_txq *txq, int budget)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(txq->netdev);
+ struct hinic3_tx_info *tx_info = NULL;
+ u64 tx_bytes = 0, wake = 0, nr_pkts = 0;
+ int pkts = 0;
+ u16 wqebb_cnt = 0;
+ u16 hw_ci, sw_ci = 0, q_id = txq->sq->q_id;
+
+ hw_ci = hinic3_get_sq_hw_ci(txq->sq);
+ dma_rmb();
+ sw_ci = hinic3_get_sq_local_ci(txq->sq);
+
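+	/*
+	 * Reclaim completed wqes between the SW and HW consumer indexes; a
+	 * wqe is freed only once all of its wqebbs are covered by hw_ci.
+	 */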
+ do {
+ tx_info = &txq->tx_info[sw_ci];
+
+ /* Whether all of the wqebb of this wqe is completed */
+ if (hw_ci == sw_ci ||
+ ((hw_ci - sw_ci) & txq->q_mask) < tx_info->wqebb_cnt)
+ break;
+
+ sw_ci = (sw_ci + tx_info->wqebb_cnt) & (u16)txq->q_mask;
+ prefetch(&txq->tx_info[sw_ci]);
+
+ wqebb_cnt += tx_info->wqebb_cnt;
+
+ tx_bytes += tx_info->num_bytes;
+ nr_pkts += tx_info->num_pkts;
+ pkts++;
+
+ tx_free_skb(nic_dev, tx_info);
+ } while (likely(pkts < budget));
+
+ hinic3_update_sq_local_ci(txq->sq, wqebb_cnt);
+
+ if (unlikely(__netif_subqueue_stopped(nic_dev->netdev, q_id) &&
+ hinic3_get_sq_free_wqebbs(txq->sq) >= 1 &&
+ test_bit(HINIC3_INTF_UP, &nic_dev->flags))) {
+ struct netdev_queue *netdev_txq =
+ netdev_get_tx_queue(txq->netdev, q_id);
+
+ __netif_tx_lock(netdev_txq, smp_processor_id());
+ /* To avoid re-waking subqueue with xmit_frame */
+ if (__netif_subqueue_stopped(nic_dev->netdev, q_id)) {
+ netif_wake_subqueue(nic_dev->netdev, q_id);
+ wake++;
+ }
+ __netif_tx_unlock(netdev_txq);
+ }
+
+ u64_stats_update_begin(&txq->txq_stats.syncp);
+ txq->txq_stats.bytes += tx_bytes;
+ txq->txq_stats.packets += nr_pkts;
+ txq->txq_stats.wake += wake;
+ u64_stats_update_end(&txq->txq_stats.syncp);
+
+ return pkts;
+}
+
+void hinic3_set_txq_cos(struct hinic3_nic_dev *nic_dev, u16 start_qid,
+ u16 q_num, u8 cos)
+{
+ u16 idx;
+
+ for (idx = 0; idx < q_num; idx++)
+ nic_dev->txqs[idx + start_qid].cos = cos;
+}
+
+#define HINIC3_BDS_PER_SQ_WQEBB \
+ (HINIC3_SQ_WQEBB_SIZE / sizeof(struct hinic3_sq_bufdesc))
+
+int hinic3_alloc_txqs_res(struct hinic3_nic_dev *nic_dev, u16 num_sq,
+ u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res)
+{
+ struct hinic3_dyna_txq_res *tqres = NULL;
+ int idx, i;
+ u64 size;
+
+ for (idx = 0; idx < num_sq; idx++) {
+ tqres = &txqs_res[idx];
+
+ size = sizeof(*tqres->tx_info) * sq_depth;
+ tqres->tx_info = kzalloc(size, GFP_KERNEL);
+ if (!tqres->tx_info) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc txq%d tx info\n", idx);
+ goto err_out;
+ }
+
+ size = sizeof(*tqres->bds) *
+ (sq_depth * HINIC3_BDS_PER_SQ_WQEBB +
+ HINIC3_MAX_SQ_SGE);
+ tqres->bds = kzalloc(size, GFP_KERNEL);
+ if (!tqres->bds) {
+ kfree(tqres->tx_info);
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to alloc txq%d bds info\n", idx);
+ goto err_out;
+ }
+ }
+
+ return 0;
+
+err_out:
+ for (i = 0; i < idx; i++) {
+ tqres = &txqs_res[i];
+
+ kfree(tqres->bds);
+ kfree(tqres->tx_info);
+ }
+
+ return -ENOMEM;
+}
+
+void hinic3_free_txqs_res(struct hinic3_nic_dev *nic_dev, u16 num_sq,
+ u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res)
+{
+ struct hinic3_dyna_txq_res *tqres = NULL;
+ int idx;
+
+ for (idx = 0; idx < num_sq; idx++) {
+ tqres = &txqs_res[idx];
+
+ free_all_tx_skbs(nic_dev, sq_depth, tqres->tx_info);
+ kfree(tqres->bds);
+ kfree(tqres->tx_info);
+ }
+}
+
+int hinic3_configure_txqs(struct hinic3_nic_dev *nic_dev, u16 num_sq,
+ u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res)
+{
+ struct hinic3_dyna_txq_res *tqres = NULL;
+ struct hinic3_txq *txq = NULL;
+ u16 q_id;
+ u32 idx;
+
+ for (q_id = 0; q_id < num_sq; q_id++) {
+ txq = &nic_dev->txqs[q_id];
+ tqres = &txqs_res[q_id];
+
+ txq->q_depth = sq_depth;
+ txq->q_mask = sq_depth - 1;
+
+ txq->tx_info = tqres->tx_info;
+ for (idx = 0; idx < sq_depth; idx++)
+ txq->tx_info[idx].dma_info =
+ &tqres->bds[idx * HINIC3_BDS_PER_SQ_WQEBB];
+
+ txq->sq = hinic3_get_nic_queue(nic_dev->hwdev, q_id, HINIC3_SQ);
+ if (!txq->sq) {
+ nicif_err(nic_dev, drv, nic_dev->netdev,
+ "Failed to get %u sq\n", q_id);
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+int hinic3_alloc_txqs(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ struct pci_dev *pdev = nic_dev->pdev;
+ struct hinic3_txq *txq = NULL;
+ u16 q_id, num_txqs = nic_dev->max_qps;
+ u64 txq_size;
+
+ txq_size = num_txqs * sizeof(*nic_dev->txqs);
+ if (!txq_size) {
+ nic_err(&pdev->dev, "Cannot allocate zero size txqs\n");
+ return -EINVAL;
+ }
+
+ nic_dev->txqs = kzalloc(txq_size, GFP_KERNEL);
+ if (!nic_dev->txqs) {
+ nic_err(&pdev->dev, "Failed to allocate txqs\n");
+ return -ENOMEM;
+ }
+
+ for (q_id = 0; q_id < num_txqs; q_id++) {
+ txq = &nic_dev->txqs[q_id];
+ txq->netdev = netdev;
+ txq->q_id = q_id;
+ txq->q_depth = nic_dev->q_params.sq_depth;
+ txq->q_mask = nic_dev->q_params.sq_depth - 1;
+ txq->dev = &pdev->dev;
+
+ txq_stats_init(txq);
+ }
+
+ return 0;
+}
+
+void hinic3_free_txqs(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+
+ kfree(nic_dev->txqs);
+}
+
+static bool is_hw_complete_sq_process(struct hinic3_io_queue *sq)
+{
+ u16 sw_pi, hw_ci;
+
+ sw_pi = hinic3_get_sq_local_pi(sq);
+ hw_ci = hinic3_get_sq_hw_ci(sq);
+
+ return sw_pi == hw_ci;
+}
+
+#define HINIC3_FLUSH_QUEUE_TIMEOUT 1000
+static int hinic3_stop_sq(struct hinic3_txq *txq)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(txq->netdev);
+ unsigned long timeout;
+ int err;
+
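+	/*
+	 * Drain in two phases: first wait up to 1s for HW to consume the
+	 * posted wqes on its own, then ask HW to force-drop pending TX
+	 * packets while waiting up to another 1s.
+	 */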
+ timeout = msecs_to_jiffies(HINIC3_FLUSH_QUEUE_TIMEOUT) + jiffies;
+ do {
+ if (is_hw_complete_sq_process(txq->sq))
+ return 0;
+
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+ } while (time_before(jiffies, timeout));
+
+ /* force hardware to drop packets */
+ timeout = msecs_to_jiffies(HINIC3_FLUSH_QUEUE_TIMEOUT) + jiffies;
+ do {
+ if (is_hw_complete_sq_process(txq->sq))
+ return 0;
+
+ err = hinic3_force_drop_tx_pkt(nic_dev->hwdev);
+ if (err)
+ break;
+
+ usleep_range(9900, 10000); /* sleep 9900 us ~ 10000 us */
+ } while (time_before(jiffies, timeout));
+
+	/* Avoid msleep taking too long and returning a stale result */
+ if (is_hw_complete_sq_process(txq->sq))
+ return 0;
+
+ return -EFAULT;
+}
+
+/* packet transmission must be stopped before calling this function */
+int hinic3_flush_txqs(struct net_device *netdev)
+{
+ struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
+ u16 qid;
+ int err;
+
+ for (qid = 0; qid < nic_dev->q_params.num_qps; qid++) {
+ err = hinic3_stop_sq(&nic_dev->txqs[qid]);
+ if (err)
+ nicif_err(nic_dev, drv, netdev,
+ "Failed to stop sq%u\n", qid);
+ }
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_tx.h b/drivers/net/ethernet/huawei/hinic3/hinic3_tx.h
new file mode 100644
index 000000000000..290ef297c45c
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_tx.h
@@ -0,0 +1,157 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_TX_H
+#define HINIC3_TX_H
+
+#include <net/ipv6.h>
+#include <net/checksum.h>
+#include <net/ip6_checksum.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+
+#include "hinic3_nic_qp.h"
+#include "hinic3_nic_io.h"
+
+#define VXLAN_OFFLOAD_PORT_LE 46354 /* big end is 4789 */
+
+#define COMPACET_WQ_SKB_MAX_LEN 16383
+
+#define IP4_VERSION 4
+#define IP6_VERSION 6
+#define IP_HDR_IHL_UNIT_SHIFT 2
+#define TCP_HDR_DATA_OFF_UNIT_SHIFT 2
+
+enum tx_offload_type {
+ TX_OFFLOAD_TSO = BIT(0),
+ TX_OFFLOAD_CSUM = BIT(1),
+ TX_OFFLOAD_VLAN = BIT(2),
+ TX_OFFLOAD_INVALID = BIT(3),
+ TX_OFFLOAD_ESP = BIT(4),
+};
+
+struct hinic3_txq_stats {
+ u64 packets;
+ u64 bytes;
+ u64 busy;
+ u64 wake;
+ u64 dropped;
+
+	/* Subdivision statistics shown in the private tool */
+ u64 skb_pad_err;
+ u64 frag_len_overflow;
+ u64 offload_cow_skb_err;
+ u64 map_frag_err;
+ u64 unknown_tunnel_pkt;
+ u64 frag_size_err;
+ u64 rsvd1;
+ u64 rsvd2;
+
+#ifdef HAVE_NDO_GET_STATS64
+ struct u64_stats_sync syncp;
+#else
+ struct u64_stats_sync_empty syncp;
+#endif
+};
+
+struct hinic3_dma_info {
+ dma_addr_t dma;
+ u32 len;
+};
+
+#define IPV4_VERSION 4
+#define IPV6_VERSION 6
+#define TCP_HDR_DOFF_UNIT 2
+#define TRANSPORT_OFFSET(l4_hdr, skb) ((u32)((l4_hdr) - (skb)->data))
+
+union hinic3_ip {
+ struct iphdr *v4;
+ struct ipv6hdr *v6;
+ unsigned char *hdr;
+};
+
+struct hinic3_tx_info {
+ struct sk_buff *skb;
+
+ u16 wqebb_cnt;
+ u16 valid_nr_frags;
+
+ int num_sge;
+ u16 num_pkts;
+ u16 rsvd1;
+ u32 rsvd2;
+ u64 num_bytes;
+ struct hinic3_dma_info *dma_info;
+ u64 rsvd3;
+};
+
+struct hinic3_txq {
+ struct net_device *netdev;
+ struct device *dev;
+
+ struct hinic3_txq_stats txq_stats;
+
+ u8 cos;
+ u8 rsvd1;
+ u16 q_id;
+ u32 q_mask;
+ u32 q_depth;
+ u32 rsvd2;
+
+ struct hinic3_tx_info *tx_info;
+ struct hinic3_io_queue *sq;
+
+ u64 last_moder_packets;
+ u64 last_moder_bytes;
+ u64 rsvd3;
+} ____cacheline_aligned;
+
+netdev_tx_t hinic3_lb_xmit_frame(struct sk_buff *skb,
+ struct net_device *netdev);
+
+struct hinic3_dyna_txq_res {
+ struct hinic3_tx_info *tx_info;
+ struct hinic3_dma_info *bds;
+};
+
+netdev_tx_t hinic3_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
+
+void hinic3_txq_get_stats(struct hinic3_txq *txq,
+ struct hinic3_txq_stats *stats);
+
+void hinic3_txq_clean_stats(struct hinic3_txq_stats *txq_stats);
+
+struct hinic3_nic_dev;
+int hinic3_alloc_txqs_res(struct hinic3_nic_dev *nic_dev, u16 num_sq,
+ u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res);
+
+void hinic3_free_txqs_res(struct hinic3_nic_dev *nic_dev, u16 num_sq,
+ u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res);
+
+int hinic3_configure_txqs(struct hinic3_nic_dev *nic_dev, u16 num_sq,
+ u32 sq_depth, struct hinic3_dyna_txq_res *txqs_res);
+
+int hinic3_alloc_txqs(struct net_device *netdev);
+
+void hinic3_free_txqs(struct net_device *netdev);
+
+int hinic3_tx_poll(struct hinic3_txq *txq, int budget);
+
+int hinic3_flush_txqs(struct net_device *netdev);
+
+void hinic3_set_txq_cos(struct hinic3_nic_dev *nic_dev, u16 start_qid,
+ u16 q_num, u8 cos);
+
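+/*
+ * Note: the '#ifdef static' block below looks like a unit-test (LLT) hook
+ * that lets a test build redefine 'static' to expose internal functions; it
+ * is a no-op in normal builds.
+ */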
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
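+/*
+ * Pseudo-header checksum seed for HW L4 offload: computed with a zero
+ * length, as the HW adds the payload portion itself.
+ */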
+static inline __sum16 csum_magic(union hinic3_ip *ip, unsigned short proto)
+{
+ return (ip->v4->version == IPV4_VERSION) ?
+ csum_tcpudp_magic(ip->v4->saddr, ip->v4->daddr, 0, proto, 0) :
+ csum_ipv6_magic(&ip->v6->saddr, &ip->v6->daddr, 0, proto, 0);
+}
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_wq.h b/drivers/net/ethernet/huawei/hinic3/hinic3_wq.h
new file mode 100644
index 000000000000..1b9e509109b8
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_wq.h
@@ -0,0 +1,130 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_WQ_H
+#define HINIC3_WQ_H
+
+struct hinic3_wq {
+ u16 cons_idx;
+ u16 prod_idx;
+
+ u32 q_depth;
+ u16 idx_mask;
+ u16 wqebb_size_shift;
+ u16 rsvd1;
+ u16 num_wq_pages;
+ u32 wqebbs_per_page;
+ u16 wqebbs_per_page_shift;
+ u16 wqebbs_per_page_mask;
+
+ struct hinic3_dma_addr_align *wq_pages;
+
+ dma_addr_t wq_block_paddr;
+ u64 *wq_block_vaddr;
+
+ void *dev_hdl;
+ u32 wq_page_size;
+ u16 wqebb_size;
+} ____cacheline_aligned;
+
+#define WQ_MASK_IDX(wq, idx) ((idx) & (wq)->idx_mask)
+#define WQ_MASK_PAGE(wq, pg_idx) \
+ ((pg_idx) < (wq)->num_wq_pages ? (pg_idx) : 0)
+#define WQ_PAGE_IDX(wq, idx) ((idx) >> (wq)->wqebbs_per_page_shift)
+#define WQ_OFFSET_IN_PAGE(wq, idx) ((idx) & (wq)->wqebbs_per_page_mask)
+#define WQ_GET_WQEBB_ADDR(wq, pg_idx, idx_in_pg) \
+ ((u8 *)(wq)->wq_pages[pg_idx].align_vaddr + \
+ ((idx_in_pg) << (wq)->wqebb_size_shift))
+#define WQ_IS_0_LEVEL_CLA(wq) ((wq)->num_wq_pages == 1)
+
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+static inline u16 hinic3_wq_free_wqebbs(struct hinic3_wq *wq)
+{
+ return wq->q_depth - ((wq->q_depth + wq->prod_idx - wq->cons_idx) &
+ wq->idx_mask) - 1;
+}
+
+static inline bool hinic3_wq_is_empty(struct hinic3_wq *wq)
+{
+ return WQ_MASK_IDX(wq, wq->prod_idx) == WQ_MASK_IDX(wq, wq->cons_idx);
+}
+
+static inline void *hinic3_wq_get_one_wqebb(struct hinic3_wq *wq, u16 *pi)
+{
+ *pi = WQ_MASK_IDX(wq, wq->prod_idx);
+ wq->prod_idx++;
+
+ return WQ_GET_WQEBB_ADDR(wq, WQ_PAGE_IDX(wq, *pi),
+ WQ_OFFSET_IN_PAGE(wq, *pi));
+}
+
+static inline void *hinic3_wq_get_multi_wqebbs(struct hinic3_wq *wq,
+ u16 num_wqebbs, u16 *prod_idx,
+ void **second_part_wqebbs_addr,
+ u16 *first_part_wqebbs_num)
+{
+ u32 pg_idx, off_in_page;
+
+ *prod_idx = WQ_MASK_IDX(wq, wq->prod_idx);
+ wq->prod_idx += num_wqebbs;
+
+ pg_idx = WQ_PAGE_IDX(wq, *prod_idx);
+ off_in_page = WQ_OFFSET_IN_PAGE(wq, *prod_idx);
+
+ if (off_in_page + num_wqebbs > wq->wqebbs_per_page) {
+ /* wqe across wq page boundary */
+ *second_part_wqebbs_addr =
+ WQ_GET_WQEBB_ADDR(wq, WQ_MASK_PAGE(wq, pg_idx + 1), 0);
+ *first_part_wqebbs_num = wq->wqebbs_per_page - off_in_page;
+ } else {
+ *second_part_wqebbs_addr = NULL;
+ *first_part_wqebbs_num = num_wqebbs;
+ }
+
+ return WQ_GET_WQEBB_ADDR(wq, pg_idx, off_in_page);
+}
+
+static inline void hinic3_wq_put_wqebbs(struct hinic3_wq *wq, u16 num_wqebbs)
+{
+ wq->cons_idx += num_wqebbs;
+}
+
+static inline void *hinic3_wq_wqebb_addr(struct hinic3_wq *wq, u16 idx)
+{
+ return WQ_GET_WQEBB_ADDR(wq, WQ_PAGE_IDX(wq, idx),
+ WQ_OFFSET_IN_PAGE(wq, idx));
+}
+
+static inline void *hinic3_wq_read_one_wqebb(struct hinic3_wq *wq,
+ u16 *cons_idx)
+{
+ *cons_idx = WQ_MASK_IDX(wq, wq->cons_idx);
+
+ return hinic3_wq_wqebb_addr(wq, *cons_idx);
+}
+
+static inline u64 hinic3_wq_get_first_wqe_page_addr(struct hinic3_wq *wq)
+{
+ return wq->wq_pages[0].align_paddr;
+}
+
+static inline void hinic3_wq_reset(struct hinic3_wq *wq)
+{
+ u16 pg_idx;
+
+ wq->cons_idx = 0;
+ wq->prod_idx = 0;
+
+ for (pg_idx = 0; pg_idx < wq->num_wq_pages; pg_idx++)
+ memset(wq->wq_pages[pg_idx].align_vaddr, 0, wq->wq_page_size);
+}
+
+int hinic3_wq_create(void *hwdev, struct hinic3_wq *wq, u32 q_depth,
+ u16 wqebb_size);
+void hinic3_wq_destroy(struct hinic3_wq *wq);
+
+#endif
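For context, the produce/consume contract these inline helpers imply is
roughly as follows (an illustrative sketch assuming the wq was set up
with hinic3_wq_create(); doorbell and completion handling are elided):

static void example_wq_produce(struct hinic3_wq *wq, u16 num_wqebbs)
{
	void *first = NULL, *second = NULL;
	u16 pi = 0, first_cnt = 0;

	if (hinic3_wq_free_wqebbs(wq) < num_wqebbs)
		return;		/* ring full, caller should stop the queue */

	first = hinic3_wq_get_multi_wqebbs(wq, num_wqebbs, &pi,
					   &second, &first_cnt);
	/* fill 'first_cnt' wqebbs at 'first'; a non-NULL 'second' means
	 * the wqe wrapped past a wq page and the remainder goes there
	 */

	/* ... ring the doorbell with 'pi'; later, on completion: ... */
	hinic3_wq_put_wqebbs(wq, num_wqebbs);
}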
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.c
new file mode 100644
index 000000000000..b742f8a8d9fe
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.c
@@ -0,0 +1,1211 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/completion.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/semaphore.h>
+#include <linux/jiffies.h>
+#include <linux/delay.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_csr.h"
+#include "hinic3_hwif.h"
+#include "hinic3_api_cmd.h"
+
+#define API_CMD_CHAIN_CELL_SIZE_SHIFT 6U
+
+#define API_CMD_CELL_DESC_SIZE 8
+#define API_CMD_CELL_DATA_ADDR_SIZE 8
+
+#define API_CHAIN_NUM_CELLS 32
+#define API_CHAIN_CELL_SIZE 128
+#define API_CHAIN_RSP_DATA_SIZE 128
+
+#define API_CMD_CELL_WB_ADDR_SIZE 8
+
+#define API_CHAIN_CELL_ALIGNMENT 8
+
+#define API_CMD_TIMEOUT 10000
+#define API_CMD_STATUS_TIMEOUT 10000
+
+#define API_CMD_BUF_SIZE 2048ULL
+
+#define API_CMD_NODE_ALIGN_SIZE 512ULL
+#define API_PAYLOAD_ALIGN_SIZE 64ULL
+
+#define API_CHAIN_RESP_ALIGNMENT 128ULL
+
+#define COMPLETION_TIMEOUT_DEFAULT 1000UL
+#define POLLING_COMPLETION_TIMEOUT_DEFAULT 1000U
+
+#define API_CMD_RESPONSE_DATA_PADDR(val) be64_to_cpu(*((u64 *)(val)))
+
+#define READ_API_CMD_PRIV_DATA(id, token) ((((u32)(id)) << 16) + (token))
+#define WRITE_API_CMD_PRIV_DATA(id) (((u8)(id)) << 16)
+
+#define MASKED_IDX(chain, idx) ((idx) & ((chain)->num_cells - 1))
+
+#define SIZE_4BYTES(size) (ALIGN((u32)(size), 4U) >> 2)
+#define SIZE_8BYTES(size) (ALIGN((u32)(size), 8U) >> 3)
+
+enum api_cmd_data_format {
+ SGL_DATA = 1,
+};
+
+enum api_cmd_type {
+ API_CMD_WRITE_TYPE = 0,
+ API_CMD_READ_TYPE = 1,
+};
+
+enum api_cmd_bypass {
+ NOT_BYPASS = 0,
+ BYPASS = 1,
+};
+
+enum api_cmd_resp_aeq {
+ NOT_TRIGGER = 0,
+ TRIGGER = 1,
+};
+
+enum api_cmd_chn_code {
+ APICHN_0 = 0,
+};
+
+enum api_cmd_chn_rsvd {
+ APICHN_VALID = 0,
+ APICHN_INVALID = 1,
+};
+
+#define API_DESC_LEN (7)
+
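+/**
+ * xor_chksum_set - XOR-fold the first 7 bytes of a 64-bit ctrl/desc word
+ * @data: pointer to the word; the remaining byte is the checksum field
+ **/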
+static u8 xor_chksum_set(void *data)
+{
+ int idx;
+ u8 checksum = 0;
+ u8 *val = data;
+
+ for (idx = 0; idx < API_DESC_LEN; idx++)
+ checksum ^= val[idx];
+
+ return checksum;
+}
+
+static void set_prod_idx(struct hinic3_api_cmd_chain *chain)
+{
+ enum hinic3_api_cmd_chain_type chain_type = chain->chain_type;
+ struct hinic3_hwif *hwif = chain->hwdev->hwif;
+ u32 hw_prod_idx_addr = HINIC3_CSR_API_CMD_CHAIN_PI_ADDR(chain_type);
+ u32 prod_idx = chain->prod_idx;
+
+ hinic3_hwif_write_reg(hwif, hw_prod_idx_addr, prod_idx);
+}
+
+static u32 get_hw_cons_idx(struct hinic3_api_cmd_chain *chain)
+{
+ u32 addr, val;
+
+ addr = HINIC3_CSR_API_CMD_STATUS_0_ADDR(chain->chain_type);
+ val = hinic3_hwif_read_reg(chain->hwdev->hwif, addr);
+
+ return HINIC3_API_CMD_STATUS_GET(val, CONS_IDX);
+}
+
+static void dump_api_chain_reg(struct hinic3_api_cmd_chain *chain)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ u32 addr, val;
+ u16 pci_cmd = 0;
+
+ addr = HINIC3_CSR_API_CMD_STATUS_0_ADDR(chain->chain_type);
+ val = hinic3_hwif_read_reg(chain->hwdev->hwif, addr);
+
+ sdk_err(dev, "Chain type: 0x%x, cpld error: 0x%x, check error: 0x%x, current fsm: 0x%x\n",
+ chain->chain_type, HINIC3_API_CMD_STATUS_GET(val, CPLD_ERR),
+ HINIC3_API_CMD_STATUS_GET(val, CHKSUM_ERR),
+ HINIC3_API_CMD_STATUS_GET(val, FSM));
+
+ sdk_err(dev, "Chain hw current ci: 0x%x\n",
+ HINIC3_API_CMD_STATUS_GET(val, CONS_IDX));
+
+ addr = HINIC3_CSR_API_CMD_CHAIN_PI_ADDR(chain->chain_type);
+ val = hinic3_hwif_read_reg(chain->hwdev->hwif, addr);
+ sdk_err(dev, "Chain hw current pi: 0x%x\n", val);
+ pci_read_config_word(chain->hwdev->pcidev_hdl, PCI_COMMAND, &pci_cmd);
+ sdk_err(dev, "PCI command reg: 0x%x\n", pci_cmd);
+}
+
+/**
+ * chain_busy - check if the chain is still processing last requests
+ * @chain: chain to check
+ **/
+static int chain_busy(struct hinic3_api_cmd_chain *chain)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ struct hinic3_api_cmd_cell_ctxt *ctxt;
+ u64 resp_header;
+
+ ctxt = &chain->cell_ctxt[chain->prod_idx];
+
+ switch (chain->chain_type) {
+ case HINIC3_API_CMD_MULTI_READ:
+ case HINIC3_API_CMD_POLL_READ:
+ resp_header = be64_to_cpu(ctxt->resp->header);
+ if (ctxt->status &&
+ !HINIC3_API_CMD_RESP_HEADER_VALID(resp_header)) {
+ sdk_err(dev, "Context(0x%x) busy!, pi: %u, resp_header: 0x%08x%08x\n",
+ ctxt->status, chain->prod_idx,
+ upper_32_bits(resp_header),
+ lower_32_bits(resp_header));
+ dump_api_chain_reg(chain);
+ return -EBUSY;
+ }
+ break;
+ case HINIC3_API_CMD_POLL_WRITE:
+ case HINIC3_API_CMD_WRITE_TO_MGMT_CPU:
+ case HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ chain->cons_idx = get_hw_cons_idx(chain);
+
+ if (chain->cons_idx == MASKED_IDX(chain, chain->prod_idx + 1)) {
+ sdk_err(dev, "API CMD chain %d is busy, cons_idx = %u, prod_idx = %u\n",
+ chain->chain_type, chain->cons_idx,
+ chain->prod_idx);
+ dump_api_chain_reg(chain);
+ return -EBUSY;
+ }
+ break;
+ default:
+ sdk_err(dev, "Unknown Chain type %d\n", chain->chain_type);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/**
+ * get_cell_data_size - get the data size of specific cell type
+ * @type: chain type
+ **/
+static u16 get_cell_data_size(enum hinic3_api_cmd_chain_type type)
+{
+ u16 cell_data_size = 0;
+
+ switch (type) {
+ case HINIC3_API_CMD_POLL_READ:
+ cell_data_size = ALIGN(API_CMD_CELL_DESC_SIZE +
+ API_CMD_CELL_WB_ADDR_SIZE +
+ API_CMD_CELL_DATA_ADDR_SIZE,
+ API_CHAIN_CELL_ALIGNMENT);
+ break;
+
+ case HINIC3_API_CMD_WRITE_TO_MGMT_CPU:
+ case HINIC3_API_CMD_POLL_WRITE:
+ case HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ cell_data_size = ALIGN(API_CMD_CELL_DESC_SIZE +
+ API_CMD_CELL_DATA_ADDR_SIZE,
+ API_CHAIN_CELL_ALIGNMENT);
+ break;
+ default:
+ break;
+ }
+
+ return cell_data_size;
+}
+
+/**
+ * prepare_cell_ctrl - prepare the ctrl of the cell for the command
+ * @cell_ctrl: where the cell control word is written
+ * @cell_len: the size of the cell
+ **/
+static void prepare_cell_ctrl(u64 *cell_ctrl, u16 cell_len)
+{
+ u64 ctrl;
+ u8 chksum;
+
+ ctrl = HINIC3_API_CMD_CELL_CTRL_SET(SIZE_8BYTES(cell_len), CELL_LEN) |
+ HINIC3_API_CMD_CELL_CTRL_SET(0ULL, RD_DMA_ATTR_OFF) |
+ HINIC3_API_CMD_CELL_CTRL_SET(0ULL, WR_DMA_ATTR_OFF);
+
+ chksum = xor_chksum_set(&ctrl);
+
+ ctrl |= HINIC3_API_CMD_CELL_CTRL_SET(chksum, XOR_CHKSUM);
+
+ /* The data in the HW should be in Big Endian Format */
+ *cell_ctrl = cpu_to_be64(ctrl);
+}
+
+/**
+ * prepare_api_cmd - prepare API CMD command
+ * @chain: chain for the command
+ * @cell: the cell of the command
+ * @node_id: destination node on the card that will receive the command
+ * @cmd: command data
+ * @cmd_size: the command size
+ **/
+static void prepare_api_cmd(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_cell *cell, u8 node_id,
+ const void *cmd, u16 cmd_size)
+{
+ struct hinic3_api_cmd_cell_ctxt *cell_ctxt;
+ u32 priv;
+
+ cell_ctxt = &chain->cell_ctxt[chain->prod_idx];
+
+ switch (chain->chain_type) {
+ case HINIC3_API_CMD_POLL_READ:
+ priv = READ_API_CMD_PRIV_DATA(chain->chain_type,
+ cell_ctxt->saved_prod_idx);
+ cell->desc = HINIC3_API_CMD_DESC_SET(SGL_DATA, API_TYPE) |
+ HINIC3_API_CMD_DESC_SET(API_CMD_READ_TYPE, RD_WR) |
+ HINIC3_API_CMD_DESC_SET(BYPASS, MGMT_BYPASS) |
+ HINIC3_API_CMD_DESC_SET(NOT_TRIGGER,
+ RESP_AEQE_EN) |
+ HINIC3_API_CMD_DESC_SET(priv, PRIV_DATA);
+ break;
+ case HINIC3_API_CMD_POLL_WRITE:
+ priv = WRITE_API_CMD_PRIV_DATA(chain->chain_type);
+ cell->desc = HINIC3_API_CMD_DESC_SET(SGL_DATA, API_TYPE) |
+ HINIC3_API_CMD_DESC_SET(API_CMD_WRITE_TYPE,
+ RD_WR) |
+ HINIC3_API_CMD_DESC_SET(BYPASS, MGMT_BYPASS) |
+ HINIC3_API_CMD_DESC_SET(NOT_TRIGGER,
+ RESP_AEQE_EN) |
+ HINIC3_API_CMD_DESC_SET(priv, PRIV_DATA);
+ break;
+ case HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ case HINIC3_API_CMD_WRITE_TO_MGMT_CPU:
+ priv = WRITE_API_CMD_PRIV_DATA(chain->chain_type);
+ cell->desc = HINIC3_API_CMD_DESC_SET(SGL_DATA, API_TYPE) |
+ HINIC3_API_CMD_DESC_SET(API_CMD_WRITE_TYPE,
+ RD_WR) |
+ HINIC3_API_CMD_DESC_SET(NOT_BYPASS, MGMT_BYPASS) |
+ HINIC3_API_CMD_DESC_SET(TRIGGER, RESP_AEQE_EN) |
+ HINIC3_API_CMD_DESC_SET(priv, PRIV_DATA);
+ break;
+ default:
+ sdk_err(chain->hwdev->dev_hdl, "Unknown Chain type: %d\n",
+ chain->chain_type);
+ return;
+ }
+
+ cell->desc |= HINIC3_API_CMD_DESC_SET(APICHN_0, APICHN_CODE) |
+ HINIC3_API_CMD_DESC_SET(APICHN_VALID, APICHN_RSVD);
+
+ cell->desc |= HINIC3_API_CMD_DESC_SET(node_id, DEST) |
+ HINIC3_API_CMD_DESC_SET(SIZE_4BYTES(cmd_size), SIZE);
+
+ cell->desc |= HINIC3_API_CMD_DESC_SET(xor_chksum_set(&cell->desc),
+ XOR_CHKSUM);
+
+ /* The data in the HW should be in Big Endian Format */
+ cell->desc = cpu_to_be64(cell->desc);
+
+ memcpy(cell_ctxt->api_cmd_vaddr, cmd, cmd_size);
+}
+
+/**
+ * prepare_cell - prepare cell ctrl and cmd in the current producer cell
+ * @chain: chain for the command
+ * @node_id: destination node on the card that will receive the command
+ * @cmd: command data
+ * @cmd_size: the command size
+ **/
+static void prepare_cell(struct hinic3_api_cmd_chain *chain, u8 node_id,
+ const void *cmd, u16 cmd_size)
+{
+ struct hinic3_api_cmd_cell *curr_node;
+ u16 cell_size;
+
+ curr_node = chain->curr_node;
+
+ cell_size = get_cell_data_size(chain->chain_type);
+
+ prepare_cell_ctrl(&curr_node->ctrl, cell_size);
+ prepare_api_cmd(chain, curr_node, node_id, cmd, cmd_size);
+}
+
+static inline void cmd_chain_prod_idx_inc(struct hinic3_api_cmd_chain *chain)
+{
+ chain->prod_idx = MASKED_IDX(chain, chain->prod_idx + 1);
+}
+
+static void issue_api_cmd(struct hinic3_api_cmd_chain *chain)
+{
+ set_prod_idx(chain);
+}
+
+/**
+ * api_cmd_status_update - update the status of the chain
+ * @chain: chain to update
+ **/
+static void api_cmd_status_update(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_api_cmd_status *wb_status;
+ enum hinic3_api_cmd_chain_type chain_type;
+ u64 status_header;
+ u32 buf_desc;
+
+ wb_status = chain->wb_status;
+
+ buf_desc = be32_to_cpu(wb_status->buf_desc);
+ if (HINIC3_API_CMD_STATUS_GET(buf_desc, CHKSUM_ERR))
+ return;
+
+ status_header = be64_to_cpu(wb_status->header);
+ chain_type = HINIC3_API_CMD_STATUS_HEADER_GET(status_header, CHAIN_ID);
+ if (chain_type >= HINIC3_API_CMD_MAX)
+ return;
+
+ if (chain_type != chain->chain_type)
+ return;
+
+ chain->cons_idx = HINIC3_API_CMD_STATUS_GET(buf_desc, CONS_IDX);
+}
+
+static enum hinic3_wait_return wait_for_status_poll_handler(void *priv_data)
+{
+ struct hinic3_api_cmd_chain *chain = priv_data;
+
+ if (!chain->hwdev->chip_present_flag)
+ return WAIT_PROCESS_ERR;
+
+ api_cmd_status_update(chain);
+ /* A sync API CMD must start only after the previous one finishes */
+ if (chain->cons_idx == chain->prod_idx)
+ return WAIT_PROCESS_CPL;
+
+ return WAIT_PROCESS_WAITING;
+}
+
+/**
+ * wait_for_status_poll - wait for write to mgmt command to complete
+ * @chain: the chain of the command
+ * Return: 0 - success, negative - failure
+ **/
+static int wait_for_status_poll(struct hinic3_api_cmd_chain *chain)
+{
+ return hinic3_wait_for_timeout(chain,
+ wait_for_status_poll_handler,
+ API_CMD_STATUS_TIMEOUT, 100); /* wait 100 us once */
+}
+
+static void copy_resp_data(struct hinic3_api_cmd_cell_ctxt *ctxt, void *ack,
+ u16 ack_size)
+{
+ struct hinic3_api_cmd_resp_fmt *resp = ctxt->resp;
+
+ memcpy(ack, &resp->resp_data, ack_size);
+ ctxt->status = 0;
+}
+
+static enum hinic3_wait_return check_cmd_resp_handler(void *priv_data)
+{
+ struct hinic3_api_cmd_cell_ctxt *ctxt = priv_data;
+ u64 resp_header;
+ u8 resp_status;
+
+ if (!ctxt->hwdev->chip_present_flag)
+ return WAIT_PROCESS_ERR;
+
+ resp_header = be64_to_cpu(ctxt->resp->header);
+ rmb(); /* read the latest header */
+
+ if (HINIC3_API_CMD_RESP_HEADER_VALID(resp_header)) {
+ resp_status = HINIC3_API_CMD_RESP_HEAD_GET(resp_header, STATUS);
+ if (resp_status) {
+ pr_err("Api chain response data err, status: %u\n",
+ resp_status);
+ return WAIT_PROCESS_ERR;
+ }
+
+ return WAIT_PROCESS_CPL;
+ }
+
+ return WAIT_PROCESS_WAITING;
+}
+
+/**
+ * wait_for_resp_polling - poll for the response data of a read api-command
+ * @ctxt: cell context of the command
+ *
+ * Return: 0 - success, negative - failure
+ **/
+static int wait_for_resp_polling(struct hinic3_api_cmd_cell_ctxt *ctxt)
+{
+ return hinic3_wait_for_timeout(ctxt, check_cmd_resp_handler,
+ POLLING_COMPLETION_TIMEOUT_DEFAULT,
+ USEC_PER_MSEC);
+}
+
+/**
+ * wait_for_api_cmd_completion - wait for command to complete
+ * @chain: chain for the command
+ * @ctxt: cell context of the command
+ * @ack: response buffer for read commands
+ * @ack_size: size of the response buffer
+ * Return: 0 - success, negative - failure
+ **/
+static int wait_for_api_cmd_completion(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_cell_ctxt *ctxt,
+ void *ack, u16 ack_size)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ int err = 0;
+
+ switch (chain->chain_type) {
+ case HINIC3_API_CMD_POLL_READ:
+ err = wait_for_resp_polling(ctxt);
+ if (err == 0)
+ copy_resp_data(ctxt, ack, ack_size);
+ else
+ sdk_err(dev, "API CMD poll response timeout\n");
+ break;
+ case HINIC3_API_CMD_POLL_WRITE:
+ case HINIC3_API_CMD_WRITE_TO_MGMT_CPU:
+ err = wait_for_status_poll(chain);
+ if (err != 0) {
+ sdk_err(dev, "API CMD Poll status timeout, chain type: %d\n",
+ chain->chain_type);
+ break;
+ }
+ break;
+ case HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ /* No need to wait */
+ break;
+ default:
+ sdk_err(dev, "Unknown API CMD Chain type: %d\n",
+ chain->chain_type);
+ err = -EINVAL;
+ break;
+ }
+
+ if (err != 0)
+ dump_api_chain_reg(chain);
+
+ return err;
+}
+
+static inline void update_api_cmd_ctxt(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_cell_ctxt *ctxt)
+{
+ ctxt->status = 1;
+ ctxt->saved_prod_idx = chain->prod_idx;
+ if (ctxt->resp) {
+ ctxt->resp->header = 0;
+
+ /* make sure "header" was cleared */
+ wmb();
+ }
+}
+
+/**
+ * api_cmd - API CMD command
+ * @chain: chain for the command
+ * @node_id: destination node on the card that will receive the command
+ * @cmd: command data
+ * @cmd_size: the command size
+ * @ack: response buffer for read commands, NULL otherwise
+ * @ack_size: size of the response buffer
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd(struct hinic3_api_cmd_chain *chain, u8 node_id,
+ const void *cmd, u16 cmd_size, void *ack, u16 ack_size)
+{
+ struct hinic3_api_cmd_cell_ctxt *ctxt = NULL;
+
+ if (chain->chain_type == HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_lock(&chain->async_lock);
+ else
+ down(&chain->sem);
+ ctxt = &chain->cell_ctxt[chain->prod_idx];
+ if (chain_busy(chain)) {
+ if (chain->chain_type == HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_unlock(&chain->async_lock);
+ else
+ up(&chain->sem);
+ return -EBUSY;
+ }
+ update_api_cmd_ctxt(chain, ctxt);
+
+ prepare_cell(chain, node_id, cmd, cmd_size);
+
+ cmd_chain_prod_idx_inc(chain);
+
+ wmb(); /* issue the command */
+
+ issue_api_cmd(chain);
+
+ /* prod_idx was advanced above; move curr_node to the new producer cell */
+
+ chain->curr_node = chain->cell_ctxt[chain->prod_idx].cell_vaddr;
+ if (chain->chain_type == HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_unlock(&chain->async_lock);
+ else
+ up(&chain->sem);
+
+ return wait_for_api_cmd_completion(chain, ctxt, ack, ack_size);
+}
+
+/**
+ * hinic3_api_cmd_write - Write API CMD command
+ * @chain: chain for write command
+ * @node_id: destination node on the card that will receive the command
+ * @cmd: command data
+ * @size: the command size
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_api_cmd_write(struct hinic3_api_cmd_chain *chain, u8 node_id,
+ const void *cmd, u16 size)
+{
+ /* write commands carry no response buffer */
+ return api_cmd(chain, node_id, cmd, size, NULL, 0);
+}
+
+/**
+ * hinic3_api_cmd_read - Read API CMD command
+ * @chain: chain for read command
+ * @node_id: destination node on the card that will receive the command
+ * @cmd: command data
+ * @size: the command size
+ * @ack: buffer for the response data
+ * @ack_size: size of the response buffer
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_api_cmd_read(struct hinic3_api_cmd_chain *chain, u8 node_id,
+ const void *cmd, u16 size, void *ack, u16 ack_size)
+{
+ return api_cmd(chain, node_id, cmd, size, ack, ack_size);
+}
+
+static enum hinic3_wait_return check_chain_restart_handler(void *priv_data)
+{
+ struct hinic3_api_cmd_chain *cmd_chain = priv_data;
+ u32 reg_addr, val;
+
+ if (!cmd_chain->hwdev->chip_present_flag)
+ return WAIT_PROCESS_ERR;
+
+ reg_addr = HINIC3_CSR_API_CMD_CHAIN_REQ_ADDR(cmd_chain->chain_type);
+ val = hinic3_hwif_read_reg(cmd_chain->hwdev->hwif, reg_addr);
+ if (!HINIC3_API_CMD_CHAIN_REQ_GET(val, RESTART))
+ return WAIT_PROCESS_CPL;
+
+ return WAIT_PROCESS_WAITING;
+}
+
+/**
+ * api_cmd_hw_restart - restart the chain in the HW
+ * @cmd_chain: the API CMD specific chain to restart
+ **/
+static int api_cmd_hw_restart(struct hinic3_api_cmd_chain *cmd_chain)
+{
+ struct hinic3_hwif *hwif = cmd_chain->hwdev->hwif;
+ u32 reg_addr, val;
+
+ /* Read Modify Write */
+ reg_addr = HINIC3_CSR_API_CMD_CHAIN_REQ_ADDR(cmd_chain->chain_type);
+ val = hinic3_hwif_read_reg(hwif, reg_addr);
+
+ val = HINIC3_API_CMD_CHAIN_REQ_CLEAR(val, RESTART);
+ val |= HINIC3_API_CMD_CHAIN_REQ_SET(1, RESTART);
+
+ hinic3_hwif_write_reg(hwif, reg_addr, val);
+
+ return hinic3_wait_for_timeout(cmd_chain, check_chain_restart_handler,
+ API_CMD_TIMEOUT, USEC_PER_MSEC);
+}
+
+/**
+ * api_cmd_ctrl_init - set the control register of a chain
+ * @chain: the API CMD specific chain to set control register for
+ **/
+static void api_cmd_ctrl_init(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_hwif *hwif = chain->hwdev->hwif;
+ u32 reg_addr, ctrl;
+ u32 size;
+
+ /* Read Modify Write */
+ reg_addr = HINIC3_CSR_API_CMD_CHAIN_CTRL_ADDR(chain->chain_type);
+
+ size = (u32)ilog2(chain->cell_size >> API_CMD_CHAIN_CELL_SIZE_SHIFT);
+
+ ctrl = hinic3_hwif_read_reg(hwif, reg_addr);
+
+ ctrl = HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, AEQE_EN) &
+ HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, CELL_SIZE);
+
+ ctrl |= HINIC3_API_CMD_CHAIN_CTRL_SET(0, AEQE_EN) |
+ HINIC3_API_CMD_CHAIN_CTRL_SET(size, CELL_SIZE);
+
+ hinic3_hwif_write_reg(hwif, reg_addr, ctrl);
+}
+
+/**
+ * api_cmd_set_status_addr - set the status address of a chain in the HW
+ * @chain: the API CMD specific chain to set status address for
+ **/
+static void api_cmd_set_status_addr(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_hwif *hwif = chain->hwdev->hwif;
+ u32 addr, val;
+
+ addr = HINIC3_CSR_API_CMD_STATUS_HI_ADDR(chain->chain_type);
+ val = upper_32_bits(chain->wb_status_paddr);
+ hinic3_hwif_write_reg(hwif, addr, val);
+
+ addr = HINIC3_CSR_API_CMD_STATUS_LO_ADDR(chain->chain_type);
+ val = lower_32_bits(chain->wb_status_paddr);
+ hinic3_hwif_write_reg(hwif, addr, val);
+}
+
+/**
+ * api_cmd_set_num_cells - set the number of cells of a chain in the HW
+ * @chain: the API CMD specific chain to set the number of cells for
+ **/
+static void api_cmd_set_num_cells(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_hwif *hwif = chain->hwdev->hwif;
+ u32 addr, val;
+
+ addr = HINIC3_CSR_API_CMD_CHAIN_NUM_CELLS_ADDR(chain->chain_type);
+ val = chain->num_cells;
+ hinic3_hwif_write_reg(hwif, addr, val);
+}
+
+/**
+ * api_cmd_head_init - set the head cell of a chain in the HW
+ * @chain: the API CMD specific chain to set the head for
+ **/
+static void api_cmd_head_init(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_hwif *hwif = chain->hwdev->hwif;
+ u32 addr, val;
+
+ addr = HINIC3_CSR_API_CMD_CHAIN_HEAD_HI_ADDR(chain->chain_type);
+ val = upper_32_bits(chain->head_cell_paddr);
+ hinic3_hwif_write_reg(hwif, addr, val);
+
+ addr = HINIC3_CSR_API_CMD_CHAIN_HEAD_LO_ADDR(chain->chain_type);
+ val = lower_32_bits(chain->head_cell_paddr);
+ hinic3_hwif_write_reg(hwif, addr, val);
+}
+
+static enum hinic3_wait_return check_chain_ready_handler(void *priv_data)
+{
+ struct hinic3_api_cmd_chain *chain = priv_data;
+ u32 addr, val;
+ u32 hw_cons_idx;
+
+ if (!chain->hwdev->chip_present_flag)
+ return WAIT_PROCESS_ERR;
+
+ addr = HINIC3_CSR_API_CMD_STATUS_0_ADDR(chain->chain_type);
+ val = hinic3_hwif_read_reg(chain->hwdev->hwif, addr);
+ hw_cons_idx = HINIC3_API_CMD_STATUS_GET(val, CONS_IDX);
+ /* wait for HW cons idx to be updated */
+ if (hw_cons_idx == chain->cons_idx)
+ return WAIT_PROCESS_CPL;
+ return WAIT_PROCESS_WAITING;
+}
+
+/**
+ * wait_for_ready_chain - wait for the chain to be ready
+ * @chain: the API CMD specific chain to wait for
+ * Return: 0 - success, negative - failure
+ **/
+static int wait_for_ready_chain(struct hinic3_api_cmd_chain *chain)
+{
+ return hinic3_wait_for_timeout(chain, check_chain_ready_handler,
+ API_CMD_TIMEOUT, USEC_PER_MSEC);
+}
+
+/**
+ * api_cmd_chain_hw_clean - clean the HW
+ * @chain: the API CMD specific chain
+ **/
+static void api_cmd_chain_hw_clean(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_hwif *hwif = chain->hwdev->hwif;
+ u32 addr, ctrl;
+
+ addr = HINIC3_CSR_API_CMD_CHAIN_CTRL_ADDR(chain->chain_type);
+
+ ctrl = hinic3_hwif_read_reg(hwif, addr);
+ ctrl = HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, RESTART_EN) &
+ HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, XOR_ERR) &
+ HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, AEQE_EN) &
+ HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, XOR_CHK_EN) &
+ HINIC3_API_CMD_CHAIN_CTRL_CLEAR(ctrl, CELL_SIZE);
+
+ hinic3_hwif_write_reg(hwif, addr, ctrl);
+}
+
+/**
+ * api_cmd_chain_hw_init - initialize the chain in the HW
+ * @chain: the API CMD specific chain to initialize in HW
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd_chain_hw_init(struct hinic3_api_cmd_chain *chain)
+{
+ api_cmd_chain_hw_clean(chain);
+
+ api_cmd_set_status_addr(chain);
+
+ if (api_cmd_hw_restart(chain)) {
+ sdk_err(chain->hwdev->dev_hdl, "Failed to restart api_cmd_hw\n");
+ return -EBUSY;
+ }
+
+ api_cmd_ctrl_init(chain);
+ api_cmd_set_num_cells(chain);
+ api_cmd_head_init(chain);
+
+ return wait_for_ready_chain(chain);
+}
+
+/**
+ * alloc_cmd_buf - allocate a dma buffer for API CMD command
+ * @chain: the API CMD specific chain for the cmd
+ * @cell: the cell in the HW for the cmd
+ * @cell_idx: the index of the cell
+ * Return: 0 - success, negative - failure
+ **/
+static int alloc_cmd_buf(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_cell *cell, u32 cell_idx)
+{
+ struct hinic3_api_cmd_cell_ctxt *cell_ctxt;
+ void *dev = chain->hwdev->dev_hdl;
+ void *buf_vaddr;
+ u64 buf_paddr;
+ int err = 0;
+
+ buf_vaddr = (u8 *)((u64)chain->buf_vaddr_base +
+ chain->buf_size_align * cell_idx);
+ buf_paddr = chain->buf_paddr_base +
+ chain->buf_size_align * cell_idx;
+
+ cell_ctxt = &chain->cell_ctxt[cell_idx];
+
+ cell_ctxt->api_cmd_vaddr = buf_vaddr;
+
+ /* set the cmd DMA address in the cell */
+ switch (chain->chain_type) {
+ case HINIC3_API_CMD_POLL_READ:
+ cell->read.hw_cmd_paddr = cpu_to_be64(buf_paddr);
+ break;
+ case HINIC3_API_CMD_WRITE_TO_MGMT_CPU:
+ case HINIC3_API_CMD_POLL_WRITE:
+ case HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ /* The data in the HW should be in Big Endian Format */
+ cell->write.hw_cmd_paddr = cpu_to_be64(buf_paddr);
+ break;
+ default:
+ sdk_err(dev, "Unknown API CMD Chain type: %d\n",
+ chain->chain_type);
+ err = -EINVAL;
+ break;
+ }
+
+ return err;
+}
+
+/**
+ * alloc_resp_buf - allocate a response buffer for API CMD command
+ * @chain: the API CMD specific chain for the cmd
+ * @cell: the cell in the HW for the cmd
+ * @cell_idx: the index of the cell
+ **/
+static void alloc_resp_buf(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_cell *cell, u32 cell_idx)
+{
+ struct hinic3_api_cmd_cell_ctxt *cell_ctxt;
+ void *resp_vaddr;
+ u64 resp_paddr;
+
+ resp_vaddr = (u8 *)((u64)chain->rsp_vaddr_base +
+ chain->rsp_size_align * cell_idx);
+ resp_paddr = chain->rsp_paddr_base +
+ chain->rsp_size_align * cell_idx;
+
+ cell_ctxt = &chain->cell_ctxt[cell_idx];
+
+ cell_ctxt->resp = resp_vaddr;
+ cell->read.hw_wb_resp_paddr = cpu_to_be64(resp_paddr);
+}
+
+static int hinic3_alloc_api_cmd_cell_buf(struct hinic3_api_cmd_chain *chain,
+ u32 cell_idx,
+ struct hinic3_api_cmd_cell *node)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ int err;
+
+ /* For read chain, we should allocate buffer for the response data */
+ if (chain->chain_type == HINIC3_API_CMD_MULTI_READ ||
+ chain->chain_type == HINIC3_API_CMD_POLL_READ)
+ alloc_resp_buf(chain, node, cell_idx);
+
+ switch (chain->chain_type) {
+ case HINIC3_API_CMD_WRITE_TO_MGMT_CPU:
+ case HINIC3_API_CMD_POLL_WRITE:
+ case HINIC3_API_CMD_POLL_READ:
+ case HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU:
+ err = alloc_cmd_buf(chain, node, cell_idx);
+ if (err) {
+ sdk_err(dev, "Failed to allocate cmd buffer\n");
+ goto alloc_cmd_buf_err;
+ }
+ break;
+ /* For multi read commands, the data section lives directly in
+ * the cell, so there is nothing extra to allocate.
+ */
+ case HINIC3_API_CMD_MULTI_READ:
+ chain->cell_ctxt[cell_idx].api_cmd_vaddr =
+ &node->read.hw_cmd_paddr;
+ break;
+ default:
+ sdk_err(dev, "Unsupported API CMD chain type\n");
+ err = -EINVAL;
+ goto alloc_cmd_buf_err;
+ }
+
+ return 0;
+
+alloc_cmd_buf_err:
+ return err;
+}
+
+/**
+ * api_cmd_create_cell - create API CMD cell of specific chain
+ * @chain: the API CMD specific chain to create its cell
+ * @cell_idx: the cell index to create
+ * @pre_node: previous cell
+ * @node_vaddr: the virt addr of the cell
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd_create_cell(struct hinic3_api_cmd_chain *chain, u32 cell_idx,
+ struct hinic3_api_cmd_cell *pre_node,
+ struct hinic3_api_cmd_cell **node_vaddr)
+{
+ struct hinic3_api_cmd_cell_ctxt *cell_ctxt;
+ struct hinic3_api_cmd_cell *node;
+ void *cell_vaddr;
+ u64 cell_paddr;
+ int err;
+
+ cell_vaddr = (void *)((u64)chain->cell_vaddr_base +
+ chain->cell_size_align * cell_idx);
+ cell_paddr = chain->cell_paddr_base +
+ chain->cell_size_align * cell_idx;
+
+ cell_ctxt = &chain->cell_ctxt[cell_idx];
+ cell_ctxt->cell_vaddr = cell_vaddr;
+ cell_ctxt->hwdev = chain->hwdev;
+ node = cell_ctxt->cell_vaddr;
+
+ if (!pre_node) {
+ chain->head_node = cell_vaddr;
+ chain->head_cell_paddr = (dma_addr_t)cell_paddr;
+ } else {
+ /* The data in the HW should be in Big Endian Format */
+ pre_node->next_cell_paddr = cpu_to_be64(cell_paddr);
+ }
+
+ /* Driver software should make sure that there is an empty API
+ * command cell at the end of the chain
+ */
+ node->next_cell_paddr = 0;
+
+ err = hinic3_alloc_api_cmd_cell_buf(chain, cell_idx, node);
+ if (err)
+ return err;
+
+ *node_vaddr = node;
+
+ return 0;
+}
+
+/**
+ * api_cmd_create_cells - create API CMD cells for specific chain
+ * @chain: the API CMD specific chain
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd_create_cells(struct hinic3_api_cmd_chain *chain)
+{
+ struct hinic3_api_cmd_cell *node = NULL, *pre_node = NULL;
+ void *dev = chain->hwdev->dev_hdl;
+ u32 cell_idx;
+ int err;
+
+ for (cell_idx = 0; cell_idx < chain->num_cells; cell_idx++) {
+ err = api_cmd_create_cell(chain, cell_idx, pre_node, &node);
+ if (err) {
+ sdk_err(dev, "Failed to create API CMD cell\n");
+ return err;
+ }
+
+ pre_node = node;
+ }
+
+ if (!node)
+ return -EFAULT;
+
+ /* make the final node point back to the start of the chain */
+ node->next_cell_paddr = cpu_to_be64(chain->head_cell_paddr);
+
+ /* set the current node to be the head */
+ chain->curr_node = chain->head_node;
+ return 0;
+}
+
+/**
+ * api_chain_init - initialize API CMD specific chain
+ * @chain: the API CMD specific chain to initialize
+ * @attr: attributes to set in the chain
+ * Return: 0 - success, negative - failure
+ **/
+static int api_chain_init(struct hinic3_api_cmd_chain *chain,
+ struct hinic3_api_cmd_chain_attr *attr)
+{
+ void *dev = chain->hwdev->dev_hdl;
+ size_t cell_ctxt_size;
+ size_t cells_buf_size;
+ int err;
+
+ chain->chain_type = attr->chain_type;
+ chain->num_cells = attr->num_cells;
+ chain->cell_size = attr->cell_size;
+ chain->rsp_size = attr->rsp_size;
+
+ chain->prod_idx = 0;
+ chain->cons_idx = 0;
+
+ if (chain->chain_type == HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_lock_init(&chain->async_lock);
+ else
+ sema_init(&chain->sem, 1);
+
+ cell_ctxt_size = chain->num_cells * sizeof(*chain->cell_ctxt);
+ if (!cell_ctxt_size) {
+ sdk_err(dev, "Api chain cell size cannot be zero\n");
+ err = -EINVAL;
+ goto alloc_cell_ctxt_err;
+ }
+
+ chain->cell_ctxt = kzalloc(cell_ctxt_size, GFP_KERNEL);
+ if (!chain->cell_ctxt) {
+ sdk_err(dev, "Failed to allocate cell contexts for a chain\n");
+ err = -ENOMEM;
+ goto alloc_cell_ctxt_err;
+ }
+
+ chain->wb_status = dma_zalloc_coherent(dev,
+ sizeof(*chain->wb_status),
+ &chain->wb_status_paddr,
+ GFP_KERNEL);
+ if (!chain->wb_status) {
+ sdk_err(dev, "Failed to allocate DMA wb status\n");
+ err = -ENOMEM;
+ goto alloc_wb_status_err;
+ }
+
+ chain->cell_size_align = ALIGN((u64)chain->cell_size,
+ API_CMD_NODE_ALIGN_SIZE);
+ chain->rsp_size_align = ALIGN((u64)chain->rsp_size,
+ API_CHAIN_RESP_ALIGNMENT);
+ chain->buf_size_align = ALIGN(API_CMD_BUF_SIZE, API_PAYLOAD_ALIGN_SIZE);
+
+ cells_buf_size = (chain->cell_size_align + chain->rsp_size_align +
+ chain->buf_size_align) * chain->num_cells;
+
+ err = hinic3_dma_zalloc_coherent_align(dev, cells_buf_size,
+ API_CMD_NODE_ALIGN_SIZE,
+ GFP_KERNEL,
+ &chain->cells_addr);
+ if (err) {
+ sdk_err(dev, "Failed to allocate API CMD cells buffer\n");
+ goto alloc_cells_buf_err;
+ }
+
+ chain->cell_vaddr_base = chain->cells_addr.align_vaddr;
+ chain->cell_paddr_base = chain->cells_addr.align_paddr;
+
+ chain->rsp_vaddr_base = (u8 *)((u64)chain->cell_vaddr_base +
+ chain->cell_size_align * chain->num_cells);
+ chain->rsp_paddr_base = chain->cell_paddr_base +
+ chain->cell_size_align * chain->num_cells;
+
+ chain->buf_vaddr_base = (u8 *)((u64)chain->rsp_vaddr_base +
+ chain->rsp_size_align * chain->num_cells);
+ chain->buf_paddr_base = chain->rsp_paddr_base +
+ chain->rsp_size_align * chain->num_cells;
+
+ return 0;
+
+alloc_cells_buf_err:
+ dma_free_coherent(dev, sizeof(*chain->wb_status),
+ chain->wb_status, chain->wb_status_paddr);
+
+alloc_wb_status_err:
+ kfree(chain->cell_ctxt);
+
+/*lint -save -e548*/
+alloc_cell_ctxt_err:
+ if (chain->chain_type == HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_lock_deinit(&chain->async_lock);
+ else
+ sema_deinit(&chain->sem);
+/*lint -restore*/
+ return err;
+}
+
+/**
+ * api_chain_free - free API CMD specific chain
+ * @chain: the API CMD specific chain to free
+ **/
+static void api_chain_free(struct hinic3_api_cmd_chain *chain)
+{
+ void *dev = chain->hwdev->dev_hdl;
+
+ hinic3_dma_free_coherent_align(dev, &chain->cells_addr);
+
+ dma_free_coherent(dev, sizeof(*chain->wb_status),
+ chain->wb_status, chain->wb_status_paddr);
+ kfree(chain->cell_ctxt);
+
+ if (chain->chain_type == HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU)
+ spin_lock_deinit(&chain->async_lock);
+ else
+ sema_deinit(&chain->sem);
+}
+
+/**
+ * api_cmd_create_chain - create API CMD specific chain
+ * @cmd_chain: double pointer that receives the created chain
+ * @attr: attributes to set in the chain
+ * Return: 0 - success, negative - failure
+ **/
+static int api_cmd_create_chain(struct hinic3_api_cmd_chain **cmd_chain,
+ struct hinic3_api_cmd_chain_attr *attr)
+{
+ struct hinic3_hwdev *hwdev = attr->hwdev;
+ struct hinic3_api_cmd_chain *chain = NULL;
+ int err;
+
+ if (attr->num_cells & (attr->num_cells - 1)) {
+ sdk_err(hwdev->dev_hdl, "Invalid number of cells, must be power of 2\n");
+ return -EINVAL;
+ }
+
+ chain = kzalloc(sizeof(*chain), GFP_KERNEL);
+ if (!chain)
+ return -ENOMEM;
+
+ chain->hwdev = hwdev;
+
+ err = api_chain_init(chain, attr);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize chain\n");
+ goto chain_init_err;
+ }
+
+ err = api_cmd_create_cells(chain);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to create cells for API CMD chain\n");
+ goto create_cells_err;
+ }
+
+ err = api_cmd_chain_hw_init(chain);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize chain HW\n");
+ goto chain_hw_init_err;
+ }
+
+ *cmd_chain = chain;
+ return 0;
+
+chain_hw_init_err:
+create_cells_err:
+ api_chain_free(chain);
+
+chain_init_err:
+ kfree(chain);
+ return err;
+}
+
+/**
+ * api_cmd_destroy_chain - destroy API CMD specific chain
+ * @chain: the API CMD specific chain to destroy
+ **/
+static void api_cmd_destroy_chain(struct hinic3_api_cmd_chain *chain)
+{
+ api_chain_free(chain);
+ kfree(chain);
+}
+
+/**
+ * hinic3_api_cmd_init - Initialize all the API CMD chains
+ * @hwdev: the HW device that owns the chains
+ * @chain: the API CMD chains that will be initialized
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_api_cmd_init(struct hinic3_hwdev *hwdev,
+ struct hinic3_api_cmd_chain **chain)
+{
+ void *dev = hwdev->dev_hdl;
+ struct hinic3_api_cmd_chain_attr attr;
+ u8 chain_type, i;
+ int err;
+
+ if (COMM_SUPPORT_API_CHAIN(hwdev) == 0)
+ return 0;
+
+ attr.hwdev = hwdev;
+ attr.num_cells = API_CHAIN_NUM_CELLS;
+ attr.cell_size = API_CHAIN_CELL_SIZE;
+ attr.rsp_size = API_CHAIN_RSP_DATA_SIZE;
+
+ chain_type = HINIC3_API_CMD_WRITE_TO_MGMT_CPU;
+ for (; chain_type < HINIC3_API_CMD_MAX; chain_type++) {
+ attr.chain_type = chain_type;
+
+ err = api_cmd_create_chain(&chain[chain_type], &attr);
+ if (err) {
+ sdk_err(dev, "Failed to create chain %d\n", chain_type);
+ goto create_chain_err;
+ }
+ }
+
+ return 0;
+
+create_chain_err:
+ i = HINIC3_API_CMD_WRITE_TO_MGMT_CPU;
+ for (; i < chain_type; i++)
+ api_cmd_destroy_chain(chain[i]);
+
+ return err;
+}
+
+/**
+ * hinic3_api_cmd_free - free the API CMD chains
+ * @hwdev: the HW device that owns the chains
+ * @chain: the API CMD chains that will be freed
+ **/
+void hinic3_api_cmd_free(const struct hinic3_hwdev *hwdev, struct hinic3_api_cmd_chain **chain)
+{
+ u8 chain_type;
+
+ if (COMM_SUPPORT_API_CHAIN(hwdev) == 0)
+ return;
+
+ chain_type = HINIC3_API_CMD_WRITE_TO_MGMT_CPU;
+
+ for (; chain_type < HINIC3_API_CMD_MAX; chain_type++)
+ api_cmd_destroy_chain(chain[chain_type]);
+}
+
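For reviewers, the expected call pattern for the chain API above (a hedged
sketch; the mgmt module elsewhere in this series is the authoritative
user, and the node id and payload here are made-up placeholders):

	u8 cmd[16] = {0};	/* hypothetical payload */
	int err;

	err = hinic3_api_cmd_write(chains[HINIC3_API_CMD_WRITE_TO_MGMT_CPU],
				   0 /* node id */, cmd, sizeof(cmd));
	if (err)
		return err;	/* chain registers were already dumped */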
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.h
new file mode 100644
index 000000000000..727e668bf237
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_api_cmd.h
@@ -0,0 +1,286 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_API_CMD_H
+#define HINIC3_API_CMD_H
+
+#include <linux/semaphore.h>
+
+#include "hinic3_eqs.h"
+#include "hinic3_hwif.h"
+
+/* api_cmd_cell.ctrl structure */
+#define HINIC3_API_CMD_CELL_CTRL_CELL_LEN_SHIFT 0
+#define HINIC3_API_CMD_CELL_CTRL_RD_DMA_ATTR_OFF_SHIFT 16
+#define HINIC3_API_CMD_CELL_CTRL_WR_DMA_ATTR_OFF_SHIFT 24
+#define HINIC3_API_CMD_CELL_CTRL_XOR_CHKSUM_SHIFT 56
+
+#define HINIC3_API_CMD_CELL_CTRL_CELL_LEN_MASK 0x3FU
+#define HINIC3_API_CMD_CELL_CTRL_RD_DMA_ATTR_OFF_MASK 0x3FU
+#define HINIC3_API_CMD_CELL_CTRL_WR_DMA_ATTR_OFF_MASK 0x3FU
+#define HINIC3_API_CMD_CELL_CTRL_XOR_CHKSUM_MASK 0xFFU
+
+#define HINIC3_API_CMD_CELL_CTRL_SET(val, member) \
+ ((((u64)(val)) & HINIC3_API_CMD_CELL_CTRL_##member##_MASK) << \
+ HINIC3_API_CMD_CELL_CTRL_##member##_SHIFT)
+
+/* api_cmd_cell.desc structure */
+#define HINIC3_API_CMD_DESC_API_TYPE_SHIFT 0
+#define HINIC3_API_CMD_DESC_RD_WR_SHIFT 1
+#define HINIC3_API_CMD_DESC_MGMT_BYPASS_SHIFT 2
+#define HINIC3_API_CMD_DESC_RESP_AEQE_EN_SHIFT 3
+#define HINIC3_API_CMD_DESC_APICHN_RSVD_SHIFT 4
+#define HINIC3_API_CMD_DESC_APICHN_CODE_SHIFT 6
+#define HINIC3_API_CMD_DESC_PRIV_DATA_SHIFT 8
+#define HINIC3_API_CMD_DESC_DEST_SHIFT 32
+#define HINIC3_API_CMD_DESC_SIZE_SHIFT 40
+#define HINIC3_API_CMD_DESC_XOR_CHKSUM_SHIFT 56
+
+#define HINIC3_API_CMD_DESC_API_TYPE_MASK 0x1U
+#define HINIC3_API_CMD_DESC_RD_WR_MASK 0x1U
+#define HINIC3_API_CMD_DESC_MGMT_BYPASS_MASK 0x1U
+#define HINIC3_API_CMD_DESC_RESP_AEQE_EN_MASK 0x1U
+#define HINIC3_API_CMD_DESC_APICHN_RSVD_MASK 0x3U
+#define HINIC3_API_CMD_DESC_APICHN_CODE_MASK 0x3U
+#define HINIC3_API_CMD_DESC_PRIV_DATA_MASK 0xFFFFFFU
+#define HINIC3_API_CMD_DESC_DEST_MASK 0x1FU
+#define HINIC3_API_CMD_DESC_SIZE_MASK 0x7FFU
+#define HINIC3_API_CMD_DESC_XOR_CHKSUM_MASK 0xFFU
+
+#define HINIC3_API_CMD_DESC_SET(val, member) \
+ ((((u64)(val)) & HINIC3_API_CMD_DESC_##member##_MASK) << \
+ HINIC3_API_CMD_DESC_##member##_SHIFT)
+
+/* api_cmd_status header */
+#define HINIC3_API_CMD_STATUS_HEADER_VALID_SHIFT 0
+#define HINIC3_API_CMD_STATUS_HEADER_CHAIN_ID_SHIFT 16
+
+#define HINIC3_API_CMD_STATUS_HEADER_VALID_MASK 0xFFU
+#define HINIC3_API_CMD_STATUS_HEADER_CHAIN_ID_MASK 0xFFU
+
+#define HINIC3_API_CMD_STATUS_HEADER_GET(val, member) \
+ (((val) >> HINIC3_API_CMD_STATUS_HEADER_##member##_SHIFT) & \
+ HINIC3_API_CMD_STATUS_HEADER_##member##_MASK)
+
+/* API_CHAIN_REQ CSR: 0x0020+api_idx*0x080 */
+#define HINIC3_API_CMD_CHAIN_REQ_RESTART_SHIFT 1
+#define HINIC3_API_CMD_CHAIN_REQ_WB_TRIGGER_SHIFT 2
+
+#define HINIC3_API_CMD_CHAIN_REQ_RESTART_MASK 0x1U
+#define HINIC3_API_CMD_CHAIN_REQ_WB_TRIGGER_MASK 0x1U
+
+#define HINIC3_API_CMD_CHAIN_REQ_SET(val, member) \
+ (((val) & HINIC3_API_CMD_CHAIN_REQ_##member##_MASK) << \
+ HINIC3_API_CMD_CHAIN_REQ_##member##_SHIFT)
+
+#define HINIC3_API_CMD_CHAIN_REQ_GET(val, member) \
+ (((val) >> HINIC3_API_CMD_CHAIN_REQ_##member##_SHIFT) & \
+ HINIC3_API_CMD_CHAIN_REQ_##member##_MASK)
+
+#define HINIC3_API_CMD_CHAIN_REQ_CLEAR(val, member) \
+ ((val) & (~(HINIC3_API_CMD_CHAIN_REQ_##member##_MASK \
+ << HINIC3_API_CMD_CHAIN_REQ_##member##_SHIFT)))
+
+/* API_CHAIN_CTL CSR: 0x0014+api_idx*0x080 */
+#define HINIC3_API_CMD_CHAIN_CTRL_RESTART_EN_SHIFT 1
+#define HINIC3_API_CMD_CHAIN_CTRL_XOR_ERR_SHIFT 2
+#define HINIC3_API_CMD_CHAIN_CTRL_AEQE_EN_SHIFT 4
+#define HINIC3_API_CMD_CHAIN_CTRL_AEQ_ID_SHIFT 8
+#define HINIC3_API_CMD_CHAIN_CTRL_XOR_CHK_EN_SHIFT 28
+#define HINIC3_API_CMD_CHAIN_CTRL_CELL_SIZE_SHIFT 30
+
+#define HINIC3_API_CMD_CHAIN_CTRL_RESTART_EN_MASK 0x1U
+#define HINIC3_API_CMD_CHAIN_CTRL_XOR_ERR_MASK 0x1U
+#define HINIC3_API_CMD_CHAIN_CTRL_AEQE_EN_MASK 0x1U
+#define HINIC3_API_CMD_CHAIN_CTRL_AEQ_ID_MASK 0x3U
+#define HINIC3_API_CMD_CHAIN_CTRL_XOR_CHK_EN_MASK 0x3U
+#define HINIC3_API_CMD_CHAIN_CTRL_CELL_SIZE_MASK 0x3U
+
+#define HINIC3_API_CMD_CHAIN_CTRL_SET(val, member) \
+ (((val) & HINIC3_API_CMD_CHAIN_CTRL_##member##_MASK) << \
+ HINIC3_API_CMD_CHAIN_CTRL_##member##_SHIFT)
+
+#define HINIC3_API_CMD_CHAIN_CTRL_CLEAR(val, member) \
+ ((val) & (~(HINIC3_API_CMD_CHAIN_CTRL_##member##_MASK \
+ << HINIC3_API_CMD_CHAIN_CTRL_##member##_SHIFT)))
+
+/* api_cmd rsp header */
+#define HINIC3_API_CMD_RESP_HEAD_VALID_SHIFT 0
+#define HINIC3_API_CMD_RESP_HEAD_STATUS_SHIFT 8
+#define HINIC3_API_CMD_RESP_HEAD_CHAIN_ID_SHIFT 16
+#define HINIC3_API_CMD_RESP_HEAD_RESP_LEN_SHIFT 24
+#define HINIC3_API_CMD_RESP_HEAD_DRIVER_PRIV_SHIFT 40
+
+#define HINIC3_API_CMD_RESP_HEAD_VALID_MASK 0xFF
+#define HINIC3_API_CMD_RESP_HEAD_STATUS_MASK 0xFFU
+#define HINIC3_API_CMD_RESP_HEAD_CHAIN_ID_MASK 0xFFU
+#define HINIC3_API_CMD_RESP_HEAD_RESP_LEN_MASK 0x1FFU
+#define HINIC3_API_CMD_RESP_HEAD_DRIVER_PRIV_MASK 0xFFFFFFU
+
+#define HINIC3_API_CMD_RESP_HEAD_VALID_CODE 0xFF
+
+#define HINIC3_API_CMD_RESP_HEADER_VALID(val) \
+ (((val) & HINIC3_API_CMD_RESP_HEAD_VALID_MASK) == \
+ HINIC3_API_CMD_RESP_HEAD_VALID_CODE)
+
+#define HINIC3_API_CMD_RESP_HEAD_GET(val, member) \
+ (((val) >> HINIC3_API_CMD_RESP_HEAD_##member##_SHIFT) & \
+ HINIC3_API_CMD_RESP_HEAD_##member##_MASK)
+
+#define HINIC3_API_CMD_RESP_HEAD_CHAIN_ID(val) \
+ (((val) >> HINIC3_API_CMD_RESP_HEAD_CHAIN_ID_SHIFT) & \
+ HINIC3_API_CMD_RESP_HEAD_CHAIN_ID_MASK)
+
+#define HINIC3_API_CMD_RESP_HEAD_DRIVER_PRIV(val) \
+ ((u16)(((val) >> HINIC3_API_CMD_RESP_HEAD_DRIVER_PRIV_SHIFT) & \
+ HINIC3_API_CMD_RESP_HEAD_DRIVER_PRIV_MASK))
+/* API_STATUS_0 CSR: 0x0030+api_idx*0x080 */
+#define HINIC3_API_CMD_STATUS_CONS_IDX_MASK 0xFFFFFFU
+#define HINIC3_API_CMD_STATUS_CONS_IDX_SHIFT 0
+
+#define HINIC3_API_CMD_STATUS_FSM_MASK 0xFU
+#define HINIC3_API_CMD_STATUS_FSM_SHIFT 24
+
+#define HINIC3_API_CMD_STATUS_CHKSUM_ERR_MASK 0x3U
+#define HINIC3_API_CMD_STATUS_CHKSUM_ERR_SHIFT 28
+
+#define HINIC3_API_CMD_STATUS_CPLD_ERR_MASK 0x1U
+#define HINIC3_API_CMD_STATUS_CPLD_ERR_SHIFT 30
+
+#define HINIC3_API_CMD_STATUS_CONS_IDX(val) \
+ ((val) & HINIC3_API_CMD_STATUS_CONS_IDX_MASK)
+
+#define HINIC3_API_CMD_STATUS_CHKSUM_ERR(val) \
+ (((val) >> HINIC3_API_CMD_STATUS_CHKSUM_ERR_SHIFT) & \
+ HINIC3_API_CMD_STATUS_CHKSUM_ERR_MASK)
+
+#define HINIC3_API_CMD_STATUS_GET(val, member) \
+ (((val) >> HINIC3_API_CMD_STATUS_##member##_SHIFT) & \
+ HINIC3_API_CMD_STATUS_##member##_MASK)
+
+enum hinic3_api_cmd_chain_type {
+ /* write to mgmt cpu command with completion */
+ HINIC3_API_CMD_WRITE_TO_MGMT_CPU = 2,
+ /* multi read command with completion notification - not used */
+ HINIC3_API_CMD_MULTI_READ = 3,
+ /* write command without completion notification */
+ HINIC3_API_CMD_POLL_WRITE = 4,
+ /* read command without completion notification */
+ HINIC3_API_CMD_POLL_READ = 5,
+ /* async write to mgmt cpu command, no completion wait */
+ HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU = 6,
+ HINIC3_API_CMD_MAX,
+};
+
+struct hinic3_api_cmd_status {
+ u64 header;
+ u32 buf_desc;
+ u32 cell_addr_hi;
+ u32 cell_addr_lo;
+ u32 rsvd0;
+ u64 rsvd1;
+};
+
+/* HW struct */
+struct hinic3_api_cmd_cell {
+ u64 ctrl;
+
+ /* address is 64 bit in HW struct */
+ u64 next_cell_paddr;
+
+ u64 desc;
+
+ /* HW struct */
+ union {
+ struct {
+ u64 hw_cmd_paddr;
+ } write;
+
+ struct {
+ u64 hw_wb_resp_paddr;
+ u64 hw_cmd_paddr;
+ } read;
+ };
+};
+
+struct hinic3_api_cmd_resp_fmt {
+ u64 header;
+ u64 resp_data;
+};
+
+struct hinic3_api_cmd_cell_ctxt {
+ struct hinic3_api_cmd_cell *cell_vaddr;
+
+ void *api_cmd_vaddr;
+
+ struct hinic3_api_cmd_resp_fmt *resp;
+
+ struct completion done;
+ int status;
+
+ u32 saved_prod_idx;
+ struct hinic3_hwdev *hwdev;
+};
+
+struct hinic3_api_cmd_chain_attr {
+ struct hinic3_hwdev *hwdev;
+ enum hinic3_api_cmd_chain_type chain_type;
+
+ u32 num_cells;
+ u16 rsp_size;
+ u16 cell_size;
+};
+
+struct hinic3_api_cmd_chain {
+ struct hinic3_hwdev *hwdev;
+ enum hinic3_api_cmd_chain_type chain_type;
+
+ u32 num_cells;
+ u16 cell_size;
+ u16 rsp_size;
+ u32 rsvd1;
+
+ /* HW members are in 24 bit format */
+ u32 prod_idx;
+ u32 cons_idx;
+
+ struct semaphore sem;
+ /* Async cmds cannot sleep, so they are serialized by a spinlock */
+ spinlock_t async_lock;
+
+ dma_addr_t wb_status_paddr;
+ struct hinic3_api_cmd_status *wb_status;
+
+ dma_addr_t head_cell_paddr;
+ struct hinic3_api_cmd_cell *head_node;
+
+ struct hinic3_api_cmd_cell_ctxt *cell_ctxt;
+ struct hinic3_api_cmd_cell *curr_node;
+
+ struct hinic3_dma_addr_align cells_addr;
+
+ u8 *cell_vaddr_base;
+ u64 cell_paddr_base;
+ u8 *rsp_vaddr_base;
+ u64 rsp_paddr_base;
+ u8 *buf_vaddr_base;
+ u64 buf_paddr_base;
+ u64 cell_size_align;
+ u64 rsp_size_align;
+ u64 buf_size_align;
+
+ u64 rsvd2;
+};
+
+int hinic3_api_cmd_write(struct hinic3_api_cmd_chain *chain, u8 node_id,
+ const void *cmd, u16 size);
+
+int hinic3_api_cmd_read(struct hinic3_api_cmd_chain *chain, u8 node_id,
+ const void *cmd, u16 size, void *ack, u16 ack_size);
+
+int hinic3_api_cmd_init(struct hinic3_hwdev *hwdev,
+ struct hinic3_api_cmd_chain **chain);
+
+void hinic3_api_cmd_free(const struct hinic3_hwdev *hwdev, struct hinic3_api_cmd_chain **chain);
+
+#endif
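A quick worked example of the accessor macros above: given a raw
API_STATUS_0 register value 'val' (illustrative only),

	u32 ci  = HINIC3_API_CMD_STATUS_GET(val, CONS_IDX);	/* bits 0..23 */
	u32 fsm = HINIC3_API_CMD_STATUS_GET(val, FSM);		/* bits 24..27 */

which matches how dump_api_chain_reg() decodes the chain state.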
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.c
new file mode 100644
index 000000000000..230859adf0b2
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.c
@@ -0,0 +1,1543 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/errno.h>
+#include <linux/completion.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_eqs.h"
+#include "hinic3_common.h"
+#include "hinic3_wq.h"
+#include "hinic3_hw_comm.h"
+#include "hinic3_hwif.h"
+#include "hinic3_cmdq.h"
+
+#define HINIC3_CMDQ_BUF_SIZE 2048U
+
+#define CMDQ_CMD_TIMEOUT 5000 /* millisecond */
+
+#define UPPER_8_BITS(data) (((data) >> 8) & 0xFF)
+#define LOWER_8_BITS(data) ((data) & 0xFF)
+
+#define CMDQ_DB_INFO_HI_PROD_IDX_SHIFT 0
+#define CMDQ_DB_INFO_HI_PROD_IDX_MASK 0xFFU
+#define CMDQ_DB_INFO_SET(val, member) \
+ ((((u32)(val)) & CMDQ_DB_INFO_##member##_MASK) << \
+ CMDQ_DB_INFO_##member##_SHIFT)
+
+#define CMDQ_DB_HEAD_QUEUE_TYPE_SHIFT 23
+#define CMDQ_DB_HEAD_CMDQ_TYPE_SHIFT 24
+#define CMDQ_DB_HEAD_SRC_TYPE_SHIFT 27
+#define CMDQ_DB_HEAD_QUEUE_TYPE_MASK 0x1U
+#define CMDQ_DB_HEAD_CMDQ_TYPE_MASK 0x7U
+#define CMDQ_DB_HEAD_SRC_TYPE_MASK 0x1FU
+#define CMDQ_DB_HEAD_SET(val, member) \
+ ((((u32)(val)) & CMDQ_DB_HEAD_##member##_MASK) << \
+ CMDQ_DB_HEAD_##member##_SHIFT)
+
+#define CMDQ_CTRL_PI_SHIFT 0
+#define CMDQ_CTRL_CMD_SHIFT 16
+#define CMDQ_CTRL_MOD_SHIFT 24
+#define CMDQ_CTRL_ACK_TYPE_SHIFT 29
+#define CMDQ_CTRL_HW_BUSY_BIT_SHIFT 31
+
+#define CMDQ_CTRL_PI_MASK 0xFFFFU
+#define CMDQ_CTRL_CMD_MASK 0xFFU
+#define CMDQ_CTRL_MOD_MASK 0x1FU
+#define CMDQ_CTRL_ACK_TYPE_MASK 0x3U
+#define CMDQ_CTRL_HW_BUSY_BIT_MASK 0x1U
+
+#define CMDQ_CTRL_SET(val, member) \
+ ((((u32)(val)) & CMDQ_CTRL_##member##_MASK) << \
+ CMDQ_CTRL_##member##_SHIFT)
+
+#define CMDQ_CTRL_GET(val, member) \
+ (((val) >> CMDQ_CTRL_##member##_SHIFT) & \
+ CMDQ_CTRL_##member##_MASK)
+
+#define CMDQ_WQE_HEADER_BUFDESC_LEN_SHIFT 0
+#define CMDQ_WQE_HEADER_COMPLETE_FMT_SHIFT 15
+#define CMDQ_WQE_HEADER_DATA_FMT_SHIFT 22
+#define CMDQ_WQE_HEADER_COMPLETE_REQ_SHIFT 23
+#define CMDQ_WQE_HEADER_COMPLETE_SECT_LEN_SHIFT 27
+#define CMDQ_WQE_HEADER_CTRL_LEN_SHIFT 29
+#define CMDQ_WQE_HEADER_HW_BUSY_BIT_SHIFT 31
+
+#define CMDQ_WQE_HEADER_BUFDESC_LEN_MASK 0xFFU
+#define CMDQ_WQE_HEADER_COMPLETE_FMT_MASK 0x1U
+#define CMDQ_WQE_HEADER_DATA_FMT_MASK 0x1U
+#define CMDQ_WQE_HEADER_COMPLETE_REQ_MASK 0x1U
+#define CMDQ_WQE_HEADER_COMPLETE_SECT_LEN_MASK 0x3U
+#define CMDQ_WQE_HEADER_CTRL_LEN_MASK 0x3U
+#define CMDQ_WQE_HEADER_HW_BUSY_BIT_MASK 0x1U
+
+#define CMDQ_WQE_HEADER_SET(val, member) \
+ ((((u32)(val)) & CMDQ_WQE_HEADER_##member##_MASK) << \
+ CMDQ_WQE_HEADER_##member##_SHIFT)
+
+#define CMDQ_WQE_HEADER_GET(val, member) \
+ (((val) >> CMDQ_WQE_HEADER_##member##_SHIFT) & \
+ CMDQ_WQE_HEADER_##member##_MASK)
+
+#define CMDQ_CTXT_CURR_WQE_PAGE_PFN_SHIFT 0
+#define CMDQ_CTXT_EQ_ID_SHIFT 53
+#define CMDQ_CTXT_CEQ_ARM_SHIFT 61
+#define CMDQ_CTXT_CEQ_EN_SHIFT 62
+#define CMDQ_CTXT_HW_BUSY_BIT_SHIFT 63
+
+#define CMDQ_CTXT_CURR_WQE_PAGE_PFN_MASK 0xFFFFFFFFFFFFF
+#define CMDQ_CTXT_EQ_ID_MASK 0xFF
+#define CMDQ_CTXT_CEQ_ARM_MASK 0x1
+#define CMDQ_CTXT_CEQ_EN_MASK 0x1
+#define CMDQ_CTXT_HW_BUSY_BIT_MASK 0x1
+
+#define CMDQ_CTXT_PAGE_INFO_SET(val, member) \
+ (((u64)(val) & CMDQ_CTXT_##member##_MASK) << \
+ CMDQ_CTXT_##member##_SHIFT)
+
+#define CMDQ_CTXT_PAGE_INFO_GET(val, member) \
+ (((u64)(val) >> CMDQ_CTXT_##member##_SHIFT) & \
+ CMDQ_CTXT_##member##_MASK)
+
+#define CMDQ_CTXT_WQ_BLOCK_PFN_SHIFT 0
+#define CMDQ_CTXT_CI_SHIFT 52
+
+#define CMDQ_CTXT_WQ_BLOCK_PFN_MASK 0xFFFFFFFFFFFFF
+#define CMDQ_CTXT_CI_MASK 0xFFF
+
+#define CMDQ_CTXT_BLOCK_INFO_SET(val, member) \
+ (((u64)(val) & CMDQ_CTXT_##member##_MASK) << \
+ CMDQ_CTXT_##member##_SHIFT)
+
+#define CMDQ_CTXT_BLOCK_INFO_GET(val, member) \
+ (((u64)(val) >> CMDQ_CTXT_##member##_SHIFT) & \
+ CMDQ_CTXT_##member##_MASK)
+
+#define SAVED_DATA_ARM_SHIFT 31
+
+#define SAVED_DATA_ARM_MASK 0x1U
+
+#define SAVED_DATA_SET(val, member) \
+ (((val) & SAVED_DATA_##member##_MASK) << \
+ SAVED_DATA_##member##_SHIFT)
+
+#define SAVED_DATA_CLEAR(val, member) \
+ ((val) & (~(SAVED_DATA_##member##_MASK << \
+ SAVED_DATA_##member##_SHIFT)))
+
+#define WQE_ERRCODE_VAL_SHIFT 0
+
+#define WQE_ERRCODE_VAL_MASK 0x7FFFFFFF
+
+#define WQE_ERRCODE_GET(val, member) \
+ (((val) >> WQE_ERRCODE_##member##_SHIFT) & \
+ WQE_ERRCODE_##member##_MASK)
+
+#define CEQE_CMDQ_TYPE_SHIFT 0
+
+#define CEQE_CMDQ_TYPE_MASK 0x7
+
+#define CEQE_CMDQ_GET(val, member) \
+ (((val) >> CEQE_CMDQ_##member##_SHIFT) & \
+ CEQE_CMDQ_##member##_MASK)
+
+#define WQE_COMPLETED(ctrl_info) CMDQ_CTRL_GET(ctrl_info, HW_BUSY_BIT)
+
+#define WQE_HEADER(wqe) ((struct hinic3_cmdq_header *)(wqe))
+
+#define CMDQ_DB_PI_OFF(pi) (((u16)LOWER_8_BITS(pi)) << 3)
+
+#define CMDQ_DB_ADDR(db_base, pi) \
+ (((u8 *)(db_base)) + CMDQ_DB_PI_OFF(pi))
+
+#define CMDQ_PFN_SHIFT 12
+#define CMDQ_PFN(addr) ((addr) >> CMDQ_PFN_SHIFT)
+
+#define FIRST_DATA_TO_WRITE_LAST sizeof(u64)
+
+#define WQE_LCMD_SIZE 64
+#define WQE_SCMD_SIZE 64
+
+#define COMPLETE_LEN 3
+
+#define CMDQ_WQEBB_SIZE 64
+#define CMDQ_WQE_SIZE 64
+
+#define cmdq_to_cmdqs(cmdq) container_of((cmdq) - (cmdq)->cmdq_type, \
+ struct hinic3_cmdqs, cmdq[0])
+
+#define CMDQ_SEND_CMPT_CODE 10
+#define CMDQ_COMPLETE_CMPT_CODE 11
+#define CMDQ_FORCE_STOP_CMPT_CODE 12
+
+enum cmdq_scmd_type {
+ CMDQ_SET_ARM_CMD = 2,
+};
+
+enum cmdq_wqe_type {
+ WQE_LCMD_TYPE,
+ WQE_SCMD_TYPE,
+};
+
+enum ctrl_sect_len {
+ CTRL_SECT_LEN = 1,
+ CTRL_DIRECT_SECT_LEN = 2,
+};
+
+enum bufdesc_len {
+ BUFDESC_LCMD_LEN = 2,
+ BUFDESC_SCMD_LEN = 3,
+};
+
+enum data_format {
+ DATA_SGE,
+ DATA_DIRECT,
+};
+
+enum completion_format {
+ COMPLETE_DIRECT,
+ COMPLETE_SGE,
+};
+
+enum completion_request {
+ CEQ_SET = 1,
+};
+
+enum cmdq_cmd_type {
+ SYNC_CMD_DIRECT_RESP,
+ SYNC_CMD_SGE_RESP,
+ ASYNC_CMD,
+};
+
+#define NUM_WQEBBS_FOR_CMDQ_WQE 1
+
+bool hinic3_cmdq_idle(struct hinic3_cmdq *cmdq)
+{
+ return hinic3_wq_is_empty(&cmdq->wq);
+}
+
+static void *cmdq_read_wqe(struct hinic3_wq *wq, u16 *ci)
+{
+ if (hinic3_wq_is_empty(wq))
+ return NULL;
+
+ return hinic3_wq_read_one_wqebb(wq, ci);
+}
+
+static void *cmdq_get_wqe(struct hinic3_wq *wq, u16 *pi)
+{
+ if (!hinic3_wq_free_wqebbs(wq))
+ return NULL;
+
+ return hinic3_wq_get_one_wqebb(wq, pi);
+}
+
+struct hinic3_cmd_buf *hinic3_alloc_cmd_buf(void *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ void *dev = NULL;
+
+ if (!hwdev) {
+ pr_err("Failed to alloc cmd buf, Invalid hwdev\n");
+ return NULL;
+ }
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+ dev = ((struct hinic3_hwdev *)hwdev)->dev_hdl;
+
+ cmd_buf = kzalloc(sizeof(*cmd_buf), GFP_ATOMIC);
+ if (!cmd_buf) {
+ sdk_err(dev, "Failed to allocate cmd buf\n");
+ return NULL;
+ }
+
+ cmd_buf->buf = pci_pool_alloc(cmdqs->cmd_buf_pool, GFP_ATOMIC,
+ &cmd_buf->dma_addr);
+ if (!cmd_buf->buf) {
+ sdk_err(dev, "Failed to allocate cmdq cmd buf from the pool\n");
+ goto alloc_pci_buf_err;
+ }
+
+ cmd_buf->size = HINIC3_CMDQ_BUF_SIZE;
+ atomic_set(&cmd_buf->ref_cnt, 1);
+
+ return cmd_buf;
+
+alloc_pci_buf_err:
+ kfree(cmd_buf);
+ return NULL;
+}
+EXPORT_SYMBOL(hinic3_alloc_cmd_buf);
+
+void hinic3_free_cmd_buf(void *hwdev, struct hinic3_cmd_buf *cmd_buf)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+
+ if (!hwdev || !cmd_buf) {
+ pr_err("Failed to free cmd buf, hwdev or cmd_buf is NULL\n");
+ return;
+ }
+
+ if (!atomic_dec_and_test(&cmd_buf->ref_cnt))
+ return;
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+
+ pci_pool_free(cmdqs->cmd_buf_pool, cmd_buf->buf, cmd_buf->dma_addr);
+ kfree(cmd_buf);
+}
+EXPORT_SYMBOL(hinic3_free_cmd_buf);
+
+static void cmdq_set_completion(struct hinic3_cmdq_completion *complete,
+ struct hinic3_cmd_buf *buf_out)
+{
+ struct hinic3_sge_resp *sge_resp = &complete->sge_resp;
+
+ hinic3_set_sge(&sge_resp->sge, buf_out->dma_addr,
+ HINIC3_CMDQ_BUF_SIZE);
+}
+
+static void cmdq_set_lcmd_bufdesc(struct hinic3_cmdq_wqe_lcmd *wqe,
+ struct hinic3_cmd_buf *buf_in)
+{
+ hinic3_set_sge(&wqe->buf_desc.sge, buf_in->dma_addr, buf_in->size);
+}
+
+static void cmdq_fill_db(struct hinic3_cmdq_db *db,
+ enum hinic3_cmdq_type cmdq_type, u16 prod_idx)
+{
+ db->db_info = CMDQ_DB_INFO_SET(UPPER_8_BITS(prod_idx), HI_PROD_IDX);
+
+ db->db_head = CMDQ_DB_HEAD_SET(HINIC3_DB_CMDQ_TYPE, QUEUE_TYPE) |
+ CMDQ_DB_HEAD_SET(cmdq_type, CMDQ_TYPE) |
+ CMDQ_DB_HEAD_SET(HINIC3_DB_SRC_CMDQ_TYPE, SRC_TYPE);
+}
+
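+/*
+ * Ring the cmdq doorbell: both db words are converted to big endian and
+ * written as one 64-bit store at an offset keyed off the low 8 bits of
+ * the producer index.
+ */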
+static void cmdq_set_db(struct hinic3_cmdq *cmdq,
+ enum hinic3_cmdq_type cmdq_type, u16 prod_idx)
+{
+ struct hinic3_cmdq_db db = {0};
+ u8 *db_base = cmdq->hwdev->cmdqs->cmdqs_db_base;
+
+ cmdq_fill_db(&db, cmdq_type, prod_idx);
+
+ /* The data that is written to HW should be in Big Endian Format */
+ db.db_info = hinic3_hw_be32(db.db_info);
+ db.db_head = hinic3_hw_be32(db.db_head);
+
+ wmb(); /* write all before the doorbell */
+ writeq(*((u64 *)&db), CMDQ_DB_ADDR(db_base, prod_idx));
+}
+
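+/*
+ * Publish the WQE body before its header: the first 8 bytes carry the
+ * HW busy (owner) bit, so they must become visible to hardware last.
+ */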
+static void cmdq_wqe_fill(void *dst, const void *src)
+{
+ memcpy((u8 *)dst + FIRST_DATA_TO_WRITE_LAST,
+ (u8 *)src + FIRST_DATA_TO_WRITE_LAST,
+ CMDQ_WQE_SIZE - FIRST_DATA_TO_WRITE_LAST);
+
+ wmb(); /* The first 8 bytes should be written last */
+
+ *(u64 *)dst = *(u64 *)src;
+}
+
+static void cmdq_prepare_wqe_ctrl(struct hinic3_cmdq_wqe *wqe, int wrapped,
+ u8 mod, u8 cmd, u16 prod_idx,
+ enum completion_format complete_format,
+ enum data_format data_format,
+ enum bufdesc_len buf_len)
+{
+ struct hinic3_ctrl *ctrl = NULL;
+ enum ctrl_sect_len ctrl_len;
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL;
+ struct hinic3_cmdq_wqe_scmd *wqe_scmd = NULL;
+ u32 saved_data = WQE_HEADER(wqe)->saved_data;
+
+ if (data_format == DATA_SGE) {
+ wqe_lcmd = &wqe->wqe_lcmd;
+
+ wqe_lcmd->status.status_info = 0;
+ ctrl = &wqe_lcmd->ctrl;
+ ctrl_len = CTRL_SECT_LEN;
+ } else {
+ wqe_scmd = &wqe->inline_wqe.wqe_scmd;
+
+ wqe_scmd->status.status_info = 0;
+ ctrl = &wqe_scmd->ctrl;
+ ctrl_len = CTRL_DIRECT_SECT_LEN;
+ }
+
+ ctrl->ctrl_info = CMDQ_CTRL_SET(prod_idx, PI) |
+ CMDQ_CTRL_SET(cmd, CMD) |
+ CMDQ_CTRL_SET(mod, MOD) |
+ CMDQ_CTRL_SET(HINIC3_ACK_TYPE_CMDQ, ACK_TYPE);
+
+ WQE_HEADER(wqe)->header_info =
+ CMDQ_WQE_HEADER_SET(buf_len, BUFDESC_LEN) |
+ CMDQ_WQE_HEADER_SET(complete_format, COMPLETE_FMT) |
+ CMDQ_WQE_HEADER_SET(data_format, DATA_FMT) |
+ CMDQ_WQE_HEADER_SET(CEQ_SET, COMPLETE_REQ) |
+ CMDQ_WQE_HEADER_SET(COMPLETE_LEN, COMPLETE_SECT_LEN) |
+ CMDQ_WQE_HEADER_SET(ctrl_len, CTRL_LEN) |
+ CMDQ_WQE_HEADER_SET((u32)wrapped, HW_BUSY_BIT);
+
+ if (cmd == CMDQ_SET_ARM_CMD && mod == HINIC3_MOD_COMM) {
+ saved_data &= SAVED_DATA_CLEAR(saved_data, ARM);
+ WQE_HEADER(wqe)->saved_data = saved_data |
+ SAVED_DATA_SET(1, ARM);
+ } else {
+ saved_data &= SAVED_DATA_CLEAR(saved_data, ARM);
+ WQE_HEADER(wqe)->saved_data = saved_data;
+ }
+}
+
+static void cmdq_set_lcmd_wqe(struct hinic3_cmdq_wqe *wqe,
+ enum cmdq_cmd_type cmd_type,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out, int wrapped,
+ u8 mod, u8 cmd, u16 prod_idx)
+{
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = &wqe->wqe_lcmd;
+ enum completion_format complete_format = COMPLETE_DIRECT;
+
+ switch (cmd_type) {
+ case SYNC_CMD_DIRECT_RESP:
+ wqe_lcmd->completion.direct_resp = 0;
+ break;
+ case SYNC_CMD_SGE_RESP:
+ if (buf_out) {
+ complete_format = COMPLETE_SGE;
+ cmdq_set_completion(&wqe_lcmd->completion,
+ buf_out);
+ }
+ break;
+ case ASYNC_CMD:
+ wqe_lcmd->completion.direct_resp = 0;
+ wqe_lcmd->buf_desc.saved_async_buf = (u64)(buf_in);
+ break;
+ }
+
+ cmdq_prepare_wqe_ctrl(wqe, wrapped, mod, cmd, prod_idx, complete_format,
+ DATA_SGE, BUFDESC_LCMD_LEN);
+
+ cmdq_set_lcmd_bufdesc(wqe_lcmd, buf_in);
+}
+
+static void cmdq_update_cmd_status(struct hinic3_cmdq *cmdq, u16 prod_idx,
+ struct hinic3_cmdq_wqe *wqe)
+{
+ struct hinic3_cmdq_cmd_info *cmd_info;
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd;
+ u32 status_info;
+
+ wqe_lcmd = &wqe->wqe_lcmd;
+ cmd_info = &cmdq->cmd_infos[prod_idx];
+
+ if (cmd_info->errcode) {
+ status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info);
+ *cmd_info->errcode = WQE_ERRCODE_GET(status_info, VAL);
+ }
+
+ if (cmd_info->direct_resp)
+ *cmd_info->direct_resp =
+ hinic3_hw_cpu32(wqe_lcmd->completion.direct_resp);
+}
+
+static int hinic3_cmdq_sync_timeout_check(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_wqe *wqe, u16 pi)
+{
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd;
+ struct hinic3_ctrl *ctrl;
+ u32 ctrl_info;
+
+ wqe_lcmd = &wqe->wqe_lcmd;
+ ctrl = &wqe_lcmd->ctrl;
+ ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info);
+ if (!WQE_COMPLETED(ctrl_info)) {
+		sdk_info(cmdq->hwdev->dev_hdl, "Cmdq sync command timeout check: wqe not completed by hw\n");
+ return -EFAULT;
+ }
+
+ cmdq_update_cmd_status(cmdq, pi, wqe);
+
+	sdk_info(cmdq->hwdev->dev_hdl, "Cmdq sync command check succeeded\n");
+ return 0;
+}
+
+static void clear_cmd_info(struct hinic3_cmdq_cmd_info *cmd_info,
+ const struct hinic3_cmdq_cmd_info *saved_cmd_info)
+{
+ if (cmd_info->errcode == saved_cmd_info->errcode)
+ cmd_info->errcode = NULL;
+
+ if (cmd_info->done == saved_cmd_info->done)
+ cmd_info->done = NULL;
+
+ if (cmd_info->direct_resp == saved_cmd_info->direct_resp)
+ cmd_info->direct_resp = NULL;
+}
+
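+/* Wait for a sync command to complete. In polling mode the CEQ handler is
+ * driven by hand and the completion counter is checked directly; otherwise
+ * the caller sleeps on the completion. On timeout, re-check under cmdq_lock
+ * whether the wqe actually completed (a "fake timeout") before declaring
+ * the command dead.
+ */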
+static int cmdq_ceq_handler_status(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_cmd_info *cmd_info,
+ struct hinic3_cmdq_cmd_info *saved_cmd_info,
+ u64 curr_msg_id, u16 curr_prod_idx,
+ struct hinic3_cmdq_wqe *curr_wqe,
+ u32 timeout)
+{
+ ulong timeo;
+ int err;
+ ulong end = jiffies + msecs_to_jiffies(timeout);
+
+ if (cmdq->hwdev->poll) {
+ while (time_before(jiffies, end)) {
+ hinic3_cmdq_ceq_handler(cmdq->hwdev, 0);
+ if (saved_cmd_info->done->done != 0)
+ return 0;
+ usleep_range(9, 10); /* sleep 9 us ~ 10 us */
+ }
+ } else {
+ timeo = msecs_to_jiffies(timeout);
+ if (wait_for_completion_timeout(saved_cmd_info->done, timeo))
+ return 0;
+ }
+
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+ if (cmd_info->cmpt_code == saved_cmd_info->cmpt_code)
+ cmd_info->cmpt_code = NULL;
+
+ if (*saved_cmd_info->cmpt_code == CMDQ_COMPLETE_CMPT_CODE) {
+ sdk_info(cmdq->hwdev->dev_hdl, "Cmdq direct sync command has been completed\n");
+ spin_unlock_bh(&cmdq->cmdq_lock);
+ return 0;
+ }
+
+ if (curr_msg_id == cmd_info->cmdq_msg_id) {
+ err = hinic3_cmdq_sync_timeout_check(cmdq, curr_wqe,
+ curr_prod_idx);
+ if (err)
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_TIMEOUT;
+ else
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_FAKE_TIMEOUT;
+ } else {
+ err = -ETIMEDOUT;
+		sdk_err(cmdq->hwdev->dev_hdl, "Cmdq sync command current msg id mismatches cmd_info msg id\n");
+ }
+
+ clear_cmd_info(cmd_info, saved_cmd_info);
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+
+ if (err == 0)
+ return 0;
+
+ hinic3_dump_ceq_info(cmdq->hwdev);
+
+ return -ETIMEDOUT;
+}
+
+static int wait_cmdq_sync_cmd_completion(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_cmd_info *cmd_info,
+ struct hinic3_cmdq_cmd_info *saved_cmd_info,
+ u64 curr_msg_id, u16 curr_prod_idx,
+ struct hinic3_cmdq_wqe *curr_wqe, u32 timeout)
+{
+ return cmdq_ceq_handler_status(cmdq, cmd_info, saved_cmd_info,
+ curr_msg_id, curr_prod_idx,
+ curr_wqe, timeout);
+}
+
+static int cmdq_msg_lock(struct hinic3_cmdq *cmdq, u16 channel)
+{
+ struct hinic3_cmdqs *cmdqs = cmdq_to_cmdqs(cmdq);
+
+ /* Keep wrapped and doorbell index correct. bh - for tasklet(ceq) */
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+ if (cmdqs->lock_channel_en && test_bit(channel, &cmdqs->channel_stop)) {
+ spin_unlock_bh(&cmdq->cmdq_lock);
+ return -EAGAIN;
+ }
+
+ return 0;
+}
+
+static void cmdq_msg_unlock(struct hinic3_cmdq *cmdq)
+{
+ spin_unlock_bh(&cmdq->cmdq_lock);
+}
+
+static void cmdq_clear_cmd_buf(struct hinic3_cmdq_cmd_info *cmd_info,
+ struct hinic3_hwdev *hwdev)
+{
+ if (cmd_info->buf_in)
+ hinic3_free_cmd_buf(hwdev, cmd_info->buf_in);
+
+ if (cmd_info->buf_out)
+ hinic3_free_cmd_buf(hwdev, cmd_info->buf_out);
+
+ cmd_info->buf_in = NULL;
+ cmd_info->buf_out = NULL;
+}
+
+static void cmdq_set_cmd_buf(struct hinic3_cmdq_cmd_info *cmd_info,
+ struct hinic3_hwdev *hwdev,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out)
+{
+ cmd_info->buf_in = buf_in;
+ cmd_info->buf_out = buf_out;
+
+ if (buf_in)
+ atomic_inc(&buf_in->ref_cnt);
+
+ if (buf_out)
+ atomic_inc(&buf_out->ref_cnt);
+}
+
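+/* Send a sync command that returns a 64-bit direct response: reserve a wqe
+ * slot under cmdq_lock, build the wqe in a local copy, write it into the wq,
+ * ring the doorbell and wait for the CEQ handler to signal completion.
+ */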
+static int cmdq_sync_cmd_direct_resp(struct hinic3_cmdq *cmdq, u8 mod,
+ u8 cmd, struct hinic3_cmd_buf *buf_in,
+ u64 *out_param, u32 timeout, u16 channel)
+{
+ struct hinic3_wq *wq = &cmdq->wq;
+ struct hinic3_cmdq_wqe *curr_wqe = NULL, wqe;
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL, saved_cmd_info;
+ struct completion done;
+ u16 curr_prod_idx, next_prod_idx;
+ int wrapped, errcode = 0, wqe_size = WQE_LCMD_SIZE;
+ int cmpt_code = CMDQ_SEND_CMPT_CODE;
+ u64 curr_msg_id;
+ int err;
+ u32 real_timeout;
+
+ err = cmdq_msg_lock(cmdq, channel);
+ if (err)
+ return err;
+
+ curr_wqe = cmdq_get_wqe(wq, &curr_prod_idx);
+ if (!curr_wqe) {
+ cmdq_msg_unlock(cmdq);
+ return -EBUSY;
+ }
+
+ memset(&wqe, 0, sizeof(wqe));
+
+ wrapped = cmdq->wrapped;
+
+ next_prod_idx = curr_prod_idx + NUM_WQEBBS_FOR_CMDQ_WQE;
+ if (next_prod_idx >= wq->q_depth) {
+ cmdq->wrapped = (cmdq->wrapped == 0) ? 1 : 0;
+ next_prod_idx -= (u16)wq->q_depth;
+ }
+
+ cmd_info = &cmdq->cmd_infos[curr_prod_idx];
+
+ init_completion(&done);
+
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_DIRECT_RESP;
+ cmd_info->done = &done;
+ cmd_info->errcode = &errcode;
+ cmd_info->direct_resp = out_param;
+ cmd_info->cmpt_code = &cmpt_code;
+ cmd_info->channel = channel;
+ cmdq_set_cmd_buf(cmd_info, cmdq->hwdev, buf_in, NULL);
+
+ memcpy(&saved_cmd_info, cmd_info, sizeof(*cmd_info));
+
+ cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_DIRECT_RESP, buf_in, NULL,
+ wrapped, mod, cmd, curr_prod_idx);
+
+ /* The data that is written to HW should be in Big Endian Format */
+ hinic3_hw_be32_len(&wqe, wqe_size);
+
+	/* The cmdq WQE is not shadowed, so write the local copy into the wq */
+ cmdq_wqe_fill(curr_wqe, &wqe);
+
+ (cmd_info->cmdq_msg_id)++;
+ curr_msg_id = cmd_info->cmdq_msg_id;
+
+ cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx);
+
+ cmdq_msg_unlock(cmdq);
+
+ real_timeout = timeout ? timeout : CMDQ_CMD_TIMEOUT;
+ err = wait_cmdq_sync_cmd_completion(cmdq, cmd_info, &saved_cmd_info,
+ curr_msg_id, curr_prod_idx,
+ curr_wqe, real_timeout);
+ if (err) {
+ sdk_err(cmdq->hwdev->dev_hdl, "Cmdq sync command(mod: %u, cmd: %u) timeout, prod idx: 0x%x\n",
+ mod, cmd, curr_prod_idx);
+ err = -ETIMEDOUT;
+ }
+
+ if (cmpt_code == CMDQ_FORCE_STOP_CMPT_CODE) {
+ sdk_info(cmdq->hwdev->dev_hdl, "Force stop cmdq cmd, mod: %u, cmd: %u\n",
+ mod, cmd);
+ err = -EAGAIN;
+ }
+
+ destroy_completion(&done);
+ smp_rmb(); /* read error code after completion */
+
+ return (err != 0) ? err : errcode;
+}
+
+static int cmdq_sync_cmd_detail_resp(struct hinic3_cmdq *cmdq, u8 mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel)
+{
+ struct hinic3_wq *wq = &cmdq->wq;
+ struct hinic3_cmdq_wqe *curr_wqe = NULL, wqe;
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL, saved_cmd_info;
+ struct completion done;
+ u16 curr_prod_idx, next_prod_idx;
+ int wrapped, errcode = 0, wqe_size = WQE_LCMD_SIZE;
+ int cmpt_code = CMDQ_SEND_CMPT_CODE;
+ u64 curr_msg_id;
+ int err;
+ u32 real_timeout;
+
+ err = cmdq_msg_lock(cmdq, channel);
+ if (err)
+ return err;
+
+ curr_wqe = cmdq_get_wqe(wq, &curr_prod_idx);
+ if (!curr_wqe) {
+ cmdq_msg_unlock(cmdq);
+ return -EBUSY;
+ }
+
+ memset(&wqe, 0, sizeof(wqe));
+
+ wrapped = cmdq->wrapped;
+
+ next_prod_idx = curr_prod_idx + NUM_WQEBBS_FOR_CMDQ_WQE;
+ if (next_prod_idx >= wq->q_depth) {
+ cmdq->wrapped = (cmdq->wrapped == 0) ? 1 : 0;
+ next_prod_idx -= (u16)wq->q_depth;
+ }
+
+ cmd_info = &cmdq->cmd_infos[curr_prod_idx];
+
+ init_completion(&done);
+
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_SGE_RESP;
+ cmd_info->done = &done;
+ cmd_info->errcode = &errcode;
+ cmd_info->direct_resp = out_param;
+ cmd_info->cmpt_code = &cmpt_code;
+ cmd_info->channel = channel;
+ cmdq_set_cmd_buf(cmd_info, cmdq->hwdev, buf_in, buf_out);
+
+ memcpy(&saved_cmd_info, cmd_info, sizeof(*cmd_info));
+
+ cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_SGE_RESP, buf_in, buf_out,
+ wrapped, mod, cmd, curr_prod_idx);
+
+ hinic3_hw_be32_len(&wqe, wqe_size);
+
+ cmdq_wqe_fill(curr_wqe, &wqe);
+
+ (cmd_info->cmdq_msg_id)++;
+ curr_msg_id = cmd_info->cmdq_msg_id;
+
+ cmdq_set_db(cmdq, cmdq->cmdq_type, next_prod_idx);
+
+ cmdq_msg_unlock(cmdq);
+
+ real_timeout = timeout ? timeout : CMDQ_CMD_TIMEOUT;
+ err = wait_cmdq_sync_cmd_completion(cmdq, cmd_info, &saved_cmd_info,
+ curr_msg_id, curr_prod_idx,
+ curr_wqe, real_timeout);
+ if (err) {
+ sdk_err(cmdq->hwdev->dev_hdl, "Cmdq sync command(mod: %u, cmd: %u) timeout, prod idx: 0x%x\n",
+ mod, cmd, curr_prod_idx);
+ err = -ETIMEDOUT;
+ }
+
+ if (cmpt_code == CMDQ_FORCE_STOP_CMPT_CODE) {
+ sdk_info(cmdq->hwdev->dev_hdl, "Force stop cmdq cmd, mod: %u, cmd: %u\n",
+ mod, cmd);
+ err = -EAGAIN;
+ }
+
+ destroy_completion(&done);
+ smp_rmb(); /* read error code after completion */
+
+ return (err != 0) ? err : errcode;
+}
+
+static int cmdq_async_cmd(struct hinic3_cmdq *cmdq, u8 mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in, u16 channel)
+{
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL;
+ struct hinic3_wq *wq = &cmdq->wq;
+ int wqe_size = WQE_LCMD_SIZE;
+ u16 curr_prod_idx, next_prod_idx;
+ struct hinic3_cmdq_wqe *curr_wqe = NULL, wqe;
+ int wrapped, err;
+
+ err = cmdq_msg_lock(cmdq, channel);
+ if (err)
+ return err;
+
+ curr_wqe = cmdq_get_wqe(wq, &curr_prod_idx);
+ if (!curr_wqe) {
+ cmdq_msg_unlock(cmdq);
+ return -EBUSY;
+ }
+
+ memset(&wqe, 0, sizeof(wqe));
+
+ wrapped = cmdq->wrapped;
+ next_prod_idx = curr_prod_idx + NUM_WQEBBS_FOR_CMDQ_WQE;
+ if (next_prod_idx >= wq->q_depth) {
+ cmdq->wrapped = (cmdq->wrapped == 0) ? 1 : 0;
+ next_prod_idx -= (u16)wq->q_depth;
+ }
+
+ cmdq_set_lcmd_wqe(&wqe, ASYNC_CMD, buf_in, NULL, wrapped,
+ mod, cmd, curr_prod_idx);
+
+ /* The data that is written to HW should be in Big Endian Format */
+ hinic3_hw_be32_len(&wqe, wqe_size);
+ cmdq_wqe_fill(curr_wqe, &wqe);
+
+ cmd_info = &cmdq->cmd_infos[curr_prod_idx];
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_ASYNC;
+ cmd_info->channel = channel;
+ /* The caller will not free the cmd_buf of the asynchronous command,
+ * so there is no need to increase the reference count here
+ */
+ cmd_info->buf_in = buf_in;
+
+ /* LB mode 1 compatible, cmdq 0 also for async, which is sync_no_wait */
+ cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx);
+
+ cmdq_msg_unlock(cmdq);
+
+ return 0;
+}
+
+static int cmdq_params_valid(const void *hwdev, const struct hinic3_cmd_buf *buf_in)
+{
+ if (!buf_in || !hwdev) {
+ pr_err("Invalid CMDQ buffer addr or hwdev\n");
+ return -EINVAL;
+ }
+
+ if (!buf_in->size || buf_in->size > HINIC3_CMDQ_BUF_SIZE) {
+ pr_err("Invalid CMDQ buffer size: 0x%x\n", buf_in->size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+#define WAIT_CMDQ_ENABLE_TIMEOUT 300
+static int wait_cmdqs_enable(struct hinic3_cmdqs *cmdqs)
+{
+ unsigned long end;
+
+ end = jiffies + msecs_to_jiffies(WAIT_CMDQ_ENABLE_TIMEOUT);
+ do {
+ if (cmdqs->status & HINIC3_CMDQ_ENABLE)
+ return 0;
+ } while (time_before(jiffies, end) && cmdqs->hwdev->chip_present_flag &&
+ !cmdqs->disable_flag);
+
+ cmdqs->disable_flag = 1;
+
+ return -EBUSY;
+}
+
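+/**
+ * hinic3_cmdq_direct_resp - send a synchronous cmdq command and wait for
+ * its 64-bit direct response
+ * @hwdev: hardware device handle
+ * @mod: destination module id
+ * @cmd: command id within @mod
+ * @buf_in: request buffer from hinic3_alloc_cmd_buf(); buf_in->size must
+ *          hold the valid data length (1 ~ HINIC3_CMDQ_BUF_SIZE)
+ * @out_param: optional output for the direct response
+ * @timeout: wait time in ms, 0 selects the default CMDQ_CMD_TIMEOUT
+ * @channel: channel id used by the per-channel stop/flush logic
+ *
+ * Context: process context, may sleep.
+ * Return: 0 on success, a negative errno on driver-level failure, or the
+ * error code reported by the microcode in the wqe status.
+ */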
+int hinic3_cmdq_direct_resp(void *hwdev, u8 mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in,
+ u64 *out_param, u32 timeout, u16 channel)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ int err;
+
+ err = cmdq_params_valid(hwdev, buf_in);
+ if (err) {
+ pr_err("Invalid CMDQ parameters\n");
+ return err;
+ }
+
+ if (!get_card_present_state((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+		sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq is disabled\n");
+ return err;
+ }
+
+ err = cmdq_sync_cmd_direct_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC],
+ mod, cmd, buf_in, out_param,
+ timeout, channel);
+
+	if (!(((struct hinic3_hwdev *)hwdev)->chip_present_flag))
+		return -ETIMEDOUT;
+
+	return err;
+}
+EXPORT_SYMBOL(hinic3_cmdq_direct_resp);
+
+int hinic3_cmdq_detail_resp(void *hwdev, u8 mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out,
+ u64 *out_param, u32 timeout, u16 channel)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ int err;
+
+ err = cmdq_params_valid(hwdev, buf_in);
+ if (err)
+ return err;
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+
+ if (!get_card_present_state((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+		sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq is disabled\n");
+ return err;
+ }
+
+ err = cmdq_sync_cmd_detail_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC],
+ mod, cmd, buf_in, buf_out, out_param,
+ timeout, channel);
+	if (!(((struct hinic3_hwdev *)hwdev)->chip_present_flag))
+		return -ETIMEDOUT;
+
+	return err;
+}
+EXPORT_SYMBOL(hinic3_cmdq_detail_resp);
+
+int hinic3_cos_id_detail_resp(void *hwdev, u8 mod, u8 cmd, u8 cos_id,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out, u64 *out_param,
+ u32 timeout, u16 channel)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ int err;
+
+ err = cmdq_params_valid(hwdev, buf_in);
+ if (err)
+ return err;
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+
+ if (!get_card_present_state((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+		sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq is disabled\n");
+ return err;
+ }
+
+ if (cos_id >= cmdqs->cmdq_num) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq id is invalid\n");
+ return -EINVAL;
+ }
+
+ err = cmdq_sync_cmd_detail_resp(&cmdqs->cmdq[cos_id], mod, cmd,
+ buf_in, buf_out, out_param,
+ timeout, channel);
+	if (!(((struct hinic3_hwdev *)hwdev)->chip_present_flag))
+		return -ETIMEDOUT;
+
+	return err;
+}
+EXPORT_SYMBOL(hinic3_cos_id_detail_resp);
+
+int hinic3_cmdq_async(void *hwdev, u8 mod, u8 cmd, struct hinic3_cmd_buf *buf_in, u16 channel)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ int err;
+
+ err = cmdq_params_valid(hwdev, buf_in);
+ if (err)
+ return err;
+
+ cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+
+ if (!get_card_present_state((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+		sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq is disabled\n");
+ return err;
+ }
+ /* LB mode 1 compatible, cmdq 0 also for async, which is sync_no_wait */
+ return cmdq_async_cmd(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod,
+ cmd, buf_in, channel);
+}
+
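+/* Release a consumed wqe: clear the hardware busy bit in ctrl_info and the
+ * sw command type, then return the wqebbs to the wq so the slot can be
+ * reused by the producer.
+ */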
+static void clear_wqe_complete_bit(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_wqe *wqe, u16 ci)
+{
+ struct hinic3_ctrl *ctrl = NULL;
+ u32 header_info = hinic3_hw_cpu32(WQE_HEADER(wqe)->header_info);
+ enum data_format df = CMDQ_WQE_HEADER_GET(header_info, DATA_FMT);
+
+ if (df == DATA_SGE)
+ ctrl = &wqe->wqe_lcmd.ctrl;
+ else
+ ctrl = &wqe->inline_wqe.wqe_scmd.ctrl;
+
+ /* clear HW busy bit */
+ ctrl->ctrl_info = 0;
+ cmdq->cmd_infos[ci].cmd_type = HINIC3_CMD_TYPE_NONE;
+
+	wmb(); /* make the cleared wqe visible before the wqebbs are reused */
+
+ hinic3_wq_put_wqebbs(&cmdq->wq, NUM_WQEBBS_FOR_CMDQ_WQE);
+}
+
+static void cmdq_sync_cmd_handler(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_wqe *wqe, u16 ci)
+{
+ spin_lock(&cmdq->cmdq_lock);
+
+ cmdq_update_cmd_status(cmdq, ci, wqe);
+
+ if (cmdq->cmd_infos[ci].cmpt_code) {
+ *cmdq->cmd_infos[ci].cmpt_code = CMDQ_COMPLETE_CMPT_CODE;
+ cmdq->cmd_infos[ci].cmpt_code = NULL;
+ }
+
+ /* make sure cmpt_code operation before done operation */
+ smp_rmb();
+
+ if (cmdq->cmd_infos[ci].done) {
+ complete(cmdq->cmd_infos[ci].done);
+ cmdq->cmd_infos[ci].done = NULL;
+ }
+
+ spin_unlock(&cmdq->cmdq_lock);
+
+ cmdq_clear_cmd_buf(&cmdq->cmd_infos[ci], cmdq->hwdev);
+ clear_wqe_complete_bit(cmdq, wqe, ci);
+}
+
+static void cmdq_async_cmd_handler(struct hinic3_hwdev *hwdev,
+ struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_wqe *wqe, u16 ci)
+{
+ cmdq_clear_cmd_buf(&cmdq->cmd_infos[ci], hwdev);
+ clear_wqe_complete_bit(cmdq, wqe, ci);
+}
+
+static int cmdq_arm_ceq_handler(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_wqe *wqe, u16 ci)
+{
+ struct hinic3_ctrl *ctrl = &wqe->inline_wqe.wqe_scmd.ctrl;
+ u32 ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info);
+
+ if (!WQE_COMPLETED(ctrl_info))
+ return -EBUSY;
+
+ clear_wqe_complete_bit(cmdq, wqe, ci);
+
+ return 0;
+}
+
+#define HINIC3_CMDQ_WQE_HEAD_LEN 32
+static void hinic3_dump_cmdq_wqe_head(struct hinic3_hwdev *hwdev,
+ struct hinic3_cmdq_wqe *wqe)
+{
+ u32 i;
+ u32 *data = (u32 *)wqe;
+
+ for (i = 0; i < (HINIC3_CMDQ_WQE_HEAD_LEN / sizeof(u32)); i += 0x4) {
+ sdk_info(hwdev->dev_hdl, "wqe data: 0x%08x, 0x%08x, 0x%08x, 0x%08x\n",
+ *(data + i), *(data + i + 0x1), *(data + i + 0x2),
+ *(data + i + 0x3));
+ }
+}
+
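+/* CEQ entry point for cmdq events: walk the completed wqes at the consumer
+ * index and dispatch on the saved cmd_type - wake sync waiters, release
+ * async buffers, and drop wqes whose commands already timed out or were
+ * force stopped.
+ */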
+void hinic3_cmdq_ceq_handler(void *handle, u32 ceqe_data)
+{
+ struct hinic3_cmdqs *cmdqs = ((struct hinic3_hwdev *)handle)->cmdqs;
+ enum hinic3_cmdq_type cmdq_type = CEQE_CMDQ_GET(ceqe_data, TYPE);
+ struct hinic3_cmdq *cmdq = &cmdqs->cmdq[cmdq_type];
+ struct hinic3_hwdev *hwdev = cmdqs->hwdev;
+ struct hinic3_cmdq_wqe *wqe = NULL;
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL;
+ struct hinic3_ctrl *ctrl = NULL;
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL;
+ u16 ci;
+
+ while ((wqe = cmdq_read_wqe(&cmdq->wq, &ci)) != NULL) {
+ cmd_info = &cmdq->cmd_infos[ci];
+
+ switch (cmd_info->cmd_type) {
+ case HINIC3_CMD_TYPE_NONE:
+ return;
+ case HINIC3_CMD_TYPE_TIMEOUT:
+ sdk_warn(hwdev->dev_hdl, "Cmdq timeout, q_id: %u, ci: %u\n",
+ cmdq_type, ci);
+ hinic3_dump_cmdq_wqe_head(hwdev, wqe);
+			fallthrough;
+ case HINIC3_CMD_TYPE_FAKE_TIMEOUT:
+ cmdq_clear_cmd_buf(cmd_info, hwdev);
+ clear_wqe_complete_bit(cmdq, wqe, ci);
+ break;
+ case HINIC3_CMD_TYPE_SET_ARM:
+			/* the arm bit has been set by the time this wqe completes */
+ if (cmdq_arm_ceq_handler(cmdq, wqe, ci))
+ return;
+ break;
+ default:
+			/* only the arm command uses an scmd wqe; all others are lcmd */
+ wqe_lcmd = &wqe->wqe_lcmd;
+ ctrl = &wqe_lcmd->ctrl;
+ if (!WQE_COMPLETED(hinic3_hw_cpu32((ctrl)->ctrl_info)))
+ return;
+
+ dma_rmb();
+ /* For FORCE_STOP cmd_type, we also need to wait for
+ * the firmware processing to complete to prevent the
+ * firmware from accessing the released cmd_buf
+ */
+ if (cmd_info->cmd_type == HINIC3_CMD_TYPE_FORCE_STOP) {
+ cmdq_clear_cmd_buf(cmd_info, hwdev);
+ clear_wqe_complete_bit(cmdq, wqe, ci);
+ } else if (cmd_info->cmd_type == HINIC3_CMD_TYPE_ASYNC) {
+ cmdq_async_cmd_handler(hwdev, cmdq, wqe, ci);
+ } else {
+ cmdq_sync_cmd_handler(cmdq, wqe, ci);
+ }
+
+ break;
+ }
+ }
+}
+
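+/* Build the cmdq context handed to the firmware: the current wqe page PFN
+ * together with the busy bit, CEQ enable/arm flags and the cmdq CEQ id,
+ * plus the start CI and the wq block PFN (which stays the first page PFN
+ * for a 0-level CLA wq).
+ */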
+static void cmdq_init_queue_ctxt(struct hinic3_cmdqs *cmdqs,
+ struct hinic3_cmdq *cmdq,
+ struct cmdq_ctxt_info *ctxt_info)
+{
+ struct hinic3_wq *wq = &cmdq->wq;
+ u64 cmdq_first_block_paddr, pfn;
+ u16 start_ci = (u16)wq->cons_idx;
+
+ pfn = CMDQ_PFN(hinic3_wq_get_first_wqe_page_addr(wq));
+
+ ctxt_info->curr_wqe_page_pfn =
+ CMDQ_CTXT_PAGE_INFO_SET(1, HW_BUSY_BIT) |
+ CMDQ_CTXT_PAGE_INFO_SET(1, CEQ_EN) |
+ CMDQ_CTXT_PAGE_INFO_SET(1, CEQ_ARM) |
+ CMDQ_CTXT_PAGE_INFO_SET(HINIC3_CEQ_ID_CMDQ, EQ_ID) |
+ CMDQ_CTXT_PAGE_INFO_SET(pfn, CURR_WQE_PAGE_PFN);
+
+ if (!WQ_IS_0_LEVEL_CLA(wq)) {
+ cmdq_first_block_paddr = cmdqs->wq_block_paddr;
+ pfn = CMDQ_PFN(cmdq_first_block_paddr);
+ }
+
+ ctxt_info->wq_block_pfn = CMDQ_CTXT_BLOCK_INFO_SET(start_ci, CI) |
+ CMDQ_CTXT_BLOCK_INFO_SET(pfn, WQ_BLOCK_PFN);
+}
+
+static int init_cmdq(struct hinic3_cmdq *cmdq, struct hinic3_hwdev *hwdev,
+ enum hinic3_cmdq_type q_type)
+{
+ int err;
+
+ cmdq->cmdq_type = q_type;
+ cmdq->wrapped = 1;
+ cmdq->hwdev = hwdev;
+
+ spin_lock_init(&cmdq->cmdq_lock);
+
+ cmdq->cmd_infos = kcalloc(cmdq->wq.q_depth, sizeof(*cmdq->cmd_infos),
+ GFP_KERNEL);
+ if (!cmdq->cmd_infos) {
+ sdk_err(hwdev->dev_hdl, "Failed to allocate cmdq infos\n");
+ err = -ENOMEM;
+ goto cmd_infos_err;
+ }
+
+ return 0;
+
+cmd_infos_err:
+ spin_lock_deinit(&cmdq->cmdq_lock);
+
+ return err;
+}
+
+static void free_cmdq(struct hinic3_cmdq *cmdq)
+{
+ kfree(cmdq->cmd_infos);
+ spin_lock_deinit(&cmdq->cmdq_lock);
+}
+
+static int hinic3_set_cmdq_ctxts(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = hwdev->cmdqs;
+ u8 cmdq_type;
+ int err;
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ err = hinic3_set_cmdq_ctxt(hwdev, cmdq_type,
+ &cmdqs->cmdq[cmdq_type].cmdq_ctxt);
+ if (err)
+ return err;
+ }
+
+ cmdqs->status |= HINIC3_CMDQ_ENABLE;
+ cmdqs->disable_flag = 0;
+
+ return 0;
+}
+
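+/* Force-stop one pending sync command: mark it FORCE_STOP, flip its
+ * completion code so the waiter can tell a forced stop from a normal
+ * completion, and complete() the waiter so it stops sleeping.
+ */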
+static void cmdq_flush_sync_cmd(struct hinic3_cmdq_cmd_info *cmd_info)
+{
+ if (cmd_info->cmd_type != HINIC3_CMD_TYPE_DIRECT_RESP &&
+ cmd_info->cmd_type != HINIC3_CMD_TYPE_SGE_RESP)
+ return;
+
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_FORCE_STOP;
+
+ if (cmd_info->cmpt_code &&
+ *cmd_info->cmpt_code == CMDQ_SEND_CMPT_CODE)
+ *cmd_info->cmpt_code = CMDQ_FORCE_STOP_CMPT_CODE;
+
+ if (cmd_info->done) {
+ complete(cmd_info->done);
+ cmd_info->done = NULL;
+ cmd_info->cmpt_code = NULL;
+ cmd_info->direct_resp = NULL;
+ cmd_info->errcode = NULL;
+ }
+}
+
+void hinic3_cmdq_flush_cmd(struct hinic3_hwdev *hwdev,
+ struct hinic3_cmdq *cmdq)
+{
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL;
+ u16 ci = 0;
+
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+ while (cmdq_read_wqe(&cmdq->wq, &ci)) {
+ hinic3_wq_put_wqebbs(&cmdq->wq, NUM_WQEBBS_FOR_CMDQ_WQE);
+ cmd_info = &cmdq->cmd_infos[ci];
+
+ if (cmd_info->cmd_type == HINIC3_CMD_TYPE_DIRECT_RESP ||
+ cmd_info->cmd_type == HINIC3_CMD_TYPE_SGE_RESP)
+ cmdq_flush_sync_cmd(cmd_info);
+ }
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+}
+
+static void hinic3_cmdq_flush_channel_sync_cmd(struct hinic3_hwdev *hwdev, u16 channel)
+{
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL;
+ struct hinic3_cmdq *cmdq = NULL;
+ struct hinic3_wq *wq = NULL;
+ u16 wqe_cnt, ci, i;
+
+ if (channel >= HINIC3_CHANNEL_MAX)
+ return;
+
+ cmdq = &hwdev->cmdqs->cmdq[HINIC3_CMDQ_SYNC];
+
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+ wq = &cmdq->wq;
+ ci = wq->cons_idx;
+ wqe_cnt = (u16)WQ_MASK_IDX(wq, wq->prod_idx +
+ wq->q_depth - wq->cons_idx);
+ for (i = 0; i < wqe_cnt; i++) {
+ cmd_info = &cmdq->cmd_infos[WQ_MASK_IDX(wq, ci + i)];
+ if (cmd_info->channel == channel)
+ cmdq_flush_sync_cmd(cmd_info);
+ }
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+}
+
+void hinic3_cmdq_flush_sync_cmd(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL;
+ struct hinic3_cmdq *cmdq = NULL;
+ struct hinic3_wq *wq = NULL;
+ u16 wqe_cnt, ci, i;
+
+ cmdq = &hwdev->cmdqs->cmdq[HINIC3_CMDQ_SYNC];
+
+ spin_lock_bh(&cmdq->cmdq_lock);
+
+ wq = &cmdq->wq;
+ ci = wq->cons_idx;
+ wqe_cnt = (u16)WQ_MASK_IDX(wq, wq->prod_idx +
+ wq->q_depth - wq->cons_idx);
+ for (i = 0; i < wqe_cnt; i++) {
+ cmd_info = &cmdq->cmd_infos[WQ_MASK_IDX(wq, ci + i)];
+ cmdq_flush_sync_cmd(cmd_info);
+ }
+
+ spin_unlock_bh(&cmdq->cmdq_lock);
+}
+
+static void cmdq_reset_all_cmd_buff(struct hinic3_cmdq *cmdq)
+{
+ u16 i;
+
+ for (i = 0; i < cmdq->wq.q_depth; i++)
+ cmdq_clear_cmd_buf(&cmdq->cmd_infos[i], cmdq->hwdev);
+}
+
+int hinic3_cmdq_set_channel_status(struct hinic3_hwdev *hwdev, u16 channel,
+ bool enable)
+{
+ if (channel >= HINIC3_CHANNEL_MAX)
+ return -EINVAL;
+
+ if (enable) {
+ clear_bit(channel, &hwdev->cmdqs->channel_stop);
+ } else {
+ set_bit(channel, &hwdev->cmdqs->channel_stop);
+ hinic3_cmdq_flush_channel_sync_cmd(hwdev, channel);
+ }
+
+ sdk_info(hwdev->dev_hdl, "%s cmdq channel 0x%x\n",
+ enable ? "Enable" : "Disable", channel);
+
+ return 0;
+}
+
+void hinic3_cmdq_enable_channel_lock(struct hinic3_hwdev *hwdev, bool enable)
+{
+ hwdev->cmdqs->lock_channel_en = enable;
+
+ sdk_info(hwdev->dev_hdl, "%s cmdq channel lock\n",
+ enable ? "Enable" : "Disable");
+}
+
+int hinic3_reinit_cmdq_ctxts(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = hwdev->cmdqs;
+ u8 cmdq_type;
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ hinic3_cmdq_flush_cmd(hwdev, &cmdqs->cmdq[cmdq_type]);
+ cmdq_reset_all_cmd_buff(&cmdqs->cmdq[cmdq_type]);
+ cmdqs->cmdq[cmdq_type].wrapped = 1;
+ hinic3_wq_reset(&cmdqs->cmdq[cmdq_type].wq);
+ }
+
+ return hinic3_set_cmdq_ctxts(hwdev);
+}
+
+static int create_cmdq_wq(struct hinic3_cmdqs *cmdqs)
+{
+ u8 type, cmdq_type;
+ int err;
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ err = hinic3_wq_create(cmdqs->hwdev, &cmdqs->cmdq[cmdq_type].wq,
+ HINIC3_CMDQ_DEPTH, CMDQ_WQEBB_SIZE);
+ if (err) {
+ sdk_err(cmdqs->hwdev->dev_hdl, "Failed to create cmdq wq\n");
+ goto destroy_wq;
+ }
+ }
+
+ /* 1-level CLA must put all cmdq's wq page addr in one wq block */
+ if (!WQ_IS_0_LEVEL_CLA(&cmdqs->cmdq[HINIC3_CMDQ_SYNC].wq)) {
+ /* cmdq wq's CLA table is up to 512B */
+#define CMDQ_WQ_CLA_SIZE 512
+ if (cmdqs->cmdq[HINIC3_CMDQ_SYNC].wq.num_wq_pages >
+ CMDQ_WQ_CLA_SIZE / sizeof(u64)) {
+ err = -EINVAL;
+			sdk_err(cmdqs->hwdev->dev_hdl, "Cmdq wq pages exceed limit: %zu\n",
+				CMDQ_WQ_CLA_SIZE / sizeof(u64));
+ goto destroy_wq;
+ }
+
+ cmdqs->wq_block_vaddr =
+ dma_zalloc_coherent(cmdqs->hwdev->dev_hdl, PAGE_SIZE,
+ &cmdqs->wq_block_paddr, GFP_KERNEL);
+ if (!cmdqs->wq_block_vaddr) {
+ err = -ENOMEM;
+ sdk_err(cmdqs->hwdev->dev_hdl, "Failed to alloc cmdq wq block\n");
+ goto destroy_wq;
+ }
+
+ type = HINIC3_CMDQ_SYNC;
+ for (; type < cmdqs->cmdq_num; type++)
+ memcpy((u8 *)cmdqs->wq_block_vaddr +
+ CMDQ_WQ_CLA_SIZE * type,
+ cmdqs->cmdq[type].wq.wq_block_vaddr,
+ cmdqs->cmdq[type].wq.num_wq_pages * sizeof(u64));
+ }
+
+ return 0;
+
+destroy_wq:
+ type = HINIC3_CMDQ_SYNC;
+ for (; type < cmdq_type; type++)
+ hinic3_wq_destroy(&cmdqs->cmdq[type].wq);
+
+ return err;
+}
+
+static void destroy_cmdq_wq(struct hinic3_cmdqs *cmdqs)
+{
+ u8 cmdq_type;
+
+ if (cmdqs->wq_block_vaddr)
+ dma_free_coherent(cmdqs->hwdev->dev_hdl, PAGE_SIZE,
+ cmdqs->wq_block_vaddr, cmdqs->wq_block_paddr);
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++)
+ hinic3_wq_destroy(&cmdqs->cmdq[cmdq_type].wq);
+}
+
+static int init_cmdqs(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ u8 cmdq_num;
+ int err = -ENOMEM;
+
+ if (COMM_SUPPORT_CMDQ_NUM(hwdev)) {
+ cmdq_num = hwdev->glb_attr.cmdq_num;
+ if (hwdev->glb_attr.cmdq_num > HINIC3_MAX_CMDQ_TYPES) {
+ sdk_warn(hwdev->dev_hdl, "Adjust cmdq num to %d\n", HINIC3_MAX_CMDQ_TYPES);
+ cmdq_num = HINIC3_MAX_CMDQ_TYPES;
+ }
+ } else {
+ cmdq_num = HINIC3_MAX_CMDQ_TYPES;
+ }
+
+ cmdqs = kzalloc(sizeof(*cmdqs), GFP_KERNEL);
+ if (!cmdqs)
+ return err;
+
+ hwdev->cmdqs = cmdqs;
+ cmdqs->hwdev = hwdev;
+ cmdqs->cmdq_num = cmdq_num;
+
+ cmdqs->cmd_buf_pool = dma_pool_create("hinic3_cmdq", hwdev->dev_hdl,
+ HINIC3_CMDQ_BUF_SIZE, HINIC3_CMDQ_BUF_SIZE, 0ULL);
+ if (!cmdqs->cmd_buf_pool) {
+ sdk_err(hwdev->dev_hdl, "Failed to create cmdq buffer pool\n");
+ goto pool_create_err;
+ }
+
+ return 0;
+
+pool_create_err:
+ kfree(cmdqs);
+
+ return err;
+}
+
+int hinic3_cmdqs_init(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ void __iomem *db_base = NULL;
+ u8 type, cmdq_type;
+ int err = -ENOMEM;
+
+ err = init_cmdqs(hwdev);
+ if (err)
+ return err;
+
+ cmdqs = hwdev->cmdqs;
+
+ err = create_cmdq_wq(cmdqs);
+ if (err)
+ goto create_wq_err;
+
+ err = hinic3_alloc_db_addr(hwdev, &db_base, NULL);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to allocate doorbell address\n");
+ goto alloc_db_err;
+ }
+
+ cmdqs->cmdqs_db_base = (u8 *)db_base;
+ for (cmdq_type = HINIC3_CMDQ_SYNC; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ err = init_cmdq(&cmdqs->cmdq[cmdq_type], hwdev, cmdq_type);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize cmdq type :%d\n", cmdq_type);
+ goto init_cmdq_err;
+ }
+
+ cmdq_init_queue_ctxt(cmdqs, &cmdqs->cmdq[cmdq_type],
+ &cmdqs->cmdq[cmdq_type].cmdq_ctxt);
+ }
+
+ err = hinic3_set_cmdq_ctxts(hwdev);
+ if (err)
+ goto init_cmdq_err;
+
+ return 0;
+
+init_cmdq_err:
+ for (type = HINIC3_CMDQ_SYNC; type < cmdq_type; type++)
+ free_cmdq(&cmdqs->cmdq[type]);
+
+ hinic3_free_db_addr(hwdev, cmdqs->cmdqs_db_base, NULL);
+
+alloc_db_err:
+ destroy_cmdq_wq(cmdqs);
+
+create_wq_err:
+ dma_pool_destroy(cmdqs->cmd_buf_pool);
+ kfree(cmdqs);
+
+ return err;
+}
+
+void hinic3_cmdqs_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = hwdev->cmdqs;
+ u8 cmdq_type = HINIC3_CMDQ_SYNC;
+
+ cmdqs->status &= ~HINIC3_CMDQ_ENABLE;
+
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ hinic3_cmdq_flush_cmd(hwdev, &cmdqs->cmdq[cmdq_type]);
+ cmdq_reset_all_cmd_buff(&cmdqs->cmdq[cmdq_type]);
+ free_cmdq(&cmdqs->cmdq[cmdq_type]);
+ }
+
+ hinic3_free_db_addr(hwdev, cmdqs->cmdqs_db_base, NULL);
+ destroy_cmdq_wq(cmdqs);
+
+ dma_pool_destroy(cmdqs->cmd_buf_pool);
+
+ kfree(cmdqs);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.h
new file mode 100644
index 000000000000..ab36dc9c2ba6
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_cmdq.h
@@ -0,0 +1,204 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_CMDQ_H
+#define HINIC3_CMDQ_H
+
+#include <linux/types.h>
+#include <linux/completion.h>
+#include <linux/spinlock.h>
+
+#include "comm_msg_intf.h"
+#include "hinic3_hw.h"
+#include "hinic3_wq.h"
+#include "hinic3_common.h"
+#include "hinic3_hwdev.h"
+
+#define HINIC3_SCMD_DATA_LEN 16
+
+#define HINIC3_CMDQ_DEPTH 4096
+
+enum hinic3_cmdq_type {
+ HINIC3_CMDQ_SYNC,
+ HINIC3_CMDQ_ASYNC,
+ HINIC3_MAX_CMDQ_TYPES = 4
+};
+
+enum hinic3_db_src_type {
+ HINIC3_DB_SRC_CMDQ_TYPE,
+ HINIC3_DB_SRC_L2NIC_SQ_TYPE,
+};
+
+enum hinic3_cmdq_db_type {
+ HINIC3_DB_SQ_RQ_TYPE,
+ HINIC3_DB_CMDQ_TYPE,
+};
+
+/* hardware define: cmdq wqe */
+struct hinic3_cmdq_header {
+ u32 header_info;
+ u32 saved_data;
+};
+
+struct hinic3_scmd_bufdesc {
+ u32 buf_len;
+ u32 rsvd;
+ u8 data[HINIC3_SCMD_DATA_LEN];
+};
+
+struct hinic3_lcmd_bufdesc {
+ struct hinic3_sge sge;
+ u32 rsvd1;
+ u64 saved_async_buf;
+ u64 rsvd3;
+};
+
+struct hinic3_cmdq_db {
+ u32 db_head;
+ u32 db_info;
+};
+
+struct hinic3_status {
+ u32 status_info;
+};
+
+struct hinic3_ctrl {
+ u32 ctrl_info;
+};
+
+struct hinic3_sge_resp {
+ struct hinic3_sge sge;
+ u32 rsvd;
+};
+
+struct hinic3_cmdq_completion {
+ union {
+ struct hinic3_sge_resp sge_resp;
+ u64 direct_resp;
+ };
+};
+
+struct hinic3_cmdq_wqe_scmd {
+ struct hinic3_cmdq_header header;
+ u64 rsvd;
+ struct hinic3_status status;
+ struct hinic3_ctrl ctrl;
+ struct hinic3_cmdq_completion completion;
+ struct hinic3_scmd_bufdesc buf_desc;
+};
+
+struct hinic3_cmdq_wqe_lcmd {
+ struct hinic3_cmdq_header header;
+ struct hinic3_status status;
+ struct hinic3_ctrl ctrl;
+ struct hinic3_cmdq_completion completion;
+ struct hinic3_lcmd_bufdesc buf_desc;
+};
+
+struct hinic3_cmdq_inline_wqe {
+ struct hinic3_cmdq_wqe_scmd wqe_scmd;
+};
+
+struct hinic3_cmdq_wqe {
+ union {
+ struct hinic3_cmdq_inline_wqe inline_wqe;
+ struct hinic3_cmdq_wqe_lcmd wqe_lcmd;
+ };
+};
+
+struct hinic3_cmdq_arm_bit {
+ u32 q_type;
+ u32 q_id;
+};
+
+enum hinic3_cmdq_status {
+ HINIC3_CMDQ_ENABLE = BIT(0),
+};
+
+enum hinic3_cmdq_cmd_type {
+ HINIC3_CMD_TYPE_NONE,
+ HINIC3_CMD_TYPE_SET_ARM,
+ HINIC3_CMD_TYPE_DIRECT_RESP,
+ HINIC3_CMD_TYPE_SGE_RESP,
+ HINIC3_CMD_TYPE_ASYNC,
+ HINIC3_CMD_TYPE_FAKE_TIMEOUT,
+ HINIC3_CMD_TYPE_TIMEOUT,
+ HINIC3_CMD_TYPE_FORCE_STOP,
+};
+
+struct hinic3_cmdq_cmd_info {
+ enum hinic3_cmdq_cmd_type cmd_type;
+ u16 channel;
+ u16 rsvd1;
+
+ struct completion *done;
+ int *errcode;
+ int *cmpt_code;
+ u64 *direct_resp;
+ u64 cmdq_msg_id;
+
+ struct hinic3_cmd_buf *buf_in;
+ struct hinic3_cmd_buf *buf_out;
+};
+
+struct hinic3_cmdq {
+ struct hinic3_wq wq;
+
+ enum hinic3_cmdq_type cmdq_type;
+ int wrapped;
+
+ /* spinlock for send cmdq commands */
+ spinlock_t cmdq_lock;
+
+ struct cmdq_ctxt_info cmdq_ctxt;
+
+ struct hinic3_cmdq_cmd_info *cmd_infos;
+
+ struct hinic3_hwdev *hwdev;
+ u64 rsvd1[2];
+};
+
+struct hinic3_cmdqs {
+ struct hinic3_hwdev *hwdev;
+
+	struct dma_pool *cmd_buf_pool;
+ /* doorbell area */
+ u8 __iomem *cmdqs_db_base;
+
+ /* All cmdq's CLA of a VF occupy a PAGE when cmdq wq is 1-level CLA */
+ dma_addr_t wq_block_paddr;
+ void *wq_block_vaddr;
+ struct hinic3_cmdq cmdq[HINIC3_MAX_CMDQ_TYPES];
+
+ u32 status;
+ u32 disable_flag;
+
+ bool lock_channel_en;
+ unsigned long channel_stop;
+ u8 cmdq_num;
+ u32 rsvd1;
+ u64 rsvd2;
+};
+
+void hinic3_cmdq_ceq_handler(void *handle, u32 ceqe_data);
+
+int hinic3_reinit_cmdq_ctxts(struct hinic3_hwdev *hwdev);
+
+bool hinic3_cmdq_idle(struct hinic3_cmdq *cmdq);
+
+int hinic3_cmdqs_init(struct hinic3_hwdev *hwdev);
+
+void hinic3_cmdqs_free(struct hinic3_hwdev *hwdev);
+
+void hinic3_cmdq_flush_cmd(struct hinic3_hwdev *hwdev,
+ struct hinic3_cmdq *cmdq);
+
+int hinic3_cmdq_set_channel_status(struct hinic3_hwdev *hwdev, u16 channel,
+ bool enable);
+
+void hinic3_cmdq_enable_channel_lock(struct hinic3_hwdev *hwdev, bool enable);
+
+void hinic3_cmdq_flush_sync_cmd(struct hinic3_hwdev *hwdev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_common.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_common.c
new file mode 100644
index 000000000000..a942ef185e6f
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_common.c
@@ -0,0 +1,93 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/kernel.h>
+#include <linux/io-mapping.h>
+#include <linux/delay.h>
+
+#include "ossl_knl.h"
+#include "hinic3_common.h"
+
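+/* Allocate coherent DMA memory whose bus address is aligned to @align:
+ * try a plain allocation first and keep it if it already happens to be
+ * aligned; otherwise free it and over-allocate size + align, returning the
+ * aligned view while remembering the original address/size for freeing.
+ */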
+int hinic3_dma_zalloc_coherent_align(void *dev_hdl, u64 size, u64 align,
+ unsigned int flag,
+ struct hinic3_dma_addr_align *mem_align)
+{
+ void *vaddr = NULL, *align_vaddr = NULL;
+ dma_addr_t paddr, align_paddr;
+ u64 real_size = size;
+
+ vaddr = dma_zalloc_coherent(dev_hdl, real_size, &paddr, flag);
+ if (!vaddr)
+ return -ENOMEM;
+
+ align_paddr = ALIGN(paddr, align);
+ /* align */
+ if (align_paddr == paddr) {
+ align_vaddr = vaddr;
+ goto out;
+ }
+
+ dma_free_coherent(dev_hdl, real_size, vaddr, paddr);
+
+ /* realloc memory for align */
+ real_size = size + align;
+ vaddr = dma_zalloc_coherent(dev_hdl, real_size, &paddr, flag);
+ if (!vaddr)
+ return -ENOMEM;
+
+ align_paddr = ALIGN(paddr, align);
+ align_vaddr = (void *)((u64)vaddr + (align_paddr - paddr));
+
+out:
+ mem_align->real_size = (u32)real_size;
+ mem_align->ori_vaddr = vaddr;
+ mem_align->ori_paddr = paddr;
+ mem_align->align_vaddr = align_vaddr;
+ mem_align->align_paddr = align_paddr;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_dma_zalloc_coherent_align);
+
+void hinic3_dma_free_coherent_align(void *dev_hdl,
+ struct hinic3_dma_addr_align *mem_align)
+{
+ dma_free_coherent(dev_hdl, mem_align->real_size,
+ mem_align->ori_vaddr, mem_align->ori_paddr);
+}
+EXPORT_SYMBOL(hinic3_dma_free_coherent_align);
+
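+/* Generic poll-until-done helper: call @handler every @wait_once_us until
+ * it reports completion/error or @wait_total_ms elapses. The final check
+ * after the deadline avoids a spurious -ETIMEDOUT when this thread was
+ * scheduled out past the deadline between polls.
+ */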
+int hinic3_wait_for_timeout(void *priv_data, wait_cpl_handler handler,
+ u32 wait_total_ms, u32 wait_once_us)
+{
+ enum hinic3_wait_return ret;
+ unsigned long end;
+ /* Take 9/10 * wait_once_us as the minimum sleep time of usleep_range */
+ u32 usleep_min = wait_once_us - wait_once_us / 10;
+
+ if (!handler)
+ return -EINVAL;
+
+ end = jiffies + msecs_to_jiffies(wait_total_ms);
+ do {
+ ret = handler(priv_data);
+		if (ret == WAIT_PROCESS_CPL)
+			return 0;
+		if (ret == WAIT_PROCESS_ERR)
+			return -EIO;
+
+		/* msleep is accurate enough for sleeps of 20 ms or more */
+ if (wait_once_us >= 20 * USEC_PER_MSEC)
+ msleep(wait_once_us / USEC_PER_MSEC);
+ else
+ usleep_range(usleep_min, wait_once_us);
+ } while (time_before(jiffies, end));
+
+ ret = handler(priv_data);
+	if (ret == WAIT_PROCESS_CPL)
+		return 0;
+	if (ret == WAIT_PROCESS_ERR)
+		return -EIO;
+
+ return -ETIMEDOUT;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_csr.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_csr.h
new file mode 100644
index 000000000000..b5390c9ed488
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_csr.h
@@ -0,0 +1,187 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_CSR_H
+#define HINIC3_CSR_H
+
+/* bit30/bit31 for bar index flag
+ * 00: bar0
+ * 01: bar1
+ * 10: bar2
+ * 11: bar3
+ */
+#define HINIC3_CFG_REGS_FLAG 0x40000000
+
+#define HINIC3_MGMT_REGS_FLAG 0xC0000000
+
+#define HINIC3_REGS_FLAG_MAKS 0x3FFFFFFF
+
+#define HINIC3_VF_CFG_REG_OFFSET 0x2000
+
+#define HINIC3_HOST_CSR_BASE_ADDR (HINIC3_MGMT_REGS_FLAG + 0x6000)
+#define HINIC3_CSR_GLOBAL_BASE_ADDR (HINIC3_MGMT_REGS_FLAG + 0x6400)
+
+/* HW interface registers */
+#define HINIC3_CSR_FUNC_ATTR0_ADDR (HINIC3_CFG_REGS_FLAG + 0x0)
+#define HINIC3_CSR_FUNC_ATTR1_ADDR (HINIC3_CFG_REGS_FLAG + 0x4)
+#define HINIC3_CSR_FUNC_ATTR2_ADDR (HINIC3_CFG_REGS_FLAG + 0x8)
+#define HINIC3_CSR_FUNC_ATTR3_ADDR (HINIC3_CFG_REGS_FLAG + 0xC)
+#define HINIC3_CSR_FUNC_ATTR4_ADDR (HINIC3_CFG_REGS_FLAG + 0x10)
+#define HINIC3_CSR_FUNC_ATTR5_ADDR (HINIC3_CFG_REGS_FLAG + 0x14)
+#define HINIC3_CSR_FUNC_ATTR6_ADDR (HINIC3_CFG_REGS_FLAG + 0x18)
+
+#define HINIC3_FUNC_CSR_MAILBOX_DATA_OFF 0x80
+#define HINIC3_FUNC_CSR_MAILBOX_CONTROL_OFF \
+ (HINIC3_CFG_REGS_FLAG + 0x0100)
+#define HINIC3_FUNC_CSR_MAILBOX_INT_OFFSET_OFF \
+ (HINIC3_CFG_REGS_FLAG + 0x0104)
+#define HINIC3_FUNC_CSR_MAILBOX_RESULT_H_OFF \
+ (HINIC3_CFG_REGS_FLAG + 0x0108)
+#define HINIC3_FUNC_CSR_MAILBOX_RESULT_L_OFF \
+ (HINIC3_CFG_REGS_FLAG + 0x010C)
+/* CLP registers */
+#define HINIC3_BAR3_CLP_BASE_ADDR (HINIC3_MGMT_REGS_FLAG + 0x0000)
+
+#define HINIC3_UCPU_CLP_SIZE_REG (HINIC3_HOST_CSR_BASE_ADDR + 0x40)
+#define HINIC3_UCPU_CLP_REQBASE_REG (HINIC3_HOST_CSR_BASE_ADDR + 0x44)
+#define HINIC3_UCPU_CLP_RSPBASE_REG (HINIC3_HOST_CSR_BASE_ADDR + 0x48)
+#define HINIC3_UCPU_CLP_REQ_REG (HINIC3_HOST_CSR_BASE_ADDR + 0x4c)
+#define HINIC3_UCPU_CLP_RSP_REG (HINIC3_HOST_CSR_BASE_ADDR + 0x50)
+#define HINIC3_CLP_REG(member) (HINIC3_UCPU_CLP_##member##_REG)
+
+#define HINIC3_CLP_REQ_DATA HINIC3_BAR3_CLP_BASE_ADDR
+#define HINIC3_CLP_RSP_DATA (HINIC3_BAR3_CLP_BASE_ADDR + 0x1000)
+#define HINIC3_CLP_DATA(member) (HINIC3_CLP_##member##_DATA)
+
+#define HINIC3_PPF_ELECTION_OFFSET 0x0
+#define HINIC3_MPF_ELECTION_OFFSET 0x20
+
+#define HINIC3_CSR_PPF_ELECTION_ADDR \
+ (HINIC3_HOST_CSR_BASE_ADDR + HINIC3_PPF_ELECTION_OFFSET)
+
+#define HINIC3_CSR_GLOBAL_MPF_ELECTION_ADDR \
+ (HINIC3_HOST_CSR_BASE_ADDR + HINIC3_MPF_ELECTION_OFFSET)
+
+#define HINIC3_CSR_FUNC_PPF_ELECT_BASE_ADDR (HINIC3_CFG_REGS_FLAG + 0x60)
+#define HINIC3_CSR_FUNC_PPF_ELECT_PORT_STRIDE 0x4
+
+#define HINIC3_CSR_FUNC_PPF_ELECT(host_idx) \
+ (HINIC3_CSR_FUNC_PPF_ELECT_BASE_ADDR + \
+ (host_idx) * HINIC3_CSR_FUNC_PPF_ELECT_PORT_STRIDE)
+
+#define HINIC3_CSR_DMA_ATTR_TBL_ADDR (HINIC3_CFG_REGS_FLAG + 0x380)
+#define HINIC3_CSR_DMA_ATTR_INDIR_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x390)
+
+/* MSI-X registers */
+#define HINIC3_CSR_MSIX_INDIR_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x310)
+#define HINIC3_CSR_MSIX_CTRL_ADDR (HINIC3_CFG_REGS_FLAG + 0x300)
+#define HINIC3_CSR_MSIX_CNT_ADDR (HINIC3_CFG_REGS_FLAG + 0x304)
+#define HINIC3_CSR_FUNC_MSI_CLR_WR_ADDR (HINIC3_CFG_REGS_FLAG + 0x58)
+
+#define HINIC3_MSI_CLR_INDIR_RESEND_TIMER_CLR_SHIFT 0
+#define HINIC3_MSI_CLR_INDIR_INT_MSK_SET_SHIFT 1
+#define HINIC3_MSI_CLR_INDIR_INT_MSK_CLR_SHIFT 2
+#define HINIC3_MSI_CLR_INDIR_AUTO_MSK_SET_SHIFT 3
+#define HINIC3_MSI_CLR_INDIR_AUTO_MSK_CLR_SHIFT 4
+#define HINIC3_MSI_CLR_INDIR_SIMPLE_INDIR_IDX_SHIFT 22
+
+#define HINIC3_MSI_CLR_INDIR_RESEND_TIMER_CLR_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_INT_MSK_SET_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_INT_MSK_CLR_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_AUTO_MSK_SET_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_AUTO_MSK_CLR_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_SIMPLE_INDIR_IDX_MASK 0x3FFU
+
+#define HINIC3_MSI_CLR_INDIR_SET(val, member) \
+ (((val) & HINIC3_MSI_CLR_INDIR_##member##_MASK) << \
+ HINIC3_MSI_CLR_INDIR_##member##_SHIFT)
+
+/* EQ registers */
+#define HINIC3_AEQ_INDIR_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x210)
+#define HINIC3_CEQ_INDIR_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x290)
+
+#define HINIC3_EQ_INDIR_IDX_ADDR(type) \
+ ((type == HINIC3_AEQ) ? \
+ HINIC3_AEQ_INDIR_IDX_ADDR : HINIC3_CEQ_INDIR_IDX_ADDR)
+
+#define HINIC3_AEQ_MTT_OFF_BASE_ADDR (HINIC3_CFG_REGS_FLAG + 0x240)
+#define HINIC3_CEQ_MTT_OFF_BASE_ADDR (HINIC3_CFG_REGS_FLAG + 0x2C0)
+
+#define HINIC3_CSR_EQ_PAGE_OFF_STRIDE 8
+
+#define HINIC3_AEQ_HI_PHYS_ADDR_REG(pg_num) \
+ (HINIC3_AEQ_MTT_OFF_BASE_ADDR + \
+ (pg_num) * HINIC3_CSR_EQ_PAGE_OFF_STRIDE)
+
+#define HINIC3_AEQ_LO_PHYS_ADDR_REG(pg_num) \
+ (HINIC3_AEQ_MTT_OFF_BASE_ADDR + \
+ (pg_num) * HINIC3_CSR_EQ_PAGE_OFF_STRIDE + 4)
+
+#define HINIC3_CEQ_HI_PHYS_ADDR_REG(pg_num) \
+ (HINIC3_CEQ_MTT_OFF_BASE_ADDR + \
+ (pg_num) * HINIC3_CSR_EQ_PAGE_OFF_STRIDE)
+
+#define HINIC3_CEQ_LO_PHYS_ADDR_REG(pg_num) \
+ (HINIC3_CEQ_MTT_OFF_BASE_ADDR + \
+ (pg_num) * HINIC3_CSR_EQ_PAGE_OFF_STRIDE + 4)
+
+#define HINIC3_CSR_AEQ_CTRL_0_ADDR (HINIC3_CFG_REGS_FLAG + 0x200)
+#define HINIC3_CSR_AEQ_CTRL_1_ADDR (HINIC3_CFG_REGS_FLAG + 0x204)
+#define HINIC3_CSR_AEQ_CONS_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x208)
+#define HINIC3_CSR_AEQ_PROD_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x20C)
+#define HINIC3_CSR_AEQ_CI_SIMPLE_INDIR_ADDR (HINIC3_CFG_REGS_FLAG + 0x50)
+
+#define HINIC3_CSR_CEQ_CTRL_0_ADDR (HINIC3_CFG_REGS_FLAG + 0x280)
+#define HINIC3_CSR_CEQ_CTRL_1_ADDR (HINIC3_CFG_REGS_FLAG + 0x284)
+#define HINIC3_CSR_CEQ_CONS_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x288)
+#define HINIC3_CSR_CEQ_PROD_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x28c)
+#define HINIC3_CSR_CEQ_CI_SIMPLE_INDIR_ADDR (HINIC3_CFG_REGS_FLAG + 0x54)
+
+/* API CMD registers */
+#define HINIC3_CSR_API_CMD_BASE (HINIC3_MGMT_REGS_FLAG + 0x2000)
+
+#define HINIC3_CSR_API_CMD_STRIDE 0x80
+
+#define HINIC3_CSR_API_CMD_CHAIN_HEAD_HI_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x0 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_CHAIN_HEAD_LO_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x4 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_STATUS_HI_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x8 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_STATUS_LO_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0xC + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_CHAIN_NUM_CELLS_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x10 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_CHAIN_CTRL_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x14 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_CHAIN_PI_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x1C + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_CHAIN_REQ_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x20 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+#define HINIC3_CSR_API_CMD_STATUS_0_ADDR(idx) \
+ (HINIC3_CSR_API_CMD_BASE + 0x30 + (idx) * HINIC3_CSR_API_CMD_STRIDE)
+
+/* self test register */
+#define HINIC3_MGMT_HEALTH_STATUS_ADDR (HINIC3_MGMT_REGS_FLAG + 0x983c)
+
+#define HINIC3_CHIP_BASE_INFO_ADDR (HINIC3_MGMT_REGS_FLAG + 0xB02C)
+
+#define HINIC3_CHIP_ERR_STATUS0_ADDR (HINIC3_MGMT_REGS_FLAG + 0xC0EC)
+#define HINIC3_CHIP_ERR_STATUS1_ADDR (HINIC3_MGMT_REGS_FLAG + 0xC0F0)
+
+#define HINIC3_ERR_INFO0_ADDR (HINIC3_MGMT_REGS_FLAG + 0xC0F4)
+#define HINIC3_ERR_INFO1_ADDR (HINIC3_MGMT_REGS_FLAG + 0xC0F8)
+#define HINIC3_ERR_INFO2_ADDR (HINIC3_MGMT_REGS_FLAG + 0xC0FC)
+
+#define HINIC3_MULT_HOST_SLAVE_STATUS_ADDR (HINIC3_MGMT_REGS_FLAG + 0xDF30)
+#define HINIC3_MULT_MIGRATE_HOST_STATUS_ADDR (HINIC3_MGMT_REGS_FLAG + 0xDF4C)
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.c
new file mode 100644
index 000000000000..4c13a2e8ffd6
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.c
@@ -0,0 +1,803 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <net/addrconf.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/io-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/time.h>
+#include <linux/timex.h>
+#include <linux/rtc.h>
+#include <linux/debugfs.h>
+
+#include "ossl_knl.h"
+#include "hinic3_mt.h"
+#include "hinic3_crm.h"
+#include "hinic3_lld.h"
+#include "hinic3_sriov.h"
+#include "hinic3_nictool.h"
+#include "hinic3_pci_id_tbl.h"
+#include "hinic3_dev_mgmt.h"
+
+#define HINIC3_WAIT_TOOL_CNT_TIMEOUT 10000
+#define HINIC3_WAIT_TOOL_MIN_USLEEP_TIME 9900
+#define HINIC3_WAIT_TOOL_MAX_USLEEP_TIME 10000
+
+static unsigned long card_bit_map;
+
+LIST_HEAD(g_hinic3_chip_list);
+
+struct list_head *get_hinic3_chip_list(void)
+{
+ return &g_hinic3_chip_list;
+}
+
+void uld_dev_hold(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(lld_dev->pdev);
+
+ atomic_inc(&pci_adapter->uld_ref_cnt[type]);
+}
+EXPORT_SYMBOL(uld_dev_hold);
+
+void uld_dev_put(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(lld_dev->pdev);
+
+ atomic_dec(&pci_adapter->uld_ref_cnt[type]);
+}
+EXPORT_SYMBOL(uld_dev_put);
+
+void lld_dev_cnt_init(struct hinic3_pcidev *pci_adapter)
+{
+ atomic_set(&pci_adapter->ref_cnt, 0);
+}
+
+void lld_dev_hold(struct hinic3_lld_dev *dev)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(dev->pdev);
+
+ atomic_inc(&pci_adapter->ref_cnt);
+}
+
+void lld_dev_put(struct hinic3_lld_dev *dev)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(dev->pdev);
+
+ atomic_dec(&pci_adapter->ref_cnt);
+}
+
+void wait_lld_dev_unused(struct hinic3_pcidev *pci_adapter)
+{
+ unsigned long end;
+
+ end = jiffies + msecs_to_jiffies(HINIC3_WAIT_TOOL_CNT_TIMEOUT);
+ do {
+ if (!atomic_read(&pci_adapter->ref_cnt))
+ return;
+
+ /* if sleep 10ms, use usleep_range to be more precise */
+ usleep_range(HINIC3_WAIT_TOOL_MIN_USLEEP_TIME,
+ HINIC3_WAIT_TOOL_MAX_USLEEP_TIME);
+ } while (time_before(jiffies, end));
+}
+
+enum hinic3_lld_status {
+ HINIC3_NODE_CHANGE = BIT(0),
+};
+
+struct hinic3_lld_lock {
+ /* lock for chip list */
+ struct mutex lld_mutex;
+ unsigned long status;
+ atomic_t dev_ref_cnt;
+};
+
+struct hinic3_lld_lock g_lld_lock;
+
+#define WAIT_LLD_DEV_HOLD_TIMEOUT (10 * 60 * 1000) /* 10 minutes */
+#define WAIT_LLD_DEV_NODE_CHANGED (10 * 60 * 1000) /* 10 minutes */
+#define WAIT_LLD_DEV_REF_CNT_EMPTY (2 * 60 * 1000) /* 2 minutes */
+#define PRINT_TIMEOUT_INTERVAL 10000
+#define MS_PER_SEC 1000
+#define LLD_LOCK_MIN_USLEEP_TIME 900
+#define LLD_LOCK_MAX_USLEEP_TIME 1000
+
+/* node in chip_node will changed, tools or driver can't get node
+ * during this situation
+ */
+void lld_lock_chip_node(void)
+{
+ unsigned long end;
+ bool timeout = true;
+ u32 loop_cnt;
+
+ mutex_lock(&g_lld_lock.lld_mutex);
+
+ loop_cnt = 0;
+ end = jiffies + msecs_to_jiffies(WAIT_LLD_DEV_NODE_CHANGED);
+ do {
+ if (!test_and_set_bit(HINIC3_NODE_CHANGE, &g_lld_lock.status)) {
+ timeout = false;
+ break;
+ }
+
+ loop_cnt++;
+ if (loop_cnt % PRINT_TIMEOUT_INTERVAL == 0)
+			pr_warn("Waiting %us for lld node change to complete\n",
+ loop_cnt / MS_PER_SEC);
+
+ /* if sleep 1ms, use usleep_range to be more precise */
+ usleep_range(LLD_LOCK_MIN_USLEEP_TIME,
+ LLD_LOCK_MAX_USLEEP_TIME);
+ } while (time_before(jiffies, end));
+
+ if (timeout && test_and_set_bit(HINIC3_NODE_CHANGE, &g_lld_lock.status))
+		pr_warn("Waiting for lld node change to complete timed out when trying to get lld lock\n");
+
+ loop_cnt = 0;
+ timeout = true;
+ end = jiffies + msecs_to_jiffies(WAIT_LLD_DEV_NODE_CHANGED);
+ do {
+ if (!atomic_read(&g_lld_lock.dev_ref_cnt)) {
+ timeout = false;
+ break;
+ }
+
+ loop_cnt++;
+ if (loop_cnt % PRINT_TIMEOUT_INTERVAL == 0)
+			pr_warn("Waiting %us for lld dev to become unused, reference count: %d\n",
+ loop_cnt / MS_PER_SEC,
+ atomic_read(&g_lld_lock.dev_ref_cnt));
+
+ /* if sleep 1ms, use usleep_range to be more precise */
+ usleep_range(LLD_LOCK_MIN_USLEEP_TIME,
+ LLD_LOCK_MAX_USLEEP_TIME);
+ } while (time_before(jiffies, end));
+
+ if (timeout && atomic_read(&g_lld_lock.dev_ref_cnt))
+		pr_warn("Waiting for lld dev to become unused timed out\n");
+
+ mutex_unlock(&g_lld_lock.lld_mutex);
+}
+
+void lld_unlock_chip_node(void)
+{
+ clear_bit(HINIC3_NODE_CHANGE, &g_lld_lock.status);
+}
+
+/* When tools or other drivers want to get node of chip_node, use this function
+ * to prevent node be freed
+ */
+void lld_hold(void)
+{
+ unsigned long end;
+ u32 loop_cnt = 0;
+
+ /* ensure there have not any chip node in changing */
+ mutex_lock(&g_lld_lock.lld_mutex);
+
+ end = jiffies + msecs_to_jiffies(WAIT_LLD_DEV_HOLD_TIMEOUT);
+ do {
+ if (!test_bit(HINIC3_NODE_CHANGE, &g_lld_lock.status))
+ break;
+
+ loop_cnt++;
+
+ if (loop_cnt % PRINT_TIMEOUT_INTERVAL == 0)
+			pr_warn("Waiting %us for lld node change to complete\n",
+ loop_cnt / MS_PER_SEC);
+ /* if sleep 1ms, use usleep_range to be more precise */
+ usleep_range(LLD_LOCK_MIN_USLEEP_TIME,
+ LLD_LOCK_MAX_USLEEP_TIME);
+ } while (time_before(jiffies, end));
+
+ if (test_bit(HINIC3_NODE_CHANGE, &g_lld_lock.status))
+		pr_warn("Waiting for lld node change to complete timed out when trying to hold lld dev\n");
+
+ atomic_inc(&g_lld_lock.dev_ref_cnt);
+ mutex_unlock(&g_lld_lock.lld_mutex);
+}
+
+void lld_put(void)
+{
+ atomic_dec(&g_lld_lock.dev_ref_cnt);
+}
+
+void hinic3_lld_lock_init(void)
+{
+ mutex_init(&g_lld_lock.lld_mutex);
+ atomic_set(&g_lld_lock.dev_ref_cnt, 0);
+}
+
+void hinic3_get_all_chip_id(void *id_info)
+{
+ struct nic_card_id *card_id = (struct nic_card_id *)id_info;
+ struct card_node *chip_node = NULL;
+ int i = 0;
+ int id, err;
+
+ lld_hold();
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ err = sscanf(chip_node->chip_name, HINIC3_CHIP_NAME "%d", &id);
+		if (err != 1) {
+ pr_err("Failed to get hinic3 id\n");
+ continue;
+ }
+ card_id->id[i] = (u32)id;
+ i++;
+ }
+ lld_put();
+ card_id->num = (u32)i;
+}
+
+void hinic3_get_card_func_info_by_card_name(const char *chip_name,
+ struct hinic3_card_func_info *card_func)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+ struct func_pdev_info *pdev_info = NULL;
+
+ card_func->num_pf = 0;
+
+ lld_hold();
+
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ if (strncmp(chip_node->chip_name, chip_name, IFNAMSIZ))
+ continue;
+
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (hinic3_func_type(dev->hwdev) == TYPE_VF)
+ continue;
+
+ pdev_info = &card_func->pdev_info[card_func->num_pf];
+ pdev_info->bar1_size =
+ pci_resource_len(dev->pcidev,
+ HINIC3_PF_PCI_CFG_REG_BAR);
+ pdev_info->bar1_phy_addr =
+ pci_resource_start(dev->pcidev,
+ HINIC3_PF_PCI_CFG_REG_BAR);
+
+ pdev_info->bar3_size =
+ pci_resource_len(dev->pcidev,
+ HINIC3_PCI_MGMT_REG_BAR);
+ pdev_info->bar3_phy_addr =
+ pci_resource_start(dev->pcidev,
+ HINIC3_PCI_MGMT_REG_BAR);
+
+ card_func->num_pf++;
+ if (card_func->num_pf >= MAX_SIZE) {
+ lld_put();
+ return;
+ }
+ }
+ }
+
+ lld_put();
+}
+
+static bool is_pcidev_match_chip_name(const char *ifname, struct hinic3_pcidev *dev,
+ struct card_node *chip_node, enum func_type type)
+{
+ if (!strncmp(chip_node->chip_name, ifname, IFNAMSIZ)) {
+ if (hinic3_func_type(dev->hwdev) != type)
+ return false;
+ return true;
+ }
+
+ return false;
+}
+
+static struct hinic3_lld_dev *get_dst_type_lld_dev_by_chip_name(const char *ifname,
+ enum func_type type)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (is_pcidev_match_chip_name(ifname, dev, chip_node, type))
+ return &dev->lld_dev;
+ }
+ }
+
+ return NULL;
+}
+
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_chip_name(const char *chip_name)
+{
+ struct hinic3_lld_dev *dev = NULL;
+
+ lld_hold();
+
+ dev = get_dst_type_lld_dev_by_chip_name(chip_name, TYPE_PPF);
+ if (dev)
+ goto out;
+
+ dev = get_dst_type_lld_dev_by_chip_name(chip_name, TYPE_PF);
+ if (dev)
+ goto out;
+
+ dev = get_dst_type_lld_dev_by_chip_name(chip_name, TYPE_VF);
+out:
+ if (dev)
+ lld_dev_hold(dev);
+ lld_put();
+
+ return dev;
+}
+
+static int get_dynamic_uld_dev_name(struct hinic3_pcidev *dev, enum hinic3_service_type type,
+ char *ifname)
+{
+ u32 out_size = IFNAMSIZ;
+
+ if (!g_uld_info[type].ioctl)
+ return -EFAULT;
+
+ return g_uld_info[type].ioctl(dev->uld_dev[type], GET_ULD_DEV_NAME,
+ NULL, 0, ifname, &out_size);
+}
+
+static bool is_pcidev_match_dev_name(const char *dev_name, struct hinic3_pcidev *dev,
+ enum hinic3_service_type type)
+{
+ enum hinic3_service_type i;
+ char nic_uld_name[IFNAMSIZ] = {0};
+ int err;
+
+ if (type > SERVICE_T_MAX)
+ return false;
+
+ if (type == SERVICE_T_MAX) {
+ for (i = SERVICE_T_OVS; i < SERVICE_T_MAX; i++) {
+ if (!strncmp(dev->uld_dev_name[i], dev_name, IFNAMSIZ))
+ return true;
+ }
+ } else {
+ if (!strncmp(dev->uld_dev_name[type], dev_name, IFNAMSIZ))
+ return true;
+ }
+
+ err = get_dynamic_uld_dev_name(dev, SERVICE_T_NIC, (char *)nic_uld_name);
+ if (err == 0) {
+ if (!strncmp(nic_uld_name, dev_name, IFNAMSIZ))
+ return true;
+ }
+
+ return false;
+}
+
+static struct hinic3_lld_dev *get_lld_dev_by_dev_name(const char *dev_name,
+ enum hinic3_service_type type, bool hold)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ lld_hold();
+
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (is_pcidev_match_dev_name(dev_name, dev, type)) {
+ if (hold)
+ lld_dev_hold(&dev->lld_dev);
+ lld_put();
+ return &dev->lld_dev;
+ }
+ }
+ }
+
+ lld_put();
+
+ return NULL;
+}
+
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_chip_and_port(const char *chip_name, u8 port_id)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ lld_hold();
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (hinic3_func_type(dev->hwdev) == TYPE_VF)
+ continue;
+
+ if (hinic3_physical_port_id(dev->hwdev) == port_id &&
+ !strncmp(chip_node->chip_name, chip_name, IFNAMSIZ)) {
+ lld_dev_hold(&dev->lld_dev);
+ lld_put();
+
+ return &dev->lld_dev;
+ }
+ }
+ }
+ lld_put();
+
+ return NULL;
+}
+
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_dev_name(const char *dev_name,
+ enum hinic3_service_type type)
+{
+ return get_lld_dev_by_dev_name(dev_name, type, true);
+}
+EXPORT_SYMBOL(hinic3_get_lld_dev_by_dev_name);
+
+struct hinic3_lld_dev *hinic3_get_lld_dev_by_dev_name_unsafe(const char *dev_name,
+ enum hinic3_service_type type)
+{
+ return get_lld_dev_by_dev_name(dev_name, type, false);
+}
+EXPORT_SYMBOL(hinic3_get_lld_dev_by_dev_name_unsafe);
+
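+/* Look up the upper-layer device registered for @type: uld_lock serializes
+ * the lookup against uld attach/detach, and the per-type reference count is
+ * taken under the same lock when @hold is requested.
+ */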
+static void *get_uld_by_lld_dev(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type,
+ bool hold)
+{
+ struct hinic3_pcidev *dev = NULL;
+ void *uld = NULL;
+
+ if (!lld_dev)
+ return NULL;
+
+ dev = pci_get_drvdata(lld_dev->pdev);
+ if (!dev)
+ return NULL;
+
+ spin_lock_bh(&dev->uld_lock);
+ if (!dev->uld_dev[type] || !test_bit(type, &dev->uld_state)) {
+ spin_unlock_bh(&dev->uld_lock);
+ return NULL;
+ }
+ uld = dev->uld_dev[type];
+
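+ /* Hold a reference so the ULD cannot be detached while the caller uses it */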
+ if (hold)
+ atomic_inc(&dev->uld_ref_cnt[type]);
+ spin_unlock_bh(&dev->uld_lock);
+
+ return uld;
+}
+
+void *hinic3_get_uld_dev(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type)
+{
+ return get_uld_by_lld_dev(lld_dev, type, true);
+}
+EXPORT_SYMBOL(hinic3_get_uld_dev);
+
+void *hinic3_get_uld_dev_unsafe(struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type)
+{
+ return get_uld_by_lld_dev(lld_dev, type, false);
+}
+EXPORT_SYMBOL(hinic3_get_uld_dev_unsafe);
+
+static struct hinic3_lld_dev *get_ppf_lld_dev(struct hinic3_lld_dev *lld_dev, bool hold)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!lld_dev)
+ return NULL;
+
+ pci_adapter = pci_get_drvdata(lld_dev->pdev);
+ if (!pci_adapter)
+ return NULL;
+
+ lld_hold();
+ chip_node = pci_adapter->chip_node;
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (dev->hwdev && hinic3_func_type(dev->hwdev) == TYPE_PPF) {
+ if (hold)
+ lld_dev_hold(&dev->lld_dev);
+ lld_put();
+ return &dev->lld_dev;
+ }
+ }
+ lld_put();
+
+ return NULL;
+}
+
+struct hinic3_lld_dev *hinic3_get_ppf_lld_dev(struct hinic3_lld_dev *lld_dev)
+{
+ return get_ppf_lld_dev(lld_dev, true);
+}
+EXPORT_SYMBOL(hinic3_get_ppf_lld_dev);
+
+struct hinic3_lld_dev *hinic3_get_ppf_lld_dev_unsafe(struct hinic3_lld_dev *lld_dev)
+{
+ return get_ppf_lld_dev(lld_dev, false);
+}
+EXPORT_SYMBOL(hinic3_get_ppf_lld_dev_unsafe);
+
+int hinic3_get_chip_name(struct hinic3_lld_dev *lld_dev, char *chip_name, u16 max_len)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!lld_dev || !chip_name || !max_len)
+ return -EINVAL;
+
+ pci_adapter = pci_get_drvdata(lld_dev->pdev);
+ if (!pci_adapter)
+ return -EFAULT;
+
+ lld_hold();
+ strncpy(chip_name, pci_adapter->chip_node->chip_name, max_len);
+ chip_name[max_len - 1] = '\0';
+
+ lld_put();
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_chip_name);
+
+struct hinic3_hwdev *hinic3_get_sdk_hwdev_by_lld(struct hinic3_lld_dev *lld_dev)
+{
+ return lld_dev->hwdev;
+}
+
+struct card_node *hinic3_get_chip_node_by_lld(struct hinic3_lld_dev *lld_dev)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(lld_dev->pdev);
+
+ return pci_adapter->chip_node;
+}
+
+static struct card_node *hinic3_get_chip_node_by_hwdev(const void *hwdev)
+{
+ struct card_node *chip_node = NULL;
+ struct card_node *node_tmp = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!hwdev)
+ return NULL;
+
+ lld_hold();
+
+ list_for_each_entry(node_tmp, &g_hinic3_chip_list, node) {
+ list_for_each_entry(dev, &node_tmp->func_list, node) {
+ if (dev->hwdev == hwdev) {
+ chip_node = node_tmp;
+ break;
+ }
+ }
+
+ if (chip_node)
+ break;
+ }
+
+ lld_put();
+
+ return chip_node;
+}
+
+static bool is_func_valid(struct hinic3_pcidev *dev)
+{
+ if (hinic3_func_type(dev->hwdev) == TYPE_VF)
+ return false;
+
+ return true;
+}
+
+void hinic3_get_card_info(const void *hwdev, void *bufin)
+{
+ struct card_node *chip_node = NULL;
+ struct card_info *info = (struct card_info *)bufin;
+ struct hinic3_pcidev *dev = NULL;
+ void *fun_hwdev = NULL;
+ u32 i = 0;
+
+ info->pf_num = 0;
+
+ chip_node = hinic3_get_chip_node_by_hwdev(hwdev);
+ if (!chip_node)
+ return;
+
+ lld_hold();
+
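+ /* Walk every PF on the chip; 'i' tracks the next free slot in info->pf[] */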
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (!is_func_valid(dev))
+ continue;
+
+ fun_hwdev = dev->hwdev;
+
+ if (hinic3_support_nic(fun_hwdev, NULL)) {
+ if (dev->uld_dev[SERVICE_T_NIC]) {
+ info->pf[i].pf_type |= (u32)BIT(SERVICE_T_NIC);
+ get_dynamic_uld_dev_name(dev, SERVICE_T_NIC, info->pf[i].name);
+ }
+ }
+
+ if (hinic3_support_ppa(fun_hwdev, NULL)) {
+ if (dev->uld_dev[SERVICE_T_PPA]) {
+ info->pf[i].pf_type |= (u32)BIT(SERVICE_T_PPA);
+ get_dynamic_uld_dev_name(dev, SERVICE_T_PPA, info->pf[i].name);
+ }
+ }
+
+ if (hinic3_func_for_mgmt(fun_hwdev))
+ strlcpy(info->pf[i].name, "FOR_MGMT", IFNAMSIZ);
+
+ strlcpy(info->pf[i].bus_info, pci_name(dev->pcidev),
+ sizeof(info->pf[i].bus_info));
+ info->pf_num++;
+ i = info->pf_num;
+ }
+
+ lld_put();
+}
+
+struct hinic3_sriov_info *hinic3_get_sriov_info_by_pcidev(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!pdev)
+ return NULL;
+
+ pci_adapter = pci_get_drvdata(pdev);
+ if (!pci_adapter)
+ return NULL;
+
+ return &pci_adapter->sriov_info;
+}
+
+void *hinic3_get_hwdev_by_pcidev(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!pdev)
+ return NULL;
+
+ pci_adapter = pci_get_drvdata(pdev);
+ if (!pci_adapter)
+ return NULL;
+
+ return pci_adapter->hwdev;
+}
+
+bool hinic3_is_in_host(void)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+
+ lld_hold();
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (hinic3_func_type(dev->hwdev) != TYPE_VF) {
+ lld_put();
+ return true;
+ }
+ }
+ }
+
+ lld_put();
+
+ return false;
+}
+
+static bool chip_node_is_exist(struct hinic3_pcidev *pci_adapter,
+ unsigned char *bus_number)
+{
+ struct card_node *chip_node = NULL;
+ struct pci_dev *pf_pdev = NULL;
+
+ if (!pci_is_root_bus(pci_adapter->pcidev->bus))
+ *bus_number = pci_adapter->pcidev->bus->number;
+
+ if (*bus_number != 0) {
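+ /* A VF shares the chip with its parent PF, so resolve by the PF's bus number */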
+ if (pci_adapter->pcidev->is_virtfn) {
+ pf_pdev = pci_adapter->pcidev->physfn;
+ *bus_number = pf_pdev->bus->number;
+ }
+
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ if (chip_node->bus_num == *bus_number) {
+ pci_adapter->chip_node = chip_node;
+ return true;
+ }
+ }
+ } else if (HINIC3_IS_VF_DEV(pci_adapter->pcidev) ||
+ HINIC3_IS_SPU_DEV(pci_adapter->pcidev)) {
+ /* VF and SPU devices attach to the first chip node in the list */
+ list_for_each_entry(chip_node, &g_hinic3_chip_list, node) {
+ pci_adapter->chip_node = chip_node;
+ return true;
+ }
+ }
+
+ return false;
+}
+
+int alloc_chip_node(struct hinic3_pcidev *pci_adapter)
+{
+ struct card_node *chip_node = NULL;
+ unsigned char i;
+ unsigned char bus_number = 0;
+
+ if (chip_node_is_exist(pci_adapter, &bus_number))
+ return 0;
+
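+ /* Reserve a free card id from the global bitmap */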
+ for (i = 0; i < CARD_MAX_SIZE; i++) {
+ if (test_and_set_bit(i, &card_bit_map) == 0)
+ break;
+ }
+
+ if (i == CARD_MAX_SIZE) {
+ sdk_err(&pci_adapter->pcidev->dev, "Failed to alloc card id\n");
+ return -EFAULT;
+ }
+
+ chip_node = kzalloc(sizeof(*chip_node), GFP_KERNEL);
+ if (!chip_node) {
+ clear_bit(i, &card_bit_map);
+ sdk_err(&pci_adapter->pcidev->dev,
+ "Failed to alloc chip node\n");
+ return -ENOMEM;
+ }
+
+ /* bus number */
+ chip_node->bus_num = bus_number;
+
+ if (snprintf(chip_node->chip_name, IFNAMSIZ, "%s%u", HINIC3_CHIP_NAME, i) < 0) {
+ clear_bit(i, &card_bit_map);
+ kfree(chip_node);
+ return -EINVAL;
+ }
+
+ sdk_info(&pci_adapter->pcidev->dev,
+ "Add new chip %s to global list succeed\n",
+ chip_node->chip_name);
+
+ list_add_tail(&chip_node->node, &g_hinic3_chip_list);
+
+ INIT_LIST_HEAD(&chip_node->func_list);
+ pci_adapter->chip_node = chip_node;
+
+ return 0;
+}
+
+void free_chip_node(struct hinic3_pcidev *pci_adapter)
+{
+ struct card_node *chip_node = pci_adapter->chip_node;
+ int id, err;
+
+ if (list_empty(&chip_node->func_list)) {
+ list_del(&chip_node->node);
+ sdk_info(&pci_adapter->pcidev->dev,
+ "Delete chip %s from global list succeed\n",
+ chip_node->chip_name);
+ err = sscanf(chip_node->chip_name, HINIC3_CHIP_NAME "%d", &id);
+ /* sscanf() returns the number of matched items; on failure 'id' stays uninitialized */
+ if (err != 1)
+ sdk_err(&pci_adapter->pcidev->dev, "Failed to get hinic3 id\n");
+ else
+ clear_bit(id, &card_bit_map);
+
+ kfree(chip_node);
+ }
+}
+
+int hinic3_get_pf_id(struct card_node *chip_node, u32 port_id, u32 *pf_id, u32 *isvalid)
+{
+ struct hinic3_pcidev *dev = NULL;
+
+ lld_hold();
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (hinic3_func_type(dev->hwdev) == TYPE_VF)
+ continue;
+
+ if (hinic3_physical_port_id(dev->hwdev) == port_id) {
+ *pf_id = hinic3_global_func_id(dev->hwdev);
+ *isvalid = 1;
+ break;
+ }
+ }
+ lld_put();
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.h
new file mode 100644
index 000000000000..0b7bf8e18732
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_dev_mgmt.h
@@ -0,0 +1,105 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_DEV_MGMT_H
+#define HINIC3_DEV_MGMT_H
+#include <linux/types.h>
+#include <linux/bitops.h>
+
+#include "hinic3_sriov.h"
+#include "hinic3_lld.h"
+
+#define HINIC3_VF_PCI_CFG_REG_BAR 0
+#define HINIC3_PF_PCI_CFG_REG_BAR 1
+
+#define HINIC3_PCI_INTR_REG_BAR 2
+#define HINIC3_PCI_MGMT_REG_BAR 3 /* Only the PF has the mgmt bar */
+#define HINIC3_PCI_DB_BAR 4
+
+#define PRINT_ULD_DETACH_TIMEOUT_INTERVAL 1000 /* 1 second */
+#define ULD_LOCK_MIN_USLEEP_TIME 900
+#define ULD_LOCK_MAX_USLEEP_TIME 1000
+
+#define HINIC3_IS_VF_DEV(pdev) ((pdev)->device == HINIC3_DEV_ID_VF)
+#define HINIC3_IS_SPU_DEV(pdev) ((pdev)->device == HINIC3_DEV_ID_SPU)
+
+enum {
+ HINIC3_NOT_PROBE = 1,
+ HINIC3_PROBE_START = 2,
+ HINIC3_PROBE_OK = 3,
+ HINIC3_IN_REMOVE = 4,
+};
+
+/* Structure pcidev private */
+struct hinic3_pcidev {
+ struct pci_dev *pcidev;
+ void *hwdev;
+ struct card_node *chip_node;
+ struct hinic3_lld_dev lld_dev;
+ /* Record the service object address,
+ * such as hinic3_dev, toe_dev and fc_dev
+ */
+ void *uld_dev[SERVICE_T_MAX];
+ /* Record the service object name */
+ char uld_dev_name[SERVICE_T_MAX][IFNAMSIZ];
+ /* Node in the global linked list that the driver
+ * uses to manage all function devices
+ */
+ struct list_head node;
+
+ bool disable_vf_load;
+ bool disable_srv_load[SERVICE_T_MAX];
+
+ void __iomem *cfg_reg_base;
+ void __iomem *intr_reg_base;
+ void __iomem *mgmt_reg_base;
+ u64 db_dwqe_len;
+ u64 db_base_phy;
+ void __iomem *db_base;
+
+ /* lock for attach/detach uld */
+ struct mutex pdev_mutex;
+ int lld_state;
+ u32 rsvd1;
+
+ struct hinic3_sriov_info sriov_info;
+
+ /* set when the uld driver is processing an event */
+ unsigned long state;
+ struct pci_device_id id;
+
+ atomic_t ref_cnt;
+
+ atomic_t uld_ref_cnt[SERVICE_T_MAX];
+ unsigned long uld_state;
+ spinlock_t uld_lock;
+
+ u16 probe_fault_level;
+ u16 rsvd2;
+ u64 rsvd4;
+};
+
+struct hinic_chip_info {
+ u8 chip_id; /* chip id within card */
+ u8 card_type; /* hinic_multi_chip_card_type */
+ u8 rsvd[10]; /* reserved 10 bytes */
+};
+
+struct list_head *get_hinic3_chip_list(void);
+
+int alloc_chip_node(struct hinic3_pcidev *pci_adapter);
+
+void free_chip_node(struct hinic3_pcidev *pci_adapter);
+
+void lld_lock_chip_node(void);
+
+void lld_unlock_chip_node(void);
+
+void hinic3_lld_lock_init(void);
+
+void lld_dev_cnt_init(struct hinic3_pcidev *pci_adapter);
+void wait_lld_dev_unused(struct hinic3_pcidev *pci_adapter);
+
+void *hinic3_get_hwdev_by_pcidev(struct pci_dev *pdev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.c
new file mode 100644
index 000000000000..1949ab879cbc
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.c
@@ -0,0 +1,431 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/netlink.h>
+#include <linux/pci.h>
+#include <linux/firmware.h>
+
+#include "hinic3_devlink.h"
+#ifdef HAVE_DEVLINK_FLASH_UPDATE_PARAMS
+#include "hinic3_common.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_hw.h"
+
+static bool check_image_valid(struct hinic3_hwdev *hwdev, const u8 *buf,
+ u32 size, struct host_image *host_image)
+{
+ struct firmware_image *fw_image = NULL;
+ u32 len = 0;
+ u32 i;
+
+ fw_image = (struct firmware_image *)buf;
+ if (fw_image->fw_magic != FW_MAGIC_NUM) {
+ sdk_err(hwdev->dev_hdl, "Wrong fw magic read from file, fw_magic: 0x%x\n",
+ fw_image->fw_magic);
+ return false;
+ }
+
+ if (fw_image->fw_info.section_cnt > FW_TYPE_MAX_NUM) {
+ sdk_err(hwdev->dev_hdl, "Wrong fw type number read from file, fw_type_num: 0x%x\n",
+ fw_image->fw_info.section_cnt);
+ return false;
+ }
+
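+ /* Sum the section lengths and copy the section descriptors for later checks */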
+ for (i = 0; i < fw_image->fw_info.section_cnt; i++) {
+ len += fw_image->section_info[i].section_len;
+ memcpy(&host_image->section_info[i], &fw_image->section_info[i],
+ sizeof(struct firmware_section));
+ }
+
+ if (len != fw_image->fw_len ||
+ (u32)(fw_image->fw_len + FW_IMAGE_HEAD_SIZE) != size) {
+ sdk_err(hwdev->dev_hdl, "Wrong data size read from file\n");
+ return false;
+ }
+
+ host_image->image_info.total_len = fw_image->fw_len;
+ host_image->image_info.fw_version = fw_image->fw_version;
+ host_image->type_num = fw_image->fw_info.section_cnt;
+ host_image->device_id = fw_image->device_id;
+
+ return true;
+}
+
+static bool check_image_integrity(struct hinic3_hwdev *hwdev, struct host_image *host_image)
+{
+ u64 collect_section_type = 0;
+ u32 type, i;
+
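+ /* Each section type may appear at most once; track seen types in a bitmask */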
+ for (i = 0; i < host_image->type_num; i++) {
+ type = host_image->section_info[i].section_type;
+ if (collect_section_type & (1ULL << type)) {
+ sdk_err(hwdev->dev_hdl, "Duplicate section type: %u\n", type);
+ return false;
+ }
+ collect_section_type |= (1ULL << type);
+ }
+
+ if ((collect_section_type & IMAGE_COLD_SUB_MODULES_MUST_IN) ==
+ IMAGE_COLD_SUB_MODULES_MUST_IN &&
+ (collect_section_type & IMAGE_CFG_SUB_MODULES_MUST_IN) != 0)
+ return true;
+
+ sdk_err(hwdev->dev_hdl, "Failed to check file integrity, valid: 0x%llx, current: 0x%llx\n",
+ (IMAGE_COLD_SUB_MODULES_MUST_IN | IMAGE_CFG_SUB_MODULES_MUST_IN),
+ collect_section_type);
+
+ return false;
+}
+
+static bool check_image_device_type(struct hinic3_hwdev *hwdev, u32 device_type)
+{
+ struct comm_cmd_board_info board_info;
+
+ memset(&board_info, 0, sizeof(board_info));
+ if (hinic3_get_board_info(hwdev, &board_info.info, HINIC3_CHANNEL_COMM)) {
+ sdk_err(hwdev->dev_hdl, "Failed to get board info\n");
+ return false;
+ }
+
+ if (device_type == board_info.info.board_type)
+ return true;
+
+ sdk_err(hwdev->dev_hdl, "The image device type: 0x%x doesn't match the firmware device type: 0x%x\n",
+ device_type, board_info.info.board_type);
+
+ return false;
+}
+
+static void encapsulate_update_cmd(struct hinic3_cmd_update_firmware *msg,
+ struct firmware_section *section_info,
+ int *remain_len, u32 *send_len, u32 *send_pos)
+{
+ memset(msg->data, 0, sizeof(msg->data));
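+ /* 'sf' marks the first fragment of a section; 'sl' below marks the last one */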
+ msg->ctl_info.sf = (*remain_len == section_info->section_len);
+ msg->section_info.section_crc = section_info->section_crc;
+ msg->section_info.section_type = section_info->section_type;
+ msg->section_version = section_info->section_version;
+ msg->section_len = section_info->section_len;
+ msg->section_offset = *send_pos;
+ msg->ctl_info.bit_signed = section_info->section_flag & 0x1;
+
+ if (*remain_len <= FW_FRAGMENT_MAX_LEN) {
+ msg->ctl_info.sl = true;
+ msg->ctl_info.fragment_len = (u32)(*remain_len);
+ *send_len += section_info->section_len;
+ } else {
+ msg->ctl_info.sl = false;
+ msg->ctl_info.fragment_len = FW_FRAGMENT_MAX_LEN;
+ *send_len += FW_FRAGMENT_MAX_LEN;
+ }
+}
+
+static int hinic3_flash_firmware(struct hinic3_hwdev *hwdev, const u8 *data,
+ struct host_image *image)
+{
+ u32 send_pos, send_len, section_offset, i;
+ struct hinic3_cmd_update_firmware *update_msg = NULL;
+ u16 out_size = sizeof(*update_msg);
+ bool total_flag = false;
+ int remain_len, err;
+
+ update_msg = kzalloc(sizeof(*update_msg), GFP_KERNEL);
+ if (!update_msg) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc update message\n");
+ return -ENOMEM;
+ }
+
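+ /* Send each firmware section to the mgmt CPU in fragments of at most FW_FRAGMENT_MAX_LEN bytes */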
+ for (i = 0; i < image->type_num; i++) {
+ section_offset = image->section_info[i].section_offset;
+ remain_len = (int)(image->section_info[i].section_len);
+ send_len = 0;
+ send_pos = 0;
+
+ while (remain_len > 0) {
+ if (!total_flag) {
+ update_msg->total_len = image->image_info.total_len;
+ total_flag = true;
+ } else {
+ update_msg->total_len = 0;
+ }
+
+ encapsulate_update_cmd(update_msg, &image->section_info[i],
+ &remain_len, &send_len, &send_pos);
+
+ memcpy(update_msg->data,
+ ((data + FW_IMAGE_HEAD_SIZE) + section_offset) + send_pos,
+ update_msg->ctl_info.fragment_len);
+
+ err = hinic3_pf_to_mgmt_sync(hwdev, HINIC3_MOD_COMM,
+ COMM_MGMT_CMD_UPDATE_FW,
+ update_msg, sizeof(*update_msg),
+ update_msg, &out_size,
+ FW_UPDATE_MGMT_TIMEOUT);
+ if (err || !out_size || update_msg->msg_head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to update firmware, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, update_msg->msg_head.status, out_size);
+ err = update_msg->msg_head.status ?
+ update_msg->msg_head.status : -EIO;
+ kfree(update_msg);
+ return err;
+ }
+
+ send_pos = send_len;
+ remain_len = (int)(image->section_info[i].section_len - send_len);
+ }
+ }
+
+ kfree(update_msg);
+
+ return 0;
+}
+
+static int hinic3_flash_update_notify(struct devlink *devlink, const struct firmware *fw,
+ struct host_image *image, struct netlink_ext_ack *extack)
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+ struct hinic3_hwdev *hwdev = devlink_dev->hwdev;
+ int err;
+
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_MEMBER
+ devlink_flash_update_begin_notify(devlink);
+#endif
+ devlink_flash_update_status_notify(devlink, "Flash firmware begin", NULL, 0, 0);
+ sdk_info(hwdev->dev_hdl, "Flash firmware begin\n");
+ err = hinic3_flash_firmware(hwdev, fw->data, image);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to flash firmware, err: %d\n", err);
+ NL_SET_ERR_MSG_MOD(extack, "Flash firmware failed");
+ devlink_flash_update_status_notify(devlink, "Flash firmware failed", NULL, 0, 0);
+ } else {
+ sdk_info(hwdev->dev_hdl, "Flash firmware end\n");
+ devlink_flash_update_status_notify(devlink, "Flash firmware end", NULL, 0, 0);
+ }
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_MEMBER
+ devlink_flash_update_end_notify(devlink);
+#endif
+
+ return err;
+}
+
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_PARAM
+static int hinic3_devlink_flash_update(struct devlink *devlink, const char *file_name,
+ const char *component, struct netlink_ext_ack *extack)
+#else
+static int hinic3_devlink_flash_update(struct devlink *devlink,
+ struct devlink_flash_update_params *params,
+ struct netlink_ext_ack *extack)
+#endif
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+ struct hinic3_hwdev *hwdev = devlink_dev->hwdev;
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_MEMBER
+ const struct firmware *fw = NULL;
+#else
+ const struct firmware *fw = params->fw;
+#endif
+ struct host_image *image = NULL;
+ int err;
+
+ image = kzalloc(sizeof(*image), GFP_KERNEL);
+ if (!image) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc host image\n");
+ err = -ENOMEM;
+ goto devlink_param_reset;
+ }
+
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_MEMBER
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_PARAM
+ err = request_firmware_direct(&fw, file_name, hwdev->dev_hdl);
+#else
+ err = request_firmware_direct(&fw, params->file_name, hwdev->dev_hdl);
+#endif
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to request firmware\n");
+ goto devlink_request_fw_err;
+ }
+#endif
+
+ if (!check_image_valid(hwdev, fw->data, (u32)(fw->size), image) ||
+ !check_image_integrity(hwdev, image) ||
+ !check_image_device_type(hwdev, image->device_id)) {
+ sdk_err(hwdev->dev_hdl, "Failed to check image\n");
+ NL_SET_ERR_MSG_MOD(extack, "Check image failed");
+ err = -EINVAL;
+ goto devlink_update_out;
+ }
+
+ err = hinic3_flash_update_notify(devlink, fw, image, extack);
+
+devlink_update_out:
+#ifdef HAVE_DEVLINK_FW_FILE_NAME_MEMBER
+ release_firmware(fw);
+
+devlink_request_fw_err:
+#endif
+ kfree(image);
+
+devlink_param_reset:
+ /* reset activate_fw and switch_cfg after flash update operation */
+ devlink_dev->activate_fw = FW_CFG_DEFAULT_INDEX;
+ devlink_dev->switch_cfg = FW_CFG_DEFAULT_INDEX;
+
+ return err;
+}
+
+static const struct devlink_ops hinic3_devlink_ops = {
+ .flash_update = hinic3_devlink_flash_update,
+};
+
+static int hinic3_devlink_get_activate_firmware_config(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+
+ ctx->val.vu8 = devlink_dev->activate_fw;
+
+ return 0;
+}
+
+static int hinic3_devlink_set_activate_firmware_config(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+ struct hinic3_hwdev *hwdev = devlink_dev->hwdev;
+ int err;
+
+ devlink_dev->activate_fw = ctx->val.vu8;
+ sdk_info(hwdev->dev_hdl, "Activate firmware begin\n");
+
+ err = hinic3_activate_firmware(hwdev, devlink_dev->activate_fw);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to activate firmware, err: %d\n", err);
+ return err;
+ }
+
+ sdk_info(hwdev->dev_hdl, "Activate firmware end\n");
+
+ return 0;
+}
+
+static int hinic3_devlink_get_switch_config(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+
+ ctx->val.vu8 = devlink_dev->switch_cfg;
+
+ return 0;
+}
+
+static int hinic3_devlink_set_switch_config(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+ struct hinic3_hwdev *hwdev = devlink_dev->hwdev;
+ int err;
+
+ devlink_dev->switch_cfg = ctx->val.vu8;
+ sdk_info(hwdev->dev_hdl, "Switch cfg begin");
+
+ err = hinic3_switch_config(hwdev, devlink_dev->switch_cfg);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to switch cfg, err: %d\n", err);
+ return err;
+ }
+
+ sdk_info(hwdev->dev_hdl, "Switch cfg end\n");
+
+ return 0;
+}
+
+static int hinic3_devlink_firmware_config_validate(struct devlink *devlink, u32 id,
+ union devlink_param_value val,
+ struct netlink_ext_ack *extack)
+{
+ struct hinic3_devlink *devlink_dev = devlink_priv(devlink);
+ struct hinic3_hwdev *hwdev = devlink_dev->hwdev;
+ u8 cfg_index = val.vu8;
+
+ if (cfg_index > FW_CFG_MAX_INDEX) {
+ sdk_err(hwdev->dev_hdl, "Firmware cfg index out of range [0,7]\n");
+ NL_SET_ERR_MSG_MOD(extack, "Firmware cfg index out of range [0,7]");
+ return -ERANGE;
+ }
+
+ return 0;
+}
+
+static const struct devlink_param hinic3_devlink_params[] = {
+ DEVLINK_PARAM_DRIVER(HINIC3_DEVLINK_PARAM_ID_ACTIVATE_FW,
+ "activate_fw", DEVLINK_PARAM_TYPE_U8,
+ BIT(DEVLINK_PARAM_CMODE_PERMANENT),
+ hinic3_devlink_get_activate_firmware_config,
+ hinic3_devlink_set_activate_firmware_config,
+ hinic3_devlink_firmware_config_validate),
+ DEVLINK_PARAM_DRIVER(HINIC3_DEVLINK_PARAM_ID_SWITCH_CFG,
+ "switch_cfg", DEVLINK_PARAM_TYPE_U8,
+ BIT(DEVLINK_PARAM_CMODE_PERMANENT),
+ hinic3_devlink_get_switch_config,
+ hinic3_devlink_set_switch_config,
+ hinic3_devlink_firmware_config_validate),
+};
+
+int hinic3_init_devlink(struct hinic3_hwdev *hwdev)
+{
+ struct devlink *devlink = NULL;
+ struct pci_dev *pdev = NULL;
+ int err;
+
+ devlink = devlink_alloc(&hinic3_devlink_ops, sizeof(struct hinic3_devlink));
+ if (!devlink) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc devlink\n");
+ return -ENOMEM;
+ }
+
+ hwdev->devlink_dev = devlink_priv(devlink);
+ hwdev->devlink_dev->hwdev = hwdev;
+ hwdev->devlink_dev->activate_fw = FW_CFG_DEFAULT_INDEX;
+ hwdev->devlink_dev->switch_cfg = FW_CFG_DEFAULT_INDEX;
+
+ pdev = hwdev->hwif->pdev;
+ err = devlink_register(devlink, &pdev->dev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to register devlink\n");
+ goto register_devlink_err;
+ }
+
+ err = devlink_params_register(devlink, hinic3_devlink_params,
+ ARRAY_SIZE(hinic3_devlink_params));
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to register devlink params\n");
+ goto register_devlink_params_err;
+ }
+
+ devlink_params_publish(devlink);
+
+ return 0;
+
+register_devlink_params_err:
+ devlink_unregister(devlink);
+
+register_devlink_err:
+ devlink_free(devlink);
+
+ return err;
+}
+
+void hinic3_uninit_devlink(struct hinic3_hwdev *hwdev)
+{
+ struct devlink *devlink = priv_to_devlink(hwdev->devlink_dev);
+
+ devlink_params_unpublish(devlink);
+ devlink_params_unregister(devlink, hinic3_devlink_params,
+ ARRAY_SIZE(hinic3_devlink_params));
+ devlink_unregister(devlink);
+ devlink_free(devlink);
+}
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.h
new file mode 100644
index 000000000000..0b5a086358b9
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_devlink.h
@@ -0,0 +1,149 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_DEVLINK_H
+#define HINIC3_DEVLINK_H
+
+#include "ossl_knl.h"
+#include "hinic3_hwdev.h"
+
+#define FW_MAGIC_NUM 0x5a5a1100
+#define FW_IMAGE_HEAD_SIZE 4096
+#define FW_FRAGMENT_MAX_LEN 1536
+#define FW_CFG_DEFAULT_INDEX 0xFF
+#define FW_TYPE_MAX_NUM 0x40
+#define FW_CFG_MAX_INDEX 7
+
+#ifdef HAVE_DEVLINK_FLASH_UPDATE_PARAMS
+enum hinic3_devlink_param_id {
+ HINIC3_DEVLINK_PARAM_ID_BASE = DEVLINK_PARAM_GENERIC_ID_MAX,
+ HINIC3_DEVLINK_PARAM_ID_ACTIVATE_FW,
+ HINIC3_DEVLINK_PARAM_ID_SWITCH_CFG,
+};
+#endif
+
+enum hinic3_firmware_type {
+ UP_FW_UPDATE_MIN_TYPE1 = 0x0,
+ UP_FW_UPDATE_UP_TEXT = 0x0,
+ UP_FW_UPDATE_UP_DATA = 0x1,
+ UP_FW_UPDATE_UP_DICT = 0x2,
+ UP_FW_UPDATE_TILE_PCPTR = 0x3,
+ UP_FW_UPDATE_TILE_TEXT = 0x4,
+ UP_FW_UPDATE_TILE_DATA = 0x5,
+ UP_FW_UPDATE_TILE_DICT = 0x6,
+ UP_FW_UPDATE_PPE_STATE = 0x7,
+ UP_FW_UPDATE_PPE_BRANCH = 0x8,
+ UP_FW_UPDATE_PPE_EXTACT = 0x9,
+ UP_FW_UPDATE_MAX_TYPE1 = 0x9,
+ UP_FW_UPDATE_CFG0 = 0xa,
+ UP_FW_UPDATE_CFG1 = 0xb,
+ UP_FW_UPDATE_CFG2 = 0xc,
+ UP_FW_UPDATE_CFG3 = 0xd,
+ UP_FW_UPDATE_MAX_TYPE1_CFG = 0xd,
+
+ UP_FW_UPDATE_MIN_TYPE2 = 0x14,
+ UP_FW_UPDATE_MAX_TYPE2 = 0x14,
+
+ UP_FW_UPDATE_MIN_TYPE3 = 0x18,
+ UP_FW_UPDATE_PHY = 0x18,
+ UP_FW_UPDATE_BIOS = 0x19,
+ UP_FW_UPDATE_HLINK_ONE = 0x1a,
+ UP_FW_UPDATE_HLINK_TWO = 0x1b,
+ UP_FW_UPDATE_HLINK_THR = 0x1c,
+ UP_FW_UPDATE_MAX_TYPE3 = 0x1c,
+
+ UP_FW_UPDATE_MIN_TYPE4 = 0x20,
+ UP_FW_UPDATE_L0FW = 0x20,
+ UP_FW_UPDATE_L1FW = 0x21,
+ UP_FW_UPDATE_BOOT = 0x22,
+ UP_FW_UPDATE_SEC_DICT = 0x23,
+ UP_FW_UPDATE_HOT_PATCH0 = 0x24,
+ UP_FW_UPDATE_HOT_PATCH1 = 0x25,
+ UP_FW_UPDATE_HOT_PATCH2 = 0x26,
+ UP_FW_UPDATE_HOT_PATCH3 = 0x27,
+ UP_FW_UPDATE_HOT_PATCH4 = 0x28,
+ UP_FW_UPDATE_HOT_PATCH5 = 0x29,
+ UP_FW_UPDATE_HOT_PATCH6 = 0x2a,
+ UP_FW_UPDATE_HOT_PATCH7 = 0x2b,
+ UP_FW_UPDATE_HOT_PATCH8 = 0x2c,
+ UP_FW_UPDATE_HOT_PATCH9 = 0x2d,
+ UP_FW_UPDATE_HOT_PATCH10 = 0x2e,
+ UP_FW_UPDATE_HOT_PATCH11 = 0x2f,
+ UP_FW_UPDATE_HOT_PATCH12 = 0x30,
+ UP_FW_UPDATE_HOT_PATCH13 = 0x31,
+ UP_FW_UPDATE_HOT_PATCH14 = 0x32,
+ UP_FW_UPDATE_HOT_PATCH15 = 0x33,
+ UP_FW_UPDATE_HOT_PATCH16 = 0x34,
+ UP_FW_UPDATE_HOT_PATCH17 = 0x35,
+ UP_FW_UPDATE_HOT_PATCH18 = 0x36,
+ UP_FW_UPDATE_HOT_PATCH19 = 0x37,
+ UP_FW_UPDATE_MAX_TYPE4 = 0x37,
+
+ UP_FW_UPDATE_MIN_TYPE5 = 0x3a,
+ UP_FW_UPDATE_OPTION_ROM = 0x3a,
+ UP_FW_UPDATE_MAX_TYPE5 = 0x3a,
+
+ UP_FW_UPDATE_MIN_TYPE6 = 0x3e,
+ UP_FW_UPDATE_MAX_TYPE6 = 0x3e,
+
+ UP_FW_UPDATE_MIN_TYPE7 = 0x40,
+ UP_FW_UPDATE_MAX_TYPE7 = 0x40,
+};
+
+#define IMAGE_MPU_ALL_IN (BIT_ULL(UP_FW_UPDATE_UP_TEXT) | \
+ BIT_ULL(UP_FW_UPDATE_UP_DATA) | \
+ BIT_ULL(UP_FW_UPDATE_UP_DICT))
+
+#define IMAGE_NPU_ALL_IN (BIT_ULL(UP_FW_UPDATE_TILE_PCPTR) | \
+ BIT_ULL(UP_FW_UPDATE_TILE_TEXT) | \
+ BIT_ULL(UP_FW_UPDATE_TILE_DATA) | \
+ BIT_ULL(UP_FW_UPDATE_TILE_DICT) | \
+ BIT_ULL(UP_FW_UPDATE_PPE_STATE) | \
+ BIT_ULL(UP_FW_UPDATE_PPE_BRANCH) | \
+ BIT_ULL(UP_FW_UPDATE_PPE_EXTACT))
+
+#define IMAGE_COLD_SUB_MODULES_MUST_IN (IMAGE_MPU_ALL_IN | IMAGE_NPU_ALL_IN)
+
+#define IMAGE_CFG_SUB_MODULES_MUST_IN (BIT_ULL(UP_FW_UPDATE_CFG0) | \
+ BIT_ULL(UP_FW_UPDATE_CFG1) | \
+ BIT_ULL(UP_FW_UPDATE_CFG2) | \
+ BIT_ULL(UP_FW_UPDATE_CFG3))
+
+struct firmware_section {
+ u32 section_len;
+ u32 section_offset;
+ u32 section_version;
+ u32 section_type;
+ u32 section_crc;
+ u32 section_flag;
+};
+
+struct firmware_image {
+ u32 fw_version;
+ u32 fw_len;
+ u32 fw_magic;
+ struct {
+ u32 section_cnt : 16;
+ u32 rsvd : 16;
+ } fw_info;
+ struct firmware_section section_info[FW_TYPE_MAX_NUM];
+ u32 device_id; /* cfg fw board_type value */
+ u32 rsvd0[101]; /* device_id and rsvd0[101] form the update_head_extend_info */
+ u32 rsvd1[534]; /* big bin file total size 4096B */
+ u32 bin_data; /* obtain the address for use */
+};
+
+struct host_image {
+ struct firmware_section section_info[FW_TYPE_MAX_NUM];
+ struct {
+ u32 total_len;
+ u32 fw_version;
+ } image_info;
+ u32 type_num;
+ u32 device_id;
+};
+
+int hinic3_init_devlink(struct hinic3_hwdev *hwdev);
+void hinic3_uninit_devlink(struct hinic3_hwdev *hwdev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.c
new file mode 100644
index 000000000000..2638dd1865d7
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.c
@@ -0,0 +1,1381 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/workqueue.h>
+#include <linux/pci.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/module.h>
+#include <linux/spinlock.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+#include "hinic3_hw.h"
+#include "hinic3_csr.h"
+#include "hinic3_hw_comm.h"
+#include "hinic3_prof_adap.h"
+#include "hinic3_eqs.h"
+
+#define HINIC3_EQS_WQ_NAME "hinic3_eqs"
+
+#define AEQ_CTRL_0_INTR_IDX_SHIFT 0
+#define AEQ_CTRL_0_DMA_ATTR_SHIFT 12
+#define AEQ_CTRL_0_PCI_INTF_IDX_SHIFT 20
+#define AEQ_CTRL_0_INTR_MODE_SHIFT 31
+
+#define AEQ_CTRL_0_INTR_IDX_MASK 0x3FFU
+#define AEQ_CTRL_0_DMA_ATTR_MASK 0x3FU
+#define AEQ_CTRL_0_PCI_INTF_IDX_MASK 0x7U
+#define AEQ_CTRL_0_INTR_MODE_MASK 0x1U
+
+#define AEQ_CTRL_0_SET(val, member) \
+ (((val) & AEQ_CTRL_0_##member##_MASK) << \
+ AEQ_CTRL_0_##member##_SHIFT)
+
+#define AEQ_CTRL_0_CLEAR(val, member) \
+ ((val) & (~(AEQ_CTRL_0_##member##_MASK << \
+ AEQ_CTRL_0_##member##_SHIFT)))
+
+#define AEQ_CTRL_1_LEN_SHIFT 0
+#define AEQ_CTRL_1_ELEM_SIZE_SHIFT 24
+#define AEQ_CTRL_1_PAGE_SIZE_SHIFT 28
+
+#define AEQ_CTRL_1_LEN_MASK 0x1FFFFFU
+#define AEQ_CTRL_1_ELEM_SIZE_MASK 0x3U
+#define AEQ_CTRL_1_PAGE_SIZE_MASK 0xFU
+
+#define AEQ_CTRL_1_SET(val, member) \
+ (((val) & AEQ_CTRL_1_##member##_MASK) << \
+ AEQ_CTRL_1_##member##_SHIFT)
+
+#define AEQ_CTRL_1_CLEAR(val, member) \
+ ((val) & (~(AEQ_CTRL_1_##member##_MASK << \
+ AEQ_CTRL_1_##member##_SHIFT)))
+
+#define HINIC3_EQ_PROD_IDX_MASK 0xFFFFF
+#define HINIC3_TASK_PROCESS_EQE_LIMIT 1024
+#define HINIC3_EQ_UPDATE_CI_STEP 64
+
+/*lint -e806*/
+static uint g_aeq_len = HINIC3_DEFAULT_AEQ_LEN;
+module_param(g_aeq_len, uint, 0444);
+MODULE_PARM_DESC(g_aeq_len,
+ "aeq depth, valid range is " __stringify(HINIC3_MIN_AEQ_LEN)
+ " - " __stringify(HINIC3_MAX_AEQ_LEN));
+
+static uint g_ceq_len = HINIC3_DEFAULT_CEQ_LEN;
+module_param(g_ceq_len, uint, 0444);
+MODULE_PARM_DESC(g_ceq_len,
+ "ceq depth, valid range is " __stringify(HINIC3_MIN_CEQ_LEN)
+ " - " __stringify(HINIC3_MAX_CEQ_LEN));
+
+static uint g_num_ceqe_in_tasklet = HINIC3_TASK_PROCESS_EQE_LIMIT;
+module_param(g_num_ceqe_in_tasklet, uint, 0444);
+MODULE_PARM_DESC(g_num_ceqe_in_tasklet,
+ "The max number of ceqe can be processed in tasklet, default = 1024");
+/*lint +e806*/
+
+#define CEQ_CTRL_0_INTR_IDX_SHIFT 0
+#define CEQ_CTRL_0_DMA_ATTR_SHIFT 12
+#define CEQ_CTRL_0_LIMIT_KICK_SHIFT 20
+#define CEQ_CTRL_0_PCI_INTF_IDX_SHIFT 24
+#define CEQ_CTRL_0_PAGE_SIZE_SHIFT 27
+#define CEQ_CTRL_0_INTR_MODE_SHIFT 31
+
+#define CEQ_CTRL_0_INTR_IDX_MASK 0x3FFU
+#define CEQ_CTRL_0_DMA_ATTR_MASK 0x3FU
+#define CEQ_CTRL_0_LIMIT_KICK_MASK 0xFU
+#define CEQ_CTRL_0_PCI_INTF_IDX_MASK 0x3U
+#define CEQ_CTRL_0_PAGE_SIZE_MASK 0xF
+#define CEQ_CTRL_0_INTR_MODE_MASK 0x1U
+
+#define CEQ_CTRL_0_SET(val, member) \
+ (((val) & CEQ_CTRL_0_##member##_MASK) << \
+ CEQ_CTRL_0_##member##_SHIFT)
+
+#define CEQ_CTRL_1_LEN_SHIFT 0
+#define CEQ_CTRL_1_GLB_FUNC_ID_SHIFT 20
+
+#define CEQ_CTRL_1_LEN_MASK 0xFFFFFU
+#define CEQ_CTRL_1_GLB_FUNC_ID_MASK 0xFFFU
+
+#define CEQ_CTRL_1_SET(val, member) \
+ (((val) & CEQ_CTRL_1_##member##_MASK) << \
+ CEQ_CTRL_1_##member##_SHIFT)
+
+#define EQ_ELEM_DESC_TYPE_SHIFT 0
+#define EQ_ELEM_DESC_SRC_SHIFT 7
+#define EQ_ELEM_DESC_SIZE_SHIFT 8
+#define EQ_ELEM_DESC_WRAPPED_SHIFT 31
+
+#define EQ_ELEM_DESC_TYPE_MASK 0x7FU
+#define EQ_ELEM_DESC_SRC_MASK 0x1U
+#define EQ_ELEM_DESC_SIZE_MASK 0xFFU
+#define EQ_ELEM_DESC_WRAPPED_MASK 0x1U
+
+#define EQ_ELEM_DESC_GET(val, member) \
+ (((val) >> EQ_ELEM_DESC_##member##_SHIFT) & \
+ EQ_ELEM_DESC_##member##_MASK)
+
+#define EQ_CONS_IDX_CONS_IDX_SHIFT 0
+#define EQ_CONS_IDX_INT_ARMED_SHIFT 31
+
+#define EQ_CONS_IDX_CONS_IDX_MASK 0x1FFFFFU
+#define EQ_CONS_IDX_INT_ARMED_MASK 0x1U
+
+#define EQ_CONS_IDX_SET(val, member) \
+ (((val) & EQ_CONS_IDX_##member##_MASK) << \
+ EQ_CONS_IDX_##member##_SHIFT)
+
+#define EQ_CONS_IDX_CLEAR(val, member) \
+ ((val) & (~(EQ_CONS_IDX_##member##_MASK << \
+ EQ_CONS_IDX_##member##_SHIFT)))
+
+#define EQ_CI_SIMPLE_INDIR_CI_SHIFT 0
+#define EQ_CI_SIMPLE_INDIR_ARMED_SHIFT 21
+#define EQ_CI_SIMPLE_INDIR_AEQ_IDX_SHIFT 30
+#define EQ_CI_SIMPLE_INDIR_CEQ_IDX_SHIFT 24
+
+#define EQ_CI_SIMPLE_INDIR_CI_MASK 0x1FFFFFU
+#define EQ_CI_SIMPLE_INDIR_ARMED_MASK 0x1U
+#define EQ_CI_SIMPLE_INDIR_AEQ_IDX_MASK 0x3U
+#define EQ_CI_SIMPLE_INDIR_CEQ_IDX_MASK 0xFFU
+
+#define EQ_CI_SIMPLE_INDIR_SET(val, member) \
+ (((val) & EQ_CI_SIMPLE_INDIR_##member##_MASK) << \
+ EQ_CI_SIMPLE_INDIR_##member##_SHIFT)
+
+#define EQ_CI_SIMPLE_INDIR_CLEAR(val, member) \
+ ((val) & (~(EQ_CI_SIMPLE_INDIR_##member##_MASK << \
+ EQ_CI_SIMPLE_INDIR_##member##_SHIFT)))
+
+#define EQ_WRAPPED(eq) ((u32)(eq)->wrapped << EQ_VALID_SHIFT)
+
+#define EQ_CONS_IDX(eq) ((eq)->cons_idx | \
+ ((u32)(eq)->wrapped << EQ_WRAPPED_SHIFT))
+
+#define EQ_CONS_IDX_REG_ADDR(eq) \
+ (((eq)->type == HINIC3_AEQ) ? \
+ HINIC3_CSR_AEQ_CONS_IDX_ADDR : \
+ HINIC3_CSR_CEQ_CONS_IDX_ADDR)
+#define EQ_CI_SIMPLE_INDIR_REG_ADDR(eq) \
+ (((eq)->type == HINIC3_AEQ) ? \
+ HINIC3_CSR_AEQ_CI_SIMPLE_INDIR_ADDR : \
+ HINIC3_CSR_CEQ_CI_SIMPLE_INDIR_ADDR)
+
+#define EQ_PROD_IDX_REG_ADDR(eq) \
+ (((eq)->type == HINIC3_AEQ) ? \
+ HINIC3_CSR_AEQ_PROD_IDX_ADDR : \
+ HINIC3_CSR_CEQ_PROD_IDX_ADDR)
+
+#define HINIC3_EQ_HI_PHYS_ADDR_REG(type, pg_num) \
+ ((u32)((type == HINIC3_AEQ) ? \
+ HINIC3_AEQ_HI_PHYS_ADDR_REG(pg_num) : \
+ HINIC3_CEQ_HI_PHYS_ADDR_REG(pg_num)))
+
+#define HINIC3_EQ_LO_PHYS_ADDR_REG(type, pg_num) \
+ ((u32)((type == HINIC3_AEQ) ? \
+ HINIC3_AEQ_LO_PHYS_ADDR_REG(pg_num) : \
+ HINIC3_CEQ_LO_PHYS_ADDR_REG(pg_num)))
+
+#define GET_EQ_NUM_PAGES(eq, size) \
+ ((u16)(ALIGN((u32)((eq)->eq_len * (eq)->elem_size), \
+ (size)) / (size)))
+
+#define HINIC3_EQ_MAX_PAGES(eq) \
+ ((eq)->type == HINIC3_AEQ ? HINIC3_AEQ_MAX_PAGES : \
+ HINIC3_CEQ_MAX_PAGES)
+
+#define GET_EQ_NUM_ELEMS(eq, pg_size) ((pg_size) / (u32)(eq)->elem_size)
+
+#define GET_EQ_ELEMENT(eq, idx) \
+ (((u8 *)(eq)->eq_pages[(idx) / (eq)->num_elem_in_pg].align_vaddr) + \
+ (u32)(((idx) & ((eq)->num_elem_in_pg - 1)) * (eq)->elem_size))
+
+#define GET_AEQ_ELEM(eq, idx) \
+ ((struct hinic3_aeq_elem *)GET_EQ_ELEMENT((eq), (idx)))
+
+#define GET_CEQ_ELEM(eq, idx) ((u32 *)GET_EQ_ELEMENT((eq), (idx)))
+
+#define GET_CURR_AEQ_ELEM(eq) GET_AEQ_ELEM((eq), (eq)->cons_idx)
+
+#define GET_CURR_CEQ_ELEM(eq) GET_CEQ_ELEM((eq), (eq)->cons_idx)
+
+#define PAGE_IN_4K(page_size) ((page_size) >> 12)
+#define EQ_SET_HW_PAGE_SIZE_VAL(eq) \
+ ((u32)ilog2(PAGE_IN_4K((eq)->page_size)))
+
+#define ELEMENT_SIZE_IN_32B(eq) (((eq)->elem_size) >> 5)
+#define EQ_SET_HW_ELEM_SIZE_VAL(eq) ((u32)ilog2(ELEMENT_SIZE_IN_32B(eq)))
+
+#define AEQ_DMA_ATTR_DEFAULT 0
+#define CEQ_DMA_ATTR_DEFAULT 0
+
+#define CEQ_LMT_KICK_DEFAULT 0
+
+#define EQ_MSIX_RESEND_TIMER_CLEAR 1
+
+#define EQ_WRAPPED_SHIFT 20
+
+#define EQ_VALID_SHIFT 31
+
+#define CEQE_TYPE_SHIFT 23
+#define CEQE_TYPE_MASK 0x7
+
+#define CEQE_TYPE(type) (((type) >> CEQE_TYPE_SHIFT) & \
+ CEQE_TYPE_MASK)
+
+#define CEQE_DATA_MASK 0x3FFFFFF
+#define CEQE_DATA(data) ((data) & CEQE_DATA_MASK)
+
+#define aeq_to_aeqs(eq) \
+ container_of((eq) - (eq)->q_id, struct hinic3_aeqs, aeq[0])
+
+#define ceq_to_ceqs(eq) \
+ container_of((eq) - (eq)->q_id, struct hinic3_ceqs, ceq[0])
+
+static irqreturn_t ceq_interrupt(int irq, void *data);
+static irqreturn_t aeq_interrupt(int irq, void *data);
+
+static void ceq_tasklet(ulong ceq_data);
+
+/**
+ * hinic3_aeq_register_hw_cb - register aeq callback for specific event
+ * @hwdev: the pointer to hw device
+ * @pri_handle: the pointer to private invoker device
+ * @event: event for the handler
+ * @hwe_cb: callback function
+ **/
+int hinic3_aeq_register_hw_cb(void *hwdev, void *pri_handle, enum hinic3_aeq_type event,
+ hinic3_aeq_hwe_cb hwe_cb)
+{
+ struct hinic3_aeqs *aeqs = NULL;
+
+ if (!hwdev || !hwe_cb || event >= HINIC3_MAX_AEQ_EVENTS)
+ return -EINVAL;
+
+ aeqs = ((struct hinic3_hwdev *)hwdev)->aeqs;
+
+ aeqs->aeq_hwe_cb[event] = hwe_cb;
+ aeqs->aeq_hwe_cb_data[event] = pri_handle;
+
+ set_bit(HINIC3_AEQ_HW_CB_REG, &aeqs->aeq_hw_cb_state[event]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_aeq_register_hw_cb);
+
+/**
+ * hinic3_aeq_unregister_hw_cb - unregister the aeq callback for specific event
+ * @hwdev: the pointer to hw device
+ * @event: event for the handler
+ **/
+void hinic3_aeq_unregister_hw_cb(void *hwdev, enum hinic3_aeq_type event)
+{
+ struct hinic3_aeqs *aeqs = NULL;
+
+ if (!hwdev || event >= HINIC3_MAX_AEQ_EVENTS)
+ return;
+
+ aeqs = ((struct hinic3_hwdev *)hwdev)->aeqs;
+
+ clear_bit(HINIC3_AEQ_HW_CB_REG, &aeqs->aeq_hw_cb_state[event]);
+
+ while (test_bit(HINIC3_AEQ_HW_CB_RUNNING,
+ &aeqs->aeq_hw_cb_state[event]))
+ usleep_range(EQ_USLEEP_LOW_BOUND, EQ_USLEEP_HIG_BOUND);
+
+ aeqs->aeq_hwe_cb[event] = NULL;
+}
+EXPORT_SYMBOL(hinic3_aeq_unregister_hw_cb);
+
+/**
+ * hinic3_aeq_register_swe_cb - register aeq callback for sw event
+ * @hwdev: the pointer to hw device
+ * @pri_handle: the pointer to private invoker device
+ * @event: soft event for the handler
+ * @aeq_swe_cb: callback function
+ **/
+int hinic3_aeq_register_swe_cb(void *hwdev, void *pri_handle, enum hinic3_aeq_sw_type event,
+ hinic3_aeq_swe_cb aeq_swe_cb)
+{
+ struct hinic3_aeqs *aeqs = NULL;
+
+ if (!hwdev || !aeq_swe_cb || event >= HINIC3_MAX_AEQ_SW_EVENTS)
+ return -EINVAL;
+
+ aeqs = ((struct hinic3_hwdev *)hwdev)->aeqs;
+
+ aeqs->aeq_swe_cb[event] = aeq_swe_cb;
+ aeqs->aeq_swe_cb_data[event] = pri_handle;
+
+ set_bit(HINIC3_AEQ_SW_CB_REG, &aeqs->aeq_sw_cb_state[event]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_aeq_register_swe_cb);
+
+/**
+ * hinic3_aeq_unregister_swe_cb - unregister the aeq callback for sw event
+ * @hwdev: the pointer to hw device
+ * @event: soft event for the handler
+ **/
+void hinic3_aeq_unregister_swe_cb(void *hwdev, enum hinic3_aeq_sw_type event)
+{
+ struct hinic3_aeqs *aeqs = NULL;
+
+ if (!hwdev || event >= HINIC3_MAX_AEQ_SW_EVENTS)
+ return;
+
+ aeqs = ((struct hinic3_hwdev *)hwdev)->aeqs;
+
+ clear_bit(HINIC3_AEQ_SW_CB_REG, &aeqs->aeq_sw_cb_state[event]);
+
+ while (test_bit(HINIC3_AEQ_SW_CB_RUNNING,
+ &aeqs->aeq_sw_cb_state[event]))
+ usleep_range(EQ_USLEEP_LOW_BOUND, EQ_USLEEP_HIG_BOUND);
+
+ aeqs->aeq_swe_cb[event] = NULL;
+}
+EXPORT_SYMBOL(hinic3_aeq_unregister_swe_cb);
+
+/**
+ * hinic3_ceq_register_cb - register ceq callback for specific event
+ * @hwdev: the pointer to hw device
+ * @pri_handle: the pointer to private invoker device
+ * @event: event for the handler
+ * @callback: callback function
+ **/
+int hinic3_ceq_register_cb(void *hwdev, void *pri_handle, enum hinic3_ceq_event event,
+ hinic3_ceq_event_cb callback)
+{
+ struct hinic3_ceqs *ceqs = NULL;
+
+ if (!hwdev || event >= HINIC3_MAX_CEQ_EVENTS)
+ return -EINVAL;
+
+ ceqs = ((struct hinic3_hwdev *)hwdev)->ceqs;
+
+ ceqs->ceq_cb[event] = callback;
+ ceqs->ceq_cb_data[event] = pri_handle;
+
+ set_bit(HINIC3_CEQ_CB_REG, &ceqs->ceq_cb_state[event]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_ceq_register_cb);
+
+/**
+ * hinic3_ceq_unregister_cb - unregister ceq callback for specific event
+ * @hwdev: the pointer to hw device
+ * @event: event for the handler
+ **/
+void hinic3_ceq_unregister_cb(void *hwdev, enum hinic3_ceq_event event)
+{
+ struct hinic3_ceqs *ceqs = NULL;
+
+ if (!hwdev || event >= HINIC3_MAX_CEQ_EVENTS)
+ return;
+
+ ceqs = ((struct hinic3_hwdev *)hwdev)->ceqs;
+
+ clear_bit(HINIC3_CEQ_CB_REG, &ceqs->ceq_cb_state[event]);
+
+ while (test_bit(HINIC3_CEQ_CB_RUNNING, &ceqs->ceq_cb_state[event]))
+ usleep_range(EQ_USLEEP_LOW_BOUND, EQ_USLEEP_HIG_BOUND);
+
+ ceqs->ceq_cb[event] = NULL;
+}
+EXPORT_SYMBOL(hinic3_ceq_unregister_cb);
+
+/**
+ * set_eq_cons_idx - write the cons idx to the hw
+ * @eq: The event queue to update the cons idx for
+ * @cons idx: consumer index value
+ **/
+static void set_eq_cons_idx(struct hinic3_eq *eq, u32 arm_state)
+{
+ u32 eq_wrap_ci, val;
+ u32 addr = EQ_CI_SIMPLE_INDIR_REG_ADDR(eq);
+
+ eq_wrap_ci = EQ_CONS_IDX(eq);
+
+ /* in poll mode, only eq0 uses the int_arm mode */
+ if (eq->q_id != 0 && eq->hwdev->poll)
+ val = EQ_CI_SIMPLE_INDIR_SET(HINIC3_EQ_NOT_ARMED, ARMED);
+ else
+ val = EQ_CI_SIMPLE_INDIR_SET(arm_state, ARMED);
+ if (eq->type == HINIC3_AEQ) {
+ val = val |
+ EQ_CI_SIMPLE_INDIR_SET(eq_wrap_ci, CI) |
+ EQ_CI_SIMPLE_INDIR_SET(eq->q_id, AEQ_IDX);
+ } else {
+ val = val |
+ EQ_CI_SIMPLE_INDIR_SET(eq_wrap_ci, CI) |
+ EQ_CI_SIMPLE_INDIR_SET(eq->q_id, CEQ_IDX);
+ }
+
+ hinic3_hwif_write_reg(eq->hwdev->hwif, addr, val);
+}
+
+/**
+ * ceq_event_handler - handle for the ceq events
+ * @ceqs: ceqs part of the chip
+ * @ceqe: ceq element of the event
+ **/
+static void ceq_event_handler(struct hinic3_ceqs *ceqs, u32 ceqe)
+{
+ struct hinic3_hwdev *hwdev = ceqs->hwdev;
+ enum hinic3_ceq_event event = CEQE_TYPE(ceqe);
+ u32 ceqe_data = CEQE_DATA(ceqe);
+
+ if (event >= HINIC3_MAX_CEQ_EVENTS) {
+ sdk_err(hwdev->dev_hdl, "Ceq unknown event:%d, ceqe date: 0x%x\n",
+ event, ceqe_data);
+ return;
+ }
+
+ set_bit(HINIC3_CEQ_CB_RUNNING, &ceqs->ceq_cb_state[event]);
+
+ if (ceqs->ceq_cb[event] &&
+ test_bit(HINIC3_CEQ_CB_REG, &ceqs->ceq_cb_state[event]))
+ ceqs->ceq_cb[event](ceqs->ceq_cb_data[event], ceqe_data);
+
+ clear_bit(HINIC3_CEQ_CB_RUNNING, &ceqs->ceq_cb_state[event]);
+}
+
+static void aeq_elem_handler(struct hinic3_eq *eq, u32 aeqe_desc)
+{
+ struct hinic3_aeqs *aeqs = aeq_to_aeqs(eq);
+ struct hinic3_aeq_elem *aeqe_pos;
+ enum hinic3_aeq_type event;
+ enum hinic3_aeq_sw_type sw_type;
+ u32 sw_event;
+ u8 data[HINIC3_AEQE_DATA_SIZE], size;
+
+ aeqe_pos = GET_CURR_AEQ_ELEM(eq);
+
+ eq->hwdev->cur_recv_aeq_cnt++;
+
+ event = EQ_ELEM_DESC_GET(aeqe_desc, TYPE);
+ if (EQ_ELEM_DESC_GET(aeqe_desc, SRC)) {
+ sw_event = event;
+ sw_type = sw_event >= HINIC3_NIC_FATAL_ERROR_MAX ?
+ HINIC3_STATEFUL_EVENT : HINIC3_STATELESS_EVENT;
+ /* SW event uses only the first 8B */
+ memcpy(data, aeqe_pos->aeqe_data, HINIC3_AEQE_DATA_SIZE);
+ hinic3_be32_to_cpu(data, HINIC3_AEQE_DATA_SIZE);
+ set_bit(HINIC3_AEQ_SW_CB_RUNNING,
+ &aeqs->aeq_sw_cb_state[sw_type]);
+ if (aeqs->aeq_swe_cb[sw_type] &&
+ test_bit(HINIC3_AEQ_SW_CB_REG,
+ &aeqs->aeq_sw_cb_state[sw_type]))
+ aeqs->aeq_swe_cb[sw_type](aeqs->aeq_swe_cb_data[sw_type], event, data);
+
+ clear_bit(HINIC3_AEQ_SW_CB_RUNNING,
+ &aeqs->aeq_sw_cb_state[sw_type]);
+ return;
+ }
+
+ if (event < HINIC3_MAX_AEQ_EVENTS) {
+ memcpy(data, aeqe_pos->aeqe_data, HINIC3_AEQE_DATA_SIZE);
+ hinic3_be32_to_cpu(data, HINIC3_AEQE_DATA_SIZE);
+
+ size = EQ_ELEM_DESC_GET(aeqe_desc, SIZE);
+ set_bit(HINIC3_AEQ_HW_CB_RUNNING,
+ &aeqs->aeq_hw_cb_state[event]);
+ if (aeqs->aeq_hwe_cb[event] &&
+ test_bit(HINIC3_AEQ_HW_CB_REG,
+ &aeqs->aeq_hw_cb_state[event]))
+ aeqs->aeq_hwe_cb[event](aeqs->aeq_hwe_cb_data[event], data, size);
+ clear_bit(HINIC3_AEQ_HW_CB_RUNNING,
+ &aeqs->aeq_hw_cb_state[event]);
+ return;
+ }
+ sdk_warn(eq->hwdev->dev_hdl, "Unknown aeq hw event %d\n", event);
+}
+
+/**
+ * aeq_irq_handler - handler for the aeq event
+ * @eq: the async event queue of the event
+ **/
+static bool aeq_irq_handler(struct hinic3_eq *eq)
+{
+ struct hinic3_aeq_elem *aeqe_pos = NULL;
+ u32 aeqe_desc;
+ u32 i, eqe_cnt = 0;
+
+ for (i = 0; i < HINIC3_TASK_PROCESS_EQE_LIMIT; i++) {
+ aeqe_pos = GET_CURR_AEQ_ELEM(eq);
+
+ /* Data in HW is in big-endian format */
+ aeqe_desc = be32_to_cpu(aeqe_pos->desc);
+
+ /* HW updates wrapped bit, when it adds eq element event */
+ if (EQ_ELEM_DESC_GET(aeqe_desc, WRAPPED) == eq->wrapped)
+ return false;
+
+ dma_rmb();
+
+ aeq_elem_handler(eq, aeqe_desc);
+
+ eq->cons_idx++;
+
+ if (eq->cons_idx == eq->eq_len) {
+ eq->cons_idx = 0;
+ eq->wrapped = !eq->wrapped;
+ }
+
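+ /* Update the consumer index every HINIC3_EQ_UPDATE_CI_STEP events to limit CSR writes */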
+ if (++eqe_cnt >= HINIC3_EQ_UPDATE_CI_STEP) {
+ eqe_cnt = 0;
+ set_eq_cons_idx(eq, HINIC3_EQ_NOT_ARMED);
+ }
+ }
+
+ return true;
+}
+
+/**
+ * ceq_irq_handler - handler for the ceq event
+ * @eq: the completion event queue of the event
+ **/
+static bool ceq_irq_handler(struct hinic3_eq *eq)
+{
+ struct hinic3_ceqs *ceqs = ceq_to_ceqs(eq);
+ u32 ceqe, eqe_cnt = 0;
+ u32 i;
+
+ for (i = 0; i < g_num_ceqe_in_tasklet; i++) {
+ ceqe = *(GET_CURR_CEQ_ELEM(eq));
+ ceqe = be32_to_cpu(ceqe);
+
+ /* HW updates wrapped bit, when it adds eq element event */
+ if (EQ_ELEM_DESC_GET(ceqe, WRAPPED) == eq->wrapped)
+ return false;
+
+ ceq_event_handler(ceqs, ceqe);
+
+ eq->cons_idx++;
+
+ if (eq->cons_idx == eq->eq_len) {
+ eq->cons_idx = 0;
+ eq->wrapped = !eq->wrapped;
+ }
+
+ if (++eqe_cnt >= HINIC3_EQ_UPDATE_CI_STEP) {
+ eqe_cnt = 0;
+ set_eq_cons_idx(eq, HINIC3_EQ_NOT_ARMED);
+ }
+ }
+
+ return true;
+}
+
+static void reschedule_eq_handler(struct hinic3_eq *eq)
+{
+ if (eq->type == HINIC3_AEQ) {
+ struct hinic3_aeqs *aeqs = aeq_to_aeqs(eq);
+
+ queue_work_on(hisdk3_get_work_cpu_affinity(eq->hwdev, WORK_TYPE_AEQ),
+ aeqs->workq, &eq->aeq_work);
+ } else {
+ tasklet_schedule(&eq->ceq_tasklet);
+ }
+}
+
+/**
+ * eq_irq_handler - handler for the eq event
+ * @data: the event queue of the event
+ **/
+static bool eq_irq_handler(void *data)
+{
+ struct hinic3_eq *eq = (struct hinic3_eq *)data;
+ bool uncompleted = false;
+
+ if (eq->type == HINIC3_AEQ)
+ uncompleted = aeq_irq_handler(eq);
+ else
+ uncompleted = ceq_irq_handler(eq);
+
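+ /* Re-arm the interrupt only if the queue was fully drained */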
+ set_eq_cons_idx(eq, uncompleted ? HINIC3_EQ_NOT_ARMED :
+ HINIC3_EQ_ARMED);
+
+ return uncompleted;
+}
+
+/**
+ * eq_irq_work - eq work for the event
+ * @work: the work that is associated with the eq
+ **/
+static void eq_irq_work(struct work_struct *work)
+{
+ struct hinic3_eq *eq = container_of(work, struct hinic3_eq, aeq_work);
+
+ if (eq_irq_handler(eq))
+ reschedule_eq_handler(eq);
+}
+
+/**
+ * aeq_interrupt - aeq interrupt handler
+ * @irq: irq number
+ * @data: the async event queue of the event
+ **/
+static irqreturn_t aeq_interrupt(int irq, void *data)
+{
+ struct hinic3_eq *aeq = (struct hinic3_eq *)data;
+ struct hinic3_hwdev *hwdev = aeq->hwdev;
+ struct hinic3_aeqs *aeqs = aeq_to_aeqs(aeq);
+ struct workqueue_struct *workq = aeqs->workq;
+
+ /* clear resend timer cnt register */
+ hinic3_misx_intr_clear_resend_bit(hwdev, aeq->eq_irq.msix_entry_idx,
+ EQ_MSIX_RESEND_TIMER_CLEAR);
+
+ queue_work_on(hisdk3_get_work_cpu_affinity(hwdev, WORK_TYPE_AEQ),
+ workq, &aeq->aeq_work);
+ return IRQ_HANDLED;
+}
+
+/**
+ * ceq_tasklet - ceq tasklet for the event
+ * @ceq_data: data that will be used by the tasklet(ceq)
+ **/
+static void ceq_tasklet(ulong ceq_data)
+{
+ struct hinic3_eq *eq = (struct hinic3_eq *)ceq_data;
+
+ eq->soft_intr_jif = jiffies;
+
+ if (eq_irq_handler(eq))
+ reschedule_eq_handler(eq);
+}
+
+/**
+ * ceq_interrupt - ceq interrupt handler
+ * @irq: irq number
+ * @data: the completion event queue of the event
+ **/
+static irqreturn_t ceq_interrupt(int irq, void *data)
+{
+ struct hinic3_eq *ceq = (struct hinic3_eq *)data;
+
+ ceq->hard_intr_jif = jiffies;
+
+ /* clear resend timer counters */
+ hinic3_misx_intr_clear_resend_bit(ceq->hwdev,
+ ceq->eq_irq.msix_entry_idx,
+ EQ_MSIX_RESEND_TIMER_CLEAR);
+
+ tasklet_schedule(&ceq->ceq_tasklet);
+
+ return IRQ_HANDLED;
+}
+
+/**
+ * set_eq_ctrls - setting eq's ctrls registers
+ * @eq: the event queue for setting
+ **/
+static int set_eq_ctrls(struct hinic3_eq *eq)
+{
+ enum hinic3_eq_type type = eq->type;
+ struct hinic3_hwif *hwif = eq->hwdev->hwif;
+ struct irq_info *eq_irq = &eq->eq_irq;
+ u32 addr, val, ctrl0, ctrl1, page_size_val, elem_size;
+ u32 pci_intf_idx = HINIC3_PCI_INTF_IDX(hwif);
+ int err;
+
+ if (type == HINIC3_AEQ) {
+ /* set ctrl0 */
+ addr = HINIC3_CSR_AEQ_CTRL_0_ADDR;
+
+ val = hinic3_hwif_read_reg(hwif, addr);
+
+ val = AEQ_CTRL_0_CLEAR(val, INTR_IDX) &
+ AEQ_CTRL_0_CLEAR(val, DMA_ATTR) &
+ AEQ_CTRL_0_CLEAR(val, PCI_INTF_IDX) &
+ AEQ_CTRL_0_CLEAR(val, INTR_MODE);
+
+ ctrl0 = AEQ_CTRL_0_SET(eq_irq->msix_entry_idx, INTR_IDX) |
+ AEQ_CTRL_0_SET(AEQ_DMA_ATTR_DEFAULT, DMA_ATTR) |
+ AEQ_CTRL_0_SET(pci_intf_idx, PCI_INTF_IDX) |
+ AEQ_CTRL_0_SET(HINIC3_INTR_MODE_ARMED, INTR_MODE);
+
+ val |= ctrl0;
+
+ hinic3_hwif_write_reg(hwif, addr, val);
+
+ /* set ctrl1 */
+ addr = HINIC3_CSR_AEQ_CTRL_1_ADDR;
+
+ page_size_val = EQ_SET_HW_PAGE_SIZE_VAL(eq);
+ elem_size = EQ_SET_HW_ELEM_SIZE_VAL(eq);
+
+ ctrl1 = AEQ_CTRL_1_SET(eq->eq_len, LEN) |
+ AEQ_CTRL_1_SET(elem_size, ELEM_SIZE) |
+ AEQ_CTRL_1_SET(page_size_val, PAGE_SIZE);
+
+ hinic3_hwif_write_reg(hwif, addr, ctrl1);
+ } else {
+ page_size_val = EQ_SET_HW_PAGE_SIZE_VAL(eq);
+ ctrl0 = CEQ_CTRL_0_SET(eq_irq->msix_entry_idx, INTR_IDX) |
+ CEQ_CTRL_0_SET(CEQ_DMA_ATTR_DEFAULT, DMA_ATTR) |
+ CEQ_CTRL_0_SET(CEQ_LMT_KICK_DEFAULT, LIMIT_KICK) |
+ CEQ_CTRL_0_SET(pci_intf_idx, PCI_INTF_IDX) |
+ CEQ_CTRL_0_SET(page_size_val, PAGE_SIZE) |
+ CEQ_CTRL_0_SET(HINIC3_INTR_MODE_ARMED, INTR_MODE);
+
+ ctrl1 = CEQ_CTRL_1_SET(eq->eq_len, LEN);
+
+ /* set ceq ctrl reg through mgmt cpu */
+ err = hinic3_set_ceq_ctrl_reg(eq->hwdev, eq->q_id, ctrl0,
+ ctrl1);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+/**
+ * ceq_elements_init - Initialize all the elements in the ceq
+ * @eq: the event queue
+ * @init_val: value to init with it the elements
+ **/
+static void ceq_elements_init(struct hinic3_eq *eq, u32 init_val)
+{
+ u32 *ceqe = NULL;
+ u32 i;
+
+ for (i = 0; i < eq->eq_len; i++) {
+ ceqe = GET_CEQ_ELEM(eq, i);
+ *(ceqe) = cpu_to_be32(init_val);
+ }
+
+ wmb(); /* Write the init values */
+}
+
+/**
+ * aeq_elements_init - initialize all the elements in the aeq
+ * @eq: the event queue
+ * @init_val: value to init with it the elements
+ **/
+static void aeq_elements_init(struct hinic3_eq *eq, u32 init_val)
+{
+ struct hinic3_aeq_elem *aeqe = NULL;
+ u32 i;
+
+ for (i = 0; i < eq->eq_len; i++) {
+ aeqe = GET_AEQ_ELEM(eq, i);
+ aeqe->desc = cpu_to_be32(init_val);
+ }
+
+ wmb(); /* Write the init values */
+}
+
+static void eq_elements_init(struct hinic3_eq *eq, u32 init_val)
+{
+ if (eq->type == HINIC3_AEQ)
+ aeq_elements_init(eq, init_val);
+ else
+ ceq_elements_init(eq, init_val);
+}
+
+/**
+ * alloc_eq_pages - allocate the pages for the queue
+ * @eq: the event queue
+ **/
+static int alloc_eq_pages(struct hinic3_eq *eq)
+{
+ struct hinic3_hwif *hwif = eq->hwdev->hwif;
+ struct hinic3_dma_addr_align *eq_page = NULL;
+ u32 reg, init_val;
+ u16 pg_idx, i;
+ int err;
+
+ eq->eq_pages = kcalloc(eq->num_pages, sizeof(*eq->eq_pages),
+ GFP_KERNEL);
+ if (!eq->eq_pages) {
+ sdk_err(eq->hwdev->dev_hdl, "Failed to alloc eq pages description\n");
+ return -ENOMEM;
+ }
+
+ for (pg_idx = 0; pg_idx < eq->num_pages; pg_idx++) {
+ eq_page = &eq->eq_pages[pg_idx];
+ err = hinic3_dma_zalloc_coherent_align(eq->hwdev->dev_hdl,
+ eq->page_size,
+ HINIC3_MIN_EQ_PAGE_SIZE,
+ GFP_KERNEL, eq_page);
+ if (err) {
+ sdk_err(eq->hwdev->dev_hdl, "Failed to alloc eq page, page index: %hu\n",
+ pg_idx);
+ goto dma_alloc_err;
+ }
+
+ reg = HINIC3_EQ_HI_PHYS_ADDR_REG(eq->type, pg_idx);
+ hinic3_hwif_write_reg(hwif, reg,
+ upper_32_bits(eq_page->align_paddr));
+
+ reg = HINIC3_EQ_LO_PHYS_ADDR_REG(eq->type, pg_idx);
+ hinic3_hwif_write_reg(hwif, reg,
+ lower_32_bits(eq_page->align_paddr));
+ }
+
+ eq->num_elem_in_pg = GET_EQ_NUM_ELEMS(eq, eq->page_size);
+ if (eq->num_elem_in_pg & (eq->num_elem_in_pg - 1)) {
+ sdk_err(eq->hwdev->dev_hdl, "Number element in eq page != power of 2\n");
+ err = -EINVAL;
+ goto dma_alloc_err;
+ }
+ init_val = EQ_WRAPPED(eq);
+
+ eq_elements_init(eq, init_val);
+
+ return 0;
+
+dma_alloc_err:
+ for (i = 0; i < pg_idx; i++)
+ hinic3_dma_free_coherent_align(eq->hwdev->dev_hdl,
+ &eq->eq_pages[i]);
+
+ kfree(eq->eq_pages);
+
+ return err;
+}
+
+/**
+ * free_eq_pages - free the pages of the queue
+ * @eq: the event queue
+ **/
+static void free_eq_pages(struct hinic3_eq *eq)
+{
+ u16 pg_idx;
+
+ for (pg_idx = 0; pg_idx < eq->num_pages; pg_idx++)
+ hinic3_dma_free_coherent_align(eq->hwdev->dev_hdl,
+ &eq->eq_pages[pg_idx]);
+
+ kfree(eq->eq_pages);
+}
+
+static inline u32 get_page_size(const struct hinic3_eq *eq)
+{
+ u32 total_size;
+ u32 count;
+
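+ /* Pick the smallest power-of-two page size that fits the queue into at most HINIC3_EQ_MAX_PAGES pages */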
+ total_size = ALIGN((eq->eq_len * eq->elem_size),
+ HINIC3_MIN_EQ_PAGE_SIZE);
+ if (total_size <= (HINIC3_EQ_MAX_PAGES(eq) * HINIC3_MIN_EQ_PAGE_SIZE))
+ return HINIC3_MIN_EQ_PAGE_SIZE;
+
+ count = (u32)(ALIGN((total_size / HINIC3_EQ_MAX_PAGES(eq)),
+ HINIC3_MIN_EQ_PAGE_SIZE) / HINIC3_MIN_EQ_PAGE_SIZE);
+
+ /* round up to nearest power of two */
+ count = 1U << (u8)fls((int)(count - 1));
+
+ return ((u32)HINIC3_MIN_EQ_PAGE_SIZE) * count;
+}
+
+static int request_eq_irq(struct hinic3_eq *eq, struct irq_info *entry)
+{
+ int err = 0;
+
+ if (eq->type == HINIC3_AEQ)
+ INIT_WORK(&eq->aeq_work, eq_irq_work);
+ else
+ tasklet_init(&eq->ceq_tasklet, ceq_tasklet, (ulong)eq);
+
+ if (eq->type == HINIC3_AEQ) {
+ snprintf(eq->irq_name, sizeof(eq->irq_name),
+ "hinic3_aeq%u@pci:%s", eq->q_id,
+ pci_name(eq->hwdev->pcidev_hdl));
+
+ err = request_irq(entry->irq_id, aeq_interrupt, 0UL,
+ eq->irq_name, eq);
+ } else {
+ snprintf(eq->irq_name, sizeof(eq->irq_name),
+ "hinic3_ceq%u@pci:%s", eq->q_id,
+ pci_name(eq->hwdev->pcidev_hdl));
+ err = request_irq(entry->irq_id, ceq_interrupt, 0UL,
+ eq->irq_name, eq);
+ }
+
+ return err;
+}
+
+static void reset_eq(struct hinic3_eq *eq)
+{
+ /* clear eq_len to force eqe drop in hardware */
+ if (eq->type == HINIC3_AEQ)
+ hinic3_hwif_write_reg(eq->hwdev->hwif,
+ HINIC3_CSR_AEQ_CTRL_1_ADDR, 0);
+ else
+ hinic3_set_ceq_ctrl_reg(eq->hwdev, eq->q_id, 0, 0);
+
+ wmb(); /* clear eq_len before clear prod idx */
+
+ hinic3_hwif_write_reg(eq->hwdev->hwif, EQ_PROD_IDX_REG_ADDR(eq), 0);
+}
+
+/**
+ * init_eq - initialize eq
+ * @eq: the event queue
+ * @hwdev: the pointer to hw device
+ * @q_id: Queue id number
+ * @q_len: the number of EQ elements
+ * @type: the type of the event queue, ceq or aeq
+ * @entry: msix entry associated with the event queue
+ * Return: 0 - Success, Negative - failure
+ **/
+static int init_eq(struct hinic3_eq *eq, struct hinic3_hwdev *hwdev, u16 q_id,
+ u32 q_len, enum hinic3_eq_type type, struct irq_info *entry)
+{
+ int err = 0;
+
+ eq->hwdev = hwdev;
+ eq->q_id = q_id;
+ eq->type = type;
+ eq->eq_len = q_len;
+
+ /* Indirect access should set q_id first */
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_EQ_INDIR_IDX_ADDR(eq->type),
+ eq->q_id);
+ wmb(); /* write index before config */
+
+ reset_eq(eq);
+
+ eq->cons_idx = 0;
+ eq->wrapped = 0;
+
+ eq->elem_size = (type == HINIC3_AEQ) ? HINIC3_AEQE_SIZE : HINIC3_CEQE_SIZE;
+
+ eq->page_size = get_page_size(eq);
+ eq->orig_page_size = eq->page_size;
+ eq->num_pages = GET_EQ_NUM_PAGES(eq, eq->page_size);
+
+ if (eq->num_pages > HINIC3_EQ_MAX_PAGES(eq)) {
+ sdk_err(hwdev->dev_hdl, "Number pages: %u too many pages for eq\n",
+ eq->num_pages);
+ return -EINVAL;
+ }
+
+ err = alloc_eq_pages(eq);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to allocate pages for eq\n");
+ return err;
+ }
+
+ eq->eq_irq.msix_entry_idx = entry->msix_entry_idx;
+ eq->eq_irq.irq_id = entry->irq_id;
+
+ err = set_eq_ctrls(eq);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to set ctrls for eq\n");
+ goto init_eq_ctrls_err;
+ }
+
+ set_eq_cons_idx(eq, HINIC3_EQ_ARMED);
+
+ err = request_eq_irq(eq, entry);
+ if (err) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to request irq for the eq, err: %d\n", err);
+ goto req_irq_err;
+ }
+
+ hinic3_set_msix_state(hwdev, entry->msix_entry_idx,
+ HINIC3_MSIX_DISABLE);
+
+ return 0;
+
+init_eq_ctrls_err:
+req_irq_err:
+ free_eq_pages(eq);
+ return err;
+}
+
+/**
+ * remove_eq - remove eq
+ * @eq: the event queue
+ **/
+static void remove_eq(struct hinic3_eq *eq)
+{
+ struct irq_info *entry = &eq->eq_irq;
+
+ hinic3_set_msix_state(eq->hwdev, entry->msix_entry_idx,
+ HINIC3_MSIX_DISABLE);
+ synchronize_irq(entry->irq_id);
+
+ free_irq(entry->irq_id, eq);
+
+ /* Indirect access should set q_id first */
+ hinic3_hwif_write_reg(eq->hwdev->hwif,
+ HINIC3_EQ_INDIR_IDX_ADDR(eq->type),
+ eq->q_id);
+
+ wmb(); /* write index before config */
+
+ if (eq->type == HINIC3_AEQ) {
+ cancel_work_sync(&eq->aeq_work);
+
+ /* clear eq_len to avoid hw access host memory */
+ hinic3_hwif_write_reg(eq->hwdev->hwif,
+ HINIC3_CSR_AEQ_CTRL_1_ADDR, 0);
+ } else {
+ tasklet_kill(&eq->ceq_tasklet);
+
+ hinic3_set_ceq_ctrl_reg(eq->hwdev, eq->q_id, 0, 0);
+ }
+
+ /* update cons_idx to avoid invalid interrupt */
+ eq->cons_idx = hinic3_hwif_read_reg(eq->hwdev->hwif,
+ EQ_PROD_IDX_REG_ADDR(eq));
+ set_eq_cons_idx(eq, HINIC3_EQ_NOT_ARMED);
+
+ free_eq_pages(eq);
+}
+
+/**
+ * hinic3_aeqs_init - init all the aeqs
+ * @hwdev: the pointer to hw device
+ * @num_aeqs: number of AEQs
+ * @msix_entries: msix entries associated with the event queues
+ * Return: 0 - Success, Negative - failure
+ **/
+int hinic3_aeqs_init(struct hinic3_hwdev *hwdev, u16 num_aeqs,
+ struct irq_info *msix_entries)
+{
+ struct hinic3_aeqs *aeqs = NULL;
+ int err;
+ u16 i, q_id;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ aeqs = kzalloc(sizeof(*aeqs), GFP_KERNEL);
+ if (!aeqs)
+ return -ENOMEM;
+
+ hwdev->aeqs = aeqs;
+ aeqs->hwdev = hwdev;
+ aeqs->num_aeqs = num_aeqs;
+ aeqs->workq = alloc_workqueue(HINIC3_EQS_WQ_NAME, WQ_MEM_RECLAIM,
+ HINIC3_MAX_AEQS);
+ if (!aeqs->workq) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize aeq workqueue\n");
+ err = -ENOMEM;
+ goto create_work_err;
+ }
+
+ if (g_aeq_len < HINIC3_MIN_AEQ_LEN || g_aeq_len > HINIC3_MAX_AEQ_LEN) {
+ sdk_warn(hwdev->dev_hdl, "Module Parameter g_aeq_len value %u out of range, resetting to %d\n",
+ g_aeq_len, HINIC3_DEFAULT_AEQ_LEN);
+ g_aeq_len = HINIC3_DEFAULT_AEQ_LEN;
+ }
+
+ for (q_id = 0; q_id < num_aeqs; q_id++) {
+ err = init_eq(&aeqs->aeq[q_id], hwdev, q_id, g_aeq_len,
+ HINIC3_AEQ, &msix_entries[q_id]);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init aeq %u\n",
+ q_id);
+ goto init_aeq_err;
+ }
+ }
+ for (q_id = 0; q_id < num_aeqs; q_id++)
+ hinic3_set_msix_state(hwdev, msix_entries[q_id].msix_entry_idx,
+ HINIC3_MSIX_ENABLE);
+
+ return 0;
+
+init_aeq_err:
+ for (i = 0; i < q_id; i++)
+ remove_eq(&aeqs->aeq[i]);
+
+ destroy_workqueue(aeqs->workq);
+
+create_work_err:
+ kfree(aeqs);
+
+ return err;
+}
+
+/**
+ * hinic3_aeqs_free - free all the aeqs
+ * @hwdev: the pointer to hw device
+ **/
+void hinic3_aeqs_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_aeqs *aeqs = hwdev->aeqs;
+ enum hinic3_aeq_type aeq_event = HINIC3_HW_INTER_INT;
+ enum hinic3_aeq_sw_type sw_aeq_event = HINIC3_STATELESS_EVENT;
+ u16 q_id;
+
+ for (q_id = 0; q_id < aeqs->num_aeqs; q_id++)
+ remove_eq(&aeqs->aeq[q_id]);
+
+ for (; sw_aeq_event < HINIC3_MAX_AEQ_SW_EVENTS; sw_aeq_event++)
+ hinic3_aeq_unregister_swe_cb(hwdev, sw_aeq_event);
+
+ for (; aeq_event < HINIC3_MAX_AEQ_EVENTS; aeq_event++)
+ hinic3_aeq_unregister_hw_cb(hwdev, aeq_event);
+
+ destroy_workqueue(aeqs->workq);
+
+ kfree(aeqs);
+}
+
+/**
+ * hinic3_ceqs_init - init all the ceqs
+ * @hwdev: the pointer to hw device
+ * @num_ceqs: number of CEQs
+ * @msix_entries: msix entries associated with the event queues
+ * Return: 0 - Success, Negative - failure
+ **/
+int hinic3_ceqs_init(struct hinic3_hwdev *hwdev, u16 num_ceqs,
+ struct irq_info *msix_entries)
+{
+ struct hinic3_ceqs *ceqs;
+ int err;
+ u16 i, q_id;
+
+ ceqs = kzalloc(sizeof(*ceqs), GFP_KERNEL);
+ if (!ceqs)
+ return -ENOMEM;
+
+ hwdev->ceqs = ceqs;
+
+ ceqs->hwdev = hwdev;
+ ceqs->num_ceqs = num_ceqs;
+
+ if (g_ceq_len < HINIC3_MIN_CEQ_LEN || g_ceq_len > HINIC3_MAX_CEQ_LEN) {
+ sdk_warn(hwdev->dev_hdl, "Module Parameter g_ceq_len value %u out of range, resetting to %d\n",
+ g_ceq_len, HINIC3_DEFAULT_CEQ_LEN);
+ g_ceq_len = HINIC3_DEFAULT_CEQ_LEN;
+ }
+
+ if (!g_num_ceqe_in_tasklet) {
+ sdk_warn(hwdev->dev_hdl, "Module Parameter g_num_ceqe_in_tasklet can not be zero, resetting to %d\n",
+ HINIC3_TASK_PROCESS_EQE_LIMIT);
+ g_num_ceqe_in_tasklet = HINIC3_TASK_PROCESS_EQE_LIMIT;
+ }
+
+ for (q_id = 0; q_id < num_ceqs; q_id++) {
+ err = init_eq(&ceqs->ceq[q_id], hwdev, q_id, g_ceq_len,
+ HINIC3_CEQ, &msix_entries[q_id]);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init ceq %u\n",
+ q_id);
+ goto init_ceq_err;
+ }
+ }
+ for (q_id = 0; q_id < num_ceqs; q_id++)
+ hinic3_set_msix_state(hwdev, msix_entries[q_id].msix_entry_idx,
+ HINIC3_MSIX_ENABLE);
+
+ for (i = 0; i < HINIC3_MAX_CEQ_EVENTS; i++)
+ ceqs->ceq_cb_state[i] = 0;
+
+ return 0;
+
+init_ceq_err:
+ for (i = 0; i < q_id; i++)
+ remove_eq(&ceqs->ceq[i]);
+
+ kfree(ceqs);
+
+ return err;
+}
+
+/**
+ * hinic3_ceqs_free - free all the ceqs
+ * @hwdev: the pointer to hw device
+ **/
+void hinic3_ceqs_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_ceqs *ceqs = hwdev->ceqs;
+ enum hinic3_ceq_event ceq_event = HINIC3_CMDQ;
+ u16 q_id;
+
+ for (q_id = 0; q_id < ceqs->num_ceqs; q_id++)
+ remove_eq(&ceqs->ceq[q_id]);
+
+ for (; ceq_event < HINIC3_MAX_CEQ_EVENTS; ceq_event++)
+ hinic3_ceq_unregister_cb(hwdev, ceq_event);
+
+ kfree(ceqs);
+}
+
+void hinic3_get_ceq_irqs(struct hinic3_hwdev *hwdev, struct irq_info *irqs,
+ u16 *num_irqs)
+{
+ struct hinic3_ceqs *ceqs = hwdev->ceqs;
+ u16 q_id;
+
+ for (q_id = 0; q_id < ceqs->num_ceqs; q_id++) {
+ irqs[q_id].irq_id = ceqs->ceq[q_id].eq_irq.irq_id;
+ irqs[q_id].msix_entry_idx =
+ ceqs->ceq[q_id].eq_irq.msix_entry_idx;
+ }
+
+ *num_irqs = ceqs->num_ceqs;
+}
+
+void hinic3_get_aeq_irqs(struct hinic3_hwdev *hwdev, struct irq_info *irqs,
+ u16 *num_irqs)
+{
+ struct hinic3_aeqs *aeqs = hwdev->aeqs;
+ u16 q_id;
+
+ for (q_id = 0; q_id < aeqs->num_aeqs; q_id++) {
+ irqs[q_id].irq_id = aeqs->aeq[q_id].eq_irq.irq_id;
+ irqs[q_id].msix_entry_idx =
+ aeqs->aeq[q_id].eq_irq.msix_entry_idx;
+ }
+
+ *num_irqs = aeqs->num_aeqs;
+}
+
+void hinic3_dump_aeq_info(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_aeq_elem *aeqe_pos = NULL;
+ struct hinic3_eq *eq = NULL;
+ u32 addr, ci, pi, ctrl0, idx;
+ int q_id;
+
+ for (q_id = 0; q_id < hwdev->aeqs->num_aeqs; q_id++) {
+ eq = &hwdev->aeqs->aeq[q_id];
+ /* Indirect access should set q_id first */
+ hinic3_hwif_write_reg(eq->hwdev->hwif, HINIC3_EQ_INDIR_IDX_ADDR(eq->type),
+ eq->q_id);
+ wmb(); /* write index before config */
+
+ addr = HINIC3_CSR_AEQ_CTRL_0_ADDR;
+
+ ctrl0 = hinic3_hwif_read_reg(hwdev->hwif, addr);
+
+ idx = hinic3_hwif_read_reg(hwdev->hwif, HINIC3_EQ_INDIR_IDX_ADDR(eq->type));
+
+ addr = EQ_CONS_IDX_REG_ADDR(eq);
+ ci = hinic3_hwif_read_reg(hwdev->hwif, addr);
+ addr = EQ_PROD_IDX_REG_ADDR(eq);
+ pi = hinic3_hwif_read_reg(hwdev->hwif, addr);
+ aeqe_pos = GET_CURR_AEQ_ELEM(eq);
+ sdk_err(hwdev->dev_hdl,
+ "Aeq id: %d, idx: %u, ctrl0: 0x%08x, ci: 0x%08x, pi: 0x%x, work_state: 0x%x, wrap: %u, desc: 0x%x swci:0x%x\n",
+ q_id, idx, ctrl0, ci, pi, work_busy(&eq->aeq_work),
+ eq->wrapped, be32_to_cpu(aeqe_pos->desc), eq->cons_idx);
+ }
+
+ hinic3_show_chip_err_info(hwdev);
+}
+
+void hinic3_dump_ceq_info(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_eq *eq = NULL;
+ u32 addr, ci, pi;
+ int q_id;
+
+ for (q_id = 0; q_id < hwdev->ceqs->num_ceqs; q_id++) {
+ eq = &hwdev->ceqs->ceq[q_id];
+ /* Indirect access should set q_id first */
+ hinic3_hwif_write_reg(eq->hwdev->hwif,
+ HINIC3_EQ_INDIR_IDX_ADDR(eq->type),
+ eq->q_id);
+ wmb(); /* write index before config */
+
+ addr = EQ_CONS_IDX_REG_ADDR(eq);
+ ci = hinic3_hwif_read_reg(hwdev->hwif, addr);
+ addr = EQ_PROD_IDX_REG_ADDR(eq);
+ pi = hinic3_hwif_read_reg(hwdev->hwif, addr);
+ sdk_err(hwdev->dev_hdl,
+ "Ceq id: %d, ci: 0x%08x, sw_ci: 0x%08x, pi: 0x%x, tasklet_state: 0x%lx, wrap: %u, ceqe: 0x%x\n",
+ q_id, ci, eq->cons_idx, pi,
+ tasklet_state(&eq->ceq_tasklet),
+ eq->wrapped, be32_to_cpu(*(GET_CURR_CEQ_ELEM(eq))));
+
+ sdk_err(hwdev->dev_hdl, "Ceq last response hard interrupt time: %u\n",
+ jiffies_to_msecs(jiffies - eq->hard_intr_jif));
+ sdk_err(hwdev->dev_hdl, "Ceq last response soft interrupt time: %u\n",
+ jiffies_to_msecs(jiffies - eq->soft_intr_jif));
+ }
+
+ hinic3_show_chip_err_info(hwdev);
+}
+
+int hinic3_get_ceq_info(void *hwdev, u16 q_id, struct hinic3_ceq_info *ceq_info)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct hinic3_eq *eq = NULL;
+
+ if (!hwdev || !ceq_info)
+ return -EINVAL;
+
+ if (q_id >= dev->ceqs->num_ceqs)
+ return -EINVAL;
+
+ eq = &dev->ceqs->ceq[q_id];
+ ceq_info->q_len = eq->eq_len;
+ ceq_info->num_pages = eq->num_pages;
+ ceq_info->page_size = eq->page_size;
+ ceq_info->num_elem_in_pg = eq->num_elem_in_pg;
+ ceq_info->elem_size = eq->elem_size;
+ sdk_info(dev->dev_hdl, "get_ceq_info: qid=0x%x page_size=%ul\n",
+ q_id, eq->page_size);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_ceq_info);
+
+int hinic3_get_ceq_page_phy_addr(void *hwdev, u16 q_id,
+ u16 page_idx, u64 *page_phy_addr)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct hinic3_eq *eq = NULL;
+
+ if (!hwdev || !page_phy_addr)
+ return -EINVAL;
+
+ if (q_id >= dev->ceqs->num_ceqs)
+ return -EINVAL;
+
+ eq = &dev->ceqs->ceq[q_id];
+ if (page_idx >= eq->num_pages)
+ return -EINVAL;
+
+ *page_phy_addr = eq->eq_pages[page_idx].align_paddr;
+ sdk_info(dev->dev_hdl, "ceq_page_phy_addr: 0x%llx page_idx=%u\n",
+ eq->eq_pages[page_idx].align_paddr, page_idx);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_ceq_page_phy_addr);
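+
+/*
+ * Illustrative use of the two query helpers above; the caller and its
+ * hwdev handle are assumptions for the example, not part of this patch:
+ *
+ *	struct hinic3_ceq_info info;
+ *	u64 paddr;
+ *	u16 i;
+ *
+ *	if (!hinic3_get_ceq_info(hwdev, 0, &info))
+ *		for (i = 0; i < info.num_pages; i++)
+ *			if (!hinic3_get_ceq_page_phy_addr(hwdev, 0, i, &paddr))
+ *				pr_info("ceq0 page %u: 0x%llx\n", i, paddr);
+ */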
+
+int hinic3_set_ceq_irq_disable(void *hwdev, u16 q_id)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct hinic3_eq *ceq = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (q_id >= dev->ceqs->num_ceqs)
+ return -EINVAL;
+
+ ceq = &dev->ceqs->ceq[q_id];
+
+ hinic3_set_msix_state(ceq->hwdev, ceq->eq_irq.msix_entry_idx,
+ HINIC3_MSIX_DISABLE);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_ceq_irq_disable);
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.h
new file mode 100644
index 000000000000..a6b83c3bc563
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_eqs.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_EQS_H
+#define HINIC3_EQS_H
+
+#include <linux/types.h>
+#include <linux/interrupt.h>
+#include <linux/workqueue.h>
+
+#include "hinic3_common.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+
+#define HINIC3_MAX_AEQS 4
+#define HINIC3_MAX_CEQS 32
+
+#define HINIC3_AEQ_MAX_PAGES 4
+#define HINIC3_CEQ_MAX_PAGES 8
+
+#define HINIC3_AEQE_SIZE 64
+#define HINIC3_CEQE_SIZE 4
+
+#define HINIC3_AEQE_DESC_SIZE 4
+#define HINIC3_AEQE_DATA_SIZE \
+ (HINIC3_AEQE_SIZE - HINIC3_AEQE_DESC_SIZE)
+
+#define HINIC3_DEFAULT_AEQ_LEN 0x10000
+#define HINIC3_DEFAULT_CEQ_LEN 0x10000
+
+#define HINIC3_MIN_EQ_PAGE_SIZE 0x1000 /* min eq page size 4K Bytes */
+#define HINIC3_MAX_EQ_PAGE_SIZE 0x400000 /* max eq page size 4M Bytes */
+
+#define HINIC3_MIN_AEQ_LEN 64
+#define HINIC3_MAX_AEQ_LEN \
+ ((HINIC3_MAX_EQ_PAGE_SIZE / HINIC3_AEQE_SIZE) * HINIC3_AEQ_MAX_PAGES)
+
+#define HINIC3_MIN_CEQ_LEN 64
+#define HINIC3_MAX_CEQ_LEN \
+ ((HINIC3_MAX_EQ_PAGE_SIZE / HINIC3_CEQE_SIZE) * HINIC3_CEQ_MAX_PAGES)
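+
+/*
+ * Worked values for the two limits above (plain arithmetic from the
+ * macros, no extra assumptions):
+ * HINIC3_MAX_AEQ_LEN = (0x400000 / 64) * 4 = 262144 entries,
+ * HINIC3_MAX_CEQ_LEN = (0x400000 / 4) * 8 = 8388608 entries,
+ * so the 0x10000 (65536) defaults fit well within both.
+ */
+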
+#define HINIC3_CEQ_ID_CMDQ 0
+
+#define EQ_IRQ_NAME_LEN 64
+
+#define EQ_USLEEP_LOW_BOUND 900
+#define EQ_USLEEP_HIG_BOUND 1000
+
+enum hinic3_eq_type {
+ HINIC3_AEQ,
+ HINIC3_CEQ
+};
+
+enum hinic3_eq_intr_mode {
+ HINIC3_INTR_MODE_ARMED,
+ HINIC3_INTR_MODE_ALWAYS,
+};
+
+enum hinic3_eq_ci_arm_state {
+ HINIC3_EQ_NOT_ARMED,
+ HINIC3_EQ_ARMED,
+};
+
+struct hinic3_eq {
+ struct hinic3_hwdev *hwdev;
+ u16 q_id;
+ u16 rsvd1;
+ enum hinic3_eq_type type;
+ u32 page_size;
+ u32 orig_page_size;
+ u32 eq_len;
+
+ u32 cons_idx;
+ u16 wrapped;
+ u16 rsvd2;
+
+ u16 elem_size;
+ u16 num_pages;
+ u32 num_elem_in_pg;
+
+ struct irq_info eq_irq;
+ char irq_name[EQ_IRQ_NAME_LEN];
+
+ struct hinic3_dma_addr_align *eq_pages;
+
+ struct work_struct aeq_work;
+ struct tasklet_struct ceq_tasklet;
+
+ u64 hard_intr_jif;
+ u64 soft_intr_jif;
+
+ u64 rsvd3;
+};
+
+struct hinic3_aeq_elem {
+ u8 aeqe_data[HINIC3_AEQE_DATA_SIZE];
+ u32 desc;
+};
+
+enum hinic3_aeq_cb_state {
+ HINIC3_AEQ_HW_CB_REG = 0,
+ HINIC3_AEQ_HW_CB_RUNNING,
+ HINIC3_AEQ_SW_CB_REG,
+ HINIC3_AEQ_SW_CB_RUNNING,
+};
+
+struct hinic3_aeqs {
+ struct hinic3_hwdev *hwdev;
+
+ hinic3_aeq_hwe_cb aeq_hwe_cb[HINIC3_MAX_AEQ_EVENTS];
+ void *aeq_hwe_cb_data[HINIC3_MAX_AEQ_EVENTS];
+ hinic3_aeq_swe_cb aeq_swe_cb[HINIC3_MAX_AEQ_SW_EVENTS];
+ void *aeq_swe_cb_data[HINIC3_MAX_AEQ_SW_EVENTS];
+ unsigned long aeq_hw_cb_state[HINIC3_MAX_AEQ_EVENTS];
+ unsigned long aeq_sw_cb_state[HINIC3_MAX_AEQ_SW_EVENTS];
+
+ struct hinic3_eq aeq[HINIC3_MAX_AEQS];
+ u16 num_aeqs;
+ u16 rsvd1;
+ u32 rsvd2;
+
+ struct workqueue_struct *workq;
+};
+
+enum hinic3_ceq_cb_state {
+ HINIC3_CEQ_CB_REG = 0,
+ HINIC3_CEQ_CB_RUNNING,
+};
+
+struct hinic3_ceqs {
+ struct hinic3_hwdev *hwdev;
+
+ hinic3_ceq_event_cb ceq_cb[HINIC3_MAX_CEQ_EVENTS];
+ void *ceq_cb_data[HINIC3_MAX_CEQ_EVENTS];
+ void *ceq_data[HINIC3_MAX_CEQ_EVENTS];
+ unsigned long ceq_cb_state[HINIC3_MAX_CEQ_EVENTS];
+
+ struct hinic3_eq ceq[HINIC3_MAX_CEQS];
+ u16 num_ceqs;
+ u16 rsvd1;
+ u32 rsvd2;
+};
+
+int hinic3_aeqs_init(struct hinic3_hwdev *hwdev, u16 num_aeqs,
+ struct irq_info *msix_entries);
+
+void hinic3_aeqs_free(struct hinic3_hwdev *hwdev);
+
+int hinic3_ceqs_init(struct hinic3_hwdev *hwdev, u16 num_ceqs,
+ struct irq_info *msix_entries);
+
+void hinic3_ceqs_free(struct hinic3_hwdev *hwdev);
+
+void hinic3_get_ceq_irqs(struct hinic3_hwdev *hwdev, struct irq_info *irqs,
+ u16 *num_irqs);
+
+void hinic3_get_aeq_irqs(struct hinic3_hwdev *hwdev, struct irq_info *irqs,
+ u16 *num_irqs);
+
+void hinic3_dump_ceq_info(struct hinic3_hwdev *hwdev);
+
+void hinic3_dump_aeq_info(struct hinic3_hwdev *hwdev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.c
new file mode 100644
index 000000000000..a4cbac8e4cc1
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.c
@@ -0,0 +1,453 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_hw_api.h"
+ #ifndef HTONL
+#define HTONL(x) \
+ ((((x) & 0x000000ff) << 24) \
+ | (((x) & 0x0000ff00) << 8) \
+ | (((x) & 0x00ff0000) >> 8) \
+ | (((x) & 0xff000000) >> 24))
+#endif
+
+static void hinic3_sml_ctr_read_build_req(struct chipif_sml_ctr_rd_req *msg,
+ u8 instance_id, u8 op_id,
+ u8 ack, u32 ctr_id, u32 init_val)
+{
+ msg->head.value = 0;
+ msg->head.bs.instance = instance_id;
+ msg->head.bs.op_id = op_id;
+ msg->head.bs.ack = ack;
+ msg->head.value = HTONL(msg->head.value);
+ msg->ctr_id = ctr_id;
+ msg->ctr_id = HTONL(msg->ctr_id);
+ msg->initial = init_val;
+}
+
+static void sml_ctr_htonl_n(u32 *node, u32 len)
+{
+ u32 i;
+ u32 *node_new = node;
+
+ for (i = 0; i < len; i++) {
+ *node_new = HTONL(*node_new);
+ node_new++;
+ }
+}
+
+/**
+ * hinic3_sm_ctr_rd16 - read a small single 16-bit counter
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd16(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u16 *value)
+{
+ struct chipif_sml_ctr_rd_req req;
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&req, 0, sizeof(req));
+
+ hinic3_sml_ctr_read_build_req(&req, instance, CHIPIF_SM_CTR_OP_READ,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req),
+ (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 16bit counter read fail, err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value = rsp.bs_ss16_rsp.value1;
+
+ return 0;
+}
+
+/**
+ * hinic3_sm_ctr_rd32 - read a small single 32-bit counter
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd32(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u32 *value)
+{
+ struct chipif_sml_ctr_rd_req req;
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&req, 0, sizeof(req));
+
+ hinic3_sml_ctr_read_build_req(&req, instance, CHIPIF_SM_CTR_OP_READ,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req),
+ (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 32bit counter read fail, err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value = rsp.bs_ss32_rsp.value1;
+
+ return 0;
+}
+
+/**
+ * hinic3_sm_ctr_rd32_clear - read a small single 32-bit counter and clear it to zero
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure,
+ * mapped from the ACN error codes (e.g. ERR_OK, ERR_PARAM, ERR_FAILED)
+ **/
+int hinic3_sm_ctr_rd32_clear(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u32 *value)
+{
+ struct chipif_sml_ctr_rd_req req;
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&req, 0, sizeof(req));
+
+ hinic3_sml_ctr_read_build_req(&req, instance,
+ CHIPIF_SM_CTR_OP_READ_CLEAR,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req),
+ (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 32bit counter clear fail, err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value = rsp.bs_ss32_rsp.value1;
+
+ return 0;
+}
+
+/**
+ * hinic3_sm_ctr_rd64_pair - read a big counter pair (two 64-bit values)
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value1: read counter value ptr
+ * @value2: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd64_pair(void *hwdev, u8 node, u8 instance,
+ u32 ctr_id, u64 *value1, u64 *value2)
+{
+ struct chipif_sml_ctr_rd_req req;
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!value1) {
+ pr_err("First value is NULL for read 64 bit pair\n");
+ return -EFAULT;
+ }
+
+ if (!value2) {
+ pr_err("Second value is NULL for read 64 bit pair\n");
+ return -EFAULT;
+ }
+
+ if (!hwdev || ((ctr_id & 0x1) != 0)) {
+ pr_err("Hwdev is NULL or ctr_id(%d) is odd number for read 64 bit pair\n",
+ ctr_id);
+ return -EFAULT;
+ }
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&req, 0, sizeof(req));
+
+ hinic3_sml_ctr_read_build_req(&req, instance, CHIPIF_SM_CTR_OP_READ,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req), (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 64 bit rd pair ret(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value1 = ((u64)rsp.bs_bp64_rsp.val1_h << BIT_32) | rsp.bs_bp64_rsp.val1_l;
+ *value2 = ((u64)rsp.bs_bp64_rsp.val2_h << BIT_32) | rsp.bs_bp64_rsp.val2_l;
+
+ return 0;
+}
+
+/**
+ * hinic3_sm_ctr_rd64_pair_clear - read a big counter pair and clear it to zero
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value1: read counter value ptr
+ * @value2: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd64_pair_clear(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u64 *value1, u64 *value2)
+{
+ struct chipif_sml_ctr_rd_req req = {0};
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value1 || !value2 || ((ctr_id & 0x1) != 0)) {
+ pr_err("Hwdev or value1 or value2 is NULL or ctr_id(%u) is odd number\n", ctr_id);
+ return -EINVAL;
+ }
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ hinic3_sml_ctr_read_build_req(&req, instance,
+ CHIPIF_SM_CTR_OP_READ_CLEAR,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req), (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 64 bit clear pair fail. ret(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value1 = ((u64)rsp.bs_bp64_rsp.val1_h << BIT_32) | rsp.bs_bp64_rsp.val1_l;
+ *value2 = ((u64)rsp.bs_bp64_rsp.val2_h << BIT_32) | rsp.bs_bp64_rsp.val2_l;
+
+ return 0;
+}
+
+/**
+ * hinic3_sm_ctr_rd64 - read a big 64-bit counter
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd64(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u64 *value)
+{
+ struct chipif_sml_ctr_rd_req req;
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&req, 0, sizeof(req));
+
+ hinic3_sml_ctr_read_build_req(&req, instance, CHIPIF_SM_CTR_OP_READ,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req), (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 64bit counter read fail err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value = ((u64)rsp.bs_bs64_rsp.value1 << BIT_32) | rsp.bs_bs64_rsp.value2;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_sm_ctr_rd64);
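+
+/*
+ * Illustrative call of the exported reader above; hwdev, node 2 and
+ * counter id 7 are assumptions for the example, not values defined by
+ * this patch:
+ *
+ *	u64 val;
+ *
+ *	if (!hinic3_sm_ctr_rd64(hwdev, 2, 0, 7, &val))
+ *		pr_info("ctr7=%llu\n", val);
+ */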
+
+/**
+ * hinic3_sm_ctr_rd64_clear - read a big 64-bit counter and clear it to zero
+ * @hwdev: the hardware device
+ * @node: the node id
+ * @instance: instance id
+ * @ctr_id: counter id
+ * @value: read counter value ptr
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_sm_ctr_rd64_clear(void *hwdev, u8 node, u8 instance, u32 ctr_id,
+ u64 *value)
+{
+ struct chipif_sml_ctr_rd_req req = {0};
+ union ctr_rd_rsp rsp;
+ int ret;
+
+ if (!hwdev || !value)
+ return -EINVAL;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ hinic3_sml_ctr_read_build_req(&req, instance,
+ CHIPIF_SM_CTR_OP_READ_CLEAR,
+ CHIPIF_ACK, ctr_id, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, node, (u8 *)&req,
+ (unsigned short)sizeof(req), (void *)&rsp,
+ (unsigned short)sizeof(rsp));
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Sm 64bit counter clear fail err(%d)\n", ret);
+ return ret;
+ }
+ sml_ctr_htonl_n((u32 *)&rsp, sizeof(rsp) / sizeof(u32));
+ *value = ((u64)rsp.bs_bs64_rsp.value1 << BIT_32) | rsp.bs_bs64_rsp.value2;
+
+ return 0;
+}
+
+int hinic3_api_csr_rd32(void *hwdev, u8 dest, u32 addr, u32 *val)
+{
+ struct hinic3_csr_request_api_data api_data = {0};
+ u32 csr_val = 0;
+ u16 in_size = sizeof(api_data);
+ int ret;
+
+ if (!hwdev || !val)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&api_data, 0, sizeof(struct hinic3_csr_request_api_data));
+ api_data.dw0 = 0;
+ api_data.dw1.bits.operation_id = HINIC3_CSR_OPERATION_READ_CSR;
+ api_data.dw1.bits.need_response = HINIC3_CSR_NEED_RESP_DATA;
+ api_data.dw1.bits.data_size = HINIC3_CSR_DATA_SZ_32;
+ api_data.dw1.val32 = cpu_to_be32(api_data.dw1.val32);
+ api_data.dw2.bits.csr_addr = addr;
+ api_data.dw2.val32 = cpu_to_be32(api_data.dw2.val32);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, dest, (u8 *)(&api_data),
+ in_size, &csr_val, 0x4);
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Read 32 bit csr fail, dest %u addr 0x%x, ret: 0x%x\n",
+ dest, addr, ret);
+ return ret;
+ }
+
+ *val = csr_val;
+
+ return 0;
+}
+
+int hinic3_api_csr_wr32(void *hwdev, u8 dest, u32 addr, u32 val)
+{
+ struct hinic3_csr_request_api_data api_data;
+ u16 in_size = sizeof(api_data);
+ int ret;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&api_data, 0, sizeof(struct hinic3_csr_request_api_data));
+ api_data.dw1.bits.operation_id = HINIC3_CSR_OPERATION_WRITE_CSR;
+ api_data.dw1.bits.need_response = HINIC3_CSR_NO_RESP_DATA;
+ api_data.dw1.bits.data_size = HINIC3_CSR_DATA_SZ_32;
+ api_data.dw1.val32 = cpu_to_be32(api_data.dw1.val32);
+ api_data.dw2.bits.csr_addr = addr;
+ api_data.dw2.val32 = cpu_to_be32(api_data.dw2.val32);
+ api_data.csr_write_data_h = 0xffffffff;
+ api_data.csr_write_data_l = val;
+
+ ret = hinic3_api_cmd_write_nack(hwdev, dest, (u8 *)(&api_data),
+ in_size);
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Write 32 bit csr fail! dest %u addr 0x%x val 0x%x\n",
+ dest, addr, val);
+ return ret;
+ }
+
+ return 0;
+}
+
+int hinic3_api_csr_rd64(void *hwdev, u8 dest, u32 addr, u64 *val)
+{
+ struct hinic3_csr_request_api_data api_data = {0};
+ u64 csr_val = 0;
+ u16 in_size = sizeof(api_data);
+ int ret;
+
+ if (!hwdev || !val)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&api_data, 0, sizeof(struct hinic3_csr_request_api_data));
+ api_data.dw0 = 0;
+ api_data.dw1.bits.operation_id = HINIC3_CSR_OPERATION_READ_CSR;
+ api_data.dw1.bits.need_response = HINIC3_CSR_NEED_RESP_DATA;
+ api_data.dw1.bits.data_size = HINIC3_CSR_DATA_SZ_64;
+ api_data.dw1.val32 = cpu_to_be32(api_data.dw1.val32);
+ api_data.dw2.bits.csr_addr = addr;
+ api_data.dw2.val32 = cpu_to_be32(api_data.dw2.val32);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, dest, (u8 *)(&api_data),
+ in_size, &csr_val, 0x8);
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Read 64 bit csr fail, dest %u addr 0x%x\n",
+ dest, addr);
+ return ret;
+ }
+
+ *val = csr_val;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_api_csr_rd64);
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.h
new file mode 100644
index 000000000000..9ec812eac684
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_api.h
@@ -0,0 +1,141 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_HW_API_H
+#define HINIC3_HW_API_H
+
+#include <linux/types.h>
+
+#define CHIPIF_ACK 1
+#define CHIPIF_NOACK 0
+
+#define CHIPIF_SM_CTR_OP_READ 0x2
+#define CHIPIF_SM_CTR_OP_READ_CLEAR 0x6
+
+#define BIT_32 32
+
+/* request head */
+union chipif_sml_ctr_req_head {
+ struct {
+ u32 pad:15;
+ u32 ack:1;
+ u32 op_id:5;
+ u32 instance:6;
+ u32 src:5;
+ } bs;
+
+ u32 value;
+};
+
+/* counter read request struct */
+struct chipif_sml_ctr_rd_req {
+ u32 extra;
+ union chipif_sml_ctr_req_head head;
+ u32 ctr_id;
+ u32 initial;
+ u32 pad;
+};
+
+struct hinic3_csr_request_api_data {
+ u32 dw0;
+
+ union {
+ struct {
+ u32 reserved1:13;
+ /* this field indicates the write/read data size:
+ * 2'b00: 32 bits
+ * 2'b01: 64 bits
+ * 2'b10~2'b11:reserved
+ */
+ u32 data_size:2;
+ /* this field indicates whether the requestor expects to
+ * receive response data.
+ * 1'b0: no response data expected.
+ * 1'b1: response data expected.
+ */
+ u32 need_response:1;
+ /* this field indicates the operation requested by the
+ * requestor.
+ * 5'b1_1110: write value to csr space.
+ * 5'b1_1111: read register from csr space.
+ */
+ u32 operation_id:5;
+ u32 reserved2:6;
+ /* this field specifies the Src node ID for this API
+ * request message.
+ */
+ u32 src_node_id:5;
+ } bits;
+
+ u32 val32;
+ } dw1;
+
+ union {
+ struct {
+ /* it specifies the CSR address. */
+ u32 csr_addr:26;
+ u32 reserved3:6;
+ } bits;
+
+ u32 val32;
+ } dw2;
+
+ /* if data_size=2'b01, it is high 32 bits of write data. else, it is
+ * 32'hFFFF_FFFF.
+ */
+ u32 csr_write_data_h;
+ /* the low 32 bits of write data. */
+ u32 csr_write_data_l;
+};
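+
+/*
+ * Worked dw1 encoding (assuming the usual low-bit-first bitfield layout
+ * on a little-endian host): a 32-bit CSR read sets operation_id = 0x1F
+ * (bits 20:16), need_response = 1 (bit 15) and data_size = 0
+ * (bits 14:13), i.e. dw1.val32 = 0x001f8000 before the cpu_to_be32()
+ * swap applied by the callers.
+ */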
+
+/* counter read response union */
+union ctr_rd_rsp {
+ struct {
+ u32 value1:16;
+ u32 pad0:16;
+ u32 pad1[3];
+ } bs_ss16_rsp;
+
+ struct {
+ u32 value1;
+ u32 pad[3];
+ } bs_ss32_rsp;
+
+ struct {
+ u32 value1:20;
+ u32 pad0:12;
+ u32 value2:12;
+ u32 pad1:20;
+ u32 pad2[2];
+ } bs_sp_rsp;
+
+ struct {
+ u32 value1;
+ u32 value2;
+ u32 pad[2];
+ } bs_bs64_rsp;
+
+ struct {
+ u32 val1_h;
+ u32 val1_l;
+ u32 val2_h;
+ u32 val2_l;
+ } bs_bp64_rsp;
+};
+
+enum HINIC3_CSR_API_DATA_OPERATION_ID {
+ HINIC3_CSR_OPERATION_WRITE_CSR = 0x1E,
+ HINIC3_CSR_OPERATION_READ_CSR = 0x1F
+};
+
+enum HINIC3_CSR_API_DATA_NEED_RESPONSE_DATA {
+ HINIC3_CSR_NO_RESP_DATA = 0,
+ HINIC3_CSR_NEED_RESP_DATA = 1
+};
+
+enum HINIC3_CSR_API_DATA_DATA_SIZE {
+ HINIC3_CSR_DATA_SZ_32 = 0,
+ HINIC3_CSR_DATA_SZ_64 = 1
+};
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.c
new file mode 100644
index 000000000000..08a1b8f15cb7
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.c
@@ -0,0 +1,1480 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/mutex.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/semaphore.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+#include "cfg_mgt_comm_pub.h"
+#include "hinic3_hw_cfg.h"
+
+static void parse_pub_res_cap_dfx(struct hinic3_hwdev *hwdev,
+ const struct service_cap *cap)
+{
+ sdk_info(hwdev->dev_hdl, "Get public resource capbility: svc_cap_en: 0x%x\n",
+ cap->svc_type);
+ sdk_info(hwdev->dev_hdl, "Host_id: 0x%x, ep_id: 0x%x, er_id: 0x%x, port_id: 0x%x\n",
+ cap->host_id, cap->ep_id, cap->er_id, cap->port_id);
+ sdk_info(hwdev->dev_hdl, "cos_bitmap: 0x%x, flexq: 0x%x, virtio_vq_size: 0x%x\n",
+ cap->cos_valid_bitmap, cap->flexq_en, cap->virtio_vq_size);
+ sdk_info(hwdev->dev_hdl, "Host_total_function: 0x%x, host_oq_id_mask_val: 0x%x, max_vf: 0x%x\n",
+ cap->host_total_function, cap->host_oq_id_mask_val,
+ cap->max_vf);
+ sdk_info(hwdev->dev_hdl, "Host_pf_num: 0x%x, pf_id_start: 0x%x, host_vf_num: 0x%x, vf_id_start: 0x%x\n",
+ cap->pf_num, cap->pf_id_start, cap->vf_num, cap->vf_id_start);
+ sdk_info(hwdev->dev_hdl, "host_valid_bitmap: 0x%x, master_host_id: 0x%x, srv_multi_host_mode: 0x%x\n",
+ cap->host_valid_bitmap, cap->master_host_id, cap->srv_multi_host_mode);
+ sdk_info(hwdev->dev_hdl,
+ "fake_vf_start_id: 0x%x, fake_vf_num: 0x%x, fake_vf_max_pctx: 0x%x\n",
+ cap->fake_vf_start_id, cap->fake_vf_num, cap->fake_vf_max_pctx);
+ sdk_info(hwdev->dev_hdl, "fake_vf_bfilter_start_addr: 0x%x, fake_vf_bfilter_len: 0x%x\n",
+ cap->fake_vf_bfilter_start_addr, cap->fake_vf_bfilter_len);
+}
+
+static void parse_cqm_res_cap(struct hinic3_hwdev *hwdev, struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap)
+{
+ struct dev_sf_svc_attr *attr = &cap->sf_svc_attr;
+
+ cap->fake_vf_start_id = dev_cap->fake_vf_start_id;
+ cap->fake_vf_num = dev_cap->fake_vf_num;
+ cap->fake_vf_max_pctx = dev_cap->fake_vf_max_pctx;
+ cap->fake_vf_num_cfg = dev_cap->fake_vf_num;
+ cap->fake_vf_bfilter_start_addr = dev_cap->fake_vf_bfilter_start_addr;
+ cap->fake_vf_bfilter_len = dev_cap->fake_vf_bfilter_len;
+
+ if (COMM_SUPPORT_VIRTIO_VQ_SIZE(hwdev))
+ cap->virtio_vq_size = (u16)(VIRTIO_BASE_VQ_SIZE << dev_cap->virtio_vq_size);
+ else
+ cap->virtio_vq_size = VIRTIO_DEFAULT_VQ_SIZE;
+
+ if (dev_cap->sf_svc_attr & SF_SVC_FT_BIT)
+ attr->ft_en = true;
+ else
+ attr->ft_en = false;
+
+ if (dev_cap->sf_svc_attr & SF_SVC_RDMA_BIT)
+ attr->rdma_en = true;
+ else
+ attr->rdma_en = false;
+
+ /* PPF will overwrite it when parse dynamic resource */
+ if (dev_cap->func_sf_en)
+ cap->sf_en = true;
+ else
+ cap->sf_en = false;
+
+ cap->lb_mode = dev_cap->lb_mode;
+ cap->smf_pg = dev_cap->smf_pg;
+
+ cap->timer_en = dev_cap->timer_en;
+ cap->host_oq_id_mask_val = dev_cap->host_oq_id_mask_val;
+ cap->max_connect_num = dev_cap->max_conn_num;
+ cap->max_stick2cache_num = dev_cap->max_stick2cache_num;
+ cap->bfilter_start_addr = dev_cap->max_bfilter_start_addr;
+ cap->bfilter_len = dev_cap->bfilter_len;
+ cap->hash_bucket_num = dev_cap->hash_bucket_num;
+}
+
+static void parse_pub_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ cap->host_id = dev_cap->host_id;
+ cap->ep_id = dev_cap->ep_id;
+ cap->er_id = dev_cap->er_id;
+ cap->port_id = dev_cap->port_id;
+
+ cap->svc_type = dev_cap->svc_cap_en;
+ cap->chip_svc_type = cap->svc_type;
+
+ cap->cos_valid_bitmap = dev_cap->valid_cos_bitmap;
+ cap->port_cos_valid_bitmap = dev_cap->port_cos_valid_bitmap;
+ cap->flexq_en = dev_cap->flexq_en;
+
+ cap->host_total_function = dev_cap->host_total_func;
+ cap->host_valid_bitmap = dev_cap->host_valid_bitmap;
+ cap->master_host_id = dev_cap->master_host_id;
+ cap->srv_multi_host_mode = dev_cap->srv_multi_host_mode;
+
+ if (type != TYPE_VF) {
+ cap->max_vf = dev_cap->max_vf;
+ cap->pf_num = dev_cap->host_pf_num;
+ cap->pf_id_start = dev_cap->pf_id_start;
+ cap->vf_num = dev_cap->host_vf_num;
+ cap->vf_id_start = dev_cap->vf_id_start;
+ } else {
+ cap->max_vf = 0;
+ }
+
+ parse_cqm_res_cap(hwdev, cap, dev_cap);
+ parse_pub_res_cap_dfx(hwdev, cap);
+}
+
+static void parse_dynamic_share_res_cap(struct service_cap *cap,
+ const struct cfg_cmd_dev_cap *dev_cap)
+{
+ if (dev_cap->host_sf_en)
+ cap->sf_en = true;
+ else
+ cap->sf_en = false;
+}
+
+static void parse_l2nic_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct nic_service_cap *nic_cap = &cap->nic_cap;
+
+ nic_cap->max_sqs = dev_cap->nic_max_sq_id + 1;
+ nic_cap->max_rqs = dev_cap->nic_max_rq_id + 1;
+ nic_cap->default_num_queues = dev_cap->nic_default_num_queues;
+
+ sdk_info(hwdev->dev_hdl, "L2nic resource capbility, max_sqs: 0x%x, max_rqs: 0x%x\n",
+ nic_cap->max_sqs, nic_cap->max_rqs);
+
+ /* Check parameters from firmware */
+ if (nic_cap->max_sqs > HINIC3_CFG_MAX_QP ||
+ nic_cap->max_rqs > HINIC3_CFG_MAX_QP) {
+ sdk_info(hwdev->dev_hdl, "Number of qp exceed limit[1-%d]: sq: %u, rq: %u\n",
+ HINIC3_CFG_MAX_QP, nic_cap->max_sqs, nic_cap->max_rqs);
+ nic_cap->max_sqs = HINIC3_CFG_MAX_QP;
+ nic_cap->max_rqs = HINIC3_CFG_MAX_QP;
+ }
+}
+
+static void parse_fc_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct dev_fc_svc_cap *fc_cap = &cap->fc_cap.dev_fc_cap;
+
+ fc_cap->max_parent_qpc_num = dev_cap->fc_max_pctx;
+ fc_cap->scq_num = dev_cap->fc_max_scq;
+ fc_cap->srq_num = dev_cap->fc_max_srq;
+ fc_cap->max_child_qpc_num = dev_cap->fc_max_cctx;
+ fc_cap->child_qpc_id_start = dev_cap->fc_cctx_id_start;
+ fc_cap->vp_id_start = dev_cap->fc_vp_id_start;
+ fc_cap->vp_id_end = dev_cap->fc_vp_id_end;
+
+ sdk_info(hwdev->dev_hdl, "Get fc resource capbility\n");
+ sdk_info(hwdev->dev_hdl,
+ "Max_parent_qpc_num: 0x%x, scq_num: 0x%x, srq_num: 0x%x, max_child_qpc_num: 0x%x, child_qpc_id_start: 0x%x\n",
+ fc_cap->max_parent_qpc_num, fc_cap->scq_num, fc_cap->srq_num,
+ fc_cap->max_child_qpc_num, fc_cap->child_qpc_id_start);
+ sdk_info(hwdev->dev_hdl, "Vp_id_start: 0x%x, vp_id_end: 0x%x\n",
+ fc_cap->vp_id_start, fc_cap->vp_id_end);
+}
+
+static void parse_roce_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct dev_roce_svc_own_cap *roce_cap =
+ &cap->rdma_cap.dev_rdma_cap.roce_own_cap;
+
+ roce_cap->max_qps = dev_cap->roce_max_qp;
+ roce_cap->max_cqs = dev_cap->roce_max_cq;
+ roce_cap->max_srqs = dev_cap->roce_max_srq;
+ roce_cap->max_mpts = dev_cap->roce_max_mpt;
+ roce_cap->max_drc_qps = dev_cap->roce_max_drc_qp;
+
+ roce_cap->wqe_cl_start = dev_cap->roce_wqe_cl_start;
+ roce_cap->wqe_cl_end = dev_cap->roce_wqe_cl_end;
+ roce_cap->wqe_cl_sz = dev_cap->roce_wqe_cl_size;
+
+ sdk_info(hwdev->dev_hdl, "Get roce resource capbility, type: 0x%x\n",
+ type);
+ sdk_info(hwdev->dev_hdl, "Max_qps: 0x%x, max_cqs: 0x%x, max_srqs: 0x%x, max_mpts: 0x%x, max_drcts: 0x%x\n",
+ roce_cap->max_qps, roce_cap->max_cqs, roce_cap->max_srqs,
+ roce_cap->max_mpts, roce_cap->max_drc_qps);
+
+ sdk_info(hwdev->dev_hdl, "Wqe_start: 0x%x, wqe_end: 0x%x, wqe_sz: 0x%x\n",
+ roce_cap->wqe_cl_start, roce_cap->wqe_cl_end,
+ roce_cap->wqe_cl_sz);
+
+ if (roce_cap->max_qps == 0) {
+ if (type == TYPE_PF || type == TYPE_PPF) {
+ roce_cap->max_qps = 0x400;
+ roce_cap->max_cqs = 0x800;
+ roce_cap->max_srqs = 0x400;
+ roce_cap->max_mpts = 0x400;
+ roce_cap->max_drc_qps = 0x40;
+ } else {
+ roce_cap->max_qps = 0x200;
+ roce_cap->max_cqs = 0x400;
+ roce_cap->max_srqs = 0x200;
+ roce_cap->max_mpts = 0x200;
+ roce_cap->max_drc_qps = 0x40;
+ }
+ }
+}
+
+static void parse_rdma_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct dev_roce_svc_own_cap *roce_cap =
+ &cap->rdma_cap.dev_rdma_cap.roce_own_cap;
+
+ roce_cap->cmtt_cl_start = dev_cap->roce_cmtt_cl_start;
+ roce_cap->cmtt_cl_end = dev_cap->roce_cmtt_cl_end;
+ roce_cap->cmtt_cl_sz = dev_cap->roce_cmtt_cl_size;
+
+ roce_cap->dmtt_cl_start = dev_cap->roce_dmtt_cl_start;
+ roce_cap->dmtt_cl_end = dev_cap->roce_dmtt_cl_end;
+ roce_cap->dmtt_cl_sz = dev_cap->roce_dmtt_cl_size;
+
+ sdk_info(hwdev->dev_hdl, "Get rdma resource capbility, Cmtt_start: 0x%x, cmtt_end: 0x%x, cmtt_sz: 0x%x\n",
+ roce_cap->cmtt_cl_start, roce_cap->cmtt_cl_end,
+ roce_cap->cmtt_cl_sz);
+
+ sdk_info(hwdev->dev_hdl, "Dmtt_start: 0x%x, dmtt_end: 0x%x, dmtt_sz: 0x%x\n",
+ roce_cap->dmtt_cl_start, roce_cap->dmtt_cl_end,
+ roce_cap->dmtt_cl_sz);
+}
+
+static void parse_ovs_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct ovs_service_cap *ovs_cap = &cap->ovs_cap;
+
+ ovs_cap->dev_ovs_cap.max_pctxs = dev_cap->ovs_max_qpc;
+ ovs_cap->dev_ovs_cap.fake_vf_max_pctx = dev_cap->fake_vf_max_pctx;
+ ovs_cap->dev_ovs_cap.fake_vf_start_id = dev_cap->fake_vf_start_id;
+ ovs_cap->dev_ovs_cap.fake_vf_num = dev_cap->fake_vf_num;
+ ovs_cap->dev_ovs_cap.dynamic_qp_en = dev_cap->flexq_en;
+
+ sdk_info(hwdev->dev_hdl,
+ "Get ovs resource capbility, max_qpc: 0x%x, fake_vf_start_id: 0x%x, fake_vf_num: 0x%x\n",
+ ovs_cap->dev_ovs_cap.max_pctxs,
+ ovs_cap->dev_ovs_cap.fake_vf_start_id,
+ ovs_cap->dev_ovs_cap.fake_vf_num);
+ sdk_info(hwdev->dev_hdl,
+ "fake_vf_max_qpc: 0x%x, dynamic_qp_en: 0x%x\n",
+ ovs_cap->dev_ovs_cap.fake_vf_max_pctx,
+ ovs_cap->dev_ovs_cap.dynamic_qp_en);
+}
+
+static void parse_ppa_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct ppa_service_cap *dip_cap = &cap->ppa_cap;
+
+ dip_cap->qpc_fake_vf_ctx_num = dev_cap->fake_vf_max_pctx;
+ dip_cap->qpc_fake_vf_start = dev_cap->fake_vf_start_id;
+ dip_cap->qpc_fake_vf_num = dev_cap->fake_vf_num;
+ dip_cap->bloomfilter_en = dev_cap->fake_vf_bfilter_len ? 1 : 0;
+ dip_cap->bloomfilter_length = dev_cap->fake_vf_bfilter_len;
+ sdk_info(hwdev->dev_hdl,
+ "Get ppa resource capbility, fake_vf_start_id: 0x%x, fake_vf_num: 0x%x, fake_vf_max_qpc: 0x%x\n",
+ dip_cap->qpc_fake_vf_start,
+ dip_cap->qpc_fake_vf_num,
+ dip_cap->qpc_fake_vf_ctx_num);
+}
+
+static void parse_toe_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct dev_toe_svc_cap *toe_cap = &cap->toe_cap.dev_toe_cap;
+
+ toe_cap->max_pctxs = dev_cap->toe_max_pctx;
+ toe_cap->max_cqs = dev_cap->toe_max_cq;
+ toe_cap->max_srqs = dev_cap->toe_max_srq;
+ toe_cap->srq_id_start = dev_cap->toe_srq_id_start;
+ toe_cap->max_mpts = dev_cap->toe_max_mpt;
+ toe_cap->max_cctxt = dev_cap->toe_max_cctxt;
+
+ sdk_info(hwdev->dev_hdl,
+ "Get toe resource capbility, max_pctxs: 0x%x, max_cqs: 0x%x, max_srqs: 0x%x, srq_id_start: 0x%x, max_mpts: 0x%x\n",
+ toe_cap->max_pctxs, toe_cap->max_cqs, toe_cap->max_srqs,
+ toe_cap->srq_id_start, toe_cap->max_mpts);
+}
+
+static void parse_ipsec_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct ipsec_service_cap *ipsec_cap = &cap->ipsec_cap;
+
+ ipsec_cap->dev_ipsec_cap.max_sactxs = dev_cap->ipsec_max_sactx;
+ ipsec_cap->dev_ipsec_cap.max_cqs = dev_cap->ipsec_max_cq;
+
+ sdk_info(hwdev->dev_hdl, "Get IPsec resource capbility, max_sactxs: 0x%x, max_cqs: 0x%x\n",
+ dev_cap->ipsec_max_sactx, dev_cap->ipsec_max_cq);
+}
+
+static void parse_vbs_res_cap(struct hinic3_hwdev *hwdev,
+ struct service_cap *cap,
+ struct cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct vbs_service_cap *vbs_cap = &cap->vbs_cap;
+
+ vbs_cap->vbs_max_volq = dev_cap->vbs_max_volq;
+
+ sdk_info(hwdev->dev_hdl, "Get VBS resource capbility, vbs_max_volq: 0x%x\n",
+ dev_cap->vbs_max_volq);
+}
+
+static void parse_dev_cap(struct hinic3_hwdev *dev,
+ struct cfg_cmd_dev_cap *dev_cap, enum func_type type)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+
+ /* Public resource */
+ parse_pub_res_cap(dev, cap, dev_cap, type);
+
+ /* PPF managed dynamic resource */
+ if (type == TYPE_PPF)
+ parse_dynamic_share_res_cap(cap, dev_cap);
+
+ /* L2 NIC resource */
+ if (IS_NIC_TYPE(dev))
+ parse_l2nic_res_cap(dev, cap, dev_cap, type);
+
+ /* FC without virtualization */
+ if (type == TYPE_PF || type == TYPE_PPF) {
+ if (IS_FC_TYPE(dev))
+ parse_fc_res_cap(dev, cap, dev_cap, type);
+ }
+
+ /* toe resource */
+ if (IS_TOE_TYPE(dev))
+ parse_toe_res_cap(dev, cap, dev_cap, type);
+
+ /* mtt cache line */
+ if (IS_RDMA_ENABLE(dev))
+ parse_rdma_res_cap(dev, cap, dev_cap, type);
+
+ /* RoCE resource */
+ if (IS_ROCE_TYPE(dev))
+ parse_roce_res_cap(dev, cap, dev_cap, type);
+
+ if (IS_OVS_TYPE(dev))
+ parse_ovs_res_cap(dev, cap, dev_cap, type);
+
+ if (IS_IPSEC_TYPE(dev))
+ parse_ipsec_res_cap(dev, cap, dev_cap, type);
+
+ if (IS_PPA_TYPE(dev))
+ parse_ppa_res_cap(dev, cap, dev_cap, type);
+
+ if (IS_VBS_TYPE(dev))
+ parse_vbs_res_cap(dev, cap, dev_cap, type);
+}
+
+static int get_cap_from_fw(struct hinic3_hwdev *dev, enum func_type type)
+{
+ struct cfg_cmd_dev_cap dev_cap;
+ u16 out_len = sizeof(dev_cap);
+ int err;
+
+ memset(&dev_cap, 0, sizeof(dev_cap));
+ dev_cap.func_id = hinic3_global_func_id(dev);
+ sdk_info(dev->dev_hdl, "Get cap from fw, func_idx: %u\n",
+ dev_cap.func_id);
+
+ err = hinic3_msg_to_mgmt_sync(dev, HINIC3_MOD_CFGM, CFG_CMD_GET_DEV_CAP,
+ &dev_cap, sizeof(dev_cap),
+ &dev_cap, &out_len, 0,
+ HINIC3_CHANNEL_COMM);
+ if (err || dev_cap.head.status || !out_len) {
+ sdk_err(dev->dev_hdl,
+ "Failed to get capability from FW, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, dev_cap.head.status, out_len);
+ return -EIO;
+ }
+
+ parse_dev_cap(dev, &dev_cap, type);
+
+ return 0;
+}
+
+static int hinic3_get_dev_cap(struct hinic3_hwdev *dev)
+{
+ enum func_type type = HINIC3_FUNC_TYPE(dev);
+ int err;
+
+ switch (type) {
+ case TYPE_PF:
+ case TYPE_PPF:
+ case TYPE_VF:
+ err = get_cap_from_fw(dev, type);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Failed to get PF/PPF capability\n");
+ return err;
+ }
+ break;
+ default:
+ sdk_err(dev->dev_hdl, "Unsupported PCI Function type: %d\n",
+ type);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_get_ppf_timer_cfg(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct cfg_cmd_host_timer cfg_host_timer;
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ u16 out_len = sizeof(cfg_host_timer);
+ int err;
+
+ memset(&cfg_host_timer, 0, sizeof(cfg_host_timer));
+ cfg_host_timer.host_id = dev->cfg_mgmt->svc_cap.host_id;
+
+ err = hinic3_msg_to_mgmt_sync(dev, HINIC3_MOD_CFGM, CFG_CMD_GET_HOST_TIMER,
+ &cfg_host_timer, sizeof(cfg_host_timer),
+ &cfg_host_timer, &out_len, 0,
+ HINIC3_CHANNEL_COMM);
+ if (err || cfg_host_timer.head.status || !out_len) {
+ sdk_err(dev->dev_hdl,
+ "Failed to get host timer cfg from FW, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, cfg_host_timer.head.status, out_len);
+ return -EIO;
+ }
+
+ cap->timer_pf_id_start = cfg_host_timer.timer_pf_id_start;
+ cap->timer_pf_num = cfg_host_timer.timer_pf_num;
+ cap->timer_vf_id_start = cfg_host_timer.timer_vf_id_start;
+ cap->timer_vf_num = cfg_host_timer.timer_vf_num;
+
+ return 0;
+}
+
+static void nic_param_fix(struct hinic3_hwdev *dev)
+{
+}
+
+static void rdma_mtt_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct rdma_service_cap *rdma_cap = &cap->rdma_cap;
+
+ rdma_cap->log_mtt = LOG_MTT_SEG;
+ rdma_cap->log_mtt_seg = LOG_MTT_SEG;
+ rdma_cap->mtt_entry_sz = MTT_ENTRY_SZ;
+ rdma_cap->mpt_entry_sz = RDMA_MPT_ENTRY_SZ;
+ rdma_cap->num_mtts = RDMA_NUM_MTTS;
+}
+
+static void rdma_param_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct rdma_service_cap *rdma_cap = &cap->rdma_cap;
+ struct dev_roce_svc_own_cap *roce_cap =
+ &rdma_cap->dev_rdma_cap.roce_own_cap;
+
+ rdma_cap->log_mtt = LOG_MTT_SEG;
+ rdma_cap->log_rdmarc = LOG_RDMARC_SEG;
+ rdma_cap->reserved_qps = RDMA_RSVD_QPS;
+ rdma_cap->max_sq_sg = RDMA_MAX_SQ_SGE;
+
+ /* RoCE */
+ if (IS_ROCE_TYPE(dev)) {
+ roce_cap->qpc_entry_sz = ROCE_QPC_ENTRY_SZ;
+ roce_cap->max_wqes = ROCE_MAX_WQES;
+ roce_cap->max_rq_sg = ROCE_MAX_RQ_SGE;
+ roce_cap->max_sq_inline_data_sz = ROCE_MAX_SQ_INLINE_DATA_SZ;
+ roce_cap->max_rq_desc_sz = ROCE_MAX_RQ_DESC_SZ;
+ roce_cap->rdmarc_entry_sz = ROCE_RDMARC_ENTRY_SZ;
+ roce_cap->max_qp_init_rdma = ROCE_MAX_QP_INIT_RDMA;
+ roce_cap->max_qp_dest_rdma = ROCE_MAX_QP_DEST_RDMA;
+ roce_cap->max_srq_wqes = ROCE_MAX_SRQ_WQES;
+ roce_cap->reserved_srqs = ROCE_RSVD_SRQS;
+ roce_cap->max_srq_sge = ROCE_MAX_SRQ_SGE;
+ roce_cap->srqc_entry_sz = ROCE_SRQC_ENTERY_SZ;
+ roce_cap->max_msg_sz = ROCE_MAX_MSG_SZ;
+ }
+
+ rdma_cap->max_sq_desc_sz = RDMA_MAX_SQ_DESC_SZ;
+ rdma_cap->wqebb_size = WQEBB_SZ;
+ rdma_cap->max_cqes = RDMA_MAX_CQES;
+ rdma_cap->reserved_cqs = RDMA_RSVD_CQS;
+ rdma_cap->cqc_entry_sz = RDMA_CQC_ENTRY_SZ;
+ rdma_cap->cqe_size = RDMA_CQE_SZ;
+ rdma_cap->reserved_mrws = RDMA_RSVD_MRWS;
+ rdma_cap->mpt_entry_sz = RDMA_MPT_ENTRY_SZ;
+
+ /* 2^8 - 1
+ * +------------------------+-----------+
+ * | 4B | 1M(20b) | Key(8b) |
+ * +------------------------+-----------+
+ * key = 8-bit key + 24-bit index;
+ * the Lkey of an SGE now uses 2 bits (bit31 and bit30), leaving only
+ * 10 bits for the key, so we keep using the original 8 bits directly
+ * for simplicity
+ */
+ rdma_cap->max_fmr_maps = 0xff;
+ rdma_cap->num_mtts = RDMA_NUM_MTTS;
+ rdma_cap->log_mtt_seg = LOG_MTT_SEG;
+ rdma_cap->mtt_entry_sz = MTT_ENTRY_SZ;
+ rdma_cap->log_rdmarc_seg = LOG_RDMARC_SEG;
+ rdma_cap->local_ca_ack_delay = LOCAL_ACK_DELAY;
+ rdma_cap->num_ports = RDMA_NUM_PORTS;
+ rdma_cap->db_page_size = DB_PAGE_SZ;
+ rdma_cap->direct_wqe_size = DWQE_SZ;
+ rdma_cap->num_pds = NUM_PD;
+ rdma_cap->reserved_pds = RSVD_PD;
+ rdma_cap->max_xrcds = MAX_XRCDS;
+ rdma_cap->reserved_xrcds = RSVD_XRCDS;
+ rdma_cap->max_gid_per_port = MAX_GID_PER_PORT;
+ rdma_cap->gid_entry_sz = GID_ENTRY_SZ;
+ rdma_cap->reserved_lkey = RSVD_LKEY;
+ rdma_cap->num_comp_vectors = (u32)dev->cfg_mgmt->eq_info.num_ceq;
+ rdma_cap->page_size_cap = PAGE_SZ_CAP;
+ rdma_cap->flags = (RDMA_BMME_FLAG_LOCAL_INV |
+ RDMA_BMME_FLAG_REMOTE_INV |
+ RDMA_BMME_FLAG_FAST_REG_WR |
+ RDMA_DEV_CAP_FLAG_XRC |
+ RDMA_DEV_CAP_FLAG_MEM_WINDOW |
+ RDMA_BMME_FLAG_TYPE_2_WIN |
+ RDMA_BMME_FLAG_WIN_TYPE_2B |
+ RDMA_DEV_CAP_FLAG_ATOMIC);
+ rdma_cap->max_frpl_len = MAX_FRPL_LEN;
+ rdma_cap->max_pkeys = MAX_PKEYS;
+}
+
+static void toe_param_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct toe_service_cap *toe_cap = &cap->toe_cap;
+
+ toe_cap->pctx_sz = TOE_PCTX_SZ;
+ toe_cap->scqc_sz = TOE_CQC_SZ;
+}
+
+static void ovs_param_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct ovs_service_cap *ovs_cap = &cap->ovs_cap;
+
+ ovs_cap->pctx_sz = OVS_PCTX_SZ;
+}
+
+static void ppa_param_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct ppa_service_cap *ppa_cap = &cap->ppa_cap;
+
+ ppa_cap->pctx_sz = PPA_PCTX_SZ;
+}
+
+static void fc_param_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct fc_service_cap *fc_cap = &cap->fc_cap;
+
+ fc_cap->parent_qpc_size = FC_PCTX_SZ;
+ fc_cap->child_qpc_size = FC_CCTX_SZ;
+ fc_cap->sqe_size = FC_SQE_SZ;
+
+ fc_cap->scqc_size = FC_SCQC_SZ;
+ fc_cap->scqe_size = FC_SCQE_SZ;
+
+ fc_cap->srqc_size = FC_SRQC_SZ;
+ fc_cap->srqe_size = FC_SRQE_SZ;
+}
+
+static void ipsec_param_fix(struct hinic3_hwdev *dev)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+ struct ipsec_service_cap *ipsec_cap = &cap->ipsec_cap;
+
+ ipsec_cap->sactx_sz = IPSEC_SACTX_SZ;
+}
+
+static void init_service_param(struct hinic3_hwdev *dev)
+{
+ if (IS_NIC_TYPE(dev))
+ nic_param_fix(dev);
+ if (IS_RDMA_ENABLE(dev))
+ rdma_mtt_fix(dev);
+ if (IS_ROCE_TYPE(dev))
+ rdma_param_fix(dev);
+ if (IS_FC_TYPE(dev))
+ fc_param_fix(dev);
+ if (IS_TOE_TYPE(dev))
+ toe_param_fix(dev);
+ if (IS_OVS_TYPE(dev))
+ ovs_param_fix(dev);
+ if (IS_IPSEC_TYPE(dev))
+ ipsec_param_fix(dev);
+ if (IS_PPA_TYPE(dev))
+ ppa_param_fix(dev);
+}
+
+static void cfg_get_eq_num(struct hinic3_hwdev *dev)
+{
+ struct cfg_eq_info *eq_info = &dev->cfg_mgmt->eq_info;
+
+ eq_info->num_ceq = dev->hwif->attr.num_ceqs;
+ eq_info->num_ceq_remain = eq_info->num_ceq;
+}
+
+static int cfg_init_eq(struct hinic3_hwdev *dev)
+{
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+ struct cfg_eq *eq = NULL;
+ u8 num_ceq, i = 0;
+
+ cfg_get_eq_num(dev);
+ num_ceq = cfg_mgmt->eq_info.num_ceq;
+
+ sdk_info(dev->dev_hdl, "Cfg mgmt: ceqs=0x%x, remain=0x%x\n",
+ cfg_mgmt->eq_info.num_ceq, cfg_mgmt->eq_info.num_ceq_remain);
+
+ if (!num_ceq) {
+ sdk_err(dev->dev_hdl, "Ceq num cfg in fw is zero\n");
+ return -EFAULT;
+ }
+
+ eq = kcalloc(num_ceq, sizeof(*eq), GFP_KERNEL);
+ if (!eq)
+ return -ENOMEM;
+
+ for (i = 0; i < num_ceq; ++i) {
+ eq[i].eqn = i;
+ eq[i].free = CFG_FREE;
+ eq[i].type = SERVICE_T_MAX;
+ }
+
+ cfg_mgmt->eq_info.eq = eq;
+
+ mutex_init(&cfg_mgmt->eq_info.eq_mutex);
+
+ return 0;
+}
+
+int hinic3_vector_to_eqn(void *hwdev, enum hinic3_service_type type, int vector)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct cfg_mgmt_info *cfg_mgmt = NULL;
+ struct cfg_eq *eq = NULL;
+ int eqn = -EINVAL;
+ int vector_num = vector;
+
+ if (!hwdev || vector < 0)
+ return -EINVAL;
+
+ if (type != SERVICE_T_ROCE) {
+ sdk_err(dev->dev_hdl,
+ "Service type :%d, only RDMA service could get eqn by vector.\n",
+ type);
+ return -EINVAL;
+ }
+
+ cfg_mgmt = dev->cfg_mgmt;
+ vector_num = (vector_num % cfg_mgmt->eq_info.num_ceq) + CFG_RDMA_CEQ_BASE;
+
+ eq = cfg_mgmt->eq_info.eq;
+ if (eq[vector_num].type == SERVICE_T_ROCE && eq[vector_num].free == CFG_BUSY)
+ eqn = eq[vector_num].eqn;
+
+ return eqn;
+}
+EXPORT_SYMBOL(hinic3_vector_to_eqn);
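+
+/*
+ * Worked mapping example (num_ceq = 8 is an assumed value): vector 10
+ * resolves to slot (10 % 8) + CFG_RDMA_CEQ_BASE = CFG_RDMA_CEQ_BASE + 2,
+ * and the eqn stored there is returned only when that slot is busy and
+ * owned by the RoCE service.
+ */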
+
+static int cfg_init_interrupt(struct hinic3_hwdev *dev)
+{
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+ struct cfg_irq_info *irq_info = &cfg_mgmt->irq_param_info;
+ u16 intr_num = dev->hwif->attr.num_irqs;
+ u16 intr_needed = dev->hwif->attr.msix_flex_en ? (dev->hwif->attr.num_aeqs +
+ dev->hwif->attr.num_ceqs + dev->hwif->attr.num_sq) : intr_num;
+
+ if (!intr_num) {
+ sdk_err(dev->dev_hdl, "Irq num cfg in fw is zero, msix_flex_en %d\n",
+ dev->hwif->attr.msix_flex_en);
+ return -EFAULT;
+ }
+
+ if (intr_needed > intr_num) {
+ sdk_warn(dev->dev_hdl, "Irq num cfg(%d) is less than the needed irq num(%d) msix_flex_en %d\n",
+ intr_num, intr_needed, dev->hwif->attr.msix_flex_en);
+ intr_needed = intr_num;
+ }
+
+ irq_info->alloc_info = kcalloc(intr_num, sizeof(*irq_info->alloc_info),
+ GFP_KERNEL);
+ if (!irq_info->alloc_info)
+ return -ENOMEM;
+
+ irq_info->num_irq_hw = intr_needed;
+ /* Production requires that VFs only support MSI-X */
+ if (HINIC3_FUNC_TYPE(dev) == TYPE_VF)
+ cfg_mgmt->svc_cap.interrupt_type = INTR_TYPE_MSIX;
+ else
+ cfg_mgmt->svc_cap.interrupt_type = 0;
+
+ mutex_init(&irq_info->irq_mutex);
+ return 0;
+}
+
+static int cfg_enable_interrupt(struct hinic3_hwdev *dev)
+{
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+ u16 nreq = cfg_mgmt->irq_param_info.num_irq_hw;
+
+ void *pcidev = dev->pcidev_hdl;
+ struct irq_alloc_info_st *irq_info = NULL;
+ struct msix_entry *entry = NULL;
+ u16 i = 0;
+ int actual_irq;
+
+ irq_info = cfg_mgmt->irq_param_info.alloc_info;
+
+ sdk_info(dev->dev_hdl, "Interrupt type: %u, irq num: %u.\n",
+ cfg_mgmt->svc_cap.interrupt_type, nreq);
+
+ switch (cfg_mgmt->svc_cap.interrupt_type) {
+ case INTR_TYPE_MSIX:
+ if (!nreq) {
+ sdk_err(dev->dev_hdl, "Interrupt number cannot be zero\n");
+ return -EINVAL;
+ }
+ entry = kcalloc(nreq, sizeof(*entry), GFP_KERNEL);
+ if (!entry)
+ return -ENOMEM;
+
+ for (i = 0; i < nreq; i++)
+ entry[i].entry = i;
+
+ actual_irq = pci_enable_msix_range(pcidev, entry,
+ VECTOR_THRESHOLD, nreq);
+ if (actual_irq < 0) {
+ sdk_err(dev->dev_hdl, "Alloc msix entries with threshold 2 failed. actual_irq: %d\n",
+ actual_irq);
+ kfree(entry);
+ return -ENOMEM;
+ }
+
+ nreq = (u16)actual_irq;
+ cfg_mgmt->irq_param_info.num_total = nreq;
+ cfg_mgmt->irq_param_info.num_irq_remain = nreq;
+ sdk_info(dev->dev_hdl, "Request %u msix vector success.\n",
+ nreq);
+
+ for (i = 0; i < nreq; ++i) {
+ /* u16 driver uses to specify entry, OS writes */
+ irq_info[i].info.msix_entry_idx = entry[i].entry;
+ /* u32 kernel uses to write allocated vector */
+ irq_info[i].info.irq_id = entry[i].vector;
+ irq_info[i].type = SERVICE_T_MAX;
+ irq_info[i].free = CFG_FREE;
+ }
+
+ kfree(entry);
+
+ break;
+
+ default:
+ sdk_err(dev->dev_hdl, "Unsupport interrupt type %d\n",
+ cfg_mgmt->svc_cap.interrupt_type);
+ break;
+ }
+
+ return 0;
+}
+
+int hinic3_alloc_irqs(void *hwdev, enum hinic3_service_type type, u16 num,
+ struct irq_info *irq_info_array, u16 *act_num)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct cfg_mgmt_info *cfg_mgmt = NULL;
+ struct cfg_irq_info *irq_info = NULL;
+ struct irq_alloc_info_st *alloc_info = NULL;
+ int max_num_irq;
+ u16 free_num_irq;
+ int i, j;
+ u16 num_new = num;
+
+ if (!hwdev || !irq_info_array || !act_num)
+ return -EINVAL;
+
+ cfg_mgmt = dev->cfg_mgmt;
+ irq_info = &cfg_mgmt->irq_param_info;
+ alloc_info = irq_info->alloc_info;
+ max_num_irq = irq_info->num_total;
+ free_num_irq = irq_info->num_irq_remain;
+
+ mutex_lock(&irq_info->irq_mutex);
+
+ if (num > free_num_irq) {
+ if (free_num_irq == 0) {
+ sdk_err(dev->dev_hdl, "no free irq resource in cfg mgmt.\n");
+ mutex_unlock(&irq_info->irq_mutex);
+ return -ENOMEM;
+ }
+
+ sdk_warn(dev->dev_hdl, "only %u irq resource in cfg mgmt.\n", free_num_irq);
+ num_new = free_num_irq;
+ }
+
+ *act_num = 0;
+
+ for (i = 0; i < num_new; i++) {
+ for (j = 0; j < max_num_irq; j++) {
+ if (alloc_info[j].free == CFG_FREE) {
+ if (irq_info->num_irq_remain == 0) {
+ sdk_err(dev->dev_hdl, "No free irq resource in cfg mgmt\n");
+ mutex_unlock(&irq_info->irq_mutex);
+ return -EINVAL;
+ }
+ alloc_info[j].type = type;
+ alloc_info[j].free = CFG_BUSY;
+
+ irq_info_array[i].msix_entry_idx =
+ alloc_info[j].info.msix_entry_idx;
+ irq_info_array[i].irq_id = alloc_info[j].info.irq_id;
+ (*act_num)++;
+ irq_info->num_irq_remain--;
+
+ break;
+ }
+ }
+ }
+
+ mutex_unlock(&irq_info->irq_mutex);
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_alloc_irqs);
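+
+/*
+ * Illustrative allocation through the exported helper above; the caller
+ * and the requested count are assumptions for the example:
+ *
+ *	struct irq_info irqs[2];
+ *	u16 got = 0;
+ *
+ *	if (!hinic3_alloc_irqs(hwdev, SERVICE_T_ROCE, 2, irqs, &got))
+ *		... use irqs[0..got - 1], then hinic3_free_irq() each ...
+ */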
+
+void hinic3_free_irq(void *hwdev, enum hinic3_service_type type, u32 irq_id)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct cfg_mgmt_info *cfg_mgmt = NULL;
+ struct cfg_irq_info *irq_info = NULL;
+ struct irq_alloc_info_st *alloc_info = NULL;
+ int max_num_irq;
+ int i;
+
+ if (!hwdev)
+ return;
+
+ cfg_mgmt = dev->cfg_mgmt;
+ irq_info = &cfg_mgmt->irq_param_info;
+ alloc_info = irq_info->alloc_info;
+ max_num_irq = irq_info->num_total;
+
+ mutex_lock(&irq_info->irq_mutex);
+
+ for (i = 0; i < max_num_irq; i++) {
+ if (irq_id == alloc_info[i].info.irq_id &&
+ type == alloc_info[i].type) {
+ if (alloc_info[i].free == CFG_BUSY) {
+ alloc_info[i].free = CFG_FREE;
+ irq_info->num_irq_remain++;
+ if (irq_info->num_irq_remain > max_num_irq) {
+ sdk_err(dev->dev_hdl, "Find target,but over range\n");
+ mutex_unlock(&irq_info->irq_mutex);
+ return;
+ }
+ break;
+ }
+ }
+ }
+
+ if (i >= max_num_irq)
+ sdk_warn(dev->dev_hdl, "Irq %u don`t need to free\n", irq_id);
+
+ mutex_unlock(&irq_info->irq_mutex);
+}
+EXPORT_SYMBOL(hinic3_free_irq);
+
+int hinic3_alloc_ceqs(void *hwdev, enum hinic3_service_type type, int num,
+ int *ceq_id_array, int *act_num)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct cfg_mgmt_info *cfg_mgmt = NULL;
+ struct cfg_eq_info *eq = NULL;
+ int free_ceq;
+ int i, j;
+ int num_new = num;
+
+ if (!hwdev || !ceq_id_array || !act_num)
+ return -EINVAL;
+
+ cfg_mgmt = dev->cfg_mgmt;
+ eq = &cfg_mgmt->eq_info;
+ free_ceq = eq->num_ceq_remain;
+
+ mutex_lock(&eq->eq_mutex);
+
+ if (num > free_ceq) {
+ if (free_ceq <= 0) {
+ sdk_err(dev->dev_hdl, "No free ceq resource in cfg mgmt\n");
+ mutex_unlock(&eq->eq_mutex);
+ return -ENOMEM;
+ }
+
+ sdk_warn(dev->dev_hdl, "Only %d ceq resource in cfg mgmt\n",
+ free_ceq);
+ }
+
+ *act_num = 0;
+
+ num_new = min(num_new, eq->num_ceq - CFG_RDMA_CEQ_BASE);
+ for (i = 0; i < num_new; i++) {
+ if (eq->num_ceq_remain == 0) {
+			sdk_warn(dev->dev_hdl, "Allocated %d ceqs, less than the requested %d ceqs\n",
+				 *act_num, num_new);
+ mutex_unlock(&eq->eq_mutex);
+ return 0;
+ }
+
+ for (j = CFG_RDMA_CEQ_BASE; j < eq->num_ceq; j++) {
+ if (eq->eq[j].free == CFG_FREE) {
+ eq->eq[j].type = type;
+ eq->eq[j].free = CFG_BUSY;
+ eq->num_ceq_remain--;
+ ceq_id_array[i] = eq->eq[j].eqn;
+ (*act_num)++;
+ break;
+ }
+ }
+ }
+
+ mutex_unlock(&eq->eq_mutex);
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_alloc_ceqs);
+
+void hinic3_free_ceq(void *hwdev, enum hinic3_service_type type, int ceq_id)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct cfg_mgmt_info *cfg_mgmt = NULL;
+ struct cfg_eq_info *eq = NULL;
+ u8 num_ceq;
+ u8 i = 0;
+
+ if (!hwdev)
+ return;
+
+ cfg_mgmt = dev->cfg_mgmt;
+ eq = &cfg_mgmt->eq_info;
+ num_ceq = eq->num_ceq;
+
+ mutex_lock(&eq->eq_mutex);
+
+ for (i = 0; i < num_ceq; i++) {
+ if (ceq_id == eq->eq[i].eqn &&
+ type == cfg_mgmt->eq_info.eq[i].type) {
+ if (eq->eq[i].free == CFG_BUSY) {
+ eq->eq[i].free = CFG_FREE;
+ eq->num_ceq_remain++;
+ if (eq->num_ceq_remain > num_ceq)
+ eq->num_ceq_remain %= num_ceq;
+
+ mutex_unlock(&eq->eq_mutex);
+ return;
+ }
+ }
+ }
+
+ if (i >= num_ceq)
+		sdk_warn(dev->dev_hdl, "Ceq %d doesn't need to be freed.\n", ceq_id);
+
+ mutex_unlock(&eq->eq_mutex);
+}
+EXPORT_SYMBOL(hinic3_free_ceq);
+
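+/*
+ * init_cfg_mgmt() builds the resource bookkeeping in three steps - CEQ
+ * table, interrupt table, then interrupt enablement - and unwinds them in
+ * reverse order on failure, mirroring the teardown in free_cfg_mgmt().
+ */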
+int init_cfg_mgmt(struct hinic3_hwdev *dev)
+{
+ int err;
+ struct cfg_mgmt_info *cfg_mgmt;
+
+ cfg_mgmt = kzalloc(sizeof(*cfg_mgmt), GFP_KERNEL);
+ if (!cfg_mgmt)
+ return -ENOMEM;
+
+ dev->cfg_mgmt = cfg_mgmt;
+ cfg_mgmt->hwdev = dev;
+
+ err = cfg_init_eq(dev);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Failed to init cfg event queue, err: %d\n",
+ err);
+ goto free_mgmt_mem;
+ }
+
+ err = cfg_init_interrupt(dev);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Failed to init cfg interrupt, err: %d\n",
+ err);
+ goto free_eq_mem;
+ }
+
+ err = cfg_enable_interrupt(dev);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Failed to enable cfg interrupt, err: %d\n",
+ err);
+ goto free_interrupt_mem;
+ }
+
+ return 0;
+
+free_interrupt_mem:
+ kfree(cfg_mgmt->irq_param_info.alloc_info);
+ mutex_deinit(&((cfg_mgmt->irq_param_info).irq_mutex));
+ cfg_mgmt->irq_param_info.alloc_info = NULL;
+
+free_eq_mem:
+ kfree(cfg_mgmt->eq_info.eq);
+ mutex_deinit(&cfg_mgmt->eq_info.eq_mutex);
+ cfg_mgmt->eq_info.eq = NULL;
+
+free_mgmt_mem:
+ kfree(cfg_mgmt);
+ return err;
+}
+
+void free_cfg_mgmt(struct hinic3_hwdev *dev)
+{
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+
+	/* check whether all allocated resources have been recycled */
+ if (cfg_mgmt->irq_param_info.num_irq_remain !=
+ cfg_mgmt->irq_param_info.num_total ||
+ cfg_mgmt->eq_info.num_ceq_remain != cfg_mgmt->eq_info.num_ceq)
+ sdk_err(dev->dev_hdl, "Can't reclaim all irq and event queue, please check\n");
+
+ switch (cfg_mgmt->svc_cap.interrupt_type) {
+ case INTR_TYPE_MSIX:
+ pci_disable_msix(dev->pcidev_hdl);
+ break;
+
+ case INTR_TYPE_MSI:
+ pci_disable_msi(dev->pcidev_hdl);
+ break;
+
+ case INTR_TYPE_INT:
+ default:
+ break;
+ }
+
+ kfree(cfg_mgmt->irq_param_info.alloc_info);
+ cfg_mgmt->irq_param_info.alloc_info = NULL;
+ mutex_deinit(&((cfg_mgmt->irq_param_info).irq_mutex));
+
+ kfree(cfg_mgmt->eq_info.eq);
+ cfg_mgmt->eq_info.eq = NULL;
+ mutex_deinit(&cfg_mgmt->eq_info.eq_mutex);
+
+ kfree(cfg_mgmt);
+}
+
+int init_capability(struct hinic3_hwdev *dev)
+{
+ int err;
+ struct cfg_mgmt_info *cfg_mgmt = dev->cfg_mgmt;
+
+ cfg_mgmt->svc_cap.sf_svc_attr.ft_pf_en = false;
+ cfg_mgmt->svc_cap.sf_svc_attr.rdma_pf_en = false;
+
+ err = hinic3_get_dev_cap(dev);
+ if (err != 0)
+ return err;
+
+ init_service_param(dev);
+
+ sdk_info(dev->dev_hdl, "Init capability success\n");
+ return 0;
+}
+
+void free_capability(struct hinic3_hwdev *dev)
+{
+	sdk_info(dev->dev_hdl, "Free capability success\n");
+}
+
+bool hinic3_support_nic(void *hwdev, struct nic_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_NIC_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.nic_cap, sizeof(*cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_nic);
+
+bool hinic3_support_ppa(void *hwdev, struct ppa_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_PPA_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.ppa_cap, sizeof(*cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_ppa);
+
+bool hinic3_support_migr(void *hwdev, struct migr_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_MIGR_TYPE(dev))
+ return false;
+
+ if (cap)
+ cap->master_host_id = dev->cfg_mgmt->svc_cap.master_host_id;
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_migr);
+
+bool hinic3_support_ipsec(void *hwdev, struct ipsec_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_IPSEC_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.ipsec_cap, sizeof(*cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_ipsec);
+
+bool hinic3_support_roce(void *hwdev, struct rdma_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_ROCE_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.rdma_cap, sizeof(*cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_roce);
+
+bool hinic3_support_fc(void *hwdev, struct fc_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_FC_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.fc_cap, sizeof(*cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_fc);
+
+bool hinic3_support_rdma(void *hwdev, struct rdma_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_RDMA_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.rdma_cap, sizeof(*cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_rdma);
+
+bool hinic3_support_ovs(void *hwdev, struct ovs_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_OVS_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.ovs_cap, sizeof(*cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_ovs);
+
+bool hinic3_support_vbs(void *hwdev, struct vbs_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_VBS_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.vbs_cap, sizeof(*cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_vbs);
+
+/* Only the PPF supports it; the PF does not */
+bool hinic3_support_toe(void *hwdev, struct toe_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_TOE_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.toe_cap, sizeof(*cap));
+
+ return true;
+}
+EXPORT_SYMBOL(hinic3_support_toe);
+
+bool hinic3_func_for_mgmt(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (dev->cfg_mgmt->svc_cap.chip_svc_type)
+ return false;
+ else
+ return true;
+}
+
+bool hinic3_get_stateful_enable(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ return dev->cfg_mgmt->svc_cap.sf_en;
+}
+EXPORT_SYMBOL(hinic3_get_stateful_enable);
+
+u8 hinic3_host_oq_id_mask(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting host oq id mask\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.host_oq_id_mask_val;
+}
+EXPORT_SYMBOL(hinic3_host_oq_id_mask);
+
+u8 hinic3_host_id(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting host id\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.host_id;
+}
+EXPORT_SYMBOL(hinic3_host_id);
+
+u16 hinic3_host_total_func(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting host total function number\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.host_total_function;
+}
+EXPORT_SYMBOL(hinic3_host_total_func);
+
+u16 hinic3_func_max_qnum(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting function max queue number\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.nic_cap.max_sqs;
+}
+EXPORT_SYMBOL(hinic3_func_max_qnum);
+
+u16 hinic3_func_max_nic_qnum(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting function max queue number\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.nic_cap.max_sqs;
+}
+EXPORT_SYMBOL(hinic3_func_max_nic_qnum);
+
+u8 hinic3_ep_id(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting ep id\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.ep_id;
+}
+EXPORT_SYMBOL(hinic3_ep_id);
+
+u8 hinic3_er_id(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting er id\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.er_id;
+}
+EXPORT_SYMBOL(hinic3_er_id);
+
+u8 hinic3_physical_port_id(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting physical port id\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.port_id;
+}
+EXPORT_SYMBOL(hinic3_physical_port_id);
+
+u16 hinic3_func_max_vf(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting max vf number\n");
+ return 0;
+ }
+ return dev->cfg_mgmt->svc_cap.max_vf;
+}
+EXPORT_SYMBOL(hinic3_func_max_vf);
+
+int hinic3_cos_valid_bitmap(void *hwdev, u8 *func_dft_cos, u8 *port_cos_bitmap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting cos valid bitmap\n");
+ return 1;
+ }
+ *func_dft_cos = dev->cfg_mgmt->svc_cap.cos_valid_bitmap;
+ *port_cos_bitmap = dev->cfg_mgmt->svc_cap.port_cos_valid_bitmap;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_cos_valid_bitmap);
+
+void hinic3_shutdown_hwdev(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return;
+
+ if (IS_SLAVE_HOST(dev))
+ set_slave_host_enable(hwdev, hinic3_pcie_itf_id(hwdev), false);
+}
+
+u32 hinic3_host_pf_num(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting pf number capability\n");
+ return 0;
+ }
+
+ return dev->cfg_mgmt->svc_cap.pf_num;
+}
+EXPORT_SYMBOL(hinic3_host_pf_num);
+
+u32 hinic3_host_pf_id_start(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for getting pf id start capability\n");
+ return 0;
+ }
+
+ return dev->cfg_mgmt->svc_cap.pf_id_start;
+}
+EXPORT_SYMBOL(hinic3_host_pf_id_start);
+
+u8 hinic3_flexq_en(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return 0;
+
+ return dev->cfg_mgmt->svc_cap.flexq_en;
+}
+EXPORT_SYMBOL(hinic3_flexq_en);
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.h
new file mode 100644
index 000000000000..0a27530ba522
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_cfg.h
@@ -0,0 +1,332 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_HW_CFG_H
+#define HINIC3_HW_CFG_H
+
+#include <linux/types.h>
+#include "cfg_mgt_comm_pub.h"
+#include "hinic3_hwdev.h"
+
+#define CFG_MAX_CMD_TIMEOUT 30000 /* ms */
+
+enum {
+ CFG_FREE = 0,
+ CFG_BUSY = 1
+};
+
+/* Start position for CEQ allocation; the max number of CEQs is 32 */
+/*lint -save -e849*/
+enum {
+ CFG_RDMA_CEQ_BASE = 0
+};
+
+/*lint -restore*/
+
+/* RDMA resource */
+#define K_UNIT BIT(10)
+#define M_UNIT BIT(20)
+#define G_UNIT BIT(30)
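+/* plain power-of-two size units (2^10, 2^20, 2^30); e.g. the ROCE limit
+ * below, 8 * K_UNIT - 1, works out to 8191 WQEs
+ */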
+
+#define VIRTIO_BASE_VQ_SIZE 2048U
+#define VIRTIO_DEFAULT_VQ_SIZE 8192U
+
+/* L2NIC */
+#define HINIC3_CFG_MAX_QP 256
+
+/* RDMA */
+#define RDMA_RSVD_QPS 2
+#define ROCE_MAX_WQES (8 * K_UNIT - 1)
+#define IWARP_MAX_WQES (8 * K_UNIT)
+
+#define RDMA_MAX_SQ_SGE 16
+
+#define ROCE_MAX_RQ_SGE 16
+
+/* if this value is changed, ROCE_MAX_WQE_BB_PER_WR must be changed synchronously */
+#define RDMA_MAX_SQ_DESC_SZ (256)
+
+/* (256B(cache_line_len) - 16B(ctrl_seg_len) - 48B(max_task_seg_len)) */
+#define ROCE_MAX_SQ_INLINE_DATA_SZ 192
+
+#define ROCE_MAX_RQ_DESC_SZ 256
+
+#define ROCE_QPC_ENTRY_SZ 512
+
+#define WQEBB_SZ 64
+
+#define ROCE_RDMARC_ENTRY_SZ 32
+#define ROCE_MAX_QP_INIT_RDMA 128
+#define ROCE_MAX_QP_DEST_RDMA 128
+
+#define ROCE_MAX_SRQ_WQES (16 * K_UNIT - 1)
+#define ROCE_RSVD_SRQS 0
+#define ROCE_MAX_SRQ_SGE 15
+#define ROCE_SRQC_ENTERY_SZ 64
+
+#define RDMA_MAX_CQES (8 * M_UNIT - 1)
+#define RDMA_RSVD_CQS 0
+
+#define RDMA_CQC_ENTRY_SZ 128
+
+#define RDMA_CQE_SZ 64
+#define RDMA_RSVD_MRWS 128
+#define RDMA_MPT_ENTRY_SZ 64
+#define RDMA_NUM_MTTS (1 * G_UNIT)
+#define LOG_MTT_SEG 5
+#define MTT_ENTRY_SZ 8
+#define LOG_RDMARC_SEG 3
+
+#define LOCAL_ACK_DELAY 15
+#define RDMA_NUM_PORTS 1
+#define ROCE_MAX_MSG_SZ (2 * G_UNIT)
+
+#define DB_PAGE_SZ (4 * K_UNIT)
+#define DWQE_SZ 256
+
+#define NUM_PD (128 * K_UNIT)
+#define RSVD_PD 0
+
+#define MAX_XRCDS (64 * K_UNIT)
+#define RSVD_XRCDS 0
+
+#define MAX_GID_PER_PORT 128
+#define GID_ENTRY_SZ 32
+#define RSVD_LKEY ((RDMA_RSVD_MRWS - 1) << 8)
+#define NUM_COMP_VECTORS 32
+#define PAGE_SZ_CAP ((1UL << 12) | (1UL << 16) | (1UL << 21))
+#define ROCE_MODE 1
+
+#define MAX_FRPL_LEN 511
+#define MAX_PKEYS 1
+
+/* ToE */
+#define TOE_PCTX_SZ 1024
+#define TOE_CQC_SZ 64
+
+/* IoE */
+#define IOE_PCTX_SZ 512
+
+/* FC */
+#define FC_PCTX_SZ 256
+#define FC_CCTX_SZ 256
+#define FC_SQE_SZ 128
+#define FC_SCQC_SZ 64
+#define FC_SCQE_SZ 64
+#define FC_SRQC_SZ 64
+#define FC_SRQE_SZ 32
+
+/* OVS */
+#define OVS_PCTX_SZ 512
+
+/* PPA */
+#define PPA_PCTX_SZ 512
+
+/* IPsec */
+#define IPSEC_SACTX_SZ 512
+
+struct dev_sf_svc_attr {
+	bool ft_en; /* business enable flag (not including RDMA) */
+	bool ft_pf_en; /* In FPGA test, whether the VF resource resides in
+			* the PF: 0 - VF, 1 - PF; VFs don't need this bit.
+			*/
+	bool rdma_en;
+	bool rdma_pf_en;/* In FPGA test, whether the VF RDMA resource resides
+			 * in the PF: 0 - VF, 1 - PF; VFs don't need this bit.
+			 */
+};
+
+enum intr_type {
+ INTR_TYPE_MSIX,
+ INTR_TYPE_MSI,
+ INTR_TYPE_INT,
+ INTR_TYPE_NONE,
+	/* PXE and OVS need single-threaded processing;
+	 * synchronous messages must use the poll-wait mechanism interface
+	 */
+};
+
+/* device capability */
+struct service_cap {
+ struct dev_sf_svc_attr sf_svc_attr;
+ u16 svc_type; /* user input service type */
+ u16 chip_svc_type; /* HW supported service type, reference to servic_bit_define_e */
+
+ u8 host_id;
+ u8 ep_id;
+ u8 er_id; /* PF/VF's ER */
+ u8 port_id; /* PF/VF's physical port */
+
+ /* Host global resources */
+ u16 host_total_function;
+ u8 pf_num;
+ u8 pf_id_start;
+ u16 vf_num; /* max numbers of vf in current host */
+ u16 vf_id_start;
+ u8 host_oq_id_mask_val;
+ u8 host_valid_bitmap;
+ u8 master_host_id;
+ u8 srv_multi_host_mode;
+ u16 virtio_vq_size;
+
+ u8 timer_pf_num;
+ u8 timer_pf_id_start;
+ u16 timer_vf_num;
+ u16 timer_vf_id_start;
+
+ u8 flexq_en;
+ u8 cos_valid_bitmap;
+ u8 port_cos_valid_bitmap;
+ u16 max_vf; /* max VF number that PF supported */
+
+ u16 fake_vf_start_id;
+ u16 fake_vf_num;
+ u32 fake_vf_max_pctx;
+ u16 fake_vf_bfilter_start_addr;
+ u16 fake_vf_bfilter_len;
+
+ u16 fake_vf_num_cfg;
+
+ /* DO NOT get interrupt_type from firmware */
+ enum intr_type interrupt_type;
+
+ bool sf_en; /* stateful business status */
+ u8 timer_en; /* 0:disable, 1:enable */
+ u8 bloomfilter_en; /* 0:disable, 1:enable */
+
+ u8 lb_mode;
+ u8 smf_pg;
+
+ /* For test */
+ u32 test_mode;
+ u32 test_qpc_num;
+ u32 test_qpc_resvd_num;
+ u32 test_page_size_reorder;
+ bool test_xid_alloc_mode;
+ bool test_gpa_check_enable;
+ u8 test_qpc_alloc_mode;
+ u8 test_scqc_alloc_mode;
+
+ u32 test_max_conn_num;
+ u32 test_max_cache_conn_num;
+ u32 test_scqc_num;
+ u32 test_mpt_num;
+ u32 test_scq_resvd_num;
+ u32 test_mpt_recvd_num;
+ u32 test_hash_num;
+ u32 test_reorder_num;
+
+	u32 max_connect_num; /* PF/VF maximum connection number (1M) */
+	/* The maximum number of connections that can be pinned in cache memory, max 1K */
+	u16 max_stick2cache_num;
+	/* Starting address in cache memory for the bloom filter, 64-byte aligned */
+	u16 bfilter_start_addr;
+	/* Length of the bloom filter, aligned on 64 bytes. The size is length * 64B.
+	 * Bloom filter memory size + 1 must be a power of 2.
+	 * The maximum memory size of the bloom filter is 4M
+	 */
+	u16 bfilter_len;
+	/* The size of the hash bucket tables, aligned on 64 entries.
+	 * Used to AND (&) the hash value. Bucket size + 1 must be a power of 2.
+	 * The maximum number of hash buckets is 4M
+	 */
+	u16 hash_bucket_num;
+
+ struct nic_service_cap nic_cap; /* NIC capability */
+ struct rdma_service_cap rdma_cap; /* RDMA capability */
+ struct fc_service_cap fc_cap; /* FC capability */
+ struct toe_service_cap toe_cap; /* ToE capability */
+ struct ovs_service_cap ovs_cap; /* OVS capability */
+ struct ipsec_service_cap ipsec_cap; /* IPsec capability */
+ struct ppa_service_cap ppa_cap; /* PPA capability */
+ struct vbs_service_cap vbs_cap; /* VBS capability */
+};
+
+struct svc_cap_info {
+ u32 func_idx;
+ struct service_cap cap;
+};
+
+struct cfg_eq {
+ enum hinic3_service_type type;
+ int eqn;
+	int free; /* 1 - allocated, 0 - freed */
+};
+
+struct cfg_eq_info {
+ struct cfg_eq *eq;
+
+ u8 num_ceq;
+
+ u8 num_ceq_remain;
+
+	/* mutex used for allocating EQs */
+ struct mutex eq_mutex;
+};
+
+struct irq_alloc_info_st {
+ enum hinic3_service_type type;
+	int free; /* 1 - allocated, 0 - freed */
+ struct irq_info info;
+};
+
+struct cfg_irq_info {
+ struct irq_alloc_info_st *alloc_info;
+ u16 num_total;
+ u16 num_irq_remain;
+ u16 num_irq_hw; /* device max irq number */
+
+	/* mutex used for allocating IRQs */
+ struct mutex irq_mutex;
+};
+
+#define VECTOR_THRESHOLD 2
+
+struct cfg_mgmt_info {
+ struct hinic3_hwdev *hwdev;
+ struct service_cap svc_cap;
+ struct cfg_eq_info eq_info; /* EQ */
+ struct cfg_irq_info irq_param_info; /* IRQ */
+ u32 func_seq_num; /* temporary */
+};
+
+#define CFG_SERVICE_FT_EN (CFG_SERVICE_MASK_VBS | CFG_SERVICE_MASK_TOE | \
+ CFG_SERVICE_MASK_IPSEC | CFG_SERVICE_MASK_FC | \
+ CFG_SERVICE_MASK_VIRTIO | CFG_SERVICE_MASK_OVS)
+#define CFG_SERVICE_RDMA_EN CFG_SERVICE_MASK_ROCE
+
+#define IS_NIC_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_NIC)
+#define IS_ROCE_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_ROCE)
+#define IS_VBS_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_VBS)
+#define IS_TOE_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_TOE)
+#define IS_IPSEC_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_IPSEC)
+#define IS_FC_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_FC)
+#define IS_OVS_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_OVS)
+#define IS_FT_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_FT_EN)
+#define IS_RDMA_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_RDMA_EN)
+#define IS_RDMA_ENABLE(dev) \
+ ((dev)->cfg_mgmt->svc_cap.sf_svc_attr.rdma_en)
+#define IS_PPA_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_PPA)
+#define IS_MIGR_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SERVICE_MASK_MIGRATE)
+
+int init_cfg_mgmt(struct hinic3_hwdev *dev);
+
+void free_cfg_mgmt(struct hinic3_hwdev *dev);
+
+int init_capability(struct hinic3_hwdev *dev);
+
+void free_capability(struct hinic3_hwdev *dev);
+
+#endif
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.c
new file mode 100644
index 000000000000..f207408b19d6
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.c
@@ -0,0 +1,1540 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/msi.h>
+#include <linux/types.h>
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/semaphore.h>
+#include <linux/interrupt.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_csr.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_hw_cfg.h"
+#include "hinic3_cmdq.h"
+#include "comm_msg_intf.h"
+#include "hinic3_hw_comm.h"
+
+#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0
+#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8
+#define HINIC3_MSIX_CNT_COALESC_TIMER_SHIFT 8
+#define HINIC3_MSIX_CNT_PENDING_SHIFT 8
+#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29
+
+#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU
+#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU
+#define HINIC3_MSIX_CNT_COALESC_TIMER_MASK 0xFFU
+#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU
+#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U
+
+#define HINIC3_MSIX_CNT_SET(val, member) \
+ (((val) & HINIC3_MSIX_CNT_##member##_MASK) << \
+ HINIC3_MSIX_CNT_##member##_SHIFT)
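+/*
+ * Example: HINIC3_MSIX_CNT_SET(cnt, RESEND_TIMER) masks cnt to 3 bits (0x7)
+ * and shifts it to bits 31:29 of the MSI-X control word.
+ */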
+
+#define DEFAULT_RX_BUF_SIZE ((u16)0xB)
+
+enum hinic3_rx_buf_size {
+ HINIC3_RX_BUF_SIZE_32B = 0x20,
+ HINIC3_RX_BUF_SIZE_64B = 0x40,
+ HINIC3_RX_BUF_SIZE_96B = 0x60,
+ HINIC3_RX_BUF_SIZE_128B = 0x80,
+ HINIC3_RX_BUF_SIZE_192B = 0xC0,
+ HINIC3_RX_BUF_SIZE_256B = 0x100,
+ HINIC3_RX_BUF_SIZE_384B = 0x180,
+ HINIC3_RX_BUF_SIZE_512B = 0x200,
+ HINIC3_RX_BUF_SIZE_768B = 0x300,
+ HINIC3_RX_BUF_SIZE_1K = 0x400,
+ HINIC3_RX_BUF_SIZE_1_5K = 0x600,
+ HINIC3_RX_BUF_SIZE_2K = 0x800,
+ HINIC3_RX_BUF_SIZE_3K = 0xC00,
+ HINIC3_RX_BUF_SIZE_4K = 0x1000,
+ HINIC3_RX_BUF_SIZE_8K = 0x2000,
+ HINIC3_RX_BUF_SIZE_16K = 0x4000,
+};
+
+const int hinic3_hw_rx_buf_size[] = {
+ HINIC3_RX_BUF_SIZE_32B,
+ HINIC3_RX_BUF_SIZE_64B,
+ HINIC3_RX_BUF_SIZE_96B,
+ HINIC3_RX_BUF_SIZE_128B,
+ HINIC3_RX_BUF_SIZE_192B,
+ HINIC3_RX_BUF_SIZE_256B,
+ HINIC3_RX_BUF_SIZE_384B,
+ HINIC3_RX_BUF_SIZE_512B,
+ HINIC3_RX_BUF_SIZE_768B,
+ HINIC3_RX_BUF_SIZE_1K,
+ HINIC3_RX_BUF_SIZE_1_5K,
+ HINIC3_RX_BUF_SIZE_2K,
+ HINIC3_RX_BUF_SIZE_3K,
+ HINIC3_RX_BUF_SIZE_4K,
+ HINIC3_RX_BUF_SIZE_8K,
+ HINIC3_RX_BUF_SIZE_16K,
+};
+
+static inline int comm_msg_to_mgmt_sync(struct hinic3_hwdev *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ return hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, cmd, buf_in,
+ in_size, buf_out, out_size, 0,
+ HINIC3_CHANNEL_COMM);
+}
+
+static inline int comm_msg_to_mgmt_sync_ch(struct hinic3_hwdev *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size, u16 channel)
+{
+ return hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, cmd, buf_in,
+ in_size, buf_out, out_size, 0, channel);
+}
+
+int hinic3_get_interrupt_cfg(void *dev, struct interrupt_info *info,
+ u16 channel)
+{
+ struct hinic3_hwdev *hwdev = dev;
+ struct comm_cmd_msix_config msix_cfg;
+ u16 out_size = sizeof(msix_cfg);
+ int err;
+
+ if (!hwdev || !info)
+ return -EINVAL;
+
+ memset(&msix_cfg, 0, sizeof(msix_cfg));
+ msix_cfg.func_id = hinic3_global_func_id(hwdev);
+ msix_cfg.msix_index = info->msix_index;
+ msix_cfg.opcode = MGMT_MSG_CMD_OP_GET;
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_CFG_MSIX_CTRL_REG,
+ &msix_cfg, sizeof(msix_cfg), &msix_cfg,
+ &out_size, channel);
+ if (err || !out_size || msix_cfg.head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to get interrupt config, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, msix_cfg.head.status, out_size, channel);
+ return -EINVAL;
+ }
+
+ info->lli_credit_limit = msix_cfg.lli_credit_cnt;
+ info->lli_timer_cfg = msix_cfg.lli_timer_cnt;
+ info->pending_limt = msix_cfg.pending_cnt;
+ info->coalesc_timer_cfg = msix_cfg.coalesce_timer_cnt;
+ info->resend_timer_cfg = msix_cfg.resend_timer_cnt;
+
+ return 0;
+}
+
+int hinic3_set_interrupt_cfg_direct(void *hwdev, struct interrupt_info *info,
+ u16 channel)
+{
+ struct comm_cmd_msix_config msix_cfg;
+ u16 out_size = sizeof(msix_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&msix_cfg, 0, sizeof(msix_cfg));
+ msix_cfg.func_id = hinic3_global_func_id(hwdev);
+ msix_cfg.msix_index = (u16)info->msix_index;
+ msix_cfg.opcode = MGMT_MSG_CMD_OP_SET;
+
+ msix_cfg.lli_credit_cnt = info->lli_credit_limit;
+ msix_cfg.lli_timer_cnt = info->lli_timer_cfg;
+ msix_cfg.pending_cnt = info->pending_limt;
+ msix_cfg.coalesce_timer_cnt = info->coalesc_timer_cfg;
+ msix_cfg.resend_timer_cnt = info->resend_timer_cfg;
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_CFG_MSIX_CTRL_REG,
+ &msix_cfg, sizeof(msix_cfg), &msix_cfg,
+ &out_size, channel);
+ if (err || !out_size || msix_cfg.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set interrupt config, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, msix_cfg.head.status, out_size, channel);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_set_interrupt_cfg(void *dev, struct interrupt_info info, u16 channel)
+{
+ struct interrupt_info temp_info;
+ struct hinic3_hwdev *hwdev = dev;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ temp_info.msix_index = info.msix_index;
+
+ err = hinic3_get_interrupt_cfg(hwdev, &temp_info, channel);
+ if (err)
+ return -EINVAL;
+
+ if (!info.lli_set) {
+ info.lli_credit_limit = temp_info.lli_credit_limit;
+ info.lli_timer_cfg = temp_info.lli_timer_cfg;
+ }
+
+ if (!info.interrupt_coalesc_set) {
+ info.pending_limt = temp_info.pending_limt;
+ info.coalesc_timer_cfg = temp_info.coalesc_timer_cfg;
+ info.resend_timer_cfg = temp_info.resend_timer_cfg;
+ }
+
+ return hinic3_set_interrupt_cfg_direct(hwdev, &info, channel);
+}
+EXPORT_SYMBOL(hinic3_set_interrupt_cfg);
+
+void hinic3_misx_intr_clear_resend_bit(void *hwdev, u16 msix_idx,
+ u8 clear_resend_en)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 msix_ctrl = 0, addr;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ msix_ctrl = HINIC3_MSI_CLR_INDIR_SET(msix_idx, SIMPLE_INDIR_IDX) |
+ HINIC3_MSI_CLR_INDIR_SET(clear_resend_en, RESEND_TIMER_CLR);
+
+ addr = HINIC3_CSR_FUNC_MSI_CLR_WR_ADDR;
+ hinic3_hwif_write_reg(hwif, addr, msix_ctrl);
+}
+EXPORT_SYMBOL(hinic3_misx_intr_clear_resend_bit);
+
+int hinic3_set_wq_page_size(void *hwdev, u16 func_idx, u32 page_size,
+ u16 channel)
+{
+ struct comm_cmd_wq_page_size page_size_info;
+ u16 out_size = sizeof(page_size_info);
+ int err;
+
+ memset(&page_size_info, 0, sizeof(page_size_info));
+ page_size_info.func_id = func_idx;
+ page_size_info.page_size = HINIC3_PAGE_SIZE_HW(page_size);
+ page_size_info.opcode = MGMT_MSG_CMD_OP_SET;
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_CFG_PAGESIZE,
+ &page_size_info, sizeof(page_size_info),
+ &page_size_info, &out_size, channel);
+ if (err || !out_size || page_size_info.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+			"Failed to set wq page size, err: %d, status: 0x%x, out_size: 0x%x, channel: 0x%x\n",
+ err, page_size_info.head.status, out_size, channel);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_func_reset(void *dev, u16 func_id, u64 reset_flag, u16 channel)
+{
+ struct comm_cmd_func_reset func_reset;
+ struct hinic3_hwdev *hwdev = dev;
+ u16 out_size = sizeof(func_reset);
+ int err = 0;
+
+ if (!dev) {
+		pr_err("Invalid parameter: dev is null.\n");
+ return -EINVAL;
+ }
+
+ sdk_info(hwdev->dev_hdl, "Function is reset, flag: 0x%llx, channel:0x%x\n",
+ reset_flag, channel);
+
+ memset(&func_reset, 0, sizeof(func_reset));
+ func_reset.func_id = func_id;
+ func_reset.reset_flag = reset_flag;
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_FUNC_RESET,
+ &func_reset, sizeof(func_reset),
+ &func_reset, &out_size, channel);
+ if (err || !out_size || func_reset.head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to reset func resources, reset_flag 0x%llx, err: %d, status: 0x%x, out_size: 0x%x\n",
+ reset_flag, err, func_reset.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_func_reset);
+
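+/*
+ * The hardware encodes the RX buffer size as an index into
+ * hinic3_hw_rx_buf_size[] rather than as a byte count; DEFAULT_RX_BUF_SIZE
+ * (0xB) is the index of HINIC3_RX_BUF_SIZE_2K in that table.
+ */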
+static u16 get_hw_rx_buf_size(int rx_buf_sz)
+{
+	u16 num_hw_types = ARRAY_SIZE(hinic3_hw_rx_buf_size);
+ u16 i;
+
+ for (i = 0; i < num_hw_types; i++) {
+ if (hinic3_hw_rx_buf_size[i] == rx_buf_sz)
+ return i;
+ }
+
+ pr_err("Chip can't support rx buf size of %d\n", rx_buf_sz);
+
+ return DEFAULT_RX_BUF_SIZE; /* default 2K */
+}
+
+int hinic3_set_root_ctxt(void *hwdev, u32 rq_depth, u32 sq_depth, int rx_buf_sz,
+ u16 channel)
+{
+ struct comm_cmd_root_ctxt root_ctxt;
+ u16 out_size = sizeof(root_ctxt);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&root_ctxt, 0, sizeof(root_ctxt));
+ root_ctxt.func_id = hinic3_global_func_id(hwdev);
+
+ root_ctxt.set_cmdq_depth = 0;
+ root_ctxt.cmdq_depth = 0;
+
+ root_ctxt.lro_en = 1;
+
+ root_ctxt.rq_depth = (u16)ilog2(rq_depth);
+ root_ctxt.rx_buf_sz = get_hw_rx_buf_size(rx_buf_sz);
+ root_ctxt.sq_depth = (u16)ilog2(sq_depth);
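+	/* queue depths are carried to the firmware as log2 values, e.g. a
+	 * 4096-entry SQ is encoded as 12; ilog2() rounds non-power-of-two
+	 * depths down
+	 */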
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_SET_VAT,
+ &root_ctxt, sizeof(root_ctxt),
+ &root_ctxt, &out_size, channel);
+ if (err || !out_size || root_ctxt.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set root context, err: %d, status: 0x%x, out_size: 0x%x, channel: 0x%x\n",
+ err, root_ctxt.head.status, out_size, channel);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_root_ctxt);
+
+int hinic3_clean_root_ctxt(void *hwdev, u16 channel)
+{
+ struct comm_cmd_root_ctxt root_ctxt;
+ u16 out_size = sizeof(root_ctxt);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&root_ctxt, 0, sizeof(root_ctxt));
+ root_ctxt.func_id = hinic3_global_func_id(hwdev);
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_SET_VAT,
+ &root_ctxt, sizeof(root_ctxt),
+ &root_ctxt, &out_size, channel);
+ if (err || !out_size || root_ctxt.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set root context, err: %d, status: 0x%x, out_size: 0x%x, channel: 0x%x\n",
+ err, root_ctxt.head.status, out_size, channel);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_clean_root_ctxt);
+
+int hinic3_set_cmdq_depth(void *hwdev, u16 cmdq_depth)
+{
+ struct comm_cmd_root_ctxt root_ctxt;
+ u16 out_size = sizeof(root_ctxt);
+ int err;
+
+ memset(&root_ctxt, 0, sizeof(root_ctxt));
+ root_ctxt.func_id = hinic3_global_func_id(hwdev);
+
+ root_ctxt.set_cmdq_depth = 1;
+ root_ctxt.cmdq_depth = (u8)ilog2(cmdq_depth);
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_VAT, &root_ctxt,
+ sizeof(root_ctxt), &root_ctxt, &out_size);
+ if (err || !out_size || root_ctxt.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set cmdq depth, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, root_ctxt.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_set_cmdq_ctxt(struct hinic3_hwdev *hwdev, u8 cmdq_id,
+ struct cmdq_ctxt_info *ctxt)
+{
+ struct comm_cmd_cmdq_ctxt cmdq_ctxt;
+ u16 out_size = sizeof(cmdq_ctxt);
+ int err;
+
+ memset(&cmdq_ctxt, 0, sizeof(cmdq_ctxt));
+ memcpy(&cmdq_ctxt.ctxt, ctxt, sizeof(*ctxt));
+ cmdq_ctxt.func_id = hinic3_global_func_id(hwdev);
+ cmdq_ctxt.cmdq_id = cmdq_id;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_CMDQ_CTXT,
+ &cmdq_ctxt, sizeof(cmdq_ctxt),
+ &cmdq_ctxt, &out_size);
+ if (err || !out_size || cmdq_ctxt.head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to set cmdq ctxt, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, cmdq_ctxt.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_set_ceq_ctrl_reg(struct hinic3_hwdev *hwdev, u16 q_id,
+ u32 ctrl0, u32 ctrl1)
+{
+ struct comm_cmd_ceq_ctrl_reg ceq_ctrl;
+ u16 out_size = sizeof(ceq_ctrl);
+ int err;
+
+ memset(&ceq_ctrl, 0, sizeof(ceq_ctrl));
+ ceq_ctrl.func_id = hinic3_global_func_id(hwdev);
+ ceq_ctrl.q_id = q_id;
+ ceq_ctrl.ctrl0 = ctrl0;
+ ceq_ctrl.ctrl1 = ctrl1;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_CEQ_CTRL_REG,
+ &ceq_ctrl, sizeof(ceq_ctrl),
+ &ceq_ctrl, &out_size);
+ if (err || !out_size || ceq_ctrl.head.status) {
+		sdk_err(hwdev->dev_hdl, "Failed to set ceq %u ctrl reg, err: %d, status: 0x%x, out_size: 0x%x\n",
+ q_id, err, ceq_ctrl.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_set_dma_attr_tbl(struct hinic3_hwdev *hwdev, u8 entry_idx, u8 st, u8 at, u8 ph,
+ u8 no_snooping, u8 tph_en)
+{
+ struct comm_cmd_dma_attr_config dma_attr;
+ u16 out_size = sizeof(dma_attr);
+ int err;
+
+ memset(&dma_attr, 0, sizeof(dma_attr));
+ dma_attr.func_id = hinic3_global_func_id(hwdev);
+ dma_attr.entry_idx = entry_idx;
+ dma_attr.st = st;
+ dma_attr.at = at;
+ dma_attr.ph = ph;
+ dma_attr.no_snooping = no_snooping;
+ dma_attr.tph_en = tph_en;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_DMA_ATTR, &dma_attr, sizeof(dma_attr),
+ &dma_attr, &out_size);
+ if (err || !out_size || dma_attr.head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to set dma attr, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, dma_attr.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_set_bdf_ctxt(void *hwdev, u8 bus, u8 device, u8 function)
+{
+ struct comm_cmd_bdf_info bdf_info;
+ u16 out_size = sizeof(bdf_info);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&bdf_info, 0, sizeof(bdf_info));
+ bdf_info.function_idx = hinic3_global_func_id(hwdev);
+ bdf_info.bus = bus;
+ bdf_info.device = device;
+ bdf_info.function = function;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SEND_BDF_INFO,
+ &bdf_info, sizeof(bdf_info),
+ &bdf_info, &out_size);
+ if (err || !out_size || bdf_info.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set bdf info to MPU, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, bdf_info.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_sync_time(void *hwdev, u64 time)
+{
+ struct comm_cmd_sync_time time_info;
+ u16 out_size = sizeof(time_info);
+ int err;
+
+ memset(&time_info, 0, sizeof(time_info));
+ time_info.mstime = time;
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SYNC_TIME, &time_info,
+ sizeof(time_info), &time_info, &out_size);
+ if (err || time_info.head.status || !out_size) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to sync time to mgmt, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, time_info.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int hinic3_set_ppf_flr_type(void *hwdev, enum hinic3_ppf_flr_type flr_type)
+{
+ struct comm_cmd_ppf_flr_type_set flr_type_set;
+ u16 out_size = sizeof(struct comm_cmd_ppf_flr_type_set);
+ struct hinic3_hwdev *dev = hwdev;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&flr_type_set, 0, sizeof(flr_type_set));
+ flr_type_set.func_id = hinic3_global_func_id(hwdev);
+ flr_type_set.ppf_flr_type = flr_type;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_PPF_FLR_TYPE,
+ &flr_type_set, sizeof(flr_type_set),
+ &flr_type_set, &out_size);
+ if (err || !out_size || flr_type_set.head.status) {
+ sdk_err(dev->dev_hdl, "Failed to set ppf flr type, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, flr_type_set.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_ppf_flr_type);
+
+static int hinic3_get_fw_ver(struct hinic3_hwdev *hwdev, enum hinic3_fw_ver_type type,
+ u8 *mgmt_ver, u8 version_size, u16 channel)
+{
+ struct comm_cmd_get_fw_version fw_ver;
+ u16 out_size = sizeof(fw_ver);
+ int err;
+
+ if (!hwdev || !mgmt_ver)
+ return -EINVAL;
+
+ memset(&fw_ver, 0, sizeof(fw_ver));
+ fw_ver.fw_type = type;
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_GET_FW_VERSION,
+ &fw_ver, sizeof(fw_ver), &fw_ver,
+ &out_size, channel);
+ if (err || !out_size || fw_ver.head.status) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to get fw version, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, fw_ver.head.status, out_size, channel);
+ return -EIO;
+ }
+
+ err = snprintf(mgmt_ver, version_size, "%s", fw_ver.ver);
+ if (err < 0)
+ return -EINVAL;
+
+ return 0;
+}
+
+int hinic3_get_mgmt_version(void *hwdev, u8 *mgmt_ver, u8 version_size,
+ u16 channel)
+{
+ return hinic3_get_fw_ver(hwdev, HINIC3_FW_VER_TYPE_MPU, mgmt_ver,
+ version_size, channel);
+}
+EXPORT_SYMBOL(hinic3_get_mgmt_version);
+
+int hinic3_get_fw_version(void *hwdev, struct hinic3_fw_version *fw_ver,
+ u16 channel)
+{
+ int err;
+
+ if (!hwdev || !fw_ver)
+ return -EINVAL;
+
+ err = hinic3_get_fw_ver(hwdev, HINIC3_FW_VER_TYPE_MPU,
+ fw_ver->mgmt_ver, sizeof(fw_ver->mgmt_ver),
+ channel);
+ if (err)
+ return err;
+
+ err = hinic3_get_fw_ver(hwdev, HINIC3_FW_VER_TYPE_NPU,
+ fw_ver->microcode_ver,
+ sizeof(fw_ver->microcode_ver), channel);
+ if (err)
+ return err;
+
+ return hinic3_get_fw_ver(hwdev, HINIC3_FW_VER_TYPE_BOOT,
+ fw_ver->boot_ver, sizeof(fw_ver->boot_ver),
+ channel);
+}
+EXPORT_SYMBOL(hinic3_get_fw_version);
+
+static int hinic3_comm_features_nego(void *hwdev, u8 opcode, u64 *s_feature,
+ u16 size)
+{
+ struct comm_cmd_feature_nego feature_nego;
+ u16 out_size = sizeof(feature_nego);
+ struct hinic3_hwdev *dev = hwdev;
+ int err;
+
+ if (!hwdev || !s_feature || size > COMM_MAX_FEATURE_QWORD)
+ return -EINVAL;
+
+ memset(&feature_nego, 0, sizeof(feature_nego));
+ feature_nego.func_id = hinic3_global_func_id(hwdev);
+ feature_nego.opcode = opcode;
+ if (opcode == MGMT_MSG_CMD_OP_SET)
+ memcpy(feature_nego.s_feature, s_feature, (size * sizeof(u64)));
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_FEATURE_NEGO,
+ &feature_nego, sizeof(feature_nego),
+ &feature_nego, &out_size);
+ if (err || !out_size || feature_nego.head.status) {
+ sdk_err(dev->dev_hdl, "Failed to negotiate feature, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, feature_nego.head.status, out_size);
+ return -EINVAL;
+ }
+
+ if (opcode == MGMT_MSG_CMD_OP_GET)
+ memcpy(s_feature, feature_nego.s_feature, (size * sizeof(u64)));
+
+ return 0;
+}
+
+int hinic3_get_comm_features(void *hwdev, u64 *s_feature, u16 size)
+{
+ return hinic3_comm_features_nego(hwdev, MGMT_MSG_CMD_OP_GET, s_feature,
+ size);
+}
+
+int hinic3_set_comm_features(void *hwdev, u64 *s_feature, u16 size)
+{
+ return hinic3_comm_features_nego(hwdev, MGMT_MSG_CMD_OP_SET, s_feature,
+ size);
+}
+
+int hinic3_comm_channel_detect(struct hinic3_hwdev *hwdev)
+{
+ struct comm_cmd_channel_detect channel_detect_info;
+ u16 out_size = sizeof(channel_detect_info);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&channel_detect_info, 0, sizeof(channel_detect_info));
+ channel_detect_info.func_id = hinic3_global_func_id(hwdev);
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_CHANNEL_DETECT,
+ &channel_detect_info, sizeof(channel_detect_info),
+ &channel_detect_info, &out_size);
+ if ((channel_detect_info.head.status != HINIC3_MGMT_CMD_UNSUPPORTED &&
+ channel_detect_info.head.status) || err || !out_size) {
+ sdk_err(hwdev->dev_hdl, "Failed to send channel detect, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, channel_detect_info.head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int hinic3_func_tmr_bitmap_set(void *hwdev, u16 func_id, bool en)
+{
+ struct comm_cmd_func_tmr_bitmap_op bitmap_op;
+ u16 out_size = sizeof(bitmap_op);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&bitmap_op, 0, sizeof(bitmap_op));
+ bitmap_op.func_id = func_id;
+ bitmap_op.opcode = en ? FUNC_TMR_BITMAP_ENABLE : FUNC_TMR_BITMAP_DISABLE;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_FUNC_TMR_BITMAT,
+ &bitmap_op, sizeof(bitmap_op),
+ &bitmap_op, &out_size);
+ if (err || !out_size || bitmap_op.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set timer bitmap, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, bitmap_op.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int ppf_ht_gpa_set(struct hinic3_hwdev *hwdev, struct hinic3_page_addr *pg0,
+ struct hinic3_page_addr *pg1)
+{
+ struct comm_cmd_ht_gpa ht_gpa_set;
+ u16 out_size = sizeof(ht_gpa_set);
+ int ret;
+
+ memset(&ht_gpa_set, 0, sizeof(ht_gpa_set));
+ pg0->virt_addr = dma_zalloc_coherent(hwdev->dev_hdl,
+ HINIC3_HT_GPA_PAGE_SIZE,
+ &pg0->phys_addr, GFP_KERNEL);
+ if (!pg0->virt_addr) {
+ sdk_err(hwdev->dev_hdl, "Alloc pg0 page addr failed\n");
+ return -EFAULT;
+ }
+
+ pg1->virt_addr = dma_zalloc_coherent(hwdev->dev_hdl,
+ HINIC3_HT_GPA_PAGE_SIZE,
+ &pg1->phys_addr, GFP_KERNEL);
+ if (!pg1->virt_addr) {
+ sdk_err(hwdev->dev_hdl, "Alloc pg1 page addr failed\n");
+ return -EFAULT;
+ }
+
+ ht_gpa_set.host_id = hinic3_host_id(hwdev);
+ ht_gpa_set.page_pa0 = pg0->phys_addr;
+ ht_gpa_set.page_pa1 = pg1->phys_addr;
+ sdk_info(hwdev->dev_hdl, "PPF ht gpa set: page_addr0.pa=0x%llx, page_addr1.pa=0x%llx\n",
+ pg0->phys_addr, pg1->phys_addr);
+ ret = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_PPF_HT_GPA,
+ &ht_gpa_set, sizeof(ht_gpa_set),
+ &ht_gpa_set, &out_size);
+ if (ret || !out_size || ht_gpa_set.head.status) {
+ sdk_warn(hwdev->dev_hdl, "PPF ht gpa set failed, ret: %d, status: 0x%x, out_size: 0x%x\n",
+ ret, ht_gpa_set.head.status, out_size);
+ return -EFAULT;
+ }
+
+ hwdev->page_pa0.phys_addr = pg0->phys_addr;
+ hwdev->page_pa0.virt_addr = pg0->virt_addr;
+
+ hwdev->page_pa1.phys_addr = pg1->phys_addr;
+ hwdev->page_pa1.virt_addr = pg1->virt_addr;
+
+ return 0;
+}
+
+int hinic3_ppf_ht_gpa_init(void *dev)
+{
+ struct hinic3_page_addr page_addr0[HINIC3_PPF_HT_GPA_SET_RETRY_TIMES];
+ struct hinic3_page_addr page_addr1[HINIC3_PPF_HT_GPA_SET_RETRY_TIMES];
+ struct hinic3_hwdev *hwdev = dev;
+ int ret;
+ int i;
+ int j;
+ size_t size;
+
+ if (!dev) {
+		pr_err("Invalid parameter: dev is null.\n");
+ return -EINVAL;
+ }
+
+ size = HINIC3_PPF_HT_GPA_SET_RETRY_TIMES * sizeof(page_addr0[0]);
+ memset(page_addr0, 0, size);
+ memset(page_addr1, 0, size);
+
+ for (i = 0; i < HINIC3_PPF_HT_GPA_SET_RETRY_TIMES; i++) {
+ ret = ppf_ht_gpa_set(hwdev, &page_addr0[i], &page_addr1[i]);
+ if (ret == 0)
+ break;
+ }
+
+ for (j = 0; j < i; j++) {
+ if (page_addr0[j].virt_addr) {
+ dma_free_coherent(hwdev->dev_hdl,
+ HINIC3_HT_GPA_PAGE_SIZE,
+ page_addr0[j].virt_addr,
+ (dma_addr_t)page_addr0[j].phys_addr);
+ page_addr0[j].virt_addr = NULL;
+ }
+ if (page_addr1[j].virt_addr) {
+ dma_free_coherent(hwdev->dev_hdl,
+ HINIC3_HT_GPA_PAGE_SIZE,
+ page_addr1[j].virt_addr,
+ (dma_addr_t)page_addr1[j].phys_addr);
+ page_addr1[j].virt_addr = NULL;
+ }
+ }
+
+ if (i >= HINIC3_PPF_HT_GPA_SET_RETRY_TIMES) {
+ sdk_err(hwdev->dev_hdl, "PPF ht gpa init failed, retry times: %d\n",
+ i);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+void hinic3_ppf_ht_gpa_deinit(void *dev)
+{
+ struct hinic3_hwdev *hwdev = dev;
+
+ if (!dev) {
+		pr_err("Invalid parameter: dev is null.\n");
+ return;
+ }
+
+ if (hwdev->page_pa0.virt_addr) {
+ dma_free_coherent(hwdev->dev_hdl, HINIC3_HT_GPA_PAGE_SIZE,
+ hwdev->page_pa0.virt_addr,
+ (dma_addr_t)(hwdev->page_pa0.phys_addr));
+ hwdev->page_pa0.virt_addr = NULL;
+ }
+
+ if (hwdev->page_pa1.virt_addr) {
+ dma_free_coherent(hwdev->dev_hdl, HINIC3_HT_GPA_PAGE_SIZE,
+ hwdev->page_pa1.virt_addr,
+ (dma_addr_t)hwdev->page_pa1.phys_addr);
+ hwdev->page_pa1.virt_addr = NULL;
+ }
+}
+
+static int set_ppf_tmr_status(struct hinic3_hwdev *hwdev,
+ enum ppf_tmr_status status)
+{
+ struct comm_cmd_ppf_tmr_op op;
+ u16 out_size = sizeof(op);
+ int err = 0;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&op, 0, sizeof(op));
+
+ if (hinic3_func_type(hwdev) != TYPE_PPF)
+ return -EFAULT;
+
+ op.opcode = status;
+ op.ppf_id = hinic3_ppf_idx(hwdev);
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_PPF_TMR, &op,
+ sizeof(op), &op, &out_size);
+ if (err || !out_size || op.head.status) {
+ sdk_err(hwdev->dev_hdl, "Failed to set ppf timer, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, op.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_ppf_tmr_start(void *hwdev)
+{
+ if (!hwdev) {
+ pr_err("Hwdev pointer is NULL for starting ppf timer\n");
+ return -EINVAL;
+ }
+
+ return set_ppf_tmr_status(hwdev, HINIC_PPF_TMR_FLAG_START);
+}
+EXPORT_SYMBOL(hinic3_ppf_tmr_start);
+
+int hinic3_ppf_tmr_stop(void *hwdev)
+{
+ if (!hwdev) {
+		pr_err("Hwdev pointer is NULL for stopping ppf timer\n");
+ return -EINVAL;
+ }
+
+ return set_ppf_tmr_status(hwdev, HINIC_PPF_TMR_FLAG_STOP);
+}
+EXPORT_SYMBOL(hinic3_ppf_tmr_stop);
+
+static int mqm_eqm_try_alloc_mem(struct hinic3_hwdev *hwdev, u32 page_size,
+ u32 page_num)
+{
+ struct hinic3_page_addr *page_addr = hwdev->mqm_att.brm_srch_page_addr;
+ u32 valid_num = 0;
+ u32 flag = 1;
+ u32 i = 0;
+
+ for (i = 0; i < page_num; i++) {
+ page_addr->virt_addr =
+ dma_zalloc_coherent(hwdev->dev_hdl, page_size,
+ &page_addr->phys_addr, GFP_KERNEL);
+ if (!page_addr->virt_addr) {
+ flag = 0;
+ break;
+ }
+ valid_num++;
+ page_addr++;
+ }
+
+ if (flag == 1) {
+ hwdev->mqm_att.page_size = page_size;
+ hwdev->mqm_att.page_num = page_num;
+ } else {
+ page_addr = hwdev->mqm_att.brm_srch_page_addr;
+ for (i = 0; i < valid_num; i++) {
+ dma_free_coherent(hwdev->dev_hdl, page_size,
+ page_addr->virt_addr,
+ (dma_addr_t)page_addr->phys_addr);
+ page_addr++;
+ }
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
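+/*
+ * Page allocation falls back through three sizes - 2M, 64K, 4K - trading
+ * fewer, larger DMA-coherent allocations for a better chance of success
+ * under fragmentation; the divisors (1024/32/2) keep the mapped bytes per
+ * chunk constant at 2K across all three attempts.
+ */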
+static int mqm_eqm_alloc_page_mem(struct hinic3_hwdev *hwdev)
+{
+ int ret = 0;
+ u32 page_num;
+
+ /* apply for 2M page, page number is chunk_num/1024 */
+ page_num = (hwdev->mqm_att.chunk_num + 0x3ff) >> 0xa;
+ ret = mqm_eqm_try_alloc_mem(hwdev, 0x2 * 0x400 * 0x400, page_num);
+ if (ret == 0) {
+ sdk_info(hwdev->dev_hdl, "[mqm_eqm_init] Alloc page_size 2M OK\n");
+ return 0;
+ }
+
+ /* apply for 64KB page, page number is chunk_num/32 */
+ page_num = (hwdev->mqm_att.chunk_num + 0x1f) >> 0x5;
+ ret = mqm_eqm_try_alloc_mem(hwdev, 0x40 * 0x400, page_num);
+ if (ret == 0) {
+ sdk_info(hwdev->dev_hdl, "[mqm_eqm_init] Alloc page_size 64K OK\n");
+ return 0;
+ }
+
+ /* apply for 4KB page, page number is chunk_num/2 */
+ page_num = (hwdev->mqm_att.chunk_num + 1) >> 1;
+ ret = mqm_eqm_try_alloc_mem(hwdev, 0x4 * 0x400, page_num);
+ if (ret == 0) {
+ sdk_info(hwdev->dev_hdl, "[mqm_eqm_init] Alloc page_size 4K OK\n");
+ return 0;
+ }
+
+ return ret;
+}
+
+static void mqm_eqm_free_page_mem(struct hinic3_hwdev *hwdev)
+{
+ u32 i;
+ struct hinic3_page_addr *page_addr;
+ u32 page_size;
+
+ page_size = hwdev->mqm_att.page_size;
+ page_addr = hwdev->mqm_att.brm_srch_page_addr;
+
+ for (i = 0; i < hwdev->mqm_att.page_num; i++) {
+ dma_free_coherent(hwdev->dev_hdl, page_size,
+ page_addr->virt_addr, (dma_addr_t)(page_addr->phys_addr));
+ page_addr++;
+ }
+}
+
+static int mqm_eqm_set_cfg_2_hw(struct hinic3_hwdev *hwdev, u8 valid)
+{
+ struct comm_cmd_eqm_cfg info_eqm_cfg;
+ u16 out_size = sizeof(info_eqm_cfg);
+ int err;
+
+ memset(&info_eqm_cfg, 0, sizeof(info_eqm_cfg));
+
+ info_eqm_cfg.host_id = hinic3_host_id(hwdev);
+ info_eqm_cfg.page_size = hwdev->mqm_att.page_size;
+ info_eqm_cfg.valid = valid;
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_SET_MQM_CFG_INFO,
+ &info_eqm_cfg, sizeof(info_eqm_cfg),
+ &info_eqm_cfg, &out_size);
+ if (err || !out_size || info_eqm_cfg.head.status) {
+		sdk_err(hwdev->dev_hdl, "Failed to set mqm cfg info, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, info_eqm_cfg.head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+#define EQM_DATA_BUF_SIZE 1024
+#define MQM_ATT_PAGE_NUM 128
+
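+/*
+ * Page GPAs are reported to the firmware in batches of MQM_ATT_PAGE_NUM
+ * (128) entries per management message; each entry stores the physical
+ * address shifted right by 12, i.e. the 4K-aligned page frame number.
+ */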
+static int mqm_eqm_set_page_2_hw(struct hinic3_hwdev *hwdev)
+{
+ struct comm_cmd_eqm_search_gpa *info = NULL;
+ struct hinic3_page_addr *page_addr = NULL;
+ void *send_buf = NULL;
+ u16 send_buf_size;
+ u32 i;
+ u64 *gpa_hi52 = NULL;
+ u64 gpa;
+ u32 num;
+ u32 start_idx;
+ int err = 0;
+ u16 out_size;
+ u8 cmd;
+
+ send_buf_size = sizeof(struct comm_cmd_eqm_search_gpa) +
+ EQM_DATA_BUF_SIZE;
+ send_buf = kzalloc(send_buf_size, GFP_KERNEL);
+ if (!send_buf) {
+		sdk_err(hwdev->dev_hdl, "Alloc virtual mem failed\n");
+ return -EFAULT;
+ }
+
+ page_addr = hwdev->mqm_att.brm_srch_page_addr;
+ info = (struct comm_cmd_eqm_search_gpa *)send_buf;
+
+ gpa_hi52 = info->gpa_hi52;
+ num = 0;
+ start_idx = 0;
+ cmd = COMM_MGMT_CMD_SET_MQM_SRCH_GPA;
+ for (i = 0; i < hwdev->mqm_att.page_num; i++) {
+		/* gpa is 4K aligned; save gpa[63:12] */
+ gpa = page_addr->phys_addr >> 12;
+ gpa_hi52[num] = gpa;
+ num++;
+ if (num == MQM_ATT_PAGE_NUM) {
+ info->num = num;
+ info->start_idx = start_idx;
+ info->host_id = hinic3_host_id(hwdev);
+ out_size = send_buf_size;
+ err = comm_msg_to_mgmt_sync(hwdev, cmd, info,
+ (u16)send_buf_size,
+ info, &out_size);
+ if (MSG_TO_MGMT_SYNC_RETURN_ERR(err, out_size,
+ info->head.status)) {
+ sdk_err(hwdev->dev_hdl, "Set mqm srch gpa fail, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, info->head.status, out_size);
+ err = -EFAULT;
+ goto set_page_2_hw_end;
+ }
+
+ gpa_hi52 = info->gpa_hi52;
+ num = 0;
+ start_idx = i + 1;
+ }
+ page_addr++;
+ }
+
+ if (num != 0) {
+ info->num = num;
+ info->start_idx = start_idx;
+ info->host_id = hinic3_host_id(hwdev);
+ out_size = send_buf_size;
+ err = comm_msg_to_mgmt_sync(hwdev, cmd, info,
+ (u16)send_buf_size, info,
+ &out_size);
+ if (MSG_TO_MGMT_SYNC_RETURN_ERR(err, out_size,
+ info->head.status)) {
+ sdk_err(hwdev->dev_hdl, "Set mqm srch gpa fail, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, info->head.status, out_size);
+ err = -EFAULT;
+ goto set_page_2_hw_end;
+ }
+ }
+
+set_page_2_hw_end:
+ kfree(send_buf);
+ return err;
+}
+
+static int get_eqm_num(struct hinic3_hwdev *hwdev, struct comm_cmd_get_eqm_num *info_eqm_fix)
+{
+ int ret;
+ u16 len = sizeof(*info_eqm_fix);
+
+ memset(info_eqm_fix, 0, sizeof(*info_eqm_fix));
+
+ ret = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_GET_MQM_FIX_INFO,
+ info_eqm_fix, sizeof(*info_eqm_fix), info_eqm_fix, &len);
+ if (ret || !len || info_eqm_fix->head.status) {
+		sdk_err(hwdev->dev_hdl, "Get mqm fix info fail, err: %d, status: 0x%x, out_size: 0x%x\n",
+ ret, info_eqm_fix->head.status, len);
+ return -EFAULT;
+ }
+
+ sdk_info(hwdev->dev_hdl, "get chunk_num: 0x%x, search_gpa_num: 0x%08x\n",
+ info_eqm_fix->chunk_num, info_eqm_fix->search_gpa_num);
+
+ return 0;
+}
+
+static int mqm_eqm_init(struct hinic3_hwdev *hwdev)
+{
+ struct comm_cmd_get_eqm_num info_eqm_fix;
+ int ret;
+
+ if (hwdev->hwif->attr.func_type != TYPE_PPF)
+ return 0;
+
+ ret = get_eqm_num(hwdev, &info_eqm_fix);
+ if (ret)
+ return ret;
+
+ if (!(info_eqm_fix.chunk_num))
+ return 0;
+
+ hwdev->mqm_att.chunk_num = info_eqm_fix.chunk_num;
+ hwdev->mqm_att.search_gpa_num = info_eqm_fix.search_gpa_num;
+ hwdev->mqm_att.page_size = 0;
+ hwdev->mqm_att.page_num = 0;
+
+ hwdev->mqm_att.brm_srch_page_addr =
+ kcalloc(hwdev->mqm_att.chunk_num, sizeof(struct hinic3_page_addr), GFP_KERNEL);
+ if (!(hwdev->mqm_att.brm_srch_page_addr)) {
+		sdk_err(hwdev->dev_hdl, "Alloc virtual mem failed\n");
+ return -EFAULT;
+ }
+
+ ret = mqm_eqm_alloc_page_mem(hwdev);
+ if (ret) {
+		sdk_err(hwdev->dev_hdl, "Alloc eqm page mem failed\n");
+ goto err_page;
+ }
+
+ ret = mqm_eqm_set_page_2_hw(hwdev);
+ if (ret) {
+		sdk_err(hwdev->dev_hdl, "Set page to hw failed\n");
+ goto err_ecmd;
+ }
+
+ ret = mqm_eqm_set_cfg_2_hw(hwdev, 1);
+ if (ret) {
+		sdk_err(hwdev->dev_hdl, "Set cfg to hw failed\n");
+ goto err_ecmd;
+ }
+
+	sdk_info(hwdev->dev_hdl, "ppf_ext_db_init ok\n");
+
+ return 0;
+
+err_ecmd:
+ mqm_eqm_free_page_mem(hwdev);
+
+err_page:
+ kfree(hwdev->mqm_att.brm_srch_page_addr);
+
+ return ret;
+}
+
+static void mqm_eqm_deinit(struct hinic3_hwdev *hwdev)
+{
+ int ret;
+
+ if (hwdev->hwif->attr.func_type != TYPE_PPF)
+ return;
+
+ if (!(hwdev->mqm_att.chunk_num))
+ return;
+
+ mqm_eqm_free_page_mem(hwdev);
+ kfree(hwdev->mqm_att.brm_srch_page_addr);
+
+ ret = mqm_eqm_set_cfg_2_hw(hwdev, 0);
+ if (ret) {
+ sdk_err(hwdev->dev_hdl, "Set mqm eqm cfg to chip fail! err: %d\n",
+ ret);
+ return;
+ }
+
+ hwdev->mqm_att.chunk_num = 0;
+ hwdev->mqm_att.search_gpa_num = 0;
+ hwdev->mqm_att.page_num = 0;
+ hwdev->mqm_att.page_size = 0;
+}
+
+int hinic3_ppf_ext_db_init(struct hinic3_hwdev *hwdev)
+{
+ int ret;
+
+ ret = mqm_eqm_init(hwdev);
+ if (ret) {
+ sdk_err(hwdev->dev_hdl, "MQM eqm init fail!\n");
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_ppf_ext_db_deinit(struct hinic3_hwdev *hwdev)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hwdev->hwif->attr.func_type != TYPE_PPF)
+ return 0;
+
+ mqm_eqm_deinit(hwdev);
+
+ return 0;
+}
+
+#define HINIC3_FLR_TIMEOUT 1000
+
+static enum hinic3_wait_return check_flr_finish_handler(void *priv_data)
+{
+ struct hinic3_hwif *hwif = priv_data;
+ enum hinic3_pf_status status;
+
+ status = hinic3_get_pf_status(hwif);
+ if (status == HINIC3_PF_STATUS_FLR_FINISH_FLAG) {
+ hinic3_set_pf_status(hwif, HINIC3_PF_STATUS_ACTIVE_FLAG);
+ return WAIT_PROCESS_CPL;
+ }
+
+ return WAIT_PROCESS_WAITING;
+}
+
+static int wait_for_flr_finish(struct hinic3_hwif *hwif)
+{
+ return hinic3_wait_for_timeout(hwif, check_flr_finish_handler,
+ HINIC3_FLR_TIMEOUT, 0xa * USEC_PER_MSEC);
+}
+
+#define HINIC3_WAIT_CMDQ_IDLE_TIMEOUT 5000
+
+static enum hinic3_wait_return check_cmdq_stop_handler(void *priv_data)
+{
+ struct hinic3_hwdev *hwdev = priv_data;
+ struct hinic3_cmdqs *cmdqs = hwdev->cmdqs;
+ enum hinic3_cmdq_type cmdq_type;
+
+ /* Stop waiting when card unpresent */
+ if (!hwdev->chip_present_flag)
+ return WAIT_PROCESS_CPL;
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ if (!hinic3_cmdq_idle(&cmdqs->cmdq[cmdq_type]))
+ return WAIT_PROCESS_WAITING;
+ }
+
+ return WAIT_PROCESS_CPL;
+}
+
+static int wait_cmdq_stop(struct hinic3_hwdev *hwdev)
+{
+ enum hinic3_cmdq_type cmdq_type;
+ struct hinic3_cmdqs *cmdqs = hwdev->cmdqs;
+ int err;
+
+ if (!(cmdqs->status & HINIC3_CMDQ_ENABLE))
+ return 0;
+
+ cmdqs->status &= ~HINIC3_CMDQ_ENABLE;
+
+ err = hinic3_wait_for_timeout(hwdev, check_cmdq_stop_handler,
+ HINIC3_WAIT_CMDQ_IDLE_TIMEOUT,
+ USEC_PER_MSEC);
+ if (err == 0)
+ return 0;
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < cmdqs->cmdq_num; cmdq_type++) {
+ if (!hinic3_cmdq_idle(&cmdqs->cmdq[cmdq_type]))
+ sdk_err(hwdev->dev_hdl, "Cmdq %d is busy\n", cmdq_type);
+ }
+
+ cmdqs->status |= HINIC3_CMDQ_ENABLE;
+
+ return err;
+}
+
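+/*
+ * Flush sequence: quiesce the command queues, disable and flush doorbells,
+ * signal FLR start to the firmware (PFs only - VFs just wait), then
+ * re-enable doorbells and rebuild the cmdq contexts. Errors are recorded
+ * but the sequence runs to completion so later steps still execute.
+ */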
+static int hinic3_rx_tx_flush(struct hinic3_hwdev *hwdev, u16 channel)
+{
+ struct hinic3_hwif *hwif = hwdev->hwif;
+ struct comm_cmd_clear_doorbell clear_db;
+ struct comm_cmd_clear_resource clr_res;
+ u16 out_size;
+ int err;
+ int ret = 0;
+
+ if (HINIC3_FUNC_TYPE(hwdev) != TYPE_VF)
+ msleep(100); /* wait ucode 100 ms stop I/O */
+
+ err = wait_cmdq_stop(hwdev);
+ if (err != 0) {
+		sdk_warn(hwdev->dev_hdl, "CMDQ is still working, please check that the CMDQ timeout value is reasonable\n");
+ ret = err;
+ }
+
+ hinic3_disable_doorbell(hwif);
+
+ out_size = sizeof(clear_db);
+ memset(&clear_db, 0, sizeof(clear_db));
+ clear_db.func_id = HINIC3_HWIF_GLOBAL_IDX(hwif);
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_FLUSH_DOORBELL,
+ &clear_db, sizeof(clear_db),
+ &clear_db, &out_size, channel);
+ if (err != 0 || !out_size || clear_db.head.status) {
+ sdk_warn(hwdev->dev_hdl, "Failed to flush doorbell, err: %d, status: 0x%x, out_size: 0x%x, channel: 0x%x\n",
+ err, clear_db.head.status, out_size, channel);
+ if (err != 0)
+ ret = err;
+ else
+ ret = -EFAULT;
+ }
+
+ if (HINIC3_FUNC_TYPE(hwdev) != TYPE_VF)
+ hinic3_set_pf_status(hwif, HINIC3_PF_STATUS_FLR_START_FLAG);
+ else
+ msleep(100); /* wait ucode 100 ms stop I/O */
+
+ memset(&clr_res, 0, sizeof(clr_res));
+ clr_res.func_id = HINIC3_HWIF_GLOBAL_IDX(hwif);
+
+ err = hinic3_msg_to_mgmt_no_ack(hwdev, HINIC3_MOD_COMM,
+ COMM_MGMT_CMD_START_FLUSH, &clr_res,
+ sizeof(clr_res), channel);
+ if (err != 0) {
+ sdk_warn(hwdev->dev_hdl, "Failed to notice flush message, err: %d, channel: 0x%x\n",
+ err, channel);
+ ret = err;
+ }
+
+ if (HINIC3_FUNC_TYPE(hwdev) != TYPE_VF) {
+ err = wait_for_flr_finish(hwif);
+ if (err != 0) {
+ sdk_warn(hwdev->dev_hdl, "Wait firmware FLR timeout\n");
+ ret = err;
+ }
+ }
+
+ hinic3_enable_doorbell(hwif);
+
+ err = hinic3_reinit_cmdq_ctxts(hwdev);
+ if (err != 0) {
+ sdk_warn(hwdev->dev_hdl, "Failed to reinit cmdq\n");
+ ret = err;
+ }
+
+ return ret;
+}
+
+int hinic3_func_rx_tx_flush(void *hwdev, u16 channel)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (dev->chip_present_flag == 0)
+ return 0;
+
+ return hinic3_rx_tx_flush(dev, channel);
+}
+EXPORT_SYMBOL(hinic3_func_rx_tx_flush);
+
+int hinic3_get_board_info(void *hwdev, struct hinic3_board_info *info,
+ u16 channel)
+{
+ struct comm_cmd_board_info board_info;
+ u16 out_size = sizeof(board_info);
+ int err;
+
+ if (!hwdev || !info)
+ return -EINVAL;
+
+ memset(&board_info, 0, sizeof(board_info));
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_GET_BOARD_INFO,
+ &board_info, sizeof(board_info),
+ &board_info, &out_size, channel);
+ if (err || board_info.head.status || !out_size) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to get board info, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, board_info.head.status, out_size, channel);
+ return -EIO;
+ }
+
+ memcpy(info, &board_info.info, sizeof(*info));
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_board_info);
+
+int hinic3_get_hw_pf_infos(void *hwdev, struct hinic3_hw_pf_infos *infos,
+ u16 channel)
+{
+ struct comm_cmd_hw_pf_infos *pf_infos = NULL;
+ u16 out_size = sizeof(*pf_infos);
+ int err = 0;
+
+ if (!hwdev || !infos)
+ return -EINVAL;
+
+ pf_infos = kzalloc(sizeof(*pf_infos), GFP_KERNEL);
+ if (!pf_infos)
+ return -ENOMEM;
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev, COMM_MGMT_CMD_GET_HW_PF_INFOS,
+ pf_infos, sizeof(*pf_infos),
+ pf_infos, &out_size, channel);
+ if (pf_infos->head.status || err || !out_size) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to get hw pf information, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, pf_infos->head.status, out_size, channel);
+ err = -EIO;
+ goto free_buf;
+ }
+
+ memcpy(infos, &pf_infos->infos, sizeof(*infos));
+
+free_buf:
+ kfree(pf_infos);
+ return err;
+}
+EXPORT_SYMBOL(hinic3_get_hw_pf_infos);
+
+int hinic3_get_global_attr(void *hwdev, struct comm_global_attr *attr)
+{
+ struct comm_cmd_get_glb_attr get_attr;
+ u16 out_size = sizeof(get_attr);
+ int err = 0;
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_GET_GLOBAL_ATTR,
+ &get_attr, sizeof(get_attr), &get_attr,
+ &out_size);
+ if (err || !out_size || get_attr.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to get global attribute, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, get_attr.head.status, out_size);
+ return -EIO;
+ }
+
+ memcpy(attr, &get_attr.attr, sizeof(*attr));
+
+ return 0;
+}
+
+int hinic3_set_func_svc_used_state(void *hwdev, u16 svc_type, u8 state,
+ u16 channel)
+{
+ struct comm_cmd_func_svc_used_state used_state;
+ u16 out_size = sizeof(used_state);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&used_state, 0, sizeof(used_state));
+ used_state.func_id = hinic3_global_func_id(hwdev);
+ used_state.svc_type = svc_type;
+ used_state.used_state = state;
+
+ err = comm_msg_to_mgmt_sync_ch(hwdev,
+ COMM_MGMT_CMD_SET_FUNC_SVC_USED_STATE,
+ &used_state, sizeof(used_state),
+ &used_state, &out_size, channel);
+ if (err || !out_size || used_state.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to set func service used state, err: %d, status: 0x%x, out size: 0x%x, channel: 0x%x\n",
+ err, used_state.head.status, out_size, channel);
+ return -EIO;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_func_svc_used_state);
+
+int hinic3_get_sml_table_info(void *hwdev, u32 tbl_id, u8 *node_id, u8 *instance_id)
+{
+ struct sml_table_id_info sml_table[TABLE_INDEX_MAX];
+ struct comm_cmd_get_sml_tbl_data sml_tbl;
+ u16 out_size = sizeof(sml_tbl);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (tbl_id >= TABLE_INDEX_MAX) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl, "Sml table index out of range [0, %u]\n",
+ TABLE_INDEX_MAX - 1);
+ return -EINVAL;
+ }
+
+ err = comm_msg_to_mgmt_sync(hwdev, COMM_MGMT_CMD_GET_SML_TABLE_INFO,
+ &sml_tbl, sizeof(sml_tbl), &sml_tbl, &out_size);
+ if (err || !out_size || sml_tbl.head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to get sml table information, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, sml_tbl.head.status, out_size);
+ return -EIO;
+ }
+
+ memcpy(sml_table, sml_tbl.tbl_data, sizeof(sml_table));
+
+ *node_id = sml_table[tbl_id].node_id;
+ *instance_id = sml_table[tbl_id].instance_id;
+
+ return 0;
+}
+
+int hinic3_activate_firmware(void *hwdev, u8 cfg_index)
+{
+ struct hinic3_cmd_activate_firmware activate_msg;
+ u16 out_size = sizeof(activate_msg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) != TYPE_PF)
+ return -EOPNOTSUPP;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&activate_msg, 0, sizeof(activate_msg));
+ activate_msg.index = cfg_index;
+
+ err = hinic3_pf_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, COMM_MGMT_CMD_ACTIVE_FW,
+ &activate_msg, sizeof(activate_msg),
+ &activate_msg, &out_size, FW_UPDATE_MGMT_TIMEOUT);
+ if (err || !out_size || activate_msg.msg_head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to activate firmware, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, activate_msg.msg_head.status, out_size);
+ err = activate_msg.msg_head.status ? activate_msg.msg_head.status : -EIO;
+ return err;
+ }
+
+ return 0;
+}
+
+int hinic3_switch_config(void *hwdev, u8 cfg_index)
+{
+ struct hinic3_cmd_switch_config switch_cfg;
+ u16 out_size = sizeof(switch_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) != TYPE_PF)
+ return -EOPNOTSUPP;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ memset(&switch_cfg, 0, sizeof(switch_cfg));
+ switch_cfg.index = cfg_index;
+
+ err = hinic3_pf_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, COMM_MGMT_CMD_SWITCH_CFG,
+ &switch_cfg, sizeof(switch_cfg),
+ &switch_cfg, &out_size, FW_UPDATE_MGMT_TIMEOUT);
+ if (err || !out_size || switch_cfg.msg_head.status) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Failed to switch cfg, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, switch_cfg.msg_head.status, out_size);
+ err = switch_cfg.msg_head.status ? switch_cfg.msg_head.status : -EIO;
+ return err;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.h
new file mode 100644
index 000000000000..be9e4a6b24f9
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_comm.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_COMM_H
+#define HINIC3_COMM_H
+
+#include <linux/types.h>
+
+#include "comm_msg_intf.h"
+#include "hinic3_hwdev.h"
+
+#define MSG_TO_MGMT_SYNC_RETURN_ERR(err, out_size, status) \
+ ((err) || (status) || !(out_size))
+
+#define HINIC3_PAGE_SIZE_HW(pg_size) ((u8)ilog2((u32)((pg_size) >> 12)))
+
+enum func_tmr_bitmap_status {
+ FUNC_TMR_BITMAP_DISABLE,
+ FUNC_TMR_BITMAP_ENABLE,
+};
+
+enum ppf_tmr_status {
+ HINIC_PPF_TMR_FLAG_STOP,
+ HINIC_PPF_TMR_FLAG_START,
+};
+
+#define HINIC3_HT_GPA_PAGE_SIZE 4096UL
+#define HINIC3_PPF_HT_GPA_SET_RETRY_TIMES 10
+
+int hinic3_set_cmdq_depth(void *hwdev, u16 cmdq_depth);
+
+int hinic3_set_cmdq_ctxt(struct hinic3_hwdev *hwdev, u8 cmdq_id,
+ struct cmdq_ctxt_info *ctxt);
+
+int hinic3_ppf_ext_db_init(struct hinic3_hwdev *hwdev);
+
+int hinic3_ppf_ext_db_deinit(struct hinic3_hwdev *hwdev);
+
+int hinic3_set_ceq_ctrl_reg(struct hinic3_hwdev *hwdev, u16 q_id,
+ u32 ctrl0, u32 ctrl1);
+
+int hinic3_set_dma_attr_tbl(struct hinic3_hwdev *hwdev, u8 entry_idx, u8 st, u8 at, u8 ph,
+ u8 no_snooping, u8 tph_en);
+
+int hinic3_get_comm_features(void *hwdev, u64 *s_feature, u16 size);
+int hinic3_set_comm_features(void *hwdev, u64 *s_feature, u16 size);
+
+int hinic3_comm_channel_detect(struct hinic3_hwdev *hwdev);
+
+int hinic3_get_global_attr(void *hwdev, struct comm_global_attr *attr);
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.c
new file mode 100644
index 000000000000..79e4dacbd0c9
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.c
@@ -0,0 +1,599 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include "ossl_knl.h"
+#include "hinic3_mt.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_comm_cmd.h"
+#include "hinic3_hw_mt.h"
+
+#define HINIC3_CMDQ_BUF_MAX_SIZE 2048U
+#define DW_WIDTH 4
+
+#define MSG_MAX_IN_SIZE (2048 * 1024)
+#define MSG_MAX_OUT_SIZE (2048 * 1024)
+
+/* completion timeout interval, unit is millisecond */
+#define MGMT_MSG_UPDATE_TIMEOUT 50000U
+
+void free_buff_in(void *hwdev, const struct msg_module *nt_msg, void *buf_in)
+{
+ if (!buf_in)
+ return;
+
+ if (nt_msg->module == SEND_TO_NPU)
+ hinic3_free_cmd_buf(hwdev, buf_in);
+ else
+ kfree(buf_in);
+}
+
+void free_buff_out(void *hwdev, struct msg_module *nt_msg,
+ void *buf_out)
+{
+ if (!buf_out)
+ return;
+
+ if (nt_msg->module == SEND_TO_NPU &&
+ !nt_msg->npu_cmd.direct_resp)
+ hinic3_free_cmd_buf(hwdev, buf_out);
+ else
+ kfree(buf_out);
+}
+
+int alloc_buff_in(void *hwdev, struct msg_module *nt_msg,
+ u32 in_size, void **buf_in)
+{
+ void *msg_buf = NULL;
+
+ if (!in_size)
+ return 0;
+
+ if (nt_msg->module == SEND_TO_NPU) {
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+
+ if (in_size > HINIC3_CMDQ_BUF_MAX_SIZE) {
+ pr_err("Cmdq in size (%u) exceeds 2KB\n", in_size);
+ return -ENOMEM;
+ }
+
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ pr_err("Alloc cmdq cmd buffer failed in %s\n",
+ __func__);
+ return -ENOMEM;
+ }
+ msg_buf = cmd_buf->buf;
+ *buf_in = (void *)cmd_buf;
+ cmd_buf->size = (u16)in_size;
+ } else {
+ if (in_size > MSG_MAX_IN_SIZE) {
+ pr_err("In size (%u) exceeds 2MB\n", in_size);
+ return -ENOMEM;
+ }
+ msg_buf = kzalloc(in_size, GFP_KERNEL);
+ *buf_in = msg_buf;
+ }
+ if (!(*buf_in)) {
+ pr_err("Alloc buffer in failed\n");
+ return -ENOMEM;
+ }
+
+ if (copy_from_user(msg_buf, nt_msg->in_buf, in_size)) {
+ pr_err("%s:%d: Copy from user failed\n",
+ __func__, __LINE__);
+ free_buff_in(hwdev, nt_msg, *buf_in);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int alloc_buff_out(void *hwdev, struct msg_module *nt_msg,
+ u32 out_size, void **buf_out)
+{
+ if (!out_size)
+ return 0;
+
+ if (nt_msg->module == SEND_TO_NPU &&
+ !nt_msg->npu_cmd.direct_resp) {
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+
+ if (out_size > HINIC3_CMDQ_BUF_MAX_SIZE) {
+ pr_err("Cmdq out size (%u) exceeds 2KB\n", out_size);
+ return -ENOMEM;
+ }
+
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ *buf_out = (void *)cmd_buf;
+ } else {
+ if (out_size > MSG_MAX_OUT_SIZE) {
+ pr_err("Out size (%u) exceeds 2MB\n", out_size);
+ return -ENOMEM;
+ }
+ *buf_out = kzalloc(out_size, GFP_KERNEL);
+ }
+ if (!(*buf_out)) {
+ pr_err("Alloc buffer out failed\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+int copy_buf_out_to_user(struct msg_module *nt_msg,
+ u32 out_size, void *buf_out)
+{
+ int ret = 0;
+ void *msg_out = NULL;
+
+ if (nt_msg->module == SEND_TO_NPU &&
+ !nt_msg->npu_cmd.direct_resp)
+ msg_out = ((struct hinic3_cmd_buf *)buf_out)->buf;
+ else
+ msg_out = buf_out;
+
+ if (copy_to_user(nt_msg->out_buf, msg_out, out_size))
+ ret = -EFAULT;
+
+ return ret;
+}
+
+int get_func_type(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ u16 func_type;
+
+ if (*out_size != sizeof(u16) || !buf_out) {
+ pr_err("Unexpected out buf size from user: %u, expect: %zu\n",
+ *out_size, sizeof(u16));
+ return -EFAULT;
+ }
+
+ func_type = hinic3_func_type(hinic3_get_sdk_hwdev_by_lld(lld_dev));
+
+ *(u16 *)buf_out = func_type;
+ return 0;
+}
+
+int get_func_id(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ u16 func_id;
+
+ if (*out_size != sizeof(u16) || !buf_out) {
+ pr_err("Unexpected out buf size from user: %u, expect: %zu\n",
+ *out_size, sizeof(u16));
+ return -EFAULT;
+ }
+
+ func_id = hinic3_global_func_id(hinic3_get_sdk_hwdev_by_lld(lld_dev));
+ *(u16 *)buf_out = func_id;
+
+ return 0;
+}
+
+int get_hw_driver_stats(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ return hinic3_dbg_get_hw_stats(hinic3_get_sdk_hwdev_by_lld(lld_dev),
+ buf_out, (u16 *)out_size);
+}
+
+int clear_hw_driver_stats(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ u16 size;
+
+ size = hinic3_dbg_clear_hw_stats(hinic3_get_sdk_hwdev_by_lld(lld_dev));
+ if (*out_size != size) {
+ pr_err("Unexpected out buf size from user: %u, expect: %u\n",
+ *out_size, size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int get_self_test_result(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ u32 result;
+
+ if (*out_size != sizeof(u32) || !buf_out) {
+ pr_err("Unexpected out buf size from user: %u, expect: %zu\n",
+ *out_size, sizeof(u32));
+ return -EFAULT;
+ }
+
+ result = hinic3_get_self_test_result(hinic3_get_sdk_hwdev_by_lld(lld_dev));
+ *(u32 *)buf_out = result;
+
+ return 0;
+}
+
+int get_chip_faults_stats(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ u32 offset = 0;
+ struct nic_cmd_chip_fault_stats *fault_info = NULL;
+
+ if (!buf_in || !buf_out || *out_size != sizeof(*fault_info) ||
+ in_size != sizeof(*fault_info)) {
+ pr_err("Unexpected out buf size from user: %u, expect: %zu\n",
+ *out_size, sizeof(*fault_info));
+ return -EFAULT;
+ }
+ fault_info = (struct nic_cmd_chip_fault_stats *)buf_in;
+ offset = fault_info->offset;
+
+ fault_info = (struct nic_cmd_chip_fault_stats *)buf_out;
+ hinic3_get_chip_fault_stats(hinic3_get_sdk_hwdev_by_lld(lld_dev),
+ fault_info->chip_fault_stats, offset);
+
+ return 0;
+}
+
+static u32 get_up_timeout_val(enum hinic3_mod_type mod, u16 cmd)
+{
+ if (mod == HINIC3_MOD_COMM &&
+ (cmd == COMM_MGMT_CMD_UPDATE_FW ||
+ cmd == COMM_MGMT_CMD_UPDATE_BIOS ||
+ cmd == COMM_MGMT_CMD_ACTIVE_FW ||
+ cmd == COMM_MGMT_CMD_SWITCH_CFG ||
+ cmd == COMM_MGMT_CMD_HOT_ACTIVE_FW))
+ return MGMT_MSG_UPDATE_TIMEOUT;
+
+ return 0; /* use default mbox/apichain timeout time */
+}
+
+static int api_csr_read(void *hwdev, struct msg_module *nt_msg,
+ void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct up_log_msg_st *up_log_msg = (struct up_log_msg_st *)buf_in;
+ int ret = 0;
+ u32 rd_len;
+ u32 rd_addr;
+ u32 rd_cnt = 0;
+ u32 offset = 0;
+ u8 node_id;
+ u32 i;
+
+ if (!buf_in || !buf_out || in_size != sizeof(*up_log_msg) ||
+ *out_size != up_log_msg->rd_len || up_log_msg->rd_len % DW_WIDTH != 0)
+ return -EINVAL;
+
+ rd_len = up_log_msg->rd_len;
+ rd_addr = up_log_msg->addr;
+ node_id = (u8)nt_msg->mpu_cmd.mod;
+
+ rd_cnt = rd_len / DW_WIDTH;
+
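+ /* Read the register range one dword at a time */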
+ for (i = 0; i < rd_cnt; i++) {
+ ret = hinic3_api_csr_rd32(hwdev, node_id,
+ rd_addr + offset,
+ (u32 *)(((u8 *)buf_out) + offset));
+ if (ret) {
+ pr_err("Csr rd fail, err: %d, node_id: %u, csr addr: 0x%08x\n",
+ ret, node_id, rd_addr + offset);
+ return ret;
+ }
+ offset += DW_WIDTH;
+ }
+ *out_size = rd_len;
+
+ return ret;
+}
+
+static int api_csr_write(void *hwdev, struct msg_module *nt_msg,
+ void *buf_in, u32 in_size, void *buf_out,
+ u32 *out_size)
+{
+ struct csr_write_st *csr_write_msg = (struct csr_write_st *)buf_in;
+ int ret = 0;
+ u32 rd_len;
+ u32 rd_addr;
+ u32 rd_cnt = 0;
+ u32 offset = 0;
+ u8 node_id;
+ u32 i;
+ u8 *data = NULL;
+
+ if (!buf_in || in_size != sizeof(*csr_write_msg) || csr_write_msg->rd_len % DW_WIDTH != 0)
+ return -EINVAL;
+
+ rd_len = csr_write_msg->rd_len;
+ rd_addr = csr_write_msg->addr;
+ node_id = (u8)nt_msg->mpu_cmd.mod;
+
+ rd_cnt = rd_len / DW_WIDTH;
+
+ data = kzalloc(rd_len, GFP_KERNEL);
+ if (!data) {
+ pr_err("Failed to allocate memory\n");
+ return -ENOMEM;
+ }
+ if (copy_from_user(data, (void *)csr_write_msg->data, rd_len)) {
+ pr_err("Copy information from user failed\n");
+ kfree(data);
+ return -EFAULT;
+ }
+
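+ /* Write the user-supplied data one dword at a time */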
+ for (i = 0; i < rd_cnt; i++) {
+ ret = hinic3_api_csr_wr32(hwdev, node_id,
+ rd_addr + offset,
+ *((u32 *)(data + offset)));
+ if (ret) {
+ pr_err("Csr wr fail, ret: %d, node_id: %u, csr addr: 0x%08x\n",
+ ret, node_id, rd_addr + offset);
+ kfree(data);
+ return ret;
+ }
+ offset += DW_WIDTH;
+ }
+
+ *out_size = 0;
+ kfree(data);
+ return ret;
+}
+
+int send_to_mpu(void *hwdev, struct msg_module *nt_msg,
+ void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ enum hinic3_mod_type mod;
+ u32 timeout;
+ int ret = 0;
+ u16 cmd;
+
+ mod = (enum hinic3_mod_type)nt_msg->mpu_cmd.mod;
+ cmd = nt_msg->mpu_cmd.cmd;
+
+ if (nt_msg->mpu_cmd.api_type == API_TYPE_MBOX || nt_msg->mpu_cmd.api_type == API_TYPE_CLP) {
+ timeout = get_up_timeout_val(mod, cmd);
+
+ if (nt_msg->mpu_cmd.api_type == API_TYPE_MBOX)
+ ret = hinic3_msg_to_mgmt_sync(hwdev, mod, cmd, buf_in, (u16)in_size,
+ buf_out, (u16 *)out_size, timeout,
+ HINIC3_CHANNEL_DEFAULT);
+ else
+ ret = hinic3_clp_to_mgmt(hwdev, mod, cmd, buf_in, (u16)in_size,
+ buf_out, (u16 *)out_size);
+ if (ret) {
+ pr_err("Message to mgmt cpu failed, mod: %d, cmd: %u\n", mod, cmd);
+ return ret;
+ }
+ } else if (nt_msg->mpu_cmd.api_type == API_TYPE_API_CHAIN_BYPASS) {
+ if (nt_msg->mpu_cmd.cmd == API_CSR_WRITE)
+ return api_csr_write(hwdev, nt_msg, buf_in, in_size, buf_out, out_size);
+
+ ret = api_csr_read(hwdev, nt_msg, buf_in, in_size, buf_out, out_size);
+ } else if (nt_msg->mpu_cmd.api_type == API_TYPE_API_CHAIN_TO_MPU) {
+ timeout = get_up_timeout_val(mod, cmd);
+ if (hinic3_pcie_itf_id(hwdev) != SPU_HOST_ID)
+ ret = hinic3_msg_to_mgmt_api_chain_sync(hwdev, mod, cmd, buf_in,
+ (u16)in_size, buf_out,
+ (u16 *)out_size, timeout);
+ else
+ ret = hinic3_msg_to_mgmt_sync(hwdev, mod, cmd, buf_in, (u16)in_size,
+ buf_out, (u16 *)out_size, timeout,
+ HINIC3_CHANNEL_DEFAULT);
+ if (ret) {
+ pr_err("Message to mgmt cpu via api chain failed, mod: %d, cmd: %u\n",
+ mod, cmd);
+ return ret;
+ }
+ } else {
+ pr_err("Unsupported api_type %d\n", nt_msg->mpu_cmd.api_type);
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+int send_to_npu(void *hwdev, struct msg_module *nt_msg,
+ void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ int ret = 0;
+ u8 cmd;
+ enum hinic3_mod_type mod;
+
+ mod = (enum hinic3_mod_type)nt_msg->npu_cmd.mod;
+ cmd = nt_msg->npu_cmd.cmd;
+
+ if (nt_msg->npu_cmd.direct_resp) {
+ ret = hinic3_cmdq_direct_resp(hwdev, mod, cmd,
+ buf_in, buf_out, 0,
+ HINIC3_CHANNEL_DEFAULT);
+ if (ret)
+ pr_err("Send direct cmdq failed, err: %d\n", ret);
+ } else {
+ ret = hinic3_cmdq_detail_resp(hwdev, mod, cmd, buf_in, buf_out,
+ NULL, 0, HINIC3_CHANNEL_DEFAULT);
+ if (ret)
+ pr_err("Send detail cmdq failed, err: %d\n", ret);
+ }
+
+ return ret;
+}
+
+static int sm_rd16(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u16 val1;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd16(hwdev, node, instance, id, &val1);
+ if (ret != 0) {
+ pr_err("Get sm ctr information (16 bits) failed!\n");
+ val1 = 0xffff;
+ }
+
+ buf_out->val1 = val1;
+
+ return ret;
+}
+
+static int sm_rd32(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u32 val1;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd32(hwdev, node, instance, id, &val1);
+ if (ret) {
+ pr_err("Get sm ctr information (32 bits) failed!\n");
+ val1 = ~0;
+ }
+
+ buf_out->val1 = val1;
+
+ return ret;
+}
+
+static int sm_rd32_clear(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u32 val1;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd32_clear(hwdev, node, instance, id, &val1);
+ if (ret) {
+ pr_err("Get sm ctr clear information (32 bits) failed!\n");
+ val1 = ~0;
+ }
+
+ buf_out->val1 = val1;
+
+ return ret;
+}
+
+static int sm_rd64_pair(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u64 val1 = 0, val2 = 0;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd64_pair(hwdev, node, instance, id, &val1, &val2);
+ if (ret) {
+ pr_err("Get sm ctr information (64 bits pair) failed!\n");
+ val1 = ~0;
+ val2 = ~0;
+ }
+
+ buf_out->val1 = val1;
+ buf_out->val2 = val2;
+
+ return ret;
+}
+
+static int sm_rd64_pair_clear(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u64 val1 = 0;
+ u64 val2 = 0;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd64_pair_clear(hwdev, node, instance, id, &val1,
+ &val2);
+ if (ret) {
+ pr_err("Get sm ctr clear information (64 bits pair) failed!\n");
+ val1 = ~0;
+ val2 = ~0;
+ }
+
+ buf_out->val1 = val1;
+ buf_out->val2 = val2;
+
+ return ret;
+}
+
+static int sm_rd64(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u64 val1;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd64(hwdev, node, instance, id, &val1);
+ if (ret) {
+ pr_err("Get sm ctr information (64 bits) failed!\n");
+ val1 = ~0;
+ }
+ buf_out->val1 = val1;
+
+ return ret;
+}
+
+static int sm_rd64_clear(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out)
+{
+ u64 val1;
+ int ret;
+
+ ret = hinic3_sm_ctr_rd64_clear(hwdev, node, instance, id, &val1);
+ if (ret) {
+ pr_err("Get sm ctr clear information (64 bits) failed!\n");
+ val1 = ~0;
+ }
+ buf_out->val1 = val1;
+
+ return ret;
+}
+
+typedef int (*sm_module)(void *hwdev, u32 id, u8 instance,
+ u8 node, struct sm_out_st *buf_out);
+
+struct sm_module_handle {
+ enum sm_cmd_type sm_cmd_name;
+ sm_module sm_func;
+};
+
+const struct sm_module_handle sm_module_cmd_handle[] = {
+ {SM_CTR_RD16, sm_rd16},
+ {SM_CTR_RD32, sm_rd32},
+ {SM_CTR_RD64_PAIR, sm_rd64_pair},
+ {SM_CTR_RD64, sm_rd64},
+ {SM_CTR_RD32_CLEAR, sm_rd32_clear},
+ {SM_CTR_RD64_PAIR_CLEAR, sm_rd64_pair_clear},
+ {SM_CTR_RD64_CLEAR, sm_rd64_clear}
+};
+
+int send_to_sm(void *hwdev, struct msg_module *nt_msg,
+ void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ struct sm_in_st *sm_in = buf_in;
+ struct sm_out_st *sm_out = buf_out;
+ u32 msg_formate = nt_msg->msg_formate;
+ int index, num_cmds = sizeof(sm_module_cmd_handle) /
+ sizeof(sm_module_cmd_handle[0]);
+ int ret = 0;
+
+ if (!buf_in || !buf_out || in_size != sizeof(*sm_in) || *out_size != sizeof(*sm_out)) {
+ pr_err("Unexpected out buf size: %u, in buf size: %u\n",
+ *out_size, in_size);
+ return -EINVAL;
+ }
+
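+ /* Dispatch to the reader registered for this counter command type */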
+ for (index = 0; index < num_cmds; index++) {
+ if (msg_formate != sm_module_cmd_handle[index].sm_cmd_name)
+ continue;
+
+ ret = sm_module_cmd_handle[index].sm_func(hwdev, (u32)sm_in->id,
+ (u8)sm_in->instance,
+ (u8)sm_in->node, sm_out);
+ break;
+ }
+
+ if (index == num_cmds) {
+ pr_err("Can't find callback for %u\n", msg_formate);
+ return -EINVAL;
+ }
+
+ if (ret != 0)
+ pr_err("Get sm information fail, id:%u, instance:%u, node:%u\n",
+ sm_in->id, sm_in->instance, sm_in->node);
+
+ *out_size = sizeof(struct sm_out_st);
+
+ return ret;
+}
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.h
new file mode 100644
index 000000000000..9330200823b9
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hw_mt.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_HW_MT_H
+#define HINIC3_HW_MT_H
+
+#include "hinic3_lld.h"
+
+struct sm_in_st {
+ int node;
+ int id;
+ int instance;
+};
+
+struct sm_out_st {
+ u64 val1;
+ u64 val2;
+};
+
+struct up_log_msg_st {
+ u32 rd_len;
+ u32 addr;
+};
+
+struct csr_write_st {
+ u32 rd_len;
+ u32 addr;
+ u8 *data;
+};
+
+int get_func_type(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+int get_func_id(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+int get_hw_driver_stats(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+int clear_hw_driver_stats(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+int get_self_test_result(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+int get_chip_faults_stats(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.c
new file mode 100644
index 000000000000..2d29290f59e9
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.c
@@ -0,0 +1,2141 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM] " fmt
+
+#include <linux/time.h>
+#include <linux/timex.h>
+#include <linux/rtc.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/completion.h>
+#include <linux/semaphore.h>
+#include <linux/interrupt.h>
+#include <linux/vmalloc.h>
+
+#include "ossl_knl.h"
+#include "hinic3_mt.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_csr.h"
+#include "hinic3_hwif.h"
+#include "hinic3_eqs.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_mbox.h"
+#include "hinic3_cmdq.h"
+#include "hinic3_hw_cfg.h"
+#include "hinic3_hw_comm.h"
+#include "hinic3_prof_adap.h"
+#include "hinic3_devlink.h"
+#include "hinic3_hwdev.h"
+
+static unsigned int wq_page_order = HINIC3_MAX_WQ_PAGE_SIZE_ORDER;
+module_param(wq_page_order, uint, 0444);
+MODULE_PARM_DESC(wq_page_order, "Set wq page size order, wq page size is 4K * (2 ^ wq_page_order) - default is 8");
+
+enum hinic3_pcie_nosnoop {
+ HINIC3_PCIE_SNOOP = 0,
+ HINIC3_PCIE_NO_SNOOP = 1,
+};
+
+enum hinic3_pcie_tph {
+ HINIC3_PCIE_TPH_DISABLE = 0,
+ HINIC3_PCIE_TPH_ENABLE = 1,
+};
+
+#define HINIC3_DMA_ATTR_INDIR_IDX_SHIFT 0
+
+#define HINIC3_DMA_ATTR_INDIR_IDX_MASK 0x3FF
+
+#define HINIC3_DMA_ATTR_INDIR_IDX_SET(val, member) \
+ (((u32)(val) & HINIC3_DMA_ATTR_INDIR_##member##_MASK) << \
+ HINIC3_DMA_ATTR_INDIR_##member##_SHIFT)
+
+#define HINIC3_DMA_ATTR_INDIR_IDX_CLEAR(val, member) \
+ ((val) & (~(HINIC3_DMA_ATTR_INDIR_##member##_MASK \
+ << HINIC3_DMA_ATTR_INDIR_##member##_SHIFT)))
+
+#define HINIC3_DMA_ATTR_ENTRY_ST_SHIFT 0
+#define HINIC3_DMA_ATTR_ENTRY_AT_SHIFT 8
+#define HINIC3_DMA_ATTR_ENTRY_PH_SHIFT 10
+#define HINIC3_DMA_ATTR_ENTRY_NO_SNOOPING_SHIFT 12
+#define HINIC3_DMA_ATTR_ENTRY_TPH_EN_SHIFT 13
+
+#define HINIC3_DMA_ATTR_ENTRY_ST_MASK 0xFF
+#define HINIC3_DMA_ATTR_ENTRY_AT_MASK 0x3
+#define HINIC3_DMA_ATTR_ENTRY_PH_MASK 0x3
+#define HINIC3_DMA_ATTR_ENTRY_NO_SNOOPING_MASK 0x1
+#define HINIC3_DMA_ATTR_ENTRY_TPH_EN_MASK 0x1
+
+#define HINIC3_DMA_ATTR_ENTRY_SET(val, member) \
+ (((u32)(val) & HINIC3_DMA_ATTR_ENTRY_##member##_MASK) << \
+ HINIC3_DMA_ATTR_ENTRY_##member##_SHIFT)
+
+#define HINIC3_DMA_ATTR_ENTRY_CLEAR(val, member) \
+ ((val) & (~(HINIC3_DMA_ATTR_ENTRY_##member##_MASK \
+ << HINIC3_DMA_ATTR_ENTRY_##member##_SHIFT)))
+
+#define HINIC3_PCIE_ST_DISABLE 0
+#define HINIC3_PCIE_AT_DISABLE 0
+#define HINIC3_PCIE_PH_DISABLE 0
+
+#define PCIE_MSIX_ATTR_ENTRY 0
+
+#define HINIC3_CHIP_PRESENT 1
+#define HINIC3_CHIP_ABSENT 0
+
+#define HINIC3_DEAULT_EQ_MSIX_PENDING_LIMIT 0
+#define HINIC3_DEAULT_EQ_MSIX_COALESC_TIMER_CFG 0xFF
+#define HINIC3_DEAULT_EQ_MSIX_RESEND_TIMER_CFG 7
+
+#define HINIC3_HWDEV_WQ_NAME "hinic3_hardware"
+#define HINIC3_WQ_MAX_REQ 10
+
+#define SLAVE_HOST_STATUS_CLEAR(host_id, val) ((val) & (~(1U << (host_id))))
+#define SLAVE_HOST_STATUS_SET(host_id, enable) (((u8)(enable) & 1U) << (host_id))
+#define SLAVE_HOST_STATUS_GET(host_id, val) (!!((val) & (1U << (host_id))))
+
+void set_slave_host_enable(void *hwdev, u8 host_id, bool enable)
+{
+ u32 reg_val;
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+
+ if (HINIC3_FUNC_TYPE(dev) != TYPE_PPF)
+ return;
+
+ reg_val = hinic3_hwif_read_reg(dev->hwif, HINIC3_MULT_HOST_SLAVE_STATUS_ADDR);
+
+ reg_val = SLAVE_HOST_STATUS_CLEAR(host_id, reg_val);
+ reg_val |= SLAVE_HOST_STATUS_SET(host_id, enable);
+ hinic3_hwif_write_reg(dev->hwif, HINIC3_MULT_HOST_SLAVE_STATUS_ADDR, reg_val);
+
+ sdk_info(dev->dev_hdl, "Set slave host %d status %d, reg value: 0x%x\n",
+ host_id, enable, reg_val);
+}
+
+int hinic3_get_slave_host_enable(void *hwdev, u8 host_id, u8 *slave_en)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ u32 reg_val;
+
+ if (HINIC3_FUNC_TYPE(dev) != TYPE_PPF) {
+ sdk_warn(dev->dev_hdl, "Hwdev should be PPF\n");
+ return -EINVAL;
+ }
+
+ reg_val = hinic3_hwif_read_reg(dev->hwif, HINIC3_MULT_HOST_SLAVE_STATUS_ADDR);
+ *slave_en = SLAVE_HOST_STATUS_GET(host_id, reg_val);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_slave_host_enable);
+
+int hinic3_get_slave_bitmap(void *hwdev, u8 *slave_host_bitmap)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+
+ if (HINIC3_FUNC_TYPE(dev) != TYPE_PPF) {
+ sdk_warn(dev->dev_hdl, "Hwdev should be PPF\n");
+ return -EINVAL;
+ }
+
+ *slave_host_bitmap = cap->host_valid_bitmap & (~(1U << cap->master_host_id));
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_slave_bitmap);
+
+static void set_func_host_mode(struct hinic3_hwdev *hwdev, enum hinic3_func_mode mode)
+{
+ switch (mode) {
+ case FUNC_MOD_MULTI_BM_MASTER:
+ sdk_info(hwdev->dev_hdl, "Detect multi-host BM master host\n");
+ hwdev->func_mode = FUNC_MOD_MULTI_BM_MASTER;
+ break;
+ case FUNC_MOD_MULTI_BM_SLAVE:
+ sdk_info(hwdev->dev_hdl, "Detect multi-host BM slave host\n");
+ hwdev->func_mode = FUNC_MOD_MULTI_BM_SLAVE;
+ break;
+ case FUNC_MOD_MULTI_VM_MASTER:
+ sdk_info(hwdev->dev_hdl, "Detect multi-host VM master host\n");
+ hwdev->func_mode = FUNC_MOD_MULTI_VM_MASTER;
+ break;
+ case FUNC_MOD_MULTI_VM_SLAVE:
+ sdk_info(hwdev->dev_hdl, "Detect multi-host VM slave host\n");
+ hwdev->func_mode = FUNC_MOD_MULTI_VM_SLAVE;
+ break;
+ default:
+ hwdev->func_mode = FUNC_MOD_NORMAL_HOST;
+ break;
+ }
+}
+
+static void hinic3_init_host_mode_pre(struct hinic3_hwdev *hwdev)
+{
+ struct service_cap *cap = &hwdev->cfg_mgmt->svc_cap;
+ u8 host_id = hwdev->hwif->attr.pci_intf_idx;
+
+ if (HINIC3_FUNC_TYPE(hwdev) == TYPE_VF) {
+ set_func_host_mode(hwdev, FUNC_MOD_NORMAL_HOST);
+ return;
+ }
+
+ switch (cap->srv_multi_host_mode) {
+ case HINIC3_SDI_MODE_BM:
+ if (host_id == cap->master_host_id)
+ set_func_host_mode(hwdev, FUNC_MOD_MULTI_BM_MASTER);
+ else
+ set_func_host_mode(hwdev, FUNC_MOD_MULTI_BM_SLAVE);
+ break;
+ case HINIC3_SDI_MODE_VM:
+ if (host_id == cap->master_host_id)
+ set_func_host_mode(hwdev, FUNC_MOD_MULTI_VM_MASTER);
+ else
+ set_func_host_mode(hwdev, FUNC_MOD_MULTI_VM_SLAVE);
+ break;
+ default:
+ set_func_host_mode(hwdev, FUNC_MOD_NORMAL_HOST);
+ break;
+ }
+}
+
+static int hinic3_multi_host_init(struct hinic3_hwdev *hwdev)
+{
+ if (!IS_MULTI_HOST(hwdev) || !HINIC3_IS_PPF(hwdev))
+ return 0;
+
+ if (IS_SLAVE_HOST(hwdev))
+ set_slave_host_enable(hwdev, hinic3_pcie_itf_id(hwdev), true);
+
+ return 0;
+}
+
+static int hinic3_multi_host_free(struct hinic3_hwdev *hwdev)
+{
+ if (!IS_MULTI_HOST(hwdev) || !HINIC3_IS_PPF(hwdev))
+ return 0;
+
+ if (IS_SLAVE_HOST(hwdev))
+ set_slave_host_enable(hwdev, hinic3_pcie_itf_id(hwdev), false);
+
+ return 0;
+}
+
+static u8 hinic3_nic_sw_aeqe_handler(void *hwdev, u8 event, u8 *data)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev)
+ return 0;
+
+ sdk_err(dev->dev_hdl, "Received nic ucode aeq event type: 0x%x, data: 0x%llx\n",
+ event, *((u64 *)data));
+
+ if (event < HINIC3_NIC_FATAL_ERROR_MAX)
+ atomic_inc(&dev->hw_stats.nic_ucode_event_stats[event]);
+
+ return 0;
+}
+
+static void hinic3_init_heartbeat_detect(struct hinic3_hwdev *hwdev);
+static void hinic3_destroy_heartbeat_detect(struct hinic3_hwdev *hwdev);
+
+typedef void (*mgmt_event_cb)(void *handle, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+struct mgmt_event_handle {
+ u16 cmd;
+ mgmt_event_cb proc;
+};
+
+static int pf_handle_vf_comm_mbox(void *pri_handle,
+ u16 vf_id, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct hinic3_hwdev *hwdev = pri_handle;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ sdk_warn(hwdev->dev_hdl, "Unsupported vf mbox event %u to process\n",
+ cmd);
+
+ return 0;
+}
+
+static int vf_handle_pf_comm_mbox(void *pri_handle,
+ u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct hinic3_hwdev *hwdev = pri_handle;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ sdk_warn(hwdev->dev_hdl, "Unsupported pf mbox event %u to process\n",
+ cmd);
+ return 0;
+}
+
+static void chip_fault_show(struct hinic3_hwdev *hwdev,
+ struct hinic3_fault_event *event)
+{
+ char fault_level[FAULT_LEVEL_MAX][FAULT_SHOW_STR_LEN + 1] = {
+ "fatal", "reset", "host", "flr", "general", "suggestion"};
+ char level_str[FAULT_SHOW_STR_LEN + 1];
+ u8 level;
+
+ memset(level_str, 0, FAULT_SHOW_STR_LEN + 1);
+ level = event->event.chip.err_level;
+ if (level < FAULT_LEVEL_MAX)
+ strncpy(level_str, fault_level[level],
+ FAULT_SHOW_STR_LEN);
+ else
+ strncpy(level_str, "Unknown", FAULT_SHOW_STR_LEN);
+
+ if (level == FAULT_LEVEL_SERIOUS_FLR)
+ dev_err(hwdev->dev_hdl, "err_level: %u [%s], flr func_id: %u\n",
+ level, level_str, event->event.chip.func_id);
+
+ dev_err(hwdev->dev_hdl,
+ "Module_id: 0x%x, err_type: 0x%x, err_level: %u[%s], err_csr_addr: 0x%08x, err_csr_value: 0x%08x\n",
+ event->event.chip.node_id,
+ event->event.chip.err_type, level, level_str,
+ event->event.chip.err_csr_addr,
+ event->event.chip.err_csr_value);
+}
+
+static void fault_report_show(struct hinic3_hwdev *hwdev,
+ struct hinic3_fault_event *event)
+{
+ char fault_type[FAULT_TYPE_MAX][FAULT_SHOW_STR_LEN + 1] = {
+ "chip", "ucode", "mem rd timeout", "mem wr timeout",
+ "reg rd timeout", "reg wr timeout", "phy fault", "tsensor fault"};
+ char type_str[FAULT_SHOW_STR_LEN + 1] = {0};
+ struct fault_event_stats *fault = NULL;
+
+ sdk_err(hwdev->dev_hdl, "Fault event report received, func_id: %u\n",
+ hinic3_global_func_id(hwdev));
+
+ fault = &hwdev->hw_stats.fault_event_stats;
+
+ if (event->type < FAULT_TYPE_MAX) {
+ strncpy(type_str, fault_type[event->type], sizeof(type_str));
+ atomic_inc(&fault->fault_type_stat[event->type]);
+ } else {
+ strncpy(type_str, "Unknown", sizeof(type_str));
+ }
+
+ sdk_err(hwdev->dev_hdl, "Fault type: %u [%s]\n", event->type, type_str);
+ /* Words 0-3 index the event->event.val array */
+ sdk_err(hwdev->dev_hdl, "Fault val[0]: 0x%08x, val[1]: 0x%08x, val[2]: 0x%08x, val[3]: 0x%08x\n",
+ event->event.val[0x0], event->event.val[0x1],
+ event->event.val[0x2], event->event.val[0x3]);
+
+ hinic3_show_chip_err_info(hwdev);
+
+ switch (event->type) {
+ case FAULT_TYPE_CHIP:
+ chip_fault_show(hwdev, event);
+ break;
+ case FAULT_TYPE_UCODE:
+ sdk_err(hwdev->dev_hdl, "Cause_id: %u, core_id: %u, c_id: %u, epc: 0x%08x\n",
+ event->event.ucode.cause_id, event->event.ucode.core_id,
+ event->event.ucode.c_id, event->event.ucode.epc);
+ break;
+ case FAULT_TYPE_MEM_RD_TIMEOUT:
+ case FAULT_TYPE_MEM_WR_TIMEOUT:
+ sdk_err(hwdev->dev_hdl, "Err_csr_ctrl: 0x%08x, err_csr_data: 0x%08x, ctrl_tab: 0x%08x, mem_index: 0x%08x\n",
+ event->event.mem_timeout.err_csr_ctrl,
+ event->event.mem_timeout.err_csr_data,
+ event->event.mem_timeout.ctrl_tab, event->event.mem_timeout.mem_index);
+ break;
+ case FAULT_TYPE_REG_RD_TIMEOUT:
+ case FAULT_TYPE_REG_WR_TIMEOUT:
+ sdk_err(hwdev->dev_hdl, "Err_csr: 0x%08x\n", event->event.reg_timeout.err_csr);
+ break;
+ case FAULT_TYPE_PHY_FAULT:
+ sdk_err(hwdev->dev_hdl, "Op_type: %u, port_id: %u, dev_ad: %u, csr_addr: 0x%08x, op_data: 0x%08x\n",
+ event->event.phy_fault.op_type, event->event.phy_fault.port_id,
+ event->event.phy_fault.dev_ad, event->event.phy_fault.csr_addr,
+ event->event.phy_fault.op_data);
+ break;
+ default:
+ break;
+ }
+}
+
+static void fault_event_handler(void *dev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_cmd_fault_event *fault_event = NULL;
+ struct hinic3_fault_event *fault = NULL;
+ struct hinic3_event_info event_info;
+ struct hinic3_hwdev *hwdev = dev;
+ u8 fault_src = HINIC3_FAULT_SRC_TYPE_MAX;
+ u8 fault_level;
+
+ if (in_size != sizeof(*fault_event)) {
+ sdk_err(hwdev->dev_hdl, "Invalid fault event report, length: %u, should be %zu\n",
+ in_size, sizeof(*fault_event));
+ return;
+ }
+
+ fault_event = buf_in;
+ fault_report_show(hwdev, &fault_event->event);
+
+ if (fault_event->event.type == FAULT_TYPE_CHIP)
+ fault_level = fault_event->event.event.chip.err_level;
+ else
+ fault_level = FAULT_LEVEL_FATAL;
+
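+ /* Forward the fault to the registered service event callback, if any */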
+ if (hwdev->event_callback) {
+ event_info.service = EVENT_SRV_COMM;
+ event_info.type = EVENT_COMM_FAULT;
+ fault = (void *)event_info.event_data;
+ memcpy(fault, &fault_event->event, sizeof(struct hinic3_fault_event));
+ fault->fault_level = fault_level;
+ hwdev->event_callback(hwdev->event_pri_handle, &event_info);
+ }
+
+ if (fault_event->event.type <= FAULT_TYPE_REG_WR_TIMEOUT)
+ fault_src = fault_event->event.type;
+ else if (fault_event->event.type == FAULT_TYPE_PHY_FAULT)
+ fault_src = HINIC3_FAULT_SRC_HW_PHY_FAULT;
+
+ hisdk3_fault_post_process(hwdev, fault_src, fault_level);
+}
+
+static void ffm_event_record(struct hinic3_hwdev *dev, struct dbgtool_k_glb_info *dbgtool_info,
+ struct ffm_intr_info *intr)
+{
+ struct rtc_time rctm;
+ struct timeval txc;
+ u32 ffm_idx;
+ u32 last_err_csr_addr;
+ u32 last_err_csr_value;
+
+ ffm_idx = dbgtool_info->ffm->ffm_num;
+ last_err_csr_addr = dbgtool_info->ffm->last_err_csr_addr;
+ last_err_csr_value = dbgtool_info->ffm->last_err_csr_value;
+ if (ffm_idx < FFM_RECORD_NUM_MAX) {
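+ /* Coalesce repeated reports of the same error CSR into the previous record */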
+ if (intr->err_csr_addr == last_err_csr_addr &&
+ intr->err_csr_value == last_err_csr_value) {
+ dbgtool_info->ffm->ffm[ffm_idx - 1].times++;
+ sdk_err(dev->dev_hdl, "Received same intr again, ffm_idx: %u\n", ffm_idx - 1);
+ return;
+ }
+ sdk_err(dev->dev_hdl, "Received intr, ffm_idx: %u\n", ffm_idx);
+
+ dbgtool_info->ffm->ffm[ffm_idx].intr_info.node_id = intr->node_id;
+ dbgtool_info->ffm->ffm[ffm_idx].intr_info.err_level = intr->err_level;
+ dbgtool_info->ffm->ffm[ffm_idx].intr_info.err_type = intr->err_type;
+ dbgtool_info->ffm->ffm[ffm_idx].intr_info.err_csr_addr = intr->err_csr_addr;
+ dbgtool_info->ffm->ffm[ffm_idx].intr_info.err_csr_value = intr->err_csr_value;
+ dbgtool_info->ffm->last_err_csr_addr = intr->err_csr_addr;
+ dbgtool_info->ffm->last_err_csr_value = intr->err_csr_value;
+ dbgtool_info->ffm->ffm[ffm_idx].times = 1;
+
+ /* Obtain the current UTC time */
+ do_gettimeofday(&txc);
+
+ /* Convert to GMT+8 local time: add 8 * 60 * 60 seconds before breaking the value down into tm */
+ rtc_time_to_tm((unsigned long)txc.tv_sec + 60 * 60 * 8, &rctm);
+
+ /* tm_year starts from 1900; 0->1900, 1->1901, and so on */
+ dbgtool_info->ffm->ffm[ffm_idx].year = (u16)(rctm.tm_year + 1900);
+ /* tm_mon starts from 0, 0 indicates January, and so on */
+ dbgtool_info->ffm->ffm[ffm_idx].mon = (u8)rctm.tm_mon + 1;
+ dbgtool_info->ffm->ffm[ffm_idx].mday = (u8)rctm.tm_mday;
+ dbgtool_info->ffm->ffm[ffm_idx].hour = (u8)rctm.tm_hour;
+ dbgtool_info->ffm->ffm[ffm_idx].min = (u8)rctm.tm_min;
+ dbgtool_info->ffm->ffm[ffm_idx].sec = (u8)rctm.tm_sec;
+
+ dbgtool_info->ffm->ffm_num++;
+ }
+}
+
+static void ffm_event_msg_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct dbgtool_k_glb_info *dbgtool_info = NULL;
+ struct hinic3_hwdev *dev = hwdev;
+ struct card_node *card_info = NULL;
+ struct ffm_intr_info *intr = NULL;
+
+ if (in_size != sizeof(*intr)) {
+ sdk_err(dev->dev_hdl, "Invalid ffm event report, length: %u, should be %zu.\n",
+ in_size, sizeof(*intr));
+ return;
+ }
+
+ intr = buf_in;
+
+ sdk_err(dev->dev_hdl, "node_id: 0x%x, err_type: 0x%x, err_level: %u, err_csr_addr: 0x%08x, err_csr_value: 0x%08x\n",
+ intr->node_id, intr->err_type, intr->err_level,
+ intr->err_csr_addr, intr->err_csr_value);
+
+ hinic3_show_chip_err_info(hwdev);
+
+ card_info = dev->chip_node;
+ dbgtool_info = card_info->dbgtool_info;
+
+ *out_size = sizeof(*intr);
+
+ if (!dbgtool_info)
+ return;
+
+ if (!dbgtool_info->ffm)
+ return;
+
+ ffm_event_record(dev, dbgtool_info, intr);
+}
+
+#define X_CSR_INDEX 30
+
+static void sw_watchdog_timeout_info_show(struct hinic3_hwdev *hwdev,
+ void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct comm_info_sw_watchdog *watchdog_info = buf_in;
+ u32 stack_len, i, j, tmp;
+ u32 *dump_addr = NULL;
+ u64 *reg = NULL;
+
+ if (in_size != sizeof(*watchdog_info)) {
+ sdk_err(hwdev->dev_hdl, "Invalid mgmt watchdog report, length: %u, should be %zu\n",
+ in_size, sizeof(*watchdog_info));
+ return;
+ }
+
+ sdk_err(hwdev->dev_hdl, "Mgmt deadloop time: 0x%x 0x%x, task id: 0x%x, sp: 0x%llx\n",
+ watchdog_info->curr_time_h, watchdog_info->curr_time_l,
+ watchdog_info->task_id, watchdog_info->sp);
+ sdk_err(hwdev->dev_hdl,
+ "Stack current used: 0x%x, peak used: 0x%x, overflow flag: 0x%x, top: 0x%llx, bottom: 0x%llx\n",
+ watchdog_info->curr_used, watchdog_info->peak_used,
+ watchdog_info->is_overflow, watchdog_info->stack_top, watchdog_info->stack_bottom);
+
+ sdk_err(hwdev->dev_hdl, "Mgmt pc: 0x%llx, elr: 0x%llx, spsr: 0x%llx, far: 0x%llx, esr: 0x%llx, xzr: 0x%llx\n",
+ watchdog_info->pc, watchdog_info->elr, watchdog_info->spsr, watchdog_info->far,
+ watchdog_info->esr, watchdog_info->xzr); /*lint !e10 !e26 */
+
+ sdk_err(hwdev->dev_hdl, "Mgmt register info\n");
+ reg = &watchdog_info->x30;
+ for (i = 0; i <= X_CSR_INDEX; i++)
+ sdk_err(hwdev->dev_hdl, "x%02u:0x%llx\n",
+ X_CSR_INDEX - i, reg[i]); /*lint !e661 !e662 */
+
+ if (watchdog_info->stack_actlen <= DATA_LEN_1K) {
+ stack_len = watchdog_info->stack_actlen;
+ } else {
+ sdk_err(hwdev->dev_hdl, "Oops stack length: 0x%x is wrong\n",
+ watchdog_info->stack_actlen);
+ stack_len = DATA_LEN_1K;
+ }
+
+ sdk_err(hwdev->dev_hdl, "Mgmt dump stack, 16 bytes per line (start from sp)\n");
+ for (i = 0; i < (stack_len / DUMP_16B_PER_LINE); i++) {
+ dump_addr = (u32 *)(watchdog_info->stack_data + (u32)(i * DUMP_16B_PER_LINE));
+ sdk_err(hwdev->dev_hdl, "0x%08x 0x%08x 0x%08x 0x%08x\n",
+ *dump_addr, *(dump_addr + 0x1), *(dump_addr + 0x2), *(dump_addr + 0x3));
+ }
+
+ tmp = (stack_len % DUMP_16B_PER_LINE) / DUMP_4_VAR_PER_LINE;
+ for (j = 0; j < tmp; j++) {
+ dump_addr = (u32 *)(watchdog_info->stack_data +
+ (u32)(i * DUMP_16B_PER_LINE + j * DUMP_4_VAR_PER_LINE));
+ sdk_err(hwdev->dev_hdl, "0x%08x\n", *dump_addr);
+ }
+
+ *out_size = sizeof(*watchdog_info);
+ watchdog_info = buf_out;
+ watchdog_info->head.status = 0;
+}
+
+static void mgmt_watchdog_timeout_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct hinic3_event_info event_info = { 0 };
+ struct hinic3_hwdev *dev = hwdev;
+
+ sw_watchdog_timeout_info_show(dev, buf_in, in_size, buf_out, out_size);
+
+ if (dev->event_callback) {
+ event_info.type = EVENT_COMM_MGMT_WATCHDOG;
+ dev->event_callback(dev->event_pri_handle, &event_info);
+ }
+}
+
+static void show_exc_info(struct hinic3_hwdev *hwdev, EXC_INFO_S *exc_info)
+{
+ u32 i;
+
+ /* key information */
+ sdk_err(hwdev->dev_hdl, "==================== Exception Info Begin ====================\n");
+ sdk_err(hwdev->dev_hdl, "Exception CpuTick : 0x%08x 0x%08x\n",
+ exc_info->cpu_tick.cnt_hi, exc_info->cpu_tick.cnt_lo);
+ sdk_err(hwdev->dev_hdl, "Exception Cause : %u\n", exc_info->exc_cause);
+ sdk_err(hwdev->dev_hdl, "Os Version : %s\n", exc_info->os_ver);
+ sdk_err(hwdev->dev_hdl, "App Version : %s\n", exc_info->app_ver);
+ sdk_err(hwdev->dev_hdl, "CPU Type : 0x%08x\n", exc_info->cpu_type);
+ sdk_err(hwdev->dev_hdl, "CPU ID : 0x%08x\n", exc_info->cpu_id);
+ sdk_err(hwdev->dev_hdl, "Thread Type : 0x%08x\n", exc_info->thread_type);
+ sdk_err(hwdev->dev_hdl, "Thread ID : 0x%08x\n", exc_info->thread_id);
+ sdk_err(hwdev->dev_hdl, "Byte Order : 0x%08x\n", exc_info->byte_order);
+ sdk_err(hwdev->dev_hdl, "Nest Count : 0x%08x\n", exc_info->nest_cnt);
+ sdk_err(hwdev->dev_hdl, "Fatal Error Num : 0x%08x\n", exc_info->fatal_errno);
+ sdk_err(hwdev->dev_hdl, "Current SP : 0x%016llx\n", exc_info->uw_sp);
+ sdk_err(hwdev->dev_hdl, "Stack Bottom : 0x%016llx\n", exc_info->stack_bottom);
+
+ /* register field */
+ sdk_err(hwdev->dev_hdl, "Register contents when the exception occurred.\n");
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx \t %-14s: 0x%016llx\n", "TTBR0",
+ exc_info->reg_info.ttbr0, "TTBR1", exc_info->reg_info.ttbr1);
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx \t %-14s: 0x%016llx\n", "TCR",
+ exc_info->reg_info.tcr, "MAIR", exc_info->reg_info.mair);
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx \t %-14s: 0x%016llx\n", "SCTLR",
+ exc_info->reg_info.sctlr, "VBAR", exc_info->reg_info.vbar);
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx \t %-14s: 0x%016llx\n", "CURRENTE1",
+ exc_info->reg_info.current_el, "SP", exc_info->reg_info.sp);
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx \t %-14s: 0x%016llx\n", "ELR",
+ exc_info->reg_info.elr, "SPSR", exc_info->reg_info.spsr);
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx \t %-14s: 0x%016llx\n", "FAR",
+ exc_info->reg_info.far_r, "ESR", exc_info->reg_info.esr);
+ sdk_err(hwdev->dev_hdl, "%-14s: 0x%016llx\n", "XZR", exc_info->reg_info.xzr);
+
+ for (i = 0; i < XREGS_NUM - 1; i += 0x2)
+ sdk_err(hwdev->dev_hdl, "XREGS[%02u]%-5s: 0x%016llx \t XREGS[%02u]%-5s: 0x%016llx\n",
+ i, " ", exc_info->reg_info.xregs[i],
+ (u32)(i + 0x1U), " ", exc_info->reg_info.xregs[(u32)(i + 0x1U)]);
+
+ sdk_err(hwdev->dev_hdl, "XREGS[%02u]%-5s: 0x%016llx\n", XREGS_NUM - 1, " ",
+ exc_info->reg_info.xregs[XREGS_NUM - 1]);
+}
+
+#define FOUR_REG_LEN 16
+
+static void mgmt_lastword_report_event_handler(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ comm_info_up_lastword_s *lastword_info = buf_in;
+ EXC_INFO_S *exc_info = &lastword_info->stack_info;
+ u32 stack_len = lastword_info->stack_actlen;
+ struct hinic3_hwdev *dev = hwdev;
+ u32 *curr_reg = NULL;
+ u32 reg_i, cnt;
+
+ if (in_size != sizeof(*lastword_info)) {
+ sdk_err(dev->dev_hdl, "Invalid mgmt lastword, length: %u, should be %zu\n",
+ in_size, sizeof(*lastword_info));
+ return;
+ }
+
+ show_exc_info(dev, exc_info);
+
+ /* call stack dump */
+ sdk_err(dev->dev_hdl, "Dump stack when exception occurs, 16 bytes per line.\n");
+
+ cnt = stack_len / FOUR_REG_LEN;
+ for (reg_i = 0; reg_i < cnt; reg_i++) {
+ curr_reg = (u32 *)(lastword_info->stack_data + ((u64)(u32)(reg_i * FOUR_REG_LEN)));
+ sdk_err(dev->dev_hdl, "0x%08x 0x%08x 0x%08x 0x%08x\n",
+ *curr_reg, *(curr_reg + 0x1), *(curr_reg + 0x2), *(curr_reg + 0x3));
+ }
+
+ sdk_err(dev->dev_hdl, "==================== Exception Info End ====================\n");
+}
+
+const struct mgmt_event_handle mgmt_event_proc[] = {
+ {
+ .cmd = COMM_MGMT_CMD_FAULT_REPORT,
+ .proc = fault_event_handler,
+ },
+
+ {
+ .cmd = COMM_MGMT_CMD_FFM_SET,
+ .proc = ffm_event_msg_handler,
+ },
+
+ {
+ .cmd = COMM_MGMT_CMD_WATCHDOG_INFO,
+ .proc = mgmt_watchdog_timeout_event_handler,
+ },
+
+ {
+ .cmd = COMM_MGMT_CMD_LASTWORD_GET,
+ .proc = mgmt_lastword_report_event_handler,
+ },
+};
+
+static void pf_handle_mgmt_comm_event(void *handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct hinic3_hwdev *hwdev = handle;
+ u32 i, event_num = ARRAY_LEN(mgmt_event_proc);
+
+ if (!hwdev)
+ return;
+
+ for (i = 0; i < event_num; i++) {
+ if (cmd == mgmt_event_proc[i].cmd) {
+ if (mgmt_event_proc[i].proc)
+ mgmt_event_proc[i].proc(handle, buf_in, in_size,
+ buf_out, out_size);
+
+ return;
+ }
+ }
+
+ sdk_warn(hwdev->dev_hdl, "Unsupported mgmt cpu event %u to process\n",
+ cmd);
+ *out_size = sizeof(struct mgmt_msg_head);
+ ((struct mgmt_msg_head *)buf_out)->status = HINIC3_MGMT_CMD_UNSUPPORTED;
+}
+
+static void hinic3_set_chip_present(struct hinic3_hwdev *hwdev)
+{
+ hwdev->chip_present_flag = HINIC3_CHIP_PRESENT;
+}
+
+static void hinic3_set_chip_absent(struct hinic3_hwdev *hwdev)
+{
+ sdk_err(hwdev->dev_hdl, "Card not present\n");
+ hwdev->chip_present_flag = HINIC3_CHIP_ABSENT;
+}
+
+int hinic3_get_chip_present_flag(const void *hwdev)
+{
+ if (!hwdev)
+ return 0;
+
+ return ((struct hinic3_hwdev *)hwdev)->chip_present_flag;
+}
+EXPORT_SYMBOL(hinic3_get_chip_present_flag);
+
+void hinic3_force_complete_all(void *dev)
+{
+ struct hinic3_recv_msg *recv_resp_msg = NULL;
+ struct hinic3_hwdev *hwdev = dev;
+ struct hinic3_mbox *func_to_func = NULL;
+
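+ /* Wake all waiters on the mgmt, mbox and cmdq channels so they can observe the failure */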
+ spin_lock_bh(&hwdev->channel_lock);
+ if (hinic3_func_type(hwdev) != TYPE_VF &&
+ test_bit(HINIC3_HWDEV_MGMT_INITED, &hwdev->func_state)) {
+ recv_resp_msg = &hwdev->pf_to_mgmt->recv_resp_msg_from_mgmt;
+ spin_lock_bh(&hwdev->pf_to_mgmt->sync_event_lock);
+ if (hwdev->pf_to_mgmt->event_flag == SEND_EVENT_START) {
+ complete(&recv_resp_msg->recv_done);
+ hwdev->pf_to_mgmt->event_flag = SEND_EVENT_TIMEOUT;
+ }
+ spin_unlock_bh(&hwdev->pf_to_mgmt->sync_event_lock);
+ }
+
+ if (test_bit(HINIC3_HWDEV_MBOX_INITED, &hwdev->func_state)) {
+ func_to_func = hwdev->func_to_func;
+ spin_lock(&func_to_func->mbox_lock);
+ if (func_to_func->event_flag == EVENT_START)
+ func_to_func->event_flag = EVENT_TIMEOUT;
+ spin_unlock(&func_to_func->mbox_lock);
+ }
+
+ if (test_bit(HINIC3_HWDEV_CMDQ_INITED, &hwdev->func_state))
+ hinic3_cmdq_flush_sync_cmd(hwdev);
+
+ spin_unlock_bh(&hwdev->channel_lock);
+}
+EXPORT_SYMBOL(hinic3_force_complete_all);
+
+void hinic3_detect_hw_present(void *hwdev)
+{
+ if (!get_card_present_state((struct hinic3_hwdev *)hwdev)) {
+ hinic3_set_chip_absent(hwdev);
+ hinic3_force_complete_all(hwdev);
+ }
+}
+
+/**
+ * dma_attr_table_init - initialize the default dma attributes
+ * @hwdev: the pointer to hw device
+ */
+static int dma_attr_table_init(struct hinic3_hwdev *hwdev)
+{
+ u32 addr, val, dst_attr;
+
+ /* Indirect access requires setting entry_idx first */
+ addr = HINIC3_CSR_DMA_ATTR_INDIR_IDX_ADDR;
+ val = hinic3_hwif_read_reg(hwdev->hwif, addr);
+ val = HINIC3_DMA_ATTR_INDIR_IDX_CLEAR(val, IDX);
+
+ val |= HINIC3_DMA_ATTR_INDIR_IDX_SET(PCIE_MSIX_ATTR_ENTRY, IDX);
+
+ hinic3_hwif_write_reg(hwdev->hwif, addr, val);
+
+ wmb(); /* write index before config */
+
+ addr = HINIC3_CSR_DMA_ATTR_TBL_ADDR;
+ val = hinic3_hwif_read_reg(hwdev->hwif, addr);
+
+ dst_attr = HINIC3_DMA_ATTR_ENTRY_SET(HINIC3_PCIE_ST_DISABLE, ST) |
+ HINIC3_DMA_ATTR_ENTRY_SET(HINIC3_PCIE_AT_DISABLE, AT) |
+ HINIC3_DMA_ATTR_ENTRY_SET(HINIC3_PCIE_PH_DISABLE, PH) |
+ HINIC3_DMA_ATTR_ENTRY_SET(HINIC3_PCIE_SNOOP, NO_SNOOPING) |
+ HINIC3_DMA_ATTR_ENTRY_SET(HINIC3_PCIE_TPH_DISABLE, TPH_EN);
+
+ if (val == dst_attr)
+ return 0;
+
+ return hinic3_set_dma_attr_tbl(hwdev, PCIE_MSIX_ATTR_ENTRY, HINIC3_PCIE_ST_DISABLE,
+ HINIC3_PCIE_AT_DISABLE, HINIC3_PCIE_PH_DISABLE,
+ HINIC3_PCIE_SNOOP, HINIC3_PCIE_TPH_DISABLE);
+}
+
+static int init_aeqs_msix_attr(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_aeqs *aeqs = hwdev->aeqs;
+ struct interrupt_info info = {0};
+ struct hinic3_eq *eq = NULL;
+ int q_id;
+ int err;
+
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.pending_limt = HINIC3_DEAULT_EQ_MSIX_PENDING_LIMIT;
+ info.coalesc_timer_cfg = HINIC3_DEAULT_EQ_MSIX_COALESC_TIMER_CFG;
+ info.resend_timer_cfg = HINIC3_DEAULT_EQ_MSIX_RESEND_TIMER_CFG;
+
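+ /* Configure interrupt coalescing for each AEQ, from the last queue to the first */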
+ for (q_id = aeqs->num_aeqs - 1; q_id >= 0; q_id--) {
+ eq = &aeqs->aeq[q_id];
+ info.msix_index = eq->eq_irq.msix_entry_idx;
+ err = hinic3_set_interrupt_cfg_direct(hwdev, &info,
+ HINIC3_CHANNEL_COMM);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Set msix attr for aeq %d failed\n",
+ q_id);
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+static int init_ceqs_msix_attr(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_ceqs *ceqs = hwdev->ceqs;
+ struct interrupt_info info = {0};
+ struct hinic3_eq *eq = NULL;
+ u16 q_id;
+ int err;
+
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.pending_limt = HINIC3_DEAULT_EQ_MSIX_PENDING_LIMIT;
+ info.coalesc_timer_cfg = HINIC3_DEAULT_EQ_MSIX_COALESC_TIMER_CFG;
+ info.resend_timer_cfg = HINIC3_DEAULT_EQ_MSIX_RESEND_TIMER_CFG;
+
+ for (q_id = 0; q_id < ceqs->num_ceqs; q_id++) {
+ eq = &ceqs->ceq[q_id];
+ info.msix_index = eq->eq_irq.msix_entry_idx;
+ err = hinic3_set_interrupt_cfg(hwdev, info,
+ HINIC3_CHANNEL_COMM);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Set msix attr for ceq %u failed\n",
+ q_id);
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+static int hinic3_comm_clp_to_mgmt_init(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF || !COMM_SUPPORT_CLP(hwdev))
+ return 0;
+
+ err = hinic3_clp_pf_to_mgmt_init(hwdev);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static void hinic3_comm_clp_to_mgmt_free(struct hinic3_hwdev *hwdev)
+{
+ if (hinic3_func_type(hwdev) == TYPE_VF || !COMM_SUPPORT_CLP(hwdev))
+ return;
+
+ hinic3_clp_pf_to_mgmt_free(hwdev);
+}
+
+static int hinic3_comm_aeqs_init(struct hinic3_hwdev *hwdev)
+{
+ struct irq_info aeq_irqs[HINIC3_MAX_AEQS] = {{0} };
+ u16 num_aeqs, resp_num_irq = 0, i;
+ int err;
+
+ num_aeqs = HINIC3_HWIF_NUM_AEQS(hwdev->hwif);
+ if (num_aeqs > HINIC3_MAX_AEQS) {
+ sdk_warn(hwdev->dev_hdl, "Adjust aeq num to %d\n",
+ HINIC3_MAX_AEQS);
+ num_aeqs = HINIC3_MAX_AEQS;
+ }
+ err = hinic3_alloc_irqs(hwdev, SERVICE_T_INTF, num_aeqs, aeq_irqs,
+ &resp_num_irq);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc aeq irqs, num_aeqs: %u\n",
+ num_aeqs);
+ return err;
+ }
+
+ if (resp_num_irq < num_aeqs) {
+ sdk_warn(hwdev->dev_hdl, "Adjust aeq num to %u\n",
+ resp_num_irq);
+ num_aeqs = resp_num_irq;
+ }
+
+ err = hinic3_aeqs_init(hwdev, num_aeqs, aeq_irqs);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init aeqs\n");
+ goto aeqs_init_err;
+ }
+
+ return 0;
+
+aeqs_init_err:
+ for (i = 0; i < num_aeqs; i++)
+ hinic3_free_irq(hwdev, SERVICE_T_INTF, aeq_irqs[i].irq_id);
+
+ return err;
+}
+
+static void hinic3_comm_aeqs_free(struct hinic3_hwdev *hwdev)
+{
+ struct irq_info aeq_irqs[HINIC3_MAX_AEQS] = {{0} };
+ u16 num_irqs, i;
+
+ hinic3_get_aeq_irqs(hwdev, aeq_irqs, &num_irqs);
+
+ hinic3_aeqs_free(hwdev);
+
+ for (i = 0; i < num_irqs; i++)
+ hinic3_free_irq(hwdev, SERVICE_T_INTF, aeq_irqs[i].irq_id);
+}
+
+static int hinic3_comm_ceqs_init(struct hinic3_hwdev *hwdev)
+{
+ struct irq_info ceq_irqs[HINIC3_MAX_CEQS] = {{0} };
+ u16 num_ceqs, resp_num_irq = 0, i;
+ int err;
+
+ num_ceqs = HINIC3_HWIF_NUM_CEQS(hwdev->hwif);
+ if (num_ceqs > HINIC3_MAX_CEQS) {
+ sdk_warn(hwdev->dev_hdl, "Adjust ceq num to %d\n",
+ HINIC3_MAX_CEQS);
+ num_ceqs = HINIC3_MAX_CEQS;
+ }
+
+ err = hinic3_alloc_irqs(hwdev, SERVICE_T_INTF, num_ceqs, ceq_irqs,
+ &resp_num_irq);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc ceq irqs, num_ceqs: %u\n",
+ num_ceqs);
+ return err;
+ }
+
+ if (resp_num_irq < num_ceqs) {
+ sdk_warn(hwdev->dev_hdl, "Adjust ceq num to %u\n",
+ resp_num_irq);
+ num_ceqs = resp_num_irq;
+ }
+
+ err = hinic3_ceqs_init(hwdev, num_ceqs, ceq_irqs);
+ if (err) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to init ceqs, err:%d\n", err);
+ goto ceqs_init_err;
+ }
+
+ return 0;
+
+ceqs_init_err:
+ for (i = 0; i < num_ceqs; i++)
+ hinic3_free_irq(hwdev, SERVICE_T_INTF, ceq_irqs[i].irq_id);
+
+ return err;
+}
+
+static void hinic3_comm_ceqs_free(struct hinic3_hwdev *hwdev)
+{
+ struct irq_info ceq_irqs[HINIC3_MAX_CEQS] = {{0} };
+ u16 num_irqs;
+ int i;
+
+ hinic3_get_ceq_irqs(hwdev, ceq_irqs, &num_irqs);
+
+ hinic3_ceqs_free(hwdev);
+
+ for (i = 0; i < num_irqs; i++)
+ hinic3_free_irq(hwdev, SERVICE_T_INTF, ceq_irqs[i].irq_id);
+}
+
+static int hinic3_comm_func_to_func_init(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = hinic3_func_to_func_init(hwdev);
+ if (err)
+ return err;
+
+ hinic3_aeq_register_hw_cb(hwdev, hwdev, HINIC3_MBX_FROM_FUNC,
+ hinic3_mbox_func_aeqe_handler);
+ hinic3_aeq_register_hw_cb(hwdev, hwdev, HINIC3_MSG_FROM_MGMT_CPU,
+ hinic3_mgmt_msg_aeqe_handler);
+
+ if (!HINIC3_IS_VF(hwdev))
+ hinic3_register_pf_mbox_cb(hwdev, HINIC3_MOD_COMM,
+ hwdev,
+ pf_handle_vf_comm_mbox);
+ else
+ hinic3_register_vf_mbox_cb(hwdev, HINIC3_MOD_COMM,
+ hwdev,
+ vf_handle_pf_comm_mbox);
+
+ set_bit(HINIC3_HWDEV_MBOX_INITED, &hwdev->func_state);
+
+ return 0;
+}
+
+static void hinic3_comm_func_to_func_free(struct hinic3_hwdev *hwdev)
+{
+ spin_lock_bh(&hwdev->channel_lock);
+ clear_bit(HINIC3_HWDEV_MBOX_INITED, &hwdev->func_state);
+ spin_unlock_bh(&hwdev->channel_lock);
+
+ hinic3_aeq_unregister_hw_cb(hwdev, HINIC3_MBX_FROM_FUNC);
+
+ if (!HINIC3_IS_VF(hwdev)) {
+ hinic3_unregister_pf_mbox_cb(hwdev, HINIC3_MOD_COMM);
+ } else {
+ hinic3_unregister_vf_mbox_cb(hwdev, HINIC3_MOD_COMM);
+
+ hinic3_aeq_unregister_hw_cb(hwdev, HINIC3_MSG_FROM_MGMT_CPU);
+ }
+
+ hinic3_func_to_func_free(hwdev);
+}
+
+static int hinic3_comm_pf_to_mgmt_init(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return 0;
+
+ err = hinic3_pf_to_mgmt_init(hwdev);
+ if (err)
+ return err;
+
+ hinic3_register_mgmt_msg_cb(hwdev, HINIC3_MOD_COMM, hwdev,
+ pf_handle_mgmt_comm_event);
+
+ set_bit(HINIC3_HWDEV_MGMT_INITED, &hwdev->func_state);
+
+ return 0;
+}
+
+static void hinic3_comm_pf_to_mgmt_free(struct hinic3_hwdev *hwdev)
+{
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return;
+
+ spin_lock_bh(&hwdev->channel_lock);
+ clear_bit(HINIC3_HWDEV_MGMT_INITED, &hwdev->func_state);
+ spin_unlock_bh(&hwdev->channel_lock);
+
+ hinic3_unregister_mgmt_msg_cb(hwdev, HINIC3_MOD_COMM);
+
+ hinic3_aeq_unregister_hw_cb(hwdev, HINIC3_MSG_FROM_MGMT_CPU);
+
+ hinic3_pf_to_mgmt_free(hwdev);
+}
+
+static int hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = hinic3_cmdqs_init(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init cmd queues\n");
+ return err;
+ }
+
+ hinic3_ceq_register_cb(hwdev, hwdev, HINIC3_CMDQ, hinic3_cmdq_ceq_handler);
+
+ err = hinic3_set_cmdq_depth(hwdev, HINIC3_CMDQ_DEPTH);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to set cmdq depth\n");
+ goto set_cmdq_depth_err;
+ }
+
+ set_bit(HINIC3_HWDEV_CMDQ_INITED, &hwdev->func_state);
+
+ return 0;
+
+set_cmdq_depth_err:
+ hinic3_cmdqs_free(hwdev);
+
+ return err;
+}
+
+static void hinic3_comm_cmdqs_free(struct hinic3_hwdev *hwdev)
+{
+ spin_lock_bh(&hwdev->channel_lock);
+ clear_bit(HINIC3_HWDEV_CMDQ_INITED, &hwdev->func_state);
+ spin_unlock_bh(&hwdev->channel_lock);
+
+ hinic3_ceq_unregister_cb(hwdev, HINIC3_CMDQ);
+ hinic3_cmdqs_free(hwdev);
+}
+
+static void hinic3_sync_mgmt_func_state(struct hinic3_hwdev *hwdev)
+{
+ hinic3_set_pf_status(hwdev->hwif, HINIC3_PF_STATUS_ACTIVE_FLAG);
+}
+
+static void hinic3_unsync_mgmt_func_state(struct hinic3_hwdev *hwdev)
+{
+ hinic3_set_pf_status(hwdev->hwif, HINIC3_PF_STATUS_INIT);
+}
+
+static int init_basic_attributes(struct hinic3_hwdev *hwdev)
+{
+ u64 drv_features[COMM_MAX_FEATURE_QWORD] = {HINIC3_DRV_FEATURE_QW0, 0, 0, 0};
+ int err, i;
+
+ if (hinic3_func_type(hwdev) == TYPE_PPF)
+ drv_features[0] |= COMM_F_CHANNEL_DETECT;
+
+ err = hinic3_get_board_info(hwdev, &hwdev->board_info,
+ HINIC3_CHANNEL_COMM);
+ if (err)
+ return err;
+
+ err = hinic3_get_comm_features(hwdev, hwdev->features,
+ COMM_MAX_FEATURE_QWORD);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Get comm features failed\n");
+ return err;
+ }
+
+ sdk_info(hwdev->dev_hdl, "Comm hw features: 0x%llx, drv features: 0x%llx\n",
+ hwdev->features[0], drv_features[0]);
+
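+	/* Negotiate: keep only the features both firmware and driver support */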
+ for (i = 0; i < COMM_MAX_FEATURE_QWORD; i++)
+ hwdev->features[i] &= drv_features[i];
+
+ err = hinic3_get_global_attr(hwdev, &hwdev->glb_attr);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to get global attribute\n");
+ return err;
+ }
+
+ sdk_info(hwdev->dev_hdl,
+ "global attribute: max_host: 0x%x, max_pf: 0x%x, vf_id_start: 0x%x, mgmt node id: 0x%x, cmdq_num: 0x%x\n",
+ hwdev->glb_attr.max_host_num, hwdev->glb_attr.max_pf_num,
+ hwdev->glb_attr.vf_id_start,
+ hwdev->glb_attr.mgmt_host_node_id,
+ hwdev->glb_attr.cmdq_num);
+
+ return 0;
+}
+
+static int init_basic_mgmt_channel(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = hinic3_comm_aeqs_init(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init async event queues\n");
+ return err;
+ }
+
+ err = hinic3_comm_func_to_func_init(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init mailbox\n");
+ goto func_to_func_init_err;
+ }
+
+ err = init_aeqs_msix_attr(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init aeqs msix attr\n");
+ goto aeqs_msix_attr_init_err;
+ }
+
+ return 0;
+
+aeqs_msix_attr_init_err:
+ hinic3_comm_func_to_func_free(hwdev);
+
+func_to_func_init_err:
+ hinic3_comm_aeqs_free(hwdev);
+
+ return err;
+}
+
+static void free_base_mgmt_channel(struct hinic3_hwdev *hwdev)
+{
+ hinic3_comm_func_to_func_free(hwdev);
+ hinic3_comm_aeqs_free(hwdev);
+}
+
+static int init_pf_mgmt_channel(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = hinic3_comm_clp_to_mgmt_init(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init clp\n");
+ return err;
+ }
+
+ err = hinic3_comm_pf_to_mgmt_init(hwdev);
+ if (err) {
+ hinic3_comm_clp_to_mgmt_free(hwdev);
+ sdk_err(hwdev->dev_hdl, "Failed to init pf to mgmt\n");
+ return err;
+ }
+
+ return 0;
+}
+
+static void free_pf_mgmt_channel(struct hinic3_hwdev *hwdev)
+{
+ hinic3_comm_clp_to_mgmt_free(hwdev);
+ hinic3_comm_pf_to_mgmt_free(hwdev);
+}
+
+static int init_mgmt_channel_post(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ /* mbox host channel resources will be freed in
+ * hinic3_func_to_func_free
+ */
+ if (HINIC3_IS_PPF(hwdev)) {
+ err = hinic3_mbox_init_host_msg_channel(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init mbox host channel\n");
+ return err;
+ }
+ }
+
+ err = init_pf_mgmt_channel(hwdev);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static void free_mgmt_msg_channel_post(struct hinic3_hwdev *hwdev)
+{
+ free_pf_mgmt_channel(hwdev);
+}
+
+static int init_cmdqs_channel(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = dma_attr_table_init(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init dma attr table\n");
+ goto dma_attr_init_err;
+ }
+
+ err = hinic3_comm_ceqs_init(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init completion event queues\n");
+ goto ceqs_init_err;
+ }
+
+ err = init_ceqs_msix_attr(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init ceqs msix attr\n");
+ goto init_ceq_msix_err;
+ }
+
+ /* set default wq page_size */
+ if (wq_page_order > HINIC3_MAX_WQ_PAGE_SIZE_ORDER) {
+ sdk_info(hwdev->dev_hdl, "wq_page_order exceed limit[0, %d], reset to %d\n",
+ HINIC3_MAX_WQ_PAGE_SIZE_ORDER,
+ HINIC3_MAX_WQ_PAGE_SIZE_ORDER);
+ wq_page_order = HINIC3_MAX_WQ_PAGE_SIZE_ORDER;
+ }
+ hwdev->wq_page_size = HINIC3_HW_WQ_PAGE_SIZE * (1U << wq_page_order);
+ sdk_info(hwdev->dev_hdl, "WQ page size: 0x%x\n", hwdev->wq_page_size);
+ err = hinic3_set_wq_page_size(hwdev, hinic3_global_func_id(hwdev),
+ hwdev->wq_page_size, HINIC3_CHANNEL_COMM);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to set wq page size\n");
+ goto init_wq_pg_size_err;
+ }
+
+ err = hinic3_comm_cmdqs_init(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init cmd queues\n");
+ goto cmdq_init_err;
+ }
+
+ return 0;
+
+cmdq_init_err:
+ if (HINIC3_FUNC_TYPE(hwdev) != TYPE_VF)
+ hinic3_set_wq_page_size(hwdev, hinic3_global_func_id(hwdev),
+ HINIC3_HW_WQ_PAGE_SIZE,
+ HINIC3_CHANNEL_COMM);
+init_wq_pg_size_err:
+init_ceq_msix_err:
+ hinic3_comm_ceqs_free(hwdev);
+
+ceqs_init_err:
+dma_attr_init_err:
+
+ return err;
+}
+
+static void hinic3_free_cmdqs_channel(struct hinic3_hwdev *hwdev)
+{
+ hinic3_comm_cmdqs_free(hwdev);
+
+ if (HINIC3_FUNC_TYPE(hwdev) != TYPE_VF)
+ hinic3_set_wq_page_size(hwdev, hinic3_global_func_id(hwdev),
+ HINIC3_HW_WQ_PAGE_SIZE, HINIC3_CHANNEL_COMM);
+
+ hinic3_comm_ceqs_free(hwdev);
+}
+
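+/*
+ * Bring up the communication channels in dependency order: AEQs and the
+ * mailbox first so mgmt events can be received, then a function-level
+ * reset, attribute and feature negotiation, the mgmt message channel,
+ * and finally the cmdq channel.
+ */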
+static int hinic3_init_comm_ch(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = init_basic_mgmt_channel(hwdev);
+ if (err)
+ return err;
+
+ err = hinic3_func_reset(hwdev, hinic3_global_func_id(hwdev),
+ HINIC3_COMM_RES, HINIC3_CHANNEL_COMM);
+ if (err)
+ goto func_reset_err;
+
+ err = init_basic_attributes(hwdev);
+ if (err)
+ goto init_basic_attr_err;
+
+ err = init_mgmt_channel_post(hwdev);
+ if (err)
+ goto init_mgmt_channel_post_err;
+
+ err = hinic3_set_func_svc_used_state(hwdev, SVC_T_COMM, 1, HINIC3_CHANNEL_COMM);
+ if (err)
+ goto set_used_state_err;
+
+ err = init_cmdqs_channel(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init cmdq channel\n");
+ goto init_cmdqs_channel_err;
+ }
+
+ hinic3_sync_mgmt_func_state(hwdev);
+
+ if (HISDK3_F_CHANNEL_LOCK_EN(hwdev)) {
+ hinic3_mbox_enable_channel_lock(hwdev, true);
+ hinic3_cmdq_enable_channel_lock(hwdev, true);
+ }
+
+ err = hinic3_aeq_register_swe_cb(hwdev, hwdev, HINIC3_STATELESS_EVENT,
+ hinic3_nic_sw_aeqe_handler);
+ if (err) {
+ sdk_err(hwdev->dev_hdl,
+ "Failed to register sw aeqe handler\n");
+ goto register_ucode_aeqe_err;
+ }
+
+ return 0;
+
+register_ucode_aeqe_err:
+ hinic3_unsync_mgmt_func_state(hwdev);
+ hinic3_free_cmdqs_channel(hwdev);
+init_cmdqs_channel_err:
+ hinic3_set_func_svc_used_state(hwdev, SVC_T_COMM, 0, HINIC3_CHANNEL_COMM);
+set_used_state_err:
+ free_mgmt_msg_channel_post(hwdev);
+init_mgmt_channel_post_err:
+init_basic_attr_err:
+func_reset_err:
+ free_base_mgmt_channel(hwdev);
+
+ return err;
+}
+
+static void hinic3_uninit_comm_ch(struct hinic3_hwdev *hwdev)
+{
+ hinic3_aeq_unregister_swe_cb(hwdev, HINIC3_STATELESS_EVENT);
+
+ hinic3_unsync_mgmt_func_state(hwdev);
+
+ hinic3_free_cmdqs_channel(hwdev);
+
+ hinic3_set_func_svc_used_state(hwdev, SVC_T_COMM, 0, HINIC3_CHANNEL_COMM);
+
+ free_mgmt_msg_channel_post(hwdev);
+
+ free_base_mgmt_channel(hwdev);
+}
+
+static void hinic3_auto_sync_time_work(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct hinic3_hwdev *hwdev = container_of(delay, struct hinic3_hwdev, sync_time_task);
+ int err;
+
+ err = hinic3_sync_time(hwdev, ossl_get_real_time());
+ if (err)
+ sdk_err(hwdev->dev_hdl, "Synchronize UTC time to firmware failed, errno:%d.\n",
+ err);
+
+ queue_delayed_work(hwdev->workq, &hwdev->sync_time_task,
+ msecs_to_jiffies(HINIC3_SYNFW_TIME_PERIOD));
+}
+
+static void hinic3_auto_channel_detect_work(struct work_struct *work)
+{
+ struct delayed_work *delay = to_delayed_work(work);
+ struct hinic3_hwdev *hwdev = container_of(delay, struct hinic3_hwdev, channel_detect_task);
+ struct card_node *chip_node = NULL;
+
+ hinic3_comm_channel_detect(hwdev);
+
+ chip_node = hwdev->chip_node;
+ if (!atomic_read(&chip_node->channel_busy_cnt))
+ queue_delayed_work(hwdev->workq, &hwdev->channel_detect_task,
+ msecs_to_jiffies(HINIC3_CHANNEL_DETECT_PERIOD));
+}
+
+static int hinic3_init_ppf_work(struct hinic3_hwdev *hwdev)
+{
+ if (hinic3_func_type(hwdev) != TYPE_PPF)
+ return 0;
+
+ INIT_DELAYED_WORK(&hwdev->sync_time_task, hinic3_auto_sync_time_work);
+ queue_delayed_work(hwdev->workq, &hwdev->sync_time_task,
+ msecs_to_jiffies(HINIC3_SYNFW_TIME_PERIOD));
+
+ if (COMM_SUPPORT_CHANNEL_DETECT(hwdev)) {
+ INIT_DELAYED_WORK(&hwdev->channel_detect_task,
+ hinic3_auto_channel_detect_work);
+ queue_delayed_work(hwdev->workq, &hwdev->channel_detect_task,
+ msecs_to_jiffies(HINIC3_CHANNEL_DETECT_PERIOD));
+ }
+
+	return 0;
+}
+
+static void hinic3_free_ppf_work(struct hinic3_hwdev *hwdev)
+{
+ if (hinic3_func_type(hwdev) != TYPE_PPF)
+ return;
+
+ if (COMM_SUPPORT_CHANNEL_DETECT(hwdev)) {
+ hwdev->features[0] &= ~(COMM_F_CHANNEL_DETECT);
+ cancel_delayed_work_sync(&hwdev->channel_detect_task);
+ }
+
+ cancel_delayed_work_sync(&hwdev->sync_time_task);
+}
+
+static int init_hwdev(struct hinic3_init_para *para)
+{
+ struct hinic3_hwdev *hwdev;
+
+ hwdev = kzalloc(sizeof(*hwdev), GFP_KERNEL);
+ if (!hwdev)
+ return -ENOMEM;
+
+ *para->hwdev = hwdev;
+ hwdev->adapter_hdl = para->adapter_hdl;
+ hwdev->pcidev_hdl = para->pcidev_hdl;
+ hwdev->dev_hdl = para->dev_hdl;
+ hwdev->chip_node = para->chip_node;
+ hwdev->poll = para->poll;
+ hwdev->probe_fault_level = para->probe_fault_level;
+ hwdev->func_state = 0;
+
+ hwdev->chip_fault_stats = vzalloc(HINIC3_CHIP_FAULT_SIZE);
+ if (!hwdev->chip_fault_stats)
+ goto alloc_chip_fault_stats_err;
+
+ hwdev->stateful_ref_cnt = 0;
+ memset(hwdev->features, 0, sizeof(hwdev->features));
+
+ spin_lock_init(&hwdev->channel_lock);
+ mutex_init(&hwdev->stateful_mutex);
+
+ return 0;
+
+alloc_chip_fault_stats_err:
+ para->probe_fault_level = hwdev->probe_fault_level;
+ kfree(hwdev);
+ *para->hwdev = NULL;
+	return -ENOMEM;
+}
+
+int hinic3_init_hwdev(struct hinic3_init_para *para)
+{
+ struct hinic3_hwdev *hwdev;
+ int err;
+
+	err = init_hwdev(para);
+ if (err)
+ return err;
+
+ hwdev = *para->hwdev;
+
+ err = hinic3_init_hwif(hwdev, para->cfg_reg_base, para->intr_reg_base, para->mgmt_reg_base,
+ para->db_base_phy, para->db_base, para->db_dwqe_len);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init hwif\n");
+ goto init_hwif_err;
+ }
+
+ hinic3_set_chip_present(hwdev);
+
+ hisdk3_init_profile_adapter(hwdev);
+
+ hwdev->workq = alloc_workqueue(HINIC3_HWDEV_WQ_NAME, WQ_MEM_RECLAIM, HINIC3_WQ_MAX_REQ);
+ if (!hwdev->workq) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc hardware workq\n");
+ goto alloc_workq_err;
+ }
+
+ hinic3_init_heartbeat_detect(hwdev);
+
+ err = init_cfg_mgmt(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init config mgmt\n");
+ goto init_cfg_mgmt_err;
+ }
+
+ err = hinic3_init_comm_ch(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init communication channel\n");
+ goto init_comm_ch_err;
+ }
+
+#ifdef HAVE_DEVLINK_FLASH_UPDATE_PARAMS
+ err = hinic3_init_devlink(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init devlink\n");
+ goto init_devlink_err;
+ }
+#endif
+
+ err = init_capability(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init capability\n");
+ goto init_cap_err;
+ }
+
+ hinic3_init_host_mode_pre(hwdev);
+
+ err = hinic3_multi_host_init(hwdev);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init function mode\n");
+ goto init_multi_host_fail;
+ }
+
+ err = hinic3_init_ppf_work(hwdev);
+ if (err)
+ goto init_ppf_work_fail;
+
+ err = hinic3_set_comm_features(hwdev, hwdev->features, COMM_MAX_FEATURE_QWORD);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to set comm features\n");
+ goto set_feature_err;
+ }
+
+ return 0;
+
+set_feature_err:
+ hinic3_free_ppf_work(hwdev);
+
+init_ppf_work_fail:
+ hinic3_multi_host_free(hwdev);
+
+init_multi_host_fail:
+ free_capability(hwdev);
+
+init_cap_err:
+#ifdef HAVE_DEVLINK_FLASH_UPDATE_PARAMS
+ hinic3_uninit_devlink(hwdev);
+
+init_devlink_err:
+#endif
+ hinic3_uninit_comm_ch(hwdev);
+
+init_comm_ch_err:
+ free_cfg_mgmt(hwdev);
+
+init_cfg_mgmt_err:
+ hinic3_destroy_heartbeat_detect(hwdev);
+ destroy_workqueue(hwdev->workq);
+
+alloc_workq_err:
+ hisdk3_deinit_profile_adapter(hwdev);
+
+ hinic3_free_hwif(hwdev);
+
+init_hwif_err:
+ spin_lock_deinit(&hwdev->channel_lock);
+ vfree(hwdev->chip_fault_stats);
+ para->probe_fault_level = hwdev->probe_fault_level;
+ kfree(hwdev);
+ *para->hwdev = NULL;
+
+ return -EFAULT;
+}
+
+void hinic3_free_hwdev(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ u64 drv_features[COMM_MAX_FEATURE_QWORD];
+
+ memset(drv_features, 0, sizeof(drv_features));
+ hinic3_set_comm_features(hwdev, drv_features, COMM_MAX_FEATURE_QWORD);
+
+ hinic3_free_ppf_work(dev);
+
+ hinic3_multi_host_free(dev);
+
+ hinic3_func_rx_tx_flush(hwdev, HINIC3_CHANNEL_COMM);
+
+ free_capability(dev);
+
+#ifdef HAVE_DEVLINK_FLASH_UPDATE_PARAMS
+ hinic3_uninit_devlink(dev);
+#endif
+
+ hinic3_uninit_comm_ch(dev);
+
+ free_cfg_mgmt(dev);
+ hinic3_destroy_heartbeat_detect(hwdev);
+ destroy_workqueue(dev->workq);
+
+ hisdk3_deinit_profile_adapter(hwdev);
+ hinic3_free_hwif(dev);
+
+ spin_lock_deinit(&dev->channel_lock);
+ vfree(dev->chip_fault_stats);
+
+ kfree(dev);
+}
+
+void *hinic3_get_pcidev_hdl(void *hwdev)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+
+ if (!hwdev)
+ return NULL;
+
+ return dev->pcidev_hdl;
+}
+
+int hinic3_register_service_adapter(void *hwdev, void *service_adapter,
+ enum hinic3_service_type type)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev || !service_adapter || type >= SERVICE_T_MAX)
+ return -EINVAL;
+
+ if (dev->service_adapter[type])
+ return -EINVAL;
+
+ dev->service_adapter[type] = service_adapter;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_register_service_adapter);
+
+void hinic3_unregister_service_adapter(void *hwdev,
+ enum hinic3_service_type type)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev || type >= SERVICE_T_MAX)
+ return;
+
+ dev->service_adapter[type] = NULL;
+}
+EXPORT_SYMBOL(hinic3_unregister_service_adapter);
+
+void *hinic3_get_service_adapter(void *hwdev, enum hinic3_service_type type)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev || type >= SERVICE_T_MAX)
+ return NULL;
+
+ return dev->service_adapter[type];
+}
+EXPORT_SYMBOL(hinic3_get_service_adapter);
+
+int hinic3_dbg_get_hw_stats(const void *hwdev, u8 *hw_stats, const u16 *out_size)
+{
+ struct hinic3_hw_stats *tmp_hw_stats = (struct hinic3_hw_stats *)hw_stats;
+ struct card_node *chip_node = NULL;
+
+ if (!hwdev)
+ return -EINVAL;
+
+	if (!hw_stats || *out_size != sizeof(struct hinic3_hw_stats)) {
+		pr_err("Unexpected out buf size from user: %u, expect: %zu\n",
+ *out_size, sizeof(struct hinic3_hw_stats));
+ return -EFAULT;
+ }
+
+ memcpy(hw_stats, &((struct hinic3_hwdev *)hwdev)->hw_stats,
+ sizeof(struct hinic3_hw_stats));
+
+ chip_node = ((struct hinic3_hwdev *)hwdev)->chip_node;
+
+ atomic_set(&tmp_hw_stats->nic_ucode_event_stats[HINIC3_CHANNEL_BUSY],
+ atomic_read(&chip_node->channel_busy_cnt));
+
+ return 0;
+}
+
+u16 hinic3_dbg_clear_hw_stats(void *hwdev)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_hwdev *dev = hwdev;
+
+ memset((void *)&dev->hw_stats, 0, sizeof(struct hinic3_hw_stats));
+ memset((void *)dev->chip_fault_stats, 0, HINIC3_CHIP_FAULT_SIZE);
+
+ chip_node = dev->chip_node;
+ if (COMM_SUPPORT_CHANNEL_DETECT(dev) && atomic_read(&chip_node->channel_busy_cnt)) {
+ atomic_set(&chip_node->channel_busy_cnt, 0);
+ dev->aeq_busy_cnt = 0;
+ queue_delayed_work(dev->workq, &dev->channel_detect_task,
+ msecs_to_jiffies(HINIC3_CHANNEL_DETECT_PERIOD));
+ }
+
+ return sizeof(struct hinic3_hw_stats);
+}
+
+void hinic3_get_chip_fault_stats(const void *hwdev, u8 *chip_fault_stats,
+ u32 offset)
+{
+ if (offset >= HINIC3_CHIP_FAULT_SIZE) {
+ pr_err("Invalid chip offset value: %d\n", offset);
+ return;
+ }
+
+ if (offset + MAX_DRV_BUF_SIZE <= HINIC3_CHIP_FAULT_SIZE)
+ memcpy(chip_fault_stats,
+ ((struct hinic3_hwdev *)hwdev)->chip_fault_stats
+ + offset, MAX_DRV_BUF_SIZE);
+ else
+ memcpy(chip_fault_stats,
+ ((struct hinic3_hwdev *)hwdev)->chip_fault_stats
+ + offset, HINIC3_CHIP_FAULT_SIZE - offset);
+}
+
+void hinic3_event_register(void *dev, void *pri_handle,
+ hinic3_event_handler callback)
+{
+ struct hinic3_hwdev *hwdev = dev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for register event\n");
+ return;
+ }
+
+ hwdev->event_callback = callback;
+ hwdev->event_pri_handle = pri_handle;
+}
+
+void hinic3_event_unregister(void *dev)
+{
+ struct hinic3_hwdev *hwdev = dev;
+
+ if (!dev) {
+ pr_err("Hwdev pointer is NULL for register event\n");
+ return;
+ }
+
+ hwdev->event_callback = NULL;
+ hwdev->event_pri_handle = NULL;
+}
+
+void hinic3_event_callback(void *hwdev, struct hinic3_event_info *event)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev) {
+ pr_err("Hwdev pointer is NULL for event callback\n");
+ return;
+ }
+
+ if (!dev->event_callback) {
+ sdk_info(dev->dev_hdl, "Event callback function not register\n");
+ return;
+ }
+
+ dev->event_callback(dev->event_pri_handle, event);
+}
+EXPORT_SYMBOL(hinic3_event_callback);
+
+void hinic3_set_pcie_order_cfg(void *handle)
+{
+}
+
+void hinic3_disable_mgmt_msg_report(void *hwdev)
+{
+ struct hinic3_hwdev *hw_dev = (struct hinic3_hwdev *)hwdev;
+
+ hinic3_set_pf_status(hw_dev->hwif, HINIC3_PF_STATUS_INIT);
+}
+
+void hinic3_record_pcie_error(void *hwdev)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+
+ if (!hwdev)
+ return;
+
+ atomic_inc(&dev->hw_stats.fault_event_stats.pcie_fault_stats);
+}
+
+bool hinic3_need_init_stateful_default(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ u16 chip_svc_type = dev->cfg_mgmt->svc_cap.svc_type;
+
+	/* Currently, virtio net has to init cqm in the PPF. */
+ if (hinic3_func_type(hwdev) == TYPE_PPF && (chip_svc_type & CFG_SERVICE_MASK_VIRTIO) != 0)
+ return true;
+
+	/* Other service types init cqm when the uld calls. */
+ return false;
+}
+
+static inline void stateful_uninit(struct hinic3_hwdev *hwdev)
+{
+ u32 stateful_en;
+
+ stateful_en = IS_FT_TYPE(hwdev) | IS_RDMA_TYPE(hwdev);
+ if (stateful_en)
+ hinic3_ppf_ext_db_deinit(hwdev);
+}
+
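+/*
+ * Stateful resources are reference counted under stateful_mutex: the
+ * first hinic3_stateful_init() caller sets up the PPF extended doorbell,
+ * later callers only bump the count, and the last hinic3_stateful_deinit()
+ * releases it.
+ */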
+int hinic3_stateful_init(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ int stateful_en;
+ int err;
+
+ if (!dev)
+ return -EINVAL;
+
+ if (!hinic3_get_stateful_enable(dev))
+ return 0;
+
+ mutex_lock(&dev->stateful_mutex);
+ if (dev->stateful_ref_cnt++) {
+ mutex_unlock(&dev->stateful_mutex);
+ return 0;
+ }
+
+ stateful_en = (int)(IS_FT_TYPE(dev) | IS_RDMA_TYPE(dev));
+ if (stateful_en != 0 && HINIC3_IS_PPF(dev)) {
+ err = hinic3_ppf_ext_db_init(dev);
+ if (err)
+ goto out;
+ }
+
+ mutex_unlock(&dev->stateful_mutex);
+ sdk_info(dev->dev_hdl, "Initialize stateful resource success\n");
+
+ return 0;
+
+out:
+ dev->stateful_ref_cnt--;
+ mutex_unlock(&dev->stateful_mutex);
+
+ return err;
+}
+EXPORT_SYMBOL(hinic3_stateful_init);
+
+void hinic3_stateful_deinit(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev || !hinic3_get_stateful_enable(dev))
+ return;
+
+ mutex_lock(&dev->stateful_mutex);
+ if (!dev->stateful_ref_cnt || --dev->stateful_ref_cnt) {
+ mutex_unlock(&dev->stateful_mutex);
+ return;
+ }
+
+ stateful_uninit(hwdev);
+ mutex_unlock(&dev->stateful_mutex);
+
+ sdk_info(dev->dev_hdl, "Clear stateful resource success\n");
+}
+EXPORT_SYMBOL(hinic3_stateful_deinit);
+
+void hinic3_free_stateful(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev || !hinic3_get_stateful_enable(dev) || !dev->stateful_ref_cnt)
+ return;
+
+ if (!hinic3_need_init_stateful_default(hwdev) || dev->stateful_ref_cnt > 1)
+ sdk_info(dev->dev_hdl, "Current stateful resource ref is incorrect, ref_cnt:%u\n",
+ dev->stateful_ref_cnt);
+
+ stateful_uninit(hwdev);
+
+ sdk_info(dev->dev_hdl, "Clear stateful resource success\n");
+}
+
+int hinic3_get_card_present_state(void *hwdev, bool *card_present_state)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev || !card_present_state)
+ return -EINVAL;
+
+ *card_present_state = get_card_present_state(dev);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_card_present_state);
+
+void hinic3_link_event_stats(void *dev, u8 link)
+{
+ struct hinic3_hwdev *hwdev = dev;
+
+ if (link)
+ atomic_inc(&hwdev->hw_stats.link_event_stats.link_up_stats);
+ else
+ atomic_inc(&hwdev->hw_stats.link_event_stats.link_down_stats);
+}
+EXPORT_SYMBOL(hinic3_link_event_stats);
+
+u8 hinic3_max_pf_num(void *hwdev)
+{
+ if (!hwdev)
+ return 0;
+
+ return HINIC3_MAX_PF_NUM((struct hinic3_hwdev *)hwdev);
+}
+EXPORT_SYMBOL(hinic3_max_pf_num);
+
+void hinic3_fault_event_report(void *hwdev, u16 src, u16 level)
+{
+ if (!hwdev)
+ return;
+
+ sdk_info(((struct hinic3_hwdev *)hwdev)->dev_hdl, "Fault event report, src: %u, level: %u\n",
+ src, level);
+
+ hisdk3_fault_post_process(hwdev, src, level);
+}
+EXPORT_SYMBOL(hinic3_fault_event_report);
+
+void hinic3_probe_success(void *hwdev)
+{
+ if (!hwdev)
+ return;
+
+ hisdk3_probe_success(hwdev);
+}
+
+#define HINIC3_CHANNEL_BUSY_TIMEOUT 25
+
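+/*
+ * Runs from the heartbeat timer on the PPF: if the AEQ receive counter
+ * stops advancing for more than HINIC3_CHANNEL_BUSY_TIMEOUT consecutive
+ * ticks while the mailbox is up, the mgmt channel is flagged as busy.
+ */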
+static void hinic3_update_channel_status(struct hinic3_hwdev *hwdev)
+{
+ struct card_node *chip_node = hwdev->chip_node;
+
+ if (!chip_node)
+ return;
+
+ if (hinic3_func_type(hwdev) != TYPE_PPF || !COMM_SUPPORT_CHANNEL_DETECT(hwdev) ||
+ atomic_read(&chip_node->channel_busy_cnt))
+ return;
+
+ if (test_bit(HINIC3_HWDEV_MBOX_INITED, &hwdev->func_state)) {
+ if (hwdev->last_recv_aeq_cnt != hwdev->cur_recv_aeq_cnt) {
+ hwdev->aeq_busy_cnt = 0;
+ hwdev->last_recv_aeq_cnt = hwdev->cur_recv_aeq_cnt;
+ } else {
+ hwdev->aeq_busy_cnt++;
+ }
+
+ if (hwdev->aeq_busy_cnt > HINIC3_CHANNEL_BUSY_TIMEOUT) {
+ atomic_inc(&chip_node->channel_busy_cnt);
+ sdk_err(hwdev->dev_hdl, "Detect channel busy\n");
+ }
+ }
+}
+
+static void hinic3_heartbeat_lost_handler(struct work_struct *work)
+{
+ struct hinic3_event_info event_info = { 0 };
+ struct hinic3_hwdev *hwdev = container_of(work, struct hinic3_hwdev,
+ heartbeat_lost_work);
+ u16 src, level;
+
+ atomic_inc(&hwdev->hw_stats.heart_lost_stats);
+
+ if (hwdev->event_callback) {
+ event_info.service = EVENT_SRV_COMM;
+ event_info.type =
+ hwdev->pcie_link_down ? EVENT_COMM_PCIE_LINK_DOWN :
+ EVENT_COMM_HEART_LOST;
+ hwdev->event_callback(hwdev->event_pri_handle, &event_info);
+ }
+
+ if (hwdev->pcie_link_down) {
+ src = HINIC3_FAULT_SRC_PCIE_LINK_DOWN;
+ level = FAULT_LEVEL_HOST;
+ sdk_err(hwdev->dev_hdl, "Detect pcie is link down\n");
+ } else {
+ src = HINIC3_FAULT_SRC_HOST_HEARTBEAT_LOST;
+ level = FAULT_LEVEL_FATAL;
+ sdk_err(hwdev->dev_hdl, "Heart lost report received, func_id: %d\n",
+ hinic3_global_func_id(hwdev));
+ }
+
+ hinic3_show_chip_err_info(hwdev);
+
+ hisdk3_fault_post_process(hwdev, src, level);
+}
+
+#define DETECT_PCIE_LINK_DOWN_RETRY 2
+#define HINIC3_HEARTBEAT_START_EXPIRE 5000
+#define HINIC3_HEARTBEAT_PERIOD 1000
+
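+/*
+ * A BAR read of all-ones may mean the PCIe link is down; the link is
+ * declared down only after DETECT_PCIE_LINK_DOWN_RETRY consecutive failed
+ * reads. Any other non-zero heartbeat status means the firmware heartbeat
+ * was lost.
+ */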
+static bool hinic3_is_hw_abnormal(struct hinic3_hwdev *hwdev)
+{
+ u32 status;
+
+ if (!hinic3_get_chip_present_flag(hwdev))
+ return false;
+
+ status = hinic3_get_heartbeat_status(hwdev);
+ if (status == HINIC3_PCIE_LINK_DOWN) {
+ sdk_warn(hwdev->dev_hdl, "Detect BAR register read failed\n");
+ hwdev->rd_bar_err_cnt++;
+ if (hwdev->rd_bar_err_cnt >= DETECT_PCIE_LINK_DOWN_RETRY) {
+ hinic3_set_chip_absent(hwdev);
+ hinic3_force_complete_all(hwdev);
+ hwdev->pcie_link_down = true;
+ return true;
+ }
+
+ return false;
+ }
+
+ if (status) {
+ hwdev->heartbeat_lost = true;
+ return true;
+ }
+
+ hwdev->rd_bar_err_cnt = 0;
+
+ return false;
+}
+
+#ifdef HAVE_TIMER_SETUP
+static void hinic3_heartbeat_timer_handler(struct timer_list *t)
+#else
+static void hinic3_heartbeat_timer_handler(unsigned long data)
+#endif
+{
+#ifdef HAVE_TIMER_SETUP
+ struct hinic3_hwdev *hwdev = from_timer(hwdev, t, heartbeat_timer);
+#else
+ struct hinic3_hwdev *hwdev = (struct hinic3_hwdev *)data;
+#endif
+
+ if (hinic3_is_hw_abnormal(hwdev)) {
+ stop_timer(&hwdev->heartbeat_timer);
+ queue_work(hwdev->workq, &hwdev->heartbeat_lost_work);
+ } else {
+ mod_timer(&hwdev->heartbeat_timer,
+ jiffies + msecs_to_jiffies(HINIC3_HEARTBEAT_PERIOD));
+ }
+
+ hinic3_update_channel_status(hwdev);
+}
+
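+/*
+ * The heartbeat timer first fires HINIC3_HEARTBEAT_START_EXPIRE ms after
+ * init and then re-arms every HINIC3_HEARTBEAT_PERIOD ms until an abnormal
+ * state hands processing off to the heartbeat_lost work item.
+ */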
+static void hinic3_init_heartbeat_detect(struct hinic3_hwdev *hwdev)
+{
+#ifdef HAVE_TIMER_SETUP
+ timer_setup(&hwdev->heartbeat_timer, hinic3_heartbeat_timer_handler, 0);
+#else
+ initialize_timer(hwdev->adapter_hdl, &hwdev->heartbeat_timer);
+ hwdev->heartbeat_timer.data = (u64)hwdev;
+ hwdev->heartbeat_timer.function = hinic3_heartbeat_timer_handler;
+#endif
+
+ hwdev->heartbeat_timer.expires =
+ jiffies + msecs_to_jiffies(HINIC3_HEARTBEAT_START_EXPIRE);
+
+ add_to_timer(&hwdev->heartbeat_timer, HINIC3_HEARTBEAT_PERIOD);
+
+ INIT_WORK(&hwdev->heartbeat_lost_work, hinic3_heartbeat_lost_handler);
+}
+
+static void hinic3_destroy_heartbeat_detect(struct hinic3_hwdev *hwdev)
+{
+ destroy_work(&hwdev->heartbeat_lost_work);
+ stop_timer(&hwdev->heartbeat_timer);
+ delete_timer(&hwdev->heartbeat_timer);
+}
+
+void hinic3_set_api_stop(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return;
+
+ dev->chip_present_flag = HINIC3_CHIP_ABSENT;
+ sdk_info(dev->dev_hdl, "Set card absent\n");
+ hinic3_force_complete_all(dev);
+ sdk_info(dev->dev_hdl, "All messages interacting with the chip will stop\n");
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.h
new file mode 100644
index 000000000000..9f7d8a4859ec
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwdev.h
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_HWDEV_H
+#define HINIC3_HWDEV_H
+
+#include "hinic3_mt.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_profile.h"
+
+struct cfg_mgmt_info;
+
+struct hinic3_hwif;
+struct hinic3_aeqs;
+struct hinic3_ceqs;
+struct hinic3_mbox;
+struct hinic3_msg_pf_to_mgmt;
+struct hinic3_hwdev;
+
+#define HINIC3_CHANNEL_DETECT_PERIOD (5 * 1000)
+
+struct hinic3_page_addr {
+ void *virt_addr;
+ u64 phys_addr;
+};
+
+struct mqm_addr_trans_tbl_info {
+ u32 chunk_num;
+ u32 search_gpa_num;
+ u32 page_size;
+ u32 page_num;
+ struct hinic3_page_addr *brm_srch_page_addr;
+};
+
+struct hinic3_devlink {
+ struct hinic3_hwdev *hwdev;
+ u8 activate_fw; /* 0 ~ 7 */
+ u8 switch_cfg; /* 0 ~ 7 */
+};
+
+enum hinic3_func_mode {
+ /* single host */
+ FUNC_MOD_NORMAL_HOST,
+ /* multi host, bare-metal, sdi side */
+ FUNC_MOD_MULTI_BM_MASTER,
+ /* multi host, bare-metal, host side */
+ FUNC_MOD_MULTI_BM_SLAVE,
+ /* multi host, vm mode, sdi side */
+ FUNC_MOD_MULTI_VM_MASTER,
+ /* multi host, vm mode, host side */
+ FUNC_MOD_MULTI_VM_SLAVE,
+};
+
+#define IS_BMGW_MASTER_HOST(hwdev) \
+ ((hwdev)->func_mode == FUNC_MOD_MULTI_BM_MASTER)
+#define IS_BMGW_SLAVE_HOST(hwdev) \
+ ((hwdev)->func_mode == FUNC_MOD_MULTI_BM_SLAVE)
+#define IS_VM_MASTER_HOST(hwdev) \
+ ((hwdev)->func_mode == FUNC_MOD_MULTI_VM_MASTER)
+#define IS_VM_SLAVE_HOST(hwdev) \
+ ((hwdev)->func_mode == FUNC_MOD_MULTI_VM_SLAVE)
+
+#define IS_MASTER_HOST(hwdev) \
+ (IS_BMGW_MASTER_HOST(hwdev) || IS_VM_MASTER_HOST(hwdev))
+
+#define IS_SLAVE_HOST(hwdev) \
+ (IS_BMGW_SLAVE_HOST(hwdev) || IS_VM_SLAVE_HOST(hwdev))
+
+#define IS_MULTI_HOST(hwdev) \
+ (IS_BMGW_MASTER_HOST(hwdev) || IS_BMGW_SLAVE_HOST(hwdev) || \
+ IS_VM_MASTER_HOST(hwdev) || IS_VM_SLAVE_HOST(hwdev))
+
+#define NEED_MBOX_FORWARD(hwdev) IS_BMGW_SLAVE_HOST(hwdev)
+
+enum hinic3_host_mode_e {
+ HINIC3_MODE_NORMAL = 0,
+ HINIC3_SDI_MODE_VM,
+ HINIC3_SDI_MODE_BM,
+ HINIC3_SDI_MODE_MAX,
+};
+
+struct hinic3_hwdev {
+ void *adapter_hdl; /* pointer to hinic3_pcidev or NDIS_Adapter */
+ void *pcidev_hdl; /* pointer to pcidev or Handler */
+ void *dev_hdl; /* pointer to pcidev->dev or Handler, for
+ * sdk_err() or dma_alloc()
+ */
+
+ void *service_adapter[SERVICE_T_MAX];
+ void *chip_node;
+ void *ppf_hwdev;
+
+ u32 wq_page_size;
+ int chip_present_flag;
+ bool poll; /* use polling mode or int mode */
+ u32 rsvd1;
+
+ struct hinic3_hwif *hwif; /* include void __iomem *bar */
+ struct comm_global_attr glb_attr;
+ u64 features[COMM_MAX_FEATURE_QWORD];
+
+ struct cfg_mgmt_info *cfg_mgmt;
+
+ struct hinic3_cmdqs *cmdqs;
+ struct hinic3_aeqs *aeqs;
+ struct hinic3_ceqs *ceqs;
+ struct hinic3_mbox *func_to_func;
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt;
+ struct hinic3_clp_pf_to_mgmt *clp_pf_to_mgmt;
+
+ void *cqm_hdl;
+ struct mqm_addr_trans_tbl_info mqm_att;
+ struct hinic3_page_addr page_pa0;
+ struct hinic3_page_addr page_pa1;
+ u32 stateful_ref_cnt;
+ u32 rsvd2;
+
+ struct mutex stateful_mutex; /* protect cqm init and deinit */
+
+ struct hinic3_hw_stats hw_stats;
+ u8 *chip_fault_stats;
+
+ hinic3_event_handler event_callback;
+ void *event_pri_handle;
+
+ struct hinic3_board_info board_info;
+
+ struct delayed_work sync_time_task;
+ struct delayed_work channel_detect_task;
+ struct hisdk3_prof_attr *prof_attr;
+ struct hinic3_prof_adapter *prof_adap;
+
+ struct workqueue_struct *workq;
+
+ u32 rd_bar_err_cnt;
+ bool pcie_link_down;
+ bool heartbeat_lost;
+ struct timer_list heartbeat_timer;
+ struct work_struct heartbeat_lost_work;
+
+ ulong func_state;
+ spinlock_t channel_lock; /* protect channel init and deinit */
+
+ u16 probe_fault_level;
+
+ struct hinic3_devlink *devlink_dev;
+
+ enum hinic3_func_mode func_mode;
+ u32 rsvd3;
+
+ u64 cur_recv_aeq_cnt;
+ u64 last_recv_aeq_cnt;
+ u16 aeq_busy_cnt;
+ u64 rsvd4[8];
+};
+
+#define HINIC3_DRV_FEATURE_QW0 \
+ (COMM_F_API_CHAIN | COMM_F_CLP | COMM_F_MBOX_SEGMENT | \
+ COMM_F_CMDQ_NUM | COMM_F_VIRTIO_VQ_SIZE)
+
+#define HINIC3_MAX_HOST_NUM(hwdev) ((hwdev)->glb_attr.max_host_num)
+#define HINIC3_MAX_PF_NUM(hwdev) ((hwdev)->glb_attr.max_pf_num)
+#define HINIC3_MGMT_CPU_NODE_ID(hwdev) ((hwdev)->glb_attr.mgmt_host_node_id)
+
+#define COMM_FEATURE_QW0(hwdev, feature) \
+ ((hwdev)->features[0] & COMM_F_##feature)
+#define COMM_SUPPORT_API_CHAIN(hwdev) COMM_FEATURE_QW0(hwdev, API_CHAIN)
+#define COMM_SUPPORT_CLP(hwdev) COMM_FEATURE_QW0(hwdev, CLP)
+#define COMM_SUPPORT_CHANNEL_DETECT(hwdev) COMM_FEATURE_QW0(hwdev, CHANNEL_DETECT)
+#define COMM_SUPPORT_MBOX_SEGMENT(hwdev) (hinic3_pcie_itf_id(hwdev) == SPU_HOST_ID)
+#define COMM_SUPPORT_CMDQ_NUM(hwdev) COMM_FEATURE_QW0(hwdev, CMDQ_NUM)
+#define COMM_SUPPORT_VIRTIO_VQ_SIZE(hwdev) COMM_FEATURE_QW0(hwdev, VIRTIO_VQ_SIZE)
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.c
new file mode 100644
index 000000000000..9b749135dbed
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.c
@@ -0,0 +1,994 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_csr.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+
+#ifndef CONFIG_MODULE_PROF
+#define WAIT_HWIF_READY_TIMEOUT 10000
+#else
+#define WAIT_HWIF_READY_TIMEOUT 30000
+#endif
+
+#define HINIC3_WAIT_DOORBELL_AND_OUTBOUND_TIMEOUT 60000
+
+#define MAX_MSIX_ENTRY 2048
+
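+/* Map a doorbell address back to its page index within the doorbell BAR */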
+#define DB_IDX(db, db_base) \
+ ((u32)(((ulong)(db) - (ulong)(db_base)) / \
+ HINIC3_DB_PAGE_SIZE))
+
+#define HINIC3_AF0_FUNC_GLOBAL_IDX_SHIFT 0
+#define HINIC3_AF0_P2P_IDX_SHIFT 12
+#define HINIC3_AF0_PCI_INTF_IDX_SHIFT 17
+#define HINIC3_AF0_VF_IN_PF_SHIFT 20
+#define HINIC3_AF0_FUNC_TYPE_SHIFT 28
+
+#define HINIC3_AF0_FUNC_GLOBAL_IDX_MASK 0xFFF
+#define HINIC3_AF0_P2P_IDX_MASK 0x1F
+#define HINIC3_AF0_PCI_INTF_IDX_MASK 0x7
+#define HINIC3_AF0_VF_IN_PF_MASK 0xFF
+#define HINIC3_AF0_FUNC_TYPE_MASK 0x1
+
+#define HINIC3_AF0_GET(val, member) \
+ (((val) >> HINIC3_AF0_##member##_SHIFT) & HINIC3_AF0_##member##_MASK)
+
+#define HINIC3_AF1_PPF_IDX_SHIFT 0
+#define HINIC3_AF1_AEQS_PER_FUNC_SHIFT 8
+#define HINIC3_AF1_MGMT_INIT_STATUS_SHIFT 30
+#define HINIC3_AF1_PF_INIT_STATUS_SHIFT 31
+
+#define HINIC3_AF1_PPF_IDX_MASK 0x3F
+#define HINIC3_AF1_AEQS_PER_FUNC_MASK 0x3
+#define HINIC3_AF1_MGMT_INIT_STATUS_MASK 0x1
+#define HINIC3_AF1_PF_INIT_STATUS_MASK 0x1
+
+#define HINIC3_AF1_GET(val, member) \
+ (((val) >> HINIC3_AF1_##member##_SHIFT) & HINIC3_AF1_##member##_MASK)
+
+#define HINIC3_AF2_CEQS_PER_FUNC_SHIFT 0
+#define HINIC3_AF2_DMA_ATTR_PER_FUNC_SHIFT 9
+#define HINIC3_AF2_IRQS_PER_FUNC_SHIFT 16
+
+#define HINIC3_AF2_CEQS_PER_FUNC_MASK 0x1FF
+#define HINIC3_AF2_DMA_ATTR_PER_FUNC_MASK 0x7
+#define HINIC3_AF2_IRQS_PER_FUNC_MASK 0x7FF
+
+#define HINIC3_AF2_GET(val, member) \
+ (((val) >> HINIC3_AF2_##member##_SHIFT) & HINIC3_AF2_##member##_MASK)
+
+#define HINIC3_AF3_GLOBAL_VF_ID_OF_NXT_PF_SHIFT 0
+#define HINIC3_AF3_GLOBAL_VF_ID_OF_PF_SHIFT 16
+
+#define HINIC3_AF3_GLOBAL_VF_ID_OF_NXT_PF_MASK 0xFFF
+#define HINIC3_AF3_GLOBAL_VF_ID_OF_PF_MASK 0xFFF
+
+#define HINIC3_AF3_GET(val, member) \
+ (((val) >> HINIC3_AF3_##member##_SHIFT) & HINIC3_AF3_##member##_MASK)
+
+#define HINIC3_AF4_DOORBELL_CTRL_SHIFT 0
+#define HINIC3_AF4_DOORBELL_CTRL_MASK 0x1
+
+#define HINIC3_AF4_GET(val, member) \
+ (((val) >> HINIC3_AF4_##member##_SHIFT) & HINIC3_AF4_##member##_MASK)
+
+#define HINIC3_AF4_SET(val, member) \
+ (((val) & HINIC3_AF4_##member##_MASK) << HINIC3_AF4_##member##_SHIFT)
+
+#define HINIC3_AF4_CLEAR(val, member) \
+ ((val) & (~(HINIC3_AF4_##member##_MASK << HINIC3_AF4_##member##_SHIFT)))
+
+#define HINIC3_AF5_OUTBOUND_CTRL_SHIFT 0
+#define HINIC3_AF5_OUTBOUND_CTRL_MASK 0x1
+
+#define HINIC3_AF5_GET(val, member) \
+ (((val) >> HINIC3_AF5_##member##_SHIFT) & HINIC3_AF5_##member##_MASK)
+
+#define HINIC3_AF5_SET(val, member) \
+ (((val) & HINIC3_AF5_##member##_MASK) << HINIC3_AF5_##member##_SHIFT)
+
+#define HINIC3_AF5_CLEAR(val, member) \
+ ((val) & (~(HINIC3_AF5_##member##_MASK << HINIC3_AF5_##member##_SHIFT)))
+
+#define HINIC3_AF6_PF_STATUS_SHIFT 0
+#define HINIC3_AF6_PF_STATUS_MASK 0xFFFF
+
+#define HINIC3_AF6_FUNC_MAX_SQ_SHIFT 23
+#define HINIC3_AF6_FUNC_MAX_SQ_MASK 0x1FF
+
+#define HINIC3_AF6_MSIX_FLEX_EN_SHIFT 22
+#define HINIC3_AF6_MSIX_FLEX_EN_MASK 0x1
+
+#define HINIC3_AF6_SET(val, member) \
+ ((((u32)(val)) & HINIC3_AF6_##member##_MASK) << \
+ HINIC3_AF6_##member##_SHIFT)
+
+#define HINIC3_AF6_GET(val, member) \
+ (((u32)(val) >> HINIC3_AF6_##member##_SHIFT) & HINIC3_AF6_##member##_MASK)
+
+#define HINIC3_AF6_CLEAR(val, member) \
+ ((u32)(val) & (~(HINIC3_AF6_##member##_MASK << \
+ HINIC3_AF6_##member##_SHIFT)))
+
+#define HINIC3_PPF_ELECT_PORT_IDX_SHIFT 0
+
+#define HINIC3_PPF_ELECT_PORT_IDX_MASK 0x3F
+
+#define HINIC3_PPF_ELECT_PORT_GET(val, member) \
+ (((val) >> HINIC3_PPF_ELECT_PORT_##member##_SHIFT) & \
+ HINIC3_PPF_ELECT_PORT_##member##_MASK)
+
+#define HINIC3_PPF_ELECTION_IDX_SHIFT 0
+
+#define HINIC3_PPF_ELECTION_IDX_MASK 0x3F
+
+#define HINIC3_PPF_ELECTION_SET(val, member) \
+ (((val) & HINIC3_PPF_ELECTION_##member##_MASK) << \
+ HINIC3_PPF_ELECTION_##member##_SHIFT)
+
+#define HINIC3_PPF_ELECTION_GET(val, member) \
+ (((val) >> HINIC3_PPF_ELECTION_##member##_SHIFT) & \
+ HINIC3_PPF_ELECTION_##member##_MASK)
+
+#define HINIC3_PPF_ELECTION_CLEAR(val, member) \
+ ((val) & (~(HINIC3_PPF_ELECTION_##member##_MASK << \
+ HINIC3_PPF_ELECTION_##member##_SHIFT)))
+
+#define HINIC3_MPF_ELECTION_IDX_SHIFT 0
+
+#define HINIC3_MPF_ELECTION_IDX_MASK 0x1F
+
+#define HINIC3_MPF_ELECTION_SET(val, member) \
+ (((val) & HINIC3_MPF_ELECTION_##member##_MASK) << \
+ HINIC3_MPF_ELECTION_##member##_SHIFT)
+
+#define HINIC3_MPF_ELECTION_GET(val, member) \
+ (((val) >> HINIC3_MPF_ELECTION_##member##_SHIFT) & \
+ HINIC3_MPF_ELECTION_##member##_MASK)
+
+#define HINIC3_MPF_ELECTION_CLEAR(val, member) \
+ ((val) & (~(HINIC3_MPF_ELECTION_##member##_MASK << \
+ HINIC3_MPF_ELECTION_##member##_SHIFT)))
+
+#define HINIC3_GET_REG_FLAG(reg) ((reg) & (~(HINIC3_REGS_FLAG_MAKS)))
+
+#define HINIC3_GET_REG_ADDR(reg) ((reg) & (HINIC3_REGS_FLAG_MAKS))
+
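+/*
+ * The cfg and mgmt register spaces are big-endian; readl() plus
+ * be32_to_cpu() converts the raw value to CPU byte order.
+ */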
+u32 hinic3_hwif_read_reg(struct hinic3_hwif *hwif, u32 reg)
+{
+ if (HINIC3_GET_REG_FLAG(reg) == HINIC3_MGMT_REGS_FLAG)
+ return be32_to_cpu(readl(hwif->mgmt_regs_base +
+ HINIC3_GET_REG_ADDR(reg)));
+ else
+ return be32_to_cpu(readl(hwif->cfg_regs_base +
+ HINIC3_GET_REG_ADDR(reg)));
+}
+
+void hinic3_hwif_write_reg(struct hinic3_hwif *hwif, u32 reg, u32 val)
+{
+ if (HINIC3_GET_REG_FLAG(reg) == HINIC3_MGMT_REGS_FLAG)
+ writel(cpu_to_be32(val),
+ hwif->mgmt_regs_base + HINIC3_GET_REG_ADDR(reg));
+ else
+ writel(cpu_to_be32(val),
+ hwif->cfg_regs_base + HINIC3_GET_REG_ADDR(reg));
+}
+
+bool get_card_present_state(struct hinic3_hwdev *hwdev)
+{
+ u32 attr1;
+
+ attr1 = hinic3_hwif_read_reg(hwdev->hwif, HINIC3_CSR_FUNC_ATTR1_ADDR);
+ if (attr1 == HINIC3_PCIE_LINK_DOWN) {
+ sdk_warn(hwdev->dev_hdl, "Card is not present\n");
+ return false;
+ }
+
+ return true;
+}
+
+/**
+ * hinic3_get_heartbeat_status - get heartbeat status
+ * @hwdev: the pointer to hw device
+ * Return: 0 - normal, 1 - heartbeat lost, 0xFFFFFFFF - PCIe link down
+ **/
+u32 hinic3_get_heartbeat_status(void *hwdev)
+{
+ u32 attr1;
+
+ if (!hwdev)
+ return HINIC3_PCIE_LINK_DOWN;
+
+ attr1 = hinic3_hwif_read_reg(((struct hinic3_hwdev *)hwdev)->hwif,
+ HINIC3_CSR_FUNC_ATTR1_ADDR);
+ if (attr1 == HINIC3_PCIE_LINK_DOWN)
+ return attr1;
+
+ return !HINIC3_AF1_GET(attr1, MGMT_INIT_STATUS);
+}
+EXPORT_SYMBOL(hinic3_get_heartbeat_status);
+
+#define MIGRATE_HOST_STATUS_CLEAR(host_id, val) ((val) & (~(1U << (host_id))))
+#define MIGRATE_HOST_STATUS_SET(host_id, enable) (((u8)(enable) & 1U) << (host_id))
+#define MIGRATE_HOST_STATUS_GET(host_id, val) (!!((val) & (1U << (host_id))))
+
+int hinic3_set_host_migrate_enable(void *hwdev, u8 host_id, bool enable)
+{
+	struct hinic3_hwdev *dev = hwdev;
+	u32 reg_val;
+
+ if (HINIC3_FUNC_TYPE(dev) != TYPE_PPF) {
+ sdk_warn(dev->dev_hdl, "hwdev should be ppf\n");
+ return -EINVAL;
+ }
+
+ reg_val = hinic3_hwif_read_reg(dev->hwif, HINIC3_MULT_MIGRATE_HOST_STATUS_ADDR);
+ reg_val = MIGRATE_HOST_STATUS_CLEAR(host_id, reg_val);
+ reg_val |= MIGRATE_HOST_STATUS_SET(host_id, enable);
+
+ hinic3_hwif_write_reg(dev->hwif, HINIC3_MULT_MIGRATE_HOST_STATUS_ADDR, reg_val);
+
+ sdk_info(dev->dev_hdl, "Set migrate host %d status %d, reg value: 0x%x\n",
+ host_id, enable, reg_val);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_host_migrate_enable);
+
+int hinic3_get_host_migrate_enable(void *hwdev, u8 host_id, u8 *migrate_en)
+{
+	struct hinic3_hwdev *dev = hwdev;
+	u32 reg_val;
+
+ if (HINIC3_FUNC_TYPE(dev) != TYPE_PPF) {
+ sdk_warn(dev->dev_hdl, "hwdev should be ppf\n");
+ return -EINVAL;
+ }
+
+ reg_val = hinic3_hwif_read_reg(dev->hwif, HINIC3_MULT_MIGRATE_HOST_STATUS_ADDR);
+ *migrate_en = MIGRATE_HOST_STATUS_GET(host_id, reg_val);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_get_host_migrate_enable);
+
+static enum hinic3_wait_return check_hwif_ready_handler(void *priv_data)
+{
+ u32 status;
+
+ status = hinic3_get_heartbeat_status(priv_data);
+ if (status == HINIC3_PCIE_LINK_DOWN)
+ return WAIT_PROCESS_ERR;
+ else if (!status)
+ return WAIT_PROCESS_CPL;
+
+ return WAIT_PROCESS_WAITING;
+}
+
+static int wait_hwif_ready(struct hinic3_hwdev *hwdev)
+{
+ int ret;
+
+ ret = hinic3_wait_for_timeout(hwdev, check_hwif_ready_handler,
+ WAIT_HWIF_READY_TIMEOUT, USEC_PER_MSEC);
+ if (ret == -ETIMEDOUT) {
+ hwdev->probe_fault_level = FAULT_LEVEL_FATAL;
+ sdk_err(hwdev->dev_hdl, "Wait for hwif timeout\n");
+ }
+
+ return ret;
+}
+
+/**
+ * set_hwif_attr - set the attributes as members in hwif
+ * @hwif: the hardware interface of a pci function device
+ * @attr0: the first attribute that was read from the hw
+ * @attr1: the second attribute that was read from the hw
+ * @attr2: the third attribute that was read from the hw
+ * @attr3: the fourth attribute that was read from the hw
+ * @attr6: the attr6 register value that was read from the hw
+ **/
+static void set_hwif_attr(struct hinic3_hwif *hwif, u32 attr0, u32 attr1,
+ u32 attr2, u32 attr3, u32 attr6)
+{
+ hwif->attr.func_global_idx = HINIC3_AF0_GET(attr0, FUNC_GLOBAL_IDX);
+ hwif->attr.port_to_port_idx = HINIC3_AF0_GET(attr0, P2P_IDX);
+ hwif->attr.pci_intf_idx = HINIC3_AF0_GET(attr0, PCI_INTF_IDX);
+ hwif->attr.vf_in_pf = HINIC3_AF0_GET(attr0, VF_IN_PF);
+ hwif->attr.func_type = HINIC3_AF0_GET(attr0, FUNC_TYPE);
+
+ hwif->attr.ppf_idx = HINIC3_AF1_GET(attr1, PPF_IDX);
+ hwif->attr.num_aeqs = BIT(HINIC3_AF1_GET(attr1, AEQS_PER_FUNC));
+ hwif->attr.num_ceqs = (u8)HINIC3_AF2_GET(attr2, CEQS_PER_FUNC);
+ hwif->attr.num_irqs = HINIC3_AF2_GET(attr2, IRQS_PER_FUNC);
+ if (hwif->attr.num_irqs > MAX_MSIX_ENTRY)
+ hwif->attr.num_irqs = MAX_MSIX_ENTRY;
+
+ hwif->attr.num_dma_attr = BIT(HINIC3_AF2_GET(attr2, DMA_ATTR_PER_FUNC));
+
+ hwif->attr.global_vf_id_of_pf = HINIC3_AF3_GET(attr3,
+ GLOBAL_VF_ID_OF_PF);
+
+ hwif->attr.num_sq = HINIC3_AF6_GET(attr6, FUNC_MAX_SQ);
+ hwif->attr.msix_flex_en = HINIC3_AF6_GET(attr6, MSIX_FLEX_EN);
+}
+
+/**
+ * get_hwif_attr - read and set the attributes as members in hwif
+ * @hwif: the hardware interface of a pci function device
+ **/
+static int get_hwif_attr(struct hinic3_hwif *hwif)
+{
+ u32 addr, attr0, attr1, attr2, attr3, attr6;
+
+ addr = HINIC3_CSR_FUNC_ATTR0_ADDR;
+ attr0 = hinic3_hwif_read_reg(hwif, addr);
+ if (attr0 == HINIC3_PCIE_LINK_DOWN)
+ return -EFAULT;
+
+ addr = HINIC3_CSR_FUNC_ATTR1_ADDR;
+ attr1 = hinic3_hwif_read_reg(hwif, addr);
+ if (attr1 == HINIC3_PCIE_LINK_DOWN)
+ return -EFAULT;
+
+ addr = HINIC3_CSR_FUNC_ATTR2_ADDR;
+ attr2 = hinic3_hwif_read_reg(hwif, addr);
+ if (attr2 == HINIC3_PCIE_LINK_DOWN)
+ return -EFAULT;
+
+ addr = HINIC3_CSR_FUNC_ATTR3_ADDR;
+ attr3 = hinic3_hwif_read_reg(hwif, addr);
+ if (attr3 == HINIC3_PCIE_LINK_DOWN)
+ return -EFAULT;
+
+ addr = HINIC3_CSR_FUNC_ATTR6_ADDR;
+ attr6 = hinic3_hwif_read_reg(hwif, addr);
+ if (attr6 == HINIC3_PCIE_LINK_DOWN)
+ return -EFAULT;
+
+ set_hwif_attr(hwif, attr0, attr1, attr2, attr3, attr6);
+
+ return 0;
+}
+
+void hinic3_set_pf_status(struct hinic3_hwif *hwif,
+ enum hinic3_pf_status status)
+{
+	u32 attr6;
+
+	/* VFs cannot update the PF status field */
+	if (hwif->attr.func_type == TYPE_VF)
+		return;
+
+	attr6 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR6_ADDR);
+	attr6 = HINIC3_AF6_CLEAR(attr6, PF_STATUS);
+	attr6 |= HINIC3_AF6_SET(status, PF_STATUS);
+
+	hinic3_hwif_write_reg(hwif, HINIC3_CSR_FUNC_ATTR6_ADDR, attr6);
+}
+
+enum hinic3_pf_status hinic3_get_pf_status(struct hinic3_hwif *hwif)
+{
+ u32 attr6 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR6_ADDR);
+
+ return HINIC3_AF6_GET(attr6, PF_STATUS);
+}
+
+static enum hinic3_doorbell_ctrl hinic3_get_doorbell_ctrl_status(struct hinic3_hwif *hwif)
+{
+ u32 attr4 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR4_ADDR);
+
+ return HINIC3_AF4_GET(attr4, DOORBELL_CTRL);
+}
+
+static enum hinic3_outbound_ctrl hinic3_get_outbound_ctrl_status(struct hinic3_hwif *hwif)
+{
+ u32 attr5 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR5_ADDR);
+
+ return HINIC3_AF5_GET(attr5, OUTBOUND_CTRL);
+}
+
+void hinic3_enable_doorbell(struct hinic3_hwif *hwif)
+{
+ u32 addr, attr4;
+
+ addr = HINIC3_CSR_FUNC_ATTR4_ADDR;
+ attr4 = hinic3_hwif_read_reg(hwif, addr);
+
+ attr4 = HINIC3_AF4_CLEAR(attr4, DOORBELL_CTRL);
+ attr4 |= HINIC3_AF4_SET(ENABLE_DOORBELL, DOORBELL_CTRL);
+
+ hinic3_hwif_write_reg(hwif, addr, attr4);
+}
+
+void hinic3_disable_doorbell(struct hinic3_hwif *hwif)
+{
+ u32 addr, attr4;
+
+ addr = HINIC3_CSR_FUNC_ATTR4_ADDR;
+ attr4 = hinic3_hwif_read_reg(hwif, addr);
+
+ attr4 = HINIC3_AF4_CLEAR(attr4, DOORBELL_CTRL);
+ attr4 |= HINIC3_AF4_SET(DISABLE_DOORBELL, DOORBELL_CTRL);
+
+ hinic3_hwif_write_reg(hwif, addr, attr4);
+}
+
+/**
+ * set_ppf - try to set hwif as ppf and set the type of hwif in this case
+ * @hwif: the hardware interface of a pci function device
+ **/
+static void set_ppf(struct hinic3_hwif *hwif)
+{
+ struct hinic3_func_attr *attr = &hwif->attr;
+ u32 addr, val, ppf_election;
+
+ /* Read Modify Write */
+ addr = HINIC3_CSR_PPF_ELECTION_ADDR;
+
+ val = hinic3_hwif_read_reg(hwif, addr);
+ val = HINIC3_PPF_ELECTION_CLEAR(val, IDX);
+
+ ppf_election = HINIC3_PPF_ELECTION_SET(attr->func_global_idx, IDX);
+ val |= ppf_election;
+
+ hinic3_hwif_write_reg(hwif, addr, val);
+
+	/*
+	 * Check the election result: the register keeps the first index
+	 * written after reset, so if the read-back index matches our own,
+	 * this function has been elected as the PPF.
+	 */
+ val = hinic3_hwif_read_reg(hwif, addr);
+
+ attr->ppf_idx = HINIC3_PPF_ELECTION_GET(val, IDX);
+ if (attr->ppf_idx == attr->func_global_idx)
+ attr->func_type = TYPE_PPF;
+}
+
+/**
+ * get_mpf - get the mpf index into the hwif
+ * @hwif: the hardware interface of a pci function device
+ **/
+static void get_mpf(struct hinic3_hwif *hwif)
+{
+ struct hinic3_func_attr *attr = &hwif->attr;
+ u32 mpf_election, addr;
+
+ addr = HINIC3_CSR_GLOBAL_MPF_ELECTION_ADDR;
+
+ mpf_election = hinic3_hwif_read_reg(hwif, addr);
+ attr->mpf_idx = HINIC3_MPF_ELECTION_GET(mpf_election, IDX);
+}
+
+/**
+ * set_mpf - try to set hwif as mpf and set the mpf idx in hwif
+ * @hwif: the hardware interface of a pci function device
+ **/
+static void set_mpf(struct hinic3_hwif *hwif)
+{
+ struct hinic3_func_attr *attr = &hwif->attr;
+ u32 addr, val, mpf_election;
+
+ /* Read Modify Write */
+ addr = HINIC3_CSR_GLOBAL_MPF_ELECTION_ADDR;
+
+ val = hinic3_hwif_read_reg(hwif, addr);
+
+ val = HINIC3_MPF_ELECTION_CLEAR(val, IDX);
+ mpf_election = HINIC3_MPF_ELECTION_SET(attr->func_global_idx, IDX);
+
+ val |= mpf_election;
+ hinic3_hwif_write_reg(hwif, addr, val);
+}
+
+static int init_hwif(struct hinic3_hwdev *hwdev, void *cfg_reg_base, void *intr_reg_base,
+ void *mgmt_regs_base)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ hwif = kzalloc(sizeof(*hwif), GFP_KERNEL);
+ if (!hwif)
+ return -ENOMEM;
+
+ hwdev->hwif = hwif;
+ hwif->pdev = hwdev->pcidev_hdl;
+
+ /* if function is VF, mgmt_regs_base will be NULL */
+ hwif->cfg_regs_base = mgmt_regs_base ? cfg_reg_base :
+ (u8 *)cfg_reg_base + HINIC3_VF_CFG_REG_OFFSET;
+
+ hwif->intr_regs_base = intr_reg_base;
+ hwif->mgmt_regs_base = mgmt_regs_base;
+
+ return 0;
+}
+
+static int init_db_area_idx(struct hinic3_hwif *hwif, u64 db_base_phy, u8 *db_base,
+ u64 db_dwqe_len)
+{
+ struct hinic3_free_db_area *free_db_area = &hwif->free_db_area;
+ u32 db_max_areas;
+
+ hwif->db_base_phy = db_base_phy;
+ hwif->db_base = db_base;
+ hwif->db_dwqe_len = db_dwqe_len;
+
+ db_max_areas = (db_dwqe_len > HINIC3_DB_DWQE_SIZE) ?
+ HINIC3_DB_MAX_AREAS :
+ (u32)(db_dwqe_len / HINIC3_DB_PAGE_SIZE);
+ free_db_area->db_bitmap_array = bitmap_zalloc(db_max_areas, GFP_KERNEL);
+ if (!free_db_area->db_bitmap_array) {
+ pr_err("Failed to allocate db area.\n");
+ return -ENOMEM;
+ }
+ free_db_area->db_max_areas = db_max_areas;
+ spin_lock_init(&free_db_area->idx_lock);
+ return 0;
+}
+
+static void free_db_area(struct hinic3_free_db_area *free_db_area)
+{
+ spin_lock_deinit(&free_db_area->idx_lock);
+ kfree(free_db_area->db_bitmap_array);
+}
+
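+/*
+ * Doorbell pages are allocated from a bitmap: find_first_zero_bit()
+ * picks the lowest free page index under idx_lock, and free_db_idx()
+ * clears the bit when the page is returned.
+ */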
+static int get_db_idx(struct hinic3_hwif *hwif, u32 *idx)
+{
+ struct hinic3_free_db_area *free_db_area = &hwif->free_db_area;
+ u32 pg_idx;
+
+ spin_lock(&free_db_area->idx_lock);
+ pg_idx = (u32)find_first_zero_bit(free_db_area->db_bitmap_array,
+ free_db_area->db_max_areas);
+ if (pg_idx == free_db_area->db_max_areas) {
+ spin_unlock(&free_db_area->idx_lock);
+ return -ENOMEM;
+ }
+ set_bit(pg_idx, free_db_area->db_bitmap_array);
+ spin_unlock(&free_db_area->idx_lock);
+
+ *idx = pg_idx;
+
+ return 0;
+}
+
+static void free_db_idx(struct hinic3_hwif *hwif, u32 idx)
+{
+ struct hinic3_free_db_area *free_db_area = &hwif->free_db_area;
+
+ if (idx >= free_db_area->db_max_areas)
+ return;
+
+ spin_lock(&free_db_area->idx_lock);
+ clear_bit((int)idx, free_db_area->db_bitmap_array);
+
+ spin_unlock(&free_db_area->idx_lock);
+}
+
+void hinic3_free_db_addr(void *hwdev, const void __iomem *db_base,
+ void __iomem *dwqe_base)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 idx;
+
+ if (!hwdev || !db_base)
+ return;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+ idx = DB_IDX(db_base, hwif->db_base);
+
+ free_db_idx(hwif, idx);
+}
+EXPORT_SYMBOL(hinic3_free_db_addr);
+
+int hinic3_alloc_db_addr(void *hwdev, void __iomem **db_base,
+ void __iomem **dwqe_base)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 idx = 0;
+ int err;
+
+ if (!hwdev || !db_base)
+ return -EINVAL;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ err = get_db_idx(hwif, &idx);
+ if (err)
+ return -EFAULT;
+
+ *db_base = hwif->db_base + idx * HINIC3_DB_PAGE_SIZE;
+
+ if (!dwqe_base)
+ return 0;
+
+ *dwqe_base = (u8 *)*db_base + HINIC3_DWQE_OFFSET;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_alloc_db_addr);
+
+void hinic3_free_db_phy_addr(void *hwdev, u64 db_base, u64 dwqe_base)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 idx;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+ idx = DB_IDX(db_base, hwif->db_base_phy);
+
+ free_db_idx(hwif, idx);
+}
+EXPORT_SYMBOL(hinic3_free_db_phy_addr);
+
+int hinic3_alloc_db_phy_addr(void *hwdev, u64 *db_base, u64 *dwqe_base)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 idx;
+ int err;
+
+ if (!hwdev || !db_base || !dwqe_base)
+ return -EINVAL;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ err = get_db_idx(hwif, &idx);
+ if (err)
+ return -EFAULT;
+
+ *db_base = hwif->db_base_phy + idx * HINIC3_DB_PAGE_SIZE;
+ *dwqe_base = *db_base + HINIC3_DWQE_OFFSET;
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_alloc_db_phy_addr);
+
+void hinic3_set_msix_auto_mask_state(void *hwdev, u16 msix_idx,
+ enum hinic3_msix_auto_mask flag)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 mask_bits;
+ u32 addr;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ if (flag)
+ mask_bits = HINIC3_MSI_CLR_INDIR_SET(1, AUTO_MSK_SET);
+ else
+ mask_bits = HINIC3_MSI_CLR_INDIR_SET(1, AUTO_MSK_CLR);
+
+ mask_bits = mask_bits |
+ HINIC3_MSI_CLR_INDIR_SET(msix_idx, SIMPLE_INDIR_IDX);
+
+ addr = HINIC3_CSR_FUNC_MSI_CLR_WR_ADDR;
+ hinic3_hwif_write_reg(hwif, addr, mask_bits);
+}
+EXPORT_SYMBOL(hinic3_set_msix_auto_mask_state);
+
+void hinic3_set_msix_state(void *hwdev, u16 msix_idx,
+ enum hinic3_msix_state flag)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 mask_bits;
+ u32 addr;
+ u8 int_msk = 1;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ if (flag)
+ mask_bits = HINIC3_MSI_CLR_INDIR_SET(int_msk, INT_MSK_SET);
+ else
+ mask_bits = HINIC3_MSI_CLR_INDIR_SET(int_msk, INT_MSK_CLR);
+ mask_bits = mask_bits |
+ HINIC3_MSI_CLR_INDIR_SET(msix_idx, SIMPLE_INDIR_IDX);
+
+ addr = HINIC3_CSR_FUNC_MSI_CLR_WR_ADDR;
+ hinic3_hwif_write_reg(hwif, addr, mask_bits);
+}
+EXPORT_SYMBOL(hinic3_set_msix_state);
+
+static void disable_all_msix(struct hinic3_hwdev *hwdev)
+{
+ u16 num_irqs = hwdev->hwif->attr.num_irqs;
+ u16 i;
+
+ for (i = 0; i < num_irqs; i++)
+ hinic3_set_msix_state(hwdev, i, HINIC3_MSIX_DISABLE);
+}
+
+static enum hinic3_wait_return check_db_outbound_enable_handler(void *priv_data)
+{
+ struct hinic3_hwif *hwif = priv_data;
+ enum hinic3_doorbell_ctrl db_ctrl;
+ enum hinic3_outbound_ctrl outbound_ctrl;
+
+ db_ctrl = hinic3_get_doorbell_ctrl_status(hwif);
+ outbound_ctrl = hinic3_get_outbound_ctrl_status(hwif);
+ if (outbound_ctrl == ENABLE_OUTBOUND && db_ctrl == ENABLE_DOORBELL)
+ return WAIT_PROCESS_CPL;
+
+ return WAIT_PROCESS_WAITING;
+}
+
+static int wait_until_doorbell_and_outbound_enabled(struct hinic3_hwif *hwif)
+{
+ return hinic3_wait_for_timeout(hwif, check_db_outbound_enable_handler,
+ HINIC3_WAIT_DOORBELL_AND_OUTBOUND_TIMEOUT, USEC_PER_MSEC);
+}
+
+static void select_ppf_mpf(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_hwif *hwif = hwdev->hwif;
+
+ if (!HINIC3_IS_VF(hwdev)) {
+ set_ppf(hwif);
+
+ if (HINIC3_IS_PPF(hwdev))
+ set_mpf(hwif);
+
+ get_mpf(hwif);
+ }
+}
+
+/**
+ * hinic3_init_hwif - initialize the hw interface
+ * @hwdev: the pointer to hw device
+ * @cfg_reg_base: base address of the config registers
+ * @intr_reg_base: base address of the interrupt registers
+ * @mgmt_regs_base: base address of the mgmt registers, NULL for VF
+ * @db_base_phy: physical base address of the doorbell area
+ * @db_base: mapped base address of the doorbell area
+ * @db_dwqe_len: length of the doorbell and dwqe area
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_init_hwif(struct hinic3_hwdev *hwdev, void *cfg_reg_base,
+ void *intr_reg_base, void *mgmt_regs_base, u64 db_base_phy,
+ void *db_base, u64 db_dwqe_len)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 attr1, attr4, attr5;
+ int err;
+
+ err = init_hwif(hwdev, cfg_reg_base, intr_reg_base, mgmt_regs_base);
+ if (err)
+ return err;
+
+ hwif = hwdev->hwif;
+
+ err = init_db_area_idx(hwif, db_base_phy, db_base, db_dwqe_len);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to init db area.\n");
+ goto init_db_area_err;
+ }
+
+ err = wait_hwif_ready(hwdev);
+ if (err) {
+ attr1 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR1_ADDR);
+ sdk_err(hwdev->dev_hdl, "Chip status is not ready, attr1:0x%x\n", attr1);
+ goto hwif_ready_err;
+ }
+
+ err = get_hwif_attr(hwif);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Get hwif attr failed\n");
+ goto hwif_ready_err;
+ }
+
+ err = wait_until_doorbell_and_outbound_enabled(hwif);
+ if (err) {
+ attr4 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR4_ADDR);
+ attr5 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR5_ADDR);
+ sdk_err(hwdev->dev_hdl, "Hw doorbell/outbound is disabled, attr4 0x%x attr5 0x%x\n",
+ attr4, attr5);
+ goto hwif_ready_err;
+ }
+
+ select_ppf_mpf(hwdev);
+
+ disable_all_msix(hwdev);
+ /* disable mgmt cpu report any event */
+ hinic3_set_pf_status(hwdev->hwif, HINIC3_PF_STATUS_INIT);
+
+ sdk_info(hwdev->dev_hdl, "global_func_idx: %u, func_type: %d, host_id: %u, ppf: %u, mpf: %u\n",
+ hwif->attr.func_global_idx, hwif->attr.func_type, hwif->attr.pci_intf_idx,
+ hwif->attr.ppf_idx, hwif->attr.mpf_idx);
+
+ return 0;
+
+hwif_ready_err:
+ hinic3_show_chip_err_info(hwdev);
+ free_db_area(&hwif->free_db_area);
+init_db_area_err:
+ kfree(hwif);
+
+ return err;
+}
+
+/**
+ * hinic3_free_hwif - free the hw interface
+ * @hwdev: the pointer to hw device
+ **/
+void hinic3_free_hwif(struct hinic3_hwdev *hwdev)
+{
+ free_db_area(&hwdev->hwif->free_db_area);
+ kfree(hwdev->hwif);
+}
+
+u16 hinic3_global_func_id(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.func_global_idx;
+}
+EXPORT_SYMBOL(hinic3_global_func_id);
+
+u16 hinic3_intr_num(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.num_irqs;
+}
+EXPORT_SYMBOL(hinic3_intr_num);
+
+u8 hinic3_pf_id_of_vf(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.port_to_port_idx;
+}
+EXPORT_SYMBOL(hinic3_pf_id_of_vf);
+
+u8 hinic3_pcie_itf_id(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.pci_intf_idx;
+}
+EXPORT_SYMBOL(hinic3_pcie_itf_id);
+
+u8 hinic3_vf_in_pf(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.vf_in_pf;
+}
+EXPORT_SYMBOL(hinic3_vf_in_pf);
+
+enum func_type hinic3_func_type(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.func_type;
+}
+EXPORT_SYMBOL(hinic3_func_type);
+
+u8 hinic3_ceq_num(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.num_ceqs;
+}
+EXPORT_SYMBOL(hinic3_ceq_num);
+
+u16 hinic3_glb_pf_vf_offset(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.global_vf_id_of_pf;
+}
+EXPORT_SYMBOL(hinic3_glb_pf_vf_offset);
+
+u8 hinic3_ppf_idx(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.ppf_idx;
+}
+EXPORT_SYMBOL(hinic3_ppf_idx);
+
+u8 hinic3_host_ppf_idx(struct hinic3_hwdev *hwdev, u8 host_id)
+{
+ u32 ppf_elect_port_addr;
+ u32 val;
+
+ if (!hwdev)
+ return 0;
+
+ ppf_elect_port_addr = HINIC3_CSR_FUNC_PPF_ELECT(host_id);
+ val = hinic3_hwif_read_reg(hwdev->hwif, ppf_elect_port_addr);
+
+ return HINIC3_PPF_ELECT_PORT_GET(val, IDX);
+}
+
+u32 hinic3_get_self_test_result(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hinic3_hwif_read_reg(hwif, HINIC3_MGMT_HEALTH_STATUS_ADDR);
+}
+
+void hinic3_show_chip_err_info(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_hwif *hwif = hwdev->hwif;
+ u32 value;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return;
+
+ value = hinic3_hwif_read_reg(hwif, HINIC3_CHIP_BASE_INFO_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Chip base info: 0x%08x\n", value);
+
+ value = hinic3_hwif_read_reg(hwif, HINIC3_MGMT_HEALTH_STATUS_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Mgmt CPU health status: 0x%08x\n", value);
+
+ value = hinic3_hwif_read_reg(hwif, HINIC3_CHIP_ERR_STATUS0_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Chip fatal error status0: 0x%08x\n", value);
+ value = hinic3_hwif_read_reg(hwif, HINIC3_CHIP_ERR_STATUS1_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Chip fatal error status1: 0x%08x\n", value);
+
+ value = hinic3_hwif_read_reg(hwif, HINIC3_ERR_INFO0_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Chip exception info0: 0x%08x\n", value);
+ value = hinic3_hwif_read_reg(hwif, HINIC3_ERR_INFO1_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Chip exception info1: 0x%08x\n", value);
+ value = hinic3_hwif_read_reg(hwif, HINIC3_ERR_INFO2_ADDR);
+ sdk_warn(hwdev->dev_hdl, "Chip exception info2: 0x%08x\n", value);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.h
new file mode 100644
index 000000000000..b204b213c43f
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_hwif.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_HWIF_H
+#define HINIC3_HWIF_H
+
+#include "hinic3_hwdev.h"
+
+#define HINIC3_PCIE_LINK_DOWN 0xFFFFFFFF
+
+struct hinic3_free_db_area {
+ unsigned long *db_bitmap_array;
+ u32 db_max_areas;
+ /* spinlock for allocating doorbell area */
+ spinlock_t idx_lock;
+};
+
+struct hinic3_func_attr {
+ u16 func_global_idx;
+ u8 port_to_port_idx;
+ u8 pci_intf_idx;
+ u8 vf_in_pf;
+ u8 rsvd1;
+ u16 rsvd2;
+ enum func_type func_type;
+
+ u8 mpf_idx;
+
+ u8 ppf_idx;
+
+ u16 num_irqs; /* max: 2 ^ 15 */
+ u8 num_aeqs; /* max: 2 ^ 3 */
+ u8 num_ceqs; /* max: 2 ^ 7 */
+
+ u16 num_sq; /* max: 2 ^ 8 */
+ u8 num_dma_attr; /* max: 2 ^ 6 */
+ u8 msix_flex_en;
+
+ u16 global_vf_id_of_pf;
+};
+
+struct hinic3_hwif {
+ u8 __iomem *cfg_regs_base;
+ u8 __iomem *intr_regs_base;
+ u8 __iomem *mgmt_regs_base;
+ u64 db_base_phy;
+ u64 db_dwqe_len;
+ u8 __iomem *db_base;
+
+ struct hinic3_free_db_area free_db_area;
+
+ struct hinic3_func_attr attr;
+
+ void *pdev;
+ u64 rsvd;
+};
+
+enum hinic3_outbound_ctrl {
+ ENABLE_OUTBOUND = 0x0,
+ DISABLE_OUTBOUND = 0x1,
+};
+
+enum hinic3_doorbell_ctrl {
+ ENABLE_DOORBELL = 0x0,
+ DISABLE_DOORBELL = 0x1,
+};
+
+enum hinic3_pf_status {
+ HINIC3_PF_STATUS_INIT = 0x0,
+ HINIC3_PF_STATUS_ACTIVE_FLAG = 0x11,
+ HINIC3_PF_STATUS_FLR_START_FLAG = 0x12,
+ HINIC3_PF_STATUS_FLR_FINISH_FLAG = 0x13,
+};
+
+#define HINIC3_HWIF_NUM_AEQS(hwif) ((hwif)->attr.num_aeqs)
+#define HINIC3_HWIF_NUM_CEQS(hwif) ((hwif)->attr.num_ceqs)
+#define HINIC3_HWIF_NUM_IRQS(hwif) ((hwif)->attr.num_irqs)
+#define HINIC3_HWIF_GLOBAL_IDX(hwif) ((hwif)->attr.func_global_idx)
+#define HINIC3_HWIF_GLOBAL_VF_OFFSET(hwif) ((hwif)->attr.global_vf_id_of_pf)
+#define HINIC3_HWIF_PPF_IDX(hwif) ((hwif)->attr.ppf_idx)
+#define HINIC3_PCI_INTF_IDX(hwif) ((hwif)->attr.pci_intf_idx)
+
+#define HINIC3_FUNC_TYPE(dev) ((dev)->hwif->attr.func_type)
+#define HINIC3_IS_PF(dev) (HINIC3_FUNC_TYPE(dev) == TYPE_PF)
+#define HINIC3_IS_VF(dev) (HINIC3_FUNC_TYPE(dev) == TYPE_VF)
+#define HINIC3_IS_PPF(dev) (HINIC3_FUNC_TYPE(dev) == TYPE_PPF)
+
+u32 hinic3_hwif_read_reg(struct hinic3_hwif *hwif, u32 reg);
+
+void hinic3_hwif_write_reg(struct hinic3_hwif *hwif, u32 reg, u32 val);
+
+void hinic3_set_pf_status(struct hinic3_hwif *hwif,
+ enum hinic3_pf_status status);
+
+enum hinic3_pf_status hinic3_get_pf_status(struct hinic3_hwif *hwif);
+
+void hinic3_disable_doorbell(struct hinic3_hwif *hwif);
+
+void hinic3_enable_doorbell(struct hinic3_hwif *hwif);
+
+int hinic3_init_hwif(struct hinic3_hwdev *hwdev, void *cfg_reg_base,
+ void *intr_reg_base, void *mgmt_regs_base, u64 db_base_phy,
+ void *db_base, u64 db_dwqe_len);
+
+void hinic3_free_hwif(struct hinic3_hwdev *hwdev);
+
+void hinic3_show_chip_err_info(struct hinic3_hwdev *hwdev);
+
+u8 hinic3_host_ppf_idx(struct hinic3_hwdev *hwdev, u8 host_id);
+
+bool get_card_present_state(struct hinic3_hwdev *hwdev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_lld.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_lld.c
new file mode 100644
index 000000000000..6c0ddfeaa424
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_lld.c
@@ -0,0 +1,1410 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <net/addrconf.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/io-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/inetdevice.h>
+#include <linux/time.h>
+#include <linux/timex.h>
+#include <linux/rtc.h>
+#include <linux/aer.h>
+#include <linux/debugfs.h>
+
+#include "ossl_knl.h"
+#include "hinic3_mt.h"
+#include "hinic3_common.h"
+#include "hinic3_crm.h"
+#include "hinic3_pci_id_tbl.h"
+#include "hinic3_sriov.h"
+#include "hinic3_dev_mgmt.h"
+#include "hinic3_nictool.h"
+#include "hinic3_hw.h"
+#include "hinic3_lld.h"
+
+#include "hinic3_profile.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_prof_adap.h"
+#include "comm_msg_intf.h"
+
+static bool disable_vf_load;
+module_param(disable_vf_load, bool, 0444);
+MODULE_PARM_DESC(disable_vf_load,
+ "Disable probing of virtual functions or not - default is false");
+
+static bool disable_attach;
+module_param(disable_attach, bool, 0444);
+MODULE_PARM_DESC(disable_attach, "Disable attaching upper-layer drivers or not - default is false");
+
+#define HINIC3_WAIT_SRIOV_CFG_TIMEOUT 15000
+
+MODULE_AUTHOR("Huawei Technologies CO., Ltd");
+MODULE_DESCRIPTION(HINIC3_DRV_DESC);
+MODULE_VERSION(HINIC3_DRV_VERSION);
+MODULE_LICENSE("GPL");
+
+#if !(defined(HAVE_SRIOV_CONFIGURE) || defined(HAVE_RHEL6_SRIOV_CONFIGURE))
+static DEVICE_ATTR(sriov_numvfs, 0664,
+ hinic3_sriov_numvfs_show, hinic3_sriov_numvfs_store);
+static DEVICE_ATTR(sriov_totalvfs, 0444,
+ hinic3_sriov_totalvfs_show, NULL);
+#endif /* !(HAVE_SRIOV_CONFIGURE || HAVE_RHEL6_SRIOV_CONFIGURE) */
+
+static struct attribute *hinic3_attributes[] = {
+#if !(defined(HAVE_SRIOV_CONFIGURE) || defined(HAVE_RHEL6_SRIOV_CONFIGURE))
+ &dev_attr_sriov_numvfs.attr,
+ &dev_attr_sriov_totalvfs.attr,
+#endif /* !(HAVE_SRIOV_CONFIGURE || HAVE_RHEL6_SRIOV_CONFIGURE) */
+ NULL
+};
+
+static const struct attribute_group hinic3_attr_group = {
+ .attrs = hinic3_attributes,
+};
+
+struct hinic3_uld_info g_uld_info[SERVICE_T_MAX] = { {0} };
+
+#define HINIC3_EVENT_PROCESS_TIMEOUT 10000
+struct mutex g_uld_mutex;
+
+void hinic3_uld_lock_init(void)
+{
+ mutex_init(&g_uld_mutex);
+}
+
+static const char *s_uld_name[SERVICE_T_MAX] = {
+ "nic", "ovs", "roce", "toe", "ioe",
+ "fc", "vbs", "ipsec", "virtio", "migrate", "ppa", "custom"};
+
+const char **hinic3_get_uld_names(void)
+{
+ return s_uld_name;
+}
+
+static int attach_uld(struct hinic3_pcidev *dev, enum hinic3_service_type type,
+ const struct hinic3_uld_info *uld_info)
+{
+ void *uld_dev = NULL;
+ int err;
+
+ mutex_lock(&dev->pdev_mutex);
+
+ if (dev->uld_dev[type]) {
+ sdk_err(&dev->pcidev->dev,
+ "%s driver has attached to pcie device\n",
+ s_uld_name[type]);
+ err = 0;
+ goto out_unlock;
+ }
+
+ atomic_set(&dev->uld_ref_cnt[type], 0);
+
+ err = uld_info->probe(&dev->lld_dev, &uld_dev, dev->uld_dev_name[type]);
+ if (err) {
+ sdk_err(&dev->pcidev->dev,
+ "Failed to add object for %s driver to pcie device\n",
+ s_uld_name[type]);
+ goto probe_failed;
+ }
+
+ dev->uld_dev[type] = uld_dev;
+ set_bit(type, &dev->uld_state);
+ mutex_unlock(&dev->pdev_mutex);
+
+ sdk_info(&dev->pcidev->dev,
+ "Attach %s driver to pcie device succeed\n", s_uld_name[type]);
+ return 0;
+
+probe_failed:
+out_unlock:
+ mutex_unlock(&dev->pdev_mutex);
+
+ return err;
+}
+
+static void wait_uld_unused(struct hinic3_pcidev *dev, enum hinic3_service_type type)
+{
+ u32 loop_cnt = 0;
+
+ while (atomic_read(&dev->uld_ref_cnt[type])) {
+ loop_cnt++;
+ if (loop_cnt % PRINT_ULD_DETACH_TIMEOUT_INTERVAL == 0)
+ sdk_err(&dev->pcidev->dev, "Wait for uld unused for %lds, reference count: %d\n",
+ loop_cnt / MSEC_PER_SEC, atomic_read(&dev->uld_ref_cnt[type]));
+
+ usleep_range(ULD_LOCK_MIN_USLEEP_TIME, ULD_LOCK_MAX_USLEEP_TIME);
+ }
+}
+
+static void detach_uld(struct hinic3_pcidev *dev,
+ enum hinic3_service_type type)
+{
+ struct hinic3_uld_info *uld_info = &g_uld_info[type];
+ unsigned long end;
+ bool timeout = true;
+
+ mutex_lock(&dev->pdev_mutex);
+ if (!dev->uld_dev[type]) {
+ mutex_unlock(&dev->pdev_mutex);
+ return;
+ }
+
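+ /* Try to grab the per-service state bit (the same bit tested by
+ * send_uld_dev_event()) so that no event callback can run
+ * concurrently with the removal; give in-flight handlers time
+ * to finish before forcing the detach.
+ */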
+ end = jiffies + msecs_to_jiffies(HINIC3_EVENT_PROCESS_TIMEOUT);
+ do {
+ if (!test_and_set_bit(type, &dev->state)) {
+ timeout = false;
+ break;
+ }
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+ } while (time_before(jiffies, end));
+
+ if (timeout && !test_and_set_bit(type, &dev->state))
+ timeout = false;
+
+ spin_lock_bh(&dev->uld_lock);
+ clear_bit(type, &dev->uld_state);
+ spin_unlock_bh(&dev->uld_lock);
+
+ wait_uld_unused(dev, type);
+
+ uld_info->remove(&dev->lld_dev, dev->uld_dev[type]);
+
+ dev->uld_dev[type] = NULL;
+ if (!timeout)
+ clear_bit(type, &dev->state);
+
+ sdk_info(&dev->pcidev->dev,
+ "Detach %s driver from pcie device succeed\n",
+ s_uld_name[type]);
+ mutex_unlock(&dev->pdev_mutex);
+}
+
+static void attach_ulds(struct hinic3_pcidev *dev)
+{
+ enum hinic3_service_type type;
+ struct pci_dev *pdev = dev->pcidev;
+
+ lld_hold();
+ mutex_lock(&g_uld_mutex);
+
+ for (type = SERVICE_T_NIC; type < SERVICE_T_MAX; type++) {
+ if (g_uld_info[type].probe) {
+ if (pdev->is_virtfn &&
+ (!hinic3_get_vf_service_load(pdev, (u16)type))) {
+ sdk_info(&pdev->dev, "VF device disable service_type = %d load in host\n",
+ type);
+ continue;
+ }
+ attach_uld(dev, type, &g_uld_info[type]);
+ }
+ }
+ mutex_unlock(&g_uld_mutex);
+ lld_put();
+}
+
+static void detach_ulds(struct hinic3_pcidev *dev)
+{
+ enum hinic3_service_type type;
+
+ lld_hold();
+ mutex_lock(&g_uld_mutex);
+ for (type = SERVICE_T_MAX - 1; type > SERVICE_T_NIC; type--) {
+ if (g_uld_info[type].probe)
+ detach_uld(dev, type);
+ }
+
+ if (g_uld_info[SERVICE_T_NIC].probe)
+ detach_uld(dev, SERVICE_T_NIC);
+ mutex_unlock(&g_uld_mutex);
+ lld_put();
+}
+
+int hinic3_register_uld(enum hinic3_service_type type,
+ struct hinic3_uld_info *uld_info)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+ struct list_head *chip_list = NULL;
+
+ if (type >= SERVICE_T_MAX) {
+ pr_err("Unknown type %d of up layer driver to register\n",
+ type);
+ return -EINVAL;
+ }
+
+ if (!uld_info || !uld_info->probe || !uld_info->remove) {
+ pr_err("Invalid information of %s driver to register\n",
+ s_uld_name[type]);
+ return -EINVAL;
+ }
+
+ lld_hold();
+ mutex_lock(&g_uld_mutex);
+
+ if (g_uld_info[type].probe) {
+ pr_err("%s driver has registered\n", s_uld_name[type]);
+ mutex_unlock(&g_uld_mutex);
+ lld_put();
+ return -EINVAL;
+ }
+
+ chip_list = get_hinic3_chip_list();
+ memcpy(&g_uld_info[type], uld_info, sizeof(*uld_info));
+ list_for_each_entry(chip_node, chip_list, node) {
+ list_for_each_entry(dev, &chip_node->func_list, node) {
+ if (attach_uld(dev, type, uld_info)) {
+ sdk_err(&dev->pcidev->dev,
+ "Attach %s driver to pcie device failed\n",
+ s_uld_name[type]);
+#ifdef CONFIG_MODULE_PROF
+ hinic3_probe_fault_process(dev->pcidev, FAULT_LEVEL_HOST);
+ break;
+#else
+ continue;
+#endif
+ }
+ }
+ }
+
+ mutex_unlock(&g_uld_mutex);
+ lld_put();
+
+ pr_info("Register %s driver succeed\n", s_uld_name[type]);
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_register_uld);
+
+void hinic3_unregister_uld(enum hinic3_service_type type)
+{
+ struct card_node *chip_node = NULL;
+ struct hinic3_pcidev *dev = NULL;
+ struct hinic3_uld_info *uld_info = NULL;
+ struct list_head *chip_list = NULL;
+
+ if (type >= SERVICE_T_MAX) {
+ pr_err("Unknown type %d of up layer driver to unregister\n",
+ type);
+ return;
+ }
+
+ lld_hold();
+ mutex_lock(&g_uld_mutex);
+ chip_list = get_hinic3_chip_list();
+ list_for_each_entry(chip_node, chip_list, node) {
+ /* detach vf first */
+ list_for_each_entry(dev, &chip_node->func_list, node)
+ if (hinic3_func_type(dev->hwdev) == TYPE_VF)
+ detach_uld(dev, type);
+
+ list_for_each_entry(dev, &chip_node->func_list, node)
+ if (hinic3_func_type(dev->hwdev) == TYPE_PF)
+ detach_uld(dev, type);
+
+ list_for_each_entry(dev, &chip_node->func_list, node)
+ if (hinic3_func_type(dev->hwdev) == TYPE_PPF)
+ detach_uld(dev, type);
+ }
+
+ uld_info = &g_uld_info[type];
+ memset(uld_info, 0, sizeof(*uld_info));
+ mutex_unlock(&g_uld_mutex);
+ lld_put();
+}
+EXPORT_SYMBOL(hinic3_unregister_uld);
+
+int hinic3_attach_nic(struct hinic3_lld_dev *lld_dev)
+{
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!lld_dev)
+ return -EINVAL;
+
+ dev = container_of(lld_dev, struct hinic3_pcidev, lld_dev);
+ return attach_uld(dev, SERVICE_T_NIC, &g_uld_info[SERVICE_T_NIC]);
+}
+EXPORT_SYMBOL(hinic3_attach_nic);
+
+void hinic3_detach_nic(const struct hinic3_lld_dev *lld_dev)
+{
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!lld_dev)
+ return;
+
+ dev = container_of(lld_dev, struct hinic3_pcidev, lld_dev);
+ detach_uld(dev, SERVICE_T_NIC);
+}
+EXPORT_SYMBOL(hinic3_detach_nic);
+
+int hinic3_attach_service(const struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type)
+{
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!lld_dev || type >= SERVICE_T_MAX)
+ return -EINVAL;
+
+ dev = container_of(lld_dev, struct hinic3_pcidev, lld_dev);
+ return attach_uld(dev, type, &g_uld_info[type]);
+}
+EXPORT_SYMBOL(hinic3_attach_service);
+
+void hinic3_detach_service(const struct hinic3_lld_dev *lld_dev, enum hinic3_service_type type)
+{
+ struct hinic3_pcidev *dev = NULL;
+
+ if (!lld_dev || type >= SERVICE_T_MAX)
+ return;
+
+ dev = container_of(lld_dev, struct hinic3_pcidev, lld_dev);
+ detach_uld(dev, type);
+}
+EXPORT_SYMBOL(hinic3_detach_service);
+
+static void hinic3_sync_time_to_fmw(struct hinic3_pcidev *pdev_pri)
+{
+ struct timeval tv = {0};
+ struct rtc_time rt_time = {0};
+ u64 tv_msec;
+ int err;
+
+ do_gettimeofday(&tv);
+
+ tv_msec = (u64)(tv.tv_sec * MSEC_PER_SEC + tv.tv_usec / USEC_PER_MSEC);
+ err = hinic3_sync_time(pdev_pri->hwdev, tv_msec);
+ if (err) {
+ sdk_err(&pdev_pri->pcidev->dev, "Synchronize UTC time to firmware failed, errno:%d.\n",
+ err);
+ } else {
+ rtc_time_to_tm((unsigned long)(tv.tv_sec), &rt_time);
+ sdk_info(&pdev_pri->pcidev->dev, "Synchronize UTC time to firmware succeed. UTC time %d-%02d-%02d %02d:%02d:%02d.\n",
+ rt_time.tm_year + HINIC3_SYNC_YEAR_OFFSET,
+ rt_time.tm_mon + HINIC3_SYNC_MONTH_OFFSET,
+ rt_time.tm_mday, rt_time.tm_hour,
+ rt_time.tm_min, rt_time.tm_sec);
+ }
+}
+
+static void send_uld_dev_event(struct hinic3_pcidev *dev,
+ struct hinic3_event_info *event)
+{
+ enum hinic3_service_type type;
+
+ for (type = SERVICE_T_NIC; type < SERVICE_T_MAX; type++) {
+ if (test_and_set_bit(type, &dev->state)) {
+ sdk_warn(&dev->pcidev->dev, "Svc: 0x%x, event: 0x%x can't handler, %s is in detach\n",
+ event->service, event->type, s_uld_name[type]);
+ continue;
+ }
+
+ if (g_uld_info[type].event)
+ g_uld_info[type].event(&dev->lld_dev,
+ dev->uld_dev[type], event);
+ clear_bit(type, &dev->state);
+ }
+}
+
+static void send_event_to_dst_pf(struct hinic3_pcidev *dev, u16 func_id,
+ struct hinic3_event_info *event)
+{
+ struct hinic3_pcidev *des_dev = NULL;
+
+ lld_hold();
+ list_for_each_entry(des_dev, &dev->chip_node->func_list, node) {
+ if (dev->lld_state == HINIC3_IN_REMOVE)
+ continue;
+
+ if (hinic3_func_type(des_dev->hwdev) == TYPE_VF)
+ continue;
+
+ if (hinic3_global_func_id(des_dev->hwdev) == func_id) {
+ send_uld_dev_event(des_dev, event);
+ break;
+ }
+ }
+ lld_put();
+}
+
+static void send_event_to_all_pf(struct hinic3_pcidev *dev,
+ struct hinic3_event_info *event)
+{
+ struct hinic3_pcidev *des_dev = NULL;
+
+ lld_hold();
+ list_for_each_entry(des_dev, &dev->chip_node->func_list, node) {
+ if (dev->lld_state == HINIC3_IN_REMOVE)
+ continue;
+
+ if (hinic3_func_type(des_dev->hwdev) == TYPE_VF)
+ continue;
+
+ send_uld_dev_event(des_dev, event);
+ }
+ lld_put();
+}
+
+static void hinic3_event_process(void *adapter, struct hinic3_event_info *event)
+{
+ struct hinic3_pcidev *dev = adapter;
+ struct hinic3_fault_event *fault = (void *)event->event_data;
+ u16 func_id;
+
+ if ((event->service == EVENT_SRV_COMM && event->type == EVENT_COMM_FAULT) &&
+ fault->fault_level == FAULT_LEVEL_SERIOUS_FLR &&
+ fault->event.chip.func_id < hinic3_max_pf_num(dev->hwdev)) {
+ func_id = fault->event.chip.func_id;
+ return send_event_to_dst_pf(adapter, func_id, event);
+ }
+
+ if (event->type == EVENT_COMM_MGMT_WATCHDOG)
+ send_event_to_all_pf(adapter, event);
+ else
+ send_uld_dev_event(adapter, event);
+}
+
+static void uld_def_init(struct hinic3_pcidev *pci_adapter)
+{
+ int type;
+
+ for (type = 0; type < SERVICE_T_MAX; type++) {
+ atomic_set(&pci_adapter->uld_ref_cnt[type], 0);
+ clear_bit(type, &pci_adapter->uld_state);
+ }
+
+ spin_lock_init(&pci_adapter->uld_lock);
+}
+
+static int mapping_bar(struct pci_dev *pdev,
+ struct hinic3_pcidev *pci_adapter)
+{
+ int cfg_bar;
+
+ cfg_bar = HINIC3_IS_VF_DEV(pdev) ?
+ HINIC3_VF_PCI_CFG_REG_BAR : HINIC3_PF_PCI_CFG_REG_BAR;
+
+ pci_adapter->cfg_reg_base = pci_ioremap_bar(pdev, cfg_bar);
+ if (!pci_adapter->cfg_reg_base) {
+ sdk_err(&pdev->dev,
+ "Failed to map configuration regs\n");
+ return -ENOMEM;
+ }
+
+ pci_adapter->intr_reg_base = pci_ioremap_bar(pdev,
+ HINIC3_PCI_INTR_REG_BAR);
+ if (!pci_adapter->intr_reg_base) {
+ sdk_err(&pdev->dev,
+ "Failed to map interrupt regs\n");
+ goto map_intr_bar_err;
+ }
+
+ if (!HINIC3_IS_VF_DEV(pdev)) {
+ pci_adapter->mgmt_reg_base =
+ pci_ioremap_bar(pdev, HINIC3_PCI_MGMT_REG_BAR);
+ if (!pci_adapter->mgmt_reg_base) {
+ sdk_err(&pdev->dev,
+ "Failed to map mgmt regs\n");
+ goto map_mgmt_bar_err;
+ }
+ }
+
+ pci_adapter->db_base_phy = pci_resource_start(pdev, HINIC3_PCI_DB_BAR);
+ pci_adapter->db_dwqe_len = pci_resource_len(pdev, HINIC3_PCI_DB_BAR);
+ pci_adapter->db_base = pci_ioremap_bar(pdev, HINIC3_PCI_DB_BAR);
+ if (!pci_adapter->db_base) {
+ sdk_err(&pdev->dev,
+ "Failed to map doorbell regs\n");
+ goto map_db_err;
+ }
+
+ return 0;
+
+map_db_err:
+ if (!HINIC3_IS_VF_DEV(pdev))
+ iounmap(pci_adapter->mgmt_reg_base);
+
+map_mgmt_bar_err:
+ iounmap(pci_adapter->intr_reg_base);
+
+map_intr_bar_err:
+ iounmap(pci_adapter->cfg_reg_base);
+
+ return -ENOMEM;
+}
+
+static void unmapping_bar(struct hinic3_pcidev *pci_adapter)
+{
+ iounmap(pci_adapter->db_base);
+
+ if (!HINIC3_IS_VF_DEV(pci_adapter->pcidev))
+ iounmap(pci_adapter->mgmt_reg_base);
+
+ iounmap(pci_adapter->intr_reg_base);
+ iounmap(pci_adapter->cfg_reg_base);
+}
+
+static int hinic3_pci_init(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ int err;
+
+ pci_adapter = kzalloc(sizeof(*pci_adapter), GFP_KERNEL);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev,
+ "Failed to alloc pci device adapter\n");
+ return -ENOMEM;
+ }
+ pci_adapter->pcidev = pdev;
+ mutex_init(&pci_adapter->pdev_mutex);
+
+ pci_set_drvdata(pdev, pci_adapter);
+
+ err = pci_enable_device(pdev);
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to enable PCI device\n");
+ goto pci_enable_err;
+ }
+
+ err = pci_request_regions(pdev, HINIC3_DRV_NAME);
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to request regions\n");
+ goto pci_regions_err;
+ }
+
+ pci_enable_pcie_error_reporting(pdev);
+
+ pci_set_master(pdev);
+
+ err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64)); /* 64 bit DMA mask */
+ if (err) {
+ sdk_warn(&pdev->dev, "Couldn't set 64-bit DMA mask\n");
+ err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); /* 32 bit DMA mask */
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to set DMA mask\n");
+ goto dma_mask_err;
+ }
+ }
+
+ err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)); /* 64 bit DMA mask */
+ if (err) {
+ sdk_warn(&pdev->dev,
+ "Couldn't set 64-bit coherent DMA mask\n");
+ err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)); /* 32 bit DMA mask */
+ if (err) {
+ sdk_err(&pdev->dev,
+ "Failed to set coherent DMA mask\n");
+ goto dma_consistent_mask_err;
+ }
+ }
+
+ return 0;
+
+dma_consistent_mask_err:
+dma_mask_err:
+ pci_clear_master(pdev);
+ pci_disable_pcie_error_reporting(pdev);
+ pci_release_regions(pdev);
+
+pci_regions_err:
+ pci_disable_device(pdev);
+
+pci_enable_err:
+ pci_set_drvdata(pdev, NULL);
+ kfree(pci_adapter);
+
+ return err;
+}
+
+static void hinic3_pci_deinit(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ pci_clear_master(pdev);
+ pci_release_regions(pdev);
+ pci_disable_pcie_error_reporting(pdev);
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+ kfree(pci_adapter);
+}
+
+#ifdef CONFIG_X86
+/**
+ * cfg_order_reg - when cpu model is haswell or broadwell, should configure dma
+ * order register to zero
+ * @pci_adapter: pci_adapter
+ **/
+/*lint -save -e40 */
+static void cfg_order_reg(struct hinic3_pcidev *pci_adapter)
+{
+ u8 cpu_model[] = {0x3c, 0x3f, 0x45, 0x46, 0x3d, 0x47, 0x4f, 0x56};
+ struct cpuinfo_x86 *cpuinfo = NULL;
+ u32 i;
+
+ if (hinic3_func_type(pci_adapter->hwdev) == TYPE_VF)
+ return;
+
+ cpuinfo = &cpu_data(0);
+ for (i = 0; i < sizeof(cpu_model); i++) {
+ if (cpu_model[i] == cpuinfo->x86_model)
+ hinic3_set_pcie_order_cfg(pci_adapter->hwdev);
+ }
+}
+
+/*lint -restore*/
+#endif
+
+static int hinic3_func_init(struct pci_dev *pdev, struct hinic3_pcidev *pci_adapter)
+{
+ struct hinic3_init_para init_para = {0};
+ bool cqm_init_en = false;
+ int err;
+
+ init_para.adapter_hdl = pci_adapter;
+ init_para.pcidev_hdl = pdev;
+ init_para.dev_hdl = &pdev->dev;
+ init_para.cfg_reg_base = pci_adapter->cfg_reg_base;
+ init_para.intr_reg_base = pci_adapter->intr_reg_base;
+ init_para.mgmt_reg_base = pci_adapter->mgmt_reg_base;
+ init_para.db_base = pci_adapter->db_base;
+ init_para.db_base_phy = pci_adapter->db_base_phy;
+ init_para.db_dwqe_len = pci_adapter->db_dwqe_len;
+ init_para.hwdev = &pci_adapter->hwdev;
+ init_para.chip_node = pci_adapter->chip_node;
+ init_para.probe_fault_level = pci_adapter->probe_fault_level;
+ err = hinic3_init_hwdev(&init_para);
+ if (err) {
+ pci_adapter->hwdev = NULL;
+ pci_adapter->probe_fault_level = init_para.probe_fault_level;
+ sdk_err(&pdev->dev, "Failed to initialize hardware device\n");
+ return -EFAULT;
+ }
+
+ cqm_init_en = hinic3_need_init_stateful_default(pci_adapter->hwdev);
+ if (cqm_init_en) {
+ err = hinic3_stateful_init(pci_adapter->hwdev);
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to init stateful\n");
+ goto stateful_init_err;
+ }
+ }
+
+ pci_adapter->lld_dev.pdev = pdev;
+
+ pci_adapter->lld_dev.hwdev = pci_adapter->hwdev;
+ if (hinic3_func_type(pci_adapter->hwdev) != TYPE_VF)
+ set_bit(HINIC3_FUNC_PERSENT, &pci_adapter->sriov_info.state);
+
+ hinic3_event_register(pci_adapter->hwdev, pci_adapter,
+ hinic3_event_process);
+
+ if (hinic3_func_type(pci_adapter->hwdev) != TYPE_VF)
+ hinic3_sync_time_to_fmw(pci_adapter);
+
+ /* dbgtool init */
+ lld_lock_chip_node();
+ err = nictool_k_init(pci_adapter->hwdev, pci_adapter->chip_node);
+ if (err) {
+ lld_unlock_chip_node();
+ sdk_err(&pdev->dev, "Failed to initialize dbgtool\n");
+ goto nictool_init_err;
+ }
+ list_add_tail(&pci_adapter->node, &pci_adapter->chip_node->func_list);
+ lld_unlock_chip_node();
+
+ if (!disable_attach) {
+ attach_ulds(pci_adapter);
+
+ if (hinic3_func_type(pci_adapter->hwdev) != TYPE_VF) {
+ err = sysfs_create_group(&pdev->dev.kobj,
+ &hinic3_attr_group);
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to create sysfs group\n");
+ goto create_sysfs_err;
+ }
+ }
+
+#ifdef CONFIG_X86
+ cfg_order_reg(pci_adapter);
+#endif
+ }
+
+ return 0;
+
+create_sysfs_err:
+ detach_ulds(pci_adapter);
+
+ lld_lock_chip_node();
+ list_del(&pci_adapter->node);
+ lld_unlock_chip_node();
+
+ wait_lld_dev_unused(pci_adapter);
+
+ lld_lock_chip_node();
+ nictool_k_uninit(pci_adapter->hwdev, pci_adapter->chip_node);
+ lld_unlock_chip_node();
+
+nictool_init_err:
+ hinic3_event_unregister(pci_adapter->hwdev);
+ if (cqm_init_en)
+ hinic3_stateful_deinit(pci_adapter->hwdev);
+stateful_init_err:
+ hinic3_free_hwdev(pci_adapter->hwdev);
+
+ return err;
+}
+
+static void hinic3_func_deinit(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ /* When deinitializing the function, first disable events initiated
+ * by the mgmt cpu, then flush the mgmt work-queue.
+ */
+ hinic3_disable_mgmt_msg_report(pci_adapter->hwdev);
+
+ hinic3_flush_mgmt_workq(pci_adapter->hwdev);
+
+ lld_lock_chip_node();
+ list_del(&pci_adapter->node);
+ lld_unlock_chip_node();
+
+ detach_ulds(pci_adapter);
+
+ wait_lld_dev_unused(pci_adapter);
+
+ lld_lock_chip_node();
+ nictool_k_uninit(pci_adapter->hwdev, pci_adapter->chip_node);
+ lld_unlock_chip_node();
+
+ hinic3_event_unregister(pci_adapter->hwdev);
+
+ hinic3_free_stateful(pci_adapter->hwdev);
+
+ hinic3_free_hwdev(pci_adapter->hwdev);
+}
+
+static void wait_sriov_cfg_complete(struct hinic3_pcidev *pci_adapter)
+{
+ struct hinic3_sriov_info *sriov_info;
+ unsigned long end;
+
+ sriov_info = &pci_adapter->sriov_info;
+ clear_bit(HINIC3_FUNC_PERSENT, &sriov_info->state);
+ usleep_range(9900, 10000); /* sleep 9900 us ~ 10000 us */
+
+ end = jiffies + msecs_to_jiffies(HINIC3_WAIT_SRIOV_CFG_TIMEOUT);
+ do {
+ if (!test_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state) &&
+ !test_bit(HINIC3_SRIOV_DISABLE, &sriov_info->state))
+ return;
+
+ usleep_range(9900, 10000); /* sleep 9900 us ~ 10000 us */
+ } while (time_before(jiffies, end));
+}
+
+bool hinic3_get_vf_load_state(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ struct pci_dev *pf_pdev = NULL;
+
+ if (!pdev) {
+ pr_err("pdev is null.\n");
+ return false;
+ }
+
+ /* vf used in vm */
+ if (pci_is_root_bus(pdev->bus))
+ return false;
+
+ if (pdev->is_virtfn)
+ pf_pdev = pdev->physfn;
+ else
+ pf_pdev = pdev;
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev, "pci_adapter is null.\n");
+ return false;
+ }
+
+ return !pci_adapter->disable_vf_load;
+}
+
+int hinic3_set_vf_load_state(struct pci_dev *pdev, bool vf_load_state)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!pdev) {
+ pr_err("pdev is null.\n");
+ return -EINVAL;
+ }
+
+ pci_adapter = pci_get_drvdata(pdev);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev, "pci_adapter is null.\n");
+ return -EINVAL;
+ }
+
+ if (hinic3_func_type(pci_adapter->hwdev) == TYPE_VF)
+ return 0;
+
+ pci_adapter->disable_vf_load = !vf_load_state;
+ sdk_info(&pci_adapter->pcidev->dev, "Current function %s vf load in host\n",
+ vf_load_state ? "enable" : "disable");
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_vf_load_state);
+
+bool hinic3_get_vf_service_load(struct pci_dev *pdev, u16 service)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ struct pci_dev *pf_pdev = NULL;
+
+ if (!pdev) {
+ pr_err("pdev is null.\n");
+ return false;
+ }
+
+ if (pdev->is_virtfn)
+ pf_pdev = pdev->physfn;
+ else
+ pf_pdev = pdev;
+
+ pci_adapter = pci_get_drvdata(pf_pdev);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev, "pci_adapter is null.\n");
+ return false;
+ }
+
+ if (service >= SERVICE_T_MAX) {
+ sdk_err(&pdev->dev, "service_type = %u state is error\n",
+ service);
+ return false;
+ }
+
+ return !pci_adapter->disable_srv_load[service];
+}
+
+int hinic3_set_vf_service_load(struct pci_dev *pdev, u16 service,
+ bool vf_srv_load)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ if (!pdev) {
+ pr_err("pdev is null.\n");
+ return -EINVAL;
+ }
+
+ if (service >= SERVICE_T_MAX) {
+ sdk_err(&pdev->dev, "service_type = %u state is error\n",
+ service);
+ return -EFAULT;
+ }
+
+ pci_adapter = pci_get_drvdata(pdev);
+ if (!pci_adapter) {
+ sdk_err(&pdev->dev, "pci_adapter is null.\n");
+ return -EINVAL;
+ }
+
+ if (hinic3_func_type(pci_adapter->hwdev) == TYPE_VF)
+ return 0;
+
+ pci_adapter->disable_srv_load[service] = !vf_srv_load;
+ sdk_info(&pci_adapter->pcidev->dev, "Current function %s vf load in host\n",
+ vf_srv_load ? "enable" : "disable");
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_set_vf_service_load);
+
+static int hinic3_remove_func(struct hinic3_pcidev *pci_adapter)
+{
+ struct pci_dev *pdev = pci_adapter->pcidev;
+
+ mutex_lock(&pci_adapter->pdev_mutex);
+ if (pci_adapter->lld_state != HINIC3_PROBE_OK) {
+ sdk_warn(&pdev->dev, "Current function don not need remove\n");
+ mutex_unlock(&pci_adapter->pdev_mutex);
+ return 0;
+ }
+ pci_adapter->lld_state = HINIC3_IN_REMOVE;
+ mutex_unlock(&pci_adapter->pdev_mutex);
+
+ hinic3_detect_hw_present(pci_adapter->hwdev);
+
+ hisdk3_remove_pre_process(pci_adapter->hwdev);
+
+ if (hinic3_func_type(pci_adapter->hwdev) != TYPE_VF) {
+ sysfs_remove_group(&pdev->dev.kobj, &hinic3_attr_group);
+ wait_sriov_cfg_complete(pci_adapter);
+ hinic3_pci_sriov_disable(pdev);
+ }
+
+ hinic3_func_deinit(pdev);
+
+ lld_lock_chip_node();
+ free_chip_node(pci_adapter);
+ lld_unlock_chip_node();
+
+ unmapping_bar(pci_adapter);
+
+ mutex_lock(&pci_adapter->pdev_mutex);
+ pci_adapter->lld_state = HINIC3_NOT_PROBE;
+ mutex_unlock(&pci_adapter->pdev_mutex);
+
+ sdk_info(&pdev->dev, "Pcie device removed function\n");
+
+ return 0;
+}
+
+static void hinic3_remove(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ if (!pci_adapter)
+ return;
+
+ sdk_info(&pdev->dev, "Pcie device remove begin\n");
+
+ hinic3_remove_func(pci_adapter);
+
+ hinic3_pci_deinit(pdev);
+ hinic3_probe_pre_unprocess(pdev);
+
+ sdk_info(&pdev->dev, "Pcie device removed\n");
+}
+
+static int probe_func_param_init(struct hinic3_pcidev *pci_adapter)
+{
+ struct pci_dev *pdev = NULL;
+
+ if (!pci_adapter)
+ return -EFAULT;
+
+ pdev = pci_adapter->pcidev;
+ if (!pdev)
+ return -EFAULT;
+
+ mutex_lock(&pci_adapter->pdev_mutex);
+ if (pci_adapter->lld_state >= HINIC3_PROBE_START) {
+ sdk_warn(&pdev->dev, "Don not probe repeat\n");
+ mutex_unlock(&pci_adapter->pdev_mutex);
+ return 0;
+ }
+ pci_adapter->lld_state = HINIC3_PROBE_START;
+ mutex_unlock(&pci_adapter->pdev_mutex);
+
+ return 0;
+}
+
+static int hinic3_probe_func(struct hinic3_pcidev *pci_adapter)
+{
+ struct pci_dev *pdev = pci_adapter->pcidev;
+ int err;
+
+ err = probe_func_param_init(pci_adapter);
+ if (err)
+ return err;
+
+ err = mapping_bar(pdev, pci_adapter);
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to map bar\n");
+ goto map_bar_failed;
+ }
+
+ uld_def_init(pci_adapter);
+
+ /* if chip information for this pcie function exists, add the function to that chip node */
+ lld_lock_chip_node();
+ err = alloc_chip_node(pci_adapter);
+ if (err) {
+ lld_unlock_chip_node();
+ sdk_err(&pdev->dev, "Failed to add new chip node to global list\n");
+ goto alloc_chip_node_fail;
+ }
+ lld_unlock_chip_node();
+
+ err = hinic3_func_init(pdev, pci_adapter);
+ if (err)
+ goto func_init_err;
+
+ if (hinic3_func_type(pci_adapter->hwdev) != TYPE_VF) {
+ err = hinic3_set_bdf_ctxt(pci_adapter->hwdev, pdev->bus->number,
+ PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
+ if (err) {
+ sdk_err(&pdev->dev, "Failed to set BDF info to MPU\n");
+ goto set_bdf_err;
+ }
+ }
+
+ hinic3_probe_success(pci_adapter->hwdev);
+
+ mutex_lock(&pci_adapter->pdev_mutex);
+ pci_adapter->lld_state = HINIC3_PROBE_OK;
+ mutex_unlock(&pci_adapter->pdev_mutex);
+
+ return 0;
+
+set_bdf_err:
+ hinic3_func_deinit(pdev);
+
+func_init_err:
+ lld_lock_chip_node();
+ free_chip_node(pci_adapter);
+ lld_unlock_chip_node();
+
+alloc_chip_node_fail:
+ unmapping_bar(pci_adapter);
+
+map_bar_failed:
+ sdk_err(&pdev->dev, "Pcie device probe function failed\n");
+ return err;
+}
+
+static int hinic3_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+ u16 probe_fault_level = FAULT_LEVEL_SERIOUS_FLR;
+ int err;
+
+ sdk_info(&pdev->dev, "Pcie device probe begin\n");
+
+ err = hinic3_probe_pre_process(pdev);
+ if (err != 0 && err != HINIC3_NOT_PROBE)
+ goto out;
+
+ if (err == HINIC3_NOT_PROBE)
+ return 0;
+
+ err = hinic3_pci_init(pdev);
+ if (err)
+ goto pci_init_err;
+
+ pci_adapter = pci_get_drvdata(pdev);
+ pci_adapter->disable_vf_load = disable_vf_load;
+ pci_adapter->id = *id;
+ pci_adapter->lld_state = HINIC3_NOT_PROBE;
+ pci_adapter->probe_fault_level = probe_fault_level;
+ lld_dev_cnt_init(pci_adapter);
+
+ if (pdev->is_virtfn && (!hinic3_get_vf_load_state(pdev))) {
+ sdk_info(&pdev->dev, "VF device disable load in host\n");
+ return 0;
+ }
+
+ err = hinic3_probe_func(pci_adapter);
+ if (err)
+ goto hinic3_probe_func_fail;
+
+ sdk_info(&pdev->dev, "Pcie device probed\n");
+ return 0;
+
+hinic3_probe_func_fail:
+ probe_fault_level = pci_adapter->probe_fault_level;
+ hinic3_pci_deinit(pdev);
+
+pci_init_err:
+ hinic3_probe_pre_unprocess(pdev);
+
+out:
+ hinic3_probe_fault_process(pdev, probe_fault_level);
+ sdk_err(&pdev->dev, "Pcie device probe failed\n");
+ return err;
+}
+
+static int hinic3_get_pf_info(struct pci_dev *pdev, u16 service,
+ struct hinic3_hw_pf_infos **pf_infos)
+{
+ struct hinic3_pcidev *dev = pci_get_drvdata(pdev);
+ int err;
+
+ if (service >= SERVICE_T_MAX) {
+ sdk_err(&pdev->dev, "Current vf do not supports set service_type = %u state in host\n",
+ service);
+ return -EFAULT;
+ }
+
+ *pf_infos = kzalloc(sizeof(struct hinic3_hw_pf_infos), GFP_KERNEL);
+ if (!(*pf_infos))
+ return -ENOMEM;
+
+ err = hinic3_get_hw_pf_infos(dev->hwdev, *pf_infos, HINIC3_CHANNEL_COMM);
+ if (err) {
+ kfree(*pf_infos);
+ sdk_err(&pdev->dev, "Get chipf pf info failed, ret %d\n", err);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int hinic3_set_func_en(struct pci_dev *des_pdev, struct hinic3_pcidev *dst_dev,
+ bool en, u16 vf_func_id)
+{
+ int err;
+
+ /* unload invalid vf func id */
+ if (!en && vf_func_id != hinic3_global_func_id(dst_dev->hwdev) &&
+ !strcmp(des_pdev->driver->name, HINIC3_DRV_NAME)) {
+ pr_err("dst_dev func id:%u, vf_func_id:%u\n",
+ hinic3_global_func_id(dst_dev->hwdev), vf_func_id);
+ mutex_unlock(&dst_dev->pdev_mutex);
+ return -EFAULT;
+ }
+
+ if (!en && dst_dev->lld_state == HINIC3_PROBE_OK) {
+ mutex_unlock(&dst_dev->pdev_mutex);
+ hinic3_remove_func(dst_dev);
+ } else if (en && dst_dev->lld_state == HINIC3_NOT_PROBE) {
+ mutex_unlock(&dst_dev->pdev_mutex);
+ err = hinic3_probe_func(dst_dev);
+ if (err)
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int get_vf_service_state_param(struct pci_dev *pdev, struct hinic3_pcidev **dev_ptr,
+ u16 service, struct hinic3_hw_pf_infos **pf_infos)
+{
+ int err;
+
+ if (!pdev)
+ return -EINVAL;
+
+ *dev_ptr = pci_get_drvdata(pdev);
+ if (!(*dev_ptr))
+ return -EINVAL;
+
+ err = hinic3_get_pf_info(pdev, service, pf_infos);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+#define BUS_MAX_DEV_NUM 256
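+/* A single PCI bus number addresses at most 256 devfn slots, so a VF whose
+ * devfn offset overflows 255 lands on a subsequent bus number.
+ */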
+static int hinic3_dst_pdev_valid(struct hinic3_pcidev *dst_dev, struct pci_dev **des_pdev_ptr,
+ u16 vf_devfn, bool en)
+{
+ u16 bus;
+
+ bus = dst_dev->pcidev->bus->number + vf_devfn / BUS_MAX_DEV_NUM;
+ *des_pdev_ptr = pci_get_domain_bus_and_slot(pci_domain_nr(dst_dev->pcidev->bus),
+ bus, vf_devfn % BUS_MAX_DEV_NUM);
+ if (!(*des_pdev_ptr)) {
+ pr_err("des_pdev is NULL\n");
+ return -EFAULT;
+ }
+
+ if ((*des_pdev_ptr)->driver == NULL) {
+ pr_err("des_pdev_ptr->driver is NULL\n");
+ return -EFAULT;
+ }
+
+ /* In the OVS sriov hw scenario, return an error when the vf is bound to vf_io. */
+ if ((!en && strcmp((*des_pdev_ptr)->driver->name, HINIC3_DRV_NAME))) {
+ pr_err("vf bind driver:%s\n", (*des_pdev_ptr)->driver->name);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int parameter_is_unexpected(struct hinic3_pcidev *dst_dev, u16 *func_id, u16 *vf_start,
+ u16 *vf_end, u16 vf_func_id)
+{
+ if (hinic3_func_type(dst_dev->hwdev) == TYPE_VF)
+ return -EPERM;
+
+ *func_id = hinic3_global_func_id(dst_dev->hwdev);
+ *vf_start = hinic3_glb_pf_vf_offset(dst_dev->hwdev) + 1;
+ *vf_end = *vf_start + hinic3_func_max_vf(dst_dev->hwdev);
+ if (vf_func_id < *vf_start || vf_func_id > *vf_end)
+ return -EPERM;
+
+ return 0;
+}
+
+int hinic3_set_vf_service_state(struct pci_dev *pdev, u16 vf_func_id, u16 service, bool en)
+{
+ struct hinic3_hw_pf_infos *pf_infos = NULL;
+ struct hinic3_pcidev *dev = NULL, *dst_dev = NULL;
+ struct pci_dev *des_pdev = NULL;
+ u16 vf_start, vf_end, vf_devfn, func_id;
+ int err;
+ bool find_dst_dev = false;
+
+ err = get_vf_service_state_param(pdev, &dev, service, &pf_infos);
+ if (err)
+ return err;
+
+ lld_hold();
+ list_for_each_entry(dst_dev, &dev->chip_node->func_list, node) {
+ if (parameter_is_unexpected(dst_dev, &func_id, &vf_start, &vf_end, vf_func_id))
+ continue;
+
+ vf_devfn = pf_infos->infos[func_id].vf_offset + (vf_func_id - vf_start) +
+ (u16)dst_dev->pcidev->devfn;
+ err = hinic3_dst_pdev_valid(dst_dev, &des_pdev, vf_devfn, en);
+ if (err) {
+ sdk_err(&pdev->dev, "Can not get vf func_id %u from pf %u\n",
+ vf_func_id, func_id);
+ lld_put();
+ goto free_pf_info;
+ }
+
+ dst_dev = pci_get_drvdata(des_pdev);
+ /* When enabling the vf, return ok if the vf is bound to vf-io */
+ if (strcmp(des_pdev->driver->name, HINIC3_DRV_NAME) ||
+ !dst_dev || (!en && dst_dev->lld_state != HINIC3_PROBE_OK) ||
+ (en && dst_dev->lld_state != HINIC3_NOT_PROBE)) {
+ lld_put();
+ goto free_pf_info;
+ }
+
+ if (en)
+ pci_dev_put(des_pdev);
+ mutex_lock(&dst_dev->pdev_mutex);
+ find_dst_dev = true;
+ break;
+ }
+ lld_put();
+
+ if (!find_dst_dev) {
+ err = -EFAULT;
+ sdk_err(&pdev->dev, "Invalid parameter vf_id %u \n", vf_func_id);
+ goto free_pf_info;
+ }
+
+ err = hinic3_set_func_en(des_pdev, dst_dev, en, vf_func_id);
+
+free_pf_info:
+ kfree(pf_infos);
+ return err;
+}
+EXPORT_SYMBOL(hinic3_set_vf_service_state);
+
+/*lint -save -e133 -e10*/
+static const struct pci_device_id hinic3_pci_table[] = {
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_SPU), 0},
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_STANDARD), 0},
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_SDI_5_1_PF), 0},
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_SDI_5_0_PF), 0},
+ {PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_VF), 0},
+ {0, 0}
+
+};
+
+/*lint -restore*/
+
+MODULE_DEVICE_TABLE(pci, hinic3_pci_table);
+
+/**
+ * hinic3_io_error_detected - called when PCI error is detected
+ * @pdev: Pointer to PCI device
+ * @state: The current pci connection state
+ *
+ * This function is called after a PCI bus error affecting
+ * this device has been detected.
+ *
+ * Since we only need error detection, not error handling, we
+ * always return PCI_ERS_RESULT_CAN_RECOVER to tell the AER
+ * driver that we don't need a reset (error handling).
+ */
+static pci_ers_result_t hinic3_io_error_detected(struct pci_dev *pdev,
+ pci_channel_state_t state)
+{
+ struct hinic3_pcidev *pci_adapter = NULL;
+
+ sdk_err(&pdev->dev,
+ "Uncorrectable error detected, log and cleanup error status: 0x%08x\n",
+ state);
+
+ pci_cleanup_aer_uncorrect_error_status(pdev);
+ pci_adapter = pci_get_drvdata(pdev);
+ if (pci_adapter)
+ hinic3_record_pcie_error(pci_adapter->hwdev);
+
+ return PCI_ERS_RESULT_CAN_RECOVER;
+}
+
+static void hinic3_shutdown(struct pci_dev *pdev)
+{
+ struct hinic3_pcidev *pci_adapter = pci_get_drvdata(pdev);
+
+ sdk_info(&pdev->dev, "Shutdown device\n");
+
+ if (pci_adapter)
+ hinic3_shutdown_hwdev(pci_adapter->hwdev);
+
+ pci_disable_device(pdev);
+
+ if (pci_adapter)
+ hinic3_set_api_stop(pci_adapter->hwdev);
+}
+
+#ifdef HAVE_RHEL6_SRIOV_CONFIGURE
+static struct pci_driver_rh hinic3_driver_rh = {
+ .sriov_configure = hinic3_pci_sriov_configure,
+};
+#endif
+
+/* Because we only need error detection, not error handling, the
+ * error_detected callback alone is enough.
+ */
+static struct pci_error_handlers hinic3_err_handler = {
+ .error_detected = hinic3_io_error_detected,
+};
+
+static struct pci_driver hinic3_driver = {
+ .name = HINIC3_DRV_NAME,
+ .id_table = hinic3_pci_table,
+ .probe = hinic3_probe,
+ .remove = hinic3_remove,
+ .shutdown = hinic3_shutdown,
+#if defined(HAVE_SRIOV_CONFIGURE)
+ .sriov_configure = hinic3_pci_sriov_configure,
+#elif defined(HAVE_RHEL6_SRIOV_CONFIGURE)
+ .rh_reserved = &hinic3_driver_rh,
+#endif
+ .err_handler = &hinic3_err_handler
+};
+
+int hinic3_lld_init(void)
+{
+ int err;
+
+ pr_info("%s - version %s\n", HINIC3_DRV_DESC, HINIC3_DRV_VERSION);
+ memset(g_uld_info, 0, sizeof(g_uld_info));
+
+ hinic3_lld_lock_init();
+ hinic3_uld_lock_init();
+
+ err = hinic3_module_pre_init();
+ if (err) {
+ pr_err("Init custom failed\n");
+ return err;
+ }
+
+ err = pci_register_driver(&hinic3_driver);
+ if (err) {
+ hinic3_module_post_exit();
+ return err;
+ }
+
+ return 0;
+}
+
+void hinic3_lld_exit(void)
+{
+ pci_unregister_driver(&hinic3_driver);
+
+ hinic3_module_post_exit();
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.c
new file mode 100644
index 000000000000..f23910d53573
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.c
@@ -0,0 +1,1841 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include <linux/types.h>
+#include <linux/semaphore.h>
+#include <linux/spinlock.h>
+#include <linux/workqueue.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_csr.h"
+#include "hinic3_hwif.h"
+#include "hinic3_eqs.h"
+#include "hinic3_prof_adap.h"
+#include "hinic3_common.h"
+#include "hinic3_mbox.h"
+
+#define HINIC3_MBOX_INT_DST_AEQN_SHIFT 10
+#define HINIC3_MBOX_INT_SRC_RESP_AEQN_SHIFT 12
+#define HINIC3_MBOX_INT_STAT_DMA_SHIFT 14
+/* The size of data to be sent (in units of 4 bytes) */
+#define HINIC3_MBOX_INT_TX_SIZE_SHIFT 20
+/* SO_RO(strong order, relax order) */
+#define HINIC3_MBOX_INT_STAT_DMA_SO_RO_SHIFT 25
+#define HINIC3_MBOX_INT_WB_EN_SHIFT 28
+
+#define HINIC3_MBOX_INT_DST_AEQN_MASK 0x3
+#define HINIC3_MBOX_INT_SRC_RESP_AEQN_MASK 0x3
+#define HINIC3_MBOX_INT_STAT_DMA_MASK 0x3F
+#define HINIC3_MBOX_INT_TX_SIZE_MASK 0x1F
+#define HINIC3_MBOX_INT_STAT_DMA_SO_RO_MASK 0x3
+#define HINIC3_MBOX_INT_WB_EN_MASK 0x1
+
+#define HINIC3_MBOX_INT_SET(val, field) \
+ (((val) & HINIC3_MBOX_INT_##field##_MASK) << \
+ HINIC3_MBOX_INT_##field##_SHIFT)
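+/* For example, the interrupt attribute value is built by OR-ing fields:
+ * HINIC3_MBOX_INT_SET(dst_aeqn, DST_AEQN) |
+ * HINIC3_MBOX_INT_SET(seg_len, TX_SIZE) |
+ * HINIC3_MBOX_INT_SET(WRITE_BACK, WB_EN)
+ */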
+
+enum hinic3_mbox_tx_status {
+ TX_NOT_DONE = 1,
+};
+
+#define HINIC3_MBOX_CTRL_TRIGGER_AEQE_SHIFT 0
+/* Specifies the Tx request status for the message data:
+ * 0 - Tx request is done;
+ * 1 - Tx request is in progress.
+ */
+#define HINIC3_MBOX_CTRL_TX_STATUS_SHIFT 1
+#define HINIC3_MBOX_CTRL_DST_FUNC_SHIFT 16
+
+#define HINIC3_MBOX_CTRL_TRIGGER_AEQE_MASK 0x1
+#define HINIC3_MBOX_CTRL_TX_STATUS_MASK 0x1
+#define HINIC3_MBOX_CTRL_DST_FUNC_MASK 0x1FFF
+
+#define HINIC3_MBOX_CTRL_SET(val, field) \
+ (((val) & HINIC3_MBOX_CTRL_##field##_MASK) << \
+ HINIC3_MBOX_CTRL_##field##_SHIFT)
+
+#define MBOX_SEGLEN_MASK \
+ HINIC3_MSG_HEADER_SET(HINIC3_MSG_HEADER_SEG_LEN_MASK, SEG_LEN)
+
+#define MBOX_MSG_POLLING_TIMEOUT 8000
+#define HINIC3_MBOX_COMP_TIME 40000U
+
+#define MBOX_MAX_BUF_SZ 2048U
+#define MBOX_HEADER_SZ 8
+#define HINIC3_MBOX_DATA_SIZE (MBOX_MAX_BUF_SZ - MBOX_HEADER_SZ)
+
+/* MBOX size is 64B, 8B for mbox_header, 8B reserved */
+#define MBOX_SEG_LEN 48
+#define MBOX_SEG_LEN_ALIGN 4
+#define MBOX_WB_STATUS_LEN 16UL
+
+#define SEQ_ID_START_VAL 0
+#define SEQ_ID_MAX_VAL 42
+#define MBOX_LAST_SEG_MAX_LEN (MBOX_MAX_BUF_SZ - \
+ SEQ_ID_MAX_VAL * MBOX_SEG_LEN)
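+/* i.e. segments with sequence ids below SEQ_ID_MAX_VAL carry MBOX_SEG_LEN
+ * bytes each, and the final segment (id 42) carries at most
+ * 2048 - 42 * 48 = 32 bytes.
+ */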
+
+/* mbox write back status is 16B, only first 4B is used */
+#define MBOX_WB_STATUS_ERRCODE_MASK 0xFFFF
+#define MBOX_WB_STATUS_MASK 0xFF
+#define MBOX_WB_ERROR_CODE_MASK 0xFF00
+#define MBOX_WB_STATUS_FINISHED_SUCCESS 0xFF
+#define MBOX_WB_STATUS_FINISHED_WITH_ERR 0xFE
+#define MBOX_WB_STATUS_NOT_FINISHED 0x00
+
+#define MBOX_STATUS_FINISHED(wb) \
+ (((wb) & MBOX_WB_STATUS_MASK) != MBOX_WB_STATUS_NOT_FINISHED)
+#define MBOX_STATUS_SUCCESS(wb) \
+ (((wb) & MBOX_WB_STATUS_MASK) == MBOX_WB_STATUS_FINISHED_SUCCESS)
+#define MBOX_STATUS_ERRCODE(wb) \
+ ((wb) & MBOX_WB_ERROR_CODE_MASK)
+
+#define DST_AEQ_IDX_DEFAULT_VAL 0
+#define SRC_AEQ_IDX_DEFAULT_VAL 0
+#define NO_DMA_ATTRIBUTE_VAL 0
+
+#define MBOX_MSG_NO_DATA_LEN 1
+
+#define MBOX_BODY_FROM_HDR(header) ((u8 *)(header) + MBOX_HEADER_SZ)
+#define MBOX_AREA(hwif) \
+ ((hwif)->cfg_regs_base + HINIC3_FUNC_CSR_MAILBOX_DATA_OFF)
+
+#define MBOX_DMA_MSG_QUEUE_DEPTH 32
+
+#define MBOX_MQ_CI_OFFSET (HINIC3_CFG_REGS_FLAG + HINIC3_FUNC_CSR_MAILBOX_DATA_OFF + \
+ MBOX_HEADER_SZ + MBOX_SEG_LEN)
+
+#define MBOX_MQ_SYNC_CI_SHIFT 0
+#define MBOX_MQ_ASYNC_CI_SHIFT 8
+
+#define MBOX_MQ_SYNC_CI_MASK 0xFF
+#define MBOX_MQ_ASYNC_CI_MASK 0xFF
+
+#define MBOX_MQ_CI_SET(val, field) \
+ (((val) & MBOX_MQ_##field##_CI_MASK) << MBOX_MQ_##field##_CI_SHIFT)
+#define MBOX_MQ_CI_GET(val, field) \
+ (((val) >> MBOX_MQ_##field##_CI_SHIFT) & MBOX_MQ_##field##_CI_MASK)
+#define MBOX_MQ_CI_CLEAR(val, field) \
+ ((val) & (~(MBOX_MQ_##field##_CI_MASK << MBOX_MQ_##field##_CI_SHIFT)))
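+/* e.g. to update only the SYNC consumer index while preserving ASYNC:
+ * val = MBOX_MQ_CI_CLEAR(val, SYNC) | MBOX_MQ_CI_SET(new_ci, SYNC);
+ */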
+
+#define IS_PF_OR_PPF_SRC(hwdev, src_func_idx) \
+ ((src_func_idx) < HINIC3_MAX_PF_NUM(hwdev))
+
+#define MBOX_RESPONSE_ERROR 0x1
+#define MBOX_MSG_ID_MASK 0xF
+#define MBOX_MSG_ID(func_to_func) ((func_to_func)->send_msg_id)
+#define MBOX_MSG_ID_INC(func_to_func) \
+ (MBOX_MSG_ID(func_to_func) = \
+ (MBOX_MSG_ID(func_to_func) + 1) & MBOX_MSG_ID_MASK)
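+/* The message id is a 4-bit rolling counter (0..15); a response is matched
+ * to its request by comparing the received msg_id with the last id sent.
+ */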
+
+/* maximum number of messages waiting to be processed for one function */
+#define HINIC3_MAX_MSG_CNT_TO_PROCESS 10
+
+#define MBOX_MSG_CHANNEL_STOP(func_to_func) \
+ ((((func_to_func)->lock_channel_en) && \
+ test_bit((func_to_func)->cur_msg_channel, \
+ &(func_to_func)->channel_stop)) ? true : false)
+
+enum mbox_ordering_type {
+ STRONG_ORDER,
+};
+
+enum mbox_write_back_type {
+ WRITE_BACK = 1,
+};
+
+enum mbox_aeq_trig_type {
+ NOT_TRIGGER,
+ TRIGGER,
+};
+
+static int send_mbox_msg(struct hinic3_mbox *func_to_func, u8 mod, u16 cmd,
+ void *msg, u16 msg_len, u16 dst_func,
+ enum hinic3_msg_direction_type direction,
+ enum hinic3_msg_ack_type ack_type,
+ struct mbox_msg_info *msg_info);
+
+static struct hinic3_msg_desc *get_mbox_msg_desc(struct hinic3_mbox *func_to_func,
+ u64 dir, u64 src_func_id);
+
+/**
+ * hinic3_register_ppf_mbox_cb - register mbox callback for ppf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ * @pri_handle: specific mod's private data that will be used in callback
+ * @callback: callback function
+ * Return: 0 - success, negative - failure
+ */
+int hinic3_register_ppf_mbox_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_ppf_mbox_cb callback)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return -EFAULT;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ func_to_func->ppf_mbox_cb[mod] = callback;
+ func_to_func->ppf_mbox_data[mod] = pri_handle;
+
+ set_bit(HINIC3_PPF_MBOX_CB_REG, &func_to_func->ppf_mbox_cb_state[mod]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_register_ppf_mbox_cb);
+
+/**
+ * hinic3_register_pf_mbox_cb - register mbox callback for pf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ * @pri_handle: specific mod's private data that will be used in callback
+ * @callback: callback function
+ * Return: 0 - success, negative - failure
+ */
+int hinic3_register_pf_mbox_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_pf_mbox_cb callback)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return -EFAULT;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ func_to_func->pf_mbox_cb[mod] = callback;
+ func_to_func->pf_mbox_data[mod] = pri_handle;
+
+ set_bit(HINIC3_PF_MBOX_CB_REG, &func_to_func->pf_mbox_cb_state[mod]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_register_pf_mbox_cb);
+
+/**
+ * hinic3_register_vf_mbox_cb - register mbox callback for vf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ * @pri_handle: specific mod's private data that will be used in callback
+ * @callback: callback function
+ * Return: 0 - success, negative - failure
+ */
+int hinic3_register_vf_mbox_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_vf_mbox_cb callback)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return -EFAULT;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ func_to_func->vf_mbox_cb[mod] = callback;
+ func_to_func->vf_mbox_data[mod] = pri_handle;
+
+ set_bit(HINIC3_VF_MBOX_CB_REG, &func_to_func->vf_mbox_cb_state[mod]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_register_vf_mbox_cb);
+
+/**
+ * hinic3_unregister_ppf_mbox_cb - unregister the mbox callback for ppf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ */
+void hinic3_unregister_ppf_mbox_cb(void *hwdev, u8 mod)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ clear_bit(HINIC3_PPF_MBOX_CB_REG,
+ &func_to_func->ppf_mbox_cb_state[mod]);
+
+ while (test_bit(HINIC3_PPF_MBOX_CB_RUNNING,
+ &func_to_func->ppf_mbox_cb_state[mod]))
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+
+ func_to_func->ppf_mbox_data[mod] = NULL;
+ func_to_func->ppf_mbox_cb[mod] = NULL;
+}
+EXPORT_SYMBOL(hinic3_unregister_ppf_mbox_cb);
+
+/**
+ * hinic3_unregister_pf_mbox_cb - unregister the mbox callback for pf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ */
+void hinic3_unregister_pf_mbox_cb(void *hwdev, u8 mod)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ clear_bit(HINIC3_PF_MBOX_CB_REG, &func_to_func->pf_mbox_cb_state[mod]);
+
+ while (test_bit(HINIC3_PF_MBOX_CB_RUNNING, &func_to_func->pf_mbox_cb_state[mod]) != 0)
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+
+ func_to_func->pf_mbox_data[mod] = NULL;
+ func_to_func->pf_mbox_cb[mod] = NULL;
+}
+EXPORT_SYMBOL(hinic3_unregister_pf_mbox_cb);
+
+/**
+ * hinic3_unregister_vf_mbox_cb - unregister the mbox callback for vf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ */
+void hinic3_unregister_vf_mbox_cb(void *hwdev, u8 mod)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ clear_bit(HINIC3_VF_MBOX_CB_REG, &func_to_func->vf_mbox_cb_state[mod]);
+
+ while (test_bit(HINIC3_VF_MBOX_CB_RUNNING, &func_to_func->vf_mbox_cb_state[mod]) != 0)
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+
+ func_to_func->vf_mbox_data[mod] = NULL;
+ func_to_func->vf_mbox_cb[mod] = NULL;
+}
+EXPORT_SYMBOL(hinic3_unregister_vf_mbox_cb);
+
+/**
+ * hinic3_unregister_ppf_to_pf_mbox_cb - unregister the mbox callback for pf from ppf
+ * @hwdev: the pointer to hw device
+ * @mod: specific mod that the callback will handle
+ */
+void hinic3_unregister_ppf_to_pf_mbox_cb(void *hwdev, u8 mod)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+
+ if (mod >= HINIC3_MOD_MAX || !hwdev)
+ return;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+
+ clear_bit(HINIC3_PPF_TO_PF_MBOX_CB_REG,
+ &func_to_func->ppf_to_pf_mbox_cb_state[mod]);
+
+ while (test_bit(HINIC3_PPF_TO_PF_MBOX_CB_RUNNIG,
+ &func_to_func->ppf_to_pf_mbox_cb_state[mod]))
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+
+ func_to_func->pf_recv_ppf_mbox_data[mod] = NULL;
+ func_to_func->pf_recv_ppf_mbox_cb[mod] = NULL;
+}
+
+static int recv_vf_mbox_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox,
+ void *buf_out, u16 *out_size)
+{
+ hinic3_vf_mbox_cb cb;
+ int ret;
+
+ if (recv_mbox->mod >= HINIC3_MOD_MAX) {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "Receive illegal mbox message, mod = %hhu\n",
+ recv_mbox->mod);
+ return -EINVAL;
+ }
+
+ set_bit(HINIC3_VF_MBOX_CB_RUNNING,
+ &func_to_func->vf_mbox_cb_state[recv_mbox->mod]);
+
+ cb = func_to_func->vf_mbox_cb[recv_mbox->mod];
+ if (cb && test_bit(HINIC3_VF_MBOX_CB_REG,
+ &func_to_func->vf_mbox_cb_state[recv_mbox->mod])) {
+ ret = cb(func_to_func->vf_mbox_data[recv_mbox->mod],
+ recv_mbox->cmd, recv_mbox->msg,
+ recv_mbox->msg_len, buf_out, out_size);
+ } else {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "VF mbox cb is not registered\n");
+ ret = -EINVAL;
+ }
+
+ clear_bit(HINIC3_VF_MBOX_CB_RUNNING,
+ &func_to_func->vf_mbox_cb_state[recv_mbox->mod]);
+
+ return ret;
+}
+
+static int recv_pf_from_ppf_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox,
+ void *buf_out, u16 *out_size)
+{
+ hinic3_pf_recv_from_ppf_mbox_cb cb;
+ enum hinic3_mod_type mod = recv_mbox->mod;
+ int ret;
+
+ if (mod >= HINIC3_MOD_MAX) {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "Receive illegal mbox message, mod = %d\n",
+ mod);
+ return -EINVAL;
+ }
+
+ set_bit(HINIC3_PPF_TO_PF_MBOX_CB_RUNNIG,
+ &func_to_func->ppf_to_pf_mbox_cb_state[mod]);
+
+ cb = func_to_func->pf_recv_ppf_mbox_cb[mod];
+ if (cb && test_bit(HINIC3_PPF_TO_PF_MBOX_CB_REG,
+ &func_to_func->ppf_to_pf_mbox_cb_state[mod]) != 0) {
+ ret = cb(func_to_func->pf_recv_ppf_mbox_data[mod],
+ recv_mbox->cmd, recv_mbox->msg, recv_mbox->msg_len,
+ buf_out, out_size);
+ } else {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "PF receive ppf mailbox callback is not registered\n");
+ ret = -EINVAL;
+ }
+
+ clear_bit(HINIC3_PPF_TO_PF_MBOX_CB_RUNNIG,
+ &func_to_func->ppf_to_pf_mbox_cb_state[mod]);
+
+ return ret;
+}
+
+static int recv_ppf_mbox_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox,
+ u8 pf_id, void *buf_out, u16 *out_size)
+{
+ hinic3_ppf_mbox_cb cb;
+ u16 vf_id = 0;
+ int ret;
+
+ if (recv_mbox->mod >= HINIC3_MOD_MAX) {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "Receive illegal mbox message, mod = %hhu\n",
+ recv_mbox->mod);
+ return -EINVAL;
+ }
+
+ set_bit(HINIC3_PPF_MBOX_CB_RUNNING,
+ &func_to_func->ppf_mbox_cb_state[recv_mbox->mod]);
+
+ cb = func_to_func->ppf_mbox_cb[recv_mbox->mod];
+ if (cb && test_bit(HINIC3_PPF_MBOX_CB_REG,
+ &func_to_func->ppf_mbox_cb_state[recv_mbox->mod])) {
+ ret = cb(func_to_func->ppf_mbox_data[recv_mbox->mod],
+ pf_id, vf_id, recv_mbox->cmd, recv_mbox->msg,
+ recv_mbox->msg_len, buf_out, out_size);
+ } else {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "PPF mbox cb is not registered, mod = %hhu\n",
+ recv_mbox->mod);
+ ret = -EINVAL;
+ }
+
+ clear_bit(HINIC3_PPF_MBOX_CB_RUNNING,
+ &func_to_func->ppf_mbox_cb_state[recv_mbox->mod]);
+
+ return ret;
+}
+
+static int recv_pf_from_vf_mbox_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox,
+ u16 src_func_idx, void *buf_out,
+ u16 *out_size)
+{
+ hinic3_pf_mbox_cb cb;
+ u16 vf_id = 0;
+ int ret;
+
+ if (recv_mbox->mod >= HINIC3_MOD_MAX) {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "Receive illegal mbox message, mod = %hhu\n",
+ recv_mbox->mod);
+ return -EINVAL;
+ }
+
+ set_bit(HINIC3_PF_MBOX_CB_RUNNING,
+ &func_to_func->pf_mbox_cb_state[recv_mbox->mod]);
+
+ cb = func_to_func->pf_mbox_cb[recv_mbox->mod];
+ if (cb && test_bit(HINIC3_PF_MBOX_CB_REG,
+ &func_to_func->pf_mbox_cb_state[recv_mbox->mod]) != 0) {
+ vf_id = src_func_idx -
+ hinic3_glb_pf_vf_offset(func_to_func->hwdev);
+ ret = cb(func_to_func->pf_mbox_data[recv_mbox->mod],
+ vf_id, recv_mbox->cmd, recv_mbox->msg,
+ recv_mbox->msg_len, buf_out, out_size);
+ } else {
+ sdk_warn(func_to_func->hwdev->dev_hdl, "PF mbox mod(0x%x) cb is not registered\n",
+ recv_mbox->mod);
+ ret = -EINVAL;
+ }
+
+ clear_bit(HINIC3_PF_MBOX_CB_RUNNING,
+ &func_to_func->pf_mbox_cb_state[recv_mbox->mod]);
+
+ return ret;
+}
+
+static void response_for_recv_func_mbox(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox,
+ int err, u16 out_size, u16 src_func_idx)
+{
+ struct mbox_msg_info msg_info = {0};
+ u16 size = out_size;
+
+ msg_info.msg_id = recv_mbox->msg_id;
+ if (err)
+ msg_info.status = HINIC3_MBOX_PF_SEND_ERR;
+
+ /* if there is no data to respond with, set out_size to 1 */
+ if (!out_size || err)
+ size = MBOX_MSG_NO_DATA_LEN;
+
+ if (size > HINIC3_MBOX_DATA_SIZE) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Response msg len(%d) exceed limit(%d)\n",
+ size, HINIC3_MBOX_DATA_SIZE);
+ size = HINIC3_MBOX_DATA_SIZE;
+ }
+
+ send_mbox_msg(func_to_func, recv_mbox->mod, recv_mbox->cmd,
+ recv_mbox->resp_buff, size, src_func_idx,
+ HINIC3_MSG_RESPONSE, HINIC3_MSG_NO_ACK, &msg_info);
+}
+
+static void recv_func_mbox_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox)
+{
+ struct hinic3_hwdev *dev = func_to_func->hwdev;
+ void *buf_out = recv_mbox->resp_buff;
+ u16 src_func_idx = recv_mbox->src_func_idx;
+ u16 out_size = HINIC3_MBOX_DATA_SIZE;
+ int err = 0;
+
+ if (HINIC3_IS_VF(dev)) {
+ err = recv_vf_mbox_handler(func_to_func, recv_mbox, buf_out,
+ &out_size);
+ } else { /* pf/ppf process */
+ if (IS_PF_OR_PPF_SRC(dev, src_func_idx)) {
+ if (HINIC3_IS_PPF(dev)) {
+ err = recv_ppf_mbox_handler(func_to_func,
+ recv_mbox,
+ (u8)src_func_idx,
+ buf_out, &out_size);
+ if (err)
+ goto out;
+ } else {
+ err = recv_pf_from_ppf_handler(func_to_func,
+ recv_mbox,
+ buf_out,
+ &out_size);
+ if (err)
+ goto out;
+ }
+ } else {
+ /* the source is neither PF nor PPF, so it is from a VF */
+ err = recv_pf_from_vf_mbox_handler(func_to_func,
+ recv_mbox,
+ src_func_idx,
+ buf_out, &out_size);
+ }
+ }
+
+out:
+ if (recv_mbox->ack_type == HINIC3_MSG_ACK)
+ response_for_recv_func_mbox(func_to_func, recv_mbox, err,
+ out_size, src_func_idx);
+}
+
+static struct hinic3_recv_mbox *alloc_recv_mbox(void)
+{
+ struct hinic3_recv_mbox *recv_msg = NULL;
+
+ recv_msg = kzalloc(sizeof(*recv_msg), GFP_KERNEL);
+ if (!recv_msg)
+ return NULL;
+
+ recv_msg->msg = kzalloc(MBOX_MAX_BUF_SZ, GFP_KERNEL);
+ if (!recv_msg->msg)
+ goto alloc_msg_err;
+
+ recv_msg->resp_buff = kzalloc(MBOX_MAX_BUF_SZ, GFP_KERNEL);
+ if (!recv_msg->resp_buff)
+ goto alloc_resp_bff_err;
+
+ return recv_msg;
+
+alloc_resp_buff_err:
+ kfree(recv_msg->msg);
+
+alloc_msg_err:
+ kfree(recv_msg);
+
+ return NULL;
+}
+
+static void free_recv_mbox(struct hinic3_recv_mbox *recv_msg)
+{
+ kfree(recv_msg->resp_buff);
+ kfree(recv_msg->msg);
+ kfree(recv_msg);
+}
+
+static void recv_func_mbox_work_handler(struct work_struct *work)
+{
+ struct hinic3_mbox_work *mbox_work =
+ container_of(work, struct hinic3_mbox_work, work);
+
+ recv_func_mbox_handler(mbox_work->func_to_func, mbox_work->recv_mbox);
+
+ atomic_dec(&mbox_work->msg_ch->recv_msg_cnt);
+
+ destroy_work(&mbox_work->work);
+
+ free_recv_mbox(mbox_work->recv_mbox);
+ kfree(mbox_work);
+}
+
+static void resp_mbox_handler(struct hinic3_mbox *func_to_func,
+ const struct hinic3_msg_desc *msg_desc)
+{
+ spin_lock(&func_to_func->mbox_lock);
+ if (msg_desc->msg_info.msg_id == func_to_func->send_msg_id &&
+ func_to_func->event_flag == EVENT_START)
+ func_to_func->event_flag = EVENT_SUCCESS;
+ else
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Mbox response timeout, current send msg id(0x%x), recv msg id(0x%x), status(0x%x)\n",
+ func_to_func->send_msg_id, msg_desc->msg_info.msg_id,
+ msg_desc->msg_info.status);
+ spin_unlock(&func_to_func->mbox_lock);
+}
+
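+/*
+ * Defer processing of a fully reassembled request to the workqueue: the
+ * message is copied out of the per-channel descriptor so the AEQ context
+ * can keep receiving, and recv_msg_cnt bounds the per-channel backlog.
+ */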
+static void recv_mbox_msg_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_msg_desc *msg_desc,
+ u64 mbox_header)
+{
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ struct hinic3_recv_mbox *recv_msg = NULL;
+ struct hinic3_mbox_work *mbox_work = NULL;
+ struct hinic3_msg_channel *msg_ch =
+ container_of(msg_desc, struct hinic3_msg_channel, recv_msg);
+ u16 src_func_idx = HINIC3_MSG_HEADER_GET(mbox_header, SRC_GLB_FUNC_IDX);
+
+ if (atomic_read(&msg_ch->recv_msg_cnt) >
+ HINIC3_MAX_MSG_CNT_TO_PROCESS) {
+ sdk_warn(hwdev->dev_hdl, "This function(%u) have %d message wait to process, can't add to work queue\n",
+ src_func_idx, atomic_read(&msg_ch->recv_msg_cnt));
+ return;
+ }
+
+ recv_msg = alloc_recv_mbox();
+ if (!recv_msg) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc receive mbox message buffer\n");
+ return;
+ }
+ recv_msg->msg_len = msg_desc->msg_len;
+ memcpy(recv_msg->msg, msg_desc->msg, recv_msg->msg_len);
+ recv_msg->msg_id = msg_desc->msg_info.msg_id;
+ recv_msg->mod = HINIC3_MSG_HEADER_GET(mbox_header, MODULE);
+ recv_msg->cmd = HINIC3_MSG_HEADER_GET(mbox_header, CMD);
+ recv_msg->ack_type = HINIC3_MSG_HEADER_GET(mbox_header, NO_ACK);
+ recv_msg->src_func_idx = src_func_idx;
+
+ mbox_work = kzalloc(sizeof(*mbox_work), GFP_KERNEL);
+ if (!mbox_work) {
+ sdk_err(hwdev->dev_hdl, "Allocate mbox work memory failed.\n");
+ free_recv_mbox(recv_msg);
+ return;
+ }
+
+ atomic_inc(&msg_ch->recv_msg_cnt);
+
+ mbox_work->func_to_func = func_to_func;
+ mbox_work->recv_mbox = recv_msg;
+ mbox_work->msg_ch = msg_ch;
+
+ INIT_WORK(&mbox_work->work, recv_func_mbox_work_handler);
+ queue_work_on(hisdk3_get_work_cpu_affinity(hwdev, WORK_TYPE_MBOX),
+ func_to_func->workq, &mbox_work->work);
+}
+
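+/*
+ * Validate one segment of a (possibly multi-segment) message: seq id 0
+ * starts a new message and records its identity; every later segment must
+ * carry the next seq id and the same msg id, module and command, otherwise
+ * the whole message is dropped.
+ */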
+static bool check_mbox_segment(struct hinic3_mbox *func_to_func,
+ struct hinic3_msg_desc *msg_desc,
+ u64 mbox_header, void *mbox_body)
+{
+ u8 seq_id, seg_len, msg_id, mod;
+ u16 src_func_idx, cmd;
+
+ seq_id = HINIC3_MSG_HEADER_GET(mbox_header, SEQID);
+ seg_len = HINIC3_MSG_HEADER_GET(mbox_header, SEG_LEN);
+ msg_id = HINIC3_MSG_HEADER_GET(mbox_header, MSG_ID);
+ mod = HINIC3_MSG_HEADER_GET(mbox_header, MODULE);
+ cmd = HINIC3_MSG_HEADER_GET(mbox_header, CMD);
+ src_func_idx = HINIC3_MSG_HEADER_GET(mbox_header, SRC_GLB_FUNC_IDX);
+
+ if (seq_id > SEQ_ID_MAX_VAL || seg_len > MBOX_SEG_LEN ||
+ (seq_id == SEQ_ID_MAX_VAL && seg_len > MBOX_LAST_SEG_MAX_LEN))
+ goto seg_err;
+
+ if (seq_id == 0) {
+ msg_desc->seq_id = seq_id;
+ msg_desc->msg_info.msg_id = msg_id;
+ msg_desc->mod = mod;
+ msg_desc->cmd = cmd;
+ } else {
+ if (seq_id != msg_desc->seq_id + 1 || msg_id != msg_desc->msg_info.msg_id ||
+ mod != msg_desc->mod || cmd != msg_desc->cmd)
+ goto seg_err;
+
+ msg_desc->seq_id = seq_id;
+ }
+
+ return true;
+
+seg_err:
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Mailbox segment check failed, src func id: 0x%x, front seg info: seq id: 0x%x, msg id: 0x%x, mod: 0x%x, cmd: 0x%x\n",
+ src_func_idx, msg_desc->seq_id, msg_desc->msg_info.msg_id,
+ msg_desc->mod, msg_desc->cmd);
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Current seg info: seg len: 0x%x, seq id: 0x%x, msg id: 0x%x, mod: 0x%x, cmd: 0x%x\n",
+ seg_len, seq_id, msg_id, mod, cmd);
+
+ return false;
+}
+
+static void recv_mbox_handler(struct hinic3_mbox *func_to_func,
+ u64 *header, struct hinic3_msg_desc *msg_desc)
+{
+ u64 mbox_header = *header;
+ void *mbox_body = MBOX_BODY_FROM_HDR((void *)header);
+ u8 seq_id, seg_len;
+ int pos;
+
+ if (!check_mbox_segment(func_to_func, msg_desc, mbox_header, mbox_body)) {
+ msg_desc->seq_id = SEQ_ID_MAX_VAL;
+ return;
+ }
+
+ seq_id = HINIC3_MSG_HEADER_GET(mbox_header, SEQID);
+ seg_len = HINIC3_MSG_HEADER_GET(mbox_header, SEG_LEN);
+
+ pos = seq_id * MBOX_SEG_LEN;
+ memcpy((u8 *)msg_desc->msg + pos, mbox_body, seg_len);
+
+ if (!HINIC3_MSG_HEADER_GET(mbox_header, LAST))
+ return;
+
+ msg_desc->msg_len = HINIC3_MSG_HEADER_GET(mbox_header, MSG_LEN);
+ msg_desc->msg_info.status = HINIC3_MSG_HEADER_GET(mbox_header, STATUS);
+
+ if (HINIC3_MSG_HEADER_GET(mbox_header, DIRECTION) ==
+ HINIC3_MSG_RESPONSE) {
+ resp_mbox_handler(func_to_func, msg_desc);
+ return;
+ }
+
+ recv_mbox_msg_handler(func_to_func, msg_desc, mbox_header);
+}
+
+void hinic3_mbox_func_aeqe_handler(void *handle, u8 *header, u8 size)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+ struct hinic3_msg_desc *msg_desc = NULL;
+ u64 mbox_header = *((u64 *)header);
+ u64 src, dir;
+
+ func_to_func = ((struct hinic3_hwdev *)handle)->func_to_func;
+
+ dir = HINIC3_MSG_HEADER_GET(mbox_header, DIRECTION);
+ src = HINIC3_MSG_HEADER_GET(mbox_header, SRC_GLB_FUNC_IDX);
+
+ msg_desc = get_mbox_msg_desc(func_to_func, dir, src);
+ if (!msg_desc) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Mailbox source function id: %u is invalid for current function\n",
+ (u32)src);
+ return;
+ }
+
+ recv_mbox_handler(func_to_func, (u64 *)header, msg_desc);
+}
+
+static int init_mbox_dma_queue(struct hinic3_hwdev *hwdev, struct mbox_dma_queue *mq)
+{
+ u32 size;
+
+ mq->depth = MBOX_DMA_MSG_QUEUE_DEPTH;
+ mq->prod_idx = 0;
+ mq->cons_idx = 0;
+
+ size = mq->depth * MBOX_MAX_BUF_SZ;
+ mq->dma_buff_vaddr = dma_zalloc_coherent(hwdev->dev_hdl, size, &mq->dma_buff_paddr,
+ GFP_KERNEL);
+ if (!mq->dma_buff_vaddr) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc dma_buffer\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void deinit_mbox_dma_queue(struct hinic3_hwdev *hwdev, struct mbox_dma_queue *mq)
+{
+ dma_free_coherent(hwdev->dev_hdl, mq->depth * MBOX_MAX_BUF_SZ,
+ mq->dma_buff_vaddr, mq->dma_buff_paddr);
+}
+
+static int hinic3_init_mbox_dma_queue(struct hinic3_mbox *func_to_func)
+{
+ u32 val;
+ int err;
+
+ err = init_mbox_dma_queue(func_to_func->hwdev, &func_to_func->sync_msg_queue);
+ if (err)
+ return err;
+
+ err = init_mbox_dma_queue(func_to_func->hwdev, &func_to_func->async_msg_queue);
+ if (err) {
+ deinit_mbox_dma_queue(func_to_func->hwdev, &func_to_func->sync_msg_queue);
+ return err;
+ }
+
+ val = hinic3_hwif_read_reg(func_to_func->hwdev->hwif, MBOX_MQ_CI_OFFSET);
+ val = MBOX_MQ_CI_CLEAR(val, SYNC);
+ val = MBOX_MQ_CI_CLEAR(val, ASYNC);
+ hinic3_hwif_write_reg(func_to_func->hwdev->hwif, MBOX_MQ_CI_OFFSET, val);
+
+ return 0;
+}
+
+static void hinic3_deinit_mbox_dma_queue(struct hinic3_mbox *func_to_func)
+{
+ deinit_mbox_dma_queue(func_to_func->hwdev, &func_to_func->sync_msg_queue);
+ deinit_mbox_dma_queue(func_to_func->hwdev, &func_to_func->async_msg_queue);
+}
+
+#define MBOX_DMA_MSG_INIT_XOR_VAL 0x5a5a5a5a
+#define MBOX_XOR_DATA_ALIGN 4
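+/*
+ * Simple integrity check for DMA messages: XOR all 32-bit words of the
+ * payload into a fixed seed so the receiver can verify what it fetched.
+ */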
+static u32 mbox_dma_msg_xor(u32 *data, u16 msg_len)
+{
+ u32 xor = MBOX_DMA_MSG_INIT_XOR_VAL;
+ u16 dw_len = msg_len / sizeof(u32);
+ u16 i;
+
+ for (i = 0; i < dw_len; i++)
+ xor ^= data[i];
+
+ return xor;
+}
+
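+/*
+ * Ring-buffer index helpers; masking with (depth - 1) assumes the queue
+ * depth is a power of two. The queue is treated as full when advancing
+ * the producer index would collide with the consumer index.
+ */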
+#define MQ_ID_MASK(mq, idx) ((idx) & ((mq)->depth - 1))
+#define IS_MSG_QUEUE_FULL(mq) (MQ_ID_MASK(mq, (mq)->prod_idx + 1) == \
+ MQ_ID_MASK(mq, (mq)->cons_idx))
+
+static int mbox_prepare_dma_entry(struct hinic3_mbox *func_to_func, struct mbox_dma_queue *mq,
+ struct mbox_dma_msg *dma_msg, void *msg, u16 msg_len)
+{
+ u64 dma_addr, offset;
+ void *dma_vaddr;
+
+ if (IS_MSG_QUEUE_FULL(mq)) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Mbox sync message queue is busy, pi: %u, ci: %u\n",
+ mq->prod_idx, MQ_ID_MASK(mq, mq->cons_idx));
+ return -EBUSY;
+ }
+
+ /* copy data to DMA buffer */
+ offset = mq->prod_idx * MBOX_MAX_BUF_SZ;
+ dma_vaddr = (u8 *)mq->dma_buff_vaddr + offset;
+ memcpy(dma_vaddr, msg, msg_len);
+ dma_addr = mq->dma_buff_paddr + offset;
+ dma_msg->dma_addr_high = upper_32_bits(dma_addr);
+ dma_msg->dma_addr_low = lower_32_bits(dma_addr);
+ dma_msg->msg_len = msg_len;
+ /* The firmware fetches the message in 4-byte units, so XOR over the 4B-aligned length. */
+ dma_msg->xor = mbox_dma_msg_xor(dma_vaddr, ALIGN(msg_len, MBOX_XOR_DATA_ALIGN));
+
+ mq->prod_idx++;
+ mq->prod_idx = MQ_ID_MASK(mq, mq->prod_idx);
+
+ return 0;
+}
+
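+/*
+ * Pick the sync or async DMA ring based on the ack type, refresh the
+ * consumer index from the hardware CI register, then stage the message
+ * in the next free DMA slot.
+ */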
+static int mbox_prepare_dma_msg(struct hinic3_mbox *func_to_func, enum hinic3_msg_ack_type ack_type,
+ struct mbox_dma_msg *dma_msg, void *msg, u16 msg_len)
+{
+ struct mbox_dma_queue *mq = NULL;
+ u32 val;
+
+ val = hinic3_hwif_read_reg(func_to_func->hwdev->hwif, MBOX_MQ_CI_OFFSET);
+ if (ack_type == HINIC3_MSG_ACK) {
+ mq = &func_to_func->sync_msg_queue;
+ mq->cons_idx = MBOX_MQ_CI_GET(val, SYNC);
+ } else {
+ mq = &func_to_func->async_msg_queue;
+ mq->cons_idx = MBOX_MQ_CI_GET(val, ASYNC);
+ }
+
+ return mbox_prepare_dma_entry(func_to_func, mq, dma_msg, msg, msg_len);
+}
+
+static void clear_mbox_status(struct hinic3_send_mbox *mbox)
+{
+ *mbox->wb_status = 0;
+
+ /* ensure the write-back status is cleared before the next send */
+ wmb();
+}
+
+static void mbox_copy_header(struct hinic3_hwdev *hwdev,
+ struct hinic3_send_mbox *mbox, u64 *header)
+{
+ u32 *data = (u32 *)header;
+ u32 i, idx_max = MBOX_HEADER_SZ / sizeof(u32);
+
+ for (i = 0; i < idx_max; i++) {
+ __raw_writel(cpu_to_be32(*(data + i)),
+ mbox->data + i * sizeof(u32));
+ }
+}
+
+static void mbox_copy_send_data(struct hinic3_hwdev *hwdev,
+ struct hinic3_send_mbox *mbox, void *seg,
+ u16 seg_len)
+{
+ u32 *data = seg;
+ u32 data_len, chk_sz = sizeof(u32);
+ u32 i, idx_max;
+ u8 mbox_max_buf[MBOX_SEG_LEN] = {0};
+
+ /* The mbox data is written in 4-byte units; bounce unaligned messages through a local buffer. */
+ if (seg_len % chk_sz) {
+ memcpy(mbox_max_buf, seg, seg_len);
+ data = (u32 *)mbox_max_buf;
+ }
+
+ data_len = seg_len;
+ idx_max = ALIGN(data_len, chk_sz) / chk_sz;
+
+ for (i = 0; i < idx_max; i++) {
+ __raw_writel(cpu_to_be32(*(data + i)),
+ mbox->data + MBOX_HEADER_SZ + i * sizeof(u32));
+ }
+}
+
+static void write_mbox_msg_attr(struct hinic3_mbox *func_to_func,
+ u16 dst_func, u16 dst_aeqn, u16 seg_len)
+{
+ u32 mbox_int, mbox_ctrl;
+ u16 func = dst_func;
+
+ /* for a VF to PF message, the dest func id is learned by HW itself */
+ if (HINIC3_IS_VF(func_to_func->hwdev) && dst_func != HINIC3_MGMT_SRC_ID)
+ func = 0; /* the destination is the VF's PF */
+
+ mbox_int = HINIC3_MBOX_INT_SET(dst_aeqn, DST_AEQN) |
+ HINIC3_MBOX_INT_SET(0, SRC_RESP_AEQN) |
+ HINIC3_MBOX_INT_SET(NO_DMA_ATTRIBUTE_VAL, STAT_DMA) |
+ HINIC3_MBOX_INT_SET(ALIGN(seg_len + MBOX_HEADER_SZ,
+ MBOX_SEG_LEN_ALIGN) >> 2,
+ TX_SIZE) |
+ HINIC3_MBOX_INT_SET(STRONG_ORDER, STAT_DMA_SO_RO) |
+ HINIC3_MBOX_INT_SET(WRITE_BACK, WB_EN);
+
+ hinic3_hwif_write_reg(func_to_func->hwdev->hwif,
+ HINIC3_FUNC_CSR_MAILBOX_INT_OFFSET_OFF, mbox_int);
+
+ wmb(); /* ensure the int attributes are written before the control register */
+ mbox_ctrl = HINIC3_MBOX_CTRL_SET(TX_NOT_DONE, TX_STATUS);
+
+ mbox_ctrl |= HINIC3_MBOX_CTRL_SET(NOT_TRIGGER, TRIGGER_AEQE);
+
+ mbox_ctrl |= HINIC3_MBOX_CTRL_SET(func, DST_FUNC);
+
+ hinic3_hwif_write_reg(func_to_func->hwdev->hwif,
+ HINIC3_FUNC_CSR_MAILBOX_CONTROL_OFF, mbox_ctrl);
+}
+
+static void dump_mbox_reg(struct hinic3_hwdev *hwdev)
+{
+ u32 val;
+
+ val = hinic3_hwif_read_reg(hwdev->hwif,
+ HINIC3_FUNC_CSR_MAILBOX_CONTROL_OFF);
+ sdk_err(hwdev->dev_hdl, "Mailbox control reg: 0x%x\n", val);
+ val = hinic3_hwif_read_reg(hwdev->hwif,
+ HINIC3_FUNC_CSR_MAILBOX_INT_OFFSET_OFF);
+ sdk_err(hwdev->dev_hdl, "Mailbox interrupt offset: 0x%x\n", val);
+}
+
+static u16 get_mbox_status(const struct hinic3_send_mbox *mbox)
+{
+ /* write back is 16B, but only use first 4B */
+ u64 wb_val = be64_to_cpu(*mbox->wb_status);
+
+ rmb(); /* ensure the status is read before it is checked */
+
+ return (u16)(wb_val & MBOX_WB_STATUS_ERRCODE_MASK);
+}
+
+static enum hinic3_wait_return check_mbox_wb_status(void *priv_data)
+{
+ struct hinic3_mbox *func_to_func = priv_data;
+ u16 wb_status;
+
+ if (MBOX_MSG_CHANNEL_STOP(func_to_func) || !func_to_func->hwdev->chip_present_flag)
+ return WAIT_PROCESS_ERR;
+
+ wb_status = get_mbox_status(&func_to_func->send_mbox);
+
+ return MBOX_STATUS_FINISHED(wb_status) ?
+ WAIT_PROCESS_CPL : WAIT_PROCESS_WAITING;
+}
+
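+/*
+ * Write one segment through the mailbox window: clear the old write-back
+ * status, copy header plus data, program the doorbell registers, then
+ * poll the write-back status until the hardware reports completion.
+ */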
+static int send_mbox_seg(struct hinic3_mbox *func_to_func, u64 header,
+ u16 dst_func, void *seg, u16 seg_len, void *msg_info)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ u8 num_aeqs = hwdev->hwif->attr.num_aeqs;
+ u16 dst_aeqn, wb_status = 0, errcode;
+ u16 seq_dir = HINIC3_MSG_HEADER_GET(header, DIRECTION);
+ int err;
+
+ /* for mbox to the mgmt cpu, hardware doesn't care about the dst aeq id */
+ if (num_aeqs > HINIC3_MBOX_RSP_MSG_AEQ)
+ dst_aeqn = (seq_dir == HINIC3_MSG_DIRECT_SEND) ?
+ HINIC3_ASYNC_MSG_AEQ : HINIC3_MBOX_RSP_MSG_AEQ;
+ else
+ dst_aeqn = 0;
+
+ clear_mbox_status(send_mbox);
+
+ mbox_copy_header(hwdev, send_mbox, &header);
+
+ mbox_copy_send_data(hwdev, send_mbox, seg, seg_len);
+
+ write_mbox_msg_attr(func_to_func, dst_func, dst_aeqn, seg_len);
+
+ wmb(); /* writing the mbox msg attributes */
+
+ err = hinic3_wait_for_timeout(func_to_func, check_mbox_wb_status,
+ MBOX_MSG_POLLING_TIMEOUT, USEC_PER_MSEC);
+ wb_status = get_mbox_status(send_mbox);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Send mailbox segment timeout, wb status: 0x%x\n",
+ wb_status);
+ dump_mbox_reg(hwdev);
+ return -ETIMEDOUT;
+ }
+
+ if (!MBOX_STATUS_SUCCESS(wb_status)) {
+ sdk_err(hwdev->dev_hdl, "Send mailbox segment to function %u error, wb status: 0x%x\n",
+ dst_func, wb_status);
+ errcode = MBOX_STATUS_ERRCODE(wb_status);
+ return errcode ? errcode : -EFAULT;
+ }
+
+ return 0;
+}
+
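+/*
+ * Top-level send path: messages to the mgmt cpu may be handed over as a
+ * small DMA descriptor when segmentation isn't supported; everything else
+ * goes inline, MBOX_SEG_LEN bytes per segment, with SEQID incremented and
+ * LAST set on the final segment.
+ */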
+static int send_mbox_msg(struct hinic3_mbox *func_to_func, u8 mod, u16 cmd,
+ void *msg, u16 msg_len, u16 dst_func,
+ enum hinic3_msg_direction_type direction,
+ enum hinic3_msg_ack_type ack_type,
+ struct mbox_msg_info *msg_info)
+{
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ struct mbox_dma_msg dma_msg = {0};
+ enum hinic3_data_type data_type = HINIC3_DATA_INLINE;
+ int err = 0;
+ u32 seq_id = 0;
+ u16 seg_len = MBOX_SEG_LEN;
+ u16 rsp_aeq_id, left;
+ u8 *msg_seg = NULL;
+ u64 header = 0;
+
+ if (hwdev->poll || hwdev->hwif->attr.num_aeqs >= 0x2)
+ rsp_aeq_id = HINIC3_MBOX_RSP_MSG_AEQ;
+ else
+ rsp_aeq_id = 0;
+
+ mutex_lock(&func_to_func->msg_send_lock);
+
+ if (IS_DMA_MBX_MSG(dst_func) && !COMM_SUPPORT_MBOX_SEGMENT(hwdev)) {
+ err = mbox_prepare_dma_msg(func_to_func, ack_type, &dma_msg, msg, msg_len);
+ if (err != 0)
+ goto send_err;
+
+ msg = &dma_msg;
+ msg_len = sizeof(dma_msg);
+ data_type = HINIC3_DATA_DMA;
+ }
+
+ msg_seg = (u8 *)msg;
+ left = msg_len;
+
+ header = HINIC3_MSG_HEADER_SET(msg_len, MSG_LEN) |
+ HINIC3_MSG_HEADER_SET(mod, MODULE) |
+ HINIC3_MSG_HEADER_SET(seg_len, SEG_LEN) |
+ HINIC3_MSG_HEADER_SET(ack_type, NO_ACK) |
+ HINIC3_MSG_HEADER_SET(data_type, DATA_TYPE) |
+ HINIC3_MSG_HEADER_SET(SEQ_ID_START_VAL, SEQID) |
+ HINIC3_MSG_HEADER_SET(NOT_LAST_SEGMENT, LAST) |
+ HINIC3_MSG_HEADER_SET(direction, DIRECTION) |
+ HINIC3_MSG_HEADER_SET(cmd, CMD) |
+ HINIC3_MSG_HEADER_SET(msg_info->msg_id, MSG_ID) |
+ HINIC3_MSG_HEADER_SET(rsp_aeq_id, AEQ_ID) |
+ HINIC3_MSG_HEADER_SET(HINIC3_MSG_FROM_MBOX, SOURCE) |
+ HINIC3_MSG_HEADER_SET(!!msg_info->status, STATUS) |
+ /* the source global func id includes the VF's offset to its PF */
+ HINIC3_MSG_HEADER_SET(hinic3_global_func_id(hwdev),
+ SRC_GLB_FUNC_IDX);
+
+ while (!(HINIC3_MSG_HEADER_GET(header, LAST))) {
+ if (left <= MBOX_SEG_LEN) {
+ header &= ~MBOX_SEGLEN_MASK;
+ header |= HINIC3_MSG_HEADER_SET(left, SEG_LEN);
+ header |= HINIC3_MSG_HEADER_SET(LAST_SEGMENT, LAST);
+
+ seg_len = left;
+ }
+
+ err = send_mbox_seg(func_to_func, header, dst_func, msg_seg,
+ seg_len, msg_info);
+ if (err != 0) {
+ sdk_err(hwdev->dev_hdl, "Failed to send mbox seg, seq_id=0x%llx\n",
+ HINIC3_MSG_HEADER_GET(header, SEQID));
+ goto send_err;
+ }
+
+ left -= MBOX_SEG_LEN;
+ msg_seg += MBOX_SEG_LEN; /*lint !e662 */
+
+ seq_id++;
+ header &= ~(HINIC3_MSG_HEADER_SET(HINIC3_MSG_HEADER_SEQID_MASK,
+ SEQID));
+ header |= HINIC3_MSG_HEADER_SET(seq_id, SEQID);
+ }
+
+send_err:
+ mutex_unlock(&func_to_func->msg_send_lock);
+
+ return err;
+}
+
+static void set_mbox_to_func_event(struct hinic3_mbox *func_to_func,
+ enum mbox_event_state event_flag)
+{
+ spin_lock(&func_to_func->mbox_lock);
+ func_to_func->event_flag = event_flag;
+ spin_unlock(&func_to_func->mbox_lock);
+}
+
+static enum hinic3_wait_return check_mbox_msg_finish(void *priv_data)
+{
+ struct hinic3_mbox *func_to_func = priv_data;
+
+ if (MBOX_MSG_CHANNEL_STOP(func_to_func) || func_to_func->hwdev->chip_present_flag == 0)
+ return WAIT_PROCESS_ERR;
+
+ return (func_to_func->event_flag == EVENT_SUCCESS) ?
+ WAIT_PROCESS_CPL : WAIT_PROCESS_WAITING;
+}
+
+static int wait_mbox_msg_completion(struct hinic3_mbox *func_to_func,
+ u32 timeout)
+{
+ u32 wait_time;
+ int err;
+
+ wait_time = (timeout != 0) ? timeout : HINIC3_MBOX_COMP_TIME;
+ err = hinic3_wait_for_timeout(func_to_func, check_mbox_msg_finish,
+ wait_time, USEC_PER_MSEC);
+ if (err) {
+ set_mbox_to_func_event(func_to_func, EVENT_TIMEOUT);
+ return -ETIMEDOUT;
+ }
+
+ set_mbox_to_func_event(func_to_func, EVENT_END);
+
+ return 0;
+}
+
+#define TRY_MBOX_LOCK_SLEEP 1000
+static int send_mbox_msg_lock(struct hinic3_mbox *func_to_func, u16 channel)
+{
+ if (!func_to_func->lock_channel_en) {
+ mutex_lock(&func_to_func->mbox_send_lock);
+ return 0;
+ }
+
+ while (test_bit(channel, &func_to_func->channel_stop) == 0) {
+ if (mutex_trylock(&func_to_func->mbox_send_lock) != 0)
+ return 0;
+
+ usleep_range(TRY_MBOX_LOCK_SLEEP - 1, TRY_MBOX_LOCK_SLEEP);
+ }
+
+ return -EAGAIN;
+}
+
+static void send_mbox_msg_unlock(struct hinic3_mbox *func_to_func)
+{
+ mutex_unlock(&func_to_func->mbox_send_lock);
+}
+
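+/*
+ * Synchronous request/response: take the channel send lock, bump the msg
+ * id, send, then wait for the response path to flip the event flag to
+ * EVENT_SUCCESS before validating and copying the reply out.
+ */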
+int hinic3_mbox_to_func(struct hinic3_mbox *func_to_func, u8 mod, u16 cmd,
+ u16 dst_func, void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout, u16 channel)
+{
+ /* use the resp_msg descriptor to hold data responded from the other function */
+ struct hinic3_msg_desc *msg_desc = NULL;
+ struct mbox_msg_info msg_info = {0};
+ int err;
+
+ if (func_to_func->hwdev->chip_present_flag == 0)
+ return -EPERM;
+
+ /* expect response message */
+ msg_desc = get_mbox_msg_desc(func_to_func, HINIC3_MSG_RESPONSE,
+ dst_func);
+ if (!msg_desc)
+ return -EFAULT;
+
+ err = send_mbox_msg_lock(func_to_func, channel);
+ if (err)
+ return err;
+
+ func_to_func->cur_msg_channel = channel;
+ msg_info.msg_id = MBOX_MSG_ID_INC(func_to_func);
+
+ set_mbox_to_func_event(func_to_func, EVENT_START);
+
+ err = send_mbox_msg(func_to_func, mod, cmd, buf_in, in_size, dst_func,
+ HINIC3_MSG_DIRECT_SEND, HINIC3_MSG_ACK, &msg_info);
+ if (err) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Send mailbox mod %u, cmd %u failed, msg_id: %u, err: %d\n",
+ mod, cmd, msg_info.msg_id, err);
+ set_mbox_to_func_event(func_to_func, EVENT_FAIL);
+ goto send_err;
+ }
+
+ if (wait_mbox_msg_completion(func_to_func, timeout)) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Send mbox msg timeout, msg_id: %u\n", msg_info.msg_id);
+ hinic3_dump_aeq_info(func_to_func->hwdev);
+ err = -ETIMEDOUT;
+ goto send_err;
+ }
+
+ if (mod != msg_desc->mod || cmd != msg_desc->cmd) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Invalid response mbox message, mod: 0x%x, cmd: 0x%x, expect mod: 0x%x, cmd: 0x%x\n",
+ msg_desc->mod, msg_desc->cmd, mod, cmd);
+ err = -EFAULT;
+ goto send_err;
+ }
+
+ if (msg_desc->msg_info.status) {
+ err = msg_desc->msg_info.status;
+ goto send_err;
+ }
+
+ if (buf_out && out_size) {
+ if (*out_size < msg_desc->msg_len) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Invalid response mbox message length: %u for mod %d cmd %u, should less than: %u\n",
+ msg_desc->msg_len, mod, cmd, *out_size);
+ err = -EFAULT;
+ goto send_err;
+ }
+
+ if (msg_desc->msg_len)
+ memcpy(buf_out, msg_desc->msg, msg_desc->msg_len);
+
+ *out_size = msg_desc->msg_len;
+ }
+
+send_err:
+ send_mbox_msg_unlock(func_to_func);
+
+ return err;
+}
+
+static int mbox_func_params_valid(struct hinic3_mbox *func_to_func,
+ void *buf_in, u16 in_size, u16 channel)
+{
+ if (!buf_in || !in_size)
+ return -EINVAL;
+
+ if (in_size > HINIC3_MBOX_DATA_SIZE) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Mbox msg len %u exceed limit: [1, %u]\n",
+ in_size, HINIC3_MBOX_DATA_SIZE);
+ return -EINVAL;
+ }
+
+ if (channel >= HINIC3_CHANNEL_MAX) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Invalid channel id: 0x%x\n", channel);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hinic3_mbox_to_func_no_ack(struct hinic3_hwdev *hwdev, u16 func_idx,
+ u8 mod, u16 cmd, void *buf_in, u16 in_size,
+ u16 channel)
+{
+ struct mbox_msg_info msg_info = {0};
+ int err = mbox_func_params_valid(hwdev->func_to_func, buf_in, in_size,
+ channel);
+ if (err)
+ return err;
+
+ err = send_mbox_msg_lock(hwdev->func_to_func, channel);
+ if (err)
+ return err;
+
+ err = send_mbox_msg(hwdev->func_to_func, mod, cmd, buf_in, in_size,
+ func_idx, HINIC3_MSG_DIRECT_SEND,
+ HINIC3_MSG_NO_ACK, &msg_info);
+ if (err)
+ sdk_err(hwdev->dev_hdl, "Send mailbox no ack failed\n");
+
+ send_mbox_msg_unlock(hwdev->func_to_func);
+
+ return err;
+}
+
+int hinic3_send_mbox_to_mgmt(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout, u16 channel)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+ int err = mbox_func_params_valid(func_to_func, buf_in, in_size,
+ channel);
+ if (err)
+ return err;
+
+ /* TODO: MPU has not implemented this cmd yet */
+ if (mod == HINIC3_MOD_COMM && cmd == COMM_MGMT_CMD_SEND_API_ACK_BY_UP)
+ return 0;
+
+ return hinic3_mbox_to_func(func_to_func, mod, cmd, HINIC3_MGMT_SRC_ID,
+ buf_in, in_size, buf_out, out_size, timeout,
+ channel);
+}
+
+void hinic3_response_mbox_to_mgmt(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, u16 msg_id)
+{
+ struct mbox_msg_info msg_info;
+
+ msg_info.msg_id = (u8)msg_id;
+ msg_info.status = 0;
+
+ send_mbox_msg(hwdev->func_to_func, mod, cmd, buf_in, in_size,
+ HINIC3_MGMT_SRC_ID, HINIC3_MSG_RESPONSE,
+ HINIC3_MSG_NO_ACK, &msg_info);
+}
+
+int hinic3_send_mbox_to_mgmt_no_ack(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, u16 channel)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+ int err = mbox_func_params_valid(func_to_func, buf_in, in_size,
+ channel);
+ if (err)
+ return err;
+
+ return hinic3_mbox_to_func_no_ack(hwdev, HINIC3_MGMT_SRC_ID, mod, cmd,
+ buf_in, in_size, channel);
+}
+
+int hinic3_mbox_ppf_to_host(void *hwdev, u8 mod, u16 cmd, u8 host_id,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout, u16 channel)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ u16 dst_ppf_func;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (!(dev->chip_present_flag))
+ return -EPERM;
+
+ err = mbox_func_params_valid(dev->func_to_func, buf_in, in_size,
+ channel);
+ if (err)
+ return err;
+
+ if (!HINIC3_IS_PPF(dev)) {
+ sdk_err(dev->dev_hdl, "Params error, only ppf support send mbox to ppf. func_type: %d\n",
+ hinic3_func_type(dev));
+ return -EINVAL;
+ }
+
+ if (host_id >= HINIC3_MAX_HOST_NUM(dev) ||
+ host_id == HINIC3_PCI_INTF_IDX(dev->hwif)) {
+ sdk_err(dev->dev_hdl, "Params error, host id: %u\n", host_id);
+ return -EINVAL;
+ }
+
+ dst_ppf_func = hinic3_host_ppf_idx(dev, host_id);
+ if (dst_ppf_func >= HINIC3_MAX_PF_NUM(dev)) {
+ sdk_err(dev->dev_hdl, "Dest host(%u) have not elect ppf(0x%x).\n",
+ host_id, dst_ppf_func);
+ return -EINVAL;
+ }
+
+ return hinic3_mbox_to_func(dev->func_to_func, mod, cmd,
+ dst_ppf_func, buf_in, in_size,
+ buf_out, out_size, timeout, channel);
+}
+EXPORT_SYMBOL(hinic3_mbox_ppf_to_host);
+
+int hinic3_mbox_to_pf(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout, u16 channel)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (!(dev->chip_present_flag))
+ return -EPERM;
+
+ err = mbox_func_params_valid(dev->func_to_func, buf_in, in_size,
+ channel);
+ if (err)
+ return err;
+
+ if (!HINIC3_IS_VF(dev)) {
+ sdk_err(dev->dev_hdl, "Params error, func_type: %d\n",
+ hinic3_func_type(dev));
+ return -EINVAL;
+ }
+
+ return hinic3_mbox_to_func(dev->func_to_func, mod, cmd,
+ hinic3_pf_id_of_vf(dev), buf_in, in_size,
+ buf_out, out_size, timeout, channel);
+}
+EXPORT_SYMBOL(hinic3_mbox_to_pf);
+
+int hinic3_mbox_to_vf(void *hwdev, u16 vf_id, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size, u32 timeout,
+ u16 channel)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+ int err = 0;
+ u16 dst_func_idx;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ func_to_func = ((struct hinic3_hwdev *)hwdev)->func_to_func;
+ err = mbox_func_params_valid(func_to_func, buf_in, in_size, channel);
+ if (err != 0)
+ return err;
+
+ if (HINIC3_IS_VF((struct hinic3_hwdev *)hwdev)) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl, "Params error, func_type: %d\n",
+ hinic3_func_type(hwdev));
+ return -EINVAL;
+ }
+
+ if (!vf_id) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "VF id(%u) error!\n", vf_id);
+ return -EINVAL;
+ }
+
+ /* vf_offset_to_pf + vf_id is the VF's global function id within
+ * this PF
+ */
+ dst_func_idx = hinic3_glb_pf_vf_offset(hwdev) + vf_id;
+
+ return hinic3_mbox_to_func(func_to_func, mod, cmd, dst_func_idx, buf_in,
+ in_size, buf_out, out_size, timeout,
+ channel);
+}
+EXPORT_SYMBOL(hinic3_mbox_to_vf);
+
+int hinic3_mbox_set_channel_status(struct hinic3_hwdev *hwdev, u16 channel,
+ bool enable)
+{
+ if (channel >= HINIC3_CHANNEL_MAX) {
+ sdk_err(hwdev->dev_hdl, "Invalid channel id: 0x%x\n", channel);
+ return -EINVAL;
+ }
+
+ if (enable)
+ clear_bit(channel, &hwdev->func_to_func->channel_stop);
+ else
+ set_bit(channel, &hwdev->func_to_func->channel_stop);
+
+ sdk_info(hwdev->dev_hdl, "%s mbox channel 0x%x\n",
+ enable ? "Enable" : "Disable", channel);
+
+ return 0;
+}
+
+void hinic3_mbox_enable_channel_lock(struct hinic3_hwdev *hwdev, bool enable)
+{
+ hwdev->func_to_func->lock_channel_en = enable;
+
+ sdk_info(hwdev->dev_hdl, "%s mbox channel lock\n",
+ enable ? "Enable" : "Disable");
+}
+
+static int alloc_mbox_msg_channel(struct hinic3_msg_channel *msg_ch)
+{
+ msg_ch->resp_msg.msg = kzalloc(MBOX_MAX_BUF_SZ, GFP_KERNEL);
+ if (!msg_ch->resp_msg.msg)
+ return -ENOMEM;
+
+ msg_ch->recv_msg.msg = kzalloc(MBOX_MAX_BUF_SZ, GFP_KERNEL);
+ if (!msg_ch->recv_msg.msg) {
+ kfree(msg_ch->resp_msg.msg);
+ return -ENOMEM;
+ }
+
+ msg_ch->resp_msg.seq_id = SEQ_ID_MAX_VAL;
+ msg_ch->recv_msg.seq_id = SEQ_ID_MAX_VAL;
+ atomic_set(&msg_ch->recv_msg_cnt, 0);
+
+ return 0;
+}
+
+static void free_mbox_msg_channel(struct hinic3_msg_channel *msg_ch)
+{
+ kfree(msg_ch->recv_msg.msg);
+ kfree(msg_ch->resp_msg.msg);
+}
+
+static int init_mgmt_msg_channel(struct hinic3_mbox *func_to_func)
+{
+ int err;
+
+ err = alloc_mbox_msg_channel(&func_to_func->mgmt_msg);
+ if (err != 0) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Failed to alloc mgmt message channel\n");
+ return err;
+ }
+
+ err = hinic3_init_mbox_dma_queue(func_to_func);
+ if (err != 0) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Failed to init mbox dma queue\n");
+ free_mbox_msg_channel(&func_to_func->mgmt_msg);
+ }
+
+ return err;
+}
+
+static void deinit_mgmt_msg_channel(struct hinic3_mbox *func_to_func)
+{
+ hinic3_deinit_mbox_dma_queue(func_to_func);
+ free_mbox_msg_channel(&func_to_func->mgmt_msg);
+}
+
+int hinic3_mbox_init_host_msg_channel(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+ u8 host_num = HINIC3_MAX_HOST_NUM(hwdev);
+ int i, host_id, err;
+
+ if (host_num == 0)
+ return 0;
+
+ func_to_func->host_msg = kcalloc(host_num,
+ sizeof(*func_to_func->host_msg),
+ GFP_KERNEL);
+ if (!func_to_func->host_msg) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Failed to alloc host message array\n");
+ return -ENOMEM;
+ }
+
+ for (host_id = 0; host_id < host_num; host_id++) {
+ err = alloc_mbox_msg_channel(&func_to_func->host_msg[host_id]);
+ if (err) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Failed to alloc host %d message channel\n",
+ host_id);
+ goto alloc_msg_ch_err;
+ }
+ }
+
+ func_to_func->support_h2h_msg = true;
+
+ return 0;
+
+alloc_msg_ch_err:
+ for (i = 0; i < host_id; i++)
+ free_mbox_msg_channel(&func_to_func->host_msg[i]);
+
+ kfree(func_to_func->host_msg);
+ func_to_func->host_msg = NULL;
+
+ return -ENOMEM;
+}
+
+static void deinit_host_msg_channel(struct hinic3_mbox *func_to_func)
+{
+ int i;
+
+ if (!func_to_func->host_msg)
+ return;
+
+ for (i = 0; i < HINIC3_MAX_HOST_NUM(func_to_func->hwdev); i++)
+ free_mbox_msg_channel(&func_to_func->host_msg[i]);
+
+ kfree(func_to_func->host_msg);
+ func_to_func->host_msg = NULL;
+}
+
+int hinic3_init_func_mbox_msg_channel(void *hwdev, u16 num_func)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ struct hinic3_mbox *func_to_func = NULL;
+ u16 func_id, i;
+ int err;
+
+ if (!hwdev || !num_func || num_func > HINIC3_MAX_FUNCTIONS)
+ return -EINVAL;
+
+ func_to_func = dev->func_to_func;
+ if (func_to_func->func_msg)
+ return (func_to_func->num_func_msg == num_func) ? 0 : -EFAULT;
+
+ func_to_func->func_msg =
+ kcalloc(num_func, sizeof(*func_to_func->func_msg), GFP_KERNEL);
+ if (!func_to_func->func_msg) {
+ sdk_err(func_to_func->hwdev->dev_hdl, "Failed to alloc func message array\n");
+ return -ENOMEM;
+ }
+
+ for (func_id = 0; func_id < num_func; func_id++) {
+ err = alloc_mbox_msg_channel(&func_to_func->func_msg[func_id]);
+ if (err != 0) {
+ sdk_err(func_to_func->hwdev->dev_hdl,
+ "Failed to alloc func %hu message channel\n",
+ func_id);
+ goto alloc_msg_ch_err;
+ }
+ }
+
+ func_to_func->num_func_msg = num_func;
+
+ return 0;
+
+alloc_msg_ch_err:
+ for (i = 0; i < func_id; i++)
+ free_mbox_msg_channel(&func_to_func->func_msg[i]);
+
+ kfree(func_to_func->func_msg);
+ func_to_func->func_msg = NULL;
+
+ return -ENOMEM;
+}
+
+static void hinic3_deinit_func_mbox_msg_channel(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+ u16 i;
+
+ if (!func_to_func->func_msg)
+ return;
+
+ for (i = 0; i < func_to_func->num_func_msg; i++)
+ free_mbox_msg_channel(&func_to_func->func_msg[i]);
+
+ kfree(func_to_func->func_msg);
+ func_to_func->func_msg = NULL;
+}
+
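+/*
+ * Map a source function id to its message channel: the mgmt cpu, this
+ * VF's parent PF, one of this PF's VFs, or a peer host's PPF; the
+ * direction then selects the receive or response descriptor.
+ */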
+static struct hinic3_msg_desc *get_mbox_msg_desc(struct hinic3_mbox *func_to_func,
+ u64 dir, u64 src_func_id)
+{
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ struct hinic3_msg_channel *msg_ch = NULL;
+ u16 id;
+
+ if (src_func_id == HINIC3_MGMT_SRC_ID) {
+ msg_ch = &func_to_func->mgmt_msg;
+ } else if (HINIC3_IS_VF(hwdev)) {
+ /* message from pf */
+ msg_ch = func_to_func->func_msg;
+ if (src_func_id != hinic3_pf_id_of_vf(hwdev) || !msg_ch)
+ return NULL;
+ } else if (src_func_id > hinic3_glb_pf_vf_offset(hwdev)) {
+ /* message from vf */
+ id = (u16)(src_func_id - 1U) - hinic3_glb_pf_vf_offset(hwdev);
+ if (id >= func_to_func->num_func_msg)
+ return NULL;
+
+ msg_ch = &func_to_func->func_msg[id];
+ } else {
+ /* message from other host's ppf */
+ if (!func_to_func->support_h2h_msg)
+ return NULL;
+
+ for (id = 0; id < HINIC3_MAX_HOST_NUM(hwdev); id++) {
+ if (src_func_id == hinic3_host_ppf_idx(hwdev, (u8)id))
+ break;
+ }
+
+ if (id == HINIC3_MAX_HOST_NUM(hwdev) || !func_to_func->host_msg)
+ return NULL;
+
+ msg_ch = &func_to_func->host_msg[id];
+ }
+
+ return (dir == HINIC3_MSG_DIRECT_SEND) ?
+ &msg_ch->recv_msg : &msg_ch->resp_msg;
+}
+
+static void prepare_send_mbox(struct hinic3_mbox *func_to_func)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+
+ send_mbox->data = MBOX_AREA(func_to_func->hwdev->hwif);
+}
+
+static int alloc_mbox_wb_status(struct hinic3_mbox *func_to_func)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ u32 addr_h, addr_l;
+
+ send_mbox->wb_vaddr = dma_zalloc_coherent(hwdev->dev_hdl,
+ MBOX_WB_STATUS_LEN,
+ &send_mbox->wb_paddr,
+ GFP_KERNEL);
+ if (!send_mbox->wb_vaddr)
+ return -ENOMEM;
+
+ send_mbox->wb_status = send_mbox->wb_vaddr;
+
+ addr_h = upper_32_bits(send_mbox->wb_paddr);
+ addr_l = lower_32_bits(send_mbox->wb_paddr);
+
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_H_OFF,
+ addr_h);
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_L_OFF,
+ addr_l);
+
+ return 0;
+}
+
+static void free_mbox_wb_status(struct hinic3_mbox *func_to_func)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_H_OFF,
+ 0);
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_L_OFF,
+ 0);
+
+ dma_free_coherent(hwdev->dev_hdl, MBOX_WB_STATUS_LEN,
+ send_mbox->wb_vaddr, send_mbox->wb_paddr);
+}
+
+int hinic3_func_to_func_init(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_mbox *func_to_func;
+ int err = -ENOMEM;
+
+ func_to_func = kzalloc(sizeof(*func_to_func), GFP_KERNEL);
+ if (!func_to_func)
+ return -ENOMEM;
+
+ hwdev->func_to_func = func_to_func;
+ func_to_func->hwdev = hwdev;
+ mutex_init(&func_to_func->mbox_send_lock);
+ mutex_init(&func_to_func->msg_send_lock);
+ spin_lock_init(&func_to_func->mbox_lock);
+ func_to_func->workq = create_singlethread_workqueue(HINIC3_MBOX_WQ_NAME);
+ if (!func_to_func->workq) {
+ sdk_err(hwdev->dev_hdl, "Failed to initialize MBOX workqueue\n");
+ goto create_mbox_workq_err;
+ }
+
+ err = init_mgmt_msg_channel(func_to_func);
+ if (err)
+ goto init_mgmt_msg_ch_err;
+
+ if (HINIC3_IS_VF(hwdev)) {
+ /* VF to PF mbox message channel */
+ err = hinic3_init_func_mbox_msg_channel(hwdev, 1);
+ if (err)
+ goto init_func_msg_ch_err;
+ }
+
+ err = alloc_mbox_wb_status(func_to_func);
+ if (err) {
+ sdk_err(hwdev->dev_hdl, "Failed to alloc mbox write back status\n");
+ goto alloc_wb_status_err;
+ }
+
+ prepare_send_mbox(func_to_func);
+
+ return 0;
+
+alloc_wb_status_err:
+ if (HINIC3_IS_VF(hwdev))
+ hinic3_deinit_func_mbox_msg_channel(hwdev);
+
+init_func_msg_ch_err:
+ deinit_mgmt_msg_channel(func_to_func);
+
+init_mgmt_msg_ch_err:
+ destroy_workqueue(func_to_func->workq);
+
+create_mbox_workq_err:
+ spin_lock_deinit(&func_to_func->mbox_lock);
+ mutex_deinit(&func_to_func->msg_send_lock);
+ mutex_deinit(&func_to_func->mbox_send_lock);
+ kfree(func_to_func);
+
+ return err;
+}
+
+void hinic3_func_to_func_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+
+ /* destroy the workqueue before freeing related mbox resources, in
+ * case queued work accesses freed memory
+ */
+ destroy_workqueue(func_to_func->workq);
+
+ free_mbox_wb_status(func_to_func);
+ if (HINIC3_IS_PPF(hwdev))
+ deinit_host_msg_channel(func_to_func);
+ hinic3_deinit_func_mbox_msg_channel(hwdev);
+ deinit_mgmt_msg_channel(func_to_func);
+ spin_lock_deinit(&func_to_func->mbox_lock);
+ mutex_deinit(&func_to_func->mbox_send_lock);
+ mutex_deinit(&func_to_func->msg_send_lock);
+
+ kfree(func_to_func);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.h
new file mode 100644
index 000000000000..bf723e8a68fb
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mbox.h
@@ -0,0 +1,267 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_MBOX_H
+#define HINIC3_MBOX_H
+
+#include "hinic3_crm.h"
+#include "hinic3_hwdev.h"
+
+#define HINIC3_MBOX_PF_SEND_ERR 0x1
+
+#define HINIC3_MGMT_SRC_ID 0x1FFF
+#define HINIC3_MAX_FUNCTIONS 4096
+
+/* message header define */
+#define HINIC3_MSG_HEADER_SRC_GLB_FUNC_IDX_SHIFT 0
+#define HINIC3_MSG_HEADER_STATUS_SHIFT 13
+#define HINIC3_MSG_HEADER_SOURCE_SHIFT 15
+#define HINIC3_MSG_HEADER_AEQ_ID_SHIFT 16
+#define HINIC3_MSG_HEADER_MSG_ID_SHIFT 18
+#define HINIC3_MSG_HEADER_CMD_SHIFT 22
+
+#define HINIC3_MSG_HEADER_MSG_LEN_SHIFT 32
+#define HINIC3_MSG_HEADER_MODULE_SHIFT 43
+#define HINIC3_MSG_HEADER_SEG_LEN_SHIFT 48
+#define HINIC3_MSG_HEADER_NO_ACK_SHIFT 54
+#define HINIC3_MSG_HEADER_DATA_TYPE_SHIFT 55
+#define HINIC3_MSG_HEADER_SEQID_SHIFT 56
+#define HINIC3_MSG_HEADER_LAST_SHIFT 62
+#define HINIC3_MSG_HEADER_DIRECTION_SHIFT 63
+
+#define HINIC3_MSG_HEADER_SRC_GLB_FUNC_IDX_MASK 0x1FFF
+#define HINIC3_MSG_HEADER_STATUS_MASK 0x1
+#define HINIC3_MSG_HEADER_SOURCE_MASK 0x1
+#define HINIC3_MSG_HEADER_AEQ_ID_MASK 0x3
+#define HINIC3_MSG_HEADER_MSG_ID_MASK 0xF
+#define HINIC3_MSG_HEADER_CMD_MASK 0x3FF
+
+#define HINIC3_MSG_HEADER_MSG_LEN_MASK 0x7FF
+#define HINIC3_MSG_HEADER_MODULE_MASK 0x1F
+#define HINIC3_MSG_HEADER_SEG_LEN_MASK 0x3F
+#define HINIC3_MSG_HEADER_NO_ACK_MASK 0x1
+#define HINIC3_MSG_HEADER_DATA_TYPE_MASK 0x1
+#define HINIC3_MSG_HEADER_SEQID_MASK 0x3F
+#define HINIC3_MSG_HEADER_LAST_MASK 0x1
+#define HINIC3_MSG_HEADER_DIRECTION_MASK 0x1
+
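+/*
+ * Every mailbox message starts with a 64-bit header that packs the fields
+ * defined above; GET/SET shift and mask one named field at a time.
+ */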
+#define HINIC3_MSG_HEADER_GET(val, field) \
+ (((val) >> HINIC3_MSG_HEADER_##field##_SHIFT) & \
+ HINIC3_MSG_HEADER_##field##_MASK)
+#define HINIC3_MSG_HEADER_SET(val, field) \
+ ((u64)(((u64)(val)) & HINIC3_MSG_HEADER_##field##_MASK) << \
+ HINIC3_MSG_HEADER_##field##_SHIFT)
+
+#define IS_DMA_MBX_MSG(dst_func) ((dst_func) == HINIC3_MGMT_SRC_ID)
+
+enum hinic3_msg_direction_type {
+ HINIC3_MSG_DIRECT_SEND = 0,
+ HINIC3_MSG_RESPONSE = 1,
+};
+
+enum hinic3_msg_segment_type {
+ NOT_LAST_SEGMENT = 0,
+ LAST_SEGMENT = 1,
+};
+
+enum hinic3_msg_ack_type {
+ HINIC3_MSG_ACK,
+ HINIC3_MSG_NO_ACK,
+};
+
+enum hinic3_data_type {
+ HINIC3_DATA_INLINE = 0,
+ HINIC3_DATA_DMA = 1,
+};
+
+enum hinic3_msg_src_type {
+ HINIC3_MSG_FROM_MGMT = 0,
+ HINIC3_MSG_FROM_MBOX = 1,
+};
+
+enum hinic3_msg_aeq_type {
+ HINIC3_ASYNC_MSG_AEQ = 0,
+ /* tells the dest func or mgmt cpu which aeq to use for mbox responses */
+ HINIC3_MBOX_RSP_MSG_AEQ = 1,
+ /* tells the mgmt cpu which aeq to use for api cmd responses */
+ HINIC3_MGMT_RSP_MSG_AEQ = 2,
+};
+
+#define HINIC3_MBOX_WQ_NAME "hinic3_mbox"
+
+struct mbox_msg_info {
+ u8 msg_id;
+ u8 status; /* can only use 1 bit */
+};
+
+struct hinic3_msg_desc {
+ void *msg;
+ u16 msg_len;
+ u8 seq_id;
+ u8 mod;
+ u16 cmd;
+ struct mbox_msg_info msg_info;
+};
+
+struct hinic3_msg_channel {
+ struct hinic3_msg_desc resp_msg;
+ struct hinic3_msg_desc recv_msg;
+
+ atomic_t recv_msg_cnt;
+};
+
+/* Receive other functions mbox message */
+struct hinic3_recv_mbox {
+ void *msg;
+ u16 msg_len;
+ u8 msg_id;
+ u8 mod;
+ u16 cmd;
+ u16 src_func_idx;
+
+ enum hinic3_msg_ack_type ack_type;
+ u32 rsvd1;
+
+ void *resp_buff;
+};
+
+struct hinic3_send_mbox {
+ u8 *data;
+
+ u64 *wb_status; /* write back status */
+ void *wb_vaddr;
+ dma_addr_t wb_paddr;
+};
+
+enum mbox_event_state {
+ EVENT_START = 0,
+ EVENT_FAIL,
+ EVENT_SUCCESS,
+ EVENT_TIMEOUT,
+ EVENT_END,
+};
+
+enum hinic3_mbox_cb_state {
+ HINIC3_VF_MBOX_CB_REG = 0,
+ HINIC3_VF_MBOX_CB_RUNNING,
+ HINIC3_PF_MBOX_CB_REG,
+ HINIC3_PF_MBOX_CB_RUNNING,
+ HINIC3_PPF_MBOX_CB_REG,
+ HINIC3_PPF_MBOX_CB_RUNNING,
+ HINIC3_PPF_TO_PF_MBOX_CB_REG,
+ HINIC3_PPF_TO_PF_MBOX_CB_RUNNIG,
+};
+
+struct mbox_dma_msg {
+ u32 xor;
+ u32 dma_addr_high;
+ u32 dma_addr_low;
+ u32 msg_len;
+ u64 rsvd;
+};
+
+struct mbox_dma_queue {
+ void *dma_buff_vaddr;
+ dma_addr_t dma_buff_paddr;
+
+ u16 depth;
+ u16 prod_idx;
+ u16 cons_idx;
+};
+
+struct hinic3_mbox {
+ struct hinic3_hwdev *hwdev;
+
+ bool lock_channel_en;
+ unsigned long channel_stop;
+ u16 cur_msg_channel;
+ u32 rsvd1;
+
+ /* lock for send mbox message and ack message */
+ struct mutex mbox_send_lock;
+ /* lock for send mbox message */
+ struct mutex msg_send_lock;
+ struct hinic3_send_mbox send_mbox;
+
+ struct mbox_dma_queue sync_msg_queue;
+ struct mbox_dma_queue async_msg_queue;
+
+ struct workqueue_struct *workq;
+
+ struct hinic3_msg_channel mgmt_msg; /* driver and MGMT CPU */
+ struct hinic3_msg_channel *host_msg; /* PPF message between hosts */
+ struct hinic3_msg_channel *func_msg; /* PF to VF or VF to PF */
+ u16 num_func_msg;
+ bool support_h2h_msg; /* host to host */
+
+ /* vf receive pf/ppf callback */
+ hinic3_vf_mbox_cb vf_mbox_cb[HINIC3_MOD_MAX];
+ void *vf_mbox_data[HINIC3_MOD_MAX];
+ /* pf/ppf receive vf callback */
+ hinic3_pf_mbox_cb pf_mbox_cb[HINIC3_MOD_MAX];
+ void *pf_mbox_data[HINIC3_MOD_MAX];
+ /* ppf receive pf/ppf callback */
+ hinic3_ppf_mbox_cb ppf_mbox_cb[HINIC3_MOD_MAX];
+ void *ppf_mbox_data[HINIC3_MOD_MAX];
+ /* pf receive ppf callback */
+ hinic3_pf_recv_from_ppf_mbox_cb pf_recv_ppf_mbox_cb[HINIC3_MOD_MAX];
+ void *pf_recv_ppf_mbox_data[HINIC3_MOD_MAX];
+ unsigned long ppf_to_pf_mbox_cb_state[HINIC3_MOD_MAX];
+ unsigned long ppf_mbox_cb_state[HINIC3_MOD_MAX];
+ unsigned long pf_mbox_cb_state[HINIC3_MOD_MAX];
+ unsigned long vf_mbox_cb_state[HINIC3_MOD_MAX];
+
+ u8 send_msg_id;
+ u16 rsvd2;
+ enum mbox_event_state event_flag;
+ /* lock for mbox event flag */
+ spinlock_t mbox_lock;
+ u64 rsvd3;
+};
+
+struct hinic3_mbox_work {
+ struct work_struct work;
+ struct hinic3_mbox *func_to_func;
+ struct hinic3_recv_mbox *recv_mbox;
+ struct hinic3_msg_channel *msg_ch;
+};
+
+struct vf_cmd_check_handle {
+ u16 cmd;
+ bool (*check_cmd)(struct hinic3_hwdev *hwdev, u16 src_func_idx,
+ void *buf_in, u16 in_size);
+};
+
+void hinic3_mbox_func_aeqe_handler(void *handle, u8 *header, u8 size);
+
+bool hinic3_mbox_check_cmd_valid(struct hinic3_hwdev *hwdev,
+ struct vf_cmd_check_handle *cmd_handle,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size,
+ u8 size);
+
+int hinic3_func_to_func_init(struct hinic3_hwdev *hwdev);
+
+void hinic3_func_to_func_free(struct hinic3_hwdev *hwdev);
+
+int hinic3_send_mbox_to_mgmt(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout, u16 channel);
+
+void hinic3_response_mbox_to_mgmt(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, u16 msg_id);
+
+int hinic3_send_mbox_to_mgmt_no_ack(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, u16 channel);
+int hinic3_mbox_to_func(struct hinic3_mbox *func_to_func, u8 mod, u16 cmd,
+ u16 dst_func, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u32 timeout, u16 channel);
+
+int hinic3_mbox_init_host_msg_channel(struct hinic3_hwdev *hwdev);
+
+int hinic3_mbox_set_channel_status(struct hinic3_hwdev *hwdev, u16 channel,
+ bool enable);
+
+void hinic3_mbox_enable_channel_lock(struct hinic3_hwdev *hwdev, bool enable);
+
+#endif
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.c
new file mode 100644
index 000000000000..f633262e8b71
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.c
@@ -0,0 +1,1515 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/spinlock.h>
+#include <linux/completion.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/semaphore.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_common.h"
+#include "hinic3_comm_cmd.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_eqs.h"
+#include "hinic3_mbox.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_prof_adap.h"
+#include "hinic3_csr.h"
+#include "hinic3_mgmt.h"
+
+#define HINIC3_MSG_TO_MGMT_MAX_LEN 2016
+
+#define HINIC3_API_CHAIN_AEQ_ID 2
+#define MAX_PF_MGMT_BUF_SIZE 2048UL
+#define SEGMENT_LEN 48
+#define ASYNC_MSG_FLAG 0x8
+#define MGMT_MSG_MAX_SEQ_ID (ALIGN(HINIC3_MSG_TO_MGMT_MAX_LEN, \
+ SEGMENT_LEN) / SEGMENT_LEN)
+
+#define MGMT_MSG_LAST_SEG_MAX_LEN (MAX_PF_MGMT_BUF_SIZE - \
+ SEGMENT_LEN * MGMT_MSG_MAX_SEQ_ID)
+
+#define BUF_OUT_DEFAULT_SIZE 1
+
+#define MGMT_MSG_SIZE_MIN 20
+#define MGMT_MSG_SIZE_STEP 16
+#define MGMT_MSG_RSVD_FOR_DEV 8
+
+#define SYNC_MSG_ID_MASK 0x7
+#define ASYNC_MSG_ID_MASK 0x7
+
+#define SYNC_FLAG 0
+#define ASYNC_FLAG 1
+
+#define MSG_NO_RESP 0xFFFF
+
+#define MGMT_MSG_TIMEOUT 20000 /* millisecond */
+
+#define SYNC_MSG_ID(pf_to_mgmt) ((pf_to_mgmt)->sync_msg_id)
+
+#define SYNC_MSG_ID_INC(pf_to_mgmt) (SYNC_MSG_ID(pf_to_mgmt) = \
+ (SYNC_MSG_ID(pf_to_mgmt) + 1) & SYNC_MSG_ID_MASK)
+#define ASYNC_MSG_ID(pf_to_mgmt) ((pf_to_mgmt)->async_msg_id)
+
+#define ASYNC_MSG_ID_INC(pf_to_mgmt) (ASYNC_MSG_ID(pf_to_mgmt) = \
+ ((ASYNC_MSG_ID(pf_to_mgmt) + 1) & ASYNC_MSG_ID_MASK) \
+ | ASYNC_MSG_FLAG)
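+
+/*
+ * Sync and async messages use disjoint id spaces: sync ids wrap within
+ * 0..7, async ids wrap within 8..15 by keeping ASYNC_MSG_FLAG (bit 3) set.
+ */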
+
+static void pf_to_mgmt_send_event_set(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ int event_flag)
+{
+ spin_lock(&pf_to_mgmt->sync_event_lock);
+ pf_to_mgmt->event_flag = event_flag;
+ spin_unlock(&pf_to_mgmt->sync_event_lock);
+}
+
+/**
+ * hinic3_register_mgmt_msg_cb - register sync msg handler for a module
+ * @hwdev: the pointer to hw device
+ * @mod: module in the chip that this handler will handle its sync messages
+ * @pri_handle: specific mod's private data that will be used in callback
+ * @callback: the handler that will process this module's sync messages
+ **/
+int hinic3_register_mgmt_msg_cb(void *hwdev, u8 mod, void *pri_handle,
+ hinic3_mgmt_msg_cb callback)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+
+ if (mod >= HINIC3_MOD_HW_MAX || !hwdev)
+ return -EFAULT;
+
+ pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->pf_to_mgmt;
+ if (!pf_to_mgmt)
+ return -EINVAL;
+
+ pf_to_mgmt->recv_mgmt_msg_cb[mod] = callback;
+ pf_to_mgmt->recv_mgmt_msg_data[mod] = pri_handle;
+
+ set_bit(HINIC3_MGMT_MSG_CB_REG, &pf_to_mgmt->mgmt_msg_cb_state[mod]);
+
+ return 0;
+}
+EXPORT_SYMBOL(hinic3_register_mgmt_msg_cb);
+
+/**
+ * hinic3_unregister_mgmt_msg_cb - unregister sync msg handler for a module
+ * @hwdev: the pointer to hw device
+ * @mod: module in the chip that this handler will handle its sync messages
+ **/
+void hinic3_unregister_mgmt_msg_cb(void *hwdev, u8 mod)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+
+ if (!hwdev || mod >= HINIC3_MOD_HW_MAX)
+ return;
+
+ pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->pf_to_mgmt;
+ if (!pf_to_mgmt)
+ return;
+
+ clear_bit(HINIC3_MGMT_MSG_CB_REG, &pf_to_mgmt->mgmt_msg_cb_state[mod]);
+
+ while (test_bit(HINIC3_MGMT_MSG_CB_RUNNING,
+ &pf_to_mgmt->mgmt_msg_cb_state[mod]))
+ usleep_range(900, 1000); /* sleep 900 us ~ 1000 us */
+
+ pf_to_mgmt->recv_mgmt_msg_cb[mod] = NULL;
+ pf_to_mgmt->recv_mgmt_msg_data[mod] = NULL;
+}
+EXPORT_SYMBOL(hinic3_unregister_mgmt_msg_cb);
+
+/**
+ * mgmt_msg_len - calculate the total message length
+ * @msg_data_len: the length of the message data
+ * Return: the total message length
+ **/
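+/*
+ * Worked example: 32 bytes of data gives 8 (rsvd) + 8 (header) + 32 = 48,
+ * which rounds up to 20 + ALIGN(48 - 20, 16) = 52 bytes on the wire.
+ */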
+static u16 mgmt_msg_len(u16 msg_data_len)
+{
+ /* u64 - the size of the header */
+ u16 msg_size;
+
+ msg_size = (u16)(MGMT_MSG_RSVD_FOR_DEV + sizeof(u64) + msg_data_len);
+
+ if (msg_size > MGMT_MSG_SIZE_MIN)
+ msg_size = MGMT_MSG_SIZE_MIN +
+ ALIGN((msg_size - MGMT_MSG_SIZE_MIN),
+ MGMT_MSG_SIZE_STEP);
+ else
+ msg_size = MGMT_MSG_SIZE_MIN;
+
+ return msg_size;
+}
+
+/**
+ * prepare_header - prepare the header of the message
+ * @pf_to_mgmt: PF to MGMT channel
+ * @header: pointer of the header to prepare
+ * @msg_len: the length of the message
+ * @mod: module in the chip that will get the message
+ * @direction: the direction of the original message
+ * @msg_id: message id
+ **/
+static void prepare_header(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ u64 *header, u16 msg_len, u8 mod,
+ enum hinic3_msg_ack_type ack_type,
+ enum hinic3_msg_direction_type direction,
+ enum hinic3_mgmt_cmd cmd, u32 msg_id)
+{
+ struct hinic3_hwif *hwif = pf_to_mgmt->hwdev->hwif;
+
+ *header = HINIC3_MSG_HEADER_SET(msg_len, MSG_LEN) |
+ HINIC3_MSG_HEADER_SET(mod, MODULE) |
+ HINIC3_MSG_HEADER_SET(msg_len, SEG_LEN) |
+ HINIC3_MSG_HEADER_SET(ack_type, NO_ACK) |
+ HINIC3_MSG_HEADER_SET(HINIC3_DATA_INLINE, DATA_TYPE) |
+ HINIC3_MSG_HEADER_SET(0, SEQID) |
+ HINIC3_MSG_HEADER_SET(HINIC3_API_CHAIN_AEQ_ID, AEQ_ID) |
+ HINIC3_MSG_HEADER_SET(LAST_SEGMENT, LAST) |
+ HINIC3_MSG_HEADER_SET(direction, DIRECTION) |
+ HINIC3_MSG_HEADER_SET(cmd, CMD) |
+ HINIC3_MSG_HEADER_SET(HINIC3_MSG_FROM_MGMT, SOURCE) |
+ HINIC3_MSG_HEADER_SET(hwif->attr.func_global_idx,
+ SRC_GLB_FUNC_IDX) |
+ HINIC3_MSG_HEADER_SET(msg_id, MSG_ID);
+}
+
+static void clp_prepare_header(struct hinic3_hwdev *hwdev, u64 *header,
+ u16 msg_len, u8 mod,
+ enum hinic3_msg_ack_type ack_type,
+ enum hinic3_msg_direction_type direction,
+ enum hinic3_mgmt_cmd cmd, u32 msg_id)
+{
+ struct hinic3_hwif *hwif = hwdev->hwif;
+
+ *header = HINIC3_MSG_HEADER_SET(msg_len, MSG_LEN) |
+ HINIC3_MSG_HEADER_SET(mod, MODULE) |
+ HINIC3_MSG_HEADER_SET(msg_len, SEG_LEN) |
+ HINIC3_MSG_HEADER_SET(ack_type, NO_ACK) |
+ HINIC3_MSG_HEADER_SET(HINIC3_DATA_INLINE, DATA_TYPE) |
+ HINIC3_MSG_HEADER_SET(0, SEQID) |
+ HINIC3_MSG_HEADER_SET(HINIC3_API_CHAIN_AEQ_ID, AEQ_ID) |
+ HINIC3_MSG_HEADER_SET(LAST_SEGMENT, LAST) |
+ HINIC3_MSG_HEADER_SET(direction, DIRECTION) |
+ HINIC3_MSG_HEADER_SET(cmd, CMD) |
+ HINIC3_MSG_HEADER_SET(hwif->attr.func_global_idx,
+ SRC_GLB_FUNC_IDX) |
+ HINIC3_MSG_HEADER_SET(msg_id, MSG_ID);
+}
+
+/**
+ * prepare_mgmt_cmd - prepare the mgmt command
+ * @mgmt_cmd: pointer to the command to prepare
+ * @header: pointer of the header to prepare
+ * @msg: the data of the message
+ * @msg_len: the length of the message
+ **/
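+/* resulting buffer layout: [8B reserved for the device][8B header][payload] */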
+static void prepare_mgmt_cmd(u8 *mgmt_cmd, u64 *header, const void *msg,
+ int msg_len)
+{
+ u8 *mgmt_cmd_new = mgmt_cmd;
+
+ memset(mgmt_cmd_new, 0, MGMT_MSG_RSVD_FOR_DEV);
+
+ mgmt_cmd_new += MGMT_MSG_RSVD_FOR_DEV;
+ memcpy(mgmt_cmd_new, header, sizeof(*header));
+
+ mgmt_cmd_new += sizeof(*header);
+ memcpy(mgmt_cmd_new, msg, (size_t)(u32)msg_len);
+}
+
+/**
+ * send_msg_to_mgmt_sync - send sync message
+ * @pf_to_mgmt: PF to MGMT channel
+ * @mod: module in the chip that will get the message
+ * @cmd: command of the message
+ * @msg: the msg data
+ * @msg_len: the msg data length
+ * @direction: the direction of the original message
+ * @resp_msg_id: msg id to respond to
+ * Return: 0 - success, negative - failure
+ **/
+static int send_msg_to_mgmt_sync(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ u8 mod, u16 cmd, const void *msg, u16 msg_len,
+ enum hinic3_msg_ack_type ack_type,
+ enum hinic3_msg_direction_type direction,
+ u16 resp_msg_id)
+{
+ void *mgmt_cmd = pf_to_mgmt->sync_msg_buf;
+ struct hinic3_api_cmd_chain *chain = NULL;
+ u8 node_id = HINIC3_MGMT_CPU_NODE_ID(pf_to_mgmt->hwdev);
+ u64 header;
+ u16 cmd_size = mgmt_msg_len(msg_len);
+
+ if (hinic3_get_chip_present_flag(pf_to_mgmt->hwdev) == 0)
+ return -EFAULT;
+
+ if (cmd_size > HINIC3_MSG_TO_MGMT_MAX_LEN)
+ return -EFAULT;
+
+ if (direction == HINIC3_MSG_RESPONSE)
+ prepare_header(pf_to_mgmt, &header, msg_len, mod, ack_type,
+ direction, cmd, resp_msg_id);
+ else
+ prepare_header(pf_to_mgmt, &header, msg_len, mod, ack_type,
+ direction, cmd, SYNC_MSG_ID_INC(pf_to_mgmt));
+ chain = pf_to_mgmt->cmd_chain[HINIC3_API_CMD_WRITE_TO_MGMT_CPU];
+
+ if (ack_type == HINIC3_MSG_ACK)
+ pf_to_mgmt_send_event_set(pf_to_mgmt, SEND_EVENT_START);
+
+ prepare_mgmt_cmd((u8 *)mgmt_cmd, &header, msg, msg_len);
+
+ return hinic3_api_cmd_write(chain, node_id, mgmt_cmd, cmd_size);
+}
+
+/**
+ * send_msg_to_mgmt_async - send async message
+ * @pf_to_mgmt: PF to MGMT channel
+ * @mod: module in the chip that will get the message
+ * @cmd: command of the message
+ * @msg: the data of the message
+ * @msg_len: the length of the message
+ * @direction: the direction of the original message
+ * Return: 0 - success, negative - failure
+ **/
+static int send_msg_to_mgmt_async(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ u8 mod, u16 cmd, const void *msg, u16 msg_len,
+ enum hinic3_msg_direction_type direction)
+{
+ void *mgmt_cmd = pf_to_mgmt->async_msg_buf;
+ struct hinic3_api_cmd_chain *chain = NULL;
+ u8 node_id = HINIC3_MGMT_CPU_NODE_ID(pf_to_mgmt->hwdev);
+ u64 header;
+ u16 cmd_size = mgmt_msg_len(msg_len);
+
+ if (hinic3_get_chip_present_flag(pf_to_mgmt->hwdev) == 0)
+ return -EFAULT;
+
+ if (cmd_size > HINIC3_MSG_TO_MGMT_MAX_LEN)
+ return -EFAULT;
+
+ prepare_header(pf_to_mgmt, &header, msg_len, mod, HINIC3_MSG_NO_ACK,
+ direction, cmd, ASYNC_MSG_ID(pf_to_mgmt));
+
+ prepare_mgmt_cmd((u8 *)mgmt_cmd, &header, msg, msg_len);
+
+ chain = pf_to_mgmt->cmd_chain[HINIC3_API_CMD_WRITE_ASYNC_TO_MGMT_CPU];
+
+ return hinic3_api_cmd_write(chain, node_id, mgmt_cmd, cmd_size);
+}
+
+static inline void msg_to_mgmt_pre(u8 mod, void *buf_in)
+{
+ struct hinic3_msg_head *msg_head = NULL;
+
+ /* the aeq num is fixed to 3, so the response aeq id must be less than 3 */
+ if (mod == HINIC3_MOD_COMM || mod == HINIC3_MOD_L2NIC) {
+ msg_head = buf_in;
+
+ if (msg_head->resp_aeq_num >= HINIC3_MAX_AEQS)
+ msg_head->resp_aeq_num = 0;
+ }
+}
+
+int hinic3_pf_to_mgmt_sync(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+ void *dev = ((struct hinic3_hwdev *)hwdev)->dev_hdl;
+ struct hinic3_recv_msg *recv_msg = NULL;
+ struct completion *recv_done = NULL;
+ ulong timeo;
+ int err;
+ ulong ret;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ msg_to_mgmt_pre(mod, buf_in);
+
+ pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->pf_to_mgmt;
+
+ /* Lock the sync_msg_buf */
+ down(&pf_to_mgmt->sync_msg_lock);
+ recv_msg = &pf_to_mgmt->recv_resp_msg_from_mgmt;
+ recv_done = &recv_msg->recv_done;
+
+ init_completion(recv_done);
+
+ err = send_msg_to_mgmt_sync(pf_to_mgmt, mod, cmd, buf_in, in_size,
+ HINIC3_MSG_ACK, HINIC3_MSG_DIRECT_SEND,
+ MSG_NO_RESP);
+ if (err) {
+ sdk_err(dev, "Failed to send sync msg to mgmt, sync_msg_id: %u\n",
+ pf_to_mgmt->sync_msg_id);
+ pf_to_mgmt_send_event_set(pf_to_mgmt, SEND_EVENT_FAIL);
+ goto unlock_sync_msg;
+ }
+
+ timeo = msecs_to_jiffies(timeout ? timeout : MGMT_MSG_TIMEOUT);
+
+ ret = wait_for_completion_timeout(recv_done, timeo);
+ if (!ret) {
+ sdk_err(dev, "Mgmt response sync cmd timeout, sync_msg_id: %u\n",
+ pf_to_mgmt->sync_msg_id);
+ hinic3_dump_aeq_info((struct hinic3_hwdev *)hwdev);
+ err = -ETIMEDOUT;
+ pf_to_mgmt_send_event_set(pf_to_mgmt, SEND_EVENT_TIMEOUT);
+ goto unlock_sync_msg;
+ }
+
+ spin_lock(&pf_to_mgmt->sync_event_lock);
+ if (pf_to_mgmt->event_flag == SEND_EVENT_TIMEOUT) {
+ spin_unlock(&pf_to_mgmt->sync_event_lock);
+ err = -ETIMEDOUT;
+ goto unlock_sync_msg;
+ }
+ spin_unlock(&pf_to_mgmt->sync_event_lock);
+
+ pf_to_mgmt_send_event_set(pf_to_mgmt, SEND_EVENT_END);
+
+ if (!(((struct hinic3_hwdev *)hwdev)->chip_present_flag)) {
+ destroy_completion(recv_done);
+ up(&pf_to_mgmt->sync_msg_lock);
+ return -ETIMEDOUT;
+ }
+
+ if (buf_out && out_size) {
+ if (*out_size < recv_msg->msg_len) {
+ sdk_err(dev, "Invalid response message length: %u for mod %d cmd %u from mgmt, should less than: %u\n",
+ recv_msg->msg_len, mod, cmd, *out_size);
+ err = -EFAULT;
+ goto unlock_sync_msg;
+ }
+
+ if (recv_msg->msg_len)
+ memcpy(buf_out, recv_msg->msg, recv_msg->msg_len);
+
+ *out_size = recv_msg->msg_len;
+ }
+
+unlock_sync_msg:
+ destroy_completion(recv_done);
+ up(&pf_to_mgmt->sync_msg_lock);
+
+ return err;
+}
+
+int hinic3_pf_to_mgmt_async(void *hwdev, u8 mod, u16 cmd, const void *buf_in,
+ u16 in_size)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt;
+ void *dev = ((struct hinic3_hwdev *)hwdev)->dev_hdl;
+ int err;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->pf_to_mgmt;
+
+ /* Lock the async_msg_buf */
+ spin_lock_bh(&pf_to_mgmt->async_msg_lock);
+ ASYNC_MSG_ID_INC(pf_to_mgmt);
+
+ err = send_msg_to_mgmt_async(pf_to_mgmt, mod, cmd, buf_in, in_size,
+ HINIC3_MSG_DIRECT_SEND);
+ spin_unlock_bh(&pf_to_mgmt->async_msg_lock);
+
+ if (err) {
+ sdk_err(dev, "Failed to send async mgmt msg\n");
+ return err;
+ }
+
+ return 0;
+}
+
+int hinic3_pf_msg_to_mgmt_sync(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_get_chip_present_flag(hwdev) == 0)
+ return -EPERM;
+
+ if (in_size > HINIC3_MSG_TO_MGMT_MAX_LEN)
+ return -EINVAL;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ return hinic3_pf_to_mgmt_sync(hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size, timeout);
+}
+
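+/*
+ * Illustrative caller sketch for the exported sync interface; the struct
+ * type, command id and channel below are placeholders, not definitions
+ * from this driver:
+ *
+ *	struct comm_cmd_xxx cmd = {0};
+ *	u16 out_size = sizeof(cmd);
+ *	int err;
+ *
+ *	err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM, SOME_CMD_ID,
+ *				      &cmd, sizeof(cmd), &cmd, &out_size,
+ *				      0, some_channel);
+ *	if (err || cmd.head.status)
+ *		return err ? err : -EIO;
+ */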
+int hinic3_msg_to_mgmt_sync(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout, u16 channel)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_get_chip_present_flag(hwdev) == 0)
+ return -EPERM;
+
+ return hinic3_send_mbox_to_mgmt(hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size, timeout, channel);
+}
+EXPORT_SYMBOL(hinic3_msg_to_mgmt_sync);
+
+int hinic3_msg_to_mgmt_no_ack(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, u16 channel)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_get_chip_present_flag(hwdev) == 0)
+ return -EPERM;
+
+ return hinic3_send_mbox_to_mgmt_no_ack(hwdev, mod, cmd, buf_in,
+ in_size, channel);
+}
+EXPORT_SYMBOL(hinic3_msg_to_mgmt_no_ack);
+
+int hinic3_msg_to_mgmt_async(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, u16 channel)
+{
+ return hinic3_msg_to_mgmt_api_chain_async(hwdev, mod, cmd, buf_in,
+ in_size);
+}
+EXPORT_SYMBOL(hinic3_msg_to_mgmt_async);
+
+int hinic3_msg_to_mgmt_api_chain_sync(void *hwdev, u8 mod, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_get_chip_present_flag(hwdev) == 0)
+ return -EPERM;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev)) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+			"PF doesn't support api chain\n");
+ return -EPERM;
+ }
+
+ return hinic3_pf_msg_to_mgmt_sync(hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size, timeout);
+}
+
+int hinic3_msg_to_mgmt_api_chain_async(void *hwdev, u8 mod, u16 cmd,
+ const void *buf_in, u16 in_size)
+{
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF) {
+ err = -EFAULT;
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+			"VF doesn't support async cmd\n");
+ } else if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev)) {
+ err = -EPERM;
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+			"PF doesn't support api chain\n");
+ } else {
+ err = hinic3_pf_to_mgmt_async(hwdev, mod, cmd, buf_in, in_size);
+ }
+
+ return err;
+}
+EXPORT_SYMBOL(hinic3_msg_to_mgmt_api_chain_async);
+
+static void send_mgmt_ack(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ u8 mod, u16 cmd, void *buf_in, u16 in_size,
+ u16 msg_id)
+{
+ u16 buf_size;
+
+ if (!in_size)
+ buf_size = BUF_OUT_DEFAULT_SIZE;
+ else
+ buf_size = in_size;
+
+ hinic3_response_mbox_to_mgmt(pf_to_mgmt->hwdev, mod, cmd, buf_in,
+ buf_size, msg_id);
+}
+
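+/**
+ * mgmt_recv_msg_handler - dispatch a request from the mgmt cpu to the
+ * registered per-module callback and ack it when required
+ * @pf_to_mgmt: PF to MGMT channel
+ * @mod: module id of the message
+ * @cmd: command id of the message
+ * @buf_in: message payload
+ * @in_size: payload length in bytes
+ * @msg_id: message id used for the ack
+ * @need_resp: non-zero if the sender expects an ack
+ **/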
+static void mgmt_recv_msg_handler(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ u8 mod, u16 cmd, void *buf_in, u16 in_size,
+ u16 msg_id, int need_resp)
+{
+ void *dev = pf_to_mgmt->hwdev->dev_hdl;
+ void *buf_out = pf_to_mgmt->mgmt_ack_buf;
+ enum hinic3_mod_type tmp_mod = mod;
+ bool ack_first = false;
+ u16 out_size = 0;
+
+ memset(buf_out, 0, MAX_PF_MGMT_BUF_SIZE);
+
+ if (mod >= HINIC3_MOD_HW_MAX) {
+ sdk_warn(dev, "Receive illegal message from mgmt cpu, mod = %d\n",
+ mod);
+ goto unsupported;
+ }
+
+ set_bit(HINIC3_MGMT_MSG_CB_RUNNING,
+ &pf_to_mgmt->mgmt_msg_cb_state[tmp_mod]);
+
+ if (!pf_to_mgmt->recv_mgmt_msg_cb[mod] ||
+ !test_bit(HINIC3_MGMT_MSG_CB_REG,
+ &pf_to_mgmt->mgmt_msg_cb_state[tmp_mod])) {
+		sdk_warn(dev, "Receive mgmt callback is null, mod = %u, cmd = %u\n", mod, cmd);
+ clear_bit(HINIC3_MGMT_MSG_CB_RUNNING,
+ &pf_to_mgmt->mgmt_msg_cb_state[tmp_mod]);
+ goto unsupported;
+ }
+
+ pf_to_mgmt->recv_mgmt_msg_cb[tmp_mod](pf_to_mgmt->recv_mgmt_msg_data[tmp_mod],
+ cmd, buf_in, in_size,
+ buf_out, &out_size);
+
+ clear_bit(HINIC3_MGMT_MSG_CB_RUNNING,
+ &pf_to_mgmt->mgmt_msg_cb_state[tmp_mod]);
+
+ goto resp;
+
+unsupported:
+ out_size = sizeof(struct mgmt_msg_head);
+ ((struct mgmt_msg_head *)buf_out)->status = HINIC3_MGMT_CMD_UNSUPPORTED;
+
+resp:
+ if (!ack_first && need_resp)
+ send_mgmt_ack(pf_to_mgmt, mod, cmd, buf_out, out_size, msg_id);
+}
+
+/**
+ * mgmt_resp_msg_handler - handler for response message from mgmt cpu
+ * @pf_to_mgmt: PF to MGMT channel
+ * @recv_msg: received message details
+ **/
+static void mgmt_resp_msg_handler(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ struct hinic3_recv_msg *recv_msg)
+{
+ void *dev = pf_to_mgmt->hwdev->dev_hdl;
+
+	/* responses to async msgs carry ASYNC_MSG_FLAG and are dropped */
+ if (recv_msg->msg_id & ASYNC_MSG_FLAG)
+ return;
+
+ spin_lock(&pf_to_mgmt->sync_event_lock);
+ if (recv_msg->msg_id == pf_to_mgmt->sync_msg_id &&
+ pf_to_mgmt->event_flag == SEND_EVENT_START) {
+ pf_to_mgmt->event_flag = SEND_EVENT_SUCCESS;
+ complete(&recv_msg->recv_done);
+ } else if (recv_msg->msg_id != pf_to_mgmt->sync_msg_id) {
+		sdk_err(dev, "Send msg id(0x%x) recv msg id(0x%x) mismatch, event state=%d\n",
+ pf_to_mgmt->sync_msg_id, recv_msg->msg_id,
+ pf_to_mgmt->event_flag);
+ } else {
+ sdk_err(dev, "Wait timeout, send msg id(0x%x) recv msg id(0x%x), event state=%d!\n",
+ pf_to_mgmt->sync_msg_id, recv_msg->msg_id,
+ pf_to_mgmt->event_flag);
+ }
+ spin_unlock(&pf_to_mgmt->sync_event_lock);
+}
+
+static void recv_mgmt_msg_work_handler(struct work_struct *work)
+{
+ struct hinic3_mgmt_msg_handle_work *mgmt_work =
+ container_of(work, struct hinic3_mgmt_msg_handle_work, work);
+
+ mgmt_recv_msg_handler(mgmt_work->pf_to_mgmt, mgmt_work->mod,
+ mgmt_work->cmd, mgmt_work->msg,
+ mgmt_work->msg_len, mgmt_work->msg_id,
+ !mgmt_work->async_mgmt_to_pf);
+
+ destroy_work(&mgmt_work->work);
+
+ kfree(mgmt_work->msg);
+ kfree(mgmt_work);
+}
+
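+/*
+ * Long mgmt messages arrive as numbered segments of up to SEGMENT_LEN
+ * bytes. seq_id 0 starts a new message; each following segment must
+ * carry the previous seq_id + 1 and the same msg_id, otherwise the
+ * reassembly is rejected.
+ */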
+static bool check_mgmt_head_info(struct hinic3_recv_msg *recv_msg,
+ u8 seq_id, u8 seg_len, u16 msg_id)
+{
+ if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN ||
+ (seq_id == MGMT_MSG_MAX_SEQ_ID && seg_len > MGMT_MSG_LAST_SEG_MAX_LEN))
+ return false;
+
+ if (seq_id == 0) {
+ recv_msg->seq_id = seq_id;
+ recv_msg->msg_id = msg_id;
+ } else {
+ if (seq_id != recv_msg->seq_id + 1 || msg_id != recv_msg->msg_id)
+ return false;
+
+ recv_msg->seq_id = seq_id;
+ }
+
+ return true;
+}
+
+static void init_mgmt_msg_work(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ struct hinic3_recv_msg *recv_msg)
+{
+ struct hinic3_mgmt_msg_handle_work *mgmt_work = NULL;
+ struct hinic3_hwdev *hwdev = pf_to_mgmt->hwdev;
+
+ mgmt_work = kzalloc(sizeof(*mgmt_work), GFP_KERNEL);
+ if (!mgmt_work) {
+		sdk_err(hwdev->dev_hdl, "Failed to allocate mgmt work memory\n");
+ return;
+ }
+
+ if (recv_msg->msg_len) {
+ mgmt_work->msg = kzalloc(recv_msg->msg_len, GFP_KERNEL);
+ if (!mgmt_work->msg) {
+			sdk_err(hwdev->dev_hdl, "Failed to allocate mgmt msg memory\n");
+ kfree(mgmt_work);
+ return;
+		}
+
+		memcpy(mgmt_work->msg, recv_msg->msg, recv_msg->msg_len);
+	}
+
+	mgmt_work->pf_to_mgmt = pf_to_mgmt;
+	mgmt_work->msg_len = recv_msg->msg_len;
+ mgmt_work->msg_id = recv_msg->msg_id;
+ mgmt_work->mod = recv_msg->mod;
+ mgmt_work->cmd = recv_msg->cmd;
+ mgmt_work->async_mgmt_to_pf = recv_msg->async_mgmt_to_pf;
+
+ INIT_WORK(&mgmt_work->work, recv_mgmt_msg_work_handler);
+ queue_work_on(hisdk3_get_work_cpu_affinity(hwdev, WORK_TYPE_MGMT_MSG),
+ pf_to_mgmt->workq, &mgmt_work->work);
+}
+
+/**
+ * recv_mgmt_msg_handler - handle a message from the mgmt cpu
+ * @pf_to_mgmt: PF to MGMT channel
+ * @header: the header of the message
+ * @recv_msg: received message details
+ **/
+static void recv_mgmt_msg_handler(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ u8 *header, struct hinic3_recv_msg *recv_msg)
+{
+ struct hinic3_hwdev *hwdev = pf_to_mgmt->hwdev;
+ u64 mbox_header = *((u64 *)header);
+ void *msg_body = header + sizeof(mbox_header);
+ u8 seq_id, seq_len;
+ u16 msg_id;
+ u32 offset;
+ u64 dir;
+
+ /* Don't need to get anything from hw when cmd is async */
+ dir = HINIC3_MSG_HEADER_GET(mbox_header, DIRECTION);
+ if (dir == HINIC3_MSG_RESPONSE &&
+ (HINIC3_MSG_HEADER_GET(mbox_header, MSG_ID) & ASYNC_MSG_FLAG))
+ return;
+
+ seq_len = HINIC3_MSG_HEADER_GET(mbox_header, SEG_LEN);
+ seq_id = HINIC3_MSG_HEADER_GET(mbox_header, SEQID);
+ msg_id = HINIC3_MSG_HEADER_GET(mbox_header, MSG_ID);
+ if (!check_mgmt_head_info(recv_msg, seq_id, seq_len, msg_id)) {
+ sdk_err(hwdev->dev_hdl, "Mgmt msg sequence id and segment length check failed\n");
+ sdk_err(hwdev->dev_hdl,
+			"Front seq_id: 0x%x, current seq_id: 0x%x, seg len: 0x%x, front msg_id: %d, cur: %d\n",
+ recv_msg->seq_id, seq_id, seq_len, recv_msg->msg_id, msg_id);
+ /* set seq_id to invalid seq_id */
+ recv_msg->seq_id = MGMT_MSG_MAX_SEQ_ID;
+ return;
+ }
+
+ offset = seq_id * SEGMENT_LEN;
+ memcpy((u8 *)recv_msg->msg + offset, msg_body, seq_len);
+
+ if (!HINIC3_MSG_HEADER_GET(mbox_header, LAST))
+ return;
+
+ recv_msg->cmd = HINIC3_MSG_HEADER_GET(mbox_header, CMD);
+ recv_msg->mod = HINIC3_MSG_HEADER_GET(mbox_header, MODULE);
+ recv_msg->async_mgmt_to_pf = HINIC3_MSG_HEADER_GET(mbox_header,
+ NO_ACK);
+ recv_msg->msg_len = HINIC3_MSG_HEADER_GET(mbox_header, MSG_LEN);
+ recv_msg->msg_id = msg_id;
+ recv_msg->seq_id = MGMT_MSG_MAX_SEQ_ID;
+
+ if (HINIC3_MSG_HEADER_GET(mbox_header, DIRECTION) ==
+ HINIC3_MSG_RESPONSE) {
+ mgmt_resp_msg_handler(pf_to_mgmt, recv_msg);
+ return;
+ }
+
+ init_mgmt_msg_work(pf_to_mgmt, recv_msg);
+}
+
+/**
+ * hinic3_mgmt_msg_aeqe_handler - handler for a mgmt message event
+ * @hwdev: the pointer to hw device
+ * @header: the header of the message
+ * @size: size of the event data, only used for mbox events
+ **/
+void hinic3_mgmt_msg_aeqe_handler(void *hwdev, u8 *header, u8 size)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+ struct hinic3_recv_msg *recv_msg = NULL;
+ bool is_send_dir = false;
+
+ if ((HINIC3_MSG_HEADER_GET(*(u64 *)header, SOURCE) ==
+ HINIC3_MSG_FROM_MBOX)) {
+ hinic3_mbox_func_aeqe_handler(hwdev, header, size);
+ return;
+ }
+
+ pf_to_mgmt = dev->pf_to_mgmt;
+ if (!pf_to_mgmt)
+ return;
+
+	is_send_dir = HINIC3_MSG_HEADER_GET(*(u64 *)header, DIRECTION) ==
+		      HINIC3_MSG_DIRECT_SEND;
+
+ recv_msg = is_send_dir ? &pf_to_mgmt->recv_msg_from_mgmt :
+ &pf_to_mgmt->recv_resp_msg_from_mgmt;
+
+ recv_mgmt_msg_handler(pf_to_mgmt, header, recv_msg);
+}
+
+/**
+ * alloc_recv_msg - allocate received message memory
+ * @recv_msg: pointer that will hold the allocated data
+ * Return: 0 - success, negative - failure
+ **/
+static int alloc_recv_msg(struct hinic3_recv_msg *recv_msg)
+{
+ recv_msg->seq_id = MGMT_MSG_MAX_SEQ_ID;
+
+ recv_msg->msg = kzalloc(MAX_PF_MGMT_BUF_SIZE, GFP_KERNEL);
+ if (!recv_msg->msg)
+ return -ENOMEM;
+
+ return 0;
+}
+
+/**
+ * free_recv_msg - free received message memory
+ * @recv_msg: pointer that holds the allocated data
+ **/
+static void free_recv_msg(struct hinic3_recv_msg *recv_msg)
+{
+ kfree(recv_msg->msg);
+}
+
+/**
+ * alloc_msg_buf - allocate all the message buffers of PF to MGMT channel
+ * @pf_to_mgmt: PF to MGMT channel
+ * Return: 0 - success, negative - failure
+ **/
+static int alloc_msg_buf(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt)
+{
+ int err;
+ void *dev = pf_to_mgmt->hwdev->dev_hdl;
+
+ err = alloc_recv_msg(&pf_to_mgmt->recv_msg_from_mgmt);
+ if (err) {
+ sdk_err(dev, "Failed to allocate recv msg\n");
+ return err;
+ }
+
+ err = alloc_recv_msg(&pf_to_mgmt->recv_resp_msg_from_mgmt);
+ if (err) {
+ sdk_err(dev, "Failed to allocate resp recv msg\n");
+ goto alloc_msg_for_resp_err;
+ }
+
+ pf_to_mgmt->async_msg_buf = kzalloc(MAX_PF_MGMT_BUF_SIZE, GFP_KERNEL);
+ if (!pf_to_mgmt->async_msg_buf) {
+ err = -ENOMEM;
+ goto async_msg_buf_err;
+ }
+
+ pf_to_mgmt->sync_msg_buf = kzalloc(MAX_PF_MGMT_BUF_SIZE, GFP_KERNEL);
+ if (!pf_to_mgmt->sync_msg_buf) {
+ err = -ENOMEM;
+ goto sync_msg_buf_err;
+ }
+
+ pf_to_mgmt->mgmt_ack_buf = kzalloc(MAX_PF_MGMT_BUF_SIZE, GFP_KERNEL);
+ if (!pf_to_mgmt->mgmt_ack_buf) {
+ err = -ENOMEM;
+ goto ack_msg_buf_err;
+ }
+
+ return 0;
+
+ack_msg_buf_err:
+ kfree(pf_to_mgmt->sync_msg_buf);
+
+sync_msg_buf_err:
+ kfree(pf_to_mgmt->async_msg_buf);
+
+async_msg_buf_err:
+ free_recv_msg(&pf_to_mgmt->recv_resp_msg_from_mgmt);
+
+alloc_msg_for_resp_err:
+ free_recv_msg(&pf_to_mgmt->recv_msg_from_mgmt);
+ return err;
+}
+
+/**
+ * free_msg_buf - free all the message buffers of PF to MGMT channel
+ * @pf_to_mgmt: PF to MGMT channel
+ **/
+static void free_msg_buf(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt)
+{
+ kfree(pf_to_mgmt->mgmt_ack_buf);
+ kfree(pf_to_mgmt->sync_msg_buf);
+ kfree(pf_to_mgmt->async_msg_buf);
+
+ free_recv_msg(&pf_to_mgmt->recv_resp_msg_from_mgmt);
+ free_recv_msg(&pf_to_mgmt->recv_msg_from_mgmt);
+}
+
+/**
+ * hinic3_pf_to_mgmt_init - initialize PF to MGMT channel
+ * @hwdev: the pointer to hw device
+ * Return: 0 - success, negative - failure
+ **/
+int hinic3_pf_to_mgmt_init(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt;
+ void *dev = hwdev->dev_hdl;
+ int err;
+
+ pf_to_mgmt = kzalloc(sizeof(*pf_to_mgmt), GFP_KERNEL);
+ if (!pf_to_mgmt)
+ return -ENOMEM;
+
+ hwdev->pf_to_mgmt = pf_to_mgmt;
+ pf_to_mgmt->hwdev = hwdev;
+ spin_lock_init(&pf_to_mgmt->async_msg_lock);
+ spin_lock_init(&pf_to_mgmt->sync_event_lock);
+ sema_init(&pf_to_mgmt->sync_msg_lock, 1);
+ pf_to_mgmt->workq = create_singlethread_workqueue(HINIC3_MGMT_WQ_NAME);
+ if (!pf_to_mgmt->workq) {
+ sdk_err(dev, "Failed to initialize MGMT workqueue\n");
+ err = -ENOMEM;
+ goto create_mgmt_workq_err;
+ }
+
+ err = alloc_msg_buf(pf_to_mgmt);
+ if (err) {
+ sdk_err(dev, "Failed to allocate msg buffers\n");
+ goto alloc_msg_buf_err;
+ }
+
+ err = hinic3_api_cmd_init(hwdev, pf_to_mgmt->cmd_chain);
+ if (err) {
+ sdk_err(dev, "Failed to init the api cmd chains\n");
+ goto api_cmd_init_err;
+ }
+
+ return 0;
+
+api_cmd_init_err:
+ free_msg_buf(pf_to_mgmt);
+
+alloc_msg_buf_err:
+ destroy_workqueue(pf_to_mgmt->workq);
+
+create_mgmt_workq_err:
+ spin_lock_deinit(&pf_to_mgmt->sync_event_lock);
+ spin_lock_deinit(&pf_to_mgmt->async_msg_lock);
+ sema_deinit(&pf_to_mgmt->sync_msg_lock);
+ kfree(pf_to_mgmt);
+
+ return err;
+}
+
+/**
+ * hinic3_pf_to_mgmt_free - free PF to MGMT channel
+ * @hwdev: the pointer to hw device
+ **/
+void hinic3_pf_to_mgmt_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = hwdev->pf_to_mgmt;
+
+ /* destroy workqueue before free related pf_to_mgmt resources in case of
+ * illegal resource access
+ */
+ destroy_workqueue(pf_to_mgmt->workq);
+ hinic3_api_cmd_free(hwdev, pf_to_mgmt->cmd_chain);
+
+ free_msg_buf(pf_to_mgmt);
+ spin_lock_deinit(&pf_to_mgmt->sync_event_lock);
+ spin_lock_deinit(&pf_to_mgmt->async_msg_lock);
+ sema_deinit(&pf_to_mgmt->sync_msg_lock);
+ kfree(pf_to_mgmt);
+}
+
+void hinic3_flush_mgmt_workq(void *hwdev)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+
+ flush_workqueue(dev->aeqs->workq);
+
+ if (hinic3_func_type(dev) != TYPE_VF)
+ flush_workqueue(dev->pf_to_mgmt->workq);
+}
+
+int hinic3_api_cmd_read_ack(void *hwdev, u8 dest, const void *cmd,
+ u16 size, void *ack, u16 ack_size)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+ struct hinic3_api_cmd_chain *chain = NULL;
+
+ if (!hwdev || !cmd || (ack_size && !ack) || size > MAX_PF_MGMT_BUF_SIZE)
+ return -EINVAL;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->pf_to_mgmt;
+ chain = pf_to_mgmt->cmd_chain[HINIC3_API_CMD_POLL_READ];
+
+ if (!(((struct hinic3_hwdev *)hwdev)->chip_present_flag))
+ return -EPERM;
+
+ return hinic3_api_cmd_read(chain, dest, cmd, size, ack, ack_size);
+}
+
+/*
+ * API cmd write/read bypass uses polling by default; to use the aeq
+ * interrupt instead, set wb_trigger_aeqe to 1.
+ */
+int hinic3_api_cmd_write_nack(void *hwdev, u8 dest, const void *cmd, u16 size)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+ struct hinic3_api_cmd_chain *chain = NULL;
+
+ if (!hwdev || !size || !cmd || size > MAX_PF_MGMT_BUF_SIZE)
+ return -EINVAL;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->pf_to_mgmt;
+ chain = pf_to_mgmt->cmd_chain[HINIC3_API_CMD_POLL_WRITE];
+
+ if (!(((struct hinic3_hwdev *)hwdev)->chip_present_flag))
+ return -EPERM;
+
+ return hinic3_api_cmd_write(chain, dest, cmd, size);
+}
+
+static int get_clp_reg(void *hwdev, enum clp_data_type data_type,
+ enum clp_reg_type reg_type, u32 *reg_addr)
+{
+ switch (reg_type) {
+ case HINIC3_CLP_BA_HOST:
+ *reg_addr = (data_type == HINIC3_CLP_REQ_HOST) ?
+ HINIC3_CLP_REG(REQBASE) :
+ HINIC3_CLP_REG(RSPBASE);
+ break;
+
+ case HINIC3_CLP_SIZE_HOST:
+ *reg_addr = HINIC3_CLP_REG(SIZE);
+ break;
+
+ case HINIC3_CLP_LEN_HOST:
+ *reg_addr = (data_type == HINIC3_CLP_REQ_HOST) ?
+ HINIC3_CLP_REG(REQ) : HINIC3_CLP_REG(RSP);
+ break;
+
+ case HINIC3_CLP_START_REQ_HOST:
+ *reg_addr = HINIC3_CLP_REG(REQ);
+ break;
+
+ case HINIC3_CLP_READY_RSP_HOST:
+ *reg_addr = HINIC3_CLP_REG(RSP);
+ break;
+
+ default:
+ *reg_addr = 0;
+ break;
+ }
+ if (*reg_addr == 0)
+ return -EINVAL;
+
+ return 0;
+}
+
+static inline int clp_param_valid(struct hinic3_hwdev *hwdev,
+ enum clp_data_type data_type,
+ enum clp_reg_type reg_type)
+{
+ if (data_type == HINIC3_CLP_REQ_HOST &&
+ reg_type == HINIC3_CLP_READY_RSP_HOST)
+ return -EINVAL;
+
+ if (data_type == HINIC3_CLP_RSP_HOST &&
+ reg_type == HINIC3_CLP_START_REQ_HOST)
+ return -EINVAL;
+
+ return 0;
+}
+
+static u32 get_clp_reg_value(struct hinic3_hwdev *hwdev,
+ enum clp_data_type data_type,
+ enum clp_reg_type reg_type, u32 reg_addr)
+{
+ u32 value;
+
+ value = hinic3_hwif_read_reg(hwdev->hwif, reg_addr);
+
+ switch (reg_type) {
+ case HINIC3_CLP_BA_HOST:
+ value = ((value >> HINIC3_CLP_OFFSET(BASE)) &
+ HINIC3_CLP_MASK(BASE));
+ break;
+
+ case HINIC3_CLP_SIZE_HOST:
+ if (data_type == HINIC3_CLP_REQ_HOST)
+ value = ((value >> HINIC3_CLP_OFFSET(REQ_SIZE)) &
+ HINIC3_CLP_MASK(SIZE));
+ else
+ value = ((value >> HINIC3_CLP_OFFSET(RSP_SIZE)) &
+ HINIC3_CLP_MASK(SIZE));
+ break;
+
+ case HINIC3_CLP_LEN_HOST:
+ value = ((value >> HINIC3_CLP_OFFSET(LEN)) &
+ HINIC3_CLP_MASK(LEN));
+ break;
+
+ case HINIC3_CLP_START_REQ_HOST:
+ value = ((value >> HINIC3_CLP_OFFSET(START)) &
+ HINIC3_CLP_MASK(START));
+ break;
+
+ case HINIC3_CLP_READY_RSP_HOST:
+ value = ((value >> HINIC3_CLP_OFFSET(READY)) &
+ HINIC3_CLP_MASK(READY));
+ break;
+
+ default:
+ break;
+ }
+
+ return value;
+}
+
+static int hinic3_read_clp_reg(struct hinic3_hwdev *hwdev,
+ enum clp_data_type data_type,
+ enum clp_reg_type reg_type, u32 *read_value)
+{
+ u32 reg_addr;
+ int err;
+
+ err = clp_param_valid(hwdev, data_type, reg_type);
+ if (err)
+ return err;
+
+ err = get_clp_reg(hwdev, data_type, reg_type, ®_addr);
+ if (err)
+ return err;
+
+ *read_value = get_clp_reg_value(hwdev, data_type, reg_type, reg_addr);
+
+ return 0;
+}
+
+static int check_data_type(enum clp_data_type data_type,
+ enum clp_reg_type reg_type)
+{
+ if (data_type == HINIC3_CLP_REQ_HOST &&
+ reg_type == HINIC3_CLP_READY_RSP_HOST)
+ return -EINVAL;
+ if (data_type == HINIC3_CLP_RSP_HOST &&
+ reg_type == HINIC3_CLP_START_REQ_HOST)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int check_reg_value(enum clp_reg_type reg_type, u32 value)
+{
+ if (reg_type == HINIC3_CLP_BA_HOST &&
+ value > HINIC3_CLP_SRAM_BASE_REG_MAX)
+ return -EINVAL;
+
+ if (reg_type == HINIC3_CLP_SIZE_HOST &&
+ value > HINIC3_CLP_SRAM_SIZE_REG_MAX)
+ return -EINVAL;
+
+ if (reg_type == HINIC3_CLP_LEN_HOST &&
+ value > HINIC3_CLP_LEN_REG_MAX)
+ return -EINVAL;
+
+ if ((reg_type == HINIC3_CLP_START_REQ_HOST ||
+ reg_type == HINIC3_CLP_READY_RSP_HOST) &&
+ value > HINIC3_CLP_START_OR_READY_REG_MAX)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int hinic3_check_clp_init_status(struct hinic3_hwdev *hwdev)
+{
+ int err;
+ u32 reg_value = 0;
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_REQ_HOST,
+ HINIC3_CLP_BA_HOST, ®_value);
+ if (err || !reg_value) {
+ sdk_err(hwdev->dev_hdl, "Wrong req ba value: 0x%x\n",
+ reg_value);
+ return -EINVAL;
+ }
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_BA_HOST, ®_value);
+ if (err || !reg_value) {
+ sdk_err(hwdev->dev_hdl, "Wrong rsp ba value: 0x%x\n",
+ reg_value);
+ return -EINVAL;
+ }
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_REQ_HOST,
+ HINIC3_CLP_SIZE_HOST, ®_value);
+ if (err || !reg_value) {
+ sdk_err(hwdev->dev_hdl, "Wrong req size\n");
+ return -EINVAL;
+ }
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_SIZE_HOST, ®_value);
+ if (err || !reg_value) {
+ sdk_err(hwdev->dev_hdl, "Wrong rsp size\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static void hinic3_write_clp_reg(struct hinic3_hwdev *hwdev,
+ enum clp_data_type data_type,
+ enum clp_reg_type reg_type, u32 value)
+{
+ u32 reg_addr, reg_value;
+
+ if (check_data_type(data_type, reg_type))
+ return;
+
+ if (check_reg_value(reg_type, value))
+ return;
+
+ if (get_clp_reg(hwdev, data_type, reg_type, ®_addr))
+ return;
+
+ reg_value = hinic3_hwif_read_reg(hwdev->hwif, reg_addr);
+
+ switch (reg_type) {
+ case HINIC3_CLP_LEN_HOST:
+ reg_value = reg_value &
+ (~(HINIC3_CLP_MASK(LEN) << HINIC3_CLP_OFFSET(LEN)));
+ reg_value = reg_value | (value << HINIC3_CLP_OFFSET(LEN));
+ break;
+
+ case HINIC3_CLP_START_REQ_HOST:
+ reg_value = reg_value &
+ (~(HINIC3_CLP_MASK(START) <<
+ HINIC3_CLP_OFFSET(START)));
+ reg_value = reg_value | (value << HINIC3_CLP_OFFSET(START));
+ break;
+
+ case HINIC3_CLP_READY_RSP_HOST:
+ reg_value = reg_value &
+ (~(HINIC3_CLP_MASK(READY) <<
+ HINIC3_CLP_OFFSET(READY)));
+ reg_value = reg_value | (value << HINIC3_CLP_OFFSET(READY));
+ break;
+
+ default:
+ return;
+ }
+
+ hinic3_hwif_write_reg(hwdev->hwif, reg_addr, reg_value);
+}
+
+static int hinic3_read_clp_data(struct hinic3_hwdev *hwdev,
+ void *buf_out, u16 *out_size)
+{
+ int err;
+ u32 reg = HINIC3_CLP_DATA(RSP);
+ u32 ready, delay_cnt;
+ u32 *ptr = (u32 *)buf_out;
+ u32 temp_out_size = 0;
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_READY_RSP_HOST, &ready);
+ if (err)
+ return err;
+
+ delay_cnt = 0;
+ while (ready == 0) {
+ usleep_range(9000, 10000); /* sleep 9000 us ~ 10000 us */
+ delay_cnt++;
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_READY_RSP_HOST, &ready);
+ if (err || delay_cnt > HINIC3_CLP_DELAY_CNT_MAX) {
+ sdk_err(hwdev->dev_hdl, "Timeout with delay_cnt: %u\n",
+ delay_cnt);
+ return -EINVAL;
+ }
+ }
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_LEN_HOST, &temp_out_size);
+ if (err)
+ return err;
+
+ if (temp_out_size > HINIC3_CLP_SRAM_SIZE_REG_MAX || !temp_out_size) {
+ sdk_err(hwdev->dev_hdl, "Invalid temp_out_size: %u\n",
+ temp_out_size);
+ return -EINVAL;
+ }
+
+ *out_size = (u16)temp_out_size;
+ for (; temp_out_size > 0; temp_out_size--) {
+ *ptr = hinic3_hwif_read_reg(hwdev->hwif, reg);
+ ptr++;
+ /* read 4 bytes every time */
+ reg = reg + 4;
+ }
+
+ hinic3_write_clp_reg(hwdev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_READY_RSP_HOST, (u32)0x0);
+ hinic3_write_clp_reg(hwdev, HINIC3_CLP_RSP_HOST, HINIC3_CLP_LEN_HOST,
+ (u32)0x0);
+
+ return 0;
+}
+
+static int hinic3_write_clp_data(struct hinic3_hwdev *hwdev,
+ void *buf_in, u16 in_size)
+{
+ int err;
+ u32 reg = HINIC3_CLP_DATA(REQ);
+ u32 start = 1;
+ u32 delay_cnt = 0;
+ u32 *ptr = (u32 *)buf_in;
+ u16 size_in = in_size;
+
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_REQ_HOST,
+ HINIC3_CLP_START_REQ_HOST, &start);
+ if (err != 0)
+ return err;
+
+ while (start == 1) {
+ usleep_range(9000, 10000); /* sleep 9000 us ~ 10000 us */
+ delay_cnt++;
+ err = hinic3_read_clp_reg(hwdev, HINIC3_CLP_REQ_HOST,
+ HINIC3_CLP_START_REQ_HOST, &start);
+ if (err || delay_cnt > HINIC3_CLP_DELAY_CNT_MAX)
+ return -EINVAL;
+ }
+
+ hinic3_write_clp_reg(hwdev, HINIC3_CLP_REQ_HOST,
+ HINIC3_CLP_LEN_HOST, size_in);
+ hinic3_write_clp_reg(hwdev, HINIC3_CLP_REQ_HOST,
+ HINIC3_CLP_START_REQ_HOST, (u32)0x1);
+
+ for (; size_in > 0; size_in--) {
+ hinic3_hwif_write_reg(hwdev->hwif, reg, *ptr);
+ ptr++;
+ reg = reg + sizeof(u32);
+ }
+
+ return 0;
+}
+
+static void hinic3_clear_clp_data(struct hinic3_hwdev *hwdev,
+ enum clp_data_type data_type)
+{
+ u32 reg = (data_type == HINIC3_CLP_REQ_HOST) ?
+ HINIC3_CLP_DATA(REQ) : HINIC3_CLP_DATA(RSP);
+ u32 count = HINIC3_CLP_INPUT_BUF_LEN_HOST / HINIC3_CLP_DATA_UNIT_HOST;
+
+ for (; count > 0; count--) {
+ hinic3_hwif_write_reg(hwdev->hwif, reg, 0x0);
+ reg = reg + sizeof(u32);
+ }
+}
+
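+/*
+ * CLP request/response flow: build a header + payload image in
+ * clp_msg_buf, padded to HINIC3_CLP_DATA_UNIT_HOST units, write it out
+ * through the request data registers, then poll the response registers
+ * and copy the payload (without the header) back to the caller.
+ */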
+int hinic3_pf_clp_to_mgmt(void *hwdev, u8 mod, u16 cmd, const void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct hinic3_clp_pf_to_mgmt *clp_pf_to_mgmt;
+ struct hinic3_hwdev *dev = hwdev;
+ u64 header;
+ u16 real_size;
+ u8 *clp_msg_buf;
+ int err;
+
+ if (!COMM_SUPPORT_CLP(dev))
+ return -EPERM;
+
+ clp_pf_to_mgmt = ((struct hinic3_hwdev *)hwdev)->clp_pf_to_mgmt;
+ if (!clp_pf_to_mgmt)
+ return -EPERM;
+
+ clp_msg_buf = clp_pf_to_mgmt->clp_msg_buf;
+
+ /* 4 bytes alignment */
+ if (in_size % HINIC3_CLP_DATA_UNIT_HOST)
+ real_size = (in_size + (u16)sizeof(header) +
+ HINIC3_CLP_DATA_UNIT_HOST);
+ else
+ real_size = in_size + (u16)sizeof(header);
+ real_size = real_size / HINIC3_CLP_DATA_UNIT_HOST;
+
+ if (real_size >
+ (HINIC3_CLP_INPUT_BUF_LEN_HOST / HINIC3_CLP_DATA_UNIT_HOST)) {
+ sdk_err(dev->dev_hdl, "Invalid real_size: %u\n", real_size);
+ return -EINVAL;
+ }
+
+	down(&clp_pf_to_mgmt->clp_msg_lock);
+
+ err = hinic3_check_clp_init_status(dev);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Check clp init status failed\n");
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return err;
+ }
+
+ hinic3_clear_clp_data(dev, HINIC3_CLP_RSP_HOST);
+ hinic3_write_clp_reg(dev, HINIC3_CLP_RSP_HOST,
+ HINIC3_CLP_READY_RSP_HOST, 0x0);
+
+ /* Send request */
+ memset(clp_msg_buf, 0x0, HINIC3_CLP_INPUT_BUF_LEN_HOST);
+ clp_prepare_header(dev, &header, in_size, mod, 0, 0, cmd, 0);
+
+ memcpy(clp_msg_buf, &header, sizeof(header));
+ clp_msg_buf += sizeof(header);
+ memcpy(clp_msg_buf, buf_in, in_size);
+
+ clp_msg_buf = clp_pf_to_mgmt->clp_msg_buf;
+
+ hinic3_clear_clp_data(dev, HINIC3_CLP_REQ_HOST);
+ err = hinic3_write_clp_data(hwdev,
+ clp_pf_to_mgmt->clp_msg_buf, real_size);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Send clp request failed\n");
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return -EINVAL;
+ }
+
+ /* Get response */
+ clp_msg_buf = clp_pf_to_mgmt->clp_msg_buf;
+ memset(clp_msg_buf, 0x0, HINIC3_CLP_INPUT_BUF_LEN_HOST);
+ err = hinic3_read_clp_data(hwdev, clp_msg_buf, &real_size);
+ hinic3_clear_clp_data(dev, HINIC3_CLP_RSP_HOST);
+ if (err) {
+ sdk_err(dev->dev_hdl, "Read clp response failed\n");
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return -EINVAL;
+ }
+
+ real_size = (u16)((real_size * HINIC3_CLP_DATA_UNIT_HOST) & 0xffff);
+ if (real_size <= sizeof(header) || real_size > HINIC3_CLP_INPUT_BUF_LEN_HOST) {
+		sdk_err(dev->dev_hdl, "Invalid response size: %u\n", real_size);
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return -EINVAL;
+ }
+ real_size = real_size - sizeof(header);
+ if (real_size != *out_size) {
+ sdk_err(dev->dev_hdl, "Invalid real_size:%u, out_size: %u\n",
+ real_size, *out_size);
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+ return -EINVAL;
+ }
+
+ memcpy(buf_out, (clp_msg_buf + sizeof(header)), real_size);
+ up(&clp_pf_to_mgmt->clp_msg_lock);
+
+ return 0;
+}
+
+int hinic3_clp_to_mgmt(void *hwdev, u8 mod, u16 cmd, const void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ int err;
+
+ if (!dev)
+ return -EINVAL;
+
+ if (!dev->chip_present_flag)
+ return -EPERM;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return -EINVAL;
+
+ if (!COMM_SUPPORT_CLP(dev))
+ return -EPERM;
+
+ err = hinic3_pf_clp_to_mgmt(dev, mod, cmd, buf_in, in_size, buf_out,
+ out_size);
+
+ return err;
+}
+
+int hinic3_clp_pf_to_mgmt_init(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_clp_pf_to_mgmt *clp_pf_to_mgmt;
+
+ if (!COMM_SUPPORT_CLP(hwdev))
+ return 0;
+
+ clp_pf_to_mgmt = kzalloc(sizeof(*clp_pf_to_mgmt), GFP_KERNEL);
+ if (!clp_pf_to_mgmt)
+ return -ENOMEM;
+
+ clp_pf_to_mgmt->clp_msg_buf = kzalloc(HINIC3_CLP_INPUT_BUF_LEN_HOST,
+ GFP_KERNEL);
+ if (!clp_pf_to_mgmt->clp_msg_buf) {
+ kfree(clp_pf_to_mgmt);
+ return -ENOMEM;
+ }
+ sema_init(&clp_pf_to_mgmt->clp_msg_lock, 1);
+
+ hwdev->clp_pf_to_mgmt = clp_pf_to_mgmt;
+
+ return 0;
+}
+
+void hinic3_clp_pf_to_mgmt_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_clp_pf_to_mgmt *clp_pf_to_mgmt = hwdev->clp_pf_to_mgmt;
+
+ if (!COMM_SUPPORT_CLP(hwdev))
+ return;
+
+ sema_deinit(&clp_pf_to_mgmt->clp_msg_lock);
+ kfree(clp_pf_to_mgmt->clp_msg_buf);
+ kfree(clp_pf_to_mgmt);
+}
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.h
new file mode 100644
index 000000000000..ad86a82e7040
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_mgmt.h
@@ -0,0 +1,179 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_MGMT_H
+#define HINIC3_MGMT_H
+
+#include <linux/types.h>
+#include <linux/completion.h>
+#include <linux/semaphore.h>
+#include <linux/spinlock.h>
+#include <linux/workqueue.h>
+
+#include "comm_defs.h"
+#include "hinic3_hw.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_hwdev.h"
+
+#define HINIC3_MGMT_WQ_NAME "hinic3_mgmt"
+
+#define HINIC3_CLP_REG_GAP 0x20
+#define HINIC3_CLP_INPUT_BUF_LEN_HOST 4096UL
+#define HINIC3_CLP_DATA_UNIT_HOST 4UL
+
+enum clp_data_type {
+ HINIC3_CLP_REQ_HOST = 0,
+ HINIC3_CLP_RSP_HOST = 1
+};
+
+enum clp_reg_type {
+ HINIC3_CLP_BA_HOST = 0,
+ HINIC3_CLP_SIZE_HOST = 1,
+ HINIC3_CLP_LEN_HOST = 2,
+ HINIC3_CLP_START_REQ_HOST = 3,
+ HINIC3_CLP_READY_RSP_HOST = 4
+};
+
+#define HINIC3_CLP_REQ_SIZE_OFFSET 0
+#define HINIC3_CLP_RSP_SIZE_OFFSET 16
+#define HINIC3_CLP_BASE_OFFSET 0
+#define HINIC3_CLP_LEN_OFFSET 0
+#define HINIC3_CLP_START_OFFSET 31
+#define HINIC3_CLP_READY_OFFSET 31
+#define HINIC3_CLP_OFFSET(member) (HINIC3_CLP_##member##_OFFSET)
+
+#define HINIC3_CLP_SIZE_MASK 0x7ffUL
+#define HINIC3_CLP_BASE_MASK 0x7ffffffUL
+#define HINIC3_CLP_LEN_MASK 0x7ffUL
+#define HINIC3_CLP_START_MASK 0x1UL
+#define HINIC3_CLP_READY_MASK 0x1UL
+#define HINIC3_CLP_MASK(member) (HINIC3_CLP_##member##_MASK)
+
+#define HINIC3_CLP_DELAY_CNT_MAX 200UL
+#define HINIC3_CLP_SRAM_SIZE_REG_MAX 0x3ff
+#define HINIC3_CLP_SRAM_BASE_REG_MAX 0x7ffffff
+#define HINIC3_CLP_LEN_REG_MAX 0x3ff
+#define HINIC3_CLP_START_OR_READY_REG_MAX 0x1
+
+struct hinic3_recv_msg {
+ void *msg;
+
+ u16 msg_len;
+ u16 rsvd1;
+ enum hinic3_mod_type mod;
+
+ u16 cmd;
+ u8 seq_id;
+ u8 rsvd2;
+ u16 msg_id;
+ u16 rsvd3;
+
+ int async_mgmt_to_pf;
+ u32 rsvd4;
+
+ struct completion recv_done;
+};
+
+struct hinic3_msg_head {
+ u8 status;
+ u8 version;
+ u8 resp_aeq_num;
+ u8 rsvd0[5];
+};
+
+enum comm_pf_to_mgmt_event_state {
+ SEND_EVENT_UNINIT = 0,
+ SEND_EVENT_START,
+ SEND_EVENT_SUCCESS,
+ SEND_EVENT_FAIL,
+ SEND_EVENT_TIMEOUT,
+ SEND_EVENT_END,
+};
+
+enum hinic3_mgmt_msg_cb_state {
+ HINIC3_MGMT_MSG_CB_REG = 0,
+ HINIC3_MGMT_MSG_CB_RUNNING,
+};
+
+struct hinic3_clp_pf_to_mgmt {
+ struct semaphore clp_msg_lock;
+ void *clp_msg_buf;
+};
+
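+/*
+ * Per-PF management channel context: sync senders hold sync_msg_lock
+ * across the whole request/response exchange, while async senders only
+ * take async_msg_lock around the send; async responses are dropped by
+ * the receive path.
+ */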
+struct hinic3_msg_pf_to_mgmt {
+ struct hinic3_hwdev *hwdev;
+
+	/* the async msg path must not sleep, so it is spinlock-protected */
+ spinlock_t async_msg_lock;
+ struct semaphore sync_msg_lock;
+
+ struct workqueue_struct *workq;
+
+ void *async_msg_buf;
+ void *sync_msg_buf;
+ void *mgmt_ack_buf;
+
+ struct hinic3_recv_msg recv_msg_from_mgmt;
+ struct hinic3_recv_msg recv_resp_msg_from_mgmt;
+
+ u16 async_msg_id;
+ u16 sync_msg_id;
+ u32 rsvd1;
+ struct hinic3_api_cmd_chain *cmd_chain[HINIC3_API_CMD_MAX];
+
+ hinic3_mgmt_msg_cb recv_mgmt_msg_cb[HINIC3_MOD_HW_MAX];
+ void *recv_mgmt_msg_data[HINIC3_MOD_HW_MAX];
+ unsigned long mgmt_msg_cb_state[HINIC3_MOD_HW_MAX];
+
+ void *async_msg_cb_data[HINIC3_MOD_HW_MAX];
+
+ /* lock when sending msg */
+ spinlock_t sync_event_lock;
+ enum comm_pf_to_mgmt_event_state event_flag;
+ u64 rsvd2;
+};
+
+struct hinic3_mgmt_msg_handle_work {
+ struct work_struct work;
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt;
+
+ void *msg;
+ u16 msg_len;
+ u16 rsvd1;
+
+ enum hinic3_mod_type mod;
+ u16 cmd;
+ u16 msg_id;
+
+ int async_mgmt_to_pf;
+};
+
+void hinic3_mgmt_msg_aeqe_handler(void *hwdev, u8 *header, u8 size);
+
+int hinic3_pf_to_mgmt_init(struct hinic3_hwdev *hwdev);
+
+void hinic3_pf_to_mgmt_free(struct hinic3_hwdev *hwdev);
+
+int hinic3_pf_to_mgmt_sync(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout);
+int hinic3_pf_to_mgmt_async(void *hwdev, u8 mod, u16 cmd, const void *buf_in,
+ u16 in_size);
+
+int hinic3_pf_msg_to_mgmt_sync(void *hwdev, u8 mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout);
+
+int hinic3_api_cmd_read_ack(void *hwdev, u8 dest, const void *cmd, u16 size,
+ void *ack, u16 ack_size);
+
+int hinic3_api_cmd_write_nack(void *hwdev, u8 dest, const void *cmd, u16 size);
+
+int hinic3_pf_clp_to_mgmt(void *hwdev, u8 mod, u16 cmd, const void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size);
+
+int hinic3_clp_pf_to_mgmt_init(struct hinic3_hwdev *hwdev);
+
+void hinic3_clp_pf_to_mgmt_free(struct hinic3_hwdev *hwdev);
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.c
new file mode 100644
index 000000000000..bd39ee7d54a7
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.c
@@ -0,0 +1,974 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <net/sock.h>
+#include <linux/cdev.h>
+#include <linux/device.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+
+#include "ossl_knl.h"
+#include "hinic3_mt.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_hw_cfg.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_lld.h"
+#include "hinic3_hw_mt.h"
+#include "hinic3_nictool.h"
+
+static int g_nictool_ref_cnt;
+
+static dev_t g_dev_id = {0};
+/*lint -save -e104 -e808*/
+static struct class *g_nictool_class;
+/*lint -restore*/
+static struct cdev g_nictool_cdev;
+
+#define HINIC3_MAX_BUF_SIZE (2048 * 1024)
+
+void *g_card_node_array[MAX_CARD_NUM] = {0};
+void *g_card_vir_addr[MAX_CARD_NUM] = {0};
+u64 g_card_phy_addr[MAX_CARD_NUM] = {0};
+int card_id;
+
+#define HIADM3_DEV_PATH "/dev/hinic3_nictool_dev"
+#define HIADM3_DEV_CLASS "hinic3_nictool_class"
+#define HIADM3_DEV_NAME "hinic3_nictool_dev"
+
+typedef int (*hw_driv_module)(struct hinic3_lld_dev *lld_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size);
+struct hw_drv_module_handle {
+ enum driver_cmd_type driv_cmd_name;
+ hw_driv_module driv_func;
+};
+
+static int get_single_card_info(struct hinic3_lld_dev *lld_dev, const void *buf_in,
+ u32 in_size, void *buf_out, u32 *out_size)
+{
+ if (!buf_out || *out_size != sizeof(struct card_info)) {
+ pr_err("buf_out is NULL, or out_size != %lu\n", sizeof(struct card_info));
+ return -EINVAL;
+ }
+
+ hinic3_get_card_info(hinic3_get_sdk_hwdev_by_lld(lld_dev), buf_out);
+
+ return 0;
+}
+
+static int is_driver_in_vm(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ bool in_host = false;
+
+ if (!buf_out || (*out_size != sizeof(u8))) {
+ pr_err("buf_out is NULL, or out_size != %lu\n", sizeof(u8));
+ return -EINVAL;
+ }
+
+ in_host = hinic3_is_in_host();
+ if (in_host)
+ *((u8 *)buf_out) = 0;
+ else
+ *((u8 *)buf_out) = 1;
+
+ return 0;
+}
+
+static int get_all_chip_id_cmd(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ if (*out_size != sizeof(struct nic_card_id) || !buf_out) {
+ pr_err("Invalid parameter: out_buf_size %u, expect %lu\n",
+ *out_size, sizeof(struct nic_card_id));
+ return -EFAULT;
+ }
+
+ hinic3_get_all_chip_id(buf_out);
+
+ return 0;
+}
+
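+/*
+ * Reserve one physically contiguous buffer of 2^DBGTOOL_PAGE_ORDER pages
+ * per card so user-space tools can mmap the api chain area; the pages
+ * are marked reserved to keep them out of normal page management.
+ */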
+static int get_card_usr_api_chain_mem(int card_idx)
+{
+ unsigned char *tmp = NULL;
+ int i;
+
+ card_id = card_idx;
+ if (!g_card_vir_addr[card_idx]) {
+ g_card_vir_addr[card_idx] =
+ (void *)__get_free_pages(GFP_KERNEL,
+ DBGTOOL_PAGE_ORDER);
+ if (!g_card_vir_addr[card_idx]) {
+			pr_err("Failed to alloc api chain memory for card %d\n", card_idx);
+ return -EFAULT;
+ }
+
+ memset(g_card_vir_addr[card_idx], 0,
+ PAGE_SIZE * (1 << DBGTOOL_PAGE_ORDER));
+
+ g_card_phy_addr[card_idx] =
+ virt_to_phys(g_card_vir_addr[card_idx]);
+ if (!g_card_phy_addr[card_idx]) {
+ pr_err("phy addr for card %d is 0\n", card_idx);
+ free_pages((unsigned long)g_card_vir_addr[card_idx], DBGTOOL_PAGE_ORDER);
+ g_card_vir_addr[card_idx] = NULL;
+ return -EFAULT;
+ }
+
+ tmp = g_card_vir_addr[card_idx];
+ for (i = 0; i < (1 << DBGTOOL_PAGE_ORDER); i++) {
+ SetPageReserved(virt_to_page(tmp));
+ tmp += PAGE_SIZE;
+ }
+ }
+
+ return 0;
+}
+
+static void chipif_get_all_pf_dev_info(struct pf_dev_info *dev_info, int card_idx,
+ void **g_func_handle_array)
+{
+ u32 func_idx;
+ void *hwdev = NULL;
+ struct pci_dev *pdev = NULL;
+
+ for (func_idx = 0; func_idx < PF_DEV_INFO_NUM; func_idx++) {
+ hwdev = (void *)g_func_handle_array[func_idx];
+
+ dev_info[func_idx].phy_addr = g_card_phy_addr[card_idx];
+
+ if (!hwdev) {
+ dev_info[func_idx].bar0_size = 0;
+ dev_info[func_idx].bus = 0;
+ dev_info[func_idx].slot = 0;
+ dev_info[func_idx].func = 0;
+ } else {
+ pdev = (struct pci_dev *)hinic3_get_pcidev_hdl(hwdev);
+ dev_info[func_idx].bar0_size =
+ pci_resource_len(pdev, 0);
+ dev_info[func_idx].bus = pdev->bus->number;
+ dev_info[func_idx].slot = PCI_SLOT(pdev->devfn);
+ dev_info[func_idx].func = PCI_FUNC(pdev->devfn);
+ }
+ }
+}
+
+static int get_pf_dev_info(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct pf_dev_info *dev_info = buf_out;
+ struct card_node *card_info = hinic3_get_chip_node_by_lld(lld_dev);
+ int id, err;
+
+ if (!buf_out || *out_size != sizeof(struct pf_dev_info) * PF_DEV_INFO_NUM) {
+ pr_err("Invalid parameter: out_buf_size %u, expect %lu\n",
+		       *out_size, sizeof(*dev_info) * PF_DEV_INFO_NUM);
+ return -EFAULT;
+ }
+
+ err = sscanf(card_info->chip_name, HINIC3_CHIP_NAME "%d", &id);
+ if (err < 0) {
+ pr_err("Failed to get card id\n");
+ return err;
+ }
+
+ if (id >= MAX_CARD_NUM || id < 0) {
+		pr_err("chip id %d exceeds limit [0-%d]\n", id, MAX_CARD_NUM - 1);
+ return -EINVAL;
+ }
+
+ chipif_get_all_pf_dev_info(dev_info, id, card_info->func_handle_array);
+
+ err = get_card_usr_api_chain_mem(id);
+ if (err) {
+		pr_err("Failed to get api chain memory for userspace %s\n",
+ card_info->chip_name);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static long dbgtool_knl_free_mem(int id)
+{
+ unsigned char *tmp = NULL;
+ int i;
+
+ if (!g_card_vir_addr[id])
+ return 0;
+
+ tmp = g_card_vir_addr[id];
+ for (i = 0; i < (1 << DBGTOOL_PAGE_ORDER); i++) {
+ ClearPageReserved(virt_to_page(tmp));
+ tmp += PAGE_SIZE;
+ }
+
+ free_pages((unsigned long)g_card_vir_addr[id], DBGTOOL_PAGE_ORDER);
+ g_card_vir_addr[id] = NULL;
+ g_card_phy_addr[id] = 0;
+
+ return 0;
+}
+
+static int free_knl_mem(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct card_node *card_info = hinic3_get_chip_node_by_lld(lld_dev);
+ int id, err;
+
+ err = sscanf(card_info->chip_name, HINIC3_CHIP_NAME "%d", &id);
+ if (err < 0) {
+ pr_err("Failed to get card id\n");
+ return err;
+ }
+
+ if (id >= MAX_CARD_NUM || id < 0) {
+		pr_err("chip id %d exceeds limit [0-%d]\n", id, MAX_CARD_NUM - 1);
+ return -EINVAL;
+ }
+
+ dbgtool_knl_free_mem(id);
+
+ return 0;
+}
+
+static int card_info_param_valid(char *dev_name, const void *buf_out, u32 buf_out_size, int *id)
+{
+ int err;
+
+ if (!buf_out || buf_out_size != sizeof(struct hinic3_card_func_info)) {
+ pr_err("Invalid parameter: out_buf_size %u, expect %lu\n",
+ buf_out_size, sizeof(struct hinic3_card_func_info));
+ return -EINVAL;
+ }
+
+ err = memcmp(dev_name, HINIC3_CHIP_NAME, strlen(HINIC3_CHIP_NAME));
+ if (err) {
+ pr_err("Invalid chip name %s\n", dev_name);
+ return err;
+ }
+
+ err = sscanf(dev_name, HINIC3_CHIP_NAME "%d", id);
+ if (err < 0) {
+ pr_err("Failed to get card id\n");
+ return err;
+ }
+
+ if (*id >= MAX_CARD_NUM || *id < 0) {
+		pr_err("chip id %d exceeds limit [0-%d]\n",
+ *id, MAX_CARD_NUM - 1);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int get_card_func_info(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct hinic3_card_func_info *card_func_info = buf_out;
+ struct card_node *card_info = hinic3_get_chip_node_by_lld(lld_dev);
+ int err, id = 0;
+
+ err = card_info_param_valid(card_info->chip_name, buf_out, *out_size, &id);
+ if (err)
+ return err;
+
+ hinic3_get_card_func_info_by_card_name(card_info->chip_name, card_func_info);
+
+ if (!card_func_info->num_pf) {
+		pr_err("No function found for %s\n", card_info->chip_name);
+ return -EFAULT;
+ }
+
+ err = get_card_usr_api_chain_mem(id);
+ if (err) {
+		pr_err("Failed to get api chain memory for userspace %s\n",
+ card_info->chip_name);
+ return -EFAULT;
+ }
+
+ card_func_info->usr_api_phy_addr = g_card_phy_addr[id];
+
+ return 0;
+}
+
+static int get_pf_cap_info(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct service_cap *func_cap = NULL;
+ struct hinic3_hwdev *hwdev = NULL;
+ struct card_node *card_info = hinic3_get_chip_node_by_lld(lld_dev);
+ struct svc_cap_info *svc_cap_info_in = (struct svc_cap_info *)buf_in;
+ struct svc_cap_info *svc_cap_info_out = (struct svc_cap_info *)buf_out;
+
+ if (*out_size != sizeof(struct svc_cap_info) || in_size != sizeof(struct svc_cap_info) ||
+ !buf_in || !buf_out) {
+ pr_err("Invalid parameter: out_buf_size %u, in_size: %u, expect %lu\n",
+ *out_size, in_size, sizeof(struct svc_cap_info));
+ return -EINVAL;
+ }
+
+ if (svc_cap_info_in->func_idx >= MAX_FUNCTION_NUM) {
+ pr_err("func_idx is illegal. func_idx: %u, max_num: %u\n",
+ svc_cap_info_in->func_idx, MAX_FUNCTION_NUM);
+ return -EINVAL;
+ }
+
+ lld_hold();
+ hwdev = (struct hinic3_hwdev *)(card_info->func_handle_array)[svc_cap_info_in->func_idx];
+ if (!hwdev) {
+ lld_put();
+ return -EINVAL;
+ }
+
+ func_cap = &hwdev->cfg_mgmt->svc_cap;
+ memcpy(&svc_cap_info_out->cap, func_cap, sizeof(struct service_cap));
+ lld_put();
+
+ return 0;
+}
+
+static int get_hw_drv_version(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct drv_version_info *ver_info = buf_out;
+ int err;
+
+ if (!buf_out) {
+ pr_err("Buf_out is NULL.\n");
+ return -EINVAL;
+ }
+
+ if (*out_size != sizeof(*ver_info)) {
+		pr_err("Unexpected out buf size from user: %u, expect: %lu\n",
+ *out_size, sizeof(*ver_info));
+ return -EINVAL;
+ }
+
+ err = snprintf(ver_info->ver, sizeof(ver_info->ver), "%s %s", HINIC3_DRV_VERSION,
+ "2023-05-17_19:56:38");
+ if (err < 0)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int get_pf_id(struct hinic3_lld_dev *lld_dev, const void *buf_in, u32 in_size,
+ void *buf_out, u32 *out_size)
+{
+ struct hinic3_pf_info *pf_info = NULL;
+ struct card_node *chip_node = hinic3_get_chip_node_by_lld(lld_dev);
+ u32 port_id;
+ int err;
+
+ if (!chip_node)
+ return -ENODEV;
+
+ if (!buf_out || (*out_size != sizeof(*pf_info)) || !buf_in || in_size != sizeof(u32)) {
+		pr_err("Unexpected out buf size from user: %u, expect: %lu, in size: %u\n",
+ *out_size, sizeof(*pf_info), in_size);
+ return -EINVAL;
+ }
+
+ port_id = *((u32 *)buf_in);
+ pf_info = (struct hinic3_pf_info *)buf_out;
+ err = hinic3_get_pf_id(chip_node, port_id, &pf_info->pf_id, &pf_info->isvalid);
+ if (err)
+ return err;
+
+ *out_size = sizeof(*pf_info);
+
+ return 0;
+}
+
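+/* dispatch table mapping hinicadm hw-driver commands to their handlers */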
+struct hw_drv_module_handle hw_driv_module_cmd_handle[] = {
+ {FUNC_TYPE, get_func_type},
+ {GET_FUNC_IDX, get_func_id},
+ {GET_HW_STATS, (hw_driv_module)get_hw_driver_stats},
+ {CLEAR_HW_STATS, clear_hw_driver_stats},
+ {GET_SELF_TEST_RES, get_self_test_result},
+ {GET_CHIP_FAULT_STATS, (hw_driv_module)get_chip_faults_stats},
+ {GET_SINGLE_CARD_INFO, (hw_driv_module)get_single_card_info},
+ {IS_DRV_IN_VM, is_driver_in_vm},
+ {GET_CHIP_ID, get_all_chip_id_cmd},
+ {GET_PF_DEV_INFO, get_pf_dev_info},
+ {CMD_FREE_MEM, free_knl_mem},
+ {GET_CHIP_INFO, get_card_func_info},
+ {GET_FUNC_CAP, get_pf_cap_info},
+ {GET_DRV_VERSION, get_hw_drv_version},
+ {GET_PF_ID, get_pf_id},
+};
+
+static int alloc_tmp_buf(void *hwdev, struct msg_module *nt_msg, u32 in_size,
+ void **buf_in, u32 out_size, void **buf_out)
+{
+ int ret;
+
+ ret = alloc_buff_in(hwdev, nt_msg, in_size, buf_in);
+ if (ret) {
+ pr_err("Alloc tool cmd buff in failed\n");
+ return ret;
+ }
+
+ ret = alloc_buff_out(hwdev, nt_msg, out_size, buf_out);
+ if (ret) {
+ pr_err("Alloc tool cmd buff out failed\n");
+ goto out_free_buf_in;
+ }
+
+ return 0;
+
+out_free_buf_in:
+ free_buff_in(hwdev, nt_msg, *buf_in);
+
+ return ret;
+}
+
+static void free_tmp_buf(void *hwdev, struct msg_module *nt_msg,
+ void *buf_in, void *buf_out)
+{
+ free_buff_out(hwdev, nt_msg, buf_out);
+ free_buff_in(hwdev, nt_msg, buf_in);
+}
+
+static int send_to_hw_driver(struct hinic3_lld_dev *lld_dev, struct msg_module *nt_msg,
+ const void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ int index, num_cmds = sizeof(hw_driv_module_cmd_handle) /
+ sizeof(hw_driv_module_cmd_handle[0]);
+ enum driver_cmd_type cmd_type =
+ (enum driver_cmd_type)(nt_msg->msg_formate);
+ int err = 0;
+
+ for (index = 0; index < num_cmds; index++) {
+ if (cmd_type ==
+ hw_driv_module_cmd_handle[index].driv_cmd_name) {
+ err = hw_driv_module_cmd_handle[index].driv_func
+ (lld_dev, buf_in, in_size, buf_out, out_size);
+ break;
+ }
+ }
+
+ if (index == num_cmds) {
+ pr_err("Can't find callback for %d\n", cmd_type);
+ return -EINVAL;
+ }
+
+ return err;
+}
+
+static int send_to_service_driver(struct hinic3_lld_dev *lld_dev, struct msg_module *nt_msg,
+ const void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ const char **service_name = NULL;
+ enum hinic3_service_type type;
+ void *uld_dev = NULL;
+ int ret = -EINVAL;
+
+ service_name = hinic3_get_uld_names();
+ type = nt_msg->module - SEND_TO_SRV_DRV_BASE;
+ if (type >= SERVICE_T_MAX) {
+		pr_err("Ioctl input module id: %u is incorrect\n", nt_msg->module);
+ return -EINVAL;
+ }
+
+ uld_dev = hinic3_get_uld_dev(lld_dev, type);
+ if (!uld_dev) {
+ if (nt_msg->msg_formate == GET_DRV_VERSION)
+ return 0;
+
+		pr_err("Cannot get the uld dev: %s, %s driver may not be registered\n",
+ nt_msg->device_name, service_name[type]);
+ return -EINVAL;
+ }
+
+ if (g_uld_info[type].ioctl)
+ ret = g_uld_info[type].ioctl(uld_dev, nt_msg->msg_formate,
+ buf_in, in_size, buf_out, out_size);
+ uld_dev_put(lld_dev, type);
+
+ return ret;
+}
+
+static int nictool_exec_cmd(struct hinic3_lld_dev *lld_dev, struct msg_module *nt_msg,
+ void *buf_in, u32 in_size, void *buf_out, u32 *out_size)
+{
+ int ret = 0;
+
+ switch (nt_msg->module) {
+ case SEND_TO_HW_DRIVER:
+ ret = send_to_hw_driver(lld_dev, nt_msg, buf_in, in_size, buf_out, out_size);
+ break;
+ case SEND_TO_MPU:
+ ret = send_to_mpu(hinic3_get_sdk_hwdev_by_lld(lld_dev),
+ nt_msg, buf_in, in_size, buf_out, out_size);
+ break;
+ case SEND_TO_SM:
+ ret = send_to_sm(hinic3_get_sdk_hwdev_by_lld(lld_dev),
+ nt_msg, buf_in, in_size, buf_out, out_size);
+ break;
+ case SEND_TO_NPU:
+ ret = send_to_npu(hinic3_get_sdk_hwdev_by_lld(lld_dev),
+ nt_msg, buf_in, in_size, buf_out, out_size);
+ break;
+ default:
+ ret = send_to_service_driver(lld_dev, nt_msg, buf_in, in_size, buf_out, out_size);
+ break;
+ }
+
+ return ret;
+}
+
+static int cmd_parameter_valid(struct msg_module *nt_msg, unsigned long arg,
+ u32 *out_size_expect, u32 *in_size)
+{
+ if (copy_from_user(nt_msg, (void *)arg, sizeof(*nt_msg))) {
+ pr_err("Copy information from user failed\n");
+ return -EFAULT;
+ }
+
+ *out_size_expect = nt_msg->buf_out_size;
+ *in_size = nt_msg->buf_in_size;
+ if (*out_size_expect > HINIC3_MAX_BUF_SIZE ||
+ *in_size > HINIC3_MAX_BUF_SIZE) {
+ pr_err("Invalid in size: %u or out size: %u\n",
+ *in_size, *out_size_expect);
+ return -EFAULT;
+ }
+
+ nt_msg->device_name[IFNAMSIZ - 1] = '\0';
+
+ return 0;
+}
+
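+/*
+ * Resolve the lld device a request targets: service-driver commands are
+ * looked up by device name; everything else tries the chip name first
+ * and falls back to the device name. A few NIC/custom commands resolve
+ * by chip name plus port id instead.
+ */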
+static struct hinic3_lld_dev *get_lld_dev_by_nt_msg(struct msg_module *nt_msg)
+{
+ struct hinic3_lld_dev *lld_dev = NULL;
+
+ if (nt_msg->module >= SEND_TO_SRV_DRV_BASE && nt_msg->module < SEND_TO_DRIVER_MAX &&
+ nt_msg->module != SEND_TO_HW_DRIVER && nt_msg->msg_formate != GET_DRV_VERSION) {
+ lld_dev = hinic3_get_lld_dev_by_dev_name(nt_msg->device_name,
+ nt_msg->module - SEND_TO_SRV_DRV_BASE);
+ } else {
+ lld_dev = hinic3_get_lld_dev_by_chip_name(nt_msg->device_name);
+ if (!lld_dev)
+ lld_dev = hinic3_get_lld_dev_by_dev_name(nt_msg->device_name,
+ SERVICE_T_MAX);
+ }
+
+ if (nt_msg->module == SEND_TO_NIC_DRIVER && (nt_msg->msg_formate == GET_XSFP_INFO ||
+ nt_msg->msg_formate == GET_XSFP_PRESENT))
+ lld_dev = hinic3_get_lld_dev_by_chip_and_port(nt_msg->device_name,
+ nt_msg->port_id);
+
+ if (nt_msg->module == SEND_TO_CUSTOM_DRIVER &&
+ nt_msg->msg_formate == CMD_CUSTOM_BOND_GET_CHIP_NAME)
+ lld_dev = hinic3_get_lld_dev_by_dev_name(nt_msg->device_name, SERVICE_T_MAX);
+
+ return lld_dev;
+}
+
+static long hinicadm_k_unlocked_ioctl(struct file *pfile, unsigned long arg)
+{
+ struct hinic3_lld_dev *lld_dev = NULL;
+ struct msg_module nt_msg;
+ void *buf_out = NULL;
+ void *buf_in = NULL;
+ u32 out_size_expect = 0;
+ u32 out_size = 0;
+ u32 in_size = 0;
+ int ret = 0;
+
+ memset(&nt_msg, 0, sizeof(nt_msg));
+ if (cmd_parameter_valid(&nt_msg, arg, &out_size_expect, &in_size))
+ return -EFAULT;
+
+ lld_dev = get_lld_dev_by_nt_msg(&nt_msg);
+ if (!lld_dev) {
+ if (nt_msg.msg_formate != DEV_NAME_TEST)
+			pr_err("Cannot find device %s for module %d\n",
+ nt_msg.device_name, nt_msg.module);
+
+ return -ENODEV;
+ }
+
+ if (nt_msg.msg_formate == DEV_NAME_TEST)
+ return 0;
+
+ ret = alloc_tmp_buf(hinic3_get_sdk_hwdev_by_lld(lld_dev), &nt_msg,
+ in_size, &buf_in, out_size_expect, &buf_out);
+ if (ret) {
+ pr_err("Alloc tmp buff failed\n");
+ goto out_free_lock;
+ }
+
+ out_size = out_size_expect;
+
+ ret = nictool_exec_cmd(lld_dev, &nt_msg, buf_in, in_size, buf_out, &out_size);
+ if (ret) {
+ pr_err("nictool_exec_cmd failed, module: %u, ret: %d.\n", nt_msg.module, ret);
+ goto out_free_buf;
+ }
+
+ if (out_size > out_size_expect) {
+ ret = -EFAULT;
+ pr_err("Out size is greater than expected out size from user: %u, out size: %u\n",
+ out_size_expect, out_size);
+ goto out_free_buf;
+ }
+
+ ret = copy_buf_out_to_user(&nt_msg, out_size, buf_out);
+ if (ret)
+ pr_err("Copy information to user failed\n");
+
+out_free_buf:
+ free_tmp_buf(hinic3_get_sdk_hwdev_by_lld(lld_dev), &nt_msg, buf_in, buf_out);
+
+out_free_lock:
+ lld_dev_put(lld_dev);
+ return (long)ret;
+}
+
+/**
+ * dbgtool_knl_ffm_info_rd - Read ffm information
+ * @para: the dbgtool parameter
+ * @dbgtool_info: the dbgtool info
+ **/
+static long dbgtool_knl_ffm_info_rd(struct dbgtool_param *para,
+ struct dbgtool_k_glb_info *dbgtool_info)
+{
+ /* Copy the ffm_info to user mode */
+ if (copy_to_user(para->param.ffm_rd, dbgtool_info->ffm,
+ (unsigned int)sizeof(struct ffm_record_info))) {
+ pr_err("Copy ffm_info to user fail\n");
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static long dbgtool_k_unlocked_ioctl(struct file *pfile,
+ unsigned int real_cmd,
+ unsigned long arg)
+{
+ long ret;
+ struct dbgtool_param param;
+ struct dbgtool_k_glb_info *dbgtool_info = NULL;
+ struct card_node *card_info = NULL;
+ int i;
+
+ (void)memset(¶m, 0, sizeof(param));
+
+ if (copy_from_user(¶m, (void *)arg, sizeof(param))) {
+ pr_err("Copy param from user fail\n");
+ return -EFAULT;
+ }
+
+ lld_hold();
+ for (i = 0; i < MAX_CARD_NUM; i++) {
+ card_info = (struct card_node *)g_card_node_array[i];
+ if (!card_info)
+ continue;
+ if (!strncmp(param.chip_name, card_info->chip_name, IFNAMSIZ))
+ break;
+ }
+
+ if (i == MAX_CARD_NUM || !card_info) {
+ lld_put();
+ pr_err("Can't find this card %s\n", param.chip_name);
+ return -EFAULT;
+ }
+
+ card_id = i;
+ dbgtool_info = (struct dbgtool_k_glb_info *)card_info->dbgtool_info;
+
+ down(&dbgtool_info->dbgtool_sem);
+
+ switch (real_cmd) {
+ case DBGTOOL_CMD_FFM_RD:
+ ret = dbgtool_knl_ffm_info_rd(¶m, dbgtool_info);
+ break;
+ case DBGTOOL_CMD_MSG_2_UP:
+		pr_err("Not supposed to use this cmd(0x%x)\n", real_cmd);
+ ret = 0;
+ break;
+
+ default:
+ pr_err("Dbgtool cmd(0x%x) not support now\n", real_cmd);
+ ret = -EFAULT;
+ }
+
+ up(&dbgtool_info->dbgtool_sem);
+
+ lld_put();
+
+ return ret;
+}
+
+static int nictool_k_release(struct inode *pnode, struct file *pfile)
+{
+ return 0;
+}
+
+static int nictool_k_open(struct inode *pnode, struct file *pfile)
+{
+ return 0;
+}
+
+static ssize_t nictool_k_read(struct file *pfile, char __user *ubuf,
+ size_t size, loff_t *ppos)
+{
+ return 0;
+}
+
+static ssize_t nictool_k_write(struct file *pfile, const char __user *ubuf,
+ size_t size, loff_t *ppos)
+{
+ return 0;
+}
+
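+/*
+ * Single ioctl entry: _IOC_NR(cmd) == NICTOOL_CMD_TYPE selects the
+ * hinicadm message path, any other command number is treated as a
+ * dbgtool request.
+ */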
+static long nictool_k_unlocked_ioctl(struct file *pfile,
+ unsigned int cmd, unsigned long arg)
+{
+ unsigned int real_cmd;
+
+ real_cmd = _IOC_NR(cmd);
+
+ return (real_cmd == NICTOOL_CMD_TYPE) ?
+ hinicadm_k_unlocked_ioctl(pfile, arg) :
+ dbgtool_k_unlocked_ioctl(pfile, real_cmd, arg);
+}
+
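+/*
+ * Map the per-card api chain buffer into user space. Older tools pass
+ * vm_pgoff == 0 and get the buffer of the card last selected through an
+ * ioctl (card_id); newer tools pass the physical offset directly.
+ */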
+static int hinic3_mem_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+ unsigned long vmsize = vma->vm_end - vma->vm_start;
+ phys_addr_t offset = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;
+ phys_addr_t phy_addr;
+
+ if (vmsize > (PAGE_SIZE * (1 << DBGTOOL_PAGE_ORDER))) {
+		pr_err("Map size = %lu is bigger than the allocated buffer\n", vmsize);
+ return -EAGAIN;
+ }
+
+	/* old versions of the tool set vma->vm_pgoff to 0 */
+ phy_addr = offset ? offset : g_card_phy_addr[card_id];
+
+ if (!phy_addr) {
+ pr_err("Card_id = %d physical address is 0\n", card_id);
+ return -EAGAIN;
+ }
+
+ /* Disable cache and write buffer in the mapping area */
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ if (remap_pfn_range(vma, vma->vm_start, (phy_addr >> PAGE_SHIFT),
+ vmsize, vma->vm_page_prot)) {
+ pr_err("Remap pfn range failed.\n");
+ return -EAGAIN;
+ }
+
+ return 0;
+}
+
+static const struct file_operations fifo_operations = {
+ .owner = THIS_MODULE,
+ .release = nictool_k_release,
+ .open = nictool_k_open,
+ .read = nictool_k_read,
+ .write = nictool_k_write,
+ .unlocked_ioctl = nictool_k_unlocked_ioctl,
+ .mmap = hinic3_mem_mmap,
+};
+
+static void free_dbgtool_info(void *hwdev, struct card_node *chip_info)
+{
+ struct dbgtool_k_glb_info *dbgtool_info = NULL;
+ int err, id;
+
+ if (hinic3_func_type(hwdev) != TYPE_VF)
+ chip_info->func_handle_array[hinic3_global_func_id(hwdev)] = NULL;
+
+ if (--chip_info->func_num)
+ return;
+
+ err = sscanf(chip_info->chip_name, HINIC3_CHIP_NAME "%d", &id);
+ if (err < 0)
+ pr_err("Failed to get card id\n");
+
+ if (id < MAX_CARD_NUM)
+ g_card_node_array[id] = NULL;
+
+ dbgtool_info = chip_info->dbgtool_info;
+ /* FFM deinit */
+ kfree(dbgtool_info->ffm);
+ dbgtool_info->ffm = NULL;
+
+ kfree(dbgtool_info);
+ chip_info->dbgtool_info = NULL;
+
+ if (id < MAX_CARD_NUM)
+ (void)dbgtool_knl_free_mem(id);
+}
+
+static int alloc_dbgtool_info(void *hwdev, struct card_node *chip_info)
+{
+ struct dbgtool_k_glb_info *dbgtool_info = NULL;
+ int err, id = 0;
+
+ if (hinic3_func_type(hwdev) != TYPE_VF)
+ chip_info->func_handle_array[hinic3_global_func_id(hwdev)] = hwdev;
+
+ if (chip_info->func_num++)
+ return 0;
+
+ dbgtool_info = (struct dbgtool_k_glb_info *)
+ kzalloc(sizeof(struct dbgtool_k_glb_info), GFP_KERNEL);
+ if (!dbgtool_info) {
+ pr_err("Failed to allocate dbgtool_info\n");
+ goto dbgtool_info_fail;
+ }
+
+ chip_info->dbgtool_info = dbgtool_info;
+
+ /* FFM init */
+ dbgtool_info->ffm = (struct ffm_record_info *)
+ kzalloc(sizeof(struct ffm_record_info), GFP_KERNEL);
+ if (!dbgtool_info->ffm) {
+		pr_err("Failed to allocate ffm record info\n");
+ goto dbgtool_info_ffm_fail;
+ }
+
+ sema_init(&dbgtool_info->dbgtool_sem, 1);
+
+ err = sscanf(chip_info->chip_name, HINIC3_CHIP_NAME "%d", &id);
+ if (err < 0) {
+ pr_err("Failed to get card id\n");
+ goto sscanf_chdev_fail;
+ }
+
+ g_card_node_array[id] = chip_info;
+
+ return 0;
+
+sscanf_chdev_fail:
+ kfree(dbgtool_info->ffm);
+
+dbgtool_info_ffm_fail:
+ kfree(dbgtool_info);
+ chip_info->dbgtool_info = NULL;
+
+dbgtool_info_fail:
+ if (hinic3_func_type(hwdev) != TYPE_VF)
+ chip_info->func_handle_array[hinic3_global_func_id(hwdev)] = NULL;
+ chip_info->func_num--;
+ return -ENOMEM;
+}
+
+/**
+ * nictool_k_init - register the nictool character device
+ * @hwdev: hardware device handle
+ * @chip_node: chip node that the function belongs to
+ */
+/* temp for dbgtool_info */
+/*lint -e438*/
+int nictool_k_init(void *hwdev, void *chip_node)
+{
+ struct card_node *chip_info = (struct card_node *)chip_node;
+ struct device *pdevice = NULL;
+ int err;
+
+ err = alloc_dbgtool_info(hwdev, chip_info);
+ if (err)
+ return err;
+
+ if (g_nictool_ref_cnt++) {
+ /* already initialized */
+ return 0;
+ }
+
+ err = alloc_chrdev_region(&g_dev_id, 0, 1, HIADM3_DEV_NAME);
+ if (err) {
+ pr_err("Register nictool_dev failed(0x%x)\n", err);
+ goto alloc_chdev_fail;
+ }
+
+ /* Create the device class */
+ /*lint -save -e160*/
+ g_nictool_class = class_create(THIS_MODULE, HIADM3_DEV_CLASS);
+ /*lint -restore*/
+ if (IS_ERR(g_nictool_class)) {
+ pr_err("Create nictool_class fail\n");
+ err = -EFAULT;
+ goto class_create_err;
+ }
+
+ /* Initialize the character device */
+ cdev_init(&g_nictool_cdev, &fifo_operations);
+
+ /* Add the device to the system */
+ err = cdev_add(&g_nictool_cdev, g_dev_id, 1);
+ if (err < 0) {
+ pr_err("Add nictool_dev to operating system fail(0x%x)\n", err);
+ goto cdev_add_err;
+ }
+
+ /* Export device information to user space
+ * (/sys/class/class name/device name)
+ */
+ pdevice = device_create(g_nictool_class, NULL,
+ g_dev_id, NULL, HIADM3_DEV_NAME);
+ if (IS_ERR(pdevice)) {
+ pr_err("Export nictool device information to user space fail\n");
+ err = -EFAULT;
+ goto device_create_err;
+ }
+
+ pr_info("Register nictool_dev to system succeed\n");
+
+ return 0;
+
+device_create_err:
+ cdev_del(&g_nictool_cdev);
+
+cdev_add_err:
+ class_destroy(g_nictool_class);
+
+class_create_err:
+ g_nictool_class = NULL;
+ unregister_chrdev_region(g_dev_id, 1);
+
+alloc_chdev_fail:
+ g_nictool_ref_cnt--;
+ free_dbgtool_info(hwdev, chip_info);
+
+ return err;
+} /*lint +e438*/
+
+void nictool_k_uninit(void *hwdev, void *chip_node)
+{
+ struct card_node *chip_info = (struct card_node *)chip_node;
+
+ free_dbgtool_info(hwdev, chip_info);
+
+ if (!g_nictool_ref_cnt)
+ return;
+
+ if (--g_nictool_ref_cnt)
+ return;
+
+ if (!g_nictool_class || IS_ERR(g_nictool_class)) {
+ pr_err("Nictool class is NULL.\n");
+ return;
+ }
+
+ device_destroy(g_nictool_class, g_dev_id);
+ cdev_del(&g_nictool_cdev);
+ class_destroy(g_nictool_class);
+ g_nictool_class = NULL;
+
+ unregister_chrdev_region(g_dev_id, 1);
+
+ pr_info("Unregister nictool_dev succeed\n");
+}
+
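For context, the character device registered above exposes two paths to user space: an ioctl handler that dispatches on _IOC_NR(cmd) between the hinicadm and dbgtool back ends, and an mmap handler that remaps the per-card DMA buffer. A minimal user-space sketch, assuming a hypothetical device node name (the real one comes from HIADM3_DEV_NAME, defined outside this hunk):

/* Hypothetical user-space sketch of driving the nictool chardev; the
 * device node name is an assumption for illustration only.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define DBGTOOL_PAGE_ORDER 10
#define MAP_LEN ((size_t)getpagesize() << DBGTOOL_PAGE_ORDER)

int main(void)
{
	int fd = open("/dev/hinic3_nictool", O_RDWR); /* assumed node name */
	void *buf;

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* offset 0 selects the per-card buffer, as the driver comment notes */
	buf = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}
	/* ... issue ioctls here; _IOC_NR(cmd) picks hinicadm vs dbgtool ... */
	munmap(buf, MAP_LEN);
	close(fd);
	return 0;
}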
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.h
new file mode 100644
index 000000000000..f368133e341e
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_nictool.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_NICTOOL_H
+#define HINIC3_NICTOOL_H
+
+#include "hinic3_mt.h"
+#include "hinic3_crm.h"
+
+#ifndef MAX_SIZE
+#define MAX_SIZE (16)
+#endif
+
+#define DBGTOOL_PAGE_ORDER (10)
+
+#define MAX_CARD_NUM (64)
+
+int nictool_k_init(void *hwdev, void *chip_node);
+void nictool_k_uninit(void *hwdev, void *chip_node);
+
+void hinic3_get_all_chip_id(void *id_info);
+
+void hinic3_get_card_func_info_by_card_name(const char *chip_name,
+ struct hinic3_card_func_info *card_func);
+
+void hinic3_get_card_info(const void *hwdev, void *bufin);
+
+bool hinic3_is_in_host(void);
+
+int hinic3_get_pf_id(struct card_node *chip_node, u32 port_id, u32 *pf_id, u32 *isvalid);
+
+extern struct hinic3_uld_info g_uld_info[SERVICE_T_MAX];
+
+#endif
+
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_pci_id_tbl.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_pci_id_tbl.h
new file mode 100644
index 000000000000..d028ca62fab3
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_pci_id_tbl.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_PCI_ID_TBL_H
+#define HINIC3_PCI_ID_TBL_H
+
+#define PCI_VENDOR_ID_HUAWEI 0x19e5
+#define HINIC3_DEV_ID_STANDARD 0x0222
+#define HINIC3_DEV_ID_SDI_5_1_PF 0x0226
+#define HINIC3_DEV_ID_SDI_5_0_PF 0x0225
+#define HINIC3_DEV_ID_VF 0x375F
+#define HINIC3_DEV_ID_VF_HV 0x379F
+#define HINIC3_DEV_ID_SPU 0xAC00
+#endif
+
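These defines are raw device IDs only; a typical consumer is the driver's probe table. A minimal sketch, assuming an illustrative table name and empty driver_data (the patch's actual table lives in the LLD code, not shown here):

#include <linux/pci.h>
#include "hinic3_pci_id_tbl.h"

/* illustrative probe table built from the IDs above */
static const struct pci_device_id hinic3_pci_table_example[] = {
	{ PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_STANDARD), 0 },
	{ PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_VF), 0 },
	{ /* end: all zeroes */ }
};
MODULE_DEVICE_TABLE(pci, hinic3_pci_table_example);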
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.c
new file mode 100644
index 000000000000..fbb6198a30f6
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.c
@@ -0,0 +1,44 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/kernel.h>
+#include <linux/semaphore.h>
+#include <linux/workqueue.h>
+
+#include "ossl_knl.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_profile.h"
+#include "hinic3_prof_adap.h"
+
+static bool is_match_prof_default_adapter(void *device)
+{
+ /* always match default profile adapter in standard scene */
+ return true;
+}
+
+struct hinic3_prof_adapter prof_adap_objs[] = {
+ /* Add prof adapter before default profile */
+ {
+ .type = PROF_ADAP_TYPE_DEFAULT,
+ .match = is_match_prof_default_adapter,
+ .init = NULL,
+ .deinit = NULL,
+ },
+};
+
+void hisdk3_init_profile_adapter(struct hinic3_hwdev *hwdev)
+{
+ u16 num_adap = ARRAY_SIZE(prof_adap_objs);
+
+ hwdev->prof_adap = hinic3_prof_init(hwdev, prof_adap_objs, num_adap,
+ (void *)&hwdev->prof_attr);
+ if (hwdev->prof_adap)
+ sdk_info(hwdev->dev_hdl, "Find profile adapter type: %d\n", hwdev->prof_adap->type);
+}
+
+void hisdk3_deinit_profile_adapter(struct hinic3_hwdev *hwdev)
+{
+ hinic3_prof_deinit(hwdev->prof_adap, hwdev->prof_attr);
+}
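The adapter array above is scanned in order and the default entry matches unconditionally, so platform-specific adapters must be placed before it. A hedged sketch of what adding one would look like; the type value and match callback are hypothetical placeholders:

/* Sketch only: a custom adapter registered ahead of the default one.
 * PROF_ADAP_TYPE_CUSTOM and the match logic are placeholders.
 */
static bool is_match_my_platform(void *device)
{
	/* inspect the hwdev and return true only on the target platform */
	return false;
}

struct hinic3_prof_adapter prof_adap_objs[] = {
	{
		.type = PROF_ADAP_TYPE_CUSTOM, /* hypothetical type value */
		.match = is_match_my_platform,
		.init = NULL,
		.deinit = NULL,
	},
	{
		.type = PROF_ADAP_TYPE_DEFAULT,
		.match = is_match_prof_default_adapter,
		.init = NULL,
		.deinit = NULL,
	},
};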
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.h
new file mode 100644
index 000000000000..e244d1197d42
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_prof_adap.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_PROF_ADAP_H
+#define HINIC3_PROF_ADAP_H
+
+#include <linux/workqueue.h>
+
+#include "hinic3_profile.h"
+#include "hinic3_hwdev.h"
+
+enum cpu_affinity_work_type {
+ WORK_TYPE_AEQ,
+ WORK_TYPE_MBOX,
+ WORK_TYPE_MGMT_MSG,
+ WORK_TYPE_COMM,
+};
+
+enum hisdk3_sw_features {
+ HISDK3_SW_F_CHANNEL_LOCK = BIT(0),
+};
+
+struct hisdk3_prof_ops {
+ void (*fault_recover)(void *data, u16 src, u16 level);
+ int (*get_work_cpu_affinity)(void *data, u32 work_type);
+ void (*probe_success)(void *data);
+ void (*remove_pre_handle)(struct hinic3_hwdev *hwdev);
+};
+
+struct hisdk3_prof_attr {
+ void *priv_data;
+ u64 hw_feature_cap;
+ u64 sw_feature_cap;
+ u64 dft_hw_feature;
+ u64 dft_sw_feature;
+
+ struct hisdk3_prof_ops *ops;
+};
+
+#define GET_PROF_ATTR_OPS(hwdev) \
+ ((hwdev)->prof_attr ? (hwdev)->prof_attr->ops : NULL)
+
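+/* unit-test hook: if a test build has defined 'static' away to expose
+ * the inline helpers below, drop that definition and record it
+ */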
+#ifdef static
+#undef static
+#define LLT_STATIC_DEF_SAVED
+#endif
+
+static inline int hisdk3_get_work_cpu_affinity(struct hinic3_hwdev *hwdev,
+ enum cpu_affinity_work_type type)
+{
+ struct hisdk3_prof_ops *ops = GET_PROF_ATTR_OPS(hwdev);
+
+ if (ops && ops->get_work_cpu_affinity)
+ return ops->get_work_cpu_affinity(hwdev->prof_attr->priv_data, type);
+
+ return WORK_CPU_UNBOUND;
+}
+
+static inline void hisdk3_fault_post_process(struct hinic3_hwdev *hwdev,
+ u16 src, u16 level)
+{
+ struct hisdk3_prof_ops *ops = GET_PROF_ATTR_OPS(hwdev);
+
+ if (ops && ops->fault_recover)
+ ops->fault_recover(hwdev->prof_attr->priv_data, src, level);
+}
+
+static inline void hisdk3_probe_success(struct hinic3_hwdev *hwdev)
+{
+ struct hisdk3_prof_ops *ops = GET_PROF_ATTR_OPS(hwdev);
+
+ if (ops && ops->probe_success)
+ ops->probe_success(hwdev->prof_attr->priv_data);
+}
+
+static inline bool hisdk3_sw_feature_en(const struct hinic3_hwdev *hwdev,
+ u64 feature_bit)
+{
+ if (!hwdev->prof_attr)
+ return false;
+
+ return (hwdev->prof_attr->sw_feature_cap & feature_bit) &&
+ (hwdev->prof_attr->dft_sw_feature & feature_bit);
+}
+
+#ifdef CONFIG_MODULE_PROF
+static inline void hisdk3_remove_pre_process(struct hinic3_hwdev *hwdev)
+{
+ struct hisdk3_prof_ops *ops = NULL;
+
+ if (!hwdev)
+ return;
+
+ ops = GET_PROF_ATTR_OPS(hwdev);
+
+ if (ops && ops->remove_pre_handle)
+ ops->remove_pre_handle(hwdev);
+}
+#else
+static inline void hisdk3_remove_pre_process(struct hinic3_hwdev *hwdev) {}
+#endif
+#define SW_FEATURE_EN(hwdev, f_bit) \
+ hisdk3_sw_feature_en(hwdev, HISDK3_SW_F_##f_bit)
+#define HISDK3_F_CHANNEL_LOCK_EN(hwdev) SW_FEATURE_EN(hwdev, CHANNEL_LOCK)
+
+void hisdk3_init_profile_adapter(struct hinic3_hwdev *hwdev);
+void hisdk3_deinit_profile_adapter(struct hinic3_hwdev *hwdev);
+
+#endif
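The SW_FEATURE_EN/HISDK3_F_CHANNEL_LOCK_EN helpers require a feature bit to be both present in sw_feature_cap and enabled in dft_sw_feature. A minimal usage sketch, assuming a hwdev whose prof_attr was populated by an adapter's init callback:

/* Sketch only: gating a send path on the CHANNEL_LOCK software feature */
static void example_send(struct hinic3_hwdev *hwdev)
{
	if (HISDK3_F_CHANNEL_LOCK_EN(hwdev)) {
		/* feature is both supported and enabled by default */
		/* ... take the per-channel lock before sending ... */
	}
}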
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sm_lt.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sm_lt.h
new file mode 100644
index 000000000000..e204a9815ea8
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sm_lt.h
@@ -0,0 +1,160 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef CHIPIF_SM_LT_H
+#define CHIPIF_SM_LT_H
+
+#include <linux/types.h>
+
+#define SM_LT_LOAD (0x12)
+#define SM_LT_STORE (0x14)
+
+#define SM_LT_NUM_OFFSET 13
+#define SM_LT_ABUF_FLG_OFFSET 12
+#define SM_LT_BC_OFFSET 11
+
+#define SM_LT_ENTRY_16B 16
+#define SM_LT_ENTRY_32B 32
+#define SM_LT_ENTRY_48B 48
+#define SM_LT_ENTRY_64B 64
+
+#define TBL_LT_OFFSET_DEFAULT 0
+
+#define SM_CACHE_LINE_SHFT 4 /* log2(16) */
+#define SM_CACHE_LINE_SIZE 16 /* the size of cache line */
+
+#define MAX_SM_LT_READ_LINE_NUM 4
+#define MAX_SM_LT_WRITE_LINE_NUM 3
+
+#define SM_LT_FULL_BYTEENB 0xFFFF
+
+#define TBL_GET_ENB3_MASK(bitmask) ((u16)(((bitmask) >> 32) & 0xFFFF))
+#define TBL_GET_ENB2_MASK(bitmask) ((u16)(((bitmask) >> 16) & 0xFFFF))
+#define TBL_GET_ENB1_MASK(bitmask) ((u16)((bitmask) & 0xFFFF))
+
+enum {
+ SM_LT_NUM_0 = 0, /* lt num = 0, load/store 16B */
+ SM_LT_NUM_1, /* lt num = 1, load/store 32B */
+ SM_LT_NUM_2, /* lt num = 2, load/store 48B */
+ SM_LT_NUM_3 /* lt num = 3, load 64B */
+};
+
+/* lt load request */
+union sml_lt_req_head {
+ struct {
+ u32 offset:8;
+ u32 pad:3;
+ u32 bc:1;
+ u32 abuf_flg:1;
+ u32 num:2;
+ u32 ack:1;
+ u32 op_id:5;
+ u32 instance:6;
+ u32 src:5;
+ } bs;
+
+ u32 value;
+};
+
+struct sml_lt_load_req {
+ u32 extra;
+ union sml_lt_req_head head;
+ u32 index;
+ u32 pad0;
+ u32 pad1;
+};
+
+struct sml_lt_store_req {
+ u32 extra;
+ union sml_lt_req_head head;
+ u32 index;
+ u32 byte_enb[2];
+ u8 write_data[48];
+};
+
+enum {
+ SM_LT_OFFSET_1 = 1,
+ SM_LT_OFFSET_2,
+ SM_LT_OFFSET_3,
+ SM_LT_OFFSET_4,
+ SM_LT_OFFSET_5,
+ SM_LT_OFFSET_6,
+ SM_LT_OFFSET_7,
+ SM_LT_OFFSET_8,
+ SM_LT_OFFSET_9,
+ SM_LT_OFFSET_10,
+ SM_LT_OFFSET_11,
+ SM_LT_OFFSET_12,
+ SM_LT_OFFSET_13,
+ SM_LT_OFFSET_14,
+ SM_LT_OFFSET_15
+};
+
+enum HINIC_CSR_API_DATA_OPERATION_ID {
+ HINIC_CSR_OPERATION_WRITE_CSR = 0x1E,
+ HINIC_CSR_OPERATION_READ_CSR = 0x1F
+};
+
+enum HINIC_CSR_API_DATA_NEED_RESPONSE_DATA {
+ HINIC_CSR_NO_RESP_DATA = 0,
+ HINIC_CSR_NEED_RESP_DATA = 1
+};
+
+enum HINIC_CSR_API_DATA_DATA_SIZE {
+ HINIC_CSR_DATA_SZ_32 = 0,
+ HINIC_CSR_DATA_SZ_64 = 1
+};
+
+struct hinic_csr_request_api_data {
+ u32 dw0;
+
+ union {
+ struct {
+ u32 reserved1:13;
+ /* this field indicates the write/read data size:
+ * 2'b00: 32 bits
+ * 2'b01: 64 bits
+ * 2'b10~2'b11:reserved
+ */
+ u32 data_size:2;
+ /* this field indicates whether the requestor expects to
+ * receive response data.
+ * 1'b0: does not expect response data.
+ * 1'b1: expects response data.
+ */
+ u32 need_response:1;
+ /* this field indicates the operation that the requestor
+ * expects.
+ * 5'b1_1110: write value to csr space.
+ * 5'b1_1111: read register from csr space.
+ */
+ u32 operation_id:5;
+ u32 reserved2:6;
+ /* this field specifies the Src node ID for this API
+ * request message.
+ */
+ u32 src_node_id:5;
+ } bits;
+
+ u32 val32;
+ } dw1;
+
+ union {
+ struct {
+ /* it specifies the CSR address. */
+ u32 csr_addr:26;
+ u32 reserved3:6;
+ } bits;
+
+ u32 val32;
+ } dw2;
+
+ /* if data_size=2'b01, it is high 32 bits of write data. else, it is
+ * 32'hFFFF_FFFF.
+ */
+ u32 csr_write_data_h;
+ /* the low 32 bits of write data. */
+ u32 csr_write_data_l;
+};
+#endif
+
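The TBL_GET_ENB*_MASK macros slice a 48-bit byte-enable bitmap into the three 16-bit enable words carried in sml_lt_store_req. A small sketch of that split; the helper name is illustrative:

#include <linux/types.h>

/* split a 48-bit byte-enable bitmap into the hardware's enable words */
static void split_byte_enable(u64 bitmask, u16 *enb3, u16 *enb2, u16 *enb1)
{
	*enb3 = TBL_GET_ENB3_MASK(bitmask); /* bits 47:32 */
	*enb2 = TBL_GET_ENB2_MASK(bitmask); /* bits 31:16 */
	*enb1 = TBL_GET_ENB1_MASK(bitmask); /* bits 15:0 */
}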
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sml_lt.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sml_lt.c
new file mode 100644
index 000000000000..b802104153c5
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sml_lt.c
@@ -0,0 +1,160 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+
+#include "ossl_knl.h"
+#include "hinic3_common.h"
+#include "hinic3_sm_lt.h"
+#include "hinic3_hw.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_api_cmd.h"
+#include "hinic3_mgmt.h"
+
+#define ACK 1
+#define NOACK 0
+
+#define LT_LOAD16_API_SIZE (16 + 4)
+#define LT_STORE16_API_SIZE (32 + 4)
+
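+/* open-coded 32-bit byte swap: equivalent to htonl() on the
+ * little-endian hosts this driver targets
+ */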
+#ifndef HTONL
+#define HTONL(x) \
+ ((((x) & 0x000000ff) << 24) \
+ | (((x) & 0x0000ff00) << 8) \
+ | (((x) & 0x00ff0000) >> 8) \
+ | (((x) & 0xff000000) >> 24))
+#endif
+
+static inline void sm_lt_build_head(union sml_lt_req_head *head,
+ u8 instance_id,
+ u8 op_id, u8 ack,
+ u8 offset, u8 num)
+{
+ head->value = 0;
+ head->bs.instance = instance_id;
+ head->bs.op_id = op_id;
+ head->bs.ack = ack;
+ head->bs.num = num;
+ head->bs.abuf_flg = 0;
+ head->bs.bc = 1;
+ head->bs.offset = offset;
+ head->value = HTONL((head->value));
+}
+
+static inline void sm_lt_load_build_req(struct sml_lt_load_req *req,
+ u8 instance_id,
+ u8 op_id, u8 ack,
+ u32 lt_index,
+ u8 offset, u8 num)
+{
+ sm_lt_build_head(&req->head, instance_id, op_id, ack, offset, num);
+ req->extra = 0;
+ req->index = lt_index;
+ req->index = HTONL(req->index);
+ req->pad0 = 0;
+ req->pad1 = 0;
+}
+
+static void sml_lt_store_data(u32 *dst, const u32 *src, u8 num)
+{
+ switch (num) {
+ case SM_LT_NUM_2:
+ *(dst + SM_LT_OFFSET_11) = *(src + SM_LT_OFFSET_11);
+ *(dst + SM_LT_OFFSET_10) = *(src + SM_LT_OFFSET_10);
+ *(dst + SM_LT_OFFSET_9) = *(src + SM_LT_OFFSET_9);
+ *(dst + SM_LT_OFFSET_8) = *(src + SM_LT_OFFSET_8);
+ /*lint -fallthrough */
+ case SM_LT_NUM_1:
+ *(dst + SM_LT_OFFSET_7) = *(src + SM_LT_OFFSET_7);
+ *(dst + SM_LT_OFFSET_6) = *(src + SM_LT_OFFSET_6);
+ *(dst + SM_LT_OFFSET_5) = *(src + SM_LT_OFFSET_5);
+ *(dst + SM_LT_OFFSET_4) = *(src + SM_LT_OFFSET_4);
+ /*lint -fallthrough */
+ case SM_LT_NUM_0:
+ *(dst + SM_LT_OFFSET_3) = *(src + SM_LT_OFFSET_3);
+ *(dst + SM_LT_OFFSET_2) = *(src + SM_LT_OFFSET_2);
+ *(dst + SM_LT_OFFSET_1) = *(src + SM_LT_OFFSET_1);
+ *dst = *src;
+ break;
+ default:
+ break;
+ }
+}
+
+static inline void sm_lt_store_build_req(struct sml_lt_store_req *req,
+ u8 instance_id,
+ u8 op_id, u8 ack,
+ u32 lt_index,
+ u8 offset,
+ u8 num,
+ u16 byte_enb3,
+ u16 byte_enb2,
+ u16 byte_enb1,
+ u8 *data)
+{
+ sm_lt_build_head(&req->head, instance_id, op_id, ack, offset, num);
+ req->index = lt_index;
+ req->index = HTONL(req->index);
+ req->extra = 0;
+ req->byte_enb[0] = (u32)(byte_enb3);
+ req->byte_enb[0] = HTONL(req->byte_enb[0]);
+ req->byte_enb[1] = HTONL((((u32)byte_enb2) << 16) | byte_enb1);
+ sml_lt_store_data((u32 *)req->write_data, (u32 *)(void *)data, num);
+}
+
+int hinic3_dbg_lt_rd_16byte(void *hwdev, u8 dest, u8 instance,
+ u32 lt_index, u8 *data)
+{
+ struct sml_lt_load_req req;
+ int ret;
+
+ if (!hwdev)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ sm_lt_load_build_req(&req, instance, SM_LT_LOAD, ACK, lt_index, 0, 0);
+
+ ret = hinic3_api_cmd_read_ack(hwdev, dest, (u8 *)(&req),
+ LT_LOAD16_API_SIZE, (void *)data, 0x10);
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Read linear table 16byte fail, err: %d\n", ret);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int hinic3_dbg_lt_wr_16byte_mask(void *hwdev, u8 dest, u8 instance,
+ u32 lt_index, u8 *data, u16 mask)
+{
+ struct sml_lt_store_req req;
+ int ret;
+
+ if (!hwdev || !data)
+ return -EFAULT;
+
+ if (!COMM_SUPPORT_API_CHAIN((struct hinic3_hwdev *)hwdev))
+ return -EPERM;
+
+ sm_lt_store_build_req(&req, instance, SM_LT_STORE, NOACK, lt_index,
+ 0, 0, 0, 0, mask, data);
+
+ ret = hinic3_api_cmd_write_nack(hwdev, dest, &req, LT_STORE16_API_SIZE);
+ if (ret) {
+ sdk_err(((struct hinic3_hwdev *)hwdev)->dev_hdl,
+ "Write linear table 16byte fail, err: %d\n", ret);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
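A hedged caller-side sketch of the 16-byte linear-table read above; the dest node and instance values are placeholders taken to be 0, and the prototype is assumed visible through the driver's debug header:

#include <linux/printk.h>
#include "hinic3_sm_lt.h"

/* sketch: read one LT entry and hex-dump it */
static int dump_lt_entry(void *hwdev, u32 lt_index)
{
	u8 entry[SM_LT_ENTRY_16B];
	int err;

	err = hinic3_dbg_lt_rd_16byte(hwdev, 0 /* dest, placeholder */,
				      0 /* instance, placeholder */,
				      lt_index, entry);
	if (err)
		return err;

	print_hex_dump(KERN_DEBUG, "lt: ", DUMP_PREFIX_OFFSET, 16, 1,
		       entry, sizeof(entry), false);
	return 0;
}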
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.c
new file mode 100644
index 000000000000..b23b69f3dbe7
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.c
@@ -0,0 +1,267 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
+
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+
+#include "ossl_knl.h"
+#include "hinic3_crm.h"
+#include "hinic3_hw.h"
+#include "hinic3_lld.h"
+#include "hinic3_dev_mgmt.h"
+#include "hinic3_sriov.h"
+
+static int hinic3_init_vf_hw(void *hwdev, u16 start_vf_id, u16 end_vf_id)
+{
+ u16 i, func_idx;
+ int err;
+
+ /* mbox msg channel resources will be freed during remove process */
+ err = hinic3_init_func_mbox_msg_channel(hwdev,
+ hinic3_func_max_vf(hwdev));
+ if (err != 0)
+ return err;
+
+ /* VFs use 256K as the default wq page size and cannot change it */
+ for (i = start_vf_id; i <= end_vf_id; i++) {
+ func_idx = hinic3_glb_pf_vf_offset(hwdev) + i;
+ err = hinic3_set_wq_page_size(hwdev, func_idx,
+ HINIC3_DEFAULT_WQ_PAGE_SIZE,
+ HINIC3_CHANNEL_COMM);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+static int hinic3_deinit_vf_hw(void *hwdev, u16 start_vf_id, u16 end_vf_id)
+{
+ u16 func_idx, idx;
+
+ for (idx = start_vf_id; idx <= end_vf_id; idx++) {
+ func_idx = hinic3_glb_pf_vf_offset(hwdev) + idx;
+ hinic3_set_wq_page_size(hwdev, func_idx,
+ HINIC3_HW_WQ_PAGE_SIZE,
+ HINIC3_CHANNEL_COMM);
+ }
+
+ return 0;
+}
+
+#if !(defined(HAVE_SRIOV_CONFIGURE) || defined(HAVE_RHEL6_SRIOV_CONFIGURE))
+ssize_t hinic3_sriov_totalvfs_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+
+ return sprintf(buf, "%d\n", pci_sriov_get_totalvfs(pdev));
+}
+
+ssize_t hinic3_sriov_numvfs_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+
+ return sprintf(buf, "%d\n", pci_num_vf(pdev));
+}
+
+/*lint -save -e713*/
+ssize_t hinic3_sriov_numvfs_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ int ret;
+ u16 num_vfs;
+ int cur_vfs, total_vfs;
+
+ ret = kstrtou16(buf, 0, &num_vfs);
+ if (ret < 0)
+ return ret;
+
+ cur_vfs = pci_num_vf(pdev);
+ total_vfs = pci_sriov_get_totalvfs(pdev);
+ if (num_vfs > total_vfs)
+ return -ERANGE;
+
+ if (num_vfs == cur_vfs)
+ return count; /* no change */
+
+ if (num_vfs == 0) {
+ /* disable VFs */
+ ret = hinic3_pci_sriov_configure(pdev, 0);
+ if (ret < 0)
+ return ret;
+ return count;
+ }
+
+ /* enable VFs */
+ if (cur_vfs) {
+ nic_warn(&pdev->dev, "%d VFs already enabled. Disable before enabling %d VFs\n",
+ cur_vfs, num_vfs);
+ return -EBUSY;
+ }
+
+ ret = hinic3_pci_sriov_configure(pdev, num_vfs);
+ if (ret < 0)
+ return ret;
+
+ if (ret != num_vfs)
+ nic_warn(&pdev->dev, "%d VFs requested; only %d enabled\n",
+ num_vfs, ret);
+
+ return count;
+}
+
+/*lint -restore*/
+#endif /* !(HAVE_SRIOV_CONFIGURE || HAVE_RHEL6_SRIOV_CONFIGURE) */
+
+int hinic3_pci_sriov_disable(struct pci_dev *dev)
+{
+#ifdef CONFIG_PCI_IOV
+ struct hinic3_sriov_info *sriov_info = NULL;
+ struct hinic3_event_info event = {0};
+ void *hwdev = NULL;
+ u16 tmp_vfs;
+
+ sriov_info = hinic3_get_sriov_info_by_pcidev(dev);
+ hwdev = hinic3_get_hwdev_by_pcidev(dev);
+ if (!hwdev) {
+ sdk_err(&dev->dev, "SR-IOV disable is not permitted, please wait...\n");
+ return -EPERM;
+ }
+
+ /* if SR-IOV is already disabled then there is nothing to do */
+ if (!sriov_info->sriov_enabled)
+ return 0;
+
+ if (test_and_set_bit(HINIC3_SRIOV_DISABLE, &sriov_info->state)) {
+ sdk_err(&dev->dev, "SR-IOV disable in process, please wait");
+ return -EPERM;
+ }
+
+ /* If our VFs are assigned we cannot shut down SR-IOV
+ * without causing issues, so just leave the hardware
+ * available but disabled
+ */
+ if (pci_vfs_assigned(dev)) {
+ clear_bit(HINIC3_SRIOV_DISABLE, &sriov_info->state);
+ sdk_warn(&dev->dev, "Unloading driver while VFs are assigned - VFs will not be deallocated\n");
+ return -EPERM;
+ }
+
+ event.service = EVENT_SRV_COMM;
+ event.type = EVENT_COMM_SRIOV_STATE_CHANGE;
+ ((struct hinic3_sriov_state_info *)(void *)event.event_data)->enable = 0;
+ hinic3_event_callback(hwdev, &event);
+
+ sriov_info->sriov_enabled = false;
+
+ /* disable iov and allow time for transactions to clear */
+ pci_disable_sriov(dev);
+
+ tmp_vfs = (u16)sriov_info->num_vfs;
+ sriov_info->num_vfs = 0;
+ hinic3_deinit_vf_hw(hwdev, 1, tmp_vfs);
+
+ clear_bit(HINIC3_SRIOV_DISABLE, &sriov_info->state);
+
+#endif
+
+ return 0;
+}
+
+int hinic3_pci_sriov_enable(struct pci_dev *dev, int num_vfs)
+{
+#ifdef CONFIG_PCI_IOV
+ struct hinic3_sriov_info *sriov_info = NULL;
+ struct hinic3_event_info event = {0};
+ void *hwdev = NULL;
+ int pre_existing_vfs = 0;
+ int err = 0;
+
+ sriov_info = hinic3_get_sriov_info_by_pcidev(dev);
+ hwdev = hinic3_get_hwdev_by_pcidev(dev);
+ if (!hwdev) {
+ sdk_err(&dev->dev, "SR-IOV enable is not permitted, please wait...\n");
+ return -EPERM;
+ }
+
+ if (test_and_set_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state)) {
+ sdk_err(&dev->dev, "SR-IOV enable in process, please wait, num_vfs %d\n",
+ num_vfs);
+ return -EPERM;
+ }
+
+ pre_existing_vfs = pci_num_vf(dev);
+
+ if (num_vfs > pci_sriov_get_totalvfs(dev)) {
+ clear_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state);
+ return -ERANGE;
+ }
+ if (pre_existing_vfs && pre_existing_vfs != num_vfs) {
+ err = hinic3_pci_sriov_disable(dev);
+ if (err) {
+ clear_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state);
+ return err;
+ }
+ } else if (pre_existing_vfs == num_vfs) {
+ clear_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state);
+ return num_vfs;
+ }
+
+ err = hinic3_init_vf_hw(hwdev, 1, (u16)num_vfs);
+ if (err) {
+ sdk_err(&dev->dev, "Failed to init vf in hardware before enable sriov, error %d\n",
+ err);
+ clear_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state);
+ return err;
+ }
+
+ err = pci_enable_sriov(dev, num_vfs);
+ if (err) {
+ sdk_err(&dev->dev, "Failed to enable SR-IOV, error %d\n", err);
+ clear_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state);
+ return err;
+ }
+
+ sriov_info->sriov_enabled = true;
+ sriov_info->num_vfs = num_vfs;
+
+ event.service = EVENT_SRV_COMM;
+ event.type = EVENT_COMM_SRIOV_STATE_CHANGE;
+ ((struct hinic3_sriov_state_info *)(void *)event.event_data)->enable = 1;
+ ((struct hinic3_sriov_state_info *)(void *)event.event_data)->num_vfs = (u16)num_vfs;
+ hinic3_event_callback(hwdev, &event);
+
+ clear_bit(HINIC3_SRIOV_ENABLE, &sriov_info->state);
+
+ return num_vfs;
+#else
+
+ return 0;
+#endif
+}
+
+int hinic3_pci_sriov_configure(struct pci_dev *dev, int num_vfs)
+{
+ struct hinic3_sriov_info *sriov_info = NULL;
+
+ sriov_info = hinic3_get_sriov_info_by_pcidev(dev);
+ if (!sriov_info)
+ return -EFAULT;
+
+ if (!test_bit(HINIC3_FUNC_PERSENT, &sriov_info->state))
+ return -EFAULT;
+
+ if (num_vfs == 0)
+ return hinic3_pci_sriov_disable(dev);
+ else
+ return hinic3_pci_sriov_enable(dev, num_vfs);
+}
+
+/*lint -restore*/
+
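hinic3_pci_sriov_configure follows the standard sriov_configure contract (0 disables; N enables and returns the enabled count). A sketch of how it would typically be wired into the PCI core; this pci_driver instance is illustrative, not the patch's actual definition:

#include <linux/pci.h>
#include "hinic3_sriov.h"

/* illustrative wiring of the SR-IOV entry point into the PCI core */
static struct pci_driver hinic3_driver_sketch = {
	.name = "hinic3",
	.id_table = NULL, /* real probe table omitted */
	.sriov_configure = hinic3_pci_sriov_configure,
};

With this in place, writes to /sys/bus/pci/devices/<bdf>/sriov_numvfs are routed to the callback by the PCI core, which is also why the fallback sysfs attributes above are only compiled when the kernel lacks sriov_configure support.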
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.h b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.h
new file mode 100644
index 000000000000..4a640adf15b4
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_sriov.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef HINIC3_SRIOV_H
+#define HINIC3_SRIOV_H
+#include <linux/types.h>
+#include <linux/pci.h>
+
+#if !(defined(HAVE_SRIOV_CONFIGURE) || defined(HAVE_RHEL6_SRIOV_CONFIGURE))
+ssize_t hinic3_sriov_totalvfs_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+ssize_t hinic3_sriov_numvfs_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+ssize_t hinic3_sriov_numvfs_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count);
+#endif /* !(HAVE_SRIOV_CONFIGURE || HAVE_RHEL6_SRIOV_CONFIGURE) */
+
+enum hinic3_sriov_state {
+ HINIC3_SRIOV_DISABLE,
+ HINIC3_SRIOV_ENABLE,
+ HINIC3_FUNC_PERSENT,
+};
+
+struct hinic3_sriov_info {
+ bool sriov_enabled;
+ unsigned int num_vfs;
+ unsigned long state;
+};
+
+struct hinic3_sriov_info *hinic3_get_sriov_info_by_pcidev(struct pci_dev *pdev);
+int hinic3_pci_sriov_disable(struct pci_dev *dev);
+int hinic3_pci_sriov_enable(struct pci_dev *dev, int num_vfs);
+int hinic3_pci_sriov_configure(struct pci_dev *dev, int num_vfs);
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/hinic3_wq.c b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_wq.c
new file mode 100644
index 000000000000..2f5e0984e429
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/hinic3_wq.c
@@ -0,0 +1,159 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": [COMM]" fmt
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/device.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include "ossl_knl.h"
+#include "hinic3_common.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_wq.h"
+
+#define WQ_MIN_DEPTH 64
+#define WQ_MAX_DEPTH 65536
+#define WQ_MAX_NUM_PAGES (PAGE_SIZE / sizeof(u64))
+
+static int wq_init_wq_block(struct hinic3_wq *wq)
+{
+ int i;
+
+ if (WQ_IS_0_LEVEL_CLA(wq)) {
+ wq->wq_block_paddr = wq->wq_pages[0].align_paddr;
+ wq->wq_block_vaddr = wq->wq_pages[0].align_vaddr;
+
+ return 0;
+ }
+
+ if (wq->num_wq_pages > WQ_MAX_NUM_PAGES) {
+ sdk_err(wq->dev_hdl, "num_wq_pages exceed limit: %lu\n",
+ WQ_MAX_NUM_PAGES);
+ return -EFAULT;
+ }
+
+ wq->wq_block_vaddr = dma_zalloc_coherent(wq->dev_hdl, PAGE_SIZE,
+ &wq->wq_block_paddr,
+ GFP_KERNEL);
+ if (!wq->wq_block_vaddr) {
+ sdk_err(wq->dev_hdl, "Failed to alloc wq block\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < wq->num_wq_pages; i++)
+ wq->wq_block_vaddr[i] =
+ cpu_to_be64(wq->wq_pages[i].align_paddr);
+
+ return 0;
+}
+
+static int wq_alloc_pages(struct hinic3_wq *wq)
+{
+ int i, page_idx, err;
+
+ wq->wq_pages = kcalloc(wq->num_wq_pages, sizeof(*wq->wq_pages),
+ GFP_KERNEL);
+ if (!wq->wq_pages) {
+ sdk_err(wq->dev_hdl, "Failed to alloc wq pages handle\n");
+ return -ENOMEM;
+ }
+
+ for (page_idx = 0; page_idx < wq->num_wq_pages; page_idx++) {
+ err = hinic3_dma_zalloc_coherent_align(wq->dev_hdl,
+ wq->wq_page_size,
+ wq->wq_page_size,
+ GFP_KERNEL,
+ &wq->wq_pages[page_idx]);
+ if (err) {
+ sdk_err(wq->dev_hdl, "Failed to alloc wq page\n");
+ goto free_wq_pages;
+ }
+ }
+
+ err = wq_init_wq_block(wq);
+ if (err)
+ goto free_wq_pages;
+
+ return 0;
+
+free_wq_pages:
+ for (i = 0; i < page_idx; i++)
+ hinic3_dma_free_coherent_align(wq->dev_hdl, &wq->wq_pages[i]);
+
+ kfree(wq->wq_pages);
+ wq->wq_pages = NULL;
+
+ return -ENOMEM;
+}
+
+static void wq_free_pages(struct hinic3_wq *wq)
+{
+ int i;
+
+ if (!WQ_IS_0_LEVEL_CLA(wq))
+ dma_free_coherent(wq->dev_hdl, PAGE_SIZE, wq->wq_block_vaddr,
+ wq->wq_block_paddr);
+
+ for (i = 0; i < wq->num_wq_pages; i++)
+ hinic3_dma_free_coherent_align(wq->dev_hdl, &wq->wq_pages[i]);
+
+ kfree(wq->wq_pages);
+ wq->wq_pages = NULL;
+}
+
+int hinic3_wq_create(void *hwdev, struct hinic3_wq *wq, u32 q_depth,
+ u16 wqebb_size)
+{
+ struct hinic3_hwdev *dev = hwdev;
+ u32 wq_page_size;
+
+ if (!wq || !dev) {
+ pr_err("Invalid wq or dev_hdl\n");
+ return -EINVAL;
+ }
+
+ if (q_depth < WQ_MIN_DEPTH || q_depth > WQ_MAX_DEPTH ||
+ (q_depth & (q_depth - 1)) || !wqebb_size ||
+ (wqebb_size & (wqebb_size - 1))) {
+ sdk_err(dev->dev_hdl, "Wq q_depth(%u) or wqebb_size(%u) is invalid\n",
+ q_depth, wqebb_size);
+ return -EINVAL;
+ }
+
+ wq_page_size = ALIGN(dev->wq_page_size, PAGE_SIZE);
+
+ memset(wq, 0, sizeof(*wq));
+ wq->dev_hdl = dev->dev_hdl;
+ wq->q_depth = q_depth;
+ wq->idx_mask = (u16)(q_depth - 1);
+ wq->wqebb_size = wqebb_size;
+ wq->wqebb_size_shift = (u16)ilog2(wq->wqebb_size);
+ wq->wq_page_size = wq_page_size;
+
+ wq->wqebbs_per_page = wq_page_size / wqebb_size;
+ /* In case wq_page_size is larger than q_depth * wqebb_size */
+ if (wq->wqebbs_per_page > q_depth)
+ wq->wqebbs_per_page = q_depth;
+ wq->wqebbs_per_page_shift = (u16)ilog2(wq->wqebbs_per_page);
+ wq->wqebbs_per_page_mask = (u16)(wq->wqebbs_per_page - 1);
+ wq->num_wq_pages = (u16)(ALIGN(((u32)q_depth * wqebb_size),
+ wq_page_size) / wq_page_size);
+
+ return wq_alloc_pages(wq);
+}
+EXPORT_SYMBOL(hinic3_wq_create);
+
+void hinic3_wq_destroy(struct hinic3_wq *wq)
+{
+ if (!wq)
+ return;
+
+ wq_free_pages(wq);
+}
+EXPORT_SYMBOL(hinic3_wq_destroy);
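hinic3_wq_create() keeps q_depth and wqebbs_per_page as powers of two precisely so that index arithmetic reduces to masks and shifts. A sketch of the resulting index-to-WQEBB translation, assuming no equivalent accessor already exists elsewhere in the driver:

/* sketch: translate a producer index into a WQEBB address */
static void *wq_get_wqebb_sketch(struct hinic3_wq *wq, u16 idx)
{
	u16 masked = idx & wq->idx_mask; /* wrap within q_depth */
	u16 page = masked >> wq->wqebbs_per_page_shift;
	u16 offset = masked & wq->wqebbs_per_page_mask;

	return (u8 *)wq->wq_pages[page].align_vaddr +
	       ((u32)offset << wq->wqebb_size_shift);
}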
diff --git a/drivers/net/ethernet/huawei/hinic3/hw/ossl_knl_linux.c b/drivers/net/ethernet/huawei/hinic3/hw/ossl_knl_linux.c
new file mode 100644
index 000000000000..fafbc2f23067
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/hw/ossl_knl_linux.c
@@ -0,0 +1,121 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#include <linux/vmalloc.h>
+#include "ossl_knl_linux.h"
+
+#define OSSL_MINUTE_BASE (60)
+
+struct file *file_creat(const char *file_name)
+{
+ return filp_open(file_name, O_CREAT | O_RDWR | O_APPEND, 0);
+}
+
+struct file *file_open(const char *file_name)
+{
+ return filp_open(file_name, O_RDONLY, 0);
+}
+
+void file_close(struct file *file_handle)
+{
+ (void)filp_close(file_handle, NULL);
+}
+
+u32 get_file_size(struct file *file_handle)
+{
+ struct inode *file_inode;
+
+ file_inode = file_handle->f_inode;
+
+ return (u32)(file_inode->i_size);
+}
+
+void set_file_position(struct file *file_handle, u32 position)
+{
+ file_handle->f_pos = position;
+}
+
+int file_read(struct file *file_handle, char *log_buffer, u32 rd_length,
+ u32 *file_pos)
+{
+ return (int)kernel_read(file_handle, log_buffer, rd_length,
+ &file_handle->f_pos);
+}
+
+u32 file_write(struct file *file_handle, const char *log_buffer, u32 wr_length)
+{
+ return (u32)kernel_write(file_handle, log_buffer, wr_length,
+ &file_handle->f_pos);
+}
+
+static int _linux_thread_func(void *thread)
+{
+ struct sdk_thread_info *info = (struct sdk_thread_info *)thread;
+
+ while (!kthread_should_stop())
+ info->thread_fn(info->data);
+
+ return 0;
+}
+
+int creat_thread(struct sdk_thread_info *thread_info)
+{
+ thread_info->thread_obj = kthread_run(_linux_thread_func, thread_info,
+ thread_info->name);
+ if (!thread_info->thread_obj)
+ return -EFAULT;
+
+ return 0;
+}
+
+void stop_thread(struct sdk_thread_info *thread_info)
+{
+ if (thread_info->thread_obj)
+ (void)kthread_stop(thread_info->thread_obj);
+}
+
+void utctime_to_localtime(u64 utctime, u64 *localtime)
+{
+ *localtime = utctime - sys_tz.tz_minuteswest *
+ OSSL_MINUTE_BASE; /*lint !e647*/
+}
+
+#ifndef HAVE_TIMER_SETUP
+void initialize_timer(const void *adapter_hdl, struct timer_list *timer)
+{
+ if (!adapter_hdl || !timer)
+ return;
+
+ init_timer(timer);
+}
+#endif
+
+void add_to_timer(struct timer_list *timer, long period)
+{
+ if (!timer)
+ return;
+
+ add_timer(timer);
+}
+
+void stop_timer(struct timer_list *timer) {}
+
+void delete_timer(struct timer_list *timer)
+{
+ if (!timer)
+ return;
+
+ del_timer_sync(timer);
+}
+
+u64 ossl_get_real_time(void)
+{
+ struct timeval tv = {0};
+ u64 tv_msec;
+
+ do_gettimeofday(&tv);
+
+ tv_msec = (u64)tv.tv_sec * MSEC_PER_SEC + (u64)tv.tv_usec / USEC_PER_MSEC;
+ return tv_msec;
+}
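Note that _linux_thread_func() invokes the callback in a tight loop until kthread_stop(), so callers are expected to sleep or throttle inside it. A hedged usage sketch; the callback and its signature (void (*)(void *)) are inferred from the call site above:

#include <linux/delay.h>
#include "ossl_knl_linux.h"

/* hypothetical poller: one unit of work per iteration, then yield */
static void my_poll_once(void *data)
{
	msleep(20);
}

static struct sdk_thread_info poll_thread = {
	.name = "hinic3_poll", /* hypothetical thread name */
	.thread_fn = my_poll_once,
	.data = NULL,
};

static int start_poller(void)
{
	return creat_thread(&poll_thread);
}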
diff --git a/drivers/net/ethernet/huawei/hinic3/mag_cmd.h b/drivers/net/ethernet/huawei/hinic3/mag_cmd.h
new file mode 100644
index 000000000000..3af40511c7b9
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/mag_cmd.h
@@ -0,0 +1,886 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2021-2021. All rights reserved.
+ * Description: serdes/mag cmd definition between driver and mpu
+ * Author: ETH group
+ * Create: 2021-07-30
+ */
+
+#ifndef MAG_CMD_H
+#define MAG_CMD_H
+
+#include "mgmt_msg_base.h"
+
+/* serdes/mag message command definitions */
+enum mag_cmd {
+ /* serdes command word; wraps all serdes commands */
+ SERDES_CMD_PROCESS = 0,
+
+ /* mag command words, grouped by function */
+ /* port configuration: 0-29 */
+ MAG_CMD_SET_PORT_CFG = 1,
+ MAG_CMD_SET_PORT_ADAPT = 2,
+ MAG_CMD_CFG_LOOPBACK_MODE = 3,
+
+ MAG_CMD_GET_PORT_ENABLE = 5,
+ MAG_CMD_SET_PORT_ENABLE = 6,
+ MAG_CMD_GET_LINK_STATUS = 7,
+ MAG_CMD_SET_LINK_FOLLOW = 8,
+ MAG_CMD_SET_PMA_ENABLE = 9,
+ MAG_CMD_CFG_FEC_MODE = 10,
+
+ MAG_CMD_CFG_AN_TYPE = 12, /* reserved for future use */
+ MAG_CMD_CFG_LINK_TIME = 13,
+
+ MAG_CMD_SET_PANGEA_ADAPT = 15,
+
+ /* bios link configuration: 30-49 */
+ MAG_CMD_CFG_BIOS_LINK_CFG = 31,
+ MAG_CMD_RESTORE_LINK_CFG = 32,
+ MAG_CMD_ACTIVATE_BIOS_LINK_CFG = 33,
+
+ /* optical module / LED / PHY peripheral configuration management: 50-99 */
+ /* LED */
+ MAG_CMD_SET_LED_CFG = 50,
+
+ /* PHY */
+ MAG_CMD_GET_PHY_INIT_STATUS = 55, /* reserved for future use */
+
+ /* optical module */
+ MAG_CMD_GET_XSFP_INFO = 60,
+ MAG_CMD_SET_XSFP_ENABLE = 61,
+ MAG_CMD_GET_XSFP_PRESENT = 62,
+ MAG_CMD_SET_XSFP_RW = 63, /* sfp/qsfp single byte read/write, for equipment test */
+ MAG_CMD_CFG_XSFP_TEMPERATURE = 64,
+
+ /* event reporting: 100-149 */
+ MAG_CMD_WIRE_EVENT = 100,
+ MAG_CMD_LINK_ERR_EVENT = 101,
+
+ /* DFX and counter related */
+ MAG_CMD_EVENT_PORT_INFO = 150,
+ MAG_CMD_GET_PORT_STAT = 151,
+ MAG_CMD_CLR_PORT_STAT = 152,
+ MAG_CMD_GET_PORT_INFO = 153,
+ MAG_CMD_GET_PCS_ERR_CNT = 154,
+ MAG_CMD_GET_MAG_CNT = 155,
+ MAG_CMD_DUMP_ANTRAIN_INFO = 156,
+
+ MAG_CMD_MAX = 0xFF
+};
+
+/* serdes cmd struct define */
+#define CMD_ARRAY_BUF_SIZE 64
+#define SERDES_CMD_DATA_BUF_SIZE 512
+struct serdes_in_info {
+ u32 chip_id : 16;
+ u32 macro_id : 16;
+ u32 start_sds_id : 16;
+ u32 sds_num : 16;
+
+ u32 cmd_type : 8; /* reserved for iotype */
+ u32 sub_cmd : 8;
+ u32 rw : 1; /* 0: read, 1: write */
+ u32 rsvd : 15;
+
+ u32 val;
+ union {
+ char field[CMD_ARRAY_BUF_SIZE];
+ u32 addr;
+ u8 *ex_param;
+ };
+};
+
+struct serdes_out_info {
+ u32 str_len; /* out_str length */
+ u32 result_offset;
+ u32 type; /* 0:data; 1:string */
+ char out_str[SERDES_CMD_DATA_BUF_SIZE];
+};
+
+struct serdes_cmd_in {
+ struct mgmt_msg_head head;
+
+ struct serdes_in_info serdes_in;
+};
+
+struct serdes_cmd_out {
+ struct mgmt_msg_head head;
+
+ struct serdes_out_info serdes_out;
+};
+
+enum mag_cmd_port_speed {
+ PORT_SPEED_NOT_SET = 0,
+ PORT_SPEED_10MB = 1,
+ PORT_SPEED_100MB = 2,
+ PORT_SPEED_1GB = 3,
+ PORT_SPEED_10GB = 4,
+ PORT_SPEED_25GB = 5,
+ PORT_SPEED_40GB = 6,
+ PORT_SPEED_50GB = 7,
+ PORT_SPEED_100GB = 8,
+ PORT_SPEED_200GB = 9,
+ PORT_SPEED_UNKNOWN
+};
+
+enum mag_cmd_port_an {
+ PORT_AN_NOT_SET = 0,
+ PORT_CFG_AN_ON = 1,
+ PORT_CFG_AN_OFF = 2
+};
+
+enum mag_cmd_port_adapt {
+ PORT_ADAPT_NOT_SET = 0,
+ PORT_CFG_ADAPT_ON = 1,
+ PORT_CFG_ADAPT_OFF = 2
+};
+
+enum mag_cmd_port_sriov {
+ PORT_SRIOV_NOT_SET = 0,
+ PORT_CFG_SRIOV_ON = 1,
+ PORT_CFG_SRIOV_OFF = 2
+};
+
+enum mag_cmd_port_fec {
+ PORT_FEC_NOT_SET = 0,
+ PORT_FEC_RSFEC = 1,
+ PORT_FEC_BASEFEC = 2,
+ PORT_FEC_NOFEC = 3,
+ PORT_FEC_LLRSFEC = 4,
+ PORT_FEC_AUTO = 5
+};
+
+enum mag_cmd_port_lanes {
+ PORT_LANES_NOT_SET = 0,
+ PORT_LANES_X1 = 1,
+ PORT_LANES_X2 = 2,
+ PORT_LANES_X4 = 4,
+ PORT_LANES_X8 = 8 /* reserved for future use */
+};
+
+enum mag_cmd_port_duplex {
+ PORT_DUPLEX_HALF = 0,
+ PORT_DUPLEX_FULL = 1
+};
+
+enum mag_cmd_wire_node {
+ WIRE_NODE_UNDEF = 0,
+ CABLE_10G = 1,
+ FIBER_10G = 2,
+ CABLE_25G = 3,
+ FIBER_25G = 4,
+ CABLE_40G = 5,
+ FIBER_40G = 6,
+ CABLE_50G = 7,
+ FIBER_50G = 8,
+ CABLE_100G = 9,
+ FIBER_100G = 10,
+ CABLE_200G = 11,
+ FIBER_200G = 12,
+ WIRE_NODE_NUM
+};
+
+enum mag_cmd_cnt_type {
+ MAG_RX_RSFEC_DEC_CW_CNT = 0,
+ MAG_RX_RSFEC_CORR_CW_CNT = 1,
+ MAG_RX_RSFEC_UNCORR_CW_CNT = 2,
+ MAG_RX_PCS_BER_CNT = 3,
+ MAG_RX_PCS_ERR_BLOCK_CNT = 4,
+ MAG_RX_PCS_E_BLK_CNT = 5,
+ MAG_RX_PCS_DEC_ERR_BLK_CNT = 6,
+ MAG_RX_PCS_LANE_BIP_ERR_CNT = 7,
+ MAG_CNT_NUM
+};
+
+/* mag_cmd_set_port_cfg config bitmap */
+#define MAG_CMD_SET_SPEED 0x1
+#define MAG_CMD_SET_AUTONEG 0x2
+#define MAG_CMD_SET_FEC 0x4
+#define MAG_CMD_SET_LANES 0x8
+struct mag_cmd_set_port_cfg {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd0[3];
+
+ u32 config_bitmap;
+ u8 speed;
+ u8 autoneg;
+ u8 fec;
+ u8 lanes;
+ u8 rsvd1[20];
+};
+
+/* mag supported/advertised link mode bitmap */
+enum mag_cmd_link_mode {
+ LINK_MODE_GE = 0,
+ LINK_MODE_10GE_BASE_R = 1,
+ LINK_MODE_25GE_BASE_R = 2,
+ LINK_MODE_40GE_BASE_R4 = 3,
+ LINK_MODE_50GE_BASE_R = 4,
+ LINK_MODE_50GE_BASE_R2 = 5,
+ LINK_MODE_100GE_BASE_R = 6,
+ LINK_MODE_100GE_BASE_R2 = 7,
+ LINK_MODE_100GE_BASE_R4 = 8,
+ LINK_MODE_200GE_BASE_R2 = 9,
+ LINK_MODE_200GE_BASE_R4 = 10,
+ LINK_MODE_MAX_NUMBERS,
+
+ LINK_MODE_UNKNOWN = 0xFFFF
+};
+
+#define LINK_MODE_GE_BIT 0x1u
+#define LINK_MODE_10GE_BASE_R_BIT 0x2u
+#define LINK_MODE_25GE_BASE_R_BIT 0x4u
+#define LINK_MODE_40GE_BASE_R4_BIT 0x8u
+#define LINK_MODE_50GE_BASE_R_BIT 0x10u
+#define LINK_MODE_50GE_BASE_R2_BIT 0x20u
+#define LINK_MODE_100GE_BASE_R_BIT 0x40u
+#define LINK_MODE_100GE_BASE_R2_BIT 0x80u
+#define LINK_MODE_100GE_BASE_R4_BIT 0x100u
+#define LINK_MODE_200GE_BASE_R2_BIT 0x200u
+#define LINK_MODE_200GE_BASE_R4_BIT 0x400u
+
+#define CABLE_10GE_BASE_R_BIT LINK_MODE_10GE_BASE_R_BIT
+#define CABLE_25GE_BASE_R_BIT (LINK_MODE_25GE_BASE_R_BIT | LINK_MODE_10GE_BASE_R_BIT)
+#define CABLE_40GE_BASE_R4_BIT LINK_MODE_40GE_BASE_R4_BIT
+#define CABLE_50GE_BASE_R_BIT (LINK_MODE_50GE_BASE_R_BIT | LINK_MODE_25GE_BASE_R_BIT | \
+ LINK_MODE_10GE_BASE_R_BIT)
+#define CABLE_50GE_BASE_R2_BIT LINK_MODE_50GE_BASE_R2_BIT
+#define CABLE_100GE_BASE_R2_BIT (LINK_MODE_100GE_BASE_R2_BIT | LINK_MODE_50GE_BASE_R2_BIT)
+#define CABLE_100GE_BASE_R4_BIT (LINK_MODE_100GE_BASE_R4_BIT | LINK_MODE_40GE_BASE_R4_BIT)
+#define CABLE_200GE_BASE_R4_BIT (LINK_MODE_200GE_BASE_R4_BIT | LINK_MODE_100GE_BASE_R4_BIT | \
+ LINK_MODE_40GE_BASE_R4_BIT)
+
+struct mag_cmd_get_port_info {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd0[3];
+
+ u8 wire_type;
+ u8 an_support;
+ u8 an_en;
+ u8 duplex;
+
+ u8 speed;
+ u8 fec;
+ u8 lanes;
+ u8 rsvd1;
+
+ u32 supported_mode;
+ u32 advertised_mode;
+ u8 rsvd2[8];
+};
+
+#define MAG_CMD_OPCODE_GET 0
+#define MAG_CMD_OPCODE_SET 1
+struct mag_cmd_set_port_adapt {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 opcode; /* 0:get adapt info 1:set adapt */
+ u8 enable;
+ u8 rsvd0;
+ u32 speed_mode;
+ u32 rsvd1[3];
+};
+
+#define MAG_CMD_LP_MODE_SDS_S_TX2RX 1
+#define MAG_CMD_LP_MODE_SDS_P_RX2TX 2
+#define MAG_CMD_LP_MODE_SDS_P_TX2RX 3
+#define MAG_CMD_LP_MODE_MAC_RX2TX 4
+#define MAG_CMD_LP_MODE_MAC_TX2RX 5
+#define MAG_CMD_LP_MODE_TXDP2RXDP 6
+struct mag_cmd_cfg_loopback_mode {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 opcode; /* 0:get loopback mode 1:set loopback mode */
+ u8 lp_mode;
+ u8 lp_en; /* 0:disable 1:enable */
+
+ u32 rsvd0[2];
+};
+
+#define MAG_CMD_PORT_DISABLE 0x0
+#define MAG_CMD_TX_ENABLE 0x1
+#define MAG_CMD_RX_ENABLE 0x2
+/* the physical port is disabled only when all PFs of the port are set to
+ * down; if any PF is enabled, the port stays enabled
+ */
+struct mag_cmd_set_port_enable {
+ struct mgmt_msg_head head;
+
+ u16 function_id; /* function_id must not exceed the max supported pf_id (32) */
+ u16 rsvd0;
+
+ u8 state; /* bitmap bit0:tx_en bit1:rx_en */
+ u8 rsvd1[3];
+};
+
+struct mag_cmd_get_port_enable {
+ struct mgmt_msg_head head;
+
+ u8 port;
+ u8 state; /* bitmap bit0:tx_en bit1:rx_en */
+ u8 rsvd0[2];
+};
+
+#define PMA_FOLLOW_DEFAULT 0x0
+#define PMA_FOLLOW_ENABLE 0x1
+#define PMA_FOLLOW_DISABLE 0x2
+#define PMA_FOLLOW_GET 0x4
+/* the physical port disables link follow only when all PFs of the port are set to follow disable */
+struct mag_cmd_set_link_follow {
+ struct mgmt_msg_head head;
+
+ u16 function_id; /* function_id must not exceed the max supported pf_id (32) */
+ u16 rsvd0;
+
+ u8 follow;
+ u8 rsvd1[3];
+};
+
+/* firmware also use this cmd report link event to driver */
+struct mag_cmd_get_link_status {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 status; /* 0:link down 1:link up */
+ u8 rsvd0[2];
+};
+
+struct mag_cmd_set_pma_enable {
+ struct mgmt_msg_head head;
+
+ u16 function_id; /* function_id must not exceed the max supported pf_id (32) */
+ u16 enable;
+};
+
+struct mag_cmd_cfg_an_type {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 opcode; /* 0:get an type 1:set an type */
+ u8 rsvd0[2];
+
+ u32 an_type; /* 0:ieee 1:25G/50G eth consortium */
+};
+
+struct mag_cmd_get_link_time {
+ struct mgmt_msg_head head;
+ u8 port_id;
+ u8 rsvd0[3];
+
+ u32 link_up_begin;
+ u32 link_up_end;
+ u32 link_down_begin;
+ u32 link_down_end;
+};
+
+struct mag_cmd_cfg_fec_mode {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 opcode; /* 0:get fec mode 1:set fec mode */
+ u8 fec;
+ u8 rsvd0;
+};
+
+/* speed */
+#define PANGEA_ADAPT_10G_BITMAP 0xd
+#define PANGEA_ADAPT_25G_BITMAP 0x72
+#define PANGEA_ADAPT_40G_BITMAP 0x680
+#define PANGEA_ADAPT_100G_BITMAP 0x1900
+
+/* speed and fec */
+#define PANGEA_10G_NO_BITMAP 0x8
+#define PANGEA_10G_BASE_BITMAP 0x4
+#define PANGEA_25G_NO_BITMAP 0x10
+#define PANGEA_25G_BASE_BITMAP 0x20
+#define PANGEA_25G_RS_BITMAP 0x40
+#define PANGEA_40G_NO_BITMAP 0x400
+#define PANGEA_40G_BASE_BITMAP 0x200
+#define PANGEA_100G_NO_BITMAP 0x800
+#define PANGEA_100G_RS_BITMAP 0x1000
+
+/* adapt or fec */
+#define PANGEA_ADAPT_ADAPT_BITMAP 0x183
+#define PANGEA_ADAPT_NO_BITMAP 0xc18
+#define PANGEA_ADAPT_BASE_BITMAP 0x224
+#define PANGEA_ADAPT_RS_BITMAP 0x1040
+
+/* default cfg */
+#define PANGEA_ADAPT_CFG_10G_CR 0x200d
+#define PANGEA_ADAPT_CFG_10G_SRLR 0xd
+#define PANGEA_ADAPT_CFG_25G_CR 0x207f
+#define PANGEA_ADAPT_CFG_25G_SRLR 0x72
+#define PANGEA_ADAPT_CFG_40G_CR4 0x2680
+#define PANGEA_ADAPT_CFG_40G_SRLR4 0x680
+#define PANGEA_ADAPT_CFG_100G_CR4 0x3f80
+#define PANGEA_ADAPT_CFG_100G_SRLR4 0x1900
+typedef union {
+ struct {
+ u32 adapt_10g : 1; /* [0] adapt_10g */
+ u32 adapt_25g : 1; /* [1] adapt_25g */
+ u32 base_10g : 1; /* [2] base_10g */
+ u32 no_10g : 1; /* [3] no_10g */
+ u32 no_25g : 1; /* [4] no_25g */
+ u32 base_25g : 1; /* [5] base_25g */
+ u32 rs_25g : 1; /* [6] rs_25g */
+ u32 adapt_40g : 1; /* [7] adapt_40g */
+ u32 adapt_100g : 1; /* [8] adapt_100g */
+ u32 base_40g : 1; /* [9] base_40g */
+ u32 no_40g : 1; /* [10] no_40g */
+ u32 no_100g : 1; /* [11] no_100g */
+ u32 rs_100g : 1; /* [12] rs_100g */
+ u32 auto_neg : 1; /* [13] auto_neg */
+ u32 rsvd0 : 18; /* [31:14] reserved */
+ } bits;
+
+ u32 value;
+} pangea_adapt_bitmap_u;
+
+#define PANGEA_ADAPT_GET 0x0
+#define PANGEA_ADAPT_SET 0x1
+struct mag_cmd_set_pangea_adapt {
+ struct mgmt_msg_head head;
+
+ u16 port_id;
+ u8 opcode; /* 0:get adapt info 1:cfg adapt info */
+ u8 wire_type;
+
+ pangea_adapt_bitmap_u cfg_bitmap;
+ pangea_adapt_bitmap_u cur_bitmap;
+ u32 rsvd1[3];
+};
+
+struct mag_cmd_cfg_bios_link_cfg {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 opcode; /* 0:get bios link info 1:set bios link cfg */
+ u8 clear;
+ u8 rsvd0;
+
+ u32 wire_type;
+ u8 an_en;
+ u8 speed;
+ u8 fec;
+ u8 rsvd1;
+ u32 speed_mode;
+ u32 rsvd2[3];
+};
+
+struct mag_cmd_restore_link_cfg {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd[7];
+};
+
+struct mag_cmd_activate_bios_link_cfg {
+ struct mgmt_msg_head head;
+
+ u32 rsvd[8];
+};
+
+/* led type */
+enum mag_led_type {
+ MAG_CMD_LED_TYPE_ALARM = 0x0,
+ MAG_CMD_LED_TYPE_LOW_SPEED = 0x1,
+ MAG_CMD_LED_TYPE_HIGH_SPEED = 0x2
+};
+
+/* led mode */
+enum mag_led_mode {
+ MAG_CMD_LED_MODE_DEFAULT = 0x0,
+ MAG_CMD_LED_MODE_FORCE_ON = 0x1,
+ MAG_CMD_LED_MODE_FORCE_OFF = 0x2,
+ MAG_CMD_LED_MODE_FORCE_BLINK_1HZ = 0x3,
+ MAG_CMD_LED_MODE_FORCE_BLINK_2HZ = 0x4,
+ MAG_CMD_LED_MODE_FORCE_BLINK_4HZ = 0x5,
+ MAG_CMD_LED_MODE_1HZ = 0x6,
+ MAG_CMD_LED_MODE_2HZ = 0x7,
+ MAG_CMD_LED_MODE_4HZ = 0x8
+};
+
+/* the led reports an alarm when any PF of the port is in the alarm state */
+struct mag_cmd_set_led_cfg {
+ struct mgmt_msg_head head;
+
+ u16 function_id;
+ u8 type;
+ u8 mode;
+};
+
+#define XSFP_INFO_MAX_SIZE 640
+/* xsfp wire type, refer to cmis protocol definition */
+enum mag_wire_type {
+ MAG_CMD_WIRE_TYPE_UNKNOWN = 0x0,
+ MAG_CMD_WIRE_TYPE_MM = 0x1,
+ MAG_CMD_WIRE_TYPE_SM = 0x2,
+ MAG_CMD_WIRE_TYPE_COPPER = 0x3,
+ MAG_CMD_WIRE_TYPE_ACC = 0x4,
+ MAG_CMD_WIRE_TYPE_BASET = 0x5,
+ MAG_CMD_WIRE_TYPE_AOC = 0x40,
+ MAG_CMD_WIRE_TYPE_ELECTRIC = 0x41,
+ MAG_CMD_WIRE_TYPE_BACKPLANE = 0x42
+};
+
+struct mag_cmd_get_xsfp_info {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 wire_type;
+ u16 out_len;
+ u32 rsvd;
+ u8 sfp_info[XSFP_INFO_MAX_SIZE];
+};
+
+#define MAG_CMD_XSFP_DISABLE 0x0
+#define MAG_CMD_XSFP_ENABLE 0x1
+/* the sfp is disabled only when all PFs of the port set sfp down;
+ * if any PF is enabled, the sfp stays enabled
+ */
+struct mag_cmd_set_xsfp_enable {
+ struct mgmt_msg_head head;
+
+ u32 port_id;
+ u32 status; /* 0:on 1:off */
+};
+
+#define MAG_CMD_XSFP_PRESENT 0x0
+#define MAG_CMD_XSFP_ABSENT 0x1
+struct mag_cmd_get_xsfp_present {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 abs_status; /* 0:present, 1:absent */
+ u8 rsvd[2];
+};
+
+#define MAG_CMD_XSFP_READ 0x0
+#define MAG_CMD_XSFP_WRITE 0x1
+struct mag_cmd_set_xsfp_rw {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 operation; /* 0: read; 1: write */
+ u8 value;
+ u8 rsvd0;
+ u32 devaddr;
+ u32 offset;
+ u32 rsvd1;
+};
+
+struct mag_cmd_cfg_xsfp_temperature {
+ struct mgmt_msg_head head;
+
+ u8 opcode; /* 0:read 1:write */
+ u8 rsvd0[3];
+ s32 max_temp;
+ s32 min_temp;
+};
+
+struct mag_cmd_get_xsfp_temperature {
+ struct mgmt_msg_head head;
+
+ s16 sfp_temp[8];
+ u8 rsvd[32];
+ s32 max_temp;
+ s32 min_temp;
+};
+
+/* xsfp plug event */
+struct mag_cmd_wire_event {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 status; /* 0:present, 1:absent */
+ u8 rsvd[2];
+};
+
+/* link err type definition */
+#define MAG_CMD_ERR_XSFP_UNKNOWN 0x0
+struct mag_cmd_link_err_event {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 link_err_type;
+ u8 rsvd[2];
+};
+
+#define MAG_PARAM_TYPE_DEFAULT_CFG 0x0
+#define MAG_PARAM_TYPE_BIOS_CFG 0x1
+#define MAG_PARAM_TYPE_TOOL_CFG 0x2
+#define MAG_PARAM_TYPE_FINAL_CFG 0x3
+#define MAG_PARAM_TYPE_WIRE_INFO 0x4
+#define MAG_PARAM_TYPE_ADAPT_INFO 0x5
+#define MAG_PARAM_TYPE_MAX_CNT 0x6
+struct param_head {
+ u8 valid_len;
+ u8 info_type;
+ u8 rsvd[2];
+};
+
+struct mag_port_link_param {
+ struct param_head head;
+
+ u8 an;
+ u8 fec;
+ u8 speed;
+ u8 rsvd0;
+
+ u32 used;
+ u32 an_fec_ability;
+ u32 an_speed_ability;
+ u32 an_pause_ability;
+};
+
+struct mag_port_wire_info {
+ struct param_head head;
+
+ u8 status;
+ u8 rsvd0[3];
+
+ u8 wire_type;
+ u8 default_fec;
+ u8 speed;
+ u8 rsvd1;
+ u32 speed_ability;
+};
+
+struct mag_port_adapt_info {
+ struct param_head head;
+
+ u32 adapt_en;
+ u32 flash_adapt;
+ u32 rsvd0[2];
+
+ u32 wire_node;
+ u32 an_en;
+ u32 speed;
+ u32 fec;
+};
+
+struct mag_port_param_info {
+ u8 parameter_cnt;
+ u8 lane_id;
+ u8 lane_num;
+ u8 rsvd0;
+
+ struct mag_port_link_param default_cfg;
+ struct mag_port_link_param bios_cfg;
+ struct mag_port_link_param tool_cfg;
+ struct mag_port_link_param final_cfg;
+
+ struct mag_port_wire_info wire_info;
+ struct mag_port_adapt_info adapt_info;
+};
+
+#define XSFP_VENDOR_NAME_LEN 16
+struct mag_cmd_event_port_info {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 event_type;
+ u8 rsvd0[2];
+
+ // optical module related
+ u8 vendor_name[XSFP_VENDOR_NAME_LEN];
+ u32 port_type; /* fiber / copper */
+ u32 port_sub_type; /* sr / lr */
+ u32 cable_length; /* 1/3/5m */
+ u8 cable_temp; /* temperature */
+ u8 max_speed; /* max speed of the optical module */
+ u8 sfp_type; /* sfp/qsfp */
+ u8 rsvd1;
+ u32 power[4]; /* optical power */
+
+ u8 an_state;
+ u8 fec;
+ u16 speed;
+
+ u8 gpio_insert; /* 0:present 1:absent */
+ u8 alos;
+ u8 rx_los;
+ u8 pma_ctrl;
+
+ u32 pma_fifo_reg;
+ u32 pma_signal_ok_reg;
+ u32 pcs_64_66b_reg;
+ u32 rf_lf;
+ u8 pcs_link;
+ u8 pcs_mac_link;
+ u8 tx_enable;
+ u8 rx_enable;
+ u32 pcs_err_cnt;
+
+ u8 eq_data[38];
+ u8 rsvd2[2];
+
+ u32 his_link_machine_state;
+ u32 cur_link_machine_state;
+ u8 his_machine_state_data[128];
+ u8 cur_machine_state_data[128];
+ u8 his_machine_state_length;
+ u8 cur_machine_state_length;
+
+ struct mag_port_param_info param_info;
+ u8 rsvd3[360];
+};
+
+struct mag_cmd_port_stats {
+ u64 mac_tx_fragment_pkt_num;
+ u64 mac_tx_undersize_pkt_num;
+ u64 mac_tx_undermin_pkt_num;
+ u64 mac_tx_64_oct_pkt_num;
+ u64 mac_tx_65_127_oct_pkt_num;
+ u64 mac_tx_128_255_oct_pkt_num;
+ u64 mac_tx_256_511_oct_pkt_num;
+ u64 mac_tx_512_1023_oct_pkt_num;
+ u64 mac_tx_1024_1518_oct_pkt_num;
+ u64 mac_tx_1519_2047_oct_pkt_num;
+ u64 mac_tx_2048_4095_oct_pkt_num;
+ u64 mac_tx_4096_8191_oct_pkt_num;
+ u64 mac_tx_8192_9216_oct_pkt_num;
+ u64 mac_tx_9217_12287_oct_pkt_num;
+ u64 mac_tx_12288_16383_oct_pkt_num;
+ u64 mac_tx_1519_max_bad_pkt_num;
+ u64 mac_tx_1519_max_good_pkt_num;
+ u64 mac_tx_oversize_pkt_num;
+ u64 mac_tx_jabber_pkt_num;
+ u64 mac_tx_bad_pkt_num;
+ u64 mac_tx_bad_oct_num;
+ u64 mac_tx_good_pkt_num;
+ u64 mac_tx_good_oct_num;
+ u64 mac_tx_total_pkt_num;
+ u64 mac_tx_total_oct_num;
+ u64 mac_tx_uni_pkt_num;
+ u64 mac_tx_multi_pkt_num;
+ u64 mac_tx_broad_pkt_num;
+ u64 mac_tx_pause_num;
+ u64 mac_tx_pfc_pkt_num;
+ u64 mac_tx_pfc_pri0_pkt_num;
+ u64 mac_tx_pfc_pri1_pkt_num;
+ u64 mac_tx_pfc_pri2_pkt_num;
+ u64 mac_tx_pfc_pri3_pkt_num;
+ u64 mac_tx_pfc_pri4_pkt_num;
+ u64 mac_tx_pfc_pri5_pkt_num;
+ u64 mac_tx_pfc_pri6_pkt_num;
+ u64 mac_tx_pfc_pri7_pkt_num;
+ u64 mac_tx_control_pkt_num;
+ u64 mac_tx_err_all_pkt_num;
+ u64 mac_tx_from_app_good_pkt_num;
+ u64 mac_tx_from_app_bad_pkt_num;
+
+ u64 mac_rx_fragment_pkt_num;
+ u64 mac_rx_undersize_pkt_num;
+ u64 mac_rx_undermin_pkt_num;
+ u64 mac_rx_64_oct_pkt_num;
+ u64 mac_rx_65_127_oct_pkt_num;
+ u64 mac_rx_128_255_oct_pkt_num;
+ u64 mac_rx_256_511_oct_pkt_num;
+ u64 mac_rx_512_1023_oct_pkt_num;
+ u64 mac_rx_1024_1518_oct_pkt_num;
+ u64 mac_rx_1519_2047_oct_pkt_num;
+ u64 mac_rx_2048_4095_oct_pkt_num;
+ u64 mac_rx_4096_8191_oct_pkt_num;
+ u64 mac_rx_8192_9216_oct_pkt_num;
+ u64 mac_rx_9217_12287_oct_pkt_num;
+ u64 mac_rx_12288_16383_oct_pkt_num;
+ u64 mac_rx_1519_max_bad_pkt_num;
+ u64 mac_rx_1519_max_good_pkt_num;
+ u64 mac_rx_oversize_pkt_num;
+ u64 mac_rx_jabber_pkt_num;
+ u64 mac_rx_bad_pkt_num;
+ u64 mac_rx_bad_oct_num;
+ u64 mac_rx_good_pkt_num;
+ u64 mac_rx_good_oct_num;
+ u64 mac_rx_total_pkt_num;
+ u64 mac_rx_total_oct_num;
+ u64 mac_rx_uni_pkt_num;
+ u64 mac_rx_multi_pkt_num;
+ u64 mac_rx_broad_pkt_num;
+ u64 mac_rx_pause_num;
+ u64 mac_rx_pfc_pkt_num;
+ u64 mac_rx_pfc_pri0_pkt_num;
+ u64 mac_rx_pfc_pri1_pkt_num;
+ u64 mac_rx_pfc_pri2_pkt_num;
+ u64 mac_rx_pfc_pri3_pkt_num;
+ u64 mac_rx_pfc_pri4_pkt_num;
+ u64 mac_rx_pfc_pri5_pkt_num;
+ u64 mac_rx_pfc_pri6_pkt_num;
+ u64 mac_rx_pfc_pri7_pkt_num;
+ u64 mac_rx_control_pkt_num;
+ u64 mac_rx_sym_err_pkt_num;
+ u64 mac_rx_fcs_err_pkt_num;
+ u64 mac_rx_send_app_good_pkt_num;
+ u64 mac_rx_send_app_bad_pkt_num;
+ u64 mac_rx_unfilter_pkt_num;
+};
+
+struct mag_cmd_port_stats_info {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd0[3];
+};
+
+struct mag_cmd_get_port_stat {
+ struct mgmt_msg_head head;
+
+ struct mag_cmd_port_stats counter;
+ u64 rsvd1[15];
+};
+
+struct mag_cmd_get_pcs_err_cnt {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd0[3];
+
+ u32 pcs_err_cnt;
+};
+
+struct mag_cmd_get_mag_cnt {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 len;
+ u8 rsvd0[2];
+
+ u32 mag_csr[128];
+};
+
+struct mag_cmd_dump_antrain_info {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 len;
+ u8 rsvd0[2];
+
+ u32 antrain_csr[256];
+};
+
+#define MAG_SFP_PORT_NUM 24
+/* chip optical module temperature structures */
+struct mag_cmd_sfp_temp_in_info {
+ struct mgmt_msg_head head; /* 8B */
+ u8 opt_type; /* 0:read operation 1:cfg operation */
+ u8 rsv[3];
+ s32 max_temp; /* optical module temperature threshold */
+ s32 min_temp; /* optical module temperature threshold */
+};
+
+struct mag_cmd_sfp_temp_out_info {
+ struct mgmt_msg_head head; /* 8B */
+ s16 sfp_temp_data[MAG_SFP_PORT_NUM]; /* temperatures read back */
+ s32 max_temp; /* optical module temperature threshold */
+ s32 min_temp; /* optical module temperature threshold */
+};
+
+#endif
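mag_cmd_set_port_cfg applies only the fields selected in config_bitmap, so a caller sets the bits and the matching fields together. A minimal request-building sketch; the send step over the management channel is outside this header and only named in the comment:

#include <linux/string.h>
#include "mag_cmd.h"

/* sketch: request 25G with RS-FEC; only SPEED and FEC bits are applied */
static void build_set_speed_req(struct mag_cmd_set_port_cfg *req, u8 port_id)
{
	memset(req, 0, sizeof(*req));
	req->port_id = port_id;
	req->config_bitmap = MAG_CMD_SET_SPEED | MAG_CMD_SET_FEC;
	req->speed = PORT_SPEED_25GB;
	req->fec = PORT_FEC_RSFEC;
	/* the caller then issues MAG_CMD_SET_PORT_CFG with this payload
	 * over the driver's management channel
	 */
}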
diff --git a/drivers/net/ethernet/huawei/hinic3/mgmt_msg_base.h b/drivers/net/ethernet/huawei/hinic3/mgmt_msg_base.h
new file mode 100644
index 000000000000..257bf6761df0
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/mgmt_msg_base.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2021-2022. All rights reserved.
+ * File Name : mgmt_msg_base.h
+ * Version : Initial Draft
+ * Created : 2021/6/28
+ * Last Modified :
+ * Description : COMM Command interfaces between Driver and MPU
+ * Function List :
+ */
+
+#ifndef MGMT_MSG_BASE_H
+#define MGMT_MSG_BASE_H
+
+#define MGMT_MSG_CMD_OP_SET 1
+#define MGMT_MSG_CMD_OP_GET 0
+
+#define MGMT_MSG_CMD_OP_START 1
+#define MGMT_MSG_CMD_OP_STOP 0
+
+struct mgmt_msg_head {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+};
+
+#endif
diff --git a/drivers/net/ethernet/huawei/hinic3/nic_cfg_comm.h b/drivers/net/ethernet/huawei/hinic3/nic_cfg_comm.h
new file mode 100644
index 000000000000..9fb4232716da
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/nic_cfg_comm.h
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C), 2001-2021, Huawei Tech. Co., Ltd.
+ * File Name : nic_cfg_comm.h
+ * Version : Initial Draft
+ * Description : nic config common header file
+ * Function List :
+ * History :
+ * Modification: Created file
+ */
+
+#ifndef NIC_CFG_COMM_H
+#define NIC_CFG_COMM_H
+
+/* rss */
+#define HINIC3_RSS_TYPE_VALID_SHIFT 23
+#define HINIC3_RSS_TYPE_TCP_IPV6_EXT_SHIFT 24
+#define HINIC3_RSS_TYPE_IPV6_EXT_SHIFT 25
+#define HINIC3_RSS_TYPE_TCP_IPV6_SHIFT 26
+#define HINIC3_RSS_TYPE_IPV6_SHIFT 27
+#define HINIC3_RSS_TYPE_TCP_IPV4_SHIFT 28
+#define HINIC3_RSS_TYPE_IPV4_SHIFT 29
+#define HINIC3_RSS_TYPE_UDP_IPV6_SHIFT 30
+#define HINIC3_RSS_TYPE_UDP_IPV4_SHIFT 31
+
+#define HINIC3_RSS_TYPE_SET(val, member) (((u32)(val) & 0x1) << HINIC3_RSS_TYPE_##member##_SHIFT)
+#define HINIC3_RSS_TYPE_GET(val, member) (((u32)(val) >> HINIC3_RSS_TYPE_##member##_SHIFT) & 0x1)
+
+enum nic_rss_hash_type {
+ NIC_RSS_HASH_TYPE_XOR = 0,
+ NIC_RSS_HASH_TYPE_TOEP,
+
+ NIC_RSS_HASH_TYPE_MAX /* MUST BE THE LAST ONE */
+};
+
+#define NIC_RSS_INDIR_SIZE 256
+#define NIC_RSS_KEY_SIZE 40
+
+/*
+ * Definition of the NIC receiving mode
+ */
+#define NIC_RX_MODE_UC 0x01
+#define NIC_RX_MODE_MC 0x02
+#define NIC_RX_MODE_BC 0x04
+#define NIC_RX_MODE_MC_ALL 0x08
+#define NIC_RX_MODE_PROMISC 0x10
+
+/* IEEE 802.1Qaz std */
+#define NIC_DCB_COS_MAX 0x8
+#define NIC_DCB_UP_MAX 0x8
+#define NIC_DCB_TC_MAX 0x8
+#define NIC_DCB_PG_MAX 0x8
+#define NIC_DCB_TSA_SP 0x0
+#define NIC_DCB_TSA_CBS 0x1 /* hi1822 does NOT support this */
+#define NIC_DCB_TSA_ETS 0x2
+#define NIC_DCB_DSCP_NUM 0x8
+#define NIC_DCB_IP_PRI_MAX 0x40
+
+#define NIC_DCB_PRIO_DWRR 0x0
+#define NIC_DCB_PRIO_STRICT 0x1
+
+#define NIC_DCB_MAX_PFC_NUM 0x4
+#endif
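The HINIC3_RSS_TYPE_SET/GET helpers pack per-protocol enable bits into one u32 using the *_SHIFT defines above. A minimal sketch:

/* sketch: enable IPv4 and TCP-over-IPv4 hashing; VALID marks the word live */
static u32 build_rss_type_example(void)
{
	return HINIC3_RSS_TYPE_SET(1, VALID) |
	       HINIC3_RSS_TYPE_SET(1, TCP_IPV4) |
	       HINIC3_RSS_TYPE_SET(1, IPV4);
}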
diff --git a/drivers/net/ethernet/huawei/hinic3/ossl_knl.h b/drivers/net/ethernet/huawei/hinic3/ossl_knl.h
new file mode 100644
index 000000000000..fe14371c8f35
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/ossl_knl.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef OSSL_KNL_H
+#define OSSL_KNL_H
+
+#include "ossl_knl_linux.h"
+
+#define sdk_err(dev, format, ...) dev_err(dev, "[COMM]" format, ##__VA_ARGS__)
+#define sdk_warn(dev, format, ...) dev_warn(dev, "[COMM]" format, ##__VA_ARGS__)
+#define sdk_notice(dev, format, ...) dev_notice(dev, "[COMM]" format, ##__VA_ARGS__)
+#define sdk_info(dev, format, ...) dev_info(dev, "[COMM]" format, ##__VA_ARGS__)
+
+#define nic_err(dev, format, ...) dev_err(dev, "[NIC]" format, ##__VA_ARGS__)
+#define nic_warn(dev, format, ...) dev_warn(dev, "[NIC]" format, ##__VA_ARGS__)
+#define nic_notice(dev, format, ...) dev_notice(dev, "[NIC]" format, ##__VA_ARGS__)
+#define nic_info(dev, format, ...) dev_info(dev, "[NIC]" format, ##__VA_ARGS__)
+
+#ifndef BIG_ENDIAN
+#define BIG_ENDIAN 0x4321
+#endif
+
+#ifndef LITTLE_ENDIAN
+#define LITTLE_ENDIAN 0x1234
+#endif
+
+#ifdef BYTE_ORDER
+#undef BYTE_ORDER
+#endif
+/* X86 */
+#define BYTE_ORDER LITTLE_ENDIAN
+#define USEC_PER_MSEC 1000L
+#define MSEC_PER_SEC 1000L
+
+#endif /* OSSL_KNL_H */
diff --git a/drivers/net/ethernet/huawei/hinic3/ossl_knl_linux.h b/drivers/net/ethernet/huawei/hinic3/ossl_knl_linux.h
new file mode 100644
index 000000000000..1bda9e99355c
--- /dev/null
+++ b/drivers/net/ethernet/huawei/hinic3/ossl_knl_linux.h
@@ -0,0 +1,284 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
+
+#ifndef OSSL_KNL_LINUX_H_
+#define OSSL_KNL_LINUX_H_
+
+#include <net/checksum.h>
+#include <net/ipv6.h>
+#include <linux/string.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/version.h>
+#include <linux/ethtool.h>
+#include <linux/fs.h>
+#include <linux/kthread.h>
+#include <linux/if_vlan.h>
+#include <linux/udp.h>
+#include <linux/highmem.h>
+#include <linux/list.h>
+#include <linux/bitmap.h>
+#include <linux/slab.h>
+
+#ifndef NETIF_F_SCTP_CSUM
+#define NETIF_F_SCTP_CSUM 0
+#endif
+
+#ifndef __GFP_COLD
+#define __GFP_COLD 0
+#endif
+
+#ifndef __GFP_COMP
+#define __GFP_COMP 0
+#endif
+
+#undef __always_unused
+#define __always_unused __attribute__((__unused__))
+
+#define ETH_TYPE_TRANS_SETS_DEV
+#define HAVE_NETDEV_STATS_IN_NETDEV
+
+#ifndef HAVE_SET_RX_MODE
+#define HAVE_SET_RX_MODE
+#endif
+
+#define HAVE_INET6_IFADDR_LIST
+
+#define HAVE_NDO_GET_STATS64
+
+#ifndef HAVE_MQPRIO
+#define HAVE_MQPRIO
+#endif
+
+#ifndef HAVE_SETUP_TC
+#define HAVE_SETUP_TC
+#endif
+
+#ifndef HAVE_NDO_SET_FEATURES
+#define HAVE_NDO_SET_FEATURES
+#endif
+
+#define HAVE_IRQ_AFFINITY_NOTIFY
+
+#define HAVE_ETHTOOL_SET_PHYS_ID
+
+#define HAVE_NETDEV_WANTED_FEAUTES
+
+#ifndef HAVE_PCI_DEV_FLAGS_ASSIGNED
+#define HAVE_PCI_DEV_FLAGS_ASSIGNED
+#define HAVE_VF_SPOOFCHK_CONFIGURE
+#endif
+
+#ifndef HAVE_SKB_L4_RXHASH
+#define HAVE_SKB_L4_RXHASH
+#endif
+
+#define HAVE_ETHTOOL_GRXFHINDIR_SIZE
+#define HAVE_INT_NDO_VLAN_RX_ADD_VID
+
+#ifdef ETHTOOL_SRXNTUPLE
+#undef ETHTOOL_SRXNTUPLE
+#endif
+
+#define _kc_kmap_atomic(page) kmap_atomic(page)
+#define _kc_kunmap_atomic(addr) kunmap_atomic(addr)
+
+#include <linux/of_net.h>
+
+#define HAVE_FDB_OPS
+#define HAVE_ETHTOOL_GET_TS_INFO
+#define HAVE_NAPI_GRO_FLUSH_OLD
+
+#ifndef HAVE_SRIOV_CONFIGURE
+#define HAVE_SRIOV_CONFIGURE
+#endif
+
+#define HAVE_ENCAP_TSO_OFFLOAD
+#define HAVE_SKB_INNER_NETWORK_HEADER
+
+#define HAVE_NDO_SET_VF_LINK_STATE
+#define HAVE_SKB_INNER_PROTOCOL
+#define HAVE_MPLS_FEATURES
+
+#define HAVE_NDO_GET_PHYS_PORT_ID
+#define HAVE_NETIF_SET_XPS_QUEUE_CONST_MASK
+
+#define HAVE_VXLAN_CHECKS
+#define HAVE_NDO_SELECT_QUEUE_ACCEL
+#define HAVE_NET_GET_RANDOM_ONCE
+#define HAVE_HWMON_DEVICE_REGISTER_WITH_GROUPS
+
+#define HAVE_NDO_SELECT_QUEUE_ACCEL_FALLBACK
+
+#define HAVE_NDO_SET_VF_MIN_MAX_TX_RATE
+#define HAVE_VLAN_FIND_DEV_DEEP_RCU
+
+#define HAVE_SKBUFF_CSUM_LEVEL
+#define HAVE_MULTI_VLAN_OFFLOAD_EN
+#define HAVE_ETH_GET_HEADLEN_FUNC
+
+#define HAVE_RXFH_HASHFUNC
+
+#define HAVE_NDO_SET_VF_TRUST
+
+#include <net/devlink.h>
+
+#define HAVE_IO_MAP_WC_SIZE
+
+#define HAVE_NETDEVICE_MIN_MAX_MTU
+
+#define HAVE_VOID_NDO_GET_STATS64
+#define HAVE_VM_OPS_FAULT_NO_VMA
+
+#define HAVE_HWTSTAMP_FILTER_NTP_ALL
+#define HAVE_NDO_SETUP_TC_CHAIN_INDEX
+#define HAVE_PCI_ERROR_HANDLER_RESET_PREPARE
+#define HAVE_PTP_CLOCK_DO_AUX_WORK
+
+#define HAVE_NDO_SETUP_TC_REMOVE_TC_TO_NETDEV
+
+#define HAVE_XDP_SUPPORT
+
+#define HAVE_NDO_BPF_NETDEV_BPF
+#define HAVE_TIMER_SETUP
+#define HAVE_XDP_DATA_META
+
+#define HAVE_MACRO_VM_FAULT_T
+
+#define HAVE_NDO_SELECT_QUEUE_SB_DEV
+
+#define dev_open(x) dev_open(x, NULL)
+#define HAVE_NEW_ETHTOOL_LINK_SETTINGS_ONLY
+
+#ifndef get_ds
+#define get_ds() (KERNEL_DS)
+#endif
+
+#ifndef dma_zalloc_coherent
+#define dma_zalloc_coherent(d, s, h, f) _hinic3_dma_zalloc_coherent(d, s, h, f)
+static inline void *_hinic3_dma_zalloc_coherent(struct device *dev,
+ size_t size, dma_addr_t *dma_handle,
+ gfp_t gfp)
+{
+ /* Since kernel 5.0, dma_alloc_coherent() zeroes the memory on
+ * all architectures, so dma_zalloc_coherent() became a no-op
+ * wrapper around dma_alloc_coherent() and was later removed.
+ */
+ return dma_alloc_coherent(dev, size, dma_handle, gfp);
+}
+#endif
+
+struct timeval {
+ __kernel_old_time_t tv_sec; /* seconds */
+ __kernel_suseconds_t tv_usec; /* microseconds */
+};
+
+#ifndef do_gettimeofday
+#define do_gettimeofday(time) _kc_do_gettimeofday(time)
+static inline void _kc_do_gettimeofday(struct timeval *tv)
+{
+ struct timespec64 ts;
+
+ ktime_get_real_ts64(&ts);
+ tv->tv_sec = ts.tv_sec;
+ tv->tv_usec = ts.tv_nsec / NSEC_PER_USEC;
+}
+#endif
+
+#define HAVE_NDO_SELECT_QUEUE_SB_DEV_ONLY
+#define ETH_GET_HEADLEN_NEED_DEV
+#define HAVE_GENL_OPS_FIELD_VALIDATE
+
+#ifndef FIELD_SIZEOF
+#define FIELD_SIZEOF(t, f) (sizeof(((t *)0)->f))
+#endif
+
+#define HAVE_DEVLINK_FLASH_UPDATE_PARAMS
+
+#ifndef rtc_time_to_tm
+#define rtc_time_to_tm rtc_time64_to_tm
+#endif
+#define HAVE_NDO_TX_TIMEOUT_TXQ
+#define HAVE_PROC_OPS
+
+#define SUPPORTED_COALESCE_PARAMS
+
+#ifndef pci_cleanup_aer_uncorrect_error_status
+#define pci_cleanup_aer_uncorrect_error_status pci_aer_clear_nonfatal_status
+#endif
+
+#define HAVE_XDP_FRAME_SZ
+
+#define HAVE_DEVLINK_FW_FILE_NAME_MEMBER
+
+#define HAVE_ENCAPSULATION_TSO
+
+#define HAVE_ENCAPSULATION_CSUM
+
+#ifndef netdev_hw_addr_list_for_each
+#define netdev_hw_addr_list_for_each(ha, l) \
+ list_for_each_entry(ha, &(l)->list, list)
+#endif
+
+#define spin_lock_deinit(lock)
+
+struct file *file_creat(const char *file_name);
+
+struct file *file_open(const char *file_name);
+
+void file_close(struct file *file_handle);
+
+u32 get_file_size(struct file *file_handle);
+
+void set_file_position(struct file *file_handle, u32 position);
+
+int file_read(struct file *file_handle, char *log_buffer, u32 rd_length,
+ u32 *file_pos);
+
+u32 file_write(struct file *file_handle, const char *log_buffer, u32 wr_length);
+
+struct sdk_thread_info {
+ struct task_struct *thread_obj;
+ char *name;
+ void (*thread_fn)(void *x);
+ void *thread_event;
+ void *data;
+};
+
+int creat_thread(struct sdk_thread_info *thread_info);
+
+void stop_thread(struct sdk_thread_info *thread_info);
+
+#define destroy_work(work)
+
+void utctime_to_localtime(u64 utctime, u64 *localtime);
+
+#ifndef HAVE_TIMER_SETUP
+void initialize_timer(const void *adapter_hdl, struct timer_list *timer);
+#endif
+
+void add_to_timer(struct timer_list *timer, long period);
+void stop_timer(struct timer_list *timer);
+void delete_timer(struct timer_list *timer);
+u64 ossl_get_real_time(void);
+
+#define nicif_err(priv, type, dev, fmt, args...) \
+ netif_level(err, priv, type, dev, "[NIC]" fmt, ##args)
+#define nicif_warn(priv, type, dev, fmt, args...) \
+ netif_level(warn, priv, type, dev, "[NIC]" fmt, ##args)
+#define nicif_notice(priv, type, dev, fmt, args...) \
+ netif_level(notice, priv, type, dev, "[NIC]" fmt, ##args)
+#define nicif_info(priv, type, dev, fmt, args...) \
+ netif_level(info, priv, type, dev, "[NIC]" fmt, ##args)
+#define nicif_dbg(priv, type, dev, fmt, args...) \
+ netif_level(dbg, priv, type, dev, "[NIC]" fmt, ##args)
+
+#define destroy_completion(completion)
+#define sema_deinit(lock)
+#define mutex_deinit(lock)
+#define rwlock_deinit(lock)
+
+#define tasklet_state(tasklet) ((tasklet)->state)
+
+#endif
--
2.24.0
18 May '23
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I6WAZX
--------------------------------
When the number of cores is greater than the number of ECMDQs, each
NUMA node is assigned fewer ECMDQs than it has cores. Therefore,
iterating over only the first smmu->nr_ecmdq cores does not cover all
ECMDQs.
For example:
---------------------------------
|     Node0     |     Node1     |
|-------------------------------|
| 0   1   2   3 | 4   5   6   7 |  CPU ID
|-------------------------------|
|   0   |   1   |   2   |   3   |  ECMDQ ID
---------------------------------
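A simplified sketch of the split being fixed here (illustrative code,
not the driver's; it omits the redistribution of truncation leftovers):

	/*
	 * One ECMDQ is reserved per populated node; the remainder is
	 * shared out in proportion to each node's CPU count. Each CPU
	 * is then mapped round-robin onto its node's queues, i.e.
	 * queue = (cpu index within node) % ecmdqs[node].
	 */
	static void split_ecmdqs(int nr_nodes, const int cpus[],
				 int total_ecmdqs, int ecmdqs[])
	{
		int node, total_cpus = 0, remain = total_ecmdqs;

		for (node = 0; node < nr_nodes; node++) {
			total_cpus += cpus[node];
			if (cpus[node])
				remain--;	/* reserve one per node */
		}

		for (node = 0; node < nr_nodes; node++) {
			if (!cpus[node])
				continue;
			/* proportional share plus the reserved queue */
			ecmdqs[node] = cpus[node] * remain / total_cpus + 1;
		}
	}

With the layout above (8 CPUs, 4 ECMDQs, 2 nodes), each node ends up
with 2 ECMDQs, so resetting "the first nr_ecmdq CPUs" touches only
node 0's queues; the fix below walks every CPU and initializes each
queue once, through its first owner.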
Fixes: 3965519baff5 ("iommu/arm-smmu-v3: Add support for less than one ECMDQ per core")
Signed-off-by: Zhen Lei <thunder.leizhen(a)huawei.com>
---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 114 ++++++++++++--------
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 3 +-
2 files changed, 73 insertions(+), 44 deletions(-)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 8064c5da79612f8..5793b51d44750cb 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -387,7 +387,7 @@ static struct arm_smmu_cmdq *arm_smmu_get_cmdq(struct arm_smmu_device *smmu)
if (smmu->ecmdq_enabled) {
struct arm_smmu_ecmdq *ecmdq;
- ecmdq = *this_cpu_ptr(smmu->ecmdq);
+ ecmdq = *this_cpu_ptr(smmu->ecmdqs);
return &ecmdq->cmdq;
}
@@ -486,7 +486,7 @@ static void arm_smmu_ecmdq_skip_err(struct arm_smmu_device *smmu)
for (i = 0; i < smmu->nr_ecmdq; i++) {
unsigned long flags;
- ecmdq = *per_cpu_ptr(smmu->ecmdq, i);
+ ecmdq = *per_cpu_ptr(smmu->ecmdqs, i);
q = &ecmdq->cmdq.q;
prod = readl_relaxed(q->prod_reg);
@@ -4925,9 +4925,50 @@ static int arm_smmu_device_disable(struct arm_smmu_device *smmu)
return ret;
}
+static int arm_smmu_ecmdq_reset(struct arm_smmu_device *smmu)
+{
+ int i, cpu, ret = 0;
+ u32 reg;
+
+ if (!smmu->nr_ecmdq)
+ return 0;
+
+ i = 0;
+ for_each_possible_cpu(cpu) {
+ struct arm_smmu_ecmdq *ecmdq;
+ struct arm_smmu_queue *q;
+
+ ecmdq = *per_cpu_ptr(smmu->ecmdqs, cpu);
+ if (ecmdq != per_cpu_ptr(smmu->ecmdq, cpu))
+ continue;
+
+ q = &ecmdq->cmdq.q;
+ i++;
+
+ if (WARN_ON(q->llq.prod != q->llq.cons)) {
+ q->llq.prod = 0;
+ q->llq.cons = 0;
+ }
+ writeq_relaxed(q->q_base, ecmdq->base + ARM_SMMU_ECMDQ_BASE);
+ writel_relaxed(q->llq.prod, ecmdq->base + ARM_SMMU_ECMDQ_PROD);
+ writel_relaxed(q->llq.cons, ecmdq->base + ARM_SMMU_ECMDQ_CONS);
+
+ /* enable ecmdq */
+ writel(ECMDQ_PROD_EN | q->llq.prod, q->prod_reg);
+ ret = readl_relaxed_poll_timeout(q->cons_reg, reg, reg & ECMDQ_CONS_ENACK,
+ 1, ARM_SMMU_POLL_TIMEOUT_US);
+ if (ret) {
+ dev_err(smmu->dev, "ecmdq[%d] enable failed\n", i);
+ smmu->ecmdq_enabled = 0;
+ break;
+ }
+ }
+
+ return ret;
+}
+
static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool resume)
{
- int i;
int ret;
u32 reg, enables;
struct arm_smmu_cmdq_ent cmd;
@@ -4975,31 +5016,7 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool resume)
writel_relaxed(smmu->cmdq.q.llq.prod, smmu->base + ARM_SMMU_CMDQ_PROD);
writel_relaxed(smmu->cmdq.q.llq.cons, smmu->base + ARM_SMMU_CMDQ_CONS);
- for (i = 0; i < smmu->nr_ecmdq; i++) {
- struct arm_smmu_ecmdq *ecmdq;
- struct arm_smmu_queue *q;
-
- ecmdq = *per_cpu_ptr(smmu->ecmdq, i);
- q = &ecmdq->cmdq.q;
-
- if (WARN_ON(q->llq.prod != q->llq.cons)) {
- q->llq.prod = 0;
- q->llq.cons = 0;
- }
- writeq_relaxed(q->q_base, ecmdq->base + ARM_SMMU_ECMDQ_BASE);
- writel_relaxed(q->llq.prod, ecmdq->base + ARM_SMMU_ECMDQ_PROD);
- writel_relaxed(q->llq.cons, ecmdq->base + ARM_SMMU_ECMDQ_CONS);
-
- /* enable ecmdq */
- writel(ECMDQ_PROD_EN | q->llq.prod, q->prod_reg);
- ret = readl_relaxed_poll_timeout(q->cons_reg, reg, reg & ECMDQ_CONS_ENACK,
- 1, ARM_SMMU_POLL_TIMEOUT_US);
- if (ret) {
- dev_err(smmu->dev, "ecmdq[%d] enable failed\n", i);
- smmu->ecmdq_enabled = 0;
- break;
- }
- }
+ arm_smmu_ecmdq_reset(smmu);
enables = CR0_CMDQEN;
ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
@@ -5099,10 +5116,11 @@ static int arm_smmu_ecmdq_layout(struct arm_smmu_device *smmu)
ecmdq = devm_alloc_percpu(smmu->dev, *ecmdq);
if (!ecmdq)
return -ENOMEM;
+ smmu->ecmdq = ecmdq;
if (num_possible_cpus() <= smmu->nr_ecmdq) {
for_each_possible_cpu(cpu)
- *per_cpu_ptr(smmu->ecmdq, cpu) = per_cpu_ptr(ecmdq, cpu);
+ *per_cpu_ptr(smmu->ecmdqs, cpu) = per_cpu_ptr(ecmdq, cpu);
/* A core requires at most one ECMDQ */
smmu->nr_ecmdq = num_possible_cpus();
@@ -5139,7 +5157,16 @@ static int arm_smmu_ecmdq_layout(struct arm_smmu_device *smmu)
* may be left due to truncation rounding.
*/
nr_ecmdqs[node] = nr_cpus_node(node) * nr_remain / num_possible_cpus();
+ }
+
+ for_each_node(node) {
+ if (!nr_cpus_node(node))
+ continue;
+
nr_remain -= nr_ecmdqs[node];
+
+ /* An ECMDQ has been reserved for each node above, at [1] */
+ nr_ecmdqs[node]++;
}
/* Divide the remaining ECMDQs */
@@ -5157,25 +5184,23 @@ static int arm_smmu_ecmdq_layout(struct arm_smmu_device *smmu)
}
for_each_node(node) {
- int i, round, shared = 0;
+ int i, round, shared;
if (!nr_cpus_node(node))
continue;
- /* An ECMDQ has been reserved for each node at above [1] */
- nr_ecmdqs[node]++;
-
+ shared = 0;
if (nr_ecmdqs[node] < nr_cpus_node(node))
shared = 1;
i = 0;
for_each_cpu(cpu, cpumask_of_node(node)) {
round = i % nr_ecmdqs[node];
- if (i++ < nr_ecmdqs[node]) {
+ if (i++ < nr_ecmdqs[node])
ecmdqs[round] = per_cpu_ptr(ecmdq, cpu);
+ else
ecmdqs[round]->cmdq.shared = shared;
- }
- *per_cpu_ptr(smmu->ecmdq, cpu) = ecmdqs[round];
+ *per_cpu_ptr(smmu->ecmdqs, cpu) = ecmdqs[round];
}
}
@@ -5199,6 +5224,8 @@ static int arm_smmu_ecmdq_probe(struct arm_smmu_device *smmu)
numq = 1 << FIELD_GET(IDR6_LOG2NUMQ, reg);
smmu->nr_ecmdq = nump * numq;
gap = ECMDQ_CP_RRESET_SIZE >> FIELD_GET(IDR6_LOG2NUMQ, reg);
+ if (!smmu->nr_ecmdq)
+ return -EOPNOTSUPP;
smmu_dma_base = (vmalloc_to_pfn(smmu->base) << PAGE_SHIFT);
cp_regs = ioremap(smmu_dma_base + ARM_SMMU_ECMDQ_CP_BASE, PAGE_SIZE);
@@ -5231,8 +5258,8 @@ static int arm_smmu_ecmdq_probe(struct arm_smmu_device *smmu)
if (!cp_base)
return -ENOMEM;
- smmu->ecmdq = devm_alloc_percpu(smmu->dev, struct arm_smmu_ecmdq *);
- if (!smmu->ecmdq)
+ smmu->ecmdqs = devm_alloc_percpu(smmu->dev, struct arm_smmu_ecmdq *);
+ if (!smmu->ecmdqs)
return -ENOMEM;
ret = arm_smmu_ecmdq_layout(smmu);
@@ -5246,7 +5273,7 @@ static int arm_smmu_ecmdq_probe(struct arm_smmu_device *smmu)
struct arm_smmu_ecmdq *ecmdq;
struct arm_smmu_queue *q;
- ecmdq = *per_cpu_ptr(smmu->ecmdq, cpu);
+ ecmdq = *per_cpu_ptr(smmu->ecmdqs, cpu);
q = &ecmdq->cmdq.q;
/*
@@ -5254,10 +5281,11 @@ static int arm_smmu_ecmdq_probe(struct arm_smmu_device *smmu)
* CPUs. The CPUs that are not selected are not shown in
* cpumask_of_node(node); their 'ecmdq' may be NULL.
*
- * (q->ecmdq_prod & ECMDQ_PROD_EN) indicates that the ECMDQ is
- * shared by multiple cores and has been initialized.
+ * (ecmdq != per_cpu_ptr(smmu->ecmdq, cpu)) indicates that the
+ * ECMDQ is shared by multiple cores and should be initialized
+ * only by the first owner.
*/
- if (!ecmdq || (q->ecmdq_prod & ECMDQ_PROD_EN))
+ if (!ecmdq || (ecmdq != per_cpu_ptr(smmu->ecmdq, cpu)))
continue;
ecmdq->base = cp_base + addr;
@@ -5700,7 +5728,7 @@ static int arm_smmu_ecmdq_disable(struct device *dev)
struct arm_smmu_device *smmu = dev_get_drvdata(dev);
for (i = 0; i < smmu->nr_ecmdq; i++) {
- ecmdq = *per_cpu_ptr(smmu->ecmdq, i);
+ ecmdq = *per_cpu_ptr(smmu->ecmdqs, i);
q = &ecmdq->cmdq.q;
prod = readl_relaxed(q->prod_reg);
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 1dd49bed58df305..3820452bf30210e 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -728,7 +728,8 @@ struct arm_smmu_device {
u32 nr_ecmdq;
u32 ecmdq_enabled;
};
- struct arm_smmu_ecmdq *__percpu *ecmdq;
+ struct arm_smmu_ecmdq *__percpu *ecmdqs;
+ struct arm_smmu_ecmdq __percpu *ecmdq;
struct arm_smmu_cmdq cmdq;
struct arm_smmu_evtq evtq;
--
2.25.1
Hello!
The openEuler Kernel SIG invites you to a Zoom meeting (auto-recorded) to be held at 2023-05-19 14:00.
Subject: openEuler Kernel SIG biweekly meeting
Agenda:
1. Progress updates
2. Topics are being collected
Everyone is welcome to propose topics (reply to this mail directly, or add them to the meeting board).
Meeting link: https://us06web.zoom.us/j/87319358677?pwd=dVFTTkYxNTR0TStZUEVsWFZaVmtOUT09
Meeting minutes and topic board: https://etherpad.openeuler.org/p/Kernel-meetings
Note: You are advised to change your participant name after joining the meeting; you can also use your gitee.com ID.
More information: https://openeuler.org/en/
[PATCH openEuler-22.03-LTS 01/14] uaccess: Add speculation barrier to copy_from_user()
by Jialin Zhang 16 May '23
From: Dave Hansen <dave.hansen(a)linux.intel.com>
stable inclusion
from stable-v5.10.170
commit 3b6ce54cfa2c04f0636fd0c985913af8703b408d
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I71N8L
CVE: CVE-2023-0459
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit 74e19ef0ff8061ef55957c3abd71614ef0f42f47 upstream.
The results of "access_ok()" can be mis-speculated. The result is that
you can end up speculatively executing:
	if (access_ok(from, size))
		// Right here
even for bad from/size combinations. At first glance, it would be ideal
to just add a speculation barrier to "access_ok()" so that its results
can never be mis-speculated.
But there are lots of system calls just doing access_ok() via
"copy_to_user()" and friends (example: fstat() and friends). Those are
generally not problematic because they do not _consume_ data from
userspace other than the pointer. They are also very quick and common
system calls that should not be needlessly slowed down.
"copy_from_user()" on the other hand uses a user-controller pointer and
is frequently followed up with code that might affect caches. Take
something like this:
if (!copy_from_user(&kernelvar, uptr, size))
do_something_with(kernelvar);
If userspace passes in an evil 'uptr' that *actually* points to a kernel
addresses, and then do_something_with() has cache (or other)
side-effects, it could allow userspace to infer kernel data values.
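To make the leak concrete, here is a hypothetical gadget shape (the
names are illustrative, not from the patch):

	/* uptr is attacker-controlled; lookup_table is any kernel array. */
	u8 kernelvar;

	if (!copy_from_user(&kernelvar, uptr, sizeof(kernelvar)))
		tmp = lookup_table[kernelvar];	/* data-dependent cache fill */

If the copy is reached down a mis-speculated access_ok() path,
kernelvar can transiently hold a kernel byte, and the table access
leaves a cache footprint that userspace can later time to recover that
byte.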
Add a barrier to the common copy_from_user() code to prevent
mis-speculated values which happen after the copy.
Also add a stub for architectures that do not define barrier_nospec().
This makes the macro usable in generic code.
Since the barrier is now usable in generic code, the x86 #ifdef in the
BPF code can also go away.
Reported-by: Jordy Zomer <jordyzomer(a)google.com>
Suggested-by: Linus Torvalds <torvalds(a)linuxfoundation.org>
Signed-off-by: Dave Hansen <dave.hansen(a)linux.intel.com>
Reviewed-by: Thomas Gleixner <tglx(a)linutronix.de>
Acked-by: Daniel Borkmann <daniel(a)iogearbox.net> # BPF bits
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Conflicts:
lib/usercopy.c
Signed-off-by: Ma Wupeng <mawupeng1(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Reviewed-by: Nanyong Sun <sunnanyong(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Jialin Zhang <zhangjialin11(a)huawei.com>
---
include/linux/nospec.h | 4 ++++
kernel/bpf/core.c | 2 --
lib/usercopy.c | 7 +++++++
3 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/include/linux/nospec.h b/include/linux/nospec.h
index c1e79f72cd89..9f0af4f116d9 100644
--- a/include/linux/nospec.h
+++ b/include/linux/nospec.h
@@ -11,6 +11,10 @@
struct task_struct;
+#ifndef barrier_nospec
+# define barrier_nospec() do { } while (0)
+#endif
+
/**
* array_index_mask_nospec() - generate a ~0 mask when index < size, 0 otherwise
* @index: array element index
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index fd2aa6b9909e..c18aed60ce40 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1642,9 +1642,7 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
* reuse preexisting logic from Spectre v1 mitigation that
* happens to produce the required code on x86 for v4 as well.
*/
-#ifdef CONFIG_X86
barrier_nospec();
-#endif
CONT;
#define LDST(SIZEOP, SIZE) \
STX_MEM_##SIZEOP: \
diff --git a/lib/usercopy.c b/lib/usercopy.c
index 7413dd300516..7ee63df042d7 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -3,6 +3,7 @@
#include <linux/fault-inject-usercopy.h>
#include <linux/instrumented.h>
#include <linux/uaccess.h>
+#include <linux/nospec.h>
/* out-of-line parts */
@@ -12,6 +13,12 @@ unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n
unsigned long res = n;
might_fault();
if (!should_fail_usercopy() && likely(access_ok(from, n))) {
+ /*
+ * Ensure that bad access_ok() speculation will not
+ * lead to nasty side effects *after* the copy is
+ * finished:
+ */
+ barrier_nospec();
instrument_copy_from_user(to, from, n);
res = raw_copy_from_user(to, from, n);
}
--
2.25.1
[PATCH OLK-5.10 01/14] uaccess: Add speculation barrier to copy_from_user()
by Jialin Zhang 16 May '23
(Message body identical to the openEuler-22.03-LTS posting above.)
Currently, only the x86 architecture supports the
CLOCKSOURCE_VALIDATE_LAST_CYCLE option. This option ensures that the
timestamps returned by the clocksource are monotonically increasing,
which helps avoid issues caused by hardware failures. This commit makes
CLOCKSOURCE_VALIDATE_LAST_CYCLE configurable on the arm64 architecture
as well, increasing system stability and reliability.
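Conceptually, the option guards the delta between two clocksource
reads. A sketch of the idea (simplified; not necessarily the exact
kernel code):

	/*
	 * If (now - last) wraps into the top half of the mask, the
	 * clocksource apparently moved backwards; return 0 instead of
	 * letting the wrapped value become a huge forward time jump.
	 */
	static inline u64 validated_delta(u64 now, u64 last, u64 mask)
	{
		u64 ret = (now - last) & mask;

		return ret & ~(mask >> 1) ? 0 : ret;
	}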
Yu Liao (1):
timekeeping: Make CLOCKSOURCE_VALIDATE_LAST_CYCLE configurable
kernel/time/Kconfig | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
--
2.25.1